Building a Cross Platform 360-degree Video Experience at The New York Times

Over the past few months, 360-degree videos have gained a lot of traction on the modern web as a new immersive storytelling medium. The New York Times has continuously aimed to bring readers as close to stories as possible. Last year we released the NYT VR app with premium content on iOS and Android. We believe VR storytelling allows for a deeper understanding of a place, a person, or an event.

This month, we added support for 360-degree videos into our core news products across web, mobile web, Android, and iOS platforms to deliver an additional immersive experience. Times journalists around the world are bringing you one new 360 video every day: a feature we call The Daily 360.

The current state of 360 videos on the Web

We’ve been using VHS, our New York Times video player, for playback of our content on both web and mobile web platforms for the last few years. Building support for 360 videos on those platforms was a huge challenge: even though support for WebGL is relatively mature nowadays, there are still issues and edge cases depending on the platform and browser implementation.

To circumvent some of those issues, we had to implement a few different techniques. The first was the use of a “canvas-in-between”: we draw the video frames into a canvas and then use that canvas to create a texture. However, some versions of Microsoft Internet Explorer and Microsoft Edge are not able to draw content to the canvas if that content is delivered from a different domain (as happens with a content delivery network, or CDN), even if the proper cross-origin resource sharing (CORS) headers are set. We investigated this issue and found that we could avoid it by delivering the video over HTTP Live Streaming with an external library called hls.js.
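To give a sense of how these two pieces fit together, here is a rough sketch (not our production code) of the canvas-in-between combined with hls.js; the CDN URL and the three.js texture setup are illustrative assumptions:

    // Canvas-in-between: video frames are copied into a canvas, and the
    // canvas backs the WebGL texture. The stream is delivered via hls.js.
    import Hls from 'hls.js';
    import * as THREE from 'three';

    const video = document.createElement('video');
    video.crossOrigin = 'anonymous';
    video.setAttribute('playsinline', '');
    video.muted = true;

    if (Hls.isSupported()) {
      const hls = new Hls();
      hls.loadSource('https://cdn.example.com/360/master.m3u8'); // illustrative URL
      hls.attachMedia(video);
    }

    // Intermediate canvas used as the texture source.
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    const texture = new THREE.CanvasTexture(canvas);

    function copyFrame() {
      if (video.readyState >= video.HAVE_CURRENT_DATA) {
        if (canvas.width !== video.videoWidth) {
          canvas.width = video.videoWidth;
          canvas.height = video.videoHeight;
        }
        ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
        texture.needsUpdate = true; // re-upload the canvas to the GPU
      }
      requestAnimationFrame(copyFrame);
    }
    copyFrame();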

Safari has the same CORS limitation. It appears to be a longstanding issue in the underlying media framework, and in this scenario the hls.js workaround doesn’t solve the problem. We tackled the issue with a combination of two techniques:

  • The creation of an iframe with the video player embedded in it.
  • The use of progressive download renditions such as MP4 or WebM on the embedded player.

By doing this, we avoid the CORS video texture bug: the iframe and the content it loads are served from the same domain as the CDN, so we can show the player on the parent domain while the content plays inside the iframe.
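A simplified sketch of this arrangement follows; the domains, paths and container element are illustrative assumptions, not our actual embed code:

    // Parent page (e.g. www.nytimes.com): embed a player hosted on the CDN domain.
    const frame = document.createElement('iframe');
    frame.src = 'https://cdn.example.com/embed/player.html?id=360-clip'; // illustrative
    frame.setAttribute('allowfullscreen', '');
    document.querySelector('.video-container').appendChild(frame);

    // Embedded player (cdn.example.com): use a progressive MP4/WebM rendition,
    // so the video element and the WebGL texture share the CDN's origin.
    const video = document.createElement('video');
    video.src = 'https://cdn.example.com/renditions/360-clip-2048x1024.mp4'; // illustrative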

Many of our users watch our videos from within social media apps on their phones. On iOS, almost all of these social network applications load off-site content in their own in-app browsers instead of the native browser, which raises a longstanding technical issue: the lack of support for inline video playback, even on iOS 10. This happens because inline playback is still disabled by default in web views.

The technical problems listed above aside, the basic theory of how to display 360 content is pretty straightforward. There are four steps to implementing a simple 360 viewer:

  1. Have an equirectangular panoramic image or video to be used as a source.
  2. Create a sphere and apply the equirectangular video or image as its texture.
  3. Create a camera and place it on the center of the sphere.
  4. Bind all the user interactions and device motion to control the camera.

These four steps could be implemented using the WebGL API alone, but 3D libraries like three.js provide easier-to-use renderers for Canvas, SVG, CSS3D and WebGL. The example below shows how one could implement the four steps described above to render 360 videos or images:
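Here is a minimal sketch of the idea with three.js; the element id, dimensions and interaction handling are illustrative, and device motion could drive the same values as the mouse:

    import * as THREE from 'three';

    // 1. An equirectangular video used as the source.
    const video = document.getElementById('equirectangular-video');

    // 2. A sphere with the video as its texture, viewed from the inside
    // by inverting the geometry on the x-axis.
    const texture = new THREE.VideoTexture(video);
    const geometry = new THREE.SphereGeometry(500, 60, 40);
    geometry.scale(-1, 1, 1);
    const sphere = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));

    const scene = new THREE.Scene();
    scene.add(sphere);

    // 3. A camera placed at the center of the sphere.
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 1, 1000);

    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    // 4. Bind user interaction (drag to look around) to the camera.
    let lon = 0, lat = 0, dragging = false, lastX = 0, lastY = 0;
    document.addEventListener('mousedown', (e) => { dragging = true; lastX = e.clientX; lastY = e.clientY; });
    document.addEventListener('mouseup', () => { dragging = false; });
    document.addEventListener('mousemove', (e) => {
      if (!dragging) return;
      lon += (lastX - e.clientX) * 0.1;
      lat -= (lastY - e.clientY) * 0.1;
      lat = Math.max(-85, Math.min(85, lat));
      lastX = e.clientX;
      lastY = e.clientY;
    });

    function animate() {
      requestAnimationFrame(animate);
      const phi = (90 - lat) * Math.PI / 180;
      const theta = lon * Math.PI / 180;
      camera.lookAt(
        Math.sin(phi) * Math.cos(theta),
        Math.cos(phi),
        Math.sin(phi) * Math.sin(theta)
      );
      renderer.render(scene, camera);
    }
    video.play(); // in practice, playback may need to start from a user gesture
    animate();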

When we first started to work on supporting 360 video playback in VHS, we researched a few projects and decided to use a JavaScript library called Kaleidoscope. Kaleidoscope supports equirectangular videos and images in all modern browsers. The library is lightweight at 60kb gzipped, simple to use, and easy to embed into the player compared with other solutions.

The 360 video mobile native app experience on iOS and Android

Solving 360 video playback on iOS and Android was interesting and challenging since there wasn’t a video library that satisfied our requirements on both platforms. As a result, we decided to go with a different approach for each platform.

For the iOS core app, we created a small Objective-C framework that uses the same approach as Kaleidoscope. Initially, we considered starting development with Metal and OpenGL, but those are lower-level frameworks that require significant development work to create scenes and manipulate 3D objects.

Luckily, there’s another option: SceneKit is a higher-level framework that allows manipulation and rendering of 3D assets in native iOS apps. Investigation revealed that SceneKit provided adequate playback performance, so we chose to use it to render the sphere and camera required for 360-degree video playback.

We also needed to extract video frame buffers into a 2D texture to be applied as a material for the sphere, and to do that we decided to use SpriteKit. SpriteKit is a powerful 2D graphics framework commonly used in 2D iOS games. Our playback framework uses a standard iOS AVPlayer instance for video playback and uses SpriteKit to render its video onto the sphere.

Finally, we bind user interactions and device movements to control the camera’s motion using standard iOS gesture recognizers and device motion APIs.

By using these tools we were able to create a 360 video framework that is very similar to Kaleidoscope. We call it NYT360-Video, and we are happy to announce that we are open sourcing the framework.

On the Android platform, we did a deep evaluation of several open source libraries that support 360 video and images, and after some initial prototyping, the Android team decided to use the Google VR SDK. The NYTimes Android app runs on a wide range of devices and Android OS versions, and the Google VR SDK has the features and capabilities we needed, along with a straightforward API that allowed for a relatively easy integration.

The Google VR SDK has evolved quite a lot since the day we started working on the integration, and the Google VR team has invested a lot of time improving the project. Along the way, we worked with Google on feature requests and bug fixes, and that collaboration gave us confidence that we made the right decision in adopting it. The integration worked as we expected, and we now have an immersive 360 video experience on Android.

The future of 360 video encoding and playback at The New York Times

We are investigating new ways of encoding and playing 360 videos to increase performance and improve the user experience. We are excited to explore other interesting features such as spatial audio, stereo images and video.

On the video transcoding side, we are exploring the use of cube map projections, which avoid equirectangular layouts in favor of a more space-efficient approach. In theory, we can reduce the bitrate applied to the video by approximately 20 percent while keeping the same quality.

Below is a very basic example of how we could support playback of 360 videos encoded with a cube map projection:
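One way to sketch this with three.js is to copy each tile of the cube map onto its own face of an inverted cube. This is a rough illustration only: it assumes a 3×2 tile layout, and the tile-to-face ordering (and texture orientation) depends on how the video was transcoded.

    import * as THREE from 'three';

    const video = document.getElementById('cubemap-video');

    // One canvas-backed texture per cube face; each frame we copy the
    // matching tile of the assumed 3x2 layout into its canvas.
    const FACES = 6;
    const canvases = [], contexts = [], materials = [];
    for (let i = 0; i < FACES; i++) {
      const canvas = document.createElement('canvas');
      canvases.push(canvas);
      contexts.push(canvas.getContext('2d'));
      const texture = new THREE.CanvasTexture(canvas);
      materials.push(new THREE.MeshBasicMaterial({ map: texture, side: THREE.BackSide }));
    }

    // The viewer sits inside the cube; BackSide renders the inner faces.
    const geometry = new THREE.BoxGeometry(100, 100, 100);
    const cube = new THREE.Mesh(geometry, materials);

    const scene = new THREE.Scene();
    scene.add(cube);
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    function copyTiles() {
      if (video.readyState >= video.HAVE_CURRENT_DATA) {
        const tileW = video.videoWidth / 3;
        const tileH = video.videoHeight / 2;
        for (let i = 0; i < FACES; i++) {
          const sx = (i % 3) * tileW;           // column in the 3x2 grid
          const sy = Math.floor(i / 3) * tileH; // row in the 3x2 grid
          canvases[i].width = tileW;
          canvases[i].height = tileH;
          contexts[i].drawImage(video, sx, sy, tileW, tileH, 0, 0, tileW, tileH);
          materials[i].map.needsUpdate = true;
        }
      }
      requestAnimationFrame(copyTiles);
    }

    function animate() {
      requestAnimationFrame(animate);
      renderer.render(scene, camera);
    }
    copyTiles();
    animate();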

Cube map projections are a more complex approach than equirectangular projections, since adopting them requires changing not only our video player but also the way we transcode our videos. Earlier this year, Facebook released a project called Transform, an FFmpeg filter that converts a 360 video in equirectangular projection into a cube map projection. We are investigating ways to integrate it into our video pipeline. We are also open sourcing the video encoding presets that we use to transcode all of our 360 video outputs.

We hope to receive your feedback and contributions from the open source community at large. Feel free to ask questions via GitHub Issues in each project.

Check them out:
github.com/NYTimes/ios-360-videos
github.com/NYTimes/video-presets