Google opens up Seurat, a dev tool that reduces complexity in high-fidelity mobile VR scenes

Google has announced the open sourcing of Seurat, a developer tool that reduces the complexity of high-fidelity mobile virtual reality (VR) scenes in a bid to improve their performance.

Seurat was originally conceived as a system for image-based scene simplification for VR. It converts complex 3D scenes with millions of triangles, including complex lighting and shading effects, into just tens of thousands of triangles that can be rendered easily and efficiently on 6DOF mobile devices.

Google claims that the Seurat pipeline reduced one scene from 46.6 million triangles down to 307,000, a roughly 150× reduction.

The open sourcing of Seurat coincides with the release of the Mirage Solo, the first standalone headset on the Google Daydream VR platform to run the company’s WorldSense positional tracking system. It uses a mobile chipset, which makes it far more resource-constrained than headsets tethered to gaming PCs.

Seurat takes every viewpoint a VR user can possibly reach within their limited range of movement and removes the parts of the 3D environment that can never be seen from any of them. It is especially useful to developers who want to port existing scenes from more capable hardware to resource-constrained mobile VR hardware.
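To make that idea concrete, here is a minimal sketch in Python (not Seurat's actual API; the function name and sampling scheme are illustrative) of how the viewpoints inside a box of permitted head movement, often called a headbox, might be enumerated so that a capture tool can render the scene from each of them:

    # Illustrative sketch only: Seurat's real capture tooling defines its own sampling.
    import itertools

    def headbox_sample_positions(center, half_extent):
        """Return the center and the eight corners of an axis-aligned headbox.

        center      -- (x, y, z) midpoint of the user's allowed head movement
        half_extent -- half the box width along each axis, in meters
        """
        cx, cy, cz = center
        positions = [center]
        for sx, sy, sz in itertools.product((-1, 1), repeat=3):
            positions.append((cx + sx * half_extent,
                              cy + sy * half_extent,
                              cz + sz * half_extent))
        return positions

    # A seated user's head might stay within a cube roughly 0.6 m on a side:
    for pos in headbox_sample_positions((0.0, 1.5, 0.0), 0.3):
        print(pos)  # a capture tool would render an RGBD image from each position

Capturing the scene from each sampled position is what lets the simplifier determine exactly which surfaces can ever be visible to the user.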

The processing pipeline for static environments generates data for a single headbox, and the input data can be produced by any rendering system, e.g. a real-time game engine or an offline ray tracer.
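As an illustration of what such input could look like, the sketch below uses hypothetical field names; the actual manifest format Seurat expects is documented in the repository. Each captured view pairs a color image and a depth image with the camera that produced them:

    # Hypothetical input record, shown for illustration; see the Seurat repo
    # for the real manifest schema.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class CapturedView:
        color_image_path: str            # RGBA image rendered from this viewpoint
        depth_image_path: str            # per-pixel depth from the same viewpoint
        eye_position: Tuple[float, float, float]  # camera position in the headbox
        projection_matrix: List[float]   # 4x4 projection matrix, row-major

    @dataclass
    class HeadboxCapture:
        views: List[CapturedView]        # one entry per sampled viewpoint

Because nothing in such a capture is engine-specific, a game engine and an offline ray tracer can both emit the same kind of RGBD views for a headbox.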

To learn firsthand how to generate the RGBD input images your scene needs for the Seurat processing pipeline, and how to run those images through the pipeline to produce the output geometry and RGBA texture atlas, refer to this GitHub page.
