____________________________________________________________________________________________________
The following breaks the different tasks of a Mixed Reality production into its component parts by modeling the production on existing TV & film production models and pairing them with their corresponding video game design stages:
— This particular mockup has a biofeedback element that will be integrated into the project and its respective diagram.

3D Volumetric Biofeedback VR Simulation
— Team Members — 
General Overview of Roles:
Technical Director — Project Manager — Translating creative decisions made through the writing process into the various technical elements and keeping a coherent, unified vision through the work of the specialist teams.

Creative Director — Writer & Director — Providing valuable input during the production process to ensure that the spirit of the environment is not lost in the waves of technical translation.

Photogrammetry Capture — Team A — Capturing and designing the static landscape at extremely high fidelity to be down-rendered and placed as a backdrop for the scene’s interactive elements. This production process will allow for the same asset’s render fidelity to be upscaled as the headsets increase in capabilities.

Ambisonic Sound — Team B — Capturing high quality sound elements to be integrated with organic 3D assets in the real-time rendering engine (Unity), and constructing a series of cascading 360° soundspheres to mimic the layers of sound elements that should be present in the scene but are not hand-modeled as 3D assets (e.g., birds, water, etc.).

3D Model Designer — Team C — Creating the organic, realistic 3D assets to be integrated into the scene by the Unity developers.

BCI Integration — Team D — Integrating EEG feedback into the Unity program in close collaboration with the Unity development team, so that specific variables affect the intended assets at a proper ratio, ensuring the most engaging and least overwhelming experience.

Unity Developers — Team E — Integrating the EEG feedback as explained above and building the environment from the disparate elements.

Okay, so let's break this down: 
Workflow Specifics


Preproduction:
0. Director, Writer, and Project Manager — work together to transcribe the elements of the vision into specific dimensions, assets, sounds, functionality, and interactivity for the experience. This is first written as a script, then transferred to a game design document, along with a detailed description of the visitor’s experience as a series of “happenings”: events that the visitor interacts with.
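
To make the hand-off from script to game design document concrete, each “happening” could be captured as a small shared data record. The following C# sketch is purely illustrative; every field name and the example values are hypothetical rather than taken from the actual design document.

    using System;

    // Hypothetical spec for one "happening": an event the visitor can
    // trigger, plus the assets and environmental response it involves.
    [Serializable]
    public class Happening
    {
        public string name;             // e.g., "Wind gust over the reeds"
        public string trigger;          // what the visitor does: gaze, proximity, calm EEG state
        public string[] assetsInvolved; // 3D and sonic assets the event touches
        public string response;         // what the environment does in return
    }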

Production:
A. The Photogrammetry Team — sets out to capture the intended environment and builds the asset at extremely high fidelity, to later be downscaled to fit the limitations of the highest-tier device (a sketch of per-tier poly budgets follows this list).
B. The Ambisonic Audio Capture & Development Team — is given a detailed description of the intended sonic experience and a corresponding list of intended effects. The team then sets out to capture a library of tracks for later integration into 3D assets and 360° soundspheres (see the soundsphere sketch after this list).
C. The 3D Modeling Team — receives a description of the required assets, the technological limitations of specific headsets (poly count), and a collection of images for inspiration. They then begin building low fidelity assets for quick export to the Unity team, and continue to work on the high quality assets.
D. The Brain-Computer Interface Specialist and Unity Developers — take the low quality environment, created by the Unity Developers, and begin connecting wind intensity to the specific brainwave frequencies that communicate calmness (see the wind-driving sketch after this list).
E. The Unity Developers — work with the high quality assets from the 3D Modeling Team to build the interactivity with the user. After assets are placed in the scene, the team builds a series of wind zones from a premapped, top-down, 2D diagram that details intersection points; these points are based on the visitor’s PoV and on areas with ideal assets for seeing the full effect of the wind (a placement sketch follows this list).
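
For step A, the downscaling could be driven by a simple per-tier triangle budget while the master capture stays untouched for future re-export. A minimal C# sketch; the tiers and numbers are placeholder assumptions, not measured device limits.

    // Hypothetical per-headset triangle budgets for the downscaled
    // photogrammetry backdrop. Numbers are placeholders to be replaced
    // with each device's real limits.
    public enum HeadsetTier { Mobile, Standalone, PCTethered }

    public static class PolyBudget
    {
        public static int TriangleBudget(HeadsetTier tier)
        {
            switch (tier)
            {
                case HeadsetTier.Mobile:     return 100000;
                case HeadsetTier.Standalone: return 500000;
                default:                     return 2000000; // PCTethered
            }
        }
    }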
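
For step B, one plausible reading of a cascading soundsphere is a looping ambient bed that is fully audible inside its radius and fades out beyond it, with several such layers stacked at increasing radii. A minimal Unity C# sketch, with placeholder distances:

    using UnityEngine;

    // One "soundsphere" layer: a looping ambient bed (birds, water, wind)
    // centered on this object. Stack several at increasing radii to build
    // the cascade described above.
    public class SoundsphereLayer : MonoBehaviour
    {
        public AudioSource source;   // looping ambient track for this layer
        public Transform listener;   // the visitor's head (main camera)
        public float radius = 10f;   // fully audible inside this distance
        public float fadeWidth = 5f; // fades to silence over this span beyond the radius

        void Update()
        {
            float d = Vector3.Distance(listener.position, transform.position);
            // 1 inside the sphere, tapering to 0 at radius + fadeWidth.
            source.volume = Mathf.Clamp01(1f - (d - radius) / fadeWidth);
        }
    }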
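
For step D, the wiring might look like the following Unity C# sketch. How the calmness value is computed (for example, relative alpha-band power from the headset maker's SDK) is assumed to happen upstream; here it simply arrives as a normalized 0..1 float. The ratio and smoothing constants are placeholders, and whether calm raises or stills the wind is a design choice; this sketch maps calmer to stronger.

    using UnityEngine;

    // Drives Unity's built-in WindZone from a normalized EEG "calmness"
    // value supplied by a hypothetical upstream EEG bridge.
    public class BiofeedbackWind : MonoBehaviour
    {
        public WindZone wind;           // the scene's wind zone
        public float maxWindMain = 2f;  // wind strength at full calm; placeholder ratio
        public float smoothTime = 0.5f; // seconds to settle, so shifts never feel jarring

        float target;   // latest calmness sample
        float current;  // low-pass filtered value actually applied
        float velocity; // state for SmoothDamp

        // Called by the (hypothetical) EEG bridge whenever a sample arrives.
        public void OnCalmnessSample(float calmness01)
        {
            target = Mathf.Clamp01(calmness01);
        }

        void Update()
        {
            current = Mathf.SmoothDamp(current, target, ref velocity, smoothTime);
            wind.windMain = current * maxWindMain;
        }
    }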
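
For step E, the intersection points read off the top-down diagram could be handed to Unity as 2D coordinates and instantiated as spherical wind zones. A sketch with hypothetical points and radius:

    using UnityEngine;

    // Places spherical wind zones at intersection points lifted from the
    // premapped top-down 2D diagram. Points and radius are placeholders
    // read off such a map by hand.
    public class WindZonePlacer : MonoBehaviour
    {
        public Vector2[] mapPoints;  // (x, z) intersection points from the diagram
        public float radius = 4f;    // influence radius per zone
        public float groundY = 0f;   // height at which zones sit in the scene

        void Start()
        {
            foreach (Vector2 p in mapPoints)
            {
                var go = new GameObject("WindZone_" + p);
                go.transform.position = new Vector3(p.x, groundY, p.y);
                var zone = go.AddComponent<WindZone>();
                zone.mode = WindZoneMode.Spherical;
                zone.radius = radius;
            }
        }
    }
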
Post-production:
F. High resolution 3D assets, created by the 3D Modeling Team, are integrated into the Unity Developers’ environment. Low quality assets are swapped with high quality ones, and all scripts that were on the low quality assets are placed on the high quality ones and refined (see the swap sketch after this list).
G. The Ambisonic Audio Development Team — receives a polished Unity environment and applies the captured sonic assets to the high quality 3D assets.
H. The Brain-Computer Interface Specialists — tweak the sonic assets to correspond with brainwave frequency, so that sonic intensity correlates with the desired asset intensity (a sketch follows this list).
I. The Unity Developers finalize and, as those old news guys would say, "Print."
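
For step F, the swap could be as simple as instantiating the high quality prefab in the proxy’s place, on the assumption that the refined scripts already live on that prefab. A hedged C# sketch:

    using UnityEngine;

    // Swaps a low quality proxy for its high-res counterpart in place.
    // Assumes the hi-res prefab carries the refined scripts; any editor
    // tooling around this (selection, prefab lookup) is left out.
    public static class AssetSwap
    {
        public static GameObject Swap(GameObject proxy, GameObject hiResPrefab)
        {
            var replacement = Object.Instantiate(
                hiResPrefab,
                proxy.transform.position,
                proxy.transform.rotation,
                proxy.transform.parent);
            replacement.transform.localScale = proxy.transform.localScale;
            Object.Destroy(proxy);
            return replacement;
        }
    }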
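
For step H, the simplest way to keep sonic intensity correlated with asset intensity is to feed the audio the same normalized calmness value that drives the wind. A sketch with a placeholder volume range to be tuned by ear:

    using UnityEngine;

    // Keeps a sonic asset's loudness in step with the wind by consuming
    // the same 0..1 calmness value fed to BiofeedbackWind above.
    public class BiofeedbackAudio : MonoBehaviour
    {
        public AudioSource source;     // sonic asset attached to a wind-blown model
        public float minVolume = 0.2f; // audible floor so the bed never vanishes
        public float maxVolume = 1.0f;

        public void OnCalmnessSample(float calmness01)
        {
            source.volume = Mathf.Lerp(minVolume, maxVolume, Mathf.Clamp01(calmness01));
        }
    }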

____________________________________________________________________________________________________