This release of the OSVR projects brings dozens of updates, including many improvements submitted by the community of OSVR developers on GitHub. Thanks to all of the contributors to OSVR!
Below is a description of the major additions to OSVR, followed by a “release notes” document detailing smaller changes and bug fixes.
OSVR community members – please pay attention to the “contributions wanted” section in each feature and see if you are able to help us accelerate the pace of development for OSVR.
The positional tracking feature uses the IR LEDs embedded in the OSVR HDK, together with the 100 Hz IR camera included with the HDK, to provide real-time XYZ positional and orientation tracking of the HMD.
The LEDs flash in a known pattern, which the camera detects. By comparing the locations of the detected LEDs with their known physical positions, the software determines the position and orientation (pose) of the HMD.
The software currently looks for two targets (LED patterns): one on the front of the HDK and one on the back. Additional targets can be added, and thus additional devices that have known IR LED patterns can also be tracked in the same space.
It is also possible to assign different flashing patterns to multiple HDK units, thus allowing multiple HDK units to be tracked with the same camera. This is useful for multi-user play. Changing the IR codes on the HDK requires re-programming the IR LED board.
Sensics is working with select equipment developers to adapt the IR LED board and pattern to the specific shape of an object (e.g. a glove or gaming weapon) so that the object can also be tracked with the OSVR camera.
The tracking code is included with the OSVR-Core source code and is installed as an optional component while we optimize its performance. It will be set as the standard tracker once camera distortion, LED position optimization, and sensor fusion with IMU data have been implemented.
The image below shows a built-in debugging window that displays the original image overlaid with beacon locations (in red; a tag of -1 means that the beacon has not been visible long enough to be identified) and reprojected 3D LED poses (in green, even for beacons not seen in the image). RANSAC-based pose estimation from OpenCV provides the tracking.
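For illustration, the heart of that estimation step resembles the following sketch (placeholder data and a hypothetical helper; the actual tracker code in OSVR-Core is the authoritative reference):

```cpp
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Estimate the HMD pose from identified beacons. rvec/tvec receive the
// axis-angle rotation and translation of the LED model in camera space.
void estimatePoseSketch(const std::vector<cv::Point3f> &ledModel,   // known 3D LED layout
                        const std::vector<cv::Point2f> &detections, // matching 2D image points
                        const cv::Mat &cameraMatrix,                // camera intrinsics
                        const cv::Mat &distCoeffs,                  // lens distortion
                        cv::Mat &rvec, cv::Mat &tvec) {
    // RANSAC rejects outliers such as misidentified beacons or reflections.
    cv::solvePnPRansac(ledModel, detections, cameraMatrix, distCoeffs,
                       rvec, tvec);
}
```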
- Make the system more configurable by moving configuration parameters, such as the camera to use, optimization parameters, and LED positions, into an external JSON configuration file.
- Add Kalman optimal estimation filter to combine pose information from the video-based tracker and inertial measurements from the HDK’s inertial measurement unit into a combined pose + velocity + acceleration tracker that will provide smoother tracking and that can be used for predictive positional tracking.
- Combine the output from multiple cameras for wide-area tracking.
- Account for the optical distortion of the camera in the analysis.
- Create a calibration tool that improves performance by accounting for the slight manufacturing variation in the LED position.
- Create a tool that simplifies the process of adding new objects and IR LED patterns.
The OSVR-Core API now includes methods to retrieve the output of a computational model of the display. Previously, applications or game engine integrations were responsible for parsing display description JSON data and computing transformations themselves. This centralized system allows for improvements in the display model without requiring changes in applications, and also reduces the amount of code required in each application or game engine integration.
The conceptual model is “viewer-eye-surface” (also known as “viewer-screen”), rather than a “conventional camera”, as this suits virtual reality better. However, it has been implemented to be usable in engines (such as Unity) that are camera-based, as the distinction is primarily conceptual.
As a demonstration of this API, a fairly minimal OpenGL sample (using SDL2 to open the window) is now included with OSVR-Core.
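To sketch how an application consumes this model (the bundled sample is the authoritative reference; the function names below follow the ClientKit display C API as we understand it, so verify them against the headers), the viewer-eye-surface traversal looks roughly like this:

```cpp
#include <osvr/ClientKit/ContextC.h>
#include <osvr/ClientKit/DisplayC.h>

// Sketch only: see osvr/ClientKit/DisplayC.h and the bundled OpenGL/SDL2
// sample for authoritative usage and error handling.
void renderFrameSketch(OSVR_ClientContext ctx, OSVR_DisplayConfig display) {
    osvrClientUpdate(ctx); // refresh state before reading the display model

    OSVR_ViewerCount viewers;
    osvrClientGetNumViewers(display, &viewers);
    for (OSVR_ViewerCount viewer = 0; viewer < viewers; ++viewer) {
        OSVR_EyeCount eyes;
        osvrClientGetNumEyesForViewer(display, viewer, &eyes);
        for (OSVR_EyeCount eye = 0; eye < eyes; ++eye) {
            OSVR_SurfaceCount surfaces;
            osvrClientGetNumSurfacesForViewerEye(display, viewer, eye,
                                                 &surfaces);
            for (OSVR_SurfaceCount s = 0; s < surfaces; ++s) {
                OSVR_ViewportDimension left, bottom, width, height;
                osvrClientGetRelativeViewportForViewerEyeSurface(
                    display, viewer, eye, s, &left, &bottom, &width,
                    &height);
                // Set the viewport, fetch the view and projection for this
                // viewer/eye/surface, and draw the scene here.
            }
        }
    }
}
```

A display configuration is obtained once at startup with osvrClientGetDisplay() and released with osvrClientFreeDisplay().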
- OpenGL-SDL sample that uses the distortion parameter API to apply the distortion shader.
The Sensics/OSVR Render Manager provides optimal low-latency rendering on any OSVR-supported device. Render Manager currently provides an enhanced experience with NVIDIA’s GameWorks VR technology on Windows. Support for additional vendors (e.g. AMD, Intel) is in progress, and we are also exploring options to work with graphics vendors for mobile environments.
Unlike most of the OSVR platform, the Render Manager is not open-sourced at this point. The main reason is that the NVIDIA API was provided to Sensics under NDA and thus we cannot expose it at this time.
Key features enabled by the Render Manager:
- DirectMode: Enable an application to treat VR Headsets as head mounted displays that are accessible only to VR applications, bypassing rendering delays typical for Windows displays. DirectMode supports both Direct3D and OpenGL applications.
- Front-Buffer Rendering: Renders directly to the front buffer to reduce latency.
- Asynchronous Time Warp: Reduces latency by making just-in-time adjustments to the rendered image based on the latest head orientation after scene rendering but before sending the pixels to the display. This is implemented in the OpenGL rendering pathway (including DirectMode) and hooks are in place to implement it in Direct3D. It includes texture overfill on all borders for both eyes and supports all translations and rotations, given an approximate depth to apply to objects in the image.
Coming very soon:
- Distortion Correction: Handling the per-color distortion found in some HMDs requires post-rendering distortion. The same buffer-overfill rendering used in Asynchronous Time Warp will provide additional image regions for rendering.
- High-Priority Rendering: Increasing the priority of the rendering thread associated with the final pixel scan-out ensures that every frame is displayed on time.
- Time Tracking: Telling the application what time the future frame will be displayed lets it render the appropriate scene. This also enables the Render Manager to do predictive tracking when producing the rendering transformations and asynchronous time warp. The system also reports the time taken by previous rendering cycles, letting the application know when to simplify the scene to maintain an optimal update rate.
- Unity Low-level Native Plugin Interface: A Rendering Plugin will soon enable Render Manager’s features in Unity, and enable it to work with Unity’s multithreaded rendering.
Render Manager is currently available only for OSVR running on Windows.
Several example programs and configuration files are provided and open-sourced, covering OpenGL (fixed-pipeline and shader-based versions, both callback-based and using client-defined buffers) and Direct3D11 (callback-based and client-defined buffers, library-defined and client-defined devices). Also included is a program with adjustable rendering latency that can be used to test the effectiveness of asynchronous time warp and predictive tracking as application performance changes.
The RenderManager library features are controlled through a JSON configuration file:
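As a purely illustrative sketch of what such a file might contain (the option names below are placeholders, not the actual schema; consult the RenderManager examples for real configuration files):

```json
{
    "renderManagerConfig": {
        "directModeEnabled": true,
        "asynchronousTimeWarpEnabled": true,
        "renderOverfillFactor": 1.2,
        "verticalSyncEnabled": true
    }
}
```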
- We are seeking to work with additional graphics chip vendors to create a universal, multi-platform library for high-performance rendering.
Predictive tracking reduces the perceived latency between motion and rendering by estimating head position at a future point in time. At present, the OSVR predictive tracking uses the angular velocity of the head to estimate orientation 16 ms (one frame at 60 FPS) into the future.
Angular velocity is available as part of the orientation report from the OSVR HDK. For other HMDs that do not provide angular velocity, it can be estimated using finite differencing of successive angular position reports.
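Conceptually, the prediction step integrates the angular-velocity report over the look-ahead interval. Here is a minimal sketch using the Eigen library that OSVR-Core already vendors (the helper below is hypothetical, not OSVR API, and assumes the angular velocity is expressed in the world frame):

```cpp
#include <Eigen/Geometry>

// Predict orientation dt seconds ahead by integrating angular velocity.
// q: current orientation; omega: angular velocity in rad/s (world frame).
Eigen::Quaterniond predictOrientation(const Eigen::Quaterniond &q,
                                      const Eigen::Vector3d &omega,
                                      double dt) {
    const double angle = omega.norm() * dt; // rotation accumulated over dt
    if (angle < 1e-12) {
        return q; // negligible motion: avoid normalizing a zero axis
    }
    const Eigen::Quaterniond delta(
        Eigen::AngleAxisd(angle, omega.normalized()));
    return (delta * q).normalized(); // apply the incremental rotation
}
```

Calling this with dt = 0.016 gives the one-frame look-ahead described above; for trackers that do not report angular velocity, omega can be estimated by finite differencing of successive orientation reports.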
- Improve the algorithm to extract velocity from non-HDK trackers.
- Extract angular acceleration and use that to improve the quality of predictive tracking.
- Fuse angular (yaw/pitch/roll) and linear (X/Y/Z) data to improve quality of positional tracking.
- Configure the look-ahead time either through API or through an external configuration file.
Utilizing ETW – Event Tracing for Windows – the OSVR performance profiler helps developers optimize application performance by identifying bottlenecks throughout the entire software stack.
Event Tracing for Windows (ETW) is an efficient kernel-level tracing facility that lets you log kernel or application-defined events to a log file and then interactively inspect and visualize them with a graphical tool. As the name suggests, ETW is available only for the Windows platform. However, OSVR-Core’s tracing instrumentation and custom events use an internal, cross-platform OSVR tracing API for portability.
Currently the default libraries have tracing turned off to minimize any possible performance impact. However, the “tracing” directory contains tracing-enabled binaries along with instructions on how to swap them in to use the tracing features. See this slide deck for a brief introduction to this powerful tool: http://osvr.github.io/presentations/20150901-Intro-ETW-OSVR/
- Identify and add additional useful custom tracing events.
- Implement the internal OSVR tracing API using other platforms’ equivalents of ETW for access to the instrumented events.
- Measure the performance impact of running a tracing-enabled build, to potentially enable it by default.
The default OSVR configuration has the client and server run as two separate processes. Amongst other advantages, this keeps the device servicing code out of the “hot path” of the render loop and allows multiple clients to communicate with the same server.
In some cases, it may be useful to run the server and client in a single process, with their main loops sharing a single thread. Examples where this might be useful include automated tests, special-purpose apps, or apps on platforms that do not support interprocess communication or multiple threads (in which case no async plugins can be used either). The new JointClientKit library was added to allow those special use cases: it provides methods to manually configure a server that will run synchronously with the client context.
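In rough outline, usage looks like the following sketch (names per the JointClientKit C API as we understand it; verify them against osvr/JointClientKit/JointClientKitC.h before relying on them):

```cpp
// Sketch only: configure an embedded server, then drive both client and
// server from a single thread.
#include <osvr/JointClientKit/JointClientKitC.h>
#include <osvr/ClientKit/ContextC.h>

int main() {
    OSVR_JointClientOpts opts = osvrJointClientCreateOptions();
    // Manually configure the embedded server: load a plugin and run
    // hardware detection (no osvr_server_config.json is consulted).
    osvrJointClientOptionsLoadPlugin(opts, "com_osvr_example_AnalogSync");
    osvrJointClientOptionsTriggerHardwareDetect(opts);

    // Client context whose update loop also services the server.
    OSVR_ClientContext ctx = osvrJointClientInit("com.example.JointApp", opts);

    for (int i = 0; i < 1000; ++i) {
        osvrClientUpdate(ctx); // runs both client and server work
    }
    osvrClientShutdown(ctx);
    return 0;
}
```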
Note that the recommended usage of OSVR has not changed: you should still use ClientKit and a separate server process in nearly all cases. Other ways of simplifying the user experience, including hidden/launch-on-demand server processes, are under investigation.
- Automated tests for ClientKit APIs, using JointClientKit to start a temporary server with only dummy plugins loaded.
A new device plugin has been written to support Android orientation sensors. This plugin exposes an orientation interface to the OSVR server running on an Android device. This is available here: https://github.com/OSVR/OSVR-Android-Plugins
A new Android OpenGL ES 2.0 sample demonstrates basic OSVR usage on Android in C++. You can find this sample here: https://github.com/OSVR/OSVR-Android-Samples
An early version of an Android app has been written that launches the OSVR server on a device to run in the background. This eliminates the need to root the phone, which existed in previous OSVR/Android versions. You can find this code here: https://github.com/OSVR/OSVR-AndroidServerLauncher
The Unity Palace demo (https://github.com/OSVR/OSVR-Unity-Palace-Demo/releases/download/v0.1.1-android/OSVR-Palace-Android-0.1.1.zip) for Android can now work with the internal orientation sensors, as well as with external sensors.
- Predictive tracking/filtering in the Android sensor plugin.
- Sensor fusion to improve fidelity of tracking.
- Additional sample apps.
- Connect to the Android camera to provide an imaging interface on Android.
OSVR continues to expand the range of engines with which it integrates. This includes:
- Valve OpenVR (in beta): https://github.com/OSVR/SteamVR-OSVR
Here are new integrations as well as improvements to existing integrations:
The .NET language bindings for OSVR have been updated to support new interface types for eye tracking. This includes 2D and 3D eye tracking, direction, location (2D), and blink interface types.
Unity adapters for the eye tracking interface types have been added, as well as prefabs and utility behaviors that make it easier to incorporate eye tracking functionality into Unity scenes.
The optional distortion shader has been completely reworked, to be more efficient as well as to provide a better experience with Unity 4.6 free edition.
The OSVR-Unity plugin now retrieves the output of the computational display model from the OSVR-Core API. This eliminates the need to parse JSON display descriptor data in Unity, which allows for improvements in the display model without having to rebuild a game. The “VRDisplayTracked” prefab has been improved to create a stereo display at runtime based on the configured number of viewers and eyes.
Coming soon: Distortion Mesh support. Mesh-based distortion uses a pre-computed mesh rather than a shader to do the heavy lifting for distortion. There is a working example in a branch of OSVR-Unity.
- UI to display hardware specs, display parameters, and performance statistics.
The gesture interface brings new functionality that allows OSVR to support devices that detect body gestures, including movements of the hand, head, and other parts of the body. This provides ways to integrate devices such as the Leap Motion®, Nod Labs Ring, Microsoft® Kinect®, Thalmic Labs™ Myo™ armband, and many others. Developers can combine the gesture interface with other interfaces to provide meaningful information about the pose of the user’s body parts, such as orientation, position, acceleration, and/or velocity.
A new API has been added on the plugin and client sides to report and retrieve gestures for a device. The gesture API provides a list of preset gestures while staying flexible enough to allow custom gestures.
We added a simulation plugin, com_osvr_example_Gesture (see the description of simulation plugins below), that uses the gesture interface to feed a list of gestures, and also created a sample client application that visually outputs the gestures received from the plugin. These tools are helpful when developing new plugins or client apps.
Using the new interface, we are working on releasing a plugin for the Nod Ring that will expose a gesture interface as well as the existing interfaces.
- Development of new plugins for devices that have gesture recognition, such as:
- ThalmicLabs Myo Armband
- Logbar Ring™
- New devices are welcome
The locomotion interface adds an API to support the class of devices known as omni-directional treadmills (ODTs), which allow walking and running on a motion platform and convert this movement into navigation input in a virtual environment. Some examples of devices that could use the locomotion interface are the Virtuix Omni™, Cyberith™ Virtualizer, Infinadeck™, and others. These devices are very useful for first-person shooter (FPS) games, and by combining the locomotion interface with a tracker, additional features such as body orientation and jump/crouch sensing could be added.
The API allows the ODTs to report the following data (on a 2D plane):
- User’s navigational velocity
- User’s navigational position
- Development of plugins for ODTs using Locomotion interface (walking / running) combined with additional OSVR interfaces (jumping, crouching, looking around)
The EyeTracker interface provides an API to report detailed information about the movement of one or both eyes.
This includes support for reporting:
- 3D gaze direction – a unit directional vector in 3D
- 2D gaze direction – location within 2D region
- Detection of blink events
EyeTracker devices are effective tools for interacting inside a VR environment, providing an intuitive way to make selections, move objects, and so on. The data reported by these devices can be analyzed for studies of human behavior, marketing research, and other research topics, as well as for gaming applications. Eye tracking can also be used to perform more accurate virtual reality rendering, customized for the location of your pupil every frame.
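As an illustrative sketch of client-side usage (assuming the uniform osvrRegister<Type>Callback naming used by ClientKit and an illustrative /me/eyes/left path; verify both against the headers and your server configuration):

```cpp
// Sketch only: register for blink reports from an eye tracker.
#include <osvr/ClientKit/ContextC.h>
#include <osvr/ClientKit/InterfaceC.h>
#include <osvr/ClientKit/InterfaceCallbackC.h>
#include <cstdio>

static void onBlink(void * /*userdata*/, const OSVR_TimeValue * /*timestamp*/,
                    const OSVR_EyeTrackerBlinkReport * /*report*/) {
    // Inspect the report's state here; see the client report type headers
    // for the available fields.
    std::printf("Blink event received\n");
}

int main() {
    OSVR_ClientContext ctx = osvrClientInit("com.example.EyeTrackerDemo", 0);
    OSVR_ClientInterface eye = nullptr;
    osvrClientGetInterface(ctx, "/me/eyes/left", &eye);
    osvrRegisterEyeTrackerBlinkCallback(eye, &onBlink, nullptr);
    for (int i = 0; i < 1000; ++i) {
        osvrClientUpdate(ctx); // dispatches callbacks as reports arrive
    }
    osvrClientShutdown(ctx);
    return 0;
}
```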
A .NET binding for EyeTracker (described above) allows easy integration of the eye tracking data into Unity.
- Create new plugins for eye tracking devices
- Expand EyeTracker interface to report additional eye attributes such as pupil size, pupil aspect ratio, saccades
In collaboration with SensoMotoric Instruments GmbH (SMI), we are releasing a new plugin for SMI trackers. For instance, the plugin supports the SMI Upgrade Package for the Oculus Rift™ DK2. It uses the SMI SDK to provide real-time streaming of eye and gaze data and reports it via the EyeTracker interface.
The SMI plugin also provides an OSVR Imaging interface to stream the eye tracker images.
The plugin is available at https://github.com/OSVR/OSVR-SMI
- Create similar plugins for other eye-tracking vendors.
Along with the newly added interfaces (eye tracking, gesture, locomotion), we provide simulation plugins that serve as examples of how to use each interface. Their purpose is to emulate a certain type of device (joystick, eye tracker, head tracker, etc.) connected to the OSVR server and feed simulation data to the client. These plugins were added as a development tool so that developers can easily run tests without needing to attach multiple devices to the computer. We will be expanding the available simulation plugins to have one for every type of interface. Simulation plugins are available in OSVR-Core and can be modified for a specific purpose.
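As a rough outline of the shape of such a plugin (patterned after the bundled com_osvr_example_AnalogSync sample; an analog channel is used here for brevity, and details should be checked against the PluginKit headers):

```cpp
#include <osvr/PluginKit/PluginKit.h>
#include <osvr/PluginKit/AnalogInterfaceC.h>
#include <cmath>

// A synchronous simulated device exposing one analog channel.
class SimulatedAnalogDevice {
  public:
    SimulatedAnalogDevice(OSVR_PluginRegContext ctx) : m_value(0) {
        OSVR_DeviceInitOptions opts = osvrDeviceCreateInitOptions(ctx);
        osvrDeviceAnalogConfigure(opts, &m_analog, 1); // one channel
        m_dev.initSync(ctx, "SimulatedAnalog", opts);
        // A real plugin also sends a JSON device descriptor here.
        m_dev.registerUpdateCallback(this);
    }
    OSVR_ReturnCode update() {
        m_value += 0.01; // generate simulated data
        osvrDeviceAnalogSetValue(m_dev, m_analog, std::sin(m_value), 0);
        return OSVR_RETURN_SUCCESS;
    }

  private:
    osvr::pluginkit::DeviceToken m_dev;
    OSVR_AnalogDeviceInterface m_analog;
    double m_value;
};

OSVR_PLUGIN(com_example_SimulatedAnalog) {
    osvr::pluginkit::registerObjectForDeletion(
        ctx, new SimulatedAnalogDevice(ctx));
    return OSVR_RETURN_SUCCESS;
}
```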
- Create new simulation plugins for interfaces that do not have one already.
- Add new or improve the existing data-generating algorithms used in simulation plugins.
- Create new simulation plugins that use a combination of various interfaces such as joystick (button + analog)
Items listed here are generally in addition to the major items highlighted above, and do not include the hundreds of commits involved in the development and tuning of these or the above features – see the Git logs if you’d like to see all the details!
- All API updates have been reflected in the Managed-OSVR .NET bindings.
- Added osvrClientCheckStatus()/ClientContext::checkStatus() method to expose whether the library was able to successfully connect to an OSVR server.
- Display API and OpenGL examples.
- Added C++ wrapper to the configured device constructor API.
- Added an example “configured” plugin.
- Improved self-contained example plugin.
- Link from the documentation to the OpenCV video capture plugin – a bundled plugin that also serves as an example.
- Improve completeness and usability of the Doxygen-generated API documentation.
- A large number of improvements, including a review of content as well as custom styling, were applied to the generated API documentation (“Doxygen” output).
- Timestamps are now verified for in-order arrival before updating client interface object state.
- Improved compatibility of vendored COM smart pointer code.
- Preliminary support for building the video-based tracker with OpenCV 3 (assertions quickly fail on an MSYS2/MinGW64 test build).
- Decreased code duplication through use of an internal “UniqueContainer” type built on existing internal “ContainerWrapper” templates.
- Client contexts now store their own deleters to allow safe cross-library deallocation, needed for JointClientKit.
- Vendored “Eigen” linear algebra library updated from 3.2.4 to a post-3.2.5 snapshot of the 3.2 branch.
- Reduced noisy configure messages from CMake.
- Moved non-automated tests to an internal examples folder so they still get built (and thus verified as buildable) by CI.
- Compile the header-dependency tests for headers marked as C-safe as both C and C++, since some headers have extra functionality when built as C++.
- Adjusted the default settings to show full pose, not just orientation, for the /me/head path.
- Print a message to the command prompt about the command line arguments available when none are passed.
- Start up tracker viewer zoomed in (distance of 1 meter to origin) to provide a more usable experience from the start.
- Hide coordinate axes until the associated tracker has reported at least once.
- Compile the “coordinate axes” model into the executable itself to remove a point of failure.
- Updates to include bindings for new APIs added to ClientKit.
- Fix a lifetime-management bug that may have resulted in crashes (double-free or use-after-free) if a ClientContext was disposed of before child objects.
- Added accessor for the raw ClientContext within the safe handle object, for use with code that interoperates with additional native code.
- Operation order tweaks that result in latency improvements.
- ShaderLab/Cg/HLSL distortion shader simplified and optimized for higher performance.
- Updated to newer version of OSVR-Unity.
- Disabled mouselook.
- Included OpenGL distortion shaders simplified and optimized for higher performance.
- Display descriptor schema v1 elaborated with “as-used” details.
- Fixed infinite loop in hand-coded matrix initialization.
- Improved/simplified math code using Eigen and its “map” feature.
As always, the OSVR team, with the support of the community, is continuously adding smaller features and addressing known issues. You can see all of these on GitHub, such as in this link for OSVR-Core.
Interested in contributing? Start here: http://osvr.github.io/contributing/
Any questions or issues? email firstname.lastname@example.org
Thank you for being part of OSVR!
Yuval (“VRguy”) and the OSVR team
 Kreylos, 2012, “Standard camera model considered harmful.” <http://doc-ok.org/?p=27>
 Leap Motion is a trademark of Leap Motion, Inc.
 Microsoft and Kinect are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
 ThalmicLabs™ and Myo™ are trademarks owned by Thalmic Labs Inc.
 Logbar Inc. Ring is a trademark of Logbar Inc.