Virtual reality is set to be worth up to $7 billion by 2020. The web is definitely not going to remain an exclusively 2D environment during this time. In fact, there are already a few simple ways to bring VR into the browser. It is also incredibly fun to work with!
To begin your development adventure into the Virtual Web, there are three potential ways to go about it:
JavaScript, Three.js and Watching Device Orientation
JavaScript, Three.js and WebVR (My new preferred method!)
CSS and WebVR (still very early days)
I’ll go over each one and show a short summary of how each works.
One of the ways that most browser based virtual reality projects work at the moment is via the deviceorientation browser event. This tells the browser how the device is oriented and allows the browser to pick up if it has been rotated or tilted. This functionality within a VR perspective allows you to detect when someone looks around and adjust the camera to follow their gaze.
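To get a feel for the raw values involved, here is a minimal sketch (separate from the three.js plugin we'll use below) that simply listens for the event and logs the rotation values the browser reports:

// A minimal sketch: listen for the raw deviceorientation event and log
// the rotation values the browser reports (in degrees).
window.addEventListener('deviceorientation', function(event) {
  console.log('alpha (rotation around the z axis): ' + event.alpha);
  console.log('beta (front-to-back tilt): ' + event.beta);
  console.log('gamma (left-to-right tilt): ' + event.gamma);
});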
To achieve a wonderful 3D scene within the browser, we use three.js, a JavaScript framework that makes it easy to create 3D shapes and scenes. It takes most of the complexity out of putting together a 3D experience and allows you to focus on what you are trying to put together within your scene.
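As a rough idea of what that looks like, here is a stripped-down sketch of a three.js scene with a camera, a renderer and a single cube to look at (the variable names are placeholders rather than code from the demos below):

// A stripped-down three.js scene sketch, assuming three.min.js is loaded.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(90, window.innerWidth / window.innerHeight, 0.001, 700);
scene.add(camera);

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A simple cube so there is something to look at.
var cube = new THREE.Mesh(
  new THREE.BoxGeometry(10, 10, 10),
  new THREE.MeshBasicMaterial({color: 0x00ff00})
);
cube.position.set(0, 0, -50);
scene.add(cube);

renderer.render(scene, camera);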
I’ve written two demos here at SitePoint that use the Device Orientation method.
If you are new to three.js and how to put together a scene, I’d recommend taking a look at those two articles for a more in-depth introduction to this method. I’ll cover the key concepts here, but at a higher level.
The key components of each of these involve the following JavaScript files (you can get these files from the example demos above and will also find them in the three.js examples download):
three.min.js – our three.js framework
DeviceOrientationControls.js – the three.js plugin that provides the Device Orientation controls we discussed above. It moves our camera to match the movements of our device.
OrbitControls.js – a backup controller that lets the user move the camera with the mouse instead, for devices that don’t have access to the Device Orientation event.
StereoEffect.js – a three.js effect that splits the screen into a stereoscopic image, angled slightly differently for each eye just like in VR. This creates the actual VR split screen without us needing to do anything complicated.
The code to enable Device Orientation controls looks like so:
function setOrientationControls(e) {
  if (!e.alpha) {
    return;
  }

  controls = new THREE.DeviceOrientationControls(camera, true);
  controls.connect();
  controls.update();

  element.addEventListener('click', fullscreen, false);

  window.removeEventListener('deviceorientation', setOrientationControls, true);
}
window.addEventListener('deviceorientation', setOrientationControls, true);

function fullscreen() {
  if (container.requestFullscreen) {
    container.requestFullscreen();
  } else if (container.msRequestFullscreen) {
    container.msRequestFullscreen();
  } else if (container.mozRequestFullScreen) {
    container.mozRequestFullScreen();
  } else if (container.webkitRequestFullscreen) {
    container.webkitRequestFullscreen();
  }
}
The deviceorientation event listener provides alpha, beta and gamma values when it has a compatible device. If there is no alpha value, we leave our controls alone so that we fall back to Orbit Controls instead.
If the alpha value is present, we create a Device Orientation control and give it our camera variable to control. We also set the scene to go fullscreen when the user taps the screen (we don’t want to be staring at the browser’s address bar while in VR).
If that alpha value isn’t present and we don’t have access to the device’s Device Orientation event, this technique instead provides a control to move the camera by dragging it around with the mouse. This looks like so:
controls = new THREE.OrbitControls(camera, element);
controls.target.set(
  camera.position.x,
  camera.position.y,
  camera.position.z
);
controls.noPan = true;
controls.noZoom = true;
The main things that might be confusing in the code above are noPan and noZoom. Basically, we don’t want to physically move around the scene with the mouse and we don’t want to be able to zoom in or out – we only want to look around.
In order to use the stereo effect, we define it like so:
effect = new THREE.StereoEffect(renderer);
Then on resize of the window, we update its size:
effect.setSize(width, height);
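As a sketch, a typical resize handler for this set-up might look like the following, assuming the camera, renderer, effect and container variables from the snippets above:

// Keep the camera, renderer and stereo effect in sync with the window size.
function resize() {
  var width = container.offsetWidth;
  var height = container.offsetHeight;

  camera.aspect = width / height;
  camera.updateProjectionMatrix();

  renderer.setSize(width, height);
  effect.setSize(width, height);
}

window.addEventListener('resize', resize, false);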
Within each requestAnimationFrame, we set the scene to render through our effect:
effect.render(scene, camera);
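Pulling those pieces together, the animation loop might look roughly like this (a sketch, using the controls and effect variables from above):

// Each frame: update the controls (device orientation or orbit) and
// render the scene through the stereoscopic effect.
function animate() {
  controls.update();
  effect.render(scene, camera);
  requestAnimationFrame(animate);
}

animate();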
Those are the basics of how the Device Orientation style of achieving VR works. It can be effective for a nice and simple implementation with Google Cardboard; however, it isn’t quite as effective with the Oculus Rift. The next approach is much better for the Rift.
Looking to access VR headset orientation, like that of the Oculus Rift? WebVR is the way to do it at the moment. WebVR is an early and experimental JavaScript API that provides access to the features of virtual reality devices like the Oculus Rift and Google Cardboard. At the moment, it is available in Firefox Nightly and a few experimental builds of Mobile Chrome and Chromium. One thing to keep in mind is that the spec is still in draft and subject to change, so experiment with it but know that you may need to adjust things over time.
Overall, the WebVR API provides access to the VR device information via:
navigator.getVRDevices
I won’t go into lots of technical details here (I’ll cover this in more detail in its own future SitePoint article!). If you’re interested in finding out more, check out the WebVR editor’s draft. The reason I won’t go into detail with it is that there is a much easier method of implementing the API.
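For the curious, querying for devices with the raw API looks roughly like this. This is a sketch based on the early editor’s draft, so treat the property names (such as deviceName) as likely to change as the spec evolves:

// Sketch based on the early WebVR draft: getVRDevices() returns a promise
// that resolves with any VR devices the browser can see.
if (navigator.getVRDevices) {
  navigator.getVRDevices().then(function(devices) {
    devices.forEach(function(device) {
      console.log('Found a VR device: ' + device.deviceName);
    });
  });
} else {
  console.log('WebVR is not available in this browser.');
}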
That easier method is to use the WebVR Boilerplate from Boris Smus. It provides a good level of baseline functionality that implements WebVR and gracefully degrades the experience for different devices. It is currently the nicest web VR implementation I’ve seen. If you are looking to build a VR experience for the web, this is currently the best place to start!
To start using this method, download the WebVR Boilerplate on GitHub. You can focus on editing the index.html and using all of the files in that set-up, or you can implement the specific plugins into your own project from scratch. If you’d like to compare the differences in each implementation, I’ve migrated my Visualizing a Twitter Stream in VR with Three.js and Node example from above into a WebVR-powered Twitter Stream in VR.
To include this project into your own from scratch, the files you’ll want to have are:
three.min.js – our three.js framework, of course
VRControls.js – a three.js plugin for VR controls via WebVR (this can be found in bower_components/threejs/examples/js/controls/VRControls.js in the Boilerplate project)
VREffect.js – a three.js plugin for the VR effect itself that displays the scene for an Oculus Rift (this can be found in bower_components/threejs/examples/js/effects/VREffect.js in the Boilerplate project)
webvr-polyfill.js – a polyfill for browsers which don’t fully support WebVR just yet (this can be found on GitHub and also in bower_components/webvr-polyfill/build/webvr-polyfill.js in the Boilerplate code)
webvr-manager.js – part of the Boilerplate code which manages everything for you, including providing a way to enter and exit VR mode (this can be found in build/webvr-manager.js)
Implementing it requires only a few adjustments from the Device Orientation method. Here’s an overview for those looking to try this method:
The VR controls are quite simple to set up. We can just assign a new VRControls object to the controls variable we used earlier. The orbit controls and device orientation controls shouldn’t be necessary, as the Boilerplate should now take care of browsers without VR capabilities. This means your scene should still work quite well on Google Cardboard too!
controls = new THREE.VRControls(camera);
The effect is implemented in much the same way as the StereoEffect was. Just replace that effect with our new VREffect one:
effect = new THREE.VREffect(renderer);
effect.setSize(window.innerWidth, window.innerHeight);
However, we do not render through that effect in this method. Instead, we render through our VR manager.
The VR manager takes care of all our VR entering/exiting and so forth, so this is where our scene is finally rendered. We initially set it up via the following:
manager = new WebVRManager(renderer, effect, {hideButton: false});
The VR manager provides a button which lets the user enter VR mode if they are on a compatible browser, or go fullscreen if their browser isn’t capable of VR (fullscreen is what we want for mobile). The hideButton parameter says whether we want to hide that button or not. We definitely do not want to hide it!
Our render call then looks like so. It uses a timestamp variable that is passed into our update() function each frame:
function update(timestamp) {
  controls.update();
  manager.render(scene, camera, timestamp);
}
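For completeness, one way this loop might be driven is via requestAnimationFrame, which supplies the timestamp each frame (a sketch, assuming the update() function above):

// requestAnimationFrame passes a timestamp into update() on every frame,
// which we then hand to the VR manager's render call above.
function animationLoop(timestamp) {
  update(timestamp);
  requestAnimationFrame(animationLoop);
}

requestAnimationFrame(animationLoop);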
With all of that in place, you should have a working VR implementation that translates itself into various formats depending on the device.