Feature #1636

Support for Oculus Rift

Added by skyjake about 11 years ago. Updated almost 11 years ago.

Status:
Closed
Priority:
Normal
Assignee:
Category:
3rd party compatibility
Target version:
Start date:
2013-10-23
% Done:

100%


Description

Oculus Rift is cool and we should have support for it in the renderer.

However, having dabbled with stereo 3D, there are some hurdles to overcome (like whether sprites should be allowed at all, and having a realistic depth position for the psprite weapon even though you can walk so close to a wall that the gun goes completely inside it).

Subtasks:
  • Stereoscopic 3D (in any form)
  • Side-by-side scene composition
  • Head tracking control of player orientation
  • Render unwarped view to off-screen render buffer in one pass, then Rift-warp result to primary display in second pass.
left_right.jpg (207 KB) — skyjake, 2013-10-24 21:10
left_right_aspect.png (286 KB) — skyjake, 2013-10-25 13:11

Related issues

Related to Feature #1229: Input plugins: generate events from connected controllers (Rejected, 2003-07-09)

Related to Feature #7: Next-gen renderer (codename "Gloom") (Progressed, 2003-07-10)

Related to Feature #1655: Offscreen UI composition (Closed, 2013-11-05)

Related to Feature #1657: Oculus Rift field-of-view angle (Closed, 2013-11-05)

Related to Feature #1654: [VR] Displaying the 2D UI within a 3D view (Closed, 2013-10-24)

Related to Feature #1678: Optimize stereoscopic pixel formats (Rejected)

Related to Feature #1852: Support for Oculus Rift DK2 (extended desktop mode, LibOVR 0.4.3) (Closed, 2014-08-13)

Related to Bug #2135: Disable Oculus support by default (until proper LibOVR 1.0 support is implemented) (Closed, 2015-11-28)

Precedes Feature #1646: Stereo 3D enhancements (New, 2013-10-24)

Precedes Feature #1656: UI for VR / Oculus Rift config (Closed, 2013-10-24)

History

#1 Updated by cmbruns about 11 years ago

skyjake wrote:

Oculus Rift is cool and we should have support for it in the renderer.

However, having dabbled with stereo 3D, there are some hurdles to overcome (like whether sprites should be allowed at all, and having a realistic depth position for the psprite weapon even though you can walk so close to a wall that the gun goes completely inside it).

Hey I just implemented an Oculus Rift mode for gzdoom http://rotatingpenguin.com/gz3doom/.

I'd like to do Doomsday too, if I could get the build environment working on my dev machine.

I personally think the sprites work well in the Rift.

I chose a tradeoff for the sprite weapon where it only inserts slightly into the closest walls, but is not so close to the viewer it hurts the eyes during normal use.

I'm available for consultation, or actual implementation (if I can get the build environment going).

#2 Updated by skyjake about 11 years ago

cmbruns wrote:

I just implemented an Oculus Rift mode for gzdoom http://rotatingpenguin.com/gz3doom/.
I'd like to do Doomsday too, if I could get the build environment working on my dev machine.

That's excellent. We would certainly appreciate it if you could give it a shot. We are currently busy working on other topics, and personally I wouldn't feel comfortable working on this without having the Oculus Rift hardware to test it on.

As to the build environment, just send me an email if you have any questions or encounter difficulties. There is also a wiki page about how to compile.

#3 Updated by cmbruns about 11 years ago

skyjake wrote:

cmbruns wrote:

I just implemented an Oculus Rift mode for gzdoom http://rotatingpenguin.com/gz3doom/.
I'd like to do Doomsday too, if I could get the build environment working on my dev machine.

That's excellent. We would certainly appreciate it if you could give it a shot. We are currently busy working on other topics, and personally I wouldn't feel comfortable working on this without having the Oculus Rift hardware to test it on.

As to the build environment, just send me an email if you have any questions or encounter difficulties. There is also a wiki page about how to compile.

Cool. I'll try (again) to get the dev environment set up this weekend. In the meantime I'd like to get you thinking about some issues, because I will need your advice.

There are several nearly discrete tasks to get Oculus Rift mode going. One that has turned out to be tricky in gzdoom is rendering the view twice: once to the left half of the screen and once to the right. Each view independently clears the depth buffer and updates the projection matrix, but between rendering the left and right views, time does not advance, the player view direction does not change, and no actors move. Further, menus, hud items, and the intermission screen need to be rendered to each side. In Oculus Rift mode (as opposed to other stereo 3D modes), those hud items, intermission screens, and menus need to be centered and drawn at a smaller scale than the primary 3D view. Extra bonus points if those two views can optionally be rendered in either 1) the correct aspect ratio or 2) a 1:2 aspect ratio where everything is tall and skinny (the latter is needed for certain side-by-side stereo 3D modes).
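The viewport arithmetic behind this composition can be sketched roughly as follows (illustrative names only, not Doomsday's actual API — the depth clear and projection update would happen per eye around each pass):

```cpp
#include <cassert>

// Illustrative only: compute the per-eye viewport rectangle for side-by-side
// composition. Each eye gets one horizontal half of the window; between the
// two passes, game time does not advance and no actors move.
struct Rect { int x, y, w, h; };

Rect eyeViewport(int winW, int winH, int eye /* 0 = left, 1 = right */)
{
    return Rect { eye * (winW / 2), 0, winW / 2, winH };
}
```

Each pass would then set this rectangle as the glViewport (and scissor), clear the depth buffer, and draw with that eye's projection matrix.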

Would you mind thinking about how such a composition of views might work within your current architecture? I have resorted to some pretty hairy hacks to get it working in gzdoom. I hope it could be a bit more straightforward in Doomsday?...

#4 Updated by cmbruns about 11 years ago

I brought up side-by-side scene composition in my previous comment because it is a tricky topic and deserves some cogitation. The earlier the better. Let's keep thinking about it.

But in order to get my feet wet I want to start with something simpler, just as I did with gzdoom. The first thing I implemented was stereo 3D with red/cyan or green/magenta glasses. All this requires is two rendering passes with slightly different projection matrices and different parameters passed to glColorMask().

However, gzdoom was already calling glColorMask(0,0,0,0), followed later by glColorMask(1,1,1,1), in several places, so I had to refactor all those glColorMask(1,1,1,1) calls to revert to whatever the previous glColorMask() mode was. I did this with a clever stack-allocated LocalScopeColorMask class instance.

That gave me basic stereo 3D functionality, and I was then able to immediately implement the following stereo modes: mono; red/cyan; green/magenta; left-eye view; right-eye view; and quad-buffered hardware stereoscopic 3D. I could have also done scanline-interleaved, column-interleaved, and checkerboard-interlaced modes, if gzdoom weren't already doing lots of complex, clever stuff with the OpenGL stencil buffer; so I have postponed those three stencil-based 3D modes for the time being.
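The scope-guard idea behind that class might look like this — a minimal sketch with the actual glColorMask() call stubbed out by a global so the save/restore behavior is visible (the real class would of course call into OpenGL):

```cpp
#include <cassert>

// Current color mask state; stands in for the GL state machine.
struct Mask { bool r, g, b, a; };
static Mask g_currentMask { true, true, true, true };

// Stand-in for glColorMask().
static void setColorMask(Mask m) { g_currentMask = m; }

// RAII guard: applies a mask on construction, restores the previous one on
// destruction, so scattered "restore to (1,1,1,1)" calls are unnecessary.
class LocalScopeColorMask {
public:
    explicit LocalScopeColorMask(Mask m) : _saved(g_currentMask) { setColorMask(m); }
    ~LocalScopeColorMask() { setColorMask(_saved); }
private:
    Mask _saved;
};
```

Within a scope, constructing the guard applies (say) a green/magenta mask; when the guard goes out of scope, the prior mask is restored automatically.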

I activate green/magenta 3D mode by typing "vr_mode 1" in the gzdoom console. I hope I will be able to do something similar in Doomsday.

One advantage of implementing these simpler stereo 3D modes first is that folks without Oculus Rift hardware can help accurately test the implementation.

#5 Updated by cmbruns about 11 years ago

List of higher-level tasks needed for Oculus Rift mode:

  • Stereoscopic 3D (in any form)
  • Side-by-side scene composition
  • Head tracking control of player orientation
  • Render unwarped view to off-screen render buffer in one pass, then Rift-warp result to primary display in second pass.

Each of these aspects can be developed almost independently. And each has its own bag of interesting technical considerations.

#6 Updated by skyjake about 11 years ago

  • Description updated (diff)

cmbruns wrote:

Would you mind thinking about how such a composition of views might work within your current architecture? I have resorted to some pretty hairy hacks to get it working in gzdoom. I hope it could be a bit more straightforward in Doomsday?...

Recently I've been refactoring the UI into separate, independent widgets, and the game view is currently drawn by one of these (LegacyWidget; "legacy" because it's just a thin wrapper around some very old code — it will be further broken down into smaller widgets).

One requirement for this was to be able to position the game view anywhere inside the window, so that the Renderer Appearance sidebar editor could be shown next to it. In practice it simply sets a glViewport and scissor to the appropriate rectangle inside the window. I would imagine this is exactly what is needed to have the left/right frames side by side.

However, some code may be making hidden assumptions that the view is in the top left corner of the window — I will set up a work branch ("oculus-rift") for this and see how much there is to be done.

The downside with the current renderer is that it will not be very efficient to render the same frame twice. The cost will effectively be the same as rendering two consecutive frames. This is one key area that we will be addressing in the longer term, as we're migrating to OpenGL 2+.

glColorMask

Using a color mask in that manner should be quite straightforward, as the only place we're using a color mask is when the sky is being drawn (rend_list.cpp:2052 and 2070).

vr_mode

Adding a new console variable is also easy: you just add a global variable and then register it as a console variable, for instance in Rend_Register().
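As an illustration only (this is not Doomsday's actual cvar API — the engine's registration call would live in Rend_Register()), the pattern amounts to binding a name to a global variable:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical stand-in for console-variable registration.
static int vrMode = 0; // the global backing the cvar

static std::map<std::string, int *> &cvarRegistry()
{
    static std::map<std::string, int *> reg;
    return reg;
}

// Register a cvar name against its backing global.
static void registerCVar(const char *name, int *var) { cvarRegistry()[name] = var; }

// Console lookups then read through the registry.
static int getCVar(const char *name) { return *cvarRegistry().at(name); }
```

After registerCVar("rend-vr-mode", &vrMode), typing the cvar's name in the console would read or write the global directly.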

#7 Updated by skyjake about 11 years ago

The branch "oculus-rift" is now on GitHub. I've made some initial changes: I've verified that the entire UI can be drawn side by side for the left/right frames (as seen in attached screenshot).

There are still some minor glitches, though, that need investigating.

#8 Updated by skyjake about 11 years ago

  • Status changed from New to In Progress

#9 Updated by cmbruns about 11 years ago

skyjake wrote:

The downside with the current renderer is that it will not be very efficient to render the same frame twice. The cost will effectively be the same as rendering two consecutive frames. This is one key area that we will be addressing in the longer term, as we're migrating to OpenGL 2+.

The sobering fact is that stereo 3D rendering generally costs a 2x performance penalty. If the fragment shader is taking most of the render time, you can get some of that penalty back in side-by-side or in stenciled modes, because the total number of pixels rendered has not increased.

If loads of textures and polygonal geometry are being uploaded to the video card every frame, and if that takes a significant amount of time, then yes, that part could be optimized for stereo 3D.

attached screenshot

Very nice! That was fast. It looks like I could use that view mode RIGHT NOW in side-by-side mode on my 3D TV, which requires that sort of squished aspect ratio. For Oculus Rift and cross-eyed viewing, the aspect ratio would need to be corrected back to normal within each viewport.

#10 Updated by skyjake about 11 years ago

cmbruns wrote:

For Oculus Rift and cross-eyed viewing, the aspect ratio would need to be corrected back to normal within each viewport.

I have now pushed a few commits to the work branch that apply aspect correction to the UI (see screenshot).

This turned out to be quite easy to do because Doomsday's UI can use a different logical size than the window. In this case, I simply made the UI 2x taller to compensate for the horizontal split. There is also a freely adjustable scaling factor for determining how big/small the UI should be within each split.
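The arithmetic of that trick, as a sketch (names are illustrative): with a horizontal side-by-side split each eye's viewport is half as wide, so doubling the logical UI height restores the UI's apparent aspect ratio.

```cpp
#include <cassert>

// Compute the logical UI size for a window, compensating for a horizontal
// side-by-side split by making the UI 2x taller.
struct Size { int w, h; };

Size logicalUISize(Size window, bool sideBySide)
{
    if (sideBySide) {
        return Size { window.w, window.h * 2 };
    }
    return window;
}
```

A separate scaling factor (as mentioned above) could then shrink or enlarge the UI within each split independently of this correction.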

The old graphics code was still making a number of unmanaged scissor etc. calls, but after ironing out those details it looks pretty glitch-free to me. :)

As a bonus feature, I'm also mapping mouse coordinates appropriately so both left/right frames can be used normally with the mouse.

#11 Updated by skyjake about 11 years ago

  • % Done changed from 0 to 10

#12 Updated by cmbruns about 11 years ago

skyjake wrote:

cmbruns wrote:

For Oculus Rift and cross-eyed viewing, the aspect ratio would need to be corrected back to normal within each viewport.

I have now pushed a few commits to the work branch that apply aspect correction to the UI (see screenshot).

This turned out to be quite easy to do because Doomsday's UI can use a different logical size than the window. In this case, I simply made the UI 2x taller to compensate for the horizontal split. There is also a freely adjustable scaling factor for determining how big/small the UI should be within each split.

The old graphics code was still making a number of unmanaged scissor etc. calls, but after ironing out those details it looks pretty glitch-free to me. :)

As a bonus feature, I'm also mapping mouse coordinates appropriately so both left/right frames can be used normally with the mouse.

Great work. Mapping the mouse coordinates already puts this implementation ahead of the gzdoom version in one respect. Players have complained to me about the mouse mapping, but I have not even started investigating how to fix it in gzdoom.

I've already invested a few hours into setting up my build environment. I recently got as far as creating a doomsday executable; so now that I'm dealing with just runtime errors I feel I am in the home stretch, and might be able to actually start coding soon.

#13 Updated by cmbruns about 11 years ago

I finally got the build environment working. First on the command line, then in Qt Creator.

Using skyjake's awesome side-by-side composition as a starting point, I created 3 stereoscopic modes: mono; green-magenta; and side-by-side. I could spend a while making many others. Type "vr_mode 1" in the console, for example.

Although I haven't done much yet, I submitted a pull request on github, to exercise the repository gymnastics.

One glitch I see is that, in side by side mode, there are briefly FOUR screens shown (instead of two) during the transition from "New Game" menu to active play.

#14 Updated by cmbruns about 11 years ago

One more thing: In my haste, I hard-coded the player point-of-view height to 41.0 map units above the floor. Where should I really be reading this value from?

#15 Updated by skyjake about 11 years ago

  • % Done changed from 10 to 20

cmbruns wrote:

Using skyjake's awesome side-by-side composition as a starting point, I created 3 stereoscopic modes: mono; green-magenta; and side-by-side. I could spend a while making many others. Type "vr_mode 1" in the console, for example.

Nicely done. I pulled your changes and applied some fine-tuning: the mouse coordinate mapping is only applied in the side-by-side mode, and there's a callback on "vr_mode" that automatically updates the logical UI size.

One glitch I see is that, in side by side mode, there are briefly FOUR screens shown (instead of two) during the transition from "New Game" menu to active play.

I'll fix this later. It's a relatively minor issue related to taking a screenshot to be used for the UI transition animation during/after busy mode (in BusyWidget).

One more thing: In my haste, I hard-coded the player point-of-view height to 41.0 map units above the floor. Where should I really be reading this value from?

We already have a game-side cvar for this called "player-eyeheight". Rather than adding another, you can just call Con_GetInteger("player-eyeheight").

#16 Updated by skyjake about 11 years ago

BTW, don't worry too much about conforming to our coding/naming conventions. When we're done, I'll update/rename cvars/functions/classes/files as needed...

#17 Updated by cmbruns about 11 years ago

skyjake wrote:

Nicely done. I pulled your changes and applied some fine-tuning: the mouse coordinate mapping is only applied in the side-by-side mode, and there's a callback on "vr_mode" that automatically updates the logical UI size.

Are your changes committed to your github oculus-rift or master branch? I'd like to stay in sync, as I might be ready to start moving fast.

BTW, don't worry too much about conforming to our coding/naming conventions. When we're done, I'll update/rename cvars/functions/classes/files as needed...

Actually I would prefer to include those changes earlier rather than later. I want to maintain consistency between doomsday and gzdoom conventions. So I want to understand precisely how it will actually look in doomsday. Could you please make a few comments about your intended changes?

#18 Updated by skyjake about 11 years ago

cmbruns wrote:

Are your changes committed to your github oculus-rift or master branch? I'd like to stay in sync, as I might be ready to start moving fast.

They're in the oculus-rift branch. We use the master for the biweekly unstable builds, so (as a rule) unfinished work shouldn't be merged there. I'll merge oculus-rift to master once it's been polished to a suitable state.

Actually I would prefer to include those changes earlier rather than later. I want to maintain consistency between doomsday and gzdoom conventions. So I want to understand precisely how it will actually look in doomsday. Could you please make a few comments about your intended changes?

Sure:

  • We use a dash-separated cvar naming convention. "vr_mode" should be called "rend-vr-mode", for instance. The idea is to treat the parts as a hierarchy: 'rend' is the base group, 'vr' a subgroup, etc.
  • Variable naming shouldn't use underscores. Instead, we prefer camel case ("vrMode").
  • There's a pi constant defined in <de/math.h> (de::PI).
  • For Git commits, we've been using a "tagging" convention in the message subject line (these are indexed in the Codex), for example:
    UI|Client|AudioSettings: Added toggle and default for sound-overlap-stop
  • There are two brace placement conventions in use.
    // The normal style. This should be used almost everywhere.
    void function()
    {
        if(true)
        {
            // stuff
        }
    }
    
    // The compact/inline style (e.g., an inline method in a class).
    int member() {
        if(true) {
            return 2;
        }
        return 1;
    }
    
  • Finally, some tiny whitespace conventions:
    // Not our style:
    switch (vr_mode)
    glColorMask(0,1,0,1);
    
    // Our style:
    switch(vr_mode)
    glColorMask(0, 1, 0, 1);
    
    

#19 Updated by cmbruns about 11 years ago

skyjake wrote:

cmbruns wrote:

Are your changes committed to your github oculus-rift or master branch? I'd like to stay in sync, as I might be ready to start moving fast.

They're in the oculus-rift branch. We use the master for the biweekly unstable builds, so (as a rule) unfinished work shouldn't be merged there. I'll merge oculus-rift to master once it's been polished to a suitable state.

I'm still learning to understand git properly. I had foolishly misconfigured my git remote called "upstream" to point to my own github fork, not yours. So I was not getting your changes until now. Oops.

Thanks for the style explanation. I'm coding now.

#20 Updated by cmbruns about 11 years ago

I have added a number of different stereo 3D modes, and lightly tested most of them. Basic stereoscopic functionality is working.

To implement head tracking, I will need to link to the Oculus SDK. This should be a totally optional build option. How should I configure the build to optionally use the Oculus Rift SDK, and then create some #ifdefs to mask the code that uses that SDK? I could use some pointers on how to accomplish this using the doomsday build system.

I forked the side-by-side stereo mode into four different modes: parallel, cross-eye, side-by-side, and Oculus Rift, each with its own appropriate aspect ratio and eye order. One thing I don't know how to do is to get all hud items (including the taskbar, weapon sprite, ammo counts, etc.) moved toward the center of the viewport for Oculus Rift mode. The very bottom of the viewport is essentially dead space with the VR unit on, so those hud items need to be shrunk in toward the center of the view, zooming in about a point centered on the crosshair.

#21 Updated by cmbruns about 11 years ago

cmbruns wrote:

To implement head tracking, I will need to link to the Oculus SDK. This should be a totally optional build option. How should I configure the build to optionally use the Oculus Rift SDK, and then create some #ifdefs to mask the code that uses that SDK? I could use some pointers on how to accomplish this using the doomsday build system.

OK I got the conditional build working.

Next I need to figure out where to apply the view orientation from the Oculus Rift. Is it too late to set it in the ClientWindow::canvasGLDraw() method? Minimizing latency is absolutely critical with the Oculus Rift. We need to minimize the time between 1) reading the head orientation from the Rift and 2) the moment the photons representing that orientation emerge from the Rift display. Shaving as little as one millisecond off that latency can be important; we need to count the milliseconds. So we should set the orientation at the last possible moment at which it will still affect the view we are about to render.

It is useful to represent the head orientation from the Oculus Rift in terms of the three Euler angles: Pitch (up/down), Roll, and Yaw (left/right). The absolute values of Pitch and Roll are set directly into the player orientation. The Yaw angle, on the other hand, is updated incrementally, because it is often combined with turning moves from player keyboard/mouse interactions. The Roll angle has no gameplay effect (that I know of), so I have been hard-coding it into the OpenGL modelview matrix calculation. To keep the view smooth, mouselook should be turned OFF during Oculus Rift mode, so the Pitch angle is controlled only by the Rift.
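That scheme can be sketched in a few lines (illustrative names, not the actual implementation): pitch and roll come from the tracker absolutely, while yaw is applied as a delta so it blends with keyboard/mouse turning.

```cpp
#include <cassert>

// Player and tracker orientations as Euler angles, in degrees.
struct Angles { float pitch, roll, yaw; };

// Apply head tracking: pitch/roll absolutely, yaw incrementally (delta
// against the previous tracker yaw), so player-initiated turns are preserved.
Angles applyHeadTracking(Angles player, Angles rift, float previousRiftYaw)
{
    player.pitch = rift.pitch;                 // absolute: Rift overrides
    player.roll  = rift.roll;                  // absolute: render-only effect
    player.yaw  += rift.yaw - previousRiftYaw; // incremental: blends with input
    return player;
}
```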

#22 Updated by cmbruns about 11 years ago

cmbruns wrote:

The Roll angle has no gameplay effect...

...but if we wanted to get really fancy, the rotated hud map could be affected by the Roll angle, so that the "forward" direction on the map always matches "up" direction of the player's head, rather than the "up" direction of the player's monitor.

But that would probably be too clever by half.

#23 Updated by danij about 11 years ago

Re: UI and HUD drawing

It would be useful for future development/maintenance to have a visual representation of the region in which the HUD and other UI elements should be drawn. As neither skyjake nor I currently have access to the hardware, we'd essentially be working blind otherwise. Drawing a simple unlit, untextured quad that we could uncomment in a debug build would likely suffice.

Re: Head tracking / Oculus Rift SDK

In the future we plan to implement an API which plugins can use to communicate user input from devices like joysticks to the engine. I'm guessing that we could do the same with input from the head tracker. This would be the ideal arrangement as then we could simply give users the option of installing this plugin/library (removing the need for a build time option in the engine itself).

However, we haven't yet scheduled the necessary input system refactoring(s).

#24 Updated by cmbruns about 11 years ago

danij wrote:

Re: UI and HUD drawing

It would be useful for future development/maintenance to have a visual representation of the region in which the HUD and other UI elements should be drawn. As neither skyjake nor I currently have access to the hardware, we'd essentially be working blind otherwise. Drawing a simple unlit, untextured quad that we could uncomment in a debug build would likely suffice.

Until I have the warping implemented, I probably won't be able to precisely create such a quad in doomsday either. But what I can do right now, is share a couple of screen shots from my gzdoom implementation. The first, gzdoom_hud_unwarped2.png, demonstrates the sort of thing I am talking about. All of the hud chrome is reduced to a small quad about 40% the width of the viewport. There is a tiny green crosshair in the very center, placed just where a full sized crosshair would be. That is the center of hud shrinkage.

Unwarped Image:

For comparison, I have also included an example of the final warped image. But we only need to worry about getting something like the unwarped image, in terms of hud placement.

Warped Image:

Re: Head tracking / Oculus Rift SDK

In the future we plan to implement an API which plugins can use to communicate user input from devices like joysticks to the engine. I'm guessing that we could do the same with input from the head tracker. This would be the ideal arrangement as then we could simply give users the option of installing this plugin/library (removing the need for a build time option in the engine itself).

However, we haven't yet scheduled the necessary input system refactoring(s).

In that case I'll probably just find a hack to get the orientation working for now.

By the way, I have a wireless game controller on my desk I bought so I could play in the Rift without being tethered to the mouse/keyboard. But it does not seem to immediately work with doomsday. Options->Controls->..."Press Key or Move Controller" does not seem to recognize the device. Do you have any tips for troubleshooting this?

#25 Updated by skyjake about 11 years ago

cmbruns wrote:

I will need to link to the Oculus SDK. This should be a totally optional build option.

OK I got the conditional build working.

Your setup looks good; however, we've been using a "DENG_" prefix on all preprocessor definitions like this (DENG_USE_OCULUS_RIFT).

One thing I don't know how to do, is to get all hud items, (including taskbar, weapon sprite, ammo counts etc) moved toward the center of the viewport for Oculus Rift mode.

There are a few elements to this.

When it comes to the Doomsday UI itself (taskbar, console, etc.), we need to set up suitable layout rules that define the size of the view. I can set this up in GuiRootWidget so that one can define a separate rectangle inside the true root UI size that other widgets can then layout themselves into. Or, it might be enough to modify the projection matrix for selected/flagged widgets (GuiRootWidget::projMatrix2D()).

The game-side UI is a different matter, though. danij has been working on this area more recently than I, but a quick glance shows that st_stuff.c:2555 might be a good place to start from.

Naturally, on game-side, the VR variables have to be accessed via the Con_Get*() functions.

Next I need to figure out where to apply the view orientation from the Oculus Rift. Is it too late to set it in the ClientWindow::canvasGLDraw() method?

An appropriate place might be in the end of R_UpdateViewer() (around r_main.cpp:855). The important thing to note here is that the view angles use interpolation to account for the requirement to keep player movement updated at 35 Hz due to full vanilla compatibility. R_UpdateViewer() performs the interpolation, but after it has been done it should be ok to apply an additional yaw/pitch offset to the angles.
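The interpolation-plus-offset idea described there can be sketched as follows (illustrative names; the real code lives around R_UpdateViewer()): the playsim updates angles at 35 Hz "sharp" tics, the renderer interpolates between the last two sharp values, and the Rift offset is added after interpolation, once per rendered frame.

```cpp
#include <cassert>

// Interpolate the view yaw between two 35 Hz "sharp" tic values, then apply
// the per-frame head-tracking offset on top of the interpolated result.
float interpolatedYaw(float prevSharpYaw, float nextSharpYaw,
                      float frac /* 0..1 position within the current tic */,
                      float riftYawOffset)
{
    float const interpolated = prevSharpYaw + frac * (nextSharpYaw - prevSharpYaw);
    return interpolated + riftYawOffset;
}
```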

When it comes to minimizing latency, how is vsync treated? Should it be disabled? What is the Oculus Rift's display refresh rate?

What is the control scheme, though? Which view angles are under Oculus Rift control and which are controlled with the keyboard?

To keep the view smooth, mouselook should be turned OFF during Oculus Rift mode, so the Pitch angle is controlled only by the Rift.

Due to the aforementioned 35 Hz limitation, this will likely have to be done in two places:
  • For rendering, R_UpdateViewer() should override the pitch angle with the Oculus Rift pitch. This is updated for every frame.
  • For playsim, libcommon's P_PlayerThinkLookPitch() updates the effective pitch angle at 35 Hz to the latest value.

By the way, I have a wireless game controller on my desk I bought so I could play in the Rift without being tethered to the mouse/keyboard. But it does not seem to immediately work with doomsday. Options->Controls->..."Press Key or Move Controller" does not seem to recognize the device. Do you have any tips for troubleshooting this?

Check that joystick input is enabled in the Input Settings. Also, start with verbose messages and try to look for a message about joystick initialization.

#26 Updated by danij about 11 years ago

It seems to me that to fully integrate the Oculus head tracking it must be folded into the binding system in a similar fashion to mouse and joystick axes. One could then configure the control scheme in exactly the same way as with other input devices.

With regard to the game HUD, I would suggest leaving this as-is and instead modifying the projection prior to calling the game-side DrawWindow drawer in LegacyWidget.

#27 Updated by skyjake about 11 years ago

danij wrote:

It seems to me that to fully integrate the Oculus head tracking it must be folded into the binding system in a similar fashion to mouse and joystick axes.

That is true; however, it's impossible at the moment because we don't support zero-latency controls.

#28 Updated by danij about 11 years ago

I rather see this from the perspective that the lack of zero-latency input is a serious bug.

#29 Updated by cmbruns about 11 years ago

skyjake wrote:

DENG_USE_OCULUS_RIFT

Got it. Will do.

Naturally, on game-side, the VR variables have to accessed via the Con_Get*() functions.

Good point. I'm learning to use this approach.

When it comes to minimizing latency, how is vsync treated? Should it be disabled? What is the Oculus Rift's display refresh rate?

The Oculus refresh rate is 60 Hz. I'm unsure how to treat vsync. Perhaps we should simply begin with a 35 Hz implementation and prioritize latency optimization later.

What is the control scheme, though? Which view angles are under Oculus Rift control and which are controlled with the keyboard?

Oculus Rift head tracking, when active, should absolutely override pitch and roll angles from other inputs, and it should blend yaw angle adjustment with the keyboard/mouse/joystick controllers.

danij wrote:

I rather see this from the perspective that the lack of zero-latency input is a serious bug.

Significant non-zero latency is a recipe for nausea with the Rift. After looking at how to insert the view angles, though, I agree that it might be best to treat the Rift like another input. How yucky would it be to develop support for zero-latency input? Perhaps for now I should simply implement roll angle tracking, to prove that the angles can be read, and move on to the warp implementation, giving us all more time to think about the pitch/yaw input mechanics.

#30 Updated by danij about 11 years ago

If the input from the head tracker affects the angle at which the player moves, then this fundamentally requires an input mode which is distinct from that used for vanilla compatibility.

This new mode could be zero-latency, but it would take some effort to get working correctly, as essentially it asks for movement to be applied as soon as it's received rather than "stored up" until the next 35 Hz "sharp" time slice. This in itself is potentially problematic, as the playsim assumes that the player only moves during these so-called sharp game tics.

Indeed I would agree that implementing the roll angle tracking and then moving on to geometry warping would be the best course of action at this time. Fully integrating the head tracking control scheme is something I think skyjake and I need to discuss in detail.

FYI: I'm personally particularly sensitive to refresh rate, sync and latency issues and have found myself feeling nauseous with the current setup. I can only imagine that I would find using the Rift impossible if the head tracking suffered the same degree of latency.

#31 Updated by skyjake about 11 years ago

danij wrote:

Indeed I would agree that implementing the roll angle tracking and then moving on to geometry warping would be the best course of action at this time. Fully integrating the head tracking control scheme is something I think skyjake and I need to discuss in detail.

That sounds sensible. Realistically, implementing latency-free controls the Right Way will not happen in the near-term future unless we change our roadmap significantly. Therefore, I would recommend that the tracking be hacked in at this stage and later refactored into better shape as part of the Input drivers proposal.

#32 Updated by danij about 11 years ago

Hacking it in would inherently mean that player movement lags behind the refresh update. While it would certainly be good to have a working implementation we can later improve (remove latency), from a user perspective I question how usable this will be in practice. However, we don't have much choice at this time.

I recall how terribly the game controlled when we did that with mouse input. In the Rift it'll be far worse.

#33 Updated by cmbruns about 11 years ago

danij wrote:

Hacking it in would inherently mean that player movement lags behind the refresh update. While it would certainly be good to have a working implementation we can later improve (remove latency), from a user perspective I question how usable this will be in practice. However, we don't have much choice at this time.

I recall how terribly the game controlled when we did that with mouse input. In the Rift it'll be far worse.

For clarity, let's get quantitative here. I was being a bit hyperbolic when I said that every millisecond counts. Practically, if we could get a latency of, say, 15 ms, we would be doing better than most Rift hackers and should be proud. A latency of 30 ms (read 35 Hz) would be good but imperfect. A latency above, say, 120 ms might be unreleasable. How do these numbers compare with doomsday frame render times on modern hardware (times 2 for stereo...)? Do you think we could hit any of those numbers without redesigning the input system?

#34 Updated by skyjake about 11 years ago

cmbruns wrote:

For clarity, let's get quantitative here. I was being a bit hyperbolic when I said that every millisecond counts. Practically, if we could get a latency of, say, 15 ms, we would be doing better than most Rift hackers and should be proud. A latency of 30 ms (read 35 Hz) would be good but imperfect. A latency above, say, 120 ms might be unreleasable. How do these numbers compare with doomsday frame render times on modern hardware (times 2 for stereo...)? Do you think we could hit any of those numbers without redesigning the input system?

With the current input system and renderer we should be able to (quite easily) handle 35 Hz input on a modern gaming system.

If we forget trying to reach absolute minimum latency for now, we should be able to use the existing player controls and bindings system for Oculus, too.

There are some parts of the input system that are not planned to change, for instance the virtual input devices. We could add a new type of "Head Tracker" virtual input device for Oculus and set it up with 3 'stick' type axes. The bindings system will need to be tweaked a little to be aware of the new type of device (e.g., saving and parsing bindings). Then we would need to add new player controls for the view yaw/roll angles. (A pitch control already exists.)

#35 Updated by cmbruns about 11 years ago

skyjake wrote:

With the current input system and renderer we should be able to (quite easily) handle 35 Hz input on a modern gaming system.

If we forget trying to reach absolute minimum latency for now, we should be able to use the existing player controls and bindings system for Oculus, too.

There are some parts of the input system that are not planned to change, for instance the virtual input devices. We could add a new type of "Head Tracker" virtual input device for Oculus and set it up with 3 'stick' type axes. The bindings system will need to be tweaked a little to be aware of the new type of device (e.g., saving and parsing bindings). Then we would need to add new player controls for the view yaw/roll angles. (A pitch control already exists.)

Sounds good. Is it possible for a noob like myself to add a new virtual input device? Or is that something you guys would need to do?

While investigating this, I've just realized that I have not been using the poorly documented Rift API for predicting future head orientation. It's not a panacea, but apparently it is useful for latencies under 80 ms. I should be doing this in gzdoom too, assuming I find a way to estimate the latency.

#36 Updated by skyjake about 11 years ago

cmbruns wrote:

Is it possible for a noob like myself to add a new virtual input device? Or is that something you guys would need to do?

I can take a look at that. I've written much of the current bindings/controls system and there are some details that might be a little tricky.

#37 Updated by cmbruns about 11 years ago

Shifting gears for a moment...

To do the oculus rift warping, I need to do the following:
  1. Render the whole scene to an offscreen render buffer, with an attached color texture
  2. Blit that texture to the screen using a particular GLSL shader

I notice there are some classes like GLTarget and GLTexture that would appear to wrap the functionality I need for this. Is there an example of using one of those custom GLTargets to render the whole scene, including scene, weapon, taskbar, menus etc? I suppose I will be needing color and depth and stencil buffers in that offscreen render buffer?

#38 Updated by danij about 11 years ago

Integrating head tracking via the existing binding system is definitely the better approach, regardless of optimal latency. Doing otherwise would mean that many of the associated problems including control scheme mapping and sensitivity adjustment would need to be solved again specifically for Oculus.

cmbruns wrote:

Shifting gears for a moment...

To do the oculus rift warping, I need to do the following:
  1. Render the whole scene to an offscreen render buffer, with an attached color texture
  2. Blit that texture to the screen using a particular GLSL shader

I notice there are some classes like GLTarget and GLTexture that would appear to wrap the functionality I need for this. Is there an example of using one of those custom GLTargets to render the whole scene, including scene, weapon, taskbar, menus etc? I suppose I will be needing color and depth and stencil buffers in that offscreen render buffer?

Skyjake is better placed to advise on such matters as he has only recently implemented those components. BusyWidget is probably closest to what you want. Although I wouldn't have thought you'd need stencil info for this.

#39 Updated by cmbruns about 11 years ago

danij wrote:

Skyjake is better placed to advise on such matters as he has only recently implemented those components. BusyWidget is probably closest to what you want. Although I wouldn't have thought you'd need stencil info for this.

I don't need stencil for doing the warping, but Doomsday might need the stencil for rendering everything that gets rendered. I guess that's what I'm asking. Is a stencil needed for the target of GuiRootWidget::draw()?

#40 Updated by skyjake about 11 years ago

cmbruns wrote:

I notice there are some classes like GLTarget and GLTexture that would appear to wrap the functionality I need for this. Is there an example of using one of those custom GLTargets to render the whole scene, including scene, weapon, taskbar, menus etc? I suppose I will be needing color and depth and stencil buffers in that offscreen render buffer?

Yes, GLTarget and GLTexture should be used here. The only potential issue is that I haven't yet used these extensively, so you may encounter some bugs.

The best example of how to use these is in GuiWidget, where it's doing a blur operation on the UI. I suppose what you need is exactly a similar case except you just render the entire UI instead of a partial widget tree. In practice this should be doable within ClientWindow::canvasGLDraw().

I suppose you will be drawing the entire UI, task bar and everything included? We can worry about scaling it into the center ~40% area a bit later.

Is a stencil needed for the target of GuiRootWidget::draw()?

Yes, the sky uses stencil.

#41 Updated by cmbruns about 11 years ago

skyjake wrote:

Yes, GLTarget and GLTexture should be used here. The only potential issue is that I haven't yet used these extensively, so you may encounter some bugs.

The best example of how to use these is in GuiWidget, where it's doing a blur operation on the UI. I suppose what you need is exactly a similar case except you just render the entire UI instead of a partial widget tree. In practice this should be doable within ClientWindow::canvasGLDraw().

I suppose you will be drawing the entire UI, task bar and everything included? We can worry about scaling it into the center ~40% area a bit later.

Yes, task bar, menus, monsters, hallways, and everything will be warped, and scaling into the center will be a later enhancement.

#42 Updated by cmbruns about 11 years ago

I'm struggling with the warping.
I'm trying to paint to an offscreen buffer and then just paint it back on the screen, to get started.
It works a little, but there are problems.
Fading to white: When I activate the game menu, using ESC, the screen fades to white in about 200 ms and remains pure white. ESC again restores the initial art screen. When I enter a game, the screen fades to white again.
OpenGL 1.1+: When I use lower-level OpenGL calls to build it up from scratch, I see a lot of OpenGL link errors, like

clientwindow.obj : error LNK2001: unresolved external symbol "void (__stdcall* glUseProgram)(unsigned int)" (?glUseProgram@@3P6GXI@ZA)

You can call glUseProgram from glprogram.cpp, so why can't I call glUseProgram from clientwindow.cpp?

skyjake wrote:

In practice this should be doable within ClientWindow::canvasGLDraw().

That canvasGLDraw() is getting pretty ugly. I'm tempted to break out a separate method for the stereo mode cases.

#43 Updated by danij about 11 years ago

I'm not familiar with that area of the codebase, but I expect this is because glprogram.cpp resides in libgui and therefore the glUseProgram entry point is not available. We are transitioning the Doomsday client to interface with GL through a strict OO library API. The existing low-level drawing which remains in the engine uses a much older, "parallel" graphics layer based on OpenGL 1.4-level functionality.

To use a shader outside of libgui you should be using a GLProgram instance.

The fade-to-white issue sounds like a color buffer clearing problem.

cmbruns wrote:

skyjake wrote:

In practice this should be doable within ClientWindow::canvasGLDraw().

That canvasGLDraw() is getting pretty ugly. I'm tempted to break out a separate method for the stereo mode cases.

By all means fork the flow internally into separate methods of the private Instance, if it's becoming unwieldy.

#44 Updated by skyjake about 11 years ago

cmbruns wrote:

I'm struggling with the warping.
I'm trying to paint to an offscreen buffer and then just paint it back on the screen, to get started.
It works a little, but there are problems.

I can help debug it if you commit the GLTarget-using version to GitHub.

#45 Updated by skyjake about 11 years ago

cmbruns wrote:

skyjake wrote:

Naturally, on game-side, the VR variables have to be accessed via the Con_Get*() functions.

Good point. I'm learning to use this approach.

A clarification regarding "game-side" and "engine-side". We use these terms to refer to code that resides in a game plugin (libdoom, libcommon, etc.) and the engine (client/server), respectively. So in clientwindow.cpp or other client sources, one shouldn't use the public API (Con_Get*) but preferably an internal getter function (say, VR::mode()), or just use the global variable directly. The Con_Get* methods have some overhead in the variable look-up.

#46 Updated by skyjake about 11 years ago

  • % Done changed from 20 to 30

#47 Updated by cmbruns about 11 years ago

skyjake wrote:

...So in clientwindow.cpp or other client sources, one shouldn't use the public API (Con_Get*) but preferably an internal getter function (say, VR::mode()), or just use the global variable directly

OK. Thank you for your patience while I absorb these subtleties.

I can help debug it if you commit the GLTarget-using version to GitHub.

I will do so soon. I wonder if I might need to enforce proper push/pop of offscreen render buffers to get this to work perfectly. This might explain some glitches I see in gzdoom too.

RE: Latency
I've been giving latency some thought. I think we could possibly survive forever with the following approach:
  1. Treat Oculus as a standard input controller with whatever latency inherent to that system.
  2. Perform a second update of view direction, at render time, that only affects the OpenGL view matrix. (What John Carmack calls "late scheduling" http://www.altdevblogaday.com/2013/02/22/latency-mitigation-strategies/). This approach would minimize latency, at the expense of presumably minor defects in weapon direction and software culling during rapid head motion.
  3. Apply predictive head orientation using Oculus SDK GetPredictedOrientation().
A couple of other minor points I need to remember:
  • In Oculus Rift mode, the angular field of view is deterministic, and should perhaps override the user field of view setting.
  • The Oculus Rift also presents a particular hardware aspect ratio. I probably need to enforce that better in the gzdoom implementation as well.

#48 Updated by skyjake about 11 years ago

cmbruns wrote:

RE: Latency
I've been giving latency some thought. I think we could possibly survive forever with the following approach:

That sounds like a very good approach to me.

I have just pushed the virtual device changes to the work branch, where I added a new type of device for head tracking. The bindings system should recognize it as "head", e.g., "bindcontrol look head-pitch".

The physical head tracking input should be polled in DD_ReadHeadTracker().

I didn't yet add a bindable control for roll, though. Also, I'm guessing the value from "head-yaw" needs to be added as an extra offset in P_PlayerThinkLookYaw().

I also made some changes that allow player look and yaw to be updated on every frame and not just 35 Hz ticks. This occurs when "input-sharp" is set to zero. However, this will cause all sorts of juddering artifacts in player movement as that is still locked to 35 Hz, and therefore we should keep input-sharp at 1 for now.

#49 Updated by cmbruns about 11 years ago

skyjake wrote:

I can help debug it if you commit the GLTarget-using version to GitHub.

With your help, I got warping working (with another static QShaderProgram instance that will need to be moved to the right place...). Thank you. I also got the depth buffer working. There is still a problem with texturing: most of the floors and ceilings are black.

For the first time I am getting a sensible image in the Oculus Rift! Next I simply must take advantage of the controller infrastructure you created. We've now got significant progress in all four primary tasks.

#50 Updated by danij about 11 years ago

Nice work. At least for cross-eye viewing your screenshot is converging well for me.

So as to better my understanding of this - is this approximately what your implementation is doing?
http://rifty-business.blogspot.co.uk/2013/08/understanding-oculus-rift-distortion.html

#51 Updated by skyjake about 11 years ago

  • % Done changed from 30 to 40

#52 Updated by cmbruns about 11 years ago

danij wrote:

So as to better my understanding of this - is this approximately what your implementation is doing?
http://rifty-business.blogspot.co.uk/2013/08/understanding-oculus-rift-distortion.html

That's a great writeup I had not seen before. Yes, that article describes what the shader does. But the article does not describe the additional correction for chromatic aberration that the shader also does. The chromatic aberration correction significantly increases the complexity of the shader. You might notice a slight color fringing on the exterior window border at the extreme upper-right corner of the screen shot I posted. This effect of the shader actually counteracts an opposite color fringing caused by the optics of Oculus Rift lenses.

Here is a list of some remaining tasks:
  • Rift angular field-of-view setting, independent of regular field of view for other modes
  • Rift head tracking as an input controller
  • Late-scheduling update of head tracking
  • Predictive head tracking
  • Row/column/checker interleaved stereo 3D modes (→ #1646)
  • Play test and tweak all stereo 3D modes, especially Rift mode
  • Fix problem with floor/ceiling textures in Rift mode (will affect column/row/checker modes too)
  • Independent adjustment (in Rift mode) of apparent depth of a) weapon sprite (~0.5 meter) and b) all other HUD chrome (~3 meter) (→ #1654)
  • Zoom shrink Rift hud toward center of view (→ #1654)
  • Convert many hard-coded warp shader parameters to uniform variables
  • Set warp shader parameters using values read using Oculus SDK.
  • Fix glitchy intermediate scene in side-by-side modes (between menus and game start) (→ #1655)
  • Test/tweak Rift mode on Linux and Mac (I'm developing on Windows at the moment)
  • Neck model to add small x/y/z translation, based on Rift angles (→ #1646)
  • Console command(s) to semantically set VR mode, instead of just a number (→ #1656)
  • Headless 3D player model in Rift mode, so you see a body when looking down (→ #1646)
  • Paint non-weapon hud items on a world space transparent quad, so user simply looks down to see ammo, health, taskbar. (→ #1646)
  • Independent view and aim direction, like Team Fortress 2 (→ #1646)

We should think about which tasks need to be accomplished to close this issue (#1636), and which should spawn their own issues, to be resolved later.

#53 Updated by cmbruns about 11 years ago

(merged with previous comment)

#54 Updated by cmbruns about 11 years ago

To be able to adjust the hud item placement in Rift mode, it might be useful to be able to independently adjust the screen view transforms for
  1. The 3D stuff (monsters, hallways, sky, powerups, etc.)
  2. The weapon sprite (which should appear closer to the player than all other HUD chrome)
  3. Other HUD chrome, including menus, status, crosshair(?), and taskbar
  4. (maybe crosshair should be separable too)
Thus far, I have been using root().draw() as my entry point to rendering one eye's view. Could I independently invoke, say, weaponSprite.draw(), monstersAndHallways.draw(), and nonWeaponHUDChrome.draw()?

#55 Updated by skyjake about 11 years ago

cmbruns wrote:

[...remaining tasks...]

I can create a continuation task for the more forward-looking / non-Rift items.

Thus far, I have been using root().draw() as my entry point to rendering one eye's view. Could I independently invoke, say, weaponSprite.draw(), monstersAndHallways.draw(), and nonWeaponHUDChrome.draw()?

Unfortunately that is not possible. All the game graphics are drawn under LegacyWidget — maybe at some point in the future we'll get around to cleaning up and OO-ifying also this section of the code. Currently it just makes a call to the game plugin via gx.DrawWindow on legacywidget.cpp:105.

The weapon sprite is drawn in R_RenderPlayerView() (r_main.cpp in client), while the game menus, status bar, and crosshair are drawn on game-side (libcommon's Hu_Drawer()).

It might be a good idea to create a few cvars for the scaling factors and then modify the projection matrices used in the relevant locations using those cvars, as we're dealing with both engine and game side drawing.

#56 Updated by skyjake about 11 years ago

I have now added some new player controls and implemented them (roughly) on game-side. Check the commit notes for details.

Now the view angles from the Rift should be posted as events in DD_ReadHeadTracker(). In this branch, the input system works at 35 Hz, so the end result is likely a bit choppy, as the head tracker axes have been configured to be unfiltered.

I have another branch, "low-latency-input", where I've been looking into true, low-latency controls that can be updated for every frame. However, since they will cause gameplay-related changes, I'll need to work on it more before it is useful to us. I don't expect to finish this in the near future, so we should instead proceed with the late-scheduled angle updates.

#57 Updated by skyjake about 11 years ago

  • Target version set to 1.13

Scheduling this for 1.13. Anything that cannot make it in time for a mid-December release should be pushed to a continuation task.

#58 Updated by cmbruns about 11 years ago

skyjake wrote:

I have now added some new player controls and implemented them (roughly) on game-side. Check the commit notes for details.

Now the view angles from the Rift should be posted as events in DD_ReadHeadTracker().

I started adding Pitch control. Even if I set ev.axis.type = EAXIS_ABSOLUTE in DD_ReadHeadTracker(), it seems that the pitch value I specify is treated as a relative offset. So I think I need to query the current pitch angle somehow and compute a relative offset, analogous to how I treated Pitch in gzdoom.

One thing I miss from gzdoom is when I fire the weapon at a wall, a bullet hole decal is placed where the bullet hit. This is very useful for ensuring that head tracking is working correctly. Is there a similar feature I can activate in Doomsday?

I see some comments related to head vs. body yaw. Is this related to the future feature of independent head and weapon tracking? I want to stress that independent view vs. aim is an advanced complex feature that should be optional, and should be completely implemented only after standard view-to-aim behavior is working flawlessly. For example, a true 3D projected crosshair would probably be required as part of that feature. Even the Team Fortress 2 developers are still tweaking the various heuristics required to pull off such a feature.

#59 Updated by skyjake about 11 years ago

cmbruns wrote:

I started adding Pitch control. Even if I set ev.axis.type = EAXIS_ABSOLUTE in DD_ReadHeadTracker(), it seems that the pitch value I specify is treated as a relative offset. So I think I need to query the current pitch angle somehow and compute a relative offset, analogous to how I treated Pitch in gzdoom.

I specifically added a new control you can bind to for controlling the pitch as an absolute angle:

bindcontrol lookpitch head-pitch

In contrast, the "look" control is always interpreted as a delta (to be used with mouse/joystick/keys). If "lookpitch" has a binding, it will override whatever is bound to "look".

One thing I miss from gzdoom is when I fire the weapon at a wall, a bullet hole decal is placed where the bullet hit. This is very useful for ensuring that head tracking is working correctly. Is there a similar feature I can activate in Doomsday?

Unfortunately there isn't...

I see some comments related to head vs. body yaw. Is this related to the future feature of independent head and weapon tracking? I want to stress that independent view vs. aim is an advanced complex feature that should be optional, and should be completely implemented only after standard view-to-aim behavior is working flawlessly. For example, a true 3D projected crosshair would probably be required as part of that feature. Even the Team Fortress 2 developers are still tweaking the various heuristics required to pull off such a feature.

What is there is quite straightforward: the "yawbody" control adds an offset to the player's (gun's) angle. It works independently of the regular yaw controlled with mouse/keys/etc. The "yawhead" instead turns the camera viewpoint while keeping gun direction the same in world coordinates. This is actually using code that has been in Doomsday for ages.

When it comes to gameplay, though, there is absolutely no subtlety to any of this; it's intended as a starting point so that we can have a meaningful reaction to the angles produced by the Rift.

#60 Updated by skyjake about 11 years ago

A further note about "lookpitch". If the control's position is 1.0, it means the player's actual look pitch is 85 degrees, as that's the maximum we allow.

With "yawhead" and "yawbody", 1.0 means 180 degrees.

#61 Updated by cmbruns about 11 years ago

skyjake wrote:

The "yawhead" instead turns the camera viewpoint while keeping gun direction the same in world coordinates.

So I should be using yawhead while implementing view-to-aim, right?

A further note about "lookpitch". If the control's position is 1.0, it means the player's actual look pitch is 85 degrees, as that's the maximum we allow.

That's good to know. The Oculus Rift experience encourages occasionally looking straight up (pitch 90 degrees), during interludes of architectural exploration. It might be possible to accommodate this entirely within late-scheduling angle updates. At these extreme pitch angles, the yaw angle can rapidly whip back and forth by 180 degrees, though the user experience should be smooth and natural. Hopefully I will soon be ready to investigate such issues. There is no gameplay reason to shoot straight up or down.

Does the pitch angle affect where bullets go, and which enemies are hit? I know it has an effect in zdoom, and I have been assuming it has an effect in Doomsday too.

#62 Updated by skyjake about 11 years ago

cmbruns wrote:

So I should be using yawhead while implementing view-to-aim, right?

If you want to shoot where the camera is pointing at, you need to use yawbody.

With yawhead, shooting direction is unaffected by turning the Rift.

Although, having never used Oculus Rift, I'm unsure exactly what you're trying to do: do we need to e.g. point the gun and shoot at a different direction than is used for walking?

Does the pitch angle affect where bullets go, and which enemies are hit? I know it has an effect in zdoom, and I have been assuming it has an effect in Doomsday too.

The pitch angle does affect where bullets go, yes.

#63 Updated by cmbruns about 11 years ago

skyjake wrote:

If you want to shoot where the camera is pointing at, you need to use yawbody.

With yawhead, shooting direction is unaffected by turning the Rift.

I'm trying to point the gun and shoot in the same direction used for walking. Gun direction is walk direction is camera direction, just like in non-Rift modes. I need to use yawbody for this. Sorry about the confusion. I was misunderstanding the phrase "while keeping the gun the same". I understand now. Thanks for clearing that up!

#64 Updated by cmbruns about 11 years ago

skyjake wrote:

With "yawhead" and "yawbody", 1.0 means 180 degrees.

So I'll assume that if roll ever gets used, 1.0 will mean 180 degrees there too.

#65 Updated by cmbruns about 11 years ago

skyjake wrote:

bindcontrol lookpitch head-pitch

OK. I got pitch working; and yes, it is jerky. Hopefully late scheduling will clear that up.

But when I try "bindcontrol yawbody head-yaw" I see a small brief displacement to yaw that quickly returns to its previous position. I think some other control may be forcing it back to an absolute position. Never mind. It's working now.

cmbruns wrote:

One thing I miss from gzdoom is when I fire the weapon at a wall, a bullet hole decal is placed where the bullet hit.

I now see there is a brief flash followed by a puff of smoke. This should be enough for me to use.

#66 Updated by cmbruns about 11 years ago

skyjake wrote:

When it comes to minimizing latency, how is vsync treated? Should it be disabled?

I've read that vsync should be enabled with the Oculus Rift.
Now I'm thinking that maybe the estimated latency for prediction should depend on how much time remains before the next screen refresh.

#67 Updated by skyjake about 11 years ago

  • Parent task deleted (#7)

#68 Updated by cmbruns about 11 years ago

It turns out a lot of the jerkiness I was seeing was due to mirroring the Rift display on another monitor.

I just tried it with the Rift as its own display (which is much more tedious to use) and it looks great. Head tracking is really smooth. Almost perfect, in fact. I don't think I have seen any better. This has put me in a very good mood about this project.

One glitch is that the late-scheduled yaw update is not perfect. Is it possible to turn off interpolation of the view angle/yaw? That might help.

I'm setting the screen resolution to 1280x800@60Hz, the native resolution of the Oculus Rift, which I suspect helps reduce latency. But I see I could improve the apparent resolution at the view center by using a much larger offscreen Oculus Rift texture. But I suspect using an offscreen target that is a different size than the real screen will probably require some tedious debugging.

Finally, is it possible to specify which screen Doomsday will show up on? With my two desktop monitors and the Oculus Rift, I've got three screens here. I'm having to disable one of the desktop monitors to make Doomsday show up on the Rift.

#69 Updated by skyjake about 11 years ago

cmbruns wrote:

Head tracking is really smooth. Almost perfect in fact. I don't think I have seen any better. This has put me in a very good mood about this project.

Excellent. :)

Is it possible to turn off interpolation of the view angle/yaw? That might help.

I don't think there is a way to do that in the oculus-rift branch. I added a cvar for removing camera interpolation in the low-latency-input branch, however I'm not sure if it's done in a manner you'd find helpful (in practice, R_UpdateViewer is always using the "sharp" view position and angles).

But I see I could improve the apparent resolution at the view center by using a much larger offscreen Oculus Rift texture. But I suspect using an offscreen target that is a different size than the real screen will probably require some tedious debugging.

It might actually work correctly with the current implementation. For the blur effect I'm rendering the UI into a target that is smaller than the view; it should work just as well for a larger target. It's simply a matter of scaling via the GL viewport.

Finally, is it possible to specify which screen Doomsday will show up on? With my two desktop monitors and the Oculus Rift, I've got three screens here. I'm having to disable one of the desktop monitors to make Doomsday show up on the Rift.

Unfortunately there is no option for specifying the monitor/desktop to use. Doomsday should save the window position at shutdown, however it probably only works in windowed mode.

Maybe you could hack it like this: http://stackoverflow.com/questions/3203095/display-window-full-screen-on-secondary-monitor-using-qt

#70 Updated by cmbruns about 11 years ago

skyjake wrote:

[non-interpolated yaw]

I'm currently interrogating viewData->latest.angle (instead of viewData->current.angle) while trying to cleverly late-schedule yaw update, under the assumption that "latest" is the non-interpolated value. In any case, I think it's looking better. Either better, or I am getting used to the small yaw glitches.

[Larger offscreen texture] might actually work correctly with the current implementation.

You're right! At least in-game it seems to be working well. I'm hard coding it to 2560x1600 (2x2 * Rift size), which is probably about right, regardless of the OS apparent screen size.

[Choice of screen]

I worked around it by entering windowed mode, resizing to tiny size, dragging the window onto the Rift screen, and then maximizing. Now it is coming up on the Rift each time.

#71 Updated by cmbruns about 11 years ago

As of today, 3D gameplay is playable and working well enough in the Rift. This is excellent. I can't wait to try this in full jHexen mode, with 3D models and all.

The biggest obstacle to fun uninterrupted play is the non-3D HUD items. An important goal should be to enable effective HUD use from within the Rift. I have hardly begun to investigate how to resolve these issues. Current problems are as follows:
  • The menus appear too large in the Rift, though they are (barely) usable
  • The taskbar cannot be seen at all. I press apostrophe and type what I imagine to be in the taskbar console. This is tedious in the extreme.
  • The weapon sprite is way way too low. It's as if I'm shooting from my ... lower extremities.
  • I cannot see my health/ammo/armor/current weapon; these items are displayed too close to the edge of the screen.
  • When the player activates/deactivates the status HUD (+/- keys), the OpenGL viewport for 3D viewing changes. In Rift Mode, the 3D scene content must not move like this under any circumstances.
  • The crosshair is too big by about 2.5X, and is at infinite distance. It should be smaller and appear at a finite distance, say 2-5 meters. This apparent distance effect can be accomplished by subtly adjusting the HUD item placement in each eye view to the left or right by a small amount.

In gzdoom I addressed these issues by rendering all of the HUD items within a smaller quad toward the center of the main view. Plus I applied a small left/right offset to the hud items, with a separate offset applied to the weapon sprite vs. other HUD items.

Perhaps this could be accomplished by rendering the HUD items (taskbar, menus, weapon sprite, status) to yet another offscreen render buffer, and then copy that render buffer into the main scene, including the desired scale and offset. Could something like that fit into the current architecture?
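The apparent-distance adjustment mentioned above (nudging each eye's copy of a HUD item left or right) can be sketched as follows. This is a geometric illustration only, with hypothetical names, assuming a pinhole eye model with the projection plane at the Rift screen distance:

```cpp
#include <cassert>

// Horizontal shift (in world units on the projection plane) to apply to a
// HUD element in one eye's view so it appears at apparentDist meters.
// eyeOffset is this eye's signed offset from the view center (-ipd/2 for
// the left eye, +ipd/2 for the right); screenDist is the eye-to-projection-
// plane distance. Names are illustrative, not Doomsday variables.
float hudParallaxShift(float eyeOffset, float screenDist, float apparentDist)
{
    // A point straight ahead at apparentDist projects at -eyeOffset * s/d
    // relative to this eye's view center; shifting the flat HUD by that
    // amount moves it from "infinity" to the desired apparent distance.
    return -eyeOffset * (screenDist / apparentDist);
}
```

Note that each eye shifts inward (left eye right, right eye left), and the shift shrinks toward zero as the apparent distance grows.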

#72 Updated by danij about 11 years ago

cmbruns wrote:

  • When the player activates/deactivates the status HUD (+/- keys), the OpenGL viewport for 3D viewing changes. In Rift Mode, the 3D scene content must not move like this under any circumstances.

Why? Logically I would expect that this works in precisely the same way in Rift mode. A Rift mode user would simply not use it (I see no good reason to forcibly disable this behavior specifically).

Your other points sound reasonable enough but this one seems unnecessary.

#73 Updated by skyjake about 11 years ago

cmbruns wrote:

The taskbar cannot be seen at all. I press apostrophe and type what I imagine to be in the taskbar console. This is tedious in the extreme.

I'm looking into this now. Earlier I tried a couple of simple tweaks but they didn't really pan out...

#74 Updated by cmbruns about 11 years ago

danij wrote:

cmbruns wrote:

  • When the player activates/deactivates the status HUD (+/- keys), the OpenGL viewport for 3D viewing changes. In Rift Mode, the 3D scene content must not move like this under any circumstances.

Why? Logically I would expect that this works in precisely the same way in Rift mode. A Rift mode user would simply not use it (I see no good reason to forcibly disable this behavior specifically).

The arguably useful case is where the player shows or hides the HUD display with the Doom guy face on it. In non-Rift mode, this causes the 3D viewport size to change. In Rift mode, this act would ideally either show or hide the face-containing HUD over an otherwise unchanged 3D view. See the screen shot in comment 24 http://tracker.skyjake.fi/issues/1636#note-24 to help imagine what I am describing.

What happens if the user keeps pressing "-" after the Doom guy hud shows? You and I may have a philosophical difference here. If the view shrinks, and causes the Rift optics to no longer be correct, you might say "Stupid user; don't activate incompatible settings!". Another perspective would be to say "Stupid programmer, you allowed the user to enter an inconsistent state!". In gz3doom I specifically disabled the effect of minus-key past the doom-guy-face-hud mode. So I'm suggesting we might choose to take the same approach in Doomsday.

#75 Updated by cmbruns about 11 years ago

skyjake wrote:

cmbruns wrote:

The taskbar cannot be seen at all. I press apostrophe and type what I imagine to be in the taskbar console. This is tedious in the extreme.

I'm looking into this now. Earlier I tried a couple of simple tweaks but they didn't really pan out...

I'd like to follow up on the suggestion of using an offscreen buffer. There is a peculiar GUI inversion when using the Oculus Rift: In other modes, the 3D view is one of several GUI items that must share space on the screen. In the Rift, GUI items are necessarily inserted into the 3D view. One way to achieve this would be to render everything except the 3D view into an offscreen render buffer, then composite that offscreen texture into the 3D view. This would require hacking the legacy renderer for Rift mode, but would not require separate hacks to the menus, taskbar, status area, etc. This is the approach I took with gz3doom. (Well, OK, there would be a specific hack for the weapon sprite.)

#76 Updated by cmbruns about 11 years ago

Potential uses of offscreen render buffers:
  • Rift warping like we already have.
  • HUD rendering, to aid composition of HUD items with 3D view in Rift mode
  • Interleaved stereo modes, just like Rift warping, but with different buffer size, shader, and initial scene composition. With a bit of API extension, the same buffer could be reused for rift warping, row interleaving, column interleaving, or checker interlacing, since they would not be used simultaneously.
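Those interleaved modes would differ only in how the final shader picks an eye per output pixel; a minimal sketch of that per-pixel selection (the type and function names are illustrative, not engine identifiers):

```cpp
#include <cassert>

enum class Eye { Left, Right };

// Which eye's image a given screen pixel samples in each interleaved mode.
// This is the selection an interleaving shader would make per fragment;
// only the parity test changes between the three modes.
Eye rowInterleaved(int /*x*/, int y)    { return (y & 1) ? Eye::Right : Eye::Left; }
Eye columnInterleaved(int x, int /*y*/) { return (x & 1) ? Eye::Right : Eye::Left; }
Eye checkerInterleaved(int x, int y)    { return ((x + y) & 1) ? Eye::Right : Eye::Left; }
```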

#77 Updated by cmbruns about 11 years ago

By the way, the size of the off-screen HUD buffer for Rift mode would be 512x384 (i.e. really small). The font choice and layout for the taskbar console and other items would need to fit in this area. Even if we don't use an off-screen HUD buffer, this is the area (512x384) we would be rendering everything non-3D into.

EDIT: Doubled the size from 256x192 because I forgot about the double-size offscreen buffer.

#78 Updated by cmbruns about 11 years ago

I created a brief video of Rift gameplay
https://www.youtube.com/watch?v=sgCRDGfO448

#79 Updated by danij about 11 years ago

cmbruns wrote:

danij wrote:

cmbruns wrote:

  • When the player activates/deactivates the status HUD (+/- keys), the OpenGL viewport for 3D viewing changes. In Rift Mode, the 3D scene content must not move like this under any circumstances.

Why? Logically I would expect that this works in precisely the same way in Rift mode.

What happens if the user keeps pressing "-" after the Doom guy hud shows? You and I may have a philosophical difference here.

Indeed we do have a philosophical difference here. My perspective is that attempting to enforce settings in a certain configuration, when the end user is quite capable of determining for themselves what is incompatible/unsuitable, is not only extra development effort but, more importantly, it's unintuitive behavior. Where do we draw the line here? I fundamentally disagree with the concept of logical "usage modes" tailored to a specific display/input device which change/override unrelated user configuration settings. At most there should be a console command/script that effects all "Rift mode" changes at once; we should not enforce them in code.

Furthermore, I don't see why the window border can't work in Rift mode in the same way it usually does.

#80 Updated by cmbruns about 11 years ago

Indeed we do have a philosophical difference here.

Fair enough. This is your house. I accept your judgement, especially now that you seem to understand my argument. Thank you for listening.

Furthermore, I don't see why the window border can't work in Rift mode in the same way it usually does.

Do you mean the green window border that decreases the 3D view size? If so, I suppose that could be made to work correctly by adjusting the angular field of view accordingly at the same time.

If by "window border" you are including the taskbar and status HUD, those are invisible in the Oculus Rift, and therefore useless. Which interferes with game play.

By the way, it might be possible to arrange an Oculus Rift demo from a developer in your area:
https://maps.google.com/maps/ms?msa=0&msid=216080568301080371629.0004d6d217a93e60f7cc6

#81 Updated by danij about 11 years ago

cmbruns wrote:

Furthermore, I don't see why the window border can't work in Rift mode in the same way it usually does.

Do you mean the green window border that decreases the 3D view size? If so, I suppose that could be made to work correctly by adjusting the angular field of view accordingly at the same time.

That, yes. I suspect the reason this is so jarring in the Rift is that the change from fullscreen viewport to bordered viewport is not smoothly animated. My thinking is that if the change was animated smoothly and the field of view responded accordingly, as you say, it should be compatible with Rift mode also.
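For the field of view responding to the shrinking viewport, one plausible relationship is to keep the angular size of a pixel constant as the bordered view narrows. A sketch under that assumption, with hypothetical names (the engine's actual FOV handling may differ):

```cpp
#include <cassert>
#include <cmath>

// If the 3D viewport shrinks from fullWidth to borderedWidth pixels while
// the Rift optics still map the full screen to a fixed angular field, the
// horizontal FOV (radians) used for rendering the smaller viewport should
// shrink so that angular pixel density stays constant. Working on the
// tangent keeps the perspective projection consistent.
double borderedFov(double fullFov, double borderedWidth, double fullWidth)
{
    return 2.0 * std::atan(std::tan(fullFov / 2.0) * (borderedWidth / fullWidth));
}
```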

By the way, it might be possible to arrange an Oculus Rift demo from a developer in your area:
https://maps.google.com/maps/ms?msa=0&msid=216080568301080371629.0004d6d217a93e60f7cc6

I'll bear it in mind, thanks.

#82 Updated by skyjake about 11 years ago

offscreen render buffers

I agree this would be useful. After taking a closer look I concluded that the only reasonable way to reposition the 2D UI would indeed be via offscreen buffers -- otherwise there would be plenty of problems trying to tackle all the viewports and scissors needed by the 2D widgets. An offscreen buffer allows sidestepping all of that.

I added a new type of widget that draws all its children on a GL texture, and then draws the texture back to the screen. My progress has since been hindered by having to debug this, as the GL drawing seems to be going slightly wrong in certain cases. I'm sure I'll find the root cause soon enough, though...

#83 Updated by cmbruns about 11 years ago

skyjake wrote:

offscreen render buffers

I agree this would be useful...

One advantage of such an offscreen buffer I might not have mentioned recently, is that it could be used (eventually) to paint a proper in-world 3D status quad, which could respond to pitch/roll changes immersively.

#84 Updated by cmbruns about 11 years ago

danij wrote:

I suspect the reason this is so jarring in the Rift is because the change from fullscreen viewport to bordered viewport is not smoothly animated...

I don't think the bordered viewport is jarring actually, unless the player continuously adjusts it. I deactivated the bordered viewport in gz3doom because I am too lazy to implement bordered viewport properly for the Rift (with adjustment of field-of-view).

Speaking of field-of-view, I'm currently setting the field-of-view to 110 degrees when the player activates Rift mode. And it probably stays 110 degrees forever after that, unless the user changes it. This field-of-view will eventually be set automatically from the Rift SDK too. I need to remember to find a way to revert to the non-Rift field of view when leaving Rift mode. I began by adding a rend-vr-rift-fovx CVAR, but found I needed to update the internal field-of-view value to get proper software culling.

#85 Updated by danij about 11 years ago

cmbruns wrote:

Speaking of field-of-view, I'm currently setting the field-of-view to 110 degrees when the player activates Rift mode. And it probably stays 110 degrees forever after that, unless the user changes it. This field-of-view will eventually be set automatically from the Rift SDK too. I need to remember to find a way to revert to the non-Rift field of view when leaving Rift mode. I began by adding a rend-vr-rift-fovx CVAR, but found I needed to update the internal field-of-view value to get proper software culling.

Maybe I'm being naive but this seems wrong to me on so many levels. Why should we allow the Rift SDK to override the field-of-view? In my opinion the FOV is something which the application should have complete control over and it is the app which should decide how much (if any) control the user has to change it. Why can't the Rift mode user simply adjust the FOV via existing in-engine mechanism for this?

#86 Updated by skyjake about 11 years ago

danij wrote:

Why should we allow the Rift SDK to override the field-of-view?

I believe the Rift should determine the FOV because that way the end result is a natural-looking 3D view, as one would see things through human eyes. A fish eye / telescope view would destroy the suspension of disbelief. cmbruns probably can elaborate.

#87 Updated by danij about 11 years ago

I agree with the intention. However I disagree with the method. Field-of-view in a cinematic sense is a matter of artistic vision. Why should all applications be forced to use the "One True" FOV in all situations?

One should also consider that doing so completely opposes the idea of respecting the original games' FOV. Presently in Doomsday in non-Rift mode we aim to recreate a similar perspective regardless of the resolution and/or dimensions of the display mode/window.

Are we saying that VR is so exceptional that by default we shouldn't honour the FOV of the original game(s) and that in VR, there is no place for artistic manipulation of the FOV?

#88 Updated by skyjake about 11 years ago

danij wrote:

Are we saying that VR is so exceptional that by default we shouldn't honour the FOV of the original game(s)

IMO, absolutely yes in the case of Oculus Rift, where the goal is full immersion.

In VR generally (anaglyph, for instance), there is more room to adjust for intended effect.

In VR, there is no place for artistic manipulation of the FOV?

There should be, but I believe it should be done via a relative zoom factor rather than an absolute FOV angle. That way one can do binoculars etc. but still rest assured that the 1.0 zoom is the correct FOV for the VR method in question.
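One way such a relative zoom factor could be applied is on the tangent of the half-angle, so that zoom 1.0 reproduces the device-mandated FOV exactly and larger zooms narrow the view for binocular effects. A sketch with hypothetical names, not a description of how Doomsday implements it:

```cpp
#include <cassert>
#include <cmath>

// deviceFov is the correct FOV (radians) for the VR method in question;
// zoom is the artistic factor layered on top. Zoom 1.0 is a no-op, so the
// "rest assured" baseline is preserved; zoom > 1.0 narrows the view.
double zoomedFov(double deviceFov, double zoom)
{
    return 2.0 * std::atan(std::tan(deviceFov / 2.0) / zoom);
}
```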

#89 Updated by danij about 11 years ago

skyjake wrote:

danij wrote:

Are we saying that VR is so exceptional that by default we shouldn't honour the FOV of the original game(s)

IMO, absolutely yes in the case of Oculus Rift, where the goal is full immersion.

In that case this should be implemented fully, so that when the user explicitly enables VR mode from the display modes UI, the other options (e.g., FOV) are clearly shown as disabled rather than quietly overridden.

danij wrote:

In VR, there is no place for artistic manipulation of the FOV?

There should be, but I believe it should be done via a relative zoom factor rather than an absolute FOV angle. That way one can do binoculars etc. but still rest assured that the 1.0 zoom is the correct FOV for the VR method in question.

Precisely my point. If the FOV provided by the Oculus SDK is not treated as an absolute, then what benefit is there in accepting it from the SDK rather than calculating it ourselves? I'm not familiar with the SDK but this would suggest that the Rift needs a specially configured FOV which is only partly determined by the user-specific interpupillary distance. In which case, any offset we wish to apply for artistic manipulation would also need to factor in these values.

#90 Updated by skyjake about 11 years ago

offscreen render buffers

I've got this mostly working. The situation in the oculus-rift branch is that I've hardcoded an offscreen target for the overlaid UI elements. In a final solution this offscreen target would only be necessary in VR modes. I'm still not doing anything fancy with the resulting texture because I'm figuring out how to alpha-blend it correctly, so that it'll look exactly the same with or without the offscreen buffer in between.

#91 Updated by cmbruns about 11 years ago

danij wrote:

If the FOV provided by the Oculus SDK is not treated as an absolute, then what benefit is there in accepting it from the SDK rather than calculating it ourselves?

There is definitely one single unique mathematically-correct value for the field-of-view in a particular Rift configuration. Deviating from this correct value would represent a big red hairy high priority bug, with side effects of nausea, confusion, and headache. Thus in Rift mode, there is no room for artistic choice of field-of-view. That said, I consider the fact that I have not been retaining/restoring the user-chosen field-of-view for non-Rift modes to be a bug. I raised this issue in part because I want to remember this omission.

This is a complex issue. I'm pleased danij that you are encouraging us all to reason carefully about how to handle these questions. To help move this discussion forward, I'd like to mention a few reasons why I chose to add a CVAR for rend-vr-rift-fov
  1. FOV is not directly reported by the Oculus Rift SDK. A properly configured Rift can tell you the interpupillary distance, the distance between eye and screen, and the dimensions of the screen. From these values, the computation of field-of-view is a moderately straightforward geometry problem. But I've been too lazy to implement this computation, electing instead to focus on even simpler aspects of the implementation. In fact I have not even started reading interpupillary distance from the Rift SDK. Plus I still have some questions about when and how to apply the SDK-supplied parameters.
  2. The Oculus Rift might not be properly calibrated, in which case I want to provide the player with the ability to manually adjust the field-of-view to the correct value. During this project I have developed a technique to manually estimate the correct value to within a few degrees. This capability can be useful, for example, if a group of players are sequentially viewing a demonstration in the same Rift device.
  3. I want players who do not have a Rift to be able to view scenes, and to create screenshots and animations that, to a first approximation, correctly represent the images created for a real Rift. For example, my Quality Assurance tester, who does not have a Rift, can create videos in Rift mode that I can then view in VR for debugging purposes.
I'm inclined to consider this field-of-view issue resolved, once I have:
  • Created a mechanism for maintaining or restoring the non-Rift-mode field of view setting
  • Provided a CVAR for whether or not to automatically load parameters from the Rift SDK
  • Settled on sensible default values for interpupillary distance and Rift-FOV
  • Actually implemented reading and storing the parameters from the Rift SDK

It sounds like there is a GUI dialog somewhere that needs to have certain values grayed out in Rift mode too?
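As a first approximation, the geometry problem from point 1 reduces to the angle subtended by the screen at the eye. A sketch that deliberately ignores lens magnification and distortion (which widen the real angular field, so treat the result as a rough lower bound); the parameter names are hypothetical, not SDK identifiers:

```cpp
#include <cassert>
#include <cmath>

// First-approximation horizontal FOV (radians) from physical parameters the
// Rift can report: the screen width and the eye-to-screen distance, both in
// meters. Half the screen width over the eye distance gives the tangent of
// the half-angle.
double riftFovApprox(double hScreenSize, double eyeToScreenDist)
{
    return 2.0 * std::atan((hScreenSize / 2.0) / eyeToScreenDist);
}
```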

#92 Updated by cmbruns about 11 years ago

skyjake wrote:

offscreen render buffers

I've got [offscreen render buffers] mostly working.

That's great news. Once we can paint a scaled, centered, slightly displaced UI onto the Rift texture, this implementation should start being really playable. We might be just days away from the moment when Doomsday in Rift becomes better than GZ3Doom in Rift. I am gratified and humbled that you both have given me the support and attention to help make this implementation really great. I have admired your work for years and years; I'm so pleased you have welcomed me to contribute to this one small aspect of the wonderful game engine you have created and improved for so long. Thank you.

That said, there are still about a hundred more little details we need to nail to make Oculus Rift support just so.

#93 Updated by danij about 11 years ago

Thanks for the detailed explanation of which values can/should be obtained from the SDK. The fact that an effective FOV must still be calculated on the application side according to parameters provided by the Oculus SDK changes things significantly. Hitherto it had been my impression (from the information presented during this discussion) that an absolute FOV was provided, and it was this that I took exception to (as apparently the wrong domain/out of scope, etc.).

With the FOV no longer being calculated directly from a single value provided by the user, in VR mode, I believe it is even more important to make this calculation transparent from the end user's perspective. Ideally I would like to see this represented in the video UI (in the task bar) directly -- when a VR mode is enabled the controls presented in this dialog/menu should change, disabling and/or presenting values acquired from the Oculus SDK as applicable. This would be the place to present any options for overriding/fine-tuning such values.

#94 Updated by skyjake about 11 years ago

  • % Done changed from 40 to 70

cmbruns wrote:

We should think about which tasks need to be accomplished to close this issue (#1636), and which should spawn their own issues, to be resolved later.

I have now created a few more issues to split this down (comment 52 updated).

To close #1636 I think we'll still need to address:

  • Appropriate 2D UI projection (in #1655, #1654)
  • Play test and tweak all stereo 3D modes, especially Rift mode
  • Convert many hard-coded warp shader parameters to uniform variables
  • Set warp shader parameters using values read using Oculus SDK
  • Test/tweak Rift mode on Linux and Mac (I'm developing on Windows at the moment)

For the FOV there is issue #1657.

#95 Updated by skyjake about 11 years ago

  • Tags changed from Renderer, Input, Graphics, 3DModel to Renderer, Input, Graphics, 3DModel, VR

#96 Updated by cmbruns about 11 years ago

Minor sidetrack with good news for owners of nvidia 3D vision systems:

On a whim I requested a stereo OpenGL context in clientwindow.cpp, and now to my surprise Nvidia 3D vision WORKS!!! (rend-vr-mode 13). The special thing here is that it works on a consumer-level video card (i.e. non-Quadro), though in full-screen mode only. There has been discussion that this is recently possible with OpenGL games (I wanted to insert a link, but the mtbs3d site seems to be down at the moment). I have uttered many a bitter soliloquy since 2008 about Nvidia's antipathy toward OpenGL stereo on consumer cards. So now Doomsday is in the queue to be one of the first OpenGL games (after Doom 3 BFG) to take advantage of Nvidia's recent undocumented relenting after five years of shame. I am so pleased.

I want to play-test this 3D vision system a bit. Can someone please remind me how to get mouse-look reactivated, now that I bound pitch control to the Oculus Rift? How do I revert from what happened when I typed "bindcontrol lookpitch head-pitch"?

#97 Updated by danij about 11 years ago

cmbruns wrote:

Minor sidetrack with good news for owners of nvidia 3D vision systems:

On a whim I requested a stereo OpenGL context in clientwindow.cpp, and now to my surprise Nvidia 3D vision WORKS!!! ...So now Doomsday is in the queue to be one of the first OpenGL games (after Doom 3 BFG) to take advantage of Nvidia's recent undocumented relenting after five years of shame. I am so pleased.

Very cool.

I want to play-test this 3D vision system a bit. Can someone please remind me how to get mouse-look reactivated, now that I bound pitch control to the Oculus Rift? How do I revert from what happened when I typed "bindcontrol lookpitch head-pitch"?

One way is to open the console, do a "listbindings" to find the unique identifier of the binding in question, then do a "delbind uid", then simply "restart" the game (Doomsday should automatically recreate the default binding).

Failing that "clearbindings;defaultbindings" should do the trick but naturally this will clear all your existing bindings.

#98 Updated by cmbruns about 11 years ago

skyjake wrote:

To close #1636 I think we'll still need to address:

  • Appropriate 2D UI projection (in #1655, #1654)
  • Play test and tweak all stereo 3D modes, especially Rift mode

Rift weapon and crosshair still need work. Rift judder in yaw is starting to really irritate me again. I probably need to tweak the size and placement of the HUD.

  • Convert many hard-coded warp shader parameters to uniform variables

Done. https://github.com/cmbruns/Doomsday-Engine/commit/8cb79fa30bb5909d85ccd4ef156353444ed33663

  • Set warp shader parameters using values read using Oculus SDK

Done. https://github.com/cmbruns/Doomsday-Engine/commit/7bfd9dd3efc0cb5d27be1f15c4ff95699bd10519

  • Test/tweak Rift mode on Linux and Mac (I'm developing on Windows at the moment)

I've started to set up a build environment on Mac, but that's all I've done so far.

For the FOV there is issue #1657.

My recent FOV updates may be good enough for the time being.

#99 Updated by cmbruns almost 11 years ago

One compiler error I get on Linux (Ubuntu 12.04):

libgui/src/glstate.cpp:188
glBlendFuncSeparate was not declared in this scope.

If I replace the call with the commented out glBlendFunc() version, it's OK. What am I missing?

#100 Updated by skyjake almost 11 years ago

cmbruns wrote:

If I replace the call with the commented out glBlendFunc() version, it's OK. What am I missing?

I will sort this out. The problem is that like on Windows, the system OpenGL headers only provide functionality up to OpenGL version 1.x. For later versions, one has to query function entrypoints manually.

(glBlendFuncSeparate() is being used so that the alpha channel of the composited UI layer is drawn appropriately.)
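The reason glBlendFuncSeparate matters for the composited UI can be seen by simulating the blend arithmetic on the CPU. This sketch assumes the factors (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) for color and (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) for alpha, a common choice for building a UI layer in an offscreen texture; the comments above don't state the exact factors Doomsday uses:

```cpp
#include <cassert>

struct RGBA { float r, g, b, a; };

// Simulates glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
//                               GL_ONE,       GL_ONE_MINUS_SRC_ALPHA):
// standard "over" blending for the color channels, while destination alpha
// accumulates coverage (src.a + dst.a * (1 - src.a)) instead of being
// multiplied by the source alpha a second time. That accumulated alpha is
// what lets the offscreen UI texture composite correctly over the 3D view.
RGBA blendSeparate(RGBA src, RGBA dst)
{
    float ia = 1.0f - src.a;
    return { src.r * src.a + dst.r * ia,
             src.g * src.a + dst.g * ia,
             src.b * src.a + dst.b * ia,
             src.a * 1.0f  + dst.a * ia };
}
```

With plain glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) on all four channels, drawing a half-transparent texel onto an empty (alpha 0) buffer would leave alpha 0.25 instead of the expected 0.5.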

#101 Updated by cmbruns almost 11 years ago

skyjake wrote:

I will sort this out.

Thank you for fixing this. Now I'm getting a seg fault when I try to leave rift mode (e.g. "rend-vr-mode 9", followed by "rend-vr-mode 0"), on Linux and Windows. The fault occurs in the "->" operator of the following DENG2_FOR_EACH line, near line 376 of libdeng2/src/widgets/widget.cpp:

Widget::NotifyArgs::Result Widget::notifyTree(NotifyArgs const &args)
{
    NotifyArgs::Result result = NotifyArgs::Continue;

    bool preNotified = false;

    DENG2_FOR_EACH(Instance::Children, i, d->children)
    {
        if(*i == args.until)
        {

Also, I'm unable to post to this tracker from Ubuntu Firefox. The "quote" button either does nothing, or leads to a page-not-found error. Any suggestions?

#102 Updated by skyjake almost 11 years ago

cmbruns wrote:

Now I'm getting a seg fault when I try to leave rift mode (e.g. "rend-vr-mode 9", followed by "rend-vr-mode 0"), on Linux and Windows.

I'm investigating this. Something apparently goes wrong when the widget tree is modified to enable/disable the offscreen compositor.

I'm also working on a set of changes related to the busy mode window content freeze. It will work better in VR mode 9, although in all 3D modes the stereo effect will be flattened during busy mode because only one copy of the window content is kept.

#103 Updated by skyjake almost 11 years ago

I'm planning to merge the progress from "oculus-rift" into master this week. Even though the Rift support isn't fully complete yet, there are quite a few changes there that would benefit from wider availability/testing (like the UI framework improvements).

Even after this first merge, the "oculus-rift" branch would still be used for developing the Rift support further.

#104 Updated by danij almost 11 years ago

I've just been trying out the vr modes. With mode 9 I'm noticing thin gaps in the map geometry when the viewer is looking along a wall, from around a corner/in a doorway. It looks as though the angle clipper needs to be relaxed a little when in this mode.

#105 Updated by cmbruns almost 11 years ago

danij wrote:

I've just been trying out the vr modes. With mode 9 I'm noticing thin gaps in the map geometry when the viewer is looking along a wall, from around a corner/in a doorway. It looks as though the angle clipper needs to be relaxed a little when in this mode.

This is probably because in the stereo 3D modes, the left and right eye positions are offset to the left and right by about 3 cm, and the clipper does not know about it. Because this effect is really a translation and not a rotation, there is no theoretically perfect angle to relax by. So the extra angle would need to be determined empirically.

Another approach would be to make the angle clipper aware of the eye position offsets (without otherwise affecting the gun position and player position).

It's also possible that, because vr_mode 9 uses a larger field of view than other modes, the problem is a side effect of the larger field-of-view angle.
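Making the clipper aware of the eye offsets amounts to shifting the view origin sideways along the view's right vector. A geometric sketch with an assumed yaw convention (0 looks down +Y, right is +X) that may not match Doomsday's internal angle units, so treat the names and conventions as hypothetical:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Offsets the mid-eye view origin along the view's right vector to get the
// active eye's position, which is what the angle clipper (and other
// view-dependent effects) would then use. eye is -1 for left, +1 for right;
// ipd is the interpupillary distance in world units.
Vec3 eyeOrigin(Vec3 midEye, double yaw, double ipd, int eye)
{
    const double half = eye * ipd / 2.0;
    // Right vector for this yaw convention: (cos yaw, -sin yaw, 0).
    return { midEye.x + half * std::cos(yaw),
             midEye.y - half * std::sin(yaw),
             midEye.z };
}
```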

#106 Updated by cmbruns almost 11 years ago

I'm still struggling a bit with the Mac build system. The good news is that I can build the Doomsday app in Qt Creator.

The first issue I encountered is that the stock Mac Oculus SDK does not work with RTTI on Mac. So I need to learn basic use of XCode to rebuild the Oculus library with RTTI support. For the time being I have disabled Oculus Rift support for my Mac build.

Even without Rift support, I get a Doomsday runtime error dialog that says 'loadSystemFont: Failed loading font "console18"' on Mac, after the initial progress wheel has advanced to about 8 o'clock (i.e. not very far).

I have got much further with Linux testing, which looks pretty good so far.

#107 Updated by cmbruns almost 11 years ago

I just found another bug: Sequentially activating "rend-vr-mode 13", followed by "rend-vr-mode 9" results in an assertion failure in debug mode on Linux. The assertion failure occurs within the GLProgram::beginUse() method in glprogram.cpp.

I suspect that entering rend-vr-mode 13 creates a new OpenGL context, and that rend-vr-mode 9 creates yet another. The OpenGL context for which the shader program was originally created might no longer be the current one.

#108 Updated by skyjake almost 11 years ago

cmbruns wrote:

Another approach would be to make the angle clipper aware of the eye position offsets

This would likely be the best approach. In practice, it should be as easy as using the correct adjusted eye positions in rend_clip's view-relative functions.

Even without Rift support, I get a Doomsday runtime error dialog that says 'loadSystemFont: Failed loading font "console18"' on Mac, after the initial progress wheel has advanced to about 8 o'clock (i.e. not very far).

How are you launching the built app? You need to run the bundled Doomsday.app; for instance, I'm using this custom Run config:
  • Executable: %{buildDir}/client/Doomsday.app
  • Options: @~/client.rsp
  • Working directory: %{buildDir}/client

~/client.rsp contains:

-vdmap .. }Data 
-bd Doomsday.app/Contents/Resources

Furthermore, if you're using lldb, there are known issues in Qt Creator that presently cause Doomsday to fail with an error when starting a debugging run from within Qt Creator.

Sequentially activating "rend-vr-mode 13", followed by "rend-vr-mode 9" results in an assertion failure in debug mode on Linux

I'll try looking into this later.

#109 Updated by cmbruns almost 11 years ago

danij wrote:

I've just been trying out the vr modes. With mode 9 I'm noticing thin gaps in the map geometry when the viewer is looking along a wall, from around a corner/in a doorway. It looks as though the angle clipper needs to be relaxed a little when in this mode.

skyjake wrote:

cmbruns wrote:

Another approach would be to make the angle clipper aware of the eye position offsets

This would likely be the best approach. In practice, it should be as easy as using the correct adjusted eye positions in rend_clip's view-relative functions.

I went a bit further, and started updating the global variable vOrigin[], in rend_main.h, to represent the position of the currently active eye, leaving viewData->current.origin[] to represent the position between the two eyes. In non-stereo modes both would represent the same position. This change fixes the clipping problem that danij reported, and might improve the accuracy of other stereo rendering effects, such as specular highlights (are there specular highlights in doomsday?).

#110 Updated by cmbruns almost 11 years ago

Is greyscale rendering possible?

I recently got a request to implement greyscale anaglyph 3D modes in GZ3Doom. This would require rendering the scene in greyscale rather than full color. I took a crack at implementing this in Doomsday by attempting to populate and use the GL_COLOR matrix, but that approach did not seem to work.

Could greyscale rendering be accomplished using the recent enhancements to camera lens effects?

#111 Updated by cmbruns almost 11 years ago

Crosshair refinement questions

  • In Rift mode, some crosshair details are missing, presumably because the crosshair is rendered with single-pixel line width to a buffer that later gets rendered at a smaller scale. Either the crosshair needs to be rendered with a thicker line width in Rift mode, or the offscreen HUD buffer needs to be created at approximately its final rendered size.
  • Is it possible to share variables between x_hair.cpp and vr.cpp, or is this the sort of thing that needs to communicate via console variables?
  • In Rift mode, I want to adjust the crosshair depth to be closer to the player, rather than at infinity. In other 3D modes, I want to adjust the crosshair depth to be farther from the player, rather than at screen distance. In each case, I need the crosshair rendering to be sensitive to which eye view is currently being rendered. What is the right way to carry this eye offset information into the crosshair rendering code?

#112 Updated by skyjake almost 11 years ago

cmbruns wrote:

I went a bit further, and started updating the global variable vOrigin[]

That's good. vOrigin is used everywhere as the current eye position.

are there specular highlights in doomsday?

Not as such, however model/surface shininess effects might be positively affected (even though they aren't physically accurate in the old renderer).

Could greyscale rendering be accomplished using the recent enhancements to camera lens effects?

Yes. At the cost of one additional display-sized offscreen texture, one could add a separate fx::PostProcessing effect that applies the fx.post.monochrome shader (the shader is already implemented).

However, the work in gl2-lensflare is still ongoing and shouldn't yet be merged anywhere.
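For reference, the per-pixel operation a monochrome post-process shader performs is just a weighted sum of the RGB channels. A minimal C sketch of that operation, using the Rec. 601 luma weights commonly chosen for this (the actual fx.post.monochrome shader's weights may differ):

```c
/* Sketch of a greyscale (luma) conversion as done per pixel by a
 * monochrome post-process shader. Weights are the Rec. 601 ones;
 * this is illustrative, not the engine's actual shader code. */
float monochrome(float r, float g, float b)
{
    return 0.299f * r + 0.587f * g + 0.114f * b;
}
```

Doing this as a post-process keeps the scene renderer untouched, which is why it fits the camera lens effects pipeline better than a GL_COLOR matrix approach.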

some crosshair details are missing

I suppose as a first step we could just increase the line width for the crosshair. In x_hair.c there's

#define XHAIR_LINE_WIDTH    1.f
Have you tried changing this?
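As an alternative to a fixed larger width, the width could be scaled by how much the offscreen HUD buffer is shrunk when composited into the Rift view, so lines stay at least one device pixel wide. A hypothetical sketch (not actual Doomsday code; the parameter names are made up for illustration):

```c
/* Illustrative sketch: widen the crosshair lines in proportion to the
 * downscale factor applied when the offscreen HUD buffer is composited
 * into the final (smaller) Rift view. */
float crosshairLineWidth(float baseWidth,     /* e.g. XHAIR_LINE_WIDTH */
                         float bufferWidth,   /* offscreen buffer, px  */
                         float renderedWidth) /* final on-screen, px   */
{
    float scale = bufferWidth / renderedWidth; /* > 1 when downscaled */
    return baseWidth * (scale > 1.0f ? scale : 1.0f);
}
```

In non-Rift modes the buffer is not downscaled, so the base width is returned unchanged.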

Is it possible to share variables between x_hair.cpp and vr.cpp / What is the right way to carry this eye offset information into the crosshair rendering code?

No. vr.cpp is in the client executable and x_hair.c is in the game plugin library, which has no direct visibility into variables in the engine binary. You need to use the Con_Get* methods to access the VR variables.

You could add read-only cvars that contain the current eye offset. A cvar is marked read-only using the CVF_READ_ONLY flag when registering it.
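The pattern looks roughly like the following self-contained mock. This is NOT the real Doomsday console API; it only imitates the shape of it, i.e. the engine registering a cvar with a read-only flag (cf. CVF_READ_ONLY) and the plugin reading it back through a Con_Get*-style accessor.

```c
#include <string.h>

/* Mock cvar registry imitating the engine/plugin pattern described
 * above. Real code would use Doomsday's console API instead. */
#define MOCK_CVF_READ_ONLY 0x1

typedef struct {
    const char *name;
    float       value;
    int         flags;
} MockCVar;

static MockCVar cvars[8];
static int numCVars;

/* Engine side: register a cvar, e.g. the current eye offset. */
void Mock_Register(const char *name, float value, int flags)
{
    cvars[numCVars].name  = name;
    cvars[numCVars].value = value;
    cvars[numCVars].flags = flags;
    ++numCVars;
}

/* Plugin side: look up the value by name (cf. Con_GetFloat). */
float Mock_GetFloat(const char *name)
{
    for (int i = 0; i < numCVars; ++i)
        if (strcmp(cvars[i].name, name) == 0)
            return cvars[i].value;
    return 0.0f;
}

/* A write attempt that honors the read-only flag; returns 0 on refusal. */
int Mock_SetFloat(const char *name, float value)
{
    for (int i = 0; i < numCVars; ++i)
        if (strcmp(cvars[i].name, name) == 0) {
            if (cvars[i].flags & MOCK_CVF_READ_ONLY)
                return 0; /* refused: cvar is read-only */
            cvars[i].value = value;
            return 1;
        }
    return 0;
}
```

The engine would update the registered value once per eye pass, and the plugin would read it when drawing the crosshair.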

#113 Updated by skyjake almost 11 years ago

With today's merge of the gl2-lensflare changes back to oculus-rift and master, VR mode 13 is probably broken. The window is still requested to use a stereo buffer; however, rendering is now occurring in GLFramebuffer, using renderbuffers and/or textures. At the moment I'm unsure whether VR mode 13 can actually work via this new path.

I created a new issue for supporting stereoscopic pixel formats: #1678.

#114 Updated by skyjake almost 11 years ago

Good news, I now have a Rift devkit of my own to play around with.

I can address some of the biggest usability issues, like making the Doomsday UI more legible in mode 9. I can also put together a settings dialog and hopefully also include libovr in the Windows/OS X distribution packages (I'll have to check the license again, though).

#115 Updated by cmbruns almost 11 years ago

skyjake wrote:

Good news, I now have a Rift devkit of my own to play around with.

Awesome news! Welcome to the Rift. libovr can (and should) be statically linked, so you should not need to ship a separate lib file. Perhaps you will have an easier time than I did figuring out how to build libovr with RTTI support on Mac, which I think might be necessary.

#116 Updated by skyjake almost 11 years ago

  • % Done changed from 70 to 90
This weekend I have made several Rift-related improvements:
  • Fixed rendering glitches so that the left and right eyes don't have different opacity levels (for instance).
  • There is a special mouse cursor for the warped Rift mode.
  • Fonts can be scaled up with the -fontsize command line option (1.5 is good for the Rift).
  • I applied antialiasing and tweaked the filtering to smooth out the Rift framebuffer appearance.
  • I'm now applying the head tracking angles from the Rift directly to the view, so that there is as little latency as possible. It's feeling pretty much as good as OculusWorldDemo now.
  • There is a "3D & VR" settings dialog with stereoscopic 3D and Oculus Rift settings.
  • I adjusted the defaults for the average male (the Rift SDK uses similar defaults).

The final thing is incorporating libovr in the distribution packages, which shouldn't be difficult.

#117 Updated by skyjake almost 11 years ago

  • Status changed from In Progress to Closed
  • Assignee set to skyjake
  • % Done changed from 90 to 100

I'm closing this issue as complete. libovr is now included in Doomsday's Windows and OS X 10.8 packages, and there is an easy-to-use settings dialog for VR and the Oculus Rift.

There is certainly still room for improvement (e.g., HUD weapon sprites), but the game is now quite playable with the Rift.

#118 Updated by skyjake over 9 years ago

  • Related to Feature #1852: Support for Oculus Rift DK2 (extended desktop mode, LibOVR 0.4.3) added

#119 Updated by skyjake almost 9 years ago

  • Related to Bug #2135: Disable Oculus support by default (until proper LibOVR 1.0 support is implemented) added
