
Oculus Rift with DCS World Discussion



I do not disagree with hansangb.

 

But to keep things in perspective: questions still remain over the VR performance of a 980 Ti in DCS. Even players with the best card you can buy (a 980 Ti) still find scenarios in DCS that are a problem. It could be that for the foreseeable future, optimal performance is out of the reach of currently available hardware. Certain parts of the map, certain modules, and heavy AI loads or lots of objects will bring any system below comfortable VR performance levels.

 

Krupi, in your case and for others like you (me included, since I have a 780 too): when you get the Rift, put all settings to LOWEST, disable all recordings and/or exports, and see how well your 780 performs in VR doing exactly the flying that you want to do. That becomes your baseline. Then ask other people who do similar flying (same modules, same map regions, same missions) how their 980 Ti performs in VR.

 

If players with a 980 Ti have frequent frame drops below 45, then you know that upgrading to a 980 Ti (at this time) will not buy you optimal performance, and it is perhaps better to wait before buying new hardware. But if players with a 980 Ti have a very good or optimal experience, then you know that upgrading is an idea with a lot of merit.

 

 

 

 

Here's the issue:

 

The highest-end GPUs (980 Ti, R9 Fury X, etc.) are still limited by how many DirectX commands the CPU-side layer of the DirectX 11 API can process.

 

With VR, you're not just rendering one image on a normal 1080p screen; you're rendering two 3D viewports.

 

So it's a lot more work for both the CPU and the GPU to process.
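
To put rough numbers on that, here's a toy C++ model (nothing like DCS's or D3D11's real internals; the scene size and the 5 µs per-command cost are invented for illustration) of how CPU-side submission time scales when you go from one viewport to two:

```cpp
// Toy model: every object costs one draw command that the API's CPU-side
// layer must validate and translate before the GPU sees it.
#include <chrono>
#include <cstdio>
#include <thread>

struct Scene { int objectCount; };

// Stand-in for the CPU work DirectX 11 does per draw command (invented cost).
void submitDraw() {
    std::this_thread::sleep_for(std::chrono::microseconds(5));
}

double renderFrameMs(const Scene& scene, int viewports) {
    auto t0 = std::chrono::steady_clock::now();
    for (int v = 0; v < viewports; ++v)        // 1 = flat screen, 2 = HMD
        for (int i = 0; i < scene.objectCount; ++i)
            submitDraw();                      // CPU cost scales with viewports
    return std::chrono::duration<double, std::milli>(
               std::chrono::steady_clock::now() - t0).count();
}

int main() {
    Scene heavyScene{1000};                    // object-heavy sim scene
    std::printf("mono:   %.1f ms of CPU submission\n", renderFrameMs(heavyScene, 1));
    std::printf("stereo: %.1f ms of CPU submission\n", renderFrameMs(heavyScene, 2));
    // A 90 Hz HMD leaves only ~11.1 ms for the whole frame.
}
```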

 

Then you have to take into account that nearly every other "VR" game has a limited number of objects and limited view distances.

 

Even the most detailed racing sims (Project CARS, iRacing, etc.) have a fraction of DCS's view distance, and a fraction of the objects to render.

 

Outside of other flight sims, nothing comes close to the draw distances, object counts, and cockpit detail used by DCS.

 

 

However, in the future, Oculus or AMD/nVidia need to optimize side-by-side rendering pipelines and multi-GPU side-by-side rendering (aka split-screen or per-eye rendering).

 

Having one GPU per eye in a multi-GPU environment will bring FPS back up and reduce latency radically.

 

It will also enable mid-level dual-GPU setups to be VR capable, if not better than a single high-end GPU setup (i.e. two 960s rendering one eye each, vs. a 980 Ti trying to render both).

 

For example, I can render 1080x1200 in a window at a stupidly high FPS on a mid-level GPU; tell it to render two different viewports, however, and that FPS drops dramatically.

Two mid-tier GPUs can each render a 1080x1200 3D image at 90 FPS easily in 99.9% of games, better than a top-tier card can render two viewports totaling 2160x1200 at 90 FPS.

Not to mention that two mid-tier cards are likely cheaper than the top-tier family.
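
A minimal sketch of that per-eye idea, with std::thread standing in for two GPUs and an invented 8 ms per-eye render cost (a timing illustration, not real graphics code):

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;

// Stand-in for a mid-tier GPU rendering one 1080x1200 eye view.
void renderEye() {
    std::this_thread::sleep_for(std::chrono::milliseconds(8));
}

long long elapsedMs(Clock::time_point start) {
    return std::chrono::duration_cast<std::chrono::milliseconds>(
               Clock::now() - start).count();
}

int main() {
    // One GPU: the two eye views are rendered back to back.
    auto t0 = Clock::now();
    renderEye();
    renderEye();
    std::printf("single GPU, both eyes: ~%lld ms\n", elapsedMs(t0));  // ~16 ms

    // One GPU per eye: both views rendered at the same time.
    auto t1 = Clock::now();
    std::thread left(renderEye), right(renderEye);
    left.join();
    right.join();
    std::printf("per-eye GPUs:          ~%lld ms\n", elapsedMs(t1));  // ~8 ms
}
```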

 

As for the CPU overhead, it will always be present, though DX11 removes a large chunk of the overhead from the CPU; DX12 removes nearly all of it, as do other low-level APIs.

 

DirectX 1 through 9 existed as they did because there were too many different GPU architectures on the market for developers to program for each one. Using DirectX as an API, programmers would write against the DirectX layer, which would then use the CPU to process commands before sending them to the GPU.

 

S3 / S3D, 3DFx / Glide, ATi / CIF, Matrox / MSI, nVidia / NV1, etc.

Each of those companies and GPU manufacturers had its own architecture and API.

As a developer, having to develop for Glide, S3D, CIF, Matrox MSI, NV1, and so on, to work on each GPU, was expensive.

Enter DirectX and OpenGL as a layer: program once, and have DirectX/OpenGL send the correct commands to the GPU, at the cost of CPU usage to process those commands.

Most developers chose OpenGL for cross-platform games (Linux and Windows), or DirectX/Glide for Windows games.

And Glide was only used by most developers because, at the time, nothing was capable of pumping out the performance of the Voodoo and Voodoo2 GPUs.
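
A sketch of what that "program once" layer looks like in code. The class names and printed vendor calls are invented for illustration; the shape is the point: game code targets one interface, and the translation to each vendor's hardware happens on the CPU behind it.

```cpp
#include <cstdio>
#include <memory>

// The "program once" abstraction, in the spirit of early DirectX/OpenGL.
struct IGraphicsApi {
    virtual ~IGraphicsApi() = default;
    virtual void draw(int meshId) = 0;  // generic command from the game
};

// Per-vendor backends: the CPU translates the generic command here.
struct GlideBackend : IGraphicsApi {
    void draw(int meshId) override { std::printf("Glide: grDrawTriangle(mesh %d)\n", meshId); }
};
struct S3dBackend : IGraphicsApi {
    void draw(int meshId) override { std::printf("S3D: triangle list (mesh %d)\n", meshId); }
};

int main() {
    std::unique_ptr<IGraphicsApi> api = std::make_unique<GlideBackend>();
    api->draw(42);                       // the game code never changes...
    api = std::make_unique<S3dBackend>();
    api->draw(42);                       // ...only the backend underneath does
}
```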

 

You now have two major GPU vendors left, AMD (formerly ATi) and nVidia (which bought out 3DFx, among others), and three major graphics APIs:

DirectX 11.x (limited to Windows; 11.2 requires Windows 8.1 or later)

Vulkan (works on all platforms; a low-level API where the graphics commands are sent straight to the GPU)

AMD Mantle (works only with AMD GCN 1.0+ GPUs)

 

With nVidia's and AMD's GPUs both using similar architectures, it's now significantly easier to program directly to the GPU.

DirectX 11 removed most of the software processing layers, and DirectX 12 removed nearly all of them, eliminating the CPU load and overhead when a scene calls for hundreds of thousands of objects.



Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


SkateZilla is spot on. Game engines will keep getting optimized, but new tech will have to be developed for VR to make it work properly at the FPS we need. There's no way around it, really. ED can only do so much. Right now VR is mostly treated as a pair of screens; it needs its own tech and to be treated as VR.



Hi SkateZilla,

 

You seem to be at the forefront of this technology in terms of knowledge. What do you see in the foreseeable future (the next few years) remedying the early issues in VR? When will we have the perfect sweet spot of hardware and software to max out extreme software like DCS? I use a 980 Ti and DK2 and the experience is good… I would say a modest 7 on a scale of 1-10.


Hi SkateZilla,

 

You seem to be at the forefront of this technology in terms of knowledge. What do you see in the foreseeable future (the next few years) remedying the early issues in VR? When will we have the perfect sweet spot of hardware and software to max out extreme software like DCS? I use a 980 Ti and DK2 and the experience is good… I would say a modest 7 on a scale of 1-10.

 

soooo...

SkateZilla can talk about the latest tech trends, but CAN HE FLY? :joystick:

find me on steam! username: Hannibal_A101A

http://steamcommunity.com/profiles/76561197969447179


soooo...

SkateZilla can talk about the latest tech trends, but CAN HE FLY? :joystick:

 

 

I can FLY, in Sim World and Real Life. :pilotfly::joystick:

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


A great summary, SkateZilla. I only made my post as I did to make players aware and set reasonable expectations before they rush out and buy a 980 Ti expecting VR to work perfectly in DCS. If only it were so easy.

 

I have cautious long-term optimism for VR SLI. I think VR SLI is only being used in a few of Valve's Lab demos for the Vive at this time; I don't think it is used anywhere else right now.

 

Anyone using it in their game would certainly tell the world.


A great summary, SkateZilla. I only made my post as I did to make players aware and set reasonable expectations before they rush out and buy a 980 Ti expecting VR to work perfectly in DCS. If only it were so easy.

 

I have cautious long-term optimism for VR SLI. I think VR SLI is only being used in a few of Valve's Lab demos for the Vive at this time; I don't think it is used anywhere else right now.

 

Anyone using it in their game would certainly tell the world.

 

 

 

Because current SLI and CrossFire modes all use Alternate Frame Rendering and queued frames, which causes severe latency.

 

With split-screen / side-by-side / per-eye rendering, both GPUs render at the same time, one responsible for the left eye and one for the right. No alternating frames or queued frames, no added latency.
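
Back-of-envelope numbers for why that matters, assuming a 90 Hz HMD and a driver that queues two frames ahead under AFR (the queue depth is an assumption; drivers vary):

```cpp
#include <cstdio>

int main() {
    const double frameMs = 1000.0 / 90.0;   // ~11.1 ms per displayed frame

    // Alternate Frame Rendering: GPUs take turns on whole frames, and the
    // driver queues frames ahead, so motion-to-photon latency spans the queue.
    const int afrQueueDepth = 2;
    const double afrLatencyMs = (afrQueueDepth + 1) * frameMs;

    // Per-eye split rendering: both GPUs work on the same frame at once,
    // nothing is queued, so latency stays around a single frame.
    const double splitLatencyMs = 1 * frameMs;

    std::printf("AFR latency:     ~%.1f ms\n", afrLatencyMs);   // ~33 ms
    std::printf("per-eye latency: ~%.1f ms\n", splitLatencyMs); // ~11 ms
}
```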

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


Because current SLI and CrossFire modes all use Alternate Frame Rendering and queued frames, which causes severe latency.

 

With split-screen / side-by-side / per-eye rendering, both GPUs render at the same time, one responsible for the left eye and one for the right. No alternating frames or queued frames, no added latency.

 

I'm curious what impact nVidia's move away from SLI to NVLink will have. From my understanding it has big implications for data processing? I'm actually not all that sure I understand it... :music_whistling:


A great summary, SkateZilla. I only made my post as I did to make players aware and set reasonable expectations before they rush out and buy a 980 Ti expecting VR to work perfectly in DCS. If only it were so easy.

 

I have cautious long-term optimism for VR SLI. I think VR SLI is only being used in a few of Valve's Lab demos for the Vive at this time; I don't think it is used anywhere else right now.

 

Anyone using it in their game would certainly tell the world.

 

We know the DCS engine is demanding and not optimized, and time after time I see people ask if their 980 Ti or 970 is good enough; but when they don't like hearing that performance is still subpar, they go out and buy current-gen video cards anyway, thinking they will get a good DCS experience...

 

Patience is a virtue...

Unless you want to throw away money gambling on the prospect of a good VR DCS experience, you have to check yourself and wait!

 

But yes, the wait is long... and hard, because DCS is painstakingly slow in its improvement.

 

I'm also starting to feel impatient myself.

 

But the reality is that I don't even have my CV1 yet... and the next-gen video cards are coming out soon,

SO IT'S OBVIOUS TO ME THAT THERE'S NO POINT BUYING ANY CURRENT-GEN VIDEO CARDS.

 

I will continue to be patient.

I'll check back when I do have my CV1... maybe I will change my attitude, lol.

find me on steam! username: Hannibal_A101A

http://steamcommunity.com/profiles/76561197969447179


As for the CPU overhead, it will always be present, though DX11 removes a large chunk of the overhead from the CPU; DX12 removes nearly all of it, as do other low-level APIs.

So does that mean that even the strongest GPUs will still be bottlenecked by the CPU in DCS until ED develops a DX12 engine to replace DX11 EDGE?

i9-13900K @ 6.2GHz oc | ASUS ROG MAXIMUS Z790 HERO | 64GB DDR5 5600MHz | iCUE H150i Liquid CPU Cooler | 24GB GeForce RTX 4090 | Windows 11 Home | 2TB Samsung 980 PRO NVMe | Corsair RM1000x | LG 48GQ900-B 4K OLED Monitor | CH Fighterstick | Ch Pro Throttle | CH Pro Pedals | TrackIR 5


So does that mean that even the strongest GPUs will still be bottlenecked by the CPU in DCS until ED develops a DX12 engine to replace DX11 EDGE?

 

CPUs are not a bottleneck per se; the issue is that with more and more objects in a scene, a middle-man graphics API such as DirectX has to process every draw command for the GPU first.

 

Like I said, DX12, AMD Mantle, and Vulkan all remove the issue significantly by sending the graphics commands directly to the GPU instead of through a software layer, as DirectX did all the way up to DX11.2.

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


So does that mean that even the strongest GPUs will still be bottlenecked by the CPU in DCS until ED develops a DX12 engine to replace DX11 EDGE?

 

It's going to be a long road. Maybe DX12 will come after the 2.5 integration; ED already has a lot on their plate modernizing the game.

From multi-seat to carrier ops to weather, clouds, and ground-mapping radar... supporting DX12 this early just introduces more problems. ED is taking a step-by-step approach.

find me on steam! username: Hannibal_A101A

http://steamcommunity.com/profiles/76561197969447179


I'm no expert, but surely it would be easier to optimise VR rendering for SLI/CrossFire, so that each GPU renders one viewport, than to upgrade DCS to DX12?

Wouldn't SLI / per-eye rendering still end up getting capped by the CPU as long as the game is running DX11?

I'm running Titan X cards in SLI and still hit low frame rates sometimes due to the CPU, and mine is an overclocked Devil's Canyon.

i9-13900K @ 6.2GHz oc | ASUS ROG MAXIMUS Z790 HERO | 64GB DDR5 5600MHz | iCUE H150i Liquid CPU Cooler | 24GB GeForce RTX 4090 | Windows 11 Home | 2TB Samsung 980 PRO NVMe | Corsair RM1000x | LG 48GQ900-B 4K OLED Monitor | CH Fighterstick | Ch Pro Throttle | CH Pro Pedals | TrackIR 5


Maybe... like I said, I'm no expert, but I can't help thinking it's a real shame DCS is not on DX12, since it's one of the few modern games that would benefit the most from offloading the CPU, from a performance point of view, regardless of any new graphical effects.




64th "Scorpions" Aggressor Squadron

Discord: 64th Aggressor Squadron

TS: 195.201.110.22


FYI, I had a DK2 and didn't experience any issues with my setup, and from what I understand the 1.3 SDK from Oculus really improved the experience.

 

So if I were to upgrade my CPU to an i5 6400 with 16GB of RAM and keep the 780, I should be okay, and then when Pascal or later GPU generations arrive I can replace the 780. Would that make sense?

 

I just don't buy that a 780 is not VR capable but the next-generation 980 suddenly is!



Windows 10 Pro | ASUS RANGER VIII | i5 6600K @ 4.6GHz| MSI RTX 2060 SUPER | 32GB RAM | Corsair H100i | Corsair Carbide 540 | HP Reverb G2 | MFG crosswind Pedals | Custom Spitfire Cockpit

Project IX Cockpit


New 2.0.2 patch is out today, with Oculus 1.3 runtime support.

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


CPUs are not a bottleneck per se; the issue is that with more and more objects in a scene, a middle-man graphics API such as DirectX has to process every draw command for the GPU first.

 

Like I said, DX12, AMD Mantle, and Vulkan all remove the issue significantly by sending the graphics commands directly to the GPU instead of through a software layer, as DirectX did all the way up to DX11.2.

So as long as DCS is running DX11, the CPU will still be the bottleneck?

i9-13900K @ 6.2GHz oc | ASUS ROG MAXIMUS Z790 HERO | 64GB DDR5 5600MHz | iCUE H150i Liquid CPU Cooler | 24GB GeForce RTX 4090 | Windows 11 Home | 2TB Samsung 980 PRO NVMe | Corsair RM1000x | LG 48GQ900-B 4K OLED Monitor | CH Fighterstick | Ch Pro Throttle | CH Pro Pedals | TrackIR 5



 

 

The easiest explanation is that CPUs aren't designed to process graphical commands and tasks, and GPUs aren't designed to perform CPU tasks.

 

DX12, Vulkan, Mantle, and even the long-dead Glide API all sent commands directly to the GPU to process, which is why, in the '90s, games on the Voodoo2/3/4/5 had insanely better frames per second while running stupendously higher anti-aliasing and immensely better special effects.

 

The bottleneck isn't the CPU; it's the software layer of the API. You can't fault the CPU for being told to do something it was not designed to do in the first place.

 

The software layer of the DirectX API existed for one reason only: compatibility between all the different types of GPUs and architectures. That is no longer needed.

 

That being said, EDGE is still being tuned and tweaked under the hood. But I can say that if you tried to run Nevada, with its object count and map detail, on the DX9 API, the DirectX kernel would eat itself and crash.

 

The DX12 developer SDK isn't even fully deployed, so there's no point begging ED for it now.

Would DCS benefit from having no software layer in the API directing GPU instructions? Definitely. But I'm not a graphics engine engineer, so I can point out API differences left and right; whether or not they come to be is another story.

 

You could argue that it's a CPU bottleneck, when in reality it's a software-layer bottleneck. Overclocking your CPU will yield improvements up to a point, but it's not the CPU's IPC or frequency that chokes the DX commands to the GPU; it's the actual command count itself. You could have a 10 GHz Intel Voyager Neural Gel Pack CPU and the software layer would still begin to back up the commands to the GPU.
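
To illustrate with made-up numbers (the draw-call and cycle counts below are invented, not measurements): if CPU submission cost is roughly draw calls × cycles per call ÷ clock speed, a faster CPU only moves the wall, while a thinner API layer removes most of it.

```cpp
#include <cstdio>
#include <initializer_list>

// CPU time spent pushing one frame's draw calls through the API layer.
double submitTimeMs(double drawCalls, double cyclesPerCall, double cpuGHz) {
    return drawCalls * cyclesPerCall / (cpuGHz * 1e9) * 1000.0;
}

int main() {
    const double budgetMs = 1000.0 / 90.0;  // ~11.1 ms at 90 FPS
    const double drawCalls = 20000;         // object-heavy sim scene (assumed)

    for (double ghz : {4.0, 5.0, 10.0}) {
        double thick = submitTimeMs(drawCalls, 10000, ghz); // DX11-style layer
        double thin  = submitTimeMs(drawCalls, 1000,  ghz); // DX12/Vulkan-style
        std::printf("%5.1f GHz: thick layer %5.1f ms, thin layer %4.1f ms (budget %.1f ms)\n",
                    ghz, thick, thin, budgetMs);
    }
    // Even the fictional 10 GHz CPU blows the 90 FPS budget through the
    // thick layer; the thin layer fits comfortably at ordinary clock speeds.
}
```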

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


Maybe a brute-force approach can help. Forums are predicting that the big performance gains in the new nVidia cards will mostly come from higher core clock speeds, maybe 1400-1500 MHz.

 

===

 

On the subject of ATW: in use, I find ATW sufficient but not as immersive as staying constantly over 90 FPS. Flying at 45 FPS with ATW you do not become sick, but it is better when EVERYTHING runs at 90 FPS. There is a reason HMD makers target 90 FPS; it feels much better.

 

What I do think is that the existence of ATW (and other reprojection techniques) creates a tier of performance between 45 and 90 which a lot of people might settle for. Right now there are people who play DCS on a potato and they are happy. It follows that some players will put a potato on each eye and be happy.
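
For anyone wondering what that 45-to-90 tier looks like mechanically, here's a minimal sketch of the ATW idea (the types and the warp step are stand-ins, not Oculus's actual compositor): at every 90 Hz vsync the headset shows either a freshly rendered frame or the last frame re-warped to the newest head pose.

```cpp
#include <cstdio>

struct Pose { float yawDeg; };  // stand-in for a full head pose

int main() {
    Pose head{0.0f};
    // Assume the sim only finishes a real frame on every other vsync (45 FPS).
    for (int vsync = 0; vsync < 6; ++vsync) {
        head.yawDeg += 1.5f;  // the head keeps moving between vsyncs
        bool newFrameReady = (vsync % 2 == 0);
        if (newFrameReady)
            std::printf("vsync %d: present new frame at yaw %.1f\n", vsync, head.yawDeg);
        else
            std::printf("vsync %d: ATW re-warps last frame to yaw %.1f\n", vsync, head.yawDeg);
    }
    // Rotation is corrected on every vsync, which is why 45 FPS + ATW is
    // tolerable; but animation and translation still only update at 45.
}
```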


Today's Alpha patch has some bugs, but the ATW implementation is pure love. Bravo to ED's developers!

 

I took a fast flight down the main strip in the F-15 and my ATW frame rate dipped below 75 only for a tick. Excellent work!

Derek "BoxxMann" Speare

derekspearedesigns.com 25,000+ Gaming Enthusiasts Trust DSD Components to Perform!

i7-11700k 4.9g | RTX3080ti (finally!)| 64gb Ram | 2TB NVME PCIE4| Reverb G1 | CH Pro Throt/Fighterstick Pro | 4 DSD Boxes

Falcon XT/AT/3.0/4.0 | LB2 | DCS | LOMAC

Been Flight Simming Since 1988!

Useful VR settings and tips for DCS HERE


Maybe a brute-force approach can help. Forums are predicting that the big performance gains in the new nVidia cards will mostly come from higher core clock speeds, maybe 1400-1500 MHz.

 

===

 

On the subject of ATW: in use, I find ATW sufficient but not as immersive as staying constantly over 90 FPS. Flying at 45 FPS with ATW you do not become sick, but it is better when EVERYTHING runs at 90 FPS. There is a reason HMD makers target 90 FPS; it feels much better.

 

What I do think is that the existence of ATW (and other reprojection techniques) creates a tier of performance between 45 and 90 which a lot of people might settle for. Right now there are people who play DCS on a potato and they are happy. It follows that some players will put a potato on each eye and be happy.

 

Pascal cards will likely run lower clocks, as they are smaller chips.

Smaller chips usually have lower frequency ceilings.

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs

