2080Ti vs RTX 3090 for VR


Sn8ke


I have an HP Reverb G2 on preorder. My computer has an 8700K @ 5.3 GHz, a 2080 Ti, and 32 GB of RAM. The question is: since DCS is DX11, not DX12, would it be pointless to upgrade to a 3090 in terms of performance?

Asus ROG Maximus X Apex // Core i7 8700K @ 5.3GHz // 32GB DDR4 RAM // Asus RTX 3090 // 4K monitor w/ TrackIR 5

 

 

 


No, a 3090 will definitely provide a boost in performance. (The 3090 runs DirectX code just fine; the shaders don't care what API is being used.) From what I have personally seen, the 3090 is about 41% faster than a 2080 Ti with the Reverb (VR headset) in DCS 2.5.6. The question is whether that boost is worth ~$1700 or more.
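For rough scale, that 41% figure can be restated in frame-time terms. A minimal back-of-the-envelope sketch; the baseline FPS below is a made-up example, only the 41% uplift comes from the measurement above:

```python
# Back-of-the-envelope: what a 41% FPS uplift means in frame-time terms.
# Assumption: the uplift applies to GPU-bound raw FPS (baseline is made up).

baseline_fps = 45.0                  # hypothetical 2080 Ti result
uplift = 1.41                        # ~41% faster, per the post above

new_fps = baseline_fps * uplift
baseline_ms = 1000.0 / baseline_fps  # frame time in milliseconds
new_ms = 1000.0 / new_fps

print(f"2080 Ti: {baseline_fps:.1f} fps ({baseline_ms:.1f} ms/frame)")
print(f"3090:    {new_fps:.1f} fps ({new_ms:.1f} ms/frame)")
# A 41% FPS gain cuts GPU frame time by ~29% (1 - 1/1.41).
```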

 

I'd love to see a bit more detail if you can. Was that just raw FPS, and what system was used?

New hotness: i7 9700K @ 4.8 GHz, 32GB DDR4, 2080 Ti, :joystick: TM Warthog, TrackIR, HP Reverb (formerly CV1)

Old-N-busted: i7 4720HQ ~3.5 GHz, 32GB DDR3, Nvidia GTX 980M (4GB VRAM), :joystick: TM Warthog, TrackIR, Rift CV1 (yes, really).


Well, a 41% difference is pretty big, at least for me. I'd like 100%, but that is not realistic for one upgrade cycle.

PC: 5800X3D/4090, 11700K/3090, 9900K/2080Ti.

Joystick bases: TMW, VPC WarBRD, MT50CM2, VKB GFII, FSSB R3L

Joystick grips: TM (Warthog, F/A-18C), Realsimulator (F-16SGRH, F-18CGRH), VKB (Kosmosima LH, MCG, MCG Pro), VPC MongoosT50-CM2

Throttles: TMW, Winwing Super Taurus, Logitech Throttle Quadrant, Realsimulator Throttle (soon)

VR: HTC Vive/Pro, Oculus Rift/Quest 2, Valve Index, Varjo Aero, https://forum.dcs.world/topic/300065-varjo-aero-general-guide-for-new-owners/


41% would definitely be fair justification. I know some 3080 users have posted comparisons; one user reported no noticeable difference between the 3080 and the 2080 Ti. However, the 3090 is a different animal. Whatever the case, 3090s are sold out worldwide until later in 2021, unless you want to pay $5,000-$8,000 for one on eBay.

Asus ROG Maximus X Apex // Core i7 8700K @ 5.3GHz // 32GB DDR4 RAM // Asus RTX 3090 // 4K monitor w/ TrackIR 5

 

 

 


I know some 3080 users have posted comparisons; one user reported no noticeable difference between the 3080 and the 2080 Ti

 

I've seen that claim and it is just not credible. Either some settings were changed, or there is a CPU or memory bottleneck.


I've seen that claim and it is just not credible. Either some settings were changed, or there is a CPU or memory bottleneck.

 

Yeah, agreed, that really does not make a lot of sense.

Don B

EVGA Z390 Dark MB | i9 9900k CPU @ 5.1 GHz | Gigabyte 4090 OC | 64 GB Corsair Vengeance 3200 MHz CL16 | Corsair H150i Pro Cooler |Virpil CM3 Stick w/ Alpha Prime Grip 200mm ext| Virpil CM3 Throttle | VPC Rotor TCS Base w/ Alpha-L Grip| Point Control V2|Varjo Aero|


41% would definitely be fair justification. I know some 3080 users have posted comparisons; one user reported no noticeable difference between the 3080 and the 2080 Ti. However, the 3090 is a different animal. Whatever the case, 3090s are sold out worldwide until later in 2021, unless you want to pay $5,000-$8,000 for one on eBay.

 

The same user, after fixing his system, also reported a significant performance upgrade... YMMV

SYSTEM SPECS: Hardware: Intel Core i7-12700KF @ 5.1/5.3 GHz (P-cores) & 3.8 GHz (E-cores), 64GB RAM, 4090 FE, Dell S2716DG, Virpil T50CM3 Throttle, WinWing Orion 2 & F-16EX + MFG Crosswinds V2, Varjo Aero
SOFTWARE: Microsoft Windows 11, VoiceAttack & VAICOM PRO


Reports are out that third-party manufacturers may have built 3080 and 3090 cards with out-of-spec parts.

 

Never mind: [Interesting. Please share details: specifically, which AIBs?]

 

For those interested in the details, look at the latest Igor's Lab or JayzTwoCents video. Low-end EVGA, Zotac, and Gigabyte boards are all using lower-specification capacitor arrays. This may not be the case (I hope not) on more expensive boards.


Edited by Milou


The problem seems to be on cards where all six capacitor blocks are POSCAPs rather than a combination of POSCAPs and MLCCs, MLCCs being more expensive.

 

For clarity, it seems that EVGA is operating at or above spec for MLCCs on all of its 3090s (I am not an EVGA employee, but I have just bought one, so I thought I would check). The spec seems to be one MLCC group (20 capacitors) for the 3080 and two groups (40) for the 3090, which is what is on the FE cards, and those do not seem to be exhibiting the problem.


Edited by speed-of-heat
better understanding of the MLCC

SYSTEM SPECS: Hardware: Intel Core i7-12700KF @ 5.1/5.3 GHz (P-cores) & 3.8 GHz (E-cores), 64GB RAM, 4090 FE, Dell S2716DG, Virpil T50CM3 Throttle, WinWing Orion 2 & F-16EX + MFG Crosswinds V2, Varjo Aero
SOFTWARE: Microsoft Windows 11, VoiceAttack & VAICOM PRO


Phew, that's a relief!

Intel i7 12700K · MSI Gaming X Trio RTX 4090 · ASUS ROG STRIX Z690-A Wi-Fi · MSI 32" MPG321UR QD · Samsung 970 500GB M.2 NVMe · 2 x Samsung 850 Evo 1TB · 2TB HDD · 32GB Corsair Vengeance 3000MHz DDR4 · Windows 11 · Thrustmaster TPR Pedals · Tobii Eye Tracker 5 · Thrustmaster F/A-18 Hornet Grip · Virpil MongoosT-50CM3 Base · Virpil Throttle MT-50 CM3 · Virpil Alpha Prime Grip · Virpil Control Panel 2 · Thrustmaster F-16 MFDs · HTC Vive Pro 2 · Total Controls Multifunction Button Box


This question keeps coming up, and people need to realize that, for the moment, DCS is for the most part CPU bound, so you will not see much of an improvement. In layman's terms, it's a pipeline: CPU -> GPU. At 60 fps you have ~16.7 ms to generate a frame; at 50 fps, 20 ms.

 

Let's say that on your system with a 2080 Ti, the CPU takes 15 ms and your video card takes 5 ms. Voila, you get 50 fps. You drop $1700 on a new video card; now your CPU still takes 15 ms and your GPU is down to 3 ms... now you get 60 fps.

 

The point is, with a CPU-bound game, yes, you'll see an improvement from a video card upgrade, but you should invest in a CPU with high single-core performance first. The numbers above are not reflective of DCS; they are just numbers to illustrate the issue, and to show how industry benchmarks mean little on a case-by-case, game-by-game basis.

 

ED are working on multi-threading improvements and Vulkan integration. If I were you, I'd wait until early/mid next year for some of those gains, and pick up a 3090 for 60% of the price by then as well. In the meantime, I'd also recommend spending a small fraction of that money on upgrading your potato CPU.
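To make the frame-budget arithmetic above concrete, here is a minimal sketch of the FPS-to-frame-time conversion; note the 15 ms / 5 ms split in this post is an illustration, not a DCS measurement:

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to generate one frame at a given frame rate."""
    return 1000.0 / fps

print(frame_budget_ms(60))   # ~16.7 ms
print(frame_budget_ms(50))   # 20.0 ms
print(frame_budget_ms(90))   # ~11.1 ms (a 90 Hz VR headset)
```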


Edited by Vanguard

So Jacob from EVGA phoned Steve on the GN livestream (at one hour 57 minutes) to explain the situation from EVGA's side, and it has been further clarified on their forums. Summary: all of EVGA's 3080s and 3090s are able to run at their advertised OC. They identified the problem in testing, and the change delayed production and (I suspect) availability at launch as a result.

SYSTEM SPECS: Hardware Intel Corei7-12700KF @ 5.1/5.3p & 3.8e GHz, 64Gb RAM, 4090 FE, Dell S2716DG, Virpil T50CM3 Throttle, WinWIng Orion 2 & F-16EX + MFG Crosswinds V2, Varjo Aero
SOFTWARE: Microsoft Windows 11, VoiceAttack & VAICOM PRO


This question keeps coming up, and people need to realize that, for the moment, DCS is for the most part CPU bound, so you will not see much of an improvement. [...]

 

Here we go again!

 

DCS is not CPU bound per se; it depends on your CPU, GPU, and settings. Of course, your simple example is correct for those circumstances, but not when you are getting 8-10 ms on the CPU and 20-25 ms on the GPU. In that case, of course a GPU upgrade will help.

Intel i7 12700K · MSI Gaming X Trio RTX 4090 · ASUS ROG STRIX Z690-A Wi-Fi · MSI 32" MPG321UR QD · Samsung 970 500GB M.2 NVMe · 2 x Samsung 850 Evo 1TB · 2TB HDD · 32GB Corsair Vengeance 3000MHz DDR4 · Windows 11 · Thrustmaster TPR Pedals · Tobii Eye Tracker 5 · Thrustmaster F/A-18 Hornet Grip · Virpil MongoosT-50CM3 Base · Virpil Throttle MT-50 CM3 · Virpil Alpha Prime Grip · Virpil Control Panel 2 · Thrustmaster F-16 MFDs · HTC Vive Pro 2 · Total Controls Multifunction Button Box


This question keeps coming up, and people need to realize that, for the moment, DCS is for the most part CPU bound, so you will not see much of an improvement. [...]

 

I assert your statement is true for some people, and obviously for you... At the moment I don't see that, and I have spoken to several others on 2080 Tis who are GPU bound as well...

 

How do I know this? Because my CPU frame time is way lower than my GPU frame time: somewhere around 7 ms, rising to about 12-18 ms in really high unit density...

 

I have no doubt that the next thing to bind my experience will be the CPU, but that's because there is ALWAYS a bottleneck. It also depends on your display: I am running a Reverb, which is a 4K+ display and as a result tends to be more GPU than CPU bound... YMMV...

SYSTEM SPECS: Hardware: Intel Core i7-12700KF @ 5.1/5.3 GHz (P-cores) & 3.8 GHz (E-cores), 64GB RAM, 4090 FE, Dell S2716DG, Virpil T50CM3 Throttle, WinWing Orion 2 & F-16EX + MFG Crosswinds V2, Varjo Aero
SOFTWARE: Microsoft Windows 11, VoiceAttack & VAICOM PRO


Let's say that on your system with a 2080 Ti, the CPU takes 15 ms and your video card takes 5 ms. Voila, you get 50 fps. You drop $1700 on a new video card; now your CPU still takes 15 ms and your GPU is down to 3 ms... now you get 60 fps.

 

No, you don't add the frame times. The highest frame time sets the possible FPS.

 

For VR and a 90 Hz headset, if both CPU and GPU frame times are below 11.1 ms you get 90 fps; if the slower of the two is above 11.1 ms but below 22.2 ms, you get 45 fps when using ASW/reprojection.

 

For your example above, if the CPU frame time is 15 ms you will get 66.7 frames per second, and if you change to a better GPU and lower the GPU frame time from 5 to 3 ms, you still get 66.7 fps.

 

The numbers in your example are not really representative for people who have a 2080 Ti, as this thread is comparing the 2080 Ti to the 3090.

 

Most people using a 2080 Ti would have a processor that does not push the CPU frame time above the GPU frame time.

 

I mostly have ~50% higher frame times on the GPU (overclocked 2080 Ti) than on my CPU (9900KS).

Representative frame-time numbers for me are 8-10 ms for the CPU and 13-15 ms for the GPU.
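That rule is simple enough to state as code. A minimal sketch, assuming reprojection halves the refresh rate (as on a 90 Hz headset) and that frame times are otherwise steady:

```python
def delivered_fps(cpu_ms: float, gpu_ms: float,
                  refresh_hz: float = 90.0, reprojection: bool = True) -> float:
    """FPS is set by the slower pipeline stage, capped at the headset refresh.
    With ASW/reprojection, missing the budget drops you to half refresh."""
    frame_ms = max(cpu_ms, gpu_ms)       # the slower stage dominates; no adding
    budget_ms = 1000.0 / refresh_hz      # 11.1 ms at 90 Hz
    if frame_ms <= budget_ms:
        return refresh_hz                # full 90 fps
    if reprojection and frame_ms <= 2 * budget_ms:
        return refresh_hz / 2            # 45 fps under ASW/reprojection
    return 1000.0 / frame_ms

print(delivered_fps(15, 5))                      # 45.0 -- CPU bound in VR
print(delivered_fps(15, 3))                      # 45.0 -- a faster GPU changes nothing
print(delivered_fps(15, 3, reprojection=False))  # ~66.7 -- the flat-screen case above
print(delivered_fps(9, 14))                      # 45.0 -- the 8-10 / 13-15 ms case
```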

[T.M HOTAS Warthog Stick & Throttle + T.Flight pedals, Varjo Aero, HP Reverb pro, Pimax 8KX] 🙂

[DCS Mirage 2K; Huey; Spitfire Mk IX, AJS 37, F-14, F-18, FC3, A-10 Warthog II and a few more ]

i9 13900KF@5.8/32Gb DDR5@6400/ Gigabyte Gaming OC RTX4090, ASUS STRIX Z790-F , 2Tb m2 NVMe


I mostly have ~50% higher frame times on the GPU (overclocked 2080 Ti) than on my CPU (9900KS).

Representative frame-time numbers for me are 8-10 ms for the CPU and 13-15 ms for the GPU.

I get pretty much the same with an 8600K @ 4.8 GHz and a 2080 Ti. That is in free flight or a 1v1. If I load, say, a Channel furball instant action, my CPU shoots up to 20 ms or so while the GPU stays in the 13-15 ms range, dropping as I gain altitude until I occasionally see 90 fps.

 

A little more GPU power and I reckon I could maintain 90 fps in a free-flight scenario. The moment I add more than a couple of aircraft or objects, the CPU is the problem and there is no way of achieving 90 fps whatever the GPU. Push too far and it can't even maintain 45 fps.

 

So we have to be careful about tests: a free flight might show a significant improvement from swapping a 2080 Ti for a 3090, but the moment you do anything that involves a number of other aircraft, the CPU becomes the bottleneck and your 3090 is likely doing nothing for you.

 

Now, an 8600K at 4.8 GHz isn't the very fastest out there, but I am so far away from 11.1 ms in anything other than the simplest of missions that I don't believe there is an upgrade out there that would make enough of a difference to avoid reprojection; at best it would give a little headroom against dropping below 45 fps.

AMD 5800X3D · MSI 4080 · Asus ROG Strix B550 Gaming · HP Reverb Pro · 1TB M.2 NVMe, 32GB Corsair Vengeance 3600MHz DDR4 · Windows 11 · Thrustmaster TPR Pedals · VIRPIL T-50CM3 Base, Alpha Prime R. VIRPIL VPC Rotor TCS Base. JetSeat


Now, an 8600K at 4.8 GHz isn't the very fastest out there, but I am so far away from 11.1 ms in anything other than the simplest of missions that I don't believe there is an upgrade out there that would make enough of a difference to avoid reprojection; at best it would give a little headroom against dropping below 45 fps.

 

Yeah, I think this is right. For simple missions offline with a limited number of units, you can keep CPU frame times reasonable, maybe even under the magic 11 ms number. And my 2080 Ti with "decent quality" graphics settings is more like 14-17 ms, so if a 3090 can shave 40% off that, it's likely I could get under 11 ms there too.

 

For online play on Buddyspike I'm typically seeing much higher CPU frame times, from the mid-to-high teens into the low 20s. And really, with a 9700K running at stock clocks, say 4.6 GHz boosted, getting a 40% or more improvement in frame times (equivalent to ~6.5 GHz) is just not possible by overclocking or by moving up to the next-generation 10700K. With an OC I might shave off 1 or 2 ms at best, and I'd need something like 8-12 ms for it to be competitive.
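As a rough worked version of that clock math; a sketch assuming, naively, that CPU frame time scales inversely with clock speed (real workloads only approximate this), with 18 ms as a hypothetical mid-to-high-teens value from the post above:

```python
# Naive assumption: CPU frame time scales inversely with core clock.
current_clock_ghz = 4.6   # 9700K boost clock (from the post above)
current_frame_ms = 18.0   # hypothetical online CPU frame time (mid-to-high teens)
target_frame_ms = 11.1    # 90 Hz budget

required_clock = current_clock_ghz * (current_frame_ms / target_frame_ms)
print(f"Required clock: ~{required_clock:.1f} GHz")     # ~7.5 GHz -- not happening

# A realistic 5.1 GHz overclock only buys back a millisecond or two:
oc_frame_ms = current_frame_ms * (current_clock_ghz / 5.1)
print(f"Frame time at 5.1 GHz: ~{oc_frame_ms:.1f} ms")  # ~16.2 ms
```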

New hotness: i7 9700K @ 4.8 GHz, 32GB DDR4, 2080 Ti, :joystick: TM Warthog, TrackIR, HP Reverb (formerly CV1)

Old-N-busted: i7 4720HQ ~3.5 GHz, 32GB DDR3, Nvidia GTX 980M (4GB VRAM), :joystick: TM Warthog, TrackIR, Rift CV1 (yes, really).


I really don't want to sound like an arrogant d-bag or throw my credentials around to make a point, but you seriously don't know what you're on about. Not CPU bound... maybe it's just my case?

[screenshot: Task Manager showing per-core CPU utilization]

 

Two cores pinned at 4.7 GHz on a 9900K, hmmm, while the system is at 21% total utilization, you say. I have a 2080, in case anyone is interested. The GPU utilization in Task Manager is bound by memory utilization and should be full all the time, FWIW. Everything set to max, 60 fps.

 

Pull your head out of DCS, look at the latest and greatest titles out there, and see what is possible with today's hardware. Come to grips with the fact that DCS has roughly "Fisher-Price" quality graphics relative to gaming at large (yet top notch in terms of flight/combat sims), or maybe you're in the gaming industry and understand what these cards can and should be able to do compared to what they're accomplishing here... it's all about optimization. Sure, you can throw your wallet at it, much as you can strap a supercharger onto a V12 engine instead of re-grading the hill you're trying to climb.

 

Right... how about you "science the **** out of it" and do some tests. Set DCS's CPU affinity to a single core, then run a CPU benchmark that consumes 20% CPU. Shocking... you lose about 18.8-19.2% of your frames. Hmmm. Now run a GPU benchmark that consumes 20% GPU while DCS is running. FPS loss? A rounding error. This isn't just "my system"; it's true for yours as well. There is so much demand in these forums for "evidence" when speaking about modules; how about you show some yourself before... leading your flock astray and wasting their money.
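For anyone wanting to reproduce the affinity half of that test, a minimal sketch using the psutil library; the process name "DCS.exe" and the choice of core 0 are assumptions, and on Windows this typically needs to run elevated:

```python
# Sketch: pin a running DCS process to a single core with psutil.
# Assumptions: the process is named "DCS.exe" and core 0 is the target.
import psutil

for proc in psutil.process_iter(['name']):
    if proc.info['name'] == 'DCS.exe':
        proc.cpu_affinity([0])          # restrict scheduling to core 0
        print(f"Pinned PID {proc.pid} to core 0")
        break
else:
    print("DCS.exe not found")
```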

 

Why do I reply to these things? :doh: I have no personal interest in what y'all do... go ahead and buy a 3080, then start your own thread like this one: https://forums.eagle.ru/showthread.php?t=286339&highlight=3090 and cry to the community that it didn't really help...



Edited by Vanguard

So while I see some high utilization of my core(s), the better metric for figuring out the bottleneck is CPU frame time versus GPU frame time, which you can get from fpsVR or other programs. Generally, if the CPU frame time is lower you are GPU bound; if it's the other way around, CPU bound.

 

This doesn't necessarily tell the whole story on CPU usage, as the CPU is still doing other work for the game, not just sending out scenes for rendering.
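That heuristic is simple enough to state as code. A sketch; the frame-time inputs are whatever your overlay (e.g. fpsVR) reports, and the 1 ms margin is an arbitrary choice:

```python
def bottleneck(cpu_ms: float, gpu_ms: float, margin_ms: float = 1.0) -> str:
    """Classify the bottleneck from per-stage frame times (e.g. from fpsVR).
    A small margin avoids flip-flopping when the two are nearly equal."""
    if gpu_ms > cpu_ms + margin_ms:
        return "GPU bound"
    if cpu_ms > gpu_ms + margin_ms:
        return "CPU bound"
    return "balanced"

print(bottleneck(9, 14))    # GPU bound (the 8-10 / 13-15 ms case above)
print(bottleneck(20, 14))   # CPU bound (the Channel furball case)
```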

New hotness: i7 9700K @ 4.8 GHz, 32GB DDR4, 2080 Ti, :joystick: TM Warthog, TrackIR, HP Reverb (formerly CV1)

Old-N-busted: i7 4720HQ ~3.5 GHz, 32GB DDR3, Nvidia GTX 980M (4GB VRAM), :joystick: TM Warthog, TrackIR, Rift CV1 (yes, really).


 

Two cores pinned at 4.7 GHz on a 9900K, hmmm, while the system is at 21% total utilization, you say.

 

Why run a 9900K at 4.7 GHz? That doesn't make sense.

For running DCS, which mainly runs on one core, you should go as high as possible in clock speed.

At least make those two cores run at 5 GHz all the time during gaming.

[T.M HOTAS Warthog Stick & Throttle + T.Flight pedals, Varjo Aero, HP Reverb pro, Pimax 8KX] 🙂

[DCS Mirage 2K; Huey; Spitfire Mk IX, AJS 37, F-14, F-18, FC3, A-10 Warthog II and a few more ]

i9 13900KF@5.8/32Gb DDR5@6400/ Gigabyte Gaming OC RTX4090, ASUS STRIX Z790-F , 2Tb m2 NVMe

