
2018 Hardware Benchmark - DCS World



EDIT: Updated the facts and added a DCS v2.5 bench. o7

 

I want to continue the huge work done here, but focus not on FPS but on frame latency.

 

As most of you already know, frame latency / frame render time matters much more than pure average FPS. With all the stuttering and inconsistencies of the DCS engine and maps, frame render time is the most important statistic. For those who play DCS with Vsync because they don't like stuttering, or who play in VR, it's even more important. In VR every frame has to be pushed by the CPU within an 11 ms window (90 Hz), and for a 60 Hz monitor it has to be prepared within a 16 ms window. If not, that particular frame won't be rendered and the FPS will be cut in half: 45 FPS (VR) or 30 FPS (60 Hz monitor with Vsync). Of course I'm omitting G-Sync etc.
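The 11 ms and 16 ms windows are just the refresh period of the display; a quick illustrative sketch of the arithmetic (numbers are the ones from this post):

```python
# Frame-time budget for a given refresh rate, and the fallback rate
# when a frame misses its window (Vsync / VR reprojection halves it).
def frame_budget_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

for hz in (90, 60):
    print(f"{hz} Hz: {frame_budget_ms(hz):.1f} ms budget, "
          f"miss it -> {hz // 2} fps")
# 90 Hz: 11.1 ms budget, miss it -> 45 fps
# 60 Hz: 16.7 ms budget, miss it -> 30 fps
```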

 

 

Facts to be aware of:

 

1. Fast frame render times and good frame consistency are what people perceive as the sense of flying and of flying speed. That's why BMS is regarded as very good at it - it's just old as hell and has very low frame latencies on modern hardware.

 

2. With VR or Vsync you can have CPU utilization at 40% and GPU utilization at 100% and still be CPU bottlenecked. It just means your CPU can't push all the data for a frame within 11 ms - it doesn't have to be at 100% utilization for that. It just won't make it in time.
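A toy illustration of point 2 (the 14 ms figure is made up for the example): one render thread saturating a single core of an 8-core CPU reads as low overall utilization, yet still misses the 90 Hz deadline.

```python
cores = 8
main_thread_ms = 14.0        # assumed serial work per frame (example value)
budget_ms = 1000.0 / 90      # ~11.1 ms at 90 Hz

# One busy core out of eight: overall utilization looks harmless.
utilization = 100.0 / cores
missed = main_thread_ms > budget_ms
print(f"~{utilization:.1f}% CPU overall, deadline missed: {missed}")
# ~12.5% CPU overall, deadline missed: True
```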

 

3. People tend to think that if the CPU is 100% utilized on every core, the game engine is well optimized. It's the opposite: an engine that is done with its work at 30% utilization and waiting for the next task is much better optimized.

 

4. Yes, the DCS game engine (mark the difference between the game engine and the graphics engine) is CPU bound, not GPU bound (less true in the new v2.5 in the non-VR scenario). Of course you can throw 5x supersampling, 8x MSAA, 5x DSR and much more at it and push the GPU to its limits (hence the 100% GPU utilization in point 2 above). But when you set the lowest settings and start a mission, you can still be restricted by your CPU power because of those 11 ms and 16 ms limits, and it will bottleneck your system.

 

5. It's highly possible to have an average of 70 FPS together with frame drops and stuttering, because an FPS measurement is just a statistic - whether momentary or averaged, still just a statistic. Useless for judging stuttering, flight feeling, fluidity and everything else I focus on here.
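To see why an average can lie (point 5), here are two made-up frame-time traces with the identical average FPS but a very different feel:

```python
# Both traces cover 80 frames in exactly 1000 ms -> 80 fps average.
smooth  = [12.5] * 80                # perfectly even pacing
stutter = [10.0] * 76 + [60.0] * 4   # same total time, four big spikes

for name, times in (("smooth", smooth), ("stutter", stutter)):
    avg_fps = 1000.0 * len(times) / sum(times)
    print(f"{name}: avg {avg_fps:.0f} fps, worst frame {max(times):.1f} ms")
# smooth: avg 80 fps, worst frame 12.5 ms
# stutter: avg 80 fps, worst frame 60.0 ms
```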

 

6. VR asynchronous timewarp / asynchronous reprojection helps a lot when dropping to 45 FPS, but it will break the flight feeling, and with infrequent drops it produces unpleasant stuttering of game objects (not of your view).

 

7. In VR, if you drop to 45 FPS your head will still move at 90 FPS (see above), but the ground and all objects inside the game will move at 45 FPS (every second frame) - hence the frame jumps and stuttering. You notice it most when you fly fast and look sideways at the ground (not in front of you).
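The object stutter in point 7 can be put into numbers; the 200 m/s airspeed here is just an assumed example value:

```python
speed_mps = 200.0  # assumed airspeed for the example
for fps in (90, 45):
    step_m = speed_mps / fps  # ground displacement per rendered update
    print(f"at {fps} fps the ground shifts {step_m:.1f} m per update")
# at 90 fps the ground shifts 2.2 m per update
# at 45 fps the ground shifts 4.4 m per update
```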

 

8. Even if you don't use Vsync or VR, all of the above affects the quality of your FPS (see the BMS example above).

 

9. After all the tests I lowered my CPU speed from 4.7 GHz to 4.0 GHz to see what difference it would make. I got about 0.5-1 ms more on every test, maximum, in the v1.5.8 tests.

 

10. You should turn off any affinity bullshit in autoexec.cfg (if you edited it). It can ruin CPU frame time consistency (increase frame time variation). Leave it at the default auto (all cores).

 

11. While testing I changed the process priority to HIGH for DCS.exe, vrcompositor.exe and vrserver.exe to decrease frame time variation. I don't recommend setting it to realtime because it could impact input latencies or make the game unstable (I crashed a couple of times). But all in all it helps, because Windows 10 wants to do every unimportant action exactly while I want to squeeze every ms from the rig :doh:. I'm not utilizing the CPU 100%, I just want to be sure DCS is first in line. If you use an Oculus, remember that setting DCS.exe higher without the Oculus rendering pipeline processes can make DCS slower, not faster.

 

12. Testing GPU and CPU frame times is the only proper way to tell why the frames you have don't give you the expected flight quality. It's the only way to tell whether you need new hardware, and the only way to know whether a new CPU is worth the money.

 

13. Someone asked me an important question: why do I turn off terrain shadows and cockpit global illumination in my preferred settings if they have no impact on performance?

They do have an impact on performance - just not on CPU frame render times. First you need to take care of the CPU so it makes the deadline with the frame data; then you want to max out your GPU's potential without overloading it, so it also makes the frame time. All in all I can't reach 11 ms in VR, so I'm choosing the under-22 ms variant for both the CPU and the GPU.

 

14. I wanted to know whether it's possible to run DCS without stuttering at 90 Hz in VR on currently existing hardware. Whether it's possible - I leave that question to you.

 

 

 

*I measured the impact of the DCS graphics settings on CPU frame render times VERY roughly with the Steam frame statistics tool while running DCS in VR. Every graphics setting was tested with ALL the others turned off or at the lowest possible value. I chose two Su-25 missions which struggle to give good performance on my rig even on the lowest settings. I tested the simplest possible scenario: standing on the runway, in VR, in singleplayer. If this struggles with stable frame times and asynchronous timewarp / asynchronous reprojection kicks in, then a more complicated air battle or multiplayer won't work at a stable 90 FPS either. Measurement was done in the cockpit (the simulation was of course started by clicking start mission). Intel SpeedStep was turned OFF and all possible system settings were set to high performance. My CPU is watercooled at 4.7 GHz and it's always at 4.7 GHz, without any boost jumps or anything. My rig is in my footnote. With the hardware I have, I'm not GPU limited but CPU limited. I hope my work will be useful to you. I will update it with 2.5 stats.

 

 

 

DCS 2.5.0.13818.311 - in VR

NVIDIA 386.69

Simple analysis:

The differences between the engines are very clear when we measure frame times instead of average FPS. The v2.5 engine is much better than v1.5, no doubt about it. I'm very impressed, great work DEV team! :thumbup::thumbup::thumbup::thumbup: Waiting for more to finally play in VR at 90 FPS instead of 45 (Vulkan renderer? An advanced rendering pipeline for VR, like lens matched shading and simultaneous multi-projection?)

 

CPU times on the lowest settings are a little higher than in v1.5, but I see much less load on the CPU overall and more load on the GPU, which is very good. v1.5 put too much load on the CPU, especially in VR. Rendered object counts decreased substantially (great - it was a big problem for v1.5 and quite a troublemaker, and proof that v1.5 was not properly optimized, which some users didn't believe).

 

The DCS v2.5 engine looks single-threaded. :huh: :huh: Apart from the DirectInput threads, sound threads and StarForce threads, there appear to be no threads for simulation, AI or anything else. No change here from 1.5 :huh: :huh:.

 

Changing the texture size setting doesn't need an engine restart - a nice touch and a sign of the new renderer. Same with changing the tree and clutter/grass settings in a mission. Handy for testing ;)

 

The water results show the long-awaited optimization of the obvious "no water here" scenario, so we can proudly say we can stop measuring water performance in the middle of the plains. :doh:

 

The through-the-roof latencies from water/visible range and civ traffic have finally been steadied out. The visible range frame times have changed surprisingly - maybe some map optimizations/changes helped here. We need a better test scenario.

 

SpeedTree is great, much much faster than in 1.5 - there's no comparison. Another example of v1.5's unused optimization potential.

 

Pay attention to anisotropic filtering - it governs the look of the SpeedTree trees. Low settings here look awful.

 

Without deferred shading there was ~0.5 ms less frame time on the lowest settings, in both tests.

 

Obviously we now need a much better testing scenario, e.g. for terrain shadows, trees, water etc. :thumbup:

 

My preferred DCS graphics settings for VR are at the end of this post. The CPU frame times for the preferred settings (lower right corner of the results screenshot) are measured, not calculated.

 

DCS 1.5.8.13716.422 - in VR

NVIDIA 386.69

Simple analysis:

Where the delta is 0 ms there is no CPU bottleneck - it's GPU bound. So if you have a good GPU, just throw the load at the GPU, but watch out for maxing it out.

 

The GPU is hard to measure like this because it scales with load. If you go with low settings it picks its nose and renders the frame in 11 ms. If you throw more calculations at it, it scratches its head, gives some more power and... you guessed it - renders it in 11 ms.

 

In the first test on my rig, on the lowest settings, I had an 8 ms CPU render time (a 3 ms reserve from the 11 ms 90 Hz CPU frame time limit). I could set e.g. civ traffic to medium (+2 ms) and water to medium (+1 ms) and still make the frame in 11 ms. BUT bear in mind that frame times are very unstable, especially in DCS, so you should leave a ~2 ms reserve.
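The headroom arithmetic from that first test, spelled out (the per-setting costs are the rough measurements quoted above):

```python
budget_ms = 1000.0 / 90               # ~11.1 ms at 90 Hz
baseline_ms = 8.0                     # lowest settings, first test
costs_ms = {"civ traffic medium": 2.0, "water medium": 1.0}

total_ms = baseline_ms + sum(costs_ms.values())
print(f"{total_ms:.1f} ms of {budget_ms:.1f} ms -> fits: {total_ms <= budget_ms}")
# 11.0 ms of 11.1 ms -> fits: True
```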

 

In the second test on my rig, on the lowest settings, I had a 12 ms CPU render time (1 ms OVER the 11 ms 90 Hz CPU frame time limit) - no fun at 90 FPS.

 

With all of those set to maximum, CPU frame times went far beyond 24 ms... In this case the GPU doesn't do much - it just waits for the CPU.

 

Shadows are interesting. If your GPU can render shadows (most of you have 1080s/1080 Tis, so it can), the CPU is the real bottleneck for them. And if you really want them, it doesn't matter whether you choose Low or High - it's 4 ms either way, no difference. Just go High if your GPU is good enough.

 

It's been 2 years since this analysis. They did not find the cause of the stutter, but they reached the same conclusions about the minor impact of today's GPUs on stuttering problems. Here you have the cause. Sadly, after 2 years not much has changed.

 

 

DCS 2.5.0.13818.311 - in VR

NVIDIA 386.69

RodE7ae.png

DjOWf2R.png

 

 

 

DCS 1.5.8.13716.422 - in VR

NVIDIA 386.69

lS4qGZd.png

v8Uj1C4.png

 

 

DCS preferred settings for VR

Xk5kjuR.png



looking through Pimax 5k+

Windows 10 64bit

i9 9900ks @5,4Ghz

1080Ti Asus Poseidon @2126 mem@12650

32GB DDR4 @3200GHz

SEASONIC 850W

ASUS ROG Swift UQHD PG348Q G-sync

Corsair MP510 NVMe /Samsung 960 NVMe

Tomorrow we will have the long awaited v2.5 with the Caucasus map.

 

It's already out now :thumbup:

VR Cockpit (link):

Custom Throttletek F/A-18C Throttle w/ Hall Sensors + Otto switches | Slaw Device RX Viper Pedals w/ Damper | VPC T-50 Base + 15cm Black Sahaj Extension + TM Hornet or Warthog Grip | Super Warthog Wheel Stand Pro | Steelcase Leap V2 + JetSeat SE

 

VR Rig:

Pimax 5K+ | ASUS ROG Strix 1080Ti | Intel i7-9700K | Gigabyte Z390 Aorus Master | Corsair H115i RGB Platinum | 32GB Corsair Vengeance Pro RGB 3200 | Dell U3415W Curved 3440x1440

Brilliant work, very useful!

 

Happy to help,

Gathering reputation if it's worth a rep :music_whistling:

Happy to help,

Gathering reputation if it's worth a rep :music_whistling:

 

Repped!

 

Great work - once you learn how to read and understand it all. :thumbup:

Proud owner of:

PointCTRL VR : Finger Trackers for VR -- Real Simulator : FSSB R3L Force Sensing Stick. -- Deltasim : Force Sensor WH Slew Upgrade -- Mach3Ti Ring : Real Flown Mach 3 SR-71 Titanium, made into an amazing ring.

 

My Fathers Aviation Memoirs: 50 Years of Flying Fun - From Hunter to Spitfire and back again.


Thanks, working on v2.5 already.

I used the SteamVR frame timing tool, which you can find on the SteamVR settings/performance page. But I will soon switch to FCAT, because I'd like to measure more scenarios and test non-VR environments.


Wondering why you turn off terrain shadows and cockpit global illumination if they have no impact on performance?

"It takes a big man to admit he is wrong...I'm not a big man" Chevy Chase, Fletch Lives

 

3700x - 32gb ram - GTX 1080ti - Windows 10

Wondering why you turn off terrain shadows and cockpit global illumination if they have no impact on performance?

 

They do have an impact on performance - just not on CPU frame render times. First you need to take care of the CPU so it makes the deadline with the frame data; then you want to max out your GPU's potential without overloading it, so it also makes the frame time. All in all I can't reach 11 ms in VR, so I'm choosing the under-22 ms variant for both the CPU and the GPU. I hope my explanation makes sense.




I found that the NVIDIA 385.69 drivers give slightly better, more stable performance on my rig than the newest 390.77 ones. It's noticeable e.g. in the Su-25 anti-radar missile practice mission, shortly after takeoff in early flight. Can anyone else confirm this?



  • 3 weeks later...

You could do some testing with fully isolated DCS CPU Affinity like we discussed here recently: From page 8-11 https://forums.eagle.ru/showthread.php?t=201530&page=8

 

Priority is not a sure way to get everything else out of the way, because what the kernel/firmware/chipset does is "load balance" everything in such a way that the same threads bounce around many cores. That only evens out the per-core CPU utilization statistics and has absolutely zero peak performance advantage. Priority may help to a lesser extent, but it does not give full isolation.

 

For the ultimate test, you would use Process Lasso, set it up so it starts at boot (only the underlying process, not the configurator), and assign every single process to use only one CPU core.

 

For the example mentioned here I used a quad-core without Hyper-Threading. With Hyper-Threading it gets more complicated, because you would have to know which hardware thread (logical core) belongs to which physical CPU core, so that you know whether you're testing DCS on, say, two logical cores belonging to the same physical core. In theory that should not produce results as good as DCS on two physical cores, but I haven't tested it myself yet to get an idea of how it compares to a single core.

 

To set up Process Lasso: after you install it, prepare for testing, but first do a clean reboot to reset the affinities. As soon as it boots up, launch the Lasso configuration, select all processes, right-click and set them to CPU Affinity -> ALWAYS -> CORE0. Keep the computer running for a while, 10-20 minutes, and keep looking for more processes popping up as the services with "delayed start" get launched and other programs launch things based on triggers; some processes only appear for less than a few minutes, so make sure you keep checking. Then start opening the barebones common programs you use during your normal testing and make sure those are set to CORE0 too. After you think you've covered them all, start DCS and assign its affinity to CORE1, CORE2 and CORE3, then set the CPU priority depending on whether your test does both.
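The CORE0 / CORE1-3 assignment above boils down to a bit mask over logical cores (bit N set = core N allowed); a small sketch of the mask arithmetic:

```python
# Build an affinity mask from core indices: bit N set = core N allowed.
def affinity_mask(*cores: int) -> int:
    m = 0
    for c in cores:
        m |= 1 << c
    return m

everything_else = affinity_mask(0)      # 0b0001 = 1
dcs = affinity_mask(1, 2, 3)            # 0b1110 = 14 (0xE)
print(everything_else, dcs, hex(dcs))
# 1 14 0xe
```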

 

Now, DCS has two major threads; the third core will cover around 1% at most, I think - there are a few threads that together take that much. But it will help prevent interruptions of the main thread, which means possibly 1% more performance and responsiveness. That may or may not translate linearly into FPS numbers; maybe into frame times.

 

Unfortunately, per-thread CPU affinity can only be done in programming. If we could do it ourselves right now, you wouldn't need a third core set aside exclusively for DCS, because the second DCS thread would fit just fine alongside - it's practically never bottlenecked. Apparently it's the AUDIO and IO thread, and unless you're doing some heavy IO - flying Mach 10 all over the map with so much to load, or producing 1000 sounds - you're not going to bottleneck it. But don't take my word for it: my test was without any IO or audio load, and the second thread got up to 65% or so, leaving 35% headroom. How much of that 35% would get eaten when it's loaded with work, I'm really not sure right now.

 

 

Also, because I can't find a way to pin individual process threads to a specific physical CPU core exclusively, I can't test whether thread-bouncing really helps, hurts peak performance, or is neutral. In theory, according to various tech sites, when a thread bounces to another physical CPU core it has to wait for the caches to transfer data, which is logically not a good thing, and that could show up as microstuttering, as theorized by PCPer.com - though it's unnoticeable on CPUs where the transfer times between the L1 and L2 caches are small enough.

 

I would first do it all without HT before even thinking about HT. An interesting bit of theory about HT increasing performance: the reason people see a boost may not really be that one of those logical cores somehow helps DCS do a better job, but that there are more slots all the other threads can pick from - all HT does is give everything else a way to get out of DCS's path. That's how tricky it could be; you could achieve the exact same effect purely by using more physical cores. So all this good PR might be falsely attributed to HT, which means Intel could in theory be sued for false advertising (if they were to advertise it like that). But we need a proper test to confirm this theory.

 

*** For Intel to be at fault, they would specifically have to advertise "our HT technology doing this and that makes games run faster" - i.e. with the focus on HT itself. If they just said "if you enable the option called HT you get better gaming performance", that would not be actionable, because it's true in practice. Of course this is more complicated, because HT does actually help at the same time, e.g. when running another game, and in a game with many threads, having some of them run on the same physical core would still do some good anyway. Then "gaming" is such a broad term - how would you define it, as single-thread dependent or not? In the end the whole OS and all its threads have to be run on usually just 4 or 8 cores.



Getting back in action!

1st.: PC Specs WIP: Win10P 2004 (20H1), 1440p@75"32 - MB: Asus ROG Strix X-570E - CPU: AMD Ryzen ... - GPU: AMD Radeon ... - RAM: 64 GB - SSD: Samsung 970 EVO Plus 1TB NVMe

2nd.: PC Specs: Win10P 2004 (20H1), 1440p@75"32 - MB: Asus P9X79 - CPU: Intel i7 3820 - RAM: 32GB - GPU: AMD Radeon RX480 8GB - SSD Samsung 860 EVO 250GB (DCS), Input: Saitek Cyborg X/FLY5

Modules: A-10C I/II, F/A-18C, Mig-21Bis, M-2000C, AJS-37, Spitfire LF Mk. IX, P-47, FC3, SC, CA, WW2AP.

Terrains: NTTR, Normandy, Persian Gulf, Syria.

You could do some testing with fully isolated DCS CPU Affinity like we discussed here recently: From page 8-11 https://forums.eagle.ru/showthread.php?t=201530&page=8 [...]

I've done different variants of affinity testing, but the results were unsatisfactory. It's a placebo some people are experiencing, and the results were always worse on a clean system. The best option was to leave affinity at the defaults, let the system balance it, and raise the priorities of the DCS and VR processes (with all unnecessary processes turned off, of course).

 

p.s. I would like to point out to anyone considering this (not you) that manually messing with process affinity is quite stupid because of the CPU's shared caches between cores and cache-hit problems. There's much more to affinity than some people think (not talking about you :)), so I'd advise leaving it to the developers and the system-managed default, and instead turning everything else down to a minimum.

 

If affinity speculation helped you, look closer, because you have some fundamental problems with your system.

Normally there is very little to gain and much more to lose.




Yes, I never figured out what's at the bottom of some of the lockups that many people call stuttering, so I went on a dive into the CPU stuff.

 

Well, since you were testing performance, the difference was expected not to be big. I was trying to see whether those bits could cause stuttering like some people reported when they said setting CPU affinity helped... but that could have been something else entirely on their systems: setting it to multiple physical cores, if in fact their system had been using two logical cores from the same physical core.

 

The main thread may also stall if the secondary (audio/IO?) thread hits a bump, but I don't know how to make that happen in order to test whether it's true.

 

 

 

Some words on the stuff I was around before getting into CPU testing:

 

First of all, there are like 10 different kinds of stuttering, with very different root causes.

 

Dynamic streaming of assets is a must and probably already is a thing; it might be something to do with that. Also, different patches behave differently - I don't get some of the lockups anymore now that I'm on an SSD with Win10.

 

But still, I don't want to be nitpicky: the whole point of dynamic streaming is to not lock up the engine and to stream in anyway, even if the data can't be accessed in time - the textures would look weird or be missing, but the audio would only be delayed.

 

I've also noticed that a lot of the time it's not really textures being streamed in that produce a stutter, but actually some specific models - the pilot ejection scene stuff.

 

All of that magically goes away if you throw brute force at it - an SSD or even an NVMe SSD. Again, I'm not nitpicking, but this only hides the issues; it's still an inefficient way to solve the problem, and of course it's a burden on the customer. But at least it works.

 

So on one side I might just as well shut up about it and enjoy the game, but on the other it's hard to ignore a few tiny bits causing a momentary stutter while all the rest of the mass data loads and streams just fine. It's one of those mosquito things: small but deadly.


Quickly, thanks for all your hard work! I love the discussion on the frame render times. What was your default pixel density setting? I probably missed this somewhere so I apologize if you stated it already...

i7-6700K OC'd 4700ghz | MSI RTX 3090 | 32gb 3200 RAM | 2 x SSD | Reverb G2 | TM Warthog

Quickly, thanks for all your hard work! I love the discussion on the frame render times. What was your default pixel density setting? I probably missed this somewhere so I apologize if you stated it already...

 

Hi, I'm happy to help. For now I usually use 1.5 DCS supersampling. Strangely, it's a little smoother. I used Steam 2.4 supersampling before, which is not a perfect equivalent but is normally smoother... but not in DCS. Like many other things...

 

Steam supersampling   DCS supersampling
1.96                  1.40
2.25                  1.50
2.40                  1.55
2.50                  1.58
2.56                  1.60
2.89                  1.70
3.24                  1.80
3.61                  1.90
4.00                  2.00
6.25                  2.50
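The listed pairs are consistent with Steam's supersampling being a per-area factor and DCS pixel density a per-axis one, i.e. steam ≈ dcs², which you can check directly:

```python
# (Steam supersampling, DCS pixel density) pairs from the table above.
pairs = [(1.96, 1.40), (2.25, 1.50), (2.40, 1.55), (2.50, 1.58),
         (2.56, 1.60), (2.89, 1.70), (3.24, 1.80), (3.61, 1.90),
         (4.00, 2.00), (6.25, 2.50)]

# Each Steam value is the DCS value squared, within rounding.
assert all(abs(dcs ** 2 - steam) < 0.01 for steam, dcs in pairs)
print("every Steam value is the DCS pixel density squared (within 0.01)")
```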

 

I've also noticed on a lot of the time, it's not really textures being streamed in that produces a stutter, but actually some specific models, the pilot ejection scene stuff. [...]

 

You're right about it - the streaming engine is not as good as it could be. But in fact the real problem is stuttering from the nonexistent threading in the engine, the dreaded multiplayer net code, and the bad choice of an old graphics API (...and we just had a major engine upgrade, ehhh...) which underperforms on modern CPUs.

 

:pilotfly::pilotfly:The DCS community:pilotfly::pilotfly: overall has some of the highest-end PC hardware out there and in many ways is trying to "cover" those problems with it. We cannot get hardware faster than what exists, so the devs have to keep up with us. I listed just a handful of techniques above which could help here but are not used in DCS.




Yeah, but let's make sure we're accurate - I try to be whenever I troubleshoot something. I specifically wouldn't want to mix in all the rest, because I have no experience with multiplayer. But I did one big post on the networking, and you can't blame everything on the netcode: milliseconds are, in computing terms, an astronomical measure, and the infrastructure itself is not good enough for what the software asks of it - even on FTTH you don't get out of the millisecond range. Multiplayer might never be solved completely.

 

Secondly, the API itself doesn't have much to do with the streaming and preloading behavior.

 

I made big posts about all of this last year and earlier, but the whole thing was more complicated. Reinstalling DCS on Win10, even on a slower HDD, made things a lot better in that department, so it might not have been the standard behavior but a bugged one, and those things were improved. Now that I'm on an SSD I see pretty much nothing, and of course the recommended system specs now include an SSD, just as they should, so things were done.

Getting back in action!

1st.: PC Specs WIP: Win10P 2004 (20H1), 1440p@75"32 - MB: Asus ROG Strix X-570E - CPU: AMD Ryzen ... - GPU: AMD Radeon ... - RAM: 64 GB - SSD: Samsung 970 EVO Plus 1TB NVMe

2nd.: PC Specs: Win10P 2004 (20H1), 1440p@75"32 - MB: Asus P9X79 - CPU: Intel i7 3820 - RAM: 32GB - GPU: AMD Radeon RX480 8GB - SSD Samsung 860 EVO 250GB (DCS), Input: Saitek Cyborg X/FLY5

Modules: A-10C I/II, F/A-18C, Mig-21Bis, M-2000C, AJS-37, Spitfire LF Mk. IX, P-47, FC3, SC, CA, WW2AP.

Terrains: NTTR, Normandy, Persian Gulf, Syria.


 

Secondly, the API itself doesn't have much to do with the streaming and preloading behavior.

 

 

No, it doesn't, but the API has a lot to do with frame times, and that's what I focused on (not the streaming engine), because that's the biggest of the DCS/VR stuttering problems right now.

Streaming engine problems are nowhere near as bad, and are negligible with good hardware.
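As a quick illustration of why frame times matter more than average FPS (the sample numbers below are made up, not DCS measurements): two runs can share the same average while one stutters badly.

```python
import statistics

def summarize(frame_times_ms):
    """Average FPS plus the worst single frame time in a sample."""
    avg_fps = 1000.0 / statistics.mean(frame_times_ms)
    return avg_fps, max(frame_times_ms)

smooth  = [11.1] * 100                 # steady ~90 fps, every frame on time
stutter = [8.0] * 95 + [70.0] * 5      # identical average, five huge spikes

for name, sample in (("smooth", smooth), ("stutter", stutter)):
    avg_fps, worst = summarize(sample)
    print(f"{name}: {avg_fps:.0f} avg fps, worst frame {worst:.1f} ms")
```

Both samples average about 90 fps, but the second one blows the 11 ms VR budget five times, and those are the frames you feel.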


Edited by soundslikerust

2 weeks later...

Soundslikerust, you really are my friend!

 

I don't have any real expertise in computer hardware, but I needed to understand why I wasn't able to get a solid 50 FPS with my simulator.

 

I use 4 video projectors (1920x1080) on a dome display, together with warping software.

 

With a previous simulator that used only 3 video projectors, but at a higher resolution (1920x1200), I had more consistent FPS.

 

If you read my answer in this topic https://forums.eagle.ru/showthread.php?t=202092&page=10, you can see that I was wondering why I did not reach 50 FPS even though neither my GPU nor my CPU was at 100% utilization.

 

I thought my GPU was the bottleneck; now, with your valuable information, I understand that the CPU is the problem!

 

Doesn't that mean that increasing the number of viewports is far more CPU-demanding than increasing resolution?

 

Am I right? If so, could a heavily overclocked CPU be the solution?
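The arithmetic behind the viewport question can be sketched like this (a hedged back-of-envelope; the 6 ms per-viewport cost is a made-up figure, not a DCS measurement):

```python
def cpu_frame_ms(viewports: int, ms_per_viewport: float) -> float:
    """CPU time to prepare one frame if viewports are submitted serially."""
    return viewports * ms_per_viewport

budget_ms = 1000.0 / 50.0              # 20 ms budget for a solid 50 Hz
for n in (1, 3, 4):
    t = cpu_frame_ms(n, 6.0)           # assume ~6 ms of CPU work per view
    verdict = "fits" if t <= budget_ms else "misses 50 Hz"
    print(f"{n} viewport(s): {t:.0f} ms CPU -> {verdict}")
```

Under these assumed numbers, 3 viewports fit the 20 ms budget while 4 do not, whereas raising per-viewport resolution would mostly add GPU work, not CPU work.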


Edited by Chafer

Dome display

i7 6700K (HT deactivated), 16GB DDR4, GTX 1080 Ti, SSD, 4x1080 display @50Hz (resolution 7680*1080), warping software


Another question:

Why is there such a difference in performance when the game is paused?

 

https://tof.cx/images/2018/03/15/445fc50b4f88d9950f2021b861d84df6.png

https://tof.cx/images/2018/03/15/a42c164834c9363f1ade535de379817c.png

 

These two screenshots are not taken at exactly the same time, but even if I pause/unpause immediately I see the same delta: almost 60 FPS unpaused and 100 FPS paused (capped at 100 FPS in graphics.lua).
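One plausible explanation can be sketched with illustrative numbers (these are guesses, not measured values): when paused, the per-frame simulation step is skipped, so only the render cost remains.

```python
def fps(sim_ms: float, render_ms: float) -> float:
    """Frames per second when simulation and rendering run serially."""
    return 1000.0 / (sim_ms + render_ms)

render_ms = 10.0   # assumed pure render cost -> the 100 fps cap when paused
sim_ms = 6.7       # assumed serial world-update cost added each frame

print(f"paused:   {fps(0.0, render_ms):.0f} fps")
print(f"unpaused: {fps(sim_ms, render_ms):.0f} fps")
```

With those assumed costs, pausing removes ~6.7 ms of CPU work per frame, which alone accounts for a 100 vs 60 FPS gap.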

