
2016 Hardware Benchmark - DCS World 1.5.x


Recommended Posts

Hi tiborrr, are you able to attach the track file to your first post so that I (and maybe others who want to do the same) can benchmark my current system (I am looking at upgrading) and compare it to your benchmarks?

No longer active in DCS...


  • 2 weeks later...

5.6 Multi-monitor gaming performance analysis:

The purpose of this test is to measure the impact of a multi-monitor, high-FOV display setup on general FPS performance. I tried out a multi-monitor setup (3x FHD = 5760x1080) and was surprised how demanding it is, despite the lower megapixel count (6.2 MPix vs. 8.3 MPix) compared to 4K. The very large FOV produced by the "3 Monitors" option literally drops performance to the floor.
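For reference, the megapixel figures quoted above work out as follows; a trivial sanity check in Python, nothing DCS-specific:

```python
# Sanity check of the megapixel counts quoted in this thread.
resolutions = {
    "3x FHD (5760x1080)":    (5760, 1080),
    "4K UHD (3840x2160)":    (3840, 2160),
    "VR stereo (2160x1200)": (2160, 1200),  # used in section 5.7 below
}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} MPix")
# 3x FHD (5760x1080):    6.2 MPix
# 4K UHD (3840x2160):    8.3 MPix
# VR stereo (2160x1200): 2.6 MPix
```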

 

"1 Monitor" (WALL) option works perfectly and within expectations - it performs equally good as the single-monitor setup at comparable (total) resolution

 

[Image: sEQDjWQl.jpg]

 

[Image: UKaUmzel.jpg]

My findings were:

- GTX 780 Ti, GTX 970 and GTX 980 Ti all perform pretty much exactly the same - around 38 FPS (9 FPS min; 66 FPS max)

- There must be a bottleneck somewhere... Not sure if it's CPU bound, GPU architecture bound or a DCS engine limitation.

- I tried further overclocking the CPU but didn't get much better results, maybe 2.5 FPS better on average (from 38 FPS to ~40 FPS)

- It doesn't appear to be CPU bound, at least not frequency-wise. It has to be a DCS engine limitation, or the GPUs simply cannot cope with the number of polys on the screen

- I tried GPU overclocking as well, but it also only gained maybe 1-2 FPS on average

- I observed GPU video RAM usage - at 4K it never goes above 2.8GB with the general preset at High.

- I tried lowering Trees Visibility, lowering Preload Radius, and disabling shadows and HDR: no real improvement, altogether maybe 4-5 FPS better performance on average.

- I tried "3 Monitor" option then on a single 4K resolution screen (8.3MPix image) and results we're better - around 45 FPS average (10FPS min ; 94FPS max) - but most likely due to smaller FOV. I was getting frustrated by then...

 

Then came the big revelation - the power draw of the system with an overclocked Titan X (1400MHz GPU) used to be up to 380W throughout the majority of the benchmark. With 4K "3 Monitors" and everything on HIGH settings it only reaches this value for a very short period of time; otherwise it sits somewhere between 230-260W. But... when the main quality PRESET is set to MEDIUM, the power draw is constantly over 300W and the average FPS goes sky high instantly! On the LOW preset the power draw never drops below 320W. This means the GPU is being starved.
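If you want to reproduce the power-draw observation, here is a minimal sketch that samples the GPU via the stock nvidia-smi CLI once per second (assumes an NVIDIA card whose driver exposes the power.draw sensor; the 10-minute window is arbitrary):

```python
# Log GPU power draw, VRAM usage and load while the benchmark runs.
import subprocess, time

QUERY = "power.draw,memory.used,utilization.gpu"

for _ in range(600):  # ~10 minutes at one sample per second
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
        text=True,
    ).strip()
    print(time.strftime("%H:%M:%S"), out)  # e.g. "231.50 W, 2810 MiB, 54 %"
    time.sleep(1)
```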

 

[Image: rBEkuwU.png]

 

I strongly believe there is something in the DCS 1.5 engine that is not able to feed the GPU in time when running 3 screens at high FOV ("3 Monitors" profile).

 

Bottom line for the high-FOV "3 Monitors" preset multi-monitor setup (BOX): it's playable at butter-smooth FPS for sure, however one needs to set the general quality PRESET to MEDIUM or LOW!

 

Bottom line with "1 Monitor preset" multi-monitor setup (WALL): Such setup performs equally good as the single-monitor setup at comparable (total) resolution. Therefore it's possible to build a cheap large, high-resolution screen out of smaller panels.

5.7 VR resolution gaming performance analysis:

The purpose of this test is to measure the impact of running a stereo "VR" 2160x1200 display setup (thus simulating the Oculus Rift VR headset) on general FPS performance. The performance hit is not as obvious as with the "3 Monitors" high-FOV display setup, but it is still big enough to conclude the DCS engine is starving the cards. 4K results (HIGH preset) are added to the chart for reference.

 

HW setup:

- CPU: Intel Core i7 5775C @ 4GHz core / 3.3GHz uncore

- MB: ASUS ROG Maximus VII Gene

- RAM: 4x4GB DDR3-2133 C10 12-12-28 1T @ 1.35V

- GPU: GTX 960 and GTX 980 Ti

- Drive: 128GB Crucial BX100

- OS: Windows 10 Pro x64

- Cooling: EK-XLC Predator 240 - liquid cooled CPU; factory cooling on GPUs

- Monitor: Dell 2713HM

- Drivers: Nvidia 361.43

Image settings as tested (2160x1200 "Stereo" mode):

[Image: 31xVw4Kl.jpg]

 

Results:

[Image: ksVQnWc.png]

 

Simulated VR resolution performance analysis:

- Ironically enough, the GTX 960 (a 200€ card) here performs identically to the GTX 980 Ti (a 650€ card), which further points to an obvious bottleneck, most likely not hardware related. Since the GPUs are pushing a mere ~2.6 MPix image here (compared to ~8.3 MPix at 4K), both GPUs more than suffice for the workload.

- In order to maintain higher FPS, the same tricks apply as when running the "3 Monitors" high-FOV setup: drop the preset to either MEDIUM or LOW.

- Just like before, the GPUs are starved for the majority of the time when image quality is set to the HIGH preset. The only time the GPUs run at peak performance is when the view is facing the sky or clouds (no ground objects need to be drawn).

- I also tested a custom HIGH preset (only Visibility set to LOW and Trees set to MINIMUM), trying to show the effect of the number of polygons/objects on the FPS.

- It doesn't appear to be CPU or GPU bound. It has to be a DCS engine limitation, or the GPUs simply cannot cope with the number of polys on the screen.

5.8 GPU architecture impact on the FPS:

I spent a ton of time trying to get 4K VSR working on the R9 280/290/390 (GCN) family, since I'm testing on a 2560px monitor using NV DSR / AMD VSR for 4K emulation.

 

AMD is currently not a good choice for 'cheap' 4K TV gaming. The problem is that the R9 200/300/Fury/Nano cards can only put out 4K@30Hz over HDMI 1.4. The only way to get 4K@60Hz on current-gen AMD is to use the DisplayPort (DP) video outputs, which work great - 4K PC monitors with DP inputs are readily available, however DP 4K TVs are not. If your 4K TV only supports HDMI, it's better to get NVIDIA. Alternatively, you can purchase a DP 1.2 to HDMI 2.0 adapter, which bypasses all of the aforementioned issues. At the time of writing (Jan 2016) the cheapest 55" 4K@60Hz TVs with DP video inputs are still 8-10x more expensive than their HDMI 2.0 counterparts.
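A back-of-the-envelope check of why HDMI 1.4 caps out at 4K@30Hz: the required pixel clock (including blanking, using the standard 4400x2250 total 4K timing) exceeds HDMI 1.4's 340 MHz TMDS ceiling at 60Hz, which is exactly what HDMI 2.0's 600 MHz limit was introduced to fix:

```python
# Pixel-clock arithmetic behind the 4K@30Hz limit of HDMI 1.4.
TOTAL_W, TOTAL_H = 4400, 2250   # 3840x2160 active area + blanking
HDMI_14_MAX_MHZ = 340           # HDMI 1.4 TMDS clock ceiling
HDMI_20_MAX_MHZ = 600           # HDMI 2.0 raises the ceiling

for hz in (30, 60):
    mhz = TOTAL_W * TOTAL_H * hz / 1e6
    verdict = "fits HDMI 1.4" if mhz <= HDMI_14_MAX_MHZ else "needs HDMI 2.0 / DP 1.2"
    print(f"4K@{hz}Hz -> ~{mhz:.0f} MHz pixel clock: {verdict}")
# 4K@30Hz -> ~297 MHz pixel clock: fits HDMI 1.4
# 4K@60Hz -> ~594 MHz pixel clock: needs HDMI 2.0 / DP 1.2
```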

Test system:

- CPU: Intel Core i7 5775C @ 4GHz core / 3.3GHz uncore

- MB: ASUS ROG Maximus VII Gene

- RAM: 4x4GB DDR3-2133 C10 12-12-28 1T @ 1.35V

- GPU: (various)

- Drive: 128GB Crucial BX100

- OS: Windows 10 Pro x64

- Cooling: EK-XLC Predator 240 - liquid cooled CPU; factory cooling on GPUs

- Monitor: Dell 2713HM

- Drivers: Nvidia 361.43 / AMD Crimson 15.12

 

[Image: ek-predator_240_box_art_800.jpg - EK-XLC Predator 240]

 

Final results are as follows:

[Image: Eq4jNv2.png]

 

[Image: xZTxHxt.png]

 

[Image: qOJlWkW.png]

 

GPU Architecture Impact result analysis:

- Please allow for +/- 2% result accuracy. Overlapping peaks/dips across the graphs are always a good sign of reliable data acquisition (a simple way to recompute min/avg/max FPS from a frame-time log is sketched after this list).

- Currently all the tested GPUs are powerful enough to drive FHD resolution. At FHD most of the NVIDIA GPUs seem to be CPU bound (as shown by almost identical max and min values).

- All of the tested GPUs, including the old GTX 580, still show adequate performance at WQHD (2560x1440) by maintaining minimum FPS above 30. Smooth gameplay at this resolution starts with the R9 280X (HD 7970). All of the tested NVIDIA GPUs are still pretty much CPU bound, as the differences between the GPUs are close to nil.

- Only at 4K does the difference in GPU power start to show - currently, the most powerful GTX 980 Ti and Titan X are in a league of their own, while the GTX 780 Ti still holds its ground against the newer GTX 970.

- Generally, NVIDIA cards offer higher minimum and lower maximum FPS

- CrossFireX seems to cause micro-stuttering, which can reportedly be remedied by setting MODEL VISIBILITY to off (however, I haven't found that option in the settings as of version 1.5.2)
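As referenced in the first bullet, here is a minimal sketch for recomputing the min/avg/max FPS figures used throughout this thread. It assumes a plain text log with one frame time in milliseconds per line (the file name is hypothetical; a FRAPS-style cumulative frametimes CSV just needs differencing first):

```python
# Compute min/avg/max FPS from a per-frame frame-time log (ms per line).
def fps_stats(path):
    with open(path) as f:
        frame_ms = [float(line) for line in f if line.strip()]
    fps = [1000.0 / dt for dt in frame_ms if dt > 0]
    avg = len(frame_ms) / (sum(frame_ms) / 1000.0)  # frames per total second
    return min(fps), avg, max(fps)

lo, avg, hi = fps_stats("frametimes.txt")  # hypothetical log file
print(f"min {lo:.0f} / avg {avg:.0f} / max {hi:.0f} FPS")
```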

 

Just a question - how come you get such low performance with the R9 295X2? On my setup it beats the GTX 1080 by nearly 60%.


Could someone explain to me how you can actually get this game to utilize 4 CPU cores? I am running an overclocked i7 and it is only using 2 cores, and seems to be using mainly one (I think the other is for sound).

 

At the beginning of this thread it says that DCS 1.5 scales great up to 4 cores. I would love to know how to make it use all my cores so it isn't maxing out 1 core.
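One way to check this for yourself is to watch per-core load while flying. A minimal sketch using the third-party psutil package (the 30-second window is arbitrary):

```python
# Print per-core CPU load once per second while DCS is running.
import psutil

for _ in range(30):  # sample for ~30 seconds
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    print(" ".join(f"{p:5.1f}" for p in per_core))
# One column pegged near 100% with the rest mostly idle matches the
# "one main thread plus one sound thread" behaviour described below.
```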

HTC Vive, Saitek X52 Pro, i7-950 Overclocked to 4ghz, Noctua NH-D14 cooler, ASRock x58 Extreme MB, EVGA GTX 970 FTW, 24GB G Skill Sniper DDR3 1600 RAM, EVGA 650-GQ (650 watts) PSU, Windows 10 Home


Oh, I also noticed it is only using about 4GB of my 24GB of RAM, so I'm not sure if there is a way to allow it to use more.

HTC Vive, Saitek X52 Pro, i7-950 Overclocked to 4ghz, Noctua NH-D14 cooler, ASRock x58 Extreme MB, EVGA GTX 970 FTW, 24GB G Skill Sniper DDR3 1600 RAM, EVGA 650-GQ (650 watts) PSU, Windows 10 Home


Could someone explain to me how you can actually get this game to utilize 4 CPU cores?

 

You can't.

 

It is working just the way it was coded to work: one core for sound and one core for everything else. That isn't going to change, no matter what you do on your end.

ASUS ROG Maximus VIII Hero, i7-6700K, Noctua NH-D14 Cooler, Crucial 32GB DDR4 2133, Samsung 950 Pro NVMe 256GB, Samsung EVO 250GB & 500GB SSD, 2TB Caviar Black, Zotac GTX 1080 AMP! Extreme 8GB, Corsair HX1000i, Phillips BDM4065UC 40" 4k monitor, VX2258 TouchScreen, TIR 5 w/ProClip, TM Warthog, VKB Gladiator Pro, Saitek X56, et. al., MFG Crosswind Pedals #1199, VolairSim Pit, Rift CV1 :thumbup:


You can't.

 

It is working just the way it was coded to work: one core for sound and one core for everything else. That isn't going to change, no matter what you do on your end.

 

Then what is the meaning of this statement by the OP on page 1?

 

1. If budget allows - get a fast (overclockable) quad-core CPU! Yes, DCS scales great up to 4 cores!

 

 

I understand that DCS World runs on only 2 cores (essentially just 1, not counting sound), but I was asking because this statement suggests that 4 cores would make a difference, which doesn't seem to be true.

 

 

Also, this needs to change, because I don't see how you can have a stable VR experience otherwise. It seems a lot of the problems with DCS World 1.5 in VR are related to the lack of CPU utilization. This game requires a Core 2 Duo as a minimum, but VR requires much more.

 

That's my only issue right now - VR performance - and I heard something is supposed to be in the works to fix that.


HTC Vive, Saitek X52 Pro, i7-950 Overclocked to 4ghz, Noctua NH-D14 cooler, ASRock x58 Extreme MB, EVGA GTX 970 FTW, 24GB G Skill Sniper DDR3 1600 RAM, EVGA 650-GQ (650 watts) PSU, Windows 10 Home


I have no idea what that statement means.

 

ED has previously stated that they have no intention of changing the code to use more cores/threads.

 

EDIT: The advantage of having a four-core CPU is that Windows has a place to run its processes, which reduces their impact on the cores that DCS is using. DCS itself doesn't care.



ASUS ROG Maximus VIII Hero, i7-6700K, Noctua NH-D14 Cooler, Crucial 32GB DDR4 2133, Samsung 950 Pro NVMe 256GB, Samsung EVO 250GB & 500GB SSD, 2TB Caviar Black, Zotac GTX 1080 AMP! Extreme 8GB, Corsair HX1000i, Phillips BDM4065UC 40" 4k monitor, VX2258 TouchScreen, TIR 5 w/ProClip, TM Warthog, VKB Gladiator Pro, Saitek X56, et. al., MFG Crosswind Pedals #1199, VolairSim Pit, Rift CV1 :thumbup:


I have no idea what that statement means.

 

ED has previously stated that they have no intention of changing the code to use more cores/threads.

 

EDIT: The advantage of having a four-core CPU is that Windows has a place to run its processes, which reduces their impact on the cores that DCS is using. DCS itself doesn't care.

 

I understand, but every other game that uses 4 cores doesn't cause Windows any issues. It really wouldn't hurt anything to utilize 4 cores and would only help, but I guess it's too much trouble for them to change it that much, maybe.

 

They have to do SOMETHING though, because VR needs a more stable FPS, and when people claim they have GTX 1080s (which have simultaneous multi-projection to help FPS a lot) and are still getting lower than 90 FPS... something is very wrong.

HTC Vive, Saitek X52 Pro, i7-950 Overclocked to 4ghz, Noctua NH-D14 cooler, ASRock x58 Extreme MB, EVGA GTX 970 FTW, 24GB G Skill Sniper DDR3 1600 RAM, EVGA 650-GQ (650 watts) PSU, Windows 10 Home


I understand, but every other game that uses 4 cores doesn't cause Windows any issues. It really wouldn't hurt anything to utilize 4 cores and would only help, but I guess it's too much trouble for them to change it that much, maybe.

 

They have to do SOMETHING though, because VR needs a more stable FPS, and when people claim they have GTX 1080s (which have simultaneous multi-projection to help FPS a lot) and are still getting lower than 90 FPS... something is very wrong.

 

 

You are not alone; we all wish DCS would utilise more cores, and in one way it actually does now in NTTR 2.x, as DX11 uses more cores and thus DCS now utilises more of the given hardware.

Actually, the way that is handled ain't bad at all, as DX chooses different core(s) and won't put that load on the 2 cores already in use.

 

You can clearly see how much this helps when you measure performance and HW utilisation in 1.5.x and 2.x... worlds apart in GPU utilisation. NTTR easily goes to 99% on my GTX 980, but I struggle to get it beyond 2/3 utilisation in 1.5.


Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 


You are not alone; we all wish DCS would utilise more cores, and in one way it actually does now in NTTR 2.x, as DX11 uses more cores and thus DCS now utilises more of the given hardware.

Actually, the way that is handled ain't bad at all, as DX chooses different core(s) and won't put that load on the 2 cores already in use.

 

You can clearly see how much this helps when you measure performance and HW utilisation in 1.5.x and 2.x... worlds apart in GPU utilisation. NTTR easily goes to 99% on my GTX 980, but I struggle to get it beyond 2/3 utilisation in 1.5.

 

Well, that's comforting to hear - that I am not alone. I am also glad to hear that 2.0 is running better than 1.5. I really want to try 2.0, but since you have to buy NTTR to do it, I was hesitant. I am looking to buy a module or two in the next week or so... maybe I will consider NTTR now so that I can run 2.0 and get some better VR. :)

HTC Vive, Saitek X52 Pro, i7-950 Overclocked to 4ghz, Noctua NH-D14 cooler, ASRock x58 Extreme MB, EVGA GTX 970 FTW, 24GB G Skill Sniper DDR3 1600 RAM, EVGA 650-GQ (650 watts) PSU, Windows 10 Home


  • 2 weeks later...

So what do you think: clock for clock, would an i3 6100 (2c/4t) vs. an i5 6600 (4c/4t) perform the same in DCS, if DCS uses only 2 cores? I mean, you can overclock the 6100 via BCLK to get at least 4.5, and as my practice shows even 4.7-4.8, so it would be as good as a 6600 at the same 4.7-4.8, right? The 6100 is cheap now and you don't need as beefy a cooler, so I estimate a system built specifically for DCS would cost 150€ less than with a 6600, or even 200€ less than with a 6600K. What do you think? Maybe someone is on an overclocked 6100 and can give some feedback ;)


So what do you think: clock for clock, would an i3 6100 (2c/4t) vs. an i5 6600 (4c/4t) perform the same in DCS, if DCS uses only 2 cores? I mean, you can overclock the 6100 via BCLK to get at least 4.5, and as my practice shows even 4.7-4.8, so it would be as good as a 6600 at the same 4.7-4.8, right? The 6100 is cheap now and you don't need as beefy a cooler, so I estimate a system built specifically for DCS would cost 150€ less than with a 6600, or even 200€ less than with a 6600K. What do you think? Maybe someone is on an overclocked 6100 and can give some feedback ;)

As you can see in the benchmarks above, and in BitMaster's comment, DCS clearly scales at least up to 4 cores.

So I would not choose less than an i5 for CPU.

System specs:

 

Gigabyte Aorus Master, i7 9700K@std, GTX 1080TI OC, 32 GB 3000 MHz RAM, NVMe M.2 SSD, Oculus Quest VR (2x1600x1440)

Warthog HOTAS w/150mm extension, Slaw pedals, Gametrix Jetseat, TrackIR for monitor use

 


I can't give any specific numbers, but I would imagine there would be minimal benefit to running 4 cores with HT unless you have lots of applications running at the same time. As DCS uses 2 threads (1 for the game engine and 1 for the sound engine), that's 2 cores you would want to always be free for DCS. I think HT enables 2 threads to run on 1 core in what appears to be parallel, as long as both threads are not utilizing 100% of the core. So 2 cores is not enough for DCS; 4 cores is already good. I have a 6-core CPU (now, and in my previous system also) and I have HT disabled.

 

Maybe someone with better knowledge and more details can chip in.

 

I remember back in the days when IL-2 came out, CPUs with HT were benchmarked to see what difference HT on and off makes, and I remember there was only a few FPS difference. Very minor.
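As an aside, the physical vs. logical split HT creates is easy to inspect with psutil's core counts (a trivial check, nothing DCS-specific):

```python
# Physical cores vs. logical (hyperthreaded) cores, via psutil.
import psutil

print("physical cores:", psutil.cpu_count(logical=False))
print("logical cores: ", psutil.cpu_count(logical=True))
# On a 4-core i7 with HT this prints 4 and 8: HT doubles the number of
# schedulable threads, not the execution resources, which is why the
# measured FPS differences were so small.
```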

No longer active in DCS...


It takes 1 reboot and 1 change in a BIOS setting to find out ;)

 

I would use HandBrake to convert a big video file with and without HT enabled and check the times.

 

From what I understand, w/o HT it will just take twice as much time, as you skip 4 threads and a core won't be twice as fast w/o HT in a single-thread config... but I might be wrong.

 

If it was twice as fast w/o HT, every gamer would buy an i7 and disable HT... think of it that way.

 

Though having all the cache for half the cores *might* speed up computing in limited scenarios on selected systems.

 

Actually, I think I will try this myself once I get to my gaming rig.
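For anyone who wants a quicker experiment than a full HandBrake run, here is a rough timing harness in the same spirit: run an identical CPU-bound job with 4 and 8 workers and compare wall time (assumes a 4-core/8-thread CPU; the synthetic workload is a stand-in for real encoding):

```python
# Time the same CPU-bound workload with and without oversubscribing
# the physical cores, to estimate what HT actually buys.
import time
from multiprocessing import Pool

def burn(n):
    s = 0
    for i in range(n):
        s += i * i
    return s

if __name__ == "__main__":
    jobs = [5_000_000] * 16
    for workers in (4, 8):
        t0 = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(burn, jobs)
        print(f"{workers} workers: {time.perf_counter() - t0:.2f}s")
# If HT were as good as real cores, 8 workers would be ~2x faster than 4;
# in practice the gain is usually far smaller, matching the posts above.
```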



Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 


The thing with hyperthreading and overclocking is that your CPU won't overclock as much under heavy 5-8 threaded loads on a 4-core CPU with hyperthreading, because it generates more heat. So turning off hyperthreading will allow higher clocks in those conditions; otherwise I don't see how it would make much difference, unless the 2-4 threaded app can't tell the difference between a logical and a physical core.

 

As for speed increases: not even close to double the speed. Hyperthreading just ensures there's less downtime on the physical cores, because some instructions leave the CPU sitting there doing nothing for a while, when it could be starting another task while it waits.



My youtube channel Remember: the fun is in the fight, not the kill, so say NO! to the AIM-120.

System specs:ROG Maximus XI Hero, Intel I9 9900K, 32GB 3200MHz ram, EVGA 1080ti FTW3, Samsung 970 EVO 1TB NVME, 27" Samsung SA350 1080p, 27" BenQ GW2765HT 1440p, ASUS ROG PG278Q 1440p G-SYNC

Controls: Saitekt rudder pedals,Virpil MongoosT50 throttle, warBRD base, CM2 stick, TrackIR 5+pro clip, WMR VR headset.



  • 2 months later...

Very hard work was performed by the tester, and it inspires great respect.

 

Unfortunately the test track isn't attached here, so we can't see what was actually tested.

Judging by the description in Para 4, "Testing methodology explained", this is a simple dogfight without a large quantity of aircraft/ships/ground vehicles, without ground attack, flying into a city, or ground battles.

 

My DCS testing included 6 tracks with various scenes (ordinary panorama, multi-polygonal scene, multi-explosion scene, rocket launches, smoke, tank sight view). And in each situation the results were very different.

 

I consider the conclusion "GTX 960/970 is the P/P king for playing DCS at WQHD/4K" absurd for many DCS users, because in very many situations even an overclocked 980 Ti demonstrates a "slide show".

 

In any case, it was very informative for fighter pilots, for example.

 

VR HP Reverb G2, Monitor Samsung C32HG70, CPU Ryzen 5800Х3D, MSI X570 ACE, RAM CMK32GX4M2F4000C19 2X16Gb (@3600), M.2 960PRO 500Gb, M.2 2000Gb, VGA RedDevil 6900XTU, EKWB CPU+GPU


  • 2 weeks later...

HT is not SMP; Hadwell put it really nicely in words.

 

I have to admit, I keep my HT enabled while riding the 5GHz wave ;)

 

Maybe I should disable it and go for 5.2 :)

Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 


I can't give any specific numbers, but I would imagine there would be minimal benefit to running 4 cores with HT unless you have lots of applications running at the same time. As DCS uses 2 threads (1 for the game engine and 1 for the sound engine), that's 2 cores you would want to always be free for DCS. I think HT enables 2 threads to run on 1 core in what appears to be parallel, as long as both threads are not utilizing 100% of the core. So 2 cores is not enough for DCS; 4 cores is already good. I have a 6-core CPU (now, and in my previous system also) and I have HT disabled.

 

Maybe someone with better knowledge and more details can chip in.

 

I remember back in the days when IL-2 came out, CPUs with HT were benchmarked to see what difference HT on and off makes, and I remember there was only a few FPS difference. Very minor.

 

Back in the days when the Pentium 4 introduced HT, it caused quite some grief, as many titles started to stutter at the point where 1 core was flooded and unloaded to the next "HT core". Whenever that unload took place, my BF1942 and BF2 had nice "hiccups"; it even cursed our Linux root server in the data centre. We all hated HT back then, really HATED it. A single core + HT is a nightmare!

 

Meanwhile I don't "feel" any unloading stutter etc.; that got a lot better. Still, HT is a lame substitute for true SMP.

 

The good thing is, HT works regardless of the programming code AFAIK, whereas SMP support needs to be in the code to be made use of. Many titles, if not most, disregard SMP, which is a shame and a pity, as our new CPUs get more cores but the same or lower clocks ;(



Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 


  • 2 weeks later...

Has anybody tried setting application core affinities specifically for DCS? I mean, has anybody tried to give DCS 2 exclusive cores so that no background apps or Windows itself uses those two? I know Windows is good enough to manage resources itself, but as we know we need every last drop of single-core CPU power, so maybe that helps? What do you think, guys?
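For anyone wanting to try the affinity experiment, here is a minimal sketch using the third-party psutil package. The process name "DCS.exe" and the core indices are assumptions for illustration; pinning every other process away from DCS is aggressive, needs an elevated prompt, and some system processes will simply refuse:

```python
# Give DCS two exclusive cores and push everything else elsewhere.
import psutil

DCS_CORES = [2, 3]    # cores reserved for DCS (assumed indices)
OTHER_CORES = [0, 1]  # cores for everything else

for proc in psutil.process_iter(["name"]):
    try:
        if proc.info["name"] == "DCS.exe":  # assumed process name
            proc.cpu_affinity(DCS_CORES)
        else:
            proc.cpu_affinity(OTHER_CORES)
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        pass  # protected system processes may refuse the change
```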


I did it some time ago and couldn't see any jump ahead, only the time lost trying it.

 

The limit is rather DX and not the already fast CPUs. I often see my CPU at 85-90%, not fully loaded, but bottlenecked somewhere else. I suspect DirectX is the cause, but that is beyond my understanding, as I am no D3D dev. SkateZilla might know this one a lot better than I do; he may read this and give his salt as well ;)

Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 

