
Crossfire and future CPU build with Ryzen


Recommended Posts

I have some questions regarding hardware.

 

Does DCS support AMD CrossFire, or will it ever? GPUs are ridiculously expensive. Mine is three years old and still nearly as powerful as an Nvidia GTX 1070, and a used AMD Fury goes for about 250 USD. Rather than buying an $800 GPU and only getting 25% more performance (and that is probably optimistic), I could just buy another one of mine and double the power. Why is DCS not optimized for dual GPUs yet? Or is it?

 

When I upgrade to a Ryzen 2700X, will I see any performance increase over my 4.5 GHz i5-6600K?


To the best of my knowledge, multi-GPU = no improvement.

 

Compare the IPC of your current CPU and the 2700X to see if it is worth it.

DCS is pretty much single-threaded, so more cores aren't going to help you.

METAR weather for DCS World missions

 

Guide to help out new DCS MOOSE Users -> HERE

Havoc Company Dedicated server info Connect IP: 94.23.215.203

SRS enabled - freqs - Main = 243, A2A = 244, A2G = 245

Please contact me HERE if you have any server feedback or METAR issues/requests


Why are game devs not improving their games for multi-GPU and multi-core CPUs?

 

To get the same power out of two cheaper GPUs, I'd have to wait about five years or more; it doesn't make sense.

 

I am going to have to upgrade regardless, because I use my PC for work, school, and gaming, but I was not sure whether DCS would run any better.

 

Thanks


  • 2 weeks later...

Irritating, to say the least, for me...

The latest patch has made the game completely unplayable (I have the best setup possible), and it can't run for crap...

 

Not to mention I have two GPUs that run in CrossFire for ALL new games... FLAWLESSLY...

It makes a HUGE improvement in FPS, and I can turn everything up to full. I don't have a 4K monitor or an ultrawide screen, but still, if DCS could get CrossFire (and whatever Nvidia calls its equivalent nowadays) up and running, it would help...

Because yes, it is an old single-threaded game that cannot run better with more cores, but it would run better if two GPUs could share the graphics work.


Looking at the path, and the past,

 

you probably won't see much in terms of multi-GPU (mGPU) improvement until Vulkan.

 

AMD's CrossFire drivers are troublesome most of the time, and Nvidia's have their own quirks.

 

The move to Vulkan will shift multi-GPU resource management to the Khronos Vulkan driver.

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


1 core, one core, everyone says ONE core.

 

LOOK at your CPU graph when you play ONLINE with a HEAVY Mission.

 

My 6c/12t CPU is often, really OFTEN, 50% used, and I do force DCS onto the physical, non-HT cores.
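For anyone who wants to replicate that: affinity can be set by hand in Task Manager, or programmatically. Below is a minimal Windows sketch of the idea; the PID and the 0x555 mask are placeholders I made up, and on most HT CPUs the even-numbered logical processors map to the physical cores.

```cpp
// Hypothetical sketch: pin a running process (e.g. DCS.exe) to the even
// logical processors, which on a 6c/12t HT CPU are the six physical cores.
#include <windows.h>
#include <cstdio>

int main() {
    const DWORD pid = 12345;        // placeholder: look up DCS.exe's PID first
    const DWORD_PTR mask = 0x555;   // logical CPUs 0,2,4,6,8,10
    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (!h) {
        std::fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }
    if (!SetProcessAffinityMask(h, mask))
        std::fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
    CloseHandle(h);
    return 0;
}
```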

 

Ergo, it already employs all SIX cores I have. I have posted multiple screenshots and even videos on YouTube that prove this.

 

Stop that one-core talk; it is obsolete by now.

 

Throwing in more cores helps, and more MHz certainly helps too, but DCS can and does use multiple threads in the overall picture.

 

As c0ff (?) said, DCS already multithreads wherever it can, and DX11 also uses more than one core; not very cleverly, but it does. The outcome is what I see in my graph: six cores are often used to a high extent. Yes, there are also times in the same mission when total CPU usage is 12%, but 10 seconds later, with a different view angle and whatever else, I see all six cores at 70-100% each, REPRODUCIBLE anytime.

 

One core? NO!


Edited by BitMaster

Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 


DCS will be fully converted to Vulkan, and that can leverage any number of cores and graphics cards (using multi-GPU techniques, not SLI/CrossFire anymore), though at this time the impact of high parallelism on the graphics card side is totally unknown. So for now the sane decision is to get the best GFX card you can, and that's it.


My PC specs below:

Case: Corsair 400C

PSU: SEASONIC SS-760XP2 760W Platinum

CPU: AMD RYZEN 3900X (12C/24T)

RAM: 32 GB 4266Mhz (two 2x8 kits) of trident Z RGB @3600Mhz CL 14 CR=1T

MOBO: ASUS CROSSHAIR HERO VI AM4

GFX: GTX 1080Ti MSI Gaming X

Cooler: NXZT Kraken X62 280mm AIO

Storage: Samsung 960 EVO 1TB M.2+6GB WD 6Gb red

HOTAS: Thrustmaster Warthog + CH pro pedals

Monitor: Gigabyte AORUS AD27QD Freesync HDR400 1440P

 


128 cores

4 x Über-GPU

1TB RAM DDRx

 

way to go !

 

We never thought drives would get bigger than 40MB, or that RAM would ever exceed 1MB. The internet? What's THAT??

 

Tempus fugit, and I wish DCS a long life; therefore it needs to adapt to new technologies every so often.

 

Vulkan, at best, will only be one milestone of many to come.

Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 


Quote (BitMaster): "1 core, one core, everyone says ONE core. [...] One core? NO!"

 

The base DCS World code is serial, single-threaded code,

 

but ED have stated recently that when parts of the base code are rewritten, they try to code them for parallel threads.

 

So it may not be just the DCS core + XAudio threads anymore.

 

Quote (c0ff):

what is being refactored/rewritten does use multi-threading where possible.

 

atm the major parts which do it are:

resource loading and other I/O (logging, input, ffb ...), integrity checking, sound

 

w/o rewriting the engine, splitting the simulation into a thread separate from the graphics will benefit it most; this will be, in fact, the mentioned client-server approach, just inside the app.

 

The question about effective use of multicore is of course a pretty legitimate and hot one.

The C++ code base of DCS is >1.5M lines of non-trivial and sometimes very complex code, which was mostly architected in the single-CPU era.

The size of the team can be easily estimated from the About box of the game.

 

We are definitely moving to multicore; obviously it's the present and the future (if there's a future for PCs at all).

But DCS is our bread and butter and we just can't throw it away.

 

So things are definitely moving, but not as fast as everyone, including us, wants.
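To make the quoted "client-server approach, just inside the app" concrete, here is a minimal illustrative sketch (my own, not ED code): a simulation thread advances the world at a fixed step and publishes snapshots, while a render thread draws the latest snapshot at whatever frame rate it manages. The types and rates are invented for illustration.

```cpp
// Illustrative sketch only: a fixed-step simulation thread and an
// independent render thread sharing the latest world snapshot.
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct WorldState { double x = 0.0; };   // stand-in for the full sim state

std::mutex stateMutex;
WorldState shared;                       // latest published snapshot
std::atomic<bool> running{true};

void simThread() {                       // "server": fixed 16 ms physics step
    WorldState local;
    while (running) {
        local.x += 0.016 * 250.0;        // advance 16 ms at 250 m/s
        {
            std::lock_guard<std::mutex> lk(stateMutex);
            shared = local;              // publish the finished snapshot
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}

void renderThread() {                    // "client": draws the newest snapshot
    while (running) {
        WorldState snap;
        {
            std::lock_guard<std::mutex> lk(stateMutex);
            snap = shared;               // cheap copy, then render unlocked
        }
        // ... submit draw calls for snap here ...
        std::this_thread::sleep_for(std::chrono::milliseconds(7)); // ~144 fps
    }
}

int main() {
    std::thread sim(simThread), render(renderThread);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    running = false;
    sim.join();
    render.join();
}
```

The point of the split is visible in the two sleeps: the loops tick at unrelated rates, so a slow frame no longer stalls the physics, and each loop can live on its own core.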


Edited by SkateZilla

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs

Link to comment
Share on other sites

Good to read, but it has its limitations.

Ex: a bullet and its trajectory; the moment of impact with another object has to be synchronized. You can't do that with two threads on two different cores.

The 6 cores will be obsolete :lol:


Edited by Demon_

Attache ta tuque avec d'la broche.


Quote (Demon_): "Good to read, but it has its limitations. Ex: a bullet and its trajectory; the moment of impact with another object has to be synchronized. You can't do that with two threads on two different cores. The 6 cores will be obsolete :lol:"

 

That's not the point.

 

The point is to take as much work as possible away from that ONE core, so it can do exactly those things you cannot split, undisturbed and dedicated, and doesn't have to do this or that which any other free core could do just as well.

 

There are many options to spread the combined workload across more chips/cores.

Let's talk about what can be done, not about what we know cannot be done, however much we would love it to happen.
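As a concrete illustration of that offloading (my own sketch, not how ED actually implements it): blocking work such as logging can be handed to a background thread, so the frame loop only ever pays for a quick queue push instead of waiting on the disk.

```cpp
// Sketch of moving blocking file I/O off the main thread.
#include <condition_variable>
#include <fstream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class AsyncLogger {
    std::queue<std::string> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
    std::thread worker;                  // drains the queue in the background
public:
    AsyncLogger() : worker([this] {
        std::ofstream out("app.log", std::ios::app);
        std::unique_lock<std::mutex> lk(m);
        while (!done || !q.empty()) {
            cv.wait(lk, [this] { return done || !q.empty(); });
            while (!q.empty()) { out << q.front() << '\n'; q.pop(); }
        }
    }) {}

    void log(std::string msg) {          // called from the frame loop:
        {                                // just a quick push, no disk I/O
            std::lock_guard<std::mutex> lk(m);
            q.push(std::move(msg));
        }
        cv.notify_one();
    }

    ~AsyncLogger() {                     // flush remaining lines, then join
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
        worker.join();
    }
};

int main() {
    AsyncLogger logger;
    for (int i = 0; i < 1000; ++i)
        logger.log("frame " + std::to_string(i) + " ok");
}
```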

Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 


Quote (c0ff): "what is being refactored/rewritten does use multi-threading where possible. [...] w/o rewriting the engine, splitting the simulation into a thread separate from the graphics will benefit it most; this will be, in fact, the mentioned client-server approach, just inside the app."

 

"splitting simulation into a separate thread from the graphics will benefit it most"

 

This is very interesting to me: the simulation engine will be separate from the EDGE graphics engine.

 

These then will potentially run on separate cores/threads.

 

The I/O (logging, input, FFB ...), integrity checking, and sound would then potentially be on others.

 

Even this would be huge in DCS: three fully used cores/threads.

 

I see that as a huge step up on the CPU side; then add the Vulkan API to the mix!

 

Quote (Wags): "For the past several months, we have also been working on Vulkan API integration into DCS World, which we expect to result in a substantial performance boost."

 

I'm thinking they have run the numbers here before really pushing this way; Wags seems a little excited... me too!

 

Quote (Demon_): "Good to read, but it has its limitations. Ex: a bullet and its trajectory; the moment of impact with another object has to be synchronized. You can't do that with two threads on two different cores. The 6 cores will be obsolete :lol:"

 

The only thing that would need updating across cores here is the simulation's hit events going to the EDGE graphics engine ("client-server"). That makes sense for local play or multiplayer (dedicated server perhaps ;))... If the hits are even 0.01 of a second off from the visual reference, you would never see it (at 60 FPS one frame lasts about 16.7 ms, so a 10 ms discrepancy is less than a frame), and I'm thinking it would be quicker than that when synchronizing locally (unless there's net lag in MP ;)).

 

Here is a free, open-source simulation engine in C++ with XML configuration files, if you want to take a look at one that can be used with a visual engine; this one works with Outerra.

 



Edited by David OC

i7-7700K OC @ 5Ghz | ASUS IX Hero MB | ASUS GTX 1080 Ti STRIX | 32GB Corsair 3000Mhz | Corsair H100i V2 Radiator | Samsung 960 EVO M.2 NVMe 500G SSD | Samsung 850 EVO 500G SSD | Corsair HX850i Platinum 850W | Oculus Rift | ASUS PG278Q 27-inch, 2560 x 1440, G-SYNC, 144Hz, 1ms | VKB Gunfighter Pro

Chuck's DCS Tutorial Library

Download PDF Tutorial guides to help get up to speed with aircraft quickly and also great for taking a good look at the aircraft available for DCS before purchasing. Link


As much as Microsoft states that DirectX 11 is multi-threaded, it's not, at least not correctly.

 

Multi-threading was added to DirectX 11 after the fact, and it was done very poorly.

 

Scenes with large object counts will still cause overhead and overload a single thread.

 

Vulkan will not have that problem, at all.

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


Yeah, because in DX11 the additional threads are completely reliant on the parent thread to dish out the render work, and that parent is still trapped behind the rest of the engine pipeline, whereas Vulkan render threads can behave independently and asynchronously.

 

Vulkan's threads will be used for Vulkan's API functions,

but the bulk of the GPU commands will be sent directly to the GPU for processing, not channeled through an API thread to be converted from the API's language to the hardware's.

 

This removes much, if not all, of the CPU overhead/bottleneck, and the lag, low GPU usage, and low FPS seen in high-object-count scenes.
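For what it's worth, the part of this that is well documented is the recording side: in Vulkan, command-buffer recording itself can run in parallel, with each worker thread recording into its own command pool (pools must not be shared between threads) and the main thread submitting everything with a single vkQueueSubmit. A rough sketch, assuming an already-created VkDevice and queue family index, with error handling and cleanup omitted:

```cpp
// Rough sketch (not DCS code): per-thread Vulkan command recording.
#include <vulkan/vulkan.h>
#include <thread>
#include <vector>

VkCommandBuffer recordChunk(VkDevice dev, uint32_t queueFamily) {
    // One command pool per thread: pools are not thread-safe to share.
    VkCommandPoolCreateInfo poolInfo{};
    poolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
    poolInfo.queueFamilyIndex = queueFamily;
    VkCommandPool pool;
    vkCreateCommandPool(dev, &poolInfo, nullptr, &pool);

    VkCommandBufferAllocateInfo allocInfo{};
    allocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
    allocInfo.commandPool = pool;
    allocInfo.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    allocInfo.commandBufferCount = 1;
    VkCommandBuffer cmd;
    vkAllocateCommandBuffers(dev, &allocInfo, &cmd);

    VkCommandBufferBeginInfo begin{};
    begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    vkBeginCommandBuffer(cmd, &begin);
    // ... vkCmdBindPipeline / vkCmdDraw* calls for this thread's objects ...
    vkEndCommandBuffer(cmd);
    return cmd;
}

// One recorder per core; nothing is shared, so no locks are needed.
std::vector<VkCommandBuffer> recordAllParallel(VkDevice dev, uint32_t qf, unsigned n) {
    std::vector<VkCommandBuffer> cmds(n);
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back([&cmds, dev, qf, i] { cmds[i] = recordChunk(dev, qf); });
    for (auto& t : workers) t.join();
    return cmds;  // submit all at once with one vkQueueSubmit on the main thread
}
```

DX11's deferred contexts look similar on paper, but the final ExecuteCommandList calls still serialize through the immediate context, which is the bottleneck described above.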

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


I'm pretty sure you're never bypassing the API; it's just that the API is far faster in how it works. To bypass the API and talk directly to the GPU, a game developer would have to interface with the GPU driver directly, which means maintaining vendor-specific code paths and thus going against the vendor-agnostic nature of an API in the first place.


Quote (blkspade): "I'm pretty sure you're never bypassing the API; it's just that the API is way faster in the way it works. [...]"

 

The days of needing a heavy API buffer between your graphics engine and the hardware are over.

Now, to process graphics faster, most commands are sent directly to the GPU.

 

15-20 years ago this was not possible, as GPUs were limited in functionality and there were over half a dozen different architectures and languages.

 

Modern GPUs of the last 8 years can perform more graphics functions from raw commands, along with other functions and embedded instruction sets. The GPUs are capable of doing the work faster and more efficiently than a CPU-bound software API layer would.

 

Vulkan is a low-overhead, low-level API.

It offers significantly reduced CPU overhead on any type of draw or texture call compared to DirectX 11, 10, 9, OpenGL, etc.

 

There are only three low-overhead APIs in use for general PC gaming:

DirectX 12: Windows 10

Vulkan: Android, Linux, Tizen, Windows 7, Windows 8, Windows 10, iOS, and macOS

Metal: macOS, iOS, tvOS

 

Vulkan has better performance and resource management than DirectX 12 as well.

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


Draw calls are still, however, a factor of the CPU, which is what I'm trying to point out. You can refer to this Vulkan demo video and even run the demo yourself. You can dynamically scale between 1 and 8 threads and see how it affects CPU and GPU load. I currently have a Ryzen 2700 and a 1080 Ti, and the video maker lists his settings in the video. I outperform his system by a lot of frames and draw calls on 8 threads, but lose by a decent margin on 1 thread, even though my video card is 3 generations ahead of his.

 

https://drive.google.com/file/d/16klXl6fAxeJee4j9WOXyaQ9aYpAvy3Tr/view?usp=sharing

 

https://drive.google.com/open?id=1lQbk_48Uy8kuifMmcHdT_2nDc6J0i-1W


Edited by blkspade

MEEHHH

 

Those fish produce a very ugly coil whine... @1770 fps, 72% GPU load, and ca. 33% CPU load, LOL.

 

The choppers stop at 144 fps with ca. 60% GPU and 3% CPU. LOL, WOW.

 

Impressive... just that coil whine... very, very BAD. Some algos produce a minor coil whine while mining, Skein for example, but oh boy, those fish make the card scream!!!

 

 

 

**All tests on a stock 8700K, no OC at all... fighting some ugly BSODs currently.

Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 

