Ryzen 5-3600X results


DeltaMike

Recommended Posts

Platform in sig. For reference, I typically run the same settings on my Vega as my buddies do on 1080 or 1070ti.

 

The Ryzen 5 3600X is a six-core processor with a base clock of 3.8GHz and a max boost of 4.4GHz. AMD is claiming a 15% increase in instructions-per-cycle over the last generation. That doesn't mean it's 15% faster than an Intel chip running the same clock speed, but I figure it's nothing to sneeze at.

 

Comparison is with a Ryzen 5 2600: 3.4GHz base, boosting up to 3.9GHz, and I've seen it do that during stress testing. I was able to overclock it to 4.0GHz, no more, and finally just wound up running it stock.

 

I "overclocked" the memory to get it to its rated speed; the 3600X itself was run stock, without overclocking.

 

All tests were done in the F-18. Graphics settings were held constant; if you're dying to know, I'll go over them, but I don't think they're really relevant here.

 

Since we are talking about VR here, I'm going to report render times rather than FPS. To convert render time to FPS, divide 1000 by the render time. So, a 22ms render time should give you about 1000/22 = 45 fps.
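That conversion fits in a couple of lines (a quick sketch; the function name is mine):

```python
def render_time_to_fps(render_time_ms: float) -> float:
    """1000 milliseconds per second divided by the per-frame render time."""
    return 1000.0 / render_time_ms

print(round(render_time_to_fps(22), 1))  # 45.5 -- a 22ms frame is about 45 fps
```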

 

Here's what I found.

 

1. Free flight over the Caucasus. With the 2600, my CPU render times were in the 9-12ms range, GPU time 22ms. With the 3600X, CPU render times fell to 8-9ms, and GPU time fell to 19ms.

 

2. The real challenge, and the reason for the upgrade, is multiplayer. Previously, on Growling Sidewinder, my CPU times on the ground were ~26ms, GPU ~41ms. For those who don't want to do the math, that's a frame rate of 22.5 fps. That typically settled down to 16/22 in the air (45 fps). With the 3600X, I was running 16-17ms CPU times and 22ms GPU times, for a fairly smooth 45 fps even on the ground.

 

3. The ultimate test, for my system anyway, is Georgia at War, a very busy map with tons of AI units. With the Gen1 and Gen2 Ryzens, my CPU times were 27ms, GPU 42ms on the ground, and it didn't improve much in the air. I've been pegged at 22.5 fps. After the upgrade, I spawned in with 16-17ms CPU times, 22ms GPU times. FPS wasn't consistent on the ground, my GPU times spiked up to 42 a couple of times, but it settled down once I took off and gave me a consistent 45fps.

 

I'm actually kind of amazed at the results. Going from 22.5fps to 45fps in GAW is a game changer. I was also surprised that my GPU render times improved as much as they did. CPU time and GPU time are highly correlated, and I'm getting the feeling the GPU has to wait on the CPU for pretty much everything other than anti-aliasing (although I wouldn't know for sure).

 

Upgrading was easy. I bought an MSI B450M Gaming Plus about six months ago anticipating this upgrade. An OK-ish board, probably not the best for overclocking, but perhaps enough for six-core chips. I don't think they're shipping with a Zen 2 compatible BIOS yet, but it's an easy flash, even if you haven't installed the CPU yet. The BIOS is still in beta, but I had no problems getting it running; it booted right up. The worst part of the whole thing was getting that frickin Wraith cooler bolted in.

 

Net of everything, that's about the best $250 I've spent on this game yet.

Ryzen 5600X (stock), GBX570, 32GB RAM, AMD 6900XT (reference), G2, Winwing Orion HOTAS, T-flight rudder


So after upgrading the CPU, your GPU frametime nearly halved? Ok....

 

Yeah it's gotta be reporting total frametime.

 

In VR, a lot of things have to happen sequentially: turn the pixel off, figure out which way the head is pointing, render the image, turn the pixel on. I get the feeling the CPU and GPU have to do their work sequentially, at least sometimes. So, with Oculus Tray Tool anyway, "how long it takes the GPU to do its thing" does not equal "GPU render time," and it's not even exactly "GPU time minus CPU time," near as I can tell.

 

So far I've found the following:

 

1. Anti-aliasing affects GPU time but not CPU time to any significant extent

2. Adding units to the map affects CPU time much more than GPU time

3. Other things like shadows and trees affect the two numbers similarly

 

See also https://forums.eagle.ru/showthread.php?t=200737

 

Bottom line is, with minimal settings I now have 3-4ms to work with. I can add a lot of units, a little bit of shadows, or a little bit of anti-aliasing. If I had a faster GPU, I would probably have more than 4ms to work with, and I could add a lot of shadows or anti-aliasing, but probably not a whole bunch more units than I can now and keep FPS at target. Likewise if I upgrade to a Rift S, I can push my frame render time out to 25ms, so I could drive more pixels and might still have something left over.
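The budget arithmetic above can be sketched like so (a rough sketch; the function name is mine, and the 45 fps and 40 fps targets correspond to half the refresh rate of a 90Hz headset and an 80Hz Rift S respectively):

```python
def headroom_ms(current_frame_ms: float, target_fps: float) -> float:
    """Milliseconds of slack left before a frame misses the target rate."""
    return 1000.0 / target_fps - current_frame_ms

# A 19ms frame against a 45 fps target (22.2ms budget) leaves ~3.2ms spare,
# while the Rift S half-rate target of 40 fps (25ms budget) leaves ~6ms.
print(round(headroom_ms(19, 45), 1))  # 3.2
print(round(headroom_ms(19, 40), 1))  # 6.0
```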


Edited by DeltaMike



... my CPU times on the ground were ~26ms, GPU ~41ms. For those who don't want to do the math, that's a frame rate of 22.5 fps. ...

 

 

Could you elaborate on the formula used for the maths? What unit is GPU?

Windows 10 64bit, Intel i9-9900@5Ghz, 32 Gig RAM, MSI RTX 3080 TI, 2 TB SSD, 43" 2160p@1440p monitor.


I have a Ryzen 7 1700 overclocked to 3.8GHz with the stock cooler, 16GB RAM and a 1080 Ti; in VR I run 22 fps at the lowest and 45 max. I think you could push your CPU further with a liquid cooler.

 

Could you please upgrade to a 3700X and see if you get any further improvements? ;)

 

My thinking with the 2600 was, I can overclock that sucka and save $50! Didn't work out that way. I dunno, I kinda had the idea that the 2600X was an enhanced 2600, when the reality is, I'll bet the 2600 was dug out of the trash can. But maybe it was the silicon lottery, or maybe the cheap mobo.

 

I'm hearing much the same about the 3600; I haven't seen any OC results on the 3600X. Interesting question, given the modular design of the 3000 series.

 

I'm just not sure I see the point in overclocking unless I can get it significantly above the single-core boost speed. For CPU mining, which is a parallel process, yeah, I would definitely overclock, even if the final number was a shade less than the boost speed. But for DCS, I don't think I want to leave anything on the table single-core-wise. That's also why I didn't go with the 3700X; I mean, maybe six cores is enough for load balancing, and the burst speed is the same... but I wouldn't know.



Could you elaborate on the formula used for the maths? What unit is GPU?

 

There's an inverse relationship between the render time and the FPS. 1000ms in a second, so 1000 / frametime in ms = frames per second.

 

In VR, you're typically locked into a certain FPS, so Oculus defaults to 90, 45 or 22.5, usually. If you want a more granular measurement -- and anything would be more granular than that -- you could turn ASW off and measure frame rates, or you could just look at the render time.
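Here's a rough sketch of why the measured rate snaps to 90, 45 or 22.5 (this is my assumption about how the compositor's frame pacing works, not Oculus's actual algorithm): a frame that overruns the refresh interval gets held until the next one, so the app runs at an integer fraction of the refresh rate.

```python
def locked_fps(frame_time_ms: float, refresh_hz: float = 90.0) -> float:
    """FPS after snapping a frame to whole refresh intervals (ASW-style)."""
    interval_ms = 1000.0 / refresh_hz          # ~11.1ms at 90Hz
    # number of refresh intervals the frame spans, rounded up
    intervals = -(-frame_time_ms // interval_ms)
    return refresh_hz / intervals

print(locked_fps(10))   # 90.0  -- fits in one interval
print(locked_fps(22))   # 45.0  -- spans two intervals
print(locked_fps(41))   # 22.5  -- spans four intervals
```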

 

Latency matters, arguably it's the key to the whole thing. You don't want a delay of much more than 20ms between when something happens in the virtual world and that information hits your retina, otherwise you'll puke. Persistence, or in other words how fast you can turn a pixel off, has an effect on latency. Faster monitors have lower persistence -- they pretty much have to -- but I don't know that you have to have 90fps to have a comfortable experience.

 

Net of everything, VR users (I would submit) are more interested in latency than framerate. The two are closely, if not perfectly, related, because render time accounts for a lot of your latency.

 

I used to think I knew what GPU render time was; now I'm not so sure. I think it's probably total frame time.


Edited by DeltaMike



My thinking with the 2600 was, I can overclock that sucka and save $50! Didn't work out that way. I dunno, I kinda had the idea that the 2600X was an enhanced 2600, when the reality is, I'll bet the 2600 was dug out of the trash can. But maybe it was the silicon lottery, or maybe the cheap mobo.

 

I'm hearing much the same about the 3600; I haven't seen any OC results on the 3600X. Interesting question, given the modular design of the 3000 series.

 

 

AMD Ryzen 5 3600X Review vs. R5 3600: $50 for a Letter

 

and

 

AMD Ryzen 5 3600 CPU Review & Benchmarks: Strong Recommendation

 

 

 

That said, I am currently upgrading my gaming rig from an i5-3570K, and frankly, after watching a ton of gaming benchmarks, I think I will go with... an i7-9700K. I will despise myself for that (slightly), but I don't think AMD delivers enough punch for VR use (the Index demands high frame rates). It also isn't priced as competitively as I anticipated once you factor in X570 motherboard cost (and the older boards are currently plagued by various issues). At that price point I might as well buy Intel for two more years and then upgrade the whole kit (mobo + CPU), since I'm not doing any "production" workloads, only (VR) gaming.


Then there's this

https://www.gamersnexus.net/hwreviews/3489-amd-ryzen-5-3600-cpu-review-benchmarks-vs-intel

However, I feel you make good points.

 

I think for people already invested in a mobo and memory, 3rd gen Ryzen makes some sense -- especially if you're running a 2600, which IMO is a piece of garbage -- but I don't know that people should be eager to kick Intel to the curb for DCS, not by a long shot.

 

My only thing is, that jump from 3.9GHz on a Gen 2 to 4.4GHz on a Gen 3 is huge. I thought it might help a little, but holy cow.


Edited by DeltaMike


