About Sn8ke_iis

  • Birthday 11/11/1918

Personal Information

  • Flight Simulators
    IL2 Flying Circus, Falcon BMS, Prepar3D, IL2:CLOD, BOx, Bodenplatte, DCS:YAK-52, Spitfire, FW-190, ME-109, P-51, Hawk, L-39, F-86, MiG-15, A-10 and all the rest too.
    Most flight time in Spittie, Mustang, Sabre, YaK-52, CEII, F-14B, F-16C
  • Location
    Rocky Mountains, USA
  • Interests
    Large scale aircraft modeling 1/32 - 1/48; Cockpit building; Aviation History
  • Occupation
    I was a soldier, now I fly imaginary airplanes in DCS.
  1. The hyperthreading issue depends on the chip. If you have a 7700K, keep HT on; if you have an 8700K, 9900K, or 10600K, you won't necessarily gain any performance by turning it off, but you can overclock the cores to a higher clock and still be stable. Any time Gamer's Nexus runs tests with HT or SMT off there is usually a performance gain, but it depends on the game. In general, games don't use hyperthreaded cores the way Blender or Cinebench does. If your cooling system is limited it might be worth trying to turn off some cores, but if you lock affinity to 3 cores with Process Lasso the other c
  2. Per these it's kind of a toss-up. It depends on the game, how comfortable/good you are with overclocking, and luck in the silicon lottery. If I had the budget for a second rig I would like to compare the two, but nobody has done a specific comparison on DCS. [YOUTUBE]OnMEAW5VVUo[/YOUTUBE] Now Steve also had some good benchmarks where a stock 5950X beats his 10900K OC'd to 5.2 GHz in some games, which is very impressive, but it depends on the game. RDR2, for instance, seems to be partial to Intel. [YOUTUBE]zYhwBk8GE6M[/YOUTUBE] Before I've always recommended Intel for gaming rigs for
  3. You are going to be very limited in your BIOS settings with that chipset. For your CPU you want a Z490 board to get the full performance of your chip and be able to overclock.
  4. Thanks Supmua, this is good info. Looking forward to your data.
  5. Uhh...yes? Those cards are two generations (4 years) apart. The 1080 Ti has 3584 CUDA cores of an older uArch (Pascal); a 3070 has 5888 CUDA cores of the new "Ampere" architecture. The new chip has denser components that run on less power as well. Gamers wouldn't be freaking out over the new cards if they didn't improve performance. YouTube is full of gaming benchmark videos right now. The 3070 is the best price/performance card out until the new AMD cards are properly tested. Now if you are hitting CPU bottlenecks then it won't matter. But I wouldn't pay 400 pounds for a 1080 Ti unless it w
  6. Motherboard? RAM? Graphics card? Resolution? Screenshot of settings? You've left out a lot of important details. Is that the Caucasus map? A lot of AI elements can make a good CPU stress test/comparison, but it's also not very indicative of day-to-day gameplay. Can you do some other missions too?
  7. Max power under stress tests is not indicative of power usage while gaming. A 10900K will not draw 250W while gaming or playing DCS. My overclocked 9900K draws a smidge under 70 watts while playing DCS. I've actually seen benchmarks of the previous Ryzen 3000 series drawing more watts than Intel while gaming. Haven't seen any power draw data for the new Ryzen chips yet. To get a 9900K or 10900K over 200W you have to run a stress test in Blender or Prime95, something that really hammers your CPU.
  8. Hey, this is actually useful information. I wish we could pin it or something. No one is expecting new 3080 or 3090 owners to go all Gamer's Nexus and make a video with graphs and charts. Just establish a simple baseline and post before and after numbers. The percent change formula is [((new - old) / old) * 100] for those that aren't aware. Have we figured out a way to properly compare VRAM allocation, quantity, quality, etc.? i.e., is the performance bump from more CUDA cores and improved uArch, or does the memory help too? And this isn't me being pedantic, guys. I was just waiting for the 30
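The percent-change arithmetic is easy to get backwards without the inner parentheses, so here's a minimal sketch; the before/after FPS numbers are made up for illustration:

```python
def percent_change(new: float, old: float) -> float:
    """Percent change from old to new: ((new - old) / old) * 100."""
    return (new - old) / old * 100

# Hypothetical baseline and upgraded-card numbers, for illustration only
old_fps = 60.0
new_fps = 84.0
print(f"{percent_change(new_fps, old_fps):.1f}%")  # 40.0%
```

Note the division happens on the full difference; writing `new - old / old` without parentheses gives nonsense.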
  9. I got up to about 83% scaling in my tests at 4K with high AA settings, which is actually very good relative to other games. At 1080p it's not worth it. In a nutshell, if a graphics setting is GPU bound it will benefit from 2 GPUs. It won't help with CPU-bound settings like Visib Range, shadows, or anything that increases the quantity and frequency of draw calls. If you have the slot and the surplus wattage and can get a second card cheap, go for it. I think a lot of 1080 Ti and 2080 Ti owners would be pleasantly surprised. By my rough math you can beat the performance of a single 3090 with 2 20
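As a back-of-the-envelope sketch of what ~83% scaling means for two cards (the single-card FPS baseline below is an invented number, not a benchmark):

```python
def dual_gpu_throughput(single_fps: float, scaling: float) -> float:
    """Effective FPS from two identical cards at a given SLI scaling factor.

    scaling = 1.0 would be a perfect doubling; 0.83 matches my 4K result.
    """
    return single_fps * (1.0 + scaling)

# Hypothetical single-card baseline, for illustration only
single = 50.0
print(dual_gpu_throughput(single, 0.83))  # 91.5
```

So at 83% scaling the second card is worth 1.83x a single card, which is how a pair of older GPUs can land near a much newer single card on GPU-bound settings.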
  10. Depends; you could get a 10600K that's faster, depending on random chance and/or whether you are willing to pay for a binned chip. Gamer's Nexus has gotten really good results overclocking 10600Ks. DCS is CPU limited on a single core. It doesn't scale from 6 to 8 cores. I can run it on my 9900K with 5 cores disabled, 3 cores on, and hyperthreading off.
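Locking DCS to a few cores boils down to an affinity bitmask where bit N set means logical core N is allowed. A minimal sketch of building that mask (the core choice and the `start /affinity` usage below are just an example; Process Lasso does the same thing through its UI):

```python
def affinity_mask(cores):
    """Build a CPU affinity bitmask: bit N set allows logical core N."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

# Pin to the first three logical cores (0, 1, 2), as in the post above
mask = affinity_mask([0, 1, 2])
print(hex(mask))  # 0x7 -- e.g. `start /affinity 7 DCS.exe` from cmd
```

Windows' `start /affinity` takes that value in hex, so three cores on an HT-off chip is simply mask 7.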
  11. His benchmark for the 2080 Ti was accurate enough. He only got to test the 3090 really quickly though; last time, for the 2080, he did some really thorough benchmarks. Even then I bet it will be around 40% based on his report and other gaming benchmarks in general. Curious to see the frametimes for VR at different settings, especially AI objects, shadows, etc. Memory usage as well. Good to hear some of you guys actually have the card on order. I'll probably end up getting whatever is in stock for me first. I've been using the TF-51 free flight over the Caucasus, but any mission will work
  12. It already does that, though. If the application is only using 2-3 cores, those cores will hit 5.0 GHz stock or whatever you have your overclock set for. One of the main reasons it switches around like that is to keep hot spots from forming on the chip. As long as you aren't hitting thermal limits the cores will do 5+ GHz. The main limiting factor for most people will be their cooling solution, not some arbitrary GHz limit. All-core overclocks will get you higher scores in Cinebench and the Time Spy physics test but don't actually benefit games that I've seen. For DCS you want to push 1 co
  13. Rather than giving you the answer, you guys should ask yourselves why overclocking all cores would benefit a game that runs on three cores and is limited by single-core speed. Curious what the rationale for that would be. Anyway, it's pretty easy to test this yourself: Intel XTU can do per-core overclocks. I'm not sure why someone would run a 9900K with 2 cores at 4.7 when the stock Turbo Boost pushes 3 cores to 5.0 GHz by default.
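The rationale can be shown with a toy model: if each frame is gated by one main-thread core, the FPS estimate tracks that core's clock and is flat in core count. This is a deliberately oversimplified sketch, and the work-per-frame figure is invented, not a DCS measurement:

```python
def estimated_fps(single_core_ghz: float, n_cores: int,
                  work_per_frame_gcycles: float = 0.1) -> float:
    """Toy model of a single-core-bound game.

    n_cores is intentionally unused: when the frame is gated by one
    main-thread core, extra cores don't change the estimate.
    GHz (Gcycles/s) divided by Gcycles/frame gives frames/s.
    """
    return single_core_ghz / work_per_frame_gcycles

# Same single-core clock, different core counts -> same estimate
print(estimated_fps(5.0, 3))  # 50.0
print(estimated_fps(5.0, 8))  # 50.0
# Only a higher single-core clock moves the number
print(estimated_fps(5.2, 3))  # 52.0
```

That's the whole argument: an all-core overclock changes a variable the model (and, in my testing, the game) never looks at.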
  14. Short answer yes, although it's more complicated, of course. Intel's Turbo Boost feature in general picks the best core and allocates demanding applications to that core. The current Windows scheduler does have a feature that hops demanding apps from core to core as a way to prevent hot spots on the chip. For day-to-day DCS at 60 fps in 2D you might not notice a difference in performance from this process happening, although I've never specifically tested in CPU stress/bound scenarios like lots of shadows and AI objects. I've set up my gaming overclocks to work in conjunction with a utility c
  15. I'd give the Intel XTU utility a try. You can set the overclock per core in a much more user-friendly interface than most BIOSes. You really can't go wrong with 1.35 V adaptive. In adaptive mode the chip will only draw as much voltage as necessary and not when idling. You are correct that you want to turn off any kind of auto voltage or multi-core enhancement, as these tend to overvolt by default to keep stability. And for DCS you want to push one core as far as you can. Trying to overclock all 6 of your cores will just require more voltage/current/heat. There's no benefit to gaming or DCS f