
tiborrr


  1. This is outstanding work; I will buy this app immediately. One question, though: I want to output my Android device to a 21.5" touchscreen LCD at 1920x1080. Can I have multiple panels on a single big screen, akin to Helios, instead of multiple Android devices? That way I can repurpose my old phone together with a new 21.5" touchscreen HDMI LCD. Thanks, Niko
  2. I just found the issue: my DCS 2.2 standalone was stuck with its affinity set to Core 0 only, so it was running single-core. The symptoms were:
     - super slow loading times (almost a minute on SSD RAID0, compared to 10-15s)
     - intermittent slowdowns from 100fps to 10fps (microstuttering), sometimes freezing for a second or two
     - slow loading of terrain, immediate microstutter in external view when rotating the view
     - GPU load jumping from 99% down to 20% or even 0% (checked with Afterburner) every time a microstutter/freeze occurred (all symptoms of the GPU being CPU-starved)
     I don't know whether it was my Windows (I recently upgraded from 1603 to the W10 1709 Fall Creators Update) or the DCS executable. I spent two days infuriated as to why my 6950X @ 4GHz, 32GB DDR4-3200 and GTX Titan X @ 1.95GHz wasn't running the game smoothly @4K, despite dropping all the details to minimum. It turns out I can run everything maxed out and experience no microstuttering! TL;DR: check the affinity of the DCS executable and make sure all cores are enabled :) Regards, Niko
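Windows exposes per-process affinity as a bitmask (bit N set = core N allowed); that mask is what Task Manager's "Set affinity" dialog and `start /affinity` work with. As a small illustrative sketch (the helper names here are mine, not from any tool), this is how a mask maps to a core list:

```python
def affinity_mask(cores):
    """Build a Windows-style affinity bitmask: bit N set = core N enabled."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

def cores_from_mask(mask):
    """Decode an affinity bitmask back into the list of enabled core indices."""
    return [i for i in range(mask.bit_length()) if (mask >> i) & 1]

# Stuck on core 0 only -> mask 0x1; all 8 threads of an i7 enabled -> 0xFF
print(hex(affinity_mask([0])))        # 0x1
print(hex(affinity_mask(range(8))))   # 0xff
print(cores_from_mask(0xFF))          # [0, 1, 2, 3, 4, 5, 6, 7]
```

For inspecting or fixing a live process, psutil's `Process.cpu_affinity()` works with the core-list form, while PowerShell's `(Get-Process dcs).ProcessorAffinity` exposes the raw mask directly.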
  3. Hello everybody! Sorry, I haven't had any time to follow the debate. In the meantime I did some wrapping up of the CPU architecture tests before I proceed to DCS 2.0 testing. For sure I will not do the CPU arch testing ever again; it's way too time-consuming.
     5.3 CPU architecture impact on FPS:
     The purpose of this test is to measure the impact of different CPU architectures on overall FPS. I selected the FHD resolution as a starting point, which in my books is considered a 'low' resolution. CPU tests are always done at 'low' resolutions to exclude a possible GPU bottleneck. I haven't had the chance to test the FX 8350 yet, but hopefully I will be able to do so in the coming weeks. I guess it will do around ~65FPS on average.
     Test systems:
     - CPU: (as listed before)
     - MB: Gigabyte GA-990FXA-UD7, Gigabyte F2A88XN WiFi, ASUS ROG Maximus IV Gene-Z, ASRock Z77 Extreme4, ASUS ROG Maximus VII Gene, MSI B150 Gaming Night Elf, ASUS ROG Maximus VIII Extreme
     - RAM: 4x4GB DDR3-2133 C10 12-12-28 1T @ 1.35V; 2x8GB DDR4-2800 CL15 16-16-34 1T; 2x8GB DDR4-2133 CL15 15-15-31 2T
     - GPU: MSI GeForce GTX 970 Gaming 4G
     - Drive: 128GB Crucial BX100
     - OS: Windows 10 Pro x64
     - Cooling: factory cooling on CPU; factory cooling on GPUs
     - Monitor: Dell 2713HM
     - Drivers: Nvidia 361.43
     Game settings (except for the resolution, which is 1920x1080 FHD):
     Final results are as follows:
     CPU architecture impact result analysis:
     - Please allow for +/- 2% result accuracy. Overlapping graph peaks/dips are always a good sign of reliable data acquisition.
     - Please note that the Core i5 6600 setup was running on a non-overclockable B150 motherboard, thus memory ran at DDR4-2133 CL15 15-15-31 2T. Just by running it on a Z170 motherboard and cranking up the memory speed we would see up to a 10FPS boost.
     - Please note that the Core i7 6700K setup was running DDR4-2800 CL15 16-16-34 1T memory settings.
     - The Core i7 6700K stock clock is 4GHz, hence the same results for the non-OC and OC tests.
     - Broadwell with its 128MB L4 cache seems to offer a nice boost compared to Haswell. When set to 4GHz (which this sample did at factory voltage) it even surpasses Skylake.
     Also updated the TL;DR :pilotfly:
     - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
     What have we learned so far (tl;dr): compiled list of lessons learned:
  4. Hello everybody again!
     @T_A: Sadly, I'm not able to get a GTX 980 at the moment. My guess is it sits somewhere between the 980 Ti and the 970, closer to the latter. If you feel irritated, just crank up the AA to make your card sweat a little bit :)
     @Lenop: Hyperthreading helps for sure (on an i3, for example), but there are no further performance gains going from 4 cores to 8 "cores" (threads). If you want to save money, get the i5 K, a good liquid cooler (like EK ;)) for the CPU, and overclock that beast. Frequency and IPC are the kings once you go quad-core or better.
     @JorgeIII: Thanks man!
     5.7 VR resolution gaming performance analysis:
     The purpose of this test is to measure the impact of running a stereo "VR" 2160x1200 display setup (simulating an Oculus Rift VR headset) on overall FPS. The performance hit is not as obvious as with the "3 Monitor" high-FOV display setup, but it is still big enough to conclude that the DCS engine is starving the cards. 4K resolution results (HIGH preset) are added to the chart for reference.
     HW setup:
     - CPU: Intel Core i7 5775C @ 4GHz core / 3.3GHz uncore
     - MB: ASUS ROG Maximus VII Gene
     - RAM: 4x4GB DDR3-2133 C10 12-12-28 1T @ 1.35V
     - GPU: GTX 960 and GTX 980 Ti
     - Drive: 128GB Crucial BX100
     - OS: Windows 10 Pro x64
     - Cooling: EK-XLC Predator 240 (liquid-cooled CPU); factory cooling on GPUs
     - Monitor: Dell 2713HM
     - Drivers: Nvidia 361.43
     Image settings as tested (2160x1200 "Stereo" mode):
     Results:
     Simulated VR resolution performance analysis:
     - Ironically enough, the GTX 960 (a 200€ card) performs identically to the GTX 980 Ti (a 650€ card) here, which further points to an obvious bottleneck that is most likely not hardware related. Since the GPUs are pushing a mere ~2.6MPix image (compared to ~8.3MPix at 4K), both GPUs more than suffice for the workload.
     - To maintain higher FPS, the same tricks apply as when running the "3 Monitor" high-FOV setup: drop the preset to MEDIUM or LOW.
     - Just like before, the GPUs are starved for the majority of the time when image quality is set to the HIGH preset. The only time the GPUs run at peak performance is when the view faces the sky or clouds (no ground objects need to be drawn).
     - I also tested a custom HIGH preset (only Visibility set to LOW and Trees set to MINIMUM), trying to show the effect of the number of polygons/objects on FPS.
     - It doesn't appear to be CPU- or GPU-bound. It has to be a DCS engine limitation, or the GPUs simply cannot cope with the number of polys on the screen.
     I will now move on to CPU architecture testing. I did some testing on the AMD setups but found out I'm losing activation tickets like it's nothing. The system considers swapping out a CPU a major hardware change (while it didn't say anything when I swapped out a ton of GPUs), so I only have a few activations left, even though I deactivated them as per instructions :(
     Best Regards, Niko
     P.S.: Also updated the TL;DR, see post #1:
     - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
     What have we learned so far (tl;dr): compiled list of lessons learned:
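The MPix figures quoted in these posts are just width x height arithmetic. A quick sanity check of the numbers used throughout the tests:

```python
# Pixel counts behind the MPix figures quoted in these benchmark posts.
resolutions = {
    "FHD (1920x1080)":          (1920, 1080),
    "Rift-like VR (2160x1200)": (2160, 1200),
    "3x FHD wall (5760x1080)":  (5760, 1080),
    "4K UHD (3840x2160)":       (3840, 2160),
}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} MPix")
# The VR setup comes out at ~2.6 MPix and 4K at ~8.3 MPix, matching the post.
```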
  5. There's already a current topic about hardware benchmarks, see here: http://forums.eagle.ru/showthread.php?t=157374
  6. Check out this thread: http://forums.eagle.ru/showthread.php?t=157374 The HD7970 is basically a 280X, so look for those results. Are you running DCS 1.5 open beta? Is your CPU overclocked? I would definitely look into this, especially since your CPU has an unlocked multiplier. Memory speed might also be a factor here.
  7. Hello everybody, I ran a benchmark with the GTX 960 and R9 295X2 while the NTTR is downloading, thanks to Decibel_dB! :)
     Initial finding: dang, we have a new best bang-for-the-buck card for WQHD resolution. If money is tight and you want a latest-gen GPU, the GTX 960 might be the right choice. Crossfire, however, has micro-stutter issues; I have read about it on these forums and the only fix for it seems to be setting MODEL VISIBILITY to OFF. However, I haven't found that function anywhere in the menu.
     Regarding 'VR' testing: I have found a way to sort-of simulate VR using the Stereo profile inside DCS while setting a custom resolution of 2160x1200 (2x 1080x1200, like the Rift). The initial results are pretty similar to the 3 Monitor "High FOV" profile, but the performance drop is not as steep. I have to retest it using several GPUs across the performance range to see whether it is system/DCS-engine bound or GPU bound. I have a feeling it's again the engine...
     Back to regular FHD, WQHD and 4K results. The TL;DR section has also been updated (see post #1): What have we learned so far (tl;dr): compiled list of lessons learned.
     The stuff that remains:
     - VR resolution analysis
     - CPU architecture testing
     - DCS World 2.0 NTTR range benchmark
     Best Regards, Niko
  8. Update: I will be getting a GTX 960 from my friend to throw into the mix!
  9. Yes, the tests are being conducted on 1.5.2, correct. The DCS 1.5 engine is indeed light years better than 1.2! What I am trying to say is that the legacy functions, many of them carried over from older standards, don't seem to run as well on AMD as on NVIDIA. The proof of this is the new games, built on entirely new engines, where AMD cards more than adequately keep pace with NVIDIA. I hope I'm not causing too much confusion.
  10. Thanks, I'll try to figure it out :) Yes, for sure, probably later today or over the weekend. You guys are lucky: the weather is bad, with lots of fresh snow, so climbing is off for at least this weekend. I don't want to try out the avalanche equipment (again)... Once was enough :doh:
  11. Hi, I would like everybody to refrain from calling people names or fanboys. After all, I am a die-hard AMD fanboy myself. AMD cards are great, but the thing is that DCS performs better on NVIDIA cards at the moment. Truth be told, the DCS engine is fairly aged (not that it hasn't aged well), and AMD never bothered to fix the somewhat broken legacy-function (including DX9) performance after they purchased ATi back in 2006. This was the sole purpose of creating this thread and making the benchmark: comparing GPUs in modern games doesn't (necessarily) reflect their performance in DCS.
      I have found a way to 'simulate' 4K performance at a 1920x1080 resolution: simply force SSAA (Super-Sampling Anti-Aliasing, a.k.a. FSAA, Full-Scene Anti-Aliasing) through the drivers instead of the default MSAA mode. In game, select 4x MSAA and a resolution of 1920x1080, and there you have it. SSAA (one of the oldest and most basic, yet very demanding, AA methods) works by super-sampling the scene, i.e. calculating it at a higher resolution and then scaling it down, so this is perfect. Factor 4x means the image is drawn and calculated at 4x the display resolution, which for 1920x1080 equals 3840x2160 (4K). I have tested the theory with an R9 Nano and got exactly the same results at native 4K and at FHD 4xSSAA.
      So, without further ado, R9 390X added: Best Regards, Niko :joystick:
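The SSAA factor multiplies the pixel count, so each axis scales by the square root of the factor; that is why 4x at 1920x1080 lands exactly on 3840x2160. A minimal check (the function name is mine, for illustration only):

```python
import math

def ssaa_render_size(width, height, factor):
    """Internal render resolution under SSAA: 'factor' multiplies the total
    pixel count, so each axis is scaled by sqrt(factor)."""
    scale = math.isqrt(factor)
    if scale * scale != factor:
        raise ValueError("expected a perfect-square SSAA factor (4, 9, 16, ...)")
    return width * scale, height * scale

# 4x SSAA at FHD renders internally at 4K, which is the trick the post uses:
print(ssaa_render_size(1920, 1080, 4))   # (3840, 2160)
```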
  12. Hello everybody, thanks to Decibel, who offered to give me the key for the Nevada map, I will soon be starting DCS 2.0 alpha engine testing. The only thing left to finish is to run the R9 295X2 and to complete the CPU platform testing, which has already begun. :pilotfly:
      Thanks for the reference, I'll try to make a similar mission: pit a few ground-attack planes against my F-15, throw a MiG-23 or similar into the mix, and then do some dogfighting and indulge in an energy game :) This way we get a nice mix of high- and low-level flying.
      Thank you very much for your kind words. Funny you mention it, my coworker wanted to purchase the Rift yesterday (he was a DK2 user for some time) but got turned off by the price. Is there a way to simulate VR resolution testing? If there is, I might include it in the test, at least as a detailed analysis for a single GPU.
      I believe there's no GPU bottleneck, but rather a DCS engine bottleneck. A GPU bottleneck would mean the stronger GPU performs better; if the results are pretty much the same across the performance range, that is a sign the GPU is not the limiting factor.
      The GTX 980 Ti for sure is a great card. Crank up the AA (anti-aliasing) if you wish to stress it badly! :smilewink:
      As promised, I have added the GTX 780 (KFA2 version, slightly overclocked) and a reference Fury X to the test suite. The GTX 780 is still a solid choice for DCS, and it would be pointless to upgrade it at this point unless you want 4K; high fillrate is what this card has. The Fury X performs as expected given the previous results, where NVIDIA cards perform slightly better across the board.
  13. I may try all the options, but please understand this is a huge amount of work. For starters I can try the settings you mentioned, but I don't think I can afford the time to test each setting individually (snow is here, winter climbing begins :music_whistling:)
      That's the thing: I don't believe there's any performance difference even on the Vegas map (which I don't actually have), because fillrate or bandwidth doesn't seem to be the problem here, but rather the DCS 1.5 engine. This is why I believe performance drops to the ground with the "3 Monitor" high-FOV setting: the engine simply cannot process all the elements in time and is bottlenecking the GPU, which has nothing to do in the meantime (hence the lower power draw mentioned above). Graphics-wise, DCS 1.5 is not a demanding title by current standards. A large number of elements/polygons is something that hurts all GPUs equally, at least in 1.5.x.
      If ED is willing to send me a license for the Nevada map (they can revoke it later) I can do this testing as well. I am confident DCS 2.0 will fare better. Best Regards, Niko
  14. @Axion: I can send you the track file if you wish to contribute:
      - Use FRAPS, set the recording time length to 305s
      - Set the FRAPS recording trigger to the middle mouse button (this also kills the TrackIR head movement in the replay, which is what we want here)
      - Use the same settings as described in post #1
      - Load the replay track, trigger the recording button and click "FLY"
      - Do not move the mouse after clicking FLY!
      - After 305 seconds the recording is complete :)
      P.S.: I just tried the theory that the GPUs might be bandwidth-limited in the high-FOV scenario. I plugged in the R9 Nano with its 512GB/s bandwidth: exactly the same outcome. So it must be the DCS engine...
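Once a FRAPS run is captured, the min/avg/max FPS figures used in these charts reduce to simple arithmetic over per-frame render times. A sketch (assuming the log has already been parsed into a plain list of frame times in milliseconds; the function name is mine):

```python
def fps_stats(frame_times_ms):
    """Min/avg/max FPS from per-frame render times in milliseconds,
    the kind of data a FRAPS frametimes log can be reduced to."""
    fps = [1000.0 / t for t in frame_times_ms if t > 0]
    return min(fps), sum(fps) / len(fps), max(fps)

# A 10 ms frame is 100 FPS and a 40 ms frame is 25 FPS:
lo, avg, hi = fps_stats([10.0, 40.0])
print(lo, avg, hi)   # 25.0 62.5 100.0
```

Note that averaging the per-frame FPS values (as above) weights every frame equally; dividing total frames by total time instead would weight by wall-clock time and give a lower number on stuttery runs.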
  15. Hey Wizz, glad I could help you out. Actually, at work we deal with PCCG professionally, and the aforementioned rig is a well-balanced PC indeed. It has a lot of power and will soar through 4K like it's nothing. It also possesses nice overclocking potential, so there is a way to increase the performance of the rig even further!
      This setup would be a good balance between price and performance for 4K DCS gaming:
      - CPU: Intel Core i7 6700K
      - MB: ASUS Maximus VIII Hero motherboard
      - RAM: 2x8GB DDR4-3200 Corsair Vengeance LPX C16 (or similar)
      - SSD: Intel SSD 750 400GB drive, or 2x256GB SATA 6Gb/s in RAID0
      - GPU: GeForce GTX 980 Ti (ASUS Strix, Gigabyte WindForce 3X, MSI Gaming 6G)
      - Chassis: Fractal Design Define S
      - PSU: Corsair RM650 (or any renowned 650W 80Plus Gold or better)
      - Cooling: EK-XLC Predator 360 (incl. QDC) + pre-filled water block for your GPU
      It's pointless to invest more money in e.g. a GTX Titan X or the Haswell-E platform for DCS. Spend the money on good CPU cooling, power supply and chassis, as these parts will stick with you the longest. These are just my 2 cents. You could save an additional 100 USD/EUR by choosing the i5 6600K instead.
      Hey grefte, I will try CFX on a 295X2 and hopefully SLI on GTX 970s (but no promises on the latter).
      5.6 Multi-monitor gaming performance analysis:
      The purpose of this test is to measure the impact of a multi-monitor high-FOV display setup on overall FPS. As for the multi-monitor setup, I did try it out (3x FHD = 5760x1080) and was surprised how demanding it is, despite the lower megapixel count (6.2MPix vs. 8.3MPix) compared to 4K. The very large FOV caused by the "3 Monitor" option literally drops performance to the floor. The "1 Monitor" (WALL) option works perfectly and within expectations: it performs equally as well as a single-monitor setup at a comparable (total) resolution.
      My findings were:
      - GTX 780 Ti, GTX 970 and GTX 980 Ti all perform pretty much exactly the same: around 38FPS (9FPS min; 66FPS max). There must be a bottleneck somewhere... Not sure if it's CPU bound, GPU-architecture bound or a DCS engine limitation.
      - I tried further overclocking the CPU but the results didn't get much better, maybe 2.5FPS on average (from 38FPS to ~40FPS). It doesn't appear to be CPU bound, at least not frequency-wise. It has to be a DCS engine limitation, or the GPUs simply cannot cope with the number of polys on the screen.
      - I tried GPU overclocking, but that also only did *maybe* 1-2FPS better on average.
      - I observed GPU video RAM usage: at 4K it never goes above 2.8GB with the general preset at HIGH.
      - I tried lowering Trees Visibility, lowering Preload Radius, and disabling shadows and HDR: no real improvement, altogether maybe 4-5FPS better performance on average.
      - I then tried the "3 Monitor" option on a single 4K screen (an 8.3MPix image) and the results were better, around 45FPS average (10FPS min; 94FPS max), but most likely due to the smaller FOV. I was getting frustrated by then...
      Then came the big revelation: the power draw of the system with an overclocked Titan X (1400MHz GPU) used to sit at up to 380W throughout the majority of the benchmark. With 4K "3 Monitor" and everything on HIGH settings it only reaches this value for a very short period of time; otherwise it sits somewhere between 230-260W. But when the main quality PRESET is set to MEDIUM, the power draw is constantly over 300W and the average FPS goes sky high instantly! On the LOW preset the power draw never drops below 320W. This means the GPU is being starved. I strongly believe there is something in the DCS 1.5 engine that is unable to feed the GPU in time when running 3 screens at high FOV ("3 Monitors" profile). :pilotfly:
      Bottom line with the high-FOV "3 Monitor" multi-monitor setup (BOX): it's playable at butter-smooth FPS for sure, but one needs to set the general quality PRESET to MEDIUM or LOW!
      Bottom line with the "1 Monitor" multi-monitor setup (WALL): such a setup performs equally as well as a single-monitor setup at a comparable (total) resolution. Therefore it's possible to build a cheap, large, high-resolution screen out of smaller panels.
      I have also updated the lessons learned (tl;dr) with lesson #8.
      - - - - - - - - - - -
      Sounds good :) I have one GTX 780 non-Ti at work, I might give it a spin tomorrow. I also have a 2550K and a 2600K somewhere at work; I need to dig out my old Z77 mobo and then I will give those a spin as well!
      This should not be the case. More memory should never run slower, provided that the memory settings remain the same. The system never exceeded ~5GB of usage in any of my tests, so 8GB should yield the same performance; however, I would always recommend 16GB if money allows. This way you can have your work running in the background :)