
A major share of blame for overall perf goes to the industry not ED


Worrazen

Recommended Posts

Let me clarify the title before I say anything else, as I can't fit that amount of context into the title directly.

The "blame" used in the title is meant for what unfamiliar people think when they experience various performance issues in DCS, but also for everyone at large, even me, it's something ever present, it's constant for the past so many years, so we may be forgetting about it, getting used to it, adapting, we should not, I wrote this to kind of remind us about this, but at the same time to mention some good news ahead as it's the video below that sparked me to write this here.

 

However, when you do blame the industry, the blame is real, I guess; maybe not directly, but consequentially. This semantics/meaning stuff gets complicated, so I won't go that deep into it here, but that's why I chose not to put "quotes" around it in the title.

 

We *all* know it was never ED's focus to be an engine developer, so you can't really blame them for something they chose not to focus on in the first place. The TL;DR is the bottom paragraph. Also, this thread is a bit more relaxed, a bit ranty but in a good way; I'm actually writing it with enthusiasm, not in an angry tone.

 

 

Okay, the first thing is the CPUs themselves. Single-core performance has been stagnating disastrously for almost a decade; yes, there have been small improvements, but they're ridiculous compared to where we should be by now if the industry had actually innovated, or if we had moved away from this horribly inadequate PC-ATX standard of a little case into which you can barely fit your hands to connect cables civilly. I've had an E-ATX full-tower case for 6 years now and I'm never moving to anything smaller ever again, and even this isn't spacious enough, not at all: I have SSDs dangling down and had to use strings attached to the top ceiling to hold things in mid-air because there are no slots to put things anywhere. This obsession with power saving and form factor has severely slowed down actual performance improvements, IMO. Every time a new product is released they say "look at what we can do at this XYZ wattage."

 

Okay, sure, whatever; now show me what your CPU does at 500 watts, please! They keep going lower and lower and lower. What if they just stayed at some manageable wattage that most people can afford to cool, like 150-200 W? Is there a law or something that power has to go down? Nope. Sure, saving is good; yes, the cores draw less when they're idle; yes, I turn off the PC when I'm not home, but that's about it. The rest is for the lower end, for different types of use cases, fields, etc. I would have said okay if it were meant for only those segments, but it is not; we're all forced into it, and there is nothing for the use case that just wants performance. All that cooling and power is a cost I'm willing to pay, and I have much better solutions to cool and avoid noise anyway, mainly involving the enclosure and form factor, which makes all those arguments invalid because these things become a total non-issue. The segments aren't really there: once you get up higher into the "enterprise" segment, it's not a gaming segment anymore, it's workstation-only, server-only, laboratory-only. There is no higher gaming/workstation/everyday-use segment, and that's the biggest problem, because those higher segments don't actually care about single-threaded performance.

 

Also, I didn't say TDP, because TDP isn't the same thing as actual electrical watts (power consumption). Whatever that standard would be, it could be tied to market segment: say the low end usually comes in at maybe 100 W, the mid-range at 200 W, and the high end at 300-400 W, and each segment would have cooling solutions tied to it, all picked to be a good balance between cooling and performance, and you'd just be done with it. No more fiddling over which cooler and all the drama; if a power standard were picked, it would just make things so much easier.

 

When you increase performance per watt, YOU ARE ALREADY SAVING POWER. But they purposely design the product so it lowers total power consumption relative to its predecessor AT THE COST OF SOME PERFORMANCE, and they label that as some kind of power-saving feature. No, it's all PR: they BUILT it with fewer cores, or the cores were smaller; they could have built it bigger. They're not honest in their PR. Sure, the AMD subreddit is more open than I would have expected, but IMO it's about YIELDS: smaller chips yield better per wafer, so our performance suffers so they can maximize profits through better yields.
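Just to make that performance-per-watt point concrete, here's a rough back-of-the-envelope sketch; the numbers and the 20% uplift are completely made up for illustration, not taken from any real product:

```cpp
#include <cstdio>

// Made-up illustration: if a new chip does 20% more work per joule, then at
// the SAME power limit a fixed workload finishes sooner and therefore burns
// less total energy. The saving is already there without cutting the limit.
int main() {
    const double work_units     = 1000000.0; // arbitrary fixed workload
    const double old_perf_per_w = 10.0;      // work units per second per watt (made up)
    const double new_perf_per_w = 12.0;      // +20% perf/W (made up)
    const double power_limit_w  = 200.0;     // same power limit for both chips

    const double old_time_s = work_units / (old_perf_per_w * power_limit_w);
    const double new_time_s = work_units / (new_perf_per_w * power_limit_w);

    // Energy = power x time; same power, less time => less energy per job.
    std::printf("old chip: %.0f s, %.0f J\n", old_time_s, power_limit_w * old_time_s);
    std::printf("new chip: %.0f s, %.0f J\n", new_time_s, power_limit_w * new_time_s);
    return 0;
}
```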

TDP explanation is here:

 

 

 

 

The second thing is the huge inefficiencies created by OpenGL and DirectX, which had their way for far too long, IMO, partly because of people who talked about it but didn't do much, like John Carmack, for crying out loud. I followed everything he said over the past 10 years, interviews and all, from way back before Mantle was announced. He would always talk about "coding to the metal" and how consoles are efficient and the PC isn't, and I would get so excited, but he was way too deep into his own projects and never really did much to pressure the industry. He did do some things, like pushing for Adaptive VSync, which happened because of him pestering NVIDIA about it; that's the reality of some of this innovation. For what one would expect from a studio like that, Carmack is clearly more invested in his own career than in the PC platform; I guess he only showed up where it suited his development interests.

 

From logic alone I basically knew how fundamental the change was, even though I couldn't program squat; I just followed tech news and read a lot, and on that kind of research alone I was able to work out that this was a big deal, that it was taking them far too long, and I was getting tired of it. And so many of these so-called "industry buffs" out there were so ridiculously not getting it; there was a LOAD of almost targeted spam and hatred against the Mantle API when it was announced and released.

 

https://www.bit-tech.net/reviews/tech/graphics/farewell-to-directx/1/

[screenshot of the linked bit-tech article, showing its publication date]

Notice the DATE: March 2011

 

 

So everyone got tired and fed up with waiting, and DICE's Johan Andersson teamed up with AMD. This did not happen out of the blue; as I mentioned, it had been brewing for a long time among the actual programmers in game studios. Johan and a few less well-known others took it upon themselves to break this stalemate, but it should have happened earlier.

 

All of that draw-call, multi-threading, and low driver-overhead stuff is of course a major deal, but those aren't the only benefits; there are huge wins with Vulkan, and possibly other newer APIs, on the lesser-known developer side. IMO it's also important for the end-user experience in terms of stability (fewer bugs), quality of a fix (fewer side effects), and response to a bug (how fast a fix can come).
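To give a rough idea of the multi-threading part: with Vulkan, each worker thread can record draw commands into its own command buffer from its own command pool, instead of funneling every call through one driver-managed context as in classic OpenGL/DX11. A minimal sketch, with pool/render-pass setup and error handling stripped out, and nothing taken from DCS's actual code:

```cpp
#include <thread>
#include <vector>
#include <vulkan/vulkan.h>

// Hypothetical sketch: each worker thread records its share of the scene into
// its own secondary command buffer (allocated from a per-thread command pool
// elsewhere). Real code must check every vk* result.
void recordWorker(VkCommandBuffer cmd, VkRenderPass renderPass, VkPipeline pipeline) {
    VkCommandBufferInheritanceInfo inherit{};
    inherit.sType      = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
    inherit.renderPass = renderPass;

    VkCommandBufferBeginInfo begin{};
    begin.sType            = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    begin.flags            = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
    begin.pInheritanceInfo = &inherit;

    vkBeginCommandBuffer(cmd, &begin);
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdDraw(cmd, 3, 1, 0, 0);  // stand-in for this thread's chunk of draw calls
    vkEndCommandBuffer(cmd);
}

// Record N chunks in parallel, then stitch them into the primary command
// buffer (which must be inside a render pass begun with
// VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS).
void recordFrame(const std::vector<VkCommandBuffer>& secondaries,
                 VkCommandBuffer primary, VkRenderPass renderPass, VkPipeline pipeline) {
    std::vector<std::thread> workers;
    for (VkCommandBuffer cmd : secondaries)
        workers.emplace_back(recordWorker, cmd, renderPass, pipeline);
    for (std::thread& t : workers)
        t.join();

    vkCmdExecuteCommands(primary, static_cast<uint32_t>(secondaries.size()),
                         secondaries.data());
}
```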

 

A quote from the Wikipedia Mantle API page, said by Firaxis Games (I contributed to that page myself at the time, so I can vouch for its accuracy; of course, Wikipedia is not friendly toward original information unless it's "reported by a reputable 3rd-party source"):

Much of the work that drivers used to do on an application’s behalf is now the responsibility of the game engine. ... It also means that this work, which must still be done, is done by someone with considerably more information. Because the engine knows exactly what it will do and how it will do it, it is able to make design decisions that drivers could not.

 

I could go on in depth, but basically, DX11 and OGL driver fixes are hacks. When the GPU manufacturers release a fix, they mostly aren't writing the real code as it should be written; the developer would need to change actual source code in the application to fix it properly, but the older APIs don't allow that level of access. So the GPU manufacturer has to write instructions along the lines of "do this when DCS.exe is running and you hit that unit that uses that feature"... that's why the drivers are so big, with so many scenarios baked in. Of course it's probably heuristics, so it doesn't have to be super specific, but it's just not a quality fix, and that's why you get so many side effects and compatibility issues: some machines work, others don't.

 

With the Vulkan API, when there's a bug in that area, it's fixable by the developer most of the time; there's no need for a GPU driver update. And of course it's better to let every developer look after their own game rather than have the GPU manufacturer looking after every single PC game released out there.
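One consequence of that shift is that the debugging tooling also lives with the application now: during development you can turn on the Khronos validation layer and catch your own misuse of the API right away, rather than hoping a driver update papers over it later. A minimal sketch of instance creation with the layer enabled (the application name is just a placeholder):

```cpp
#include <vulkan/vulkan.h>

// Minimal sketch: create a Vulkan instance with the standard validation layer
// enabled, so the *application* developer sees incorrect API usage immediately
// instead of it becoming a per-game driver workaround later.
VkInstance createDebugInstance() {
    const char* layers[] = { "VK_LAYER_KHRONOS_validation" };

    VkApplicationInfo app{};
    app.sType            = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.pApplicationName = "ExampleApp";   // placeholder name
    app.apiVersion       = VK_API_VERSION_1_1;

    VkInstanceCreateInfo info{};
    info.sType               = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo    = &app;
    info.enabledLayerCount   = 1;
    info.ppEnabledLayerNames = layers;

    VkInstance instance = VK_NULL_HANDLE;
    // Real code should check the result and drop the layer if it isn't
    // installed on the user's machine (ship builds usually disable it).
    vkCreateInstance(&info, nullptr, &instance);
    return instance;
}
```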

 

This is where the difficulty comes from: because so much is now the application's responsibility, most developers who were PC-only weren't used to or experienced with this kind of work, so there's a learning curve to get up to speed. It's a major transition and of course it takes time, but that just added more fuel for the stupid anti-Mantle trolls back then.

 

 

 

The third thing: the rest comes down to efficient multi-threading, which is up to the developer. There is quite a bit of room for splitting off things that are splittable. What is splittable? Basically, multiple serial workloads with no interdependencies (if I have the term right) that are all currently running on one thread. DCS does have some separation already; some work is split in "half", some here, some there. It's better than I thought a year or more ago, when I probably didn't test it well enough. From what I can see now, I'd say a quad core should be worthwhile for DCS.
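As a toy illustration of what splitting off non-interdependent serial work looks like (plain C++, with invented task names, nothing from the actual DCS code base):

```cpp
#include <functional>
#include <future>
#include <numeric>
#include <vector>

// Invented stand-ins for two chunks of per-frame work that don't touch each
// other's data, so they can safely run at the same time.
static void updateWeatherParticles(std::vector<float>& particles) {
    for (float& p : particles) p *= 0.99f;        // fake decay step
}
static void updateRadioChatter(std::vector<int>& queue) {
    std::iota(queue.begin(), queue.end(), 0);     // fake message IDs
}

void updateFrame(std::vector<float>& particles, std::vector<int>& radioQueue) {
    // Before: both updates ran back-to-back on the one main thread.
    // After: launch one on a worker thread, do the other here, then wait.
    auto weatherJob = std::async(std::launch::async,
                                 updateWeatherParticles, std::ref(particles));
    updateRadioChatter(radioQueue);   // runs on the calling thread meanwhile
    weatherJob.get();                 // join before anything that needs both results
}

int main() {
    std::vector<float> particles(1000, 1.0f);
    std::vector<int> radioQueue(32);
    updateFrame(particles, radioQueue);
    return 0;
}
```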

 

 

Perhaps, if resources are the issue, there could be some kind of one-time "major upgrade" paid campaign that unlocks the upgrade for everyone once it reaches its goal.

Something like this could be used for other major upgrades that pertain to CORE DCS rather than to any particular module, at least where the logic/technicals and the community agree on it. Anything can of course be made into a module if one wants; I do agree that for now the beginnings of a new ATC are part of a carrier module, but when it comes to the whole of DCS, maybe a different kind of model is more appropriate. It's not really an established model, but hey, DCS does things differently, so why not.

A major upgrade may be too big to be free, but this method of compensation would allow the more hardcore fans to help out the rest of the community by funding more. Once a major upgrade of the engine, core content, or a module is ready to release, a special upgrade-specific fund page would open where users could give anywhere from, say, 5 to 1000€. For more privacy, the exact goal amount could be hidden and the amount completed shown only as a percentage. Once the goal is reached, it would release to everyone for *free*, and both sides would be fulfilled: the upgrade would not have to be forced into a separate module, which is suboptimal both technically (for devs) and practically (for users, e.g. when all users need unit X in the same session).

The goal amount would need to be smartly picked, and I think it works best if set to cover raw costs (break even); it might not work if set too high (for profit), because many low-end users would just wait and wait for others to drop thousands that would never come. During the upgrade fund campaign, to keep people engaged, there could be checkpoints or milestones that drop more bits of info about the upgrade (a quick YouTube preview of one feature, screenshots, etc.), and these could scroll on the special web page like a timeline... etc.

But not crowdfunding: you'd use the same DCS account, the same way you buy existing stuff. I don't think there's any need to bother with "rewards" or "signed copies".

 

So maybe that's how we could get to better performance in that regard, possibly faster. Occasional boosts like this would be welcome so DCS doesn't lag behind.

 

 

The bottom line is that the home PC platform should be something like 10 times more powerful than it is; the industry has been somewhat purposely shifting its focus to the world of handhelds, wearables, and cloud stuff. Unless it's a genuine bug, in general you'd need to go down a checklist to figure out who a given performance issue should really be addressed to. But it looks like AMD is apparently looking to break that stalemate with a quite surprising Zen 2 debut, although the reviews are still a bit mixed. The industry knows it's good, because motherboard prices went up quite a bit; the partners know it's a big deal and are apparently no longer treating AMD as the budget option. It's still only catching up to Intel rather than overtaking it in terms of IPC and frequencies, but it's very close, and people didn't expect that. If the Ryzen IPC uplift continues, on top of increased core counts, then there's a good future ahead, provided they start recognizing these kinds of workloads and giving them equal support instead of treating them as if they don't exist.


Edited by Worrazen

Modules: A-10C I/II, F/A-18C, Mig-21Bis, M-2000C, AJS-37, Spitfire LF Mk. IX, P-47, FC3, SC, CA, WW2AP, CE2. Terrains: NTTR, Normandy, Persian Gulf, Syria

 


You made a really good point about the power consumption. I never considered that. 500 W, baby! =) A few other good points, too.

 

I personally really like the idea of voluntarily crowd-sourcing major engine-specific overhauls that would then be released en masse, but we are probably in the minority. Ironically, people will throw several hundred million dollars at a washed-up hack with delusions of being a Hollywood director, but around here people piss and moan about literally everything.

 

Different "culture" here =/

Where there are enemies, Cossacks will be found to defeat them.

5800x3d * 3090 * 64gb * Reverb G2


Raising single-core performance is difficult (the major problems being heat dissipation and gate leakage); it's easier and much cheaper to add cores.

 

Multicore CPUs have been around for quite some time now. I bought a quad core 2500k in 2011 - it's high time for software developers to adapt to this situation.

Windows 10 64bit, Intel i9-9900@5Ghz, 32 Gig RAM, MSI RTX 3080 TI, 2 TB SSD, 43" 2160p@1440p monitor.


It is a very comprehensive post, and I believe the things written in it must be very close to reality.

 

However, what type of performance issues are being discussed? Considering the graphics and the quality of the simulation, getting 60 fps is more than enough.

 

I run 4k with high settings and get 60 fps. I am more than happy with the performance. All the other high end titles give me the same fps at 4k.


There is a law; it's called Moore's law. Essentially they have reached the near-limits of CPU design, in terms of scaling combined with heat production and power consumption; that's why you see more cores instead of higher clock speeds (which aren't physically possible anymore, if I remember correctly).

 

Excessive heat means faster degradation, which means lower lifespan, which means higher cost, less efficiency, etc etc etc.

