
Waiting for 8700K?



My current rig is an i7 920 OCed to 3.6 GHz (3.8 for the first five-ish years, too hot rendering) with 12 GB of RAM and some SSDs/HDDs. I have the GTX 1060 6 GB running the Razer HDK2 VR headset (same stats as the Rift/Vive but with extra overhead for their software) and an older 720p/1080i 110-inch HD projector.

 

Modern games are never an issue; taking advantage of up to 8 threads on my 9-year-old beast, I max the settings on all of them and get great frame rates at 720p or 1080i.

 

DCS World is a different story. I spend about 80% of my DCS time on the projector with TrackIR, running at 720p (so CPU-limited for sure) while sitting 12 feet away, getting 90 FPS at max-ish settings.

 

In the other 20%, in VR (more CPU/GPU balanced), I get 40-90 FPS while away from cities, mostly around 45 or so. I enjoy the hell out of it; competing against/with other VR players is a blast. I've adapted to the low FPS, so I know my next upgrade will blow me away when I see it running at a silky 90 FPS most of the time, hopefully with a 4K VR headset.

 

My main point is that progress has slowed to a crawl. Being able to keep your motherboard/CPU for 6-10 years with overclocking kind of negates buying the cheaper, slower platform that can, but must, get a new CPU every year for the next 3-4 years before it even catches up to that 4-year-old OCed Intel setup (in DCS World, anyway).

 

I'm not sure if I want to wait for Cannon Lake/Zen 2 or get an 8700K. My main goal is to have a platform that can do "next gen" 4K VR, with future video cards obviously. DCS VR is my main driver, so Intel is a must at this point, but we will see what the future holds; I am no brand loyalist.


Apparently the next batch of 8700Ks won't be available for 2-4 weeks, and the 8700K wasn't supposed to be released till Q1 of next year; Ryzen kinda forced Intel to release the 8700K before they had enough stock...

 

Also, a bunch of new chipsets for the 8700K will be coming in Q1 of next year.

 

I would say, based on reviews and benchmarks, the 8400 is the best "bang for buck", and the 8700K is the most "future proof" CPU out there right now.

 

Intel always does this... they wait until AMD has something that's near their CPUs in performance, then release something just a bit better.

 

There's a reason why the 2600K has been such a good CPU for such a long time... I think Intel jumped the gun and released something a bit too good for the time (the 2600K), so from the 2600K up to the 7700K it was just Intel competing with itself, which is why the performance gains were so poor up till Ryzen...


Edited by Hadwell

My youtube channel Remember: the fun is in the fight, not the kill, so say NO! to the AIM-120.

System specs:ROG Maximus XI Hero, Intel I9 9900K, 32GB 3200MHz ram, EVGA 1080ti FTW3, Samsung 970 EVO 1TB NVME, 27" Samsung SA350 1080p, 27" BenQ GW2765HT 1440p, ASUS ROG PG278Q 1440p G-SYNC

Controls: Saitek rudder pedals, Virpil MongoosT50 throttle, warBRD base, CM2 stick, TrackIR 5 + pro clip, WMR VR headset.


I think I'll go with Threadripper since I'm planning on 4K gaming anyway, and processors are roughly equivalent at 4K since the system is overall GPU-limited.

 

Next generation I'll have a gaming-specific system, with Ice Lake or Ryzen 2.


This Intel/Ryzen thing is doing my head in. My 2600K is fine, but the boy wants a rig to complement his Xbox and PS4 - bloody kids; when I was young it was either an Amiga, an Atari ST, a Nintendo or a Megadrive, not all of them.

 

So he can have the 2600k and I can get a new one.

 

So my hunch is to just go for the 8700K and live with it for the next 5+ years. But now it's Threadripper (which I hadn't heard of until reading this thread) and Ryzen that folk are talking about.

 

I don't do much else other than gaming: no rendering, spreadsheets or video stuff. Looking at it, the newest 8700K is marginally better than the 7700K and blows away the AMD chips in all the games tests.

 

I too am wanting to try and future-proof a bit, as I think VR will be common during the life of the new rig. So really, the way I see it, Intel is the fastest solution: even if games start using multiple cores, say even 4 cores in the next 5 years, we might only see a handful of such games, and Intel is still the faster chip?



 

It depends on what you want to do. If you only want to do gaming, and you don't care about storage speed, add-ons, etc., then the 8700K will suit you well.

 

If you want more than one NVMe SSD, RAID, a sound card, a capture card, 4K video editing, 3D modeling, server hosting, virtual machines, etc., then the 8700K, with its 16 CPU lanes for the video card and 4 lanes for everything else, will not suit you at all.

 

That's my dilemma: go with the known faster gaming chip (8700K), or go with a platform (Threadripper) that will be able to handle complex storage solutions and the manipulation of large files for video editing.

 

I want to start getting into 4k video editing, both for gaming and for real life needs, and the 8700k just doesn't cut it. Maybe once Intel gets its head out of its rear-end, we'll have mainstream Intel parts with more than 16+4 lanes. It's comically out-of-touch with the needs of users in 2017.

 

The reason I would caution against the 8700K at this point is that if you are going for a 100% fast gaming solution, PCIe 3.0 will hold you back once PCIe 4.0 comes around. The throughput of 4.0 is so massive for video cards that you can be sure the standard will last you for the foreseeable future.
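For scale, the raw numbers behind that claim can be worked out from the spec: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, and 4.0 doubles the rate with the same encoding. A quick back-of-the-envelope sketch:

```python
# Per-direction bandwidth of an x16 slot, derived from the PCIe link rate.
# Gen3 runs at 8 GT/s per lane with 128b/130b encoding; Gen4 doubles the
# rate to 16 GT/s.
def pcie_x16_gbytes_per_s(rate_gt_per_s: float, lanes: int = 16) -> float:
    payload_gbits = rate_gt_per_s * (128 / 130)  # effective Gbit/s per lane
    return payload_gbits * lanes / 8             # GB/s across all lanes

print(round(pcie_x16_gbytes_per_s(8.0), 2))   # Gen3 x16: 15.75 GB/s
print(round(pcie_x16_gbytes_per_s(16.0), 2))  # Gen4 x16: 31.51 GB/s
```

Whether games will actually saturate even a Gen3 x16 link in that time frame is another question; benchmarks of the era showed little difference between x8 and x16 for graphics workloads.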


Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 


The i7 8700K sure is interesting, but it will not help at all if ED can't spread their single thread across all 12 threads. Staying with the old 4-core stuff is sensible for now.

 

For gaming, that is. But we need to remember that not everyone's PC is just for DCS.

Many do a lot of work with their PC, so it may need to be top-tier there too. And currently Threadripper is a CPU that beats everything Intel throws at it when it comes to work like 3D rendering, video encoding and other simulations once you get past 7-8 cores.

 

But for pure gaming, the Intel processors are better by every measure, really, especially in games that use just one or two cores.

 

And it is a shame that DCS uses only about 12% of an AMD processor's capacity, being limited to a single core for everything except sound.

 

Consider what multi-core processors could do if DCS could put the player aircraft on one core, weather on another, terrain shadows and texture loading on another, one side's AI on one core and the other side's AI on a second, and then spend the rest of the cores on whatever AI is active near the player aircraft, with one core for special-effects calculations like explosion damage, etc.

 

There should be real benefits at critical moments in having dedicated cores for each task, so waiting times or load peaks in one system don't affect the other processing.
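As a rough illustration of the idea (purely hypothetical function names, not ED's actual engine code), subsystems with no mutual data dependencies could be fanned out to a worker pool each frame, with the frame still gated by the slowest of them:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical subsystem steps -- placeholders, not DCS internals.
def update_weather(dt):
    return ("weather", dt)

def update_red_ai(dt):
    return ("red_ai", dt)

def update_blue_ai(dt):
    return ("blue_ai", dt)

def frame(dt):
    """Run independent subsystems in parallel for one simulation step.

    The frame still has to join on the slowest subsystem before the
    renderer can consume the results."""
    subsystems = [update_weather, update_red_ai, update_blue_ai]
    with ThreadPoolExecutor(max_workers=len(subsystems)) as pool:
        results = pool.map(lambda step: step(dt), subsystems)
    return dict(results)

print(frame(0.016))
```

The catch, as later posts note, is that game subsystems are rarely this independent: AI reads the world state that physics writes, so the real engine work is untangling those dependencies.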

 

Right now it is silly that you can have a computer that gives you maxed-out graphics at 4K at about 140-230 FPS, but when you go to VR it drops to the 22-30 range with AI in missions.

 

There is simply no reason to wait for a newer CPU when DCS can't make proper use of it.

i7-8700k, 32GB 2666Mhz DDR4, 2x 2080S SLI 8GB, Oculus Rift S.

i7-8700k, 16GB 2666Mhz DDR4, 1080Ti 11GB, 27" 4K, 65" HDR 4K.



 

 

If games were properly coded for multi-threading, DCS included, everyone's game performance would be so much better.

 

It's sad that it's 2017, and we still don't have decent multi-core support.



 

My old & spare i7-920 would be plenty, even with the X58 chipset and PCIe Gen2, if all cores were used.

 

Since them cores aren't used as hoped for, we gotta come with the biggest single-barrel gun there is to shoot the DCS bear :(

 

Bang Bang...and we still run one core, BANG BANG

 

TBH, I see DCS nearing a STOP sign if they don't get these issues straightened out. How much more can you add before it all breaks? More detail, more candy, better maps, etc... it all tortures ONE core, on a die of four or far more.


Edited by BitMaster




A 10 GHz single core would be the fastest possible if it had plenty of L1 and L2 memory, almost gigabytes of it, as a single core could then run everything without swapping penalties. But it is impossible, as the distances on the die are too big.

 

Yet we need to push this single-core philosophy as far as possible with current technology.

 

Need to remember that the world's most used OS, and the fastest as well, runs everything via a single thread... everything goes through that, as it is only a single process. Yet it can allocate all cores for its own use, as logical ones, up to 2048 or so processors.

 

 

 

--

I usually post from my phone so please excuse any typos, inappropriate punctuation and capitalization, missing words and general lack of cohesion and sense in my posts.....




 

Also, don't forget that not everything can run in parallel. All CPUs do is math, just very fast, and you can't have 8 CPUs working on a single math problem...

 

Parallelism (having multiple CPUs) works because you can have a different math problem running on each CPU...

 

When one math problem has to be solved before the next math problem has all the information it needs, you need better single-core performance... hence why DCS is so single-threaded...
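That intuition is Amdahl's law: if some fraction of each frame is inherently serial, extra cores only speed up the rest. A minimal sketch (the 90% figure below is illustrative, not a measured DCS number):

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    # Amdahl's law: only the parallelisable (1 - s) part scales with cores.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# If 90% of the frame is serial, even 12 threads barely help:
print(round(amdahl_speedup(0.9, 12), 2))   # 1.1
# If only 10% is serial, 12 threads give a large win:
print(round(amdahl_speedup(0.1, 12), 2))   # 5.71
```

This is why a 6-core 8700K's main draw for DCS is its single-core clocks rather than its thread count.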

 

They do plan on separating AI and such to different cores in DCS, but they've also said that won't make a big difference, which explains why they haven't done it already...


Edited by Hadwell




 

Then why do FPS drop seriously when many AI units are used in a mission?

Windows 10 64bit, Intel i9-9900@5Ghz, 32 Gig RAM, MSI RTX 3080 TI, 2 TB SSD, 43" 2160p@1440p monitor.



 

Software limitations, more to do with graphics and APIs than the CPU...



Just ordered my upgrade to Coffee Lake arriving in 3 weeks.

I hope I can turn up the graphics to almost maximum and still get around 100 FPS.

 

I will try to share some numbers in 3 weeks. :pilotfly:

 

New System:

Platform: Z370 - ASUS ROG MAXIMUS X HERO

CPU: i7 8700K (I hope to get it running at 5GHz) on H115i

GPU: EVGA GTX 1080 Ti SC2 HYBRID (no SLI anymore as it's not working)

RAM: G.Skill 3866MHz CL18

Monitor: ASUS ROG Swift PG348Q with G-Sync, 3440x1440

VR: Oculus Rift CV1

M.2: 960 PRO 1TB

SSD: 3x 850 PRO 1TB

 

Current System:

Platform: X79 - ASUS RAMPAGE EXTREME

CPU: i7 3970X @ 4.5GHz on H100i

GPU: 2x EVGA GTX Titan SC (original first Titan cards from 2013)

RAM: Corsair Dominator 16GB 1866MHz CL19

Monitor: Dell 2560x1600

i9 13900KS  (H150i), RTX 4090, 64 GB RAM @ 6000 MHz CL30, TM Warthog HOTAS + MFD's, MFG Crosswind, LG 48GQ900, Valve Index VR, TrackIR 5, Win11 Pro x64

 


Software limitations, more to do with graphics and APIs than the CPU...
Sure, DCS isn't optimized, but still the AI drags everything down. Too many calls and calculations when they're not required. When the player isn't near, the AI should be calculated and not simulated, more like simple chess: how units move and what the priority is.

 

Right now the AI and many other systems just kill graphics rendering when they shouldn't. If the game gets slowed down as 100 units collide, graphics should still render at max rate, so when you rotate the view, etc., it is smooth and not a slide show.

 



A 5 GHz overclocked 8700K consumes about 50% more power than a 7700K, roughly 95 W vs 150 W.

Isn't it insane?

FC3 | UH-1 | Mi-8 | A-10C II | F/A-18 | Ka-50 III | F-14 | F-16 | AH-64 Mi-24 | F-5 | F-15E| F-4| Tornado

Persian Gulf | Nevada | Syria | NS-430 | Supercarrier // Wishlist: CH-53 | UH-60

 

Youtube

MS FFB2 - TM Warthog - CH Pro Pedals - Trackir 5



If I remember correctly, the TDP value is only the peak value at maximum load, not even an average value!

And TDP is about heat output measured in watts, not directly about how much power the CPU/GPU consumes.

 

A 50% higher TDP just means that under heavy load the CPU generates 50% more heat, requiring better cooling in some cases if the cooler can't keep the temperature down enough.
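The jump also isn't linear in clock speed: dynamic CPU power scales roughly with frequency times voltage squared (P ≈ C·V²·f), and overclocks raise both. A rough sketch with illustrative voltages and clocks, not measured 8700K values:

```python
def relative_power(freq_ratio: float, volt_ratio: float) -> float:
    # Dynamic-power model P ~ C * V^2 * f, expressed as a ratio
    # between the overclocked and stock operating points.
    return freq_ratio * volt_ratio ** 2

# ~16% more clock (4.3 -> 5.0 GHz) combined with a 1.20 V -> 1.35 V bump:
print(round(relative_power(5.0 / 4.3, 1.35 / 1.20), 2))  # 1.47, i.e. ~+50% power
```

So a modest-looking voltage increase is enough to explain a 50% power jump without any change in the silicon's efficiency.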

 

 

 




 

How much RAM are you thinking of getting for the new build, 16 GB or 32 GB?

METAR weather for DCS World missions

 

Guide to help out new DCS MOOSE Users -> HERE

Havoc Company Dedicated server info Connect IP: 94.23.215.203

SRS enabled - freqs - Main = 243, A2A = 244, A2G = 245

Please contact me HERE if you have any server feedback or METAR issues/requests



 

Well, that really depends on what you measure; mine can consume up to 130-150 W at times with Prime95 or IBT v2.

 

Things like the AIDA64 stress test consume FAR less. Crunch Small FFTs and check that value.

 

 

DCS is a 35-50 W game, no more.



You know, since DCS uses one core, the consumption of the 7700K and the 8700K will be the same at the same frequency.

 

 

TDP is the average power dissipated at base frequency under full load.

The TDP of the 8700K is low because its base frequency is low (3.7 GHz). The 8700K has 50% more cores.


Edited by Demon_

Attache ta tuque avec d'la broche.
