Carbonite

South Africa's Top Online Tech Classifieds!
Home of C.U.D.

Intel Alder Lake might have issues with older games with DRM

Nyt Ryda

As stated in the Intel Developer Guide, “If your existing or upcoming game uses a DRM middleware, you might want to contact the middleware provider and confirm that it supports hybrid architectures in general, and the upcoming Intel ADL platform in particular. Due to the nature of modern DRM algorithms, it might use CPU detection, and should be aware of the upcoming hybrid platforms. Intel is working with leading DRM providers such as Denuvo* to make sure their solutions support new platforms.”

Now, this is concerning for a variety of reasons. First of all, buying a new CPU only to find out that the game you want to play is unplayable because its copy protection doesn’t support your processor is, quite frankly, infuriating.

Additionally, what about past games?

There are many older games that aren’t updated anymore that use copy protection systems like Denuvo. If game developers don’t bother updating their Denuvo version, will those older games just prove to be completely unplayable?

This is a very legitimate concern: older copy protection schemes like SecuROM suffered from similar problems and have major compatibility issues on newer systems. We’re at the point where some older games are quite literally unplayable if you bought them legitimately.

Source : Denuvo and other DRM might have issues on Intel's new CPUs


One workaround is apparently disabling E-cores. Will be interesting to see what happens.

DRM in a game is often such a pain for paying customers. Increased CPU usage with all the draw calls and just more inconveniences as time goes on.
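A less drastic variant of that workaround (purely a sketch, with assumed core counts): instead of disabling the E-cores in the BIOS, you can pin an affected game to the P-cores' logical CPUs with Windows' `start /affinity`. Assuming the P-cores enumerate first with two threads each, as reported for Alder Lake, the mask works out like this:

```python
def pcore_affinity_mask(p_cores: int, threads_per_core: int = 2) -> int:
    """Bitmask covering the logical CPUs of the first `p_cores` physical
    cores, assuming P-cores enumerate first (as reported for Alder Lake)."""
    return (1 << (p_cores * threads_per_core)) - 1

# Hypothetical 6P+4E chip (12600K-style): 12 P-core threads -> mask 0xFFF
mask = pcore_affinity_mask(6)
print(f"start /affinity {mask:X} game.exe")  # game.exe is a placeholder
```

Whether the DRM is happy once the process simply never sees an E-core is another question entirely.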
 
I like Intel but I honestly think Alder Lake is a bust. I'd recommend buying a 10900K and a cheap Z490 clearance sale board, along with a kit of dual rank Samsung B-Die.
 
I like Intel but I honestly think Alder Lake is a bust. I'd recommend buying a 10900K and a cheap Z490 clearance sale board, along with a kit of dual rank Samsung B-Die.
Or just Ryzen :)
 
Heard about this this morning. Don't know if it will be as big as the media is making it out to be. I think it's something small that has been blown out of proportion, and now it sounds like nobody will be able to play Far Cry One Billion and Two or GTA 7.

Then again I might also be extremely wrong. I have been before...
 
I like Intel but I honestly think Alder Lake is a bust. I'd recommend buying a 10900K and a cheap Z490 clearance sale board, along with a kit of dual rank Samsung B-Die.

I'm on the other side of the argument. I think Alder Lake is going to be fantastic. The "little" efficiency cores are rumored to be on par with Skylake cores.

I think that an i5 12600k with 6 performance cores and 4 efficiency cores is going to beat that 10900k in gaming on Windows 11.
And I think it's going to arrive at around R6.5k. Motherboard costs will be higher if you get a board using PCIE 5.0 or DDR5.

The 8+4 i7 and 8+8 i9 are going to be very quick.

Next year is going to be even better. The i9 will have a whopping 24 cores (8 performance and 16 efficiency cores); that would be a mighty upgrade from my 9900K.
 
I'm on the other side of the argument. I think Alder Lake is going to be fantastic. The "little" efficiency cores are rumored to be on par with Skylake cores.

I think that an i5 12600k with 6 performance cores and 4 efficiency cores is going to beat that 10900k in gaming on Windows 11.
And I think it's going to arrive at around R6.5k. Motherboard costs will be higher if you get a board using PCIE 5.0 or DDR5.

The 8+4 i7 and 8+8 i9 are going to be very quick.

Next year is going to be even better. The i9 will have a whopping 24 cores (8 performance and 16 efficiency cores); that would be a mighty upgrade from my 9900K.
It's not really on the side of the architecture. I think they did a fine job. I think the miss is with these sorts of quirks, and the necessity to run DDR5 on higher-end boards. Some articles have also claimed that you may need to opt to disable the smaller cores to have better OCing, and the FIVR is going to make OCing, in general, more difficult. Next to that, a YouTuber I watched hypothesized that the small cores might be handling the scheduling to the big cores, so they may introduce latency. Alder Lake is just seeming like a latency fest to me at this point, although those single-core scores are very impressive.
 
It's not really on the side of the architecture. I think they did a fine job. I think the miss is with these sorts of quirks, and the necessity to run DDR5 on higher-end boards.

Hopefully minor quirks. Just like Ryzen not liking certain memory or even certain graphics cards.

Some articles have also claimed that you may need to opt to disable the smaller cores to have better OCing, and the FIVR is going to make OCing, in general, more difficult.

OCing is getting more difficult, even on Ryzen. You sacrifice higher single core boost for higher sustained boost.
Eventually people are just going to use Turbo boost with higher power limits or PBO and call it a day.

Next to that, a YouTuber I watched hypothesized that the small cores might be handling the scheduling to the big cores, so they may introduce latency. Alder Lake is just seeming like a latency fest to me at this point, although those single-core scores are very impressive.

https://specials-images.forbesimg.com/imageserve/611e4da3bd5e1f3ee3383124/Intel-s-Thread-Director/960x0.jpg?fit=scale


Thread Director is a new hardware-based technology that actively recognises tasks and decides whether to assign them to P-cores or E-cores. For example, you want high-priority tasks such as games, content creation and AI to be assigned to P-cores, while E-cores can handle less-intensive tasks and background processes to save power. Thread Director is software-transparent and real-time adaptive, so it can work with any operating system to ensure Alder Lake's two core types are handled correctly.

However, Microsoft has stated that it has worked with Intel to ensure that Windows 11 is specifically optimized for Alder Lake's new CPU core design and Thread Director. Both companies have said that, thanks to the optimizations, Alder Lake will perform better on Windows 11 than on other operating systems.
Source : Intel’s New Alder Lake Processors Will Run Faster On Windows 11
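To make that division of labour concrete, here's a toy sketch of the kind of assignment policy Thread Director aims for (illustration only; the real thing is a hardware block feeding hints to the OS scheduler, and the task fields here are made up):

```python
def pick_core_type(task: dict) -> str:
    """Toy P-core/E-core policy: latency-sensitive foreground work goes to
    P-cores, background or low-intensity work to E-cores to save power."""
    if task["foreground"] and task["intensity"] == "high":
        return "P-core"
    return "E-core"

tasks = [
    {"name": "game render thread", "foreground": True,  "intensity": "high"},
    {"name": "video export",       "foreground": True,  "intensity": "high"},
    {"name": "indexing service",   "foreground": False, "intensity": "low"},
    {"name": "chat app in tray",   "foreground": False, "intensity": "low"},
]
for t in tasks:
    print(f'{t["name"]}: {pick_core_type(t)}')
```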


It will be interesting to see how it pans out.

The 10700k was basically a 9900k, the 10900K had 2 extra cores with higher latency, the 11700k was famously dubbed a "waste of sand" by GamersNexus. This is the most exciting Intel launch in ages, especially with Windows 11 out.
 
It will be interesting to see how it pans out.

The 10700k was basically a 9900k, the 10900K had 2 extra cores with higher latency, the 11700k was famously dubbed a "waste of sand" by GamersNexus. This is the most exciting Intel launch in ages, especially with Windows 11 out.
I wholeheartedly agree. I was prepared to adopt Alder Lake and DDR5 for ages, but what I've been seeing floating around the tech space makes me come to the conclusion that I'm better off saving my money. That's a revelation coming from someone like me haha. I did hear about the OS scheduler and I know about W11's work with Intel. The thing I mentioned is more of a theory. It did seem to make sense while watching the video discussion.
 
Think there's potential for significant growing pains, especially early on. 1st gen Ryzen had its quirks, and the only new thing it introduced was the architecture. Intel is launching a new architecture, two different types of core on the same chip, new RAM, support for old and new RAM, and a new PCIe standard. I'm very excited for what it could all mean, but I'm also on 4th gen Intel and an R9 380, so I'm not going to be upgrading anytime soon and I'm not very invested in the trouble it could cause. I'm definitely going to laugh at people who jump on it from the get-go and then complain about there being issues, though.
 
Think there's potential for significant growing pains, especially early on. 1st gen Ryzen had its quirks, and the only new thing it introduced was the architecture. Intel is launching a new architecture, two different types of core on the same chip, new RAM, support for old and new RAM, and a new PCIe standard. I'm very excited for what it could all mean, but I'm also on 4th gen Intel and an R9 380, so I'm not going to be upgrading anytime soon and I'm not very invested in the trouble it could cause. I'm definitely going to laugh at people who jump on it from the get-go and then complain about there being issues, though.
G.SKILL Trident Z5 6800 46-46-46 :eek:🥳
 
Yeah man I might fall for the hype when speed/CL >200 :sleep:
Now that you mention it. Yeah, wow, honestly. I just feel like I’m missing something? But no, this is the kind of early-adopter speed we can expect. My only question is: why not launch DDR5 on server for the first year or two, then move to HEDT, and then finally onto mainstream consumer stuff?

My argument remains the same. The average Joe doesn’t all of a sudden require 200-300GB/s of read/write/copy, and I know of literally no human being who currently requires more than 64GB of RAM in their daily system. Well, I know of exactly one individual who even requires that!

Like don’t get me wrong, DDR4 hits a hard cap at 5100-5333 for daily-usable RAM configurations, but I don’t even think one needs to daily 5000MT/s right now. Games and current consumer applications just don’t need transfer speeds that high *yet*.

It’s just so peculiar to me especially because their current silicon can’t even run memory speeds such as 4800 at Gear 1/1:1 ratio. It screams ‘not ready for primetime’ to me. I just don’t get it. Perhaps they wanted to win over those with major CUD so that they could grab market share from Zen?

No one should upgrade to Alder Lake unless they absolutely require the workstation performance and/or 256GB+ of RAM. I’m sure there’s someone on this forum who actually needs that, but that isn’t the case for 99.999% of people. DDR5 is gonna get body’d by good dual rank B-Die, and if you actually want to own an enthusiast-grade board, you’d have to go DDR5. The only other option is going for a lower-end board, which will probably detract from your ability to OC the CPU or the RAM.

TL; DR:
If you like Intel, buy a cheap 10-core and Samsung B-Die in dual rank. If you need PCIe 4.0 then buy an 11700K.

If you like AMD and/or want the best performance on desktop, get a 6950X whenever that launches and pair it with dual rank B-Die or high speed Micron/Hynix ICs.
 
Now that you mention it. Yeah, wow, honestly. I just feel like I’m missing something? But no, this is the kind of early-adopter speed we can expect. My only question is: why not launch DDR5 on server for the first year or two, then move to HEDT, and then finally onto mainstream consumer stuff?
I mean, it's not uncommon for early adopters: DDR3 was faster and at lower timings than some of the first DDR4 stuff. Can't find JEDEC specs for timings, but the lowest bin is 4800, which would put it firmly in the "slower than top-end DDR4" grouping.

I think they are "rushing it to market", as the first chips were made in 2018 and the spec has been around for a while. Also, there are no datacenter or HEDT platforms for DDR5 yet anyhow. Somehow Intel consumer is getting it first.
 
I mean, it's not uncommon for early adopters: DDR3 was faster and at lower timings than some of the first DDR4 stuff. Can't find JEDEC specs for timings, but the lowest bin is 4800, which would put it firmly in the "slower than top-end DDR4" grouping.

I think they are "rushing it to market", as the first chips were made in 2018 and the spec has been around for a while. Also, there are no datacenter or HEDT platforms for DDR5 yet anyhow. Somehow Intel consumer is getting it first.
4800 40-40-40-77 😕 1.1V though because we all know how big of a deal making our RAM run cooler and more efficient is!
 
:sick::sleep:

WTF, that's terrible.
It’s scary bad. I thought my eyes were lying to me. The issue is, if these ICs can’t be safely over-volted to like 1.5, then we likely will never see super tight timings. I think there’s something fundamentally different about the process. I still can’t believe some people are defending it.
 
It’s scary bad. I thought my eyes were lying to me. The issue is, if these ICs can’t be safely over-volted to like 1.5, then we likely will never see super tight timings. I think there’s something fundamentally different about the process. I still can’t believe some people are defending it.
As if there ever needed to be more proof of the fact that people are stupid. There are plenty of shitty takes in the PC space, but saying DDR5 is currently the better option is like saying FX was the best CPU architecture ever invented.
 
As if there ever needed to be more proof of the fact that people are stupid. There are plenty of shitty takes in the PC space, but saying DDR5 is currently the better option is like saying FX was the best CPU architecture ever invented.
I can’t wrap my head around it though. It’s like looking at 4+4+4=12, and then questioning if there isn’t some sort of step you did incorrectly.

RAM has always had these different timings, and you always had to keep them reasonable because they’re as important as raw transfer speed. But now there are people saying that at higher speeds the timings aren’t as important and I’m just like: I’ve tested 5000C18 vs 3800C13 and they basically trade blows.

Transfer speed helps for raw data crunching but that’s about it. That being said, 4800 40-40-40. If a 2X8GB 4800 kit isn’t worth like R1200, then DDR5 is a scam.
 
I can’t wrap my head around it though. It’s like looking at 4+4+4=12, and then questioning if there isn’t some sort of step you did incorrectly.

RAM has always had these different timings, and you always had to keep them reasonable because they’re as important as raw transfer speed. But now there are people saying that at higher speeds the timings aren’t as important and I’m just like: I’ve tested 5000C18 vs 3800C13 and they basically trade blows.

Transfer speed helps for raw data crunching but that’s about it. That being said, 4800 40-40-40. If that 4800 kit isn’t worth like R1200, then DDR5 is a scam.
Yeah, it's a very confusing time. When I calculated the whole 6800/46 = 147 thing... I couldn't believe it. I knew it was bad, but damn. There's no point for 99.99% of the population not doing HEDT or server compute in even getting the higher bandwidth offered by high-end DDR4, never mind what we're seeing with DDR5. I can only think of people doing high-res photo editing, and even then I'm not sure what kind of uplift 6800 46-46-46 is going to give you vs good DDR4 at 4400 CL18 or even 20-22.

Getting the latency down is what will actually give better performance than going to some ultra high bandwidth crap.
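For reference, 6800/46 = 147 is the speed-to-CL ratio mentioned earlier in the thread; converting kits to absolute first-word latency makes the comparison clearer. A quick sketch using the standard DDR latency arithmetic (kit numbers are the ones quoted above):

```python
def first_word_latency_ns(rate_mts: float, cl: int) -> float:
    """Approximate first-word latency in ns. DDR transfers twice per clock,
    so the real clock is rate/2 MHz and one CL cycle takes 2000/rate ns."""
    return 2000 * cl / rate_mts

def speed_cl_ratio(rate_mts: float, cl: int) -> float:
    """The transfer-rate/CL figure of merit quoted above (higher is better)."""
    return rate_mts / cl

# DDR5-6800 CL46 lands around 13.5 ns; DDR4-3800 CL13 around 6.8 ns
for rate, cl in [(6800, 46), (4800, 40), (4400, 18), (3800, 13)]:
    print(f"DDR-{rate} CL{cl}: {first_word_latency_ns(rate, cl):.1f} ns, "
          f"ratio {speed_cl_ratio(rate, cl):.0f}")
```

Which is exactly the point being made: the early DDR5 bins roughly double the first-word latency of a tuned DDR4 kit.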
 
Yeah, it's a very confusing time. When I calculated the whole 6800/46 = 147 thing... I couldn't believe it. I knew it was bad, but damn. There's no point for 99.99% of the population not doing HEDT or server compute in even getting the higher bandwidth offered by high-end DDR4, never mind what we're seeing with DDR5. I can only think of people doing high-res photo editing, and even then I'm not sure what kind of uplift 6800 46-46-46 is going to give you vs good DDR4 at 4400 CL18 or even 20-22.

Getting the latency down is what will actually give better performance than going to some ultra high bandwidth crap.
I 100% agree with you. The memory controller ratio can be less significant in some workloads, yes, but raw timings will affect performance everywhere, and you can’t just keep adding more and more data transfer to compensate.
 
There will be DDR4 boards too, so you can keep your good DDR4 RAM with low latency. I suspect that even with DDR4 RAM it will be ahead of everything. So there's not much of a downside to Alder Lake besides the cost.

I feel like the more @affxct says to stay on the 10th or 11th series, the more he's trying to reason with himself that he doesn't need to upgrade. But I think you'll be on Alder Lake by 2022 already :LOL:
I mean, didn't you go from a 3080 to a 6800xt to a 3070Ti to a 3080Ti ? All that change for 7% more performance after 4 cards.
Or am I remembering wrong.
 
There will be DDR4 boards too, so you can keep your good DDR4 RAM with low latency. I suspect that even with DDR4 RAM it will be ahead of everything. So there's not much of a downside to Alder Lake besides the cost.

I feel like the more @affxct says to stay on the 10th or 11th series, the more he's trying to reason with himself that he doesn't need to upgrade. But I think you'll be on Alder Lake by 2022 already :LOL:
I mean, didn't you go from a 3080 to a 6800xt to a 3070Ti to a 3080Ti ? All that change for 7% more performance after 4 cards.
Or am I remembering wrong.
I’ll definitely upgrade when it becomes viable, don’t get me wrong. I actually posted some info on DDR4 vs DDR5. They’re actually two very different systems, and that explains part of what we’re seeing. With some napkin math I came to the conclusion that I can’t afford Alder Lake - not actually sure who can, to be honest.

The issue with the DDR4 boards is that they’re going to be quite literally the MSI A Pros and the budget-oriented of the bunch. I like motherboards, so I’d much rather own a high end old gen vs a mid range new gen board. Stuff like PCIe 5 etc doesn’t really strike me as anything I’d ever want/need anytime soon.

With regards to the GPU thing, I’m just never content with what I have and I always find a reason to change something. By this point I just sell whenever I feel like it’s time and move onto the next. I don’t really think too much about it anymore. I don’t really feel like any of the stuff I own is truly mine. Feels more like I’m borrowing it before it’s time to move onto the next one.
 
The issue with the DDR4 boards is that they’re going to be quite literally the MSI A Pros and the budget-oriented of the bunch. I like motherboards, so I’d much rather own a high end old gen vs a mid range new gen board. Stuff like PCIe 5 etc doesn’t really strike me as anything I’d ever want/need anytime soon.

There are a few solid midrange DDR4 boards. MSI Z690 Tomahawk WiFi, Asus Strix Z690-E Gaming, TUF Gaming Z690-Plus and the Aorus Elite Z690.

The higher end boards don't usually appeal to me. They're mostly overpriced for just a few extra USB ports and fan connectors.

It does make sense to wait for the tighter DDR5 to come down in price. GSKILL's got some 36-36-36-76 DDR5-6400 coming.
Next year there's Zen 4 and Raptor Lake too so it does make sense to not be an early adopter.
 
There are a few solid midrange DDR4 boards. MSI Z690 Tomahawk WiFi, Asus Strix Z690-E Gaming, TUF Gaming Z690-Plus and the Aorus Elite Z690.

The higher end boards don't usually appeal to me. They're mostly overpriced for just a few extra USB ports and fan connectors.

It does make sense to wait for the tighter DDR5 to come down in price. GSKILL's got some 36-36-36-76 DDR5-6400 coming.
Next year there's Zen 4 and Raptor Lake too so it does make sense to not be an early adopter.
I have hope for the Z690 Elite to some degree, but the Elite has always had a significantly downgraded VRM and has lost out on the better of the topologies. I think the Pro AX is usually the sweet spot for Aorus.

They’ve been some killer boards as of late on X570 and Z490/590. The Tomahawk should be capable, but I noticed first hand that MSI do gimp its memory topology in contrast to the Unify’s and Ace’s of the world.

I don’t know if you saw the prices of X570S boards, but I genuinely am foreseeing stuff like the Z690 Strix-E/Tomahawk/Aorus Elite going for somewhere in the R6000-8000 price range. If they’re actually R4000-5000 I’ll fall off my fucking chair and I mean that. It would explode my brain.

Objectively TG were selling 4800 40-40-40 for $299 at 2X16 and that’s not me trying to make shit up. That was their Amazon price. As you can imagine when we add on 15% VAT and 5-10% for other inherent fees, we’re gonna be paying closer to $380-400 (nothing new obviously). We’re talking a buy-in price of around R7000-ish for DDR5, and that’s the only way you’ll be getting to even an Aorus Ultra level board, which itself should kick off at R9000-10000 (the price of the X570S Ace Max on Wootware should kinda give credibility to what I’m saying even though people are trying to paint me as some sort of lunatic).

With regards to the chips themselves, well the 12700K and 12900K are set to match Zen 3, and we have always inherently had higher Intel prices circa the R11000 initial price of the 11900K. I would imagine stock won’t be great as Intel has always been constrained on their Intel 7 (10nm) process.

Call me a lunatic, but I see a R12000 12700K and a R17000 12900K, and that’s me being optimistic. Realistically I expect closer to R15K/20K. Yes that probably sounds crazy, but I’ve yet to be impressed by fair ZA sale prices. At some point you’ve just gotta learn to expect the worst when it’s all that you’ve been seeing throughout history.
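The VAT-and-fees arithmetic above can be sketched as a rough landed-cost estimate (the 8% fee figure and the exchange rate are illustrative assumptions, not real import data, and actual shelf prices add retailer margin on top):

```python
def landed_price_zar(usd_msrp: float, zar_per_usd: float,
                     vat: float = 0.15, fees: float = 0.08) -> float:
    """Rough landed cost: MSRP plus assumed channel/import fees, then 15%
    VAT, converted at an assumed exchange rate."""
    return usd_msrp * (1 + fees) * (1 + vat) * zar_per_usd

# The $299 2x16GB DDR5-4800 kit mentioned above, at an assumed R15.2/USD
print(round(landed_price_zar(299, 15.2)))
```

Depending on the rate and markup you plug in, the buy-in lands somewhere between the mid-R5000s and the R7000 figure above.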
 
Call me a lunatic, but I see a R12000 12700K and a R17000 12900K, and that’s me being optimistic. Realistically I expect closer to R15K/20K. Yes that probably sounds crazy, but I’ve yet to be impressed by fair ZA sale prices. At some point you’ve just gotta learn to expect the worst when it’s all that you’ve been seeing throughout history.

Calling it now, MSI Tomahawk Wifi Z690 DDR4 for R6k, 8+4 core i7 12700k for R10k.
Not terrible, that's how much it cost for a decent X570 and 12 core 5900x last year at the Zen 3 launch.

Let's see who is closer to the mark come November 4th.
 
Calling it now, MSI Tomahawk Wifi Z690 DDR4 for R6k, 8+4 core i7 12700k for R10k.
Not terrible, that's how much it cost for a decent X570 and 12 core 5900x last year at the Zen 3 launch.

Let's see who is closer to the mark come November 4th.
Mmmm R10K? I mean it’s possible if we pay 1:1 MSRP with just VAT. At R10K it would be a very reasonable price, and the board at R6K would also be fairly reasonable. That wouldn’t be too bad for that level of performance I reckon.
 
