
Posted
1 hour ago, Sarex said:

Yeah, I'm going to go out on a limb and say that 5 nodes in 4 years ain't gonna happen. That is an extraordinarily optimistic roadmap.


Posted (edited)

GN testing of that unfinished APO/thread-scheduling program @Gfted1 linked to. tl;dr: The efficiency cores aren't powerful enough to serve as primary threads for gaming workloads - using them that way leads to both lower performance and, ironically, worse power efficiency - and this application seems to optimize thread scheduling successfully enough to minimize those issues and get more out of the CPU. Unfortunately, if Intel's explanation for why so few models and games are supported is true, it also seems like an application they might never fully get off the ground and will probably end up dropping because it's too much work to maintain - unless they figure out some automated way of running games through tests to find the best thread-scheduling configurations. That's certainly possible at some point in the future, but it doesn't seem to be what they're doing right now.
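For anyone wondering what "thread scheduling" concretely means here: the basic mechanism is CPU affinity. A minimal sketch, assuming a hypothetical hybrid chip where logical CPUs 0-15 are P-core threads and 16-31 are e-cores (APO's actual internals aren't public, and it does considerably more than this):

```python
import os

# Illustration only: APO's internals aren't public and it does far more
# than this. The point is just the basic mechanism of steering threads
# away from e-cores via CPU affinity (Linux-only call). The core
# numbering is a made-up assumption for a hypothetical hybrid part.
P_CORE_CPUS = set(range(0, 16))   # hypothetical P-core logical CPUs

def pin_to_p_cores(pid: int = 0) -> None:
    """Restrict a process (0 = the calling process) to the P-core set."""
    os.sched_setaffinity(pid, P_CORE_CPUS)

if __name__ == "__main__":
    pin_to_p_cores()
    print("now running on CPUs:", sorted(os.sched_getaffinity(0)))
```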

I've never been a huge fan of this idea of CPUs having different/asymmetrical core configurations, especially not with how many threads CPUs have these days, so it is kind of funny to see Intel's approach here causing havoc for them and leaving unrealized gains on the table. And now AMD has similar issues with its X3D parts...

Edited by Bartimaeus

Posted

APO is a great use case for machine learning once enough data has been accumulated; that could also automate testing and should provide results in a much smaller timeframe. At least that's what I would try with APO, because manually tuning after every patch and for each new game release (or even application - Photoshop, for instance, doesn't have great multithreading and could probably also benefit from APO) would otherwise only be feasible if the optimizations could somehow be handed over to the community.
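A rough sketch of what such an automated harness could look like - the dummy workload below stands in for launching the game and measuring FPS, and both candidate configurations are assumptions:

```python
import os
import time

# Sketch of an automated tuning loop, assuming Linux. A real APO-style
# harness would launch the actual game, record FPS, and test far more
# than two affinity configurations.
def dummy_workload() -> float:
    """Stand-in benchmark: score is inversely proportional to runtime."""
    start = time.perf_counter()
    total = 0
    for i in range(2_000_000):
        total += i * i
    return 1.0 / (time.perf_counter() - start)

def score_affinity(cpus: set[int]) -> float:
    os.sched_setaffinity(0, cpus)   # pin this process, then measure
    return dummy_workload()

if __name__ == "__main__":
    all_cpus = sorted(os.sched_getaffinity(0))
    candidates = {
        "all cores":  set(all_cpus),
        "first half": set(all_cpus[: max(1, len(all_cpus) // 2)]),
    }
    best = max(candidates, key=lambda name: score_affinity(candidates[name]))
    print("best scheduling config:", best)
```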

6 hours ago, Bartimaeus said:

I've never been a huge fan of this idea of CPUs having different/asymmetrical core configurations, especially not with how many threads CPUs have these days, so it is kind of funny to see Intel's approach here causing havoc for them and leaving unrealized gains on the table. And now AMD has similar issues with its X3D parts...

Well, those are here to stay, what with AMD adding "c" cores to their CPUs in the future. Oh, they sure like to tell everyone how much better their c-cores are going to be compared to Intel's e-cores - because they're basically the same architecture and thus have the same IPC, just more tightly packed and clocked lower - but one only needs to look at the 7900X(3D) and 7950X(3D) and their myriad scheduling problems to give the lie to that statement. There are games where a 7700X beats the 7900X and 7950X due to coordination issues between separate CCDs that are otherwise identical, and the 7800X3D is consistently as fast as or in some cases even faster than the 7950X3D - and that is with the artificially lower clock speeds of the 7800X3D.

Having cores that clock lower is going to be an issue for scheduling whether or not they have the same IPC, and between Intel's Thread Director and AMD's core parking on the 7900X3D/7950X3D, the conclusion is pretty obvious. The c-cores are not going to have the same performance as the regular cores, and therefore AMD is going to run into scheduling issues - unless they just turn the c-cores off in gaming like they do with the 79X0X3Ds, which also doesn't always work properly. The proof is in Steve's benchmarks.

With the way APO seems to work according to Hardware Unboxed's video - i.e. making sure one e-core per cluster has access to the cluster's full L2 cache by turning the other three off, and directing threads to make better use of the CPU - it sort of makes sense that the only supported CPUs so far are the 14900K and 14700K. APO might not work properly, or might produce no real performance uplift, on a 14600K with only two e-core clusters. The 13700K also has only two of them, so for 13th gen that would probably just leave the 13900K.
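The cluster trick translates into something like this (purely illustrative - the CPU numbering is assumed, and APO's actual core disabling happens below the level of affinity masks):

```python
# Sketch only: pick one e-core per shared-L2 cluster, assuming e-cores
# sit on logical CPUs 16-31 in clusters of four. IDs are hypothetical;
# real topology should come from /sys/devices/system/cpu or a library
# like hwloc.
E_CORE_CPUS = list(range(16, 32))
CLUSTER_SIZE = 4

one_per_cluster = E_CORE_CPUS[::CLUSTER_SIZE]
print(one_per_cluster)   # -> [16, 20, 24, 28]
```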

Not unlocking APO for the 13900K is a pretty lame move though, if understandable from a marketing point of view (assuming APO becomes more than a tech demo). There's no reason it would not work just as well on a 13900K, seeing how the 14900Ks are just 13900Ks that won the silicon lottery. I mean, like literally.

 


Posted (edited)

A hybrid design with both high-power cores and low-power, high-efficiency cores makes so much more sense for laptop chips, especially ones aimed at ultralights and handhelds, than it does for desktops. For a gaming-focused desktop especially, the hybrid design makes zero sense. We've yet to see AMD's implementation of that design, but I imagine the story will be similar.

Edited by Keyrock


Posted

I'll be curious to see how the iGPUs in the Meteor Lake chips perform once they're out in the wild. There's a leaked Geekbench benchmark out there showing promising results, buuuuut... 1) It's a "leak" so I'm naturally skeptical as to the authenticity of said benchmark. b) Even if it's real it's just a single benchmark, it could be an outlier. III) It's Geekbench, a synthetic benchmark, I much prefer real world applications (games).

Anyway, I do hope they are the leap forward that some people seem to think they are. It would be nice for AMD to get some competition in the APU space. I'll be in the market for a handheld in the near future (read: probably when the Strix Point based handhelds arrive in roughly a year). Right now all the handhelds coming out are basically the same in terms of processor: most of them run 7840U-derived chips (close to my dream of a handheld that can run any game at 1080p with at least medium-ish settings, but coming up short) or previous-gen AMD APUs made to run at roughly 15 W to 30 W. It would be nice to have some Intel choices on the market.


Posted (edited)

I don't disagree with your assessment that AMD could use some competition there, and theoretically Arrow Lake, with potential Battlemage-based iGPUs, could provide it - assuming for a moment that Battlemage is much more efficient than Arc, which remains to be seen. Perhaps even in the console space - but that rides on Intel delivering the goods quickly enough (and, in the case of Battlemage, in working condition from the beginning) and on the SoCs not being prohibitively expensive, both of which are a bit in doubt at the moment*.

Arc was really good for a first attempt at a dedicated consumer GPU (Larrabee doesn't really count). It eats up a bit more power than most other cards on the market, especially compared with nVidia's 4000 series GPUs, but it performs well enough when it works; they beat AMD in ray tracing from day one, and XeSS is better than FSR. If - and that is a capital-I If - Battlemage really hits its performance target, which is the RTX 4080, Intel will have effectively caught up with Navi 31, albeit with a year and a half in between.

Not to mention that the Arc cards are already pretty good production GPUs for what they cost.

Geekbench scores are pretty worthless; not only is it a synthetic benchmark, it is also wildly inconsistent between runs.

*Edit: And I suppose Sierra Forest, the new 288-core server CPUs, will probably take precedence for production capacity. Intel needs to fire something back at AMD's EPYC lineup, like, yesterday.

Edited by majestic


Posted (edited)
1 hour ago, majestic said:

and XeSS is better than FSR.

Disagree. Granted, small sample size, but in the very few games I've played (2) that had both XeSS and FSR2, I've found that XeSS gives better performance at comparable settings, but at the expense of significantly more visual artifacts. The image quality I experienced with XeSS was unacceptable to me - not quite the vaseline smear that was DLSS 1, but still pretty bad. The performance was great, though. It's new tech; I'm sure they'll work it out like they did their drivers... Eventually.

Edited by Keyrock

  • 3 weeks later...
Posted (edited)

Congratulations, Intel: after re-releasing Raptor Lake on desktop as a "new" generation, complaining about AMD's laptop naming scheme is really, really pathetic. Never mind your own refreshes and confusing naming schemes of the past... uhm... fire whoever made this immediately.

Edit: anyone wanna bet on whether an intern made these slides or whether they tried out their new AI accelerators on a marketing AI? Can't have been someone with more than two brain cells, or anyone even semi-familiar with the current tech landscape.

Edited by majestic

Posted (edited)

@majestic I have a differing view on this. I applaud Intel for taking the next step in helping crack addicts get work. It's one thing to help a recovering crack addict, but Intel went above and beyond by hiring a crack addict still smoking crack, on the job even, to their marketing team. Intel is being both brave and compassionate by helping this individual become a productive memb... Okay, maybe not productive, but a member of the workforce. Bravo, Intel. Bravo.

Edited by Keyrock

Posted

Jokes aside, it is doubly strange to make slides like that just shortly before the Meteor Lake laptops come out. Meteor Lake is going to be similar to AMD's 7520U insofar as it has what are basically Raptor Cove cores on a smaller node, rebranded as "Redwood Cove", much like the 7520U has Zen 2 cores on a smaller process node. That is the whole reason there are internal slides and leaks showing the MTL CPUs providing at best a 5% boost in ST performance, with a larger multithreaded performance boost due to moving to Crestmont for the e-cores.

Steve also said it in the video: Intel does have a point about the confusing naming schemes, but stones and glass houses and whatnot.


Posted

Maybe Raja brought in the Vega marketing team when he was at Intel and they stayed after he left?

(I think the guy who was responsible for a lot of AMD's shonky marketing left recently; it would be ironic if he went to Intel like a lot of the others. OTOH, it's not the first time Intel has done something similar fairly recently - e.g. AMD's 'glued' chips, which of course Intel was planning to make a version of themselves while slamming the concept.)

Posted

That was my first reaction too - I actually already had the post typed up saying "Looks like Intel recruited AMD's marketing team", but then decided against it.

Kinda reminds me of Microsoft's anti-Linux marketing campaign of the early 2000s. That was a major facepalm moment too.

2 hours ago, Zoraptor said:

(I think the guy who was responsible for a lot of AMD's shonky marketing left recently; it would be ironic if he went to Intel like a lot of the others. OTOH, it's not the first time Intel has done something similar fairly recently - e.g. AMD's 'glued' chips, which of course Intel was planning to make a version of themselves while slamming the concept.)

The maximum die size on newer process nodes is going to drop drastically: High-NA EUV lithography will shrink the largest achievable die down to 429mm². To make matters worse, I/O is scaling poorly with new process nodes and SRAM has all but stopped scaling, while at the same time caches keep getting larger and processors need more and more I/O. Not that Intel didn't realize that at the time, but... actually, since Foveros is different enough, that's not even the really stupid part about saying that AMD is gluing their CPUs together.
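(For anyone wondering where 429mm² comes from: a standard EUV scanner exposes a 26mm x 33mm reticle field, and High-NA halves that field in one dimension. Quick check:)

```python
# Where the die-size ceilings come from: reticle field dimensions in mm.
standard_euv_mm2 = 26 * 33     # 858 mm² maximum field today
high_na_euv_mm2 = 26 * 16.5    # 429 mm² - High-NA halves one dimension
print(standard_euv_mm2, high_na_euv_mm2)
```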

The Pentium D series was just that. Two Pentium 4 CPUs glued together. Intel did it first, and pretty poorly at that. :p

I actually had a Pentium D based system back then. That was, uhm, yeah. Pretty bad. I mean, it worked, and it was stable, but it required a ginormous copper block with a high-RPM fan to cool it down, and the mainboards had to be designed to tolerate massive bending from the weight of the coolers. NetBurst also turned out to be a dead-end idea. Intel kept saying the NetBurst architecture would pay off once software was written specifically for it, but that never materialized. AMD would go on to copy that marketing line when all they had were more cores on their Bulldozer CPUs - well, we have more cores, they're going to pay off eventually. Sure they did, just not within the lifespan of that architecture.

Things do seem a bit cyclical, although Intel is not as far behind in performance as they were back before Core 2 and later the Core i series came out.


Posted

I'd give them a pass on the Pentium D for it being an eternity ago in chip-making terms, and a stopgap after some truly awful decisions.

'Glued' chips though... that was UserBenchmark level cringe marketing.

Posted

Mr. Intel, I don't feel so good. In the comments, GN said they would also cover idle power at some point within the next few months, which might be a bit more interesting.


Posted (edited)

Well, the difference in efficiency provided by TSMC's N5 process node compared to Intel 7 cannot fully be made up by tuning, and even though you can do more, to better effect, on an Intel system by undervolting and setting power limits, physics gonna physic... :shrugz:

On 12/19/2023 at 10:54 PM, Bartimaeus said:

Mr. Intel, I don't feel so good. In the comments, GN said they would also cover idle power at some point within the next few months, which might be a bit more interesting.

If I were GREG, I'd complain about them primarily using 7-zip for productivity efficiency under power constraints. The Ryzen CPUs are really far ahead in this particular workload, while it is much closer (or the other way around) in others. Limiting the CPUs to 95W and running the Chromium compilation test would probably yield more favorable results for Intel - or at least results that aren't as heavily in favor of the AMD CPUs for efficiency. I posted about that before, but der8auer ran some tests back when the 13900K released, and locked to 95W it performed roughly the same in his benchmarks as a 7950X in Eco mode (i.e. with a 95W power limit).
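(To make the efficiency arithmetic concrete with toy numbers - these are not from anyone's benchmarks: at the same power cap, energy per task is power times completion time, so whichever CPU finishes first wins on efficiency.)

```python
# Illustrative only: energy per task at a fixed 95 W package power cap.
POWER_CAP_W = 95

def task_energy_wh(seconds: float, power_w: float = POWER_CAP_W) -> float:
    """Energy consumed finishing one task: watts x hours."""
    return power_w * seconds / 3600

print(f"10-minute finish: {task_energy_wh(600):.1f} Wh")   # ~15.8 Wh
print(f"12-minute finish: {task_energy_wh(720):.1f} Wh")   # ~19.0 Wh
```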

GREG should really get the feeling that Steve picked the 7-zip test primarily to make a point, because that is the single one of his productivity benchmarks that will very clearly favor AMD in terms of efficiency even if you locked both CPUs to the same power limit. That was very much a foregone conclusion.

Well, GREG set himself up for it by saying all workloads, so sucks to be GREG.

The argument that you could beat the larger L3 cache in gaming with power limits was also not very well thought out, to put it mildly. The 7800X3D performs exceptionally well even with its artificially crippled clock speeds (well, crippled might be a strong word, but they are at least lower than those of the other Zen 4 X3D CPUs) because it has far fewer cache misses. The gap between the 7950X3D and the 7800X3D in games that do not have scheduling issues is so small that it really proves how little X3D performance relies on clock speed. That is how the 5800X3D is still sitting very close to the top of the charts despite its older architecture, and how the 7800X3D can be that efficient in gaming.

That will only change if we either start running into meaningfully diminishing returns from larger L3 caches (technically we're already looking at that, what with the difference between the regular Zen 4 parts and the X3D ones being smaller than it was on Zen 3, but the scaling is still more than great) or when Intel just does the same with their new packaging and interconnect technology. The latter will come eventually if all goes well; the former may or may not happen, depending on cache sizes, game development, increasing memory bandwidth, or the physical restrictions of SRAM no longer scaling with node shrinks.
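(A back-of-the-envelope version of that cache-versus-clocks trade-off, using the textbook average-memory-access-time model; every number below is an illustrative assumption, not a measurement of any real CPU:)

```python
# AMAT = hit time + miss rate x miss penalty (all in nanoseconds).
def amat_ns(hit_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    return hit_ns + miss_rate * miss_penalty_ns

small_fast_cache = amat_ns(hit_ns=10, miss_rate=0.20, miss_penalty_ns=80)
big_slower_cache = amat_ns(hit_ns=12, miss_rate=0.08, miss_penalty_ns=80)
print(small_fast_cache)   # 26.0 ns
print(big_slower_cache)   # 18.4 ns - fewer misses beat the slower hits
```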

Edited by majestic
if -> wiff, yay

  • 3 weeks later...
Posted

Rumor is that MSI is preparing a handheld powered by Meteor Lake, supposedly called The Claw (lol). Well, MSI preparing a handheld is confirmed AFAIK; the rumor is that it's powered by Meteor Lake. If true, this is exciting news. I'm always pro-competition, and AMD has thus far been pretty much unchallenged in the APU space. I'm very curious to see how it will perform.


Posted
On 11/16/2023 at 11:18 PM, Keyrock said:

Disagree. Granted, small sample size, but in the very few games I've played (2) that had both XeSS and FSR2, I've found that XeSS gives better performance at comparable settings, but at the expense of significantly more visual artifacts. The image quality I experienced with XeSS was unacceptable to me - not quite the vaseline smear that was DLSS 1, but still pretty bad. The performance was great, though. It's new tech; I'm sure they'll work it out like they did their drivers... Eventually.

Eh, somehow I forgot to reply. That is - was, with XeSS 1.2 out now - only true if you used XeSS on AMD or nVidia cards, as Intel uses an entirely different AI model for XeSS on Arc. XeSS 1.2 is looking rather good on non-Arc cards too; performance is generally lower than DLSS or FSR at the same quality settings, but XeSS at a lower rendering resolution can sometimes still beat FSR in terms of image quality (i.e. XeSS Balanced is as fast as FSR Quality while having the same if not better visual fidelity). Depends on the game, of course.

32 minutes ago, Keyrock said:

Rumor is that MSI is preparing a handheld powered by Meteor Lake, supposedly called The Claw (lol). Well, MSI preparing a handheld is confirmed AFAIK; the rumor is that it's powered by Meteor Lake. If true, this is exciting news. I'm always pro-competition, and AMD has thus far been pretty much unchallenged in the APU space. I'm very curious to see how it will perform.

Yeah, that is going to be interesting. The regular Intel Core Ultra 7 155H would be enough for 1080p at Medium settings - or at least it was on pre-production sample notebooks from Acer - which makes sense, as the Meteor Lake CPUs have (up to) 128 execution units, i.e. as many as an Arc A380. Still seems kind of risky; I mean, Intel's drivers have improved a lot, but there's still the odd game here and there that won't run at all, or runs terribly.

Not going to buy a handheld either way. I mean, one that isn't the Nintendo Switch 2, or whatever it is going to be called. Still hoping they'll call it Super Nintendo Switch. That would be hilarious. :p


Posted (edited)
1 hour ago, majestic said:

The regular Intel Core Ultra 7 155H would be enough for 1080p at Medium settings - or at least it was on pre-production sample notebooks from Acer - which makes sense, as the Meteor Lake CPUs have (up to) 128 execution units, i.e. as many as an Arc A380.

But that's in a laptop; laptop chips typically run at roughly 65 W, and you ain't running the chip at 65 W in a handheld... Well, not unless you want to play games with welding gloves on. For a handheld you're looking at around 15 or 20 W, which will obviously hamper performance compared to 65 W. Who knows, though, maybe they've managed to squeeze out some extra efficiency since the engineering samples?

This is where the Switch has an advantage running on an ARM chip. x86 chips have gotten more power efficient... Well, some of them, but they still can't compete with ARM in that department. One type of chip that can is RISC-V, but we're probably a ways off from the glorious RISC-V future for anything other than embedded systems.

Edited by Keyrock


Posted
7 hours ago, Keyrock said:

But that's in a laptop; laptop chips typically run at roughly 65 W, and you ain't running the chip at 65 W in a handheld... Well, not unless you want to play games with welding gloves on. For a handheld you're looking at around 15 or 20 W, which will obviously hamper performance compared to 65 W. Who knows, though, maybe they've managed to squeeze out some extra efficiency since the engineering samples?

It might also be a custom 155H, that is entirely possible, so... yeah. Idle musings at this point. :)


Posted

If it's based on the 155H, it's not going to be price-competitive anyway.

Ultimately, the trouble any other PC handheld is going to have is that Valve owns the store most of the games will be bought on - at a hefty 30% cut - allowing them to loss-lead on the hardware. MSI/ASUS etc. have to make a profit on the hardware as well, while Valve will make one from the games sold even on the MSI system. And unlike the Mendocino (?) based 'Deck, which was a mid-tier-for-2022 laptop chip, a 155H is reasonably close to top tier and fricking huge comparatively. They may get competitive pricing from Intel to show that it can be done, but...

(Is there anything official about it being the 155H, or is it just the leaks from China(?) saying so? Surely it has far too many CPU cores for a handheld; three times the threads of the 'Deck seems just a tad excessive when most games barely tap the 'Deck's 8. Still.)

Posted

There is a leaked Geekbench result.

The score is pretty much the same as any other 155H Geekbench result - well, the single-thread score is better than most, but Geekbench has notoriously high variance between runs. Then there's of course the issue that Keyrock mentioned: the results would indicate a laptop-level TDP, which in a handheld device is going to be a bit of a challenge.

Most comments about The Claw I've read online are silly debates about the placement of the analogue sticks - mostly started by PlayStation Layout Theists, a group of humans who should be burned as the heretics they are (eh, I might have played a little too much Rogue Trader recently :p).

Well, CES is around the corner anyway. If this is really powered by a regular 155H, it'll probably be able to run most games at 720p and 30 fps (if they run, that is), and it'll probably be pretty expensive and really hot (in both the figurative and literal sense). But the rumor mill has it that Intel's been selling CPUs at cost recently while they play node catch-up, so who knows - Intel might just be looking for entry into the market to put some pressure on AMD.


Posted

Guess it's official now, and it is indeed a 155H with the full 16-core (6P/8E/2 LP-E) config.

Not sure the maths on the battery life works out. MSI doesn't seem to make any specific claims for gameplay time, but that's only 3Wh more than the newer 'Deck's battery, which claims 3-12 hours - so the inference would be 4.5-18 hours of gameplay, which seems... unlikely.
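(Quick sanity check: runtime is just battery capacity over average draw. The 53Wh figure follows from "3Wh more than the 'Deck's 50Wh"; the draw levels are assumptions:)

```python
# Runtime in hours = battery capacity (Wh) / average system draw (W).
BATTERY_WH = 53   # assumed from "3 Wh more than the 'Deck's 50 Wh"

for draw_w in (10, 15, 20, 25):
    print(f"{draw_w:>2} W draw -> {BATTERY_WH / draw_w:4.1f} h")
# Even a very gentle 10 W average gives ~5.3 h - nowhere near 18 hours.
```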

(MSI's website made my GPU fans rev up every time I scrolled. It's 30-odd degrees ambient here, but that's still loltastic web design.)

Posted

I'm looking forward to seeing third-party benchmarks. I'm also hoping for an ill-conceived partnership with the Von Erichs movie.

