
AMD's APU Future and The PS5's Architecture


5 replies to this topic

#1
injurai

    Arch-Mage

  • Members
  • 2302 posts
  • Location:Not the oceans
  • Pillars of Eternity Backer
  • Kickstarter Backer
  • Deadfire Backer
  • Fig Backer
  • Black Isle Bastard!



#2
Zoraptor

    Arch-Mage

  • Members
  • 2644 posts
  • Pillars of Eternity Backer
  • Kickstarter Backer
  • Deadfire Backer
  • Fig Backer

Yeah... decent APUs should probably kill off dedicated low- to mid-range graphics cards in notebooks and the like, and that would hurt Nvidia a lot. But there are limits to how they can be applied in desktops, and certainly to getting above mid-range even in laptops, without some fundamental shifts.

 

The main problem for a classic APU's graphics performance is that the graphics shares system resources and system RAM, which is slow. The 2400G scales very well with faster RAM, but there will be a point at which system RAM is simply too slow to keep feeding added graphics cores, especially in laptops, where you'll often get single-channel and slow RAM to save costs. The problems with system RAM are why graphics cards have specialist RAM on their boards, after all; well, except for that execrable joke of a DDR4 GT 1030, and even that at least doesn't use system RAM. So you would have to either change from DDR# to GDDR#/HBM or put some fast RAM on the chip.

Once you do that, though, you're greatly increasing complexity and price, and instead of an APU you're more or less making a SoC/NUC instead; indeed, those are the solutions used by Hades Canyon and the PS5. You're also in the situation where, if you want new graphics processing, you have to buy a new processor as well. That's fine for laptops but a lot more of a problem for desktops, and in desktops AMD would potentially be doing themselves out of the low- to mid-range graphics market where they're genuinely competitive. They would also have to deal with potential backlash from hardware makers: for most practical purposes AMD would then be making motherboards and graphics cards themselves, and it's unlikely that Gigabyte/ASUS etc. are going to like that.
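To put rough numbers on why system RAM starves an iGPU, here's a back-of-envelope bandwidth comparison. The transfer rates and bus widths are the usual published spec-sheet figures, and the formula is the standard peak-theoretical one (transfer rate × bus width × channels), so treat the outputs as order-of-magnitude illustrations, not measurements:

```python
def bandwidth_gbs(mt_per_s, bus_width_bits, channels=1):
    """Peak theoretical memory bandwidth in GB/s.

    1 MT/s on an 8-bit-wide byte lane is 1 MB/s, hence the /1000 to get GB/s.
    """
    return mt_per_s * (bus_width_bits / 8) * channels / 1000

# Single-channel DDR4-2400 (cheap laptop): 64-bit bus, one channel
print(bandwidth_gbs(2400, 64, channels=1))   # 19.2 GB/s

# Dual-channel DDR4-3200 (a well-fed 2400G build)
print(bandwidth_gbs(3200, 64, channels=2))   # 51.2 GB/s

# GDDR5 on a 256-bit bus at 8000 MT/s (RX 580-class card)
print(bandwidth_gbs(8000, 256))              # 256.0 GB/s
```

Even the best-case dual-channel setup has roughly a fifth of the bandwidth a mid-range card's dedicated GDDR5 gets, which is exactly the scaling wall described above.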

 

I also have to say that I find his obsession with S curves a bit disingenuous. I mean, he's right in principle, but it's a bit... off to start an S curve with the GTX 480, which was simply not a good card but was preceded by some good ones. Not that he should necessarily go all the way back to the NV1 or anything, but GPUs have always had generational plateaux: the tail end of the last generational leap offers smaller improvements, then a big improvement arrives and gets iterated on until it too offers only smaller improvements.
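The "stacked plateaux" picture can be sketched as a toy model: each generation is its own logistic (S) curve, and total performance is their sum. All numbers here are made up purely to show the shape — small year-on-year gains at the old generation's plateau, then a big jump when the next curve ramps:

```python
import math

def logistic(t, ceiling, midpoint, steepness):
    """One technology generation: slow start, rapid gains, then a plateau."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

def perf(t):
    # Two stacked S-curves: a plateauing old generation plus a bigger new leap.
    return logistic(t, 100, 2, 1.5) + logistic(t, 300, 8, 1.2)

for t in range(0, 13, 2):
    gain = perf(t + 1) - perf(t)
    print(f"year {t:2d}: perf {perf(t):6.1f}, next-year gain {gain:5.1f}")
```

Picking the start of your measurement window at the bottom of one curve (a GTX 480-style low point) makes the following leap look far more dramatic than the long-run trend.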


Edited by Zoraptor, 26 January 2019 - 06:47 PM.


#3
injurai

    Arch-Mage

  • Members
  • 2302 posts
  • Location:Not the oceans
  • Pillars of Eternity Backer
  • Kickstarter Backer
  • Deadfire Backer
  • Fig Backer
  • Black Isle Bastard!

I wouldn't be surprised if eventually we get an APU/SoC-like architecture for desktops where the GPU is still modular but a faster main store is shared between CPU and GPU. Probably not for a while yet, though.



#4
Zoraptor

    Arch-Mage

  • Members
  • 2644 posts
  • Pillars of Eternity Backer
  • Kickstarter Backer
  • Deadfire Backer
  • Fig Backer

I can't see it happening any time soon. At the end of the day there's a reason why PCs are designed the way they are, and why they aren't (generally) set up the same way consoles are.

 

The most I could see happening any time soon would be selling essentially a 'boxless' PS5/Xbox-type SoC, and that won't be as competitive as either console, unfortunately. The current-gen One X may use a lot of tricks to claim it's 4K ready, but it nevertheless gets considerably better performance out of its hardware than the desktop-equivalent 570/580 does, thanks to optimisations you won't have when playing the PC version; and the pricing won't have the volume advantage either.

 

Having said that, AMD is the only company with a full-suite approach to CPU and GPU at the moment (frankly, I'm skeptical of Intel making significant consumer waves even when they do start shipping discrete graphics), and AM4/Ryzen already has a fair bit of functionality on the chip that would usually be part of the chipset.



#5
injurai

    Arch-Mage

  • Members
  • 2302 posts
  • Location:Not the oceans
  • Pillars of Eternity Backer
  • Kickstarter Backer
  • Deadfire Backer
  • Fig Backer
  • Black Isle Bastard!

The industry successfully moved away from front-side buses and northbridges. I would think a similar sort of memory-hierarchy change could be slated to bring the CPU and GPUs closer together on gaming and workstation desktops. With the PCIe buses and SDRAM all sitting so far apart, it seems like the next major frontier would be to bring all of this closer together; hell, even non-volatile memory could stand to get a bit closer. I'd think this sort of thing is probably still 10 years out, but I would not be surprised if this is where things are headed. You would have the same build-your-own-PC design desktops have always had, but something much closer in spirit to the architecture of an APU — just not as a SoC, since it would be applied a level further up the memory hierarchy.
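The "sitting so far apart" point is easiest to see as latency tiers. These are the usual order-of-magnitude round numbers people quote for each tier, not measurements of any specific system:

```python
# Rough, illustrative latencies for each tier the post wants to pull closer
# together. Real numbers vary widely by hardware; only the ratios matter here.
latency_ns = {
    "L1 cache hit":           1,
    "main memory (DRAM)":     100,
    "PCIe round trip to GPU": 1_000,    # microsecond scale
    "NVMe read":              100_000,  # ~100 microseconds
}

for tier, ns in latency_ns.items():
    ratio = ns / latency_ns["L1 cache hit"]
    print(f"{tier:<24} ~{ns:>9,} ns  ({ratio:,.0f}x L1)")
```

Each step down that table is one to two orders of magnitude, which is why shaving distance out of the CPU-GPU and CPU-NVM paths is such an attractive frontier.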

 

Especially with the way asynchronous parallel computing is going these days: 32+ SIMD cores tearing through all the awful branch-heavy logic to determine which matrix transforms get crunched further and which need to be written back to the GPU along with the command buffers. And now that things like Vulkan really get the most out of the GPU and CPU, the memory hierarchy between them can only be made better for this paradigm of numeric computing. I'm sure the CUDA people wouldn't mind closing the gap either, since the latency of compute kernels dictates whether it's even worth offloading something to the GPU.
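That offload decision can be sketched as a toy break-even model. Every constant here (PCIe throughput, launch overhead, per-item costs) is a hypothetical placeholder chosen just to show the trade-off, not a benchmark of any real device:

```python
def worth_offloading(work_items, cpu_ns_per_item, gpu_ns_per_item,
                     bytes_per_item, pcie_gbs=12.0, launch_overhead_us=10.0):
    """True if the GPU path beats the CPU path despite transfer + launch cost.

    Note 1 GB/s == 1 byte/ns, so bytes / pcie_gbs gives transfer time in ns.
    """
    cpu_time = work_items * cpu_ns_per_item
    transfer = 2 * work_items * bytes_per_item / pcie_gbs  # ns, both directions
    gpu_time = launch_overhead_us * 1000 + transfer + work_items * gpu_ns_per_item
    return gpu_time < cpu_time

# Tiny batch: fixed launch + transfer overhead swamps the compute savings.
print(worth_offloading(100, 50, 0.5, 64))        # False
# Huge batch: the overhead amortises away and the GPU wins easily.
print(worth_offloading(1_000_000, 50, 0.5, 64))  # True
```

Shrink the interconnect latency and bandwidth terms (which is exactly what moving the GPU closer in the memory hierarchy does) and the break-even batch size drops, making far more workloads worth offloading.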


Edited by injurai, 26 January 2019 - 10:51 PM.


#6
AwesomeOcelot

    (9) Sorcerer

  • Members
  • 1318 posts
  • Pillars of Eternity Silver Backer
  • Kickstarter Backer
  • Deadfire Silver Backer
  • Fig Backer

The video cherry-picks data and gets some points about S-curves and performance wrong. There are many factors that make particular solutions popular or obsolete. From a performance and cost standpoint, integrating the GPU and RAM seems inevitable. Most people don't change their GPU or CPU even in desktops, and most people buy laptops and tablets anyway. The obstacles to this transition are that HBM isn't cost-competitive yet and that it would be hard to cool HBM, CPU, and GPU on one package. Scale and refinement will solve HBM's cost; a better process will solve the cooling.



