Posted (edited)

Yeah... decent APUs should probably kill off dedicated low- to mid-range graphics cards in notebooks and the like, and that would hurt nVidia a lot, but there are limitations to how they can be applied in desktops, and certainly to getting above mid-range even in laptops, without some fundamental shifts.

 

The main problem for a classic APU's graphics performance is that the graphics shares system resources and system RAM, which is slow. The 2400G scales very well with faster RAM, but there will be a point at which system RAM is simply too slow to keep feeding added graphics cores- especially in laptops, where you'll often get single-channel, slow RAM to save costs. The problems with system RAM are why graphics cards have specialist RAM on their boards, after all- well, except for that execrable joke of a DDR4 GT 1030, and even that at least doesn't use system RAM.

So you would have to either change from DDR# to GDDR#/HBM or put some fast RAM on the chip. Once you do that, though, you're greatly increasing complexity and price, and instead of an APU you're more or less making an SoC/NUC instead. Indeed, those are the solutions used by Hades Canyon and the PS5. You're also in the situation where if you want new graphics processing you have to buy a new processor as well- fine for laptops, but a lot more of a problem for desktops; and in desktops AMD would potentially be doing themselves out of the low- to mid-range market where they're genuinely competitive. They would also have to deal with potential backlash from hardware makers- AMD would, to all practical purposes, then be making motherboards and graphics cards themselves, and it's unlikely that Gigabyte/ASUS etc. are going to like that.
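To put rough numbers on the bandwidth gap, here's a back-of-envelope sketch (a minimal illustration using nominal peak figures, not measurements):

```python
# Back-of-envelope peak memory bandwidth comparison, to illustrate
# why system RAM starves an integrated GPU. Figures are nominal.

def peak_bandwidth_gbs(mt_per_s, channels, bus_width_bits=64):
    """Peak bandwidth in GB/s: transfers/s x channels x bytes per transfer."""
    return mt_per_s * channels * (bus_width_bits / 8) / 1000

# Single-channel DDR4-2400, the sort of thing cheap laptops ship with:
print(peak_bandwidth_gbs(2400, channels=1))   # ~19.2 GB/s

# Dual-channel DDR4-3200, a well-fed desktop 2400G:
print(peak_bandwidth_gbs(3200, channels=2))   # ~51.2 GB/s

# For comparison, an RX 570's GDDR5: 7000 MT/s effective on a 256-bit bus:
print(peak_bandwidth_gbs(7000, channels=1, bus_width_bits=256))  # ~224 GB/s
```

And the iGPU doesn't even get all of that- it's sharing the pool with the CPU the whole time.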

 

I also have to say that I find his obsession with S curves a bit disingenuous. I mean, he's right in principle, but it's a bit... off to start an S curve with the GTX 480, which was simply not a good card but was preceded by some good ones. Not that he should necessarily go all the way back to the NV1 or anything, but GPUs have always had generational plateaux where the tail end of the last generational leap offers smaller improvements, then a big improvement arrives and gets iterated on until it too offers smaller improvements.

Edited by Zoraptor
Posted

I wouldn't be surprised if eventually we get an APU/SoC-like architecture for desktops where the GPU is still modular, but a faster main store is shared between the CPU and GPU. Probably not for a while yet, though.

Posted

I can't see it happening any time soon. At the end of the day, there's a reason why PCs are designed as they are and why they aren't (generally) set up the same way that consoles are.

 

The most I could see happening any time soon would be selling essentially a 'boxless' PS5/Xbox-type SoC, and that won't be as competitive as either console, unfortunately. The current gen 1X may use a lot of tricks to claim it's 4k ready, but it nevertheless has considerably better performance than the desktop-equivalent 570/580 thanks to optimisations that you won't have playing the PC version; and the pricing won't have the consoles' volume advantage either.

 

Having said that, AMD is the only company with a full-suite approach to CPU and GPU at the moment (frankly, I'm skeptical of Intel making significant consumer waves even when they do ship discrete graphics), and AM4/Ryzen already has a fair bit of functionality on the chip that would usually be part of the chipset.

Posted (edited)

The industry successfully moved away from front-side buses and northbridges. I would think a similar sort of memory hierarchy change could be slated to bring the CPU and GPUs closer together on gaming and workstation desktops. With the PCIe buses and SDRAM all sitting so far apart, it seems like the next major frontier would be to bring all of this closer together. Hell, even non-volatile memory could stand to get a bit closer. I'd think this sort of thing is probably still 10 years out, but I would not be surprised if this is where things are headed. You would have the same BYO-PC design desktops have always had, but something much closer in spirit to the architecture of an APU- just not as an SoC, since it would be applied at a slightly higher level in the memory hierarchy.

 

Especially with the way asynchronous parallel computing is going these days, with 32+ SIMD cores tearing through all the awful branch-heavy logic to determine which matrix transforms get crunched further and which need to be written back to the GPU along with the command buffers. And now that things like Vulkan really get the most out of the GPU and CPUs, the memory hierarchy between them can only be made better for this paradigm of numeric computing. I'm sure the CUDA people wouldn't mind closing the gap either, since the latency of compute kernels dictates whether it's even worth offloading something to the GPU at all.
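As a rough illustration of that last point, here's a toy break-even model for offload decisions (all the numbers- PCIe bandwidth, launch latency, FLOP rates- are made-up ballpark assumptions, not measurements of any real part):

```python
# Toy break-even model for GPU offload: it only pays off when the compute
# time saved exceeds the fixed launch latency plus the transfer time.
# All parameters below are illustrative assumptions.

def offload_worth_it(n_bytes, cpu_gflops, gpu_gflops,
                     flops_per_byte, pcie_gbs=12.0, launch_us=10.0):
    """Return True if running the kernel on the GPU beats the CPU."""
    flops = n_bytes * flops_per_byte
    cpu_time = flops / (cpu_gflops * 1e9)
    gpu_time = (launch_us * 1e-6                  # kernel launch overhead
                + 2 * n_bytes / (pcie_gbs * 1e9)  # copy over PCIe, both ways
                + flops / (gpu_gflops * 1e9))     # the compute itself
    return gpu_time < cpu_time

# A small job loses to the launch and transfer overhead...
print(offload_worth_it(1e5, cpu_gflops=50, gpu_gflops=5000,
                       flops_per_byte=10))   # False
# ...while a large one with the same arithmetic intensity wins.
print(offload_worth_it(1e9, cpu_gflops=50, gpu_gflops=5000,
                       flops_per_byte=10))   # True
```

Shrink the transfer term by moving the GPU closer to main memory and a lot more small kernels suddenly become worth offloading.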

Edited by injurai
Posted

The video cherry-picks data and makes some incorrect points about S-curves and performance. There are many factors that make particular solutions popular or obsolete. From a performance and cost standpoint, integrating the GPU and RAM seems inevitable. Most people don't change their GPU or CPU even in desktops, and most people buy laptops and tablets anyway. The obstacles to this transition are that HBM isn't cost-competitive yet and that it would be hard to cool HBM, CPU, and GPU in one package. Scale and refinement will solve HBM's cost; a better process will solve cooling.
