Radeon Thread


Zoraptor

Thought it might be sensible to have a general purpose thread for Radeon, as there is for Intel/nVidia RTX and Ryzen, since speculation was occasionally spilling over.

Anyway, RDNA2 speculation ahead of the October 28th announcement. AMD has been very restrictive about leaks, so at this point very little concrete is known.

High confidence- explicit statements from AMD or other official or hard to fake sources

  • Big Navi is >40CUs, ie more CUs than 5700XT, and there is more than one 'Big Navi'
  • TSMC 7nm
  • hybrid Raytracing; uses the same hardware for both raster and raytracing performance (from PSXBox information)
  • Navi 21, 22, 23 chips at least
  • 2.2+ GHz clocks (PS5)
  • (claimed) performance for a big navi card just below RTX3080 (per Ryzen 5000 launch). Phrasing was cagey as to whether it was the biggest navi variant or not; and obviously not independent benchmarking
  • (claimed) 50% perf/watt improvement over RDNA1

Medium Confidence- from leakers with good track records, and from Apple OSX beta updates (which are solid, but OSX of course isn't Windows and has, for example, had RDNA1 cards with HBM that Windows never got)

  • 505-540mm^2 for largest die (by way of comparison, the 40CU 5700XT was ~250mm^2)
  • 256/192 bit buses
  • 16/12/8 GB GDDR6 non X
  • 128MB (most commonly cited) 'Infinity Cache' on die
  • HBM is definitely supported (but 'confirmed' only for Apple SKUs)
  • Up to 2.5 GHz clocks (Apple)
  • 80CU Navi 21
  • 72CU (and 52CU, if there is a 60CU 'Navi 22') non-XT models

Lower Confidence- more speculative, from consoles and inference from leakers with good track records, or information from otherwise good sources where there are significant contradictions

  • DirectML for a DLSS equivalent (albeit with no tensor cores; definite on Xbox but speculative for AMD's own cards)
  • HBM for consumers at top end
  • 'Biggest Navi' (92-100CU)
  • 280-300W for the 80 CU variant (rough arithmetic after this list)
  • 40CU Navi22 and 32CU Navi23 listed in Apple OSX update; but other sources consistently have a 60CU Navi22, and a 40CU gap from Navi21 ->22 then just 8CUs to Navi23 seems unlikely, so the exact set up is unconfirmed.
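As a rough sanity check on that 280-300W figure, here's a back-of-the-envelope sketch using numbers from the lists above (the 5700XT's 225W board power is official; treating an 80CU part as roughly 2x 5700XT performance is purely my own assumption):

```python
# Back-of-the-envelope power estimate for an 80CU RDNA2 part.
# Assumptions: performance scales roughly linearly with CU count (2x a 5700XT),
# and AMD's claimed +50% perf/watt over RDNA1 holds.
perf_scale = 2.0           # assumed 2x the performance of a 40CU 5700XT
perf_per_watt_gain = 1.5   # AMD's claimed RDNA1 -> RDNA2 improvement
rdna1_board_power = 225    # 5700XT total board power in watts

estimated_power = perf_scale / perf_per_watt_gain * rdna1_board_power
print(f"~{estimated_power:.0f} W")   # ~300 W, in the same ballpark as the rumour
```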

Speculation is that the raytracing performance will be better than Turing but worse than Ampere. Some sort of large on-die cache now looks very likely; this should help raytracing performance significantly and compensate for a relatively slow 256 bit bus. Speculated launch date is sometime in November.
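To put a number on how a big cache could compensate for a 256 bit bus, here's a minimal sketch; the 16 Gbps memory speed, the hit rate and the cache bandwidth are all hypothetical inputs for illustration, not leaked figures:

```python
# Toy model: a large on-die cache means DRAM only has to serve cache misses,
# so the bandwidth the shader cores "see" is amplified by 1 / miss_rate,
# capped by whatever the cache itself can deliver.

def effective_bandwidth(gbps_per_pin, bus_bits, hit_rate, cache_gb_s):
    dram_gb_s = gbps_per_pin * bus_bits / 8       # raw DRAM bandwidth
    amplified = dram_gb_s / (1.0 - hit_rate)      # DRAM serves only the misses
    return dram_gb_s, min(amplified, cache_gb_s)

raw, effective = effective_bandwidth(16, 256, hit_rate=0.55, cache_gb_s=2000)
print(f"raw GDDR6 bandwidth:      {raw:.0f} GB/s")        # 512 GB/s
print(f"effective with big cache: {effective:.0f} GB/s")  # ~1138 GB/s with these guesses
```

Even a modest hit rate makes a 256 bit bus behave like something much wider, which is presumably the whole point of the cache.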

Availability may be interesting. In the area of one 540 mm^2 GPU chip AMD could make around 7 Zen chiplets, and there has clearly been a shortage of fab space at TSMC recently. While that should have eased with some companies shifting to 5nm and Huawei gone, if there is supply pressure it's likely to be the GPU side that suffers first, because AMD simply makes a lot more money off CPUs for the same wafer space.
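For anyone curious where that "around 7 Zen chiplets" figure comes from, here's a rough dies-per-wafer sketch using the standard approximation (defect yield ignored; the ~74 mm^2 Zen 2 chiplet size is approximate):

```python
import math

# Standard dies-per-wafer approximation on a 300 mm wafer:
# usable dies ~= wafer area / die area, minus an edge-loss term.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

big_navi = dies_per_wafer(540)   # rumoured Navi 21 die size
chiplets = dies_per_wafer(74)    # approximate Zen 2 chiplet size

print(f"Navi 21 candidates per wafer: ~{big_navi}")
print(f"Zen 2 chiplets per wafer:     ~{chiplets}")
print(f"chiplets given up per GPU:    ~{chiplets / big_navi:.1f}")
```

The plain area ratio gives about 7 chiplets per GPU; once edge losses on the wafer are counted the trade looks, if anything, slightly worse for the big die.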

I think it can get close to the 3080 in rasterization, but how important is that? I'm sceptical about the features: they did manage to catch up with VRR in 6 months, but I don't think they've invested the same time and money as Nvidia has in AI and RT. It is not going to have RT or DLSS equivalents. Every Radeon I've been interested in for about 10 years has had availability issues, and that's how long it's been since I've bought an AMD card. If it's good, good luck trying to get one before March.

Yeah, let's face it, DLSS combined with ray tracing is an extremely magical accessory in Nvidia's arsenal, and Radeon hasn't spilled many beans on how they're gonna match it.  I guess we'll have to find out on the 28th because yeah, raw rasterization simply isn't enough anymore.

However, some games do look better on Radeon; colorful, shiny games like Borderlands and The Outer Worlds tend to have better color density on Radeon cards in my opinion.  Radeon doesn't use compression the way Nvidia does.  Nvidia cards tend to do better in darker games like S.T.A.L.K.E.R., Metro Exodus, and Control.  So I guess, like everything, it boils down to preference and price.

I haven't followed those Infinity Cache rumors, but was there ever any actual clarification on the megabit vs megabyte thing? There were a lot of people saying 128MB would be too much area, while 128Mb would be more reasonable.
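For reference, the gap being argued over is a factor of eight, which is why it matters so much for the die-area question (a trivial check, but worth spelling out):

```python
# 128 megabits vs 128 megabytes: the byte figure holds 8x as much data,
# so the SRAM area cost differs by roughly the same factor.
megabits = 128
print(f"128 Mb = {megabits / 8:.0f} MB")   # only 16 MB
print(f"128 MB = {128 * 8} Mb")            # 1024 Mb
```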

Not really any clarification. 128MB would fit, but it's definitely a lot of space (~140mm^2, so over a quarter of the die taken up); it is, however, by far the most commonly cited amount. Personally, if it is 16 GB VRAM and a 256 bit bus, I'd say it would definitely need more than 128 Mb of cache to compensate, or everyone would be using smaller buses and bigger caches.

Though as I've said elsewhere, this really does seem to be more than touching on the high end problems HBM was designed to solve...

5 hours ago, AwesomeOcelot said:

..I don't think they've invested the same time and money as Nvidia has in AI and RT. It is not going to have RT or DLSS equivalents.

AMD itself hasn't invested the time or money, but they have been funded by Sony and especially Microsoft, who have deep pockets and in MS' case a decent amount of independent R&D into relevant fields as well.

RDNA2+ will have raytracing, just no separate RT/ Tensor hardware. NextBox will definitely have a DirectML/ Azure DLSS equivalent, AMD may get access to it as well but it's certainly unconfirmed. I'm more than a little skeptical of nVidia on the added specialist hardware front, especially the raytracing side, though I will freely admit there's a healthy dollop of my intrinsic dislike of nVidia potentially at play; but if RDNA2 can get to above Turing performance in raytracing without specialist hardware then much of the point of that specialist hardware has evaporated.

I don't expect AMD to beat Nvidia, but if they can come close enough to make Nvidia break out that 3080 Ti SUPER we all know they have in their back pocket, that would be swell.

7 hours ago, Zoraptor said:

RDNA2+ will have raytracing, just no separate RT/ Tensor hardware. NextBox will definitely have a DirectML/ Azure DLSS equivalent, AMD may get access to it as well but it's certainly unconfirmed. I'm more than a little skeptical of nVidia on the added specialist hardware front, especially the raytracing side, though I will freely admit there's a healthy dollop of my intrinsic dislike of nVidia potentially at play; but if RDNA2 can get to above Turing performance in raytracing without specialist hardware then much of the point of that specialist hardware has evaporated.

Pascal and RDNA 1 had raytracing. The only way that RDNA2 gets Turing performance without RT cores is if it's twice as fast, and even then people said RT performance on Turing without DLSS was not good enough. GPUs will get twice as fast as Turing eventually. It's not scepticism at this point: we've had the hardware for a while, and tensor cores aren't even an Nvidia thing. For RT you'd be banking on an AMD software breakthrough that would be leaps beyond what Nvidia did with Turing. Nvidia's RT isn't just hardware, it's an incredible work of software that runs better on specialized hardware, and it took a very long time to develop. We'll soon see, but looking at the consoles, it's not looking good for AMD; it doesn't look equivalent to Turing.

2 weeks later...

AMD Radeon RX 6800XT alleged 3DMark scores hit the web [Videocardz.com]

I'm not that familiar with synthetic benchmark scores; I haven't run one since 2001. A quick scan seems to suggest it's faster than a 2080 Ti and close to a 3080 in rasterization, and around a 2070 in RT performance, which makes sense since the 2070 scores around half what the 6800XT allegedly does on Time Spy Extreme.

Good thing the presentation is only a few days away, because the sheer volume of all these Big Navi rumors is dizzying.

I would assume Firestrike is still a somewhat accurate representation of raw rasterization performance, even if it is outdated. The legitimacy of those numbers, on the other hand, I very much question.

There's a major disparity between the Firestrike and Time Spy benchmarks, where the 6800XT is close to the 3080. There used to be a gap between AMD and Nvidia in DX12 and DX11 performance, but it was the other way around. There's more to these benchmarks and games in general than pure rasterization, even if it was performance based, it wouldn't explain the difference.

Just now, AwesomeOcelot said:

There's a major disparity between the Firestrike and Time Spy benchmarks, where the 6800XT is close to the 3080. There used to be a gap between AMD and Nvidia in DX12 and DX11 performance, but it was the other way around. There's more to these benchmarks and games in general than pure rasterization, even if it was performance based, it wouldn't explain the difference.

You appear to be correct:

[image: leaked benchmark comparison graphic]

Also, factor in DLSS 2.x and ray tracing (both require DX12) and it looks like Big Navi can't quite destroy Nvidia on up-to-date performance.

However, when you factor in the lower power consumption and probably cheaper price tag, it should still be a viable option for many.

If it's $600 it better have some killer features that are as good as DLSS and ray tracing. At $550, people might reason that most AAA multi-platform games aren't going to have much RT because consoles can't do RT well, and that the number of games with good DLSS is still small. These are only synthetics though; it's possible actual games look a lot different, and a lot of benchmark suites have DX11 games in them.

50 minutes ago, ComradeMaster said:

You appear to be correct:

However, when you factor in the lower power consumption and probably cheaper price tag, it should still be a viable option for many.

Since we are going with speculative things:

https://www.igorslab.de/en/amd-radeon-rx-6000-the-actual-power-consumption-of-navi21xt-and-navi21xl-the-memory-and-the-availability-of-board-partner-card-exclusive/

Navi 21XT ~320W

Get those Sweatbands ready boyos, red or green, it's gonna get toasty.

Igor is out of line with most other estimates there, though not massively so. Around 280W seems to be the general consensus, and the leaked PCB shot supports that, since the VRM setup is robust but less robust than a 3080's. Though, as with the benchmarks below, that may be for a 6800XT rather than the putative Biggest Navi.

2 hours ago, ComradeMaster said:

You appear to be correct:

[image: leaked benchmark comparison graphic]

That graphic is likely wrong. Not the data itself, but the assumption that the benchmarks are for an 80CU part.

The leaks are from AIBs, and the consensus is that AIBs have only got Biggish Navi 6800XT (72CUs and down, most likely) with Biggest Navi(s) being 1st party AMD only, and AMD v2020 in comparison to Intel or nVidia leaks less than someone wearing half a dozen Depends.

Guess we find out for sure in 100 odd hours anyway.

 

Igor had a pretty accurate estimate of the 3080's power consumption before others did. The graphic has to be wrong, or everyone else posting results is wrong, because they're labelling it as the 72CU part.

I wish I knew more about GPU architecture and the 3DMark benchmarks to know why the 6800XT loves Firestrike and the 2080 Ti hates it. It would be nice to get some 1080 Ti comparisons in there. It's only through a lack of benchmarks that this is even being discussed; I didn't care about 3DMark scores for the 5700XT or 3080 launches, especially not Firestrike. This seems like an Ashes of the Singularity situation.

It would be interesting to see whether AMD dominates Nvidia in older games, and whether that would be a big enough market to compete with Nvidia in.

28 minutes ago, AwesomeOcelot said:

I wish I knew more about GPU architecture and the 3DMark benchmarks to know why the 6800XT loves Firestrike and the 2080 Ti hates it.

The reason AMD seems to be having a tougher time dislodging Nvidia than Intel is that Nvidia, despite having had market dominance for many years, always seems to self-innovate and subsequently "dump the old".  Starting with Turing they really started pushing DX12 features, DX11 be damned.  AMD appears to be a bit more conservative in its quest for dominance and more backwards-friendly.

But yeah, there are just so many questions and so much speculation right now that you'll go mad trying to unravel it yourself.  The 28th can't come soon enough!

48 minutes ago, AwesomeOcelot said:

I wish I knew more about GPU architecture and the 3DMark benchmarks to know why the 6800XT loves Firestrike and the 2080 Ti hates it. It would be nice to get some 1080 Ti comparisons in there. It's only through a lack of benchmarks that this is even being discussed; I didn't care about 3DMark scores for the 5700XT or 3080 launches, especially not Firestrike. This seems like an Ashes of the Singularity situation.

We simply don't know enough to say much at all. Are the leaked benches even all the same chip? And it's not like engineering samples are all set up the same; even if they are the same chip, we don't know whether they're doing full power runs or whatever. Even the official benchmarks from the Ryzen presentation didn't say what chip it was. It's also kind of pointless engaging in speculation with so little time to go before we find out for reelz, but if my arm was twisted, and assuming they are for the same general chip and setup...

..if there isn't some sort of semi-deliberate chicanery going on, like doing a combined CPU/GPU benchmark with a tricked-out Zen3 vs a 2080Ti with a Celeron, it could be the 'Infinity Cache' at work; on the CPU side some benchmarks absolutely love the big caches on Zen2/3, so something similar could happen with video benchmarks. I'd suspect that would also be a lot more likely on an older benchmark.

I think it's also down to Nvidia having so much market share that they dictate the standards. DX12 only became adopted once Nvidia supported it, even though AMD led its development with Mantle. AMD definitely had the DX12 advantage when I bought the GTX 970. It's surprising how many of the top played PC games are DX9 & 11, maybe even the majority. Some of the rest are Vulkan. I know some people will not play a DX12 game in the next few years. So it's not a crazy strategy.

There's certainly an element of majority bias at play; one of the historic criticisms of Time Spy as a benchmark was that it forced AMD cards that could do proper async compute to use a crappy fallback instead, because nVidia cards didn't support proper async compute and could only use the crappy fallback.

(Though that shouldn't be a reason for a performance difference now, since 2000 series+ do have proper async)
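To illustrate what "proper async compute" buys in the first place, here's a toy timing model; the millisecond figures and the 80% overlap factor are made up purely for illustration, and real scheduling is far messier:

```python
# Toy model of async compute: a GPU that can run graphics and compute work
# concurrently hides the shorter workload behind the longer one, while a
# fallback path (switching between the two) pays for both in sequence.

def frame_time_serial(gfx_ms, compute_ms):
    # Fallback: compute work is simply slotted in after the graphics work.
    return gfx_ms + compute_ms

def frame_time_async(gfx_ms, compute_ms, overlap=0.8):
    # Idealised concurrency: a fraction of the compute work runs "for free"
    # in gaps left by the graphics workload.
    hidden = min(compute_ms, gfx_ms) * overlap
    return gfx_ms + compute_ms - hidden

gfx, comp = 12.0, 4.0  # ms per frame, hypothetical
print(f"serial fallback: {frame_time_serial(gfx, comp):.1f} ms")  # 16.0 ms
print(f"async compute:   {frame_time_async(gfx, comp):.1f} ms")   # 12.8 ms
```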

The only benchmark I have any faith in whatsoever is the one Dr. Lisa Su showed during the Zen 3 reveal for the RX 6??? (X?T?), with the caveat that the conditions used to generate said benchmark were almost certainly set to be as favorable as possible for the RX 6??? (X?T?).

4 more days and we'll finally get some concrete information, though, as always, have your trusty 55 gallon drum of salt handy for any benchmark presented by AMD themselves.

Time Spy didn't force AMD to use a fallback. Its workload didn't take advantage of AMD's superior async compute. 3DMark's defence was that they spoke to AMD, Nvidia, game devs, and based their decision on what a game developer would do. The only two games that favour GCN over Pascal due in part to async compute as far as I know were Doom 2016 and Ashes of the Benchmark. I'm not a fan of synthetic benchmarks, but in terms of being representative of DX12 games in 2016-2021, I don't think having async compute favour AMD in Time Spy would make it more representative.

Why didn't 3DMark create a benchmark like Port Royal, but for async compute? There is an incumbent advantage: because game devs are developing games based on Nvidia's ray tracing, Nvidia always seems to have several killer apps, and there are quite a few games with RT based on Nvidia's implementation. Also, it's a fact that ray tracing is a lot cooler than async compute. And Ashes of the Benchmark already existed, so a 3DMark async compute benchmark wasn't really necessary, since Ashes had been available as a benchmark since 2015.

It's possible that this will change when more AAA multi platform games are released for the latest console generation, some of the RT implementations look like they won't take advantage of Nvidia hardware. It's a shame for PC gamers, and AMD, that a lot of these games with RDNA2 RT implementations are going to be console exclusives.

The leaked rumour is that the 3080 is 22% faster than the 6800XT in Port Royal. If that puts the 6800XT around 3070-level RT performance, and the 3070 is $500, it's going to be a pretty tough choice if the 6800XT is $500-550. It's going to be a very easy choice if the 6800XT is $600-650. A lot of games from 2019 onwards are going to run better on the 3070 with DLSS and RTX than on the 6800XT.
