Everything posted by AwesomeOcelot

  1. SotTR has a good DLSS implementation; there's no reason not to have it on at 4K. The 3070 is the pick all day long over the 6800 for this game, and it's $80 cheaper. 3 of the games benchmarked on AMD's slides had DLSS 2.0. A significant number of games have RTX implementations or DLSS, so AMD needs an equivalent of DLSS to be competitive. SotTR is one of the more minimal RT implementations, and there are going to be a few of those launching alongside the consoles. RT-heavy games favour Ampere over Turing. It's looking like the 6800 justifies its price, and it could actually be the pick of the bunch from the 6000 series. If it OCs well then it will also be harder to get.
  2. They've got to compete with them, and they will on power consumption and price, at the points they decide. They might not have a full stack; AMD didn't have a top-end card last generation. Intel might stick to the $100-300 segment. People tend to forget that Intel managed to capture almost the entire budget market with inferior technology and support; even with GMA they were present, and that was awful. Enthusiast-level gaming is not the entire gaming market, it's not even the majority. Being performance dominant is not the only way to do business; AMD has never taken a market over from Intel, even with years-long stretches of better chips. If the DG1 beats the MX330 and Intel sells it to OEMs for cheaper, that beats Nvidia.
  3. You release the SUPER/Ti versions in the summer. Nvidia probably knows a lot more about AMD than the general public does. RT took about a year to get going, and it took 2 years for Nvidia to get DLSS right. VRS hasn't had a good implementation with the promised performance improvements. AMD spent 6 months catching up to Nvidia on VRR; they're not going to have these features up to Nvidia's level at launch. I don't think AMD will have a DLSS equivalent this generation: they don't have tensor cores, and they haven't spent the years of development on it. Ray tracing in console games is very limited because the console hardware is mid-range. There are 3 RT games at launch for AMD, the 6000 series has Turing-level RT, and that's not going to look good against RTX. People are angry with Nvidia; they must not have tried to buy a good AMD GPU in the last 10 years. When reviews come out, and a month after launch, the 6000 series is going to look expensive and supply limited. It's going to be like Turing, where it looked slow and all the features were "still in development".
  4. What are you going to be using all those cores for, some rendering like Blender or Cinebench?
  5. That's in line with the 3DMark leaks: RDNA2 has Turing-level RT acceleration, therefore the 6900XT would be ~20-25% faster than the 2080 Ti. The review guides sent out to tech sites suggest that RT is accelerated on each CU, like Ampere and Turing have an RT core on each SM. Nvidia has spent 2 years building support for DLSS and RT. The 6000 series will launch with 3 RT-based games; by then Nvidia will have over 20. I hope this motivates Nvidia to push out DLSS and RT even more. They also mention how their Infinity Cache speeds up the slow VRAM, probably like the Xbox 360 sped up its RAM with eDRAM.
  6. If your target is 4K 100fps, none of this generation can do that with the games currently available, let alone games coming up. Even a 6900XT with RAGE MODE and Smart Access Memory (that does sound really interesting, I've been hoping for a while that AMD would leverage their CPU advantage) struggles to get its average above 100, and you'd want the lows to be that. Also you're giving up RT for that, which is bonkers. DLSS or some equivalent is the only way to achieve it, and we'll have to wait for reviews to see if these GPUs have something like that, although you would expect AMD to actually talk about killer features. AMD has always had segmentation issues: Vega 56 vs 64, R9 Fury vs X. I wonder if the 6800 is a great overclocker and they don't want to cannibalize 6800XT sales if it's priced at $500, especially if supply is as low as I expect it to be. The only weird thing here is how AMD were effectively a generation behind Nvidia; why wasn't RDNA2 competing with Turing? They're still a generation behind in RT performance. One of the benchmarks AMD is using is Youngblood, and if it's not using DLSS/RT/VRS then it's not really a fair comparison. No one running Ampere is going to be running with any of those off, in my opinion. The game is going to look so much better.
  7. "I don't care about RT" is a very weird stance. Sure, you may not play games that have RT implementations, and that probably is a lot of people, but then the intersection of those people and people that will spend more than $400 on a GPU is not many. It's like being in the 90's and saying I don't care about glide, or I don't care about hardware T&L. Some people probably didn't, as they were playing sprite based RTS, but then they probably weren't interested in GPUs in general. GN Steve reported from AMD that these GPU won't support RTX games without a rewrite. I'm not even sure there's going to be meaningful RT games for AMD this generation. My impression from what he was saying was that AMD does not have a DLSS equivalent. Perhaps AMD features will be good, like their super sampling, dynamic resolution, sharpening etc... we'll have to wait and see. My first ATI card was a Rage Fury. It was an angry GPU.
  8. I doubt that's with RT on... Also, the 3090 is a terrible gaming card; no one should have bought it for gaming. The 6800XT is a 3080-priced card with 3070 performance, and the 6800 is priced above the 3070 for sub-3070 performance. This was embarrassing, but not unexpected, as Radeon has been in this state for a long time.
  9. RTX is a fundamentally different implementation of ray tracing. Nvidia has contributed to DXR; it's the same situation as AMD with Mantle, which led to DX12/Vulkan. Console ray tracing teased so far has been the limited implementation you might expect from a GPU without dedicated ray tracing cores. PC games are not going to have an RTX implementation of ray tracing and then an AMD implementation; they're going to have RTX and GTX implementations. The GTX path might have cheated ray tracing, like the new console games or the Crysis remaster, or it might not, but it's not going to have an RTX equivalent. I'd like to be wrong; it would mean AMD is far more in the game than people think. AMD is perfectly capable of leading the industry with technology and having something cooking secretly, but I can't see it this generation.
  10. 360Hz displays definitely show things that 240Hz and 120Hz don't; that's proven by high-speed cameras. So pros will be interested in them. Reaction-wise, though, non-pros shouldn't worry too much about anything above 144Hz; they're not making that reaction shot. Even non-pros can get an advantage by seeing a peek or someone run past a partially open door. If I were still playing competitive FPS I'd be looking at 240/360Hz. For casual gamers, higher refresh rates just "feel" better, but 240/360 is a high frame rate target for a lot of games at reasonably enjoyable graphical settings. For pros, resolution is not a big issue in a lot of games. I remember playing at 800x600 in CS, which gave you an advantage. Pros are looking for pixel changes, and it's easier to see them at a lower resolution; they're doing a lot of it from memory anyway. For casuals, I'd argue the difference between 1440p and 2160p isn't great in motion, and things like ray tracing are worth stepping down for. Things like dynamic resolution and DLSS are making it clear that you don't need straight 4K, you only need segments of it in certain areas and situations.
  11. Proshop: RTX 3080, received = 534, customer orders = 3,786. Scan (UK): RTX 3080, received = 302, customer orders = 3,393. Someone on Reddit commented that Nvidia has said the 3070 has 4 times the supply of the 3080. While the demand is obviously going to be higher for the 3070, at least more people will be able to get cards.
  12. It depends on the games and whether you have Vsync/VRR (G-Sync/FreeSync). When I used to play competitive FPS I used to play at 300fps on a 70Hz monitor. Having more FPS can affect the input response time (e.g. from click to boom headshot) and the display response time (e.g. from click to seeing a flash). While a 60Hz display only outputs frames at that rate, a higher frame rate can reduce lag because a newer frame can be ready just before the next refresh is pushed; there's a rough sketch of this below. For the same reason you can get frame tearing, where 2 different frames are displayed at once. People might not be able to see the difference between 60Hz and 120Hz the same way they do 30Hz and 60Hz, but I think most people can "feel" the difference. They definitely feel the difference in VR.
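To illustrate the lag point in the post above, here's a minimal sketch (my own made-up model, not a measurement): fixed render times, vsync off, and a 60Hz panel that scans out whatever frame finished most recently. Under those assumptions, rendering faster than the refresh rate shrinks the average age of the frame each refresh picks up.

```python
import math
import random

def avg_click_to_display_ms(render_fps, refresh_hz=60, samples=200_000):
    """Average latency from a click to the first refresh that shows its result."""
    ft = 1000.0 / render_fps   # render frame time (ms)
    rt = 1000.0 / refresh_hz   # refresh interval (ms)
    total = 0.0
    for _ in range(samples):
        click = random.uniform(0.0, 1000.0)
        phase = random.uniform(0.0, rt)            # display isn't synced to the renderer
        frame_start = math.ceil(click / ft) * ft    # first frame that begins after the click
        frame_done = frame_start + ft               # the click's effect finishes rendering here
        # vsync off: the next refresh at or after that point scans the frame out
        scanout = phase + math.ceil((frame_done - phase) / rt) * rt
        total += scanout - click
    return total / samples

for fps in (60, 120, 300):
    print(f"{fps:>3} fps on a 60 Hz panel: ~{avg_click_to_display_ms(fps):.1f} ms click-to-display")
```

Under those made-up assumptions it comes out around 33 ms average at 60fps versus roughly 13 ms at 300fps, which matches the "more FPS still helps on a 60Hz monitor" intuition; tearing is the trade-off.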
  13. Diminishing returns; it's the same with 720p vs 1080p vs 2160p vs 4320p, 60Hz vs 120Hz vs 240Hz vs 360Hz, polygon counts, rays, anti-aliasing. My target refresh was 60/70Hz for 20 years, then it was 144Hz, now it's 120Hz. The jump from 60Hz to 120Hz is a lot bigger than 120Hz to 240Hz (see the frame-time numbers below). The changes earlier on are way more noticeable, and going from budget to mid range is usually way bigger than from mid range to high end. Enthusiasts pay a premium for being early adopters and for low-volume products. That's why my monitor is a TV: I would prefer a monitor, but even though TVs are better now, monitors don't sell anywhere near as many units, so they are way more expensive.
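One way to put numbers on that diminishing-returns point is to look at frame time instead of refresh rate: each doubling saves less absolute time per frame. A quick sketch, just the arithmetic:

```python
# Frame-time view of why each refresh-rate doubling buys less than the last one.
rates = [60, 120, 240, 360]
for lo, hi in zip(rates, rates[1:]):
    saved = 1000 / lo - 1000 / hi   # ms shaved off every frame by the step up
    print(f"{lo:>3} Hz -> {hi:>3} Hz: {1000/lo:5.1f} ms -> {1000/hi:5.1f} ms per frame "
          f"(saves {saved:.1f} ms)")
```

So 60 to 120Hz shaves about 8.3 ms off every frame, 120 to 240Hz only about 4.2 ms, and 240 to 360Hz about 1.4 ms.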
  14. It's too complicated for me to explain because I'm not an expert, but the way video works, and how motion blur can be handled, means that while it's not "true" 240Hz, it's still better than 120Hz. In games, where frame-to-frame content isn't predictable in the same way, you don't want the TV doing that 240Hz processing, as it will actually hurt response. All the experts say to turn processing off on TVs for gaming; that's usually what the "game" modes do, so that response improves.
  15. My target with my 3080 is going to be 4K 120Hz with VRR/G-Sync, whether that's with DLSS, dynamic resolution, or lowering quality settings. Digital Foundry has great videos on how to set up games to get maximum visual fidelity with the least performance impact. Sometimes game settings cost a lot of performance but don't do much for visuals, especially in motion. TVs have refresh rates based on repeating frames: my father had a 360Hz plasma over 10 years ago that could only display frames at 60Hz, although it was incredibly smooth for the time. Response times, black to white, grey to grey, also factor into how good a monitor is in motion; OLED and plasma are a different class in some regards, way better. To get 120Hz at 4K, TVs need HDMI 2.1, which my TV, the LG CX, has (rough bandwidth math below).
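For context on why HDMI 2.1 is needed there, a back-of-the-envelope bandwidth check (approximate numbers, ignoring blanking intervals and any DSC compression):

```python
# Rough check of why 4K 120 Hz doesn't fit through HDMI 2.0.
width, height, refresh_hz = 3840, 2160, 120
bits_per_pixel = 30   # 10-bit RGB, typical for HDR
gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"4K 120 Hz 10-bit RGB needs roughly {gbps:.1f} Gbit/s of pixel data")
# HDMI 2.0 carries about 14.4 Gbit/s of video data; HDMI 2.1's FRL signalling
# raises that to roughly 42 Gbit/s, which is why 2.1 is the requirement.
```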
  16. 3070 reviews are out. A little less than what Nvidia suggested, but still close to the 2080 Ti at 4K, with memory bandwidth perhaps limiting it at 1440p and under. RT performance is up a bit over the 2080 Ti, so it performs within a few percentage points of it in RTX-enabled games and slightly under it in games without RTX. The 2080 Ti is factory overclocked as well. AIB 3070s are almost certainly going to be slightly better overall than the 2080 Ti.
  17. Dread it, run from it, destiny arrives all the same... Only $500 for 7%. Intel isn't even in the top 13 of the multi-threaded results.
  18. Time Spy didn't force AMD to use a fallback; its workload just didn't take advantage of AMD's superior async compute. 3DMark's defence was that they spoke to AMD, Nvidia, and game devs, and based their decision on what a game developer would do. The only two games that favour GCN over Pascal due in part to async compute, as far as I know, were Doom 2016 and Ashes of the Benchmark. I'm not a fan of synthetic benchmarks, but in terms of being representative of DX12 games in 2016-2021, I don't think having async compute favour AMD in Time Spy would have made it more representative. Why didn't 3DMark create a benchmark like Port Royal but for async compute? There is an incumbent advantage: game devs are developing games based on Nvidia's ray tracing, and Nvidia always seems to have several killer apps. There are quite a few games with RT based on Nvidia's implementation. Also, it's a fact that ray tracing is a lot cooler than async compute. Ashes of the Benchmark already existed, so a 3DMark async compute benchmark wasn't really necessary, since Ashes had been available as a benchmark since 2015. It's possible this will change when more AAA multi-platform games are released for the latest console generation; some of the RT implementations look like they won't take advantage of Nvidia hardware. It's a shame for PC gamers, and AMD, that a lot of these games with RDNA2 RT implementations are going to be console exclusives. The leaked rumour is that the 3080 is 22% faster than the 6800XT in Port Royal. If that puts the 6800XT around the range of the $500 3070, it's going to be a pretty tough choice if the 6800XT is $500-550, and a very easy choice if it's $600-650. A lot of games from 2019 onwards are going to run better on the 3070 with DLSS and RTX than on the 6800XT.
  19. I think it's also down to Nvidia having so much market share that they dictate the standards. DX12 only got adopted once Nvidia supported it, even though AMD led its development with Mantle. AMD definitely had the DX12 advantage when I bought the GTX 970. It's surprising how many of the top-played PC games are DX9 and DX11, maybe even the majority; some of the rest are Vulkan. I know some people will not play a DX12 game in the next few years, so it's not a crazy strategy.
  20. Igor had a pretty accurate estimate of the 3080's power consumption before others did. The graphic has to be wrong, or everyone else posting results is wrong, because they're labelling it as the 72CU part. I wish I knew more about GPU architecture and the 3DMark benchmarks to know why the 6800XT loves Firestrike and the 2080 Ti hates it. It would be nice to get some 1080 Ti comparisons in there. It's only through a lack of benchmarks that this is even being discussed; I didn't care about 3DMark scores for the 5700XT or 3080 launches, especially not Firestrike. This seems like an Ashes of the Singularity situation. It would be interesting if AMD dominated Nvidia in old games, and whether that would be a big enough market to compete with Nvidia on.
  21. If it's $600 it had better have some killer features that are as good as DLSS and ray tracing. At $550, people might say that most AAA multi-platform games are not going to have much RT because consoles can't do RT well, and that the number of games with good DLSS is still small. These are only synthetics though; it's possible actual games look a lot different, and a lot of benchmark suites have DX11 games in them.
  22. There's a major disparity between the Firestrike and Time Spy benchmarks in how close the 6800XT gets to the 3080. There used to be a gap between AMD and Nvidia in DX12 and DX11 performance, but it was the other way around (AMD stronger in DX12, Nvidia in DX11). There's more to these benchmarks, and to games in general, than pure rasterization; even if it were purely performance based, it wouldn't explain the difference.
  23. Firestrike is a DirectX 11 benchmark from 2013. How many DX11 games are AMD expecting in 2021?
  24. Time for Passmark to change their CPU benchmark.
  25. AMD Radeon RX 6800XT alleged 3DMark scores hit the web [Videocardz.com] I'm not that familiar with synthetic benchmark scores; I haven't run one since 2001. A quick scan seems to suggest it's faster than a 2080 Ti and close to a 3080 in rasterization, and around a 2070 in RT performance. Which makes sense, since the 2070 scores around half of what the 6800XT allegedly does on Time Spy Extreme.