Everything posted by AwesomeOcelot
-
I've managed to get a Founders Edition 3080; at least, I got through checkout and have a confirmation email. All it took was over 100 attempts and F5 camping for just under a week, almost 160 hours.
-
My 48" LG OLED is almost the size of my desk already. A 100" TV would be as wide as the wall my desk is against. Who needs VR when you can have a holodeck? I genuinely don't notice the difference most of the time between 1440p and 4K when gaming. I'm fine with dynamic resolution because in action, dropping resolution isn't going to make a difference for me. I'm more of a FPS snob. Every decade I double my FPS, and I literally cannot play on lower FPS then, I usually end up not finishing games and go back to high FPS games.
-
The likelihood is that most people's 4K monitors look as good as that TV in terms of resolution, as the PPI is going to be roughly equivalent (quick PPI maths below). It is very cool that a GPU can actually run 8K, and you can now game on an 80-120" TV without sacrificing resolution. 2kliksphilip tried running 8K on a 3080, but the 10GB of VRAM meant most new games broke. You definitely need 16GB minimum for 8K. Of course, at 4K the 3090 is only 10-15% faster than a 3080 at double the price. My Index came today; I haven't decided whether to test it out before I build my PC. I'm waiting for Zen 3, so I could be waiting until November. I'm sure I'll wait for it before I play Half-Life: Alyx.
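As a rough sanity check on the PPI point, here's the maths with hypothetical panel sizes (I don't know the exact monitor and TV being compared, so treat the numbers as illustrative only):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch from the resolution and the panel's diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

# Hypothetical sizes, purely to illustrate the comparison:
print(f'32" 4K monitor: {ppi(3840, 2160, 32):.0f} PPI')
print(f'65" 8K TV:      {ppi(7680, 4320, 65):.0f} PPI')
```

Both land in the mid-130s PPI, which is what I mean by roughly equivalent.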
-
The parent company owns the IP. The developer created the IP. The fans of the developer want a game with the IP. It's hard to argue against; it's just a question of whether Obsidian want to do it.
-
I love how timely this comment was. How unlikely? Looks like Fallout's back on the menu, boys.
-
Every gen was a game changer back then: I went from an ATI Rage Fury ('98) to a Voodoo 3 ('99) to a GeForce 2 Pro ('01) to a Radeon 9600 ('04), then an AMD HD4650 and HD5870, then a GTX 970, a 1080, and now an RTX 3080. I tracked the launch of the Vega 56, but it was more expensive than the 64 by the time I wanted an upgrade, and they were both very uncompetitive. The 1080 was way cheaper with better performance, lower power, and lower noise. I can't wait to try a ray tracing game at 4K with DLSS 2.0 @ 120FPS. Right now my 1080 can't output 4K 120Hz or G-Sync to my OLED.
-
Voodoo 3 3000 was my 2nd video card. UT99 and Rainbow 6 were games I used to play online.
-
I was not able to get an FE card. As far as I can tell, they were never on sale.
-
In my experience most benchmarks aren't like what you describe. A specific subset of benchmarks are like that; Digital Foundry do them from time to time. Most benchmarks are even worse than that at representing actual gameplay: they're designed purely for showing differences and consistency. If hardware reviewers only used real-world testing they'd go out of business, because in real-world testing CPUs are not very important for games (there are a few exceptions, but not many), and GPUs from model to model, e.g. 2080 vs 2080 Super, wouldn't produce noticeable differences in a real-world setting. Reviewers actually use a fresh install for benchmarking; their systems are as clutter-free as they could possibly be. They almost always use an open-air bench, which might be better in terms of thermals than the average user's setup. The 10900K reviews use 1080p medium settings with a 2080 Ti. Not saying that people don't do that, but that's probably 1% of owners. I swear a lot of benchmarking sites are about benchmarking for benchmarkers, people who build PCs to get 3DMark scores. Which I am not against; it's a hobby, like drag racing and car tweaking.
-
I don't care about that; it seems you're talking about something other than what the benchmarks measure. "I get 70 fps" while playing isn't what a benchmark is. If your PC ran this benchmark, you'd get around 44 fps at 4K on "bad ass" settings (which I think is Ultra in BL3). Maybe a bit more, as the 2080 Ti does overclock, but under 50. You seem to be saying the game is a very playable 60 fps most of the time. Digital Foundry do those kinds of videos too; they have settings guides and target minimum frame rates. I wouldn't understand upgrading from a 2080 Ti right now. The 3090 is for 8K and AI compute as far as I can tell; apart from that it's a terrible gaming card, even worse value for money than the 2080 Ti was. The 2080 Ti will probably drive what you want for a long time. A 3090 will only burn a hole in your wallet unless you upgrade to 8K. I'm a value-orientated builder and patient; I couldn't care less about bragging rights. The 3080 is 30% faster than a 2080 Ti for a little over half the price. I also want it because my 4K OLED needs HDMI 2.1 to drive 120fps HDR. I also ordered a Valve Index for HL Alyx. I would never have gotten a 10900K, because the 10600K would run games virtually the same. Actually, the 3950X is only 1% different at 4K according to TechPowerUp's many benchmarks. Most games at 4K are GPU limited.
-
Digital Foundry's benchmark of BL3 at 4K on a 3080 is over 60 fps, with 53 for the 1% low. The 2080 Ti is 44 mean, 39 for the 1% low.
-
Benchmarks are consistent, and the average frame rates are taken over several runs. You can launch a game and see 70-ish fps, but that's one area with a random viewport position; it's not multiple runs whose frame rates get averaged together. You're probably not keeping an eye on the frame rate in action either. Some games vary in frame rate from one benchmark run to the next. Another reviewer, using a different run of the same game, will get a different frame rate just by looking at different things in game. The better reviewers will try to control for all these factors; the average gamer will not.
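Roughly what I mean by runs that get averaged, as a sketch (not any particular reviewer's exact method; the definition of the 1% low in particular varies between outlets):

```python
def run_stats(frame_times_ms):
    """Average FPS and '1% low' FPS for a single benchmark run."""
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    # One common definition of the 1% low: the FPS implied by the
    # 99th-percentile (i.e. slowest 1%) frame time.
    worst = sorted(frame_times_ms)[int(len(frame_times_ms) * 0.99)]
    return avg_fps, 1000.0 / worst

def aggregate(runs):
    """Average the per-run numbers over several identical runs."""
    stats = [run_stats(r) for r in runs]
    return tuple(sum(col) / len(stats) for col in zip(*stats))

# Two made-up runs of the same scripted benchmark sequence:
run_a = [16.7] * 990 + [33.3] * 10   # mostly ~60 fps, a few slow frames
run_b = [15.4] * 990 + [28.6] * 10
print(aggregate([run_a, run_b]))
```

A single glance at an FPS counter in one spot of the map gives you none of that.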
-
I hope I don't have to wait long for Zen 3. I want to start building ASAP.
-
I'm going to try to get a Founders Edition just to see what this new cooler is about. As for it blowing air into the CPU cooler: it's been my experience, and the opinion of smarter people than me, that the delta is large enough that it shouldn't matter. I hope to find out, though.
-
Nvidia's development team were asked about the memory size, and they said 4K AAA games only use 4-6GB. Considering the 16GB of shared memory in the consoles, I can't see people needing more than 10GB for AAA gaming at 4K. They said they chose higher-speed memory over a larger amount because speed is more important for running games. If they turn around and release higher-memory cards, they're going to have some explaining to do.
-
I bought the HD5870, which came out the same month as the HD5850. The GTX 470 came out 6 months later; I wouldn't say it was equal to the GTX 470. Price/performance was really good for the HD5850 and bad for the 470, so the difference is large. I seem to remember availability being a problem with the HD5850. On paper, the MSRP and performance of the Vega 56 were better than Nvidia's; I couldn't get one, and when I could, it was more expensive than the GTX 1080 I bought. Getting that 1080 was the best deal I ever got, because it was around $400, and I mined ETH for a month and sold it for around $100.
-
I've owned more ATI/AMD GPUs than Nvidia ones, and I've never seen them smash Nvidia in performance. The only time I thought ATI was ahead was when I bought a 9000 series card. In reality, on price/performance it's been very competitive, until recently, where AMD has struggled. Nvidia has so often been ahead in bringing features to market. Getting a 970 and a 1080 were easy choices for me, and it looks like I'm getting a 3080.
-
An Nvidia rep said by less than a few percent. PCIe 2.0 only bottlenecks Turing cards by a few percent. It's also less of an issue the higher the resolution, because high frame rates use more bandwidth than low frame rates, so 1080p at 200fps bottlenecks more than 4K at 60fps. The more CPU work there is, the more bandwidth it uses as well. A lot of the time you could prevent the bottleneck by capping your frame rate or using adaptive sync. The engine seems to matter more than the graphical detail; bad-looking games can bottleneck more than good-looking games. So it's not about holding back good-looking games or high FPS, so much as about games that spam the PCIe bus for no reason.
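Back-of-the-envelope version of the frame rate point; the per-frame figure here is completely made up, just to show why frames per second, not resolution, is what scales the bus traffic:

```python
# Hypothetical CPU -> GPU upload per frame (draw calls, buffers, streamed
# assets). The real figure varies wildly per engine; 20MB is an assumption.
per_frame_mb = 20

for label, fps in [("1080p @ 200 fps", 200), ("4K @ 60 fps", 60)]:
    print(f"{label}: ~{per_frame_mb * fps / 1000:.1f} GB/s over the PCIe bus")
```

With the same made-up per-frame cost, the 200fps case pushes more than three times the traffic, which is why capping the frame rate helps.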
-
The 3090 seems like a deep learning card. It probably wouldn't even be that much better for gaming than a 3080, and it's certainly not good price/performance-wise.
-
Hardware T&L is comparable, in that both were done in software before getting dedicated hardware, although real-time ray tracing was never popular, whereas T&L was everywhere. Fully programmable vertex and pixel shaders would be comparable in how much of an impact it will have on how graphics are made, but less comparable in how the hardware supports it. Async compute was the last time a hardware feature was pimped this much. There was some impressive performance gained, especially with lower-end CPUs, but it didn't really change graphics.
-
DLSS was always software. The hardware it runs on is used for DLSS 1.0, 1.9 (the unofficial name for Control's implementation), and 2.0. There were statements about supercomputers and AI, but that was never implemented. Developers struggled to implement it on engines that were already well into development, like RTX ray tracing but even worse. At this point it's basically free TAA, only much better, and it actually boosts frame rates. It will probably get to a point where everybody should have some form of DLSS enabled, even if the base resolution is quite high and the quality setting is at its highest.
-
Even if AMD could compete on processing, Nvidia's software and features like DLSS 2.0 and RTX are so far ahead. AMD's only hope is that Nvidia decide to expand into more segments and forget about the PC space for 6 years, but what kind of company would... do... th... The consoles were able to get so much out of mediocre hardware, AMD hardware, but that software effort wasn't put into PC games. Consoles had dynamic shaders and resolution, checkerboard scaling, and some pretty clever tricks that would have been really good as AMD features.
-
There's a rumour of better-binned versions of the 3600X, 3800X, and 3900X with an XT suffix. Not surprising; we've seen this a lot. Are they clearing inventory before a Zen 3 announcement, or is Zen 3 going to arrive a bit later than rumoured?