

Posts posted by AwesomeOcelot

  1. My 48" LG OLED is almost the size of my desk already. A 100" TV would be as wide as the wall my desk is against. Who needs VR when you can have a holodeck?

    I genuinely don't notice the difference between 1440p and 4K most of the time when gaming. I'm fine with dynamic resolution because, in action, dropping resolution isn't going to make a difference for me. I'm more of an FPS snob. Every decade I double my FPS, and then I literally cannot play at lower frame rates; I usually end up not finishing games and going back to high-FPS games.

  2. The likelihood is that most people's 4K monitors look as good as that TV in terms of resolution, as the PPI is going to be roughly equivalent. It is very cool that a GPU can actually run 8K, and that you can now game on an 80-120" TV without sacrificing resolution. 2kliksphilip tried running 8K on a 3080, but the 10GB of VRAM meant most new games broke. You definitely need 16GB minimum for 8K. Of course, at 4K the 3090 is only 10-15% faster than a 3080 at double the price.
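    Back-of-envelope numbers on why 10GB gets tight at 8K (a rough sketch in Python; the 4 bytes per pixel and the single-render-target framing are illustrative assumptions, not figures from Nvidia):

    ```python
    # Size of one full-resolution RGBA8 render target at 4K vs 8K.
    BYTES_PER_PIXEL = 4  # assumed RGBA8; real engines mix many formats

    def target_mb(width, height, bytes_per_pixel=BYTES_PER_PIXEL):
        """Size of one full-resolution render target in megabytes."""
        return width * height * bytes_per_pixel / 1e6

    for name, (w, h) in {"4K": (3840, 2160), "8K": (7680, 4320)}.items():
        print(f"{name}: {target_mb(w, h):.0f} MB per render target")

    # 8K has 4x the pixels of 4K, so every full-resolution buffer (G-buffer
    # layers, depth, post-processing targets) scales by the same factor,
    # which is why 10GB cards run out of VRAM long before 16GB ones would.
    ```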

    My Index came today; I haven't decided whether to test it out before I build my PC. I'm waiting for Zen 3, so I could be waiting until November. I'm sure I'll wait for it before I play Half-Life: Alyx.

    • Like 1
  3. 1 hour ago, Amentep said:

    FWIW Obsidian does not own the Fallout IP, and does not own Fallout: New Vegas, so them working on a remaster of the game would be up to Bethesda, which would be fairly unlikely.

    I love how timely this comment was. How unlikely? Looks like Fallout's back on the menu boys.

    • Haha 2
  4. Every gen was a game changer back then: I went from an ATI Rage Fury ('98) to a Voodoo 3 ('99) to a GeForce 2 Pro ('01) to a Radeon 9600 ('04), then an AMD HD4650 and HD5870, then a GTX 970, a 1080, and now an RTX 3080. I tracked the launch of the Vega 56, but it was more expensive than the 64 by the time I wanted an upgrade, and they were both very uncompetitive. The 1080 was way cheaper with better performance, lower power, and lower noise. I can't wait to try a ray-traced game at 4K with DLSS 2.0 at 120FPS. Right now my 1080 can't output 4K 120Hz or G-SYNC to my OLED.

    • Like 1
  5. 46 minutes ago, LadyCrimson said:

    *scratches head*  
    No, I know that's how benchmarks look/are run. BL3 does let you do that from an ingame setting actually, forgot about that (most of my games do not have such a feature, I forget more games do that now). But "flying" at rapid speed in/through the game is not playing, no one plays a game that way outside of fly-cheat codes.

    I guess my confusion then stems that I thought benchmarks gave a decent representation of what kind of actual real average gameplay FPS one might see/expect with given settings vs. only an identical stress test to compare % increases. Sounds like that isn't necessarily always going to be the case, at least not imo? I suppose that would explain why I look at benchmarks and wonder why my rigs (over the years) often seem to be (real playing) higher fps etc. than indicated. And here I thought I just kept such a clutter free rig or something. :lol:

    Running BL3's benchmark looks like 52ish, 69ish if I turn V. Fog all the way Off (the harshest game setting). I only ran them once each, dunno how much variance there is/might be if repeated a lot.

    In my experience most benchmarks aren't like what you describe. A specific subset of benchmarks are like that; Digital Foundry do them from time to time. Most benchmarks are even less representative of actual gameplay than that: they're designed purely to show differences consistently. If hardware reviewers only used real-world testing they'd go out of business, because in real-world testing CPUs are not very important for games (there are a few exceptions, but not many), and GPUs from adjacent models, e.g. a 2080 vs. a 2080 Super, wouldn't produce noticeable differences in a real-world setting.

    Reviewers actually use a fresh install for benchmarking; their systems are as clutter-free as they could possibly be. They almost always use an open-air bench, which may give better thermals than the average user's case.

    The 10900K reviews use 1080p medium settings with a 2080 Ti. I'm not saying that people don't do that, but that's probably 1% of owners. I swear a lot of benchmarking sites are about benchmarking for benchmarkers, people who build PCs to get 3DMark scores. Which I'm not against; it's a hobby, like drag racing and car tweaking.

  6. 4 hours ago, LadyCrimson said:

    Btw, I'm not ragging on the 3080.  It's a great sounding card.  If one hasn't upgraded a gpu for a bit and wants to move to 4k it's a good buy seems like.

    I'm just coming from the perspective of "have a good gpu for my needs already/skip generation" so I may sound more critical or nitpicky than I intend to, lol.

    If there's a 3080ti that'd (probably) be the one I'd want if I was buying ... unless the 3090 is the new ti? I'm not clear about that.

    I don't care about that; it seems you're talking about something other than what the benchmarks measure. "I get 70 fps" while playing isn't a benchmark. If your PC ran this benchmark, you'd get around 44 fps at 4K on "Badass" settings (which I think is BL3's ultra). Maybe a bit more, as the 2080 Ti does overclock, but under 50. What you seem to be saying is that the game is a very playable 60 fps most of the time. Digital Foundry do those kinds of videos too; they have settings guides and target minimum frame rates.

    I wouldn't understand upgrading from a 2080 Ti right now. The 3090 is for 8K and AI compute as far as I can tell; apart from that it's a terrible gaming card, even worse value for money than the 2080 Ti was. The 2080 Ti will probably drive what you want for a long time. A 3090 will only burn a hole in your wallet unless you upgrade to 8K.

    I'm a value-oriented and patient builder; I couldn't care less about bragging rights. The 3080 is 30% faster than a 2080 Ti for a little over half the price. I also want it because my 4K OLED needs HDMI 2.1 to drive 120fps HDR. I also ordered a Valve Index for Half-Life: Alyx. I would never have gotten a 10900K, because the 10600K runs games virtually the same. Actually, the 3950X is only about 1% different at 4K according to TechPowerUp's many benchmarks. Most games at 4K are GPU limited.
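    To put that value claim in rough numbers (a sketch; the ~30% figure is from above, and the launch prices of roughly $700 for the 3080 and $1,200 for the 2080 Ti Founders Edition are my assumption):

    ```python
    # Relative performance per dollar, 3080 vs 2080 Ti, at assumed launch prices.
    cards = {
        "RTX 2080 Ti": {"relative_perf": 1.00, "price_usd": 1200},
        "RTX 3080":    {"relative_perf": 1.30, "price_usd": 700},
    }

    baseline = cards["RTX 2080 Ti"]["relative_perf"] / cards["RTX 2080 Ti"]["price_usd"]
    for name, c in cards.items():
        ratio = (c["relative_perf"] / c["price_usd"]) / baseline
        print(f"{name}: {ratio:.2f}x the performance per dollar")  # 3080 comes out ~2.2x
    ```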

     

  7. Benchmarks are consistent, and the average frame rates are taken over several runs. You can launch a game and see 70-ish fps, but that's one area with a random viewport position; it's not multiple runs whose frame rates get averaged. You're probably not keeping an eye on the frame rate in action, either. Some games vary in frame rate from one benchmark run to the next, and another reviewer, using a different run of the same game, will get a different frame rate just by looking at different things in-game. The better reviewers try to control for all these factors; the average gamer does not.
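    Roughly what that looks like in practice, as a minimal sketch (the frame-time numbers and the run count are made up for illustration, not from any specific tool):

    ```python
    # Average FPS the way reviewers report it: average each run from its
    # frame times, then average across several identical runs.
    def run_fps(frame_times_ms):
        """Average FPS of one run, from per-frame times in milliseconds."""
        return 1000 * len(frame_times_ms) / sum(frame_times_ms)

    def benchmark_fps(runs):
        """Mean of the per-run averages across identical runs."""
        per_run = [run_fps(r) for r in runs]
        return sum(per_run) / len(per_run)

    # Three made-up captures of the same benchmark scene:
    runs = [
        [16.7, 18.1, 15.9, 17.2],
        [17.0, 16.5, 18.4, 16.9],
        [16.2, 17.8, 17.1, 16.6],
    ]
    print(f"{benchmark_fps(runs):.1f} fps averaged over {len(runs)} runs")
    ```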

  8. I'm going to try to get a Founders Edition just to see what this new cooler is about. As for it blowing air into the CPU cooler: it's been my experience, and the opinion of smarter people than me, that the delta is large enough that it shouldn't matter. I hope to find out, though.

    • Like 1
  9. Nvidia's development team were asked about the memory size, and they said AAA games at 4K only use 4-6GB. Considering the 16GB of shared memory in the consoles, I can't see people needing more than 10GB for AAA gaming at 4K. They said they chose higher-speed memory over a larger amount because it's more important for running games. If they turn around and release higher-memory cards, they're going to have some explaining to do.
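    To put the "speed over size" point in numbers (a sketch; the 19 Gbps / 320-bit and 14 Gbps / 352-bit figures are the publicly listed specs as I remember them, so treat them as assumptions):

    ```python
    # Theoretical peak memory bandwidth = per-pin data rate (Gb/s) * bus width (bits) / 8.
    def bandwidth_gbs(data_rate_gbps, bus_width_bits):
        """Peak bandwidth in GB/s."""
        return data_rate_gbps * bus_width_bits / 8

    print("RTX 3080   :", bandwidth_gbs(19, 320), "GB/s")  # GDDR6X, ~760 GB/s
    print("RTX 2080 Ti:", bandwidth_gbs(14, 352), "GB/s")  # GDDR6,  ~616 GB/s
    ```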

    • Like 1
  10. 1 hour ago, Azdeus said:

    Off the top of my head, the HD5850 was equal to the 470 for $100 less, and it came out almost a year before; the 460 that was the price competitor was without a chance. I don't remember most of the others, but that one stood out quite a lot. If I remember correctly my HD7970 was faster than the GeForce equivalent as well.

    The last nVidia card I owned was a GeForce 2 MX for a few days, then I went back to my TNT2 for a long while, until I got a Radeon X800

    I bought the HD5870, which came out the same month as the HD5850. The GTX 470 came out 6 months later, and I wouldn't say the HD5850 was equal to it. Price/performance was really good for the HD5850 and bad for the 470, so the difference was large. I seem to remember availability being a problem with the HD5850. On paper, the MSRP and performance of the Vega 56 were better than Nvidia's; I couldn't get one, and when I could, it was more expensive than the GTX 1080 I bought. Getting that 1080 was the best deal I ever got, because it was around $400, and I mined ETH for a month and sold it for around $100.

  11. I've owned more ATI/AMD GPUs than Nvidia ones, and I've never seen them smash Nvidia in performance. The only time I thought ATI was ahead was when I bought a 9000-series card. In reality, on price/performance it's been very competitive, until recently, where AMD has struggled. Nvidia has so often been ahead in bringing features to market. Getting a 970 and a 1080 were easy choices for me, and it looks like I'm getting a 3080.

  12. 1 hour ago, Keyrock said:

    I'm curious if PCIe3.0 will bottleneck a 3090. If so, then the answer to your question is yes.

    An Nvidia rep said by less than a few percent. PCIe 2.0 only bottlenecks Turing cards by a few percent. It's also less of an issue at higher resolutions, because high frame rates use more bandwidth than low frame rates, so 1080p at 200fps bottlenecks more than 4K at 60fps. The more work the CPU does per second, the more bandwidth gets used as well. And a lot of the time you could prevent the bottleneck by capping your frame rate or using adaptive sync.

    The engine seems to matter more than the graphical detail; bad-looking games can bottleneck more than good-looking ones. So it's not so much that the bus holds back good-looking games at high FPS as that some games spam the PCIe bus for no reason.
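    A crude way to see why frame rate, not resolution, drives the bus traffic: the data the CPU sends per frame (draw calls, buffer updates) is mostly independent of resolution, so total traffic scales with fps. A sketch with a made-up per-frame transfer size, purely for illustration:

    ```python
    # Bus traffic ~ per-frame upload * frames per second; resolution barely enters.
    PCIE3_X16_GBS = 16  # ~16 GB/s theoretical each way for PCIe 3.0 x16

    def bus_traffic_gbs(mb_per_frame, fps):
        """Rough CPU-to-GPU traffic per second, in GB/s."""
        return mb_per_frame * fps / 1000

    for label, fps in [("1080p @ 200 fps", 200), ("4K @ 60 fps", 60)]:
        traffic = bus_traffic_gbs(30, fps)  # assume ~30 MB uploaded per frame
        print(f"{label}: ~{traffic:.1f} GB/s of a ~{PCIE3_X16_GBS} GB/s link")
    ```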

  13. Hardware T&L is comparable in the sense that both were done in software before getting dedicated hardware, although real-time ray tracing was never popular, whereas T&L was everywhere. Fully programmable vertex and pixel shaders are comparable in how much of an impact it will have on how graphics are made, but less comparable in how the hardware supports it.

    Async compute was the last time a hardware feature was hyped this much. There were some impressive performance gains, especially with lower-end CPUs, but it didn't really change graphics.

    • Thanks 1
  14. DLSS was always software. The hardware it runs on is used by DLSS 1.0, 1.9 (the unofficial name for Control's implementation), and 2.0. There were statements about supercomputers and AI, but that was never implemented. Developers struggled to implement it in engines that were already well into development, like RTX ray tracing but even worse. At this point it's essentially free, much better than TAA, and it actually boosts frame rates. It will probably get to the point where everybody should have some form of DLSS enabled, even if the base resolution is quite high and the quality setting is at its highest.

    • Like 1
  15. Even if AMD could compete on processing, Nvidia's software and features like DLSS 2.0 and RTX are so far ahead. AMD's only hope is that Nvidia decide to expand into more segments and forget about the PC space for 6 years, but what kind of company would... do... th... 🤭

    The consoles were able to get so much out of mediocre hardware, AMD hardware, but that software effort wasn't put into PC games. Consoles had dynamic shading and resolution, checkerboard rendering, and some pretty clever tricks that would have been really good as AMD features.

  16. There's a rumour of better-binned versions of the 3600X, 3800X, and 3900X with an XT suffix. Not surprising; we've seen this a lot. Are they clearing inventory before a Zen 3 announcement, or is Zen 3 going to arrive a bit later than rumoured?
