Recommended Posts

Posted (edited)

They do, for this game at least, which seems to use their hardware pretty well, while nVidia poured so much game-specific optimization into their DX11 path that they don't gain much in this game unless very CPU-bound. With DX12, optimal usage of resources is up to the engine devs. Previously it was more of a trial-and-error process on the dev side, because it was not very predictable how a given GPU with a given driver would react to their code, with driver optimization happening on the GPU vendor's side.

 

[edit] For the more technically minded: SIGGRAPH presentations about the new APIs, including practical examples from Oxide, Valve and Unity: http://nextgenapis.realtimerendering.com/


Posted

This is a game designed around Mantle; Oxide Games is partnered with AMD. It's the same company that created the Star Swarm benchmark, where Nvidia eventually did better. Nvidia is playing catch-up; don't expect these results to be the same with other DX12 games, or even this game at launch. A big deal is being made because it's the first DX12 game benchmarked. Although, if you look at the confirmed upcoming DX12 games, they're all AMD partners. We could have months of games being released that run better on AMD. What I think will be interesting is benchmarks of an FPS/RPG like Deus Ex, and whether the gap between DX11 and DX12 on i7s will be significant.

Posted (edited)

so the AMD hardware is fine; it was the software that could not keep up


Posted

Not entirely correct. AMD has built their hardware using a brute-force method: lots of simpler operations doing fewer things at a time, demanding massive numbers of draw calls.

 

Nvidia has taken a slightly different route: fewer, slightly more complex operations doing more things at a time, demanding fewer (but still plenty of) draw calls.

 

Draw calls have always been a weak point in DX11 (and earlier) and put a lot of strain on the CPU. DX12 is much faster in this regard.
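To make the draw-call point concrete, here is a minimal, hypothetical D3D12 sketch (not from any particular engine): draws are recorded into a command list in user-mode CPU memory and submitted once, in one batch, instead of each draw taking an expensive trip through the driver as in DX11.

```cpp
#include <d3d12.h>

// Record a batch of draws into a command list. All objects are assumed to
// have been created elsewhere; error handling is omitted for brevity.
void RecordDraws(ID3D12GraphicsCommandList* cmdList,
                 ID3D12CommandAllocator* allocator,
                 ID3D12PipelineState* pso,
                 UINT objectCount)
{
    cmdList->Reset(allocator, pso);   // cheap user-mode call
    for (UINT i = 0; i < objectCount; ++i)
    {
        // Each draw is recorded into CPU memory; the expensive validation
        // was paid once at pipeline-state creation, not per draw, which is
        // what makes tens of thousands of draws per frame feasible.
        cmdList->DrawInstanced(36, 1, 0, 0);
    }
    cmdList->Close();   // submit later, once, via ExecuteCommandLists()
}
```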

 

We will see whether what AwesomeOcelot says is true (that Star Swarm is AMD-optimized and not a good indication). I would say it's a pretty good indication of what will happen in games using lots of draw calls. Naturally this doesn't affect all genres, but I do think AMD has an advantage in this specific area.


Posted (edited)

Not wanting to start a new thread, I'll just put this here:

 

https://www.youtube.com/watch?v=llOHf4eeSzc

 

Keep in mind this is just a benchmarking demo specifically designed to show Vulkan in the best possible light, and we may rarely, if ever, get this kind of result in a real-world game.


Posted

Latest drama is that Maxwell doesn't support asynchronous computing

 

http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1210#post_24357053

 

Wow, there are lots of posts here, so I’ll only respond to the last one. The interest in this subject is higher than we thought. The primary evolution of the benchmark is for our own internal testing, so it’s pretty important that it be representative of the gameplay. To keep things clean, I’m not going to make very many comments on the concept of bias and fairness, as it can completely go down a rat hole.

 

Certainly I could see how one might think we are working more closely with one hardware vendor than the other, but the numbers don’t really bear that out. Since we’ve started, I think we’ve had about 3 site visits from Nvidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;( ). Nvidia was actually a far more active collaborator over the summer than AMD was; if you judged from email traffic and code check-ins, you’d draw the conclusion we were working more closely with Nvidia rather than AMD ;) As you’ve pointed out, there does exist a marketing agreement between Stardock (our publisher) and AMD for Ashes. But this is typical of almost every major PC game I’ve ever worked on (Civ 5 had a marketing agreement with Nvidia, for example). Without getting into the specifics, I believe the primary goal of AMD is to promote D3D12 titles, as they have also lined up a few other D3D12 games.

 

If you use this metric, however, given Nvidia’s promotions with Unreal (and integration with Gameworks), you’d have to say that every Unreal game is biased, not to mention virtually every game that’s commonly used as a benchmark, since most of them have a promotion agreement with someone. Certainly, one might argue that Unreal, being an engine with many titles, should be given particular weight, and I wouldn’t disagree. However, Ashes is not the only game being developed with Nitrous. It is also being used in several additional titles right now, the only announced one being the Star Control reboot. (Which I am super excited about! But that’s a completely different topic ;))

 

Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only ‘vendor-specific’ code is for Nvidia, where we had to shut down async compute. By vendor-specific, I mean a case where we look at the vendor ID and make changes to our rendering path. Curiously, their driver reported this feature as functional, but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have async compute, so I don’t know why their driver was trying to expose it. The only other difference between them is that Nvidia falls into the Tier 2 class of binding hardware instead of Tier 3 like AMD, which requires a little more CPU overhead in D3D12, but I don’t think it ended up being very significant. This isn’t a vendor-specific path, as it’s responding to capabilities the driver reports.
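For context, both of the checks described above go through standard, public APIs rather than anything vendor-supplied. A minimal, hypothetical sketch (illustrative names only, not Oxide's actual code):

```cpp
#include <d3d12.h>
#include <dxgi.h>

// Vendor-ID check: the basis for the one Nvidia-specific path described.
bool IsNvidiaAdapter(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    adapter->GetDesc(&desc);
    return desc.VendorId == 0x10DE;   // 0x10DE = Nvidia, 0x1002 = AMD
}

// Capability check: Tier 2 (Maxwell) vs Tier 3 (GCN) resource binding.
// This is not a vendor-specific path; it responds to what the driver reports.
D3D12_RESOURCE_BINDING_TIER GetBindingTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));
    return options.ResourceBindingTier;   // Tier 3 needs less CPU-side work
}
```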

 

From our perspective, one of the surprising things about the results is just how good Nvidia’s DX11 perf is. But that’s a very recent development, with huge CPU perf improvements over the last month. Still, DX12 CPU overhead is far, far better on Nvidia, and we haven’t even tuned it as much as DX11. The other surprise is the min frame times, with the 290X beating out the 980 Ti (as reported on Ars Technica). Unlike DX11, minimum frame times are mostly an application-controlled feature, so I was expecting them to be close to identical. This would appear to be GPU-side variance rather than software variance. We’ll have to dig into this one.

 

I suspect one thing that is helping AMD on GPU performance is that D3D12 exposes async compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic: we just took a few compute tasks we were already doing and made them asynchronous. Ashes really isn’t a poster child for advanced GCN features.
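As a rough illustration of what "making an existing compute task asynchronous" means in D3D12 terms, here is a hedged sketch (all names are made up, not Oxide's code): the work is submitted on a dedicated compute queue so that, on hardware with independent compute engines, it can overlap work on the graphics queue.

```cpp
#include <d3d12.h>

// Submit existing compute work (e.g. a post-process pass) on a separate
// compute queue so it can overlap the graphics queue where hardware allows.
void SubmitAsyncCompute(ID3D12CommandQueue* computeQueue,      // created with D3D12_COMMAND_LIST_TYPE_COMPUTE
                        ID3D12GraphicsCommandList* computeList,
                        ID3D12Fence* fence,
                        UINT64 fenceValue)
{
    // Record the compute work; on hardware with independent compute engines
    // (such as GCN's ACEs) this can execute alongside graphics work.
    computeList->Dispatch(64, 1, 1);
    computeList->Close();

    ID3D12CommandList* lists[] = { computeList };
    computeQueue->ExecuteCommandLists(1, lists);

    // Signal a fence so the graphics queue waits for the results only at
    // the point where it actually consumes them.
    computeQueue->Signal(fence, fenceValue);
}
```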

 

 

Our use of async compute, however, pales in comparison to some of the things the console guys are starting to do. Most of those haven’t made their way to the PC yet, but I’ve heard of developers getting 30% more GPU performance by using async compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these GCN-built and GCN-optimized engines start coming to the PC. I don’t think Unreal titles will show this very much, though, so likely we’ll have to wait and see. Has anyone profiled Ark yet?

 

In the end, I think everyone has to give AMD a lot of credit for not objecting to our collaborative effort with Nvidia even though the game had a marketing deal with them. They never once complained about it, and it certainly would have been within their rights to do so. (To complain, anyway; we would have still done it ;))

 

P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion arose because Nvidia PR was putting pressure on us to disable certain settings in the benchmark; when we refused, I think they took it a little too personally.

 

AFAIK, Maxwell doesn’t support async compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than not to.

 

Whether or not async compute is better is subjective, but it definitely does buy some performance on AMD’s hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to its scheduler, is hard to say.


Posted

Latest drama is that Maxwell doesn't support asynchronous computing

 

It does support it, in that it apparently executes the code. It just hurts performance to use it, because it seems it is not actually (currently?) asynchronous but serialized with other workloads. Let's see whether that's just early woes of a new feature or an architectural problem. If it's the latter, they'll probably use it to push people to buy Pascal next year ;) It's also quite funny when you think about the Anandtech article wrongly suggesting that Maxwell 2 is actually the king of asynchronous compute, showing misleading numbers for AMD's "compute" capability here http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading and completely disregarding the facts shown in the article comments...


Posted

It's down to context switching, which AMD does better. Nvidia uses async in Gameworks for improved VR performance, and Nvidia's implementation of async does not degrade performance. Ashes has been designed around AMD hardware from the start; it's designed for Mantle, which is GCN-only. I don't think this is a smart play from AMD. Other developers could do the same thing for Nvidia cards, using Gameworks and an unoptimized alternative path for AMD/Intel.

Posted (edited)

But the performance test was in DX12, not Mantle. Nvidia does not benefit much from DX12 because the drivers can already draw out all the potential of the hardware. AMD, on the other hand, had drivers that could use only a bit more than half of the hardware's power in DX11, but can draw out all the power in DX12, to make a simplistic comparison.


Posted

 

An hour-long video with a bunch of technical stuff that may go over some people's heads (it sure went over mine), but some key points for laymen are:

 

  • Nvidia has a working Vulkan driver you can use right now... if you're a member of Khronos and sign an NDA
  • Vulkan is still on track for release this year
  • The Nvidia driver will be released to the public on day one, or very shortly thereafter

Nothing super surprising, mostly a bunch of technical stuff, if you're into that.


Posted

If a game is designed around Mantle for years, being ported to another API doesn't suddenly negate that. In a benchmark by the same company, Oxide, with a demo of the same engine Ashes uses, Nitrous, the GTX 980 had double the average fps in DX12 vs DX11, and it also performed much better than AMD's highest single-GPU offering. So the argument that DX12 does nothing for Maxwell 2 cards doesn't hold much water.

 

The dev saying that they've been communicating more with Nvidia and committing more code for Nvidia, so it could be said they're biased towards Nvidia, tells you everything you need to know about the dev, because they know this is highly misleading. If they've designed the engine around AMD hardware for years, of course they're going to have to patch workarounds on top to support Nvidia cards. They also have a policy that any fix submitted by a GPU maker can't be detrimental to another vendor's product, but that didn't count when they were partnered with AMD, designing the game around only GCN and Mantle.

 

As for DX12 and Vulkan, there are many features that Nvidia GPUs can take advantage of; it's not true that there will be no performance increase for Nvidia with the new APIs. The gains for AMD from DX12 vs DX11 will be proportionally greater, but there's not much difference between DX12 and Mantle.

Posted

The people at Nvidia have been marketing better drivers as superior hardware and overpricing it. If DX12 closes the performance gap between AMD and Nvidia cards as much as that review showed, then Nvidia will no longer have an excuse to keep prices as high as they are and may lose customers, so they lash out at the results to maintain their image until they find a way to widen the gap again with their next product.


Posted (edited)
Agreed. I'd bet that Pascal will close the gap in that respect, unless some driver fix does it for Maxwell before then; and then, I'd bet, we'd read all over the web about the great innovation of async compute :)
 

It's down to context switching, which AMD does better. Nvidia uses async in Gameworks for improved VR performance, and Nvidia's implementation of async does not degrade performance. Ashes has been designed around AMD hardware from the start; it's designed for Mantle, which is GCN-only. I don't think this is a smart play from AMD. Other developers could do the same thing for Nvidia cards, using Gameworks and an unoptimized alternative path for AMD/Intel.

So you're implying here that Ashes uses an "unoptimized alt path" for nVidia? They don't; they use an alternative path for that one feature at nVidia's request: they leave out a feature that currently costs performance on nVidia's hardware in order to make the game run better.
 

If a game is designed around Mantle for years, being ported to another API doesn't suddenly negate that. In a benchmark by the same company, Oxide, with a demo of the same engine Ashes uses, Nitrous, the GTX 980 had double the average fps in DX12 vs DX11, and it also performed much better than AMD's highest single-GPU offering. So the argument that DX12 does nothing for Maxwell 2 cards doesn't hold much water.

So you're implying here that someone said that "DX12 does nothing for Maxwell 2"? Strawman?

 

Sure, the engine was designed for Mantle first, because it was the first low-level API. Now it's ported to DX12; big deal? Probably not, as they even admit that async compute, which is the one feature currently in dispute, is used just for a few post-processing effects.


Posted

So you're implying here that someone said that "DX12 does nothing for Maxwell 2"? Strawman?

Nvidia does not benefit much from DX12 because the drivers can already draw out all the potential of the hardware.

I interpreted that as Nvidia benefiting little from DX12 because they have reached their potential in DX11. Do you know what a strawman is?

Posted

 

So you're implying here that someone said that "DX12 does nothing for Maxwell 2"? Strawman?

Nvidia does not benefit much from DX12 because the drivers can already draw out all the potential of the hardware.

I interpreted that as Nvidia benefiting little from DX12 because they have reached their potential in DX11. Do you know what a strawman is?

 

Acting as though someone tried to make point A in order to refute point A, while no one actually said A.


Posted

Let me use numbers, since they are easier to understand.

The power of the GTX 980 is 100: DX11 uses 93, DX12 uses 98.

The power of the R9 280 is also 100: DX11 uses 58, DX12 uses 98.

Nvidia is mad because they advertised their hardware as 35% more powerful, and that is now revealed to be false.

I hope this makes it clearer.
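Spelling out the arithmetic behind that 35% figure, using the illustrative numbers above:

$$\underbrace{93 - 58 = 35}_{\text{DX11 utilization gap (points)}} \qquad\qquad \underbrace{98 - 98 = 0}_{\text{DX12 utilization gap}}$$

In other words, under DX11 the GTX 980 looks 35 points stronger even though both cards are pegged at 100 on paper, and DX12 erases that apparent lead.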



  • 3 weeks later...
Posted

From what I've read, AMD has a 100% hardware solution: the scheduling and processing are all done in hardware. Nvidia, however, has a hybrid system where the scheduling is done in software and the processing is done in hardware.

