
Posted

Besides, that sort of thing would require people to be online, and well, there are some people who are only online when they're surfing the net or downloading something.

Posted
Well, it's just that you mentioned computationally expensive, requiring large amounts of memory. I find that those don't go together that often. If something takes up large amounts of memory, it's probably doing so to reduce computational cost (since reading from memory is fast, and you can store data computed earlier).

 

I've only taken a basic algorithms course, so I'm not an expert, but it seems like many of the computationally expensive algorithms aren't really huge memory hogs. Not compared to their CPU time, anyways; cycle counts are often orders of magnitude higher than memory usage. This is because time is much more available than memory. You can always wait longer for something to finish. You can't finish at all if you run out of memory, though.

 

It's not really that far-fetched that lots of computationally intense algorithms on Cell, which specializes in enormous amounts of computation (which produces data), would take up lots of memory. That's why PC workstations need a ton of RAM, for example -- not just on the graphics chip, but in system memory.

 

Well, the Dreamcast was out before the PS2, as was the Saturn before the PSX. IIRC, the Sega Master System was out before the NES too, wasn't it? Being first is a bit overrated, IMO.

 

I don't believe I stated that first out of the gate wins; I simply stated that if the Xbox had come out first and got the bulk of the users, then developers would have been far more willing and ready to code games for the Xbox instead. It was really a moot point since it didn't happen, but you appeared to claim that developers made tons of games for the PS2 despite the difficulties - which caused me to think that you were perhaps stating that regardless of programming difficulty, developers will support a console full blown. I just don't find that to be a reasonable assumption, since they would rather develop for the console with the larger userbase. :p

 

The big buzzwords I'm hearing (though I don't follow consoles as much anymore) are things like photorealism and magnificent graphics, with "big bux" allocated to graphics.  I wouldn't be surprised at all if half (if not more) of the unified RAM goes towards graphics resources.

 

If photorealism is a goal, then much of the gameplay will be sacrificed, on either platform. There's a delicate balance between great graphics and simply overdoing it. We are not yet at the point where we can have visuals of that nature while still providing the gamer with a great gameplay experience.

 

You talk about the high bandwidth of the eDRAM. I know it has 256 GB/s of internal memory bandwidth, but from what I have read that only really benefits things like Z buffering and antialiasing. Just an assurance that the bottleneck isn't there, so to speak.

 

It offers a few other advantages, but yes, the bulk of the workload on the eDRAM is antialiasing and Z buffering. It really does provide an advantage, though, as it frees up a tremendous amount of bandwidth for other graphical operations. I believe that antialiasing is one of the most bandwidth-consuming tasks that a GPU does.
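
To put very rough numbers to that, here's a back-of-envelope sketch (every figure below is a generic assumption for illustration, not a measured Xbox 360 or PS3 number). Multisampled AA multiplies every color and depth read/write by the sample count, which is exactly the traffic the eDRAM keeps off the main memory bus:

#include <cstdio>

// Back-of-envelope framebuffer traffic for multisampled antialiasing.
// Every figure here is a generic assumption, not a measured console number.
int main()
{
    const double pixels   = 1280.0 * 720.0;  // 720p render target
    const double samples  = 4.0;             // 4x MSAA
    const double bytes_px = 4.0 + 4.0;       // 32-bit color + 32-bit depth/stencil
    const double overdraw = 3.0;             // each pixel touched ~3 times per frame
    const double rmw      = 2.0;             // read-modify-write: one read + one write
    const double fps      = 60.0;

    const double bytes_per_frame = pixels * samples * bytes_px * overdraw * rmw;
    std::printf("~%.1f GB/s of pure framebuffer traffic\n",
                bytes_per_frame * fps / 1e9);
    return 0;
}

Under those made-up assumptions it already works out to roughly 10 GB/s of nothing but framebuffer traffic, which is a big slice of a 20-odd GB/s GDDR3 bus.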

 

Not really, as shaders are making additional geometry less important.

Half-Life 2 pushes fewer polygons than Doom 3, and aside from shadows, looks much better (at least in my opinion). The per-pixel effects shifted the focus away from simply pushing more triangles. Besides, in the example you provide, the Cell not being able to access the GDDR3 pool is irrelevant, as the RSX would be using it all anyways. There'd be no memory available from the GDDR3 pool.

 

If RSX's bandwidth is consumed by applying FSAA and AF to the scene, as well as rendering the geometry and applying shading, etc., it may have previously been in a developer's interest to have Cell take over some of the geometry workload. Fully destructible environments would have also been possible, in my opinion. I recall Kojima stating in an interview just recently that they had to remove some of the destructible environments. Who knows, perhaps it's a byproduct of this, or something entirely different...

 

Aside from physics, the SPEs won't have a lot to do if they can't help RSX render, in my opinion.

Posted (edited)
If photorealism is a goal, then much of the gameplay will be sacrificed, on either platform. There's a delicate balance between great graphics and simply overdoing it. We are not yet at the point where we can have visuals of that nature while still providing the gamer with a great gameplay experience.

 

Oh believe me, I am well aware of this. Press releases from both Microsoft and Sony seem to be pressing the issue of bigger and better graphics.

 

That's why PC workstations need a ton of RAM, for example -- not just on the graphics chip, but in system memory.

 

I don't have too much experience with workstation apps. I have heard that much of the insane memory requirements come from high-end photo/video editing. There is 3D rendering too, though, in CAD applications. I'm not sure what the differences would be between using CAD and playing a game. I do find it kind of interesting that much of the content creation for games is done on hardware that tends to outstrip the hardware the game is ultimately played on.

 

I'm not sure what the differences are between content creation and simply viewing the content, though. I know I use a computer that is significantly faster than necessary for my software to run at an acceptable speed, because I spend a lot of time doing repeated tasks such as compiling. Memory usage goes up because I often load up the program with debug mode on, and cycle through and look at source code in my IDE...all while running the program. I wonder if stuff like this is part of the reason why RAM requirements go up with workstation applications. Content creation does seem more involved than merely viewing that content.

 

It's not really that far-fetched that lots of computationally intense algorithms on Cell, which specializes in enormous amounts of computation (which produces data), would take up lots of memory.

 

I guess I can't really dispute that they could use computationally intense algorithms that require a lot of memory. It's just that in my experience, stuff becomes computationally expensive because of the memory requirement. Memory is much more finite than time, so we find ways to limit memory usage, resulting in increases in computation time. Since you mentioned anti-aliasing: rip-maps aren't popular due to their memory requirements, so the tables are created dynamically instead, which is more computationally expensive.
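
As a tiny sketch of that trade-off (hypothetical code, not an actual rip-map implementation), here's the same lookup done both ways: precompute a table once and spend memory, or recompute every value on demand and spend cycles:

#include <cmath>
#include <vector>

// Hypothetical illustration of the space/time trade-off: either precompute a
// table once (memory-heavy, cheap to query), or recompute each value on
// demand (memory-light, CPU-heavy).

double expensive_value(int i)             // stand-in for a costly FP computation
{
    return std::sqrt(static_cast<double>(i)) * 0.5;
}

class PrecomputedTable {                  // trades memory for speed
public:
    explicit PrecomputedTable(int n) : table_(n) {
        for (int i = 0; i < n; ++i) table_[i] = expensive_value(i);
    }
    double lookup(int i) const { return table_[i]; }   // O(1), but n doubles stay resident
private:
    std::vector<double> table_;
};

struct OnDemand {                         // trades speed for memory
    double lookup(int i) const { return expensive_value(i); }  // nothing stored, recomputed every call
};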

 

If RSX's bandwidth is consumed by applying FSAA and AF to the scene, as well as rendering the geometry and applying shading, etc., it may have previously been in a developer's interest to have Cell take over some of the geometry workload. Fully destructible environments would have also been possible, in my opinion. I recall Kojima stating in an interview just recently that they had to remove some of the destructible environments. Who knows, perhaps it's a byproduct of this, or something entirely different...

 

Aside from physics, the SPEs won't have a lot to do if they can't help RSX render, in my opinion.

 

Well, if the assumption is that half of the total memory is going to be used for graphics anyways, then I guess you could still have the SPEs work on geometry while RSX's memory is used for other things. The Cell can still write to graphics memory really fast, and the RSX can read from main memory really fast.

 

As for the other uses of the SPEs, I think it might be our previous experiences that are clouding the issue. Historically, floating point operations have been significantly slower than integer operations. As a result, many abstractions of floating point are made, converting it into integer ops in order to speed things up. If the Cell is really as much of an FP beast as Sony and IBM say it is, then I'm curious whether the abstractions we make will still be as necessary (with reduced memory requirements as a possible latent benefit).
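
For what it's worth, a classic example of the kind of integer abstraction I mean is fixed-point math (a generic 16.16 sketch below, nothing Cell-specific):

#include <cstdint>

// A classic integer abstraction of floating point: 16.16 fixed-point.
// Historically used when FP units were slow; on an FP-heavy chip this kind
// of workaround may simply stop being worth the trouble.

using fixed = int32_t;                       // 16 integer bits, 16 fractional bits
constexpr int FRAC_BITS = 16;

constexpr fixed to_fixed(float f)  { return static_cast<fixed>(f * (1 << FRAC_BITS)); }
constexpr float to_float(fixed x)  { return static_cast<float>(x) / (1 << FRAC_BITS); }

constexpr fixed fx_add(fixed a, fixed b) { return a + b; }
constexpr fixed fx_mul(fixed a, fixed b)
{
    // widen to 64 bits so the intermediate product doesn't overflow
    return static_cast<fixed>((static_cast<int64_t>(a) * b) >> FRAC_BITS);
}

// e.g. to_float(fx_mul(to_fixed(1.5f), to_fixed(2.25f))) comes out to 3.375f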

 

In the old thread (before you even registered here), I suggested the possibility that AI could benefit, as calculating continuous random variables could help make decisions at nodes more accurate and perhaps more dynamic. My experience with ORTS (you can see me, Allan Schumacher, listed as having worked on it last summer) had me working on breaking the map down into triangles (and not just the 3D rendering triangles, so I couldn't reuse that information) in order to help reduce the search space for A* pathfinding. Currently, most games make a tile-based abstraction, and I believe BioWare mentioned they plan on going back to a tile-based pathfinding system for Dragon Age, from what I think was a hexagon-based system in Jade Empire and, I think, KOTOR.

A hurdle we found with breaking the world down into triangles is that they weren't uniform (we'd pick the ends of line segments and connect them to the corners of other line segments). Determining location became a bit more problematic, especially if a triangle was very large (for instance, a square room would be cut into an X as the corners connected to each other, so you could have 4 large triangles). The situation ended up becoming more floating point intensive, and we were starting to see diminishing returns in performance (it was also complicated...so maybe the PS3 is teh suck!).
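
For the curious, the location step is basically a per-unit point-in-triangle test like the one below (a generic sketch, not the actual ORTS code), and it's all floating point:

#include <cmath>

struct Vec2 { float x, y; };

// 2D cross product (the z of the 3D cross): its sign says which side of edge
// ab the point p lies on.
static float cross2(const Vec2& a, const Vec2& b, const Vec2& p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Point-in-triangle test: p is inside (or on an edge) if it sits on the same
// side of all three edges.  With large, non-uniform triangles, doing this per
// unit per frame is the kind of FP work that adds up.
bool point_in_triangle(const Vec2& p, const Vec2& a, const Vec2& b, const Vec2& c)
{
    const float d1 = cross2(a, b, p);
    const float d2 = cross2(b, c, p);
    const float d3 = cross2(c, a, p);
    const bool has_neg = (d1 < 0.0f) || (d2 < 0.0f) || (d3 < 0.0f);
    const bool has_pos = (d1 > 0.0f) || (d2 > 0.0f) || (d3 > 0.0f);
    return !(has_neg && has_pos);
}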

 

We also analyzed combat AI, specifically concentration of fire. We solved n vs. 1, but trying to decide which units our guys should concentrate their firepower on in n vs. many was much more difficult. We ended up using a simple Monte Carlo and iterating over the possibilities to see if we could find a pattern to the results and perhaps some sort of equation (and maybe even a proof!). We ended up playing around with ratios of firepower, cooldown, and hit points (we didn't want to consider the fact that some units might do different damage depending on the unit). As units get added to or removed from the battle, we ended up doing a lot of ratio calculations trying to determine the most efficient way to kill the enemy units. It ended up involving a fair bit of repeated floating point calculation as a result.
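
A stripped-down version of that kind of Monte Carlo looks something like this (the unit stats and structure are invented for illustration; this isn't the ORTS code): sample random kill orders, estimate the damage absorbed under each, and keep the best ordering seen.

#include <algorithm>
#include <random>
#include <vector>

struct Unit { float hp; float dps; };

// Damage our side absorbs if we destroy the enemies in exactly this order,
// assuming every enemy keeps firing until it dies.
float damage_absorbed(const std::vector<Unit>& enemies, float our_dps)
{
    float taken = 0.0f;
    float alive_dps = 0.0f;
    for (const Unit& e : enemies) alive_dps += e.dps;
    for (const Unit& e : enemies) {
        const float time_to_kill = e.hp / our_dps;   // hit-points-to-firepower ratio
        taken += alive_dps * time_to_kill;           // everyone still alive shoots back
        alive_dps -= e.dps;
    }
    return taken;
}

// Monte Carlo over kill orders: shuffle, evaluate, keep the cheapest found.
std::vector<Unit> best_kill_order(std::vector<Unit> enemies, float our_dps, int samples)
{
    std::mt19937 rng(12345);                         // fixed seed for repeatability
    std::vector<Unit> best = enemies;
    float best_cost = damage_absorbed(best, our_dps);
    for (int i = 0; i < samples; ++i) {
        std::shuffle(enemies.begin(), enemies.end(), rng);
        const float cost = damage_absorbed(enemies, our_dps);
        if (cost < best_cost) { best_cost = cost; best = enemies; }
    }
    return best;
}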

 

 

I do know that robotics AI has been hindered by limited Floating Point power. Interesting Read

 

On a final note, anything that could benefit from statistics (particularly of the random variable kind) could find itself benefiting from the SPEs. We're just familiar with physics and graphics right now, because we can't really "cheat" and abstract things into integers for those.

Edited by alanschu
Posted

What's cool about this discussion is that I can follow just enough to think I know what the hell 10k and alan are saying. What's bad about it is I might make a fool of myself someday by citing the discussion and then proving I really don't understand any of it at all. :Eldar's pleased with the technical types who frequent these parts icon:


Posted
I don't have too much experience with workstation apps. I have heard that much of the insane memory requirements come from high-end photo/video editing. There is 3D rendering too, though, in CAD applications. I'm not sure what the differences would be between using CAD and playing a game. I do find it kind of interesting that much of the content creation for games is done on hardware that tends to outstrip the hardware the game is ultimately played on.

 

Your typical CAD station has roughly 2GB of RAM and a minimum of a 512MB workstation GPU. Even then, an object that's 100'+ long struggles to render/rotate smoothly; maintaining 5fps is a problem for the most part. Has nothing to do with games though. lol

 

Well, if the assumption is that half of the total memory is going to be used for graphics anyways, then I guess you could still have the SPEs work on geometry while RSX's memory is used for other things. The Cell can still write to graphics memory really fast, and the RSX can read from main memory really fast.

 

The problem with having Cell write to the GDDR3 pool is that RSX will most likely be tapped out in bandwidth, so writing there would not be beneficial. Reading from the pool and removing some of the workload from RSX would have been a better choice, I think. Developers will find a way, though, so it's really a moot point. I will be curious to see how games turn out in four years.

 

your comments on AI/pathfinding *snipped for length*

 

Very interesting read. Would AI running in FP really simulate intelligence though, or would it be a more "basic" AI? I don't think the history table would be long enough for any kind of advanced AI to be run that way... I could easily be wrong though.

 

Cell and Xenon both use branch predictors from the PowerPC 970 - best in class - just with a smaller history table. I'm not going to say that running AI on the SPEs can't be done; I just don't think it'll be worth doing. It was proven that the PS2 could, in fact, do bump mapping, but nobody did it for a game since it was too taxing.

 

The SPEs were designed to run specific tasks that they compute repeatedly, which is not largely the case in gaming, and especially not the case in AI, I believe. Context switching on SPEs is expensive*, in addition to their weak decision-tree and branching performance. Not only is running AI on the SPEs a nightmare when it comes to synchronization, but you're using up huge amounts of Cell's real abilities to try to sludge through something that will run very poorly on it. I think a best-case scenario for the PS3 is that AI will run part time on the PPE, and perhaps later down the road someone will slave an SPE to help with whatever non-decision-tree work there is.

 

* SPEs have to manually context-switch - run code on them to flush the local store/reload it for the new dataset - it's an expensive operation for them.
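
To illustrate what that footnote means in practice, here's a conceptual sketch of an SPE "context switch." The dma_* helpers below are hypothetical stand-ins that just memcpy against a fake main-memory buffer to keep the sketch self-contained; they are not the real MFC intrinsics from the Cell SDK.

#include <cstdint>
#include <cstring>

// Conceptual sketch only: an SPE has no OS-style context switch, so swapping
// workloads means explicitly DMAing the old working set out of the 256KB local
// store and the new one in.

constexpr uint32_t WORKSET_BYTES = 64 * 1024;
static uint8_t main_memory[4 * WORKSET_BYTES];          // pretend XDR pool
alignas(128) static uint8_t local_store[WORKSET_BYTES]; // pretend local store

// Fake DMA helpers; assume ea + size fits inside the pretend pool.
void dma_put(const void* ls, uint32_t ea, uint32_t size) { std::memcpy(main_memory + ea, ls, size); }
void dma_get(void* ls, uint32_t ea, uint32_t size)       { std::memcpy(ls, main_memory + ea, size); }

void switch_task(uint32_t old_task_ea, uint32_t new_task_ea)
{
    dma_put(local_store, old_task_ea, WORKSET_BYTES);  // flush the old dataset out
    dma_get(local_store, new_task_ea, WORKSET_BYTES);  // pull the new dataset in
    // on real hardware the SPE program sits stalled through both transfers:
    // that stall is the "expensive" part
}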

Posted
http://media.ps3.ign.com/media/814/814614/vids_1.html

 

Producer Andy Sites claims they are finally past the point where any PC can outperform the PS3 at its own task. A $3,500 PC can't keep up with it, apparently.

 

Now, are those PCs $3,500 on today's market -- or were they $3,500 two+ years ago... I'd expect the game Andy Sites is producing (Untold Legends: Dark Kingdom) to have much better graphics following that statement, but they really don't seem to be any better than what can be done on current-gen PCs costing less than $3,500.

Posted (edited)
http://media.ps3.ign.com/media/814/814614/vids_1.html

 

Producer Andy Sites claims they are finally past the point where any PC can outperform the PS3 at its own task. A $3,500 PC can't keep up with it, apparently.

 

Now, are those PCs $3,500 on today's market -- or were they $3,500 two+ years ago... I'd expect the game Andy Sites is producing (Untold Legends: Dark Kingdom) to have much better graphics following that statement, but they really don't seem to be any better than what can be done on current-gen PCs costing less than $3,500.

 

Well, I imagine the PCs are pretty high end. Before the devkits existed (and even during E3, some claim), PS3 games were running on top-of-the-line SLI rigs.

 

EDIT: I'm not impressed with what they have produced, in any case.

Edited by Haitoku
Posted
The problem with having Cell write to the GDDR3 pool is that RSX will most likely be tapped out in bandwidth, so writing there would not be beneficial. Reading from the pool and removing some of the workload from RSX would have been a better choice, I think. Developers will find a way, though, so it's really a moot point. I will be curious to see how games turn out in four years.

 

Why would writing there not be beneficial? I was referring to the Cell doing some operations and writing the results into graphics memory for the RSX to use, not the RSX having to read it and give it back to the Cell.

 

Very interesting read. Would AI running in FP really simulate intelligence though, or would it be a more "basic" AI? I don't think the history table would be long enough for any kind of advanced AI to be run that way... I could easily be wrong though.

 

What do you mean more "basic AI?" I assume you mean "more" as in "more of the same." Saying "simulating intelligence" is a misnomer anyways. What do you mean by intelligence? What do you mean by "simulating intelligence?" I'm talking about potentially better decision making, with less need for abstractions.

 

 

 

As for your tidbits on Cell, where do you read up on it? I've done most of my reading (which admittedly isn't as much as I would like) directly from IBM's site, along with the odd discussion with professors at my university.

Posted
Why would writing there not be beneficial? I was referring to the Cell doing some operations and writing the results into graphics memory for the RSX to use, not the RSX having to read it and give it back to the Cell.

 

The initial idea was that Cell would actually perform the graphical operations it was taking away from RSX. So writing to the GDDR3 memory pool for RSX to process wouldn't really help that much, since RSX would be assumed to be tapped out for Cell to be needed in the first place.

 

What do you mean more "basic AI?" I assume you mean "more" as in "more of the same." Saying "simulating intelligence" is a misnomer anyways. What do you mean by intelligence? What do you mean by "simulating intelligence?" I'm talking about potentially better decision making, with less need for abstractions.

 

Yeah, more of the same. I'd like AI that learns a little better (the larger history tables in Cell and Xenon should make that possible). I'd also like AI to have more decisions to make, and actually choose the smarter decisions in most cases. I want the AI to have more decisions, more variety to choose from, more "thinking," as it were. In order to do that, you need more branching.

 

You said you've worked on AI in the past, so I'm sure you'd agree that you need heavy branching to make AI better, right?

 

As for your tidbits on Cell, where do you read up on it? I've done most of my reading (which admittedly isn't as much as I would like) directly from IBM's site, along with the odd discussion with professors at my university.

 

I don't really read up on Cell...

 

But IBM and the Berkeley report (I think it's been published on the web somewhere) are good places to read up on it.

Posted
The initial idea was that Cell would actually perform the graphical operations it was taking away from RSX. So writing to the GDDR3 memory pool for RSX to process wouldn't really help that much, since RSX would be assumed to be tapped out for Cell to be needed in the first place.

 

Errr. If the Cell is putting stuff in the RSX's memory for the RSX to work with, I'm not sure why it matters whether the RSX is "tapped out." If the Cell is computing all of the geometry, it can write that information right into the RSX's memory (and do it fast), and have it waiting there for the RSX to start doing all of its shader routines/rendering/whatever.

 

 

Yeah, more of the same. I'd like AI that learns a little better (the larger history tables in Cell and Xenon should make that possible). I'd also like AI to have more decisions to make, and actually choose the smarter decisions in most cases.

 

This is what I'm curious about with respect to using the Floating Point power of the Cell to calculate Random Variable distributions. Maybe we no longer need to abstract the decision making in order to reduce the floating point instructions, since Cell has a lot of floating point power.
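
Something like the sketch below is the sort of thing I have in mind: score each option with a draw from a continuous distribution around its expected utility instead of a coarse integer lookup table. The Option struct and its parameters are invented for illustration.

#include <random>
#include <vector>

// Pick an action by sampling a Gaussian around each option's expected utility.
// Lots of floating point, very little branching per option.
struct Option { float expected_utility; float uncertainty; };

int choose(const std::vector<Option>& options, std::mt19937& rng)
{
    int best = 0;
    float best_sample = -1e30f;
    for (int i = 0; i < static_cast<int>(options.size()); ++i) {
        std::normal_distribution<float> dist(options[i].expected_utility,
                                             options[i].uncertainty);
        const float sample = dist(rng);      // continuous random variable, not a table lookup
        if (sample > best_sample) { best_sample = sample; best = i; }
    }
    return best;
}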

 

I want the AI to have more decisions, more variety to choose from, more "thinking," as it were. In order to do that, you need more branching.

 

You said you've worked on AI in the past, so I'm sure you'd agree that you need heavy branching to make AI better, right?

 

Branching is important. But even then, determining which branch to take requires some sort of metric by which to make the decision (where I'm curious if the floating point power could be an advantage, as stated above). I know there's a lot of hoopla about the poor branch prediction of the Cell, though I'm not sure how much of an issue it really is (this is starting to go beyond the scope of my studies). According to IBM:

 

The SPU branch architecture does not include dynamic branch prediction, but instead relies on compiler-generated branch prediction using "prepare-to-branch" instructions to redirect instruction prefetch to branch targets.
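
In compiler terms, that kind of static hinting usually surfaces as a hint the programmer or compiler attaches to a branch. A generic GCC-style example is below; it's not SPU-specific code, and whether spu-gcc turns this particular builtin into a prepare-to-branch instruction is an assumption on my part.

// On a core with no dynamic predictor, the compiler's static guess is all you
// get, so telling it which way a branch usually goes matters.
#define LIKELY(x)   __builtin_expect(!!(x), 1)
#define UNLIKELY(x) __builtin_expect(!!(x), 0)

int process(int value)
{
    if (UNLIKELY(value < 0))      // rare error path: keep it off the hot path
        return -1;
    return value * 2;             // common case: falls through without a mispredict
}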

 

 

Also, I have heard concerns that the Cell, while powerful, is inappropriate as a gaming processor. I find this a little unusual, as many of IBM's articles seem to indicate that it is probably best suited as a gaming processor.

 

Meh.

Posted
Errr. If the Cell is putting stuff in the RSX's memory for the RSX to work with, I'm not sure why it matters whether the RSX is "tapped out." If the Cell is computing all of the geometry, it can write that information right into the RSX's memory (and do it fast), and have it waiting there for the RSX to start doing all of its shader routines/rendering/whatever.

 

The original idea was for Cell to actually read and write to both pools and be able to help out RSX even more (well, the original idea was no GPU at all, with Cell doing all the work, but that didn't happen). In order for Cell to get instructions from RSX now, RSX has to pull from the GDDR3 pool and then write to the XDR pool for Cell to read. Again, I'm not saying this is time consuming or anything of that nature; it's just more work, and wasted bandwidth, despite it being a minimal amount.

 

This is what I'm curious about with respect to using the Floating Point power of the Cell to calculate Random Variable distributions.  Maybe we no longer need to abstract the decision making in order to reduce the floating point instructions, since Cell has a lot of floating point power.

 

AI is not my field, but I can say this... It was originally thought that AI could run across the SPEs since it's a parallel architecture. People assumed that you could traverse several branches at once with the SPEs. It turned out to be something that wouldn't work: good theory, botched in practice.

 

Branching is important. But even then, determining which branch to take requires some sort of metric by which to make the decision (where I'm curious if the floating point power could be an advantage, as stated above). I know there's a lot of hoopla about the poor branch prediction of the Cell, though I'm not sure how much of an issue it really is (this is starting to go beyond the scope of my studies). According to IBM:

 

Cell itself doesn't have poor branch prediction, since the PPE is good at it, but yes, the SPEs have no branch predictor. They have what is called branch hinting. Hinting gives them a 50/50 shot at guessing the right branch to take; if they guess wrong, it's about a 20-cycle flush to start over. I think almost every developer will view that as a waste of time when you have an almost 90% prediction rate from the PPE.

 

Also, I have heard concerns that the Cell, while powerful, is inappropriate as a gaming processor. I find this a little unusual, as many of IBM's articles seem to indicate that it is probably best suited as a gaming processor.

 

Meh.

 

That's a toss-up. Until proven in action, it can only be assumed that IBM and Sony know what they're doing. I don't think it's a well-rounded processor, though; it has entirely too much focus on FPU performance and not enough on integer. Just remember that Cell's concept was not for the PS3; its purpose for Sony is beyond that. If the PS3 doesn't grab majority ownership this generation, I worry for Sony's PlayStation division. I just can't imagine publishers sinking millions of dollars into a project for the PS3 when that money could go much, much further on the Xbox 360 or even the Wii. If Sony still has the market share this time around, then developers/publishers will do what they need to. I just hope it doesn't kill off the smaller development houses even further... :-

Posted

Oh yeah, I forgot: congrats to your Oilers... Very good game. Should have been 3-1 though; it's a shame that ref was out of position and couldn't see the puck.

Posted
AI is not my field, but I can say this... It was originally thought that AI could run across the SPEs since it's a parallel architecture. People assumed that you could traverse several branches at once with the SPEs. It turned out to be something that wouldn't work: good theory, botched in practice.

 

I haven't heard that. Do you have a link?

Posted
AI is not my field, but I can say this... It was originally thought that AI could run across the SPEs since it's a parallel architecture. People assumed that you could traverse several branches at once with the SPEs. It turned out to be something that wouldn't work: good theory, botched in practice.

 

I haven't heard that. Do you have a link?

@10K fists: Thing is, the folks at T. J. Watson are very, very smart. I doubt they would have come up with something that doesn't work in practice. I'm not questioning what you're saying, 10K. All I'm saying is that it seems strange for IBM to mess up this badly. In fact, this is something that I've always wondered about the Cell. It's obvious that the thing has a lot of raw potential, but also that it's going to take quite a bit of programming effort or a fantastic omnipotent compiler to get anywhere close to that peak amount. Either that, or there's something unique about some part of game code that's really easy to code on those SPUs with little programmer or compiler effort. But all programmer feedback received thus far seems to indicate otherwise.

Posted
I haven't heard that.  Do you have a link?

 

Like I said earlier, I haven't really read up on Cell, so I don't know what's public or not. The Berkeley report might have something on context switching...

 

@10K fists: Thing is, the folks at T. J. Watson are very, very smart. I doubt they would have come up with something that doesn't work in practice. I'm not questioning what you're saying, 10K. All I'm saying is that it seems strange for IBM to mess up this badly. In fact, this is something that I've always wondered about the Cell. It's obvious that the thing has a lot of raw potential, but also that it's going to take quite a bit of programming effort or a fantastic omnipotent compiler to get anywhere close to that peak amount. Either that, or there's something unique about some part of game code that's really easy to code on those SPUs with little programmer or compiler effort. But all programmer feedback received thus far seems to indicate otherwise.

 

I'm not saying IBM or Sony made a mistake in the sense that they built Cell incorrectly. But Cell's main design is not for the PS3; Sony wants to do far more with it. Even the benchmarks for Cell are run on bigger versions with more SPEs and a higher clock speed. So please don't take it the wrong way when I point out potential flaws in Cell's architecture as it relates to video games. IBM didn't mess up; they did exactly what was needed to make Cell do what Sony wanted.

 

We honestly wouldn't really be having discussions like this had Xenon not been created with developers in mind. Not to sound like a raving fanboy, but the processor has so many benefits over Cell when it comes to video game programming that it's astounding. But once you remove Xenon from its gaming environment, it will be trounced by the performance of Cell. I think that just goes to show how powerful Cell really is, in that even when you put Cell in an environment it wasn't truly created for (video games), it still holds its own.

Posted (edited)
Like I said earlier, I haven't really read up on Cell, so I don't know what's public or not. The Berkeley report might have something on context switching...

 

Well, you must have heard a claim like that somewhere. If you didn't find it by reading up on it, then where?

 

I think that just goes to show how powerful Cell really is, in that even when you put Cell in an environment it wasn't truly created for (video games), it still holds its own.

 

That's just it. The impression I'm getting from IBM is that the Cell's strength is gaming.

Edited by alanschu
Posted
I think that just goes to show how powerful Cell really is, in that even when you put Cell in an environment it wasn't truly created for (video games), it still holds its own.

 

That's just it. The impression I'm getting from IBM is that the Cell's strength is gaming.

 

 

Weren't they talking about putting it in refrigerators and home theaters and crap though?


Posted
I think that just goes to show how powerful Cell really is, in that even when you put Cell in an environment it wasn't truly created for (video games), it still holds its own.

 

That's just it. The impression I'm getting from IBM is that the Cell's strength is gaming.

 

 

Weren't they talking about putting it in refrigerators and home theaters and crap though?

 

 

They have talked about it being used in many different places.

 

The IBM discussion of the Cell architecture goes on and on about all the capabilities, but at one point says that "Cell is not limited to game systems," and that it was designed for a wide range of applications, "including the most demanding consumer appliance: game consoles." It also talks about how parts of the design decisions are based on "the real time nature of game workloads."

Posted
The IBM discussion of the Cell architecture goes on and on about all the capabilities, but at one point says that "Cell is not limited to game systems," and that it was designed for a wide range of applications, "including the most demanding consumer appliance: game consoles." It also talks about how parts of the design decisions are based on "the real time nature of game workloads."

Gotta love the PR folks. :D
