
angshuman

Members
  • Posts

    655
  • Joined

  • Last visited

Everything posted by angshuman

  1. Has anyone figured out how to enable AA in the game yet?
  2. By a comfortable margin. Not so much in shader power as in memory bandwidth: the *800 cards have 256-bit memory buses while the *600 ones have 128-bit buses. This has a huge impact on anti-aliasing performance and high-res textures.
  3. - If you can get an AGP 6800GT for about $100 - $120 (do the conversion yourself), it's probably a good buy. A brand new 7600GS AGP can be had in the US for about $120 these days (which is robbery, since the PCIe version is about $88), and the 6800GT should be a fair bit faster than that card.
     - Very hard to say. These things are really fragile; they could break in a month or last for years. Try to get a card that has never been overclocked or had its heatsink tampered with.
     - Yes, I wouldn't say it's unlikely. A 6800GT is a bit of overkill for an XP2600+.
  4. Did Bloodlines have auto-target-locking? From the video, ME seems to have auto-aiming.
  5. Sorry buddy, this is the first time I'm seeing this, but it seems rather interesting. So I gather a Dreamcast does not have an Ethernet port to connect to a generic LAN or PPPoE? And the idea of this project is to emulate the behavior of a 56K modem using a PC on a generic internet connection?
  6. Ah, thanks for clarifying. I thought Volo had finally lost it.
  7. http://www.mtv.com/overdrive/?name=games&id=1545880
  8. http://blogs.smh.com.au/mashup/archives//008228.html
  9. When I was 9 I could work on a BBC microcomputer at school, on which the only thing I could do was write programs in LOGO and BASIC. Then, one day, my teacher inserted a floppy disk into an external drive and showed us Bat N Ball, Moon Raider, and Galaxy. I had never been exposed to the notion of an electronic "game" before (no handheld, no console, nothing), so you can imagine what an impact that experience had on me.
  10. In case you haven't seen it already, you may find this website interesting. I believe the initiative was launched this month, and they have already managed to get the RIAA all panicky and screaming foul. Way to go!
  11. Oh yes, absolutely, affinity is a huge issue. Ah, but a shared L2 has hidden costs. First: the primary benefit of a shared L2 is that it gives you the "illusion" of an overall larger cache (compared to split private L2's) due to more efficient utilization of space. However, you need a high-bandwidth crossbar sitting between the cores and a shared L2 cache, which takes up a significant amount of area. Therefore, the total area you can allocate for the L2 is smaller. This is a very interesting tradeoff that was once very eloquently described by a colleague of a colleague as "what's better, the illusion of a larger cache, or a larger cache?" Second: there's the issue of cycle time. Two small L2's can be clocked faster than one large L2. Third: applications can destructively interfere with each other, so there are fairness issues. Associativity can only go so far to prevent this, although you could fix it with simple solutions like way reservation. Despite these issues, I guess what really tips the scales in a shared L2's favor is that if you have a lot of sharing, you can prevent unnecessary data replication and coherence traffic. So, for multi-threaded applications (as opposed to a multi-programmed workload), a shared L2 probably makes more sense, which is why we are seeing real implementations moving in this direction.
  12. Deceptively simple concepts, yes, but it's a b*tch to write complex and efficient programs with them and reason about their correctness. Lock-free approaches (such as transactional memory) greatly simplify the programming and reasoning process.
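     To make the "deceptively simple" point concrete, here is a minimal sketch of the kind of compare-and-swap retry loop lock-free code is built on, using C11 atomics (the function name `fetch_increment` is mine, purely for illustration). Even this trivial case needs a retry loop to handle races; real lock-free data structures get much hairier.

     ```c
     #include <stdatomic.h>

     /* Lock-free increment: retry a compare-and-swap until no other
        thread has raced ahead of us between the load and the swap. */
     static int fetch_increment(atomic_int *ctr) {
         int old = atomic_load(ctr);
         /* If *ctr still equals old, replace it with old + 1.
            On failure, old is refreshed with the current value
            and we simply try again. */
         while (!atomic_compare_exchange_weak(ctr, &old, old + 1))
             ; /* another thread won the race; retry */
         return old; /* value we incremented from */
     }
     ```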
  13. Athlons and Pentiums have split L1, split L2. Core 2's have split L1, shared L2. Regardless of the organization, if the OS switches the core you're executing on, it'll make sure your stack, PC etc. are saved. But neither the OS nor the app has to worry about explicitly "moving" anything over; it's all part of the virtual memory that is addressable from any core. The coherence protocol takes care of moving physical blocks between the caches as and when required.
  14. Hmm, I can't figure out what he's talking about either... you need a compare-n-swap or some such atomic instruction in order to implement lock-free synchronization. Also, I'm not quite sure what a "hardware" spin lock is... a hardware test-n-set instruction is required for any kind of a spin lock.
  15. Locks in general are evil. Coarse-grained locks are easy to program and reason about, but are extremely inefficient. Fine-grained locks are performance-efficient, but it takes pro programmers to reason about their correctness. Transactional Memory seems to be a promising alternative, but the concept is still at a very researchy stage, and it's going to be a while before we start seeing them on real systems.
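     The coarse- vs fine-grained tradeoff can be sketched with a toy hash table (the struct names and bucket count here are invented for illustration). The coarse version serializes everything behind one mutex; the fine version lets threads on different buckets proceed in parallel, at the cost of more locks to reason about (and deadlock risk for any operation that must hold two of them).

     ```c
     #include <pthread.h>

     #define NBUCKETS 16

     /* Coarse-grained: one lock serializes every operation. Trivial
        to reason about, but threads touching different buckets still
        contend on the same lock. */
     struct coarse_table {
         pthread_mutex_t lock;
         int bucket[NBUCKETS];
     };

     /* Fine-grained: one lock per bucket. Operations on different
        buckets run in parallel; any operation spanning two buckets
        must acquire locks in a fixed order to avoid deadlock. */
     struct fine_table {
         pthread_mutex_t lock[NBUCKETS];
         int bucket[NBUCKETS];
     };

     static void fine_add(struct fine_table *t, int key, int delta) {
         int b = (unsigned)key % NBUCKETS;          /* pick the bucket */
         pthread_mutex_lock(&t->lock[b]);           /* lock only that bucket */
         t->bucket[b] += delta;
         pthread_mutex_unlock(&t->lock[b]);
     }
     ```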
  16. Fixed. Lost in translation? I was looking at the Nvidia chipset mobo, and wondering how an ATI/AMD GPU would work together with it. Fixed again. http://ati.amd.com/ Or, if you prefer the Inquirer lingo:
  17. (This thread probably belongs in the Tech Forum.) This is a very deep and interesting issue. C and C++ are rooted in the languages of the 60's and 70's, when machines had 32KB of memory, 1MB Winchester drives were state-of-the-art, and integrated circuits operated in KHz. Look how far we have come today. Unfortunately, in terms of programming models we have not progressed at all, continuing to use the antiquated procedural systems created back then according to the needs of those times. Hardware (ISA) abstractions, OS abstractions, and language abstractions together form a very tight vicious cycle that is very difficult to get out of. Therefore, although the underlying hardware (at the microarchitectural level) has progressed at a breakneck pace, the abstractions the hardware exposes to the layers above have remained more or less the same for over 40 years. It is no longer clear whether the so-called "inefficiencies" of many of today's higher-level languages (Java, Python, ML) are genuine inefficiencies or merely artifacts of sticking to an antiquated paradigm. The good news is that the multi-core fad (or rather, the stall in single-thread performance scaling) has hit us all like a freight train, and is forcing architects and programmers alike to re-think the entire approach we have been using to solve problems. It's not just a performance issue any more. Multi-threaded programs are a PITA to write and debug correctly and efficiently. The number of ways a programmer can shoot himself in the foot with an "efficient" language like C has grown to unmanageable levels. This is why safe higher-level languages are being looked at with renewed interest in the community. I'm not sure if .NET, CORBA etc. are necessarily the solution. I don't know much about them, but AFAIK they are more like patchwork over existing C/C++-based infrastructures to let large modular projects be managed easily.
I'd be a lot more interested if someone attempted to write a game engine in Perl or Python.
  18. **EA** did that?!?? Was this a long time ago, when EA was actually a development studio instead of the bloodsucking parasite of a publisher that it currently is?
  19. I usually don't "register" in the traditional sense. Several games I've purchased recently have had some form of implicit registration, e.g., Steam games, Guild Wars, etc. The only "survey" I ever completed was Valve's excellent hardware survey where some script automatically scanned my hardware within a fraction of a second, prepared a report and sent it to Valve, and also showed me some nice bar charts on the data they had collected thus far.
  20. Again, depends on how much you want to spend. Assuming you're looking for the next cheapest rung below the 8800GTX and GTS: if you run only Windows, then I'd recommend the X1950XTX. If you need *nix support, then get the 7950GX2 or 7900GTX.
  21. Yeah, that sounds like a reasonable estimate. Hopefully prices will drop somewhat after R600 and 4x4 are launched, but it'll probably be a long while before it reaches the sub-$1000 mark. Fortunately, there's always the Conroe E6600/8800GTS combo, which is no slouch of a setup either, and you should be able to get that for $750 today. :D
  22. Agree on all counts. Servers, schmervers, MMO, whatever, I don't care, I absolutely flat-out refuse to pay monthly fees for any game. And of course, if I have to look at ads in a game, it better be free.
  23. I'm really looking forward to Unreal Engine 3 now, it should go bonkers on this beast... Gears of War already looks absolutely phenomenal on the puny X360. Imagine running the engine on a Kentsfield/G80 setup. The fact that it runs so well on the 360 also indicates that it can probably extract every last bit of juice from a unified-shader architecture.