Posts posted by angshuman

  1. 6800GT is faster than 7600GS?

    By a comfortable margin. Not so much in shader power as in memory bandwidth: the *800 cards have 256-bit memory buses while the *600 ones have 128-bit buses. This has a huge impact on anti-aliasing performance and high-res textures.
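
    For a rough sense of the gap, peak memory bandwidth is just bus width times transfer rate. Here's a quick back-of-the-envelope sketch in C, using the stock memory clocks as I remember them (roughly 1000 MT/s GDDR3 on the 6800GT, 800 MT/s on the 7600GS), so treat the numbers as ballpark figures rather than datasheet values:

    #include <stdio.h>

    /* Peak bandwidth (GB/s) = bus width in bytes * transfers per second. */
    static double peak_gbps(int bus_bits, double mega_transfers_per_sec)
    {
        return (bus_bits / 8.0) * mega_transfers_per_sec * 1e6 / 1e9;
    }

    int main(void)
    {
        /* Clocks below are approximate stock values, from memory. */
        printf("6800GT: %.1f GB/s\n", peak_gbps(256, 1000.0)); /* ~32.0 */
        printf("7600GS: %.1f GB/s\n", peak_gbps(128,  800.0)); /* ~12.8 */
        return 0;
    }

    Roughly 2.5x the raw bandwidth, which is where the anti-aliasing and high-resolution texture advantage comes from.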

  2. - If you can get an AGP 6800GT for about $100 - $120 (do the conversion yourself :)), it's probably a good buy. A brand new 7600GS AGP can be had in the US for about $120 these days (which is robbery since the PCIe version is about $88), and the 6800GT should be a fair bit faster than this card.

     

    - Very hard to say. These things are really fragile, they could break in 1 month or last for years. Try to get a card that has never been overclocked or had its heatsink tampered with.

     

     - Yes, I wouldn't say it's unlikely. A 6800GT is a bit of overkill for an XP2600+.

  3. Sorry buddy, this is the first time I'm seeing this, but it seems rather interesting. So I gather a Dreamcast does not have an Ethernet port to connect to a generic LAN or PPPoE? And the idea of this project is to emulate the behavior of a 56K modem using a PC on a generic internet connection?

  4. When I was 9 I could work on a BBC-microcomputer at school on which the only thing I could do was write programs in LOGO and BASIC. Then, one day, my teacher inserted a floppy disk into an external drive and showed us Bat N Ball, Moon Raider, and Galaxy. I had never been exposed to the notion of an electronic "game" before (no handheld, no console, nothing), so you can imagine what an impact that experience had on me.

    DRM, DRM, DRM. I hate it, I have it, I hate it.

     

    I'm glad there are sites where I can still get pure MP3s without DRM tacked on. That way I actually own my music rather than just renting it.

    In case you haven't seen it already, you may find this website interesting. I believe the initiative was launched this month, and they have already managed to get the RIAA all panicky and screaming foul. Way to go! :)

    I wasn't referring to physically moving anything. I was referring to the downtime while things get moved, stalls, etc. That is, these functions are handled automatically, but they still take a considerable amount of time. I'm running into that right now with a benchmark I'm running that requires threads for larger sizes (an FFT). My times are all over the map because I cannot lock down affinity.

    Oh yes, absolutely, affinity is a huge issue (see the pinning sketch at the end of this post).

     

    I prefer the latter organization for parallel work. Split L1 is a given necessity for speed reasons. A shared L2, however, allows cores to play together in the sandbox a little better. Of course, associativity semi-reserves areas of L2 anyway...

    Ah, but a shared L2 has hidden costs. First: the primary benefit of a shared L2 is that it gives you the "illusion" of an overall larger cache (compared to split private L2s) through more efficient utilization of space. However, you need a high-bandwidth crossbar sitting between the cores and a shared L2, and that crossbar takes up a significant amount of area, so the total area you can allocate to the L2 itself is smaller. It's an interesting tradeoff that a colleague of a colleague once eloquently summed up as "what's better, the illusion of a larger cache, or a larger cache?" :)

     

    Second: there's the issue of cycle time. Two small L2s can be clocked faster than one large L2. Third: applications can destructively interfere with each other, which raises fairness issues. Associativity only goes so far in preventing this, although simple mechanisms like way reservation can help.

     

    Despite these issues, I guess what really tips the scales in a shared L2's favor is that if there is a lot of sharing, you avoid unnecessary data replication and coherence traffic. So for multi-threaded applications (as opposed to multi-programmed workloads), a shared L2 probably makes more sense, which is why we are seeing real implementations move in this direction.
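
    Coming back to the affinity point above: on Linux you can pin the benchmark thread yourself instead of letting the scheduler bounce it between cores. Here's a minimal sketch using sched_setaffinity (Linux-specific, needs _GNU_SOURCE), pinning the calling thread to core 0; adapt to taste:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);   /* allow this thread to run on core 0 only */

        /* pid 0 means "the calling thread"; after this call the scheduler
         * will no longer migrate it to another core */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        printf("pinned to core %d\n", sched_getcpu());
        return 0;
    }

    With the thread nailed to one core, the per-core L1 (and its share of the L2) stays warm and the run-to-run variance should tighten up considerably.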

    Semaphores and mutual exclusion, stuff like Dekker's algorithm, are surprisingly simple.

    Deceptively simple concepts, yes, but it's a b*tch to write complex and efficient programs with them and reason about their correctness. Lock-free approaches (such as transactional memory) greatly simplify the programming and reasoning process.
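
    To make the contrast concrete, here's a toy sketch of my own (using C11's <stdatomic.h>) of a lock-free increment built on compare-and-swap; the retry loop is the whole trick, and there is no lock to forget, deadlock on, or hold across a page fault:

    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic long counter = 0;

    /* Lock-free increment: read the value, compute the update, and publish it
     * with a single compare-and-swap. If another thread slipped in between the
     * read and the CAS, the CAS fails and we retry with the refreshed value. */
    static void lockfree_inc(void)
    {
        long old = atomic_load(&counter);
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1)) {
            /* on failure, 'old' now holds the current value; loop and retry */
        }
    }

    int main(void)
    {
        for (int i = 0; i < 1000; i++)
            lockfree_inc();
        printf("%ld\n", atomic_load(&counter));
        return 0;
    }

    A single counter is the easy case, of course; composing several such updates into one atomic step is exactly what transactional memory promises to make painless.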

    They have shared L1? Or only shared L2? I've got a unique L1 per core, and one fairly large, but shared, L2. If Linux pulls a core switch on me, I have to move my entire stack into a region I can use.

     

    taks

    Athlons and Pentiums have split L1 and split L2. Core 2s have split L1 and a shared L2. Regardless of the organization, if the OS switches the core you're executing on, it'll make sure your stack, PC, etc. are saved. But neither the OS nor the app has to worry about explicitly "moving" anything over; it's all part of the virtual address space, which is addressable from any core. The coherence protocol takes care of moving physical blocks between the caches as and when required.

    Hmm, I can't figure out what he's talking about either... you need a compare-and-swap or some such atomic instruction in order to implement lock-free synchronization. Also, I'm not quite sure what a "hardware" spin lock is... a hardware test-and-set instruction is required for any kind of spin lock.
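
    Just to illustrate how thin the line is: a spin lock really is nothing more than a loop around an atomic test-and-set. A minimal sketch of my own using C11's atomic_flag:

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    /* atomic_flag_test_and_set atomically sets the flag and returns its old
     * value. We spin until that old value is 0, i.e. until we are the thread
     * that flipped it from free to taken. */
    static void spin_lock(void)
    {
        while (atomic_flag_test_and_set(&lock))
            ;   /* busy-wait */
    }

    static void spin_unlock(void)
    {
        atomic_flag_clear(&lock);
    }

    Without that atomic read-modify-write there is no way to close the window between seeing the lock free and claiming it, which is the point above.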

  10. Locks in general are evil. Coarse-grained locks are easy to program and reason about, but they are extremely inefficient. Fine-grained locks perform well, but it takes expert programmers to reason about their correctness. Transactional memory seems to be a promising alternative, but the concept is still at a very researchy stage, and it's going to be a while before we start seeing it on real systems.
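
    To illustrate the tradeoff with a toy sketch (names made up, pthreads assumed): one lock over a whole table is trivial to get right but serializes everything, while one lock per bucket lets unrelated updates run in parallel at the cost of trickier reasoning, e.g. lock ordering the moment an operation needs two buckets at once.

    #include <pthread.h>

    #define NBUCKETS 16

    static int bucket[NBUCKETS];   /* in practice you'd pick one scheme, not mix them */

    /* Coarse-grained: every update takes the single table lock. */
    static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

    static void inc_coarse(int i)
    {
        pthread_mutex_lock(&table_lock);
        bucket[i]++;
        pthread_mutex_unlock(&table_lock);
    }

    /* Fine-grained: one lock per bucket, so threads touching different
     * buckets never contend. Call locks_init() once before use. */
    static pthread_mutex_t bucket_lock[NBUCKETS];

    static void locks_init(void)
    {
        for (int i = 0; i < NBUCKETS; i++)
            pthread_mutex_init(&bucket_lock[i], NULL);
    }

    static void inc_fine(int i)
    {
        pthread_mutex_lock(&bucket_lock[i]);
        bucket[i]++;
        pthread_mutex_unlock(&bucket_lock[i]);
    }

    The promise of transactional memory is that you get to write something closer to the coarse version and let the system find the fine-grained concurrency for you.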

  11. Very Impressed with it but I'm ambivalent about Nvidias new graphic cards. So im curious to how an ATI AMD would run on a Nvidia Chipset. Are they going to play nicely?

    Fixed. :brows:

    :thumbsup: Lost in translation?

    I was looking at the Nvidia Chipset Mobo, and wondering how a ATI AMD GPU would work together.

    Fixed again. :sorcerer:

     

    http://ati.amd.com/

     

    Or, if you prefer the Inquirer lingo:

    I was looking at the Nvidia Chipset Mobo, and wondering how a DAAMIT GPU would work together.
  12. (This thread probably belongs in the Tech Forum.)

     

    This is a very deep and interesting issue. C and C++ have their roots in the '60s and '70s, when machines had 32 KB of memory, 1 MB Winchester drives were state-of-the-art, and integrated circuits operated in the kHz range. Look how far we have come since. Unfortunately, in terms of programming models, we have not progressed at all; we are still using the antiquated procedural systems created back then for the needs of those times.

     

    Unfortunately, hardware (ISA) abstractions, OS abstractions, and language abstractions together form a very tight vicious cycle that is very difficult to get out of. So although the underlying hardware (at the microarchitectural level) has progressed at a breakneck pace, the abstractions the hardware exposes to the layers above have remained more or less the same for over 40 years. It is no longer clear whether the so-called "inefficiencies" of many of today's higher-level languages (Java, Python, ML) are genuine inefficiencies or merely artifacts of sticking to an antiquated paradigm.

     

    The good news is that the multi-core fad (or rather, the stall in single-thread performance scaling) has hit us all like a freight train, and is forcing architects and programmers alike to try and re-think the entire approach we have been using for solving problems. It's not just a performance issue any more. Multi-threaded programs are a PITA to debug and write correctly and efficiently. The number of ways in which a programmer can shoot himself in the foot using an "efficient" language like C has grown to unmanageable levels. This is why safe higher level languages are being looked at with a renewed interest in the community.

     

    I'm not sure if .NET, CORBA etc. are necessarily the solution. I don't know much about them, but AFAIK they are more like patches on top of existing C/C++-based infrastructures that let large modular projects be managed more easily. I'd be a lot more interested if someone attempted to write a game engine in Perl or Python. :thumbsup:

  13. Oh, one more thing, since I realized it later on.

     

    Admittedly there's not much tangible incentive to do so, but I did think it was pretty cool that Electronic Arts gave every person that registered Ultima 9 a free remastered CD with all of the patches built right in.

     

    Also included was a letter of apology for the state that the original game was sent in.

     

    That was pretty neat.

    **EA** did that?!?? :p Was this a long time ago, when EA was actually a development studio instead of the bloodsucking parasite of a publisher that it currently is?

  14. I usually don't "register" in the traditional sense. Several games I've purchased recently have had some form of implicit registration, e.g., Steam games, Guild Wars, etc. The only "survey" I ever completed was Valve's excellent hardware survey where some script automatically scanned my hardware within a fraction of a second, prepared a report and sent it to Valve, and also showed me some nice bar charts on the data they had collected thus far.

  15. Angs, what do you reckon is the minimum cost for a Kentsfield/G80 setup?  I'm guessing the CPU+GPU alone will be around $1500, let alone the price of an entire system.

    Yeah, that sounds like a reasonable estimate :aiee:. Hopefully prices will drop somewhat after R600 and 4x4 launch, but it'll probably be a long while before such a setup reaches the sub-$1000 mark. Fortunately, there's always a Conroe E6600/8800GTS combo, which is no slouch of a setup either, and you should be able to get that for $750 today. :D

  16. I'm really looking forward to Unreal Engine 3 now; it should go bonkers on this beast... Gears of War already looks absolutely phenomenal on the puny X360. Imagine running the engine on a Kentsfield/G80 setup. The fact that it runs so well on the 360 also suggests that it can extract every last bit of juice from a unified-shader architecture.
