Everything posted by angshuman

  1. That's very strange... most common display devices won't even accept a 1700x resolution. You have the Elite, correct? Did you happen to try this with multiple games?
  2. The console can send the final output at multiple resolutions, but it's a limited set. For example, when you play a DVD, it sends a 480p output signal and your TV upscales it to 720p or 1080p. I believe the 360 can output 480i, 480p, 720p, 1080i and 1080p, but a game can render the scene at pretty much any resolution and then use the 360's scaler to bring it to one of the valid output resolutions. It is up to the game developer to choose both resolutions. Most games use a 720p or 1080p output resolution, but the render resolution is all over the map, as you can see from the two links I've pasted above.
  3. I think Vista is a terrible piece of software, but compatibility issues have nothing to do with it. In fact, I believe that in order to create a clean new version of a product, backwards compatibility must sometimes be compromised. It's very difficult to improve the quality of a product if it is constantly bogged down by BC requirements. Both Microsoft and Intel have had to suffer due to absolutely unreasonable customer demands on BC. As Nighshade has shown, Vista still does a reasonable job of maintaining BC with legacy software.
  4. Depends on the game. Not many games actually render at full 720p or 1080p. For example, Halo 3 renders at 1152x640 and then upscales via the 360's hardware. As you can imagine, this is often a necessary tradeoff to maintain a certain scene complexity, framerate, etc. The folks at the Beyond3D forums have done an extensive investigation of the native resolutions that different console games render at. Here are the links: http://forum.beyond3d.com/showthread.php?t=46241 http://forum.beyond3d.com/showthread.php?t=46242
  5. http://slashdot.org/articles/08/03/19/2112253.shtml
  6. "Some" here would be AMD. There's no really strict definition of what a "true" quad core should be. As end users, we get 4 cores in a single socket either way. As I mentioned previously in the CPU thread, the disadvantage to Intel's approach starts becoming apparent only with a 3+ threaded application with *heavy* data sharing across the threads. With multitasking, as well as with threaded apps that do not share data, there's no drawback at all. On the other hand, the slapping-together approach allows Intel to get much better yields, which translates to higher clock speeds and larger caches at reasonable prices for us. These two factors have a much greater impact on end-user-perceived performance. I think Intel's approach is brilliant, given the current state of available applications.
  7. Yes. It's nearly as good as a "full-blown" quad core. The only time you'd see a huge performance deficiency is if you have a multithreaded application with massive sharing across the threads, and that too only if the sharing is happening across the two dual-core dies. I doubt you will find any real desktop application today or for the next couple of years that will be bottlenecked by this style of quad-core implementation, however aesthetically unappealing it may be. That said, for gaming, I'd pick the E8400 too simply for its ridiculous clock speed.
  8. "Your memory might be lacking here. My old 1.6GHz Athlon XP (the '1900+') easily beat the 2GHz P4 in most if not all scenarios; you might want to Google it. The A64 just made things worse for Intel and broadened AMD's appeal. The same was the case with the original Athlon vs. the PII. It was only higher consumer awareness, illegal marketing tricks, and on the technical side higher production output, clock speed and now the Core architecture that saved Intel's market share. Let's leave business ethics out of the equation here." I won't dispute your claim; the Athlon XPs were priced and model-numbered at launch to just about outperform the P4s at similar price points. As far as I can recall, both vendors were more or less trading blows each time a new set of steppings/clock speeds was introduced (similar to the balanced NV vs. ATI war in the G70 timeframe), but given Intel's stronger common-man reputation at that point, one could end up paying a slight "Intel tax" at a given performance level. That's not the point I was trying to make, though; those were just pricing/marketing games. From a technical standpoint, the two vendors seemed equally capable of shipping a range of CPUs with comparable performance in that era. Things changed with the Athlon64: Intel simply couldn't match the performance at the high end, and no amount of pricing tricks could make up for the large performance delta (especially in games) without incurring severe losses. Add to that the stellar chipset support for the A64 and the increasing public awareness. The P4 was much, much better off against the XP.
  9. "Interesting - I had been under the impression that the P4 architecture was robust and merely had a problem with power usage/heat generation." The P4 microarchitecture was designed to let clock speeds shoot through the roof at the expense of less work done per clock cycle (IPC). At the time it was introduced, that was probably a sound tradeoff (except at the *very* beginning... the clocks weren't high enough to trump the P3's performance), and the engineers knew that as manufacturing tech improved, the new design would let clock speeds scale very well. And it did work. Don't forget that AMD's superiority only started emerging with the Athlon64; before that, the P4 was doing just fine competing against (and often trumping) the Athlon XP. By the time the Athlon64 came to life, the P4 design philosophy had begun to show diminishing returns. Manufacturing realized that pushing beyond 3GHz with the technology of the day led to massive heat, power and reliability issues. Another problem was that pushing the processor clock sky-high created a huge disparity between processor and memory speeds, and Intel's lack of an integrated memory controller started to hurt more and more. They should have quit and changed tactics then instead of trying to milk it to 4GHz. The P4 wasn't altogether a terrible idea; the designers just couldn't foresee the manufacturing constraints they would hit around 3.8GHz. On the other hand, the whole exercise probably gave them experience handling high clock speeds, which would have come in handy with the C2D. (There's a rough IPC-versus-clock sketch after the post list below.)
  10. Nehalem uses the new QuickPath interconnect with Intel's first on-chip memory controller, which is why it requires a completely new socket/motherboard/chipset.
  11. 720p is considered HD, as is 1080p. DVDs are 480p (SD). Your point is still valid, I just got offended that you insulted my 720p setup.
  12. Got my a** kicked by the AI in the GDI Croatia mission in CnC Tiberium Wars. After almost 8 years I had to break my vow to always play RTS's vs. AI only in Hard mode.
  13. There's a fine line between the fun of leveling/character building and the tedium of grinding. Unfortunately, the threshold is different for different people.
  14. It all has to do with manufacturing and off-chip bandwidth. One ugly, non-scalable physical constraint is the number of pins on a chip, which is largely a function of chip surface area. A single multi-core chip with as much pin bandwidth as two smaller chips will end up costing as much silicon real estate as the dual-chip solution, but with much lower manufacturing yields (there's a rough yield sketch after the post list below). It's a similar story with PCBs: you can only route so many wires from the GPU to the memory on a single PCB.
  15. It seems people are complaining about different things here. Personally, I'm sick and tired of Norse mythology-based fantasy settings. Sure, you can argue that elements in other fantasy universes (e.g. sci-fi) have exact analogues with traditional high-fantasy universes, but that's fine by me... as long as there are new concepts to keep me interested, and as long as I don't have to hear the ****ing word "elven" ever again.
  16. "The disappointment that was the ATI Radeon 2900XT has a 512-bit interface and the memory is clocked at 1.65GHz. Its bandwidth beats that of the 8800GTX. On paper that GPU should have been a beast. I still don't understand what holds it back..." Optimized TWIMTBP code.
  17. To get a rough idea of a video card's performance, you can look at easy-to-calculate metrics such as pixel, texel, and vertex fill rates. These metrics are becoming less and less indicative of actual in-game performance on modern GPUs with highly programmable shaders, but they're still your best bet short of looking at actual benchmark scores. Fill rates can be calculated from the published specifications of graphics cards. http://www.gpureview.com/ has a pretty extensive and accurate database of GPUs with fillrate numbers, but the numbers for some modern cores with their unified scalar shaders aren't comparable to older GPUs (apples and oranges). If you don't even want to get into fill rates, but want a single number (based on specs, not benchmark runs) to compare cards at a glance, then given the way GPUs and games have evolved over the past few years, I'd say you want to look at one metric: memory bandwidth. Simply multiply the bit-width of the memory bus (e.g., 384 bits for the 8800GTX) by the effective memory clock (e.g., 1800MHz for the GTX) and you'll have a pretty good idea of the market bracket a card falls into. (There's a worked bandwidth calculation after the post list below.)
  18. In addition, there's a 150-item limit. Say you're at 145 items and you open a crate with 10 items in it. You MUST destroy 5 of the 10 items you just picked up; you cannot pick 5 and leave the others in the crate. You can't even cancel the whole process, return to your inventory, get rid of some useless items and then open the crate again. Once you open a crate, you're screwed. When you're equipping your characters the items are categorized, so that works fine, but when you're at a store, for example, you need to scroll through the entire randomly organized list of items to find what you want to sell, and the scroll speed is agonizingly slow. If you switch from the Sell tab to the Buy tab and back, the pointer jumps back to the top of the list; exploited properly this is sometimes helpful, but most of the time it's plain irritating. Eventually you get used to it, though. It's just an annoyance, like the other moderate annoyances: the lack of full control over the squad, the sometimes stupid party AI, the long elevator rides, the texture pop-ups, and the occasional glitch. My only real gripes with the game are the lack of good sidequests and the uninteresting uncharted worlds (although ).
  19. Like I said, Ninja Gaiden does look better, but it's an exception. Do keep in mind that the 360 uses 3 relatively anaemic PowerPC CPUs and a very unconventional ATi GPU while the Xbox had a standard wide Pentium3 CPU and an nVidia GeForce3. These are pretty much as different as two architectures can be, so the 360 has to rely on a combination of software emulation and dynamic binary translation to get the job done. Given that there isn't really a huuuge difference in computational capabilities between the two platforms, I am actually shocked the 360 manages to do what it does so well. MS must have had some truly exceptional programmers working on BC. Or, they might have pulled the PS3 BC trick without telling anybody. Edit: Nick said the same thing... didn't see his post.
  20. A few games look better on the 360 due to antialiasing (e.g. Ninja Gaiden), most games in the compatibility list play and look fine, and a few games don't run too well (e.g. Fable). All in all, if you're only interested in Xbox games, I'd say you're better off playing them on the original box, but a 360 does pretty ok too for most titles if you only want to keep one console.
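
A rough arithmetic sketch of the IPC-versus-clock tradeoff described in post 9. The IPC and clock numbers below are made-up assumptions, not measured P4 or Athlon figures; the only point is that throughput is roughly IPC x clock, so a design that gives up per-clock work has to win it back in frequency.

    # Sketch of the IPC-vs-clock tradeoff from post 9.
    # All figures below are illustrative assumptions, not measured CPU data.

    def relative_performance(ipc, clock_ghz):
        """First-order estimate: instructions per second ~ IPC * clock."""
        return ipc * clock_ghz * 1e9

    # Hypothetical "high-IPC, lower-clock" core vs. "low-IPC, higher-clock" core.
    wide_core = relative_performance(ipc=1.2, clock_ghz=2.2)
    deep_core = relative_performance(ipc=0.8, clock_ghz=3.2)

    print(f"wide core (higher IPC, lower clock): {wide_core / 1e9:.2f} G-instr/s")
    print(f"deep core (lower IPC, higher clock): {deep_core / 1e9:.2f} G-instr/s")

    # The deep-pipeline design only stays competitive while manufacturing keeps
    # delivering higher clocks; once frequency scaling stalls on heat and power,
    # the IPC deficit is no longer paid back.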
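
A back-of-the-envelope yield sketch for the points in posts 6 and 14, using the standard first-order Poisson yield model (yield ~ exp(-defect density * area)). The defect density and die areas are assumed values, not Intel or AMD figures; the takeaway is that one big die yields worse than the same silicon split across two smaller dies.

    import math

    # First-order Poisson yield model: Y = exp(-D0 * A)
    # D0 = defect density (defects per cm^2), A = die area (cm^2).
    # Both numbers below are assumptions for illustration only.
    D0 = 0.5          # defects per cm^2 (assumed)
    small_die = 1.5   # cm^2, one dual-core die (assumed)
    big_die = 3.0     # cm^2, a monolithic quad core of the same total area

    yield_small = math.exp(-D0 * small_die)   # ~47%
    yield_big = math.exp(-D0 * big_die)       # ~22%

    print(f"one dual-core die        : {yield_small:.1%} yield")
    print(f"monolithic quad-core die : {yield_big:.1%} yield")
    print(f"both small dies good     : {yield_small ** 2:.1%}")

    # Under this simple model the chance that *both* small dies are good equals
    # the big die's yield, but with small dies a single defect only throws away
    # half the silicon, so the cost per good quad-core package is lower.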
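
The memory-bandwidth rule of thumb from post 17, worked out in a few lines. The 8800GTX figures (384-bit bus, 1800MHz effective memory clock) and the 2900XT figures (512-bit bus, 1.65GHz memory) are the ones quoted in posts 16 and 17.

    # Peak memory bandwidth = (bus width in bits / 8) * effective memory clock.
    def memory_bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
        """Peak theoretical bandwidth in GB/s (1 GB = 1e9 bytes)."""
        return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

    print(f"8800GTX (384-bit @ 1800MHz): {memory_bandwidth_gb_s(384, 1800):.1f} GB/s")  # 86.4
    print(f"2900XT  (512-bit @ 1650MHz): {memory_bandwidth_gb_s(512, 1650):.1f} GB/s")  # 105.6

    # As post 16 notes, the 2900XT wins this metric on paper, which is why
    # bandwidth is only a rough market-bracket indicator, not a benchmark substitute.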