Fionavar Posted October 8, 2006 the Inquirer The universe is change; your life is what our thoughts make it - Marcus Aurelius (161)
taks Posted October 8, 2006 there is some truth to this, however, i don't agree entirely. i'm currently working with a quad-core, and one of my tasks is parallel optimization. yes, it's hard. very hard. yes, the type of work i'm doing lends itself better, sort of, to parallel implementation. however, just about any process can be split into parallel threads and SOME gains can be had. what i've noticed, btw, is that gains are sort of logarithmic, i.e. 4 cores gives you a 2x boost with certain apps (with some, it could easily be 4x, but not most). developers that are saying it won't ever happen are actually about as short-sighted as the "640 kB of memory ought to be enough for anybody" line attributed to bill gates. btw, i'm not just trying to run parallel across cores. each core on my processor has two floating-point units, each capable of single-instruction, multiple-data (SIMD) operations. in this case, we can load two floating-point (32-bit) values at a time and operate on them, one "pair" in each pipe, simultaneously. so there's thread-level parallelism, and instruction-level parallelism. some processors can do even more... taks comrade taks... just because.
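A minimal sketch of the instruction-level side of this, using x86 SSE intrinsics as a stand-in for taks's hardware (his part works on float pairs; SSE works on groups of four 32-bit floats). The function name and the alignment/size assumptions are illustrative, not his actual code:

    /* element-wise multiply of two float arrays with SIMD.
       assumptions: n is a multiple of 4 and all three pointers
       are 16-byte aligned, as _mm_load_ps/_mm_store_ps require. */
    #include <stddef.h>
    #include <xmmintrin.h>  /* SSE intrinsics */

    void vec_mul(float *dst, const float *a, const float *b, size_t n)
    {
        for (size_t i = 0; i < n; i += 4) {
            __m128 va = _mm_load_ps(a + i);  /* 4 floats in one load */
            __m128 vb = _mm_load_ps(b + i);
            /* 4 multiplies in a single instruction */
            _mm_store_ps(dst + i, _mm_mul_ps(va, vb));
        }
    }

Thread-level parallelism then stacks on top: hand each core its own quarter of the arrays and run this loop on each slice independently.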
Gorth Posted October 8, 2006 Reminds me a bit of my time at the university. One of the projects we had to do was develop our own programming language and environment. The one our team did was a "concurrent" language that would spread itself out over the network on a number of computers, all contributing their calculation power to the task at hand. Nothing like 30+ RISC processors at your disposal. That exercise also showed us, as we had to demonstrate the usability of such concurrent processing, that there are tasks well suited for it and a lot that aren't. Our wave model simulation was quite impressive, but that one was mostly a lot of computations on arrays. I'm not sure Freecell would benefit to the same degree. It does require some work to design your tasks, determine dependencies, ensure consistent transactions, you name it. It's fun, but it can be hard to do if you are used to sequential thinking. Oh, and debugging *was* a pain in the butt, btw. I'll give them that. “He who joyfully marches to music in rank and file has already earned my contempt. He has been given a large brain by mistake, since for him the spinal cord would surely suffice.” - Albert Einstein
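A minimal sketch of why the wave model parallelized so well, written here in C with OpenMP as a stand-in for the team's custom concurrent language; the function and variable names are illustrative:

    /* one explicit finite-difference time step of the 1-D wave
       equation. every next[i] depends only on the *previous* time
       steps (cur, prev), so the iterations are independent and can
       be split across any number of processors. c2 = (c*dt/dx)^2.
       compile with -fopenmp. */
    void wave_step(float *next, const float *cur, const float *prev,
                   long n, float c2)
    {
        #pragma omp parallel for
        for (long i = 1; i < n - 1; i++)
            next[i] = 2.0f * cur[i] - prev[i]
                    + c2 * (cur[i+1] - 2.0f * cur[i] + cur[i-1]);
    }

Freecell, by contrast, is one long chain of dependent decisions, which is exactly the shape of task that doesn't split this way.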
taks Posted October 8, 2006 extreme pain. part of the problem is that no matter how parallel an app is in actual execution, it is written out as a serial stream on the screen when you're wading through it, trying to figure out why you just encountered a segmentation fault... ugh. taks comrade taks... just because.
Diamond Posted October 8, 2006 Even worse if that thing crashes gdb. (happened to me)
taks Posted October 9, 2006 my version of gdb is suffering. well, it suffers because it can only give you certain responses to a crash, and can't elaborate on them. typically, with SIMD instructions, a failure shows up as a seg-fault, a bus-error, or an illegal instruction. seg-faults come from reads outside a memory block, bus-errors from writes outside a memory block, and illegal instructions happen because you read in invalid data that was probably outside a memory block but not designated to another one (i.e. it wasn't malloc'd for something else, which would otherwise have caused a seg-fault). taks comrade taks... just because.
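For illustration, a minimal sketch of the allocation discipline that keeps SIMD accesses inside a block; posix_memalign and the 16-byte alignment are assumptions suited to SSE, not necessarily taks's platform:

    #include <stdlib.h>
    #include <stdio.h>

    int main(void)
    {
        size_t n = 1024;  /* element count; kept a multiple of the SIMD width */
        float *buf = NULL;

        /* SIMD loads and stores want aligned addresses inside the
           block; an access outside it is what produces the faults
           described above */
        if (posix_memalign((void **)&buf, 16, n * sizeof *buf) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }

        /* ... SIMD work on buf[0] through buf[n-1]; touching buf[n]
           or beyond is exactly the out-of-block access that crashes ... */

        free(buf);
        return 0;
    }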