Simple: speed.
Developing AI is actually about developing two separate technologies: software and hardware. Now, as you know, we've got the software already - we've had some version of it for decades (metaheuristic search) and we can't go much further here without the hardware to back it up.
Which is where memristors come in. While you don't need memristors for AI, they look like they'll be a powerful enabling technology because they promise speed gains of orders of magnitude at a time when conventional transistors have all but hit the 'brick wall'. Taks listed all the reasons. My materials science is rusty, but I believe the most important of them is that memristors take up far less area than transistors (in all dimensions), which allows denser and therefore faster circuits.
"Williams adds that memristors could be used to speed up microprocessors by synchronizing circuits that tend to drift in frequency relative to one another or by doing the work of many transistors at once."
http://www.sciam.com/article.cfm?id=missin...-of-electronics
Besides that, though, memristors appear to exhibit quirky 'learning'/adaptive abilities which resemble those seen in biological life: http://lanl.arxiv.org/abs/0810.4179v2
I'd say that's just the tip of the iceberg.
Somewhat loosely on topic: genetic algorithms are about a programme breeding competing solutions, right? Well, what about a programme that breeds competing programmes to solve a problem? Or a programme that does that to ITSELF (i.e. evolves itself to be better at evolving other solutions - which is what evolution itself does)? Or what about genetic algorithms that don't evolve programmes, but hardware?
http://en.wikipedia.org/wiki/Genetic_programming
http://en.wikipedia.org/wiki/Evolvable_hardware
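To make the "programme breeding competing solutions" idea concrete, here's a minimal sketch of a plain genetic algorithm in Python. It evolves bitstrings toward the classic OneMax toy target (all ones); every name in it (`evolve`, `fitness`, the parameter values) is just made up for illustration, not taken from any particular library.

```python
import random

def evolve(length=20, pop_size=50, generations=200,
           mutation_rate=0.02, seed=0):
    """Breed a population of bitstrings toward all-ones (OneMax)."""
    rng = random.Random(seed)

    def fitness(ind):
        return sum(ind)  # number of 1-bits; higher is fitter

    # Start from a random population of candidate solutions.
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]

    for _ in range(generations):
        def select():
            # Tournament selection: pick the fitter of two random individuals.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b

        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - bit if rng.random() < mutation_rate else bit
                     for bit in child]            # occasional bit-flip mutation
            next_pop.append(child)
        pop = next_pop

    return max(pop, key=fitness)

best = evolve()
print(sum(best), "of", len(best), "bits correct")
```

Genetic programming (the first Wikipedia link) is the same loop, except the individuals are programme trees rather than bitstrings, and evolvable hardware applies it to circuit configurations instead.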
AI by itself is fairly tame. It can learn, but it can't change its own coding. It's no more than a human mind in a computer instead of a body. Humans can't (easily) change how their code (both DNA and neural net) works, and nor can your average AI. So the fears about AIs getting out of control and taking over the world are naive. But not impossible. If somebody coded not just an AI, but an AI that could change its own code (which seems possible given the above, but probably an order of magnitude harder than creating the AI in the first place), that would be something else.