
it's tech


taks


the memristor. predicted in 1971, proved may 1 of this year (well, published).

 

http://www.spectrum.ieee.org/dec08/7024

 

from the article:

 

"...the influence of memristance obeys an inverse square law: memristance is a million times as important at the nanometer scale as it is at the micrometer scale, and it

comrade taks... just because.


perhaps. chua, along with the guys that developed the actual circuits, deserves a nobel for this discovery if it pans out. the article links to some dissension in the engineering ranks, which i have yet to read, but that is not uncommon nor unexpected, so i'll tentatively side with the discoverers for now, reserving final judgment till i get all the facts.

 

taks

comrade taks... just because.


I dislike Chua. He wrote that abominable script we had to carry with us twice a week, which weighed about 10 kg...

 

As for AI via the approach of "artificial neurons and synapses": Not going to happen, I think. Yes, neural networks are useful for certain recognition and optimization problems etc., but so far, because of their unclear mode of operation and because the networks can only be designed empirically (sorry for my English :) ), the approach doesn't seem very promising to me, as far as I know. Understanding how one cell fires doesn't unlock the understanding of even a puny piece of the whole picture.

 

I'm looking forward to more immediate benefits for microprocessor technology :)

Citizen of a country with a racist, hypocritical majority


the paper chua wrote is that big? why on earth were you carrying it around twice a week? what the heck was it?

 

you've apparently not delved into component analysis techniques. the ability to configure neurons to adapt and solve problems is beyond belief, IMO. cichocki has a good book on the matter and he describes things in the "perceptron" manner, rather than as standard adaptive processes.

 

the simple benefits to general electronics, microprocessors included, are going to be the immediate revolution. this is bigger than the transistor. the other stuff will take time and more smart people to realize true benefits.

 

taks

comrade taks... just because.


I dislike Chua. He wrote that abominable script we had to carry with us twice a week, which weighed about 10 kg...

 

As for AI via the approach of "artificial neurons and synapses": Not going to happen, I think. Yes, neural networks are useful for certain recognition and optimization problems etc., but so far, because of their unclear mode of operation and because the networks can only be designed empirically (sorry for my English :thumbsup: ), the approach doesn't seem very promising to me, as far as I know. Understanding how one cell fires doesn't unlock the understanding of even a puny piece of the whole picture.

 

I'm looking forward to more immediate benefits for microprocessor technology :thumbsup:

 

Either you're not coming across well in English or you're fairly ignorant about neural networks. :x

 

Neural networks are about exactly the opposite of understanding how one node fits into the grand scheme. They're about holistic details and pattern emergence.

 

You say we can't empirically understand neural networks, but I think you are mistaken. Neural networks are exactly the type of thing best suited for the scientific method. Trial and error. It's actually very similar to what some biologists do with things like E. coli because evolution is also an optimisation process (heck, genetic algorithms, anybody?).
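Since genetic algorithms came up: the core loop is tiny. Here's a minimal sketch of the idea, evolving candidate solutions by selection, crossover and mutation. The toy bit-string target, population size and mutation rate are all made up for illustration; it's not any particular library's API.

```python
import random

# toy problem: evolve a bit string that matches a hidden target pattern
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(individual):
    """Number of bits that match the target (higher is better)."""
    return sum(a == b for a, b in zip(individual, TARGET))

def mutate(individual, rate=0.05):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

def crossover(parent_a, parent_b):
    """Single-point crossover of two parents."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

# random initial population
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(200):
    # truncation selection: the fitter half become parents
    population.sort(key=fitness, reverse=True)
    parents = population[: len(population) // 2]
    if fitness(population[0]) == len(TARGET):
        break
    # keep the parents (elitism) and breed children to refill the population
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print(generation, population[0], fitness(population[0]))
```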

 

What you perhaps mean is that we can't know (or it's orders of magnitude harder to know) the exact details of the system as we could with GOFAI (good old-fashioned AI - symbolic manipulation, mostly) - we can't be 100% certain that our input will produce some desired output. But if you think that's a flaw then you don't understand AI. This deviation away from perfect knowledge of the system is exactly what AI researchers are after, because they are sacrificing consistency and reliability for fluidity and a degree of randomness; the beauty of neural networks (and other optimisation techniques such as genetic algorithms) is that they learn and adapt.

 

Ahh, I'm not going to go into a big huge rant about it, but I really hope you take a closer look at the worth of metaheuristic optimisation, because it's not about to leave us any time soon (for good reason!).

 

I will leave you with this little teaser of the potential behind metaheuristic optimisation techniques, though: http://www.physorg.com/news82910066.html

 

taks: Yeah it's pretty neat. I'm quite looking forward to it, but what's probably more awesome is that memristors are just one of many breakthroughs in electronics recently. There's been a huge amount of noise about graphene transistors, for example. Although, admittedly, graphene transistors don't offer the sort of paradigm shift of variable resistance.


Extremely interesting!

 

I guess we are probably less than a century away from true AI now.

 

I used to be a skeptic about AI, but in light of the fact that Moore's law has held steady, and all the new software and hardware advances we've made this decade alone, I'm finding it very hard to be skeptical these days.

 

We're definitely less than a century away. How much less is the question.

 

Supercomputing still has a little while to go before we can model the human brain:

 

[image: supercomputing performance chart]

 

But 'a little while' in electronic speak isn't much under Moore's law (and that's at current pace - things like graphene transistors or memristors will probably violate Moore's law in a very good way).

 

IBM is currently working on brain simulation right now. Interestingly, while it might take a few more decades to emulate/simulate the human brain, something like a mouse or fruit fly brain is altogether less complex (though still complex), and I expect the first real artificial intelligence to be something akin to those.


the memristor is going to change the world, if it can work as they claim. speed notwithstanding, an element that can essentially assume any value, stay there, and be read out instantaneously means huge strides for everything else. in other words, such a beast is what will enable the advances that will spawn true AI.

 

that graphene transistor suffers from two problems compared to the memristor: it isn't as dense, and it is an active component. couple it with the memristor, however, and two-dimensional processors with distributed memory perhaps become a reality.

 

taks

comrade taks... just because.


never said i could do that... you can't even participate in a joke without a strawman.

 

i can tell you, however, that 2^12<4097<2^13. it's a useful tool to be able to do that without resorting to a calculator. i don't need to claim any vague intellectual superiority. i just like making tools like you look stupid, that's all. btw, what i did in that one thread was hardly "madcap." believe it or not, that's actually how people that understand numbers think and what they do when they work with numbers in their heads. it's also a big advantage to be able to calculate logs in your head when you deal with them daily.

 

it is not my fault you don't get it.

 

taks

comrade taks... just because.


never said i could do that... you can't even participate in a joke without a strawman.

 

i can tell you, however, that 2^12<4097<2^13. it's a useful tool to be able to do that without resorting to a calculator. i don't need to claim any vague intellectual superiority. i just like making tools like you look stupid, that's all. btw, what i did in that one thread was hardly "madcap." believe it or not, that's actually how people that understand numbers think and what they do when they work with numbers in their heads. it's also a big advantage to be able to calculate logs in your head when you deal with them daily.

 

it is not my fault you don't get it.

 

taks

 

[image: chill_pill.jpg]

 

Shhh, taks. It's alright - there aren't any scarecrows on the Internet. Now sit back and relax with this soothing story about Hewlett-Packard's plans to begin manufacturing prototype memristor RAM in 2009.

 

HP Labs plans to unveil RRAM prototype chips based on memristors with crossbar arrays in 2009.

 

It will also use a similar crossbar architecture to harness precise resistance change in an analog circuit. HP Labs claims that massive memristor arrays with tunable resistance at each crossbar could enable brain-like learning. In the brain, a synapse is strengthened whenever current flows through it, similar to the way resistance is lowered by flowing current through a memristor. Such neural networks could learn to adapt by allowing current to flow in either direction as needed.

 

"RRAMs are our near term goal, but our second target for memristors, in the long term, is to transform computing by building adaptive control circuits that learn," said Stewart. "Analog circuits using electronic synapses will require at least five more years of research."

 

They estimate that it will take five years to produce the first analog memristor prototypes, with commercial applications about a decade out.
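To make the "synapse strengthened whenever current flows through it" analogy from the quote concrete, here is a hedged toy sketch. This is not HP's device physics or crossbar architecture, just the qualitative rule described above, with made-up numbers: conductance at a crossbar junction rises when current flows one way and falls when it flows the other.

```python
import numpy as np

# toy crossbar of memristive "synapses": conductance G (siemens), one per junction
rows, cols = 4, 3
G = np.full((rows, cols), 1e-4)          # start every junction at the same conductance
G_MIN, G_MAX = 1e-5, 1e-3                # device limits (made-up numbers)
ETA = 1e-3                               # how strongly current changes the state (made up)

def step(v_rows):
    """Apply voltages to the row wires (columns held at 0 V) and let the
    resulting currents nudge each junction's conductance, Hebbian-style."""
    global G
    I = G * v_rows[:, None]                   # Ohm's law per junction: I = G * V
    G = np.clip(G + ETA * I, G_MIN, G_MAX)    # positive current raises conductance,
                                              # negative current lowers it (clipped to limits)
    return I.sum(axis=0)                      # column wires sum the currents

# repeatedly stimulate rows 0 and 2: those junctions "learn" (conductance rises)
for _ in range(1000):
    step(np.array([1.0, 0.0, 1.0, 0.0]))

print(np.round(G / 1e-4, 2))   # rows 0 and 2 end up stronger than rows 1 and 3
```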


BWAHAHA I didn't expect this to happen hahaha. It was just a joke.

 

(and taks was right btw, I did estimate it without a calculator: since 2^20 is roughly 1 million (it's the definition of a megabyte), I can estimate that 2^18 is roughly 1,000,000/2^2, or about 250k)
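For what it's worth, the shortcut is easy to sanity-check; a quick sketch, just arithmetic, nothing clever:

```python
import math

# the mental shortcut: 2**10 = 1024 is roughly 10**3, so 2**20 is roughly 10**6
print(2**20)                  # 1048576, i.e. roughly a million
print(2**18, 10**6 // 2**2)   # 262144 vs the rough estimate of 250000

# and taks' bracketing of 4097: 2**12 = 4096 < 4097 < 8192 = 2**13
print(2**12 < 4097 < 2**13)   # True
print(math.log2(4097))        # ~12.0004, so log2(4097) sits just above 12
```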

 

 

Anyways, didn't realize it was such a sore spot haha :(


taks: It was not the paper about memristors we had to carry around with us, but the script for a lecture about circuits. The lecture was not held by Chua, of course, but the script was written by him and some of his colleagues.

Neural networks are about exactly the opposite of understanding how one node fits into the grand scheme. They're about holistic details and pattern emergence.

 

You say we can't empirically understand neural networks, but I think you are mistaken. Neural networks are exactly the type of thing best suited for the scientific method. Trial and error. It's actually very similar to what some biologists do with things like E. coli because evolution is also an optimisation process (heck, genetic algorithms, anybody?).

I must thank you for your much more accurate description - indeed there were probably errors in my post, but most of it was badly phrased. In fact I meant to say exactly that: the only thing to understand is one (or rather: a few) nodes; the whole pattern is not predictable. It's only via trial and error, and sometimes more or less educated guesses, that one gets to good results - we can only determine the behaviour empirically, i.e. experimentally. Thanks for pointing it out!

 

A perceptron is a cool little thing, but who wants to solve linearly separable problems o:) Yeah, I know, multilayer perceptrons... Adalines were the evolved version, trained on a continuous (linear) output instead of a hard step for activation. Yes, my knowledge is limited indeed; it's just what a failed student of electronics, having taken two courses on electromagnetic fields and components and on semiconductor design, and now, in a new and more successful try, on bio-inspired approaches to computation and AI, vaguely remembers from those courses. So forgive further inaccuracies and understand that I'm simply interested in this field: AI still from a student's point of view, electronics as an informed customer :)

 

Well, back to the topic at hand: How would a device like the memristor allow AI to work more efficiently / enable us to design more clever hardware usable for AI purposes? It doesn't become clear from the linked IEEE article. The only application mentioned is non-volatile memory, and it is said that it can be coupled with transistors to make them more efficient. But: How so? Is it, for example, necessary / useful for the "switches" to remember their previous position after they're no longer in an electrical circuit? Does a neural network, in the course of adapting, see any benefit from the previous weights being saved unchanged? Just random guesses, don't have much capacity left for today :thumbsup: But enlighten me, I'm interested :)

Edited by samm

Citizen of a country with a racist, hypocritical majority


A perceptron is a cool little thing, but who wants to solve linearly separable problems :thumbsup:

most problems that i deal with are actually linearly separable. multipath in communications systems, albeit time varying, varies slowly enough that the statistics may be tracked, allowing continual separation. i have a cool radar one in which the line-of-sight signal breaks through the sidelobes of the antenna and corrupts the desired signal, which is reflected off of the environment. a simple little semi-adaptive separator is used to pull out the direct path (known as direct path breakthrough) from the reflected. a perceptron approach would make this insanely simple to do.
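roughly the flavor of what a perceptron buys you on a linearly separable problem, as a minimal sketch on synthetic data. the two gaussian clusters are just stand-ins for "direct path" vs. "reflected" samples; none of the numbers come from the actual radar problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic, linearly separable data: two 2-D feature clusters
# (purely illustrative stand-ins for "direct path" vs. "reflected" samples)
X = np.vstack([rng.normal([2, 2], 0.5, (100, 2)),
               rng.normal([-2, -2], 0.5, (100, 2))])
y = np.hstack([np.ones(100), -np.ones(100)])

# classic perceptron learning rule
w = np.zeros(2)
b = 0.0
for _ in range(20):                      # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:       # misclassified: nudge the boundary
            w += yi * xi
            b += yi

predictions = np.sign(X @ w + b)
print("training accuracy:", (predictions == y).mean())   # 1.0 for separable data
```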

 

Yeah, I know, multilayer perceptrons...

probably the more appropriate version of what i do.

 

there is an oddity in the way things are referenced. i had never heard the term perceptron (or at least, not that i recall... dropped neural networks before i got my masters even) till reading cichocki, and now it seems the hard-core ICA folks tend to use that terminology liberally. of course, component analysis ideas are not popular in the US for whatever reason. most of this type of work is done in helsinki and japan. there's a terminology difference between signal processing engineers and neural network folks, too.

 

taks

comrade taks... just because.


I see. Gotta think myself into the whole radar thing a bit first...

there is an oddity in the way things are referenced. i had never heard the term perceptron (or at least, not that i recall... dropped neural networks before i got my masters even) till reading cichocki, and now it seems the hard-core ICA folks tend to use that terminology liberally. of course, component analysis ideas are not popular in the US for whatever reason. most of this type of work is done in helsinki and japan. there's a terminology difference between signal processing engineers and neural network folks, too
That could prove a big hindrance, as both could benefit from each other greatly, as your work seems to show. There should be more people with at least some knowledge of both and the ability to put it into words understood and used by both. Hm, I sense a market gap here for people with an education similar to what I am hopefully going to possess in some years' time... It would of course take some more elaborate and accurate knowledge of both spoken and written English on such intermediary people's (my?) part first ;)

Edited by samm

Citizen of a country with a racist, hypocritical majority


Well, back to the topic at hand: How would a device like the memristor allow AI to work more efficiently / enable us to design more clever hardware usable for AI purposes? It doesn't become clear from the linked IEEE article. The only application mentioned is non-volatile memory, and it is said that it can be coupled with transistors to make them more efficient. But: How so? Is it, for example, necessary / useful for the "switches" to remember their previous position after they're no longer in an electrical circuit? Does a neural network, in the course of adapting, see any benefit from the previous weights being saved unchanged? Just random guesses, don't have much capacity left for today ;) But enlighten me, I'm interested :p

several ways.

 

first, the memristors store their current "state" indefinitely. you can set one to a given value, walk away for several years, come back and query it and it will have the same value. this is because the material itself actually changes. next, they're fast. couple that with their non-volatile nature, and you have extremely high-speed FLASH. in fact, you can use these instead of regular RAM. all you have to do to read them is apply a small voltage and measure the current that comes out. no switching involved. this also means very low power. they're very small (and work better as they get smaller) and can easily be stacked for multi-dimensional circuits. current device technology doesn't really permit that.
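for the curious, here's a hedged sketch of how that write/read behavior is usually pictured: the simplified linear dopant-drift model, with made-up device numbers, not an actual device model.

```python
# simplified linear-dopant-drift memristor model (illustrative numbers only)
R_ON, R_OFF = 100.0, 16_000.0      # fully-doped / undoped resistance, ohms
MU, D = 1e-14, 10e-9               # dopant mobility (m^2/(V*s)) and film thickness (m)

class Memristor:
    def __init__(self, x=0.5):
        self.x = x                 # normalized position of the doped/undoped boundary (0..1)

    def resistance(self):
        return R_ON * self.x + R_OFF * (1.0 - self.x)

    def apply(self, voltage, dt):
        """Pass a voltage for dt seconds; the resulting current drags the
        boundary, i.e. the device's state actually changes."""
        i = voltage / self.resistance()
        self.x += MU * R_ON / D**2 * i * dt
        self.x = min(max(self.x, 0.0), 1.0)
        return i

m = Memristor()
for _ in range(1000):              # "write": a train of positive pulses lowers the resistance
    m.apply(1.0, 1e-3)
print("after write:", round(m.resistance()), "ohms")

# "read": a tiny, brief voltage barely moves the state, but the current tells you
# the stored resistance, and with no power applied the state just sits there
i_read = m.apply(0.01, 1e-3)
print("read current:", i_read, "A, inferred:", round(0.01 / i_read), "ohms")
```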

 

couple these points with the two-dimensional transistors that krezack pointed out and you've got the makings for extremely dense, low power, and very high speed processors.

 

i see them being implemented in custom application circuits. they mention FPGAs, for example, which are generally used for massive parallel signal processing tasks (one algorithm i just wrote, 3 lines of code inside of two loops, would take 100 GFLOPS and the data is only 3.28 MS/s). i spend about half of my time converting my algorithms into useful block diagrams for FPGA designers so they can write code to implement my design. i have to play all sorts of tricks to balance gains by bit shifting, windowing, etc., to make sure my desired signal passes through the "circuit" they develop without losing too much information along the way. not to mention the fact that FPGAs don't deal with floating point arithmetic well (takes lots of resources) and a divide is ridiculous to implement (and slow). these little circuits would allow a sort of "analog" FPGA implementation that currently does not exist, at least not anything that i can use. they would also allow me to implement my own design without having to pay for an expensive VHDL/Verilog designer (which i can do for software stuff, but not FPGA stuff).
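to give a flavor of what those "tricks" look like, here's a toy fixed-point example. nothing here is from a real FPGA design; it's just the bookkeeping idea of tracking the binary point and rebalancing gains with shifts.

```python
# flavour of fixed-point bookkeeping for an FPGA-style datapath (illustrative only)
FRAC_BITS = 15                       # Q1.15: 1 sign bit, 15 fractional bits

def to_fixed(x):
    return int(round(x * (1 << FRAC_BITS)))

def to_float(x, frac_bits=FRAC_BITS):
    return x / (1 << frac_bits)

a = to_fixed(0.6)                    # signal sample
b = to_fixed(-0.35)                  # filter coefficient

# an 18x18-style multiply produces twice the fractional bits (Q1.15 * Q1.15 -> Q2.30)...
product = a * b

# ...so you shift right to get back to the working format, losing LSBs.
# balancing these shifts against the signal's gain through each stage is
# exactly the bookkeeping that has to be done by hand for the FPGA designers.
result = product >> FRAC_BITS        # back to Q1.15 (truncation, not rounding)

print(to_float(result), "vs exact", 0.6 * -0.35)
```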

 

taks

 

that 100 GFLOP process can be executed in real-time for probably 5 W of power in an FPGA. the FPGA we use is a xilinx virtex-4 SX55 (www.xilinx.com).

Edited by taks

comrade taks... just because.


Thanks! Now, several questions arise from that:

Coupling them with transistors for processors would mostly mean the cache would consist of memristors instead of SRAM for example? Or they'd be used as, well, "state-savers" for the transistors when current stops?

Fife watts for 100gflops? Wouldn't that mean the ever efficient graphics cards currently running at or about 1tflops are in fact very inefficient, as they use far more than 50w in doing so? [edit]Ok, maybe the gpus themselves do not use thaat much more, as there are several other parts involved in a graphics card.

all you have to do to read them is apply a small voltage and measure the current that comes out.
But if you apply a voltage, the information is lost, because the memristor's resistance would adapt, or wouldn't it? So each time you read it, you'd have to write it again. Ok, not a problem if they're fast and not limited by the number of reads/writes they 'suffer'.

Edited by samm

Citizen of a country with a racist, hypocritical majority


Coupling them with transistors for processors would mostly mean the cache would consist of memristors instead of SRAM for example? Or they'd be used as, well, "state-savers" for the transistors when current stops?

my opinion would be BOTH! hadn't thought about the transistor state saving idea. inter-meshed cache with the processing logic would be killer by itself, wouldn't you think?

 

Fife watts for 100gflops? Wouldn't that mean the ever efficient graphics cards currently running at or about 1tflops are in fact very inefficient, as they use far more than 50w in doing so? [edit]Ok, maybe the gpus themselves do not use thaat much more, as there are several other parts involved in a graphics card.

yes, they are much more inefficient. however, they are also much more general purpose. FPGAs are somewhat rigid in that there are blocks of processing resources, such as a lookup table, multiplier (18 bits by 18 bits), accumulator and all associated logic, aligned in an array. you can connect these things to do specific functions, lots of them on lots of data all at once rather efficiently. however, they are not easily changed to do other things, without implementing things in an inefficient manner (and you'd then lose the benefit). the chip i'm using is generally a 20 W part at most and probably capable of burning several hundred GFLOPS within that limit.

 

But if you apply a voltage, the information is lost, because the memristor's resistance would adapt, or wouldn't it? So each time you read it, you'd have to write it again. Ok, not a problem if they're fast and not limited by the amount of reads/writes they 'suffer'.

actually, that's one of the beauties of this idea. the article simply says that the read voltage is tiny and doesn't affect the state. however, my take is that the "read" voltage only needs to be smaller than the voltage used to set it, to prevent it from changing states.

 

this does bring up a really good question: how do you adjust the state up and down? i'll have to dig into it. maybe you just reset and reprogram? if it's fast enough, who cares, right? obviously there's more to the story than is apparent right now.

 

taks

Edited by taks

comrade taks... just because.


Damn, I can't even spell 'five' :lol:

Anyway, if you do find out more on the changes to a "lower" state, or in fact, if you find out anything more, please share :lol:

inter-meshed cache with the processing logic would be killer by itself, wouldn't you think?
Hm, yes. Yes I do!

Citizen of a country with a racist, hypocritical majority


Well, back to the topic at hand: How would a device like the memristor allow AI to work more efficiently / enable us to design more clever hardware usable for AI purposes? It doesn't become clear from the linked IEEE article. The only application mentioned is non-volatile memory, and it is said that it can be coupled with transistors to make them more efficient. But: How so? Is it, for example, necessary / useful for the "switches" to remember their previous position after they're no longer in an electrical circuit? Does a neural network, in the course of adapting, see any benefit from the previous weights being saved unchanged? Just random guesses, don't have much capacity left for today :aiee: But enlighten me, I'm interested :)

 

Simple: speed.

 

Developing AI is actually about developing two separate technologies: software and hardware. Now, as you know, we've got the software already - we've had some version of it for decades (metaheuristic search) and we can't go much further here without the hardware to back it up.

 

Which is where memristors come in. While you don't need memristors for AI, they look like they'll be a powerful enabling technology because they offer the capability to increase speed by orders of magnitude at a time when conventional transistors have all but hit the 'brick wall'. Taks listed all the reasons. My materials science is rusty, but I believe the most important of them is that they increase speed by taking up far less area than transistors (in all dimensions).

 

"Williams adds that memristors could be used to speed up microprocessors by synchronizing circuits that tend to drift in frequency relative to one another or by doing the work of many transistors at once."

http://www.sciam.com/article.cfm?id=missin...-of-electronics

 

Besides that, though, memristors appear to exhibit quirky 'learning'/adaptive abilities which resemble those seen in biological life: http://lanl.arxiv.org/abs/0810.4179v2

 

I'd say that's just the tip of the iceberg.

 

Somewhat loosely on topic: genetic algorithms are about a programme breeding competing solutions, right? Well what about a programme that breeds competing programmes to solve a problem? Or a programme that does that, but on ITSELF (i.e. to evolve itself to be better at evolving other solutions - this is what evolution itself does.) Or what about genetic algorithms that don't evolve programmes, but hardware?

 

http://en.wikipedia.org/wiki/Genetic_programming

http://en.wikipedia.org/wiki/Evolvable_hardware
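The "evolve the evolver" idea already shows up in a small way in evolution strategies, where each candidate carries its own mutation step size and that step size itself gets mutated and selected along with the solution. A toy sketch (the problem, population size, and step-size factors are all made up for illustration):

```python
import random

def loss(x):
    """Toy problem: find x minimising (x - 3)^2."""
    return (x - 3.0) ** 2

# each individual is (solution, its own mutation step size): the "how to evolve"
# knob is part of what gets evolved
population = [(random.uniform(-10, 10), 1.0) for _ in range(20)]

for _ in range(100):
    offspring = []
    for x, sigma in population:
        new_sigma = sigma * random.choice([0.8, 1.25])    # mutate the mutation rate itself
        new_x = x + random.gauss(0.0, new_sigma)          # then mutate the solution with it
        offspring.append((new_x, new_sigma))
    # select the best half of parents + offspring
    population = sorted(population + offspring, key=lambda ind: loss(ind[0]))[:20]

best_x, best_sigma = population[0]
print(best_x, best_sigma)   # x approaches 3, and sigma shrinks as the search homes in
```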

 

AI by itself is fairly tame. It can learn, but it can't change its own coding. It's no more than a human mind in a computer instead of a body. Humans can't (easily) change how their code (both DNA and neural net) works, and nor can your average AI. So the fears about AIs getting out of control and taking over the world are naive. But not impossible. If somebody coded not just an AI, but an AI that could change its own code (which seems possible given the above, but an order of magnitude harder than creating an AI again), that would be something else.

