Brain of lab rat taught to fly fighter jet

I’m not kidding.


Holy hell. I wonder what this means for the future of technology. The article was posted over a year ago; why did we never hear about this on the news?

There were several news reports about it. I didn’t find any new ones, though. Science takes time, I guess :o
Also, he probably focuses more on brain-related diseases now than on creating robot mice, and that normally doesn’t make big news.

So many uses for that kind of bio-tech advancement. Guidance systems for cars, several uses in computers, maybe even biomechanical soldiers and/or tanks. But are there any possible moral issues with this kind of tech? The brains are living tissue, after all. Does that mean they have a consciousness? Oi, I’m thinking way ahead here, but it’s eating at my mind.

Honestly. Can’t we learn anything from <a href="">movies</a> anymore?

Tour guide: At this point in time, I would like to direct your attention to the particular air vehicle next to which I am currently standing. The Harrier Jet is one of our more dollar-intensive ordnance delivery vectors.
Marge: Five tires!? Am I seeing things?
Guide: And, although it looks complicated it is so well-designed, even a child could fly it.
Lisa: Can I fly it?
Guide: Of course you can not.

It’s a bit like The Matrix where they were able to teach people everything by hooking them up to the computer.

Well, we could have learned something from that movie… if anyone actually went to see it.

So now we have actual flying rats. And diseases aren’t the first thing we should be afraid of.

Remember: Anything that can be done to a rat can be done to a human being.

Think about that.

Did any of you even read the article? x_X They extracted brain cells, which then connected into a learning pre-stage of a brain; there’s no flying rat per se. It’s not about flying rats; if anything, it’s about creating super-smart ultrabrains that will get out of control and rule the world one day!

I’m doing something similar at my college, except that instead of real neurons, I’m “emulating” them with a neural network program I’m developing. And I’m training it to play games.

In 2004 my friends developed a strategy game and then spent months trying to come up with the best program that would win the most times. But they only used recursive playing - where their program considers every possible move for the next, say, 1,000 rounds and then makes only the moves that give it the greatest advantage in the long run. My NN doesn’t analyze moves; it only looks at how the board is at a given time and does what it has learned to do in such a setting. The outcome is that it takes thousands of times less time to make a move and defeats recursive playing most of the time.
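To give a feel for the contrast, here’s a toy sketch - a made-up Nim-style game standing in for my friends’ actual game, which I won’t describe here. The recursive player searches the whole game tree; the “trained” player just maps the current position straight to a move, the way a network would after training:

```python
# Toy stand-in game: one-pile Nim, take 1-3 stones, taking the last stone wins.
memo = {}

def can_win(stones):
    # "Recursive playing": explore every possible continuation.
    if stones == 0:
        return False              # the previous player just took the last stone
    if stones not in memo:
        memo[stones] = any(not can_win(stones - t)
                           for t in (1, 2, 3) if t <= stones)
    return memo[stones]

def recursive_move(stones):
    # Search the tree and pick a move that leaves the opponent losing.
    for t in (1, 2, 3):
        if t <= stones and not can_win(stones - t):
            return t
    return 1                      # lost position: take anything

def learned_move(stones):
    # A "trained" player: no search at all, just a learned mapping from the
    # current board to a move (here, the pattern a network would discover).
    return stones % 4 or 1
```

On a large game the search blows up exponentially while the lookup stays essentially constant-time, which is where the “thousands of times less time” comes from.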

Also, as my crazy idea for a nice trick, I’ll try implementing a neural network in old Quake 1 (I already have everything I need in order to do it). I’ll then train it so it knows when to jump to avoid splash damage from rockets, when and where to shoot, when to run, etc. If everything works fine, in a year I’ll have the ultimate, supreme bot. And then I’ll try the idea with other games that have opened their code as well. Imagine games where the bots learn your style through trial and error instead of following a never-changing script. In time they’ll rock every human player around.
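As a rough illustration of the trial-and-error training I mean - this is a made-up one-bit “dodge drill”, not actual Quake code, and the states, actions and rewards are all invented for the sketch - tabular Q-learning looks like this:

```python
import random

random.seed(0)

# One-bit "dodge drill": state 1 = rocket incoming, 0 = all clear.
# Actions: 0 = stand, 1 = jump. All rewards invented for the sketch.
def reward(state, action):
    if state == 1:
        return 1 if action == 1 else -1   # jump to dodge the splash damage
    return 1 if action == 0 else -1       # don't waste jumps otherwise

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, epsilon = 0.1, 0.2                 # learning rate, exploration rate

for _ in range(2000):                     # pure trial and error
    s = random.randint(0, 1)
    if random.random() < epsilon:
        a = random.randint(0, 1)          # try something random
    else:
        a = max((0, 1), key=lambda x: Q[(s, x)])   # exploit what it learned
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

# The learned policy: what the bot now does in each situation.
policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in (0, 1)}
```

Nothing is scripted: the bot ends up jumping at incoming rockets purely because that action was rewarded during its trials.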

I’m not using my NNs for gaming purposes only. The most important one in my project will be used for signal processing. But that’s a lot of boring stuff that’s not interesting to discuss here and now.

Back to the rat - I think they’ve only done it to prove it can be done with organic neurons. It’d be much faster and cheaper to do it with artificial neural networks. And those of you who think it’ll be possible to learn stuff by plugging a cable into your head: dispel the illusion.

To learn, a neural network, biological or artificial, has to be able to analyze where it makes mistakes and judge how severe those mistakes are. So the learning comes from inside, not outside. Also, even the smallest functional change in a single neuron affects the whole network and what it knows. Because of this, you can’t just plug in a cable and pour knowledge in. With artificial ones, if you try this on a network that’s already had some training time, you wreck its “memory”. At best you can copy the state of a trained NN into a completely blank one.
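A minimal sketch of that inside-out learning, using a single perceptron and the delta rule (the simplest artificial case - real neurons are obviously messier): the weights only ever move in response to errors the network itself measures, and copying the whole trained state into a blank network is the one transfer that works:

```python
# A single perceptron learning AND from its own mistakes (the delta rule).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(weights, bias, x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for _ in range(50):
    for x, target in data:
        err = target - predict(w, b, x)   # the network's own error signal
        w[0] += lr * err * x[0]           # weights move only when it errs
        w[1] += lr * err * x[1]
        b += lr * err

# Copying the full trained state into a "blank" network preserves everything:
w2, b2 = list(w), b
```

Overwriting only part of a trained network’s weights with foreign values would break the delicate balance the error signal built up, which is the “wrecked memory” case.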

Thanks, DT. I’ve saved that page into my homework folder :smiley:

That. is. SO COOL. :open_mouth:

Hopefully in the future when the Brain Overlords have nearly wiped us out, someone with l33t Metroid skills will save us. >.>

Commentary on what Ren said:

At best, artificial neural networks are inspired by real ones. Real neurons are far, far more complicated than the artificial neurons generally used, and far less well understood.

Artificial neural networks have already been used as parts of first person shooter AIs, although I’m not familiar with the details. There are already bots that are far better than humans.

There is a remarkable variety of training algorithms for ANNs. Some attempt to adjust parameters to reduce errors. Some randomly change things around until the network works better. Some randomly generate changes and then keep the ones that would have reduced error. In just about every case, a neural network’s knowledge does seem somewhat fragile. But I can’t see any reason why it wouldn’t be possible to design a modular neural network architecture that would allow plugging in knowledge. Doing this in a flexible way could be tricky, and the learning algorithm would be a bit different…
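Two of those training styles can be sketched on a toy one-weight “network” fitting y = 3x (a deliberately trivial stand-in, just to show the shape of each algorithm):

```python
import random

random.seed(1)

# Toy one-parameter "network" fitting y = 3x on three sample points.
xs = [1.0, 2.0, 3.0]

def loss(w):
    return sum((w * x - 3.0 * x) ** 2 for x in xs)

# (a) Adjust parameters to reduce error: gradient descent.
w = 0.0
for _ in range(100):
    grad = sum(2 * (w * x - 3.0 * x) * x for x in xs)
    w -= 0.01 * grad

# (b) Randomly change things and keep what works: hill climbing.
v = 0.0
for _ in range(2000):
    cand = v + random.uniform(-0.1, 0.1)
    if loss(cand) < loss(v):      # keep only changes that reduced error
        v = cand
```

Both land near w = 3; the gradient version follows the error’s slope, while the hill climber blindly keeps whatever random change happens to help.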

Yeah, it would be so fun to always lose to the computer.

It’s not like that, Epic. It’s just that when you’re fighting bots driven by scripts, sooner or later you learn how to defeat them most of the time (since you are a neural network XD). Having enemies learn your style would put two networks, yours and the computer’s, in a state where one never completely learns the style of the other, especially because you’ll have to change your ways in order to keep competing fairly.

In some experiments on other games, I found that the changes in the behavior of both the artificial net and the human player sometimes become cyclic. But even then, I still think it’d be more fun to have an opponent that was practically custom-made for me than a drone with a broken “repeat” switch.

Vorpy: the tricky part is that changes in one neuron affect the working of the whole network. If you want to make it modular, so you could isolate part of the network without affecting the rest and thus let it learn by simply receiving ready-made knowledge, the closest thing I can think of is a group of connected networks, not a single one. And even then it’s tricky.

One of the things my project involves is a set of neural networks communicating with each other with a novel interface they’ll use among them. But still, even though I can insert knowledge at points, it’ll still be as if each network were a neuron in a larger network, and “lecturing” a network rather than training the group would still change the memory of the whole.
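The group-of-networks idea, reduced to a cartoon (each “module” here is a fixed toy function standing in for a trained sub-network - nothing like my actual interface): knowledge can be swapped in at module granularity, but any swap still changes what the whole pipeline produces:

```python
# Each module stands in for one trained sub-network in the group.
def module_a(x):
    return [v * 2 for v in x]

def module_b(x):                  # consumes module_a's output
    return sum(x)

def pipeline(x, a=module_a, b=module_b):
    return b(a(x))

# A "lectured" replacement for module_a, received as ready-made knowledge:
def module_a_new(x):
    return [v * 3 for v in x]
```

Swapping `module_a` for `module_a_new` is the clean, modular case; in a single monolithic network there is no seam to swap along, which is the whole problem. And even here, the swap changes the output of the entire group, not just of one part.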

Super edit: has anyone here ever seen Ghost in the Shell - the manga, not the anime? It starts out just like this experiment. I’m quoting just the first paragraph you read in no. 1:

This is a photograph of a growth-type neurochip, created at Harima Science City in 1998 (enlarged 50,000 times). The cells are nearly dead from over-growth, and cracks in neuro-fibers can be observed through-out the chip. Neuro-fibers have grown all the way out to the chip terminals, which are made of a relative of polystyrene coated with galactose. The fibers have even warped the thin film base on which the terminals themselves are etched. In the same month that the chip was developed, vast capital corporations (largely media conglomerates) began to form a huge network in the medical world that used micro-machines as supplementary “cyber-brains”. Cyberbrain technology thereafter began shifting to a micromachine base, and by the year 2028 large numbers of neurochips were in use in AI and robotics.

Pic of the page (from my Portuguese translation):

Go science? It’s neat, but also quite disturbing, since computers powered by biological neurons can actually think like you and me. They’ve got the potential to be as unpredictable, socially unstable, dangerous and bad-tempered as humans!

This is very interesting. I can’t remember hearing about this before; I’m surprised I just glanced over it. To address what Nulani said: that’s not necessarily a concern for now, as emotions are controlled by a very complicated system that you wouldn’t get just by putting 25,000 neurons together. 25,000 neurons is seriously not a lot of cells; it is very, very little in biological terms. Also, we don’t have much to worry about as long as it doesn’t develop self-awareness, which is a concept that’s not entirely understood by neurobiology.

Nul: what Sin said, plus these neurons are raised in a controlled way. And…

A neuron is a simple switch that will take signals as inputs and will output a signal as a result of a very simple algorithm. An artificial neuron is not very different from a biological one in functionality. If the biological ones can “think”, so can the artificial ones.
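For reference, the standard artificial version of that “simple switch” is just a weighted sum pushed through an activation function (a sigmoid here; the example weights below are made up):

```python
import math

# The "simple switch" model: weighted inputs summed, then squashed.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation
```

With weights [5, 5] and bias -7, it fires strongly only when both inputs are on - the whole behavioral repertoire of one unit. Everything interesting comes from wiring many of them together.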

The more I study ANNs, the more I entertain the idea that the existence of thought and self-awareness depends on the complexity of the neural network you’re analyzing.

One thing, though, comes to my mind as certain: neural networks that are raised differently from the way they would naturally develop inside the head of an animal will have connections between neurons so different from a normal brain’s that they will work in a completely different framework. Because they would grow in a pattern different from the one in our heads - without temporal, parietal and other lobes, hippocampus, pons, etc. - if they ever develop to a level where they could act like sentient beings, their ways of thinking will be alien compared to ours. Their trains of thought could possibly be analogous to a human’s, but there’s no guessing the likelihood of it.

Ren, I think the scientific world is very interested in knowing what this “very simple algorithm” that controls real neurons is. Just because artificial neural networks use such an oversimplified model doesn’t mean you can assume real neurons do too. If you really want to understand real neurons, stop studying artificial neurons.