Two artificial intelligences talk to each other

Performing a new task from spoken or written instructions alone, and then describing it to others so that they can reproduce it, is a cornerstone of human communication that artificial intelligence (AI) has yet to master. A team from the University of Geneva (UNIGE) has built an artificial neural network with this capability. The AI learned a set of basic tasks and then described them in language to a “sister” AI, which carried them out in turn. The results, published in Nature Neuroscience, hold particular promise for robotics.

One of humanity’s remarkable abilities is learning how to perform a new task simply from spoken or written instructions. What’s more, once we have mastered a task, we can explain it to someone else so they can do it too. This dual capacity distinguishes us from other species, which need many trials, guided by positive or negative reinforcement signals, to learn a new task, and which cannot then communicate it to their fellow creatures.

Natural language processing is a branch of artificial intelligence (AI) that seeks to build machines capable of understanding and responding to spoken or written language. It relies on artificial neural networks, which are inspired by biological neurons and the way they exchange electrical signals in the brain. However, the neural computations that would support the cognitive feat described above are still poorly understood.

Today’s conversational AI agents can use linguistic information to produce text or images. But as far as we know, they cannot yet translate a spoken or written instruction into a sensorimotor action, let alone explain that action to another AI so it can reproduce it.

A good brain

The UNIGE researchers succeeded in developing an artificial neuronal model with this dual capacity, albeit with prior training. “We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We ‘connected’ it to a second, simpler network of a few thousand neurons,” says Reidar Riveland, a PhD student in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine and first author of the study.
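To make the setup concrete, here is a minimal sketch, not the study’s actual code, of how a pretrained sentence encoder (loaded here through the sentence-transformers library as a stand-in for S-Bert) could feed a small recurrent “sensorimotor” network built in PyTorch. The model name, layer sizes, and input/output dimensions are illustrative assumptions.

```python
# Hedged sketch: a pretrained sentence encoder feeding a small recurrent
# sensorimotor network. Sizes and the encoder name are illustrative, not
# taken from the study.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

class InstructedSensorimotorNet(nn.Module):
    def __init__(self, embed_dim=384, sensory_dim=65, hidden_dim=256, motor_dim=33):
        super().__init__()
        # Small recurrent network that receives the sensory input plus the
        # fixed-length embedding of the written instruction at every time step.
        self.rnn = nn.GRU(sensory_dim + embed_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, motor_dim)  # e.g. "point left/right" units

    def forward(self, sensory_seq, instruction_embedding):
        # sensory_seq: (batch, time, sensory_dim); embedding: (batch, embed_dim)
        T = sensory_seq.size(1)
        instr = instruction_embedding.unsqueeze(1).expand(-1, T, -1)
        h, _ = self.rnn(torch.cat([sensory_seq, instr], dim=-1))
        return self.readout(h)

# A pretrained language model stands in for the language-comprehension stage.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in for S-Bert
instruction = "Point in the direction of the brighter of the two stimuli."
emb = torch.tensor(encoder.encode([instruction]))   # (1, 384)
net = InstructedSensorimotorNet(embed_dim=emb.size(-1))
sensory = torch.randn(1, 100, 65)                   # dummy trial input
motor_output = net(sensory, emb)                    # (1, 100, 33)
```

The key design idea, broadcasting one fixed instruction embedding alongside the sensory stream, is one simple way a language network can steer a much smaller action network.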

In the first stage of the experiment, the neuroscientists trained this network to simulate Wernicke’s area, the part of the brain that perceives and interprets language. In the second stage, the network was trained to reproduce Broca’s area, which, under the influence of Wernicke’s area, produces and articulates words. The whole process was carried out on conventional laptop computers. Written instructions in English were then transmitted to the AI.

For example: pointing to the location, left or right, where a stimulus is perceived; responding in the opposite direction of a stimulus; or, in a more complex case, indicating the brighter of two visual stimuli with a slight difference in contrast. The scientists then evaluated the model’s outputs, which simulated the intention to move, or in this case to point. Once these tasks had been learned, the network was able to describe them to a second network, a copy of the first, which performed them in turn.
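As a rough illustration of the kinds of tasks described above, the following sketch generates trials for three hypothetical instructed tasks. The task names, stimulus encoding, and response convention are assumptions made for illustration, not the study’s actual protocol.

```python
# Hedged sketch of trial generation for three instruction-following tasks.
# Stimulus values and response conventions are illustrative assumptions.
import random

TASKS = {
    "go": "Point to the side where the stimulus appears.",
    "anti": "Point to the side opposite the stimulus.",
    "contrast": "Point to the brighter of the two stimuli.",
}

def make_trial(task: str):
    """Return (instruction, stimuli, correct_response) for one trial."""
    if task in ("go", "anti"):
        side = random.choice(["left", "right"])
        stimuli = {side: 1.0}
        if task == "go":
            correct = side
        else:  # respond in the direction opposite the stimulus
            correct = "right" if side == "left" else "left"
    elif task == "contrast":
        # Two stimuli with a slight contrast difference; answer is the brighter one.
        left, right = 0.5, 0.5
        brighter = random.choice(["left", "right"])
        if brighter == "left":
            left += 0.1
        else:
            right += 0.1
        stimuli = {"left": left, "right": right}
        correct = brighter
    else:
        raise ValueError(f"unknown task: {task}")
    return TASKS[task], stimuli, correct

# Example: print one trial of each task
for name in TASKS:
    print(make_trial(name))
```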

For humanoids of the future

This model opens new perspectives on how language and behaviour interact. It is particularly promising for robotics, where one of the key goals is to develop machines that can communicate with one another. “The network we have developed is very small. Nothing now stands in the way of building, on this basis, much more complex networks that could be integrated into humanoid robots capable of understanding not only us but also each other,” conclude the two researchers.
