By about the age of 4, human children understand that the beliefs of another person may diverge from reality, and that those beliefs can be used to predict the person's future behavior. Some of today's computers can label facial expressions such as "happy" or "angry" (a skill associated with theory of mind), but they have little understanding of human emotions or what motivates us. Now, computer scientists have created an artificial intelligence (AI) that can probe the "minds" of other computers and predict their actions, a first step toward fluid collaboration among machines, and between machines and people.

Neil Rabinowitz, a research scientist at DeepMind in London, and colleagues created a theory-of-mind AI called "ToMnet" and had it observe other AIs to see what it could learn about how they work.
ToMnet comprises three neural networks, each made of small computing elements and connections that learn from experience, loosely resembling the human brain (a rough sketch of how the three pieces fit together follows the list below):
- The first network learns the tendencies of other AIs based on their past actions.
- The second forms an understanding of their current “beliefs.”
- The third takes the output from the other two networks and, depending on the situation, predicts the AI’s next moves.
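To make the division of labor concrete, here is a minimal sketch in Python (PyTorch) of how three such networks could be wired together. This is not DeepMind's implementation: the class names, layer sizes, and the flat vector encodings of trajectories and states are assumptions made purely for illustration.

```python
# Hypothetical sketch of the three-network structure described above.
# Not DeepMind's code; all names and dimensions are illustrative.
import torch
import torch.nn as nn

class CharacterNet(nn.Module):
    """First network: learns an agent's tendencies from its past actions."""
    def __init__(self, traj_dim=32, embed_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(traj_dim, 64), nn.ReLU(),
                                 nn.Linear(64, embed_dim))

    def forward(self, past_trajectories):
        # past_trajectories: (batch, traj_dim) summary of earlier episodes
        return self.net(past_trajectories)

class MentalStateNet(nn.Module):
    """Second network: forms an estimate of the agent's current 'beliefs'."""
    def __init__(self, traj_dim=32, char_dim=8, embed_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(traj_dim + char_dim, 64), nn.ReLU(),
                                 nn.Linear(64, embed_dim))

    def forward(self, current_trajectory, character):
        return self.net(torch.cat([current_trajectory, character], dim=-1))

class PredictionNet(nn.Module):
    """Third network: combines both embeddings with the current situation
    to predict a distribution over the agent's next moves."""
    def __init__(self, state_dim=16, char_dim=8, mental_dim=8, n_actions=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + char_dim + mental_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions))

    def forward(self, state, character, mental_state):
        logits = self.net(torch.cat([state, character, mental_state], dim=-1))
        return torch.softmax(logits, dim=-1)  # probability of each action

# Usage: observe past behavior, estimate beliefs, then predict the next move.
char_net, mental_net, pred_net = CharacterNet(), MentalStateNet(), PredictionNet()
past = torch.randn(1, 32)      # encoding of past episodes (hypothetical)
current = torch.randn(1, 32)   # behavior so far in the current episode
state = torch.randn(1, 16)     # current world state
character = char_net(past)
belief = mental_net(current, character)
action_probs = pred_net(state, character, belief)
```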
Josh Tenenbaum, a psychologist and computer scientist at the Massachusetts Institute of Technology in Cambridge, has also worked on computational models of theory-of-mind capacities. He says ToMnet infers beliefs more efficiently than his team's system, which is based on a more abstract form of probabilistic reasoning rather than neural networks. But ToMnet's understanding is more tightly bound to the contexts in which it is trained.
https://www.science.org/content/article/computer-programs-can-learn-what-other-programs-are-thinking