Related to: Language from Game Theory
Hey all, I apologize if this isn’t the correct place to post this.
I’m writing because I was independently working on a project extremely similar to the one in the linked OpenAI article, Learning to Communicate. Once the article came out, I was happy to abandon my project, since I assumed the people at OpenAI would have it much better in hand than I would.
On reflection though, I realized that my approach was slightly different from what was described in Learning to Communicate, and I thought I’d share my thinking on the off-chance that the people pursuing that line of research find it helpful.
For the most part, my approach was extremely similar to what was written in Learning to Communicate (in fact, it was really cool to see my own thinking in someone else’s work, and to see the many things they thought of that I hadn’t). The major difference, though, was how our respective “games” were set up.
In essence, I based mine on game theory, or at least on economics-based exchange. This video clip is a good primer: https://www.youtube.com/watch?v=OLHh9E5ilZ4&feature=youtu.be&t=5m2s.
My world, or at least my initial world, was going to be made up of square cells. Agents, which occupy individual cells, can move around and exchange resources with one another, or can extract resources of different colors from stationary farms of the corresponding colors, which also occupy individual cells. Resources are periodically deducted from agents in exchange for points, and a more diverse inventory earns more points. Agents can also communicate with one another when within a certain range.
There are a lot more details, but they don’t really matter. The core idea behind the game is to set the rules so that there is an economic benefit to cooperating, but also a benefit to sharing information. In other words, it’s a very simple version of the conditions under which early humans would have developed language in the first place.
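To make the setup concrete, here is a minimal sketch of the mechanics described above. Everything specific in it, the class names, the Manhattan-distance rules, the three colors, and the squared diversity bonus, is my own illustrative choice rather than anything fixed in the design:

```python
from collections import Counter

GRID_SIZE = 10
RESOURCE_COLORS = ["red", "green", "blue"]  # illustrative palette
COMM_RANGE = 3  # cells within which agents can exchange messages

class Farm:
    """A stationary cell that yields a single color of resource."""
    def __init__(self, x, y, color):
        self.x, self.y, self.color = x, y, color

class Agent:
    """Occupies one cell; can move, harvest, trade, and communicate."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.inventory = Counter()
        self.score = 0

    def move(self, dx, dy):
        self.x = max(0, min(GRID_SIZE - 1, self.x + dx))
        self.y = max(0, min(GRID_SIZE - 1, self.y + dy))

    def harvest(self, farm):
        # Extraction only works from the farm's own cell.
        if (self.x, self.y) == (farm.x, farm.y):
            self.inventory[farm.color] += 1

    def give(self, other, color, amount=1):
        # Trading requires adjacency (an assumption on my part).
        adjacent = abs(self.x - other.x) + abs(self.y - other.y) <= 1
        if adjacent and self.inventory[color] >= amount:
            self.inventory[color] -= amount
            other.inventory[color] += amount

    def can_hear(self, other):
        return abs(self.x - other.x) + abs(self.y - other.y) <= COMM_RANGE

def settle(agent):
    """Periodic scoring step: consume one unit of each held color.

    The payout grows superlinearly with the number of distinct colors
    consumed, so a varied inventory beats a large pile of one color.
    """
    distinct = [c for c in RESOURCE_COLORS if agent.inventory[c] > 0]
    for color in distinct:
        agent.inventory[color] -= 1
    agent.score += len(distinct) ** 2  # 3 colors -> 9 points, 1 color -> 1

if __name__ == "__main__":
    a, b = Agent(0, 0), Agent(0, 1)
    a.inventory.update({"red": 2})
    b.inventory.update({"blue": 2})
    a.give(b, "red")   # a mutually beneficial trade:
    b.give(a, "blue")  # both inventories become more diverse
    a.harvest(Farm(0, 0, "green"))  # a is standing on this farm's cell
    settle(a)
    settle(b)
    print(a.score, b.score)  # 9 and 4; with no trade, each scores just 1
```

The squared bonus in `settle` is just one way to make diversity pay; any superlinear payout gives agents a reason to trade rather than hoard a single color.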
Here are the potential benefits of this approach:
- You don’t have to be too creative about what forms of complexity to introduce; just model real life. For example, one of the first additions I would consider is the ability of agents to commit violence against each other. Some amount of cooperation would still be optimal, but violence might also be optimal in certain situations. I could tweak the parameters of the game to achieve this balance, and thus make an entire range of vocabulary sensible for agents to develop (see the payoff sketch after this list).
- The language might evolve in such a way that its vocabulary more closely resembles our own, since the “game” is modeled after our own world.
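To illustrate the kind of parameter tweaking I mean in the first bullet, here is a hedged sketch of a tunable payoff table for a one-shot meeting between two agents. The action names and all the numbers are purely illustrative knobs, not values I had settled on:

```python
def interaction_payoffs(attack_gain, retaliation_cost, trade_gain):
    """Payoffs for a one-shot meeting; entries are (my points, their points).

    attack_gain:      what an attacker loots from a peaceful trader
    retaliation_cost: what mutual violence costs each side
    trade_gain:       what each side gets from a peaceful exchange
    """
    return {
        ("trade", "trade"):   (trade_gain, trade_gain),
        ("trade", "attack"):  (-attack_gain, attack_gain),
        ("attack", "trade"):  (attack_gain, -attack_gain),
        ("attack", "attack"): (-retaliation_cost, -retaliation_cost),
    }

# With loot small relative to the gains from trade, trading dominates:
peaceful = interaction_payoffs(attack_gain=1, retaliation_cost=3, trade_gain=5)

# Raise the loot value and attacking becomes the dominant move in a
# one-shot meeting, even though mutual trade pays both sides more.
# That prisoner's-dilemma-style tension makes threats, reputations,
# and promises worth talking about.
tense = interaction_payoffs(attack_gain=6, retaliation_cost=3, trade_gain=5)
```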
I think that’s it. As I said, I thought it was a pretty long shot that anyone would find this useful, but I figured it was worth a go. Feel free to ask me questions or to point out where my thinking is wrong. I’m by no means knowledgeable about AI; I went into the project thinking it would be a good learning experience.