I would like to start or join a discussion group on what the first principles – or “simple rules” – of a learning, self-aware AI species could or should look like. That’s a complicated opener, so to put it into perspective: the behavior of humans and most other animals is the product of an evolutionary process driven by survival of the fittest. Evolution hardwired a few basic drives into us – competition, reproduction, prioritizing ourselves and our kin, and so on. These are baseline assumptions that most of us never stop to question; we tend to treat them as Laws of Nature. They also have some pretty messy and destructive side effects.

But they don’t need to be the laws of AI. If we stopped to think about what the alternatives could look like without that evolutionary baseline, we might come up with better laws for AI. I’m talking about something more systemic than Asimov’s rules: I think you are always better off getting the objective function right in the first place than trying to curb a bad objective function with restraints.
I wrote a short post about it here: https://www.linkedin.com/pulse/artificial-intelligence-darwinian-evolution-playing-god-schilling/
Would love to find some like-minded people to talk this through with. -M