Probably not a big thing but I looked up the 7 Hermetic Principles and Correlation is not one of them. Tomross, which one of the 7 Hermetic Principles were you referring to?
I tend to find myself thinking about the implications of AI, not from the perspective of the survival of my own personal consciousness, but more from the evolution of human consciousness more generally once attached to silicon-based intelligence and, ultimately, consciousness.
Once I connected my mind to a computer, it would presumably change/evolve/grow/expand/?? so quickly and radically that I cannot imagine it even relating as anything resembling "me" to the world I am currently conscious of and somewhat intelligent about. It would no longer be me in any sense that a 2017 human would recognize.
The quantum mind, furthermore, would have no reason to keep "my consciousness" separate from the consciousness of all the other humans also connected to the cosmic brain. My consciousness would be swept into a stream with everyone else's. Any intelligence/consciousness great enough to "encounter" what we have begun creating would experience it more as a single consciousness/intelligence than as a collection of individual consciousnesses/intelligences.
In short, what's truly scary about AI and biotechnology is that what they seem capable of producing is completely beyond our ability to comprehend, and therefore beyond our ability to direct or plan for.
And if this is even remotely so, how in the heck can we "plan" for it? Like it? Agree to it?? Prepare for it??
It's so truly out-of-this-world that it's beyond us.
I do not understand why/how existence exists at all, and have actually only recently been able to understand the deeper meaning of that question. Now that I've become conscious of the question, aside from being embarrassed, at 72, that it's never really occurred to me before, I realize something: while I have been wondering for years what will happen when humankind really begins attempting to integrate all that we have learned about existence these past 50 years into our common culture, even the implications of AI for the rapid evolution of a cosmic brain of some sort still do not "escape" the constraints of existing within the "confines" of what we know as existence.
So good luck to it in exploring the limits of existence. And perhaps, if we humans are left alone on this planet, it will get back to us with a report on what we are not easily able to comprehend.
Ok. OK. I'm getting lost here. But in the spirit of RawkSKel, these are some of the thoughts rattling around in my head relative to what's happening with AI and biotechnology.
OK, one more thing.
On the one hand, to the extent we can create some sort of super intelligence/consciousness over the next 50 years, it will go off on its own into the universe to explore, learn and ..... and whatever it will do.
The fun and games we are talking about, connecting our brains to a computer to enhance our intelligence and maybe our consciousness, are a completely different project if each connected human retains their personal human consciousness and essentially dominates whatever intelligence/consciousness they're connected to - at least for another 50-100 years. The former is the true "Accomplishment" of AI in the shorter term. The latter is a form of virtual reality game we'll be playing with ourselves. Likely much less significant in the bigger picture.
Truth be told, in order to get any handle on this from the human perspective, we need to speak with intelligent life that has evolved elsewhere in the universe and ask them what they think about our AI and AI in general.
Could that be the answer to the limits we should set for AI? That we should not develop it until we have done the due diligence of consulting with a couple of other intelligences in our universe (hopefully within our own galaxy) and incorporating their experience into directing and limiting our work here?