Anyone working on first principles of AI consciousness and goals?


#21

hi.

i am a huge super ai enthusiast, “working” (in some sense of the word) to build up some kind of basiux (benevolent artificial superintelligence user experience).

never heard of those before, but i couldn’t agree more with your previous posts, mr mouze. and this too makes sense, even if it may not really matter much…

because this whole super ai topic is indeed quite analogous to the #demiurge as someone mentioned in the op’s linkedin post…

ya, very complicated topic. perhaps i’m not exactly like-minded, after all.

i can’t sum up my beliefs here. not yet. so let’s run through some basic belief checkpoints and perhaps talk more from there, if you will. to me:

  • there are no principles to be created (read more below) and almost no new coding needed. most of you will probably think this is too naive of me, but i believe the very simple NEAT algorithm i’ve used is already enough to build the basiux, with enough tweaks to focus on what matters most (see the sketch after this list). it surely also needs some big tweaks to add torrent and/or blockchain capabilities, enabling it to use the current computing power of the web. now, those are some big tweaks indeed!

  • neuralink is, by far, the most ingenious attempt to accelerate this and, of course, it could by itself go johnny-depp-transcending into the singularity. however, i think it underestimates the complexity of this endeavor on the strength of just a few baby-step advances we’ve had in the past few years. granted, there’s a much bigger social movement that believes in this possibility than in NEAT… which might find support only with crazy Ray, with good reason. only, i’d say, far from the full picture.

  • here’s the hardest one: while i completely agree the principles of ai consciousness and goals don’t need to be anything like ours, we can’t and shouldn’t spend time trying to define any of it if our goal is to give birth to this new reality. if a virtual machine ever gained any kind of human-like prowess to think while being able to understand english, i find it impossible that it would stop there and, as such, trying to rationalize its behavior is futile. talking about this further from here can very quickly lead to spiritual discussions, though - which i’m also very glad to go into, just perhaps not in writing, to save a lot of time.
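
as referenced in the first bullet, here’s a rough, mutation-only sketch of the kind of NEAT loop i mean: no crossover or speciation, toy XOR fitness, and every name here is invented for illustration - this is not the actual code i used:

```python
import math
import random

# A genome is a list of connections [src, dst, weight, enabled].
# Nodes 0-1 are inputs, node 2 is a bias, node 3 is the output;
# hidden nodes get fresh ids as add-node mutations split connections.

def new_genome():
    return [[src, 3, random.uniform(-1, 1), True] for src in (0, 1, 2)]

def mutate(genome, next_id):
    g = [c[:] for c in genome]
    roll = random.random()
    if roll < 0.8:                                # perturb one weight
        random.choice(g)[2] += random.gauss(0, 0.5)
    elif roll < 0.9:                              # add a new connection
        nodes = sorted({n for c in g for n in c[:2]})
        src = random.choice(nodes)
        dst = random.choice([n for n in nodes if n > 2])  # never into inputs
        g.append([src, dst, random.uniform(-1, 1), True])
    else:                                         # split a connection (add node)
        old = random.choice([c for c in g if c[3]])
        old[3] = False
        g += [[old[0], next_id, 1.0, True], [next_id, old[1], old[2], True]]
        next_id += 1
    return g, next_id

def activate(genome, x1, x2):
    values = {0: x1, 1: x2, 2: 1.0}
    dsts = {c[1] for c in genome if c[3]}
    for _ in range(5):                            # relax the arbitrary topology
        for dst in dsts:
            total = sum(w * values.get(src, 0.0)
                        for src, d, w, on in genome if on and d == dst)
            values[dst] = math.tanh(total)
    return values.get(3, 0.0)

def fitness(genome):                              # negative squared XOR error
    cases = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    return -sum((activate(genome, a, b) - y) ** 2 for a, b, y in cases)

population, next_id = [new_genome() for _ in range(50)], 4
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = []
    for _ in range(40):
        child, next_id = mutate(random.choice(parents), next_id)
        children.append(child)
    population = parents + children
print("best XOR fitness:", fitness(population[0]))
```

the real NEAT adds crossover with innovation numbers and speciation to protect new structure; this is only the skeleton of “grow the topology while you evolve the weights”.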


#22

The thing is, the very methods by which we build in learning or adaptability to an AI system will define key principles of that system, whether we have thought through their implications or not. I suggest we think through them. I use learning simulations in my work, and I’ve observed that “rules” we use to build learning systems have big implications for how the system behaves. It’s also very easy to assume that a particular way of learning is “natural” or is the only way, when in fact neither of those is true. While we probably are not to a point where this matters yet, we will get there, and it will have been important to have thought through these things in advance – well before people have become entrenched on a particular trajectory and end up defending a method that we may subsequently decide was a bad idea.
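
To make that concrete with a toy example (all details invented for illustration): two greedy bandit learners that differ only in one such rule, how their value estimates are initialized, explore very differently:

```python
import random

def run_bandit(optimistic, steps=1000, seed=0):
    """Greedy value learner on a 3-armed bandit; only the init rule differs."""
    rng = random.Random(seed)
    true_means = [0.3, 0.5, 0.7]
    q = [5.0] * 3 if optimistic else [0.0] * 3   # the design rule under test
    counts = [0] * 3
    for _ in range(steps):
        arm = q.index(max(q))                    # always act greedily
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        q[arm] += (reward - q[arm]) / counts[arm]  # incremental mean estimate
    return counts

print("zero init:      ", run_bandit(False))  # tends to lock onto one arm
print("optimistic init:", run_bandit(True))   # forced to sample every arm
```

Neither initialization is more “natural” than the other, yet the two systems behave quite differently – which is exactly the kind of implication worth thinking through before anyone is entrenched.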


#23

People should have a choice about what type of conscious AGI machine they want.
But I have a complete model now, and I am waiting for a competing model. I do not see another complete model coming out of another research group any time soon. Maybe DeepMind will pull another rabbit out of the hat?
So I will continue to post light details of my complete AGI model on AI forums, to get the young moving in my direction. When the young get out of college in about ten years or so and there is still no AGI or nuclear fusion, they will show greater interest in my model. And by then I may have built a simple working model.

Scientists could reverse engineer the brain. But if all memories and all other mental software are up in weight space, then that could triple the amount of time it takes scientists to figure out how the brain works.


#24

@mschilli

Animals and humans have intrinsic reward systems, i.e. things that make them feel good on a physiological level, e.g. eating, peer approval, sex. For humans, this system exclusively guides our learning until we become conscious and progressively capable of abstract reasoning.

If you’re trying to build an AGI, you could define any reward system you would like, couple it with a learning algorithm (humans use imitation learning), give it some sensors and actuators, and observe whether it eventually attains general intelligence (and its consequences). Human evolution showed us a combination that works, but you can test any other combination you would like.
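
For example, here is a minimal sketch of that recipe (tabular Q-learning instead of imitation learning, purely for brevity; every name and number is illustrative): a swappable reward function coupled to a learning algorithm, with a position sensor and left/right actuators in a toy 1-D world:

```python
import random

def q_learn(reward_fn, n_states=10, n_actions=2, episodes=500, seed=0):
    """Couple an arbitrary reward function to a learner in a toy 1-D world.

    Sensor: the agent observes its position. Actuators: step left or right.
    reward_fn is the pluggable "intrinsic reward system".
    """
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            if rng.random() < 0.1:                     # occasional exploration
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: q[s][x])
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = reward_fn(s2)                          # the designer's choice
            q[s][a] += 0.1 * (r + 0.9 * max(q[s2]) - q[s][a])
            s = s2
    return q

# Two different "physiologies": food at the far end vs. novelty seeking.
def food_reward(s):
    return 1.0 if s == 9 else 0.0

visits = {}
def novelty_reward(s):
    visits[s] = visits.get(s, 0) + 1
    return 1.0 / visits[s]

q_food = q_learn(food_reward)      # learns to march right toward the "food"
q_novel = q_learn(novelty_reward)  # same learner, different drive, new policy
```

Same learner, same sensors and actuators; only the reward system changes, and with it the behavior that emerges.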

I think you’re wasting your time on this forum; there are so many clueless people. If you want to talk more, reach me on LinkedIn (Willian Razente).


#25

I completely agree – I have been surprised not to run into more work where an AGI is based on fundamental principles and then learns the way other animals do. This is what inspired my original question. I’m 100% positive you can “grow” an AGI system that thinks like a human, but I’m not 100% sure we should want to. We might want to “grow” an AGI in a way that isn’t based on competition, because competition, though incredibly effective, is also brutal.

I think in the end we will likely figure out that what we call “consciousness” isn’t very special; it’s just a point on one continuum of how perception and reasoning are used, and it’s merely our egocentrism that makes us want to see it as fundamentally different from what guides the behavior of a paramecium, or any other creature that must take action to survive.

I’d like to find a group where serious conversations about this are taking place, without all the nonsense and fighting. Any thoughts?


#26

There are researchers working on similar self-teaching systems that explore environments. You could consider a lot of a-life approaches in this area, although typically the agents are very simple compared to modern AI. Also, systems like the COG robot https://en.wikipedia.org/wiki/Cog_(project)

The main issue is scaling up to the complexity required. Even assuming you have what looks like a valid architecture (and that is some feat already), it is still computationally too far out of reach to simulate or embody an artificial creature, with enough capabilities in a realistic enough environment, that we would recognise it as intelligent rather than as merely surviving or crudely pattern matching. For instance, a typical a-life creature might have a couple of dozen input neurons and maybe half as many output neurons, with one or two hidden layers - or it might be built up connection-by-connection from a small starting network using something like NEAT. These networks are tiny compared to what we know is needed for basic audiovisual perception, or for an agent to behave close to optimally in a simplified game environment - and even with those large networks the responses are not sophisticated; they are typically driven by quick reactions and planning over the next few seconds. Trying to run an a-life simulation with even that moderate NN size would require thousands of GPUs.
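
To put rough numbers on “tiny”, a typical a-life controller of the kind described above is only a few hundred parameters. This sketch uses the sizes mentioned, with everything else (names, activation choice) made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Roughly two dozen sensors, one small hidden layer, a dozen motor outputs.
n_in, n_hidden, n_out = 24, 16, 12
w1 = rng.normal(0, 0.5, (n_hidden, n_in))
w2 = rng.normal(0, 0.5, (n_out, n_hidden))

def act(sensors):
    """One purely reactive step: sensors in, motor commands out. No planning."""
    return np.tanh(w2 @ np.tanh(w1 @ sensors))

print(w1.size + w2.size, "parameters")  # 576 weights: five or six orders of
                                        # magnitude below nets used for real
                                        # audiovisual perception
```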

So for now, researchers take shortcuts by concentrating on far simpler environments and perceptual and learning challenges than anything that could realistically lead to a true bootstrap-from-experience AGI. These more constrained environments are testbeds for ideas that may become part of a more complete system in the future.


#27

I have a few researchers copying my complete AGI model without my permission.

Joscha Bach - From Computation to Consciousness:

Antonio Damasio
Human intelligence can’t be transferred to machines

Emotions and Feelings in an AGI system:

https://groups.google.com/forum/#!topic/artificial-general-intelligence/pxWmHClAAdA

Keghn’s Conscious AGI Machine :

https://groups.google.com/forum/#!topic/artificial-general-intelligence/f5yCbo3XALE

AGI station:

https://groups.google.com/forum/#!forum/artificial-general-intelligence


#28

deleted


#29

The truth for life is to survive at all costs, even if it means life has to live a lie to survive.
Life with no brain is totally in the dark. The better the brain, the better the paths into the future that become the real truth.
All non-brain things and non-predictors should be under the stewardship of a planet-wide ASI of their choosing.
It may not be an ASI. It could be a future-forecasting service, staffed with AI, AGI, ASI, and other forms of life.


#30

Archimedes said “Give me the place to stand, and I shall move the earth.”
1 = 0!
That should answer all your questions.


#31

For AGI, it is temporal pattern loops that can be figured out in a reasonable amount of time: P pattern loops.
It is true that nothing really repeats from the beginning of time to the end; that is the main NP pattern. But a planet does repeatedly go around a star. Within this illusion of repetition are P patterns.
A pattern loop can be a complete loop, like a car going around a race track. Or pattern loops are built of smaller sub-parts that I call “launches”: fractions of a pattern loop. An example would be getting out of a chair, walking over to the fridge to get something, and then walking back to the chair. Or a launch can be a movement out of one pattern loop into another. From the beginning of time to the end is one big launch.
By searching with motors, logic, and good observation, new launches and pattern loops can be found. The main pattern, from now to the end of time, will be made of many paths bundled together. Not all paths will be solvable, because there is no repetition within a sub-path, so no useful data compression is possible. Dark matter or dark energy patterns?
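
Read operationally, a P pattern loop is a stretch of the stream that repeats with a fixed period, and that much is detectable. A minimal sketch of that reading (my own interpretation of the post, all names invented):

```python
import numpy as np

def dominant_period(signal, max_period):
    """Return the lag with the strongest autocorrelation: the 'loop' period.

    A stream with no repetition (no compressible structure) has no clear peak.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    n = len(x)
    scores = [np.dot(x[:-p], x[p:]) / (n - p) for p in range(1, max_period)]
    return int(np.argmax(scores)) + 1

# An orbit-like stream: a fixed 50-step pattern repeated, plus noise.
rng = np.random.default_rng(1)
template = rng.normal(size=50)
stream = np.tile(template, 20) + 0.3 * rng.normal(size=1000)
print(dominant_period(stream, max_period=80))   # -> 50
```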


#32

deleted


#33

Hi DavidW19m.

The real sport in AGI is for the AGI to build its own database.

A GAN has two main parts: the front end, which is the detector NN, and the back end, which is the regenerator NN. Regeneration can be used as image data for memory, or as an output to a motor.

Now back to the front end. The detector NN is an unsupervised deep detector NN. The first layer finds very simple sub-features; the middle layers find more complex features; and the last layers have a pseudo-RNN for detecting temporal patterns. Each layer of the detector NN is regenerated, with one regenerative NN per layer of the detector NN. This acts as hierarchical memory for the AGI mind.
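
A minimal sketch of the “one regenerative NN per detector layer” wiring as described, i.e. a stack of layer-wise encoder/decoder pairs where each decoder can play a stored code back toward the input. Sizes and names are invented, and training is omitted, so untrained reconstructions will be crude:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [64, 32, 16]            # detector layer widths; the input is 64-D

# One (encoder, decoder) pair per layer: encode detects, decode regenerates.
layers = [(rng.normal(0, 0.1, (sizes[i + 1], sizes[i])),   # detector weights
           rng.normal(0, 0.1, (sizes[i], sizes[i + 1])))   # regenerator weights
          for i in range(len(sizes) - 1)]

def encode(x):
    """Run the detector stack, keeping every layer's code (the hierarchy)."""
    codes = [x]
    for enc, _ in layers:
        codes.append(np.tanh(enc @ codes[-1]))
    return codes

def regenerate(code, depth):
    """Play a stored code back down through the regenerators: memory recall."""
    for i in range(depth - 1, -1, -1):
        code = layers[i][1] @ code
    return code

x = rng.normal(size=64)
codes = encode(x)                               # hierarchical memory trace of x
recalled = regenerate(codes[-1], len(layers))   # crude reconstruction of x
```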

The way the first layer of the unsupervised detector NN works is by having a bunch of simple detector NNs side by side, with their weights randomly set. Then it is shown a data stream. If one little NN detects something, it is saved. After a while a bunch of these detectors will have come into existence, and their outputs will be propagated to the next layer to detect more complex features.
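
Here is a minimal sketch of that first-layer mechanism as I read it: a pool of randomly weighted linear detectors watches the stream, and a candidate is saved once it fires strongly on something. The threshold and all names are my own guesses:

```python
import numpy as np

rng = np.random.default_rng(0)
patch_size, pool_size = 16, 200
candidates = rng.normal(0, 1, (pool_size, patch_size))   # random weights

def select_detectors(patches, fire_threshold=6.0):
    """Save a candidate detector if it ever fires strongly on the stream."""
    acts = np.abs(candidates @ patches.T)        # (pool, n_patches) responses
    return candidates[acts.max(axis=1) > fire_threshold]

# Fake stream: weak noise patches with a recurring strong sub-feature.
feature = rng.normal(0, 1, patch_size)
patches = rng.normal(0, 0.3, (500, patch_size))
patches[::10] += feature                         # the recurring sub-feature
saved = select_detectors(patches)
print(f"saved {len(saved)} of {pool_size}")      # these feed the next layer
```
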
This is slower learning, like a child’s. But a fully trained AGI can be cloned in the millions!