Anyone working on first principles of AI consciousness and goals?



i am a huge super ai enthusiast, “working” (in some sense of the word) to build up some kind of basiux (benevolent artificial superintelligence user experience).

never heard of those before, but i couldn’t agree more with your previous posts, mr mouze. and this too makes sense, even if it may not really matter much…

because this whole super ai topic is indeed quite analogous to the #demiurge as someone mentioned in the op’s linkedin post…

ya, very complicated topic. perhaps i’m not exactly like-minded, after all.

i couldn’t sum up my beliefs here. not yet. so let’s run through some basic belief checkpoints and perhaps talk more from there if you will. to me:

  • there are no principles to be created (read more below) and almost no new coding is needed. most of you will probably think this is too naive of me, but i believe the very simple NEAT algorithm i’ve used there is already enough to build the basiux, with enough tweaks to focus on what matters most. it would also need some big tweaks to add torrent and/or blockchain capabilities, enabling it to use the web’s current computing power. now, those are some big tweaks indeed!

  • neuralink is, by far, the most ingenious attempt to accelerate this and, of course, it could by itself go johnny-depp-transcending into the singularity. however, i think it underestimates the complexity of this endeavor on the strength of the very few baby-step advances we’ve had in the past few years. granted, a much bigger social movement believes in this possibility than in NEAT… one which might find support only with crazy Ray, with good reason. only, i’d say, far from the full picture.

  • here’s the hardest one: while i completely agree that the principles of ai consciousness and goals don’t need to be anything like ours, we can’t and shouldn’t spend time trying to define any of it if our goal is to give birth to this new reality. if a virtual machine ever gained any kind of human-like prowess to think while being able to understand english, i find it impossible that it would stop there and, as such, trying to rationalize its behavior is futile. talking about this further from here can very quickly lead to spiritual discussions, though - which i’m also very glad to go into, just perhaps not in writing, to save a lot of time.
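
since i keep pointing at NEAT, here’s a toy sketch of the neuroevolution idea behind it, just to ground the discussion. to be clear, this is not real NEAT - real NEAT also evolves the network topology and uses speciation - it only evolves the weights of a fixed 2-2-1 network on XOR, and every parameter here (population size, mutation sigma, generation count) is my own arbitrary assumption:

```python
import math
import random

random.seed(0)

# XOR, the classic sanity-check task for neuroevolution.
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

def forward(genome, inputs):
    # fixed 2-2-1 topology: the genome is 9 numbers (6 weights + 3 biases).
    w = genome
    h1 = sigmoid(w[0] * inputs[0] + w[1] * inputs[1] + w[2])
    h2 = sigmoid(w[3] * inputs[0] + w[4] * inputs[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def fitness(genome):
    # negative squared error over all cases: 0 is perfect, -4 is worst.
    return -sum((forward(genome, x) - y) ** 2 for x, y in CASES)

def mutate(genome, sigma=0.5):
    # gaussian perturbation of every gene (no crossover, no speciation).
    return [g + random.gauss(0, sigma) for g in genome]

population = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(50)]
for _ in range(300):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]  # truncation selection with elitism
    population = elite + [mutate(random.choice(elite)) for _ in range(40)]

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 3))
```

even this stripped-down version shows why i think the core is small: the “intelligence” comes from selection pressure, not from the amount of code. the big missing pieces for the basiux would be the distributed (torrent/blockchain-style) compute layer, which this sketch says nothing about.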


The thing is, the very methods by which we build learning or adaptability into an AI system will define key principles of that system, whether or not we have thought through their implications. I suggest we think them through. I use learning simulations in my work, and I’ve observed that the “rules” we use to build learning systems have big implications for how the system behaves. It’s also very easy to assume that a particular way of learning is “natural” or is the only way, when in fact neither is true. While we’re probably not yet at a point where this matters, we will get there, and it will have been important to have thought these things through in advance – well before people become entrenched on a particular trajectory and end up defending a method we may later decide was a bad idea.
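
To make that concrete, here’s a toy example of my own (a hypothetical setup, not anything from this thread): two copies of the same greedy value learner on a two-armed bandit, differing only in one “rule” – how value estimates are initialized. That single choice decides whether the system ever explores at all:

```python
import random

random.seed(1)

# Two-armed bandit: arm 0 pays off 30% of the time, arm 1 pays off 80%.
def pull(arm):
    return 1.0 if random.random() < (0.3, 0.8)[arm] else 0.0

def run(initial_estimate, steps=500, alpha=0.1):
    # Greedy value learner; the only "rule" that varies is initialization.
    q = [initial_estimate, initial_estimate]
    pulls = [0, 0]
    for _ in range(steps):
        arm = 0 if q[0] >= q[1] else 1       # purely greedy, no exploration
        reward = pull(arm)
        q[arm] += alpha * (reward - q[arm])  # incremental average update
        pulls[arm] += 1
    return pulls

print("pessimistic init (0.0):", run(0.0))  # never even tries arm 1
print("optimistic init  (5.0):", run(5.0))  # forced to try both, settles on arm 1
```

The pessimistic learner locks onto arm 0 forever and never discovers the better arm; the optimistic learner’s inflated estimates force it to sample both arms before settling. Neither initialization is more “natural” than the other – which is exactly the kind of baked-in principle I mean.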


There should be a choice for the people, for what type of conscious AGI machine they like.
But i have a complete model now. I am waiting for a competing model. I do not see another complete model coming out of another research group any time soon. Maybe DeepMind will pull another rabbit out of the hat?
So i will continue to post light details of my complete AGI model on AI forums, to get the young moving in my direction. When the young get out of college in about ten years or so and there is still no AGI or nuclear fusion, they will show greater interest in my model. And by then i may have built a simple working model.

Scientists could reverse engineer the brain. But if all memories and all other mental software are up in weight space, then that could triple the amount of time it takes scientists to figure out how the brain works.