Hi Aidan, Legg’s definition is fine by me, but it captures only a particular aspect of intelligence, namely following goals; it doesn’t give the root definition, where an intelligence can be goal-less and still make decisions, nor say anything about the nature of “intelligence” on its own.
Alexander’s definition is about thermodynamics: you apply the 2nd law to the “cone” of future outcomes that follows from each option you could take, so you can score the options and take your decision as the weighted average over them (each option’s future entropy, normalised, is its weight).
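To make that concrete, here is a toy sketch of the decision rule in Python (not my actual code: the step function, the coarse-graining and the numbers are all made up for illustration, only the rule itself is the one I described):

```python
import math
import random
from collections import Counter

def future_entropy(state, action, step, actions, n_paths=100, horizon=20):
    """Entropy of the 'cone' of futures opened by taking `action` from `state`:
    sample random rollouts, coarse-grain where they end up, and count."""
    ends = Counter()
    for _ in range(n_paths):
        s = step(state, action)                  # commit to the option once...
        for _ in range(horizon):
            s = step(s, random.choice(actions))  # ...then let the future unfold at random
        ends[round(s, 1)] += 1                   # crude coarse-graining of the end state
    return -sum((c / n_paths) * math.log(c / n_paths) for c in ends.values())

def decide(state, actions, step):
    """Score each option by the entropy of its future cone, normalise, and take
    the decision as the entropy-weighted average over the options."""
    ents = [future_entropy(state, a, step, actions) for a in actions]
    total = sum(ents) or 1.0
    return sum((h / total) * a for h, a in zip(ents, actions))

# Toy system: a particle on a line, clamped at |x| = 5 (a dead end).
# The rule pushes it away from the walls, where more futures stay open.
step = lambda x, a: max(-5.0, min(5.0, x + a + random.gauss(0, 0.1)))
print(decide(3.0, [-1.0, 0.0, 1.0], step))
```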
So, for me, Alexander’s definition is broader and more precise than any other I know of.
I actually worked out a way to convert any set of goals into a utility function to feed the AI I built on Alexander’s principles, and it worked beyond my expectations.
I have a blog about it, and there is a post that discusses exactly this and shows a very complex environment (Lorenz attractors included) that the AI handles quite well, solving it and creating new strategies out of thin air:
As you see, the definition I use allowed me to actually code a general AI capable of dealing with any environment, as long as you can simulate the system, without any kind of training (it “trains” itself using a small set of future paths built on the fly from its current position/state, instead of the past examples used to train a NN).
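To be clear about what I mean by “it trains itself”: there is no training phase at all, the agent simply re-plans from whatever state it is in at every step. Again just a toy sketch, reusing the decide() from the sketch above (so the same assumptions apply):

```python
def run(state, actions, step, n_steps=50):
    """Plan-act loop: at every step, sample a small set of future paths from the
    *current* state, score them, act, and move on -- no dataset, no NN, no replay."""
    trajectory = [state]
    for _ in range(n_steps):
        a = decide(state, actions, step)   # re-plan from where we are right now
        state = step(state, a)
        trajectory.append(state)
    return trajectory

print(run(3.0, [-1.0, 0.0, 1.0], step)[-1])   # ends up hovering near the middle
```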
Just wondering if anyone here has ever heard of these “causal entropic forces”; I think the article was never given the merit it deserves and has been basically forgotten.