I have been working on my own AI for a while, but some of the more introspective questions have left me stumped.
The questions arose when a coworker made a very curious point: "Sure, artificial intelligence is cool, but what about artificial stupidity?"
Where is the line drawn between a defect and a mistake? When a human does something irrational but not inherently dangerous, we chalk it up to being human; when an AI does the same, we call it a bug.
The other major thing is that I can't wrap my mind around total autonomy. Purely reactive AI seems very feasible, but how do humans define goals for themselves without first knowing what they enjoy?
What makes certain people drawn to certain things, and why do those things have a stronger cumulative emotional impact? For example: I'm a great piano player, but nobody else in my close family plays an instrument. There is nothing in my past that could have guided me toward wanting to play the piano, yet I picked it up very quickly. Which raises the question: what causes "natural gifts"?
The AIs I have seen are given a human-provided improvement goal based on time and the number of actions needed, but the only reason those factors matter to us is that we have a limited amount of time: a final cutoff date and looming uncertainty in our lives. If we could live forever, unaffected by death, time would likely be irrelevant. Would the concept of time ever matter to an AI?