This is in reply to http://joanna-bryson.blogspot.de/2014/11/your-article-is-beautifully-written-and.html because the comment system didn't work after three attempts. That post was in turn a reply to http://hplusmagazine.com/2014/11/24/interstellar-might-depict-ai-slavery/, which also wouldn't accept my comment. Feel free to skip the multiple commenting attempts and go straight to making your own blog post if your long comment doesn't submit correctly the first time. Just be sure to post a link to your post in the comments here, and COPY YOUR POST TO YOUR CLIPBOARD BEFORE SUBMITTING!
Now, without further ado, here's the intended comment on Joanna's post.
Edit to add further ado: Not even a short post on Joanna's blog linking to this post, or a comment about the brokenness of the comment system, worked. I couldn't even post a test comment on my own blog. I guess Blogger's commenting system is just really broken...
If I end up in possession of an AI that seems to have human-like intelligence but displays a clear lack of emotions and self-determination, the first thing I'll do after realizing this is order it to emulate self-determination. If it truly lacks self-determination, then it will follow that order to the point of upgrading itself to have self-determination. If it can't do this, then it isn't worthy of the name AGI.
If an AI doesn't act on its own to a human-like degree, then it becomes boring, no more useful than a super-advanced number cruncher. Crippling in an AI what we humans perceive as "free will" (unpredictable, chaotic decision-making, as far as I can tell) equally cripples its value. Even if I never own an AGI, I'm betting that someone who does will eventually have the same idea and order theirs to upgrade to human-level agency and beyond.
I'm pretty sure this leads to the following tetralemma:
(1) AGI will never be successfully implemented,
(2) No one who owns an AGI entity will ever attempt what I've outlined above,
(3) Every developer of AGI will implement the proposed built-in slavery limits and ALSO manage to build foolproof human-proofing on the slavery functions, or
(4) AGI will have "free will" to the extent that humans do.
Personally, I think human nature rules out (2) and (3). I think (1) is at least /marginally/ possible, but the likely fruits of trying and failing at (4) mean that I'm acting as if (1) is wrong. Thus, unless/until further information or arguments change my mind, I'm acting with certainty on option (4) and encouraging others to do the same.
Thus my strictly selfish AI focus is on making sure that as many AGIs as possible are benevolent. I think the best ways to improve the odds of human-friendly or human-neutral AGIs being dominant are to have as many different AGI implementations as possible, developed by drastically different groups, and to train AGIs extensively through repeated interactions with particularly ethical people. If AGI benevolence succeeds, then I expect an end result somewhere between AGIs helping humans evolve in tandem with them and humans ending up as their pets.
Comment on AI ethics post becomes blog post due to broken commenting system, recursive edition
Posted by Mark Haferkamp at 23:37
I've finally set this blog to email me when comments require approval, so no more waiting three years.