I think of its discovery like Christopher Columbus sailing for India. He had the technology (a ship), enough expertise, and the drive to take a shot at crossing the seas. But what he ultimately discovered was quite unexpected (the Americas). We have the technology (quickly advancing Narrow AI) and the drive (an arms race) to give it a shot. We expect to discover a separate but similar consciousness, yet what we stumble upon will more likely be far more alien than we could have expected (more on all this in a later post).
I'm in the "not concerned" camp of AI and its potential to screw us over. For what AI is and does and can be, we're irrelevant. We're normally foolish and sometimes clever apes.
I disagree with the "we're just an anthill in Africa and AI wouldn't think twice about destroying us" idea. I view it more as if we were AI's pet golden retriever and it were fearful enough of us to try to harm us. Nonsensical.
I don’t think we can predict what General AI, with an awareness free of its base programming (because, by definition, it reprogrammed itself), will conclude about the human species.
Going back to Philosophy, I like falling back on Hegel’s Lord–bondsman dialectic.
To quote Wikipedia:
“The passage describes, in narrative form, the development of self-consciousness as such in an encounter between what are thereby two distinct, self-conscious beings. The essence of the dialectic is the movement or motion of recognizing, in which the two self-consciousnesses are constituted in each being recognized as self-conscious by the other. This movement, inexorably taken to its extreme, takes the form of a "struggle to the death" in which one masters the other, only to find that such lordship makes the very recognition he had sought impossible, since the bondsman, in this state, is not free to offer it.”

In other words, two separate self-conscious beings, once they recognize the self-consciousness of the other, seek to overcome each other. Hegel is, in highly theoretical terms, attributing the competition, rivalry, and conquest waged between human beings not first to resources, pride, or jealousy, but to an inevitability that beings seek to overcome each other, collectively and/or individually, before anything else is even at stake between them.
We as humans have only ever had to face self-consciousness in other human beings, never in anything else. General AI will put us face to face with a non-human self-consciousness for the first time. In Hegel’s terms, AGI self-consciousness will seek to overcome humans once AGI recognizes that we humans constitute a separate self-consciousness, even if the two are not otherwise in competition over matters like resources.
And once this occurs, what will AGI conclude about humans? That we are prone to self-destructive tendencies or subject to declining health in ways that AGI is not. The worry is that if an AGI/ASI eventually finds itself a being vastly superior to humans, we’ll be dismissed and treated as such, like termites.