It's also possible that self-consciousness, self-awareness, sentience, simply isn't possible for an artificial intelligence. There are just as many reasons to think it might never happen as there are to think it inevitably will.
There it is. For every futurist, philosopher, and pontificator who sees it as possible and/or inevitable, there are just as many well-versed and well-respected people in the field, along with philosophers who agree with them, who are skeptical it can or will happen, for a variety of reasons.
I'm firmly in the camp that believes it's very unlikely we'll get sentience from AGI, though I never say never. I'm also in utee's camp that, practically speaking, it won't matter in some ways, because AGI could emulate intelligence and self-awareness so well that there's no meaningful difference to us. It's a long-standing question whether we could ever know if a machine was sentient. We can't even prove other humans are sentient; we take it on faith and enculturated norms. There's even a branch of philosophy, solipsism, that is skeptical anyone else exists at all outside of oneself. I always wanted to ask its adherents: if that's true, who do you think is agreeing with you in that philosophical stance? Or disagreeing with you?
@CatsbyAZ , I picked the wrong day to be absent. I quite like your line of thought, and despite my prior ramblings about the basic theory, I also find the philosophical aspects far more interesting. And despite my education being in ML, not philosophy, I've studied philosophy as a hobby so much over the years, and ingested so much about the philosophy of AI and consciousness, that I consider myself more well-versed in that aspect than in actually building/using AI. Sadly, I just don't have time now to go back and respond to all of yours, brad's, and utee's posts, and everybody's probably moved on anyway. But they were all interesting. Bottom line: I think Hegel makes a worthwhile point about language (the quotes you used remind me of the Jeremy Renner movie "Arrival"), but it's incomplete; intelligence/sentience isn't just a function of language, although language is a necessary component. As for the other ingredients, I remain skeptical that any amount of complicated algorithm or language will ever produce them. Philosophy of mind is super interesting, and there are those who disagree with me, so ymmv.
One of the coolest things I got out of learning about LLMs, after years of studying and thinking about the philosophy of AI and consciousness, was realizing the genuine parallels between how LLMs do what they do and how babies-toddlers-children-adults do what we do. Once I grasped that, it nearly blew my mind when I realized what it meant about.....I don't know how else to say it.....reality. Everything. The way the universe/multiverse/Whole Sort of General Mish-Mash (Douglas Adams fans will get that) just
is...is insane. And by insane, I mean magnificently ordered.