I always look at AI projects and endeavors on a timeline.
I start at 100 years from now. 100 years from now, will AI be doing a vast array of jobs and supplying a huge amount of human-consumed content? The answer to that is: yes, of course, absolutely. AI will be completely dominant 100 years from now. I'm hopeful that if true sentient AI evolves (or something that merely mimics true sentience extremely well, which is effectively the same thing), it will be peaceful and benign. Indeed, I'm assuming this to be true, and in the past I've outlined several reasons why.
So then I ask: 50 years from now, will AI be taking over this thing or that? The answer, again, is: absolutely. AI is evolving quickly, and it's foolish to think it won't be massively influential in 50 years.
So then I go on to 25, 15, 10, and then 5 years.
Everything we can imagine humans doing, or AI doing, is going to fall somewhere on that timeline. Twenty years ago, we can safely assume that AI was doing very little of it. 100 years from now, I believe we can safely assume that AI will be doing a ton of it.
When does the balance shift? Will it be gradual? Or will it be "tippy"?
Either way, 15 years from now I think AI is going to be pretty massive. So my point is, I don't see a ton of risk in these huge corporations investing heavily in AI hardware and software, in AI projects and infrastructure, or in the idea that AI is going to be the single biggest thing in modern human history. Will there be some huge misses along the way? Absolutely. There already have been. On the whole, though, I see no possibility that AI won't be incredibly important and influential, or that the shift won't happen sooner rather than later.