This is my current understanding of how much of this AI stuff works (OpenAI/ChatGPT, Grok, and various other entities).
There are specialized chips, made by NVIDIA and a few other companies, with immense processing power that can be linked together in clusters to form what amounts to a supercomputer. Those clusters are then used to train a model on enormous amounts of text scraped from the internet, including social media sites like FB (and access to those sites is a key and controversial piece of this). The more content the model trains on, the more patterns it learns. In essence, it programs itself, with some overall guidance from its creators. There is so much content out there, astronomical amounts from the last 20+ years of everything being online, that these models can absorb massive amounts of it. So they read, remember the patterns, and evolve. I'm sure they even scour this site as well, which is kinda scary.

That's what they call an LLM, or large language model. It generates its output by predicting, one word (token) at a time, what should come next, based on all those patterns it soaked up (see the little sketch at the bottom of this post). Sometimes the outputs are wrong or skewed, and sometimes they are very good.

The images and videos come from related systems trained the same way on massive data, not from the language model itself, and they have the same hit-or-miss quality. Somebody posted an AI-generated image of Texas Memorial Stadium. At first glance it looked pretty cool, until you looked a little closer: the logos and State of Texas symbols were butchered, and the shape resembled a baseball diamond more than the real thing. One infamous example from just a few years ago was an AI-generated video of Will Smith eating spaghetti. It kinda looked like Will Smith, but it was clearly a fake video. Now, just 2-3 years later, AI can generate the same video and it looks extremely real, almost to the point of not being able to tell the difference between the real thing and AI. The scary thing is that it gets better all the time.
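For anyone who wants the gist in concrete form, here's a toy sketch (in Python, purely for illustration) of the core "predict the next word" idea. Everything in it, the sample text and the names, is made up for the example; a real LLM uses a neural network with billions of learned parameters instead of a simple word-count table, but the basic job is the same.

```python
import random
from collections import defaultdict, Counter

# Toy "language model": count which word follows which in some text,
# then generate new text by repeatedly picking a likely next word.
# Real LLMs do the same basic job (predict the next token), just with
# billions of learned parameters instead of a lookup table.

text = (
    "the model reads text and learns which word tends to follow "
    "which word and the more text the model reads the better its "
    "guesses about the next word get"
)

words = text.split()
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1  # tally: after 'current', how often 'nxt'?

def generate(start, length=12):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: we never saw anything follow this word
            break
        # sample the next word in proportion to how often it was seen
        choices, weights = zip(*options.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Run it a few times and you get slightly different, mostly plausible strings of words. Scale the training text up from one sentence to a huge chunk of the internet, and the word-count table up to a giant neural network, and that's roughly the leap from this toy to ChatGPT-style output.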