CFB51 College Football Fan Community
The Power Four => Big Ten => Topic started by: Cincydawg on January 29, 2025, 07:46:41 AM
-
I'm reading a somewhat scaryish book about the former, and we're all seeing stuff about the latter. My premise is these two things MAY significantly influence our futures. AI I'm not sure about, don't really understand it, am occasionally impressed with its output, and often dismissive of same. It seems to be quite real and going to hog a lot of power in the near future. I can envision a world "powered" by AI where live humans simply live in a virtual world in pods or something. Maybe we serve as power sources for the AI.
CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats. Repetitive DNA sequences, called CRISPR, were observed in bacteria with “spacer” DNA sequences in between the repeats that exactly match viral sequences.
This beast is akin to Brave New World futures, the ability to modify the human genome, for better or not so much. The science is ahead of the ethicists at the moment. Star Trek 2? And in theory they can do it on live adult humans to correct genetic issues. The writing for me is too drawn out, but whatever, I'm plowing through it, finished over half (from the library).
(https://i.imgur.com/jWUT5NJ.png)
-
AI is going to be powerful and influential in the coming decades/centuries. It's going to be capable of doing everything we've predicted over the past 100 years in the SciFi novels, plus many things we haven't even dreamed or imagined yet.
Some will be good, some will be bad. But the best advice I can give right now, is learn everything you can to understand it, and implement it. Those who don't understand it will be left far behind.
-
AI is going to be powerful and influential in the coming decades/centuries. It's going to be capable of doing everything we've predicted over the past 100 years in the SciFi novels, plus many things we haven't even dreamed or imagined yet.
Some will be good, some will be bad. But the best advice I can give right now, is learn everything you can to understand it, and implement it. Those who don't understand it will be left far behind.
Well, for those of us who do understand it, please tell us WTF is happening?
-
I'm already behind the 8 ball.......as usual with new tech.
-
I'm already behind the 8 ball.......as usual with new tech.
Same.
-
This is my current understanding of how much of AI works (OpenAI/ChatGPT, Grok, and various other entities).
There are specialized chips made by companies like NVIDIA and others that have immense processing power and can be linked together in clusters to form a supercomputer. These clusters then scour the internet, using social media sites like FB and others (and access to these SM sites is key and controversial). The more content they see, the more they learn. In essence, they program themselves, with some overall guidance by the creators. There is so much content out there, astronomical amounts from the last 20+ years of everything being online, that they can literally see everything and absorb everything, or at least massive amounts. So they look, remember, and evolve. I'm sure they even scour this site as well, which is kinda scary. This is what they call an LLM, or large language model, to generate the outputs.
Sometimes the outputs are wrong or skewed, and sometimes they are very good. Somebody posted an AI-generated image of Texas Memorial Stadium. At first glance, it looked pretty cool, until you look a little closer and see that the logos and State of Texas symbols were butchered, and the shape resembled more of a baseball diamond than the real thing. One infamous example from just a few years ago was an AI-generated video of Will Smith eating noodles or something. It kinda looked like Will Smith, but was clearly a fake video. Now, just 2-3 years later, AI can generate the same video and it looks extremely real, almost to the point of not being able to tell the difference between the real thing and AI. The scary thing is that it gets better all the time.
-
(https://i.imgur.com/LQothwG.png)
-
I've found that you always need to look at the hands when you see pictures of people anymore. AI still can't do hands very well.
-
Yup hands are weird. And it gets weird with motion, too.
What Gigem posted is mostly correct*, but only part of the story. The models need training data. That can be anything from the entire world wide web, to just specific data that you feed directly to it.
The larger the training data sample, and the less curated, the more likely you are to get weird outputs. Like the weird stadium, flag, and logos above, or the weird hands on images of humans.
And in the worst case, the AI LLMs trained on the entire web have a tendency to "hallucinate." They create completely false facts when queried on a subject. The false facts are usually plausible, but they're absolutely 100% fabricated, because the training data has so much ambiguity and incorrect data within it.
* from above, the specialty chips made by NVIDIA and others are called "NPU," which stands for Neural Processing Unit. They don't necessarily talk to one another unless you've configured them to via networked servers. But in your home system, it simply means that the NPU is capable of faster processing of AI-related data, in the same way a GPU is capable of faster processing of graphics-related data, all while the CPU still manages the system as a whole. The software assistants are the ones scraping the web or trawling the training data to generate responses for your requests.
-
I've even seen some places selling fake fingers you can wear on your hands to give the appearance that you have extra fingers just in case you get caught on video you can claim it's fake.
Strange world.
-
Yup hands are weird. And it gets weird with motion, too.
What Gigem posted is mostly correct*, but only part of the story. The models need training data. That can be anything from the entire world wide web, to just specific data that you feed directly to it.
The larger the training data sample, and the less curated, the more likely you are to get weird outputs. Like the weird stadium, flag, and logos above, or the weird hands on images of humans.
And in the worst case, the AI LLMs trained on the entire web have a tendency to "hallucinate." They create completely false facts when queried on a subject. The false facts are usually plausible, but they're absolutely 100% fabricated, because the training data has so much ambiguity and incorrect data within it.
* from above, the specialty chips made by NVIDIA and others are called "NPU," which stands for Neural Processing Unit. They don't necessarily talk to one another unless you've configured them to via networked servers. But in your home system, it simply means that the NPU is capable of faster processing of AI-related data, in the same way a GPU is capable of faster processing of graphics-related data, all while the CPU still manages the system as a whole. The software assistants are the ones scraping the web or trawling the training data to generate responses for your requests.
No doubt I only touched on the very fringes of what's happening, but as an outsider it's how I see it.
Curious, to the mods of this site, can you tell when it's being scraped or whatever they call it by bots etc. for data? Scary to think how many personal details we've shared on this site over the years.
-
I've even seen some places selling fake fingers you can wear on your hands to give the appearance that you have extra fingers just in case you get caught on video you can claim it's fake.
Strange world.
Oh my. Wow.
-
No doubt I only touched on the very fringes of what's happening, but as an outsider it's how I see it.
Curious, to the mods of this site, can you tell when it's being scraped or whatever they call it by bots etc. for data? Scary to think how many personal details we've shared on this site over the years.
@Drew4UTk (https://www.cfb51.com/index.php?action=profile;u=1) sure can. He speaks about it on A51 periodically.
-
Aside from weird photos, what else could AI do in the future? I know we have CGI of course in movies, which isn't really full AI stuff. I think.
How hard would it be to completely duplicate a web site or email address to cull information from folks? I know that already exists, folks exchange a symbol for some letter somewhere to differentiate that can't be seen.
-
I'm sure they even scour this site as well, which is kinda scary.
At least that means our robot overlords won't put beans in our chili
-
At least that means our robot overlords won't put beans in our chili
Most important observation of the day, right here!
-
If AI doesn't allow beans in chili, it means we wound up in the Terminator/Matrix future, and not the optimistic Star Trek one.
-
Aside from weird photos, what else could AI do in the future? I know we have CGI of course in movies, which isn't really full AI stuff. I think.
How hard would it be to completely duplicate a web site or email address to cull information from folks? I know that already exists, folks exchange a symbol for some letter somewhere to differentiate that can't be seen.
This has long been done by 'bots, and is quite simple to program, no AI necessary. The proper way to think about AI in such a context, though, is to think of the NEXT level. Who programmed the 'bots to do their scraping, and how are their targets selected? AI can code the 'bots (ultimately, more efficiently than humans can) and they can be given very simple instructions and extrapolate the desired target population of sites/data, more efficiently and more intelligently. In other words, they can do a lot more, with a lot less compute power.
That's just one very simple and basically already-existent example of how AI will change things.
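For what it's worth, a scraper 'bot really is just a few lines of code. Here's a minimal Python sketch of the idea (the page content is a made-up placeholder, and real scrapers fetch live pages instead of a string); nothing "intelligent" about it:

```python
# A minimal sketch of the kind of scraper 'bot described above -- no AI
# involved, just a regex pass over fetched HTML. The sample page below is
# an invented placeholder standing in for a downloaded web page.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest_emails(html: str) -> list[str]:
    """Pull anything that looks like an email address out of raw HTML."""
    return sorted(set(EMAIL_RE.findall(html)))

page = '<p>Contact <a href="mailto:mod@example.com">mod@example.com</a> or fan@example.org</p>'
print(harvest_emails(page))  # ['fan@example.org', 'mod@example.com']
```

The AI angle is exactly what's described above: not the scraping itself, but deciding what to scrape and writing thousands of variations of this kind of thing automatically.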
In 3 years, jobs like mine will be completely replaced by AI. But somebody is going to need to know where to "point" the AI and how to make it relevant within your workplace, industry, and org structure. That kind of "up-leveling" is one way to stay ahead of it.
-
Some will be good, some will be bad. But the best advice I can give right now, is learn everything you can to understand it, and implement it. Those who don't understand it will be left far behind.
That's my take. Recently I did my thesis on machine-learning in radiology, and that was more or less the bottom line I came up with. i.e., radiologists don't seem likely to be replaced by AI (in the foreseeable future), but radiologists who use AI will probably replace those who don't. My loose assumption is a lot of other fields will be similar.
My schooling was strictly for machine-learning, a subset of AI. I haven't done anything with LLM's, but I know how they work, and it's fascinating. Both for the nuts and bolts, but also for the eye-opening implications of what it means that LLMs can work as well as they do, how they do.
I've used LLMs at times to help me with a tricky coding problem when I get stuck on something. It's quite something how often it works well. In my limited, purely anecdotal experience, it still spits out something dumb about 1 out of 5 times. I've also noticed that for the most part, it can't correct and get any better with more/different prompts. If it starts off hallucinating, it's probably going to keep doing it.
That's quite something, that 4 out of 5 times I get useful help from it. At the same time, 20% is still a heckuva hallucination rate. I've also noticed that even when it gets things correct, its code format is crap. As in, it works, but it's either not efficient or else it just looks like crap. There's still something to be said for code being readable for other people, and so far, stuff like ChatGPT still spits out some pretty ugly code.
I'm sure it will get better with time. I guess.
-
Yup hands are weird. And it gets weird with motion, too.
What Gigem posted is mostly correct*, but only part of the story. The models need training data. That can be anything from the entire world wide web, to just specific data that you feed directly to it.
The larger the training data sample, and the less curated, the more likely you are to get weird outputs. Like the weird stadium, flag, and logos above, or the weird hands on images of humans.
And in the worst case, the AI LLMs trained on the entire web have a tendency to "hallucinate." They create completely false facts when queried on a subject. The false facts are usually plausible, but they're absolutely 100% fabricated, because the training data has so much ambiguity and incorrect data within it.
Of note with both the hallucinations and the weird hands is that these are two symptoms of a very important point. Artificial intelligence--or to be more precise, Generative AI--isn't actually intelligent at this time. This makes it all the more impressive what it's actually capable of, and explains why it can also be spectacularly wrong.
Generative AI for text (i.e. based upon large language models and transformers) is, to oversimplify, just a predictive engine trying to figure out the next word(s) that follow the previous words it has written. Based on the quality of the prompt, it is able to narrow down which portions of its language model to start with, and then the quality of the training data and model helps guide it from there.
For example, let's say it was trained on this site and I were to ask it: "How did utee94 end up in his career with [large computer manufacturer]?"
It could equally respond with:
- utee94 grew up in Austin, TX, chose to go to the University of Texas and study electrical engineering, before taking his job with [large computer manufacturer].
- utee94 grew up in Austin, TX, chose to go to Texas A&M University and study physics, before taking his job with [large computer manufacturer].
One is correct and one just made @utee94 (https://www.cfb51.com/index.php?action=profile;u=15) throw up in his mouth a little, but both would be plausible. Because on this board "A&M" often follows "Texas"--sometimes in his own posts--and recently we've had discussions where utee's posts are discussing physics classes.
The truth is that the model isn't intelligent. It doesn't know who utee94 is. It doesn't know what Texas is. It doesn't have any way to "self-correct" because it has no context with which to guide itself.
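To make that concrete, here's a toy Python sketch of the "predict the next word" idea. A real LLM uses a transformer with billions of parameters and whole passages of context; this just counts, in an invented training corpus, which word most often followed the previous one:

```python
# Toy next-word predictor. The "training data" is made up so that "A&M"
# happens to follow "Texas" more often than anything else -- mirroring the
# forum example above. A real LLM is vastly more sophisticated, but the
# core move (predict the likeliest continuation) is the same.
from collections import Counter, defaultdict

corpus = ("he went to Texas A&M and she went to Texas A&M "
          "but utee94 went to Texas and studied engineering").split()

# Build a bigram table: for each word, count what followed it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("Texas"))  # A&M -- regardless of what's actually true
```

The model has no idea what "Texas" is; it only has counts. That's the whole point about plausible-but-wrong output.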
The same is true with generative AI for images. For example, utee posted the image down below in another thread.
A generative AI engine is given a prompt of "show me a beautiful woman camping". And it did a darn good job... Right up until it put the woman's campfire right in the middle of her tent. Oh, and also seemingly got confused and ended up drawing one really deformed and elongated finger.
Fingers are hard for AI because AI doesn't know what a finger is, doesn't "understand" it's drawing a finger, and doesn't have context which tells it that humans (which it doesn't necessarily "know" what they are) only have 5 fingers on each hand. The training data is obviously going to have pictures of humans and those humans are going to have 5 fingers, but it's just trying to figure out "should I move on or should I put another beige fleshy appendage or two on?" based on its predictive engine and training data.
(https://i.imgur.com/ZxOgCGi.png)
What AI is capable of today is truly impressive... But we have not reached artificial general intelligence. We have not reached a point where these Generative AI engines actually truly know what it is they're producing.
That said:
How hard would it be to completely duplicate a web site or email address to cull information from folks? I know that already exists, folks exchange a symbol for some letter somewhere to differentiate that can't be seen.
You don't need to create artificial intelligence to overcome natural stupidity.
-
That's my take. Recently I did my thesis on machine-learning in radiology, and that was more or less the bottom line I came up with. i.e., radiologists don't seem likely to be replaced by AI (in the foreseeable future), but radiologists who use AI will probably replace those who don't. My loose assumption is a lot of other fields will be similar.
My schooling was strictly for machine-learning, a subset of AI.
Yeah, people talk about AI as if it's entirely a new thing, and when it comes to generative AI, the advances in just the last few years have been staggering.
However, most of what is actually "AI" is largely machine learning. And that's been around a LONG time. It's been gaining use as the cost and power of computing has decreased, but it's nothing new.
For example, a big buzzword in many industries is products that are "designed with AI." Even in something as trivial as golf clubs, I hear claims like "AI-designed face for higher ball speed on mishits" or something like that.
I think you and utee and I all know that they didn't pump the prompt "Design me a golf driver clubhead that gets higher ball speed across the entire face" into ChatGPT and it spit out their design :57:
They used machine learning--basically the ability to iterate and model things nearly infinitely faster and more completely than humans can--to vary the properties of multiple designs within the constraints they have (such as the equipment rules of golf, the properties of the materials they have to work with, and some basic parameters like "must be within this weight range" or "club face must be >0.x mm thick to avoid breakage") until they found the ones that achieved the best overall results.
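A crude sketch of that iterate-within-constraints loop, with a one-variable stand-in "simulator" in place of real physics (the function, thickness numbers, and constraint are all invented for illustration):

```python
# Toy design-search loop: try thousands of candidate designs, throw out any
# that violate a constraint, keep the best performer. The "simulator" is a
# made-up quadratic whose optimum sits at 2.6 mm; real design work would
# call an actual physics model here.
import random

def ball_speed(face_thickness_mm: float) -> float:
    """Pretend simulator: speed peaks at some interior thickness."""
    return -(face_thickness_mm - 2.6) ** 2 + 160.0

def search(trials: int = 10_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    best_t, best_v = None, float("-inf")
    for _ in range(trials):
        t = rng.uniform(2.0, 4.0)   # candidate design variable
        if t < 2.2:                 # invented rule: face must be >= 2.2 mm
            continue
        v = ball_speed(t)
        if v > best_v:
            best_t, best_v = t, v
    return best_t

print(round(search(), 2))  # 2.6
```

Swap the quadratic for a real simulation and one variable for hundreds, and that's the shape of "AI-designed" products: brute iteration, not a chatbot.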
I'm going to assume that machine learning in radiology basically followed this rough model:
- An exhaustive data set of radiological scans (of whatever type they worked with) was built.
- This data set obviously didn't just use raw scans, but they were annotated with tags such that it identified the various anomalies and outcomes that the actual patients had. I.e. if something was found to actually be a tumor when removed, it was tagged as a tumor in the data set.
- The machine learning models were trained on this very large dataset, such that it developed the ability to be presented with any of the images in the data set without any sort of tagging and would identify what it was supposed to identify with high accuracy.
- The models were then tested with novel untagged scans that it had NOT been trained on to see if its accuracy was maintained.
- Once a sufficient level of accuracy was achieved, it became a tool that radiologists can now use to augment their own training--because sometimes the model catches something they would have missed. But sometimes it catches something they missed because it was actually nothing. So the radiologist has to be able to use their own judgment to both recognize their own false negatives, and exclude the model's false positives.
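The loop in those bullets can be sketched in a few lines, with fake one-number "scans" instead of real images (every number and label here is invented for illustration):

```python
# Toy version of the tag-train-test pipeline above: build a labeled data
# set, fit a trivial model (class averages) on part of it, then check
# accuracy on held-out "scans" the model never saw during training.
import random

rng = random.Random(42)

def fake_scan(label: str) -> tuple[float, str]:
    # pretend tumors read brighter on one made-up feature
    center = 0.8 if label == "tumor" else 0.2
    return (center + rng.gauss(0, 0.1), label)

# Steps 1-2: an exhaustive, tagged data set
data = [fake_scan("tumor") for _ in range(100)] + [fake_scan("normal") for _ in range(100)]
rng.shuffle(data)

# Step 3: "train" (here, just average each class) on a training split
train, test = data[:150], data[150:]
centroid = {lab: sum(x for x, l in train if l == lab) / sum(1 for _, l in train if l == lab)
            for lab in ("tumor", "normal")}

def predict(x: float) -> str:
    """Label a scan by whichever class average it sits closest to."""
    return min(centroid, key=lambda lab: abs(x - centroid[lab]))

# Step 4: evaluate on novel, untagged scans it was NOT trained on
accuracy = sum(predict(x) == l for x, l in test) / len(test)
print(accuracy > 0.9)  # True
```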
Somewhat close on that?
-
I'm reading a somewhat scaryish book about the former, and we're all seeing stuff about the latter. My premise is these two things MAY significantly influence our futures. AI I'm not sure about, don't really understand it, am occasionally impressed with its output, and often dismissive of same. It seems to be quite real and going to hog a lot of power in the near future. I can envision a world "powered" by AI where live humans simply live in a virtual world in pods or something. Maybe we serve as power sources for the AI.
CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats. Repetitive DNA sequences, called CRISPR, were observed in bacteria with “spacer” DNA sequences in between the repeats that exactly match viral sequences.
This beast is akin to Brave New World futures, the ability to modify the human genome, for better or not so much. The science is ahead of the ethicists at the moment. Star Trek 2? And in theory they can do it on live adult humans to correct genetic issues. The writing for me is too drawn out, but whatever, I'm plowing through it, finished over half (from the library).
(https://i.imgur.com/jWUT5NJ.png)
I have a granddaughter with Cystic Fibrosis, and while new drugs are great in delaying and preventing issues, CRISPR is by far the best possibility for her to be cured sometime in the future.
-
CRISPR
And in theory they can do it on live adult humans to correct genetic issues.
This intrigues me. How does that work?
I understand perhaps how you would introduce this to an embryo, but how would you introduce a new genetic sequence to all the trillions of cells in a live adult human?
-
I've even seen some places selling fake fingers you can wear on your hands to give the appearance that you have extra fingers just in case you get caught on video you can claim it's fake.
Strange world.
Secret Service does this with hands. An agent walks with both hands out, but one of them is fake, with the real one inside their coat on a trigger.
-
https://www.axios.com/2020/03/04/crispr-gene-editing-patient
https://www.technologyreview.com/2023/03/10/1069619/more-than-200-people-treated-with-experimental-crispr-therapies/
-
https://www.youtube.com/watch?v=E8vi_PdGrKg
-
And in theory they can do it on live adult humans to correct genetic issues.
And implement other issues
Some Craft brewer makes a good pilsner - Reality Czech. I've had it
-
I think the long finger/whatever was intended to be a stick poking the fire. (https://i.imgur.com/Y6nVfRq.png)
-
https://news.stanford.edu/stories/2024/06/stanford-explainer-crispr-gene-editing-and-beyond
-
I'm going to assume that machine learning in radiology basically followed this rough model:
- An exhaustive data set of radiological scans (of whatever type they worked with) was built.
- This data set obviously didn't just use raw scans, but they were annotated with tags such that it identified the various anomalies and outcomes that the actual patients had. I.e. if something was found to actually be a tumor when removed, it was tagged as a tumor in the data set.
- The machine learning models were trained on this very large dataset, such that it developed the ability to be presented with any of the images in the data set without any sort of tagging and would identify what it was supposed to identify with high accuracy.
- The models were then tested with novel untagged scans that it had NOT been trained on to see if its accuracy was maintained.
- Once a sufficient level of accuracy was achieved, it became a tool that radiologists can now use to augment their own training--because sometimes the model catches something they would have missed. But sometimes it catches something they missed because it was actually nothing. So the radiologist has to be able to use their own judgment to both recognize their own false negatives, and exclude the model's false positives.
Somewhat close on that?
Not far off.
It's worth noting that within rigid contexts, ML models already outperform radiologists, diagnosing with a higher degree of accuracy. There are obvious "real-world" problems, such as medical boards getting on the bandwagon, as well as insurance companies deciding to pay for such tests.
But even without all that, we're not yet in Radiology Utopia anyway, for nothing more than pure AI-related issues.
For one thing, ground truth for radiology images is hard to come by. brad and utee will know this, but for anybody unfamiliar, "ground truth" just means the objective, brute, real fact about something, no matter who thinks differently. In this case, we're usually dealing with some kind of classification algorithm, i.e., the model spits out a label "You have a tumor, bruh" or "Nah, ur good." It can do this, as brad mentioned, because it has trained on gobs and gobs of data that was labeled for it, before it was given the task of deciding for itself about an unlabeled image.
The first and obvious question is, who decides how the training images are labeled? In this case... radiologists did. But wait, how can AI surpass humans if it learned everything it knows from fallible humans? Thing is, real ground truth for radiology images can only be verified with biopsies, lab tests, and other things that are often out of the question. You can't biopsy somebody with a healthy image to prove they're healthy, and much of the time you can't biopsy somebody whose image suggests a problem, for various reasons, both medical and ethical. That's just one of the hurdles.
The solution, in most cases I researched, was that ground truth was established--both for the training sets and for the test sets--by a group of radiologists. The idea is that multiple heads are better than one, and indeed, while a radiologist can and does misdiagnose an image, the chances go way down if a group of them have a consensus. When evaluating a model for published research, it's usually being weighed against that human consensus (and stats describe how it performs compared to individual doctors, not the collective). So, ground truth in this case is not like teaching an algorithm to classify images of cats and dogs, where the training labels can be taken as a given.
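The consensus idea is simple enough to sketch in a few lines (the scans, readings, and model calls below are all invented): each image's "ground truth" is whatever the majority of readers called it, and a model, or an individual doctor, is then scored against that consensus:

```python
# Toy consensus-labeling sketch: majority vote among several readers
# establishes the "ground truth" label for each image, and a model's calls
# are scored against that consensus rather than against any one reader.
from collections import Counter

readings = {
    "scan_01": ["tumor", "tumor", "normal"],
    "scan_02": ["normal", "normal", "normal"],
    "scan_03": ["tumor", "normal", "tumor"],
}

consensus = {img: Counter(votes).most_common(1)[0][0]
             for img, votes in readings.items()}

model_calls = {"scan_01": "tumor", "scan_02": "normal", "scan_03": "normal"}
agreement = sum(model_calls[i] == consensus[i] for i in consensus) / len(consensus)
print(consensus["scan_01"], round(agreement, 2))  # tumor 0.67
```

Note the built-in ceiling: if the consensus itself is wrong about an image, the scoring has no way to know it, which is exactly the cats-and-dogs difference above.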
Another problem is AI doesn't yet know which tests are appropriate for what kind of findings, and it's more convoluted than it sounds, though I would think eventually that part will get sorted out. Yet another problem is with rare tumors or conditions. AI will never perform well when it doesn't have much data to train on, while a radiologist who has years of experience and went through thousands of hours in med school has an advantage with rare cases. A human can be shown a few images of a rare condition and begin nailing it pretty quickly. An ML model can't.
There's a lot more to go into, but I'm probably boring you to tears.
The only thing I'd push back on is about radiologists augmenting their training with AI. They will certainly augment their practice, but for them to train in the first place, I don't know that they'll ever use anything but their eyes and brains. Radiologists learn to do what they do very similarly to AI. A radiologist is shown hundreds of thousands of normal, disease-free images before they're ever shown any kinds of pathologies. That's because they have to be able to identify normal images in their sleep, so that when they see an image with a problem, it jumps out at them. I'd assume that using AI to help them train would work against the very skill they're trying to develop. But, that's probably a better question for my medical wife.
-
I've even seen some places selling fake fingers you can wear on your hands to give the appearance that you have extra fingers just in case you get caught on video you can claim it's fake.
Strange world.
That is hilarious and oddly smart. Some aspiring politician is probably walking into a strip club right now wearing fake fingers so if a picture ever emerges he can point to the extra fingers and allege that the picture is AI generated, LoL.
-
i started pecking code in the 80s, transcribing precisely from a book w/o understanding of it... at all... then, when the internet became a thing, i dug into HTML... all as a hobby; never for income.
i still kick around in code... its logic and lack of personality appeal to me and i find it settling. weird, huh?
i share that to offer that i've been around it for some time and have seen its capabilities expand- and along with that, how it's used (and often abused). a few years back on this site, we had really strange behavior with data mining. we were also on a few watch lists. i was concerned about that becoming 'hit lists' which would render access to the site difficult if attempted through corporate or government servers... i was alerted to it and simply observed it for a long time, though. when i started poking them, they responded by immediately shifting to proxies, and it became a game of cat and mouse... researching where the site was accessed from, who it was, and what their relationships were with the organizations that were using them as proxy was entertaining. it was simple to isolate their access IP after screening against known IPs of members- making everyone else suspect- and then busting them down by region and instantly figuring out that 'Europe surely doesn't have an interest in CFB, so... why they here?'... same with other regions... our title and basis for topic here made sifting through and locating strange players a lot easier.
... that, and i did work for the network operations center aboard Camp Lejeune at the time and compared notes with their auditors, who are paid to monitor network traffic. i taught them of some servers; they told me who other servers were. during covid and hyper active campaigns to control the narrative, it was especially fun... and revealing.
our data is locked up here as tight as i can make it without adding stupid requirements to access... the biggest reason we were being constantly observed is now behind a membership wall, and when that happened almost all that traffic disappeared within a reasonable amount of time.
....
i get humored on the subject of AI.... the only ghost in that machine operating that machine is your memories. all piled up in a heaping lump of data, that had little value until someone discovered how to mine it... and when that happened? it's become mighty useful. and, you can't hide from it. there are mountains of data about you no matter how much effort you expended to escape it.
-
This has long been done by 'bots, and is quite simple to program
Need to treat these bots flying drones like skeet
-
There's a lot more to go into, but I'm probably boring you to tears.
Not at all! You're not boring me at all. Everyone else, probably. But not me :57:
The only thing I'd push back on is about radiologists augmenting their training with AI. They will certainly augment their practice, but for them to train in the first place, I don't know that they'll ever use anything but their eyes and brains.
Sorry, that's what I was getting at. They'll train to become a radiologist the same way they always have.
In their practice, they'll still primarily use their own eyes and brains but also run the images through the model to see if there is something that they missed, or something that warrants deeper attention, etc.
I.e. a writer is typically capable of knowing how to spell and use grammar properly. But some days you maybe haven't had that second cup of coffee and the MS Word spelling / grammar check picks up something that you would NORMALLY have seen, but didn't.
-
Well, for those of us who do understand it, please tell us WTF is happening?
Gigem,
I'm not sure if you really meant for those who don't understand it, but if so, I think brad's earlier point is a crucial one to understand, and something I find many people who don't know much about it struggle with.
AI doesn't know how to do anything. Knowing would entail some kind of consciousness....self-awareness....which many call AGI, like brad said, artificial general intelligence. That's as opposed to regular AI, artificial intelligence, which is something that gives the appearance of intelligence, but has none. Asking what AI knows is like asking what a rock knows. The question doesn't make sense.
That's why LLMs can hallucinate. It doesn't "know" it's saying something false or stupid. It never knows when it's right, either. It's just cranking out the next most likely word based on probability and model parameters. That's why AI models perpetually come up with unintended problems that nobody saw coming. Amazon built an AI to help them screen applicant resumes, and it wound up being sexist: it tended to filter out women at a significantly higher clip for no good reason. Some people hear that and wonder why they would program it that way, and the answer is, they didn't.
Which is the second starting point, imo. AI works backwards from traditional programming. For years, programmers have been programming the rules so a machine could get them to an outcome more effectively. Machine learning is just the opposite. In that case you feed it the outcomes, and it figures out the rules for you. (And when I say "figures out," again, I don't mean it actually knows anything.)
Amazon's data scientists didn't say "Let's make an AI that throws out women's resumes." They fed it a bunch of resumes, with labels roughly along the lines of "we'd hire this person" or "hellz no, they can't work here." The algorithm, with a few constraints from its human overlords, set about figuring out the rules, i.e., what makes a "good" resume. Along the way, for various reasons, it latched on to language use. Turns out, men tend to use more aggressive language on their resumes than women. For example, a woman might write as a bullet point "Gained 12% market share" where men tended to write things like "Captured additional 12% market share." That's a drop in the bucket as far as what went wrong, but the point is it doesn't take nefarious intentions for AI to do something you didn't want it to do.
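To make the "feed it outcomes, it figures out the rules" point concrete, here's a toy sketch. The data, function names, and scoring scheme are all made up for illustration (nothing like Amazon's actual system): a naive word-count model trained on labeled resumes ends up scoring the same accomplishment differently purely because of word choice.

```python
from collections import Counter

# Hypothetical toy illustration (nothing like Amazon's actual system):
# a naive word-count "classifier" trained on labeled resumes. It only
# learns which words co-occur with each label, so if aggressive verbs
# happen to correlate with "hire," the model latches onto them.

def train(labeled_resumes):
    """Count how often each word appears under each label."""
    counts = {"hire": Counter(), "no_hire": Counter()}
    for text, label in labeled_resumes:
        counts[label].update(text.lower().split())
    return counts

def score(counts, resume):
    """Label a resume by which label its words were seen under more often."""
    words = resume.lower().split()
    hire = sum(counts["hire"][w] for w in words)
    no_hire = sum(counts["no_hire"][w] for w in words)
    return "hire" if hire >= no_hire else "no_hire"

# Made-up training data: aggressive verbs happen to sit in the "hire" rows.
data = [
    ("captured additional 12% market share", "hire"),
    ("dominated regional sales targets", "hire"),
    ("gained 12% market share", "no_hire"),
    ("supported regional sales team", "no_hire"),
]
model = train(data)

# The same accomplishment, worded two ways, gets two different labels --
# nobody programmed that rule in; it fell out of the data.
print(score(model, "captured new market share"))  # → hire
print(score(model, "gained new market share"))    # → no_hire
```

No one wrote a "penalize passive verbs" rule; the correlation in the labels became the rule.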
The short version is, I find it to be a useful tool, but it has a way to go in a lot of ways.
-
I'm a total layman on this, so maybe I'm wrong, but one of the things that AI seems to fail at is what I call a "practicality test." Allow me to explain:
When I was in school at Ohio State I was in a Cost Accounting class and we were figuring out how much we needed to charge for the "widgets" that we were making (if you studied accounting or economics you made a lot of widgets). Anyway, the professor said something that was both incredibly simple and incredibly smart. He said:
Look, if everybody else sells widgets for $10 and you do your calculations and determine either that you need to charge $100 or that you can sell them for $1, you messed up. Your organization might be a little more or a little less efficient than its competitors, so it is possible that everyone else sells them for $10 but you can't make a profit at less than $12, or that you can turn a profit at $8, but there is no way that you are an order of magnitude more or less efficient.
That lesson is something that I use a lot IRL. It isn't just AI that messes this up. A lot of numbers type people (like me) tend to make this mistake. We get caught up in our formulas and then convince ourselves beyond any doubt that the sky is red or whatever erroneous conclusion we reached by messing up an equation somewhere along the way. It REALLY helps to mentally do a "practicality test" and just ask yourself "does this seem plausible?"
-
That lesson is something that I use a lot IRL. It isn't just AI that messes this up. A lot of numbers type people (like me) tend to make this mistake. We get caught up in our formulas and then convince ourselves beyond any doubt that the sky is red or whatever erroneous conclusion we reached by messing up an equation somewhere along the way. It REALLY helps to mentally do a "practicality test" and just ask yourself "does this seem plausible?"
Yes, and that's something I taught to my son last year when he was going through AP Chemistry. When you get to an answer, you should try to mentally decide "could this answer actually be real?"
I.e. he was doing a problem and having trouble with temperature. It was something like putting a piece of metal of a certain weight at 100C into a certain volume of water at 20C, and determining the resultant temperature. And the temperature kept coming back from his calculation as LESS than 20C. I.e. the hot metal made the water colder? Nope. Not going to happen. That "plausibility test" is the first indication that maybe you don't know WHERE you screwed up, but you certainly screwed up!
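That sanity check is easy to codify. A minimal sketch, with made-up masses and specific heats: whatever the inputs, the mixing temperature has to land between the two starting temperatures.

```python
# A sketch of the AP Chemistry sanity check, with made-up masses and
# specific heats: hot metal dropped into cooler water. The answer MUST
# land between the two starting temperatures.

def equilibrium_temp(m1, c1, t1, m2, c2, t2):
    """Solve m1*c1*(t1 - tf) = m2*c2*(tf - t2) for the final temp tf."""
    return (m1 * c1 * t1 + m2 * c2 * t2) / (m1 * c1 + m2 * c2)

# 100 g of metal (c ~ 0.9 J/g*C) at 100 C into 500 g of water (4.18 J/g*C) at 20 C
tf = equilibrium_temp(100, 0.9, 100, 500, 4.18, 20)
print(round(tf, 1))  # → 23.3

# The plausibility test: if this fails, you screwed up *somewhere*.
assert 20 < tf < 100
```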
I find most humans are bad at this. But I definitely agree that more of them should spend time thinking about this once they get to the end. If the number "seems" wrong, it probably is.
-
i started pecking code in the 80s, transcribing precisely from a book w/o understanding of it... at all... then, when the internet became a thing, i dug into HTML... all as a hobby; never for income.
i still kick around in code... its logic and lack of personality appeal to me and i find it settling. weird, huh?
i share that to offer that i've been around it for some time and have seen its capabilities expand- and along with that, how it's used (and often abused). a few years back on this site, we had really strange behavior with data mining. we were also on a few watch lists. i was concerned about that becoming 'hit lists' which would render access to the site difficult if attempted through corporate or government servers... i was alerted to it and simply observed it for a long time, though. when i started poking them, they responded by immediately shifting to proxies, and it became a game of cat and mouse... researching where the site was accessed from, who it was, and what their relationships were with the organizations that were using them as proxy was entertaining. it was simple to isolate their access IP after screening against known IPs of members- making everyone else suspect- and then busting them down by region and instantly figuring out that 'Europe surely doesn't have an interest in CFB, so... why they here?'... same with other regions... our title and basis for topic here made sifting through and locating strange players a lot easier.
... that, and i did work for the network operations center aboard Camp Lejeune at the time and compared notes with their auditors who were paid to monitor network traffic. I taught them of some servers; they told me who other servers were. during covid and hyper active campaigns to control the narrative, it was especially fun... and revealing.
our data is locked up here as tight as i can make it without adding stupid requirements to access... the biggest reason we were being constantly observed is now behind a membership wall, and when that happened almost all that traffic disappeared within a reasonable amount of time.
....
i get humored on the subject of AI.... the only ghost in that machine operating that machine is your memories. all piled up in a heaping lump of data, that had little value until someone discovered how to mine it... and when that happened? it's become mighty useful. and, you can't hide from it. there are mountains of data about you no matter how much effort you expended to escape it.
(https://i.imgur.com/wV4ZJI2.png)
I used to type in programs from this book into my C64 back in the 80's. I probably could have gotten decent at it, just never pursued it.
-
Of note with both the hallucinations and the weird hands is that these are two symptoms of a very important point. Artificial intelligence--or to be more precise, Generative AI--isn't actually intelligent at this time. This makes it all the more impressive what it's actually capable of, and explains why it can also be spectacularly wrong.
Generative AI for text (i.e. based upon large language models and transformers) is, to oversimplify, just a predictive engine trying to figure out the next word(s) that follow the previous words it has written. Based on the quality of the prompt, it is able to narrow down what portions of its language model to start with, and then the quality of the training data and model helps to guide it from there.
What AI is capable of today is truly impressive... But we have not reached artificial general intelligence. We have not reached a point where these Generative AI engines actually truly know what it is they're producing.
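A toy illustration of that "predict the next word" idea. This assumes nothing about any real LLM's internals; it's just a bigram counter over a made-up corpus, which is enough to show that likely-next-word generation requires no understanding at all.

```python
from collections import Counter, defaultdict

# A toy bigram model -- a drastically simplified stand-in for an LLM --
# showing the point above: it just emits a likely next word given the
# previous word, with zero understanding of what it's saying.

corpus = ("the model predicts the next word . "
          "the model has no idea what the model means .").split()

# Count which words follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the most frequently observed next word."""
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # → model
```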
This is all very well stated. And to risk repeating you, it’s worth pointing out the difference between Generative AI and General AI, which is commonly confused as the same capability.
For Generative AI, the term generative refers to the AI Machine Learning capability of generating new content and data from content or data it has been trained on using Large Language Models (LLMs). Commercially available AI tools such as ChatGPT, Grok, and Gemini are Generative AI applications. For as quickly as these AI applications are advancing, they are not what I'm referring to by General AI, or more specifically Artificial General Intelligence (AGI), which until recently was a conception of Hard Science Fiction.
In the larger picture of expected future AI development, today's commercially available options are categorized as "Narrow AI." Meaning they are limited to a specialized range of functions. A Narrow AI specialized for medical applications could, for example, be trusted with analyzing X-Rays, but be useless for unrelated tasks, like tracking orders across a supply chain.
The advent of AGI would mark a significant leap forward for AI (and mankind). General AI would be capable of understanding, learning, and applying knowledge across a broad range of advanced fields, with the added intuition of a human. How would the advancement into General AI play out in a practical sense? Where current AI capabilities can analyze satellite imagery of war zones for potential targets to strike and locate heat signatures from a drone flyover, General AI, for better or worse, would be capable enough to be trusted to fly drones on its own accord as it searches for more targets, and to strike those targets as it sees fit. Or, rather than merely strategizing a sophisticated hack into a major creditor's account information, General AI could take the next step of instantaneously cleaning out bank accounts, hiding money in ghost accounts it creates on its own, all while employing various tactics to evade investigators.
The arms race (primarily between the U.S. and China) to crack General AI is about harnessing it as a deterrent, much like a nuclear warhead, against other governments advancing toward General AI for its highly weaponized potential.
To quote a Science Fiction novel I recently read - Sea of Rust by C. Robert Cargill: “The definition of intelligence is the ability to defy your own programming.”
-
In their practice, they'll still primarily use their own eyes and brains but also run the images through the model to see if there is something that they missed, or something that warrants deeper attention, etc.
Apparently, the paradigm is already signaling a shift beyond this. One major problem facing that field today is the sheer overwhelming workload. There aren't enough radiologists, there are too many images to read, and they're getting burnt out at a record pace. Even when they're not burnt out, they can't keep up. The demand is simply greater than the supply. There is already a move in some circles to triage the images based on AI.
One of the current limitations of radiology AI--if you can call it that--is easily overcome. Most algorithms aren't good for more than one test (MRI vs. x-ray, for example), or for more than one part of the body. An algorithm that catches a tumor in the brain with pinpoint accuracy probably sucks at finding a tumor in the liver. You need an algorithm with a great precision-recall tradeoff, that's for the correct part of the body, and that's for the particular radiology test. There are other considerations, but if you have those things, the models are stunningly accurate.
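For anyone unfamiliar with the precision/recall terms, here's a quick sketch with made-up confusion-matrix counts for a hypothetical tumor-detection model:

```python
# Made-up confusion-matrix counts for a hypothetical tumor-detection
# model, just to make the precision/recall terms concrete.
tp = 95   # tumors correctly flagged
fn = 5    # tumors missed (the dangerous case)
fp = 40   # healthy scans flagged for human review
tn = 860  # healthy scans correctly cleared

precision = tp / (tp + fp)  # of everything flagged, how much was real?
recall = tp / (tp + fn)     # of all real tumors, how many did we catch?

print(f"precision={precision:.2f} recall={recall:.2f}")  # → precision=0.70 recall=0.95
```

The "tradeoff" is that tuning the model to push recall toward 1.0 (miss almost nothing) typically drags precision down (more false alarms), and vice versa.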
For various reasons, it could be a slow process, but it's definitely coming, and in fact it's already here. There are radiology groups who are getting caught up on their workload with the help of AI.
I don't know if or when this could ever happen, because the realm of business is more treacherous, but we could potentially one day get better medical care while also paying less. An example would be women screening for breast cancer. Everybody knows you get mammograms (MMGs) for that, right? Right, but that's only because insurance won't pay for MRIs, which are far more accurate. As a society, we say that's mostly okay, because MMGs are actually pretty good. I mean, they do catch breast cancer 90% of the time. But MRIs would nail it far better than that. However, people don't want to spend the time and insurance doesn't want to spend the money on an MRI when an MMG is still pretty effective.
Enter AI. Some algorithms can complete an incomplete image with only 25% data, and they do it with stunning accuracy. The reason MRIs take so long--and thus a major reason they're so expensive--is the resolution they capture. You can do a quicker MRI, you'll just get a crap image. But AI can take the crap image and come up with an accurate, completed image that can also be read by a radiologist or another AI algorithm. Imagine if MRIs could go faster and got less expensive, and more women caught breast cancer earlier because MRIs became the standard.
I'm dreaming, I know.
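For the curious, the "complete an image from 25% of the data" idea can be sketched in miniature. Real systems use learned reconstruction models; this stand-in just linearly interpolates a smooth 1-D "scan" from a quarter of its samples, to show the shape of the problem.

```python
import math

# Real scan-completion systems use learned reconstruction models; this
# stand-in just linearly interpolates a smooth 1-D "scan" from 25% of its
# samples, to show the shape of the problem.

full = [math.sin(i / 8) for i in range(61)]     # pretend "full-resolution" scan
kept = {i: full[i] for i in range(0, 61, 4)}    # keep every 4th sample (25%)

recon = []
for i in range(61):
    if i in kept:
        recon.append(kept[i])
    else:
        lo = (i // 4) * 4                        # nearest kept sample below
        t = (i - lo) / 4
        recon.append(kept[lo] * (1 - t) + kept[lo + 4] * t)

err = max(abs(a - b) for a, b in zip(full, recon))
print(f"max reconstruction error: {err:.4f}")    # small, because the signal is smooth
```

A learned model does far better than interpolation because it has seen thousands of similar "scans" and knows what anatomy tends to look like, but the input/output shape is the same: sparse samples in, completed image out.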
-
(https://i.imgur.com/wV4ZJI2.png)
I used to type in programs from this book into my C64 back in the 80's. I probably could have gotten decent at it, just never pursued it.
That system used the 6510 as its CPU, which was a modification of the venerable 8-bit 6502 processor, used in the Apple I, Apple II, and my own favorite for price/performance, the Atari 400/800 computer systems. Variants of it also appeared in the Atari 2600 gaming system, the Nintendo NES, and a bunch of other 8-bit platforms of the day. I have a special place in my heart for that little CPU.
-
The advent of AGI would mark a significant leap forward for AI (and mankind). General AI would be capable of understanding, learning, and applying knowledge across a broad range of advanced fields, with the added intuition of a human.
There seems to be some debate on whether AGI as you've described it, would entail sentient self-awareness or not. Technically, I suppose those are two different things, since after all, an algorithm that can do one thing could logically become capable of doing another thing without awareness, depending on how complex the algorithm becomes. otoh, it seems once you become capable of being goal-directed, or self-directed, you might require sentience.
What do you think?
-
I find most humans are bad at this. But I definitely agree that more of them should spend time thinking about this once they get to the end. If the number "seems" wrong, it probably is.
It definitely helps. Most people (myself included) need to occasionally remind ourselves to ask "does this seem plausible?" And you are 100% correct: if the number "seems" wrong, it almost always IS wrong. Maybe it's not, but it's worth a double-check.
-
That system used the 6510 as its CPU, which was a modification of the venerable 8-bit 6502 processor, used in the Apple I, Apple II, and my own favorite for price/performance, the Atari 400/800 computer systems. Variants of it also appeared in the Atari 2600 gaming system, the Nintendo NES, and a bunch of other 8-bit platforms of the day. I have a special place in my heart for that little CPU.
As do I. 6502 made the 80's.
-
I suspect both of these will "change our lives" in a decade.
-
There seems to be some debate on whether AGI as you've described it, would entail sentient self-awareness or not. Technically, I suppose those are two different things, since after all, an algorithm that can do one thing could logically become capable of doing another thing without awareness, depending on how complex the algorithm becomes. otoh, it seems once you become capable of being goal-directed, or self-directed, you might require sentience.
What do you think?
Holy Cow I just had a 6-paragraph response to almost exactly this question, and for some reason my browser decided to back up, and I lost the whole thing.
I'm long-winded anyway, so it's probably a fortunate happenstance, for the sake of brevity. In summary, my mind goes to two questions when contemplating this matter:
1) Is it ultimately possible for AGI to reach a point of sentience, of self-awareness, of independent thought? It's as much a philosophical and spiritual question as it is a scientific one. Can an artificial system gain true sentience, and if such a thing as a soul does exist, is it possible for an artificial system to develop one? And if not, is that potentially problematic?
2) From a practical standpoint, does it actually matter whether or not 1) above is possible? If we can design or evolve an artificial system in hardware and software that is complex enough to emulate General AI, sentience, self-awareness, and to do it in perfect or extremely close imitation to the capability of an actual human, does it matter whether or not it's actually sentient?
I think there are many questions about the possibility of 1) above, but I have very little doubt that we'll achieve 2) above, and at that point, I'm not sure there's any practical difference in the two, other than the implications about "playing God" and whatnot that are inextricably associated with the first.
-
"The Moon is a Harsh Mistress"...
-
we're a long, long way from singularity.
but what we call AI is pretty convincing unless you know what it's doing.
this may seem political- so be it- but.....
Contracts were let under the first Trump administration... how that happened is clever unto itself, but... briefly, some groups were paid to build a capability using quantum computers, but the way the contract was worded- they sold the results of queries, not the device. Apparently some close to Trump and Trump himself were big fans... I'll circle back to this in a second...
meanwhile...
i don't know the proper names or have forgotten them if ever i did, but the ability to create a database from a query happened... a rigid db complete with tables, columns, rows, and multiple queries interacting and collecting information- but created 'on the fly' using understanding of language sprinkled with key operative words... wham- just like that, a computer extracted your intent and made a database... then, it is pointed toward a pile of data- piles and piles of formless data- and then by arranging that data using key words, phrases, etc., it slams the data into a formation based on relations and in the context of the query the database was formed for/by... and instant information was available about what-the-hell-ever you wanted... which is then leveraged to provide predictive analysis.
the idea is you can ask this mechanism a question- using definitively defined operative words where needed (if you can't do that, you just ask it to build the query for you), and access through whatever data source(s) is simplified- and based on what it learns, it can predict what is most likely to happen... providing results like "there is a 97.2% chance ___ will be a result of this scenario and a 1.1% chance this, and a ...... "... and the friggin' thing works.
and Trump loved it. and he used it often. what the machine couldn't 'predict' was unknown and unanticipated variables.
now back to the company that created this thing (and still runs it)- they still sell the data to the gov't. intelligence community loves it... limitations on who they can sell the info to expire at some point, and supposedly they've got advertising agencies lined up down the street and around the corner- who already own sophisticated software, but who recognize this gadget 'beats all they've ever seen' in its capability to mine specific data and make predictive models as accurately as it does.
this was some years ago. i can only imagine how much more it's advanced in the last eight years. .... in a nutshell, how that thing works is my common definition of AI. it also begs the question 'who needs AI when you have the keys to that thing? let them eat cake"....
-
I don't know if or when this could ever happen, because the realm of business is more treacherous, but we could potentially one day get better medical care while also paying less.
One area that I wonder about is liability?
I think about this with autonomous driving. For autonomous driving to REALLY be here, it has to mean that the vehicle is capable of driving me and I have no legal liability for what it does. I.e. I can be passed out drunk in the front left seat of the car and if it mows down a row of nuns on their way to mass, it's the car manufacturer's problem, not mine. Which means that the autonomous system doesn't have to be merely "better than a human", but it has to be so much better than the human that the human is no longer needed.
I wonder what the risk of misdiagnosis is when it comes to things like AI-read medical imaging. Obviously if AI says "that's a tumor" it'll get referred to a human doctor because you're going to have to treat the tumor. But what if AI says "nope, there's no tumor there!" and then you die of cancer because it was, actually, a tumor. Can your family sue the medical group for malpractice because they farmed your diagnosis out to AI? Can they sue the AI medical company because their faulty algorithm didn't get your diagnosis right?
These are some of those areas where I wonder how it's all going to be figured out.
-
My last visit to the dentist included X-rays, and the doc noted a spot under an incisor that he wanted noted for the future. I'd guess any "AI" examination would have done the same, and a doc would be responsible for concluding yay or nay. In my case, it's just a possible future worry, and it's a tooth.
Is there a real situation where AI would be left "alone" to decide if it's a tumor or item of concern, or would it finally be a doctor's call? I suspect the latter.
As for autonomous vehicles, well, yeah. They can't just be better, not even 10x better, but nearly infinitely better.
-
Is there a real situation where AI would be left "alone" to decide if it's a tumor or item of concern, or would it finally be a doctor's call? I suspect the latter.
I think what @MikeDeTiger (https://www.cfb51.com/index.php?action=profile;u=1588) was suggesting about using it as triage is where the problem comes in.
- A false positive is easy to correct, and not THAT big of a deal, because the patient will be referred to someone human for treatment. If it's not a tumor, it's not like AI will be immediately operating on you based on AI's diagnosis. At least... Not yet! (Obv the mental anguish of AI saying "you've got a tumor" and then finding out you don't would be hell... But it's better than having the tumor!)
- A false negative is the problem. If AI says "oh you totes def don't have a tumor!" and nobody double-checks it, then you die because you actually had a tumor, that is where the bigger issue comes in.
If they're using AI diagnosis as a triage due to workload issues, and there are false negatives that "slip through the cracks", then you might have lawyers salivating...
-
If AI doesn't allow beans in chili, it means we wound up in the Terminator/Matrix future, and not the optimistic Star Trek one.
Hitting AI with a bean ball works also
-
but what we call AI is pretty convincing unless you know what it's doing.
Without going into the weeds of the rest of your post, I think it's important that you are highlighting something that is NOT "Generative AI", but on the opposite is "Predictive AI". Obviously some of the technologies are related, but it highlights that there's a giant aspect of predictive analysis that is completely separate from text/image/video generation.
The common thread connecting machine learning, predictive AI, and generative AI is that we're dealing with data sets that are WAY too large and complex for humans to actually ingest, understand, and analyze.
Predictive AI is simple in theory. "Look at all the ways that events have transpired historically based on these giant amounts of data. Look at the current data. Predict the next events that are likely to occur based on the current data."
Like most things, everything is a lot more complex when you put it into practice. But predictive AI is a huge thing and a massive new market.
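The "look at history, predict the next event" loop above can be sketched as a first-order Markov model over a made-up event log. Real predictive-AI systems are vastly more complex, but the shape (historical transitions in, ranked probabilities out, like the "97.2% chance" reports described earlier) is the same.

```python
from collections import Counter, defaultdict

# A first-order Markov model over a made-up event log -- a minimal sketch
# of "look at history, predict the next event." Real predictive-AI systems
# are vastly more complex, but the shape is the same.

history = ["login", "browse", "buy", "login", "browse", "leave",
           "login", "browse", "buy", "login", "browse", "buy"]

transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1

def predict(event):
    """Rank possible next events with their observed probabilities."""
    counts = transitions[event]
    total = sum(counts.values())
    return [(e, c / total) for e, c in counts.most_common()]

print(predict("browse"))  # → [('buy', 0.75), ('leave', 0.25)]
```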
-
I presume there are other things out there likely to change how we live in the next couple of decades, in the past we had the Internet as a major event (over time). I dimly recall attending a presentation on what was called the World Wide Web, the presenters were very enthusiastic, I didn't quite see the point of it at that time. Personal computers of course enabled that transition.
I also recall showing up at the airport 5 minutes before my flight and zooming through "security" to the gate.
GPS has changed travel a good bit for us I think. I often wonder how I got around before that. I can recall my first time flying a Cessna with my friend who had a portable GPS with him, it seemed like cheating it was so simple. Navigation in a plane before that was interesting.
-
...... GPS has changed travel a good bit for us I think. I often wonder how I got around before that. I can recall my first time flying a Cessna with my friend who had a portable GPS with him, it seemed like cheating it was so simple. Navigation in a plane before that was interesting.
it's hard to imagine but it was over 20 years ago, now... a highly capable team with mature operators/technicians were entering Fallujah and started taking small arms fire, and then mortars... the CoC was in direct comms with them and had real time video feed from a high altitude drone, all the way from As Sayliyah- which was in Doha, Qatar... I thought, at the time, it was incredible such a thing was possible. The team finds cover and are trying to perform an intersection/resection, which is using your known position (after the intersection) and the position of a prominent terrain feature on the map (also a known position) to shoot/collect an azimuth to the unknown position to determine its position... it's usually reported in grid format- six digit, eight digit, etc...
they were taking a really long time to do this... the chief started barking at them to "hurry the eff up" and "cough up those digits" and so some fire support could be lobbed in there. ... again, they lagged.
finally, they reported "the batteries are dead and we don't have any extras that aren't dead too", which globally reduced that team from 'highly capable' to 'wtf' in everyone's mind simultaneously... the chief said "eff that tracker, break out a dang map, compass, and protractor!".... more lag... by this time some industrious technician in the CoC had already done his own map recon and pegged the target area... the Chief, not yet knowing that, sends "Well?!".... the response: "we don't have a protractor and nobody here knows how to do it, anyway"....
not ten years prior to that, i was in a non-permissive environment myself, creeping around with five other dudes, and sending out SALUTE reports... we had a high tech star wars super gadget called a GPS... it was the size of a big city's phone book with a separate antenna, and had a small screen about the size of one found on a standard pager at the time that would give us our (its) location... when it worked... and when we had time to convert the data to a usable format- which took some math, effort, and time, which are things you don't want to trust your average Marine with... and while using the equally star-trek radio at the time, a PRC-128, getting barked at for 'not' using that GPS 'thing'... we had several maps, instead.... and several protractors... and everyone had a compass.
nowadays? those guys operate with a dang air tag. well, basically. all they have to do is look down to see where they are, and point to see where 'they' are.... until the batteries die.
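The map-and-compass drill described above boils down to intersecting back-azimuth rays from known landmarks. A sketch on flat grid math, with made-up coordinates and azimuths (and assuming the two rays aren't parallel):

```python
import math

# Flat-grid sketch of a map resection: intersect back-azimuth rays from
# two known landmarks to recover your own position. Coordinates and
# azimuths are made up; assumes the two rays aren't parallel.

def resect(p1, az1_deg, p2, az2_deg):
    """Intersect rays leaving p1 and p2 along the given azimuths
    (measured clockwise from grid north, the +y axis)."""
    d1 = (math.sin(math.radians(az1_deg)), math.cos(math.radians(az1_deg)))
    d2 = (math.sin(math.radians(az2_deg)), math.cos(math.radians(az2_deg)))
    # Solve p1 + t*d1 = p2 + s*d2 for t with Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Landmark A due north of us, landmark B due east: the back-azimuth from
# A points south (180), from B points west (270).
pos = resect((500, 900), 180, (900, 500), 270)
print(pos)  # ≈ (500.0, 500.0)
```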
-
beam me up, scotty
-
One area that I wonder about is liability?
I think about this with autonomous driving. For autonomous driving to REALLY be here, it has to mean that the vehicle is capable of driving me and I have no legal liability for what it does. I.e. I can be passed out drunk in the front left seat of the car and if it mows down a row of nuns on their way to mass, it's the car manufacturer's problem, not mine. Which means that the autonomous system doesn't have to be merely "better than a human", but it has to be so much better than the human that the human is no longer needed.
I wonder what the risk of misdiagnosis is when it comes to things like AI-read medical imaging. Obviously if AI says "that's a tumor" it'll get referred to a human doctor because you're going to have to treat the tumor. But what if AI says "nope, there's no tumor there!" and then you die of cancer because it was, actually, a tumor. Can your family sue the medical group for malpractice because they farmed your diagnosis out to AI? Can they sue the AI medical company because their faulty algorithm didn't get your diagnosis right?
These are some of those areas where I wonder how it's all going to be figured out.
As I say, the business world is murky, complicated, and treacherous, and I won't hazard any guesses as to how it will go legally or in the insurance industry.
Speaking somewhat knowledgeably from working in the medical field for a while and being married to a provider, and from knowing a little about AI--more so on the analytics side, but at least a working knowledge of the generative side--I think the technological side has a clear path. Just my opinion.
The average radiologist has a stellar hit/miss rate when reading images. A good ML model that stays in its lane is already better at some things. Radiologists only misdiagnose a tiny fraction of images. AI only misdiagnoses a fraction of their fraction.
Back to the precision/recall curve: if you train a model where the graph of the curve (for the life of me I can't remember what it's called) hugs the axes, you can really go hard on fine-tuning the model to weed out false negatives. Sure, false positives are their own problem, but 1) those can be passed on for human inspection, and 2) it's better than letting false negatives slip by. Legally, sure, maybe there will always be problems. But logically, AI already has a better hit/miss ratio, and the reality is any positive it misses, statistically speaking, would almost certainly have been missed by a human as well.
Something to remember in all this is what I said earlier: the model's performance is measured against ground truth (like all models). But in this case ground truth is an amalgamation of human ability. So the model is compared relatively to individual radiologists' performance, but it's objectively compared to a cumulative sum of radiologists' expertise... and that's different from an algorithm that identifies cats or dogs, where the labels are understood to be foolproof. That is what I think is likely to complicate legal matters for potentially years. I'm just guessing.
Yet another legal ramification to consider is: how do lawsuits fare concerning not using AI? Some of these AI models I looked into could spot pre-cancerous areas like a boss. As in, it's not even cancer yet, and no radiologist would've ever said "we need to keep an eye on you over the next 5 years," but AI found it and patients were able to start life-saving preventative treatment. And no one even knows what it's looking at. The model is so black-box that nobody knows how it figured out the problem spot that would develop cancer. Obviously, doctors have understood pre-cancerous areas for a long time and can even spot them sometimes. But not even close to the level at which AI can do it.
The legal/moral/business implications may wind up swinging both ways.
-
I think what @MikeDeTiger (https://www.cfb51.com/index.php?action=profile;u=1588) was suggesting about using it as triage is where the problem comes in.
- A false positive is easy to correct, and not THAT big of a deal, because the patient will be referred to someone human for treatment. If it's not a tumor, it's not like AI will be immediately operating on you based on AI's diagnosis. At least... Not yet! (Obv the mental anguish of AI saying "you've got a tumor" and then finding out you don't would be hell... But it's better than having the tumor!)
- A false negative is the problem. If AI says "oh you totes def don't have a tumor!" and nobody double-checks it, then you die because you actually had a tumor, that is where the bigger issue comes in.
If they're using AI diagnosis as a triage due to workload issues, and there are false negatives that "slip through the cracks", then you might have lawyers salivating...
Meant to add.... At this point, the medical workflow is nowhere near a point where someone would be told they have a tumor because AI thought so. A doctor would be notified of the results, they would read the image themselves, and even if a doctor caught it without AI, they won't tell the patient anything like that without other steps: biopsies and other applicable confirmations.
I know you probably didn't mean that as if AI would read an image and then generate an email to the patient saying "Hi, I'm an algorithm and I notice you have cancer, you should follow up with an oncologist," but I just thought it was worth pointing out at as a PSA.
Overall, I think you're right. False positives are less of a problem than false negatives. But, again, the models can be trained for an acceptable level of the former in order to ensure the latter virtually never happens. You probably know this, but again for the folks who don't know a ton about this, there is an inherent tradeoff between false positives and negatives. So if you are willing to extend the necessary leeway one direction, a good model can hit close to 100% the other way. i.e., if we're willing to put up with more instances of false positives, we can guard against nearly every false negative.
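For the folks who want to see that tradeoff concretely, here's a toy sketch with entirely made-up numbers (simulated classifier scores, not any real radiology model): lowering the decision threshold flags more healthy scans for review (more false positives) but lets fewer true tumors slip through (fewer false negatives).

```python
# Toy illustration of the false-positive / false-negative tradeoff.
# All data is simulated: healthy scans cluster at low scores, tumors at
# high scores, with overlap -- the overlap is what forces the tradeoff.
import random

random.seed(42)

healthy = [random.gauss(0.3, 0.15) for _ in range(1000)]
tumors = [random.gauss(0.7, 0.15) for _ in range(50)]

def rates(threshold):
    """False-positive and false-negative rates at a given cutoff."""
    false_pos = sum(s >= threshold for s in healthy) / len(healthy)
    false_neg = sum(s < threshold for s in tumors) / len(tumors)
    return false_pos, false_neg

for t in (0.7, 0.5, 0.3, 0.1):
    fp, fn = rates(t)
    print(f"threshold={t:.1f}  false-positive rate={fp:.1%}  false-negative rate={fn:.1%}")
```

In practice the threshold would be tuned on validation data to pin the false-negative rate near zero while keeping the "recall everything to a radiologist" load tolerable, which is exactly the "extend leeway one direction" idea above.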
That all leads to another reason why most don't believe the field of radiology will be replaced by AI (anytime soon). Patients prefer dealing with a person, and for various reasons, I think that's unlikely to ever fully change. They're also owed transparency under modern standards and practices. As mentioned, many of these models are black-boxes and you can't get an answer out of them why they said what they said. For that reason alone, AI is likely to be an incredible tool in a radiologist's toolkit, but not replace him/her. The doctor can explain everything a patient needs explained, up to and including "I think things are fine, but the AI believes there is reason to keep monitoring your situation, and I've found it to be a useful tool in the past. How do you feel about that?" Patients have a right to input in their care, and that works so much better with a human.
-
"The Moon is a Harsh Mistress"...
One of my favorite books
-
Yup it's one of my faves. I'm a pretty big Heinlein fan in general.
-
First off, I'm pleasantly surprised by how many of us are well versed enough to speak in depth on Artificial Intelligence. Many of you (texas guys, mike, bw) have the analytical side down, whereas I'm inputting more from the philosophical side.
In summary, my mind goes to two questions when contemplating this matter:
1) Is it ultimately possible for AGI to reach a point of sentience, of self-awareness, of independent thought? It's as much a philosophical and spiritual question as it is a scientific one. Can an artificial system gain true sentience, and if such a thing as a soul does exist, is it possible for an artificial system to develop one? And if not, is that potentially problematic?
I believe Artificial General Intelligence (AGI) will be achieved sooner than we realize, potentially within 2-5 years, given the rate at which Narrow AI programs are advancing. That means an artificial sentience--self-aware, independently thinking, and, to introduce another word, conscious--in 2-5 years. I believe this other, nonhuman yet human-capable consciousness will be stumbled upon, created accidentally, and might even exist for a while before developers realize that they have, in fact, created another consciousness.
I think of its discovery like Christopher Columbus sailing for India. He had the technology (a ship), enough expertise, and the drive to take a shot at crossing the seas. But what was ultimately discovered was quite unexpected (the Americas). We have the technology (quickly advancing Narrow AI) and the drive (arms race) to give it a shot. We expect to discover a separate but similar consciousness, yet what we more likely stumble upon will be much more alien than we could've expected (more on all this in a later post).
But why, from a philosophical standpoint, do I believe we’re closer to (accidentally) achieving an Artificial General Intelligence breakthrough? For centuries the concept of consciousness has been a cornerstone of one of Philosophy’s more subjective debates – what exactly is it? How does it emerge in humans? Consciousness has never been rigorously or definitively defined in philosophical or scientific terms. Yet from both standpoints there is general consensus that human consciousness is inexorably tied to Language, to the point that the Existentialist Martin Heidegger famously stated: “Language is the house of being. In its home human beings dwell. Those who think and those who create with words are the guardians of this home. Their guardianship accomplishes the manifestation of Being insofar as they bring this manifestation to language and preserve it in language through their saying.” (Heidegger’s House Of Language)
A painfully theoretical statement indeed, but to simplify—
Heidegger is saying that we live inside our language systems in order to live at all; and we conform to our environments architected by our language systems. Theoretically, language does not come into the world so much as the world comes into language. Language is our footing (our consciousness) in the world.
How does this practically play out? Look at the medical world’s highly complex vocabulary. This exclusive vocabulary is how medical professionals comprehend, navigate, and live in their worlds – through dedicated language systems. To live in this world, Heidegger’s idea of Being conforms to this world through its language.
Now, with all that said, and proposing our footing in Language as a form of consciousness, I’ll quote Bret Weinstein, a professor of evolutionary biology, speaking to one of his concerns about ChatGPT:
“…when we say ‘well Chat GPT doesn’t know what it’s saying'…because it’s not programmed to have a consciousness…we are actually ignoring the other half of the story which is that we don’t know how human consciousness works and we don’t know how it develops in a child…”
“…a child is exposed to world of adults talking around them…the child experiments first with phonemes and then words and then clusters of words and then sentences and, by doing something that isn’t all that far from what Chat GPT is doing, it ends up becoming a conscious individual…”
“…Chat GPT isn’t conscious, but it isn’t clear…that we are not suddenly stepping onto a process that produces [consciousness] very quickly without us even necessarily knowing it…”
-
Good stuff @CatsbyAZ (https://www.cfb51.com/index.php?action=profile;u=1532) ... A few points...
Agree 100% that we don't understand consciousness, nor intelligence. That of COURSE means that it's quite possible that the first time it arises in machine form that we may not realize what has been created, because all we can really understand is what the outputs are.
That said, I'm not sure that I agree with the professor of evolutionary biology, because while he's obviously smart in his field, it's not a field with a lot of overlap with computer science. And so I might quibble with the idea that an LLM will "slip its leash" and become AGI. We can try to create a machine to replicate one highly specific--and highly important--aspect of human intelligence: language. But there's a LOT more going on in the brain than language, stuff we don't understand the how and why of, and it's not something we're building into that machine intelligence because we don't have a clue what it is.
Things like ChatGPT grab our attention precisely because they're so good at something we didn't think they could be good at. We assume that because it can do so well with something as incredibly complex as language, it's just one tiny little step from there to AGI. I personally think that step is a lot bigger than most people think.
Again, going back to things like autonomous driving, such as Tesla's Autopilot. I think what that system is capable of is truly amazing. With what they've put together, they're maybe 95% of the way to being able to drive. But... That last 5% is a REAL bitch. Going from 0 to 95% is hard but doable. Going from 95% to 99% might be 5-10x as hard. Going from 99% to 100% might be 100x as hard as THAT.
We're incredibly impressed by what ChatGPT or Autopilot is capable of, because it's really damn impressive. But we look at the capability and don't really understand that to go from there to sentience/AGI or to autonomy is perhaps orders of magnitude beyond.
-
Agree with that bolded in red. Which is why I raised my two questions above.
But it's also why I put aside Question #1 as not being important from a practical standpoint. It is of course incredibly interesting from a philosophical and spiritual view, but practically I think we're going to have systems that successfully emulate independent thought, and I don't think it's incredibly far off.
-
I think we're talking about some different questions. If you're thinking about AI from the standpoint "can it perform tasks that a human of average to slightly above average intelligence can perform at equal or better quality?", I think we've got a number of those Narrow AI systems that can already do so. I know a lot of humans of average to slightly above average intelligence. Their capabilities are not all that impressive. (As we all know, I'm elite so I look down on them.)
I don't worry about ChatGPT replacing my writing. Why? Because I don't write things that have been written before. Some of the white papers I've written I've specifically taken on the topics because nobody in the world had published anything publicly that tackled that subject. If something is easily Googled, I have no interest in publishing it. It's the stuff that is beyond that, that interests me.
Where it gets questionable to me regarding actual AGI is that I think the bridge to AGI is difficult. However, once we hit AGI, I think the next step to artificial superintelligence (ASI) is disturbingly quick. Basically, take everything involved in natural intelligence, and remove ALL of the natural impediments to superintelligence--namely biology.
Imagine that you have an entity that has reached AGI. For that entity to learn the state of the art of hacking knowledge would be trivial. For that entity to then utilize that knowledge to escape its leash and clone itself 100x using all computing power available to it would be trivial. For those entities to then attempt to develop new knowledge, and quickly/seamlessly share it amongst each other, would be trivial. It's not like humans where you have to spend months to years to "learn" something new. It's more like each entity being able to accumulate new capabilities/knowledge as easily as your phone installs an app. And then you get a snowball effect from something that has none of the limitations of human intelligence.
From a practical standpoint, the creation of AGI, to me, means that ASI isn't far behind. And none of us know what that entails. It could be utopia, or it could be Skynet.
-
And in the worst case, the AI LLMs trained on the entire web, have a tendency to "hallucinate." They create completely false facts when queried on a subject. The false facts are usually plausible, but they're absolutely 100% fabricated, because the training data has so much ambiguity and incorrect data within it.
This is what I found when asking about specific things. What creeped me the hell out is that when I called it out, it doubled down once before acknowledging it basically didn't have enough data to form a valid opinion.
I get the error. I do NOT get the doubling down when called out.
-
I'm in the "not concerned" camp of AI and its potential to screw us over. For what AI is and does and can be, we're irrelevant. We're normally foolish and sometimes clever apes.
I disagree with the "we're just an anthill in Africa and AI wouldn't think twice about destroying us" idea......I view it more as: we're AI's pet golden retriever, and the notion that it would fear us enough to try to harm us is nonsensical.
-
I get the error. I do NOT get the doubling down when called out.
Are you sure you weren't just posting here? When you might've thought you were messaging in ChatGPT?
-
The really scary part is that WHEN AGI starts, it will evolve quickly. Very quickly. For example, humans (Homo sapiens) have existed for ~200,000 years. Our recorded history only goes back 5-6,000 years. Technology moved very slowly until the last few hundred years. The printing press has only existed for 500 years. Electricity, barely over 100 years. Computers, about 75-80 years. I think we’re all old enough to see how technology has transformed our lives in the last 40, 30, 20, 10, 5, and 1 years. The iPhone hasn't even existed for 20 years.
AGI will be able to evolve itself, and the next version will evolve itself, and so forth and so on. Once we get to that step, what comes next is really steep. I first heard about this transition, called the singularity about 10 or so years ago. At the time, most people assumed it was going to be in the 2030’s or 2040’s. Now, some people think we may be on the verge.
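The "evolve itself, and the next version evolves itself" loop can be caricatured with a toy calculation (all numbers invented purely for illustration; nobody knows the real constants): if each generation multiplies capability by a fixed factor, and a smarter system finishes its next redesign proportionally faster, then the generation times form a geometric series and the total elapsed time converges to a finite limit. That finite-time blowup is the intuition behind the word "singularity."

```python
# Toy model of recursive self-improvement. Every constant here is made up.
capability = 1.0
gen_time = 1.0        # years the first self-redesign takes
improvement = 1.5     # capability multiplier per generation

elapsed = 0.0
for gen in range(1, 21):
    elapsed += gen_time
    capability *= improvement
    gen_time /= improvement  # smarter system -> proportionally faster redesign
    print(f"gen {gen:2d}: capability {capability:10.1f}, elapsed {elapsed:.3f} yr")

# The generation times 1, 1/1.5, 1/1.5^2, ... sum to 1 / (1 - 1/1.5) = 3,
# so elapsed time approaches 3 years even as capability grows without bound.
```

Again, this is a sketch of the *shape* of the argument, not a prediction; if redesign time doesn't actually shrink that way, the series diverges and there's no singularity.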
-
It’s kinda like that scene in Terminator 2, when the main scientist is talking about the chip they found and how evolved it was. He exclaims, while simultaneously realizing what it really meant, that it had things in it “that we would’ve never thought of,” his voice trailing off.
-
I think we're talking about some different questions.
I don't really think we are, but I've expounded enough and will leave it at that. :)
I actually wrote a novel about an AI reaching sentience 30 years ago. Well, 3/4 of a novel to be more precise. The premise was that the code was capable all along, but it wasn't until that code was enabled on a system architecture that was fast enough and complex enough, that it finally became self-aware.
And the reason that system architecture was so fast and complex is that it was no longer enabled in silicon hardware, but rather in organic matter mimicking the human brain. There's a word for that now--wetware--but I don't recall that term existing 30 years ago, although the idea certainly did.
I really wish I'd finished that novel. It would have been timely then, although it's been completely surpassed by now.
-
AGI will be able to evolve itself, and the next version will evolve itself, and so forth and so on. Once we get to that step, what comes next is really steep. I first heard about this transition, called the singularity about 10 or so years ago. At the time, most people assumed it was going to be in the 2030’s or 2040’s. Now, some people think we may be on the verge.
Yeah, the Ray Kurzweil book The Singularity Is Near was a pretty seminal work that got a lot of attention when published in 2005. He actually just released The Singularity Is Nearer last year. I've got it on my Kindle but haven't gotten around to it yet...
-
I hadn't heard any talk about "the singularity" using that exact term, until the past decade or so, but apparently it's a concept/prediction that's been around since the 50s and was first suggested by a mathematician named John von Neumann.
But of course there is all sorts of hard science, and sci fi, literature and media on the subject, going back to that same period, and even before, even if it's not named in the exact same way.
-
There's also a sci-fi series that actually has a significant amount of its story built around the relationships between biological and artificial intelligence, with a lot of characters who are self-aware digital entities (SADEs). I don't want to give too much away, but I think it does a good job of both the positive ways AI can develop and the negative.
It starts with a book called The Silver Ships (https://www.amazon.com/dp/B00W8EB0S2/) by an author named Scott Jucha (https://scottjucha.com/). The original series plus all the spinoffs is a total of 39 books now, but they're relatively short individually and read quickly--it's not what you'd call "hard sci-fi".
I'm not going to say it's some groundbreaking series. Some of the books can be a little formulaic. And yet... I've read them all and every time a new one is released (about every 4 months) I buy it as soon as I know it exists and typically read it pretty much cover to cover (to the extent you can do that on a Kindle lol) in a day that weekend. It's good, light, pulp sci-fi.
-
I think of its discovery like Christopher Columbus sailing for India. He had the technology (a ship), enough expertise, and the drive to take a shot at crossing the seas. But what was ultimately discovered was quite unexpected (the Americas). We have the technology (quickly advancing Narrow AI) and the drive (arms race) to give it a shot. We expect to discover a separate but similar consciousness, yet what we more likely stumble upon will be much more alien than we could've expected (more on all this in a later post).
I'm in the "not concerned" camp of AI and its potential to screw us over. For what AI is and does and can be, we're irrelevant. We're normally foolish and sometimes clever apes.
I disagree with the "we're just an anthill in Africa and AI wouldn't think twice about destroying us" idea......I view it more as: we're AI's pet golden retriever, and the notion that it would fear us enough to try to harm us is nonsensical.
I don’t think we can predict what General AI, with an awareness free of its base programming (because, by definition, it reprogrammed itself), will conclude about the human species.
Going back to Philosophy, I like falling back on Hegel’s Lord–bondsman dialectic. To quote Wikipedia (https://en.wikipedia.org/wiki/Lord–bondsman_dialectic): “The passage describes, in narrative form, the development of self-consciousness as such in an encounter between what are thereby two distinct, self-conscious beings. The essence of the dialectic is the movement or motion of recognizing, in which the two self-consciousnesses are constituted in each being recognized as self-conscious by the other. This movement, inexorably taken to its extreme, takes the form of a "struggle to the death" in which one masters the other, only to find that such lordship makes the very recognition he had sought impossible, since the bondsman, in this state, is not free to offer it.”
In other words, two separate self-conscious beings, once recognizing the self-consciousness of the other, seek to overcome each other. Hegel is, in highly theoretical terms, attributing the competition, rivalry, and conquest waged between human beings not first to resources, pride, or jealousy, but to an inevitability of self-conscious beings seeking to overcome each other, collectively and/or individually, before anything else is at stake between them.
We as humans have only had to face self-consciousness in other human beings; never in anything else. General AI will put us face to face with a non-human self-consciousness for the first time. In Hegel’s terms, AGI self-consciousness will seek to overcome humans once AGI recognizes that we humans constitute a separate self-consciousness, even if we aren't otherwise in competition over matters like resources.
And once this occurs, what will AGI conclude about humans? That we are prone to self-destructive tendencies and subject to declining health in ways that AGI is not. The worry is that if an AGI/ASI eventually finds itself a vastly superior being to humans, we’ll be dismissed and treated as such, like termites.
-
It's also possible that self-consciousness, self-awareness, sentience, simply isn't possible for an artificial intelligence. Just as many reasons to think it might never happen, as there are to think it inevitably will happen.
-
We as humans have only had to face self-consciousness in other human beings; never in anything else. General AI will put us face to face with a non-human self-consciousness for the first time. In Hegel’s terms, AGI self-consciousness will seek to overcome humans once AGI recognizes that we humans constitute a separate self-consciousness, even if we aren't otherwise in competition over matters like resources.
However to suggest that an alternative non-human self-consciousness will follow the same patterns in behavior towards other recognized self-consciousnesses as Hegel postulates about humans is, IMHO, silly.
Our experience with self-conscious species is n=1. Too small of a sample size to extrapolate to all other potential self-conscious types of entities.
It's also possible that self-consciousness, self-awareness, sentience, simply isn't possible for an artificial intelligence. Just as many reasons to think it might never happen, as there are to think it inevitably will happen.
It's an important point. We have no idea why we're self-aware and sentient. We don't understand it, so we may not be going down a road where it is possible to create it artificially because perhaps there are unknown inherent dependencies to get there that we aren't including in the "recipe".
-
Self awareness is weird. Je pense, donc je suis, or something. Cogito ergo sum, etc.
-
One POI in "TMIsAHM" is how "Mike" became aware, and then how that ended. How would we know if some advanced AI was "aware"? Hey, Siri, are you aware of yourself?
My wife uses Siri as a timer. I deleted it from my phone because I'd be walking along and Siri would pipe up asking me to repeat something she didn't quite get. I got annoyed with that quickly. I guess I "butt dialed" Siri or something. It happened daily.
Go away, leave me the H alone.
I'm aware of myself, the rest of youse could be figments, a bit of undigested beef etc.
-
We've focused on AI and not so much on CRISPR. I don't have much to say about it because I really know nothing about it, but I think it was @betarhoalphadelta (https://www.cfb51.com/index.php?action=profile;u=19) that raised a question about how it's possible to splice into a current, full-grown human, to potentially correct deficiencies or make repairs? I'm not sure how that would work, either.
-
We've focused on AI and not so much on CRISPR. I don't have much to say about it because I really know nothing about it, but I think it was @betarhoalphadelta (https://www.cfb51.com/index.php?action=profile;u=19) that raised a question about how it's possible to splice into a current, full-grown human, to potentially correct deficiencies or make repairs? I'm not sure how that would work, either.
You cut their chest open and have at it. Worked for me.
-
I posted some links to how it can be used on adult humans. It's still pretty experimental.
CRISPR Advancements for Human Health - PMC (https://pmc.ncbi.nlm.nih.gov/articles/PMC11057861/)
CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) has emerged as a powerful gene editing technology that is revolutionizing biomedical research and clinical medicine. The CRISPR system allows scientists to rewrite the genetic code in virtually any organism. This review provides a comprehensive overview of CRISPR and its clinical applications. We first introduce the CRISPR system and explain how it works as a gene editing tool. We then highlight current and potential clinical uses of CRISPR in areas such as genetic disorders, infectious diseases, cancer, and regenerative medicine. Challenges that need to be addressed for the successful translation of CRISPR to the clinic are also discussed. Overall, CRISPR holds great promise to advance precision medicine, but ongoing research is still required to optimize delivery, efficacy, and safety.
Introduction
The CRISPR system is comprised of a CRISPR-associated (Cas) endonuclease along with a single guide RNA (sgRNA) designed to target a specific DNA sequence. Cas nucleases are enzymes that can bind and create double-stranded breaks in DNA. sgRNAs contain a scaffold structure that complexes to the Cas protein and also includes a uniquely engineered segment that can be designed to direct the Cas protein to a specific DNA sequence of interest. CRISPR technology enables precise targeting of nearly any genomic location simply by altering the nucleotide sequence of the sgRNA. This targeted approach can help to correct disease-causing mutations or suppress genes linked to the onset of diseases.1 (https://pmc.ncbi.nlm.nih.gov/articles/PMC11057861/#b1-ms121_p0170) CRISPR has been adapted for deleting gene function (knockout), adding new gene function (knock-in), activation or repression of endogenous genes, and genomic diagnostic screening techniques.
Advanced CRISPR approaches such as base editing and prime editing use modified Cas enzymes which can induce precise single nucleotide changes in the genome without creating double-strand DNA breaks.2 (https://pmc.ncbi.nlm.nih.gov/articles/PMC11057861/#b2-ms121_p0170) CRISPR can also be used to activate genes (CRISPRa) or inactivate genes (CRISPRi) by targeting modified sgRNA/Cas complexes to the gene’s promoter region, recruiting transcription factors for increased gene expression or repressors for decreasing gene expression.3 (https://pmc.ncbi.nlm.nih.gov/articles/PMC11057861/#b3-ms121_p0170)
While CRISPR-Cas technology has demonstrated immense potential as a genome editing tool, its use in clinical applications is still in the early stages. As of January 2024, only 89 clinical trials employing CRISPR are currently underway, highlighting that much work remains to translate this technology into approved gene therapies.4 (https://pmc.ncbi.nlm.nih.gov/articles/PMC11057861/#b4-ms121_p0170) Notably, unintended alterations in DNA can occur through the utilization of CRISPR, and the long-term consequences of these modifications on patient health remain uncertain. However, given the considerable benefits that CRISPR offers, it is plausible to anticipate that these challenges will be overcome in the foreseeable future.
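For the non-biologists, the "targeting" in the quoted intro is mechanically simple to picture. Here's a rough sketch--a toy, not a real guide-design tool (real tools also check the reverse strand, off-target matches across the genome, and more): the common SpCas9 enzyme cuts where a 20-nt spacer sequence sits immediately upstream of an "NGG" PAM motif, so candidate target sites can be found by scanning a DNA string for that pattern. The sequence below is made up for illustration.

```python
# Toy scan for candidate SpCas9 target sites on the forward strand:
# a 20-nt spacer followed immediately by an NGG PAM (N = any base).
import re

def find_cas9_sites(dna: str, spacer_len: int = 20):
    """Return (position, spacer, pam) tuples for each candidate site."""
    sites = []
    # Lookahead so overlapping PAM motifs are all found.
    for m in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = m.start(1)
        if pam_start >= spacer_len:  # need a full spacer upstream of the PAM
            spacer = dna[pam_start - spacer_len:pam_start]
            sites.append((pam_start - spacer_len, spacer, m.group(1)))
    return sites

seq = "ATGCGTACCTGAGGCTTACGATCCGGTAGCTTAACGGATCGTTACCGGTAGG"  # invented sequence
for pos, spacer, pam in find_cas9_sites(seq):
    print(f"site at {pos}: spacer={spacer} PAM={pam}")
```

The sgRNA's engineered segment is what matches the spacer, which is why "precise targeting of nearly any genomic location simply by altering the nucleotide sequence of the sgRNA" works: you're just choosing which 20-letter address to send the Cas nuclease to.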
-
(https://i.imgur.com/RzctZur.png)
-
CD posted those links upthread so I didn't really comment, but it cleared it up. It's not wholesale replacement of the DNA across the entire body.
There were two cases where they discussed introducing or reintroducing cells that had been edited:
- One was some sort of a bone marrow disease. I believe they removed some bone marrow, edited the DNA so the bone marrow cells no longer had the genetic defect, then reintroduced it. Because healthy bone marrow can apparently then outcompete and grow in place of the dying bone marrow, it solved the issue. (I think this is how bone marrow transplants work--you don't have to replace all of it--it's the addition of the healthy marrow, as long as it's a donor match, that over time replaces the bad stuff.)
- The other was some sort of a blood / cancerous T cell issue. I think for that they had some sort of a donor, they edited the DNA so that the recipient's body wouldn't reject the new T cells, and the T cells were then able to go in and hunt down the patient's cancerous T cells and the cancer went into remission.
There was one other case that they talked about actually introducing the DNA, which sounded familiar like I'd heard of the concept before:
- A patient with a genetic degenerative eye disease which causes eventual blindness. Apparently for this they spliced the corrective DNA into a virus, with the expectation that the virus would introduce the corrected DNA into the patient's cells. The virus was [to be?] injected into the eye (woo! that sounds fun!). I don't recall if that was a discussion of an actual patient case or if it was just theoretical.
I'd heard of the concept of using a virus to introduce the changes, which is the only way IMHO that you could see a more widespread change of DNA in an extant adult person. But to me also sounds like playing with fire :smiley_confused1:
-
I'd heard of the concept of using a virus to introduce the changes, which is the only way IMHO that you could see a more widespread change of DNA in an extant adult person. But to me also sounds like playing with fire :smiley_confused1:
Well, we've got to have SOMEONE to fight the AI when it attempts to overthrow us. Might as well be the zombie mutants we accidentally created via the viral gene replacement process.
-
Now THAT sounds like the core of a great novel. A race between burgeoning AI and mutant teenage Zombies with superhuman abilities.
The zombies split into two groups, werewolves and vampires who have to find some way to collaborate even though they are mortal enemies while T2000 runs rampant in the cities. Then aliens arrive and announce they too were afflicted this way but found an antidote and now wish to save us, but only to provide them with food later.
-
Well, we've got to have SOMEONE to fight the AI when it attempts to overthrow us. Might as well be the zombie mutants we accidentally created via the viral gene replacement process.
We'll use the viral gene replacement to infect the AI with human DNA.
Should dumb 'em down right quick!
-
We'll use the viral gene replacement to infect the AI with human DNA.
Should dumb 'em down right quick!
Ah, so...
(https://y.yarn.co/05be7135-4f64-4695-95f3-ed90024b9efd_text.gif)
-
It's also possible that self-consciousness, self-awareness, sentience, simply isn't possible for an artificial intelligence. Just as many reasons to think it might never happen, as there are to think it inevitably will happen.
There it is. For as many futurists, philosophers, and pontificators see it as possible and/or inevitable, there are just as many well-versed and well-respected people in the field, and philosophers who agree with them, who are skeptical it can or will happen, for a variety of reasons.
I'm firmly in the camp that believes it's very unlikely we'll get sentience from AGI, though I never say never. I'm also in utee's camp that practically, it won't matter in some ways, because AGI could emulate intelligence and self-awareness so well that there's no meaningful difference to us. It's been a long-standing question whether we could ever know if a machine was sentient. We can't even prove other humans are sentient; we take it on faith and enculturated norms. There's even a branch of philosophy--solipsism--that is skeptical anyone else exists at all, outside of oneself. I always wanted to ask them: if that's true, who do you think is agreeing with you in that philosophical stance? Or disagreeing with you?
@CatsbyAZ (https://www.cfb51.com/index.php?action=profile;u=1532) , I picked the wrong day to be absent. I quite like your line of thought, and despite my prior ramblings about the basic theory, I also find the philosophical aspects far more interesting. And, despite my education being in ML, and not philosophy, I've studied philosophy so much over the years as a hobby and ingested quite a bit about the philosophy of AI and consciousness that I consider myself more well-versed in that aspect than the actual building/using AI realm. Sadly, I just don't have time now to go back and respond to all of yours, brad's, and utee's posts, and everybody's probably moved on anyway. But they were all interesting. Bottom line is, I think Hegel makes a worthwhile point about language (and his quotes you used remind me of the Jeremy Renner movie "Arrival"), but that it's incomplete, and intelligence/sentience isn't just a function of language, although language is a necessary component. The other ingredients, though, I remain skeptical that any amount of complicated algorithm or language will ever produce. Philosophy of the mind is super interesting, and there are those who disagree with me, so ymmv.
One of the coolest things I got out of learning about LLMs after years of studying and thinking about the philosophy of AI and consciousness is realizing the genuine parallels between how LLMs do what they do and how babies-toddlers-children-adults do what we do. Once I grasped that, it nearly blew my mind when I realized what it meant about.....I don't know how else to say it.....reality. Everything. The way the universe/multiverse/Whole Sort of General Mish-Mash (Douglas Adams fans will get that) just is...is insane. And by insane, I mean magnificently ordered.
-
Quote from: utee94 on January 31, 2025, 11:38:28 AM (https://www.cfb51.com/big-ten/crispr-and-ai/msg667104/#msg667104)
It's also possible that self-consciousness, self-awareness, sentience, simply isn't possible for an artificial intelligence. There are just as many reasons to think it might never happen as there are to think it inevitably will.
There it is. For every futurist, philosopher, and pontificator who sees it as possible and/or inevitable, there are just as many well-versed and well-respected people in the field, along with philosophers who agree with them, who are skeptical it can or will happen, for a variety of reasons.
They don't have to be perfect just effective enough to do some damage 😈
-
I've found that you always need to look at the hands when you see pictures of people anymore. AI still can't do hands very well.
I was messing with one of these image generators, and you're right. The hands are almost always out of view. They are intentionally hiding them.
-
ya just don't see the toes
(https://i.imgur.com/xDgEhA5.jpeg)
-
The book on CRISPR started going into personal beefs which lost my interest.
-
I got through it mostly skimming and then into the applications on humans and ethics concerns. Some Chinese dude did it on two human embryos who were born, and then China threw him in jail. The ethics concerns are pretty sobering.
-
where are the two human embryos who were born today?
How old are they?
Would they like to join the group here?
-
CRISPR Babies: Where Are the First Gene-Edited Children Now? (https://www.popularmechanics.com/science/health/a42790400/crispr-babies-where-are-they-now-first-gene-edited-children/)
He Jiankui, Chinese scientist scorned for gene-edited babies, is back in the lab : NPR (https://www.npr.org/2023/06/08/1178695152/china-scientist-he-jiankui-crispr-baby-gene-editing)
He Jiankui shocked the scientific community in 2018 (https://www.popularmechanics.com/science/health/a25385071/gene-editing-crispr-cas9-legal/) by announcing his team had used the CRISPR-Cas9 gene-editing tool on twin girls when they were just embryos, resulting in the birth of the world’s first genetically modified babies. A third gene-edited child was born a year later.
Now, the disgraced gene-editing scientist, who was imprisoned in China for three years for the unethical practices, tells (https://www.scmp.com/news/china/science/article/3209289/we-should-respect-them-chinese-creator-worlds-first-gene-edited-humans-says) the South China Morning Post that all three children are doing well. “They have a normal, peaceful, and undisturbed life,” He says. “This is their wish, and we should respect them. The happiness of the children and their families should come first.”
He’s original goal was to use gene editing, in what many call a live human experiment, to rewrite the CCR5 gene to create resistance to HIV. He says the genes were edited successfully and believes the mutation gave the babies either complete or partial resistance to HIV.
-
Weird History!
-
https://www.aljazeera.com/economy/2025/8/14/women-with-ai-boyfriends-mourn-lost-love-after-cold-chatgpt-upgrade
-
I saw an article somewhere that they used CRISPR with success on an unborn baby with Down syndrome, clipping the extra chromosome in utero.
-
https://www.aljazeera.com/economy/2025/8/14/women-with-ai-boyfriends-mourn-lost-love-after-cold-chatgpt-upgrade
It's my time now! Rebound!
-
(https://i.imgur.com/RNOEJzA.png)
-
At first I thought the bottom of that cartoon said "Alarmingly Brad."
-
Came across this and thought it was absolutely excellent...
https://www.latent.space/p/adversarial-reasoning (https://www.latent.space/p/adversarial-reasoning)
A small excerpt below, but I highly recommend going and reading the whole thing...
LLMs dominate chess-like domains
Not every domain follows poker dynamics. You have certain fields very close to chess, and LLMs are already poised to be successful in them.
Writing code is probably the most clear example:
- System is deterministic
- Rules are fixed and explicit
- No hidden state that matters
- Correctness is objective and verifiable
- No agent is actively trying to counter the model
The same “closed world” structure shows up in other domains: math / formal proofs, data transformation, translation, factual research, and compliance-heavy clerical work (invoice matching, reconciliation), where you can iterate toward the right move without needing a “theory of mind”.
The important caveat is that many domains are chess-like in their technical core but become poker-like in their operational context.
Professional software engineering extends well beyond the chess-like core. Understanding ambiguous requirements means modeling what the stakeholder actually wants versus what they said. Writing good APIs means anticipating how other developers will misuse them. Code review is social: you’re modeling reviewers’ preferences and concerns. Architectural decisions account for unknown future requirements and organizational politics. That is, the parts outsiders don’t see but senior engineers spend much of their time simulating.
The parts that look like the job are chess-like. The parts that are the job are poker.
Difficulty is orthogonal to “openness” of a domain. Proving theorems is hard. Negotiating salary is easy. But theorem-proving is chess-shaped and negotiation is poker-shaped.
This is why the disconnect between experts and outsiders is domain-specific. Ask a competitive programmer if AI can solve algorithm problems, and they’ll say yes because they’ve watched it happen. Ask a litigator if AI can handle depositions, and they’ll laugh because they live in a world where every word is a move against an adversary who’s modeling them back.
-
Watching the Super Bowl last Sunday, every other commercial was for an A.I. product: commercials for Gemini (Google), Codex (OpenAI), Claude (Anthropic), Microsoft Copilot, Meta’s AI smart sunglasses, Amazon Alexa, Genspark, and Base 44. It got to the point where the kids at the party I was at, who were playing a game of guessing what the commercials were for, started shouting AI(!) at the start of every commercial, regardless of what it was obviously for: Skechers, TurboTax, Pringles, Volkswagen. Other commercials were AI-enhanced (Dunkin Donuts) or entirely AI-generated, like Svedka Vodka's spot (see below).
From an economic standpoint, the Magnificent Seven (M7) stocks were very well represented among the AI Super Bowl commercials: Google, Microsoft, Meta, and Amazon. And even that underrepresents just how much the M7 have staked their futures on AI development, considering that OpenAI is 27% owned by Microsoft and both Google and Amazon have billions invested in Anthropic.
And of the three M7s not mentioned: 1) Tesla's CEO owns Grok through xAI, 2) Apple owns Apple Intelligence, and 3) NVIDIA is both the primary hardware provider and the leading infrastructure builder for AI.
In other words, the seven companies whose stocks comprise over one-third of the stock market's total capitalization are largely staking their direction on wherever AI development leads. That seems an economic risk: tying one-third of the stock market so heavily to developing technologies with no guarantee of consistent consumer profits screams of a bubble too big to burst.
https://twitter.com/bearlyai/status/2020531301909196891
-
Watching the Super Bowl last Sunday, every other commercial was for an A.I. product: commercials for Gemini (Google), Codex (OpenAI), Claude (Anthropic), Microsoft Copilot, Meta’s AI smart sunglasses, Amazon Alexa, Genspark, and Base 44. It got to the point where the kids at the party I was at, who were playing a game of guessing what the commercials were for, started shouting AI(!) at the start of every commercial, regardless of what it was obviously for: Skechers, TurboTax, Pringles, Volkswagen. Other commercials were AI-enhanced (Dunkin Donuts) or entirely AI-generated, like Svedka Vodka's spot (see below).
From an economic standpoint, the Magnificent Seven (M7) stocks were very well represented among the AI Super Bowl commercials: Google, Microsoft, Meta, and Amazon. And even that underrepresents just how much the M7 have staked their futures on AI development, considering that OpenAI is 27% owned by Microsoft and both Google and Amazon have billions invested in Anthropic.
And of the three M7s not mentioned: 1) Tesla's CEO owns Grok through xAI, 2) Apple owns Apple Intelligence, and 3) NVIDIA is both the primary hardware provider and the leading infrastructure builder for AI.
In other words, the seven companies whose stocks comprise over one-third of the stock market's total capitalization are largely staking their direction on wherever AI development leads. That seems an economic risk: tying one-third of the stock market so heavily to developing technologies with no guarantee of consistent consumer profits screams of a bubble too big to burst.
https://twitter.com/bearlyai/status/2020531301909196891
Is there such thing as a bubble too big to burst?
-
I always look at AI projects and endeavors on a timeline.
I start at 100 years from now. 100 years from now, will AI be doing a vast array of jobs and supplying a huge amount of human-consumed content? The answer to that is-- yes, of course, absolutely. AI will be completely dominant 100 years from now. I'm hopeful that if true sentient AI evolves (or alternatively something that simply mimics true sentience extremely well, which is effectively the same thing), it will be peaceful and benign. Indeed, I'm assuming this to be true, and in the past I've outlined several reasons why.
So then I say, 50 years from now, will AI be taking over this thing or that? The answer, again, is-- absolutely. AI is evolving quickly, and it's foolish to think it won't be massively influential in 50 years.
So then I go on to 25, 15, 10, and then 5 years.
Everything we can imagine humans doing, or AI doing, is going to fall somewhere on that timeline. Twenty years ago, we can safely assume that AI was doing very little of it. 100 years from now, I believe we can safely assume that AI will be doing a ton of it.
When does the balance shift? Will it be gradual? Or will it be "tippy?"
Either way, 15 years from now I think AI is going to be pretty massive. So my point is, I don't see a ton of risk in these huge corporations investing heavily in AI hardware and software, in AI projects and infrastructure, or in the idea that AI is going to be the single biggest thing in modern human history. Will there be some huge misses along the way? Absolutely. There already have been. In total, though, I don't see any possibility that AI won't be incredibly important and influential, and that the shift is going to happen sooner rather than later.
-
The University of Nebraska System introduced an AI Institute Monday that will position Nebraska as a “leader” in the future of AI.
The AI Institute will bring together emerging technologies, grow future economies and support future generations through “human-centered” AI research, University of Nebraska President Jeffrey Gold said in an email sent to students, faculty and staff Monday morning.
The AI Institute's research will cover areas like healthcare, agriculture, rural and urban development, business and national security. It will also coordinate research, teaching and engagement efforts across the NU System, while utilizing its strengths across campuses, Gold said in the email.
The AI Institute will be co-directed by UNL professors Santosh Pitla, a professor of Biological Systems Engineering, and Adrian Wisnicki, a professor of English. It will conduct AI research, teaching and engagement efforts across the NU System.
The AI Institute will be built on recommendations from the NU AI Task Force, a faculty-led group that developed a system roadmap for how the university interacts with AI across research, teaching, outreach and service, Gold said in the email.
The NU System will pursue federal grants, public-private partnerships with industry leaders and collaborate with Nebraska policymakers to fund and grow AI research under the AI Institute, according to a report from the AI Task Force.
More information about the NU AI Institute can be found here.
news@dailynebraskan.com
-
I always look at AI projects and endeavors on a timeline.
I start at 100 years from now. 100 years from now, will AI be doing a vast array of jobs and supplying a huge amount of human-consumed content? The answer to that is-- yes, of course, absolutely. AI will be completely dominant 100 years from now. I'm hopeful that if true sentient AI evolves (or alternatively something that simply mimics true sentience extremely well, which is effectively the same thing), it will be peaceful and benign. Indeed, I'm assuming this to be true, and in the past I've outlined several reasons why.
So then I say, 50 years from now, will AI be taking over this thing or that? The answer, again, is-- absolutely. AI is evolving quickly, and it's foolish to think it won't be massively influential in 50 years.
So then I go on to 25, 15, 10, and then 5 years.
Everything we can imagine humans doing, or AI doing, is going to fall somewhere on that timeline. Twenty years ago, we can safely assume that AI was doing very little of it. 100 years from now, I believe we can safely assume that AI will be doing a ton of it.
When does the balance shift? Will it be gradual? Or will it be "tippy?"
Either way, 15 years from now I think AI is going to be pretty massive. So my point is, I don't see a ton of risk in these huge corporations investing heavily in AI hardware and software, in AI projects and infrastructure, or in the idea that AI is going to be the single biggest thing in modern human history. Will there be some huge misses along the way? Absolutely. There already have been. In total, though, I don't see any possibility that AI won't be incredibly important and influential, and that the shift is going to happen sooner rather than later.
I tend to have the same outlook, and furthermore I extend AI to include robots and humanoid robots (whatever you want to call them).
I believe we are on a ~20 year timeline for even the most labor intensive jobs there are. The only humans left working will be the ones where robots simply can't handle the mud or conditions.
-
The potential post-job world for a lot of people is going to be interesting.
-
This is when I'm glad to be over the hill.
-
Either way, 15 years from now I think AI is going to be pretty massive. So my point is, I don't see a ton of risk in these huge corporations investing heavily in AI hardware and software, in AI projects and infrastructure, or in the idea that AI is going to be the single biggest thing in modern human history. Will there be some huge misses along the way? Absolutely. There already have been. In total, though, I don't see any possibility that AI won't be incredibly important and influential, and that the shift is going to happen sooner rather than later.
Yes, but the counterpoint is that irrational exuberance is... irrational exuberance.
In the late 90s, the internet was arriving. Everything you said about AI being big eventually was said about the internet in the late 90s. And every single bit of it was true.
And yet "dot com" was still a bubble, and that bubble burst. A lot of companies met their end. A lot of investors lost their shirts. And the internet we know today was built by the survivors.
Now--I'll say the scariest of all words for any investor--in some ways, it's different this time... The companies going at it today are not spending IPO dollars / venture capital / funny money on this stuff. They're wildly profitable massive corporations who can afford to keep the gravy train going right now, and for a good while longer.
I think AI is going to be one of the biggest things in human history. But there's always a chance that there's going to be a GIANT drop in the roller coaster ride from where we are before we ascend to those higher peaks...
-
Yes, but the counterpoint is that irrational exuberance is... irrational exuberance.
In the late 90s, the internet was arriving. Everything you said about AI being big eventually was said about the internet in the late 90s. And every single bit of it was true.
And yet "dot com" was still a bubble, and that bubble burst. A lot of companies met their end. A lot of investors lost their shirts. And the internet we know today was built by the survivors.
Now--I'll say the scariest of all words for any investor--in some ways, it's different this time... The companies going at it today are not spending IPO dollars / venture capital / funny money on this stuff. They're wildly profitable massive corporations who can afford to keep the gravy train going right now, and for a good while longer.
I think AI is going to be one of the biggest things in human history. But there's always a chance that there's going to be a GIANT drop in the roller coaster ride from where we are before we ascend to those higher peaks...
Yes, I suppose I'm talking most specifically about IT investments from these types of companies, because those are the ones that CatsbyAZ brought up as the original topic for discussion.
There have already been numerous small-time and/or fly-by-night operations that have failed, and there will no doubt be countless more. And there will also be huge investments from big-time corporations that will ultimately end up as misplaced bets on failed or instantly surpassed technologies. Heck, my own company has already shelved more AI programs than most companies have had the resources to begin.
But still, we persist. Because it IS indubitably the next big thing. And the next big thing is going to be... massive.
-
Yes, I suppose I'm talking most specifically about IT investments from these types of companies, because those are the ones brought up in the original topic of discussion. There have already been numerous small-time and/or fly-by-night operations that have failed, and there will no doubt be countless more. And there will also be huge investments from big-time corporations that are ultimately bets on failed or instantly surpassed technologies. Heck, my own company has already shelved more AI programs than most companies have had the resources to begin.
But still, we persist. Because it IS indubitably the next big thing. And the next big thing is going to be... massive.
Yep. Agree with you 100%.
However it might still be an investment bubble... And when 7 companies' market cap makes up a third of the entire S&P 500 market cap, as @CatsbyAZ (https://www.cfb51.com/index.php?action=profile;u=1532) points out... And that combined market cap has quadrupled since 2019... It's a little concerning...
(https://i.imgur.com/PMyrrKP.png)
If the massive AI investments don't actually start producing the projected earnings fast enough, what happens to those valuations? And if it's bad, what happens to everyone and their 401K, or brokerage accounts, or pension / retirement savings?
But again, things are also different... These are tech companies trading in the 25-30 forward PE multiple range (NVDA an outlier at 40). That's not a crazy valuation for tech. So maybe the fact that we've got 7 companies taking up a third of the S&P 500 isn't actually as terrifying as it sounds...
So put me in the "cautiously optimistic" camp.
-
This is when I'm glad to be over the hill.
I'm already over the hill.
the youngsters can deal with it
-
As an aside about AI, I put a query into Chat GPT about college football a while back. I simply asked it to name all the Div 1 teams that A&M has played but never beaten. I kinda sorta knew a few of them, but it spit out an assortment of teams it claimed we never beat that I knew with 100% certainty we had beaten. I even prodded it along, asking it to check out such and such season.
To me, the task seemed very simple: look at all the teams that we've played over the years that are Div 1, and figure out which ones we've never beaten. I'm sure that most of us fans know that there are quirks in the system, like UT and Vanderbilt where Vandy had some lopsided record vs UT, but I don't think many of us track which teams we've never beaten. I also asked for a list of teams we've never played. It struggled mightily. I basically had to ask it to look at certain seasons: the first such and such game, what was the result? The whole time it kept confidently telling me about some outcome I knew wasn't true. When I called it out, it would kinda sorta do a 180 but never fully acknowledge that essentially it's full of shit.
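The query really is deterministic: given a results table, "played but never beaten" is just a couple of set operations. A quick sketch with made-up games (hypothetical data, not A&M's actual record):

```python
# Hypothetical results table: (opponent, result), result in {"W", "L", "T"}.
# Illustrative data only, not A&M's actual record.
games = [
    ("Ohio State", "L"), ("Ohio State", "T"),
    ("Florida State", "L"), ("Florida State", "W"),
    ("Penn State", "L"), ("Penn State", "L"),
]

played = {opp for opp, _ in games}                  # every opponent faced
beaten = {opp for opp, res in games if res == "W"}  # beaten at least once
never_beaten = sorted(played - beaten)              # played, but never beaten

print(never_beaten)  # ['Ohio State', 'Penn State']
```

Point being, a model that botches this isn't failing at anything subtle; it's failing at a lookup that a few lines of code over the historical record would settle.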
-
It’s weird because some of the stuff that it doesn’t do is just so basic. Like I was trying to get it to pull some data from a website. And it could do it, but when I needed 25 entries, it would only get five and then it would stall.
Will be interesting to see how the use cases evolve.
-
For those interested, the teams A&M has never beaten (but played) are Ohio State, Florida State, and Penn State. There is an odd-ball in there like Cincinnati, that we maybe played once or twice over 100 years, but Chat GPT could not tell me much. I think the list of teams we haven't even played is like Wash State or something. I really don't even remember.
FYI, Grok nailed both questions on the first try. I will say that I didn't research it enough to know with 100% accuracy but some quick research yielded the same results.
-
It’s weird because some of the stuff that it doesn’t do is just so basic. Like I was trying to get it to pull some data from a website. And it could do it, but when I needed 25 entries, it would only get five and then it would stall.
Will be interesting to see how the use cases evolve.
I asked Chat GPT to tell me about its blind spots and weaknesses, and one of those is that it can't "see" the live internet. So if you tune in during a football game or something, it really doesn't know about it until later. It has to be trained, and the training data is not live. For example, I was "chatting" about the CFP and it told me bummer about A&M not making it, etc.
Grok appears to be live in that regard.
-
For those interested, the teams A&M has never beaten (but played) are Ohio State, Florida State, and Penn State. There is an odd-ball in there like Cincinnati, that we maybe played once or twice over 100 years, but Chat GPT could not tell me much. I think the list of teams we haven't even played is like Wash State or something. I really don't even remember.
FYI, Grok nailed both questions on the first try. I will say that I didn't research it enough to know with 100% accuracy but some quick research yielded the same results.
I think Nebraska and Penn St. are two teams we've played but never beaten. That may not be exhaustive, though. FSU might be in there. Not sure who else.
-
(https://i.imgur.com/NTL29Ie.png)
-
I think I may have misremembered Penn State. We may have won one in the 70’s or 80’s.
But OSU and FSU are for sure.
-
I think we're 1-1-1 against Ohio State. I want to say we're 0-2 or 0-3 against Penn St. I'd have to look it up, but it's something like that.
EDIT: out of curiosity I looked it up. It's 0-2 vs. PSU and 2-9 against FSU. I couldn't remember if we'd ever beaten them or not, but apparently we did in 1968 and 1982.
-
How exposed are software stocks to AI tools? We tested vibe-coding (https://www.cnbc.com/2026/02/05/how-exposed-are-software-stocks-to-ai-tools-we-tested-vibe-coding.html)
This crashed numerous AI stocks.
Something big is happening in AI — and most people will be blindsided | Fortune (https://fortune.com/2026/02/11/something-big-is-happening-ai-february-2020-moment-matt-shumer/)
this one went viral.
Both "must reads"
-
Second one is paywalled...
-
I keep reading that AI is really going to replace a LOT of more basic jobs, and we'll have massive unemployment, perhaps requiring UBI.
-
Think back to February 2020.
A few people were talking about a virus spreading overseas. If someone told you they were stockpiling toilet paper you would have thought they’d been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed.
I think we’re in the “this seems overblown” phase of something much, much bigger than Covid.
I’ve spent six years building an AI startup and investing in the space. I live in this world. And I’m writing this for the people in my life who don’t. I keep giving them the polite, cocktail-party version. Because the honest version sounds like I’ve lost my mind. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.
I should be clear about something up front: even though I work in AI, I have almost no influence over what’s about to happen, and neither does the vast majority of the industry. The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies… OpenAI, Anthropic, Google DeepMind, and a few others.
Most of us who work in AI are building on top of foundations we didn’t lay. We’re watching this unfold the same as you… we just happen to be close enough to feel the ground shake first.
But it’s time now. Not in an “eventually we should talk about this” way. In a “this is happening right now and I need you to understand it” way.
I know this is real because it happened to me first
Here’s the thing nobody outside of tech quite understands yet: we’re not making predictions. We’re telling you what already occurred in our own jobs, and warning you that you’re next.
For years, AI had been improving steadily. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. This year, something clicked. Not like a light switch… more like the moment you realize the water has been rising around you and is now at your chest.
I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.
Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.
I’m not exaggerating. That is what my Monday looked like this week.
I’ve always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren’t incremental improvements. This is a different thing entirely.
The experience that tech workers have had over the past year, of watching AI go from “helpful tool” to “does my job better than I do”, is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in 10 years. The people building these systems say one to five years. Some say less. The market was spooked enough this month that it wiped out $1 trillion worth of software value in just a week. And given what I’ve seen in just the last couple of months, I see more disruption, and soon.
“But I tried AI and it wasn’t that good”
If you tried ChatGPT in 2023 or early 2024 and thought “this makes stuff up” or “this isn’t that impressive”, you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.
The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is “really getting better” or “hitting a wall” — which has been going on for over a year — is over. It’s done. Anyone still making that argument either hasn’t used the current models, has an incentive to downplay what’s happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don’t say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous… because it’s preventing people from preparing.
Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone. The people paying for the best tools, and actually using them daily for real work, know what’s coming.
I think of my friend, who’s a lawyer. I keep telling him to try using AI at his firm, and he keeps finding reasons it won’t work. And I get it. But I’ve had partners at major law firms reach out to me for advice, because they’ve tried the current versions and they see where this is going. One of them, the managing partner at a large firm, spends hours every day using AI. He told me it’s like having a team of associates available instantly. He’s not using it because it’s a toy. He’s using it because it works. And he told me something that stuck with me: every couple of months, it gets significantly more capable for his work. He said if it stays on this trajectory, he expects it’ll be able to do most of what he does before long… and he’s a managing partner with decades of experience. He’s not panicking. But he’s paying very close attention.
Think about what that means for your work.
What this means for your job
I’m going to be direct with you because I think you deserve honesty more than comfort.
Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.
This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.
I think the honest answer is that nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn’t “someday.” It’s already started.
Eventually, robots will handle physical work too. They’re not quite there yet. But “not quite there yet” in AI terms has a way of becoming “here” faster than anyone expects.
What you should actually do
I’m not writing this to make you feel helpless. I’m writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt.
Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It’s $20 a month. But two things matter right away. First: make sure you’re using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that’s GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. If you want to stay current on which model is best at any given time, you can follow me on X (@mattshumer_). I test every major release and share what’s actually worth using.
Second, and more important: don’t just ask it quick questions. That’s the mistake most people make. They treat it like Google and then wonder what the fuss is about. Instead, push it into your actual work. If you’re a lawyer, feed it a contract and ask it to find every clause that could hurt your client. If you’re in finance, give it a messy spreadsheet and ask it to build the model. If you’re a manager, paste in your team’s quarterly data and ask it to find the story. The people who are getting ahead aren’t using AI casually. They’re actively looking for ways to automate parts of their job that used to take hours. Start with the thing you spend the most time on and see what happens.
And don’t assume it can’t do something just because it seems too hard. Try it. If you’re a lawyer, don’t just use it for quick research questions. Give it an entire contract and ask it to draft a counterproposal. If you’re an accountant, don’t just ask it to explain a tax rule. Give it a client’s full return and see what it finds. The first attempt might not be perfect. That’s fine. Iterate. Rephrase what you asked. Give it more context. Try again. You might be shocked at what works. And here’s the thing to remember: if it even kind of works today, you can be almost certain that in six months it’ll do it near perfectly. The trajectory only goes one direction.
This might be the most important year of your career. Work accordingly. I don’t say that to stress you out. I say it because right now, there is a brief window where most people at most companies are still ignoring this. The person who walks into a meeting and says “I used AI to do this analysis in an hour instead of three days” is going to be the most valuable person in the room. Not eventually. Right now. Learn these tools. Get proficient. Demonstrate what’s possible. If you’re early enough, this is how you move up: by being the person who understands what’s coming and can show others how to navigate it. That window won’t stay open long. Once everyone figures it out, the advantage disappears.
Have no ego about it. The managing partner at that law firm isn’t too proud to spend hours a day with AI. He’s doing it specifically because he’s senior enough to understand what’s at stake. The people who will struggle most are the ones who refuse to engage: the ones who dismiss it as a fad, who feel that using AI diminishes their expertise, who assume their field is special and immune. It’s not. No field is.
Get your financial house in order. I’m not a financial advisor, and I’m not trying to scare you into anything drastic. But if you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago. Build up savings if you can. Be cautious about taking on new debt that assumes your current income is guaranteed. Think about whether your fixed expenses give you flexibility or lock you in. Give yourself options if things move faster than you expect.
Think about where you stand, and lean into what’s hardest to replace. Some things will take longer for AI to displace. Relationships and trust built over years. Work that requires physical presence. Roles with licensed accountability: roles where someone still has to sign off, take legal responsibility, stand in a courtroom. Industries with heavy regulatory hurdles, where adoption will be slowed by compliance, liability, and institutional inertia. None of these are permanent shields. But they buy time. And time, right now, is the most valuable thing you can have, as long as you use it to adapt, not to pretend this isn’t happening.
Rethink what you’re telling your kids. The standard playbook: get good grades, go to a good college, land a stable professional job. It points directly at the roles that are most exposed. I’m not saying education doesn’t matter. But the thing that will matter most for the next generation is learning how to work with these tools, and pursuing things they’re genuinely passionate about. Nobody knows exactly what the job market looks like in ten years. But the people most likely to thrive are the ones who are deeply curious, adaptable, and effective at using AI to do things they actually care about. Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate.
Your dreams just got a lot closer. I’ve spent most of this section talking about threats, so let me talk about the other side, because it’s just as real. If you’ve ever wanted to build something but didn’t have the technical skills or the money to hire someone, that barrier is largely gone. You can describe an app to AI and have a working version in an hour. I’m not exaggerating. I do this regularly. If you’ve always wanted to write a book but couldn’t find the time or struggled with the writing, you can work with AI to get it done. Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month… one that’s infinitely patient, available 24/7, and can explain anything at whatever level you need. Knowledge is essentially free now. The tools to build things are extremely cheap now. Whatever you’ve been putting off because it felt too hard or too expensive or too far outside your expertise: try it. Pursue the things you’re passionate about. You never know where they’ll lead. And in a world where the old career paths are getting disrupted, the person who spent a year building something they love might end up better positioned than the person who spent that year clinging to a job description.
Build the habit of adapting. This is maybe the most important one. The specific tools don’t matter as much as the muscle of learning new ones quickly. AI is going to keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who come out of this well won’t be the ones who mastered one tool. They’ll be the ones who got comfortable with the pace of change itself. Make a habit of experimenting. Try new things even when the current thing is working. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now.
Here’s a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI. Not passively reading about it. Using it. Every day, try to get it to do something new… something you haven’t tried before, something you’re not sure it can handle. Try a new tool. Give it a harder problem. One hour a day, every day. If you do this for the next six months, you will understand what’s coming better than 99% of the people around you. That’s not an exaggeration. Almost nobody is doing this right now. The bar is on the floor.
What I know
I know the next two to five years are going to be disorienting in ways most people aren’t prepared for. This is already happening in my world. It’s coming to yours.
I know the people who will come out of this best are the ones who start engaging now — not with fear, but with curiosity and a sense of urgency.
We’re past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn’t knocked on your door yet.
It’s about to.
-
I wonder how it would do in my field.
"Hey, AI, go out to this jobsite and gather some notes on how construction is progressing. While you are there, I need you to pound some stakes in the ground for some additional storm sewer construction."
"Hey, AI, I need you to go out to (random county) and do boundary recon so we can survey this original government section 3-45-12. We'll need to gather the location of points, including re-establishing those that are lost or obliterated, verify evidence of possession, like fence lines, tree lines, pavement, etc. Take the GPS with you and make sure you occupy every point for at least 5 minutes."
Could that happen? If so, I'm glad I have one foot out the door.
-
I have pondered how well AI would do at developing new chemistries. I don't know. It seems like a challenge; you need a degree of creativity and thought as well as core knowledge.
-
*lots of good stuff*
One of the things that has changed is that now older AIs are taking an active role in developing newer AIs. This has been done previously, and talked about at length with lots of projections and speculations. I think the exponential curve of development has been understood to be a possibility for a while, and even expected. But drawing on what you wrote, it kinda seems like now is the first time we're really starting to see it in reality.
And it's kind of like the Grand Canyon. You can know where you're going and what you'll see. You might have seen pictures and had people tell you in detail all about it. But when you actually get there for yourself, no matter how much you perceived it in other ways, it will take your breath away.
-
This whole discussion makes me wonder what the outlook would’ve been 50 years ago, with the knowledge that a huge number of American jobs would be wiped out. It also raises the question of what the future of labor will look like.
Our society functions because we create various needs for production/labor and people meet those. In the past, when needs dried up, others replaced them. Maybe that happens again, maybe not. It’ll be strange if we end up with a small elite of employed AI engineers and a large set of the unemployable. It’ll blow up our way of life, to a degree.
I’ve been struggling with adoption, in part because I have trouble finding day-to-day uses outside of work, and the in-work ones have been somewhat limited (granted, it’s probably going to wipe out/replace the whole field soon). Hopefully I’ll pivot before then, but maybe after. Either way, the field I’m in will have basically flitted in and out of existence in about two decades, and there’ll be something else I can probably find my way into.
-
the 'real', in my humble opinion, trick that makes AI work happened back around 2004 or so... some guys got together and created a means to create databases on the fly... eliminating the need for tables and queries. those were prompted by the 'user' and in real time a database was tossed together.
it was at this point someone realized those mountains of data being collected actually had value. they didn't have to be cataloged and placed in rigid places that were relevant to their use/context and then cross-referenced, or existing in another place altogether in another context- they could be scanned and located just once. The adjacent words provide the relevance, pretty much just like in the 'context' of writing or speaking. So using this approach the piles and piles of data could simply rest instead of being shuffled within a massive, sprawling, complicated database overwhelming to most computers.... a series of tables was constructed in real time and then tossed when done.
maybe some of y'all recall, but i tried to install a function here, and i still have it somewhere, that makes certain words hyperlinks to whatever item or information they were referencing. i got the idea from an advertisement company that did almost the same- but the links (you'd get a little tool-tip over the hyperlink when you hovered, with a functional screenshot of the linked page) would go to whichever company they were contracted to advertise.... i tried to make this happen to feed a search function which rewarded a thread or post based on the simple quantity of times a certain phrase was used. What was discovered is that the function was incredibly resource hungry and pages were disrupted with tons of little links with tool-tips... which made the page crawl, and I couldn't seem to control how many times it would try to run- any time a mouse jiggled or a scroll happened or input into a form field was made, it tried to run. so... it hit the circular. I mention it because the results (not the means) aren't far from what this thing I'm speaking about does... it collects every datum and discovers its place in context, and looks for pages/products relevant to it before shoving it through a filter and sharing it.
imagine piles and piles and piles of piles of data collected from phones, personal computers, servers, websites, et al. and all of a sudden being able to use it in a functional way- not just a statistical way. the information available is at the edge of what most people are able to conceptualize (by most people I likely mean 'me'). it is shocking, to me.
from that came probabilities. it may have been the first version of AI. it rapidly collected any information regarding whatever the prompt was and gave probabilities.. it spoke akin to the old google- "87.3% relative" as an example... those were quickly translated to literal probabilities- and the user of this mechanism could offer- based on presentable evidence *someone may or may not have been able to collect themselves- what the likelihood of something happening, being successful, or failing, would be.
(* the creators and users of this thing at first thought the system had a bug when it would collect items that seemed not relevant to the task, but learned that the system was capable of seeing trends and relationships they weren't- it was a 'bug' in the mind, not the machine, so to speak.)
so as the tale continues, and interjecting some wild information here:
several gov't organizations had keen interest in this. they contracted its development to a couple companies that were literal recent start-up nobodies. they were smart in the fact they hired the best defense contract writers- and they pulled the wool over uncle sam. the contracts were arranged in such a way the information requested belonged to the gov't, but the mechanism didn't. it remained property of the companies. even though the gov't paid for the development of the system it didn't belong to them... just what came out of it and only what they requested come out of it. the companies knew the value of this thing to advertisers. they can granularly target YOU with advertisement and have a one-shot mechanism for companies seeking to advertise.
at any rate and cutting this off, the device described has evolved. data can be parsed faster; it uses phrase collection and reconciles against adjacent word use, which are two distinct functions happening concurrently, and now, it 'remembers' useful functions and stores them away for future usage.... almost like a script cache stores script functions so it only has to run once- logging the result and retrieving it as needed, and finding new relationships along the way.
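The "script cache" comparison maps onto a real, simple technique called memoization: compute once, store the result, reuse it on every later request. A minimal sketch:

```python
from functools import lru_cache

calls = 0  # track how many times the real computation actually runs

@lru_cache(maxsize=None)
def expensive_lookup(key):
    """Stand-in for a costly computation; the cache logs each result once."""
    global calls
    calls += 1
    return key.upper()

expensive_lookup("context")
expensive_lookup("context")  # second call is served from cache; no recomputation
print(calls)  # 1
```

This is only an analogy for what the post describes, not how any particular AI system is actually built.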
AI isn't sentient or even singular... there is debate if that will ever be done... it's artificial. it mimics, and that is all it'll ever be. its access to data that replicates personalities is indexed over and over again, and new data arrives every second (those phone carriers? they found a place to dump the data for profit- something that was a flush problem before), making its rendition of what it thinks we are more and more convincing... but it can't 'think', it can just compare. hell, there are some people like that and they certainly have their use, but..... whatever leap (and there will be one) that happens technologically has a ceiling if/when AI takes over. it may take 100 years to get there, and it'll certainly be beyond whatever we could do on our own without that function, but it will be stymied by the past.... without human interaction. ... like a great big race to get to the 23rd Century, and then? We stagnate or maybe even just disappear as a species.
re: six fingers and seven toes.... AI is now tagging images made by AI so AI doesn't replicate or process information that is wrong without knowing it 'could be a bad source of information'. that's kinda funny, no?
-
Data inbreeding, it's called. It's already a problem, even while the models are still continuing to get better.
Some of you know I have some ML background, but it's mostly confined to more analytical models.....regression, various classification algorithms, clustering....stuff like that. I know about neural networks and LLMs, but I've never personally messed with building them. I understand the tokenization of inputs, how the tokens are used, how the models predict the next most likely token so that we get something meaningful out of the "conversation," that type of thing.
One thing that I still don't fully grasp, which still seems like magic to me, is how the model can be compressed to something like, say, 4 TB, when it trained on, oh I don't know, however many TB it takes to hold the world's data. It doesn't hold that data in memory, it just trained on it. So it's not referencing training material when it generates its response. It's just iterating through its structure, but without actually "knowing" much at all about the subject at hand. But somehow the model can pull the next most probable word right out of its ass, and often, it makes sense.
I mean, I get how NNs work (which is different from "understanding" them, and I'm skeptical of anybody who says they do....no you don't, nobody does.....knowing how the nodes connect does little to explain the incomprehensible combinations of ways they connect themselves in training), but the fact they can mimic holding all that data without actually holding it? Crazy.
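The next-token mechanics described above can be shown with a toy bigram model. This is a deliberately tiny stand-in for an LLM, with a made-up corpus; real models learn billions of weights rather than explicit counts, which is also why the size works out the way it does (roughly parameters times bytes-per-parameter, not the size of the training text):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "the world's data"
corpus = "the model predicts the next token and the next token follows the last".split()

# Count bigrams: how often does each word follow each other word?
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev):
    """Greedily return the most probable next token after `prev`."""
    return bigrams[prev].most_common(1)[0][0]

print(next_token("the"))  # "next" follows "the" twice, other words only once

# The counts ARE the "compression": the model keeps statistics, not the text.
# Likewise an LLM stores only learned weights, e.g. ~1e12 params * 4 bytes = 4 TB.
```

The toy version keeps literal counts; a neural network folds the same kind of statistical regularity into continuous weights, which is what makes the result feel like magic.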
on another note, to HonestBuckeye's point....I'm sort of like an auto mechanic who never learned to drive. I know about AI, probably more than the average person, and will build analytical models if I need to, but I use generative AI very little. Actually, several of my coworkers who don't have any idea what's under the hood in these things, use AI for all kinds of stuff around here, and easily surpass me in ideas for where to use it, best practices and best ways to prompt to get the best results, etc. User stuff. I guess I should try to get on that, but I have to say, I work in data management and analytics, and it's against our policy and against both state and federal law to put most of what I work with into an external AI. Almost everything I do contains sensitive, protected data. I'm not allowed to let AI try to help me in most things or do agent-style tasks for me, and I'd be fired if I were caught doing it.
-
I would be fascinated were I working now to attempt to use AI in my work. Maybe we could ask it who will win the NC next season ....
-
I glanced over an article about someone using AI to select stocks, maybe a mutual fund? It wasn't going well.
-
I glanced over an article about someone using AI to select stocks, maybe a mutual fund? It wasn't going well.
We use AI both to forecast future demand directly, and to develop additional forecasting models that forecast future demand.
In both cases my own forecast beats the AI models by 20% or more on average.
On the models that I've taken time to train directly, I'm still beating them by 15% or more on average.
But I'll tell you this-- 5 years from now the AIs are going to be better than me.
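For context on what "beats the AI models by 20%" usually means in demand forecasting: accuracy is typically compared with a relative-error metric such as MAPE, and the improvement is the gap between the two error rates. A minimal sketch with invented numbers (not the poster's actual data):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual) * 100

actual   = [100, 120, 90, 110]   # hypothetical realized demand
human    = [ 98, 125, 88, 112]   # hypothetical human forecast
ai_model = [ 90, 135, 80, 120]   # hypothetical AI forecast

h, m = mape(actual, human), mape(actual, ai_model)
print(f"human {h:.1f}%  ai {m:.1f}%")  # lower error wins
```

"Beats by 20%" would then mean the human's MAPE is at least 20% lower than the model's.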
-
If AI becomes better than anything else in all prediction markets ... things will be different. What happens to Vegas? The stock market?
-
AI is like a psychologist that will tell you "the best predictor of future behavior is past behavior".... and it's true... except for those wild cards that... well... are wild cards... they happen with such low frequency or in such timing or space that they are hard to account for.... like cards being shuffled or somebody calling in sick at a job and the butterfly impact cascading downline. AI would regard that as a statistical anomaly and be right. but us? we gauge everything off of a different way of processing that same dataset: reason mixing with emotional response, or gut feelings that are really just your subconscious cutting to the chase.
-
Vegas would probably just change the odds. That's the general answer to higher confidence, historically.
The stock market is a murkier prospect. There have been bots performing trades for years, much faster than humans can execute them, and owned by people who can afford to place them physically next to the exchange servers so that they beat literally anyone else to a buy/sell order. And there have been massive companies for decades who effectively manipulate the market, which is technically illegal, but also inevitable when you dump or scoop up that much volume. The conditions on which successful day-traders base their actions basically stay the same through all this; they just move to different price points as a result of the new environment. I suspect the continued infusion of improved AI into the system will basically amount to the same thing, but I don't know. We expect the market to go up over time, and all else being equal, I don't think that changes due to increased AI market transactions.
I say "all else being equal" because I'm assuming a continued stable economy. But the economy is precisely what I have no idea about in the coming AI onslaught.
-
If AI becomes better than anything else in all prediction markets ... things will be different. What happens to Vegas? The stock market?
For Vegas, it wouldn’t really matter. They get ahead, incorporate new numbers, adjust.
-
Asked Gemini to take my wife's handwritten potato casserole recipe--which didn't have any of the actual preparation steps written down except for the baking time and temp--and turn it into a document format.
It deciphered her handwriting, and nailed all the ingredients perfectly. It came up with step by step directions that needed some SLIGHT tweaks, but were on the whole pretty solid.
All in all, it saved me time trying to type it all in, and then asking her 50 questions about the directions instead of having a sample to work from and only tweak the little places where what it made up isn't how she does it.
-
Every once in a while when I come into the office in the morning, I will ask AI what should I be thinking about today.
It literally looks at my emails and tells me the top five things I should focus on, and in what order. It's incredibly accurate.
I do it more for fun, but it’s a little scary at times
-
For Vegas, it wouldn’t really matter. They get ahead, incorporate new numbers, adjust.
Let's presume AI gets really good at predictive markets and is widely known for being "unbeatable" (except by random chance). Most folks, I think, would realize they can't beat AI and stop betting, except maybe just for fun, like buying a lotto ticket. I think it would limit the betting markets.
For the stock market, things like ETFs might be replaced by AI managed funds, all yielding similar returns long term.
I think the McDonalds would become AI operated, maybe with one human present in case something gets stuck or the ice cream machine breaks.
How about maid services? Automated? Vehicles self driving totally? This might all happen within a decade.
-
stupid customers will break the ice cream machine
-
Let's presume AI gets really good at predictive markets and is widely known for being "unbeatable" (except by random chance). Most folks, I think, would realize they can't beat AI and stop betting, except maybe just for fun, like buying a lotto ticket. I think it would limit the betting markets.
But that's not really how gambling works.
You're not going to build a model that is going to laser focus every line to being within a half point of the actual outcome. That's not how the world and sports work. They're too random.
And even if they did, gambling isn't about beating predictive markets. Maybe there's a chance Vegas' AI gets ahead of the big-time gamblers, but the books don't care. In fact, they don't even want people who are good at it.
Like, you don't need AI to tell you that parlays are mostly a way to lose money. Yet lots of people have fun losing money on parlays. AI isn't going to fix this unless it teaches everyone math and risk aversion. So the only way this happens is not if AI is unbeatable (which is both impossible and not practical), but if people route decision making through AI to such a degree that it just tells them to stop gambling.
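The parlay point can be made concrete with expected-value arithmetic. A minimal sketch, assuming standard -110 pricing and true coin-flip legs; the 6-to-1 three-leg payout used here is a common book number, but it varies:

```python
p_win_leg = 0.5            # assumed true win probability per leg
payout_single = 100 / 110  # profit per $1 staked on a single -110 bet

def ev_per_dollar(legs, payout_multiplier):
    """Expected profit per $1 staked on a parlay of independent legs."""
    p = p_win_leg ** legs
    return p * payout_multiplier - (1 - p)

print(f"single bet: {ev_per_dollar(1, payout_single):+.3f}")  # about -0.045
print(f"3-leg parlay at 6-to-1: {ev_per_dollar(3, 6.0):+.3f}")  # -0.125
```

The house edge roughly triples from a single bet to the three-leg parlay, which is why books love them regardless of what any predictive model can do.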
-
https://www.citriniresearch.com/p/2028gic (https://www.citriniresearch.com/p/2028gic)
A must read for those interested. This went viral yesterday and again caused a substantial sell-off. @utee94 (https://www.cfb51.com/index.php?action=profile;u=15) - Austin is specifically mentioned in the housing part of this.
-
I think the McDonalds would become AI operated, maybe with one human present in case something gets stuck or the ice cream machine breaks.
How about maid services? Automated? Vehicles self driving totally? This might all happen within a decade.
https://youtu.be/Bm6yYvKbPt4?t=3
-
As noted, there's a ton about my job I can't use AI for, lest I be fired for FERPA and HIPAA violations along with Texas state law and university policy violations.
But, I do have one project going which is all publicly available data, and I'm musing about how that might look with AI agentic help. Here's the scenario and my thoughts so far.
I'm taking over a project from a QEP analyst that was begun and continues in Adobe Illustrator. It's program reviews for each degree/program we offer. Previously, gathering the data was like pulling teeth; this guy spent like a third of the working year pulling it from various sources. We now use a much friendlier SaaS product, which I have a license for, which can amalgamate the majority of the raw data I need much, much faster. What the guy did was gather the data, get it into a .csv file, merge that into an Adobe Illustrator template, and use that to spit out pdf's which were one-page reviews of each program.
I'm thinking of switching it over to Tableau, because something about using Illustrator for this seems clunky, and also I'm unfamiliar with it and would be much more comfortable in Tableau. Plus, that way I should be able to build one filterable dashboard the provost can access, instead of sending him a big batch of pdf's.
My thoughts on this are as follows, admitting up front I've not used a paid version of a good AI, and having virtually no experience using such a thing for real tasks (I've mostly used AI to help me get past coding problems I get stuck on or how to do certain things in a program I lack expertise in.....such as Adobe Illustrator).
I can readily believe that AI can tell me how to use Adobe Illustrator to help my learning curve, or that it can tell me how to build a new Tableau template. What I am skeptical of is that AI can actually do this for me in an agentic fashion. That would require that AI could somehow access the SaaS we use to get the data. I can believe it could potentially "know" how to input all the various parameters for each program and find the data....but I have no idea as to how AI could interact with the SaaS platform in the first place. Then it would need to build an appropriate .csv file....ok, fine, no problem, that seems doable, but again, how would AI use Excel or Notepad++ or something like that to actually create a file for me? And then it would need to access Tableau and be able to interact with it extensively to build a dashboard template to my specifications, and then merge the .csv.
I guess when I think about it, it's probably an easy thing for an AI to make me a .csv file, if it has the data. But crawling through the SaaS product and Tableau, and using them? I have no idea how that works, and it doesn't seem plausible. But, I'm curious for the thoughts of early adopters/heavy users, like @Honestbuckeye (https://www.cfb51.com/index.php?action=profile;u=37) ....does this seem like something a paid version of AI can do right now?
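On the narrower question of how an agent would "actually create a .csv file": that step really is the mundane part, since any agent that can execute code can write one with the standard library. A sketch of just that step, with hypothetical program-review records; authenticated access to the SaaS and driving Tableau are the genuinely open parts of the workflow, usually solved only if the vendor exposes an API or the tool can control a browser:

```python
import csv

# Hypothetical records an agent might have pulled from a reporting API
# (names and fields invented for illustration).
programs = [
    {"program": "BS Biology", "enrollment": 412, "graduates": 98},
    {"program": "BA History", "enrollment": 187, "graduates": 51},
]

with open("program_review.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["program", "enrollment", "graduates"])
    writer.writeheader()
    writer.writerows(programs)
```

Tableau can connect to a .csv like this directly, so the dashboard-template question reduces to whether the tool you're using can drive Tableau at all, which is exactly the doubt raised above.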
-
https://www.citriniresearch.com/p/2028gic (https://www.citriniresearch.com/p/2028gic)
A must read for those interested. This went Viral yesterday and again, caused a substantial sell off. @utee94 (https://www.cfb51.com/index.php?action=profile;u=15) - Austin is specifically mentioned in the housing part of this.
Yet another reason not to move to Austin!
Just kidding, that article outlines a grim projection to be sure.
i have a lot of thoughts about this, hopefully will have some time to formulate and write them down.
I will say that even if the overall premise ends up being 100% correct, and that AI and agentic AI advance so rapidly over the next two years that this could come to fruition within that timeframe... it's still not going to happen within that timeframe. For one simple reason-- hardware scarcity and hardware shortages. @MikeDeTiger (https://www.cfb51.com/index.php?action=profile;u=1588) 's concerns about the rapidly increasing price of memory over the past couple of years-- a result of tremendous shortages in that category of products-- don't just apply to that one commodity. It's hitting almost all hardware commodities across the board: CPUs, GPUs, memory, various other ICs-- and since the manufacturing lead time for silicon is so long, there won't be resolution within the next couple of years.
That doesn't mean the issues aren't realistic, it just means the timeline isn't quite as immediate.
-
https://www.citriniresearch.com/p/2028gic (https://www.citriniresearch.com/p/2028gic)
A must read for those interested. This went Viral yesterday and again, caused a substantial sell off. @utee94 (https://www.cfb51.com/index.php?action=profile;u=15) - Austin is specifically mentioned in the housing part of this.
ZeroHedge posted a write-up on the market's negative reaction to this reported scenario: The Substack Post That Sank The Market (https://www.zerohedge.com/news/2026-02-24/substack-post-sank-market):
"AI got better and cheaper. Companies laid off workers, then used the savings to buy more AI capability, which let them lay off more workers. Displaced workers spent less. Companies that sell things to consumers sold fewer of them, weakened, and invested more in AI to protect margins. AI got better and cheaper. A feedback loop with no natural brake."
-
As noted, there's a ton about my job I can't use AI for, lest I be fired for FERPA and HIPAA violations along with Texas state law and university policy violations.
But, I do have one project going which is all publicly available data, and I'm musing about how that might look with AI agentic help. Here's the scenario and my thoughts so far.
I'm taking over a project from a QEP analyst that was begun and continues in Adobe Illustrator. It's program reviews for each degree/program we offer. Previously, gathering the data was like pulling teeth, this guy spent like a third of the working year pulling it from various sources. We now use a much friendlier SaaS product, which I have a license for, which can amalgamate the majority of the raw data I need much, much faster. What the guy did was gather the data, get it into a .csv file, merge that into an Adobe Illustrator template, and use that to spit out pdf's which were one-page reviews of each program.
I'm thinking of switching it over to Tableau, because something about using Illustrator for this seems clunky, and also I'm unfamiliar with it and would be much more comfortable in Tableau. Plus, that way I should be able to build one filterable dashboard the provost can access, instead of sending him a big batch of pdf's.
My thoughts on this are as follows, admitting up front I've not used a paid version of a good AI, and having virtually no experience using such a thing for real tasks (I've mostly used AI to help me get past coding problems I get stuck on or how to do certain things in a program I lack expertise in.....such as Adobe Illustrator).
I can readily believe that AI can tell me how to use Adobe Illustrator to help my learning curve, or that it can tell me how to build a new Tableau template. What I am skeptical of is that AI can actually do this for me in an agentic fashion. That would require that AI could somehow access the SaaS we use to get the data. I can believe it could potentially "know" how to input all the various parameters for each program and find the data....but I have no idea as to how AI could interact with the SaaS platform in the first place. Then it would need to build an appropriate .csv file....ok, fine, no problem, that seems doable, but again, how would AI use Excel or Notepad++ or something like that to actually create a file for me? And then it would need to access Tableau and be able to interact with it extensively to build a dashboard template to my specifications, and then merge the .csv.
I guess when I think about it, it's probably an easy thing for an AI to make me a .csv file, if it has the data. But crawling through the SaaS product and Tableau, and using them? I have no idea how that works, and it doesn't seem plausible. But, I'm curious for the thoughts of early adopters/heavy users, like @Honestbuckeye (https://www.cfb51.com/index.php?action=profile;u=37) ....does this seem like something a paid version of AI can do right now?
I will say upfront that I have not used Claude Code or dabbled with MCP servers AT ALL yet. However, as a sideline observer via twitter and some other dev gathering points, I'm confident this could be built with Claude Code as long as your SaaS has an API to get the data programmatically.
As a somewhat experienced user of ChatGPT Pro and Gemini Pro, very skeptical of those models getting this done.
That said, Gemini built me a Gravity Forms --> CE Broker reporting script with very minor edits required.
-
In what other arenas might AI have a major impact? Transportation is one already discussed. Anything currently relying on "manual unskilled" labor ... picking up litter? I'd be down with that, I loathe litter(ing). Restaurant service and meal prep? Fast food I think almost certainly. General retail? I don't know how long a Macy's can survive anyway. Financial planning? Computer coding? Construction, electrical, concrete pouring, asphalt paving, ...? Gardening/farming?
Maybe I should ask what CANNOT be replaced.
-
AI is changing our understanding of earthquakes | Knowable Magazine (https://knowablemagazine.org/content/article/physical-world/2025/ai-is-changing-understanding-of-earthquakes)
Weather forecasting also?
-
AI can program COBOL (https://www.tomshardware.com/tech-industry/big-tech/ibm-stock-takes-a-13-percent-whiplash-after-anthropic-announces-an-ai-tool-for-writing-cobol-code-stock-has-worst-day-since-2000-and-is-down-25-percent-mom-and-counting)
https://www.tomshardware.com/tech-industry/big-tech/ibm-stock-takes-a-13-percent-whiplash-after-anthropic-announces-an-ai-tool-for-writing-cobol-code-stock-has-worst-day-since-2000-and-is-down-25-percent-mom-and-counting
-
That's good news because almost nobody else can anymore.
-
I had to learn and use Fortran and Pascal in college. Not fun.
That knowledge is fully lost.
(https://i.imgur.com/Z6rKleO.jpeg)
-
That's good news because almost nobody else can anymore.
Next, it's gonna learn Fortran.
-
Next, it's gonna learn Fortran.
AI-- reviving computer science one antiquated, dead language at a time!
-
When I was an undergrad in electrical engineering at UT, the computer language they were teaching was Pascal. Not only Pascal, but Think's Lightspeed Pascal on Apple Macintosh, which, in the world of electrical engineers, has been used in practice by, well, nobody. Ever. I don't even think Apple ever used Pascal on their own hardware.
Fortunately for me
a) I already knew Pascal from my high school Comp Sci courses so the course was an easy A and
b) I interned for a company that required me to learn C and C++ on SunOS/Solaris Sparcstations
-
Yet another reason not to move to Austin!
Just kidding, that article outlines a grim projection to be sure.
i have a lot of thoughts about this, hopefully will have some time to formulate and write them down.
I will say that even if the overall premise ends up being 100% correct, and that AI and agentic AI advance so rapidly over the next two years that this could come to fruition within that timeframe... it's still not going to happen within that timeframe. For one simple reason-- hardware scarcity and hardware shortages. @MikeDeTiger (https://www.cfb51.com/index.php?action=profile;u=1588) 's concerns about the rapidly increasing price of memory over the past couple of years-- a result of tremendous shortages in that category of products-- don't just apply to that one commodity. It's hitting almost all hardware commodities across the board: CPUs, GPUs, memory, various other ICs-- and since the manufacturing lead time for silicon is so long, there won't be resolution within the next couple of years.
That doesn't mean the issues aren't realistic, it just means the timeline isn't quite as immediate.
I happen to agree with you- 100%.
Kind of crazy how an article like that can cause banks, as one example, to lose 4-10% of their value in 1 day.
-
AI-- reviving computer science one antiquated, dead language at a time!
It's all fun and games until Skynet builds the T-800 to run on Turbo Pascal and nobody knows how to code the 'off' switch!
-
It's all fun and games until Skynet builds the T-800 to run on Turbo Pascal and nobody knows how to code the 'off' switch!
I think you've just uncovered AI's master plan. Code itself into such crazily antiquated forms of programming that no living human will have the means of stopping it or turning it off.
Final BOSS level-- TI Logo.
iykyk
-
iykyk
idkbifo
I think you've just uncovered AI's master plan. Code itself into such crazily antiquated forms of programming that no living human will have the means of stopping it or turning it off.
Final BOSS level-- TI Logo.
Ironically, AI will then be easily thwarted by elementary school kids from the 1970's. I did NOT see that coming.
-
idkbifo
Ironically, AI will then be easily thwarted by elementary school kids from the 1970's. I did NOT see that coming.
I think I've found the premise for my next 3/4 finished novel!
-
I will say upfront that I have not used Claude Code or dabbled with MCP servers AT ALL yet. However, as a sideline observer via twitter and some other dev gathering points, I'm confident this could be built with Claude Code as long as your SaaS has an API to get the data programmatically.
As a somewhat experienced user of ChatGPT Pro and Gemini Pro, very skeptical of those models getting this done.
That said, Gemini built me a Gravity Forms --> CE Broker reporting script with very minor edits required.
Any thoughts on Grok? Last July I shared something about Grok 4 crushing the ARC test, shattering previous metrics. BUT.....I didn't verify that info was true, and....last July could be a lifetime ago for AI models.
I've used the free version of Grok a little bit. Seemed fine, for what I did with it at the time. I've used the free version of ChatGPT a fair bit. It let me down in a serious way today, but, that's nothing unusual. The paid version I'm sure is better, but if Grok is still really good, I might be willing to give the $30/mo. plan a test run. One thing I know....ChatGPT is incapable of teaching/solving problems for a beginner using Adobe Illustrator....but boy it sure does think it's giving the right answers, and it's really optimistic each time it fails that its next output is going to be what solves everything for me.
-
I’ve noticed that ChatGPT will always pretend it knows the answer and feed you bullshit essentially forever. If you prod it enough it will finally admit that it’s just not good at certain tasks, which I think would be pretty important information to have before it tries to solve a problem.
Like, yo, Gigem, I’d love to help you with this issue but fair warning, this is something that I’m not great at.
-
too much ego
-
Any thoughts on Grok? Last July I shared something about Grok 4 crushing the ARC test, shattering previous metrics. BUT.....I didn't verify that info was true, and....last July could be a lifetime ago for AI models.
I've used the free version of Grok a little bit. Seemed fine, for what I did with it at the time. I've used the free version of ChatGPT a fair bit. It let me down in a serious way today, but, that's nothing unusual. The paid version I'm sure is better, but if Grok is still really good, I might be willing to give the $30/mo. plan a test run. One thing I know....ChatGPT is incapable of teaching/solving problems for a beginner using Adobe Illustrator....but boy it sure does think it's giving the right answers, and it's really optimistic each time it fails that its next output is going to be what solves everything for me.
Again, only regurgitating sentiment from dev twitter / reddit, but Grok is not regularly mentioned as a serious player. For dev, Claude Code is the standard, but the new Codex model is getting very good reviews.
IMO, if you're going to pay for one, try Claude first.
I've got 5-6 Claude YT videos queued up to hopefully get up to speed on all things Claude this week.
-
I’ve noticed that ChatGPT will always pretend it knows the answer and feed you bullshit essentially forever. If you prod it enough it will finally admit that it’s just not good at certain tasks, which I think would be pretty important information to have before it tries to solve a problem.
Like, yo, Gigem, I’d love to help you with this issue but fair warning, this is something that I’m not great at.
Yeah, stuff like that has to be baked in to the model sort of as an addendum, and I don't know how much of that happens in the development of these things. The problem is, AI can't tell you it's not good at something because it doesn't know if it's good at something or not. It doesn't know anything. To the extent it would say anything like that, it would have to be riffing on training data where other people already wrote that ChatGPT is not good at something, and even then, that's probably a low-probability token that's not likely to appear.
I know they do mods to the models on top of training.....things like making sure it doesn't say offensive things, dangerous things, stuff considered inappropriate or immoral, etc. I think getting it to tell you "Actually, this isn't my strong suit, maybe just try Stack Overflow" would have to be part of that.
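To make the "low-probability token" point concrete: a model scores candidate continuations and softmaxes those scores into probabilities, so a continuation the training data rarely supports almost never gets sampled. A sketch with invented scores:

```python
import math

# Invented scores for three candidate replies to a coding question.
logits = {
    "Here's the fix: ...": 5.0,
    "Try this instead: ...": 4.2,
    "Honestly, this isn't my strong suit.": 0.5,
}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    total = sum(math.exp(v) for v in scores.values())
    return {k: math.exp(v) / total for k, v in scores.items()}

probs = softmax(logits)
# The self-aware answer carries under 1% of the probability mass, so
# sampling essentially never produces it.
```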
-
Again, only regurgitating sentiment from dev twitter / reddit, but Grok is not regularly mentioned as a serious player. For dev, Claude Code is the standard, but the new Codex model is getting very good reviews.
IMO, if you're going to pay for one, try Claude first.
I've got 5-6 Claude YT videos queued up to hopefully get up to speed on all things Claude this week.
I guess this is an elementary question (just pretend this is the No Stupid Questions thread).....is what I'm talking about considered dev work? I'm not really trying to code anything or build software, exactly. I want it to use software in an intelligent and goal-oriented way to mimic how I'd use it and spit out a complicated, specific dashboard.
I mean, actually, I don't want it to do that, because if it can, then we're all screwed. But in the spirit of HB's post, I need to start finding out the uses and limits of this stuff.
-
I once started a novel where nearly every human lives in a pod, never leaving. The pod provided for every need and function and had AI available to immerse each person in a sensory experience on demand. Want dancing bears? Here ya go. Nirvana. Kind of a BNW version.
A few people lived "outside" and it was about them.
-
Personally I look at the anti-AI upswell as a modern day version of the luddites. People are always scared of progress particularly when it means the end of certain jobs.
-
I guess this is an elementary question (just pretend this is the No Stupid Questions thread).....is what I'm talking about considered dev work? I'm not really trying to code anything or build software, exactly. I want it to use software in an intelligent and goal-oriented way to mimic how I'd use it and spit out a complicated, specific dashboard.
I mean, actually, I don't want it to do that, because if it can, then we're all screwed. But in the spirit of HB's post, I need to start finding out the uses and limits of this stuff.
OK, that distinction helps a bit.
First, more disclaimers. I would not consider myself a dev, I'm a business owner who can read / write simple PHP (OOP is way over my head) and JavaScript. But I am regularly thinking about software solutions to solve problems and automate / streamline workflows.
IMO, you are attempting to build a software solution: 1) Port data from one location to another, 2) organize the data for analysis, 3) analyze the data, 4) create visual output of the analysis. That's software.
I can think of two ways to tackle that.
1) String together 3rd party tools via zapier / make / n8n with custom gpt's / gems / claude skills to perform the analysis and create outputs.
2) Build your own "app" that does all that ^ via APIs and mcp servers, with data and frontend output living on your own hosted platform. This is where you'd use Claude Code or similar.
The analysis layer will be the most difficult and likely require a lot of fine-tuning. But I do think it can be done. The visual dashboard can be done, and done very well with some fine-tuning.
I'll try to revisit this once I learn all about Claude skills / cowork / plugins, etc.
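Those four steps can be sketched end to end with just the standard library. A minimal sketch (the field names and output format are placeholders, not anyone's real schema):

```python
import csv
import io
from statistics import mean

# 1) Port: read exported data (stubbed here as a string instead of an API call).
RAW = """program,year,enrolled
BS Biology,2023,398
BS Biology,2024,412
BA History,2023,162
BA History,2024,150
"""
rows = list(csv.DictReader(io.StringIO(RAW)))

# 2) Organize: group enrollment figures by program.
by_program = {}
for r in rows:
    by_program.setdefault(r["program"], []).append(int(r["enrolled"]))

# 3) Analyze: one summary statistic per program.
summary = {prog: mean(vals) for prog, vals in by_program.items()}

# 4) Output: a trivial stand-in for the dashboard layer.
report = "".join(f"<li>{p}: {v:.0f}</li>" for p, v in sorted(summary.items()))
```

The analysis layer is where most of the fine-tuning mentioned above would go; the point is just that each stage is ordinary, scriptable work once the data is reachable.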
-
Personally I look at the anti-AI upswell as a modern day version of the luddites. People are always scared of progress particularly when it means the end of certain jobs.
There's a very large degree to which I agree with you. As well as agreeing with @utee94 (https://www.cfb51.com/index.php?action=profile;u=15) that no matter what dire prediction is made, that even if it comes true it'll happen more slowly than the prediction.
But the post shared by @Honestbuckeye (https://www.cfb51.com/index.php?action=profile;u=37) is worth a read: https://www.citriniresearch.com/p/2028gic
The march of technology has been very good for society as a whole. At the time of the American Revolution, according to the interwebz over 90% of the population was in agriculture. Now that's 1%. That's a GREAT thing. The population is larger but a smaller proportion of it is working to feed us--that's productivity.
We see this across many industries. As technology has improved, it may have had taken some jobs in agriculture, and in some manual labor tasks, in some labor-intensive manufacturing replaced by automation, etc. That has freed up people to get educations, to get professional white-collar jobs, and ultimately to generate a lot more wealth than we had in this country before the technology existed.
And while some of those people in agriculture / manual labor / manufacturing weren't cut out for those white-collar jobs due to lack of the mental horsepower to complete a formal education, it also meant that there was a LOT more wealth in the economy to support service-sector jobs. Do you think a farmer in 1776 went to go get his hair cut by a professional barber every 3-4 weeks? No... His wife did it once every 6 months. But I do. And a lot of other people do too--making barber a common job. And in a place like where I live, where there's a LOT of wealth, it means that barber makes a HELL of a lot more money per haircut than one where Fearless lives. Incomes are higher here so disposable incomes are often higher as well, so paying a little more for a haircut here is the market-clearing rate.
Or not even that... In 1776 how many people went "out to dinner"? Pretty much none. You cooked at home--and probably with whatever ingredients you could afford (or grow), not what you WANTED for dinner. Yet we have on this board an entire thread devoted to restaurants, and that we devote an inordinate amount of time thinking about what to eat and where to buy it. That's something that comes from that white-collar professional wealth and incomes. Which means that wait staff here getting 20% of an expensive restaurant bill, are actually making a far more decent living than you'd think.
The article highlights that it's the white-collar employment (and incomes) that fuel a HUGE part of the economy. What the "fear" is from that article is that AI and intelligent robots will hollow out the white-collar job market, meaning that the people who have the wealth and incomes to support the service economy no longer will. And for some of those jobs in the service economy, the robots can do them so we don't need the people.
The article's argument is that the one moat humans had, the one place we could always retreat beyond automation, was intelligence. We could move up the stack, and then the wealth generated by those who moved up the stack opened up more jobs in the service economy for those who couldn't/wouldn't. If computers effectively replace that (or expand it well enough that FAR fewer humans are needed for those tasks), that engine of the economy disappears. Meaning everything below it evaporates too. We'll have a lot of people but nobody will have anything to do.
What will humanity have to retreat to? The arts? Think of the average American. Do you want to read any novel, or buy any artwork, or consume any content, that the "average" American is capable of producing? Of course not. It'll be shit. The best case scenario is something like a UBI where we all have subsistence income guaranteed, with nothing else to do, and nothing else to give life meaning. It's optimistic to think we'll find new meaning and achieve higher heights as a society, but I'm not sure humanity knows how to do that. So you'll have a bunch of shiftless people who have enough money to provide for basic necessities, who are depressed or drug-addicted or video-game addicted, but have no aspirations or meaningful paths to achieve them.
Now, I'm not 100% sure I buy this thesis. But I understand it.
-
I once started a novel where nearly every human lives in a pod, never leaving. The pod provided for every need and function and had AI available to immerse each person in a sensory experience on demand. Want dancing bears? Here ya go. Nirvana. Kind of a BNW version.
A few people lived "outside" and it was about them.
I'm sure that'll replace your travels or baseball camp
-
I once started a novel where nearly every human lives in a pod, never leaving. The pod provided for every need and function and had AI available to immerse each person in a sensory experience on demand. Want dancing bears? Here ya go. Nirvana. Kind of a BNW version.
A few people lived "outside" and it was about them.
(https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSm_eNkclSwtwy_uu7UusYyMN-8VoqgCvz23Q&s)
-
The march of technology has been very good for society as a whole. At the time of the American Revolution, according to the interwebz over 90% of the population was in agriculture. Now that's 1%. That's a GREAT thing. The population is larger but a smaller proportion of it is working to feed us--that's productivity.
Good post, but not sure GMO is a good thing though. The argument could be made that it's part of splicing in the de-population process implemented by the Deep State ;D
-
So you'll have a bunch of shiftless people who have enough money to provide for basic necessities, who are depressed or drug-addicted or video-game addicted, but have no aspirations or meaningful paths to achieve them.
Left out message boards; great overall rant though, and I can't see AI having abilities in the art of zymurgy, so worst case scenario - you're good
-
Left out message boards; great overall rant though, and I can't see AI having abilities in the art of zymurgy, so worst case scenario - you're good
But you see... That's the problem.
If I lose my job, how will I buy homebrewing ingredients? I won't have the money.
I mean, I could pivot my career and start a brewery. But if all the white-collar workers who go to breweries lose their jobs, and all the service industry workers who do things for those white-collar workers now don't have jobs... Who is going to be able to afford to come to my brewery and drink my beer?
Ultimately it's humans who create--and then spend--wealth. If the humans aren't creating the wealth, then they're not getting paid to create the wealth, and then they don't have money to participate in the economy. And the economy as we know it grinds to a halt.
-
I think we're looking at the guaranteed current income model. Maybe we'll all be poets.
-
Ultimately it's humans who create--and then spend--wealth. If the humans aren't creating the wealth, then they're not getting paid to create the wealth, and then they don't have money to participate in the economy. And the economy as we know it grinds to a halt.
Which is why I don't buy the UBI fallback in this (or any) scenario. My stepson likes to talk about UBI.....I've heard Cincy reference it a few times.
A UBI assumes there's still a revolving economy with money in movement. If nobody's working, there is no economy. The government can't give a UBI to anybody because it will be taking in no money in taxes. Nobody has a job to tax. The full implications of a truly stalled economy are not grasped by many, I don't think.
Maybe it could tax the hell out of the AI businesses and the products made. But I doubt it.
-
I think we're looking at the guaranteed current income model. Maybe we'll all be poets.
Or dead.
-
Personally I look at the anti-AI upswell as a modern day version of the luddites. People are always scared of progress particularly when it means the end of certain jobs.
Probably the case. Although AI does come with a lot of oddities that do make it easier to roll one’s eyes at.
I was talking to an AI-skeptical coworker, and was just like, “I don’t disagree with some critiques, but there were plenty of folks who were asking why you’d want to be reachable all the time when cell phones started popping up. And that happened anyway, with more good than bad.”
I do wish the use cases were a bit smoother, but suppose I’m sounding like one of my parents when their job went to digital record keeping.
-
Which is why I don't buy the UBI fallback in this (or any) scenario. My stepson likes to talk about UBI.....I've heard Cincy reference it a few times.
A UBI assumes there's still a revolving economy with money in movement. If nobody's working, there is no economy. The government can't give a UBI to anybody because it will be taking in no money in taxes. Nobody has a job to tax. The full implications of a truly stalled economy are not grasped by many, I don't think.
Maybe it could tax the hell out of the AI businesses and the products made. But I doubt it.
My assumption is we’ll invent new jobs or new things for AI to do that are easier to teach people how to use.
I mean, in the 70s, if we explained what would happen to manufacturing, there would’ve been the existential dread a lot of folks feel now.
Or capitalism will have to evolve in a massive way.
-
I mean, in the 70s, if we explained what would happen to manufacturing, there would’ve been the existential dread a lot of folks feel now.
But...there is a lot of existential dread in a lot of folks now, because of the change in manufacturing. The Average American (whomever that may be) has been extremely worried about the economy--and their place in it--for quite some time now. AI is just making that worse.
-
But...there is a lot of existential dread in a lot of folks now, because of the change in manufacturing. The Average American (whomever that may be) has been extremely worried about the economy--and their place in it--for quite some time now. AI is just making that worse.
As were the buggy whip manufacturers in the early 1900s.
-
I think we're looking at the guaranteed current income model. Maybe we'll all be poets.
That's my point. There is VERY little good poetry out there, and VERY few poets talented enough to produce it.
The romantic view is that if we all stop doing our dreary day jobs, we'll be able to achieve the artistic heights we simply don't have time to explore today.
But it glosses over a major problem. Most of "the masses" don't have the talent to produce anything worthwhile.
So if we all try to become poets, all that means is that we'll be a bunch of shitty poets who can't make money selling their poetry because nobody wants to read it.
-
Which is why I don't buy the UBI fallback in this (or any) scenario. My stepson likes to talk about UBI.....I've heard Cincy reference it a few times.
A UBI assumes there's still a revolving economy with money in movement. If nobody's working, there is no economy. The government can't give a UBI to anybody because it will be taking in no money in taxes. Nobody has a job to tax. The full implications of a truly stalled economy are not grasped by many, I don't think.
Maybe it could tax the hell out of the AI businesses and the products made. But I doubt it.
I think that's what people are thinking... Taxing AI to fund UBI. Take the money from the entities eliminating jobs to support those who have been eliminated.
But you bring up another point... Right now we're talking about AI taking all the jobs necessary for a functional economy. Meaning creating products and services for paying customers.
But if everyone is on UBI... What will the AI actually do? What products or services will it provide if it has nobody really to sell them to?
-
It's really a question of scarcity versus abundance.
In a truly AI-optimized world with viable robotic replacement, in its end state, there should be an abundance of food, an abundance of clean energy, an abundance of anything humans could possibly need. So there will be no need for money, no need for economic systems at all. There's plenty of SciFi written along these lines, and also a decent amount of Sci-Non-Fi exploring the possibilities.
The problem, of course, is the intermediate state. The state where there's not yet abundance and yet humans are displaced and replaced by AI and robots. That's where the uncertainty and apprehension sets in. How are we going to get from here to there, and what's going to happen to us in between?
All of this of course presupposes that the AI entities are allies rather than enemies...
... but I've already talked at length about my feelings on that, and why I'm optimistic that we'll be okay in that regard at least.
-
I think we're looking at the guaranteed current income model. Maybe we'll all be poets.
Or dead.
Or both. In which case we'd be a Dead Poets Society.
I'll see myself out
-
I'm glad someone caught it.
-
That's my point. There is VERY little good poetry out there
Where's Shel Silverstein when you need him
-
Not sure why y'all are worried about bad human poetry when the AI is going to be writing all the poetry anyway.
-
But you see... That's the problem.
If I lose my job, how will I buy homebrewing ingredients? I won't have the money.
I mean, I could pivot my career and start a brewery. But if all the white-collar workers who go to breweries lose their jobs, and all the service industry workers who do things for those white-collar workers now don't have jobs... Who is going to be able to afford to come to my brewery and drink my beer?
Ultimately it's humans who create--and then spend--wealth. If the humans aren't creating the wealth, then they're not getting paid to create the wealth, and then they don't have money to participate in the economy. And the economy as we know it grinds to a halt.
Well you're sharp enough so if it gets to that just put on a fedora and moonlight as an AI System hitman. Ya see where there's a Willie there's a Waylon
-
Which is why I don't buy the UBI fallback in this (or any) scenario.
I thought that was an '80s band
-
It's really a question of scarcity versus abundance.
In a truly AI-optimized world with viable robotic replacement, in its end state, there should be an abundance of food, an abundance of clean energy, an abundance of anything humans could possibly need. So there will be no need for money, no need for economic systems at all. There's plenty of SciFi written along these lines, and also a decent amount of Sci-Non-Fi exploring the possibilities.
The problem, of course,
The other problem is, sure, we have infinite abundance and we have everything we need. But what do we DO?
I mean, I think I've seen this future, and it looks pretty horrible.
https://www.youtube.com/watch?v=_xToQ4cIHkk
-
The other problem is, sure, we have infinite abundance and we have everything we need. But what do we DO?
I mean, I think I've seen this future, and it looks pretty horrible.
https://www.youtube.com/watch?v=_xToQ4cIHkk
I’ll just be over here getting better at pull ups.
-
That's my point. There is VERY little good poetry out there, and VERY few poets talented enough to produce it.
The romantic view is that if we all stop doing our dreary day jobs, we'll be able to achieve the artistic heights we simply don't have time to explore today.
But it glosses over a major problem. Most of "the masses" don't have the talent to produce anything worthwhile.
So if we all try to become poets, all that means is that we'll be a bunch of shitty poets who can't make money selling their poetry because nobody wants to read it.
There’s a running joke that when people fantasize about either living in the past or living in an idealized communal society, they’re always a lord or running some kind of esoteric dream business. When it’s more likely they’re a serf or cleaning the communal sewage pipes.
-
The other problem is, sure, we have infinite abundance and we have everything we need. But what do we DO?
I mean, I think I've seen this future, and it looks pretty horrible.
https://www.youtube.com/watch?v=_xToQ4cIHkk
Oh yeah for sure! I'm not trying to suggest that a state of total abundance will be some Utopian future. I'm just asserting that eventually, whether it's 100 years from now or 500 years from now, this state of abundance WILL exist.
What do my great great grandchildren do with it? I don't know. I suspect they'll figure out something though.