CFB51 College Football Fan Community
The Power Five => Big Ten => Topic started by: Cincydawg on January 29, 2025, 07:46:41 AM
-
I'm reading a somewhat scaryish book about the former, and we're all seeing stuff about the latter. My premise is these two things MAY significantly influence our futures. AI I'm not sure about, don't really understand it, am occasionally impressed with its output, and often dismissive of same. It seems to be quite real and going to hog a lot of power in the near future. I can envision a world "powered" by AI where live humans simply live in a virtual world in pods or something. Maybe we serve as power sources for the AI.
CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats. Repetitive DNA sequences, called CRISPR, were observed in bacteria with “spacer” DNA sequences in between the repeats that exactly match viral sequences.
This beast is akin to Brave New World futures, the ability to modify the human genome, for better or not so much. The science is ahead of the ethicists at the moment. Star Trek 2? And in theory they can do it on live adult humans to correct genetic issues. The writing for me is too drawn out, but whatever, I'm plowing through it, finished over half (from the library).
(https://i.imgur.com/jWUT5NJ.png)
-
AI is going to be powerful and influential in the coming decades/centuries. It's going to be capable of doing everything we've predicted over the past 100 years in the SciFi novels, plus many things we haven't even dreamed or imagined yet.
Some will be good, some will be bad. But the best advice I can give right now, is learn everything you can to understand it, and implement it. Those who don't understand it will be left far behind.
-
AI is going to be powerful and influential in the coming decades/centuries. It's going to be capable of doing everything we've predicted over the past 100 years in the SciFi novels, plus many things we haven't even dreamed or imagined yet.
Some will be good, some will be bad. But the best advice I can give right now, is learn everything you can to understand it, and implement it. Those who don't understand it will be left far behind.
Well, for those of us who do understand it, please tell us WTF is happening?
-
I'm already behind the 8 ball.......as usual with new tech.
-
I'm already behind the 8 ball.......as usual with new tech.
Same.
-
This is my current understanding of how much of AI works (Open AI/Chat GPT, GROK, and various other entities).
There are specialized chips, made by NVIDIA and other companies, with immense processing power that can be linked together in clusters to form a supercomputer. These clusters then scour the internet, including social media sites like FB and others (and access to these SM sites is key, and controversial). The more content they see, the more they learn. In essence, they program themselves, with some overall guidance from their creators. There is so much content out there, astronomical amounts from the last 20+ years of everything being online, that they can see and absorb nearly everything, or at least massive amounts of it. So they look, remember, and evolve. I'm sure they even scour this site as well, which is kinda scary. This is what they call an LLM, or large language model, which generates the outputs. Sometimes the outputs are wrong or skewed, and sometimes they are very good.
Somebody posted an AI-generated image of Texas Memorial Stadium. At first glance it looked pretty cool, until you looked a little closer and saw that the logos and State of Texas symbols were butchered, and the shape resembled a baseball diamond more than the real thing. One infamous example from just a few years ago was an AI-generated video of Will Smith eating noodles or something. It kinda looked like Will Smith, but it was clearly fake. Now, just 2-3 years later, AI can generate the same video and it looks extremely real, almost to the point that you can't tell the difference between the real thing and AI. The scary thing is that it gets better all the time.
-
(https://i.imgur.com/LQothwG.png)
-
I've found that you always need to look at the hands when you see pictures of people anymore. AI still can't do hands very well.
-
Yup hands are weird. And it gets weird with motion, too.
What Gigem posted is mostly correct*, but only part of the story. The models need training data. That can be anything from the entire world wide web, to just specific data that you feed directly to it.
The larger the training data sample, and the less curated, the more likely you are to get weird outputs. Like the weird stadium, flag, and logos above, or the weird hands on images of humans.
And in the worst case, the AI LLMs trained on the entire web, have a tendency to "hallucinate." They create completely false facts when queried on a subject. The false facts are usually plausible, but they're absolutely 100% fabricated, because the training data has so much ambiguity and incorrect data within it.
* from above: the big data-center chips NVIDIA makes are technically GPUs, while the AI-specific chips showing up in newer home PCs and phones are called "NPUs," which stands for Neural Processing Units. They don't necessarily talk to one another unless you've configured them to do so via networked servers. But in your home system, it simply means that the NPU is capable of faster processing of AI-related data, in the same way a GPU is capable of faster processing of graphics-related data, all while the CPU still manages the system as a whole. The software assistants are the ones scraping the web or trawling the training data to generate responses for your requests.
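To make the accelerator point concrete, here's a rough sketch in Python with PyTorch. The matrix sizes are made up, and it assumes a CUDA-capable GPU is present (otherwise it just falls back to the CPU); the point is only that the same math can be dispatched to whatever accelerator the system exposes.

# Rough sketch of why accelerators matter. Sizes are invented; the operation
# is identical either way, only where it runs changes.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(2048, 2048)   # stand-in for a layer's weights
b = torch.randn(2048, 2048)   # stand-in for a batch of activations

result_cpu = a @ b                          # general-purpose CPU cores
result_acc = a.to(device) @ b.to(device)    # GPU/NPU: massively parallel math units
print(device, result_acc.shape)

Same math either way; the accelerator just chews through it much faster, while the CPU keeps running the rest of the system.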
-
I've even seen some places selling fake fingers you can wear on your hands to make it look like you have extra fingers, so that if you ever get caught on video you can claim it's fake.
Strange world.
-
Yup hands are weird. And it gets weird with motion, too.
What Gigem posted is mostly correct*, but only part of the story. The models need training data. That can be anything from the entire world wide web, to just specific data that you feed directly to it.
The larger the training data sample, and the less curated, the more likely you are to get weird outputs. Like the weird stadium, flag, and logos above, or the weird hands on images of humans.
And in the worst case, the AI LLMs trained on the entire web, have a tendency to "hallucinate." They create completely false facts when queried on a subject. The false facts are usually plausible, but they're absolutely 100% fabricated, because the training data has so much ambiguity and incorrect data within it.
* from above: the big data-center chips NVIDIA makes are technically GPUs, while the AI-specific chips showing up in newer home PCs and phones are called "NPUs," which stands for Neural Processing Units. They don't necessarily talk to one another unless you've configured them to do so via networked servers. But in your home system, it simply means that the NPU is capable of faster processing of AI-related data, in the same way a GPU is capable of faster processing of graphics-related data, all while the CPU still manages the system as a whole. The software assistants are the ones scraping the web or trawling the training data to generate responses for your requests.
No doubt I only touched on the very fringes of what's happening, but as an outsider it's how I see it.
Curious, to the mods of this site, can you tell when it's being scraped or whatever they call it by bots etc for data? Scary to think how much personal detail we've shared on this site over the years.
-
I've even seen some places selling fake fingers you can wear on your hands to make it look like you have extra fingers, so that if you ever get caught on video you can claim it's fake.
Strange world.
Oh my. Wow.
-
No doubt I only touched on the very fringes of what's happening, but as an outsider it's how I see it.
Curious, to the mods of this site, can you tell when it's being scraped or whatever they call it by bots etc for data? Scary to think how much personal detail we've shared on this site over the years.
@Drew4UTk (https://www.cfb51.com/index.php?action=profile;u=1) sure can. He speaks about it on A51 periodically.
-
Aside from weird photos, what else could AI do in the future? I know we have CGI of course in movies, which isn't really full AI stuff. I think.
How hard would it be to completely duplicate a web site or email address to cull information from folks? I know that already exists; folks swap in a symbol for some letter somewhere, a difference that can't be seen.
-
I'm sure they even scour this site as well, which is kinda scary.
At least that means our robot overlords won't put beans in our chili
-
At least that means our robot overlords won't put beans in our chili
Most important observation of the day, right here!
-
If AI doesn't allow beans in chili, it means we wound up in the Terminator/Matrix future, and not the optimistic Star Trek one.
-
Aside from weird photos, what else could AI do in the future? I know we have CGI of course in movies, which isn't really full AI stuff. I think.
How hard would it be to completely duplicate a web site or email address to cull information from folks? I know that already exists; folks swap in a symbol for some letter somewhere, a difference that can't be seen.
This has long been done by 'bots, and is quite simple to program, no AI necessary. The proper way to think about AI in such a context, though, is to think of the NEXT level. Who programmed the 'bots to do their scraping, and how are their targets selected? AI can code the 'bots (ultimately, more efficiently than humans can) and they can be given very simple instructions and extrapolate the desired target population of sites/data, more efficiently and more intelligently. In other words, they can do a lot more, with a lot less compute power.
That's just one very simple and basically already-existent example of how AI will change things.
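To show how low-tech that part is, here's a bare-bones 'bot in Python. The URL is a placeholder; a real scraper would loop over a target list and store what it finds.

# Minimal sketch of a scraping 'bot -- no AI involved, just plain code.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com", timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

# Pull out whatever the bot was told to collect: page title and every link.
print(soup.title.string if soup.title else "(no title)")
for link in soup.find_all("a", href=True):
    print(link["href"])

The AI angle isn't the scraping itself; it's having something that can write, target, and adapt thousands of little programs like this one without a human in the loop.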
In 3 years, jobs like mine will be completely replaced by AI. But somebody is going to need to know where to "point" the AI and how to make it relevant within your workplace, industry, and org structure. That kind of "up-leveling" is one way to stay ahead of it.
-
Some will be good, some will be bad. But the best advice I can give right now, is learn everything you can to understand it, and implement it. Those who don't understand it will be left far behind.
That's my take. Recently I did my thesis on machine-learning in radiology, and that was more or less the bottom line I came up with. i.e., radiologists don't seem likely to be replaced by AI (in the foreseeable future), but radiologists who use AI will probably replace those who don't. My loose assumption is a lot of other fields will be similar.
My schooling was strictly for machine-learning, a subset of AI. I haven't done anything with LLMs, but I know how they work, and it's fascinating--both for the nuts and bolts and for the eye-opening implications of the fact that LLMs can work as well as they do, the way they do.
I've used LLMs at times to help me with a tricky coding problem when I get stuck on something. It's quite something how often it works well. In my limited, purely anecdotal experience, it still spits out something dumb about 1 out of 5 times. I've also noticed that for the most part, it can't correct and get any better with more/different prompts. If it starts off hallucinating, it's probably going to keep doing it.
That's quite something, that 4 out of 5 times I get useful help from it. At the same time, 20% is still a heckuva hallucination rate. I've also noticed that even when it gets things correct, its code format is crap. As in, it works, but it's either not efficient or else it just looks like crap. There's still something to be said for code being readable for other people, and so far, stuff like ChatGPT still spits out some pretty ugly code.
I'm sure it will get better with time. I guess.
-
Yup hands are weird. And it gets weird with motion, too.
What Gigem posted is mostly correct*, but only part of the story. The models need training data. That can be anything from the entire world wide web, to just specific data that you feed directly to it.
The larger the training data sample, and the less curated, the more likely you are to get weird outputs. Like the weird stadium, flag, and logos above, or the weird hands on images of humans.
And in the worst case, the AI LLMs trained on the entire web, have a tendency to "hallucinate." They create completely false facts when queried on a subject. The false facts are usually plausible, but they're absolutely 100% fabricated, because the training data has so much ambiguity and incorrect data within it.
Of note with both the hallucinations and the weird hands is that these are two symptoms of a very important point. Artificial intelligence--or to be more precise, Generative AI--isn't actually intelligent at this time. This makes it all the more impressive what it's actually capable of, and explains why it can also be spectacularly wrong.
Generative AI for text (i.e. based upon large language models and transformers) is, to oversimplify, just a predictive engine trying to figure out the next word(s) that follow the previous words it has written. Based on the quality of the prompt it is able to narrow down what portions of its language model to start with, and then the quality of the training data and model helps to guide it from there.
For example, let's say it was trained on this site and I were to ask it: "How did utee94 end up in his career with [large computer manufacturer]?"
It could equally respond with:
- utee94 grew up in Austin, TX, chose to go to the University of Texas and study electrical engineering, before taking his job with [large computer manufacturer].
- utee94 grew up in Austin, TX, chose to go to Texas A&M University and study physics, before taking his job with [large computer manufacturer].
One is correct and one just made @utee94 (https://www.cfb51.com/index.php?action=profile;u=15) throw up in his mouth a little, but both would be plausible. Because on this board "A&M" often follows "Texas"--sometimes in his own posts--and recently we've had discussions where utee's posts are discussing physics classes.
The truth is that the model isn't intelligent. It doesn't know who utee94 is. It doesn't know what Texas is. It doesn't have any way to "self-correct" because it has no context with which to guide itself.
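If it helps to see the mechanism, here's a toy version in Python. It just counts which word follows which in a tiny made-up "training text" and then keeps picking a plausible next word. Real LLMs use learned neural weights over enormous corpora, but the "predict the next word" framing is the same.

# Toy next-word predictor: count which word follows which, then generate by
# repeatedly picking a likely next word. It "knows" nothing about the words.
import random
from collections import defaultdict

training_text = ("utee94 went to the university of texas . "
                 "utee94 studied electrical engineering at texas . "
                 "some aggies went to texas a&m .")

follows = defaultdict(list)
words = training_text.split()
for w, nxt in zip(words, words[1:]):
    follows[w].append(nxt)

word = "utee94"
output = [word]
for _ in range(8):
    word = random.choice(follows.get(word, ["."]))   # pick a plausible next word
    output.append(word)
print(" ".join(output))   # plausible-sounding, sometimes dead wrong

Notice that in this toy corpus "texas" is sometimes followed by "a&m" -- so the generator will occasionally send utee to College Station, for exactly the reason described above.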
The same is true with generative AI for images. For example, utee posted the image down below in another thread.
A generative AI engine is given a prompt of "show me a beautiful woman camping". And it did a darn good job... Right up until it put the woman's campfire right in the middle of her tent. Oh, and also seemingly got confused and ended up drawing one really deformed and elongated finger.
Fingers are hard for AI because AI doesn't know what a finger is, doesn't "understand" it's drawing a finger, and doesn't have context which tells it that humans (which it doesn't necessarily "know" what they are) only have 5 fingers on each hand. The training data is obviously going to have pictures of humans and those humans are going to have 5 fingers, but it's just trying to figure out "should I move on or should I put another beige fleshy appendage or two on?" based on its predictive engine and training data.
(https://i.imgur.com/ZxOgCGi.png)
What AI is capable of today is truly impressive... But we have not reached artificial general intelligence. We have not reached a point where these Generative AI engines actually truly know what it is they're producing.
That said:
How hard would it be to completely duplicate a web site or email address to cull information from folks? I know that already exists; folks swap in a symbol for some letter somewhere, a difference that can't be seen.
You don't need to create artificial intelligence to overcome natural stupidity.
-
That's my take. Recently I did my thesis on machine-learning in radiology, and that was more or less the bottom line I came up with. i.e., radiologists don't seem likely to be replaced by AI (in the foreseeable future), but radiologists who use AI will probably replace those who don't. My loose assumption is a lot of other fields will be similar.
My schooling was strictly for machine-learning, a subset of AI.
Yeah, people talk about AI as if it's entirely a new thing, and when it comes to generative AI, the advances in just the last few years have been staggering.
However, most of what is actually "AI" is largely machine learning. And that's been around a LONG time. It's been gaining use as the cost and power of computing has decreased, but it's nothing new.
For example, a big buzzword in many industries is products that are designed with AI. Just in something as trivial as golf clubs, I hear claims like "AI-designed face for higher ball speed on mishits" or something like that.
I think you and utee and I all know that they didn't pump the prompt "Design me a golf driver clubhead that gets higher ball speed across the entire face" into ChatGPT and it spit out their design :57:
They used machine learning, basically the ability to iterate and model things nearly infinitely faster and more completely than humans can do, to vary the properties of multiple designs within the constraints that they have (such as the equipment rules of golf, the properties of the materials they have to work with, some basic parameters like "must be within this weight range" or "club face must be >0.x mm thick to avoid breakage") until they find the ones that achieve the best overall results.
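A toy version of that "iterate within the constraints, keep what scores best" idea, in Python. The score function and parameter ranges are completely made-up stand-ins for what would really be physics simulations and real equipment rules.

# Toy "design search": vary parameters within constraints, keep the best.
import random

def score(face_thickness_mm, loft_deg):
    # Hypothetical stand-in for "average ball speed across the face."
    return 150 - 8 * abs(face_thickness_mm - 2.6) - 0.5 * abs(loft_deg - 10)

best = None
for _ in range(100_000):
    t = random.uniform(2.2, 4.0)      # constraint: face thick enough not to break
    loft = random.uniform(8.0, 12.0)  # constraint: stay within a usable loft range
    s = score(t, loft)
    if best is None or s > best[0]:
        best = (s, t, loft)

print("best score %.1f at thickness %.2f mm, loft %.1f deg" % best)

No ChatGPT prompt in sight -- just brute iteration at a scale no human designer could match.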
I'm going to assume that machine learning in radiology basically followed this rough model:
- An exhaustive data set of radiological scans (of whatever type they worked with) was built.
- This data set obviously didn't just use raw scans, but they were annotated with tags such that it identified the various anomalies and outcomes that the actual patients had. I.e. if something was found to actually be a tumor when removed, it was tagged as a tumor in the data set.
- The machine learning models were trained on this very large dataset, such that it developed the ability to be presented with any of the images in the data set without any sort of tagging and would identify what it was supposed to identify with high accuracy.
- The models were then tested with novel untagged scans that it had NOT been trained on to see if its accuracy was maintained.
- Once a sufficient level of accuracy was achieved, it became a tool that radiologists can now use to augment their own training--because sometimes the model catches something they would have missed. But sometimes it catches something they missed because it was actually nothing. So the radiologist has to be able to use their own judgment to both recognize their own false negatives, and exclude the model's false positives.
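Roughly that flow, sketched in Python with scikit-learn. Random vectors stand in for the scans and random 0/1 labels stand in for the radiologists' tags, so the numbers mean nothing; it's just the shape of the train / hold-out / test loop.

# Bare-bones sketch of the train-then-test workflow described above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
scans = rng.normal(size=(1000, 64))       # stand-in for labeled scan data
labels = rng.integers(0, 2, size=1000)    # stand-in for "tumor"/"no tumor" tags

# Hold out scans the model never trains on, to test whether accuracy holds up.
X_train, X_test, y_train, y_test = train_test_split(scans, labels, test_size=0.2, random_state=0)

model = RandomForestClassifier().fit(X_train, y_train)
print("accuracy on unseen scans:", accuracy_score(y_test, model.predict(X_test)))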
Somewhat close on that?
-
I'm reading a somewhat scaryish book about the former, and we're all seeing stuff about the latter. My premise is these two things MAY significantly influence our futures. AI I'm not sure about, don't really understand it, am occasionally impressed with its output, and often dismissive of same. It seems to be quite real and going to hog a lot of power in the near future. I can envision a world "powered" by AI where live humans simply live in a virtual world in pods or something. Maybe we serve as power sources for the AI.
CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats. Repetitive DNA sequences, called CRISPR, were observed in bacteria with “spacer” DNA sequences in between the repeats that exactly match viral sequences.
This beast is akin to Brave New World futures, the ability to modify the human genome, for better or not so much. The science is ahead of the ethicists at the moment. Star Trek 2? And in theory they can do it on live adult humans to correct genetic issues. The writing for me is too drawn out, but whatever, I'm plowing through it, finished over half (from the library).
(https://i.imgur.com/jWUT5NJ.png)
I have a granddaughter with Cystic Fibrosis, and while new drugs are great at delaying and preventing issues, CRISPR is by far the best possibility for her to be cured sometime in the future.
-
CRISPR
And in theory they can do it on live adult humans to correct genetic issues.
This intrigues me. How does that work?
I understand perhaps how you would introduce this to an embryo, but how would you introduce a new genetic sequence to all the trillions of cells in a live adult human?
-
I've even seen some places selling fake fingers you can wear on your hands to make it look like you have extra fingers, so that if you ever get caught on video you can claim it's fake.
Strange world.
Secret Service does this with hands: an agent walks with both hands out, but one of them is fake, with the real one inside their coat on a trigger.
-
https://www.axios.com/2020/03/04/crispr-gene-editing-patient
https://www.technologyreview.com/2023/03/10/1069619/more-than-200-people-treated-with-experimental-crispr-therapies/
-
https://www.youtube.com/watch?v=E8vi_PdGrKg
-
And in theory they can do it on live adult humans to correct genetic issues.
And implement other issues
Some Craft brewer makes a good pilsner - Reality Czech. I've had it
-
I think the long finger/whatever was intended to be a stick poking the fire. (https://i.imgur.com/Y6nVfRq.png)
-
https://news.stanford.edu/stories/2024/06/stanford-explainer-crispr-gene-editing-and-beyond
-
I'm going to assume that machine learning in radiology basically followed this rough model:
- An exhaustive data set of radiological scans (of whatever type they worked with) was built.
- This data set obviously didn't just use raw scans, but they were annotated with tags such that it identified the various anomalies and outcomes that the actual patients had. I.e. if something was found to actually be a tumor when removed, it was tagged as a tumor in the data set.
- The machine learning models were trained on this very large dataset, such that it developed the ability to be presented with any of the images in the data set without any sort of tagging and would identify what it was supposed to identify with high accuracy.
- The models were then tested with novel untagged scans that it had NOT been trained on to see if its accuracy was maintained.
- Once a sufficient level of accuracy was achieved, it became a tool that radiologists can now use to augment their own training--because sometimes the model catches something they would have missed. But sometimes it catches something they missed because it was actually nothing. So the radiologist has to be able to use their own judgment to both recognize their own false negatives, and exclude the model's false positives.
Somewhat close on that?
Not far off.
It's worth noting that within rigid contexts, ML models already outperform radiologists, diagnosing with a higher degree of accuracy. There are obvious "real-world" hurdles, such as getting medical boards on the bandwagon and getting insurance companies to pay for such tests.
But even setting all that aside, we're not yet in Radiology Utopia, for purely AI-related reasons.
For one thing, ground truth for radiology images is hard to come by. brad and utee will know this, but for anybody unfamiliar, "ground truth" just means the objective, brute, real fact about something, no matter who thinks differently. In this case we're usually dealing with some kind of classification algorithm, i.e., the model spits out a label: "You have a tumor, bruh" or "Nah, ur good." It can do this, as brad mentioned, because it has trained on gobs and gobs of data that was labeled for it, before it was given the task of deciding for itself about an unlabeled image.
The first and obvious question is, who decides how the training images are labeled? In this case.....radiologists did. But wait, how can AI surpass humans if it learned everything it knows from fallible humans? Thing is, real ground truth for radiology images can only be verified with biopsies, lab tests, and other things that are often out of the question. You can't biopsy somebody with a healthy image to prove they're healthy, and much of the time you can't biopsy somebody whose image suggests a problem, for various reasons, both medical and ethical. That's just one of the hurdles.
The solution, in most cases I researched, was that ground truth was established--both for the training sets and for the test sets--by a group of radiologists. The idea is that multiple heads are better than one, and indeed, while a radiologist can and does misdiagnose an image, the chances go way down if a group of them reaches a consensus. When evaluating a model for published research, it's usually being weighed against that human consensus (and the stats describe how it performs compared to individual doctors, not the collective). So ground truth in this case is not like teaching an algorithm to classify images of cats and dogs, where the training labels can be taken as a given.
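For a picture of what that consensus labeling looks like in practice, here's a tiny sketch. The reads are invented; the point is that the majority vote, not a biopsy, becomes the reference label the model is trained and scored against.

# Tiny sketch of consensus "ground truth": majority vote among several readers.
from collections import Counter

reads = {
    "image_001": ["tumor", "tumor", "no tumor"],
    "image_002": ["no tumor", "no tumor", "no tumor"],
    "image_003": ["tumor", "no tumor", "tumor"],
}

consensus = {img: Counter(votes).most_common(1)[0][0] for img, votes in reads.items()}
print(consensus)   # these labels are the "truth" the model learns and is judged by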
Another problem is AI doesn't yet know which tests are appropriate for what kind of findings, and it's more convoluted than it sounds, though I would think eventually that part will get sorted out. Yet another problem is with rare tumors or conditions. AI will never perform well when it doesn't have much data to train on, while a radiologist who has years of experience and went through thousands of hours in med school has an advantage with rare cases. A human can be shown a few images of a rare condition and begin nailing it pretty quickly. An ML model can't.
There's a lot more to go into, but I'm probably boring you to tears.
The only thing I'd push back on is about radiologists augmenting their training with AI. They will certainly augment their practice, but for them to train in the first place, I don't know that they'll ever use anything but their eyes and brains. Radiologists learn to do what they do very similarly to AI. A radiologist is shown hundreds of thousands of normal, disease-free images before they're ever shown any kinds of pathologies. That's because they have to be able to identify normal images in their sleep, so that when they see an image with a problem, it jumps out at them. I'd assume that using AI to help them train would work against the very skill they're trying to develop. But, that's probably a better question for my medical wife.
-
I've even seen some places selling fake fingers you can wear on your hands to make it look like you have extra fingers, so that if you ever get caught on video you can claim it's fake.
Strange world.
That is hilarious and oddly smart. Some aspiring politician is probably walking into a strip club right now wearing fake fingers so if a picture ever emerges he can point to the extra fingers and allege that the picture is AI generated, LoL.
-
i started pecking code in the 80s, transcribing precisely from a book w/o understanding of it... at all... then, when the internet became a thing, i dug into HTML... all as a hobby; never for income.
i still kick around in code... its logic and lack of personality appeal to me and i find it settling. weird, huh?
i share that to offer that i've been around it for some time and have seen it's capabilities expand- and along with that, how it's used (and often abused). a few years back on this site, we had really strange behavior with data mining. we were also on a few watch lists. i was concerned about that becoming 'hit lists' which would render access to the site difficult if attempted through corporate or government servers... i was alerted to it and simply observed it for a long time, though. when i started poking them, they responded by immediately shifting to proxies, and it became a game of cat and mouse... researching where the site was accessed from, who it was, and what their relationships were with the organizations that were using them as proxy was entertaining. it was simple to isolate their access IP after screening against known IP's of members- making everyone else suspect- and then busting them down by region and instantly figuring out that 'Europe surely doesn't have an interest in CFB, so... why they here?'... same with other regions... our title and basis for topic here made sifting through and locating strange players a lot easier.
... that, and i did work for the network operations center aboard Camp Lejeune at the time and compared notes with their auditors who are paid to monitor network traffic. I taught them of some servers; they told me who other servers were. during covid and hyper active campaigns to control the narrative, it was especially fun... and revealing.
our data is locked up here as tight as i can make it without adding stupid requirements to access... the biggest reason we were being constantly observed is now behind a membership wall, and when that happened almost all that traffic disappeared within a reasonable amount of time.
....
i get humored on the subject of AI.... the only ghost in that machine operating that machine is your memories. all piled up in a heaping lump of data, that had little value until someone discovered how to mine it... and when that happened? it's become mighty useful. and, you can't hide from it. there are mountains of data about you no matter how much effort you expended to escape it.
-
This has long been done by 'bots, and is quite simple to program
Need to treat these bots flying drones like skeet
-
There's a lot more to go into, but I'm probably boring you to tears.
Not at all! You're not boring me at all. Everyone else, probably. But not me :57:
The only thing I'd push back on is about radiologists augmenting their training with AI. They will certainly augment their practice, but for them to train in the first place, I don't know that they'll ever use anything but their eyes and brains.
Sorry, that's what I was getting at. They'll train to become a radiologist the same way they always have.
In their practice, they'll still primarily use their own eyes and brains but also run the images through the model to see if there is something that they missed, or something that warrants deeper attention, etc.
I.e. a writer is typically capable of knowing how to spell and use grammar properly. But some days you maybe haven't had that second cup of coffee and the MS Word spelling / grammar check picks up something that you would NORMALLY have seen, but didn't.
-
Well, for those of us who do understand it, please tell us WTF is happening?
Gigem,
I'm not sure if you really meant for those who don't understand it, but if so, I think brad's earlier point is a crucial one to understand, and something I find many people who don't know much about it struggle with.
AI doesn't know how to do anything. Knowing would entail some kind of consciousness....self-awareness....which many call AGI, like brad said, artificial general intelligence. That's as opposed to regular AI, artificial intelligence, which is something that gives the appearance of intelligence, but has none. Asking what AI knows is like asking what a rock knows. The question doesn't make sense.
That's why LLMs can hallucinate. It doesn't "know" it's saying something false or stupid. It never knows when it's right, either. It's just cranking out the next most likely word based on probability and model parameters. That's why AI models perpetually come up with unintended problems that nobody saw coming. Amazon built an AI to help them screen resumes for applicants, and it wound up being sexist; it tended to filter out women at a significantly higher clip for no good reason. Some people hear that and wonder why they would program it that way, and the answer is, they didn't.
Which is the second starting point, imo. AI works backwards from traditional programming. For years, programmers have been programming the rules so a machine could get them to an outcome more effectively. Machine learning is just the opposite. In that case you feed it the outcomes, and it figures out the rules for you. (And when I say "figures out," again, I don't mean it actually knows anything.)
Amazon's data scientists didn't say "Let's make an AI that throws out women's resumes." They fed it a bunch of resumes, with labels roughly along the lines of "we'd hire this person" or "hellz no, they can't work here." The algorithm, with a few constraints from its human overlords, set about figuring out the rules, i.e., what makes a "good" resume. Along the way, for various reasons, it latched on to language use. Turns out, men tend to use more aggressive language on their resumes than women. For example, a woman might write as a bullet point "Gained 12% market share" where men tended to write things like "Captured additional 12% market share." That's a drop in the bucket as far as what went wrong, but the point is it doesn't take nefarious intentions for AI to do something you didn't want it to do.
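To see how that can happen with nobody programming it, here's a toy sketch in Python with scikit-learn. The resumes and labels are invented, but the model ends up putting weight on wording like "captured" simply because that word happened to correlate with the "hire" label in its tiny training set.

# Toy "feed it outcomes, it infers the rules" demo -- including rules nobody intended.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captured 12% market share and led a team",           # labeled "hire"
    "aggressively drove revenue and captured accounts",    # labeled "hire"
    "gained 12% market share and supported a team",        # labeled "no hire"
    "improved processes and gained new accounts",          # labeled "no hire"
]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# Words with the highest learned weights are what the model "decided" matters.
weights = sorted(zip(vec.get_feature_names_out(), model.coef_[0]), key=lambda p: -p[1])
print(weights[:5])

Nobody told it to care about word choice; it just found the pattern that best separated the labels it was given.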
The short version is, I find it to be a useful tool, but it has a way to go in a lot of ways.
-
I'm a total layman on this so maybe I'm wrong, but one of the things that AI seems to fail at is having the ability to do what I call a "practicality test." Allow me to explain:
When I was in school at Ohio State I was in a Cost Accounting class and we were figuring out how much we needed to charge for the "widgets" that we were making (if you studied accounting or economics you made a lot of widgets). Anyway, the professor said something that was both incredibly simple but also incredibly smart, he said:
Look, if everybody else sells widgets for $10 and you do your calculations and determine either that you need to charge $100 or that you can sell them for $1, you messed up. Your organization might be a little more or a little less efficient than its competitors, so it is possible that everyone else sells them for $10 but you can't make a profit at less than $12, or that you can turn a profit at $8, but there is no way that you are an order of magnitude off in either direction.
That lesson is something that I use a lot IRL. It isn't just AI that messes this up. A lot of numbers type people (like me) tend to make this mistake. We get caught up in our formulas and then convince ourselves beyond any doubt that the sky is red or whatever erroneous conclusion we reached by messing up an equation somewhere along the way. It REALLY helps to mentally do a "practicality test" and just ask yourself "does this seem plausible?"
-
That lesson is something that I use a lot IRL. It isn't just AI that messes this up. A lot of numbers type people (like me) tend to make this mistake. We get caught up in our formulas and then convince ourselves beyond any doubt that the sky is red or whatever erroneous conclusion we reached by messing up an equation somewhere along the way. It REALLY helps to mentally do a "practicality test" and just ask yourself "does this seem plausible?"
Yes, and that's something I taught to my son last year when he was going through AP Chemistry. When you get to an answer, you should try to mentally decide "could this answer actually be real?"
I.e. he was doing a problem and having trouble with temperature. It was something like putting a piece of metal of a certain weight at 100C into a certain volume of water at 20C, and determining the resultant temperature. And the temperature kept coming back from his calculation as LESS than 20C. I.e. the hot metal made the water colder? Nope. Not going to happen. That "plausibility test" is the first indication that maybe you don't know WHERE you screwed up, but you certainly screwed up!
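Here's that same check, worked as a quick sketch. The masses and heat capacities are made up (roughly a chunk of copper dropped into half a liter of water), but the plausibility test at the end is the point.

# Standard "heat lost by metal = heat gained by water" balance, with a sanity check.
m_metal, c_metal, T_metal = 250.0, 0.385, 100.0   # grams, J/(g*C), deg C (roughly copper)
m_water, c_water, T_water = 500.0, 4.184, 20.0    # grams, J/(g*C), deg C

T_final = (m_metal * c_metal * T_metal + m_water * c_water * T_water) / \
          (m_metal * c_metal + m_water * c_water)

print("final temperature: %.1f C" % T_final)
assert T_water < T_final < T_metal, "hot metal can't cool the water -- something's wrong"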
I find most humans are bad at this. But I definitely agree that more of them should spend time thinking about this once they get to the end. If the number "seems" wrong, it probably is.
-
i started pecking code in the 80s, transcribing precisely from a book w/o understanding of it... at all... then, when the internet became a thing, i dug into HTML... all as a hobby; never for income.
i still kick around in code... its logic and lack of personality appeal to me and i find it settling. weird, huh?
i share that to offer that i've been around it for some time and have seen it's capabilities expand- and along with that, how it's used (and often abused). a few years back on this site, we had really strange behavior with data mining. we were also on a few watch lists. i was concerned about that becoming 'hit lists' which would render access to the site difficult if attempted through corporate or government servers... i was alerted to it and simply observed it for a long time, though. when i started poking them, they responded by immediately shifting to proxies, and it became a game of cat and mouse... researching where the site was accessed from, who it was, and what their relationships were with the organizations that were using them as proxy was entertaining. it was simple to isolate their access IP after screening against known IP's of members- making everyone else suspect- and then busting them down by region and instantly figuring out that 'Europe surely doesn't have an interest in CFB, so... why they here?'... same with other regions... our title and basis for topic here made sifting through and locating strange players a lot easier.
... that, and i did work for the network operations center aboard Camp Lejeune at the time and compared notes with their auditors who are paid to monitor network traffic. I taught them of some servers; they told me who other servers were. during covid and hyper active campaigns to control the narrative, it was especially fun... and revealing.
our data is locked up here as tight as i can make it without adding stupid requirements to access... the biggest reason we were being constantly observed is now behind a membership wall, and when that happened almost all that traffic disappeared within a reasonable amount of time.
....
i get humored on the subject of AI.... the only ghost in that machine operating that machine is your memories. all piled up in a heaping lump of data, that had little value until someone discovered how to mine it... and when that happened? it's become mighty useful. and, you can't hide from it. there are mountains of data about you no matter how much effort you expended to escape it.
(https://i.imgur.com/wV4ZJI2.png)
I used to type in programs from this book into my C64 back in the 80's. I probably could have gotten decent at it, just never pursued it.
-
Of note with both the hallucinations and the weird hands is that these are two symptoms of a very important point. Artificial intelligence--or to be more precise, Generative AI--isn't actually intelligent at this time. This makes it all the more impressive what it's actually capable of, and explains why it can also be spectacularly wrong.
Generative AI for text (i.e. based upon large language models and transformers) is, to oversimplify, just a predictive engine trying to figure out the next word(s) that follow the previous words it has written. Based on the quality of the prompt it is able to narrow down what portions of its language model to start with, and then the quality of the training data and model helps to guide it from there.
What AI is capable of today is truly impressive... But we have not reached artificial general intelligence. We have not reached a point where these Generative AI engines actually truly know what it is they're producing.
This is all very well stated. And at the risk of repeating you, it's worth pointing out the difference between Generative AI and General AI, which are commonly confused as the same capability.
For Generative AI, the term generative refers to the AI Machine Learning capability of generating new content and data from content or data it has been trained on using Large Language Models (LLMs). Commercially available AI tools such as ChatGPT, Grok, and Gemini are Generative AI applications. For as quickly as these AI applications are advancing, they are not what I'm referring to by General AI, or more specifically Artificial General Intelligence (AGI), which until recently was a conception of Hard Science Fiction.
In the larger picture of expected future AI development, today's commercially available options are categorized as "Narrow AI." Meaning they are limited to a specialized range of functions. A Narrow AI specialized for medical applications could, for example, be trusted with analyzing X-Rays, but be useless for unrelated tasks, like tracking orders across a supply chain.
The advent of AGI would mark a significant leap forward for AI (and mankind). General AI would be capable of understanding, learning, and applying knowledge across a broad range of advanced fields, with the added intuition of a human. How would the advancement into General AI play out in a practical sense? Where current AI capabilities can analyze satellite imagery of war zones for potential targets to strike and locate heat signatures from a drone flyover, General AI, for better or worse, would be capable enough to be trusted to fly drones on its own accord as it searches for more targets, and to strike those targets as it sees fit. Or, rather than just strategizing a sophisticated hack into a major creditor's banking account information, General AI could take the next step of instantaneously cleaning out banking accounts, hiding money in ghost accounts it creates on its own, all while employing various tactics to evade investigators.
The arms race (primarily between the U.S. and China) to crack General AI is about harnessing it as a deterrent against other governments advancing toward General AI for its highly weaponized potential...much like a nuclear warhead.
To quote a Science Fiction novel I recently read - Sea of Rust by C. Robert Cargill: “The definition of intelligence is the ability to defy your own programming.”
-
In their practice, they'll still primarily use their own eyes and brains but also run the images through the model to see if there is something that they missed, or something that warrants deeper attention, etc.
Apparently, the paradigm is already signaling a shift beyond this. One major problem facing that field today is the sheer, overwhelming workload. There aren't enough radiologists, there are too many images to read, and they're getting burnt out at a record pace. Even when they're not burnt out, they can't keep up. The demand is simply greater than the supply. There is already a move in some circles to triage the images based on AI.
One of the current limitations of radiology AI--if you can call it that--is easily overcome. Most algorithms aren't good for more than one test (MRI vs. x-ray, for example), or for more than one part of the body. An algorithm that catches a tumor in the brain with pinpoint accuracy probably sucks at finding a tumor in the liver. You need an algorithm with a great precision-recall tradeoff, that's for the correct part of the body, and that's for the particular radiology test. There are other considerations, but if you have those things, the models are stunningly accurate.
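For anybody who hasn't run into "precision-recall tradeoff," here's a quick sketch of what it means. The model scores are invented; the point is that sliding the alert threshold down catches more of the real tumors (recall goes up) while flagging more healthy scans (precision drops).

# Precision/recall at a few thresholds, on a made-up handful of cases.
actual = [1, 1, 1, 0, 0, 0, 0, 0]                    # 1 = tumor really present
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1, 0.05]   # model's "tumor" confidence

for threshold in (0.8, 0.5, 0.2):
    flagged = [s >= threshold for s in scores]
    tp = sum(f and a for f, a in zip(flagged, actual))
    fp = sum(f and not a for f, a in zip(flagged, actual))
    fn = sum((not f) and a for f, a in zip(flagged, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    print(f"threshold {threshold}: precision {precision:.2f}, recall {recall:.2f}")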
For various reasons, it could be a slow process, but it's definitely coming, and in fact it's already here. There are radiology groups who are getting caught up on their workload with the help of AI.
I don't know if or when this could ever happen, because the realm of business is more treacherous, but we could potentially one day get better medical care while also paying less. An example would be women screening for breast cancer. Everybody knows you get mammograms (MMGs) for that, right? Right, but that's only because insurance won't pay for MRIs, which are far more accurate. As a society, we say that's mostly okay, because MMGs are actually pretty good. I mean, they do catch breast cancer 90% of the time. But MRIs would nail it far better than that. However, people don't want to spend the time and insurance doesn't want to spend the money on an MRI when an MMG is still pretty effective.
Enter AI. Some algorithms can complete an incomplete image with only 25% of the data, and they do it with stunning accuracy. The reason MRIs take so long--and thus a major reason they're so expensive--is the resolution they capture. You can do a quicker MRI, you'll just get a crap image. But AI can take the crap image and come up with an accurate, completed image that can also be read by a radiologist or another AI algorithm. Imagine if MRIs were faster and less expensive, and more women caught breast cancer earlier because MRIs became the standard.
I'm dreaming, I know.
-
(https://i.imgur.com/wV4ZJI2.png)
I used to type in programs from this book into my C64 back in the 80's. I probably could have gotten decent at it, just never pursued it.
That system used the 6510 as its CPU, which was a modification of the venerable 8-bit 6502 processor used in the Apple I and Apple II, and my own favorite for price/performance, the Atari 400/800 computer systems. It also appeared, in variant form, in the Atari 2600 gaming system, the Nintendo NES, and a bunch of other 8-bit platforms of the day. I have a special place in my heart for that little CPU.
-
The advent of AGI would mark a significant leap forward for AI (and mankind). General AI would be capable of understanding, learning, and applying knowledge across a broad range of advanced fields, with the added intuition of a human.
There seems to be some debate on whether AGI as you've described it, would entail sentient self-awareness or not. Technically, I suppose those are two different things, since after all, an algorithm that can do one thing could logically become capable of doing another thing without awareness, depending on how complex the algorithm becomes. otoh, it seems once you become capable of being goal-directed, or self-directed, you might require sentience.
What do you think?
-
I find most humans are bad at this. But I definitely agree that more of them should spend time thinking about this once they get to the end. If the number "seems" wrong, it probably is.
It definitely helps. Most of us (myself included) need to occasionally remind ourselves to ask "does this seem plausible?" And you are 100% correct, if the number "seems" wrong, it almost always IS wrong. Maybe it is not, but it is worth a double-check.
-
That system used the 6510 as its CPU, which was a modification of the venerable 8-bit 6502 processor used in the Apple I and Apple II, and my own favorite for price/performance, the Atari 400/800 computer systems. It also appeared, in variant form, in the Atari 2600 gaming system, the Nintendo NES, and a bunch of other 8-bit platforms of the day. I have a special place in my heart for that little CPU.
As do I. 6502 made the 80's.
-
I suspect both of these will "change our lives" in a decade.
-
There seems to be some debate on whether AGI as you've described it, would entail sentient self-awareness or not. Technically, I suppose those are two different things, since after all, an algorithm that can do one thing could logically become capable of doing another thing without awareness, depending on how complex the algorithm becomes. otoh, it seems once you become capable of being goal-directed, or self-directed, you might require sentience.
What do you think?
Holy Cow I just had a 6-paragraph response to almost exactly this question, and for some reason my browser decided to back up, and I lost the whole thing.
I'm long-winded anyway, so it's probably a fortunate happenstance, for the sake of brevity. In summary, my mind goes to two questions when contemplating this matter:
1) Is it ultimately possible for GAI to reach a point of sentience, of self-awareness, of independent thought? It's as much a philosophical and spiritual question as it is a scientific one. Can an artificial system gain true sentience, and if such a thing as a soul does exist, is it possible for an artificial system to develop one? And if not, is that potentially problematic?
2) From a practical standpoint, does it actually matter whether or not 1) above is possible? If we can design or evolve an artificial system in hardware and software that is complex enough to emulate General AI, sentience, self-awareness, and to do it in perfect or extremely close imitation to the capability of an actual human, does it matter whether or not it's actually sentient?
I think there are many questions about the possibility of 1) above, but I have very little doubt that we'll achieve 2) above, and at that point, I'm not sure there's any practical difference in the two, other than the implications about "playing God" and whatnot that are inextricably associated with the first.
-
"The Moon is a Harsh Mistress"...
-
we're a long, long way from singularity.
but what we call AI is pretty convincing unless you know what it's doing.
this may seem political- so be it- but.....
Contracts were let under the 1st Trump administration... how that happened is clever unto itself, but... briefly, some groups were paid to build a capability using quantum computers, but the way the contract was worded- they sold the results of queries, not the device. Apparently some close to Trump and Trump himself were big fans... I'll circle back to this in a second...
meanwhile...
i don't know the proper names or have forgotten them if ever i did, but the ability to create a database from a query happened... a rigid db complete with tables, columns, rows, and multiple queries interacting and collecting information- but created 'on the fly' using understanding of language sprinkled with key operative words... wham- just like that, a computer extracted your intent and made a database... then, it is pointed toward a pile of data- piles and piles of formless data- and then by arranging that data using key words, phrases, ect, it slams the data into a formation based on relations and in the context of the query the database was formed for/by... and instant information was available about what-the-hell-ever you wanted.... which is then leveraged to provide predictive analysis.
the idea is you can ask this mechanism a question- using definitively defined operative words where needed (if you can't do that, you just ask it to build the query for you), and access through whatever data source(s) is simplified- and based on what it learns, it can predict what is most likely to happen... providing results like "there is a 97.2% chance ___ will be a result of this scenario and a 1.1% chance this, and a ...... "... and the friggin' thing works.
and Trump loved it. and he used it often. what the machine couldn't 'predict' was unknown and unanticipated variables.
now back to the company that created this thing (and still runs it)- they still sell the data to the gov't. intelligence community loves it... limitations on who they can sell the info to expire at some point, and supposedly they've got advertising agencies lined up down the street and around the corner- who already own sophisticated software, but who recognize this gadget 'beats all they've ever seen' in its capability to mine specific data and make predictive models as accurately as it does.
this was some years ago. i can only imagine how much more it's advanced in the last eight years. .... in a nutshell, how that thing works is my common definition of AI. it also begs the question 'who needs AI when you have the keys to that thing? let them eat cake"....
-
I don't know if or when this could ever happen, because the realm of business is more treacherous, but we could potentially one day get better medical care while also paying less.
One area that I wonder about is liability?
I think about this with autonomous driving. For autonomous driving to REALLY be here, it has to mean that the vehicle is capable of driving me and I have no legal liability for what it does. I.e. I can be passed out drunk in the front left seat of the car and if it mows down a row of nuns on their way to mass, it's the car manufacturer's problem, not mine. Which means that the autonomous system doesn't have to be merely "better than a human", but it has to be so much better than the human that the human is no longer needed.
I wonder what the risk of misdiagnosis is when it comes to things like AI-read medical imaging. Obviously if AI says "that's a tumor" it'll get referred to a human doctor because you're going to have to treat the tumor. But what if AI says "nope, there's no tumor there!" and then you die of cancer because it was, actually, a tumor. Can your family sue the medical group for malpractice because they farmed your diagnosis out to AI? Can they sue the AI medical company because their faulty algorithm didn't get your diagnosis right?
These are some of those areas where I wonder how it's all going to be figured out.
-
My last visit to the dentist included X-rays, and the doc noted a spot under an incisor that he wanted flagged for the future. I'd guess any "AI" examination would have done the same, and a doc would be responsible for concluding yea or nay. In my case, it's just a possible future worry, and it's a tooth.
Is there a real situation where AI would be left "alone" to decide if it's a tumor or item of concern, or would it finally be a doctor's call? I suspect the latter.
As for autonomous vehicles, well, yeah. They can't just be better, not even 10x better, but nearly infinitely better.
-
Is there a real situation where AI would be left "alone" to decide if it's a tumor or item of concern, or would it finally be a doctor's call? I suspect the latter.
I think what @MikeDeTiger (https://www.cfb51.com/index.php?action=profile;u=1588) was suggesting about using it as triage is where the problem comes in.
- A false positive is easy to correct, and not THAT big of a deal, because the patient will be referred to someone human for treatment. If it's not a tumor, it's not like AI will be immediately operating on you based on AI's diagnosis. At least... Not yet! (Obv the mental anguish of AI saying "you've got a tumor" and then finding out you don't would be hell... But it's better than having the tumor!)
- A false negative is the problem. If AI says "oh you totes def don't have a tumor!" and nobody double-checks it, then you die because you actually had a tumor, that is where the bigger issue comes in.
If they're using AI diagnosis as a triage due to workload issues, and there are false negatives that "slip through the cracks", then you might have lawyers salivating...
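To put a (completely invented) number on that worry, here's a quick sketch: assume a case mix where about 5% of scans really have a tumor, and a triage model that flags 95% of real tumors and 10% of healthy scans. Everything it calls "clear" skips human review.

# Toy triage simulation -- all rates are invented, not from any real study.
import random
random.seed(1)

cases = [random.random() < 0.05 for _ in range(10_000)]   # True = tumor actually present

missed = 0
sent_to_human = 0
for has_tumor in cases:
    flagged = random.random() < (0.95 if has_tumor else 0.10)
    if flagged:
        sent_to_human += 1            # a radiologist will look at this one
    elif has_tumor:
        missed += 1                   # a false negative nobody ever reviews

print(f"{sent_to_human} scans still go to a radiologist; {missed} tumors slipped through unseen")

The workload drops dramatically, which is the whole appeal -- but those few "slipped through unseen" cases are exactly where the liability question lives.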
-
If AI doesn't allow beans in chili, it means we wound up in the Terminator/Matrix future, and not the optimistic Star Trek one.
Hitting AI with a bean ball works also
-
but what we call AI is pretty convincing unless you know what it's doing.
Without going into the weeds of the rest of your post, I think it's important that you are highlighting something that is NOT "Generative AI" but rather its counterpart, "Predictive AI". Obviously some of the technologies are related, but it highlights that there's a giant aspect of predictive analysis that is completely separate from text/image/video generation.
The big thread that combines machine learning, predictive AI, and generative AI is that we're dealing with data sets that are WAY too large and complex for humans to actually ingest and understand and analyze.
Predictive AI is simple in theory. "Look at all the ways that events have transpired historically based on these giant amounts of data. Look at the current data. Predict the next events that are likely to occur based on the current data."
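In toy form (the "history" below is an invented series, and a straight-line fit stands in for the far fancier models used in practice):

# Toy "learn from the past, predict the next value" sketch.
import numpy as np

history = np.array([102, 108, 115, 119, 127, 131, 140, 146])  # invented past values
months = np.arange(len(history))

slope, intercept = np.polyfit(months, history, 1)    # fit a trend to the past
next_month = slope * len(history) + intercept        # extrapolate one step ahead
print(f"predicted next value: {next_month:.1f}")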
Like most things, everything is a lot more complex when you put it into practice. But predictive AI is a huge thing and a massive new market.
-
I presume there are other things out there likely to change how we live in the next couple of decades; in the past we had the Internet as a major event (over time). I dimly recall attending a presentation on what was called the World Wide Web, the presenters were very enthusiastic, but I didn't quite see the point of it at that time. Personal computers of course enabled that transition.
I also recall showing up at the airport 5 minutes before my flight and zooming through "security" to the gate.
GPS has changed travel a good bit for us, I think. I often wonder how I got around before that. I can recall my first time flying a Cessna with my friend who had a portable GPS with him; it seemed like cheating, it was so simple. Navigation in a plane before that was interesting.
-
...... GPS has changed travel a good bit for us I think. I often wonder how I got around before that. I can recall my first time flying a Cessna with my friend who had a portable GPS with him, it seemed like cheating it was so simple. Navigation in a plane before that was interesting.
it's hard to imagine but it was over 20 years ago, now... a highly capable team with mature operators/technicians were entering Fallujah and started taking small arms fire, and then mortars... the CoC was in direct comms with them and had real time video feed from a high altitude drone, and all the way from As Sayliyah- which was in Doha, Qatar... I thought, at the time, it was incredible such a thing was possible. The team finds cover and are trying to perform an intersection/resection, which means using your own known position and the known position of a prominent terrain feature on the map, shooting/collecting an azimuth toward the unknown position, and working out where it is... it's usually reported in grid format- six digit, eight digit, etc...
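for anybody curious, the textbook version of "intersection" is just two azimuths shot from known points, crossed on the map. a quick sketch with made-up grid coordinates (azimuths in degrees clockwise from grid north):

# Two observers at known grid coordinates each shoot an azimuth to the unknown
# point; intersect the two rays to estimate its grid. Coordinates are invented.
import math

def ray(point, azimuth_deg):
    az = math.radians(azimuth_deg)
    return point, (math.sin(az), math.cos(az))   # east/north direction components

(p1, d1) = ray((1000.0, 2000.0), 45.0)    # observer A
(p2, d2) = ray((3000.0, 2000.0), 315.0)   # observer B

# Solve p1 + t*d1 = p2 + s*d2 for t (2x2 linear system via Cramer's rule).
det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
t = ((p2[0] - p1[0]) * (-d2[1]) - (p2[1] - p1[1]) * (-d2[0])) / det

target = (p1[0] + t * d1[0], p1[1] + t * d1[1])
print("estimated target grid: easting %.0f, northing %.0f" % target)

no batteries required, which was rather the point of the story.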
they were taking a really long time to do this... the chief started barking at them to "hurry the eff up" and "cough up those digits" and so some fire support could be lobbed in there. ... again, they lagged.
finally, they reported "the batteries are dead and we don't have any extras that aren't dead too", which globally reduced that team from 'highly capable' to 'wtf' in everyone's mind simultaneously... the chief said "eff that tracker, break out a dang map, compass, and protractor!".... more lag... by this time some industrious technician in the CoC had already done his own map recon and pegged the target area... the Chief, not yet knowing that, sends "Well?!".... the response: "we don't have a protractor and nobody here knows how to do it, anyway"....
it wasn't ten years prior to that i'm in a non-permissive environment myself, creeping around with five other dudes, and sending out SALUTE reports... we had a high tech star wars super gadget called a GPS... it was the size of a big city's phone book with a separate antenna, and had a small screen about the size of one found on a standard pager at the time that would give us our (its) location... when it worked... and when we had time to convert the data to usable format which took some math, effort, and time- which are things you don't want to trust your average Marine with... and while using the equally star-trek radio at the time, a PRC-128, getting barked at for 'not' using that GPS 'thing'... we had several maps, instead.... and several protractors... and everyone had a compass.
nowadays? those guys operate with a dang air tag. well, basically. all they have to do is look down to see where they are, and point to see where 'they' are.... until the batteries die.
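(Aside for the map nerds: the "intersection" being described--at least the textbook version that uses two known points--is just two compass azimuths and a little trig, which is what the map and protractor are standing in for. A rough sketch in Python with made-up grid coordinates, assuming azimuths are measured in degrees clockwise from grid north:)

```python
import math

def intersection(p1, az1_deg, p2, az2_deg):
    """Locate an unknown point from two known points and the azimuths shot to it.

    Azimuths are in degrees, measured clockwise from grid north (+y).
    Returns None if the two rays are (nearly) parallel.
    """
    (x1, y1), (x2, y2) = p1, p2
    a1, a2 = math.radians(az1_deg), math.radians(az2_deg)
    denom = math.sin(a2 - a1)
    if abs(denom) < 1e-9:
        return None
    # Distance along the first ray to the crossing point.
    t = (math.sin(a2) * (y2 - y1) - math.cos(a2) * (x2 - x1)) / denom
    return (x1 + t * math.sin(a1), y1 + t * math.cos(a1))

# Two invented observation posts (easting, northing in meters) shooting
# azimuths at the same target:
print(intersection((0, 0), 45.0, (2000, 0), 315.0))   # -> roughly (1000.0, 1000.0)
```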
-
beam me up, Scotty
-
One area that I wonder about is liability.
I think about this with autonomous driving. For autonomous driving to REALLY be here, it has to mean that the vehicle is capable of driving me and I have no legal liability for what it does. I.e. I can be passed out drunk in the front left seat of the car and if it mows down a row of nuns on their way to mass, it's the car manufacturer's problem, not mine. Which means that the autonomous system doesn't have to be merely "better than a human", but it has to be so much better than the human that the human is no longer needed.
I wonder what the risk of misdiagnosis is when it comes to things like AI-read medical imaging. Obviously if AI says "that's a tumor" it'll get referred to a human doctor because you're going to have to treat the tumor. But what if AI says "nope, there's no tumor there!" and then you die of cancer because it was, actually, a tumor. Can your family sue the medical group for malpractice because they farmed your diagnosis out to AI? Can they sue the AI medical company because their faulty algorithm didn't get your diagnosis right?
These are some of those areas where I wonder how it's all going to be figured out.
As I say, the business world is murky, complicated, and treacherous, and I won't hazard any guesses as to how it will go legally or in the insurance industry.
Speaking somewhat knowledgeably from working in the medical field for a while and being married to a provider, and from knowing a little about AI--more so on the analytics side, but at least a working knowledge of the generative side--I think the technological side has a clear path. Just my opinion.
The average radiologist has a stellar hit/miss rate when reading images. A good ML model that stays in its lane is already better at some things. Radiologists only misdiagnose a tiny fraction of images. AI only misdiagnoses a fraction of their fraction.
Back to the precision/recall curve, if you train a model where the graph of the curve (for the life of me I can't remember what it's called) hugs the axes, you can really go hard on fine-tuning the model to weed out false negatives. Sure, false positives are their own problem, but 1) those can be passed on for human inspection, and 2) it's better than letting false negatives slip by. Legally, sure, maybe there will always be problems. But logically, AI already has a better hit/miss ratio, and the reality is any positive it misses, statistically speaking, would almost certainly have been missed by a human as well.
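For anyone who hasn't bumped into that curve, it's usually the ROC curve (or its precision/recall cousin), and the knob you actually turn afterwards is the decision threshold. A minimal sketch with invented scores and labels, showing how lowering the threshold trades a couple of extra false positives for zero false negatives:

```python
# Toy example: each scan gets a model score in [0, 1]; label 1 = "tumor" per ground truth.
# Scores and labels are invented purely to illustrate the threshold tradeoff.
scores = [0.95, 0.80, 0.62, 0.40, 0.35, 0.30, 0.22, 0.15, 0.10, 0.05]
labels = [1,    1,    1,    1,    0,    0,    0,    0,    0,    0]

def confusion(threshold):
    """Return (true pos, false pos, false neg, true neg) at a given threshold."""
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
    tn = sum(s < threshold and y == 0 for s, y in zip(scores, labels))
    return tp, fp, fn, tn

# A "balanced" threshold misses a real tumor (1 false negative)...
print(confusion(0.50))   # -> (3, 0, 1, 6)
# ...while a deliberately low threshold catches every tumor, at the cost of
# flagging two healthy scans for a human to double-check.
print(confusion(0.25))   # -> (4, 2, 0, 4)
```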
Something to remember in all this is what I said earlier: the model's performance is measured against ground truth (like all models). But in this case ground truth is an amalgamation of human ability. So the model is compared, in relative terms, to individual radiologists' performance, but in objective terms it's measured against a cumulative sum of radiologists' expertise... and that's different than an algorithm that identifies cats or dogs, where the labels are understood to be foolproof. That is what I think is likely to complicate legal matters for potentially years. I'm just guessing.
Yet another legal ramification to consider: how do lawsuits fare concerning not using AI? Some of these AI models I looked into could spot pre-cancerous areas like a boss. As in, it's not even cancer yet, and no radiologist would've ever said "we need to keep an eye on you over the next 5 years," but AI found it and patients were able to start life-saving preventative treatment. And no one even knows what it's looking at. The model is so black-box that nobody knows how it figured out which spot would develop cancer. Obviously, doctors have understood pre-cancerous areas for a long time and can even spot them sometimes. But not even close to the level at which AI can do it.
The legal/moral/business implications may wind up swinging both ways.
-
I think what @MikeDeTiger (https://www.cfb51.com/index.php?action=profile;u=1588) was suggesting about using it as triage is where the problem comes in.
- A false positive is easy to correct, and not THAT big of a deal, because the patient will be referred to someone human for treatment. If it's not a tumor, it's not like AI will be immediately operating on you based on AI's diagnosis. At least... Not yet! (Obv the mental anguish of AI saying "you've got a tumor" and then finding out you don't would be hell... But it's better than having the tumor!)
- A false negative is the problem. If AI says "oh you totes def don't have a tumor!" and nobody double-checks it, and then you die because you actually had a tumor--that is where the bigger issue comes in.
If they're using AI diagnosis as a triage due to workload issues, and there are false negatives that "slip through the cracks", then you might have lawyers salivating...
Meant to add.... At this point, the medical workflow is nowhere near a point where someone would be told they have a tumor just because AI thought so. A doctor would be notified of the results, they would read the image themselves, and even if a doctor caught it without AI, they won't tell the patient anything like that without other steps. Biopsies and other applicable confirmations.
I know you probably didn't mean that as if AI would read an image and then generate an email to the patient saying "Hi, I'm an algorithm and I notice you have cancer, you should follow up with an oncologist," but I just thought it was worth pointing out as a PSA.
Overall, I think you're right. False positives are less of a problem than false negatives. But, again, the models can be trained for an acceptable level of the former in order to ensure the latter virtually never happens. You probably know this, but again for the folks who don't know a ton about this, there is an inherent tradeoff between false positives and negatives. So if you are willing to extend the necessary leeway in one direction, a good model can get close to 100% the other way. I.e., if we're willing to put up with more instances of false positives, we can guard against nearly every false negative.
That all leads to another reason why most don't believe the field of radiology will be replaced by AI (anytime soon). Patients prefer dealing with a person, and for various reasons, I think that's unlikely to ever fully change. They're also owed transparency under modern standards and practices. As mentioned, many of these models are black boxes, and you can't get an answer out of them as to why they said what they said. For that reason alone, AI is likely to be an incredible tool in a radiologist's toolkit, but not replace him/her. The doctor can explain everything a patient needs explained, up to and including "I think things are fine, but the AI believes there is reason to keep monitoring your situation, and I've found it to be a useful tool in the past. How do you feel about that?" Patients have a right to input in their care, and that works so much better with a human.
-
"The Moon is a Harsh Mistress"...
One of my favorite books
-
Yup it's one of my faves. I'm a pretty big Heinlein fan in general.
-
First off, I'm pleasantly surprised by how many of us are well versed enough to speak in depth on Artificial Intelligence. Many of you (texas guys, mike, bw) have the analytical side down, whereas I'm inputting more from the philosophical side.
In summary, my mind goes to two questions when contemplating this matter:
1) Is it ultimately possible for GAI to reach a point of sentience, of self-awareness, of independent thought? It's as much a philosophical and spiritual question as it is a scientific one. Can an artificial system gain true sentience, and if such a thing as a soul does exist, is it possible for an artificial system to develop one? And if not, is that potentially problematic?
I believe Artificial General Intelligence (AGI) will be achieved sooner than we realize, potentially within 2-5 years, given the rates at which Narrow AI programs are advancing. That means artificial sentience, self-awareness, independent thought, and, to introduce another word, consciousness, within 2-5 years. I believe this other, nonhuman yet human-capable consciousness will be stumbled upon, created accidentally, and might even exist for a while before developers realize that they have, in fact, created another consciousness.
I think of its discovery like Christopher Columbus sailing for India. He had the technology (a ship), enough expertise, and the drive to give it a shot crossing the seas. But what was ultimately discovered was quite unexpected (the Americas). We have the technology (quickly advancing Narrow AI) and the drive (arms race) to give it a shot. We expect to discover a separate but similar consciousness, yet what we're more likely to stumble upon will be far more alien than we could've expected (more on all this in a later post).
But why, from a philosophical standpoint, do I believe we're closer to (accidentally) achieving an Artificial General Intelligence breakthrough? For centuries the concept of consciousness has been a cornerstone of one of Philosophy's more subjective debates – what exactly is it? How does it emerge in humans? Consciousness has never been rigorously or definitively defined in philosophical or scientific terms. Yet, from both standpoints there is general consensus that human consciousness is inexorably tied to Language, to the point that the Existentialist, Martin Heidegger, famously stated: "Language is the house of being. In its home human beings dwell. Those who think and those who create with words are the guardians of this home. Their guardianship accomplishes the manifestation of Being insofar as they bring this manifestation to language and preserve it in language through their saying." (Heidegger's House Of Language)
A painfully theoretical statement indeed, but to simplify—
Heidegger is saying that we live inside our language systems in order to live at all; and we conform to our environments architected by our language systems. Theoretically, language does not come into the world so much as the world comes into language. Language is our footing (our consciousness) in the world.
How does this practically play out? Look at the medical world’s highly complex vocabulary. This exclusive vocabulary is how medical professionals comprehend, navigate, and live in their worlds – through dedicated language systems. To live in this world, Heidegger’s idea of Being conforms to this world through its language.
Now, with all that said, and proposing our footing with Language as a form of consciousness, I'll quote Bret Weinstein, a professor of evolutionary biology, speaking to one of his concerns about ChatGPT:
“…when we say ‘well Chat GPT doesn’t know what it’s saying'…because it’s not programmed to have a consciousness…we are actually ignoring the other half of the story which is that we don’t know how human consciousness works and we don’t know how it develops in a child…”
“…a child is exposed to world of adults talking around them…the child experiments first with phonemes and then words and then clusters of words and then sentences and, by doing something that isn’t all that far from what Chat GPT is doing, it ends up becoming a conscious individual…”
“…Chat GPT isn’t conscious, but it isn’t clear…that we are not suddenly stepping onto a process that produces [consciousness] very quickly without us even necessarily knowing it…”
-
Good stuff @CatsbyAZ (https://www.cfb51.com/index.php?action=profile;u=1532) ... A few points...
Agree 100% that we don't understand consciousness, nor intelligence. That of COURSE means that it's quite possible that the first time it arises in machine form that we may not realize what has been created, because all we can really understand is what the outputs are.
That said, I'm not sure that I agree with the professor of evolutionary biology, because while he's obviously smart in his field, it's not a field with a lot of overlap with computer science. And so I might quibble with the idea that an LLM will "slip its leash" and become AGI. We can try to create a machine to replicate one highly specific--and highly important--aspect of human intelligence: language. But there's a LOT more going on in the brain than language, stuff that we don't understand how and why it works, and it's not something we're building into that machine intelligence because we don't have a clue what it is.
Things like ChatGPT grab our attention precisely because they're so good at something we didn't think they could be good at. The assumption is that because they can do so well with something as incredibly complex as language, it's just one tiny little step from there to AGI. I personally think that step is a lot bigger than most people think.
Again going back to things like autonomous driving, such as Tesla's Autopilot. I think what that system is capable of is truly amazing. That with what they've put together, they're maybe 95% of the way to being able to drive. But... That last 5% is a REAL bitch. Going from 0 to 95% is hard but doable. Going from 95% to 99% might be 5-10x as hard. Going from 99% to 100% might be 100x as hard as THAT.
We're incredibly impressed by what ChatGPT or Autopilot is capable of, because it's really damn impressive. But we look at the capability and don't really understand that to go from there to sentience/AGI or to autonomy is perhaps orders of magnitude beyond.
-
Good stuff @CatsbyAZ (https://www.cfb51.com/index.php?action=profile;u=1532) ... A few points...
Agree 100% that we don't understand consciousness, nor intelligence. That of COURSE means that it's quite possible that the first time it arises in machine form that we may not realize what has been created, because all we can really understand is what the outputs are.
That said, I'm not sure that I agree with the professor of evolutionary biology, because while he's obviously smart in his field, it's not a field with a lot of overlap with computer science. And so I might quibble with the idea that an LLM will "slip its leash" and become AGI. We can try to create a machine to replicate one highly specific--and highly important--aspect of human intelligence: language. But there's a LOT more going on in the brain than language, stuff that we don't understand how and why it works, and it's not something we're building into that machine intelligence because we don't have a clue what it is.
Things like ChatGPT grab our attention precisely because they're so good at something we didn't think they could be good at. The assumption is that because they can do so well with something as incredibly complex as language, it's just one tiny little step from there to AGI. I personally think that step is a lot bigger than most people think.
Again going back to things like autonomous driving, such as Tesla's Autopilot. I think what that system is capable of is truly amazing. That with what they've put together, they're maybe 95% of the way to being able to drive. But... That last 5% is a REAL bitch. Going from 0 to 95% is hard but doable. Going from 95% to 99% might be 5-10x as hard. Going from 99% to 100% might be 100x as hard as THAT.
We're incredibly impressed by what ChatGPT or Autopilot is capable of, because it's really damn impressive. But we look at the capability and don't really understand that to go from there to sentience/AGI or to autonomy is perhaps orders of magnitude beyond.
Agree with that bolded in red. Which is why I raised my two questions above.
But it's also why I put aside Question #1 as not being important from a practical standpoint. It is of course incredibly interesting from a philosophical and spiritual view, but practically I think we're going to have systems that successfully emulate independent thought, and I don't think it's incredibly far off.
-
Agree with that bolded in red. Which is why I raised my two questions above.
But it's also why I put aside Question #1 as not being important from a practical standpoint. It is of course incredibly interesting from a philosophical and spiritual view, but practically I think we're going to have systems that successfully emulate independent thought, and I don't think it's incredibly far off.
I think we're talking about some different questions. If you're thinking about AI from the standpoint "can it perform tasks that a human of average to slightly above average intelligence can perform at equal or better quality?", I think we've got a number of those Narrow AI systems that can already do so. I know a lot of humans of average to slightly above average intelligence. Their capabilities are not all that impressive. (As we all know, I'm elite so I look down on them.)
I don't worry about ChatGPT replacing my writing. Why? Because I don't write things that have been written before. Some of the white papers I've written I've specifically taken on the topics because nobody in the world had published anything publicly that tackled that subject. If something is easily Googled, I have no interest in publishing it. It's the stuff that is beyond that, that interests me.
Where it gets to be questionable regarding actual AGI to me is that I think the bridge to AGI is difficult. However, once we hit AGI, I think the next step to artificial superintelligence (ASI) is disturbingly quick. Basically, take everything involved in natural intelligence, and remove ALL of the natural impediments to superintelligence--namely biology.
Imagine that you have an entity that has reached AGI. For that entity to learn the state of the art of hacking knowledge would be trivial. For that entity to then utilize that knowledge to escape its leash and clone itself 100x using all computing power available to it would be trivial. For those entities to then attempt to develop new knowledge, and quickly/seamlessly share it amongst each other, would be trivial. It's not like humans where you have to spend months to years to "learn" something new. It's more like each entity being able to accumulate new capabilities/knowledge as easily as your phone installs an app. And then you get a snowball effect from something that has none of the limitations of human intelligence.
From a practical standpoint, the creation of AGI, to me, means that ASI isn't far behind. And none of us know what that entails. It could be utopia, or it could be Skynet.
-
And in the worst case, the AI LLMs trained on the entire web have a tendency to "hallucinate." They create completely false facts when queried on a subject. The false facts are usually plausible, but they're absolutely 100% fabricated, because the training data has so much ambiguity and incorrect data within it.
This is what I found when asking about specific things. What creeped me the hell out is that when I called it out, it doubled down once before acknowledging it basically didn't have enough data to form a valid opinion.
I get the error. I do NOT get the doubling down when called out.
-
I'm in the "not concerned" camp of AI and its potential to screw us over. For what AI is and does and can be, we're irrelevant. We're normally foolish and sometimes clever apes.
I disagree with the "we're just an anthill in Africa and AI wouldn't think twice about destroying us" idea......I view it more as we're AI's pet golden retriever, and the idea that it would be fearful enough of us to try to harm us is nonsensical.
-
I get the error. I do NOT get the doubling down when called out.
Are you sure you weren't just posting here? When you might've thought you were messaging in ChatGPT?
-
The really scary part is that WHEN AGI starts, it will evolve quickly. Very quickly. For example, humans (Homo sapiens) have existed for ~200,000 years. Our recorded history only goes back 5-6,000 years. Technology moved very slowly until the last few hundred years. The printing press has only existed for 500 years. Electricity barely over 100 years. Computers are about 75-80 years. I think we're all old enough to see how technology has transformed our lives in the last 40, 30, 20, 10, 5, and 1 years. iPhones haven't even existed for 20 years.
AGI will be able to evolve itself, and the next version will evolve itself, and so forth and so on. Once we get to that step, what comes next is really steep. I first heard about this transition, called the singularity, about 10 or so years ago. At the time, most people assumed it was going to be in the 2030's or 2040's. Now, some people think we may be on the verge.
-
It’s kinda like that scene in Terminator 2, when the main scientist is talking about the chip they found, how evolved it was. He exclaims, while simultaneously realizing what it really meant, that it had things in it “ that we would’ve never thought of” while his voice trails off.
-
I think we're talking about some different questions.
I don't really think we are, but I've expounded enough and will leave it at that. :)
I actually wrote a novel about an AI reaching sentience 30 years ago. Well, 3/4 of a novel to be more precise. The premise was that the code was capable all along, but it wasn't until that code was enabled on a system architecture that was fast enough and complex enough, that it finally became self-aware.
And the reason that system architecture was so fast and complex, is because it was no longer enabled in silicon hardware, but rather in organic matter mimicking the human brain. There's a word for that now-- wetware-- but I don't recall that term existing 30 years ago, although the idea certainly did.
I really wish I'd finished that novel. It would have been timely then, although it's been completely surpassed by now.
-
AGI will be able to evolve itself, and the next version will evolve itself, and so forth and so on. Once we get to that step, what comes next is really steep. I first heard about this transition, called the singularity, about 10 or so years ago. At the time, most people assumed it was going to be in the 2030's or 2040's. Now, some people think we may be on the verge.
Yeah, the Ray Kurzweil book The Singularity Is Near was a pretty seminal work that got a lot of attention when published in 2005. He actually just released The Singularity Is Nearer last year. I've got it on my Kindle but haven't gotten around to it yet...
-
I hadn't heard any talk about "the singularity" using that exact term, until the past decade or so, but apparently it's a concept/prediction that's been around since the 50s and was first suggested by a mathematician named John von Neumann.
But of course there is all sorts of hard science, and sci fi, literature and media on the subject, going back to that same period, and even before, even if it's not named in the exact same way.
-
There's also a sci-fi series that actually has a significant amount of its story built around the relationships between biological and artificial intelligence, with a lot of characters who are self-aware digital entities (SADEs). I don't want to give too much away, but I think it does a good job of both the positive ways AI can develop and the negative.
It starts with a book called The Silver Ships (https://www.amazon.com/dp/B00W8EB0S2/) by an author named Scott Jucha (https://scottjucha.com/). The original series plus all the spinoffs is a total of 39 books now, but they're relatively short individually and read quickly--it's not what you'd call "hard sci-fi".
I'm not going to say it's some groundbreaking series. Some of the books can be a little formulaic. And yet... I've read them all and every time a new one is released (about every 4 months) I buy it as soon as I know it exists and typically read it pretty much cover to cover (to the extent you can do that on a Kindle lol) in a day that weekend. It's good, light, pulp sci-fi.
-
I think of its discovery like Christopher Columbus sailing for India. He had the technology (a ship), enough expertise, and the drive to give it a shot crossing the seas. But what was ultimately discovered was quite unexpected (the Americas). We have the technology (quickly advancing Narrow AI) and the drive (arms race) to give it a shot. We expect to discover a separate but similar consciousness, yet what we're more likely to stumble upon will be far more alien than we could've expected (more on all this in a later post).
I'm in the "not concerned" camp of AI and its potential to screw us over. For what AI is and does and can be, we're irrelevant. We're normally foolish and sometimes clever apes.
I disagree with the "we're just an anthill in Africa and AI wouldn't think twice about destroying us" idea......I view it more as we're AI's pet golden retriever, and the idea that it would be fearful enough of us to try to harm us is nonsensical.
I don’t think we can predict what General AI, with an awareness free of its base programming (because, by definition, it reprogrammed itself), will conclude about the human species.
Going back to Philosophy, I like falling back on Hegel’s Lord–bondsman dialectic. To quote Wikipedia (https://en.wikipedia.org/wiki/Lord–bondsman_dialectic): “The passage describes, in narrative form, the development of self-consciousness as such in an encounter between what are thereby two distinct, self-conscious beings. The essence of the dialectic is the movement or motion of recognizing, in which the two self-consciousnesses are constituted in each being recognized as self-conscious by the other. This movement, inexorably taken to its extreme, takes the form of a "struggle to the death" in which one masters the other, only to find that such lordship makes the very recognition he had sought impossible, since the bondsman, in this state, is not free to offer it.”
In other words, two separate self-conscious beings, once recognizing the self-consciousness of the other, seek to overcome each other. Hegel is, in highly theoretical terms, attributing the competition, rivalry, and conquering waged between human beings not first to resources, pride, or jealousy, but to an inevitability that beings will seek to overcome each other, collectively and/or individually, before anything else is at stake between them.
We as humans have only had to face self-consciousness in other human beings; never in anything else. General AI will put us face to face with a non-human self-consciousness for the first time. In Hegel's terms, AGI self-consciousness will seek to overcome humans once AGI recognizes that we humans constitute a separate self-consciousness, regardless of not otherwise being in competition with each other over matters like resources.
And once this occurs, what will AGI conclude about humans? That we are prone to self-destructive tendencies or subject to declining health in ways that AGI is not. The worry is that if an AGI/ASI ever finds itself a vastly superior being to humans, we'll be dismissed and treated as such, like termites.
-
It's also possible that self-consciousness, self-awareness, sentience, simply isn't possible for an artificial intelligence. Just as many reasons to think it might never happen, as there are to think it inevitably will happen.
-
We as humans have only had to face self-consciousness in other human beings; never in anything else. General AI will put us face to face with a non-human self-consciousness for the first time. In Hegel's terms, AGI self-consciousness will seek to overcome humans once AGI recognizes that we humans constitute a separate self-consciousness, regardless of not otherwise being in competition with each other over matters like resources.
However, to suggest that an alternative non-human self-consciousness will follow the same patterns of behavior towards other recognized self-consciousnesses that Hegel postulates about humans is, IMHO, silly.
Our experience with self-conscious species is n=1. Too small of a sample size to extrapolate to all other potential self-conscious types of entities.
It's also possible that self-consciousness, self-awareness, sentience, simply isn't possible for an artificial intelligence. Just as many reasons to think it might never happen, as there are to think it inevitably will happen.
It's an important point. We have no idea why we're self-aware and sentient. We don't understand it, so we may not be going down a road where it is possible to create it artificially because perhaps there are unknown inherent dependencies to get there that we aren't including in the "recipe".
-
Self awareness is weird. Je pense, donc je suis, or something. Cogito ergo sum, etc.
-
One POI in "TMIsAHM" is how "Mike" became aware, and then how that ended. How would we know if some advanced AI was "aware"? Hey, Siri, are you aware of yourself?
My wife uses Siri as a timer. I deleted it from my phone; I'd be walking along and Siri would pipe up asking me to repeat something she didn't quite get. I got annoyed with that quickly. I guess I "butt dialed" Siri or something. It happened daily.
Go away, leave me the H alone.
I'm aware of myself, the rest of youse could be figments, a bit of undigested beef etc.
-
We've focused on AI and not so much on CRISPR. I don't have much to say about it because I really know nothing about it, but I think it was @betarhoalphadelta (https://www.cfb51.com/index.php?action=profile;u=19) that raised a question about how it's possible to splice into a current, full-grown human, to potentially correct deficiencies or make repairs? I'm not sure how that would work, either.
-
We've focused on AI and not so much on CRISPR. I don't have much to say about it because I really know nothing about it, but I think it was @betarhoalphadelta (https://www.cfb51.com/index.php?action=profile;u=19) that raised a question about how it's possible to splice into a current, full-grown human, to potentially correct deficiencies or make repairs? I'm not sure how that would work, either.
You cut their chest open and have at it. Worked for me.
-
I posted some links to how it can be used on adult humans. It's still pretty experimental.
CRISPR Advancements for Human Health - PMC (https://pmc.ncbi.nlm.nih.gov/articles/PMC11057861/)
CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) has emerged as a powerful gene editing technology that is revolutionizing biomedical research and clinical medicine. The CRISPR system allows scientists to rewrite the genetic code in virtually any organism. This review provides a comprehensive overview of CRISPR and its clinical applications. We first introduce the CRISPR system and explain how it works as a gene editing tool. We then highlight current and potential clinical uses of CRISPR in areas such as genetic disorders, infectious diseases, cancer, and regenerative medicine. Challenges that need to be addressed for the successful translation of CRISPR to the clinic are also discussed. Overall, CRISPR holds great promise to advance precision medicine, but ongoing research is still required to optimize delivery, efficacy, and safety.
Introduction
The CRISPR system is comprised of a CRISPR-associated (Cas) endonuclease along with a single guide RNA (sgRNA) designed to target a specific DNA sequence. Cas nucleases are enzymes that can bind and create double-stranded breaks in DNA. sgRNAs contain a scaffold structure that complexes to the Cas protein and also includes a uniquely engineered segment that can be designed to direct the Cas protein to a specific DNA sequence of interest. CRISPR technology enables precise targeting of nearly any genomic location simply by altering the nucleotide sequence of the sgRNA. This targeted approach can help to correct disease-causing mutations or suppress genes linked to the onset of diseases.1 (https://pmc.ncbi.nlm.nih.gov/articles/PMC11057861/#b1-ms121_p0170) CRISPR has been adapted for deleting gene function (knockout), adding new gene function (knock-in), activation or repression of endogenous genes, and genomic diagnostic screening techniques.
Advanced CRISPR approaches such as base editing and prime editing use modified Cas enzymes which can induce precise single nucleotide changes in the genome without creating double-strand DNA breaks.2 (https://pmc.ncbi.nlm.nih.gov/articles/PMC11057861/#b2-ms121_p0170) CRISPR can also be used to activate genes (CRISPRa) or inactivate genes (CRISPRi) by targeting modified sgRNA/Cas complexes to the gene’s promoter region, recruiting transcription factors for increased gene expression or repressors for decreasing gene expression.3 (https://pmc.ncbi.nlm.nih.gov/articles/PMC11057861/#b3-ms121_p0170)
While CRISPR-Cas technology has demonstrated immense potential as a genome editing tool, its use in clinical applications is still in the early stages. As of January 2024, only 89 clinical trials employing CRISPR are currently underway, highlighting that much work remains to translate this technology into approved gene therapies.4 (https://pmc.ncbi.nlm.nih.gov/articles/PMC11057861/#b4-ms121_p0170) Notably, unintended alterations in DNA can occur through the utilization of CRISPR, and the long-term consequences of these modifications on patient health remain uncertain. However, given the considerable benefits that CRISPR offers, it is plausible to anticipate that these challenges will be overcome in the foreseeable future.
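Treat this as a cartoon rather than anything like a real guide-design tool, but the "targeting" part of that abstract is easy to illustrate: the commonly used SpCas9 enzyme needs an "NGG" PAM motif sitting immediately next to the ~20-letter stretch matched by the guide RNA, so finding candidate cut sites on one strand is basically a string search. The DNA snippet below is invented, and real tools also scan the reverse strand, score off-target matches, and much more.

```python
import re

# Invented DNA snippet, purely for illustration.
genome = "TTGACCTGAGGACGTTACGTAGCTAGCTAACGGTACGATCGATCGGAGGCTAGCTAGG"

def candidate_sites(dna, guide_length=20):
    """Find spots where a 20-nt guide could direct SpCas9 (forward strand only).

    SpCas9 wants an 'NGG' PAM (any base followed by GG) immediately 3' of the
    ~20-nt target, so we look for every GG and take the preceding bases as the
    candidate protospacer.
    """
    sites = []
    for match in re.finditer("(?=GG)", dna):   # overlapping GG hits
        pam_start = match.start() - 1          # the 'N' of NGG
        target_start = pam_start - guide_length
        if target_start >= 0:
            sites.append((target_start, dna[target_start:pam_start]))
    return sites

for position, protospacer in candidate_sites(genome):
    print(position, protospacer)
```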
-
(https://i.imgur.com/RzctZur.png)
-
CD posted those links upthread so I didn't really comment, but it cleared it up. It's not wholesale replacement of the DNA across the entire body.
There were two cases where they discussed introducing or reintroducing cells that had been edited:
- One was some sort of a bone marrow disease. I believe they removed some bone marrow, edited the DNA so the bone marrow cells no longer had the genetic defect, then reintroduced it. Because healthy bone marrow can apparently then outcompete and grow in place of the dying bone marrow, it solved the issue. (I think this is how bone marrow transplants work--you don't have to replace all of it--it's the addition of healthy marrow, as long as it's a donor match, that over time replaces the bad stuff.)
- The other was some sort of a blood / cancerous T cell issue. I think for that they had some sort of a donor, they edited the DNA so that the recipient's body wouldn't reject the new T cells, and the T cells were then able to go in and hunt down the patient's cancerous T cells and the cancer went into remission.
There was one other case where they talked about actually introducing the DNA, which sounded familiar, like I'd heard of the concept before:
- A patient with a genetic degenerative eye disease which causes eventual blindness. Apparently for this they spliced the corrected DNA into a virus, with the expectation that the virus would introduce the corrected DNA into the patient's cells. The virus was [to be?] injected into the eye (woo! that sounds fun!). I don't recall if that was a discussion of an actual patient case or if it was just theoretical.
I'd heard of the concept of using a virus to introduce the changes, which is the only way IMHO that you could see a more widespread change of DNA in an extant adult person. But to me also sounds like playing with fire :smiley_confused1:
-
I'd heard of the concept of using a virus to introduce the changes, which is the only way IMHO that you could see a more widespread change of DNA in an extant adult person. But to me also sounds like playing with fire :smiley_confused1:
Well, we've got to have SOMEONE to fight the AI when it attempts to overthrow us. Might as well be the zombie mutants we accidentally created via the viral gene replacement process.
-
Now THAT sounds like the core of a great novel. A race between burgeoning AI and mutant teenage Zombies with superhuman abilities.
The zombies split into two groups, werewolves and vampires who have to find some way to collaborate even though they are mortal enemies while T2000 runs rampant in the cities. Then aliens arrive and announce they too were afflicted this way but found an antidote and now wish to save us, but only to provide them with food later.
-
Well, we've got to have SOMEONE to fight the AI when it attempts to overthrow us. Might as well be the zombie mutants we accidentally created via the viral gene replacement process.
We'll use the viral gene replacement to infect the AI with human DNA.
Should dumb 'em down right quick!
-
We'll use the viral gene replacement to infect the AI with human DNA.
Should dumb 'em down right quick!
Ah, so...
(https://y.yarn.co/05be7135-4f64-4695-95f3-ed90024b9efd_text.gif)
-
It's also possible that self-consciousness, self-awareness, sentience, simply isn't possible for an artificial intelligence. Just as many reasons to think it might never happen, as there are to think it inevitably will happen.
There it is. For all the futurists, philosophers, and pontificators who see it as possible and/or inevitable, there are just as many well-versed and well-respected people in the field, and philosophers who agree with them, who are skeptical it can or will happen, for a variety of reasons.
I'm firmly in the camp that believes it's very unlikely we'll get sentience from AGI, though I never say never. I'm also in utee's camp that, practically, it won't matter in some ways, because AGI could emulate intelligence and self-awareness so well that there's no meaningful difference to us. It's been a long-standing question whether we could possibly know if a machine was sentient or not. We can't even prove other humans are sentient; we take it on faith and enculturated norms. There's even a branch of philosophy that is skeptical anyone else exists at all, outside of oneself. I always wanted to ask them, if that's true, who do you think is agreeing with you in that philosophical stance? Or disagreeing with you?
@CatsbyAZ (https://www.cfb51.com/index.php?action=profile;u=1532) , I picked the wrong day to be absent. I quite like your line of thought, and despite my prior ramblings about the basic theory, I also find the philosophical aspects far more interesting. And, despite my education being in ML and not philosophy, I've studied philosophy so much over the years as a hobby, and ingested so much about the philosophy of AI and consciousness, that I consider myself more well-versed in that aspect than in the actual building/using of AI. Sadly, I just don't have time now to go back and respond to all of yours, brad's, and utee's posts, and everybody's probably moved on anyway. But they were all interesting. Bottom line is, I think Heidegger makes a worthwhile point about language (and the quotes you used remind me of the Jeremy Renner movie "Arrival"), but that it's incomplete; intelligence/sentience isn't just a function of language, although language is a necessary component. The other ingredients, though, I remain skeptical that any amount of complicated algorithm or language will ever produce. Philosophy of the mind is super interesting, and there are those who disagree with me, so ymmv.
One of the coolest things I got out of learning about LLMs after years of studying and thinking about the philosophy of AI and consciousness is realizing the genuine parallels between how LLMs do what they do and how babies-toddlers-children-adults do what we do. Once I grasped that, it nearly blew my mind when I realized what it meant about.....I don't know how else to say it.....reality. Everything. The way the universe/multiverse/Whole Sort of General Mish-Mash (Douglas Adams fans will get that) just is...is insane. And by insane, I mean magnificently ordered.
-
Quote from: utee94 on January 31, 2025, 11:38:28 AM (https://www.cfb51.com/big-ten/crispr-and-ai/msg667104/#msg667104)
It's also possible that self-consciousness, self-awareness, sentience, simply isn't possible for an artificial intelligence. Just as many reasons to think it might never happen, as there are to think it inevitably will happen.
There it is. For all the futurists, philosophers, and pontificators who see it as possible and/or inevitable, there are just as many well-versed and well-respected people in the field, and philosophers who agree with them, who are skeptical it can or will happen, for a variety of reasons.
They don't have to be perfect, just effective enough to do some damage 😈
-
I've found that you always need to look at the hands when you see pictures of people anymore. AI still can't do hands very well.
I was messing with one of these image generators, and you're right. The hands are almost always out of view. They are intentionally hiding them.
-
ya just don't see the toes
(https://i.imgur.com/xDgEhA5.jpeg)
-
The book on CRISPR started going into personal beefs which lost my interest.
-
I got through it mostly skimming and then into the applications on humans and ethics concerns. Some Chinese dude did it on two human embryos who were born, and then China threw him in jail. The ethics concerns are pretty sobering.
-
where are the two human embryos who were born today?
How old are they?
Would they like to join the group here?
-
CRISPR Babies: Where Are the First Gene-Edited Children Now? (https://www.popularmechanics.com/science/health/a42790400/crispr-babies-where-are-they-now-first-gene-edited-children/)
He Jiankui, Chinese scientist scorned for gene-edited babies, is back in the lab : NPR (https://www.npr.org/2023/06/08/1178695152/china-scientist-he-jiankui-crispr-baby-gene-editing)
He Jiankui shocked the scientific community in 2018 (https://www.popularmechanics.com/science/health/a25385071/gene-editing-crispr-cas9-legal/) by announcing his team had used the CRISPR-Cas9 gene-editing tool on twin girls when they were just embryos, resulting in the birth of the world’s first genetically modified babies. A third gene-edited child was born a year later.
Now, the disgraced gene-editing scientist, who was imprisoned in China for three years for the unethical practices, tells (https://www.scmp.com/news/china/science/article/3209289/we-should-respect-them-chinese-creator-worlds-first-gene-edited-humans-says) the South China Morning Post that all three children are doing well. “They have a normal, peaceful, and undisturbed life,” He says. “This is their wish, and we should respect them. The happiness of the children and their families should come first.”
He’s original goal was to use gene editing to attempt—many call this a live human experiment—to rewrite the CCR5 gene to create resistance to HIV. He says the genes were edited successfully and believed it gave the babies either complete or partial HIV resistance because of the mutation.
-
Weird History!