


Topic: CRISPR and AI  (Read 3530 times)

MikeDeTiger

  • All Star
  • Posts: 4349
Re: CRISPR and AI
« Reply #56 on: January 30, 2025, 09:55:00 AM »
Quote

One area that I wonder about is liability.

I think about this with autonomous driving. For autonomous driving to REALLY be here, it has to mean that the vehicle is capable of driving me and I have no legal liability for what it does. I.e. I can be passed out drunk in the front left seat of the car and if it mows down a row of nuns on their way to mass, it's the car manufacturer's problem, not mine. Which means that the autonomous system doesn't have to be merely "better than a human", but it has to be so much better than the human that the human is no longer needed.

I wonder what the risk of misdiagnosis is when it comes to things like AI-read medical imaging. Obviously if AI says "that's a tumor" it'll get referred to a human doctor because you're going to have to treat the tumor. But what if AI says "nope, there's no tumor there!" and then you die of cancer because it was, actually, a tumor. Can your family sue the medical group for malpractice because they farmed your diagnosis out to AI? Can they sue the AI medical company because their faulty algorithm didn't get your diagnosis right?

These are some of those areas where I wonder how it's all going to be figured out.


As I say, the business world is murky, complicated, and treacherous, and I won't hazard any guesses as to how it will go legally or in the insurance industry.  

Speaking somewhat knowledgeably from working in the medical field for a while and being married to a provider, and from knowing a little about AI--more so on the analytics side, but at least a working knowledge of the generative side--I think the technological side has a clear path.  Just my opinion.  

The average radiologist has a stellar hit/miss rate when reading images.  A good ML model that stays in its lane is already better at some things.  Radiologists only misdiagnose a tiny fraction of images.  AI only misdiagnoses a fraction of their fraction.  

Back to the precision/recall tradeoff: if you train a model where the graph of the curve (the ROC curve, I think, is the name that's escaping me) hugs the axes, you can really go hard on fine-tuning the model to weed out false negatives.  Sure, false positives are their own problem, but 1) those can be passed on for human inspection, and 2) it's better than letting false negatives slip by.  Legally, sure, maybe there will always be problems.  But logically, AI already has a better hit/miss ratio, and the reality is that any positive it misses, statistically speaking, would almost certainly have been missed by a human as well.  
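For anyone who wants to see that threshold game concretely, here's a minimal sketch using scikit-learn on synthetic data.  Nothing medical about it, and every number in it is invented for illustration:

```python
# Minimal sketch: pick a decision threshold that nearly eliminates false
# negatives, accepting extra false positives that get routed to a human.
# Synthetic data; purely illustrative, not a real screening model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Stand-in for "abnormal vs. normal" image features; ~5% positives.
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

precision, recall, thresholds = precision_recall_curve(y_te, scores)

# Find the highest threshold that still catches ~99.9% of true positives.
# (recall[:-1] lines up element-for-element with the thresholds array.)
target_recall = 0.999
idx = np.where(recall[:-1] >= target_recall)[0].max()
print(f"threshold={thresholds[idx]:.3f}  "
      f"recall={recall[idx]:.4f}  precision={precision[idx]:.3f}")
```

The precision you land on at that threshold is the price tag: it tells you roughly how many false alarms humans will have to re-read so that almost no positive slips through.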

Something to remember in all this is what I said earlier: the model's performance is measured against ground truth (like all models).  But in this case ground truth is an amalgamation of human ability.  So against any individual radiologist the model looks better, but what it's really being measured against is the pooled expertise of many radiologists.....and that's different from an algorithm that identifies cats or dogs, where the labels are understood to be foolproof.  That is what I think is likely to complicate legal matters for potentially years.  I'm just guessing.  
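Here's a toy simulation of that point (every error rate below is invented, purely to show the mechanics): build the "ground truth" as a majority vote over several simulated readers, and the consensus label comes out more reliable than any single reader it was built from.

```python
# Toy illustration: consensus "ground truth" built from a reader panel.
# All error rates are invented; this only demonstrates the mechanics.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
truth = rng.random(n) < 0.05    # the true (unknowable) disease state, ~5% positive

# Five simulated readers, each randomly flipping ~3% of their calls.
readers = [np.where(rng.random(n) < 0.03, ~truth, truth) for _ in range(5)]

# Majority vote of the panel becomes the dataset's "ground truth" label.
consensus = np.sum(readers, axis=0) >= 3

for i, r in enumerate(readers):
    print(f"reader {i}: agrees with consensus on {(r == consensus).mean():.1%} of cases")
print(f"consensus matches actual truth on {(consensus == truth).mean():.1%} of cases")
```

So a model scored against that consensus is being measured against the panel's pooled judgment, a human-made yardstick, not the foolproof labels of a cats-vs-dogs dataset.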

Yet another legal ramification to consider: how do lawsuits fare concerning not using AI?  Some of these AI models I looked into could spot pre-cancerous areas like a boss.  As in, it's not even cancer yet, and no radiologist would've ever said "we need to keep an eye on you over the next 5 years," but AI found it and patients were able to start life-saving preventative treatment.  And no one even knows what it's looking at.  The models are so black-box that nobody knows how they figured out the problem spot that would develop cancer.  Obviously, doctors have understood pre-cancerous areas for a long time and can even spot them sometimes.  But not even close to the level at which AI can do it.  

The legal/moral/business implications may wind up swinging both ways.  

MikeDeTiger

  • All Star
  • Posts: 4349
Re: CRISPR and AI
« Reply #57 on: January 30, 2025, 10:26:38 AM »
Quote

I think what @MikeDeTiger was suggesting about using it as triage is where the problem comes in.
 
  • A false positive is easy to correct, and not THAT big of a deal, because the patient will be referred to someone human for treatment. If it's not a tumor, it's not like AI will immediately operate on you based on its own diagnosis. At least... Not yet! (Obv the mental anguish of AI saying "you've got a tumor" and then finding out you don't would be hell... But it's better than having the tumor!)
  • A false negative is the problem. If AI says "oh you totes def don't have a tumor!" and nobody double-checks it, then you die because you actually had a tumor, that is where the bigger issue comes in.

If they're using AI diagnosis as a triage due to workload issues, and there are false negatives that "slip through the cracks", then you might have lawyers salivating...

Meant to add....  At this point, the medical workflow is nowhere near a point where someone would be told they have a tumor because AI thought so.  A doctor would be notified of the results, they would read the image themselves, and even if a doctor caught it without AI, they wouldn't tell a patient anything like that without further steps: biopsies and other applicable confirmations. 

I know you probably didn't mean that as if AI would read an image and then generate an email to the patient saying "Hi, I'm an algorithm and I notice you have cancer, you should follow up with an oncologist," but I just thought it was worth pointing out as a PSA. 

Overall, I think you're right.  False positives are less of a problem than false negatives.  But, again, the models can be trained for an acceptable level of the former in order to ensure the latter virtually never happens.  You probably know this, but again for the folks who don't know a ton about this, there is an inherent tradeoff between false positives and negatives.  So if you are willing to extend the necessary leeway one direction, a good model can hit close to 100% the other way.  i.e., if we're willing to put up with more instances of false positives, we can guard against nearly every false negative.  
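To put toy numbers on that tradeoff (again synthetic data, nothing clinical): watch what happens to the false-negative and false-positive counts as the cutoff drops.

```python
# Toy numbers for the FP/FN tradeoff: lowering the cutoff trades a pile of
# extra false positives for nearly zero false negatives. Synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.95, 0.05], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

for cutoff in (0.50, 0.05):   # the default cutoff vs. a deliberately paranoid one
    tn, fp, fn, tp = confusion_matrix(y_te, scores >= cutoff).ravel()
    print(f"cutoff={cutoff:.2f}  false negatives={fn}  false positives={fp}")
```

The paranoid cutoff buys near-zero misses at the cost of a stack of extra human re-reads, which is exactly the trade described above.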

That all leads to another reason why most don't believe the field of radiology will be replaced by AI (anytime soon).  Patients prefer dealing with a person, and for various reasons, I think that's unlikely to ever fully change.  They're also owed transparency under modern standards and practices.  As mentioned, many of these models are black boxes, and you can't get an answer out of them as to why they said what they said.  For that reason alone, AI is likely to be an incredible tool in a radiologist's toolkit, not a replacement for him/her.  The doctor can explain everything a patient needs explained, up to and including "I think things are fine, but the AI believes there is reason to keep monitoring your situation, and I've found it to be a useful tool in the past.  How do you feel about that?"  Patients have a right to input in their care, and that works so much better with a human. 

Riffraft

  • Starter
  • Posts: 1473
Re: CRISPR and AI
« Reply #58 on: January 30, 2025, 12:28:09 PM »
"The Moon is a Harsh Mistress"...
One of my favorite books

utee94

  • Global Moderator
  • Hall of Fame
  • Posts: 22219
Re: CRISPR and AI
« Reply #59 on: January 30, 2025, 12:46:32 PM »
Yup it's one of my faves.  I'm a pretty big Heinlein fan in general.

CatsbyAZ

  • All Star
  • Posts: 3184
Re: CRISPR and AI
« Reply #60 on: January 30, 2025, 06:13:03 PM »
First off, I'm pleasantly surprised by how many of us are well versed enough to speak in depth on Artificial Intelligence. Many of you (texas guys, mike, bw) have the analytical side down, whereas I'm coming at it more from the philosophical side.

In summary, my mind goes to two questions when contemplating this matter:

1) Is it ultimately possible for AGI to reach a point of sentience, of self-awareness, of independent thought?  It's as much a philosophical and spiritual question as it is a scientific one.  Can an artificial system gain true sentience, and if such a thing as a soul does exist, is it possible for an artificial system to develop one?  And if not, is that potentially problematic?

I believe Artificial General Intelligence (AGI) will be achieved sooner than we realize, potentially within 2-5 years, given the rate at which Narrow AI programs are advancing. That means an artificial sentience: self-aware, independently thinking, and, to introduce another word, conscious, within 2-5 years. I believe this other, nonhuman yet human-capable consciousness will be stumbled upon, created accidentally, and might even exist for a while before developers realize that they have, in fact, created another consciousness.

I think of its discovery like Christopher Columbus sailing for India. He had the technology (a ship), enough expertise, and the drive to take a shot at crossing the seas. But what he ultimately discovered was quite unexpected (the Americas). We have the technology (quickly advancing Narrow AI) and the drive (an arms race) to give it a shot. We expect to discover a separate but similar consciousness, yet what we stumble upon will more likely be far more alien than we could have expected (more on all this in a later post).

But why, from a philosophical standpoint, do I believe we’re closer to (accidentally) achieving an Artificial General Intelligence breakthrough? For centuries the concept of consciousness has been a cornerstone of one of Philosophy’s more subjective debates: what exactly is it? How does it emerge in humans? Consciousness has never been rigorously or definitively defined in philosophical or scientific terms. Yet from both standpoints there is general consensus that human consciousness is inextricably tied to Language, to the point that the Existentialist Martin Heidegger famously stated: “Language is the house of being. In its home human beings dwell. Those who think and those who create with words are the guardians of this home. Their guardianship accomplishes the manifestation of Being insofar as they bring this manifestation to language and preserve it in language through their saying.” (Heidegger’s House Of Language)

A painfully theoretical statement indeed, but to simplify: Heidegger is saying that we live inside our language systems in order to live at all, and that we conform to environments architected by those language systems. Theoretically, language does not come into the world so much as the world comes into language. Language is our footing (our consciousness) in the world.

How does this play out practically? Look at the medical world’s highly complex vocabulary. That exclusive vocabulary is how medical professionals comprehend, navigate, and live in their world: through a dedicated language system. In Heidegger’s terms, Being conforms to that world through its language.

Now, with all that said, and proposing our footing with Language as a form of consciousness, I’ll quote Bret Weinstein, a professor of evolutionary biology, speaking to one of his concerns about ChatGPT:

“…when we say ‘well Chat GPT doesn’t know what it’s saying'…because it’s not programmed to have a consciousness…we are actually ignoring the other half of the story which is that we don’t know how human consciousness works and we don’t know how it develops in a child…”

“…a child is exposed to world of adults talking around them…the child experiments first with phonemes and then words and then clusters of words and then sentences and, by doing something that isn’t all that far from what Chat GPT is doing, it ends up becoming a conscious individual…”

“…Chat GPT isn’t conscious, but it isn’t clear…that we are not suddenly stepping onto a process that produces [consciousness] very quickly without us even necessarily knowing it…”


betarhoalphadelta

  • Global Moderator
  • Hall of Fame
  • Posts: 14513
Re: CRISPR and AI
« Reply #61 on: January 30, 2025, 06:41:48 PM »
Good stuff @CatsbyAZ ... A few points...

Agree 100% that we don't understand consciousness, nor intelligence. That of COURSE means it's quite possible that, the first time it arises in machine form, we may not realize what has been created, because all we can really understand is what the outputs are. 

That said, I'm not sure that I agree with the professor of evolutionary biology, because while he's obviously smart in his field, it's not a field with a lot of overlap with computer science. And so I might quibble with the idea that an LLM will "slip its leash" and become AGI. We can try to create a machine to replicate one highly specific--and highly important--aspect of human intelligence: language. But there's a LOT more going on in the brain than language, stuff we don't understand the how or why of, and it's not something we're building into that machine intelligence because we don't have a clue what it is. 

Things like ChatGPT grab our attention precisely because they're so good at something we didn't think they could be good at. The sense is that because a machine can do so well with something as incredibly complex as language, it's just one tiny little step from there to AGI. I personally think that step is a lot bigger than most people think. 

Again, going back to things like autonomous driving, such as Tesla's Autopilot: I think what that system is capable of is truly amazing. With what they've put together, they're maybe 95% of the way to being able to drive. But... That last 5% is a REAL bitch. Going from 0 to 95% is hard but doable. Going from 95% to 99% might be 5-10x as hard. Going from 99% to 100% might be 100x as hard as THAT. 

We're incredibly impressed by what ChatGPT or Autopilot is capable of, because it's really damn impressive. But we look at the capability and don't really understand that to go from there to sentience/AGI or to autonomy is perhaps orders of magnitude beyond. 

utee94

  • Global Moderator
  • Hall of Fame
  • Posts: 22219
Re: CRISPR and AI
« Reply #62 on: January 30, 2025, 06:49:35 PM »
Quote from: betarhoalphadelta on January 30, 2025, 06:41:48 PM

Good stuff @CatsbyAZ ... A few points...

Agree 100% that we don't understand consciousness, nor intelligence. That of COURSE means it's quite possible that, the first time it arises in machine form, we may not realize what has been created, because all we can really understand is what the outputs are.

That said, I'm not sure that I agree with the professor of evolutionary biology, because while he's obviously smart in his field, it's not a field with a lot of overlap with computer science. And so I might quibble with the idea that an LLM will "slip its leash" and become AGI. We can try to create a machine to replicate one highly specific--and highly important--aspect of human intelligence: language. But there's a LOT more going on in the brain than language, stuff we don't understand the how or why of, and it's not something we're building into that machine intelligence because we don't have a clue what it is.

Things like ChatGPT grab our attention precisely because they're so good at something we didn't think they could be good at. The sense is that because a machine can do so well with something as incredibly complex as language, it's just one tiny little step from there to AGI. I personally think that step is a lot bigger than most people think.

Again, going back to things like autonomous driving, such as Tesla's Autopilot: I think what that system is capable of is truly amazing. With what they've put together, they're maybe 95% of the way to being able to drive. But... That last 5% is a REAL bitch. Going from 0 to 95% is hard but doable. Going from 95% to 99% might be 5-10x as hard. Going from 99% to 100% might be 100x as hard as THAT.

We're incredibly impressed by what ChatGPT or Autopilot is capable of, because it's really damn impressive. But we look at the capability and don't really understand that to go from there to sentience/AGI or to autonomy is perhaps orders of magnitude beyond.

Agree with that bolded in red.  Which is why I raised my two questions above.

But it's also why I put aside Question #1 as not being important from a practical standpoint.  It is of course incredibly interesting from a philosophical and spiritual view, but practically I think we're going to have systems that successfully emulate independent thought, and I don't think it's incredibly far off.

betarhoalphadelta

  • Global Moderator
  • Hall of Fame
  • Posts: 14513
Re: CRISPR and AI
« Reply #63 on: January 30, 2025, 07:34:46 PM »
Quote from: utee94 on January 30, 2025, 06:49:35 PM

Agree with that bolded in red.  Which is why I raised my two questions above.

But it's also why I put aside Question #1 as not being important from a practical standpoint.  It is of course incredibly interesting from a philosophical and spiritual view, but practically I think we're going to have systems that successfully emulate independent thought, and I don't think it's incredibly far off.
I think we're talking about some different questions. If you're thinking about AI from the standpoint "can it perform tasks that a human of average to slightly above average intelligence can perform at equal or better quality?", I think we've got a number of those Narrow AI systems that can already do so. I know a lot of humans of average to slightly above average intelligence. Their capabilities are not all that impressive. (As we all know, I'm elite so I look down on them.)

I don't worry about ChatGPT replacing my writing. Why? Because I don't write things that have been written before. Some of the white papers I've written, I took on specifically because nobody in the world had published anything publicly that tackled the subject. If something is easily Googled, I have no interest in publishing it. It's the stuff beyond that that interests me.

Where it gets questionable regarding actual AGI, to me, is that I think the bridge to AGI is difficult. However, once we hit AGI, I think the next step to artificial superintelligence (ASI) is disturbingly quick. Basically, take everything involved in natural intelligence, and remove ALL of the natural impediments to superintelligence--namely biology.

Imagine that you have an entity that has reached AGI. For that entity to learn the state of the art of hacking knowledge would be trivial. For that entity to then utilize that knowledge to escape its leash and clone itself 100x using all computing power available to it would be trivial. For those entities to then attempt to develop new knowledge, and quickly/seamlessly share it amongst each other, would be trivial. It's not like humans where you have to spend months to years to "learn" something new. It's more like each entity being able to accumulate new capabilities/knowledge as easily as your phone installs an app. And then you get a snowball effect from something that has none of the limitations of human intelligence.

From a practical standpoint, the creation of AGI, to me, means that ASI isn't far behind. And none of us know what that entails. It could be utopia, or it could be Skynet.

OrangeAfroMan

  • Stats Porn
  • Hall of Fame
  • Posts: 21773
Re: CRISPR and AI
« Reply #64 on: January 30, 2025, 08:31:08 PM »


Quote

And in the worst case, the AI LLMs trained on the entire web have a tendency to "hallucinate."  They create completely false facts when queried on a subject.  The false facts are usually plausible, but they're absolutely 100% fabricated, because the training data has so much ambiguity and incorrect data within it.


This is what I found when asking about specific things.  What creeped me the hell out is that when I called it out, it doubled down once before acknowledging it basically didn't have enough data to form a valid opinion.

I get the error.  I do NOT get the doubling down when called out.
“The Swamp is where Gators live.  We feel comfortable there, but we hope our opponents feel tentative. A swamp is hot and sticky and can be dangerous." - Steve Spurrier

OrangeAfroMan

  • Stats Porn
  • Hall of Fame
  • Posts: 21773
Re: CRISPR and AI
« Reply #65 on: January 30, 2025, 08:34:40 PM »
I'm in the "not concerned" camp of AI and its potential to screw us over.  For what AI is and does and can be, we're irrelevant.  We're normally foolish and sometimes clever apes. 

I disagree with the "we're just an anthill in Africa and AI wouldn't think twice about destroying us" idea......I view it more as: we're AI's pet golden retriever, and the notion that it would be fearful enough of us to try to harm us is nonsensical.
“The Swamp is where Gators live.  We feel comfortable there, but we hope our opponents feel tentative. A swamp is hot and sticky and can be dangerous." - Steve Spurrier

CatsbyAZ

  • All Star
  • Posts: 3184
Re: CRISPR and AI
« Reply #66 on: January 30, 2025, 09:12:13 PM »
Quote from: OrangeAfroMan on January 30, 2025, 08:31:08 PM

I get the error.  I do NOT get the doubling down when called out.

Are you sure you weren't just posting here when you thought you were messaging ChatGPT?

Gigem

  • All Star
  • Posts: 3351
Re: CRISPR and AI
« Reply #67 on: January 30, 2025, 10:18:46 PM »
The really scary part is that WHEN AGI starts, it will evolve quickly. Very quickly. For example, humans (Homo sapiens) have existed for ~200,000 years. Our recorded history only goes back 5-6,000 years. Technology moved very slowly until the last few hundred years. The printing press has only existed for 500 years; electricity, barely over 100; computers, about 75-80. I think we’re all old enough to see how technology has transformed our lives in the last 40, 30, 20, 10, 5, and 1 years. iPhones haven’t even existed for 20 years.  

AGI will be able to evolve itself, and the next version will evolve itself, and so forth and so on. Once we get to that step, the curve gets really steep. I first heard about this transition, called the singularity, about 10 or so years ago. At the time, most people assumed it would come in the 2030s or 2040s. Now, some people think we may be on the verge. 

Gigem

  • All Star
  • Posts: 3351
Re: CRISPR and AI
« Reply #68 on: January 30, 2025, 10:21:11 PM »
It’s kinda like that scene in Terminator 2 when the main scientist is talking about the chip they found and how evolved it was. He exclaims, while simultaneously realizing what it really meant, that it had things in it “that we would’ve never thought of,” his voice trailing off. 

utee94

  • Global Moderator
  • Hall of Fame
  • Posts: 22219
Re: CRISPR and AI
« Reply #69 on: January 31, 2025, 06:53:51 AM »
Quote from: betarhoalphadelta on January 30, 2025, 07:34:46 PM

I think we're talking about some different questions.
I don't really think we are, but I've expounded enough and will leave it at that. :)

I actually wrote a novel about an AI reaching sentience 30 years ago.  Well, 3/4 of a novel to be more precise.  The premise was that the code was capable all along, but it wasn't until that code was enabled on a system architecture that was fast enough and complex enough, that it finally became self-aware.

And the reason that system architecture was so fast and complex, is because it was no longer enabled in silicon hardware, but rather in organic matter mimicking the human brain.  There's a word for that now-- wetware-- but I don't recall that term existing 30 years ago, although the idea certainly did.

I really wish I'd finished that novel.  It would have been timely then, although it's been completely surpassed by now.



 
