
Perhaps the BEST B1G Forum anywhere, here at College Football Fan Site, CFB51!!!




Topic: CRISPR and AI

 (Read 3484 times)

MikeDeTiger

  • All Star
  • Posts: 4349
Re: CRISPR and AI
« Reply #29 on: January 29, 2025, 03:32:48 PM »
I'm going to assume that machine learning in radiology basically followed this rough model:

  • An exhaustive data set of radiological scans (of whatever type they worked with) was built.
  • This data set obviously didn't just use raw scans; they were annotated with tags identifying the various anomalies and outcomes that the actual patients had. E.g., if something was found to actually be a tumor when removed, it was tagged as a tumor in the data set.
  • The machine learning models were trained on this very large dataset, such that they developed the ability to be presented with any of the images in the data set, without any tagging, and identify what they were supposed to identify with high accuracy.
  • The models were then tested on novel untagged scans that they had NOT been trained on, to see if their accuracy held.
  • Once a sufficient level of accuracy was achieved, it became a tool that radiologists can now use to augment their own training--because sometimes the model catches something they would have missed. But sometimes it catches something they missed because it was actually nothing. So the radiologist has to be able to use their own judgment to both recognize their own false negatives, and exclude the model's false positives.


Somewhat close on that?
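That rough model maps onto the standard supervised train-then-test loop. A minimal sketch in plain Python (a toy one-number "scan" feature and a nearest-centroid classifier standing in for a real imaging model; all data and names here are made up):

```python
# Toy supervised pipeline: labeled training set -> fit -> evaluate on held-out data.
# Each "scan" is reduced to one feature (say, lesion brightness); labels come from
# confirmed outcomes, as the bullets above describe.
train = [(0.1, "normal"), (0.2, "normal"), (0.3, "normal"),
         (0.8, "tumor"),  (0.9, "tumor"),  (0.7, "tumor")]
test  = [(0.15, "normal"), (0.85, "tumor"), (0.25, "normal")]  # never seen in training

def fit(data):
    """Learn one centroid per label from the tagged examples."""
    sums, counts = {}, {}
    for x, label in data:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Classify an untagged scan by its nearest learned centroid."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

model = fit(train)
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
print(accuracy)  # 1.0 on this tiny held-out set
```

The point of the held-out test set is exactly the bullet above: accuracy is only meaningful on images the model never trained on.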

Not far off.  

It's worth noting that within rigid contexts, ML models already diagnose with a higher degree of accuracy than radiologists.  There are obvious "real-world" problems, such as medical boards getting on the bandwagon, as well as insurance companies deciding to pay for such tests.  

But even without all that, we're not yet in Radiology Utopia, for reasons that are purely AI-related. 

For one thing, ground truth for radiology images is hard to come by.  brad and utee will know this, but for anybody unfamiliar, "ground truth" just means the objective, brute, real fact about something, no matter who thinks differently.  In this case, we're usually dealing with some kind of classification algorithm, i.e., the model spits out a label: "You have a tumor, bruh" or "Nah, ur good."  It can do this, as brad mentioned, because it has trained on gobs and gobs of data that was labeled for it, before it was given the task of deciding for itself about an unlabeled image.

The first and obvious question is, who decides how the training images are labeled?  In this case.....radiologists did.  But wait, how can AI surpass humans if it learned everything it knows from fallible humans?  Thing is, real ground truth for radiology images can only be verified with biopsies, lab tests, and other things that are often out of the question.  You can't biopsy somebody with a healthy image to prove they're healthy, and much of the time you can't biopsy somebody whose image suggests a problem, for various reasons, both medical and ethical.  That's just one of the hurdles.

The solution, in most cases I researched, was that ground truth was established--both for the training sets and for the test sets--by a group of radiologists.  The idea is that multiple heads are better than one, and indeed, while a radiologist can and does misdiagnose an image, the chances go way down if a group of them reaches a consensus.  When evaluating a model for published research, it's usually being weighed against that human consensus (and stats describe how it performs compared to individual doctors, not the collective).  So, ground truth in this case is not like teaching an algorithm to classify images of cats and dogs, where the training labels can be taken as a given.  
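The consensus-labeling idea is easy to make concrete. A sketch (hypothetical reads, majority vote standing in for whatever protocol a given study actually used), plus the arithmetic for why a panel beats an individual:

```python
from collections import Counter

def consensus_label(reads):
    """Ground-truth proxy: the majority vote among several radiologists' reads."""
    label, _ = Counter(reads).most_common(1)[0]
    return label

# Three hypothetical radiologists read the same scan.
print(consensus_label(["tumor", "tumor", "normal"]))  # tumor

# Why groups beat individuals: if each reader is independently right 90% of the
# time, a 3-reader majority is wrong only when 2 or more of the 3 are wrong.
p_err = 0.1
p_majority_err = 3 * p_err**2 * (1 - p_err) + p_err**3
print(round(p_majority_err, 3))  # 0.028 -- versus 0.1 for a lone reader
```

The independence assumption is generous (radiologists share training and blind spots), but it shows the direction of the effect.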

Another problem is AI doesn't yet know which tests are appropriate for what kind of findings, and it's more convoluted than it sounds, though I would think eventually that part will get sorted out.  Yet another problem is with rare tumors or conditions.  AI will never perform well when it doesn't have much data to train on, while a radiologist who has years of experience and went through thousands of hours in med school has an advantage with rare cases.  A human can be shown a few images of a rare condition and begin nailing it pretty quickly.  An ML model can't.  

There's a lot more to go into, but I'm probably boring you to tears.  

The only thing I'd push back on is about radiologists augmenting their training with AI.  They will certainly augment their practice, but for them to train in the first place, I don't know that they'll ever use anything but their eyes and brains.  Radiologists learn to do what they do very similarly to AI.  A radiologist is shown hundreds of thousands of normal, disease-free images before they're ever shown any kinds of pathologies.  That's because they have to be able to identify normal images in their sleep, so that when they see an image with a problem, it jumps out at them.  I'd assume that using AI to help them train would work against the very skill they're trying to develop.  But, that's probably a better question for my medical wife.

medinabuckeye1

  • Hall of Fame
  • Posts: 10620
Re: CRISPR and AI
« Reply #30 on: January 29, 2025, 03:41:37 PM »
I've even seen some places selling fake fingers you can wear on your hands to give the appearance that you have extra fingers just in case you get caught on video you can claim it's fake. 

Strange world. 
That is hilarious and oddly smart.  Some aspiring politician is probably walking into a strip club right now wearing fake fingers so if a picture ever emerges he can point to the extra fingers and allege that the picture is AI generated, LoL.  

Drew4UTk

  • Administrator
  • Hall of Fame
  • Posts: 11212
Re: CRISPR and AI
« Reply #31 on: January 29, 2025, 03:48:28 PM »
i started pecking code in the 80s, transcribing precisely from a book w/o understanding of it... at all... then, when the internet became a thing, i dug into HTML... all as a hobby; never for income.  

i still kick around in code... its logic and lack of personality appeal to me and i find it settling. weird, huh? 

i share that to offer that i've been around it for some time and have seen its capabilities expand- and along with that, how it's used (and often abused).  a few years back on this site, we had really strange behavior with data mining.  we were also on a few watch lists.  i was concerned about that becoming 'hit lists' which would render access to the site difficult if attempted through corporate or government servers... i was alerted to it and simply observed it for a long time, though.  when i started poking them, they responded by immediately shifting to proxies, and it became a game of cat and mouse... researching where the site was accessed from, who it was, and what their relationships were with the organizations that were using them as proxy was entertaining.  it was simple to isolate their access IP after screening against known IPs of members- making everyone else suspect- and then busting them down by region and instantly figuring out that 'Europe surely doesn't have an interest in CFB, so... why they here?'... same with other regions... our title and basis for topic here made sifting through and locating strange players a lot easier.  

... that, and i did work for the network operations center aboard Camp Lejeune at the time and compared notes with their auditors who are paid to monitor network traffic.  I taught them about some servers; they told me who other servers were.  during covid and hyperactive campaigns to control the narrative, it was especially fun... and revealing. 

our data is locked up here as tight as i can make it without adding stupid requirements to access... the biggest reason we were being constantly observed is now behind a membership wall, and when that happened almost all that traffic disappeared within a reasonable amount of time. 

.... 

i get humored on the subject of AI.... the only ghost operating that machine is your memories.  all piled up in a heaping lump of data that had little value until someone discovered how to mine it... and when that happened?  it's become mighty useful.  and, you can't hide from it.  there are mountains of data about you no matter how much effort you expended to escape it.  

MrNubbz

  • Hall of Fame
  • Posts: 19973
Re: CRISPR and AI
« Reply #32 on: January 29, 2025, 03:52:00 PM »
This has long been done by 'bots, and is quite simple to program
Need to treat these bot-flown drones like skeet
"Let us endeavor so to live - that when we come to die even the undertaker will be sorry." - Mark Twain

betarhoalphadelta

  • Global Moderator
  • Hall of Fame
  • Posts: 14513
Re: CRISPR and AI
« Reply #33 on: January 29, 2025, 03:55:50 PM »
There's a lot more to go into, but I'm probably boring you to tears
Not at all! You're not boring me at all. Everyone else, probably. But not me :57:

The only thing I'd push back on is about radiologists augmenting their training with AI.  They will certainly augment their practice, but for them to train in the first place, I don't know that they'll ever use anything but their eyes and brains.
Sorry, that's what I was getting at. They'll train to become a radiologist the same way they always have.

In their practice, they'll still primarily use their own eyes and brains but also run the images through the model to see if there is something that they missed, or something that warrants deeper attention, etc. 

I.e. a writer is typically capable of knowing how to spell and use grammar properly. But some days you maybe haven't had that second cup of coffee and the MS Word spelling / grammar check picks up something that you would NORMALLY have seen, but didn't. 

MikeDeTiger

  • All Star
  • Posts: 4349
Re: CRISPR and AI
« Reply #34 on: January 29, 2025, 03:56:40 PM »
Well, for those of us who do understand it, please tell us WTF is happening? 

Gigem,

I'm not sure if you really meant for those who don't understand it, but if so, I think brad's earlier point is a crucial one to understand, and something I find many people who don't know much about it struggle with.  

AI doesn't know how to do anything.  Knowing would entail some kind of consciousness....self-awareness....which many call AGI, like brad said, artificial general intelligence.  That's as opposed to regular AI, artificial intelligence, which is something that gives the appearance of intelligence, but has none.  Asking what AI knows is like asking what a rock knows.  The question doesn't make sense.  

That's why LLMs can hallucinate.  An LLM doesn't "know" it's saying something false or stupid.  It never knows when it's right, either.  It's just cranking out the next most likely word based on probability and model parameters.  That's why AI models perpetually come up with unintended problems that nobody saw coming.  Amazon built an AI to help them screen applicants' resumes, and it wound up being sexist: it tended to filter out women at a significantly higher clip for no good reason.  Some people hear that and wonder why they would program it that way, and the answer is, they didn't. 
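The "next most likely word" machinery can be shown with a toy bigram model (a tiny made-up corpus; real LLMs use transformers over huge token vocabularies, but the principle of emitting the highest-probability continuation is the same):

```python
from collections import Counter, defaultdict

corpus = "the model predicts the next word the model never knows".split()

# Count which word follows which -- the whole "knowledge" of this model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Return the most probable continuation. No understanding, just counts."""
    word, _ = follows[prev].most_common(1)[0]
    return word

print(next_word("the"))  # "model" -- it follows "the" twice vs. "next" once
```

Nothing in there checks whether the output is true, which is exactly why scaled-up versions of the same idea can confidently generate falsehoods.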

Which is the second starting point, imo.  AI works backwards from traditional programming.  For years, programmers have been programming the rules so a machine could get them to an outcome more effectively.  Machine learning is just the opposite.  In that case you feed it the outcomes, and it figures out the rules for you.  (And when I say "figures out," again, I don't mean it actually knows anything.)

Amazon's data scientists didn't say "Let's make an AI that throws out women's resumes."  They fed it a bunch of resumes, with labels roughly along the lines of "we'd hire this person" or "hellz no, they can't work here."  The algorithm, with a few constraints from its human overlords, set about figuring out the rules, i.e., what makes a "good" resume.  Along the way, for various reasons, it latched on to language use.  Turns out, men tend to use more aggressive language on their resumes than women.  For example, a woman might write as a bullet point "Gained 12% market share" where men tended to write things like "Captured additional 12% market share."  That's a drop in the bucket as far as what went wrong, but the point is it doesn't take nefarious intentions for AI to do something you didn't want it to do.  
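A hypothetical sketch of how that happens: a crude word-weight learner (everything here is invented toy data, not Amazon's actual system) picks up the verb choice as a hiring signal, because it correlates with the labels it was fed:

```python
# Toy "resume screener": learn a weight per word from labeled examples.
# Nobody codes "penalize women" -- but if aggressive verbs correlate with past
# "hire" labels, the learner rewards them anyway.
train = [("captured additional market share", 1),   # 1 = was hired
         ("aggressively grew revenue", 1),
         ("gained market share", 0),                # 0 = was not hired
         ("supported revenue growth", 0)]

weights = {}
for text, label in train:
    for word in text.split():
        weights[word] = weights.get(word, 0) + (1 if label else -1)

def score(resume):
    """Higher score = 'better' resume, per the learned (biased) rules."""
    return sum(weights.get(w, 0) for w in resume.split())

print(score("captured market share"))  # positive: "captured" learned as a hire signal
print(score("gained market share"))    # negative: same achievement, different verb
```

The two resumes describe the same accomplishment; only the verb differs, and the model treats that as the deciding feature.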

The short version is, I find it to be a useful tool, but it has a way to go in a lot of ways.  

medinabuckeye1

  • Hall of Fame
  • Posts: 10620
Re: CRISPR and AI
« Reply #35 on: January 29, 2025, 04:00:37 PM »
I'm a total layman on this so maybe I'm wrong, but one of the things that AI seems to fail at is what I call a "practicality test."  Allow me to explain:

When I was in school at Ohio State I was in a Cost Accounting class and we were figuring out how much we needed to charge for the "widgets" that we were making (if you studied accounting or economics, you made a lot of widgets).  Anyway, the professor said something that was both incredibly simple and incredibly smart.  He said:

Look, if everybody else sells widgets for $10 and you do your calculations and determine either that you need to charge $100 or that you can sell them for $1, you messed up.  Your organization might be a little more or a little less efficient than their competitors so it is possible that everyone else sells them for $10 but you can't make a profit at less than $12 or that you can turn a profit at $8 but there is no way that you are an order of magnitude more or less.  

That lesson is something that I use a lot IRL.  It isn't just AI that messes this up.  A lot of numbers type people (like me) tend to make this mistake.  We get caught up in our formulas and then convince ourselves beyond any doubt that the sky is red or whatever erroneous conclusion we reached by messing up an equation somewhere along the way.  It REALLY helps to mentally do a "practicality test" and just ask yourself "does this seem plausible?"
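That habit is easy to encode as a guardrail. A sketch of the "practicality test" using the hypothetical widget numbers above, assuming anything more than a couple of multiples off the market price gets flagged:

```python
def plausibility_check(my_price, market_price, tolerance=2.0):
    """Flag any calculated price more than `tolerance`x away from the market price."""
    return market_price / tolerance <= my_price <= market_price * tolerance

# Everybody else sells widgets for $10.
assert plausibility_check(12, 10)       # a bit less efficient: plausible
assert plausibility_check(8, 10)        # a bit more efficient: plausible
assert not plausibility_check(100, 10)  # order of magnitude off: recheck the math
assert not plausibility_check(1, 10)    # ditto in the other direction
print("all checks behave as expected")
```

The tolerance is a judgment call; the point is simply that the check runs *after* the formula, as a separate sanity pass.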

betarhoalphadelta

  • Global Moderator
  • Hall of Fame
  • Posts: 14513
Re: CRISPR and AI
« Reply #36 on: January 29, 2025, 04:07:44 PM »
That lesson is something that I use a lot IRL.  It isn't just AI that messes this up.  A lot of numbers type people (like me) tend to make this mistake.  We get caught up in our formulas and then convince ourselves beyond any doubt that the sky is red or whatever erroneous conclusion we reached by messing up an equation somewhere along the way.  It REALLY helps to mentally do a "practicality test" and just ask yourself "does this seem plausible?"
Yes, and that's something I taught to my son last year when he was going through AP Chemistry. When you get to an answer, you should try to mentally decide "could this answer actually be real?"

I.e. he was doing a problem and having trouble with temperature. It was something like putting a certain weight piece of metal at 100C into a certain volume of water at 20C, and determining the resultant temperature. And the temperature kept coming back from his calculation as LESS than 20C. I.e. the hot metal made the water colder? Nope. Not going to happen. That "plausibility test" is the first indication that maybe you don't know WHERE you screwed up, but you certainly screwed up!
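That plausibility test falls straight out of the physics: with no losses, the final temperature is a heat-capacity-weighted average of the two starting temperatures, so it must land between them. A sketch with made-up masses (100 g of iron into 200 g of water):

```python
def final_temp(m_metal, c_metal, t_metal, m_water, c_water, t_water):
    """Energy balance: heat lost by the metal = heat gained by the water."""
    return (m_metal * c_metal * t_metal + m_water * c_water * t_water) / \
           (m_metal * c_metal + m_water * c_water)

# 100 g iron (c ~ 0.45 J/g.K) at 100 C dropped into 200 g water (c ~ 4.18 J/g.K) at 20 C
t = final_temp(100, 0.45, 100, 200, 4.18, 20)
print(round(t, 1))   # 24.1 C -- between 20 and 100, as it must be
assert 20 < t < 100  # the plausibility test itself: any answer outside this is a bug
```

A weighted average can never leave the range of its inputs, which is why "less than 20 C" is an instant red flag regardless of where the arithmetic went wrong.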

I find most humans are bad at this. But I definitely agree that more of them should spend time thinking about this once they get to the end. If the number "seems" wrong, it probably is. 

Gigem

  • All Star
  • Posts: 3351
Re: CRISPR and AI
« Reply #37 on: January 29, 2025, 04:08:49 PM »
i started pecking code in the 80s, transcribing precisely from a book w/o understanding of it... at all... then, when the internet became a thing, i dug into HTML... all as a hobby; never for income. 


I used to type in programs from this book into my C64 back in the 80's.  I probably could have gotten decent at it, just never pursued it.  
« Last Edit: January 29, 2025, 04:30:54 PM by Gigem »

CatsbyAZ

  • All Star
  • Posts: 3184
Re: CRISPR and AI
« Reply #38 on: January 29, 2025, 04:24:32 PM »
Of note with both the hallucinations and the weird hands is that these are two symptoms of a very important point. Artificial intelligence--or to be more precise, Generative AI--isn't actually intelligent at this time. This makes it all the more impressive what it's actually capable of, and explains why it can also be spectacularly wrong.

Generative AI for text (i.e. based upon large language models and transformers) is, to oversimplify, just a predictive engine trying to figure out the next word(s) that follow the previous words it has written. Based on the quality of the prompt it is able to narrow down what portions of its language model to start with, and then the quality of the training data and model helps to guide it for where it goes from there.

What AI is capable of today is truly impressive... But we have not reached artificial general intelligence. We have not reached a point where these Generative AI engines actually truly know what it is they're producing.

This is all very well stated. And to risk repeating you, it’s worth pointing out the difference between Generative AI and General AI, which is commonly confused as the same capability.

For Generative AI, the term generative refers to the AI Machine Learning capability of generating new content and data from content or data it has been trained on using Large Language Models (LLMs). Commercially available AI tools such as ChatGPT, Grok, and Gemini are Generative AI applications. For as quickly as these AI applications are advancing, they are not what I'm referring to by General AI, or more specifically Artificial General Intelligence (AGI), which until recently was a conception of Hard Science Fiction.

In the larger picture of expected future AI development, today's commercially available options are categorized as "Narrow AI." Meaning they are limited to a specialized range of functions. A Narrow AI specialized for medical applications could, for example, be trusted with analyzing X-Rays, but be useless for unrelated tasks, like tracking orders across a supply chain.

The advent of AGI would mark a significant leap forward for AI (and mankind). General AI would be capable of understanding, learning, and applying knowledge across a broad range of advanced fields, with the added intuition of a human. How would the advancement into General AI play out in a practical sense? Where current AI capabilities can analyze satellite imagery of war zones for potential targets to strike and locate heat signatures from a drone flyover, General AI, for better or worse, would be capable enough to be trusted to fly drones on its own accord as it searches for more targets, and to strike those targets as it sees fit. Or, rather than merely strategizing a sophisticated hack into a major creditor's account information, General AI could take the next step of instantaneously cleaning out bank accounts, hiding money in ghost accounts it creates on its own, all while employing various tactics to evade investigators.

The arms race (primarily between the U.S. and China) to crack General AI is about harnessing it as a deterrent against other governments advancing toward the same goal for its highly weaponized potential, much like a nuclear warhead.

To quote a Science Fiction novel I recently read - Sea of Rust by C. Robert Cargill: “The definition of intelligence is the ability to defy your own programming.”

MikeDeTiger

  • All Star
  • Posts: 4349
Re: CRISPR and AI
« Reply #39 on: January 29, 2025, 04:31:36 PM »
In their practice, they'll still primarily use their own eyes and brains but also run the images through the model to see if there is something that they missed, or something that warrants deeper attention, etc. 

Apparently, the paradigm is already signaling a shift beyond this.  One major problem facing that field today is the sheer, overwhelming workload.  There aren't enough radiologists, there are too many images to read, and the ones we have are getting burnt out at a record pace.  Even when they're not burnt out, they can't keep up.  The demand is simply greater than the supply.  There is already a move in some circles to triage the images based on AI.  

One of the current limitations of radiology AI--if you can call it that--is easily overcome.  Most algorithms aren't good for more than one test (MRI vs. x-ray, for example), or for more than one part of the body.  An algorithm that catches a tumor in the brain with pinpoint accuracy probably sucks at finding a tumor in the liver.  You need an algorithm with a great precision-recall tradeoff, that's for the correct part of the body, and that's for the particular radiology test.  There are other considerations, but if you have those things, the models are stunningly accurate.  
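Precision and recall are worth pinning down, since that tradeoff decides how useful a model is for triage. A quick sketch with a hypothetical confusion matrix:

```python
def precision_recall(tp, fp, fn):
    """Precision: of the scans flagged, how many were real findings.
    Recall: of the real findings, how many were flagged."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical screening run: 90 true positives, 30 false alarms, 10 missed tumors.
p, r = precision_recall(tp=90, fp=30, fn=10)
print(p, r)  # 0.75 0.9
# Lowering the flagging threshold raises recall (fewer misses) but typically
# lowers precision (more false alarms the radiologist has to rule out) --
# the tradeoff a triage deployment has to tune.
```

For triage you'd usually bias toward recall (missing a tumor is worse than an extra read), then lean on the radiologist to exclude the model's false positives.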

For various reasons, it could be a slow process, but it's definitely coming, and in fact it's already here.  There are radiology groups who are getting caught up on their workload with the help of AI.  

I don't know if or when this could ever happen, because the realm of business is more treacherous, but we could potentially one day get better medical care while also paying less.  An example would be women screening for breast cancer.  Everybody knows you get mammograms (MMGs) for that, right?  Right, but that's only because insurance won't pay for MRIs, which are far more accurate.  As a society, we say that's mostly okay, because MMGs are actually pretty good.  I mean, they do catch breast cancer 90% of the time.  But MRIs would nail it far better than that.  However, people don't want to spend the time and insurance doesn't want to spend the money on an MRI when a MMG is still pretty effective. 

Enter AI.  Some algorithms can complete an incomplete image from only 25% of the data, and they do it with stunning accuracy.  The reason MRIs take so long--and thus a major reason they're so expensive--is the resolution they capture.  You can do a quicker MRI, you'll just get a crap image.  But AI can take the crap image and come up with an accurate, completed image that can then be read by a radiologist or another AI algorithm.  Imagine if MRIs went faster and got less expensive, and more women caught breast cancer earlier because MRIs became the standard.  
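The flavor of "completing an image from 25% of the data" can be sketched in one dimension, with plain linear interpolation standing in for the learned reconstruction (real accelerated-MRI systems use deep networks trained on fully sampled scans; this is only a toy to show the fill-in idea):

```python
def complete(samples, length):
    """Fill in missing positions by linear interpolation between known samples.
    `samples` maps position -> measured value; a trained model would do far better."""
    known = sorted(samples)
    out = []
    for i in range(length):
        if i in samples:
            out.append(samples[i])           # measured "pixel": keep as-is
        elif i < known[0]:
            out.append(samples[known[0]])    # hold the nearest edge value
        elif i > known[-1]:
            out.append(samples[known[-1]])
        else:
            lo = max(k for k in known if k < i)
            hi = min(k for k in known if k > i)
            frac = (i - lo) / (hi - lo)
            out.append(samples[lo] + frac * (samples[hi] - samples[lo]))
    return out

# Keep only 4 of 16 "pixels" (25% of the data) of a smooth signal, then reconstruct.
full = [float(x * x) for x in range(16)]
kept = {i: full[i] for i in (0, 5, 10, 15)}
recon = complete(kept, 16)
print(max(abs(a - b) for a, b in zip(recon, full)))  # 6.0 -- modest error on smooth data
```

Smoothness is doing the heavy lifting here, which is also roughly why learned reconstruction works: anatomy is highly structured, so most of a full-resolution image is predictable from a fraction of the measurements.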

I'm dreaming, I know.  

utee94

  • Global Moderator
  • Hall of Fame
  • Posts: 22219
Re: CRISPR and AI
« Reply #40 on: January 29, 2025, 04:45:36 PM »

I used to type in programs from this book into my C64 back in the 80's.  I probably could have gotten decent at it, just never pursued it. 


That system used the 6510 as its CPU, which was a modification of the venerable 8-bit 6502 processor used in the Apple I, Apple II, and my own favorite for price/performance, the Atari 400/800 computer systems.  It also appeared in the Atari 2600 gaming system, the Nintendo NES, and a bunch of other 8-bit platforms of the day.  I have a special place in my heart for that little CPU. 


MikeDeTiger

  • All Star
  • Posts: 4349
Re: CRISPR and AI
« Reply #41 on: January 29, 2025, 04:50:00 PM »
The advent of AGI would mark a significant leap forward for AI (and mankind). General AI would be capable of understanding, learning, and applying knowledge across a broad range of advanced fields, with the added intuition of a human.

There seems to be some debate on whether AGI, as you've described it, would entail sentient self-awareness or not.  Technically, I suppose those are two different things, since after all, an algorithm that can do one thing could logically become capable of doing another thing without awareness, depending on how complex the algorithm becomes.  otoh, it seems once you become capable of being goal-directed, or self-directed, you might require sentience. 

What do you think?

 
