No link or anything, very believable.
Honestly this is a pretty good use case for LLMs and I’ve seen them used very successfully to detect infection in samples for various neglected tropical diseases. This literally is what AI should be used for.
Sure, agreed. Too bad 99% of its use is still stealing from society to make a few billionaires richer.
You don’t understand how they work and that’s fine; you’re upset based on the paranoid guesswork that’s filled in the lack of understanding, and that’s sad.
No one is stealing from society; ‘society’ isn’t being deprived of anything when AI looks at an image. The research is pretty open. Humanity is benefitting from it in the same way it benefitted from Tesla, Westinghouse, and Edison’s electrical research.
And yes, if you’re about to tell me Edison did nothing but steal, then this is another bit of tech history you’ve not paid attention to beyond memes.
The big companies you hate like Meta or Nvidia are producing papers that explain methods; you can follow along at home and make your own model, though with those examples you don’t need to, because they’ve released models on open licenses. It seems likely you don’t understand how any of this works or what’s happening, because Zuck is doing significantly more to help society than you are. Ironic, huh?
And before you tell me about Zuck doing genocide or other childish arguments: we’re on Lemmy, which was purposefully designed to remove power from a top-down authority, so if an instance pushed for genocide we would have zero power to stop it. The report you’re no doubt going to allude to says that Facebook is culpable because it did not have adequate systems in place to control locally run groups…
I could make good arguments against Zuck; I don’t think anyone should be able to be that rich. But it’s funny to me when a group freely shares PyTorch and other key tools used to help do things like detect cancer cheaply and efficiently, help impoverished communities access education and health resources in their local language, help blind people have independence, etc., etc., all the many positive uses for AI, and you shit on it all simply because you’re too lazy and selfish to actually do anything materially constructive to help anyone or anything that doesn’t directly benefit you.
I also agree.
However, these medical LLMs have been around for a long time; they don’t use horrific amounts of energy, nor do they make billionaires richer. They are the sorts of things that a hobbyist can put together provided they have enough training data. Further to that, they can run offline, allowing doctors to perform tests in the field, as I can attest to witnessing first hand with soil-transmitted helminth surveys in Mozambique. That means that instead of checking thousands of stool samples manually, those same people can be paid to collect more samples or to distribute the drugs that cure the disease in affected populations.
Worth noting the type of comment this is in response to: it argues that home users should be legally forbidden from accessing training data, and wants a world where only the richest companies can afford to license training data (which will be owned by their other rich friends, thanks to it being posted on their sites).
Supporting heavy copyright extensions is the dumbest position anyone could have.
I highly doubt the medical data to do these are available to a hobbyist, or that someone like that would have the know-how to train the AI.
But yea, rare non-bad use of AI. Now we just need to eat the rich to make it a good thing for humanity. Let’s get to that, I say!
Actually the datasets for this MDA stuff are widely available.
LLMs do language, not images.
These models aren’t LLM-based.
You could participate or complain.
https://news.mit.edu/2019/using-ai-predict-breast-cancer-and-personalize-care-0507
Complain to who? Some random Twitter account? Why would I do that?
No, here. You could ask for a link, or Google it.
I am commenting on this tweet being trash, because it doesn’t have a link in it.
Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. 5 years ago I created a model that was able to spot certain types of ships based only on satellite imagery, which were not easily detectable by eye, to say nothing of the fact that one human cannot scan 15k images in one hour. Similar use case with medical imagery: seeing the things that are not yet detectable by human eyes.
Yeah there is. A bloke I know did exactly that with brain scans for his masters.
Would you mind asking your friend, so you can provide the source?
https://adni.loni.usc.edu/ here ya go
Edit: European DTI Study on Dementia too, he said it’s easier to get data from there
Lovely, thank you very much, kind stranger!
5 years ago I created a model that was able to spot certain types of ships based only on satellite imagery, which were not easily detectable by eye, to say nothing of the fact that one human cannot scan 15k images in one hour.
what is your intended use case? are you trying to help government agencies perfect spying? sounds very cringe ngl
My intended use case is to find possibilities for how ML can support people with certain tasks. Science is not political; I cannot control what my technology is abused for. This is no reason to stop science entirely; there will always be someone abusing something for their own gain.
But thanks for assuming without asking first what the context was.
My intended use case is to find possibilities for how ML can support people with certain tasks.
weaselly bullshit. how exactly do you intend for people to use technology that identifies ships via satellite? what is your goal? because the only use cases I can see for this are negative
This is no reason to stop science entirely
if the only thing your tech can be used for is bad then you’re bad for innovating that tech
Ever thought about identifying ships full of refugees and sending help before their ships break apart and 50 people drown?
Of course you have not. Your hatred makes you blind. Closed minds were never able to see why science is important. Now enjoy spreading hate somewhere else.
removed by mod
Ever thought about identifying ships full of refugees and sending help before their ships break apart and 50 people drown?
No, I didn’t think about that. If you did, why exactly were you so hostile to me asking what use you thought this might serve?
I don’t think my reply was hostile; I just criticized your behavior of assuming things before you knew the whole truth. I kept everything neutral and didn’t have the urge to have a discussion with someone already on edge. I hope you understand, and also learn that not everything in this world is entirely evil. Please stay curious; don’t assume.
I just criticized your behavior of assuming things before you knew the whole truth.
I didn’t assume anything. I asked you what your intended use case was and you responded with vague platitudes, sarcasm, and then once I pressed further, insults. Try re-reading your comments from a more objective standpoint and you’ll find neutrality nowhere within them.
removed by mod
no u
Ok
find possibilities for how ML can support people with certain tasks
Marxism-Leninism?
Oh, Machine Learning.
Science is not political
In an ideal world, maybe, but that is not our world. In reality, science is always political. It is unavoidable.
Typical hexbear reply lol
Unfortunately, you are right, though. Science can be political. My science is not. I like my bubble.
that’s just going through life with blinders on
Typical hexbear reply
Unfortunately, you are right
Yes, typically hexbear replies are right.
It’s not unfortunate though, it’s simply a matter of having an understanding of the world and a willingness to accept it and engage with it. It’s too bad that you seem not to want that understanding or that you lack the willingness to accept it.
My science is not. I like my bubble.
How can you possibly square that first short sentence with the second? Are you really that willfully hypocritical? Yes, “your” science is political. No science escapes it, and the people who do science thinking they and their work are unaffected by their ideology are the ones most affected by it. No wonder you like your bubble: from within it, you don’t have to concern yourself with any of the real world or even the smallest sliver of self-reflection. But all it is is a happy, self-reinforcing delusion. You pretend to be someone who appreciates science, but if you truly did, you would be doing everything you can to recognize your unavoidable biases rather than denying them while simultaneously wallowing in them, which is what you are openly admitting to doing whether you realize it or not.
removed by mod
“Removed by mod” suck my nuts you fascist fucks lol
These shitlib whiners don’t care and my comments have been removed for the horror of incivility towards dr von braun
deleted by creator
I knew about kaggle, but not about NIH. Thanks for the hint!
Btw, my dentist used AI to identify potential problems in a radiograph. The result was pretty impressive. Have to get a filling tho.
Much easier to assume the training data isn’t garbage when the AI expert system only has a narrow scope, right?
Sure. And the expert still interprets it. But the result was exact.
Yeah, machine learning actually has a ton of very useful applications. It’s just that, predictably, the dumbest and most toxic manifestations of it are the ones hyped up in a capitalist system.
Neural networks are great for pattern recognition, unfortunately all the hype is in pattern generation and we end up with mammograms in anime style
Doctor: There seems to be something wrong with the image.
Technician: What’s the problem?
Doctor: The patient only has two breasts, but the image that came back from the AI machine shows them having six breasts and much MUCH larger breasts than the patient actually has.
Technician: sighs
Why does the paperwork suddenly claim the patient is a 600-year-old shape-shifting dragon?
I can do that too, but my rate of success is very low
And if we weren’t a big, broken mess of late stage capitalist hellscape, you or someone you know could have actually benefited from this.
Yea none of us are going to see the benefits. Tired of seeing articles of scientific advancement that I know will never trickle down to us peasants.
Our clinics are already using ai to clean up MRI images for easier and higher quality reads. We use ai on our cath lab table to provide a less noisy image at a much lower rad dose.
It never makes mistakes that affect diagnosis?
It’s not diagnosing, which is good imho. It’s just being used to remove noise and artifacts from the images on the scan. This means the MRI is clearer for the reading physician and ordering surgeon in the case of the MRI and that the cardiologist can use less radiation during the procedure yet get the same quality image in the lab.
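For anyone curious what “AI denoising” means mechanically: this is not their vendor’s actual algorithm (no idea what’s inside it), just a minimal sketch of the general idea, a small network trained on paired noisy/clean images that learns to subtract noise, nothing diagnostic.

```python
# Minimal residual-denoising sketch. The data here is synthetic; a real
# setup would pair low-dose (noisy) scans with full-dose (clean) ones.
import torch
from torch import nn

class DenoiseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)  # predict the noise, subtract it

model = DenoiseCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 64, 64)               # stand-in for full-dose images
noisy = clean + 0.1 * torch.randn_like(clean)  # simulated low-dose noise

for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    opt.step()
```

The output image is only ever a cleaned-up version of the input; there's no classification head anywhere, which is why this kind of use doesn't carry the diagnostic risk discussed below.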
I’m still wary of using it to diagnose in basically any scenario because of the salience and danger that both false negatives and false positives threaten.
… they said, typing on a tiny silicon rectangle with access to the whole of humanity’s knowledge and that fits in their pocket…
I’m involved in multiple projects where stuff like this will be used in very accessible manners, hopefully in 2-3 years, so don’t get too pessimistic.
Why do I still have to work my boring job while AI gets to create art and look at boobs?
Because life is suffering and machines dream of electric sheep.
I dream of boobs.
I’ve seen things you people wouldn’t believe.
Can’t pigeons do the same thing?
This is similar to what I did for my masters, except it was lung cancer.
Stuff like this is actually relatively easy to do, but the regulations you need to conform to and the testing you have to do first are extremely stringent. We had something that worked for like 95% of cases within a couple of months, but it wasn’t until almost two years later that they got to do their first actual trial.
pretty sure iterate is the wrong word choice there
Dude needs to use AI to fix his fucking grammar.
I suppose they just dropped the “re” off of “reiterate” since they’re saying it for the first time.
They probably meant reiterate
I think it’s a joke, like to imply they want to not just reiterate, but rerererereiterate this information, both because it’s good news and also in light of all the sucky ways AI is being used instead. Like at first they typed, “I just want to reiterate…” but decided that wasn’t nearly enough.
Common case of programmer brain
That’s not the only issue with the English-esque writing.
100% true, just the first thing that stuck out at me
The AI genie is out of the bottle and — as much as we complain — it isn’t going away; we need thoughtful legislation. AI is going to take my job? Fine, I guess? That sounds good, really. Can I have a guaranteed income to live on, because I still need to live? Can we tax the rich?
I really wouldn’t call this AI. It is more or less an image identification system that relies on machine learning.
That was pretty much the definition of AI before LLMs came along.
And much before that it was rule-based machine learning, which was basically databases and fancy inference algorithms. So I guess “AI” has always meant “the most advanced computer science thing which looks kind of intelligent”. It’s only now that it looks intelligent enough to fool laypeople into thinking there actually is intelligence there.
https://youtube.com/shorts/xIMlJUwB1m8?si=zH6eF5xZ5Xoz_zsz
Detecting is not enough to be useful.
The test is 90% accurate; that’s still pretty useful, especially if you are simply putting people into a high-risk group that needs to be more closely monitored.
“90% accurate” is a non-statement. It’s like you haven’t even watched the video you’re responding to. Also, where the hell did you pull that number from?
What matters is how specific and how sensitive it is. And if Mirai in https://www.science.org/doi/10.1126/scitranslmed.aba4373 is the same model the tweet mentions, then neither its specificity nor its sensitivity reaches 90%. And considering that the image in the tweet is traceable to a publication in the same year (https://news.mit.edu/2021/robust-artificial-intelligence-tools-predict-future-cancer-0128), I’m fairly sure it’s the same Mirai.
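To make that concrete, here’s toy confusion-matrix arithmetic (numbers entirely made up, not from the paper) showing how a test can be “90% accurate” while missing half the actual cancers:

```python
# Hypothetical: 1000 patients screened, 10 actually have cancer.
def metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)   # fraction of real cancers caught
    specificity = tn / (tn + fp)   # fraction of healthy correctly cleared
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, accuracy

# Misses half the cancers, yet still scores "90% accurate" overall:
print(metrics(tp=5, fn=5, fp=95, tn=895))   # (0.5, 0.904..., 0.9)
```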
Also, where the hell did you pull that number from?
Well, you can just do the math yourself; it’s pretty straightforward.
However, more to the point, it’s taken right from around 38 seconds into the video. Kind of funny to be accused of “not watching the video” by someone who is implying the number was pulled from nowhere, when it’s right in the video.
I certainly don’t think this closes the book on anything, but I’m responding to your claim that it’s not useful. If this is a cheap and easy test, it’s a great screening tool putting people into groups of low risk/high risk for which further, maybe more expensive/specific/sensitive, tests can be done. Especially if it can do this early.
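To ballpark why that low-risk/high-risk framing matters (the numbers below are assumptions for illustration, not the paper’s or the video’s): at screening prevalence, most positives from any decent test are false alarms, which is fine for triage and useless as a diagnosis.

```python
# Hypothetical screening arithmetic; prevalence and test characteristics
# are assumed for illustration, not taken from the Mirai paper.
prevalence = 0.005                 # assume 0.5% of the screened population
sensitivity = 0.80
specificity = 0.90

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)
ppv = true_pos / (true_pos + false_pos)

print(f"PPV = {ppv:.1%}")  # ~3.9%: most flagged patients don't have cancer,
                           # which is fine for a watch-closely group
```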
Wanna bet it’s not “AI” ?
Machine learning is AI as well. It’s not really what we picture when we think of AI, but it is nonetheless.
It’s probably more “AI” than the LLMs we’ve been plagued with. This sounds more like an application of machine learning, which is a hell of a lot more promising.
AI and machine learning are very similar (if not identical) things, just one has been turned into a marketing hype word a whole lot more than the other.
Machine learning is one of the many things that is referred to by “AI”, yes.
My thought is the term “AI” has been overused to uselessness, from the nested if statements that decide how video game enemies move to various kinds of machine learning to large language models.
So I’m personally going to avoid the term.
AI == computer thingy that looks kinda “smart” to people who don’t understand it. It’s like rectangles and squares: you should use the more precise word (CNN, LLM, Stable Diffusion) when applicable, just like with rectangles and squares.
This seems exactly like what I would have referred to as AI before the pandemic. Specifically Deep Learning image processing. In terms of something you can buy off the shelf this is theoretically something the Cognex Vidi Red Tool could be used for. My experience with it is in packaging, but the base concept is the same.
Training a model requires loading images into the software and having a human mark them before having a very powerful CUDA GPU process all of that. Once the model has been trained it can usually be run on a fairly modest PC in comparison.
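For anyone who wants to see the shape of that workflow in code, here’s a minimal sketch using plain PyTorch instead of Cognex’s tooling; the directory name and hyperparameters are made up:

```python
# Sketch: train on human-labelled images (GPU-heavy), then save a model
# that can run inference on a much more modest PC.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"  # training wants the GPU

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("labelled_images/", transform=tfm)  # hypothetical path,
loader = DataLoader(data, batch_size=32, shuffle=True)          # one folder per label

model = models.resnet18(weights="IMAGENET1K_V1")            # start from pretrained
model.fc = nn.Linear(model.fc.in_features, len(data.classes))
model = model.to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for _ in range(5):                       # the expensive, GPU-bound step
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

torch.save(model.state_dict(), "inspector.pt")  # inference later is cheap
```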
deleted by creator
That’s the nice thing about machine learning: it sees nothing but whatever correlates. That’s why data science is such a complex topic; you don’t spot errors this easily. Testing a model is still very underrated, and usually there is no time to test a model properly.
It’s really difficult to clean that data. Another case was when they kept the markings on the training data, and the result was that the scans of people who had cancer carried a doctor’s signature, so the AI could always tell the cancer images from the non-cancer ones just by the presence or absence of a signature. However, these people are also getting smarter about picking their training data, so it’s not impossible for this to work properly at some point.
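A toy version of that failure mode, with synthetic arrays standing in for scans: plant a “signature” only in the positive training images and the model learns the signature, not the disease.

```python
# Purely illustrative synthetic data, not a real dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, marked):
    imgs = rng.normal(size=(n, 8, 8))
    if marked:
        imgs[:, 0, 0] += 5.0   # the "doctor's signature" in one corner
    return imgs.reshape(n, -1)

X_train = np.vstack([make_images(200, marked=True),    # "cancer" scans
                     make_images(200, marked=False)])  # "healthy" scans
y_train = np.array([1] * 200 + [0] * 200)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# On similarly-marked data the model looks perfect...
print(clf.score(X_train, y_train))                              # ~1.0
# ...but on unmarked "cancer" scans it calls everything healthy:
print(clf.score(make_images(200, marked=False), np.ones(200)))  # ~0.0
```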
Citation please?
Using AI for anomaly detection is nothing new though. Haven’t read any article about this specific ‘discovery’ but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.
That’s why I hate the term AI. Say it is a predictive llm or a pattern recognition model.
The correct term is “Computational Statistics”
Stop calling it that, you’re scaring the venture capital
Say it is a predictive llm
According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network that isn’t specifically designed to work on text. So this term would be inaccurate.
or a pattern recognition model.
Much better term IMO, especially since it uses a convolutional network. But since the article is a news publication, not a serious academic paper, the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is) and we wouldn’t be here talking about it.
That performance curve seems terrible for any practical use.
Good catch!
Yeah, that’s an unacceptably low ROC curve for a medical use case.
Well, this is very much an application of AI… Having more examples of recent AI development that aren’t ‘chatgpt’(/transformers-based) is probably a good thing.
OP is not saying this isn’t using the techniques associated with the term AI. They’re saying that the term AI is misleading, broad, and generally not desirable in a technical publication.
OP is not saying this isn’t using the techniques associated with the term AI.
Correct, also not what I was replying about. I said that using AI in the headline here is very much correct. It is after all a paper using AI to detect stuff.
It’s a good term; it refers to lots of things. There are many terms like that.
The problem is that it refers to so many and constantly changing things that it doesn’t refer to anything specific in the end. You can replace the word “AI” in any sentence with the word “magic” and it basically says the same thing…
it refers to lots of things
So it’s a bad term.
The word “program” refers to even more things, and no one says it’s a bad word.
It’s literally the name of the field of study. Chances are this uses the same thing as LLMs: a neural network, one of the oldest kinds of AI around.
It refers to anything that simulates intelligence. They are using the correct word. People just misunderstand it.
If people consistently misunderstand it, it’s a bad term for communicating the concept.
It’s the correct term though.
It’s like when people get confused about what a scientific theory is. We still call it the theory of gravity.
Haven’t read any article about this specific ‘discovery’ but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.
From the conclusion of the actual paper:
Deep learning models that use full-field mammograms yield substantially improved risk discrimination compared with the Tyrer-Cuzick (version 8) model.
If I read this paper correctly, the novelty is in the model, which is a deep learning model that works on mammogram images + traditional risk factors.
I skimmed the paper. As you said, they made a ML model that takes images and traditional risk factors (TCv8).
I would love to see comparison against risk factors + human image evaluation.
Nevertheless, this is the AI that will really help humanity.
For the image-only DL model, we implemented a deep convolutional neural network (ResNet18 [13]) with PyTorch (version 0.31; pytorch.org). Given a 1664 × 2048 pixel view of a breast, the DL model was trained to predict whether or not that breast would develop breast cancer within 5 years.
The only “innovation” here is feeding full-view mammograms to a ResNet18 (a 2016 model). The traditional risk factors regression is nothing special (barely machine learning). They don’t go in depth about how they combine the two for the hybrid model, so it’s probably safe to assume it is something simple (merely combining the results, so nothing special in the training step).
Edit: I stand corrected; a commenter below pointed out the appendix, and the regression does in fact come into play in the training step.
As a different commenter mentioned, the data collection is largely the interesting part here.
I’ll admit I was wrong about my first guess as to the network topology, though; I was thinking they used something like autoencoders (but those are mostly used in cases where examples of bad samples are rare).
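For reference, here’s roughly what the image-only branch they quote looks like in code. The single-channel stem is my assumption (the paper may just replicate the grayscale channel into three):

```python
# Sketch of the quoted setup: torchvision's ResNet18 pointed at one
# full-field mammogram view, predicting cancer-within-5-years yes/no.
import torch
from torch import nn
from torchvision import models

model = models.resnet18(weights=None)
# Mammograms are single-channel; ResNet18 expects 3, so adapt the stem
# (one plausible choice; the paper doesn't spell this out).
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)  # cancer within 5 years: yes/no

x = torch.randn(1, 1, 1664, 2048)  # one full-field view, as in the quote
logits = model(x)                  # ResNet's global pooling handles the size
print(logits.shape)                # torch.Size([1, 2])
```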
ResNet18 is ancient and tiny… I don’t understand why they didn’t go with a deeper network. ResNet50 is usually the smallest I’ll use.
They don’t go in depth about how they combine the two for the hybrid model
Actually they did; it’s in Appendix E (PDF warning). A GitHub repo would have been nice, but I think there would be enough info to replicate this if we had the data.
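Here’s a hedged sketch of one plausible shape for that fusion; it’s my guess at the idea, not a reproduction of Appendix E:

```python
# Hypothetical fusion layer: image features and traditional risk factors
# feed one joint head, so both branches participate in training.
import torch
from torch import nn

class HybridRisk(nn.Module):
    def __init__(self, image_backbone: nn.Module, n_risk_factors: int):
        super().__init__()
        self.backbone = image_backbone    # e.g. the ResNet18 above,
        self.backbone.fc = nn.Identity()  # used as a 512-dim feature extractor
        self.head = nn.Linear(512 + n_risk_factors, 2)

    def forward(self, image, risk_factors):
        feats = self.backbone(image)                        # (B, 512)
        return self.head(torch.cat([feats, risk_factors], dim=1))
```

Since both branches feed a single loss, the risk-factor regression takes part in the training step, which lines up with the correction in the edit above.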
Yeah it’s not the most interesting paper in the world. But it’s still a cool use IMO even if it might not be novel enough to deserve a news article.