• Flyberius [comrade/them]
      7 points · 9 months ago

      Honestly, this is a pretty good use case for LLMs, and I’ve seen them used very successfully to detect infection in samples for various neglected tropical diseases. This is literally what AI should be used for.

      • NuraShiny [any]
        6 points · 9 months ago

        Sure, agreed. Too bad 99% of its use is still stealing from society to make a few billionaires richer.

        • @[email protected]
          2 points · 9 months ago

          You don’t understand how they work, and that’s fine; you’re upset based on the paranoid guesswork that’s filled in for that lack of understanding, and that’s sad.

          No one is stealing from society; ‘society’ isn’t being deprived of anything when AI looks at an image. The research is pretty open, and humanity is benefiting from it in the same way Tesla, Westinghouse and Edison benefited the history of electrical research.

          And yes, if you’re about to tell me Edison did nothing but steal, then this is another bit of tech history you’ve not paid attention to beyond memes.

          The big companies you hate, like Meta or Nvidia, are producing papers that explain methods; you can follow along at home and make your own model - though with those examples you don’t need to, because they’ve released models under open licenses. It seems likely you don’t understand how this all works or what’s happening, because Zuck is doing significantly more to help society than you are - ironic, huh?

          And before you tell me about Zuck doing genocide or other childish arguments: we’re on Lemmy, which was purposefully designed to remove power from a top-down authority, so if an instance pushed for genocide we would have zero power to stop it - the report you’re no doubt going to allude to says that Facebook is culpable because it did not have adequate systems in place to control locally run groups…

          I could make good arguments against Zuck - I don’t think anyone should be able to be that rich - but it’s funny to me when a group freely shares PyTorch and other key tools used to do things like detect cancer cheaply and efficiently, help impoverished communities access education and health resources in their local language, help blind people have independence, etc., etc. - all the many positive uses for AI - and you shit on it all simply because you’re too lazy and selfish to actually do anything materially constructive to help anyone or anything that doesn’t directly benefit you.

        • Flyberius [comrade/them]
          6 points · 9 months ago

          I also agree.

          However, these medical LLMs have been around for a long time; they don’t use horrific amounts of energy, nor do they make billionaires richer. They are the sorts of things that a hobbyist can put together, provided they have enough training data. Further to that, they can run offline, allowing doctors to perform tests in the field, as I can attest to witnessing first-hand with soil-transmitted helminth surveys in Mozambique. That means that instead of checking thousands of stool samples manually, those same people can be paid to collect more samples or distribute the drugs to cure the disease in affected populations.

          • @[email protected]
            3 points · 9 months ago

            Worth noting the kind of comment this is in response to: it argues that home users should be legally forbidden from accessing training data, and wants a world where only the richest companies can afford to license training data (which will be owned by their other rich friends, thanks to it being posted on their sites).

            Supporting heavy copyright extensions is the dumbest position anyone could have.

          • NuraShiny [any]
            3 points · 9 months ago

            I highly doubt the medical data needed to do this is available to a hobbyist, or that someone like that would have the know-how to train the AI.

            But yeah, a rare non-bad use of AI. Now we just need to eat the rich to make it a good for humanity. Let’s get to that, I say!

  • @[email protected]
    18 points · 9 months ago

    Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. Five years ago I created a model that was able to spot certain types of ships based only on satellite imagery - ships not easily detectable by eye, never mind that one human cannot scan 15k images in one hour. Similar use case with medical imagery: seeing the things that are not yet detectable by human eyes.

    • booty [he/him]
      7 points · 9 months ago

      Five years ago I created a model that was able to spot certain types of ships based only on satellite imagery - ships not easily detectable by eye, never mind that one human cannot scan 15k images in one hour.

      what is your intended use case? are you trying to help government agencies perfect spying? sounds very cringe ngl

      • @[email protected]
        9 points · 9 months ago

        My intended use case is to find possibilities for how ML can support people with certain tasks. Science is not political; what my technology is abused for, I cannot control. This is no reason to stop science entirely - there will always be someone abusing something for their own gain.

        But thanks for assuming without asking first what the context was.

        • booty [he/him]
          13 points · 9 months ago

          My intended use case is to find possibilities for how ML can support people with certain tasks.

          weaselly bullshit. how exactly do you intend for people to use technology that identifies ships via satellite? what is your goal? because the only use cases I can see for this are negative

          This is no reason to stop science entirely

          if the only thing your tech can be used for is bad then you’re bad for innovating that tech

          • @[email protected]
            3 points · 9 months ago

            Ever thought about identifying ships full of refugees and sending help before their ships break apart and 50 people drown?

            Of course you have not. Your hatred makes you blind. Closed minds were never able to see why science is important. Now enjoy spreading hate somewhere else.

            • booty [he/him]
              9 points · 9 months ago

              Ever thought about identifying ships full of refugees and sending help before their ships break apart and 50 people drown?

              No, I didn’t think about that. If you did, why exactly were you so hostile to me asking what use you thought this might serve?

              • @[email protected]
                2 points · 9 months ago

                I don’t think my reply was hostile; I just criticized your behavior of assuming things before you know the whole truth. I kept everything neutral and didn’t feel the urge to have a discussion with someone already on edge. I hope you understand, and also learn that not everything in this world is entirely evil. Please stay curious - don’t assume.

                • booty [he/him]
                  6 points · 9 months ago

                  I just criticized your behavior of assuming things before you know the whole truth.

                  I didn’t assume anything. I asked you what your intended use case was and you responded with vague platitudes, sarcasm, and then once I pressed further, insults. Try re-reading your comments from a more objective standpoint and you’ll find neutrality nowhere within them.

        • MaeBorowski [she/her]
          17 points · 9 months ago

          find possibilities for how ML can support people with certain tasks

          Marxism-Leninism? anakin-padme-2

          Oh, Machine Learning. sicko-wistful

          Science is not political

          In an ideal world, maybe, but that is not our world. In reality, science is always, always political. It is unavoidable.

          • @[email protected]
            9 points · edited · 9 months ago

            Typical hexbear reply lol

            Unfortunately, you are right, though. Science can be political. My science is not. I like my bubble.

            • MaeBorowski [she/her]
              5 points · 9 months ago

              Typical hexbear reply

              Unfortunately, you are right

              Yes, typically hexbear replies are right.

              It’s not unfortunate though, it’s simply a matter of having an understanding of the world and a willingness to accept it and engage with it. It’s too bad that you seem not to want that understanding or that you lack the willingness to accept it.

              My science is not. I like my bubble.

              How can you possibly square that first short sentence with the second? Are you really that willfully hypocritical? Yes, “your” science is political. No science escapes it, and the people who do science thinking themselves and their work unaffected by their ideology are the most affected by ideology. No wonder you like your bubble - from within it, you don’t have to concern yourself with any of the real world or even the smallest sliver of self-reflection. But all it is is a happy, self-reinforcing delusion. You pretend to be someone who appreciates science, but if you truly did, you would be doing everything you can to recognize your unavoidable biases rather than denying them while simultaneously wallowing in them - which is what you are openly admitting to doing, whether you realize it or not.

  • @[email protected]
    13 points · edited · 9 months ago

    Btw, my dentist used AI to identify potential problems in a radiograph. The result was pretty impressive. Have to get a filling tho.

    • D61 [any]
      5 points · 9 months ago

      Much easier to assume the training data isn’t garbage when the AI expert system only has a narrow scope, right?

      • somename [she/her]
        3 points · 9 months ago

        Yeah, machine learning actually has a ton of very useful applications. It’s just that, predictably, the dumbest and most toxic manifestations of it are the ones hyped up in a capitalist system.

  • mayo_cider [he/him]
    13 points · edited · 9 months ago

    Neural networks are great for pattern recognition; unfortunately, all the hype is in pattern generation, and we end up with mammograms in anime style.

    • D61 [any]
      20 points · 9 months ago

      Doctor: There seems to be something wrong with the image.

      Technician: What’s the problem?

      Doctor: The patient only has two breasts, but the image that came back from the AI machine shows them having six breasts and much MUCH larger breasts than the patient actually has.

      Technician: sighs

      • mayo_cider [he/him]
        12 points · 9 months ago

        Why does the paperwork suddenly claim the patient is a 600-year-old shape-shifting dragon?

  • @[email protected]
    27 points · 9 months ago

    And if we weren’t a big, broken mess of a late-stage capitalist hellscape, you or someone you know could have actually benefited from this.

    • @[email protected]
      13 points · 9 months ago

      Yeah, none of us are going to see the benefits. Tired of seeing articles about scientific advancements that I know will never trickle down to us peasants.

      • @[email protected]
        6 points · edited · 9 months ago

        Our clinics are already using AI to clean up MRI images for easier and higher-quality reads. We use AI on our cath lab table to provide a less noisy image at a much lower rad dose.

          • @[email protected]
            2 points · 9 months ago

            It’s not diagnosing, which is good IMHO. It’s just being used to remove noise and artifacts from the scan images. This means the MRI is clearer for the reading physician and the ordering surgeon, and the cardiologist can use less radiation during the procedure yet get the same quality image in the lab.

            I’m still wary of using it to diagnose in basically any scenario, because of how consequential both false negatives and false positives can be.
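
            For the curious: the denoising side isn’t mysterious - conceptually it’s just a small network trained to map noisy scans to clean ones, so you can acquire at a lower dose and still get a readable image. A toy sketch with synthetic data (not the clinical product):

                import torch
                import torch.nn as nn

                # Tiny convolutional denoiser: noisy image in, clean image out.
                denoiser = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1),
                )

                clean = torch.rand(8, 1, 128, 128)             # stand-in for full-dose scans
                noisy = clean + 0.1 * torch.randn_like(clean)  # simulated low-dose noise

                # One training step: shrink the gap between the denoised
                # low-dose image and the full-dose reference.
                optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
                loss = nn.functional.mse_loss(denoiser(noisy), clean)
                loss.backward()
                optimizer.step()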

      • @[email protected]
        2 points · 9 months ago

        … they said, typing on a tiny silicon rectangle that fits in their pocket and has access to the whole of humanity’s knowledge…

    • MuchPineapples
      7 points · edited · 9 months ago

      I’m involved in multiple projects where stuff like this will be used in very accessible ways, hopefully in 2-3 years, so don’t get too pessimistic.

  • @[email protected]
    124 points · 9 months ago

    Why do I still have to work my boring job while AI gets to create art and look at boobs?

  • @[email protected]
    24 points · 9 months ago

    This is similar to what I did for my master’s, except it was lung cancer.

    Stuff like this is actually relatively easy to do, but the regulations you need to conform to and the testing you have to do first are extremely stringent. We had something that worked for like 95% of cases within a couple of months, but it wasn’t until almost two years later that they got to run their first actual trial.

    • apotheotic (she/her)
      2 points · 9 months ago

      I suppose they just dropped the “re” off of “reiterate” since they’re saying it for the first time.

      • @[email protected]
        3 points · edited · 9 months ago

        I think it’s a joke, like to imply they want to not just reiterate but rerererereiterate this information, both because it’s good news and also in light of all the sucky ways AI is being used instead. Like at first they typed “I just want to reiterate…”, but decided that wasn’t nearly enough.

  • @[email protected]
    3 points · 9 months ago

    The AI genie is out of the bottle and — as much as we complain — it isn’t going away; we need thoughtful legislation. AI is going to take my job? Fine, I guess? That sounds good, really. Can I have a guaranteed income to live on, because I still need to live? Can we tax the rich?

  • @[email protected]
    4 points · 9 months ago

    I really wouldn’t call this AI. It is more or less an image identification system that relies on machine learning.

      • @[email protected]
        9 points · 9 months ago

        And long before that, it was rule-based machine learning, which was basically databases and fancy inference algorithms. So I guess “AI” has always meant “the most advanced computer science thing which looks kind of intelligent”. It’s only now that it looks intelligent enough to fool laypeople into thinking there actually is intelligence there.
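
        A rule-based system, in miniature, really is just a fact store plus an inference loop - the facts and rules below are invented purely for illustration:

            # Toy forward-chaining inference: keep applying rules to the fact
            # "database" until nothing new can be derived.
            facts = {"has_fever", "has_rash"}

            rules = [
                ({"has_fever", "has_rash"}, "suspect_measles"),
                ({"suspect_measles"}, "recommend_isolation"),
            ]

            changed = True
            while changed:
                changed = False
                for conditions, conclusion in rules:
                    if conditions <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True

            print(facts)  # includes "recommend_isolation" via chained inference

        No learning anywhere - every rule is hand-written - yet for decades this kind of system was the public face of “AI”.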

    • @[email protected]
      15 points · 9 months ago

      The test is 90% accurate; that’s still pretty useful, especially if you are simply putting people into a high-risk group that needs to be more closely monitored.

      • @[email protected]
        2 points · 9 months ago

        “90% accurate” is a non-statement. It’s like you haven’t even watched the video you’re responding to. Also, where the hell did you pull that number from?

        What matters is how specific and how sensitive it is. If Mirai in https://www.science.org/doi/10.1126/scitranslmed.aba4373 is the same model that the tweet mentions, then neither its specificity nor its sensitivity reaches 90%. And considering that the image in the tweet is traceable to a publication from the same year (https://news.mit.edu/2021/robust-artificial-intelligence-tools-predict-future-cancer-0128), I’m fairly sure that it’s the same Mirai.
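
        To make that concrete, here’s a toy calculation (numbers invented) showing why “accurate” alone can mislead when a condition is rare:

            # Sensitivity: of the people who have the disease, how many we catch.
            # Specificity: of the people who don't, how many we correctly clear.
            def sensitivity(tp, fn):
                return tp / (tp + fn)

            def specificity(tn, fp):
                return tn / (tn + fp)

            def accuracy(tp, tn, fp, fn):
                return (tp + tn) / (tp + tn + fp + fn)

            # Screen 1000 people, 10 of whom actually have the disease,
            # with a "test" that just calls everyone negative:
            print(accuracy(tp=0, tn=990, fp=0, fn=10))  # 0.99 -- "99% accurate"
            print(sensitivity(tp=0, fn=10))             # 0.0 -- catches no one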

        • @[email protected]
          1 point · edited · 9 months ago

          Also, where the hell did you pull that number from?

          Well, you can just do the math yourself; it’s pretty straightforward.

          However, more to the point, it’s taken right from around 38 seconds into the video. Kind of funny to be accused of “not watching the video” by someone who is implying the number was pulled from nowhere, when it’s right in the video.

          I certainly don’t think this closes the book on anything, but I’m responding to your claim that it’s not useful. If this is a cheap and easy test, it’s a great screening tool for putting people into low-risk/high-risk groups, for which further (maybe more expensive, specific, or sensitive) tests can be done - especially if it can do this early.

    • @[email protected]
      15 points · 9 months ago

      Machine learning is AI as well; it’s not really what we picture when we think of AI, but it is nonetheless.

    • Captain Aggravated
      17 points · 9 months ago

      It’s probably more “AI” than the LLMs we’ve been plagued with. This sounds more like an application of machine learning, which is a hell of a lot more promising.

      • @[email protected]
        3 points · 9 months ago

        AI and machine learning are very similar (if not identical) things; it’s just that one has been turned into a marketing hype word a whole lot more than the other.

        • Captain Aggravated
          4 points · 9 months ago

          Machine learning is one of the many things that is referred to by “AI”, yes.

          My thought is the term “AI” has been overused to uselessness, from the nested if statements that decide how video game enemies move to various kinds of machine learning to large language models.

          So I’m personally going to avoid the term.

          • @[email protected]
            1 point · 7 months ago

            AI == computer thingy that looks kinda “smart” to people who don’t understand it. It’s like rectangles and squares: you should use the more precise word (CNN, LLM, Stable Diffusion) when applicable, just like with rectangles and squares.

    • TonyOstrich
      12 points · 9 months ago

      This seems exactly like what I would have referred to as AI before the pandemic - specifically, deep learning image processing. In terms of something you can buy off the shelf, this is theoretically something the Cognex ViDi Red tool could be used for. My experience with it is in packaging, but the base concept is the same.

      Training a model requires loading images into the software and having a human mark them up before a very powerful CUDA GPU processes all of that. Once the model has been trained, it can usually be run on a comparatively modest PC. Roughly, that workflow looks like the sketch below.
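
      (A generic PyTorch stand-in, not the Cognex API; the model, data, and file names are made up:)

          import torch
          import torch.nn as nn

          # Train on a CUDA GPU when available; fall back to CPU.
          device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

          # Stand-in for a real defect/anomaly classifier.
          model = nn.Sequential(
              nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
          ).to(device)

          optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
          images = torch.randn(16, 1, 64, 64, device=device)   # human-labeled images
          labels = torch.randint(0, 2, (16,), device=device)   # 0 = good, 1 = defect

          # One training step (the expensive, GPU-bound part).
          loss = nn.functional.cross_entropy(model(images), labels)
          loss.backward()
          optimizer.step()

          torch.save(model.state_dict(), "model.pt")  # ship this to the modest PC

          # Inference is far cheaper and runs fine on CPU.
          cpu_model = model.to("cpu").eval()
          with torch.no_grad():
              prediction = cpu_model(torch.randn(1, 1, 64, 64)).argmax(1)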

    • @[email protected]
      3 points · 9 months ago

      That’s the thing about machine learning: it sees nothing but whatever correlates. That’s why data science is such a complex topic - you do not spot errors this easily. Testing a model is still very underrated, and usually there is no time to test one properly.

    • @[email protected]
      18 points · 9 months ago

      It’s really difficult to clean that data. In another case, the markings were kept on the training data, and the result was that the images from patients who had cancer carried a doctor’s signature, so the AI could always tell the cancer images from the non-cancer ones by the presence or absence of a signature. However, these people are also getting smarter about picking their training data, so it’s not impossible for this to work properly at some point.

    • @[email protected]
      127 points · edited · 9 months ago

      Using AI for anomaly detection is nothing new, though. I haven’t read any article about this specific ‘discovery’, but usually this uses a completely different technique from the AI that comes to mind when people think of AI these days.

      • Johanno
        85 points · 9 months ago

        That’s why I hate the term AI. Say it’s a predictive LLM or a pattern recognition model.

        • PM_ME_VINTAGE_30S [he/him]
          67 points · 9 months ago

          Say it’s a predictive LLM

          According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network that isn’t specifically designed to work on text. So this term would be inaccurate.

          or a pattern recognition model.

          Much better term IMO, especially since it uses a convolutional network. But since the article is a news publication, not a serious academic paper, the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is) - otherwise we wouldn’t be here talking about it.

          • @[email protected]
            10 points · 9 months ago

            Well, this is very much an application of AI… Having more examples of recent AI development that aren’t ‘ChatGPT’ (transformer-based) is probably a good thing.

            • @[email protected]
              7 points · 9 months ago

              OP is not saying this isn’t using the techniques associated with the term AI. They’re saying that the term AI is misleading, broad, and generally not desirable in a technical publication.

              • @[email protected]
                6 points · 9 months ago

                OP is not saying this isn’t using the techniques associated with the term AI.

                Correct, and also not what I was replying about. I said that using AI in the headline here is very much correct: it is, after all, a paper using AI to detect stuff.

        • 0laura
          2 points · 9 months ago

          It’s a good term; it refers to lots of things. There are many terms like that.

          • Ephera
            3 points · 9 months ago

            The problem is that it refers to so many (and constantly changing) things that it doesn’t refer to anything specific in the end. You can replace the word “AI” in any sentence with the word “magic” and it basically says the same thing…

            • 0laura
              1 point · 9 months ago

              The word “program” refers to even more things, and no one says it’s a bad word.

            • @[email protected]
              3 points · edited · 9 months ago

              It’s literally the name of the field of study. Chances are this uses the same thing as LLMs: a neural network, which is one of the oldest AI techniques around.

              It refers to anything that simulates intelligence. They are using the correct word. People just misunderstand it.

              • @[email protected]
                3 points · 9 months ago

                If people consistently misunderstand it, it’s a bad term for communicating the concept.

                • @[email protected]
                  2 points · edited · 9 months ago

                  It’s the correct term though.

                  It’s like when people get confused about what a scientific theory is. We still call it the theory of gravity.

      • PM_ME_VINTAGE_30S [he/him]
        11 points · 9 months ago

        I haven’t read any article about this specific ‘discovery’, but usually this uses a completely different technique from the AI that comes to mind when people think of AI these days.

        From the conclusion of the actual paper:

        Deep learning models that use full-field mammograms yield substantially improved risk discrimination compared with the Tyrer-Cuzick (version 8) model.

        If I read this paper correctly, the novelty is in the model, which is a deep learning model that works on mammogram images + traditional risk factors.

        • @[email protected]
          3 points · 9 months ago

          I skimmed the paper. As you said, they made an ML model that takes images and traditional risk factors (TCv8).

          I would love to see comparison against risk factors + human image evaluation.

          Nevertheless, this is the AI that will really help humanity.

        • @[email protected]
          8 points · edited · 9 months ago

          For the image-only DL model, we implemented a deep convolutional neural network (ResNet18 [13]) with PyTorch (version 0.31; pytorch.org). Given a 1664 × 2048 pixel view of a breast, the DL model was trained to predict whether or not that breast would develop breast cancer within 5 years.

          The only “innovation” here is feeding full-view mammograms to a ResNet18 (a 2016 model). The traditional risk-factor regression is nothing special (barely machine learning). They don’t go in depth about how they combine the two for the hybrid model, so it’s probably safe to assume it is something simple (merely combining the results, so nothing special in the training step). Edit: I stand corrected - the commenter below pointed out the appendix, and the regression does in fact come into play in the training step.

          As a different commenter mentioned, the data collection is largely the interesting part here.

          I’ll admit I was wrong about my first guess as to the network topology used, though; I was thinking they used something like autoencoders (but those are mostly used in cases where examples of bad samples are rare).
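
          For anyone wanting to poke at that image-only branch, a minimal sketch is below. The paper only specifies PyTorch’s ResNet18 fed 1664 × 2048 views with a 5-year label; the single-channel conv swap and the training details here are my assumptions:

              import torch
              import torch.nn as nn
              from torchvision.models import resnet18

              model = resnet18(weights=None)                 # the paper's backbone
              model.fc = nn.Linear(model.fc.in_features, 1)  # one logit: 5-year risk

              # Mammograms are grayscale; collapsing conv1 to one input channel
              # is an assumption - the authors may have replicated the channel.
              model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

              criterion = nn.BCEWithLogitsLoss()
              optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

              # One step on a fake batch shaped like the paper's views.
              images = torch.randn(2, 1, 1664, 2048)
              labels = torch.tensor([[1.0], [0.0]])  # cancer within 5 years?

              loss = criterion(model(images), labels)
              loss.backward()
              optimizer.step()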

          • @[email protected]
            4 points · 9 months ago

            ResNet18 is ancient and tiny… I don’t understand why they didn’t go with a deeper network; ResNet50 is usually the smallest I’ll use.

          • PM_ME_VINTAGE_30S [he/him]
            5 points · edited · 9 months ago

            They don’t go in depth about how they combine the two for the hybrid model

            Actually, they did - it’s in Appendix E (PDF warning). A GitHub repo would have been nice, but I think there would be enough info to replicate this if we had the data.

            Yeah, it’s not the most interesting paper in the world, but it’s still a cool use IMO, even if it might not be novel enough to deserve a news article.