I can see some minor benefits - I use it for the odd bit of mundane writing and some of the image creation stuff is interesting, and I knew that a lot of people use it for coding etc - but mostly it seems to be about making more cash for corporations and stuffing the internet with bots and fake content. Am I missing something here? Are there any genuine benefits?

  • Lemminary (5 points · 1 year ago)

    I would’ve found it extremely useful in school for studying advanced topics in biology, and now I use it to explain programming concepts to me, or to explain other languages. Some of the answers really do feel like you have a world-class tutor right next to you. It’s not without errors but it’s mostly accurate and insightful.

    It’s also really good at helping you search for things that you can’t just type into a search box using keywords. Like, you can give it a general description of what you’re thinking about and it’ll guess. I’ve used it for TV shows from the 90-00s I largely forgot about, but also words, phrases, or concepts I can’t quite remember. One time I was trying to remember a famous experiment but gave it the wrong scientist and it correctly guessed who it was and what the experiment was about.

    It’s also useful for brainstorming. You give it a general description of what you’re doing and it’ll give you somewhat generic recommendations of what you could expect other people to do so that you cover most bases. I’ve also used this for discussions where I’m not sure about my position so I’ll ask it to get a better idea about the problem and to figure out what I’m not considering.

    Overall, I think it’s a great general purpose assistant.

  • @[email protected] (7 points · 1 year ago)

    The legal industry is going to get turned on its head when AI can read, comment, and write contracts.

  • @[email protected] (19 points · 1 year ago)

    AI has some interesting use cases, but should not be trusted 100%.

    Like github copilot ( or any “code copilot”):

    • Good for repetitive code with minor changes
    • Can help with common, easy coding errors
    • Code quality can take a big hit
    • For coding beginners, it can lead to a deficit of real understanding of your code
      (and because of that could lead to bugs, security backdoors…)
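
    The “repetitive code with minor changes” point is easy to illustrate. Here's a minimal, hypothetical sketch (all function names and constants are my own invented example, not from any particular tool) of the near-identical boilerplate a code copilot tends to autocomplete well after seeing the first function:

```python
# Near-identical unit-conversion helpers: after the first one is written,
# a completion tool can usually fill in the rest from the pattern.

def miles_to_km(miles: float) -> float:
    """Convert miles to kilometres."""
    return miles * 1.60934

def pounds_to_kg(pounds: float) -> float:
    """Convert pounds to kilograms."""
    return pounds * 0.453592

def gallons_to_litres(gallons: float) -> float:
    """Convert US gallons to litres."""
    return gallons * 3.78541

print(round(miles_to_km(10), 2))  # 16.09
```

    The catch is exactly the bullet above: if the tool fills in a wrong constant and you don’t really understand the code, the bug slips straight through.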

    Like translations ( code or language ):

    • Good at translating the common/big languages (English, German…)
    • Can expand a brief summary into a big wall of text (and back)
    • A wrong translation can lead to someone else misunderstanding it and missing the point
    • It removes the “human” part; depending on the context, machine output can usually be identified easily.

    Like classification of text/Images for moderation:

    • Helps identify bad-faith text/images
    • False Positives can be annoying to deal with.

    But don’t do anything IMPORTANT with AI; only use it for fun, or when you can verify that the code/text the AI wrote is correct!

    • @[email protected]OP (2 points · 1 year ago)

      Actually the summaries are good, but you have to know some of it anyway and then check to see if it’s just making stuff up. That’s been my experience.

    • Lemminary (7 points · 1 year ago)

      Adding to the language section, it’s also really good at guessing words if you give it a decent definition. I think this has other applications but it’s quite useful for people like me with the occasionally leaky brain.

  • @[email protected] (29 points · 1 year ago)

    This sort of feels like someone using a PC for the first time in 1989 and asking what it does that they can’t do on a piece of paper with a calculator. They may not have been far off at the time, but they would be missing the point. This is a paradigm shift that allows for a single application to fulfill the role of, eventually, infinite applications. And yes it starts with mundane tasks. You know, the kind people don’t want to do themselves.

    • @[email protected] (8 points · 1 year ago)

      TBF if a mathematician or a programmer cannot do it on paper then they’ve kind of failed and probably won’t have any notable impact. Paper math didn’t end when consumer computers came about.

      • @[email protected] (1 point · 1 year ago)

        I know plenty of modern programmers who are empowered by the ease at which they can learn the trade now. Some never go deeper than front end developer, because there’s good money there. That job would look nothing like it does today if it had to be done by hand.

      • @[email protected] (6 points · 1 year ago)

        Wrap it up, climate scientists, the show is over! This lad said he can do your job without the supercomputer.

        • @[email protected] (1 point · 1 year ago)

          You think Supercomputers are designing and building themselves, you fucking donkey? You think ChatGPT has the solution to Climate Change?

    • Cris (16 points · 1 year ago)

      The problem is that most of the things we can currently see applications for are… kinda bad. Actually repulsive, frankly. Like, I don’t want those things. I don’t wanna talk to an AI to order my Big Mac, or get a generated answer instead of just a highlighted excerpt from a webpage when I search things. I don’t want a world where artists have to compete with image generators to make a living, or where weird creepy porn that chases and satisfies ever more unrealistic expectations is the norm. I don’t want to talk to chatbots that use statistical analysis to convincingly sell me lies they don’t understand.

      I just wanna talk to actual people. I wanna see art made by people, I wanna look at pictures of the bodies of actual human beings, I wanna see the animations that humans poured their soul into, I wanna see the actual text a person wrote on the subject I’m researching. I wanna do simple things, in simple ways, and the world that AI companies are offering us honestly sucks, and as soon as that door is fully opened things will just be permanently worse. Convenience is great, but I don’t want a robot to feed me a weird gross regurgitation of reality, or an approximation of human interaction, like a bird that chews and digests its food for its babies. I don’t wanna consume the spit-up of an overgrown algorithm. It’s a gross idea of how we could engage with the world. It obfuscates the humanity of whatever it touches, and the humanity is the worthwhile part. There comes a point where the abstraction is abstracting away everything of value and leaving you with the most sanitized version.

      If AI was just gonna be used to improve medicine, translate books or webpages, serve as an interactive accessibility tool, or do actually helpful shit, maybe I wouldn’t be so opposed to it. But it feels like everything consumer- or employee-facing that AI is offering is awful and something I absolutely do not want. Companies don’t care, though, and that shitty world is gonna be the reality because it’s profitable.

      • @[email protected] (5 points · 1 year ago)

        Well then I guess I’d ask you to reconsider your answer, but from the perspective of 1989. I’d imagine that’s the same answer you’d have given to the personal computer. AI isn’t going to make things more complicated; it’s going to make things simpler. But people will create a more complicated (diverse) world in the vacuum that leaves, just like the ox-drawn plow, by making it easier to till farmland, led to more complex agricultural societies. This type of advancement has been the story of human history since its beginning. Your perspective seems most concerned with people using this advancement against you, but our future now holds the possibility of having this AI on your side.

        Think of using it to synopsize the complicated TOS that corporations use to obfuscate what you’re agreeing to, actually answering questions instead of needing to search through ad-riddled web pages, or allowing more people to become artists and create their vision.

        Your examples of useful ways to use AI are great. So help build or support them. If you only look at the future corporations are selling you, yeah, it’s going to look like a bleak corporate nightmare. But the truth is technology empowers the individual. So we need to do something good with that power.

      • @[email protected] (3 points · 1 year ago)

        You’re confusing brainstorming with content generation. LLMs are great for brainstorming: they can quickly churn out dozens of ideas for my D&D campaign, which I then look through, discard the garbage, keep the good bits of, and riff off of before incorporating into my campaign. If I just used everything it suggested blindly, yeah, nightmare fuel. For brainstorming though, it’s fantastic.

        • @[email protected] (1 point · 1 year ago)

          I would retort that the exact opposite is true, that content generation is the only thing LLMs are good at because they often forget the context of their previous statements.

          • @[email protected] (1 point · 1 year ago)

            I think we’re saying the same thing there: LLMs are great at spewing out a ton of content, which makes them a great tool for brainstorming. The content they create is not necessarily trustworthy or even good, but it can be great fuel for the creative process.

            • @[email protected] (1 point · 1 year ago)

              My stance is that spewing out a ton of flawed, unrelated content is not conducive to creating good content, and therefore LLMs are not useful for writing. That hasn’t changed.

        • Jojo (2 points · 1 year ago)

          Exactly. It can generate those base-level ideas much faster and with higher fidelity than humans can without it, and that can serve us at the hobby level with D&D, or up at the business level with writers’ rooms and such.

          The important point is that you still need someone good at making the thing to curate and finish what you’re making, or you end up with paintings with too many fingers or stories full of contradictions.

          • @[email protected] (1 point · 1 year ago)

            Any kid who uses it to craft their campaign is lazy and depriving themselves of a valuable experience, any professional who uses it to write a book, script, or study is wildly unethical, and both are creating a much, much worse product than a human working without it. That is the reality of a model that, even at 100% accuracy, would be exactly as flawed as human output, and we’re nowhere near that accuracy.

            • Jojo (2 points · 1 year ago)

              But the point is that you don’t use it to make the campaign or write the book. You use it as a tool to help yourself make a campaign or write a book. Ignoring the potential of ai as a tool is silly just because it can’t do the whole job for you. That would be a bit like saying you are a fool for using a sponge when washing because it will never get everything by itself…

              • @[email protected] (1 point · 1 year ago)

                I get it now! You don’t use it for the thing you use it for but instead as a tool to create the thing that you’ve used it for for yourself because the magic was inside all of us but also the GPT all along. /sarcasm

                • Jojo (2 points · 1 year ago)

                  “don’t feed the trolls,” they said, but did she ever listen?

                  No, I guess I didn’t…

  • @[email protected] (2 points · 1 year ago)

    I think as a tool to synthesize, collect, and organize information to help people make decisions, it has potential. Much like how machine learning is used to look at a bunch of MRI scans and highlight abnormalities, and then a medical professional looks at those anomalies to decide whether they might be a tumor. A machine is really good at finding things that are anomalous enough to be worth looking at.
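
    That “machine flags, human decides” workflow can be sketched in a few lines. This is a toy illustration only: the z-score detector, the threshold of 3.0, and the made-up summary values are all my own assumptions, whereas a real system would run a trained model over the image data itself.

```python
# Flag readings far from the cohort mean so a human can review them.
from statistics import mean, stdev

def flag_anomalies(scores, threshold=3.0):
    """Return indices of scores far from the cohort mean, for human review."""
    m, s = mean(scores), stdev(scores)
    return [i for i, x in enumerate(scores) if abs(x - m) > threshold * s]

# 99 typical readings plus one obvious outlier at the end
readings = [10.0 + (i % 5) * 0.1 for i in range(99)] + [25.0]
print(flag_anomalies(readings))  # [99]
```

    The machine narrows thousands of scans down to a handful; the professional still makes the call on each flagged case.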

    Things that you might have delegated to a secretary, assistant, or business analyst might be worth handing to an LLM: “Sort all these papers by which ones understood the topic the best so I can read those first.” “Do any of these articles contain new information I haven’t seen before?” “Based on the Billboard top 20, create 5 catchy beats for a backing track.” “Draft a letter to this customer apologizing for our error and offering them a coupon for their next order.” “Analyze this email I wrote and help me make the tone more professional.”

    I am terrified by what is going to be possible with phishing scams, spam email, fake articles, deep fake videos, reproduction of copyrighted works, an overwhelming volume of trademarks and patents that are meaningless, obtuse contracts that are purposely difficult for a human to read but contain surreptitious loopholes, software that is full of flaws and back doors, and corporations putting more barriers between customers and customer service people.

    “find me the 50 most popular articles on this topic, synthesize them all into a 20 bullet point summary and highlight for me the differences of opinion presented so I can understand both sides of the issue” - super useful

    “Generate 100,000 unique variations on a very professional email correspondence from a Nigerian Prince offering to pay $50,000 transaction fee for assistance with an international wire transfer “ - no

    Unfortunately I don’t think there are any incentives for the companies building these things to limit use or install the guard rails necessary. And our laws, which always run a little behind technology, are thoroughly outpaced by the rate of innovation here. The very old people in charge of governments have no chance of staying ahead of these companies. It will get much worse before it ever gets better.

    Honestly, we should just stick to porn. The Internet should just be for porn because everything else we do with it seems to turn evil. 🫤

    • Alto (3 points · 1 year ago)

      I’ve got some really bad news about the porn industry if you don’t think it’s evil.

  • @[email protected] (1 point · 1 year ago)

    Have you tried writing prompts for an image-generating AI? If you have some idea and play around with it, it’s quite a new thing. An extension of human imagination. YMMV.

    AI is helping us correctly predict protein folding, which will enable new medication. Afaik it’s a major breakthrough that could allow alleviating a lot of suffering.

  • blargerer (2 points · 1 year ago)

    No one knows the long-term benefits/costs yet, but it’s potentially more empowering to small creators than large ones. Everyone has access to the same tools, and, for instance, if it can offload a bunch of work from an indie game dev, that could let them focus more on the part of the game design process they are most skilled at or interested in.

    • blazera (2 points · 1 year ago)

      Everyone has access to the same tools

      go make me something with Sora to see what kind of equal access you think you have

      • blargerer (1 point · 1 year ago)

        Sure, the public doesn’t have access to a cutting-edge research AI whose public results were only published a couple of days ago.

        • blazera (1 point · 1 year ago)

          Right, it’s a new technology and its usage is being curated. Even once they release a publicly accessible application, it’ll be like going to the hammer-rental store: you get to go there and use a hammer, sometimes for a fee; you can’t bring the hammer home or own it; they regulate what you can work on with it; it can be overused and inaccessible while demand is high; and they can discontinue access whenever they want.

          And they can and do give more access to wealthier clients.

  • FaceDeer (12 points · 1 year ago)

    I pasted your question verbatim into Bing Chat. Here’s what it responded with:

    Artificial Intelligence (AI) indeed has a wide range of benefits that extend beyond the ones you’ve mentioned. Here are some areas where AI is making a significant impact:

    • Healthcare: AI is used in predicting disease outbreaks, drug discovery, personalized treatment plans, and improving patient care. For example, machine learning models can analyze medical images to detect diseases at early stages.
    • Education: AI can provide personalized learning experiences, identify gaps in learning materials, and automate administrative tasks. It can adapt to individual learning styles, making education more accessible.
    • Environment: AI can help in climate modeling, predicting natural disasters, and monitoring wildlife. It’s also used in optimizing energy usage in buildings and manufacturing processes, contributing to sustainability.
    • Transportation: Autonomous vehicles use AI for navigation, safety, and traffic management. AI can also optimize logistics, leading to reduced costs and environmental impact.
    • Security: AI can enhance cybersecurity by detecting unusual patterns or anomalies in data, helping to prevent cyber attacks. It’s also used in surveillance systems to identify potential threats.
    • Accessibility: AI can help people with disabilities by providing tools that improve their ability to interact with the world. Examples include speech recognition for those unable to use a keyboard, and visual recognition systems that can describe the environment to visually impaired individuals.

    While it’s true that AI can be used to generate profits for corporations, it’s important to remember that many of these advancements also lead to societal benefits. However, like any technology, AI can be misused, and it’s crucial to have regulations and ethical guidelines in place to prevent such misuse. The creation of “bots and fake content” you mentioned is one such misuse, and efforts are ongoing to combat these issues.

    In conclusion, AI has the potential to greatly benefit society in many ways, but it’s equally important to be aware of and address its challenges.

    Seems like a pretty comprehensive list of the things I’m aware of myself. There’s also tons of interesting future applications being worked on that, if they pan out, will be hugely beneficial in all sorts of ways. From what I’ve seen of what the tech is capable of we’re looking at a revolution here.

    • @[email protected] (8 points · 1 year ago)

      Seems a bit biased to ask an AI for the benefits of AI…
      Not saying anything specific is wrong, just that appearances matter

      • SharkAttak (2 points · 1 year ago)

        Was thinking the same… let’s ask Honest Joe the car salesman which one is the best means of transport.

      • Chozo (1 point · 1 year ago)

        I think implying that it has a bias is giving the Advanced Auto Prediction Engine a bit too much credit.

        • @[email protected] (1 point · 1 year ago)

          Oh, I am in fact giving the giant autocomplete function little credit. But just like any computer system, an AI can reflect the biases of its creators and its dataset. Similarly, the computer can only give an answer to the question it has been asked.

          Dataset-wise, we don’t know exactly what the bot was trained on, other than “a lot”. I would like to hope its creators acted with good judgment, but as creators/maintainers of the AI, there may be an inherent (even if unintentional) bias towards the creation and adoption of AI. Just like how some speech recognition models have issues with some dialects, or image recognition has issues with some skin tones, both based on the datasets they ingested.

          The question itself invites at least some bias and only asks for benefits. I work in IT, and I see this situation all the time with the questions some people have in tickets: the question will be “how do I do x”, and while x is a perfectly reasonable thing for someone to want to do, it’s not really the final answer. As reasoning humans, we can also take the context of a question to provide additional details without blindly reciting information from the first few lmgtfy results.

          (Stop reading here if you don’t want a ramble)


          AI is growing yes and it’s getting better, but it’s still a very immature field. Many of its beneficial cases have serious drawbacks that mean it should NOT be “given full control of a starship”, so to speak.

          • Driverless cars still need very good markings on the road to stay in lane, but a human has better pattern matching to find lanes - even in a snow drift.
          • Research queries are especially affected, with chatbots hallucinating references that don’t exist despite being formatted correctly. To that specifically:
            • Two lawyers have been caught separately using chatbots for research and submitting their work without validating the answer. They were caught because they cited a case which supported their arguments but did not exist.
            • A chatbot an airline deployed as a customer support representative invented a refund policy that did not exist. A small claims court decided the airline had to honor that policy.
            • In an online forum while trying to determine if a piece of software had a specific functionality, I encountered a user who had copied the question into chatgpt and pasted the response. It was a command option that was exactly what I and the forum poster needed, but sadly did not exist. On further research, there was a bug report open for a few years to add this functionality that was not yet implemented
            • A coworker asked an LLM if a specific Windows PowerShell command existed. It responded with documentation for a very nicely formatted command that was exactly what we needed, but alas did not exist. It had to be told that it was wrong four times before it gave us an answer that worked.

          While OP’s question is about the benefits, I think it’s also important to talk about the drawbacks at the same time. All that information could be inadvertently filtered out. Would you blindly trust the health of your child or significant other to a chatbot that may or may not be hallucinating? Would you want your boss to fire you because the computer determined your recorded task time-to-resolution was low? What about all those dozens of people you helped in side chats that don’t have tickets?

          There’s a great saying about not letting perfection get in the way of progress, meaning that we shouldn’t get too caught up on the last 10-20% of completion. But with decision making that can affect peoples’ lives and livelihoods, we need to be damn sure the computer is going to make the right decision every time, or not trust it to have full control at all.

          As the future currently stands, we still need humans constantly auditing the decisions of our computers (both standard procedural and AI) for safety’s sake. All of the examples above could have been solved by a trained human gating the result; in the PowerShell case, my coworker was that person. If we’re trusting the computers with as much decision making as that Bing answer proposes, the AI models need to be MUCH better trained at their jobs than they currently are. Am I saying we should stop using and researching AI? No, but not enough people currently understand that these tools have incredibly rough edges, and the ability for a human to verify answers is absolutely critical.

          Lastly, are humans biased? Yes absolutely. You can probably see my own bias in the construction of this answer.

      • FaceDeer (3 points · 1 year ago)

        It was in part a demonstration. I see a huge number of questions posted these days that could be trivially answered by an AI.

        Try asking Bing Chat for negative aspects of AI, it’ll give you those too.

  • @[email protected] (8 points · 1 year ago)

    Machine learning is important in healthcare and it’s going to get better and better. If you train an algorithm on two sets of data where one is a collection of normal scans and the other from patients with an abnormality, it’s often more accurate than a medical professional in sorting new scans.
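
    That two-cohort training idea can be sketched with a toy nearest-centroid classifier. To be clear, the single-number “features” and all the values below are my own invented illustration; a real system trains a deep model on the scan images themselves.

```python
# Learn the mean feature of each cohort, then label new scans by the
# nearest cohort mean.

def train(normal, abnormal):
    """Return the mean feature value of each cohort."""
    return sum(normal) / len(normal), sum(abnormal) / len(abnormal)

def classify(value, centroids):
    """Label a new scan by whichever cohort mean it is closer to."""
    c_normal, c_abnormal = centroids
    return "normal" if abs(value - c_normal) <= abs(value - c_abnormal) else "abnormal"

centroids = train(normal=[1.0, 1.2, 0.9, 1.1], abnormal=[3.0, 3.4, 2.8])
print(classify(1.05, centroids))  # normal
print(classify(3.1, centroids))   # abnormal
```

    The principle is the same as the comment describes: show the algorithm two labeled sets, and it learns a boundary for sorting new cases.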

    As for the fancy chatbot side of things, I suspect it’s only going to lead to a bunch of middle management dickheads believing they can lay off staff until the inevitable happens and it blows up in their faces.

  • tiredofsametab (5 points · 1 year ago)

    The one thing I can say for sure is that, sometimes, when a library or something has bad documentation, it might be able to give a solution quicker than diving into the source code.

  • @[email protected] (42 points · 1 year ago)

    Most email spam detection and antimalware use ML. There are also use cases in medicine, like trying to predict early whether someone has a condition.
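
    The spam-detection use can be sketched with a deliberately tiny word-frequency filter. The training messages and the scoring rule here are my own toy assumptions; production filters use Bayesian or neural models trained on huge corpora.

```python
# Score a message by how many of its words appeared more often in the
# spam training set than in the legitimate ("ham") one.
from collections import Counter

def train_counts(messages):
    """Count word occurrences across a list of messages."""
    c = Counter()
    for msg in messages:
        c.update(msg.lower().split())
    return c

spam_counts = train_counts(["win a free prize now", "free money win big"])
ham_counts = train_counts(["meeting at noon tomorrow", "lunch at noon?"])

def is_spam(message):
    """Flag a message if most of its words lean toward the spam corpus."""
    words = message.lower().split()
    spammy = sum(1 for w in words if spam_counts[w] > ham_counts[w])
    return spammy > len(words) / 2

print(is_spam("win free money"))   # True
print(is_spam("meeting tomorrow")) # False
```

    Real filters learn from millions of messages and weigh words probabilistically, but the core idea of learning word statistics from labeled mail is the same.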

    • Lemminary (17 points · 1 year ago)

      It’s also being used in drug R&D to find compounds with similar properties, like antimicrobial activity, afaik.

  • @[email protected] (4 points · 1 year ago)

    Once the technology has embedded itself, our societal adjustments have completed (and they will be PAINFUL), and assuming the profits of AI are sufficiently taxed for the wealth to be redistributed, AI will be seen as the Industrial Revolution x10.

    Most likely however, the rich will get richer.

    • @[email protected] (1 point · 1 year ago)

      assuming the profit of AI is sufficiently taxed for the wealth to redistribute

      AH - hah-hah-hah-hah !!!

      Oh well, at least some of us will still be good for cleaning up messes and other physical things. And remember, like they used to say, hard work never killed anybody.

  • kingthrillgore (3 points · 1 year ago)

    Depends on what kind of AI. In gaming, AI is part of the process to entertain and challenge the player, and has even been used to help model life systems.

    I have yet to see how useful LLMs can be outside of being blatant plagiarists, but for a time, projects like AI Dungeon really did push the emphasis on “interactive dynamic narratives”, and it was really fun for a while.

    ML has been an important part in fraud detection for at least a decade now.

    • TheMurphy (3 points · 1 year ago)

      Very true. I learned how to code surprisingly fast.

      And even the mistakes the AI made were good, because I learned so much from seeing the changes it made to fix them.

      • @[email protected] (3 points · 1 year ago)

        Bullshit. Reading a book on a language is just as fast, and it doesn’t randomly lie or make up entire documentation as an added bonus.