• @[email protected]
    link
    fedilink
    101 year ago

    Local AI will be harvested - if not today, then as soon as tomorrow. I recommend not trusting any system like this with any sensitive information… Or, honestly, with most non-sensitive information.

    • Possibly linux · 1 year ago

      How? It is running locally in a VM. I could even air-gap the VM if I wanted to.

    • Dizzy Devil Ducky · 1 year ago

      If you connect it to the Internet, then sure, it can easily be harvested by large companies. But you can host an offline AI on a device where you've made sure the hardware isn't phoning home, and it'll probably be fairly safe, provided you aren't an idiot like me and actually know what you're doing.
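
      For the software side of that check, one quick sanity test is to watch which outbound connections exist while the model is running. A minimal Python sketch using the psutil package (my own illustration, not something from this thread) might look like:

          # List established outbound TCP connections so anything unexpectedly
          # "phoning home" stands out. Assumes the psutil package is installed.
          import psutil

          def outbound_connections():
              """Return (local, remote) address pairs for established TCP connections."""
              pairs = []
              for c in psutil.net_connections(kind="tcp"):
                  if c.status == psutil.CONN_ESTABLISHED and c.raddr:
                      pairs.append((f"{c.laddr.ip}:{c.laddr.port}",
                                    f"{c.raddr.ip}:{c.raddr.port}"))
              return pairs

          if __name__ == "__main__":
              for local, remote in outbound_connections():
                  print(f"{local} -> {remote}")

      On an air-gapped or firewalled machine this list should stay empty (or show only localhost traffic) while the model is generating.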

      • @[email protected]
        link
        fedilink
        English
        81 year ago

        If you install it locally, it will be as secure as anything else you do on your computer.

        • Carl O.S. © · 1 year ago

          @AdrianTheFrog @privacy @AceFuzzLord Actually, it depends on the code. If it's not open source, you can't really know what it's doing with your data. Therefore, not everything you install on your local computer is equally secure (or insecure).

    • @[email protected]
      link
      fedilink
      English
      41 year ago

      When people say Local AI, they mean things like the Free / Open Source Ollama (https://github.com/ollama/ollama/): you can read the source code and check that it doesn't phone home, and you completely control when and if you upgrade it. If you don't like something in the code base, you can fork it and start your own version. The actual models used with Ollama (e.g. Mistral, a popular one) are commonly distributed in GGML format, which doesn't even carry executable code, only massive multi-dimensional arrays of numbers (tensors) that represent the parameters of the LLM. (A minimal usage sketch follows at the end of this comment.)

      Now, not trusting that the output is correct is reasonable. But in terms of trusting the software not to spy on you, FOSS local AI is no different from any other FOSS software you trust not to spy on you (e.g. the Linux kernel, etc.). That is a risk to an extent if there is an xz-style attack on a code base, but I don't think the risks are materially different for 'AI' compared to any other software.
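
      To make the Ollama part concrete, here is a minimal sketch of querying a locally running Ollama instance over its HTTP API from Python (my own example, not from the project docs; it assumes Ollama is listening on its default port 11434 and that a model such as mistral has already been pulled with "ollama pull mistral"):

          # Send one prompt to the local Ollama server and print the completion.
          # Everything stays on localhost unless you point the URL elsewhere.
          import requests

          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "mistral",   # any model you have pulled locally
                  "prompt": "Explain why local inference keeps data on this machine.",
                  "stream": False,      # return a single JSON object, not a stream
              },
              timeout=120,
          )
          resp.raise_for_status()
          print(resp.json()["response"])  # the generated text

      Because the request only ever goes to localhost, you can verify with a firewall or packet capture that nothing leaves the machine.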