I should’ve switched sooner, instead of waiting until last year when they announced AI integration in Windows. Every peripheral I’ve tried has just worked without needing to install drivers, and often better and faster than on Windows. Just today I tried my brother’s 3D printer expecting disappointment, but no, it connected and was ready to print right away (I use Ultimaker Cura), whereas on my brother’s Windows computer I have to wait around 20 seconds, and sometimes disconnect and reconnect it before it’s detected and ready to use. Lastly, for those who are wondering, I use vanilla Arch (btw), and sorry for my bad English.

  • sunzu2
    3 months ago

    Use a local LLM; it will turbocharge your learning curve.

    It tells you commands and will explain the errors. This is a prime LLM domain IMHO, since everything Linux is well documented online.

    • @[email protected]
      3 months ago

      I have tried many models online, presumably all of which are more robust than local ones. I’ll give it another shot soon.

      • sunzu2
        3 months ago

        They are, definitely. Local ain’t gonna be better. Did you not like the results from LLMs?

        • @[email protected]
          3 months ago

          I mean, I often find them useful, but in this case I didn’t like the results in the sense that after trying for hours my problem still wasn’t resolved 🙂