They support Claude, ChatGPT, Gemini, HuggingChat, and Mistral.

  • @[email protected]
    14 points · 7 months ago

    as someone who’s never dabbled with ai bots, what does this feature do? is it only to query for information like a web search?

    • Ephera
      5 points · 7 months ago

      From the description in the UI, it does sound like it. In theory, a chatbot could be built that lets you ask questions about the webpage you currently have open, for example if you don’t want to read a long article. You could probably just throw the link into an existing chatbot anyway, but direct integration might be more convenient.

      Well, or a chatbot could be built that has access to your browser history, bookmarks, and tabs, so you could ask it when you last saw a certain piece of information. However, that would require a locally running chatbot, which makes it more difficult to implement.
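
      For illustration, a minimal sketch of that first idea, assuming a local Ollama server on its default port (11434) with a small model already pulled; the model name and prompt are placeholders, not anything Firefox actually ships:

          import requests

          def ask_about_page(url: str, question: str) -> str:
              # Fetch the raw page; real code would strip the HTML down to text.
              page = requests.get(url, timeout=30).text
              resp = requests.post(
                  "http://localhost:11434/api/generate",  # assumed local Ollama endpoint
                  json={
                      "model": "llama3.2",  # assumed small local model
                      "prompt": f"{question}\n\nPage content:\n{page[:8000]}",
                      "stream": False,
                  },
                  timeout=120,
              )
              return resp.json()["response"]

          print(ask_about_page("https://example.com", "Summarize this page in two sentences."))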

    • @[email protected]
      14 points · 7 months ago

      It just adds ChatGPT or a similar chatbot to your sidebar. Chatbots can do a lot of things; they are mostly good for information research and technical help, although they have serious flaws, like sometimes hallucinating false information.

      • Pup Biru
        2 points · 7 months ago

        “good for information research and technical help”

        i’d say they are good precursors for information research… never trust them directly, but use them to surface terms you can then search for in reliable sources

    • @[email protected]
      11 points · 7 months ago

      It is a sidebar that sends a query from your browser directly to a server run by a giant corporation like Google or OpenAI, consumes an excessive amount of energy and water, then sends back a response that may or may not be true (because AI is incapable of doing anything but generating what it thinks you want to see).

      Not only is it unethical in my opinion, it’s also ridiculously rudimentary…

      • @[email protected]
        4 points · 7 months ago (edited)

        It gives you many options for what to use; you can use Llama, which is offline. That needs to be enabled through about:config > browser.ml.chat.hideLocalhost.
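
        As a quick hedged check (assuming the local server is Ollama on its default port), you can verify something is actually listening before flipping that flag:

            import requests

            # Ask the (assumed) Ollama server which models it has pulled.
            try:
                tags = requests.get("http://localhost:11434/api/tags", timeout=5).json()
                print("Local models:", [m["name"] for m in tags.get("models", [])])
            except requests.ConnectionError:
                print("Nothing is listening on port 11434; install/start a local server first.")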

        • @[email protected]
          1 point · 7 months ago

          There’s a huge difference between something presented in an easily accessible settings menu and something that requires you to go to an obscure page, click through a scary warning message, and then hunt for the right setting… before even installing a server.

          Nothing was compelling Mozilla to rush this through. In addition, nobody was asking Mozilla for access to remote AI providers, AFAIK. Before Mozilla pushed this, people were praising them for resisting the temptation to follow the flock. They could have waited and shipped better defaults.

          Or just wedged it into an extension, something they’re currently doing anyway.

        • @[email protected]
          5 points · 7 months ago

          and thus is unavailable to anyone who isn’t a power user, as they will never see a comment like this and about:config would fill them with dread

  • Sir Arthur V Quackington
    34 points · 7 months ago

    Thing is, for your average user with no GPU, who never thinks about RAM, running a local LLM is intimidating. But it shouldn’t be. Any system with an integrated GPU can run simple models locally, and the more RAM the better.

    The not-so-dirty secret is that ChatGPT 3 vs 4 isn’t that big a difference, and neither is leaps and bounds ahead of the publicly available models for about 99% of tasks. For that 1%, people will ooh and aah over it, but 99% of use cases see only marginal gains over 4o.

    And the simplified models that run “only” 95% as well? They can use 90% fewer resources and give pretty much identical answers outside of hyperspecific use cases.

    Running a “smol” model, as some are called, gets you all the bang for none of the buck, and your data never leaves your system.

    I’ve been yelling from the rooftops to some stupid corporate types that once the model is trained, it’s trained. Unless you are training models yourself, there is no need for the massive AI clusters; those are for training, not for running the model. Run it locally on your hardware at a fraction of the cost.
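
    As a rough sketch of what that looks like in practice (assuming the “smol” models refer to something like the SmolLM2 family on HuggingFace; any small instruct model works the same way):

        from transformers import pipeline

        # Run a small text-generation model entirely on the CPU (device=-1).
        generator = pipeline(
            "text-generation",
            model="HuggingFaceTB/SmolLM2-360M-Instruct",  # assumed small model; swap freely
            device=-1,
        )

        out = generator("Q: What does an integrated GPU do?\nA:", max_new_tokens=60)
        print(out[0]["generated_text"])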

    • @[email protected]
      31 points · 7 months ago

      That’s the tragedy with this new feature: they fast-tracked it past more popular requests and stuck it into release Firefox.

      But they only rushed the part that connects to third parties. There was also a “localhost” option, originally listed alongside the Big Five corporate offerings, but Mozilla ultimately decided to bury that one inside the about:config settings.

      • @[email protected]
        12 points · 7 months ago

        I’m guessing the reason (and a good one at that) is that simply offering an option to connect to a local chatbot would just confuse users, because they also need the actual chatbot running on their system. If you can set that up, then you can certainly toggle a simple switch in about:config to show the option.

    • @[email protected]
      4 points · 7 months ago

      Can you point me to some resources for running a smol LLM?

      My use case is probably just to help type up miscellaneous ideas I have, or to check my grammar, in English.

      Thanks in advance.

    • @[email protected]
      3 points · 7 months ago

      Last time I tried using a local LLM (about a year ago), it generated only a couple of words per second and the answers were barely relevant. Also, I don’t see how a local LLM can fill the glorified-search-engine role that people use LLMs for.

      • Sir Arthur V Quackington
        4 points · 7 months ago

        Try again. Simplified models take the large ones and pare down their memory requirements, and can even be run off the CPU. The “smol” model I mentioned is real, and hyperfast.

        Llama 3.2 is pretty solid as well.
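
        If you want to try it, a hedged sketch with the ollama Python client (assumes Ollama is installed and “ollama pull llama3.2” has already been run):

            import ollama

            # Chat with a small local model; everything stays on your machine.
            reply = ollama.chat(
                model="llama3.2",
                messages=[{"role": "user", "content": "Suggest three search terms for learning about local LLMs."}],
            )
            print(reply["message"]["content"])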

        • @[email protected]
          3 points · 7 months ago (edited)

          These are the answers they gave the first time.

          Qwencoder is persistent after 6 rerolls.

          Anyways, how do I make these use my GPU? The ollama logs say the model will fit into VRAM and that it’s offloading all layers, but GPU usage doesn’t change and the CPU gets the load. And regardless of the model size, VRAM usage never changes and RAM only goes up by a couple hundred megabytes. Any advice? (Linux / Nvidia)

          Edit: it didn’t have CUDA enabled, apparently; fixed now.
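
          For anyone hitting the same wall: not the exact setup above, but llama-cpp-python exposes the same offload knob directly, which makes it easy to see whether a CUDA build is actually in use (the model path is a placeholder):

              from llama_cpp import Llama

              # n_gpu_layers=-1 asks for every layer in VRAM; with a CPU-only
              # build this silently falls back to the CPU, as described above.
              llm = Llama(
                  model_path="./qwen2.5-coder.gguf",  # placeholder local GGUF file
                  n_gpu_layers=-1,
              )
              print(llm("Write a haiku about VRAM.", max_tokens=48)["choices"][0]["text"])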

          • Sir Arthur V Quackington
            5 points · 7 months ago

            Nice.

            Yea, I don’t trust any AI models for facts, period. They all just lie. Confidently. The smol model there at least tried, and got it right at first… before confusing the sentence context.

            Qwen is a good model too. But if you want something to run home automation or do text summaries, smol is solid enough. I’m running on CPU, so it’s good enough for me.

      • @[email protected]
        2 points · 7 months ago

        They’re fast and high quality now. ChatGPT is the best, but local LLMs are great, even with 10 GB of VRAM.

  • @[email protected]
    29 points · 7 months ago

    Didn’t want it in Opera, don’t want it in Firefox. I mean they can keep trying and I’ll just keep on ignoring this shit :/

  • JokeDeity
    39 points · 7 months ago

    Unpopular opinion: I think they’re doing it about as well as it can be done, at least. It’s completely optional and doesn’t seem to be intrusive.

  • @[email protected]
    9 points · 7 months ago

    If they do it in a privacy-preserving way, this could help them win back market share, which would generally benefit an open internet.

      • @[email protected]
        2 points · 7 months ago

        Because browsers are the most useful tool on most computers. Ordinary people google things or ask ChatGPT mundane questions. If their browser can do that, they need one less app, and it’s more convenient, which is what non-tech-savvy people especially care about.

  • @[email protected]
    2 points · 7 months ago

    And I still can’t convince it to stop caching images, because it does not follow the RFC.

  • marcie (she/her)
    20 points · 7 months ago (edited)

    why a fucking chatbot? translate a page better for me, you fucking losers. all the translation options suck for privacy outside of specifically trained local AIs. this is the BEST use case for a small local LLM, yet mozilla, with all its brains and resources, couldn’t rub two neurons together for this.

    or they could do character prediction on your typing to make typing faster. just some legit examples. why waste resources building a chat ai into my browser when i can just open a website???
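
    For what it’s worth, small local translation models of exactly that kind already exist; a minimal sketch using one of the Helsinki-NLP MarianMT models (the language pair here is an arbitrary example), which runs fully offline once downloaded:

        from transformers import pipeline

        # A compact (~300 MB) German-to-English model running locally.
        translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
        print(translator("Der Browser übersetzt diese Seite lokal.")[0]["translation_text"])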

    • @[email protected]
      11 points · 7 months ago

      I think Mistral is model-available (i.e., I’m not sure whether they release training data/code, but they do release the model architecture and weights). HuggingChat is definitely open source and model-available.

    • @[email protected]
      9 points · 7 months ago

      There are no open-source AI models, even if they tell you they are. HuggingFace is the closest thing to open source, in that you can download AI models from it and run them locally without an internet connection; there are applications for that. In Firefox, HuggingChat uses models from HuggingFace, but I think it runs them on a server rather than downloading them.

      The reason they are not open source is that we don’t know exactly what data they were trained on, so we cannot rebuild them on our own. As for trustworthiness, I assume you are talking about the integration and the software using the models, right? At least it is implemented by Mozilla, so there is (to me) some trust involved. Yes, even after all the bullshit, I trust Mozilla.
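
      If anyone wants to try the download-and-run-offline route described above, a hedged sketch with huggingface_hub (the repo id is just an example):

          from huggingface_hub import snapshot_download

          # Download all files for a small model into the local cache.
          path = snapshot_download("HuggingFaceTB/SmolLM2-360M-Instruct")
          print("Model files cached at:", path)
          # Setting the env var HF_HUB_OFFLINE=1 afterwards forces libraries
          # to use only this cache, with no network access.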

      • @[email protected]
        3 points · 7 months ago

        It’s “open weights” if they publish the model file but nothing about its creation. There are some hypothetical security concerns about a model being trained to give very specific outputs for certain very specific inputs, but that feels like a far-fetched worry, especially if you’re using it for chat or summarization and the alternative is getting AI output from a server API. Local is still way better.