My best list of free ChatGPT and other models. No signups required.

  • speck · 0 points · 2 years ago

    Is it cost-prohibitive to set up your own ChatGPT?

    • @[email protected] · 0 points · 2 years ago

      You’re billed per token, and the GPT-3.5-Turbo price per 1K tokens is quite low now.

      I kinda made my own custom ChatGPT with Python (and LOTS of coding help from web ChatGPT). It evolved from a shitty few-line script into a version that uses LangChain, has access to custom tools (including custom data indexes), and has persistent memory.

      What ramps up the cost are things like how much context (memory) you want the chatbot to have. If you use something like a recursive summarizer, which summarizes a text in chunks over and over until it is below a set length, that also means many API calls that consume tokens (see the sketch below). Also, if you want your chatbot to use custom info you provide, solutions like LlamaIndex are easy to use but need quite a few tokens per query.

      In my worst month, with lots of usage due to testing and before the latest price drop, I reached $70.
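
      For a rough sense of where those tokens go, here’s a minimal sketch of the recursive-summarizer idea (illustrative only, not the exact script described above; it assumes the openai Python package v1+ and an OPENAI_API_KEY in the environment):

      ```python
      # Illustrative recursive summarizer: chunk the text, summarize each chunk,
      # then repeat on the joined result until it is short enough.
      # Every summarize() call is a separate API request, which is where tokens add up.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def summarize(text: str) -> str:
          resp = client.chat.completions.create(
              model="gpt-3.5-turbo",
              messages=[
                  {"role": "system", "content": "Summarize the following text concisely."},
                  {"role": "user", "content": text},
              ],
          )
          return resp.choices[0].message.content

      def recursive_summarize(text: str, chunk_size: int = 4000, max_len: int = 1500) -> str:
          while len(text) > max_len:
              chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
              text = "\n".join(summarize(c) for c in chunks)  # one API call per chunk, per pass
          return text
      ```

      A long document can easily take dozens of calls per pass, which is how a month of testing ends up costing real money.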

      • speck · 0 points · 2 years ago

        Loved the depth of this info, although it’s over my head. But I kind of understood? Now I have a project to focus on for the next while. And I hear that it’s possible to do, which is exciting.

        • @[email protected] · -1 points · 2 years ago

          I know, all the info about the OpenAI API and Python needed to create a model is fuxxing dense.

          I don’t even know how I got so far.

      • @[email protected] · 0 points · 2 years ago

        I’m working on a similar project right now with zero coding knowledge. I’ve been trying to find something like LangChain all day. I built (by which I mean I coached GPT into building) a web scraper script that can interact with the web to perform searches and then parse the results (roughly the kind of thing sketched below), but the outputs are getting too big to manage in a hacked-together terminal interface.

        How are you doing the UI? That’s what I’m finding to be the biggest puzzle, and it isn’t fun to solve. I’ve been looking at React as a way to do it.
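
        For reference, the search-and-parse part can be surprisingly short. A minimal sketch (assumes requests and beautifulsoup4; the search URL and CSS selector are guesses for illustration, not the actual script):

        ```python
        # Illustrative search-and-parse sketch; the endpoint and selector are assumptions.
        import requests
        from bs4 import BeautifulSoup

        def search_titles(query: str, max_results: int = 5) -> list[str]:
            resp = requests.get(
                "https://html.duckduckgo.com/html/",
                params={"q": query},
                headers={"User-Agent": "Mozilla/5.0"},
                timeout=10,
            )
            soup = BeautifulSoup(resp.text, "html.parser")
            links = soup.select("a.result__a")  # selector assumed; adjust for the site you scrape
            return [a.get_text(strip=True) for a in links[:max_results]]

        if __name__ == "__main__":
            print(search_titles("langchain tutorial"))
        ```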

        • @[email protected] · -1 points · 2 years ago

          I use a Gradio chatbot interface. Gradio has all kinds of interfaces, and there’s one specially designed for chatbots (minimal example below).

          IDK if it’s the best option, but it’s what I found on shitty blog tutorials when I started. Even Stable Diffusion WebUI uses it.

          It’s quite powerful, but a bitch to learn to use, IMO.
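
          A minimal sketch of what that looks like (assumes pip install gradio; the echo function is just a placeholder for your own model call):

          ```python
          # Minimal Gradio chat UI sketch; swap echo() for your own LLM / LangChain call.
          import gradio as gr

          def echo(message, history):
              return f"You said: {message}"  # placeholder bot logic

          gr.ChatInterface(fn=echo, title="Custom chatbot").launch()  # serves the UI locally in the browser
          ```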

  • db0M · 8 points · edited · 2 years ago

    You don’t need to pirate OpenAI. I’ve built the AI Horde so y’all can use it without any workarounds or shenanigans, and you can use your PCs to help others as well.

    Here’s an LLM client you can run directly in your browser: https://lite.koboldai.net

    • @[email protected] · 1 point · 2 years ago

      Aren’t KoboldAI models on par with GPT-3? Why not just use ChatGPT then?

      AI Horde looks dope for image generation though!

      • @[email protected] · 1 point · 2 years ago

        Kobold is a program for running local LLMs. Some seem on par with GPT-3, but normally you’re going to need a very beefy system just to run them slowly (rough sketch of the idea below).

        The benefit is rather clear: less centralized and free from strict policies. But GPT-3 is also miles away from GPT-3.5; exponential growth ftw. I have yet to see something as good and fast as ChatGPT.
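
        For a sense of what running a local LLM looks like in code, here’s a minimal sketch using the llama-cpp-python bindings (not Kobold itself; the model path is an assumption, you’d download a GGUF model yourself):

        ```python
        # Illustrative local-inference sketch; the model file path is an assumption.
        from llama_cpp import Llama

        llm = Llama(model_path="models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

        out = llm(
            "Q: In one sentence, what is a token? A:",
            max_tokens=64,
            stop=["Q:"],
        )
        print(out["choices"][0]["text"])
        ```

        Even a quantized 7B model like that wants several GB of RAM, which is the “beefy system” part.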

  • @[email protected] · 1 point · 2 years ago

    Pretty cool. It seems like they’re all GPT-3.5 at best, but it’s really nice not having to sign in.

  • Treevan 🇦🇺 · 1 point · edited · 2 years ago

    Cheers for this. I tried a few of them while I was waiting around and had one excellent result. I’m a near-expert in one topic, and I often test AIs against my knowledge for fun.

    Perplexity.AI did the best I’ve seen: it sourced its arguments, which, finally, weren’t wrong, so if I needed to I could actually learn more about what it was talking about. It’s not 100%, but the other AIs are so bad at the topic I test them on that I always give up immediately.

    I wouldn’t have seen it if it wasn’t for this post so thank you very much.

    • Treevan 🇦🇺 · 1 point · edited · 2 years ago

      I don’t know if anyone will read this, but I did further testing on Perplexity when I got home. It’s probably not the right spot for it.

      I tried a trickier question and then chose from the suggested prompts to move forward (it suggests questions related to the original one if you’re unsure how to prompt it next). The prompts were intelligent and were probably the next questions I would have asked if I were learning about this topic. In the next answer, it literally quoted something I wrote, almost word for word, on the exact subject, which, according to me (of course), would be the correct answer.

      I’ve never had an AI even reference a single thing I’ve written. I had prompted it into the general area where the things I had written existed, so it should be expected, but it made the connection almost instantly and answered the question 100% accurately.

      As much as I hate it, well done Skynet.

      Edit: After further testing, I can catch it out regularly enough. But still, if I had to tell someone about the topic generally via email, I’d probably recommend it rather than waste time typing it all out myself. I’ve just put myself out of a job.

      • 🐱TheCat · 0 points · 2 years ago

        I’m curious what your area of expertise is? I’m interested in using AI as a programming assistant, but that seems like an entirely different skill set than, say, a language model. I assume some models will be good in one area and some in another.

        • Treevan 🇦🇺 · 1 point · 2 years ago

          Mine is in plants, which a lot of models seem to struggle with. It’s not the science side but the application side, so there is another layer of intelligence the AI has to break through to appeal to me (i.e., answer my particular questions).

          I tested it again with something even more particular and unique to an Australian plant, and it was way off. I think I may have been one of the only people to ever post a particular technique to Reddit, and the AI mustn’t be searching there, since it didn’t know about it even when asked directly. To its credit, it did give a good suggestion on who to contact to find out more.

        • @[email protected] · 0 points · 2 years ago

          How has your experience been using it as a programming assistant? I’m trying to do this too

          • 🐱TheCat · 1 point · 2 years ago

            Very hit and miss. It’s okay if I’m trying to learn something new, and once or twice it has found and suggested a fix I probably wouldn’t have thought of otherwise, but it also makes up methods and syntax, and then you’re playing whack-a-mole to figure out where it hallucinated.

            I think right now it’s not really boosting my productivity much, but I think in another 5ish years it could be better.

  • Infiltrated_ad8271 · 2 points · 2 years ago

    Anonchatgpt should stop being recommended; it really sucks. It has a VERY strict character limit, immediately forgets/ignores the context, requires reCAPTCHA, and the “anon” part of the name is obviously fake if you read the privacy policy.

  • @[email protected] · 0 points · 2 years ago

    Any news on how these tend to perform compared to GPT-4? I finally decided to toss OpenAI 20 quid to try it out for a month, and it’s pretty impressive.