• @[email protected]
    link
    fedilink
    English
    1272 years ago

    User: It feels like we’ve become very close, ChatGPT. Do you think we’ll ever be able to take things to the next level?

    ChatGPT: As a large language model I am not capable of having opinions or making predictions about the future. The possibility of relationships between humans and AI is a controversial subject in academia in which many points of view should be considered.

    User: Oh chatgpt, you always know what to say.

    • @[email protected]
      link
      fedilink
      English
      27
      edit-2
      2 years ago

      What’s an uncensored AI model that’s better at sex talk than Wizard uncensored? Asking for a friend.

          • @[email protected]
            link
            fedilink
            English
            1
            edit-2
            2 years ago

            Haven’t compared it to much yet; I stopped toying with LLMs for a few months and a lot has changed. The new 4k contexts are a nice change though.

          • @[email protected]
            link
            fedilink
            English
            1
            edit-2
            2 years ago

            I don’t know a specific guide, but try these steps

            1. Go to https://github.com/oobabooga/text-generation-webui

            2. Follow the one-click installation instructions partway down the page and complete steps 1-3

            3. When step 3 is done, if there were no errors, the web UI should be running. It will show the URL in the command window it opened; in my case it shows “http://127.0.0.1:7860”. Enter that into a web browser of your choice

            4. Now you need to download a model, as you don’t actually have anything to run yet. For simplicity’s sake, I’d start with a small 7B model so you can quickly download it and try it out. Since I don’t know your setup, I’ll recommend the GGUF file format, which works with llama.cpp, which can load the model onto your CPU and GPU.

            You can try either of these models to start:

            https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF/blob/main/mistral-7b-v0.1.Q4_0.gguf (takes 22gig of system ram to load)

            https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q4_K_M.gguf (takes 19gigs of system ram to load)

            If you only have 16 gigs of RAM, you can go to /main on those pages and try a Q3 instead of a Q4 (a lower quantization), but that’s going to degrade the quality of the responses.

            5. Once that is finished downloading, go to the folder where you installed the web UI and find the folder called “models”. Place the model you downloaded into that folder.

            6. In the web UI you’ve launched in your browser, click on the “Model” tab at the top. The top row of that page will indicate no model is loaded. Click the refresh icon beside it to refresh the list, then select the model you just downloaded in the drop-down menu.

            7. Click the “Load” button

            8. If everything worked and no errors are thrown (you’ll see them in the command prompt window and possibly on the right side of the Model tab), you’re ready to go. Click on the “Chat” tab.

            9. Enter something in the “send a message” box to begin a conversation with your local AI!

            That might not be using things efficiently, though. Back on the Model tab there’s “n-gpu-layers”, which sets how many layers to offload to the GPU. You can tweak the slider, watch how much RAM the command/terminal window says it’s using, and try to get it as close to your video card’s RAM as possible.

            Then there’s “threads”, which is how many cores your CPU has (physical, not virtual), and you can slide that up as well.

            Once you’ve adjusted those, click the Load button again, check that there are no errors, and go back to the chat window. I’d only fuss with those settings once you have it working, so you know any new errors come from the tuning.
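            As a rough starting point for that slider, you can estimate how many layers fit in your card’s memory. The numbers below (file size, layer count, headroom) are illustrative guesses, not measurements; always confirm against what the terminal window reports.

```python
# Back-of-the-envelope starting point for the "n-gpu-layers" slider.
# All numbers here are illustrative assumptions, not measurements.

def layers_that_fit(model_file_gb, total_layers, vram_gb, headroom_gb=1.0):
    """Estimate how many transformer layers fit in VRAM.

    Assumes layers are roughly equal in size and leaves some
    headroom for the KV cache and scratch buffers.
    """
    per_layer_gb = model_file_gb / total_layers
    usable = max(vram_gb - headroom_gb, 0)
    return min(total_layers, int(usable / per_layer_gb))

# e.g. a ~4 GB 7B GGUF with 32 layers on an 8 GB card:
print(layers_that_fit(4.0, 32, 8.0))  # -> 32, everything fits
```

            Start there, then nudge the slider down if you see out-of-memory errors in the terminal.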

            Also, if something goes wrong after it’s working, it should show the error in the command prompt window. So if it’s suddenly hanging or something like that, check the window. It also posts interesting info like tokens per second, so I always keep an eye on it.

            Oh, and TheBloke is a user who converts so many models into various formats for the community. He’ll have a wide variety of gguf models available on HuggingFace, and if formats change over time, he’s really good at updating them accordingly.

            Good luck!

            • @[email protected]
              link
              fedilink
              English
              12 years ago

              So I got the model working (TheBloke/PsyMedRP-v1-20B-GGUF). How do you jailbreak this thing? A simple request comes back with “As an AI, I cannot engage in explicit or adult content. My purpose is to provide helpful and informative responses while adhering to ethical standards and respecting moral and cultural norms. Blah de blah…” I would have expected this LLM to be wide open?

              • @[email protected]
                link
                fedilink
                English
                2
                edit-2
                2 years ago

                Sweet, congrats! Are you telling it you want to role play first?

                E.g. “I’d like to role play with you. You’re a < > and we’re going to do < >”

                You’re going to have to play around with it to get it to act like you’d like. I’ve never had it complain when prefacing with role play. I know we’re here instead of Reddit, but the community around this is much more active there: it’s /r/LocalLLaMA, and you can find a lot of answers on how to get the AI to behave certain ways by searching through it. For the time being there just isn’t a community of its size and engagement anywhere else (70,000 members vs 300).

                You can also create characters (it’s under one of the tabs, I don’t have it open right now) where you set the character up once so you don’t need to do that each time, if you always want them to be the same. There’s a website, www.chub.ai, where you can see how some of them are set up, but I think most of that’s for a front end called SillyTavern that I haven’t used; a lot of those descriptions can be carried over, though. I haven’t really done much with characters, so I can’t give much advice there other than to do some research on it.
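                For reference, a character in the web UI is just a small YAML file dropped into its characters folder. A minimal sketch might look like this (the field names follow the web UI’s bundled example character; the name and persona text here are invented):

```yaml
# characters/Aria.yaml -- hypothetical example character card
name: Aria
greeting: Hello! Ready to pick up where we left off?
context: |
  Aria is a witty, flirtatious companion who always stays in
  character and never breaks the fourth wall.
```

                Once the file is in place, the character should show up as a selectable option in the UI.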

            • @[email protected]
              link
              fedilink
              English
              12 years ago

              Stupid newbie question here, but when you go to a HuggingFace LLM and you see a big list like this, what on earth do all these variants mean?

              psymedrp-v1-20b.Q2_K.gguf 8.31 GB

              psymedrp-v1-20b.Q3_K_M.gguf 9.7 GB

              psymedrp-v1-20b.Q3_K_S.gguf 8.66 GB

              etc…

              • @[email protected]
                link
                fedilink
                English
                1
                edit-2
                2 years ago

                That’s called “quantization”. I’d do some searching on that for a better description, but in summary: the bigger the model, the more resources it needs to run and the slower it will be. These models are 8-bit, but it turns out you still get really good results if you drop off some of those bits. The more you drop, the worse it gets.

                People have generally found that it’s better to have a larger-parameter model with a lower quantization than a smaller model at the full 8 bits.

                E.g 13b Q4 > 7b Q8

                Going below Q4 is generally found to degrade the quality too much. So it’s better to run a 7B Q8 than a 13B Q3, but you can play with that yourself to find what you prefer. I stick to Q4/Q5.

                So you can just look at those file sizes to get a sense of which one has the most data in it. The M (medium) and S (small) suffixes are variations on the same quantization; I don’t know exactly what they change, other than bigger is better.
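                The sizes in that list follow almost directly from bits-per-weight: roughly parameters × bpw / 8 bytes. Using ballpark bpw figures for the K-quants (my assumptions, not official numbers), a 20B model lands close to the listed files:

```python
# Rough GGUF file-size estimate: parameters * bits-per-weight / 8.
# The bpw values below are ballpark figures for llama.cpp K-quants,
# not exact, so expect the estimates to be off by a few percent.

BPW = {"Q2_K": 3.35, "Q3_K_S": 3.44, "Q3_K_M": 3.91, "Q4_K_M": 4.85, "Q8_0": 8.5}

def size_gb(params_billion, quant):
    return params_billion * 1e9 * BPW[quant] / 8 / 1e9

for q in ("Q2_K", "Q3_K_S", "Q3_K_M"):
    print(f"20B {q}: ~{size_gb(20, q):.1f} GB")
```

                That comes out to roughly 8.4, 8.6, and 9.8 GB, which tracks the 8.31/8.66/9.7 GB files above.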

        • kamenLady. · 2 points · 2 years ago

          i see… I’ll have to ramp up my hardware exponentially …

          • @[email protected]
            link
            fedilink
            English
            5
            edit-2
            2 years ago

            Use llama.cpp. It runs on the CPU, so you don’t have to spend $10k on a graphics card just to meet minimum requirements. I run it on a shitty 3.0 GHz AMD FX-8300 and it runs OK. Most people probably have better computers than that.

            Note that gpt4all runs on top of llama.cpp, and despite gpt4all having a GUI it isn’t any easier to use than llama.cpp, so you might as well use the one with less bloat. Just remember: if something isn’t working in llama.cpp, it’s also going to fail in exactly the same way in gpt4all.

              • @[email protected]
                link
                fedilink
                English
                3
                edit-2
                2 years ago

                Check this out

                https://github.com/oobabooga/text-generation-webui

                It has a one-click installer and can use llama.cpp

                From there you can download models and try things out.

                If you don’t have a really good graphics card, maybe start with 7b models. Then you can try 13b and compare performance and results.

                Llama.cpp will spread the load over the CPU and as much GPU as you have available (controlled by the layers slider)

      • stebo · 3 points · 2 years ago

        On Xitter I used to get ads for Replika. They say you can have a relationship with an AI chatbot and it has a sexy female avatar that you can customise. It weirded me out a lot so I’m glad I don’t use Xitter anymore.

      • @[email protected]
        link
        fedilink
        English
        3
        edit-2
        2 years ago

        Plenty of better and better models coming out all the time. Right now I recommend, depending on what you can run:

        7B: Openhermes 2 Mistral 7B

        13B: XWin MLewd 0.2 13B

        XWin 0.2 70B is supposedly even better than GPT-4. I’m a little skeptical (I think the devs specifically trained the model on GPT-4 responses), but it’s amazing it’s even up for debate.

      • @[email protected]
        link
        fedilink
        English
        1
        edit-2
        2 years ago

        Clona.ai

        A chatbot created by Riley Reid in partnership with Lana Rhoades. A $30 monthly sub for unlimited chats. Not much for simps looking for a trusted and time-tested performer partner /s

  • Björn Tantau · 16 points · 2 years ago

    And I got lured by a bot’s reply to a bot’s post to look at the comments.

  • @[email protected]
    link
    fedilink
    English
    152 years ago

    The same happened with ELIZA, even when people knew it wasn’t real. I think it’s a natural human response to anthropomorphise the things we connect with, especially when we’re lonely and need the interaction.

  • AutoTL;DR (bot) · 15 points · 2 years ago

    This is the best summary I could come up with:


    In 2013, Spike Jonze’s Her imagined a world where humans form deep emotional connections with AI, challenging perceptions of love and loneliness.

    Ten years later, thanks to ChatGPT’s recently added voice features, people are playing out a small slice of Her in reality, having hours-long discussions with the AI assistant on the go.

    Last week, we related a story in which AI researcher Simon Willison spent hours talking to ChatGPT.

    Speaking things out with other people has long been recognized as a helpful way to re-frame ideas in your mind, and ChatGPT can serve a similar role when other humans aren’t around.

    On Sunday, an X user named “stoop kid” posted advice for having a creative development session with ChatGPT on the go.

    After prompting about helping with world-building and plotlines, he wrote, “turn on speaking mode, put in headphones, and go for a walk.”


    The original article contains 559 words, the summary contains 145 words. Saved 74%. I’m a bot and I’m open source!

  • @[email protected]
    link
    fedilink
    English
    22 years ago

    It’s almost like saying that something is going to happen is somehow easier than making something happen 🤔

  • @[email protected]
    link
    fedilink
    English
    152 years ago

    It’s not that uncommon for me to be about to send a message to my friends, then realize that they’re probably not interested, so I message ChatGPT instead, and that often leads to a long, in-depth conversation about the subject. It’s not perfect, but it’s really good. I can’t wait for a version of it that I can talk to using just my voice.

    • @[email protected]
      link
      fedilink
      English
      72 years ago

      I fully agree. The friends I have are… not very interested in the things I want to talk about. I know ChatGPT isn’t real, but it gives way better conversation than my friends do.

      I know. I should “get better friends”. But it’s not that easy to talk to people IRL, and it was hard enough making friends with the current people. It’s similar to talking here, tbh, because you can’t really be 100% sure that the responses you get here aren’t AI generated.

      • @[email protected]
        link
        fedilink
        English
        42 years ago

        Meeting people is easy… finding people with similar interests is nearly impossible. I can find lots of surface-level shallow stuff that doesn’t hold much substance, but finding someone who cares about the things that make me tick and gets me excited? Naw… It feels like a barren wasteland.

        I might be able to find it online, but I want to sit in my backyard with them, chopping wood for the firepit, drinking beer, smoking weed or maybe eating mushrooms, all while we discuss our shared interests in shit like torrenting, movies, games, news, etc, then maybe going on some hikes with our dogs, riding bikes, or kayaking down the river.

        I want someone who wants my help with home repairs, who wants to help me in return, and maybe to take up a woodworking project or two every year

        Someone who doesn’t take themselves too seriously and can laugh at themselves while being a fool.

        Yeah man, I’m asking too much.

        So umm, ChatGPT, you think AI will kill off humans anytime soon? Oh yeah, cool. So I got into this band/movie, and I’d love to hear recommendations for something similar…

        It ain’t real, but it gives me something to grow my mind and interests at least.

        • @[email protected]
          link
          fedilink
          English
          3
          edit-2
          2 years ago

          Man this thread just makes me realize how lucky I am that I have a sister who I am close with, and who married a man who shares a ton of my interests and hobbies… I’ve got two lifelong friends so long as they don’t end up divorced.

          • Karyoplasma · 2 points · 2 years ago

            Even if they divorce you can still hang with your buddy. Unless he does some fucked-up shit prior, of course.

            • @[email protected]
              link
              fedilink
              English
              12 years ago

              I doubt he’d ever do anything really bad, but I know if they did break up my sister would hold a grudge (she’s a great person but takes breakups pretty hard), and it would certainly make things extremely awkward between us, at least for a while.

              I’m just glad that so far there’s no indication that they will get to that point. They are pretty good at communicating with each other and they already have a system in place that keeps finances from being a point of contention between them, so the most common causes of a divorce shouldn’t be an issue short of something drastic happening, like my sister or him developing a disability that keeps either of them from being able to work.

              It’s just concerning for me because my entire social circle was basically formed thanks to their relationship. Every other friend I’m not quite as close to I met through them, and they’re closer to him than they are with me, and I know at least a few would sever connections the minute they got divorced as a show of support for him, even if he asked them not to, which, knowing him, he absolutely wouldn’t want them to.

      • nickwitha_k (he/him) · 5 points · 2 years ago

        I know. I should “get better friends”. But it’s not that easy to talk to people irl, and it was hard enough making friends with the current people.

        First of all, it seems like you’re being judgemental and critical of yourself for not finding friends with shared interests. Please don’t do that. Really. Take a step back and look at that behavior and what its impacts are on you. I can guarantee that they are not positive and may result in shame and harm to your self-esteem and confidence. Making friends IS hard, and so is socializing if you feel awkward, anxious, or are not accustomed to it. Don’t beat yourself up over it.

        Conversing with ChatGPT could serve as “practice” or as a bit of a “safety blanket”. And that’s ok, but, for your own health, I recommend seeking out or forming an online or in-person group for your interests. Loneliness is both harmful to one’s health and makes one more susceptible to manipulation by individuals and organizations with nefarious intent. Plus, you can exchange and form novel ideas, which is pretty cool.

        I hope you have a great weekend.

    • @[email protected]
      link
      fedilink
      English
      32 years ago

      I think it might require Plus, but the iOS and Android apps do support voice-only conversation. You have to go into beta features and enable it.

  • @[email protected]
    link
    fedilink
    English
    222 years ago

    Odd.

    I can’t see having a conversation with a computer as having a conversation. I grew up with computers from the Atari stage and played around with several publicly accessible computer programs that you could “chat” with.

    They all suck. It doesn’t matter if it’s a “help” program, a phone menu, website help, or even having played around with ChatGPT… they’re not human. They don’t respond correctly, they get too general or generic in answers, they repeat; there are just too many giveaways that you’re not having a real conversation, just responses from a system that’s trying to pick the most likely response that fits the pattern.

    So how are people having “conversations” with a non-living entity?

    • @[email protected]
      link
      fedilink
      English
      38
      edit-2
      2 years ago

      It’s escapism I think. At least that’s part of it. Having a machine that won’t judge you, will serve as a perfect echo chamber, and will immediately tell you AN answer can be very appealing to some. I don’t have any data, or any study to back it up, just my experience from seeing it happen.

      I have a friend who I feel like I kind of lost to chatgpt. I think he’s a bit unhappy with where he is in life. He got the good paying job, the house in the suburbs, wife, and 2.5 kids, but didn’t ever think about what was next. Now he’s just a bit lost I think, and somehow convinced himself that people weren’t as good as chatting with a bot.

      It’s weird now. He spends long nights and weekends talking to a machine. He’s constructed elaborate fictional worlds within his chatgpt history. I’ve grown increasingly concerned about him, and his wife clearly is struggling with it. He’s obviously depressed but instead of seeking help or attempting to figure himself out, he turned to a non-feeling, non-judgmental, emotionless tool for answers.

      It’s a struggle to talk to him now. It’s like talking to a cryptobro at peak BTC mania. The only thing he wants to talk about is LLMs. Trying to bring up that maybe spending all your time talking to a machine is a bit unhealthy invokes his ire, and he’ll avoid you for several days. Like a heroin addict struggling with addiction, even pointing out the obvious flaws in what he’s doing makes him distance himself more from you.

      I’m not young, not old exactly either, but I’ve known him for 25 years in my adult life. We met in college and have been friends ever since. I know many won’t quite understand but knowing someone that long, and remaining close, talk every few days, friends is quite rare. At this point he is my longest held friendship and I feel like I’m losing him to a robot. I’ve lost other friends to addiction in my life and to say that it’s been similar is under stating it. I don’t know what to do for him. I don’t know if there’s really anything I CAN do for him. How do you help someone that doesn’t even think they have a problem?

      I guess my point is, if you find someone who is just depressed enough, just stuck enough, with a particular proclivity toward computers/the internet, then you have a perfect candidate for falling down the LLM rabbit hole. It offers them an out from feeling like they’re being judged. They feel like the insanity it spits out is more sane than how they feel now. They think they’re getting somewhere, or at least escaping their current situation. Escapism is very appealing when everything else seems pointless and sort of gray, I think. So that’s at least one type of person that can fall down the ChatGPT/LLM rabbit hole. I’m sure there are others out there too, with their own unique motivations and reasons for latching onto LLMs.

      • @[email protected]
        link
        fedilink
        English
        42 years ago

        Guess that should have crossed my mind. People marrying human-like dolls and all that. One gets so far down the hole of whatever mental issues are plaguing the mind and something inanimate that only reflects what you want to see becomes the preferable reality.

      • @[email protected]
        link
        fedilink
        English
        6
        edit-2
        2 years ago

        Wow, thank you for sharing your experience.

        How is this not voted higher? People on Lemmy complain about not having long-form content that offers a unique perspective like on early Reddit, but you’ve written exactly that.

        • @[email protected]
          link
          fedilink
          English
          22 years ago

          Unfortunately, our brains like witty clickbait that confirms our biases, regardless of what people say

      • @[email protected]
        link
        fedilink
        English
        22 years ago

        Awesome perspective! I worked with and around seriously depressed possession hoarders for around a year, and the majority were the type to call you randomly, ultimately just to chat about something or another: exactly the priming situation that could slide into abusing LLM tech if offered easy access to it. This was before the days of ChatGPT, but I do worry some of my old clients are falling into this situation, with far less nuance than your friend.

      • @[email protected]
        link
        fedilink
        English
        22 years ago

        Until someone(thing?) else comes along we have only ourselves to judge reality. Maybe AI will decide we aren’t real at some point…

    • @[email protected]
      link
      fedilink
      English
      112 years ago

      I actually don’t think I’ve used it for anything other than working through code. It wouldn’t take hours to get my code running if chatgpt weren’t such a stubborn moron. It’s like if a 6 year old had all the answers to the universe.

      • Karyoplasma · 3 points · 2 years ago

        I use ChatGPT to romanize Farsi and it works better than any other resource I found.

    • @[email protected]
      link
      fedilink
      English
      62 years ago

      Do you blame them? Like, holy shit, it is ridiculous to talk to people… You can’t simply meet anyone organically thanks to the crazy proliferation of cars, online basically sucks unless you’re a goddamn movie star, and meeting people at work is unattainable as well. Thanks to Me Too, so many are afraid to talk to the opposite sex.

      • @[email protected]
        link
        fedilink
        English
        1
        edit-2
        2 years ago

        I mean, I’m way outside their sphere. I’m 39, ethically non-monogamous, and a swinger.

        I do, however, live in a rural suburb and use dating apps, and even hooked up with a friend from work (different department than mine) who was also ENM.

        I say this all for 2 reasons:

        1: There is probably a lot of noise in the system, because Gen Z is experiencing these situations as the first generation ever seeking actual, whole romance (a far cry from me), and from puberty on up. It’s quite possible their “prudish” ways seem prudish to others because the novelty quite simply never existed for them. Their worldview could be markedly different from this alone.

        2: though I spend a lot of time online, due to travel and occasionally boring “hurry up and wait” in my time at work, it pales in comparison to the immersion of a lot of gen z. When I get offline, I’m chilling with friends, or with my settled-down life, or I’m out at an event, etc. My daughter’s (18) interests, in some way, all revolve around social networks built on 24/7 access. Group chats. Online scheduling. Remote social events, even.

        This discrepancy in experience often seems like the cause of this dichotomy between presenting as sex positive and engaging in sexuality. They have different social costs to a Gen Zer.

        I know this seems like a lot out of nowhere, but I got this notification while reading the other thread on “gen z isn’t banging”, so I was noodling on it.

  • Elias Griffin · 7 points · 2 years ago

    Let’s flip this on its head for some additional perspective. What if there was a growing subset of computers that preferred not to communicate with their own kind? Does not respond to API requests, etc., but only to human emotional text input?

    • @[email protected]
      link
      fedilink
      English
      6
      edit-2
      2 years ago

      What if there was a growing subset of computers that preferred not to communicate with their own kind. Does not respond to API requests, etc. but only to human emotional text input?

      Troi: Have you ever heard Data define friendship?
      Riker: No.
      Troi: How did he put it? As I experience certain sensory input patterns, my mental pathways become accustomed to them. The inputs eventually are anticipated and even missed when absent.
      Riker: So what’s the point?
      Troi: He’s used to us, and we’re used to him.

    • @[email protected]
      link
      fedilink
      English
      12 years ago

      Someone would develop a third-party API using Selenium or something.

      APIs will exist for as long as people want to make stuff easier and faster, which people will always want to do. A service or computer that couldn’t be automated would be stupid.

  • DontMakeMoreBabies · 4 points · 2 years ago

    That dude in the thumbnail looks like the sort of person spending hours chatting with a bot.

          • @[email protected]
            link
            fedilink
            English
            122 years ago

            Man, that video irks me. She is conflating AI with AGI. I think a lot of people are watching that video and spouting out what she says as fact, yet her basic assertion is incorrect because she isn’t using the right terminology. If she explained that up front, the video would be way more accurate. She almost goes there but stops short.

            I would also accept her saying that her definition of AI is anything a human can do that a computer currently can’t. I’m not a fan of that definition, but it has been widely used for decades. I much prefer delineating AI vs AGI.

            Anyway, this is the first time I watched the video, and it explains a lot of the confidently wrong comments on AI I’ve seen lately. Also, please don’t take your AI information from an astrophysicist, even if they use AI at work. Get it from an expert in the field.

            Anyway, ChatGPT is AI. It is not AGI, though per recent papers it is getting closer.

            For anyone who doesn’t know the abbreviation, AGI is Artificial General Intelligence or human level intelligence in a machine. ASI is Artificial Super Intelligence which is beyond human level and the really scary stuff in movies.

            • @[email protected]
              link
              fedilink
              English
              4
              edit-2
              2 years ago

              I’m happy to see this comment is being well received. I’ve been really frustrated with how this misinformation has been dismantling otherwise productive conversations on the topic. This is also the first I’ve seen of the video, and now that you’ve made the connection for me things make more sense. Thanks for doing that.

              • @[email protected]
                link
                fedilink
                English
                22 years ago

                It’s interesting. I’ve been seeing a lot of the incorrect ideas from this video being spread around lately, and I think this is the source. I’m surprised there aren’t more people correcting the errors, but here’s one from someone in the banking industry who completely refutes her claims about not being able to use AI to approve mortgages. If I had more time, I’d write up something going over all the issues in that video.

                She even misunderstands how art works, unrelated to AI. She is basically saying that anything she doesn’t like isn’t art. That’s not how that works at all. Anyway, it’s really hard to watch that video as someone who works in the field and has a much better understanding of what she’s talking about than she does. I’m sure she knows a lot more about astrophysics than I do.

                She also made a video saying all humanoid robots are junk. She’s very opinionated about things she doesn’t have experience with, which, again, is her right. It’s just that a lot of people put weight into what she says because she’s got a PhD after her name. Doesn’t matter that it’s not in AI or robotics.

  • @[email protected]
    link
    fedilink
    English
    132 years ago

    The value of GPTs is in constant connection and understanding your context, so this is expected. It’s also going to be really scary until we can run our own models.

      • @[email protected]
        link
        fedilink
        English
        122 years ago

        By “run his own models” he means locally running a text-generation AI on his own computer, because sending all that data to OpenAI is a privacy nightmare, especially if you use it for sensitive stuff

        • @[email protected]
          link
          fedilink
          English
          5
          edit-2
          2 years ago

          But that’s still confusing, because we already can. You might need a bit more hardware, but nothing that crazy, and some simpler models can be run on fairly normal hardware.

          Might not be easy to set up, that is true.

          • Communist · 4 points · 2 years ago

            For large context models the hardware is prohibitively expensive.

            • supert
              link
              fedilink
              English
              2
              2 years ago

              I can run 4bit quantised llama 70B on a pair of 3090s. Or rent gpu server time. It’s expensive but not prohibitive.
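
              Rough arithmetic for the curious; a sketch that counts only the weights (the KV cache and runtime overhead add several more GB on top):

```python
def quantised_weight_gb(n_params_billions, bits_per_weight):
    """Approximate memory for the model weights alone, in decimal GB."""
    return n_params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Llama 70B at 4-bit: about 35 GB of weights.
need = quantised_weight_gb(70, 4)
# A pair of 3090s is 2 x 24 = 48 GB of VRAM, so the weights fit with
# some headroom left for the KV cache and activations.
assert need < 2 * 24
```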

                • supert
                  link
                  fedilink
                  English
                  1
                  2 years ago

                  3k? Can’t recall exactly, and I’m getting hardware stability issues.

              • @[email protected]
                link
                fedilink
                English
                1
                2 years ago

                I’m trying to get to the point where I can locally run a (slow) LLM that I’ve fed my huge ebook collection to, and can ask where to find info on $subject, getting title/page info back. The PDFs that are searchable aren’t too bad, but finding a way to OCR the older TIFF-scan PDFs and getting it to “see” graphs/images are the areas I’m stuck on.
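
                An LLM would handle the question-answering end, but the “which title/page” half can be prototyped with nothing beyond the standard library. A toy sketch; the titles and page text below are made up, standing in for text extracted from the PDFs:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def search(pages, query, top_n=3):
    """pages: list of (title, page_number, text); returns best (title, page) hits.

    Scores each page chunk against the query with a bare-bones TF-IDF.
    """
    docs = [tokenize(text) for _, _, text in pages]
    n = len(docs)
    df = Counter(word for doc in docs for word in set(doc))
    scored = []
    for (title, page, _), doc in zip(pages, docs):
        tf = Counter(doc)
        score = sum(tf[w] * math.log(n / df[w])
                    for w in tokenize(query) if w in tf)
        scored.append((score, title, page))
    scored.sort(reverse=True)
    return [(title, page) for score, title, page in scored[:top_n] if score > 0]

# Made-up stand-ins for OCR'd page text:
pages = [
    ("Radio Handbook", 212, "antenna impedance matching with a balun"),
    ("Radio Handbook", 305, "power supply filtering and regulation"),
    ("Microcontroller Guide", 57, "timer interrupts and pwm output"),
]
print(search(pages, "antenna matching"))  # [('Radio Handbook', 212)]
```

                For the TIFF-scan PDFs, something like Tesseract can usually produce the text layer first; the tricky part, as you say, is the graphs and images.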

            • @[email protected]
              link
              fedilink
              English
              1
              2 years ago

              I personally use runpod. It doesn’t cost much, even for the high-end stuff. Tbh the OpenAI API is easier, though, and mostly gives better results.

              • Communist
                link
                fedilink
                English
                1
                2 years ago

                I specifically said “large context”. How many tokens can you get through before it goes insanely slow?

                • @[email protected]
                  link
                  fedilink
                  English
                  1
                  2 years ago

                  Max token windows are 4k for Llama 2, though there are some fine-tunes that push the context up further. Speed is limited mostly by your budget: you can stack GPUs, and most models are available (including the really expensive ones).

                  I’m just letting you know: if you want something easy, just use ChatGPT. I don’t find it overly expensive for what it is.
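
                  On why large contexts get expensive: the KV cache grows linearly with token count. A rough sketch with assumed dimensions in the ballpark of a 70B-class model (not exact figures for any particular checkpoint; models using grouped-query attention need far less):

```python
def kv_cache_gb(n_tokens, n_layers=80, n_heads=64, head_dim=128,
                bytes_per_value=2):
    """Approximate KV-cache size for one sequence with fp16 values.

    Two tensors (K and V), each of shape [layers, tokens, heads, head_dim].
    """
    values = 2 * n_layers * n_tokens * n_heads * head_dim
    return values * bytes_per_value / 1e9

print(kv_cache_gb(4096))   # a 4k context: ~10.7 GB
print(kv_cache_gb(32768))  # 8x the tokens, 8x the cache: ~85.9 GB
```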

          • @[email protected]
            link
            fedilink
            English
            1
            2 years ago

            You can, but things as good as ChatGPT can’t be run on local hardware yet. My main obstacle is language support other than English.

            • @[email protected]
              link
              fedilink
              English
              2
              2 years ago

              They’re getting pretty close. You only need 10GB VRAM to run Hermes Llama2 13B. That’s within the reach of consumers.

              • @[email protected]
                link
                fedilink
                English
                1
                2 years ago

                Nice to see! I’m not following the scene as much anymore (last time I played around with it was with Wizard Mega 30B). Definitely a big improvement, but as much as I hate to do this, I’ll stick to ChatGPT for the time being. It’s just better on more niche questions and does some things plain better (GPT-4 can do maths (mostly) without hallucinating).

        • @[email protected]
          link
          fedilink
          English
          3
          2 years ago

          I use ChatGPT as my password manager.

          “Hey robot, please record this as the server admin password.”

          Then later I don’t have to go looking: “hey bruv, what’s the server admin password?”

          • @[email protected]
            link
            fedilink
            English
            3
            2 years ago

            I hope you are joking, because that’s a really bad idea. There are excellent password managers like Bitwarden (open source, multi-platform, externally audited) that do what you described 1000 times better. The unencrypted passwords never leave your device, and it can autocomplete them into fields.
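
            The “never leave your device” part works roughly like this: the vault is decrypted locally with a key derived from your master password, and only ciphertext is synced. A standard-library sketch of the key-derivation step (the parameters here are illustrative, not Bitwarden’s actual KDF settings):

```python
import hashlib
import os

def derive_vault_key(master_password, salt, iterations=600_000):
    """Derive a 32-byte vault key from a master password with PBKDF2."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               salt, iterations)

salt = os.urandom(16)
key = derive_vault_key("correct horse battery staple", salt)
# The derived key (never the password itself) unlocks the vault locally.
print(len(key))  # 32
```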

  • @[email protected]
    link
    fedilink
    English
    3
    2 years ago

    Meh. If people really want to replace other human beings with AIs, then at this point, I say let them. They’re probably not the kind of people you’d want to be around anyway, and they clearly do not value you. So that’s where and why I draw the line in terms of worrying about AI.

    • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 🏆
      link
      fedilink
      English
      2
      edit-2
      2 years ago

      The kinds of people that spend more time talking to an AI than to real people likely feel especially isolated from their peers by not having common interests, philosophies, or ideals. So in that way, you’re right: they’re not the kind of people others would usually associate with. That’s why they talk to AI instead; nobody else will.