• @[email protected]
    link
    fedilink
    English
    1
    edit-2
    1 month ago

    You can experiment on your own GPU by running the tests using a variety of models of different generations (LLAMA 2 class 7B, LLAMA 3 class 7B, Gemma, Granite, Qwen, etc…)

    Even the lowest-end desktop hardware can run at least 4B models. The only real difficulty is scripting the test system, but the papers are usually helpful in describing their test methodology.
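    A minimal sketch of what "scripting the test system" might look like (everything here is hypothetical: `generate` is a stand-in for whatever local inference backend you actually run, e.g. a llama.cpp wrapper or an HTTP call, and the model names are illustrative):

```python
# Hypothetical benchmark harness: loops a prompt set over several local
# models and scores exact-match accuracy. The `generate` callable is a
# stand-in for a real inference backend.
from typing import Callable

def run_benchmark(models: list[str],
                  cases: list[tuple[str, str]],
                  generate: Callable[[str, str], str]) -> dict[str, float]:
    """Return per-model accuracy over (prompt, expected_answer) pairs."""
    scores = {}
    for model in models:
        correct = sum(
            1 for prompt, expected in cases
            if generate(model, prompt).strip().lower() == expected.lower()
        )
        scores[model] = correct / len(cases)
    return scores

if __name__ == "__main__":
    # Stub backend so the harness runs without a GPU; swap in a real one.
    def fake_generate(model: str, prompt: str) -> str:
        return "4" if "2+2" in prompt else "unsure"

    cases = [("What is 2+2?", "4"), ("Capital of France?", "Paris")]
    print(run_benchmark(["qwen-4b", "gemma-4b"], cases, fake_generate))
```

    Exact-match scoring is the crudest possible metric; the papers being replicated usually specify something fuzzier (multiple-choice extraction, log-likelihood comparison), which is where most of the scripting effort actually goes.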

    • @[email protected]
      link
      fedilink
      English
      5
      1 month ago

      You can experiment on your own GPU

      you have lost the game

      you have been voted off the island

      you are the weakest link

      etc etc etc

      • @[email protected]
        link
        fedilink
        English
        2
        1 month ago

        This is the most “insufferable redditor” stereotype shit possible, and to think we’re not even on Reddit

        • @[email protected]
          link
          fedilink
          English
          5
          1 month ago

          a’ight, sure bub, let’s play

          tell me what hw spec I need to deploy some kind of interactive user-facing prompt system backed by whatever favourite LLM/transformer-model you want to pick. idgaf if it’s llama or qwen or some shit you’ve got brewing in your back shed - if it’s on huggingface, fair game. here’s the baselines:

          • expected response latencies: human, or better
          • expected topical coherence: mid-support capability or above
          • expected correctness: at worst “I misunderstood $x” in the sense of “whoops, sorry, I thought you were asking about ${foo} but I answered about ${bar}”; i.e. actual, concrete contextual understanding

          (so, basically, anything a competent L2 support engineer at some random ISP or whatever could do)

          hit it, I’m waiting.

          • David GerardOPM
            link
            fedilink
            English
            8
            1 month ago

            you’ll be waiting a while. it turns out “i’m not saying it’s always programming.dev, but” was already in my previous ban reasons, and it was this time too.

        • @[email protected]
          link
          fedilink
          English
          7
          1 month ago

          nah, the most insufferable Reddit shit was when you decided Lemmy doesn’t want to learn because somebody called you out on the confident bullshit you’re making up on the spot

          like LLM like shithead though am I right?

          • @[email protected]
            link
            fedilink
            English
            6
            edit-2
            1 month ago

            like LLM like shithead

            fuck, there’s potential here, but a bit too specific for a t-shirt?

            like llm like idiot

            perhaps?

    • @[email protected]
      link
      fedilink
      English
      7
      1 month ago

      👨🏿‍🦲: how many billions of models are you on

      🗿: like, maybe 3, or 4 right now my dude

      👨🏿‍🦲: you are like a little baby

      👨🏿‍🦲: watch this

      glue pizza

      • @[email protected]
        link
        fedilink
        English
        1
        edit-2
        1 month ago

        The most recent Qwen model supposedly works really well for cases like that, but I haven’t tested that one myself; I’m going off what some dude on reddit tested

          • @[email protected]
            link
            fedilink
            English
            1
            edit-2
            1 month ago

            Not making these famous logical errors

            For example, how many Rs are in Strawberry? Or shit like that

            (Although that one is a bad example because token-based models will fundamentally make such mistakes. There is a new technique that lets LLMs process byte-level information that fixes it, however)
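            A toy illustration of why token-based models trip on this (the segmentation below is made up for the example; real BPE vocabularies differ):

```python
# A token-based model "sees" subword chunks, not letters, so per-character
# questions are answered against the wrong units; byte/character-level
# access gets it right trivially. The split below is illustrative only.
word = "strawberry"
fake_tokens = ["str", "aw", "berry"]   # made-up BPE-style segmentation

# Character-level view: count every 'r' in the raw string.
char_count = word.count("r")           # 3

# A model reasoning over whole-token identities can't look inside the
# chunks; a naive per-token heuristic (tokens starting with 'r') fails.
token_level_guess = sum(1 for t in fake_tokens if t.startswith("r"))  # 0

print(char_count, token_level_guess)   # 3 0
```

            The real failure mode is subtler than this heuristic, but the root cause is the same: the letters inside a token are not directly visible to the model.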

            • @[email protected]
              link
              fedilink
              English
              6
              1 month ago

              oh, I get it, you personally choose not to make these structurally-repeatable-by-foundation errors? you personally choose to be a Unique And Correct Snowflake?

              wow shit damn, I sure want to read your eventual uni paper, see what kind of distinctly novel insight you’ve had to wrangle this domain!