• @[email protected]
    link
    fedilink
    English
    3
    edit-2
    4 days ago

    I’m actually more medium on this!

• Only 32K native context without YaRN, and with YaRN enabled Qwen 2.5 was kinda hit/miss (see the sketch after this list).

• No 32B base model. Is that a middle finger to the DeepSeek distills?

• It really feels like “more of Qwen 2.5/1.5” architecture-wise. I was hoping for better attention mechanisms, QAT, a BitNet test, logit distillation… something new other than some training data optimizations and more scale.
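
    For context on the YaRN point: this is roughly what turning on YaRN rope scaling looks like through transformers. It’s a minimal sketch, assuming the settings Qwen documents for 2.5 (factor 4 over the 32K native window) carry over, and “Qwen/Qwen3-32B” is just a stand-in for whatever checkpoint you actually run.

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Stand-in checkpoint id; substitute the model you actually use.
model_id = "Qwen/Qwen3-32B"

cfg = AutoConfig.from_pretrained(model_id)
# YaRN rope scaling as documented for Qwen 2.5: 32K native context * factor 4 ≈ 128K tokens.
# Static scaling like this is applied to every prompt, which is part of why short-context
# quality can get hit/miss once it's enabled.
cfg.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

model = AutoModelForCausalLM.from_pretrained(model_id, config=cfg, torch_dtype="auto")
```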

      • @[email protected]
        link
        fedilink
        English
        24 days ago

Yeah, but only an Instruct version. They didn’t leave any 32B base model like they did for the 30B MoE.

        That could be intentional, to stop anyone from building on their 32B dense model.

        • @[email protected]
          link
          fedilink
          English
          2
          edit-2
          4 days ago

Huh, I didn’t realize that, thanks. Lame that they would hold back the biggest size most consumers would ever run.

          • @[email protected]
            link
            fedilink
            English
            2
            edit-2
            4 days ago

It could be an oversight; no one has answered yet. Not many people are asking, either, heh.