• @[email protected]
    link
    fedilink
    91 month ago

    It’s open source, though not because it wants to be: open-sourcing is the best way to compete internationally with mainstream non-Chinese software. You can easily remove any censorship included by default.

        • @[email protected]
          link
          fedilink
          11 month ago

          I meant, how does one run it locally. I see a lot of people saying to just “run it locally” but for someone without a background in coding that doesn’t really mean much.

          • @[email protected]
            link
            fedilink
            English
            1
            edit-2
            1 month ago

            You don’t need a background in coding at all. In fact, the spaces of machine learning and programming are almost completely separate.

            1. Download Ollama.

            2. Depending on the power of your GPU, run one of the following commands:

              • DeepSeek-R1-Distill-Qwen-1.5B: ollama run deepseek-r1:1.5b

              • DeepSeek-R1-Distill-Qwen-7B: ollama run deepseek-r1:7b

              • DeepSeek-R1-Distill-Llama-8B: ollama run deepseek-r1:8b

              • DeepSeek-R1-Distill-Qwen-14B: ollama run deepseek-r1:14b

              • DeepSeek-R1-Distill-Qwen-32B: ollama run deepseek-r1:32b

              • DeepSeek-R1-Distill-Llama-70B: ollama run deepseek-r1:70b

            Bigger models mean better output, but also longer generation times and higher memory requirements.
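
            Once a model is running, Ollama also exposes a local HTTP API (on port 11434 by default), so you can query it from a script instead of the terminal. A minimal sketch in Python using only the standard library, assuming Ollama is running and you have already pulled `deepseek-r1:1.5b`:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # "stream": False asks Ollama to return a single complete JSON
    # response instead of streaming tokens line by line
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # the generated text is in the "response" field
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance with the model pulled):
#   print(ask("deepseek-r1:1.5b", "Why is the sky blue?"))
```

            The same request shape works for any of the model tags listed above; just swap the model name.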