• @[email protected]
    link
    fedilink
    126
    edit-2
    3 months ago

    One of those rare lucid moments from the stock market? Is this the market correction everyone knew was coming, or is some famous techbro going to technobabble some more about AI overlords until the stocks return to their fantasy valuations?

    • @[email protected]
      link
      fedilink
      100
      3 months ago

      It’s quite lucid. The new thing uses a fraction of the compute the old thing needed for the same results, so Nvidia cards, for example, are going to be in far less demand. That being said, Nvidia stock was riding way too high on the AI hype for the last two years or so, and despite the plunge it’s still not back to normal.

      • @[email protected]
        link
        fedilink
        32
        3 months ago

        My understanding is it’s just an LLM (not multimodal), and the training time/cost looks about the same for most of these.

        I feel like the world’s gone crazy, but OpenAI (and others) are pursuing more complex multimodal model designs. Those are going to be more expensive due to image/video/audio processing. Unless I’m missing something, that would probably account for the cost difference between current and previous iterations.

        • @[email protected]
          link
          fedilink
          English
          39
          3 months ago

          The thing is that R1 is being compared to gpt4, or in some cases gpt4o. That model cost OpenAI something like $80M to train, so roughly matching its performance for an order of magnitude less cost is not nothing. DeepSeek also says the model is much cheaper to run for inference as well, though I can’t find any figures on that.

          • @[email protected]
            link
            fedilink
            5
            3 months ago

            My main point is that gpt4o and the other models it’s being compared to are multimodal; R1 is only an LLM from what I can find.

            Something trained on audio/pictures/videos/text is probably going to cost more than just text.

            But maybe I’m missing something.

            • @[email protected]
              link
              fedilink
              English
              23
              3 months ago

              The original gpt4 is just an LLM though, not multimodal, and its training cost is still estimated to be over 10x R1’s, if you believe the numbers. I think where R1 is compared to 4o is in so-called reasoning, where you can see the chain of thought or internal prompt paths that the model uses to (expensively) produce an output.

              • @[email protected]
                link
                fedilink
                5
                edit-2
                3 months ago

                I’m not sure how good a source it is, but Wikipedia says it was multimodal and came out about two years ago - https://en.m.wikipedia.org/wiki/GPT-4.

                That being said, the comparisons are of the LLM benchmarks against gpt4o, so maybe that’s a valid argument for the LLM capabilities.

                However, I think a lot of the more recent models are pursuing architectures with the ability to act on their own, like Claude’s computer use - https://docs.anthropic.com/en/docs/build-with-claude/computer-use, which DeepSeek R1 is not attempting.

                Edit: and I think the real money will be in the more complex models focused on workflow automation.

              • veroxii
                link
                fedilink
                4
                3 months ago

                Holy smoke balls. I wonder what else they have ready to release over the next few weeks. They might have a whole suite of things just waiting to strategically deploy.

      • @[email protected]
        link
        fedilink
        11
        3 months ago

        How is the “fraction of compute” being verified? Is the model available for independent analysis?

        • @[email protected]
          link
          fedilink
          27
          3 months ago

          It’s freely available with a permissive license, but I don’t think that claim has been verified yet.

          • @[email protected]
            link
            fedilink
            English
            9
            3 months ago

            And the data is not available. Knowing the weights of a model doesn’t really tell us much about its training costs.

      • davel [he/him]
        link
        fedilink
        English
        5
        3 months ago

        If AI is cheaper, then we may use even more of it, and that would soak up at least some of the slack, though I have no idea how much.

    • scratsearcher 🔍🔮📊🎲
      link
      fedilink
      English
      2
      3 months ago

      Most rational market: Sell off NVIDIA stock after Chinese company trains a model on NVIDIA cards.

      Anyways NVIDIA still up 1900% since 2020 …

      how fragile is this tower?

    • Cowbee [he/they]
      link
      fedilink
      22
      3 months ago

      On the bright side, the clear fragility and lack of direct connection to real productive forces show the instability of the present system.

      • @[email protected]
        link
        fedilink
        English
        9
        3 months ago

        And no matter how many protectionist measures the US implements, we’re seeing that it’s losing the global competition. I guess protectionism and oligarchy aren’t the best ways to accomplish the stated goals of a capitalist economy. How soon before China is leading in every industry?

        • Cowbee [he/they]
          link
          fedilink
          9
          3 months ago

          This conclusion was foregone when China began to focus on developing the Productive Forces and the US took that for granted. Without a hard pivot, the US can’t even hope to catch up to China’s productive trajectory, and even if it does pivot, that doesn’t mean it has a chance to catch up in the first place.

          In fact, protectionism has frequently backfired, and had other nations seeking inclusion into BRICS or more favorable relations with BRICS nations.

    • @[email protected]
      link
      fedilink
      6
      3 months ago

      That’s the thing: if the cost of AI goes down, and AI is a valuable input for businesses, that should be a good thing for the economy. To be sure, not for the tech sector that sells these models, but for all of the companies buying these services it should be great.

  • synae[he/him]
    link
    fedilink
    English
    18
    3 months ago

    Idiotic market reaction. Buy the dip, if that’s your thing? But this is all disgusting: day trading and chasing news like fucking vultures.

    • @[email protected]
      link
      fedilink
      10
      edit-2
      3 months ago

      Yep. It’s obviously a bubble, but one that won’t pop from just this. The motive is replacing millions of employees with automation, and the bubble will pop when it’s clear that won’t happen, or when the technology is mature enough that we stop expecting rapid improvement in capabilities.

      • @[email protected]
        link
        fedilink
        English
        3
        3 months ago

        I love the fact that the same executives who obsess over return to office because WFH ruins their socialization and sexual harassment opportunities think they’re going to be able to replace all their employees with AI. My brother in Christ. You have already made it clear that you care more about work being your own social club than you do about actual output or profitability. You are NOT going to embrace AI. You can’t force an AI to have sex with you in exchange for keeping its job, and that’s the only trick you know!

      • @[email protected]
        link
        fedilink
        1
        3 months ago

        Well, both of those things have been true for months if not years, so if those are the conditions for a pop, then they are met.

        • @[email protected]
          link
          fedilink
          2
          edit-2
          3 months ago

          It’s gambling. The potential payoff is still huge for whoever gets there first. Short term anyway. They won’t be laughing so hard when they fire everyone and learn there’s nobody left to buy anything.

              • @[email protected]
                link
                fedilink
                1
                3 months ago

                Oh! Hahahaha. No.

                The VC techfeudalist wet dreams of LLMs replacing humans are dead; they just want to milk the illusion as long as they can.

                • @[email protected]
                  link
                  fedilink
                  1
                  edit-2
                  3 months ago

                  The tech is already good enough that any call center employees should be looking for other work. That one is just waiting on the company-specific implementations. In twenty years, calling a major company’s customer service and having any escalation path that involves a human will be as rare as finding a human elevator operator today.

        • @[email protected]
          link
          fedilink
          2
          3 months ago

          How are both conditions met when all this just started 2(?) years ago? And progress is still going very fast.

          • @[email protected]
            link
            fedilink
            1
            3 months ago

            All this started in 2023? Alas no, time marches on; LLMs have been a thing for decades, and the main boom happened more around 2021. Progress is not fast, no; these are companies throwing as much compute at their problems as they can. DeepSeek has caused a $2T drop by being marginal progress in a field (LLMs specifically) that’s out of ideas.

            • @[email protected]
              link
              fedilink
              1
              3 months ago

              The huge AI/LLM boom/bubble started after ChatGPT came out.

              But of fucking course it existed before.

              • @[email protected]
                link
                fedilink
                1
                3 months ago

                Regardless of where you want to define the starting point of the boom, it’s been clear for months, up to years depending on who you ask, that they are plateauing. And harshly. Stop listening to hypesters and people with a financial interest in LLMs being magic.

    • NoSpotOfGround
      link
      fedilink
      65
      edit-2
      3 months ago

      Text below, for those trying to avoid Twitter:

      Most people probably don’t realize how bad news China’s Deepseek is for OpenAI.

      They’ve come up with a model that matches and even exceeds OpenAI’s latest model o1 on various benchmarks, and they’re charging just 3% of the price.

      It’s essentially as if someone had released a mobile on par with the iPhone but was selling it for $30 instead of $1000. It’s this dramatic.

      What’s more, they’re releasing it open-source so you even have the option - which OpenAI doesn’t offer - of not using their API at all and running the model for “free” yourself.

      If you’re an OpenAI customer today you’re obviously going to start asking yourself some questions, like “wait, why exactly should I be paying 30X more?”. This is pretty transformational stuff, it fundamentally challenges the economics of the market.

      It also potentially enables plenty of AI applications that were just completely unaffordable before. Say for instance that you want to build a service that helps people summarize books (random example). In AI parlance the average book is roughly 120,000 tokens (since a “token” is about 3/4 of a word and the average book is roughly 90,000 words). At OpenAI’s prices, processing a single book would cost almost $2 since they charge $15 per 1 million tokens. Deepseek’s API however would cost only $0.07, which means your service can process about 30 books for $2 vs just 1 book with OpenAI: suddenly your book summarizing service is economically viable.

      Or say you want to build a service that analyzes codebases for security vulnerabilities. A typical enterprise codebase might be 1 million lines of code, or roughly 4 million tokens. That would cost $60 with OpenAI versus just $2.20 with DeepSeek. At OpenAI’s prices, doing daily security scans would cost $21,900 per year per codebase; with DeepSeek it’s $803.

      So basically it looks like the game has changed. All thanks to a Chinese company that just demonstrated how U.S. tech restrictions can backfire spectacularly - by forcing them to build more efficient solutions that they’re now sharing with the world at 3% of OpenAI’s prices. As the saying goes, sometimes pressure creates diamonds.

      Last edited 4:23 PM · Jan 21, 2025 · 932.3K Views
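
      The arithmetic in the quoted post checks out; here is a quick sketch using only the per-million-token rates the post quotes ($15 for OpenAI and the roughly $0.55 implied by the $2.20-per-4M-tokens figure - these are the tweet’s numbers, not a current price sheet):

      ```python
      # Rough cost comparison using the per-million-token rates quoted in the post above.
      # These rates come from the quoted tweet, not from any official price sheet.
      OPENAI_PER_M = 15.00    # USD per 1M tokens, as quoted
      DEEPSEEK_PER_M = 0.55   # USD per 1M tokens, implied by the $2.20 / 4M-token figure

      def cost(tokens: int, rate_per_million: float) -> float:
          """Return the USD cost of processing `tokens` at `rate_per_million` USD per 1M tokens."""
          return tokens / 1_000_000 * rate_per_million

      book = 120_000        # ~90,000 words at ~4/3 tokens per word
      codebase = 4_000_000  # ~1M lines of enterprise code

      print(f"Book:     ${cost(book, OPENAI_PER_M):.2f} vs ${cost(book, DEEPSEEK_PER_M):.2f}")
      print(f"Codebase: ${cost(codebase, OPENAI_PER_M):.2f} vs ${cost(codebase, DEEPSEEK_PER_M):.2f}")
      print(f"Daily scans, per year: ${cost(codebase, OPENAI_PER_M) * 365:,.0f} vs ${cost(codebase, DEEPSEEK_PER_M) * 365:,.0f}")
      ```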

    • @[email protected]
      link
      fedilink
      English
      3
      edit-2
      3 months ago

      Deepthink R1 (the reasoning model) was only released on January 20. Still took a while though.

  • ☆ Yσɠƚԋσʂ ☆
    link
    fedilink
    17
    3 months ago

    I’d argue this is even worse than Sputnik for the US because Sputnik spurred technological development that boosted the economy. Meanwhile, this is popping the economic bubble in the US built around the AI subscription model.

  • @[email protected]
    link
    fedilink
    20
    3 months ago

    So if the Chinese version is so efficient, and is open source, then couldn’t OpenAI and Anthropic run the same thing on their huge hardware and get enormous capacity out of it?

    • @[email protected]
      link
      fedilink
      11
      3 months ago

      Not necessarily… if I gave you my “faster car” to run on your private 7-lane highway, you could definitely squeeze every last bit of speed out of it, but no more.

      DeepSeek works as intended on 1% of the hardware the others allegedly “require” (allegedly, remember this is all a super hype bubble)… if you run it on super powerful machines, it will perform nicer, but only to a certain extent… it will not suddenly develop more/better qualities just because the hardware it runs on is better.

      • @[email protected]
        link
        fedilink
        2
        3 months ago

        Didn’t DeepSeek solve some of the data wall problems by creating good chain-of-thought data with an intermediate RL model? That approach should work with the tried and tested scaling laws, just using much more compute.

      • @[email protected]
        link
        fedilink
        3
        3 months ago

        This makes sense, but it would still allow a hundred times more people to use the model without running into limits, no?

    • @[email protected]
      link
      fedilink
      English
      9
      3 months ago

      OpenAI could use less hardware to get similar performance if they used the Chinese version, but they already have enough hardware to run their model.

      Theoretically the best move for them would be to train their own, larger model using the same technique (as to still fully utilize their hardware) but this is easier said than done.

  • @[email protected]
    link
    fedilink
    25
    3 months ago

    Looks like it is not any smarter than the other junk on the market. That people consider AI to be “intelligence” may be rooted in their own deficits in that area.

    And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware. Hurray! Progress!

    • @[email protected]
      link
      fedilink
      9
      3 months ago

      Once I understood LLMs, I started to understand some people and their “reasoning” better. That’s how they work.

    • @[email protected]
      link
      fedilink
      10
      edit-2
      3 months ago

      It is progress in a sense. The West really put the spotlight on its shiny new expensive toy and banned the export of toy-maker parts to rival countries.

      One of those countries made a cheap toy out of janky, unwanted parts for much less money, and it’s on par with or better than the West’s.

      As for why we’re having an arms race based on AI, I genuinely don’t know. It feels like a race to the bottom, with the fallout being the death of the internet (for better or worse).

    • @[email protected]
      link
      fedilink
      English
      2
      3 months ago

      I’m tired of this uninformed take.

      LLMs are not a magical box you can ask anything of and get answers from. If you’re lucky, blindly asking questions can give you some accurate general data, but just like with human brains, you aren’t going to be able to accurately recreate random trivia verbatim from a neural net.

      What LLMs are useful for, and how they should be used, is as a non-deterministic context-parsing tool. When people talk about feeding it more data they think of how these things are trained. But you also need to give it grounding context outside of the prompt itself. Give it a PDF manual, a website link, documentation, whatever, and it will use that as context for what you ask it. You can even set it to link to its references.

      You still have to know enough to be able to validate the information it is giving you, but that’s the case with any tool. You need to know how to use it.

      As for the spyware part, that only matters if you are using the hosted instances they provide. Even for OpenAI stuff you can run the models locally with opensource software and maintain control over all the data you feed it. As far as I have found, none of the models you run with Ollama or other local AI software have been caught pushing data to a remote server, at least using open source software.
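
      A minimal sketch of that grounding workflow, assuming a local Ollama install listening on its default port (the model tag and file name are placeholders, not a recommendation):

      ```python
      # Minimal sketch: feed a local manual to a locally hosted model as grounding context.
      # Assumes Ollama is running on its default port; model tag and file path are placeholders.
      import json
      import urllib.request

      with open("product_manual.txt", encoding="utf-8") as f:
          manual = f.read()

      prompt = (
          "Answer using ONLY the reference material below. "
          "If the answer is not in the material, say so.\n\n"
          f"--- REFERENCE ---\n{manual}\n--- END REFERENCE ---\n\n"
          "Question: How do I reset the device to factory settings?"
      )

      request = urllib.request.Request(
          "http://localhost:11434/api/generate",
          data=json.dumps({"model": "deepseek-r1:14b", "prompt": prompt, "stream": False}).encode(),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(request) as response:
          print(json.loads(response.read())["response"])
      ```

      Nothing in that exchange leaves the machine, which is the point about keeping control of the data you feed it.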

    • @[email protected]
      link
      fedilink
      7
      3 months ago

      The difference is that you can actually download this model and run it on your own hardware (if you have sufficient hardware). In that case it won’t be sending any data to China. These models are still useful tools. As long as you’re not interested in particular parts of Chinese history of course ;p

    • @[email protected]
      link
      fedilink
      English
      3
      edit-2
      3 months ago

      And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware.

      LLMs aren’t spyware; they’re graphs that organize large bodies of data for quick and user-friendly retrieval. The Wikipedia schema accomplishes a similar, albeit more primitive, role. There’s nothing wrong with the fundamentals of the technology, just the applications that Westoids doggedly insist it be used for.

      If you no longer need to boil down half a Great Lake to create the next iteration of Shrimp Jesus, that’s good whether or not you think Meta should be dedicating millions of hours of compute to this mind-eroding activity.

      • @[email protected]
        link
        fedilink
        English
        3
        3 months ago

        There’s nothing wrong with the fundamentals of the technology, just the applications that Westoids doggedly insist it be used for.

        Westoids? Are you the type of guy I feel like I need to take a shower after talking to?

      • @[email protected]
        link
        fedilink
        2
        3 months ago

        I think maybe it’s naive to think that if the cost goes down, shrimp jesus won’t just be in higher demand. Shrimp jesus has no market cap; bullshit has no market cap. If you make it more efficient to flood cyberspace with bullshit, cyberspace will just be flooded with more bullshit. Those Great Lakes will still boil, don’t worry.

        • @[email protected]
          link
          fedilink
          English
          1
          3 months ago

          I think maybe it’s naive to think that if the cost goes down, shrimp jesus won’t just be in higher demand.

          Not that demand will go down, but that the economic cost of generating this nonsense will go down. The number of people shipping this back and forth to each other isn’t going to meaningfully change, because Facebook has saturated the social media market.

          If you make it more efficient to flood cyberspace with bullshit, cyberspace will just be flooded with more bullshit.

          The efficiency is in the real cost of running the model, not in how it is applied. The real bottleneck for AI right now is human adoption. Guys like Altman keep insisting a new iteration (that requires a few hundred miles of nuclear power plants to power) will finally get us a model that people want to use. And speculators in the financial sector seemed willing to cut him a check to go through with it.

          Knocking down the real physical cost of this boondoggle is going to de-monopolize this awful idea, which means Altman won’t have a trillion dollar line of credit to fuck around with exclusively. We’ll still do it, but Wall Street won’t have Sam leading them around by the nose when they can get the same thing for 1/100th of the price.

    • @[email protected]
      link
      fedilink
      6
      3 months ago

      artificial intelligence

      AI has been used in game development for a while, and I haven’t seen anyone complain about the name before it became synonymous with image/text generation.

      • @[email protected]
        link
        fedilink
        English
        3
        3 months ago

        It was a misnomer there too, but at least people didn’t think a bot playing C&C would be able to save the world by evolving into a real, greater than human intelligence.

    • @[email protected]
      link
      fedilink
      English
      5
      edit-2
      3 months ago

      Looks like it is not any smarter than the other junk on the market. That people consider AI to be “intelligence” may be rooted in their own deficits in that area.

      Yep, because they believed that OpenAI’s (two lies in a name) models would magically digivolve into something that goes well beyond what they were designed to be. Trust us, you just have to feed it more data!

      And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware. Hurray! Progress!

      That’s the neat bit, really. With that model being free to download and run locally it’s actually potentially disruptive to OpenAI’s business model. They don’t need to do anything malicious to hurt the US’ economy.

  • @[email protected]
    link
    fedilink
    English
    53
    3 months ago

    Good. LLM AIs are overhyped, overused garbage. If China putting one out is what it takes to hack the legs out from under its proliferation, then I’ll take it.

    • @[email protected]
      link
      fedilink
      16
      3 months ago

      Overhyped? Sure, absolutely.

      Overused garbage? That’s incredibly hyperbolic. That’s like saying the calculator is garbage. The small company where I work as a software developer has already saved countless man-hours by utilising LLMs as tools, which is all they are if you take away the hype: a tool to help skilled individuals work more efficiently. Not to replace skilled individuals entirely, as Sam “Dead Eyes” Altman would have you believe.

      • @[email protected]
        link
        fedilink
        English
        1
        3 months ago

        LLMs as tools,

        Yes, in the same way that buying a CD from the store, ripping it to your hard drive, and returning the CD is a tool.

    • davel [he/him]
      link
      fedilink
      English
      25
      3 months ago

      Cutting the cost by 97% will do the opposite of hampering proliferation.

      • @[email protected]
        link
        fedilink
        English
        9
        3 months ago

        No, but it would be nice if it would turn back into the tool it was, when it was called machine learning, like it was for the decade before the bubble started.

      • ArchRecord
        link
        fedilink
        English
        24
        3 months ago

        Possibly, but in my view, this will simply accelerate our progress towards the “bust” part of the existing boom-bust cycle that we’ve come to expect with new technologies.

        They show up, get overhyped, loads of money is invested, eventually the cost craters and the availability becomes widespread, suddenly it doesn’t look new and shiny to investors since everyone can use it for extremely cheap, so the overvalued companies lose that valuation, the companies using it solely for pleasing investors drop it since it’s no longer useful, and primarily just the implementations that actually improved the products stick around due to user pressure rather than investor pressure.

        Obviously this isn’t a perfect description of how everything in the world will always play out in every circumstance every time, but I hope it gets the general point across.

      • @[email protected]
        link
        fedilink
        English
        8
        3 months ago

        It’s not about hampering proliferation, it’s about breaking the hype bubble. Some of the western AI companies have been pitching to have hundreds of billions in federal dollars devoted to investing in new giant AI models and the gigawatts of power needed to run them. They’ve been pitching a Manhattan Project scale infrastructure build out to facilitate AI, all in the name of national security.

        You can only justify that kind of federal intervention if it’s clear there’s no other way. And this story here shows that the existing AI models aren’t operating anywhere near where they could be in terms of efficiency. Before we pour hundreds of billions into giant data center and energy generation, it would behoove us to first extract all the gains we can from increased model efficiency. The big players like OpenAI haven’t even been pushing efficiency hard. They’ve just been vacuuming up ever greater amounts of money to solve the problem the big and stupid way - just build really huge data centers running big inefficient models.

      • @[email protected]
        link
        fedilink
        English
        6
        3 months ago

        What DeepSeek has done is to eliminate the threat of “exclusive” AI tools - ones that only a handful of mega-corps can dictate terms of use for.

        Now you can have a Wikipedia-style AI (or a Wookiepedia AI, for that matter) that’s divorced from the C-levels looking to monopolize sectors of the service economy.

  • Ech
    link
    fedilink
    English
    24
    3 months ago

    Hilarious that this happens the week of the 5090 release, too. Wonder if it’ll affect things there.

      • @[email protected]
        link
        fedilink
        11
        edit-2
        3 months ago

        And without the fake-frame bullshit they’re using to pad their numbers, its capabilities scale linearly from the 4090. The 5090 just has more cores, RAM, and power.

        If the 4000-series had had cards with the memory and core count of the 5090, they’d be just as good as the 50-series.

        • @[email protected]
          link
          fedilink
          English
          8
          edit-2
          3 months ago

          By that point you will have to buy the micro fission reactor add-on to power the 6090. It’s like Nvidia looked at the triangle of power, price, and performance, and instead of picking two they just picked one and to hell with the rest.

  • @[email protected]
    link
    fedilink
    78
    3 months ago

    Almost like yet again the tech industry is run by lemming CEOs chasing the latest moss to eat.

  • @[email protected]
    link
    fedilink
    53
    3 months ago

    This just shows how speculative the whole AI obsession has been. Wildly unstable and subject to huge shifts since its value isn’t based on anything solid.

    • @[email protected]
      link
      fedilink
      8
      3 months ago

      It’s based on guessing what the actual worth of AI is going to be, so yeah, wildly speculative at this point because breakthroughs seem to be happening fairly quickly, and everyone is still figuring out what they can use it for.

      There are many clear use cases that are solid, so AI is here to stay, that’s for certain. But how far can it go, and what will it require is what the market is gambling on.

      If out of the blue comes a new model that delivers similar results on a fraction of the hardware, then it’s going to chop the value down by a lot.

      If someone finds another use case, for example a model with new capabilities, boom, value goes up.

      It’s a rollercoaster…

      • @[email protected]
        link
        fedilink
        English
        12
        3 months ago

        There are many clear use cases that are solid, so AI is here to stay, that’s for certain. But how far can it go, and what will it require is what the market is gambling on.

        I would disagree on that. There are a few niche uses, but OpenAI can’t even make a profit charging $200/month.

        The uses seem pretty minimal as far as I’ve seen. Sure, AI has a lot of applications in terms of data processing, but the big generic LLMs propping up companies like OpenAI? Those seem to have no utility beyond slop generation.

        Ultimately the market value of any work produced by a generic LLM is going to be zero.

        • @[email protected]
          link
          fedilink
          3
          3 months ago

          It’s difficult to take your comment seriously when it’s clear that what you’re saying seems to be based on ideological reasons rather than real ones.

          Besides that, a lot of the value is derived from the market trying to figure out if/what company will develop AGI. Whatever company manages to achieve it will easily become the most valuable company in the world, so people fomo into any AI company that seems promising.

          • @[email protected]
            link
            fedilink
            11
            3 months ago

            Besides that, a lot of the value is derived from the market trying to figure out if/what company will develop AGI. Whatever company manages to achieve it will easily become the most valuable company in the world, so people fomo into any AI company that seems promising.

            There is zero reason to think the current slop-generating technoparrots will ever lead to AGI. That premise is entirely made up to fuel the current “AI” bubble.

            • @[email protected]
              link
              fedilink
              1
              3 months ago

              They may well lead to the thing that leads to the thing that leads to the thing that leads to AGI though. Where there’s a will

              • @[email protected]
                link
                fedilink
                1
                3 months ago

                Sure, but that can be said of literally anything. It would be interesting if LLMs were at least new, but they have been around forever; we just now have better hardware to run them.

                • NιƙƙιDιɱҽʂ
                  link
                  fedilink
                  1
                  edit-2
                  3 months ago

                  That’s not even true. LLMs in their modern iteration are significantly enabled by transformers, something that was only proposed in 2017.

                  The conceptual foundations of LLMs stretch back to the 50s, but neither the physical hardware nor the software architecture were there until more recently.

            • @[email protected]
              link
              fedilink
              2
              3 months ago

              The market doesn’t care what either of us thinks; investors will do what investors do: speculate.

        • NιƙƙιDιɱҽʂ
          link
          fedilink
          2
          3 months ago

          Language learning, code generation, brainstorming, summarizing. AI has a lot of uses. You’re just either not paying attention or are biased against it.

          It’s not perfect, but it’s also a very new technology that’s constantly improving.

          • Toofpic
            link
            fedilink
            2
            3 months ago

            I decided to close the post now - there is room for every opinion, but I can see people writing things which are completely false however you look at them: you can dislike Sam Altman (I do), you can worry about China entering the competition now and in this way (I do), but the comments about LLMs being useless while millions of people use them daily for multiple purposes sound just like lobbying.

  • DigitalDilemma
    link
    fedilink
    English
    36
    3 months ago

    As a European, gotta say I trust China’s intentions more than the US’ right now.

    • @[email protected]
      link
      fedilink
      English
      8
      edit-2
      3 months ago

      Not really a question of national intentions. This is just a piece of technology open-sourced by a private tech company working overseas. If a Chinese company releases a better mousetrap, there’s no reason to evaluate it based on the politics of the host nation.

      Throwing a wrench in the American proposal to build out $500B in tech centers is just collateral damage created by a bad American software schema. If the Americans had invested more time in software engineers and less in raw data-center horsepower, they might have come up with this on their own years earlier.

    • @[email protected]
      link
      fedilink
      8
      3 months ago

      With that attitude I am not sure if you belong in a Chinese prison camp or an American one. Also, I am not sure which one would be worse.

      • @[email protected]
        link
        fedilink
        3
        3 months ago

        They should conquer a country like Switzerland and split it in two.

        At the border, they should build a prison so they could put them in both an American and a Chinese prison