Was looking through my office window at the data closet and (due to angle, objects, field of view) could only see one server light cluster out of the six full racks. And thought it would be nice to scale everything down to 2U. Then day-dreamed about a future where a warehouse data center was reduced to a single hypercube sitting alone in the vast darkness.

  • I think what will happen is that we’ll just start seeing sub-U servers. First will be 0.5U servers, then 0.25U, and eventually 0.1U. By that point, you’ll be racking racks of servers, with ten 0.1U servers slotted into a frame that you mount in an open 1U slot.

    Silliness aside, we’re kind of already doing that in some uses, only vertically. Multiple GPUs mounted vertically in an xU harness.

    • partial_accumen · 9 points · 2 months ago

      The future is 12 years ago: HP Moonshot 1500

      “The HP Moonshot 1500 System chassis is a proprietary 4.3U chassis that is pretty heavy: 180 lbs or 81.6 Kg. The chassis hosts 45 hot-pluggable Atom S1260 based server nodes”

      source

      • @[email protected] · 4 points · 2 months ago

        That did not catch on. I had access to one, and the use case and deployment docs were foggy at best.

        • @[email protected] · 4 points · 2 months ago

          It made some sense before virtualization for job separation.

          Then docker/k8s came along and nuked everything from orbit.

          • partial_accumen · 2 points · 2 months ago

            The other use case was for hosting companies. They could sell “5 servers” to one customer and “10 servers” to another and have full CPU/memory isolation. I think that use case still exists and we see it used all over the place in public cloud hyperscalers.

            Meltdown and Spectre vulnerabilities are a good argument for discrete servers like this. We’ll see if a new generation of CPUs will make this more worth it.
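            Incidentally, modern Linux kernels report the per-CPU status of these side-channel issues under /sys/devices/system/cpu/vulnerabilities/. A minimal sketch for checking a host (Linux only; the exact set of files depends on the kernel version):

            ```python
            # Print the kernel's reported status for each known CPU vulnerability
            # (e.g. meltdown, spectre_v1, spectre_v2). Each file holds one line,
            # such as "Mitigation: PTI" or "Vulnerable".
            from pathlib import Path

            vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
            for entry in sorted(vuln_dir.iterdir()):
                print(f"{entry.name}: {entry.read_text().strip()}")
            ```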

            • @[email protected] · 4 points · 2 months ago

              128-192 cores on a single EPYC makes almost nothing else worth it; the scaling is incredible.

              Also, I happen to know they’re working on even more hardware isolation mechanisms, similar to SR-IOV but more strictly enforced.

              • partial_accumen · 1 point · 2 months ago

                128-192 cores on a single EPYC makes almost nothing else worth it; the scaling is incredible.

                Sure, which is why we haven’t seen huge adoption. However, in some cases it isn’t so much an issue of total compute power as of autonomy. If there’s a rogue process running on one of those 192 cores and it can end up accessing the memory in your space, it’s a problem. There are some regulatory rules I’ve run into that actually forbid company processes on shared CPU infrastructure.
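                In practice that requirement is usually met with single-tenant placement rather than a separate chassis. A rough sketch of what requesting dedicated tenancy looks like with boto3 against EC2 (illustrative only; the AMI ID, instance type, and region are placeholder values):

                ```python
                # Launch an instance on single-tenant hardware so no other
                # customer's workloads share the underlying CPUs or memory.
                import boto3

                ec2 = boto3.client("ec2", region_name="us-east-1")
                ec2.run_instances(
                    ImageId="ami-0123456789abcdef0",     # placeholder AMI
                    InstanceType="m5.large",             # placeholder instance type
                    MinCount=1,
                    MaxCount=1,
                    Placement={"Tenancy": "dedicated"},  # not shared with other tenants
                )
                ```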

                • @[email protected] · 1 point · 2 months ago

                  There are, but at that point you’re probably buying big iron already; cost isn’t an issue.

                  Sun literally made their living from those applications for a long while.

          • @[email protected] · 1 point · 2 months ago

            VMs were a thing in 2013.

            Interestingly, Docker was released in March 2013, so it might have prevented a better company from trying the same thing.

            • @[email protected] · 2 points · 2 months ago

              Yes, but they weren’t as fast; VT-x and the like were still fairly new, and the VM stacks were kind of shit.

              Yeah, Docker is a shame. I wrote a thin stack on LXC, but BSD Jails are much nicer, if only they improved their deployment system.