- cross-posted to:
- memes@lemmy.world
cross-posted from: https://lemmy.world/post/24850430
EDIT: i had an rpi, it died from esd i think
EDIT2: this is also my work machine and i sleep to the sound of the fans
Is having a bunch of oscilloscopes in your electronics lab self-hosting now?
Using old laptops or other repurposed computers for self-hosting is just great! Who doesn’t have an old computer collecting dust in their home? Anyone has the potential for self-hosting :)
😆
the best home server is a computer you’re not using, the second best home server is a bajillion dollar server rack you looted from behind a meta LLM farm
Sure, from behind it…
How is it overkill? Those are just PCs in rack cases. For all you know, they could be $150 budget builds made of decade old hardware bought off eBay.
i’m ok with that as long as they serve the idea of self hosting. it depends on how big you want the project to be
Check out r/homedatacenter
Dude whaaaat? Commercial display signage outside and what could need that many Ethernet runs in a house?
Best starter for self hosting:
Although laptops technically have a built in battery backup 😎
I recently got an M710q with an i3 7100T. It uses around 3W at idle. I threw in 8GB of RAM and a 512GB ramless NVMe for a total of under 100€. Absolutely would recommend (if you don’t need too much storage). Also Dell has some machines.
For more info, servethehome (they have a YouTube channel and a blog) has a whole series on “tiny mini micro” machines.
What’s a ramless NVMe? Specifically the ramless part, I know an NVMe is an SSD.
Some fancy SSDs have additional DRAM cache:
The presence of a DRAM chip means that the CPU does not need to access the slower NAND chips for mapping tables while fetching data. DRAM being faster provides the location of stored data quickly for viewing or modification.
TIL, thanks!
DRAM-less NVMe drives don’t have what basically amounts to a cache of readily accessible storage that makes large reads and writes faster. So they’re cheaper, but slower, and wear out faster.
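If you're curious whether a drive you already own is DRAM-less, one rough heuristic: DRAM-less drives often advertise a Host Memory Buffer (HMB) so they can borrow system RAM for their mapping tables. With the `nvme-cli` package you can check the controller's preferred HMB size (device name `/dev/nvme0` is an assumption; a nonzero `hmpre` only hints at a DRAM-less design, it's not proof):

```shell
# Show the Host Memory Buffer Preferred Size field from the controller
# identify data. Nonzero usually means the drive wants to borrow host RAM,
# which is typical of DRAM-less designs. Requires nvme-cli and root.
sudo nvme id-ctrl /dev/nvme0 | grep -i hmpre
```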
I’d say not just starter… My rack is full of tiny/mini/micros. Proxmox on all, data on the three NAS boxes, easy to replace a box if needed (for example, the optiplex 7040 that the board died on).
Way quieter than a regular rack, lower power use, etc. If all goes well following an intended move, I should be able to safely power it off solar + batt only. Grand total wattage for all these boxes is less than my desktop (when I last checked at least, I was running about 300-350W. I did swap two that have dgpu’s now, so maybe a touch higher).
My homelab is three Lenovo M920q systems complete with 9th gen i7 procs, 24GB ram, and 10Gbps fibre/Ceph storage. Those mini PCs can be beasts.
There are some 13th gen i9s at work that are usff (like a fat version of the tiny, they are p3 ultras) I can’t wait to get my hands on at home. dGPU, 2.5gbit + 1gbit on board, 64gb ram on these as purchased, etc, etc. Total monster in under 4l.
I actually ended up with a cluster of those over a standard server for a client, way more power and lower price, and with HA to boot. Should have a few all to myself next year and I can’t wait to be ridiculous with them.
I think the issue for some people (why they may buy expensive hardware) is the feeling that their server isn’t “enterprise grade”, literally meaning a whole server rack with a SAN, firewall, etc. If you’re new to this hobby, please consider this unsolicited advice:
Use whatever hardware you already have or buy only what you need to achieve your goals.
Some people want to “cosplay as a sysadmin” like what Jeff Geerling sells on his tshirts. That can mean doing this stuff for fun or maybe self teaching for a job. For those folks, buying “enterprise” could possibly make sense. But I would argue that even the core concepts of that hardware can be learned on stuff you already have.
Enterprise hardware is loud, inefficient, and will likely have idiosyncrasies that make running it at home kinda suck. An old laptop is perfect as a place to host stuff or play with software.
One of the things engineers/admins have to do in a datacenter is plan for rack power efficiency. That often means planning for the capacity you are going to use, for the space you have and choosing the cheapest solution for that.
I think it’s generally considered more impressive how much you can do within the constraints you have, vs having so much capacity for a cheap price. Like, how many services can you run on a Raspberry Pi? Can you create “good enough” performance for a storage area network using just gigabit? The skills you get by limiting yourself probably outperform what you’d learn working with “the real stuff”, even if your purpose is trying to get a job. I’d argue the same for folks who simply want to self host. Run what you’ve got until it stops, and then try to buy for capacity again.
Your power bill, the environment, and your wallet will thank you.
Downsizing from an ex-biz full-fat tower server to a few Pis, a mini PC and a Synology NAS was the best decision ever here.
The new hardware was paid for quickly in the power savings alone. The setup is also much quieter.
You don’t think about power consumption a lot when working with someone else’s supply (unless it’s your actual job to), but it becomes very visible when you see a server gobbling up power on a meter at home.
You’re right about the impressiveness of working creatively within constraints. We got to the moon in '69 with a fraction of the computing power available to the average consumer today. Look at the history of the original Elite videogame for another great example of working creatively and efficiently within a rather small box.
How would you connect to your “server” when you don’t know its IP? With a static IP or DNS or both?
hostname + tailscale
For local services? - just type in the static IP that I’ve assigned myself, otherwise I have a subdomain pointing to my online services. Works like a charm
Dynamic DNS or static IP. Whatever is convenient for you. If humans are connecting, it is generally preferred to type in a domain name rather than an IP address.
Yeah dynamic DNS works pretty well for me, after I set it up I never had any problems with it.
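For the curious, the core of what a dynamic-DNS client does can be sketched in a few lines of shell. This is a toy illustration, not a real provider's API: the update URL below is a placeholder, and real services (DuckDNS, Cloudflare, etc.) each have their own endpoints:

```shell
#!/bin/sh
# Heart of a dynamic-DNS updater: only hit the provider when the IP changed.
needs_update() {
    # $1 = current public IP, $2 = last IP we pushed to DNS
    [ "$1" != "$2" ]
}

# The real loop would look roughly like this (placeholder URLs, commented out):
# CURRENT_IP=$(curl -fsS https://ifconfig.me)
# LAST_IP=$(cat /tmp/last_ip 2>/dev/null)
# if needs_update "$CURRENT_IP" "$LAST_IP"; then
#     curl -fsS "https://dyndns.example.com/update?ip=$CURRENT_IP"
#     echo "$CURRENT_IP" > /tmp/last_ip
# fi
needs_update "203.0.113.7" "203.0.113.6" && echo "would update DNS record"
```

Run from cron every few minutes and the DNS name follows your home IP around.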
My only “server” is a modest DS218+ which runs more mainstream services that I see in those huge ass servers like in the pic, what am I missing? (I have 6 GBs of RAM):
- Arr stack (Bazarr, Sonarr, Radarr, Overseerr, Prowlarr)
- Plex
- Calibre and Calibre web
- DizqueTV
- Dozzle
- Flaresolverr
- Heimdall
- Iperf3 server
- JDownloader2
- Komga
- OpenSpeedTest
- Pi-hole
- Plex-Auto-Languages (for the Synology PMS and my Nvidia Shield TV Pro)
- PlexTraktSync
- Portainer
- Qbittorrent
- Riven/Rclone/Zurg
- Speedtest
- Tautulli (X2)
- Vaultwarden
- Zerotier
Everything is silent and running with Docker, aside from a bunch of stock Synology services (and Tailscale), I really feel like the only reason to own better hardware is for a better transcoding experience… And usually you don’t want to transcode.
damn buddy that’s cool. i’m still a noob in selfhosting and using docker, i’m running some containers like adguardhome, metube, photoprism and memos. still tweaking cuz i started 1 week ago
I went overboard but only because I was having fun with it and didn’t like the octopus of hard drives plugged into my NUC
I just have a used Dell T3600 I got for like 50 bucks at most? Desktop form factor and quiet fans mostly, but still has 32GB ECC memory, an 8 core CPU and a full size PCI-E slot to put my 1070 Ti in for transcoding in Immich and Jellyfin, a secondary Stable Diffusion setup and such and such.
If wanting to have cool oscilloscopes and blinkenlights is wrong then I don’t want to be right.
no one said it’s wrong keep going
w520 goes hard. Still a very capable machine with the sheer amount of cpu horsepower it has from that era.
Not comparable to modern chips of course, but for what you can get those things for, damn it’s not bad.
I bought a cheap mini PC with an Intel N100 processor as my entry into self hosting, so far it absolutely crushes every task I’ve thrown at it
How do you manage storage limitations on Mini pcs?
In my case, 2 USB 3.0 hard drive enclosures with twin drives, in ZFS mirror configuration. I manage the disks’ spin-down with https://packages.debian.org/bookworm/hd-idle, and it meets all my needs so far, no complaints about the speed for my humble homelab needs.
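For reference, on Debian hd-idle's behaviour is set in `/etc/default/hd-idle`. A minimal sketch of a per-disk setup like the one above (the device names `sda`/`sdb` and the timeouts are assumptions, adjust to your enclosures):

```shell
# /etc/default/hd-idle — sketch, not a drop-in config.
# -i 0 disables the default timeout for unlisted disks, then each
# -a/-i pair spins that disk down after 600 seconds of inactivity.
HD_IDLE_OPTS="-i 0 -a sda -i 600 -a sdb -i 600"
```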
So far I haven’t needed mass storage. The Mini Pc itself has a 1TB nvme drive, which I could expand upon since there’s space for another 2.5 inch drive inside the case, plus USB ports for external drives. Obviously not close to a real NAS, but again, so far I have not had any need for that.
Which Mini-Pc do you use?
https://www.amazon.de/dp/B0CJF6CFLP
This is the one I bought, it was discounted to 220€ when I grabbed it.
Don’t worry, I’m using an over-10-year-old mainboard with an onboard Atom CPU, and it works fine with several services running.
I use an Asus laptop I bought during COVID as my server. I dropped in 64GB of RAM, a pair of NVMe drives and an old 2.5” SATA SSD. More than enough for my use cases. The only real software tweak I made was limiting battery charging to 60%.
UPS right on board (kind of)
how would you limit the batt? i really care
For my Asus laptop the setting is maintained at the hardware level. I didn’t bother trying to find Linux software that could control it (I think there is one) but instead just booted into Windows and set it there and it will persist after that in Linux.
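There is in fact a Linux-side knob on many laptops (a lot of Asus models expose it through the asus-wmi kernel driver): a sysfs file for the charge stop threshold. A hedged sketch; the battery name `BAT0` varies by machine and not every model supports it:

```shell
# Stop charging at 60% (root required). Not persistent across reboots
# unless you reapply it via a udev rule or a systemd unit.
echo 60 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold
```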
oh hmmm my laptop is hp so yeah nice thanks
Here’s mine. Might need to repaste it though, the fans are literally always running pretty noticeably loudly and CPU temps are at ~49°C even though it’s idling all the time at max 1%-2% CPU usage.
On a side note - is it normal for Redis to always be using 1-2% CPU even when there’s no traffic?
i was literally you, but my laptop died. what are you running now?
Do you mean specs wise or software wise? It’s a Lenovo Y50 with 8 gigs RAM, an i5 4210H, and a GTX 960M.
I’m running Ubuntu server with docker and a few containers (mainly Nextcloud)
software