It’s necessary for my very important hobby of generating anime nudes.
Yes. Haha. Amusing. But really… Blender, DaVinci Resolve, and a host of others. It’s not a hobby, it’s quite literally a (albeit small) portion of my income.
And it has nothing to do with anime nudes.
Anime/waifu is literally for pedos wanting a loophole. Sorry, not sorry.
I need NVDA for the gainz
Edit: btw Raspberry Pi is doing an IPO later this year, bullish on AMD
I run Stable Diffusion with ROCm. Who needs CUDA?
What distro are you using? Been looking for an excuse to strain my 6900XT.
I started looking at getting it running on Void and it seemed like (at the time) there were a lot of specific version dependencies that made it awkward.
I suspect the right answer is to spin up a container, but I resent Docker’s licensing BS too much for that. Surely by now there’d be a purpose-built live image: write it to a flash drive, reboot, and boom, anime ~~vampire princes~~ hot girls
Ubuntu native for me… no containers needed.
I resent Docker’s licensing BS too much for that.
“Pay us if you’re a mid+ sized company” is BS?
I think people don’t like dramatic changes in business model. I had installed it for like 3 days, long before the switchover, to test out something from another dev. When they made the announcements, the hammer went down in our org not to use it. But that didn’t stop them from sending sales-prospecting/vaguely threateningly worded email to me, who has no cheque-writing authority anyway.
Plus, I’m not a fan of containers.
STOP DOING CONTAINERS.
- Machines were not meant to contain other smaller machines.
- Years of virtualization yet no real-world use found for anything but SNES emulation
- Wanted to “ship your machine to the end-user” anyway for a laugh? We had a tool for that. It was called “FedEx”.
- “Yes, Please give me
docker compose up meatball-hero
of something. Please give me Alpine Linux On Musl of it” – Statements dreamed up by the utterly deranged.
“Hello, I would like 7.5GB of VM worth of apples please”
THEY HAVE PLAYED US FOR ABSOLUTE FOOLS.
Poor capitalists need to pay for the tools that make them money. Stop it, I’ll break down in tears just thinking about the horror of it.
Do you use some different solution, or did you completely avoid containers and orchestration?
But that didn’t stop them from sending sales-prospecting/vaguely threateningly worded email to me, who has no cheque-writing authority anyway.
They only spam me with promotional material. You used the business email I’m guessing?
I set mine up on Arch. There’s an AUR package, but it didn’t work for me.
After some failed attempts, I ended up having success following this guide.
Some parts are out of date though, so if it fails to install something you’ll need to have it target a newer available package. The main example of this is inside the
webui-user.sh
file: it tells you to replace an existing line with
export TORCH_COMMAND="pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.1"
This will fail because that version of PyTorch is no longer available, so instead you need to replace the download URL with an up-to-date one from the PyTorch website. They’ve also slightly changed the layout of the file. Right now the correct edit is to find the
# install command for torch
line and change the command under it to:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7
You may need to swap pip for pip3 too, if you get a pip error. Overall it takes some troubleshooting: look at any errors you get and see whether it’s asking for a package you don’t have, or anything like that.
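Once it installs, a quick sanity check that torch actually sees the card (generic PyTorch, not part of the guide; ROCm builds still report through the torch.cuda API):
python -c 'import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))'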
If you don’t like Docker, take a look at containerd and Podman. I haven’t done any CUDA with Podman, but it is supposed to work.
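For a ROCm container it’s roughly this (a sketch, not a tested recipe: /dev/kfd and /dev/dri are the standard ROCm device nodes, and rocm/pytorch is AMD’s PyTorch image on Docker Hub):
podman run -it --device=/dev/kfd --device=/dev/dri --group-add video --security-opt seccomp=unconfined docker.io/rocm/pytorch:latest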
I use Stable Diffusion on ROCm in an Ubuntu distrobox container. Super easy to set up, and there’s a good guide on the openSUSE forum for it.
That is exactly what I do too and it works perfectly! This is a link to said guide.
It’s effectively: install distrobox, save the config, run
distrobox assemble
and then
distrobox enter rocm
then clone the Automatic1111 stable-diffusion-webui somewhere and run
bash webui.sh
to launch it.
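Put together, the whole flow is something like this (a sketch: it assumes the distrobox.ini from the linked guide names the container “rocm”, and note that current distrobox wants a create subcommand after assemble):
distrobox assemble create   # builds the container defined in distrobox.ini
distrobox enter rocm        # drop into the container
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui
bash webui.sh               # first run sets up the venv, then launches the web UI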
I use Fedora. It works great, with some tweaks to the startup script.
Arch ofc.
Then show us your anime titty pics!
CUDA?! I barely even know’a!
I can confirm that it works just fine for me. In my case I’m on Arch Linux (btw) with a 7900XTX, but it needed a few tweaks:
- Having xformers installed at all would sometimes break startup of stable-diffusion, depending on the fork
- I have an internal and an external GPU, so I had to set HIP_VISIBLE_DEVICES so that it only sees the correct one (see the sketch after this list)
- I had to update torch/torchvision and set HSA_OVERRIDE_GFX_VERSION
I threw what I did into https://github.com/icedream/sd-multiverse/blob/main/scripts/setup-venv.sh#L381-L386 to test several forks.
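Concretely, the last two tweaks end up as a couple of exports before launch; the values here are examples (device index 0, and GFX version 11.0.0 matches RDNA3 cards like the 7900XTX; check what your card needs):
export HIP_VISIBLE_DEVICES=0            # only expose the dedicated GPU to ROCm
export HSA_OVERRIDE_GFX_VERSION=11.0.0  # report a supported GFX target (RDNA3 here)
bash webui.sh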
I’m holding off on building a new gaming rig until AMD sorts out better ray-tracing and CUDA support. I’m playing on a Deck now, so I have plenty of time to work through my old backlog.
Amd
Cuda support
😐
CUDA isn’t the responsibility of AMD to chase; it’s the responsibility of Nvidia to quit being anticompetitive.
It’s also not my problem either. I don’t give a shit what nvidia or AMD does, I just want to be able to run AI stuff on my rig in as open-source a manner as is possible.
…in as open-source a manner as is possible.
And that means “not with CUDA,” because CUDA is proprietary.
This is a semantic argument I don’t feel like getting into. I don’t give a shit what library it is – I want AMD to be able to crunch pytorch as well as nvidia.
The fact that CUDA is proprietary to Nvidia isn’t even slightly “semantics;” it’s literally the entire problem this thread is discussing. CUDA doesn’t work on AMD because Nvidia doesn’t allow it to work on AMD.
deleted by creator
deleted by creator
I was straight up thinking of going to AMD just to have fewer GPU problems on Linux myself
In my experience, AMD is bliss on Linux, while Nvidia is a headache. Also, AMD has ROCm, their equivalent of Nvidia’s CUDA.
Yeah, but is it actually equivalent? If so I’m 100% in, but it needs to actually be a drop-in replacement for “it just works” the way CUDA is. Once I’ve actually got the drivers set up, CUDA “just works”. Is it equivalent in that way, or am I going to run into library compatibility issues in R or Python?
Not all software that uses CUDA has support for ROCm.
But as far as setup goes, I just installed the correct drivers, and ROCm-compatible software just worked.
So: it can be an equivalent alternative, but that depends on the software you want to run.
Never had an issue with Nvidia on Linux. Yes, you have to use proprietary drivers, but outside of that I’ve been running Linux with Nvidia cards for 20 years.
Wayland is non-stop issues.
Been running Wayland for 2 years, and the only issue I had with it was Synergy not working.
Not even the “issue” that basically every time you update something, you have to wait a long time for the proprietary Nvidia drivers to download?
That’s what annoyed me the most back in the day with the Nvidia drivers: so many hours wasted on updating them. With AMD, this is not the case.
And I haven’t even talked about my issues with Optimus (Intel on-board graphics + Nvidia GPU) yet, which was a true nightmare; it took me weeks of research to finally make it work correctly.
You don’t need to update NVIDIA drivers every time there’s a release. I don’t even do that on my Windows machine. Most driver updates are just tweaks for the latest game, not bug fixes or performance improvements.
And hell, you’re using Linux. Vim updates more often than the graphics driver, what do you expect?
It happened automatically, I believe with every install of an updated Flatpak, which is rather often. It’s been a while though, since lately I’ve been happily using AMD. But I do recall Nvidia driver updates slowing down my update process by a lot, while I have none of that with AMD.
Ah, I always update the driver through the package manager, and it never auto-updates.
It’s the equivalent, but does the software actually make use of ROCm if it’s programmed for CUDA?
My experience with the Deck outside of CS2 has been nothing short of mind-boggling. I don’t even REALLY have a problem with CS2 but I cannot play online for VAC reasons I can’t sort out. I have a ticket open with Steam Support. 🤷
Yeah, the deck has really increased my trust in AMD hardware.
My only regret for picking team red is that DaVinci Resolve doesn’t support hardware encoding.
deleted by creator
3D rendering with OptiX. I don’t do AI nonsense other than ChatGPT for the occasional shell script or Python function.
Oh please. There are better templates than this stupid Nazi cunt. I really don’t want to see this fuckface.
Isn’t the joke that he won’t change his mind on his stupid ideas, though?
For the longest time I just thought he was that one guy from Modern Family.
I just now learned he was not.
Yes! This is a nice alternative template, for example.
I do actually need nvidia for blender since AMD raytracing support is still a work in progress for it.
As soon as it’s stable, works on Linux, and a mid-range AMD card performs as well as my 3060, though, I’m absolutely jumping to AMD.
Can we stop using the Steven Crowder meme already? The guy is a total chode.
I don’t really disagree, but I think that was the original intent of the meme: to show Crowder as a complete chode by having him assert really stupid, deeply unpopular ideas.
The meme’s use has become too soft on Crowder lately, though, I think.
I notice lately that many memes’ origins are worse than I thought from the context they’re used in. Racist, homophobic, and lying people are not something I usually accept as entertainment, but they sneak their way unnoticed into my (non-news) feed through memes. I guess most people don’t know the origins of a meme and use it according to the meaning they formed on their own. Other memes, like the distracted-boyfriend meme, are meaningless stock photos, so I understand why many people use memes without thinking about their origins.
Anyway, thanks for pointing out who the person in the picture actually is.
Lol. He gives chodes a bad rep. Call him what he is. A christofascist misogynist grifter.
Brother of “I need nVidia for raytracing” while only playing last-decade games.
Playing old games with ray tracing is just as amazing as playing new games with ray tracing. I know Quake RT gets too dark to play halfway through; they should have added light sources in those areas.
Then again, I played through Cyberpunk 2077 at 27 fps before the 2.0 update. Control was pretty good at 50 fps, and I couldn’t recommend Portal enough at about 40 fps on my 2070 Super. I don’t know if Teardown leveraged RT cores, but Digital Foundry said it ran better on Nvidia, and I played through that game at 70 fps.
I love playing with new technologies. I wish graphics card prices had stayed down, because RT is too heavy nowadays for my first-gen RT card. I play newer games with RT off and most settings turned down because of it.
I love playing with new technologies. I wish graphics card prices had stayed down, because RT is too heavy nowadays for my first-gen RT card. I play newer games with RT off and most settings turned down because of it.
I wish they had stayed down because VR has the potential to bring back CrossFire/SLI. Nvidia’s GameWorks already has support for using two GPUs to render different eyes, and supposedly, when properly implemented, it results in a nearly 2x increase in fps. However, GPUs are way too expensive right now for people to buy two of them, so afaik there aren’t any VR games that support splitting rendering between two GPUs.
VR games could be a hell of a lot cooler if having 2 GPUs were widely affordable and developers developed for them, but instead it’s being held back by single-GPU performance.
Wasn’t there an issue with memory-transfer latency across the connector? I thought they killed it because the latency was too high for higher frame rates, causing consistent stuttering.
They tried to reuse that enterprise connector with higher throughput, but last I heard they never fully developed support for it because of a lack of interest from devs.
Not gonna lie, raytracing is cooler on older games than it is on newer ones. Newer games use a lot of smoke and mirrors to approximate what raytracing gives you, which means raytracing isn’t as obvious an upgrade, or can even be a downgrade depending on the scene. Older games, however, don’t have as much smoke and mirrors, so raytracing can offer more of an improvement.
Also, stylized games with raytracing are 10/10. Idk why, but applying RTX to highly stylized games always looks way cooler than on games with realistic graphics.
Quake 2 does look pretty rad in RTX mode.
I completely unironically know people who bought a 4090 exclusively to play League
If anything, AMD (for ML) is the hardware version of “I use [x] btw” (as in, I go through unnecessary pain for purism, or to feed my own superiority complex).
Stable Diffusion works on Radeon 7900XTX on Ubuntu.
7900XTX
Is it with Automatic1111?
Yes.
The 7900 GRE is officially supported too, which seems like a great <$600 option on the market right now.
Works on the AMD Pro series GPUs and my 6800 XT on Pop!_OS.
deleted by creator
Earlier in my career, I compiled TensorFlow with CUDA/cuDNN (Nvidia) in one container, and on another machine compiled it with ROCm (AMD) in a container, for cancerous-tissue detection in computer vision tasks. GPU acceleration when training the model was significantly more performant with the Nvidia libraries.
It’s not like you can’t train deep neural networks without NVIDIA, but their deep learning libraries combined with tensor cores in Turing-era GPUs and later make things much faster.
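For what it’s worth, the sanity check is the same on both stacks, since the ROCm build of TensorFlow reports through the same device API; a generic one-liner, nothing project-specific:
python -c 'import tensorflow as tf; print(tf.config.list_physical_devices("GPU"))'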
Things have changed.
I can now run Mistral on my Intel iGPU using Vulkan.
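One common way to do that is llama.cpp’s Vulkan backend; a minimal sketch, assuming you build with Vulkan enabled and have a local GGUF quant of Mistral 7B (the model filename is a placeholder):
cmake -B build -DGGML_VULKAN=ON && cmake --build build   # build llama.cpp with the Vulkan backend
./build/bin/llama-cli -m mistral-7b-instruct.Q4_K_M.gguf -ngl 99 -p "Hello"   # offload all layers to the iGPU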
If you’re talking about “running”, that’s inference. I’m talking about elapsed training time.
Same thing. Inference just uses a lot less memory.
deleted by creator
AMD is catching up now. There are still performance differences, but they are probably not as big in the latest generation.
I must admit when I learned this was Crowder I had a sad
Just change and reupload :D