Considering that the AI craze is what’s fueling the shortage and massive increase in GPU prices, I really don’t see gamers ever embracing AI.
They’ve spent years training to fight it, so that tracks.
Speak for yourself. As an avid gamer I am excitedly looking towards the future of AI in games. Good models (with context buffers much longer than the .9s in this demo) have the potential to revolutionise the gaming industry.
I really don’t understand the amount of LLM/AI hate in Lemmy. It is a tool with many potential uses.
There’s a difference between LLMs making games and LLMs trained to play characters in a game.
I’m not opposed to either. I think of this a bit like procedural generation, except better.
DLSS (AI upscaling) alone should see gamers embracing the tech.
https://en.wikipedia.org/wiki/Deep_Learning_Super_Sampling
You must not have heard the diss gamers use for this tech.
Fake frames.
I think they’d rather have more raster and ray tracing, especially raster in competitive games.
DLSS runs on the same hardware as raytracing. That’s the entire point. It’s all just tensor math.
DLSS is one of those things where I’m not even sure what people are complaining about when they complain about it. I can see it being frame generation, which has downsides and is poorly marketed. But then some people seem to be claiming that DLSS does worse than TAA or older upscaling techniques when it clearly doesn’t, so it’s hard to tell. I don’t think all the complainers are saying the same thing or fully understanding what they’re saying.
The Lemmy userbase seems to have this echo chamber effect where anything to do with AI is categorically bad, doesn’t matter what it is or how it performs.
Also mentioning AI gets your comment downvoted, further increasing the echo chamber effect.
I guess. It’s not like downvotes mean anything here beyond… dopamine hits, I suppose?
I don’t know that it’s Lemmy. Both supporters and detractors don’t seem to have a consistent thing they mean when they say “AI”. I don’t think many of them mean the same thing or agree with each other. I don’t think many understand how some of the things they’re railing about are put together or how they work.
I think the echo chamber element comes in when people who may realize they don’t mean the same thing don’t want to bring it up because they broadly align ideologically (AI yay or AI boo, again, it happens both ways), and so the issue gets perpetuated.
Aaaand we’ve now described all of social media, if not all of human discourse. Cool.
Yeah, to a lot of people things are black and white, all or nothing. Even suggesting there might be nuance to things elicits a defensive knee-jerk reaction.
At least the DLSS I’ve seen looks terrible. I’ve tried it in a bunch of games, and it produces visible artifacts that are worse than TAA. Cyberpunk 2077 is a great example.
Newer versions are supposedly better, but I haven’t seen them yet.
You haven’t seen it in a while, then, because that was definitely not true of the previous version and it’s absolutely, objectively not true of the new transformer model version.
But honestly, other than, what, the very first iteration? It hasn’t been true in a while. TAA and DLSS tend to artifact in different ways: DLSS struggles with particles and fast movement, TAA struggles with most AA challenge areas like sub-pixel detail and thin lines. Honestly, for real-time use at 4K I don’t know of a more consistent, cleaner AA solution than DLSS. And I hesitate to call 4K DLAA a real-time solution, but that’s definitely the best option we have in game engines at this point.
I don’t even like Nvidia as a company and I hate that DLSS is a proprietary feature, but you can’t really argue with results.
I can definitely argue with the results when it looks worse than TAA, thank you.
Well, if nothing else I’ve made my case.
I mean, I’m not gonna go build a Digital Foundry comparison video for you, but this type of argument is definitely what I’m talking about when I say I don’t understand what people who just claim this out of the blue even think they’re saying.
I don’t think it’s particularly hard to understand what I’m saying - Cyberpunk 2077 with DLSS looks worse than Cyberpunk 2077 with TAA for me. You can disagree, but please don’t act like I’m saying something incredibly complex and un-understandable.
You’re kidding, right? Cyberpunk looks better with DLSS4 than it does natively lol.
https://youtu.be/viQA-8e9kfE?t=13
Mostly performance, from what I’ve seen. That hardware requirements go up while the real (edit: internal) resolution goes down, and meanwhile image quality is stagnant. It’s not completely DLSS’s fault, though.
I think temporal antialiasing never looks good. I don’t really care to talk about DLSS though, I just shut up and avoid upscaling (unless it’s forced, grrrr).
See, this is the type of thing that weirds me out. Temporal AA doesn’t look good compared to what? What do you mean “real resolution goes down”? Down from where? This is a very confusing statement to make.
I don’t know what it is that you’re supposed to dislike or what a lot of the terms you’re using are supposed to mean. What is “image quality” in your view? What are you comparing as a reference point on all these things that go up and down?
TAA looks worse than no AA IMO. It can be better than going without when it’s paired with other techniques that make frames look grainy in random ways, like real-time path-traced global illumination that doesn’t have time to cast enough rays for a smooth output. But I see it as pretty much a blur effect.
Other AA techniques generate more samples to increase pixel accuracy. TAA reuses previous frame data to increase temporal stability, which can reduce aliasing but is less accurate, because sometimes the new colour isn’t correlated with the previous one.
Maybe the loss of accuracy from TAA is worth the improvement you get from low-sample path-traced global illumination in some cases (personally a maybe), or the extra smoothness from generated frames (personally a no), but TAA artifacts generally annoy me more than aliasing artifacts.
As for the specifics of those artifacts, they’re things like washed-out details, motion blur, and difficult-to-read text.
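To put the “previous frame data” bit in concrete terms: a typical TAA resolve blends the current frame into a history buffer and clamps that history against the current frame’s local neighbourhood, so stale colours (the uncorrelated-colour case above) get rejected. A toy numpy sketch of just the blend/clamp step, leaving out the motion-vector reprojection a real implementation needs:

```python
import numpy as np

def taa_resolve(current, history, alpha=0.1):
    """Toy TAA resolve: blend the current frame into an accumulated history,
    clamping the history to the current frame's 3x3 neighbourhood so stale
    colours get rejected. Arrays are (H, W, 3) floats in [0, 1].
    This sketches the idea only, not any engine's actual implementation."""
    # Per-pixel min/max over the 3x3 neighbourhood of the current frame.
    pad = np.pad(current, ((1, 1), (1, 1), (0, 0)), mode="edge")
    h, w = current.shape[:2]
    shifted = np.stack([pad[dy:dy + h, dx:dx + w]
                        for dy in range(3) for dx in range(3)])
    lo, hi = shifted.min(axis=0), shifted.max(axis=0)

    # Clamp history into that range, then blend a little of the new frame in.
    # Low alpha = more temporal stability, but also more smearing in motion.
    clamped = np.clip(history, lo, hi)
    return alpha * current + (1.0 - alpha) * clamped
```

The alpha trade-off is basically the blur complaint in one number: a low alpha keeps things stable, but anything the clamp doesn’t catch gets smeared.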
TAA only looks worse than no AA if you have a super high res image with next to no sub-pixel detail… or a very small screen where you are getting some blending from human eyeballs not being perfectly sharp in the first place.
I don’t know that the line is going to be on things like grainy low-sample path tracing. For one thing you don’t use TAA for that, you need specific denoising (ray reconstruction is sometimes bundled with DLSS, but it’s technically its own thing and DLSS is often used independently from it). The buildup of GI over time isn’t TAA, it’s temporal accumulation, where you add rays from multiple frames over time to flesh out the sample.
I can accept, as a matter of personal preference, that you prefer an oversharpened, crinkly image over a more natural, softer one, so I can accept a preference for all the missed edges and fine detail of edge-detection-based blur AA. But there’s no reason decent TAA would look blurry, and in any case that’s exactly where modern upscaling used as AA has an edge: there are definitely no washed-out details when using DLSS compared to no AA or SMAA at the same native res. You often get additional generated detail and less blur than native with those.
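And just to illustrate the distinction: the GI buildup is a per-pixel running average of new ray samples across frames, reset when the history is invalidated. A rough sketch (hypothetical helper, not any renderer’s real API):

```python
def accumulate_gi(estimate, new_sample, frame_count):
    """Temporal accumulation: running average of noisy per-pixel radiance.
    Each frame contributes a few rays and the estimate converges as
    frame_count grows; a real renderer resets frame_count for pixels hit by
    camera movement or disocclusion. Rough sketch, not production code."""
    weight = 1.0 / (frame_count + 1)
    return estimate * (1.0 - weight) + new_sample * weight, frame_count + 1
```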
Compared to some older AA tech. TAA is awful in motion in games. Edit: by default; if there’s a config file it can be made better. Edit 2: sometimes no AA is clean as fuck, depends on the game and resolution.
I mean internal resolution. Playing at 1080p with DLSS means the game doesn’t render at your specified resolution, but a fraction of it. Native (for now) is the best looking.
Mostly general clarity, and things like particle effects and textures, I think. You can ask those people directly, you know. I’m just the messenger; I barely play modern games.
Yeah, that’s a problem. More people should be aware of the graphical effects in games. Thankfully some games now implement previews for damn near every quality toggle.
Alright, so no, TAA doesn’t look worse “compared to some older AA tech”. For one thing, the benchmark for “some older AA tech” is MSAA used at 720p (on a good day) on consoles. MSAA put in a valiant effort that generation, but it doesn’t scale well with resolution, so while the comparatively very powerful PC GPUs of the time could use it effectively at 1080p60, they were already struggling. And to be clear, those games looked like mud compared to newer targets.
We are now typically aiming for 4K, which is four times as many pixels, and at semi-arbitrary refreshes, often in the hundreds on PCs. TAA does a comparable-to-better job than MSAA much faster, so cranking up the base resolution is viable. DLSS goes one further and is able to upres the image, not just smooth out edges, even if the upres data is machine-generated.
“MSAA looked better” is almost entirely rose tinted glasses.
Internal resolution with DLSS is variable. Some games have a setting to select it on the fly depending on load, but all games give you a quality selector, so it’s ultimately a matter of power versus performance: where you want to set your base resolution and your output resolution. DLSS is heavier than most TAA but much better. If you’re keeping output res and settings, then yeah, you’re probably going to lower the internal resolution a bit to compensate for the cost, but you can absolutely run DLSS at native resolution (that’s normally called DLAA). It looks great, but any AA at native 4K is gonna be heavy, so you need pretty powerful hardware for that.
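For concreteness, these are the render-scale factors commonly quoted for the DLSS presets; treat the exact numbers as ballpark figures, since games and DLSS versions can override them:

```python
# Commonly quoted per-axis DLSS render-scale factors -- ballpark figures,
# individual games and DLSS versions can and do override them.
PRESETS = {
    "DLAA": 1.0,
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(out_w, out_h, preset):
    scale = PRESETS[preset]
    return round(out_w * scale), round(out_h * scale)

for name in PRESETS:
    w, h = internal_resolution(3840, 2160, name)
    print(f"4K output, {name:>17}: renders internally at {w}x{h}")
```

Quality at a 4K output ends up rendering internally at roughly 2560x1440 before reconstruction, while DLAA stays at native.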
So the internal resolution hasn’t “gone down”. You may need to lower it in some games to hit performance, but that’s always been the case. What has changed is we’re pushing super high definition images compared to the PS3 and even the PS4 generation. 4 to 16 times bigger.
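Quick arithmetic on that “4 to 16 times” range, remembering that plenty of PS3-era games rendered internally below 720p (the 540p floor here is my pick for illustration):

```python
# Pixel counts for common render targets; many PS3-era games rendered at
# sub-720p internal resolutions, which is where the upper end comes from.
targets = {"540p": (960, 540), "720p": (1280, 720),
           "1080p": (1920, 1080), "4K": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in targets.items()}

for name, count in pixels.items():
    ratio = pixels["4K"] / count
    print(f"{name:>5}: {count:>9,} pixels (4K is {ratio:.0f}x this)")
```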
And yeah, upscaling can show artifacts around some elements, but so can old AA. Modern versions of DLSS and FSR are a lot cleaner than older ones, but it’s not a downgrade against most comparables. It becomes a matter of whether you think some of the ghosting on particles or fast motion was more annoying than fizzling on detailed areas or thin lines. If a preference for one over the other was the conversation I’d be more than happy to chalk it up to taste, but that’s not how this is often framed. And again, modern upscaling is improving much faster than older AA techniques, a lot of the artifacting is gone, not just for new games, but for older ones where newer versions of these systems can be selected even if they weren’t implemented at launch. It’s actually pretty neat.
And that wall of text is, I think, why this conversation is so obtuse and confusing these days. That’s a lot of nuance, and it’s still superficial. People just go “this looks like crap because of particles or whatever”, and I guess that’s fine, but it barely correlates to anything in reality; it’s deeply coloured by half-remembered results that don’t hold up as well as people remember, and clarifying all this is certainly not worth the effort. Just saying it online is a lot simpler and easier, though.
People only call the frame-interpolation stuff they’re touting as performance “fake frames”… I don’t think many people have issues with image upscaling at a decent level (aka the quality settings).
deleted by creator
Why? AI doing one good thing doesn’t erase the dozens of bad ways it’s utilized.
I’m interested to see AI used on a larger scale in really specific ways, but the industry seems more interested in using it to take giant shortcuts and replace staff. That is going to piss people off, and it’s going to really piss people off when it also delivers a shit product.
I’m fine with DLSS, because I want to see AI enhance games. I want it to make them better. So far, all I can see is that it’s making them worse with one single upside that I can just… toggle off on my end if I don’t like it.
OK, but… you know AI isn’t a person, right?
You seem to be mad at math. Which is not rare, but it is weird.
Ok, but… You know there’s a person operating that AI right?
You seem to be separating the tool from the user. Which is not rare, but it is weird.
Hold on, in this scenario you’re mad at the user of the AI app, not at the maker of it?
As in, you’re fine with the tools being trained and made as long as people use them right?
I don’t think you’re aligned with the zeitgeist there.
Please do me a favor and quote the part of that comment where I claimed I’m fine with the way AI is made.
You said “there’s a person operating the AI” and you referred to separating “the tool from the user”.
Please do me a favor and quote the part of that comment that refers to the way the AI is made at all. The point you were parroting was that the “AI good/bad debate” isn’t a judgement of the value of the technology underlying the applications; it’s an assessment of what the companies making apps with this technology are doing with it in each individual application.
I never brought up the user in this. The user is pretty much neutral. The “person operating the AI” isn’t a factor here, it’s some constant outside the debate where we assume some amount of people will use the tools provided for them in the way the tools are designed.
And again with words in my mouth. That wasn’t even close to my point!
My point was that you were unnecessarily sarcastic in a rude way to someone. Beyond that, your comment made absolutely no sense because you were telling them they were mad at the tool instead of the way people are using the tool. Which, if you go back and read their comments, is what they were actually upset about. They didn’t make much, if any, comment about AI itself, but rather about the way people are using it.
… yeah, I’m aware AI isn’t a person. I’m not sure why that’s a question? Maybe I phrased things badly, but I’m not- nor have I ever- been really mad about AI usage. It’s mostly just disappointment.
It’s just a technology. I largely dislike the way it’s being used, partly because I feel like it has a lot of potential.
Yeah, I don’t disagree with the idea that the AI shills are currently peddling it for things it doesn’t do well (or at all) and that’s a big issue.
It’s just not a running tally of “AI doing good” or “AI doing bad”. “AI” isn’t a single thing, for one.
Any time I’ve enabled this, the game looked worse to me. YMMV, etc.
You can count the number of times DLSS makes a game look worse on a single hand. It very often looks better than native with significantly less aliasing/shimmering and better detail. At worst it basically looks the same as native, which is still a massive win as it means you get more performance.
First thing I turn off. It only works in tech demos with very slow-moving cameras. Sometimes.
They do. You’ll see a lot of hate for DLSS on social media, but if you go to the forums or any newly-released game that doesn’t have DLSS, you’ll find at least one post demanding that they implement it. If it’s on by default, most people don’t ever touch that setting and they’re fine with it.
The Nvidia GPUs in data centers are separate from gaming GPUs (they’re even on separate nodes, with different memory chips). The sole exceptions are the 4090/5090, which do see some use in data center form, but at low volumes. And this problem is pretty much nonexistent for AMD.
…No, it’s just straight-up price gouging and anti-competitiveness. It’s just Nvidia being Nvidia, AMD being anti-competitive too (their CEOs are like cousins twice removed), and Intel unfortunately not getting traction, even though Battlemage is excellent.
For local AI, the only things that get sucked up are 3060s, 3090s, and, for the rich/desperate, 4090s/5090s; anything else is a waste of money with too little VRAM. And this is a pretty small niche.
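To put numbers on “too little VRAM”: the rule of thumb for local LLMs is weights ≈ parameter count × bits per weight ÷ 8, which ignores the KV cache and runtime overhead, so treat it as a lower bound. A quick sketch:

```python
def weight_vram_gb(params_billion, bits_per_weight):
    """Lower bound: memory for the model weights alone, ignoring the
    KV cache, activations and runtime overhead."""
    return params_billion * bits_per_weight / 8  # billions of bytes ~= GB

for params in (8, 13, 32, 70):
    print(f"{params:>3}B @ 4-bit: ~{weight_vram_gb(params, 4):.1f} GB of weights")
```

Which is roughly why 12 GB 3060s and 24 GB 3090s are the cards that get snapped up, and 8 GB cards don’t figure into it.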
Chip fabbing allocations are limited, and whatever capacity AI datacenter chips take up means fewer desktop GPUs get made. What’s left of the desktop lineup gets sold for workstation AI use, like the RTX 5090 and even the RX 7900 XTX, because they have more memory. Meanwhile they still sell 8 GB cards to gamers when that hasn’t been enough for a while. The whole situation is just absurd.
Unfortunately, hardly anyone is buying a 7900 XTX for AI, and mostly not a 5090 either. The 5090 didn’t even work until recently and still doesn’t work with many projects, and that goes doubly for the 7900 XTX.
The fab capacity thing is an issue, but not as much as you’d think since the process nodes are different.
Again, I am trying to emphasize, a lot of this is just Nvidia being greedy as shit. They are skimping on VRAM/buses and gouging gamers because they can.
Fabbing is limited to keep prices high. Just like OPEC turning down oil extraction when the price gets too low.
We still have limited wafers at the fabs. The chips going to datacenters could have been consumer stuff instead. Besides, they (Nvidia, Apple, AMD) are all fabricated at TSMC.
Local AI benefits from platforms with unified memory that can be expanded. Watch platforms based on AMD’s Ryzen AI Max 300 chip, or whatever they call it, take off. Framework lets you configure a machine with that chip with up to 128 GB of RAM, iirc. It’s the main reason I believe Apple’s memory upgrades cost a ton: so they aren’t a financially viable option for local AI applications.
This is true, but again, they do use different processes. The B100 (and I think the 5090) is TSMC 4NP, while the other chips use a lesser process. Hopper (the H100) was TSMC 4N, Ada Lovelace (RTX 4000) was TSMC N4. The 3000 series/A100 was straight up split between Samsung and TSMC. The AMD 7000 was a mix of older N5/N6 due to the MCM design.
This is tricky because expandable memory is orthogonal to bandwidth and power efficiency. Framework (ostensibly) had to use soldered memory for their Strix Halo box because it’s literally the only way to make the traces good enough: SO-DIMMs are absolutely not fast enough, and even LPCAMM apparently isn’t there yet.
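The bandwidth gap is easy to sanity-check with peak bandwidth ≈ transfer rate × bus width ÷ 8. The configurations below are my assumed ballpark figures (a 256-bit LPDDR5X-8000 bus for Strix Halo versus dual-channel DDR5-5600 SO-DIMMs), not official spec sheets:

```python
def peak_bandwidth_gb_s(mt_per_s, bus_width_bits):
    """Peak memory bandwidth: transfers per second x bytes per transfer."""
    return mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

# Assumed ballpark configurations, not official spec-sheet numbers.
print(f"2x DDR5-5600 SO-DIMMs (128-bit): ~{peak_bandwidth_gb_s(5600, 128):.0f} GB/s")
print(f"256-bit LPDDR5X-8000 (soldered): ~{peak_bandwidth_gb_s(8000, 256):.0f} GB/s")
```

Roughly 90 GB/s versus roughly 256 GB/s, which is the kind of gap that decides whether a big model is usable at all on unified memory.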
Funny thing is the community is quite lukewarm on the AMD APUs due to poor software support. It works okay… if you’re a Python dev who can spend hours screwing with ROCm to get things fast :/ But it’s quite slow/underutilized if you just run popular frameworks like ollama or the old diffusion ones.
Nah, Apple’s been gouging memory way before AI was a thing. It’s their thing, and honestly it kinda backfired because it made them so unaffordable for AI.
Also, Apple’s stuff is actually… not great for AI anyway. The M-chips have relatively poor software support (PyTorch’s MPS backend is limited, MLX is barebones, so you’re mostly stranded with GGML). They don’t have much compute compared to a GPU or even an AMD APU, and the NPU part is useless. Unified memory doesn’t help at all; it’s just that their stuff happens to have a ton of memory hanging off the GPU, which is useful.
I’m pretty sure the fabs making the chips for datacenter cards could be making more consumer-grade cards, but those are less profitable. And since fab capacity isn’t infinite, the price of datacenter cards is still going to affect consumer ones.
Heh, especially for this generation I suppose. Even the Arc B580 is on TSMC and overpriced/OOS everywhere.
It’s kinda their own stupid fault too. They could’ve used Samsung or Intel, and a bigger, slower die for each SKU, but didn’t.
TSMC is the only proven fab at this point. Samsung is lagging, and its current emerging tech isn’t meeting expectations. Intel might be back in the game with their next gen, but that’s still to be proven and they aren’t scaled up to production levels yet.
And the differences between the fabs mean that designing a chip to be made at more than one would be almost like designing entirely different chips for each fab. Not only are the gates themselves different dimensions (and require a different layout), they also have different performance and power profiles. So even if two chips are logically the same, and you traded area efficiency for a more consistent higher-level layout (think two buildings with the same footprint but different room layouts), they’d still need different setups for things like buffers and repeaters. And even if they did design the same logical chip for both fabs, the results would end up being different products.
And with TSMC leading not just performance but also yields, the lower end chips might not even be cheaper to produce.
Also, each fab requires NDAs and such, and it could even be the case that signing one NDA disqualifies you from signing another, so they might need entirely different teams to do the NDA-covered work rather than being able to have some overlap for similar work.
Not that I disagree with your sentiment overall; it’s just a gamble. Like, what if one company goes with Samsung for one SKU and their competition goes with TSMC for the competing SKU, and they end up with a whole bunch of inventory that no one wants because the performance gap is bigger than the price gap, making waiting for stock the no-brainer choice?
But if Intel or Samsung do catch up to TSMC in at least some of the metrics, that could change.
Yeah you are correct, I was venting lol.
Another factor is that fab choices were design decisions made way before the GPUs launched, when everything you said (TSMC’s lead/reliability, in particular) rang even truer. Maybe Samsung or Intel could offer steep discounts to offset the lower performance (which Nvidia/AMD could translate into bigger dies), but that’s quite a fantasy, I’m sure…
It all just sucks now.