- cross-posted to:
- [email protected]
There’s what AI could’ve been (collaborative and awesome), and then there’s what the billionaire class is pushing today (exploitative shit that they hit everyone over the head with until they say they like it). But the folks frothing at the mouth over it are unwilling to listen to why so many people are against the AI we’ve had forced upon us today.
Yesterday, Copilot hallucinated four different functions when I asked it to refactor a ~20 line TS function, despite me handing it two helper files that contained everything it needed. If I can’t confidently ask it to do anything, it’s immediately useless to me. It’s like being stuck with an impulsive liar that you have to get the truth out of.
Dude, I couldn’t even get Copilot to generate a picture at the size I wanted, despite specifying the exact pixel height and width.
A guy I used to work with would, at least I would swear it, submit shit code just so I would comment about the right way to do it. No matter how many times I told him how to do something. Sometimes it was code that didn’t actually do anything. Working with co-pilot is a lot like working with that guy again.
Funny enough, here’s a description of AI I wrote yesterday that I think you’ll relate to:
AI is the lazy colleague that will never get fired because their dad is the CTO. You’re forced to pair with them on a daily basis. You try to hand them menial tasks that they still manage to get completely wrong, while dear ol’ dad is gassing them up in every all-hands meeting.
It’s fundamentally a make-shit-up device. It’s like pulling words out of a hat. You cannot get mad at the hat for giving you poetry when you asked for nonfiction.
Get mad at the company which bolted the hat to your keyboard and promised you it was psychic.
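The hat metaphor is closer to the literal mechanics than it sounds: the model scores every candidate next word and then draws one at random. A toy sketch of that sampling step, with made-up words and scores (nothing here is a real model):

```python
import math
import random

# Toy next-token step: the model assigns a score (logit) to every candidate
# word, softmax turns the scores into probabilities, and one word is drawn
# at random. All words and numbers here are made up.
logits = {"nonfiction": 2.1, "poetry": 1.9, "pancakes": 0.3}

weights = {w: math.exp(s) for w, s in logits.items()}
total = sum(weights.values())
probs = {w: v / total for w, v in weights.items()}

next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_word)
# "nonfiction" is the most likely draw, but you'll still get "poetry" a lot.
```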
I think that’s exactly who they’re mad at
Considering that the AI craze is what’s fueling the shortage and massive increase in GPU prices, I really don’t see gamers ever embracing AI.
[…] I really don’t see gamers ever embracing AI.
They’ve spent years training to fight it, so that tracks.
The Nvidia GPUs in data centers are separate from gaming GPUs (they’re even made on different nodes, with different memory chips). The sole exception is the 4090/5090, which do see some use in data center form, but at low volumes. And this problem is pretty much nonexistent for AMD.
…No, it’s just straight-up price gouging and anti-competitive behavior. It’s just Nvidia being Nvidia, AMD being anticompetitive too (their CEOs are like cousins twice removed), and Intel unfortunately not getting traction, even though Battlemage is excellent.
For local AI, the only things that get sucked up are 3060s, 3090s, and, for the rich/desperate, 4090s/5090s; anything else is a waste of money with too little VRAM. And this is a pretty small niche.
Chip fab allocations are limited, and whatever capacity the AI datacenter chips take up, the desktop GPUs don’t get made. And what’s left of the desktop chips gets sold for workstation AI workloads, like the RTX 5090 and even the RX 7900 XTX, because they have more memory. Meanwhile they still sell 8GB cards to gamers when that hasn’t been enough for a while. The whole situation is just absurd.
Fabbing is limited to keep prices high. Just like OPEC turning down oil extraction when the price gets too low.
Unfortunately, no one is buying a 7900 XTX for AI, and mostly not a 5090 either. The 5090 didn’t even work until recently and still doesn’t work with many projects, doubly so for the 7900 XTX.
The fab capacity thing is an issue, but not as much as you’d think since the process nodes are different.
Again, I am trying to emphasize, a lot of this is just Nvidia being greedy as shit. They are skimping on VRAM/busses and gouging gamers because they can.
Still have limited wafers at the fabs. The chips going to datacenters could have been consumer stuff instead. Besides, they (Nvidia, Apple, AMD) all fabricate at TSMC.
Local AI benefits from platforms with unified memory that can be expanded. Watch platforms based on AMD’s Ryzen AI MAX 300 chip, or whatever they call it, take off. Framework lets you configure a machine with that chip with up to 128 GB of RAM, iirc. I believe it’s the main reason Apple’s memory upgrades cost a ton: so that they aren’t a financially viable option for local AI applications.
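For a rough sense of why 128 GB of unified memory matters here, a back-of-the-envelope sketch (the model sizes, quantization, and overhead factor are assumptions, not measurements):

```python
# Back-of-the-envelope RAM/VRAM needed to run a local model.
# Parameter counts, quantization, and the 20% overhead are assumptions.

def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Weights plus roughly 20% for KV cache and runtime buffers."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for params, bits in [(8, 4), (24, 4), (70, 4), (70, 8)]:
    print(f"{params}B @ {bits}-bit ~ {model_memory_gb(params, bits):.0f} GB")

# 8B @ 4-bit  ~  5 GB  -> squeaks onto an 8 GB card
# 70B @ 4-bit ~ 42 GB  -> needs a 48 GB+ pool, hence the appeal of 128 GB unified memory
```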
The chips going to datacenters could have been consumer stuff instead.
This is true, but again, they do use different processes. The B100 (and I think the 5090) is TSMC 4NP, while the other chips use a lesser process. Hopper (the H100) was TSMC 4N, Ada Lovelace (RTX 4000) was TSMC N4. The 3000 series/A100 was straight up split between Samsung and TSMC. The AMD 7000 was a mix of older N5/N6 due to the MCM design.
Local AI benefits from platforms with unified memory that can be expanded.
This is tricky because expandable memory is orthogonal to bandwidth and power efficiency. Framework (ostensibly) had to use soldered memory for their Strix Halo box because it’s literally the only way to make the traces good enough: SO-DIMMs are absolutely not fast enough, and even LPCAMM apparently isn’t there yet.
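The bandwidth point is easy to put numbers on. A quick sketch, using typical (assumed) configurations for SO-DIMMs versus a soldered 256-bit bus like Strix Halo’s:

```python
# Peak memory bandwidth = bus width (bytes) x transfer rate.
# The configurations below are typical/assumed, just to show the gap.

def bandwidth_gbs(bus_bits: int, mt_per_s: int) -> float:
    return bus_bits / 8 * mt_per_s / 1000  # GB/s

print("Dual-channel DDR5-5600 SO-DIMMs:", bandwidth_gbs(128, 5600), "GB/s")  # ~90 GB/s
print("256-bit soldered LPDDR5X-8000:  ", bandwidth_gbs(256, 8000), "GB/s")  # ~256 GB/s
```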
AMD’s Ryzen AI MAX 300 chip
Funny thing is, the community is quite lukewarm on the AMD APUs due to poor software support. It works okay… if you’re a Python dev who can spend hours screwing with ROCm to get things fast :/ But it’s quite slow/underutilized if you just run popular frameworks like Ollama or the older diffusion ones.
I believe it’s the main reason Apple’s memory upgrades cost a ton: so that they aren’t a financially viable option for local AI applications.
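For context, the “hours screwing with ROCm” step usually starts with just checking whether the framework sees the GPU at all. A minimal sketch, assuming a ROCm build of PyTorch (the HSA_OVERRIDE_GFX_VERSION value is only an example, not a recommendation for any specific chip):

```python
# Sanity check that a ROCm build of PyTorch actually sees the GPU/APU.
# (On ROCm builds the torch.cuda.* API is reused for HIP devices.)
import torch

print("HIP/ROCm build:", torch.version.hip)        # None on CPU/CUDA-only builds
print("Device visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Using:", torch.cuda.get_device_name(0))

# On officially unsupported consumer chips, people often have to export
# something like HSA_OVERRIDE_GFX_VERSION=11.0.0 before this returns True.
# That's the "hours screwing with ROCm" part.
```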
Nah, Apple’s been gouging memory way before AI was a thing. It’s their thing, and honestly it kinda backfired because it made them so unaffordable for AI.
Also, Apple’s stuff is actually… not great for AI anyway. The M-chips have relatively poor software support (spotty PyTorch support, MLX is barebones, leaving you mostly stranded with GGML). They don’t have much compute compared to a GPU or even an AMD APU, and the NPU part is useless. Unified memory doesn’t help at all; it’s just that their stuff happens to have a ton of memory hanging off the GPU, which is useful.
I’m pretty sure the fabs making the chips for datacenter cards could be making more consumer grade cards but those are less profitable. And since fabs aren’t infinite the price of datacenter cards is still going to affect consumer ones.
Heh, especially for this generation I suppose. Even the Arc B580 is on TSMC and overpriced/OOS everywhere.
It’s kinda their own stupid fault too. They could’ve used Samsung or Intel, with a bigger, slower die for each SKU, but didn’t.
TSMC is the only proven fab at this point. Samsung is lagging and current emerging tech isn’t meeting expectations. Intel might be back in the game with their next gen but it’s still to be proven and they aren’t scaled up to production levels yet.
And the differences between the fabs mean that designing a chip to be made at more than one would be almost like designing entirely different chips for each fab. Not only are the gates themselves different dimensions (and require a different layout), they also have different performance and power profiles. So even if two chips are logically the same, and you could trade area efficiency for a more consistent higher-level layout (think two buildings with the same footprint but different room layouts), they’d still need different setups for things like buffers and repeaters. And even if they did design the same logical chip for both fabs, they’d end up being different products in the end.
And with TSMC leading not just performance but also yields, the lower end chips might not even be cheaper to produce.
Also, each fab requires NDAs and such and it could even be a case where signing one NDA disqualifies you from signing another, so they might require entirely different teams to do the NDA-requiring work rather than being able to have some overlap for similar work.
Not that I disagree with your sentiment overall, it’s just a gamble. Like what if one company goes with Samsung for one SKU and their competition goes with TSMC for the competing SKU and they end up with a whole bunch of inventory that no one wants because the performance gap is bigger than the price gap making waiting for stock the no brainer choice?
But if Intel or Samsung do catch up to TSMC in at least some of the metrics, that could change.
Yeah you are correct, I was venting lol.
Another factor is that fab choices were locked in way before the GPUs launched, when everything you said (TSMC’s lead/reliability, in particular) rang even more true. Maybe Samsung or Intel could offer steep discounts for the lower performance (which Nvidia/AMD could translate into bigger dies), but that’s quite a fantasy, I’m sure…
It all just sucks now.
Speak for yourself. As an avid gamer I am excitedly looking towards the future of AI in games. Good models (with context buffers much longer than the 0.9 s in this demo) have the potential to revolutionise the gaming industry.
I really don’t understand the amount of LLM/AI hate in Lemmy. It is a tool with many potential uses.
There’s a difference between LLMs making games and LLMs trained to play characters in a game.
I’m not opposed to either. I think of this a bit like procedural generation, except better.
DLSS (AI upscaling) alone should see gamers embracing the tech.
You must not have heard the dis gamers use for this tech.
Fake frames.
I think they’d rather have more raster and ray tracing especially raster in competitive games.
DLSS runs on the same hardware as raytracing. That’s the entire point. It’s all just tensor math.
DLSS is one of those things where I’m not even sure what people are complaining about when they complain about it. I can see it being frame generation, which has downsides and is poorly marketed. But then some people seem to be claiming that DLSS does worse than TAA or older upscaling techniques when it clearly doesn’t, so it’s hard to tell. I don’t think all the complainers are saying the same thing or fully understanding what they’re saying.
The Lemmy userbase seems to have this echo chamber effect where anything to do with AI is categorically bad, doesn’t matter what it is or how it performs.
Also mentioning AI gets your comment downvoted, further increasing the echo chamber effect.
I guess. It’s not like downvotes mean anything here beyond… dopamine hits, I suppose?
I don’t know that it’s Lemmy. Both supporters and detractors don’t seem to have a consistent thing they mean when they say “AI”. I don’t think many of them mean the same thing or agree with each other. I don’t think many understand how some of the things they’re railing about are put together or how they work.
I think the echo chamber element comes in when people who may realize they don’t mean the same thing don’t want to bring it up because they broadly align ideologically (AI yay or AI boo, again, it happens both ways), and so the issue gets perpetuated.
Aaaand we’ve now described all of social media, if not all of human discourse. Cool.
Yeah, to people things are black and white and all or nothing. Even suggesting there might be nuance to things elicits a defensive knee jerk reaction.
At least the DLSS I’ve seen looks terrible. I’ve tried it in a bunch of games, and it produces visible artifacts that are worse than TAA. Cyberpunk 2077 is a great example.
Newer versions are supposedly better, but I haven’t seen them yet.
Cyberpunk 2077 is a great example.
You’re kidding, right? Cyberpunk looks better with DLSS4 than it does natively lol.
You haven’t seen it in a while, then, because that was definitely not true of the previous version and it’s absolutely, objectively not true of the new transformer model version.
But honestly, other than what? the very first iteration? it hasn’t been true in a while. TAA and DLSS tended to artifact in different ways. DLSS struggles with particles and fast movement, TAA struggles with most AA challenge areas like sub-pixel detail and thin lines. Honestly, for real time use at 4K I don’t know of a more consistent, cleaner AA solution than DLSS. And I hesitate to call 4K DLAA a real time solution, but that’s definitely the best option we have in game engines at this point.
I don’t even like Nvidia as a company and I hate that DLSS is a proprietary feature, but you can’t really argue with results.
I can definitely argue with the results when it looks worse than TAA, thank you.
Well, if nothing else I’ve made my case.
I mean, I’m not gonna go build a Digital Foundry comparison video for you, but this type of argument is definitely what I’m talking about when I say I don’t understand what people just claiming this out of the blue even think they’re saying.
DLSS is one of those things where I’m not even sure what people are complaining about when they complain about it.
Mostly performance, from what I’ve seen. Hardware requirements go up while the real (edit: internal) resolution goes down, and meanwhile image quality is stagnant. It’s not completely DLSS’s fault, though.
I think temporal antialiasing never looks good. I don’t really care to talk about dlss though, I just shut up and avoid upscaling (unless it’s forced grrrr).
See, this is the type of thing that weirds me out. Temporal AA doesn’t look good compared to what? What do you mean “real resolution goes down”? Down from where? This is a very confusing statement to make.
I don’t know what it is that you’re supposed to dislike or what a lot of the terms you’re using are supposed to mean. What is “image quality” in your view? What are you comparing as a reference point on all these things that go up and down?
TAA looks worse than no AA, IMO. It can be better than nothing when combined with techniques that make frames grainy in random ways, like real-time path-traced global illumination that doesn’t have time to cast enough rays for a smooth output. But I see it as pretty much a blur effect.
Other AA techniques generate more samples to increase pixel accuracy. TAA uses previous frame data to increase temporal stability, which can reduce aliasing effects but is less accurate because sometimes the new colour isn’t correlated with the previous one.
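That really is the core of it. A toy sketch of the TAA resolve step (illustrative only; real implementations reproject the history with motion vectors and use fancier clamping):

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def taa_resolve(current: np.ndarray, history: np.ndarray,
                alpha: float = 0.1) -> np.ndarray:
    """Blend the previous frame into the current one (toy version).

    `current` and `history` are HxWx3 float arrays; `history` is assumed to
    already be reprojected with motion vectors. Clamping the history to the
    current frame's 3x3 neighbourhood range is what limits ghosting, and
    also what eats fine detail.
    """
    lo = minimum_filter(current, size=(3, 3, 1))
    hi = maximum_filter(current, size=(3, 3, 1))
    clamped_history = np.clip(history, lo, hi)
    # Exponential moving average: mostly history, a little current frame.
    return alpha * current + (1.0 - alpha) * clamped_history
```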
Maybe the loss of accuracy from TAA is worth the increase you get from a low sample path traced global illumination in some cases (personally a maybe) or extra smoothness from generated frames (personally a no), but TAA artifacts generally annoy me more than aliasing artifacts.
As for specifics of those artifacts, they are things like washed out details, motion blur, and difficult to read text.
TAA only looks worse than no AA if you have a super high res image with next to no sub-pixel detail… or a very small screen where you are getting some blending from human eyeballs not being perfectly sharp in the first place.
I don’t know that the line is going to be on things like grainy low-sample path tracing. For one thing you don’t use TAA for that, you need specific denoising (ray reconstruction is sometimes bundled with DLSS, but it’s technically its own thing and DLSS is often used independently from it). The buildup of GI over time isn’t TAA, it’s temporal accumulation, where you add rays from multiple frames over time to flesh out the sample.
I can accept, as a matter of personal preference, saying you prefer an oversharpened, crinkly image over a more natural, softer one, so I can accept a preference for all the missed edges and fine detail of edge-detection-based blur AA. But there’s no reason decent TAA would look blurry, and in any case that’s exactly where modern upscaling used as AA has an edge: there are definitely no washed-out details when using DLSS compared to no AA or SMAA at the same native res. You often get additional generated detail and less blur than native with those.
Temporal AA doesn’t look good compared to what?
Compared to some older AA tech. TAA is awful in motion in games. Edit: by default; if there’s a config file it can be made better. Edit 2: sometimes no AA is clean as fuck, depends on the game and resolution.
What do you mean “real resolution goes down”? Down from where?
I mean internal resolution. Playing at 1080p with DLSS means the game doesn’t render at your specified resolution, but a fraction of it. Native (for now) is the best looking.
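For concreteness, the commonly quoted per-axis scale factors for the DLSS presets work out roughly like this (approximate figures; games can and do override them):

```python
# Per-axis render-scale factors commonly cited for the DLSS quality presets.
# Treat these as approximate; games can and do override them.
PRESETS = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.33,
}

def internal_resolution(out_w: int, out_h: int, preset: str) -> tuple[int, int]:
    scale = PRESETS[preset]
    return round(out_w * scale), round(out_h * scale)

for preset in PRESETS:
    print(preset, internal_resolution(1920, 1080, preset))
# Quality     -> (1281, 720): roughly 720p, upscaled back to 1080p for display
# Performance -> (960, 540)
```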
What is “image quality” in your view?
Mostly general clarity, and things like particle effects and textures, I think. You can ask those people directly, you know. I’m just the messenger; I barely play modern games.
I don’t know … a lot of the terms you’re using are supposed to mean
Yeah, that’s a problem. More people should be aware of the graphical effects in games. Thankfully some games now implement previews for damn near every quality toggle.
Alright, so no, TAA doesn’t look worse “compared to some older AA tech”. For one thing, the benchmark for “some older AA tech” is MSAA used at 720p (on a good day) on consoles. MSAA made a valiant effort that generation, but it doesn’t scale well with resolution, so while the comparatively very powerful PC GPUs were able to use it effectively at 1080p60, they were already struggling. And to be clear, those games looked like mud compared to newer targets.
We are now typically aiming for 4K, which is four times as many pixels, and at semi-arbitrary refreshes, often in the hundreds on PCs. TAA does a comparable-to-better job than MSAA much faster, so cranking up the base resolution is viable. DLSS goes one further and is able to upres the image, not just smooth out edges, even if the upres data is machine-generated.
“MSAA looked better” is almost entirely rose tinted glasses.
Internal resolution with DLSS is variable. Some games have a setting to select it on the fly depending on load, but all games give you a quality selector, so it’s ultimately a performance trade-off where you set your base resolution and your output resolution. DLSS is heavier than most TAA, but much better. If you’re keeping the same output res and settings, then yeah, you’re probably going to lower the base resolution a bit to offset the cost, but you can absolutely run DLSS at native resolution (that’s normally called DLAA). It looks great, but any AA at native 4K is gonna be heavy, so you need pretty powerful hardware for that.
So the internal resolution hasn’t “gone down”. You may need to lower it in some games to hit performance, but that’s always been the case. What has changed is we’re pushing super high definition images compared to the PS3 and even the PS4 generation. 4 to 16 times bigger.
And yeah, upscaling can show artifacts around some elements, but so can old AA. Modern versions of DLSS and FSR are a lot cleaner than older ones, but it’s not a downgrade against most comparables. It becomes a matter of whether you think some of the ghosting on particles or fast motion was more annoying than fizzling on detailed areas or thin lines. If a preference for one over the other was the conversation I’d be more than happy to chalk it up to taste, but that’s not how this is often framed. And again, modern upscaling is improving much faster than older AA techniques, a lot of the artifacting is gone, not just for new games, but for older ones where newer versions of these systems can be selected even if they weren’t implemented at launch. It’s actually pretty neat.
And that wall of text is, I think, why this conversation is so obtuse and confusing these days. That’s a lot of nuance, and it’s still superficial. People just go “this looks like crap because of particles or whatever” and I guess that’s fine, but it barely correlates to anything in reality, it’s quite deeply impacted by half-remembered results that really don’t hold up as well as people remember and clarifying all this is certainly not worth the effort. Just saying it online is a lot simpler and easier, though.
People only call the interpolation crap they’re touting as performance “fake frames”… I don’t think a lot of people have issues with image upscaling at a decent level (aka quality settings).
deleted by creator
Any time I’ve enabled this, the game looked worse to me. YMMV, etc.
You can count the number of times DLSS makes a game look worse on a single hand. It very often looks better than native with significantly less aliasing/shimmering and better detail. At worst it basically looks the same as native, which is still a massive win as it means you get more performance.
Why? AI doing one good thing doesn’t erase the dozens of bad ways it’s utilized.
I’m interested to see AI used on a larger scale in really specific ways, but the industry seems more interested in using it to take giant shortcuts and replace staff. That is going to piss people off, and it’s going to really piss people off when it also delivers a shit product.
I’m fine with DLSS, because I want to see AI enhance games. I want it to make them better. So far, all I can see is that it’s making them worse with one single upside that I can just… toggle off on my end if I don’t like it.
OK, but… you know AI isn’t a person, right?
You seem to be mad at math. Which is not rare, but it is weird.
Ok, but… You know there’s a person operating that AI right?
You seem to be separating the tool from the user. Which is not rare, but it is weird.
Hold on, in this scenario you’re mad at the user of the AI app, not at the maker of it?
As in, you’re fine with the tools being trained and made as long as people use them right?
I don’t think you’re aligned with the zeitgeist there.
Please do me a favor and quote the part of that comment where I claimed I’m fine with the way AI is made.
You said “there’s a person operating the AI” and you referred to separating “the tool from the user”.
Please do me a favor and quote the part of that comment that refers to the way the AI is made at all. The point you were parroting was pointing out that the “AI good/bad debate” isn’t a judgement of value of the technology underlying the applications, it’s an assessment of what the companies making apps with this technology are doing with it on each individual application.
I never brought up the user in this. The user is pretty much neutral. The “person operating the AI” isn’t a factor here, it’s some constant outside the debate where we assume some amount of people will use the tools provided for them in the way the tools are designed.
… yeah, I’m aware AI isn’t a person. I’m not sure why that’s a question? Maybe I phrased things badly, but I’m not, nor have I ever been, really mad about AI usage. It’s mostly just disappointment.
It’s just a technology. I largely dislike the way it’s being used, partly because I feel like it has a lot of potential.
Yeah, I don’t disagree with the idea that the AI shills are currently peddling it for things it doesn’t do well (or at all) and that’s a big issue.
It’s just not a running tally of “AI doing good” or “AI doing bad”. “AI” isn’t a single thing, for one.
First thing I turn off. It only works in tech demos with very slow-moving cameras. Sometimes.
They do. You’ll see a lot of hate for DLSS on social media, but if you go to the forums or any newly-released game that doesn’t have DLSS, you’ll find at least one post demanding that they implement it. If it’s on by default, most people don’t ever touch that setting and they’re fine with it.
Demonstrating some crazy idea always confuses people who expect a finished product. The fact this works at all is sci-fi witchcraft.
Video generators offer rendering without models, levels, textures, shaders-- anything. And they’ll do shocking photorealism as easily as cartoons. This one runs at interactive speeds. That’s fucking crazy! It’s only doing one part of one game that’d run on a potato, and it’s not doing it especially well, but holy shit, it’s doing it. Even if the context length stayed laughably short - this is an FMV you can walk around in. This is something artists could feed and prune and get real fuckin’ weird with, until it’s an inescapable dream sequence that looks like nothing we know how to render.
The most realistic near-term application of generative AI technology remains coding assistants, and perhaps rapid prototyping tools for developers, rather than a drop-in replacement for traditional game development pipelines.
Sure, let’s pretend text is all it can generate. Not textures, models, character designs, et very cetera. What possible use could people have for an army of robots if they only do a half-assed job?
Imagine how much better bg3 would have been if there were more randomly distributed misc items of no value strewn across each map. Think of how fast you’d kill your mouse then!
This is what I’m talking about: an unwillingness to see anything but finished products. Not developing the content in a big-ass game… just adding stuff to a big-ass game. Like BG3 begins fully-formed as the exact product you’ve already played.
Like it’d be awful if similar new games took less than six years, three hundred people, and one hundred million dollars.
AAA dev here.
Carmack is correct. I expect to be dogpiled by uninformed disagreements, though, because on social media all AI = Bad and no nuance is allowed. If that’s your knee-jerk reaction, please refrain for a moment and calmly re-think through your position.
AAAA dev here.
Carmack is incorrect.
Prove it.
What AI tools are you personally looking forward to or already using?
Stable Diffusion does a lot already, for static pictures. I get good use out of Eleven for voice work, when I want something that isn’t my own narration.
I’m really looking forward to all of these new AI features in DaVinci Resolve 20. These are actual useful features that would improve my workflow. I already made good use of the “Create Subtitles From Audio” feature to streamline subtitling.
Good AI tools are out there. They are just invisibly doing the work for people that pay attention, while all of the billionaires make noise about LLMs that do almost nothing.
I compare it to CGI. The very best CGI are the effects you don’t even notice. The worst CGI is when you try to employ it in every place that it’s not designed for.
Not me personally, as AI can’t really replicate my work (I’m a senior sound designer on a big game), but a few colleagues of mine have already begun reaping the workflow improvements of AI at their studio.
Obviously AI is coming for sound designers too. You know that right? https://elevenlabs.io/sound-effects
And if you work on games and you haven’t seen your industry decimated in the past 16 months, I want to know what rock you have been living under and if there’s room for one more.
I love when regular folks act like they understand things better than industry insiders near the top of their respective field. It’s genuinely amusing.
Let me ask you a simple question: do YOU want to play a game with mediocre, lowest-common-denominator-generated AI audio (case in point, that AI audio generator sounds like dogshit and would never fly in a retail product)? Or do you want something crafted by a human with feelings (a thing an AI model does not have) and the ability to craft a unique design specifically meant to create emotional resonance within you (a thing an AI has exactly zero intuition for), tailored for the game in question, as any good piece of art demands?
Answers on a postcard, thanks. The market agrees with me as well; no AI-produced game is winning at the Game Awards any time even remotely soon, because nobody wants to play stuff like that. And you know what’s even funnier? We TRIED to use tools like this a few years ago when they began appearing on the market, and we very quickly ditched them because they sounded like ass, even when we built our own proprietary models and trained them on our own designed assets. Turns out you can’t tell a plagiarism machine to be original and good because it doesn’t know what either of those things mean. Hell, even sound design plugins that try to do exactly what you’re talking about have kinda failed in the market for the exact reasons I just mentioned. People aren’t buying Combobulator, they’re buying Serum 2 in droves.
And no, I have not seen my industry decimated by AI. Talk to any experienced AAA game dev on LinkedIn or any one of our public-facing Discord servers; it’s not really a thing. There still is and always will be a huge demand for art specifically created by humans and for humans, for the exact reasons listed above. What has ACTUALLY decimated my industry is the overvaluation and inflation of everything in the economy, and the higher interest rates then put in place to counter it, which leads to layoffs once giant games don’t hit the insane profit targets the suits have set. That is likely what you are erroneously attributing to AI displacement.
Do you remember the music from the last Marvel film you watched?
I don’t.
Quality isn’t directly correlated to success. Buy a modern pair of Nikes or… Go to McDonalds, play a modern mobile game.
I love when industry insiders think they’re so untouchable that a budget cut wouldn’t have them on the chopping block. You’re defensive because it’s your ass on the line, not because it’s true.
People gargle shit products and pay for them willingly all day long. So much so that it’s practically the norm. You’re just insulated from it, for now.
“Oh no, all my quality work won’t be in the next marvel movie or in mcdonalds’ next happy-meal promo campaign, darn. Guess I’ll have to make and sell something else.”
~ Literally every artist with a modicum of talent, ambition and a brain
What’s your favorite big-budget, AI-generated game/movie/show that you’ve given money to, again?
This is such a flimsy argument that it’s barely worth responding to. People, by and large, are absolutely sick of Marvel slop and still seek quality art elsewhere; this is not a novel concept, nor will it be outmoded by the introduction of AI. The internet and entertainment industry at large is still actively exploding with monetized, unique, quality content, because not everybody wants slop; most people are actively sick of it. Talented visual artists are still being hired in the entertainment industry and will continue to be, and they will also continue to be able to independently release stuff online, because they have their own individual perspective and the x-factor of “human creativity” that AI slop just cannot compete with. Interesting that you didn’t address that.
What’s also interesting is that you’re touching upon the reason most people are mad: AI models tend to churn out mediocre work, and people feel threatened because they aren’t good enough at their craft to compete with it. So instead of becoming better, they scream at anybody trying to advance the technology of their particular discipline for taking away extremely easy kinds of work that they barely had to do anything to get before (Patreon commissions, etc.). Work a tad harder, try to express yourself more effectively, and I promise you somebody will value your work above the forgettable music from “The Eternals”. People with talent tend to break through if they try hard enough; it’s not rocket science.
And I addressed the budget-cut thing earlier, so no I am not acting the way you described. Budget cuts are not an AI problem, they’re a capitalism problem, as I stated previously. Please read.
INB4 people scream “survivorship bias”. No, you’re just not good enough, and you’d rather scream and yell at sensible takes from every expert in their field or craft than accept that fact. Legitimately. I know you don’t like hearing that, but you need to accept it in order to improve. Get better at your craft. If you can’t make stuff with greater quality than AI slop, you’re not going to be capable of making things that resonate with people anyway. AI will never be able to do this, and this kind of quality creates sales. AI will be used, sure, but it will be leveraged to improve efficiency, not replace artists.
At this point, it should be obvious that no one is downvoting you because they believe you’re wrong. Rather, it’s because you’re an inflated, insecure douchebag who’s so threatened by the opinions of two federated users on the ass end of the internet that he feels the need to write an essay about it, not to us, no, to his own ego.
And for the record, I’m not one of the believers. On a long enough timeline, you’ll be playing birthday parties dressed as a cowboy. AI is improving while people have a bell curve. It’s only a matter of time. Cheers. I hope you find happiness one day.
Do you remember the music from the last Marvel film you watched?
I don’t.
What has ACTUALLY decimated my industry is the overvaluation and inflation of everything in the economy
The real answer, like every creative industry over the past 200+ years, is oversaturation.
Artists starve because of oversaturation. There is too much art and not enough buyers.
Musicians starve because of oversaturation. And music is now easier than ever to create. Supply is everywhere, and demand pales in comparison. I have hundreds of CC BY-SA 4.0 artists in a file that I can choose for use in my videos, because the supply is everywhere.
Video games are incredibly oversaturated. Throw a stick at Steam, and it’ll land on a thousand games. There’s plenty of random low-effort slop out there, but there’s also a lot of passionate indie creators trying to make their mark, and failing, because the marketing is not there.
Millions of people shouting in the wind, trying to make their voices heard, and somehow become more noticed than the rest of the noise. It’s a near-impossible task, and it’s about 98% luck. Yet the 2% of people who actually “make it” practice survivorship bias on a daily basis, preaching that hard work and good ideas will allow you to be just like them.
It’s all bullshit, of course. We don’t live in a meritocracy.
Nah, good art breaks through with enough perseverance, time, improvement in your work, and a little bit of luck (which you need less of the more of the first three you have). People just underestimate what “good art” is defined as. The bar is now just where it always should have been, which is JUST above somebody copying your work without any underlying understanding of why it works or the cultural gestalt involved. Not a very high bar to clear, tbh, but I can understand why some entry-level folks feel frustrated. If that’s you, keep your head down, push through, and improve; you’ll get there.
When it’s other people’s work, well, people need a nuanced opinion about this nascent technological breakthrough.
When it’s your specific area of expertise, it’s “the plagiarism machine.”
You are Knoll’s law personified.
I love how you didn’t read anything else I wrote regarding this and boiled it down to a quippy, holier-than-thou and wrong statement with no nuance. Typical internet dumbass.
Oh my god you’re still trying to have it both ways.
And yeah yeah yeah, it does a mediocre job of whatever you do. That’s the opposite of safety. Disruptive change only cares about whether it can do the job. Already the answer seems to be a soft yes.
Right now is the worst the tech will ever be, again.
Carmack is an AI sent from the future, so he’s a bit biased.
Oh man, this thing is amazing. It’s got some good memory: room layouts weren’t changing on me whenever they left the view, unlike previous attempts I’ve seen with Minecraft.
https://copilot.microsoft.com/wham?features=labs-wham-enabled
What are you talking about? It looks like shit, it plays like shit, and the overall experience is shit. And it isn’t even clear what the goal is. There are so many better ways to incorporate AI into game development, if one wanted to, and I’m not sure we want to.
I have seen people argue that this is what the technology can do today, so imagine it in a couple of years. However, that seems very naive. The rate at which barriers are reached has no impact on how hard it is to break through those barriers. And as so often in life, diminishing returns are a bitch.
Microsoft bet big on this AI thing, because they have been lost in what to do ever since they released things like the Windows Phone and Windows 8. They don’t know how to innovate anymore, so they are going all in on AI. Shitting out new gimmicks at light speed to see which gain traction.
(Please note I’m talking about the consumer and small business side of Microsoft. Microsoft is a huge company with divisions that act almost like separate companies within it. Their Azure branch, for example, has been massively successful and innovates just fine.)
I’m happy to see someone else pushing back against the inevitability line I see so much around this tech. It’s still incredibly new and there’s no guarantee it will continue to improve. Could it? Sure, but I think it’s equally likely it could start to degrade instead, due to AI inbreeding or power consumption becoming too big an issue with wider adoption. No one actually knows the future, and it’s hardly inevitable.
What are you talking about? For the technology it looks and plays amazing. This is a nice steady increase in capability, all one can hope for with any technology.
I don’t care about Microsoft, or what is commercially successful.
I get it, AI has some significant downsides, but people go way overboard. You don’t have to tell people who use AI to kill themselves.
Somebody didn’t watch Terminator 2