Feel like we’ve got a lot of tech-savvy people here, so it seems like a good place to ask. Basically, as a dumb guy that reads the news, it seems like everyone who lost their mind (and savings) on crypto just pivoted to AI. In addition to that you’ve got all these people invested in AI companies running around with flashlights under their chins like “bro this is so scary how good we made this thing”. Seems like bullshit.
I’ve seen people generating bits of programming with it, which seems useful, but idk man. Coming from CNC, I don’t think I’d just send it with some ChatGPT code. Is it all hype? Is there something actually useful under there?
To the second question: it’s not novel at all. The models used were invented decades ago. What changed is that Moore’s Law kicked in and we got much stronger computational power, especially graphics cards. It seems there is some resource barrier that, once surpassed, turns these models from useless to useful.
Not the specific models, unless I’ve been missing out on some key papers. The 90s models were a lot smaller. A “deep” NN used to mean 3 or more layers, and that’s nothing today. Data is a huge component too.
The specifics are a bit different, but the main ideas are much older than this. I’ll leave the Wikipedia history here:
"Frank Rosenblatt, who published the Perceptron in 1958,[10] also introduced an MLP with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer.[11][12] Since only the output layer had learning connections, this was not yet deep learning. It was what later was called an extreme learning machine.[13][12]
The first deep learning MLP was published by Alexey Grigorevich Ivakhnenko and Valentin Lapa in 1965, as the Group Method of Data Handling.[14][15][12]
The first deep learning MLP trained by stochastic gradient descent[16] was published in 1967 by Shun’ichi Amari.[17][12] In computer experiments conducted by Amari’s student Saito, a five layer MLP with two modifiable layers learned internal representations required to classify non-linearly separable pattern classes.[12]
In 1970, Seppo Linnainmaa published the general method for automatic differentiation of discrete connected networks of nested differentiable functions.[3][18] This became known as backpropagation or reverse mode of automatic differentiation. It is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673[2][19] to networks of differentiable nodes.[12] The terminology “back-propagating errors” was actually introduced in 1962 by Rosenblatt himself,[11] but he did not know how to implement this,[12] although Henry J. Kelley had a continuous precursor of backpropagation[4] already in 1960 in the context of control theory.[12] In 1982, Paul Werbos applied backpropagation to MLPs in the way that has become standard.[6][12] In 1985, David E. Rumelhart et al. published an experimental analysis of the technique.[7] Many improvements have been implemented in subsequent decades.[12]"
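To make that concrete for anyone following along: the core idea those papers describe (an MLP trained by stochastic gradient descent, with backprop just being the chain rule) is small enough to sketch in a few lines of numpy. This is purely my own toy illustration on XOR, the classic non-linearly separable problem, not any of the historical implementations:

```python
import numpy as np

# Toy MLP (one hidden layer) trained by stochastic gradient descent on XOR.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input  -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(20000):
    i = rng.integers(len(X))             # "stochastic": one example at a time
    x, t = X[i:i+1], y[i:i+1]
    h = sigmoid(x @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - t) * out * (1 - out)  # backward pass = the chain rule
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out[0]
    W1 -= lr * x.T @ d_h;   b1 -= lr * d_h[0]

# Should end up close to [[0], [1], [1], [0]]
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))
```

Scale that structure up by a few billion parameters and a few terabytes of data and you’re in the neighborhood of what’s making headlines.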
The idea of NNs, or the basis itself, is not AI. If you had actually read D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning Internal Representations by Error Propagation,” Sep. 01, 1985, then you would understand this, because that paper is about a machine learning technique, not AI. If you had done your research properly instead of just reading Wikipedia, then you would have also come across autoassociative memory, which is the precursor to autoencoders and generative autoencoders, which are the foundation of a lot of what we now think of as AI models. H. Abdi, “A Generalized Approach For Connectionist Auto-Associative Memories: Interpretation, Implication, Illustration For Face Processing,” in J. Demongeot (Ed.), Artificial, University Press, 1988, pp. 151–164.
I thank you for your critique, but I’m not writing a research paper here, and therefore Wikipedia is a good resource for the uninitiated public. This is also why I think it’s sufficient to convey a) what an artificial neural network is, by talking about the simplest examples, and b) that this field of research didn’t start 10 years ago, as the public often assumes because that’s when the first big headlines were made. These tradeoffs are always made: correctness vs. simplification. I see you disagree with this PoV, but that’s no reason to be condescending.
You don’t get to complain about people being condescending to you when you are going around literally copy-pasting Wikipedia. Also, you’re not right: major progress in this field started in the 80s; although the concepts were published earlier, they were basically ignored by researchers. You’re making it sound like the NNs we’re using now are the same as in the 60s, when in reality our architectures, and even how we approach the problem, have changed significantly. It wasn’t until the 90s–00s that we started getting decent results that could even match older ML techniques like SVMs or kNN.
layman here.
probably because…
- it can sift through a lot of garbage.
- it’s easy to use, and not complicated to understand its value.
- it’s useful. like a super search engine for idiots.
- it can probably automate a lot of jobs. also it can probably correct or cover up a lot of gaping flaws that have existed for the last few decades.
- there’s nothing else exciting going on right now.
- it is an interesting and valuable tool. progress has hit a point at which it is hard to ignore the achievements.
** relating to LLMs/chatgpt types. snarky, opinionated, and somewhat speculative, subjective review!
deleted by creator
Yes, community list: https://lemmy.intai.tech/post/2182
LLMs are extremely flexible and capable encoding engines with emergent properties.
I wouldn’t bank on them “replacing all software” soon, but they are quickly moving into areas where classic Turing-style code just would not scale easily, usually due to complexity/maintenance.
Nice list dude
First of all, AI is a buzzword whose meaning has changed a lot since at least the 1950s. So… what do you actually mean? If you mean LLMs like ChatGPT, it’s not AGI, that’s for sure. It is another tool that can be very useful. For coding, it’s great for getting very large blocks of code prepopulated for you to polish and verify it does what you want. For writing, it’s useful for creating a quick first draft. For fictional game scenes, it’s useful for “embedding a character quickly”, but again you likely want to edit it some, even for, say, a D&D game.
I think it can replace most first-line chat-based customer service people, especially ones who already just make stuff up to say something to you (we’ve all been there). I could imagine it improving call routing if hooked into speech recognition and generation: the current menus act like you can “say anything” but really only “work” if you’re calling about stuff you could also do with simple press-1-2-3 menus. ChatGPT-based things trained on a company’s procedures and data could probably also replace those first-line call queues, because they can seem to more usefully do something with wider issues. Although companies would still need to get their heads out of their asses somewhat too.
Where I’ve found it falls down currently is very specific technical questions, the ones you might have asked on a forum and maybe gotten an answer to. I hope it improves, especially as companies start to add some of their own training data. I could imagine Microsoft more usefully replacing the first few lines of tech support for their products, and eventually having the AI pass things up the chain to a ticket if it can’t solve the issue. I could imagine in the next 10 years most tech companies having purchased a service from some AI company to provide them AI support bots, like they currently pay for ticket systems and web hosting. And I think in general it probably will be better for users, because for less than the cost of the cheapest outsourced front-line support person (who has near-zero knowledge) you can have the AI provide pretty good chat-based access to a given set of knowledge that is growing all the time, and every customer gets that AI with that knowledge base rather than the crapshoot of whether you get the person who’s been there 3 years or 1 day.
I think we are a long way from having AI just write the program or CNC code or even important blog posts. The hallucination has to be fixed without breaking the usefulness of the model (people claim the guardrails on GPT-4 make it stupider), and the thing needs to recursively look at its output and run that through a “look for bugs” prompt followed by a “fix it” prompt at the very least. Right now, it can write code with noticeable bugs, you can tell it to check for bugs and it’ll find them, and then you can ask it to fix those bugs and it’ll at least try to do that. This kind of loop needs to be built in and automatic for any sort of process: like humans check their work, we need to program the AI to check its work too. And then we might need to also integrate multiple different models so “different eyes” see the code and sign off before it’s pushed. And even then, I think we’d need additional hooks, improvements, and test/simulation passes before we “don’t need human domain experts to deploy”. The thing is, it might be something we can solve in a few years with traditional integrations, or it might not be entirely possible with current LLM designs given the weirdness around guardrails. We just don’t know.
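To sketch what I mean by “built in and automatic”, here’s the shape of that loop in Python. Note `ask_llm` is a made-up stand-in for whatever model API you’d actually wire up, not a real SDK; the whole thing is illustrative:

```python
# Sketch of a "write, review, fix" loop for LLM-generated code.
# ask_llm() is a hypothetical helper, not a real library call.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model of choice")

def generate_checked_code(task: str, max_rounds: int = 3) -> str:
    code = ask_llm(f"Write code for this task:\n{task}")
    for _ in range(max_rounds):
        # "look for bugs" prompt
        review = ask_llm(f"Look for bugs in this code:\n{code}")
        if "no bugs found" in review.lower():   # naive stop condition
            break
        # "fix it" prompt
        code = ask_llm(f"Fix these bugs:\n{review}\n\nCode:\n{code}")
    return code  # still needs a human domain expert before deployment
```

Even that simple loop raises the hard questions: when do you stop, how do you know the review is right, and who signs off at the end.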
AI hasn’t really changed meaning since the 50s. It has always been the field of research about how to make computers perform tasks that previously were limited to only humans. The target is always moving because once AI researchers figure out how to solve one task with computers it’s no longer limited to humans anymore. It gets reduced to “just computations”.
There’s even a Wikipedia page describing this phenomenon: https://en.wikipedia.org/wiki/AI_effect
AGI is the ultimate goal of AI research. That’s when there’s no more tasks left that only humans can do.
I mean, you’re pointing out the same thing I am: that over time AI has referred to very different technologies and capabilities.
I’ve been using it at my job to help me write code, and it’s a bit like having a sous-chef. I can say “I need an if statement that checks these values” or “Give me a loop that does x, y, and z” and it’ll almost always spit out the right answer. So coding, at least most of the time, changes from avoiding syntax errors and verifying the exact right format into asking for and assembling parts.
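As a made-up but representative example, asking for “a loop that totals these values over a threshold” comes back as something like this (values and names invented for illustration):

```python
# The kind of small building block I ask for, roughly as returned.
line_items = [120.50, 99.99, 245.00, 15.75]

total = 0.0
for amount in line_items:
    if amount > 100:        # "an if statement that checks these values"
        total += amount

print(f"Total of items over $100: {total:.2f}")
```

Trivial on its own, but stringing dozens of those together without hunting down syntax is where the time savings come from.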
But the neat thing is that if you have a little experience with a language you can suddenly start writing a lot of code in it. I had to figure out something with Ansible with zero experience. ChatGPT helped me get a fully functioning Ansible deployment in a couple days. Without it I’d have spent weeks in StackOverflow and documentation trying to piece together the exact syntax.
You should try out Codeium if you haven’t. It’s a VSCode toolkit completely free for personal use. I’ve had better results with it than ChatGPT
I mean, AI can be used to design a lot of robust yet efficient structures. In engineering and architecture, with enough data, AI can generate designs for buildings and parts that are not only sturdy but can be built with fewer resources, along with other design considerations. There’s a really cool NASA video where competitors are trying to 3D print structures for habitation in space.
AI is also used in medicine to come up with new protein structures to create new medicine. It’s also used in environmental sciences, to help predict earthquakes or monitor land use, etc.
There’s a lot of practical uses for AI.
In various jobs, AI can do the less important and easier work for you, so you can focus on the more important work. For example, say you’re doing some kind of research that needs a specific kind of data you’ve collected, but all of that data is cluttered and messy. AI can sort the data for you, so you can focus on your research instead of spending a lot of your time wrangling the data into something more understandable. Or in programming, AI can write the easy parts of a program for you while you do the harder and more important parts, which saves you time.
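To make that “sort the messy data” step concrete, here’s the sort of throwaway cleanup script an AI might hand you. The file name and column names here are entirely made up for illustration:

```python
import csv

# Hypothetical example: tidy a messy measurements file so the
# researcher can get on with the actual research.
with open("measurements_raw.csv", newline="") as f:
    # Drop rows with a missing value field.
    rows = [r for r in csv.DictReader(f) if r.get("value", "").strip()]

for r in rows:
    r["value"] = float(r["value"])            # normalize numbers
    r["label"] = r["label"].strip().lower()   # normalize labels

rows.sort(key=lambda r: (r["label"], r["value"]))

with open("measurements_clean.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["label", "value"])
    writer.writeheader()
    writer.writerows({k: r[k] for k in ("label", "value")} for r in rows)
```

Nothing a programmer couldn’t write, but for a researcher it’s an hour saved per dataset.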
Crypto and AI can’t be compared at all. One is an extremely useful and revolutionary tool. The other is just pump & dump ponzi schemes for libertarians.
AI has gone through several cycles of hype and winter. There’s even a Wikipedia page for it: https://en.m.wikipedia.org/wiki/AI_winter
Of course it’s valuable to discuss the dangers and inequities of a new technology. But one of the dangers is being misled.
Just because it’s ‘the hot new thing’ doesn’t mean it’s a fad or a bubble. It doesn’t not mean it’s those things, but…the internet was once the ‘hot new thing’ and it was both a bubble (completely overhyped at the time) and a real, tidal wave change to the way that people lived, worked, and played.
There are already several other outstanding comments, and I’m far from a prolific user of AI like some folks, but - it allows you to tap into some of the more impressive capabilities that computers have without knowing a programming language. The programming language is English, and if you can speak it or write it, AI can understand it and act on it. There are lots of edge cases, as others have mentioned below, where AI can come up with answers (by both the range and depth of its training data) where it’s seemingly breaking new ground. It’s not, of course - it’s putting together data points and synthesizing an output - but even if mechanically it’s 2 + 3 = 5, it’s really damned impressive if you don’t have the depth of training to know what 2 and 3 are.
Having said that, yes, there are some problematic components to AI (from my perspective, the source and composition of all that training data is the biggest one), and there are obviously use cases that are, if not problematic in and of themselves, at very least troubling. Using AI to generate child pornography would be one of the more obvious cases - it’s not exactly illegal, and no one is being harmed, but is it ethical? And the more societal concerns as well - there are human beings in a capitalist system who have trained their whole lives to be artists and writers and those skills are already tragically undervalued for the most part - do we really want to incentivize their total extermination? Are we, as human beings, okay with outsourcing artistic creation to this mechanical turk (the concept, not the Amazon service), and whether we are or we aren’t, what does it say about us as a species that we’re considering it?
The biggest practical reason not to get too swept up with AI is that it’s limited in weird and not totally understood ways. It ‘hallucinates’ data. Even when it doesn’t make something up, the first time you run up against the edges of its capabilities, or it suggests code that doesn’t compile, or an answer that is flat, provably wrong, or it says something crazy or incoherent or generates art that features humans with the wrong number of fingers or body horror or whatever… well, then you realize that you should sort of treat AI like a brilliant but troubled and maybe drug-addicted coworker. Man, there are some things it is just spookily good at. But it needs a lot of oversight, because you can cross over from spookily good to what-the-fuck pretty quickly and completely without warning. ‘Modern’ AI is only different from previous AI systems (I remember chatting with ELIZA in the primordial moments of the internet) because it maintains the illusion of knowing much, much better.
Baseless speculation: I think the first major legislation of AI models is going to require an understanding of the training data and ‘not safe’ uses, much like ingredient labels were a response to unethical food products, and much like the government stepped in to regulate how, where, and why cars could be used as they grew in size, power, and complexity, to protect users from themselves and also to protect everyone else from the users. There’s also, at some point, I think, going to be some major paradigm shifting about training data. There are already rumblings, but the idea that data (including this post!) that was intended for consumption by other human beings at no charge could be consumed into an AI product and then commercialized on a grand scale, possibly even to the detriment of the person who created the data, is troubling.
I am super amateur with python and I don’t work in IT, but I’ve used it to write code for me that allows me to significantly save time in my work flow.
Like something that used to take me an hour to do now takes 15-20 minutes.
So as a non-programmer, I’m able to get it to write enough code that I can tweak it until it works, instead of just not having that tool.
AI is nothing like cryptocurrency. Cryptocurrencies didn’t solve any problems. We already use digital currencies and they’re very convenient.
AI has solved many problems we couldn’t solve before and it’s still new. I don’t doubt that AI will change the world. I believe 20 years from now, our society will be as dependent on AI as it is on the internet.
I have personally used it to automate some Excel stuff I do at work. I just described my sheet and what I wanted done, and it gave me a block of code that did it. I had previously spent time looking stuff up on forums with no luck; my issue was too specific to my work, and nobody seemed to have run into it before. One query to ChatGPT solved my issue perfectly in seconds, and that’s just a new online tool in its infancy.
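For flavor, the kind of block it hands back looks like this. This isn’t my actual code, just a sketch of the pattern: openpyxl is a real Python library, but the workbook name and column layout here are invented:

```python
from openpyxl import load_workbook

# Illustrative only: sums column B ("Amount") of an invented workbook
# and writes the total below the data.
wb = load_workbook("report.xlsx")
ws = wb.active

total = 0
for row in ws.iter_rows(min_row=2, min_col=2, max_col=2):  # skip header
    cell = row[0]
    if isinstance(cell.value, (int, float)):
        total += cell.value

ws.cell(row=ws.max_row + 1, column=2, value=total)
wb.save("report_totals.xlsx")
```

The point isn’t that this is hard; it’s that I didn’t have to know any of it existed to get a working version in seconds.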
Cryptocurrencies didn’t solve any problems
Well XMR solved one problem, but yeah the rest are just gambling with extra steps
What problem is that? Genuinely asking.
Traceability.
Regular financial transfers, be they credit card, direct debit, straight-up written cheques, or Interac/e-Transfer (I am Canadian, that’s an us thing), are all inherently traceable.
XMR/Monero is not traceable; it’s specifically designed not to be, unlike Bitcoin and most other cryptocurrencies.
Of course, shitheads consider that to be a problem, but fuck them, they’re shitheads; it’s a solution, to the problem they cause.
For context, I say all this as someone who is vehemently opposed to prohibition; as far as I’m concerned every person who works for the DEA should be imprisoned or shot
Thanks for the info. That’s quite the way to end a comment though.
I mean it though.
The people working for the DEA now are no better than the people working to enforce alcohol prohibition in 1919. It’d be nice if humanity would learn, with a hundred years to think about it, but the ruling class at least haven’t. They enforce poorly thought out puritanical laws, and the world would be better off without them.
If I lived in America rather than Canada, which thank god I don’t, the DEA would happily kick down my door, shoot me, and then probably also shoot my wife, who doesn’t even partake of anything beyond alcohol, but would obviously be upset about my being shot.
All cops are bastards, and should be torched with molotovs at any available opportunity. If they didn’t want to be bastards, they shouldn’t have signed up as cops; it’s not like they’re conscripts
For me personally cryptocurrencies solve the problem of Russian money not being accepted anywhere because of one old megalomaniacal moron
As a professional editor, yeah, it’s wild what AI is doing in the industry. I’m not even talking about ChatGPT script writing and such. I watched a demo of a dubbing tool that added in the mouth movements as well.
They removed the mouth entirely from an English scene, fed it the line, and it generated not only the Chinese audio but also a mouth to say it. It’s wild.
Everyone is focused on script writers/residuals/etc, which is very important, but every VA should be updating their resumes right now.
Not the exact same thing but you will get the idea here
Wow it’s smooth too; I was expecting it to look like a creepy old Clutch Cargo cartoon.