- cross-posted to:
- [email protected]
US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, while only 15 percent expect to be harmed.
The public does not share this confidence. Only about 11 percent of the public says that “they are more excited than concerned about the increased use of AI in daily life.” They’re much more likely (51 percent) to say they’re more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.
I mean, it hasn’t thus far.
New technologies are not the issue. The problem is billionaires will fuck it up because they can’t control their insatiable fucking greed.
exactly. we could very well work fewer hours for the same pay. we wouldn’t be as depressed and angry as we are right now.
we just have to overthrow, what, like 2000 people in a given country?
Just about every major advance in technology like this enhanced the power of the capitalists who owned it and took power away from the workers who were displaced.
https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice#demo
Try this voice AI demo on your phone, then imagine if it can create images and video.
This, in my opinion, changes every system of information gathering that we have, and will usher in an era of geniuses who grew up with access to the answer to their every question in a granular, pictorial video response. If you want to learn, for example, how white blood cells work, you ask your chatbot for a video, and you can then tell it to put in different types of bacteria to see the response. It’s going to make a lot of the systems we have now obsolete.
you can’t learn from chatbots though. how can you trust that the material is accurate? any time I’ve asked a chatbot about subject matter that I’m well versed in, they make massive mistakes.
All you’re proving is “we can learn badly faster!” or worse, we can spread misinformation faster.
Mistakes will be fewer in the future, and it’s already pretty good now for subjects with a lot of textbooks and research. I don’t think this is that big of an impediment; it will still create geniuses all over the globe.
Removing the need to do any research is just removing another exercise for the brain. Perfectly crafted AI educational videos might be closer to mental junk food than anything.
It is mental junk food, and it’s addictive, which is why I think it will be so effective. If you can make learning addictive, then it’s bound to raise the average global IQ.
Same was said about calculators.
I don’t disagree, though. Calculators are pretty discrete, and their functions are well defined.
Assuming AI can be trusted to be accurate at some point, it will reduce cognitive load that can then be put toward even higher-level thinking.
This is another level, thanks for sharing!
This presumes trust in its accuracy.
A very high bar.
Holy shit, that AI chat is too good.
No surprise there. We just went through hearing how blockchain was going to drastically help our lives in some unspecified future.
AI does improve our lives. Saying it doesn’t is borderline delusional.
Can you give some examples that I unknowingly use and that improve my life?
Translation apps would be the main one for LLM tech; LLMs largely came out of Google’s research into machine translation.
If that’s the case and LLMs are scaled-up translation models shoehorned into general use, it makes sense that they are so bad at everything else.
Every technology shift creates winners and losers.
There’s already documented harm from algorithms making callous, biased decisions that ruin people’s lives; one example is automated insurance claim rejections.
We know that AI is going to bring algorithmic decisions into many new places where it can do harm. AI adoption is currently on track to get to those places well before the most important harm reduction solutions are mature.
We should take care that we do not gaslight people who will be harmed by this trend, by telling them they are better off.
I use it at work side-by-side with searches for debugging app issues.
remember when tech companies did fun events with actual interesting things instead of spending three hours on some new stupid ai feature?
Experts are working from their perspective, which involves being employed to know the details of how the AI works and its potential benefits. They are also invested in it being successful, since they spent the time gaining that expertise. I would guess a number of them work in fields that are not easily visible to the public, and use AI systems in ways the public never will, focused on things like pattern recognition on viruses, or identifying locations to excavate for archaeology, workflows that always end with a human verifying the results. They use AI as a tool and see the indirect benefits.
The general public’s experience is being told AI is a magic box that will be smarter than the average person, that has made some flashy images, and that sounds more like a person than previous automated voice systems. They see it spit out a bunch of incorrect or incoherent answers, because they are using it the way it was promoted: as actually intelligent. They also see this unreliable tech being jammed into things that worked fine previously, and the negative outcome of the hype not meeting the promises. They reject it because the way it is being pushed onto the public does not meet the expectations set by the advertising.
And that is before the public is told that AI will drive people out of their jobs, which is doubly insulting when it does a shitty job of replacing them. It is a tool, not a replacement.
It should. We should have radically different lives today because of technology. But greed keeps us in the shit.
How did they answer the question about rock and roll being a fad?
It’s not really a matter of opinion at this point. What is available has little if any benefit to anyone who isn’t trying to justify rock bottom wages or sweeping layoffs. Most Americans, and most people on earth, stand to lose far more than they gain from LLMs.
Everyone gains from progress. We’ve had the same discussion over and over again. When the first sewing machines came along, when the steam engine was invented, when the internet became a thing. Some people will lose their job every time progress is made. But being against progress for that reason is just stupid.
Man it must be so cool going through life this retarded. Everything is fine, so many more things are probably interesting….lucky
Your comment doesn’t exactly testify to your own intelligence. You might want to offer some arguments that actually relate to the comment you’re responding to.
What progress are you talking about?
We don’t know it yet. I can’t see the future, and neither can you. But you cannot question the fact that AI has made a lot of things more efficient. And efficiency always brings progress in one way or another.
I’m not sure at this point. The sewing machine was just automated stitching. This is more like photography versus landscape painters, only worse.
With creative AI, most visual-art work basically went to “I’m going to pay $100 for AI to do this instead of paying $20K and waiting 30 days for the project.” Soon doctors, therapists and teachers will be looking down the barrel: “Why pay $150 for one therapy session when I can have an AI friend for $20 a month?”
In the past you were able to train yourself to use a sewing machine, or learn how to operate cameras and develop photos. Now I don’t even have any idea where it goes.

AI is changing the landscape of our society. It’s only “destroying” society if that’s your definition of change.
But the fact is, AI makes every aspect where it’s being used a lot more productive and easier. And that has to be a good thing in the long run. It always has been.
Instead of holding out against progress (which is impossible to do for long), you should embrace it and go from there.
Are you a trust fund kid or something
Are you a poor kid or something? Like what kind of question even is this? Why does it even need to be personal at all? This thread is not about me…
And no. I’m not. I stand to inherit nothing. I’m still a student. I’m not wealthy or anything like that.
Because you write like you think this can’t reach you, like you’re always going to have food and shelter no matter what happens.
If it reaches me, so be it. That’s life. Survival of the fittest. It’s my own responsibility to do the best in the environment I live in.
AI makes every aspect where it’s being used a lot more productive and easier.
AI makes every aspect where it’s being used well a lot more productive and easier.
AI used poorly makes it a lot easier to produce near worthless garbage, which effectively wastes the consumers’ time much more than any “productivity gained” on the producer side.
The worry is deeper than just changes in production. Not all progress is good; think of the dead branches of evolution.
The fact that we don’t teach kids how to write by hand anymore has already taken a lot of childhood development, and later brain development and memory improvement, out of the running.
With AI now, drawing, writing and music have become a single-sentence prompt. So why keep all those skills? Why literally waste time developing a skill that you cannot sell? Sure, for fun…
And you are bringing up efficiency. Efficiency is just a buzzword that big companies are using to replace human labor. How much more efficient is a bank where you have four machines and one human teller? Or a fast-food restaurant where the one up-front employee just delivers the food to the counter, and you can only place an order with a computer?
There is a point where our monkey brains can’t compete and won’t be able to exist without human-to-human contact. But no need to worry: in two years we won’t be able to differentiate between AI and humans, and we can just fake that connection for the rest of our efficient lives.
I’m not against improving stuff, but where this is focused won’t help us in the long run…

That’s the first interesting argument I’m reading here. Glad someone takes an honest stance in this discussion instead of just “rich vs. poor”, “but people will lose jobs”, and some random conspiracies in between.
To your comment: I agree with your sentiment that AI will make it challenging for new brains to develop, since solving difficult tasks is something we will encounter much less in the future. I actually never thought about it that way. I don’t have a solution for that. I think it will have one of two outcomes: humans will lose intelligence, or humans will develop a different kind of intelligence in ways we don’t yet understand.
And you are bringing up efficiency. Efficiency is just a buzzword that big companies are using to replace human labor. How much more efficient is a bank where you have four machines and one human teller? Or a fast-food restaurant where the one up-front employee just delivers the food to the counter, and you can only place an order with a computer?
I disagree with that. Efficiency is a universal term, and humanity has always striven to do things more efficiently because it increases the likelihood of survival and quality of life in general. It’s a very natural thing and you cannot stop it, much as you cannot stop entropy. Also, I think making things more efficient is good for society. Everything becomes easier, more available, and more fun. I can see a far future where humans no longer need to work and can do whatever they want with their day. Jobs will become hobbies, and family and friends are what you care about most.
I do not agree that efficiency is good.
If it were purely good, we would live the way we keep pigs and chickens on meat farms. It is more efficient to eat bug-based protein, and why waste time on eating at all instead of 100% meal-replacement foods?
Why keep people with disabilities, or with different “colors of skin” (insert any other trait there), if they aren’t among the most “efficient”?
The logical endpoint is Matrix-esque pods for humans, living in a simulation.
The only bad part of that picture is that we are not needed at all. And these are the dark sides of unlimited change.
We all know capitalism is very bad for the majority. We know big money does not care about marginalized groups. These are all just numbers, and in the end you and I are all numbers that can be cut. I’m probably not going to be alive for it, but I hope for a bright future for the upcoming generations. The problem is that I do see AI potentially darkening their skies.
Don’t get me wrong: AI can be a great tool if you learn how to use it. But the benefits are not going to be in the people’s hands. We need a general overhaul of society in which profit is not the only thing that matters. Efficiency is good when you burn renewable wood pellets and want to get the most out of the chemical reaction. Efficiency is good when you use the minimum amount of material to build something (with 3x oversized safety margins). But efficiency in AI, and in social terms, is going to be a problem.
Humans will not have worry-free lives in the current society; all the replaced labor keeps the earnings in stockholders’ hands. But this went really far from AI. Sorry for the rant, but I do worry about the future.
I believe blindly accepting something before even attempting to look into the pitfalls is not a great idea. And we never see all the pitfalls coming.
I use AI for programming questions because it’s easier than digging through official docs for an hour (if they exist) and frustrating trial and error.
However, quite often the AI answers are wrong: it inserts nonsense code, uses for instead of foreach, or tries to access variables that are not always set.
Yes it helps, but it’s usually only 60% right.
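To make that concrete, here’s a minimal made-up sketch of the kind of mistake I mean; the Item type and all the names are mine, not from any real AI output:

```typescript
// What the AI will often hand back: a classic index loop with an
// off-by-one bound, reading a field that is not always set:
//
//   for (let i = 0; i <= items.length; i++) {
//     sum += items[i].price;   // price may be undefined here
//   }
//
// What it should have produced instead:
interface Item { name: string; price?: number }

function total(items: Item[]): number {
  let sum = 0;
  items.forEach((item) => {
    sum += item.price ?? 0; // guard the field that isn't always set
  });
  return sum;
}

console.log(total([{ name: "a", price: 2 }, { name: "b" }])); // 2
```

Each mistake is small, but verifying and patching a dozen of them is where the other 40% of the time goes.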
I used to do this, but not anymore. The time I have to spend verifying and correcting it is sometimes longer than if I were just to do it myself, and the paranoia that comes with it isn’t worth it for me anymore.
Machine stitching is objectively worse than hand stitching, but… it’s good enough and so much more efficient, so that’s how things are done now; it has become the norm.
Good enough is the keyword in a lot of things. That’s how fast fashion got this big.
Everyone gains from progress.
It’s only true in the long-term. In the short-term (at least some) people do lose jobs, money, and stability unfortunately
That’s true. And that’s why so many people are frustrated: the majority is incredibly short-sighted, unfortunately. Most people don’t even understand the basics of economics. If everyone were the ant in the anthill they’re supposed to be, we would not have half as many conflicts as we do.
The current drive behind AI is not progress, it’s locking knowledge behind a paywall.
As soon as one company perfects its AI, it will draw everyone in, marketed as a “time saver” so you don’t have to do anything yourself (including browsing the web, which is in decline even now). Just ask and you shall receive everything.
Once everyone gets hooked and there is no competition left, they will own the population. News, purchase recommendations, learning, everything we do to work on our cognitive abilities will be sold through a single vendor.
Suddenly you own the minds of many people, who can’t think for themselves, or search for knowledge on their own… and that’s already happening.
And it’s not the progress I was hoping to see in my lifetime.
deleted by creator
being against progress for that reason is just stupid.
Under the current economic model, being against progress is just self-preservation.
Yes, we could all benefit from AI in some glorious future that doesn’t see the AI-displaced workers turned into toys for the rich, or forgotten refuse in slums.
We are ants in an anthill, gears in a machine. Act like it. Stop thinking in classes, “rich vs. poor”, and conspiracies. When you become obsolete, it’s nobody’s fault. This always comes from people who don’t understand how the world economy works.
Progress always comes and finds its way. You can never stop it. Like water in a river. Like entropy. Adapt early instead of desperately forcing against it.
And as someone who has extensively set up such systems on their home server… yeah, it’s a great Google Home replacement, nothing more. It’s beyond useless in Power Automate, which I use (unwillingly) at my job. Copilot can’t even parse and match items from two lists. Despite my company trying its damn best to encourage “our own” AI (ChatGPT Enterprise), nobody I have talked with has found a use.
You’re using it wrong, then. These tools are so incredibly useful in software development and scientific work. ChatGPT has saved me countless hours. I’m using it every day. And every colleague I talk to agrees 100%.
Then you must know something the rest of us don’t. I’ve found it marginally useful, but it leads me down useless rabbit holes more than it helps.
I’m about 50/50 between helpful results and “nope, that’s not it, either” out of the various AI tools I have used.
I think it very much depends on what you’re trying to do with it. As a student, or a fresh-grad employee in a typical field, it’s probably much more helpful because you are working on well-trodden ground.
As a PhD or other leading-edge researcher, possibly in a field without a lot of publications, you’re screwed as far as the really inventive stuff goes, but… if you’ve read “Surely You’re Joking, Mr. Feynman!”, there’s a bit in there where the Manhattan Project researchers (definitely breaking new ground at the time) needed basic stuff, like gears, for what they were doing. The gear catalogs of the day told them a lot of what they needed to know; per the text: if you’re making something that needs gears, pick your gears from the catalog but avoid the largest and smallest of each family/table. They are there because the next size up or down runs into some kind of engineering problem, so stay away from the edges and you should get much more reliable results. That’s an engineer’s shortcut for how to use thousands, maybe millions, of man-years of prior gear research, development and engineering and get the desired results just by referencing a catalog.
My issue is that I’m fairly established in my career, so I mostly need to reference things, which LLMs do a poor job at. As in, I usually need links to official documentation, not examples of how to do a thing.
That’s an engineer’s shortcut for how to use thousands, maybe millions, of man-years of prior gear research, development and engineering and get the desired results just by referencing a catalog.
LLMs aren’t catalogs, though, and they absolutely return different things for the same query. Search engines are closer to catalogs, and they’re what I reach for most of the time.
LLMs are good if I want an intro to a subject I don’t know much about, and they help generate keywords to search for more specific information. I just don’t do that all that much anymore.
If you were too lazy to read three Google search results before, yes… AI is amazing in that it shows you something you ask for without making you dig as deep as you used to have to.
I rarely get a result from ChatGPT that I couldn’t have skimmed for myself in about twice to five times the time.
I frequently get results from ChatGPT that are just as useless as what I find reading through my first three Google results.
You’re using it wrong. My experience is different from yours. It produces transfer knowledge in the queries I ask it. Not even hundreds of Google searches can replace transfer knowledge.
I’ll admit my local model has given me some insight, but when researching something further, I usually find the source it likely spat it out from. Now that’s helpful, but I feel as though, if my normal search experience weren’t so polluted with AI-written regurgitation of the next result down, I would’ve found the nice primary source anyway. One example was a code block that computes the moment of inertia about each rotational axis of a body. You can try searching for sources and compare what it puts out.
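For reference, here’s a minimal sketch of roughly what that snippet amounted to, rewritten from memory; the point-mass representation and all the names here are mine, not the model’s:

```typescript
// Moments of inertia about the x, y and z axes through the origin,
// treating the body as a collection of point masses:
//   Ix = sum m*(y^2 + z^2), Iy = sum m*(x^2 + z^2), Iz = sum m*(x^2 + y^2)
interface PointMass { m: number; x: number; y: number; z: number }

function momentsOfInertia(body: PointMass[]): [number, number, number] {
  let ix = 0, iy = 0, iz = 0;
  for (const p of body) {
    ix += p.m * (p.y * p.y + p.z * p.z); // squared distance from x-axis
    iy += p.m * (p.x * p.x + p.z * p.z); // squared distance from y-axis
    iz += p.m * (p.x * p.x + p.y * p.y); // squared distance from z-axis
  }
  return [ix, iy, iz];
}

// Two unit masses on the x-axis: no inertia about x, 2 about y and z.
console.log(momentsOfInertia([
  { m: 1, x: 1, y: 0, z: 0 },
  { m: 1, x: -1, y: 0, z: 0 },
])); // [0, 2, 2]
```

The formula is standard textbook material, which is exactly why the model could regurgitate it; the primary source was sitting one search result down.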
If you have more insight into tools, especially ones I can run locally, that would improve my impression, I would love to hear it. However, my opinion remains that AI has been a net negative on the internet as a whole (spam, bots, scams, etc.) thus far, and it certainly has not, and probably will not, live up to the hype its CEOs have forecast.
Also, if you can get access to Power Automate, or at least generally know how it works: Copilot can only add nodes, seemingly in the general order you specify, but it does not connect the dataflow between the nodes (the hardest part) whatsoever. Sometimes it will parse the dataflow connections and return what you were searching for (i.e., a specific formula used in a large dataflow), but little of that really needs AI to do it.
I think a lot depends on where “on the curve” you are working, too. If you’re out past the bleeding edge doing new stuff, ChatGPT is (obviously) going to be pretty useless. But, if you just want a particular method or tool that has been done (and published) many times before, yeah, it can help you find that pretty quickly.
I remember doing my Master’s thesis in 1989: it took me months of research, and journals delivered via inter-library loan, before I found mention of other projects doing essentially what I was doing. In today’s research landscape, that multi-month delay should be compressed to a couple of hours, frequently less.
If you haven’t read Melancholy Elephants, it’s a great reference point for what we’re getting into with modern access to everything.
I’ve found it primarily useless to harmful in my software development; debugging the poorly structured code it produces has become the major place my time is spent. What sort of software and language do you use it for?
AI search is occasionally faster and easier than slogging through the source material that the AI was trained on. The source material for programming is pretty weak itself, so there’s an issue.
I think AI has a lot of untapped potential, and it’s going to be a VERY long time before people who don’t know how to ask it for what they want will be able to communicate what they want to an AI.
A lot of programming today gets value from the programmers guessing (correctly) what their employers really want, while ignoring the asks that are impractical / counterproductive.
I don’t believe AI will ever be more than essentially a parlor trick that fools you into thinking it’s intelligent, when it’s really just a more advanced tool, like Excel compared to pen and paper or an abacus.
The real threat will be people who fool themselves into thinking it’s more than that, and that its word is law, like a deity. Or worse: the people who do understand that but, like the religious and political leaders who used religion to manipulate people, the new AI popes will try to do the same manipulation with AI.
“I don’t believe AI will ever be more than essentially a parlor trick that fools you into thinking it’s intelligent.”
So in other words, it will achieve human-level intellect.
AI has its place, but they need to stop trying to shoehorn it into anything and everything. It’s the new “Internet of Things”: cramming internet connectivity into shit that doesn’t need it.
You’re saying the addition of Copilot into MS Paint is anything short of revolutionary? You heretic.
Now your smart fridge can propose unpalatable recipes. Woo fucking hoo.
I do as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is drying up because no one wants to enter the career with companies hanging AI over everyone’s heads. Basic supply and demand says my skill set will become more valuable.
Someone will need to clean up the AI slop. I’ve already had similar positions where I was brought in to clean up code bases that failed after being outsourced.
AI is simply the next iteration. The problem is always the same: the business doesn’t know what it really wants and needs, and has no ability to assess what has been delivered.
A completely random story, but: I’m on the AI team at my company. However, I do infrastructure/application work rather than the AI stuff. First off, I had to convince my company to move our data scientist to this team; they had him doing DevOps work (a complete mismanagement of resources). Also, the work I was doing was SO unsatisfying with AI. We weren’t tweaking any models. We were just shoving shit to ChatGPT. Now, it would be interesting if you were doing RAG stuff, maybe, or other things. However, I was “crafting” my prompt, and I could not give a shit less about writing a perfect prompt. I’m typically used to coding what I want, but here I had to figure out how to phrase it properly: “please don’t format it like X”. Like, I wasn’t using AI to write code; it was a service endpoint.
During lunch with the AI team, they keep saying things like “we only have 10 years left at most”. I was like, “but if you have AI spit out this code, if something goes wrong … don’t you need us to look into it?” they were like, “yeah but what if it can tell you exactly what the code is doing”. I’m like, “but who’s going to understand what it’s saying …?” “no, it can explain the type of problem to anyone”.
I said, I feel like I’m talking to a libertarian right now. Every response seems to be some solution that doesn’t exist.
AI can look at a bajillion examples of code and spit out its own derivative impersonation of that code.
AI isn’t good at doing a lot of other things software engineers actually do. It isn’t very good at attending meetings, gathering requirements, managing projects, writing documentation for highly-industry-specific products and features that have never existed before, working user tickets, etc.
I work in an environment where we’re dealing with high volumes of data, but not like a few meg each for millions of users. More like a few hundred TB fed into multiple pipelines for different kinds of analysis and reduction.
There’s a shit-ton of prior art for how to scale up relatively simple web apps to support mass adoption. But there’s next to nothing about how to do what we do, because hardly anyone does. So look ma, no training set!
If it walks and quacks like a speculative bubble…
I’m working in an organization that has been exploring LLMs for quite a while now, and at least on the surface, it looks like we might have some use cases where AI could prove useful. But so far, in terms of concrete results, we’ve gotten bupkis.
And most firms I’ve encountered don’t even have potential uses, they’re just doing buzzword engineering. I’d say it’s more like the “put blockchain into everything” fad than like outsourcing, which was a bad idea for entirely different reasons.
I’m not saying AI will never have uses. But as it’s currently implemented, I’ve seen no use of it that makes a compelling business case.
I too am a developer, and I am sure you will agree that while the overall intelligence of models continues to rise, the promise of AGI will likely remain elusive without a concerted focus on enhancing logic. AI cannot really advance without the logic being dramatically improved, yet logic is rather stagnant even in the latest reasoning models, at least when it comes to coding.
I would argue that if we had much better logic, with all other metrics the same, we would have AGI now and developer jobs would be at risk. Given the lack of discussion about the logic gaps, I do not foresee AGI arriving anytime soon, even with bigger and bigger models coming.
If we had AGI, the number of jobs that would be at risk would be enormous. But these LLMs aren’t it.
They are language models and until someone can replace that second L with Logic, no amount of layering is going to get us there.
Those layers are basically all the previous AI techniques laid over the top of an LLM but anyone that has a basic understanding of languages can tell you how illogical they are.
Agreed. I would add that not only would job loss be enormous, but many corporations are suddenly going to be competing with individuals armed with the same AI.