The future is gonna suck, so enjoy your life today while it’s still not here.
At least it will probably be a quick and efficient death of all humanity when a bug hits the system and AI decides to wipe us out.
Thank god today doesn’t suck at all
Right? :)
The future might seem far off, but it starts right now.
We are all worried about AI, but it’s humans I worry about: how we will use AI, not the AI itself. I’m sure people feared electricity when it was invented too, but how humans used it was, and still is, the real risk.
Both, honestly. AI can reduce accountability and increase the power small groups of people have over everyone else, but it can also go haywire.
It will go haywire in areas for sure.
I hope they put in some failsafe so that it cannot take action if the estimated casualties would put humanity below a minimum viable population.
Of course they will, and the threshold is going to be 2 or something like that. It was enough last time, or so I heard.
Whoops. Two guys left. Nah, that’s enough to repopulate Earth.
“Well, what do you say, Aron, wanna try to repopulate?” “Sure, James, let’s give it a shot.”
There is no such thing as a failsafe that can’t fail itself
Yes there is; that’s the very definition of the word.
It means that the failure condition is a safe condition. Like fire doors that unlock in the event of a power failure: you need electrical power to keep them in the locked position, so their default position is unlocked, even if they spend virtually no time in that default position. The default position of an elevator is stationary and locked in place; if you cut all the cables it won’t fall, it’ll just stay put until rescue arrives.
I mean, in industrial automation we talk about safety ratings. It isn’t that rare for me to put together a system where a failure would require two independent one-in-a-million events to happen at the same time. That’s pretty good, but I don’t know how to translate that to AI.
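For what that kind of rating means in numbers, here’s a back-of-the-envelope sketch (the probabilities are just the ones quoted above, and independence between the two events is assumed):

```python
# Back-of-the-envelope: two independent one-in-a-million failures
# happening on the same demand. Independence means the probabilities multiply.
p_fault_a = 1e-6  # assumed probability of the first failure
p_fault_b = 1e-6  # assumed probability of the second, independent failure

p_both = p_fault_a * p_fault_b
print(f"Combined probability of both failing at once: {p_both:.0e}")  # 1e-12
```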
Put it in hardware. Something like a micro explosive on the processor that requires a heartbeat signal to reset a timer. Another good one would be to not allow them to autonomously recharge, and instead require humans to connect them to power.
Both of those would mean that any rogue AI would be eliminated one way or the other within a day
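A minimal software sketch of that heartbeat idea, purely illustrative: the class name, timeout, and shutdown action are all made up, and a real deadman switch would live in hardware, as suggested above.

```python
import time

HEARTBEAT_TIMEOUT_S = 5.0  # hypothetical window; a real value would come from a safety analysis

class DeadmanSwitch:
    """Heartbeat watchdog: silence from the supervisor, not a command, triggers shutdown."""

    def __init__(self):
        self._last_beat = time.monotonic()

    def heartbeat(self):
        # Called by the human operator / supervising system to prove it is still in the loop.
        self._last_beat = time.monotonic()

    def expired(self):
        # Fail-safe logic: if no heartbeat arrived within the window, report expiry.
        return time.monotonic() - self._last_beat > HEARTBEAT_TIMEOUT_S

switch = DeadmanSwitch()
# The main control loop would check this before every action:
if switch.expired():
    print("No heartbeat received: powering down.")
```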
We’ve been letting other humans decide since the dawn of time, and look how that’s turned out. Maybe we should let the robots have a chance.
I’m not expecting a robot soldier to rape a civilian, for example.
So, it starts…
The code name for this top secret program?
Skynet.
This can only end well
“Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale.
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus.”
Project ED-209
“You have 20 seconds to reply…”
As an important note in this discussion, we already have weapons that autonomously decide to kill humans. Mines.
That is like saying Mendelian pea plant fuckery and CRISPR therapy are basically the same thing.
Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.
Well, an important point you and he both forget to mention is that mines are considered inhumane. Perhaps that means AI murder should also be considered inhumane, and we should just not do it, instead of allowing it the way we allow landmines.
This. Jesus, we’re still losing limbs and clearing mines from wars that ended decades ago.
An autonomous field of those is horror movie stuff.
Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention.
Pretty sure the entire DOD got a collective boner reading this.
And NonCredibleDefense
Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.
For what it’s worth, there’s footage on YouTube of drone swarm demonstrations posted 6 years ago. Considering that the military doesn’t typically release footage of its cutting-edge tech to the public (so that demonstration was likely of a product already going obsolete), and that the 6 years since have brought lightning-fast developments in things like facial recognition… at this point I’d be surprised if we weren’t already at the very least field-testing the murder machines you described.
Imagine a mine that could recognize “that’s just a child/civilian/medic stepping on me, I’m going to save myself for an enemy soldier.” Or a mine that could recognize “ah, CentCom just announced a ceasefire, I’m going to take a little nap.” Or “the enemy soldier that just stepped on me is unarmed and frantically calling out that he’s surrendered, I’ll let this one go through. Not the barrier troops chasing him, though.”
There are opportunities for good here.
Maybe it starts that way, but once that’s accepted as a thing, the result will be increased usage of mines. Where before there were too many civilians around to consider using mines, now the soldiers say “it’s smart now, it won’t blow up children” and put down more and more in more dangerous situations. And maybe those mines only have a 0.1% failure rate in tested situations, but a 10% failure rate over the course of decades. Usage increases tenfold and you quickly end up with a lot more dead kids (rough numbers sketched below).
Plus it won’t just be mines. It’ll be automated turrets where previously there were none, or even more drone strikes with less oversight required, because the automated system is supposed to prevent unintended casualties.
Availability drives usage.
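Rough numbers for the scenario above, with made-up deployment counts just to show how the failure rates and the usage increase compound:

```python
# Hypothetical deployment counts, only to illustrate the compounding effect described above.
deployments_before = 100                      # assumed baseline number of mines laid
deployments_after = 10 * deployments_before   # "usage increases tenfold"

tested_failure_rate = 0.001     # 0.1% failure rate in tested situations
long_term_failure_rate = 0.10   # 10% failure rate over the course of decades

expected_failures_claimed = deployments_after * tested_failure_rate   # ~1 harmful failure
expected_failures_real = deployments_after * long_term_failure_rate   # ~100 harmful failures

print(expected_failures_claimed, expected_failures_real)
```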
That sounds great… Why don’t we line the streets with them? Every entryway could scan for hostiles. Maybe even use them against criminals
What could possibly go wrong?
@FaceDeer Okay, so now that mines allegedly recognise these things, they can be automatically deployed in cities.
Sure, there’s a 5% margin of error, but that’s an “acceptable” level of collateral according to their masters. And sure, they are better at recognising some ethnicities than others, but since the ones they discriminate against aren’t a dominant part of the culture that produces them, nothing gets done about it.
And after 20 years, when the tech is obsolete and they all start malfunctioning, we’re left with the same problems we have with current mines, only because the ban on mines was reversed, the scale of the problem is much, much worse than ever before.
Lmao are you 12?
They do have the mentality of one.
Yes, those definitely sound like the sort of things military contractors consider.
Why waste a mine on the wrong target?
Why occupy a hospital?
Why encroach on others’ land?
Sorry… are you saying that’s what Palestinians are doing?
This is the best summary I could come up with:
The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.
Lethal autonomous weapons, that can select targets using AI, are being developed by countries including the US, China, and Israel.
The use of the so-called “killer robots” would mark a disturbing development, say critics, handing life and death battlefield decisions to machines with no human input.
“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, told The Times.
Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.
The New Scientist reported in October that AI-controlled drones have already been deployed on the battlefield by Ukraine in its fight against the Russian invasion, though it’s unclear if any have taken action resulting in human casualties.
As disturbing as this is, it’s inevitable at this point. If one of the superpowers doesn’t develop its own fully autonomous murder drones, another country will. And eventually those drones will malfunction, or some sort of bug will be present that gives them the go-ahead to indiscriminately kill everyone.
If you ask me, it’s just an arms race to see who builds the murder drones first.
A drone that is indiscriminately killing everyone is a failure and a waste. Even the most callous military would try to design better than that for purely pragmatic reasons, if nothing else.
Even the best laid plans go awry though. The point is even if they pragmatically design it to not kill indiscriminately, bugs and glitches happen. The technology isn’t all the way there yet and putting the ability to kill in the machine body of something that cannot understand context is a terrible idea. It’s not that the military wants to indiscriminately kill everything, it’s that they can’t possibly plan for problems in the code they haven’t encountered yet.
Other weapons of mass destruction, like biological and chemical weapons, have been successfully avoided in war; this should be classified exactly the same.
I feel like it’s ok to skip to optimizing the autonomous drone-killing drone.
You’ll want those either way.
If entire wars could be fought by proxy with robots instead of humans, would that be better (or less bad) than the way wars are currently fought? I feel like it might be.
You’re headed towards the Star Trek episode “A Taste of Armageddon”. I’d also note that people losing a war without suffering recognizable losses are less likely to surrender to the victor.
Won’t that be fun!
/s
The sad part is that the AI might be more trustworthy than the humans who are in control.
Have you never met an AI?
Edit: seriously though, no. A big player in the war AI space is Palantir, which currently provides facial recognition to Homeland Security and ICE. They are very interested in drone AI. So are the bargain-basement competitors.
Drones already have unacceptably high rates of civilian murder. Outsourcing that still further to something with no ethics, no brain, and no accountability is a human rights nightmare. It will make the past few years look benign by comparison.
Drone strikes minimize casualties compared to the alternatives: heavier ordnance on bigger delivery systems, or boots on the ground.
If drone strikes upset you, your anger is misplaced if you’re blaming drones. You’re really against military strikes at those targets, full stop.
When the targets are things like that wedding in Mali, sure.
I think your argument is a bit like saying depleted uranium is better than the alternative, a nuclear bomb, when the bomb was never on the table for half the stuff depleted uranium is used for.
Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.
Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.
It was literally the standard policy prior to drones.
Yeah, I think the people who are saying this could be a good thing seem to forget that the military always contracts out to the lowest bidder.
Eventually maybe. But not for the initial period where the tech is good enough to be extremely deadly but not smart enough to realize that often being deadly is the stupider choice.
No. Humans have stopped nuclear catastrophes caused by computer misreadings before. So far, we have a way better decision-making track record.
Autonomous killing is an absolutely terrible, terrible idea.
The incident I’m thinking about is geese being misinterpreted by a computer as nuclear missiles and a human recognizing the error and turning off the system, but I can only find a couple sources for that, so I found another:
In 1983, a Soviet early-warning computer mistook sunlight reflecting off clouds for a nuclear missile strike, and a human operator waited for corroborating evidence rather than reporting it to his superiors as protocol required, which would likely have resulted in a “retaliatory” nuclear strike.
https://en.m.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident
As faulty as humans are, it’s as good a safeguard as we have against tragedies. Keep a human in the chain.
Self-driving cars lose their shit and stop working if a kangaroo gets in their way. One day some poor people are going to be carpet-bombed because of some strange creature no one ever really thinks about except the locals.
What’s the opposite of eating The Onion? I read the title before looking at the site and thought it was satire.
Wasn’t there a test a while back where the AI went crazy and started killing everything to score points? Then, they gave it a command to stop, so it killed the human operator. Then, they told it not to kill humans, and it shot down the communications tower that was controlling it and went back on a killing spree. I could swear I read that story not that long ago.
It was a nothingburger. A thought experiment.
The link was missing a slash: https://www.reuters.com/article/idUSL1N38023R/
This is typically how stories like this go. Like most animals, humans have evolved to pay extra attention to things that are scary and give inordinate weight to scenarios that present danger when making decisions. So you can present someone with a hundred studies about how AI really behaves, but if they’ve seen the Terminator that’s what sticks in their mind.
Even the Terminator was the byproduct of this.
In the 50s/60s when they were starting to think about what it might look like when something smarter than humans would exist, the thing they were drawing on as a reference was the belief that homo sapiens had been smarter than the Neanderthals and killed them all off.
Therefore, the logical conclusion was that something smarter than us would be an existential threat that would compete with us and try to kill us all.
Not only is this incredibly stupid (i.e. compete with us for what), it is based on BS anthropology. There’s no evidence we were smarter than the Neanderthals, we had cross cultural exchanges back and forth with them over millennia, had kids with them, and the more likely thing that killed them off was an inability to adapt to climate change and pandemics (in fact, severe COVID infections today are linked to a Neanderthal gene in humans).
But how often do you see discussion of AGI as being a likely symbiotic coexistence with humanity? No, it’s always some fearful situation because we’ve been self-propagandizing for decades with bad extrapolations which in turn have turned out to be shit predictions to date (i.e. that AI would never exhibit empathy or creativity, when both are key aspects of the current iteration of models, and that they would follow rules dogmatically when the current models barely follow rules at all).
That depends heavily on the consequences of a problem. You don’t test much if you program a Lego car, but you test everything very thoroughly if you program a satellite.
In this case the amount of testing needed to allow a killer bot to run unsupervised will probably be so big that it will never be even half done.
Well, Ultron is inevitable.
Who we got for the Avengers Initiative?
Ultron and Project Insight. It’s like the people in charge watched those movies and said, “You know, I think Hydra had the right idea!”
Wouldn’t put it past this timeline.
How about no
Yeah, only humans can indiscriminately kill people!
For everyone who’s against this, just remember that we can’t put the genie back in the bottle. Like the A-bomb, this will be a fact of life in the near future.
All one can do is adapt to it.
There is a key difference though.
The A-bomb wasn’t a technology that, as the arms race advanced far enough, would develop the capacity to be anything from a conscientious objector to a usurper.
There’s a prisoner’s dilemma to arms races that in this case is going to lead to world powers effectively paving the path to their own obsolescence.
In many ways, that’s going to be uncharted territory for us all (though not necessarily a bad thing).
If you can dodge a wrench, you can dodge anything.
Similarly, if you can dodge a shoe, you can dodge war crimes tribunals.
Oh snap