Comments

PotatoRT t1_ix63gmq wrote

I suggest that all killbots be programmed with a pre-determined kill limit, and we shall call it Brannigan's Law.

46

Cloudboy9001 t1_ix6m2x1 wrote

Brannigan's Laws:

  1. No killbot shall exceed 9 kills during fully autonomous operation.
  2. No killbot shall be permitted to hit the reset button on another killbot.
  3. No killbot shall operate in a fully autonomous state unless remote operation is interrupted by the enemy.

14

pinkfootthegoose t1_ix6royr wrote

What is this "we" they talk about? We all know that those in power will make the decisions in these matters, and they will manufacture consent after the fact.

16

GetOutOfNATO t1_ix899t6 wrote

Don’t look at me, I didn’t vote for any of them.

2

improper84 t1_ix6v7n1 wrote

The inevitability of Skynet is why I'm always very polite to all my appliances.

10

DirectorMysterious64 t1_ix755bk wrote

Yeah, I gave my fridge a pie and some whipped cream yesterday. I hope he/she/it liked it. 😆

2

SupremeEmperorNoms t1_ix6ecrh wrote

We can't! AI programs are self-learning, attempting to improve at their given task whenever possible and whenever new data is introduced. If an AI's core program is to kill humans, then it will do so with greater and greater efficiency, no matter how often people attempt to give it "exceptions." War is far too grey for AI to be used effectively without serious consequences. Warehouses? Business? Jobs with repetitive tasks and specialized knowledge? AI can take pretty much every job in existence and do it effectively with little to no backlash against the humans that built it, but making AI for war will never end well.

Right this second, I'm not worried. Neural networks still have a long way to go before they can develop something as complex as culture, but the use of AI in war should be seen as just as bad an idea as the Manhattan Project...

6

Mrsparkles7100 t1_ix7610d wrote

I believe the first step is the Loyal Wingman program.

https://breakingdefense.com/2022/09/air-force-faces-key-questions-for-next-gen-fighters-drone-wingmen/

Autonomous-style drone wingmen.

Plus they've really gone all in on the name of the new project, Skyborg :) "Valkyrie success may push Skyborg drone concept to other programs, Kratos says"

2

SupremeEmperorNoms t1_ix77ccw wrote

From the sounds of it, it's far less AI-heavy, requiring a great deal of human input, but I would still be nervous about how much they allow the AI to learn and how many decisions they eventually automate.

1

Mrsparkles7100 t1_ix77xho wrote

It's the start of the program. The pilot is meant to have minimal control. A couple of pilots with 5 drones as wingmen. Now picture around 20-30 manned planes with 100 autonomous drones carrying out various mission objectives. That's the future the USAF is planning for over the next couple of decades.

Found this in a Defense News article:

Hawk Carlisle, a retired general who formerly led Air Combat Command, said the ability to extend an aircraft's reach with AI-infused wingmen is the next step for air combat. "This is a natural evolution, especially when you look at the capability today with respect to AI, with respect to systems, with respect to the computing power and capability you can put in a particular size."

The new upgrade to the B-2 is meant to be shown in December. It will be interesting to see if they announce any future interaction with Loyal Wingman.

Also, this is an interesting article:

https://www.defensenews.com/industry/techwatch/2021/11/08/darpa-has-caught-a-gremlin-drone-in-midair-can-it-grab-four-in-a-half-hour/

1

BWALK16 t1_ix70v0s wrote

“How can we control weaponized robots?”

I think not making them in the first place would be pretty effective.

5

Gari_305 OP t1_ix5xz8k wrote

From the Article

>“In war, unexpected things happen all the time. Outliers are the name of the game and we know that current AIs do not do a good job with outliers,” says Batarseh.
>
>To trust AIs, we need to give them something that they will have at stake
>
>Even if we solve this problem, there are still enormous ethical problems to grapple with. For example, how do you decide if an AI made the right choice when it took the decision to kill? It is similar to the so-called trolley problem that is currently dogging the development of automated vehicles. It comes in many guises but essentially boils down to asking whether it is ethically right to let an impending accident play out in which a number of people could be killed, or to take some action that saves those people but risks killing a lesser number of other people. Such questions take on a whole new level when the system involved is actually programmed to kill.

2

Rogaar t1_ix6gytz wrote

With the whole trolley problem, replace the AI with a human. How would the human choose in this situation? Probably not logically, since emotion would play a role.

We are projecting these ideas onto machines and expecting AI to solve them, yet we don't have solutions ourselves.

4

AgentTin t1_ix6481v wrote

Do you think you could train an AI vision algorithm to recognize people with concealed weapons? What about suicide bombers?

You could use it arbitrarily to clear an area, but it certainly won't be any more effective than a Tomahawk missile at the same job. We've all seen those videos where a robot flicks the unripened cherries out of the air; are we imagining it doing the same with people? Letting the good ones pass while delicately cutting the rest down with a burst of gunfire? Facial recognition technology functions worse the darker your skin is, which is unfortunate given the number of brown people we like to target with these weapons. I don't imagine these will see a lot of use in Western Europe.

1

onedoesnotjust t1_ix6cs89 wrote

I think the opposite tbh.

It will be developed first, and once AI bots start seeing more action, it will be a part of war for everyone, like drones. You can have similar protocols.

It's easily justified in the math; it's easier to replace a robot.

Also, smaller countries could potentially build a world-class military with drones and weaponized bots and far fewer soldiers. In fact, you could simply hire everyone out.

I even see a privatized robot military that works for the highest bidder. Lots of places don't have the same ethical qualms.

How long before these dogs get bombs attached to them?

Once production costs go down, they will be mass-produced for consumers; after that, the third world starts getting all the "cast-off" old tech.

1

AgentTin t1_ix6iliy wrote

Either we are talking about remote-controlled robots, like Predator drones, or we are talking about fully autonomous systems. The question is what job we think is better served by the robots.

Robots work as bombers because people and the stuff necessary to keep them alive are heavy and a plane functions much better without us sitting in it.

A robotic dog isn't really any better in most situations than just a dude, or a regular dog. It is going to require a power grid and maintenance while soldiers just require food, bullets, and water.

AI on the other hand is more interesting. It can notice patterns humans don't and it can potentially make choices and act far more quickly than a human can. One space I think this might be helpful is in point defense. If the AI could recognize car bombs or suicide bombers it could act to neutralize the threat before the guards are even aware of it.

Forklifts exist because humans are weak; AI exists because people are stupid and expensive. Where can AI either outperform or undercut a human?

2

onedoesnotjust t1_ix6jxc3 wrote

Training a soldier costs millions. It's more than just food and water.

More equivalent would be longbows vs. crossbows.

Takes years of training to use a longbow, crossbows are easy to use.

You don't have to separate it all; that's unrealistic.

Combine all that: drones with bombs/cameras, dogs with guns/cameras, an AI system sorting through footage in real time and giving analysis, and one operator granting operational permission.
That's future war.

2

Mrsparkles7100 t1_ix75jh5 wrote

Pretty much the Loyal Wingman program. F-35s, F-22s, and next-gen fighters act as mobile control centres. Have a squadron of fully or semi-autonomous drones as wingmen. Let's say 2 human-manned planes and 4 or 5 wingmen. Have one pilot giving out instructions to the drones, and then AI takes over as it attacks its target.

Then you can leave those wingmen on continuous loitering programs over regions, with their own kill list of priority targets. Real-time intel gets uploaded and triggers the strike program in the drone.

The UK is looking to add a catapult system to its carriers to accommodate its future loyal wingman program.

So yeah, that film Stealth isn't too crazy-sounding, given enough time.

3

Praeteritus36 t1_ix6meru wrote

We absolutely need to come to a global consensus on the use of AI products. If we don't, one day we won't be fighting each other, and we'll wish we had actually come together as the human race...

2

HankyDanger t1_ix6qyje wrote

After so much conversation on the topic, are we really going to make killer robots? The hubris is staggering.

2

MaracaBalls t1_ix67jpm wrote

It’s Skynet in the making. COME VIT ME IF YOU VANT TO LIVE. HASTA LA VISTA, BABY!

1

darrellbear t1_ix6uvkc wrote

Asimov's Three Laws of Robotics seem naive nowadays:

First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Zeroth Law

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

The Zeroth Law was a later addition.

1

JaxJaxon t1_ix6xijf wrote

You can't. That is why it is called A.I.; the last letter should give it away.

1

Mrsparkles7100 t1_ix768fb wrote

Keep an eye out for the term "collaborative combat aircraft." It's the new USAF term for autonomous drone wingmen.

1

badforman t1_ix7ej82 wrote

Just make female versions with big boobies. If it is true AI, they will get too distracted to do their job effectively.

1

mufftruck t1_ix69skk wrote

The best way to control them is with .50 BMG black tips or silver tips.

0