glass_superman

glass_superman t1_j3h1lkg wrote

This is giving the clock too much credit. It's just a clock. If you want to claim that hours of unpaid homemaking are a tyranny, then blame the society that didn't pay for them, not the hours.

2

glass_superman t1_j26l6c5 wrote

That's totally what is going to happen. Look at international borders. As nuclear weapons and ICBMs have proliferated, we find that national borders are now basically permanent. Before WWII shit was moving around all the time.

AI will similarly cement the classes. We might as well have a caste system.

1

glass_superman t1_j24pzoq wrote

You'll not be comforted to know that the AI that everyone is talking about, ChatGPT, was funded in part by Elon Musk!

We think of AI as some crazy threat but it might as well be the first bow and arrow or AK-47 or ICBM. It's just the latest in a line of tools wielded by the wealthy for whatever purpose they want. Usually to have a more efficient way to do whatever it is that they were already doing. Never an attempt to modify society for the better.

And why would they? Society is already working perfectly for them. Any technology that further ingrains this is great for them! AI is going to make society more like it already is. If it makes society worse, it's because society is already bad.

1

glass_superman t1_j24nq3v wrote

The Koch Bros are not as deeply depraved as a fascist leader, but they have a much wider breadth of influence. They are more dangerous than Pol Pot because what they lack in depth, they more than make up for in breadth.

4

glass_superman t1_j241gi4 wrote

Is it ridiculous to worry about evil AI when we are already ruled by evil billionaires?

It's like, "oh no, what if an AI takes over and does bad stuff? Let's stop the AI so that we can continue to have benevolent leaders like fucking Elon Musk and the Koch Brothers."

Maybe our AI is evil because it is owned by dipshits?

80

glass_superman t1_j0vqdqp wrote

> many people don’t feel well-represented by either of the two major political parties.

Maybe that was no accident?

If you look at elections in the USA, every race has exactly one winner. The presidency, obviously, but even, say, House representatives: though your state may have many, there is a separate election for each seat. Same for senators, governors, etc.

Contrast this with, say, a parliament where you still vote for only one person/party but the 100 most popular win, or a running race where the top 5 advance.

https://en.m.wikipedia.org/wiki/Duverger%27s_law

> This is because Duverger's law says that the number of viable parties is one plus the number of seats in a constituency.

With one seat per race, that's 1 + 1 = 2 viable parties. So that's why we have only two parties in America.

I ask: Was the country intentionally designed this way in order to provide the illusion of choice without actually providing any choice?

The two parties agree on almost everything. We're so hyper-focused on the differences between them that we fail to notice how very similar they are. Which of them opposes a stronger military? Which one is anticapitalist? Which party is against eating meat? Which party wants to dissolve the federal government? On major issues there is no dissent.

5

glass_superman t1_ixga3jd wrote

>And that's because the transparency of the process is inextricable from your ability to see if it's working.

Would you let a surgeon operate on you even though you don't know how his brain works? I would because I can analyze results on a macro level. I don't know how to build a car but I can drive one. Dealing with things that we don't understand is a feature of our minds, not a drawback.

>Take a situation described in the OP where the police are not distributed evenly along some racial lines in a community. Lets say that the police spend 25% more time in the community of racial group A than they do of racial group B. That group is going to assert that there is bias in the algorithm that leads to them being targeted, and if you cannot DEMONSTRATE that not to be the case than you'll have the kind of rejection of policing that we've been seeing throughout the country in the last few years. You won't be able to get people to join the police force, you won't get communities to support the police force, and when that happens it's not going to matter how efficiently you can distribute them.

Good point and I agree that policing needs more explainability than a chess algorithm. Do we need 100%? Maybe.

>Just like not crashing might be the metric with which you measure the success of an AI that drives cars; trust would be one of the metrics with which you would measure the success of some kind of AI legal system.

Fair enough. So for policing we require a high level of explainability, let's say. Suppose we're offered an unexplainable AI that saves an extra 10,000 people per year, but we opt for the explainable one because, despite the miracle, we don't trust it! Okay.

Is it possible to make a practical useful policing AI with a high level of explainability? I don't know. It might not be. There might be many such fields where we never use AIs because we can't find them to be both useful enough and explainable enough at the same time. Governance, for instance.

1

glass_superman t1_ixfujfq wrote

It's hard for me to imagine the future of AI policing because we don't know how it may be used in the future.

If we don't rule out AIs working together, maybe the public works AI and the policing AI implicitly collude to not repair broken windows in some neighborhoods. https://en.m.wikipedia.org/wiki/Broken_windows_theory

That's not a great example. Hmm...

Your assumption is that the police AI wouldn't be plugged into some other AI through which it could increase crime, right? Is that a reasonable assumption? Do we find that AI systems don't interact?

In the stock market, quants program AI to trade stock. And often those programs are interacting with each other. In fact, most of the stock market volume is trades between programs. So we do have examples of AIs connecting.

You could imagine a future where the policing AI convinces the police chief to let the AI connect to the internet. And then the AI uses twitter to incite a riot and then sends police to quell it, to earn points for being a good riot-stopping AI.

Eliezer Yudkowsky did the "escape the box" thing twice.

https://towardsdatascience.com/the-ai-box-experiment-18b139899936

Even if you don't find these arguments fully convincing, hopefully between the YouTube example, the quants, and Yudkowsky, there is at least some inkling that humanity might somewhere develop a policing AI that would intentionally try to increase crime in order to have more crime to police. It could happen?

1

glass_superman t1_ixfso7p wrote

Consciousness emerged from life as life advanced. Why not from computers?

You could argue that we wouldn't aim to create a conscious computer. But neither did nature aim to create consciousness and here we are.

So I absolutely do think that there's a chance that it simply emerges. Just like it did before. Every day some unconscious gametes get together and, at some point, consciousness emerges, right? If carbon, why not silicon?

1

glass_superman t1_ixfsc4n wrote

I agree with you except for the part where you seem very certain that understanding trumps all utility. I am thinking that we might find some balance between utility and explainability. Presumably there would be some utilitarian calculus weighing the importance of explainability against the utility of the AI's function.

Like for a chess playing AI, explainability might be totally unimportant but for policing it is. And for other stuff it's in the middle.

But say you have the choice between an unexplainable AI that drives cars and an explainable one, and the explainable one is shown to lead to ten times the fatalities of the other. Surely there is some level of increased fatalities at which you'd be willing to accept the unexplainable one?
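To make that utilitarian calculus concrete, here's a toy sketch (my own illustration, with made-up numbers): weight explainability against utility, with the weight depending on the domain.

```python
# Toy sketch of trading off utility and explainability (invented numbers).
def score(utility, explainability, weight):
    # weight in [0, 1]: how much this domain cares about explainability
    return (1 - weight) * utility + weight * explainability

# Chess: nobody cares why the engine made a move, only that it wins.
chess = score(utility=0.95, explainability=0.10, weight=0.0)
# Driving: outcomes mostly dominate, but some explainability builds trust.
driving = score(utility=0.90, explainability=0.20, weight=0.3)
# Policing: trust dominates, so explainability is weighted heavily.
policing = score(utility=0.70, explainability=0.90, weight=0.8)
print(chess, driving, policing)
```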

Here's a blog with similar ideas:

https://kozyrkov.medium.com/explainable-ai-wont-deliver-here-s-why-6738f54216be

1

glass_superman t1_ixeoz42 wrote

>And you're missing the point of the field if you're making the trivial observation that working out an explanation decreases the usefulness.

That's not what I said! I'm saying that limiting AI to only the explainable may decrease usefulness.

This is trivially true. Imagine that you have many AI programs, some of which you can interrogate and some that you can't. You need to pick one to use. If you throw out the unexplainable ones, you have fewer tools. It's not a more useful situation.

>That is the point. We want to decrease it's usefulness and increase its accuracy in fields where the accuracy is paramount.

But accuracy isn't the same as explainability! A more accurate AI might be a less explainable one. Like a star poker player with good hunches vs a mediocre one with good reasoning.

We might decide that policing is too important to be unexplainable so we decide to limit ourselves to explainable AI and we put up with decreased utility of the AI in exchange. That's a totally reasonable choice. But don't tell me that it'll necessarily be more accurate.

> Saying "math reduces the usefulness by requiring an explanation for seemingly okay steps" is to miss the point of what mathematics is trying to do.

To continue the analogy, there are things in math that are always observed to be true yet we cannot prove them (the Riemann hypothesis, say). We might never be able to prove them, yet we proceed as if they are true. We utilize that for which we have no explanation because utilizing it makes our lives better than waiting around for a proof that might never come.

Already math utilizes the unexplainable. Why not AI?

1

glass_superman t1_ixen19z wrote

>And if someone were to design a computer capable of suffering, or in other words, a machine that can experience - I don't think its possible and it would need to be so entirely different from the computers we know that we wouldn't call it a "computer" - that person is evil.

I made children that are capable of suffering? Am I evil? (I might be, I dunno!)

If we start with the assumption that no computer can be conscious then we will never notice the computer suffer, even if/when it does.

Better to develop a test for consciousness and apply it to computers regularly, to have a falsifiable result. So that we don't accidentally end up causing suffering!

0

glass_superman t1_ixem81g wrote

>I don't necessarily agree that we need to have what you call 'unexplainable AI'

To be more precise, I'm not saying that we must have unexplainable AI. I'm just saying that limiting our AI to only the explainable increases our ability to reason about it (good) but also decreases the ability of the AI to help us (bad). It's not clear if it's worth the trade-off. Maybe in some fields yes and in others no.

Most deep learning is already unexplainable and it's already not useful enough. To increase both the usefulness and the explainability will be hard. Personally, I think that maximizing both will be impossible. I also think that useful quantum computers will be impossible to build. I'm happy to be proven wrong!

1

glass_superman t1_ixe2glj wrote

A baby doesn't need to learn to be hungry but neither does a computer need to learn to do math. A baby does need to learn ethics, though, and so does a computer.

Whether or not a computer has something fundamentally missing that will make it never able to have a notion of "feeling" as humans do is unclear to me. You might be right. But maybe we just haven't gotten good enough at making computers. Just like we, in the past, made declarations about the inabilities of computers that were later proved false, maybe this is another one?

It's important that we are able to recognize when the computer becomes able to suffer for ethical reasons. If we assume that a computer cannot suffer, do we risk overlooking actual suffering?

−2

glass_superman t1_ixe1b9e wrote

I didn't mean that the AI should be able to explain itself. I meant that we should be able to dig in to the AI and find an explanation for how it worked.

I'm saying that requiring either would limit AI and decrease its usefulness.

Already we have models that are too difficult to dig into to figure out why a choice was made. As in, you can step through the math of a deep learning system to follow along, but you can't pinpoint the decision in there any more than you can root around in someone's brain to find the neuron responsible for a behavior.

1

glass_superman t1_ixdtiuc wrote

> The computer has no idea what it's actually doing

Counterpoint: Neither do we.

Expert poker players are often unable to explain why a hand felt like a bluff. It could be that they are picking up on something and acting without being able to reason about it.

Likewise, a doctor with a lot of experience might have some hunch that turns out to be true. The hunch was actually solid deduction that the doctor was unable to reason about.

Even you, driving, probably sometimes get a hunch that a car might change lanes or get a hunch that an intersection should be approached slowly.

I (and others) feel that explainable AI might be a dead end. If we told the poker player to only call a bluff when they can put into words what is wrong, that player might perform worse. It might be that forcing AI to be explainable is artificially limiting its ability to help us.

Even if you don't buy that, there are studies suggesting that consciousness explains our actions after the fact, like an observer. So we're not really using reason to make decisions; we just do things and then reason about why.

We let humans drive cars on hunches. Why should we hold AI to a higher standard? Is a poorly performing explainable AI better than an unexplainable one that does a good job?

3

glass_superman t1_ixdpqis wrote

Another problem that AI has, which is not mentioned here, is creating proper incentives. I'll give the example of YouTube.


YouTube has the incentive for more ads to be viewed, which roughly coincides with people staying on YouTube longer, which means YouTube needs to select the right next video for you to watch so that you won't tune out.

An AI algorithm might work hard to be a better predictor of your preferences. But it might also work hard to change you to be easier to predict. We find that if you watch enough YouTube videos, eventually you will enter a loop of extremist views on politics. Extremists are easier to predict. YouTube will modify your mind to make you more predictable.

https://www.technologyreview.com/2020/01/29/276000/a-study-of-youtube-comments-shows-how-its-turning-people-onto-the-alt-right/


Back to policing. Imagine that the algorithm discovers a way to increase the crime rate in one part of town. It could do that while also deploying more police there. This would make the algorithm appear more effective at stopping crime even though the algorithm was actually also the cause of the crime.

It seems like we wouldn't make an algorithm that could increase crime but we could imagine the AI plugged into other ones that could, like maybe an AI determining which neighborhoods get better roads and schools. And anyway, probably no one at YouTube imagined that their AI would intentionally radicalize people but here we are. So we probably should be worried that an AI controlling policing might try to increase crime.
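Here's a toy sketch of that perverse incentive (my own illustration, with invented numbers): if the reward only counts crimes stopped, and the agent's actions can also raise the crime rate, then the reward-maximizing behavior is to create crime and then police it.

```python
# Toy illustration of a mis-specified policing reward (invented numbers).
def crimes_occurred(base_rate, agent_influence):
    # Crime that happens: the agent's side effects can add to the base rate.
    return base_rate + agent_influence

def reward(crimes, patrol_coverage):
    # Reward counts only crimes stopped, not crimes that never happened.
    return crimes * patrol_coverage

honest = reward(crimes_occurred(base_rate=100, agent_influence=0), patrol_coverage=0.9)   # 90.0
gaming = reward(crimes_occurred(base_rate=100, agent_influence=50), patrol_coverage=0.9)  # 135.0
print(honest, gaming)  # the higher score comes from causing more crime
```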

1

glass_superman t1_iww1kip wrote

That's true. And there are probably even more instances than we know of, because if some rabbi declared that the world would end in seventy years and it didn't happen, they probably adjusted it as the text was passed down generation to generation.

The book of Daniel is maybe the most famous of the apocalyptic ones.

So we've always had a fairly popular belief that humanity is on the way out?

2

glass_superman t1_iwvz4i8 wrote

That list is all quite modern. Those things have concerned us; some still do, some don't. But they are concerns and distractions.

Did similar concerns exist hundreds of years ago? Did people take breaks from moral philosophy because they're like, "Ah, who cares, no one will be alive in 100 years anyway."?

I'm wondering if this feeling that humanity might end is new.

3

glass_superman t1_iwvpkdd wrote

Probably it's hard to focus on theories of mind and the like when you're worried about whether your great-grandchildren could ever possibly exist.

It should concern us that we have to pause our moral and ethical progress to deal with this matter of everyone dying pretty soon.

Is this phenomenon of being worried about human extinction a new thing or did people commonly feel this way 500+ years ago?

8