
d4em t1_ixguiui wrote

Well, first, the comparison you're drawing between something created by nature and a machine we designed as a tool doesn't hold. We were not designed. It's not that "nature" did not aim to create consciousness; it's that nature does not have any aim at all.

Second, our very being is fundamentally different from what a computer is. Experience is a core part of being alive. Intellectual function is built on top of it. You're proposing the same could work backwards; that you could build experience on top of cold mechanical calculations. I say it can't.

Part of the reason is the hardware computers run on: it's entirely digital. They can't do "maybes."

Another part of the reason is that computers do not "get together" and have their unconscious minds meet. They are calculators, mechanically producing the answer to a sum. They don't wander, they don't try, they do not do anything that was not part of the explicit instructions embedded in their design.

1

d4em t1_ixeu6yh wrote

I'm not saying it's evil to create beings that are capable of suffering. I would say that giving a machine that has no choice but to follow the instructions given to it the capability to suffer would be evil.

And again, this machine would have to be specifically designed to be able to suffer. There is no emergent suffering that results from mathematical equations. Don't develop warm feelings for your laptop, I guarantee you they are not returned.

1

d4em t1_ixetoqs wrote

I'm not talking about sentience, sapience, consciousness, or anything like that; I'm talking about experience. All computers are self-aware, in that their code includes references to self. I would say machine learning constitutes a basic level of intelligence. What they cannot do is experience.

It's actually very interesting that you say we don't have a good enough understanding of consciousness yet. The thing about consciousness is that it's not a concrete term. It's not a defined logical principle. In considering what consciousness is, we cannot just do empirical research (it's very likely consciousness cannot be empirically proven); we have to make our own definition, we have to make a choice. A computer would be entirely incapable of doing so. The best it could do is measure how the term is used and derive something based on that. Those calculations could get extremely complicated and produce results we wouldn't have come up with ourselves. But it wouldn't be able to form a genuine understanding of what "consciousness" entails.

This goes for art too: computers might be able to spit out images, measure which ones humans think are beautiful, and use that data to create a "beautiful" image, but there would be nothing in that computer experiencing the image. It's just following instructions.

There's a thought experiment called the Chinese Room. In it, you have a man placed in a room who does not speak a word of Chinese. When you want your English letter translated into Chinese, you slide it through a slit in the wall. The man then goes to work and looks up all possible information related to your letter in a stack of dictionaries and grammar guides. He's extremely fast and accurate. Within a minute, a perfect translation of your letter comes back out of the slit in the wall. The question is: does the man in the room know Chinese?

For a more accurate comparison: the man does not know English either; he looks that up in a dictionary as well. It's also not a man, it's a piece of machinery that finds the instructions on how to look at your letter, and how to hand it back to you, in yet another dictionary. Every time you hand it a letter, the machine has to look in the dictionary to find out what a "letter" is and what it should do with one.
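Just to make the picture concrete, here's a toy sketch of that room: every step, including what a "letter" even is, comes out of a lookup table. All the table entries here are invented for illustration; the point is just that nothing in the code understands English or Chinese.

```python
# A toy "Chinese Room": every step is a lookup in a table.
# All table entries are invented for illustration.

word_table = {"hello": "你好", "friend": "朋友"}            # word -> translation
procedure_table = {"letter": "split, look up each word, join, return"}

def room(letter: str) -> str:
    # The machine first looks up what a "letter" is and what to do with one...
    instructions = procedure_table["letter"]
    # ...then follows those instructions mechanically, word by word,
    # emitting "?" for anything the dictionary doesn't cover.
    return " ".join(word_table.get(word, "?") for word in letter.split())

print(room("hello friend"))  # → 你好 朋友
```

It produces a correct-looking translation, and at no point is there anything for "knowing Chinese" to attach to.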

As for the problems with using AI or other computer-based solutions in government: yeah, pretty much. The real risk is that most police personnel aren't technically or mathematically inclined, and humans have shown a tendency to blindly trust what the computer or the model tells them. But also, if there were a flaw in one of the dictionaries, it would be flawlessly copied into every letter. And we're using AI to solve difficult problems that we might not be able to double-check.

2

d4em t1_ixe8sn6 wrote

Our moral systems probably got more refined as society grew, but by our very nature as living beings we need an understanding of right and wrong to inform our actions. A computer doesn't have this understanding; it just follows the instructions it's given, always.

I'm not making the argument that machines are incapable of empathy, although that follows by extension; the core of the argument is that machines are incapable of experience. Sure, you could train a computer to spit out a socially acceptable moral answer, but there would be nothing making that answer inherently moral to the computer.

I agree that little children are often psychopaths, but they're not incapable of experience. They have likes and dislikes. A computer does not care about anything; it just does as it's told.

The fundamental difference between a human hunch and the odd correlation the AI makes is that the correlation does not mean anything to the computer, it's just moving data like it was built to do. It's a machine.

2

d4em t1_ixe5eyy wrote

The thing is, for a baby to be hungry, it needs to have some sort of concept of hunger being bad. We need the difference between good and bad to stay alive. A computer doesn't, because it doesn't need to stay alive; it just runs and shuts down according to the instructions it's given.

We need to learn ethics, yes, but we don't need to learn morals. And ethics really is the study of moral frameworks.

It's not because the computer is not advanced enough. It's because the computer is a machine, a tool. It's not alive. Its very nature is fundamentally different from that of a living being. It's designed to fulfil a purpose, and that's all it will ever do, without a choice in the matter. It simply is not "in touch" with the world the way a living being is.

It's natural to empathize with computers because they simulate mental function. I've known people to empathize with a rock they named and drew a face on, it doesn't take that much for us to become emotionally attached. If we can do it with a rock, we stand virtually no chance against a computer that "talks" to us and can simulate understanding or even respond to emotional cues. I would argue that it's far more important we don't lose sight of what computers really are.

And if someone were to design a computer capable of suffering, or in other words a machine that can experience - I don't think it's possible, and it would need to be so entirely different from the computers we know that we wouldn't call it a "computer" - that person is evil.

1

d4em t1_ixe1anb wrote

>Data being collected on facial expressions in the billions is more likely. Then you correlate that with other stuff. Bottom line, it's as if the cameras are installed in the privacy of your home, because mountains of data in public provides the missing data in private.

I would say this constitutes "monitoring everything out of the neurotic obsession someone might do something that's not allowed", wouldn't you?

4

d4em t1_ixdy7r1 wrote

Does a baby need to be taught to feel hungry?

While I appreciate the comparison you're making, it poses a massive problem: who initially taught humans the difference between right and wrong?

Kids do good without being told to. They can know something is wrong without being taught it is. For a computer, this simply is not possible. We're not teaching kids what "good" and "bad" are, as concepts. We're teaching them to behave in accordance with the morals of society at large. And sure, you could probably teach a computer to simulate this behavior and make it look like it's doing the same thing, but at the very core, there would be something fundamental missing.

What's good and bad isn't a purely intellectual question. It's deeply tied to what we feel, and that's what a computer simply cannot do. Even if we teach it to emulate empathy, it will never truly have the capacity to place itself in someone else's shoes. It won't even be able to place itself in its own shoes. Insofar as it's trying to stay alive, it's only because it's following the instruction to do so. A computer is not situated in the world the same way living beings are.

8

d4em t1_ixdukvm wrote

I'm not talking about reasoned explanations when I say a computer does not understand what it's doing. What I mean is that a computer fundamentally has no concept of "right and wrong." It's just a field of data, and to the computer it would be all the same if you swapped the field for "good" with the field for "bad"; it would uncaringly keep making its calculations. Computers do not feel, they do not have hunches. All they do is measure likelihood based on ever more convoluted mathematical models. It's a calculator.
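You can see the label-swapping point in even the simplest "classifier." Here's a minimal sketch, with invented toy data: a 1-nearest-neighbour rule trained once with the labels "good"/"bad" and once with them swapped. The arithmetic is identical in both cases; only the text it parrots back changes.

```python
# The labels mean nothing to the machine: swap the strings "good" and "bad"
# and the exact same distance computation runs, unbothered.
# Data points and labels are invented for illustration.

def nearest_label(x, data):
    # Return the label of the training point closest to x.
    return min(data, key=lambda pair: abs(pair[0] - x))[1]

data_a = [(0.0, "good"), (1.0, "bad")]
data_b = [(0.0, "bad"), (1.0, "good")]  # same numbers, labels swapped

print(nearest_label(0.2, data_a))  # → good
print(nearest_label(0.2, data_b))  # → bad  (identical computation)
```

Nothing in `nearest_label` treats "good" differently from "bad"; they're interchangeable strings riding along on the math.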

Any emotional attachment is purely coming from our side. A computer simply does not care. Not about itself, not about doing a good job, and not about you. And even if you told it to care, that would be no more than just another instruction to be carried out.

11

d4em t1_ixdroz5 wrote

Oh yeah, this is a whole rabbit hole. There are also algorithms being trained by people to identify subjective values, such as "niceness." These are notoriously biased as well - as biased, in fact, as the people who train them. But unlike those people, the AI's opinion won't be changed by actually getting to know the person it's judging. It gives 100%-confident, biased results.

Or the chatbots that interpret written language and earlier conversations to simulate conversation. One of them was unleashed on the internet and was praising Hitler within 3 hours. Another, a scientific model designed to skim research papers and give summaries to scientists, answered that vaccines both can and cannot cause autism.

These don't bother me though. They're so obviously broken that no one will think to genuinely rely on them. What bothers me is the idea of this type of tech becoming advanced enough to sound coherent and reliable, because the same issues disrupting the reliability of the AI we have today will still be present; they're limitations of the technology itself. Yet even today we have people hailing the computer as our moral savior that's supposed to end untruth and uncertainty. If the tech gets a facelift, I believe many people will falsely place their trust in a machine that simply cannot do what is being asked of it, but tries its damnedest to make it look like it can.

10

d4em t1_ixdqech wrote

>One last point. You'd be amazed how useful "innocent" incidental data is. Just the expressions on faces or even clothing style and gait may correlate to other data in unexpected ways.

Looking angry on your way home because you got a cancer diagnosis and you're convinced life hates you? The police will now do you the honor of frisking you because you were identified as a possible suspect!

Are you a person of color that recently immigrated? Were you aware immigrants and persons of color are disproportionally responsible for crimes in your area? The police algorithms sure are!

This is an ethical nightmare. People shouldn't become suspects based on innocent information. Even holding them suspect for a future crime because of one they committed in the past is iffy. There's a line between vigilance and paranoia that's being crossed here.

And neither should we monitor everything out of the neurotic obsession that someone might do something that's not allowed. Again, crossing the line between vigilance and paranoia - crossing it so far that the line is now a distant memory we're not really sure ever existed. Complete safety is not an argument. Life isn't safe and it doesn't have to be. We all suffer, we all die. There is a need to strike a balance, so we can do other things besides suffering and dying. Neither safety nor danger should control our every second.

9

d4em t1_ixditg1 wrote

These algorithms are very vulnerable to bias. If a neighbourhood is heavily patrolled, the chance is much higher that any infraction is added to the learning set, increasing the "crime value" of that area. Meanwhile, areas that are rarely patrolled have a much lower chance of ending up in the database at all. This creates blind spots.
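The feedback loop is easy to demonstrate with a toy simulation (all numbers invented): two areas with the exact same true offence rate, where one simply gets patrolled more. The recorded "crime" then points straight back at the heavily patrolled area, which is exactly what a naive model would train on.

```python
import random

# Toy patrol-bias simulation: both areas have the SAME true offence rate,
# but an offence is only recorded if a patrol happens to be there to see it.
# All rates and counts are invented for illustration.

random.seed(0)
TRUE_RATE = 0.1            # identical offence rate in both areas
patrol_prob = [0.9, 0.1]   # area 0 is patrolled far more than area 1
recorded = [0, 0]

for _ in range(10_000):
    for area in (0, 1):
        offence = random.random() < TRUE_RATE
        observed = offence and random.random() < patrol_prob[area]
        if observed:
            recorded[area] += 1

print(recorded)  # area 0 records roughly 9x the "crime" of area 1
```

Feed `recorded` into a model as ground truth and it will "learn" that area 0 is far more criminal, justifying even more patrols there - the loop closes on itself.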

A real-life example of where policing by AI went horribly wrong is the Dutch childcare benefit scandal. The algorithm "learned" that certain types of people (single mothers, immigrants) were more likely to have something wrong with their taxes, checked them more often, and then identified them as fraudsters for minor infractions like receipts being handed in incorrectly or a payment being a few days late. Because computers are *magic truth machines* that *don't make mistakes*, these people were given no legal recourse, no chance to defend themselves. They did not even know what they were accused of.

If we are going to use machine learning as a tool to help legal administration, we need to take extreme caution, and everyone working with these machines must fully understand their limitations. The computer has no idea what it's actually doing; it's just a fancy calculator following instructions, and while it follows those instructions flawlessly, it's still extremely error-prone, and it does not have the capability for self-reflection a human does, even if "learning" is built into the algorithm. AI fundamentally does not understand what it's doing, and that means it will never understand when it's doing wrong. We cannot use AI to replace our own judgment.

114

d4em t1_iwbz2kw wrote

Gluten sensitivity linked to symptoms of schizophrenia, autism, and depression

Gut microbiome plays a role in stress response, anxiety, and depression

In general, studies right now are still stating further research is needed, but the evidence so far does show a definite link between gut health and mental health.

There's also a bigger overall link between general physical health and better mental health.

Physical exercise linked to better outcomes for (non-bipolar) forms of depression

Physical exercise leads to better outcomes for youth with autism

(Successfully) quitting smoking improves mental health

Really there's quite a lot to find on this.

526