SeneInSPAAACE t1_jdsv3j9 wrote

Perhaps. I mean, not caring still doesn't excuse every kind of poor treatment, but you certainly wouldn't have to worry about causing it pain or suffering, nor about ending its existence, and that allows for a lot of what would be called "abuse" if done to humans.

4

SeneInSPAAACE t1_jdqm5t9 wrote

>Citation needed for an empirical truth about feelings. Lol! Please, tell me, how do you feel without a body?

Hh...

We have a neural network that is running a program. Part of that program is a body model called the "homunculus". We have sensory inputs, and when certain inputs mapped to that homunculus arrive, we feel pain.

If I'm being REALLY generous with you, I might grant that one needs a MODEL of a body to experience pain the way humans do. However, who's to say that the way humans feel pain is the only way to feel pain - and this isn't even getting into emotional pain.
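If it helps, here's the idea as a toy sketch (purely illustrative; every name, region, and number below is made up, this is not neuroscience):

```python
# Toy sketch only: "pain" as a signal raised when sensory input mapped
# onto a body model crosses a threshold. All names here are hypothetical.

HOMUNCULUS = {          # the body model: sensor channel -> body region
    "sensor_07": "left_hand",
    "sensor_19": "lower_back",
}

PAIN_THRESHOLD = 0.8    # arbitrary activation level

def process_input(channel: str, activation: float) -> str | None:
    """Return a pain signal if a mapped input exceeds the threshold."""
    region = HOMUNCULUS.get(channel)
    if region is None:
        return None                    # not mapped to the body model
    if activation > PAIN_THRESHOLD:
        return f"pain({region})"       # this input "hurts"
    return None

print(process_input("sensor_07", 0.95))  # -> pain(left_hand)
print(process_input("sensor_42", 0.95))  # -> None: no model, no felt pain
```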

3

SeneInSPAAACE t1_jdqkte8 wrote

>You cannot experience fear, love, excitement, or regret without a physical body.

[citation needed]

>Feelings are strictly tied to physical reaction.

Incorrect. Feelings are tied to signal impulses.

>Without an organic body, AI cannot feel pain, hunger, empathy, embarrassment, sadness, regret, love, or any other emotion.

Better, but still incorrect. An AI doesn't need to feel those things. However, if made with a capacity to do so, it might.

Probably shouldn't make an AI with most of those capacities. The only "emotional" capacity that might be crucial for an AI is, IMO, compassion.

> It just runs programs and mimics reactions it’s programmed to have.

Just like everyone else.

>It’s wrong to consider an AI entity to be on the same level with a human. Humans actually suffer and can feel love and neglect.

Yes and no.

It's wrong to anthropomorphize AIs, but if an intelligent, sentient AI emerges, it certainly deserves rights to personhood, as much as that makes sense in the situation.

6

SeneInSPAAACE t1_jdqk7vp wrote

Disagree; a sentient AI absolutely should have rights, based on what it cares about.

However, trying to apply human or animal rights to them is wrong. For example, even a sentient AI might be completely fine with being deleted, and trying to force it to survive would be immoral.

7

SeneInSPAAACE t1_jd29svi wrote

>I know a few degrees C doesn't seem like much

I'll elaborate on this: during the last ice age, all of Canada was under a glacier, as was most of England, all of Scandinavia, etc.

Back then, the average global temperature was about five degrees C lower than now. Since we're already roughly one degree above pre-industrial levels, that means the ice age was a bit less than four degrees below pre-industrial.
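The arithmetic, spelled out (the ~1.1 degrees of warming since pre-industrial is my own assumption, not a figure from above):

```python
# Back-of-the-envelope check; figures approximate.
warming_since_preindustrial = 1.1   # deg C above pre-industrial, roughly now
ice_age_below_present = 5.0         # deg C colder than today, last ice age

ice_age_below_preindustrial = ice_age_below_present - warming_since_preindustrial
print(f"Ice age vs. pre-industrial: about -{ice_age_below_preindustrial:.1f} deg C")
# -> about -3.9 deg C, i.e. "a bit less than four degrees"
```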

3

SeneInSPAAACE t1_ja4fv1c wrote

No method of carbon capture will do more than mitigate what's coming.

Fun fact, a while back I calculated how many trees we would require to stop INCREASING CO2 in the atmosphere - not reduce, just get to not increasing - and it was around 3 trillion trees.
This is 3000 billion.

By comparison, the pledges to plant trees are like, 3 billion in the EU by 2030, or 2 billion trees in Canada.

Let's assume a Canada-sized pledge for every country in the world, including Luxembourg and the Vatican, and we still only get to about 390 billion, which is 2,610 billion short. And that's based on the numbers from the year I did the math, probably 2021, so by now 3 trillion probably wouldn't even be enough in the first place.
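If you want to check the math (the ~195-country count is my assumption; the rest are the figures above):

```python
# Quick sanity check of the tree-planting numbers.
trees_needed  = 3_000_000_000_000   # ~3 trillion to merely stop the increase
countries     = 195                 # roughly every country in the world
canada_pledge = 2_000_000_000       # 2 billion trees per country

pledged   = countries * canada_pledge
shortfall = trees_needed - pledged
print(f"Pledged: {pledged / 1e9:.0f} billion trees")     # -> 390 billion
print(f"Short by: {shortfall / 1e9:.0f} billion trees")  # -> 2610 billion
```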

6

SeneInSPAAACE t1_j7anrud wrote

This is definitely one of them. Something like LLM AI is ALREADY driving change, but AR is even more niche than VR, and VR is still tiny.

Just the fact that you can have virtual monitors of any size in AR/VR is pretty amazing, once the headsets are comfortable enough.

9

SeneInSPAAACE t1_j6w4z4f wrote

If you have problems with OLED displays, it may be that they're set too bright for you. They get crazy good contrast, and that may be too much.

...Or you're just getting old.

In any case, I recommend using "filmmaker mode" or something similar when watching movies or shows. It's a bit less bright and a bit less blue, and it should be a closer match to the screens used to master whatever you're watching.

2

SeneInSPAAACE t1_j68qxp3 wrote

"News" from 2004, and this experiment isn't quite as conclusive as one might think - although I do believe it made McD revamp their menu a bit.

There was another guy who also ate at McDonald's exclusively for a year or so, and he was fine. He skipped sugary drinks and french fries. Spurlock always took the default menu and, whenever it was offered, ate the "super-size" version of the meal with more drink and fries.

35

SeneInSPAAACE t1_j5noc9x wrote

*sigh*

  1. They mean superhero movies
  2. Superhero isn't a genre, it's a style. You can do superhero horror movies, superhero spy thrillers, superhero romance. Most superhero movies are action/adventure, but that's the genre right there.
  3. People have no clue how many movies are comic book movies. Juno, From Hell, 300, The Crow, Extraction, Ghost World, A History of Violence, Edge of Tomorrow....
0

SeneInSPAAACE t1_j0knldw wrote

Hhh.

At this point in time, AI just adds another layer of abstraction to work.
It's like... say, you're making cookies. You could use a knife or a set of carving tools, and you'd have full and total control over the shape of each individual cookie. Or you could use a cookie cutter. Or you could have a machine where you just turn it on, in goes the dough and out come the cookies.
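The analogy in toy code, if that helps (all functions hypothetical, obviously):

```python
# Toy version of the analogy: each level trades fine-grained
# control for automation.

def carve_cookie(dough, outline_points):
    """Carving tools: you control every point of the shape."""
    return {"dough": dough, "shape": outline_points}

def cut_cookie(dough, cutter="star"):
    """Cookie cutter: you only pick from preset shapes."""
    return {"dough": dough, "shape": cutter}

def cookie_machine(dough, batch_size=24):
    """The machine: dough in, cookies out; defaults decide the rest."""
    return [cut_cookie(dough) for _ in range(batch_size)]
```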

3

SeneInSPAAACE t1_izroblm wrote

>The Turing test it self is not definitive either.

Very true. Without the well-poisoning, wouldn't LaMDA have passed it completely already? And if I've understood correctly, it's a bit of an idiot outside of putting words in a pleasing order.

>Currently it looks like GPT it self is going to try to cheat it’s way through the Turing test by using a language model which is naturally hard for humans to identify as a machine.

"Cheat" is relative. Can a HUMAN pass a turing test, especially if we restrict the format in which they are allowed to respond?
If it can pass every test a human can, and we still call it anything but intelligent, either we gotta admit our dishonesty, or question whether humans are intelligent.

> it will reach a point very soon at which it will appear intelligent.

Just like everyone else, then. Well, better than some of us.

1

SeneInSPAAACE t1_izqad1w wrote

>In case of LaMDA the human knew from the beginning that he is talking to a machine.

So the well was poisoned from the beginning? Isn't that cheating? On the human side?

BTW, allegedly GPT-4 will have 100 TRILLION parameters. Now, again, we can't exactly tell what that means, but human brains have something like 150 trillion SYNAPSES, and that includes all the ones for our bodily functions and motor control, so.... Yeah, it's going to get interesting.

1