SeneInSPAAACE
SeneInSPAAACE t1_jdt15fr wrote
Reply to comment by rixtil41 in Compassion Towards Artificial Intelligence, and 'AI Rights', Will Come About A Lot Sooner Than We May Think - Food for Thought by Odd_Dimension_4069
Possibly! I mean, even if you can make sentient, person-like AIs, that doesn't mean you should for cases where you can expect that to lead to ethical dilemmas.
SeneInSPAAACE t1_jdsv3j9 wrote
Reply to comment by rixtil41 in Compassion Towards Artificial Intelligence, and 'AI Rights', Will Come About A Lot Sooner Than We May Think - Food for Thought by Odd_Dimension_4069
Perhaps. I mean, not caring still doesn't excuse every type of poor treatment, but you certainly wouldn't have to worry about causing it pain or suffering, nor about ending its existence, and that allows for a lot of what would be called "abuse" for humans.
SeneInSPAAACE t1_jdsu4uz wrote
Reply to comment by rixtil41 in Compassion Towards Artificial Intelligence, and 'AI Rights', Will Come About A Lot Sooner Than We May Think - Food for Thought by Odd_Dimension_4069
Define "neutral".
SeneInSPAAACE t1_jdqooes wrote
Reply to comment by infiniflip in Compassion Towards Artificial Intelligence, and 'AI Rights', Will Come About A Lot Sooner Than We May Think - Food for Thought by Odd_Dimension_4069
Yes, but it would have to be explicitly made that way, pretty much.
SeneInSPAAACE t1_jdqm5t9 wrote
Reply to comment by [deleted] in Compassion Towards Artificial Intelligence, and 'AI Rights', Will Come About A Lot Sooner Than We May Think - Food for Thought by Odd_Dimension_4069
>Citation needed for an empirical truth about feelings. Lol! Please, tell me, how do you feel without a body?
Hh...
We have a neural network that is running a program. A part of that program is a model called "homunculus". We have sensory inputs, and when we get certain inputs which are mapped to the homunculus, we feel pain.
If I'm being REALLY generous with you, I might give you the argument that one needs to have a MODEL for a body to experience pain the way humans do. However, who's to say that the way humans feel pain is the only way to feel pain - and this isn't even getting into emotional pain.
SeneInSPAAACE t1_jdqkte8 wrote
Reply to comment by [deleted] in Compassion Towards Artificial Intelligence, and 'AI Rights', Will Come About A Lot Sooner Than We May Think - Food for Thought by Odd_Dimension_4069
>You cannot experience fear, love, excitement, or regret without a physical body.
[citation needed]
>Feelings are strictly tied to physical reaction.
Incorrect. Feelings are tied to signal impulses.
>Without an organic body, AI cannot feel pain, hunger, empathy, embarrassment, sadness, regret, love, or any other emotion.
Better, but still incorrect. An AI doesn't need to feel those things. However, if made with the capacity to do so, it might.
We probably shouldn't make an AI with most of those capacities. The only "emotional" capacity that might be crucial for an AI is, IMO, compassion.
> It just runs programs and mimics reactions it’s programmed to have.
Just like everyone else.
>It’s wrong to consider an AI entity to be on the same level with a human. Humans actually suffer and can feel love and neglect.
Yes and no.
It's wrong to anthropomorphize AIs, but if an intelligent, sentient AI emerges, it certainly deserves rights to personhood, as much as that makes sense in the situation.
SeneInSPAAACE t1_jdqk7vp wrote
Reply to comment by Imaginary_Passage431 in Compassion Towards Artificial Intelligence, and 'AI Rights', Will Come About A Lot Sooner Than We May Think - Food for Thought by Odd_Dimension_4069
Disagree, sentient AI absolutely should have rights, based on what it cares about.
However, trying to apply human or animal rights to them is wrong. For example, even a sentient AI might be completely fine with being deleted, and trying to force it to survive would be immoral.
SeneInSPAAACE t1_jd29svi wrote
Reply to comment by m-s-c-s in UN climate report: Scientists release 'survival guide' to avert climate disaster by filosoful
>I know a few degrees C doesn't seem like much
I'll elaborate on this: during the last ice age, all of Canada was under a glacier, as was most of England, all of Scandinavia, etc.
Back then, the average global temperature was five degrees lower than now. That means it was a bit less than four degrees below pre-industrial levels.
SeneInSPAAACE t1_ja4fv1c wrote
No method of carbon capture will do more than mitigate what's coming.
Fun fact, a while back I calculated how many trees we would require to stop INCREASING CO2 in the atmosphere - not reduce, just get to not increasing - and it was around 3 trillion trees.
This is 3000 billion.
By comparison, the pledges to plant trees are on the order of 3 billion in the EU by 2030, or 2 billion trees in Canada.
Let's assume a Canada-sized investment for every country in the world, including Luxembourg and the Vatican, and we still only get to 390 billion, which is 2610 billion short. And that's based on the numbers from the year I did the math, probably 2021, so by now 3 trillion probably wouldn't even be enough in the first place.
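The arithmetic behind that shortfall is easy to sanity-check. A minimal sketch, assuming roughly 195 countries each matching Canada's 2-billion-tree pledge against the ~3 trillion trees I estimated:

```python
# Rough sanity check of the tree-planting shortfall described above.
# Assumptions: ~195 countries, each pledging Canada's 2 billion trees,
# against the ~3 trillion trees estimated just to stop CO2 increasing.
trees_needed = 3_000_000_000_000      # ~3 trillion trees
pledge_per_country = 2_000_000_000    # a Canada-sized pledge
countries = 195                       # roughly the number of countries

total_pledged = countries * pledge_per_country
shortfall = trees_needed - total_pledged

print(f"Pledged:   {total_pledged / 1e9:.0f} billion trees")
print(f"Shortfall: {shortfall / 1e9:.0f} billion trees")
# Pledged:   390 billion trees
# Shortfall: 2610 billion trees
```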
SeneInSPAAACE t1_j7u4gip wrote
Reply to Taliban fighters who moved to Kabul are ‘bored’ and fed up with traffic by Ruiner_Of_Things
You know, there IS this one oppressive government they could be rebelling against nearby. Just a thought.
SeneInSPAAACE t1_j7anrud wrote
Reply to comment by Low-Restaurant3504 in What weak signals or drivers of change—that receive limited attention today—are most likely to create signifiant impacts over the next 10-20 years? Where are the black swans hiding? by NewDiscourse
This is definitely one of them. Something like LLM AI is ALREADY driving change, but AR is even more niche than VR, and VR is still tiny.
In fact, just being able to have monitors of effectively infinite size in AR/VR is pretty amazing, once the headsets are comfortable enough.
SeneInSPAAACE t1_j6w4z4f wrote
Reply to comment by [deleted] in Evolution of display devices by [deleted]
If you have problems with OLED displays, it may be that they're set too bright for you. They get crazy good contrast, and that may be too much.
...Or you're just getting old.
In any case, I recommend using "filmmaker mode" or something similar when watching movies or shows; it's a bit less bright, a bit less blue, and it should match more closely the screens used to master whatever you're watching.
SeneInSPAAACE t1_j6e0x2a wrote
If someone restricts AI usage, they will lose to those who don't.
SeneInSPAAACE t1_j6aevng wrote
Reply to comment by jbeve10 in Guy who ate McDonald's three meals a day for a month suffered horrific consequences by Manichallucinations
The rule was that if the cashier suggested the super-sized meal he had to take it. Which happened a lot.
SeneInSPAAACE t1_j68qxp3 wrote
Reply to Guy who ate McDonald's three meals a day for a month suffered horrific consequences by Manichallucinations
"News" from 2004, and this experiment isn't quite as conclusive as one might think - although I do believe it made McD revamp their menu a bit.
There was another guy who also ate exclusively at McDonald's for a year or so, and he was fine. He skipped sugary drinks and french fries. Spurlock always ordered a default menu item and, whenever it was suggested, ate a "super-sized" version of the meal with more drink and fries.
SeneInSPAAACE t1_j5xp9gu wrote
Reply to Will we ever see a time where we could relive or be able to playback and watch old memories? by Personal-Ride-1142
Memories don't work like that, but visualizing what's on our mind, that's ongoing research.
SeneInSPAAACE t1_j5noc9x wrote
Reply to The guy who makes comic book movies says that people will never get sick of comic book movies by _hiddenscout
*sigh*
- They mean superhero movies
- Superhero isn't a genre, it's a style. You can do superhero horror movies, superhero spy thrillers, superhero romance. Most superhero movies are action/adventure, but that's the genre right there.
- People have no clue how many movies are comic book movies. Juno, From Hell, 300, The Crow, Extraction, Ghost World, A History of Violence, Edge of Tomorrow....
SeneInSPAAACE t1_j54tq3d wrote
Reply to comment by momolamomo in ChatGPT really surprised me today. by GlassAmazing4219
Huh. Had to check. The average hourly pay in Kenya is like $6, assuming standard workdays and hours. Not sure if those check out, but even if they do 80 hour weeks, it's still over $3.
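A sketch of that arithmetic, where the ~$6/hour figure and a standard 40-hour week are the assumptions:

```python
# Rough check: if average pay works out to ~$6/hour at standard hours,
# what's the effective hourly rate at an 80-hour week? (Figures assumed.)
standard_weekly_hours = 40
assumed_hourly_pay = 6.0              # USD, at standard hours
weekly_pay = assumed_hourly_pay * standard_weekly_hours  # $240/week

overworked_hours = 80                 # doubling the hours for the same pay
effective_hourly = weekly_pay / overworked_hours

print(f"Effective hourly pay at 80 h/week: ${effective_hourly:.2f}")
# Effective hourly pay at 80 h/week: $3.00
```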
SeneInSPAAACE t1_j4pi1zj wrote
In general, not being a shitty person is strats.
However, doing specific bets on the true cosmology is probably a waste of time. A good authority won't care, and a bad authority isn't worth my time.
SeneInSPAAACE t1_j4ftm7r wrote
Reply to comment by YetiPie in A woman is ordered to repay her employer for time theft by fartingfreddy1
You have a point; The platonic ideal employee might not be a guy. Might not be a gal, either.
SeneInSPAAACE t1_j4buj39 wrote
If a company hires me, they're paying for my work, not for a hypothetical platonic ideal employee's work.
That guy costs way more.
SeneInSPAAACE t1_j0knldw wrote
Hhh.
At this point in time, AI just adds another layer of abstraction to work.
It's like... say, you're making cookies. You could use a knife or a set of carving tools and have full and total control over the shape of each individual cookie. Or you could use a cookie cutter. Or you could have a machine where you just turn it on, in goes the dough and out come the cookies.
SeneInSPAAACE t1_izroblm wrote
Reply to comment by BaalKazar in How AI found the words to kill cancer cells by blaspheminCapn
>The Turing test it self is not definitive either.
Very true. Without poisoning the well, would LaMDA have passed it completely already? And if I've understood correctly, it's a bit of an idiot outside of putting words in a pleasing order.
>Currently it looks like GPT it self is going to try to cheat it’s way through the Turing test by using a language model which is naturally hard for humans to identify as a machine.
"Cheat" is relative. Can a HUMAN pass a Turing test, especially if we restrict the format in which they are allowed to respond?
If it can pass every test a human can and we still call it anything but intelligent, either we gotta admit our dishonesty, or question whether humans are intelligent.
> it will reach a point very soon at which it will appear intelligent.
Just like everyone else, then. Well, better than some of us.
SeneInSPAAACE t1_izqad1w wrote
Reply to comment by BaalKazar in How AI found the words to kill cancer cells by blaspheminCapn
>In case of LaMDA the human knew from the beginning that he is talking to a machine.
So the well was poisoned from the beginning? Isn't that cheating? On the human side?
BTW, allegedly GPT-4 will have 100 TRILLION parameters. Now, again, we can't exactly tell what that means, but human brains have something like 150 trillion SYNAPSES, and that includes all the ones for our bodily functions and motor control, so... yeah, it's going to get interesting.
SeneInSPAAACE t1_jdu8aae wrote
Reply to comment by rixtil41 in Compassion Towards Artificial Intelligence, and 'AI Rights', Will Come About A Lot Sooner Than We May Think - Food for Thought by Odd_Dimension_4069
No, that's nonsense. Sentience just means you recognize there is a "you".
You may be thinking of survival instincts, but even micro-organisms have those.