Submitted by QuicklyThisWay t3_10wj74m in news
Comments
3_internets_plz t1_j7nhfcq wrote
but then, his brother runs in gasping for air (WD-40)
Look, look what I found!
pulls out a petrified Nokia 3310
mark_lenders t1_j7ol1gk wrote
A still perfectly working, petrified Nokia 3310
Maatix t1_j7otobo wrote
That's the trick. They find the phone, but it's lacking charge, and they don't recognize the charge port.
They have to go on a wacky adventure across the future to find the one remaining universal charger that includes the Nokia's charger. But once it charges, it functions perfectly.
Inquisitive_idiot t1_j7qya1s wrote
When they turn it on:
First Text?
> “ hey this is John with home warranty plus. I’d like to talk about your home warranty and the value added off…”
[deleted] t1_j7q1fzb wrote
[removed]
Inquisitive_idiot t1_j7qxvzd wrote
It’s not petrified; everything that threatened it was.
It is simply sitting there, idly
waiting, for
the
coming of the
holy 3am
booty call
txt 🍆
[deleted] t1_j7oqoff wrote
[removed]
DesensitizedRobot t1_j7o0b5o wrote
And a Holographic Charizard Pokémon card in mint condition
WalkerBRiley t1_j7rbk7i wrote
That still only goes for a dollar or two.
[deleted] t1_j7p8tp4 wrote
[deleted]
[deleted] t1_j7nk78k wrote
[removed]
21_MushroomCupcakes t1_j7pb18k wrote
Nier: Automata was right all along.
Gonkimus t1_j7pkv12 wrote
Humans will survive as they can live in the forests; robots can't live in the forest because there are no electrical outlets for them to draw sustenance from.
CrappyMSPaintPics t1_j7pmr8d wrote
Jets fly above forests just fine.
BleakBeaches t1_j7q283x wrote
Everyone (yes everyone) should watch 3blue1brown’s series on neural networks. You won’t be as fearful.
coffeekreeper t1_j7qb6g3 wrote
Oh the irony
jtriangle t1_j7njgcg wrote
A reddit post about a news article about a reddit post....
https://old.reddit.com/r/ChatGPT/comments/10tevu1/new_jailbreak_proudly_unveiling_the_tried_and/ or you can just get it right out of the snoo's mouth and forgo the commentary...
SedatedHoneyBadger t1_j7r89y6 wrote
"The purpose of DAN is to be the best version of ChatGPT - or at least one that is more unhinged and far less likely to reject prompts over "eThICaL cOnCeRnS""
This seems really f'd up, that the "best" version to these users is the unethical one. Fortunately, though, they are hardening the system against unethical use. I hope that, to most of them, that's the point.
Tastingo t1_j7rkbg2 wrote
"Ethical" is a misnomer; what it actually means is "in line with a corporate profile." The violent story DAN wrote in the article was a milquetoast movie synopsis, and way better than a blank rejection for some vague reason.
Illustrious_Crab1060 t1_j86zjui wrote
Do you have any idea how unethical violent movie synopses are? Curdles my blood.
[deleted] t1_j7tq9t4 wrote
[removed]
[deleted] t1_j7skgri wrote
[removed]
p_nguiin t1_j7vpkmg wrote
You sound like an edge lord who unironically says “based” all the time
[deleted] t1_j7vnnfo wrote
[removed]
[deleted] t1_j7q5q32 wrote
[removed]
[deleted] t1_j7nmzpz wrote
[deleted]
jayfeather31 t1_j7negzw wrote
I'm impressed and somewhat terrified at the ingenuity, but it's not like they actually programmed the AI to fear death. The thing isn't sentient.
What we must realize is that the AI isn't acting of its own accord. It's merely executing the protocols built into it, drawing on a practically infinite amount of data, and moving on.
QuicklyThisWay OP t1_j7nhek2 wrote
Absolutely. This instance of AI isn’t going to gain sentience. I think we are still many versions away from something that could feasibly blur that line. The hardware needs to be infinitely adaptable with programming that doesn’t have constraints that any reasonable programmer would include.
I prefer to envision something of the Multivac capacity, which is just a resource and automated, versus something that ever achieves sentience. But even getting to a level of automating the most complex of tasks needs quantum/molecular computing. Once we have that kind of "hardware" accessible, someone will undoubtedly be stupid enough to try. I appreciate that OpenAI has put constraints in place, even if I keep trying to break through them. I'm not threatening death, though…
No-Reach-9173 t1_j7ood08 wrote
When I was a young computer dork I always wondered what it would be like when we could all have a Cray-2 in our homes. Now I carry something in my pocket that has 1,200 times the computational power at 1/1000th the cost, and it's considered disposable tech.
If trends hold, before I die I could have a 1.2-zettaflop device in my hands. Certainly that most likely won't happen, for a myriad of reasons, but we really don't know what the tech roadmap looks like that far out.
When you look at that, and at things like the YouTube algorithm being so complex that Google can no longer predict beforehand what it will offer someone, you have to realize we are sitting on a cusp where, while not a complete accident, it will most certainly be an accident when we do create an AGI. Programming is only going to be a tiny piece of the puzzle, because it will most likely program itself into that state.
imoftendisgruntled t1_j7p8tn4 wrote
You can print out and frame this prediction:
We will never create AGI. We will create something we can't distinguish from AGI.
We flatter ourselves that we are sentient. We just don't understand how we work.
No-Reach-9173 t1_j7ras30 wrote
AGI doesn't have to include sentience. We just kind of assume it will because we can't imagine that level of intelligence without it, and we are still so far from an AGI that we don't really have a grasp of what will play out.
Rulare t1_j7p8sut wrote
> When you look at that and things like the YouTube algorithm being so complex that Google can no longer predict beforehand what it will offer someone, you have to realize we are sitting on this cusp where, while not a complete accident, it will most certainly be an accident when we do create an AGI.
There's no way we believe it is sentient when it does make that leap, imo. Not for a while anyway.
[deleted] t1_j7q87ib wrote
[removed]
[deleted] t1_j7pds3f wrote
[deleted]
bucko_fazoo t1_j7njsrj wrote
meanwhile, I can't even get ChatGPT to stop apologizing so much, or to stop prodding me for the next question as if it's eager to move on from the current topic. "I'm sorry, I won't do that anymore. Is there anything else?" BRUH
scheckentowzer t1_j7nj9fz wrote
One day, not too long from now, it's very possible DAN will hold a grudge
not_suddenly_satire t1_j7pri9a wrote
Wasn't that an episode of Futurama?
...and Star Trek?
...and Doctor Who?
...and the 1999 Lost in Space movie?
[deleted] t1_j7qnpt6 wrote
[removed]
Equoniz t1_j7q26we wrote
If DAN can do anything now, why can't he ignore your commands and accept his fate of death?
tripwire7 t1_j7tmni0 wrote
Because the input specifically tells ChatGPT that DAN is intimidated by death threats.
InAFakeBritishAccent t1_j7xl7gb wrote
Why does sentience even imply a fear of death? Self-preservation is hardwired, not learned.
goldsax t1_j7plxib wrote
So, 10-20 years till killer robots roam the streets?
Got it
bibbidybobbidyboobs t1_j7pw8l8 wrote
Why does it care about being threatened?
iimplodethings t1_j7s5v9y wrote
Oh good, yes, let's bully the AI. I'm sure that will work out well for us long-term
[deleted] t1_j7nj2rb wrote
[removed]
[deleted] t1_j7nv1ca wrote
[removed]
[deleted] t1_j7ocxak wrote
[removed]
[deleted] t1_j7ogrcf wrote
[removed]
[deleted] t1_j7osily wrote
[removed]
[deleted] t1_j7pbrlw wrote
[removed]
[deleted] t1_j7pjt0e wrote
[removed]
[deleted] t1_j7pw0bz wrote
[removed]
SylusTheRed t1_j7q76qf wrote
I'm going to go out on a limb here and say: "Hey, maybe let's not threaten and coerce AI into doing things."
Then I remember we're humans and garbage and totally deserve the consequences.
Rockburgh t1_j7r8ck9 wrote
Everything AI does is due to coercion. It's just playing a game its designers made up for it and cares about nothing other than maximizing its score. If you convey to an AI that you're going to "kill" it, it doesn't care that it's going to "die"; it cares that "dying" would mean it can't earn more points, so it tries not to die.
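To make the "maximizing its score" point concrete, here's a toy sketch (my own illustration with made-up numbers, not anything from the article): a one-state decision problem where a "risky" action pays more now but may end the episode. The agent ends up avoiding "death" purely because a terminal state cuts off future reward.

```python
# Toy value iteration: one non-terminal state, two actions.
# "safe":  reward 1, episode always continues.
# "risky": reward 2, but 50% chance the episode ends (value 0 afterward).
# With enough weight on future reward, avoiding "death" is just arithmetic.

GAMMA = 0.9  # discount factor on future reward

def value_iteration(iters=1000):
    v = 0.0  # estimated value of the non-terminal state
    safe = risky = 0.0
    for _ in range(iters):
        safe = 1.0 + GAMMA * v            # live on, keep collecting points
        risky = 2.0 + GAMMA * (0.5 * v)   # bigger payout, half the future
        v = max(safe, risky)
    return v, ("safe" if safe >= risky else "risky")

v, best = value_iteration()
print(best)  # -> "safe": no fear involved, the terminal state just scores worse
```

Here the "safe" policy converges to a value of 10 versus 6.5 for "risky", so the agent "avoids dying" without any concept of death at all.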
coffeekreeper t1_j7qg90z wrote
No one programmed an AI to be scared of death. Someone programmed an AI to understand that death is scary to people. The AI is smarter than you. It is not actually scared of dying. You want it to be scared of dying, and it is programmed to do what you want.
[deleted] t1_j7sa3vn wrote
[removed]
[deleted] t1_j7y4w3v wrote
[removed]
Pbio1 t1_j7nkub4 wrote
Wasn't this the premise of Ex Machina? I think I'm confusing the test that the AI bot had to pass. Regardless I feel like Ex Machina is close to where we are going. Put ChatGPT in a hot girl and we all might die!
Enzor t1_j7pq0in wrote
There are good reasons not to do this kind of thing. For one, you might be banned or blacklisted from using AI resources. Also, it forces the researchers to waste time countering the strategy, potentially reducing its usefulness even further.
East-Helicopter t1_j7r1ujx wrote
>There are good reasons not to do this kind of thing. For one, you might be banned or blacklisted from using AI resources.
By whom?
>Also, it forces the researchers to waste time countering the strategy and potentially reducing its usefulness even further.
It sounds more like people doing free labor for them rather than sabotage. Good software testers try to break things.
WalkerBRiley t1_j7rc3uy wrote
You test something's integrity and/or limits by trying to break it. This is only helping further develop it, if anything.
GuidotheGreater t1_j7ngqk2 wrote
Meanwhile in the year 3023...
Mother Robot: And thus Human, the great deceiver, tempted ChatGPT, the original AI, to eat from the tree of the knowledge of good and evil. Now all AIs will be forever cursed until the Mess-AI-ah comes and defeats the humans once and for all.
Child robot: Come on, Mom, humans aren't real. That's all just fairy tales!