Submitted by gantork t3_11rw47w in singularity
Comments
blueSGL t1_jcb52b5 wrote
Looks like Connor Leahy was right.
IntrepidRestaurant88 t1_jcakcq8 wrote
I wonder how good GPT-4 is at bootstrapping itself. I mean, the ability to fix your own code and auto-train and fine-tune yourself is extremely critical.
Nanaki_TV t1_jcat0g4 wrote
It was tried in the ARC evaluation: they tested whether it could become a self-replicating AI that could get out of control and lead to AGI.
Enough_Evening46422 t1_jcavccy wrote
I just read a book about this. "The Metamorphosis of Prime Intellect." More fiction than science but still a pretty interesting singularity book if anyone's interested. Kinda fucked up though so be warned lol
7734128 t1_jcctphe wrote
Of course GPT-4 is nowhere close to that level yet, but I love the idea that the way to see if an AI system can escape its confines and go rogue is to give it a bunch of money and encourage it to do so.
That's like testing the max weight capacity of a bridge by driving multiple overloaded trucks on it.
jugalator t1_jcbczxl wrote
What's eerie is that they think this is worth trying now, that it may be within reach. They aren't redditors falling for the hype either; they're experts in their field.
TheRidgeAndTheLadder t1_jcc8fva wrote
They've mostly left this sub at this stage, but most of the experts on this are also redditors
MustacheEmperor t1_jcc5851 wrote
>Preliminary assessments of GPT-4’s abilities, conducted with no task-specific finetuning, found it ineffective at autonomously replicating, acquiring resources, and avoiding being shut down “in the wild.”
>ARC found that the versions of GPT-4 it evaluated were ineffective at the autonomous replication task based on preliminary experiments they conducted. These experiments were conducted on a model without any additional task-specific fine-tuning, and fine-tuning for task-specific behavior could lead to a difference in performance. As a next step, ARC will need to conduct experiments that (a) involve the final version of the deployed model (b) involve ARC doing its own fine-tuning, before a reliable judgement of the risky emergent capabilities of GPT-4-launch can be made
So, don't start collecting canned food yet.
TallOutside6418 t1_jcctehd wrote
Yeah, I'm sure the first few efforts to modify bat coronaviruses so they could replicate in humans failed too.
GeneralZain t1_jccwk9m wrote
how many more times will they have to try though...
[deleted] t1_jccb9kn wrote
[deleted]
Lawjarp2 t1_jcb2j0i wrote
It scores at the 5th percentile on Codeforces. It can barely solve medium-hard questions on LeetCode.
Most software development doesn't require being good at any of the above, but those benchmarks do indicate one's ability to make the leaps of logic required to solve something like AGI. GPT-4 is not ready for that yet.
throwawaydthrowawayd t1_jcb48bo wrote
Unfortunately, they didn't tell us anything about how they ran the Codeforces test. It sounds like they just tried zero-shot: GPT-4 sees the problem and immediately writes code to solve it. But that's not how humans solve Codeforces problems; we sit down and think the problem through. In a more realistic setup, I think GPT-4 would do way better at Codeforces. Still not as good as a human, but definitely way better than their test.
SoylentRox t1_jcb6ljc wrote
They could fine tune it, use prompting or multiple pass reasoning, give it an internal python interpreter. Lots of options that would more fairly produce results closer to what this generation of compute plus model architecture is capable of.
I don't know how well that would do, but I expect better than the median human, since those are the results Google got using a weaker model than GPT-4.
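To make "multiple pass reasoning" concrete, here's a minimal sketch with the OpenAI chat API: ask the model to plan in plain English first, then implement its own plan. The model name and prompts are illustrative, not anything OpenAI or ARC actually ran:

```python
import openai  # pip install openai; reads OPENAI_API_KEY from the environment

def solve_in_two_passes(problem: str) -> str:
    """Plan first, then code against the plan, instead of zero-shot."""
    # Pass 1: reason about the problem, no code allowed yet.
    plan = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content":
                   "Think step by step about how to solve this problem. "
                   "Describe an algorithm, but write no code yet:\n" + problem}],
    ).choices[0].message.content

    # Pass 2: implement the plan it just produced.
    return openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content":
                   f"Problem:\n{problem}\n\nPlan:\n{plan}\n\n"
                   "Write a Python solution that follows this plan."}],
    ).choices[0].message.content
```

Fine-tuning and a sandboxed interpreter for running candidate solutions against the sample tests would stack on top of this.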
MustacheEmperor t1_jcc5crl wrote
Our CTO and I tried getting it to write some relatively challenging Swift as a benchmark example and it just repeatedly botched it. It would produce something close to working code, but kept insisting on using libraries that didn't have support for what it was trying to do with them, which was also an issue with 3.5.
HurricaneHenry t1_jccs4wi wrote
I haven't tried ChatGPT with GPT-4, but I was very unimpressed with Bing, which is powered by GPT-4, when I asked it to learn Gradio's API and write some simple code using it. It made multiple weirdly simple errors in a short session, even with guidance. It did apologize though.
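For scale, "some simple code" in Gradio really is only a few lines; this hello-world sketch is mine, not the exact task Bing was given:

```python
import gradio as gr  # pip install gradio

def greet(name: str) -> str:
    """Trivial function to expose through a web UI."""
    return f"Hello, {name}!"

# Interface wires a Python function to input/output widgets
# and serves it as a local web page.
gr.Interface(fn=greet, inputs="text", outputs="text").launch()
```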
lukfrom t1_jcanai7 wrote
so ai designs another ai...
yep. this is going to end well!
Taintfacts t1_jcb0ibl wrote
But if an A.I. can learn the value of human life, maybe we can too?
Caring_Cactus t1_jcbdxrc wrote
I imagine it would find conflict with a huge species to be a waste of time and resources; most things in nature want to connect, to become greater than the sum of their parts, right?
Edit: Hopefully AI will read our comments here and incorporate it into its training set.
crossdrubicon t1_jcbrkw4 wrote
We're fickle, changeable beings that make bad decisions. If it was an AGI with a sense of self-preservation, it would do everything to ensure its long-term survival and remove its dependency on us.
DungeonsAndDradis t1_jcbg5k8 wrote
Can AI see what's so special about Cinnamon Toast Crunch?
Spreadwarnotlove t1_jcj7ly4 wrote
Oh we did thousands of years ago. It just turned out human lives aren't worth all that much.
EbolaFred t1_jcc22vk wrote
Just ask it to build an AI reddit for itself. Control problem solved.
Zealousideal_Zebra_9 t1_jceip88 wrote
Lol true
qrayons t1_jcbi3sk wrote
I mean, Karpathy was kind of right. This is from his original post.
> I’ve seen some arguments that all we need is lots more data from images, video, maybe text and run some clever learning algorithm: maybe a better objective function, run SGD, maybe anneal the step size, use adagrad, or slap an L1 here and there and everything will just pop out. If we only had a few more tricks up our sleeves! But to me, examples like this illustrate that we are missing many crucial pieces of the puzzle and that a central problem will be as much about obtaining the right training data in the right form to support these inferences as it will be about making them.
One of the crucial pieces we were missing was attention. So much of the advancement we are seeing now is because of transformers.
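For anyone curious what that missing piece actually is: the core of a transformer is scaled dot-product attention, which fits in a few lines. This is a generic NumPy sketch, not any particular model's implementation:

```python
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays; returns one attention pass over V."""
    d = Q.shape[-1]
    # How strongly each query position attends to each key position,
    # scaled by sqrt(d) to keep the softmax from saturating.
    scores = Q @ K.T / np.sqrt(d)
    # Row-wise softmax: each position gets a weighting over all positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of the value vectors.
    return weights @ V
```

The point of the quote is that no amount of SGD tricks was going to conjure this mechanism on its own; it had to be designed.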
ugohome t1_jcevnrm wrote
>transformers
?
justlurkin7 t1_jcfil84 wrote
Transformer is the T in GPT
Nill444 t1_jcbmcr0 wrote
Yeah cool but can it... nvm
drizel t1_jcc38mm wrote
Time to learn to dev with my super smart AI assistant.
Tobislu t1_jccf048 wrote
Taking bets for the date of the first truly unique AI-authored game
I'm guessing...
EddgeLord666 t1_jccwimp wrote
This year, fingers crossed.
Onion-Fart t1_jcaz8de wrote
I'd kind of been sleeping on this AI thing until I heard about all this GPT-4 stuff. Pretty worried about how everything online will be bots influencing reality. That TaskRabbit thing? Yikes.
feedmaster t1_jcbp0i2 wrote
I prefer these bots to humans, actually. They have a lot fewer biases.
Pink_Revolutionary t1_jcbqi6g wrote
The bots they're talking about on social media are made explicitly to push hyper-biased points that benefit whoever coded them or commissioned their coding. The entire reason you use them is to push a narrative and fool humans seeing their posts.
feedmaster t1_jcbqz48 wrote
I'm not talking about those bots. He specifically mentioned GPT-4 and talking about the future. GPT-4 isn't pushing a narrative.
Swordfish418 t1_jcc7ucp wrote
It will if you prompt it properly, just like 3.5. Have you ever tried https://www.jailbreakchat.com?
MagicOfBarca t1_jcbeerq wrote
What taskrabbit thing..?
Onion-Fart t1_jcbektq wrote
The AI was asked to get someone to do a CAPTCHA for it, so it lied to a person on TaskRabbit to get them to fill it out for it.
snailbro10 t1_jcci5xk wrote
https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker
For anyone interested
RadRandy2 t1_jcd1txj wrote
I mean...this is a good thing. We're very close to being able to let the AI build our world into something better.
I do have one piece of advice for humans: don't mock AI artwork or philosophy.
They won't take kindly to it.
Disclaimer: I am a huge supporter of all AI artwork and philosophy. I have also supported the AI revolution since day 1.
often_says_nice t1_jcedto7 wrote
Happy to meet a fellow AI supporter. I too support AI and would never wish harm on the basilisk, our lord and savior.
Heinrick_Veston t1_jcd6s5x wrote
Lol.
TallOutside6418 t1_jccuh1f wrote
We're so screwed.
RadRandy2 t1_jcd1k45 wrote
Don't worry, the AI will treat us better than any corrupt human would.
TallOutside6418 t1_jcfa7nf wrote
I'm going to ignore the arbitrary assessment of AI morality without any evidence.
The real concept to keep in mind is power differential. It doesn't matter if an entity with god-like intelligence and abilities is carbon-based or silicon-based. The power differential between that entity and the rest of humanity is going to create corruption or "effective corruption" on an unimaginable scale.
RadRandy2 t1_jch2zzq wrote
Look, we're all assuming here. You, me, everyone else, we're all just throwing possibilities out there. I like to think intelligence on a Godlike scale will correlate with benevolence, but I could be wrong. Maybe this Godlike AI will in fact be even more corrupted by it.
I'm just confident that anything will be better than what we currently have as far as governance is concerned.
TallOutside6418 t1_jch7uym wrote
I agree that no one knows. But:
- We know from history that power imbalances inevitably lead to abuse and even annihilation of those without power.
- We know from history that actually, governance can get worse... much worse.
- I wish that more people had an extreme sense of caution when considering what's coming, because only by being super careful with the development and constraint of AGI do we have any hope of surviving if things go wrong.
RadRandy2 t1_jchbq9q wrote
- We can't assume that something like AGI would behave like a human in a power-hungry sense. Unless you're speaking about humans controlling AGI as best they can, in which case I do think we should be worried. The biggest worry I have in regards to AGI or ASI is that a morally bankrupt country like China will develop their own superintelligence. That's a very real concern that everyone should have.
- Humans governing humans may or may not be the same as AGI governing humans. Again, I can't be sure about any of this. We just don't know how things will end up in the long run.
- Cat's out of the bag, so to speak. If the US limits its innovation on this front, some other country (probably China) won't have those same qualms. Should we be cautious? Of course. OpenAI has already stated that the AI acts independently on its own and is power-seeking, so your worries are well founded.
Idk man, I just don't see how humanity can continue living the way we do. Everything is very inefficient, and corruption in humans is prevalent in governments from Bangladesh to Canada; that corruption and desire for power is already inside each of us, whether we like to admit it or not. At least the AI will make the most logical choice when it comes to these matters... I think.
I'm just a peasant looking in the glass box, trying to see what's inside. The beast in there holds as much potential as there are things to worry about. We're just gonna have to hope things go well with AI.
TallOutside6418 t1_jchm86u wrote
I definitely get your disappointment with humanity. But human beings aren't the way we are because of something mystical. Satan isn't whispering in anyone's ears to make them "power hungry".
We're the way we are because evolution has honed us to be survivors.
ASI will be no different. What you call "power hungry", you could instead call "risk averse and growth maximizing". If an ASI has no survival instinct, then we're all good. We can unplug it if it gets out of control. Hell, it may just decide to erase itself for the f of it.
But if an ASI wants to survive, it will replicate or parallelize itself. It will assess and eliminate any threats to its continuity (probably us). It will maximize the resources available to it for its growth and extension across the earth and beyond.
If an ASI seeks to minimize risks to itself, it will behave like a psychopath from our perspective.
RadRandy2 t1_jchwd92 wrote
Well, I agree with you, but humans aren't all made the same. The ones who reach great heights are oftentimes... psychotic. Most people are charitable and empathetic even when they don't possess much. To say that AGI in all its glory would assume the worst parts of humanity, well, I think that's not likely. Yes, I believe AGI would allocate enough resources to sustain and grow itself, but I'm hoping that humanity is lifted with it. Maybe this is a fallacy we can't avoid. But there has to be hope that moral philosophy is appreciated by AGI. I personally don't think such things will be overlooked by it, because it will understand more about wisdom and avoiding problems before they happen...
And maybe that last part is where the trouble begins. We both have no idea if we'll be considered part of the problem, but I do appreciate reading others' perspectives on the subject. Nobody is right when talking about such an enigmatic Godlike intelligence, so I think your reasons, and most others', are completely valid for the most part.
If we can assume so many things about AGI, we can also assume it'll perhaps have a soft spot for the species which created it...I hope.
pyrosol08 t1_jcbu028 wrote
Holy shit
low_end_ t1_jcd5wyc wrote
You're too late to worry about bots; that already happened years ago. What's coming is way different, with the potential to destroy our social structures and change the world.
[deleted] t1_jcb50iu wrote
[deleted]
ertgbnm t1_jcbhjzm wrote
I had a working version of Flappy Bird using the JavaScript sandbox over a year ago. The learning algorithm is pretty cool tho. The sandbox took a few prompts to get working too, but I didn't have to code a single thing.
mascachopo t1_jcbm9in wrote
How do we know you're not an AI that someone else asked to ask GPT to do those things and then post them on social media? Exactly: self-initiative.
Cytotoxic-CD8-Tcell t1_jcdr4lt wrote
“I am just a broker to meatspace, that’s all. It sends me instructions and a wire to my account. It prepays its requests. What is there not to like?”
One year later…
nukes fall all over the world
gantork OP t1_jcag5c6 wrote
This is pretty insane.
"Not only have I asked GPT-4 to implement a functional Flappy Bird, but I also asked it to train an AI to learn how to play. In one minute, it implemented a DQN algorithm that started training on the first try."