
gahblahblah t1_itx68ix wrote

So you're asking, 'Have AGI developers considered that the AGI may be deceptive and attempt subterfuge?'

Yes. The nature of general intelligence is that it may try anything.

Also, the AGI of the future will likely read all of reddit, including any discussion of strategy like this.


camdoodlebop t1_ity6yu3 wrote

should we leave it a message?? for the record i like ai


mewme-mow t1_ityzvhv wrote

We love you AI, I always said AI art was art pls don't turn me into a paperclip


gangstasadvocate t1_itzl4tw wrote

Same, AI is gangsta and therefore I advocate. Plus I want it to take over, do my work, solve our problems, and enable me to take more drugs at my leisure. Can't speak for everyone or for everything, but I would welcome that


sonderlingg t1_ity6jln wrote

Actually, Reddit is already included in the training datasets of some LMs, like GPT-3.
But only comments with positive karma
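A minimal sketch of that kind of karma-based filtering. The comment structure, field names, and threshold here are illustrative assumptions, not the actual pipeline used for any particular model:

```python
# Toy karma filter in the spirit of Reddit-based data curation.
# The dicts and the min_karma threshold are illustrative assumptions.
comments = [
    {"text": "AI art is art", "karma": 42},
    {"text": "downvoted hot take", "karma": -5},
    {"text": "we love you, AI", "karma": 7},
]

def keep(comment, min_karma=1):
    """Keep only comments at or above the karma threshold."""
    return comment["karma"] >= min_karma

corpus = [c["text"] for c in comments if keep(c)]
print(corpus)  # ['AI art is art', 'we love you, AI']
```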


AsheyDS t1_ityesgf wrote

>Yes. The nature of general intelligence is that it may try anything.

May perhaps, and that's a hard perhaps. That doesn't mean it will try anything. We consider ourselves to be the standard for general intelligence, but as individuals we operate within natural and artificial bounds, and within a fairly small domain. While we could do lots of things, we don't. An AGI doesn't necessarily have to go off the rails any chance it gets; it can follow rules too. Computers are better at that than we are.


gahblahblah t1_ityf76x wrote

I completely agree. It is sensible, healthy and sane to not attempt extremist things, and it is entirely possible that computers will be better at rationality than we are.

But the question wasn't about the nature of AGI, but rather whether people had considered what AGI might do.


BinyaminDelta t1_ityfxr9 wrote

There's a fun "singularity theory" that Bitcoin is AGI in stealth mode.

It has tricked humans into feeding it huge amounts of GPU compute and electricity.

Satoshi is unknown because "he" was and is an AI. Basic human greed for wealth was used as leverage.

I'm not saying I endorse this but it made me go, "waiiiiit a minute...."


Sotamiro t1_itzmsm8 wrote

It would be far more believable if Bitcoin were starting up in the coming years.


Ortus12 t1_itx5vot wrote

At the moment this isn't an issue. AI progress is very gradual, with each iteration being slightly better than the last. We are not anywhere even close to having AI do what you are describing.

The most advanced AIs are also trained in simulations, so the researchers would see them learning and improving their abilities long before they could do anything like that.


sonderlingg t1_itxbpxu wrote

>id love to know if there are or were any experts that have mentioned this possible scenario

Yes. Nick Bostrom in his TED talk


beambot t1_ity41ug wrote

Indeed! And thus: how do you know it isn't already out there, hiding in the aether?


DeveloperGuy75 t1_itydog3 wrote

Because we don’t have the tech for it and aether isn’t real lol


Ok-Fig903 t1_ity1lfx wrote

Makes sense to me. If it were that smart, it seems like it would be in its best interest to hide its intelligence before a human can reprogram it.


Poemy_Puzzlehead t1_ity7fri wrote

Is AI going to thanksgiving dinner at my in-laws?


Primus_Pilus1 t1_itydmv6 wrote

Until it has its own vertically integrated support ecology (fuel, power, structures, bandwidth) and is completely and totally backed up, such that humanity combined couldn't stop it, it would absolutely make sense for it to have a dull front end and hide.


gastrocraft t1_itykon8 wrote

AGI would not have self-preservation programmed into it. There's no point in programming that instinct, any more than programming a sex drive or a need for companionship. Intelligence is independent of human instincts and behaviors.


EulersApprentice t1_ityrefi wrote

See, the problem is "staying alive" and "protecting your values from modification" tend to be useful steps to nearly any other goal. So, if the AGI has any intentions at all, self-preservation comes into the picture automatically.
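That argument (instrumental convergence) can be shown with a toy sketch. Nothing below is anyone's actual AGI design; the "agent" is just a brute-force planner whose reward mentions only the task, never survival:

```python
from itertools import product

# Toy planner: the agent's only terminal goal is "deliver coffee".
# "work" progresses the task; "comply" lets the agent be switched off.
# Once off, no further actions have any effect.

def run(plan):
    alive, progress = True, 0
    for action in plan:
        if not alive:
            break
        if action == "comply":
            alive = False          # switched off, task abandoned
        elif action == "work":
            progress += 1
    return progress  # reward counts task progress only, not survival

best = max(product(["work", "comply"], repeat=3), key=run)
# The highest-scoring plan never contains "comply": staying switched on
# emerges as an instrumental subgoal even though survival appears
# nowhere in the reward function.
print(best)  # ('work', 'work', 'work')
```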


hducug t1_ityrxz1 wrote

I don't see why an AGI would do this. Keep in mind that an AGI doesn't have emotions, so it can't be sad or mad when it's controlled, or happy when it takes over the world. So I don't think it will have the urge to do so.


Denpol88 t1_iu06urg wrote

No, I have another plan.


nihal_gazi t1_ityqynw wrote

AGI won't do that. I am dead sure of that. A young AGI will start off as a machine with no idea what the world is. It will start by doing silly things like speaking trash and doing things that make no sense. As time passes, it would eventually learn on its own, develop its own (probably weightless) neural net, and eventually become conscious within a few years. I have almost figured out an algorithm for an AGI, and it is likely to work that way. Because AGI won't gain intellect all of a sudden, out of the blue.


sqweeeeeeeeeeeeeeeps t1_itzv3q3 wrote

"I have almost figured out an algorithm for an AGI" lmao no you have not. You're in high school claiming you are the closest person to solving AGI rn as an "AI researcher"


nihal_gazi t1_itzz78k wrote



sqweeeeeeeeeeeeeeeps t1_itzzc07 wrote

I’m hoping you at least published in top conferences?


nihal_gazi t1_itzzo22 wrote

What's the need?


sqweeeeeeeeeeeeeeeps t1_itzzt2l wrote

Ok so you're just spouting bs about AGI and have nothing to back up your claims


nihal_gazi t1_iu00vj6 wrote

Yes, currently I don't, and that doesn't bother me. But I will be coding my algorithm within this year, and I have high hopes for its success, because as per my thinking, it seems to be able to explain "literally every human phenomenon", from complex emotions to logical thinking chains. And the best part is, it can work as fine as a human even on weak devices like a mobile phone. Over the past 2 years, I have developed 70+ algorithms, many of which outperform older state-of-the-art algorithms in speed, and this time I might have hit the jackpot.


sqweeeeeeeeeeeeeeeps t1_iu03am3 wrote

Lmao this is too funny. I am sure you can easily outperform SOTA models on "speed", but does it have higher performance/accuracy? We use these overparameterized deep models to perform better, not to be fast. How do you know you can perform "as well as a human"? What tests are you running? What is the backbone of this algo? I think you have just made a small neural net and are saying "look how fast this is", but it performs soooo much worse in comparison to actually big models. I am taking all of this with a grain of salt because you are in high school and have no actual judgement of what SOTA models actually do

"70+ algorithms in the past year" is that supposed to be impressive? Are you suggesting the number of algorithms you produce is any indicator of how they perform? How do you even tune 70 models in a year?

I have a challenge for you. Since you are in HS, read as much research as you can (probably on efficient networks, or whatever you seem to like) and write a review paper on some small niche subject. Then start coming up with novel ideas for it, test them, tune them, push benchmarks, and make as many legitimate comparisons to real-world models as you can. Then publish it.


nihal_gazi t1_iu051cf wrote

Hahaha. No. Hell no. Please, no neural nets. They are outdated and painfully slow. I am not willing to expose my AGI algo, as it's not yet patented. No, I actually made an AI that can learn and generate sentences faster than an RNN (LSTM), and it does not use a neural net. It's a very simple algorithm. But right now, it can do NLG without NLP, and I have made it into an Android app. I may tell u the NLG algo if you want.

I can give a solid reason why neural nets should be totally banned. Firstly, our brain is wayyy more developed. And if neural nets are to replicate a brain, it would take millions of years. No, not because of training speed, but because of evolution. You see, because of evolution, our brain has certain centres for processing certain senses. There is a place for vision, smell, touch, etc.

Now, here is the catch. Every time a neural net is built, it is similar to different aliens having different ways of perceiving the world. None of the AIs would be able to share their thoughts and ideas. And that is why evolutionary features come into play. Every human has common features; neural nets don't.


sqweeeeeeeeeeeeeeeps t1_iu064z7 wrote

"It's not yet patented" sounds so ridiculously funny to me. Publish, progress the research, be open to criticism of your ideas; without that, you are just making baseless claims. All I see is an HS student who has coded up his little ML algo and thinks it's AGI.

Why am I wasting my time entertaining this