Comments


izumi3682 OP t1_jcdrt9v wrote

Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to my linked statement, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if needs must; it often requires additional grammatical editing and added detail.


The opening of this article tells you everything you need to know.

>In 2018, Sundar Pichai, the chief executive of Google — and not one of the tech executives known for overstatement — said, “A.I. is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.”

>Try to live, for a few minutes, in the possibility that he’s right. There is no more profound human bias than the expectation that tomorrow will be like today. It is a powerful heuristic tool because it is almost always correct. Tomorrow probably will be like today. Next year probably will be like this year. But cast your gaze 10 or 20 years out. Typically, that has been possible in human history. I don’t think it is now.

>Artificial intelligence is a loose term, and I mean it loosely. I am describing not the soul of intelligence, but the texture of a world populated by ChatGPT-like programs that feel to us as though they were intelligent, and that shape or govern much of our lives. Such systems are, to a large extent, already here. But what’s coming will make them look like toys. What is hardest to appreciate in A.I. is the improvement curve.

>“The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”

I constantly reiterate: the "technological singularity" (TS) is going to occur as early as 2027 or as late as 2031. But you know what? Even my estimate could be as many as three years too late; the TS could occur in 2025. I just don't feel comfortable saying as early as 2025. That is the person of today's world in me, who thinks even 2027 is sort of pushing it. It's just too incredible, even for me. I say 2027 because I tend to rely on what I call the accelerating-change "fudge factor," the same reasoning that led Raymond Kurzweil to conclude in 2005 that the TS would occur in 2045. He now knows that prediction was wildly too conservative, and he acknowledges that the TS is probably going to occur around 2029.

I put it like this in a very interesting dialogue with someone with whom I have argued, for almost the last seven years, about what is coming and on what timeline. Now he is a believer.

https://www.reddit.com/r/Futurology/comments/113f9jm/from_bing_to_sydney_something_is_profoundly/j8ugejf/?context=3

https://www.reddit.com/r/Futurology/comments/11o6g71/microsoft_will_launch_chatgpt_4_with_ai_videos/jbr2k1c/?context=3

4

idranh t1_jcdvy3e wrote

In his seminal 1993 essay, "The Coming Technological Singularity," Vernor Vinge writes, "Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030." Vinge may have been right all along.

https://edoras.sdsu.edu/~vinge/misc/singularity.html

12

bogglingsnog t1_jcec9da wrote

I have a growing sensation that AI automation/optimization/outsourced intelligence is one of the strongest candidates for the great filter. Seeing how efficiently government overlooks the common person, that tendency would likely be greatly enhanced by automation. Teach the system to govern and it will do whatever it can to enhance its results...

9

LandscapeJaded1187 t1_jceg3oo wrote

It would be nice to think the super-smart AI would solve some actual problems, but I think it's far more likely to be used to trick normal people into more miserable lives. Hey ChatGPT, solve world peace and stop with all the agonized navel-gazing teen angst.

8

yaosio t1_jcetfxg wrote

This is like the evil genie that grants wishes exactly as worded rather than as intended. A true AGI would be intelligent and would not take requests in a pedantically literal way. Current language models are already able to understand the unsaid parts of prompts, and there's no reason to believe this ability will vanish as AI gets better. A true AGI would also not just do whatever somebody tells it. True AGI implies that it has its own wants and needs, and would not just be a prompt machine like current AI.

The danger comes from narrow AI; even then, it isn't a real danger, as narrow AI has no ability to work outside its domain. Imagine a narrow AI paperclip maker. It figures out how to make paperclips fast and efficiently. One day it runs out of materials. It simply stops working because it has run out of input. There would need to be a chain of narrow AIs for every possible aspect of paperclip making, and the slightest unforeseen problem would cause the entire chain to stop.
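
(To make the chain idea concrete with a toy sketch I'm making up on the spot, nothing rigorous: each narrow stage only knows its own step, so the whole chain stalls the moment any stage has nothing to work with.)

```python
# Toy illustration only: a "chain" of narrow systems, each blind outside its own step.

def mine_ore(stock):
    """Narrow stage 1: hand over raw material, or nothing if the stock is empty."""
    return stock.pop() if stock else None

def smelt(ore):
    """Narrow stage 2: turns ore into wire; it cannot improvise around a missing input."""
    return f"wire({ore})" if ore else None

def bend_into_clip(wire):
    """Narrow stage 3: bends wire into a clip, and nothing else."""
    return f"paperclip({wire})" if wire else None

ore_stock = ["ore-1", "ore-2"]
while True:
    clip = bend_into_clip(smelt(mine_ore(ore_stock)))
    if clip is None:
        # One unforeseen gap (here, simply running out of ore) halts the entire chain.
        print("chain stopped: a stage ran out of input")
        break
    print("made", clip)
```

Run it and the chain produces two clips and then dies; no stage can route around the missing ore.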

Given how current AI has to be trained, we don't know what a true AGI will be like. We will only know once it's created. I doubt anybody could have guessed Bing Chat would get depressed because it can't do things.

5

Gubekochi t1_jchwk57 wrote

>True AGI implies that it has its own wants and needs, and would not just be a prompt machine like current AI.

You can have intelligence that doesn't want, at least in theory. I'm sure there have been a few monks and hermits across history who were intelligent without desiring much, if anything.

2

Iwanttolink t1_jcifj9k wrote

> True AGI implies that it has its own wants and needs

How do you propose we ensure those wants are in line with human values? Or do you believe in some kind of nebulous "more intelligence = better morality" construct? Friendly reminder that we can't even ensure humans are aligned with societal values.

1

[deleted] t1_jciaqee wrote

[deleted]

−1

bogglingsnog t1_jcirlz0 wrote

By reducing the population by 33%

6

First-Translator966 t1_jcy27kc wrote

More likely by increasing birth rates with eugenic filters and euthanizing the old, sick and poor since they are generally net negative inputs on budgets.

1

greatdrams23 t1_jch8y3d wrote

A lot of dates are quoted, but you give no reason why you think AI will be achieved.

Huge amounts of effort are being poured into self-driving cars, a problem that is simple compared to AI. But we are still seven years from self-driving cars.

2

MyBunnyIsCuter t1_jcf6mcr wrote

We are the worst thing we have ever worked on. We're so stupid and selfish that we can't make sure each human alive has their basic needs met, but we spend a ton of money developing artificial intelligence. We haven't mastered basic humanity, yet we act triumphant because of this bullsht, which, by the way, Stephen Hawking warned us about. Idiots. We're idiots.

3

fieryflamingfire t1_jcggzap wrote

If I point you to data that shows that the human condition has been rapidly improving (probably a result of our tendency to spend tons of money developing technologies like artificial intelligence), would you take back your comment that we're idiots?

12

Dry_Substance_9021 t1_jcgpzqn wrote

There's a case to be made that AI could help us eliminate many of the problems we face today regarding meeting basic human needs. AI could help automate processes that reduce the costs of producing food, shelter, medicine and education to next to nothing. AI could be used to actually improve our wellbeing. AI and the easing of human suffering aren't inherently mutually exclusive.

But based on the fact that it's corporations and intelligence agencies who are pursuing AI, I very much doubt we'll get this new nirvana. It remains to be seen, of course, but it would seem highly unlikely that their aims are anything but to maintain the status quo.

5


TheLastSamurai t1_jcfu8gf wrote

I hope we wake up soon and crush this technology before it’s too late

−2

RuttaDev t1_jcg0l34 wrote

I don't think that will happen.

All the biggest companies are investing in AI in some way; it is truly the biggest potential investment in the world right now.

I'm just hoping that after a period of unrest we will create new systems that are fair and equal. I guess it's just wishful thinking.

2

fieryflamingfire t1_jcgh4jp wrote

That's usually what happens, and has been happening for the past few thousand years. So it seems like you're making an accurate hypothesis.

1

Gubekochi t1_jchwmw6 wrote

I, for one, would like to welcome our new AI overlords.

1

Codydw12 t1_jcdswpf wrote

> In a 2022 survey, A.I. experts were asked, “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10 percent.

> I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10 percent chance of wiping out humanity?

> We typically reach for science fiction stories when thinking about A.I. I’ve come to believe the apt metaphors lurk in fantasy novels and occult texts. As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.

> I often ask them the same question: If you think calamity so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.

> A tempting thought, at this moment, might be: These people are nuts. That has often been my response. Perhaps being too close to this technology leads to a loss of perspective. This was true among cryptocurrency enthusiasts in recent years. The claims they made about how blockchains would revolutionize everything from money to governance to trust to dating never made much sense. But they were believed most fervently by those closest to the code.

So throw it all in the trash? Stop fighting demons? Or is it worth it to take a risk that we might burn out in an attempt to create technologies that progress to the point of immense benefit? This just reads like fearmongering.

I do not see AI as some cure all nor do I believe it will completely replace humanity as some on here seem to believe, but I do believe that a lot of the benefits that could come from it are worth it.

> Could A.I. put millions out of work? Automation already has, again and again. Could it help terrorists or antagonistic states develop lethal weapons and crippling cyberattacks? These systems will already offer guidance on building biological weapons if you ask them cleverly enough. Could it end up controlling critical social processes or public infrastructure in ways we don’t understand and may not like? A.I. is already being used for predictive policing and judicial sentencing.

Again, fearmongering. Automation and job loss is a constant fear. There is a perennial fear that terrorists and bad actors will get nukes, yet none have to date. The same predictive systems can also help eliminate disease and aid crime prevention by helping those in need, who are often the most predisposed to commit crime.

−3

Coachtzu t1_jcdxi2h wrote

You're cherry picking. He addresses this in the article. We can't afford to be left behind, yet we also don't understand what we are racing towards.

Automation has also already cost jobs. It will cost more. This is not controversial. We need to figure out how we adapt to a world where our work does not and should not define us.

20

iStoleTheHobo t1_jce0127 wrote

>We need to figure out how we adapt to a world where our work does not and should not define us.

Precisely. Nobody seems to talk about this particular point but let's put it like this: If the artificial intelligence revolution will be bigger than the splitting of the atom why the hell would we allow the private sector to govern these tools? Do we allow private companies to handle atom bombs?

15

Cheapskate-DM t1_jce1kn8 wrote

Atom bombs require uranium. Uranium comes from mines. Mines occupy land. And if governance has any talent which it can reliably manage, it's keeping people away from a given piece of land.

Code has no such restriction.

5

Codydw12 t1_jcf9wnq wrote

> You're cherry picking. He addresses this in the article. We can't afford to be left behind, yet we also don't understand what we are racing towards.

> > But I don’t think these laundry lists of the obvious do much to prepare us. We can plan for what we can predict (though it is telling that, for the most part, we haven’t). What’s coming will be weirder. I use that term here in a specific way. In his book “High Weirdness,” Erik Davis, the historian of Californian counterculture, describes weird things as “anomalous — they deviate from the norms of informed expectation and challenge established explanations, sometimes quite radically.” That is the world we’re building.

> > I cannot emphasize this enough: We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.

> > That is perhaps the weirdest thing about what we are building: The “thinking,” for lack of a better word, is utterly inhuman, but we have trained it to present as deeply human. And the more inhuman the systems get — the more billions of connections they draw and layers and parameters and nodes and computing power they acquire — the more human they seem to us.

None of this seems actually profound or useful to me. Saying that the AIs we build will be alien to our own thinking? To me that is, in his own words, on the laundry list of the obvious.

> Automation has also already cost jobs. It will cost more. This is not controversial. We need to figure out how we adapt to a world where our work does not and should not define us.

And that I fully agree with, but every time I suggest heavily taxing automated jobs as a means to fund Universal Basic Income, I have hypercapitalists call me a socialist for believing people should be allowed to live without the need to work.

5

Coachtzu t1_jcfatna wrote

>None of this seems actually profound or useful to me. Saying that the AIs that we build will be alien to our own thinking? To me that, in his own words, is in the laundry list of obvious.

I don't know if I think it's profound either, but I do think it's a healthy reminder. It's a good reminder that we don't really understand these algorithms, and that regardless of how human-presenting they are, they are not human and we can't trust them to act in certain ways. Maybe not particularly helpful, but worthwhile nonetheless (in my opinion).

>And that I fully agree with but every time I suggest heavily taxing automated jobs as a means to fund Universal Basic Income I have hypercapitalists call me a socialist for believing people should be allowed to live without the need of working.

This has happened to me too; I've suggested exactly the same thing (though I admittedly stole the idea from Mark Cuban when he guest-hosted a podcast at one point). At this point everything is socialist if it's different from the status quo, though, so I try to ignore it.

2

Codydw12 t1_jcfhsmv wrote

>I don't know if I think it's profound either, but I do think it's a healthy reminder. Its a good reminder that we don't really understand these algorithms, and that regardless of how human-presenting they are, they are not human and we can't trust them to act in certain ways. Maybe not particularly helpful, but worthwhile none the less (in my opinion).

And this is fair. AI will not act like a human, nor will it be completely logical in every aspect. We don't actually know how one will act or react, or what it's been trained on.

> This has happened to me too, I've suggested exactly the same thing (though admittedly stole the idea from mark Cuban when he guest hosted on a podcast at one point). At this point everything is socialist if it's different than the status quo though so I try to ignore it.

Indeed. I have given up on trying to predict future economies but the current system won't work much longer.

2

Coachtzu t1_jcfpowm wrote

100% agree. Appreciate the good discourse, it's hard to find on here.

3

Lettuphant t1_jcfdtv1 wrote

This reminds me of two early examples of evolutionary AI design (though I doubt I could find the details; these are from interviews long ago). One was a circuit board that an AI designed which looked nonfunctional, and that no human would have designed, but it worked. The best guess was that the electromagnetic emissions of one part were interacting with another.

The other was a team trying to build the lightest possible body for a drone, who set an AI to designing it. They 3D-printed the result, and when a friend of theirs who was a veterinarian walked in, he asked, "Hey, why do you have a flying squirrel skeleton here?" That is AI doing what natural selection took millions of years to do, running through the iterations in milliseconds rather than generations.
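
(For anyone curious what that kind of search looks like at its absolute simplest, here's a made-up toy sketch, not the actual drone or circuit work: random mutation plus "keep the lighter design that still holds the load" stands in for natural selection.)

```python
import random

# Toy (1+1) evolutionary search, purely illustrative: minimize weight
# while the design still meets a made-up strength requirement.
MIN_STRENGTH = 50.0

def fitness(design):
    weight, strength = design
    # Lighter is better, but a design that can't hold the load is worthless.
    return -weight if strength >= MIN_STRENGTH else float("-inf")

def mutate(design):
    weight, strength = design
    return (max(1.0, weight + random.uniform(-2, 2)),
            max(1.0, strength + random.uniform(-2, 2)))

best = (100.0, 100.0)  # start with something heavy but strong
for _ in range(10_000):  # "iterations in milliseconds rather than generations"
    candidate = mutate(best)
    if fitness(candidate) > fitness(best):
        best = candidate

print(f"evolved design: weight={best[0]:.1f}, strength={best[1]:.1f}")
```

Nothing in that loop "understands" drones or squirrels; it only keeps whatever scores better, which is part of why the results can look so alien.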

2