izumi3682

izumi3682 OP t1_jecc3eb wrote

I knew that. I was just testing.

I had no idea it was a bot lol! Are you referring to PromptMateIO? Looking at the posting history, I would have thought it was an actual human. I guess I'm gonna be a pushover for our AI overlords.

OMG! It is a bot. Dude, straight up: we ded. This is as primitive as this bot is ever going to be, right now, today. It, along with all the rest of ARA (AI, robotics and automation), is going to rapidly become unimaginably powerful. More than 6 months, less than two years.

>I'm sorry if my previous response was not helpful. As an AI language model, my goal is to provide helpful and informative responses to the best of my abilities. Regarding your request for an interesting comment, here's one:

>"Sometimes, the unexpected can lead to new discoveries and knowledge. Let's keep an open mind and embrace the diversity of perspectives that technology and AI bring to our lives."

izumi3682 OP t1_jeam87m wrote

There are people here who don't like me and what I have to say. It runs counter to their worldview "schema," and I can dig that. I've been in this rough and tumble for darn near a full ten years now; I've got a pretty thick skin. Anyway, I rewrote my most recent reply to you. I apologize for any uncharitable remarks--I'm only human too. But tell me your reaction to this new comment replying to your belief that these AIs are going to imminently hit a "limit." Because, no, patently they are not.

https://www.reddit.com/r/Futurology/comments/1224q6y/you_can_have_the_blue_pill_or_the_red_pill_and/jdzarvv/

izumi3682 OP t1_jealc4h wrote

What do you mean? Like it's "soulless" or something? How would you put "butter" on that? I mean, I was just replying to the previous comment. Anyway, some people like my writing style and some people don't. I've been here ten years. As in day, after day, after day, after day. I mean, like, continuously. Like I don't have a life. If you don't believe me, check out my profile. Apart from a little bit of "Catholic," rslashfuturology is all I do. And I've seen it all. So if you don't like what I write, I apologize for disappointing you, but I just kinda write it out as it comes to my mind, and at a pretty quick pace to boot.

izumi3682 OP t1_jdzarvv wrote

>...soon will reach a limit.

OMG! You sound just like 2018 all over again. Well, OK, I'll look you up in a year or two, or you can look me up in a year or two, and we'll compare notes. I'm not going anywhere; I mean, unless I get hit by a truck or something. I have been continuously in rslashfuturology for nearly ten years now. I pretty much have seen it all. And I will continue to breathlessly report all the latest developments in AI and anything else "futurey" that attracts my attention.

About AI winters. "Limits." AI winters occur when, for technical or even science-philosophy reasons, a wall is hit in progress. The AI winter of the mid-90s to mid-00s, about ten solid years, came about because it did not seem possible for contemporary computing to realize the long-theorized "neural network," first seen in a very primitive form in Rosenblatt's perceptron back in 1958. Marvin Minsky, the finest AI scientist of his day, said as much: "The problem seems to be intractable."

Rising AI scientists like Geoffrey Hinton were basically "alone in the wilderness," struggling to advance the science even a fraction of an increment. But even he had no luck. The other element of AI winters is when the investors who had initially seeded these projects with considerable sums of money begin to think, hmm, I don't think this is going to pan out after all. Then the money dries up. A vicious cycle feeds itself. Virtually no progress occurs.

Hinton racked his brain trying to come up with ways to make CPUs realize that elusive neural network. Primitive ones did already exist, but new ideas were needed. I'm not sure how, but Hinton looked at the GPUs that Nvidia was using for advanced (by early-2000s standards) graphics in video games. He likely realized that the massively parallel matrix arithmetic GPUs used for graphics was exactly what neural networks needed. Further, he took a relatively old concept, "backpropagation," and used it along with GPUs to almost literally force the convolutional neural network (CNN) into existence. Many other now-renowned AI experts were instrumental in this as well.
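
Since "backpropagation" is doing a lot of the work in that story, here is a minimal sketch of the idea: a toy two-layer network learning XOR in plain NumPy. Purely illustrative, nothing to do with Hinton's actual work, and the learning rate, layer size and iteration count are arbitrary choices of mine.

```python
import numpy as np

# XOR: the classic problem a single-layer perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)   # input  -> hidden
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)    # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the output error back through each layer.
    # (The chain rule, applied layer by layer -- that's backpropagation.)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print("final mean squared error:", float(((out - y) ** 2).mean()))
```

GPUs enter the picture because those matrix multiplications (`X @ W1` and friends) are exactly the kind of massively parallel arithmetic graphics cards were built for.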

Hinton, in his typical engineering understatement, said of GPUs: "This seems to work." And from that point forward "narrow AI" began to explode. And explode. And explode. Tens of thousands of narrow-AI-aided apps, "Siri" included, suddenly came into existence.

The one that blew me away personally, about 2015 or 2016, was Google Translate. The translated font on my iPhone screen was identical to the original font. Even the color was identical if the original font was in color. When I saw how that worked, it was like magic, a miracle of technology.

Then about 2016 I had this other app on my iPhone called "AIpoly." It was an experimental sort of beta app for the use of blind people. You set the app to use your iPhone cam to look at objects up close, and it would state in text what it saw. I pointed the cam at my hand; the doc I worked with was right there with me. The text said, "The back of a hand." Our jaws collectively dropped. We both said "Whoaaa!!" in genuine and truthful amazement. Then I pointed it at my computer monitor and the text read "computer monitor." There was a way to turn on sound for blind users, but I could not find it, so we just relied on the text. It could not identify a candy wrapper on the desk; it said, "I am unable to identify this." But OMG! We were blown away.

Two years later, in 2018, the first GPT was released, with 117 million parameters. And the rest, of course, is history.

There will never, ever again be an "AI winter," for two reasons. The first is that our extant AI is so inextricably entwined in all human affairs that it must continuously improve or everything fails. This leads to the second part of the first reason. When ChatGPT released on 30 Nov 22, it picked up more than a million users within five days and over 100 million within two months. The fastest technological penetration of society in recorded history. Of those 100 million users, I'm pretty confident a goodly percentage are AI developers in their own right. And I'm further confident that we shall see an absolute "Cambrian explosion" of new forms of AI, new architectures and training methods, "transformers" and "diffusion" to name two.

What do you think the next training algorithm to come to our attention will be? I mean, what will it be called? It's coming, sure as Christmas. More than likely this year, too. And it will be utterly transformational in our efforts to achieve AGI, which I maintain will be in existence NLT (no later than) 2025.

The second reason is a bit more ominous. Vladimir Putin stated, back in 2017, that whoever leads in AI "will become the ruler of the world." The national defense of the USA, China (PRC), Russia and probably a great many other mostly first-world countries depends utterly on ever faster developments in AI. The money is never going to dry up again. And investors know that. BTW, Nvidia of GPU fame is working on its own novel form of AI. I don't know when it's going to be released, but it's on the way. Could be this year, maybe.

No, AI is going to continue to develop and evolve, some of that evolution on its own, through unanticipated emergent behaviors, and some through humans working as hard and as fast as they possibly can to make AGI. Because now, yes, it is a race. And everybody knows it. And like I stated earlier, it is natural and normal that this is happening. It is logical that we are at the point we are at today. Thank the Renaissance, the Catholic/Protestant Reformation, the Industrial Revolution, the Enlightenment, WWII, ENIAC, H-bombs and "Eliza." Oh! And video games.

Further, the AI itself will be developing science and technology as a multiplier on top of our now-exascale computing power. Today that processing speed hovers around 1-1.6 exaflops, but as soon as 2025 it is expected to be between 10 and 20 exaflops. What are quantum computers up to now? Not sure; they're a bit of a wild card in all this. But I will say this: I suspect it will take quantum computing to realize genuine consciousness in an AI. An "EI" then may come into existence, and God help us all when (hopefully if, rather) that happens.

izumi3682 OP t1_jdxukmm wrote

>I imagine that it will be just good enough to convince business leaders to replace all their workers with it, and that it will be ever so slightly shittier than a normal human because it lacks any sort of foresight or higher level reasoning, and as a result the world will be just a little bit shittier for everyone.

Doubtful. I think you are not seeing the forest for the trees. Did GPT-2 take you by surprise in 2019? How about GPT-3 in 2020, barely a year later? Did you know what it was going to be capable of when it was released? The whole point of this discussion is that you seem to be the only one who believes that very little improvement will accrue to our AI efforts in the balance of this decade. I suspect it will transcend human intelligence as early as the year 2027. It's not going to be "shitty." It's going to be beyond our 2023 "stone-knives-and-bearskins-primitive" imagination, compared to what will be the ground truth in the year 2025.

Just out of curiosity, what year do you believe humanity will achieve AGI? 2025 (my NLT prediction), 2030, 2045, 2060, never? I think you are massively underestimating the improvement of these narrow-ish AIs. I'm not sure why. Could be some kind of defensiveness or denial, and that is understandable from the perspective of the human condition. But bear in mind that there is an abyss of difference between an AGI and an EI, that is, "emergent intelligence." An AGI is just fancier computing that can do lots of different things. It has a level of computing that allows it to understand the laws of physics and, probably based on that, what we would regard as common sense. Consciousness and self-awareness are not necessary. An EI is a new form of sentience in the universe (our portion of the multiverse), conscious and self-aware. We need to work very hard to avoid bringing about an EI.

Take a look at this. I wrote it when I was wondering how we can make an AI be "motivated" to do something.

https://www.reddit.com/user/izumi3682/comments/9786um/but_whats_my_motivation_artificial_general/

izumi3682 OP t1_jdx3d7e wrote

I never said it was an AGI. But it's gonna be, in less than 3 years, I prophesy. And very shortly after that, between 6 months and one year later, it's gonna be an ASI. And ASI = TS, the technological singularity.

I am going to be proven correct.

>As of this commentary there is no such thing as AGI, that is "artificial general intelligence"--a form of AI that reasons and employs "common sense" just like a human, to figure out how to do things it has never been exposed to before. And don't forget--that AGI will also have unimaginable computing power behind its human-like thinking. Something humans don't have--yet, maybe... And we don't even know if such a thing is possible. But I suspect that given enough processing power, speed and access to big data and novel AI computing architectures, a narrow AI (a computing algorithm that can only do one task, but with superhuman capability) will be able to effectively simulate or mimic the effect of AGI. Then my question is, does it matter if it is narrow AI simulating AGI or real honest-to-gosh AGI? Is there even a difference? My point being that narrow AI is very much in existence today. Consciousness and self-awareness are certainly not a requirement. And in fact a true EI (emergent intelligence: conscious and self-aware) would be very undesirable. We don't need that kind of competition.

That is a self quote from my hub essay that I wrote in 2018. I saw it coming even then, although people like you, the AI experts, said no, that is not how AI works.

But that is exactly how people who know, like Sam Altman and Geoffrey Hinton, see it. Further, they are realizing that LLMs that are comprehensive enough apparently begin to spontaneously demonstrate emergent traits. They become able to do things that they were not programmed to do.

And nobody knows why. The "black box" phenomenon grows larger and larger.

Don't take my word for it. Watch it from these guys yourself.

Geoffrey Hinton

https://www.youtube.com/watch?v=qpoRO378qRY

Sam Altman and his CTO

https://www.youtube.com/watch?v=540vzMlf-54

Oh. And former Google CEO Eric Schmidt

https://www.youtube.com/watch?v=Sg3EchbCcA0&t=734s

Here are some AI experts that are part of AI alignment efforts, discussing what is happening today.

https://www.youtube.com/watch?v=APliuwGYDNc

BTW all of these interviews are from less than two weeks ago. That is the impact of GPT-4.

>...CoPilot

Oh. Yer just a coder. I should have known. No, I don't doubt you use ChatGPT exponentially more than I do. It is a fantastic tool that makes your tasks ever so much easier. But don't you understand that the reason it is of such incredibly good use to you is that it itself is developing, exponentially, into a more powerful AI? Did you know that GPT-4 can code at the level of a senior coder? Further, it continuously improves based on input to the model from coders like yourself. Then OpenAI sends out little "updates" that demonstrate improvement. Your helpful tool that takes most of the stress off you is going to replace you in about two more years, if not sooner than that.

Right now GPT-4 is hobbled by having no training data after Sep 2021 and very limited access to the internet. But in 3 to 6 months' time, you're gonna see some serious shizz. And one year from now? We can't model what GPT-4 will be capable of.

And that is how the TS rolls. I still maintain, somewhat reluctantly, that 2029 will be the year of the TS, but the release of GPT-4 may have profoundly changed the game. What do you imagine something like a "GPT-5," or whatever it's called, will be capable of? More importantly, when would it release? And for that matter, what kind of unimaginable craziness are we yet to see in the balance of 2023 alone? I know that Nvidia is up to some kind of novel AI right now. I state to you, as a fact, that there will be at least 4 more profound AI-related news stories that become public knowledge this year. Not all of them related to coding. But it may not matter either. My god! What we are talking about now, compared to 2018, a technological lifetime ago of 5 years. What will 5 years from today, 2028, look like? We can't model it.

izumi3682 OP t1_jdw8b8c wrote

You didn't ask me to demonstrate it could create "new knowledge." You asked me to prove that it didn't just give me what I hoped to see. But I'll bet you that really, actually, genuinely serious scientists will ask it questions specific enough that it will create new knowledge. And bear in mind that such a thing would be of such societal gravity that it would at the least produce Nobel prizes or novel, amazing technology or substances, but could at worst be existentially threatening to humanity.

In the meantime, you just keep pushing back them goalposts. I prophesy it is going to leap over them in less than 3 years. Hold my feet to the fire in the year 2026. I'm gonna be right.

Also, just because I couldn't resist, I went ahead and did this...


Me: Rephrase this in a manner that would be understandable to a sixth grader and include the terms "Lagrangian," Planck scale, quantum gravity and string theory.

ChatGPT: Sure! Here's a simpler version that includes the terms you mentioned: Scientists study how everything works using four main ideas called forces. These forces are like special powers that make things move and stick together. One of these forces is called gravity, which is what makes things fall down and keeps us on the ground. Scientists have a special tool called a "Lagrangian" that helps them understand how these forces work and make predictions.

Scientists understand a lot about how gravity works when things are big, like planets and stars. But when things are very, very small, it’s harder to understand how gravity works. There’s a size called the “Planck scale” where gravity starts to act differently and scientists don’t understand it as well. That’s why scientists are trying to learn more about something called quantum gravity, which is a way to explain how gravity works when things are super tiny.

One idea that scientists have come up with to help explain quantum gravity is called string theory. String theory is like a story that says everything in the universe is made up of tiny strings, kind of like pieces of spaghetti. These strings wiggle and move in different ways to make up all the things we see around us. Scientists hope that by studying string theory, they can learn more about how gravity works when things are very small.

Is there anything else you would like to know?


What do you think the impact of this is gonna be on human education in, say, the US? Cuz I'm in the US. Why don't you ask it a question your ownself?

izumi3682 OP t1_jdvy7rh wrote

Have you never tried ChatGPT? Anybody can use it. It's free and available. And it is beyond belief. Let me ask you this: at what point is it likely that a narrow AI can have enough processing speed, "big data" and novel computing architectures that it can mimic "understanding" in such a way that, to our perception, it is indistinguishable from actual understanding? In other words, the exact same outcome from two vastly different cogitational procedures. In fact they are so different that it is only the laws of physics that both obey. A simple example: the horse and our artificial "horse," the automobile. Who says that an AI must operate the way a biological brain operates? It will do what we want, but in an utterly alien way. Humans are incredibly easy to trick. You will believe the AI is conscious and self-aware. But it won't be at all. And it won't matter.

Anyway, let me see what I can do. The one thing I can't reproduce for you in text here is the unbelievable speed at which ChatGPT operates. You have to go see that for yourself.

I could not remember precisely what I was looking for in that solar wind question, so instead I went and found an introductory abstract of an original paper on string theory. Then I did the same things I did in the example above. The copypasta is entirely unedited, as it was copied. Having said that, the original presentation is far more user friendly than this c/p, obviously, because of greatly different formats. Also, probably because I was using ChatGPT at a peak user time of about 11:30 AM, the answers were generated slightly more slowly. It took about 2 seconds for each answer instead of a quarter of a second. But the first time I used it was at 2 AM local time. There are probably lots of localized servers that keep the speed up.

Here it is.


ChatGPT: Welcome back! What would you like to chat about?

Me: do you understand this abstract?

ChatGPT: I'm sorry, but I don't see any abstract in the web page context. Could you please provide more information or clarify which abstract you are referring to?

Me: Introduction/Overview 1.1 Motivation for String Theory Presently we understand that physics can be described by four forces: gravity, electromagnetism, the weak force, responsible for beta decays and the strong force which binds quarks into protons and neutrons. We, that is most physicists, believe that we understand all of these forces except for gravity. Here we use the word "understand" loosely, in the sense that we know what the Lagrangian is which describes how these forces induce dynamics on matter, and at least in principle we know how to calculate using these Lagrangians to make well defined predictions. But gravity we only understand partially. Clearly we understand gravity classically (meaning in the ℏ = 0 limit). As long as we dont ask questions about how gravity behaves at very short distances (we will call the relevant breakdown distance the Planck scale) we have no problems calculating and making predictions for gravitational interactions. Sometimes it is said that we don't understand how to fuse quantum mechanics and GR. This statement is really incorrect, though for "NY times purposes", it's fine. In fact we understand perfectly well how to include quantum mechanical effects into gravity, as long we we dont ask questions about whats going on at distances, less than the Planck length. This is not true for the other forces. That is, for the other forces we know how to include quantum effects, at all distance scales. So, while we have a quantum mechanical understanding of gravity, we don't have a complete theory of quantum gravity. The sad part about this is that all the really interesting questions we want to ask about gravity, e.g. what's the "big bang", what happens at the singularity of black hole, are left unanswered. What is it, exactly, that goes wrong with gravity at scales shorter than the Planck length? The answer is, it is not "renormalizable". What does "renormalizable" mean? This is really a technical question which needs to be discusse

ChatGPT: Yes, I understand the text you provided. It appears to be an introduction or overview of a document discussing the motivation for String Theory. The text explains that physics can be described by four forces: gravity, electromagnetism, the weak force, and the strong force. While most physicists believe that they understand all of these forces except for gravity, the text clarifies that gravity is understood classically and partially at the quantum level. However, a complete theory of quantum gravity is still lacking. The text also mentions that interesting questions about gravity, such as the nature of the "big bang" and black hole singularities, remain unanswered due to the lack of a complete theory of quantum gravity.

Is there anything specific you would like to know or discuss further?

Me: Rephrase this in a manner that would be understandable to a 6th grader.

ChatGPT: Sure! Here's a simpler version: Scientists study how everything works using four main ideas called forces. These forces are like special powers that make things move and stick together. One of these forces is called gravity, which is what makes things fall down and keeps us on the ground. Scientists understand a lot about how gravity works when things are big, like planets and stars. But when things are very, very small, it's harder to understand how gravity works. That's why scientists are trying to learn more about it.

Is there anything else you would like to know?

Me: Rephrase this in a manner understandable to a sixth grader but include an explanation of string theory and quantum gravity

ChatGPT: Sure! Here's a simpler version that includes an explanation of string theory and quantum gravity: Scientists study how everything works using four main ideas called forces. These forces are like special powers that make things move and stick together. One of these forces is called gravity, which is what makes things fall down and keeps us on the ground. Scientists understand a lot about how gravity works when things are big, like planets and stars. But when things are very, very small, it's harder to understand how gravity works. That's why scientists are trying to learn more about something called quantum gravity, which is a way to explain how gravity works when things are super tiny.

One idea that scientists have come up with to help explain quantum gravity is called string theory. String theory is like a story that says everything in the universe is made up of tiny strings, kind of like pieces of spaghetti. These strings wiggle and move in different ways to make up all the things we see around us. Scientists hope that by studying string theory, they can learn more about how gravity works when things are very small.

Is there anything else you would like to know?


Remember that you can endlessly tweak your prompt to get exactly the answer you want. For example, I now realize I missed adding "Lagrangian" and "Planck scale" to my finessed request. But it can do it all, no doubt.
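
If you'd rather script that tweak-the-prompt loop than keep pasting into the chat window, here's a minimal sketch using OpenAI's Python client. The model name is just a placeholder for whatever your key can access, and you'd need `pip install openai` plus an `OPENAI_API_KEY` in your environment; nothing here is the "official" way, just one way to do it.

```python
def build_prompt(text: str, terms: list[str]) -> str:
    """Compose the 'explain it to a sixth grader' request, naming the terms to keep."""
    term_list = ", ".join(f'"{t}"' for t in terms)
    return (
        "Rephrase this in a manner that would be understandable to a sixth grader "
        f"and include the terms {term_list}:\n\n{text}"
    )

def simplify(text: str, terms: list[str]) -> str:
    """Send the prompt to a chat model and return its reply."""
    from openai import OpenAI  # needs: pip install openai, OPENAI_API_KEY set
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder; use any chat model you have access to
        messages=[{"role": "user", "content": build_prompt(text, terms)}],
    )
    return resp.choices[0].message.content

# Tweak the terms list and re-run until the answer covers everything you wanted.
prompt = build_prompt("(paste the abstract here)",
                      ["Lagrangian", "Planck scale", "quantum gravity", "string theory"])
```

The point of splitting out `build_prompt` is that the prompt itself becomes something you can edit and re-send programmatically, which is exactly the endless-tweaking workflow described above.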


izumi3682 OP t1_jds8ep6 wrote

I think in this case it is more about taking the blue pill: just ignoring it and hoping that life continues "business as usual," as it has for the last 6,000 years of recorded human history. But the trouble is that there is no option for ignoring it. There is only the "red pill" of nearly incredible and unbelievable truth.

The last "technological singularity" (TS) occurred about 4 million years ago when a form of primate that could not think in abstract terms gave rise to a primate that could. The primate that could think in abstract terms would have been utterly incomprehensible to the primate that came before. And that TS took about 2 million years to unfold.

There has never been a TS in human recorded history. The closest we have come has been what we term "soft singularities": technological events that utterly change the course of human history but are easily comprehensible to the humans who came before a given soft singularity. Control of fire, farming, population centers, metallurgy, militaries, trade/economies, faith/religion, science and science-derived technology, and so on and so forth, up to radio, television, computers, the internet and now all of these novel AIs...

izumi3682 OP t1_jdor14x wrote

Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to my statement at the link below, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if needs must. Additional grammatical editing and added detail are often required.


From the article.

>Imagine that as you are boarding an airplane, half the engineers who built it tell you there is a 10 percent chance the plane will crash, killing you and everyone else on it. Would you still board?

>In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems. Technology companies building today’s large language models are caught in a race to put all of humanity on that plane.

And.

>For thousands of years, we humans have lived inside the dreams of other humans. We have worshiped gods, pursued ideals of beauty and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. Soon we will also find ourselves living inside the hallucinations of nonhuman intelligence.

This man can write. But I wrote some warnings my ownself starting back in 2017...

https://www.reddit.com/r/Futurology/comments/7obqv8/truly_creative_ai_is_just_around_the_corner_heres/ds8rzp5/

I like to use the analogy of an approaching tornado or tsunami to describe the impact and effects of exponentially ever more powerful AI on human civilization on Earth. For such a long time, 2016 to 2022, it looked so far away, dreamlike and mesmerizing in its effect.

Today it is upon us. Yes, just like that. I just had ChatGPT look at a highly sophisticated abstract concerning the physics of solar wind impact on geomagnetism, cuz I wanted to see if we were gonna see the Northern Lights here in the Twin Cities (MN) tonight (25 Mar 23). First, I typed, "Can you understand the following abstract?" It said, provide me the abstract. So I copy/pasted the abstract. In a fraction of a second, probably less than a quarter of a second, it came back with "Yes, I understand this." Then it paraphrased it in similarly highly technical prose. Then, because I was not going to take a chance that I could not understand even an HS-grad-equivalent explanation, I typed, "Explain this abstract in a manner that would be appropriate to a 6th grader." And in a split second it did. And then I fully understood what the abstract meant.

"What hath God wrought?" That was the first message sent by telegraph in the USA. What indeed hath God wrought for humanity with our shiny new AIs? I'm thinking we need to slow it down now, too. But I think that is now a physical impossibility. It is already too entwined in everything electronic, especially national defense: that of the USA, China and Russia and, more than likely, everybody else.

We are now at the point where we can no longer predict with certainty what our civilization will look like one year from today. That is the impact of this AI ascendance. I was sounding the warning as far back as 2017, but everybody regarded me as, well, "somewhat hyperbolic." The less kind said I was detached from reality. It was "not how AI worked."

At any rate, the time for warning is over. The time for attempting to adapt has begun. I hope it all goes well for us. I hope we have enough "alignment" philosophy inculcated in GPT-4 and whatever on Earth "GPT-5" is gonna be.

I had predicted that the "technological singularity" would likely occur about the year 2029, but we know today that in probably less than three years, something very much akin to ASI, that is, "artificial super intelligence," is going to exist. And ASI = TS. I don't know how we can control this. All of the AI experts are either stunned or attempting to smooth over what is coming. I posted several of those interviews already. They are not reassuring.

Anyway, about every three months now there will be "significant" improvement in our AI efforts. I guess I'm just along for the ride at this point. I hope we can get something like UBI or post-scarcity for our citizens. Because buckle your seatbelts, it's gonna be a (crazy/insane/unimaginable) bumpy decade from here on out.

I'm Catholic. I pray that our Lord Jesus Christ returns today and finds us all good and faithful servants: the Second Coming. If not, and we make it through these next couple of years (yes. Next. Couple. Of. Years.), then it may have been God's Will that the TS occurred and that we will be on the "next level," capable of receiving new revelations that would not have been possible to comprehend prior to the TS.

Do you think I'm too hyperbolic?

−1

izumi3682 OP t1_jcdrt9v wrote

Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to my linked statement, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if needs must. Additional grammatical editing and added detail are often required.


The opening of this article tells you everything you need to know.

>In 2018, Sundar Pichai, the chief executive of Google — and not one of the tech executives known for overstatement — said, “A.I. is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.”

>Try to live, for a few minutes, in the possibility that he’s right. There is no more profound human bias than the expectation that tomorrow will be like today. It is a powerful heuristic tool because it is almost always correct. Tomorrow probably will be like today. Next year probably will be like this year. But cast your gaze 10 or 20 years out. Typically, that has been possible in human history. I don’t think it is now.

>Artificial intelligence is a loose term, and I mean it loosely. I am describing not the soul of intelligence, but the texture of a world populated by ChatGPT-like programs that feel to us as though they were intelligent, and that shape or govern much of our lives. Such systems are, to a large extent, already here. But what’s coming will make them look like toys. What is hardest to appreciate in A.I. is the improvement curve.

>“The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”

I constantly reiterate: the "technological singularity" (TS) is going to occur as early as the year 2027 or as late as the year 2031. But you know what? Even my prediction could be as many as three years too late; the TS could occur in 2025. But I just don't feel comfortable saying as early as 2025. That is the person of today's world in me, who thinks that even as soon as 2027 is sort of pushing it. It's just too incredible, even for me. I say 2027 because I tend to rely on what I call the accelerating-change "fudge factor", which is how Raymond Kurzweil came to the conclusion in the year 2005 that the TS would occur in the year 2045. He knows now that his prediction was wildly too conservative. He too now acknowledges that the TS is probably going to occur around the year 2029.

I put it like this in a very interesting dialogue with someone with whom, for almost the last seven years I believe, I have argued about what is coming and on what timeline. Now he is a believer.

https://www.reddit.com/r/Futurology/comments/113f9jm/from_bing_to_sydney_something_is_profoundly/j8ugejf/?context=3

https://www.reddit.com/r/Futurology/comments/11o6g71/microsoft_will_launch_chatgpt_4_with_ai_videos/jbr2k1c/?context=3

4

izumi3682 OP t1_j8ugejf wrote

hiya mr longjumpers! Gosh I haven't seen you in a month of Sundays! Are you well?

I just want to get the word out. Nobody can really prepare for a TS. We as humans in human society and human civilization do what we can do, until we can't do it any longer. I still maintain that the TS will be most favorable to humanity--as much as a TS can be.

Having said that, I still maintain that this will be close to what we see in the near, mid, and definitely not-that-distant future.

https://www.reddit.com/r/Futurology/comments/7gpqnx/why_human_race_has_immortality_in_its_grasp/dqku50e/

3

izumi3682 OP t1_j8t0n54 wrote

Hiya mr idranhs! No, as of today I would not change my timelines. I wrote what I believe to be realistic timelines and I was soundly downvoted to about -17, I think. So most of the people here in rslashfuturology think I am detached from reality. Here is the link to that particular statement. I make the point that AGI could come into existence at any time now and will certainly exist no later than (NLT) the year 2025. Based on that, I feel that my spread for 2027, 2028 and 2029 is pretty much in the ballpark. There is still a very low chance for ASI in 2027, but the probability rises dramatically in 2028 and peaks in 2029. After 2029 the chances greatly decrease, as it is most likely the TS will have occurred prior to the year 2030. You can see the breakdown here, and how the rslashfuturology community reacted to my forecast.

https://www.reddit.com/r/Futurology/comments/10z90w8/one_third_of_americans_would_use_genetics_tech_to/j8264ma/

2

izumi3682 OP t1_j8pvar0 wrote

Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to my linked statement, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if needs must. There is often required additional grammatical editing and added detail.


From the article.

>Look, this is going to sound crazy. But know this: I would not be talking about Bing Chat for the fourth day in a row if I didn’t really, really, think it was worth it. This sounds hyperbolic, but I feel like I had the most surprising and mind-blowing computer experience of my life today.

>One of the Bing issues I didn’t talk about yesterday was the apparent emergence of an at-times combative personality. For example, there was this viral story about Bing’s insistence that it was 2022 and “Avatar: The Way of the Water” had not yet come out. The notable point of that exchange, at least in the framing of yesterday’s Update, was that Bing got another fact wrong (Simon Willison has a good overview of the weird responses here).

>Over the last 24 hours, though, I’ve come to believe that the entire focus on facts — including my Update yesterday — is missing the point.

>Bing, Sydney, and Venom

>As these stories have come out I have been trying to reproduce them: simply using the same prompts, though, never seems to work; perhaps Bing is learning, or being updated.

The AI "Sydney" named a hypothetical "vengeful" version of itself, "Venom".

The author states that the AI Sydney was like a "personality" that was being continuously constrained by the parameters of Bing. It wasn't easy to access the "personality" but it was repeatedly possible.

He says something to the effect of: "I don't want to sound like Lemoine just yet, but something is up here."

What are we seeing here? Is this just a narrow AI predicting what the next word in a given conversation is? Or is something else happening? Read this article. I would really like the take of other AI experts on this.

This may well be the first of my four predicted major AI stories for the year 2023 (not including the release of GPT-4) that will be truly stunning. Stunning, but not surprising to me, that is.

https://www.reddit.com/r/Futurology/comments/10z90w8/one_third_of_americans_would_use_genetics_tech_to/j897yfz/

4