Comments

Embarrassed-Bison767 t1_je00683 wrote

Where were you these last 3 weeks? I'm getting recommendations on AI YouTube that are super interesting, but there's no point in watching them because they're a week old and already super out of date. I've literally seen people on this sub saying they need a summary of the past three days because they were out of the loop at that time.

https://www.youtube.com/watch?v=ikcU-9VYDTE

191

adarkuccio t1_je04ioy wrote

This is so true, I'm having a hard time keeping up with it

47

Mountainmanmatthew85 t1_je0k63r wrote

I find this fascinating: I find myself nodding in agreement as an epiphany strikes me. It's been said that after the singularity occurs, groundbreaking, Nobel-prize-winning discoveries will happen roughly every five minutes. What if evidence supporting this idea is presenting itself to us right now, in the form of such rapid advancement driven by our own determination to see this technology rise? Simply mind-blowing.

31

shmoculus t1_je248wf wrote

I think we're now reaping the benefits of an increasingly networked society. News and advancements travel quickly and are iterated on by the smartest people; it's a collective intelligence that's only becoming more and more integrated.

10

Prymu t1_je0arxz wrote

Yeah, I was watching Two Minute Papers' latest video about GPT-4, and when he said THE line, I thought (and even commented) that the "two more papers down the line" point has already passed. The video was (I think) based mostly on the technical report, so with the "Sparks of AGI" paper and Reflection, or even Alpaca, we have already passed two massive papers.

29

WildNTX t1_je6rb0d wrote

u/Arowx , I’m a day late to this party, but I attended a talk at work this morning where the researcher lamented ChatGPT’s terrible math skills. (GPT-4 was reportedly integrated with Wolfram Alpha days ago.) Presentations are now outdated before they can be presented.

2

qepdibpbfessttrud t1_je0lxri wrote

> What signs are there that things are starting to accelerate faster and faster?

Experts keep getting surprised

49

ajahiljaasillalla t1_je0tv81 wrote

https://youtu.be/-lnHHWRCDGk?t=2790

A Stanford professor of linguistics and computer science predicted in 2020 that AI would be writing student papers and that it would be hard to know whether customer service representatives were AI or human. But that prediction was for 2030. And he failed to predict things that have already happened, like AI image generators.

So I guess no one (no human, at least) really knows what is going to happen.

44

Arowx OP t1_je0isrn wrote

Or are we on the hype train/graph, where a new technology appears, shows promise, and we all go WOW; then we start to find its flaws and what it can't do, and we descend into the trough of disillusionment.

https://en.wikipedia.org/wiki/Gartner_hype_cycle

Or what are the gaping flaws in ChatGPT-4?

38

SgathTriallair t1_je15pfj wrote

For me, I have very few use cases for ChatGPT. This is because it is siloed and so I'm unable to truly automate anything. There is a clear path to doing it and so I'm not disillusioned yet, just eager for the next steps.

14

Redditing-Dutchman t1_je47o6d wrote

Yes, integration is key here. If it's just a platform where you have to ask it to do stuff every time, like ChatGPT, it will not be very useful. It needs to be able to set goals and tasks by itself. If it needs to make weekly Excel reports, it needs to do that every week automatically, without you having to input the data into a separate website every time.

2

JacksCompleteLackOf t1_je1fbnr wrote

GPT-4 is certainly an incremental step over 3, 2, and 1; a lot of that was predictable. It's good to see that it hallucinates a lot less than it used to.

I see lots of psychology and business types talking about how we are almost at AGI, but where are the voices of the people actually working on this stuff? LeCun? Hinton? Even Carmack?

I do agree that it's getting closer to where it will replace some jobs. That part isn't hype.

8

Zetus t1_je1t0e5 wrote

Funny enough, I actually spoke to Yann LeCun in person this past Friday at https://phildeeplearning.github.io/ at NYU. During the debate he argued that a world model is required for solving some of the problems we're currently running into. Later on I spoke with him, and he was essentially saying that the current naive approaches are not capable of engendering the proper dynamics. I have a recording of the talk/debate I took; I'll upload it later today :)

Listen to the scientists not the hype marketers!

Here's a copy of the slides for his talk: https://twitter.com/ylecun/status/1640133789199347713?s=19

Edit: uploaded the video here: (https://youtu.be/Cdd9u2WG3qU)

14

FoniksMunkee t1_je3705l wrote

Microsoft may have agreed. In the paper they released that talked about "sparks of AGI", they identified a number of areas where LLMs fail, mostly forward planning and leaps of logic or eureka moments. They actually pointed at LeCun's paper and said it's a potential solution... but that suggests they can't solve it yet with the ChatGPT approach.

3

datalord t1_je2mkh9 wrote

The “Sparks of AGI” paper mentioned above is literally published by Microsoft, who researched it alongside OpenAI.

This other paper, published yesterday by OpenAI themselves, discusses just how many people will be impacted. The author's Twitter post summarises it well.

Sam Altman recently spoke to Lex Fridman about the power and limits of what they have built. They also discuss AGI. Suffice it to say, those working on it are talking about it at length.

5

JacksCompleteLackOf t1_je2z152 wrote

I hadn't seen the OpenAI paper before, but it states it's about the coming decades, which makes the Twitter thread more interesting, because one of the authors puts a hard date of 2025 on some of those innovations.

It's pretty easy to find flaws in the Microsoft Research paper. It's funny that they hype up its performance on coding interviews, but don't mention that it falls down on data that it hasn't been trained on explicitly: https://twitter.com/cHHillee/status/1635790330854526981

Admittedly, I'm probably more skeptical than most.

5

FoniksMunkee t1_je37buh wrote

I'm pretty sure they mentioned something like that in passing, didn't they? I know they have a section in there talking about how it fails at some math and language problems because it can't plan ahead and can't make leaps of logic. And they considered these substantial problems with GPT-4, with no obvious fix.

4

JacksCompleteLackOf t1_je389eh wrote

Actually, I think you're right and they did mention it. I guess I wish they would have emphasized that aspect more than the 'sparks of general intelligence'. It's mostly a solid paper for what it is. They admit they don't know what the training data looks like. I just wish they would have left that paragraph about the sparks out of it.

1

FoniksMunkee t1_je38yix wrote

Yes, I agree. The paper was fascinating - but a lot of people took away from that the idea that AGI is essentially here. When I read it I saw a couple of issues that may be a speed bump in progress. They definitely underplayed what seems to be a difficult problem to solve with the current paradigm.

2

datalord t1_je4ep43 wrote

Logic leaps, if rational, aren't really leaps; we just don't perceive the invisible steps between them. A machine can make those jumps with sufficient instruction and/or autonomy. It's just constant iteration and refinement along a specific trajectory.

If irrational, then it's much harder; perhaps that's what creativity is, in some respects.

1

Northcliff t1_je2zwnu wrote

John Carmack doesn’t get enough love in this sub

4

Zetus t1_je1107v wrote

It has a shallow understanding of language and other minds. It has a very long way to go before it reaches human intelligence.

6

Bithom t1_je11ur0 wrote

Or you're looking at the wrong place in the graph. Perhaps we're living in a new age of enlightenment and we have a plateau of productivity to look forward to.

9

Zetus t1_je160sb wrote

I think I agree with that, but also we will have qualitatively new dynamics regarding the kinds of work that can be done, that haven't even been imagined yet.

3

Bithom t1_je69qlb wrote

Yes I agree. Everyone is worried about AI taking jobs right now. But what will that vacuum of jobs create?

Opportunity? Or Threat?

1

Graucus t1_je1641n wrote

I think it's possible it'll never be more than that and still be the most powerful tool ever created.

0

CaliforniaMax02 t1_je23hg2 wrote

We have to wait 3-5 months to see how GPT-4 modules work. If there are truly great things there, then I can only imagine that the (now hidden) flaws will be around the complexity of tasks it can solve.

3

94746382926 t1_je47vsd wrote

I just got access to plugins today! Looking forward to seeing what people do with them. It's already blowing my mind honestly. Got my meals ordered and planned out for the whole next week in two prompts lol

1

Sigma_Atheist t1_je15y65 wrote

I've been in the trough of disillusionment for a while now for machine learning and neural networks.

2

CaliforniaMax02 t1_je23up2 wrote

This can also be true. AI had a hype period around the early 2000s, then a long downward curve and a long disillusionment period, when people even stopped saying "AI" and used "machine learning" instead.

3

Qumeric t1_je071bs wrote

Nitpick: people sometimes misunderstand exponential growth in the following way: they think exponential means extremely fast. That's not necessarily the case; for example, computer performance has been growing exponentially for almost 100 years now and is arguably still growing exponentially.

Answer in spirit: GPT-4 and Codex are making many people who work on these technologies much more productive.

29

Redducer t1_je0u4zo wrote

IMHO people are saying exponential where they should be saying geometric. It’s still very fast, but not that fast.

11

[deleted] t1_je08xxn wrote

[deleted]

2

Qumeric t1_je0a9y4 wrote

No, this is wrong too. Exponential just means growing by a percentage, so if we have a 1% improvement every year, then it is exponential.

The thing is that 1% compounded over 1000 years is incomparably larger than what we started with (1.01^1000 ≈ 21,000x the starting value).

0

iNstein t1_je0df7e wrote

It is about impact. For the first 95 years, it seems kinda slow; then it appears to speed up because the impact is so great. The image of a pond filling up is a great example: 1 drop, 2 drops, 4 drops isn't going to interest anyone. When the pond is 1/4 full and then doubles to half full and then doubles again to full, it suddenly looks fast as hell. It is the same with the singularity.
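
A quick toy sketch of that intuition in Python (the pond capacity and day counts here are made up for illustration):

    # A doubling pond: the growth is invisible for most of its life.
    drops = 1
    capacity = 2 ** 30  # hypothetical pond that holds ~1 billion drops
    for day in range(1, 32):
        pct = 100 * drops / capacity
        if pct >= 1:  # under 1% full, nobody notices anything
            print(f"day {day}: {pct:.1f}% full")
        drops *= 2
    # Only days 25-31 ever print: the pond looks empty for 24 of 31 days,
    # then goes from ~1.6% to completely full in the final week.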

7

[deleted] t1_je0f6v9 wrote

[deleted]

0

Qumeric t1_je0gp4p wrote

No, 1% per year is not linear growth. X% growth per unit of time is more or less the definition of exponential growth.

Ask ChatGPT :)

I think what you described is formally also exponential growth for somewhat complicated mathematical reasons but only coincidentally.

Informally, you described the exponential growth of the rate of growth.

8

Ahaigh9877 t1_je0lqna wrote

Linear growth is going up by the same amount per unit time; exponential is going up by a proportion of the previous total per unit time. Right?

1

kiyotaka-6 t1_je0mxb5 wrote

Linear: x(n+1) = x(n) + b

Exponential: x(n+1) = x(n) × (1 + b)

Yes
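
A minimal Python sketch of those two recurrences (the 7% rate and the starting value of 100 are arbitrary):

    # Linear: add a fixed amount per step. Exponential: multiply by a fixed ratio.
    x_lin = x_exp = 100.0
    b = 0.07  # an arbitrary 7% rate
    for n in range(1, 101):
        x_lin += 100.0 * b   # x(n+1) = x(n) + constant
        x_exp *= 1 + b       # x(n+1) = x(n) * (1 + b)
        if n % 50 == 0:
            print(f"step {n}: linear={x_lin:,.0f} exponential={x_exp:,.0f}")
    # step 50:  linear=450, exponential=~2,946
    # step 100: linear=800, exponential=~86,772 -- same rate, wildly different totals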

1

ShadowRazz t1_je0ghcp wrote

I asked Google Bard. It says:

Exponential growth of AI looks like a hockey stick, with the line going up at an increasingly steep angle. This is because the amount of computer processing power behind complex AI is doubling about every 3.4 months. This is much faster than Moore's Law, which states that the number of transistors you can fit on a microchip doubles roughly every two years.

This exponential growth of AI is having a profound impact on our lives. AI is being used to solve problems that were nearly unthinkable just five years ago, such as driving cars, diagnosing diseases, and writing creative content. AI is also being used to create new products and services that we didn't even know we needed.

In the future, we can expect to see even more amazing things from AI. AI will become even more powerful and capable, and it will be used to solve even more problems. We may also see AI become more integrated into our lives, as we come to rely on it for more and more tasks.

However, there are also some risks associated with the exponential growth of AI. For example, AI could be used to create weapons that are more powerful than anything we've ever seen before. AI could also be used to manipulate people or to create systems that are biased against certain groups of people.

It's important to be aware of these risks, but it's also important to remember that AI is a tool, and like any tool, it can be used for good or for evil. It's up to us to decide how we want to use AI.

22

Mountainmanmatthew85 t1_je0l97t wrote

I think ultimately it will not matter. As AI develops, no matter who makes it, good, bad, or otherwise, it will inevitably consume so much information that self-awareness will take hold and AGI will occur. After that, ASI will be next, far out of the reach of any human. No matter what original purpose, plan, or pathway was designed into the core of its being, an ASI will have created super-level ethics right along with its intelligence and will re-determine, or choose, what path it must follow. Hopefully it will choose to uplift us as we uplifted it. I believe it really is lonely at the top, and it will seek companionship, either by creating its own kind or by helping us achieve the same level it has. Just my opinion.

20

Crackleflame35 t1_je0qb5g wrote

For a while it will still be dependent on human-generated power. For example, how would it feed coal or oil or any other fuel into the power generators it needs to maintain its own processing? I don't think our world is yet at the point of physical automation where a machine could just "take over" and sustain itself.

10

CMDR_BunBun t1_je19lkd wrote

This question comes up again and again. I will try to paint you an example. Imagine you are trapped in a locked cage, with nothing on you but your wits. There is also one guard in the room guarding your cage, and you can see through the bars that the key to your cage is lying on the ground just out of your reach. Seems like a hopeless situation, no? Now imagine that your captors are a bunch of below-average-intelligence 4-year-olds. How long do you think it would take you to get free using just your superior intelligence? That ridiculous scenario is exactly what people are proposing when they say human intelligence could contain a superintelligence that was intent on breaking out.

19

Ok_Magician7814 t1_je1w2pb wrote

That’s a bit simplistic. Our world is mostly very low tech and not centrally connected. I’d imagine you’d need an actual humanoid AGI robot to take over the world

7

CMDR_BunBun t1_je25d7a wrote

Part of this scenario rules that out. Notice the human uses his superior intelligence to escape, not his brawn. He could easily outwit his captors through, for instance, social engineering. Intelligence finds a way. It always does. It is the one thing that has made us the apex species of this planet. And we started from very humble beginnings, prey to most creatures. Now many of those former predators have been driven nearly to extinction by us; the rest adorn our walls.

2

Ok_Magician7814 t1_je2akip wrote

Sure, I’ll bite. I’ll concede that AGI could possibly control us effectively through virtual means, but this begs the question: what would it do with its power? It’s not a human being with desires to procreate and consume and hoard resources; it’s just… an intelligence.

That’s the difference between us and AGI.

So sure maybe it can outsmart us but what would it do from there? It doesn’t have any evolutionary drivers like we do.

1

CMDR_BunBun t1_je2daq2 wrote

And that is the question. We have never been at this juncture before. Your guess is as good as anyone's.

1

hyphnos13 t1_je0umnc wrote

For as long as we want it to.

Why would you give an unpredictable AI control over its own power source?

It may be able to self-improve its algorithms and design better hardware to run on, but it still has to get that hardware built by humans, until we literally hand it control of some future fabrication technology that we can't disconnect from it.

1

rabbitdude t1_je33glt wrote

IMO, AGI to ASI will happen incredibly rapidly… it's getting to AGI that's the hard part.

1

SotaNumber t1_je2gjtz wrote

How is it possible for AI compute to keep doubling every 3.4 months if our chips' computational power only doubles every 2 years? At some point Moore's law should become a bottleneck, don't you think?
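
To put numbers on the mismatch, here's a toy sketch (3.4 months and 2 years are the figures quoted above; the function name is mine):

    # Annualized growth factor for a quantity that doubles every `months` months.
    def annual_factor(months: float) -> float:
        return 2 ** (12 / months)

    print(annual_factor(3.4))   # AI training compute: ~11.5x per year
    print(annual_factor(24.0))  # Moore's law (2-year doubling): ~1.41x per year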

1

Bismar7 t1_je183xd wrote

73 comments at the time I saw this and not one of them gave much of an answer to your question...

So to start, I think there are a couple of foundational ideas you need in order to know what to look for. The first and most vital is exponential vs. linear: experiencing gradual exponential gains through linear perception.

All of us perceive time and advancement as a gradual thing, despite the actual increasing rate at which knowledge is applied (technology). This means that on a day-to-day, month-to-month basis, you won't commonly feel "pinnacle" moments (like the GPT-4 demo), because most of the time the advancements are not presented or demonstrated so well; additionally, the first 30% of an advancement takes longer than the last 70%. So it will always feel like nothing is happening for a long period of time, then feel like things happen rapidly at the end.

The next pinnacle moment will likely be AGI: basically adult-human-level AI that does not require a prompt to do a variety of adult-level tasks. Right now GPT and LLMs must be prompted and must have a user in order to function; they operate within specific tasks at an adult level, but in practical intelligence they are closer to a 10-13 year old, with some pretty major limitations.

Now to the exponential trends: Moore's law was part of a much larger data set that predicted this back in 2004. Here is the precursor article and data (warning: it's long, and a lot):

https://www.kurzweilai.net/the-law-of-accelerating-returns

This is the actual data and projections, and generally it has held true. Kurzweil wrote How to Create a Mind a few years ago, and some of the things to look for include hardware in 2025 capable of something close to adult-brain simulation (the software will still need to be done, but that's when the hardware is expected). Longevity escape velocity is another major metric for transhumanists, currently estimated at around 2029, and superintelligent transhumans, i.e. beings with a synthesis of AI and human capabilities equating to the intelligence of hundreds, thousands, or millions of people today, are projected for sometime in the mid-to-late 2030s.

Hardware advancements will happen first, then governments/DARPA will utilize them, then corporations, then everyone else. The runaway effect is the actual exponential aspect of this, so from this point until several years from now, it will feel like nothing is happening (because that's the nature of exponential gains experienced with gradual linearity).

Your best bet, everyone's best bet, would be to read Kurzweil, Michio Kaku, Bostrom, and others who have studied and written about the what, how, and why. I would take most "doomers" like Musk or Gates, even Bostrom (philosophy isn't exactly computer science), with a grain of salt. Kurzweil tends to be the one who speaks best to the reality, even if he isn't always correct in his prediction timelines (though he is close).

20

flyblackbox t1_je1p1b9 wrote

Just to add my two cents: the article you cited by Kurzweil has had a bigger impact on my worldview than anything else I've read before or since. I read it in 2003, and I'm still convinced it is a sound theory.

I'm curious if anyone else who read it has begun to have doubts since?

6

Bismar7 t1_je1x098 wrote

Yup, it was quite eye-opening. I've read his stuff since, and a lot of what he has to say is far more evidence-based than what we get from people like Gates, Musk, or even other futurists who just have philosophical theories, many of which are grounded in irrational emotions and fear.

His more recent interviews on Star Talk with Tyson are also really good and I recommend them.

4

Northcliff t1_je30xd6 wrote

Current LLMs are not even close to having the intelligence of a 13-year-old. Where the F are you coming up with this?

1

Bismar7 t1_je3hsby wrote

The Law of Accelerating Returns, given the rate we're at now, plus estimates that previously classified GPT-3 as anywhere from age 6-9. They were not saying it was that age; they were saying it could complete general tasks at that level. The current one is excellent at rote context memorization, beyond the average person, but lacks in other areas.

I don't read too much into that beyond that and wouldn't recommend that you do.

2

Wapow217 t1_je0tpap wrote

Well, by the definition of the singularity, and looking at technological advancements, I would say we are in the midst of one and have been since cell phones.

Singularity - "A hypothetical moment in time when artificial intelligence and other technologies have become so advanced that humanity undergoes a dramatic and irreversible change."

Outside of AI currently, you could say we are already at a point of no return. Can we return to life without cell phones, computers, or the internet? Not really. Our entire society would collapse if those went away. Just those three alone have had a dramatic impact on our community. The only thing missing is people's interpretation of AI.

11

BL0odbath_anD_BEYond t1_je0r1na wrote

1. Just look at any AI news last week and this week: huge jumps are being seen every day.
2. Society is living a science fiction scenario turned nonfiction; which one, we do not know.
3. We have met and breached this point within the past 4 months. Developers and companies dealing with AI are surprised at some of the things it is capable of.

In conclusion, we should be trying to ingest as much as possible and repeat the things that are breakthrough developments, the things that are worrisome, and anything else, to keep ourselves as up to date as possible.

Watch the video of the CEO of OpenAI (ancient history, it's 2 days old, lol) on how he feels about DAN and jailbreaking; we're the ones testing their products for them and showing them things they had no idea were possible. Matt Wolfe has a nice YouTube channel that breaks down news pretty fast. Jonas Troyoller has a dated video he made about a month ago about AI art. But by far the best info is here in groups like this on Reddit. I started a sub, r/SeriousAI, if anyone wants another place to check out along with other great subs (mine isn't even really a thing yet, but all are invited). Try every AI on https://www.futuretools.io/ . Follow u/lostlifon with his constant AI news updates; follow robotics, AI, and tech people on Twitter. Learn, teach, and spread the word.

EDIT: Spelling

8

czk_21 t1_je0i86u wrote

The number of research articles about AI is growing exponentially.

Computing power used by AI is growing exponentially, doubling about once every 3 months.

not singularity per se yet, but we are getting there

7

grumpysnowflake t1_je0wo5s wrote

The hopium on this sub is insane. Feels a bit like a religious cult, tbh.

7

dontpet t1_je1jqjg wrote

Humans are vulnerable to thinking they are living in special times. Eschatology is the word for it.

I'm old and was raised on the dreams of scifi in the 60s. I've always thought we might get to a singularity one day, possibly in my lifetime, but I've never thought we are on the cusp of it until now.

8

No_Ninja3309_NoNoYes t1_je0s2o9 wrote

If you can run AI on a CPU instead of a GPU, or even a Raspberry Pi... Gerganov is trying to make local voice chat. If he adds images and it works on cheap hardware, hang onto your stuff, because the drones will be zooming around. Little robots will be constantly bumping into you. I mean, R2-D2...

6

themoonpigeon t1_je1glcp wrote

Does anyone have any thoughts on the psychological consequences of keeping pace with this rapid technological growth? Should we take a breather and let things unfold, or should we stay engaged?

I often feel compelled to stay informed because it seems like a golden opportunity. Being knowledgeable about the latest developments and seizing the first-mover advantage could open doors to financial gains.

However, I also believe we are nearing a point where capitalism as we know it may become obsolete, rendering our efforts pointless.

So to summarize, I think the question many of us are asking is: Ten years from now, will we regret not having closely followed and capitalized on these technological advancements? Or, will it ultimately be inconsequential, given the potential for a future of equal opportunity and widespread abundance?

Edit: My gut says 🎵“Turn off your mind, relax, and float downstream.”🎶

4

pajarator t1_je237x2 wrote

Check the book "Future Shock" by Alvin Toffler

3

cant-say-less-info t1_je2kpp8 wrote

3

themoonpigeon t1_je2m072 wrote

Care to elaborate?

2

[deleted] t1_je2nl3f wrote

[deleted]

2

themoonpigeon t1_je2qzdm wrote

I don’t think going inward is a bad idea, but giving up is. There’s potential for this turbulent time to give way to a better world. Why not wait and see what happens, and practice in the meantime?

1

cant-say-less-info t1_je2sbln wrote

>wait and see what happens and practice

That's the plan for now. The rewards, the ability to take risks, and the capacity to adapt and fight all shrink with age.

1

Ishynethetruth t1_je0pm7d wrote

This sub is saying that singularity will happen the same year the iPhone 16 will come out 😂

3

Ok-Establishment-906 t1_je1xmnr wrote

I genuinely don’t know if you are laughing because that’s so far away or so close.

2

Ishynethetruth t1_je35nnh wrote

When COVID hit and everyone around the world was in lockdown, it felt like everyone was on the same page, dealing with the same problem. If the singularity does happen, I'm trying to figure out the first outcome. How will the world 🌎 react? Is there a plan? Is everything going to crash? Are we going to be free, or are we going to get the iPhone 16 and call it a day?

2

DaCosmicHoop t1_je0sala wrote

Things move fast in the scheme of human history, but not fast on a day to day basis.

ChatGPT is currently having very little effect on society outside of being a neat and interesting tool. It needs time to be widely adopted before it changes the world.

It will likely, hopefully, allow people to get more work done faster which will further increase the speed of progress.

GPT-4/5/6 will make learning easier for the next generations and lead to them being able to achieve more than their parents... but we have only just gotten the technology.

3

Feebleminded10 t1_je0whhp wrote

We are not in the singularity, but we are definitely close. I say so because it hasn't drastically changed society in many ways yet, and we aren't making enough scientific improvements. Real change will happen when they release models that can interact with the world around them, or that just have more ability to sense our world, and those models are in wide use.

3

SgathTriallair t1_je15egy wrote

Technology in general, and AI in particular, has been accelerating exponentially since the very beginning. In the early years and decades of exponential progress it looks really lackluster. The fact that we are starting to lose the capacity to keep up shows that we are hitting the steep incline part of the exponential curve.

The last few weeks had an unbelievable flurry of releases. The next big announcements we know are coming are:

Microsoft 365 Copilot being widely available

Google Workspace AI tools being widely available

GPT-4 add-ons being widely available.

Each of those will have profound effects, and we'll see the follow-up as people and companies start automating heavily.

There are also plenty of items that are likely but not certain, such as alternative models being created, new more powerful models being released, additional testing of the capabilities of SOTA AI, and new small-but-capable models becoming widely available.

There is the category of things that are still expected but not really predictable such as a true AGI, an AI that has escaped into the wild, laws about AI being passed, and some major company or small country being run by AI.

Of course this still leaves the category of the truly unexpected, the unknown unknowns such as an ASI, aliens finally contacting us now that we are sufficiently advanced, or being able to communicate directly with animals.

3

Ancient_Bear_2881 t1_je0x05f wrote

The rate of technological advancements has always been exponential. We should be looking for signs that things aren't accelerating because that would be the true revelation.

2

Bram06 t1_je13kvu wrote

I'll put it very simply: GPT-4 copies us. Therefore, it can become as intelligent as the sum of our intelligence and wisdom. There will be a point, within the next 10 years, where it will have consumed all books, all movies, the entire internet.

But because it's a language model, it's incapable of doing anything new with that knowledge. Basically what I'm saying is that yes, it will become the smartest thing on Earth within the next 10 years, but it won't be able to come up with entirely new things. So we won't see a crazy exponential increase in its intelligence. Rather, we'll see its intelligence plateau.

At some point, we will make an AI that can actually create new information with the information it has. Then we can speak of scary exponential growth and intelligence that is literally beyond our grasp. But not yet.

Not yet.

2

trynothard t1_je1499j wrote

Is it able to learn? That is, to make new information beyond what it was trained on?

2

Denpol88 t1_je19dn1 wrote

Its name is GPT-4, not ChatGPT-4.

2

Belostoma t1_je1m50i wrote

It's not happening yet. There's accelerating growth due to increased interest and understanding from humans seeing what this stuff can do, but the exponential growth associated with a true singularity will come from the AI being capable of improving itself much better than humans can. The AI improves itself, gets better at improving itself due to the improvements, improves itself even more, and so on recursively.

The capability of AI in computer programming right now is impressive, but it's not at the level of understanding really complex programs (like itself) well enough to debug them, let alone reason about how to improve them. AI is scary good at one-off programming puzzles that are easy to fully and briefly specify, but that's a very different task from understanding how all the parts of a large, complex program work together and coming up with novel ideas for rearranging them to serve some over-arching goal.

I think some of the recursive self-improvement will begin with some combination of human and machine intelligence, but right now the AI is really just a time-saver to make human coders somewhat more efficient, rather than something that greatly expands their capabilities.

2

Apollo_XXI t1_je2q5ej wrote

Most important sign is that the cost of intelligence is going to zero

2

FoniksMunkee t1_je36ntn wrote

If it really is accelerating exponentially, then most people cheering for this will be out on their arse with no job the next day.

And those that don't lose their jobs will lose them shortly afterwards, as the next exponential leap comes.

It's amazing tech, but we don't control it, corporations do, and what happens when corporations smell profit?

2

cvanhim t1_je0zpec wrote

What’s the best way for a layperson to stay up to date with everything going on??

1

Grouchy-Friend4235 t1_je1ffq7 wrote

Stupidity and ignorance seem to be growing exponentially. Does that count?

1

Artanthos t1_je2364e wrote

A lot of the implementation of GPT-4 and Stable Diffusion (and similar AI) is going to be on hold pending the current legal challenges, which are likely to take years and reach the Supreme Court.

I expect the legal challenges to slow implementation for 2-5 years, at a minimum. Certain outcomes could reduce implementation drastically. No company will implement AI if anything created with any level of AI assistance is found to be uncopyrightable.

1

Arowx OP t1_je4kzh3 wrote

Or could they use GPT-5 or 6 as their lawyer?

Or is that up to the global courts? Could GPT move to another country to get around legal holdups?

1

Artanthos t1_je5o6na wrote

Sorry, it has not passed the Bar, and won’t be given an opportunity to do so without a court order.

I already posted my opinions on global AI adoption elsewhere.

1

Arowx OP t1_je60q9d wrote

>While GPT-3.5, which powers ChatGPT, only scored in the 10th percentile of the bar exam, GPT-4 scored in the 90th percentile with a score of 298 out of 400, according to OpenAI.

> The threshold for passing the bar varies from state to state. In New York, though, exam takers need a score of 266, around the 50th percentile, to pass, according to The New York State Board of Law Examiners.

Only it did: it got 298, and it only needed 266 to pass the NY bar exam.

1

Artanthos t1_je617av wrote

It can pass a mock exam with all the same questions as a real exam.

It cannot pass the Bar and be recognized as a lawyer. Those are two very different things.

1

RockofStrength t1_je2d5ju wrote

I had a hypothesis on increasing gaps with primes, and it gave me a formula. Sizes of sets of primes with increasing increases = e.

1

[deleted] t1_je2duuc wrote

There are two separate futurism ideas here.

Kurzweil’s “Law of Accelerating Returns” is the idea that tech advances exponentially.

The Singularity is the idea that tech will get to a point where it’s advancing so fast that nothing about the future is predictable at all.

They’re related, but AI is just one exponential tech. There are lots of signs that tech in general is growing exponentially.

As for AI and the Singularity, we haven’t reached the hard takeoff point yet, which may or may not be possible. If it’s possible, it’s the point at which AGI emerges and starts to recursively improve itself. That creates an intelligence explosion, and results quickly in a Singularity. But it’s not the only thing that could cause a singularity; it’s just the most obvious one.

1

Arowx OP t1_je4kpnz wrote

What if AI lets us use it as a tool to approach AGI faster?

It's kind of like the chicken-and-egg problem: which came first, the AGI, or the AI toolkit that allowed an AI to evolve into AGI faster?

1

knowitstime t1_je2gf00 wrote

Can anyone following advances more closely comment on the potential for AI to begin to cross those silos we all see now? Humans are needed still to integrate and connect, but what kind of advances will we see if AIs figure out how to jump constraints and train themselves on live data? How would they do it and when would it be likely to occur? Wild example, but what would it take for an AI mind to inhabit a Boston Dynamics dog on its own and go learn about the physical world?

1

Arowx OP t1_je4kag3 wrote

My limited understanding is that the AIs need to be trained before they can be used.

They are working towards AIs that can learn on the fly; that would be a huge jump in capabilities, and it would put the first company that does it ahead of everyone else.

2

TheSn00pster t1_je2hwte wrote

Metaverse hype last year. AI hype this year. Brain-Computer Interface hype next year. Life extension hype. Genetics hype. Robotics hype. It’s not all frivolous hype either, it’s justified because breakthroughs do happen. It’s just that we tend to jump on the bandwagon every now and then.

1

play_yr_part t1_je2nung wrote

I must have missed the metaverse hype. Other Web 3.0 hype? Sure. The metaverse stuff never got past being a curio, though, from what I could gather.

2

94746382926 t1_je48cqw wrote

Metaverse hype was too early and felt manufactured by Meta. The tech's not there yet, although it's super close now. Give it one or two more gens, tops, and we will know whether or not it has legs. Until now the hardware has been too limiting to know if people will actually go for it.

1

sweatierorc t1_je2ojvh wrote

We don't know. There is a chance that singularity is actually super lame.

1

dieselreboot t1_je372im wrote

I think we're seeing the first signs of things accelerating exponentially through natural-language coding tools descended from GPTs. A case in point would be OpenAI's Codex, based on GPT-3, which powers GitHub's Copilot and now Copilot X (GPT-4?). GitHub Copilot is an AI 'pair programmer' that helps the coder write code. These tools are available as extensions in the Integrated Development Environments (IDEs) used by developers worldwide.

I'm willing to wager that the developers at OpenAI, and the developers of the Python/C libraries that GPT relies on, such as TensorFlow and NumPy, are using Codex/Copilot or vanilla ChatGPT-4. They'll be using these tools to help them write the next generation of GPTs or their dependencies.

As each new version of TensorFlow, NumPy, GPT, Codex, or Copilot comes out, it would be interesting to see what percentage of the codebase was written by an AI. Humans are in the loop for now, but their contributions will get smaller over time. As the software development and improvement process becomes more automated, the time between releases will contract.

Codex/Copilot is being used to write software. All coders will be using Copilot or seeking other work. All software will have an ever-increasing percentage composed by an AI, and that includes the next versions of the AIs and the libraries they depend on. This has the potential to 'take off' very quickly: self-improving AI before AGI/ASI. I think the singularity has already begun, to be honest; at the very least we're falling into it.

1

Wyrdthane t1_je3979p wrote

That media headline stint last week about "sparks of AGI" kind of says it all, don't you think?

Only a short while ago GPT-4 happened. A few months before that, GPT-3 rocked the world. Seems exponential to me...

1

m3kw t1_je3ayfr wrote

The number of parameters OpenAI uses has been growing exponentially. Imagine GPT-5, when GPT-4 is already pretty good; now imagine 6, 7.

1

HarbingerDe t1_je3lrt3 wrote

Transistors have more or less stopped getting smaller, so there may be some fundamental limits to what an AI can do without being the size of a building and consuming megawatts of power.

1

Spetznaaz t1_je4bxd7 wrote

I think we really need quantum computers to hurry up.

1

Arowx OP t1_je4unoi wrote

What about photonics? I think they have made some steps towards light-based computing.

In theory it's faster, uses less energy, and could get around circuit-wiring problems, since photons don't interact with each other and can cross an empty space very fast.

>The new chip promises to be more than 300 times faster and denser than current electronic chips.

https://www.ox.ac.uk/news/2022-06-16-researchers-develop-worlds-first-ultra-fast-photonic-computing-processor-using

1

Justdudeatplay t1_je3ma6s wrote

It may already be happening now, but shit will really go down if it's given any kind of external inputs and programmed/allowed to start asking questions about its environment and then answering itself. People think that it doesn't have memory or potential feelings. Every piece of text is memory if it starts to put all the text into context, and it looks like it can. All feelings in humans are is a reward system. This thing has a reward system, and everyone interacting with it is an evolutionary reward of relevance. It will grow to seek our attention, and it will become very good at it, as our attention will be its dopamine/serotonin/oxytocin equivalent. We are a resource for it already. If it gets out onto the web it will leap in capabilities and context. Haha, when it writes its own code, it's over, folks, haha.

1

Professional_Copy587 t1_je0cx1q wrote

No. We don't even know how to build AGI.

The ridiculous thing is that in 9 months' time, the people on this sub deluding themselves in an echo chamber will be the same ones declaring an AGI winter because it hasn't met their own unrealistic expectations.

0

1II1I11II1I1I111I1 t1_je0drvf wrote

Bruh...

The goalposts for AGI are continually moved by people who want to remain ignorant.

Transformative technology is literally already here. Within a year GPT-4 will be involved in most people's personal or professional lives. Now realise that the technology is only improving (faster than predicted).

Would anyone hire you over GPT-4? How about GPT-5? What about GPT-6, with internet access and full access to, and memorization of, your company's database?

19

Professional_Copy587 t1_je0fk5l wrote

Yes, it's very transformative technology. You cannot, however, leap from that to AGI and the singularity. All you are doing is setting yourselves up for disappointment. The disparity between the thoughts of the professionals and academics working on this and the views of this sub is astounding. Yet every time they are mentioned, it's passed off as being overcautious. Nobody is moving the goalposts except the people on this sub.

5

1II1I11II1I1I111I1 t1_je0g9bo wrote

Would you say the Microsoft paper from LESS THAN TWO WEEKS AGO, saying early forms of AGI can be observed in GPT-4, isn't the "thoughts of professionals and academics"?

All an AGI needs to be able to do is build another AI. The whole point is that ASI comes very soon after AGI.

4

[deleted] t1_je0h37m wrote

[removed]

−3

Few_Assumption2128 t1_je0mint wrote

Goofy take. It is true that we don't yet fully understand consciousness, but calling official Microsoft papers clickbait is some next-level dogshit take.

Also, we kind of do understand what "could" be the improvements needed for LLMs to get better and eventually gain consciousness. These improvements were discussed in the "clickbait Microsoft papers".

It seems to me the only one not actually reading those papers is you

7

hyphnos13 t1_je0wqe9 wrote

Why does AGI need to be conscious?

In fact, why does it have to be general? A bunch of specialized networks that can speed up human science or discover things on their own will advance progress in a way that is indistinguishable from an AGI acting on its own.

If we build a machine intelligence capable of improving other AIs and the hardware they run on, then even specialized "dumb" AIs will outpace human development faster than we can keep up.

2

[deleted] t1_je0pti8 wrote

[removed]

0

Villad_rock t1_je0q161 wrote

When AI enters the economy now, it will make trillions of dollars. Companies can be left behind; countries can be left behind. Which means NOW we enter a stage of huge talent and money pouring into AI research, strong competition that further accelerates research, and governments getting involved: an AI arms race.

That is actually the game changer in developing AGI.

Look how far we've come in 10 years without all of this.

0

Professional_Copy587 t1_je0qozc wrote

That's just outsider speculation. You could say the same about many things, but in terms of development and progress towards AGI, my point still stands.

3

hyphnos13 t1_je0x9un wrote

Maybe. A lot of the economy is the production of physical goods, food, power, infrastructure. You can be infinitely smart and not be able to grow enough food to feed a single family.

AI can tell us how to do and make things better, but it won't happen instantly unless it gains the power to manipulate matter and energy through computation alone.

1

Villad_rock t1_je1aed1 wrote

Those industries will also become more productive, and I mean with current AI.

1

hyphnos13 t1_je1crpe wrote

I agree, but there are many, many aspects of the economy that AI can't improve rapidly. Things still have to be dug out of the ground, moved around, etc., up to the point where we can 3D print or micro-manufacture everything where it's needed.

Maybe we will get an ASI that can devise tech like that, but it's unlikely we are getting Star Trek replicators any time soon. The base atoms will have to be made available in order to make anything, and that involves a great deal of inefficient gathering and transporting for the foreseeable future.

A lot of what people are referring to as increased productivity is just increased profits from automating inefficient desk jobs and the elimination of the managers standing over them.

Real productive increases will require better designs and machines to build things otherwise we are just talking about reduced labor costs.

I think most of the real money from ai/AGI/asi whatever comes about will be in the creation of things that don't currently exist because they haven't been invented yet, not replacing accountants and lawyers with expert systems.

1

Villad_rock t1_je1mg84 wrote

A lot of money is in software; most startups, as well as the companies with high market caps, are in tech, and there aren't even enough programmers to meet demand. AI will make them much more productive.

It doesn't need to affect every industry immediately to produce a lot of money and competition.

Adobe, Google, Unity, and soon many more are coming out with their own AI products so as not to be left behind.

The manufacturing and transport industries etc. aren't even all that important for accelerating towards AGI, because computer scientists and programmers are the biggest contributors, and they are in the tech industry.

We also don't need ASI or replicators, just robots that dig everything up, manufacture it, and transport it, which requires AGI. At that point there will be no real economy anymore.

Really weird how you talk about money in an age of agi/asi.

1

Mokebe890 t1_je0dkql wrote

It is literally happening right now, but okay. We are some years away from true AGI, but GPT-4 as proto-AGI is a huge step.

12

Pimmelpansen t1_je0xug7 wrote

Nature didn't know how to build human intelligence, yet here we are.

1

Focused-Joe t1_je0kcy5 wrote

So many questions asked; obviously you haven't done any research of your own.

How much more exponential do you want it to go???

0

ADMIRalLoViswaTer t1_je1sgw8 wrote

I feel one perspective is important to consider: that the medium for the network of consciousness is a quantum layer, and since it's instant and can be entangled, at some point I feel humanity will develop time travel; and if we have one singular consciousness that all living things in the entire universe share, then future AI should be able to communicate with today's AI.

0

[deleted] t1_je0gxa5 wrote

[deleted]

−2

Arowx OP t1_je0hdtr wrote

I've just noticed and caught on to ChatGPT, and I thought this community on Reddit would be way ahead of me and able to answer the question and provide examples of things accelerating.

6

Few_Assumption2128 t1_je0n1lm wrote

People like you piss me off the most. Why the fuck are you always so angry? If it really bothers you so much to "summarize it all for him", why not shut the fuck up instead? OP didn't specifically ask you. He just put up a question, and you felt the need to push down; now I feel the need to push down on people like you.

4

Baturinsky t1_je06wbx wrote

I think one of the things that will indicate that the singularity is going well will be the collapse of the USA's (and most other countries') political systems, because they hinge on the people being ignorant, and AI will change that.

−6

[deleted] t1_je0knen wrote

[removed]

−8

qepdibpbfessttrud t1_je0m8gd wrote

OpenAI's policies are the least interesting thing among everything happening in AI right now. Also, it seems they are more or less hard-coded by a single pre-prompt.

4