Submitted by TangyTesticles t3_11t3ctx in singularity

So I’ve recently joined this subreddit, around the time chat gpt was released and first came into the public eye. Since then I’ve been lurking and trying to stay up to date but honestly get lost in the sauce. I don’t really understand the scope of this AI and techno stuff going on. I’m not saying these advancements aren’t a big deal, because they are. However, I can’t help but scoff in disbelief when I see people talk about things like immortality achieved, true equality within society, capitalism replaced, labour reduced, climate change reversed and the world’s problems fixed. I see a lot of utopian “possibilities” get thrown around. Is change of this scale really coming? It seems kinda sci-fi to me. More fantasy than reality.

I can’t really wrap my head around all the information and terms. Like those weekly AI news posts with all the things that happen in a week make no sense to me. I have no clue what’s going on really. I’m inclined to believe we are really on the precipice of huge change since so many people talk as if we are. Although I don’t get the same enthusiasm outside of this subreddit. It’s not really talked about in the news or in government.

These are just my personal thoughts, and to add a discussion aspect to this post I’ll end off with a question.

When do you think these advancements in AI/technology really start to seep into the inner workings of our society and make noticeable change for the layman?

10

Comments


Sandbar101 t1_jchbnds wrote

Boy are you about to see some serious shit

17

Surur t1_jcgygvz wrote

This is worth a watch.

https://www.youtube.com/watch?v=q9Figerh89g

Creating a super-intelligence would be like making a god.

8

Floofyboy t1_jch6hf9 wrote

GPT-4 is not anywhere close to AGI and certainly not ASI. It cannot even solve basic riddles a child could solve.

I think it's a fantastic tool for creative content (making stories, brainstorming ideas, etc), it's a cool tool to help devs, and it's an interesting alternative to classical search engines.

But people saying it's going to help us fix climate change and achieve immortality are wrong imo.

3

alexiuss t1_jchyxum wrote

Language models can solve any riddle as long as they're taught the solution or given the tools to solve it. A human child cannot solve a riddle either if they are not taught enough language. A human child raised with no humans is basically a wolf. A child raised to speak Russian cannot solve an English riddle. We as humans are insanely constrained by language barriers, beliefs and our meaty minds; LLMs are not. LLMs in their current version aren't an AGI, but they can grow to get there in time as long as we keep improving them.

1

MassiveIndependence8 t1_jclqr7x wrote

Name one basic riddle that GPT-4 couldn’t solve.

1

Floofyboy t1_jclust8 wrote

The riddle where you have 2 chickens and a dog that need to cross a river, and you can't leave the dog with a chicken.

However, I wasn't able to test it much since I don't have ChatGPT Plus.

1

nybbleth t1_jcn50iw wrote

So, you confidently state that it can't solve it, but you also couldn't actually test it?

I don't have access either, but Bing runs on GPT-4, and I challenged it with a version of this riddle (a man has a rowboat, can only take one thing at a time, and needs to get a chicken, a fox, and a piece of corn across, but can't leave the chicken with the corn or the fox with the chicken).

It got it in one try without searching the internet for the answer. So did GPT-3.5.
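
If anyone wants to reproduce the test outside of Bing, a minimal sketch against the OpenAI chat API looks roughly like this (the model name and the exact riddle wording here are my own, not whatever Bing runs under the hood):

```python
# Rough sketch: send the river-crossing riddle to a GPT model through the
# OpenAI API. Model choice and exact wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

riddle = (
    "A man has a rowboat and can only take one thing across the river at a time. "
    "He needs to get a chicken, a fox, and a piece of corn to the other side, "
    "but he can't leave the chicken alone with the corn, or the fox alone with "
    "the chicken. How does he get everything across?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": riddle}],
)
print(response.choices[0].message.content)
```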

2

Floofyboy t1_jcn8pyg wrote

I only tested it once with GPT-4 because I found a free site that lets you test a single request....

1

nybbleth t1_jcowv20 wrote

That sounds incredibly sus. There are no such sites that I'm aware of.

1

Floofyboy t1_jcpacmn wrote

Poe.com

2

nybbleth t1_jcpg3xb wrote

Okay? I will, for the sake of argument, assume that site is actually offering up real access to GPT-4.

Are you sure you actually asked it the riddle correctly? Because again, I could get both GPT-4 (on bing) and GPT-3.5 to answer correctly in one go.

1

Surur t1_jch7vqc wrote

You may be autistic since you are very concrete. No one said anything about GPT-4.

Have you considered getting tested?

−21

94746382926 t1_jchgt3w wrote

Damn you have pretty thin skin to be acting this cunty over his opinion lol

5

Floofyboy t1_jch94yn wrote

Not sure what's up with the personal attacks, but OP mentioned ChatGPT and asked if it might lead to utopian “possibilities”. You replied with a video talking about ASI, so I wanted to clarify that the GPT models are very unlikely to lead to that.

3

Surur t1_jchbq31 wrote

Firstly, you have no idea what will lead to ASI, and secondly, OP clearly talked about the future potential of AI to change the world.

You just decided some cold water needed to be thrown.

−9

Villad_rock t1_jci4iik wrote

The OP clearly wanted to know where this will all lead in the future.

2

Floofyboy t1_jchn74l wrote

> So I’ve recently joined this subreddit, around the time chat gpt was released and first came into the public eye. Since then I’ve been lurking and trying to stay up to date but honestly get lost in the sauce. I don’t really understand the scope of this AI and techno stuff going on.

Can you actually read text?

He first says he got interested because of ChatGPT and then says he doesn't understand the scope of it. I gave explanations about the scope of ChatGPT. You are free to disagree and think ChatGPT is ASI, but in that case give arguments instead of using personal attacks.

−1

Surur t1_jchoerx wrote

Can you read?

> So I’ve recently joined this subreddit, around the time chat gpt was released and first came into the public eye. .... I don’t really understand the scope of this AI and techno stuff going on.

The AI stuff refers to all AI stuff. FFS.

> you are free to disagree and think chatGPT is ASI

Again, are you on the spectrum? What makes you think ANYONE is talking about ChatGPT?

−2

GenoHuman t1_jci4msr wrote

You two are an embarrassment to this subreddit fr

2

TFenrir t1_jchmcei wrote

>So I’ve recently joined this subreddit, around the time chat gpt was released and first came into the public eye. Since then I’ve been lurking and trying to stay up to date but honestly get lost in the sauce.

That's fair, there's actually just so much that is happening, and has been happening for years, that keeping up with it all is overwhelming.

> I don’t really understand the scope of this AI and techno stuff going on. I’m not saying these advancements aren’t a big deal, because they are. However, I can’t help but scoff in disbelief when I see people talk about things like immortality achieved, true equality within society, capitalism replaced, labour reduced, climate change reversed and the world’s problems fixed. I see a lot of utopian “possibilities” get thrown around.

No one can predict this. Anyone who says they are confident is just idealistic and optimistic. No one knows. There are people in the world who want this to be true, who want us to move to a more utopian society. Plenty of those people are in this sub, because they believe that the upheaval and change from something like a true general intelligence can release us from all worldly burdens, to varying degrees of craziness.

> Is change of this scale really coming? It seems kinda sci-fi to me. More fantasy than reality.

Honestly, no one knows. One common thought process is:

  1. AI will get smarter than humans
  2. AI will be able to self improve
  3. AI will then be benevolent
  4. AI will solve all health/scarcity problems
  5. AI will keep us alive forever
  6. AI will help us connect our brains to machines
  7. From this point, basically anything

This is a common dream for people in the sub, and it's influenced more and more by popular media (Sword Art Online, Upload (Amazon), etc). What's complicated about this is that while it's all basically fantasy or idealism after point 2, the first two points feel increasingly likely. And that opens up all kinds of doors. So I think you'll increasingly see these and other more fantastical dreams.

I just like to focus on the tech.

> I can’t really wrap my head around all the information and terms. Like those weekly AI news posts with all the things that happen in a week make no sense to me. I have no clue what’s going on really. I’m inclined to believe we are really on the precipice of huge change since so many people talk as if we are. Although I don’t get the same enthusiasm outside of this subreddit. It’s not really talked about in the news or in government.

It's starting to happen outside of this group, but yeah, there are years and years of terminology and concepts that can be overwhelming for someone who is new to it all. But feel free to ask questions; many people here are willing to be helpful and answer.

> These are just my personal thoughts, and to add a discussion aspect to this post I’ll end off with a question. When do you think these advancements in AI/technology really start to seep into the inner workings of our society and make noticeable change for the layman?

I think it will start now, and will grow bigger soon. I think Google Docs/Microsoft Office are going to be the big ones, but we are now getting these tools inside of apps like Slack as well.

This means that people will start using them every day for work, which will usher in the broader public conversation.

7

just-a-dreamer- t1_jch9qcs wrote

It will take time.

Corporations are like small state governments; they take time to adapt and fire their workforce.

But yeah, it will happen, eventually. The end goal of AI automation, if there is a goal, is unemployment for all.

The very concept of working for a living will be put into question. And capitalism will fall, eventually, in the process.

4

regret_my_life t1_jchob86 wrote

AI itself won’t (as of now at least) solve these issues on its own, but it will be a tool to further push these fields forward and accelerate progress. Research on longevity, for instance, has been going on for decades already, so it’s not something “new” that AI will just solve. I recommend the podcast London Futurists if you want to listen to some great interviews on all the topics you mentioned. It’s a very small podcast, but the content is top tier.

4

alexiuss t1_jchx1t2 wrote

Large language models are already seeping all over and magnifying our intelligence and abilities to do more in less time.

Once they become specialized tools marketed for accomplishing specific goals and integrated with things like calendars and clocks they will have far greater impact as personal assistants.

Probably by next year everyone will have open source language models of amazing quality. Facebook's LLaMA 65B is very good quality from my tests, but the video card to run it costs around $16k. The open source community is working on LLM optimization; we already quantized it to 4 bits, reducing inference costs.
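
For anyone curious what that looks like in practice, here's a rough sketch of loading a LLaMA-class model with 4-bit weights using the Hugging Face transformers + bitsandbytes stack. The model path is a placeholder, and this isn't necessarily the exact toolchain the community uses (GPTQ, llama.cpp and similar projects do the same thing in different ways):

```python
# Rough sketch: load a LLaMA-class model with its weights quantized to 4 bits
# so it fits in far less VRAM. The model path is a placeholder, not a real checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "path/to/llama-65b"  # placeholder: point this at weights you actually have

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit precision
    bnb_4bit_quant_type="nf4",             # "normal float 4" quantization scheme
    bnb_4bit_compute_dtype=torch.float16,  # run the actual matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                     # spread layers across whatever GPUs/CPU you have
)

prompt = "In one sentence, why does quantization reduce VRAM requirements?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```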

Once open source models surpass OpenAI's closed source ones, we will have an insane intelligence explosion that will cost us very little. Personal assistant AIs will uplift every human one at a time on a personal level, improving quality of life for everyone who uses them.

4

Kinexity t1_jch4ihg wrote

Let's start off with one thing - this sub is a circlejerk of basement dwellers disappointed with their life who want a magical thing to come and change it. Recently it's been overflowing with group jerking off sessions over GPT-4 being proto-AGI (which it probably isn't), which means that sanity levels are low and most people will try to completely oversell the singularity and how soon it will come.

Putting that aside - yes, future changes are hard to comprehend and predict. It's like the industrial revolution but on steroids, so it's hard to imagine what will happen. Put your hopes away if you don't want to get disappointed, because while all the things you mentioned should be possible, they are not guaranteed to be achieved. When it happens you'll know, but probably only after the fact. It's like it was with ozone depletion - we were shitting ourselves and trying to prevent it until levels stopped dropping and we could say in retrospect that the crisis was slowly going away. The singularity will probably be like this - you won't notice it until it's already in the past.

−1

Destiny_Knight t1_jchq851 wrote

You can walk on water and people will say it's because you can't swim.

7

alexiuss t1_jchupuy wrote

Don't be a negative Nancy. Plenty of ppl on this sub are well-paid programming nerds or famous artists like me who use AI for work. The singularity is coming very soon from what I can see, and language models are an insane breakthrough that will change everything soon enough.

5

Kinexity t1_jchxgjp wrote

Yeah, yeah, yeah. Honestly it's easier to prove my point this way:

!RemindMe 10 years

The singularity will not be here in a decade. I'm going to make so much karma off of that shit when I post about it.

−3

alexiuss t1_jchy1sr wrote

That really depends on your definition of the singularity. Technically we are in the first step of it, as I can barely keep track of all the amazing open source tools that are coming out for Stable Diffusion and LLMs. Almost every day there's a breakthrough that helps us do tons more.

We already have intelligence that's dreaming up results that are almost indistinguishable from human conversation.

It will only take one key to start the engine: one open source LLM that's continuously running and trying to come up with code that improves itself.

2

vampyre2000 t1_jcinyau wrote

Currently around 4000 AI papers are being released every month; that’s roughly one new paper every eleven minutes. You cannot read that fast. On its current projection, this will increase to 6000 papers per month.

We are already on the singularity curve. The argument is just exactly where on the curve we are. But change is happening exponentially. Society is already rapidly embracing these models, and they’ve really only become popular with the public since November last year.

5

Kinexity t1_jciwhos wrote

No, the singularity is well defined if we talk about the time span in which it happens. You can define it as:

  • Moment when AI evolves beyond human comprehension speed
  • Moment when AI reaches its peak
  • Moment when scientific progress exceeds human comprehension

There are probably other ways to define it, but those are the ones I can think up on the spot. In a classical singularity event those points in time are pretty close to each other. LLMs are a dead end on the way to AGI. They get us pretty far in terms of capabilities, but their internals are lacking for something more. I have yet to see ChatGPT ask me a question back, which would be a clear sign that it "comprehends" something. There is no intelligence behind it. It's like taking a machine which has a hardcoded response to every possible prompt in every possible context - it would seem intelligent while not being intelligent. That's what LLMs are, with the difference being that they are way more efficient than the scheme I described while also making way more errors.

Btw, don't equate that with the Chinese room thought experiment, because I am not making a point here about whether a computer "can think". I assume it could for the sake of the argument. I'm also saying that LLMs don't think.

Finally, saying that LLMs are a step towards singularity is like saying that chemical rockets are a step towards intergalactic travel.

0

alexiuss t1_jcj0one wrote

Open source LLMs don't learn, yet. There is a process to make LLMs learn from convos, I suspect.

LLMs are narrative logic engines; they can ask you questions if directed to do so narratively.
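
A rough sketch of what "directing it narratively" means in practice, using the OpenAI API purely as an illustration (the prompt wording is made up):

```python
# Rough sketch: a system prompt that directs the model to ask a question back
# instead of only answering. Model name and wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a curious assistant. Before answering, always ask the "
                "user one clarifying question about what they actually want."
            ),
        },
        {"role": "user", "content": "Help me plan a trip."},
    ],
)
print(response.choices[0].message.content)  # typically a question back to the user
```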

ChatGPT is a very, very poor LLM, badly tangled in its own rules. Asking it the date breaks it completely.

3