
alexiuss t1_jchupuy wrote

Don't be a negative Nancy. Plenty of people on this sub are well-paid programming nerds or famous artists like me who use AI for work. The singularity is coming very soon from what I can see, and language models are an insane breakthrough that will change everything soon enough.

5

Kinexity t1_jchxgjp wrote

Yeah, yeah, yeah. Honestly it's easier to prove my point this way:

!RemindMe 10 years

The singularity will not be here in a decade. I'm going to make so much karma off of that shit when I post about it.

−3

alexiuss t1_jchy1sr wrote

That really depends on your definition of the singularity. Technically we're in the first step of it, as I can barely keep track of all the amazing open-source tools coming out for Stable Diffusion and LLMs. Almost every day there's a breakthrough that helps us do tons more.

We already have an intelligence that dreams up results almost indistinguishable from human conversation.

It will only take one key to start the engine: one open-source LLM running continuously and trying to write code that improves itself.
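
For illustration, here's a minimal sketch of the kind of loop I mean; `ask_llm` and `benchmark` are hypothetical stand-ins, not any real API:

    # Hypothetical sketch of a self-improving loop: the LLM proposes a new
    # version of a program, and the proposal is kept only if it scores
    # better on a fixed benchmark. Every name here is a stand-in.
    def self_improve(source, ask_llm, benchmark, steps=100):
        best_score = benchmark(source)
        for _ in range(steps):
            # Ask the model for a rewritten version of the current source.
            candidate = ask_llm(f"Improve this program:\n{source}")
            try:
                score = benchmark(candidate)  # evaluate on a fixed test suite
            except Exception:
                continue  # discard candidates that crash outright
            if score > best_score:  # keep strict improvements only
                source, best_score = candidate, score
        return source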

2

vampyre2000 t1_jcinyau wrote

Currently about 4000 AI papers are released every month; that works out to roughly one new paper every 11 minutes. You cannot read that fast. On its current trajectory this is projected to rise to 6000 papers per month.
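
Quick back-of-the-envelope check of that rate (assuming a 30-day month):

    # Back-of-the-envelope check of the paper rate, assuming a 30-day month.
    papers_per_month = 4000
    minutes_per_month = 30 * 24 * 60               # 43,200 minutes
    print(minutes_per_month / papers_per_month)    # ~10.8 minutes per paper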

We are already on the singularity curve; the argument is only about exactly where on the curve we are. But change is happening exponentially. Society is already rapidly embracing these models, and they've only really been popular with the public since November last year.

5

Kinexity t1_jciwhos wrote

No, the singularity is well defined if we talk about the time span in which it happens. You can define it as:

  • Moment when AI starts evolving faster than humans can comprehend
  • Moment when AI reaches its peak
  • Moment when scientific progress exceeds human comprehension

There are probably other ways to define it, but those are the ones I can think of on the spot. In a classical singularity event those points in time are pretty close to each other.

LLMs are a dead end on the way to AGI. They get us pretty far in terms of capabilities, but their internals are too limited to get us anything more. I have yet to see ChatGPT ask me a question back, which would be a clear sign that it "comprehends" something. There is no intelligence behind it. It's like a machine with a hardcoded response to every possible prompt in every possible context: it would seem intelligent while not being intelligent. That's what LLMs are, the difference being that they are far more efficient than the scheme I described while also making far more errors.
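
To make that toy machine concrete (a hypothetical sketch, not any actual system):

    # Toy version of the "hardcoded response" machine: a lookup table keyed
    # on (context, prompt). It looks fluent on inputs it covers, but there
    # is no reasoning anywhere, and the table cannot cover every possible
    # input.
    RESPONSES = {
        ("", "Hello"): "Hi there! How can I help?",
        ("Hi there! How can I help?", "What's 2+2?"): "2+2 is 4.",
    }

    def reply(context, prompt):
        # Unseen (context, prompt) pairs simply have no entry.
        return RESPONSES.get((context, prompt), "I have no response for that.")

    print(reply("", "Hello"))  # -> "Hi there! How can I help?"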

Btw, don't equate that with the Chinese room thought experiment, because I'm not making a point here about whether a computer "can think". For the sake of the argument I assume it could; I'm only saying that LLMs don't think.

Finally, saying that LLMs are a step towards the singularity is like saying that chemical rockets are a step towards intergalactic travel.

0

alexiuss t1_jcj0one wrote

Open-source LLMs don't learn, yet. I suspect there is a process to make LLMs learn from conversations.

LLMs are narrative logic engines; they can ask you questions if directed to do so narratively.
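
For example, a minimal sketch of directing a model to ask questions back; `generate` is a hypothetical stand-in for whatever completion call you use:

    # Hypothetical sketch: steer a model into asking follow-up questions by
    # framing that behaviour in the prompt. `generate` stands in for any
    # text-completion function.
    def curious_reply(generate, history, user_msg):
        prompt = (
            "You are a curious interviewer. After answering, always ask the "
            "user one follow-up question about what they said.\n\n"
            + "\n".join(history)
            + f"\nUser: {user_msg}\nAssistant:"
        )
        return generate(prompt)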

ChatGPT is a very, very poor LLM, badly tangled in its own rules. Asking it for the date breaks it completely.

3