WarmSignificance1

WarmSignificance1 t1_jeflzrc wrote

No, it’s because manual labor fucking sucks. It’s honest and necessary work, but it’s not easy work. There’s a reason blue-collar folks push their children into higher education: sitting at a desk is so much easier than using your body to make a living.

11

WarmSignificance1 t1_je6jpjt wrote

Humans are trained on a fraction of the data that LLMs are. That actually does matter, because it raises the question: what are LLMs missing?

It doesn’t inherently mean that you can’t get a very powerful system with the current paradigm, but it does mean that you may be missing a better way of doing things.

6

WarmSignificance1 t1_je1pdz9 wrote

I think that ChatGPT has shown how bad so many people are at Googling. And granted, sometimes ChatGPT is just far superior.

But when people say things like "I can ask it how to use a library and it's made me 10x faster than Google", it just blows my mind. I can usually find the official docs and figure out how to use a library in about the same time it takes ChatGPT to tell me, without the risk of errors.

12

WarmSignificance1 t1_jdv1usr wrote

Part of intelligence is the ability to learn efficiently. For example, an expert programmer doesn't need to see hundreds of millions of examples to learn a new programming language. They can read the docs, play around with it a bit, and then apply the experience and mental models they've built up over time to the new language.

LLMs fall over in this same situation.

1

WarmSignificance1 t1_jdrfdmj wrote

There is no future world in which having more assets is a bad thing. The worst-case scenario, which I find extremely unlikely, is that assets no longer have value. In a situation like that, we'll either all be dead or have everything we need.

Just think about the technology that we have invented over the last 100 years. Someone born 100 years ago witnessed massive changes, and yet life is still pretty much the same as it was back then, just a lot better. I think it's quite likely we will experience the same thing.

16

WarmSignificance1 t1_jdhrkof wrote

Reply to comment by race2tb in [N] ChatGPT plugins by Singularian2501

Well, now you’re conflating two different things. A unified experience is always good. That’s why mobile took over: instead of having to browse to various websites, you just tap your apps, which all sit next to each other.

Natural language seems highly inefficient for lots of things. I don’t want to type to my bank. I want to open up an app/website and click a button to make a transfer.

3

WarmSignificance1 t1_jdhi4rj wrote

Reply to comment by race2tb in [N] ChatGPT plugins by Singularian2501

I get the concept, and I can see this working for a small subset of websites. But have you ever watched an average person interact with a website? A non-deterministic GUI will absolutely kill UX, in my opinion. Not to mention that many businesses want far more control over what they display to users than an LLM will afford.

2

WarmSignificance1 t1_jdeqxg8 wrote

Reply to comment by race2tb in [N] ChatGPT plugins by Singularian2501

Seems like trying to fit a square peg into a round hole. Why would you want to do this instead of having a static website?

If we’re talking about dynamic websites that’s a whole different ballgame, and LLMs seem even less appropriate for them.

4

WarmSignificance1 t1_jcwloed wrote

Completely agree. LLMs will probably plateau at some point. The end game for that approach is very unlikely to be ASI; more likely it will be something that can do most human jobs. If we don't have any additional approaches to pursue by that point, we're in trouble.

I absolutely think that a slow takeoff is way worse for humanity than a fast takeoff.

1

WarmSignificance1 t1_ja9jnft wrote

You don’t have to understand the physics behind nuclear weapons to argue that they’re dangerous. Indeed, the people in the weeds are not always the best at taking a step back and surveying the big picture.

Of course making AI development closed source is ridiculous, though.

−1

WarmSignificance1 t1_j874rpw wrote

The paper that you linked was withdrawn for severe technical flaws.

However, even assuming that AI can modify source code, this is still a far cry from recursive self-improvement.

Improving AI requires novel ideas, not source code modification. Source code is the medium through which those ideas get implemented, but simply modifying it is not sufficient to make actual improvements.

8