WarmSignificance1
WarmSignificance1 t1_jeflzrc wrote
Reply to comment by YunLihai in Today I became a construction worker by YunLihai
No, it’s because manual labor fucking sucks. It’s honest and necessary work, but it’s not easy work. There is a reason why blue-collar folks push their children into higher education. Sitting at a desk is so much easier than using your body to make a living.
WarmSignificance1 t1_jeeqi21 wrote
Reply to Today I became a construction worker by YunLihai
RemindMe! 2 years “AI will have fully replaced zero job categories”
WarmSignificance1 t1_jecvmmm wrote
Reply to comment by chrisc82 in Where do you place yourself on the curve? by Many_Consequence_337
There isn't a new breakthrough every day. Not even close.
WarmSignificance1 t1_jecpvw8 wrote
Reply to comment by SkyeandJett in When will AI actually start taking jobs? by Weeb_Geek_7779
Seems like a terrible business decision if true. Doesn't really matter though, we literally have employment statistics. If 1 in 4 companies actually replace workers, we will know about it immediately.
WarmSignificance1 t1_jec0890 wrote
Reply to comment by SkyeandJett in When will AI actually start taking jobs? by Weeb_Geek_7779
Holy fuck stop posting this. Look at the methodology. It’s a shit survey meant to get clicks.
WarmSignificance1 t1_je6jpjt wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Humans are trained on a fraction of the data that LLMs are. That actually does matter, because it raises the question: what are LLMs missing?
It doesn’t inherently mean that you can’t get a very powerful system with the current paradigm, but it does mean that you may be missing a better way of doing things.
WarmSignificance1 t1_je58y3c wrote
Reply to comment by _sbmaruf in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Looks interesting. Have you tried any of the GPT models against this benchmark?
WarmSignificance1 t1_je57s2a wrote
Reply to comment by trajo123 in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
So I actually think that senior devs copy and paste a lot less than everyone imagines.
I can’t remember the last time I copied code from StackOverflow. Actually, I rarely even use StackOverflow at this point. Going directly to the official docs is always best.
WarmSignificance1 t1_je1pdz9 wrote
Reply to comment by cegras in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
I think that ChatGPT has shown how bad so many people are at Googling. And granted, sometimes ChatGPT is just far superior.
But when people say things like “I can ask it how to use a library and it’s made me 10x faster than using Google”, it just blows my mind. I can usually find the official docs and figure out how to use a library in about the same time it takes ChatGPT to tell me, without the risk of errors.
WarmSignificance1 t1_jdv1usr wrote
Reply to comment by LifeScientist123 in [D] GPT4 and coding problems by enryu42
Part of intelligence is the ability to learn efficiently. For example, an expert programmer doesn’t need to see hundreds of millions of examples to learn a new programming language. They can read the docs, play around with it a bit, and then apply the experience and mental models they’ve built up over time to the new language.
LLMs fall over in this same situation.
WarmSignificance1 t1_jdrfdmj wrote
There is no future world in which having more assets is a bad thing. The worst case scenario, which I find to be extremely unlikely, is that assets no longer have value. In a situation like that, we'll either all be dead or have everything we need.
Just think about the technology that we have invented over the last 100 years. Someone born 100 years ago today witnessed massive changes, and yet life is still pretty much the same as it was back then, just a lot better. I think it is quite likely we will experience the same thing.
WarmSignificance1 t1_jdhrkof wrote
Reply to comment by race2tb in [N] ChatGPT plugins by Singularian2501
Well, now you’re conflating two different things. A unified experience is always good. This is why mobile took over: instead of having to browse to various websites, you just tap your apps, which are all sitting next to each other.
Natural language seems highly inefficient for lots of things. I don’t want to type to my bank. I want to open up an app/website and click a button to make a transfer.
WarmSignificance1 t1_jdhi4rj wrote
Reply to comment by race2tb in [N] ChatGPT plugins by Singularian2501
I get the concept, and I see this working for a small subset of websites. But have you seen an average person interact with a website before? Having a non-deterministic GUI will absolutely kill UX, in my opinion. Not to mention that many businesses want far more control over what they display to users than an LLM will afford.
WarmSignificance1 t1_jdhh6w9 wrote
Reply to comment by yokingato in [N] ChatGPT plugins by Singularian2501
I just don't see replacing GUIs with LLMs making sense in general.
Do people really want to access their bank via an LLM? I see that being an inferior user experience.
WarmSignificance1 t1_jdeqxg8 wrote
Reply to comment by race2tb in [N] ChatGPT plugins by Singularian2501
Seems like trying to fit a square peg into a round hole. Why would you want to do this instead of having a static website?
If we’re talking about dynamic websites that’s a whole different ballgame, and LLMs seem even less appropriate for them.
WarmSignificance1 t1_jcwloed wrote
Completely agree. LLMs will probably plateau at some point. The end game for that approach is very unlikely to be ASI, but it will be something that can do most human jobs. If we don’t have any additional approaches to pursue by that point, we’re in trouble.
I absolutely think that a slow takeoff is far worse for humanity than a fast takeoff.
WarmSignificance1 t1_ja9jnft wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
You don’t have to understand the physics behind nuclear weapons to argue that they’re dangerous. Indeed, the people in the weeds are not always the best at taking a step back and surveying the big picture.
Of course making AI development closed source is ridiculous, though.
WarmSignificance1 t1_j8g4iu5 wrote
Reply to comment by Iffykindofguy in The new Bing AI hallucinated during the Microsoft demo. A reminder these tools are not reliable yet by giuven95
Correct, as far as you know.
The fact that it hallucinated during a pre-recorded demo that was obviously vetted by multiple people demonstrates how hard it is to actually determine when it is wrong.
WarmSignificance1 t1_j874rpw wrote
The paper that you linked was withdrawn for severe technical flaws.
However, assuming that AI can modify source code, this is still a far cry from recursive self-improvement.
What is needed to improve AI is genuinely novel ideas, not source code modification. Source code will be the medium through which those ideas are implemented, but simply modifying it is not sufficient to make actual improvements.
WarmSignificance1 t1_jefx501 wrote
Reply to What do I do? by SprayOnMe43
Stop overthinking it. Do what you’re interested in and the rest will usually follow.