Comments

SFDeltas t1_je8y9n7 wrote

I consulted, ahem, a Magic 8 Ball for some responses to your discussion topic.

"Ah, the weekly ChatGPT hype thread. It's become a ritual at this point. 🙄"

"Not another ChatGPT discussion! Can we please just focus on other ML advancements?"

"Honestly, it's not the technology that's the problem, it's people overhyping it. ChatGPT has its uses, but it's not going to replace every job out there."

"It's frustrating that people outside the field make such strong statements without understanding the limitations of current AI systems."

"You know it's bad when your mom starts asking you about ChatGPT and how it's going to change the world."

"Hype is just part of the game. Remember the craze around deep learning a few years back? This too shall pass."

"I can't wait for the next big thing in ML to come along so we can finally move on from ChatGPT."

"The hype is annoying, but you have to admit that ChatGPT is a major milestone in NLP. Let's not completely dismiss its achievements."

"ChatGPT has its fair share of fans and critics on this sub, but it's important to stay grounded and remember that it's just one tool among many."

"I'm just waiting for the day when the ChatGPT hype dies down and we can go back to our regular, insightful discussions on r/machine_learning."

23

Maximus-CZ t1_je8ya68 wrote

>Aren’t you tired of reading about ChatGPT/GPT everywhere

So your solution is to start a discussion about it?

95

PussyDoctor19 t1_je90fas wrote

I feel like all the crypto influencers moved on to LLMs. They don't care about accuracy or value, just views and how those translate into cash.

21

Swordfish418 t1_je91o14 wrote

For once, a topic so important that no amount of hype feels like too much.

13

ML4Bratwurst t1_je925ms wrote

Yeah, it's starting to get annoying because all the cool subs I joined are now full of annoying ChatGPT kids.

8

sEi_ t1_je93rsp wrote

Nobody knows what is going on. Not even the 'creators', so my input is as valid as everybody else's.

And yes, I see big paradigm shifts ahead; some have already happened and more are happening now, concerning people, work, etc.

There is much to debate and much to do.

But the focus should, imho, move away from the technical side of AI. Everybody now knows that current large-model AIs have some kind of power we can use or misuse. What that power is and how to use it, we still have to find out.

So we need to put our "thinking hats" on and look more at what kind of new society we want, since AI could help us change society for the better or for the worse. Especially for the worse if big tech is allowed a monopoly on development and deployment. Anyway, this monopoly is close to being nullified, and 'they' are scared.

I am sure technological advances will soon make it possible to build, train, and run big models like ChatGPT on a distributed network, and when that happens the monopoly is broken. It is only a matter of (short) time, and again, they know that.

As of now, the only 'power' big tech has is the infrastructure to create and run big models. The technology and code are out in the open, so the only thing they really have is 'a big file', and now they get to earn billions on the 'sheep'.

The big money is earned by holding the monopoly and leasing access to other companies, which in turn have to earn what little money there is by offering paid inference on some in-house implementation.

If 'big tech' is the only one jumping on the wagon, we are sure to keep the status quo, where you work for some boss so he can get car number three or a villa somewhere.

I know most of you love money, depend on it, and have "earning money" as a goal. But there is a possible life where money is obsolete, as useless as the paper or numbers that we adore like a deity.

Ye ye, you need to open your mind, but many do not, and instead laugh at statements like the above because they (not blaming them) are so stuck in the dogma that they do not even dare to think of alternatives.

AI is not a goal, it's a tool.

Now is a good occasion to revise some old and outdated ways of doing things, while society stands in front of big changes. No doubt there will be changes; what kind, only time can tell.

Emancipate yourself. (nothing new there btw.)

−1

noobgolang t1_je97p0k wrote

I literally stopped using Google and just started learning C++ with ChatGPT as the teacher.

2

cc-test t1_je97pxs wrote

>Not even the 'creators', so my input is as valid as everybody else's.

Not really, and the fact you have to preface your comment with this says volumes about the quality of the content that follows it.

7

cc-test t1_je983fw wrote

I wouldn't use ChatGPT as a teacher, given its issues with accuracy and hallucinations. Without a good understanding of C++, how do you know that what it's providing is correct and makes sense as part of a larger codebase?

Even Copilot, which has access to the entire repo for context, still churns out nonsense on a regular basis that looks like the right solution but is far from it.

−3

frequenttimetraveler t1_je98v80 wrote

People care about power, so they won't stop talking about it. It's frankly irrelevant noise. Imagine horse breeders pontificating about the future of transportation in the 19th century; it had zero effect on the development of the car.

−1

siherbie t1_je9ahak wrote

Beyond the healthy hype and discussion of the tech itself (unsurprisingly, even Yannis pointed out that the GPT-4 tech demo and paper didn't mention anything about parameter counts or actual technical specifications), there is increasing misinformation about ChatGPT among regular people. This is already troubling, since visual AI models are under fire for copying styles, and, let's face it, even ChatGPT mimics literary styles. So whenever I hear a random "expert" explain how ChatGPT works just like the language center in our brains, it makes me roll my eyes really hard. Having said that, GPT-4's currently experimental visual input feature sounds interesting, but only time will tell once it's available.

2

cc-test t1_je9c5lb wrote

If you're learning something new for the first time and you want to verify that it is correct and is up to professional standards how would you check?

FWIW I use AI tooling daily and I'm a huge fan of it; my job also has me working closely with an in-house model created by our Data Science & ML team to integrate into our current systems. My concern is with people treating the recent versions of GPT like a silver bullet, which they aren't, and blindly trusting them.

−1

Smallpaul t1_je9e0am wrote

Note: although I have learned many things from ChatGPT, I have not learned a whole language. I haven't run that experiment yet.

ChatGPT is usually good at distilling common wisdom, i.e. professional standards. It has read hundreds of blogs and can summarize "both sides" of any controversial issue, or give you best practices when the question is not controversial.

If the question is whether the information it gives you is factually correct, you will need your own discernment to decide whether the thing you are learning is trivially verifiable ("does the code run?") or more subtle, in which case you might verify with Google.
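
As a toy illustration of the "does the code run" check (my own sketch, not actual ChatGPT output): if it hands you a small C++ helper while you're learning, you can compile it and assert on cases you already know before trusting it.

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// Hypothetical helper of the kind ChatGPT might suggest to a learner.
std::string reverse_copy_of(const std::string& s) {
    std::string out(s);
    std::reverse(out.begin(), out.end());
    return out;
}

int main() {
    // The cheap verification: compile it, then assert on known inputs.
    assert(reverse_copy_of("abc") == "cba");
    assert(reverse_copy_of("") == "");
    return 0;
}
```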

In exchange for this vigilance, you get a zero-cost tutor that answers questions immediately, and can take you down a personalized learning path.

It might end up being more trouble than it is worth, but that probably depends on the student's optimal learning style.

I use GPT-4, and there are far fewer hallucinations.

4

cc-test t1_je9fd2k wrote

>In exchange for this vigilance, you get a zero-cost tutor that answers questions immediately, and can take you down a personalized learning path.

You get a zero-cost tutor that may or may not be correct about something objective, and as a student you are supposed to trust that?

I also pay (well, my company does) to access GPT-4, and it's still not close to being a reliable tutor. I wouldn't tell my juniors to ask ChatGPT about issues they're having instead of asking me, another senior, or the lead engineer.

Code that works is not equivalent to code that is written correctly or well. If you're the kind of engineer who just thinks "oh well, it works at least, that's good enough", then you're the kind of engineer who will be replaced by AI tooling in the near future.

0

Maleficent_Refuse_11 t1_je9fzb6 wrote

Very tired, to the point of reflecting on whether I can deal with this type of BS for the rest of my career.

0

KingsmanVince t1_je9g82b wrote

We are clearly not tired of ChatGPT posts. As a matter of fact, we really want you to speak more about it. /s

2

SleekEagle t1_je9jlkg wrote

I think hallucination is a serious concern in some fields, but for general business-y creative work it's going to be a game changer. Just look at Jasper: a $100M Series A.

EDIT: This applies to GPT-4 more than to ChatGPT.

1

cc-test t1_je9kdtr wrote

Unfortunately I read the whole thing. It's incoherent, and you seem to assume the reader of your comment lives inside your head when you make vague references.

2

cc-test t1_je9knor wrote

Where did I make that claim?

I'm dumb as hell, just have enough brain capacity to be a senior SWE in fintech, which definitely doesn't require you to be some kind of genius.

Thanks for your input, I guess...

1

Celmeno t1_je9o7z6 wrote

ChatGPT is a tool. GPT-5 might be the end of humanity. We have not done enough alignment work.

1

Smallpaul t1_jea4whk wrote

>You get a zero-cost tutor that may or may not be correct about something objective, and as a student you are supposed to trust that?

No. I did not say to trust that.

Also: if you think that real teachers never make mistakes, you're mistaken yourself. My kids have textbooks full of errata. Even Donald Knuth issues corrections for his books (rarely).

>I also pay (well, my company does) to access GPT-4, and it's still not close to being a reliable tutor. I wouldn't tell my juniors to ask ChatGPT about issues they're having instead of asking me, another senior, or the lead engineer.

Then you are asking them to waste time.

I am "junior" on a particular language and I wasted a bunch of time on a problem because I don't want to bug the more experience person every time I have a problem.

The situation actually happened twice in one day.

The first time, I wasted 30 minutes trying to interpret an extremely obscure error message, then asked my colleague, then kicked myself because I had run into the same problem six months ago.

Then I asked GPT-4, and it gave me six possible causes, including the one I had seen before. Had I asked GPT-4 first, I would have saved myself 30 minutes and saved my colleague an interruption.

The second time, I asked GPT-4 directly. It gave me five possible causes, and by process of elimination I immediately knew which it was. That saved me from trying to figure it out on my own before interrupting someone else.

You are teaching your juniors to be helpless instead of teaching them how to use tools appropriately.

> Code that works is not equivalent to code that is written correctly or well. If you're the kind of engineer who just thinks "oh well, it works at least, that's good enough", then you're the kind of engineer who will be replaced by AI tooling in the near future.

One of the ways you can use this tool is to ask it how to make the code more reliable, easier to read, etc.

If you use the tool appropriately, it can help with that too.

0

LanchestersLaw t1_jea6q5k wrote

Devil’s advocate: why shouldn't the biggest leap in progress towards AGI, and the shocking rate of that progress, be hyped? Even if you limit the news to just the publications from MS/closedAI, a lot is happening, with progress that was expected to take years now taking weeks.

1

caffeine_potent t1_jea8km0 wrote

ChatGPT is writing 40% of my code now. It reads docs faster than I can and will synthesize complicated code 80% of the way; the rest is tweaking it. The hype is real.

1

cc-test t1_jea9ejf wrote

>Then you are asking them to waste time.

Having inexperienced staff gain more knowledge about languages and tooling in the context of the codebases they work in isn't a waste of time.

Sure, for example, I'm not going to explain every function in each library or package that we use, and I will point juniors towards the documentation. Equally, I'm not going to say "hey, ask ChatGPT instead of just looking at the docs", mainly because ChatGPT's knowledge is out of date and the junior would likely be getting outdated information.

>The first time, I wasted 30 minutes trying to interpret an extremely obscure error message, then asked my colleague, then kicked myself because I had run into the same problem six months ago.

So you weren't learning a new language or codebase; you were working with something you already knew. I don't care if anyone, regardless of seniority, uses GPT, or any other LLM or model for that matter, to solve problems. You were able to filter out the incorrect or less-than-ideal outputs and arrive at the solution that suited the problem best.

How are you supposed to do that when you have no foundation to work with?

I do care about people new to a subject using it to learn, because of the false positives the likes of ChatGPT can spew out.

Telling a junior to use ChatGPT to learn something new is just lazy mentoring and I'd take that as a red flag for any other senior or lead I found doing that.

1