
Smallpaul t1_je9bwuq wrote

It is easy to verify anything ChatGPT tells you about programming.

6

cc-test t1_je9c5lb wrote

If you're learning something new for the first time and you want to verify that it is correct and up to professional standards, how would you check?

FWIW, I use AI tooling daily and I'm a huge fan of it, not to mention my job has me working closely with an in-house model, created by our Data Science & ML team, to integrate it into our current systems. My concern is with people treating the recent versions of GPT like a silver bullet, which they aren't, and blindly trusting them.

−1

Smallpaul t1_je9e0am wrote

Note: although I have learned many things from ChatGPT, I have not learned a whole language. I haven't run that experiment yet.

ChatGPT is usually good at distilling common wisdom, i.e. professional standards. It has read hundreds of blogs and can summarize "both sides" of any controversial issue, or give you best practices when the question is not controversial.

If the question is whether the information it gives you is factually correct, you will need to use your discernment: decide whether the thing you are learning is trivially verifiable ("does the code run?") or more subtle, in which case you might verify with Google.
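For the "trivially verifiable" case, a quick spot check is often cheap. As a purely hypothetical illustration (the `flatten` function and its expected behavior are invented for this example, not taken from the thread), a learner could paste a chatbot-suggested snippet into a scratch file and assert what it should do:

```python
# Hypothetical snippet a chatbot might suggest: flatten a nested list.
def flatten(nested):
    """Recursively flatten arbitrarily nested lists into one flat list."""
    flat = []
    for item in nested:
        if isinstance(item, list):
            flat.extend(flatten(item))  # recurse into sublists
        else:
            flat.append(item)
    return flat

# "Does the code run?" — a one-line check verifies the suggestion directly.
assert flatten([1, [2, [3, 4]], 5]) == [1, 2, 3, 4, 5]
```

Whether the code is also idiomatic or well-structured is the subtler question, which a quick run cannot settle.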

In exchange for this vigilance, you get a zero-cost tutor that answers questions immediately, and can take you down a personalized learning path.

It might end up being more trouble than it is worth, but that likely depends on the student's optimal learning style.

I use GPT-4, which produces far fewer hallucinations.

4

cc-test t1_je9fd2k wrote

>In exchange for this vigilance, you get a zero-cost tutor that answers questions immediately, and can take you down a personalized learning path.

You get a zero-cost tutor that may or may not be correct about something objective, and as a student you are supposed to trust that?

I also pay (well, my company does) to access GPT-4, and it's still nowhere close to being a reliable tutor. I wouldn't tell my juniors to ask ChatGPT about issues they're having instead of asking me, another senior, or the lead engineer.

Working code is not equivalent to code that is written correctly or well. If you're the kind of engineer who just thinks "oh well, it works at least, that's good enough", then you're the kind of engineer who will be replaced by AI tooling in the near future.

0

Smallpaul t1_jea4whk wrote

>You get a zero cost tutor that may or may not be correct about something objective, and as a student you are supposed to trust that?

No. I did not say to trust that.

Also: if you think that real teachers never make mistakes, you're incorrect yourself. My kids have textbooks full of errata. Even Donald Knuth issues corrections for his books (rarely).

>I also pay, well my company does, to access GPT-4 and it's still not that close to being a reliable tutor. I wouldn't tell my juniors to ask ChatGPT about issues they are having instead of asking me or another of the seniors or lead engineer.

Then you are asking them to waste time.

I am "junior" on a particular language and I wasted a bunch of time on a problem because I don't want to bug the more experience person every time I have a problem.

The situation actually happened twice in one day.

The first time, I wasted 30 minutes trying to interpret an extremely obscure error message, then asked my colleague, then kicked myself because I had run into the same problem six months ago.

Then I asked GPT-4, and it gave me six possible causes, which included the one I had seen before. Had I asked GPT-4 first, I would have saved myself 30 minutes and saved my colleague an interruption.

The second time, I asked GPT-4 directly. It gave me five possible causes. Using process of elimination, I immediately knew which one it was. That saved me from trying to figure it out on my own before interrupting someone else.

You are teaching your juniors to be helpless instead of teaching them how to use tools appropriately.

> Code working is not equivocal to the code being written correctly or well. If you're the kind of engineer that just think "oh well it works at least, that's good enough" then you're the kind of engineer who will be replaced by AI tooling in the near future.

One of the ways you can use this tool is to ask it how to make the code more reliable, easier to read, etc.

If you use the tool appropriately, it can help with that too.

0

cc-test t1_jea9ejf wrote

>Then you are asking them to waste time.

Having inexperienced staff gain more knowledge about languages and tooling in the context of the codebases they work in isn't a waste of time.

Sure, for example, I'm not going to explain every function in each library or package that we use, and I will point juniors towards the documentation. Equally, I'm not going to say "hey, ask ChatGPT instead of just looking at the docs", mainly because ChatGPT's knowledge is out of date and the junior would likely be getting outdated information.

>The first time, I wasted 30 minutes trying to interpret an extremely obscure error message, then asked my colleague, then kicked myself because I had run into the same problem six months ago.

So you weren't learning a new language or codebase; you were working with something you already knew. I don't care if anyone, regardless of seniority, uses GPT, any other LLM, or any type of model for that matter to solve problems. You were able to filter out the incorrect or less-than-ideal outputs and arrive at the solution that suited the problem best.

How are you supposed to do that when you have no foundation to work with?

I do care about people who are new to a subject using it to learn, because of the false positives the likes of ChatGPT can spew out.

Telling a junior to use ChatGPT to learn something new is just lazy mentoring and I'd take that as a red flag for any other senior or lead I found doing that.

1