Submitted by DukkyDrake t3_105qmfy in singularity
tinyogre t1_j3cgh2u wrote
The current version is funny to me. I did some tests and got reasonable-looking code. The first time I tried to use it for something real, though still pretty simple, it again produced completely reasonable-looking code. Everything made total sense, and it even explained itself well.
I put it in my actual project, and it turned out the APIs it was trying to use were completely fabricated, with no bearing on reality at all. I went back and told it so; it apologized and gave me a different version using a different set of non-existent APIs. I gave up and did it myself after all.
I think the APIs it wanted me to use would have been better than the ones that actually exist, at least for my purpose. But they just don't exist, and that really underlined the platform's current weaknesses for me. In code as well as in natural language, it's an extremely good producer of bullshit and only marginally good at producing useful answers.
DukkyDrake OP t1_j3cwfmw wrote
None of the existing AI tools really understands anything; this tool produces probabilistic text. That's why it won't do your work for you: it can't produce precise and dependable results. It will make you more productive as a programmer, but it's incapable of directly replacing what a programmer does.
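A minimal sketch of what "probabilistic text" means here: the model assigns a probability to each candidate next token and samples from that distribution. The token names and probabilities below are made up for illustration; the point is that the sampler optimizes likelihood, not correctness, so a plausible-but-nonexistent API name can come out.

```python
import random

# Toy, made-up next-token distribution for a code-completion step.
# Only one of these candidates is a real Python function; the model
# has no mechanism that prefers "real" over "plausible".
next_token_probs = {"os.listdir": 0.55, "os.list_files": 0.30, "os.scan": 0.15}

def sample_next_token(probs, rng):
    """Sample one token from a probability distribution over candidates."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
token = sample_next_token(next_token_probs, rng)
# Nothing above checks whether `token` names an API that exists.
```

Same prompt, different seed, different continuation; dependability has to come from somewhere outside the sampling step.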
blueSGL t1_j3du306 wrote
You can bet dollars to doughnuts that chatGPT is being run against real environments in training.
You know how it gets things wrong, and you need to keep prompting it until it eventually gets the thing correct?
That's happening at scale.
Everything is being recorded, and every test case where it finally generates working code is a new piece of training data.
With just the current dataset and the ability to feed known-good answers back in, this could bootstrap itself up in capability.
But of course it's not just using the data being ground out internally; it's also going to be training on all the conversations people are having with it right now.
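The bootstrapping idea above can be sketched roughly like this: run each generated attempt against a real check, and keep only the (prompt, code) pairs that finally pass as new fine-tuning data. This is a hypothetical sketch, not OpenAI's actual pipeline; the function names are invented, and "run against a real environment" is stood in for by executing the snippet.

```python
def passes_tests(code: str) -> bool:
    # Stand-in for running the code in a real environment:
    # here we just check that the snippet executes without raising.
    try:
        exec(compile(code, "<candidate>", "exec"), {})
        return True
    except Exception:
        return False

def harvest_training_data(conversations):
    """Keep (prompt, code) pairs whose code actually worked.

    Each conversation is a prompt plus the sequence of attempts the
    user coaxed out of the model by re-prompting.
    """
    good = []
    for prompt, attempts in conversations:
        for code in attempts:
            if passes_tests(code):
                good.append((prompt, code))  # first working attempt wins
                break
    return good

# One toy conversation: the first attempt is wrong, the second works.
conversations = [
    ("add two numbers",
     ["def add(a, b): return a - b\nassert add(2, 2) == 4",
      "def add(a, b): return a + b\nassert add(2, 2) == 4"]),
]
dataset = harvest_training_data(conversations)
```

Only the working second attempt survives the filter, which is exactly the "every test case where it finally generates working code" signal described above.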
gay_manta_ray t1_j3g2rx2 wrote
you can get good answers if you ask it to refactor the code repeatedly, and often the comments on the code (if you ask it to provide comments) are accurate after a certain point. the idea that this will replace programmers is comical: you have to be a programmer to understand the code, to understand why it does or doesn't work, and to know what to ask chatgpt to refactor. in other words, you have to already be a programmer to use chatgpt to program, which is what people who don't program don't seem to understand. it will be a useful tool as it improves, and it will make programmers more productive, but it will not replace programmers.
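The refactor-until-it-works loop described above looks something like the sketch below. `ask_model` is a stand-in for whatever chat API you use (the replies here are canned, with a deliberately buggy first attempt), and `looks_correct` is the part only a programmer can write: a check that encodes what "working" means for your problem.

```python
# Canned model replies: the first crashes on an empty list, the second
# handles it. In reality these would come from re-prompting the model.
canned_replies = iter([
    "def mean(xs): return sum(xs) / len(xs)",
    "def mean(xs): return sum(xs) / len(xs) if xs else 0.0",
])

def ask_model(prompt: str) -> str:
    """Stand-in for a chat-model call; just yields the next canned reply."""
    return next(canned_replies)

def looks_correct(code: str) -> bool:
    """The programmer's judgment, encoded as executable checks."""
    ns = {}
    try:
        exec(code, ns)
        return ns["mean"]([1, 2, 3]) == 2 and ns["mean"]([]) == 0.0
    except Exception:
        return False

code = ask_model("write a mean() function with comments")
for _ in range(3):  # bounded retries; knowing when to stop is on you
    if looks_correct(code):
        break
    code = ask_model("that crashes on an empty list; please refactor")
```

The loop only terminates usefully because a programmer supplied the failing case and the acceptance check; the model alone has no way to know it's done.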