
No_Ninja3309_NoNoYes t1_j9ix5p4 wrote

32K context is the new 640K RAM. The bigger the model, the more resources you need to support it, and the more expensive it gets, with no guarantee about quality. For example, ChatGPT would produce code like:

int result = num1 + num2;
return result;

That in itself is not technically wrong, but it is unnecessarily long; any static-analysis tool would have flagged it. Also, unit tests or compilers would have caught any actual errors. The OpenAI culture is one of PhDs with a certain background: they work in Jupyter notebooks and don't know about standard dev tools.
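For illustration, here is a minimal sketch of what a linter would flag versus what it would suggest (the class and method names are hypothetical, chosen just for this example):

```java
public class AddExample {
    // Verbose style, as in the snippet above: the temporary
    // variable adds nothing and linters flag it.
    static int addVerbose(int num1, int num2) {
        int result = num1 + num2;
        return result;
    }

    // Concise style a linter would suggest: return the expression directly.
    static int addConcise(int num1, int num2) {
        return num1 + num2;
    }

    public static void main(String[] args) {
        // Both produce the same result; only the style differs.
        System.out.println(addVerbose(2, 3));
        System.out.println(addConcise(2, 3));
    }
}
```

Rules like this (declare-then-immediately-return) are standard fare for static-analysis tools, which is the commenter's point: routine tooling already catches this class of noise.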

My friend Fred says that he can add value with his code generation startup because of that. I also think that LLMs and more traditional technology combined are the way to go.

−4

DonOfTheDarkNight t1_j9jgped wrote

It's interesting to see how dismissive you are of PhDs, assuming they can't use standard dev tools.

6