
Delacroid t1_j9flphj wrote

Did you really need to numerically compute the gradient to check that it was OK? Dude, this is high-school math.

3

vladosaurus OP t1_j9gn4i0 wrote

Dude, it's OK. I know it is high-school math; you proved your point: you are a genius who knows high-school math, and I don't.

That was not my aim. It was to treat the ChatGPT implementation as a black box without touching it, and see whether it is correct.
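A minimal sketch of that kind of black-box check, comparing a candidate analytical gradient against central finite differences (the sigmoid example, tolerance, and function names here are illustrative, not from the original post):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # candidate analytical gradient (the "black-box" implementation under test)
    s = sigmoid(x)
    return s * (1.0 - s)

def numerical_grad(f, x, eps=1e-6):
    # central finite differences: (f(x + eps) - f(x - eps)) / (2 * eps)
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

# evaluate both gradients on a grid and compare elementwise
x = np.linspace(-5.0, 5.0, 101)
max_diff = np.max(np.abs(sigmoid_grad(x) - numerical_grad(sigmoid, x)))
assert max_diff < 1e-6  # gradients agree to within numerical error
```

The point of the check is exactly that it never inspects the implementation: it only queries the function and its claimed gradient at sample points.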

1

Delacroid t1_j9grwzu wrote

I'll admit that my comment may come off as elitist, but I think you have to admit that this was a very low-effort post. Maybe a more appropriate sub for this post would have been r/learnmachinelearning.

1

vladosaurus OP t1_j9gu3cr wrote

Ideally, we would generate many such examples without seeing them, wrap them in a test suite that uses automatic differentiation as the reference, and see how many come out correct.

Something similar to what the authors did for the OpenAI Codex model: they provided the function signature and the docstring, and prompted the model to generate the rest. Then they wrapped each generated function in a test suite and calculated how many of them pass. That's the pass@k metric.
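For reference, the Codex paper computes pass@k with an unbiased estimator, 1 - C(n-c, k)/C(n, k), where n is the number of samples generated per problem and c the number that pass the test suite. A minimal sketch (the function name and example numbers are mine):

```python
import math

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator from the Codex paper:
    1 - C(n - c, k) / C(n, k),
    where n = samples per problem, c = samples that pass the tests."""
    if n - c < k:
        # fewer failing samples than k, so any k-subset contains a pass
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# 10 samples, 3 correct, k = 1: 1 - C(7, 1) / C(10, 1) = 0.3
p = pass_at_k(10, 3, 1)
```

Averaging this quantity over many problems gives the benchmark score; the same harness idea would carry over to gradient tasks, with an autodiff-based test suite as the pass criterion.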

I am not aware of anything similar being done for differentiation; maybe there is, I'll have to search for it.

0

Delacroid t1_j9itr09 wrote

Well, that could be an amazing post to read: how often does it get math questions right, measured over a statistically significant number of samples, so that we can actually compare against the state of the art, such as Galactica?

1