
OptimizedGarbage t1_j87jazn wrote

So, yes and no. You really don't want the type annotations to be in plain English, because under the Curry-Howard correspondence, types correspond to theorems and code corresponds to proofs of those theorems. It's one thing to know the theorem but not see the proof; you can often trust that a proof is right without seeing it. But you really need to know what the theorem is. If you start with English and generate some type behind the scenes, you never see what the actual theorem is; you just know that the system has proved 'some theorem' about your code. As the programmer you have no idea what that actually tells you, which kind of defeats the point of using static typing in the first place.
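To make the correspondence concrete, here's a minimal Lean 4 sketch (the names `and_swap` and `prodSwap` are just illustrative):

```lean
-- The type is the theorem; the term inhabiting it is the proof.

-- Theorem (as a type): if A and B both hold, then B and A both hold.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  -- Proof (as a program): take the conjunction apart, rebuild it swapped.
  fun h => ⟨h.right, h.left⟩

-- The same idea at the level of ordinary data:
-- any function of type A × B → B × A is a "proof" that products commute.
def prodSwap {A B : Type} : A × B → B × A :=
  fun p => (p.2, p.1)
```

The type line is the part you, the programmer, actually need to read; hiding it behind English hides the theorem.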

That said, you *can* write down a desired type and have a system generate a ton of type annotations, or a bunch of code, to show that your program satisfies the type you wrote down. There's been recent work on this in deep learning for theorem proving, such as this work, which uses GPT to prove theorems in Lean, a dependently typed programming language and theorem prover. A better approach, though, is to combine this with an actual tree search algorithm, allowing a more structured search over the space of proofs instead of trying to generate full correct proofs in one shot. HyperTree Proof Search does this, using a variant of AlphaZero to search and to fine-tune the neural net. Unfortunately it hasn't been open-sourced, and it's pretty compute-intensive, so we can't use it for actual type inference yet. But yeah, there's active interest in doing this kind of thing, both as a proving ground for using RL on reasoning tasks and from mathematicians interested in theorem proving.
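For a feel of what "write the type, let the system find the proof" looks like, here's a hedged Lean 4 sketch using built-in proof automation (`decide`, plus `omega`, assuming a recent toolchain where it ships in core); the theorem names are illustrative:

```lean
-- We write only the statement (the type); the tactic searches for
-- or computes the proof term on our behalf.

-- A decidable fact: `decide` proves it by evaluation.
theorem small_fact : 2 + 2 = 4 := by decide

-- Linear arithmetic over Nat: `omega` finds the proof automatically.
-- (Assumes a recent Lean toolchain where `omega` is in core.)
theorem le_example (n : Nat) : n ≤ n + 1 := by omega
```

Neural provers like the ones above play the same role as these tactics, but search a much larger proof space guided by a learned policy.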
