
xander76 OP t1_je9w2ax wrote

Yeah, that's definitely one of the things it offers right now. If you want a particular data shape out of GPT, we handle that on both sides: crafting the prompt to elicit the declared type, and parsing the raw GPT response back into that shape.
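
To make that concrete, here's roughly what a typed imaginary function looks like (a simplified sketch; it assumes the compiler plugin is set up, and the exact comment and tag details are illustrative):

```typescript
/**
 * Suggests a handful of short, catchy titles for a blog post.
 *
 * @param blogPostText - the full text of the blog post
 * @returns an array of 3 to 5 suggested titles
 *
 * @imaginary
 */
declare function suggestTitles(blogPostText: string): Promise<string[]>;

// The function has no body: at build time the plugin swaps the declaration
// for a runtime call that prompts GPT using the comment and the declared
// return type, then parses the raw completion back into a string[].
async function demo() {
  const titles = await suggestTitles("Today we shipped imaginary functions...");
  for (const title of titles) {
    console.log(title);
  }
}
```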

We're also building more tools to make the development process easier, which take advantage of the fact that imaginary functions are easy to statically analyze. The first is an IDE plugin that lets you run and test imaginary functions directly in VS Code and compare different versions of an imaginary function to see how they do on various test inputs. We also plan to add simple annotations to the comment format so you can easily switch to other LLMs at runtime and manage the cost/quality/privacy tradeoff.
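
To give a flavor of what that could look like (hypothetical syntax for a feature we haven't built yet, so don't hold me to the tag name):

```typescript
/**
 * Classifies a support ticket as "billing", "shipping", "technical", or "other".
 *
 * @param ticketText - the full text of the support ticket
 * @returns the single best-matching category
 *
 * @imaginary
 * @model gpt-3.5-turbo   <-- hypothetical annotation: pick the LLM per function
 */
declare function classifyTicket(ticketText: string): Promise<string>;
```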

ETA: It also lets you switch between models (ada, babbage, curie, davinci, gpt-3.5-turbo, gpt-4) right now with just a configuration switch. If you use OpenAI's APIs directly, you have to change your client code to move between those model families, because the GPT-3 models use a different API (completions) than GPT-3.5 and GPT-4 (chat completions).
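
To spell out that last point: with OpenAI's own Node SDK (the v3-style client shown here), going from a GPT-3 model to gpt-4 means changing both the request you make and how you read the response, not just the model name:

```typescript
import { Configuration, OpenAIApi } from "openai"; // openai npm package, v3-style client

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function compareApis() {
  // GPT-3 models (ada, babbage, curie, davinci) use the completions endpoint
  // and take a plain prompt string:
  const completion = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: "Suggest a title for a post about TypeScript and LLMs.",
  });
  console.log(completion.data.choices[0].text);

  // gpt-3.5-turbo and gpt-4 use the chat completions endpoint, with a
  // different request shape (messages) and response shape (message.content):
  const chat = await openai.createChatCompletion({
    model: "gpt-4",
    messages: [
      { role: "user", content: "Suggest a title for a post about TypeScript and LLMs." },
    ],
  });
  console.log(chat.data.choices[0].message?.content);
}
```

With an imaginary function, that whole difference stays hidden behind the single model setting in your configuration.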
