xander76 OP t1_je9fehv wrote

Thanks for the response!

I may not be completely understanding the question, but from my perspective, the OpenAI APIs are just as non-deterministic as imaginary functions. If you call OpenAI directly multiple times with the exact same prompt and a temperature above 0, you will get a different response each time. The same is true of imaginary functions. (As an interesting side note, we default the temperature of imaginary functions to 0, so unless you override it in the comment, an imaginary function returns the same response for the same set of arguments.)
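To make that concrete, here's a rough sketch of the underlying OpenAI call (this uses OpenAI's documented chat completions endpoint; the ask helper and the prompt are just illustrative, and the top-level await assumes an ES module):

async function ask(prompt: string, temperature: number): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      temperature, // 0 = effectively deterministic; higher = more varied
      messages: [{ role: "user", content: prompt }],
    }),
  });
  return (await res.json()).choices[0].message.content;
}

// With temperature 1, these two answers will often differ;
// with temperature 0 they should match on nearly every run.
const a = await ask("Name one color.", 1);
const b = await ask("Name one color.", 1);
console.log(a === b);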

Now, I do think that introducing this kind of non-determinism into your web code, whether through OpenAI's APIs or imaginary programming, presents some interesting wrinkles. For a traditional web developer like me, the fuzziness and non-determinism are frankly a bit scary. What we're working on now is tooling you can use to consistently test your imaginary functions and make sure they return acceptable answers. Our hope is that this will give frontend devs the ability to use AI in their apps with reasonable confidence that the AI is doing what they want it to.
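To give a flavor of what I mean (a hypothetical Jest-style test, not our actual tooling; summarizeText and longArticle are made-up names): since outputs aren't byte-for-byte stable, you assert properties that any acceptable answer must satisfy, rather than exact strings.

test("summary is a reasonably short string", async () => {
  // summarizeText stands in for an imaginary function; longArticle is a fixture.
  const summary = await summarizeText(longArticle);
  expect(typeof summary).toBe("string");
  expect(summary.length).toBeLessThan(500);
});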


icedrift t1_je9i0wk wrote

What I mean is, why generate the function when only the data needs to be generated? Let's say I need a function that takes the text content of a post and returns an array of recommended flairs for the user to click. Why do this

/**
 * This function takes a passage of text, and recommends up to 8
 * unique flairs for a user to select. Flairs can be thought of as labels
 * that categorize the type of post.
 *
 * @param textContent - the text content of a user's post
 *
 * @returns an array of flairs represented as strings
 *
 * @imaginary
 */

declare function recommendedFlairs(textContent: string): Promise<string[]>;

When you could write out the function and only generate the data?

async function recommendedFlairs(textContent: string): Promise<string[]> {
  // someRequest and formatResponse stand in for calling the OpenAI API
  // and parsing its raw response into string[].
  const OAIrequest = await someRequest(textContent);
  const flairs = formatResponse(OAIrequest);
  return flairs;
}
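Here the two helpers would be something like this (sketch only; the prompt and parsing details are illustrative):

async function someRequest(textContent: string): Promise<string> {
  // Ask the model directly for a JSON array of flair strings.
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      temperature: 0,
      messages: [{
        role: "user",
        content: "Recommend up to 8 unique flairs (category labels) for this post. " +
          "Reply with only a JSON array of strings.\n\n" + textContent,
      }],
    }),
  });
  return (await res.json()).choices[0].message.content;
}

function formatResponse(raw: string): string[] {
  // The model usually honors "JSON array only", but not always,
  // which is exactly the fragile part.
  return JSON.parse(raw);
}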

In writing all this out I think I figured it out. You're abstracting away a lot of the headaches that come with trying to get the correct outputs out of GPT?


xander76 OP t1_je9w2ax wrote

Yeah, that's definitely one of the things it offers right now. If you want a particular data shape out of GPT, we handle that, both by crafting the prompt to elicit the declared type and by parsing the data out of the raw GPT response.
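To illustrate the parsing half (this isn't our actual implementation, just a sketch of what a declared return type of string[] implies at runtime):

function parseStringArray(raw: string): string[] {
  // Models sometimes wrap JSON in prose or code fences, so pull out
  // the first bracketed span before parsing.
  const match = raw.match(/\[[\s\S]*\]/);
  if (!match) throw new Error("No JSON array found in model response");
  const parsed: unknown = JSON.parse(match[0]);
  if (!Array.isArray(parsed) || !parsed.every((x) => typeof x === "string")) {
    throw new Error("Response did not match the declared type string[]");
  }
  return parsed;
}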

We're also building more tools to make the development process easier, which depend on the fact that imaginary functions are easy to statically analyze. The first is an IDE plugin that lets you run and test imaginary functions directly in VS Code and compare different versions of an imaginary function to see how they do on various test inputs. We also plan to add simple annotations to the comment format so you can easily switch to other LLMs at runtime to manage the cost/quality/privacy tradeoff.

ETA: One thing it also does right now is let you switch between models (ada, babbage, curie, davinci, gpt-3.5-turbo, gpt-4) with just a configuration switch. If you use OpenAI's APIs directly, you have to change your client code, because the GPT-3 models expose a different API than GPT-3.5 and GPT-4.
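For reference, here's roughly why the raw client code differs between the two model families (request and response shapes per OpenAI's documentation as of early 2023):

// GPT-3 models (ada, babbage, curie, davinci): the completions endpoint.
// POST https://api.openai.com/v1/completions
const gpt3Request = {
  model: "text-davinci-003",
  prompt: "Recommend flairs for this post: ...",
  max_tokens: 256,
};
// Completion text comes back at choices[0].text.

// GPT-3.5 / GPT-4: the chat completions endpoint.
// POST https://api.openai.com/v1/chat/completions
const chatRequest = {
  model: "gpt-4",
  messages: [{ role: "user", content: "Recommend flairs for this post: ..." }],
};
// Completion text comes back at choices[0].message.content.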
