Comments

farmingvillein t1_jczf7z8 wrote

Maybe I'm reading too quickly, but I can't figure out what this actually does, from the README.

1

usc-ur OP t1_jd3nzab wrote

The main purpose of this project is to bring together, in a single environment, all the resources related to LLMs (models, prompts, APIs, etc.). We also take an end-user perspective: it is highly unlikely that a user would include a complex context in a query to a model or search engine. In this project, we bias the different model responses to answer with different behaviors, while hiding this from end-users.

1
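The "hidden context" idea described above can be sketched roughly as follows. This is not the project's actual API; the names and the behavior string are illustrative assumptions, showing only the general pattern of prepending a behavior prefix that the end-user never sees.

```python
# Hypothetical sketch: bias the model's behavior with a hidden prefix.
# HIDDEN_BEHAVIOR is an assumed example, not taken from the project.
HIDDEN_BEHAVIOR = "Answer concisely and cite sources where possible."

def build_prompt(user_query: str, hidden_context: str = HIDDEN_BEHAVIOR) -> str:
    """Prepend the hidden context; only `user_query` came from the user."""
    return f"{hidden_context}\n\nUser: {user_query}\nAssistant:"

prompt = build_prompt("What is in-context learning?")
```

The assembled `prompt` contains both the hidden behavior prefix and the user's query, while the user only ever typed the query itself.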

farmingvillein t1_jd47vh9 wrote

OK, insofar as you care about adoption, I'd encourage you to clean up the README to make it much clearer as to what you're doing. Right now, you've got API call examples, but it isn't clear what is actually happening, why this wrapper is helpful/necessary, etc.

I can guess/infer all the above, but you want your README to make it really, really quick and easy for your readers to figure out what is going on.

1

usc-ur OP t1_jd4ydfk wrote

Thanks for the tips! Will consider them!

1

Nezarah t1_jd297zo wrote

Is this essentially In-Context Learning?

You condense additional knowledge as a prefix to the prompt as “context” so that the question/input can use that information to create a more accurate/useful output?

1
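The pattern described above (condensing extra knowledge into a prompt prefix) can be sketched as a minimal example. The function and strings below are illustrative, not the project's implementation.

```python
# Minimal sketch of in-context learning via a knowledge prefix:
# extra information is prepended as "context" so the model can
# ground its answer in it.
def with_context(knowledge: str, question: str) -> str:
    """Format a prompt that places the knowledge before the question."""
    return f"Context:\n{knowledge}\n\nQuestion: {question}\nAnswer:"

prompt = with_context(
    "The Eiffel Tower is 330 m tall.",
    "How tall is the Eiffel Tower?",
)
```

The model never needs to have memorized the fact; it can read it from the prefix at inference time.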

usc-ur OP t1_jd3nx3w wrote

That's right :) Check the "purpose" section in our README.

1

Nezarah t1_jd4tt7y wrote

Ah! I've only just started to dive into the machine learning rabbit hole, so I wasn't sure if I was understanding the term correctly.

I'm keen to check it out.

1