Submitted by gsvclass t3_11ak97p in MachineLearning

The focus of this library is to implement many of the top prompt engineering papers, such as PAL, CoT, Self-Ask, and ReAct.

Excited to share "Minds", a new way to build backends and workflows entirely with AI (LLMs from OpenAI and Cohere). The AI can call your APIs, look things up in your database, etc.

With just a couple of lines of code you can build things like a question-answering service where the AI queries your local database to help answer customer support questions, etc.
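To give a sense of what that pattern looks like, here is a rough TypeScript sketch (not the minds API itself; see the repo below for the real thing) of a support-question flow where database results are passed to an LLM as prompt context. The lookupOrders helper, the table contents, and the prompt wording are all hypothetical, for illustration only:

```ts
// Hypothetical sketch: answer a support question by grounding the LLM in DB results.
// Names (lookupOrders, answerSupportQuery) are illustrative, not the minds library API.

type OrderRow = { id: string; status: string; eta: string };

// Stand-in for a real database query (e.g. SQL against your orders table).
async function lookupOrders(customerEmail: string): Promise<OrderRow[]> {
  return [{ id: "A-1001", status: "shipped", eta: "2023-03-02" }];
}

async function answerSupportQuery(customerEmail: string, question: string): Promise<string> {
  const rows = await lookupOrders(customerEmail);

  // The retrieved rows become context in the prompt so the model can ground its answer.
  const prompt = [
    "You are a customer support assistant. Answer using only the data below.",
    `Customer data: ${JSON.stringify(rows)}`,
    `Question: ${question}`,
    "Answer:",
  ].join("\n");

  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "text-davinci-003", prompt, max_tokens: 150, temperature: 0 }),
  });
  const data: any = await res.json();
  return data.choices[0].text.trim();
}

answerSupportQuery("jane@example.com", "Where is my order?").then(console.log);
```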

List of Prompt Engineering papers we are implementing: https://42papers.com/c/llm-prompting-6343

MindJS Library https://github.com/dosco/minds

8

Comments


ZestyData t1_j9tstqb wrote

A wrapper for a big API isn't suited for posting on /r/MachineLearning

10

gsvclass OP t1_j9wq69s wrote

It's a prompt engineering library with implementations of various papers in the space, including ReAct, PAL, etc. We are working on adding more. Here's a list of some of the papers we are implementing: https://42papers.com/c/llm-prompting-6343
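For anyone unfamiliar with ReAct, the core pattern is roughly the loop below. This is a minimal TypeScript sketch of the idea from the paper, not the minds implementation; the search tool, step budget, and prompt format are assumptions for illustration:

```ts
// Minimal sketch of the ReAct loop (Yao et al.): the model emits "Thought:" /
// "Action:" lines, the host runs the named tool, and the result is fed back as
// an "Observation:" before the model continues.

async function complete(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "text-davinci-003",
      prompt,
      max_tokens: 200,
      temperature: 0,
      stop: ["Observation:"], // the host supplies observations, not the model
    }),
  });
  const data: any = await res.json();
  return data.choices[0].text;
}

// Hypothetical tool: in a real backend this could be a DB query or an API call.
async function search(query: string): Promise<string> {
  return `Stub result for "${query}"`;
}

async function react(question: string): Promise<string> {
  let transcript =
    "Answer the question by interleaving Thought, Action and Observation steps.\n" +
    "Available action: search[query]. Finish with 'Final Answer: ...'.\n" +
    `Question: ${question}\n`;

  for (let step = 0; step < 5; step++) {
    const output = await complete(transcript);
    transcript += output;

    const final = output.match(/Final Answer:(.*)/s);
    if (final) return final[1].trim();

    const action = output.match(/Action:\s*search\[(.*?)\]/);
    if (action) {
      transcript += `\nObservation: ${await search(action[1])}\n`;
    }
  }
  return "No answer found within the step budget.";
}

react("Who founded the company that makes the Model S?").then(console.log);
```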

−2

cthorrez t1_j9xgmx1 wrote

This may be an unpopular opinion these days, but I don't think prompt engineering is a suitable topic for /r/MachineLearning.

2

gsvclass OP t1_j9xqgf1 wrote

Why do you feel that way?

1

cthorrez t1_j9xrguh wrote

That comment is very over-the-top sarcasm. You would have realized that if you had checked the source I linked.

1

gsvclass OP t1_j9xrqzq wrote

I updated my comment. Not sure what you mean by "You would have realized that if you had checked the source I linked". What source?

1

cthorrez t1_j9xrv1v wrote

The source I linked in the comment you linked and then deleted.

1

gsvclass OP t1_j9xsgz3 wrote

OK, I saw that. I'm not entirely sure what you think prompting is, but it's not about getting exact answers or anything like that. As I understand it (however limited my understanding is), it's about bringing the model's attention to the part of its latent space closest to where your solution may fall.
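As a toy example of that steering, the same question can be sent once plain and once with a zero-shot chain-of-thought cue; the cue adds no knowledge, it only nudges decoding toward a region where step-by-step reasoning is likely. The prompts and model choice here are assumptions for illustration:

```ts
// Toy illustration of prompt "steering": identical question, two prompts.
async function complete(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "text-davinci-003", prompt, max_tokens: 120, temperature: 0 }),
  });
  const data: any = await res.json();
  return data.choices[0].text.trim();
}

const question = "A shop sells pens in packs of 12. How many packs are needed for 150 pens?";

(async () => {
  console.log(await complete(`${question}\nAnswer:`));                   // often a bare guess
  console.log(await complete(`${question}\nLet's think step by step.`)); // usually works through 150 / 12 first
})();
```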

1

cthorrez t1_j9xstlw wrote

People are rushing to deploy LLMs in search, summarization, virtual assistants, question answering and countless other applications where correct answers are expected.

The reason they want to get to the latent space close to the answer is because they want the LLM to output the correct answer.

1

gsvclass OP t1_j9xttgi wrote

While it may seem that way, correct answers are always expected but never delivered; everything works within a margin of error. With humans that margin is pretty large and not easy to fix. Also, "correct" is subjective. LLMs are language models that use the knowledge embedded in their weights, combined with the context provided by the prompt, to do their best. The positive thing here is that the margin of error is actively being reduced with LLMs, which wasn't true of however we did this before.

1

cthorrez t1_j9xrl8a wrote

I think it's not suitable because it isn't really related to the process of a machine learning anything. It seems to me to belong to the field of human-computer interaction.

1