
igorhorst t1_jc372db wrote

> Without a clear path to increasing this vital metric, I struggle to see how modern generative AI models can be used for any important tasks that are sensitive to correctness.

My immediate response is "human-in-the-loop": let the machine generate solutions and let a human user validate the correctness of those solutions. That said, this relies on humans being competent enough to validate correctness, which may be a dubious proposition.
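As a minimal sketch of what I mean (all the function names here are hypothetical placeholders, not any particular library's API), the workflow is just a generate/validate loop:

```python
# Minimal human-in-the-loop sketch: the model proposes, a person signs off.
# `model.generate` and the helper names below are hypothetical placeholders.

def generate_candidates(prompt, model, n=3):
    """Ask the generative model for several candidate solutions."""
    return [model.generate(prompt) for _ in range(n)]

def ask_human_to_validate(candidate):
    """Show the candidate to a human reviewer and record their verdict."""
    print(candidate)
    return input("Accept this solution? [y/N] ").strip().lower() == "y"

def solve_with_human_in_the_loop(prompt, model, max_rounds=5):
    for _ in range(max_rounds):
        for candidate in generate_candidates(prompt, model):
            if ask_human_to_validate(candidate):
                return candidate  # a human vouched for correctness
    return None  # nothing passed review; fall back to a human-written solution
```

The whole scheme is only as good as the reviewer in that inner loop, which is the weak point I mentioned above.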

Perhaps a better way forward is to take a general-purpose text generator and finetune it on a more limited corpus whose validity you can guarantee, then use that finetuned model on important tasks that are sensitive to correctness. This is the idea behind the Othello-GPT paper - train a GPT model on valid Othello game transcripts so it learns to generate legal Othello moves. You wouldn't trust this Othello-GPT to write code for you, but you don't have to - you would find a specific machine learning model finetuned on code, and let that model generate code. It's interesting that OpenAI has Codex models that are finetuned on code, such as "code-davinci-002" (which is based off GPT-3).
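For what it's worth, here's a rough sketch of that "finetune on a narrow, validated corpus" idea using Hugging Face `transformers` (the file name `validated_corpus.txt` and the hyperparameters are made up for illustration; this isn't the Othello-GPT or Codex training recipe):

```python
# Sketch: finetune a general-purpose GPT-2 on a small, validated corpus
# (e.g. Othello game transcripts or vetted code snippets).
# Assumes `validated_corpus.txt` exists; hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load the narrow corpus and tokenize it for causal language modeling.
dataset = load_dataset("text", data_files={"train": "validated_corpus.txt"})
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="narrow-gpt", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point isn't the specific hyperparameters - it's that you constrain what the model can say by constraining what it was trained on.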

This latter approach kinda reminds me of the Bitter Lesson:

>The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.

But the flipside of the Bitter Lesson is that building knowledge into your agent (via approaches like finetuning) will lead to better results in the short term. In the long term, solutions based on scaling computation by search and learning may outperform current solutions - but we shouldn't wait for the long term to show up. We have tasks to solve now, and so it's okay to build knowledge into our agents. The resulting agents might become obsolete in a few years, but that's okay. We build tools to solve problems, we solve those problems, and then we retire those tools and move on.

>And certainly we are really far from anything remotely "AGI".

The issue is that we're dealing with "general intelligence" here, and just because a human is terrible at a bunch of subjects, we do not say that that human lacks general intelligence. I generally conflate the term "AGI" with "general-purpose", and while ChatGPT isn't fully general-purpose (at the end of the day, it just generates text - though it's surprising to me how many tasks can be modeled and expressed by mere text), you could use ChatGPT to generate a bunch of solutions. So I think we're close to getting general-purpose agents that can generate solutions for everything, but the timeline for getting correct solutions for everything may be longer.
