
ReginaldIII t1_izl0quh wrote

This is a really nice write up, thank you.

I'm interested in what your thoughts are on prompt manipulation and "reasoning" your way around ChatGPT's ethical responses (and how those responses were even added during training). What direction do you see as best for combating these issues?

Also, have you looked at incorporating querying external sources for information by decomposing problems to reason about them? The quality of ChatGPT made me think of Binder https://lm-code-binder.github.io/ and how powerful a combination they could be. A benefit of Binder is the chain of reasoning is encoded in the intermediate steps and queries which can be debugged and audited.
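For anyone unfamiliar with that auditability point, here's a toy Python sketch of the idea (my own illustration, not Binder's actual interface; `ReasoningTrace` and `lookup` are made-up names): each intermediate query is logged alongside its result, so the chain of reasoning can be inspected after the fact.

```python
# Toy sketch: a chain of reasoning encoded as logged intermediate
# steps that can be debugged and audited afterwards.
from dataclasses import dataclass, field

@dataclass
class ReasoningTrace:
    # Each step is recorded as (description, query, result).
    steps: list = field(default_factory=list)

    def run(self, description, query, executor):
        result = executor(query)
        self.steps.append((description, query, result))
        return result

    def audit(self):
        # Human-readable transcript of every intermediate step.
        return "\n".join(
            f"{i + 1}. {d}: {q!r} -> {r!r}"
            for i, (d, q, r) in enumerate(self.steps)
        )

# Stand-in for an external information source (e.g. a database or API).
TABLE = {"alice": 34, "bob": 29}

def lookup(name):
    return TABLE[name]

trace = ReasoningTrace()
a = trace.run("fetch Alice's age", "alice", lookup)
b = trace.run("fetch Bob's age", "bob", lookup)
diff = a - b  # final answer composed from audited intermediates
```

The point is that the final answer is assembled from explicit, replayable steps rather than a single opaque generation.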

Something ChatGPT lacks is the ability to properly explain itself. You can ask it to explain its last output, but you can also ask it to lie to you, and it does.

If you ask it to lie to you convincingly, who is to say it isn't?

Can a conversationally trained LLM ever be used in a production application (as many are beginning to do) without a more rigorous rule based framework around it?
