folk_glaciologist

folk_glaciologist t1_jc68a4q wrote

You can use searches to augment the responses. You can write a Python script to do this yourself via the API, making use of the fact that you can write prompts that ask ChatGPT questions about other prompts. For example, this is a question that will cause ChatGPT to hallucinate:

> Who are some famous people from Palmerston North?

But you can prepend some text to the prompt like this:

> I want you to give me a topic I could search Wikipedia for to answer the question below. Just output the name of the topic by itself. If the text that follows is not a request for information or is asking to generate something, it is very important to output "not applicable". The question is: <your original prompt>

If it outputs "not applicable", or searching Wikipedia with the returned topic turns up nothing, then just process the original prompt as-is. Otherwise download the Wikipedia article (or its first few paragraphs), prepend it to the original prompt and ask again. Etc.
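
Here's a minimal Python sketch of that flow, assuming the `openai` (pre-1.0 interface) and `wikipedia` packages; the model name, prompt wording and fallback logic are illustrative, not a definitive recipe:

```python
# Sketch only: model choice, prompt text and error handling are illustrative.
import openai
import wikipedia

openai.api_key = "sk-..."  # your API key

def ask(prompt):
    """Single-turn chat completion; returns the model's text reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

def answer_with_wikipedia(question):
    # Step 1: ask the model for a Wikipedia topic that would answer the question.
    topic = ask(
        "I want you to give me a topic I could search Wikipedia for to answer "
        "the question below. Just output the name of the topic by itself. "
        "If the text that follows is not a request for information or is "
        'asking to generate something, output "not applicable". '
        "The question is: " + question
    )
    if topic.strip().strip('"').lower() == "not applicable":
        return ask(question)  # not a lookup question: process the raw prompt

    # Step 2: try to fetch the first few paragraphs of a matching article.
    try:
        context = wikipedia.summary(topic, sentences=5)
    except (wikipedia.exceptions.PageError,
            wikipedia.exceptions.DisambiguationError):
        return ask(question)  # no usable article: fall back to the raw prompt

    # Step 3: prepend the article text to the original prompt and ask again.
    return ask(f"Using this background text:\n\n{context}\n\n{question}")

print(answer_with_wikipedia("Who are some famous people from Palmerston North?"))
```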

In general I think that using LLMs as giant databases is the wrong approach: even if we can stop them hallucinating, they will always be out of date because of the time lag to retrain them. We should instead be using their NLP capabilities to turn user questions into "machine-readable" (whatever that means nowadays) queries that get run behind the scenes and then fed back into the LLM. Like Bing Chat doing web searches, basically.


folk_glaciologist t1_ja0hnaq wrote

I went through a period of getting annoyed at people being unimpressed by ChatGPT, but I've decided to just let it go. A few observations and theories of mine about why they are like this:

  • A lot of people are just phoning it in at work and pretty much hate their jobs. If you start hyping up how some AI chatbot is going to help them complete their TPS reports 10 times as fast, you are going to come off as a weirdo corporate shill. Even if that happened, it would probably just mean their bosses start expecting 10 times as many TPS reports from them.
  • They tried it out but were really unimaginative with their prompts. One guy I showed it to, I told him he could use it to write newsletters. His attempt at a prompt: "newsletter". Not "write a newsletter", not "write a newsletter for the hiking club reminding members their fees are due 15/2/2023 and asking for suggestions for the next trip" or anything like that. They somehow think the AI is going to telepathically know what they want, and if it doesn't then it's a dud.
  • They like to think they are too clever to fall for hype and hysteria and like to put on a cynical "too cool to be impressed by the latest shiny thing" front. One older guy at my office is convinced "it's just Eliza with a few extra bells and whistles".
  • They are low decouplers - people who can't separate the question of whether AI works from the ethical questions around it. So they hear about Stable Diffusion using artists' work in its training set without permission, hear that it's going to put people out of work, about OpenAI paying people in Kenya measly wages to train the bots, etc., and think that's all bad, so their natural response is to badmouth AI technology by saying it doesn't work or is underwhelming. It's the equivalent of "eugenics is immoral, therefore eugenics doesn't work and is a pseudoscience".
  • People whose jobs are based around compliance concerns like privacy, security, plagiarism, copyright etc. They realise AI opens a massive can of worms for them, and instead of working through the issues they are pretty keen to clamp down on it.
  • Cryptocurrency hype has made a lot of people wary about the "next big thing" in tech, especially when there is a cult-like vibe emanating from some of its evangelists, which is unfortunately how talk about the singularity comes off to a lot of people.