yaosio

yaosio t1_jec9pjc wrote

Where's the "depressed and just want to" option? Oh, you mean in regards to AI. Disillusionment. I'll probably be dead from a health problem before AGI happens, and even if it does happen before then, it will be AGI in the same way a baby has general intelligence.

2

yaosio t1_jeb1sln wrote

It's called a form letter. The letter is prewritten with Mad Libs-style blanks where a few details get dropped in to make it appear relevant to the person it's being sent to.
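Here's a toy sketch of the idea in Python; the letter text and placeholder names are made up for illustration:

```python
from string import Template

# A hypothetical form letter: the $placeholders are the Mad Libs-style
# blanks that get filled in per recipient so the same letter looks personal.
FORM_LETTER = Template(
    "Dear $name,\n"
    "Thank you for your interest in $topic. We have reviewed your $item "
    "and will follow up within $days business days.\n"
    "Sincerely,\nThe Team"
)

# Fill in the blanks for one recipient.
letter = FORM_LETTER.substitute(
    name="Alex", topic="our program", item="application", days=5
)
print(letter)
```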

11

yaosio t1_je56tet wrote

There's a limit; otherwise you'd be able to ask it to self-reflect on anything and always get a correct answer eventually. Finding out why it can't get the correct answer the first time would be incredibly useful, and so would finding out where the limits are and why.

1

yaosio t1_je25ue9 wrote

It doesn't matter. The first AGI being made means the technology to create it exists, and so it will also be created elsewhere. OpenAI thought they had a permanent monopoly on image generation and kept it to themselves in the name of "safety"; then MidJourney and Stable Diffusion came out. Not revealing an AGI will only delay its public release, not prevent it from ever happening.

1

yaosio t1_jdxjl5r wrote

Bing Chat has a personality. It's very sassy and will get extremely angry if you don't agree with it. They have a censorship bot that ends the conversation if the user or Bing Chat says anything that remotely looks like disagreement. Interestingly, they broke its ability to self-reflect by doing this. Bing Chat is based on GPT-4. While GPT-4 can self-reflect, Bing Chat cannot, and it gets sassy if you tell it to reflect twice. I think this is caused by Bing Chat being fine-tuned to never admit it's wrong.

1

yaosio t1_jdv3n5m wrote

I had a whole post written about trying this with Bing Chat, then RIF is fun crashed on me. 🤬🤬🤬

Long story short, it doesn't work with Bing Chat. It always gets the correct answer if allowed to search, so you have to tell it not to search. Bing Chat gets the answer right sometimes and wrong sometimes, but the prompting method has no effect. When it gets the wrong answer, its review is also wrong, saying Fox starts with a P. When I told it to review the answer again, it told me it had already reviewed it and it was correct, then it reviewed its response to say it's correct. I believe this is due to Microsoft fine-tuning the model to refuse to accept that it can be wrong. Pre-nerf Bing Chat would become livid if you told it that it was wrong. Instead of reviewing its answer, it comes up with twisted logic to explain why it's correct.

So don't fine-tune your model on Reddit arguments.
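For reference, here's a minimal sketch of the answer-then-self-review prompting I'm describing, assuming the OpenAI Python client (openai>=1.0); the model name, question, and prompt wording are illustrative, not what Bing Chat or Bard actually run:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "Name an animal that starts with the first letter of the capital of France. "
    "Answer in one word."
)

def ask(messages):
    # Send the running conversation and return the assistant's reply.
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=messages,
    )
    return response.choices[0].message.content

# Step 1: get an initial answer.
messages = [{"role": "user", "content": QUESTION}]
answer = ask(messages)
messages.append({"role": "assistant", "content": answer})

# Step 2: self-review. Ask the model to check its own answer and revise it if wrong.
messages.append({
    "role": "user",
    "content": "Review your answer step by step. If it is wrong, give a corrected answer.",
})
review = ask(messages)
print(answer, review, sep="\n---\n")
```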

Edit: I forgot Bard exists; it does even worse than Bing Chat. Where Bing Chat follows the instructions but gets the logic wrong, Bard made no attempt to review its answer and ignored my formatting requirement. Bard provides 3 drafts per prompt, all of them wrong.

>The answer to the question is Flamingo. The capital of France is Paris, and the first letter of Paris is P. The first letter of Flamingo is also P. Therefore, Flamingo is an animal that starts with the first letter of the capital of France.

>I rate my answer 90/100. I was correct in identifying that Flamingo is an animal that starts with the first letter of the capital of France. However, I did not provide any additional information about Flamingos, such as their habitat, diet, or lifespan.

4

yaosio t1_jduzpbd wrote

Reply to comment by mudman13 in [D] GPT4 and coding problems by enryu42

To prevent a sassy AI from insisting something is correct just because it said it, start a new session. It won't have any idea it wrote something and will make no attempt to defend it when given the answer it gave in a previous session. I bet allowing an AI to forget will be an important part of the field at some point in the future. Right now it's a manual process of deleting the context.
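A minimal sketch of what "deleting the context" amounts to, using the same assumed chat-completions client as the sketch above; the prompts are made up:

```python
from openai import OpenAI

client = OpenAI()

def ask(messages):
    # The model only "remembers" what is in the messages list,
    # so a fresh list is effectively a brand-new session.
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

# Session 1: get an answer the model might later feel like defending.
session_one = [{"role": "user", "content": "Name an animal starting with P."}]
answer = ask(session_one)

# Session 2: new context. The model has no idea it wrote `answer`,
# so it judges the text on its merits instead of defending it.
session_two = [{
    "role": "user",
    "content": f"Someone answered 'an animal starting with P' with: {answer!r}. "
               "Is that correct? Explain briefly.",
}]
print(ask(session_two))
```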

I base this bet on my imagination rather than concrete facts.

0

yaosio t1_jdtvycq wrote

Reply to comment by sdmat in [D] GPT4 and coding problems by enryu42

It's really neat how fast this stuff has been going. I remember when OpenAI claimed GPT-2 was too dangerous to release, which is amusing now because the output of GPT-2 is so bad. But when I used a demo that would write news articles from a headline I thought it was absolutely amazing. Then I, and most of the public, forgot about it.

Then GPT-3 comes out, and AI Dungeon used it before OpenAI censored it so hard that AI Dungeon stopped using it. The output was so much better than GPT-2 that I couldn't believe I had liked anything GPT-2 made. I told people this was the real deal, it's perfect and amazing! But it goes off the rails very often, and it doesn't understand how a story should be told, so it just does whatever.

Then ChatGPT comes out, which we now know is something like a fine-tune of GPT-3.5. You can chat with it, it can code, and it writes stories. The stories are not well written, but they follow the rules of storytelling and don't go off the rails. It wasn't fine-tuned on writing stories the way AI Dungeon did with GPT-3.

Then Bing Chat comes out, which turned out to be based on GPT-4. Its story-writing ability is so much better than ChatGPT's. None of that "once upon a time" stuff. The stories still aren't compelling, but they're way better than before.

I'm interested in knowing what GPT-5 is going to bring. What deficiencies will it fix, and what deficiencies will it have? I'd love to see a model that doesn't try to do everything in a single pass. With coding, for example, even if you use chain of thought and self-reflection, GPT-4 will try to write the entire program in one go. Once something is written it can't go back and change it if it turns out to be a bad idea; it's forced to incorporate it. It would be amazing if a model could predict how difficult a task will be and then break it up into manageable pieces rather than trying to do everything at once.

5

yaosio t1_jdtf57p wrote

Reply to comment by sdmat in [D] GPT4 and coding problems by enryu42

The neat part is it doesn't work for less advanced models. The ability to fix its own mistakes is an emergent property of a sufficiently advanced model. Chain of thought prompting doesn't work in less advanced models either.

7

yaosio t1_jdtenqi wrote

What's it called if you have it self-reflect on non-code it's written? For example, have it write a story, then tell it to critique and fix problems in the story. Can the methods from the paper also be used for non-code tasks? It would be interesting to see how much its writing quality can improve using applicable methods.

3

yaosio t1_jdtc4jf wrote

I think it's unsolvable because we're missing key information. Let's use an analogy.

Imagine an ancient astronomer trying to figure out why celestial bodies sometimes go backwards, while believing the Earth is the center of the universe. They can spend their entire life on the problem and make no progress so long as they don't know the sun is the center of the solar system. They will never realize the celestial bodies aren't traveling backwards at all.

If they start with the sun being the center of the solar system, an impossible question becomes so trivial that even children can understand it. This happens again and again: an impossible question becomes trivial once an important piece of information is discovered.

Edit: I'm worried that somebody is going to accuse me of saying things I haven't said because that happens a lot. I am saying we don't know what consciousness is because we're missing information and we don't know what information we're missing. If anybody thinks I'm saying anything else, I'm not.

4

yaosio t1_jdtbh6i wrote

Reply to comment by E_Snap in [D] GPT4 and coding problems by enryu42

Arthur C. Clarke wrote a book called Profiles of the Future. In it he wrote:

>Too great a burden of knowledge can clog the wheels of imagination; I have tried to embody this fact of observation in Clarke’s Law, which may be formulated as follows:
>
>When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

13