nixed9 t1_jdrxr76 wrote

a Reflexion loop asks the model to react to its own output and critique it before giving you an additional answer.

Edit: (In the paper, it describes a loop like this which feeds back into itself to aid its own cognition. The loop can be repeated multiple times.)

You can do a mini-loop by prompting. I've been playing with this all day.

I prompt it like this:

> "For this interaction, we are going to use the following structure.

> User (me): [I will ask a topic or question]

> You will provide an Assistant Hypothetical Response: [Brief or simplified answer to the topic or question]

> Then you will undergo Agent Reflection: [You will provide a Critique of the hypothetical response, highlighting the limitations, inaccuracies, or areas that need improvement or expansion, while providing guidance on how to address these issues in the revised response]

> Then you will provide an Actual Response: [The natural and contextually appropriate answer to the topic or question, as generated by the advanced language model, which incorporates the suggestions and improvements from the agent reflection for a more comprehensive and accurate response. This also can include step-by-step reasoning.]

> Do you understand?"
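The same draft → critique → revise structure can be automated instead of done by hand in the chat window. Here's a minimal sketch of that control flow; `ask_model` is a hypothetical stand-in for whatever LLM API call you use (it's stubbed out here so the loop itself is runnable, and the prompt wording is just an assumption, not from the paper):

```python
def ask_model(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return f"[model output for: {prompt[:40]}...]"

def reflexion_answer(question: str, rounds: int = 2) -> str:
    """Draft an answer, then repeatedly critique and revise it."""
    # Assistant Hypothetical Response
    answer = ask_model(f"Briefly answer: {question}")
    for _ in range(rounds):
        # Agent Reflection: critique the current draft
        critique = ask_model(
            f"Critique this answer, highlighting limitations, "
            f"inaccuracies, or areas needing expansion:\n{answer}"
        )
        # Actual Response: revise using the critique
        answer = ask_model(
            f"Question: {question}\nDraft: {answer}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return answer

print(reflexion_answer("Why is the sky blue?"))
```

Each pass feeds the previous answer plus its critique back into the model, which is the same mini-loop as the manual prompt above, just without you copy-pasting between turns.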


nixed9 t1_jdnpdma wrote

In my personal experience, Bing Chat, while it says it's powered by GPT-4, is way, way, way less powerful and useful than ChatGPT-4 (which is only available for Pro users right now). I've found ChatGPT-4 SIGNIFICANTLY better.

It also has emergent properties of intelligence, vision, and mapping, somehow. We don't know how.

This paper, which was done on GPT-4, and on a more powerful version than what we have access to via either Bing or ChatGPT, is astounding:


nixed9 t1_jdifhni wrote

This is quite literally what we hope for/deeply fear at /r/singularity. It's going to be able to interact with computer systems itself. Give it read/write memory access and access to its own API, or the ability to simply process the screen output visually... and then... what?

Several years ago, as recently as 2017 or so, this seemed extremely far-fetched, and the estimate of a technological singularity by 2045 seemed wildly optimistic.

Right now it seems more likely than not to happen by 2030.


nixed9 t1_itzv0p0 wrote

What? What are you talking about?

Who thinks this?

What evidence is there for this?

Why would they think this?

What capitalist-based society would intentionally sacrifice productivity?