astrange

astrange t1_jdy6d4f wrote

But nobody uses the base model, and when people did use it, it was only interesting because it doesn't always predict the most likely next word and therefore generates new text. A model that successfully predicted the next word every time given existing text would be overfitting, since it would only reproduce text you already have.
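
A minimal sketch of that distinction, with toy numbers not taken from any real model: greedy argmax decoding only ever reproduces the single most likely continuation, while sampling from the predicted distribution is what yields new text.

```python
import random

# Hypothetical next-word distribution from a language model.
next_word_probs = {"the": 0.5, "a": 0.3, "banana": 0.2}

# Greedy decoding: deterministically picks the most likely word.
greedy = max(next_word_probs, key=next_word_probs.get)  # always "the"

# Sampling: draws from the distribution, so the output varies per call.
sampled = random.choices(
    list(next_word_probs), weights=list(next_word_probs.values())
)[0]

print(greedy, sampled)
```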

1

astrange t1_j9se7ri wrote

Yud is a millenarian street preacher; his concept of evil superintelligent AGI is half religion and half the old SF books he read. It bears no resemblance to current research, and the field isn't going in the directions he imagines it is.

(There's not even much reason to believe "superintelligence" is possible, that it would be helpful on any given task, or even that humans are generally intelligent.)

12

astrange t1_j7oduw3 wrote

This is wishful thinking. ChatGPT, being a computer program, doesn't have features it's not designed to have, and it's not designed to have this one.

(By designed, I mean has engineering and regression testing so you can trust it'll work tomorrow when they redo the model.)

I agree a fine-tuned LLM can be a large part of it, but virtual assistants already have language models and obviously don't always work that well.
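
A hypothetical example of what I mean by regression testing (the function names are placeholders, not anything OpenAI actually ships):

```python
# Pin a behavior down so CI fails if a retrained model loses it.
def assistant_reply(prompt: str) -> str:
    """Stub standing in for a call to the deployed model."""
    return "The capital of France is Paris."

def test_capital_of_france():
    # Re-run against every new model build; a failure here means the
    # "feature" silently disappeared when the model was redone.
    assert "Paris" in assistant_reply("What is the capital of France?")
```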

2

astrange t1_j0z2ea3 wrote

This seems like an extremely difficult problem. Humans fail to recognize sarcastic journalism all the time; I expect only the original authors could tell for some of it.

(For instance, famous alleged-fraudster SBF has a lot of articles in places like the NYT which most readers think are "good press" for him, but I'm fairly sure are actually the journalists lowkey making fun of him.)

2

astrange t1_iy5mm2j wrote

Yeah, "bad anatomy" and things like that come from NovelAI because its dataset has images literally tagged with that. It doesn't work on other models.

SD's training data is scraped from the internet, so something that might work is negative keywords associated with websites whose images you don't like, like "zillow", "clipart", "coindesk", etc.

Or try clip-interrogator or textual inversion against bad-looking images (but IMO clip-interrogator doesn't work very well yet either).
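
For the negative-keyword idea, a minimal sketch using Hugging Face diffusers (assumes the diffusers/torch packages and a CUDA GPU; the checkpoint name and keywords are just illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a portrait photo of an astronaut",
    # Website-style negative keywords as suggested above; "bad anatomy"
    # style tags only matter for models whose datasets were tagged that way.
    negative_prompt="zillow, clipart, coindesk, watermark",
).images[0]
image.save("astronaut.png")
```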

10

astrange t1_iwyimtj wrote

Humans do have some instinctive knowledge. The instinctive fear of snakes and spiders, sexual attraction, etc., all rely on recognizing sense data without learning anything first.

1

astrange t1_irr0xth wrote

The “spooky action” (instantaneous collapse of the wavefunction) is part of the Copenhagen interpretation of quantum physics, but isn't proven to exist, as that's just one interpretation.

There are other interpretations that are still valid (many worlds, superdeterminism, pilot wave) and don't include it, though of course many of those can't be falsified.
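
For reference, the collapse step in question is the standard textbook projection postulate (my notation, not from the comment); unitary-only interpretations like many worlds keep the Born-rule probability but drop the second, discontinuous step:

```latex
% Measuring observable A with eigenprojector P_a on state |psi>:
% outcome a occurs with probability p(a), after which (per Copenhagen)
% the state jumps instantaneously to the renormalized projection.
p(a) = \langle\psi|P_a|\psi\rangle, \qquad
|\psi\rangle \;\longrightarrow\;
\frac{P_a|\psi\rangle}{\sqrt{\langle\psi|P_a|\psi\rangle}}
```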

2

astrange t1_irpk8s2 wrote

Do you have specific examples?

It's obviously true that diffusion models don't work for the reasons they were originally thought to; the cold diffusion paper shows this. Also, the Stable Diffusion explainers I've seen describe it as pixel-space diffusion even though it's actually latent diffusion. And I'm not sure I understand why latent diffusion works.
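
A minimal sketch of the pixel-vs-latent distinction, with toy stand-ins (the pooling "encoder" is only a placeholder for SD's real trained VAE, which actually maps a 512x512 image to a 4-channel 64x64 latent):

```python
import torch

def add_noise(x, t, T=1000):
    """Toy forward diffusion step: blend the input with Gaussian noise."""
    alpha = 1.0 - t / T
    return alpha**0.5 * x + (1 - alpha)**0.5 * torch.randn_like(x)

image = torch.rand(1, 3, 512, 512)

# Pixel diffusion: the denoiser operates directly on the image tensor.
noisy_image = add_noise(image, t=500)              # (1, 3, 512, 512)

# Latent diffusion (what SD does): encode first, then run the diffusion
# loop on the much smaller latent. avg_pool2d is a stand-in encoder.
latent = torch.nn.functional.avg_pool2d(image, 8)  # (1, 3, 64, 64)
noisy_latent = add_noise(latent, t=500)

print(noisy_image.shape, noisy_latent.shape)
```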

1