
visarga t1_izg14yr wrote

> But here lies the rub: you will need to do this for everything that you do going forward, and the facade will need to never fall.

In a few years we'll all be surrounded by very advanced AI, left and right. The trend is to use more and more AI, not less. It will become like penmanship in the age of keyboards. Everyone will use AI for writing.

BTW, you can use GPT-3 prompted with personality profiles to answer polls, rate things, or act like a focus group. If you know the distribution of your audience, you can focus-group the shit out of your messages to obtain the maximum impact.

> “conditioning GPT-3 on thousands of socio-demographic backstories from real human participants in multiple large surveys in the United States: the 2012, 2016, and 2020 waves of the American National Election Studies (ANES)[16], and Rothschild et al.’s ‘Pigeonholing Partisans’ data.”

> “When properly conditioned, [GPT-3] is able to produce outputs biased both toward and against specific groups and perspectives in ways that strongly correspond with human response patterns along fine-grained demographic axes. In other words, these language models do not contain just one bias, but many.”

They can simulate a population in silico for virtual polling. Everyone will want to virtual-test their tweets and articles.
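A minimal sketch of what that virtual-polling loop could look like. Everything here is an assumption for illustration: the backstory fields, the prompt template, and the stub model are all hypothetical; in the study quoted above, the backstories come from real ANES respondent profiles, and the answers come from GPT-3, not a stub.

```python
from collections import Counter

# Hypothetical socio-demographic backstories (illustrative only; the paper
# conditions on thousands of real survey-respondent profiles).
BACKSTORIES = [
    {"age": 34, "gender": "woman", "party": "Democrat", "region": "Midwest"},
    {"age": 61, "gender": "man", "party": "Republican", "region": "South"},
    {"age": 27, "gender": "man", "party": "Independent", "region": "West"},
]

def make_prompt(backstory: dict, question: str) -> str:
    """Turn one demographic profile into a first-person conditioning prompt."""
    persona = (
        f"I am a {backstory['age']}-year-old {backstory['gender']} "
        f"from the {backstory['region']}. Politically, I identify as "
        f"a {backstory['party']}."
    )
    return f"{persona}\nQuestion: {question}\nMy one-word answer (yes/no):"

def virtual_poll(question: str, ask_model) -> Counter:
    """Ask every simulated respondent the question and tally the answers.

    `ask_model` is any callable mapping prompt -> completion string; swap in
    a real LLM client here instead of the offline stub below.
    """
    return Counter(
        ask_model(make_prompt(b, question)).strip().lower()
        for b in BACKSTORIES
    )

# Offline stub so the sketch runs without an API; a real run samples GPT-3.
def stub_model(prompt: str) -> str:
    return "yes" if ("Democrat" in prompt or "Independent" in prompt) else "no"

print(virtual_poll("Would this tweet appeal to you?", stub_model))
```

The tallied Counter is the "virtual poll result": run it over drafts of a tweet or article, weight the backstories to match your audience distribution, and pick the draft that polls best.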

6

GuyWithLag t1_izg8rz0 wrote

Was this written by an AI? Because it veers hard toward a different topic after the first paragraph.

>penmanship in the age of keyboards

Bad example: in both cases you need to know what you want to write and how to express it. My position is that OP's approach will lead to a shallower understanding of the topics he delegates to the AI for research, and that he won't have the foundations needed to generate real advances (novel things, sure, he'll get those from the AI recombining the current state of the art).

2