mvujas OP t1_j0l9aht wrote

That is true, but it's a similar case with crowdsourcing: they have some clever tricks there, such as honeypots and weighted expertise scores (or whatever they are called), to make the most of the data. But I would even argue that continuing a conversation is a form of positive feedback, or even coming back to the website.
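
Just to illustrate what I mean by those tricks (a toy sketch I made up, not anything OpenAI has described): honeypot items with known answers give you a reliability estimate per annotator, and you then weight their votes by it.

```python
# Toy sketch of honeypots + expertise weighting in crowdsourcing:
# rate each annotator on items with known answers, then weight their
# votes by that accuracy when aggregating labels for real items.
from collections import defaultdict

honeypot_answers = {"q1": "A", "q2": "B"}  # items with known ground truth

# annotator -> {item: label}, mixing honeypots and real items
annotations = {
    "ann1": {"q1": "A", "q2": "B", "q3": "A"},
    "ann2": {"q1": "B", "q2": "B", "q3": "B"},
}

def expertise(labels: dict) -> float:
    """Fraction of honeypot items this annotator got right."""
    hits = [labels[q] == a for q, a in honeypot_answers.items() if q in labels]
    return sum(hits) / len(hits) if hits else 0.5  # unknown annotators get a neutral weight

def aggregate(item: str) -> str:
    """Weighted majority vote over all annotators who labelled `item`."""
    scores = defaultdict(float)
    for ann, labels in annotations.items():
        if item in labels:
            scores[labels[item]] += expertise(labels)
    return max(scores, key=scores.get)

print(aggregate("q3"))  # ann1's vote wins because it aced the honeypots
```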

6

Nameless1995 t1_j0lri08 wrote

I just had a thought. I think resampling with the "try again" button can itself be used as feedback (a noisy signal that the user didn't like the earlier version). Moreover, if a user switches back to the earlier sample, that can be another piece of feedback (the earlier version being preferred). They can get a lot of data from these. I expect users to use "try again" far more frequently than upvotes/downvotes.
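
Roughly what I'm imagining (pure speculation about the logging on my part, field names made up): regenerate / switch-back events become noisy preference pairs of the kind used to train a reward model.

```python
# Sketch: turn "try again" / "switch back" events into (preferred, rejected)
# response pairs for one prompt. Field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Sample:
    text: str
    regenerated: bool        # user hit "try again" after seeing this sample
    switched_back_to: bool   # user later navigated back to this sample

def preference_pairs(samples: list[Sample]) -> list[tuple[str, str]]:
    """Return (preferred, rejected) pairs mined from one prompt's samples."""
    pairs = []
    for earlier, later in zip(samples, samples[1:]):
        if earlier.regenerated:
            if earlier.switched_back_to:
                # user went back: the earlier version is (weakly) preferred
                pairs.append((earlier.text, later.text))
            else:
                # user stayed with the regeneration: the later version is preferred
                pairs.append((later.text, earlier.text))
    return pairs

log = [
    Sample("answer v1", regenerated=True, switched_back_to=False),
    Sample("answer v2", regenerated=False, switched_back_to=False),
]
print(preference_pairs(log))  # [('answer v2', 'answer v1')]
```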

9

Aggravating-Act-1092 t1_j0lhqqx wrote

I’d agree. You can probably even ask ChatGPT to review the follow-up someone gives it and assign a score based on that.

Personally, if it gives me buggy code, I point it out and try to fix it, for example; that's a clear negative. I also sometimes write "thank you" when I'm happy with its answer.
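
Something like this is what I have in mind (just a sketch; `call_chatgpt` is a hypothetical stand-in for whatever API call you'd actually use, and the prompt and scale are made up):

```python
# Sketch of the "ask the model to grade the follow-up" idea.
def score_followup(assistant_answer: str, user_followup: str, call_chatgpt) -> int:
    prompt = (
        "An assistant gave this answer:\n"
        f"{assistant_answer}\n\n"
        "The user then replied:\n"
        f"{user_followup}\n\n"
        "On a scale from -2 (user clearly unhappy, e.g. reports buggy code) "
        "to +2 (user clearly happy, e.g. says thank you), how satisfied does "
        "the user seem? Answer with a single integer."
    )
    reply = call_chatgpt(prompt)  # hypothetical model call, injected by the caller
    try:
        return max(-2, min(2, int(reply.strip())))
    except ValueError:
        return 0  # unparseable reply -> treat as neutral

# Example with a fake model that just answers "2":
print(score_followup("here's the code", "thanks, works great!", lambda p: "2"))  # -> 2
```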

6

fimari t1_j0rpmx0 wrote

Probably the same way Google detects good search results: people stop searching when the result is good, and they stop fiddling around once they have what they want.
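
A toy version of that heuristic (my own simplification, obviously not Google's actual signal): treat the query whose result ends the session, with a decent dwell time, as the satisfied one.

```python
# Toy "user stopped searching" heuristic over a single search session.
def satisfied_query(session):
    """session: list of (query, dwell_seconds) in chronological order."""
    if not session:
        return None
    last_query, dwell = session[-1]
    # user stopped searching and spent time on the result -> likely satisfied
    return last_query if dwell >= 30 else None

print(satisfied_query([("rlhf paper", 5), ("rlhf openai blog", 120)]))
# -> 'rlhf openai blog'
```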

2