aluvus t1_jag1qou wrote
Reply to comment by elijahmeeks in [OC] Complexity and Uncertainty of Topics that ChatGPT Claims to be Difficult to Discuss by elijahmeeks
ChatGPT is not a data source, even for data about itself. There is no underlying "thinking machine" that can, say, meaningfully assign numeric scores like this. It is essentially a statistical language model: a very impressive parrot.
There is nothing inside of it that can autonomously reach a conclusion that a topic is too difficult to comment on; in fact, many people have noted that it will generate text that sounds very confident and is very wrong. It does not have a "mental model" by which it can actually be uncertain about claims in the way that a human can.
The first question you asked (100 topics) is perhaps one that it can answer in a meaningful way, but only inasmuch as it reveals things its developers deliberately restricted it from discussing. The others reflect only, at best, how complex people have said a topic is in the corpus of text that was used to train its language model.
Regarding the plot itself, I would suggest "uncertainty about topic" and "complexity of topic" for the labels, as the single-word labels were difficult for me to make sense of. I would also suggest reordering the labels, since complexity should be the thing that leads to uncertainty (for a human; for ChatGPT they are essentially unrelated).
aluvus t1_j9so4kr wrote
Reply to [OC] Devastating WUI Peaks. by srujay98
It would be better to have just the beginning of each year marked on the x axis, rather than what I assume are auto-generated ticks (which are spaced every 3 quarters).
The vertical gridlines are very dense, to the point of being a distraction. It also bothers me that they fade out before reaching the x axis, making it very hard to correlate them with a year/quarter.
It would be nice to embed a description of this index in the graphic, since very few people will have heard of it before.
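For what it's worth, if the chart was made in Python with matplotlib (an assumption on my part; I don't know what tooling was actually used), pinning the ticks to year starts is a one-liner with `YearLocator`. A minimal sketch:

```python
# Sketch: replace auto-generated quarterly ticks with one tick at the
# start of each year. Assumes matplotlib; the data here is hypothetical.
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import datetime as dt

# Hypothetical quarterly series spanning several years
dates = [dt.date(2018, 1, 1) + dt.timedelta(days=91 * i) for i in range(16)]
values = list(range(16))

fig, ax = plt.subplots()
ax.plot(dates, values)

# One major tick per year, labeled with just the year
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y"))
fig.canvas.draw()  # force tick computation

ticks = [mdates.num2date(t) for t in ax.get_xticks()]
```

Every tick then lands on January 1 of a year, which makes the gridlines trivially readable.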
aluvus t1_j90iyhc wrote
Reply to [OC] Took a look into Hello Fresh by jakeman8888
Is "cooking difficulty" meant to be some sort of average across the category? If so, the label is unclear. My initial assumption was that the chart was loaded with the wrong data, since "cooking difficulty: European" did not make any sense.
aluvus t1_jag4p3r wrote
Reply to comment by elijahmeeks in [OC] Complexity and Uncertainty of Topics that ChatGPT Claims to be Difficult to Discuss by elijahmeeks
The fact that people misuse it as a data source is not an excuse for you to knowingly misuse it, doubly so without providing any context to indicate that you know the data is basically bunk. This is fundamentally irresponsible behavior. Consider how your graphic will be interpreted by different audiences:
> Whether it "should" be able to assign meaningful numerical scores to things like this, it sure was willing to.
Yes, so will random.org. Should I make a graphic of that too? Perhaps I could imply that it is sentient.