
Clean-Inevitable538 t1_ivhfbhw wrote

This answer is a perfect example of what the OG author is talking about. The response does seem to come from a knowledgeable person and seems well constructed, but it does not address the point the author is making. The commenter is observant enough to state that the author's argument is unclear, which in reality means that they did not understand it fully... Which is great at showing how two separate theories of truth work for different people. Where the author is probably coming from some sort of relativism, the redditor comes from a theory where truth is objective, and so claims not that the OG author's argument is difficult to understand, but that the argument is unclear, under the premise that they know what constitutes a clear argument. :D

Three takeaways:

  1. The paradox of big data is that the more data we ransack for patterns, the more likely it is that what we find will be worthless or worse.
  2. The real problem today is not that computers are smarter than us, but that we think that computers are smarter than us and trust them to make decisions for us that they should not be trusted to make.
  3. In the age of Big Data and powerful computers, human wisdom, common sense, and expertise are needed more than ever.
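Takeaway 1 is essentially a multiple-comparisons point: ransack enough unrelated variables for patterns and some will correlate with your target by pure chance. A quick simulation sketches this (the sample sizes, variable counts, and threshold below are illustrative choices, not figures from the article):

```python
import random

random.seed(0)

n_samples = 30      # observations per variable
n_variables = 2000  # number of unrelated variables "ransacked" for patterns
threshold = 0.4     # |correlation| we (naively) call a "pattern"

# A target series of pure noise.
target = [random.gauss(0, 1) for _ in range(n_samples)]

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Every candidate variable is also pure noise, so any "pattern" found
# is worthless by construction -- yet some still clear the threshold.
hits = sum(
    1
    for _ in range(n_variables)
    if abs(corr(target, [random.gauss(0, 1) for _ in range(n_samples)])) > threshold
)

print(f"spurious 'patterns' found: {hits} of {n_variables}")
```

The more variables you search (and the fewer samples you have per variable), the more of these worthless hits you collect, which is the paradox the takeaway describes.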
−1

visarga t1_ivinkvl wrote

  1. Take a look at neural scaling laws, figures 2 and 3 especially. Experiments show that more data and more compute are better. It's been a thing for a couple of years already; the paper has 260 citations and was authored by OpenAI.

  2. If you work with AI, you know it always makes mistakes. Just like when you're using Google Search, you know you often have to work around its problems. Checking that models don't make mistakes is big business today, called "human in the loop". There is awareness of model failure modes. Not to mention that even generative AIs like Stable Diffusion require lots of prompt massaging to work well.

  3. sure
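The scaling laws mentioned in point 1 say that test loss falls as a power law in data and compute. A minimal sketch of that shape, with placeholder constants loosely in the spirit of the cited paper's data-scaling fit (the exact values here are illustrative, not the paper's):

```python
def loss(data_size, d_c=1e13, alpha=0.095):
    """Power-law scaling of loss with dataset size: L(D) = (d_c / D) ** alpha.

    d_c and alpha are illustrative placeholders; the cited paper fits
    such constants empirically for language models.
    """
    return (d_c / data_size) ** alpha

# Loss decreases smoothly and predictably as the dataset grows.
for d in (1e6, 1e8, 1e10):
    print(f"D = {d:.0e}: loss ~ {loss(d):.3f}")
```

The empirical point in the comment is exactly this monotone trend: over the regimes measured, more data keeps helping, rather than producing "worthless or worse" results.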

9

thereissweetmusic t1_ivj4ggd wrote

As a layman, I find that your supposed alternative interpretation of the article's arguments makes them sound quite simplistic and not at all difficult to understand. Reductive, even. Which makes me suspect your suggestion that OP didn't understand the article came directly from your butthole.

  1. Ok, you've just claimed the opposite of what OP claimed, and provided far less evidence (i.e. none) to back it up compared to what OP provided.

  2. This sounds like it has nothing to do with having more or less data.

  3. Ditto

2

Clean-Inevitable538 t1_ivj6u8m wrote

I am a layman as well, but as far as I understand the article, since it talks about meaning and relation, the variance mentioned by the commenter is not relevant. And I can see how it could be misconstrued as relevant when talking about meaning. It depends on whether "meaningful" is understood as the data extrapolation itself or as its correlation to factual application.

3

shumpitostick t1_ivldyfg wrote

I understand that those are the takeaways, but where is the evidence? The author just jumps to some vaguely related topics as if it's evidence, while what he's really doing is spinning some kind of narrative, and the narrative is wrong.

About the takeaways:

  1. As I explained in my comment, this is not true.
  2. Who thinks that way? Everybody I know, both laymen and people in the field of Data Science and Machine Learning, have healthy skepticism of AI.
  3. Having worked as a data scientist, I can attest that data scientists check their algorithms, use common sense, and put an emphasis on understanding their data.

Honestly, the article just reads to me as a boomer theory-focused economist who's upset about the turn towards a quantitative, statistics-heavy approach that his field has taken. There is a certain (old) school of economists who prefer theoretical models and take a rationalist over an empirical approach. The problem with their approach is that the theoretical models they build rest on assumptions that often turn out to be wrong. They use "common sense" rather than relying on data, but the world is complex and many "common sense" assumptions don't actually hold.

0