Submitted by SoulGuardian55 t3_10ro4hc in singularity

Not long ago, I engaged in a dispute about using AI in every part of our lives. For clarity, the debate was among ordinary people in a country (Russia*) where the topic of AI is not discussed much in society, but the results were pretty similar anyway. Here is what they said to me. To be fair, all of these points came from one man:

  1. The main emphasis was that neural networks cannot be trusted: even taking into account very rapid progress, they will remain neural networks, and other architectures will not replace them. Obviously, such an incomplete understanding of the topic is caused by a lack of information. It got to the point where narrow AI was referred to as general AI.
  2. "We don't need general AI, we've been doing well ourselves for the last 40,000 years without it."
  3. Neural networks and machine learning will only lead us to wrong results, because they cannot identify a problem on their own, without a human.
  4. You cannot be sure of the truthfulness of AlphaFold's predictions (or those of AlphaFold-like systems), because it is only trained on the available data and did not install proteins on its own, in real time.
  5. In five years, if someone ends up in the hospital, remember that it's because their doctor used AI systems to earn a degree and finish their diploma.

(*No matter what you think about the current conflict between Russia and Ukraine, please, in the name of healthy debate, don't bring up politics and war.)

13

Comments


xSNYPSx t1_j6x8ajo wrote

  1. Tell that to people who are actually 35+ (especially women) and want to be younger again
5

SoulGuardian55 OP t1_j6xr0bs wrote

Some of them think I "overestimate" how much AI will skyrocket the biomedical field.

5

FomalhautCalliclea t1_j6ysiyp wrote

First off, your post and attempt deserve more upvotes: you are trying to bring the topic to people who disagree and start a discussion, even more so in a context and country where the topic isn't mainstream. For that alone you deserve praise.

Now for the points in question:

  1. Neural networks aren't the end of AI research. The bet they make that no architecture will ever replace them is a bit presumptuous. And the goal of NNs is not to be trusted blindly; "blindly" is the word missing from their reasoning.

  2. That is the silliest point of them all, with respect to the people you were talking to. First of all, it can be said of many technologies; just think of space travel and the amazing discoveries it brought, even indirectly. But even simpler: we haven't been doing that well in the last 40,000 years. Besides, that sounds a lot like an appeal to nature fallacy:

https://en.wikipedia.org/wiki/Appeal_to_nature

  3. This point is somewhat anachronistic and tautological: of course it currently cannot identify a problem without a human. Otherwise, it would be an AGI... which they say is not possible... And a tool doesn't need to be human-independent to produce correct results. Some AI systems have been detecting breast cancer better than humans:

https://www.bbc.com/news/health-50857759

and those results were "correct" (whatever your fellows meant by "wrong result"; maybe a bit was lost in translation there, it's ok, I'm not a native English speaker myself). Btw, it's not even new: AI has been in use in cancer detection for the last 20 years or so.

  4. AlphaFold's goal isn't to "install proteins on its own, in real time". It seems your interlocutors make the same tautology as in point 3: "it's not an AGI, therefore it cannot be an AGI"... AlphaFold isn't conceived as a magic wand that tells you the truth 100% of the time, but as a helping tool to be used along with X-ray crystallography. It was intended that way. What your interlocutors hope AlphaFold to be isn't here yet.

  5. The "learning" done in university is often quite separate from actual knowledge. Many people cram a topic just for an exam and then forget it within a few days. In medicine, for example, many doctors keep learning throughout their careers. The classical way of learning isn't as optimal as they believe it to be. Sure, GPT can be abused, as can any technology. But those cheaters won't keep their jobs long if they know absolutely nothing; hospitals won't keep them.

3

visarga t1_j6yw3md wrote

> AlphaFold's goal isn't to "install proteins on its own, in real time"

Actually that's important - to have experimental validation, a signal to learn from when you have no other clue. Instead of learning from a fixed dataset, an AI could design experiments, like human scientists.

5

FomalhautCalliclea t1_j6z1n0v wrote

Totally agree on the fact that it's very important. It's just that we're not there yet and that AlphaFold is not made for that. Maybe a future descendant of it, but not AlphaFold itself.

The day we'll have that will definitely be a big deal for sure.

1

SoulGuardian55 OP t1_j6z5mra wrote

>That is the silliest point of them all, with respect to the people you were talking to. First of all, it can be said of many technologies; just think of space travel and the amazing discoveries it brought, even indirectly. But even simpler: we haven't been doing that well in the last 40,000 years. Besides, that sounds a lot like an appeal to nature fallacy

Let me elaborate on that point. In another thread I mentioned the three people (and their ages) who engaged in this topic with me. Those words (the 2nd point) came from the youngest of them, who's 22.

The second person, who is 23, is a medical student. He's doubtful about completely handing over work such as art to AI, but agrees that it's a powerful tool that can enhance artists, writers, etc., and give them a lot of help. Of the three, he seems the most well-versed and open to the topic. Of course, he's excited about developments in biomedical AI.

The last one is an engineer, 28, just not so engaged with the topic of AI; he was drawn to it partially after the appearance of generative AI and ChatGPT. But he also sees potential in the technology.

2

FomalhautCalliclea t1_j73nifj wrote

Very interesting that the ones more open to the topic are the most educated and versed in science-oriented fields.

My overall advice would be not to talk right away about the far future and the most improbable things (AGI, the singularity itself), but rather about current advances and progress, hence my reference to current achievements (the links), showing that AI is far from being only "wrong" and having only "failures".

1

SoulGuardian55 OP t1_j73se3x wrote

I love to speculate about exponential growth and future developments, but current progress alone gives plenty of fuel for discussion. Narrow AI systems and their advances are what have held my attention most closely since the late 2010s.

And to clarify the words in the post: they came mostly from the youngest person (like I said before), and he is a social worker by education.

2

Nadeja_ t1_j6wtvef wrote

  1. Although neural networks tend to “hallucinate” and make things up (the human brain does too; your own memory isn't 100% reliable either, which is why we help it with pictures, notes, journals, recorded numbers and so on, not just because we forget, but also because we might not remember correctly), if you want to retrieve accurate info from a NN, you have it understand your question and come up with a probable answer, then find the source on the net or in a database, and then, if found, a quote function returns the exact quote/info (see the rough sketch after this list). However, trust-wise, there is the alignment problem, but that's another story.

  2. Yeah, that sounds like “we don’t need the wheel, because we did fine without it in the past 300,000 years”.

  3. “Would only”, “would never”… is reasoning in absolutist terms, which ends up in faulty predictions such as “heavier-than-air machines will never fly”. For now, with the current models, you still have to review the results: the generated answer may contain inaccurate or made-up info, the generated code may have bugs or not work at all, the generated image comes with weird stuff that you notice when you zoom in or the hands look funny, and so on. But it's pretty likely that eventually we will have reliable models that understand the context better, that know how a hand is supposed to look and work, that return accurate sourced info, and that code like the best professionals. Our brain is proof that it's doable, unless you believe (based on no evidence) that it's because of something magical.

  4. You can hardly be 100% sure of anything, if you ask a philosopher, and there may be some issues, but there are also peer-reviewed papers.

  5. Or maybe the opposite happens and there are fewer wrong diagnoses. In the medical field, machine learning is already in use. Still, students shouldn't delegate their learning, reasoning and writing to language models and other models (not yet at least; I'm not sure how I'd feel once an ASI is around), but should use them to improve (e.g. you ask ChatGPT to improve your essay and you learn how to write better).
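To make that retrieve-then-quote idea from point 1 concrete, here is a rough, hypothetical Python sketch (the toy `SOURCES` corpus and the `draft_answer` / `find_supporting_quote` functions are stand-ins I made up, not any real system's API): the model drafts an answer, a lookup step tries to find an exact supporting quote in trusted sources, and anything unsupported gets flagged instead of being presented as fact.

```python
# Hypothetical sketch of a retrieve-then-quote flow; not a real library.

SOURCES = {
    "oncology-note": "Breast cancer is the most common cancer among women worldwide.",
    "style-guide": "Prefer short sentences and the active voice in technical writing.",
}

def draft_answer(question: str) -> str:
    """Stand-in for a neural network's free-form (possibly hallucinated) answer."""
    return "Breast cancer is the most common cancer among women worldwide."

def find_supporting_quote(claim: str):
    """Look the claim up in trusted sources; return (source_id, exact_quote) if found."""
    for source_id, text in SOURCES.items():
        if claim.lower() in text.lower():
            return source_id, text
    return None

def answer_with_citation(question: str) -> str:
    claim = draft_answer(question)
    hit = find_supporting_quote(claim)
    if hit is None:
        # No trusted source backs the model's claim: flag it rather than state it as fact.
        return f"UNVERIFIED: {claim}"
    source_id, quote = hit
    return f'{claim}\n  source [{source_id}]: "{quote}"'

if __name__ == "__main__":
    print(answer_with_citation("Which cancer is most common among women?"))
```

In a real system the substring match would be replaced by an actual search index or database query, but the shape is the same: generate, verify against a source, and only then quote.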

1

SoulGuardian55 OP t1_j6wvwhg wrote

>but should use them to improve (e.g. you ask ChatGPT to improve your essay and you learn how to write better).

I used that argument with one of them, but he tried to counter it like this: "Do you really think students would use such systems, even 'education type' ones, to improve themselves? I'm highly doubtful that will be the case."

1

SoulGuardian55 OP t1_j6wwav7 wrote

One more thing: the dispute was with people who are pretty young (one is 22 years old, another is 23, and the oldest is 28).

1