
FomalhautCalliclea t1_j6ysiyp wrote

First off, your post and attempt deserve more upvotes: you are trying to bring the topic to people who disagree and start a discussion, even more so in a context and country where the topic isn't mainstream. For that alone you deserve praise.

Now for the points in question:

  1. Neural networks aren't the end of AI research. Their bet that no other architecture will ever replace them is a bit presumptuous. And the goal of NNs is not to be trusted blindly. That's the nuance missing from their reasoning.

  2. That is the silliest point of them all, with all due respect to the people you were talking to. First of all, it could be said of many technologies: just think of space travel and the amazing discoveries it brought, even indirectly. But even simpler: we haven't been doing that well over the last 40,000 years. Besides, it sounds a lot like an appeal to nature fallacy:

https://en.wikipedia.org/wiki/Appeal_to_nature

  3. This point is somewhat anachronistic and tautological: of course it cannot currently identify a problem without a human. Otherwise, it would be an AGI... which they say is not possible... And a tool doesn't need to be independent of humans to produce correct results. Some AI systems have detected breast cancer better than humans:

https://www.bbc.com/news/health-50857759

and those results were "correct" (whatever your fellows meant by "wrong result"; maybe a bit was lost in translation there, it's ok, I'm not a native English speaker myself). Btw, it's not even new: AI has been used in cancer detection for the last 20 years or so.

  4. AlphaFold's goal isn't to "install proteins on his own, in real time". It seems your interlocutors make the same tautology as in point 3: "it's not an AGI, therefore it cannot be an AGI"... AlphaFold wasn't conceived as a magic wand that tells you the truth 100% of the time, but as a helping tool to be used alongside X-ray crystallography. It was intended that way. What your interlocutors hope AlphaFold to be isn't here yet.

  5. The actual "learning" in university is actually quite separate from actual knowledge. Many people cram a topic just for an exam and then forget it within a few days. Many doctors, to stay with the example of medicine, keep learning throughout their careers. The classical way of learning isn't as optimal as they believe it to be. Sure, GPT can be abused, like any tech. But those cheating fellows won't stay in their jobs long if they know absolutely nothing. Hospitals won't keep them around.

3

visarga t1_j6yw3md wrote

> AlphaFold's goal isn't to "install proteins on his own, in real time"

Actually, that's important: having experimental validation gives a signal to learn from when you have no other clue. Instead of learning from a fixed dataset, an AI could design experiments, like human scientists do.

5

FomalhautCalliclea t1_j6z1n0v wrote

Totally agree that it's very important. It's just that we're not there yet, and AlphaFold wasn't made for that. Maybe a future descendant of it will be, but not AlphaFold itself.

The day we have that will definitely be a big deal, for sure.

1

SoulGuardian55 OP t1_j6z5mra wrote

> That is the silliest point of them all, with all due respect to the people you were talking to. First of all, it could be said of many technologies: just think of space travel and the amazing discoveries it brought, even indirectly. But even simpler: we haven't been doing that well over the last 40,000 years. Besides, it sounds a lot like an appeal to nature fallacy

Let me expand on that point. In another thread I mentioned the three people (and their ages) who engaged with the topic with me. Those words (from the 2nd point) came from the youngest of them, who's 22.

The second person, who is 23, is a medical student. He's doubtful about handing work such as art over to AI entirely, but agrees that it's a powerful tool that enhances artists, writers, etc. and can give them a lot of help. Of the three, he seems the most open to the topic. He's excited about developments in biomedical AI, of course.

The last one is an engineer, 28, just not so engaged with the topic of AI; he was drawn to it partly after the appearance of generative AI and ChatGPT. But he also sees potential in the technology.

2

FomalhautCalliclea t1_j73nifj wrote

Very interesting that the ones most open to the topic are the most educated and versed in science-oriented fields.

My overall advice would be not to lead with the far future and the most improbable things (AGI, the singularity itself), but rather with current advances and progress. Hence my references to current achievements (the links), showing that AI is far from being only "wrong" and producing only "failures".

1

SoulGuardian55 OP t1_j73se3x wrote

I love to speculate about exponential growth and future developments, but current progress alone gives plenty of fuel for discussion. Narrow AI systems and their advances are what have drawn my attention most closely since the late 2010s.

And to clarify the words in the post: they came mostly from the youngest person (as I said before), and he is a social worker by education.

2