
Category-Basic t1_j2x2o9p wrote

Can a knife cut better than a human hand? After all, it was made by human hands... Yes, AI can be designed and trained to outperform humans at any task that we can frame as an ML task. The big advances have come from clever ways of framing tasks so that ML can work on them.

62

CollectFromDepot t1_j2x6r23 wrote

>Can a knife cut better than a human hand? After all, it was made by human hands...

Facts and logic

25

ll-phuture-ll t1_j2y3uh5 wrote

My only rebuttal to this theory is that a knife is physical matter made by a thinking human. I feel this is an oversimplification of OP’s question, and I personally share OP’s concern. The question is: how can immaterial mechanical thought be any better than immaterial organically produced thought, when it is created by the latter?

Edit: Sorry, meant to reply to the post above, but not sure it matters...

3

WistfulSonder t1_j2xuy0q wrote

What kinds of tasks can (or can’t) be framed as an ML task?

2

Category-Basic t1_j4yu5ck wrote

That is the million-dollar question. A lot of clever people seem to be finding new ways all the time. At this point, I think it is safe to say that any task with sufficient relevant data can probably be modeled and subjected to ML. I might not be able to figure out how, but I am sure someone could.

1

I-grok-god t1_j31odeq wrote

The hand can be used as a blade

But that doesn’t work on a tomato

1

groman434 OP t1_j2x4o2g wrote

I would argue that there is a significant difference between how a knife works and how ML works. You do not have to train a knife how to slice bread.

Besides, it seems to me that ML can outperform humans only because it exploits the fact that modern-day computers can do zillions of computations per second. Of course, sheer computational speed is not enough, which is why we need smart algorithms as well. But those algorithms benefit from having extremely powerful hardware available, often not only during the training phase but also during normal operation.

−5

Extension_Bat_4945 t1_j2x8nt3 wrote

ML can be used to train a model to perform one specific task very well. We humans have a much broader intelligence.

Imagine if we could devote 100% of our brain power to a single task 24/7, with a brain trained its entire life for that one task; we could easily outperform an AI.

0

groman434 OP t1_j2xb0c8 wrote

My question was slightly different. My understanding is that one of the major factors affecting the quality of a model’s predictions is its training set. But since the training set could be inaccurate (in other words, made by humans), how can this fact impact the quality of learning and, in turn, the quality of predictions?

Of course, as u/IntelArtiGen wrote, models can avoid reproducing errors made by humans (I guess because they are able to learn the relevant features during training when the training set is good enough). But I wonder what "good enough" means exactly (in other words, how the errors humans inevitably make when preparing the data affect the entire learning process, and what kinds of errors are acceptable), and how the whole training process can be described mathematically. I have seen many explanations using gradient descent as an example, but none of them incorporated the fact that the training set (or loss function) was imperfect.

5

Ali_M t1_j2xkhl3 wrote

Supervised learning isn't the only game in town, and human demonstrations aren't the only kind of data we can collect. For example, we can record human preferences over model outputs and then use that data to fine-tune models with reinforcement learning (e.g. https://arxiv.org/abs/2203.02155). Even though I'm not a musician, I can still make a meaningful judgement about whether one piece of music is better than another. By analogy, we can use human preferences to train models that are capable of superhuman performance.
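To make that concrete, here's a minimal numpy sketch of the preference-learning idea (my own toy example with invented names and numbers, not the recipe from the linked paper): a reward model is fit from pairwise "this output is better than that one" judgements via a Bradley-Terry-style logistic loss, and that learned reward is what a policy would later be optimized against.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all names and numbers invented for illustration):
# each model output is summarized by a feature vector x, and a simulated
# "human" labels pairs (preferred, rejected). We fit a linear reward
# r(x) = w @ x with a Bradley-Terry-style logistic preference loss:
#     P(A preferred over B) = sigmoid(r(A) - r(B))
dim, n_pairs = 5, 2000
w_true = rng.normal(size=dim)              # hidden "taste", used only to simulate labels

a = rng.normal(size=(n_pairs, dim))        # candidate outputs A
b = rng.normal(size=(n_pairs, dim))        # candidate outputs B
p_a = 1 / (1 + np.exp(-(a - b) @ w_true))  # simulated human prefers A with this probability
a_wins = rng.random(n_pairs) < p_a
diffs = np.where(a_wins[:, None], a - b, b - a)   # preferred minus rejected features

# Gradient ascent on the mean log-likelihood of the observed preferences.
w = np.zeros(dim)
for _ in range(300):
    p = 1 / (1 + np.exp(-diffs @ w))
    w += 1.0 * ((1 - p)[:, None] * diffs).mean(axis=0)

# The learned reward ranks outputs roughly the way the human does; a policy
# could then be trained (the RL step) to maximize it, even though no human
# ever demonstrated the task itself.
cos = w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true))
print(f"cosine similarity between learned and simulated reward: {cos:.2f}")
```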

8

e_for_oil-er t1_j2xz49i wrote

I guess "errors" in the dataset could be equivalent to introducing noise (like random perturbations with mean 0) or a bias (perturbation with non 0 expectation). I guess those would be the two main kind of innacuracies found in data.

Bias has been the plague of some language models trained on internet forum data. The training data was biased towards certain opinions, and the models just spat them out. This has caused the creators of some of those models to shut them down. I don't know what one could do to correct bias, since this is not at all my area of expertise.

Learning techniques resistant to noise (often called robust) are an active field of research, and some methods actually perform really well.
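As a toy illustration of the noise-versus-bias distinction above (entirely made-up data, just a sketch): fit the same linear model by gradient descent on labels corrupted by zero-mean noise versus labels shifted by a systematic bias. With enough data the noise largely averages out, while the bias is faithfully reproduced by the model.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_linear(x, y, lr=0.1, steps=500):
    """Plain gradient descent on mean squared error for y ~ w*x + b."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        err = w * x + b - y
        w -= lr * np.mean(err * x)
        b -= lr * np.mean(err)
    return round(w, 2), round(b, 2)

n = 5000
x = rng.uniform(-1, 1, size=n)
y_true = 3.0 * x + 1.0                       # the "ground truth" relationship

# Two flavours of imperfect human labels:
y_noisy = y_true + rng.normal(0, 0.5, n)     # zero-mean noise: random mistakes
y_biased = y_true + 0.8                      # nonzero-mean error: systematic bias

print("clean labels :", fit_linear(x, y_true))    # ~ (3.0, 1.0)
print("noisy labels :", fit_linear(x, y_noisy))   # ~ (3.0, 1.0): noise mostly averages out
print("biased labels:", fit_linear(x, y_biased))  # ~ (3.0, 1.8): the bias is learned
```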

2