sabertoothedhedgehog t1_j5eneoh wrote

Love the topic of the paper. Absolutely HATE the figures showing taxonomies / example AI tools. These box-and-arrow visualisations are really awful: the arrows are all over the place and meaningless, and the category boxes look identical to the application boxes.

It could have looked more like this: https://the-decoder.com/wp-content/uploads/2022/10/market_map_generative_AI-770x1027.png.webp

Or like this: https://www.sequoiacap.com/wp-content/uploads/sites/6/2022/09/genai-landscape-8.png

I don't even particularly like my examples, but there is no need for all these arrows, or for category boxes that look just like the example boxes.

3

sabertoothedhedgehog t1_j4701ft wrote

Yes. My PhD was on applied ML. My current day job is at a center for AI. There are many people who are dimensions smarter than me -- but AI is all I deal with every day.

The reason for the nebulous concept is that intelligence is hard to define. Thus, in the past it was often defined by relating it back to human intelligence, e.g. "automating tasks that would require human intelligence to solve" and even the Turing Test.
But there are more rigorous definitions of intelligence, such as the one proposed in Francois Chollet's paper.

It is definitely NOT correct to say << [AI] is just using some search algorithm with heuristics to make the search more "intelligent">>.
AI covers way more and goes far beyond search.

2

sabertoothedhedgehog t1_j4660hr wrote

To me, linear regression is part of Machine Learning, and thus part of the broader vision of AI, even though linear regression is an old statistical model that probably existed long before the term ML. The linear regression algorithm learns from data (i.e. it improves its line fit after observing more data; hence it is ML in my book) -- it just has a very limited hypothesis space. It will only ever fit a straight line (or a hyperplane, in the general case). It is not a general learner like a deep neural network, which can approximate any function.
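A minimal sketch of that point (my own illustration, not from the paper): a least-squares fit "learns" in the ML sense -- its estimate improves as it observes more data -- but its hypothesis space is only straight lines.

```python
# Illustration: linear regression improves with more data (it "learns"),
# yet can only ever represent a line -- a very limited hypothesis space.
import numpy as np

rng = np.random.default_rng(0)

def fit_line(x, y):
    # Ordinary least squares: returns (slope, intercept).
    A = np.column_stack([x, np.ones_like(x)])
    (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
    return slope, intercept

# True relationship: y = 2x + 1, observed with Gaussian noise.
x_all = rng.uniform(-5, 5, size=1000)
y_all = 2 * x_all + 1 + rng.normal(0, 1, size=1000)

# With more observations, the fitted parameters move toward the truth.
for n in (10, 100, 1000):
    slope, intercept = fit_line(x_all[:n], y_all[:n])
    print(f"n={n:4d}  slope={slope:.3f}  intercept={intercept:.3f}")
```

The same loop run on noisy quadratic data would plateau at a poor fit, however many samples it sees -- that is the hypothesis-space limitation, in contrast to a deep network's flexibility.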

2

sabertoothedhedgehog t1_j465kq6 wrote

This is not correct.
These algorithms are definitely learning (i.e. improving performance on a task through experience, that is, by observing more data).

Intelligence is hard to define. Something like 'efficiency at acquiring skills across a broad range of tasks' would be one definition. We're getting there. This is the weak vs strong AI debate: are we merely simulating intelligence, or creating actual intelligence?

2

sabertoothedhedgehog t1_j44ff9b wrote

I should explain to be useful: AI is the vision of, and effort toward, replicating human intelligence. Human intelligence includes learning from data (--> ML). But one could argue there is more to it, e.g. knowledge bases, which typical ML algorithms do not consider (LLMs do so indirectly). Also, our current ML models are still extremely narrow, whereas the idea of AI is general intelligence.

3