wind_dude
wind_dude t1_j43qjja wrote
Reply to [D] Has ML become synonymous with AI? by Valachio
I used to be of the mindset that everything called AI is just ML, and we aren't even close to achieving AI. That's still technically true, but now "AI" means AGI, and LLMs are "AI". I've given up that fight. But yes, they're basically synonyms in our current lexicon; people just call ML "AI" to sound cooler to scifi fans and tech journalists.
To me...
AI is an abstract concept: a computer achieving cognitive abilities, emotion, and problem solving at the same level as a human.
ML is statistical and mathematical models. Basically the limit of what can be achieved on logic-based hardware.
wind_dude t1_j3y8vi8 wrote
Reply to I analyzed 11000 products of a Dutch supermarket to find the cheapest sources of protein [OC] by MemeableData
Nice, I considered doing something similar for comparing product prices. How did you build your dataset?
wind_dude t1_isr0x3i wrote
No, the workflow and job descriptions will change, the same way they have for graphic designers. It's just another tool in the bag, enabling devs to write better code and create better apps, either more quickly, with fewer errors, with a different skill set, or without having to remember as much.
wind_dude t1_ir2a49l wrote
Reply to comment by kmtrp in What happens in the first month of AGI/ASI? by kmtrp
Sparrow is more or less a set of filters and wrappers around a fine-tuned Chinchilla to create a chatbot. Chinchilla is just another LLM; all it does is predict the next token in a string. There have been zero groundbreaking discoveries from LLMs.
wind_dude t1_ir1mymg wrote
Reply to comment by Cult_of_Chad in What happens in the first month of AGI/ASI? by kmtrp
It's part of my job to know. As a software engineer working on information extraction, I'm constantly evaluating SOTA ML/AI and reading research papers. For example, if you look at LLMs like OpenAI's GPT-3, it just predicts the next token in a string. It still falls flat on its face if you give it multiplication. Narrow AI and specialised domain-specific NLP still do better than a lot of these LLMs on many tasks like NER and classification, albeit at the cost of annotation, but with the benefit of performance and accuracy.
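To make the "just predicts the next token" point concrete, here's a minimal sketch of autoregressive next-token prediction using a toy bigram model. Real LLMs like GPT-3 use transformers trained on huge corpora, but the training objective is the same shape: given the tokens so far, predict the most likely next one. The corpus and model here are purely illustrative, not anything from OpenAI.

```python
from collections import Counter, defaultdict

# Tiny toy corpus; a real model would train on billions of tokens.
corpus = "the cat sat on the mat the cat ate".split()

# Count which token follows which (a bigram "language model").
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most likely next token after `token`."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" most often above
```

This also illustrates why pure next-token prediction struggles with arithmetic: the model only reproduces statistical patterns it has seen, it doesn't execute an algorithm.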
The other models, like protein folding and DNA sequencing, are narrow AI. It's very likely AI models will help solve fusion, quantum theory, etc., but they will be specialised, narrow AI models created and worked on in collaboration with domain experts.
wind_dude t1_ir12tia wrote
Reply to What happens in the first month of AGI/ASI? by kmtrp
>But I wonder: soon, Deepmind, OpenAI, and others will have a model in their hands that you can interact with that will provide all the correct answers to significant scientific questions: how to perform fusion, create affordable quantum computers, create new propulsion technology, and reverse aging.
Define soon. Because if you look at the timeline of AI over the last 15 years, yes, it's progressing increasingly rapidly, but we're not even slightly close to being able to do that.
wind_dude t1_j4767yf wrote
Reply to comment by Sirisian in [D] Has ML become synonymous with AI? by Valachio
>ChatGPT is a dialog AI,
OpenAI doesn't even call ChatGPT "AI" or "dialog AI".
>Non task-specific problem solving at the level of a human. The boundary between advanced AI, especially multi-task learning models, and AGI will get smaller and fuzzy in the coming decades, but there is a difference.
That is closer to the original, older definition of AI; I don't think I've seen one credible definition that doesn't include problem solving. "AGI" only started getting used because "AI" had been used too much to refer to stuff that isn't AI.