Submitted by [deleted] t3_11ul904 in MachineLearning
[removed]
100% agree. Also, any AI that requires sensor data (e.g., in manufacturing) cannot easily be replaced by foundation models.
Yes, I completely agree, that's true right now. But I wonder how long it will stay true? Protocols for data encryption and privacy-preserving learning are already out there; IMHO it's just a matter of time until OpenAI (and similar providers) offer such services.
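To illustrate the kind of protocol I mean, here's a minimal federated-averaging sketch in plain NumPy (the setup is entirely hypothetical): each client computes a model update on its own data, and only the updates, never the raw records, reach the server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(weights, clients):
    """Server averages client updates; raw (X, y) never leaves a client."""
    updates = [local_update(weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

# Toy example: two clients, each holding a private dataset.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]
w = np.zeros(3)
for _ in range(100):
    w = federated_average(w, clients)
print(w)
```

Real deployments would add secure aggregation or differential-privacy noise on top, but the data-locality idea is the same.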
Factuality is not a guarantee either with LLMs...
Nor with statistical models. Their accuracy has generally been higher, but LLMs are catching up in key NLP domains.
Nor with humans.
Nor with self hosted models...
Is that true? OpenAI seems to think they’ll be able to train task-specific AI for particular roles on top of their existing models.
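That's roughly what their fine-tuning API already does. A rough sketch with the openai Python package (pre-1.0 interface; the API key and dataset file are placeholders):

```python
import openai  # pip install openai

openai.api_key = "sk-..."  # placeholder

# Upload task-specific prompt/completion pairs (hypothetical dataset).
training_file = openai.File.create(
    file=open("support_tickets.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tune on top of one of their base models.
job = openai.FineTune.create(
    training_file=training_file.id,
    model="davinci",
)
print(job.id)
```

So the "task-specific AI on top of existing models" part is less speculation than product roadmap.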
Did you create an account, just to ask this question?
I don't think either CV or NLP is going away. CV is yet to be solved to the same extent as NLP, but I agree that might just be a matter of time.
Research wise, there are still tons of problems around uncertainty, complexity, causality, 'real-world' problem solving (domain adaptation) and so forth.
Just don't compete on having the largest cluster of GPUs.
Why are these things doomed just because they are advancing?
What do you mean by "advancing"? Just models getting bigger and bigger?
I mean results steadily improving, regardless of method.
- One of the biggest advantages of hosted models like ChatGPT is not having to own or buy millions of dollars' worth of hardware to host such big models. But it also seems to be one of the biggest disadvantages for OpenAI: judging by their Discord, they seem to have downtime multiple times a week or month.
- I think it will be harder to trust them with data in the future. Keep in mind they started as a fairly transparent research company and have slowly turned into a full-on business that is becoming less transparent (I understand they need to make money, since I imagine it's quite expensive to run). Microsoft laying off its ethical AI team (https://techcrunch.com/2023/03/13/microsoft-lays-off-an-ethical-ai-team-as-it-doubles-down-on-openai/) while getting more involved with OpenAI doesn't help either. I think this could be a problem especially for people living in EU countries, since it's already illegal to store data and use certain services in countries the EU doesn't deem trusted (such as America).
- In some use cases it would still be useful to have models that aren't operated server-side and can be used offline (see the sketch below).
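On that last point, a self-hosted model can be a few lines with Hugging Face transformers (the model choice here is just an example); after the initial download, nothing leaves your machine:

```python
from transformers import pipeline  # pip install transformers torch

# Downloads the weights once, then runs fully offline on your own hardware.
generator = pipeline("text-generation", model="distilgpt2")
print(generator("The machine stopped because", max_new_tokens=30)[0]["generated_text"])
```

Small local models won't match ChatGPT on quality, but for offline or data-sensitive settings that can be the right trade.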
There is life outside of NLP and CV.
This too shall pass
just like cars pass horses.
Horses are greener
Are they? They graze, I’d think you’d have the same methane problem you’ve got with cows.
They are allowed to pass (gas) as well
There are still apps that need to be built on top of these APIs, and niche tasks within your business domain that a generalist GPT will not be able to cover.
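For example, even a thin domain-specific wrapper is already an app that a generalist chat UI doesn't give you (labels, prompt, and key are illustrative; pre-1.0 openai package):

```python
import openai  # pip install openai

openai.api_key = "sk-..."  # placeholder

def classify_ticket(ticket_text):
    """Route a support ticket using business-specific labels the base model knows nothing about."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Classify tickets as one of: BILLING, OUTAGE, FEATURE_REQUEST. "
                        "Reply with the label only."},
            {"role": "user", "content": ticket_text},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

print(classify_ticket("My invoice was charged twice this month."))
```

The model is generic; the labels, routing logic, and evaluation around it are the part your business still has to build.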
If you are a researcher, there is a lot that can be done to advance AI algorithmically, though you'd be limited by proper access to data and compute.
If you are an ML engineer working in a company, your concern might become valid for a period of time. So yes, you might need to learn how to become a good prompt engineer.
But at some point these LLM+CV models will become accessible, from both a data and a compute perspective, and then it will be super fun. Imagine hundreds of thousands of LLMs running as agents, interacting with each other and with millions of people.
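A toy version of that agent picture, just to make it concrete: two model-backed agents passing messages in a loop, where `llm` is a stub standing in for any hosted or local chat model:

```python
def llm(persona, history):
    """Stub: swap in a real chat-model call (hosted API or local model)."""
    return f"[{persona}] reply to: {history[-1]}"

agents = {"planner": "You break goals into steps.",
          "critic": "You point out flaws in plans."}

message = "Goal: automate our weekly inventory report."
for _ in range(3):  # a few turns of agent-to-agent conversation
    for name, persona in agents.items():
        message = llm(persona, [message])
        print(f"{name}: {message}")
```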
IMHO, those models are very good for general knowledge that can be sucked up from public sources.
When it comes to proprietary / confidential data / knowledge, this is where your work will pay off.
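For instance, the usual pattern for putting proprietary knowledge behind one of these models is retrieval: embed your private documents, find the ones closest to a query, and stuff them into the prompt. A minimal sketch (the documents are made up; pre-1.0 openai package):

```python
import numpy as np
import openai  # pip install openai

openai.api_key = "sk-..."  # placeholder

def embed(texts):
    resp = openai.Embedding.create(input=texts, model="text-embedding-ada-002")
    return np.array([d["embedding"] for d in resp["data"]])

# Hypothetical internal documents the public model has never seen.
docs = ["Line 3 torque spec is 42 Nm.", "Vendor X invoices are net-60."]
doc_vecs = embed(docs)

query = "What torque do we use on line 3?"
q = embed([query])[0]

# Cosine similarity picks the most relevant private doc for the prompt.
scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
context = docs[int(np.argmax(scores))]
prompt = f"Answer using this internal note: {context}\n\nQ: {query}"
print(prompt)
```

The general-knowledge part is commoditized; curating and indexing the proprietary data is where the work (and the moat) is.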