Comments


banatage t1_jcon2zl wrote

IMHO, those models are very good for general knowledge that can be sucked up from public sources.

When it comes to proprietary / confidential data / knowledge, this is where your work will pay off.

16

fullstackai t1_jcopazt wrote

100% agree. Also, any AI that requires sensor data (e.g., in manufacturing) cannot easily be replaced by foundation models.

3

Individual-Sky-778 t1_jconas3 wrote

Yes, I completely agree, that's true right now. But I wonder how long it will stay true? Protocols for data encryption and privacy-preserving learning are already out there; IMHO it's just a matter of time until OpenAI (and similar providers) offer such services.

2
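To make the "privacy-preserving learning" point concrete, here is a minimal sketch of the Laplace mechanism, one of the basic differential-privacy building blocks such services could use when releasing aggregates computed over confidential data. The function name and parameters are illustrative, not from any particular library:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return a differentially private version of true_value.

    Noise scale grows with the query's sensitivity and shrinks as the
    privacy budget epsilon grows (less privacy, more accuracy).
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    # Inverse-CDF sample from Laplace(0, scale)
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# e.g. release a private count of matching records
private_count = laplace_mechanism(true_value=1234, sensitivity=1.0, epsilon=0.5)
```

The idea is that a provider could answer aggregate queries over proprietary data without exposing any individual record; production systems would combine this with encryption and careful budget accounting.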

banatage t1_jcoo9ek wrote

Factuality is not guaranteed with LLMs either...

5

wind_dude t1_jcoqe5z wrote

Nor with statistical models. Their accuracy has generally been higher, but LLMs are catching up in key NLP domains.

4

EmmyNoetherRing t1_jcot5t2 wrote

Is that true? OpenAI seems to think they’ll be able to train task-specific AI on top of their existing models for specific roles.

2

tripple13 t1_jcoq61v wrote

Did you create an account, just to ask this question?

I don't think either CV or NLP is going away. CV is yet to be solved to the same extent as NLP, but I agree it might just be a matter of time.

Research wise, there are still tons of problems around uncertainty, complexity, causality, 'real-world' problem solving (domain adaptation) and so forth.

Just don't compete on having the largest cluster of GPUs.

11

hiptobecubic t1_jcomtiq wrote

Why are these things doomed just because they are advancing?

6

hund35 t1_jcorkdy wrote

- One of the biggest advantages of hosted models like ChatGPT is not having to own or buy millions of dollars' worth of hardware to host such big models. But hosting also seems to be one of the biggest disadvantages for OpenAI, since judging by their Discord they seem to have downtime multiple times a week or month.

- I think it will be harder to trust them with data in the future. Keep in mind they started as a research company that was pretty transparent and have slowly turned into a full-on business that is becoming less transparent (I understand they need to make money, since I imagine it's quite expensive to run). The ethical AI team being laid off (https://techcrunch.com/2023/03/13/microsoft-lays-off-an-ethical-ai-team-as-it-doubles-down-on-openai/) and Microsoft trying to get more involved don't help either. I do think it could be a problem especially for people living in countries that are part of the EU, since it's already illegal to store data or use certain services in countries that aren't deemed trusted by the EU (such as America).

- In some use cases it would still be useful to have models that aren't operated server-side and can be used offline.

5

ab3rratic t1_jcorj10 wrote

There is life outside of NLP and CV.

3

boss_007 t1_jcosyeu wrote

This too shall pass

2

fferegrino t1_jcot78v wrote

There are still apps that need to be built on top of these APIs, and niche tasks within your business domain that a generalist GPT will not be able to cover.

1

elgringo0091 t1_jcovnrt wrote

If you are a researcher, there is a lot that can be done to advance AI algorithmically, though you'd be limited by proper access to data and compute.

If you are an ML engineer working in a company, your concern might become valid for a period of time. So yes, you might need to learn how to become a good prompt engineer.

But at some point these LLM+CV models will become accessible from a data and compute perspective, and then it will be super fun. Imagine hundreds of thousands of LLMs running as agents, interacting with each other and with millions of people.

1