visarga

visarga t1_jcjp7gt wrote

Reply to comment by Exogenesis98 in Those who know... by Destiny_Knight

That's one future job for us: being the legs and hands of an AI, using our human privileges (passport, legal rights) and mobility to take it anywhere and act in the world. I bet there will be more AIs than available people, so they will have to pay more to hire an avatar. Joblessness problem solved by AI. A robot would be different: it doesn't have human rights, it's just a device. A human can provide a "human-in-the-loop" service.

8

visarga t1_jcjornh wrote

Everyone does it; they all exfiltrate valuable data from OpenAI. You can use it directly, like Alpaca did, or for pre-labelling, or for detecting mislabeled examples.

They train code models by asking GPT-3 to explain code snippets, then training a model in the other direction, to generate code from a description. This data can be used to fine-tune a code model for your specific domain of interest.
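
A minimal sketch of that kind of data generation, assuming the OpenAI Python client; the model name, prompt wording and helper names here are illustrative, not the actual recipe from any particular paper:

    # Sketch: ask GPT-3 to explain snippets, then flip the pairs so the
    # explanation becomes the input and the code the fine-tuning target.
    import openai

    def describe_snippet(code: str) -> str:
        resp = openai.Completion.create(
            model="text-davinci-003",  # any capable completion model
            prompt=f"Explain what this code does:\n\n{code}\n\nExplanation:",
            max_tokens=150,
            temperature=0.2,
        )
        return resp["choices"][0]["text"].strip()

    snippets = ["def add(a, b):\n    return a + b"]  # your domain-specific code
    pairs = [{"prompt": describe_snippet(s), "completion": s} for s in snippets]
    # Write `pairs` to JSONL and fine-tune a description-to-code model on it.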

15

visarga t1_jb7a656 wrote

> A lot of things here will get shelved because they’re either not able to get the price down or it malfunctions too often and they can’t fix it.

You just described about 99% of all AI products. They all malfunction. All of them. "Errare humanum est" ("to err is human"), but for now "errare machinale est" ("to err is machine").

4

visarga t1_jalh1r1 wrote

You don't always need a population of neural networks; it could be a population of prompts, or even a population of problem solutions.

If you're using a GA to solve specific coding problems, there is a paper where they use an LLM to generate diffs for code. The LLM was the mutation operator, and they even fine-tuned it iteratively.
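
As a rough sketch of the idea (the `llm_propose_edit` and `fitness` hooks below are hypothetical placeholders for a real LLM call and a real test harness):

    # Genetic algorithm where the LLM plays the mutation operator,
    # proposing edited variants of candidate programs.
    def llm_propose_edit(program: str) -> str:
        # Hypothetical hook: prompt an LLM for a diff/variant of the program.
        # Dummy no-op edit here so the sketch runs as-is.
        return program + "\n# candidate edit"

    def fitness(program: str) -> float:
        # Hypothetical hook: e.g. fraction of unit tests the program passes.
        return -len(program)  # dummy objective: prefer shorter programs

    def evolve(seed_programs, generations=10, population_size=20):
        population = list(seed_programs)
        for _ in range(generations):
            children = [llm_propose_edit(p) for p in population]  # mutation
            population = sorted(population + children, key=fitness,
                                reverse=True)[:population_size]   # selection
        return max(population, key=fitness)

    best = evolve(["def add(a, b):\n    return a + b"])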

3

visarga t1_jae1edb wrote

> you cannot keep pace with AI

We are not competing with AI. We are competing with other people who use AI. Everyone has AI, and everyone will have AI. Using AI won't give you a competitive advantage in 2030.

Companies that want to scale AI need people. AI really shines when it is supported: you need people around it to maximise its value.

If you want to get rid of your human employees and use only AI, your competition will eat your lunch. They will team AI up with humans and be faster and more creative than you. Competition won't allow companies to simply get rid of people.

All this extra creativity and work enabled by AI will be eaten up by our expanding desires and entitlement. In 2030 the expectations of the public will be sky-high compared to now, and companies will have to provide better products to keep up.

6

visarga t1_jadzefz wrote

It's a long way from "impressive demo" to "replacing humans". Self-driving cars could impress us in demos even ten years ago, but they still can't operate on their own, not even now.

If you work in ML, you tend to know the failure modes and issues much better than the public does, so you tend to be less optimistic. Machine learning works only when the problem is close to the training data. It doesn't generalise well; you have to get good data if you want good results.

3

visarga t1_ja8dm4q wrote

That's a naive view that doesn't take second-order effects into consideration. In 5-10 years companies will have to compete with more advanced products that use AI; a lot of that newfound AI productivity will be spent keeping level with the competition instead of raking in absurd profits. And lower prices will help consumers.

2

visarga t1_ja57ahr wrote

Yes, we've come far. But how did we get here?

  1. We had a "wild" GPT-3 in 2020; it would hardly take instructions, but it was still the largest leap in capability ever seen.

  2. Then they figured out that training the model on a mix of many tasks would unlock general instruction-following ability. That was the instruct series.

  3. But it was still hard to make the model "behave"; it was not aligned with us. So why did we get another miracle here? Reinforcement learning has almost nothing to do with NLP, yet here we have RLHF, the crown jewel of the GPT series. With it we got ChatGPT and Bing Chat.

None of these three moments were guaranteed based on what we knew at the time. They are improbable things. Language models did nothing of the sort before 2020. They were factories of word salad. They could barely write two lines of coherent English.

What I want to say is that we see no reason such miracles have to keep arriving in such fast succession. We can't rely on them recurring consistently.

What we can rely on is the parts we can extrapolate now. We think we will see models at least 10x larger than GPT-3 and trained on much more data. We know how to make models 10x more efficient. We think language models will improve a lot when combined with other modules like search, Python code execution, a calculator, a calendar and a database; we're not even 10% of the way there with external resources. We think integrating vision, audio, actions and other modalities will have a huge impact, and we're just starting. LLMs are still pure text.
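
To make the tool idea concrete, here's a toy sketch of the loop: the model emits a marker, the runtime executes the tool and splices the result back into the reply. The CALC syntax and the `llm` stub are made up for illustration:

    # Toy tool-use loop: the model requests a calculation with CALC(...),
    # the runtime evaluates it and substitutes the result into the reply.
    import re

    def llm(prompt: str) -> str:
        # Stub standing in for a real model call.
        return "The answer is CALC(37 * 12)."

    def run_with_calculator(prompt: str) -> str:
        reply = llm(prompt)
        match = re.search(r"CALC\(([^)]+)\)", reply)
        if match:
            value = eval(match.group(1), {"__builtins__": {}})  # toy calculator
            reply = reply.replace(match.group(0), str(value))
        return reply

    print(run_with_calculator("What is 37 * 12?"))  # The answer is 444.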

I think we can expect a 10x to 1000x boost just based on what we know right now.

1

visarga t1_ja5450y wrote

No, it's not about flashiness. Those ML apps you are talking about were specialised projects, each one developed independently. LLMs, on the other hand, are generalists. They can do thousands of known tasks and countless more, including combinations of tasks.

Instead of taking a year or more to produce a proof of concept, you can do it in a week. Instead of painstakingly labelling tens of thousands of examples, you just prompt with four examples. The entry barrier is now so low for many applications that anyone with programming experience can do it.
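
For instance, a sentiment classifier that once needed a labelled dataset can be sketched as a four-shot prompt (the wording and examples are illustrative):

    # Illustrative 4-shot prompt: the examples stand in for a labelled dataset.
    examples = [
        ("Great battery life.", "positive"),
        ("Screen cracked in a week.", "negative"),
        ("Does what it says.", "positive"),
        ("Support never replied.", "negative"),
    ]
    prompt = "".join(f'Review: "{t}" Sentiment: {s}\n' for t, s in examples)
    prompt += 'Review: "Arrived late but works fine." Sentiment:'
    # Send `prompt` to any completion endpoint; the model fills in the label.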

For vision, the CLIP model gives us a way to build classifiers without any training samples, and diffusion models let us generate almost any image. All without retraining, without large-scale labelling.
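
For example, a zero-shot image classifier with CLIP via the Hugging Face transformers library (the checkpoint name is the public one; the labels and image path are illustrative):

    # Zero-shot image classification with CLIP: the "classes" are just text
    # prompts, with no training samples or retraining needed.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    labels = ["a photo of a cat", "a photo of a dog"]  # illustrative classes
    image = Image.open("example.jpg")                  # illustrative input

    inputs = processor(text=labels, images=image, return_tensors="pt",
                       padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
    print(dict(zip(labels, probs[0].tolist())))        # label -> probability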

1