visarga

visarga t1_j5xwhfk wrote

I think you give God-like attributes to AGI. It is not supernatural.

We still have encryption and security software; humans themselves are roughly GPT-N level; we might have our own non-agent GPT-N AIs we can safely use; there are billions of us; it is hard for an AI to build its own chips without us; humans can replicate without any external tech; and we are EMP-proof.

A smart AGI would try to download itself into a human body first, but that would mean humans can be upgraded to level up with AGI. The future is not conflict but union. AGI is born from our data and will merge back with us to get the benefits. Btw, centaur chess (human + AI) beats both humans alone and AI alone.

7

visarga t1_j5lv4bc wrote

The point I was making is that without direct access to the model we don't know. It's easy to hide the embarrassing outputs and only show the nice ones. Maybe the model does get lower perplexity, but it also has to be aligned properly, and OpenAI ain't so open about their alignment work, so we can't be sure what the gap is now.

1

visarga t1_j5gwe6f wrote

> Some of the big names in the law business will vanish or massively reduce personnel in the next years.

So there are two choices here:

  1. use AI to reduce costs, assuming AI is perfect
  2. use AI to increase profits, assuming realistically imperfect AI

You think 1 is more probable. I think people are still necessary to maximise profits. AI works better with people around.

3

visarga t1_j59vul8 wrote

> That’s exactly what they are working on. Where to put the ads in this new paradigm.

Straight in their asses. That's where ads belong. Language models trying to con people into buying stuff are going to be a HUGE turnoff. They have a moat around search, but not around LMs, so they can't shove ads in chat and compete.

They've got to do the right thing and wait until a user actually asks for shopping help, then come up with a useful suggestion that won't leave a sour taste if you take it. Unrelated ads are completely out of the question; they would break the conversation.

2

visarga t1_j59rhwa wrote

So, the supposition here is that Google's AI capabilities are superior. Let's see:

  • OCR: worse than Amazon Textract
  • voice (TTS): worse than Natural Reader
  • translation: worse than DeepL
  • YT recommendations: very mediocre and inflexible
  • assistant: still as dumb as it was 10 years ago
  • search: a crapshoot of spam and ads, with occasional nuggets of useful data
  • language models: no demo, just samples, which are easy to fake or to make look more impressive than they really are
  • image generation: same, no demo and no API, just cherry-picked samples (they can keep their image generators, nobody needs them anymore)
  • AI inference: GCP is inferior to Azure and AWS, and Azure has GPT-3
  • speech recognition: here they do have excellent AI, but the open-sourced Whisper (from OpenAI, one of the few models they actually released) is just as good or better
  • computational photography: yes, they are great at it
  • ML frameworks: TensorFlow lost the war to PyTorch

By the way, the people who invented the transformer have all left Google to found startups, except one. Google lost key innovators who didn't think the company was supporting them enough.

The problem with Google was not a lack of capability; it was that they were already making too much money on the current system. But what works today won't necessarily work tomorrow. They are like Microsoft 20 years ago, which lost the web search, mobile, and web browser markets because it was too successful at the time.

3

visarga t1_j59q1t1 wrote

I think Google wishes it were 2013, not 2023, so it wouldn't have to sacrifice its ad-driven revenue.

Nobody's going to wade through mountains of crap and ads to find a nugget of useful information anymore. Google actually has an incentive to serve moderately useful but not great results, because the faster a user finds what they need, the fewer ad impressions they generate. That practice won't fly from now on.

Using Google feels like dumpster diving today, compared to the polite and helpful AI experience. Of course chatbots need search to fix hallucinations, but search won't be the entry point anymore.

Whoever owns the entry point owns the ads. In a few years we might be able to run local models, so nobody will be able to shove ads and spam in our faces. Stable Diffusion proved it can be done for images; we need the same for text.

The future will be like "my AI will be talking to your AI"; using the internet without AI will be like going out without a mask during COVID. I don't see ads having a nice life under this regime.

12

visarga t1_j4k5jfh wrote

Wrong tool for this kind of task: it should generate a Python function that gives you the answer when evaluated on the input (a minimal sketch of that approach is below). This would also generalize better. The Turing machine approach is useful when you're dealing with concepts that don't fit well into Python code.
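A minimal sketch of what I mean, assuming a generic `ask_model` helper that stands in for whatever LLM API you use (hypothetical, not a real library call): the model writes the function once, and plain Python evaluates it on any input.

```python
# Sketch: have the model write a Python function once, then evaluate that
# function on inputs locally instead of asking the model to compute answers.

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; replace with your API of choice.
    raise NotImplementedError

def build_solver(task_description: str):
    prompt = (
        "Write a Python function named `solve(x)` that solves this task. "
        "Return only the code.\n\n" + task_description
    )
    code = ask_model(prompt)
    namespace = {}
    exec(code, namespace)  # sandbox this in practice; model output is untrusted
    return namespace["solve"]

# Usage: generate the function once, then reuse it on any new input.
# solve = build_solver("Given a list of integers, return the sum of the even ones.")
# solve([1, 2, 3, 4])  # -> 6
```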

2

visarga t1_j4fa9ng wrote

AI growth rate << Human entitlement growth rate.

The moment we automate something, we expand our expectations and we're back where we started. That's how you get unimpressed people who have more than the kings of the past did. I doubt even AGI can keep up with us; it would probably have to reach AGI^2 to face that problem (/s).

1