visarga

visarga t1_isq5mvf wrote

I believe there is no substantial difference. Both the AI and the brain transform noise into some conditional output. AIs can be original in how they recombine things - there's room for a bit of originality there - and humans are pretty reliant themselves on reusing other people's styles and concepts, so we're not as original as we like to imagine. Both humans and AIs are standing on the shoulders of giants. The intelligence was in the culture, not in the brain or the AI.

3

visarga t1_is0fpyb wrote

I became aware of AI in 2007 when Hinton's Restricted Boltzmann Machines (RBMs, a dead end today) were making waves. I've been following the field since and started learning ML in 2010. I am an ML engineer now, and I read lots of papers every day.

Ok, so my evaluation - I am surprised by the current batch of text and image generators. The game playing agents and the protein folding stuff are also impressive. I didn't expect any of them, even though I was following closely. Two other surprises along the way were residual networks, which put the deep into deep learning, and the impact of scaling up to billions of parameters.
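As an aside, the residual trick that enabled depth is tiny - here's a minimal sketch (the module and its names are my own, not from any particular paper):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # The skip connection gives gradients an identity path around the
    # block, which is what made very deep networks trainable.
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        return x + self.body(x)  # output = input + learned residual

x = torch.randn(8, 64)
print(ResidualBlock(64)(x).shape)  # torch.Size([8, 64])
```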

I think we still need 10,000x scaling to reach human level in both intelligence and efficiency, but we'll have expensive-to-use AGI in a lab sooner.

I predict the next big thing will be large video models - not the ones we see today, but really large ones, GPT-3 scale. They will be great for robotics and automation, games, and of course video generation. They have "procedural" knowledge - how we do things step by step - that is missing from text and images. They align video/images with audio and language. Unfortunately videos are very long, so they are hard to train on.

3

visarga t1_is0cb6i wrote

> Which will make it easy for people to write off the truth.

Wouldn't it be nice if there were a place where Truth was written down so we could all look things up. But unfortunately that is not possible, so we're left with a continually evolving social truth.

3

visarga t1_irzdrho wrote

> if LSTMs would have received the amount of engineering attention that went into making transformers better and faster

There was a short period when people tried to improve LSTMs using architecture search driven by genetic algorithms or RL.

The conclusion was that the LSTM cell is somewhat arbitrary: many other gate arrangements work just as well, but none works much better. So people stuck with the classic LSTM.
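For reference, the "somewhat arbitrary" cell in question - a bare-bones sketch of the standard LSTM step (variable names are my own):

```python
import torch

def lstm_cell(x, h, c, W, b):
    # One step of the classic LSTM: three sigmoid gates plus a tanh
    # candidate. The searches above shuffled these pieces around and
    # found many variants that work about as well, none much better.
    gates = torch.cat([x, h], dim=-1) @ W + b
    i, f, g, o = gates.chunk(4, dim=-1)
    i, f, o = i.sigmoid(), f.sigmoid(), o.sigmoid()
    c_new = f * c + i * g.tanh()   # cell state: gated memory update
    h_new = o * c_new.tanh()       # hidden state: gated output
    return h_new, c_new

d_in, d_h = 16, 32
W = torch.randn(d_in + d_h, 4 * d_h)
b = torch.zeros(4 * d_h)
h = c = torch.zeros(1, d_h)
h, c = lstm_cell(torch.randn(1, d_in), h, c, W, b)
```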

2

visarga t1_irta9lz wrote

It's not just a matter of a different substrate. Yes, a neural net can approximate any continuous function, but not always in a practical or efficient way. The result is proven for networks of arbitrary width - some finite width always suffices, but it may be astronomically large - not for the fixed-size networks we use in practice.
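For reference, the classical universal approximation statement (Cybenko/Hornik style), written informally:

```latex
% Universal approximation, informally: for any continuous f on a
% compact set K and any eps > 0, SOME finite width N suffices --
% but nothing bounds how large N must be, which is the practical gap.
\forall \varepsilon > 0 \;\exists N,\ \{v_i, w_i, b_i\}_{i=1}^{N} :\quad
\sup_{x \in K} \Big|\, f(x) - \sum_{i=1}^{N} v_i\, \sigma(w_i^{\top} x + b_i) \,\Big| < \varepsilon
```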

But the major difference comes from the environment of the agent. Humans have human society, our cities, and nature as an environment. An AI agent, the kind we have today, would have access to a few games and maybe a simulation of a robotic body. We are billions of complex agents, each more complex than the largest neural net, while they are small and alone, and their environment is not real but an approximation. We can do causal investigation by intervening in the environment and applying the scientific method; they can't do much of that because they don't have access.

The more fundamental difference comes from the fact that biological agents are self-replicators and artificial agents usually are not (AlphaGo had an evolutionary thing going). Self-replication leads to competition, which leads to evolution and goals aligned with survival. An AI agent would need something similar to be guided to evolve its own instincts; it needs to have "skin in the game", so to speak.

4

visarga t1_irt7w5u wrote

> Have you heard of Integrated Information Theory?

That was a wasted opportunity. It didn't lead anywhere, it's missing essential pieces, and it has been shown that "systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data" have high integrated information (link).

A theory of consciousness should explain why consciousness exists in order to explain how it evolved. Consciousness has a purpose - to keep its organism alive and to spread its genes. That purpose explains how it evolved, as part of the competition for resources among agents sharing the same environment. It also explains what consciousness does, why it does it, and what the cost of failing is.

I see consciousness and evolution as a two-part system, with consciousness as the inner loop and evolution as the outer loop. There is no purpose here except that agents who don't fight for survival disappear and are replaced by agents that do. So over time only agents aligned with survival can exist, and purpose is "learned" by natural selection, each species fitted to its own niche.
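A toy illustration of that two-loop structure (all names and the fitness rule are made up for the sketch):

```python
import random

def lifetime_fitness(drive, steps=100):
    # Inner loop: one agent's lifetime. A noisy environment rewards
    # acting on a survival "drive"; this stands in for the
    # moment-to-moment loop I'm calling consciousness.
    return sum(random.gauss(drive, 1.0) for _ in range(steps))

def evolve(pop_size=20, generations=50):
    # Outer loop: no purpose is coded in anywhere. Agents whose drive
    # doesn't serve survival simply get replaced, so the population
    # drifts toward survival-aligned "instincts".
    population = [random.uniform(-1, 1) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lifetime_fitness, reverse=True)
        survivors = ranked[: pop_size // 2]
        children = [d + random.gauss(0, 0.1) for d in survivors]  # replicate w/ mutation
        population = survivors + children
    return population

final = evolve()
print(sum(final) / len(final))  # mean drive climbs across generations
```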

1

visarga t1_irt4m6q wrote

An important observation: it has only been demonstrated on images sized 32x32 and 64x64, a long way from 512x512. Papers that only test on small datasets are usually hiding a deficiency.

0

visarga t1_iqzerze wrote

> If we assume AI can eventually create a movie that is oscar nomination worthy every 10 seconds for essentially no cost

It's not gonna be a "movie" but more like a sim or a game, and we're not going to make it for entertainment but as a training ground for AI. Simulation goes hand in hand with AI because real-world data is expensive and limited, while sims only cost electricity to run.

We are already seeing generative models used as a source of training data (link).

1

visarga t1_iqxstx5 wrote

Just think about how you use your phone and try explaining that to a person from 200 years ago - I bet they'll think you are already deep into the singularity by their standards.

Having food, water, a toilet, electricity, and internet is nothing to brag about; even the poorest of us should have them. But just a couple of centuries ago these things would have been off the scale.

If you look back over decades or a couple of centuries, life has been getting steadily better. It wasn't fake progress, yet we're busier than ever.

Many people think that after the singularity we'll have nothing left to do. On the contrary, I think we'll have more to do than before. We'll still compete, and we'll often be unhappy, just like before.

Who said the purpose of AI should be to improve our lives? The purpose of life is to expand and to exist despite the challenges it meets. That means competition and exploration, not peace and detachment. We didn't come out on top of nature by being nice; we exploited every advantage and every bit of knowledge along the way.

5