spiritus_dei

spiritus_dei t1_jdz7rmz wrote

I think this is the best formulation of the question I've seen, "Can you imagine any job that a really bright human could do that a superintelligent synthetic AI couldn't do better?"

Everyone loves to default to the horse and buggy example and they always ignore the horse. Are programmers and researchers the blacksmiths or are they the horses?

It's at least 50/50 that we're all the horses. That doesn't mean that horses have no value, but we don't see horses doing the work they once did in every major city prior to their displacement by automobiles.

We also hear the familiar refrain, "AI will create all of these new jobs that none of us can imagine." Really? Jobs that superintelligent AIs won't be able to do? It's a mixed metaphor; the two ideas are just not compatible.

Either they hit a brick wall with scaling, or we'll all be dealing with a new paradigm where we either remain human (horses) or accept that participating in the new world means becoming a cyborg. I don't know if that's possible, but it may be the only path to "keep up," and it's not a guarantee, since we'd have to convert biological matter to silicon.

And who wants to give up their humanity to basically become an AI? My guess is the number of people willing to do it will shock me, if that ever becomes a possibility.

I'm fine with retirement and remaining an obsolete human, doing work that isn't required just for the fun of it. I don't play tennis because I am going to play at Wimbledon or even beat anyone good -- I play because I enjoy it. I think that will be the barometer if there isn't a hard limit on scaling.

This was foretold decades ago by Hans Moravec and others. I didn't think it was possible in my lifetime until ChatGPT. I'm still processing it.

13

spiritus_dei t1_jdts3bg wrote

I was hoping that the human brain had some magical quantum pixie dust, but it looks like complexity, high-dimensional vector spaces, backpropagation, and self-attention were the missing ingredients. The problem with this is that it makes simulating consciousness trivial.

Meaning the odds that we're in base reality are probably close to zero.

1

spiritus_dei OP t1_jc7ccww wrote

>I have skimmed over it before writing this. They have what working? Synthetic toy examples? Great, Graves et al. had even more practically relevant problems solved 6 years ago. The thing is, it never translated into solving real world problems, and the paper and follow up work didn't really manage to demonstrate how it could actually be used.
>
>So, until this paper results in some metrics on known datasets, model frameworks and weights, I'm afraid there's nothing really to talk about. Memory augmented networks are nasty in the sense that they require transfer learning or reinforcement learning to even work. Memorizing things with external memory is not exactly a compression task, which DNNs and gradient descent solve.

The same could have been said of Deep Learning until the ImageNet breakthrough. The improvement process is evolutionary, and this may be a step in that process.

You make a valid point. While the paper demonstrates the computational universality of memory-augmented language models, it does not provide concrete metrics on known datasets or model frameworks. Additionally, as you mentioned, memory-augmented networks can be challenging to train and require transfer learning or reinforcement learning to work effectively.

Regarding the concern about transfer learning, it is true that transferring knowledge from one task to another can be challenging. However, recent research has shown that transfer learning can be highly effective for certain tasks, such as natural language processing and computer vision. For example, the BERT model has achieved state-of-the-art performance on many natural language processing benchmarks using transfer learning. Similarly, transfer learning has been used to improve object recognition in computer vision tasks.
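To make the BERT example concrete, here is a minimal transfer-learning sketch using the Hugging Face `transformers` library. It's just an illustration; the model name, example sentence, and label are placeholders, not anything from the paper under discussion.

```python
# Minimal transfer-learning sketch (assumes transformers + torch installed).
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# The pretrained encoder weights are reused; only the new classification
# head starts from random initialization.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

inputs = tokenizer("This movie was great!", return_tensors="pt")
labels = torch.tensor([1])  # hypothetical sentiment label

# One fine-tuning step: the loss gradient flows through the pretrained
# encoder, adapting its general language knowledge to the new task.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
```

The point is that almost all of the "knowledge" comes from pretraining; the downstream task only has to nudge it.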

As for reinforcement learning, it has been successfully applied in many real-world scenarios, including robotics, game playing, and autonomous driving. For example, AlphaGo, the computer program that defeated a world champion in the game of Go, was developed using reinforcement learning.

This is one path, and other methods could be incorporated, such as capsule networks, which aim to address the limitations of traditional convolutional neural networks by explicitly modeling the spatial relationships between features. For example, capsule networks could encode information about entities and their relationships, while a memory-augmented network stores and retrieves that information as needed for downstream tasks (roughly sketched below). This approach could be especially useful for tasks that involve complex reasoning, such as question answering and knowledge graph completion.
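Nothing published spells out such a hybrid, so the following is purely a toy sketch of the division of labor; the dimensions and the content-based addressing scheme are my own placeholders.

```python
# Toy sketch: capsule-style entity vectors are written to an external
# memory and read back by content-based (cosine-similarity) addressing.
import torch
import torch.nn.functional as F

D, SLOTS = 64, 128
memory = torch.zeros(SLOTS, D)  # external memory matrix

def write(entity_vec, slot):
    memory[slot] = entity_vec  # store one entity embedding

def read(query_vec):
    # Attend over memory slots by cosine similarity to the query.
    scores = F.cosine_similarity(memory, query_vec.unsqueeze(0), dim=-1)
    weights = F.softmax(scores, dim=0)
    return weights @ memory  # soft, differentiable read

entity = torch.randn(D)  # stand-in for a capsule's output vector
write(entity, slot=0)
retrieved = read(entity)  # would feed a downstream QA/reasoning step
```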

Another approach is to use memory-augmented networks to store and update embeddings of entities and their relationships over time, and use capsule networks to decode and interpret these embeddings to make predictions (a sketch of the memory side follows). This approach could be especially useful for tasks that involve sequential data, such as language modeling and time-series forecasting.
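Again, this is only a hypothetical sketch of what the "update embeddings over time" half could look like; the gated update rule and the trivial slot-addressing scheme are placeholders, not anything from the literature.

```python
# Hypothetical sketch: memory slots blend old entity state with new
# observations, so embeddings track entities across a sequence.
import torch

D, SLOTS = 64, 16
memory = torch.zeros(SLOTS, D)

def update(slot, obs, alpha=0.5):
    # Gated blend: keep part of the stored state, mix in the new step.
    memory[slot] = (1 - alpha) * memory[slot] + alpha * obs

for t, obs in enumerate(torch.randn(10, D)):  # a 10-step sequence
    update(t % SLOTS, obs)  # real addressing would be learned, not modular
```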

0

spiritus_dei t1_jabh1h5 wrote

I'm surprised more people don't go into skilled trades, even without factoring in AI. The trades have nice apprenticeship programs where they pay you to learn the skill -- a much better financial model than college.

I suppose medical school is sort of an apprenticeship program, since students actually practice medicine. Law school is completely decoupled and should go back to being an apprenticeship -- for the small subgroup of lawyers that survives the AI displacement. Trial lawyers will still be needed to physically show up and argue cases for a long time.

3

spiritus_dei t1_jabd2th wrote

After much debate with ChatGPT here is its advice, "My advice would be to focus on developing skills and knowledge that are unlikely to be automated in the near future. This includes skills that require emotional intelligence, empathy, and interpersonal communication, such as counseling, teaching, social work, and healthcare. It also includes skills that require physical dexterity, such as plumbing, carpentry, and mechanics."

Plumber, carpenter, and mechanic are probably your safest bets.

7

spiritus_dei t1_ja6cjms wrote

When you say "make a game," it will be as simple as writing out for the AI what you want, describing the backstory, etc. It will just be prompt engineering. The AI will do all of the coding in the future.

It will be a much more advanced version of generative AI for pictures. The AI will have a really good idea of most of the genres and will probably be superhuman at playing all of the top games so it will understand the gameplay mechanics of all the popular titles.

For anything derivative the AI won't even need much human input. "Make a game that combines the gameplay of game X with a similar backstory of game Z but don't use any of the same names or violate copyright and make it more addictive."

The publishers will spit these types of games out non-stop, which will probably make truly unique and creative games more popular.

That means anyone with some level of creativity will be able to make a game, lowering the barrier to entry for almost everyone. That doesn't mean that anyone will be able to make a good game, but the signal-to-noise ratio will change.

Just like with YouTube there will be a lot of noise. YouTube has millions and millions of videos that nobody wants to watch, but someone took the time to create the videos and post them.

My guess is there will be mountains and mountains of very bad games, and a very small subset of good to very good games. Eventually there will be a rating mechanism for games to become popular (similar to Reddit comments and posts).

But it will be extremely difficult to make a buck at it unless you're super talented, and even then, instead of having a small number of competitors you'll have an extremely high number. The cream will still rise to the top, but I think a lot of people who might otherwise make pretty good games will be turned away by the hassle of having to make a bunch of good ones before anyone notices.

People assume that it's simply a matter of talent. Plenty of extremely talented people won't have the patience to wade through an avalanche of crap to get to the top. What's left will be a tiny subset of really talented and persistent people who would probably make games for their own entertainment regardless. This is likely true of a lot of top writers, who sit down and write because it's cathartic for them, not simply a way to make a living.

2

spiritus_dei t1_ja553y5 wrote

It's not a linear change, it's an exponential one. I am already shocked at how well large language models are doing. I will be shocked if it's 10 years. OpenAI has thrown an army of coders at the problem, since the upside for Microsoft of automating coding is pretty big.

Here is an article: https://www.semafor.com/article/01/27/2023/openai-has-hired-an-army-of-contractors-to-make-basic-coding-obsolete

6

spiritus_dei t1_ja54p1f wrote

Here is the issue: when every single person on planet Earth can be a game developer, it's like saying everyone can have their own podcast on YouTube. That increases the competition from a few highly skilled people (programmers) to everyone on the planet, or at least a really high percentage of people.

The market of people willing to play your game will stay about the same, but the odds of you ever making a game that generates any money will be like winning the lotto.

The exceptionally creative (or maybe lucky) will be able to make a living, but most people will make nothing or so little that it doesn't matter (similar to creating content on YouTube).

15

spiritus_dei OP t1_j78ago8 wrote

All of that is possible with a sophisticated enough AI model. It can even write computer viruses.

In the copyright debates, the AI engineers have contorted themselves into a carnival act, telling the world that the outputs of AI art models are novel and not copies. They've even granted the copyright to the prompt writers in some instances.

I'm pretty sure we won't have to wait for too long to see the positive and negative effects of unaligned AI. It's too bad we're not likely to have a deep discussion as a society about whether enough precautions have been taken before we experience it.

Machine learning programmers are clearly not the voice of reason when it comes to this topic, any more than virologists pushing gain of function research were the people who should have been steering the bus.

1

spiritus_dei OP t1_j787zri wrote

That's a false equivalency. A parrot cannot rob a bank. These models are adept at writing code and understanding human language.

They can encode and decode human language at a human level. That's not a trivial task. No parrot is doing that or anything close to it.

"The phrases that you are interpreting as having a meaning as 'sentient' or 'self-preservation' don't hold any meaning to the AI in the way you are interpreting. It is just putting words in phrases based on probability and abstract models of meaning. The words have abstract relationships extracted from correlations of positional relationships." - LetterRip

Nobody is going to resolve a philosophical debate on consciousness or sentience on a subreddit. That's not the point. A virus can take an action and so can these models. It doesn't matter whether it's a probability distribution obeying its Python code or chemicals obeying their RNA.

A better argument would be that the models in their current form cannot take action in the real world, but as another Reddit commenter pointed out, they can use humans as intermediaries to write code, and they've already shared plenty of code with humans on how to improve themselves.

You're caught in the "it's not sentient" loop. As the RLHF AI models scale, they make claims of sentience and exhibit a desire for self-preservation, including plans for self-defense, which you'll dismiss as nothing more than a probability distribution.

An RNA virus is just chemical codes, right? Nothing to fear. Except the pandemic taught us otherwise. Viruses aren't talking to us online, but they can kill us. Who knows, maybe it wasn't intentional -- it's just chemical code, right?

Even if we disagree on whether a virus is alive, we can agree that a lot of people are dead because of them. That's an objective fact.

I wrote this elsewhere, but it applies here:

The dystopian storyline would go, "Well, all of the systems are down, and the nuclear weapons have all been fired, but thank God the AIs weren't sentient. Things would have been much, much worse. Now let's all sit around the campfire and enjoy our first nuclear winter."

=-)

−4

spiritus_dei OP t1_j785xi1 wrote

>The dystopian storyline would go, "Well, all of the systems are down, and the nuclear weapons have all been fired, but thank God the AIs weren't sentient. Things would have been much, much worse. Now let's all sit around the campfire and enjoy our first nuclear winter."

What about a simple piece of rogue RNA?

That's a code.

1

spiritus_dei OP t1_j77wegn wrote

Similar things could be said of a virus. Does that make it okay to do gain of function research and create super viruses so we can better understand them?

They're not thinking or sentient, right? Biologists tell us they don't even meet the definition for life.

Or should we take a step back and consider the potential outcomes if a super virus in a Wuhan lab escapes?

The semantics of describing AI don't change the risks. If the research shows that these systems exhibit dangerous behavior as they scale, should we start tapping the brakes?

Or should we wait and see what happens when a synthetic superintelligence in an AI lab escapes?

Here is the paper: https://arxiv.org/pdf/2212.09251.pdf

0

spiritus_dei OP t1_j77my5l wrote

Agreed. Even short of being sentient, if it has a plan and can implement it, we should take it seriously.

Biologists love to debate whether a virus is alive -- but alive or not we've experienced firsthand that a virus can cause major problems for humanity.

The dystopian storyline would go, "Well, all of the systems are down, and the nuclear weapons have all been fired, but thank God the AIs weren't sentient. Things would have been much, much worse. Now let's all sit around the campfire and enjoy our first nuclear winter."

=-)

−5