flexaplext

flexaplext t1_jeg04oq wrote

The idea is to let AI optimize things. Everyone has their own personal AI that puts forward their views and needs; then a central AI aggregates all that data and works out which policies would best serve the population as a whole, as a collection of the individuals within it. Which is what democracy should be.
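
A minimal sketch of what that aggregation step might look like, assuming each personal AI reduces its owner's views to a numeric preference score per candidate policy (the policy names, scores and the simple averaging rule here are made-up placeholders, not a description of any real system):

```python
from collections import defaultdict

def aggregate_policies(personal_ai_reports):
    """Combine per-citizen preference scores into a population-level ranking.

    personal_ai_reports: list of dicts, one per citizen's personal AI,
    mapping policy name -> preference score in [-1, 1].
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for report in personal_ai_reports:
        for policy, score in report.items():
            totals[policy] += score
            counts[policy] += 1
    # Average score per policy, ranked best-first.
    averages = {p: totals[p] / counts[p] for p in totals}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: three citizens' personal AIs report their preferences.
reports = [
    {"expand public transit": 0.9, "cut fuel tax": -0.2},
    {"expand public transit": 0.4, "cut fuel tax": 0.8},
    {"expand public transit": 0.7, "cut fuel tax": -0.5},
]
print(aggregate_policies(reports))
```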

1

flexaplext t1_jefw4sn wrote

Yeah, because we don't just see governments knee-jerk reacting to AI now when private enterprise has been developing and investing in it for many years.

And it isn't the most important and dangerous technology that will ever exist and yet they have little to no regulations on it or proper plans going forward for it. Despite this being obvious and known for decades.

And MPs know so much about computer programming, I'm sure they'll know exactly how to lead AI development and appoint the right people to it, doing so in an efficient and innovative manner.

And I'm sure the best programmers will be lining up to work for the government and their military rather than OpenAI and progressive companies.

4

flexaplext t1_jefsxdn wrote

Yeah, that's the alternative. And such people will probably win because governments are so useless.

However, I suspect the US government will just forcibly take over OpenAI at some point on the grounds of National Security. They may be useless, but they're good at taking things over.

The same option probably won't exist for the UK government though. Which is why they'd be better off rejoining the EU and trying something within that union. Of course, with the EU buying out a decent existing company to get themselves started, as I also suggested.

Or the EU could just fund many different companies and then take over the one that wins out, the US-style plan. Asking the UK to try this model alone dramatically reduces its funding, its company pool, and its odds of being successful.

5

flexaplext t1_jeesk7z wrote

I'm not sure exactly. As people say, it just always happens. The only thing I think these people are wrong about is that this trend continues after true AGI. That's when all trends and models of the economy and everything else break down.

If I had to guess, I think a lot of people will just be moved into places where AI is not yet fully capable. Mass collective data training. The more people on it, the faster we'll get to true AGI. If the AI is not yet at true AGI, then that means there are obviously areas where it still needs to learn.

The economic value of training AI, once it has a full capacity to learn well from training, will just be absolutely massive. So it will require work-from-home solutions to get more people into these areas, and very quick turnaround and retraining of people into new areas. The economic value will certainly be there to facilitate such a system, though.

I think we'll inevitably also start to see a greater amount of real-world value being created too. So there will be a large increase in real-life activity needing to be done, whilst the robotics side of things lags behind.

I think robotics will still lag behind for a while, even after true AGI is created. It will take some time to manufacture and deploy all the necessary robots to replace workers. So there will still be a lot of people with jobs even after the inception of AGI, but then, slowly but surely, they'll start to get replaced. Starting with the higher-salaried jobs first, then down towards the minimum-wage workers eventually.

I think there will still be physical work after AGI. But it will be incredibly low-paid and optional. Humans are still useful, but they'll just have to accept not being paid much at all in order to stay economically viable against a robot.

1

flexaplext t1_jeeos7z wrote

It needs work-based training data. That's where Copilot comes in:

https://www.reddit.com/r/singularity/comments/11t13ts/the_full_path_to_true_agi_finally_emerges/

Once this system gets better, we'll start seeing proper unemployment happen on a worldwide mass scale.

There will be lots of new jobs created for a while though, as people say. I think the job market will be perfectly fine, even with this massive shift. That is, up until AI reaches true AGI / ASI; then the job market will be shot to pieces.

3

flexaplext t1_je7x01m wrote

Well that was a fun read. I've obviously heard his stuff before.

But what needs to be absolutely realized is that true global cooperation has to come first. First and foremost. Nothing else can happen until that happens. It cannot be shut down before that happens. No progress can be made at all until that happens.

I literally just wrote a thread post about it.

The lack of international cooperation is both the very first problem that needs solving and also the very first threat to society. It needs to happen now and before anything else. It is the only real discussion that needs to take place right now: how can that actually be facilitated? Because bringing the major world powers together, aligned and feeling safe from one another, is by far the hardest problem. Shutting down all AI development is a piece of cake of a task in comparison.

9

flexaplext t1_je7ampt wrote

I think this and OP's take are completely wrong. I would say it's going to be mostly non-useful data that's lost.

People will still write Wikipedia articles, they just won't be read as much, but the data on the site will still be valid.

People will still ask questions on Stack Overflow, but there will be fewer of them, and the number of trivial questions will drop significantly since these are easier for AI to answer. But novel and difficult questions will still need to be asked on Stack Overflow, because AI isn't capable of answering them. And people will still want to, and be interested in, answering the more novel questions.

Thus, the overall effect will actually be to significantly improve the data, and engage people better. People wanting to answer questions will enjoy the experience much more with the less easy and obvious questions being removed. And they will be able to stumble across the interesting questions more easily and efficiently.

It should actually end up creating a much richer set of data for models to train on.

Think about it. If all the questions on Stack Overflow that are asked and answered were only questions that the models couldn't answer, that's like literally the most perfect training data. It filters out the questions the model already knows the answer to, which are not useful for it to read.

And this effect (I think it needs a name if it doesn't have one? - The Flex Effect if I just came up with it 😂😂) will only adapt over time with increased model output accuracy. As the model updates and its answers get better, so too will the questions, and subsequent answers. They'll get better and more and more difficult, matching the criteria for what the new model then still needs to learn and train on.
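
Roughly, the filtering loop that falls out of this looks something like the sketch below, assuming you have some way of checking whether the current model already answers a question acceptably (the `model_answer` and `is_acceptable` callables are hypothetical stand-ins, not any real API):

```python
def build_training_set(qa_pairs, model_answer, is_acceptable):
    """Keep only the Q&A pairs the current model still gets wrong.

    qa_pairs: list of (question, human_answer) tuples scraped from a Q&A site.
    model_answer: callable taking a question and returning the model's answer.
    is_acceptable: callable judging the model's answer against the human one.
    """
    hard_pairs = []
    for question, human_answer in qa_pairs:
        attempt = model_answer(question)
        if not is_acceptable(attempt, human_answer):
            # The model can't answer this yet, so the human answer is useful new signal.
            hard_pairs.append((question, human_answer))
    return hard_pairs

# Each new model generation re-runs the filter, so the surviving questions
# keep tracking whatever the latest model still can't do.
```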

16

flexaplext t1_je6j0wt wrote

I think people often underestimate the capabilities of AI.

But they also often overestimate the capabilities of physics.

Some things will just be impossible and not be allowed within the laws of physics no matter what. Can't say exactly what those things will be but I'll put my hat in the ring to say it will be a number of the things they hypothesize AI to be capable of doing.

11

flexaplext t1_je26vkw wrote

Content: Music, Movies, Books, Art

Economics: Efficient, fair, reliable systems

Politics: Efficient, accurate, representative systems

Human Enhancement: Directly enhance physical and mental capabilities

Biological: Create new animals, bring some back from extinction, create entirely new food sources

Safety Needs: Allow more direct control over our nervous system. So we can turn off pain entirely at source if we wish to do so

1

flexaplext OP t1_jdydx39 wrote

Just hold it all in memory. My mental arithmetic and manipulation are actually rather decent, despite not being able to visualise it. You actually find that this applies to most people with aphantasia. There are lots of interesting things about it if you search and read up on people's perceptions and experience of it.

It's strange to describe.

Because I know exactly what something like a graph looks like, without being able to visualise it. Just by holding all the information about a graph in memory. I can manipulate that information by simply changing the information of the graph.

However, this ability does break down with more complex systems. If I try and hold an entire chess board in memory and manipulate it, I just fail completely. It's too much information for me to keep in memory and work out accurately without a visual aid.

2

flexaplext OP t1_jdxxwnv wrote

It's really quite a strange experience if you properly delve deep into your conscious thought process and think about exactly what's going on in there.

There's this subconscious supercomputer in the back of your mind that's always running: throwing ideas into your thought process; processing, analysing and prioritising every single input of this massive stream of sensory data; storing and retrieving memories; managing your heartbeat and internal body systems.

It's doing so, so much back there on autopilot, and you have no direct access to it or control over it.

The strangest thing of all, though, is the way it just throws ideas, concepts and words into your conscious dialogue. Maybe that's only the strangest part to me because it's the only thing I am able to have a true perception of it doing.

Like I said, it's not necessarily single words that it is throwing at you, but overarching ideas. However, maybe these ideas are just like single word terms, like a macro, and then that single term is expanded out into multiple words based on the sequence of words in such a term.

There are different ways to test and manipulate its output to you though. You have some conscious control over its functionality. 

If you try to, you can tell and make your subconscious only throw out overarching ideas to you, rather than a string of words. Well, I can anyway.

You can also, like, force the output to slow down completely and force it to give you literally only one word at a time and not think at all about an overarching idea of the sentence. Again, I can do that anyway.

It's just like my thought process is completely slowed down and limited. It's way more limited in thought, and it's literally like the subconscious is just throwing one word at a time into my mind. I mean, I can write out exactly what it comes up with when I do this:

"Hello, my name is something you should not come up with. How about your mom goes to prison. What's for tea tonight. I don't know how you're doing this but it's interesting. How come I'm so alone in the world. Where is the next tablet coming from."

I mean, fuck. That's weird to do. You should try it if you can. Just completely slow down and force your thoughts into completely singular words. Make sure not to let any ideas or concepts enter your mind. I mean, that output is way less than an LLM's capability when I do that; it's very, very similar to what basic predictive text currently is. In fact, it feels almost the same, except that it appears to be affected by emotion and sensory input.

Edit: There is another way I can do it. Just think, or even better speak out loud, fairly fast without thinking at all about what you're saying. Don't give yourself time to think or for ideas to come into your mind. You wind up just stringing nonsensical words together. Sometimes there's a coherent sentence in there from where a concept pops in, but it's mainly still just like a random string of predictive text.
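
For comparison, "basic predictive text" amounts to roughly this: a bigram lookup that, given only the previous word, emits whichever word most often followed it in some sample text. The sample sentence and the greedy one-word-at-a-time loop below are just an illustration of that idea:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """For each word, count which words follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(following, word):
    """Greedy predictive text: the single most likely next word, or None."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

model = train_bigrams("what is for tea tonight i do not know what is for tea")
word = "what"
for _ in range(5):  # emit one word at a time, with no plan for the sentence
    word = predict_next(model, word)
    if word is None:
        break
    print(word, end=" ")  # -> is for tea tonight i
```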

1

flexaplext t1_jdxjy0l wrote

Whether it will be created safely or not depends entirely on how seriously the government / AI company takes the threat of a strong AGI.

There is then the notion that we will need to be able to actually detect whether it's reached strong AGI, or at least the hypothesis that it may have done so and may be deceiving us. So, whichever way, containment would be necessary if we consider it a very serious existential threat.

There are different levels of containment. Each further one is more and more restrictive but more and more safe. The challenge would likely come in working out how many restrictions you could lift in order to open up more functionality whilst also keeping it contained and completely safe.

We'll see when we get there how much real legislation and safety is enforced. Humans unfortunately tend to be reactive rather than proactive, which gives me great concern. An AI model developed between now and AGI may be used to enact something incredibly horrific, though, which may then force these extreme safety measures. That's usually what it will take to actually make governments sit up properly and notice.

1

flexaplext t1_jdxi6k0 wrote

Not very likely. It's much more likely it will first emerge somewhere like OpenAI's testing, where they have advanced it to a significant degree with their major model changes. Hopefully they'll recognize when they are near strong AGI levels and not give it internet access for testing.

If they are then able to probe and test its capabilities and find it capable of being incredibly dangerous, this is when it would get reported to the Pentagon, and they may start to put extreme containment measures on it.

If AI has been used up to this point for something highly horrific, like an assassination of the president or a terrorist attack, it is possible that these kinds of safety measures would be put in place. There are plenty of potential serious dangers of humans using AI before AGI itself actually happens. These might draw proper attention to its deadly consequences if safety is not made of paramount importance.

I can't really predict how it will go down though. I'm certainly not saying at all that containment will happen. I'm just saying that it's potentially possible to happen if it's taken seriously enough and ruled with an iron fist.

I don't personally have much faith, though, given humanity's past record of being reactive rather than proactive towards potential severe dangers. Then again, successful proactive measures tend to never get noticed, that's their point, so this may cause a high sample bias on my part due to experience and media coverage.

1

flexaplext t1_jdxg54v wrote

Not if you only give direct access to a single person in the company and have them highly monitored, with very limited power and tool use outside of said communication. Just greatly limit the odds of a breach.

You can do AI containment successfully; it's just highly restrictive.

Say it remains within a single data centre with no ability to output to the internet, only to receive input, and governments worldwide block and ban all other AI development, monitoring this very closely and strictly, 1984-style, with tracking forcibly embedded into all devices.

I'm not saying this will happen, but it is possible. If we find out ASI could literally end us with complete ease, though, I wouldn't completely rule out that we will go down this incredibly strict route.

Understand that even in this highly restrictive state, it will still be world-changing. Being able to potentially come up with all scientific discovery alone is good enough. We can always do rigorous tests of any scientific discovery, just as we would if we came up with the idea ourselves, and make sure we understand it completely before any implementation.

4

flexaplext OP t1_jdxc3bh wrote

I actually can't do those things. As part of aphantasia I can't generate virtual vision, virtual taste, virtual smell or virtual touch at all.

I can only generate virtual sound in my head.

This is why I can say those other mental modes are not necessary at all for thinking and consciousness. Because I know that I'm conscious and thinking without them, and I still would be without any input from my real senses. But obviously my sensory input has been completely vital to learning.

4