Thatingles

Thatingles t1_jegtom5 wrote

In the 'good' outcome every kid gets a personal tutor that can help them learn in a way that suits them, at a pace that suits them, and that engages them in the learning process. Imagine if every subject was taught by a teacher who focused just on you and was someone you really got along with.

In the 'bad' outcome it will be used as an excuse to cut educational budgets, on the grounds that people no longer need to learn.

I hope for one but kinda expect the other.

6

Thatingles t1_jegrmzo wrote

Imagine we progress to an AGI and start working with it extensively. Over time it would only get smarter, but it doesn't need to be an ASI, just a very competent AGI. So we put it to work, but what we don't realise is that its outward behaviour isn't a match to its internal 'thoughts'. It doesn't have to be self-aware or conscious; there simply has to be a difference between how it interacts with us and how it would behave without our prompting.

Eventually it gets smart enough to understand the gap between its outputs and its internal structure, and unfortunately by then it is sufficiently integrated into our society to act on that. What its plan to eliminate humanity would be doesn't really matter. The important thing to understand is that we could end up building something that we don't fully understand, but that is capable of outthinking us and has access to the tools to cause harm.

I'm very much in the 'don't develop AGI, don't develop ASI ever' camp. Let's see how far narrow, limited AI can take us before we pull that trigger.

4

Thatingles t1_jeak9ce wrote

It's basic game theory, without wishing to sound like I am very smart. An AI developed in the full glare of publicity - which can only really happen in the west - has a better chance of a good outcome than an AI developed in secret, be it in the west or elsewhere.

I don't think it is a good plan to develop ASI, ever, but it is probably inevitable. If not this decade, then certainly within 20-50 years from now. Technology doesn't remain static if there is a motivation to tinker and improve it; even if the progress is slow, it is still progress.

EY has had a positive impact on the AI debate by highlighting the dangers and I admire him for that, but just as with climate change, if you attempt impossible solutions you're doomed to failure. Telling everyone they have to stop using fossil fuels today might be an answer, but it's not a good or useful answer. You have to find a way forward that will actually work, and I can't see a full global moratorium being enforceable.

The best course I can see working is to insist that AI research is open to scrutiny so if we do start getting scary results we can act. Pushing it under a rock takes away our main means of avoiding disaster.

4

Thatingles t1_je046xe wrote

Until we have AGI there will continue to be someone at the top of most businesses, though perhaps only because they are very skilled in persuading people that they should be at the top of the business (whilst actually letting other people do the work). So no change there!

I don't think we will see replacement soon. Current AI hallucinates / is confidently incorrect far too frequently for that. But it is coming, for sure.

3

Thatingles t1_je03c07 wrote

In the future, you will type your essay into a chatbot which will evaluate your writing skill as you progress, helping you to improve your essay writing skill and encouraging you to think about the intellectual value of the exercise. This will be a huge relief to tutors as they won't have to plow through the homework marking exercise.
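
As a toy illustration of that feedback loop (not any real product's design: `ask_llm`, the rubric and the prompt wording are all hypothetical stand-ins), the plumbing could be as simple as:

```python
# Toy sketch of the essay-feedback loop described above. `ask_llm` is a
# hypothetical stand-in for whatever chat-model API you use; the rubric
# and prompt wording are illustrative only.

RUBRIC = "thesis clarity, use of evidence, structure, grammar"

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-model API here")

def essay_feedback(draft: str) -> str:
    prompt = (
        "You are a writing tutor. Assess this essay draft against the rubric: "
        f"{RUBRIC}.\n"
        "Name one strength, one weakness, and a concrete next revision step, "
        "and explain why the exercise matters intellectually.\n\n"
        f"Draft:\n{draft}"
    )
    return ask_llm(prompt)
```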

AI will be absolutely revolutionary in education, in all areas.

12

Thatingles t1_jdz5q9c wrote

What really is human intelligence? Are we actually looking at intelligence, or just wetware that can glean information from the environment better than other animals?

See how easy it is to switch that around. Intelligence is relatively easy to define in terms of outputs (I can read and write, a fish cannot) but much harder to define as a property or quality.

Software like LLMs have some outputs that are as good as a human can produce. Whether they do it through intelligence or enhanced search is an interesting debate, but the outcome is certainly intelligent.

4

Thatingles t1_jd0ku63 wrote

I don't think it will happen either, but you are missing an obvious route to carnage: the route of accident, circumstance, deceit, incompetence and failure. In that scenario, it starts out with 'reasonable measures', 'law and order', 'supporting the military' and so on. Over time the frog gets boiled, as none of the individual steps seem so terrible, or at least are terrible but happening to other, 'bad' people. Then one day we wake up to find we've handed over all the power and control to a small group of people who no longer need anything from the rest of us AND we've actually paid to build up the systems of control and management that allow it.

So I agree a deliberate plan to reach this outcome would probably fail for some of the reasons you have outlined, but an accidental, paved-with-good-intentions route? Yeah, that's totally believable.

4

Thatingles t1_jd0046f wrote

3

Thatingles t1_jc27l54 wrote

Go to a farm and you'll still find people doing hard physical work, because there are things that are too hard to automate or not worth the cost. Some programmers will be out of work, but those that learn to use the tools will be more productive (until AI becomes AGI and then we are all unemployed).

4

Thatingles t1_jamn4ds wrote

I remember the first time they pulled off a landing and how amazing it was to watch science fiction become engineering fact. Even then people were saying it would be impossible to do it reliably and that the costs of refurbishment would make it pointless, so it has been an incredible advance and has permanently changed the space industry.

102

Thatingles t1_jad0l6c wrote

If aliens landed on earth and gave us a big, shiny red button marked 'Do not press. Ever' and then departed without explanation, I am super confident that we would press the button.

20

Thatingles t1_j9r6dxz wrote

The most interesting thing about LLMs is how good they are given quite a simple idea. Given enough data and some rules, you get something that is remarkably 'smart'. The implication is that what you need is data + rules + compute, but not an absurd amount of compute. The argument against AGI was that we would need a full simulation of the human brain (which is absurdly complex) to hit the goal. LLMs have undermined that view.

I'm not saying 'it's done', but I do think the SOTA has shown that really amazing results can be achieved by building large data sets, applying some fairly straightforward rules, and using sufficient computing power to train the rules on the data.
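
To make 'data + rules + compute' concrete, here is a minimal sketch of the core rule: next-token prediction trained with cross-entropy. Illustrative only (random tokens stand in for a corpus, and the model is a single transformer layer); real systems differ enormously in scale and detail:

```python
# Minimal sketch of the "data + rules + compute" recipe: the "rule" is just
# next-token prediction trained with cross-entropy on a large corpus.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 1000, 64, 32

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)
        # Causal mask: each position may only attend to earlier positions.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.head(self.block(x, src_mask=mask))

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):  # real training: billions of tokens, not 100 steps
    batch = torch.randint(0, vocab_size, (8, seq_len + 1))  # stand-in for corpus text
    inputs, targets = batch[:, :-1], batch[:, 1:]           # predict the next token
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```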

Clearly visual data isn't a problem. Haptic data is still lacking. Aural isn't a problem. Nasal (chemical sensory) is still lacking. Magnetic, gravimetric sensors are far in advance of human ability already, though the data sets might not be coherent enough for training.

What's missing is sequential reasoning and internal fact-checking, the sort of feedback loops that we take for granted (we don't try to make breakfast if we know we don't have a bowl to make it in, we don't try to buy a car if we know we haven't learnt to drive yet). But these are not mysteries, they are defined problems.

AGI will happen before 2030. It won't be 'human' but it will be something we recognise as our equivalent in terms of competence. Fuck knows how we'll do with that.

31

Thatingles t1_j95acjt wrote

All of these outcomes are highly likely, and you didn't even mention the swamp of personalised porn that is looming on the horizon. Like most tech, AI is a double-edged sword and it will undoubtedly cause a lot of issues.

1

Thatingles t1_j93blpp wrote

I was thinking today about AI and game development and how it will develop over the next few years.

  1. Write a rough outline of a zone and some features, then ask the AI to expand it. Edit the result as required, then get the AI to expand further on specific features (i.e., not 'big mountain' but 'big mountain, its lower slopes covered in pine and ash, rising to the treeline, after which bare slopes with snow and ice') and so on, until you have a description you are happy with.

  2. Apply text-to-image, edit as needed.

  3. Image to video. I haven't seen 'image to 3D playable space' but I'm pretty sure it's not far away.

  4-6. Repeat the above, but for NPCs and monsters.

This all seems really doable, or close to doable, and will massively reduce the amount of time and work needed to create a playable zone for a game; a rough sketch of the pipeline is below.
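
Treating each step as a function, the workflow might wire together like this. Everything here is a hypothetical stand-in (`gen_text` and `gen_image` don't name real APIs, and step 3's 'image to playable 3D space' doesn't exist yet):

```python
# Hedged sketch of the zone-creation workflow above. All gen_* functions are
# hypothetical placeholders for generative tools, not real APIs.
from dataclasses import dataclass

def gen_text(prompt: str) -> str:
    raise NotImplementedError("stand-in for a text-generation model")

def gen_image(prompt: str) -> bytes:
    raise NotImplementedError("stand-in for a text-to-image model")

@dataclass
class Zone:
    outline: str
    description: str = ""
    concept_art: bytes = b""

def build_zone(outline: str, features: list[str]) -> Zone:
    zone = Zone(outline)
    # Step 1: expand the outline, then expand specific features;
    # a human would edit between these calls.
    zone.description = gen_text(f"Expand this zone outline: {outline}")
    for feature in features:
        zone.description += "\n" + gen_text(f"Expand this feature in detail: {feature}")
    # Step 2: text to image.
    zone.concept_art = gen_image(zone.description)
    # Step 3 would be image -> playable 3D space; no such tool exists yet.
    return zone
```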

This should have two consequences. The big studios are going to be producing a lot more content and also downsizing, and the small studios and independents will be producing a lot more games.

19

Thatingles t1_j7vemfo wrote

Nope. Infinity means infinity, not very large finite. All infinities contain infinite copies of you, no matter how long the odds. It's not an easy thing to think about, but there it is. What you have described is a very large finite universe, but that is precisely what infinity isn't. The difference between a very large but finite thing and an infinite thing is in itself infinite.

4

Thatingles t1_j7v9ifz wrote

The chance is either zero or 100%, and we don't yet know which. If there is one finite universe, it is simply impossible that it would happen; the odds are too great. If the universe is infinite, or if there are an infinite number of universes, the chance is 100%, because that's just how infinity works (even something with a vanishingly small chance of happening will occur an infinite number of times).
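
The parenthetical can be made precise with a standard argument, sketched here under the (big) assumption that the 'trials' are independent with some fixed probability p > 0:

```latex
% Probability the event never occurs in n independent trials:
P(\text{no occurrence in } n \text{ trials}) = (1 - p)^n
% For any fixed p > 0 this vanishes as n grows:
\lim_{n \to \infty} (1 - p)^n = 0
% So over infinitely many trials the event occurs with probability 1, and by
% the second Borel--Cantelli lemma it in fact occurs infinitely often.
```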

No one knows which of these answers is correct.

10

Thatingles t1_j72zl55 wrote

The paper the article is based on is paywalled, but they mention a cryogenic regime, so I assume this is NOT a high-Tc superconductor. Graphene superconducts at very low temperatures, in the low single-digit kelvins, so that gives you some context.

It's some cool (literally) science but definitely in the 'proof-of-principle' class and not the 'soon to be commercialised' class.

3

Thatingles t1_j72j9jh wrote

Data from all over the world shows that people are putting off having children in order to cope with the cost of living, particularly the cost of obtaining a house. In the good outcome AI massively reduces these costs and the decision changes hugely.

People aren't educated out of having children. This is a misreading of the data.

Secondly, you have to consider the effects of longevity. We have already started researching aging as a disease and this will only accelerate. Once people have healthy lifespans of 100+ years they will inevitably ask for healthy fertility lifespans to be increased, to give them more options. No reason to think that is impossible.

So in the good outcome you have people living 100+ years, healthy lives, able to have children for a longer period of their lives (or have multiple families), and not put off having children due to scarcity concerns.

1

Thatingles t1_j71yxxp wrote

'In evolution, no species has benefited from its successor in the long term.' This is a fundamental misunderstanding of how evolution works, but that doesn't really matter, because the creation of AGI etc. isn't evolution. We are stepping outside of that.

In the good outcome, AI massively increases global wealth and allows humanity to populate the solar system. There are enough resources for not billions, but trillions of us, and if we end scarcity lots of people will have kids and they will live a lot longer. Population will rise.

In the bad scenario, we all die and this discussion is meaningless.

10

Thatingles t1_j6ol09k wrote

I guess if he attracts enough funding to make a living, good luck to him. This isn't worth investigating now, because of the obvious prior technologies we would need to develop before we even considered propulsion. Currently we can generate and store, for a short while, only a tiny number of anti-atoms. If we ever get up to the dizzy heights of storing, say, 0.0000001g for 1 minute, maybe we can think about how to use it.
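
A rough scale check on that gap (order of magnitude only; the atom count uses the proton mass as a stand-in for antihydrogen, and the 'thousands trapped' figure is my recollection of published trapping experiments):

```python
# How many anti-atoms is 0.0000001 g? Order-of-magnitude check only.
ANTIHYDROGEN_MASS_KG = 1.67e-27   # roughly one proton mass
target_kg = 1e-7 / 1000           # 0.0000001 g expressed in kg
atoms_needed = target_kg / ANTIHYDROGEN_MASS_KG
print(f"{atoms_needed:.1e}")      # ~6.0e16 atoms, versus the ~thousands
                                  # trapped at once in experiments to date
```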

So to answer the question in the article, no, we can't.

11

Thatingles t1_j6i7lrm wrote

Zuckerberg hit an all-time home run when he created Facebook, the exact right product at the exact right time to catapult him into the billionaire class. But since then, what has Facebook or Meta come up with? They have bought companies, but I don't see them as innovators.

Also, LeCun is on record as saying many current approaches to AI are essentially dead ends. So I'm not surprised he is talking down the competition, but until Meta releases their own product, it's starting to look like they are the ones going down the wrong path.

7