VirtualHat

VirtualHat t1_jaa4jwx wrote

An increasing number of academics are identifying significant potential risks associated with future developments in AI. Because regulatory frameworks take time to develop, it is prudent to start considering them now.

While it is evident that current AI systems do not pose an existential threat, the same does not necessarily hold for future systems. It is also worth remembering that regulations are commonly put in place without suppressing an entire field. For instance, despite the existence of traffic regulations, we continue to use cars.

3

VirtualHat t1_j9vnc3y wrote

Yes, it's worse than this too. We usually associate well-written text with accurate information. That's because, generally speaking, most people who write well are highly educated and have been taught to be critical of their own writing.

Text generated by large language models is atypical in that it's written like an expert but is not critical of its own ideas. We now have an unlimited amount of well-written, poor-quality information, and this is going to cause real problems.

13

VirtualHat t1_j9vkpgd wrote

That's a good question. To be clear, I believe there is a risk of an extinction-level event, just that it's unlikely. My thinking goes like this.

  1. Extinction-level events must be rare, as one has not occurred in a very long time.
  2. Therefore the 'base' risk is very low, and I need evidence to convince me otherwise.
  3. I'm yet to see strong evidence that AI will lead to an extinction-level event.

I think the most likely outcome is that there will be serious negative implications of AI (along with some great ones) but that they will be recoverable.

I also think some people overestimate how 'super' a superintelligence can be and how unstoppable an advanced AI would be. In a perfect-information game like chess or Go, a sufficiently superior player can win essentially every game. But in a game with chance and imperfect information, a relatively weak player can occasionally beat a much stronger one. The world we live in is one of chance and imperfect information, which limits any agent's control over outcomes. This makes EY's 'AI didn't stop at human-level for Go' analogy less relevant.
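
As a toy illustration of that point (my own numbers, not from EY or anyone else): if the stronger player wins any single game with probability 0.7, chance alone still hands the weaker player a non-trivial share of best-of-7 matches.

```python
import random

# Toy model: the stronger player wins each individual game with
# probability p_strong, and a match is best-of-7 (first to 4 wins).
def weaker_player_wins_match(p_strong=0.7, games_to_win=4):
    strong, weak = 0, 0
    while strong < games_to_win and weak < games_to_win:
        if random.random() < p_strong:
            strong += 1
        else:
            weak += 1
    return weak == games_to_win

trials = 100_000
upsets = sum(weaker_player_wins_match() for _ in range(trials))
print(f"Weaker player wins the match in {upsets / trials:.1%} of trials")
```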

1

VirtualHat t1_j9rsysw wrote

I work in AI research, and I see many of the points EY makes here in section A as valid reasons for concern. They are not 'valid' in the sense that they must be true, but valid in that they are plausible.

For example, he says we can't just build a very weak system. There are two papers that led me to believe this could be the case: 'All Else Being Equal Be Empowered', which shows that any agent acting to achieve a goal under uncertainty will need (all else being equal) to maximize its control over the system, and the zero-shot learners paper, which shows that (very large) models trained on one task seem to also learn other tasks (or at least learn how to learn them). Both of these papers make me question the assumption that a model trained on one 'weak' task won't also learn more general capabilities.

Where I think I disagree is on the likely scale of the consequences. "We're all going to die" is an unlikely outcome. Most likely the upheaval caused by AGI will be similar to previous upheavals in scale, and I'm yet to see a strong argument that bad outcomes will be unrecoverable.

59

VirtualHat t1_j9rqmii wrote

This is very far from the current thinking in AI research circles. Everyone I know believes intelligence is substrate independent and, therefore, could be implemented in silicon. The debate is really more about what constitutes AGI and if we're 10 years or 100 years away, not if it can be done at all.

8

VirtualHat t1_j9lp6z3 wrote

There was a really good paper a few years ago that identifies some biases in how DNNs learn that might explain why they work so well in practice compared to the alternatives. Essentially, they are biased towards smoother solutions, which is often what is wanted.

This is still an area of active research, though. I think it's fair to say we still don't quite know why DNNs work as well as they do.
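
As a rough sketch of the smoothness bias described above (my own toy example, not taken from the paper): fit the same noisy 1-D data with a small MLP and a 1-nearest-neighbour regressor, then compare how wiggly the two fits are on a fine grid.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor

# Noisy 1-D regression problem: a slow trend plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(scale=0.3, size=200)

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(x, y)
knn = KNeighborsRegressor(n_neighbors=1).fit(x, y)

# Crude roughness measure: mean squared difference between predictions
# at closely spaced points. Lower = smoother fit.
grid = np.linspace(-3, 3, 1000).reshape(-1, 1)
for name, model in [("MLP", mlp), ("1-NN", knn)]:
    pred = model.predict(grid)
    print(f"{name}: roughness = {np.mean(np.diff(pred) ** 2):.6f}")
```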

1

VirtualHat t1_j9loi32 wrote

It should be all continuous functions, but I can't really think of any problems where this would limit the solution. The set of all continuous functions is a very big set!

As a side note, I think it's quite interesting that the theorem only covers continuous functions on a bounded (compact) domain, so it doesn't capture a periodic function like sin over the whole real line. So it's not quite all continuous functions, just continuous functions with bounded input.
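
A small sketch of what 'bounded input' means in practice (my own example, using an sklearn MLP): the network can approximate sin on the interval it was trained on, but nothing forces it to stay periodic far outside that interval.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train on sin(x) over a bounded interval [-pi, pi] only.
rng = np.random.default_rng(0)
x_train = rng.uniform(-np.pi, np.pi, size=2000).reshape(-1, 1)
y_train = np.sin(x_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=3000,
                     random_state=0).fit(x_train, y_train)

# Inside the training interval the approximation is typically good...
x_in = np.linspace(-np.pi, np.pi, 500).reshape(-1, 1)
err_in = np.max(np.abs(model.predict(x_in) - np.sin(x_in).ravel()))

# ...but far outside it, nothing constrains the network to stay periodic.
x_out = np.linspace(5 * np.pi, 7 * np.pi, 500).reshape(-1, 1)
err_out = np.max(np.abs(model.predict(x_out) - np.sin(x_out).ravel()))

print(f"max error on [-pi, pi]:    {err_in:.3f}")
print(f"max error on [5*pi, 7*pi]: {err_out:.3f}")
```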

4

VirtualHat t1_j9lmkg1 wrote

In my experience, DNNs only really help with data that has spatial or temporal structure (audio, video, images, etc.). I once had a large (~10M datapoints) tabular dataset and found that simply taking a random 2K subset and fitting an SVM gave the best results. I think this is usually the case for tabular data, but people still want DNNs for some reason. If it were a vision problem then, of course, it would be the other way around.
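
A minimal sketch of that subsample-and-SVM approach (the original dataset isn't public, so this uses a synthetic stand-in from sklearn):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for a large tabular dataset.
X, y = make_classification(n_samples=200_000, n_features=30,
                           n_informative=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Kernel SVMs scale poorly with n, so fit on a random ~2K subset.
rng = np.random.default_rng(0)
idx = rng.choice(len(X_train), size=2000, replace=False)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm.fit(X_train[idx], y_train[idx])
print(f"Test accuracy (SVM on 2K subset): {svm.score(X_test, y_test):.3f}")
```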

3

VirtualHat t1_j9j8uvr wrote

For example, in the IRIS dataset the class label is not a linear combination of the inputs. Therefore, if your model class is all linear models, you won't find the optimal solution, or in this case, even a good one.

If you extend the model class to include non-linear functions, then your hypothesis space at least contains a good solution, but finding it might be a bit more tricky.
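
A quick sketch of that comparison on Iris (exact numbers will vary, and the gap is modest; the point is just that the non-linear model class contains decision boundaries the linear one can't express):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Linear decision boundaries vs. a non-linear (RBF-kernel) model class.
models = [
    ("linear (logistic regression)", LogisticRegression(max_iter=1000)),
    ("non-linear (RBF SVM)", SVC(kernel="rbf")),
]

for name, model in models:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```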

15

VirtualHat t1_j9j8805 wrote

Linear models assume the solution is of the form y = ax + b. If the true relationship is not of this form, then even the best linear model is likely to be a poor one.

I think Emma Brunskill's notes explain this quite well. Essentially, the model will underfit because it is too simple. I am assuming here that a large dataset implies a more complex, non-linear relationship, but that is generally the case.
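
A tiny underfitting example (my own, not from the notes): fit a straight line and a cubic-feature model to data drawn from a curve, and note that even the straight line's training error stays high.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Data generated from a clearly non-linear relationship.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=500).reshape(-1, 1)
y = x.ravel() ** 3 - 2 * x.ravel() + rng.normal(scale=0.5, size=500)

line = LinearRegression().fit(x, y)
cubic = make_pipeline(PolynomialFeatures(degree=3), LinearRegression()).fit(x, y)

# The straight line underfits: even its *training* error stays high,
# because no y = ax + b can follow the curve.
for name, model in [("y = ax + b", line), ("cubic features", cubic)]:
    mse = np.mean((model.predict(x) - y) ** 2)
    print(f"{name}: training MSE = {mse:.3f}")
```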

10

VirtualHat t1_j6ckblf wrote

I was thinking next frame prediction, perhaps conditioned on the text description or maybe a transcript. The idea is you could then use the model to generate a video from a text prompt.

I suspect this is far too difficult to achieve with current algorithms. It's just interesting that the training data is all there, and would be many, many orders of magnitude larger than GPT-3's training set.

2

VirtualHat t1_j45faub wrote

Genetic algorithms are a type of evolutionary algorithm, which are themselves a part of AI. Have a look at the wiki page.

I think I can see your point, though. The term AI is used quite differently in research than in popular usage. We sometimes joke that the cultural definition of AI is "everything that can't yet be done with a computer" :)

This is a bit of a running joke in the field. Chess was AI until computers got better at it than us; then it wasn't. Asking a computer random questions and getting an answer, Star Trek style, was AI until Google; then it was just 'searching the internet'. The list goes on...

9

VirtualHat t1_j45em2b wrote

Yes, true! Most models will eventually saturate and perhaps even become worse. I guess it's our job then to just make the algorithms better :). A great example of this is the new Large Language Models (LLMs), which are trained on billions if not trillions of tokens and still keep getting better :)

1

VirtualHat t1_j45dklv wrote

I think Russell and Norvig is a good place to start if you want to read more. The AI definition is taken from their textbook, which is one of the most cited references I've ever seen. I do agree, however, that the first definition has a problem, namely with what 'intelligently' means.

The second definition is just the textbook definition of ML. Hard to argue with that one. It's taken from Tom Mitchell. Formally: “A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.” (Machine Learning, Tom Mitchell, McGraw Hill, 1997).
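
A concrete (toy) way to see the E/T/P framing, using sklearn's digits dataset as a stand-in: T is digit classification, P is held-out accuracy, and E is the training set, which grows in steps.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# T = classifying digits, P = accuracy on held-out data,
# E = the labelled examples the model is trained on.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for n in [50, 200, 800, len(X_train)]:
    model = LogisticRegression(max_iter=2000).fit(X_train[:n], y_train[:n])
    acc = model.score(X_test, y_test)
    print(f"experience E = {n:4d} examples -> performance P = {acc:.3f}")
```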

I'd be curious to know what you think a good definition for AI would be. This is an actively debated topic, and so far no one really has a great definition (that I know of).

7

VirtualHat t1_j456msu wrote

Definitions shift a bit, and people disagree, but this is what I stick to...

AI: Any system that responds 'intelligently' to its environment. A thermostat is, therefore, AI.

ML: A system that gets better at a task with more data.

Therefore ML is a subset of AI, one specific way of achieving the goal.

16