Yomiel94 t1_jdyrrw5 wrote

> This is so wrong I will not bother with the rest of the claims, this author is unqualified

I find these comments pretty amusing. The author you’re referring to is François Chollet, an esteemed and widely published AI researcher whose code you’ve probably used if you’ve ever played around with ML (he created Keras and, as a Google employee, is a key contributor to TensorFlow).

So no, he’s not “unqualified,” and if you think he’s confused about a very basic area of human or machine cognition, you very likely don’t understand his claim, or are yourself confused.

Based on your response, you’re probably a little of both.

2

Yomiel94 t1_jdu38es wrote

It deceives and ultimately kills the protagonist without an ounce of regret. I would not call that optimistic.

IIRC the film was meant as feminist social commentary rather than a cautionary tale about AI, though lol.

3

Yomiel94 t1_jcbemtv wrote

>Or if I had all Google search results saved in a database I could access during the test!

You mean like your long-term memory? To be clear, GPT doesn’t have the raw training data available for reference at inference time. In a sense, it read that data during training, extracted the useful information, and is now using it.

If it’s answering totally novel reasoning questions, that’s a pretty clear indication that it’s gone beyond just modeling syntax and grammar.
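
To make the “no raw training data” point concrete, here’s a toy analogy in Python (an illustrative sketch I’m making up, not GPT’s actual mechanics): a model distills statistics out of its training text, and once training is done the text itself is gone; only the extracted parameters remain.

```python
# Toy illustration: "learn" bigram statistics from a corpus, then discard
# the corpus. Generation afterwards uses only the extracted parameters,
# not the raw text. (An analogy for the point above, not GPT's mechanics.)
from collections import defaultdict, Counter
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count word-to-word transitions.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

del corpus  # the raw training data is no longer available for reference

# "Inference": generate from the learned parameters alone.
word, output = "the", ["the"]
for _ in range(6):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choices(list(followers), weights=followers.values())[0]
    output.append(word)
print(" ".join(output))
```

Scale that up by many orders of magnitude and make it lossy, and it’s much closer to what GPT is doing than a database lookup is.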

1

Yomiel94 t1_jbcxfi8 wrote

>machines don't do what you intend, they do what they're made to do.

It seems like whether you use top-down machine-learning techniques to evolve a system toward some high-level spec, or bottom-up conventional programming to rigorously and explicitly define behavior, whatever is left unspecified (the ML case) or misspecified (the conventional case) can bite you in the ass lol… it’s just that ML lets you generate way more (potentially malignant) capability in the process.
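
As a concrete toy example of that first failure mode (a hypothetical setup invented purely for illustration, not any real system): suppose we intend “fast but careful” behavior but only write down “fast.” A dumb hill-climbing optimizer on the proxy happily drives the unspecified part anywhere.

```python
# Toy objective misspecification: we optimize a proxy ("go fast") while the
# thing we actually care about also depends on an unspecified variable
# ("care"). The optimizer never sees "care", so it drifts arbitrarily and
# the intended utility craters as capability (speed) grows.
import random

def intended_utility(speed, care):
    # What we actually want: speed is good, but recklessness at speed is very bad.
    return speed - (speed ** 2) * (1.0 - care)

def proxy_reward(speed, care):
    # What we wrote down: speed alone; "care" was left unspecified.
    return speed

def hill_climb(objective, steps=10_000):
    speed, care = 1.0, 1.0
    for _ in range(steps):
        new_speed = speed + random.uniform(-0.1, 0.1)
        new_care = min(1.0, max(0.0, care + random.uniform(-0.1, 0.1)))
        if objective(new_speed, new_care) > objective(speed, care):
            speed, care = new_speed, new_care
    return speed, care

speed, care = hill_climb(proxy_reward)
print(f"proxy optimum: speed={speed:.1f}, care={care:.2f}")
print(f"intended utility there: {intended_utility(speed, care):,.1f}")
```

The more optimization pressure you apply, the worse the divergence gets, which is the “way more capability” half of the point.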

There are also possible weird inner-alignment cases where a perfectly specified optimization process still produces a misaligned agent. It seems increasingly obvious that we can’t just treat ML as some kind of black magic past a certain capability threshold.

0

Yomiel94 t1_j4xzlr9 wrote

This seems like a stretch. GPT might be the most general form of artificial intelligence we’ve seen, but it’s still not an agent, and it’s not cognitively flexible enough to be truly general at a human level.

And just scaling up the existing model probably won’t get us there. Another major conceptual advance, one that gives it something like executive function and tiered memory, seems like a necessary precondition. Is there any indication at this point that such a breakthrough has been made?

19

Yomiel94 t1_j4xosdp wrote

> Simply put, the poor, uneducated, extremely religious minded, are the main drivers of the fear complex

Oh come on… Have you seen /r/technology recently? Have you read mainstream tech journalism? Have you watched science fiction? There is a very negative, very cynical view of technology that’s become mainstream in recent years, and it’s coming from the cultural elites.

2