
dasnihil t1_j5tvy2m wrote

futurology is the worst subreddit for factual information.

gary marcus' objections have nothing to do with world models, but with the fact that both deep learning and LLMs have nothing to do with intelligence the way we see it in biological species, i.e. they lack the ability to generalize. these systems are fundamentally based on optimization via gradient learning, and in my view that's the opposite route to take when we're trying to engineer general intelligence.
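
to be concrete about what i mean by gradient learning, here's a minimal toy sketch (a made-up example, not anyone's real training code): the entire paradigm boils down to nudging parameters downhill on a loss surface.

```python
# toy gradient descent: fit y = w*x to data by nudging w downhill on the loss.
# this loop is the core thing all of deep learning scales up.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # true relationship is y = 2x

w = 0.0    # the parameter we're optimizing
lr = 0.01  # learning rate

for step in range(200):
    # mean squared error loss; grad is its derivative w.r.t. w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # the "learning": follow the gradient downhill

print(w)  # converges to ~2.0; nothing here "understands" the relationship
```

point being: it's curve fitting all the way down, and my skepticism is that scaling this loop up doesn't get you the kind of generalization biology shows.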

2

EOE97 t1_j5vmhro wrote

But there's the possibility we could build specialised, top-class models and in the future keep making them more and more multimodal and general by adding other models on top of them.

Maybe that's another way to AGI: narrow AI models/agents strung together such that the sum is greater than the parts.
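
Roughly what I have in mind, as a hypothetical sketch (all the model names and the routing scheme here are invented for illustration, not from any real system):

```python
# hypothetical sketch of "narrow models strung together": a router picks a
# specialised model per task type and the glue code combines the results.
# every name below is invented for illustration.

from typing import Callable, Dict

def vision_model(inp: str) -> str:
    return f"[vision] objects detected in {inp}"

def language_model(inp: str) -> str:
    return f"[language] summary of {inp}"

def planner_model(inp: str) -> str:
    return f"[planner] steps to achieve {inp}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "image": vision_model,
    "text": language_model,
    "goal": planner_model,
}

def route(task_type: str, payload: str) -> str:
    # dispatch to the narrow model matching the task modality
    model = SPECIALISTS.get(task_type)
    if model is None:
        raise ValueError(f"no specialist for task type: {task_type}")
    return model(payload)

if __name__ == "__main__":
    print(route("image", "a photo of a kitchen"))
    print(route("goal", "make coffee"))
```

The "generality" would live in the glue and in how the specialists get combined, not inside any single narrow model.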

10

dasnihil t1_j5vnkn4 wrote

all my engineering intuition bets against that. but i do get the idea, and i also have a good intuition about what kind of intelligence this approach will give rise to, and i'm okay with that. nothing wrong with scaled-up LLMs and reinforcement learning. all innovative algorithms are welcome. engineers will keep at it while fancy things distract others.

1

botfiddler t1_j5xxl9l wrote

Yeah, these language models might be one building block, but their output will, for example, need to be parsed and related to world models and specific knowledge graphs. Also, people have an individual memory and many other elements to them.
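
As a toy sketch of what that parsing step could look like (the output format, the facts and the graph here are all made up for illustration):

```python
# toy sketch: parse "subject | relation | object" lines from a language
# model's free-text output and validate them against a small, hand-built
# knowledge graph before trusting them. everything here is invented.

KNOWLEDGE_GRAPH = {
    ("water", "boils_at", "100C"),
    ("paris", "capital_of", "france"),
}

def parse_triples(model_output: str):
    # yield (subject, relation, object) tuples from pipe-separated lines
    for line in model_output.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            yield tuple(parts)

model_output = """
water | boils_at | 100C
paris | capital_of | germany
"""

for triple in parse_triples(model_output):
    status = "consistent" if triple in KNOWLEDGE_GRAPH else "contradicts graph"
    print(triple, "->", status)
```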

1

beezlebub33 t1_j5yqbuv wrote

>gary marcus' objections have nothing to do with world models,

I think they do. See https://garymarcus.substack.com/p/how-come-gpt-can-seem-so-brilliant. GPT and other LLMs are not grounded in the real world, so they cannot form an accurate model of it; they only get secondhand information (from human text). This causes them to make mistakes about relationships; they don't 'master abstract relationships'. I know he doesn't use the term there, but that's what he's getting at.

Also, at https://garymarcus.substack.com/p/how-new-are-yann-lecuns-new-ideas he says:

>A large part of LeCun’s new manifesto is a well-motivated call for incorporating a “configurable predictive world model” into deep learning. I’ve been calling for that for a little while....

The essay isn't primarily about his thoughts on world models, but Marcus, for better or worse, thinks they are important.

3

dasnihil t1_j5z5ijq wrote

disclaimer: idk much about gary marcus; i only follow a few people closely in the field, like joscha bach, and i'm sure he wouldn't say or worry about such things.

if you give a generally intelligent neural network 3 hands, it will figure out how to make use of them, or of no hands at all; it doesn't matter. those trivial things are not what to worry about; the problem at hand is different.

0

GlobusGlobus t1_j5wd3ux wrote

Some of Marcus' comments are so strange because he always thinks about AGI and seems to assume that other people think that way too. His critique of ChatGPT is that he doesn't think it is a fast way to reach AGI; he basically says we should scrap GPT and do other things. I agree that GPT is not a steep stepping stone towards AGI; I don't think GPT has much to do with AGI at all. But that is not the point! GPT-3 is a fantastic tool made to solve lots of things. Even if it never has anything to do with AGI, it is still worth an insane amount of money and will be extremely beneficial.

For me, GPT might be more important than AGI. Every time Marcus speaks, he just assumes that everyone's goal is AGI. It is very strange.

1

dasnihil t1_j5ybc3o wrote

if gpt is more important to you, that's okay. everyone has a mission and it doesn't have to be the same one. there are physicists still going at it without caring much about gpt or agi. who cares man, we have a limited life and we'll all be dead sooner or later. relax.

1

GlobusGlobus t1_j5yct43 wrote

I am not convinced either way, but it is a strange, and clearly false, assumption that all ML has AGI as its goal. Most of the time people just want to solve a problem.

1

dasnihil t1_j5yd0ai wrote

that's fine, and it's a great tool like most tools humans have invented. i'd even say NNs and gradient descent are the greatest idea so far. so what, we must keep going while society makes use of inventions along the way.

1