
FoniksMunkee t1_je37buh wrote

I'm pretty sure they mentioned something like that in passing, didn't they? I know there's a section in the paper about how it fails at certain math and language problems because it can't plan ahead or make leaps of logic. And they considered these substantial problems with GPT-4, with no obvious fix.

4

JacksCompleteLackOf t1_je389eh wrote

Actually, I think you're right, and they did mention it. I just wish they had emphasized that aspect more than the 'sparks of general intelligence'. It's mostly a solid paper for what it is; they admit they don't know what the training data looks like. I just wish they had left that paragraph about the sparks out of it.

1

FoniksMunkee t1_je38yix wrote

Yes, I agree. The paper was fascinating, but a lot of people took away from it the idea that AGI is essentially here. When I read it, I saw a couple of issues that may be a speed bump in progress. They definitely underplayed what seems to be a difficult problem to solve within the current paradigm.

2

datalord t1_je4ep43 wrote

Leaps of logic, if rational, are not really leaps; we just don't perceive the invisible steps between them. A machine can make those jumps with sufficient instruction and/or autonomy. It's just constant iteration and refinement along a specific trajectory.

If irrational, they're much harder to reproduce; perhaps that's what creativity is, in some respects.

1