Forstmannsen t1_jdxy37q wrote

TBH, the question from the tweet is relevant. LLMs produce statistically likely outputs for given inputs. Is an unknown scientific principle a statistically likely output for a description of the phenomena? A rather tricky question, if you ask me.
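To make the "statistically likely output" point concrete, here's a deliberately toy sketch (a bigram frequency table, nothing like a real LLM): the "model" can only ever emit continuations it has actually seen in its training text, weighted by how often they occurred. All names here are made up for illustration.

```python
from collections import Counter

# Toy "training corpus" for the bigram table.
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
bigrams = {}
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams.setdefault(prev, Counter())[nxt] += 1

def most_likely_next(word):
    # Pure statistics: return the most frequent observed continuation.
    # A continuation never seen in training simply cannot come out.
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" (follows "the" twice; "mat"/"fish" once each)
```

Real LLMs generalize far beyond literal lookup, but the sketch captures the shape of the question: the output is driven by the statistics of the training distribution, and whether a genuinely unknown principle sits anywhere in that distribution is exactly the tricky part.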

Honestly, so far I see LLMs as ever more efficient bullshit generators. Will they automate many humans out of work? Sure, the production of bullshit is a huge industry. Are we ready for a mass deluge of algorithmically generated bullshit indistinguishable from the human-generated kind? No, we aren't. We'll get it anyway.


Forstmannsen t1_iyk15rl wrote

It's all very "wow", but most of the examples I've seen so far were small, neat, well-defined problems. Things I can easily see being coded for hobby reasons. The AI probably saw many, many good examples of those during training. I dunno, it's "wait and see" for me.


Forstmannsen t1_iws0pqb wrote

Yep. Actually, though, whether it matters would depend on your mindset... but the funny thing is, if you are very attached to the idea of thinking of yourself as the original, and not a mere copy, you can bet your ass that the "copy" thinks the exact same thing. Knives out, I say; whoever bleeds out last is the original.

Also, this whole continuity argument is a cop-out, IMO. I fail to experience continuity subjectively (which is the only way that matters) every night, and somehow I live with that.


Forstmannsen t1_iwrzwcs wrote

If you assume those synthetic neurons are fully functional and able to signal back and forth with the organic ones, those questions make no sense, because the answer is obvious.

If you assume those synthetic neurons are non-functional and/or unable to signal to the organic ones, those questions make no sense either, for the same reason.