KPTN25 t1_j92yfz4 wrote

Reply to comment by Metacognitor in [D] Please stop by [deleted]

None of the models or frameworks developed to date. None are even close.

3

the320x200 t1_j93a7sy wrote

Given our track record of mistreating animals and our fellow people, treating them as just objects, it's very likely when the day does come we will cross the line first and only realize it afterwards.

3

Metacognitor t1_j941yl1 wrote

My question was more rhetorical, as in, what would be capable of producing sentience? Because I don't believe anyone actually knows, which makes any definitive statements of the nature (like yours above) come across as presumptuous. Just my opinion.

1

KPTN25 t1_j94a1y0 wrote

Nah. Negatives are a lot easier to prove than positives in this case. LLMs aren't able to produce sentience for the same reason a peanut butter sandwich can't produce sentience.

Just because I don't positively know how to achieve eternal youth doesn't invalidate the fact that I'm quite confident it isn't McDonald's.

3

Metacognitor t1_j94ois4 wrote

That's a fair enough point; I can see where you're coming from. Although my perspective is that as the models become increasingly large, to the point of being almost entirely a "black box" from a dev perspective, maybe something resembling sentience could emerge spontaneously as a function of some type of self-referential or evaluative model within the primary one. It would obviously be a more limited form of sentience (not human-level), but perhaps.

0