KPTN25

KPTN25 t1_j95kx5j wrote

Reply to comment by overactor in [D] Please stop by [deleted]

Because reproducing language is a very different problem from true thought or self-awareness.

LLMs are no more likely to become sentient than a linear regression or random forest model. Frankly, they're no more likely than a peanut butter sandwich to achieve sentience.

Is it possible that we've bungled our study of peanut butter sandwiches so badly that we've missed some incredible sentience-granting mechanism? I suppose, but the probability is so absurdly infinitesimal that it isn't worth entertaining in practice.

The black box argument is intellectually lazy. We have a better understanding of what is happening in LLMs and other models than most clickbaity headlines imply.

1

KPTN25 t1_j94a1y0 wrote

Reply to comment by Metacognitor in [D] Please stop by [deleted]

Nah. Negatives are a lot easier to prove than positives in this case. LLMs aren't able to produce sentience for the same reason a peanut butter sandwich can't produce sentience.

Just because I don't know how to achieve eternal youth doesn't mean I can't be quite confident that McDonald's isn't it.

3

KPTN25 t1_j91q5hn wrote

Reply to comment by Optimal-Asshole in [D] Please stop by [deleted]

Yeah, that quote is completely irrelevant.

The bottom line is that LLMs are, as a technical matter, incapable of producing sentience, regardless of 'intent'. Anyone claiming otherwise fundamentally misunderstands the models involved.

4

KPTN25 t1_ivjgw5l wrote

This is also where I'd start.

You can also set up Outlook rules to sort inbound requests into different folders or tag them by keyword (the routing logic is roughly what's sketched below). I'd still recommend doing that final step of copying in the appropriate response manually, though.
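A minimal Python sketch of the keyword-routing idea, just to make the logic concrete. Outlook's built-in rules UI does this without any code; the folder names and keywords here are made up for illustration.

```python
# Hypothetical keyword-to-folder rules; Outlook lets you define these in its rules UI.
RULES = {
    "invoice": "Billing",
    "refund": "Billing",
    "meeting": "Scheduling",
    "bug": "Support",
}

def route(subject: str) -> str:
    """Return the folder a message should be filed under, based on subject keywords."""
    lowered = subject.lower()
    for keyword, folder in RULES.items():
        if keyword in lowered:
            return folder
    return "Inbox"  # no keyword matched; leave it for manual triage

if __name__ == "__main__":
    print(route("Refund request for order #1234"))  # -> Billing
    print(route("Quick question"))                  # -> Inbox
```

The point is just that the sorting/tagging step is mechanical and safe to automate, while the final reply still benefits from a human choosing and pasting the appropriate response.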

2