MonsieurBlunt
MonsieurBlunt t1_j7p4o0j wrote
Neural networks were a successful idea.
MonsieurBlunt t1_j7eeg25 wrote
Yeah, looks like Meta is making him say this stuff.
I assumed he jerks off to ChatGPT responses when he's alone. I'm continuing to assume that, tbh.
MonsieurBlunt t1_j77hgn2 wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
They don't have desires, plans, or an understanding of the world, which is what people actually mean when they say these models are not sentient or conscious, because we don't really know what consciousness is, you see.
For example, if you ask Alan Turing, machines like this would already count as conscious under your conception.
MonsieurBlunt t1_j5dzfpn wrote
Reply to ChatGPT is not all you need [R] by EduCGM
"This work consists on an attempt to describe in a concise way the main models are sectors that are affected by generative AI"
MonsieurBlunt t1_j9glzsp wrote
Reply to [D] Bottleneck Layers: What's your intuition? by _Arsenie_Boca_
Accommodating as much space for information as you can is not really a good idea: it is prone to overfitting and also harder to learn. You can think of the bottleneck as a form of regularisation. You are forcing the model to keep the useful information and discard the rest; put differently, you leave less space in which it can encode the training data and overfit.
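To make that concrete, here is a minimal sketch (assuming PyTorch; the 784/256/32 layer sizes are just illustrative, not from any particular paper) of an autoencoder with a bottleneck layer. The narrow code forces the encoder to compress, which is exactly the regularising pressure described above.

```python
import torch
import torch.nn as nn

class BottleneckAutoencoder(nn.Module):
    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        # Encoder squeezes the input down to a narrow code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, bottleneck_dim),  # the bottleneck layer
        )
        # Decoder must reconstruct the input from that code alone,
        # so the bottleneck cannot simply store the training data.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 256),
            nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = BottleneckAutoencoder()
x = torch.randn(8, 784)  # dummy batch of flattened inputs
recon = model(x)
loss = nn.functional.mse_loss(recon, x)
```

If you widen bottleneck_dim, reconstruction loss usually falls faster, but the model has more room to memorise training examples instead of extracting structure, which is the trade-off the comment is pointing at.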