Magikarpeles t1_javnfez wrote

It only took me one trip on dissociatives to realise how much my sensory experience is dependent on small changes in my brain chemistry. Kind of shattered the illusion of reality being this stable, objective thing. Everyone is different and it makes sense therefore that their subjective experience of reality is at least somewhat different to mine.

Even from a physics perspective we know that what we experience is at best an approximation of reality. Vision is basically just a high-resolution sensor for one narrow band of the electromagnetic spectrum. There's a lot missing.


Magikarpeles t1_j9xudeu wrote

How long before someone makes an AI that makes a website that sells ads and uses the money to buy cloud infrastructure to make more sites and sell more ads to buy more infrastructure?

Or easier: a 4chan AI that starts a cult with little incel minions doing its bidding?

I give it months.


Magikarpeles t1_j9xtz9x wrote

Stability AI "democratised" image generation by releasing the Stable Diffusion models and allowing open source platforms to use them. The open source solutions are arguably better than the corpo ones like DALL·E 2 now.

OpenAI do release older GPT models, but they are vastly less sophisticated than the current ones. Releasing the current models would "democratise" ChatGPT, but it would also kill their golden goose.


Magikarpeles t1_j6nfvau wrote

I just mean conceptually. Sometimes you show a child something and tell them the word for it, but a lot of the time the child just gets exposed to a bunch of language and figures out the relationships between words on their own. On a surface level that's similar to the supervised and self-supervised training paradigms we use for training AI models.
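To make the analogy concrete, here's a toy sketch (all data and names invented for illustration, not from any real training pipeline). The supervised case pairs each input with an explicit label, like pointing and naming; the self-supervised case derives its own training signal from raw text, e.g. next-word prediction:

```python
# "Guided" (supervised): each input comes with an explicit label,
# like showing a child a picture and saying the word.
supervised_examples = [
    ("picture_of_dog", "dog"),
    ("picture_of_cat", "cat"),
]

# "Unguided" (self-supervised): no labels at all — the training signal
# is hidden in the data itself, here as next-word prediction.
corpus = "the cat sat on the mat".split()
self_supervised_examples = [
    (corpus[:i], corpus[i]) for i in range(1, len(corpus))
]

print(self_supervised_examples[0])  # (['the'], 'cat')
```

Same surface shape in both cases, an (input, target) pair, but in the second one the targets come for free from the data, which is why models can be trained on enormous unlabelled corpora.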


Magikarpeles t1_j6n6fsg wrote

I think the hard problem is more about being unable to prove or disprove someone else's phenomenological experience of being conscious (at least as I understand it). I think that's quite relevant to the discussion about whether or not the AI is "conscious". Unlike humans and animals, the AI isn't constantly processing and thinking and feeling, only when it's tasked with something.

If consciousness is an emergent property then it's possible for the AI to be conscious in its own way while it's "thinking". But the point stands that it's not possible to access someone or something else's subjective experience, so we can only ever speculate.