
Laafheid t1_j0zi3ca wrote

I wouldn't be as worried if I had solutions for this, that's kind of the issue.

It seems difficult to me because, regardless of whether the thing is fake or real, truth-finding depends on records being available, but precisely those records become easier to manufacture. So even if alibi material is presented, it's not clear whether that is real either. The only solution that would sort of solve this seems to be surveillance by default, yet that brings with it a slew of other problems.

Do you have ideas?


Laafheid t1_j0yaqrx wrote

It might be way out there, but the whole situation makes me think of a piece by the philosopher Slavoj Zizek. The piece is about a link between belief in the existence of God, or some other transcendent Big Idea, and limitations.

If one replaces the Big Idea with the concept that "reality is and should be reality", then it stands to reason that either there will be more explicit censorship, and/or more importance will be placed on reputation/status/interpersonal relationships and, besides those two, on other forms of verifiability.

On the flipside, if one looks at what the majority of people actually create with it, you get things which are mostly (")productive(").

In a sense it is similar to the invention of the printing press: with it, people could publish anything relatively rapidly, but in the end the reputation of good publishers is what covers the majority of the market, and although there are still people who self-publish, how far their reach extends depends on credentials (in a broad sense: reputation, status, relationships & accomplishments).

What I am more worried about than fake X is the opposite consequence. Because it becomes easier to create fake X, it also becomes more "reasonable" to claim that real X is actually fake. Since deepfake detection is practically an arms race with no end in sight, a solution other than AI is needed here, and the consequences depend a lot on what that solution looks like.


Laafheid t1_iruzzup wrote

By framing your question like this you are essentially asking "are bricks and cement a modern house?", but because "Artificial Neural Network" sounds like it's on the same level of fanciness as sentience, you don't notice how ridiculous the question is.

It also makes you unable to think of the answer, namely: ANNs are the house, and for it to become a modern house it needs some extra components (or perhaps, to make the metaphor work better, ANNs are just one component among many).

Both "sentient" and "artificial neural network" are unhelpful concepts here: "sentient" in particular has become a term with an overloaded meaning, and as such it is not a useful category for this question.

With "sentient", do you mean:

  • Human-like? In which case: is just a brain without a body sentient? What about one missing some subset of input signals? Does the zombie-walk to the coffee machine after waking up count as sentient?
  • Able to respond to situations it perceives? Plants can release toxins through their system once their leaves are bitten or otherwise harmed.
  • Able to tell us their experience? Are less linguistically able people then less sentient?
  • Able to hold a conversation? What about those introverted friends who hardly ever contribute half a sentence when they're out of their comfort zone?

An ANN is nothing without data, training and an action space. Compare some ANN that classifies MNIST digits to ACT-1, or to GPT with a Python interpreter at its disposal.

The former is much more purely an ANN, whereas the latter two are given the programmatic equivalents/precursors of bodies, especially ACT-1. They are still relatively limited (with the domain limited to links on the pages themselves rather than direct URL queries, but with lots of room to expand, given the ubiquity of software) and prompt-driven (though I would say people underestimate how much they themselves are also prompt-driven, and with planning coming into the picture, following external commands becomes a smaller fraction of the set of behaviors).
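To make the contrast concrete, here is a minimal, self-contained sketch of what "purely an ANN" means: a tiny numpy MLP trained by gradient descent on fixed arrays, with no tools, actions, or environment beyond its inputs. (This is my own illustration; random toy data stands in for MNIST so the snippet needs no downloads, and all names are illustrative.)

```python
import numpy as np

# "Purely an ANN": a one-hidden-layer classifier whose whole world is a
# fixed array of inputs. No browsing, no interpreter, no action space.
# Toy random data stands in for MNIST here (assumption, to stay offline).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 64))            # 256 samples, 64 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # an easily learnable toy rule

W1 = rng.normal(scale=0.1, size=(64, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 2));  b2 = np.zeros(2)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0)                    # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)        # softmax probabilities

lr = 0.5
for _ in range(500):                                  # full-batch gradient descent
    h, p = forward(X)
    grad = p.copy()                                   # softmax cross-entropy gradient
    grad[np.arange(len(y)), y] -= 1
    grad /= len(y)
    dW2 = h.T @ grad; db2 = grad.sum(0)
    dh = grad @ W2.T; dh[h <= 0] = 0                  # backprop through ReLU
    dW1 = X.T @ dh;  db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

acc = (forward(X)[1].argmax(axis=1) == y).mean()      # training accuracy
```

Once training stops, the model is frozen: it maps arrays to labels and nothing else, which is exactly the gap the "body"-like tooling around ACT-1 or a GPT-plus-interpreter setup is meant to fill.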

I'd say the most serious lack w.r.t. sentience is response adjustment outside the training phase, although this seems more an engineering challenge than an ANN challenge (when do you accept that the environment is telling you you're incorrect and should adjust? Not always: sometimes it's a fluke, and sometimes the person telling you you're wrong is actually the one making the mistake, not to speak of malicious actors).
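That "when do you adjust" question can be sketched as a toy policy (entirely my own illustration, not something established): only accept a correction after repeated, consistent contradiction, so a single fluke or a one-off malicious report is not enough to flip a belief.

```python
# Hypothetical sketch of the engineering question above: a belief that
# only updates after `threshold` consecutive contradictions, discounting
# flukes and isolated bad-faith feedback. All names are illustrative.
class AdjustableBelief:
    def __init__(self, value, threshold=3):
        self.value = value
        self.threshold = threshold   # contradictions needed before adjusting
        self.contradictions = 0

    def feedback(self, observed):
        if observed == self.value:
            self.contradictions = 0  # agreement resets the counter
        else:
            self.contradictions += 1
            if self.contradictions >= self.threshold:
                self.value = observed        # accept that we were wrong
                self.contradictions = 0
        return self.value

b = AdjustableBelief("blue")
b.feedback("red")        # one fluke: belief stays "blue"
b.feedback("blue")       # agreement resets the counter
for _ in range(3):
    b.feedback("red")    # consistent contradiction: belief flips to "red"
```

The hard part the parenthesis points at is, of course, choosing the threshold and deciding whose contradictions to count at all; this sketch just makes the trade-off explicit.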

There is also the proverb that "insanity is doing the same thing twice and expecting different results", yet many people do not adjust their actions. As such, I'm not sure this should be a requirement for sentience, unless you'd want to exclude people; response adjustment outside the training phase isn't something people in general are good at either.
