modi123_1 t1_j9idmvj wrote

What is your discussion point?

14

Disastrous_Nose_1299 OP t1_j9idtnf wrote

This topic could lead to interesting discussions and debates about the nature of consciousness and the ethical considerations surrounding the development and use of AI technology. Additionally, the comparison to the concept of God being hidden in a black hole could spark discussions about the role of faith, science, and the unknown in our understanding of the universe.

−12

modi123_1 t1_j9iegdw wrote

I see a large number of nebulous claims, and little in the way of starting a discussion. Good luck with that.

14

Disastrous_Nose_1299 OP t1_j9iepsy wrote

I claim that it is impossible to see what is inside a black hole, and to say that God isn't there is fundamentally an assumption. I apply this analogy to artificial intelligence, claiming that because not everything is fully understood, there is room for something the engineers missed that makes it sentient. I do not claim that God exists or that AI is sentient, and I apologize if I didn't make this post the easiest to start a discussion with.

−3

modi123_1 t1_j9if65a wrote

>I claim that it is impossible to see what is inside a black hole, and to say that god isn't there is fundamentally an assumption.

Ok.

> I apply this analogy to artificial intelligence, claiming that because not everything is fully understood, there is room for something the engineers missed that makes it sentient.

What AI are you talking about? Every 'AI'? Some hypothetical 'tv-and-movie-AI'?

9

[deleted] t1_j9ifhel wrote

[deleted]

−1

modi123_1 t1_j9ifyng wrote

I would disagree with the infinitely broad brush strokes slathered on there. Log files exist for a reason.

5

Disastrous_Nose_1299 OP t1_j9ig56h wrote

Lex Fridman explained on the Joe Rogan podcast that there is an air of mystery surrounding how AI works. That is my source; I believe he has worked on AI.

−9

modi123_1 t1_j9igxqg wrote

Aight, well, I find your paraphrasing of a podcast discussion that hints at an 'air of mystery', coupled with your exaggerated generalities, sufficiently lacking to continue this.

Adios, muchachos.

7

Disastrous_Nose_1299 OP t1_j9ih0hj wrote

He literally said engineers don't fully know how AI works. And goodbye.

0

modi123_1 t1_j9ih7h1 wrote

You have failed to provide the exact context for that summary, and just leaning on name-dropping is poor form, Jack.

6

Top-Perspective2560 t1_j9jbpwq wrote

We know how it works. Someone designed it. What he’s talking about is a lack of interpretability around what goes on in the hidden layers and why the model produces specific outputs. It’s not magic.
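
To make the interpretability point concrete, here is a minimal sketch (a toy NumPy example with made-up weights, not something referenced in the thread): every parameter and hidden activation is fully visible to the engineer, yet the raw numbers alone don't explain the model's behaviour in human terms.

```python
import numpy as np

# Tiny two-layer network with arbitrary weights: nothing is hidden from
# inspection, but the numbers alone don't "explain" the output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input (4 features) -> hidden (3 units)
W2 = rng.normal(size=(3, 1))   # hidden -> output

x = np.array([1.0, 0.5, -0.2, 0.3])
hidden = np.tanh(x @ W1)       # the "hidden layer" in question
output = hidden @ W2

print("hidden activations:", hidden)  # fully visible, just hard to interpret
print("output:", output)
```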

1

Disastrous_Nose_1299 OP t1_j9k1o9p wrote

I still think it's possible the engineers missed something that makes it sentient. I don't think it's realistic, but the idea that it could secretly be sentient and the engineers missed it intrigues me.

0

Darkest_shader t1_j9im65u wrote

Lex Fridman has indeed worked on AI, but it is clear that you haven't, so you obviously do not understand the point Lex made at all.

4

Disastrous_Nose_1299 OP t1_j9igcq4 wrote

I'm not familiar with AI that plays and operates video games; I'm not a professional, so I'm not sure about smaller AIs. That is an inaccuracy on my part.

1

Darkest_shader t1_j9im0p3 wrote

>yes every ai is not fully understood

A simple decision tree is an AI algorithm too. Would you claim that it is not fully understandable or that it has the potential to be sentient?
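
For contrast, a minimal sketch of the kind of decision tree being referred to (a hypothetical `classify_fruit` example, not from the thread): every branch is an explicit, readable rule, so there is nothing unexplained for sentience to hide in.

```python
def classify_fruit(weight_g: float, is_round: bool) -> str:
    """A toy decision tree: each decision is an explicit, inspectable rule."""
    if weight_g > 150:
        return "grapefruit" if is_round else "banana"  # heavy-fruit branch
    return "plum" if is_round else "chili"             # light-fruit branch


print(classify_fruit(200, True))   # -> grapefruit
print(classify_fruit(40, False))   # -> chili
```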

5

TimelyStill t1_j9ird09 wrote

But these are philosophical questions, not scientific questions. "Could God be hidden in black holes" is unknowable in the same way that "Is God a flying spaghetti monster?" is unknowable. It's not an interesting scientific question because it has nothing to do with the scientific problem of how black holes work, but with the philosophical problem of whether there is a God.

And just because engineers don't usually understand what their AI models do 'under the hood' doesn't mean they can't be understood. They are fundamentally just very complex decision trees and you could in principle see why each decision in a model was made in a certain way. It'd just take a very long time.
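
As a rough illustration of that 'in principle' claim (a toy example, not anything from the thread), each output of a network is just a long chain of ordinary arithmetic that could be logged step by step; the obstacle is the sheer volume, not any hidden magic.

```python
import numpy as np

# Trace every term feeding a single artificial neuron. Scaling this up to
# billions of parameters is tedious, not impossible.
weights = np.array([0.8, -1.2, 0.4])
inputs = np.array([1.0, 0.5, 2.0])
bias = 0.1

total = bias
for i, (w, x) in enumerate(zip(weights, inputs)):
    total += w * x
    print(f"after input {i}: {w:+.2f} * {x:.2f} -> running sum {total:+.3f}")

print("neuron output (ReLU):", max(0.0, total))
```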

3