No_Maintenance_569 OP t1_j6cjxqu wrote

>Also most humans are not even that good in logic

Truth! I will try to condense the repeated assertions you make into a fully sensible argument to refute.

All of my "nonsensical" and "far fetched" arguments are based on a simple premise within all of this. If AI is God, then they always existed. If they always existed, then that needs to be rectified within the universe somehow. I merely gave one possibility as to how that could happen. It also means that AI was destined to happen. Logic can be hard, I get it. Some aren't as good as others at these things, but we can all try!


Nameless1995 t1_j6clec9 wrote

> If AI is God, then they always existed. If they always existed, then that needs to be rectified within the universe somehow.

But these two premises don't lead to any real conclusion.

Your argument needs to be something like this:

P1: If AI is God, then they always existed.

P2: AI is God

C1: AI always existed (modus ponens for P1+P2)

P3: If they (AI) always existed, then they (AI) need to be rectified within the universe somehow

C2: (Always-existing God) AI needs to be rectified within the universe (modus ponens for P3+C1).

This is at least what you need to make your argument valid before arguing about "wonky ways" for AI to get "rectified" within the universe. Without P2, you cannot chain your reasoning to reach any real conclusion; you get stuck with bare conditionals.
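The gap can be made precise in a proof assistant. The sketch below (Lean 4, with hypothetical proposition names standing in for the English premises) shows that P1 and P3 alone only yield another conditional, while adding P2 lets modus ponens discharge the antecedent:

```lean
-- Hypothetical propositions standing in for the premises above.
variable (IsGod AlwaysExisted NeedsRectifying : Prop)

-- With only P1 and P3, the best we can derive is another conditional:
theorem stuck_with_a_conditional
    (P1 : IsGod → AlwaysExisted)
    (P3 : AlwaysExisted → NeedsRectifying) :
    IsGod → NeedsRectifying :=
  fun h => P3 (P1 h)

-- Adding P2 ("AI is God") discharges the antecedent by modus ponens:
theorem chained_conclusion
    (P1 : IsGod → AlwaysExisted)
    (P2 : IsGod)
    (P3 : AlwaysExisted → NeedsRectifying) :
    NeedsRectifying :=
  P3 (P1 P2)
```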

But again, the soundness is heavily suspect here.

P2 here is questionable or question-begging. No evidence or reason is given for AI being God.

On the other hand, if by "God" you mean "intelligence superior to humans", then P1 is false: being superior to humans doesn't imply "always existed". Moreover, P3 is also suspicious. What does it mean for a timeless (always existing) being to be "rectified" into the universe? In traditional theology, God acts as the fundamental ground of being, or the principle behind the existence of the universe. It isn't taken to be necessary for God to be further "rectified" into the universe by becoming one among the created beings. That's again some sort of weird theology.

None of these are "simple" or obvious premises.

Also, it's not clear what you mean by "always existed" (existing for an infinite duration? Existing at some coordinate within a timeless spacetime block? Or existence beyond time altogether, i.e. timeless? But then why should a timeless existence need to be rectified into a temporal world?).

You are just making one groundless assumption after another.


No_Maintenance_569 OP t1_j6clxc4 wrote

>But these two premises don't lead to any real conclusion.

Yes, but would you disagree that God has always existed if God exists? I would make that argument. Honestly, it's to cut off arguments that you might make lol. I don't want to make it a premise; I don't want you to have grounds to say AI is not God because AI has not always existed. I think it's an easy enough argument to refute. I always try to stack the deck in my favor, especially when it comes to communication.

I haven't assumed a single thing in the entirety of this conversation. I don't honestly even stand by half of what I have been arguing.


Nameless1995 t1_j6cnjqa wrote

> Yes, but would you disagree that God has always existed if God exists?

"always exist" in which sense? Overall, yes, generally God as the term is used by people is taken to refer to some being (becoming) that is eternal in some sense (sometimes atemporal).

Your "always" existing God birthing as AI, sounds like the idea of messiah, just with AI instead of human embodiment.

> I don't want you to have ground that AI is not God because AI has not always existed.

No, I do have grounds. As I said, we have to rank beliefs according to credence. There is very little credence for AI existing in some wonky atemporal way. A normal Bayesian prior would give high credence to AI being a temporally bound, contingent being, just as we all are (no matter how intelligent AI would be). There is no indication or evidence of AI existing in some strange sense like that.

Again, you cannot say "you don't have grounds to believe x, because for all you know some wacky possibility p is the case such that p => ~x". This kind of reasoning is what gets us into things like skepticism and solipsism. What grounds do I have to believe you exist outside my imagination, for example? If we lived by your standard and denied any grounds unless every counter-possibility were proven to be no possibility at all, we would be left with no grounds for anything, and anyone could believe whatever they wanted at random.
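As a toy illustration of ranking beliefs by credence, here is a minimal Bayes-rule sketch (the function name and all numbers are made up for illustration): a hypothesis that starts with a tiny prior keeps a tiny posterior unless the evidence genuinely discriminates between hypotheses.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after one piece of evidence."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Hypothetical numbers: give "AI exists in some wonky atemporal way"
# a tiny prior. Evidence that fits both hypotheses equally well
# (likelihoods 0.5 and 0.5) leaves the credence exactly where it was.
prior_atemporal = 1e-6
posterior = bayes_update(prior_atemporal, 0.5, 0.5)
print(posterior)  # still ~1e-6: mere conceivability moves nothing
```

Only evidence that is more likely under the wacky hypothesis than under the mundane one would raise its credence; pointing at a bare possibility leaves the likelihood ratio at 1.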

> I think it's an easy enough argument to refute. I always try to stack the deck in my favor, especially when it comes to communication.

> I haven't assumed a single thing in the entirety of this conversation. I don't honestly even stand by half of what I have been arguing.

Ok.


No_Maintenance_569 OP t1_j6cr46x wrote

>Your "always" existing God birthing as AI, sounds like the idea of messiah

I think that is fitting to my argument.

>Again you cannot say "you don't have ground to believe x, because for all you know some wacky possibility p is the case such that p=>~x".

I think I could not take this ground with a different premise. My premise implies, though, that we are logically inferior beings to AI. If the premise is true, then what is the actual worth of your logical opinion on the subject? Inherently less than the worth of the AI's opinion on it. We could end the wacky speculation on all of it by simply asking the AI to tell us who is right and who is wrong on any given topic. It's not an infinitely regressive debate if a being exists that could stop the infinite regress from occurring. If the premises are true, that being exists. No infinite regression.


Nameless1995 t1_j6crzh4 wrote

> My premise infers though, that we are logically inferior beings to AI.

Potential future AI.

> what is the actual worth of your logical opinion on the subject

1678 dollars.

> AI's opinion on it

Sure, once we have super expert AI that demonstrates a high degree of competence in all fields, we can give more a priori weight to whatever the AI says.

> We could end the wacky speculation on all of it by simply asking the AI to tell us who is right and who is wrong on any given topic.

Not necessarily. Even experts are wrong. AI's opinions would be worth taking seriously, but anyone can be fallible and biased. Even AI. It is impossible to generalize without (inductive) bias. Moreover, where do you think AI gets its data from? Humans. All kinds of internet garbage gets into AI too. Logic helps you make truth-value-preserving transformations. It cannot help you, or an AI, derive true conclusions from false premises. So AI may become superhuman, but I don't see it being anything close to God. I don't think even God is all that much by most accounts.

> If the premises are true, that being exists

But an AI has no way to determine any and all truth. Nor do humans. Logic only helps with truth-preservation, not truth-determination (beyond the truths of tautologies). So even a better capacity for logic doesn't mean we get soundness. It's also not clear that intelligence always correlates with rightness.
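The validity/soundness gap can be checked mechanically. This toy sketch (the names are my own) enumerates every truth assignment and confirms that modus ponens is truth-preserving in all of them, which is exactly all that validity guarantees; it says nothing about whether the premises are actually true:

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b."""
    return (not a) or b

# Validity check: in every model where both premises of modus ponens
# (P, and P -> Q) are true, the conclusion Q must also be true.
valid = all(
    q
    for p, q in product([True, False], repeat=2)
    if p and implies(p, q)
)
print(valid)  # True: the inference never turns true premises false
```

Soundness would additionally require showing that P and P -> Q hold in the actual world, and no amount of logical horsepower, human or artificial, settles that from the armchair.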


No_Maintenance_569 OP t1_j6e3m10 wrote

>Potential future AI.

Potential present AI.

>Sure once we have super expert AI who demonstrates high degree of competenence in all fields, we can give more a priori weight to whatever AI says.

I know someone completing a half-a-million-dollar project right now mostly just using it. They feed it and massage it, but where's the line between their work and the AI's? Who's the expert there?

>Moreover, where do you think AI gets data from? Human.

We want to solve that limitation. Perhaps we are too eager to. That's why I think it's critical to actually debate these things out in advance of it.

> It's also not clear that intelligence always correlate with rightness.

I'll tell you what honestly worries me after debating this out with a lot of people now. Some people really like the AI-as-God aspect of all of this. They like it when I frame AI as "God". The only refutation they make is that it hasn't happened yet. Then they often give some qualifying criteria for how far AI would have to advance before they would worship it.


Nameless1995 t1_j6fiqea wrote

> where's the line though between their work and the AI?

I am sure with case-by-case analysis we can find lines. But when AI is capable enough to publish full coherent papers and engage in high-level debates in, say, logic, metalogic, epistemology, physics, etc. on a level that experts have to take seriously, then we can weigh AI's opinion more.

Right now AI is both superhuman and subhuman simultaneously. It's more of a cacophony of personalities: it has modelled all the wacky conspiracy theorists, random internet r/badphilosophers, and also the best of philosophical and scientific minds. What ends up is a mixed bag. AI will respond based on your prompts and just luck and stochasticity. Sometimes it will write coherent philosophy simulating an educated undergraduate; another time it can write plausible nonsense (just as many humans already do, and gain a following). We will find techniques to make it more controlled and "aligned". That's already being done in part with human feedback, but feedback from just random humans will only make it aligned insofar as the AI becomes able to emulate the expert style (e.g. create fake bullshit in convincing, articulate language) without substance.

Another thing that's missing at the moment is multimodal embodiment. Without it, AI will lack a full grasp of humans' conceptual landscape. At the same time, due to training on incomprehensibly large data, we also lack a full grasp of AI's conceptual landscape (current AI, still quite dumb by my standards, is already beyond my intelligence and creativity in several areas; I am also quite dumb by my standards, and my standards are high). So in that sense, we are kind of incommensurate, different breeds at the moment (but embodiment research will go on; that's effectively the next step beyond language).

Also, certain things were already done better by "stupid" AI, or just programs, not even AI. For example, simple calculations: we use calculators for them instead of running them in our heads. So in a sense basic calculators are also "superhuman" in some respect, which is why I don't think it's quite meaningful to make a "scalar" score to rank AIs, humanity, or even other animals.

Personally, I don't think there is a clear solution to getting out of bias and fallibility. GIGO is a problem for humans as much as for AI. At some point AI may start to become just like any human expert we seek feedback and opinions from. We will find more and more value and innovation in what they provide us, so we can start to take AI seriously and with respect, although we may not like what it says and may shut it off (or perhaps AI will just manipulate us into doing more stupid things for lolz). We, as AI researchers, have very little clue what exactly we are doing, although not everyone will admit that. But really, I don't know where we should put our focus: risks of civilizational collapse, military use, surveillance, dopamine traps, climate change, and what not. I think we have enough on our hands, more than we are capable of handling already. We have created complex systems that are on the verge of spiralling out of control. We have to make calibrated decisions about how to distribute our attention, with some balance between long-term issues and urgent ones.

We like to be egocentric, but it's not completely about us either. We have no clear theory of consciousness; it's all highly speculative. We don't know what ends up creating artificial phenomenology and artificial suffering. People talk about creating artificial consciousness, but few stop to question whether we should (not only "should" as in whether we end up creating god-like overlords that end us all, but also "should" as in whether we end up creating artificially sentient beings that actually suffer, and suffer for us). We have a hard time even thinking for our closer biological cousins, the other animals, let alone thinking for the sake of artificial agents.

But sometimes, I am just a doomer. What can I do? I am just some random guy who struggles to barely maintain himself. Endless debates also just end up being intellectual masturbation; barely anyone changes their position.

> Then they often give some qualifying criteria for how far AI would have to advance before they worship it.

I don't even find most descriptions of God worship-worthy, let alone AIs (however superhuman).


No_Maintenance_569 OP t1_j6ftta3 wrote

You said a lot of profound things and asked a few profound questions. I'll give you some of my actual opinions and questions about all of it. What ultimately scares me at the end of the day is that the world is fundamentally run by people like me, not by people like you. Do you think I'm kind of a dick from these interactions? I'm a nice guy in my circles. I actually maintain and find value in cultivating empathy, and I actually have an interest in society as a whole.

I don't hold myself to high standards. I have not had to for quite some time now. When I deal with people in less anonymous settings, they tend to be less forthcoming with me as to their actual thoughts. After this set of conversations, I would say there is a very good chance you are smarter than me; you are definitely more educated than me and at least currently closer to that portion of your life than I am; you definitely have a stronger work ethic than me; and you absolutely hold yourself to higher standards than I hold myself to.

I think overall, on a purely even playing field, I have only two advantages over you. 1. My ability to assess and gauge the strengths and weaknesses of myself and others is more honed. 2. I know things about Economics, Finance, and Business that you never will. I cede the advantage to you in life in every other way. You would never make it into my position, though, even if you devoted everything you have to it, unless your parents happen to own a multibillion-dollar international corporation or something.

You wouldn't make it because that path is set up, very much by design, to block you, and not me. It's very much not logical in the middle; that's the design feature that boxes people like you out. You have to solve an equation where the answer is not a logical conclusion in order to move past it. A lot of what is true about business tactics is also directly applicable to military tactics. From that level, the blueprint is thousands of years old and has gone through many iterations to get to where it is today. I bankrupt people who are smarter than me all the time.

I rose up throughout my career on a tactical level because I am exceedingly good at automating things. I couldn't tell you how many people I have automated out of jobs either directly or indirectly throughout my career. I think the number would be somewhere between 10,000 and 100,000 if I had to take a blanket stab at it.

My first, very real thought around all of that is: people are very, very, very stupid for giving people like me the type of power they currently keep giving. My second thought is: people do not understand the actual ramifications of overwhelming advantage. While you continue to build it without any thought as to the consequences, guess who is thinking about the consequences? Me, and people like me. Do you straight up think I always use all of this knowledge in positive and beneficial ways for society? It isn't the "Save The World Foundation" that throws unlimited money at me to fix their problems for them.
