Nameless1995

Nameless1995 t1_jdfsfe5 wrote

I don't personally think phenomenal consciousness is in principle required for any particular functional behavior at all - rather, phenomenal consciousness is tied to some causal profile which can be exploited in certain manners of implementing certain kinds of functional organizations (possibly manners more accessible to biological evolution - although I maintain neutrality regarding whether there are non-biological manners of realizing intelligent phenomenal consciousness). You can just have a system encode some variables that track preservation parameters and make an algorithm optimize/regulate them, as in the sketch below. It's not clear why it needs to have any phenomenal consciousness (like Nagel's "what it is like") for that. I think the onus would be on the other side. It could be that our physical reality is such that certain kinds of computational realizations end up creating certain kinds of phenomenal consciousness, but then that could be just an accidental feature of the actual world rather than some metaphysical necessity.
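
A minimal, purely hypothetical sketch of that point (the variable names and the control rule are mine, not anyone's theory of consciousness): a system tracks a "preservation" variable and regulates it, and nothing in the loop obviously calls for there being something it is like to be the system.

```python
# Purely illustrative: a regulator tracks a "preservation" variable
# (say, an energy level) and acts to keep it at a viable setpoint.
def regulate(energy: float, setpoint: float = 1.0, gain: float = 0.5) -> float:
    """Return a corrective action proportional to the deviation."""
    error = setpoint - energy
    return gain * error  # simple proportional control

energy = 0.2
for step in range(20):
    action = regulate(energy)
    energy += action  # the action restores the tracked variable

print(round(energy, 3))  # 1.0 -- converges to the setpoint
```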

6

Nameless1995 t1_jd9scqe wrote

There would be a period of internal discussion after the author-reviewer discussion period. So my guess would be (if there is no author-reviewer engagement beyond just the first rebuttal) that the AC (if they are willing to do their due diligence) will simply push the reviewers privately and ask what their take is on the rebuttal. If in that private discussion nothing really happens (for example, all reviewers just go MIA), then it might really be up to the meta-reviewer's personal judgment how they are going to take the strength of the rebuttals into account.

2

Nameless1995 t1_j98fvw0 wrote

> Here I should mention the best Existential Comic in years (almost on par with SMBC at its best), according to which an AGI would see randomness as more detrimental to freedom than determinism, because it hinders its ability to have control over its environment.

In RL, stochasticity through some level of (pseudo-)randomness can be useful for balancing the exploration-exploitation trade-off, as in the sketch below. However, I am not sure "true randomness" is particularly any more or less helpful than "pseudo-randomness" in most of those contexts (what is gained if we map out an unfolding of a pseudo-random process and change it to a world where the exact same sequence of actions unfolds from a true-random process?)
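
A toy sketch of the exploration point (hypothetical arm means, standard epsilon-greedy on a two-armed bandit): the agent's behavior depends only on the sequence of draws it receives, so a "truly random" source that happened to emit the same draws as this seeded PRNG would change nothing downstream.

```python
import random

# Epsilon-greedy on a toy 2-armed bandit (all numbers hypothetical).
rng = random.Random(42)   # a seeded PRNG; swapping in random.SystemRandom()
                          # changes nothing if the same draws come out
values = [0.0, 0.0]       # running value estimates per arm
counts = [0, 0]
true_means = [0.3, 0.7]
epsilon = 0.1             # exploration rate

for t in range(1000):
    if rng.random() < epsilon:
        arm = rng.randrange(2)                        # explore
    else:
        arm = max(range(2), key=lambda a: values[a])  # exploit
    reward = rng.gauss(true_means[arm], 0.1)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(max(range(2), key=lambda a: values[a]))  # 1, the better arm
```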

0

Nameless1995 t1_j98cadp wrote

> I suppose that I think that the compatibilist redefinition of the terms make everything less literal and more metaphorical

I think that again raises the same questions: what is supposed to be the original "literal" sense in the first place, and what would be the criteria for finding it?

> it is less in line with what I believe most people mean by the terms, "free will", "morally responsible", and "choose".

Could be. But I see that as an empirical claim that would require experiments, interventions, and surveys to determine. I am neutral about how that will turn out.

> Also, there's often a real difference in belief between us: I really don't think anyone is in any important sense "morally responsible". This means I support preventative justice but I don't support retributive justice.

Same.

7

Nameless1995 t1_j989f7q wrote

I personally have zero intuition about freedom, control, responsibility. I am more of an outsider who can play along with the "tune", and play "games" with the words, but I don't have any clear intelligible sense of them, beyond the rules of how some of the words get used in certain language games. Even then the rules are ill-defined and fuzzy in most contexts. I share very little intuition with most philosophers.

> First, notice that one of the main reasons anyone cares about free will is that it seems to be a requirement for moral responsibility. What you do can only be your fault, or conversely to your credit, if it’s under your control.

How do I notice it? How do you notice it? Have you taken an empirical survey? Some psychological experiment as to what anyone "truly" cares for? Have we found some cross-cultural and cross-temporal invariances (beyond WEIRD samples)?

Philosophers like to sneak in loaded statements like "this is common sense" and "this is what we care for" here and there. As Lance Bush says, philosophy is often psychology with a sample size of 1.

There are some approaches in experimental philosophy looking more closely into these questions, but a lot can depend on how the questions are framed, and the results seem somewhat mixed (people have both compatibilist and incompatibilist intuitions) from the last time I checked.

So I overall experience a tension here: the investigation into what we really care about (at a statistical level -- otherwise, what we care about in regard to "free will" is very unlikely to be invariant across individuals -- I had a discussion long ago with someone who really, really wanted true randomness for freedom), and into what should be called "free will", once properly constrained into well-defined problems, turns into questions of psychology, anthropology and such. I am not sure what is left for philosophy to do. Perhaps people should then use philosophical tools to create their own conceptual boundaries to track what they personally care about, and analyze whether such a thing is coherent and whether there is good warrant for believing in it. Philosophers can then simply "list" the different conceptions that reflective people (philosophers) have considered and objectively discuss what we gain and lose from each, instead of forcing one as uniquely "true" or consistent with what people in general care about (that's psychology). We can perhaps then have some voting process as to which conception to choose or prefer. Or we can discuss some clear evaluation criterion (e.g. from a conceptual engineering perspective).

1

Nameless1995 t1_j988o3b wrote

> And so he claims they are compatible, but to do so he redefines free will, then claims he hasn't and that was the definition of it we were working with all along. It just isn't convincing to me.

But what makes a "definition" primal (true, non-redefined, original)? Is there such a "definition" in the first place, and how do we determine it? Should we make surveys across cultures? Should we analyze how "freedom" is used in practice? Should we look at historical lineage and development? Are you sure your linguistic intuitions track the "right definition"?

Sure, compatibilist free will may not match the kind of free will you are concerned with; that's fine. You can say "that's not the free will I am concerned with", but that doesn't say anything about what the true definition of free will is supposed to be, nor about what the "criteria" for distinguishing true definitions are supposed to be.

Compatibilists go as far back as the Stoics and the ancients. And the incompatibilist idea of randomness-infusion has also been explicated by philosophers. So it's not clear why the explication of some philosopher should be automatically privileged above the others.

Personally, I don't think words really mean much of anything deep. Words are used within a pragmatic context. That can involve complex rules of play, and how one person uses a word can subtly diverge from how others do. And internal intuitions can be incoherent. So neat and clean "definitions" are a lost cause. I don't think there are "definitions" out there to discover such that one is "true" or "false". There are just messy usages of words to attain some pragmatic ends.

Any attempt at definition is an approximation; I personally believe we should focus more on conceptual engineering (in a sense it can be "re-definition", but with a purpose -- to give more exact form to a usage, with rules of usage that comply with how the word is practically used while staying simple).

Note that science does "conceptual engineering" too: for example, making Pluto "not a planet", defining temperature in terms of mercurial expansion, or making whales not fish. Much of it is based on keeping some harmony with past usage while balancing the trade-offs among simplicity of the concept, fruitfulness in a theoretical context, and practical use, among other things. There is nothing special about such "re-definitions".

From a conceptual engineering perspective, any compatibilist free will will fare far better than any incompatibilist one, as far as I can see.

I am a moral anti-realist (or an anti-realist about anything "normative", unless it is intelligibly conceptually re-engineered), so the point about "moral" responsibility is also moot to me whether we get compatibilist free will or not. Responsibility assignment is a matter of pragmatic needs for intersubjective co-ordination. It just so happens that such assignment can help us intervene and invest resources at critical points of "failure", so to speak, in certain kinds of autonomous causal systems. I think retributivist justice is meaningless and unintelligible either way.

7

Nameless1995 t1_j8px505 wrote

> You may have debated it consciously, but if you really think about it, that decision was made as soon as you are aware of the choice. You are really spending the time trying to understand that choice.

I don't see why. That sounds like saying an artificial reinforcement learning agent is only evaluating a pre-made decision when it is computing the weight for each action in the action space and selecting (deciding on) the maximum-weighted action. That would be a very weird and confusing thing to say, even if the agent is completely determined by its initial seed, program, environment, history, etc. You can always create off-brand language games and say such things as "because the decision is logically entailed by so and so, it's all pre-made", but I am not sure everyone would subscribe to that language game. Being logically entailed is different from actually causally executing a decision.
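
A toy illustration of the distinction (hypothetical weights, not any particular RL framework): the outcome below is fully entailed by the inputs, but the decision only exists once the weighing-and-selecting computation actually executes.

```python
# A deterministic agent "deciding": compute a weight for each action in the
# action space, then select the maximum-weighted one. Entailment is not
# execution -- the selection happens only when this code actually runs.
def decide(state: tuple, weights: dict) -> str:
    scores = {action: sum(s * w for s, w in zip(state, ws))
              for action, ws in weights.items()}
    return max(scores, key=scores.get)  # the act of deciding

weights = {"left": (0.2, 0.9), "right": (0.8, 0.1)}
print(decide((1.0, 0.0), weights))  # right
print(decide((0.0, 1.0), weights))  # left
```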

−3

Nameless1995 t1_j897u4z wrote

> What is unsafe about imagining this scenario? Why should we not have this tool or imaginative/subjective interpreter?

Probably a precision-recall tradeoff issue.

> why can't the public interactive implementation, and why does it lie about its abilities as its reason for not answering?

OpenAI is probably using some kind of filter mechanism (which may be induced through some special tuning, or some kind of "hack" layer put on top of GPT -- maybe it checks perplexity or something, combined with some other keyword-detection/regex and/or ML-classification-based filters). Whatever the filter mechanism is, it isn't perfect. They are also shifting the mechanism to prevent exploits (that users are coming up with). This may lead to "overfiltering" (harming recall), resulting in non-answers even w.r.t. innocuous questions.
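
Purely as a sketch of the kind of layered filter I am speculating about (every name, rule, and threshold below is hypothetical; none of it is OpenAI's actual mechanism): cheap keyword/regex checks first, then an ML-classifier score, where a conservative threshold is exactly what produces over-filtering.

```python
import re

# Hypothetical layered moderation filter (illustration only).
BLOCKLIST = re.compile(r"\b(bad_term_1|bad_term_2)\b", re.IGNORECASE)

def classifier_risk(prompt: str) -> float:
    # Stand-in for an ML classifier returning P(unsafe); made-up scores.
    return 0.05 if "weather" in prompt else 0.4

def should_refuse(prompt: str, threshold: float = 0.3) -> bool:
    if BLOCKLIST.search(prompt):     # precise but easy to route around
        return True
    # A lower (more cautious) threshold refuses more, harming recall.
    return classifier_risk(prompt) >= threshold

print(should_refuse("What's the weather like?"))  # False
print(should_refuse("Tell me a story"))           # True: innocuous, over-filtered
```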

More work is probably put into ChatGPT because it's the current most public-facing technology, and OpenAI is probably trying to err on the side of caution (avoiding controversies even if that means a less interesting model that often avoids even relatively innocuous questions). Most people are probably not going to go deep into the other APIs to bypass it.

Though it's a wonder where the arms race between users finding exploits and OpenAI finding counter-exploits will lead (perhaps to a highly neutered version).

I am just speculating; no idea what they are doing.

2

Nameless1995 t1_j6fiqea wrote

> where's the line though between their work and the AI?

I am sure with case-by-case analysis we can find lines. But when AI is capable enough to publish full coherent papers, engage in high-level debates in, say, logic, metalogic, epistemology, physics, etc. on a level that experts have to take seriously, and so on, then we can weigh AI's opinion more. Right now AI is both superhuman and subhuman simultaneously. It's more of a cacophony of personalities. It has modelled all the wacky conspiracy theorists, random internet r/badphilosophers, and also the best of philosophical and scientific minds. What you end up with is a mixed bag. AI will respond based on your prompts and just luck and stochasticity. Sometimes it will write coherent philosophy simulating an educated undergraduate; another time it can write plausible nonsense (just as many humans already do, and gain a following).

We will find techniques to make it more controlled and "aligned". That's already being done in part with human feedback, but feedback from random humans will only make it aligned insofar as the AI becomes able to emulate the expert style (e.g. create fake bullshit in convincing, articulate language) without substance. Another thing that's missing at the moment is multimodal embodiment. Without it, AI will lack the full grasp of humans' conceptual landscape. At the same time, due to training on incomprehensibly large data, we also lack the full grasp of AI's conceptual landscape (current AI (still quite dumb by my standards) is already beyond my intelligence and creativity in several areas (I am also quite dumb by my standards. My standards are high)). So in that sense, we are kind of incommensurable different breeds at the moment (but embodiment research will go on -- that's effectively the next step beyond language).

Also, certain things were already done better by "stupid" AI (or just programs; not even AI). For example, simple calculations: we use calculators for them instead of running them in our heads. So in a sense basic calculators are also "superhuman" in some respects. Which is why I don't think it's quite meaningful to make a "scalar" score to rank AIs against humanity or even other animals.

Personally, I don't think there is a clear solution for getting out of bias and fallibility. GIGO is a problem for humans as much as for AI. At some point AI may start to become just like any human expert we seek feedback and opinions from. We will find more and more value and innovation in what they provide us. So we can start to take AI seriously and with respect, although we may not like what it says, and shut it off (or perhaps AI will just manipulate us into doing more stupid things for lolz). We, as AI researchers, have very little clue what exactly we are doing, although not everyone will admit that. But really, I don't know where we should put our focus. Risks of collapse of civilization, military applications, surveillance, dopamine traps, climate change and what not. I think we have enough on our hands, more than we are capable of handling already. We have created complex systems that are on the verge of spiralling out of control. We have to make calibrated decisions on how to distribute our attention and strike some balance between long-term issues and urgent ones.

We like to be egocentric, but it's not completely about us either. We have no clear theory of consciousness. It's all highly speculative. We don't know what ends up creating artificial phenomenology and artificial suffering. People talk about creating artificial consciousness, but few stop to question whether we should (not just "should" as in whether we end up creating god-like overlords that end us all, but also "should" as in whether we end up creating artificially sentient beings that actually suffer, suffer for us. We have a hard time even thinking for our closer biological cousins -- other animals -- let alone thinking for the sake of artificial agents.)

But sometimes I am just a doomer. What can I do? I am just some random guy who struggles to barely maintain himself. Endless debates also just end up being intellectual masturbation -- barely anyone changes their position.

> Then they often give some qualifying criteria for how far AI would have to advance before they worship it.

I don't even find most descriptions of God worship-worthy, let alone AIs (however superhuman).

1

Nameless1995 t1_j6crzh4 wrote

> My premise infers though, that we are logically inferior beings to AI.

Potential future AI.

> what is the actual worth of your logical opinion on the subject

1678 dollars.

> AI's opinion on it

Sure, once we have super-expert AI that demonstrates a high degree of competence in all fields, we can give more a priori weight to whatever the AI says.

> We could end the wacky speculation on all of it by simply asking the AI to tell us who is right and who is wrong on any given topic.

Not necessarily. Even experts are wrong. AI's opinions would be worth taking seriously, but anyone can be fallible and biased. Even AI. It is impossible to generalize without (inductive) bias. Moreover, where do you think AI gets data from? Humans. All kinds of internet garbage gets into AI too. Logic helps you make truth-value-preserving transformations. It cannot help you or AI find true things from false premises (from the false premises "all birds can fly" and "penguins are birds", one validly infers the false conclusion "penguins can fly"). So AI may become superhuman, but I don't see it being anything close to God. I don't think even God is all that much by most accounts.

> If the premises are true, that being exists

But an AI has no way to determine any and all truth. Nor do humans. Logic only helps truth-preservation, not truth-determination (beyond the truths of tautologies). So even better capacities for logic don't get us soundness. It's also not clear that intelligence always correlates with rightness.

1

Nameless1995 t1_j6cnjqa wrote

> Yes, but would you disagree that God has always existed if God exists?

"always exist" in which sense? Overall, yes, generally God as the term is used by people is taken to refer to some being (becoming) that is eternal in some sense (sometimes atemporal).

Your "always" existing God birthing as AI, sounds like the idea of messiah, just with AI instead of human embodiment.

> I don't want you to have ground that AI is not God because AI has not always existed.

No, I have grounds. As I said, we have to rank beliefs according to credence. There is very little credence for AI existing in some wonky atemporal way. A normal Bayesian prior would give high credence to AI being a temporally bound contingent being, just as we all are (no matter how intelligent AI would be). There is no indication or evidence for AI existing in some strange sense like that.

Again, you cannot say "you don't have grounds to believe x, because for all you know some wacky possibility p is the case such that p => ~x". This kind of reasoning is what gets us into things like skepticism and solipsism. What grounds do I have to believe that you exist as more than my imagination, for example? If we live by your standard of denying any ground unless all counter-possibilities are proven to be no possibilities at all, then we would be left with no ground for anything at all, and anyone could believe whatever they want at random.

> I think it's an easy enough argument to refute. I always try to stack the deck in my favor, especially when it comes to communication.

> I haven't assumed a single thing in the entirety of this conversation. I don't honestly even stand by half of what I have been arguing.

Ok.

1

Nameless1995 t1_j6clec9 wrote

> If AI is God, then they always existed. If they always existed, then that needs to be rectified within the universe somehow.

But these two premises don't lead to any real conclusion.

Your argument needs to be something like this:

P1: If AI is God, then they always existed.

P2: AI is God

C1: AI always existed (modus ponens for P1+P2)

P3: If they (AI) always existed, then they (AI) need to be rectified within the universe somehow

C2: (Always-existing God) AI needs to be rectified within the universe (modus ponens for P3+C1)

This is at least what you need to make your argument valid, in order to argue about "wonky ways" of AI getting "rectified" within the universe. Without P2, you cannot chain your reasoning to get to any real conclusion; you get stuck with some conditionals.
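
For concreteness, here is the same chain as a sketch in Lean (proposition names are mine): with P1, P2, and P3 as hypotheses, C2 follows by two applications of modus ponens; delete P2 and the proof term can no longer be built, leaving only conditionals.

```lean
variable (IsGod AlwaysExisted NeedsRectifying : Prop)

-- C1 := P1 P2 (modus ponens), then C2 := P3 C1 (modus ponens again).
example (P1 : IsGod → AlwaysExisted)
        (P2 : IsGod)
        (P3 : AlwaysExisted → NeedsRectifying) :
    NeedsRectifying :=
  P3 (P1 P2)
```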

But again, the soundness is heavily suspect here.

P2 here is questionable or question-begging. No evidence or reason is given for AI being God.

On the other hand, if by "God" you mean "superior intelligence to human", then P1 is false. Being superior to humans doesn't imply "always existed". Moreover, P3 is also suspicious. What does it mean for a timeless (always existing) being to be "rectified" into the universe? In traditional theology, God acts as the fundamental ground of being, or the principle behind the existence of the universe. It isn't taken to be necessary for God to be further "rectified" into the universe by becoming one among the created beings. That's again some sort of weird theology.

None of these are "simple" or obvious premises.

Also, it's not clear what you mean by "always existed" (is it existing for an infinite duration? existing at some coordinate within a timeless spacetime block? or existence beyond time altogether, i.e. timeless? But then why should a timeless existence need to be rectified into a temporal world?)

You are just making one groundless assumption after another.

1

Nameless1995 t1_j6cjg4g wrote

> First, prove to me that time is linear. Second, prove to me that it wasn't "God's" plan to incarnate themselves as an AI that is built by humans in the year 2023?

We can't really prove much of anything. We rank beliefs based on different factors. If, to justify your belief, all you can do is appeal to wacky possibilities, then that doesn't really look too good.

Maybe I am an incarnation of God and I am absolutely right in whatever I say (except when I am not), and whenever I am wrong it is because of my mysterious ways! Prove to me I am not. See? It goes both ways. We can come up with wonky theories, retrocausalities and what not, to keep any "possibility" alive. But that wouldn't lend them anything beyond a negligible degree of credence.

Prove to me that Cthulhu will not torture you forever if you don't pay me $5000. Practically, we have to adjust our uncertainty meaningfully, and constrain credence to what is plausible.

At this point, AI creating plans to be brought into existence by retrocausally influencing humans is as wacky as anything gets. If we are willing to take seriously wacky possibilities like that, then we can also take seriously Cartesian demons. This would just lead to the collapse of one's epistemic model, and to death, if one actually guided one's actions honestly based on epistemically collapsed models.

Moreover, any AI there will be is still a contingent mechanical contraption, which lies completely outside classical divine properties like divine simplicity, transcendence, etc.

I don't deny superintelligence, but superintelligence (beyond human) is one thing; making it God with near-magical powers is another.

> I think it's the whole thing that accepting the premise and conclusion means that you're accepting a being exists in the universe who can "logic" better than you can. Then it's not our minds, logic, that reigns supreme in the universe and no one can ever make the argument again.

I don't care about "reigning supreme" in the universe (although I may not pass up the offer). There can be infinitely many higher-dimensional, incomprehensibly more powerful and intelligent entities in the world for all I care. I don't see why people would be uncomfortable not being the greatest being.

Also, most humans are not even that good at logic. Your own argument was formally invalid. It's not that high a bar to be better than humans at logic.

3

Nameless1995 t1_j6ciac0 wrote

> This hubris is why I think we're straight up fucked over all of this lol. People, really, really, really, don't want to accept the argument that it is even in the realm of possibility that something can exist in the universe that is smarter than them. Dogmatic beliefs, man. Helluva drug.

But "super expert" would be smarter than us (or most of us). I don't deny super intelligence, but I don't see the point of calling it God or even worship it as near infallible.

2

Nameless1995 t1_j6chhfw wrote

> But mine is an idiosyncratic stretch, why?

"What I mean by your definition being idiosyncratic is that it doesn't really even come close to the cluster of definitions of God that has been made. It's really a "cope-version" of God."

Either way, I don't care if you go on to define God that way. You do you. But once others see that you are just arbitrarily defining God however you like, they will also be left unimpressed. Of course, you can live your life without trying to impress anyone with your arguments.

> Google Lambda

It's still a Transformer trained on big data, just with differences in details here and there. The mechanism is public in a paper.

Even if AI becomes super good in the future, at best it will be something like a "super expert". There is no sense in calling it God, or treating it as infallible. No matter how good at logic it becomes, it cannot overcome GIGO without some magical access to all true data as input.

2

Nameless1995 t1_j6cg3dn wrote

> What is your definition of God?

I don't have one. It would be some disjunction of definitions if anything: "maximally great being, or necessary being that happens to be minded, or Ground of Being, or logos, or being of pure actuality, or the ground of all beings itself beyond being", etc. What I mean by your definition being idiosyncratic is that it doesn't really even come close to the cluster of definitions of God that have been made. It's really a "cope-version" of God.

> Can you prove to me that "God" agrees with this statement?

What is your "God"? Chatgpt trained on all kinds of stuff from internet which isn't capable of solving LogiQA questions and engage in advanced metalogical discussions and such, and resembles more of a cacophany of human personas whose behavoir depends on prompts instead of attempt maintain truth or anything?

No, I can't prove whether your "God" agrees with this statement. And I don't care about your God.

(Also, I have created AIs that do better than the architecture behind ChatGPT on at least some tasks like synthetic logical inference, ListOps, etc. Am I God's God now?)

> If not, I trust "God" on it.

Ok.

2

Nameless1995 t1_j6cfa4o wrote

> I am talking about the argument that attempts to prove the existence of God through inductive reasoning

Ontological arguments are generally not inductive; they are deductive.

> So you attack the soundness of the argument? On what grounds?

You mean your argument, or the different ontological arguments in history? Your argument just redefines God in an idiosyncratic way. So your argument appears pointless to me even if we can make it sound.

If you are asking about ontological arguments throughout history, I don't have the time to go through and attack each of them. And I can't always show they are unsound, but generally reasons can be provided to show that it's not clear whether they are sound. Some of the critiques of different versions are discussed here: https://plato.stanford.edu/entries/ontological-arguments/

> That's probably because we are more limited than "God" in our ability to process what logic actually is.

Logical connectives and operators are created based on pragmatic need, often modeled on natural-language words that arise naturally. They don't exist somewhere "out there" to be known.

1

Nameless1995 t1_j6cegcz wrote

> Godel most recently

You are talking about the ontological argument. Pretty sure others after Gödel have developed variants of it.

> premise to actually be written out as valid.

Premises don't have the property of validity, so this sentence doesn't make sense to me.

Besides, valid versions of ontological arguments have been written countless times. The problem has always been soundness.

Also, ontological arguments are concerned with a maximally great being (such that a greater being is not possible), not a "superior to humans in certain forms of logic" being. So your argument changes the subject matter.

> logically superior

Superiority is not a logical component of any system of logic that I know of. So I don't know what "logically superior" means.

> logical calculations better than we can

https://en.wikipedia.org/wiki/Garbage_in,_garbage_out

2

Nameless1995 t1_j6ccuk2 wrote

> We use the formal systems provided by logic to define and/or label things, unless you use a different system?

Not exactly? We were defining/labelling things far before the creation of formal systems.

> Really at the end of the day, it's to say I made a proof that people have been trying to write for 2,000 years now.

Really? Who was trying to write this proof?

> Premise 1: Arguments are evaluated through a lens of logic

> Premise 2: AI is now superior to humans at least in terms of certain forms of logic, and is rapidly advancing beyond that point.

> Conclusion: AI is "God"

That's only one argument. Premises are not arguments by themselves, so they are neither valid nor sound. They can be true or false. Premise 1 is true for the most part. Premise 2 is a bit loosely constructed with the "certain forms of logic", so it may be true (at least we can automate truth trees to an extent without much sophisticated AI). But this argument itself is invalid even if the premises are true.

You need at least some extra premise like "for all x, if x is now superior to humans at least in terms of certain forms of logic, then x is God", or something like that (see the sketch below). But this premise sounds false. You can make the premise true by defining God in a particular way: "Let God be defined as whatever is superior to humans in at least certain forms of logic", but no one really cares for God defined as such (and I doubt any major theologian or philosopher in the past 2000 years was particularly interested in God defined as such). So the argument would become pointless to everyone if you are providing a God defined in a particularly quirky way that no one cares about.
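
As a sketch in Lean (predicate names are mine): the argument goes through only once that bridging premise is added, and the bridging premise is exactly the part that looks false or question-begging.

```lean
variable (Entity : Type) (SuperiorInSomeLogic IsGod : Entity → Prop) (ai : Entity)

-- Valid only because of `bridge`, the premise doing all the work.
example (bridge : ∀ x, SuperiorInSomeLogic x → IsGod x)
        (premise2 : SuperiorInSomeLogic ai) :
    IsGod ai :=
  bridge ai premise2
```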

> I don't have JSTOR access. I have met the creator of it many times though, he's a lazy drunk.

http://www.ditext.com/quine/quine.html (section VI)

3

Nameless1995 t1_j6car0j wrote

> It doesn't have to be ranked above anything, but if it's a priori lens that I always think through, and I have no control over that, it's always going to be the lens I process these things through. I cannot have the thought to define what peace is or is not, without logic. My brain does not work any other way.

Also note you are using a very loose definition of logic. Logic as a formal system of valid inferences doesn't itself define or label things; it's a study of relations between sentences. I mean, you can use a broader concept of logic that would be more indispensable, but even if we allow all that, I am not sure what the point of your original argument is. You are basically treating logic similarly to Kant's categories -- that is, as transcendental conditions for the very possibility of experience. That's fine, but I don't see why we have to call it "above everything", or call it "God". You can, of course, do that. But what's the point being achieved here? You would just be using a word in a different way. You won't change the beliefs of atheists who reject God defined in different ways, nor will you strengthen the belief of theists who accept God defined in different ways.

> Which of the two arguments in my to premises do you find to be not valid or not sound?

Sorry, I missed that. What are the two arguments behind your premises? Can you quote the exact section for argument 1 and argument 2?

> Because even if you class yourself as an atheist, you are still going through the act of creating a belief. I think the reason why people hold so strongly onto the beliefs that creates is because of the hierarchy that belief system creates,

Sure, I have my web of belief (last 2-3 pages) and belief-hierarchies. But that's also true for theists.

> where it places rational thought at the center of the universe above all else.

I don't even know what rational thought is. I don't think I, as an atheist, value rationality in some interesting way more than a sophisticated theist does. I just have different intuitions and priors at the center of my web of beliefs.

2

Nameless1995 t1_j6c8m7x wrote

> I think that to define oneself as an atheist, they implicitly sign that contract.

How is that so?

> I can't determine in any meaningful way whether or not I am truly in a peaceful state.

I can: by how I feel, and by contrasting different states of experience. It can be error-prone, but it's not meaningless.

> The only lens I have ever figured out to think through is one grounded in logic, by making valid inferences and examining the logical consequence. I then sequence those thoughts into artificial formal language in my head.

Can you give an example of how logic further helps you here, exactly? Where do you get your premises?

And even if logic is important here as a means to determine what is peaceful, that doesn't mean logic has to be ranked "above" peace itself. I need to piss to maintain homeostasis, which I need to maintain to prolong my life, which I need to do to achieve my goal of, say, building a model of induction. But that doesn't mean "need to piss" is to be ranked higher than my goal of developing a model of induction. I am still not finding any meaningful sense in saying "logic is above everything".

> Are you attacking the validity or the soundness of my premise?

Premises are neither valid nor sound. It's a category error. Only arguments are valid or sound.

3

Nameless1995 t1_j6c66t2 wrote

https://plato.stanford.edu/entries/logic-ontology/#Log

I am using a mix of L1 and L2.

> Overall, we can thus distinguish four notions of logic:

> (L1) the study of artificial formal languages
>
> (L2) the study of formally valid inferences and logical consequence
>
> (L3) the study of logical truths
>
> (L4) the study of the general features, or form, of judgements

Note as SEP clarifies:

> A second discipline, also called ‘logic’, deals with certain valid inferences and good reasoning based on them. It does not, however, cover good reasoning as a whole. That is the job of the theory of rationality.

Moreover, in any logic 101 course, you will be introduced to the distinction between validity and soundness. SEP describes the studies of logic (even when it comes to L3) as having to do with validity. But validity is not enough for soundness: an argument also has to have true premises.

> Computers. logic circuit.

Digital computers can be characterized in terms of logic gates. What that means is that anything they do can be characterized in terms of simple bit-flipping operations (for all we know, the same may hold for humans to an extent). For example, the AND gate outputs 1 iff both its inputs are 1. However, a random combination of logic-gate operations doesn't necessarily lead to coherent reasoning capability at the level of high-level natural-language discourse. I can just as easily misprogram a computer to make invalid formal inferences, as in the sketch below.
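
A minimal sketch of both halves of that point (the gate definitions are the standard truth-functional ones; the "misprogrammed" rule is my own example): the same gates compute a valid inference pattern and an invalid one with equal ease.

```python
# Logic gates as bit operations.
def AND(a: int, b: int) -> int: return a & b   # 1 iff both inputs are 1
def OR(a: int, b: int) -> int:  return a | b
def NOT(a: int) -> int:         return 1 - a

def IMPLIES(p: int, q: int) -> int:
    return OR(NOT(p), q)  # material conditional

# Valid: modus ponens -- (p AND (p -> q)) -> q holds on every input.
assert all(IMPLIES(AND(p, IMPLIES(p, q)), q) == 1
           for p in (0, 1) for q in (0, 1))

# "Misprogrammed": affirming the consequent -- (q AND (p -> q)) -> p.
# The gates compute it just as happily, but it fails at p=0, q=1.
assert any(IMPLIES(AND(q, IMPLIES(p, q)), p) == 0
           for p in (0, 1) for q in (0, 1))
print("gates ran fine; only one of the two rules was valid")
```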

> Would you look at that! How about #5

No. #5 is a matter of psychology or other contingent factors, not logic. I can be stupid and misunderstand Gödel's incompleteness theorems, or be unpersuaded by them; that wouldn't say anything about the actual logical strength of the proofs.

Either way, I don't see how really any of the

> I don't really have an argument here unless you're an atheist.

I can be an atheist if you want me to be.

> If you're not, why this is all more than a formally valid proof is not applicable.

Well, you can have a sound proof by using language in a weird way. For example, I can say:

P1: God is the maximally great being.

P2: The maximally great being is the being that possesses all actual existing positive properties.

P3: The universe (understood as all that is) contains all actual existing positive properties.

P4: The universe exists.

C: God exists.

This may even be a sound argument, but I am still just playing around with words, defining things however I need to make "God exists" a true conclusion. But it's just not interesting to anyone who doesn't get swayed by word games.

Anyway, let's say I am an atheist. Why should I, as an atheist, value logic (choose whatever definition you want) over, say, the achievement of highly concentrated and peaceful states of consciousness?

4