GepardenK t1_jcf55l6 wrote

That's fine, so long as you keep it to yourself, but then you don't get to make statements about what is and isn't useful. Because if you make assertions - that is to say, if you impose yourself on others - while also maintaining that truth isn't something to be worried about, then that's just a rhetorical way of trying to have your cake and eat it too.


GepardenK t1_jccjktk wrote

No, all of these are truth statements. You seem to be selective about which true things you consider "truth", so that you can argue against truth while still keeping your own non-truth "true". Subjectivity itself relies on truth, since for something to be subjective it must be true that it is not objective - and so on.


GepardenK t1_jc248bv wrote

Well of course you can't. If you want to make a truth claim then by definition your position must be that truth claims can be made. The best you can do is to make a distinction where you say that some things can be said to be true (which then would include your own position, lucky break I guess), while most other things can't be said to be true.


GepardenK t1_jc1xwc3 wrote

>Ok, but this article relies on the exact sorts of "truth" the author is seeking to dismiss in order to argue the "truth" of its own assertions.

Yes, maybe it's just where I hang but I've seen so many of these pseudo-deconstructions lately and it's getting pretty boring. If you argue against a framework then don't keep standing on that same framework - just abandon it.

For example, if I make a deconstructive argument against God, or gods, then I'm not gonna end with: "... and hence that is why it is God's will that God does not exist".

In the same vein, don't make truth claims about the nonexistence of truth. If it does not exist then you should not need to rely on it.


GepardenK t1_j9y9tth wrote

> Did we just pass the Turing Test?

Well, no. Even if everyone thought this article was written by a human that would not pass the Turing Test.

The Turing test requires a judge to converse back and forth, in real time, with two hidden participants - one human and one AI - and then fail to reliably tell which is which.

It is a significantly higher standard than simply confusing some algorithmic text for having been written by a person.


GepardenK t1_j9xlw53 wrote

>Where is the evidence that we can't explain reality?

Think about what it would mean to explain something.

Empiricism, i.e. science, considers what we experience to be reality. It then uses experience (testing) to explain what we experience. As you may notice, this is circular. Science seeks to understand what we experience; it isn't interested in explaining reality in a fundamental way, and in fact refuses to do so, as that would be unscientific.

On the other hand we have the various forms of metaphysics. Unlike science, metaphysics doesn't trust what we experience outright, because it considers truth to be something independent of experience. Which means that, by its own premise, an ultimate explanation of reality cannot be reached, since the evidence is considered unreliable.

In either case, science or metaphysics, the conclusion is that reality cannot be explained.


GepardenK t1_j9xg72u wrote

> Which would be whatever are the foundational blocks of all of reality. Quantum fields or what have you.

Since you seem to be appealing to empirical concepts, I feel compelled to point out that, according to empiricism, this idea that only the smallest components represent reality is just flat-out wrong.

Empiricism holds that there is no universal reference frame. Empiricism even goes as far as saying, some would argue to a fault, that a priori knowledge doesn't exist at all - i.e. nothing can be said to be true independent of experience.

Thus: the quantum level, or what have you, has no more claim to truth than the cosmic level or any other frame. If a property, like solidity, exists in one frame but not another, that in no way invalidates its existence. According to empiricism something is real if it can be experienced; scale matters not.

It is tempting to think that "smaller is truer" because we usually have to go smaller when following the arrow of causality. But finding the origin of causality just means finding the origin of causality; it doesn't make it any more or less true than any other phenomenon.


GepardenK t1_j9xdu9c wrote

>We'll be needing a different term than "Post-postmodern".

We should just retire the concept entirely rather than come up with a new term. Self-identifying as a new era is cringe and leaves one vulnerable to thinking progress is being made even when it isn't: whether we have successfully moved beyond postmodernism at this point will be up to future historians - not us.


GepardenK t1_iuhdu0s wrote

It depends on the level of complexity of what you consider to be a 'trait' to begin with. If it's something like "flight", and a species goes 100,000 years without opportunity to use its ability to fly, then the chances of it still retaining that ability after all that time are astronomically small, simply due to the dependencies such a complex trait requires.

If, by 'trait', you mean something much more basic, like the presence of a particular protein, then that could obviously stick around much longer without active selection.

Destined is the right word, though. Given enough time, things will go away without something to keep them in place.


GepardenK t1_iuh4u5u wrote

>Remember that nothing has to be "beneficial" for it to remain, it simply needs not be selected against.

In theory yes, but not in practice.

Because things we recognize as 'traits' tend to rely on a complex set of dependencies, they almost invariably get scrambled beyond functionality unless actively selected for.

It's a principle similar to that of entropy. If there is no force in play to actively maintain a particular structure, then, through sheer randomness alone, that structure is destined to dissolve eventually, one way or another.
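The decay argument can be sketched with a toy drift simulation. This is a minimal illustration, not a real population-genetics model; the function name, parameters, and numbers are all invented for the example. A "trait" here only works if every one of its dependent sites is intact, and mutation scrambles sites at random each generation:

```python
import random

def fraction_functional(generations, sites=20, pop_size=200,
                        mu=0.01, selection=False, seed=1):
    """Toy drift model: a complex trait 'works' only if all of its
    dependent sites are intact (1). Each generation, every offspring
    copies a random parent, flipping each site with probability mu.
    Under selection, only individuals with a working trait reproduce."""
    rng = random.Random(seed)
    pop = [[1] * sites for _ in range(pop_size)]
    for _ in range(generations):
        # purifying selection: restrict the parent pool to intact genomes
        parents = ([g for g in pop if all(g)] or pop) if selection else pop
        pop = [[1 - s if rng.random() < mu else s
                for s in rng.choice(parents)]
               for _ in range(pop_size)]
    # fraction of the population whose trait still works
    return sum(all(g) for g in pop) / pop_size
```

With these numbers, a few hundred generations of neutral drift leave essentially no intact copies of the trait, while the selected run keeps most of the population functional - the entropy-like point above: the more dependent sites a trait has, the faster randomness alone dismantles it.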