
orincoro t1_jeg2ncr wrote

F-451 is not specifically anti-fascist (though it does take place in an authoritarian future world), and, much like 1984, it is often cited by far-right propagandists who co-opt its anti-censorship message and argue that the post-literary future it presents is the product of some derivation of Marxism. There may be some further ammunition in the book for this take, given that the “firemen” of this future are, according to the legend they subscribe to, eliminating all non-mainstream culture explicitly as a means of ending class conflict. This may lead some unimaginative people to conclude that it’s really a tract against socialism, which it isn’t.

However, it’s worth noting that the form of censorship the book was implicitly reacting against was McCarthyism, and it bears further noting that while the book is of course about censorship on its face, its deeper animating motivation is probably a critique of mass consumer culture, particularly television and advertising.

I imagine somebody is co-opting it for the same reason anything is co-opted in this way. Young people are told this is an important book.

Fuck neo-Nazis indeed sir.

24

orincoro t1_jaa3o1s wrote

Moreover, it is really not strongly supported that humans actually outcompeted Neanderthals in any particular way. They could have been bred out, died of disease, or vanished for many other reasons. The argument that we necessarily survived because we were better in some way is not very scientific. Weaker and less resilient species win out all the time for obscure reasons.

18

orincoro t1_ja3nlwp wrote

Your brain will never not benefit from learning languages. There’s also utterly no way to understand the success or failure of a given translation matrix without people who understand both languages.

Finally, you will never achieve high level communication with people who don’t understand any of the same languages as you, so this seems like a silly line of inquiry.

0

orincoro t1_j7fs06k wrote

Not really. We don’t actually know exactly how cognition works, so it would be a little overzealous to analogize it with a machine. Whenever we do, we tend to over-rely on such analogies. Twenty years ago, technologists were talking about how our brains have “apps.” Twenty years before that, our brains had “RAM.” And so forth. We analogize to machines because we can understand machines, but this does not our brain a machine make.

2

orincoro t1_j7fr90k wrote

Those settings are also driven by machine learning. You’re thinking in a linear way, but neural networks don’t work like that.

All of this is nonsensical. Altman has to define what is “neutral.” But neutrality is an orthogonal value, not an objective characteristic. What’s neutral to you isn’t neutral to me. The bloody-minded technocracy of these companies is utterly fucking maddening. They’ll put human decision making, and the definition of morality and ethics themselves, in the hands of programs. And believe me: the people who will benefit are the people who own and control those programs.

1

orincoro t1_j7fqwud wrote

Absolutely disagree. The purpose of neural networks is to establish connections in an organic way. You can use certain heuristics to get the machine to form connections in certain ways, but your ability to guide its learning is limited by the fact that you will never know in detail what all the nodes are for or how they actually work. There is no possibility of analyzing a neural network in the sense that we can understand machine code.

This is why neural networks can degrade if not trained properly. Companies like Google and Facebook don’t have as much control over their systems as they would like you to think.

2

orincoro t1_j7fqn3q wrote

It already does decide what you know. ChatGPT is just an overt and public facing form of the same technology that’s been determining your information diet for years. Believe me, I write for some popular YouTube channels: not only does AI tell us what to write about, it gives us exact critical feedback on making the text more digestible. It’s really quite a seedy business in my opinion.

2

orincoro t1_j5ocrmk wrote

Ah, so the people who stood to directly lose power because of the press said it was evil? Color me fucking shocked. Is that the best you’ve got?

This invention is not empowering common people like the press did. It empowers the already powerful to accumulate yet more power. Show me how it’s anything else.

2

orincoro t1_j5kasve wrote

The printing press was used to print Hitler’s book too. If you don’t think this is going to have similar consequences, you’re very much mistaken. New mass media is adopted by radical political movements faster than by anybody else.

When the printing press was invented, most people couldn’t read. This isn’t even close to the same kind of situation.

1

orincoro t1_j5ka4dv wrote

1. Not letting AI spread misinformation when it is used in an application where the law specifically protects people from that use.
2. Not allowing AI to be used to defeat security or privacy, or to enable misinformation, spam, harassment, or other criminal behavior (and this is a very big one).
3. Not allowing AI to access, share, reproduce, or otherwise use restricted or copy-protected material it is exposed to or trained on.
4. Not allowing a chat application to violate, or cause to be violated, laws concerning privacy. There are 200+ countries in the world, with 200+ legal systems to contend with, and they all have an agenda.

5

orincoro t1_j3vpzea wrote

Of course. Obviously we’d be doing it to lower temperatures and prevent climate collapse. I mean to say that it would not be harmful to life.

It would have no meaningful impact on the ability of any biosphere on Earth to function as it currently does. We get about 50% more sunlight than we need. The majority of that sunlight (and I mean the enormously overwhelming majority) is converted into ambient heat.

We know this because when the sun dims by several percent over the course of years, nothing happens on Earth except that global temperatures drop very slightly.

1