Cerulean_IsFancyBlue t1_jef0xb1 wrote

People have always signaled through their clothing and accessories. Team colors, mourning attire, flag pin in the lapel, spiky mohawk, political buttons, “rebel” beret, jeans to the board meeting, religious necklace, “I voted”, wedding ring, school tie, MAGA hat, pussy hat.

“New thing bad” + half-assed justification is no way to go through life.


Cerulean_IsFancyBlue t1_jeezwlf wrote

As people, we have social skills and intelligence, and I feel like this is deploying those in a great combination to let everybody be comfortable with or without consensual physical contact.

It’s also possible that you meant, “I have no discomfort around this, so I don’t see why it’s worth the trouble.” In which case, ok cool. You’re why we have wristbands.


Cerulean_IsFancyBlue t1_jeeznlo wrote

Physical contact is part of standard greeting and bonding for many humans. The problem is when we either weaponize it (handshake grip contests) or when we, once again, impose the “normative” model on everybody.

I wonder what it would be like if I grew up in a culture where men routinely did the air kiss thing to each other or held hands in public as friends. Would it have changed my attitude towards contact and made me more of a touchy person? Or would my life just be even more uncomfortable than it is now?

Anyway. I like stuff like these bands, because it allows people to set their own boundaries in a way that other people can see and understand. It’s also a reminder that when you are dealing with other people, it’s worth stopping and thinking about those things, even when you’re not in the situation with wristbands. Look for cues. Go slowly.

I don’t think we’re going to change some people’s basic desire for physical contact as part of a greeting. What we can do is normalize consent and normalize respect.


Cerulean_IsFancyBlue t1_jecghpo wrote

So this is interesting. On the one hand, I am very pessimistic that we are anywhere close to achieving human-level intelligence and cognition. I don’t think it possesses intuition or feelings, or any of the things that you might think are necessary for true creativity.

But … this might be a generation of AI that is actually better at creativity than it is at being factual and correct. Language generation has the ability to produce sentences that are plausible and coherent. But without some kind of additional subsystem, it’s actually not very good at fact checking. So it’s possible that this tool will be a boost to human creativity by being able to generate tons of alternatives and variations on ideas, and not a boost to human accuracy or precision like many previous generations of “Thinking Machines” have been.

GPT is less “calculator” and more “crazy friend who spits out inspiring nonsense”. It produces fanciful novel output — made of the things you put into it, rearranged. But it does so in such a powerful way, drawing on such a wealth of examples, that the output can actually feel creative. Usually it’s creative via an existing style, so it’s a derivative sort of creativity, if that’s not an oxymoron.

But anyway. I find it interesting that in terms of how this boosts human abilities, it’s more of a creativity boost.


Cerulean_IsFancyBlue t1_jecf72b wrote

The human brain is also only one of the systems involved in human actions and decision making. I’m not talking about any kind of spiritual stuff. I mean actual systems that influence brain chemistry.

There are areas of cognition in which it is quite possible that important decisions are being made outside the brain, and our executive function rationalizes the decision like Mayor Quimby running to the front of a protest to “lead” it.

I think one great layperson introduction to this kind of systems interaction is contained in the book Gut (Giulia Enders).

I don’t know if we literally need to simulate each subsystem, but it does lead me to believe that we don’t yet understand the system that we are trying to model. It isn’t just neurons, and “just neurons” is hard enough.

That said, there’s a lot to be achieved by throwing more power at the problem. Many problems in the realm of imitating humans, from playing chess to visual recognition, were not defeated by specialized approaches but eventually fell to sheer processing power. For me this means X is probably 5+ generations out, and a lot of that is simply because I can’t picture what the future looks like further down the road than that.


Cerulean_IsFancyBlue t1_je14jvo wrote

No, that’s not what I was saying. I was saying that currently our very best secret sauce still requires a lot of computing power. And that once the secret is out, the knowledge is great, but it will still take tons of computing power to implement it.

It’s also true that computing power will continue to increase, although Moore and his law may both be dead now. So the rate of increase is uncertain.

It’s possible that some things just won’t scale to the individual level. If that’s true, then most individuals will only have gated access to AGI.


Cerulean_IsFancyBlue t1_je126y5 wrote


Sorry, too short?

Both the program itself and its designers will tell you it has no sense of humor. There are some excellent white papers and podcasts and articles written about how, because it’s a language model, people will hallucinate a personality behind it. We live in a world in which anything that can produce that kind of language has some aspects of humanity, and it’s very, very hard for us not to accidentally project at least some humanity onto that system.


Cerulean_IsFancyBlue t1_je0upng wrote

An AGI would be an amazing feat.

The first AGI will be the equivalent of a human baby, completely helpless. It will likely run on a massive array of computer hardware backed by a tremendous amount of electrical generation capacity, and even if it wanted to duplicate itself, it will not be able to do so rapidly or without detection.

If anything, it will be even less able to survive on its own than a human baby.

All the ideas we have about being unable to control an AI rely on Hollywood-level notions of what is hackable and controllable. It could thrash around and mess up a lot of systems. There’s a pretty good chance that in the process it would suicide. Every model we have for an AI right now requires a tremendous amount of computing power, electricity, and cooling. It’s not going to be able to run away and hide in “the internet”. If it tries, it will probably contract a fatal disease from half the computers it attempts to occupy.


Cerulean_IsFancyBlue t1_jadrze8 wrote

It’s hard to know without some survey data. I feel like Einstein is, in my culture, a widely recognized face and a man who is known for being very smart. If you drill down beyond that, the average person might have an idea that he was smart, and also wise, which is why you end up with so many sappy quotes attributed to Einstein that he never said. He is everybody’s genius, pacifist, kind grandpa with the crazy hair. In many ways, he fulfilled the role of eminent, trustable figure that Carl Sagan and Neil deGrasse Tyson filled later.

Tons of strange conspiracy stories just don’t make sense with Einstein. So he ends up with inspirational quotes instead.

I don’t think Newton is anywhere near as well known by the average American. And when he is, it’s seldom more than the guy who “discovered” gravity when an apple fell on him. You don’t hear about alchemy or calculus or astronomy or politics.

Tesla was a bit like Newton in America. Some people knew a lot about him. A ton of people knew only one thing: he did crazy experiments with electricity, like Tesla coils, that were dramatic and cool in some way, without knowing any of the details. He was the epitome of not just the lone genius but the unsuccessful, doomed genius.

Edison was a revered figure who took a heel turn in the popular view as folks began to weigh his politics, his greed, his intellectual property theft, and such more than his stable of patents and financial success. Tesla made a good foil for that.

Even so, I think there’s not much comparison. Einstein is a figure on par with Napoleon in terms of recognition. Tesla, as a person, even as a highly fictionalized person, is a lot more obscure culturally.


Cerulean_IsFancyBlue t1_ja9s1fx wrote

And if you’ve ever had to clean up after (insert favorite target trade here), you know they leave a site like lions leaving a half eaten carcass. Freaking mess.

Let’s say sparkies just for example. :)


Cerulean_IsFancyBlue t1_ja93ah1 wrote

EDIT: I wanted to add that I’m enjoying your responses and I hope I’m not coming off as combative. It’s nice to have a good interaction on Reddit, and this is the best part of my day so far. :)

I agree that good policy is good for all in the long term. Hope we get there.

If you look at the industrial revolution in Britain in isolation, then it is an arc upwards. If you look at the British empire, it’s not quite so rosy.

The destruction of the Indian textile industry was essential to the success of Britain’s domestic wonder. Since it wasn’t an area I had directly studied in school, up until recently I assumed that it was mostly the consequence of Britain being a first mover and overwhelming the inefficient, unfortunate textile producers in India and other places. After reading a few histories of the British East India Company, it became pretty clear that the British monopoly on textiles was not a matter of efficiency. It was imposed by tariffs, laws restricting the importation of machinery, and, in at least three spectacular instances, by force of arms and the destruction of property.

Real income and GDP in India took a severe hit from Britain’s Industrial Revolution, and continued to be suppressed to provide a market for British finished goods output. India is still recovering.

Again, this is not a necessary outcome of technology. I’m bringing this up to note that the external costs of past revolutions, especially the global ones, have to be looked at globally. It’s very dangerous to look only at the people who benefit. And in this case, even the working-class people in the UK benefited. Yes, the arc went up. But it didn’t go up for everybody, at least not for a few centuries.

But it is unfortunately a common and likely outcome of our current system, where productivity gains are assigned almost exclusively to the owners, and that group has a tremendous amount of leverage when it comes to creating laws and steering government spending.


Cerulean_IsFancyBlue t1_ja8uglv wrote

Sure, and the black plague resulted in improvement in labor mobility. Win! :)

These things can be true, but you’re still skipping over a fairly large amount of human suffering that happens during the transition. Remember that a lot of the jobs provided by Industrial Revolution factory work were often less healthy than even subsistence farming. Living conditions as well, in the growing cities required by the centralized factories using the new, large, expensive equipment.

And of course, this was not directly the fault of the steam engine. In many ways, the loss of jobs in the farming sector was the result of agricultural policy, and not technology. The surplus rural population then got fed into the industrial workforce as desperate, needy workers, which was as much to blame as “progress”.

The idea that in a generation or two we’ll still have plenty of jobs does not mean we should ignore the fact that you’re going to have a bunch of people in the next one or two generations who can’t earn a living, because we don’t have a society with a proper safety net or proper retraining systems.

The ideal response would be to fix those systems, not to try to stop the inevitable progress of technology. But it’s also not good to get lost in the long-term picture and forget about the short-term social cost.

EDITED one million typos


Cerulean_IsFancyBlue t1_ja8p1zi wrote

Can anybody find any place in that article where it links to the study or the research? Nothing bugs me more than an article full of highly specific percentages that’s actually so vague it’s not worth discussing.

I’d love to know what kind of specific tasks these experts think are going to be automated.