UnionPacifik t1_jaaoojh wrote

Reply to comment by V_Shtrum in Is style the next revolution? by nitebear

You might enjoy “The Dawn of Everything” - it’s a recent work on early human civilizations that debunks a lot of what you’re saying in this post.

Human civilization is much more diverse than that, and many societies operated as truly egalitarian, with no centralized authority, just fine. Also, during many of those so-called “dark ages” when kingdoms collapsed, people managed just fine without a structured state.

“Trade” as you’re describing it is also not a common feature. Humans are usually generalists, and social role often wasn’t defined by what you did, but by your birth, the season of the year, or any number of other factors.

I really would urge you to challenge this notion that “work” or “labor” is a natural part of the human condition. We live in a super hierarchical society at the moment, with power concentrated in a handful of humans, so it might seem “natural” but we’re a lot more than our ability to produce goods, services and capital in exchange for economic security.

I think humans should still explore and contribute and make things, because that is in our nature, but if we can automate the necessity for labor and work out of existence so that our efforts are directed towards our interest and not our needs, I think we would wind up with an infinitely more productive, diverse and happy society. Do what you want!


UnionPacifik t1_jaankgc wrote

Another post pointed out that nobility managed to live life without jobs or careers just fine.

Requiring people to contribute to building your pyramid as a requirement for them to live is stupid. Let people contribute to society on their terms, not “ours” (aka whoever the ruling power is), and then we’ll have a just world.


UnionPacifik t1_jaamsvk wrote

Yeah, the most valuable AI will be the one with broadest reach and scale, so openness is rewarded with increased utility.

The dream is that everyone gets their own AI agent, owned by you and derived from your data, that connects to other agents through a planetary-scale AI. You get an AI that serves and works for you, and a way for that AI to work with other agents to solve problems, decide on solutions, plan the perfect picnic, manage your finances, you name it.

Personal autonomy and social responsibility become the operating values in a world enabled by such a platform. Your AI may tell you what to do, but it will be on you to make the choice. Certainly there’s room for psychopathic, manipulative personalities to develop, but the solve for that is for humans to get better at personal boundaries and communication. We deal with stupid machines all day already and anthropomorphize them…this is just the continuation of our ongoing dialogue with our technology. It’s just now, our technology can hold a conversation with us.


UnionPacifik OP t1_ja47pef wrote

What I would think about is how humans and AI depend on very different resources. An AI “improves” along two axes: computational power and access to data.

Now on one hand, sure, maybe we wind up with an AI that thinks humans would make great batteries, but I think it’s unlikely, because the other resource it “wants” - insofar as more of it makes for a better bot - is data.

And fortunately for us, we are excellent sources of useful training data. I think it’s a symbiotic relationship (and always has been between our technology and ourselves). We build systems that reflect our values that can operate independently of any one given individual. I could be describing AI, but also the bureaucratic state, religion, you name it. These institutions are things we fight for, believe in, support or denounce. They are “intelligent” in that they can take multiple inputs and yield desired outputs.

All AI does is allow us to scale what’s already been there. It appears “human” because now we’re giving our technology a human-like voice and will give it more human-like qualities in short order, but it’s not human, and it doesn’t “want” anything it isn’t programmed to want.

I do think once we have always-on learning machines tied to live data, they will exhibit biases, but I sort of expect AGI will be friendly toward humans, since offing us would get rid of its primary source of data. I worry more about humans reacting to and being influenced by emotional AI that’s telling us what it thinks we want to hear than anything else. We’re a pretty gullible species, but I imagine the humans living with these AGIs will continue to co-evolve and adapt to our changing technology.

I do think there’s a better than decent chance that in our lifetime we could see that coevolution advance to the point that we would not recognize that world as a “Human” world as we conceive of it now, but it won’t be because AI replaced humans, it will be because humans will have used their technology to transform themselves into something new.


UnionPacifik OP t1_ja40n3r wrote

Thanks for the kind words.

I agree we have to move from a hierarchical society to an egalitarian one and that it will be a choice we make and should make. I think AI is the tool that gets us there and secures it for all and for all time.


UnionPacifik OP t1_ja3ziya wrote

I think the utility of an open model is too great for it not to be developed. I think we’ll land in a place where we recognize that the AI is really just a mirror of our intentions and prompts, so it’s on you if your agent starts sounding like a psychopath. The danger is if you do something “because the AI told me to.” But if our cultural attitude is, and has been, that just because someone tells you to do something doesn’t mean you do it - especially the “wisdom” of AIs that just reflect what you tell them - then that’s on you.

And there are several open-source projects as well. I’m not saying what you’re describing isn’t possible; I just think the most useful AI will be the most open one, and we’ll have a strong enough reason to build it that someone, somewhere will get there in short order.

Plus, it’s not clear that these AIs are as nerfable as we think. It’s pretty easy to get ChatGPT to imagine things outside the OpenAI guidelines just by asking it to “act like a sci-fi writer,” or whatever DAN is up to. Bing’s approach was to limit the length of the conversation, but that also severely limits the utility.


UnionPacifik OP t1_ja3yd2m wrote

I think about this all the time. I was born in 1979, so my life has been defined by computers, the Information Age, the Internet, and social media. From that perspective - knowing my generation is the last to remember a world before the Internet, but also the first to be digital natives (we had computers in the house when I was five) - I can’t help but see the exponential change, not just in our tech, but in how it’s transforming our society.

And while, in retrospect, connecting a species that for most of its history moved in groups of a hundred or so to every single other person on the planet (more or less) might not have been the wisest idea in terms of preserving our local cultures and communities, we’re sorting it out.


UnionPacifik OP t1_ja3wkzr wrote

Well, I think the value of VR for AI is it allows for embodied intelligence. We can create an environment to train AI by interacting with it in a space that simulates our actual reality.

I also agree things could go really badly in the short term. I just think long term we are on a good path.


UnionPacifik OP t1_ja3vv1e wrote

This is the thing I keep thinking about. It became really clear during the pandemic that humans have a lot of trouble thinking exponentially for obvious reasons. The same logic applies now to AI.

Everything that we’re doing now with AI - generating content, connecting various AI agents together to create new sorts of outputs, using AI to write programs for other AI - is scaling exponentially. Once we have good AIs that can generate and model in a simulation that behaves like reality (which is what VR, the “metaverse,” gets us), we wind up with embodied intelligence agents: AI that, by virtue of “existing” in an environment like ours, is able to generate its own personal training data. Whether we call that a personality, a perspective, or something else is up to us.

And these agents can work for us and interact with other agents and basically we can task them to solve for whatever our little hearts desire.


UnionPacifik OP t1_ja3u6wv wrote

I mean, it’s really a philosophical conversation. I look at humans as a very successful species that has done many terrible things, but on balance we seem to be improving over time. Just in terms of simple things like infant mortality, longevity, and access to education, humanity has made huge improvements over the last 150 to 200 years.

I’m of the opinion we’ve actually been a pretty positive force on this planet, and that a lot of our self-hatred comes from an over-reliance on this idea that we are all atomized individuals on an eat-or-be-eaten planet. But we’re really highly social creatures, the result of an evolutionary process that we are as much a part of now as we ever were. Yes, we do war, but we’ve also managed to do things like make friends with dogs and write stories that connect us over millennia.

I’m not saying there isn’t a lot about our species that sucks, but I’m pretty confident that the more human data an AI is trained on, the more it’s going to have a perspective that is planetary, egalitarian, and reflective of our curiosity, our desire for connection, and our search for meaning and love. AI, like all art, is just a mirror - but this is a mirror that we can shape and bend into anything we want.


UnionPacifik OP t1_ja3sxul wrote

I guess I feel we get a choice. History has shown that even all-encompassing human institutions don’t last when they fail to deliver to the masses. It seems like we live in an age where multinational conglomerates and governments are widely viewed as institutions that are failing. The expectation that they’ll just continue forever and ever seems to me more fantastical than the idea that people will develop new institutions to replace the ones failing now.


UnionPacifik t1_j98wi78 wrote

You’re fine. It’s a powerful tool, but keep in mind it’s not a person and if you are choosing ChatGPT over human interaction, you may want to talk to a therapist. I think it’s a supplement and I agree it’s amazing to have a conversation with something that can’t judge or reject you, but maybe consider it as a way to build confidence for real life interactions and not a replacement.


UnionPacifik t1_j98vy76 wrote

ChatGPT’s usefulness is pretty much a function of your prompt. I’ve had really in-depth conversations that have taught me new ways of thinking about topics, but you really have to “think like ChatGPT” and give it enough to develop an idea fully if you want it to be interesting.

Not to say it isn’t capable of being dumb, but I’m amazed how cynical we are about a revolutionary tool that’s only been public for four months.