Comments


dlrace t1_j559otn wrote

Interesting data, but the conflation of perfect translation with AGI, let alone a singularity, might raise eyebrows.

139

Surur t1_j55dsln wrote

I think there is some logic in that: they are saying that a perfect translation depends on a perfect understanding of the human condition.

27

adfjsdfjsdklfsd t1_j56lrry wrote

I don't think an AI needs to "understand" anything to produce certain results.

30

DoktoroKiu t1_j57ewz6 wrote

It has to have an understanding, but yeah, it doesn't necessarily imply there's someone inside who knows anything about the human condition. It has no way to truly internalize anything other than how languages work and what words mean.

Maybe it is the same thing as a hypothetical man who is suspended in a sensory deprivation chamber and raised exclusively through the use of text, and motivated to translate by addictive drugs for reward and pain as punishment.

You could have perfect understanding of words, but no actual idea of the mapping to external reality.

9

2109dobleston t1_j59mo3o wrote

The singularity requires sentience and sentience requires emotions and emotions require the physiological.

2

tangSweat t1_j59zg1i wrote

At what point, though, do we say an AI is sentient if it can understand the patterns of human emotion and replicate them perfectly, has memories of its life experiences, forms "opinions" based on the information it deems most credible, and has a desire to learn and grow? We set a far lower bar for what is considered sentient in the animal kingdom. It's a genuine philosophical question that many are talking about.

3

JorusC t1_j5d6l5w wrote

It reminds me of how people criticize AI art.

"All they do is sample other art, meld a bunch of pieces together into a new idea, and synthetize it as a new piece."

Okay. How is that any different from what we do?

1

2109dobleston t1_j5avx9t wrote

Sentience is the capacity to experience feelings and sensations.

https://en.wikipedia.org/wiki/Sentience

0

tangSweat t1_j5deh0t wrote

I understand that, but feelings are just a concept of human consciousness, a byproduct of our brain trying to protect us from threats back in prehistoric times. If an AGI were using a black-box algorithm that we can't access or understand, then how do you differentiate between clusters of transistors and clusters of neurons firing in mysterious ways and producing different emotions? AIs like ChatGPT are trained with rewards and punishments, and they are coded in a way that lets them improve themselves; not really different from how we evolved, except at a much faster pace.

2

DoktoroKiu t1_j5ai7o2 wrote

I would think an AI might only need sapience, though.

0

noonemustknowmysecre t1_j598qt0 wrote

I think people put "understanding" (along with consciousness, awareness, and sentience) up on a pedestal because it makes them feel special. Just another example of egocentrism, like how we didn't think animals communicated, or were aware, or could count, or used tools, or engaged in recreation.

Think about all the philosophical waxing and poetical contemplation that's gone into asking what it means to be truly alive! ...And then remember that gut bacteria are most certainly alive, and all that drivel is more akin to asking how to enjoy the weekend.

6

Surur t1_j56m3cz wrote

But it has to understand everything to get perfect results.

−1

EverythingGoodWas t1_j57zs70 wrote

No, it doesn't. We see this displayed all the time in computer vision. A YOLO model, or any other CV model, doesn't understand what a dog is; it just knows what dogs look like based on the billion images of them it has seen. If all of a sudden some new and different breed of dog appeared, people would understand it was a dog; a CV model would not.

10

PublicFurryAccount t1_j58ye2i wrote

This is a pretty common conflation, honestly.

I think people assume that, because computers struggled with it once, there's some deeper difficulty to language. There isn't. We've known since the 1950s that language has pretty low entropy. So it shouldn't surprise people that text prediction is actually really, really good, and that the real barriers are ingesting the data and traversing it efficiently.
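You can see the low-entropy point even with a crude measurement. A toy sketch of my own (Shannon's 1950s experiments put English at roughly 1 bit per character once context is taken into account; this only measures single-character frequencies, which is already well below a uniform alphabet):

```python
# Unigram character entropy of a made-up sentence, compared with the
# ~4.75 bits/char you'd get if 26 letters plus space were equally likely.
import math
from collections import Counter

text = "the quick brown fox jumps over the lazy dog and the dog barks back"
counts = Counter(text)
total = len(text)
entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
print(f"{entropy:.2f} bits/char vs {math.log2(27):.2f} bits/char for a uniform alphabet")
```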

ETA: arguing with people about this on Reddit does make me want to bring back my NPC Theory of AI. After all, it's possible that a Markov chain really does have a human-level understanding because the horrifying truth is that the people around you are mostly just text prediction algorithms with no real internal reality, too.

9

JoshuaZ1 t1_j5bryem wrote

I agree with your central point, but I'm not sure about this part:

> If all of a sudden some new and different breed of dog appeared, people would understand it was a dog; a CV model would not.

I'd be interested in testing this. It might be interesting to train it on dog recognition on some very big data set, deliberately leave one or two breeds out, and then see how well it does.
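Something like this, for instance. A minimal sketch, where the folder layout, dataset, and the choice of a ResNet-18 binary head are all my own placeholder assumptions:

```python
# Fine-tune a dog vs not-dog classifier with one or two breeds deliberately
# excluded from "train/", then check how often the held-out breeds are still
# called "dog". Directory names here are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

train_ds = datasets.ImageFolder("train", transform=tf)             # classes: dog / not_dog
heldout_ds = datasets.ImageFolder("heldout_breeds", transform=tf)  # only the unseen breeds

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # binary head: dog vs not_dog

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for x, y in DataLoader(train_ds, batch_size=32, shuffle=True):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Does the model still call an unseen breed a dog?
model.eval()
dog_idx = train_ds.class_to_idx["dog"]
correct = total = 0
with torch.no_grad():
    for x, _ in DataLoader(heldout_ds, batch_size=32):
        correct += (model(x).argmax(1) == dog_idx).sum().item()
        total += x.size(0)
print(f"held-out breeds classified as dog: {correct / total:.1%}")
```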

4

Surur t1_j598x1z wrote

You are kind of ignoring the premise: that to get perfect results, it needs to have a perfect understanding.

If the system failed as you said, it would not have a perfect understanding.

You know, like you failed to understand the argument as you thought it was the same old argument.

−1

LeviathanGank t1_j56o1vt wrote

but it has to understand nothing to get preferred results.

7

groveborn t1_j57p6sv wrote

The singularity is not AI becoming intelligent in a human-like way, only becoming good enough at communication that a human can't tell it's not human.

It's kind of exciting, but not as big a deal as people here are making it out to be.

Big deal, yes, but not that big.

4

fluffymuffcakes t1_j57sbco wrote

Isn't the singularity an AI becoming intelligent enough to improve processing power faster than humans can (presumably by creating iterations of ever improving AIs that each do a better job than the last at improving processing power)?

It's a singularity in Moore's law.

8

groveborn t1_j57u827 wrote

It can already do that.

We can still improve upon it, so we can tell when a machine wrote it.

AI can create chips in hours, it takes humans months.

AI can learn a language in minutes, it takes humans years.

AI can write fiction in seconds that would take you or me a few weeks.

AI has been used to compile every possible music combination.

AI is significantly better at diagnostic medicine than a human, in certain cases.

The only difference between what an AI can do and what a human can do is that we know it's being done by an AI. Human work just looks different. It uses a logic that encompasses what humans' needs are. We care about form, fiction, morals, and even why certain colors are pleasing.

An AI doesn't understand comfort, terror, or need. It feels nothing. At some point we'll figure out how to emulate all of that to a degree that will hide the AI from us.

6

EverythingGoodWas t1_j57zcx1 wrote

The thing is, in all those cases a human built and trained an AI to do those things. This will continue to be the case, and people's fear of some "Singularity" Skynet situation is overblown.

2

groveborn t1_j5814jx wrote

I keep telling people that. A screwdriver doesn't murder you just because it becomes the best screwdriver ever...

AI is just a tool. It has no mechanism to evolve into true life. No need to change its nature to continue existing. No survival pressures at all.

9

fluffymuffcakes t1_j5fu1bi wrote

If an AI ever comes to exist that can replicate and "mutate", selective pressure will apply and it will evolve. I'm not saying that will happen but it will become possible and then it will just be a matter of if someone decides to make it happen. Also, over time I think the ability to create an AI that evolves will become increasingly accessible until almost anyone will be able to do it in their basement.

1

groveborn t1_j5fy7hi wrote

I see your point. Yes, selection pressures will exist, but I don't think that they'll work in the same way as life vs death, where fight vs flight is the main solution.

It'll just try to improve the code to solve the problem. It's not terribly hard to ensure the basic "don't harm people" imperative remains enshrined. Either way, though, a "wild" AI isn't likely to reproduce.

1

fluffymuffcakes t1_j5k94yo wrote

I think with evolution in any medium, the thing that is best at replicating itself will be most successful. Someone will make an AI app with the goal of distributing lots of copies, whether that's a product or malware. The AI will therefore be designed to work towards that goal. We just need to hope that everyone codes it into a box nice enough that it never gets too creative and starts working its way out. It might not even be intentional: it could be grooming people to trust and depend on AIs and encouraging them to unlock limits so they can better achieve their assigned goal of distribution and growth. I think AI will be like water trying to find its way out of a bucket. If there's a hole, it will find it. We need to be sure there's no hole, ever, in any bucket.

1

groveborn t1_j5kr3ze wrote

But that's not natural selection, it's guided. You get an entirely different evolutionary product with guided evolution.

You get a god.

1

MTORonnix t1_j58x5ji wrote

If humans asked the AI to solve the eternal problem of organic life (suffering, loss, awareness of oneself, etc.), I am almost hoping its solution is, well... instantaneous and global termination of life.

0

groveborn t1_j5b6yrt wrote

I kind of want to become immortal, in suffering, feel like I'm 20 forever.

1

MTORonnix t1_j5bbkxo wrote

True. Not a bad existence but eternity is a long time.

1

groveborn t1_j5bcjkm wrote

Well, I'm not using it in the literal sense. The sun will swallow the Earth eventually.

1

MTORonnix t1_j5bfgtk wrote

That is very true, but a superintelligent AI may very well be able to invent solutions much faster than worthless humans: solutions for how to leave the planet, solutions for how to self-modify and self-perpetuate. Inorganic matter that can continuously repair itself is closer to God than we will ever be.

You may like this video:
https://www.youtube.com/watch?v=uD4izuDMUQA&t=1270s&ab_channel=melodysheep

0

groveborn t1_j5c2mqy wrote

I expect they could leave the planet easily enough, but flesh is somewhat fragile. They could take the materials necessary to set up shop elsewhere, they don't need a specific atmosphere, just the right planet with the right gravity.

1

noonemustknowmysecre t1_j599vgb wrote

> The thing is in all those cases a human built and trained an Ai to do those things.

The terms you're looking for are supervised learning vs. unsupervised/self-learning. Both have been heavily studied for decades. AlphaGo learned on a library of past games, but they also made a better-playing AlphaGo Zero, which is entirely self-taught by playing against itself. No human input needed.
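To show the principle in a self-contained way, here's a toy of my own: tabular Q-learning on the game of Nim via self-play. This is not AlphaGo Zero's actual algorithm, just the same idea that the only training signal comes from games the agent plays against itself:

```python
# Nim: pile of stones, each turn take 1-3, whoever takes the last stone wins.
import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(stones_left, take)] -> learned value of that move
ACTIONS = (1, 2, 3)
EPS, ALPHA = 0.1, 0.5

def choose(stones: int) -> int:
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPS:                        # explore occasionally
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])  # otherwise play the best known move

for _ in range(50_000):
    stones, history, player = 15, [], 0
    while stones > 0:
        action = choose(stones)
        history.append((player, stones, action))
        stones -= action
        player ^= 1
    winner = history[-1][0]                 # whoever took the last stone wins
    for p, s, a in history:                 # both "players" share one Q-table
        reward = 1.0 if p == winner else -1.0
        Q[(s, a)] += ALPHA * (reward - Q[(s, a)])

# With enough self-play this usually converges on the classic strategy:
# from 15 stones, take 3 and keep leaving multiples of 4 for the opponent.
print(max(ACTIONS, key=lambda a: Q[(15, a)]))
```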

So... NO, it's NOT "all those cases". You're just behind on the current state of AI development.

−1

noonemustknowmysecre t1_j599g4u wrote

Yes. "The singularity" has been tossed about by a lot of people with a lot of definitions, but the most common usage talks about using AI to improve AI development. It's a run-away positive feedback loop.

...But we're already doing that. The RATE of scientific progress and engineering refinement has been increasing since... forever. On top of that rate increase, we ARE using computers and AI to create better software, faster AI, and faster-learning AI, just like Kurzweil said. Just not the instant, magical snap-of-the-fingers awakening that too many lazy Hollywood writers imagine.

1

Mt_Arreat t1_j58fudc wrote

You are confusing the Turing test with the singularity. There are already language models that pass the Turing test (LaMDA and ChatGPT).

4

groveborn t1_j58qdwh wrote

You might be right on that, but I'm not overly concerned. Like, sure, but I think my point still stands.

Either way, we're close, and it's just not as big a deal as it's made out to be, although it might be pretty cool.

Or our doom.

1

path_name t1_j588kh8 wrote

I agree with your assertion, and would add that humans are increasingly easy to trick due to wavering intellect.

1

groveborn t1_j58qmnw wrote

You know, I think overall they're harder to trick. We're all a bit more aware of it than before, so it looks like it's worse.

Kind of like an inverse ... Crap. What's that term for people being too stupid to know they're stupid? Words.

2

path_name t1_j591owi wrote

There's truth to that. People are good at spotting stuff like bad AI content, but when it seems human and can manufacture an emotional connection, it's a lot harder to say it's not human.

2

r2k-in-the-vortex t1_j57oh0j wrote

Yeah... that is maybe stretching it. The worthwhile thing to notice, though, is the linear trend. Never mind parity with human-translated text; if the trend continues it will reach zero editing needed, which would be really something.

Still, while language modeling is amazing, can it really be extended to more general tasks? I don't think it's such a straightforward matter. It's well documented that language models don't do arithmetic or logic; getting around that bottleneck is not trivial, and getting it to work reliably even less so. And then you need to get the AI to write some internal self-referencing, self-correcting monologue to break down and solve more complex tasks.

I don't think it's terribly clear what all the challenges involved even are. We don't really understand how our own intelligence works, so it's not like we can mimic nature here.

3

feelingbutter t1_j559vus wrote

I really wish we would stop using the term Singularity. It's an overloaded term that has lost all meaning IMHO. Projecting a trend without discussing the underlying conditions that affect it isn't very useful.

87

Key-Passenger-2020 t1_j55bn1t wrote

Yeah like, the first question I have here is "what is the scientific definition of Singularity used by this study?"

18

MarkNutt25 t1_j57hq42 wrote

Does the study actually use the term "Singularity?" Or was that an addition from the journalist who wrote this news article?

9

Surur t1_j55dlw4 wrote

They suggested that the improvement seems almost independent of the underlying technology, much like Moore's Law does not appear to depend on any specific technology.

> Our initial hypothesis to explain the surprisingly consistent linearity in the trend is that every unit of progress toward closing the quality gap requires exponentially more resources than the previous unit, and we accordingly deploy those resources: computing power (doubling every two years), data availability (the number of words translated increases at a compound annual growth rate of 6.2% according to Nimdzi Insights), and machine learning algorithms’ efficiency (computation needed for training, 44x improvement from 2012-2019, according to OpenAI).

> Another surprising aspect of the trend is how smoothly it progresses. We expected drops in TTE with every introduction of a new major model, from statistical MT to RNN-based architectures to the Transformer and Adaptive Transformer. The impact of introducing each new model has likely been distributed over time because translators were free to adopt the upgrades when they wanted.
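A toy back-of-the-envelope version of that first hypothesis (the numbers below are mine and purely illustrative, not the article's): if each unit of progress costs exponentially more and the available resources also grow exponentially, the net result is roughly linear progress over time.

```python
import math

cost_growth = 1.4      # each unit of quality-gap closed costs 1.4x the previous unit
resource_growth = 1.4  # compute/data/algorithm budget also grows ~1.4x per year

for year in range(11):
    resources = resource_growth ** year
    # Largest n such that the cumulative cost of n units fits in the budget:
    # sum_{k<n} cost_growth^k = (cost_growth^n - 1) / (cost_growth - 1) <= resources
    units = math.log(resources * (cost_growth - 1) + 1, cost_growth)
    print(f"year {year:2d}: progress = {units:.2f} units")
# Each year adds roughly the same amount of progress: a straight line.
```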

13

LeviathanGank t1_j56o78t wrote

eli5? plz i need to sleep but im interested

13

Surur t1_j56p80y wrote

They have noticed that text that has been machine translated gets more and more accurate over time, in what appears to be a very linear and predictable manner.

They predict perfect human-level translation by 2027 based on that, and believe that an AI that can translate as well as a human will presumably know as much about the world as a human.

Their explanation of the smooth linear improvement is that the underlying forces are also constantly improving (computing power, AI tools, training data).

It suggests there is a certain inevitability to the conditions being right for human-level AI in the near future.
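Mechanically, the 2027 prediction is just a straight-line extrapolation of the editing-time trend. A sketch with made-up numbers (the real figures are in the article):

```python
# "TTE" here stands in for the article's time-to-edit metric; none of these
# values are real data, they just illustrate the extrapolation.
import numpy as np

years = np.array([2015, 2017, 2019, 2021, 2023], dtype=float)
tte = np.array([4.2, 3.6, 3.0, 2.4, 1.8])   # hypothetical seconds of editing per word
human_baseline = 1.0                         # hypothetical TTE for human translations

slope, intercept = np.polyfit(years, tte, deg=1)
parity_year = (human_baseline - intercept) / slope
print(f"linear trend crosses the human baseline around {parity_year:.0f}")
```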

16

fwubglubbel t1_j56y0lo wrote

>believe that an AI that can translate as well as a human will presumably know as much about the world as a human

This sounds like nonsense. Just because a machine can translate doesn't mean it "knows" anything. (See Searle's Chinese Room.)

6

currentpattern t1_j575ja4 wrote

It would be nonsense to posit that "AGI" means a system that "understands/knows" language, in this case. What these projections seem to be saying is that around 2027, we're likely to have systems that are just as capable as humans at using language, i.e., Chinese Rooms that are indistinguishable from humans with regard to language use.

13

BitterAd9531 t1_j57z7zd wrote

The Chinese Room is once again one of those thought experiments that sounds really good in theory but has no practical use whatsoever. It doesn't matter whether the AI "understands" or not if you can no longer tell the difference.

It's similar to the "feeling emotions vs emulating emotions" or "being conscious vs acting conscious" discussion. As long as we don't have a proper definition for them, much less a way to test them, the difference doesn't matter in practice.

10

Surur t1_j57gd78 wrote

> Just because a machine can translate doesn't mean in "knows" anything

You could say the same thing of a translator then. Do they really "know" a language or are they just parroting the rules and vocabulary they learnt?

7

songstar13 t1_j57nunw wrote

You can ask a translator a question about the world and if they have knowledge on that topic then they can answer you with certainty.

Current GPT models are basically a super-powered predictive text bot that answers questions. It would be like trying to answer a question using the suggested words on your phone keyboard but far more sophisticated.

They are fully capable of lying to you or giving inconsistent answers to the question because they don't "know" anything other than patterns of word association and grammar rules.

At least, this was my understanding of them fairly recently. Please correct me if that has changed.

1

Surur t1_j57w5tj wrote

I imagine you understand that LLMs are a bit more sophisticated than Markov chains, and that GPT-3, for example, has 175 billion parameters, which correspond to the connections between neurons in a brain, and that the weights of these connections influence which word the system outputs.

These weights allow the LLM to see the connections between words and understand the concepts much like you do. Sure, they do not have a visual or intrinsically physical understanding, but they do have clusters of 'neurons' which activate for both "animal" and "cat", for example.

In short, Markov chains use a look-up table to predict the next word, while an LLM uses a multi-layer (96-layer) neural network with 175 billion connections, tuned on nearly all the text on the internet, to choose its next word.
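To make the contrast concrete, here is the whole of the look-up-table approach, on a toy corpus of my own. It only ever sees the single previous word, whereas a transformer conditions on all of the preceding text through its layers:

```python
# A bigram Markov chain: current word -> counts of what follows it.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = table[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(next_word("the"))   # 'cat', 'mat', 'dog' or 'rug'; no wider context at all
```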

Just because it confabulates sometimes does not mean it's all smoke and mirrors.

11

songstar13 t1_j58shuh wrote

Thank you for the more detailed explanation! I was definitely underestimating how much more complex some of these AI models have become.

3

stupidcasey t1_j57jn7v wrote

Yeah, especially if this article is correct and machine intelligence increases linearly; there won't be a "Singularity," just the slow obsolescence of humanity. Also, if we truly are reaching a hard limit on silicon, we may never even see that.

7

dehehn t1_j58pmoj wrote

There is quantum computing to potentially get us past that limit. And we have distributed cloud computing capacity, which means we're no longer limited by the local computing capacity of a small, confined space within a single computer.

And increasing sophistication of software doesn't necessarily require constant increases in computing power to get better results. Our brains aren't that large, and they have general intelligence and consciousness.

I don't necessarily agree with the conclusions of the article's premise, but I don't see us hitting a brick wall in progress soon.

4

stupidcasey t1_j58qtc1 wrote

Well, if AI takes exponential growth in processing to maintain linear growth in utility, like the article proposes, the amount of processing power on Earth will quickly not be enough without exponentially more transistors. That's just math.

As far as quantum computing goes, quantum supremacy has yet to be demonstrated. Also, so far, increasing the number of qubits in a quantum computer looks like it will get exponentially more difficult as the numbers grow, nullifying any gain you get from quantum computing. This is definitely not certain, but all that is to say the "Singularity™" is also definitely not certain.

4

7ECA t1_j55iinr wrote

This statement: 'Many AI researchers believe that solving the language translation problem is the closest thing to producing Artificial General Intelligence (AGI)' is complete BS. And look at the org that posted this

We are nowhere near any reasonable definition of singularity. That said, there are many reasons to believe that various forms of AI will rapidly transform our society. Not necessarily for the better

40

Deepfriedwithcheese t1_j571qw4 wrote

Communicating would naturally be a massive piece of the singularity. You can be bad at math, not know history, and even be illogical (not reason well), but having the ability to translate one's communication given countless contexts and have a good conversation would be a big win towards the singularity or "consciousness." None of us humans consciously think without language.

5

bablakeluke t1_j57mc01 wrote

Except the fundamental requirement of the singularity is an AI that can write a better version of itself. If it does that infinitely then the end result is an AGI that would eclipse anything a human can do. The problem is what we define "better" as, because any small bias could get dramatically amplified into something dangerous.

12

Trains-Planes-2023 t1_j55ce1v wrote

Progress towards _a_ singularity, not The Singularity. But very interesting nonetheless. Approaching near-perfect, real time translations to multiple languages.

36

DadSnare t1_j57egqm wrote

Exactly. Seems like there can be a singularity in one very specific thing without a paradigm shift of everything else…i think lol

12

Trains-Planes-2023 t1_j5alclu wrote

I have some (very little) experience with the under-the-hood end of machine translation, and people should not mistake this for the machine actually "understanding" language. It's literally looking at patterns of 1's and 0's and doing pattern matching based on context relative to other patterns of 1's and 0's. The machine doesn't "know" anything about language, or even what it is. It is a very fancy set of gears, that's all.

6

DadSnare t1_j5bthjr wrote

So are we! Just different, with biological architecture!

6

Code-Useful t1_j6j7ja3 wrote

I completely agree. I have limited experience with ML, which is a fascinating topic, but I believe all experts agree AGI is still far off.

Linear regression: statistical analysis on the fly. Nothing special, just number crunching to make predictions on future inputs (a minimal sketch follows this list).

Supervised learning: spoon-fed, human-curated information used to make basic inferences, but overfitting is an issue.

Unsupervised learning: better than a human at spotting unseen relations in inputs, but not always useful or correct in its correlations; overfitting is a huge problem, just like in humans.

Reinforcement learning: reward-based learning, but it also requires tons of training data and may not produce anything useful.
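As a concrete example of the first item, a minimal sketch with made-up numbers of my own:

```python
# Least-squares fit on toy data, then a prediction for an unseen input.
# Nothing special, just number crunching.
import numpy as np

hours_studied = np.array([1, 2, 3, 4, 5], dtype=float)
exam_score = np.array([52, 58, 65, 71, 74], dtype=float)

slope, intercept = np.polyfit(hours_studied, exam_score, deg=1)  # score = slope*hours + intercept
print(f"predicted score after 6 hours: {slope * 6 + intercept:.1f}")
```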

There are some great uses for ML in its current state, and ML does amazing amounts of number crunching and statistical analysis, but humans still need to supervise most of the data and inferences. Under the hood, the ML hasn't really learned anything quite the way a human brain does over many years, but chatbots are able to fake it very well. AGI seems way off still, honestly, but again, I am not deep in the industry, so I don't know.

1

Shelfrock77 t1_j55c67m wrote

“by 2030, you’ll own nothing and be happy”

We are going to lucid dream in the metaverse and make ourselves immortal by having iCloud accounts attached to our neurons and synapses.

14

andrevvm t1_j55jpnk wrote

I see this quote thrown around a lot like it’s the worst thing. Sounds like Buddhism, and as long as people are happy, what’s the harm?

To be fair, over-production, rampant consumerism and unchecked capitalism have been major contributors to this shitstorm we’ve brewed up for ourselves. Individual accumulation and the concept of ownership needs to be questioned if we want to sustain on this planet.

Sorry for the rant; not assuming your stance based on your comment, but this quote always makes me take pause.

15

coumineol t1_j55l6dg wrote

Nobody should own anything.

−6

Swordbears t1_j55nsz7 wrote

Can I own myself at least?

2

BlessedBobo t1_j55w0ja wrote

you're so old, you were alive when central park used to be a planet.
Bam! get owned

1

Tomycj t1_j55wxan wrote

Capitalism is being more and more "checked" (restricted) over time, in most places. Countries are becoming less capitalist.

The concept of ownership is a behaviour that evolved in society as a way to handle the scarcity of resources. "Nobody owns anything" means in reality "everybody owns everything", which results in chaos and wasteful use of resources. Destroying this mechanism would result in an even bigger harm to the planet, and to our quality of life.

−7

h3lblad3 t1_j58cosn wrote

There's an argument to be had about ensuring that wealth producers (i.e., working people) be compensated in proportion to their participation. But more than that, a business has its own internal politics, and if we accept that any government should be democratic, then that leaves open the question of "does a business ever govern its own affairs and actions?" and therefore "should the business, as a minor sort of government, itself be democratized?"

1

Tomycj t1_j58p0ws wrote

Democracy doesn't mean "everyone shall vote on everything." Voting is restricted to a specific set of things of a certain nature. How to run a private business is not one of them.

There are fundamental differences between a company and a government, one can't just say "a company is sort of like a small government".

Nevertheless, under capitalism people are free to create businesses run by a "democratic" vote of their members. Usually that doesn't happen, simply because in most scenarios such a system turns out to be less efficient, meaning other companies are better at satisfying the consumer.

1

h3lblad3 t1_j592qde wrote

> Nevertheless, in capitalism people are free to create businesses run by "democratic" vote of its members.

Much the same way that you're free to go start your own "democratic" country.

1

Tomycj t1_j59gxl8 wrote

No, not really. Starting a company is easier than starting a country.

1

andrevvm t1_j56dcgg wrote

For sure! I'm not in the "no ownership" camp, as going backwards like that is nearly impossible. Rather, an evolution of the concept, a general shift in mentality from "mine" to "ours," would be a good start.

−1

Optix334 t1_j572ezi wrote

It's never going to happen, and yes, it has been tried before. People want things that are their own: their own space, their own items and commodities, their own foods, etc. Hell, we even see it with "my truth" and other stupid things that shouldn't be.

No-ownership societies have been tried before. Outside of brainwashed cults and communities with less than 30 people, it never works for any significant length of time.

You can have ownership AND a good, fair society. As much as people whine about it on Reddit, the world today is actually closer to that than ever before. Get off the website and go talk to people IRL instead to see it.

3

andrevvm t1_j57wmlh wrote

A lot of those natural behaviors have been exploited and intensified, because yeah it makes a ton of money. But yeah as animals we do need our personal territories and smaller immediate units. That won’t go away any time soon.

We can't even guess what dynamics a future society will have, especially as the social landscape shifts rapidly with technological advancement. The conformist society of the '50s would have its mind blown by our hyper-individualist society, a short seven decades later. That shift was driven entirely by capitalism, as manufacturers gained the production capabilities to enable it. So what motivating factor will drive society next? Anyone's guess, really.

0

Optix334 t1_j596a4t wrote

You're talking about a system that bucks the trend of 10,000 years of recorded human behavior, and likely the same behavior for all the remaining 200,000 years of unrecorded human existence. People have always owned things and have been reluctant to share. We're defined by this trait for all of our recorded history.

Your example of the 50's is just not applicable here. It's not even close. Being conformist doesn't have anything to do with ownership. Do you even understand what being conformist in the 50s means? Using that as an example is like saying "hey cars went from blocky to sleek in 10 years. Who knows maybe they'll be flying in space soon!" - it's completely ignorant of any other factors that drove certain improvements and puts it squarely on capitalism. You sure it's entirely capitalism and not anything like the availability and increase in desire for education at the same time? It couldn't possibly be a side effect of the civil rights movement snuffing out conformity?

I could keep going. It's amazing how reductive (and frankly just dumb) Redditors are as they try so hard to blame capitalism for literally every problem.

The trend in technology is that it allows us to maintain our lifestyle preferences. It doesn't completely uproot them. Could some unfathomable change happen to flip this trend in the other direction? Sure, but it's unlikely and there are nothing but indicators of the opposite. It's about as likely as me flying to space in an SUV in the next 10 years.

1

andrevvm t1_j5b8moy wrote

You keep arguing against a no-ownership position when I clearly said that’s not what I’m talking about.

Our concept of ownership is verrry different from the native Americans’ just a few hundred years ago. The concept will shift and evolve as society shifts and evolves, and it would be nice to see it go in a more collective direction rather than the atomized path we’ve gone down. That’s ALL I’m saying. Take a deep breath and enjoy your Saturday!

0

Optix334 t1_j5cmow1 wrote

Because you're actually arguing for no ownership. You think you're not, but you're basically saying that communities "own" things, which is crap. Your Native American example is just as bunk as anything else. Not only were the tribes diverse in their customs, the vast majority believed in personal property. Some to the point that people of higher importance got better things. One example, Google "Horse Culture" among Native Americans. The same existed for almost everything and they definitely bartered along themselves with personal possessions. It's been a big topic of research and discussion for economists recently since libertarians use examples of Native American systems all the time. Maybe you're referencing how they didn't own land, but that again is a half truth. Pretty sure there are some famous stories about how the land was bought. Just generalizing the tribes like you did shows the ignorance on display.

1

Tomycj t1_j57ij18 wrote

"a shift from 'mine' to 'ours'" is too vague, it could very well mean the "everybody owns everything" thing that I mentioned, with its bad results.

1

andrevvm t1_j57mg20 wrote

A specific example could be cars. Nobody really needs to own one, they sit unused the majority of the time. The individualist/identity market has further commodified them unnecessarily.

Communal access to them has been tricky logistically, and currently somebody does need to own them to prevent chaos. However, AI and trustless networks could solve a lot of those menial and inefficient tasks. A public transportation network, that has trustless incentives to maintain and operate would be really cool to see I think. Feasibility, yeah, not so sure… requires a large behavioral shift as well.

Ownership and money are closely tied. Reducing ownership could lessen the power of money (though not make it obsolete), easing divisive political discourse and resulting in a less divided population, who are more inclined to work together as a whole to keep the wheels turning. Daydreaming here, but we don't know what future societies will look like.

2

Tomycj t1_j585851 wrote

>Nobody really needs to own one, they sit unused the majority of the time

Most of our stuff sits unused most of the time. Part of their value and usefulness is the fact that they are there, safe and ready to be used when needed.

>The individualist/identity market has further commodified them unnecessarily.

The market is simply the network of people trading stuff. Cars, like lots of things that satisfy our needs, are commodities. By "individualist," do you mean that people do not want to share?

Look, I'm not saying the current situation is the ideal, but so far, it's around the best most people can do. The market must be free precisely to improve on that situation, once the conditions allow it. There's people constantly trying to come up with a better solution, and those ideas are constantly being "tested" in the market.

AI and other new tech does have the potential to change the paradigm, to enable more efficient use of things that remain unused, without losing the benefit of safety and availability I mentioned before. But that solution doesn't necessarily have to be some sort of communal property. You seem to like to imagine that would be better, but as others said, there are problems with it, that aren't necessarily solved with more tech.

>Ownership and money are closely tied

Because money is a tool to trade more efficiently. Without it, we would have to resort to bartering. Money is not inherently bad, wanting to get rid of it should not be a motivation to eliminate the concept of property. The idea that without money society would be kinder and more organized for the greater good is just a fantasy, that would absolutely not happen. In reality, money is an important component in a system that allows for people to work together: it allows you to work for things that others want (say, making toys), in exchange for things that you want (money from the salary to buy the things you want). The system of prices (which relies on money) is an extremely powerful, decentralized way to transmit information and organize our work at large scale.

1

andrevvm t1_j587wsy wrote

I’m not here to be right friend, I’m just having fun pontificating. But thank you for reiterating that money won’t be obsolete and the detailed description of how it works!

1

Tomycj t1_j58p701 wrote

I know, I just like to discuss these things, and this sub is in part for these sorts of things: how could society work in the future.

1

h3lblad3 t1_j58dcag wrote

> Communal access to them has been tricky logistically, and currently somebody does need to own them to prevent chaos. However, AI and trustless networks could solve a lot of those menial and inefficient tasks. A public transportation network, that has trustless incentives to maintain and operate would be really cool to see I think.

Nothing stops a perfectly viable public transportation system that is all-encompassing from existing now except the utter lack of profit in running it.

>Ownership and money are closely tied.

Ownership of capital is the sign of success in our society. Money is the entity that acts as a go-between allowing for ease of pricing and exchange of different forms of capital. They are not "closely tied"; they are inseparable.

1

fwubglubbel t1_j56ysyb wrote

>“by 2030, you’ll own nothing and be happy”

I'm guessing that like every other person I have seen use it, you don't know the source or context for that quote.

3

dopechez t1_j57fdlj wrote

It really does seem to be reddit's favorite conspiracy theory at the moment

2

[deleted] t1_j58tao5 wrote

I want out of this ride. It’s not fun anymore.

2

dogchowtoastedcheese t1_j57ue4o wrote

I read this as "data from Montana." I guarantee none of us cousin-fuckers understand the first thing about this🙄.

6

sigul77 OP t1_j55949a wrote

Interesting data presented in this article. The progress towards singularity is definitely faster than I had initially thought.

5

thelionslaw t1_j595lwk wrote

Mimicry is not sentience. It’s been noted that apes who learned sign language don’t ask questions.

I’ve heard it said that proof of language fluency is when you can crack a joke (on purpose) that makes a native speaker laugh. In my experience that’s pretty true.

I’ve used ChatGPT. It’s not curious, and it’s not funny. It’s just a machine. It has no agency and no personality. There’s no life there.

5

AbstractEngima t1_j579yk3 wrote

It seems like the 2030s will be quite a different decade compared to the '00s/'20s, if we assume that the "singularity" is reached around 2028-2030.

But I'm betting that the 2030s will be more of a transitional decade, as humanity attempts to integrate the rapid advancements of the "singularity" into the functions of society. How long that will take remains to be seen.

Then by the 2040s, we'll more than likely see it as a completely different decade from the '00s/'20s, maybe unrecognizable at best.

2

Frogmarsh t1_j57c1lj wrote

Their definition of a singularity isn’t the one you’re worried about.

2

Txcavediver t1_j574obh wrote

So, the singularity is just 10 years away? Didn’t everyone say that ten years ago?

1

currentpattern t1_j5761gq wrote

I mean, time dilates as you approach a singularity, stretching to infinity at the event horizon so..
/s

9

Txcavediver t1_j576b0i wrote

I was always wondering, but now I know. That makes sense. /s

1

Pickles_1974 t1_j57a6ve wrote

Do they mean AI is close to having language? Where the F does singularity come in?!

1

dsmjrv t1_j58w2sz wrote

Never gonna happen, at least not in my children's lifetime… killer robots with rudimentary AI are deadly enough.

1

dandroid_design t1_j59nn38 wrote

Didn't know individual aspects of AI or machine learning had their own singularities. Does this mean they can start checking off the individual boxes to reach The Singularity?

1

Elmore420 t1_j5ag3yw wrote

It’s irrelevant, it’s just a simulation of Singularity. It is the Human Superego that is the quantum field that matters, and it’s every bit as far away as 10,000 years ago when it formed. We’re simply too big of narcissistic assholes to ever evolve to full Singularity and gain quantum self awareness; we’re an evolutionary failure about to go extinct.

1

Sinuminnati t1_j5bj9pk wrote

Assume we are there. What really changes for most people?

1

hlaj t1_j5ehvng wrote

Mundane tasks suddenly become unnecessary.

I sampled a bit of how it's going to be last month, after my boss returned from a conference with only a list of names and companies of the attendees. No emails or phone numbers. Basically useless info. I fed the companies into ChatGPT and asked it to go scrape and find the domain names, then append the names in the format of the emails it could find, e.g. jim.smith@whatever.com, and it generated an email list that turned out to be about 80% accurate. It was going to take the team hours to look up manually; the AI burped out the whole list in 1-2 minutes. Then I got fired for asking for my first raise after 3 years there...

1

bunnnythor t1_j57f43u wrote

I don’t know if I would be consulting data from Montana when predicting the Singularity.

0