Czl2

Czl2 t1_jcjmm91 wrote

The main advantage of paper books is that you can still easily read them in bright sunlight and they do not need to be charged.

Ebooks. I love having a library in my pocket on my mobile, which I can also read on larger displays. While walking or driving, my mobile can read any book I like to me. I control the fonts and sizes. I can highlight terms for quick lookups or notes. Full-text search inside the books is easy. Quoting or sharing passages from ebooks is easy.

The paper book vs. ebook question is like the typewriter vs. word processor question. If you are used to the older technology you may be fond of it despite, or even because of, its limitations, simply because of how it makes you “feel”.

4

Czl2 t1_j9030la wrote

Reply to comment by CypherLH in Emerging Behaviour by SirDidymus

> I’ll grant there is a gap there….. but it actually makes the whole thing weaker than I was granting…

What you described as the Chinese room argument is not the commonly accepted Chinese room “argument”. Your version was about “intelligence”; the accepted version is about “consciousness” / “understanding” / “mind”, regardless of how intelligent the machine is.

Whether the commonly accepted Chinese room argument is “weaker” is difficult to judge due to the difference between them. I expect judging whether a machine has “consciousness” / “understanding” / “mind” will be harder than judging whether that machine is intelligent.

To judge intelligence there are objective tests. Are there objective tests to judge “consciousness” / “understanding” / “mind”? I suspect not.

> cause I don’t give a shit about whether an AI system is “conscious” or “understanding” or a “mind”, those are BS meaningless mystical terms.

For you they are “meaningless mystical terms”. For many others these are important aspects that they believe make humans “human”. They care about these things because these things determine how mechanical minds are viewed and treated by society.

When you construct an LLM today you are free to delete it. When you create a child, however, you are not free to “delete it”. If human minds are ever judged to be equivalent to machine minds, will machine minds come to be treated like human minds?

Or will human minds instead come to be treated like machine minds, which we are free to do with as we please (enslave / delete / ...)? When human minds come to be treated like machines, will it make sense to care whether they suffer? To a machine, what is suffering? Is your car “suffering” when its check engine light is on? It is but a “status light”, is it not?

> What I care about is the practical demonstration of intelligence; what measurable intelligence does a system exhibit. I’ll let priests and philosophers debate about whether its “really a mind” and how many angels can dance on the head of a pin while I use the AI to do fun or useful stuff.

I understand your attitude since I share it.

2

Czl2 t1_j8v60kl wrote

Reply to comment by CypherLH in Emerging Behaviour by SirDidymus

Visit Wikipedia or Britannica encyclopedia and compare what I told you against your understanding. I expect you will discover your understanding does not match what is generally accepted. Do you think these encyclopedias are both wrong?

Here is the gap in bold:

> As I pointed out before, if you accept its premise then you must accept that NOTHING is 'actually intelligent' unless you invoke something like the "vitalism" you referenced and claim humans have special magic that makes them...

The argument does not pertain to intelligence. To quote my last comment:

>> The argument says no matter how intelligent it seems a digital computer executing a program cannot have a "mind", "understanding", or "consciousness".

Do you see the gap? Your concept is "actually intelligent". The accepted concepts are: "mind", "understanding", or "consciousness" regardless of intelligence. A big difference, is it not?

1

Czl2 t1_j8umouq wrote

Reply to comment by CypherLH in Emerging Behaviour by SirDidymus

> Interesting points though I personally detest the Chinese Room Argument since by its logic no human can actually be intelligent either…

I suspect you have a private definition for the term “intelligent”, or else you misunderstand the Chinese Room argument. The argument says that no matter how intelligent it seems, a digital computer executing a program cannot have a "mind", "understanding", or "consciousness".

> unless you posit that humans have something magical that lets them escape the Chinese Room logic.

Yes, the argument claims there is something magical about human minds such that the logic of the Chinese Room does not apply to them, and this part of the argument resembles the discredited belief in vitalism:

>> Vitalism is a belief that starts from the premise that "living organisms are fundamentally different from non-living entities because they contain some non-physical element or are governed by different principles than are inanimate things."

1

Czl2 t1_j8txkgb wrote

Reply to comment by CypherLH in Emerging Behaviour by SirDidymus

> Ok, fair enough. I still think using any sort of mirror analogy breaks down rapidly though. If the “mirror” is so good at reflecting that its showing perfectly plausible scenes that respond in perfectly plausible ways to whatever is aimed into it…is it really even any sort of mirror at all any more?

Do you see above where I use the words:

>> These language models are obviously not mirrors but they actually are mirrors if you understand them.

Later in that comment I describe them as “fantastically shaped mirrors”. I used those words because, much like the surface of a mirror, once trained an LLM’s weights are “frozen” -- given the same inputs they always yield the same outputs.

The static LLM weights form a multidimensional manifold that defines this mirror’s shape. If we switch from electrons to photons to represent the static LLM weights, they may indeed be represented by elementary components that act like mirrors. How else might the paths of photons be affected?

Another analogy for LLMs comes from the Chinese room thought experiment: https://en.wikipedia.org/wiki/Chinese_room Notice however that fantastically shaped mirror surfaces can implement lookup tables, and computation at a fundamental level involves the repeated use of lookup tables -- when silicon is etched to make microchips, we are etching circuits that implement lookup tables.

An LLM’s weights are a set of lookup tables (optimized during training to best predict human language) which, given some new input, always map it to the same output. Under the hood there is nothing but vector math, yet to our eyes it looks like human language and human thinking. And when you cannot tell A from B, how can you argue they are different? That is what the Turing test is all about.
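The lookup-table claim is easy to demonstrate with a toy example (nothing to do with any actual LLM internals): a single NAND truth table, applied repeatedly, is enough to compute any Boolean function, and the mapping is as frozen and repeatable as a mirror’s surface:

```python
# Toy illustration: computation built purely from lookup tables.
# NAND is functionally complete, so chained NAND lookups can implement
# any Boolean function -- here, XOR.
NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def xor(a: int, b: int) -> int:
    # XOR expressed as four chained NAND table lookups.
    n1 = NAND[(a, b)]
    return NAND[(NAND[(a, n1)], NAND[(b, n1)])]

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

Like a frozen set of weights, the tables never change after “training”: the same input pair always reflects back the same output.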

For a long time now transhumanists have speculated about uploading minds into computers. I contend that these LLMs are partial “mind uploads”. We are uploading the “language patterns” of all the minds that generated what the models are trained on. The harder it is to distinguish LLM output from what it was trained on, the higher the fidelity of this “upload”.

When DNA was first sequenced, most of the DNA was found to be common person to person, and we learned that the fraction of DNA that makes you a unique person (vs. other people) is rather small. It could be that with language and thinking the fraction that makes any one of us unique is similarly small. The better LLMs get at imitating individual people, the more we will know about how large or small these personality differences are.

1

Czl2 t1_j8rc3xr wrote

Reply to comment by CypherLH in Emerging Behaviour by SirDidymus

> The mirror analogy doesn’t hold up. LLM’s are NOT just repeating back the words you prompt them with. They are feeding back plausible human language responses.

Did I say LLM are just repeating back the words you prompt them with? Why then reply as if I said this? Please read my comments again and paste the words that made you believe I said this so that I can correct them.

Here are the words above that I used:

>> These language models are obviously not mirrors but they actually are mirrors if you understand them. A mirror in response to what is in front of it always returns a reflection from its surface -- a surface that need not be flat.

> It would be like a magic mirror that reflects back a plausible human face with appropriate facial emotive responses to your face…that wouldn’t just be a reflection.

Do you see where above I use these words:

>> In response to a context these language models return "a reflection" from their hyperdimensional manifold of "weights"; these weights act like a fantastically shaped mirror that was designed to minimally distort whatever data the model was trained on.

When you hear the words “fantastically shaped mirror”, do you think I am describing a simple flat mirror? A fantastically shaped mirror -- perhaps another term for that is a “magic mirror”? A magic mirror is a mirror, is it not?

> The mirror analogy doesn’t hold up.

The mirror analogy is the best I can come up with so far. Do you have a better analogy?

1

Czl2 t1_j8r8hxu wrote

Reply to comment by SirDidymus in Emerging Behaviour by SirDidymus

> What I’m interested in is not so much the reflection you’re describing, but what other reflections appear that were not intended and emerge independently.

These language models are trained to predict their training data which is all the human writing the developers of these models could obtain and use for training.

The reflections that appear that were not intended and emerge independently are the mistakes the models make by which you can tell what they generate does not come from a human.

As these models grow in size and improve there will be fewer and fewer of these mistakes till at some point it will not be possible to tell their language from that generated by humans.

You asked for:

>> emerging and unexpected behaviour of recent models.

And you listed examples:

>> * Theory of Mind presenting itself increasingly
>> * Bing reluctant to admit a mistake in its information
>> * Bing willingly attributing invalid sources and altering sources to suit a narrative
>> * Model threatening user when confronted with a breaking of its rules
>> * ChatGPT explaining how it views binary data as comparable to colour for humans

You would expect these behaviours in human language, would you not? So why would you not expect them in language from models trained to imitate human language?

Imagine I told you that my mirror showed me my face smiling; would you be surprised? Likely not.

(1) “Did the one who constructed the mirror ‘intend’ that it would show me my smile?”

(2) “Did my smile emerge ‘independently’?”

Do these two questions make sense in reference to a mirror?

−1

Czl2 t1_j8r42ul wrote

These language models have been trained to predict what language humans will use in a given context so is it surprising that their language feels human? When a mirror shows you your own behavior does that surprise you? Likely not.

These language models are obviously not mirrors but they actually are mirrors if you understand them. A mirror in response to what is in front of it always returns a reflection from its surface -- a surface that need not be flat.

In response to a context these language models return "a reflection" from their hyperdimensional manifold of "weights"; these weights act like a fantastically shaped mirror that was designed to minimally distort whatever data the model was trained on.

6

Czl2 t1_j4zqan4 wrote

Ask the model to summarize whatever is about to be cut off as you slide the token window, and replace what is lost with that summary? In this way your token window always carries a summarized version of what is missing?
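A minimal sketch of that idea, using characters as a stand-in for tokens and a placeholder `summarize` function (a real system would call the model itself here):

```python
# Sketch of a sliding context window that keeps a summary of dropped text.
# `summarize` is a hypothetical stand-in for a real summarization call.

def summarize(text: str) -> str:
    # Placeholder: a real implementation would ask the model to compress this.
    return "[summary of %d chars] " % len(text)

def slide_window(history: str, window: int) -> str:
    """Keep the tail of `history` that fits, prefixed by a summary of the rest."""
    if len(history) <= window:
        return history
    dropped, kept = history[:-window], history[-window:]
    return summarize(dropped) + kept

print(slide_window("a" * 100, 40))
```

Note that in practice the summary itself consumes window space, so the kept tail must shrink by the summary’s token count.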

8

Czl2 t1_iyaa3k7 wrote

> Why can't you just not eat if you're overweight?

For a while (even a long while!) you certainly can, but your body comes with safety overrides (psychological and physical) that can defeat the willpower most people have.

Can you hold it when you have to pee? Sure you can. But it will be unpleasant and eventually you will either give up or be tortured by your decision until you die.

If you have ample fat reserves and resist eating, your death will take much longer, but it is still inevitable and will not be pleasant.

1

Czl2 t1_iy1jgw1 wrote

> culture and societal norms play a massive role in moulding behaviour, and completely drown out the biological differences.

See:

https://www.reddit.com/r/TwoXChromosomes/comments/z6h0fu/stats_came_out_that_out_of_606_mass_shootings_in/

Do you disbelieve these statistics?

Could this be "Objective Example #3" that it is NOT true that "culture and societal norms play a massive role in moulding behaviour, and completely drown out the biological differences"?

Replace your word “completely” with, say, 50-60%, and we would be closer to agreement about the situation today.

What about the future?

Men and women continue to gain ever greater control over their bodies via technology. We already see the impact birth control technology has. Now imagine technology to control your own "dimorphic temperament" (ie behavioral differences between men and women) or technology to genuinely change your sex or even migrate between bodies as you might switch cars today.

In such a future, “culture and societal norms” will clearly “play a massive role in moulding behaviour”, because the “biological differences” that exist today will begin to cease to exist. I think few can imagine how interesting that future might be, much like African nomads from thousands of years ago could not imagine our reality today.

1

Czl2 t1_ixyadke wrote

I agree with you perhaps ~80%.

Since I enjoy provoking people to think I will make some remarks about what you said to see how you react to them.

> However, you need to be really careful analysing that sort of thing,

/r/ExplainLikeImFive is for those who want a simplified analysis, is it not?

What is the context for this conversation?

> because culture and societal norms play a massive role in moulding behaviour, and completely drown out the biological differences.

What place on planet earth are you describing?

The biological differences that I listed are: “hassle with peeing, monthly menstruation, risk of pregnancy, actual pregnancy, giving birth, breast feeding, menopause”, also “physically smaller and weaker sex”, and “living longer”.

Surely these biological differences exist in Portugal, do they not? Did you really mean to say that ‘culture and societal norms … completely drown out the biological differences’? You strike me as a smart person. Surely you are exaggerating or meant something else by your words. People often believe to be true what they prefer to be true. Perhaps what you wrote is what you prefer to be true (even if it is not)? That I can understand. I too would prefer what you wrote to be true.

> By way of example, you get a lot of US-based people arguing that men are just better at STEM topics than women, but my university (in Portugal) didn’t really match that at all. Overall there were more males than females, but the difference wasn’t anywhere near as big as in the US.

“By way of example, most will argue that women are just better at wearing skirts than men, but opinion in my Irish town (where kilts are popular) doesn’t really match that at all.”

The academic preference examples you shared depend on subjective judgement much like what clothes you pick and how others judge you for it. What I am trying to show with my imperfect example above is that what culture is and what people argue is somewhat arbitrary and varies from place to place. I prefer objective examples and will share two objective examples below for you to think about.

First, a fact about men vs. women: the difference on average is tiny. Most men and women are close to average, there is not much difference between them, and you can find plenty of examples of one or the other being better at X, or taller, thinner, smaller... That said, the tiny difference in the averages makes a huge difference in the tails of the distribution. Shift a normal distribution by a tiny amount and compare the area under the curve above a high threshold before and after your shift. Ever try that? What happens? You see this difference in the tails in many places.
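Here is a quick way to see the tail effect, using the standard normal survival function (the 0.2-standard-deviation shift and the cutoff of 3 are arbitrary illustration values):

```python
import math

def tail(threshold: float, mean: float = 0.0, sd: float = 1.0) -> float:
    """P(X > threshold) for a normal distribution, via the error function."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

base = tail(3.0, mean=0.0)    # ~0.00135: fraction above the cutoff, no shift
shifted = tail(3.0, mean=0.2)  # ~0.00256: same cutoff after a tiny 0.2-sd shift
print(shifted / base)          # roughly 1.9x as much mass above the cutoff
```

A shift of just a fifth of a standard deviation nearly doubles the population beyond three standard deviations, and the ratio grows even larger at higher cutoffs.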

Objective Example #1: On average the best men are objectively far better than the best women at most (but not all) physical activities such as sports. Would you apply your “culture and societal norms play a massive role in moulding behaviour, and completely drown out the biological differences” explanation to physical sports? What do you predict would happen if gender-specific competitions were merged? Do you think gender-specific competition is due to leftover sexism? Likely not.

Objective Example #2: The other place I see a large men vs. women gap is in videos of people doing dumb stuff. You can see such videos all over the internet and /r/whatcouldgowrong collects them. They show the riskiest of the risky, the most dangerous of the dangerous, the dumbest of the dumb. Men dominate in these video clips. Here too, do you think ‘culture and societal norms … completely drown out the biological differences’? Are men doing dangerous, risky, dumb stuff around the world due to culture? Do you think their mothers teach them that culture? Do you think their wives and sisters promote that culture? Where does that culture come from? What does that tell you about men?

What is your “really careful analysis” of these two examples?

EDIT: Spelling.

0

Czl2 t1_ixx5gjh wrote

Agreed. Men can just open their zipper. For women (as with many things pertaining to their bodies) it's more hassle, so they need to think ahead. Could be why many women are better at this than men.

EDIT: Do women not have more hassle with peeing, monthly menstruation, risk of pregnancy, actual pregnancy, giving birth, breast feeding, menopause, … ? Also, being the physically smaller and weaker sex tasked with child rearing, would the evolution of sex differences in the species not predict women to have better forward thinking? When you see /r/whatcouldgowrong videos of people doing dumb stuff, what is the ratio of men vs. women being dumb in these videos? Who lives longer? How do you explain it? Is there no possible connection to women being women and men being men?

−2

Czl2 t1_ixtse2i wrote

> Bridge to Terabithia is one of those movies that haunts me to this day even now as an adult.

The film teaches viewers about death. It teaches the pain of loss and the devastation you feel. Having your psyche immunized / desensitized by fiction can make it stronger when similar situations arise in your life.

Your parents will die, your spouse and/or children will die, you will die. We may not like to think about it but all this is inevitable. All these events can damage your psyche.

Consider the film a vaccine to prepare your psyche to cope with dire life situations. By watching the film you are making a trade: short-term pain for long-term gain.

> Incredibly beautiful movie

100% agree!

38

Czl2 t1_ixqlc76 wrote

On the scale of things to be sad about, lack of knowledge about the space events you listed does not rank high.

For example, I suspect most would rank above your concerns knowledge about the cancer or whatever other malady happening inside their body that will prematurely end their life. Would you not trade the things you listed for that knowledge?

We are not in /r/Medicine but /r/space so to fit the interests of this community consider the space event X that will inevitably (and perhaps relatively imminently) end our biosphere on earth. Can anything be more worrying than a lack of knowledge about X?

3

Czl2 t1_ix8ze2e wrote

> Either that expansion happens with less than light speed and the distance is impossible or it expands faster and the light wouldn't reach us.

You raise a good point which shows me you are starting to understand.

Consider that the rate of expansion is not constant but accelerating.

As time passes things that are ever closer to us will start to move away from us at the speed of light and what was once visible will no longer be.

The horizon at which the rate of expansion is at light speed is getting closer and closer to us:

https://www.space.com/einstein-gravity-variations-dark-energy

Your happy thought for today is that all you see and know will not end in a 'big crunch' but instead may be torn apart at the fundamental particle level in a 'big rip':

https://en.m.wikipedia.org/wiki/Big_Rip

Do not panic! This is not yet certain, as 'heat death' may happen first. I believe only the 'big crunch' ending has been ruled out, so at least you can be happy about that.

https://en.m.wikipedia.org/wiki/Ultimate_fate_of_the_universe

1

Czl2 t1_ix7f4y6 wrote

> But if the universe's "surface" is expanding faster than light how do I observer something?

Those parts of the universe are not visible as they are now. However, light travels at a finite speed, and what you are looking at today is what was happening when the light you see now started traveling towards you long ago.

1

Czl2 t1_ix71ioi wrote

> Someone tell me how my friend and I began driving in opposite directions at 60 miles per hour two hours ago, yet now we’re 240 miles apart?

You are driving away from each other on the surface of a rapidly inflating balloon. Your velocity away from each other needs to be added to how fast the balloon surface is stretching / growing. Even if you both stop your cars you will continue moving apart due to the continuing inflation.
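The balloon picture can be put to numbers with the Hubble law, v = H0 × d: recession speed grows in proportion to distance, so far enough away it exceeds light speed even though nothing is moving fast locally. A rough back-of-envelope sketch (assuming H0 ≈ 70 km/s/Mpc, a commonly quoted value):

```python
# Distance at which Hubble-law recession reaches the speed of light,
# assuming H0 ~ 70 km/s/Mpc (the exact value is still debated).
C_KM_S = 299_792          # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s per megaparsec
MLY_PER_MPC = 3.262       # million light-years per megaparsec

d_mpc = C_KM_S / H0                 # ~4300 Mpc
d_gly = d_mpc * MLY_PER_MPC / 1000  # convert to billions of light-years
print(round(d_gly, 1))              # ~14.0
```

Under these assumptions, anything beyond roughly 14 billion light-years is receding from us faster than light; the precise figure depends on the cosmological model used.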

7