Coomb

Coomb t1_jea0109 wrote

Because the flame doesn't only need oxygen. It also needs fuel to react with that oxygen and it needs the reaction of the fuel with the oxygen to release energy overall.

When you burn something in the air, that something is a fuel. The reason all the nearby stuff doesn't catch on fire as soon as you strike a match is that it doesn't react with oxygen in the air in a way that releases energy unless it gets hot enough, and it isn't hot enough. Of course, if it's close enough to a flame, then it too can burn, because the flame heats it up to a temperature where it will react with the atmosphere.

6

Coomb t1_jdrsw29 wrote

Did anyone say that the matrix operation of convolution, or the idea of smearing it across an image, was invented via inspiration from experimental explication of image processing as performed by animals? I don't think they did, and I would be surprised if that were true. But those references do show that the "neocognitron" was explicitly inspired by actual physical neural networks used by animals for image processing, because among other things they include the original neocognitron paper, which is very clear about its inspiration. This is relevant because review papers of convolutional neural networks like this one from University College London almost universally identify the neocognitron as the direct precursor to modern convolutional neural networks.

2

Coomb t1_jdrprjg wrote

Can you give a complete physical description of why Lego blocks fit together in particular ways? What are the fundamental physical interactions, in detail, that make it so some Legos can fit with other Legos, and some Legos can't?

You can't. Actually, nobody can, because we don't have a coherent theory that is known to correctly predict all of the interactions, at all of the scales, involved in two Legos sticking together. However, that doesn't prevent you from experimenting with Legos and observing that they come in a variety of sizes and shapes, and that some of them can stick to other Legos in one particular way while others stick in different ways. This is how people made discoveries through experimental chemistry: they had atomic theory, which provided insight at an important level into the structure of everyday substances, but they didn't need quantum chemistry to experiment with making and breaking bonds and to draw logical conclusions from experimental results.

2

Coomb t1_jd8p3ae wrote

Are you trying to draw a distinction here between the wave dynamics in a guitar string and the dynamics in a rubber band that's pulled taut enough to support oscillation? They're the same. A rubber band stretched taut between two supports and then plucked is exactly the same as a guitar string, except that it's far more compliant. Whatever reasoning explains why a guitar string still makes the same sound even if you pluck it harder applies identically to the example already given.

2

Coomb t1_jd3didl wrote

It is not true, has never been true, and probably never will be true that all genders and all sexual orientations are equally at risk from HIV. I don't at all doubt that government health material has emphasized that HIV infection is a risk for everyone, not just for gay men, but I do doubt that it has ever said both sexes and/or all genders and sexual orientations are equally at risk.

Setting aside blood transfusion and related risks (e.g., needle sticks, needle sharing) because, although very significant early in the pandemic, they are relatively unimportant as a method of transmission at the moment, one look at the relative risk of sexual behaviors will tell you that there is a certain group of people that is considerably more likely to be infected with HIV if exposed to an HIV-positive person.

Taking insertive penile-vaginal intercourse as the index risk, since it's the lowest meaningful transmission risk at 4 infections per 10,000 exposures to a known infected source, relative risks are:

  • Insertive Penile-Vaginal Intercourse: RR = 1
  • Receptive Penile-Vaginal Intercourse: RR = 2
  • Insertive [Penile-]Anal Intercourse: RR = 2.75
  • Receptive [Penile-]Anal Intercourse: RR = 34.5

Note that the absolute transmission risk is arguably quite low. Even if you are the receptive partner in penile-anal intercourse with a partner who is both HIV positive and not on suppressive drugs, the risk per sexual contact is only about 1.4%. This is, of course, why frequency and diversity of sexual contacts are also major epidemiological factors.
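To make the arithmetic concrete, here's a minimal sketch in Python using the figures above (the 4-per-10,000 baseline and the relative risks from the list; treating repeated exposures as independent is a simplification, and the function names are just illustrative):

```python
# Baseline: insertive penile-vaginal intercourse, ~4 infections
# per 10,000 exposures to a known infected source.
BASE_RISK = 4 / 10_000

# Relative risks from the list above.
RELATIVE_RISK = {
    "insertive vaginal": 1.0,
    "receptive vaginal": 2.0,
    "insertive anal": 2.75,
    "receptive anal": 34.5,
}

def per_act_risk(act: str) -> float:
    """Per-exposure transmission probability for a given act."""
    return BASE_RISK * RELATIVE_RISK[act]

def cumulative_risk(act: str, n_exposures: int) -> float:
    """Probability of at least one transmission over n exposures,
    naively treating each exposure as independent."""
    return 1 - (1 - per_act_risk(act)) ** n_exposures
```

Even the riskiest single act comes out around 1.4% per contact (0.0004 × 34.5 = 0.0138), which is why contact frequency and partner count dominate the cumulative numbers.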

You can see that there's one specific sexual activity here which is an order of magnitude riskier than the others, which is being the receptive partner in penile-anal intercourse. This activity is much, much more common among men who have sex with men (MSM) than among any other group of people. It's absolutely true that there are both men and women who have contracted, and will continue to contract, HIV from participating exclusively in vaginal sex. But the risk of any given encounter is much lower, and combined with the fact that men who have sex with men tend to have much more frequent sex and much more diversity in sexual partners, it's hard to imagine we will ever live in a world where HIV infection and AIDS diagnosis are not both grossly disproportionately common among MSM.

4

Coomb t1_jd3a5tt wrote

You can come up with just-so evolutionary stories about why a particular trait might or might not be adaptive, and therefore might or might not be selected for, for just about anything.

The specific example he gave, rabbit hemorrhagic disease virus, is pretty much a rabbit version of Ebola virus in terms of symptoms.

It was released in Australia in the mid-1990s, and rabbits have been co-evolving with it since then. This study captured wild rabbits in 2007 (meaning their ancestors had been subject to periodic outbreaks for over a decade and therefore could reasonably be anticipated to have evolved some amount of resistance, if resistance is possible), bred them a few times to get 80 rabbits, and then exposed those 80 rabbits to four different variants of the virus: the original isolate released in the mid-90s, and isolates collected in 2006, 2007, and 2009.

What they found was that, in these rabbits, the newer virus samples were considerably more deadly and also killed the rabbits considerably more quickly.

The original virus killed about 70% of all the rabbits exposed to it, with an average survival time of about 120 ± 20 hours. The 2006 sample killed 85% with a survival time of 80 ± 16 hours, the 2007 sample killed 100% with a survival time of 45 ± 2.5 hours, and the 2009 sample also killed 100%, in 50 ± 3.5 hours.

Compared to the effects of the original virus on the original wild rabbit population, the authors cite an earlier study that found:

>Cooke and Berman (2000) showed that CAPM V-351 killed 22 of 24 unselected, nonresistant Australian wild rabbits, with survival times averaging 72.5 hr for orally inoculated rabbits (and BDC, pers comm.).

It seems clear that the wild rabbits did begin evolving resistance to the original strain of the virus: although the original strain is still very deadly among wild rabbits, it's not quite as deadly as it once was. But it also seems clear that viral evolution has maintained, at the very least, the same level of virulence the virus had before it began coevolving, and perhaps increased it. There is certainly no evidence that after 30+ generations of rabbits the virus has reached a much less deadly equilibrium with the rabbits compared to its original virulence.


As far as just-so stories go, I don't find any story that HIV would certainly naturally tend to become less virulent convincing. Even in completely untreated HIV, the latency between infection and observable, behaviorally significant illness is months to years.

So you have a disease that, without modern medicine, looks like many other apparently random diseases that just occasionally kill people. After all, it isn't HIV itself that kills; it's the opportunistic infections associated with AIDS. We have been able to identify a relatively small number of characteristic illnesses that pop up in modern society almost entirely among those who are immunosuppressed because of HIV, but that doesn't mean those illnesses would also be characteristic in a pre-modern society, and it doesn't mean anybody would have the widespread health surveillance needed to identify them.

In addition to that, the most common transmission method of an HIV infection is sex, and (both currently and historically) sex is something that humans like to engage in, and engage in quite frequently on average.

The point of all that is: if you think HIV would evolve to become less virulent because virulence impedes transmission, you should consider that, other than in the terminal phase, it doesn't impede transmission, and the number of possible transmission events between infection and symptomatic illness is, for many people, in the dozens to hundreds or more. That means that even if it kills 100% of infected people within 5 years, it's never going to run out of people to kill until everybody's dead -- unless you have modern epidemiology that can identify that there's some kind of infection, what its method of transmission is, and what effective preventive measures are, and/or you can at least identify HIV infection as a specific illness and have effective medication to treat it.

20

Coomb t1_jbpcbll wrote

"Specific" was, at least originally, a generic term to indicate that the parameter being discussed has been normalized by some relevant unit to turn it from an extensive property to an intensive property. Occasionally in the context of specific heat, you will actually see people write out "mass specific heat" or "volumetric specific heat" or "molar specific heat".

People working in a particular context almost certainly just use the term specific heat to refer to whichever specific intensive property is most often relevant, so it doesn't surprise me to hear that some people use it to mean molar specific heat rather than mass specific heat.

2

Coomb t1_jb17zvk wrote

>But what keeps the train moving? I know the answer to this question is inertia, but intuitively it makes sense that there must be some force that is making the object continue to move, even at a constant velocity. I guess a better question is do we know why objects with no net force can remain in motion? Like, it makes sense to me that when net force = 0 = no net movement, but not the constant velocity part.

Why is it that when you're standing inside a train (or airplane or car) moving at constant speed, you move along with the train without having to constantly horizontally push on the floor?

According to your reasoning, you're moving at constant velocity and that means you need to be pushing on something to keep moving forward. But actually you don't have to push on anything. Does that tell you anything about your intuition with respect to motion in different frames of reference?

You may also want to contrast this experience with your experience on something like a merry-go-round, where you know that unless you are actively exerting force against a pole or something else on the merry-go-round, you'll fall off. Do you know what the key difference between these situations is?

2

Coomb t1_jay2nm6 wrote

I'm not sure what you're thinking, but the key difference between Newton's first law and Newton's second law is that Newton's first law tells you that inertia exists, and Newton's second law tells you how much the momentum of an object changes when you exert a force on it. They're not equivalent.

7

Coomb t1_jadwjsm wrote

Reply to comment by Taxoro in How old is the ISS REALLY? by gwplayer1

The satellites do get their internal clocks updated by the control segment fairly regularly, but you're right that the reason isn't the accumulation of the relativistic error.

1

Coomb t1_jadvbz3 wrote

Reply to comment by Sammy81 in How old is the ISS REALLY? by gwplayer1

As a matter of technical fact, it's not GPS time that gets adjusted with leap seconds; it's UTC. From a user perspective the difference usually isn't meaningful, because you probably want to convert between GPS time and UTC, and for that it doesn't matter whether you add the offset to one or subtract it from the other. But the satellites don't update the time they broadcast every so often to align with UTC. They've been counting seconds as accurately as they can since they started broadcasting. Instead, they broadcast, in the GPS navigation message, the offset, in integer seconds, from UTC. If you are reading time directly from a GPS message, you never have to worry about it repeating or skipping an increment; UTC technically could do either.

E: to be clear, the GPS control segment routinely updates the clocks on the satellites to maintain synchronization tight enough to meet the GPS specified error budget, but these adjustments are transparent to users and never anywhere close to entire seconds
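A minimal sketch of what a receiver does with that broadcast offset (Python; the 18-second value used in the example is the GPS-minus-UTC offset as of the 2017 leap second, but a real receiver reads it from the navigation message rather than hard-coding it):

```python
from datetime import datetime, timedelta

# GPS time and UTC coincided at the GPS epoch, 1980-01-06 00:00:00 UTC.
GPS_EPOCH = datetime(1980, 1, 6)

def gps_to_utc(gps_seconds: float, leap_offset: int) -> datetime:
    """Convert continuous GPS seconds-since-epoch to UTC.

    leap_offset is the integer GPS-minus-UTC offset broadcast in the
    navigation message; GPS time itself never repeats or skips, so the
    whole leap-second bookkeeping is this one subtraction.
    """
    return GPS_EPOCH + timedelta(seconds=gps_seconds - leap_offset)
```

Because the offset is broadcast rather than baked into the satellites' counting, the satellites keep counting uninterrupted and only the conversion step changes when UTC inserts a leap second.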

1

Coomb t1_jadsbp1 wrote

What exactly are you asking for? Are you asking for the range of irradiance under which humans can have "acceptable" vision, however you define that? If so, it's going to vary by illuminating source, which is why you can't find convenient converters online. Stare at a light source putting out 100 W in long-wave infrared and you can't see a thing, but your face will probably get warm if you're close enough. On the other hand, stare at a white LED putting out 100 W across the visible spectrum and it's going to look pretty darn bright. Unfortunately, luminous efficacy (how bright a light source of a particular wavelength appears to a human given a certain amount of emitted flux) varies not only with wavelength but also with overall lighting conditions, because the sensors in your eye used at low light have a different wavelength-dependent response than the sensors used in brighter light.

Lux is the unit of illuminance: radiative flux per unit area, weighted by how useful it is for human vision. It's the unit intended to reflect how bright a light source actually looks, without regard to how much total radiation is coming out of it. The range of human vision, in terms of being able to develop a useful visual picture from the incoming light, is roughly 10^-5 to 10^5 lux. If we make the further assumption that even at very low radiation power the eye is most sensitive to 555 nm light (which is not true) and use the luminous-to-radiative flux conversion for monochromatic 555 nm light, we have 683 lux = 1 W/m^2. These edges aren't exactly precise, so let's just use 10^3 as the conversion factor between lux and watts per square meter, in which case we get a visual range of 10^-8 to 10^2 W/m^2.
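That monochromatic conversion can be sketched in a few lines (assuming the 683 lm/W photopic peak; the function name is illustrative, and real broadband sources need a weighted integral over the eye's sensitivity curve rather than a single division):

```python
K_M = 683.0  # lm/W, photopic luminous efficacy peak at 555 nm

def irradiance_from_lux(lux: float, efficacy: float = K_M) -> float:
    """Rough irradiance (W/m^2) equivalent of an illuminance, valid
    only for monochromatic light at the given luminous efficacy."""
    return lux / efficacy
```

With this, the bottom of the visual range, 10^-5 lux, maps to roughly 10^-8 W/m^2, matching the rough estimate above.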

Again, there's no single answer to your question. It's underdetermined. But as a general estimate the above is somewhat reasonable.

3

Coomb t1_ja8c5q8 wrote

The "fallacy" of the sunk cost fallacy is people getting emotionally attached to money that has already been spent / time that's already been wasted / other resources that have already been used, and, despite knowing or having good reason to believe that future resource use is not a good investment given the current state of affairs, continuing to use resources anyway.

It is not, however, fallacious to observe that money has already been spent to make some kind of progress, that spending only a little bit more money will actually get you a useful product at the end, and that abandoning the project entirely will get you nothing of value. You are making a rational assessment to continue investing money because, with the state of affairs as it is, additional investment appears to be profitable.

Let's say, for example, that you signed a contract to buy a new Ford F-150 for a million dollars, paid in $1,000 installments, with the vehicle to only be delivered if and when the final payment is made. Otherwise you get nothing.

That would have been a stupid contract to sign. It would be stupid to keep paying on that contract whether you had already paid in $1,000, or $10,000, or $100,000, or $900,000. The value of the vehicle is not a million dollars; it's $100,000 or less. Every single payment you make up to roughly the $900,000 level is objectively a bad decision, even if you've already paid in a substantial amount of money. However, along the way, your decision to keep paying might have been driven by the sunk cost fallacy. After all, you already threw $100,000 down a hole, and if you stopped paying now, that money would just disappear to no benefit.

On the other hand, if somehow you inherited the right to be sold the F-150 knowing that only a single $1,000 payment needed to be made to actually get the car, it would not be fallacious reasoning to make that payment. It wouldn't make it fallacious if you observed that the $999,000 already spent are a sunk cost. You would be making a rational decision to spend $1,000 in return for an F-150. Somebody else might have made a bad decision to pay up to that point, but it's not a bad decision to pay just a little bit more money to get a useful product at the end.
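The forward-looking decision rule in that example is small enough to write out; a sketch (the function name is just illustrative):

```python
def should_keep_paying(item_value: float, remaining_cost: float) -> bool:
    """Rational forward-looking rule: continue only if what you'll get
    is worth more than what you still have to pay. Money already spent
    (the sunk cost) appears nowhere in the decision."""
    return item_value > remaining_cost

# Inherited contract: a $100,000 truck for one more $1,000 payment -> pay.
# Early in the original contract: a $100,000 truck for $900,000 more -> stop.
```

The point is that the same rule gives different answers at different points in the contract, without ever consulting how much has already been paid.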

13

Coomb t1_j9zs84p wrote

The limited travel of the hydraulics, pneumatics, or other actuators driving the ride cab means that the ride cannot provide anything other than 1g, normal gravity, indefinitely. However, that does not mean that the ride cannot provide higher or lower than normal gravity for relatively brief periods of time. All it needs to do is be able to drive the cab with enough force to exceed its weight.

2

Coomb t1_j9zpiux wrote

The pudendal nerve provides sensation (and motor control) in the genital area, including, as especially relevant to your question, the clitoris and penis. Various branches provide sensation to the rest of the genitalia (scrotum and vulva), the anal canal, the perineum, the pelvic floor, and surrounding areas of the legs (as well as other anatomical features in the area). In some people it is itself a branch of the sciatic nerve, but in general it is certainly possible to have neurological damage affecting the sensory nerves in most of either or both legs without affecting the sensory nerves for the genitals.

Please note that anatomy is complicated and actually varies from person to person and the pudendal nerve isn't necessarily the only sensory or motor nerve for the areas I mentioned.

2

Coomb t1_j9wg3jb wrote

You're right. The positive or negative sign in that expression is just a bookkeeping convention and doesn't really have any further consequences. In some sense, it's a one-dimensional problem (the "heat dimension"). An EM analog might be nodal current analysis. For the purposes of analyzing nodal currents, it doesn't actually matter if you say that the sum of all the currents is zero, or if you say that the sum of currents flowing in, minus the sum of currents flowing out, is zero (and call all of the currents positive). In either case, you're preserving the information about whether something is going in or going out, just with a different system of bookkeeping -- are the negative signs attached to each current individually, or a group of currents that you identify and sum?
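The equivalence of the two bookkeeping schemes is easy to demonstrate numerically; a sketch with made-up currents:

```python
# Currents at a node satisfying Kirchhoff's current law.
inflows = [2.0, 3.0]    # amps flowing into the node
outflows = [1.5, 3.5]   # amps flowing out of the node

# Convention 1: attach a sign to each current; everything sums to zero.
signed = inflows + [-i for i in outflows]
assert abs(sum(signed)) < 1e-12

# Convention 2: keep magnitudes positive; "in minus out" sums to zero.
assert abs(sum(inflows) - sum(outflows)) < 1e-12
```

Both conventions encode the same in/out information; only where the negative signs live differs.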

Once you start involving multiple spatial dimensions, as is common in EM problems, of course your choice of sign convention has more implications downstream.

2

Coomb t1_j9v6ofp wrote

What I will call the mechanical engineering convention (i.e., work done by a system is positive), since it was either exclusively or overwhelmingly the convention used in mechanical engineering when I was taking my classes 10 years ago or so, has an (arguable) pedagogical advantage when introducing enthalpy, which is an extremely commonly used parameter in mechanical engineering and probably in most other branches of engineering.

By definition, H = U + PV. If we assume that the internal energy is the only parameter of the working fluid that is changing, and not other things like its gravitational potential energy or its bulk kinetic energy, the engineering convention equation for the change in energy associated with heat addition and work performed is dU = dQ - dW.

The differential form of the enthalpy is dH = dU + d(PV) = dU + PdV + VdP.

Substitute and you have dH - PdV - VdP = dQ - dW. Make the further assumption of constant pressure (dP = 0) and then

dH - PdV = dQ - dW

The pressure work done by the fluid as its enthalpy increases and the work done by the system on its surroundings have the same sign. This makes it more obvious that the PdV component is the amount of enthalpy "lost" by allowing the fluid to expand against external pressure.

The toy problem that is usually used to introduce this is gas in a well insulated cylinder with a piston head held down by weights on the head. What happens when you add heat to the gas? Some of the energy goes into increasing the temperature (and therefore internal energy) - obviously the gas heats up. But just measuring the internal energy of the gas before and after you've added heat to it doesn't accurately tell you how much heat you added. This is of course because some of the heat also goes into raising the weight on the piston against gravity. For people who are mechanically inclined, this is a relatively intuitive physical scenario, and it helps illustrate why enthalpy is a more useful parameter for many engineering problems than internal energy alone.
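The toy problem can be checked numerically for an ideal gas; a sketch using molar quantities and the ideal-gas relation cp = cv + R (function name illustrative):

```python
R = 8.314  # J/(mol K), universal gas constant

def heat_added_constant_pressure(n_mol: float, cv: float, dT: float):
    """Heat added to an ideal gas at constant pressure, computed two
    ways: directly as dQ = dH = n*cp*dT with cp = cv + R, and as
    dU + P*dV, where the ideal gas law gives P*dV = n*R*dT at
    constant pressure."""
    q_via_enthalpy = n_mol * (cv + R) * dT
    q_via_energy = n_mol * cv * dT + n_mol * R * dT  # dU + P dV
    return q_via_enthalpy, q_via_energy
```

Both routes agree, and the n*R*dT term is exactly the energy that goes into lifting the weighted piston rather than heating the gas.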

2

Coomb t1_j9u6gq4 wrote

Plenty of engineering and physics texts define the change in system energy dE (or dU, depending on terminology and assumptions) as dQ - dW; that is, they treat work done by a system on its surroundings as positive.

Here's one example.

https://web.mit.edu/16.unified/www/FALL/thermodynamics/thermo_4.htm

As far as the reason why, the answer is that when you're working from the perspective of a machine using a working fluid, it's natural to conceive of the working fluid as gaining energy when heat is added and losing energy when it's doing work. We usually talk about engines as being rated for output in terms of Watts or horsepower and not negative Watts or horsepower. In your convention, the engine's work is negative.

3

Coomb t1_j9c6d45 wrote

>Of course there are options. They way you're thinking about choice here would render commonplace statements like "I could climb that fence but I don't feel like it" incoherent nonsense, because there wasn't any future in which I would have chosen to do so. That's a strong indication that you're operating with a notion of choice that doesn't line up with the what people generally mean by choice.

Nobody, or at least certainly not me, is going to deny that there is a strong subjective perception of choice in some situations. It seems like you choose whether to go to a party or not, or how much you think you need to study to pass an exam.

It's also obviously true that there are mental states which we are consciously aware of not choosing. People generally don't choose to be sexually attracted or not sexually attracted to someone. They don't choose whether they "click" with someone and become friends. They don't choose whether they prefer to stay in all night watching Netflix or go out to bars.

I think the obvious truth that we generally don't choose our preferences is inherently problematic for the common concept of free will.

>Choices are morally relevant where they give information about the decision maker, and that's where there are a number of options to take under a quite mundane sense of "option". There's a difference between jumping a fence because I wanted to and jumping one at gunpoint regardless of whether the universe is deterministic or not.

That's a weird definition of morally relevant. When I choose to eat vanilla ice cream instead of peanut butter, you're getting information about my preferences. When I choose to murder somebody or refrain from murdering them, you're getting information about my preferences. But most people would say that my ice cream choice isn't morally relevant but my murder choice is. Can you explain what makes you think your definition is sensible?

>A deterministic universe doesn't forbid mental processes from affecting physical processes when mental processes are understood as physical process. But really, you didn't answer my question here. I don't see how even a dualistic universe helps allow free will to exist. What additional factor into a choice does it allow for that wasn't already there?

If it is true that the universe is entirely physically deterministic, then there is no way to distinguish between the processes of the brain which give rise to mental states, including thoughts and choices, and simpler deterministic mechanical systems like internal combustion engines or computers. We do not have the intuition that an internal combustion engine is morally responsible for its actions, or that it is making any choices at all. The same is generally true of computers, at least until we developed computer programs sophisticated enough to trick people's pattern recognition algorithms into interpreting stimulus from a computer as stimulus from a mind. But even where that trick is effective, people are generally at least intellectually aware that everything that's coming out of the computer is predetermined by the motion of electrons and other purely mechanical processes, and by analogy to other machines, that's a pretty convincing argument to most people that chatGPT isn't actually a mind.

>Again, you seem to be saying that for a choice to be free it must be made on the basis of something other than your character, experiences, beliefs, facts of the situation, and random chance. What else needs to influence it for it to be free and how does a nondeterministic universe allow for that when a deterministic one doesn't? So far you've just said that it means mental processes can be nondeterministic but why's that supposed to help?

Most people conceive of free will as existing in the universe where there is a possible counterfactual to a choice. If I choose to eat broccoli instead of cauliflower, the word "choose" only makes sense if there is a possible world in which I ate cauliflower, but based on my mental processes, I influenced the world to become one where I ate broccoli. If there was never a possibility that I would "choose" cauliflower, I didn't make a choice. All that happened was the universe evolved as it was always going to. My mental processes didn't have any effect on the outcome.

In other words, a choice is the ability to actually change the future state of the universe via internal mental processes.

If the universe is entirely physical and deterministic, that's impossible to do. Everything that happens was fundamentally determined by the initial state of the universe and the rules that the universe follows. It is impossible for me to change the universe through choice, precisely because the outcome of my mental processes, which are instantiated in my brain, is entirely physical and predetermined by everything else. There is no "me" to "choose," for the same reasons that we don't think of water choosing to flow downhill or an engine choosing to run.

The only way free will can exist is if my mental processes are not entirely predetermined by the history of the universe up to the current point. Only that allows me to change the pattern of activation of neurons in my brain, central nervous system, and muscles so that I can effectuate my genuine preference. Otherwise my body is a mechanism, and everything that happens in the mechanism is fully automatic.

>I think that an nondeterministic universe poses problems for free will, because it means a less strong connection between beliefs/experiences and deliberation, as well as deliberation and action. Of course someone would make the exact same decision every time in the exact same situation: that decision is a reflection of who they were at the time. Why would we ever expect anything else? And to the extent a decision isn't reflective of who they are, it's less morally relevant!

As I said above, if the universe is fully physical and fully deterministic, its evolution in time is predetermined and therefore there is no choice by anyone about anything. People are just like any other composition of matter, and their activities are just like the activities of processes we generally don't consider conscious or mental, like atoms bonding with each other, or water flowing downhill. Only if our actions are somehow not fully determined by the physical universe, but rather can actually be changed by our conscious control of our mental state, can we make choices. You're right that free will then requires our mental processes to be fully determinative of our bodily actions.

>I lean towards what's sometimes called "hard compatibility": that far from being incompatible with determinism, free will in fact might require it.

Free will obviously requires that we, at at least some times and in at least some ways, be able to affect the physical universe through our mental processes, including and especially conscious choices. Otherwise, at best, we would be consciousnesses trapped in our bodies.

But in the sense that people commonly understand it, it also requires that the universe not predetermine our choices. It requires that we make choices of our own volition and not simply because a particular subatomic particle was close to another particular subatomic particle at the attosecond after the Big Bang.

3