atremblein t1_j52tm5s wrote

There is no logical link between these two correlations... "Results show that a quantifiable and meaningful portion of COVID-19 vaccine side-effects is predicted by vaccine hesitancy, demonstrating that side-effects comprise a psychosomatic nocebo component in vaccinated individuals." My friend went to the hospital after her vaccine, and she thought vaccines were good and would help her!

Obviously, even if there is a statistical relation between two things, that doesn't mean one caused the other. That this is even being published shows how biased science has become... obviously things have side effects, and those have an effect. You can't blame everything on a psychosomatic nocebo effect just because side effects exist. A more logical conclusion would be that maybe their genes or immune systems function differently and that causes the side effects; blaming a person for their response, and using that to justify how information is given to the public, makes no sense.


atremblein t1_j0l7fxm wrote

Here is tl;dr:

“The optimal dose of exercise is not yet known, but it is likely to be 20-plus minutes each day and must include resistance training to grow the muscles, increase the size and capacity of the internal pharmacy, and stimulate the myokine production,” he said.

“This study provides strong evidence for the recommendation patients with prostate cancer, and likely anybody with any cancer type, should perform exercise most days, if not every day, to maintain a chemical environment within their body which is suppressive of cancer cell proliferation.”

From the article.


atremblein t1_j05ced8 wrote

Due to the reductionist nature of the binary operations through which most computation is done, it is simply not possible for an AI to be malevolent beyond what it was trained to be based on its data. A fusion of analog and binary computation would be needed for an AI to have enough consciousness to be considered capable of malevolence.

As an example, I was talking to OpenAI's ChatGPT and could easily get it to contradict itself by constraining the amount of information it was considering. Otherwise, it likes to make everything sound like there is no truth, as if the nature of reality cannot be quantified. This is obviously problematic because it means humans have become too biased to discern the truth. This is why we have things like evolutionary anomalies from the Omicron variants. These things evolved right in front of us, to the point where our own vaccines don't even work. So basically, humans are really stupid and don't understand anything, and all we can do is keep trying.