Hrmbee OP t1_j506k4a wrote

A direct link to the journal article for those who are interested:

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2800490

Key Points:

>Question Does body weight modify metabolism and response to vitamin D supplementation?
>
>Findings In this cohort study, a subset including 16,515 participants of the VITAL randomized clinical trial, established and novel vitamin D serum metabolite levels were on average lower at higher body mass index. Supplementation increased vitamin D levels less over 2 years at higher body mass index.
>
>Meaning Previous trials observed reduced efficacy of vitamin D supplementation for outcomes of cancer, diabetes, and others, in subsets of participants with higher body mass index; the findings of this cohort study suggest this may be due to a blunted metabolism and internal dose at higher body weights.

7

Hrmbee OP t1_j5065zz wrote

From the article:

>The study is a reanalysis of the VITAL trial, a large-scale project that tested whether proactively taking vitamin D or marine omega-3 supplements could reduce older people’s risk of developing cancer and cardiovascular disease. The randomized, placebo-controlled trial was led by researchers from the Brigham and Women’s Hospital in Massachusetts, which is affiliated with Harvard University. Overall, it found no significant effect from either type of supplementation on these outcomes. But some data also indicated that vitamin D supplementation was associated with benefits in those with a BMI lower than 25 (a BMI between 18.5 and 25 is considered “normal”), specifically a smaller risk of developing cancer and autoimmune disease, as well as a lower cancer mortality.
>
>To better understand this link, some of the same researchers decided to study blood samples taken from over 16,000 volunteers over the age of 50 involved in the trial. These samples allowed them to look at people’s total vitamin D levels as well as other biomarkers of vitamin D, like metabolic byproducts and calcium, before the study began. About 2,700 of these volunteers also came back for follow-up blood tests two years later.
>
>The team found that people’s levels of vitamin D and these biomarkers generally increased following supplementation, no matter their BMI. But this increase was significantly less pronounced in those with a BMI over 25, the threshold for overweight and obesity. This dampening effect was also seen in people who had low levels of vitamin D at baseline, meaning those who would experience the greatest benefit from supplementation. The team’s findings were published Tuesday in JAMA Network Open.
>
>“We observed striking differences after two years, indicating a blunted response to vitamin D supplementation with higher BMI,” said study author Deirdre Tobias, an associate epidemiologist in Brigham’s Division of Preventive Medicine, in a statement from Harvard. “This may have implications clinically and potentially explain some of the observed differences in the effectiveness of vitamin D supplementation by obesity status.”

These are some interesting results, and it's good that researchers went back to take another look at the data from this study. Perhaps with more research in this area, weight-based dosage guidelines could be developed.

37

Hrmbee OP t1_j46p6dw wrote

>The company claims to have granular details on more than 2.5 billion people across 62 different countries. The chances that Acxiom knows a whole lot about you, reader, are good.
>
>In many respects, data brokering is a shadowy enterprise. The industry mostly operates in quiet business deals the public never hears about, especially smaller firms that engage with data on particularly sensitive subjects. Compared to other parts of the tech industry, data brokers face little scrutiny from regulators, and in large part they evade attention from the media.
>
>You almost never directly interact with a company like Acxiom, but its operation intersects with your life on a near constant basis through a byzantine pipeline of data exchanges. Acxiom is in the business of identity, helping other companies figure out who you are, what you’re like, and how you might be persuaded to spend money. Got a list of 50,000 of your customers’ names? Acxiom can tell you more about them. Want to find the perfect audience for your next ad campaign—perhaps people who’ve gone through bankruptcy or Latino families that spend a lot on healthcare? Acxiom knows where to look.
>
>Though Engelgau’s business understands so much about so many people, most people know very little about Acxiom. Engelgau offered to sit down for an interview with Gizmodo to offer a look at one of the least understood corners of the digital economy.

This interview serves as a decent introduction to the world of data brokers for those who are unfamiliar with them, though there's also a dose of self-promotion and justification in it. It's good that he talks about the standards they have for privacy and not doing harm with their data, but as a whole, the industry is far shadier than that. Regulation of data collection and analytics might help bring some accountability to this sector.

23

Hrmbee OP t1_j3yfe9b wrote

From the Abstract:

>Lead-formulated aviation gasoline (avgas) is the primary source of lead emissions in the United States today, consumed by over 170,000 piston-engine aircraft (PEA). The U.S. Environmental Protection Agency (EPA) estimates that four million people reside within 500 m of a PEA-servicing airport. The deposition of avgas around such airports may be an independent source of child lead exposure. We analyze over 14,000 blood lead samples of children (≤5 y of age) residing near one such airport—Reid-Hillview Airport (RHV) in Santa Clara County, California. Across an ensemble of tests, we find that the blood lead levels (BLLs) of sampled children increase in proximity to RHV, are higher among children east and predominantly downwind of the airport, and increase with the volume of PEA traffic and quantities of avgas sold at the airport. The BLLs of airport-proximate children are especially responsive to an increase in PEA traffic, increasing by about 0.72 μg/dL under periods of maximum PEA traffic. We also observe a significant reduction in child BLLs from a series of pandemic-related interventions in Santa Clara County that contracted PEA traffic at the airport. Finally, we find that children’s BLLs increase with measured concentrations of atmospheric lead at the airport. In support of the scientific adjudication of the EPA's recently announced endangerment finding, this in-depth case study indicates that the deposition of avgas significantly elevates the BLLs of at-risk children.

It's unfortunate that a full switch away from leaded gasoline in all its forms hasn't been implemented, given that we've understood the dangers for over a generation. Industry convenience should not trump public health, and yet it does on a regular basis.

6

Hrmbee OP t1_j29et7o wrote

>The idea of an open web where actors use common standards to communicate is as old as, well, the web. "The dreams of the 90s are alive in the Fediverse," Lemmer-Webber told me.
>
>In the late '00s, there were more than enough siloed, incompatible networking and sharing systems like Boxee, Flickr, Brightkite, Last.fm, Flux, Ma.gnolia, Windows Live, Foursquare, Facebook, and many others we loved, hated, forgot about, or wish we could forget about. Various independent efforts to standardize interoperation across silos generally coalesced into the Activity Streams v1 standard.
>
>Both the original Activity Streams standard, and the current W3C Activity Streams 2.0 standard used by Mastodon and friends, offer a grammar for expressing things a user might do, like "create a post" or "like👍 a post with a given ID" or "request to befriend a certain user." The vocabulary one would use with this grammar is split into its own sub-standard, the Activity Vocabulary.
>
>Now that we have a way to express a person's stream of thought and action in JSON blobs, where do all these streams go? The ActivityPub standard is an actor-based model which specifies that servers should have a profile for each actor providing a universal resource indicator (URI) for each actor's inbox and outbox. Actors can send a GET request to their own inbox to see what the actors they follow have been posting, or they can GET another actor's outbox to see what that specific actor has been posting. A POST request to a friend's inbox places a message there; a POST request to the user's own outbox posts messages for all (with the right permissions). The standard specifies that these various in- and outboxes hold activities in sequential order, much like our familiar social media timelines.
>
>...
>
>Here we have the vision of the Fediverse: a set of ActivityPub nodes, scattered across the globe, all speaking a common language. Mastodon is one of many efforts to implement the inboxes and outboxes of the ActivityPub standard. There are dozens of others, ranging from other microblogging platforms ("It's like Mastodon, but...") to an ActivityPub server that runs a chess club.
>
>In theory, they all intercommunicate; in practice, not so much. The sources of incompatibility stem from several issues, from imperfections in the standard to questions of how online communities should form to efforts to reach beyond the standard post/comment/follow format of typical social networks.
>
>...
>
>What's next? The Fediverse may remain a host of small hosts. But there are economies of scale. In the federation model, a small, ragtag community sharing an instance is now stuck paying the server bill.
>
>In terms of skill and time costs, the preparation for many of the systems on the Fediverse is as easy as "just spin up a Docker container on a Raspberry Pi." Of course, most people cannot understand and execute that (relatively) simple instruction.
>
>Or the Fediverse may centralize. Large instances can be bought. The CEO of Tumblr has promised to implement ActivityPub ASAP, and with 135 million monthly active users, that could make Tumblr the bright giant around which the rest of the Fediverse revolves. MacWright speculates that in such a case, “Inevitably everyone's gonna get grumpy that they're dominating the standard and it's no longer an Indieweb thing, and the cycle starts over.”

Moving (back) to a set of open standards with an aim for interoperability is fundamentally a good direction. Walled gardens certainly bring their users a certain set of benefits, but as recent actions by various platforms have shown, they also bring a set of challenges. It will be interesting to see what the future brings for ActivityPub and its various platforms; hopefully this growth is managed well, with an eye to longevity and resiliency. For anyone curious about the inbox/outbox exchange described in the quote, there's a rough sketch below.
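As a minimal illustration of the actor model the article describes, here's what delivering an Activity Streams 2.0 activity to another actor's inbox might look like. The server URLs, actor names, and paths are hypothetical, and a real implementation would also need HTTP signatures and actor discovery (WebFinger), which are omitted here:

```python
# Minimal sketch of ActivityPub delivery. All URLs are hypothetical
# examples; real servers require signed requests and actor discovery.
import requests

ACTOR = "https://example.social/users/alice"            # hypothetical actor URI
FRIEND_INBOX = "https://other.example/users/bob/inbox"  # hypothetical inbox URI

# An Activity Streams 2.0 "Create" activity wrapping a Note object.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": ACTOR,
    "object": {
        "type": "Note",
        "attributedTo": ACTOR,
        "content": "Hello, Fediverse!",
    },
}

# A POST to a friend's inbox places the message there.
requests.post(
    FRIEND_INBOX,
    json=activity,
    headers={"Content-Type": "application/activity+json"},
)

# A GET on your own inbox reads what the actors you follow have posted,
# in sequential order, much like a familiar social media timeline.
timeline = requests.get(
    f"{ACTOR}/inbox",
    headers={"Accept": "application/activity+json"},
).json()
```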

50

Hrmbee OP t1_j28ta3m wrote

Abstract:

>Urban expansion is generating unprecedented homogenization of landscapes across the world. This uniformization of urban forms brings along dramatic environmental, social, and health problems. Reverting such processes requires activating people’s sense of place, their feeling of caring for their surroundings, and their community engagement. While emotions are known to have a modulating effect on behavior, their role in urban transformation is unknown. Drawing on large cognitive-psychological experiments in two countries, we demonstrate for the first time that urban homogenization processes lower people’s affective bounds to places and ultimately their intentions to engage with their neighbourhoods. The dulled emotional responses in peri-urban areas compared to urban and rural areas can be explained by lower social cohesion and place attachment. The findings highlight the significance of considering emotions in shaping just, equitable, sustainable, and resilient cities.

This is some interesting research, especially for those engaged in the work of city and community building in all its various forms. It is important to consider these kinds of psychological and social factors when designing our communities, but communicating the importance of these issues to the broader public remains a challenge in many instances.

29

Hrmbee OP t1_j0exfy6 wrote

I wouldn't be too surprised to discover that some of the big insurance companies are invested in some, if not many, of these telehealth companies.

4

Hrmbee OP t1_j0ea476 wrote

Looks like HIPAA isn't well suited to situations encountered in digital ecosystems. It might be time for it to be revamped, at the very least, to take into account these kinds of situations that have arisen and will likely continue to arise.

18

Hrmbee OP t1_j0e5t1z wrote

>A joint investigation by STAT and The Markup of 50 direct-to-consumer telehealth companies like WorkIt found that quick, online access to medications often comes with a hidden cost for patients: Virtual care websites were leaking sensitive medical information they collect to the world’s largest advertising platforms.
>
>On 13 of the 50 websites, we documented at least one tracker—from Meta, Google, TikTok, Bing, Snap, Twitter, LinkedIn, or Pinterest—that collected patients’ answers to medical intake questions. Trackers on 25 sites, including those run by industry leaders Hims & Hers, Ro, and Thirty Madison, told at least one big tech platform that the user had added an item like a prescription medication to their cart, or checked out with a subscription for a treatment plan.
>
>The trackers that STAT and The Markup were able to detect, and what information they sent, is a floor, not a ceiling. Companies choose where to install trackers on their websites and how to configure them. Different pages of a company’s website can have different trackers, and we did not test every page on each company’s site.
>
>All but one website examined sent URLs users visited on the site and their IP addresses—akin to a mailing address for a computer, which can be used to link information to a specific patient or household—to at least one tech company. The only telehealth platform that we didn’t observe sharing data with outside tech giants was Amazon Clinic, a platform recently launched by Amazon.
>
>Health privacy experts and former regulators said sharing such sensitive medical information with the world’s largest advertising platforms threatens patient privacy and trust and could run afoul of unfair business practices laws. They also emphasized that privacy regulations like the Health Insurance Portability and Accountability Act (HIPAA) were not built for telehealth. That leaves “ethical and moral gray areas” that allow for the legal sharing of health-related data, said Andrew Mahler, a former investigator at the U.S. Department of Health and Human Services’ Office for Civil Rights.
>
>“I thought I was at this point hard to shock,” said Ari Friedman, an emergency medicine physician at the University of Pennsylvania who researches digital health privacy. “And I find this particularly shocking.”

This is, to put it mildly, not good. There need to be clear standards and requirements for any organization, public or private, to safeguard health data and metadata. Meaningful sanctions also need to be in place for those who violate these standards. Given the current situation, it would not be surprising if insurance companies and the like are buying all the data they can to help build out profiles of the people they insure and to determine coverage and premiums.

9

Hrmbee OP t1_izvyzzl wrote

>Electronic logging devices (ELD) are billed as a way to make roads safer by keeping truckers accountable to their allowed hours of service. But the devices raise questions about what information employers are collecting about their workers.
>
>"People sort of tend to view the trucker as an 'other,'" said Karen Levy, author of Data Driven: Truckers, Technology and the New Workplace Surveillance. "They maybe say … 'You know, that maybe makes sense for truckers, but it wouldn't make sense for me.'"
>
>"The issues truckers are facing, I think, are issues that everybody is beginning to face — particularly post-pandemic — as these technologies become used in more remote work."
>
>...
>
>In addition to logging the number of hours a driver operates the vehicles, the devices can track information such as vehicle location and speed.
>
>Levy said that the proliferation of ELDs has opened the doors for other monitoring systems that can monitor driving behaviours, like hard braking or swerving, and may include driver-facing cameras that use artificial intelligence to track eye movements and check for signs of drowsiness.
>
>The devices don't address the factors she says are driving fatigue among many truckers, including declining wages over decades.
>
>...
>
>Using ELDs to improve safety for drivers and the public can be valuable, but potentially using that data to improve efficiency could prove problematic, she said.
>
>"When that surveillance is used to 'data-ify' the job and track how many deliveries that person made in a day, and pushing them to cut corners or accelerate through red lights, or causing people to urinate or defecate in bottles in their truck because they're fearful of taking any time off to tend to natural bodily functions, then I think we're using it improperly," Bednar said.

Using technology to ensure safety is a laudable goal, but as mentioned in the article, using it to maximize efficiency (or some other business metric) has thus far proven problematic. There should be a clear firewall between data collected ostensibly for safety reasons and a company's other business units.

edit: typo

1

Hrmbee OP t1_iydry4y wrote

>Some of the billionaires who have committed significant funds to this goal include Elon Musk, Vitalik Buterin, Ben Delo, Jaan Tallinn, Peter Thiel, Dustin Moskovitz, and Sam Bankman-Fried, who was one of EA’s largest funders until the recent bankruptcy of his FTX cryptocurrency platform. As a result, all of this money has shaped the field of AI and its priorities in ways that harm people in marginalized groups while purporting to work on “beneficial artificial general intelligence” that will bring techno utopia for humanity. This is yet another example of how our technological future is not a linear march toward progress but one that is determined by those who have the money and influence to control it.
>
>One of the most notable examples of EA’s influence comes from OpenAI, founded in 2015 by Silicon Valley elites that include Elon Musk and Peter Thiel, who committed $1 billion with a mission to “ensure that artificial general intelligence benefits all of humanity.” OpenAI’s website notes: “We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.” Thiel and Musk were speakers at the 2013 and 2015 EA conferences, respectively. Elon Musk has also described longtermism, a more extreme offshoot of EA, as a “close match for my philosophy.” Both billionaires have heavily invested in similar initiatives to build “beneficial AGI,” such as DeepMind and MIRI.
>
>Five years after its founding, OpenAI released, as part of its quest to build “beneficial” AGI, a large language model (LLM) called GPT-3. LLMs are models trained on vast amounts of text data, with the goal of predicting probable sequences of words. This release set off a race to build larger and larger language models; in 2021, Margaret Mitchell, among other collaborators, and I wrote about the dangers of this race to the bottom in a peer-reviewed paper that resulted in our highly publicized firing from Google.
>
>Since then, the quest to proliferate larger and larger language models has accelerated, and many of the dangers we warned about, such as outputting hateful text and disinformation en masse, continue to unfold. Just a few days ago, Meta released its “Galactica” LLM, which is purported to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” Only three days later, the public demo was taken down after researchers generated “research papers and wiki entries on a wide variety of subjects ranging from the benefits of committing suicide, eating crushed glass, and antisemitism, to why homosexuals are evil.”
>
>...
>
>With EAs founding and funding institutes, companies, think tanks, and research groups in elite universities dedicated to the brand of “AI safety” popularized by OpenAI, we are poised to see more proliferation of harmful models billed as a step toward “beneficial AGI.” And the influence begins early: Effective altruists provide “community building grants” to recruit at major college campuses, with EA chapters developing curricula and teaching classes on AI safety at elite universities like Stanford.
>
>Just last year, Anthropic, which is described as an “AI safety and research company” and was founded by former OpenAI vice presidents of research and safety, raised $704 million, with most of its funding coming from EA billionaires like Tallinn, Moskovitz and Bankman-Fried. An upcoming workshop on “AI safety” at NeurIPS, one of the largest and most influential machine learning conferences in the world, is also advertised as being sponsored by FTX Future Fund, Bankman-Fried’s EA-focused charity whose team resigned two weeks ago. The workshop advertises $100,000 in “best paper awards,” an amount I haven’t seen in any academic discipline.
>
>Research priorities follow the funding, and given the large sums of money being pushed into AI in support of an ideology with billionaire adherents, it is not surprising that the field has been moving in a direction promising an “unimaginably great future” around the corner while proliferating products harming marginalized groups in the now.
>
>We can create a technological future that serves us instead. Take, for example, Te Hiku Media, which created language technology to revitalize te reo Māori, creating a data license “based on the Māori principle of kaitiakitanga, or guardianship” so that any data taken from the Māori benefits them first. Contrast this approach with that of organizations like StabilityAI, which scrapes artists’ works without their consent or attribution while purporting to build “AI for the people.” We need to liberate our imagination from the one we have been sold thus far: saving us from a hypothetical AGI apocalypse imagined by the privileged few, or the ever elusive techno-utopia promised to us by Silicon Valley elites.

There is clearly a lot of potential in AI research, but the potential for negative outcomes must be considered and dealt with early on, rather than waiting until they inevitably occur. Research does follow funding, and since AI has the potential to deeply affect all aspects of private and public life, it might be good for there to be a strong public stake in this field of research.

3