Hrmbee

Hrmbee OP t1_jeallx6 wrote

>The software engineers behind these systems are employees of NTC Vulkan. On the surface, it looks like a run-of-the-mill cybersecurity consultancy. However, a leak of secret files from the company has exposed its work bolstering Vladimir Putin’s cyberwarfare capabilities.
>
>Thousands of pages of secret documents reveal how Vulkan’s engineers have worked for Russian military and intelligence agencies to support hacking operations, train operatives before attacks on national infrastructure, spread disinformation and control sections of the internet.
>
>One document links a Vulkan cyber-attack tool with the notorious hacking group Sandworm, which the US government said twice caused blackouts in Ukraine, disrupted the Olympics in South Korea and launched NotPetya, the most economically destructive malware in history. Codenamed Scan-V, it scours the internet for vulnerabilities, which are then stored for use in future cyber-attacks.
>
>Another system, known as Amezit, amounts to a blueprint for surveilling and controlling the internet in regions under Russia’s command, and also enables disinformation via fake social media profiles. A third Vulkan-built system – Crystal-2V – is a training program for cyber-operatives in the methods required to bring down rail, air and sea infrastructure. A file explaining the software states: “The level of secrecy of processed and stored information in the product is ‘Top Secret’.”
>
>The Vulkan files, which date from 2016 to 2021, were leaked by an anonymous whistleblower angered by Russia’s war in Ukraine. Such leaks from Moscow are extremely rare. Days after the invasion in February last year, the source approached the German newspaper Süddeutsche Zeitung and said the GRU and FSB “hide behind” Vulkan.

At this point, this may not be the most surprising news, but it's still useful to have confirmation of some of the scope and details of these operations. One question raised by these revelations, though, is what knowledge major technology players had about these operations, and what measures they took to defend against them.

23

Hrmbee t1_je1oopl wrote

>Over the next few months, the bakery-café chain will roll out scanners that can access customers' credit card and loyalty account using their palm. The biometric-gathering technology, developed by Amazon and called Amazon One, is already popular in airports, stadiums and Whole Foods Market grocery stores. Panera is expected to become the first national restaurant company to use it.
>
>...
>
>"In contrast with biometric systems like Apple's Face ID and Touch ID or Samsung Pass, which store biometric information on a user's device, Amazon One reportedly uploads biometric information to the cloud, raising unique security risks," the senators' letter to Amazon CEO Andy Jassy said.

When I first read the headline, I wondered what kind of technological capabilities a company like Panera might have. Seeing that they're going to be using Amazon One, however, things make a lot more sense.

For me, a server-based biometric system for retail purchases is pretty much a non-starter. I wonder how many other retailers will be signing on with this particular system, and what benefits a server-based system brings to them.

6

Hrmbee OP t1_jdwik9g wrote

>Neuralink has been claiming human trials are just around the corner for years now. However, the company hasn’t yet gotten U.S. Food and Drug Administration approval to put its brain computer interface (BCI) devices inside human skulls. In fact, it only filed its first application for such approval in 2022—despite Musk publicly claiming human tests were forthcoming in 2019, according to another Reuters investigation. The FDA denied the company’s first bid for human trial approval last year, according to that early March report.
>
>Yet that rejection doesn’t necessarily mean Neuralink won’t eventually reach the human trial stage on its extremely ambitious quest to cure a wide array of ailments and disabilities—from blindness to paralysis—with its brain implant. That the company is still actively searching for an institutional partner for conducting human procedures suggests that Musk and other Neuralink execs remain confident in their device’s path forward.
>
>Gizmodo reached out to Neuralink for more information, but did not receive a response as of publication time. As with Musk’s other companies, like Twitter and Tesla, Neuralink almost never replies to journalist inquiries. Barrow Neurological Institute also did not immediately respond to Gizmodo’s emailed questions.
>
>To Reuters, however, a director from the Arizona treatment and research center said that Barrow would be well-equipped to conduct brain implant research along the lines of what Neuralink is hoping to do.

It will be interesting to see what these particular developments might be. It looks like these partnership explorations are taking place in advance of the regulatory approvals that have so far been denied to the company. And in light of the various investigations underway regarding its practices, it seems that permission might not be forthcoming in the near term, at least in the United States.

3

Hrmbee OP t1_jdiqte0 wrote

A direct link to the journal article is available here:

Road Traffic Noise and Incidence of Primary Hypertension: A Prospective Analysis in UK Biobank

Abstract:

>Background
>
>The quality of evidence regarding the associations between road traffic noise and hypertension is low due to the limitations of cross-sectional study design, and the role of air pollution remains to be further clarified.
>
>Objectives
>
>To evaluate the associations of long-term road traffic noise exposure with incident primary hypertension, we conducted a prospective population-based analysis in UK Biobank.
>
>Methods
>
>Road traffic noise was estimated at baseline residential address using the common noise assessment method model. Incident hypertension was ascertained through linkage with medical records. Cox proportional hazard models were used to estimate hazard ratios (HRs) for association in an analytical sample size of over 240,000 participants free of hypertension at baseline, adjusting for covariates determined via directed acyclic graph.
>
>Results
>
>During a median of 8.1 years follow-up, 21,140 cases of incident primary hypertension (International Classification of Diseases 10th Revision [ICD 10]: I10) were ascertained. The HR for a 10 dB[A] increment in mean weighted average 24-hour road traffic noise level (Lden) exposure was 1.07 (95% confidence interval: 1.02, 1.13). A dose-response relationship was found, with HR of 1.13 (95% confidence interval: 1.03, 1.25) for Lden >65 dB[A] vs ≤55 dB[A] (P for trend < 0.05). The associations were all robust to adjustment for fine particles (PM₂.₅) and nitrogen dioxide (NO₂). Furthermore, high exposure to both road traffic noise and air pollution was associated with the highest hypertension risk.
>
>Conclusions
>
>Long-term exposure to road traffic noise was associated with increased incidence of primary hypertension, and the effect estimates were stronger in the presence of higher air pollution.
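
For anyone curious what the hazard-ratio estimation described in the Methods looks like in practice, here is a minimal sketch using the lifelines library on synthetic data. The variable names and toy data are my own invention for illustration; this is not the authors' analysis code, which adjusted for many more covariates selected via a directed acyclic graph.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Toy cohort: noise exposure, an air-pollution covariate, follow-up time,
# and a hypertension event indicator. All values are synthetic.
rng = np.random.default_rng(42)
n = 5_000
df = pd.DataFrame({
    "noise_lden_db": rng.normal(58, 6, n),    # weighted 24-h noise level, dB(A)
    "pm25": rng.normal(10, 2, n),             # fine-particle covariate
    "age": rng.integers(40, 70, n),
    "followup_years": rng.uniform(0.5, 10, n),
    "hypertension": rng.integers(0, 2, n),    # 1 = incident case
})

# Fit a Cox proportional hazards model, as described in the Methods
cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="hypertension")

# lifelines reports hazard ratios per one-unit change; the paper quotes
# them per 10 dB(A) increment, i.e. exp(10 * coefficient)
hr_per_10db = np.exp(10 * cph.params_["noise_lden_db"])
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
print(f"HR per 10 dB(A) increment in Lden: {hr_per_10db:.2f}")
```

For reading the paper's headline figure: an HR of 1.07 per 10 dB(A) means a 7% higher hazard of developing hypertension for each 10 dB(A) increase in average road traffic noise exposure.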

19

Hrmbee OP t1_jdiqcpk wrote

>Previous studies have shown a connection between noisy road traffic and increased risk of hypertension. However, strong evidence was lacking, and it was unclear whether noise or air pollution played a bigger role. The new research shows that it is exposure to road traffic noise itself that can elevate hypertension risk.
>
>“We were a little surprised that the association between road traffic noise and hypertension was robust even after adjustment for air pollution,” said Jing Huang, assistant professor in the Department of Occupational and Environmental Health Sciences in the School of Public Health at Peking University in Beijing, China, and lead author of the study.
>
>Previous studies of the issue were cross-sectional, meaning they showed that traffic noise and hypertension were linked, but failed to show a causal relationship. For the new paper, researchers conducted a prospective study using UK Biobank data that looked at health outcomes over time.
>
>Researchers analyzed data from more than 240,000 people (aged 40 to 69 years) who started out without hypertension. They estimated road traffic noise based on residential address and the Common Noise Assessment Method, a European modeling tool.
>
>Using follow-up data over a median 8.1 years, they looked at how many people developed hypertension. Not only did they find that people living near road traffic noise were more likely to develop hypertension, they also found that risk increased in tandem with the noise “dose.”
>
>These associations held true even when researchers adjusted for exposure to fine particles and nitrogen dioxide. However, people who had high exposure to both traffic noise and air pollution had the highest hypertension risk, showing that air pollution plays a role as well.
>
>“Road traffic noise and traffic-related air pollution coexist around us,” Huang said. “It is essential to explore the independent effects of road traffic noise, rather than the total environment.”
>
>The findings can support public health measures because they confirm that exposure to road traffic noise is harmful to our blood pressure, she said. Policymaking may alleviate the adverse impacts of road traffic noise as a societal effort, such as setting stricter noise guidelines and enforcement, improving road conditions and urban design, and investing in advanced technology for quieter vehicles.

These are some important findings, especially given that the majority of humanity now lives in urban environments. Policymakers should take heed and look to reduce noise in cities not just through quieter vehicles but by reducing the number of vehicles overall. Anecdotally, during the early days of the pandemic, many of us experienced how quiet cities could be with reduced traffic volumes.

22

Hrmbee OP t1_jddu43j wrote

This isn't just about HOA residents, but also visitors to the HOA, contractors, and the like. Also, if an HOA is in a city, then all vehicles passing by the community will likely be captured as well.

Furthermore, given the ubiquity of the collection and analysis of personal data by data brokers, companies, and other organizations, it's highly unlikely that this data will remain unlinked to other personal information. Much of the information captured will likely be of people who are not parties to the HOA contract and essentially have no say in the matter.

3

Hrmbee OP t1_jddmyn2 wrote

>Lakeway is just one example of a community that has faced Flock’s surveillance without many homeowners’ knowledge or approval. Neighbors in Atlanta, Georgia, remained in the dark for a year after cameras were put up. In Lake County, Florida, nearly 100 cameras went up “overnight like mushrooms,” according to one county commissioner — without a single permit.
>
>In a statement, Flock Safety brushed off the Lake County incident as “an honest misunderstanding,” but the increasing surveillance of community members’ movements across the country is no accident. It’s a deliberate marketing strategy.
>
>Flock Safety, which began as a startup in 2017 in Atlanta and is now valued at approximately $3.5 billion, has targeted homeowners associations, or HOAs, in partnership with police departments, to become one of the largest surveillance vendors in the nation. There are key strategic reasons that make homeowners associations the ideal customer. HOAs have large budgets — they collect over $100 billion a year from homeowners — and it’s an opportunity for law enforcement to gain access into gated, private areas, normally out of their reach.
>
>Over 200 HOAs nationwide have bought and installed Flock’s license plate readers, according to an Intercept investigation, the most comprehensive count to date. HOAs are private entities and therefore are not subject to public records requests or regulation.
>
>“What are the consequences if somebody abuses the system?” said Dave Maass, director of investigations at the Electronic Frontier Foundation. “There are repercussions of having this data, and you don’t have that kind of accountability when it comes to a homeowners association.”
>
>The majority of the readers are hooked up to Flock’s TALON network, which allows police to track cars within their own neighborhoods, as well as access a nationwide system of license plate readers that scan approximately a billion images of vehicles a month. Camera owners can also create their own “hot lists” of plate numbers that generate alarms when scanned and will run them in state police watchlists and the FBI’s primary criminal database, the National Crime Information Center.
>
>“Flock Safety installs cameras with permission from our customers, at the locations they require,” said Holly Beilin, a Flock representative. “Our team has stood in front of hundreds of city council meetings, and we have always supported the democratic process.”
>
>After facing public outrage, the cameras were removed from communities in Texas and Florida, but Flock’s license plate readers continue to rapidly proliferate daily — from cities in Missouri to Kentucky.
>
>“It’s a near constant drumbeat,” said Edwin Yohnka, the director of public policy at the American Civil Liberties Union of Illinois.
>
>With over half of all Americans living in HOAs, experts believe the surveillance technology is far more ubiquitous than we know.

It looks like this company is following the playbook of other companies that have sought to make inroads into communities through disruption, such as Uber and Airbnb. There also seem to be parallels between what they're doing here and what Ring has been doing with individual property owners. If we are to care about privacy in the slightest, regulations around these kinds of activities are sorely needed, yet they seem to be lacking in most jurisdictions.

2

Hrmbee OP t1_jdbl6r4 wrote

A link to the original research below:

Electrochemical degradation of PFOA and its common alternatives: Assessment of key parameters, roles of active species, and transformation pathway

Abstract:

>This study investigates an electrochemical approach for the treatment of water polluted with per- and poly-fluoroalkyl substances (PFAS), looking at the impact of different variables, contributions from generated radicals, and the degradability of different structures of PFAS. Results obtained from a central composite design (CCD) showed the importance of mass transfer, related to the stirring speed, and the amount of charge passed through the electrodes, related to the current density, on the decomposition rate of PFOA. The CCD informed optimized operating conditions which we then used to study the impact of solution conditions. Acidic conditions, high temperature, and low initial concentration of PFOA accelerated the degradation kinetics, while dissolved oxygen (DO) had a negligible effect. The impact of electrolyte concentration depended on the initial concentration of PFOA. At low initial PFOA dosage (0.2 mg L⁻¹), the rate constant increased considerably from 0.079 ± 0.001 to 0.259 ± 0.019 min⁻¹ when sulfate increased from 0.1% to 10%, likely due to the production of SO₄•⁻. However, at higher initial PFOA dosage (20 mg L⁻¹), the rate constant decreased slightly from 0.019 ± 0.001 to 0.015 ± 0.000 min⁻¹, possibly due to the occupation of active anode sites by an excess amount of sulfate. SO₄•⁻ and •OH played important roles in the decomposition and defluorination of PFOA, respectively. PFOA oxidation was initiated by one electron transfer to the anode or SO₄•⁻, undergoing Kolbe decarboxylation, where the yielded perfluoroalkyl radical followed three reaction pathways with •OH, O₂ and/or H₂O. PFAS electrooxidation depended on the chemical structure, with decomposition rate constants (min⁻¹) in the order of 6:2 FTCA (0.031) > PFOA (0.019) > GenX (0.013) > PFBA (0.008). PFBA, with a shorter chain length, and GenX, with –CF₃ branching, had slower decomposition than PFOA, while the presence of C–H bonds makes 6:2 FTCA susceptible to attack by •OH, accelerating its decomposition kinetics. Conducting experiments in a mixed solution of all studied PFAS and in natural water showed that the co-presence of PFAS and other water constituents (organic and inorganic matter) had adverse effects on PFAS decomposition efficiency.
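
To make the reported rate constants a bit more tangible: assuming simple first-order decay, as the min⁻¹ units imply, each rate constant k translates into a half-life via t½ = ln(2)/k. A quick sketch of that conversion, using the values from the abstract:

```python
import numpy as np

# First-order decay: C(t) = C0 * exp(-k * t), so half-life = ln(2) / k
rate_constants = {  # min^-1, as reported in the abstract above
    "6:2 FTCA": 0.031,
    "PFOA": 0.019,
    "GenX": 0.013,
    "PFBA": 0.008,
}
for compound, k in rate_constants.items():
    t_half = np.log(2) / k
    print(f"{compound}: k = {k:.3f} min^-1 -> half-life ~ {t_half:.0f} min")
```

So under the optimized conditions, roughly half of the PFOA is destroyed every ~36 minutes, while the short-chain PFBA takes more than twice as long.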

6

Hrmbee OP t1_jdbkypv wrote

>Scientists at the University of British Columbia announced on Wednesday that they had developed a new silica-based material with the ability to absorb a wider range of the harmful chemicals, and new tools to break them apart.
>
>“This is very exciting because we can target these difficult-to-break chemical bonds – and break them for good,” said researcher Madjid Mohseni, who focuses on water quality and water treatment.
>
>The chemicals, also known as PFAS (per- and polyfluoroalkyl substances), are used for non-stick or stain-resistant surfaces, including clothing, cookware, stain repellents and firefighting foam. But they are also notoriously difficult to break down naturally, giving them the name “forever chemicals”.
>
>...
>
>Current technologies often use activated carbon to filter out the chemicals, but are largely only able to target what researchers call the “long-chain” versions of PFAS – those with more than six carbon bonds. Following recent bans, however, industry has shifted to creating ‘short chain’ iterations of the chemical.
>
>Those versions “are equally as toxic and they stay in the water better. And as a result, current technologies like activated carbon really aren’t as effective,” said Mohseni.
>
>Most household water filters use activated carbon – and as a result, miss a wide range of possibly harmful chemicals.
>
>His team also found that the current filters concentrate the absorbed chemicals, creating a “highly toxic” form of waste that consumers throw into the garbage.
>
>Such filters “are not addressing the problem. We’re just temporarily fixing it and letting those chemicals stay in the environment,” he said.
>
>To address the deficiencies in combating PFAS, the team has developed a new silicate absorbing material that captures a far wider range of chemicals. The thin material can also be reused repeatedly.
>
>To destroy the chemicals, Mohseni says researchers use either electrochemical or photochemical processes to break the carbon-fluorine bond. The team first published their findings in the journal Chemosphere.

This is some good news as far as PFAS are concerned, though ultimately we do need to limit their broader use in our manufacturing processes. Allowing manufacturers to jump from one compound to another to avoid regulation seems to be a major failing of our current regulatory systems.

42

Hrmbee OP t1_jd3w54d wrote

From the abstract:

>To date, analog methods of cooking such as by grills, cooktops, stoves and microwaves have remained the world’s predominant cooking modalities. With the continual evolution of digital technologies, however, laser cooking and 3D food printing may present nutritious, convenient and cost-effective cooking opportunities. Food printing is an application of additive manufacturing that utilizes user-generated models to construct 3D shapes from edible food inks and laser cooking uses high-energy targeted light for high-resolution tailored heating. Using software to combine and cook ingredients allows a chef to more easily control the nutrient content of a meal, which could lead to healthier and more customized meals. With more emphasis on food safety following COVID-19, food prepared with less human handling may lower the risk of foodborne illness and disease transmission. Digital cooking technologies allow an end consumer to take more control of the macro and micro nutrients that they consume on a per meal basis and due to the rapid growth and potential benefits of 3D technology advancements, a 3D printer may become a staple home and industrial cooking device.

From the discussion:

>As digital cooking technologies become more ubiquitous, it is feasible that humankind will see the nutritional merits and drawbacks of having software-controlled assistants in the kitchen. 3D food printing has the potential to be the next frontier in cooking. Questions surrounding cost, ease of use and consumer acceptance will likely be top factors driving the trajectory of this technology. The spotlight shed on whole foods vs. processed foods for good health may influence consumers’ perception of this technology. However, with upcoming generations’ fascination with not only novel technologies, but also environmental sustainability and healthy eating, all of these are likely to influence the extent of adoption. Additionally, development of competing cooking technologies and advancements in nutrition science may come into play. An industry built around this technology may be on the horizon, creating a new vision of better nutrition, better food accessibility and palatability for many, increasing food safety and adding art and cutting-edge science to the most basic human need—nourishment.

There are some interesting possibilities here with regard to food production, but it seems likely that these technologies, especially in the near term, will first be deployed at industrial scale. The details of these systems for home use will be critical: how proprietary the ingredients and recipes are would be a key consideration.

3

Hrmbee OP t1_jcjiwg6 wrote

>Although there are almost 5,000 banks in North America, only a handful focus on startups, despite the importance of software, biotech and clean technology to the future of our economy, health and environment. While traditional commercial banks will only lend against “hard assets” or your personal guarantee, people such as me or SVB’s team have spent decades building the expertise to provide debt capital based on the value of your “enterprise,” taking into account your company’s IP, revenue or both.
>
>When these startups approach a lender, they’re rarely profitable. That lack of profitability often scares both bankers and regulators. And yet, as SVB and other lending teams have proven across multiple economic cycles, loan losses in this sector are no higher than those in the broader economy – provided you have the right expertise.
>
>SVB recognized this market gap and became the 16th-largest U.S. bank. As memories of the last dot-com bubble waned, SVB’s success spawned a few smaller competing banks. If you were an entrepreneur, you welcomed the new competition and the lower cost of capital that resulted.
>
>...
>
>But no competitor can do in five years what took SVB decades to accomplish with its 6,000-person team. Over a 40-year period, SVB built a US$30-billion loan portfolio, and about half of that capital is already at work in the economy. SVB has also deployed another US$40-billion in support of venture capital, infrastructure and private equity funds for their day-to-day business needs. That capital and know-how helps create thousands of new, high-paying North American jobs each month. All of which came to a screeching halt last Friday.
>
>With the loss of such a large debt partner, many VC funds will need to reserve more of their own capital to fund each and every new startup. Which means these same VCs will have no choice but to back fewer new firms. And fewer new startups means there’s an irrefutable risk that the “next Moderna” won’t get that first round of essential funding. The consequences of this single bank failure are difficult to overstate.

This kind of concentration of capacity and market dominance within one organization is a problem not just in finance but with any other critical piece of business infrastructure. Such organizations become critical points of failure when things go wrong, and as we're seeing now, the resulting damage to the ecosystem can be significant and widespread. Ideally, there should be a degree of redundancy built into all of these systems, so that in the event of a failure there is sufficient capacity to keep things going during the rebuilding phase.

13

Hrmbee OP t1_jc0l14e wrote

>Twitter’s API is used by vast numbers of researchers. Since 2020, there have been more than 17,500 academic papers based on the platform’s data, giving strength to an argument that Twitter owner Elon Musk has long made: that the platform is the “de facto town square.”
>
>But new charges, included in documentation seen by WIRED, suggest that most organizations that have relied on API access to conduct research will now be priced out of using Twitter.
>
>It’s the end of a long, convoluted process. On February 2, Musk announced API access would go behind a paywall in a week. (Those producing “good” content would be exempted.) A week later, he delayed the decision to February 13. Unsurprisingly, that deadline also slipped by, as Twitter suffered a catastrophic outage.
>
>The company is now offering three levels of Enterprise Packages to its developer platform, according to a document sent by a Twitter rep to would-be academic customers in early March and passed on to WIRED. The cheapest, Small Package, gives access to 50 million tweets for $42,000 a month. Higher tiers give researchers or businesses access to larger volumes of tweets—100 million and 200 million tweets respectively—and cost $125,000 and $210,000 a month. WIRED confirmed the figures with other existing free API users, who have received emails saying that the new pricing plans will take effect within months.
>
>“I don’t know if there’s an academic on the planet who could afford $42,000 a month for Twitter,” says Jeremy Blackburn, assistant professor at Binghamton University in New York and a member of the iDRAMA Lab, which analyzes hate speech on social media—including on Twitter.
>
>Elissa M. Redmiles, a faculty member at the Max Planck Institute for Software Systems in Germany, says the new prices are eye-watering. “It’s probably outside of any academic budget I’ve ever heard of,” she says, adding that the price would put off any long-term analysis of user sentiment. “One month of Twitter data isn’t really going to work for the purposes people have,” she says.
>
>Kenneth Joseph, assistant professor at the University of Buffalo and one of the authors of a recent paper analyzing a day in the life of Twitter, says the new pricing effectively kills his career. “$42,000 is not something I can pay for a single month in any reasonable way,” he says. “It totally destroys any opportunity to engage in research in this space, which I’ve in many respects built a career on.”
>
>The pricing documents were provided to WIRED by a researcher who asked for anonymity, since they are still accessing Twitter data through an existing API agreement and worry it could be terminated if they were identified. They say the new costs were “not viable for the academic community.”
>
>“No one can afford to pay that,” they say. “Even rich institutions can’t afford to pay half a million a year for a thimbleful of data.”
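
As a quick sanity check on the scale of these figures, here is some back-of-envelope arithmetic on the effective per-tweet cost of each tier (the "mid" and "top" labels are mine; the article names only the Small Package):

```python
# Monthly price (USD) and tweet allotment per tier, from the figures quoted above
tiers = {
    "Small Package": (42_000, 50_000_000),
    "Mid tier": (125_000, 100_000_000),
    "Top tier": (210_000, 200_000_000),
}
for name, (price, tweets) in tiers.items():
    per_thousand = price / tweets * 1_000  # cost per 1,000 tweets
    print(f"{name}: ${price:,}/month for {tweets:,} tweets -> ${per_thousand:.2f} per 1,000 tweets")
```

Notably, the per-tweet cost does not fall at the higher tiers ($0.84, $1.25, and $1.05 per 1,000 tweets respectively), which reinforces the sense that this is not volume pricing aimed at researchers.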

From a lay perspective, it looks like this kind of pricing scheme for API access is designed to eliminate the possibility of independent research on the platform more than it is to generate revenues for the company.

51

Hrmbee OP t1_jbfsi3a wrote

>DuckAssist uses OpenAI’s natural language technology to generate answers to users' search queries at the top of the search results page, making responses more direct than traditional search results.
>
>Contrary to other generative AI search assistants that use input from thousands of websites, DuckAssist only sources information from Wikipedia and Britannica, hoping to prevent incorrect information from being used when generating answers.
>
>This restrained approach also differentiates DuckAssist from the AI-powered summarizer that Brave Search launched last week, which sources content from news portals, making it more susceptible to false or misleading information in some cases.
>
>...
>
>As for search query anonymity, which sits at the core of DuckDuckGo’s values, the company assures that DuckAssist is fully integrated into its private search engine; hence, user queries or browsing history aren’t logged.
>
>Some data has to be transmitted to search content partners like OpenAI and Anthropic, but no personally identifiable information or IP addresses are ever shared with those entities.
>
>DuckDuckGo says that DuckAssist will gradually roll out to users in the coming weeks and promises that this will be the first of the many AI-assisted features it plans to roll out in the coming months.

It's good that this company is rolling these features out cautiously and gradually, limiting the sources initially to Wikipedia and Britannica. I'm also cautiously optimistic that their commitment to privacy will carry over to this new service, but only time will tell.

3

Hrmbee t1_jbdurhl wrote

>Pali Bhat joined Reddit from Google about a year ago — he’s actually Reddit’s first-ever chief product officer, which is pretty surprising considering that Reddit is a series of product experiences: the reading experience, the writing experience, and importantly, the moderation experience. One thing we always say on Decoder is that the real product of any social network is content moderation, and Reddit is maybe the best example of that: every subreddit is shaped by volunteer moderators who use the tools Reddit builds for them. So Pali has a big job bringing all these products together and making them better, all while trying to grow Reddit as a platform.
>
>Pali wanted to come on Decoder to talk about his new focus on making Reddit simpler: simpler for new users to join and find interesting conversations; simpler to participate in those threads; and simpler to moderate. We talked a lot about the tension between what new users need when they’re learning to use Reddit and what Reddit power users want — if the goal is to grow the site, you run the risk of irritating your oldest users with change.
>
>We also talked about video. Reddit is rolling out a dedicated video feed, which sounds a lot like an attempt to compete with TikTok, which every social network is trying to do — and we talked quite a bit about Google and search. Lots of people use Google to find things on Reddit, which is often used as a criticism of Google’s search quality. I wanted to know if Pali thinks Google is vulnerable in search, if Reddit can become a primary search engine for people, and most importantly, what he took from Google’s culture and what he left behind in organizing Reddit’s product team.

This was an interesting interview. Of particular interest to me was that before Pali there was no chief product officer. Even without one, though, the product teams seemed to be doing at least a passable job. Hopefully this new hire will have a positive impact on the user experience in the coming months and years.

2