Hrmbee

Hrmbee OP t1_jeallx6 wrote

>The software engineers behind these systems are employees of NTC Vulkan. On the surface, it looks like a run-of-the-mill cybersecurity consultancy. However, a leak of secret files from the company has exposed its work bolstering Vladimir Putin’s cyberwarfare capabilities.
>
>Thousands of pages of secret documents reveal how Vulkan’s engineers have worked for Russian military and intelligence agencies to support hacking operations, train operatives before attacks on national infrastructure, spread disinformation and control sections of the internet.
>
>One document links a Vulkan cyber-attack tool with the notorious hacking group Sandworm, which the US government said twice caused blackouts in Ukraine, disrupted the Olympics in South Korea and launched NotPetya, the most economically destructive malware in history. Codenamed Scan-V, it scours the internet for vulnerabilities, which are then stored for use in future cyber-attacks.
>
>Another system, known as Amezit, amounts to a blueprint for surveilling and controlling the internet in regions under Russia’s command, and also enables disinformation via fake social media profiles. A third Vulkan-built system – Crystal-2V – is a training program for cyber-operatives in the methods required to bring down rail, air and sea infrastructure. A file explaining the software states: “The level of secrecy of processed and stored information in the product is ‘Top Secret’.”
>
>The Vulkan files, which date from 2016 to 2021, were leaked by an anonymous whistleblower angered by Russia’s war in Ukraine. Such leaks from Moscow are extremely rare. Days after the invasion in February last year, the source approached the German newspaper Süddeutsche Zeitung and said the GRU and FSB “hide behind” Vulkan.

At this point, this may not be the most surprising news, but it's still useful to have confirmation of some of the scope and details of these operations. One question raised by these revelations, though, is what knowledge major technology players had of these operations, and what measures were taken to defend against them.

23

Hrmbee t1_je1oopl wrote

>Over the next few months, the bakery-café chain will roll out scanners that can access customers' credit card and loyalty account using their palm. The biometric-gathering technology, developed by Amazon and called Amazon One, is already popular in airports, stadiums and Whole Foods Market grocery stores. Panera is expected to become the first national restaurant company to use it.
>
>...
>
>"In contrast with biometric systems like Apple's Face ID and Touch ID or Samsung Pass, which store biometric information on a user's device, Amazon One reportedly uploads biometric information to the cloud, raising unique security risks," the senators' letter to Amazon CEO Andy Jassy said.

When I first read the headline, I wondered what kind of technological capabilities a company like Panera might have. However, seeing that they're going to be using Amazon One, things make a lot more sense.

For me, a server-based biometric system for retail purchases is pretty much a non-starter. I wonder how many other retailers will be signing on with this particular system, and what benefits a server-based system brings to them.
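
To make the on-device vs. server-side distinction concrete, here's a toy sketch in Python (hypothetical, and emphatically not how Amazon One or Face ID actually work internally) of why local matching is the more privacy-preserving architecture: the enrolled template never has to leave the device.

```python
# Toy contrast, not Amazon One's or Apple's actual implementation. With
# on-device matching, the enrolled template never leaves the device; a
# server-side design must ship each biometric reading to the cloud.
import numpy as np

rng = np.random.default_rng(1)
enrolled = rng.random(128)            # template captured at enrollment
enrolled /= np.linalg.norm(enrolled)  # stored ONLY on the device

def matches_locally(scan: np.ndarray, threshold: float = 0.95) -> bool:
    scan = scan / np.linalg.norm(scan)
    return float(scan @ enrolled) >= threshold  # cosine similarity, on-device

# A near-identical rescan matches without any network round trip; the
# cloud variant would instead POST `scan` to a server for comparison.
print(matches_locally(enrolled + 0.01))  # True
```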

6

Hrmbee OP t1_jdwik9g wrote

>Neuralink has been claiming human trials are just around the corner for years now. However, the company hasn’t yet gotten U.S. Food and Drug Administration approval to put its brain computer interface (BCI) devices inside human skulls. In fact, it only filed its first application for such approval in 2022—despite Musk publicly claiming human tests were forthcoming in 2019, according to another Reuters investigation. The FDA denied the company’s first bid for human trial approval last year, according to that early March report. > >Yet that rejection doesn’t necessarily mean Neuralink won’t eventually reach the human trial stage on its extremely ambitious quest to cure a wide array of ailments and disabilities—from blindness to paralysis—with its brain implant. That the company is still actively searching for an institutional partner for conducting human procedures suggests that Musk and other Neuralink execs remain confident in their device’s path forward. > >Gizmodo reached out to Neuralink for more information, but did not receive a response as of publication time. As with Musk’s other companies, like Twitter and Tesla, Neuralink almost never replies to journalist inquiries. Barrow Neurological Institute also did not immediately respond to Gizmodo’s emailed questions. > >To Reuters, however, a director from the Arizona treatment and research center said that Barrow would be well-equipped to conduct brain implant research along the lines of what Neuralink is hoping to do.

It will be interesting to see what these particular developments might be. It looks like these partnership explorations are running ahead of the regulatory approvals that have so far been denied to the company. And in light of the various investigations underway regarding its practices, it seems that permission might not be forthcoming in the near term, at least in the United States.

3

Hrmbee OP t1_jdiqte0 wrote

A direct link to the journal article is available here:

Road Traffic Noise and Incidence of Primary Hypertension: A Prospective Analysis in UK Biobank

Abstract:

>Background
>
>The quality of evidence regarding the associations between road traffic noise and hypertension is low due to the limitations of cross-sectional study design, and the role of air pollution remains to be further clarified.
>
>Objectives
>
>To evaluate the associations of long-term road traffic noise exposure with incident primary hypertension, we conducted a prospective population-based analysis in UK Biobank.
>
>Methods
>
>Road traffic noise was estimated at baseline residential address using the common noise assessment method model. Incident hypertension was ascertained through linkage with medical records. Cox proportional hazard models were used to estimate hazard ratios (HRs) for association in an analytical sample size of over 240,000 participants free of hypertension at baseline, adjusting for covariates determined via directed acyclic graph.
>
>Results
>
>During a median of 8.1 years follow-up, 21,140 incident primary hypertension (International Classification of Diseases 10th Revision [ICD 10]: I10) were ascertained. The HR for a 10 dB[A] increment in mean weighted average 24-hour road traffic noise level (Lden) exposure was 1.07 (95% confidence interval: 1.02, 1.13). A dose-response relationship was found, with HR of 1.13 (95% confidence interval: 1.03, 1.25) for Lden >65 dB[A] vs ≤55 dB[A] (P for trend < 0.05). The associations were all robust to adjustment for fine particles (PM2.5) and nitrogen dioxide (NO2). Furthermore, high exposure to both road traffic noise and air pollution was associated with the highest hypertension risk.
>
>Conclusions
>
>Long-term exposure to road traffic noise was associated with increased incidence of primary hypertension, and the effect estimates were stronger in presence of higher air pollution.
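
The Methods above rely on Cox proportional hazards regression. For anyone curious how a hazard ratio like the 1.07 per 10 dB[A] is mechanically produced, here's a minimal sketch using Python's `lifelines` library on simulated data (column names and effect sizes are made up, not UK Biobank's):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
noise = rng.uniform(4.5, 7.5, n)  # Lden in 10 dB[A] units (hypothetical)
age = rng.uniform(40, 69, n)

# Simulate event times with a weak positive noise effect (illustrative only).
hazard = np.exp(0.07 * noise + 0.02 * age)
time_to_event = rng.exponential(50.0 / hazard)

df = pd.DataFrame({
    "followup_years": np.minimum(time_to_event, 8.1),   # censor at 8.1 years
    "hypertension": (time_to_event < 8.1).astype(int),  # event observed?
    "lden_per_10db": noise,
    "age": age,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="hypertension")
cph.print_summary()  # exp(coef) of lden_per_10db is the HR per 10 dB[A]
```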

19

Hrmbee OP t1_jdiqcpk wrote

>Previous studies have shown a connection between noisy road traffic and increased risk of hypertension. However, strong evidence was lacking, and it was unclear whether noise or air pollution played a bigger role. The new research shows that it is exposure to road traffic noise itself that can elevate hypertension risk.
>
>“We were a little surprised that the association between road traffic noise and hypertension was robust even after adjustment for air pollution,” said Jing Huang, assistant professor in the Department of Occupational and Environmental Health Sciences in the School of Public Health at Peking University in Beijing, China, and lead author of the study.
>
>Previous studies of the issue were cross-sectional, meaning they showed that traffic noise and hypertension were linked, but failed to show a causal relationship. For the new paper, researchers conducted a prospective study using UK Biobank data that looked at health outcomes over time.
>
>Researchers analyzed data from more than 240,000 people (aged 40 to 69 years) who started out without hypertension. They estimated road traffic noise based on residential address and the Common Noise Assessment Method, a European modeling tool.
>
>Using follow-up data over a median 8.1 years, they looked at how many people developed hypertension. Not only did they find that people living near road traffic noise were more likely to develop hypertension, they also found that risk increased in tandem with the noise “dose.”
>
>These associations held true even when researchers adjusted for exposure to fine particles and nitrogen dioxide. However, people who had high exposure to both traffic noise and air pollution had the highest hypertension risk, showing that air pollution plays a role as well.
>
>“Road traffic noise and traffic-related air pollution coexist around us,” Huang said. “It is essential to explore the independent effects of road traffic noise, rather than the total environment.”
>
>The findings can support public health measures because they confirm that exposure to road traffic noise is harmful to our blood pressure, she said. Policymaking may alleviate the adverse impacts of road traffic noise as a societal effort, such as setting stricter noise guideline and enforcement, improving road conditions and urban design, and investing advanced technology on quieter vehicles.
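
For reference, the noise metric produced by the Common Noise Assessment Method is Lden, the EU's day-evening-night level, which penalizes evening noise by 5 dB and night noise by 10 dB before taking a 24-hour energy average. A quick illustration with made-up street levels:

```python
import math

def lden(l_day: float, l_evening: float, l_night: float) -> float:
    """EU day-evening-night noise level in dB(A): evening levels get a
    +5 dB penalty and night levels a +10 dB penalty before the 24-hour
    energy average (12 h day, 4 h evening, 8 h night)."""
    return 10 * math.log10(
        (12 * 10 ** (l_day / 10)
         + 4 * 10 ** ((l_evening + 5) / 10)
         + 8 * 10 ** ((l_night + 10) / 10)) / 24
    )

# Hypothetical street: 65/62/55 dB(A) day/evening/night -> ~65 dB Lden,
# i.e. the top exposure band in the study.
print(round(lden(65, 62, 55), 1))
```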

These are some important findings, especially given that the majority of humanity now lives in urban environments. Policymakers should take heed and look to reduce noise in cities not just through quieter vehicles but by reducing the number of vehicles overall. Anecdotally, many of us experienced during the early days of the pandemic how quiet cities could be with reduced traffic volumes.

22

Hrmbee OP t1_jddu43j wrote

This isn't just about HOA residents, but also visitors to the HOA, contractors, and the like. Also, if an HOA is in a city, then all vehicles passing by the community will likely be captured as well.

Also, given the ubiquity of the collection and analysis of personal data by data brokers, companies, and other organizations, it's highly unlikely that this data will remain unlinked with other personal information. Much of the information captured will likely be of people who are not parties to that HOA contract and essentially have no say in the matter.

3

Hrmbee OP t1_jddmyn2 wrote

>Lakeway is just one example of a community that has faced Flock’s surveillance without many homeowners’ knowledge or approval. Neighbors in Atlanta, Georgia, remained in the dark for a year after cameras were put up. In Lake County, Florida, nearly 100 cameras went up “overnight like mushrooms,” according to one county commissioner — without a single permit.
>
>In a statement, Flock Safety brushed off the Lake County incident as “an honest misunderstanding,” but the increasing surveillance of community members’ movements across the country is no accident. It’s a deliberate marketing strategy.
>
>Flock Safety, which began as a startup in 2017 in Atlanta and is now valued at approximately $3.5 billion, has targeted homeowners associations, or HOAs, in partnership with police departments, to become one of the largest surveillance vendors in the nation. There are key strategic reasons that make homeowners associations the ideal customer. HOAs have large budgets — they collect over $100 billion a year from homeowners — and it’s an opportunity for law enforcement to gain access into gated, private areas, normally out of their reach.
>
>Over 200 HOAs nationwide have bought and installed Flock’s license plate readers, according to an Intercept investigation, the most comprehensive count to date. HOAs are private entities and therefore are not subject to public records requests or regulation.
>
>“What are the consequences if somebody abuses the system?” said Dave Maass, director of investigations at the Electronic Frontier Foundation. “There are repercussions of having this data, and you don’t have that kind of accountability when it comes to a homeowners association.”
>
>The majority of the readers are hooked up to Flock’s TALON network, which allows police to track cars within their own neighborhoods, as well as access a nationwide system of license plate readers that scan approximately a billion images of vehicles a month. Camera owners can also create their own “hot lists” of plate numbers that generate alarms when scanned and will run them in state police watchlists and the FBI’s primary criminal database, the National Crime Information Center.
>
>“Flock Safety installs cameras with permission from our customers, at the locations they require,” said Holly Beilin, a Flock representative. “Our team has stood in front of hundreds of city council meetings, and we have always supported the democratic process.”
>
>After facing public outrage, the cameras were removed from communities in Texas and Florida, but Flock’s license plate readers continue to rapidly proliferate daily — from cities in Missouri to Kentucky.
>
>“It’s a near constant drumbeat,” said Edwin Yohnka, the director of public policy at the American Civil Liberties Union of Illinois.
>
>With over half of all Americans living in HOAs, experts believe the surveillance technology is far more ubiquitous than we know.

It looks like this company is following the playbook of other companies, such as Uber and Airbnb, that have sought to make inroads into communities through disruption. There also seem to be parallels between what they're doing here and what Ring has been doing with individual property owners. If we care about privacy in the slightest, regulations around these kinds of activities are sorely needed, yet they seem to be lacking in most jurisdictions.

2

Hrmbee OP t1_jdbl6r4 wrote

A link to the original research below:

Electrochemical degradation of PFOA and its common alternatives: Assessment of key parameters, roles of active species, and transformation pathway

Abstract:

>This study investigates an electrochemical approach for the treatment of water polluted with per- and poly-fluoroalkyl substances (PFAS), looking at the impact of different variables, contributions from generated radicals, and degradability of different structures of PFAS. Results obtained from a central composite design (CCD) showed the importance of mass transfer, related to the stirring speed, and the amount of charge passed through the electrodes, related to the current density on decomposition rate of PFOA. The CCD informed optimized operating conditions which we then used to study the impact of solution conditions. Acidic condition, high temperature, and low initial concentration of PFOA accelerated the degradation kinetic, while DO had a negligible effect. The impact of electrolyte concentration depended on the initial concentration of PFOA. At low initial PFOA dosage (0.2 mg L−1), the rate constant increased considerably from 0.079 ± 0.001 to 0.259 ± 0.019 min−1 when sulfate increased from 0.1% to 10%, likely due to the production of SO4•–. However, at higher initial PFOA dosage (20 mg L−1), the rate constant decreased slightly from 0.019 ± 0.001 to 0.015 ± 0.000 min−1, possibly due to the occupation of active anode sites by excess amount of sulfate. SO4•– and •OH played important roles in decomposition and defluorination of PFOA, respectively. PFOA oxidation was initiated by one electron transfer to the anode or SO4•–, undergoing Kolbe decarboxylation where yielded perfluoroalkyl radical followed three reaction pathways with •OH, O2 and/or H2O. PFAS electrooxidation depended on the chemical structures where the decomposition rate constants (min−1) were in the order of 6:2 FTCA (0.031) > PFOA (0.019) > GenX (0.013) > PFBA (0.008). PFBA with a shorter chain length and GenX with –CF3 branching had slower decomposition than PFOA. While presence of C–H bonds makes 6:2 FTCA susceptible to the attack of •OH accelerating its decomposition kinetic. Conducting experiments in mixed solution of all studied PFAS and in natural water showed that the co-presence of PFAS and other water constituents (organic and inorganic matters) had adverse effects on PFAS decomposition efficiency.
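
Since the abstract reports first-order rate constants, they translate directly into half-lives via t1/2 = ln(2)/k. A quick sanity check on the quoted figures:

```python
import math

# First-order decomposition rate constants (min^-1) quoted in the abstract.
k = {"6:2 FTCA": 0.031, "PFOA": 0.019, "GenX": 0.013, "PFBA": 0.008}

for compound, rate in k.items():
    half_life = math.log(2) / rate  # t1/2 = ln(2)/k for first-order decay
    print(f"{compound}: t1/2 = {half_life:.0f} min")
# 6:2 FTCA: 22 min; PFOA: 36 min; GenX: 53 min; PFBA: 87 min; the
# short-chain PFBA decomposes markedly more slowly, as the abstract notes.
```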

6

Hrmbee OP t1_jdbkypv wrote

>Scientists at the University of British Columbia announced on Wednesday that they had developed a new silica-based material with the ability to absorb a wider range of the harmful chemicals, and new tools to break them apart.
>
>“This is very exciting because we can target these difficult-to-break chemical bonds – and break them for good,” said researcher Madjid Mohseni, who focuses on water quality and water treatment.
>
>The chemicals, also known as PFAS (per-and polyfluoroalkyl substances) are used for non-stick or stain-resistant surfaces, including clothing, cookware, stain repellents and firefighting foam. But they are also notoriously difficult to break down naturally, giving them the name “forever chemicals”.
>
>...
>
>Current technologies often use activated carbon to filter out the chemicals, but are largely only able to target what researchers call the “long-chain” versions of PFAS – those with more than six carbon bonds. Following recent bans, however, industry has shifted to creating ‘short chain’ iterations of the chemical.
>
>Those versions “are equally as toxic and they stay in the water better. And as a result, current technologies like activated carbon really aren’t as effective,” said Mohseni.
>
>Most household water filters use activated carbon – and as a result, miss a wide range of possibly harmful chemicals.
>
>His team also found that the current filters concentrate the absorbed chemicals, creating a “highly toxic” form of waste that consumers throw into the garbage.
>
>Such filters “are not addressing the problem. We’re just temporarily fixing it and letting those chemicals stay in the environment,” he said.
>
>To combat the deficiencies in combatting PFAS, the team has developed a new silicate absorbing material that captures a far wider range of chemicals. The thin material can also be reused repeatedly.
>
>To destroy the chemicals, Mohseni says researchers use either electrochemical or photochemical processes to break the carbon-fluorine bond. The team first published their findings in the journal Chemosphere.

This is some good news as far as PFAS are concerned, though ultimately we do need to limit their broader use in our manufacturing processes. Allowing manufacturers to jump from one compound to another to avoid regulation seems to be a major failing of our current regulatory systems.

42

Hrmbee OP t1_jd3w54d wrote

From the abstract:

>To date, analog methods of cooking such as by grills, cooktops, stoves and microwaves have remained the world’s predominant cooking modalities. With the continual evolution of digital technologies, however, laser cooking and 3D food printing may present nutritious, convenient and cost-effective cooking opportunities. Food printing is an application of additive manufacturing that utilizes user-generated models to construct 3D shapes from edible food inks and laser cooking uses high-energy targeted light for high-resolution tailored heating. Using software to combine and cook ingredients allows a chef to more easily control the nutrient content of a meal, which could lead to healthier and more customized meals. With more emphasis on food safety following COVID-19, food prepared with less human handling may lower the risk of foodborne illness and disease transmission. Digital cooking technologies allow an end consumer to take more control of the macro and micro nutrients that they consume on a per meal basis and due to the rapid growth and potential benefits of 3D technology advancements, a 3D printer may become a staple home and industrial cooking device.

From the discussion:

>As digital cooking technologies become more ubiquitous, it is feasible that humankind will see the nutritional merits and drawbacks of having software-controlled assistants in the kitchen. 3D food printing has the potential to be the next frontier in cooking. Questions surrounding cost, ease of use and consumer acceptance will likely be top factors driving the trajectory of this technology. The spotlight shed on whole foods vs. processed foods for good health may influence consumers’ perception of this technology. However, with upcoming generations’ fascination with not only novel technologies, but also environmental sustainability and healthy eating, all of these are likely to influence the extent of adoption. Additionally, development of competing cooking technologies and advancements in nutrition science may come into play. An industry built around this technology may be on the horizon, creating a new vision of better nutrition, better food accessibility and palatability for many, increasing food safety and adding art and cutting-edge science to the most basic human need—nourishment.

There are some interesting possibilities here with regard to food production, but it seems likely that these technologies, especially in the near term, will first appear at industrial scales. The details of these systems for home use will be critical: how proprietary the ingredients and recipes might be is a key consideration.

3

Hrmbee OP t1_jcjiwg6 wrote

>Although there are almost 5,000 banks in North America, only a handful focus on startups, despite the importance of software, biotech and clean technology to the future of our economy, health and environment. While traditional commercial banks will only lend against “hard assets” or your personal guarantee, people such as me or SVB’s team have spent decades building the expertise to provide debt capital based on the value of your “enterprise,” taking into account your company’s IP, revenue or both.
>
>When these startups approach a lender, they’re rarely profitable. That lack of profitability often scares both bankers and regulators. And yet, as SVB and other lending teams have proven across multiple economic cycles, loan losses in this sector are no higher than those in the broader economy – provided you have the right expertise.
>
>SVB recognized this market gap and became the 16th-largest U.S. bank. As memories of the last dot-com bubble waned, SVB’s success spawned a few smaller competing banks. If you were an entrepreneur, you welcomed the new competition and the lower cost of capital that resulted.
>
>...
>
>But no competitor can do in five years what took SVB decades to accomplish with its 6,000-person team. Over a 40-year period, SVB built a US$30-billion loan portfolio, and about half of that capital is already at work in the economy. SVB has also deployed another US$40-billion in support of venture capital, infrastructure and private equity funds for their day-to-day business needs. That capital and know-how helps create thousands of new, high-paying North American jobs each month. All of which came to a screeching halt last Friday.
>
>With the loss of such a large debt partner, many VC funds will need to reserve more of their own capital to fund each and every new startup. Which means these same VCs will have no choice but to back fewer new firms. And fewer new startups means there’s an irrefutable risk that the “next Moderna” won’t get that first round of essential funding. The consequences of this single bank failure are difficult to overstate.

This kind of concentration of capacity and market dominance within one organization is a problem not just in finance but with any other critical piece of business infrastructure. Such organizations become single points of failure when things go wrong, and as we're seeing now, the resulting damage to the wider ecosystem can be significant. Ideally there should be a degree of redundancy built into all of these systems, so that in the event of a failure there is sufficient capacity to keep things going during the rebuilding phase.

13

Hrmbee OP t1_jc0l14e wrote

>Twitter’s API is used by vast numbers of researchers. Since 2020, there have been more than 17,500 academic papers based on the platform’s data, giving strength to the argument that Twitter owner Elon Musk has long claimed, that the platform is the “de facto town square.”
>
>But new charges, included in documentation seen by WIRED, suggest that most organizations that have relied on API access to conduct research will now be priced out of using Twitter.
>
>It’s the end of a long, convoluted process. On February 2, Musk announced API access would go behind a paywall in a week. (Those producing “good” content would be exempted.) A week later, he delayed the decision to February 13. Unsurprisingly, that deadline also slipped by, as Twitter suffered a catastrophic outage.
>
>The company is now offering three levels of Enterprise Packages to its developer platform, according to a document sent by a Twitter rep to would-be academic customers in early March and passed on to WIRED. The cheapest, Small Package, gives access to 50 million tweets for $42,000 a month. Higher tiers give researchers or businesses access to larger volumes of tweets—100 million and 200 million tweets respectively—and cost $125,000 and $210,000 a month. WIRED confirmed the figures with other existing free API users, who have received emails saying that the new pricing plans will take effect within months.
>
>“I don’t know if there’s an academic on the planet who could afford $42,000 a month for Twitter,” says Jeremy Blackburn, assistant professor at Binghamton University in New York and a member of the iDRAMA Lab, which analyzes hate speech on social media—including on Twitter.
>
>Elissa M. Redmiles, a faculty member at the Max Planck Institute for Software Systems in Germany, says the new prices are eye-watering. “It’s probably outside of any academic budget I’ve ever heard of,” she says, adding that the price would put off any long-term analysis of user sentiment. “One month of Twitter data isn’t really going to work for the purposes people have,” she says.
>
>Kenneth Joseph, assistant professor at the University of Buffalo and one of the authors of a recent paper analyzing a day in the life of Twitter, says the new pricing effectively kills his career. “$42,000 is not something I can pay for a single month in any reasonable way,” he says. “It totally destroys any opportunity to engage in research in this space, which I’ve in many respects built a career on.”
>
>The pricing documents were provided to WIRED by a researcher who asked for anonymity, since they are still accessing Twitter data through an existing API agreement and worry it could be terminated if they were identified. They say the new costs were “not viable for the academic community.”
>
>“No one can afford to pay that,” they say. “Even rich institutions can’t afford to pay half a million a year for a thimbleful of data.”

From a lay perspective, it looks like this kind of pricing scheme for API access is designed less to generate revenue for the company than to eliminate the possibility of independent research on the platform.
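
The arithmetic bears out the researchers' reactions; even the cheapest quoted tier works out to the "half a million a year" figure mentioned at the end:

```python
# Per-tweet arithmetic on the Enterprise tiers quoted above.
tiers = {  # tweets per month: USD per month
    50_000_000: 42_000,
    100_000_000: 125_000,
    200_000_000: 210_000,
}
for tweets, usd in tiers.items():
    per_thousand = usd / tweets * 1_000
    print(f"{tweets:>12,} tweets: ${usd:,}/mo, "
          f"${per_thousand:.2f} per 1,000 tweets, ${usd * 12:,}/yr")
# The cheapest tier comes to $504,000/yr, i.e. the "half a million a
# year" cited by the anonymous researcher.
```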

51

Hrmbee OP t1_jbfsi3a wrote

>DuckAssist uses OpenAI’s natural language technology to generate answers to users' search queries at the top of the search results page, making responses more direct than traditional search results.
>
>Contrary to other generative AI search assistants that use input from thousands of websites, DuckAssist only sources information from Wikipedia and Britannica, hoping to prevent incorrect information from being used when generating answers.
>
>This restrained approach also differentiates DuckAssist from the AI-powered summarizer that Brave Search launched last week, which sources content from news portals, making it more susceptible to false or misleading information in some cases.
>
>...
>
>As for search query anonymity, which sits at the core of DuckDuckGo’s values, the company assures that DuckAssist is fully integrated into its private search engine; hence, user queries or browsing history aren’t logged.
>
>Some data has to be transmitted to search content partners like OpenAI and Anthropic, but no personally identifiable information or IP addresses are ever shared with those entities.
>
>DuckDuckGo says that DuckAssist will gradually roll out to users in the coming weeks and promises that this will be the first of the many AI-assisted features it plans to roll out in the coming months.

It's good that this company is rolling these features out cautiously and gradually, and limiting sources initially to Wikipedia and Britannica. I'm also cautiously optimistic that their commitment to privacy will carry over to this new service, but only time will tell.
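
For anyone wondering what restricting sources to Wikipedia and Britannica means mechanically, a retrieval-restricted assistant is straightforward to sketch. This is a purely hypothetical toy (DuckDuckGo hasn't published its pipeline; the `wikipedia` package and the OpenAI client below are stand-ins):

```python
# Hypothetical toy, NOT DuckAssist's actual implementation: restrict the
# model to a single vetted source and instruct it to answer only from it.
import wikipedia             # pip install wikipedia
from openai import OpenAI    # assumes OPENAI_API_KEY is set

def answer_from_wikipedia(question: str, topic: str) -> str:
    context = wikipedia.summary(topic, sentences=5)  # the sole fact source
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the provided context. "
                        "If the context is insufficient, say you don't know."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer_from_wikipedia("Who founded DuckDuckGo?", "DuckDuckGo"))
```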

3

Hrmbee t1_jbdurhl wrote

>Pali Bhat joined Reddit from Google about a year ago — he’s actually Reddit’s first-ever chief product officer, which is pretty surprising considering that Reddit is a series of product experiences: the reading experience, the writing experience, and importantly, the moderation experience. One thing we always say on Decoder is that the real product of any social network is content moderation, and Reddit is maybe the best example of that: every subreddit is shaped by volunteer moderators who use the tools Reddit builds for them. So Pali has a big job bringing all these products together and making them better, all while trying to grow Reddit as a platform.
>
>Pali wanted to come on Decoder to talk about his new focus on making Reddit simpler: simpler for new users to join and find interesting conversations; simpler to participate in those threads; and simpler to moderate. We talked a lot about the tension between what new users need when they’re learning to use Reddit and what Reddit power users want — if the goal is to grow the site, you run the risk of irritating your oldest users with change.
>
>We also talked about video. Reddit is rolling out a dedicated video feed, which sounds a lot like an attempt to compete with TikTok, which every social network is trying to do — and we talked quite a bit about Google and search. Lots of people use Google to find things on Reddit, which is often used as a criticism of Google’s search quality. I wanted to know if Pali thinks Google is vulnerable in search, if Reddit can become a primary search engine for people, and most importantly, what he took from Google’s culture and what he left behind in organizing Reddit’s product team.

This was an interesting interview. Of particular interest to me was that before Pali there was no chief product officer. Even without one, though, the product teams seemed to be doing at least a passable job. Hopefully this new hire will impact the user experience positively in the coming months and years.

2

Hrmbee OP t1_jaezir8 wrote

>ShrimpApplePro reports that accessories like AirPods and cables are already being manufactured overseas based on the standard. Any cables that aren’t MFi-certified will be “limited in data and charging speed.”
>
>What does MFi stand for? Well, it’s “Made for iPod,” which isn’t a device that exists anymore (RIP to the mp3 players of yore), but the certification program was implemented back in 2005. Apple expanded it when the iPhone and iPad were introduced and rebranded it as MFi in 2012 after the iPhone 5 adopted the Lightning standard—remember going from 30-pin connectors to Lightning connectors? What a journey it’s been. In addition to helping standardize cables, MFi certifies all sorts of gadgets and accessories to label what’s safe for Apple users, including headphones, speakers, and even smart home devices. The only caveat to this program is that accessory makers have to pay a licensing fee of about $100/year. It only applies to manufacturers of electronic accessories, however, particularly those that don’t utilize an Apple standard like MagSafe.
>
>While it’s easy to see this as another way that Apple is sealing in its walled garden, Android manufacturers practice the same exclusivity with charging cables. OnePlus, under the Oppo brand, uses the red cable motif for its charging standard. The brand has long offered a faster charging specification than the rest of the Android brood within its ecosystem. And now that it’s adopted SuperVOOC, buying the right cable and adapter is essential to reach full 80W charging speeds. Its latest release, the OnePlus 11, can charge fully in about 30 minutes with the cable and adapter included in the box.

One of my ongoing frustrations with cables and connectors in computing more broadly is the proliferation of standards using the same plugs. Without clear markings, it's sometimes impossible to know what a cable is capable of until you plug it in, and even then it's not always clear. Manufacturers should do better here to ensure a good experience for all users.

27

Hrmbee OP t1_ja0j8ou wrote

>The Vancouver-based technology company says the tool, which allows users to browse, manage and schedule social media posts, will come with a fee beginning March 31.
>
>After that date, anyone who used the Hootsuite Free plan will have to stop using the service or switch to one of four new, paid plans starting at $99 per month for its Professional tier.
>
>...
>
>The move away from the free tier comes after Hootsuite cut seven per cent of its staff — about 70 people — in January, making it the company’s third layoff in the last year.

Haven't used this service in years, but it seemed useful, especially for those who have to manage multiple social media channels. For our company, with a single social media channel, it was convenient but hardly necessary.

8

Hrmbee OP t1_j8ybagg wrote

>We worked on ways to improve our toxic-speech-identification algorithms so they would not discriminate against African-American Vernacular English as well as forms of reclaimed speech. All of this depended on rank-and-file employees. Messy as it was, Twitter sometimes seemed to function mostly on goodwill and the dedication of its staff. But it functioned.
>
>Those days are over. From the announcement of Musk’s bid to the day he walked into the office holding a sink, I watched, horrified, as he slowly killed Twitter’s culture. Debate and constructive dissent was stifled on Slack, leaders accepted their fate or quietly resigned, and Twitter slowly shifted from being a company that cared about the people on the platform to a company that only cares about people as monetizable units. The few days I spent at Musk’s Twitter could best be described as a Lord of the Flies–like test of character as existing leadership crumbled, Musk’s cronies moved in, and his haphazard management—if it could be called that—instilled a sense of fear and confusion.
>
>Unfortunately, Musk cannot simply be ignored. He has purchased a globally influential and politically powerful seat. We certainly don’t need to speculate on his thoughts about algorithmic ethics. He reportedly fired a top engineer earlier this month for suggesting that his engagement was waning because people were losing interest in him, rather than because of some kind of algorithmic interference. (Musk initially responded to the reporting about how his tweets are prioritized by posting an off-color meme, and today called the coverage “false.”) And his track record is far from inclusive: He has embraced far-right talking points, complained about the “woke mind virus,” and explicitly thrown in his lot with Donald Trump and Ye (formerly Kanye West).
>
>Devaluing work on algorithmic biases could have disastrous consequences, especially because of how perniciously invisible yet pervasive these biases can become. As the arbiters of the so-called digital town square, algorithmic systems play a significant role in democratic discourse. In 2021, my team published a study showing that Twitter’s content-recommendation system amplified right-leaning posts in Canada, France, Japan, Spain, the United Kingdom, and the United States. Our analysis data covered the period right before the 2020 U.S. presidential election, identifying a moment in which social media was a crucial touch point of political information for millions. Currently, right-wing hate speech is able to flow on Twitter in places such as India and Brazil, where radicalized Jair Bolsonaro supporters staged a January 6–style coup attempt.
>
>Musk’s Twitter is simply a further manifestation of how self-regulation by tech companies will never work, and it highlights the need for genuine oversight. We must equip a broad range of people with the tools to pressure companies into acknowledging and addressing uncomfortable truths about the AI they’re building. Things have to change.

This was an interesting perspective from someone who experienced this shift firsthand. It's certainly worth heeding the warning about the algorithmic biases that are already baked into many systems. Further, self-regulation, though laudable, has proven ineffective at best, at least in most tech sectors. What we need are regulators who are familiar with the key issues facing technology as it relates to broader society, yet not beholden to tech companies or platforms. This will be tricky going forward, but if properly administered, regulation can bring lasting benefits not just to the platforms but to the rest of society as well.
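
For context, the core measurement idea behind the 2021 amplification study mentioned above can be sketched simply: compare how often content reaches users under the ranked timeline versus a reverse-chronological control. The study's actual methodology was more involved; a toy version with made-up numbers:

```python
# Toy version of the amplification measurement: compare reach under the
# ranked timeline vs. a reverse-chronological control group.
def amplification_ratio(ranked_impressions: int, ranked_users: int,
                        chrono_impressions: int, chrono_users: int) -> float:
    ranked_rate = ranked_impressions / ranked_users
    chrono_rate = chrono_impressions / chrono_users
    return ranked_rate / chrono_rate  # >1.0 means the algorithm amplifies

# Hypothetical: a set of accounts reaches ranked-timeline users 1.8x as
# often as the chronological control sees the same content.
print(amplification_ratio(900_000, 50_000, 500_000, 50_000))  # 1.8
```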

49

Hrmbee OP t1_j8ojqup wrote

Yeah, if they're in the same trench/conduit then that certainly presents additional challenges. For sites subject to frequent construction and/or maintenance activities, such as airports, it would be prudent to have at least one backup that uses a different physical route. Some facilities that I'm familiar with have gone with wireless systems (microwave, satellite, etc.) as a backup in case the physical link goes down.
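
As a minimal sketch of the failover idea (hypothetical hosts; a real deployment would handle this at the routing layer rather than in application code):

```python
# Probe the primary link and fall back to a physically diverse backup
# route (e.g., microwave or satellite) when it fails. Hosts are made up.
import socket

LINKS = [
    ("fiber-gw.example.net", 443),     # primary: buried fiber route
    ("wireless-gw.example.net", 443),  # backup: physically independent path
]

def first_healthy_link(timeout: float = 2.0) -> str:
    for host, port in LINKS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host  # TCP handshake succeeded; use this route
        except OSError:
            continue         # link down; try the next diverse route
    raise RuntimeError("all links down")

print("routing via", first_healthy_link())
```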

3

Hrmbee OP t1_j8nvdqh wrote

>Lufthansa confirmed the cause of the outage in an email to Gizmodo, saying “During construction work in Frankfurt, fiber optic cables belonging to a telecommunications service provider were damaged.” The company said on its website that all Frankfurt flights were suspended while some flights in and out of Munich were also canceled and recommended that passengers should not travel to the airport.
>
>Deutsche Telekom spokesman Peter Kespohl told Bloomberg that Telekom had repaired two cables thus far and is working to repair the others but did not specify how long the process would take.
>
>Lufthansa said in its email that it “expects the situation to ease further over the next few hours” and expects its flight operations to largely resume and be back on schedule on Thursday. The company added that passengers who booked “domestic flights can switch to Deutsche Bahn until Sunday.”

It's amazing that in 2023 a major airline at a major international airport doesn't have redundant service to keep things running in case of an outage. Given the IT challenges observed of late at several airlines, though, perhaps this is more of an industry-wide issue, one that requires a shift in attitude from the industry as a whole.

6

Hrmbee OP t1_j856i35 wrote

>The Supernova release will include an overhaul of Thunderbird's user interface. Castellani didn't share screenshots, but he indicated that the new UI would be "simple and clean" and targeted mostly at new users. For "veteran users," the interface will also be "flexible and adaptable" so that people who prefer the way Thunderbird looks now can "maintain that familiarity they love."
>
>Supernova will also include several other big changes, including a redesigned calendar and support for Firefox Sync.
>
>Beyond news about the redesign, the blog post is worth a read if you're curious about what the team is doing to battle the software's technical debt or if you want to know why it seems like the app's development moves so slowly (the developers spend a lot of their time simply keeping up with upstream changes from Firefox since the browser still serves as the foundation for Thunderbird's email rendering). The post is also helpful if you need a refresher on the long and complicated relationship between Thunderbird and Mozilla.
>
>Thunderbird used to be maintained by Mozilla alongside the Firefox browser, but in the modern era, it hasn't always been clear who's responsible for it. Mozilla executives had wanted to spin Thunderbird off as early as 2007, and it moved to a more community-driven development model in 2012.

It's good to see that an old stalwart client is getting a much-needed overhaul. Fingers crossed that this goes well, and that they have enough resources to properly execute on their vision.

15

Hrmbee OP t1_j730v4s wrote

For those interested in the original paper, it's available here:

Multilayered optofluidics for sustainable buildings


>Significance
>
>Buildings consume 32.4 PWh (32%) of our global energy supply, a footprint that is expected to double by mid-century. Designing facades like the skins of biological organisms, with dynamic multilayered optical reconfigurability, would enable homeostasis-like environmental responsiveness and significantly improved energy efficiency. Here, we develop an adaptive building interface, leveraging confined multilayered fluids to achieve a versatile library of shading, scattering, and selectively absorbing solar responses. Configurable optimization of this “building-scale microfluidic” platform can reduce energy consumption in our models by 43%, representing a design paradigm toward net-zero buildings.
>
>Abstract
>
>Indoor climate control is among the most energy-intensive activities conducted by humans. A building facade that can achieve versatile climate control directly, through independent and multifunctional optical reconfigurations, could significantly reduce this energy footprint, and its development represents a pertinent unmet challenge toward global sustainability. Drawing from optically adaptive multilayer skins within biological organisms, we report a multilayered millifluidic interface for achieving a comprehensive suite of independent optical responses in buildings. We digitally control the flow of aqueous solutions within confined milliscale channels, demonstrating independent command over total transmitted light intensity (95% modulation between 250 and 2,500 nm), near-infrared-selective absorption (70% modulation between 740 and 2,500 nm), and dispersion (scattering). This combinatorial optical tunability enables configurable optimization of the amount, wavelength, and position of transmitted solar radiation within buildings over time, resulting in annual modeled energy reductions of more than 43% over existing technologies. Our scalable “optofluidic” platform, leveraging a versatile range of aqueous chemistries, may represent a general solution for the climate control of buildings.
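
To build intuition for the multilayer idea, you can model each fluid layer as a per-band transmittance and multiply through the stack; reconfiguring one layer then changes solar heat gain while leaving daylight mostly untouched. A toy calculation (band split and layer values are made-up round numbers, not the paper's measurements):

```python
# Toy model: treat each fluid layer as a per-band transmittance and
# multiply through the stack.
SOLAR_IN = {"visible": 550.0, "near_infrared": 450.0}  # W/m^2, rough split

def stack_transmission(layers, incident=SOLAR_IN):
    """Transmitted power per band after passing through every layer."""
    out = dict(incident)
    for layer in layers:
        for band in out:
            out[band] *= layer[band]
    return out

winter = [  # admit both daylight and heat
    {"visible": 0.95, "near_infrared": 0.95},  # shading layer flushed clear
    {"visible": 0.95, "near_infrared": 0.90},  # NIR layer flushed clear
]
summer = [  # keep the daylight, reject the heat
    {"visible": 0.95, "near_infrared": 0.95},
    {"visible": 0.90, "near_infrared": 0.30},  # NIR-absorbing fluid pumped in
]

print(stack_transmission(winter))  # high visible light, full solar gain
print(stack_transmission(summer))  # similar visible light, ~1/3 of the heat
```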

10

Hrmbee OP t1_j730akc wrote

>Squid and several other cephalopods can rapidly shift the colors in their skin, thanks to that skin's unique structure. Engineers at the University of Toronto have drawn inspiration from the squid to create a prototype for "liquid windows" that can shift the wavelength, intensity, and distribution of light transmitted through those windows, thereby saving substantially on energy costs. They described their work in a new paper published in the Proceedings of the National Academy of Sciences.
>
>“Buildings use a ton of energy to heat, cool, and illuminate the spaces inside them,” said co-author Raphael Kay. “If we can strategically control the amount, type, and direction of solar energy that enters our buildings, we can massively reduce the amount of work that we ask heaters, coolers, and lights to do.” Kay likes to think of buildings as living organisms that also have "skin," i.e., an outer layer of exterior facades and windows. But these features are largely static, limiting how much the building "system" can be optimized in changing ambient conditions.
>
>...
>
>Kay and his colleagues thought the structure of squid skin might hold the key to creating dynamic, tunable building facades. “Sunlight contains visible light, which impacts the illumination in the building, but it also contains other invisible wavelengths, such as infrared light, which we can think of essentially as heat,” said Kay. “In the middle of the day in winter, you’d probably want to let in both, but in the middle of the day in summer, you’d want to let in just the visible light and not the heat. Current systems typically can’t do this: they either block both or neither. They also have no ability to direct or scatter the light in beneficial ways.”
>
>So Kay et al. constructed a prototype microfluidics system featuring flat sheets of plastic containing an array of thin channels for pumping fluids. Adding customized pigments or particles to the fluid changes what wavelength of light gets through, as well as the direction in which that light is distributed. Those sheets can be combined into layered stacks, with each stack performing a different kind of optical function, such as filtering the wavelength, tuning how the transmitted light scatters indoors, and controlling the intensity—all managed with small digitally controlled pumps.
>
>According to Kay, this simple and low-cost approach could enable the design of "liquid-state, dynamic building facades" with tunable optical properties to save energy on heating, cooling, and lighting. While their prototype is a proof of concept, the team ran computer simulations of the system's likely performance as a dynamic building facade, responding to changing ambient conditions. Their models showed a single layer controlling the transmission of near-infrared light would result in a 25 percent savings. Adding a second layer controlling the transmission of visible light could achieve closer to 50 percent in energy cost savings.

This looks to be some interesting research with particular applications to building science and the energy performance of buildings. Hopefully further testing and development can yield usable systems that help architects and engineers design and build more energy-efficient and comfortable buildings in the near future.

7