Soupjoe5 OP t1_ixe31q0 wrote

Article:

1

Algorithms that create art, text, and code are spreading fast—but legal challenges could throw a wrench in the works.

THE TECH INDUSTRY might be reeling from a wave of layoffs, a dramatic crypto-crash, and ongoing turmoil at Twitter, but despite those clouds some investors and entrepreneurs are already eyeing a new boom—built on artificial intelligence that can generate coherent text, captivating images, and functional computer code. But that new frontier has a looming cloud of its own.

A class-action lawsuit filed in a federal court in California this month takes aim at GitHub Copilot, a powerful tool that automatically writes working code when a programmer starts typing. The coders behind the suit argue that GitHub is infringing copyright because it does not provide attribution when Copilot reproduces open-source code covered by a license requiring it.

The lawsuit is at an early stage, and its prospects are unclear because the underlying technology is novel and has not faced much legal scrutiny. But legal experts say it may have a bearing on the broader trend of generative AI tools. AI programs that generate paintings, photographs, and illustrations from a prompt, as well as text for marketing copy, are all built with algorithms trained on previous work produced by humans.

Visual artists have been the first to question the legality and ethics of AI that incorporates existing work. Some people who make a living from their visual creativity are upset that AI art tools trained on their work can then produce new images in the same style. The Recording Industry Association of America, a music industry group, has signaled that AI-powered music generation and remixing could be a new area of copyright concern.

“This whole arc that we're seeing right now—this generative AI space—what does it mean for these new products to be sucking up the work of these creators?” says Matthew Butterick, a designer, programmer, and lawyer who brought the lawsuit against GitHub.

Copilot is a powerful example of the creative and commercial potential of generative AI technology. The tool was created by GitHub, a subsidiary of Microsoft that hosts the code for hundreds of millions of software projects. GitHub made it by taking a code-generating algorithm from AI startup OpenAI and training it on the vast collection of code it stores, producing a system that can preemptively complete large pieces of code after a programmer makes a few keystrokes. A recent study by GitHub suggests that coders can complete some tasks in less than half the time normally required when using Copilot as an aid.

But as some coders quickly noticed, Copilot will occasionally reproduce recognizable snippets of code cribbed from the millions of lines in public code repositories. The lawsuit filed by Butterick and others accuses Microsoft, GitHub, and OpenAI of infringing on copyright because this code does not include the attribution required by the open-source licenses covering that code.

2

Soupjoe5 OP t1_ixd0s85 wrote

Article:

The French government wants Europe to raise its game in space.

PARIS — Europe has to boost its strategic autonomy in space to compete with the likes of China and the United States, French Economy Minister Bruno Le Maire said Tuesday.

Speaking as ministers from the European Space Agency's 22 member countries convene in Paris to firm up a record €18.5 billion budget running for between three and five years depending on the program, Le Maire said that capitals needed to pay for "autonomous access to space."

"There must be a single Europe, a single European space policy and unwavering unity to face Chinese ambitions and American ambitions," said Le Maire.

While Europe has its own satellite systems to monitor climate change and provide geolocation services, it has no capability to send its astronauts to space or offer commercial satellite communication services, for example.

The summit, being held next to the Eiffel Tower at the Grand Palais Éphémère, comes as NASA's Artemis test flight rounds the moon as part of a trial run for sending humans back to the lunar surface this decade, and with the future of the International Space Station in doubt.

Talks at the two-day conference will cover funding for everything from research projects to weather satellites, such as Aeolus, which can monitor global wind flow, along with a Mars mission and extra cash for a secure communications satellite system proposed by the European Commission and targeted at competing with Elon Musk's Starlink.

But the haggling comes amid soaring inflation and a cost-of-living crisis which puts public space spending in focus.

"There is a price to pay for independence and we stand ready to pay the price," said Le Maire of his government's intention to meet new funding requests from ESA, a non-EU organization but with overlapping membership.

France is typically Europe's aerospace leader but during the last ESA summit in 2019 Germany overtook it as the largest single contributor to the budget.

Before the summit got underway on Tuesday, Le Maire and his German and Italian counterparts, Robert Habeck and Adolfo Urso, agreed to help finance the delayed European rocket system Ariane 6, along with Vega C, both of which are launched from French Guiana.

The deal heads off differences between the three countries over how best to develop rocket tech, with fierce competition for commercial and governmental satellite launches and an end to cooperation with Russia's Roscosmos because of the war in Ukraine.

"This is a very good starting point for Europe ... and for the space ambitions that we all want to share and move on over the next years," said Le Maire.

1

Soupjoe5 OP t1_ix8nigj wrote

Article:

The Eris rocket developed by Australian company Gilmour Space will be the first Australian system to go into orbit if it successfully launches next year

Australian company Gilmour Space has nearly finished building a rocket that it will attempt to launch into space in April 2023. If successful, it will be Australia’s first homegrown orbital spacecraft.

“Space [technology] is one of the key enablers of society – it’s good for a nation to have access to space capability if it can,” says Adam Gilmour, a long-time space enthusiast who co-founded the company after working in banking for 20 years.

The rocket, called Eris, will stand 23 metres tall and weigh over 30 tonnes. It will be powered by five hybrid engines that contain a solid fuel and a liquid oxidiser.

A final test conducted in early November found that each engine could generate 115 kilonewtons of thrust – “enough to pick up three or four SUVs [sports utility vehicles] each”, says Gilmour.
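The quoted comparison holds up as rough arithmetic. A quick sketch, assuming an SUV mass of about 2,500 kg (a figure not given in the article):

```python
# A rough check of the quoted figure: how much mass can 115 kN lift against
# gravity? The ~2,500 kg SUV mass is an assumption, not from the article.
thrust_newtons = 115_000.0               # 115 kN per engine, from the test
g = 9.81                                 # m/s^2
liftable_mass_kg = thrust_newtons / g    # ~11,700 kg per engine
print(f"{liftable_mass_kg / 2500:.1f} SUVs per engine")   # ~4.7
```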

The company expects to finish building Eris by March and is planning a test launch from a site near Bowen in north Queensland in April.

The rocket will be fitted with a lightweight satellite and aim to enter low Earth orbit.

“We’re confident it will take off the pad, but no first launch vehicle from a new company has ever successfully gone to space on the first try,” says Gilmour. “What generally happens is the second one works, so we’re building two of them so we can learn from the first and succeed with the second,” he says.

If the launch is successful, it will make Australia the 12th country in the world to send one of its own orbital rockets into space, joining the US, UK, Russia, China, Japan, South Korea, North Korea, France, Israel, India and Iran.

Most of the funding for the project has come from venture capital, with the Australian government contributing a small amount.

Following a successful launch, Gilmour Space plans to build bigger rockets that will be able to carry payloads of up to 1000 kilograms into low orbit. This would allow it to launch satellites for the Australian government and private companies for use in mining, agriculture, communications, defence, Earth observation and other areas.

“We’ve been using other countries’ rockets for the last 50 years, but there are a lot of restrictions,” says Gilmour. “If you’ve got an Australian launch vehicle, then if you’re an Australian company or the government, you’ve basically got unfettered access,” he says.

Aude Vignelles, the chief technology officer of the Australian Space Agency, says that having space capabilities would give a boost to Australia’s national well-being. “Australia’s geographical advantages and political stability [also] make us an attractive destination for launch activities,” she says.

If Eris successfully gets to orbit, it will be the first rocket with hybrid engines to do so, says Vignelles. Most rocket engines use fuels and oxidisers that are either both solid or both liquid, since such designs tend to be more powerful. But several companies are developing hybrid engines that have one component in solid form and the other in liquid form, since they have the potential to be safer, simpler and cheaper.

Gilmour Space also has ambitions to build rockets that can carry astronauts by 2026.

13

Soupjoe5 OP t1_iw08scy wrote

4

Agility plans to produce Digit in volume by 2024. It is working with several big, though unnamed, delivery outfits on ways in which Digit could work safely with people. If someone is detected by the robot's sensors, it pauses and then navigates around him or her. Nevertheless, says Dr Hurst, the robot will soon acquire a simplified face to help signal its intentions. An animated set of eyes, for instance, will look in a particular direction to indicate which way it is heading, and a glance at someone will show it has detected them.

Do no harm

Such safety systems will be needed for robots to interact successfully with people. At present, the use of robots is governed mainly by standard safety and product liability rules. Some argue, though, that special robot-specific laws will be required to ensure they are operated safely. As every sci-fi buff knows, Isaac Asimov laid out a set of these eight decades ago. They are:

• A robot may not injure a human being or, through inaction, allow a human being to come to harm.

• A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

• A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

But, as every sci-fi buff also knows, Asimov's storylines often revolve around these laws not quite working as planned.
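As a toy illustration only, the laws amount to an ordered veto chain in which each rule defers to the ones above it. The sketch below is a thought experiment, not how real robot-safety systems are built:

```python
# A toy rendering of the laws' strict precedence. Purely illustrative;
# actual robot safety engineering looks nothing like this.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool     # would acting (or failing to act) hurt someone?
    obeys_order: bool     # was the action commanded by a human?
    protects_self: bool   # does the action preserve the robot?

def permitted(a: Action) -> bool:
    if a.harms_human:
        return False           # First Law: an absolute veto
    if a.obeys_order:
        return True            # Second Law applies once the First is satisfied
    return a.protects_self     # Third Law has the lowest priority

# An order that would cause harm is refused outright:
print(permitted(Action(harms_human=True, obeys_order=True, protects_self=True)))  # False
```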

About his Digits, Dr Hurst says, “My opinion is that they are very safe. But we need real statistics and a regulatory environment to prove this.”

For his part, Mr Musk said that Optimus would contain a device that could be used as an off switch if necessary. Although the robot itself would be connected to wi-fi, the switch would not be, keeping it isolated from remote interference.

As far as the Amecas’ safety is concerned, Mr Jackson is taking an engineering approach. He observes that one reason human limbs avoid injuring others is by being firm and floppy at the same time. Unfortunately, the small, powerful actuators needed to emulate this in robots do not yet exist. He is working on that, though, for it will be of little use teaching an Ameca social graces if it then commits the faux pas of bashing into you.

2

Soupjoe5 OP t1_iw08nhl wrote

3

Other roboticists have turned a hobby into a business. Shadow Robot, a firm in London that makes one of the most dexterous human-like robot hands available, traces its roots to hobbyists meeting in the attic of its founder’s home. Most robot developers, however, have emerged from universities. One of the best known is Boston Dynamics, which began at the Massachusetts Institute of Technology. Atlas, its Hulk-like humanoid, has become an Internet video sensation—running, jumping and performing backflips. But Atlas is principally a research project, and at present would be too expensive to put into production. The company does sell a walking robot, but it is a four-legged one called Spot, which resembles a dog.

One of a bipedal robot’s advantages is that it should, in principle, be able to go wherever a person can. That includes navigating uneven surfaces and walking up and down steps. Digit, made by Agility Robotics of Corvallis, Oregon, is actually able to do this.

Digit is based on a walking torso called Cassie, which was developed at Oregon State University using machine-learning studies of human locomotion. It set a world record in May as the fastest robot to run 100 metres. (It did so in 24.7 seconds, some way behind Usain Bolt's 9.6.)

Unlike Cassie, Digit has a chest, arms and hands of a sort—though no fingers. In place of a head it has a lidar, an optical analogue of radar that builds up a three-dimensional model of the world around it using lasers. Digit is not designed to be humanoid, says Jonathan Hurst, Agility’s Chief Technology Officer. It is, rather, a “human-centric” robot intended as a tool for people to use to achieve more things.

One of Digit’s first roles is likely to be in a distribution centre run by an online retailer or freight company. Some already use automated goods-handling, but usually in areas fenced off to keep people out, in order to avoid injuries. Elsewhere, tasks remain labour intensive. By being designed to work safely alongside people, Digit could start changing this—for instance, by moving and stacking crates. It could then progress to unloading trucks. Eventually, it might even make home deliveries, carrying items from a van to the doorstep. Ultimately, the aim is for a user to be able to instruct the robot by talking to it.

2

Soupjoe5 OP t1_iw08jt6 wrote

2

Although the company, which has its origins in making animated figures for the entertainment industry, can construct highly realistic faces, Ameca's phizog is deliberately designed to look the way people might expect a robot from the world of science fiction to appear. It has a grey complexion, visible joints and no hair. It therefore avoids falling into the "uncanny valley", an illusion that happens when an artificially created being shifts from looking clearly not human into something more real, but not quite real enough. At this point people feel disturbed by its appearance. Comfort levels rise again as similarity to a human becomes almost perfect.

Some roboticists do, however, seek such perfection. Besides assisting people, robots can also act as their avatar representatives. Ishiguro Hiroshi, director of the Intelligent Robotics Laboratory at Osaka University, in Japan, has built one in his own image. He recently unveiled another, which resembles Kono Taro, Japan’s digital minister. The idea is that people either speak through their avatar with their own voice, or through someone else’s voice modified to sound like them. Mr Kono’s avatar will, apparently, be used to stand in for the minister at public-relations functions.

Ameca could also work as an avatar. Though less humanlike, its conversation is more compelling. That loquaciousness comes from an external "brain" in the form of an AI program called a large language model. It interacts with this over a wi-fi connection and the Internet. Engineered Arts is also working on hardware and software to allow the latest developments in computer vision to be incorporated quickly into its robots. And, as Mr Jackson readily admits, Ameca needs work in other areas, too. Asked if it can walk, the robot replies: "Unfortunately not, but I hope to soon. Until then I am bolted to the floor." A set of experimental legs stands ready in a nearby corner.

Different strokes

Different companies are coming from different directions in their approaches to making humanoid robots. Mr Jackson, born into a family of artists involved in automatons, gravitated naturally towards producing modern versions of them for the likes of theme parks, museums and the film industry. These have steadily evolved in sophistication. Some work as interactive guides. Others are used as research platforms by universities. During the covid lockdown, when business dried up, the firm threw all of its resources at developing Ameca, its most advanced model yet.

Other developers, like Tesla, are able to organise far bigger efforts—but not always successfully, as the case of Honda, a Japanese carmaker, shows. At one point, Honda's diminutive humanoid robot Asimo was considered the world's most advanced. The firm started work on this project in the 1980s, and although Asimo could walk—albeit clumsily—interpret voice commands and move objects, Honda shut the project down in 2018 to concentrate instead on more practical forms of robotics, such as mobility devices for the elderly.

2

Soupjoe5 OP t1_iw08erv wrote

Article:

1

Walking, talking machines will soon act as guides, companions and deliverers

Asked a question, Ameca fixes you with sapphire-blue eyes. Does that face contain a hint of a smile? "Yes, I am a robot", is the reply. Another Ameca, standing nearby in a group of four, stares across inquisitively and tries to join in. "Currently, it's the worst ever party guest," says Will Jackson, Ameca's creator. "It butts in on every conversation and never shuts up."

Mr Jackson, boss of Engineered Arts, a small robotics company in Falmouth, south-west England, is trying to fix that problem. Those eyes contain cameras and the Amecas are being trained to recognise faces and decide who is paying attention or making eye contact during conversations. Teaching manners to robots in this way is another step in the long, complicated process of making humanlike machines that can live and work alongside people—and, importantly, do so safely. As Ameca and other robots show, great strides are being made towards this end.

Some big boys are also moving into the business. On September 30th Elon Musk, boss of Tesla, SpaceX and Twitter, unveiled Optimus, a clunky, faceless prototype that walked hesitantly on stage and waved to the crowd. It was built from readily available parts. A more refined version, using components designed by Tesla, was then wheeled on. Although it was not yet able to walk, Mr Musk said progress was being made and that in volume production its price could fall to around $20,000.

Every home should have one

That is a tenth of the cost of a basic Ameca. Mr Jackson, who attended Optimus’s unveiling, agrees prices will come down with mass production. (He has sold 11 Amecas so far, and plans to open a factory in America to boost output.) But he wonders what, exactly, Mr Musk is proposing. The unveiling featured a video of Optimus moving parts in a Tesla factory. Yet car factories are already filled with the world’s most successful robots—transporting components around, welding and painting parts, and assembling vehicles. These robots do not look like people because they don’t need to.

The reason for building a humanoid machine, Mr Jackson maintains, is to perform tasks that involve human interaction. With a bit of development Ameca might, for example, make a good companion for an elderly person—keeping an eye on them, telling them their favourite programme is about to appear on television and never getting bored with having to make repeated reminders to the forgetful. To that end, Engineered Arts aims to teach its robots to play board games, like chess. But only well enough so that they remain fallible, and can be beaten.

To interact successfully with people, Mr Jackson asserts, a robot needs a face. “The human face is the highest bandwidth communications tool we have,” he observes. “You can say more with an expression than you can with your voice.” Hence Ameca’s face, formed from an electronically animated latex skin, is very expressive.

1

Soupjoe5 OP t1_ivtbw7k wrote

2

These maps are grounded in technical accuracy. The sonification of an image of gas and dust in a distant nebula, for instance, uses loud high-frequency sounds to represent bright light near the top of the image, but lower-frequency loud sounds to represent bright light near the image’s centre. The black hole sonification translates data on sound waves travelling through space — created by the black hole’s impact on the hot gas that surrounds it — into the range of human hearing.
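A minimal sketch of that kind of pixel-to-sound mapping, under assumed conventions (brightness drives loudness, a pixel's height in the image drives pitch); real sonification pipelines such as NASA's are more sophisticated:

```python
import numpy as np

def sonify_column(column, f_min=200.0, f_max=2000.0,
                  sample_rate=44100, duration=0.05):
    """Turn one image column (brightness values, top row first) into a short
    audio frame: each pixel contributes a sine tone whose frequency encodes
    its row (top = high pitch) and whose amplitude encodes its brightness."""
    t = np.arange(int(sample_rate * duration)) / sample_rate
    frame = np.zeros_like(t)
    n_rows = len(column)
    for row, brightness in enumerate(column):
        freq = f_max - (f_max - f_min) * row / max(n_rows - 1, 1)
        frame += brightness * np.sin(2 * np.pi * freq * t)
    peak = np.abs(frame).max()
    return frame / peak if peak > 0 else frame

# Scan the image left to right, one column per audio frame.
image = np.random.rand(64, 128)    # a stand-in for real nebula data
audio = np.concatenate([sonify_column(image[:, c])
                        for c in range(image.shape[1])])
# 'audio' is ready to be scaled to 16-bit samples and written out as a WAV.
```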

Scientists in other fields have also experimented with data sonification. Biophysicists have used it to help students understand protein folding. Aspects of proteins are matched to sound parameters such as loudness and pitch, which are then combined into an audio representation of the complex folding process. Neuroscientists have explored whether it can help with the diagnosis of Alzheimer’s disease from brain scans. Sound has even been used to describe ecological shifts caused by climate change in an Alaskan forest, with researchers assigning various musical instruments to different tree species.

In the long run, such approaches need to be rigorously evaluated to determine what they can offer that other techniques cannot. For all the technical accuracy displayed in individual projects, the Nature Astronomy series points out that there are no universally accepted standards for sonifying scientific data, and little published work that evaluates its effectiveness.

More funding would help. Many scientists who work on alternative data representations cobble together support from various sources, often in collaboration with musicians or sound engineers, and the interdisciplinary nature of such work makes it challenging to find sustained funding.

On 17 November, the United Nations Office for Outer Space Affairs will highlight the use of sonification in the space sciences in a panel discussion that includes Díaz-Merced and Arcand. This aims to raise awareness of sonification both as a research tool and as a way to reduce barriers to participation in astronomy. It’s time to wholeheartedly support these efforts in every possible way.

6

Soupjoe5 OP t1_ivtbvml wrote

Article:

1

In astronomy, the use of sound instead of light is breaking down barriers to participation and providing insight into the Universe.

For astronomers who are sighted, the Universe is full of visual wonders. From shimmering planets to sparkling galaxies, the cosmos is spectacularly beautiful. But those who are blind or visually impaired cannot share that experience. So astronomers have been developing alternative ways to convey scientific information, such as using 3D printing to represent exploding stars, and sound to describe the collision of neutron stars.

On Friday, the journal Nature Astronomy will publish the latest in a series of articles on the use of sonification in astronomy. Sonification describes the conversion of data (including research data) into digital audio files, which allows them to be heard, as well as read and seen. The researchers featured in Nature Astronomy show that sound representations can help scientists to better identify patterns or signals in large astronomical data sets.

The work demonstrates that efforts to boost inclusivity and accessibility can have wider benefits. This is true not only in astronomy; sonification has also yielded discoveries in other fields that might otherwise not have been made. Research funders and publishers need to take note, and support interdisciplinary efforts that are simultaneously more innovative and inclusive.

For decades, astronomers have been making fundamental discoveries by listening to data, as well as looking at it. In the early 1930s, Karl Jansky, a physicist at Bell Telephone Laboratories in New Jersey, traced static in radio communications to the centre of the Milky Way — a finding that led to the discovery of the Galaxy’s supermassive black hole and the birth of radio astronomy. More recently, Wanda Díaz-Merced, an astronomer at the European Gravitational Observatory in Cascina, Italy, who is blind, has used sonification in many pioneering projects, including the study of plasma patterns in Earth’s uppermost atmosphere.

The number of sonification projects picked up around a decade ago, drawing in researchers from a range of backgrounds. Take Kimberly Arcand, a data-visualization expert and science communicator at the Center for Astrophysics, Harvard & Smithsonian in Cambridge, Massachusetts. Arcand began by writing and speaking about astronomy, particularly discoveries coming from NASA’s orbiting Chandra X-Ray Observatory. She then moved on to work that centred on the sense of touch; this included making 3D printed models of the ‘leftovers’ of exploded stars that conveyed details of the physics of these stellar explosions. When, in early 2020, the pandemic meant she was unable to get to a 3D printer, she shifted to working on sonification.

In August, NASA tweeted about the sound of the black hole at the centre of the Perseus galaxy cluster; the attached file has since been played more than 17 million times. In the same month, Arcand and others converted some of the first images from the James Webb Space Telescope into sound. They worked under the guidance of people who are blind and visually impaired to map the intensity and colours of light in the headline-grabbing pictures into audio.

3

Soupjoe5 OP t1_iv17e9a wrote

3

The news had a somber effect on the participants as they waited for the revelations of the simulation’s final “day,” August 16. After academic participants explained the energy release the region would experience, Vernon was blunt. “There would be collapsed buildings,” he says, “we’d lose our hospitals, a lot of our infrastructure would be gone, there was a chance this could take out cell phone reception for at least 50 miles, and the whole region would lose power.”

The simulation presented a final misinformation gut punch. Post-impact, an individual calling themselves “National Expert T.X. Asteroid” claimed the explosion released toxic materials from outer space into the atmosphere. As a result, residents should expect symptoms similar to radiation exposure. The baseless claims were all over social media, and “T.X” was giving interviews to news outlets.

On the positive side, NASA’s ability to disseminate information received high marks from participants, given the agency’s widespread credibility. In addition, the framework established in the White House plan also appeared robust enough to manage the flow of information between federal and state agencies and activate all necessary communication channels.

The conversations between federal and local officials provided some of the best results of the exercise: decision-makers at all levels reached new understandings regarding who would coordinate the post-impact rescue and recovery efforts and what they needed to do their jobs. One finding was that sometimes at the fine-grain levels, less is more in terms of communicating the science. “We couldn’t keep up sometimes, and that’s something they need to consider,” Vernon says. “I have mayors, fire chiefs and other folks to explain this to. We may not need to know all the science behind it, but we need to know what, when and where because we need to start making big decisions as early as possible.”

Participants also discovered that the face of the “expert” should change from the federal to the local level. “At our level, we asked who our lead spokesperson would be,” Vernon says. “Who would people respect, trust and believe when we find out it’s headed towards us? That might not be the same person NASA puts out there.”

Ultimately, the participants and the simulation's facilitators agreed that the biggest thing they lacked was time. The asteroid destroyed Winston-Salem because of the narrow window between its discovery and impact. Widening that window is critical. "A decade is a fairly comfortable timeframe to be able to do something that would be effective," Stickle says. "Thirty years would be ideal. That's enough time for detailed observations, planning, building a spacecraft and getting something big to move. You'd even have time to send up a replacement if something goes wrong."

There are promising signs that with enough warning, humanity could mount a successful response. The DART mission, for instance, already showed that a spacecraft’s impact can alter a space rock’s trajectory. Multiple surveys of near-Earth objects, asteroids and comets are ongoing, and NASA received $55 million more for planetary defense from Congress than it asked for.

“It’s going to take time and money to detect and characterize everything out there,” Rainey says. “As well as having the ability for missions that can get underway rapidly and be effective against something like this. But ultimately, that’s much cheaper than rebuilding a city.” But just in case, Vernon says, “At least now, we have a plan. Hopefully, it never has to be used.”

4

Soupjoe5 OP t1_iv17dk8 wrote

2

The short but realistic timeline from discovery to impact highlighted major problems from the start. TTX22 was small and fast. By the time it was seen, it was too late to put together a mission to study, deflect or destroy it. NASA has no garages full of rockets on standby just in case an asteroid shows up. Shifting the rock’s trajectory would require at least 12 kinetic impactors, each like NASA’s DART mission that recently altered the orbit of the asteroid Dimorphos and which took more than five years to move from concept to rock-puncher. The recommendation from the after-action report on this front was blunt: develop these capabilities.
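A back-of-the-envelope momentum calculation shows why a single impactor falls short. The figures below are rough, public DART-scale numbers and labeled assumptions, not values from the exercise:

```python
# Back-of-the-envelope: the nudge from one DART-like kinetic impactor.
impactor_mass = 570.0      # kg, roughly DART's mass at impact
impact_speed = 6100.0      # m/s, roughly DART's closing speed
beta = 3.0                 # momentum enhancement from ejecta (assumed)
asteroid_mass = 5e9        # kg, a Dimorphos-scale target (assumed)

# Momentum transfer: delta_v = beta * m * v / M
delta_v = beta * impactor_mass * impact_speed / asteroid_mass
print(f"one impactor: {delta_v * 1000:.1f} mm/s")                 # ~2.1 mm/s

# Deflection accumulated over an assumed five years of warning time:
seconds = 5 * 365.25 * 24 * 3600
print(f"shift after 5 years: {delta_v * seconds / 1e3:.0f} km")   # ~330 km
# Small next to Earth's ~6,400 km radius -- hence a dozen impactors,
# and the report's call for much longer lead times.
```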

At the same time, the asteroid’s velocity, unknown composition and policy ramifications in the brief timeline ruled out hitting TTX22 with a nuclear bomb. However, late-in-the-game nuclear disruption remained an intriguing last-ditch option for some participants. “If you send up a nuclear explosive device, you could disrupt an asteroid just as it enters the atmosphere,” Stickle says. “In theory.”

That option, however, leans toward Hollywood, not reality. “There’s this tendency to think, ‘I saw this in a movie—they just launched ICBMs and blew it up,’” Johnson says. “The point of including this option in the simulation is to get them to understand that it’s not as simple. Using a nuclear explosive device in the terminal phase of an impact is a situation we don’t ever want to get ourselves into.”

Blasting an asteroid in space may result in a cluster of smaller but still dangerous, fast-moving rocks. And an upper-atmosphere detonation of a nuclear weapon has unknown but most likely dangerous effects. The explosion may not fully disintegrate the rock, forcing portions of it down somewhere else. Radiation could also persist in the upper atmosphere at levels that would make traveling through it on the way to space prohibitive.

With no way to stop the asteroid from hitting Earth, the exercise was all about mitigation—what must be done leading up to the impact and in the immediate aftermath. Organizations at all levels needed to be in contact, emergency plans had to be developed and enacted, and the public informed.

Within the simulated timeline, misinformation was constant. Many online news stories about the asteroid were factually incorrect, while “asteroid deniers” and claims of “fake news” grew unabated. Misinformation was a regular source of frustration for participants, who recognized that they would need to address it head-on in a real-life situation.

Johnson explained that his office is attempting to play the long game against misinformation. “We want to establish NASA’s Planetary Defense Coordination Office and those that work with us as the authorities when it comes to these situations,” Johnson says. “The plan is that the media and public understand that a group at NASA tracks and manages these types of things.”

But as participants pointed out, there are limited strategies to deal with a constant flow of lies from dozens or hundreds of outlets in a short time frame. In this case, misinformation yielded a deadly toll. “When we discussed evacuation, we were told that 20 percent of people would not leave because it was all fake news or the government was lying or some other reason,” says August Vernon, Winston-Salem/Forsyth County emergency management director. “That was about 200,000 people, all spread out. So here I am, not sure we’d even be able to evacuate the hospitals and prisons, and then we have people that can leave, refusing to leave.”

3

Soupjoe5 OP t1_iv17cte wrote

Article:

1

A trial of how government, NASA and local officials would deal with a space rock headed toward Earth revealed gaps in the plans

On August 16, 2022, an approximately 70-meter asteroid entered Earth's atmosphere. At 2:02:10 P.M. EDT, the space rock exploded eight miles over Winston-Salem, N.C., with the energy of 10 megatons of TNT. The airburst virtually leveled the city and surrounding area. Casualties were in the thousands.

Well, not really. The destruction of Winston-Salem was the story line of the fourth Planetary Defense Tabletop Exercise, run by NASA's Planetary Defense Coordination Office. The exercise was a simulation where academics, scientists and government officials gathered to practice how the United States would respond to a real planet-threatening asteroid. Held February 23–24, it drew more than 200 participants from 16 different federal, state and local organizations, joining both virtually and in person from Washington, D.C., the Johns Hopkins Applied Physics Lab (APL) campus in Laurel, Md., and Raleigh and Winston-Salem, N.C. On August 5, the final report came out, and the message was stark: humanity is not yet ready to meet this threat.
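For scale, the scenario's stated yield is consistent with a simple kinetic-energy estimate. A quick sketch, assuming a stony density and an entry speed that the article does not give:

```python
# An order-of-magnitude check on the scenario's 10-megaton yield, using
# kinetic energy = 1/2 m v^2. Density and entry speed are assumptions;
# the article supplies only the ~70 m size and the yield.
import math

diameter = 70.0            # m, from the scenario
density = 2500.0           # kg/m^3, assumed stony composition
velocity = 13_000.0        # m/s, assumed entry speed

radius = diameter / 2
mass = density * (4 / 3) * math.pi * radius**3     # ~4.5e8 kg
energy = 0.5 * mass * velocity**2                  # ~3.8e16 J
MEGATON_TNT = 4.184e15                             # joules per megaton of TNT

print(f"{energy / MEGATON_TNT:.1f} megatons")      # ~9, near the stated 10
```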

On the plus side, the exercise was meant to be hard—practically unwinnable. “We designed it to fall right into the gap in our capabilities,” says Emma Rainey, an APL senior scientist who helped to create the simulation. “The participants could do nothing to prevent the impact.” The main goal was testing the different government and scientific networks that should respond in a real-life planetary defense situation. “We want to see how effective operations and communications are between U.S. government agencies and the other organizations that would be involved, and then identify shortcomings,” says Lindley Johnson, planetary defense officer at NASA headquarters.

All in all, the exercise demonstrated that the United States doesn’t have the capability to intercept small, fast-moving asteroids, and our ability to see them is limited. Even if we could intercept space rocks, we may not be able to deflect one away from Earth, and using a nuclear weapon to destroy one is risky and filled with international legal issues. The trial also showed that misinformation—lies and false rumors spreading among the public—could drastically hamper the official effort. “Misinformation is not going away,” says Angela Stickle, a senior research scientist at APL who helped design and facilitate the exercise. “We put it into the simulation because we wanted feedback on how to counteract it and take action if it was malicious.”

Several key differences set this practice apart from previous ones in 2013, 2014 and 2016: First, this trial gave NASA’s Planetary Defense Office a chance to stress-test the National Near-Earth Object Preparedness Strategy and Action Plan, released by the White House in 2018. The plan lays out the details of who does what and when within the federal government, which allowed this year’s exercise to involve more governmental agencies than in previous years—including state and local emergency responders for the first time. The simulation was also the first to include not just an impact but its immediate aftereffects.

Events started with the “discovery” of an asteroid named “TTX22” heading toward Earth. Participants were presented with a crash course in asteroid science and told everything that was known about the asteroid and the likelihood of an impact. Each meeting jumped ahead in the timeline, with the final installments set just before and after the asteroid’s impact near Winston-Salem.

4

Soupjoe5 OP t1_iutgymb wrote

3

Leaner, simpler, cheaper

Burkhard Rost, a computational biologist at the Technical University of Munich in Germany, is impressed with the combination of speed and accuracy of Meta's model. But he questions whether it really offers an advantage over AlphaFold's precision when it comes to predicting proteins from metagenomic databases. Language-model-based prediction methods — including one developed by his team [3] — are better suited to quickly determining how mutations alter protein structure, which is not possible with AlphaFold. "We will see structure prediction become leaner, simpler, cheaper, and that will open the door for new things," he says.

DeepMind doesn’t currently have plans to include metagenomic structure predictions in its database, but hasn’t ruled this out for future releases, according to a company representative. But Steinegger and his collaborators have used a version of AlphaFold to predict the structures of some 30 million metagenomic proteins. They are hoping to find new kinds of RNA viruses by looking for novel forms of their genome-copying enzymes.

Steinegger sees trawling biology's dark matter as an obvious next step for such tools. "I do think we will quite soon have an explosion in the analysis of these metagenomic structures."

2

Soupjoe5 OP t1_iutgy1o wrote

2

Meta's network, called ESMFold, isn't quite as accurate as AlphaFold, Rives' team reported earlier this summer [2], but it is about 60 times faster at predicting structures, he says. "What this means is that we can scale structure prediction to much larger databases."

As a test case, they decided to deploy their model on a database of bulk-sequenced 'metagenomic' DNA from environmental sources including soil, seawater, the human gut, skin and other microbial habitats. The vast majority of the DNA entries — which encode potential proteins — come from organisms that have never been cultured and are unknown to science.

In total, the Meta team predicted the structures of more than 617 million proteins. The effort took just 2 weeks (AlphaFold can take minutes to generate a single prediction). The predictions are freely available for anyone to use, as is the code underlying the model, says Rives.
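Since the model and its code are openly released, trying it on a single sequence is straightforward. A minimal sketch, assuming the published fair-esm package interface (the example sequence here is arbitrary):

```python
# Minimal single-sequence structure prediction with ESMFold, assuming the
# fair-esm package's published interface (pip install fair-esm).
import torch
import esm

model = esm.pretrained.esmfold_v1()
model = model.eval()   # inference only; add .cuda() if a GPU is available

# An arbitrary example sequence, one letter per amino acid.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"

# No multiple-sequence alignment is needed: the language model supplies the
# evolutionary context, which is where the speed advantage comes from.
with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)
```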

Of these 617 million predictions, the model deemed more than one-third to be high quality, such that researchers can have confidence that the overall protein shape is correct and, in some cases, can discern finer atomic-level details. Millions of these structures are entirely novel, and unlike anything in databases of protein structures determined experimentally or in the AlphaFold database of predictions from known organisms.

A good chunk of the AlphaFold database is made up of structures that are nearly identical to each other, whereas 'metagenomic' databases "should cover a large part of the previously unseen protein universe", says Martin Steinegger, a computational biologist at Seoul National University. "There's a big opportunity now to unravel more of the darkness."

Sergey Ovchinnikov, an evolutionary biologist at Harvard University in Cambridge, Massachusetts, wonders about the hundreds of millions of predictions that ESMFold made with low-confidence. Some might lack a defined structure, at least in isolation, whereas others might be non-coding DNA mistaken as a protein-coding material. “It seems there is still more than half of protein space we know nothing about,” he says.

4

Soupjoe5 OP t1_iutgxep wrote

Article:

1

Microbial molecules from soil, seawater and human bodies are among the planet’s least understood proteins.

When London-based DeepMind unveiled predicted structures for some 220 million proteins this year, it covered nearly every protein from known organisms in DNA databases. Now, another tech giant is filling in the dark matter of our protein universe.

Researchers at Meta (formerly Facebook, headquartered in Menlo Park, California) have used artificial intelligence (AI) to predict the structures of some 600 million proteins from bacteria, viruses and other microbes that haven’t been characterized.

“These are the structures we know the least about. These are incredibly mysterious proteins. I think they offer the potential for great insight into biology,” says Alexander Rives, the research lead for Meta AI’s protein team.

The team generated the predictions — described in a 1 November preprint [1] — using a 'large language model', a type of AI that is the basis for tools that can predict text from just a few letters or words.

Normally, language models are trained on large volumes of text. To apply them to proteins, Rives and his colleagues fed them the sequences of known proteins, which can be expressed as chains of 20 different amino acids, each represented by a letter. The network then learned to 'autocomplete' proteins with a proportion of their amino acids obscured.

Protein ‘autocomplete’

This training imbued the network with an intuitive understanding of protein sequences, which hold information about their shapes, says Rives. A second step — inspired by DeepMind’s pioneering protein structure AI AlphaFold — combines such insights with information about the relationships between known protein structures and sequences, to generate predicted structures from protein sequences.
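As a schematic of that masked 'autocomplete' training objective, a toy sketch is below; it illustrates the training signal only and is nothing like Meta's actual network:

```python
# A schematic of the masked 'autocomplete' objective, not Meta's real model:
# hide a fraction of residues and train a network to recover them.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20-letter alphabet predicted over

def mask_sequence(seq: str, mask_rate: float = 0.15):
    """Corrupt a sequence by masking random residues; return the corrupted
    string and the hidden residues a model would be trained to predict."""
    chars = list(seq)
    targets = {}
    for i, residue in enumerate(chars):
        if random.random() < mask_rate:
            targets[i] = residue   # ground truth the network must recover
            chars[i] = "?"         # mask token
    return "".join(chars), targets

masked, targets = mask_sequence("MKTAYIAKQRQISFVKSHFSRQ")
print(masked)    # e.g. 'MKTA?IAKQR?ISFVKSHFSRQ'
print(targets)   # e.g. {4: 'Y', 10: 'Q'}, the positions to fill back in
```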

5

Soupjoe5 OP t1_iujwzqm wrote

3

“The more data our scientists have that they can work on, the better it will be for us all,” Bergquist tells TIME.

For its part, China doesn’t want to close its space station doors—Tiangong is open to all U.N. member states. The ESA has even planned for its astronauts to board the Tiangong, though this has been stalled pending further discussion with Beijing. One of the station’s designers told state media that Tiangong is “inclusive” and designed to be adaptable for non-Chinese astronauts. And at least 1,000 scientific experiments will be conducted in the station, Nature reports, mostly involving Chinese researchers but also including projects led by researchers from 17 other countries and regions like Kenya, Russia, Mexico, Japan and Peru, some of which are struggling to support their own space initiatives.

While the U.S. is decades of operational experience ahead of the Chinese space program, China’s willingness to partner with other countries may be cementing its place as a space power today. Since 2016, China has made 46 space cooperation agreements with 19 different countries and regions.

“I don’t believe [China] wants to be confrontational,” Parker tells TIME. “I think they want people to like them; I think they want to be trusted.”

13