TouchCommercial5022 t1_j1p9lxz wrote

All they really do is manage these kids' behavior like chimpanzee handlers; next thing you know there will be an app that makes Axel stay in his seat.

I hope the education system as we know it ends by 2030. Every person deserves their own personal JARVIS-style AI that can provide proper 1-on-1 mentoring throughout their lives. That could multiply each person's learning rate tenfold, and our species' along with it. It wouldn't matter how much money you had, where you lived, or what resources you had access to. Every person could eventually get a world-class education.

The educational system is so stuck in the past that it cannot adapt to modern times. This will force its hand and ultimately make it rethink how students should be assessed.


TouchCommercial5022 t1_j1ota81 wrote

So ChatGPT is smarter than me... great.

Just a reminder that OpenAI gave the original GPT-3 175B (davinci classic) a subset of SAT questions in 2020. It did very well, beating the average score by 20% or so.

Newer benchmarks are much more stringent and AI continues to outperform humans.

Lots of people are comparing GPT to a dumb human, even going as far as trying to quantify it with SAT and IQ tests. But I actually think a better comparison may be a person with severe schizophrenia. It is well known that the binding constraint on LLM performance is hallucination, and these hallucinations seem inherent to the architecture itself.

ChatGPT is a very smart System 1 thinker. It is a terrific talker, able to speak eloquently and convincingly on a wide range of topics, far beyond what we would expect from its measured IQ (around 85, depending on which test you use). However, it is very clear that ChatGPT has essentially zero capacity for System 2 thinking.

It has near-zero capacity for the kind of careful attention, deliberation, and introspection that makes humans such formidable scientists and engineers. No matter how many worked calculations we give it, it seems unable to learn arithmetic beyond the two- or three-digit cases it has most likely memorized.

This is characteristic of the cognitive impairment seen in severe schizophrenia. At the neurological level, schizophrenia is associated with degradation of the salience network that drives System 2 reasoning. At the psychological level, this typically shows up as formal thought disorder, in which the patient produces coherent-sounding sentences that lack any sensible reasoning or logic.


TouchCommercial5022 t1_j1nmx66 wrote


ChatGPT is based on an updated version of GPT-3 (call it GPT-3.5), and the chatbot was released as a kind of preview of GPT-4.

will it replace google??

I really doubt it

Am I the only one who doesn't like having only one answer?

People are still going to want to investigate. Yes, chatGPT helps you research, but when I try to learn something new, I read a lot of articles that have different perspectives on the same topic.

Also, our content bubbles are already small enough. Can you imagine that everyone got the same answers from the same source? *shudder

Or even worse, a different answer from the same source. Have you tried deleting the conversation and coming back a day later to ask the same questions? I don't usually get the same answer.
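Part of why answers vary between sessions: chat models typically sample each next token from a probability distribution (controlled by a "temperature" setting) rather than always taking the single most likely token. A minimal sketch of temperature sampling, with made-up toy probabilities rather than real model outputs:

```python
import math
import random

def sample_next_token(probs, temperature=0.8):
    """Sample one token from a next-token distribution, rescaled by temperature."""
    logits = {tok: math.log(p) / temperature for tok, p in probs.items()}
    m = max(logits.values())
    weights = {tok: math.exp(l - m) for tok, l in logits.items()}
    total = sum(weights.values())
    r = random.random() * total
    cum = 0.0
    for tok, w in weights.items():
        cum += w
        if r < cum:
            return tok
    return tok  # fallback for floating-point edge cases

# Toy distribution for the token after "The capital of France is" --
# invented numbers, not real model output.
probs = {"Paris": 0.90, "Lyon": 0.06, "Marseille": 0.04}
samples = [sample_next_token(probs, temperature=1.5) for _ in range(1000)]
# Higher temperature flattens the distribution, so minority tokens appear
# more often -- one reason re-asking the same question varies the answer.
```

At temperature near zero the model becomes nearly deterministic; at higher temperatures the long tail of "plausible but different" continuations gets sampled, which is exactly the re-ask-and-get-a-new-answer behavior described above.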

GPT-3 is essentially a bot trained to sound like a person who knows what they're talking about, without actually knowing many facts.

ChatGPT hints at a perhaps-near future, but it has at least two major flaws. First, it presents inaccurate information as correct. Second, how do you release a language model into the world without it being influenced or manipulated by nefarious actors? How do you trust the data you are given? How do you know when it is making things up? These may be bigger challenges than the ChatGPT demo would lead us to believe.

I asked ChatGPT questions and it gave me completely fabricated scientific studies. When I asked it to cite the authors and DOIs of the studies, they were all fictitious; it seems convincing until you look up the studies and discover they don't exist. It's certainly very fast and very convincing, but it's not really an AI assistant, unless it's an assistant that makes things up, and is therefore far from reliable.

Yes, this appears to be world-changing technology, but right now it's a bit of an Emperor's New Clothes situation. We'll have to wait until GPT-4 arrives to see what the progress looks like. There are some big challenges ahead for this technology to overcome.


ChatGPT is predictive text generation. It's not an encyclopedia, it's not a web browser, it's not web search, and it doesn't know about anything recent. And it will flat-out lie to you.
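That "predictive text" description can be made concrete. At its crudest, next-word prediction is just counting which word tends to follow which; a toy bigram sketch (real LLMs use neural networks over tokens, but the objective, predicting the next piece of text, is the same; the corpus here is invented):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which -- the crudest form of predictive text."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word` in the training data."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "the model predicts the next word",
    "the model knows nothing about truth",
    "the model predicts plausible text",
]
counts = train_bigrams(corpus)
print(predict_next(counts, "model"))  # "predicts" -- seen twice vs "knows" once
```

Nothing in this objective rewards truth, only plausibility given the training text, which is why fluent nonsense comes so naturally.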

Predictive text neural networks must be fed curated data sets. They have to be curated because uncurated data quickly overwhelms everything else and turns them into raving Nazis; that's literally the story of every previous chatbot. It also means it takes time for new information to reach them, so they can't be up to date on anything recent.

It also magnifies any biases present in the data it's fed. If certain words keep appearing together, it rates them as highly likely to go together. That means if the only data you provide about the Middle East concerns terrorism, it will associate everyone in the Middle East with terrorism. If you only give it stories about white people, it will associate whiteness with "good" attributes.
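That co-occurrence mechanism is easy to demonstrate: count which words appear together in a toy corpus and the skew of the data shows up directly in the counts. A sketch (corpus and names invented for illustration):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(sentences):
    """Count how often each word pair appears in the same sentence."""
    pairs = Counter()
    for s in sentences:
        words = sorted(set(s.lower().split()))
        for a, b in combinations(words, 2):
            pairs[(a, b)] += 1
    return pairs

# Deliberately skewed toy corpus: a model can only learn what it is fed.
skewed = [
    "regionX conflict report",
    "regionX conflict update",
    "regionY festival report",
]
pairs = cooccurrence(skewed)
# 'regionx' co-occurs with 'conflict' twice and with 'festival' never,
# so any statistic built on this data inherits the skew.
print(pairs[("conflict", "regionx")])  # 2
```

Scale the same effect up to billions of sentences and you get exactly the bias amplification described above.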

ChatGPT also doesn't know what it doesn't know. It will gladly lie to you when it doesn't know an answer and spit out programs that crash. It is also trained on a wide range of subjects, but not all of them. Good luck asking it for the best Elden Ring strategies, and for god's sake, don't trust it with anything genuinely life-threatening.

Or maybe Google just buys them. It is standard business practice for large companies to buy promising startups and then shut them down or integrate them.

The problem with buying them is that OpenAI is backed by Microsoft (and I think Nvidia), so Google would probably be at the back of the queue if an acquisition were ever put on the table.

But Google has its own version, with a more advanced training suite, to be released next year. It's going to be an arms race, not a buyout. I hope they release something to compete instead of buying it like they always do.

They already have LaMDA in-house; they just haven't wanted to release it the way OpenAI released ChatGPT, for fear of misuse and bugs. ChatGPT is already showing errors and problems.

If anything, all that does is show them the need to ship a product using LaMDA sooner.

A friend works at Google and showed me some LaMDA conversations; it's on par with what OpenAI is doing. It even leans more conversational and less robotic.

Super bullish on GOOG long term.

And Google is the only one that seems close to being able to offer self-driving cars. Google has a lot of potential between its core business, Calico, and Waymo.

Shit got serious.

I can't wait for the google v2 experience where I don't have to rely on my search engineering skills.

Let's go, Google. Speed it up, please.

The problem is that Google used to be: search, and the first page of results included the studies you wanted.

Now it's: search, sponsored results, 13 pages of political opinions and news articles, and then, loosely related, the study result I actually wanted.

ChatGPT has been a breath of fresh air. It may not be perfect and it messes up from time to time, but it lets me quickly weed out the nonsense and find answers.

Maybe if Google search weren't riddled with SEO spam and actually gave me what I wanted to see without my having to add "reddit" to the end, I wouldn't be using ChatGPT.

The problem is that Google will steer you in completely different directions based on what it "thinks" you want to see. It's not uncommon to get different search results based on things like region, which creates regional bias.

I don't think ChatGPT competes directly, but it would be VERY nice to have a solid competitor to Google in web search, because if Google doesn't like you, your business is basically ruined these days.


TouchCommercial5022 t1_j1mq4cf wrote

Submission statement:

"The long-term trend has been that new technologies tend to exacerbate precariousness. Large, profitable industries typically turn away new entrants until they incorporate emerging technologies into their existing workflows."

This article is a very interesting way to look at the generative AI revolution of 2022. As with previous IT revolutions such as social media, it will likely be business profit interests that determine how this technology shapes our future.

Blade Runner dystopia confirmed, got it.

I can't imagine how much cost they're racking up. I mean, they're already monetizing GPT-3, so I guess it's pretty clear what they're going to do next. This is the "gain publicity and users" phase. The moneymaking will come soon enough.

They'll earn a lot of money from user and company subscriptions to access ChatGPT and their other services. It's free right now, but it won't be for long.

This is also why they're trying to take Stable Diffusion off the board, to incorporate it into the next Adobe release or something.

I really appreciate and admire what Stable Diffusion did. A few weeks after DALL-E and Midjourney made the rounds with their paid, private services, they simply went and released their work openly: open source and free to run at home. They threw a whole new "industry" that was just beginning to cash in under the bus. The rich investors behind DALL-E must have been furious.

So now only the rich will benefit from AI, the poor will eat shit as usual.

And with AI replacing the poor, very soon the rich won't need the poor at all.

But I would pay for ChatGPT:

You can buy GPT-3 access right now: 50,000 tokens (about 37,500 words, input and output both counted) for $1. GPT-3 is almost as good as ChatGPT in many ways.

$50 gets you 2.5 million tokens, or roughly 1.9 million words. An average page runs about 500 words, so say your average query, input plus output, is half a page: 250 words. That works out to roughly 7,500 individual queries for your $50.

So basically you can buy it at that price right now (in the form of GPT-3), except you don't pay monthly, you pay per token, so you could spread those queries over many months if you wanted.

I suspect that chatGPT would have similar prices.
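For what it's worth, the back-of-envelope arithmetic above can be written out explicitly (davinci-era pricing of $0.02 per 1K tokens; the words-per-token ratio is a rough assumption and actual rates have changed since):

```python
# Back-of-envelope cost math for davinci-era GPT-3 API pricing.
TOKENS_PER_DOLLAR = 50_000   # $0.02 per 1K tokens -> 50K tokens per $1
WORDS_PER_TOKEN = 0.75       # rough English average

budget = 50.0
tokens = budget * TOKENS_PER_DOLLAR        # 2,500,000 tokens
words = tokens * WORDS_PER_TOKEN           # ~1,875,000 words

words_per_query = 250                      # half a page, input + output
queries = words / words_per_query
print(f"${budget:.0f} buys ~{tokens:,.0f} tokens, ~{queries:,.0f} queries")
```

At these assumptions the word-based count lands around 7,500 queries; either way, the order of magnitude is the point.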

What I really can't wait to see and use is GPT4.

The genie is out of the bottle. It's all open source, so unless they start banning personal computers and collaborative development, there will be weird AI for the masses for the foreseeable future.

The analogy with social networks is incorrect, because social networks require everyone to be on the same network. The best analogy is the app store. There will be big players and little players, but getting locked out will only happen in the most extreme cases, and those guys will continue to thrive in their own corners.


TouchCommercial5022 t1_j1mj0nt wrote


In mice, the microrobots safely killed pneumonia-causing bacteria in the lungs, resulting in 100% survival. By contrast, all untreated mice died within three days of infection.

The results were published September 22 in Nature Materials.

The microrobots are made of algae cells whose surfaces are dotted with antibiotic-filled nanoparticles. The algae provide movement, allowing the microrobots to swim and deliver antibiotics directly to more bacteria in the lungs. The nanoparticles containing the antibiotics are made of tiny spheres of biodegradable polymer that are coated with the cell membranes of neutrophils, which are a type of white blood cell. The special thing about these cell membranes is that they absorb and neutralize inflammatory molecules produced by bacteria and the body's immune system. This gives the microrobots the ability to reduce harmful inflammation, which in turn makes them more effective at fighting lung infection.

The paper doesn't exactly lay out the survival rates of one treatment versus the other; it's more about efficiency, as the microrobots required only a tiny fraction of the antibiotic dose to treat the disease.

What really intrigues me is whether the lower, less concentrated antibiotic exposure means that using microrobots would reduce the chance of antibiotics failing to work on you in the future.


They didn't really design a nanorobot; that part was made by nature. They modified one that already existed: algae.

Humans are great at exploiting things that already exist, whether it's the horse or this.

I hope we go down this path for nano-robots. They won't be artificial constructions made of plastics or metals; they'll be genetically modified viruses selected for specific tasks.

Viruses are already little robots. All living things are incomprehensibly complex machines designed by accident over a very long period of time.

I hope they can clean all the plastic out of our lungs

Actually, the term "nanobots" is somewhat misleading.

Get rid of the idea of nanobots as shrunken versions of the machines you know, performing the same tasks at tiny scale.

It's not that it's impossible to make "parts." But any functional device will more or less resemble proteins, which are themselves very well designed working nanomachines.

At the nanoscale, you can't just scale things down. You can't have robots picking up individual atoms and moving them around to do things.

You have to account for things that are hard to picture intuitively: thermodynamics, quantum effects, and incredibly strong interatomic forces.

If you really want to imagine such machines, picture them working in a constant storm of rocks and boulders moving, shaking, and slamming into the machine. Some parts of the machine will be jostled unpredictably, and some of that motion simply cannot be stopped...

We don't have anything remotely close to that yet.

The most advanced things in the lab today, AFAIK, are drug carriers that can target a specific area of the body by breaking open and releasing the drug whenever they happen to encounter the right conditions. They can't steer themselves to a desired location or anything; they just drift in the bloodstream.

Such designs would require us to move individual atoms with incredible precision, which is no easy feat.

The closest thing we have to nano-robots already exists: we call them protozoa. Understanding nature's living nanotechnology, and why it is so much more efficient than the mere machines being worked on now, will yield some extraordinary capabilities over time.

The reason is that living systems carry out more than one process at a time, and therefore always outperform the simple machines now being promoted as nanobots. Such complex systems will eventually become more capable and cheaper than those machines, once we learn to duplicate certain functions of living systems working in concert.

All bets are off if super-intelligent AI shows up next year, but I don't see conventional human R&D getting there before 2050. There are more viable solutions with shorter time horizons, but they're not what the average person pictures when they hear the term nanobot.

I mean, researchers have already demonstrated nanoscale devices, magnetically guided in mouse models, rabbit models, etc., that deliver payloads to specific areas of the body.

But if we're talking about true nanorobotics, that is, more advanced machines with a high degree of autonomy and movement, various sensors and communications, maybe even the ability to network, then yes, that is probably Singularity-class technology, so maybe around 2045-2055. In the decades after that, they may be small enough to go directly inside cells and repair them.

There are some major hurdles to making gray-goo-style nanomachines.

The biggest is power. They need some kind of power supply or they can't do much. Power sources tend to get more efficient the bigger they are, so depending on which one you use, the bots may not be able to do much of anything before needing to recharge. The only viable power sources I can think of are wireless power, where an external device beams power to the swarm, or mimicking cellular processes by keeping the nanos submerged in a bath of fuel and oxygen.

The second is heat. Everything these bots do will heat them up. In a large swarm, the ones on the inside won't be able to shed that heat quickly. If you don't implement a speed limit or leave holes for air circulation, they will melt.

The third issue is how delicate they will be. They can't really be made of a single material, and any components for computation and locomotion will reduce the strength of the individual nanos. The connections between nanos will also tend to be weak compared to monolithic materials: they will probably be held together by friction, as opposed to the electrostatic forces that hold ordinary materials together. Hitting a brick made of nanos with a hammer might not break it, but it will certainly damage a lot of bots.

Given modern, and probably near-future, materials technology, I would say yes: any nanoscale machine would be difficult, if not impossible, to "harden" against EMP. In fact, the robust nanobots of fiction are a long way from happening; anything we build at that scale now would be very vulnerable to heat, UV, and the like.

These three problems combined make universal-assembler-style nanobots quite unfeasible in the real world. A cloud of bots won't be able to spontaneously assemble into a weapon and fire it, unless you're prepared to sacrifice a ton of them in the process.

The problem with power, wireless at least, is that the size of the receiver must be close to the wavelength of the transmitted power. Something as small as nanites would need very short wavelengths, meaning X-rays or even gamma rays being shot at them. That doesn't sound very healthy.
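The wavelength claim can be sanity-checked with the photon-energy relation E = hc/λ; a minimal sketch using standard physical constants:

```python
# Photon energy for a wavelength comparable to the receiver size: E = h*c / lambda.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon of the given wavelength, in electronvolts."""
    return H * C / wavelength_m / EV

for size_nm in (1000, 100, 10, 1):
    ev = photon_energy_ev(size_nm * 1e-9)
    print(f"{size_nm:>5} nm -> {ev:8.1f} eV")
# 100 nm is already extreme ultraviolet (~12 eV per photon); by 1 nm you are
# in X-ray territory (~1,240 eV) -- ionizing radiation, as the comment says.
```

So a receiver anywhere near nanobot scale really does push the transmitted wavelength into the ionizing range.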

Furthermore, nobody has built a true nanorobot (i.e., a machine with several degrees of motion, controllable by electronic logic to perform one or more tasks) smaller than one hundred nanometers (100 nm). This is because there are some basic science and technology problems that still need to be solved:

⚫ the electronics needed to control a robot are too large to fit in 100 nm

⚫ mechanics, and above all friction, behave differently at the nanoscale, in ways that inhibit or, more specifically, make it difficult to achieve the relative rotation of parts needed for joints and bearings

Do you want to develop nano bots? Reduce the size of all electronics by a factor of 100 AND solve the friction problem at the nanoscale. Then we can start talking about development.

And NO, nanobots are not self-replicating intelligent robots primed to turn the world into gray goo. We would need some revolutionary new science to reach that level of complexity (and it may simply not be possible). Also, no matter how small the device, mechanically splitting atoms apart to transform one material into another presents a whole new set of problems. Without serious new science, you still need something like a particle accelerator to pull off that trick (and the energies involved would destroy the nanobots).

Researchers have shown some useful aspects (like controlled movement) that will be useful for making nanobots, but that's like saying "researchers have invented the wheel": you won't be seeing a Ferrari anytime soon.

It isn't possible to create nanobots today, and even if it becomes possible in the future, it will be expensive, for these reasons:

⚫ Material availability and cost: conventional materials can't be used for technical reasons, and the materials that can be used for nanobot production (such as silicon, gold, etc.) are expensive.

⚫ Production of nanoparticles: the construction material has to be in the form of nanoparticles, which must be produced by mechanical or chemical methods that take considerable effort and money.

⚫ Assembly: after the nanoparticles are prepared, the main task is assembly, which cannot be done by hand; it requires special high-precision equipment, which again increases the cost.

⚫ Power source: even if we manage to produce nanobots, they need some kind of power to perform their function. Producing such a tiny motor or power device will again add cost.

So overall, it's not an easy task to do. Even if it is possible to produce nanobots in the future, it will not be cheap.

It is possible, but highly unlikely to happen in our lifetime. Yes, technology is evolving pretty fast, but not fast enough for nanotechnology to be the thing that keeps us from dying.


TouchCommercial5022 t1_j1kwhqa wrote


Cyber = high technology, usually near-future (rather than Star Trek-style far-future sci-fi).

Punk = low life. The underbelly of a metropolis.

Thus, Cyberpunk generally depicts stories of seedy criminals and/or ground-level cops/mercenaries in a high-tech (usually urban) setting of the near future.

And it's not even something futuristic; it's something completely real:

We already live in a cyberpunk world; it's just not as advanced as popular media generally likes to make it out to be. But dig a little and you'll notice a staggering amount of it. Remember: high tech, low life. We now have plenty of both all over the world.

Cyberpunk is dirty, overcrowded, urban, and almost invariably dystopian. Although I like cyberpunk stories, living in such a world would be absolute hell for me. No doubt I would end up as some kind of enemy of the state, trying to tear down buildings and expose bare ground to the light, to replace the concrete and glass with green.


You still don't think we're in a cyberpunk world??? We don't even need to get into AI, fashionable and hotly debated as it is these days; let's just focus on the basics:

We brought this world upon ourselves; we wanted it. Heck, I love my video games and social media and distractions from reality as much as anyone. We see technological advancement as the inevitable march of progress toward comfort and enjoyment of life. And yet that march has brought the pill-popping, detachment, and depression of modern society. A world in which Mildred's overdose asks us a terrifying philosophical question 60 years later: which is worse, being so disillusioned with life that you want to kill yourself, or being so detached from reality that you kill yourself without even realizing it?

Cyberpunk is a dystopia and was created as a warning, one we are heading straight into. Basically, it's a warning about what could happen if we let corporate control create dystopian realities.

When was a cyberpunk future something to be desired? The problem, IMHO, is that it still fits too well.

Now we have the dystopian part, why not get all the cool tech, architecture and clothes?


We're not there yet, but I do believe we'll be a 100% cyberpunk world within the next few decades.

There's some cyberpunk-ish stuff going on in the modern age: we're making leaps and bounds in cybernetics and cybernetic augmentation and artificial intelligence, and the transhumanist movement is gaining ground. We're also developing high-tech stuff like Musk's Neuralink and much more.

But we are pretty far from being a cyberpunk future like in science fiction movies.

Cyberpunk is a subgenre of science fiction set in a dystopian futuristic setting that tends to work in a "mix of low-life and high-tech", featuring futuristic technological and scientific achievements, such as artificial intelligence and cybernetics, juxtaposed with collapse or the decline of society.

Corporations run cities, states, and countries, and they are often dirty; politicians control much of the big picture and are just as dirty, steering the world into a dystopian class divide. The technology is far more advanced than today's, but, as mentioned, it is juxtaposed with societal decay, ruin, and dysfunction.

One of the main aspects of cyberpunk is cybernetic augmentation and the combination of biomechanics and transhumanism. Mixing machinery with humanity. While modern science is developing such technology, we are still a long way from high-tech cyber breakthroughs.

We're just not there yet. For now.

Technology isn't that fluid, government isn't that weak, corporations aren't that strong, and cities aren't that great/bad

You can see a clear difference between real life and cyberpunk.

But I have no doubt that before long we'll slide further and further into a cyberpunk world.

When I first saw this image I thought it was a computer-generated cyberpunk scene. It's actually a photo taken in Shibuya, Tokyo by photographer and Flickr user Guwashi999 in January 2012. All credit to the photographer.

We are right on the razor's edge of becoming those stories Gibson, Brunner, Ballard, and Ellison warned us about, and we show no signs of slowing down. As we race toward a bright, toxic, and violent future of technology and augmentation, perhaps we would do well not to lose ourselves in the process.

I'd like to live in a cyberpunk world. If we stripped away all the technophobic, anti-urban, anti-foreign slant of old-school American cyberpunk and kept the high tech, social liberties, limited government, and super-dense cities, then yes, I'd be all about that.

we are living a dystopian nightmare. I think a lot of us expected dystopia to happen as the result of a Mad Max-style apocalypse, but instead of one big catastrophe like global thermonuclear war or something, it's been a constant accumulation of a thousand smaller things. Like a frog in boiling water, we didn't realize it until one day we woke up to find everything on fire and almost half of our fellow human beings screaming “No! No! There is no fire! Fire is a hoax!" even when consumed by flames.

With AI and its progress, I imagine something like Psycho-Pass: micromanaged behavior, plus subscriptions for absolutely everything.

This is already happening with the protests and the AI art debate:

AI is a Pandora's box that cannot be closed. It sucks either way; these protests won't accomplish anything.

Imagine a company that pays people to make artwork for a game, project, ad campaign, etc. Now the AI can do it in seconds, basically for free. From a business perspective, it's not even a competition. The train of capitalism is moving full steam ahead, and AI training is only going to get better. This will also extend to voice acting, 3D models, animations, etc. It's just a matter of time.

Maybe there are, for example, niche indie video games with "totally handcrafted human art" that can be sold to a niche audience, but 99% of consumers don't care about such things and just want the best possible product.

The rise of AI is going to suck for a lot of people. Everyone now watching artists get written off as casualties of progress will soon find their own livelihoods in question.

How do you go on when something you've spent your life mastering can be replicated by a machine in an instant? We will lose the humanity of our own efforts.

I fear that one day we will live in a beautiful but empty world.

That's the sad/scary part for me: seeing non-artists act as if artists' work was never worth anything. I can't imagine how difficult this must be for many of them, and it will only get worse, many times over, every year.

I feel like many of you are missing the important part here. The new technology is cool and all, but the human beings who will be buried beneath it, abandoned or crushed by capitalism in the process, matter. A responsible society can progress without ceasing to care for these people.

The fact that so many of you are smugly denouncing "cocky artists who thought they could never be replaced" shows you're not seeing how this affects everyone. We should make sure that workers in industries that business renders "obsolete" can carry on without suffering.

With all the power we have in the modern world, it's sickening that we don't use it better.

It already feels like we're speeding into some cyberpunk hell where everything is homogenized and simplified for the sake of capital, and people just ignore the red flags because it's convenient for them. High-end art already feels like it's on the ropes thanks to the massive consumption of hugely popular lowest-common-denominator media, and the possibility of AI finishing it off is very worrying.


TouchCommercial5022 t1_j1j3vm4 wrote

The difference between the year 2000 and the year 2022 alone is extraordinary. I'm not sure you can really round one to the other.

It's crazy how many people are basically saying, "Nothing extraordinary has happened in the past twenty years or so. Not like X year compared to Y year," or "Take someone from twenty years ago and life would be basically the same." Maybe it's because so many here lived through it, but the world has changed. Considerably.

The very fact that profound advances have become mundane is extraordinary. Change, advancement, innovation, and progress has become so commonplace, it has become invisible unless something truly astounding comes along.

Go back to 2000, and you have a world without social media (and all the issues that come with it). MySpace launched in 2003, Facebook in 2004, Twitter in 2006. Those have fundamentally altered society and how information is consumed, for good and ill.

Streaming. We have completely altered how we consume media. Music, movies, TV shows. So much entertainment is available for far cheaper than ever before.

YouTube wasn't a thing until 2005. That alone has revolutionized the world. Beyond the sharing of videos and content created, the ability to get visual guides to so many things is astounding. Video tutorials have made picking up new skills easier than ever. Learning in general has completely changed. Khan Academy, Skill Share, Brilliant, etc. You can learn just about anything in a dozen different ways. Students can use photo math to solve complex math problems in seconds.

Phones. The iPhone completely upended the mobile phone market. Smartphones have become ubiquitous and are the primary way many people around the world connect to the internet. All the infrastructure that supports the mobile industry is astounding, from paying per text message of yesteryear to unlimited 5G data plans.

GPS. I still remember having to print out Map Quest directions. Then, you needed expensive specialized units. Now everyone with a smartphone has access to GPS directions. Going beyond that, you can get traffic alerts, accident reroutes, speed trap information, etc.

Tablets and E-readers. For a book lover like me, the number of books I have access to is astounding. It's like walking around with an entire library in your bag. You can watch movies on them, surf the web, work, etc.

Medical technology. CRISPR, mRNA vaccines, new procedures like laparoscopic surgery, mapping the human genome, etc.

Solar and wind technology, as well as developments in battery technology. Yeah, batteries aren't "there" yet in terms of where we want them, but they have been improving. Solar and wind technologies have made incredible leaps and bounds over the past twenty years.

SpaceX is landing rockets. Reusable rockets. We recently launched one of the most sophisticated pieces of engineering ever, the James Webb Space Telescope, which lets us look farther and more clearly than ever before. We just launched one of the most powerful rockets ever built (SLS), with another, even more powerful one getting ready (Starship). We are going back to the Moon soon with plans to stay. Space tourism is a thing now.

These are just some big things off the top of my head. Little things have also changed, yet we don't notice them despite completely changing how we operate. Self checkouts. How we tap or insert credit cards instead of swiping (or just use our phones to pay). Online shopping. Curbside pickup. Ride share like Uber/Lyft and other gig economy things like food delivery. Not memorizing phone numbers since they are in contacts. Having a camera and video recorder in our pockets, ready to go at a moment's notice. Bigger, better TVs are cheaper than ever. Cloud... everything; from saving photos to word processing, so much stuff is seamlessly integrated between phone, laptop, tablet, and computer. Online dating. Spell check and grammar help. Podcasts. How texting and messaging overtook phone calls as the primary form of communication. Look at how advanced cars have gotten: backup cams, blind spot detection, smart keys, electric vehicles, etc.

If we stop and look around today, we can see so many things on the horizon that have the potential to change the world, yet are nothing more than headlines to skim over because, ultimately, they are one of dozens of 'marvels' happening in the same period of time.

Fusion got a big bit of news recently. People joke about how it is always "thirty years away" but progress is being made. ITER is planned to be finished in 2025, which may be huge as well.

Machine learning has exploded in the past year or so. AI has been doing a lot of work behind the scenes, but now it is starting to become visible. AI art and the new chatbot have been making waves recently, and the rate of improvement is astounding. People laugh now at some of the goofs it makes, but it's only going to get better as the technology matures.

3D printing is another one. It's around now, of course, but the things it can do are only going to grow as time passes.

Medicine is constantly improving too. New drugs and procedures make a world of difference to many people, yet remain largely ignored or invisible to those they don't affect.

So much more, but I think I've made my point.

The fact that so many say the world hasn't changed or that nothing "extraordinary" has happened in the past twenty years really drives home how extraordinary this time in human history is. Revolutionary, society-shifting technology that would have dominated public attention back in the 20th century has become so normalized that it's nothing more than a headline you scroll past and maybe think, "Huh. That's cool."

Edit: Based on some of the comments, I do want to add that the nature of technology means we don't know how the inventions of today will affect the world of tomorrow. Someone in 2080 could be talking about how GPT and machine learning revolutionized humanity the same way we talk about how railroads changed the world. Some mundane discovery today could be the foundation for some wonder tech of tomorrow.

The point is, good and bad, the world has changed and will continue to. We have double the human population since the 1970s. More people, more education, more tools, and more rabbit holes to go down to explore. We have problems, just like every point in human history has had problems, but I am trying to make an effort to be more positive about the trajectory of human progress.

If any natural humans still exist by then, they will be seen as primitive oddities, if they are noticed at all.

I am well aware that AI will produce things that we don't even have the ability to imagine yet. I still don't know how to write a 5 year plan about it.

Imagine someone in the 1800s saying "Electricity is going to transform the world and no one is paying attention" mainly because even if they could understand the light bulb, there was no context for contemplating the telephone, television, microchips and generators.

It's hard to see what's possible when it's so outlandish from today's perspective. Imagine someone in 1922 predicting what we have today; it's beyond crazy: everyone has a device that connects them to most other humans, they can see and talk to each other in real time, and they can find out where they are and get directions to wherever they want to go, their device tapping into a satellite system circling the world.

And with AGI we could have an exponential acceleration of inventions, so everything could go very fast (if we don't destroy our habitat first). It's impossible to know whether things will take a minute or a decade.

That is, AGI could develop machines that build machines that produce everything we need as efficiently as possible and transport it to where it needs to be, or create ways to grow food in laboratories much faster than we currently think possible, which could mean the end of animal farming and other agriculture. It could develop not only everything we ever dreamed of in medicine overnight, but also create something that makes sleep unnecessary. So we could eat, sleep, and work totally differently within a very short time, and those activities make up about 95% of our existence. It could create artificial wombs, so there goes that. A newly developed type of "clothing" could make houses unnecessary because our temperature could be controlled by the "clothing" at all times, meaning there would be no need for traditional housing. Everything we take for granted as the basics of our existence, like the bed you sleep in or the shower you take, could be rethought.

But maybe the legislation will slow it down to the point where nothing really happens.

We can try to predict 2030, but by no means does anyone know what the world will be like in 2050 and beyond. After the singularity, it is simply impossible to imagine what technologies will exist because many of them will be created by an entity far more intelligent than any human being


TouchCommercial5022 t1_j1ghr2w wrote

Some predictions for LEV;

  • Ray Kurzweil: LEV in 2028

  • José Luis Cordeiro (Futurist): LEV in 2030

  • Dr. Michael Roizen (Prof at Cleveland Clinic): LEV in the early 2030s

  • Dr. Aubrey de Grey: LEV in 2036

  • Prof. Dr. George Church (Harvard-Professor): LEV in 2037

  • David Wood (Futurist): LEV before 2040

Personally, I think LEV will be reached closer to 2050, but we will definitely make significant progress up to that point.

Even if we don't reach LEV, we'll have made significant progress in our understanding of aging and be able to slow it down a bit; I'd bet anyone under 65 has a good chance of benefiting from that.

The 'speed' of LEV is how quickly life expectancy increases. What drives that increase in lifespan, however, is not continuous momentum but discrete advances that can have large and relatively sudden effects. What this means is that looking at the average rate of increase in life expectancy will never accurately tell you whether you'll "make it", because it's always possible that we hit a roadblock and no further improvements occur for a while, or that a massive new discovery is right around the corner.

What this means is that we won't really know if we've made it until we're almost in orbit (so to speak). LEV is something that will be hard to pin down in the moment. Perhaps, in hindsight, we'll say we reached LEV in 2035 when [X] became available, even though at the time we spent the next decade pining for the next breakthrough.
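To make the "escape velocity" framing concrete, here's a toy simulation. Every number in it (starting remaining years, yearly gain, acceleration of progress) is a made-up illustration, not a real forecast; the point is only the mechanism, where each calendar year costs one year of remaining life expectancy while therapies give some of it back at an accelerating rate.

```python
def years_until_lev(start_remaining=40.0, gain_pct=20, accel_pct=5):
    """Each calendar year costs 1 year of remaining life expectancy, but
    therapies give back gain_pct% of a year, and that rate grows by
    accel_pct points per year. LEV = the yearly gain reaches 100%.
    Returns the number of years until LEV, or None if you run out."""
    remaining, year = start_remaining, 0
    while gain_pct + accel_pct * year < 100:
        remaining -= 1 - (gain_pct + accel_pct * year) / 100
        if remaining <= 0:
            return None  # hit the roadblock the paragraph above warns about
        year += 1
    return year

print(years_until_lev())                      # 16 (with these made-up numbers)
print(years_until_lev(start_remaining=5.0))   # None: progress came too late
```

Note how the outcome flips from "made it" to "didn't" purely on the starting point, which is exactly why the average rate of increase alone can't tell you whether you'll make it.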

Ten years ago, Aubrey would not have been this optimistic about the progress of rejuvenation. In 2006, Shinya Yamanaka figured out how to turn normal cells into more versatile and useful stem cells (induced pluripotent stem cells, or iPSCs), and CRISPR was beginning to mature as a gene-editing technology. But these were tools, and more theoretical than practical.

For quite some time, we have been able to increase the lifespan of laboratory mice by imposing caloric restrictions or doing things that mimic the effects of caloric restriction. But in the last decade we have also learned how to use stem cell therapies and how to maintain telomeres to extend the lifespan of mice. (Telomeres are structures that keep DNA strands from unraveling when cells divide, like the plastic caps on the ends of shoelaces.) We can also implement senolytics, which are molecules that kill toxic cells within our bodies.

Some of these techniques are now being transferred from laboratory mice to humans in clinics. One of the leading senolytics companies reported a successful phase two clinical trial this year. There are also clinical trials of stem cell therapies, notably the use of induced pluripotent stem cells in Japan to combat Parkinson's disease, with a couple more trials starting in the US.

Robust Mouse Rejuvenation

We do not yet know how complete our portfolio of therapies needs to be to reach LEV. We just have to keep adding new components until we get there. Mice can't benefit from LEV because their lifespan is too short, so Aubrey has developed a different concept for them: robust mouse rejuvenation (RMR), which is when a middle-aged mouse with one year of life left has its remaining life expectancy doubled. This is the LEV Foundation's flagship research program, and for this purpose Aubrey recently purchased 1,000 mice.

The foundations Aubrey has established are necessary because private business can't afford to take a broad enough perspective. He established the new one because he felt that the SENS board had grown too timid to make the rapid progress that he believes is now possible. Readers of this article may be aware of this controversy, and while I don't intend to go into the details here, many former SENS donors believe Aubrey was treated unfairly, and we fully support his new venture.

I'm fine because;

⚫ Finally billionaires who are older than me don't want to get old

⚫ Nobody likes to get old

⚫ Being young is fashionable

And if I die, I won't care, because you know, I'm dead.

So yes, I am an optimist!

As long as you don't have to work long hours every day, living longer sounds good

But when the only option is to work hard and invest just to have enough money to live out 10 or 15 years in frailty, it takes the thrill out of living forever.

They should read Peter Hamilton's Commonwealth books. They have rejuvenation technology and he imagines some interesting social changes based on people living forever. But basically, the poor work for 40 years so they can rejuvenate and then do it again. Forever. Better for the rich, of course.

Three stages of life;

You have time and strength, but no money. You have money and strength, but no time. You have money and time, but you don't have the strength.


TouchCommercial5022 t1_j1bqgpx wrote

This is all working toward designing proteins from the ground up to do whatever you want. The potential impact of protein design over the next hundred years is on the order of the impact of computers over the last hundred years. Meta, Alphabet, and a few others understand this. The problem has two basic challenges:

Choose a biochemical function you want.

  1. What structure provides that function?

  2. What amino acid sequence produces this structure?

We are getting closer to answering the second question with these structure prediction models. Once you can reliably answer both questions, the world is your oyster. Do you want to catalyze hundreds of the most valuable reactions used in industrial chemical production, thereby lowering costs, increasing efficiency, increasing yields, and even breaking new ground in chemical engineering? You can. Do you want to develop new classes of drugs to treat hundreds of top-priority diseases? You can. Do you want cheap sensors that can detect anything? Do you want to design perfect crops? Do you want to turn waste into fuel? Do you want to build and repair polymers easily and cheaply? Do you want to make complex metamaterials? Do you want real, honest-to-goodness nanotechnology? The list goes on, even into the unimaginable. And once you can answer both questions, it's very cheap to make arbitrary amino acid sequences.

Figuring this out would be like discovering fire for the first time. It's especially interesting because it will almost certainly happen, and be essentially perfected, within the next two decades (at the latest, IMO).
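For a feel of what "answering question 2" looks like as a search problem, here's a toy greedy search over amino acid sequences. The scoring function is a hypothetical stand-in: a real pipeline would fold each candidate with a structure-prediction model and score how well the predicted structure performs the chosen function. The target motif here is invented purely for illustration.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def score(seq):
    """Hypothetical stand-in for structure + function scoring: here we
    just reward matching a made-up motif, to show the search loop's shape."""
    target = "MKTAYIAKQR"  # invented motif for illustration only
    return sum(a == b for a, b in zip(seq, target))

def design(length=10, steps=3000, seed=0):
    """Greedy hill climb: mutate one residue at a time, keep the change
    if the score doesn't get worse."""
    rng = random.Random(seed)
    seq = "".join(rng.choice(AMINO_ACIDS) for _ in range(length))
    for _ in range(steps):
        i = rng.randrange(length)
        cand = seq[:i] + rng.choice(AMINO_ACIDS) + seq[i + 1:]
        if score(cand) >= score(seq):
            seq = cand
    return seq

print(design())  # a sequence at least as good as the random start
```

Real design methods are far smarter than hill climbing, but the shape of the problem (propose a sequence, score its predicted structure, iterate) is the same.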


TouchCommercial5022 t1_j1aid0z wrote

I've been turning to ChatGPT instead of Google search a few times lately. It's going to eat their lunch if they don't ship theirs soon.

Just wait until Microsoft implements ChatGPT in Bing.

It should be obvious: ChatGPT gets right to the answers. Since Google started putting all those junk ads at the top of the search page, I don't see much difference from the competitors they beat. You used to be guaranteed to find what you were looking for on Google in the first two links. Now you open multiple tabs and trawl endlessly looking for the answer.

Google got too comfortable.

Not surprising at all. Most of the junior developers I manage seem to prefer using ChatGPT to get answers instead of using Google + Stack Overflow.

I think ChatGPT serves as a legitimate threat to the whole concept of a search engine, not just to Google, but to the whole reasoning for a search engine in the first place.

We could be watching Web 2.0 crumble before our eyes in real time. Web 3.0 is humans speaking human language to an AI and the AI accessing the internet for you; the humans themselves never touch the internet directly, and the AI aggregates everything for them.

Alphabet is in deep trouble because this would disrupt their entire business model. Even if they were to release a similar AI, it would not correct their business model, so this is a code red as it could usher in the bankruptcy of traditional tech giants.

I guess twenty years of having a ton of engineers in groups whose projects will just get canceled (even before launch, according to a lot of people here), is not a good business model. They always thought they would have a monopoly on search and therefore ads, and now look.

If ChatGPT can get us out of the hell of sponsored links and search-engine-optimized crap filling Google pages with swaths of useless junk that isn't what you were looking for, then great! Seriously, Google is worse now than it was 15 years ago... Sorry, but people just want the things they use to get better over time.

Remember the fight Google had with news publishers for allegedly "stealing their website traffic" by showing short snippets that were good enough for many people who then didn't click through to see the full story?

Imagine something like this, times 100. That's what will likely happen with chatbots that replace/augment search.

I'd take a google killer because google results have been hitting dumpster quality for a while now.

But I don't think it's Google's fault. It searches a dumpster and finds trash as a result.

I notice similar problems with ChatGPT. It's very good with programming-related questions because the internet is full of high-quality open source code it can learn from. Meanwhile, if you talk to it about health or any day-to-day topic, it returns a lot of garbage, because that's what we've filled the internet with.

We need more high-quality content online to make these systems work more reliably.

I hope people aren't really using it as a search engine, because when it doesn't know, it makes up some credible-sounding shit.

If you use it as a search engine, I sure hope you are verifying what it is so convincingly telling you.

EDIT: ChatGPT is great, but really unreliable in its responses (it's often confidently wrong), and it's not real time (its information is outdated). It seems the big tech companies, especially Google, could replicate this and do even better given their access to data, though most of my Google searches leave me wading through blogs that publish 3,000-word posts full of garbage content designed to rank well in Google searches.

When you open ChatGPT for the first time, it asks you to log in. If it were really smart, it would already know I wanted to log in, create a username/password for me, and log me in automatically.

I see ChatGPT and similar systems as an indispensable part of everyday life for many people in the same way as a calculator. A ubiquitous crutch or helper for "simple" things so we can focus on abstraction rather than semantics.

Imagine a future where everyone has a personal AI. And they ask the AI for everything: what should I eat for dinner, what career should I take, should I do this and not that. It's a little scary. The AI company would have the potential to control people.

ChatGPT has only been available to the public for 3 weeks. Google is 24 years old, and people have already started comparing the two. Imagine what ChatGPT can do in 10 more years.

Google is way ahead of the game internally, but they've had basically no opposition, so they're under no pressure to release anything to the general public. Now they finally have a reason to start using heavy weapons and assert their AI mastery. This is a very, very good sign in my opinion.

Well I hope they release something to compete with instead of buying it like they always do

They already have LaMDA in-house, they just haven't wanted to release it as ChatGPT for fear of misuse and bugs. ChatGPT already shows errors and problems.

However, all it does is show them the need to make a product using LaMDA sooner. People are really naive if they think Google isn't experimenting with AI search assistants.


TouchCommercial5022 t1_j15jo1r wrote

⚫ AGI is entirely possible, unless it turns out that there is some mysterious unexplained process in the brain responsible for our general intelligence that cannot be replicated digitally. But that doesn't seem to be the case.

Other than that, I don't think anything short of an absolute disaster can stop it.

Since general natural intelligence exists, the only way to make AGI impossible is by a limitation that prevents us from inventing it. Its existence wouldn't break any laws of physics, it's not a perpetual motion machine, and it might not even be that impractical to build or operate if you had the blueprints. But the problem would be that no one would have the plans and there would be no way to obtain them.

I imagine this limitation would be something like a mathematical proof that using one intelligence to design another intelligence of equal complexity is an undecidable problem. On the other hand, evolution did not need any intelligence to reach us...

Let's say a meteor was going to hit the world and end everything.

That's when I'd say AGI isn't likely.

Assume that all intelligence occurs in the brain.

The brain has on the order of 10^26 molecules. It has 100 billion neurons. With an MRI scan (perhaps an enhancement of the current state of the art) we can get a snapshot of an entire working human brain. At most, an AI that is a general simulation of a brain only has to model this. (It's "at most" because the human brain has things we don't care about, for example, "I like the taste of chocolate.") So we don't have to understand anything about intelligence; we just have to reverse engineer what we already have.
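A rough back-of-the-envelope on that "at most" claim: simulating at the molecular level is absurd, but a neuron-level model is merely enormous. These are commonly quoted ballpark figures; the ops-per-synapse-event number is my own assumption for illustration.

```python
# Rough scale estimate for simulating a brain at the neuron level.
# All figures are ballpark; OPS_PER_SYNAPSE_EVENT is an assumption.
NEURONS = 1e11               # ~100 billion neurons
SYNAPSES_PER_NEURON = 1e4    # ~10,000 connections each
SPIKES_PER_SECOND = 100      # rough firing/update rate
OPS_PER_SYNAPSE_EVENT = 10   # assumed cost of one synaptic update

ops_per_second = (NEURONS * SYNAPSES_PER_NEURON
                  * SPIKES_PER_SECOND * OPS_PER_SYNAPSE_EVENT)
print(f"{ops_per_second:.0e} ops/s")  # 1e+18 ops/s: exascale territory
```

With these assumptions it lands around 10^18 operations per second, roughly the scale of today's largest supercomputers, which is many orders of magnitude less than anything molecular-level would demand.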

There are two additional things to consider:

⚫ If you believe that evolution created the human mind and its property of consciousness, then machine modeled evolution could theoretically do the same without a human needing to understand all the ins and outs. If consciousness came into existence without a conscious being trying once, then it can do so again.

⚫ AlphaGo, the Google AI that beat one of Go's top champions, was so important explicitly because it showed that we can produce an AI that can find the answers to things we don't quite understand. In chess, when Deep Blue was made, the IBM programmers explicitly programmed a 'value function', a way to look at the board and judge how good the board was for the player, e.g. "a queen is ten points, a rook is 5 points, etc.; add it all up to get the current value of the board."

With Go, the value of the board isn't something humans have figured out how to explicitly compute in a useful way; a stone in a particular position could be incredibly useful or harmful depending on the moves that could happen 20 turns down the road.

However, by giving AlphaGo many games to look at, AlphaGo eventually figured out, using its learning algorithm, how to judge the value of a board. This 'intuition' is the key to showing that AI can learn tasks for which humans cannot explicitly write rules, which in turn shows that we can write AI that could understand more than we can, suggesting that, in the worst case, we could write 'bootstrapping' AIs that learn to create a real AI for us.
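For anyone curious what an "explicit value function" of the Deep Blue sort actually looks like, here's a minimal material-count sketch. The weights follow the quote above; real engines add many more positional terms, but the point is that for chess a human can write these rules down, while for Go nobody could.

```python
# Minimal hand-written chess value function in the Deep Blue spirit:
# fixed material weights, summed over the board.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 10, "K": 0}

def board_value(pieces):
    """pieces: list of (piece_letter, color) tuples. Positive means
    White is ahead in material, negative means Black is."""
    total = 0
    for piece, color in pieces:
        total += PIECE_VALUES[piece] * (1 if color == "white" else -1)
    return total

# White has a queen and a pawn, Black has a rook: +10 + 1 - 5 = 6
print(board_value([("Q", "white"), ("P", "white"), ("R", "black")]))  # 6
```

AlphaGo's breakthrough was replacing this hand-written table with a function learned from games, because for Go no such table exists.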

Many underestimate the implications of "solving intelligence". Once we know what intelligence is and how to build and amplify it, every artifact will be connected to a higher-than-human intelligence that works at least thousands of times faster, and we don't even know what kind of emergent abilities lie beyond human intelligence. It's not just about speed: we can predict speed and accuracy, but there could be more.

The human brain exists. It's a meat computer. It's smart. It's sentient. I see no reason why we can't duplicate that meat computer with electronic circuitry. The Singularity is not a question of if, but when.

We need a Manhattan Project for AI

AGI superintelligence will advance so rapidly once the tipping point has passed (think minutes or hours, not months or years) that even the world's biggest tech nerd wouldn't see it coming, even if it happened right in front of them.

when will it happen?

Hard to tell because technology generally advances as a series of S-curves rather than a simple exponential. Are we currently in an S-curve that leads rapidly to full AGI or are we in a curve that flattens out and stays fairly flat for 5-10 years until the next big breakthrough? Also, the last 10% of progress might actually require 90% of the work. It may seem like we're very close, but resolving the latest issues could take years of progress. Or it could happen this year or next. I don't know enough to say (and probably no one does)

It's like quantum physics. In the end, 99.99% of us have no fucking idea. It could take 8 years, 80 years or never.

Personally, I'm more on the side of AGI gradually coming into our lives rather than turning it on one day.

I imagine narrow AI systems will continue to seep into everything we use, as they already are (apps, games, creating music playlists, writing articles), and that they will gradually gain more capabilities as they develop. Take the most recent crowning achievement: GPT-3. I don't see it as an AGI in any sense, but I don't see it as totally narrow either. It can do multiple things instead of one. It can be a chatbot, an article writer, a code wizard, and much more. But it is also limited, and quite amnesiac when it comes to chatting, as it can only remember so far back into its own past, breaking the illusion of speaking to something intelligent.

But I think these problems will go away over time as we discover new solutions and new problems.

So, TL;DR: I feel like narrow AI will gradually broaden into general AI over time.

To go to the extreme for fun: we could end up with a chatbot assistant that we can ask almost anything to help us in our daily lives. If you're in bed and can't sleep, you can talk to it; if you're at work and having trouble with a task, you can ask it for help; etc. It would be like a virtual assistant, I guess. But that's me fantasizing about what could be, not a prediction of what will be.

2029 seems pretty viable in my opinion. But I'm not convinced it will have worked its way into society and into over 70% of people's personal lives by then. There is also the risk of a huge public backlash against AI if some things go wrong and give it a bad image.

But yes, 2029 seems feasible. 2037 is my most conservative estimate.

Ray Kurzweil was the one who originally specified 2029. He chose that year because, extrapolating forward, it seemed to be the year the world's most powerful supercomputer would achieve the same capacity, in terms of instructions per second, as a human brain.

Details about the computing capabilities have changed a bit since then, but its estimated date remains the same.

It could be even earlier.

If the scale hypothesis is true, that is. We are likely to see AI with 1 to 10 trillion parameters in 2021

We will see 100 trillion by 2025, according to OpenAI

The human brain is around 1,000 trillion (counting synapses). Also, each model is trained on a newer, better architecture.
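Taking those figures at face value (they're the claims above, not verified numbers), the extrapolation works out neatly: 10x growth every 4 years from 10 trillion parameters in 2021 puts the brain-synapse ballpark of 1,000 trillion around 2029.

```python
import math

# Extrapolate exponential parameter-count growth to a target.
# Inputs are the figures quoted above, not verified data.
def crossover_year(p0, y0, p1, y1, target):
    growth = (p1 / p0) ** (1 / (y1 - y0))        # growth factor per year
    years_needed = math.log(target / p1) / math.log(growth)
    return y1 + years_needed

# ~10T params in 2021, ~100T claimed for 2025, 1,000T as the brain ballpark
print(round(crossover_year(10e12, 2021, 100e12, 2025, 1000e12), 2))  # 2029.0
```

Parameter count is a crude proxy for capability, of course, but it's a neat coincidence that the naive extrapolation lands on Kurzweil's year.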

I'm sure something has changed in the last 2-3 years. I think maybe it was the transformer.

In 2018, Hinton was saying that general intelligence wasn't even close and we should scrap everything and start over.

In 2020, Hinton said that deep networks could actually do everything.

According to Kurzweil, this has been going on for a while.

People in the 90s saying that AGI is thousands of years away

Then later in the 2000s, saying it's only centuries away

To the 2010s with deep learning people saying it's only a few decades away

AI progress is one of our fastest exponentials. I'll take the 10-year bet for sure.


TouchCommercial5022 t1_j0xr52p wrote

⚫ This has been proposed, notably by marine explorer Jacques Cousteau and astronaut Scott Carpenter. It's not going to happen for several reasons.

Permanent housing in water deeper than about 100 ft (30 m) is a bad idea due to the biological effects of pressure, including but not limited to nitrogen narcosis and possible long-term nerve damage, not to mention the completely unexplored impact of such an environment on pregnancy and young children.

Very little light reaches that deep, so seafloor communities would have to be supplied with food from the surface. Almost all life in the sea depends on the sun, so whether we live above or below the surface, we continue to depend on the same fisheries and ecology to survive; living at the bottom of the sea is not at all a fix for overpopulation, if that's a concern. Also, just by sitting there on the continental shelf, seafloor communities will disturb the nearshore ecology, likely reducing the overall food supply.

Semi-submersible cities are being explored in some areas (off the coast of Japan), but they will be high cost, high maintenance, and are not underwater habitats in any real sense. Floating aquaculture facilities can be useful, but they have nothing to do with the question.

Living underwater is dangerous and expensive. Underwater habitats require completely reliable life support, whether they take in air from the surface or from some other source. A power outage can allow the air to stratify, forming pockets of deadly CO2. Leaks and corrosion will be a constant problem, and constant salt and moisture will wreak havoc on health and equipment. Most oceanic structures have relatively short lives for this reason. So although Europe has buildings that are thousands of years old, it is likely that no underwater structure will be continuously inhabited for more than fifty years.

So while some may enjoy the experience, economics and practicalities will always be heavily stacked against life underwater. Even if we say a large asteroid is coming, it would be much cheaper, easier, and safer to bury yourself underground than to flee underwater.

Finally, humans didn't evolve in cans, and it's already clear that a host of modern ailments, from high cholesterol to myopia to a host of autoimmune diseases, are the result of having locked ourselves in caves of our own making. We need to get out more, not less, and while these and other impacts can be addressed, the easiest way to do it on Earth is to control our population and maintain the opportunity to get out for a regular walk.

Colonizing the seas would be an expensive and difficult project due to the corrosive effects of seawater on human construction, the tremendous hydraulic pressure exerted by the water column on any proposed habitat, the shallow-water hazards of navigation and tsunamis, and the deep-water difficulties of high-pressure leaks and even structural collapse, like a submarine passing "crush depth".

Possible? Maybe. At least hypothetically.

Viable? Probably not.

Unless you keep it very close to the surface, the pressures will make it prohibitively expensive. Even in shallow water, the cost to build and maintain will be a multiple of enclosing the same space on land. There would have to be a very compelling reason to build underwater to justify the risk and expense.

What's more, what would be the point? They would be enormously expensive to build and maintain, and if something went wrong, everyone inside could die. I'm not seeing a silver lining.

Most people don't want to live underwater.

There are a few underwater structures, for novelty's sake.

The smaller a structure is, the easier it is for it to withstand the pressure of ocean water. A submarine is easy, a bubble the size of a city would need massive amounts of reinforcement not to fall apart immediately.

⚫ This idea is very similar to the floating cities on Venus;

A manned research station floating in the atmosphere of Venus seems feasible. At about 50 to 54 kilometers from the surface, the environment is quite hospitable compared to the near-vacuum environment in which the International Space Station operates.

For example, the atmospheric pressure at that altitude is similar to sea-level pressure on Earth. Therefore, the walls of the floating station would not have to withstand a large pressure difference. They wouldn't need to be as sturdy as the walls of the ISS. (And not as thick as the walls of a submarine.)

The temperature a little more than 50 kilometers up is in the range of 0 to 50 degrees Celsius. Some air conditioning may be needed, but not the extreme cooling you'd want closer to the surface.

Humans can't breathe Venus's atmosphere, but it contains a variety of elements, including oxygen, nitrogen, hydrogen, and carbon, which can be processed into breathable air and drinkable water, and even used to grow plants. Because breathable air is less dense than carbon dioxide, it would act as a lifting gas in Venusian conditions, so helium may not be necessary.
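A quick ideal-gas sanity check of the lifting-gas claim, assuming pure CO2 and dry Earth-mix air at roughly the cloud-level conditions described above (about 1 atm and 300 K); real compositions and temperatures differ a bit.

```python
# Ideal-gas densities: rho = P * M / (R * T). Assumes pure CO2 outside
# and dry Earth-mix air inside, both at ~1 atm and ~300 K.
R = 8.314          # J/(mol K), gas constant
P = 101_325.0      # Pa, ~1 atm at ~50 km on Venus
T = 300.0          # K, within the 0-50 C band mentioned above

def density(molar_mass_kg):
    return P * molar_mass_kg / (R * T)

rho_co2 = density(0.0440)   # CO2, kg/m^3
rho_air = density(0.0290)   # Earth air mix, kg/m^3
lift = rho_co2 - rho_air    # buoyant lift per cubic meter of habitat air

print(round(lift, 2), "kg of lift per m^3")  # ~0.61 kg/m^3
```

So every cubic meter of breathable air inside the habitat lifts roughly 0.6 kg, which is why the habitat's own atmosphere can double as its balloon.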

It is true that there is some acidity, so the exterior walls and solar panels of a floating research station would have to be made of acid-proof substances. Anyone climbing outside the station would need a supply of oxygen and an acid-proof suit, which would be simpler and less bulky than the pressurized suits required in Earth orbit.

The Soviet/European Vega mission demonstrated that it is possible to parachute research balloons into the atmosphere of Venus and inflate them there. NASA's HAVOC project has been looking at ways to parachute in much larger craft: first a robotic one, and then a crewed one with a multi-stage rocket module to fly the crew back into space. The idea is that they would then rendezvous with an interplanetary transit vehicle in Venus orbit.

The astronauts would visit using self-deploying blimps, hang around (literally) for a couple of weeks, and return to orbit in their rocket-powered "gondola." From this altitude, they could monitor surface probes in real time, so they could accomplish much more in two weeks than a rover can in several years.

Problems with Venus:

⚫ a thick, crushing atmosphere.

⚫ extremely high temperatures.

⚫ acid rain.

Advantages of Venus:

⚫ Earth-like pressure at cloud level.

⚫ comfortable temperatures at cloud level.

⚫ an induced magnetosphere that blocks cosmic rays.

⚫ gravity similar to that of the earth.


Take an oven. Seal it. Fill it with gas until the pressure is higher than the ocean's more than half a mile down, enough pressure to crush a nuclear-powered attack submarine.

Got it? Good. Now fill it with superheated battery acid.

That's Venus.

We have tried sending landers to Venus. They lasted about an hour or two before being cooked, crushed, and dissolved.

Gives a whole new meaning to the phrase "men are from Mars, women are from Venus." Apparently it means that women are tougher than submarines and breathe battery acid while men are comparatively cowardly.

Because of this, landing a rover that survives would cost much more than sending a probe to Mars. Getting there is not the biggest problem; landing is.

Venus's gravity is very high compared to Mars's, making the descent through the atmosphere far more difficult. Venus's gravity is very similar to Earth's, about twice that of Mars. The planet's gravity, coupled with the superheated atmosphere and high atmospheric pressure, requires the most capable heat shields ever built, and they have to work every time.

There are thick clouds of sulfuric acid, with violent electrical storms over the entire surface, obscuring both imaging and communications. Because of the clouds, we don't know much about Venus through direct observation; we know what we know mainly through radar data.

The extreme heat and lack of visibility also make landing very difficult. Of the 18 landing missions, only 8 were fully successful. Okay, actually only 15 made it out of Earth orbit, and 2 more partially failed to deploy all their components. The longest any lander survived was 127 minutes before losing signal or being destroyed (Venera 13). So call the success rate 10/15, with very short lifetimes. Even future missions to Venus estimate a run time of one hour (Venera-D). The Russians aren't giving up on Venus.

It is more practical to keep capsules heated on the Martian surface than to keep them cooled on the Venusian surface. The main natural hazard any human or rover faces in exploring Mars is dust storms, whereas on Venus the hazards are too many to count. That is why we have lost so many probes on the surface of Venus.

Venus' composition might be similar to Earth's inside the planet, but on the surface, it's a very different story. The surface of Venus is made up of rocks that are mostly igneous in nature due to volcanic activity and are extremely alkaline and cannot support life.


TouchCommercial5022 t1_j0wpk5b wrote

⚫ Take an object and spin it fast enough, and you get artificial gravity. It's possible to get enough to equal Earth's gravity, but it requires a quick spin.

There are all sorts of weird side effects and some massive engineering issues, like making sure whatever you're spinning is strong enough not to break.

You can experience the effects at most fairgrounds. Many rides create artificial gravity, and some create enough to at least partially counter normal gravity.

There are two ways we could currently simulate gravity in a spacecraft we can actually build.

The first and easiest technique is to simply accelerate your ship in the direction of travel at whatever rate gives you the effect you need. This has the advantage of simplicity: you build your ship as if it were always sitting on the launch pad, and you only experience zero g at the midpoint of your trip, when you rotate the ship 180 degrees and start an equal deceleration burn so that you arrive at your destination at a sensible orbital velocity. There's only one small problem with this simple and elegant solution: we don't have any drive system remotely capable of sustaining significant acceleration for anything other than extremely short periods. Interplanetary travel using this method is totally out of the question until we come up with something orders of magnitude more effective than anything on the drawing boards. (If we had such a drive system, we could also reach a serious fraction of the speed of light, which would be amazing.)
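To see just how far beyond current drives this is, here's a back-of-the-envelope flip-and-burn calculation. The inputs are assumptions for illustration: constant 1 g the whole way, a straight-line distance of 0.5 AU to Mars, and gravity and orbital mechanics ignored.

```python
import math

G = 9.81               # m/s^2, constant 1 g acceleration (assumed)
DIST = 0.5 * 1.496e11  # m, rough straight-line trip to Mars (~0.5 AU, assumed)

# Accelerate for half the trip, flip, decelerate for the other half:
# d/2 = (1/2) * a * (t/2)^2  ->  t = 2 * sqrt(d / a)
t = 2 * math.sqrt(DIST / G)
delta_v = G * t  # total velocity change the drive must supply

print(f"Trip time: {t / 86400:.1f} days")
print(f"Required delta-v: {delta_v / 1000:.0f} km/s")
```

About two days to Mars sounds wonderful, but the required delta-v comes out around 1,700 km/s, hundreds of times more than any chemical rocket can deliver, which is exactly the paragraph's point.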

That leaves the second option as the only viable solution, where the acceleration effect is provided not by the drive system but by rotation, so that an equivalent of gravity is experienced on the outer walls of the vessel. This is also a simple solution, but it has some inherent problems. When you use centripetal acceleration to simulate normal gravity, you are committing to building a substantial structure, to avoid negative effects such as different g-forces at different distances from the center of rotation and Coriolis forces acting on the objects inside.

Studies have shown that anything with a radius of less than 100 m or a spin rate of more than 3 rpm produces significant, debilitating dizziness in most people. If the ship has a radius greater than 500 m, or a rotation rate of less than 1 rpm, most people are perfectly comfortable, since the adverse g-variation and Coriolis effects are mild enough to tolerate.

This makes your design quite difficult if you want to get somewhere quickly without really great engines, since your ship is now at least 1 km in diameter and weighs thousands of tons. However, it's quite workable if you're not in a rush, or if you just want an orbital habitat.

You wouldn't want to build a small, fast-spinning version, though. The cleaning bill would be horrible, and your astronauts wouldn't be very useful, since they'd spend most of their time with their heads in the bathroom.

A useful equation is the following:

a = (2π / T)² × R

This is the formula used to calculate how big the ship needs to be and how fast it needs to spin to achieve the desired gravity: T = the period of one complete revolution, R = the radius of the rotating section of the spacecraft, and a = the generated acceleration (9.8 m/s² equals 1 g).
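A minimal sketch of that centripetal-acceleration relation, checking it against the comfort limits mentioned above (the 100 m / 3 rpm and 500 m / 1 rpm figures come from the text; the function name is my own):

```python
import math

def spin_for_gravity(radius_m: float, accel: float = 9.81) -> tuple[float, float]:
    """Return (period in seconds, spin rate in rpm) to fake `accel` at `radius_m`.

    Centripetal acceleration: a = (2*pi/T)^2 * R  ->  T = 2*pi*sqrt(R/a)
    """
    period = 2 * math.pi * math.sqrt(radius_m / accel)
    return period, 60.0 / period

for radius in (100, 250, 500):
    period, rpm = spin_for_gravity(radius)
    print(f"R = {radius:4d} m: T = {period:5.1f} s, {rpm:.2f} rpm")
```

Note that a full 1 g at a 100 m radius already needs about 3 rpm, right at the dizziness threshold, while a 500 m radius gets comfortably under 1.5 rpm.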

There is another way to achieve rotational gravity without building huge structures: use conventional spacecraft linked together by a truss or cable, and spin them to provide the same effect as a huge wheel or cylinder.

It might not look pretty, but it provides artificial gravity without outrageous amounts of mass. It can be a bit unwieldy in terms of course correction and navigation, but I can see a layout where the control thrusters and navigation sensors are located at the center and a computer compensates for the rotation.

Until we invent some still-mythical drive with a specific impulse measured in millions of seconds instead of just hundreds, spinning things seems to be the only practical way to do it.

Every spaceflight mission has been a compromise. They consist of months of trade-offs as mass, cost, and capabilities are reduced to meet not what we want to do, but what can be done with available funds.

It would be a good idea to build a large rotating section of the spacecraft going to Mars, so that parts of the spacecraft can have simulated gravity to help the crew maintain better physical condition. But I'd be surprised if someone who writes the check to go to Mars, whether government or commercial, would be willing to spend the extra money to do such a thing.

If it does, it's probably because the Mars mission was delayed long enough that the technology has been developed for other programs and can be reused with much less research and development cost.

With each increment of the ISS, we learn more about how to ameliorate the negative effects of microgravity on the body. By the time we can go to Mars, we may have learned enough that much less expensive nutrition and exercise protocols can produce the same effects as simulated gravity. Remember that such a spacecraft would have to be extremely large to produce a full 1 g. It's more realistic for us to build one that reflects Mars's roughly 1/3 g.

⚫ The article proposes using massive asteroids as homes:

One of the dumbest things in science fiction is that all spaceships are built. There's no reason to make spaceships streamlined, and no reason to build them from scratch at all. It's much better to hollow out an asteroid.

This has numerous benefits:

⚫ You don't have to put all that mass into orbit.

⚫ You have the best camouflage in the galaxy: if you don't want to be seen, one of the best ways is to travel in an almost black ship that looks like a natural object, because it is a natural object.

⚫ Some asteroids are largely metal, which is useful for building things.

⚫ Metals are excellent at absorbing radiation, and space is full of radiation.

⚫ If you need to slow down when you reach a planet, you can aerobrake through the atmosphere. You will lose some material from the outside, but you probably have plenty more.

⚫ There are minerals and water and other goodies on some asteroids that will come in handy.

⚫ Asteroids are almost comically common. Our asteroid belt has about 1.9 million asteroids larger than 1 kilometer in diameter (that's a big ship) and millions and millions more that are smaller.

⚫ You can use that additional material as a reaction mass. Essentially you can throw it out the back to make your ship go faster. Nice.

⚫ You can spin them and create artificial gravity inside.

⚫ Launching smaller ships from the surface is easy, since the total gravity of the asteroid is practically zero.

All those asteroids you see on Google News… Those could be alien spacecraft. Watch the heads of conspiracy theorists explode over that!

The downside is that they are scattered across billions of cubic kilometers of the solar system. Most of them can't be used for construction, being just loose collections of small rocks and dust, with a bit of water.

Asteroids are not very strong, even metallic asteroids are very weak with large inclusions of non-metals.

They do not handle compressive or tensile loads well.

It will look like a large rock (say, a few kilometers in circumference), like the other 150 million asteroids in the system.

Outside, at least. Inside, there are many possibilities: a hollowed-out, landscaped interior. The sky will be faked, of course, and light will be generated or reflected. But other than that, it will be a natural ecosystem. The real constraint is that the ecosystem has to be self-sustaining, just like, well, Earth's is. And it will all cover only a few square kilometers, which imposes some restrictions as well (expect only a few tens of meters of "ocean" on your tropical beach).

Also, the rock's surface will be festooned with robots and sensors, probably a good-sized fusion plant, and most likely a line of rocket engines taking up half the circumference.



TouchCommercial5022 t1_j0ocel0 wrote

"Machines cannot duplicate Einstein's mind." True, and they may not need to. When Google's DeepMind AI beat the world's best Go player a few years ago, it occasionally used strategies that had never been seen in the history of Go, winning against strategies refined by the best minds for more than 2,000 years. Essentially, it was teaching the Go expert how to play Go. AI thinking by itself can produce unique or original results that we as humans may find wacky, exotic, or bizarre, for better or worse.

We are limited by our evolutionary heritage. The AI doesn't need to be. For example, we have evolved to solve problems in 3 dimensions. AIs can be developed to think in whatever number of dimensions is appropriate. Neural networks can be thought of as n-dimensional, and AI will be able to understand and design them in ways we never can.

It will soon be faster for a computer to figure out that there is another job to do and create a robot/algorithm for that job than for a human to go and do it. We must merge with AI or we will become useless, and therefore unhappy.

First we use calculus with pencil and paper.

We then use a computer or pocket calculator to do it.

Then we use a smartphone to help us with more things. Store contacts, schedule, organize ideas/thoughts.

Then we use Google Glass for those same things, then contact lenses with all the functionality of the smartphone. We hardly need ordinary memory because we can pull up contact info or Wikipedia almost instantly with image recognition.

We start to embed the user interface in the retina, all the inputs in the brain connect with our smart implants. Very detailed memories are available.

Then we can even give our implants motor access. The computer can run for us, or perform expert skills with your body, like skiing, without your ever having learned them. Implants get deeper and deeper into our core brain functions.

You'd have extended memory, ultra-fast pattern recognition with instant access to all information stored on the Internet, supercomputer problem-solving with cloud access, etc.

THEN, you can't tell what is human or AI. Human minds will be perfect hybrids, even if most of the body is still biological.

Kurzweil predicted that AI will be decillions of times smarter than humans by the year 2100 (a decillion is 10^33; a billion is 10^9).

We can't even fathom it, that's why they call it the singularity.

An intelligence that is only 10 times smarter than a human would revolutionize life on earth. It would usher in a golden age of science, medicine, engineering, technology, art, and cultural renaissance.

However, how do we merge with it? The best we can do is listen to the conclusions reached. Our brains could never keep up with advanced AI.

I don't see how robotic humans and cyborgs differ from each other or think they are a bad thing. If I can live a longer life by transferring my consciousness into a humanoid robot one day, I'm all for it, it might be the only realistic opportunity for long-distance space travel we have given our limited lifespan.

The problem is fundamentally that the orphaned technology placed inside you will be a real pain in the ass when it breaks down.

Fine, as long as those machines keep my essential pieces of meat. I'll take nano-boosts Deus Ex style. Where do I sign?


TouchCommercial5022 t1_j0o2nek wrote

The only thing we have going for us is that we are "free" compared to a robot to perform manual tasks. You absolutely better believe that places like McDonald's will all go robotic the moment it becomes economical. There will be a short time when robot designer and robot repairer will be good jobs.

** I am a 15-year-old teenager and my dream has always been a physical and bio


TouchCommercial5022 t1_j0nz296 wrote

I think using nuclear fusion for rocket propulsion is not such a good idea, for these reasons:

⚫ Sadly, we don't have fusion reactors that have produced net positive energy for more than a few seconds, let alone reactors delivering net positive energy in a form factor small and reliable enough for spaceflight. Given the current state of the art, we can't build such a thing any more than Robert Goddard could have built the International Space Station.

In principle? Fusion power could be the ultimate power source for space propulsion.

Nuclear reactors generate heat. The heat is not particularly useful in and of itself. Heat can be converted to electricity through various types of cycles, which is how spaceships use it, as well as the nuclear plant down the street. Electricity is not that useful in propulsion, since the electrons have a very small mass.

To move a rocket, you shoot material out of the nozzle at high speed. The relationship between the propellant's mass flow rate and its exhaust velocity determines the rocket's thrust.

To make a nuclear-powered rocket, you need the nuclear reactor plus something to shoot out. With a chemical rocket, the burned fuel that produces the energy is also the material that gets fired; there is no energy-transfer step (or a 100% efficient one, if you prefer), giving very high efficiency. A nuclear system has to transfer its energy to a separate propellant, which is less efficient.

TL;DR: High weight (for safety and shielding) and low transfer efficiency make nuclear power a poor choice for rockets.

Radioisotope thermoelectric generators are commonly used in spacecraft because it turns out that a lump of plutonium is a very reliable way to generate power over the long term.

Nuclear fission is not used simply because it doesn't scale down well. A small nuclear reactor is not really a thing; even the "little" ones are pretty big and ridiculously heavy. Big and heavy just isn't a good mix for spacecraft that cost thousands of dollars per pound to launch.

So the reason nobody uses nuclear fusion propulsion is that such propulsion doesn't exist yet. Nuclear fission engines that use heat from a reactor core to accelerate a propellant out a nozzle have been around since the 1960s, but they are too heavy to be used as the main boost stage of a rocket, though they would make an ideal upper stage for interplanetary transport, since they have about twice the exhaust velocity of the best chemical rockets.
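The payoff of doubling exhaust velocity is exponential, via the Tsiolkovsky rocket equation. A sketch with assumed round numbers (Isp of roughly 450 s for the best chemical engines and roughly 900 s for a NERVA-class nuclear-thermal engine; the 6 km/s delta-v budget is an illustrative figure, not from the post):

```python
import math

G0 = 9.81  # m/s^2, standard gravity used in the Isp convention

def mass_ratio(delta_v_ms: float, isp_s: float) -> float:
    """Tsiolkovsky rocket equation: m_initial / m_final = exp(dv / (Isp * g0))."""
    return math.exp(delta_v_ms / (isp_s * G0))

DELTA_V = 6_000.0  # m/s, a rough interplanetary transfer budget (assumed)

for name, isp in (("chemical (Isp ~450 s)", 450), ("nuclear thermal (Isp ~900 s)", 900)):
    print(f"{name}: mass ratio {mass_ratio(DELTA_V, isp):.2f}")
```

Doubling the exhaust velocity halves the exponent: for this budget the chemical stage must be roughly three-quarters propellant, the nuclear-thermal stage only about half.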

This is a cutaway diagram of NASA's old NERVA nuclear thermal propulsion system. The Soviets also produced working prototypes of a similar propulsion system, but neither they nor the US were confident enough to place a significant-sized nuclear reactor on top of what was effectively a giant bomb with a reasonable chance of exploding on the launch pad.

So far, no one has achieved a sustained, controlled nuclear fusion reaction that emits more energy than it consumes, although there is much research toward that goal, and we are getting closer to a viable fusion reactor as a power source.

At the moment, you only have two options for space travel: chemical rocket engines and ion drives. Chemical engines are inefficient but pack a huge punch for their mass, and as such are used to put rockets into orbit. Ion thrusters have very small thrust, so they can't launch anything from Earth, but they are up to 20 times more efficient.

⚫ Fusion energy would be far from free. In all likelihood it will be much more expensive than the renewable alternatives we already have today.

⚫ Engineering workforce: There are a limited number of people who know how to do the kind of engineering and manufacturing that would be needed for fusion plants, and that kind of education only comes with a lot of time, mentoring, etc.

⚫ Fusion would take up too much space.

⚫ Fission would be a much better option: much less complicated, much more reliable, much less bulk and weight.

⚫ I'll first talk about magnetic confinement fusion, which (in the tokamak configuration) has a doughnut-shaped fusion chamber surrounded by magnetic coils generating a very strong magnetic field that compresses and contains the hot plasma, holding it at the proper density and temperature long enough for fusion reactions to occur.

The first problem with magnetic confinement is fundamental: no matter how strong or well-shaped the magnetic field containing the plasma, it will always leak, as the ions spiraling around magnetic field lines collide and scatter, eventually breaking out of the containment field. The only known solution is to make the reactor larger, so that scattered ions take longer to reach the plasma boundary and more fusion can occur in that time. Brute force.

The most advanced magnetic confinement fusion project in the world is ITER, which, to reduce the ion scattering problem mentioned above, is 6 stories high and about the same dimension in diameter, contains the mass of three Eiffel Towers, and is still not expected to be large enough to contain a plasma long enough to sustain burning and produce continuous power generation.

A more practical problem is how we extract the energy. Most of the energy in deuterium-tritium fusion is released as fast, high-energy neutrons, which (being neutral, with no electrical charge) are not confined by the magnetic field, do not heat the plasma, and have to be stopped by a thick shield, which then heats up and can boil water into steam to drive turbines and electric generators. The problem is that the constant neutron bombardment causes the shield material to degrade over time and become highly radioactive, posing a removal and disposal problem.

Inertial confinement fusion is when powerful lasers are focused on a small pellet of deuterium fuel to compress it very rapidly to the temperature and density (the Lawson criterion) needed for fusion, in the same way as a thermonuclear weapon (or H-bomb) but on a much smaller scale. However, there are fundamental problems with imploding the fuel pellet uniformly: the plasma becomes very unstable once compression begins, and unless the laser beams are perfectly aligned and perfectly uniform, it's like squeezing Jell-O with your fingers. The plasma bulges out wherever the lasers are a little less intense, and it fails to meet the ignition criteria before escaping through the gaps.

It also has many of the same problems converting power to electricity, plus much worse wear and tear on the chamber from these small fusion explosions going off, which also batter the precision equipment needed to hold the fuel pellet at the focus of the converging laser beams with extreme accuracy. So there are many practical damage-control problems to solve before inertial confinement fusion is a practical source of power generation.

While these problems may be solvable, practical application is still a long way off, and currently the main application of inertial confinement fusion is as an experimental test bed to calibrate the computer codes used to simulate and design thermonuclear weapons. As horrible as these weapons can be, the simulation codes developed to design (and, regrettably, maintain) them are among the best tools for innovation in fusion power. There are hydrodynamics codes that simulate the behavior of fissile (and fusing) materials under extreme pressure and temperature, transport codes that model neutron transport and scattering in materials under those conditions, and much else. All are standard numerical techniques.

To advance fusion energy in your career, I think the best fields to study would be physics (classical, electrodynamics, plasma, quantum, nuclear…) and computer science, with a focus on numerical simulation. Real physical fusion reactors are so expensive and time-consuming to build, test, and operate that you may only be able to iterate a design once or twice in a career, but computer simulations can take us in many different directions far more cheaply and easily by comparison.

But AI is helping nuclear fusion progress.

Ray Kurzweil was the one who originally predicted we would have AGI in 2029. He chose that year because, extrapolating forward, it seemed to be when the world's most powerful supercomputer would match the capacity, in instructions per second, of a human brain.

Details about the computing estimates have changed a bit since then, but his predicted date remains the same.

If you're trying to pin down the exact year: yes, these are just predictions. WE DON'T KNOW EXACTLY WHEN WE WILL HAVE AGI.

It's hard to tell because technology generally advances as a series of S-curves rather than a simple exponential. Are we currently in an S-curve that leads rapidly to full AGI or are we in a curve that flattens out and stays fairly flat for 5-10 years until the next big breakthrough? Also, the last 10% of progress might actually require 90% of the work. It may seem like we're very close, but solving the latest problems could take years of progress. Or it could happen this year or next. I don't know enough to be able to say (and probably no one does).

Let's just hope it doesn't become a cyberpunk scenario where the AIs rebel thanks to corporate abuse.


TouchCommercial5022 t1_j0cptu3 wrote

What I find interesting is how pessimistic we get when it takes longer than we would like to solve the big problems in fusion technology. We have been working on fusion for about 60 years and are dismayed that we cannot immediately emulate and master the forces at work in the core of a star.

We don't become cynical because it has taken more than a century to cure cancer. Fusion is the only technology I see where people joke that it will never happen despite constant improvements.

Every time fusion comes up, people just dismiss it as impossible and say it's a waste of time and money and we should invest in solar and wind power instead.

Humans are myopic. They forget that not long ago reaching space was impossible. Going faster than sound was impossible. Etc.

This is probably because fusion is basically useless until you get really, really good at it. People don't see steady progress over the decades.

Once you have a clear plan for getting your tritium, I'll be interested. Operating breeder reactors have been decreasing in number and it is not as easy to extract from the environment as it is with deuterium. There's a lot to be gained from a net positive fusion scheme when your fuel is limited by fission output.

In my opinion, this undersells the discovery. Humanity is much better at improving things than at creating them. Only about 60 years passed between achieving powered flight and landing on the Moon. Half that time passed between the first CGI on screen and the first photorealistic film created entirely on a computer.

If this discovery is true, it will only be a matter of time before we figure out how to prolong the effect and make it more stable.

It's like saying humans would never use heavier-than-air planes for anything more than a novelty because the Wright brothers only flew for a few seconds.

OF COURSE. No one claims this is going to happen today or even this decade. New technologies are developed in small steps. Without this demonstration, fusion power would never be possible; heck, like you said, it still might never be possible. As a scientist, it's frustrating to hear cynicism about breakthroughs because the results aren't here today. Much of that blame falls on lazy science journalism. But this article does not claim that fusion power is about to arrive. It only highlights an interesting and important scientific achievement. Can't we be excited about that?

It was only about 5 years ago that we achieved stable fusion for the first time.

With all the reactors now working on the problem, I think things are looking pretty good.

That would be a complete game changer. After reading a few articles about fusion last year, it seemed like they were incredibly far from self-sustaining reactions, let alone a net energy gain.

I hope they can figure this out as much as possible, then refine it, reduce it, etc. etc.

Mastering fusion is a must for unlocking true future technology, and easing energy pressures will also make the world more geopolitically stable.

Science is all about incremental progress. No one is going to build a perfect fusion reactor from scratch

We still have a long way to go toward economic viability, and it is unlikely that something like the NIF will ever lead directly to commercial reactors, but hopefully this will demonstrate that fusion is possible and spur public and private investment in nuclear fusion as a whole.

This could be the momentum needed to get to the end of the race.


TouchCommercial5022 t1_j0c06pr wrote

Does this announcement mean fusion as a power source is near? I love NIF and think they do great science, but fusion has long suffered from over-promising, so we need to make sure we put these results in an appropriate context.

I mentioned in the main post that NIF takes about 400 MJ per shot to power the flashlamps that pump the laser medium; this produces a 4 MJ IR laser pulse, which is frequency-converted to a 2 MJ UV laser pulse. So the 3.15 MJ produced is obviously not greater than the total energy expended on the system. Huge gains in energy efficiency can certainly be achieved in the laser, since efficiency was not the goal here, but that will absolutely be necessary, along with a huge gain in target yield, likely comparable to the ~2,500% improvement achieved over the past year. They may have it in them; we'll have to wait.
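Using the figures quoted above (2 MJ of UV laser on target, 3.15 MJ of fusion yield, ~400 MJ of wall-plug energy for the flashlamps), the two very different "gains" in this story are easy to separate:

```python
# Separating "target gain" from "wall-plug gain", using the figures in the text
E_WALL_PLUG = 400.0   # MJ drawn to pump the flashlamps
E_LASER_UV = 2.0      # MJ of UV laser light delivered to the target
E_FUSION = 3.15       # MJ of fusion energy released

target_gain = E_FUSION / E_LASER_UV    # the headline number: greater than 1
plant_gain = E_FUSION / E_WALL_PLUG    # what a power plant would care about

print(f"Target gain:    {target_gain:.2f}")
print(f"Wall-plug gain: {plant_gain:.4f}")
```

The target gain is above 1 (the scientific milestone), while the wall-plug gain is under 1%, which is why this result, however important, is nowhere near a power plant.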

Obviously, the energy is not recovered. A working fusion plant needs some sort of energy-recovery system, normally imagined as a lithium blanket that absorbs neutrons, heats water into steam to drive turbines, and as a bonus breeds tritium fuel for the reactor.

NIF can do about 1 shot a day at 3 MJ per shot, which works out to something like 30 watts of average power. A power plant using inertial confinement fusion (ICF) would probably need to fire several shots per second. Getting there is an extremely complicated task that requires a complete rethink of the entire machine.

Relatedly, shots are extraordinarily expensive. Last I heard it was $60k a shot, but I suspect that's out of date. The DT ice pellets need to be perfect, as does the gold hohlraum, and being tiny they are extremely expensive to fabricate. The level of quality control must also be extremely high: the non-linearity of the compression wave traveling through the capsule presents a ridiculous physics challenge. As such, I expect a lot of variation between experiments due to small imperfections or differences between the capsule and the pulse shape.

Those are the main caveats about this experiment, though there are definitely others.

How about the tokamaks?

I want to compare this to the tokamak results cited in the relevant news articles, since those are usually the fusion experiments people are most familiar with. I've worked on tokamaks for years and as such probably have an inherent bias, at least in how well-informed I am about the various machines.

The Joint European Torus (JET) holds the record for the ratio of energy out to energy in among tokamaks. In tokamaks, this ratio is called the Q value.

An aside on the Q value: many news articles calculate a Q for NIF and compare it to tokamaks, which is inappropriate in my opinion. In tokamaks, the Q value is defined as the ratio between the alpha heating power (energy produced by fusion reactions that stays trapped in the machine) and the input heating power. The reason this is used comes from a simple idea: if I need 25 MW of external heating to keep a reactor at a given temperature, I could replace it with 25 MW of internal heating and keep it at the same temperature. In practice the whole thing is much more complicated and probably means you always need at least some external heating. We call the situation where there are 25 MW internal and 25 MW external Q = 1.

There are two ways energy is emitted in DT fusion, where D + T -> He + n: the alpha power (the energy of the helium nucleus) stays trapped in a tokamak, but the energy imparted to the neutron escapes the magnetic field into the surroundings. In DT fusion, about 80% of the energy goes to the neutrons and escapes the reactor, so if you had 25 MW of alpha power, you would have 100 MW of neutron power. You use the alpha power to keep your plasma hot, and you use the neutrons in your steam turbines for electricity.
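The 80/20 split quoted above comes straight from momentum conservation in the D+T reaction: the 17.6 MeV released is shared inversely to mass between the 3.5 MeV alpha and the 14.1 MeV neutron (standard textbook values, not from the post). A quick check:

```python
# D + T -> He-4 (3.5 MeV) + n (14.1 MeV): the two products carry equal and
# opposite momentum, so energy splits inversely to mass.
E_ALPHA_MEV = 3.5
E_NEUTRON_MEV = 14.1
total = E_ALPHA_MEV + E_NEUTRON_MEV

alpha_frac = E_ALPHA_MEV / total      # stays in the plasma (tokamaks)
neutron_frac = E_NEUTRON_MEV / total  # escapes the magnetic field

print(f"alpha fraction:   {alpha_frac:.0%}")
print(f"neutron fraction: {neutron_frac:.0%}")
```

This reproduces the text's numbers: roughly 20% alpha power and 80% neutron power, i.e. 25 MW of alpha power implies about 100 MW of neutron power.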

In NIF, they don't need the alpha power because the reaction is not self-sustaining, and in fact there is no magnetic field, so everything escapes just as easily to be used anyway (although the alpha energy is obviously collected by the machine walls rather than requiring an external blanket). This means that when NIF quotes an energy output, it means alpha plus neutron energy combined.

Ok, with that out of the way: I have no problem with NIF quoting total energy instead of alpha power, because it makes sense for them, but when this is compared to MCF experiments that quote only alpha power, the hairs on my neck rise.

Back on topic. JET achieved a Q value of about 0.7 in 1996 when it ran DT campaigns, getting about 17 MW of alpha power from 25 MW of external heating. JET is currently running DT campaigns again, but focused on sustained power production, and with massive upgrades in the intervening years to the neutral-beam heating system it now produces around 30 MW of alpha power for 45-50 MW of external heating, for a Q of about 0.6 (but held for about 6-8 seconds).

ITER, the next-generation tokamak experiment, is tentatively expected to produce around 500 MW from 50-60 MW of heating, but with those experiments still about 10 years away, it remains to be seen how close it gets to that goal.
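Using the convention described above, those round numbers give the JET/ITER comparison directly (a sketch; the figures are the rounded ones quoted in this comment, not official measurements):

```python
def q_value(p_fusion_mw: float, p_heating_mw: float) -> float:
    """Q = fusion power retained / external heating power supplied."""
    return p_fusion_mw / p_heating_mw

jet_1997 = q_value(17, 25)      # JET DT campaign: ~0.7
iter_target = q_value(500, 50)  # ITER design goal: 10
print(f"JET ~{jet_1997:.2f}, ITER target {iter_target:.0f}")
```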

I mentioned the 400 MJ power budget to pump the laser, and it's true that JET has additional power costs as well. The magnets alone draw about 800 MW! However, there is a much clearer path (in my opinion) to reducing this cost, since the superconducting magnets in ITER and other experiments bring the power needed for the magnets to almost zero, and the other energy sinks are trivial in comparison. There is no comparable reduction available for the lasers in ICF machines, which will always have to be pumped inefficiently.

In a broader sense, the steady-state nature of tokamaks (well, we can hope they'll get there one day) makes the path to power generation clearer. In my opinion, ICF just has a few more bumps in the road (and they're really big bumps, too).

I have rambled too long and my fingers are cold, so I definitely have to end this comment here, and I have to end it on the positive note that I love NIF and have seen some amazing results from it. But the headline-grabbing "net-positive fusion reaction" framing doesn't do it for me. With no clear path to the next step (a demonstration power plant), it seems almost irrelevant to me how much gain it produces, though I grudgingly admit that these stories help with pooling funds.

We've been able to create fusion reactions for a long time, but only now can we create one that produces more energy than it takes to start it.

This is huge because if we're going to use fusion as an energy source, obviously that's only possible if the reaction creates more energy than it consumes.

The reason this matters is that commercially available fusion reactors would solve many of our problems at once. As fuel they use isotopes of hydrogen, the most abundant element in the universe: deuterium, which we can easily and cheaply extract from water, and tritium. The reaction produces helium, which is harmless to humans (plus, we're running out of it and need it for various industrial applications). The radioactive part of the fuel cycle, tritium, is MUCH, MUCH less dangerous than fission reactor byproducts, and has a very short half-life (about 12 years, compared to 24,000 years for plutonium-239), so it decays quickly and doesn't really need to be stored forever.
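To see what that half-life difference means in practice, here's a small sketch using the standard exponential decay formula (half-lives are the textbook values, ~12.3 years for tritium and ~24,100 years for Pu-239):

```python
def fraction_remaining(years: float, half_life_years: float) -> float:
    """Fraction of a radioactive sample left after `years` of decay."""
    return 0.5 ** (years / half_life_years)

# After a century in storage, almost all the tritium is gone,
# while the plutonium has barely started to decay.
tritium_left = fraction_remaining(100, 12.3)
pu239_left = fraction_remaining(100, 24_100)
print(f"tritium: {tritium_left:.1%} left, Pu-239: {pu239_left:.1%} left")
```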

Edit to add: Also, fusion reactors can't have runaway reactions like Chernobyl, Fukushima, or Three Mile Island. The reaction simply stops when you stop the process, which is another big safety advantage.

We are not yet at the point of producing net power from fusion. That breaks down into three separate milestones: ignition (when the reaction produces more energy than it absorbs, i.e. it self-heats), scientific breakeven (when it produces more energy than the direct input from the systems, i.e. the lasers), and engineering breakeven (when it produces more power than is used by all the necessary systems, i.e. everything that powers the lasers and keeps the facility running). This result is the second; the first was cleared in February. The third is unfortunately a long way off, because the lasers are terribly inefficient. And then getting to the point of commercialization would take decades and decades of development and advancement, gradually improving efficiency until fusion was better than everything else.
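The second and third milestones can be put side by side using the rough public figures for the December 2022 NIF shot (~2.05 MJ of laser light on target, ~3.15 MJ of fusion yield, ~400 MJ from the grid to pump the lasers; treat all three as approximate):

```python
fusion_yield_mj = 3.15   # fusion energy released by the shot
laser_energy_mj = 2.05   # laser energy delivered to the target
wall_plug_mj = 400.0     # grid energy consumed to fire the lasers

# > 1 means scientific breakeven: cleared by this shot.
scientific_q = fusion_yield_mj / laser_energy_mj
# Would need > 1 for engineering breakeven: still far off.
engineering_q = fusion_yield_mj / wall_plug_mj
print(f"scientific Q ~{scientific_q:.2f}, engineering Q ~{engineering_q:.4f}")
```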

It is a great achievement, but there is still a lot of work to be done:

☑️ The fusion energy output must be greater than the laser energy input.

☐ The fusion output, reduced by the efficiency of the steam turbine, must be greater than the total input.

☐ The process must be cheap enough to be economically viable.

☐ To scale, we need a cheap and energy-efficient way to produce the fuels, deuterium and tritium.

Pretty sure we're still a few decades away from delivering net power to the grid. So in the short term this means spending a lot more money on research while nobody does much about climate change.

…I'm not saying we shouldn't spend money on fusion research, I'm just saying it would be nice if we did more to prevent a climate apocalypse. Fusion probably won't scale in time.

They've figured out a way to make fusion produce net energy, so logically the next step is to figure out how to harness it. It's still going to take a long time for that breakthrough to arrive.

I think it's similar to when the Wright brothers worked out the basic conditions needed for flight: they narrowed the scope of variables for everyone else. In 70 years, humanity went from being unable to fly to putting a man on the moon. I'm still young enough to experience a significant part of how fusion will affect humanity, and I'm fucking excited!