Codydw12 t1_je38djn wrote

I truly wonder what is holding up larger-scale greenhouses complete with LED lights. The vertical farms effectively being skyscrapers that are filled completely with crops and are climate change resistant. I understand that the economics of purchasing land in a high-density city to build a farm isn't pretty compared to spending a fraction of that to purchase a similar amount of land in an area better suited to farming, but I think long term enough people in major city centers would pay good money for fresh produce year round. That, plus the significantly larger on-paper crop yields of a near perfectly controlled environment, screams a fast way to grow to me.


Codydw12 t1_je36m7n wrote

Theoretically we can control the entire global environment. We could turn the dial back on global temperatures to preindustrial levels, and the same ability could even be used on small sections of land to expand the "natural" world. Turning Siberia into Pleistocene Park, complete with currently extinct megafauna.


Codydw12 t1_jcfhsmv wrote

>I don't know if I think it's profound either, but I do think it's a healthy reminder. Its a good reminder that we don't really understand these algorithms, and that regardless of how human-presenting they are, they are not human and we can't trust them to act in certain ways. Maybe not particularly helpful, but worthwhile none the less (in my opinion).

And this is fair. AI will not act like a human, nor will it be completely logical in every aspect. We don't actually know how one will act or react, or what it's been trained on.

> This has happened to me too, I've suggested exactly the same thing (though admittedly stole the idea from mark Cuban when he guest hosted on a podcast at one point). At this point everything is socialist if it's different than the status quo though so I try to ignore it.

Indeed. I have given up on trying to predict future economies, but the current system won't work much longer.


Codydw12 t1_jcf9wnq wrote

> You're cherry picking. He addresses this in the article. We can't afford to be left behind, yet we also don't understand what we are racing towards.

> > But I don’t think these laundry lists of the obvious do much to prepare us. We can plan for what we can predict (though it is telling that, for the most part, we haven’t). What’s coming will be weirder. I use that term here in a specific way. In his book “High Weirdness,” Erik Davis, the historian of Californian counterculture, describes weird things as “anomalous — they deviate from the norms of informed expectation and challenge established explanations, sometimes quite radically.” That is the world we’re building.

> > I cannot emphasize this enough: We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.

> > That is perhaps the weirdest thing about what we are building: The “thinking,” for lack of a better word, is utterly inhuman, but we have trained it to present as deeply human. And the more inhuman the systems get — the more billions of connections they draw and layers and parameters and nodes and computing power they acquire — the more human they seem to us.

None of this seems actually profound or useful to me. Saying that the AIs we build will be alien to our own thinking? To me that is, in his own words, part of the laundry list of the obvious.

> Automation has also already cost jobs. It will cost more. This is not controversial. We need to figure out how we adapt to a world where our work does not and should not define us.

And that I fully agree with, but every time I suggest heavily taxing automated jobs as a means to fund a Universal Basic Income, I have hypercapitalists call me a socialist for believing people should be allowed to live without needing to work.


Codydw12 t1_jcdswpf wrote

> In a 2022 survey, A.I. experts were asked, “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10 percent.

> I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10 percent chance of wiping out humanity?

> We typically reach for science fiction stories when thinking about A.I. I’ve come to believe the apt metaphors lurk in fantasy novels and occult texts. As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.

> I often ask them the same question: If you think calamity so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.

> A tempting thought, at this moment, might be: These people are nuts. That has often been my response. Perhaps being too close to this technology leads to a loss of perspective. This was true among cryptocurrency enthusiasts in recent years. The claims they made about how blockchains would revolutionize everything from money to governance to trust to dating never made much sense. But they were believed most fervently by those closest to the code.

So throw it all in the trash? Stop summoning demons? Or is it worth the risk that we might burn out in an attempt to create technologies that progress to the point of immense benefit? This just reads like fearmongering.

I do not see AI as some cure-all, nor do I believe it will completely replace humanity as some on here seem to believe, but I do believe that a lot of the benefits that could come from it are worth it.

> Could A.I. put millions out of work? Automation already has, again and again. Could it help terrorists or antagonistic states develop lethal weapons and crippling cyberattacks? These systems will already offer guidance on building biological weapons if you ask them cleverly enough. Could it end up controlling critical social processes or public infrastructure in ways we don’t understand and may not like? A.I. is already being used for predictive policing and judicial sentencing.

Again, fearmongering. Automation and job loss is a constant fear. Terrorists and bad actors are always feared to be getting nukes, yet none have to date. The same predictive systems can also help eliminate disease and aid crime prevention by helping those in need, who are often the most predisposed to commit crime.


Codydw12 t1_jae3z3p wrote

If you actually think Musk is going to do anything other than justify his position as the World's Most Annoying Shithead, I've got some oceanfront property in Montana to sell you. The private space companies are making small developments with technologies such as reusable rockets, yes, but overwhelmingly the major programs that actually intend to put people on the Moon by decade's end are government.

Once again, per my previous comment, Artemis 3 is a manned mission planned to land in 2025. Artemis 4 is planned for 2027, and missions all the way up to Artemis 8 are planned to bring more and more people and infrastructure up there. And that's just NASA; you should look at your own Heracles program.

So what would you prefer? We all just give up on space?


Codydw12 t1_jadz3du wrote

And I can understand the apprehension after decades of being told it's happening, only for it to never happen. But given that the Artemis Program plans on having people back on the Moon, as well as a lunar base, by decade's end (I might have the timeline off), I think writing everything off as more hopium is pointlessly pessimistic, particularly since other nations are ramping up their own space programs and private groups are driving interest.


Codydw12 t1_jadqhe0 wrote

> Moonlight needs a common lunar reference time in order to provide accurate location data to users on the Moon’s surface. In order to keep time on different lunar missions in the past, each mission synchronized its clocks with those on Earth and used antennas in space to correct from drifts in time. ESA says this solution will prove inadequate as space agencies plan to send more humans and autonomous rovers than ever to the Moon. These different teams may need to communicate with each other, rendezvous, or conduct joint observations, and a standardized clock could smooth out issues in that regard.

Is that a good enough reason?


Codydw12 t1_j8pw6kw wrote

I get how this is seen as eugenics by removing some genes, but I am not calling for anyone to die here. Hell, I want more people on this Earth and get called crazy for it. But I don't see how saying "This gene causes a significantly higher risk of literal cancer" and then saying "We should probably change that to benefit this person's life" is anywhere near wanting to genocide people.

Additionally, we can have both. Hell, I'd call gene editing a healthcare procedure if you're fixing an illness.


Codydw12 t1_j8pvme8 wrote

You're right in all regards. Not having kids, bioethics, and politics. But the thing is, there is no iteration of the Pandora's Box story where the box doesn't get opened. It'll be much the same with AI, robotics, space exploration and colonization, and probably a whole lot more this century.

To me, if someone wants to go in and edit their genetics so they grow to be 7'6", I really don't care. If someone wants to have purple eyes or bright pink hair or elf ears. If people want to get stronger or smarter or more agile or almost anything else. Fuck, they could splice in firefly genes to become bioluminescent and I wouldn't really care, much the same way I don't care if someone gets a tattoo, piercing, or reassignment surgery. If you're happy and aren't hurting people, I don't really care.

For people gene editing their kids, there's a lot that I support, like improving health, removing defects, or just trying to give them a good quality of life. For the more excessive things, like turning their skin purple or having them grow four arms, then yeah, I have an issue, because you don't have the kid's consent and can't get it. Now if the kid grows up and says "I want to have four arms!" then, since it's them consenting, I don't really care. Now we'll have another issue when two people with four arms want their own baby Shokan, but that's like 50 years off at least.

I think in some regards Cyberpunk pretty accurately predicted the future. We as a society are going to have to figure shit out pretty fast.


Codydw12 t1_j8prznb wrote

So your answer is abortion. Ok, fair. I haven't had the chance to have a child on account of being fucking broke, but I'd like to one day. But if a couple continues to try for a child and continues to run into an issue such as Down syndrome, a massive chance of becoming cancer-ridden, or crippling anxiety all their life, there aren't many better options. We're going to edit genes, if not today then tomorrow. We might as well get the ethics of doing so down right now.


Codydw12 t1_j8pl105 wrote

Yet you defaulted to making it illegal to change the sex of an unborn child. I will say I disagree with the option of choosing to change sex, but I don't think it should be illegal.

What if you do a screening and find a gene that predisposes the child to a heart defect? Or a mental disorder? I'd say it's ethical to attempt to edit those genes.

"Oh but designer babies. The rich will just give their kids the best."

Yeah. And? I'd be more concerned if parents didn't attempt to get what's best for their kids. If you take two chess grandmasters and have them fuck, is that a designer baby? What about two Olympic-level athletes? Or is that different because it's natural?


Codydw12 t1_j6uicnz wrote

I support seasteading cities. I have some major questions about them, such as actual location, how they would deal with severe weather, desalination tactics, energy, and overall density, but on the surface, building real estate out of nothing is only a positive.

If by 2040 someone can make a floating Hong Kong or Singapore or NYC with energy and water independence (possibly food as well, depending on vertical farming advances), I'd move in a heartbeat.