TechyDad t1_je9okh8 wrote

This is interesting. One of my obstacles to buying an electric car, besides the price of the car itself, was the cost of getting a charging station installed. If I could charge it nightly from the regular outlet on my porch, that would save some money and make an electric car more affordable.

I'm hoping my current gas car lasts for a while, but it's 14 years old so it might not keep going too much longer. When it finally dies, I want to at least get a hybrid if not a full electric.


TechyDad t1_je9n0iy wrote

Maybe someone could design a charger that you plug in but that doesn't immediately start charging. Perhaps you'd use an app to schedule the charging to start at 2am. The car would trickle charge just enough not to lose all power until 2am, at which point it would switch to full charging mode.

I'll admit that I don't know all that much about electric cars, but this shouldn't be too hard of a problem to crack. It might even be doable with current technologies. I know some chargers intelligently stop charging when your battery is full, switching to trickle charging afterwards. This would basically be the opposite with a timer built in.
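The scheduling logic itself really is simple. Here's a toy sketch of it in Python; the window times, battery readings, and mode names are all made up for illustration, not any real charger's API:

```python
from datetime import time

TRICKLE, FULL, IDLE = "trickle", "full", "idle"

def in_window(now: time, start: time, end: time) -> bool:
    """True if `now` falls inside [start, end), wrapping past midnight."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end

def charge_mode(now: time, start: time = time(2, 0), end: time = time(6, 0),
                battery_pct: float = 50.0, reserve_pct: float = 20.0) -> str:
    """Pick a charging mode: full rate inside the scheduled window,
    otherwise trickle only if the battery has dipped below a reserve,
    so the car never loses all power while waiting for 2am."""
    if in_window(now, start, end):
        return FULL
    return TRICKLE if battery_pct < reserve_pct else IDLE
```

So at 3am you'd get full charging, while at 11pm the charger would sit idle unless the battery had dropped low enough to need a top-up.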


TechyDad t1_jdtnrn0 wrote

I'm sure plenty of books get cancelled all the time. I have one novel published and a second written. However, I had trouble getting beta readers for the second book. So it remains finished, but unpublished. Maybe one day I'll return to the series, finish it off, and publish all the books, but until then they're cancelled.


TechyDad t1_jd104qt wrote

Could it be solved in the future? Perhaps. You never know what future technology can bring. If you'd talked about carrying a portable, touchscreen, Internet-enabled computer everywhere 40 years ago, you'd likely have been laughed at, but here we are today.

With today's technology, though, we just can't do it. From the video: "the basic crochet stitch involves 28 movements across 9 axes of motion." The most stitches one robot was able to do in a row successfully was 4, and it only completed stitches successfully half the time. Obviously, there's a ton more work that would need to be done before you could have a crochet robot cranking out hats or amigurumi.


TechyDad t1_jc71gno wrote

Reply to comment by tessthismess in 2023 Digits of π [OC] by yaph

I can imagine a guy manually calculating pi in the past coming across that and thinking he's found Pi's repeating end.

"9. 9. 9? Another 9? And another 9?!!! That's it! Pi must just repeat all 9's after this. Yup. There's another 9. That's six 9's in a row. It's a repeating decimal after this for sure, so the next digit will be an... 8? An EIGHT?!!!"
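That run of six 9's is real: it starts at the 762nd decimal place (sometimes called the Feynman point), and the digit right after it really is an 8. A short sketch can find it using Machin's formula with plain integer arithmetic:

```python
def arctan_inv(x: int, one: int) -> int:
    """arctan(1/x) as a fixed-point integer scaled by `one`."""
    power = one // x
    total = power
    x_sq = x * x
    divisor = 1
    sign = 1
    while power:
        power //= x_sq
        divisor += 2
        sign = -sign
        total += sign * (power // divisor)
    return total

def pi_fraction_digits(n: int) -> str:
    """First n digits of pi after the decimal point, via Machin's
    formula: pi/4 = 4*arctan(1/5) - arctan(1/239)."""
    one = 10 ** (n + 10)  # 10 guard digits against truncation drift
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return str(pi)[1:n + 1]  # drop the leading "3"

digits = pi_fraction_digits(800)
pos = digits.find("999999")  # 0-indexed, so pos + 1 is the decimal place
```

Here `pos` comes out to 761, i.e. the 762nd decimal place, and `digits[pos + 6]` is the infamous 8.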


TechyDad t1_jaeu620 wrote

I call that Schrodinger's Joke. The statement is simultaneously completely serious and "just a joke" until the person sees the reaction to it. Then, it collapses into the state that benefits the person the most.

If you agree with them, then they are totally serious.

If you're offended by what they said, then they were just joking and how dare you take that seriously!


TechyDad t1_jab3q83 wrote

There are some very promising things that can come from AI, but there are valid concerns about AI usage as well.

For one, AI image generators sample artists' works without permission and then use that to make new works in the same style. There are valid copyright concerns about whether this should be allowed or whether it's a copyright violation.

Secondly, there's the black box problem. Say you ask an AI doctor to diagnose something and it comes up with a diagnosis. How did it arrive at that diagnosis? We can't just assume that the output from an AI program is automatically correct because it came from an AI program.

Finally, there's the bias issue. An AI program is only as good as its coding/setup, and human biases can wind up incorporated into the AI. An extreme example is Tay, the chatbot that Microsoft released online in 2016 that, within a day, started spouting racist and antisemitic statements. It read stuff that humans wrote, incorporated it into itself, and began saying things like "Hitler was right."

A less extreme example might be a medical AI meant to spot skin cancer that's trained only on a dataset of white people's skin. Whether due to intentional or unintentional biases, such an AI might fail to diagnose skin cancer in black patients because it doesn't recognize a black person's skin as "human skin."

This isn't to say that all AI is garbage and should be tossed out. On the contrary, it's very promising. On the other hand, you also can't just hand-wave away any concerns as "old geezers unwilling to adapt to change." Like a lot of new technologies, there will be good uses and bad uses. There will be implementations that advance humanity and ones that deserve to be immediately deleted. It's important to keep a critical eye on AI usage to spot and promote the good usage while stopping the bad AI usage (and fixing it if possible).


TechyDad t1_j5743rc wrote

Yeah, I'd expect that this would fall under civil liability instead of criminal charges unless prosecutors could prove that the college not only knew that he was living there, but knew exactly what was going on there. That would be difficult to prove in a criminal case, but civil cases have a lower burden of proof to clear.

Plus, every affected parent/student could sue. I wouldn't want to be in any position of responsibility in that college over the next few years.


TechyDad t1_j46ut67 wrote

I believe the first part of this is already in use. Drug companies can tell a computer "we need a chemical that will bind to this receptor." The computer will look through the possibilities and spit out some likely candidates. The drug companies then only have to test 10 or so likely drugs instead of searching at random for a suitable one.

I'd expect that future iterations would incorporate modeling to guess at side effects. It won't be perfect (especially not at first), but it would give drug researchers a better head start on which compounds would give the biggest bang with the least side effects.
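The screen-then-shortlist idea could be sketched as a toy ranking: score each candidate on predicted binding minus a weighted side-effect penalty, then keep the top few for lab testing. Every name and number below is made up for illustration; real pipelines use docking simulations and ML models, not a one-line formula:

```python
from dataclasses import dataclass

@dataclass
class Compound:
    name: str
    binding_score: float      # higher = predicted to bind the receptor better
    side_effect_risk: float   # higher = worse predicted side effects

def shortlist(candidates: list[Compound], top_n: int = 10,
              risk_weight: float = 0.5) -> list[Compound]:
    """Rank by binding minus a side-effect penalty; keep the top_n."""
    ranked = sorted(candidates,
                    key=lambda c: c.binding_score - risk_weight * c.side_effect_risk,
                    reverse=True)
    return ranked[:top_n]

library = [
    Compound("cmpd-A", 9.1, 0.8),
    Compound("cmpd-B", 8.7, 0.1),
    Compound("cmpd-C", 6.0, 0.0),
]
best = shortlist(library, top_n=2)
```

The interesting modeling work is all hidden inside those two scores; the shortlist step itself is trivial once you have them.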


TechyDad t1_j46kg5n wrote

I'm especially excited about AI/machine learning generation of new drugs. We can already do this to a great extent. I don't think it'll be long before you can tell the computer "I want a drug that will have this effect with as few side effects as possible" and have it spit out 10 great contenders. Yes, those would still need to pass human trials, but it would reduce the number of "miracle cures" that turn out to be duds once human trials begin.


TechyDad t1_j31khwd wrote

Antisemitism has been on the rise. There have always been way too many antisemites. (Goodness knows I've dealt with way too many over the years.) What's changed is that the more radical antisemites feel empowered now. Instead of trying to hide their views, they feel comfortable screaming their views from the rooftops.


TechyDad t1_j31j976 wrote

The second requirement is a huge attack on Israel, so while they'll "support" Israel, they'll also sabotage any peace process and encourage things like the settlers who cause more violence.

Also, they think that Jesus will return after the attack, take them all to heaven, and toss all us Jews into hell. This means that their "support for Israel" is really just delayed antisemitism. "If we act nice towards Israel for a bit then Jesus will doom all the Jews for us."


TechyDad t1_j1xy6nm wrote

I think lab grown meat would be a great replacement for factory farm meat. You'd still have farms with free range cattle (and other animals) that grazed. You just wouldn't have cows who live their whole lives stuck in a pen all but being force-fed to get as big as possible before slaughter.

The meat from the actual animals would be a niche item that some people enjoyed. Meanwhile, the masses eating McDonald's burgers would get lab grown meat that would be just as good as (if not better than) the meat they get today, without the risk of illnesses carried via the meat and without the huge climate footprint.