mrpenchant t1_je4y5m5 wrote

>We can use one or all these solutions.

It's wrong to say we can use only one of them: batteries and transmission are both a must. Because of renewables' intermittent nature, if we want to continue increasing renewable production we must have batteries. And if we want to do this in a remotely sensible way, we need transmission to move electricity from where it is most efficiently and economically generated to the cities where people are.

By the way, your hydro and heat reservoirs are both just non-traditional batteries.

And only charging EVs when there is excess power is a good way to kill off EVs. I know I would never buy one under that condition, because I don't plan on getting stuck somewhere just because charging wasn't considered a valid option. Economic incentives for when to charge, on the other hand, are already in use and are perfectly valid.


mrpenchant t1_je4vys2 wrote

They misstated it a little bit.

The way the format is set up, the first length it gives is for the whole thing, but the file is defined to have 2 subchunks. The first subchunk always has the same size for a WAV file, but the format still provides a length for that subchunk, and then the last data length is just for the data in the 2nd subchunk.

This is all to say, it's not a reminder of the first length but a slightly different one: the length of the entire thing minus 36, which is the number of header bytes sitting between the two length fields.
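The two-subchunk layout above can be sketched in Python. This is a minimal, hypothetical builder for a canonical PCM WAV header (assuming the standard 44-byte layout with exactly a "fmt " and a "data" subchunk); it shows the overall RIFF ChunkSize always coming out 36 bytes larger than the data Subchunk2Size:

```python
import struct

def build_wav(data: bytes, channels: int = 1, sample_rate: int = 8000,
              bits: int = 16) -> bytes:
    """Build a minimal canonical PCM WAV file in memory (illustrative sketch)."""
    byte_rate = sample_rate * channels * bits // 8
    block_align = channels * bits // 8
    header = b"RIFF"
    header += struct.pack("<I", 36 + len(data))   # ChunkSize: data size + 36 fixed bytes
    header += b"WAVE"
    header += b"fmt "
    header += struct.pack("<I", 16)               # Subchunk1Size: always 16 for PCM
    header += struct.pack("<HHIIHH", 1, channels, sample_rate,
                          byte_rate, block_align, bits)
    header += b"data"
    header += struct.pack("<I", len(data))        # Subchunk2Size: raw sample bytes only
    return header + data

wav = build_wav(b"\x00\x00" * 100)                # 100 silent 16-bit samples

riff_size = struct.unpack_from("<I", wav, 4)[0]   # whole-file length field
data_size = struct.unpack_from("<I", wav, 40)[0]  # 2nd-subchunk length field
print(riff_size - data_size)  # 36: the fixed header bytes between the two fields
```

The 36 breaks down as "WAVE" (4) + "fmt " id and size (8) + the 16-byte PCM fmt body + "data" id and size (8), which is why the two length fields always differ by exactly that much in a standard PCM file.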


mrpenchant t1_jcb0z2h wrote

>If AI is artificially limited from considering women in comedic situations it will end up having unpredictable results when the model will have to consider women in comedic situations as part of some other task given to AI.

So one thing I will note now, just because AI is blocked from giving you a sexist joke doesn't mean it couldn't have trained on them to be able to understand them.

>An example would be if you were to have AI solve crime situation, but said situation would have aspect to it that included what humans would find comedic.

This feels like a very flimsy example. The AI is now employed as a detective rather than a chatbot, which is very much not the purpose of ChatGPT, but sure. Now, ignoring (as I said) that the AI could be trained on sexist jokes and simply refuse to make them, I still find it unlikely that understanding a sexist joke is going to be critical to solving a crime.


mrpenchant t1_jcaq6bd wrote

I still don't follow, especially as that wasn't an example but just another generalization.

Are you saying that if the AI can't tell you jokes about women, it doesn't understand women? Or that it won't understand a request that also includes a joke about women?

Could you give an example prompt/question that you expect the AI to fail at because it doesn't make jokes about women?