Cryptizard t1_je94z3w wrote

No lol. A better way to illustrate what I am saying is that if you learn how addition works, then if you ever see 2+2=5 you know it is wrong and can reject that data. LLMs cannot; they weigh everything equally. And no, there is no number system where 2+2=5; that is not how bases work.
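To be concrete about the bases point, here is a quick sketch in plain Python (nothing assumed beyond positional notation): changing the base only changes how four is *written*, it never makes 2+2 come out as "5".

```python
def to_base(n, b):
    """Render a non-negative integer n in base b (2 <= b <= 10)."""
    digits = []
    while True:
        n, r = divmod(n, b)
        digits.append(str(r))
        if n == 0:
            break
    return "".join(reversed(digits))

# The digit 2 only exists in bases >= 3, so check every small base:
for base in range(3, 11):
    print(f"base {base}: 2 + 2 = {to_base(2 + 2, base)}")
```

In base 3 it comes out as 11, in base 4 as 10, and in every base from 5 up it is just 4. Never 5.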


Cryptizard t1_je6qdax wrote

If it has understanding, it is a strange, statistics-based understanding that doesn't align with what many people think of as rational intelligence. For instance, an LLM can learn that 2+2=4 by seeing it a bunch of times in its input. But you can also convince it that 2+2=5 by telling it that is true enough times. It cannot take a prior rule and use it to discard future data; eventually, new data will overwrite the old understanding.

It doesn't have the ability to take a simple logical postulate and apply it consistently to discover new things, because nothing is absolutely true to an LLM. It is purely statistical, which always leaves some chance that it conflicts with itself ("hallucinating," they call it).

This is probably why we need a more sophisticated multi-part AI system to really achieve AGI. LLMs are great at what they do, but what they do is not everything. Language is flexible and imprecise, so statistical modeling works great for it. Other things are not, and LLMs tend to fail there.


Cryptizard t1_jduumtg wrote

I have access to GPT4, I'm not making this stuff up. Here are three examples from poking around, but keep in mind it will pretend to know the answer to anything; it is just wrong when you ask it to explain the details. It will not match actual fact, i.e. what is in Wikipedia.

What is an oblivious tree?

What is the population of Goleh-ye Cheshmeh?

Where was the 65th governor of Delaware born?


Cryptizard t1_jdsooyh wrote

I'm sorry, from my perspective here is how our conversation went:

You: GPT4 is really good at arithmetic.

Me: It's not though, it gets multiplication wrong for any number with more than a few digits.

You: I tried it a bunch and it gets the first few digits right.

Me: Yeah, but getting the first few digits right is not right. It is wrong. Like I said.

You can't claim you are good at math if you only get a few significant digits of a calculation right. That is not good at math. It is bad at math. I feel like I am taking crazy pills.
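To put a number on it (made-up operands, plain Python as the ground truth): Python integers are exact at any size, so you can see exactly what a "leading digits only" answer misses on a big multiplication.

```python
# Hypothetical 12-digit operands, just for illustration.
a, b = 123456789012, 987654321098

exact = a * b                      # Python ints are exact, arbitrary precision
approx = int(float(a) * float(b))  # 64-bit floats keep only ~15-16 significant digits

print(str(exact)[:10] == str(approx)[:10])  # the leading digits agree
print(exact == approx)                      # but the full product does not
```

A few matching significant digits is an approximation, not an answer.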


Cryptizard t1_jbehya0 wrote

I have never seen a single person that wasn’t on his payroll say the move is good. He didn’t even try to justify to us why it would be good, he just forced it through over the objections of the comptroller and the city council. It seems like an obviously corrupt action that we are going to be reading about in two years when it comes out that BGE bought him a vacation house or something.

You make a very good point about short term vs long term goals, but what do we do then? How do we get politicians to actually do what they promise? How do we stop them from being so nakedly corrupt? It’s so frustrating.


Cryptizard t1_jbegz7p wrote

How do we counter the constant stream of corrupt politicians we somehow end up with then? I was excited about Brandon Scott but now he is directly going against the voters and the city council to sell our conduits to BGE and we have no recourse. If we had the possibility to recall him I bet he would not be so brazenly corrupt.


Cryptizard t1_jbefu4e wrote

In California they used it to recall the judge that let Brock Turner off. They also recalled lots of politicians that failed to live up to their campaign promises, which seems awesome to me. Right now there is zero recourse for a politician that promises something and then does a 180 on it immediately upon getting elected.