lostmsu t1_je78jfg wrote

You are missing the point entirely. I am sticking to the original idea of the Turing test as a way to determine whether AI is already at human level or not yet.

The original Turing test is dead simple and can be applied to ChatGPT easily.

The only other thing in my comment is that "human-level" is vague, as intelligence differs from human to human, which allows for goalpost moving like in your comment. IQ is the best measure of intelligence we have. So it is reasonable to turn the idea of the Turing test into a whole family of tests Turing(I), where each is like a regular Turing test, except that the IQ of the humans participating (both the machine's opponent and the person who has to guess which one is the machine) is <= I.

My claim is that I believe ChatGPT, or ChatGPT plus some trivial form of memory enhancement (like feeding previous failures back into prompts), quite possibly can already pass Turing(70).
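To be concrete, here is a minimal sketch of the kind of trivial memory enhancement I mean; the `chat_model` callable and the failure log format are placeholders, not any real API:

```python
# A minimal sketch of "feeding previous failures back into prompts".
# `chat_model` and the failure log format are hypothetical placeholders.
def answer_with_failure_memory(question, past_failures, chat_model):
    """Prepend earlier mistakes so the model can try not to repeat them."""
    memory = "\n".join(f"Earlier you answered this incorrectly: {f}" for f in past_failures)
    prompt = f"{memory}\n\nQuestion: {question}\nAnswer:"
    return chat_model(prompt)
```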

1

lostmsu t1_jaj0dw2 wrote

I would love an electricity estimate for running GPT-3-sized models with optimal configuration.

According to my own estimate, the electricity cost over the lifetime (~5 years) of a 350W GPU is between $1k and $1.6k, which means that for enterprise-class GPUs electricity is dwarfed by the cost of the GPU itself.
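Back-of-the-envelope version of that estimate (the $/kWh rates are my own assumptions):

```python
# Lifetime electricity cost of a 350W GPU running at full load for ~5 years.
# The electricity prices are assumed, not measured.
watts = 350
hours = 24 * 365 * 5                  # ~5-year lifetime
kwh = watts / 1000 * hours            # ≈ 15,330 kWh
for rate in (0.065, 0.105):           # assumed $/kWh range
    print(f"${kwh * rate:,.0f}")      # ≈ $996 and ≈ $1,610, i.e. roughly $1k-$1.6k
```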

14

lostmsu t1_j8stb1b wrote

Love the project, but after reading many papers I have come to realize that the lack of verbosity in formulas is deeply misguided.

Take this picture that explains RWKV attention: https://raw.githubusercontent.com/BlinkDL/RWKV-LM/main/RWKV-formula.png

What are the semantics of i, j, R, u, W, and the function σ? That should be obvious at first glance.

1

lostmsu t1_j4lrt8a wrote

The performance/$ characteristic needs an adjustment based on longevity * utilization * electricity cost. Assuming you are going to use the card for 5 years at full load, that's $1,000-$1,500 in electricity at $1 per year per 1W of constant use (12c/kWh). This would take care of the laughable notion that the Titan Xp is worth anything, and sort the cards much closer to their market positioning.
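Roughly, the adjustment looks like this; the card prices, wattages, and performance numbers below are made up purely for illustration:

```python
# A rough sketch of electricity-adjusted performance per dollar.
# Card prices, wattages, and performance figures are hypothetical examples.
def lifetime_cost(card_price, watts, years=5, dollars_per_watt_year=1.0):
    """Purchase price plus electricity at ~$1 per watt per year (≈12c/kWh, full load)."""
    return card_price + watts * years * dollars_per_watt_year

def adjusted_perf_per_dollar(perf, card_price, watts):
    return perf / lifetime_cost(card_price, watts)

# An old 250W card that is cheap up front can still lose to a pricier but more efficient card.
print(adjusted_perf_per_dollar(perf=100, card_price=400, watts=250))  # 100 / 1650 ≈ 0.061
print(adjusted_perf_per_dollar(perf=140, card_price=800, watts=180))  # 140 / 1700 ≈ 0.082
```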

65

lostmsu t1_j1x7gfr wrote

> lostmsu - we can’t be sure. Maybe Fred looked at the fruit bowl yesterday

I mean. Did you read the last sentence? I am selling the logic that if two sane, non-stupid people disagree in good faith, then it is unclear. In your example, lostmsu is a fruit of your imagination; you can't be sure that fruit is sane and non-stupid. Here the argument is that we are in the ML subreddit context, and we both understand the topic at hand, which raises the chances of both of us matching the criteria to near 100%.

In this context, if I started disagreeing with 1+1=2, you should at least start suspecting that I might be on to something.

1

lostmsu t1_j1t7nph wrote

> If we’re talking about a lost child

Now you are just making things up.

> my information is hours out of date I don’t just say

This depends on the context of the dialog, which in this case is absent. E.g., this could be a conversation about events happening elsewhere, only tangentially relevant to the conversation participant(s). For a specific example, consider the dialog being about the disappearance of flight MH370.

> One person disagreeing is not a sufficient threshold for clarity.

> was taking the argument to the absurd to show that one person’s unclear doesn’t invalidate a truth.

It normally would not be, but we are not two randomly selected people, and neither of us is crazy nor do we argue in bad faith.

1

lostmsu t1_j0mqde6 wrote

> limit your understanding

ROFL. One, in making that statement you assume you're right, but that's exactly the matter in question, so the argument is circular. Two, the opposite of that is called "jumping to conclusions".

> limit your ... probability of acting correctly

Unsubstantiated BS. When the transmitted information is "unclear", nothing prevents one from acting as if it were "no" or "yes". That's what "unclear" damn well means. On the contrary, assuming it means "no" is the limiting factor in that particular scenario.

> This exchange has a very clear subtext the child hasn’t been found.

Dude, if it is clear to you and not clear to me, that damn literally means it is unclear, because the people involved disagree on the interpretation. Your reading is missing the "last time I met the group of people who are searching" part, which could be minutes ago, hours ago, or even yesterday.

> I think you’ll find this level of logical pedantry only correlates with being a douche

Oh, so now we switch to personal attacks? How about I call you a moron, because you can't grasp that if two seemingly non-stupid people disagree about a statement, it cannot possibly be "clear"?

> I say 1 + 1 = 2 is clear, you say it’s not. Well obviously it must be unclear if one of us considered it not you say

I can see that you fail to keep even slightly complicated abstractions separate. For instance, in your example you confuse objective truth with the information that a message conveys.

1

lostmsu t1_j0d8fsl wrote

Man, this statement is neither a negation of my statement nor does it imply one, so it does not prove anything.

You somehow think being "the biggest logic pedant" is a downside. I can assure you that logical pedantry correlates positively with pretty much every success metric you could imagine, except those that are hard-dependent on the average folk being able to comprehend what one is saying. Even more so in a science-related discussion like this one.

Don't you see the irony of the two of us arguing about the correctness of the "unclear" answer being the definitive proof that "unclear" is the correct answer?

0

lostmsu t1_is5jeeb wrote

The transformer is awfully small (2 blocks, 200-dimensional embeddings, sequence length 35). I would discard that result as useless. They should be testing on GPT2-117M or something similar.
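For scale, a rough parameter count (ignoring embeddings, biases, and layer norms, so treat the numbers as approximate):

```python
# Approximate parameter count of a standard transformer block,
# ignoring embeddings, biases, and layer norms.
def block_params(d_model, mlp_ratio=4):
    attn = 4 * d_model ** 2              # Q, K, V, and output projections
    mlp = 2 * mlp_ratio * d_model ** 2   # two feed-forward projections
    return attn + mlp

tiny = 2 * block_params(200)             # the 2-block, 200-dim model: ~0.96M parameters
print(f"{tiny / 1e6:.2f}M parameters vs. ~117M for GPT-2 small")
```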

28