gurenkagurenda t1_jee8sun wrote

“Invested well” is carrying a lot of weight there. The safe money is that you won’t beat the market, and unless we continue to have exponential economic growth forever, eventually that interest will fall off.

I think it’s actually more realistic to hope that we’ll end up in a post-scarcity world in the next century or so, where money is more like Reddit karma — a mild incentive, rather than a necessity.

1

gurenkagurenda t1_jebq66o wrote

The research did, which is a bit different. I don't see why this would be a violation of the TOS though. I don't see anything in there about using model outputs to train other models. The closest would be:

> reverse assemble, reverse compile, decompile, translate or otherwise attempt to discover the source code or underlying components of models, algorithms, and systems of the Services

But that's not the same thing. Training your own model on ChatGPT outputs won't result in anything like the same source code, algorithms, or model weights as ChatGPT.

25

gurenkagurenda t1_je68go9 wrote

Almost nobody makes a good CEO, particularly among the people who can actually get the job. Founders often can't cope with how the job changes as a company matures (and/or can't speak without putting their feet in their mouths), and "credible to a board of directors" seems to be more correlated with vocalizing the corporate bullshit hidden Markov model than with actual sense.

13

gurenkagurenda t1_je376xl wrote

To find a point where nominal electricity prices were half their current price, you have to go back to the year 2000. Adjusted for inflation, those year 2000 prices were about 14% lower than today's prices. If you want to go all the way back to 1980 and adjust for inflation, the price was almost exactly what it is today.

So no, they absolutely haven't.
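The arithmetic above can be sanity-checked with a quick sketch. The specific figures here (current price, CPI factor) are illustrative assumptions chosen to match the comment's claims, not official EIA or BLS data:

```python
# Sketch of the inflation adjustment described above.
# Assumed figures, for illustration only:
price_today = 0.16            # assumed current electricity price, $/kWh
price_2000 = price_today / 2  # "half the current price" in nominal terms
cpi_factor = 1.72             # assumed cumulative CPI inflation since 2000

# Express the year-2000 price in today's dollars.
real_2000 = price_2000 * cpi_factor

# How much cheaper was electricity in 2000, in real terms?
discount = 1 - real_2000 / price_today
print(f"{discount:.0%}")      # roughly 14% lower, as claimed
```

With these assumed inputs, a nominal halving works out to only about a 14% real-terms difference, which is the comment's point: most of the apparent price increase is inflation.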

19

gurenkagurenda t1_jdxyl28 wrote

I know this probably feels like a nitpick, but I wish instead of saying:

> or that it’s sentient (it’s not),

journalists would instead say:

>or that it’s sentient (nobody actually has any idea what that means),

Every single debate along these lines has this infuriating flaw of people throwing vague words back and forth at each other based on some gut intuition that they don't have the faintest idea how to make any falsifiable statements about.

6

gurenkagurenda t1_jacxqtz wrote

Yes. A little while back, I had someone use a Computerphile video showing ChatGPT missing on college-level physics questions as proof that ChatGPT is incapable of comprehension. The bar at this point has been set so high that apparently only a small minority of humans are capable of understanding.

1

gurenkagurenda t1_jacebvc wrote

I don’t know how you want to define “understanding” when talking about a non-sentient LLM, but in my experiments, ChatGPT consistently gets reading comprehension questions from SAT practice tests right, and it’s well known that it has passed multiple professional exams. It’s nowhere close to infallible, but you’re also underselling what it does.

4

gurenkagurenda t1_jabikll wrote

The model does not need to see drawings of a horse to produce a picture of a horse. It needs to see pictures of horses, sure, but those could be photographs, drawings, whatever. As a human, you also would not be able to draw a picture of a horse without ever seeing a horse, so I’m not sure what your point is.

Also, how do you know that you’ve had it explained to you well? Unless you’ve attempted to apply the knowledge, you can only tell whether it’s been explained convincingly.

10

gurenkagurenda t1_jabfdwy wrote

> However the presumption will be that AI 'assisted' art is not entitled to copyright either.

I would draw the exact opposite conclusion from the USCO correspondence. Note this:

> We conclude that Ms. Kashtanova is the author of the Work’s text as well as the selection, coordination, and arrangement of the Work’s written and visual elements. That authorship is protected by copyright.

They’ve specifically said that everything but the generated images themselves is copyrighted. Assuming this decision holds up to further scrutiny (which, who knows), an assistive tool is one that combines non-copyrightable generated content with copyrightable human-generated elements. With those kinds of tools, the fact that individual elements of the final work are not copyrightable would generally be academic.

Edit: phonetic typo

5