Viewing a single comment thread. View all comments

norbertus t1_j9w05jq wrote

This article has some problems. The biggest one -- beyond some of the more basic conceptual problems with what these machine learning systems actually do -- is the vague demand that AI be "democratized."

They never define what they mean by "democratize," though they caution that "Big corporations are doing everything in their power to stop the democratization of AI."

We have AI because of big corporations. And nobody is going to "democratize" AI by giving every poor kid in the hood a big NVIDIA card and the skills to work with Python, Bash, Linux, Anaconda, CUDA, PyTorch, and the whole slew of technologies needed to make this stuff work. You can't just "give" people knowledge and skills.

This article is kind of nonsense.


Sometimes_Stutters t1_j9wl0u8 wrote

No no no. You just don’t understand. All we have to do is DEMOCRATIZE artificial intelligence. It’s that simple. What? Are you against democracy?


Zyxyx t1_j9y9ztb wrote

The only way to stop a bad guy with AI is a good guy with AI.


TheRoadsMustRoll t1_j9w9fdr wrote

>This article is kind of nonsense.

yep. here's some highlights:

>AI Creativity is Real

despite the author's wordy arguments, AI requires input, and only that input (however jumbled) will be returned on a query. if it were really creative it could dream up something on its own and populate a blank sheet of paper with something novel. AI isn't creative. the people who program it might be.


>3. Comparison: Human Brains vs. AI

despite the title of this section the author never actually makes any comparison. we only get this:

>The present analysis posits that the human brain, in terms of artistic creation, is lacking in two conditions that AI is capable of fulfilling.
>AI decomposes high-dimensional data into lower-dimensional features, known as latent space. [AI is more compact]
>AI can process massive amounts of data in a short time, enabling efficient learning and creation of new data. [AI is more comprehensive]

ftr: the human brain processes a massive amount of data and succeeds in keeping living beings alive while driving/painting/writing code. the list of things human brains can do that AI can't is very long.
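aside: the "latent space" bit in that quote isn't magic, it's just compression. here's a toy numpy sketch (my own made-up data, not from the article) where 64-dimensional "images" secretly come from 3 factors, so a 3-D latent space captures them almost exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 fake "images", each 64-dimensional, secretly generated from 3 factors
factors = rng.normal(size=(100, 3))
mixing = rng.normal(size=(3, 64))
data = factors @ mixing

# center and take the top-3 principal directions: a 3-D "latent space"
centered = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
latent = centered @ vt[:3].T        # encode: 64-D -> 3-D
reconstructed = latent @ vt[:3]     # decode: 3-D -> 64-D

# since only 3 factors generated the data, 3 latent dimensions suffice
err = np.abs(reconstructed - centered).max()
```

that's all "decomposes high-dimensional data into lower-dimensional features" means here. compact, sure, but nothing a human brain would call creative.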


wastedmytwenties t1_j9wkp01 wrote

I'm not trying to be funny here, but I genuinely think this has been written by an AI. Play around with chatgpt and it has the same exact tone.


MoleyWhammoth t1_j9wr2e1 wrote

My thoughts exactly.

Did we just pass the Turing Test?


ibringthehotpockets t1_j9xs6bu wrote

It’s easy to be biased towards detecting that it’s an AI if you read the comments here first. There was very little in the article that made me think “nope, can’t be human” - it’s a post on a Wordpress blog; I wouldn’t hold it to NYTimes-level writing. The thing that stood out most was the jumping around between topics like Oppenheimer and Nietzsche. But still, that to me is just like a high schooler's essay lol.

So to answer your question, yes. I read a lot of books and social media and this passed the test for me. Nothing distinctly unhuman about 90% of this writing. Literally everyone in the comments thinks so too. Unless you’re prompted with “this article is written by AI,” I think most people are gonna lean towards no.


oramirite t1_j9yjjlt wrote

But "to me" isn't the test. Hundreds of other people made the spot that you didn't.


ibringthehotpockets t1_j9zbk7y wrote

I would say a majority of people did pass it, based on the comments, which are certainly more biased towards spotting it (they’re also going to be self-selecting educated/readers). At least at the time of posting my previous comment, there were many upvoted comments discussing it as if it were real. There are certainly more rigorous tests that should be done obviously, but even getting a 50% result posting to a philosophy board would probably make you think that posting it to the general populace could only increase that number.

If 90% of people pass it and 10% don’t, does it pass lol? I mean I would think yes, I don’t see why not.


oramirite t1_j9zcoan wrote

Give me a break, eye-scanning a reddit thread for rough percentages is NEEEEEVER going to be a scientifically sound sampling method. You'd have to actually do the test according to the actual specifications of the test. Anything else you wanna try to bend into being the test .... isn't.

What is being marketed and developed as "AI" is garbage, and we should all rally against it as being a solution for a problem that doesn't exist.


GepardenK t1_j9y9tth wrote

> Did we just pass the Turing Test?

Well, no. Even if everyone thought this article was written by a human that would not pass the Turing Test.

The Turing test requires two participants to engage back and forth in conversation, one being human and the other an AI, and then for a third party to watch the conversation in real time and not be able to distinguish who is human and who is not.

It is a significantly higher standard than simply confusing some algorithmic text for having been written by a person.


oramirite t1_j9yjgdt wrote

No, because it's a trash article that everyone spotted right away


cark t1_j9wtrdj wrote

> despite the authors wordy arguments AI requires input and only that input (however jumbled) will be returned on a query. if it were really creative it could dream up something on its own and populate a blank sheet of paper with something novel. AI isn't creative. the people that program it might be.

I'm not saying AI is there yet, but I have to disagree there. What would be the, presumably magical, property of the human brain that would make it work outside of its past input ? We also are merely jumbling the input to produce our output. Part of this input is innate, part is learned or sensed, and part is randomness. If the creative output is the result of a creative process that takes place in the brain, that computation is still a physical process. That process does take place in the physical realm and as such must be the result of some initial conditions.

That "jumbling" you're dismissively referring to is how we eventually got to be humans in the first place. The highly evolved, and selected for, brain we enjoy is the product of such a process. Not only that, but the brain also works that way too ! Besides the input data I evoked earlier, we're subjected to randomness by the very act of perceiving that same input. We're directed jumbling machines ourselves.

Current AI algorithms and model sizes may not be up to par yet, their creativity remaining quite benign. But this is creativity nonetheless.


oramirite t1_j9yjwic wrote

Even with your decent points in mind - no, it's still not creativity. The complexity and self-generative qualities must be there. I know your point is that it will "get there," but to your point, it is not there yet. So no, it doesn't qualify as creativity, because it's only a system that simulates creativity.

I realize you're still claiming that human creativity is just a rehashed bundle of inputs, but we don't have the complexity in AI to actually perform this action, therefore it is not there yet.


ElleLeonne t1_j9x51rm wrote

> despite the authors wordy arguments AI requires input and only that input (however jumbled) will be returned on a query. if it were really creative it could dream up something on its own and populate a blank sheet of paper with something novel. AI isn't creative. the people that program it might be.

My only significant gripe with this is, isn't this exactly how humans work? Everything we do is slightly derivative, and built on what came before us. All of our output is due to the input from our environment.

This isn't to say anything about your argument. I just feel like AI and humans are only truly separated by superficial boundaries like scale and implementation, and maybe we should consider this as the technology continues to advance.


rhyanin t1_j9xqkn4 wrote

Kinda, but I believe that there’s a difference. I think it works like this. Humans have the ability to understand concepts and derive new things from those concepts. AI, at this point at least, hasn’t. It can only derive from snippets of information without understanding how they connect. Therefore it can not make a truly new, unique thought.


GreenTeaBD t1_j9ynka0 wrote

The human brain, as far as we can tell, requires input to be creative too. It's just our senses. Making creativity into anything else is basically calling it magic, an ability to generate something from nothing.

This does not have to be a person typing prompts for AI, it just is because that's how it's useful. I've joked before about strapping a webcam to a Roomba, running the input through CLIP, and dumping the resulting text into GPT. There's nothing that stops that from working.
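The loop is genuinely that simple. A sketch, with stand-in functions where the real CLIP captioner and GPT calls would go (the function names and outputs are mine, not real library APIs):

```python
# Hypothetical sketch of the webcam -> CLIP -> GPT loop described above.
# `describe_frame` and `continue_text` are stand-ins for a real image
# captioner and a real language model; the point is only that the loop
# needs no human typing prompts anywhere.

def describe_frame(frame):
    # stand-in for an image-to-text model (e.g. a CLIP-based captioner)
    return f"the robot sees {frame}"

def continue_text(prompt):
    # stand-in for a text-generation model
    return prompt + ", and it writes about that."

def roomba_author(frames):
    """Turn a stream of camera frames into generated text, no prompts typed."""
    story = []
    for frame in frames:
        caption = describe_frame(frame)
        story.append(continue_text(caption))
    return story

story = roomba_author(["a chair leg", "a sleeping cat"])
```

Swap the stubs for actual models and the Roomba is "prompting" the language model with its own sensory input.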


Quizik t1_j9z8j2e wrote

Yes, I'm not sure we can speak of creativity when, in fact, everything has to be provided to it - from the datasets the machine is trained on to the infrastructure to the programming to the electricity "it" needs. And the "output" can't even be considered a "new" creation (even if it tricks us by not having "existed prior"), in the sense that I think it would be *derivative* definitionally. It cannot create anything that isn't a parrot-with-additional-steps, a rehash, except wherein we give it I Ching/Magic 8-Ball/Ouija nudges.

The tech is undoubtedly powerful, and the ramifications cannot be overstated, but the anthropomorphization (understandably, pareidolia by another sense) going on, as far as what people are willing to ascribe to "it", is I think often overstated (if I'm allowed to make a generalized and spurious statement).

Is the abacus doing math, if math is "done on it/using it", and in its "end state" it looks like it represents a number?

It's a simulation if we ascribe it any "entity", but since the simulation is being done with language, it is invariably degrees of difficulty harder for most people to "counteract" the illusion ("it says!" [but then, we are talking about a people who casually ascribe that manner of agency to even a collection of unrelated books, ze bible sez]).

So it's like making yourself dizzy and saying bloody Mary three times before a candle-lit mirror, it might seem spooky if your mind is playing tricks on you, but you are alone in the bathroom.


Magikarpeles t1_j9xtz9x wrote

Stability AI “democratised” stable diffusion by releasing their models and allowing open source platforms to use them. The open source solutions are arguably better than the corpo ones like Dalle-2 now.

OpenAI do release older models of GPT but they are vastly less sophisticated than the current ones. Releasing the current models would “democratise” chatGPT but it would also kill their golden goose.


GreenTeaBD t1_j9ymzr3 wrote

There are models that are open source and near GPT-3. The most open are EleutherAI's models, which, though not as big as GPT-3, perform very well. You can go run them right now with some very basic Python.

The problem is less that we don't have open models, it's that we haven't found good ways to run models that big on consumer hardware. We do have open models that are about as big as GPT-3 (the largest BLOOM model), but the minimum requirement in GPUs would set you back about US$100,000.

Stable Diffusion didn't just democratize image gen AI by releasing SD open source, but by releasing it in a way people with normal gaming computers could use it.

We are maybe almost at that point with language models. FlexGen just came out, and if those improvements continue we might get an SD-like moment. But until then it doesn't matter whether GPT-3 is open or not for the vast majority of people.
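Back-of-the-envelope on why (my numbers, not exact quotes from anyone): just *holding* a GPT-3-scale model in fp16 takes hundreds of gigabytes, before any activations or batching.

```python
# Rough arithmetic behind the "about US$100,000 in GPUs" claim.
params = 175e9          # GPT-3 / largest-BLOOM scale parameter count
bytes_per_param = 2     # fp16 weights
weights_gb = params * bytes_per_param / 1e9   # 350 GB just for the weights

a100_gb = 80                                  # memory of one A100 80GB
gpus_needed = -(-weights_gb // a100_gb)       # ceiling division -> 5 GPUs
```

Five A100s is the floor for the weights alone; activations, the KV cache, and any headroom push you to 8 or more, and at roughly $15k+ per card you land in the $100k ballpark. A gaming GPU with 8 to 24 GB isn't even in the conversation, which is why offloading tricks like FlexGen matter.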


Otarih OP t1_ja97gh7 wrote

You got that exactly right. It's sad for us to see that this didn't come across in the article. But that was our way of thinking, i.e. FOSS (free and open source software). We will improve in future articles! Thanks for reading!


oramirite t1_j9yjdc2 wrote

This isn't a conversation about better or worse, how "good" they are is centrally the problem. This is an ethics conversation.


Magikarpeles t1_j9yk6wv wrote

The person I replied to asked what democratisation means in this context and I answered.


JohnLawsCarriage t1_j9wcxns wrote

A big NVIDIA card? You'll need at the very least 8, and even then you're not coming close to something like ChatGPT. The computational power required is eye-watering. Check out this open-source GPT-2 bot that uses a decentralized network of many people's GPUs. I don't know how many GPUs are on the network exactly, but it's more than 8, and look how slow it is. Remember, this is only GPT-2, not GPT-3 like ChatGPT.


ianitic t1_j9wzh8z wrote

That's also just for inference and fine tuning. Even more processing power is required for a full training of the model.


_Bl4ze t1_j9wkpgg wrote

Yeah, but it would probably be way faster than that if only 8 people were using that network at a time!


JohnLawsCarriage t1_j9xchqo wrote

Oh shit, I just found out how many GPUs they used to train this model here. 288 A100 80GB NVIDIA Tensor core GPUs.



Netroth t1_j9y1ngr wrote

It’s been produced by an AI, hence the fluffy logic.


Nederlander1 t1_j9ydmak wrote

It just means make sure the AI is woke and follows the correct narrative


oramirite t1_j9yj8ld wrote

It's written by ChatGPT, of course it's nonsense.


Whiplash17488 t1_j9yt1tx wrote

We still haven’t figured out how to democratize democracy. We have an app for everything. Why can’t I tell my representatives what my opinions are on more nuanced issues? Why can’t I have an app that shows me how the city is spending money?

Most of political discourse is posturing by a political class. They should be teaching constituents about the pros and cons of an argument rather than spend money on showing the lack of virtue in each other. Who cares that the other guy is divorced. I need them to do their jobs.

Ah… I’m getting too old for this. I’m going back to my hobby: making hand-crafted guillotines.


SuperSonik319 t1_j9z5vkf wrote

> nobody is going to “democratize” AI by giving every poor kid in the hood a big NVIDIA card

Colab kinda does, and there are so many tutorials on how to use it all over the internet. i think 90% of everyone who knows how to work AI today started with a Stable Diffusion tutorial on Colab


Otarih OP t1_ja97bpp wrote

In what way is the article nonsense? I'd like some more concrete criticism so we can improve in future articles.
As for your point that we weren't specific enough about what democratization means: we accept that as valid criticism and can go more in depth in future articles. I think our core goal here was first to set the stage for a need for democratization. Thanks for reading!