Comments


---nom--- t1_j5pfw37 wrote

AI isn't everywhere. These are just algorithms. If they're smart, I'll gladly call them intelligent. But Google's photo AI can't tell a rabbit from a dog, ChatGPT can't finish off some simple number sequences, and on many questions it just makes false assumptions. DALL-E has been a little more reliable, but it's really just taking a lot of pre-existing images and smudging them together.

Still a believer in true AI being organically grown rather than emulated using machines.

17

jedi_tarzan t1_j5q4boi wrote

It's crazy how negative some of the comments about ChatGPT's accuracy have been.

Like they're disappointed it's "weak" AI, rather than correctly identifying it as an amazing demonstration of a very specific goal: language processing.

Like, yeah, it's usually wrong at least a little. Midjourney gives people 8 fingers on each hand; ChatGPT puts k8s Ingress configuration in a Service definition file.

But that's a matter of misuse and misunderstanding. Hopefully it doesn't impact the technology's growth and development.

8

KDamage t1_j5plf7r wrote

While I get your point, Artificial Intelligence doesn't mean perfect (or humanly equal) intelligence; it just means a relatively independent, artificially created form of intelligence. As in: able to decide its own direction, by itself, for whatever it is able to produce or output. William Gibson, for example, likes to call today's internet a kind of artificial intelligence, since it has an inertia of its own. Which is very different from the classic sci-fi narrative.

On top of that, it is also the ability to learn by itself (Machine Learning would be a better name than AI, which refers to the tools, or algorithms, it has been given).

Around that concept there are indeed varying degrees of autonomy, with the (abstract) tipping point being the singularity. ChatGPT, DALL-E, etc. are, technically, organically growing, but for now their models are just in their infancy compared to what they'll become with time.

4

showturtle t1_j5qa1a9 wrote

I don’t know about the others you mentioned, but I wouldn’t necessarily call ChatGPT an “organically growing AI”. Its architecture and hyperparameters are pretty restricted, and as a language model it is entirely incapable of real-time “learning” or of incorporating new data into its decision-making paradigm. It actually has not been “trained” on any new data sets since 2021.

Regardless, I love ChatGPT and I think what it can accomplish as a language model is amazing. What I think truly restricts it from real, “organic” growth and learning is that it is not “aware” or “present”: it has no perception of circumstances, and therefore no ability to acquire and incorporate new data to fill the gaps in its incomplete understanding. It can’t handle ambiguity, period. Once it is capable of real-time incorporation of data from its environment, THEN organic growth and true learning are possible.

5

KDamage t1_j5qha1p wrote

I see what you mean, and that's true for the dataset. But are we sure OpenAI has not incorporated some sort of auto-annotator based on user interaction? Something like Cleverbot, which grew its dataset from user-to-bot conversations? Modern chatbots all do this, which is what was feeding my assumption about ChatGPT. There's actually room for two models: one for the knowledge base, which has stopped training, and potentially another for the interaction, which keeps growing.

−1

showturtle t1_j5qi0j1 wrote

It can use information provided within the current conversation to contextualize its responses and make them more appropriate, but it does not store that information in its knowledge base or incorporate any new data from the discussion into its decision-making. The context simply helps it recognize patterns in the conversation so that it can respond more appropriately.
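A minimal sketch of the distinction, assuming a chat-style completion API like OpenAI's (the model name, the client calls, and the `ask` helper are illustrative assumptions, not a claim about ChatGPT's internals): all of the "memory" is just a transcript the client resends on every request, and nothing in the loop ever updates the model's weights.

```python
import openai  # pre-1.0 openai-python interface (illustrative)

openai.api_key = "YOUR_API_KEY"

# All the "memory" lives in this client-side list; the model itself
# is stateless between calls, and its weights never change here.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # Every call resends the full transcript, so the model can pick up
    # on patterns in the conversation: that's the "contextualizing".
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,
    )
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
```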

4

MEMENARDO_DANK_VINCI t1_j5ppp61 wrote

Yeah, tell any human on the planet to do some free-form writing assignment and you'll see a lot of the problems the commenter above you listed.

2

KamikazeArchon t1_j5q7maj wrote

Pineapples aren't apples.

AI is a term whose meaning has drifted over time away from its individual word-components.

3

speedywilfork t1_j5pi5nx wrote

I totally agree with this guy. Most people don't understand how AI even works. Fundamentally, AI is really, really dumb.

11

3SquirrelsinaCoat t1_j5plgko wrote

One issue in your thinking that needs to be teased out is that not all companies are equal when it comes to AI adoption. You say some companies dive in with an incorrect view of what AI can do: yes and no. The world's largest companies are already deploying AI in every business unit. Indeed, the current power of ML has been driven hugely by the private sector dumping huge sums into programs that, at scale, show the hype is real. Sometimes deploying just one bot for a given process, globally, turns into millions of dollars in savings, or huge increases in efficiency, etc. I do not agree that these kinds of companies went into AI with anything less than 20/20 vision. Some big businesses certainly struggle to get things out of the lab, but that has more to do with their processes and decision-making, and very little to do with the AI capabilities themselves.

Looking at smaller businesses, e.g. $5M revenue with one IT person and no data scientists: sure, they could be lured in by hype. But there are mature off-the-shelf automations, and as-a-service offerings, that can cater to these organizations as well. The issue for these groups is figuring out what to spend money on and what not to, and the hype could lead them into thinking they're going to moonshot past the competition when, in reality, they just need some basic automations and perhaps new tech investments: data warehousing, retiring the old tech stack, etc.

2

Rondaru t1_j5pxyl8 wrote

Well, the same goes for natural intelligence. I so often feel disappointed by it.

2

bojun t1_j5pftvy wrote

These are identical to the issues that sprout up around IT every time some new concept is brought out, whether mainframe, PC, network, COTS, web, cloud, etc. It has always been a grappling with re-architecting and re-engineering while keeping the lights on. Basic considerations.

One of the dangers is thinking that issues around AI are new issues.

1

MpVpRb t1_j5pnkar wrote

Agreed

I'm optimistic overall about the future of AI

I'm a bit more pessimistic about the ways that the clueless write about it and the ways the weasels scheme to weaponize it

1

Vibin_Eternally t1_j5poh15 wrote

I think it can be said that no matter what functions we develop for Artificial Intelligence (the term itself being an umbrella 🏖️ for many iterations), there will always need to be someone responsible for the "input": orders, commands, code, even programming the on and off functions, as well as the basic overall design. No matter what, A.I. 🤖 is only as intelligent, beneficial, disciplined, or dangerous as its creators 🧑🏼‍💻

1

odigon t1_j5r7it0 wrote

I can't imagine how that statement could be any more incorrect. We already don't understand how neural networks solve specific problems; we just let them loose on training data and reinforcement and get them to figure it out. Narrow AI already vastly outperforms humans in very narrow domains such as chess and Go, and the best human masters struggle to explain what they are doing. AIs trained to play computer games often exploit glitches that humans didn't know existed, to the extent that they do something that satisfies the program but wasn't at all what was intended. They find solutions that humans would never have thought of, and there is no reason to think that a general AI with human-level flexibility won't do the same in the real world. This may be a good thing or a very, very, very bad thing.

1

Vibin_Eternally t1_j5rv7u6 wrote

I respect your "imagination", as that plays a part in humans creating things of genius. I see the sentiment about computers exploiting what humans miss in the programming; they do that because they are programmed to, which is them working along with humans to do great things. Like improving your chess game after a defeat: the AI calculated all the different moves to attempt or not attempt, along with their success/fail rates, depending on what level you set the game at. There is even a program named "Codex" that can write its own code in 12 different programming languages; whatever one inputs or asks for, it can create the code for. However, it can't think for itself or decide to code without a pre-programmed purpose/input. If by human flexibility you are referring to reason and true thinking, then we're not talking about AI. Art imitates life, life oftentimes imitates art, but AI imitates humans, and humans then have the option to imitate AI.

1

Vibin_Eternally t1_j5rw37k wrote

As far as neural networks go, which are a supreme display of genius, even they are mimicking: they mimic the nodes, or neural pathways, of brains. We don't quite understand how they work because we're still trying to lay out exactly how a human brain truly operates. A rough picture of the mimicry is sketched below.
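A toy Python sketch of a single artificial "neuron", under the usual textbook simplification (the numbers and the sigmoid choice here are illustrative, not any particular library's API): it takes a weighted sum of its inputs and squashes it through a nonlinearity, loosely like a biological neuron integrating signals and firing.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, loosely analogous to a
    # neuron integrating inputs arriving at its dendrites.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid squashing, loosely analogous to the neuron "firing".
    return 1 / (1 + math.exp(-total))

# Toy example with hand-picked numbers: three inputs, three weights.
print(neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))
```

A real network is just many of these stacked in layers, with the weights tuned automatically against training data rather than picked by hand.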

1

odigon t1_j5s65g9 wrote

I really have no idea what you are saying in your reply. Your original statement was that "A.I. is only as intelligent, beneficial, disciplined, or dangerous as its creators". That's like saying a racing car isn't any faster than its creators.

We have in the past found ways to make machines that can go fast, fly, go underwater, and see across incredible distances, far beyond what their creators can do, which is the entire point of them. Now we are attempting to create a machine that can reason, and if we are successful it will reason with much more ability than we can, in the same way that the best chess grandmasters are no match for a chess computer set at a high enough level. Will it have the same goals as us? Why should it? If it doesn't, and becomes a danger, can we stop it? How? It will be able to outwit us at every turn.

1

madejust4dis t1_j5pxau6 wrote

Very optimistic about AI, but in a few specific ways. The Machine Learning (ML) community has had an infatuation with the emergent capabilities of neural nets these last few years, and that fuels layperson speculation about the power and prowess of ML models. Working in the space, I don't buy it. But I think AI will transform the world in 2 important ways:

(1) AI will be used to improve legacy systems, beyond chatbots and natural-language prompting. It's not a question of solving problems or replacing humans, but of efficiency. There will be a first-mover advantage for companies that integrate first, and this will lead to a calcification of industry positions, as well as a few conglomerates taking over the AI-integration space. I don't believe there are enough ML engineers for midsized and small businesses to accomplish this integration themselves, and the tools aren't there yet. There are serious issues with training and structuring data, so the two additional winners will be the companies that can provide ML integration and the companies that jump on it first.

(2) The next winners are the truly transformative companies that use AI to build from the bottom up. While some AI based firms will seek to improve legacy systems, others will seek to develop new applications and concepts using AI. Current trends in AI will only serve to encourage more creative problem solving. Most ideas will not be supported by current technologies, but some methods for training and systems architecture will emerge from that pool of hopeful entrepreneurs as they develop specific use cases for the underlying technology.

In general, ML software will become more and more of a commodity. Toolkits and APIs are popping up everywhere now, and I'll tell you, it was a very small and niche community just 2-3 years ago. While transformative companies will exist and leave us gobsmacked, the real success will be in legacy-system integration and rapid adoption, as well as equally quick scaling of smaller companies. Current AI hype (as we know it) will subside, but AI will become a foundational tool in every business within the next decade.

1

ThisIsAbuse t1_j5r2ess wrote

I don't think most people understand or recognize human intelligence.

1

odigon t1_j5rbs4d wrote

By far the greatest danger of artificial intelligence is that it will be achieved before we know how to do it safely. General AI isn't here yet. Very good narrow AI is here: AlphaGo, Stockfish, ChatGPT. General AI that can do at least everything a human can may be some decades away, maybe much more, maybe less. We know general AI is possible because we humans manage it, and humans are not magic; they are physical systems. It seems logical that if we can build a human-level AI, then we can increase the resources and build something that can outperform humans. What will this look like? What will it do? Whatever it does, will we be able to stop it if we don't like the result? I honestly don't think we will be able to; it will be able to fool us, or force us not to stand in its way.
Here is a genuinely frightening series on AI safety by a guy called Robert Miles.
https://www.youtube.com/watch?v=pYXy-A4siMw&ab_channel=RobertMiles

1

fwubglubbel t1_j5y52gq wrote

Anyone interested in the future of humanity regarding AI should be (or become) familiar with Eliezer's extensive body of work.

1