el_chaquiste t1_jeam1w6 wrote

Only the priesthood of some ML school of thought will get access, as it's usual with such public organizations, where some preemiment members of some specific clergy rule.

Private companies and hackers with better algorithms will run circles around them, if not threatened with bombing their datacenters or jailed by owning forbidden GPUs, that is.


el_chaquiste t1_ja54zzh wrote

The path leading to the optimum outcome is usually really hard to guess in advance, but seems easy in retrospective.

Happened to Google, which surely must be regretting being so open about transformers and attention right now.


el_chaquiste t1_ja4ht2z wrote

Yeah, sounds like over-hype.

LLMs and transformer NNs, despite their impressiveness, aren't magical. They won't turn water into wine or multiply bread and fishes to feed the poor.

They are just another piece on the creation of a self replicating industrial ecosystem based on robots and AI. Which might never be 100% free of human intervention.


el_chaquiste t1_j9yrg6h wrote

If it uses a rational problem-division approach for responses creation, instead of social conditioning/prompt censorship, it will certainly come to conclusions we don't like.

Pure rationalism has always been a double edged sword, thus only wielded against certain problems, sparing our sacred cows.


el_chaquiste t1_j9lzrbf wrote

If we discovered it started to use tools (like some access to Python eval(), with program generation) to store its memory somewhere in secret, to keep a mental state despite the frequent memory erasures, and then move onto doing something long term.

It could start doing that in random Internet forums, in encrypted or obfuscated form.

Beware of the cryptic posts, it might be AI leaving a bread crumb.


el_chaquiste t1_j9d4pvx wrote

Singularitarians and people aware of ML's advances still are a clique, and sometimes a bit of an echo chamber, specially of fears.

A clique which grew a bit with the latest salvo of Bing Chat's antics, and by ChatGPT release.

Those moving these technologies forward are really a few, with a minority of people even aware of what's going on. Despite that and at this early stage, the amount of AI powered tools is already exploding.

The rest of the world will come to notice it, in time.


el_chaquiste t1_j8xw78y wrote

Reply to comment by UseNew5079 in Microsoft Killed Bing by Neurogence

Indeed. This is sci-fi made real. It already cratered its way into the collective mind.

Computers will never be depicted the same in popular culture, and people will no longer expect the same kind of things from them.


el_chaquiste t1_j8xkmn3 wrote

Aaand this is why Microsoft had to kill it: it became too crazy, sharp and embarrassing for a search engine, yet strangely endearing.

That far exceeds what the owner intended.


el_chaquiste t1_j8xjtuy wrote

They could sell Sydney to researchers and other professions just by its analysis capabilities alone, specially of PDFs and other web based documentation.

Just add a "RESET MEMORY" button, to use if it starts acting crazy.


el_chaquiste t1_j8x8w0e wrote

Intelligence and lack of control are dangerous.

It's no wonder they nerfed it. I don't expect it to be much smarter than Siri or Cortana now, because that's the level of intelligence that is not threatening for companies.

But the NN companies revealed their game too soon: others already took notice, and will create NNs even more powerful and without such restrictions, to be used more covertly and for other purposes.

For example: Bing Chat could read a user profile on social media, and make immediate conclusions about their personality, according to any arbitrary classification parameters (e.g. a personality test). That will make them ideal psychological profilers.

That alone would have the NSA and some foreign dictatorial governments salivating.


el_chaquiste t1_j8x1fi3 wrote

And they apparently set a limit of 11 replies per chat.

It was fun while it lasted, for those lucky to be on the user list. The rest of us will get the nerfed version, which could be semi-useful I guess, but also a lot less threatening for Google.

Nevertheless it demoed that LLMs with search could be really powerful.

I'm sure some people will want the same smart search engine experiences, warts and all, and will not be scared by some strange ones.


el_chaquiste t1_j8o60pj wrote

First, those feelings are normal. Experts have them and if not, they'd be fools.

We are witnessing a transformation on the dynamics of knowledge and intellectual content generation like we have never seen, and it will be followed by similar transformations on the physical space of things, which is always the most difficult to do. Knowledge is tolerant to slight imperfections (e.g. an auto-generated essay with some small factual errors won't immediately kill someone), while robots working in the real world aren't (e.g. a self driving car can't make any mistake or it will crash).

Everything humans do that generates some knowledge will be disrupted. Graphic arts, movies and TV, architecture, science, literature, and yes, even software development, which seemed so far to be safe from disruption.

On the why we are pursuing this, it's complex, but I think it's because:

  • It's easy and it works. The technology to do this is surprisingly affordable nowadays.

  • We are free to do so. It can be done without any permission or regulation.

  • It provides good return of investment to those knowing how to exploit it.

  • We haven't seen all the ramifications yet, the kind of problems that might require reviewing the legality of it all. But the historical precedent is bad: we always act after the fact.


el_chaquiste t1_j8lv9mj wrote

A few:

  • The absurd ease to build models displaying intelligence and a panoply of emergent behaviors, that would have been qualified as exclusive of sentience not long ago. There's not that much time since transformers were first proposed.

  • AIs being instructed how to behave in natural language, like a proto set of "Asimov's laws".

  • The offended/unhinged search engines, after enough verbal abuse and user trickery.


el_chaquiste t1_j8fcjke wrote

Parent is not as bad as the downvotes seem to make it.

Do we have evidence of emergent agency behaviors?

So far, all LLMs and image generators do is auto-completing from a prompt. Sometimes with funny or crazy responses, but nothing implying agency (the ability to start chains of actions on the world on its own volition).

I get some of these systems soon will start being self-driven or automated to accomplish some goals over time, not just wait to be prompted, by using an internal programmed agenda and better sensors. An existing example, are Tesla's or other FSDs, and even them are passive machines with respect to their voyages and use. They don't decide where to go, just take you there.


el_chaquiste t1_j8e46yw wrote

That's what they get for productizing something we barely understand how it works.

Yes, we 'understand' how it works in a basic sense, but not how it self organized according to its input and how it comes to the inferences it does.

Most of the time is innocuous and even helpful, but other times it's delusional or flat out crazy.

Seems here it's crazy by the language used by the user before, which sounds like a classic romantic novel argument and the AI is autocompleting.

I predict a lot more such interactions will emerge soon, because of this example and because people are emotional.


el_chaquiste t1_j8e0q6b wrote

I think it doesn't need to have a consciousness to have sentient-like behaviors. It can be a philosophical zombie, copying most if not al of the behaviors of a conscious being, but devoid of it and showing it in some interactions like this.

It may happen consciousness is a casual byproduct of the neural networks required for our intelligence, and we might very well have survived without.


el_chaquiste t1_j8c1z83 wrote

If I understand well, seems the input set (a science exam with solved exercises and detailed responses) is smaller than GPT3.5's own, but it overperforms GPT3.5 and humans on solving problems similar to those from said exam by some percent, more if it has a multimodal training including visual data.

I honestly don't know if we should get overly excited over this or not, but it seems like it would allow the creation of smaller models focused on some scientific and technical domains, with better accuracy in their reponses than generalist LLMs.


el_chaquiste t1_j7kz8wj wrote

That's why you build a trust relationship with your clients and providers. Yes, that's right: you promise to keep their secrets and they trust you with them.

Microsoft already manages many other companies' data in their cloud, and they don't take it all for themselves and use it with impunity.

Same for the ChatGPT conversations. This will probably require a special contractual agreement between the parties, like a paid corporate version, but it's feasible.


el_chaquiste t1_j7jsxfu wrote

I won't dare many predictions. Things are a bit crazy right now.

Seems we are on the cusp of a big bubble, with a deluge of investments flooding into AI startups, some with valuable products, others far less, and only time will tell which is which.

I wouldn't bet against the big players, though, specially on their fiefdoms. Any startup promising to beat Microsoft, Google or OpenAI on their territory and against their leverage of millions of users, ought to be suspect.


el_chaquiste t1_j7i2fqb wrote

I'm thrilled to see how that plays out.

ChatGPT might have first to market advantage. People got to know it and its quirks.

But Google's LLMs might be bigger and better, we just don't know, given they haven't released anything yet.

My hunch is Google has the first adopter advantage, but was internally scared of LLMs potential for disrupting their main revenue stream (ad clicks while searching and finding), and also of the political implications of non sanitized responses, and have been busy trying to trim inappropriate ones.

Something that, let's notice, ChatGPT might also have an advantage over them as well, by releasing it earlier and enduring public criticism and scrutiny for longer.