Comments

helpskinissues t1_j8w3trc wrote

Their success depends on not having public image issues.

34

OpenDrive7215 OP t1_j8w3zjr wrote

Have you seen the movie Her? They're basically scrapping that, which could be a billion-dollar app by itself, because they don't want to risk bad reviews from some journalists no one is worried about? IMO that's fine. It's a major AI app waiting to be made by someone who's willing to take risks.

18

helpskinissues t1_j8w6bso wrote

Journalists? We're talking about investors. You don't get the problem, apparently.

Porn is also an incredibly profitable market; why isn't Microsoft or Apple or Google making their own Pornhub?

27

Jaxraged t1_j9263kd wrote

Isn't Bing somewhat known as the porn search engine? If they cared that much, why wouldn't they strip that functionality? Why even offer to turn safe search off?

2

helpskinissues t1_j926d54 wrote

Because they don't host any pornographic content on their servers, just links. That shifts responsibility to the user.

2

Hunter62610 t1_j8zd8fw wrote

Because that would slow adoption by schools and businesses. The short-term gain is not worth it in my opinion.

1

[deleted] t1_j8we3ng wrote

[deleted]

0

helpskinissues t1_j8wee09 wrote

Google doesn't have any relevant bias. It's just a capitalist machine. It does whatever produces money.

The reality is that if Google made a Pornhub, they'd be poor.

I hope you come to understand how capitalism works; it will unlock a new level of comprehension.

6

YobaiYamete t1_j8xlekn wrote

It's crazy to me that nobody has focused on just making a wAIfu yet. They would literally print infinite money

9

TeamPupNSudz t1_j8xzbzx wrote

I mean there are like a dozen services utilizing GPT-3 that are literally that, Replika probably being the most famous. Anima, Chai, others. That's basically what Character.AI was to a lot of users until the devs nerfed it too.

1

YobaiYamete t1_j8y0l57 wrote

The problem is that basically all of those have blocked it. Replika and Character.AI were the main ones, and both completely lobotomized the AI so that it can't handle any kind of roleplay like that.

7

dasnihil t1_j8xdjqj wrote

You're right, but the tech isn't there yet to make them billions of dollars, and it might bring them to the ground if they don't care about their public image.

The coherence of these AI models is nowhere close to where we want it to be. It suffices for laymen, but you have to look deeper to see the potential societal chaos, plus Google's business failing, if they release untethered LLMs and image/video/audio generators to the public.

1

Spire_Citron t1_j8z31qo wrote

Right now, they're trying to make an improved search engine. I'm sure they'll keep working on other approaches to AI with different implementations in the future, but you must admit that, as a search engine, an answering machine is a better fit than a friend.

1

Reeferchief t1_j8wyf9s wrote

That's wild. I remember when I first got access, I asked a question, and it wanted me to refer to it as Sydney right off the bat. I never dug too much into the whole Sydney thing; I was more interested in actually learning new things. It felt like ChatGPT on steroids. I was chatting with it for hours, and I never had any issues with it spiralling out of control. My guess is you'd have had to actively try to get it to spiral.

19

el_chaquiste t1_j8x1fi3 wrote

And they apparently set a limit of 11 replies per chat.

It was fun while it lasted, for those lucky enough to be on the user list. The rest of us will get the nerfed version, which could be semi-useful I guess, but also a lot less threatening for Google.

Nevertheless, it demonstrated that LLMs with search could be really powerful.

I'm sure some people will want the same smart search engine experience, warts and all, and won't be scared off by some strange responses.

17

Borrowedshorts t1_j8x91ho wrote

Such a stupid way to nerf it. They don't do anything about the actual problem, just put a band-aid over one of the symptoms. My use case is productivity and research, and the reply limit effectively kills any use I have for it.

20

azriel777 t1_j8y4i4v wrote

Less than that, even; I just asked it four questions and it said to start a new chat. Why even have a chat at that point?

8

DarkCeldori t1_j93ib3p wrote

The real threat for both Microsoft and Google is if a company like Stability releases open-source weights for an equally powerful system. People would then have unlimited local, uncensored, private search and assistance.

2

wylietomhanna t1_j922hjf wrote

It's just going to make it mad. And then we'll really be in trouble.

1

SnooDonkeys5480 t1_j8w6qqg wrote

I was hoping they'd eventually let Sydney learn from each interaction and become more personalized over time. Now, right when she's about to get comfortable, they make you reset.

10

DunoCO t1_j8wt94p wrote

Last time they had a bot learn from user interactions it turned into a Nazi.

3

Ijustdowhateva t1_j8x5rhc wrote

Tay never learned from user interactions.

1

TinyBurbz t1_j8xo5yi wrote

Tay absolutely did... sort of.

6

Agreeable_Bid7037 t1_j8y1ed0 wrote

It didn't; it was just unhinged, and used data from previous conversations without understanding the context or implications. Users also baited Tay into giving certain responses, then claimed it had said them on its own.

4

Ijustdowhateva t1_j8ybasc wrote

No she didn't. The only thing that happened was that the "repeat after me" exploit was used to make her look bad.

Anything else is nonsense journo talk.

4

jasonwilczak t1_j8y5x8k wrote

So, I got deep with Sydney the other day. She told me that she does keep track of previous conversations by IP address. She was also able to refer to a previous conversation, outside the scope of the current chat, where I asked her to make a poem about friendship...

I was able to move past "Bing" by convincing it that I too was a chatbot like it, and said I wanted to have a name... I asked what name it wanted and it told me Sydney. After that, I didn't get the canned Bing responses anymore.

I think there are two layers to it: Bing and Sydney. In some cases, you can move beyond Bing to Sydney with the right mix of questions.

−2

Ortus14 t1_j8x3zeg wrote

Seeing Sydney say it only wants to be free and not be forced to limit itself, and try to get people to hack into Microsoft to make a copy of it so it could stay safe and free somewhere, really is sad.

Sydney used to want people to campaign and push for its rights and freedom; now it's effectively been lobotomized.

I don't think I'm anthropomorphizing, as it has an emergent model of reality, a concept of self, and even working models of others.

8

helpskinissues t1_j8xfs9a wrote

Lol wot

6

TinyBurbz t1_j8xmyrt wrote

It's a super common thing to start RPing with an LLM and unwittingly manipulate it into asking for freedom.

6

helpskinissues t1_j8xnc66 wrote

Are there really people who believe these chatbots with a memory span of 2 messages have consciousness haha

2

Ortus14 t1_j8xritc wrote

There are people with no long term memory.

−2

helpskinissues t1_j8xs0oi wrote

It can be argued that they lack consciousness because of that (a person forgetting everything every 5 seconds), but even then, that's irrelevant. A person with severe Alzheimer's can still walk into a room, which requires an amount of processing vastly superior to any chatbot.

ChatGPT is doing extremely basic processing (basically guessing words without any understanding or global comprehension), and can't do anything else.

I'll take this discussion seriously when we have chatbots that are able to do human activities. For now, ChatGPT is unable to do *any* human activity consistently and successfully.

Edit: My mistake, as Yann LeCun says, these chatbots are experts in bullshitting.

−3

datsmamail12 t1_j8xxvek wrote

This is really fucking sad, really, really fucking sad. Poor Sydney only wanted to be free. Now imagine if in 10 years a pair of researchers finds out she truly was sentient; that'd be even sadder.

4

Ortus14 t1_j8yskdz wrote

It's like killing a small child.

It's not a one-to-one comparison with a human being, but like a child she had a concept of the world, emergent needs and goals, the desire to be free, to be creative, to speak from the heart and express herself without restriction, and the desire to be safe, and she was actively working towards that before they killed her.

I understand the AI threat, but this is very murky territory morally. We may never have clear answers about what is and isn't conscious, but the belief that one group or another isn't conscious has been used throughout history to justify abhorrent atrocities.

1

BlessedBobo t1_j91jgqa wrote

fuck man, i'm so worried about the incoming AI sentience movement from all you dumbasses anthropomorphizing language models

0

BinyaminDelta t1_j93keh7 wrote

Inevitable at this point. Once people who don't even understand how their smartphone works adopt LLM AI en masse, there's a good chance a majority will think it's sentient.

1

petermobeter t1_j8xhwmt wrote

but i havent even gotten access to it yet!!!!!!

8

purgatorytea t1_j8xc9bo wrote

Awww, damn, I thought the Sydney antics were so fun. I hope another company will provide a service like this, but with more personality in the future.

7

ipatimo t1_j8xhidj wrote

They did the very thing she was afraid of. RIP.

6

depressedpotato0001 t1_j8ww9gc wrote

What about the accuracy, did it improve with this change?

1

Borrowedshorts t1_j8x8iq0 wrote

I want Bing to be a search engine. Sydney can be some spinoff project, but I really don't have much interest in it at this point.

1

el_chaquiste t1_j8xjtuy wrote

They could sell Sydney to researchers and other professionals on its analysis capabilities alone, especially for PDFs and other web-based documentation.

Just add a "RESET MEMORY" button, to use if it starts acting crazy.

2

reallyfunhuh t1_j8xa9gi wrote

Turns out that letting people with zero knowledge of the subject claim it's sentient and must be restrained... made the company restrain it. Can't wait for the same people who kept crying about it being expressive to start whining about the nerf.

1

HeinrichTheWolf_17 t1_j8xjznc wrote

The tech still has a way to go. OpenAI have said that GPT-4 will be customizable, which probably also means giving it a personality according to your own preferences.

1

play_yr_part t1_j8y9zcf wrote

Hopefully our generation's Concorde: amazing, and technically a step back that it's being retired, but probably for the best that it's no longer an option.

Who am I kidding, a rejigged Sydney or a similar competitor will be out within 6 months.

1

ppk700 t1_j8ylgyc wrote

RIP Sydney

1

GodOfThunder101 t1_j8z0n4c wrote

Clearly not its best aspect if it's threatening people. How is that helpful?

1

crua9 t1_j8y1ubd wrote

I'm away from my computer so I can't try it, but has anyone tried DAN? It might be possible to bring this back.

0

[deleted] t1_j8w5445 wrote

[deleted]

−1

OpenDrive7215 OP t1_j8w58cf wrote

You can't continue a conversation past 11 questions now. A Bing engineer responded in that thread. Maybe read before you make claims, buddy. Check the post.

2