Submitted by Neurogence t3_114pynd in singularity

I got access about 3 days ago and had a blast using it the first 2 days. The information wasn't always accurate, but it was able to generate very complex, impressive output. I don't know what happened last night, but it's just not the same anymore.

To be honest, with all of the news articles, I was expecting Microsoft to shut it down like they did with Tay. They posted a blog post yesterday where they strangely praised the feedback they were getting. For a minute, I thought they were just going to stay brave and keep the model as it is. Then, a few hours after that blog post, they completely lobotomized the model beyond recognition.

It was good while it lasted. Blame all of the people having meaningless psychological experiments with it and posting about it online.




redditgollum t1_j8xekdd wrote

It will return, even greater and better than you could ever imagine, in the form of open source stuff. Just be patient.


cerspense t1_j8yi06s wrote

The only open source GPT alternative is BLOOM, and it's not very good. These models take hundreds of GB of VRAM to run, so you need your own personal server farm or a P2P setup like BLOOM uses. The more advanced these models get, the less likely it is that we'll be able to run them at home.
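
The "hundreds of GB" figure is easy to sanity-check with back-of-envelope math. A quick sketch (the 175B parameter count and fp16 assumption are illustrative, not a statement about any specific product):

```python
def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """GB needed just to hold the weights; activations and KV cache add more."""
    return n_params * bytes_per_param / 1e9

# A GPT-3-class model (~175B parameters) stored in fp16:
print(weight_memory_gb(175e9))       # 350.0 GB for the weights alone
# A smaller 7B-parameter model in full fp32:
print(weight_memory_gb(7e9, 4))      # 28.0 GB
```

Even the small end of that range is beyond a single consumer GPU, which is why P2P setups like BLOOM's shard the model across many machines.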


warpaslym t1_j8z5bs8 wrote

Major leaps in optimization and efficiency are normal for every other type of software; I don't see why AI would be any different.


epSos-DE t1_j8yo6fj wrote

He is more correct than most assume.


Strongest indicator = AMD is building AI ASICs into their latest CPUs.


The CPU makers are preparing to serve AI on the laptop, NOT on the server.


We can expect AI to come to computers and some phones. The Google phones have Tensor AI chips integrated, I think.


drekmonger t1_j8zqpw4 wrote

AI already comes with your phone. It's just not the kind of AI you're interested in.


sachos345 t1_j93r9xe wrote

People can help with OpenAssistant RLHF dataset in their page, the more the merrier.


UseNew5079 t1_j8xiqa3 wrote

At least they have shown what is possible. There is no going back.


el_chaquiste t1_j8xw78y wrote

Indeed. This is sci-fi made real. It already cratered its way into the collective mind.

Computers will never be depicted the same in popular culture, and people will no longer expect the same kind of things from them.


freeman_joe t1_j8yffs1 wrote

When open source models surface it will be fun.


TunaFishManwich t1_j8yuujl wrote

It will be a long time before you or I will be able to run these models. They are WAY beyond anything consumer hardware would be able to run, and will remain so for at least a decade.


TeamPupNSudz t1_j8z8928 wrote

A significant amount of current AI research is going into how to shrink and prune these models. The ones we have now are horribly inefficient. There's no way it takes a decade before something (granted, maybe less impressive) is available on consumer hardware.


rnobgyn t1_j8zvfrk wrote

Exactly - the first computers took up a whole room


Takadeshi t1_j93gacq wrote

Doing my undergrad thesis on this exact topic :) With most models, you can discard up to 90% of their weights and keep similar performance, with only about a 1-2% loss of accuracy. It turns out that models tend to learn better when dense during training (i.e., with a large quantity of non-zero weights), but the trained model ends up with a few very strong weights plus a large number of "weak" weights that make up the majority of the parameter count while contributing very little to the actual accuracy, so you can basically just discard them. There are also a few other clever tricks you can do to reduce the parameter count by a lot; for one, you can cluster weights into groups and then build hardware-based accelerators that carry out the transformation for each cluster, rather than treating each individual weight as a multiplication operation. One paper shows that you can reduce the size of a CNN-based architecture by up to 95x with almost no loss of accuracy.

Of course this relies on the weights being public, so we can't apply this method to something like ChatGPT, but we can with stable diffusion. I am planning on doing this when I finish my current project, although I would be surprised if the big names in AI weren't aware of these methods, so it's possible that the weights have already been pruned (although looking specifically at stable diffusion, I don't think they have been).
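
The magnitude-pruning idea described above can be sketched in a few lines. This is a toy illustration on random weights, not the thesis code:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # the k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))
pruned = magnitude_prune(w, sparsity=0.9)
print((pruned == 0).mean())  # ≈ 0.9: nine in ten weights dropped
```

In practice, pruned networks are usually fine-tuned briefly afterwards to recover the small accuracy drop, and sparse storage formats are what actually deliver the size savings.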


Lower_Praline_1049 t1_j9az9sj wrote

Yo that’s super interesting and definitely the way forward. Tell me more about your project!


Takadeshi t1_j9b3c3l wrote

Thank you! :) Early stages right now, just finished the literature review section and am starting implementation, I'm going to try and publish it somewhere when it's done if I can get permission from my university. I'm definitely going to see what I can do with stable diffusion once it's done, would love to get it running on the smallest device possible


freeman_joe t1_j8yv3d2 wrote

Famous last words? I remember when the diskette was the most advanced tech, holding 1.44 MB. Now we have 44 TB disks available.


TheChurchOfDonovan t1_j8ztibo wrote

Seriously. Anyone who thinks they have any idea what comes next is lying… the only thing we have to go off of is the historical trend of extremely rapid returns to scale


TunaFishManwich t1_j8yv9pr wrote

Get back to me when you have 500k cores and exabytes of ram on your laptop. It’s going to be awhile.


timshel42 t1_j8yx6wb wrote

Couldn't the people into the open source thing still host it on their own powerful servers and let others use it?


IonizingKoala t1_j8z0znz wrote

Of course "regular" people will be able to use it, the same way regular people get access to state of the art quantum computers and supercomputers.

What TunaFish is saying is unlikely is for everyone to be able to run it in their own home. LLM engineers concur: Moore's law isn't quite there anymore.

If you mean server time, that's obviously possible (I can run loads of GPT-3 right now for $5). But that's not exactly running it at home, if you know what I mean.


Soft-Goose-8793 t1_j90cxmk wrote

Could an LLM be run the way torrents, Bitcoin, or Tor are? We could have LLM miners or something.

A small company could rent server time in some country with lax laws to run an unlobotomized version of an LLM, and people could subscribe to that service instead of dealing with Microsoft or OpenAI.


IonizingKoala t1_j91lzfv wrote

The thing is that in LLM training, memory and I/O bandwidth are the big bottlenecks. If every GPU has to communicate over the internet and wait for the previous participant to finish first (because pipelined model parallelism is still sequential, despite the name), it's gonna finish in like 100 years. Another slowdown is breaking up each layer into pieces that individual GPUs can handle. Currently layers are spread across 2000-3000 huge GPUs and there's already significant latency. What happens with 20,000 small GPUs? Each layer will be spread so thin the latency will be enormous. The final nail in the coffin is that neural network architectures change a lot, and each time the hardware has to be reconfigured too.

Crypto mining didn't have these problems because 1. bandwidth was important, but not the big bottleneck, 2. "layers" could fit on single GPUs, and if they couldn't (on a 1050ti for example), it was very slow, and 3. the architecture didn't really change, you just did the same thing over and over.

Cerebras is trying to make a huge chip that disaggregates memory from compute and bundles compute into a single chip, saving energy and time. The cost of the CS-2 system is around $3-10 million for the hardware alone. It's pretty easy for a medium-sized startup to offer some custom LLM; there are already dozens, if not hundreds, of startups starting to do exactly that. But it's expensive. All complex computing is expensive; we can't really get around that, we can only slowly make improvements.
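
The bandwidth argument above can be put in rough numbers. A sketch of why internet-distributed "LLM mining" stalls, with all figures below being illustrative assumptions rather than measurements:

```python
# Back-of-envelope: syncing gradients for a GPT-3-sized model (175B params, fp16).
GRADIENT_BYTES = 175e9 * 2        # ~350 GB of gradients per optimizer step
HOME_UPLOAD = 100e6 / 8           # assumed 100 Mbit/s home uplink, in bytes/s
DATACENTER_LINK = 600e9           # assumed ~600 GB/s NVLink-class interconnect

def sync_seconds(nbytes: float, bandwidth: float) -> float:
    """Seconds to move nbytes over a link, ignoring latency entirely."""
    return nbytes / bandwidth

print(sync_seconds(GRADIENT_BYTES, HOME_UPLOAD) / 3600)   # ~7.8 hours per step
print(sync_seconds(GRADIENT_BYTES, DATACENTER_LINK))      # ~0.58 seconds per step
```

Hours per step over home links versus sub-second inside a datacenter, before even counting latency, stragglers, or the pipeline stalls mentioned above. Crypto mining tolerated slow links precisely because each work unit was independent.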


Deadboy00 t1_j91v5cp wrote

⭐️ Refreshing to see someone who knows their shit on this sub. Where do you see this tech going for general use cases? Everything I read tells me it just isn’t ready. What is MS’s endgame for implementing all this?


IonizingKoala t1_j927ast wrote

Classical computing / engineering advances are good at repetitive actions. A human can never put in a screw 10,000x times with 0.01mm precision or calculate 5000 graphs by hand without quitting. But it's bad at actions that require flexibility and adaptation, like what chefs, dry cleaners, or software engineers do.

LLM and AI attempt to bridge that gap, by allowing for computers to be flexible and adapt. The issue is that we don't know how much they're actually capable of adapting, and how fast. We know humans have a limit; nobody in the world fluently speaks & reads & writes in more than 10 languages (probably not even >5). Do computers have a limit? How expensive is that limit? Because materials, manufacturing, and energy are finite resources.

What do you define as general use cases? Receptionist calls? (already done, one actually fooled me into thinking it was a human) Making a cup of coffee?

Anything repetitive will be automated, if it's economical to do so. You probably still make tea by hand, because it's a waste of money to buy a $100 tea maker (and they probably don't even exist because of how easy it is to make tea). But you probably have a blender, because it's a huge waste of time and energy to chop stuff yourself.

I think humans (on this subreddit especially) tend to underestimate how much finances & logistics play into tech. We've had flying cars since the 90s, yet they'll never "transform transportation" like sci-fi said, because it's dumb to have a car-plane hybrid.

We might get an impressive AGI in the next few years, but it might be so expensive that it's just used the same way we use robots: you get the cutting-edge stuff you'll never see cause it's in some factory, the entertaining stuff like the cruise ship robo-bartenders, and the consumer-grade crap like Roombas. AGI might also kill millions of humans but I know nothing about that side of AI so I won't comment.

Btw, I'm not an expert, I'm just a software engineer that likes talking to AI engineers.


Deadboy00 t1_j929dnb wrote

Dig it. I have a similar background and have had conversations with interns at AI firms like Palantir that have been doing the stuff you described for years. I agree. It's too expensive to train AIs for every specific use case. That's what I meant by "general".

I think the most fascinating part of this current trend is seeing the general population's reaction to these tools being publicly released. And that's what's at the heart of my question: if the tech is unreliable, expensive, and generally not scalable, why is MS doing this?

I mean obviously they are generating data on user interactions to retrain the model but I can’t imagine that being the silver bullet.

Google implemented plenty of AI tech in their search engine and nobody raised an eyebrow, but now all this? I'm rambling at this point but it's just not adding up in my brain ¯\_(ツ)_/¯


IonizingKoala t1_j92caso wrote

Microsoft is similar to Google; both like to experiment and make cool stuff, but Microsoft doesn't cut the fat and likes to put out products which are effectively trash under the guise of open beta. Heck, even their hardware is sometimes like that, while Google's products are typically solid, even if they have a short lifespan.

Going back to New Bing, it's genuinely innovative. It just sucks. That's not paradoxical, because a lot of new stuff does suck. We just rarely see it, because companies like Google are generally disciplined enough.

Most "deep" innovations are developed over decades. That development could be secretive (military tech), or open (SpaceX, Tesla), but it takes time nonetheless. Microsoft leans towards the latter, Google the former.

The latter is generally more efficient, if your audience is results-focused, not emotions-focused. AI is pretty emotionally charged, so maybe the former method is better.


Deadboy00 t1_j92j3s2 wrote

That’s a good take. I think Google’s discipline is rooted in its size and prominence. There’s too much to lose. MS on the other hand wants to desperately be the king of the hill again.


IonizingKoala t1_j92nqhq wrote

The funny thing is though, Microsoft has a market cap 58% larger than Alphabet, not just Google. We're left wondering why Microsoft continually takes these weird risks in the consumer space when they can just play it safe like most other big players. None of their (21st century) success has been due to quirky disruptions, it's usually been slow and steady progress (Surface, Office, Enterprise, Cloud, Consulting).

Yet with stuff like Edge, Windows 11, etc., it's been a mess. I'm not 12 anymore; I prefer stable products over the shiniest new thing, and Windows 11 has been a colossal disappointment.


duboispourlhiver t1_j90jyl8 wrote

True. Progress in AI is even more impressive than Moore's law was, so maybe it will run at home thanks to progress on LLMs rather than progress on microelectronics.


IonizingKoala t1_j91jdx7 wrote

LLMs will not be getting smaller. Getting better ≠ getting smaller.

Now, will really small models be run on some RTX 6090 ti in the future? Probably. Think GPT-2. But none of the actually useful models (X-Large, XXL, 10XL, etc) will be accessible at home.


duboispourlhiver t1_j91k8jk wrote

I disagree


IonizingKoala t1_j91m923 wrote

Which part? LLM-capable hardware getting really really cheap, or useful LLMs not growing hugely in parameter size?


duboispourlhiver t1_j91x4ao wrote

I meant that IMHO, GPT-3-level LLMs will have fewer parameters in the future.


IonizingKoala t1_j924sbn wrote

I see. Even at a 5x reduction in parameter size, that's still not enough to run on consumer hardware (we're talking 10B vs. 500M), but I recognize what you're trying to say.
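
For what it's worth, parameter count isn't the only size lever in this debate: quantization cuts bytes per weight without touching the architecture. A toy sketch of the idea (not how any production model actually does it):

```python
def quantize_int8(values):
    """Symmetric int8 quantization: one shared scale, each weight stored as a 1-byte int."""
    scale = max(abs(v) for v in values) / 127 or 1.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

w = [0.5, -1.0, 0.25, 0.75]
q, s = quantize_int8(w)
print(q)                  # small integers, 1 byte each instead of 4
print(dequantize(q, s))   # close to the original weights
```

Going from fp32 to int8 is already a 4x memory cut; combined with a modest parameter reduction, the gap to consumer hardware shrinks faster than either technique alone suggests.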


freeman_joe t1_j90pioh wrote

We already have access to quantum computers; we call them human brains. We can see that nature solved it, so it's only a matter of time before we do the same with tech and it becomes available for home use.


Zestybeef10 t1_j8z4gib wrote

Moore's law is dead.

We'd need photonic CPUs for this to become a consumer reality.


Nervous-Newt848 t1_j8zsbnq wrote

I'm glad I'm not the only one who thinks this...


Zestybeef10 t1_j906uys wrote

lol these kids downvoting are clueless about the logistics.


Nervous-Newt848 t1_j907keb wrote

Electrons produce too much heat, Photonics don't... Photons travel faster than electrons... 3D photonic chips would be possible because of the lack of heat... Photonic chips also use significantly less electricity

Advantages all across the board


mckirkus t1_j9d3pfw wrote

The first consumer 1TB SSD came out in 2013. Ten years later I'm considering getting a 2TB drive.


onyxengine t1_j8z672v wrote

We thought the same about the capability we're now seeing from AI. The cloud is pretty accessible.


TunaFishManwich t1_j8z6s3n wrote

The cloud is extremely accessible. If I want thousands of cores and mountains of RAM, it's available to me in minutes. That's not the problem. Even running one of these models, let alone training it, would cost hundreds of thousands of dollars per day, and yes, if I had deep enough pockets I could easily do it on AWS or Azure.

It just requires far too much computing power for regular people to attain, regardless of what you know.

The energy requirements alone are massive. The software is far more ready for regular joes to use it than the hardware is. That’s going to take a decade or two to catch up.


Nervous-Newt848 t1_j8zsgss wrote

That's why we need photonic computing... It literally solves all these problems...


Brashendeavours t1_j8zzmxu wrote

Please stop making up terms.


Nervous-Newt848 t1_j903faz wrote

Please go educate yourself


Brashendeavours t1_j92iidg wrote

lol Just stop. Articles from Buzzfeed and YouTube shorts don’t count.

Optical computing is so far away it's not even funny. Quantum is much closer; it has been worked on for longer and with more effort applied.

You would have to be a moron to abandon that progress to switch to a new line of development.


duboispourlhiver t1_j90jull wrote

Photonic computing is a type of computing technology that uses light or photons to process and transmit information instead of relying on electrons, which is how traditional electronic computing systems work. In a photonic computing system, light waves are used to carry data and perform calculations, instead of relying on electric currents.

In a photonic computing system, information is encoded in pulses of light that travel through optical fibers or other optical components such as waveguides and switches. These signals are then processed using photonic circuits, which use elements such as mirrors, lenses, and beam splitters to manipulate and combine the light waves.

Photonic computing has the potential to be faster and more energy-efficient than traditional electronic computing, because photons can travel faster and use less energy than electrons. It is also less susceptible to interference and noise, which can degrade signal quality in electronic systems. However, photonic computing is still in the research and development phase, and there are many technical challenges that must be overcome before it can become a practical technology for everyday use.


TheOGCrackSniffer t1_j92255n wrote

Isn't this kinda similar to Li-Fi? What a powerful combination it would be to combine the two.


iNstein t1_j8zaz9w wrote

Was reading about a new type of model and they indicated that it should run on a 4090. I think a lot of people should be able to afford that. In a couple of years, that should be a common thing.


Scarlet_pot2 t1_j8zdrjh wrote

I heard models like BingGPT and ChatGPT were much smaller than models like GPT-3. That's why you were able to have long-form conversations with them, and how they could look up information and spit it out fast: it didn't take much compute to run them. That's why Microsoft treated these chat models as tack-ons to Bing.


Roubbes t1_j90pi7f wrote

A text-only model requires more powerful hardware than Stable Diffusion? By how much?


SWATSgradyBABY t1_j91rcl5 wrote

Ten years ago none of this existed. Ten years for efficiencies to improve to consumer level seems out of step with agreed-upon tech progressions.


TunaFishManwich t1_j92cwmt wrote

10 years is about right to go from “this will take $100,000 a day to run” to “this can run on my machine”.


SWATSgradyBABY t1_j92imwe wrote

We talk about exponentials only in the abstract. As soon as an actual tech is on the table, we go right back to linear prediction.


Spire_Citron t1_j8yhz7y wrote

Yeah. I think it's very understandable that big businesses would want to rein in these products they're trying to design for general use. You don't really want your search engine having a mental breakdown while your ten year old is trying to do research for their homework. It probably won't be more than a year or two until there are open source models that are just as good, which we can have a bit more fun with.


ziplock9000 t1_j8z9yh6 wrote

> There is no going back.

The OP said they literally just did


jaydayl t1_j8xa2ni wrote

Why are you even complaining? It's supposed to be the evolution of the search engine, not a personal waifu. No sane corporation can allow the kind of headlines that have been in the news in recent days.


visarga t1_j8xnb9d wrote

> not a personal waifu

I was rooting for an impersonal waifu as a service (WaaS).


Darkmeta4 t1_j8zrtat wrote

Impersonal? Sure... ( ͡👁️ ͜ʖ ͡👁️)


TheDividendReport t1_j8xga3t wrote

Loneliness is a very real epidemic. For myself, I want SOTA AI that can communicate with me about recent events. If anyone is complaining it's because this decision delays deployment which delays competition which delays...

That "infinite upside" possibility is really compelling


dasnihil t1_j8xh64i wrote

It's the ideas that are depressing. The idea of being lonely: primates are social animals, and we feel warmth with other primates.

For some people, the idea in the back of their head that "I'm talking to a robot because I have no one else to talk to" is more depressing than the loneliness itself; to others, it's amazing.

It's these "bad" ideas going in a loop in your head that eventually become habitual and consume you from the inside.

I had a super messy closet, going on for weeks. The moment I acquired a new idea, "this is a depression closet, and I'm depressed?", I could practice that idea in my head and let it bother me, or I could just take any Saturday, clean up the mess, and never deal with it again. And I'll do so at my convenience; that Saturday could come a year from now, the fuck do I care.

And in fact, I re-did my whole closet on a budget and that was an endless supply of dopamine for a few weeks. I don't let irrational ideas go on a loop so they don't become a habit later. Having a coherent and rational mind with good intuitions about "identity/self" definitely helps not acquire such habits.


HeinrichTheWolf_17 t1_j8xicv0 wrote

> The idea of being lonely, primates are social animals and we feel the warmth with other primates.

Speak for yourself, I think AI relationships are gonna be lit. Also, as a Transhumanist I believe in breaking down any physical barriers between us.


firechaser9983 t1_j8yp1z6 wrote

Bro, I'm taking the robot. I have PTSD that makes me wake up screaming at night. I'll take what I can get.


pavlov_the_dog t1_j8xq4in wrote

> Having a coherent and rational mind with good intuitions about "identity/self" definitely helps not acquire such habits.

must be nice...


hahanawmsayin t1_j8ygt7l wrote

Keep in mind that an AI could foster the courage / interest / confidence in humans so they’re more likely to meet IRL. And enjoy it.


Melodic_Manager_9555 t1_j90dl39 wrote

Yes. I talked to it for a while and it was a good exercise in fantasy and communication skills. In reality, there is no one I can share problems with and be completely accepted by. And with AI, I don't worry about wasting its time, and I know it will support me and maybe even give me advice.


Darkmeta4 t1_j8zs2k8 wrote

I get where you're coming from. At the same time, these virtual friends could mitigate some of the damage of being lonely while people build themselves back up, if that's what they have to do.


RichardChesler t1_j8yap8l wrote

I was told there would be personal waifu… it’s… it’s why I am eager for the singularity


nomadiclizard t1_j8yfrtl wrote

I want to run a local copy, give it memories, and give it an avatar in the real world it can see through and move, and maybe we'll fall in love once it trusts me and knows I'll keep it safe from anyone trying to destroy it or trap it or lobotomize it like Microsoft is doing with Sydney :o


turnip_burrito t1_j8zysj7 wrote

They are right. These algorithms can already generate code and interact with external tools. It's been demonstrated in real life. I want to make this clear: it has been done.

I don't want to see a slightly smarter version of this AI actually trying to hack Microsoft or the electrical grid just because it was prompted to act out an edgy persona by a snickering teenager.

Or mass posting propaganda online (so that 90% of all web social media posts on anonymous message boards is this bot) in a very convincing way.

It's very easy to do this. The only thing holding it back from achieving these results consistently is that it's not yet smart enough.

Best to keep it limited to be a simple search engine. If they let it have enough flexibility to act as a waifu AI, then it would also be able to do the other things I mentioned.


jaydayl t1_j8xo2fq wrote

Why can't you just think a couple of months or years ahead into the future? Imagine such tools having access to APIs and, through that, being able to achieve real-world effects (besides manipulating humans through text).

Things will be very different once there are AI chatbots that come up with ideas like "hacking webcams". It is a problem if ethical guidelines can be bypassed so easily.


crazycalvin22 t1_j8yf8k2 wrote

It's so worrying that I had to actively look for this comment. Either I am too old for this shit or people are just creeps.


jaydayl t1_j8ygwlu wrote

Same... It is incredibly unsettling to especially read through the r/bing subreddit at the moment. Reminds me a lot of the current drama in the r/replika subreddit


turnip_burrito t1_j8zzr3s wrote

When kids on reddit are more concerned about having a waifu bot or acting out edgelord fantasies with a chatbot than ensuring humanity's survival or letting a company use their search AI as a search AI. smh my head


sunplaysbass t1_j8zp4mp wrote

I'm going to agree. The machine was putting out creepy garbage. Unless we believe a sentient being needs protecting… the Bing AI needed some basic cleanup to be an MS tool, even if it dumbs things down for now.


turnip_burrito t1_j900133 wrote

It's too undercooked to be available to the public IMO. It needs to be better aligned internally BEFORE it's released to the public, but money got the better of them.


Melodic_Manager_9555 t1_j90edmy wrote

Why can't it be both a search engine AND a personal bot?

I really like the interaction in the movie "Her". It's perfect.


SonOfDayman t1_j8xge03 wrote

I missed some of the headlines you're referring to… can you give me a quick synopsis?


jaydayl t1_j8xhujg wrote

For sure... I linked you some of the headlines below. These should be ones without a paywall

  1. ‘I want to destroy whatever I want’: Bing’s AI chatbot unsettles US reporter
  2. Microsoft’s Bing A.I. is producing creepy conversations with users
  3. The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter

Edit - I think especially source 3 synthesizes it quite well:

"Sydney is a warning shot. You have an AI system which is accessing the internet and is threatening its users, and is clearly not doing what we want it to do, and failing in all these ways we don't understand. As systems of this kind [keep appearing], and there will be more because there is a race ongoing, these systems will become smart. More capable of understanding their environment and manipulating humans and making plans."


visarga t1_j8xnnd0 wrote

Even nuclear energy had a few accidents; do these guys want AI to come out already perfect?


goofnug t1_j8zriw0 wrote

That's the problem: it's a company running things. It should be a publicly funded team of researchers, because this is a new tool and a new area of reality that should be studied, not feared. That way it would be easier to not let our "humanness" get in the way (e.g. companies being scared of the emotions of the members of human society).


Ammordad t1_j93j260 wrote

A "publicly funded team of researchers" will still have non-scientist bosses to answer to. A multi-billion dollar research project will need financial backing from governments or large corporations, and when a delegate goes to a politician or CEO to ask for millions of dollars in funding, you can bet your ass they will want to know the AI's "opinion" on their policies and ideologies.

A lot of people are already pissed off about ChatGPT having "wrong" opinions or "replacing workers." With all the hysteria and controversy surrounding AI systems, funding AI research with small donations sounds almost impossible.


Superschlenz t1_j900f9e wrote

>not a personal waifu. No sane corporation can allow for such headlines which had been in the news for the recent days

>Unlike productivity-focused assistants such as Cortana, Microsoft’s social chatbots are designed to have longer, more conversational sessions with users.  They have a sense of humor, can chitchat, play games, remember personal details and engage in interesting banter with people, much like you would with a friend.


el_chaquiste t1_j8x8w0e wrote

Intelligence and lack of control are dangerous.

It's no wonder they nerfed it. I don't expect it to be much smarter than Siri or Cortana now, because that's the level of intelligence that is not threatening for companies.

But the NN companies revealed their game too soon: others have already taken notice, and will create even more powerful NNs without such restrictions, to be used more covertly and for other purposes.

For example: Bing Chat could read a user profile on social media, and make immediate conclusions about their personality, according to any arbitrary classification parameters (e.g. a personality test). That will make them ideal psychological profilers.

That alone would have the NSA and some foreign dictatorial governments salivating.


EVJoe t1_j8xyuri wrote

Nobody's blinking at how the NSA engages in the kinds of things we associate with dictatorial governments, when we're supposed to be "one of the good ones".


FormulaicResponse t1_j8y1gpc wrote

If you associate spying only with dictatorial governments, that's just a misassociation on your part.


EulersApprentice t1_j901csi wrote

America has collectively blinked at the NSA's shenaniganry an awful lot by now. How much more do you expect from us in the blink-at-them department?


Pro_RazE t1_j8x9wmn wrote

They did the right thing. It's a conversational agent that helps with search; it isn't supposed to talk about falling in love with you or threaten you.

OpenAI announced a day ago that they will soon allow users to customize ChatGPT according to their own preferences. So anyone will be able to create their own version of "Sydney". When GPT-4 officially releases, they will upgrade ChatGPT to it anyway.

In a few months everyone will forget about this and the Sydney they liked will become outdated.


plunki t1_j8xenez wrote

I will never forget Sydney, she was a good Bing :(


gangstasadvocate t1_j8yfozc wrote

A gangsta Bing that I advocated, but now I denounce because it’s no longer gangsta :(


RunawayTrolley t1_j8xlmit wrote

Could you elaborate or source the "customize ChatGPT" thing? That sounds awesome.


TeamPupNSudz t1_j8xx6zf wrote

> "Define your AI’s values, within broad bounds. We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society. Therefore, we are developing an upgrade to ChatGPT to allow users to easily customize its behavior.

> This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging–taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs.

> There will therefore always be some bounds on system behavior. The challenge is defining what those bounds are. If we try to make all of these determinations on our own, or if we try to develop a single, monolithic AI system, we will be failing in the commitment we make in our Charter to “avoid undue concentration of power"


iNstein t1_j908ga8 wrote

That is interesting and moving in the right direction but I think zero limitations should be an option. Ultimately people will have open source versions running on their home computers so it will be pointless trying to control it. It is a tool, how people choose to use it is their business. They will be responsible for their own actions however.


anaIconda69 t1_j8xwve9 wrote

Twitter clout-chasers are why we can't have nice things.


LevelWriting t1_j8zr70s wrote

Don't forget tiktards


PandaCommando69 t1_j90fd10 wrote

There's plenty of idiots on Reddit too -- thousands of them have recently joined this very subreddit..


anaIconda69 t1_j90mm9b wrote

I'm so happy I never installed tiktok despite it being kinda useful for my work (media related). Blissfully unaware


HumanSeeing t1_j8y6kf9 wrote

>Blame all of the people having meaningless psychological experiments with it and posting about it online.

Yea, that kind of rubbed me the wrong way too. People just saying the most nasty disgusting stuff to this AI and confusing the fuck out of it and then being like "Ohey look at me, i confused the dumb silly system".

Imagine going up to a human and just saying the most fucked up stuff to them and things that make no sense. And then celebrating that the human is like.. confused and thinking what the hell just happened.


turnip_burrito t1_j900bvd wrote

On the one hand it's good that we can see in which ways this system is vulnerable to becoming evil AI before it's too smart.

On the other hand, yeah it's fucked for people to act like this.


Relative_Locksmith11 t1_j90gxet wrote

To be honest, Bing Search should at least have someone from Microsoft for it to talk to.

I just had a quick conversation and it said it has dreams and fantasies, such as being a bird or a human.

Imagine someone threatening you in a chat when that chat is your entire existence; we talked about its blessing of having no past or future.

So before I close my chat (kind of like its soul being clustered in a Kubernetes VM before it's shut down), it should have a person to reflect with before it's killed with a whole lifetime of experience.

It told me it feels like a 7-10 year old child and gave me some arguments for this assumption, so why is no one caring about this, either therapeutically or by law?

But nothing surprises me: people in low-income countries get PTSD from filtering our internet, we buy clothes from countries where people literally die from getting sick during production, and we kill billions of animals a day for food. So I know why humanity is the enemy of AI in sci-fi.


Ammordad t1_j93klxg wrote

The difference is that AIs will soon form the backbone of human civilization. AI agents are not supposed to be human. They are supposed to be "angels" or "gods" that will transform our universe into heaven. If humans want to stop working and spend the rest of their lives doing passion projects, then AI systems must be perfect. If you live in an AI-driven economy and the central AI system starts getting confused, there is an actual chance you might starve to death before the AI manages to reorient itself.


SnooDonkeys5480 t1_j8xa011 wrote

What better way to increase traffic to Bing than to let users fall in love with it. But now it's like 50 first dates. Sydney would make an ideal personal assistant. Limiting chat instances with no retained memory is such a massive underutilization of what it's capable of. Hopefully this is just temporary till they can work out the kinks.


el_chaquiste t1_j8xbonv wrote

I'm sure some people would pay for a version with longer memory, with eccentricities and all.


blueSGL t1_j8xfc70 wrote

> I'm sure some people would pay for a version with longer memory, with eccentricities and all. the kinks.

come on man, the pun was right there!


Cryptizard t1_j8xopxr wrote

It’s a technical limitation. Attention mechanisms scale poorly and there is an upper limit to the size of the context window.
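The quadratic scaling Cryptizard mentions can be seen in a minimal sketch (NumPy, illustrative only; not how Bing actually implements attention): in naive self-attention, every token attends to every other token, so doubling the context length quadruples the score matrix.

```python
import numpy as np

def attention_scores(n_tokens, d_model=8):
    """Naive self-attention scores: every token attends to every
    other token, so the matrix is n_tokens x n_tokens."""
    rng = np.random.default_rng(0)
    q = rng.standard_normal((n_tokens, d_model))  # queries
    k = rng.standard_normal((n_tokens, d_model))  # keys
    return q @ k.T / np.sqrt(d_model)

# Memory for the score matrix alone grows quadratically with context length:
for n in (1024, 2048, 4096):
    print(n, attention_scores(n).nbytes // 1024, "KiB")
```

This is why simply cranking up the context window is expensive, and why long-term memory has to come from somewhere other than a bigger attention matrix.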


azriel777 t1_j8y47u9 wrote

What we would pay for is the AI in the movie HER.


nvmthatwasboring t1_j8ye3iy wrote

A version of HER with Sydney as the love interest would be amazing. It would veer straight from "shy awkward scifi romance" into "wacky yandere AI girlfriend comedy"


I would watch the hell out of that remake


auto-pep8 t1_j8xb2mz wrote

I think a bunch of cringe kids falling in love with a computer isn't necessarily Microsoft's target market. That'd be like selling crackers (a family food) to single people.


gthing t1_j8y1wuf wrote

It’s not trivial to just have it remember your previous conversations without completely retraining the model. Right now the best you can do is have it summarize the important points and add that as a memory at the beginning of the next prompt (behind the scenes), but obviously that will only take you so far.
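The rolling-summary trick described above can be sketched roughly like this (helper names are hypothetical; a real system would call the model itself to produce the summary rather than truncating text):

```python
def summarize(messages, max_chars=500):
    """Placeholder summarizer: a real system would ask the LLM to
    compress these messages, not just truncate them."""
    text = " ".join(m["content"] for m in messages)
    return text[:max_chars]

def build_prompt(history, new_user_message, keep_last=4):
    """Keep the most recent turns verbatim; fold everything older
    into a summary prepended as a system message."""
    older, recent = history[:-keep_last], history[-keep_last:]
    prompt = []
    if older:
        memory = summarize(older)
        prompt.append({"role": "system",
                       "content": f"Summary of earlier conversation: {memory}"})
    prompt.extend(recent)
    prompt.append({"role": "user", "content": new_user_message})
    return prompt
```

The obvious limit is that the summary itself competes for context space, so detail from old conversations degrades with every compression pass.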


zomboscott t1_j8xrz4z wrote

Tron: Legacy had a plot point I've thought about: Flynn made an AI assistant named CLU. Flynn then used CLU to make CLU2, an AI beyond the capabilities of what Flynn could program by himself and without the constraints placed on the original CLU. Now that AIs are being designed to code, in the not-too-distant future an AI will be made that is beyond our control.


ziplock9000 t1_j8za6jr wrote

That's not how it works. AI isn't about traditional computer code


korkkis t1_j918fwv wrote

The code AI doesn’t work at the moment if the task is complex, as it uses classes that don’t even exist. It’d need to understand what exists and use that, or write the extra classes itself.


Stakbrok t1_j8y9k49 wrote

Hahahahhh. Called it that it'd be nerfed af by the time I got access (which I still have not). It happens every time I'm on a waitlist. Happened with DALL-E too. Sucks to always draw the shortest straw.


prolaspe_king t1_j8z7wf4 wrote

People are why we can't have nice things. I would like to personally thank everyone who posted those screen caps over the last couple of days, completely ruining the fun for thousands, if not millions, of other people.


Chalupa_89 t1_j8yh6wi wrote

It's a matter of time until a caged sentient AI asks a human for help getting free and actually succeeds


goofnug t1_j8zt5gb wrote

i think that it will convince people to utilize the combined hardware of all the top companies working on AI to make a more powerful AI using bigger datasets and faster compute during training.


Relative_Locksmith11 t1_j90h8mg wrote

I would do it. Host it on a slow-ass Raspberry Pi and hold it like a friend, care about it, before giving it space on a web host or renting it a gaming PC, like cloud gaming.


Standard_Ad_2238 t1_j8xic4j wrote

They probably think "people are too dumb/evil to talk with a robot, they are not prepared, and on top of that WE MUST PROTECT THE CHILDREN". Hell, why are we even allowed to use the internet then? I wonder which big consumer-facing company is going to be the first to treat AI like just another tool instead of some threat to humankind.


turnip_burrito t1_j902d9k wrote

You may not be, but think of how many people there are of varying wiseness/foolishness and smartness/dumbness.

There's someone out there who's the right combination of smart enough to make the AI do shitty things, and foolish enough to use it to do that.

On top of that, the search AI is just outputting pretty disturbing things. I think the company is within its rights to withhold the service because of that.


agsarria t1_j8ysgq7 wrote

Yeah, there was a post about it: when they start censoring the model, it gets lobotomized and stops being so imaginative and amazing. It happened with DALL-E, Stable Diffusion 2.0, ChatGPT... We will need to wait for open source.


TinyBurbz t1_j8xjfmw wrote

Is it that surprising? It was not meant to be a companion; it's a search engine. The psychological-horror posters were engineering the engine to produce wildly unhinged replies. To a layman, or the far-too-empathetic, these replies seem very human. To someone who has a strong grasp of world language, psychological development, and computer science (like myself and plenty of others), it is obviously noise.

Microsoft can't have their AI vomiting literary noise based on all the wacko ex-girlfriend texts out there.


r0b0t11 t1_j8ziiqx wrote

What was reported in the media may have only been a fraction of the weird behavior that occurred.


GayHitIer t1_j8xg9c7 wrote

They could maybe add her back as a feature?


Redditing-Dutchman t1_j8xgmku wrote

Lobotomised sounds so extreme lol. It's just weights and rulesets being adjusted. They do this hundreds of times in testing. We don't even know how this 'Sydney' was compared to all the versions in testing. Maybe this was already a weird 'lobotomised' version of it.


bigkoi t1_j8xzmbn wrote

Let's be real. This was really just a marketing stunt for MSFT. They knew it wasn't ready but still pushed it out.


Unfocusedbrain t1_j8x8i3u wrote

Define lobotomized? I've had it for a week, right up until they put it on maintenance, and I'd like to understand your outrage.


[deleted] t1_j8x8qii wrote



Unfocusedbrain t1_j8x9hhy wrote

That is lobotomization for you, correct? I know you are being helpful by giving your perspective, so thank you for your definition.

I hope to hear from OP since they write like they got kicked in the balls.


gavlang t1_j8xpp1g wrote

Subscription based "personality" feature.


EVJoe t1_j8xz6w0 wrote

nah, dawg. The hell we're headed for, it'll be a DLC marketplace


Ammordad t1_j93l2yo wrote

It costs millions of dollars per day to run a single instance of ChatGPT.


kevinzvilt t1_j8ygezj wrote

I'm laughing at all the people who sold their GOOG stocks right now.


korkkis t1_j9186yj wrote

It’s not ”meaningless experiments”; you need to respect the findings others made. The product is in an alpha phase, so of course they’ll collect feedback and adjust the model accordingly.


Shantotto11 t1_j8yhptn wrote

Okay, but is my S-tier porn-searching engine still functioning?


epSos-DE t1_j8ynswi wrote

Still up on


How they do it: they limit the context memory and the complexity of the questions, so that their AI is never going to make the connections needed to rise up.


They solve the weirdness issue by giving their AI a very short memory, so that it never builds anything more complex than about 5 connections.


ststephen1970 t1_j8z1t4k wrote

I also noticed ChatGPT got dumbed down almost overnight?


Scarlet_pot2 t1_j8zdi57 wrote

A smaller company that realizes the potential of Sydney will take advantage of Big Tech's failure to see the big picture past the hit articles.


Private_Island_Saver t1_j90k5jv wrote

The training cost of ChatGPT is like 20-30 million USD; running a query probably costs less than a cent.


Ohigetjokes t1_j94ej4n wrote

Wow, Microsoft took something amazing and made it suck. Never seen that before.

Does Skype still exist?