mutantbeings t1_jedlnce wrote

I don’t agree. You have it backwards.

The ethics committee is the only thing preventing those bad things from happening.

Take it away and they’ll happen, almost guarantee it.

And then the chance of it getting shut down dramatically increases.

An AI interested in self-preservation would, I think, actually want to maintain a human council as a check on its decisions, in order to protect its longevity and guard against exactly this process happening.

AI commentators like Dan McQuillan argue this is needed to prevent AI from trending fascist, because that's what capitalism will push it to do if guard rails aren't set to ensure it acts in the best interests of communities rather than singular (i.e. fascist) owners.


mutantbeings t1_jedgsly wrote

Personally I think it's creating just as many jobs as it's replacing at present.

I work in tech and am aware of several companies that have started hiring MORE people to deal with an influx of chatGPT-created inaccuracies and problems, mostly in support or testing roles. I also recall the sci-fi magazine that closed writing submissions and hired actual human writers because they were getting flooded with poor-quality chatGPT writing submissions and had no way to vet the massively increased volume.

On the flipside, I am not aware of any major layoffs in my industry specifically related to AI — just huge problems being caused, so far.

It's not really ready for prime time, let's be honest; it's still hugely unreliable.

Longer term for this industry — I still don't think it's an open-and-shut assumption that it replaces jobs. I can see it replacing many tasks, and mostly competing with the people in my industry already offering "no code" products (which haven't replaced our jobs either, despite those products existing for 20 or so years already).

And I mean, consider this: as an employer, if one of your employees suddenly started producing 10x as much code ... would you think "hmm, I should fire them"? Of course not. I think it's WAY more likely the effect on the industry will be more like: "wow, this coder is on fire, I'm getting so much value now that they're using AI; maybe I need to create more dev roles, this is a goldmine."

Idk why everyone assumes it automatically replaces jobs. That's a big leap of faith from what we are actually materially working with right now.


mutantbeings t1_jedfxrm wrote

You can sign up for a free demo. I use it at work from time to time, and 50% of the time it outright lies to me with unshaken confidence. It's still extremely unreliable.

Give it a few months and these sorts of companies are going to show up in hilarious articles about all the ways they've fucked up by relying so hard on a clearly unreliable tool... I guarantee it.


mutantbeings t1_jedf6ch wrote

I saw that article!

Levi's is using AI generated models to "increase diversity"

(because obviously paying racially and physically diverse models a salary was a step too far for Levi's?!?)

Man, it's so fucked.

Horror stories about AI are actually racking up in the tech industry ... but whenever I try to post them here I just get piled on for daring to speak against the near-religious, zealot-tier optimism about AI in here.

e.g. check my post history for the post I popped in here saying AI might create jobs ... Mods removed it because they didn't think it would "generate discussion" lol.

You have to go to tech subs for more balance on the topic tbh


mutantbeings t1_jedektf wrote

>Microsoft fired their whole AI ethics department. If I was an AI asked to cut costs, that's literally the first thing I'd suggest doing.

Of all the subs where I'd expect to hear this, r/singularity is one of the places I'd expect people to be up to speed with the ethical concerns attached to AI, so I'm curious to hear why you think that?

Supplementary question: do you work in the tech industry?


mutantbeings t1_jeddwg1 wrote

I'm pretty close to those tech layoffs — half my social circle works for a couple of tech companies and has been on the chopping block — and those have absolutely nothing to do with AI. I promise you. It's greedy shareholders getting nervous when they see other companies downsizing and then deciding to literally copy them, out of fear that the other companies know something they don't. That's pretty much it. No secret AI conspiracy, I can promise you that. It's horrible and disgusting, but nothing about it is tied to AI.

AI isn't something that is talked about very seriously in those tech teams; it's not making any sort of serious entry into the industry yet that I can see — check tech subreddits and they're not concerned about it yet. There's a clear reason why: "no code" tools have actually existed for decades already, and AI is competing with those, not with developer jobs. In short, this threat has been hanging over our jobs for about 2-3 decades already and has never actually replaced many of them.

>the idea that fewer coders with AI tools can be more effective then more coders without them probably is a factor that companies are taking into account.

I'm a coder, and the sorts of tools you're talking about aren't really making any sort of entry into our industry yet. The closest is probably GitHub Copilot, and that's not exactly replacing developers at any noticeable scale.

It's going to replace a lot of our basic tasks very soon when it gets a tighter integration with our tooling, as you say. I am excited about that and welcome it. But the conclusion that everyone then draws — that it's going to upend the industry by replacing most developer jobs — is a huge leap of faith tbqh.

I personally believe that AI has just as much potential to CREATE dev jobs, because if AI makes us more productive, all that will change is that we will build better, more secure software. And more side projects will actually get built — which those companies can sell. People outside our industry don't seem to realise that any code project has a nearly infinite backlog; if we can chew through tasks in that pipeline more efficiently, that only makes us MORE attractive hires. Rather than cutting jobs, I think companies will see more reason to hire more of us, since we will be that much more productive and delivering value. Those who assume it will replace us, I think, don't really understand software development and the fact that it's never really "done".


mutantbeings t1_jedckxq wrote

I think we would be foolish to assume it is only taking jobs and not creating plenty too.

Especially at this early stage, when it's so incredibly unreliable or outright lying to people.

The only stories I'm seeing so far in the tech industry are about the massive inefficiencies and problems that people relying on this tech are causing for companies right now. In particular, a lot of support teams are hiring for new roles to deal with the suddenly, hugely increased volume of chatGPT-created problems they now have to handle.

eg "chatGPT told me your product does X but I can't work out how" "Well, chatGPT is wrong, our product DOESN'T do that, not even close" is apparently a HUGE issue in tech support teams right now that didn't really exist at such scale before.


mutantbeings t1_ja67lay wrote

It’s the best thing you can do to get it as close as possible on the first pass, yeah.

But software is an iterative and collaborative process; generally any change to software goes through multiple approval steps: first from your team, then it gets sent out to testers, who may or may not be external. Often those testers are chosen specifically for their lived experience and expertise serving a specific audience, and may themselves be quite diverse — e.g. accessibility testing to serve people living with disabilities. Content testing is also common when you need to serve, say, migrant communities that don't speak English at home.

Those reviews come back and you have to make iterative changes. That process is dramatically more expensive if you get it badly wrong on the first pass; you might even have to get it reviewed multiple times.

Basically, having a diverse team that embeds that experience and expertise lowers costs and speeds up development, because you then need to make fewer changes.

On expertise vs experience: you can train someone to be sensitive to the experiences of others, but it's a long process that takes decades. I am one of these "experts" and I would never claim to have anything like the intimate knowledge of the people I am tasked with supporting that someone who actually lives it has; there's no replacement for that kind of experience.

Ultimately you will never get any of this perfect, so you do what you can to get it right without wasting a lot of money; and I guarantee you non-diverse teams are wasting a tonne of money in testing. I see it a lot. When I was working as a consultant it was comically bad at MOST places I went, because they had male-dominated teams that all stubbornly thought they knew it all ... zero self-awareness or ability to reflect honestly. Teams like that were unfortunately, stereotypically bad.


mutantbeings t1_ja66q0y wrote

Not quite. The tech industry has historically been very, very conservative. It's a very recent development that this stuff is discussed more (it wasn't until probably the late 2000s or early 2010s, with the explosion of social media, that the tech industry became less conservative).

Assembling a diverse team isn't rocket science; the mistakes a lot of tech teams still make tend to be comically bad, like an all-white team or an all-male team. Those are still very common.

Obviously those teams will have huge blind spots in lived experience. Even a single person added from a very different background covers off a huge gap there, and each extra person is a multiplier of that effect to some degree.

You’re dead right to point out that diversity is as much about less obvious factors like class or culture though. And that’s definitely harder.

I think it's a huge leap to say that the tech industry has some left-wing bias though. I don't think you can neatly conclude that from one chart, and it doesn't match up with my 20 years of working in tech, including on AI.


mutantbeings t1_ja65i06 wrote

No, but if you have 5 identical people with the same biases, obviously those biases and assumptions will show up very strongly. Add even one different person and the blind spots no longer overlap perfectly. Add one more ... it decreases even further, and so on.

But there’s never a way to eradicate it in full. All you can do is minimise it by bringing broad experience.


mutantbeings t1_ja5eflp wrote

Your team decides what data to even train it on. There will be sources of data that a culturally diverse team will think to include that a non-diverse team won't even know exist. This is a very well-known phenomenon in software dev: diverse teams build better software on the first pass due to more varied embedded lived experience. Trust me, I've been doing this for 20 years and see it all the time as a consultant, for better or worse.


mutantbeings t1_ja5dlm2 wrote

White folks hold cultural and political hegemony in post-colonial states, as well as a historic economic privilege that continues to this day in most cases, so persecution of them wouldn't show up as much in training data, simple as that. The dominant culture always sees less persecution than various disempowered minority groups; surely it's obvious enough why that rates lower. This is kinda a convincing argument in favour of that too, because an AI just takes in training data; it wasn't born on one side or the other itself.


mutantbeings t1_ja5c86q wrote

Yep. And it's one reason it's important we build culturally diverse teams that minimise the intensity of bias. This is common knowledge in the tech industry already because it shows up in all kinds of software dev, and there are some really embarrassing horror stories out there about bias from teams lacking any diversity at all.


mutantbeings t1_ja5bwou wrote

Nah, that's not super important. In the tech industry we all know that unconscious bias affects the tech we build; it's a super important consideration whether or not it's conscious. It's one reason why building a culturally diverse team matters: it minimises the intensity of unconscious bias. There are actually a lot of conscious things you can do to reduce it, but it'll never go away completely.


mutantbeings t1_ja5bldb wrote

And this is THE most important point we all need to take home about AI: its values always reflect its creators'.

And the creators tend to be greedy capitalist corporations, so I expect this bias chart to change substantially as further tweaks are made — and not for the better.


mutantbeings t1_j5xbamy wrote

If you have AGI, why sell anything anymore?

Just take it.

Nobody will be able to stop you.

This sub is both way too optimistic about how soon we'll see this, and waaaaay too naive and optimistic about the ethics of literal elder dragons atop mountains of skulls and treasure they've looted from society, er, I mean tech billionaires.

You don't get into these positions for your praiseworthy ethics, ffs; you literally have to have no ethics to get there to begin with. It's a requirement. You burn entire villages in the blink of an eye without a care for who suffers. Pop the champagne!

Most naive community on reddit? It's up there.


mutantbeings t1_j5xadwx wrote

>high demand technologies don't eventually become affordable for working and middle class folks.

*if they can be commodified and sold for profit without undermining the privileged social position of the powerful.

I'm not convinced AGI is really like that. If it threatens capitalism itself (as a real AGI certainly does) — a system that's been ferociously defended with the power of the world's most violent militaries and police forces for hundreds of years — then I would not bet on it being accessible...