Comments

153IQ-yet-retarded t1_j0id8wu wrote

Long term, we programmers will become the new proofreaders for AI-generated code.

16

kypjks t1_j0lfgao wrote

It is still far away. Most serious code requires a very complex description to explain what it does, and usually such a description in human language is more complex than the generated code itself. So it is not efficient to use human language to dictate to an AI agent what code to generate.

1

Hyper1on t1_j0m9ui8 wrote

But it is often faster to write a description of the algorithm you want in comments, even a complex one, than it is to code it up yourself (especially if coding involves any googling, risk of off-by-one errors, etc.). Besides, it's easier to verify solutions than to write them.
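
A toy illustration of that asymmetry; the function below just stands in for model output, everything here is hypothetical:

```python
# Suppose the model generated this from the one-line comment
# "merge two sorted lists into one sorted list".
def merge_sorted(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    return out + a[i:] + b[j:]

# Verifying it takes seconds compared with writing it:
assert merge_sorted([1, 3], [2, 4]) == [1, 2, 3, 4]
assert merge_sorted([], [5]) == [5]
```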

1

kypjks t1_j0nase5 wrote

You don't get the point. Much of the time it is faster to write the code itself than to write a comment explaining it. Adding comments that define precise behavior is not that simple; code with lots of comments is actually very unusual. Take a look at a serious open source project like the Linux kernel or Android: pick any non-trivial code from those and try to write comments explaining what it does, and it will take far more effort. If it takes more time, why would any serious software project do it?

1

Hyper1on t1_j0nbs7q wrote

I don't know about the Linux kernel source, but having contributed to several major OSS libraries including PyTorch, I think most PRs in my experience can be described more easily in natural language than in code. When I said comments, I didn't mean line-by-line comments on everything; I was thinking more of docstrings. I am very sceptical of the idea that, on average, it is faster to write complex code than to describe what you want it to do, which is partly why I think AI code synthesis can achieve significant speedups here.
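
Something like this is what I have in mind: the docstring is the natural-language spec, and a code model fills in the body. A hedged sketch; the body shown is just one plausible completion:

```python
def moving_average(xs: list[float], window: int) -> list[float]:
    """Return the moving average of `xs` over a sliding window of size
    `window`. The result has len(xs) - window + 1 entries; raise
    ValueError if `window` is not in [1, len(xs)].
    """
    # Hypothetical model completion below; only the docstring was written by hand.
    if window <= 0 or window > len(xs):
        raise ValueError("window must be in [1, len(xs)]")
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]
```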

3

tyrellxelliot t1_j0jrf8f wrote

imo 50% of white collar jobs are going away in the next 10 years.

ChatGPT already generates mostly working code, and currently it doesn't even use feedback from executing that code; it just writes it in a one-shot fashion. If they train it using RLHF, but with a more specialised code model and a compiler/unit tests in the loop instead of a human, I think it can totally generate fully working end products.
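
A rough sketch of what that loop could look like; `model.sample` is a made-up stand-in, not any real API:

```python
import subprocess
import tempfile

def generate_until_tests_pass(prompt, tests, model, max_attempts=5):
    """Hypothetical test-in-the-loop generation: sample code, run the unit
    tests against it, and feed failures back into the next prompt. The
    pass/fail result could also serve as the reward signal for RL."""
    feedback = ""
    for _ in range(max_attempts):
        code = model.sample(prompt + feedback)  # hypothetical model call
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code + "\n\n" + tests)
            path = f.name
        result = subprocess.run(["python", path], capture_output=True, text=True)
        if result.returncode == 0:
            return code  # all tests passed
        feedback = "\n# Previous attempt failed with:\n# " + result.stderr[-500:]
    return None
```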

Any job that involves applying specialised knowledge in the text domain (accountants, para-legals, teachers, etc.) is under threat. Hallucinations should be easily solvable by incorporating a factual knowledge database, like in RETRO.
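
Heavily simplified sketch of the retrieval idea; `embed` and `generate` are hypothetical model hooks, not RETRO's actual architecture:

```python
import numpy as np

def retrieve_then_generate(query, kb_texts, kb_vecs, embed, generate, k=3):
    """Embed the query, pull the k nearest entries from a factual
    knowledge base, and condition generation on them."""
    q = embed(query)  # hypothetical embedding hook
    sims = kb_vecs @ q / (np.linalg.norm(kb_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(kb_texts[i] for i in np.argsort(-sims)[:k])
    return generate(f"Facts:\n{context}\n\nQuestion: {query}\nAnswer:")
```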

12

arcandor t1_j18ymo4 wrote

No, the jobs won't go away. This new technology will benefit a small group of people and their companies. The rest of us may get a small productivity boost. Some easily automatable, low-hanging-fruit jobs will shift from 'do the thing' to 'monitor the AI doing the thing'. The number of positions will decrease but not disappear.

1

blose1 t1_j0shnwm wrote

> Hallucinations should be easily solvable by incorporating a factual knowledge database, like in RETRO.

> accountants, para-legals, teachers etc

No, a RETRO-style DB will not help with hallucinations; it will only help with simple fact checking. It won't help with the generative parts of responses, because those are not facts and you can't check whether they are true against any DB.

> imo 50% of white collar jobs are going away in the next 10 years.

You are hallucinating.

0

thedabking123 t1_j0i2bon wrote

I think it depends on how quickly they deal with "hallucinations" or situations where the responses don't make sense.

As these reduce over time, more and more use cases will become realistic.

  • Fault-tolerant use cases are already viable today, like drafting copy for emails or reports. These don't need exact responses, since the human goes the last mile and the cost of a mistake is very low.
  • Very fault-intolerant use cases, like recommending diagnoses in healthcare, will likely need extremely low levels of hallucination, since the cost of a mistake is extremely high (see the rough numbers after this list).
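
Back-of-the-envelope illustration of why the tolerances differ so much; all numbers below are made up:

```python
# Expected cost per task = error rate x cost of a mistake (illustrative numbers only).
use_cases = [
    ("email copy draft", 0.10, 1),             # a human edits it anyway
    ("medical diagnosis aid", 0.01, 100_000),  # a missed diagnosis is catastrophic
]
for name, error_rate, mistake_cost in use_cases:
    print(f"{name}: expected cost per use = {error_rate * mistake_cost}")
# 0.1 vs 1000.0: the second use case needs hallucination rates orders of magnitude lower.
```
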
11

Aggravating-Act-1092 t1_j0hrdtf wrote

I think that is a given, even assuming AI research hits a wall. Sam Altman recently talked openly about how he hopes a wave of startups will be founded to take advantage of models like ChatGPT, tuning and specialising them for a variety of tasks like proofreading, simple coding, and many others.

But then you have to consider where AI was 5 years ago, and how all these tasks seemed completely impossible then.

I expect in the next 20 years the effects of AI on the job market will be greater than those of the Industrial Revolution or the computer.

10

jms4607 t1_j0im19u wrote

Human labor (even labor that requires intelligence) is going to be replaceable with electricity. I honestly struggle to see how this isn't going to create wealth disparity like we have never seen.

14

race2tb t1_j0hpwh6 wrote

Been playing with it. Honestly, I think it is going to replace a lot more kinds of jobs than people think over the next 10 years. Eventually we will just be its worker bots following instructions. That won't last long, though, before it can just do it itself, maybe 30 years.

9

ztbwl t1_j0k5dnj wrote

Just like the WYSIWYG editor replaced all web devs.

1

sambiak t1_j0kahfe wrote

We may finally have good customer support chatbots, so I imagine a reduction in the number of customer support staff.

That said, I struggle to see it replacing jobs that require a few months of training. I work for a multi-billion-dollar healthcare company, and most of the cost savings/human labour reductions are achieved through traditional software practices. An administrative error can be horrendously expensive.

2

danielbln t1_j0kgnu4 wrote

Healthcare is at the end of the spectrum where small mistakes have grave consequences (even though human physicians make mistakes all the time). There are plenty of problem domains, though, that are a lot more resilient to erroneous output, or can be made resilient, and those are significantly more likely to be displaced by AI.
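
One pattern for making a domain resilient is to gate the model's output behind cheap domain checks and escalate to a human otherwise. A minimal sketch with hypothetical names:

```python
def resilient_answer(question, model_answer, validators, escalate_to_human):
    """Accept AI output only if every domain-specific check passes;
    otherwise hand the case to a human. All names are hypothetical."""
    if all(check(model_answer) for check in validators):
        return model_answer
    return escalate_to_human(question)

# e.g. validators = [is_well_formed, cites_known_policy, within_refund_limit]
```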

2