Comments

User1539 t1_ir9xi21 wrote

This is hardly different than what a compiler does.

We've been dumbing down the human side of programming, while automating the hard stuff, since the beginning of Computer Science.

I used to write 8-bit assembly, then 16-bit, and some 32-bit ... but when the 486 came out, I sort of quit assembly, or at least assembly on full microcomputers, because it was too complex to even begin to match what a compiler was doing for size and speed.

As languages have evolved, we've factored out having to know any specifics about the hardware. Multi-threading happens through virtual threads, and the machine handles the complex issues around it for us.

IDEs are also keeping my dumb monkey brain from making typing mistakes, or doing anything so incredibly stupid it won't even compile.

We're just in the middle, explaining complex processes in abstractions to a machine that translates them into machine code almost no one is even capable of reading.

AI is going to help us communicate with the machine at a higher level than COBOL. That's all. We're already in little more than a supervisory role.

Of course we'll lose our jobs along with everyone else. Did he think programmers honestly thought they were better than Doctors?

It's just where everything is headed, and that's fine. We'll probably be among the last jobs, because complex communication of logical ideas is one of the hardest things for humans to do.

Hell, I already spend half my time bending and twisting my specifications to be internally consistent logic, because the PhDs above me can't spot mutually exclusive variables or avoid violating the basic internal logic of their own processes!

Who cares if, after I fix the spec, I just have to write it in English to get the software?

27

Whattaboutthecosmos t1_iragn6x wrote

>Hell, I already spend half my time bending and twisting my specifications to be internally consistent logic

Could you give an easily digestible example of this? I'm very curious.

2

User1539 t1_irai69l wrote

Say we were trying to replace a process that had been done by hand and produced official records.

They asked me to automate this process, but I found that in some cases they just hadn't really kept track of their own thinking on a subject.

So, imagine one part of the document says 'If A, B and C add up to 20, the status is Y', and another part of the document says 'If A, B and C come to less than 22, the status is N'. That's an oversimplification, but you get the idea: a total of 20 satisfies both rules, and they disagree about the status.
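To make that concrete, here's a rough sketch in Python of what those two rules look like once you try to code them (the numbers and statuses are made up, purely for illustration):

```python
def status(a, b, c):
    total = a + b + c
    # Rule from one page of the spec: a total of exactly 20 means status "Y"
    if total == 20:
        return "Y"
    # Rule from a couple of pages later: any total under 22 means status "N"
    if total < 22:
        return "N"
    return "unresolved"

# A total of 20 satisfies both rules, but they demand different statuses.
# Whichever check happens to come first silently wins; that is exactly the
# kind of contradiction that has to go back to the people who own the process.
print(status(7, 7, 6))  # prints "Y", even though the "less than 22" rule says "N"
```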

They give me things like this all the time, where they think they have a simple, logical process but actually have conflicting rules about the output.

So, part of my job is laying these issues out. They're usually the result of translating what looks like legal documents, and the conflicting logic is usually separated by a page or two.

The leadership makes a decision and writes something impenetrable and usually flawed, then the process manager tries to turn that into a logical process, and then I get asked to automate it. At that point they often realize they've been doing it half one way and half the other way for a year ... or they realize they've always done one or the other, and we just make the decision to pretend the conflicting bit doesn't exist.

Business rules are made by people, usually by committee, and they forget what they were doing from one page to the next.

9

flatterlr t1_ireqlak wrote

I write code for the government -- this is a really well-done explanation of going from business rules to code. Thanks :)

2

JediDP t1_irba1kf wrote

I do sometimes wonder what future careers will look like. Even chemistry operations are getting automated, with the deployment of robots that can run reactions programmed remotely.

1

User1539 t1_irbctpz wrote

I honestly expect it to look like Star Trek, where the humans are just talking to a computer, maybe even as a humanoid hologram, and talking the problem through with the AI.

There are several episodes of Star Trek where they basically ask the computer 'How do we make X happen?' or 'Would this work?', and they continue on from there.

Humans will direct AI towards areas of research, but ultimately they'll be asking the AI to do something, and then explain the solution back to them.

4

JediDP t1_irm48zw wrote

That seems like a bright view of the future. What if AI is limited to a few big corporations and the rest of us become grounders, like in Altered Carbon?

1

User1539 t1_irmm5wu wrote

Yeah, the real danger would be hardware. The software can be stolen or whatever, but if a strong AI requires a building full of computers to run (and at first it likely will), then it will be limited to the control of the few.

But stories like Altered Carbon really don't make sense once you have strong AI.

Why would practically any of those people still have jobs? If we had strong AI, surely we could build a dumb robot that can do pretty much whatever a dumb human can do, without any need to pay it. I'm not saying fully autonomous AI robots, but it wouldn't take much from where we are to have robots that can do all the world's jobs.

So, what do we do with the people then? In a world where technology can free them from having to do any work, you have to imagine some kind of inexplicably cruel person at the top, not only hoarding wealth, but purposely doing things to hurt those beneath them.

In short, AI could easily solve the issue of wealth inequality and create a world where no one suffers. We already produce more food than the world needs, yet people still starve. It's not a matter of work or resources. It's a matter of organization.

So, with an AI organizing the world's resources, you'd have to make the conscious choice to force others to starve. The AI would say 'So, if we just start this way of distributing food, we can end starvation', and someone would have to say 'No, don't do that. I want to see them starve, even though the other option means just throwing that food away.'

Of course some people, maybe 0.001% of the population, are exactly like that, and those people might even rise to the top. But it only threatens their position to make needless enemies of the rest of the population, instead of letting the AI solve everyone's problems and being hailed as a god.

I just don't really see a way that the current system, based on scarcity, continues in a world where there's no need for scarcity. Once a machine can build machines that can do our work for us, and organize our resources, there's no reason for anyone to work or starve, and anyone that tries to create a reason for that is just making needless trouble for himself.

Cyberpunk stories like Altered Carbon or Elysium all fall prey to the same issue: they imagine a single technology advancing, and the ramifications of that, while forgetting what the ramifications of all the other technologies advancing at the same time would be.

In both, you have this absurd world where the 'rich' are rich just because they happened to be born into the right circumstances, and 'rich' really only means being the ones who hold everyone else down, because you can manufacture bodies to do the work, heal human bodies indefinitely, and mine asteroids for resources if you want to.

There's no scarcity, and the 'rich' are really just bullies, there to torture the people by denying them access to what costs them nothing.

It doesn't make any sense that a world like that would exist for any amount of time before everyone in the bully's organization turns on them.

1

SwordsAndWords t1_ir9g3aq wrote

"Unlike human children"

Just wanted to point out: that's exactly what we do with human children, it just takes longer.

10

MasterFubar t1_ir9tk5i wrote

I think neural nets are not the final answer. A lot of research has been done on neural nets lately because computer hardware has evolved enough to make it possible, but we will have better technologies in the future.

I see AI with neural nets as airplanes with flapping wings. We don't imitate nature because we don't have the same limitations. Eventually, we will develop more efficient algorithms that perform the same operations as neural nets.

6

DungeonsAndDradis t1_iragf3p wrote

This recent* explosion of AI progress was made possible by the Transformer architecture (and the paper Attention Is All You Need: https://arxiv.org/abs/1706.03762). I think we're approaching the limits of that.

Companies have stopped trying to go bigger and better with data and parameters, and are now trying to streamline training to be more efficient with less. I believe GPT-4 is only going to be trained on 10% of the data or something like that. But it is still expected to be a significant improvement over GPT-3.

I assume that the next "big thing" in AI is what will kick us into high gear towards AGI.

Some researcher, in some lab in Silicon Valley or academia, is probably writing and revising the research paper now on "big thing 2.0". It will probably be called something like "Self-training and recursion of knowledge".

*since 2017

3

Effective-Dig8734 t1_irbx4o1 wrote

I think what you're thinking of is that GPT-4 will have a similar number of parameters, but it will be trained on farrr more data.

3

MasterFubar t1_irajhs5 wrote

An interesting thing about transformers is that they are simpler than the LSTMs that came before them. Problems like vanishing gradients set limits on how complex a neural network can be.
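For a rough sense of how small the core transformer operation is, here's a minimal NumPy sketch of scaled dot-product attention (shapes and variable names are just illustrative, not taken from any particular implementation). Because there's no recurrence, gradients don't have to flow back through a long chain of time steps the way they do in an LSTM, which is part of why vanishing gradients are less of a problem.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: every position attends to every other position."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of the values

# Toy example: 4 token positions, 8-dimensional embeddings
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```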

2

TheNotSoEvilEngineer t1_iraw9c5 wrote

There will be a whole new field of prompt engineering, which might end up hiring English majors. Working with AI is no longer about explaining things in math, but in words and verbose descriptions.
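As a toy illustration of that shift (the `call_model` helper below is purely hypothetical, a stand-in for whichever model API you happen to use), the "program" becomes a plain-English description of the result you want rather than the steps to compute it:

```python
def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real large-language-model API call.
    return f"[model output for a prompt of {len(prompt)} characters]"

prompt = (
    "You are given a CSV file with columns 'name', 'department', and 'salary'. "
    "Produce the average salary per department, sorted from highest to lowest, "
    "formatted as a Markdown table."
)

# Instead of writing the parsing, grouping, and sorting code ourselves,
# the work goes into describing the desired output precisely in English.
print(call_model(prompt))
```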

4

AllNinjas t1_ir99vnz wrote

From the article’s last paragraph: “In this New Computer Science — if we even call it Computer Science at all — the machines will be so powerful and already know how to do so many things that the field will look like less of an engineering endeavor and more of an educational one; that is, how to best educate the machine, not unlike the science of how to best educate children in school. Unlike (human) children, though, these AI systems will be flying our airplanes, running our power grids, and possibly even governing entire countries. I would argue that the vast majority of Classical CS becomes irrelevant when our focus turns to teaching intelligent machines rather than directly programming them. Programming, in the conventional sense, will in fact be dead.”

Imo it will become more important: as more of these big companies take models and other AI programs from others, the ones who can write them completely from scratch, regardless of any previous template, will be in high demand.

The need to know how an OS runs/operates will still forever be a thing. It might be a few steps below need-to-know life skills like swimming, in my opinion.

3

Deformero t1_ir9c7zb wrote

Yeah, it sounds great and looks like it might be true, but it also might never happen as it sure collides with a lot of interests in several (if not all) fields.

I mean, what if Google (for now, it seems they're the only ones with the data and resources to make something this advanced) keeps it for themselves and gives the average user just crumbs of advanced AI functionality as some subscription plan? Their AI model is their AI model.

6

Whattaboutthecosmos t1_iraj8et wrote

After reading this, a thought popped into my head: although there is certainly a disparity between the data/intelligence Google has and what the person in the basement has, the person in the basement still has a lot of potential with only the information that is freely available online. Even if this disparity always exists, the person in the basement will still be able to grow their data/intelligence exponentially as time goes on, just as Google is able to.

4

Deformero t1_iram2jr wrote

Sure, you can get a bunch of books and all kinds of expert literature for free or very cheap, but the average person in the basement will not read that and work on himself. The average Internet consumer watches YouTube, Netflix and porn.

2

Whattaboutthecosmos t1_irapd7i wrote

Yes, I presume you are correct. Though, any person with a computer and internet now has access. This includes above average people (however you wish to define above-average). Also, an average person may partake in the activities you listed on 99% of the days of their lives. But they may find a few days in their life to produce something worthwhile. This could be something as big as defying mortality or writing a meaningful blog post that could be accessed by millions and nudge discourse in a new direction. Even deciding to upvote or downvote something could have a micro-impact.

Side note (just free thinking): I find it interesting that you call them "internet consumers", as I see interacting with the internet as a sort of communication. A user going to sites such as YouTube, Netflix, etc. is still communicating with the internet by giving it prompts to go to those sites. I couldn't consume the internet, in the same way I couldn't consume a person just by interacting with them.

2

Zermelane t1_iracd7y wrote

I don't agree with OP's specific story as such. I think by the time you have the level of AI capability that they're describing, the gameboard has been flipped and pretty much all future predictions are off.

But there are nearer-term levels of AI capability that could revolutionize programming while still being far from doing all the work of putting together entire programs and software services. Even just a significantly smarter Copilot could cause some serious frothiness in the job market. And we might see one soon, if a self-play approach like programming puzzles works out and gets us around the current paucity of training data for code models.
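For context, a "programming puzzle" in that sense is roughly a checker function: any candidate answer that makes the checker return True counts as a solution, so a model's attempts can be verified automatically without human-labelled data. A minimal sketch (this toy puzzle is made up purely for illustration):

```python
def sat(s: str) -> bool:
    """Puzzle: find a string of length 10 containing exactly three 'a' characters."""
    return len(s) == 10 and s.count("a") == 3

def solve() -> str:
    # One possible model-generated solution; the checker decides whether it is valid.
    return "aaabbbbbbb"

assert sat(solve())
print("solution verified:", solve())
```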

2

doodlesandyac t1_irbc49c wrote

The game will just be upped like it always has been with new technology. A single human will be able to accomplish much more, and at some point we probably won’t be needed at all

1

mr_tyler_durden t1_ira0mnu wrote

> It seems totally obvious to me that of course all programs in the future will ultimately be written by AIs, with humans relegated to, at best, a supervisory role.

Unless we accomplish AGI, I'm calling bullshit. Yes, our tools will continue to improve and help us accomplish more, but the author is severely downplaying the role of a programmer here (and clearly isn't one, or isn't a good one). I use GitHub Copilot daily and it's nice, but it's FAR from being able to correctly write more than a handful of lines (a bit more if it's boilerplate).

Again, I do believe our tools will continue to improve, but the author makes huge (unsubstantiated) leaps in logic.

0

zvndmvn t1_irat4pl wrote

Copilot still needs a lot of guidance, but considering that it didn't even exist a couple of years ago, I wouldn't be surprised to be... surprised.

3

DungeonsAndDradis t1_iraf7ct wrote

"This programming AI is so stupid. Only half the stuff it writes is useful. This will never be worth it. We should stop trying to improve it."

1

mr_tyler_durden t1_irar6wm wrote

Not at all what I said and you know it.

I never said AI was stupid, I never said it will never be worth it, I never said we should stop trying to improve. I specifically said I expect it to continue to improve but the current roadmap in no way points to "The End of Programming".

3

JediDP t1_irbaeua wrote

I developed the same opinion once I started using Copilot.

1