
User1539 t1_ir9xi21 wrote

This is hardly different from what a compiler does.

We've been dumbing down the human side of programming, while automating the hard stuff, since the beginning of Computer Science.

I used to write 8-bit assembly, then 16-bit, and some 32-bit ... but when the 486 came out, I sort of quit assembly, or at least assembly on full microcomputers, because the chips had become too complex for my hand-written code to even begin to match what a compiler was doing for size and speed.

As languages have evolved, we've factored out having to know any specifics about the hardware. Multi-threading, for instance, goes through a layer of virtual threads, and the runtime handles the complex issues underneath.

IDEs are also keeping my dumb monkey brain from making typing mistakes, or doing anything so incredibly stupid it won't even compile.

We're just in the middle, explaining complex processes in abstractions to a machine that will translate that to machine code almost no one is even capable of reading.

AI is going to help us communicate with the machine at a higher level than COBOL. That's all. We're already in little more than a supervisory role.

Of course we'll lose our jobs along with everyone else. Did he think programmers honestly thought they were better than doctors?

It's just where everything is headed, and that's fine. We'll probably be among the last jobs, because complex communication of logical ideas is one of the hardest things for humans to do.

Hell, I already spend half my time bending and twisting my specifications into internally consistent logic, because the PhDs above me can't spot mutually exclusive variables or avoid violating the basic internal logic of their own processes!

Who cares if, after I fix the spec, I just have to write it in English to get the software?

27

Whattaboutthecosmos t1_iragn6x wrote

>Hell, I already spend half my time bending and twisting my specifications into internally consistent logic

Could you give an easily digestible example of this? I'm very curious.

2

User1539 t1_irai69l wrote

We were trying to replace a process that had been done by hand and produced official records.

They asked me to automate this process, but I found that in some cases they just hadn't really kept track of their own thinking on a subject.

So, imagine one part of the document says, 'If A, B, and C add up to 20, the status is Y.' Then another part of the document says, 'If A, B, and C add up to less than 22, the status is N.' That's an oversimplification, but you get the idea.
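To make the clash concrete, here's a toy Python sketch of those two rules (the function names and numbers are mine, purely for illustration, not from any real spec):

```python
# Toy illustration of the conflicting spec rules above.

def status_per_rule_1(a, b, c):
    # One part of the document: if A, B, and C add up to 20, status is "Y".
    return "Y" if a + b + c == 20 else None

def status_per_rule_2(a, b, c):
    # Another part, a page or two later: if the total is under 22, status is "N".
    return "N" if a + b + c < 22 else None

a, b, c = 5, 7, 8  # total is exactly 20, so both rules fire
print(status_per_rule_1(a, b, c))  # -> Y
print(status_per_rule_2(a, b, c))  # -> N: the same inputs demand two statuses
```

Any inputs totaling exactly 20 satisfy both conditions, so the spec demands two different statuses for the same case.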

They give me things all the time where they think they have a simple, logical process, but where they have conflicting rules about the output.

So, part of my job is laying these issues out. They're usually the result of translating what looks like legal documents, and the conflicting logic is usually separated by a page or two.

Leadership makes a decision and writes something impenetrable and usually flawed. The process manager tries to turn that into a logical process, and then I get asked to automate it. Often they realize they've been doing it half one way and half the other for a year ... or they realize they've always done one or the other, and we just decide to pretend the conflicting bit doesn't exist.

Business rules are made by people, usually by committee, and they forget what they were doing from one page to the next.

9

flatterlr t1_ireqlak wrote

I write code for the government; this is a really well-done explanation of going from business rules to code. Thanks :)

2

JediDP t1_irba1kf wrote

I do sometimes wonder what future careers will look like. Even chemistry operations are getting automated, with the deployment of robots that can run reactions programmed remotely.

1

User1539 t1_irbctpz wrote

I honestly expect it to look like Star Trek, where the humans are just talking to a computer, maybe even as a humanoid hologram, and talking the problem through with the AI.

There are several episodes of Star Trek where they basically ask the computer 'How do we make X happen?' or 'Would this work?', and they continue on from there.

Humans will direct AI towards areas of research, but ultimately they'll be asking the AI to do something, and then explain the solution back to them.

4

JediDP t1_irm48zw wrote

That seems a bright outlook on the future. What if AI is limited to a few big corporations and the rest become grounders, like in Altered Carbon?

1

User1539 t1_irmm5wu wrote

Yeah, the real danger would be hardware. The software can be stolen or whatever, but if a strong AI requires a building full of computers to run (and at first it likely will), then it will be limited to the control of the few.

But stories like Altered Carbon really don't make sense once you have strong AI.

Why would practically anyone still have a job? If we had strong AI, surely we could build dumb robots that do pretty much whatever a dumb human can do, without any need to pay them. I'm not saying fully autonomous AI robots, but it wouldn't take much from where we are to have robots that can do all the world's jobs.

So, what do we do with the people then? In a world where technology can free them from having to do any work, you have to imagine some kind of inexplicably cruel person at the top, not only hoarding wealth, but purposely doing things to hurt those beneath them.

In short, AI could easily solve the issues of wealth inequality and create a world where no one suffers. We already produce more food than the world needs, yet people starve. It's not a matter of work or resources. It's a matter of organization.

So, with an AI organizing the world's resources, you'd have to make the conscious choice to force others to starve. The AI would say 'So, if we just start this way of distributing food, we can end starvation', and someone would have to say 'No, don't do that. I want to see them starve, even though the other option means just throwing that food away.'

Of course some people, maybe .001% of the population, are exactly like that, and those people might even rise to the top. But making needless enemies of the rest of the population only threatens them, when they could instead let the AI solve everyone's problems and be hailed as gods.

I just don't really see a way that the current system, based on scarcity, continues in a world where there's no need for scarcity. Once a machine can build machines that do our work for us and organize our resources, there's no reason for anyone to work or starve, and anyone who tries to create a reason for that is just making needless trouble for himself.

Cyberpunk stories like Altered Carbon or Elysium all fall prey to the same issue: they imagine a single technology advancing, and the ramifications of that, while forgetting what the ramifications of all the other technologies advancing at the same time would be.

In both, you have this absurd world where the 'rich' are rich just because they happened to be born into the right circumstances, and 'rich' really only means the ones who hold everyone else down, because you can manufacture bodies to do the work, heal human bodies indefinitely, and mine asteroids for resources if you want to.

There's no scarcity, and the 'rich' are really just bullies, there to torture the people by denying them access to what costs them nothing.

It doesn't make any sense that a world like that would exist for any amount of time before everyone in the bully's organization turns on them.

1