
[deleted] t1_ixi9rnt wrote

And how does this help us? Why aren't we developing tech that focuses first on energy efficiency, and then on taking over jobs that are dangerous to human safety? Why are we doing this?

1

Amiga-Juggler t1_ixkxeqq wrote

To be gods… and to sell that technology to companies that are sick of employees. As an example, I think if Bezos could run Amazon with 50 distributed executives and service managers, and everything else with robots, he would. I would be interested to see how that would play out in the end… service contracts for the automation, service visits, software costs, etc. Basically: what would be the trade-offs?

Edit: I can’t help but think we are headed to some new form of slavery. I know that sounds weird, but I am just suggesting that what they are trying to build is a “human” that can’t demand anything of its owners. Does that make sense? …I emphasize “trying”. I think a lot of this is just noise.

1

[deleted] t1_ixl2jk5 wrote

But why do robots need that level of interpersonal skill to do that? It's almost like automating leadership at that point.

3

Amiga-Juggler t1_ixm6vdx wrote

Because the human mind is complicated. Replacing humans in your workforce is no easy task… and they are working so goddamn hard to change that. As far back as I can remember… robots (blue collar), offshoring (white-ish collar), touch-tone answering services, chat-bots (more white-ish collar), and click-through software (no-code)… all efforts to reduce costs and maximize profits by getting rid of the pesky human factor. I remember the first software project I was on where the goal was clear: they wanted the software to do the heavy lifting so they could take a team of ten down to three… and that was over 20 years ago. What if computers were interpersonal enough to eliminate even those last three? But thank god for the creative process! You can't replace good ol' creative thinking and artistic expression… wait… (DALL-E).

Edit: Just had another thought; I have seen on occasion arguments being made about AI "rights"… something along the lines of these AI "beings" getting some kind of "human rights protection" as the line between perceiving these AI bots and actual humans becomes increasingly blurred. My thought was on the Luddites and how they sabotaged textile machinery back in the early 19th century… I was thinking, "I am sure we will see some hacking group go after these technologies as the Luddites did back in the day…". However, what if, in some weird "blame it on the tree-huggers" way, these AI bots do get some form of legal protection, making it illegal to "kill" them? Yeah, I know… the hackers would have already broken the law by the time they got to the code… but it's an interesting plot line that has already been played out in several science fiction movies. But…?

1