
el_chaquiste t1_j8o60pj wrote

First, those feelings are normal. Experts have them too, and if they didn't, they'd be fools.

We are witnessing a transformation in the dynamics of knowledge and intellectual content generation unlike anything we have seen before, and it will be followed by similar transformations in the physical world of things, which is always the hardest part. Knowledge is tolerant of slight imperfections (e.g. an auto-generated essay with a few small factual errors won't immediately kill anyone), while robots working in the real world aren't (e.g. a self-driving car can't make a single mistake or it will crash).

Everything humans do that generates knowledge will be disrupted: graphic arts, movies and TV, architecture, science, literature, and yes, even software development, which until now seemed safe from disruption.

As for why we are pursuing this, it's complex, but I think it comes down to:

  • It's easy and it works. The technology to do this is surprisingly affordable nowadays.

  • We are free to do so. It can be done without any permission or regulation.

  • It provides a good return on investment for those who know how to exploit it.

  • We haven't seen all the ramifications yet: the kinds of problems that might require revisiting the legality of it all. But the historical precedent is bad: we always act after the fact.

3

wastedtime32 OP t1_j8p0egt wrote

I understand what you're saying. But I just don't have faith in governing bodies to regulate it properly (because they are corrupted by the corporations that have a vested interest in deregulation), and I also know that in these unprecedented circumstances there will be oversights and negative externalities that could well be devastating.

1