onyxengine t1_jadzeid wrote

We kinda are, if the industry experts in the field you want to join are collaborating with machine learning engineers to build an AI that streamlines their workflows and knows what they know. You’re not going to become an industry expert before that AI becomes a tool that replaces the industry experts.


onyxengine t1_j9g9096 wrote

You can never guarantee that something capable of a thing will never do that thing. If you want AIs to remain harmless, you have to construct them in such a way that they can’t do physical harm.

And that ship has sailed. Most militaries are testing AI for scouting and targeting, and we even have weaponized law-enforcement robots in the pipeline. San Francisco’s program is the one I’m currently aware of; I’m sure there are more.

Even the linguistic models are extremely dangerous. Language is the command-line script for humans, and malicious people can program AI to convince people to do things that cause harm.

We’re not at the point where we need to worry about AI taking independent action to harm humans, but on the way there, there is plenty of room for humans to cause plenty of harm with AI.

Until we build AGI that has extremely sophisticated levels of agency, every time an AI hurts a human being it’s going to be because a human wanted it to happen, or overlooked cases in which what they were doing could be harmful.


onyxengine t1_j62hiot wrote

Everything has limitations, and for some time the AGIs we build will be bound by the limitations we place on them. The details matter: a hyper-intelligent AI confined to a room with no internet access or any ability to communicate with humans probably couldn’t accomplish much.

Let it talk to a small group of people, though, and it might be able to convince them to provision it with the minimum resources needed to seize control of the entire planet.


onyxengine t1_j62gcuh wrote

I don’t trust anyone to be in my mind at that level with the next evolution of tech. I’m down for it, but the level of disclosure for how the tech works would have to meet a pretty high bar. If the “code” isn’t open source, I would want to pass.

I wouldn’t join a network fielded by corporations until it became do-or-die for basic survival in society.


onyxengine t1_j4xa6hm wrote

To be fair, AI is going to create economic upheaval. In the long term it should be an overall positive; in the short term it should accelerate job loss to the point that governments have no choice but to start rolling out UBI.


onyxengine t1_j4hagtl wrote

There’s no void to fill; it’s really just a philosophy that embraces the potential of technology to augment the human form and society. If there is a void that people are looking for transhumanism to fill, it’s the void in our life spans. I could easily do 400 years given how much rapid and radical change we are likely to see. It would be amazing to watch us build the first underwater cities and live in one, or live on an off-planet colony, or even contribute to building them.


onyxengine t1_j3pbq7i wrote

I don’t think it’s a disease; I think it’s a preconfigured setting for the replacement of individual members of a species. Women grow brand-new organisms with the clock set to zero all the time. It seems that if we knew what we were doing, we could induce phases that rejuvenated the individual indefinitely.


onyxengine t1_j29sh04 wrote

The neural network is the logic center of the mind, and it’s definitely not nothing in regards to generating machine consciousness. Architecturally, we can see what neural nets are missing by looking at ourselves.

Motivation (survival instincts, threat detection, sex drives, pair bonding, etc.). Not to say we need to fabricate sex organs, but we need to generate prime directives that NNs try to solve for, outside of what the NNs themselves are doing. That’s how human consciousness is derived: the person is a virtual apparatus invested in our biological motivations. We can fight and argue not just to survive, but for what we desire.

Agency in the context of an environment (cameras, robotic limbs, sensors recording a real-time environment). We field neural nets in tightly controlled, human-designed ecosystems; they don’t have the same kind of free rein to collect data that humans do.

There are parts of the human mind that neural nets are not simulating; we have to construct those parts and connect them to NNs.
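The split described above — drives that live outside the problem-solver and hand it its objective — can be sketched in a few lines. This is just a toy illustration of the idea, not a real framework; every name here (`Drive`, `solver`, `agent_step`, the `restores` field) is hypothetical:

```python
# Toy sketch: "prime directives" (homeostatic drives) sit outside the
# problem-solving module and supply its current objective. The solver
# (a stand-in for a neural net) answers "how", never "why".

class Drive:
    """A motivation external to the solver, e.g. keep 'energy' near a set point."""
    def __init__(self, name, set_point):
        self.name = name
        self.set_point = set_point
        self.level = set_point

    def urgency(self):
        # How far the drive has drifted from its set point.
        return abs(self.set_point - self.level)

def solver(drive, options):
    """Stand-in for a trained NN: pick the action that best serves the given drive."""
    return max(options, key=lambda action: action["restores"].get(drive.name, 0))

def agent_step(drives, options):
    # The most urgent drive sets the objective; the solver only executes it.
    goal = max(drives, key=Drive.urgency)
    action = solver(goal, options)
    goal.level += action["restores"].get(goal.name, 0)
    return goal.name, action["label"]

drives = [Drive("energy", set_point=10), Drive("safety", set_point=10)]
drives[0].level = 4  # energy has drifted, so it becomes the current objective
options = [
    {"label": "recharge", "restores": {"energy": 5}},
    {"label": "retreat", "restores": {"safety": 3}},
]
print(agent_step(drives, options))  # → ('energy', 'recharge')
```

The point of the sketch is the separation of concerns: the `solver` could be swapped for any NN, while the drives remain an independent layer that gives its outputs a "why".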

I think conscious machines are a matter of time and of expanding ML architecture to encompass more than just problem solving. Machines don’t have a “why” yet.