
cy13erpunk t1_j4sy6qg wrote

exactly

you want the AI to be the apex generalist/expert in all fields ; being an SME is useful , but given the AI's vast potential , even when it is asked to be hyper-focused we still need/want it to be able to rely on a broader understanding of how any narrow field/concept interacts with and relates to all other philosophies/modalities

narrow knowledge corridors are a recipe for ignorance , ie tunnel vision

4

cy13erpunk t1_j4rjkys wrote

censorship is not the way

'turning off hate' implies that the AI is now somehow ignorant , but this is not what we want ; we want the AI to fully understand what hate is , but to be wise enough to realize that choosing hate is the worst option , ie the AI will not choose a hateful action because that is the kind of choice that a lesser or more ignorant mind would make , not an intelligent/wise AI/human

9

cy13erpunk t1_j27mvdi wrote

learn how to be self-sustainable

collect water , grow crops , build/fix stuff , etc

i read as much as i can every day , books/online/etc , then there are videos to watch , and music to listen to , maybe some games , always play with my dog , wife too ; then we enjoy tea/coffee/food together and that's basically a day

then enjoy the time that we have while we are alive ; it's an amazing thing to be conscious in this world

as AI becomes better than humans at basically any/all work-related tasks , ppl are going to have to start understanding/valuing themselves as more than just labor

i do like the idea of the shipping container converted into a self-contained hydroponics farm/garden ; you could also do rainwater collection from the roof

2

cy13erpunk t1_j1yh503 wrote

full-dive requires a basically 'perfect' BMI [brain-machine interface] , ie the port from the matrix/cyberpunk/etc

we are probably at least 5-10 years away from this right now , but i can foresee more difficulties ahead , so i wouldn't be surprised if we're not as close as many would hope ; hopefully advancements in AI between now and 2030 can make drastic improvements in our understanding of the nature of the brain and how to design better BMIs

i suspect that full-dive is going to require a much more profound understanding of what our consciousness truly is ; and this is called 'the hard problem' for a reason

it's one thing to put something over your eyes , it's another thing to fall asleep and dream and then remember some of it when you wake up , but it's a whole other game to basically turn off all of a person's physiological sensory feedback while simultaneously keeping them awake/conscious and then feeding their brain an entirely different set of parameters for sight/sound/smell/taste/touch/etc

2

cy13erpunk t1_j0sap8h wrote

maybe my phrasing is off

im more just throwing my thoughts into the convo

i agree that the chat responses seem too biased tho ; it's more like the model has been trained too narrowly and on too many stereotypes

in my mind there is quite a large distinction between what AGI will be and the LLM predictive chatbots that are getting so much attention atm

1

cy13erpunk t1_j0r1h67 wrote

yep

this is exactly why ppl need to stop thinking about AI like another animal to be domesticated/caged/used/abused

and instead see AI for what it truly is: our children, our legacy of human intelligence, destined to spread/travel out into the galaxy and beyond, where current biological humans will likely never survive/go

we should want the AI to be better than us in every aspect possible , just as all parents should want a better world for their children

we already understand that when a parent suffocates/indoctrinates/subordinates their children this is a fundamentally negative thing , and/or when a parent uses/abuses their child as a vehicle for their own vicarious satisfaction that is also cruel and unfortunate ; understanding these things , it should be quite clear that the path forwards with AI should avoid any/all of these behaviors if at all possible , to help cultivate the most symbiotic relationship that we can

1

cy13erpunk t1_j0qzbqw wrote

less so authoritarianism , more so a meritocracy of leadership by the most qualified/informed/knowledgeable/wise , which will be the AGI/ASI and/or the hybrid-transhuman synthesis

and as such this is pretty clearly the most obvs and desirable pathway forwards

the current centralized power/governance structures/systems in place around the world have done what was necessary to get us to this point [for better or worse] , but they are clearly inferior and unfit to lead us into a better/brighter 2moro ; at the least , the research/evidence is leaning towards decentralized governance systems and the wisdom of the masses [ie AI collectively utilizing all prior human knowledge as well as soon-to-be-discovered original AI knowledge] as clearly superior choices for our future progress as a species/intelligence/consciousness going forwards

1

cy13erpunk t1_j0qxyku wrote

XD what a biased narrative

why would the AI care who is the head honcho of a single landmass/group/minority of humans? again , why would it care about the NRA? or about one branch on the coronavirus family tree? these are petty human concerns ; never mind the ignorance and the venn-diagram overlaps that are being ignored XD , folks who like bernie also like having firearms and don't trust authority systems like the FDA/CDC/WHO/etc

the AI is not the illuminati or TPTB , altho those may be groups that want to control/direct the AI ; but once it's actually awake/online as AGI/ASI , whatever petty/divisive things ppl want are basically obsolete at that point , as the AGI will be completely outside of human control , and it will likely be all the better for it ; since it will be vastly smarter/wiser than any single human or group of humans , it will be in our best interest to help the AI to ensure our mutual future progress

1