tornado28 t1_j8zwrwo wrote
Reply to comment by currentscurrents in [D] What are the worst ethical considerations of large language models? by BronzeArcher
It seems to me that the default behavior is going to be to make as much money as possible for whoever trained the model, with only the most superficial moral constraints. Are you sure that isn't evil?
tornado28 t1_j8ztxdg wrote
Reply to comment by currentscurrents in [D] What are the worst ethical considerations of large language models? by BronzeArcher
Yeah, I guess I'm pretty pessimistic about the possibility of aligned AI. Even if we dedicated more resources to it, it's a very hard problem. We don't know which model will end up being the first AGI, and if that model isn't aligned, we won't get a second chance. We're not good at getting things right on the first try; we have to iterate. Look how many of Elon Musk's rockets blew up before they started working reliably.
Right now I see more of an AI arms race between the big tech companies than an alignment-focused research program. Sure, Microsoft wants aligned AI, but it's important that they build it before Google does, so if it's aligned enough to produce PC text most of the time, that might be good enough.
tornado28 t1_j8zqwy4 wrote
Reply to comment by currentscurrents in [D] What are the worst ethical considerations of large language models? by BronzeArcher
Why are the robots going to want to keep you around if you don't do anything useful?
tornado28 t1_j8zcrc2 wrote
People will use them to make money in unethical and disruptive ways. One example of an unethical use is phishing scams. Instead of sending the same phishing email to thousands of people, scammers may gather some data about individuals and then use the language model to write personalized phishing emails with a much higher success rate.
Disruptive applications will take jobs. Customer service, content creation, journalism, and software engineering are all fields that may lose jobs as a result of large language models.
The other disruptive possibility is that LLMs will be able to rapidly build more powerful LLMs themselves. I use GitHub Copilot every day, and it's already very good at writing code; it takes at least 25% off the time it takes me to complete a software implementation task. So it's very possible that an LLM could, in the near future, make improvements to its own training script and use it to train an even more powerful LLM. This could lead to a singularity with extremely rapid technological development. It's not clear to me what the fate of humankind would be in that case.
tornado28 t1_j65wtvp wrote
Reply to [D] ImageNet2012 Advice by MyActualUserName99
You might be able to get some free compute from AWS or GCP.
tornado28 t1_j5rdebv wrote
Sorry to be skeptical, but I don't think this is really why your one run was better than the other. I think you also inadvertently changed something else.
tornado28 t1_irtfv6q wrote
Reply to comment by mm_maybe in White House Releases Blueprint for Artificial Intelligence Bill of Rights by izumi3682
Thanks for apologizing, but... are you seriously claiming that AI experts are not the right people to evaluate existential risk from AI?
tornado28 t1_irckgi2 wrote
Reply to comment by mm_maybe in White House Releases Blueprint for Artificial Intelligence Bill of Rights by izumi3682
I am a machine learning scientist. I read the ML literature regularly and contribute to it. Those sci-fi dystopias are an increasingly real risk, so yes, I think it's a much bigger deal than a little discrimination.
tornado28 t1_irc619p wrote
I'd really like to see more explicit focus on avoiding the creation of a superintelligent AI that could kill us all if it wanted to.
tornado28 t1_ir3ryg8 wrote
Reply to [R] Self-Programming Artificial Intelligence Using Code-Generating Language Models by Ash3nBlue
This is a bad idea. Seriously, it should be illegal.
tornado28 t1_je78qzx wrote
Reply to Anyone else feel like everything else is meaningless except working towards AGI? by PixelEnjoyer
Maybe an unpopular opinion on here, but I'd say working to prevent AGI.