blueSGL t1_jed7gnq wrote

How has this narrative sprung up so quickly and spread so widely?

https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence

https://futureoflife.org/open-letter/ai-open-letter/

Back in 2015 the same org drafted an open letter flagging potential issues with AI, years before any sort of commercialization effort.

Alignment researchers have signed the letter both times.

Current models cannot be controlled or explained in fine-grained enough detail (the problem is being worked on by people like Neel Nanda and Chris Olah, but it's still very early stages and they need more time and more people working on it).

The current 'safety' measures amount to bashing at a near-infinite whack-a-mole board whenever the model outputs something deemed wrong, and that is far from 'safe'.

21

blueSGL t1_jecxney wrote

>a lot of them look like randoms so far.

...

>Population

>We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021.

I mean, exactly who do you want to be telling you these things? I can pull quotes from people at OpenAI saying they are worried about what might be coming in the future.

−1

blueSGL t1_jecv6ta wrote

> There's consideration from the people working on these machines.

https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

>In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.

If half the engineers who designed a plane were telling you there is a 10% chance it'll drop out of the sky, would you ride in it?

edit: as for the people from the survey:

> Population

> We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021.

0

blueSGL t1_je8q3lm wrote

> 3. it will be initially unaligned

If we had:

  1. a provable mathematical solution for alignment...

  2. the ability to reach directly into the shoggoth's brain, watch it thinking, know what it's thinking, and prevent eventualities that people consider negative outputs...

...that worked 100% on existing models, I'd be a lot happier about our chances right now.

Given that current models cannot be controlled or explained in fine-grained enough detail (the problem is being worked on, but it's still very early stages), what makes you think making larger models will make them easier to analyze or control?

The current 'safety' measures are bashing at a near-infinite whack-a-mole board whenever the model outputs something deemed wrong.

As has been shown, OpenAI has not found all the ways in which to coax out negative outputs. The internet contains far more people than OpenAI has alignment researchers, and those internet denizens will be more driven to find flaws.

Basically, until the AI 'brain' can be exposed and interpreted and safety checks added at that level, we have no way of preventing some clever sod from working out a way to break the safety protocols imposed at the surface level.

1

blueSGL t1_jdl93th wrote

> AGI, Theory of Mind, Creativity

Marvin Minsky classified words like these as “suitcase words”: words into which people attribute (or pack) multiple meanings.

These words act almost like thought-terminating clichés: once spoken, they all but guarantee the conversation derails. Further comments end up arguing about what to put in the suitcase rather than the initial point of discussion.

15

blueSGL t1_jdl756z wrote

> with specialized expert data from literally 50 experts in various fields that worked on the response quality in their domain.

Sounds like a future goal for Open Assistant.

If one were being unethical... create a bot that posts the current Open Assistant answers to technical questions in small specialist subreddits and wait for Cunningham's Law to come into effect. (I'm only half joking.)

20

blueSGL t1_jdl02u6 wrote

>So with GPT-2 medium, what we really do here is to parent a dumb kid, instead of a "supernaturally precocious child" like GPT-3. What interested me is that RLHF does actually help to parent this dumb kid to be more socially acceptable.

> In other words, if we discover the power of alignment and RLHF earlier, we might foresee the ChatGPT moment much earlier when GPT-2 is out in 2019.

That just reads to me as capability overhang. If there is "one simple trick" to make the model "behave", what's to say it's the only one? (Or that the capabilities derived from the current behavior modification are the best they can be.) Scary thought.

2

blueSGL t1_jdd7maq wrote

After refusing to say how many parameters GPT-4 has, refusing to give any details of the training dataset or methodology, and doing so in the name of staying 'competitive', I'm taking the stance that they are going to do everything in their power to obfuscate the size of the model and how much it costs to run.

e.g. Sam Altman has said in the past that the model would be a lot smaller than people expect and that more data can be crammed into smaller models (the Chinchilla and especially the very recent LLaMA papers back this up).
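
For the intuition behind that claim, a rough sketch of the Chinchilla loss fit from memory (constants omitted, exponents approximate):

```latex
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% N = parameter count, D = training tokens. Loss falls with either, so a
% smaller model trained on far more tokens can match a larger, undertrained
% one. The compute-optimal point works out to roughly D ~ 20N, and LLaMA
% deliberately trains well past that (7B on the order of 1T tokens).
```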

Would I put it past the new 'competitive', profit-driven OpenAI to rate limit a GPT-4 that is actually similar in size to GPT-3, to give the impression the model is bigger and takes more compute to generate answers? No (as the difference in inference cost is pure profit).

8

blueSGL t1_jdafkx8 wrote

https://github.com/ggerganov/llama.cpp [CPU loading with comparatively low memory requirements (LLaMA 7B running on phones and Raspberry Pi 4) - no fancy front end yet, just a CLI; see the minimal invocation sketch below]

https://github.com/oobabooga/text-generation-webui [GPU loading with a nice front end with multiple chat and memory options]

/r/LocalLLaMA
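
For anyone wanting to try llama.cpp, here's a minimal sketch of driving its CLI from Python. The binary name, model path, and flags are assumptions that vary by build and version; check the repo README for the current interface.

```python
import subprocess

# Hypothetical paths: adjust to wherever you built llama.cpp and put the
# converted/quantized LLaMA 7B weights (see the repo README for conversion steps).
LLAMA_BIN = "./main"
MODEL_PATH = "./models/7B/ggml-model-q4_0.bin"

def generate(prompt: str, n_tokens: int = 128, threads: int = 4) -> str:
    """Run the llama.cpp CLI once and return its stdout (prompt + completion)."""
    result = subprocess.run(
        [
            LLAMA_BIN,
            "-m", MODEL_PATH,     # path to the quantized model file
            "-p", prompt,         # prompt text
            "-n", str(n_tokens),  # number of tokens to predict
            "-t", str(threads),   # CPU threads to use
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(generate("Building a website can be done in 10 simple steps:"))
```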

3

blueSGL t1_jcz9z0p wrote

> I'm dreading the day AI can write code.

Self-fixing code generation is already in the pipeline for simple programs (that demo was from the middle of last year): https://www.youtube.com/watch?v=_3MBQm7GFIM&t=260s @ 4.20


GPT-4 can do some impressive things:

>"Not only have I asked GPT-4 to implement a functional Flappy Bird, but I also asked it to train an AI to learn how to play. In one minute, it implemented a DQN algorithm that started training on the first try."

https://twitter.com/DotCSV/status/1635991167614459904


There's also a script dubbed "Wolverine" that hooks into GPT-4 and recursively resolves errors in Python scripts.

https://twitter.com/bio_bootloader/status/1636880208304431104
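
The underlying loop is simple. A minimal sketch of the idea (this is not Wolverine's actual code; the model name, prompts, and the 0.27-era openai client are my assumptions):

```python
import subprocess
import sys

import openai  # pip install openai; assumes OPENAI_API_KEY is set in the environment

def run_script(path: str) -> str | None:
    """Run the target script; return its error output if it crashes, else None."""
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return result.stderr if result.returncode != 0 else None

def fix_until_it_runs(path: str, max_attempts: int = 5) -> None:
    for attempt in range(max_attempts):
        error = run_script(path)
        if error is None:
            print(f"Script ran cleanly after {attempt} fix(es).")
            return
        source = open(path).read()
        # Hand the model the broken source plus the traceback and ask for a rewrite.
        reply = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You fix Python scripts. Reply with only the corrected file contents."},
                {"role": "user",
                 "content": f"Script:\n{source}\n\nError:\n{error}"},
            ],
        )
        # A real tool would strip markdown fences and diff the change first.
        open(path, "w").write(reply.choices[0].message.content)
    print("Gave up after max attempts.")

if __name__ == "__main__":
    fix_until_it_runs(sys.argv[1])
```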

1

blueSGL t1_jcs9g8e wrote

Have you seen that Microsoft are directly integrating it into their office suite under the banner of "Office 365 Copilot"?

Here are some timestamps from the presentation.

Auto Writing Personal Stuff: @ 10.12

Business document generation > PowerPoint: @ 15.04

Control Excel using natural language: @ 17.57

Auto Email writing w/document references in Outlook: @ 19.33

Auto Summaries and recaps of Teams meetings: @ 23.34

7

blueSGL t1_jcjgsl1 wrote

Exactly.

I'm just eager to see what fine-tunes are going to be made on LLaMA now, and how model merging affects them. The combination of those two techniques has led to some crazy advancements in the Stable Diffusion world. No idea if merging will work with LLMs as it does for diffusion models. (Has anyone even tried yet?)
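
For context, 'merging' in the Stable Diffusion world is mostly weighted averaging of checkpoint weights. A rough sketch of what the naive version might look like for an LLM, with hypothetical checkpoint file names and assuming both fine-tunes share the exact same architecture:

```python
import torch

ALPHA = 0.5  # blend ratio: 0.0 = all of model A, 1.0 = all of model B

# Hypothetical checkpoint paths for two fine-tunes of the same base model.
state_a = torch.load("llama-7b-finetune-a.pth", map_location="cpu")
state_b = torch.load("llama-7b-finetune-b.pth", map_location="cpu")

# Naive linear interpolation of every weight tensor, as done for Stable
# Diffusion checkpoint merges. Assumes identical keys/shapes and
# floating-point weights; whether this preserves LLM quality is the open question.
merged = {
    name: (1.0 - ALPHA) * state_a[name] + ALPHA * state_b[name]
    for name in state_a
}

torch.save(merged, "llama-7b-merged.pth")
```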

3