LettucePrime OP t1_j9sho3d wrote
Reply to comment by jedi_tarzan in Question for any AI enthusiasts about an obvious (?) solution to a difficult LLM problem in society by LettucePrime
EDIT: I am so sorry this is long as shit & it ends on a downer. It's just a really morose & unpleasant read.
Later in the thread I used a better comparison: Wolfram Alpha is not used to teach pre-calculus. Four-function calculators are not used to teach basic arithmetic. We gate a student's "generative ability" based on the skills we want them to develop. Trigonometry does not measure a student's ability to draw a sine function, but rather their ability to represent, measure, & manipulate one. The robot can draw the line to match your function, that's the easy part. Making sure your function is correct is the part you need to learn.
The essay is the function, not the line. It is the proof of the struggle with something new that will produce necessary skills for development. At the very least, it's proof that the user can read a new thing & generate a cogent output from it, which is such an impressive accomplishment in nature that teaching it to machines has caused significant economic & social disruptions.
It's evidence of a user's ability to interrelate information - a process so complex it must be done essentially from scratch every time the user alters even one parameter of their data set. Where mathematical reasoning, at least elementary math, grows linearly in complexity, allowing students to compress portions of the process generatively, no such linearity exists in any other discipline. No one studying Faust is saying: "I learned about 17th century English Literature last year. I'll just plug Paradise Lost into the machine to return a comparison between Milton & Goethe's portrayals of aberrant desire."
Lastly, it's evidence of the user's ability to communicate, which can be considered a complex test of metacognition, a much simpler test of the arbitrary constraints of syntax, & a gauge for how fulfilling the experience was for the user. At the end of the day, that is what it's about.
We need people to have all of these skills. Many of them are difficult to learn. Most of them overlap with ChatGPT's advertised features. We are asking our education system to revolutionize itself in response to a new toy in an extremely short time while extremely underfunded & extremely overtaxed. This is a recipe for a goddamn catastrophe.
You asked what the actual fallout of the last several decades of neglecting liberal arts education has been, &, if I may be perfectly frank, I think it's produced a fucking wasteland. Our industries are corrupted by a revitalized mercenary fetish for cutting overhead & maximizing dividends at a human cost. Our public gathering places are being bulldozed & replaced with more profit-sucking real estate. Our actions are monitored, dissent is catalogued, & punishment is divvied out on an industrial scale. When it happens to us, so often we are incapable of placing it in a larger context. When it happens to others, we struggle with our incomplete grasp of empathy & susceptibility to reams of misinformation. All of this, helmed by engineers, computer scientists, lawyers, entrepreneurs, politicians, & citizens simultaneously over & under-educated.
I have a personal example. My dad held a degree in Nuclear Engineering & had nearly 30 years' experience in systems analysis, quality assurance, continuous improvement & adjacent managerial disciplines in the Energy, Aerospace, & Manufacturing industries. He died a year & a half ago. The disease was systematized ignorance. The Delta variant was just a symptom.
LettucePrime OP t1_j9nyzj2 wrote
Reply to comment by khamelean in Question for any AI enthusiasts about an obvious (?) solution to a difficult LLM problem in society by LettucePrime
Yeah I'm very aware. I'm also aware it's detrimental to getting the knowledge in your head. It's the same reason we don't teach kids basic arithmetic with four-function calculators. An essay is "showing your work" about the topic of your essay.
LettucePrime OP t1_j9nx9qv wrote
Reply to comment by khamelean in Question for any AI enthusiasts about an obvious (?) solution to a difficult LLM problem in society by LettucePrime
I wonder why high school math students are still required to show their work when Wolfram Alpha can factor polynomials no problem.
LettucePrime OP t1_j9nwrjp wrote
Reply to comment by khamelean in Question for any AI enthusiasts about an obvious (?) solution to a difficult LLM problem in society by LettucePrime
Yes. The student's information. The AI cannot interpret, cannot interact, nor can it, by definition, be unique. The AI cannot be used by a student as a crutch to get out of developing their own assessments, as is done presently - & the essay is still an excellent medium to do this.
LettucePrime OP t1_j9nw9fu wrote
Reply to comment by Surur in Question for any AI enthusiasts about an obvious (?) solution to a difficult LLM problem in society by LettucePrime
I understand you now, my apologies.
LettucePrime OP t1_j9nw6se wrote
Reply to comment by khamelean in Question for any AI enthusiasts about an obvious (?) solution to a difficult LLM problem in society by LettucePrime
Oh, Sam's calculator shtick. Yeah I fundamentally disagree with that. Writing is not the same as Arithmetic. The goal of an essay is not to convey information, but for the student to internalize the concepts & present their interpretation & interaction with the ideas in a compelling & unique way. AI-assisted tools, at least with the strength of ChatGPT, negate this process to the detriment of most of academia. The struggle is the process.
LettucePrime OP t1_j9nv0b8 wrote
Reply to comment by Surur in Question for any AI enthusiasts about an obvious (?) solution to a difficult LLM problem in society by LettucePrime
Ehh no actually, that's not true. ChatGPT inferences are several times more expensive than your typical Google search, & they utilize the same hardware resources used to train the model, operating at a similar intensity, it seems.
LettucePrime OP t1_j9ntf72 wrote
Reply to comment by Clairvoidance in Question for any AI enthusiasts about an obvious (?) solution to a difficult LLM problem in society by LettucePrime
I had an enormous 10+ paragraph version of this very simple post discussing exactly some of those smaller LLMs, & while I'm not too familiar with Pygmalion, I know that the computing power necessary for the most successful models far outstrips what your average consumer is capable of mustering. Effectively I argued that, because of economic & tech pressures, the AI industry is due for a contraction pretty soon, meaning that AI generated text would only come from an ever dwindling pool of sources as the less popular models die out.
I abandoned it before I got there, but I did want to touch on truly small scale LLMs & how fucked we could be in 3-5 years when any PC with a decent GPU can run a Russian Troll Farm.
Regarding privacy concerns, yeah. That's probably the best path to monetization this technology has at the moment: training models on the business logic of individual firms & selling them an assistant capable of answering questions & circulating them through the proper channels in a company - but not outside it.
LettucePrime OP t1_j9sine9 wrote
Reply to comment by CubeFlipper in Question for any AI enthusiasts about an obvious (?) solution to a difficult LLM problem in society by LettucePrime
Oh no that seems a bit silly to me. The last 15 years were literally about our global "store-everything" infrastructure. If we're betting on a race between web devs encoding tiny text files & computer engineers attempting to rescale a language model of unprecedented size to hardware so efficient it's more cost effective to run on-site than access remotely, I'm putting money on the web devs lmao