amplex1337

amplex1337 t1_jedvacf wrote

I still find useful code examples faster through a Google search than through chatGPT. Even 4.0 spits out code that doesn't work way too often, and I end up debugging bad API URLs, PowerShell cmdlets that don't exist, information that's outdated or just wrong, etc. It's often faster just to RTFM. I hate to be in the 'get off my lawn' camp, because it's still exciting technology and I've considered myself a futurist for >20 years, but I completely agree. We could have an AGI by 2025, but I'm not sure we're as close as people think, and the truth is no one knows how close we really are, or whether we're even on the right path at all yet. It's nice to give people hope, but don't get addicted to hopium.
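For what it's worth, this is the kind of sanity check I end up writing by hand just to see whether an endpoint the model suggested actually answers. Rough Python sketch, standard library only; the URLs are made up for illustration, not real APIs:

```python
# Toy check for endpoints an LLM hands back; the URLs below are placeholders.
import urllib.request
import urllib.error

def endpoint_exists(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.HTTPError, urllib.error.URLError, ValueError):
        return False

suggested = [
    "https://api.example.com/v2/widgets",  # made-up URL from a model answer
    "https://api.example.com/v1/widgets",  # made-up URL from the actual docs
]
for url in suggested:
    print(url, "->", "reachable" if endpoint_exists(url) else "check the docs")
```

Nothing clever, but it catches the hallucinated routes before they end up in a script.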

2

amplex1337 t1_jedufrg wrote

So AI will come up with a way to extract resources from the environment automatically, transport them to facilities to refine and fabricate, engineer and build the testing equipment, and somehow perform the experiments en masse faster than they take today? It seems like a small part of the equation will be sped up, but it will be interesting to see if anything else changes right away. It will also be interesting to see how useful these LLMs are in uncharted territory. They are great so far with information humans have already learned and developed, but who knows whether stacking transformer layers in an LLM will actually help with invention and innovation, since you can't train on data that doesn't exist, RLHF probably won't help much there, etc. Maybe I'm wrong, we will see!

6

amplex1337 t1_j8qp89h wrote

chatGPT doesn't understand a thing it tells you right now, nor can it 'code in multiple languages'. It can, however, fake it very well. Give me an example of truly novel code that chatGPT wrote that is not existing examples from its training data strung together in what merely seems like a unique way to you. I've tried quite a bit recently to test its limits with simple but novel requests, and it stubs its toe or falls over nearly every time: returning a boilerplate template, failing to answer the question correctly, or just dying in the middle of the response when given a detailed prompt. It doesn't know 'how to code' beyond slapping together snippets from its training data, much like I can do by googling and copy-pasting from the top Stack Overflow results, and there are still wrong answers at times, which suggests it doesn't really know anything. Just because there appears to be some randomness in its answers doesn't make it 'intelligence'. The LLM is not the AGI that would be needed to actually learn and know how to program. It uses supervised fine-tuning (on human-curated examples), then a reward model trained on human rankings, then PPO against that reward model to reinforce the preferred behavior. It's a very fancy chatbot, and it fools a lot of people very well! We will have AGI eventually, it's true, but this is not it yet, and while that may seem pedantic because this is so exciting to many, there IS a difference.
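To be concrete about those three stages, here's a toy sketch of the shape of that pipeline. None of this is the actual implementation: the "policy" is just four logits over canned responses and the "reward model" is a win/loss tally, but the SFT -> reward model -> PPO structure is the part I mean:

```python
# Toy numbers only: one prompt, four canned responses, a "policy" that is a
# vector of logits. Real RLHF does this with huge neural nets.
import math, random

responses = ["helpful answer", "verbose answer", "wrong answer", "refusal"]
logits = [0.0, 0.0, 0.0, 0.0]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Stage 1: supervised fine-tuning -- nudge the policy toward the response a
# human demonstrator wrote (index 0 here).
for _ in range(50):
    probs = softmax(logits)
    for i in range(len(logits)):
        target = 1.0 if i == 0 else 0.0
        logits[i] += 0.1 * (target - probs[i])  # cross-entropy gradient step

# Stage 2: reward model -- fit scalar scores from human pairwise rankings.
# Here the "model" is just a win/loss tally per response.
preferences = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3)]  # (preferred, rejected)
reward = [0.0] * len(responses)
for winner, loser in preferences:
    reward[winner] += 1.0
    reward[loser] -= 1.0

# Stage 3: PPO-style steps -- sample from the policy, score the sample with
# the reward model, and take a clipped policy-gradient step.
eps = 0.2
old_probs = softmax(logits)
baseline = sum(p * r for p, r in zip(old_probs, reward))
for _ in range(200):
    probs = softmax(logits)
    i = random.choices(range(len(responses)), weights=probs)[0]
    advantage = reward[i] - baseline
    ratio = probs[i] / old_probs[i]
    # clipped surrogate: once the ratio moves past 1 +/- eps in the direction
    # the advantage is pushing, the gradient is zero
    if (ratio > 1 + eps and advantage > 0) or (ratio < 1 - eps and advantage < 0):
        continue
    step = 0.05 * advantage * ratio
    for j in range(len(logits)):
        indicator = 1.0 if j == i else 0.0
        logits[j] += step * (indicator - probs[j])

print({r: round(p, 2) for r, p in zip(responses, softmax(logits))})
```

Every stage is driven by human-produced data: the demonstrations, the rankings, and therefore the reward model itself.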

2

amplex1337 t1_j8dl4d3 wrote

It doesn't understand anything; it's a chatbot that is good with language, which is symbolic. Please consider that it's literally just a GPU crunching numbers over a matrix of language parameters, not a learning, thinking machine that can move beyond the realm of known science the way a doctorate requires. Humans are still doing the learning and curating its knowledge base. Chatbots were really good before chatGPT as well; it sounds like you just weren't exposed to them.
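What I mean by number-crunching, stripped to a cartoon: a forward pass is essentially table lookups and matrix multiplies ending in a softmax over the vocabulary. The sizes and weights below are made up and random, just to show the arithmetic; real models replace the averaging step with attention layers:

```python
# Cartoon of a language-model forward pass: embedding lookup, one weight
# matrix, softmax over a tiny vocabulary. Sizes and weights are made up.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "dog", "barked", "sat", "quietly"]
d_model = 8

embeddings = rng.normal(size=(len(vocab), d_model))      # token -> vector
output_weights = rng.normal(size=(d_model, len(vocab)))   # vector -> logits

def next_token_distribution(context_ids):
    # average the context embeddings (real models use attention here)
    hidden = embeddings[context_ids].mean(axis=0)
    logits = hidden @ output_weights
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

probs = next_token_distribution([vocab.index("the"), vocab.index("dog")])
for token, p in zip(vocab, probs):
    print(f"{token:8s} {p:.3f}")
```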

2

amplex1337 t1_j8dkb3j wrote

Plot twist: Bob is autistic and does love dogs, but doesn't necessarily show his love the way others do. His wife understood that and bought the shirt for him knowing it would make him happy. Bob probably wouldn't have bought a dog on his own because of his condition, but he was very happy about it, even if he isn't super verbal about it. Sandra probably wouldn't have married Bob if he didn't love dogs at least a little bit.

9