
Jeffy29 t1_jeg33to wrote

That's one of the most boring names I've seen a paper have lol, though I skimmed it and it looks quite good and is surprisingly readable. That said, I don't think this method will be adopted anytime soon; from the description it sounds quite heavy on inference, and given how much compute is needed to serve current (and rapidly growing) demand, you don't want to add to it when you can just train a better model.

The current field really reminds me of the early semiconductor era. Everyone knew there were lots of gains to be had by making transistors in a smarter way, but there was no need when node shrinking was progressing so rapidly. It wasn't until the late 2000s and 2010s that the industry really started chasing those gains, of which there are plenty, but it isn't nearly as cheap or fast as the good ol' days of transistor shrinking. Still, it's good to know that even if LLM performance gains inexplicably stop completely tomorrow, we still have lots of methods (like this one and others) to improve their performance.

2

Jeffy29 t1_jeeo3va wrote

>One thing I keep seeing is that people have been making a buttload of assumptions that are tainted by decades of sci-fi and outdated thought. Higher Intelligence means better understanding of human concepts and values, which means easier to align.

I am so tired of the "tell AI to reduce suffering, it concludes killing all humans will reduce suffering for good" narrative. It's made-up BS by people who have never worked on these things, and it has a strong stench of human-centric chauvinism: it assumes that even an advanced superintelligence is a total moron compared to the average human, somehow capable of wiping out humanity while at the same time being a complete brainlet.

18

Jeffy29 t1_je9cuhr wrote

AI will become progressively better at refining datasets; even GPT-4 is quite good at it. From my understanding, right now they use low-paid workers, often from third-world countries, to go over the data, but that's not a particularly efficient method, and there just isn't any way to go through all the data with enough care, so there is a lot of garbage in those datasets. But AI could do it. It would still require some human supervision, but it would speed up the process by a lot, and I expect datasets to get dramatically better over the next five years.
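
As a rough illustration, here's a minimal Python sketch of what LLM-assisted dataset filtering could look like: the model pre-scores samples and only the flagged ones go to human reviewers. The `llm_quality_score` function, its prompt, and the 0.5 threshold are all hypothetical placeholders, not anything from a real pipeline; you'd wire in whatever model API you actually use:

```python
def llm_quality_score(text: str) -> float:
    """Hypothetical placeholder: prompt a model with something like
    "Rate the quality of this training sample from 0 to 1" and parse
    the number out of its reply."""
    raise NotImplementedError("wire up your model API here")


def filter_dataset(samples: list[str], threshold: float = 0.5):
    """Split samples into an auto-keep pile and a (much smaller)
    human-review queue, instead of having annotators read everything."""
    keep, review = [], []
    for sample in samples:
        score = llm_quality_score(sample)
        if score >= threshold:
            keep.append(sample)  # high-confidence: passes automatically
        else:
            review.append((score, sample))  # borderline: send to a human
    return keep, review
```

The human supervision mentioned above lives in the `review` queue: the model does the bulk reading, and people only check the flagged leftovers.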

1

Jeffy29 t1_je8itvc wrote

Jesus Christ, this clown needs to stop reading so much sci-fi.

>Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

Start World War 3 to prevent an imaginary threat in our heads. Absolute brainiac. This reeks of the same kind of vitriolic demonization that Muslims were subjected to after 9/11, or that trans people are subject to right now. Total panic and psychosis. There is all this talk about AGI and when AI is going to reach it, but holy shit, when are humans going to? Emotional, delusional, destructive; for the supposed pinnacle of intelligence, we are a remarkably stupid species.

12

Jeffy29 t1_jdzbcp7 wrote

>(imagine thinking that no one understands what “intelligence” is except you😂), and speculative philosophical nonsense. (With a hint of narcissism thrown in as well.)

I really get the sense, a lot of the time reading the doubters, that they think nobody else is even considering all the problems and challenges. As if all these PhD researchers are just mindless idiots chasing some fad. It just reeks of immense hubris.

>The author made the laughable claim that superhuman AI was merely science fiction

The thing is, it doesn't even need to be superhuman. I am not sure how many people know of John von Neumann, but he should have been as famous as Einstein, and he was arguably even smarter. His Wikipedia page reads like a piece of fiction: you look at his huge list of things he is known for, and at the end of the list you have (+93 more), what... His contribution to mathematics and computer science is beyond immense; it's very likely we would be quite a bit behind in a number of fields right now if he hadn't existed. Now imagine if, instead of a person of this kind of brilliance being born once or twice a century, we could have a million of them, on demand, at all times. If it didn't result in a singularity, it would be something very close to it.

2

Jeffy29 t1_jdynbwc wrote

I think Her (2013) and A.I. Artificial Intelligence (2001) are two of the most prescient sci-fi movies made in recent times. One has a more positive outlook than the other, but knowing our world, both will come true at the same time. I can already picture some redneck crowd taking sick pleasure in destroying androids. You can already see some people on Twitter justifying and hyping up their hatred of AI, or of anyone who is positive about it.

11

Jeffy29 t1_jdylw27 wrote

>It’s like arguing that a plane isn’t a real bird or a car isn’t a real horse, or a boat isn’t a real fish. Nobody cares as long as the plane still flies, the car still drives and the boat still sails.

Precisely. It's an argument that brain-worm-infested people engage in on Twitter all day (not just about AI but about a million other things as well), but nobody in the real world cares. They're just finding random reasons to get mad because they are too bored and comfortable in their lives, so they have to invent new problems to be angry about. Not that I don't engage in some of it as well; pointless internet arguments are addicting.

3

Jeffy29 t1_jdyl665 wrote

The original tweet is immensely dishonest and shows a poor understanding of science. Key advancements in science often come because the environment allowed them to happen. This notion that scientists sit in a room and have some brilliant breakthrough in a vacuum is pure fiction, and a really damaging stereotype, because it causes young people not to pursue a career in science, thinking they can't come up with any brilliant idea. Even Einstein very likely would not have discovered special and general relativity if key advancements in astronomy in the late 19th century hadn't given us much more accurate data about the universe. I mean, look at the field of AI: you think it's a coincidence that all these advancements came right as the physical hardware, the GPU, allowed us to test our theories? Of course not.

I do think a very early sign of ASI will be a model independently solving a long-standing and well-understood problem in science or mathematics, like one of the Millennium Prize Problems. But absolutely nobody is claiming the AI we have now is anywhere near that. The person is being immensely dishonest, either to justify perpetuating hate or, more likely in this case, just grifting. There is a lot of money to be made if you take a stance on any issue and scream it loudly enough, regardless of how much it has to do with reality.

A personal anecdote from my life: I have a friend who is very, very successful; he is finishing up his PhD in computer science at one of the top universities in the world. He is actually not that keen on transformers or on machine learning via massive amounts of data; he finds it a pretty dumb and inelegant approach. But a week ago we were discussing GPT-4, and I was of course gushing over it and saying what it will enable. His opinion still hasn't changed, but at that moment he surprised me: he said they've had access to GPT-3 for a long time through the university, and he and others have used it to brainstorm ideas, let it critique their research papers, discuss whether there was something they missed that they should have covered, etc. If someone that smart, at the bleeding edge of mathematics and computer science, finds this tool (GPT-3, no less) useful as an aid for his research, then you have absolutely no argument. Cope and seethe all day, but if this thing is useful in the real world doing real science, then what is your problem? Yeah, it isn't Einstein; nobody said it was.

4

Jeffy29 t1_jdsm90r wrote

Reply to comment by enryu42 in [D] GPT4 and coding problems by enryu42

>But I strongly doubt it'll help much: it's not that the solutions have minor bugs, they're usually just completely wrong

I strongly doubt that it wouldn't help. I haven't tested GPT-4 on coding, but from what I've seen, GPT-3 makes a number of simple errors, which is almost inevitable in longer, complex code. Yet it's able to quickly identify and correct them when you point them out. GPT-4 not being able to compile and test its own code is a big limitation that humans don't have. It also can't calculate math; it's essentially guessing the result of the calculation. But both could be addressed with an external compiler and a calculator like Wolfram, things humans also have access to. There would need to be some time limit imposed so it can't brute-force the solution by guessing for a few days, but even so, I think the improvements would be quite large.
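
To make that loop concrete, here's a minimal Python sketch of a compile-test-retry harness of the kind described above. The `ask_model` function is a hypothetical stand-in for whatever LLM API you use, and the attempt cap plays the role of the time limit; this is an illustration of the idea, not any real tool:

```python
import subprocess
import tempfile

MAX_ATTEMPTS = 5  # crude stand-in for the time limit mentioned above


def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to GPT-4 or any other code model."""
    raise NotImplementedError("wire up your model API here")


def generate_and_fix(task: str):
    """Ask the model for code, run it, and feed any errors back to it."""
    prompt = f"Write a Python program that does the following:\n{task}"
    for _ in range(MAX_ATTEMPTS):
        code = ask_model(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        try:
            result = subprocess.run(
                ["python", path], capture_output=True, text=True, timeout=30
            )
        except subprocess.TimeoutExpired:
            prompt = f"This code timed out:\n{code}\n\nPlease fix it."
            continue
        if result.returncode == 0:
            return code  # ran cleanly; a real harness would also run tests
        # Feed the error back, just like a human pasting a traceback into
        # the chat for the model to correct.
        prompt = (
            f"This code failed:\n{code}\n\n"
            f"Error output:\n{result.stderr}\n\nPlease fix it."
        )
    return None  # gave up within the attempt budget
```

The external calculator idea works the same way: route arithmetic out to a tool (Wolfram, a Python REPL, whatever) and paste the exact result back into the context instead of letting the model guess.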

3

Jeffy29 t1_jdqsp3q wrote

If you think AGI/ASI will lead to utopia or something close to it, then I would say we are among the last unlucky humans, born before the singularity, compared with the billions of years' worth of humans (or their successors) who will be born afterwards. Was the invention of the steam engine an incredibly transformative and cool moment in the history of our species? Absolutely. Would I prefer to be living then? Sure as hell not. Likewise, if I had the choice I would pick being born 200 years from now; maybe they wouldn't be experiencing big technological leaps, but life would be better.

2

Jeffy29 t1_j1xgto6 wrote

Yes, after the Arrancar (Aizen) arc there is a rather short "Lost Substitute Shinigami" arc (20 or so episodes) that's manga canon and is the ending of the original show. After the Arrancar arc they inserted a filler arc, but its ending leads to the same result as the canon, so it can be skipped. You can use the filler list to know what to skip; basically, the Arrancar arc ends at episode 310 and the new one starts at 342. The show is on Hulu/Disney+ and the episodes are numbered correctly there.

2

Jeffy29 t1_j1tjr2g wrote

Not exactly. Thousand-Year Blood War is the last arc of the manga, which was never adapted in the original anime (because the manga wasn't finished and the TV station chose to end the series instead of making yet another filler arc). TYBW picks up right where the original anime ended, but in many ways it is also a new take on Bleach.

The animation style has changed a lot: instead of being purely hand-drawn, I think they are using a mix of digital and hand-drawn (to great effect, I might add; the show has some of the best animated scenes in the entire series). The story is much darker and grimmer because the manga was too, and the animation embraces it; it's noticeably more graphic than the original series. The characters are a bit older and there is (slightly) less immature humor. The production studio still shows a lot of respect for the old series, though, one way being the use of the old iconic soundtracks in key moments of the show. Overall it's a much better-paced and better-produced show that will feel perfect for old fans who have grown up since the original series.

3

Jeffy29 t1_j1t0x6k wrote

I was being somewhat facetious. I rewatched the entire series over the past month or so before the new one, and I did not expect to enjoy it as much as back then, but I totally did! My only complaint was the occasionally poor pacing, which wasn't due to the story but to the production demanding certain episodes be stretched to meet the quota.

0

Jeffy29 t1_j1sdufk wrote

Just finished the last episode of the first season. Am I in my thirties now, and should I maybe not be so emotionally invested in a children's anime? Yes. Do I care about that when Number One hits? Not for a goddamn second!

The adaptation is genuinely fantastic and the animation is incredible, and since the audience is much older than when the original show aired, the adaptation has aged with them; the show is much more graphic (when the plot demands it) than the original. It almost works on its own, and I wish I could recommend it to others, but to enjoy it properly you need to have seen the previous series to appreciate everything. And I have a hard time recommending the original series to anyone without knowing they are prepared to skip all the filler and to deal with constant skipping, as the original series can sometimes have 6+ minutes of recap, which is terrible for binge-watching.

I hope that when things are done, they'll take a look at the original series. I genuinely think the show could be recut into a ~100-episode abridged series if you leave out all the filler and cut out some of the pointless gags inserted to fill time, the terrible mid-episode breaks, and the repeated shots that are there purely to pad the runtime. It's hilarious how the "filler" episodes have better pacing than some of the canon episodes, because the latter spread their material over three episodes purely for more runtime. I think the material is there; they just need to let someone put it together.

7

Jeffy29 t1_iyytd4r wrote

Reply to comment by __Maximum__ in OpenAI ChatGPT [R] by Sea-Photo5230

While I agree with you, the same thing could be said about calculators, but at some point we decided it's okay to use them if you know the basics, because there will never be a time when you won't have a calculator near you. These days, using calculators, even very complex ones, is natural in higher learning. If AI like this (and one day much smarter than this) becomes as ubiquitous as calculators, won't it change how we teach people, just like calculators did?

It's way too soon to have this conversation, as this is very immature technology right now, but I think it will one day spark a real debate in society.

2