pier4r

pier4r t1_jead39m wrote

As a semi-layman, while I was amazed by the progress in ML, I was skeptical of ever-increasing models needing more and more parameters to perform well. It felt like "more parameters can improve things, and every other factor follows".

I asked myself whether there was any effort toward being more efficient by shrinking models, and recently I read about LLaMA and realized that this direction is now being pursued as well.

1

pier4r t1_jd0pf1x wrote

> 128Gb of Uniform RAM which can be used by CPU, GPU or Neural Engine.

But it doesn't have the same bandwidth as the VRAM on a dedicated GPU card, IIRC.

Otherwise every integrated GPGPU would be better than a discrete one simply due to the available RAM.

The Neural Engine on the M1 and M2 is usable, IIRC, only through Apple's libraries, which may not be used by notable models yet.

10

pier4r OP t1_j97cwwy wrote

The Nobelitis part was referring to Tesla's later period, where few patents are involved. Have you read the article?

One example:

> Tesla claimed that not only could he send electric power wirelessly for 50 million or 100 million miles at “rates of one hundred and ten thousand horsepower.” He also said that he had made a radio machine that “could easily kill, in an instant, three hundred thousand persons.” Even stranger Tesla swore that he received an unusual communication that he decided must have been from Martians. (Although he also added the thought that there could also be aliens on Venus or the moon as, “a frozen planet, such as our moon is supposed to be, intelligent beings may still dwell, in its interior, if not on its surface.”[58])

About "things that work":

> As the years passed, Tesla didn’t manage to demonstrate any significant communication nor transmission of power from his tower. Instead, on January 19, 1903, Marconi was the one who sent the first two-way transatlantic wireless signal from Roosevelt in America to King Edward of England and back, and Marconi appeared to everyone to be the winner of the wireless race.[62] Tesla was undeterred, but Morgan was done with Tesla and his promises and cut off funding. By the next year, Tesla wrote J. P. Morgan in desperation: “Since a year, Mr. Morgan, there has hardly been a night when my pillow was not bathed in tears.”[63] By 1906, he had to fire all his employees at his wireless tower, Wardenclyffe, where it remained empty for many years

Thus I still have the feeling you didn't bother to read the article.

1

pier4r OP t1_j8w545v wrote

Is it about air brakes? Maybe you can contact the author of the article, because she would love to put things in perspective and revise her take (of course, she needs reliable sources and documents).

See here: https://kathylovesphysics.com/george-westinghouse-the-unsung-hero/?utm_source=rss&utm_medium=rss&utm_campaign=george-westinghouse-the-unsung-hero#ind

3

pier4r OP t1_j8vg1ty wrote

>Seeing firsthand how difficult it is for things to get patented, I would not refer to Tesla’s later years as having Nobelitis

I'm not sure how the two things disprove each other. Winning the Nobel prize isn't easy either (I'd say it's harder than obtaining patents).

One can have a good career at first, achieving things that are pretty hard, and then, due to this success, start going in the Nobelitis direction.

From your comment I get the feeling that you neither read the article (or watched the video) nor checked the Nobelitis part. Could that be?

0

pier4r OP t1_j8vff33 wrote

I strongly believe that the "being famous" point relates to nowadays: everyone today knows Tesla but not the other guy.

If I read your comment correctly, you mean that Westinghouse was more famous than Tesla in their prime, and that Tesla became famous only later, independently of the situation nowadays.

11

pier4r OP t1_j8rwi10 wrote

Why I find it interesting:

The internet in the last decade hyped Tesla a lot. I hadn't dug into his history, but I assumed he was someone unmatched, a polymath able to do everything.

The author is amazing: she went through a lot of primary sources, and I was appalled to discover that Tesla practically got Nobelitis after some very successful patents.

Furthermore, Tesla was far from being mathematical. Apparently he had great intuition but couldn't follow his ideas through with the proper mathematics. Last but not least, his ideas weren't decades ahead of everyone else's. Three-phase transmission was already implemented and perfected (not only patented) in Germany by a Polish-Russian engineer, while wireless communication was achieved by G. Marconi in a way practically identical to Tesla's patent.

Furthermore, there is also a video on this: https://www.youtube.com/watch?v=kSyGFEjoYOM

7

pier4r t1_iyzl5ta wrote

I'm not too deep into ML, but I read articles every now and then (especially about hyped models, GPT and co). I see that there is progress on some amazing things (like GPT-3.5), partly because the NNs get bigger and bigger.

My question is: are there studies that check whether NNs could do more (be more precise, or whatever the metric) given the same number of parameters? In other words, is it a race to make NNs as large as possible (given that they are structured appropriately), or is the "utility" per parameter also growing? I would like to know if there is literature about it.

It is a bit like an optimization question: "do more with the same HW", so to speak.
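To make the question concrete, here is a toy sketch of what "utility per parameter" could mean as a simple ratio. All model names and numbers below are invented for illustration; they are not real benchmark results.

```python
# Toy illustration: compare hypothetical models by "utility per parameter".
# All figures are made up for the sake of the example.
models = {
    "small":  {"params": 125e6,  "accuracy": 0.62},
    "medium": {"params": 1.3e9,  "accuracy": 0.71},
    "large":  {"params": 175e9,  "accuracy": 0.78},
}

for name, m in models.items():
    utility_per_param = m["accuracy"] / m["params"]
    print(f"{name}: {utility_per_param:.2e} accuracy points per parameter")
```

With these made-up numbers the smaller model has a much higher ratio, which is one way to phrase the "is utility per parameter growing?" question.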

2