CommunismDoesntWork t1_jddbke7 wrote

>Tesla still uses lithium-ion batteries which is the norm for any EVs today,

And it was unheard of just six years ago. Are we really going to pretend the auto companies would have switched to EVs the way they are now if Tesla hadn't come along and started eating their lunch? That's a revolution.

SpaceX revolutionized space if you were already in the space industry. For average people, we won't see the ramifications until Starship, agreed, but everyone in the space industry felt the revolution in 2015, when SpaceX landed the Falcon 9 booster for the first time.


CommunismDoesntWork t1_jddayh4 wrote

Define wealthy? His dad was an engineer and his mom a working model (not a supermodel). That's upper middle class at best. As for "but the emerald mine": his dad invested $40k of his life savings in a 50% stake in a mine and doubled his money over 10 years, which barely beats the stock market.


CommunismDoesntWork t1_jddagpj wrote

Stop spreading the lie that Elon didn't found Tesla and SpaceX. First, SpaceX was solely founded by Elon, and he's been the CEO and chief engineer since inception.

Tesla was co-founded by 5 people. Elon Musk and JB Straubel were independently going to start an EV company using tech from AC Propulsion. AC Propulsion then introduced them to Marc and Martin, who wanted to do the same thing, and the four decided to team up, with Musk providing Tesla's initial funding. Tesla was nothing but a piece of paper when they teamed up. The courts decided all of them get to call themselves founders, because there's no hard and fast rule about what counts as being a founder.


CommunismDoesntWork t1_jdcxx5u wrote

>But in principle there's nothing standing in the way of building a 100B parameter SNN.

That's awesome. In that case, I'd pivot my research if I were you. These constrained optimization problems on limited hardware are fun, and I'm sure they have some legitimate uses, but LLMs have proven that scale is king. Going in the opposite direction and trying to get SNNs to scale to billions of parameters might be world changing.

Because NNs are only going to get bigger and more costly to train. If SNNs and their accelerators can speed up training and ultimately reduce costs, that would be massive. You could be the first person in the world to create a billion-parameter SNN. Once you show the world that it's possible, the floodgates will open.
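For anyone following along who hasn't worked with SNNs: the basic unit being scaled here is something like a leaky integrate-and-fire (LIF) neuron, which emits discrete spikes instead of the continuous activations in a standard NN. Here's a minimal sketch of one LIF neuron (the threshold and leak constants are illustrative, not from any particular paper):

```python
def lif_spikes(input_current, threshold=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over a sequence of inputs."""
    v = 0.0                      # membrane potential
    spikes = []
    for i in input_current:
        v = leak * v + i         # leaky integration of incoming current
        if v >= threshold:       # fire a spike and reset the potential
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.5, 0.5, 0.5, 0.0, 1.2]))  # -> [0, 0, 1, 0, 1]
```

The sparsity is the whole appeal: most neurons are silent at any given timestep, which is what dedicated SNN accelerators exploit to cut energy and, potentially, training cost at scale.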


CommunismDoesntWork t1_j9ngi4q wrote

Reply to comment by xott in Stephen Wolfram on Chat GPT by cancolak

It's simple, but not interesting from a research perspective. Humans don't need calculators to do math, after all. Someone has done it, though; they posted about it on the machine learning subreddit a few days ago.


CommunismDoesntWork OP t1_j259z6s wrote

ChatGPT has been shown to have problem-solving and analytical reasoning skills. It can also explain the reasoning behind its answers. It can be confidently incorrect sometimes, but ChatGPT is for sure more than just "predicting what word should come next". There's a spark of AGI in it, even if it's not perfect. Transformers have been shown to be Turing complete, so there's nothing fundamentally limiting it.


CommunismDoesntWork OP t1_j2531og wrote

>being potentially useful for the early stages of exploring a new idea and an unfamiliar body of work.

Exactly, this is what I had in mind when I was quizzing ChatGPT on the immune system. I wanted it to teach me basically everything there is to know about the immune system, which is a subject I know almost nothing about. If you keep asking ChatGPT "why", it will eventually bottom out and won't go into any more detail, whereas I imagine a research-oriented GPT could keep going deeper and deeper until it hits the current limit of our understanding of a particular subject.

>New research hyper-relevant to mine is likely to cite at least one of my papers, so I already get an alert.

>There are many times when my research takes me into a new sub-field for just one or two questions ancillary to my own work

But how do you know a completely separate area isn't relevant to your work? Not a sub-field, but a completely separate area. Say a team is trying to cure Alzheimer's. At the same time, a different team is working to cure AIDS. The AIDS group makes a discovery about biology that at first only looks applicable to AIDS, so only people studying AIDS learn about it. But as the Alzheimer's team uncovers more raw facts about Alzheimer's, they find one that, combined with the AIDS discovery, could yield a cure for Alzheimer's. Then many years go by without anyone making the connection, or, in the worst case, the Alzheimer's team randomly rediscovers the same thing the AIDS team found years ago.

Where I think a research assistant GPT would really shine is in absorbing all of these independent data points and instantly making the connections. Even if it only speeds up research by a week, it would totally be worth it.