beezlebub33

beezlebub33 t1_jdqezp3 wrote

I recognize your point, and try to withhold my enthusiasm. But this really does feel different. I don't know if it will be the same sort of 'different' that the introduction of PCs, the internet, and cell phones caused, but it feels even more 'different' than that.

The world has changed a lot in the past 50 years: technologically, economically, environmentally, and socially. The widespread use of LLM AIs is going to change it again; the only questions are how much and how fast. I think more and faster. And whatever comes next is going to change it even more and even faster.

(And, as someone else pointed out, the difference between us and Christians is that we have data and can make plots of what has changed)

7

beezlebub33 t1_jdqecpn wrote

Simply a variation of the lottery fallacy: Something unlikely but wonderful happened, therefore I must be special.

But logically, someone must eventually win, and it's just blind luck who it is. You are a bundle of shortcuts, rough approximations, and biases that just happens to do well in the world. Let's hope that our progeny can do better.

18

beezlebub33 t1_jd9ftuv wrote

That sounds useful, maybe I should get one....

>A single unit costs anywhere from $100,000 to $375,000.

Oof, never mind. It might make sense for a community or an emergency department, but not for homeowners.

The open question is how this compares with other sources of the same products (electricity and water), and what those would cost. Assuming this needs to be pulled behind a large truck, the question is whether it makes sense to have the truck pull this, or to have it pull a diesel generator and a large tank of water.

3

beezlebub33 t1_j9ugt9v wrote

They released code to run inference on the model under GPL. They did not release the model itself, and they describe the model license as a 'non-commercial bespoke license', so who the hell knows what's in there.

You can apply to get the model. See: https://github.com/facebookresearch/llama but there's no info about who, when, how, selection criteria, restrictions, etc.

Model card at: https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md

(I'd also like to take this opportunity to remind people that the Model Card concept is from this paper: https://arxiv.org/abs/1810.03993. First author is Margaret Mitchell, last author is Timnit Gebru. They were both fired by Google when Google cleared out its Ethical AI team.)

5

beezlebub33 t1_j9ue4ov wrote

Slightly different things. That's more about episodic memory.

For life-long learning: no system gets it right all the time, and if there is a mistake that it makes, like misclassifying a penguin as a fish (it doesn't actually make this mistake), then there is no way for that mistake to get fixed. Similarly, countries, organizations, and the news change constantly, so it quickly becomes out of date.

It can't do incremental training. There are ways around this: some AI/ML systems do incremental training (there was a whole DARPA program about it). Or the AI/ML system (which is stable) can reason over a dynamic data set / database, or go get new info; this is the Bing Chat approach. It works better, but if something is embedded in the model's weights, it is stuck there until re-training.
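The "reason over a dynamic data set" idea can be sketched roughly like this (a toy illustration, not any real system's API; `fact_store`, `retrieve`, and `build_prompt` are made-up names):

```python
# Toy sketch of retrieval over a dynamic store: the model stays frozen,
# but the facts it sees in its prompt are kept current.

# Updatable store of facts; editing this requires no retraining.
fact_store = {
    "capital of brazil": "Brasília",
    "penguin": "a penguin is a bird, not a fish",
}

def retrieve(question: str) -> list[str]:
    """Return any stored facts whose key appears in the question."""
    q = question.lower()
    return [fact for key, fact in fact_store.items() if key in q]

def build_prompt(question: str) -> str:
    """Prepend retrieved, up-to-date facts to the model's prompt."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

# Fixing a "mistake" here is a data edit, not a weight update:
fact_store["penguin"] = "a penguin is a flightless bird"
```

The point of the sketch: corrections land in the data, so they take effect on the next query instead of waiting for a re-training run.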

1

beezlebub33 t1_j9t4s2s wrote

Flaws. Note that they are partly interrelated:

  • Episodic memory. The ability to remember person / history
  • Not life-long-learner. If it makes a mistake and someone corrects it, it will make the same or similar mistake next time
  • Planning / multi-step processes / scratch memory. In both math and problem solving, it will get confused and make simple mistakes because it can't break down problems into simple pieces and then reconstruct (see well-known arithmetic issues)
  • Neuro-symbolic. Humans don't do arithmetic in their heads, so why would an AI? Or solve a matrix problem in its head. Understand the problem, pass it off to a calculator, get the answer back, convert back into text. (See what Meta did with Cicero, the Diplomacy-playing AI, for a highly specific example of what to do)
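That hand-off can be sketched as a tiny tool-use loop (a toy illustration, not how any production system routes arithmetic; the regex and the `solve_arithmetic` name are invented):

```python
import re
from fractions import Fraction

def solve_arithmetic(text: str) -> str:
    """Find 'a op b' sub-expressions, hand them to an exact symbolic
    evaluator, and splice the answers back into the text."""
    def evaluate(match: re.Match) -> str:
        a, b = Fraction(match[1]), Fraction(match[3])
        op = match[2]
        if op == "+":
            result = a + b
        elif op == "-":
            result = a - b
        elif op == "*":
            result = a * b
        else:
            result = a / b
        return str(result)
    # The "language" side only has to *find* the arithmetic;
    # the symbolic side computes it exactly.
    return re.sub(r"(\d+)\s*([+\-*/])\s*(\d+)", evaluate, text)
```

Same shape as the Cicero idea: the neural part does the recognition and the wording, the symbolic part does the exact computation.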

6

beezlebub33 t1_j7fbjii wrote

"Henry Ford did nothing revolutionary, the engineering work in making a car isn't particularly difficult, it's just perceived that way by the public. There will be a half dozen other car manufacturers in 6 months."

LeCun is going too far in the opposite direction. I would not be surprised if he has access to systems at FAIR that could do something similar, so he dismisses the whole thing or misses the main point. But, like Ford, what OpenAI has done with DALL-E 2 and ChatGPT is make AI usable and available to us benighted common folk.

It doesn't matter whether Google and Meta not releasing something like this is due to a can't or a won't. It's all the same to the rest of humanity who can't use it in either case.

13

beezlebub33 t1_j6smj59 wrote

> Our energy grids are strained as people will pay the energy cost for having a virtual assistant and using it to improve their lives and corps having more powerful AI that strains grids further

I don't think so. Yes, energy costs are high in many places in the world, but there are places where electricity is cheap, whether because of government policy, a lack of environmental regulation, or local resources.

Just as tech support has been moved offshore (which is why your tech support person has an Indian accent), running a virtual assistant can be done wherever it makes sense from an economic point of view. Yes, you will need a good data connection, and you have to move servers there, but frankly it's easier and cheaper to ship those servers from China to someplace with cheap electricity than to the US or EU.

Expect a new set of countries that host AI server farms, just as there are a set of countries that operate garment manufacturing. R&D will still occur in developed countries, but 'production' will move.

2

beezlebub33 t1_j63o8lh wrote

>I’ve already debunked the photography comparison in another comment.

I've read all your replies, and no, you didn't. You dismissed it, like you have with most of the other replies. Your most coherent response that I can find is:

> Also the shift from painting to photography was a change of the use of 2 mediums, not the handing over of methods of creation from humans to automation.

I think that you are wrong about how AI works. As you yourself have pointed out, art and software development are different, and as a SW dev working in AI/ML, I can tell you that AI art is a tool, not a replacement.

Can you actually explain why going from painting to photography is simply switching mediums, but going from photography to AI art is not? Or, if you have made that argument somewhere, can you point to it?

6

beezlebub33 t1_j63koxe wrote

>The issue I have is that it is currently not a tool but a replacement, it’s taking creativity out of human hands.

I don't think so.

Art is a creative, human endeavor. AI art is, and will continue to be, a tool, though an incredibly powerful one. And a real artist can do much more with that tool than someone who is not an artist.

The same can be said about cameras and photography. The camera completely took away the manual, actually-make-the-thing aspect of creating art. Literally anyone can take a picture, call it art, and have it be as good as 'real' art in terms of technical specs. But it's not good art, because it doesn't have an actual artist with a vision creating it.

Of course, the camera was devastating for portrait artists, especially high-paid painters. If you want something to remember your children or grandparents by, it takes a millisecond to create an eternal, perfect (by some measure) image of them, and that will blow away anyone wanting to be the next John Singer Sargent. Instead we have Yousuf Karsh and Annie Leibovitz. Maybe AI art will replace them too, but that just means we'll have to create the next thing.

6

beezlebub33 t1_j5yqbuv wrote

>gary marcus' objections have nothing to do with world models,

I think they do. See: https://garymarcus.substack.com/p/how-come-gpt-can-seem-so-brilliant . GPT and other LLMs are not grounded in the real world, so they cannot form an accurate model of it; they only get secondhand information (from human text). This causes them to make mistakes about relationships; they don't 'master abstract relationships'. I know he doesn't use the term there, but that's what he's getting at.

Also, at https://garymarcus.substack.com/p/how-new-are-yann-lecuns-new-ideas he says:

>A large part of LeCun’s new manifesto is a well-motivated call for incorporating a “configurable predictive world model” into deep learning. I’ve been calling for that for a little while....

The essay isn't primarily about his thoughts on world models, but Marcus, for better or worse, thinks that they are important.

3

beezlebub33 t1_j5jf1c4 wrote

While it's theoretically possible, it's unlikely.

First, organisms evolve based on mutations and differential reproduction, so you would need the same sorts of mutations and the same sorts of selection pressures. Both are unlikely; the conditions just are not the same. Also, why did D go extinct? Whatever killed them off (overcompetition in their niche, some parasite, etc.) would affect a new D too. And of course mutations are random, so it's pretty much impossible to replay them exactly.

That said, we do have lots of examples of convergent evolution, where different organisms have evolved to fill in niches in different areas. See: https://en.wikipedia.org/wiki/List_of_examples_of_convergent_evolution . Let's say there is a land that doesn't have a large diversity of birds (say, the Galapagos). The first birds that arrive will radiate (diversify through evolution) to fill lots of different niches, such as eating nuts, eating fruits, eating insects, even though they had the same progenitor species. Interestingly, the evolved organisms filling the niche don't do it quite the same way, because evolution adapts what is at hand.

51

beezlebub33 t1_j59z3lq wrote

Trying to break ChatGPT is the way that you figure out what it can and cannot do. You find where it fails and publicize that, and then OpenAI can fix it.

However, the number of people trying to break it is dwarfed by the number of people trying to use it for other things, so it's not like this is the cause of it being busy. Just wait till there is a subscription model (which I'm guessing is coming very soon) and then you can use it all you want for productive things.

Right now, it's an openly available toy and you don't get to decide how we play with it.

4

beezlebub33 t1_j52d84h wrote

>If a member of a species was born with an extra chromosome, or two chromosomes fused, their offspring have a high change of being sterile. How could the increase of decrease of a chromosome become wide spread in a species if that happens?

I think I understood the question and answered it: 1. as the examples show, an offspring with an additional or fused chromosome isn't necessarily sterile; the change can be neutral; and 2. neutral mutations can become fixed.

The argument is quite similar to the one about mutations in general. The common opinion is that mutations are overwhelmingly bad and deleterious. They aren't. Most are neutral; as a result, most people carry mutations, often quite a few. Those mutations can become fixed simply because there are so many of them and they are not selected out. There are certainly bad mutations, which cause developmental or functional problems. They are sometimes really bad and really obvious, and people remember those. And sometimes mutations are good and are selected for.

Similarly, sometimes chromosomes fuse or split, and it doesn't make a difference. Sure, sometimes, in fact more often than not, they are bad and get selected out. But sometimes they are neutral, and sometimes the different number gets fixed. This is not unexpected.
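The "neutral changes sometimes get fixed" point is easy to see in a toy simulation (a minimal haploid Wright-Fisher sketch; the population size, trial count, and function name are arbitrary choices of mine):

```python
import random

def fixation_runs(pop_size: int = 50, start_copies: int = 1,
                  trials: int = 2000, seed: int = 0) -> float:
    """Toy Wright-Fisher drift simulation of a selectively neutral
    variant (e.g. a harmless chromosome fusion). Returns the fraction
    of runs in which the variant drifts all the way to fixation."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        copies = start_copies
        while 0 < copies < pop_size:
            freq = copies / pop_size
            # Next generation: each of pop_size draws inherits the
            # variant with probability equal to its current frequency.
            copies = sum(rng.random() < freq for _ in range(pop_size))
        fixed += copies == pop_size
    return fixed / trials
```

Most runs lose the variant, but a small fraction fix it purely by chance (for a neutral variant, roughly its starting frequency): no selective advantage needed.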

0

beezlebub33 t1_j4zurfq wrote

Your premise is incorrect. During reproduction, the chromosomes have to line up in order to produce offspring. The number of chromosomes matters, but it's not clear exactly how much difficulty a difference in chromosome number creates for reproduction, relative to other factors, and it depends on where and how the chromosome number changed. It is not the barrier to evolution that anti-evolution literature portrays it as.

People with Down's syndrome can and do reproduce, and they have an extra chromosome. Also consider Robertsonian translocations, which can reduce the number of chromosomes. See: https://en.wikipedia.org/wiki/Robertsonian_translocation

Of course, the most famous 'cross' is a male donkey and a female horse to produce a mule, which is sterile. However, there are a large number of equine species, and they have wildly different numbers of chromosomes. See: https://en.wikipedia.org/wiki/Equid_hybrid and https://en.wikipedia.org/wiki/Zebroid . Some of the crosses are fertile; for example, Przewalski's horse (66 chromosomes) and domestic horses (64 chromosomes) can and do produce fertile offspring.

Scientists can study the changes that have occurred in the number of chromosomes, their shape (lengths of arms, for example), banding patterns, etc. (this is called the karyotype of the organism) in related species to help understand their evolutionary history. See, for example: https://pubmed.ncbi.nlm.nih.gov/23532666/ which focuses on equines. The same thing can be done for much more distant species. See this paper, which reconstructs the chromosomes of a wide diversity of animals: https://www.pnas.org/doi/10.1073/pnas.2209139119 . Figure 2 in particular shows how the chromosomes line up, and what happened as they split, merged, grew, and shrank.

Summary: reproduction between individuals with different numbers of chromosomes can and does happen. The history of related (both near and far) animals provides evidence for what changes occurred in chromosome number (and shape, and banding, etc.).

82

beezlebub33 t1_j4wls9p wrote

>Kinda failing to consider that charging the EV is expected to create the peak demand.

How so?

Take a look at the duck curve. The idea is that you charge at work, which means the car charges during the middle of the day, when solar output is greatest.

Peak demand is in the evening, 6-9 pm or so. That's when you want to discharge back to the grid, and then charge the car again at 3-6 am.

The only overlap between high grid usage and people wanting to charge their car is immediately after they get home from work, and it's easy to disincentivize that with variable rates.
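That schedule can be sketched as a simple time-of-use rule (the hour windows are illustrative, not taken from any real tariff; `ev_action` is a made-up name):

```python
def ev_action(hour: int) -> str:
    """Toy time-of-use schedule for a grid-connected (V2G) EV.
    Takes an hour of the day (0-23) and returns what the car does."""
    if 10 <= hour < 15:   # midday: solar is plentiful, charge at work
        return "charge"
    if 18 <= hour < 21:   # evening peak (~6-9 pm): sell back to the grid
        return "discharge"
    if 3 <= hour < 6:     # pre-dawn trough: cheap overnight charging
        return "charge"
    # Otherwise hold; variable rates make charging right after the
    # evening commute unattractive.
    return "idle"
```

With rates shaped like this, the car soaks up midday solar and feeds the evening peak instead of adding to it.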

8

beezlebub33 t1_j1lus6k wrote

Gattaca

Genetic modification of people is coming, and the normals will not be able to compete. Any business, given the choice between someone who has been modified to be smarter, better looking, and harder working and someone who has not, will obviously choose the modified person. The un-modified will be left behind and won't even be considered. Unless they take extreme measures, of course.

4

beezlebub33 t1_j04k5c9 wrote

That would, IMHO, be a big win. Even if the scaling hypothesis is correct, why would you want to solve the problem that way when there are probably far better ways to solve it?

Sure, we could fly an interstellar spacecraft to another solar system, but it would be a bad idea, because in the time it would take to get there, better ways of getting there would be invented. If you left for the stars now, people would be waiting for you when you got there.

In the same way, simply scaling compute and data may get you to a certain amount of intelligence. But the costs and effort would be huge. It would probably be better to spend that time and effort (and money) on making the underlying ideas better. And even if it turns out that, yes, we have to scale, waiting until computational costs come down further is probably a good idea.

3