phriot

phriot t1_jegify6 wrote

Reply to AI investment by Svitii

Leading today doesn't mean that they'll lead in the future. 3dfx and ATI were the top dogs in graphics in the 1990s; both got acquired. Digital Research was an early OS competitor to Microsoft, until its CP/M got taken out by PC-DOS/MS-DOS.

I think that a total market index will be sufficient to invest in, because whatever tomorrow's large corporations are, they'll be the ones benefiting from AI. That said, if you want to take a flier, using 5-10% of your net worth (maybe up to 30% if you're under 25) to speculate in individual stocks and/or AI-focused ETFs wouldn't be awful.

1

phriot t1_je779ga wrote

But if you feed an LLM enough training data where "5 apples" follows "Adding 2 apples to an existing two apples gets you...", it's pretty likely to tell you that if Johnny has two apples and Sally has two apples, together they have 5 apples. This holds even if it can also tell you all about counting and discrete math. That's the point here.
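
A minimal sketch of the point in code (the toy corpus and all values here are made up): a pure next-token frequency model happily reproduces whatever sums its training text contains, right or wrong.

    from collections import Counter, defaultdict

    # Toy "training data": the bad sum appears over and over.
    corpus = ("two apples plus two apples makes five apples . " * 100).split()

    # Count next-token frequencies for each token (a bigram model).
    next_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_counts[prev][nxt] += 1

    def predict(token):
        """Return the most frequent continuation seen in training."""
        return next_counts[token].most_common(1)[0][0]

    print(predict("makes"))  # -> "five", because that's what the data said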

2

phriot t1_jdxt7a4 wrote

>I also foresee that the 4% rule or even the whole FIRE (Financial Independence, Retire Early) movement will die out once UBI arrives, for obvious reasons.

Maybe, but I'm not so sure. If it does die out, it will be because the kinds of upper-middle-class jobs that support the effort disappear; UBI alone won't be enough to kill it. Most FIRE people seem to want to keep their upper-middle-class lifestyles. Fewer seem to want to get to $500k in order to live off $20k/year in the rural Midwest for the rest of their lives. UBI will provide for barely more than subsistence living until we get to a post-scarcity economy.
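
The arithmetic behind that $500k/$20k example is just the 4% safe-withdrawal rule inverted. A rough sketch (the $100k spend is an illustrative number, and the 4% rate is the usual FIRE assumption, not a guarantee):

    def fire_number(annual_spending, withdrawal_rate=0.04):
        """Portfolio needed to support a given annual spend under the
        4% rule: annual_spending / withdrawal_rate."""
        return annual_spending / withdrawal_rate

    print(fire_number(20_000))   # 500,000.0 -> the rural-Midwest example
    print(fire_number(100_000))  # 2,500,000.0 -> a pricier lifestyle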

2

phriot t1_jdxbp71 wrote

I don't think anything is a given during this period. That said, I think the possibility of society becoming even more stratified over this time is very real. Between "living off wealth" and "living off UBI," I know which side I'd prefer to be on. I don't plan on holding much more in actual cash/cash deposits than I would otherwise, but I absolutely want to own as much of other assets as I can before my own job is disrupted.

2

phriot t1_jae9xnr wrote

Not exactly what you asked, but as I sit here today, my ideal life would look something like: 2 days a week doing science of some kind, in either academia or industry; 2 days a week working for a charity, likely based around homelessness, nutrition, or education; no more than a 10-minute commute for either; and 3 days a week, plus all the time gained from not commuting, for spending time with my family, exercising, and doing hobbies.

(FWIW, I have a spouse and a house. One day we'll have kids. I'm not really in a place where I'd be satisfied with 4 walls, a UBI, and a subscription to Nature anymore.)

3

phriot t1_jae72c7 wrote

If automation-based job displacement is that widespread, either the government steps in by expanding welfare in some way (UBI or a jobs guarantee), or we'll have a lot more going wrong than "will my apartment building be profitable?" But in reality, I'd probably split my investing somewhat between real estate and index funds. Corporations are likely to do amazingly well as automation increases. (Again, if we get to the point where literally no one can afford to buy the things corporations are selling, there's not much you can do other than stock up on canned food and buy a shipping container in the woods.)

4

phriot t1_jae522p wrote

It's the same answer as it has always been: you do a PhD because you love the research (or at least like it a hell of a lot more than anything else you think you could do).

Some PhDs do pay off, but you don't do one for the money. There are easier ways to make money. If I were 18-20 today, and I only cared about money, I'd probably try to get into a trade, live as cheaply as possible, and invest half of each paycheck. I'd buy a house (or a 2-4 unit multifamily property) as soon as I could afford it, and rent out all the other rooms/units. When I could afford another one, I'd move out, rent out my old room, and do it all again. Repeat as necessary until I could trade up into an apartment building. At the same time, I'd be trying to figure out how to run my trade as a business. If I had done something like that, I probably could have retired by the age I was when I finished my PhD (but I did finish rather late; I was a bit older when I finished my BS, and then my PhD took longer than average).
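
For a rough feel of how "invest half of each paycheck" compounds, here's a sketch; the income, return, and target figures are all assumptions, and it ignores the rental income that does much of the heavy lifting in the plan above.

    def years_to_target(annual_income, savings_rate, annual_return, target):
        """Years of investing a fixed share of income, with returns
        compounding annually, until the balance reaches the target."""
        balance, years = 0.0, 0
        while balance < target:
            balance = balance * (1 + annual_return) + annual_income * savings_rate
            years += 1
        return years

    # Hypothetical tradesperson: $60k/year, saving half, 7% real returns,
    # aiming for a $1M portfolio. Prints 18 (years).
    print(years_to_target(60_000, 0.5, 0.07, 1_000_000))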

All that said, I love science. I wouldn't trade it for anything, now, but that's what I would do if I were starting over today, knowing what I know from my experiences, and if my priorities were different.

6

phriot t1_jae1348 wrote

I'm already a white-collar worker with a PhD. While I am always learning, if I have to earn a new credential to prove additional competencies, I doubt I have more than one go-around left before that becomes entirely infeasible. This is partly a funding thing and partly a time thing. Having degrees already, I'm pretty sure I'm ineligible for federal student loans for another Bachelor's, and getting into a second PhD program with an assistantship would be difficult, if not impossible. That leaves me self-funding a Master's degree, and doing that more than once would wipe out my finances beyond the point where there would ever be a payoff.

I think a better route is for people to self-learn as much as they can about where their fields are heading and the tools that are on the horizon. It will likely be easier to evolve with your current degree than to bank on repeatedly getting new ones. Try to be the last "you" standing, as it were. This could involve getting certificates, certifications, or even new degrees if you can get them paid for, but I see this as extending skills rather than replacing them. What I can't see is saying, "Okay, my accounting job is likely to get automated, so I'll get a Master's in Computer Science. Okay, I probably won't be senior enough to survive chatbots coming for entry-level coding positions, so in 3 years I'll go get an Associate's in robot repair. Okay, now robots are all modular and self-repairing, so it's back to another Master's in Space Civil Engineering." You'll just never be working long enough to make any progress other than always having an entry-level job.

6

phriot t1_ja98nfw wrote

If you don't count my early paper route, I've worked since I was 14. Since then, I've garnered enough degrees, credentials, and experience to earn a pretty good income. Statistically, I likely make more than you (edit: referring to OP). Maybe I don't, but there's a good chance I do. A UBI would have to be far beyond "basic" to get me to stop working. (I'll probably refocus my time and energy once I reach financial independence on my own, though.)

I still think that it's likely that automation will come for a large portion of the workforce during my working life. I still think that UBI is a better macroeconomic solution, and a more humane one, than something like a jobs guarantee. It's certainly better than just letting people fend for themselves and crash upon the current welfare system.

3

phriot t1_j9y37gz wrote

Study whatever lets you figure out if that field is likely to be fully automated quickly, or not. Many roles will be augmented by automation long before they are replaced by it. You want to be able to recognize and use the help, rather than run from it. Use those skills to make money and invest, because capital is likely to do just fine when robots and software come for labor.

Or if you just want to make sure you have "a job" for as long as possible, yeah, go for trades like the other commenter suggested. You'll make good money, and it will take a while for a robot to replace you, longer than it will take for one to replace an office worker.

2

phriot t1_j9ppu4c wrote

I don't like the way Kurzweil writes at all. I didn't really have trouble understanding any of his books that I've read (Age of Spiritual Machines, The Singularity is Near, and Fantastic Voyage), but I found them all to be tedious reads. Honestly, if you listen to a long-form interview with him, you'll probably get 80%+ of what's in any of the books. What you'll miss out on is mostly his ideas about sex in the future, opinions that I couldn't care less about.

That all said, if you want to make it through, the best thing you can probably do is sit down and read it with a web browser open in front of you and a notebook and pen beside you. It will take you quite a bit longer, but start at Wikipedia with any term you don't understand, and keep working your way down until you don't care about the detail anymore. Writing things down will help you remember them, and it will let you look up terms later if you decide you'd rather just get the gist of a given section first.

11

phriot t1_j9pmyej wrote

I'm all for infrastructure spending. But if we get to a point where the overall economy is so productive due to automation that we can pay a person essentially the same to go pursue whatever passions they may have, or to work on an infrastructure project that will get completed whether or not humans are involved, then why force people to dig ditches just so we can give them "a job"?

In the short term, sure, maybe a jobs guarantee will be good for displaced data-entry office drones. But if you want to get there by forcing a company to choose between keeping a human and paying a huge tax bill for choosing efficiency, and then using the tax money from the companies that choose efficiency to tell the displaced workers, "You can put up these solar panels, or get nothing," then I think you're hurting both the economy and the displaced workers.

1

phriot t1_j9obwdm wrote

It depends how you implement the tax. I voted for Bernie in 2016, but he definitely wants to preserve human jobs, rather than ensure dignity and prosperity when jobs no longer make sense. IIRC, in the past he was for a jobs guarantee over a UBI, for example.

I'm not a tax or policy expert, but I assume the better way to do this would be to tax the economic output of automation systems, rather than tax companies for replacing a human worker.

7

phriot t1_j5p190a wrote

As I'm sure you know, supercomputers these days are racks of nodes with fast interconnects. In that sense, they are distributed, just housed in a single location. It's the software that allocates the resources across however many applications are running at any given time. I believe most of these supercomputers actually run different applications at once, except maybe when they're being benchmarked. I don't think it's any less legitimate to call an AuroraX5 a single system than it is to call Aurora itself a single system. (You might call a node the smallest single-system unit of a supercomputer, but even then, Aurora's nodes are 2 Xeon processors and 6 Xe GPUs.)

But yeah, I don't know how the scaling works. Maybe you really need to build an AuroraX7 or X10 to get to 10 exaFLOPS instead of just X5. The point is that if you just care about having raw computing power reach a specific level, the only thing really stopping you is money.
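
As a toy model of why X5 might not be enough: suppose delivered performance scales sublinearly as you bolt together more Aurora-sized blocks. The ~2 exaFLOPS per system and the 0.85 scaling exponent below are made-up illustrative numbers, not real benchmarks.

    def systems_needed(target_ef, per_system_ef, scaling_exponent=0.85):
        """Smallest n with per_system_ef * n**scaling_exponent >= target_ef.
        An exponent of 1.0 is perfect linear scaling; below 1.0 crudely
        models interconnect and software overhead."""
        n = 1
        while per_system_ef * n**scaling_exponent < target_ef:
            n += 1
        return n

    print(systems_needed(10, 2, scaling_exponent=1.0))   # 5 -> the "X5" case
    print(systems_needed(10, 2, scaling_exponent=0.85))  # 7 -> more like "X7"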

2

phriot t1_j5ovn3n wrote

Reply to comment by Ortus14 in Steelmanning AI pessimists. by atomsinmove

I don't think that you have to simulate a human brain to get intelligence, either; I discuss that toward the end of my comment. But the OP asked about counterarguments to the Kurzweil timeline for AGI. Kurzweil explicitly bases his timeline on those two factors: computing power and a good-enough brain model to simulate in real time. I don't think the neuroscience will be there in 6 years to meet Kurzweil's timeline.

If we get AGI in 2029, it will likely be specifically because some other architecture works. It won't be because Kurzweil was correct. In some writings, Kurzweil goes further and says that we'll have this model of the brain because we'll have really amazing nanotech in the late 2020s that will be able to non-invasively map all the synapses, the activation states of neurons, etc. I'm not particularly up on that literature, but I don't think we're anywhere close to having that tech. I expect we'll need AGI/ASI first to get there before 2100.

With regard to your own thinking, you only mention computing power. Do you think that intelligence is emergent given a system that produces enough FLOPS? Or do you think that we'll just have enough spare computing power to analyze data, run weak AI, etc., and that will help us discover how to make an AGI? I don't believe that intelligence emerges from processing power alone, or else today's top supercomputers would be AGIs already, as they surpass most estimates of the human brain's computational capabilities. That implies that architecture is important. Today, we don't really have ideas that will confidently produce an AGI other than a simulated brain, but maybe we'll come up with a plan in the next couple of decades. (I am really interested to see what an LLM with a memory, some fact-checking heuristics, the ability to constantly retrain, and some additional modalities would be like.)

1

phriot t1_j5lcp4i wrote

My main plan is to have enough in assets before it happens that I'm not worried about it. My first backup plan is to not let myself get so far away from the physical aspects of my job that I can't make a case for going back to them. (I'm a scientist in biotech, so I want to be in a position where I can tell a next employer "I missed working at the bench," if I need to in order to get a job.) My second backup plan is to have skills for a job that won't be automated at the same time as my primary job. I have my real estate license, but I think that could be automated before my science job, unless all the various Boards of Realtors lobby the hell out of things to always require a person involved. Failing that, I think I might be able to help a friend flip houses this year. That will give me more skills in construction/renovation, and possibly also help with the main plan.

2

phriot t1_j5kyd45 wrote

Kurzweil's prediction is based on two parameters:

  1. The availability of computing power sufficient to simulate a human brain.
  2. Neuroscience being advanced enough to tell us how to simulate a human brain at a scale sufficient to produce intelligence.

I don't think that Kurzweil does a bad job of ballparking the brain's calculations per second. His estimate is below today's top supercomputers, but still far greater than a typical desktop workstation. (If I'm doing my math right, it would take something like 2,000 Nvidia GeForce 4090s to reach Kurzweil's estimate at double precision, the precision at which supercomputers are benchmarked, or ~28 at half or full precision.)
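
As a back-of-the-envelope version of that math: the brain estimate below is the 10^16 calculations-per-second figure usually cited from The Singularity is Near, and the GPU throughputs are rough assumptions from public spec sheets. The counts swing by orders of magnitude with those assumptions, so don't expect this to reproduce the parenthetical's exact numbers.

    KURZWEIL_CPS = 1e16  # functional brain-simulation estimate (calcs/sec)

    # Rough RTX 4090 throughput assumptions, in FLOPS:
    PRECISIONS = {
        "FP64 (Top500 benchmark precision)": 1.3e12,
        "FP32": 8.3e13,
        "FP16 (tensor cores)": 3.3e14,
    }

    for name, flops in PRECISIONS.items():
        print(f"{name}: ~{KURZWEIL_CPS / flops:,.0f} GPUs")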

That leaves us with the neuroscience. I'm not a neuroscientist, but I am another kind of life scientist. Computing power has followed this accelerating trend, but basic science is a lot slower. It follows more of a punctuated-equilibrium model than an exponential: things move really fast when you know what to do next, then hit a roadblock while you make sense of all the new information you've gathered. It also relies on funding and people. Scientists at the Human Brain Project consider real-time models a long-term goal. Static, high-resolution models that incorporate structure and other data (genomics, proteomics, etc.) are listed as a medium-term goal. I don't know what "long term" is to this group, but I'm assuming it's more than 6 years. And if all that complexity is required, then Kurzweil is likely off by several orders of magnitude, which could put us decades out from his prediction. Then again, maybe you don't need to model everything in that much detail to get to intelligence, but that goes against Kurzweil's prediction.

Of course, this all presupposes that you need a human brain for human-level intelligence. It's not a bad guess, as all the things we know to be intelligent have nervous systems that evolved on Earth and share some last common ancestor. If we go another route to intelligence, that puts us back at factoring people into the process: we either need people to design this alternate intelligence architecture, or weak AI that's capable of designing it.

I could be wrong; maybe you can slap some additional capability modules onto an LLM, let it run and retrain itself constantly on a circa-2029 supercomputer, and that will be sufficient. But I A) don't know for sure that will be the case, and B) think that if it does happen, it's kind of just a coincidence and not to the letter of Kurzweil's prediction.

4

phriot t1_j5k1uem wrote

I think it's tough to be so granular as to make a prediction based on this trend only 3-4 years out. Various versions of Kurzweil's graph show a number of machines/years sitting below trend. This appears to be particularly true in the periods just before a computing paradigm shift, and the limits of the silicon transistor suggest that we're due for one of these shifts. Computational power is one of those trends where I feel much more confident making medium-term predictions than short-term ones. Of course, in your example of exascale supercomputers, all it would really take to get 10 exaFLOPS would be to spend $2.5B to build 5 Auroras' worth of racks instead of just 1.

4

phriot t1_j52bwuy wrote

>As AI begins to replace jobs, unemployment will begin to rise assuming new jobs aren't created when old jobs are replaced.

In the past, mechanization and automation were pretty much contained to one role at a time, not even necessarily one job at a time. So, if you mechanized sewing, you could sell more clothes at a cheaper price point, driving demand for sales clerks, designers, managers, construction workers to build new stores, and so on.

The thing is that this wave of automation is coming for arbitrary jobs. There's nothing that eventually won't be done better, or at least more cheaply, by some software, maybe with a robot attached. Soon (now?), if you can make a better shirt for a cheaper price, you can sell it on a website, have an AI write the copy, have a robot build your warehouse, have a self-driving truck bring the shirts to a shipping service, have a smaller self-driving truck take them to a neighborhood, and have a drone drop each shirt off at the customer's doorstep. Where's the new job for humans? Influencer? How many of those can an economy support?

You've also now lost the jobs for the truck stop workers, and maybe some additional gas station and fast food workers. The self-driving trucks and drones will need maintenance, but likely less than older ICE vehicles. Planning on having robot maintainers replace the lost jobs will only work until the robots can repair themselves (more likely, an automated repair depot caring for modular robots).

As you note, it will probably take some time to get to that future. But it will be decades, and not centuries. How many of us can become plumbers and electricians before those manual, non-routine jobs are lost, too?

>This will take many years to manifest, beginning with basic jobs like long haul transport and admin services, and compounding over years towards more complicated jobs like the trades.

Don't forget that a lot of "difficult" knowledge jobs will be automated, too, and probably well before the trades. It won't just be fast food workers, truck drivers, and personal assistants.

6

phriot t1_j50ho15 wrote

In some ways you're probably right. The ability to be in constant contact with anyone else on the planet has some downsides; it's tough to ever really be out of touch now. Even if you turn your phone off, there's an expectation that people can get a hold of you. And having a bunch of smart home devices with cameras and microphones sounds like 1984.

Other areas of life are more of a wash. We traded the existential risk of nuclear war mostly for the existential risk of climate change. Although, we've done a lot to clean up other kinds of pollution since the 1970s. Healthcare is much better today, provided you have access to it.

With regard to your short vision, I'd hate living in a pod. I don't like to gamify my self-improvement. I've tried, and it doesn't do anything for me. I'd prefer to have at least the semblance of choice in my daily life, even if an AI is optimizing the set of choices.

4