nblack88 t1_j9djbav wrote

>The pseudo religious proselytizing is the most boring part of the community.

I agree completely. It's also a practical problem: it skews perception and compromises the community's ability to have nuanced discussions about these topics.

1

nblack88 t1_j9diy1o wrote

I understand how you feel, and you've already had some good and interesting responses. I'd like to add another facet:

What you're describing isn't unrelenting hope, and you aren't alone on this sub in feeling that way. It's despair. A lot of people here are unhappy with their lives for a medley of reasons, and are hoping the singularity will bring a fantastical world where they can be happy. It's possible. It's also unhealthy, and does little but create a feedback loop that justifies their pain, instead of encouraging them to face it, and make something good for themselves.

Desperately wanting something to believe in is no bad thing. It just means you're human, and you're alive. That's good news! It also means you're responsible for living that life as best you can. That sounds silly, but stay with me. If you want something to believe in, cultivate some interests to identify your beliefs and then work to fulfill them. You don't need to start with some broad, sweeping ideal. Start small and practical. "I want to get in better physical shape." Do that. Then do the next thing, and the next. Along the way you'll find that Thing, that idea, or that faith that inspires you.

But you won't find it being passive. Don't wait for the singularity to have things happen to you. Go out and happen to things.

22

nblack88 t1_j6nsfwb wrote

Companies reinvest earnings all the time. Some reinvest 100% of the profit they earn. Some reinvest a percentage, and then pay a dividend. If your question is: Why don't these organizations ever stay non-profit, then the answer is: They'd never have the funding to exist in the first place. If they were founded as a non-profit, they don't currently generate enough revenue to pay the cost of building and maintaining these models, so additional investment is needed. Investors want a return on their investment, so for-profit is the only path forward.

7

nblack88 t1_j2c3gld wrote

Valid point. I stopped short of changing the baseline of the human experience, because it's too far beyond what we can fathom with any degree of reliability. We have theoretical guideposts in most artistic mediums about FDVR, anchored in a human experience we can understand. Moving beyond that enters the realm of something akin to Transcendence.

Inevitably, what it means to be human will change, and we may manipulate and augment our cortex and neocortex to the point where we cast away the need for a human relationship, or they move beyond any basis we experience now.

I disagree that it's an inferior experience. We won't know that until we reach the next level, and by then, words like "inferior" and "superior" may have lost any meaning. I quite like being human, with all the mess and the animal impulses. There are some things I would change, add, or tweak. One day, should I live long enough, I'll likely decide to shed our current understanding of humanity to go on the next adventure of existence, but here today, I see no benefit in categorizing the whole experience as inferior. Only the bits and pieces I'm looking to change.

2

nblack88 t1_j2b4up8 wrote

Your thoughts are well-founded. I think they should have the proper framework, though. What you're describing will be an issue, but I think that in relation to humanity as a whole, it's a short-term issue.

Someone mentioned below that AI would provide the appropriate challenges to stimulate the user, thus not providing a frictionless existence. Your reply indicated it goes back to gamification. At this point in our current existence, gamification is largely a philosophical concept, and life is gamified all the time: going to school, conducting job interviews, learning techniques to improve our social lives, exercise, finance...every meaningful aspect of life has elements of gamification.

By the time we have FDVR, and AI can simulate realities as complex as our own, I would refer to our reality as Base Reality, or Reality Prime. Any successive "realities" generated in FDVR will be considered more, less, or of equal importance to Prime, depending on the individual. Currently, what is Real is everything we can perceive and process with our senses, in addition to the things we can and cannot control. Both within ourselves, and the world around us.

That groundwork out of the way, I believe this will largely go the way social media has:

The Social Media Age was in full gear by 2009, when Facebook had the most active userbase and reached the most people. In the roughly 13 years since, we've become aware of the advantages and perils of social media, and there is now a growing focus on bringing awareness to these issues and their effects on humans. It's part of the process, and there are parallels in the growing pains throughout the history of technological evolution.

FDVR will largely be the same. Those looking to escape Prime will spend as much of their time in FDVR as they can. They'll start with worlds they have total control over. Then they'll get bored, and add complexity to those worlds by having AI provide challenges. Eventually, humans will want to form attachments to individuals with whom they have no agency to control.

If we can solve aging and longevity, then many of the people most prone to this escapism can make this journey, eventually returning to the desire to form 'organic' attachments as people changed and strengthened by their experiences in FDVR. If not, we may lose some people to it over a few generations. It's happened before, and it'll happen again. I don't see these fears ruining us as a species, or as a civilization. It's just another evolutionary crucible in which to become better than our former selves.

2

nblack88 t1_j2azbfl wrote

I think that's part of the point OP is making with their post: That people should have relationships with AI and humans, not AI instead of humans. Human relationships are messy, and interacting with people who are rude or tiresome is difficult. But having these interactions and maintaining a healthy mindset despite them is necessary for personal growth. Using AI as a crutch to substitute for human relationships, instead of as a tool to learn and grow, is where we get into trouble. Failure and suffering are a part of growth. I think that's the point OP is making here.

2

nblack88 t1_j15fqbo wrote

I didn't use ChatGPT to write it, but the snark was fair! I hate to think of the day that all people who write proper English, if a touch formally, are robots! :P

All of your points are valid. This is the cyclical nature of the spread of new technologies. People ignore the shortcomings for the presentation. The excitement they hear from others--which in the Digital Age is unbound by geography or significant latency--shapes their perspectives, and so on.

This happens with every major shift. I remember 56k dialup internet felt amazing when the average user got hold of it, even as its flaws and limitations grated on me. I maintain my opinion that yes, these AI systems have issues, but the things they can do are worth the positive buzz. It's also important to remember that the singularity subreddit has a major hopium bias toward "AI will solve incredible problems for us tomorrow, life will be utopian and amazing!!!" So...the sample here definitely trends positive. Go over to Futurology and it's the opposite: the most upvoted comments are doom and gloom 24/7.

These implementations, as you said, are not ready for primetime, or to be used by the average person in their day to day activities. DALL-E, ChatGPT, et al. are just proofs-of-concept. My excitement outweighs my understanding of the flaws and limitations, because I'm looking iterations down the line. I've played with ChatGPT for a while now. It's really just a novelty for me. But version 5.0? Who knows?

3

nblack88 t1_j14hu1f wrote

I agree with you that it lacks creativity. Mediocre and stupid are takes that could be improved on.

ChatGPT is good at the purpose its developers designed it for: helping users learn new information through dialogue. It's the most advanced tool of its type, for the moment. While we could ultimately Google more thorough and comprehensive answers to our questions, ChatGPT takes roughly 4 seconds to produce a generic overview, and it performs extremely well for beta software that's analogous to a newborn baby.

I think your point--which is a good one--is that ChatGPT isn't the AI that other enthusiastic users and clickbait journalists are hyping it up to be. It's really just a better version of Siri, or Google Assistant, in some respects. The fact that it's performing as well as it is--despite its many flaws and limitations--is pretty amazing. In addition to the fact that it's breaking into more mainstream awareness, which is bringing more attention to AI as a whole.

Dealing with the hype and excessive exuberance surrounding this advance can get pretty old after a bit. I get that. I don't think it serves us to say that the advance is mediocre because we're dealing with many people's first exposure--and first real excitement--to an application of AI that they can understand and play with. As iterations continue, this might level out.

7

nblack88 t1_ixjne8y wrote

My guess for the near term--after AI can generate convincing media--is no. There isn't an absolute answer to questions like this. Most likely, man-made media will become a niche good: people will still make media for those who value consuming media made by other humans. My basis for this belief is that there are many communities full of enthusiasts who still use out-of-date technologies even though more advanced options are available:

People still use rotary phones for fun, even though we have smartphones. People still play games on the Super Nintendo, even though we can emulate all those games on phones/computers/handhelds. Movies are still made on film, reenactors follow old lifestyles to roleplay...and so on.

Mass media might become obsolete, but in the near term, it won't become extinct.

9

nblack88 t1_ixiviel wrote

I'm biased against age-related death. That aside, the premise--die at the current life expectancy of roughly 80 years, or live forever--is a bad one. It's too extreme a juxtaposition to be useful. "Forever" is a catch-all concept that doesn't do what a good question is meant to do:

Keep the questioner focused on addressing the primary concern. In Lex's case the question appears to be: Die at the current natural lifespan, or don't?

A healthy person can always choose to die. A dead person cannot then choose to live. The better the question, the better the answer.

My opinion: Lex's belief that death gives life meaning is premature and romantic. I have a lot of experience with death. I haven't found anything romantic about it. That notion is best left at the distance of fictional representation. I would much rather live a healthy life independent of death mandated by causes that seem preventable, like pathologies related to aging. I want the possibility of existential boredom. If that becomes untenable, I can always choose to die. That's the point: When it comes to death by aging, I am pro-choice.

4

nblack88 t1_ix95clk wrote

The positive attitude in the community is one of the subreddit's best qualities. I don't always share the wild optimism that AGI will magically solve all our problems, but the shortcomings of this community are far, far outweighed by the benefits. I much prefer it here to Futurology. If you scroll that subreddit for a while, you'll find some good comments, but most are buried under the same lazy, negative takes (and some well-thought-out negative arguments, too!).

Thanks for being a great contrast and palate cleanser, everyone.

33

nblack88 t1_ix5an9x wrote

Both. Unless you have enough money to start your own private equity firm, you could invest indirectly by investing in the firm itself, e.g., Berkshire Hathaway. There are also mutual funds and ETFs that invest in private companies, which is another way to gain exposure. Note that these have investment minimums; don't hold my feet to the fire on it, but unless I misremember, Vanguard offers one with a minimum around $700k. Many options require one to be an Accredited Investor. To qualify, one generally needs ~$200k in annual income, net worth over $1M (excluding a primary residence), and/or certain financial-industry licenses.

Investing in private companies isn't meant for the average retail investor, and unless one invests as part of a larger fund, the companies would likely be uninterested in smaller investment amounts.

2

nblack88 t1_ix4c5ej wrote

Disclaimer: I am not a fiduciary; this is not financial advice.

There are two approaches to this:

  1. High risk, high reward.
  2. Low risk, lower, stable reward.

Option 1 means investing in companies that are publicly traded (or will be), and hoping the share price rises to a level you're content to sell at. There are ways to buy shares in private companies, but I won't go into that; it's beyond the basics. Don't invest anything you can't afford to lose, and know that you'll probably lose. Even if you read the prospectus, understand technical analysis, have reliable news sources, and have great timing...you'll still probably lose. More people lose money picking single stocks than make it. If you're prepared for that, then go for it. This isn't just for OP, who probably knows this, but for other readers as well.

Option 2: An ETF or mutual fund. You can take the really safe route and invest in a vehicle that tracks the whole market, like VTSAX. As long as it exists, it will automatically price in the gains and efficiencies made by AI that are reflected in the market.

Or you could invest in an ETF that weights heavily toward tech, at the expense of greater volatility. If you're at a high level, you can essentially build your own bucket. I'd suggest finding a bucket that already contains the companies you're interested in, and then DYOR. My criteria for investing in an ETF/Mutual Fund:

  1. My interests and the company's interests are aligned.
  2. Low fees.
  3. Steady returns.

Using VTSAX as an example: VTSAX is a mutual fund offered by Vanguard, and the ETF equivalent is VTI. Vanguard is owned by the investment vehicles it offers, which aligns our interests. VTSAX has a low expense ratio of 0.04%, so it's cheap, and it has returned an average of approximately 7% a year. Numbers below:

https://investor.vanguard.com/investment-products/mutual-funds/profile/vtsax#performance-fees
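As a rough illustration of how the return and the expense ratio interact, here's a quick compound-growth sketch. The ~7% average return and 0.04% expense ratio are the figures discussed above; the $10,000 starting balance and 30-year horizon are made-up inputs for illustration, and past returns don't predict future ones.

```python
# Compound growth net of fees. Illustrative only, not financial advice.

def future_value(principal: float, annual_return: float,
                 expense_ratio: float, years: int) -> float:
    """Compound `principal` for `years` at `annual_return`, net of fees."""
    net_return = annual_return - expense_ratio
    return principal * (1 + net_return) ** years

# $10,000 at 7% gross return and 0.04% fees, left alone for 30 years:
print(f"${future_value(10_000, 0.07, 0.0004, 30):,.0f}")
```

The point the sketch makes: at these fee levels, the expense ratio barely dents long-run compounding, which is why criterion 2 (low fees) matters so much when comparing funds with higher ratios.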

Best of luck.

17

nblack88 t1_iwzjf97 wrote

An increase is possible, and if all goes well, I expect it. It will still take time, unless we've dramatically increased the efficiency, scaling, and production of our various "hard" networks: Transportation, manufacturing, distribution, et al.

I hope by the time we have AGI and are ready to implement it in some/most aspects of society, we'll have these increases to facilitate the improvement.

1

nblack88 t1_iwhkuel wrote

I tend to follow John Carmack's opinion on this: Having AGI doesn't actually herald instantaneous changes in our infrastructure or daily lives. It will take time to implement these advances, and build out the framework that AGI is incorporated into. The AI may be capable, but humans take time to allocate capital, achieve regulatory compliance, and execute those advances. It's coming, faster than we realize. But there is no magic bullet for deployment.

13

nblack88 t1_ivp4dac wrote

There are three things to unpack here that I think will better answer your question:

  1. Bias. Many people who believe the singularity will occur also believe it will occur sometime around 2045. It is commonly believed that AGI is a necessary precursor to the singularity, and many of the popular experts in the field believe we'll have AGI of some sort between 2035-2045. There's a member of this subreddit who helpfully chimes in with a list of each expert and their predictions. Wish I could remember their name, so I could tag them. Bias also works in the opposite direction. Negative bias permeates every facet of our culture, because we have a 24/7 news cycle that perpetuates that bias to make money. We believe everything is getting worse, but it's actually getting better in the long-term.
  2. Predictions. We're pretty useless at predicting events 20 or more years into the future; even 10 years is exceedingly hard. I was alive in '96, and I didn't imagine smartphones by '06--I thought they'd take longer. There's a lot of evidence people can cite to support their positions for or against any date for AGI. Truth is, nobody knows, so pick whichever one aligns with your worldview, live your life, and see what happens.
  3. Choice. Speaking as someone who believes the technological singularity is coming...it's more fun. I can't tell you when, or how. It just means I live in a more interesting world when I choose to believe we're headed toward this thing. Nobody in STEM is any better at predicting 20 years out than anyone else, so each camp could be right or wrong. Probably both.

8

nblack88 t1_iuwvvpf wrote

Residents DON'T want them? That surprises me. Can you give insight as to why?

I'm a big fan of Dark Sky friendly lighting, and donate to the International Dark Sky Association (IDA) sometimes. Every resident I've spoken to who experienced the transition has enjoyed the new lighting, provided it's implemented well.

1

nblack88 t1_ir5kn42 wrote

Then it's fair to assume you're in your 20s to early 30s? At the apex of your health? Nothing wrong with choosing to die naturally. Extending lifespan is all about giving people the choice to do so. The advances we make in doing so will hopefully allow you and those like you to live hale and healthy lives until it's time to die. Good fortune to you.

2