TemetN

TemetN t1_jegzdms wrote

Basically two things here. The first is that different rules for various products, plus loopholes, mean they could likely pretty much just... sell it until the government did something. They could possibly even outright admit what they were doing, and the government might still have trouble stopping it in the short term.

The second is that I think there'd probably be wholesale resistance to removing humans from the decision-making chain in the short/medium term. Don't get me wrong, I actually would generally favor both of these (presuming they were both mature technologies); I just don't think it's going to be technical progress that necessarily slows the AI prescription part (arguably, that might be doable now).

1

TemetN t1_jegy8rk wrote

That's an interesting question, but I think it's probably even harder to answer honestly since that's largely a matter of social/cultural change. I'd particularly note how messy and incoherent our drug laws are in America in this case.

In practice I might actually expect something like a pill printer to render this obsolete rather than it happening in some other way.

1

TemetN t1_jegw78p wrote

Define people, I guess? A fifth? Half? Almost all? Like another commenter said, some people are already comfortable, and it's worth a reminder that in certain cases machine surgeons have been shown to outperform human ones. That said, even after that takes off, and setting aside how much of the population we mean, a huge amount of comfort will depend on soft factors such as early societal reactions and media.


I do think people will at least start to be comfortable in significant numbers sooner rather than later: the mid-2020s perhaps for it to be relatively common (a fifth-ish, enough so as not to be shocking), and by 2030 for general acceptance (a majority might consider one).

2

TemetN t1_jegkqwg wrote

Reply to comment by AsuhoChinami in The Luddites by scarlettforever

I don't think it's just a matter of childhood; desperation has been the foundation of revolution for a long time. It's funnily like capitalism in some ways: a matter of demand and supply. And while the demand might've been here for a while, now there might actually be a supply. Then again, it's not like most of us are actively involved in training these models.

17

TemetN t1_jegk5o2 wrote

AI has already demonstrated superiority in some areas (such as determining which questions to ask a student to improve test scores while studying), but honestly this is hard to predict for the simple reason that the current school system has been demonstrably a wreck for some time. Even individual states in America show clearly that some methods function better than others, yet due to a sclerotic system it's still unfixed in most areas (much less adapting to more recent improvements).

5

TemetN t1_jed01g3 wrote

I'd go so far as to call it dangerous honestly, and we can see that in the struggle for basic things, many of which are much more normalized in less significant fields simply due to mass participation. On the plus side there is at least more public funding for research now, and I am glad to see larger calls for a massive public project on this.

1

TemetN t1_jeb3wvz wrote

This is helpful to the remnants of my faith in humanity - as a proposal, it has the advantage of both taking into account the potential upsides and actually addressing the concerns, by proposing a method whereby potential solutions could be generated more effectively.


As opposed to what inspired it, which is simply problems all the way down.

3

TemetN t1_je843ji wrote

You're proposing a take that isn't really compatible with progress. We already have an unusual degree of knowledge of both the potential risks and benefits. This isn't a matter of pessimism or optimism, it's a matter of weighing the probable results. And while the massively positive ones require minimal avoidance of bottlenecks (ones we've arguably already passed), Yudkowsky et al's position requires a series of assumptions not borne out by previous examples.

Honestly, even apart from that though, are you so sure that investing in the field is really more dangerous than the current corporate piecemeal situation?

1

TemetN t1_je6o8ms wrote

This. Ethically speaking we should be prioritizing the lives we could potentially save by pushing this technology faster. We put the equivalent of tens of billions of dollars into the Manhattan Project; we should be willing to do more for a project that is perhaps the single greatest undertaking in human history.

9

TemetN t1_jd5jmsq wrote

Top of my head? Apart from that, the other two big ones are the argument that the rate of progress is exponential in general and that AI's integration will further improve it, and Vinge's superhuman agents idea, which posits that we can't predict the results of AI R&D once it hits the point of being beyond human capabilities.

I tend to think that either of those is more likely (or rather, the first is inevitable and the second is hard to predict), and that we're in the runup to a soft takeoff now.

1

TemetN t1_jd5c7lt wrote

This seems to imply some sort of foom if I'm reading it right, in which case alignment would be the only really significant thing you could do in preparation, besides ensuring you live that long. Honestly, I tend to consider this the least probable of the major proposed runups to the singularity, given the number of potential bottlenecks and the current focus of research.


On the plus side, if aligned then foom would also likely deliver by far the fastest results - with the world effectively revolutionized overnight.

5

TemetN t1_jc8yc5e wrote

Over what time period? Someone made a decent point earlier about manual dexterity: at least in the short term, complicated, physical, situationally dependent jobs will be difficult to automate. Those are most likely going away within a decade-ish though. In the long run, probably one of the last recourses will be things people aren't comfortable with automating (not so much therapists, but more along the lines of politicians). Apart from that, novelty artificial scarcity might also produce interest. Albeit I'm not exactly sure it'd be jobs by then.

1

TemetN t1_j9v40r7 wrote

The size is pretty much the most significant thing at a glance; the benchmarks stick to comparing against older models and ignore more recent advancements even in those models. I'd be more enthused if they were open sourcing it, but despite them being more open than OpenAI lately, it still seems to operate off some sort of weird 'can apply, but you'll never get approved' process.

2

TemetN t1_j9i50bu wrote

At a guess? Given doomposting really took off with the pandemic, I suspect it's a cultural reckoning with untreated mental health issues and a lack of decent coping mechanisms. It's not even just the pandemic; these rates have been rising for a decade or more. I suspect we're going to find out that something like a common type of plastic causes mental health issues (among other things - the same diet these days has different effects, which implies it's more than that).

9

TemetN t1_j9c201w wrote

We're waiting, basically. At this point there are multiple competitive approaches, and we're attempting to see which one is most easily scalable and fault tolerant. Once an approach is found that's both, it'll likely explode into more prominence. For now however, expect the continued rollout of normal roadmaps such as IBM's while waiting for a breakthrough.

12

TemetN t1_j97ctj9 wrote

Honestly, in contrast with a lot of people here I'm less certain this was against OpenAI specifically, but that's partially because OpenAI promptly went and said they were going to do the same thing. If anything, I'm more unnerved that it's part of a general movement away from sharing research - and we've seen before the damage this song and dance does. Frankly I'm disgusted with both OpenAI and DeepMind at this point.

2

TemetN t1_j94tv0c wrote

Hassabis mentioned the scaling thing six-ish months ago, which as far as I understood meant they were working on a sort of Gato 2, but it takes time. It's worth a reminder that we still haven't seen GPT-4, though it wouldn't surprise me to see both GPT-4 and Gato 2 this year (in point of fact, that's my default).

3

TemetN t1_j8qxpox wrote

My timelines have my 50% range for AGI centered around 2024 (late 2024 to be fair from recollection, and at this point I wouldn't be shocked by earlier). Honestly though, I'd have to see something significant to really speed up my timelines more meaningfully than 'occurring inside the range expected'. Something like evidence of progress on strong AGI instead of weak. My timelines for that still have strongly operationalized (not quite the same as strong I suppose) AGI up around 2030.

1

TemetN t1_j8gmwj1 wrote

To be fair, Yudkowsky's argument on Pascal's mugging was actually interesting (particularly vis-à-vis his own writings, funnily enough), but yes, I very much consider him someone whose writings you have to sort through due to his focus on foom and... well, pessimism is an understatement, but I hesitate to call him a doomer since most of them don't even have coherent arguments.


Altman is still something of a hypeman though, and it is worth noting that both of them have argued in favor of very aggressive AI timelines, which have generally landed closer to how things have actually occurred than the ridiculously slow progress the preponderance of people expected.

10

TemetN t1_j7m4vt3 wrote

Over what time period? Immediately, I think Microsoft, since frankly Google has hesitated and is attempting to put out an old product that is not (yet at least) living up to ChatGPT (despite ChatGPT also being old). In the long run DeepMind may give Google an advantage though.


I voted for Microsoft on the assumption this was immediate/short term.

1

TemetN t1_j6ityj3 wrote

I just don't think most of them are that far-sighted - or currently willing to pick a fight with their backers. Don't get me wrong, it wouldn't shock me if some were, but I don't expect much change in these terms (as in, the kind of new laws that would directly attack generative AI) anytime soon.

And honestly I don't think the current cases will be settled by new law, they're likely to fail on merits.

1