CypherLH

CypherLH t1_jedkz91 wrote

I assume the ongoing layoff wave in tech is at least partly AI-related. Perhaps not explicitly, but most of the people making those hire/fire calls at tech companies are well aware of AI developments and have probably at least played around with ChatGPT, etc. A lot of those jobs won't be coming back.

1

CypherLH t1_jdxe9w2 wrote

This is so true. I'm in a discussion group that is generally very skeptical of AI. A typical example of their goalpost shifting is going from "haha, GPT-3 can barely rhyme and can't do proper poetry" in 2021 to "well, GPT-4 can't write a GREAT, masterful poem though" now. Apply this across every domain...the ability of AI skeptics to move the goalposts is unbounded.

13

CypherLH t1_jbddriq wrote

A real war in Taiwan would likely disrupt sea trade routes to South Korea and Japan as well. If nothing else, insurance costs will soar for shipping companies, increasing transportation costs. Worst case, if the war is wide enough, the broader western Pacific could become a maritime war zone and cut transportation links even more deeply.

3

CypherLH t1_jaesy8l wrote

I get what you are saying, but I'm not sure what the basis for skepticism right now is. Things have been developing INSANELY fast since early last year; it's hard to imagine them developing any faster or more impressively than they did and still are. I guess you can assume that we're close to some upper limit, but I don't see a basis for assuming that.

1

CypherLH t1_jabwb5z wrote

But we don't know how to make human brains, aside from producing people of course ;) We do know how to create AI models, though. Considering the rate of progress in just the past year, I wouldn't want to bet against image generation and recognition technology.

1

CypherLH t1_ja8ey2a wrote

Maybe. It's also possible that AI's more explicit _recognition_ capability will end up being superhuman, since it's not limited by evolutionary kludges, at least once we have proper multi-modal visual models.

To use the old cliché example: our aircraft aren't as efficient as birds...but no bird can carry hundreds of passengers or achieve supersonic speeds, etc.

1

CypherLH t1_ja5mgs5 wrote

Well, presumably humans and animals ARE first labelling/categorizing, but it happens at a very low level...our higher brain functions then act on that raw data. You still need that lower-level base image recognition functionality to be in place, though. Presumably AI could do something similar: have a higher-level model that takes input from a lower-level base image recognition model.

From an AI/software perspective, that base image recognition functionality will be extremely useful once inference costs come down.
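
As a rough sketch of what that two-stage setup could look like in code (every name here is hypothetical and purely illustrative, not any particular library's API): a low-level recognition model emits labelled detections, and a higher-level model reasons over that structured output.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "dog", "ball"
    confidence: float   # 0.0 to 1.0
    box: tuple          # (x, y, width, height) in pixels

def base_recognizer(image) -> list[Detection]:
    """Hypothetical low-level model: raw pixels -> labelled objects.
    Stands in for the 'base image recognition' layer."""
    raise NotImplementedError  # swap in a real detector here

def higher_level_model(detections: list[Detection]) -> str:
    """Hypothetical higher-level model: acts on the structured output
    of the base layer, like higher brain functions acting on
    low-level percepts."""
    names = [d.label for d in detections if d.confidence > 0.5]
    return "Scene contains: " + (", ".join(names) or "nothing recognized")

# The pipeline: pixels -> labels/categories -> higher-level reasoning
# description = higher_level_model(base_recognizer(image))
```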

2

CypherLH t1_ja4wx0l wrote

I said _mostly_ solved. Labelling/geometry/categorization are huge prerequisite steps on the way to "actions". I assume video generation/description will be the final step needed, since it gives the model an "understanding" of relations between objects over time; in other words, true scene recognition. In fact, I assume multi-modal models that combine language/imagery AND video will end up being another leap forward, since such neural nets would have a much more robust world model.

1

CypherLH t1_ja1zrzu wrote

This is mostly solved already, actually. All of the large image generation tools are also image _recognition_ tools, and some of them can explicitly do image-to-text as well, describing an image fed to them with high accuracy. We just haven't seen this capability impact any consumer markets yet outside of image generation, presumably because inference for these AI models needs a lot of compute.
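
For what it's worth, here is a minimal image-to-text sketch along those lines, using an open-source captioning model (BLIP via Hugging Face transformers; the model choice is my assumption, not something named in this thread):

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load an off-the-shelf image captioning model (image -> text)
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# "photo.jpg" is a placeholder path
image = Image.open("photo.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```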

3

CypherLH t1_j8vdxku wrote

Reply to comment by Czl2 in Emerging Behaviour by SirDidymus

I'll grant there is a gap there...but it actually makes the whole thing _weaker_ than I was granting, because I don't give a shit about whether an AI system is "conscious" or "understanding" or a "mind"; those are BS meaningless mystical terms. What I care about is the practical demonstration of intelligence: what measurable intelligence does a system exhibit? I'll let priests and philosophers debate whether it's "really a mind" and how many angels can dance on the head of a pin while I use the AI to do fun or useful stuff.

1

CypherLH t1_j8up0yr wrote

Reply to comment by [deleted] in Emerging Behaviour by SirDidymus

Your assertion is obviously true NOW, and not many people are seriously claiming that ChatGPT and other current LLMs are actually conscious or AGI. The thing is, they sure seem to be a massive step down the path towards those things. A legit argument can be made that we're now looking at something approaching proto-AGI...which is wild; this was science fiction even a year ago.

1

CypherLH t1_j8uoh2l wrote

Reply to comment by Czl2 in Emerging Behaviour by SirDidymus

I understand the Chinese Room argument; I just think it's massively flawed. As I pointed out before, if you accept its premise then you must accept that NOTHING is "actually intelligent", unless you invoke something like the "vitalism" you referenced and claim humans have special magic that makes them "actually intelligent"...which is mystic nonsense and must be rejected from a materialist standpoint.

The Chinese Room argument DOES show that no digital intelligence could be the same as _human_ intelligence, but that is just circular logic and not useful in any way; it's another way of saying "a non-human intelligence is not a human mind". That is true, but it's a functionally pointless and obvious statement.

1

CypherLH t1_j8udpth wrote

Reply to comment by Czl2 in Emerging Behaviour by SirDidymus

Interesting points, though I personally detest the Chinese Room argument, since by its logic no human can actually be intelligent either...unless you posit that humans have something magical that lets them escape the Chinese Room logic.

1

CypherLH t1_j8td9s8 wrote

Reply to comment by MrSheevPalpatine in Emerging Behaviour by SirDidymus

True. And maybe a good reason to NOT want an AI that acts human ;) For some things we want the classic, perfect "super Oracle" that just answers our queries but doesn't have the associated baggage of human-level sentience. (Whether that sentience is real or fake doesn't really even matter with regard to this issue.)

2

CypherLH t1_j8tcuc3 wrote

Reply to comment by Czl2 in Emerging Behaviour by SirDidymus

Ok, fair enough. I still think any sort of mirror analogy breaks down rapidly, though. If the "mirror" is so good at reflecting that it's showing perfectly plausible scenes that respond in perfectly plausible ways to whatever is aimed into it...is it really even any sort of mirror anymore?

1

CypherLH t1_j8r9xuf wrote

Reply to comment by Czl2 in Emerging Behaviour by SirDidymus

The mirror analogy doesn't hold up. LLMs are NOT just repeating back the words you prompt them with; they are feeding back plausible human-language responses.

It would be like a magic mirror that reflects back a plausible human face with appropriate emotive facial responses to your face...that wouldn't just be a reflection.

9

CypherLH t1_j7guxe5 wrote

Level 1 would be $30k and up, minimum. The market would be upper-middle-class and wealthy people in households with children and both parents working, etc. Someone living on their own in an apartment or condo won't need this...but a family with a large house and lots of things keeping them busy...hell yes.

For Level 2, the sky is the limit. The floor price would be $75k, and probably way more. It'd be like buying a very high-end luxury car.

1