Arowx t1_jeeoxx9 wrote

Yes, if you lived in a world of only words.

To navigate, explore and understand the real world, you would need senses and muscles.

Also, a much faster learning model than back propagation.

Or language is just a tool, and a kind of low-bandwidth one, that helps us label the world and communicate information via sound.


Arowx OP t1_jeegrav wrote

Just asked Bing and apparently AI can already write novels, with one report of a Japanese AI system that can write novels better than humans.

We would be inundated with millions of novels, from people who wanted to write a novel and from companies targeting specific novels at profitable demographics.

We would need AI critics just to help sort the wheat from the chaff.

The thing is, it's like a DJ mixing records: it could generate some amazing new mixes, but if a pattern is not already out there, it's very unlikely to find new patterns.


Arowx t1_jee00so wrote

Maybe it's just an improved search chat bot, pre-loaded with grammar and information relationship patterns.

Chat bots do have a history of fooling people into thinking they are more than they are. E.g. a student at my uni in the 90s was flagged by the IT staff when they were logged in to the system for days and active nearly 24/7. Turns out the student was chatting up an early chat bot.

Could this just be chat bot love, where we have not yet hit the low that happens when we figure out its flaws?

On the other hand, if these AI tools let us build better AI tools faster and improve the hardware they run on then we might be on that S-curve.


Arowx t1_jeaip6u wrote

I would like to think we're on the Slope of Enlightenment as GPT tools help us, but there is the possibility that we're just excited about a big pattern-matching chat bot and somewhere on the way to the Peak of Inflated Expectations.

I'll go with 80:20 optimistic, but I'm also afraid of what might happen next.


Arowx t1_je8ej1p wrote

Counselors and agency staff sound like jobs built on a strong knowledge base or rule-based system that a chat AI could do.

And as soon as someone figures out how to get GPT-5 to drive a Boston Dynamics Dog or Atlas, fixed-location security will go automated.

Mind you, could GPT-6 drive military drones or tanks?


Arowx t1_je6uri2 wrote

I think the numbers were that 80% of jobs will be impacted by AI by 10% or more.

So mostly the same old job, only you have to work with or manage AIs that do your own job*.

*If companies can do the same work with fewer people, less time, and more AI, what happens to the excess people or time?

Do we need an AI Pay Law, where if your job is partially replaced by an AI, you should be allowed to work less while maintaining your current wage level?


Arowx OP t1_je60q9d wrote

>While GPT-3.5, which powers ChatGPT, only scored in the 10th percentile of the bar exam, GPT-4 scored in the 90th percentile with a score of 298 out of 400, according to OpenAI.

The threshold for passing the bar varies from state to state. In New York though, exam takers need a score of 266, around the 50th percentile, to pass, according to The New York State Board of Law Examiners.

Only it did pass: it scored 298, and only 266 is needed to pass the NY bar exam.


Arowx OP t1_je4unoi wrote

What about photonics? I think they have made some steps towards light-based computing.

In theory it's faster, uses less energy, and could get around the circuit-wiring problems, as photons don't interact with each other and can cross an empty space very fast.

>The new chip promises to be more than 300 times faster and denser than current electronic chips.


Arowx OP t1_je4kag3 wrote

My limited understanding is that the AIs need to be trained before they can be used.

They are working towards AIs that can learn on the fly; it would be a huge jump in capabilities, and it would put the first company that does it ahead of everyone else.


Arowx OP t1_je0isrn wrote

Or are we on the hype train/graph, where a new technology appears and shows promise, we all go WOW, then we start to find its flaws and what it can't do, and we descend back into the Trough of Disillusionment?

Or what are the gaping flaws in ChatGPT-4?