MrEloi t1_jegbh7i wrote

With all the political/ethical moaning, I suspect that it will be greatly delayed .. at least for the general public.

It will spend months in 'safety testing' to avoid/control AGI .. during which time of course the rich & powerful will have access to it.

Any delay will, however, be a mistake: the 'amateurs' out there will use GPT-3.5 and GPT-4 with add-on code etc. to simulate GPT-5.

If amateurs achieve AGI - or quasi-AGI - with a smaller model than GPT-5, then their ad hoc techniques will enable AGI on other small systems too.

In other words, a delay to GPT-5 to block AGI could in fact enable AGI on smaller platforms ... which would be contrary to what the delay proponents want.
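
For what it's worth, the 'add-on code' in question need not be exotic. Below is a minimal sketch of one such wrapper: a simple self-refinement loop that feeds GPT-4 its own draft and asks it to improve it. It assumes the current openai Python client and an OPENAI_API_KEY in the environment; the prompts, model name and loop count are purely illustrative, not a claim about how anyone would actually get to AGI.

```python
# Minimal sketch of "add-on code" around the chat API: a self-refinement loop.
# Assumes the openai>=1.0 Python package and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def refine(task: str, rounds: int = 3) -> str:
    """Ask the model for a draft, then have it critique and improve its own answer."""
    answer = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": task}],
    ).choices[0].message.content

    for _ in range(rounds):
        critique_prompt = (
            f"Task: {task}\n\nDraft answer:\n{answer}\n\n"
            "Point out weaknesses in the draft, then rewrite an improved answer."
        )
        answer = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": critique_prompt}],
        ).choices[0].message.content

    return answer

if __name__ == "__main__":
    print(refine("Design a simple DIY hand tool for removing stripped screws."))
```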

40

MrEloi t1_jefy9t7 wrote

I have just asked my wife - who is an English graduate.

She said she would indeed have second thoughts about taking the course had AI been around at the time.

4

MrEloi t1_jee1g9o wrote

  • Geoffrey Hinton stated that we have had this technology for around 5 years, but it wasn't widely known. This suggests that some firms or governments have been using AI for maybe 2 or 3 years.
  • The AI gurus keep claiming that AGI is several years away .. but .. the rest of their comments hint at it being either here already, or just around the corner.
  • I first started having dark suspicions when I noticed some weird questions being posted on Reddit a year or so ago. They had the 'feel' of being posted by a childish, embryonic AI.
  • The recent petition from a stack of AI gurus and others requesting a halt to AI development is interesting ... clearly these informed experts feel that AGI is very, very close.
  • The way the world's politics and economics have been behaving recently seems almost irrational.

All-in-all, I sometimes feel that 'something odd is happening'.

I very much doubt that AI is controlling us already ... but ... perhaps governments and/or firms are using advice from AIs to manipulate us in strange ways?

3

MrEloi t1_jdhovux wrote

Well, I have asked it to design/invent three DIY tools that I needed.

One was stupid: it needed a microprocessor etc.

Another was unique - but too close to existing products to offer any real benefit.

However, one was novel and probably marketable - and took only 20 minutes in a chat with GPT-4 to finalize the design.

Just think how life will be when this catches on : everyone with an imagination and access to a 3D printer will be building all sorts of weird things.

5

MrEloi t1_jctzmvc wrote

In about 20 minutes, I worked with it to produce a design of a marketable hand tool .. which it had clearly invented!

I tried with another DIY product today: it spat out a design for me. I asked if this was an industry standard, and it said no, it was a custom design for me based on several other designs on the market.

Many people must have the germ of a product in their minds: these AIs could help them bring it to market.

Perhaps we will see all sorts of new gadgets out there in a year or two?

(I have also used it to write all sorts of software)

1

MrEloi t1_jbf97nv wrote

>Everybody lies.

In medicine, patients often say X but mean Y.

It's not really lying.

As a practitioner, it's your job to drag this info out of them.

1

MrEloi t1_jbe0c0x wrote

I have just retired from another medical domain.

TBH 95%+ of my job could have been automated.

A nurse or similar with basic training could have operated the equipment, and an AI could have instructed them in the required actions.

My main contribution was quizzing the patient to elicit what was really going on, rather than what they said was happening.

A personal AI avatar could do this work - or the nurse could be prompted to ask a series of targeted questions.

No doubt, many medical domains could be fully (or almost fully) automated.

1