Submitted by Sea-Photo5230 t3_z9hr3b in MachineLearning

From the blog "ChatGPT model interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response."

I tried out ChatGPT and have made a video on it. It seems impressive, and it maintains context and memory well.

Do check out the video: https://youtu.be/MbzGbqnTctc

51

Comments


purplebrown_updown t1_iygwqo9 wrote

This thing is insane. I’m kind of blown away. It’s scary good. And I’m someone who hates being sensationalist about AI.

47

Sea-Photo5230 OP t1_iygx7to wrote

It can also produce nonsensical, made-up things, and it is not good at solving mathematical equations. It does have some limitations, but it seems impressive.

8

purplebrown_updown t1_iykyv2h wrote

Now that it’s been out, though, it’s clear there are some big deficiencies. And it has the same problem as Meta’s Galactica: it’s overly confident when it’s obviously wrong.

5

Thorusss t1_iylei4q wrote

>it’s overly confident when it’s obviously wrong.

So too human?

10

maskedpaki t1_iz88px6 wrote

This is a tradeoff for being able to answer anything at all.

OpenAI remarks that when it wants the bot to answer only things it's 99% confident about, the bot just never answers, because there's always that one part of the answer you aren't sure is right.

3

C0hentheBarbarian t1_iyh4kex wrote

Results like this make me seriously question if I'll have a job in the future as an ML person. I understand the nature of the job will change etc but I can see myself becoming an overqualified prompt engineer.

31

Cheap_Meeting t1_iyhl471 wrote

I actually think prompt engineering is becoming less and less important with things such as ChatGPT.

19

enjinseoul t1_iyhii2a wrote

I think AI WILL help humans advance faster in conceptual learning. Since we will no longer be tasked with repetitive work to complete a task, it will instead take more brain power to ask the right questions and create the right prompts to get the results we want. We will become thought curators.

4

Thorusss t1_iylenrb wrote

>an overqualified prompt engineer.

There are already GPT-3 implementations that generate better prompts for text2image AIs...

4

jammer9631 t1_iyi8ut6 wrote

I asked it questions ranging from literary analysis of “The Plague” by Camus to “show me the best algorithm to sort 10000 items” and “what is the best career for the next 20 years” and it absolutely rocked. It is not perfect, but even when it wasn’t, it still added value.

17

enjinseoul t1_iyhi12z wrote

My university students are blown away. I can already see them planning to write entire papers using it, so that begs the question: is it plagiarism now if they just copy-paste?

14

__Maximum__ t1_iyrp93n wrote

The idea of an assignment is to force yourself to solve a problem, not to tell someone else (AI or not) to solve it for you. You can call it whatever you want, but it's cheating.

6

Jeffy29 t1_iyytd4r wrote

While I agree with you, the same thing could be said about calculators. At some point we decided it's okay to use a calculator if you know the basics, because there will never be a time when you won't have one near you; these days using calculators, even very complex ones, is natural in higher learning. If AI like this (and one day much smarter than this) becomes as ubiquitous as calculators, won't it change how we teach people, just as calculators did?

It's way too soon to have this conversation, since this is very immature technology right now, but I think it will one day spark a debate in society.

2

__Maximum__ t1_iyziqwo wrote

If the assignment is a multiplication task to learn multiplication, then, of course, using a calculator is cheating. However, if the task includes multiplication but the goal of the idea has nothing to do with multiplication, then sure, go ahead and use a calculator because it will get you there faster.

These systems, on the other hand, are very capable. You can actually ask them to do a whole assignment, or important parts of it, and often they will do it. Some models have scored better than average on multiple tasks. I can imagine that in a year or two it will be rather easy to access these kinds of systems, which will be much better than average at all tasks. Let's see what GPT-4 brings us; according to rumours, it will be here by February at the latest.

3

maskedpaki t1_iz896ds wrote

That's the whole point, though.


It's like maybe we just won't do low-level code anymore and will shift to higher-level abstractions. It's not like anyone writes assembly code anymore, for example; in a graduate CS program you don't spend more than a few weeks on basic assembly nowadays. So professors will just have to assign higher-level problems that don't involve you coding every for-loop or typing out every class.

1

SatoriSlu t1_iymr8dn wrote

I have a question... I listen to the "Unsupervised Learning" podcast. The host has been constantly saying that these new generative AI models are going to take over a lot of future cognitive work, and then goes on to say, "everyone needs to be ready for this". But what does that actually mean, concretely? What am I supposed to do to get ready? I'm already working in tech as a DevOps engineer.

Do you feel like there will be a future need for "programmers" or "prompters" who know how to interact with these AIs and pick the right data sets, instead of actually knowing how to write "syntax"?

5

ahtoshkaa2 t1_iyqga84 wrote

I'm a copywriter, and I'm getting ready by earning as much money as I can right now and integrating AI into my workflow, because I'll be out of work in the next 10 years or so.

I'll be fixing up one of my apartments and will rent it out when the time comes.

That is one field that will not be taken by AI. Everyone needs a place to live.

5

RelicDerelict t1_iypztsz wrote

You will probably be fine with your experience. DevOps involves a lot of communication among teams, and AI can't do that yet. As for me, I feel very depressed after everyone told me to learn to code; I don't think I will ever get a junior position.

3

arkuw t1_iyyg5kt wrote

I generally think of "devops" as a dying career. Yes, the "cloud" created a lot of need for people who understand system administration and can apply it to swarms of computers (aka the cloud).

But cloud vendors realize that their next value-add is to work towards eliminating the need for "devops". Thus new platforms are emerging that stitch together common compute, DB, storage, and messaging components in a way that is palatable to most cloud customers, greatly reducing the need for a "devops" team at customer companies.

So a "devops" career strikes me as a perilous career, AI takeover or not.

In the context of this thread, however, I'm thinking that LLMs are more likely to impact programmers first. Still, it's the cloud vendors themselves who are working hard at eliminating jobs for "devops" people; that's the whole goal of "serverless", after all.

1

SatoriSlu t1_izsvffi wrote

You may be right, although I think there's a shift towards a more defined 'platform engineering' role. In my experience, business users are not programmers, but they have the domain knowledge, so they need guidance on which technologies to use and how to stitch them together.

I think there will be a shift towards understanding which technologies to stitch together, configuring those platforms properly, and securing them: the so-called philosophy of providing 'golden paths' for business users.

But anyway, I think I'm going to shift towards more 'analysis'-like roles. I believe that's where humans will ultimately move: feeding these systems data, asking the right questions, learning how to prompt them, visualizing the output, and analyzing it more deeply.

1

balder1991 t1_j04b2f5 wrote

I don’t think IT jobs are going away anytime soon, because even though we create things to make our work easier, there’s always a performance cost involved. You can’t add layers forever to push the complexity under the carpet. At some point, when something breaks, we need to be able to understand it and fix it, even if there is an AI helping us debug things faster.

1

Ruturajvihol t1_iys24qj wrote

Dude, I am doing my final assignment of the semester, and it generates code that would take me 10 days in like a minute. I'm wondering how this is even possible.

4

mrdique t1_iyuteej wrote

The code that ChatGPT gave you probably originated from a piece that took an experienced software engineer 5 hours to write, and the AI model that refactored that piece likely took a bunch of highly accomplished scientists years of research effort.

So it’s fair; you just need to learn.

5

draw2discard2 t1_izcqmmr wrote

I tried it out for about 90 minutes, trying to see what it could and couldn't do, and from the standpoint of interacting in a conversational way or writing a good facsimile of intelligent-seeming human text, it is a long way off. It is quite impressive in what it can process, but the output is not going to render human writing obsolete any time soon.

3