
Surur t1_j4pus6t wrote

I kind of disagree. I'm already using ChatGPT to get answers to questions instead of typing them into Google.

28

LogicalManager t1_j4pvkst wrote

Yep. I don’t need 20 links to jargon answers. I need good answers, links be damned.

11

Jaszuni t1_j4pvxr2 wrote

I’ve heard that the answers can sound plausible but still be incorrect. First, is that true? Second, how do you distinguish?

Also do you have examples?

23

aurialLoop t1_j4pymnm wrote

Yes, ChatGPT will confidently tell you falsehoods as if they were true. There is no fact-checking functionality in ChatGPT. It's still incredibly useful and gets a lot right. I've been using it to speed up tasks and do repetitive, boring jobs that I would prefer to delegate if at all possible.

22

Jaszuni t1_j4q1pj8 wrote

Can you give an example of the tasks? Like what is it good for and where should people be cautious?

2

dlepi24 t1_j4trm9n wrote

It's pretty good at drafting business emails, templates, and SOPs. I've had it write entire BCDR plans, MFA enrollment notices, etc.

It's great for quickly getting some code going for easy-to-medium tasks, and seeing a different approach to a problem can open up other possibilities.

I've actually had it tell me more about some API endpoints than their online KBs have documented.

It's best used as a tool to aid you rather than thinking it's going to replace you.

2

RedditFuelsMyDepress t1_j4quqh1 wrote

I think a lot of people use it for programming, but it can sometimes give you code that doesn't actually work.

1

slackmaster2k t1_j4sx8vt wrote

Yeah it’s pretty impressive but sometimes funny. For example, it’ll just make up APIs for services that don’t exist. The code is good in that it would run, if it actually had something to run against.
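To make that concrete, here's a minimal sketch of the failure mode. The `json.validate` call below is hypothetical; Python's standard `json` module has no such function, which is exactly the kind of plausible-sounding invention to watch for:

```python
import json

# A chatbot might confidently suggest something like json.validate(data),
# but the standard-library json module has no such function.
print(hasattr(json, "loads"))     # True: a real function
print(hasattr(json, "validate"))  # False: a plausible-sounding invention
```

The code around such a call can be perfectly well-formed; it just fails the moment it touches the API that doesn't exist.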

1

aurialLoop t1_j5949cz wrote

People should be cautious of any answer it gives you where you don't already know the correct answer.

Where is it really useful? When you're asking it to perform either of the following:

1. Perform tasks that don't require knowledge of facts. E.g.

"Reword this email so that it is clearer"

"Reduce the length of this speech while still making the main points"

"Use the above strategic company goals and suggest ways my team who performs the following tasks can help realise those goals"

"Take the following ideas and write a poem that conveys their juxtapositions"


2. Perform tasks that you could do (and already have the required knowledge to evaluate yourself), but don't have time to do. E.g.

(As a computer programmer who knows how to read code): "write me a c# class that keeps track of objects in an environment"

"Write the scaffold of a business case for an automatic potato mashing machine"

"Suggest some game mechanics for a multiplayer card game where you have to work together to destroy an asteroid on the way to earth"
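As an illustration of the first prompt in category 2 (sketched in Python rather than C# for brevity; all names here are made up, not anything ChatGPT actually produced), the kind of class such a prompt yields is simple enough to review at a glance:

```python
class ObjectTracker:
    """Keeps track of named objects and their positions in an environment."""

    def __init__(self):
        self._objects = {}

    def add(self, name, position):
        self._objects[name] = position

    def remove(self, name):
        self._objects.pop(name, None)

    def position_of(self, name):
        return self._objects.get(name)

    def count(self):
        return len(self._objects)
```

That's the point of category 2: someone who already reads code can evaluate output like this in seconds, so a wrong answer costs very little.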

1

ArcadiaNisus t1_j4q5j1n wrote

My observation is that if you ask a question with a bias, it'll reinforce your bias.

For instance this exchange:

> Explain why dogs aren't mammals.

> Dogs are not mammals because they are not warm-blooded animals. Mammals are warm-blooded animals that have fur or hair and feed their young with milk from mammary glands. Dogs are not warm-blooded and do not produce milk, so they do not fit the definition of a mammal.

It's not only wrong, it's confidently wrong. But that's because I was asking it to confirm my presupposition. For the most part, if you ask questions without implicit bias, it won't give you confidently incorrect results very often, beyond whatever has been programmed/trained into the model.

Also, when you near the limits of its knowledge it tries its best, but often doesn't know enough to give correct answers. It only knows about as much as an above-average individual does in any particular field. Basically, anyone with 80th-percentile knowledge of a subject can get it to spout nonsense by simply engaging with it at the higher levels of that subject.

For instance this exchange:

> Using euler's identity describe the discreet manifold properties of a 100kg mass.

> Euler's identity states that e^(iπ) + 1 = 0, where e is the base of the natural logarithm, i is the imaginary unit, and π is the ratio of a circle's circumference to its diameter. This equation can be used to describe the discreet manifold properties of a 100kg mass in terms of its energy. Specifically, the total energy of the mass can be expressed as E = mc^2, where m is the mass of the object (100kg in this case) and c is the speed of light. Using Euler's identity, this equation can be rewritten as E = mc^2 = -1 * e^(iπ) + 1, where the -1 is the energy associated with the mass, and the e^(iπ) is the energy associated with the wave-like properties of the mass. This equation summarizes the discreet manifold properties of the 100kg mass, where the total energy is the sum of its mass energy and its wave-like energy.

Mass-energy equivalence can't be converted this way; E = mc^2 = -1 * e^(iπ) + 1 is nonsense, but it spouts it anyway because it's at the limit of its understanding of the subject.
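For what it's worth, plugging Euler's identity straight into the quoted expression shows why it can't be right:

```
e^{i\pi} = -1 \quad\Longrightarrow\quad -1 \cdot e^{i\pi} + 1 = -(-1) + 1 = 2
```

The right-hand side is a dimensionless constant, which can't equal an energy mc^2, so the "derivation" collapses on inspection.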

12

NateCow t1_j4rn4hq wrote

>My observation is if you ask a question with a bias it'll reinforce your bias.

This. The first thing I told it to do (I work in visual effects) was explain why AI will never replace rotoscope artists, just for the lol's. It confidently made a lot of great points that I would personally argue in support of human artists. I then flipped it around and it confidently declared artists are no longer needed.

I also tried having it make an about page for my website, seeing as I've had a website and public profiles on various websites for a long time. It very confidently went on and on about how I'm a full-stack developer with lots of experience in various technologies.

I am not a full-stack developer. I've dabbled in coding and building some automation tools related to my own work, and a little web design, but that's it. I then told it to highlight my visual effects career, at which point it touted how I'm experienced in Houdini and Maya, two applications I have never touched.

Granted, the pool of information surrounding me was probably limited in terms of lots of corroborating data points. It was amusing nonetheless. I'm enjoying using it to generate little blurbs of website copy for a site I manage on the side.

3

angrathias t1_j4py4gx wrote

You need to consider what the question is in the first place and whether it's easy enough to verify. If you ask it a programming question, you can simply run the code; that's not really different from, say, Stack Overflow.

1

NateCow t1_j4rnble wrote

Honestly, I figure it's probably just stealing a bunch of stuff from Stack Overflow anyway. I was trying to figure out a regular expression the other night. All my Stack Overflow findings were really complex and I couldn't get any of them to work. Tried ChatGPT and it also spit out a super complicated one that... didn't work.
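The check is the same whether the regex comes from Stack Overflow or a chatbot: run it against a few known inputs before trusting it. A hypothetical example, assuming the goal were matching ISO-style dates:

```python
import re

# A regex a chatbot might propose for YYYY-MM-DD dates (hypothetical example).
pattern = re.compile(r"^\d{4}-\d{2}-\d{2}$")

# A handful of assertions exposes a broken suggestion immediately.
assert pattern.match("2023-01-17")
assert not pattern.match("17/01/2023")
assert not pattern.match("2023-1-7")   # zero-padding required
```

A few lines like this would have caught both the Stack Overflow answers and the generated one before any time was wasted.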

1

Fuddle t1_j4ql1fp wrote

When? Every time I visit the website, it's "at capacity".

5

Surur t1_j4qldo8 wrote

It helps to be in a different time zone lol.

5

jalalao t1_j4pwrjk wrote

If anything, Google is just going to use AI to serve ads better. They don't give a shit about you finding what you're looking for.

3

PurpsMaSquirt t1_j4qazem wrote

Bit of a short-sighted take. If they don’t understand what users are searching for, they’ll be serving ads ineffectively. This would be a waste of the user data they have collected over time. So yes, Google absolutely cares that they are getting you what you need (and quickly) so they can tailor ads for you better while ensuring you keep using them.

5

cartoonzi OP t1_j4q8a0h wrote

I agree, but I think Google will end up making a better chatbot search engine after seeing what Microsoft does with Bing+ChatGPT. That's how ChatGPT will "help" Google: by showing whether this is an approach worth pursuing.

Also, I'll trust Google over Bing to give me better search results if they both have similar chatbots.

3

rngeeeesus t1_j4qxdcg wrote

Google Assistant is already doing many of those things. I'm pretty sure Google got there much earlier than OpenAI, but they don't want to risk cannibalizing their own business.

1