Submitted by [deleted] t3_zk92hu in Futurology

There's currently a lot of hype around ChatGPT. Many people have stated that it could be used as an alternative to Google, that it could write code for you and potentially replace developers entirely, and plenty more. So tonight I decided to give it a real test run, with a real live scenario.

I've worked as a dev on core Salesforce for 8 years and have sporadically done Marketing Cloud development during that time. However, I'm no Marketing Cloud expert dev; I still have plenty to learn about that particular cloud.

Tonight I needed to write some code (server-side JavaScript, SSJS) that would update the name of a data extension. I had never done that particular task before, so I asked ChatGPT how to do it, and it spit out some code that looked reasonable to me based on what I had built in the past. I immediately thought, "Awesome! This was way faster than Google and reading through docs, I'll be done super quick!" Boy, I couldn't have been more wrong lol.

I updated the code ChatGPT gave me so that it would work for my particular scenario and ran it, and it turned out the code was completely wrong. It sent me on a wild goose chase for a couple of hours because I assumed that somehow the small modifications I made were the problem (even though I had barely altered anything).

So I eventually abandoned the code ChatGPT gave me and went back to what I typically do. Within 30 minutes I had figured out how to use WSProxy to update my data extension's name, and about 30 minutes after that I had implemented the code and it worked just fine.
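For anyone curious, the WSProxy approach boils down to a single `updateItem` call on the `DataExtension` object. This is a hedged sketch, not the OP's actual code: in Marketing Cloud this would run inside a `<script runat="server">` block after `Platform.Load("Core", "1.1.1")`, with `new Script.Util.WSProxy()` providing the real proxy. Here WSProxy is mocked so the call shape can be shown outside SFMC, and `DE_KEY` / `Renamed_Data_Extension` are placeholder values.

```javascript
// Mock standing in for Script.Util.WSProxy, which only exists inside
// Marketing Cloud's SSJS runtime. The mock records calls and returns
// the same general shape ({ Status, Results }) a real update returns.
function WSProxyMock() {
  var self = this;
  this.calls = [];
  this.updateItem = function (objectType, props) {
    self.calls.push({ objectType: objectType, props: props });
    return { Status: "OK", Results: [{ Object: props }] };
  };
}

// In SFMC: var prox = new Script.Util.WSProxy();
var prox = new WSProxyMock();

// Rename a Data Extension by its CustomerKey.
// CustomerKey identifies the DE; Name is the new display name.
var result = prox.updateItem("DataExtension", {
  CustomerKey: "DE_KEY",
  Name: "Renamed_Data_Extension"
});
```

The key point is that the rename is just an update keyed on `CustomerKey` with the new `Name` in the payload; no separate "rename" API is involved.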

28

Comments


Technical-Berry8471 t1_izz01td wrote

It's early days. Its data set is limited; an interesting experiment, though.

16

IamChuckleseu t1_j011nxp wrote

It is 10 times bigger than its predecessor. It uses a 45 TB dataset. You also operate under the assumption that more data will be better. This is not true. A smaller, carefully picked dataset like this one will be able to solve frequently asked questions. Ask for something outside the box of what it was trained on and it is lost. If you introduced data for those niche problems, you would also increase the variety of AI responses, which in turn would decrease current accuracy on more frequent problems.

5

Technical-Berry8471 t1_j02x8ec wrote

If you ask someone about a subject they have no knowledge of, would you expect a sensible and knowledgeable answer? I expect different AI models will be developed for different aspects of knowledge, with another AI assigning queries to different models. The real problem is assigning validity to knowledge in the provided data sets.

1

bigredthesnorer t1_izyfkqo wrote

But once you train it to program in Salesforce with real working code, won't it then know how to answer your question?

11

nultero t1_izyug8w wrote

These models only really work on languages that have a massive corpus of open source text to steal from.

There are orders of magnitude less Apex than there is JavaScript or Python. There are billions of problem permutations on complex codebases in the big languages that the ML models simply haven't seen in something as niche and unpopular as Apex.

So even if trained on Salesforce, its solutions would be far less stellar than those for the languages with huge, talented dev pools. Or it might spit out something entirely fake, like when I asked it for something complex in Apex just now and it spat out pristine Java instead lol

9

goodsam2 t1_izyk2kj wrote

I mean, it feels more like augmentation, likely making it so that a team of 3 can do a team of 5's work.

1

LoKi_802 t1_izytbtz wrote

I’ve had similar experiences. However, I think it's more interesting to think about its disruptive potential than about how disruptive it presently is. It will only improve, and potentially at an exponential rate.

7

xXTheFisterXx t1_izyy2l2 wrote

We are still in the beginning phases. It could also be possible that your prompt wasn’t specific or accurate enough.

6

Rogermcfarley t1_izze0gq wrote

I asked it to write a function in Lua to create a Mandelbrot fractal. It seemed to do that well enough. I think functionally it's better for creating new code than troubleshooting existing code. The rate of progress is staggering with machine learning so we're still in the nascent stages. 10 years from now the world will be unrecognisable considering how much better machine learning will be.

3

Ill-Construction-209 t1_j00q5ml wrote

Tonight, I asked it to write a simple program in VBA, and it did it perfectly.

2

chrisplusplus t1_j02uhzj wrote

Train it specifically on your code base and it would be better.

I asked for a bash script to create an encrypted, persistent live Ubuntu 22.04 USB with a separate encrypted partition, all formatted with ext4.

The first result used rsync, which was weird. I re-ran the exact same input and this time it used dd and was absolutely perfect. It even asked for input for which mount point to use. Pretty cool.

2

perozeirani t1_j009to0 wrote

I found that it comes down to how well you specify the environment and how much detail you provide. If you ask for it with a vague description, you will no doubt have issues.

1

yaosio t1_j00fqq8 wrote

Why did you try to implement code if you had no idea what it did? Certainly you didn't know because it didn't work and you didn't know why. What if it did work but also did something you didn't know it did because you didn't understand the code? I'd be worried about the solution you found. You really need to look it over again and make sure it only does exactly what it's supposed to do and nothing more.

1

Orc_ t1_j00wzdd wrote

Oh, so you failed and it's ChatGPT's fault? Your prompt was wrong.

−1

resdaz t1_j08chp7 wrote

I would say it confidently giving you the wrong answer is ChatGPT's fault, but that is just a limitation of the system.

1