Comments

huitu34 t1_iznp3nl wrote

Cool idea! Is there a way to bring this into notebooks? And even better: as a VS Code extension?

209

senobrd t1_izo58tx wrote

GitHub Copilot is available as a VS Code extension; it uses OpenAI's Codex model. I assume ChatGPT is accessing Codex under the hood when it receives a programming-related inquiry, but I could totally be wrong.

As a side note, Copilot seems like a bit of a privacy concern, so I would personally be wary of using it with any private or commercial projects.

55

RomanRiesen t1_izp8la9 wrote

No. The whole ChatGPT/GPT-3.5 model builds on code-davinci-002 (which is maybe the one tuned for Copilot, but I don't think this has been said publicly).

So any prompt to ChatGPT is a prompt to a differently fine-tuned version of Copilot (or something Copilot-like).

28

Hyper1on t1_izqmn2n wrote

Copilot is a 12B model (for inference speed); ChatGPT is the 175B one, which I'm pretty sure is not specifically trained on code. So ChatGPT should give better results on average because of the bigger model.

10

Acceptable-Cress-374 t1_izrzi76 wrote

I also found it impressive that it explains in plain language what insights it gets from the code. That's a very big improvement over Copilot.

3

jsonathan OP t1_izszyra wrote

Working on something better than an extension. Coming soon.

5

RaptorDotCpp t1_iznoyl6 wrote

It's wrong though. range does not return a list. It has its own sequence type.
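
For anyone following along, a quick Python 3 session shows it:

    >>> type(range(5))
    <class 'range'>
    >>> isinstance(range(5), list)
    False
    >>> list(range(5))  # materialize it if you actually need a list
    [0, 1, 2, 3, 4]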

160

elbiot t1_izoh372 wrote

It was trained on Python 2.

36

maxToTheJ t1_izqh53h wrote

So it's overfit because of its lack of generalization, i.e. still wrong.

0

jsonathan OP t1_iznwsl6 wrote

I noticed that. Generally it’s “right enough” to help you fix your error, though.

27

_poisonedrationality t1_izoe2xm wrote

I wouldn't say that. While I'm definitely impressed by its abilities, it makes mistakes way too often for me to consider it "generally correct".

It is interesting that even when it makes a mistake, it often has some reasonable-sounding logic behind it. It makes it feel like it has some level of "understanding".

85

artsybashev t1_izorw1t wrote

Yeah, it is annoyingly confidently wrong. Even when you point out its mistake, it might try to explain as if no mistakes were made. Sometimes it admits that there was a mistake. From a coworker this would be really annoying behaviour.

57

new_name_who_dis_ t1_izou962 wrote

Crazy that we are now far enough into AI research that we are comparing chatbots to coworkers.

40

artsybashev t1_izoujfv wrote

Yeah. A lot of times I get a better answer from ChatGPT, but you really need to take its responses with a grain of salt.

8

jsonathan OP t1_izt0lfi wrote

In my experience, it has explained every error I’ve encountered in a way that’s at least directionally correct. Can you post a counterexample?

3

knowledgebass t1_izr3dpl wrote

You know people make a lot of mistakes, too, right?

1

_poisonedrationality t1_izr9ksp wrote

Yes. But I still wouldn't say it's "generally correct" because it makes mistakes far too often.

2

cr125rider t1_izolvlz wrote

Iterable go brrr without using all your memory

2

robot_lives_matter t1_izobq7u wrote

Honestly, for someone who codes, the description is a bit annoying and adds no value. Sure, if you have no coding experience it could be great. Maybe for beginners without a degree who want to learn coding.

40

RomanRiesen t1_izp9ejs wrote

If you add "be as concise as possible" it cuts out a lot of the noise. But that is annoying to add everytime. But you can say thanks to the great retention "for all following answers be as concise as possible". All we need now is a a .chatgptrc file to add all the "global" prompts we want lol
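
Nothing like that exists, but purely as a hypothetical sketch, a wrapper could prepend the rc file's contents to the first message of a session (send_message is a made-up stand-in for however you actually reach the model):

    # Purely hypothetical: prepend "global" prompts from ~/.chatgptrc to
    # the first message of a session.
    from pathlib import Path

    def first_prompt_with_rc(prompt: str) -> str:
        rc = Path.home() / ".chatgptrc"
        preamble = rc.read_text().strip() if rc.exists() else ""
        return f"{preamble}\n\n{prompt}" if preamble else prompt

    # first_prompt_with_rc("Why does this traceback happen? ...") would
    # then be handed to send_message() or whatever client you use.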

18

Lampshader t1_izqjmdc wrote

"be as concise as possible"

> INT is not iterable

14

RomanRiesen t1_izreqi1 wrote

Yeah, true lol

Python errors are mostly good enough.

1

ShenaniGunnz t1_izqvqyc wrote

> without a degree

Lmao what does a degree have to do with anything?

6

sEi_ t1_izr8c3n wrote

Yeah, I've been working fine in the field for 20+ years without a degree. But OK, cut him some slack, we know what he intended to say.

3

FHIR_HL7_Integrator t1_izzjlwm wrote

Same here.

For anyone reading without a degree: find an ISO standard (obscure, but not too obscure) involved with fundamental technology used in the open market, master it, and you'll be golden. That's my advice for anyone out there who finds themselves without a degree but looking to advance. It doesn't matter whether you have a degree when you know something really well that few others know.

1

robot_lives_matter t1_izraxc4 wrote

I don't have a degree either, but I always assumed they must teach all of this in a bachelor's, so you don't need these details.

1

alnyland t1_izp8yze wrote

It’s extremely long for what could be a few words tbh

5

caedin8 t1_izpqwb0 wrote

But potentially useful for, say, learning a new framework.

Pick up a new tool and walk through it, and it can explain some concepts, like React hooks and stateful management, and why the code does what it does.

It is better as an interactive teacher than a code writer today.

1

maxToTheJ t1_izqhdce wrote

This, aside from the correctness issue others pointed out. I ain't got time to wait for that on a simple error. Most errors are either "duh" errors that are obvious just from the compiler output, or long-tail subtle ones.

−1

andreasblixt t1_izntugx wrote

Nice! I think its answer would be much more contextual if you made the prompt something like this:

Please explain concisely why this error in my Python code happened:

    Traceback ...
    ...

Based on these lines of code surrounding the trace:

    broken.py
     1 ...
     2 ...
    ...
    11 ...

I would aggregate those lines in a map of file_name: str -> line_numbers: set where you basically do lines["broken.py"].update(range(error_lineno - 5, error_lineno + 5)) (with boundary checking etc., of course) so that you can then collect all the relevant lines with context, without overlap.
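
Something like this, as a rough sketch (file_line_count would come from wherever you read the source file):

    # Rough sketch of the aggregation with boundary checking.
    from collections import defaultdict

    lines: dict[str, set[int]] = defaultdict(set)

    def add_context(file_name: str, error_lineno: int,
                    file_line_count: int, radius: int = 5) -> None:
        # Clamp the window to valid 1-based line numbers for the file.
        start = max(1, error_lineno - radius)
        stop = min(file_line_count, error_lineno + radius)
        lines[file_name].update(range(start, stop + 1))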

24

andreasblixt t1_izo7j6o wrote

For the fun of it, I asked ChatGPT to generate the solution for creating the output above, and it didn't get it 100%, but it's a pretty good start! One of my favorite things is how it implicitly got that it should do sorted(line_numbers) because set does not have a defined iteration order, or how it should make the line numbers right-aligned.


Here is some sample code that demonstrates how you can write a Python program that turns a raised exception into the desired output. This code is not complete, but it should give you a good starting point:

import traceback

def turn_exception_into_output():
  try:
    # Code that may raise an exception goes here
    pass
  except Exception as e:
    # Aggregate the lines of code surrounding the trace in a map
    lines = {}

    # Get the traceback object
    tb = traceback.extract_tb(e.__traceback__)

    # Loop through the traceback object
    for t in tb:
      file_name, line_number, _, _ = t

      # Initialize the set of line numbers for this file
      if file_name not in lines:
        lines[file_name] = set()

      # Update the set of line numbers with the range of relevant lines
      lines[file_name].update(range(line_number - 5, line_number + 5))

    # Print the error message
    print("Please explain concisely why this error in my Python code happened:")

    # Print the traceback
    print("\n".join(traceback.format_exception(e.__class__, e, e.__traceback__)))

    # Print the relevant lines of code
    print("\nBased on these lines of code surrounding the trace:")
    for file_name, line_numbers in lines.items():
      print(file_name)
      for line_number in sorted(line_numbers):
        print("{:>3} ...".format(line_number))

To use this code, you would call the turn_exception_into_output function whenever you want to handle an exception and print the desired output. For example:

try:
  # Code that may raise an exception goes here
  pass
except:
  turn_exception_into_output()

You could also use this code as part of a custom exception handler. For example:

import sys

def custom_exception_handler(exception_type, exception, traceback):
  turn_exception_into_output()

# Set the custom exception handler
sys.excepthook = custom_exception_handler

With this code, whenever an unhandled exception is raised, the custom_exception_handler function will be called, which will turn the exception into the desired output.

13

huitu34 t1_iznpp96 wrote

Wouldn't it be even better if you added the code to the GPT prompt and asked it to suggest what the error might be and how one could fix it?

18

WindForce02 t1_iznxrau wrote

Rust compiler: are you challenging me?

18

Deep-Station-1746 t1_izo734l wrote

Is this just rewording the TypeError's str description? What information does ChatGPT get as context?

16

ExternalPanda t1_izqp771 wrote

You clearly have never worked with data scientists fresh from academia. Way too many treat error messages and stack traces as pure noise; they just freeze until you reword them in plain English for them.

3

jsonathan OP t1_izt16f5 wrote

No, it’s also proposing a fix for the error.

1

brunogadaleta t1_iznivhd wrote

Had a similar idea for an IntelliJ plugin this morning.

10

sabouleux t1_iznn344 wrote

Is there a way to specify the Python interpreter / virtual environment to use? It seems like the program is calling the interpreter on its own.

4

elbiot t1_izohltd wrote

I assume the program is installed into the virtual environment and so is operating within it. That would be done with the console_scripts entry point.
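
Roughly like this in setup.py (the stackexplain.main module path is my guess, not necessarily the project's actual layout):

    # Sketch of a console_scripts entry point; module path is a guess.
    from setuptools import setup, find_packages

    setup(
        name="stackexplain",
        version="0.1.0",
        packages=find_packages(),
        entry_points={
            "console_scripts": [
                # Installs a `stackexplain` command into the environment's
                # bin/, so it runs under that environment's interpreter.
                "stackexplain = stackexplain.main:main",
            ],
        },
    )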

1

GavinBelson3077 t1_iznjbtm wrote

Could be useful for beginners

I guess

2

satireplusplus t1_iznkxx5 wrote

I've actually had it explain an obscure warning; it was faster than googling it, and it already tells you what to do to get rid of the warning.

I've also found ChatGPT super useful for mundane stuff: creating a regex for a certain pattern given just a description and one example, creating a Flask API endpoint from a description of what it does, etc. The code often works out of the box, and sometimes needs minor tweaks. But it's much easier to correct a regex with one minor issue than to write one from scratch.

10

ReginaldIII t1_iznsvav wrote

Honest question: do you consider the environmental impact of using this to avoid very basic, easy-to-do tasks?

−6

satireplusplus t1_izntr6m wrote

Amusing question. It's a tool like any other; you're using a computer, too, to avoid doing basic tasks by hand. Inference actually isn't that energy-expensive for GPT-type models. And the way I used it, it's probably more useful than generating AI art.

10

ReginaldIII t1_iznu5ry wrote

If people were constantly crunching an LLM every time they got a stack trace, and this became a normal development practice despite being largely unnecessary, then, given it is all completely avoidable, would it not be a waste of energy?

> It's a tool like any other, you're using a computer too to avoid doing basic tasks by hand.

That's a nonstarter. There are plenty of tasks more efficiently performed by computers. Reading an already very simple stack trace is not one of them.

−6

satireplusplus t1_iznuvy5 wrote

Generating this takes a couple of seconds, and it can probably be done on a single high-end GPU (for example, EleutherAI models run just fine on one GPU). Ever played a video game? You probably "wasted" 1000x as much energy in just one hour.

The real advantage is that this can really speed up your programming, and it can write small functions all by itself. It is much better than Stack Overflow.

5

ReginaldIII t1_iznvys0 wrote

Okay. But if you didn't do this, you would not need to crunch a high-end GPU for a couple of seconds. And if many people were doing this as part of their normal development practice, that would be many high-end GPUs crunching for a considerable amount of time.

At what scale does the combined environmental impact become concerning?

It is literally a lot more energy than is consumed by interpreting the error yourself, or by googling and then reading a doc page or Stack Overflow thread. And it is energy that gets consumed every time anyone gets that error, regardless of whether an explanation for it has already been generated for someone else.

> Ever played a video game? You probably wasted 1000x as much energy in just one hour.

In terms of what value you get out of the hardware for the energy you put into it, the game is considerably more efficient than an LLM.

> The real advantage is that this can really speed up your programming and it can program small functions all by itself. It is much better than stackoverflow.

If an otherwise healthy person insists on walking with crutches all day, every day, will they be as strong as someone who just walks?

−6

dasdull t1_izo1b23 wrote

If you run a Google search, Google will also run an LLM on your query.

8

ReginaldIII t1_izo2mww wrote

They also cache heavily. Sustainability is a huge problem in ML and HPC.

In my job I spend a lot of time considering the impact of the compute that we do. It is concerning that the general public doesn't see how many extra and frivolous compute hours we are burning.

It's one thing to have a short flash of people trying out something new, novel, and exciting. It is another to suggest a tool naively built on top of it with the intention of long-term use and widespread adoption.

The question of the environmental impact is legitimate.

3

Log_Dogg t1_iznww2c wrote

"Why would you use a calculator when you can just get the solution using a pen and paper?"

5

ReginaldIII t1_iznxeag wrote

A calculator can be significantly more energy-efficient than manual calculation.

Crunching a high-end GPU to essentially perform text spinning on a stack trace is not more efficient than directly interpreting the stack trace.

Edit: See, this is a weird comment to downvote, because it is literally correct. Some usages of energy provide higher utility than others. Radical idea, I know.

−2

antinode t1_izodbht wrote

Your comments bitching about this wasting electricity are wasting electricity.

4

ReginaldIII t1_izoe71e wrote

My comment was attempting to have a civil discussion about the sustainability of LLMs in production applications; yours was intended only to be derisive and petty?

−1

antinode t1_izoldjm wrote

Dude stop wasting electricity with your comments you're contributing to climate change we're all going to die.

1

ReginaldIII t1_izolr3v wrote

If you really care about that, then you care about this.

0

_poisonedrationality t1_izoeb0w wrote

People shitting on exploring AI technology for "environmental impact" are the worst type of griefers.

3

ReginaldIII t1_izoi4bq wrote

Nothing wrong with exploring new AI technology. But there is absolutely a point, when you are talking about deploying a system for long-term or widespread use, where you should stop to consider the environmental impact.

The hostility from people because they've been asked to even consider the environmental impact is telling.

1

rohetoric t1_izo0hzs wrote

How did you do this? Via APIs?

2

wymco t1_izpny82 wrote

Yep, I think so. You would build some sort of server that receives commands when the function is executed. The server passes the query to the model (which in this case is hosted by OpenAI) via the API (you receive the API keys once registered with OpenAI)...

Every query will cost you some pennies... just a high-level description...
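
As a rough sketch with the openai Python package (the model name and prompt are placeholders, not what OP actually uses; there's no public ChatGPT API, so this goes through the regular completions endpoint):

    # Rough sketch using the openai package's completions endpoint.
    # Model name and prompt are placeholders.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]  # from your OpenAI account

    def explain_error(traceback_text: str) -> str:
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=f"Explain this Python error concisely:\n\n{traceback_text}",
            max_tokens=200,
            temperature=0,
        )
        return response["choices"][0]["text"].strip()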

1

QWxx01 t1_izo3mdt wrote

These are exciting times to be alive!

2

Tmaster95 t1_izozhdm wrote

That's amazing! Something like this might help a lot of newcomers start coding.

2

drakohnight t1_izplahe wrote

Man having something explain to me weird errors in python would help a ton 😂

2

KyleDrogo t1_izq7ou8 wrote

This is actually an incredible tool. Well done 👍

2

Pleasant-Cow-3898 t1_izr6dql wrote

What tool did you use to make your gif with the typing? Nice tool!

2

IveRUnOutOfNames66 t1_izrmp25 wrote

If I had an award, it'd be yours.

For others reading: click the coin icon at the top to receive free awards; it refreshes every 2-3 days. Give this guy the recognition he deserves.

Have my upvote till then!

2

DreamyPen t1_iznlauj wrote

I think it's brilliant!

1

RawkinitSince86 t1_izo7y3j wrote

ChatGPT is legit. Was playing with it all day yesterday.

1

itsfreepizza t1_izp19iz wrote

Wait, does this support Rust?

I'm still learning; tbh the compiler errors help, but I need more info on why it's not working, and additional suggestions.

1

dblinkzz t1_izq0ldk wrote

How did you record your screen like that?

1

eddiewrc t1_izrqlgz wrote

Wow, does it actually make sense, or is it random gibberish?

1

ReginaldIII t1_iznsdsj wrote

That's such an unnecessarily wordy explanation. The error message literally explained it to you concisely.

If it produces such unnecessary output for such a simple error message, god help you when it is more complicated.

Furthermore, ChatGPT cannot do deductive reasoning. It can only take existing chains of thought from its training set and swap out the keywords consistently, applying the same logic template to something else which may or may not fit correctly.

This is a bad idea. And if I'm perfectly honest, a waste of electricity. Save the planet and don't push this as a legitimate usage.

0

Glittering_Tart51 t1_izoe6bj wrote

You're going hard with the waste-of-electricity BS.

The guy did a cool project; your comment is not constructive at all, just mean.

I like his project. It's true that as a beginner these errors can be hard to understand sometimes. I don't think you should be mean and disrespectful to him if you don't like his idea.

You should come up with more ideas to make his idea better. It would be a better use of electricity than what you just did.

24

ReginaldIII t1_izoevj0 wrote

Sustainability in ML and HPC is a huge part of my job.

If you don't consider that important and think it's BS, that doesn't actually change the fact that an important part of my job is to consider it.

At no point was I mean to OP. I'm not being mean to a person who is littering by telling them not to litter. And I'm not being mean to a person making and distributing confetti as their hobby by pointing out that it is also littering.

−10

Glittering_Tart51 t1_izoile1 wrote

We're not at your job. It's a Reddit post about someone who's trying to build a tool to help other people.

If you're so good at your job, you might want to offer insight or knowledge on how to improve his project.

Just saying the project is trash and a waste is not helping anybody here.

6

ReginaldIII t1_izojcm7 wrote

Okay. Allow me to use my knowledge of sustainability in HPC to help you solve this problem in a more environmentally friendly way.

Read the stack trace.

−9

ParanoidAltoid t1_izom5vz wrote

GPT costs pennies to produce this type of output; even with a 100% carbon tax, such that the cost of pollution was internalized, it would cost less than a dime.

If you're going to be a useful expert at reducing waste, you should account for the actual magnitude of the waste before you scold others. This is why half our public will to be environmental was blown on paper straws.

The benefit of testing out new ways to use GPT to code faster clearly outweighs the dollars of electricity spent running the model. If you can't see those tradeoffs and instead scold any minuscule use of electricity you don't like, I believe you are a hindrance to saving the planet.

4

jsonathan OP t1_izo5vgl wrote

It's not too wordy if you're a beginner.

9

Jegster t1_izqooid wrote

I teach high school kids coding. It looks really useful. Ignore Mr. Naysayer below.

5

[deleted] t1_izqjo9z wrote

[deleted]

1

ReginaldIII t1_izqlhrg wrote

> say what again

Organizations consider the energy impact of deploying different types of models for different purposes. It really is that simple.

1

hattulanHuumeparoni t1_izrv6ju wrote

> The error message literally explained it to you concisely.

Well, if you are a programmer, an error like this is trivial and the explanation is wordy.

On the other hand, the first paragraph is close to a perfect explanation of the issue for a programming student. It does not expect you to know programming terminology, and reads like a textbook.

0

ReginaldIII t1_izs9637 wrote

Then maybe the programming student should read a book that covers debugging. They can read that in an offline fashion, and then apply that knowledge when it comes up in practice.

0

what_Would_I_Do t1_izq0ned wrote

I think it will be great for programmers just starting out. Only for the first few weeks, though.

−1

pisv93 t1_iznz4kb wrote

How do you call ChatGPT? Afaik there's no API?

0

rafgro t1_izo1bmq wrote

They scrape the ChatGPT website.

0

pisv93 t1_izo1j4r wrote

Probably against the OpenAI ToS?

2

Lundyful t1_izoekca wrote

Need this for Haskell.

0

BUGFIX-66 t1_izphn6n wrote

Really? Can it find the bugs in this code?

https://BUGFIX-66.com

Originally the above site was to demonstrate the incompetence of Microsoft Copilot, but it works for ChatGPT just as well.

This is a test mostly OUTSIDE the training set, and incorrect answers are rejected.

Copilot can solve a few of the simple ones at the beginning (simple matrix multiplication, simple radix sort, etc., which appear often in the training data), and some of the harder ones whose solutions appear on GitHub (e.g., the uncorrected prediction/correction compressor/decompressor, whose solutions were front-page on Hacker News).

If you paste the puzzles in, how many can ChatGPT solve?

For how many does it need the hint?

0

jsonathan OP t1_izxqy9s wrote

This is unrelated. StackExplain doesn’t find and fix bugs, it explains error messages.

1

__Maximum__ t1_izrv6x1 wrote

Cool idea, but I can see this version wasting my time, especially if I don't pass the code along with the error. I could see it being very useful with their davinci coding model, though right now it's expensive. Let's hope Stability AI or someone else publishes an open-source model that is as good as OpenAI's.

0

betasintheta t1_iznht9t wrote

Will it work on Windows?

−1