SWESWESWEh t1_jdt2m8y wrote
Reply to comment by lambertb in [D] GPT4 and coding problems by enryu42
It often has errors, but if you just paste the errors into the chat it will generally fix them. In the early versions of ChatGPT, I had issues with things like writing networking code in C++, but it still got me a lot of the way there.
I recently worked through writing a high-throughput async data pipeline in Java with it, and it did a great job of writing the code and even taught me a new design pattern. I had to make a few small changes here and there, but it basically turned a week of work into a couple of hours. With the context of the written code there, I also had it write unit tests and documentation for me, and I was able to have it add more unit tests and integration tests based on my feedback.
I'm fine with people underestimating how good ChatGPT is as a coding assistant; it just makes me look better because of how productive it makes me.
SWESWESWEh t1_jdt1ly6 wrote
Reply to comment by enryu42 in [D] GPT4 and coding problems by enryu42
So my guess is whoever ran the test did a bad job with the prompting. In one of the examples they share, https://github.com/enryu43/llm_coding/blob/main/atcoder_eval/gpt4_solutions/abc294_c.py, the author complains that it's not an efficient algorithm. So I just gave it a shot with ChatGPT:
Me: What does this code do?
def main():
    N, M = map(int, input().split())
    A = list(map(int, input().split()))
    B = list(map(int, input().split()))

    C = sorted(A + B)
    pos_A = [str(C.index(a) + 1) for a in A]
    pos_B = [str(C.index(b) + 1) for b in B]

    print(" ".join(pos_A))
    print(" ".join(pos_B))

if __name__ == "__main__":
    main()
ChatGPT: This code takes two lists of integers as input, merges them, sorts the merged list, and then prints the positions of the elements from the original lists in the sorted merged list.
Here's a step-by-step explanation of the code:
1. The main function is defined.
2. Two integers, N and M, are read from the input and mapped to int using map(int, input().split()).
3. Two lists, A and B, are read from the input, with each element being mapped to int using map(int, input().split()).
4. A new list C is created by merging lists A and B and then sorting the resulting list using sorted(A + B).
5. Two new lists, pos_A and pos_B, are created. These lists store the positions of the elements from lists A and B in the sorted list C. The positions are calculated using the index() method and are 1-indexed.
6. The elements in pos_A and pos_B are joined into strings and printed, separated by a space.
7. The if __name__ == "__main__": statement checks if the script is being run directly (not imported as a module), and if so, calls the main function.

Here's an example to illustrate how the code works:
Input:
4 3
2 1 4 3
5 6 7

Output:
1 2 4 3
5 6 7

The sorted merged list C is [1, 2, 3, 4, 5, 6, 7]. The positions of the elements from the original lists in the sorted list are 1, 2, 4, 3 for list A and 5, 6, 7 for list B.
Me: Could this code be refactored to be more algorithmically efficient?
ChatGPT: Yes, the code can be refactored to be more algorithmically efficient. The current implementation uses the index() method to find the positions of elements from lists A and B in the sorted list C. This approach has a time complexity of O(n^2) because index() is called for every element in both A and B.
To improve the algorithmic efficiency, you can use a dictionary to store the positions of the elements in the sorted list C. This will reduce the time complexity of finding the positions to O(n). Here's a refactored version of the code:
def main():
    N, M = map(int, input().split())
    A = list(map(int, input().split()))
    B = list(map(int, input().split()))

    C = sorted(A + B)
    positions = {value: index + 1 for index, value in enumerate(C)}
    pos_A = [str(positions[a]) for a in A]
    pos_B = [str(positions[b]) for b in B]

    print(" ".join(pos_A))
    print(" ".join(pos_B))

if __name__ == "__main__":
    main()
The main changes in the refactored code are:
1. A positions dictionary is created that maps the values in the sorted list C to their 1-indexed positions.
2. The pos_A and pos_B lists are created by directly looking up the positions of the elements in the positions dictionary.

This refactored version of the code improves the algorithmic efficiency by reducing the time complexity of finding the positions of the elements from O(n^2) to O(n).
So it accurately analyzes what this code does without knowing the problem definition, and it will spot the inefficiency if asked to write algorithmically efficient code. Seems fine to me.
SWESWESWEh t1_jdolo78 wrote
Reply to comment by machineko in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
M1 MacBook Pro
SWESWESWEh t1_jdk8rtn wrote
Reply to comment by machineko in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
Doing the lord's work, my friend. Does it work with Apple Silicon Metal shaders? I've trained my own models locally, since both TF and PyTorch support it, but I've noticed a lot of people use CUDA-only methods, which makes it hard to use open-source stuff.
SWESWESWEh t1_jd2s9ml wrote
Reply to comment by wojtek15 in [Project] Alpaca-30B: Facebook's 30b parameter LLaMa fine-tuned on the Alpaca dataset by imgonnarelph
Unfortunately, most code out there calls CUDA explicitly rather than checking which GPU type you have and using that. You can fix this yourself (I use an M1 MacBook Pro for ML and it is quite powerful), but you need to know what you're doing, and it's just more work. You might also run into situations where things are not fully implemented in Metal Performance Shaders (the Mac equivalent of CUDA), but Apple does put a lot of resources into making this better.
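To illustrate what I mean (just a rough sketch, assuming PyTorch; pick_device is a made-up helper name, not from any particular repo), the fix is usually to replace a hardcoded torch.device("cuda") with a small availability check like this:

    import torch

    def pick_device() -> torch.device:
        # Use CUDA when an NVIDIA GPU is available.
        if torch.cuda.is_available():
            return torch.device("cuda")
        # Fall back to Metal Performance Shaders on Apple Silicon.
        if torch.backends.mps.is_available():
            return torch.device("mps")
        # Otherwise run on CPU.
        return torch.device("cpu")

    device = pick_device()
    model = torch.nn.Linear(16, 4).to(device)      # toy model, stands in for any real one
    x = torch.randn(8, 16, device=device)          # create tensors on the same device
    print(device, model(x).shape)

The annoying part is that this check has to be applied everywhere a repo hardcodes "cuda", which is why it ends up being more work on a Mac.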
SWESWESWEh t1_je33t7z wrote
Reply to comment by cegras in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
I've had a lot more luck solving novel coding problems with the GPT-4 version of ChatGPT than with Google. If you stick to older tech and libraries that have been around forever, like Java and Spring, it's really good at solving fairly difficult problems if you just keep providing context. With Google, it basically comes down to whether someone has done this exact thing on SO and gotten an answer; if not, oh well.