
Harnessing GPT as a Coding Assistant: A Guide with a Grain of Salt

In the realm of artificial intelligence, GPT (Generative Pre-trained Transformer) has emerged as a groundbreaking tool with the potential to revolutionize numerous fields, including programming. However, while GPT can serve as a valuable coding assistant, it’s crucial to understand its limitations and the necessity of human expertise in the process.

GPT, developed by OpenAI, is a language prediction model that generates human-like text based on the input it receives. It’s like a more advanced version of Stack Overflow or Google search, providing code suggestions, debugging help, and even writing code snippets. However, it’s not a silver bullet solution for all coding challenges.

The Value of GPT as a Coding Assistant

GPT can be a powerful tool for programmers. It can help generate code snippets, suggest solutions to common problems, and even help debug code. It’s like having an extra pair of eyes that can scan through vast amounts of information in seconds.

For instance, GitHub’s Copilot, powered by OpenAI’s Codex, is a great example of GPT in action. It provides suggestions for whole lines or blocks of code as you type, helping you to code faster and with fewer errors. You can learn more about GitHub’s Copilot here.

But you can also simply ask ChatGPT to “create PHP code to perform the following task”, or “create a SQL query to get the following items from the WordPress database”, and so on. It will often provide snippets that already work, and it frequently gives a good explanation of what the code actually does, which is very valuable, especially for people who are still learning.
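As a concrete illustration, a prompt along the lines of “get the five most recent published posts from the WordPress database” (the task itself is our own made-up example) will typically come back as a small, ready-to-paste snippet, roughly like this sketch:

```
<?php
// Illustrative sketch of the kind of snippet ChatGPT tends to return.
// It uses the global $wpdb object, so the table prefix is handled for us.
global $wpdb;

$recent_posts = $wpdb->get_results(
    "SELECT ID, post_title, post_date
     FROM {$wpdb->posts}
     WHERE post_status = 'publish' AND post_type = 'post'
     ORDER BY post_date DESC
     LIMIT 5"
);

foreach ( $recent_posts as $post ) {
    echo esc_html( $post->post_title ) . "\n";
}
```

What makes this valuable is often less the query itself than the explanation ChatGPT usually wraps around it.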

The Caveats: What to Keep in Mind

While GPT can be a valuable assistant, it’s essential to remember that it’s not infallible. On the contrary. It’s a tool trained on a vast corpus of data, but it doesn’t understand code the way a human does. It has no conceptual understanding of the code, and it can’t reason about the code’s purpose or the larger project context. Most importantly, it is a probabilistic algorithm under the hood, meaning it simply “guesses” the next best character, word or sentence, and this is how it generates its content. Which is impressive enough, considering that the responses are often very accurate, but it is also the main problem: the response is not the result of reasoning, it is the result of “chance” – the probability of which string fits best as a continuation of the previous string.
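To make the “next most likely continuation” idea more tangible, here is a deliberately oversimplified sketch. The tokens and probabilities are entirely made up for illustration; a real model ranks tens of thousands of tokens using far more context:

```
<?php
// Toy illustration of probabilistic completion: the "model" here is just
// a hand-made table of possible continuations with made-up probabilities.
$next_token_probabilities = [
    '"; ?>'       => 0.55, // close the string and the PHP tag
    ' world"; ?>' => 0.30, // assume the user meant "hello world"
    ' all"; ?>'   => 0.10, // assume the user meant "hello all"
    '";'          => 0.05, // just close the string
];

// Greedy decoding: always pick the single most probable continuation,
// regardless of what the user actually intended to write.
arsort( $next_token_probabilities );
$completion = array_key_first( $next_token_probabilities );

echo 'Input:      echo "hello' . "\n";
echo 'Completion: ' . $completion . "\n";
```

The exact numbers don’t matter; the point is that the choice is made by ranking likely continuations, not by asking what you actually meant.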

An easy experiment you can perform to demonstrate this is to type something like `echo "hello` into ChatGPT and send it. The reply will likely be something like this:


It seems like you've started a PHP code snippet but haven't completed it. If you're trying to echo the word "hello," the correct syntax in PHP would be:

```
<?php echo "hello"; ?>
```
This code will output the word "hello" when executed. Remember to close the PHP tag with ?> if you're embedding PHP within an HTML file; if you're writing standalone PHP code, you can omit the closing tag.

GPT generates output based on patterns it has learned from its training data. It doesn’t know your specific project requirements or constraints. Therefore, the code it generates might not always be the most efficient, secure, or even correct solution. And as shown in the example above, it simply assumed we wanted to write “hello”. What if we had wanted to write “hello all”, or something similar? This is just a tiny example – you can extend it to see how the “completion” process works, and how GPT does not reason.

The Limitations of GPT

GPT has several limitations that users must be aware of. First, it’s sensitive to input. A slight change in the phrasing of a request can lead to significantly different results. This can be a challenge when trying to get the model to generate specific code.

In our experience and opinion, this is actually the biggest challenge we as humanity will have to deal with as AI development progresses: we must ensure that we keep control over the AI, which, at the moment, using the “prompt” command structure, is extremely hard, if not impossible.

Second, GPT can sometimes generate incorrect or nonsensical code, as it doesn’t truly understand the code it’s generating. It’s also possible for GPT to generate code that is insecure or that introduces new vulnerabilities into your codebase. In our experience, unless specifically instructed to produce safe code, this is the case in 100% of the code it produces. And remember, it tries to complete your input, so it is highly possible that it just “makes up” some code to satisfy your request, even though that code might not work at all. We have often seen it literally “invent” whole new PHP or WordPress hooks, functions and variables which simply do not exist in the real world (but would be nice to have 😀 ).
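To illustrate the security point, an unprompted answer will often interpolate user input directly into a SQL query, while the safer variant using $wpdb->prepare() tends to appear only when you explicitly ask for it. Both snippets below are our own illustrative sketches, not output copied from ChatGPT:

```
<?php
// Typical default output: user input concatenated straight into the
// query string, which is wide open to SQL injection.
global $wpdb;
$title  = $_GET['title'];
$unsafe = $wpdb->get_results(
    "SELECT ID FROM {$wpdb->posts} WHERE post_title = '$title'"
);

// What you usually have to ask for explicitly: the same query with the
// user input passed through $wpdb->prepare().
$safe = $wpdb->get_results(
    $wpdb->prepare(
        "SELECT ID FROM {$wpdb->posts} WHERE post_title = %s",
        $_GET['title']
    )
);
```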

Third, GPT doesn’t have the ability to remember or learn from past interactions. It doesn’t have a memory of your past requests or the context of your project. This means it can’t build on past interactions or understand the broader context of your work. When using ChatGPT, it does keep a memory of the current chat, but even that is capped; at some point it will simply forget what it wrote previously. There are ways to mitigate this, such as using embeddings and/or sending back the whole conversation when making new requests, but even then it will only try to complete the previous interaction – it still cannot reason about the information provided.
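For completeness: “sending back the whole conversation” simply means resending every previous turn with each new request, because the API itself keeps no state between calls. A rough sketch against OpenAI’s chat completions endpoint could look like the following (the model name and messages are made-up placeholders, and error handling is omitted; check the current API documentation before relying on it):

```
<?php
// Sketch: replaying the conversation so far with every new request,
// since the API itself remembers nothing between calls.
$messages = [
    [ 'role' => 'system',    'content' => 'You are a helpful coding assistant.' ],
    [ 'role' => 'user',      'content' => 'Write a PHP function that slugifies a title.' ],
    [ 'role' => 'assistant', 'content' => '...previous answer goes here...' ],
    [ 'role' => 'user',      'content' => 'Now make it strip accents as well.' ],
];

$payload = json_encode( [
    'model'    => 'gpt-3.5-turbo', // placeholder model name
    'messages' => $messages,
] );

$ch = curl_init( 'https://api.openai.com/v1/chat/completions' );
curl_setopt_array( $ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . getenv( 'OPENAI_API_KEY' ),
    ],
    CURLOPT_POSTFIELDS     => $payload,
] );

$response = json_decode( curl_exec( $ch ), true );
curl_close( $ch );

echo $response['choices'][0]['message']['content'] ?? 'No reply';
```

The longer the conversation grows, the more of the context window this replay consumes – which is exactly why the model eventually “forgets” earlier parts of the chat.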

The Need for Human Expertise

Despite the potential of GPT as a coding assistant, it’s not a replacement for human expertise. A skilled programmer’s understanding of the project’s context, the ability to reason about code, and the knowledge of secure coding practices are irreplaceable – at least as of now and, in our opinion, for years to come.

GPT can be a valuable tool in a programmer’s toolkit, but it’s not a standalone solution. It’s a tool that can augment human capabilities, not replace them. It’s essential to review and understand the code that GPT generates, ensuring it meets your project’s requirements and follows best practices for secure coding.

In conclusion, while GPT can serve as a powerful coding assistant, it’s not a substitute for human expertise. It’s a tool that, when used correctly and with an understanding of its limitations, can help programmers code more efficiently and effectively. However, it’s not a magic wand that can solve all coding challenges. As with any tool, it’s only as good as the person using it.

For more information on GPT and its applications, you can visit the official OpenAI website here. For a deeper dive into the limitations and ethical considerations of AI and GPT, this OpenAI paper is a valuable resource.

