I have been using GitHub Copilot in both Visual Studio and Visual Studio Code. More often than not, I have found the tool to be a great companion when programming. However, I have also had negative experiences in which the tool provided misinformation, costing me time and effort.
Over the past few weeks, I have been using GitHub Copilot to help me with programming tasks. I have found it very useful, especially for writing unit tests for Angular projects. Just a few months ago, before I knew about ChatGPT and other AI tools such as GitHub Copilot, I often had to Google for the right code to mock services, components, and other kinds of dependencies using Jasmine. Usually I found what I needed after just one or a few searches, but the process still took me out of the flow because I had to navigate away from Visual Studio Code. One of the goals of GitHub Copilot is to help developers stay in the flow as much as possible by attempting to generate code based on the available context in the project or on comments written in natural language. To this end, I feel GitHub Copilot has definitely achieved its goal. For instance, I can write a comment such as the following: “Create a mock API service using Jasmine. The mock should have the method ‘getCurrentUser’.” GitHub Copilot then comes up with code that is just right, as shown in the snippet below.
{
  // create a mock auth service. The mock should have the method called 'getCurrentUser'
  provide: AuthService,
  useValue: jasmine.createSpyObj(['getCurrentUser'])
}

/** output from GitHub Copilot
[DEBUG] [getCompletions] [2023-09-28T13:56:46.989Z] Requesting completion at position 172:0, between " // create a mock auth service. The mock should have the method called 'getCurrentUser'\r\n" and "\r\n\r\n".
[INFO] [default] [2023-09-28T13:56:46.989Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2023-09-28T13:56:47.271Z] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 190 ms
[INFO] [streamChoices] [2023-09-28T13:56:47.272Z] solution 0 returned. finish reason: [stop]
[INFO] [streamChoices] [2023-09-28T13:56:47.272Z] request done: headerRequestId: [0bda2fe8-04d9-4a6b-8572-bb998c54eaaf] model deployment ID: [x43c9b5e45a36]
[INFO] [CopilotProposalSourceProvider] Completion accepted
**/
However, sometimes GitHub Copilot appears to get lost. There are instances in which it cannot come up with any code and instead keeps generating more comments, or nothing at all. In other cases, it generates code that looks plausible and is syntactically correct, but calls methods that do not exist.
GitHub Copilot Chat is similar to ChatGPT, but it is available right within the Visual Studio Code IDE and is designed to work with code. I have found it incredibly helpful and a real time-saver. For instance, while I was learning about Azure OpenAI and trying to write some Python code to read text from a PDF file and have ChatGPT summarize it, I realized my Python skills were a bit rusty. So I asked the chat how to read a PDF and extract its text, and it gave me almost-correct code that I could adapt with just minor changes.
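The script I ended up with looked roughly like the sketch below. This is my own reconstruction rather than the bot's verbatim output; it assumes the older PyPDF2 API (PdfFileReader and extractText), the pre-1.0 openai Python SDK configured for Azure OpenAI, and placeholder names such as "report.pdf" and a "gpt-35-turbo" deployment.

import os
import openai
from PyPDF2 import PdfFileReader  # older PyPDF2 API; see the updated version further below

# Read the PDF and collect the text of every page.
with open("report.pdf", "rb") as f:  # "report.pdf" is a placeholder file name
    reader = PdfFileReader(f)
    text = ""
    for i in range(reader.numPages):
        text += reader.getPage(i).extractText()

# Ask Azure OpenAI (pre-1.0 openai SDK style) to summarize the extracted text.
openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"  # placeholder endpoint
openai.api_version = "2023-05-15"
openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # name of your Azure OpenAI deployment
    messages=[{"role": "user", "content": f"Summarize the following text:\n\n{text}"}],
)
print(response.choices[0].message["content"])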
The code the bot generated was based on an older version of PyPDF2. I asked it to regenerate the code using the latest version of the library, and it was able to fix the code for me.
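With PyPDF2 3.x, the reading portion changes to the newer PdfReader API, roughly as follows (again, my own adaptation rather than the bot's exact output):

from PyPDF2 import PdfReader

# PyPDF2 3.x replaces PdfFileReader/getPage/extractText with PdfReader/pages/extract_text.
reader = PdfReader("report.pdf")
text = ""
for page in reader.pages:
    text += page.extract_text() or ""  # guard against pages with no extractable text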
Another cool feature that I like about GitHub Copilot Chat is its awareness of the code in the current project. For instance, I can highlight a specific piece of code in my project and ask questions about it. I can even provide context by telling the bot where the code resides, such as the file name, method, or class name, without having to copy and paste. This saves time and effort, especially when I want to provide an entire file as context.
Although the bot is very helpful and interacts in a conversational way that sounds very much like a human, it is still a generative AI model that can produce false information. Even though I am aware of this, I have still been caught off guard a few times.
For example, in a Python project, I wanted the bot to show me how to install “python-dotenv”. However, I did not know the exact package name and simply asked it how to install “dotenv”. The bot generated a set of steps for installing the package, but it did not point out that “dotenv” is a Node package, not a Python one. I followed the instructions it provided and got errors as a result. Being naive and rusty at Python, I wasted a good amount of time asking the bot how to resolve the errors and doing more research until I realized the package name was incorrect.
Following those instructions results in errors that give no hint that the package name is wrong.
(myenv) C:\Users\tbo\projects\personal\azure-search-openai-demo\notebooks>pip install dotenv
Collecting dotenv
  Downloading dotenv-0.0.5.tar.gz (2.4 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [73 lines of output]
      C:\Users\tbo\projects\personal\azure-search-openai-demo\notebooks\myenv\Lib\site-packages\setuptools\installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
        warnings.warn(
      WARNING: The wheel package is not available.
      error: subprocess-exited-with-error

      python setup.py egg_info did not run successfully.
      exit code: 1
      ...
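The fix, once I realized the mistake, was simply to install python-dotenv, which is then imported as dotenv. Here is a minimal sketch of the correct install and usage (the environment variable name is just an example):

# pip install python-dotenv   <-- the correct package name for Python
import os
from dotenv import load_dotenv

load_dotenv()  # loads key=value pairs from a local .env file into the environment
api_key = os.getenv("AZURE_OPENAI_API_KEY")  # example variable name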
Besides occasionally generating false information, the bot also does not seem to have the most up-to-date information. For example, I asked it how to check which account is currently logged in via the Azure Developer CLI, but it had no knowledge of the Azure Developer CLI, which seems to have had its first release in late 2022. Instead, the bot assumed I was asking about the Azure CLI. When I later clarified that I meant the Azure Developer CLI, the bot informed me that the tool does not exist.
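For reference, the commands I was after look roughly like this in the two tools (assuming reasonably recent versions of both CLIs; double-check the flags against your installed versions):

# Azure CLI: show the currently signed-in account and subscription
az account show

# Azure Developer CLI (azd): check the current login status
azd auth login --check-status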
Despite the few instances where GitHub Copilot does not produce the results I expect, I have found the tool to be worth the cost, which, at the time of writing, is $10 per month for individuals. I find myself using it regularly to generate unit tests and comments, refactor complicated logic, and optimize code. Knowing that it sometimes generates misinformation, I just have to be careful and double-check the code if I run into problems. If you have not already, I recommend giving the tool a try. GitHub Copilot is relatively easy to install for Visual Studio and Visual Studio Code, and you can find step-by-step instructions here.