(aka - Learning from pair programming with ChatGPT and Copilot)
I’ve been learning a lot from my Berkeley Haas course in AI/ML, and not just from the class material. I’ve been increasingly using GitHub Copilot in VS Code and ChatGPT (GPT-4o). I’m noticing my own work patterns changing as a result, and I’ve come to some ideas about what this means for learning and work in general.
Here are a few of the top “noticings” that I thought would be good to share.
Principle #1: Leap before you look!
Prior to taking this course, I was AI/ML conversant, but had never been hands-on. To do my coursework, I’ve had to start using several deep pieces of tech for the first (or almost the first) time, including:
Python
VS Code
Pandas
Seaborn
Plotly
scikit-learn
Matplotlib
Graphviz
Each of these has a great deal of depth. Even a single function may have a large number of parameters. Roughly equivalent calls to similar functions in different libraries differ in parameter names and resulting behavior.
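To make that concrete, here’s a sketch of two roughly equivalent histogram calls in Matplotlib and Seaborn (the DataFrame and its “age” column are invented for illustration): same chart, different conventions for how the data comes in.

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Invented example data
df = pd.DataFrame({"age": [22, 35, 41, 29, 58, 33]})

# Matplotlib takes the raw values directly; the bin count is `bins`.
plt.hist(df["age"], bins=5)

# Seaborn takes the DataFrame plus a column name; also `bins`, but with
# different defaults and different keywords elsewhere (e.g. kde=).
sns.histplot(data=df, x="age", bins=5)
plt.show()
```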
In the past, when I had to learn a new toolset, whether a deep application like VS Code, a language like Python, or a library like Pandas, I would have started with the documentation, likely gone through a few tutorials, and even read a complete book. This was necessary to grok the mental model that the designers of the product had come up with, so that I could train my brain to “speak” in terms of that model.
Now, I just get clear on what I want, and I ask my assistant to show me how to do it. Though Copilot does a great job of suggesting code, ChatGPT is a much more comprehensive and patient teacher, explaining each code block and the models behind it.
I no longer hesitate to dive in and get things wrong. I no longer hesitate to ask (more on this later). My speed of acquisition of new tech is way up, and it is also far more fun.
Principle #2: Stop Remembering!
For the first few weeks of the class, I would try to remember the names of the function parameters, or the exact syntax of Python and Pandas idioms. Now I just ask. In most cases, I can execute the suggested code and it accomplishes my goal with minimal changes.
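For example, here’s the kind of snippet the assistant hands back when I ask for the “mean value per group” idiom (the DataFrame and column names are made up):

```python
import pandas as pd

# Made-up example data
df = pd.DataFrame({
    "team": ["red", "red", "blue", "blue"],
    "score": [3, 5, 2, 8],
})

# Group by team, average the scores, and flatten back to a DataFrame.
means = df.groupby("team")["score"].mean().reset_index()
print(means)
```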
Machines are far better at precise repetition than human beings. Allowing the machine to do this for me frees me up to focus on what I am trying to do. My “hard work” is in formulating the right prompt. I have to get clear on the desired outcome, including any context and constraints (requirements) that would affect the output code.
I don’t even try to remember. It turns out that after seeing the code a few times, I pick up the syntax and idiom, but I don’t rely on my own memory, since that isn’t the core problem I’m trying to solve.
Principle #3: You don’t lose points for asking!
In the starting weeks of the course, my questions to the assistant were all beginner-level (“what is the best way to iterate over the rows in a pandas dataframe?”, “how can i use .loc in pandas to select the row which has ‘foobar’ in the ‘Name’ column”). I would never, ever, ask a human being such questions. Part of me is still ashamed to even formulate them, as I know the answer should be “RTFM”.
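For the record, the answers to those two questions boil down to a couple of lines each, roughly like this (the DataFrame contents are invented):

```python
import pandas as pd

# Invented example data
df = pd.DataFrame({"Name": ["foo", "foobar", "baz"], "Value": [1, 2, 3]})

# Iterating over rows: iterrows() works, though the assistant will
# usually add that vectorized operations are preferred when possible.
for index, row in df.iterrows():
    print(index, row["Name"])

# Selecting the row whose 'Name' column is "foobar" with .loc:
match = df.loc[df["Name"] == "foobar"]
print(match)
```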
Now, I don’t have to RTFM. I have the world’s most patient teacher, who knows pretty much everything, at least about this particular domain, who is willing to put up with any number of inane questions, and who will explain at whatever level of detail I need.
I can even badger my teacher, peppering it with follow-up questions and tweaks. Unlike the “guy down the hall” that I used to lean on during my early years at Microsoft, the AI never needs to get back to something else.
No one knows, cares, or will think less of me for asking any kind of question. I have to ignore the editor inside my head that tells me they will. That editor is just old, and slow.
Principle #4: Learn how to learn!
Am I still learning? Since the “thing” actually produces most of the code, am I really learning how to solve these problems?
In a word: Yes! In a sentence: “Yes, and far faster than before”.
But, in a paragraph? Yes, but with a different path. Using AI, I am able to punch well above my weight class. I can produce credible looking answers to problems I barely understand in a short period of time. Indeed, I can go back and look at my submissions from early modules (we do a new module each week) and see where I was “glossing over” a lack of understanding. But, even in those early modules, my dance with the assistants was laying the groundwork for where I am now.
In the more freeform assignments, I go well beyond what is required, because there is no friction in doing so. I’ve spent hours working to get more compelling visualizations, because I know that I don’t have to understand everything involved to achieve the outcome.
Learning to learn with AI is about focusing on clarity, and on building my ability to have the conversation with the AI that yields the next, better product.
Principle #5: It’s not cheating!
I’m having to overcome the feeling that I am somehow cheating by letting the assistant either do, or at least strongly help me with, some of the heavy lifting. I suspect anyone reading this who has used Copilot in VS Code has had the revelation: you type in a comment, and Copilot suggests the next line of code, then the next, and the next.
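The flow looks roughly like this (the file name, column name, and suggested lines are invented for illustration; you type the comment, and the tool proposes the lines beneath it):

```python
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("data.csv")  # hypothetical input file

# plot the distribution of the 'price' column as a histogram
plt.hist(df["price"], bins=30)  # <- suggested by Copilot
plt.xlabel("price")             # <- suggested
plt.ylabel("count")             # <- suggested
plt.show()                      # <- suggested
```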
Isn’t this cheating? Is spell correction cheating? Is using a thesaurus cheating? Is using a cursive font cheating? Is cutting and pasting code from a blog cheating? Is auto-complete cheating?
My education began a long time ago. Lots of what we were focused on learning amounted to basic skills. I have to fight this training as I use the tools of today.
It’s true, sometimes, I do not understand the code, at least at first. Since it is generally data analysis, simple learning, or visualization code running on my own laptop, I’ll execute it without understanding it. To understand it, I’ll tweak it, and sometimes paste it into ChatGPT, who will patiently explain it to me. Sometimes I will look at it and decide that I don’t need to understand how it works. If this were going into a production or critical system, I would make a different choice. But, for now, it is not cheating to let “it” produce code I don’t yet understand which accomplishes my small immediate objective.