On Programming with AI…
I completed my first significant “scratch” coding exercise using AI.
I needed to write a C# class to track a graph of HTML links and store them in a SQLite database. I asked GitHub Copilot to write this while I was listening to a webinar. It dutifully complied. I looked at the code, but couldn't really test it given my situation (I was on a Mac laptop, away from my normal dev machine).
I saw some things I didn't like and told it to change those. Over the next 30 minutes or so, I kept coming up with new ideas and telling Copilot to add them. Every time, it rewrote the class. I then told it to come up with a set of unit tests, and it did that as well.
Here are some examples of the changes I asked it to make:
To the GraphManager class, add the ability to pass in a CSS selector to the Parse method, and isolate link extraction to only the results of that selector. Update the testing class to test that functionality.
Add a property that represents the default REL attribute to use if none is found on the link. Also, allow the ability to pass in a string to GetInboundLinks and GetOutboundLinks and filter the returned links to only those with that REL attribute.
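For what it's worth, the class shape those requests imply might look something like the sketch below. Only the names GraphManager, Parse, GetInboundLinks, and GetOutboundLinks come from my prompts; the signatures, the Link record, and the DefaultRel property name are my own guesses for illustration, not Copilot's actual output:

```csharp
// Hypothetical sketch only — the names Parse, GetInboundLinks, and
// GetOutboundLinks come from the prompts above; everything else
// (signatures, the Link record, DefaultRel) is illustrative guesswork.
using System.Collections.Generic;
using System.Linq;

public record Link(string SourceUrl, string TargetUrl, string Rel);

public class GraphManager
{
    private readonly List<Link> _links = new();

    // Default REL attribute to assume when a link has none.
    public string DefaultRel { get; set; } = "related";

    // When cssSelector is supplied, link extraction would be limited
    // to elements matched by that selector (parsing omitted here).
    public void Parse(string html, string? cssSelector = null)
    {
        // ...extract <a> elements (within cssSelector, if given),
        // falling back to DefaultRel when no rel attribute exists...
    }

    // Pass rel to filter the returned links to that REL attribute.
    public IEnumerable<Link> GetInboundLinks(string url, string? rel = null) =>
        _links.Where(l => l.TargetUrl == url && (rel == null || l.Rel == rel));

    public IEnumerable<Link> GetOutboundLinks(string url, string? rel = null) =>
        _links.Where(l => l.SourceUrl == url && (rel == null || l.Rel == rel));
}
```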
In the end, my work is 99% done. Based on the output I saw, I should just be able to paste this into my existing codebase (I'll probably wrap it in a class that follows the project's coding conventions a little more closely).
Here’s what this has left me thinking –
The only way I was able to get a decent result out of this is that I knew what I wanted. I had written things like this before, and I absolutely could have written this myself, so I knew what to ask Copilot. I knew how to evaluate what it had written and predict pitfalls, problems, and future usage.
So, in the end, all this did was prevent me from having to type this myself. It removed the rote work; the tedious work where my experience wasn’t a huge value-add in the first place. It let me concentrate on what I wanted.
In programming, there are low-level languages that make you spell out every detail (assembler, C++), and there are high-level languages that abstract those details away (Python, Ruby). It's always been this way. Lower-level languages give you more control, but take more work. High-level languages require less work, but they assume a lot and insulate you from details. (A lot of programmers have only ever worked with high-level languages…)
I feel like AI is just a higher-level language? Maybe the highest-level language?
I’ve said this before (in the context of content strategy last time), but the future is “gist management” – getting the machine to understand the “gist” of what you want, by knowing what you want and being able to articulate it.
Programming has always been about teaching a machine to do something. AI is just another language with which we can do that. It’s easier, and expands programming to exponentially more people, but in the end, we’re still just teaching the machine.
Postscript
A follow-up to this –
Upon testing, it turns out the code wasn't that great. There were at least three errors in it – one was related to my use case, one was a weird coding technique that the database did not like, and one was a blatant logical error that would never have worked.
In the end, I spent more time debugging this than I would have spent if I just wrote it from scratch.