Content Here

Where content meets technology

Apr 23, 2025

Vibe coding with AI

After hearing so much about how LLMs are revolutionizing software development, I decided to give it a try. Specifically, I wanted to be able to formulate my own opinions on whether:

  • AI will make software engineers obsolete
  • AI will transform the way software is developed and teams are organized
  • AI will unlock new opportunities for non-programmers to experiment
  • AI will allow me to overcome my own rust to be a productive programmer

Over the years I have learned that the best way to learn a technology is to "use it in anger." That is, use it to solve a real-world problem. Unlike doing simple "hello world" experiments or running through tutorials, this technique gives you the best chance of running across a technology's rough edges. Plus, you get a useful solution out of it.

Some of the inspiration for my project came from a New York Times article about "Vibe Coding" where the reporter described how he used AI to write little personal applications to make his life easier. The problem I wanted to solve was my natural tendency to fall out of touch with my friends. So I built a little relationship tracker that remembers the last time that I connected with each of my friends. My application synchronizes with a "Relationships" group in my address book and allows me to record when and how I connected with them. Friends that I haven't talked to in a while go from green to yellow to red. It's a simple little application but I use it regularly.
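The color logic is simple enough to sketch in a few lines of Python. The thresholds below are illustrative, not the app's actual cutoffs:

```python
from datetime import date
from typing import Optional

# Illustrative thresholds -- the real app's cutoffs may differ.
YELLOW_AFTER_DAYS = 30
RED_AFTER_DAYS = 90

def contact_status(last_contact: date, today: Optional[date] = None) -> str:
    """Map days since the last contact to a traffic-light status."""
    today = today or date.today()
    elapsed = (today - last_contact).days
    if elapsed >= RED_AFTER_DAYS:
        return "red"
    if elapsed >= YELLOW_AFTER_DAYS:
        return "yellow"
    return "green"
```

The fun is in tuning the thresholds: some friendships comfortably go quiet for months, and others shouldn't.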

I went into the exercise with requirements but no AI coding experience. I knew that I wanted to write the back end in Python and host it as AWS Lambda functions. I am a decent (although rusty) Python developer so I felt like I could judge the quality of the backend code. But I wanted the front end to be in ReactJS, and React got popular after I stopped doing web front-end work. I wanted to try Claude Sonnet 3.7 because people were raving about it.

My first attempt was to go to the Claude website and request the whole application in one massively detailed prompt. That just gave me some code blocks that I could paste into files. Realizing the absurdity of my plan, I decided to use the Cursor IDE and program the old-fashioned way: incrementally and iteratively. I started by asking the agent to build out a directory structure for the front end and back end components of my project and then described backend requirements for synchronization and an API to retrieve contact records from the application's database. It took a while to figure out how my email/contacts/calendar service represented groups of contacts in their CardDAV API and Claude was very helpful in writing experimental code to interrogate the API. Once we figured it out, I was able to coach Claude to optimize the code to minimize API calls.
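As an aside, one common way to represent a contact group (an assumption here, since CardDAV providers vary) is vCard 4.0's group convention: a card with `KIND:group` and `MEMBER` properties. Under that assumption, pulling out the member URIs is only a few lines of Python:

```python
def parse_group_members(vcard: str) -> list:
    """Extract MEMBER URIs from a vCard 4.0 group card (KIND:group).

    Assumes the card's lines are already unfolded; real vCards may fold
    long lines, which a production parser would need to handle.
    """
    members = []
    for line in vcard.splitlines():
        if line.upper().startswith("MEMBER:"):
            # Keep everything after the property name, e.g. "urn:uuid:..."
            members.append(line.split(":", 1)[1].strip())
    return members
```

The member URIs then map back to individual contact cards, which is where the call-minimizing optimizations come in.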

Creating the front end was almost effortless. Claude wrote functionally correct code and also made sensible choices on the user interface. The iteration cycle was super fast. If I wanted to experiment with a usability improvement, I just described my idea. Somehow it understood my directives for moving page elements around and making them more visible.

The experience of working with Claude is magical. It reminds me of pair programming with a really good developer. It has all of the collaborative benefits of pair programming (different perspectives, two sets of "eyes," etc.), but without the cost. Sometimes the coding agent does foolish things or forgets a decision. The agent also writes a lot of code so you need to constantly clean out what you don't want to use. But I found myself welcoming opportunities to add value by guiding it. Most of all, I was exhilarated by the satisfaction of being so productive. The time from idea to execution is nearly nothing. Issues melt away as the agent tries different problem solving approaches. It's easy to get into the zone and forget the rest of the world.

I think that all developers should use a coding agent and, based on what my peers say, many of them already do. AI agents allow developers to iterate and experiment faster, and I don't think the agents will replace the developers. However, I do think that AI diminishes the value of large offshore teams that offset additional management overhead and collaborative friction (language, time zones, etc.) with wage arbitrage. With an AI agent, a small development team can keep up with a product manager's ideas. AI does the typing, so lots of hands on keyboards isn't as useful.

In addition to coding faster, AI coding agents help developers work across unfamiliar code bases. Somewhere I heard the analogy that maintaining someone else's code is like looking at art through a cardboard paper towel roll. The AI agent slurps up the whole program and understands how it works and where the changes need to be made. It is great for refactoring because it can quickly write unit tests to ensure changes don't break existing behavior.

I am also wondering if lowering the cost of writing and maintaining custom code will shift the build vs. buy calculus. Buyers may be more tempted to build exactly what they need rather than compromise on software built for a larger market segment. Incumbent software category leaders may also lose their maturity advantage because a competitor can build critical feature parity so much faster.

My overall advice to anyone who is not thinking about AI coding is to start thinking about how to use it. Consider guidelines for use and processes for ensuring quality. Think about new opportunities that it unlocks and prior reasoning it invalidates. Whatever you do, don't ignore it.

Jun 22, 2023

AI and Content Licensing

In my last re-platforming of this blog, I accidentally dropped the Creative Commons Attribution licensing that I had been using. Blogging platforms treat licensing as part of the format rather than the content itself. The format is part of the CMS theme so when the CMS changes, the content moves but the licensing does not. I am still trying to make up my mind as to whether I think that is a good thing. But at the moment, I am thinking about the broader issue of content re-use and attribution in light of being used as AI training data.

People publish content for a variety of reasons. Personally, I write to explore and refine ideas and also for the potential to discuss these topics with people who stumble across my posts (although that rarely happens). There is also a recognition element. My blog is where people can associate me with what I know and think. Many websites and communities are built around the value of recognition. For example, sites like Stack Overflow have a culture around recognizing and rewarding expertise.

I have been using the Creative Commons Attribution license because I want people to use and further my ideas and I also want to be part of the ongoing discussion and evolution of those ideas. Based on the language of the license, I thought it would protect these interests:

"You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use"

But, according to the article "Should CC-Licensed Content be Used to Train AI? It Depends" (by Brigitte Vézina and Sarah Hinchliff Pearson), there is no agreement that any form of copyright applies to AI training.

Large Language Models, trained on terabytes of content (the GPT-4 training set is rumored to be around a petabyte), create new value for content consumers wanting condensed answers. But that intermediation saps value from the content producers and publishers. The ChatGPT user has no idea if some pearl of wisdom came from me (doubtful) and I have no idea if my knowledge was accessed or what became of it.

I think that I will continue to write even though I know my words will be anonymized by AI. I still get the value of using writing to organize my thoughts and to develop my communication skills. Jack Ivers has a great post describing the reasons for writing every day. But I don't think I would be as excited to post answers on Stack Overflow unless I wanted to build adoption for a particular technology that I supported. I am even less likely to post an answer on Quora.

I wonder if AI chatbots will stifle other contributors' motivation. Perhaps they already have but I haven't heard much of an uproar. If generative AI drives the extinction of user generated content (which helps improve AI), the progress of knowledge will slow because it will not be able to incorporate new experiences.

Wikipedia is a bit different. Wikipedia contributors are mainly concerned about the accuracy of the content rather than attribution. In many ways, personal attribution taints the authority of the article with the possibility of bias. Consequently, you have to dig to find who wrote what. Wikipedia is already harvested by search engines and voice assistants (both Alexa and Google Assistant rely heavily on it). The contributors don't seem to mind.

For now, I have re-added a Creative Commons license to the footer of this blog and to the syndication feed (a change to Pelican that I might submit as a pull request). Not that it does any good.