Hey ChatGPT, Write me a Haskell Course?
In last week’s article, I discussed how Monday Morning Haskell courses compare to other Haskell courses that I’ve seen out there online. Obviously I’m wildly biased, but I think MMH courses have some serious advantages.
But there’s still the elephant in the room: how do my courses compare to using generative AI (e.g. ChatGPT) to learn Haskell instead? While AI is a great tool that has opened a lot of doors for learning complex concepts, human-developed courses still have some important advantages over learning from a chatbot.
Analogy: Going to the Library
I’ll start my case by drawing an (imperfect) analogy. Suppose you are enrolled in a college and want to learn a particular subject, like physical chemistry. You could enroll in your school’s physical chemistry course. Or you could spend the same amount of time going to the library. After all, the library has tons of books on physical chemistry. So you could read all these books and gain the same level of insight, right?
In this example, most people would recognize the shortcomings of just going to the library. For example, you are now responsible for determining the curriculum and course of study.
You could, of course, look at the table of contents of an introductory book and just run with that organization of the material. But how much of it do you need to learn? Most college courses don’t go all the way through a textbook, because the professor already has a good idea of which material is most important and has organized the course around it.
A professor will also know when and how to introduce supplemental material from other sources. If you’re just “learning from the library”, you’d be responsible for selecting which materials are the most important, and you probably aren’t qualified!
Also, while textbooks may have practice problems, and they may even have answers to those problems, you still have to do the work of figuring out which problems to study, and how many you need to study before you know the material. Taking a full course with assignments would solve this for you.
Finally, textbooks will rarely tell you about the human process of learning a particular subject. You probably aren’t going to read a sentence like “lots of students struggle to understand this, here’s a way of thinking about the problem that has helped a lot of them.” These are insights you’ll gain from working with a professor (who has taught real students) or other students in the class.
So let’s sum up the shortcomings of “learning from the library”:
- Direction - You must take on the cognitive overhead of determining which areas of the subject to study.
- Filtering - You must figure out how much detail is necessary, and how much practice you need to learn it.
- Human Learning Insight - Textbooks are generally lacking in the actual insights and breakthroughs that help students understand particularly challenging ideas.
From Physical to Online Learning
Now let’s consider how the analogy changes if, instead of physical learning environments, we think about the current online learning environment. Enrolling in an online course is significantly easier than enrolling in a university course. You don’t have to wait for the start of the semester or go to a physical location.
But using ChatGPT as your “library” is vastly easier than studying from textbooks. In a matter of minutes, you can get tons of information on your screen that would have taken hours or days of effort at the library. And best of all, you can get information on virtually any topic, rather than just those that have pre-existing textbooks.
But I would still claim that using chatbots for learning shares some of the drawbacks of “learning from the library”. For these reasons, it’s still worthwhile to consider online courses where they exist, rather than relying solely on ChatGPT. Some of these drawbacks might seem counterintuitive, but let’s think them through.
Direction of Study
You might think, “I don’t need to set my own direction, ChatGPT will do that for me!” And yes, you can ask it to lay out a syllabus for you (I did this myself in one of the examples below). This will give you a list of topics to study.
But it won’t just write out the whole course for you based on this initial syllabus in one go. You have to keep prompting it to provide you with the information you want. And it will get sidetracked, consistently asking you to go deeper and deeper down particular rabbit holes.
So it’s still up to you to determine how much you really want to study particular topics, and you need the discipline to pull yourself back out and shift gears. A human-designed course sets these limits for you, so you don’t have to carry that cognitive load.
Filtering
This brings us to the next issue: “filtering”. ChatGPT provides a lot of information all at once. You’ll ask a simple question and get a very complicated answer, with tables comparing various ways of looking at the question.
Sometimes, this is nice. It will expose you to ideas you wouldn’t have thought of otherwise. Sometimes though, it’s very distracting. It takes you away from the core of what you’re trying to learn. You have to make sure you aren’t getting dragged into an infinite loop of concepts.
The “practice” problem also exists here. ChatGPT can keep generating practice problems, but it’s up to you to know how many you really need. In our case study below, we’ll also see that it’s not necessarily the best tool for coming up with practice problems in the first place.
Again, a human-designed course does the filtering and measuring for you.
Human Insight
Once at my job, I was reviewing a teammate’s code that implemented a complicated algorithm. I told him, “After I looked closely at this one particular line, my understanding of this algorithm went from like 30% to 70%, so adding an explanatory comment here would be very helpful!”
This experience helped me understand the idea of “knowledge inflection points”. These are the key insights that really help you understand a topic. I’ve had several of these with various Haskell concepts, from monads, to folds, to data structures and certain algorithms. I’ve done my best to incorporate these insights into my course content.
An example from Solve.hs might be my understanding of “the common API” of Haskell data structures. This made it much easier for me to reason about using different structures in Haskell.
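To make the “common API” idea concrete, here’s a minimal sketch in my own framing (the function names are real, but this side-by-side comparison is just an illustration, not content from the course): `Data.Map` and `Data.Set` from `containers` share the same core verbs, like `empty`, `insert`, `member`, and `delete`, so once you learn one structure’s vocabulary, the others largely follow.

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

-- The same "verbs" work on both structures: build with empty + insert,
-- then query with member. Only the key/value shape differs.

mapDemo :: Bool
mapDemo = Map.member "b" (Map.insert "b" 2 (Map.insert "a" (1 :: Int) Map.empty))

setDemo :: Bool
setDemo = Set.member 2 (Set.insert (2 :: Int) (Set.insert 1 Set.empty))

main :: IO ()
main = print (mapDemo, setDemo)  -- both queries succeed the same way
```

Once this shared shape clicks, swapping one structure for another becomes a mechanical change rather than a new learning project.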
An AI probably wouldn’t frame the issue the way I did unless you already had the knowledge to prompt it. AIs don’t have the experience of learning a concept piece by piece and knowing when things finally “clicked”. You could ask the chatbot what insights help people learn a topic, but it can only piece that information together from what other people have written. On the whole, it still doesn’t beat the experience of someone who’s been there.
Human insights around learning are always going to get baked into a human-designed course, whereas AI is not generally going to be thinking in these terms.
Case Study: Learning Concurrency
I wanted to share a couple case studies that highlight some of the promise but also some of the frustrations with using AI for learning. Here’s a link to an extensive, multi-day study I did with ChatGPT to learn about concurrency topics. It helped me review a lot of topics I had learned in college (10 years ago), and also learn many new things. But there were still some pain points.
The “filtering” problem should be very evident. For each prompt I gave, ChatGPT provided tons of information. It was entirely up to me to figure out how much of this I really needed to know in order to be satisfied.
The “direction” problem is also clear. I started by asking for an organizational outline, and the chatbot duly obliged. But as I dug into certain topics, its preference was to ask me to keep going deeper down certain knowledge paths. I had to consistently drag it back to the syllabus it originally designed.
There were also no clear insights on what the key knowledge was. Over the course of the study, I figured some of these out for myself. But again, I had to filter through a lot of data to get there.
Another drawback I haven’t mentioned yet is the “memory” issue. Chatbots have limited, token-based memory, so they’ll forget what you’ve already learned over even a medium-length study. My concurrency study introduced the idea of a “lock-free queue” using compare-and-swap operations early on, but ChatGPT reintroduced the idea later as if I had never heard of it. Human-designed courses avoid this sort of behavior.
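For a taste of what that compare-and-swap idea looks like in Haskell, here’s a sketch of a Treiber stack, the simpler cousin of the lock-free queue. This is my own illustration, not code from the study: it leans on `atomicModifyIORef'`, which GHC implements with a compare-and-swap retry loop, rather than writing the CAS loop by hand.

```haskell
import Data.IORef

-- A Treiber stack: one of the simplest lock-free structures.
-- atomicModifyIORef' retries under contention, which is the same
-- compare-and-swap pattern a lock-free queue relies on.
newtype Stack a = Stack (IORef [a])

newStack :: IO (Stack a)
newStack = Stack <$> newIORef []

push :: Stack a -> a -> IO ()
push (Stack ref) x = atomicModifyIORef' ref (\xs -> (x : xs, ()))

pop :: Stack a -> IO (Maybe a)
pop (Stack ref) = atomicModifyIORef' ref $ \xs -> case xs of
  []       -> ([], Nothing)
  (y : ys) -> (ys, Just y)

main :: IO ()
main = do
  s <- newStack
  mapM_ (push s) [1, 2, 3 :: Int]
  pop s >>= print  -- Just 3 (LIFO order)
```

This is a sketch, not production code: a real lock-free queue (e.g. Michael-Scott) needs two independently updated ends, which is exactly why the topic deserves the careful treatment the study gave it.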
I didn’t ask for practice problems in this study, so let’s consider another case study where I was specifically looking to do this in Haskell.
Case Study: Dijkstra’s Algorithm
In this quick study, I asked ChatGPT to come up with a practice problem for learning Dijkstra’s algorithm. Some things were good about its response, but some things weren’t.
On the positive side, the code works, the tests work, and some of the follow-up suggestions are also pretty good. For example, putting a bound on the number of nodes your path can have, or allowing multiple starts are simple extensions that didn’t occur to me when I was writing problems.
My main gripe is that the problems are a bit too obvious as graph problems. It started essentially with “implement Dijkstra’s algorithm” rather than giving me a practice problem that uses Dijkstra’s algorithm. And when I asked for a “disguised graph problem”, it gave me a delivery problem that wasn’t much of a disguise.
Also, the code used PSQueue rather than the more beginner-friendly Data.Heap. PSQueue may be better for certain things, but the type operator it uses would be confusing for a novice.
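For comparison, Dijkstra’s algorithm doesn’t strictly need either package. Here’s a sketch, with a made-up example graph, that uses only `containers`, treating `Data.Set` as a makeshift priority queue via `Set.minView`, so there are no unfamiliar type operators to explain:

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set
import Data.Map (Map)

-- Adjacency list: each node maps to (neighbor, edge weight) pairs.
type Graph = Map Char [(Char, Int)]

-- Dijkstra with Data.Set as the priority queue: Set.minView pops the
-- (distance, node) pair with the smallest distance.
dijkstra :: Graph -> Char -> Map Char Int
dijkstra g src = go (Set.singleton (0, src)) Map.empty
  where
    go frontier dist = case Set.minView frontier of
      Nothing -> dist
      Just ((d, v), rest)
        | Map.member v dist -> go rest dist  -- node already settled
        | otherwise ->
            let dist' = Map.insert v d dist
                nbrs  = Map.findWithDefault [] v g
                new   = Set.fromList
                          [ (d + w, u)
                          | (u, w) <- nbrs
                          , not (Map.member u dist') ]
            in go (Set.union rest new) dist'

-- A tiny example graph (invented for illustration).
exampleGraph :: Graph
exampleGraph = Map.fromList
  [ ('a', [('b', 1), ('c', 4)])
  , ('b', [('c', 2), ('d', 6)])
  , ('c', [('d', 3)])
  , ('d', [])
  ]

main :: IO ()
main = print (dijkstra exampleGraph 'a')  -- shortest a->d is 6, via b and c
```

The `Data.Set` trick is slightly less efficient than a dedicated priority queue, since stale entries linger until popped, but for teaching purposes the trade-off in simplicity seems worth it.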
The line-by-line explanations were pretty good on the whole, but I don’t know that they’re a perfect substitute for really good visual, slide-based instruction like you’d find in one of my courses.
With enough prompt engineering, you could get around these issues. But that’s exactly my point. It’s nice to not have to keep coming up with new prompts to get what you’re looking for, especially when you get a long explanation after every question.
Conclusion
Generative AI is a massive innovation for learning, especially on subjects that don’t have a lot of good guide material. But extensive, well-thought-out, human-designed content still has some significant advantages. The content is informed by the personal experience of someone who has actually been in your shoes and has had to learn something the same way you’ll learn it. This is not something an AI can relate to.
Prompt engineering involves a lot of cognitive effort. You have to constantly direct the flow of what you’re supposed to learn, filter out the unnecessary parts, and then actually learn the material! While the freedom to learn almost anything is appealing, it can also be exhausting to always be steering. It can be much easier, and more helpful, to follow the lead of what another person has done.
I’ve used generative AI for learning and will continue to do so. But when human-designed content is available, I’ll look there first, and consider using AI as a supplement where I feel there are gaps.
When it comes to generating content, I don’t like AI as much, certainly not as a general-purpose content producer. But it certainly has its uses. Looking back on course creation, I wish I had used it for writing test cases, for example. Another idea might be translating my work into other languages.
I’ll continue to experiment with AI going forward. But a solid guiding principle is that you should be using AI to enhance yourself, and not replace yourself. I still believe that human content has an edge over AI content for the same subject matter, so I encourage you to take another look at our courses at Monday Morning Haskell Academy, and to subscribe to our mailing list for future updates and discounts!