Teaching foreign policy majors to code

Four lessons about what (not) to do with first-time programming students.

Colin McCormick
5 min read · Aug 13, 2018
Sample code from STIA 315 at Georgetown University (Spring 2018).

This spring I taught a class on low-cost air quality sensors at Georgetown University. As I’ve described elsewhere, the class was a fun way for students in a policy school to get their hands dirty by actually building and testing working devices. Most of my students were majoring in Science, Technology and International Affairs (STIA) in the Walsh School of Foreign Service — and had almost no background in engineering or programming.

This experience taught me several lessons about teaching programming to foreign policy majors. Make no mistake, the students were very smart, and wrote extremely good papers on policy topics. But their education had so far included basically no coding, and many of them were deeply concerned that they might have to program in order to pass the class. On the first day I soothingly assured them that they wouldn’t have to do any programming, which of course was a complete lie.

This was Lesson #1 on the road to teaching coding to foreign policy majors: don’t tell them that you’re going to do it. Our air quality monitors were based on Arduino microcontroller boards, and they wouldn’t function unless they were programmed in Arduino’s IDE, using a variant of C++. If I had been a little more transparent about that on the first day, attendance might have dropped significantly. Coding inspires such fear and trembling among some students that it’s a profound psychological barrier to learning. So sometimes the best way to get around that is to downplay the idea that what they’re doing is programming, until it’s too late. And as I’ll mention in a moment, that’s exactly what happened.

As a result of having to manually type in the code, the students got all the typical errors when compiling, like missing parentheses and misspelled variable names.

I began by having them install the IDE and run a pre-loaded demo program (“Blink”) just to see how the system works. After this I began showing them a few lines of code during each class and requiring them to type the code into the IDE. Since I distributed the lecture slides digitally, I needed a way to make sure they wouldn’t just cut-and-paste the code. Somewhat by accident, I hit on the idea of taking a screenshot of the code I had written and distributing that in the slides. As a result of having to manually type in the code, the students got all the typical errors when compiling, like missing parentheses and misspelled variable names. As they went about fixing these “typos”, they got a backdoor introduction to basic debugging. Of course, I didn’t call it that, at least not at first.
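
For reference, “Blink” is the example sketch that ships with the Arduino IDE; it looks roughly like this (with my comments added):

// Blink: toggle the board's built-in LED once per second.
void setup() {
  pinMode(LED_BUILTIN, OUTPUT);    // runs once: make the LED pin an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH); // LED on
  delay(1000);                     // wait one second
  digitalWrite(LED_BUILTIN, LOW);  // LED off
  delay(1000);                     // wait one second
}

Even a sketch this small gives a first-time programmer plenty of chances to drop a parenthesis or a semicolon while typing it in by hand.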

As the semester went on they became more comfortable with adding new lines to the code, and we began talking more about what the code was actually doing. The most surprising discovery at this stage was that not all the students understood that lines of code are executed sequentially. For many of them, this wasn’t intuitive: for example, they didn’t immediately realize that defining a variable several lines after referencing it wouldn’t work. They also didn’t always understand that execution pauses on each line until the statement there finishes running.

I suspect this point isn’t obvious to people who mostly write natural language essays, where the linear flow of ideas isn’t entirely strict — a reader can refer back to an earlier paragraph or infer that a later paragraph will come back to the same topic. This was Lesson #2: the basic paradigm of code (strictly sequential execution, with flow control and function definitions) isn’t always intuitive and should be explained clearly to first-time programmers. More broadly, this illustrates that there are hidden traps of confusion and misconceptions among first-time programmers that are very hard to see if you’ve been programming for a long time.
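
A tiny example makes the point concrete (this is an illustration I’m sketching here, not code from the class):

void setup() {
  Serial.begin(9600);

  // This next line would not compile: 'threshold' hasn't been declared yet.
  // Serial.println(threshold);

  int threshold = 50;               // the declaration has to come first
  Serial.println(threshold);        // now the variable can be used

  delay(2000);                      // execution stops here for two seconds...
  Serial.println("done waiting");   // ...and only then does this line run
}

void loop() {
  // nothing to repeat in this example
}

The commented-out line and the delay() capture the two points that tripped students up: order matters, and each line has to finish before the next one starts.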

Why were there so many different ways to make the name of a parameter represent a number? As I quickly discovered in trying to explain my reasoning, there just shouldn’t have been.

Another problem was that I used several different ways to do the same thing in the code, without being clear about why. For example, I declared some parameters as constants and others (which weren’t being changed) as variables — mostly because I was being sloppy. I used different variable types (unsigned ints, floats, pointers to char, strings, etc.) for reasons that weren’t very good — mostly trying to save memory, which wasn’t actually constrained — and that turned out to be baffling to the students. Why were there so many different ways to make the name of a parameter represent a number? As I quickly discovered in trying to explain my reasoning, there just shouldn’t have been. With some minor hits to performance, I could have used just two approaches: float variables and strings. Yes, it’s not the right engineering solution, but it would have saved enormous amounts of confusion and still gotten the job done.
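
To give a flavor of the problem, here’s a hypothetical sketch (made-up names, not the actual class code) showing the mixed style next to the simpler vocabulary I could have used:

// The mixed style: four different ways to make a name stand for a value.
const int SAMPLE_INTERVAL = 5000;          // a constant int (milliseconds)
unsigned int sampleCount = 0;              // an unsigned int variable
float pm25Reading = 0.0;                   // a float variable
const char* sensorLabel = "pm-sensor-1";   // a pointer to char

// The simpler vocabulary: floats for numbers, Strings for text.
float sampleInterval = 5000.0;
float readingCount = 0.0;
float pm25Value = 0.0;
String sensorName = "pm-sensor-1";

void setup() { }
void loop() { }

The second set is a little wasteful and occasionally imprecise, but every declaration looks the same, which for a first-time programmer is exactly the point.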

This was Lesson #3: avoid unnecessary complexity. Even if it seems like you’re teaching bad habits, I’d argue that first-time learners of code should be shown the smallest possible number of ways to accomplish a task, and only exposed to the complexity of refinements after they’ve solidly learned the basics. If you were learning to write in kindergarten, and the teacher gave you a crayon on the first day, a pencil on the second, and a fountain pen on the third, you’d be confused. It’s true that those are all tools with different uses, but you shouldn’t be asked to learn about all of them before you’ve mastered the basics of one. Code is the same: be as parsimonious with methods as possible. That seems obvious in hindsight, but it’s hard to break coding habits that you’ve had for a long time.

If you were learning to write in kindergarten, and the teacher gave you a crayon on the first day, a pencil on the second, and a fountain pen on the third, you’d be confused.

Finally, I found that near the end of the semester almost all the students were really enjoying the experience and felt a lot more comfortable with the idea of programming. None of them were running out to change their major to computer science, but they left the class with a solid basic understanding of what’s behind coding, and the confidence to potentially learn more in the future. I suppose that’s Lesson #4: it’s worth it. Getting students to the point where they aren’t afraid of programming and know enough to keep learning on their own is an important victory. I’m looking forward to the next opportunity I have to teach more foreign policy majors to code.

