Asking learners to organize code blocks that have been shuffled around is an approach often referred to as Parsons Problems. There is an excellent short rationale for them, and also a more in-depth research study (along with various other online sources) if you’re interested in reading more.
The general idea is that there is a block of code that has been jumbled up, and the user needs to un-jumble it so that it’s back in the correct order.
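The jumble-and-check cycle is simple enough to sketch in a few lines of Python. The function names below (`make_parsons_problem`, `is_solved`) are hypothetical, not anything from the actual project:

```python
import random

def make_parsons_problem(snippet, seed=None):
    """Shuffle the lines of a code snippet into a jumbled exercise."""
    lines = snippet.strip("\n").splitlines()
    shuffled = lines[:]
    rng = random.Random(seed)
    # Keep reshuffling until the order actually differs from the original.
    while shuffled == lines and len(lines) > 1:
        rng.shuffle(shuffled)
    return shuffled

def is_solved(attempt, snippet):
    """A naive check: the attempt must match the original order exactly."""
    return attempt == snippet.strip("\n").splitlines()
```

The learner’s job is then just to drag the shuffled lines back into an order for which `is_solved` returns true.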
The major advantages of this approach are that it’s very fast, requires more reading of code than writing of code (mirroring the high ratio of reading to writing that happens in real-world programming), and has a fairly clear criterion for success.
It has great promise for learners.
Still, all the approaches I’ve seen so far share two major aspects that could be improved upon:
- they require either the teacher/coach/mentor or the learner to select or write the block of code that is then jumbled up. This results in code that is syntactically valid, but not necessarily representative of real code blocks that people will encounter in the real world.
- the layouts all require dragging from the left to the right, and aren’t really suitable for smaller mobile screens. Students therefore can’t easily work on these exercises without the aid of a laptop/desktop.
A Micromaterials Approach
Instead of needing to write out these code snippets by hand, why don’t we just use actual code?
Thankfully, GitHub is a publicly available source of such code, and for this project I analyzed the top 100 most popular Python-based repositories.
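Harvesting the raw material can be done with Python’s own `ast` module rather than by hand. This is a rough sketch of the kind of extraction involved, not the project’s actual pipeline:

```python
import ast

def extract_functions(source):
    """Return the source text of every function defined in a Python file,
    so real-world code (rather than hand-written snippets) can be jumbled."""
    tree = ast.parse(source)
    functions = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # get_source_segment (Python 3.8+) recovers the exact original text.
            segment = ast.get_source_segment(source, node)
            if segment is not None:
                functions.append(segment)
    return functions
```

Running something like this over every `.py` file in a cloned repository yields a corpus of authentic function blocks.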
I tried to group the blocks into three distinct categories (3-6 lines, 5-9 lines, and 7-12 lines). Using only the number of lines is a very coarse measure of “difficulty”, and this approach would be greatly improved by a more nuanced way to measure complexity/difficulty (there isn’t a whole lot of research on exactly how to do this, but I’m still looking around…).
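Bucketing by line count might look something like this (`difficulty_bucket` is a hypothetical helper; since the ranges overlap, this sketch simply assigns a function to the first tier it fits):

```python
def difficulty_bucket(function_source):
    """Assign a function to a coarse difficulty tier by line count."""
    n = len(function_source.strip("\n").splitlines())
    if 3 <= n <= 6:
        return "easy"
    if 5 <= n <= 9:
        return "medium"
    if 7 <= n <= 12:
        return "hard"
    return None  # too short or too long to make a good exercise
```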
Also, since I’m targeting a mobile-first approach, the UI had to be usable on a small screen. So I ended up with something like this:
To provide an example of doing the exercise, here’s a partially re-ordered function, with the correct lines in green:
And if you’d like to see the original code in context, there’s a link to the line in the original source on GitHub.
- The set of functions is currently hardcoded into a big json file (that was easier to deploy and didn’t require me setting up a backend API anywhere), and the order is the same every time you access the page.
Eventually I’d like to just cycle through random functions from a corpus of 100,000 or so, such that a student could casually dip in and out for 5 minutes every day for a month and never see the same function.
There’s no way a single human (or even a team of them) could easily and reliably write out 100,000 code examples.
- It currently uses only python, so it’s not as useful for people unfamiliar with the syntax and conventions of that language.
I’m also investigating methods of parsing additional languages (JS, Ruby, Go, C++, etc.) and adding options to choose a language on a landing page.
- It can only evaluate whether the re-ordering matches the exact original order, not whether it results in a “correct” running of the function.
As a simple example, if we found the following three lines in a function:
```python
cookies = 10
monsters = 3
return cookies * monsters
```
…we would have to allow for either the cookies line or the monsters line to come first; flipping the order of those two lines doesn’t affect the “correctness” of the function.
(as a side note, the ridiculous nature of the example above illustrates my point about inauthentic code)
This is the trickiest problem, since you would need to convert the re-ordered function back into an abstract syntax tree to confirm this (or otherwise assert against some sort of test that the function produces the same output, which would likely only work for functions without side effects).
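For side-effect-free functions, the second option can at least be sketched: compile both the original and the re-ordered version, call each on the same inputs, and compare the results. The `same_behaviour` helper below is hypothetical, and real grading would also need sandboxing and a proper set of test inputs:

```python
import ast

def same_behaviour(original, reordered, args):
    """Return True if both versions of a function produce the same result
    on the given arguments. Only meaningful for side-effect-free code."""
    def load(source):
        tree = ast.parse(source)
        # Grab the first function defined in the snippet.
        func = next(n for n in tree.body if isinstance(n, ast.FunctionDef))
        namespace = {}
        exec(compile(tree, "<exercise>", "exec"), namespace)
        return namespace[func.name]
    try:
        return load(original)(*args) == load(reordered)(*args)
    except Exception:
        # A re-ordering that doesn't even run is certainly not correct.
        return False
```

Under a check like this, both orderings of the cookies/monsters lines would count as correct, while putting the `return` first would not.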
At any rate, I had fun making it, and hopefully it has some use for even the casual user.
Source code on GitHub is here