DevEdOps Dimensions

This is a video presentation I recorded for the 90DaysOfDevops virtual conference, and in it I described the Three Ways of DevEdOps:

  • short time from starting the activity to actively practicing the learning objective
  • immediate feedback from the system
  • building something to teach yourself/others

Admittedly, “DevEdOps” didn’t exist as a term until I made it up for the purposes of this presentation, but I think that DevOps culture has a lot of interesting implications for how we go about learning new things (for a more in-depth view on this, watch the video linked above).

What I wasn’t able to get to in the video was a few different dimensions of micromaterials as they relate to the principles of DevEdOps.

Different Dimensions

Because the micromaterials I’ve built so far can be very different from each other in a number of aspects, I thought I would try to write down and catalog what some of those aspects are.

It is highly likely that these dimensions will change and evolve, but since nobody’s really reading this anyway, it doesn’t matter and I can just make them up as I go!

For example, some of them (ie, ipinder) have a very short lead time, by which I mean the time between accessing the micromaterial and being able to work on the focused learning objective. An example of a micromaterial with a much longer lead time is the mongo-dojo: there’s no way to access it in a browser, it requires multiple tools to be available (you need to be able to run virtual machines), and the setup for the actual activities is very manual (you just follow steps in the readme).

In addition, some micromaterials (ipinder is another good example of this) are very high in the dimension of automated activity generation: there’s no need for a human to hardcode the activities for the user to practice, because the system generates them automatically. Mongo-dojo, by contrast, doesn’t have a very high level of automation here, since all the activities and situations are driven by the user. You need to set up indexes and perform rolling updates yourself.
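
To make the generation idea a bit more concrete, here’s a minimal sketch (in Python, purely for illustration) of what ipinder-style automated generation might boil down to: pick a random IPv4 address and compute the expected answer on the fly, so nobody ever has to write the exercises by hand. This isn’t ipinder’s actual code, just the shape of the idea.

```python
import ipaddress
import random


def generate_exercise():
    """Pick a random IPv4 address and compute the expected answer on the fly.

    The classification comes straight from the standard library, so nobody
    has to hardcode exercises; a real "publicly routable" check would also
    need to consider reserved/multicast ranges.
    """
    address = ipaddress.IPv4Address(random.getrandbits(32))
    return {"prompt": str(address), "is_private": address.is_private}


def check_answer(exercise, learner_says_private):
    """Automated feedback: compare the learner's guess to the computed answer."""
    return learner_says_private == exercise["is_private"]


if __name__ == "__main__":
    ex = generate_exercise()
    print(f"Is {ex['prompt']} a private address?")
    print("correct!" if check_answer(ex, True) else "nope, try again")
```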

Similar to the above, automated feedback is something that’s very important, and here we see a wide range of values. For very simple activities like ipinder, the feedback is fully automated and immediate. For things like the k8s-manifest-dojo, the feedback is still fairly immediate (the cluster recovers more or less as soon as the manifest is edited with the correct value), but it’s not automated, in the sense that the human user needs to observe the cluster state (probably by running kubectl get po) to confirm whether the fix worked.
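
That last manual step is exactly the kind of thing that could in principle be scripted away. Here’s a rough sketch of one way to automate the “did the fix work?” check, assuming kubectl is on the PATH and the pods live in the default namespace; this isn’t part of k8s-manifest-dojo, just an illustration of what closing the feedback loop might look like.

```python
import json
import subprocess
import time


def pods_all_running(namespace="default"):
    """Ask kubectl for pod state and report whether every pod is Running.

    This is a simplification (a pod can be Running without being Ready),
    but it shows the idea of automating the check the learner would
    otherwise do by eye.
    """
    result = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
        capture_output=True, text=True, check=True,
    )
    pods = json.loads(result.stdout)["items"]
    return bool(pods) and all(p["status"]["phase"] == "Running" for p in pods)


if __name__ == "__main__":
    # Poll until the pods look healthy, then tell the learner the fix worked.
    while not pods_all_running():
        time.sleep(2)
    print("all pods are Running, so the manifest fix appears to have worked")
```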

Another good dimension is authenticity. Here, something like the mongo-dojo really shines, as does the k8s-manifest-dojo, because you’re using the exact same commands (eg, mongo shell and kubectl) in the exact same environment (usually a terminal as opposed to a browser) that you would in real life. Ipinder has a very low value in this dimension, since the skill isn’t very integrated (you don’t need to actually do anything with the knowledge of whether an IP address is public or not) and even if you did, this would probably not take place in a browser.

Analysis of Previous Micromaterials

With these dimensions in mind, I thought it would be instructive to re-examine a few of the micromaterials I’ve made over the years with respect to where they fall on these different dimensions, using a very simple scale of values from high => medium => low.

https://github.com/lpmi-13/reflog-power – this is a very simple micromaterial to practice using the git reflog.

  • lead time – medium
    • You need to clone it onto your machine before it’s usable.
  • automated generation – low
    • There is only one activity to step through.
  • automated feedback – low
    • The user needs to inspect the system in order to see whether they did things successfully.
  • authenticity – high
    • This uses the exact commands you would run in the terminal when actually using the reflog to bring back a deleted branch.

https://github.com/lpmi-13/ipinder – micromaterial to practice identifying whether an IP address is publicly routable

  • lead time – high
    • after loading the site, you can immediately start practicing.
  • automated generation – high
    • the system randomly picks an IP address, and keeps doing that as long as you want.
  • automated feedback – high
    • you get immediate feedback on whether you correctly classified the IP address.
  • authenticity – low
    • this activity isn’t situated in a larger context of why it matters whether an IP address is publicly routable.

https://github.com/lpmi-13/parsons-problems – micromaterial to practice ordering lines of code inside a function

  • lead time – high
    • select a difficulty level and language and start ordering lines.
  • automated generation – medium
    • there are a number of preselected code blocks that the activity cycles through, but they’ve been curated from GitHub by a script.
  • automated feedback – high
    • the system tells you immediately if a given line is in the correct position.
  • authenticity – low
    • this activity comes from Computer Science education and while it does have empirical evidence for effectiveness as a learning activity, it is not at all likely to mirror an authentic task that somebody would need to do while working in a real codebase.

https://github.com/lpmi-13/cron-trigger – read a cron and write a cron

  • lead time – high
    • select whether you want to read a cron or write a cron and you can start immediately.
  • automated generation – high
    • like ipinder, the system presents you with randomized exercises as long as you want.
  • automated feedback – high
    • the user gets immediate feedback on their efforts to read/write crons (a rough sketch of what that check could look like follows this list).
  • authenticity – medium
    • since we usually need to convert a schedule into a cron, or read an existing cron in order to work out how often it triggers, this is authentic. However, it exists in a web browser, rather than somewhere like a terminal window in a remote shell session, which is usually where crons are configured. It would also be slightly more authentic in the “read a cron” section if it were a supply-type item instead of a selection-type item (ie, a free text input instead of four preset choices).
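
Since I mentioned the “write a cron” check above, here’s a rough Python sketch of what generating and checking one of those exercises could look like. The real cron-trigger presumably handles far more schedule shapes than this; the point is just that the expected answer is generated alongside the prompt, which is what keeps the feedback automatic.

```python
import random


def generate_exercise():
    """Generate a 'write a cron' prompt along with its expected answer.

    A deliberately tiny set of schedule shapes, just to illustrate the idea;
    the real activity presumably covers many more patterns (hours, days,
    specific minute lists, and so on).
    """
    minutes = random.choice([5, 10, 15, 30])
    return {
        "prompt": f"every {minutes} minutes",
        "expected": f"*/{minutes} * * * *",
    }


def check_answer(exercise, learner_cron):
    """Normalize whitespace so '*/15  *  * * *' still counts as correct.

    A fuller checker would also accept equivalent forms, eg '0,15,30,45'
    instead of '*/15'.
    """
    return learner_cron.split() == exercise["expected"].split()


if __name__ == "__main__":
    ex = generate_exercise()
    print(f"Write a cron expression that runs {ex['prompt']}:")
    print("correct!" if check_answer(ex, input("> ")) else "not quite")
```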

https://github.com/lpmi-13/semver-questions – practice bumping versions based on scenarios or decide if an upgrade will break things

  • lead time – high
    • load it up in the browser and jump right in.
  • automated generation – medium
    • the exercises were curated by hand, but there are a number of them to cycle through.
  • automated feedback – medium
    • the system does provide automated scaffolding of what the correct answer should be, but it’s up to the user to compare this to what they intended.
  • authenticity – medium
    • deciding if something is a breaking change is an authentic task when upgrading dependencies, though this wouldn’t usually happen all inside of a web browser UI.

https://github.com/lpmi-13/tcpdump-mystery – use tcpdump to see which containers are sending a very high number of requests

  • lead time – medium
    • if running this locally, you’ll need a few dependencies like docker-compose, though it’s also possible to run it directly in GitPod. However, even in GitPod, it takes the system a bit of time to set everything up, so this isn’t as fast as having something immediately usable via a web interface.
  • automated generation – medium
    • while the system does randomize which of the containers is sending the excessive traffic, it will always be one of the existing containers, and so the range of potential activity inputs isn’t as varied as it is with something like ipinder.
  • automated feedback – medium
    • after following the steps in the instructions, the user still needs to run some commands to see if they’ve fixed the problem, though these are very simple and documented in the instructions.
  • authenticity – medium
    • this does take place entirely in a terminal window, though it’s probably a bit of a contrived situation.

https://github.com/lpmi-13/mongo-dojo – practice doing some things with a mongoDB replicaset, either in containers or VMs

  • lead time – medium
    • while this does have a number of dependencies to install, once they’re all there, it’s a simple command to have vagrant provision all the necessary services. Ditto with docker-compose if you’re running through the container-based activities.
  • automated generation – low
    • while the setup is semi-automated (see above), the user needs to follow the instructions in the readme in order to actually step through the activities.
  • automated feedback – low
    • the user needs to compare the eventual state of the system with what they expect in order to determine if they’ve successfully accomplished the objective.
  • authenticity – high
    • all the commands are what you would actually use to interact with a working MongoDB replicaset, and you would run them in a terminal, as you do here.

https://github.com/lpmi-13/sadpods-dns – fixing DNS issues in a running container

  • lead time – medium
    • this only runs in GitPod, and so there’s a bit of a wait for the system to come online and configure itself.
  • automated generation – low
    • while the setup is entirely automatic, there’s only one scenario, so once you’ve solved it, there’s no benefit to doing it again.
  • automated feedback – medium
    • once the user has fixed the issue, there’s one command to run to identify success, but it would be nicer if it didn’t depend on an additional action on the user’s part.
  • authenticity – high
    • this is a very authentic task, and the interaction takes place entirely in the terminal.

https://github.com/lpmi-13/howbigisthisjson – guess the size of a json payload (in bytes) and see if you’re right

  • lead time – high
    • you can go straight into it in the web browser.
  • automated generation – high
    • the system will continue to feed you random json data as long as you want.
  • automated feedback – high
    • similar to above, the system immediately tells you if you got it right. In the case of “hard mode”, you get an immediate measure of how close you were (a rough sketch of that byte-count check is below).
  • authenticity – low
    • to be fair, this isn’t really an authentic task at all, and it’s just something I made for myself to scratch a personal itch. The size of a json payload almost never actually matters in practice.
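
As an aside, the byte-count check itself is pretty small. Something like the following is roughly what I’d expect it to boil down to (not the actual howbigisthisjson code, and note that the exact count depends on how the json is serialized):

```python
import json


def payload_size_bytes(payload):
    """Serialize the payload and count the UTF-8 encoded bytes.

    The count depends on serialization choices (whitespace, separators),
    so the "right" answer is only right for one particular encoding.
    """
    return len(json.dumps(payload, separators=(",", ":")).encode("utf-8"))


def score_guess(payload, guess_bytes):
    """Hard-mode style feedback: how far off was the guess?"""
    actual = payload_size_bytes(payload)
    return {"actual": actual, "off_by": abs(actual - guess_bytes)}


if __name__ == "__main__":
    sample = {"name": "ada", "languages": ["python", "go"], "active": True}
    print(score_guess(sample, 50))
```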

https://github.com/lpmi-13/jq-pilot – transform a json payload into the intended output using jq commands

  • lead time – medium/high
    • this one has two different values, because it’s my blog and I can do whatever I want! If you interact with it via the browser, you can go directly to https://jayq.party and see instructions for the activities. However, this relies on the good graces of render.io and whether they want to keep offering a free tier for running containers. If you clone and run it locally, there’s a bit of setup, either involving node/golang or just docker…but that’s still a fairly fast route. You could also have the best/worst of both worlds and run it in GitPod, which has the benefit of the web UI, but also requires some time for the apps to build/run.
  • automated generation – high
    • the system will provide random exercises as long as you want to practice.
  • automated feedback – high
    • when the user has correctly transformed the json, the system progresses to the next activity (a rough sketch of that kind of check follows this list).
  • authenticity – high
    • jq is meant to be used in the terminal, and that’s what the user uses here as well. Additionally, the usual workflow is that you have some json, and you know what you want to get out of it, but you don’t necessarily know how to get there…so that’s what we practice.
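
One note on how the correctness check can work for something like this: comparing parsed structures rather than raw strings means whitespace and key order don’t matter, which is what you want when the learner is piping output through jq. A minimal sketch, assuming the expected output is known when the exercise is generated (again, not the actual jq-pilot code):

```python
import json


def transformation_is_correct(expected_json: str, submitted_json: str) -> bool:
    """Compare parsed structures rather than raw text, so indentation,
    key order, and trailing newlines don't fail a correct answer."""
    try:
        return json.loads(expected_json) == json.loads(submitted_json)
    except json.JSONDecodeError:
        return False


if __name__ == "__main__":
    expected = '{"users": ["sam", "alex"]}'
    submitted = '{\n  "users": [\n    "sam",\n    "alex"\n  ]\n}'
    print(transformation_is_correct(expected, submitted))  # True
```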

https://github.com/lpmi-13/network-recon – move laterally between containers in different networks to find the flag

  • lead time – medium
    • this involves either some container building/running if you’re working locally, or running it in GitPod, which has the usual startup caveats. The upside of GitPod is that you’re dumped straight into the first container, so you can immediately start exploring. If you run it locally, you’ll have to run the setup script and then enter the first container yourself.
  • automated generation – medium
    • the IP addresses are randomly generated, as well as the ports that are being used for the openssh servers. However, the two subnets have predictable ranges (always a 10.X and a 172.X), so it’s a bit more predictable than it would be in a real situation.
  • automated feedback – low
    • the user just needs to get to the last container, following the steps in the readme; there’s nothing that automatically confirms success.
  • authenticity – medium
    • while this is a very contrived situation, the usage of both nmap and ssh with non-standard ports is how you would normally use those tools in the terminal.

Additional Considerations, and other notes

I did originally have an idea of skills integration as a separate dimension, meaning something like in cron-trigger, where you need to read and understand an expression in cron syntax in order to convert it into words. However, this seemed to align fairly well with the authenticity dimension, so I got rid of it, since it wasn’t adding much extra value.

I also kicked around the idea of something like ease of access, with the intent of showing the difference between something available directly via a web browser (AKA – the universal operating system) and something that needs to be copied/installed/run on the user’s system, but this also feels like it could be rolled into the lead time measurement, so I didn’t include it as a separate dimension.

It would probably also be good to mention that we’re not necessarily looking for things to max out each of the dimensions. Something that’s immediately available in the web browser without any wait time is probably not going to be highly authentic, but that’s okay.

As I continue to refine this idea of DevEdOps, I’m hoping to continue presenting about it and getting feedback from the community about what else would be helpful to discuss/explore.
