It’s amazing how simple architectural decisions at the start of a software project can kill its long-term potential. One example we run into all the time is with learning outcomes and tagging content — simple enough, right?
Most learning platforms treat learning outcomes as simple text tags — let your instructors tag content with learning outcomes, so students know what they are expected to learn. Good pedagogy. And it’s a simple enough feature to implement, so it’s considered low-hanging fruit.
However, when you want to experiment with things like pathways and relationships between outcomes, or adaptive assessments based on what a student has demonstrated they know, you run into trouble. Big trouble. You can't easily link text tags once they are in your system. You could probably come up with complex database queries to get the results you want, but that early decision to model the outcomes as text tags now forces you to jump through hoops to do anything more complex and interesting.
Ideally, from the start you treat learning outcomes as first-class entities, with their own relationships to each other and to content. The initial complexity and planning pays off in the long term: less refactoring and more powerful models.
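To make the contrast concrete, here is a minimal sketch of what "outcomes as entities" might look like. Everything in it is hypothetical (the `Outcome` class, the field names, the sample outcome ids), but it shows the kind of question that becomes trivial with an entity model and painful with text tags: "what must a student master before attempting this outcome?"

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class Outcome:
    """A learning outcome as a first-class entity, not a text tag."""
    id: str
    description: str
    prerequisites: Set[str] = field(default_factory=set)  # ids of other outcomes
    content: Set[str] = field(default_factory=set)        # ids of tagged content

def transitive_prereqs(outcomes: Dict[str, Outcome], outcome_id: str) -> Set[str]:
    """Walk the prerequisite relationships to find everything a student
    should master before this outcome (iterative depth-first traversal)."""
    seen: Set[str] = set()
    stack = list(outcomes[outcome_id].prerequisites)
    while stack:
        oid = stack.pop()
        if oid not in seen:
            seen.add(oid)
            stack.extend(outcomes[oid].prerequisites)
    return seen

# Hypothetical pathway: algebra -> limits -> derivatives
outcomes = {
    "algebra": Outcome("algebra", "Manipulate algebraic expressions"),
    "limits": Outcome("limits", "Evaluate limits", {"algebra"}),
    "derivatives": Outcome("derivatives", "Compute derivatives",
                           {"limits"}, {"video-101"}),
}
print(sorted(transitive_prereqs(outcomes, "derivatives")))  # → ['algebra', 'limits']
```

With plain text tags, answering the same question means encoding relationships somewhere else entirely and stitching them together with queries; here, the pathway is just a traversal of the model.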
And yes, I am partially tooting my own horn. My group has been working on these types of models for the education domain, though that work began way before my time. As outcomes-based learning becomes more prevalent, it's amazing to see how the early work on OSIDs (from roughly 15 years ago) seems almost prescient. Check out EDUCAUSE's vision for next-generation learning platforms, and you can see the parallels.