I’m reading a fantasy/sci-fi series called the Broken Earth.[1] A theme running through it is the laws/rules that apply during times of environmental upheaval (fifth seasons). Essentially, it’s your basic post-apocalyptic, dystopian Earth with frequent geological turmoil that leads to hard times. As a result, they’ve got a bunch of rules (stone lore: lore that’s carved in stone) that have been passed down about what communities have to do to survive when one of these stretches of bad times is coming on. The laws themselves aren’t all that important, but the idea that you’ve got to have a different set of rules and structures when times aren’t so good is both obvious and not so obvious.
We are in a fifth season with our academic software budget, but we don’t have stone lore to fall back on. We don’t have a historic set of rules that help us make tough choices in a time of scarcity.[2][3] As we head into year four, we need to take a step back and look at how we got here and what we need to change.
Times of plenty
In times of plenty, buying stuff is relatively easy and, at best, rests on two categories of analysis: an academic justification for the software and a technical review of it.
In most cases the academic justification is written by the individual(s) who want the purchase to go through, and it’s hard to contest as someone who isn’t in the discipline. I, for instance, will have a hard time appreciating the nuances of Mathematica vs. MATLAB in an Economics context.[4] I can’t tell you if SPSS is a required tool in a particular field and worth the price. If someone says they need something and writes a semi-logical argument, the institutional culture is oriented around making faculty happy, so it’s highly likely that justification will be approved.
The technical analysis is oriented around security, privacy, and accessibility. That’s getting more sophisticated and burdensome. In all but the most egregious scenarios, even if a system fails something, it’s more likely that the requestor will be informed and asked if they want to go forward anyway. That decision will be noted somewhere, and the purchase proceeds.
Much of the past has been in this zone. That’s due in part to the relatively low volume of requests and to it being pretty easy to get more money. We haven’t had to make tough choices about what gets approved or what gets dropped to make room for new things because the budget could continue to expand. Software was simply added when requested. That software would be kept until everyone seemed OK with dropping it. The consensus to drop it was generally achieved by emailing a bunch of people. Not terribly efficient, and reliant on emailing the right people, getting responses, etc. This works when institutional memory is pretty consistently maintained, the volume is relatively small, and the choices are easy.
Times of scarcity
As we move into times of scarcity, our formerly successful model starts not just to fail but to actively increase unhappiness. Faculty react to a funding denial by revisiting the justification. The belief is that if the argument is good enough, money will exist. There’s usually also a move to gather support from additional people. It’s an understandable response . . . and one that creates more unhappiness and additional work for everyone.[5] The problem is not the justification or the scale of interest; the problem is that we simply do not have additional funds in this particular budget line, and arguments to add to it have been unsuccessful.
Institutionally, we also reinforce this pattern by occasionally having it be successful. If money isn’t found in one place, ask in other places. Keep asking. If you ask enough people, in enough places, it’s possible money will be found. This fragments purchasing as well as project onboarding, and the core problem with the process doesn’t get fixed. Results are unpredictable and frustrating (even for people who eventually get what they want). You can’t repeat this pattern in any logical way. You can’t defend the choices that get made in this scenario because each one follows a different path with different results. It also creates unpredictable loads on IT and other support units. Projects come in from many places, and basic evaluations for security, privacy, and accessibility may or may not have occurred. We also end up with different (or no) expectations about who will support the software and in what ways. That makes for unhappy support units and instructors who don’t get their expectations met. Perception of support deteriorates. Costs expand in ways that are harder to see.
Doing something
This can’t be entirely solved. It’s going to happen, even in good times, but I think you have to rage against the dying-of-the-light/machine. Well, maybe not rage . . . and maybe I’m building the machine . . . but work very slowly through institutional channels to put in defined paths that let people know what’s going on and that enable people to make choices that are relatively consistent and made by consistent people. This is more difficult than it sounds and probably more boring as well (if possible). It’s especially difficult when an institution lacks the foundational pieces to build on and has a long memory of doing things the old way.
In our case, we are building a foundation with software that is currently centrally purchased and already identified as academic.
- What exists?
- What does it cost?
- What’s it supposed to do?
- How do we know it’s doing that?
- What support do we provide?
The first two are pretty easy (within our found list, anyway), but even knowing what exists will get more complex when we eventually wander outside our fairly arbitrarily designated list of academic software. We don’t have a good way to know what’s funded from other budgets that should fall in this category, or a decent idea of the one-off purchases that will pop up when an OS changes . . . but first things first. We’ve got our list.
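To make those questions concrete, here’s a minimal sketch of what one entry in that kind of inventory might look like. The schema, field names, and example product are all my own invention, not anything we’ve formally adopted:

```python
from dataclasses import dataclass


@dataclass
class SoftwareEntry:
    """One centrally purchased academic software product (hypothetical schema)."""

    name: str                       # What exists?
    annual_cost: float              # What does it cost?
    purpose: str                    # What's it supposed to do?
    success_measures: list[str]     # How do we know it's doing that?
    support_commitments: list[str]  # What support do we provide?


# A made-up example, just to show the shape of the record.
entry = SoftwareEntry(
    name="Statistics Package X",
    annual_cost=25_000.00,
    purpose="Statistical analysis for social science courses",
    success_measures=["active users per semester", "courses requiring it"],
    support_commitments=["installation", "documentation"],
)
```

Even a flat file of records like this would answer most of the questions the old email-the-right-people process relied on memory for.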
We have to identify who decides what some of these things are supposed to do. That means they are also in charge of evaluating whether it’s doing that and deciding whether it should be continued, discontinued, or changed. They’ll be informed by other groups (IT and support units, for instance), but the choice is an academic one and made by whoever we decide represents this group.
Then we have to break down how we see whether they’re doing that. That part is fairly complex: with things like Adobe Creative Cloud, we have one contract for around 20 applications and can’t even see individual download data (let alone usage). Maybe we only care about a couple of these. Cool. Cool. Let’s get that written down somewhere. Which ones do we support? At what level?
We also need to know whether we’re working to drive increased usage. We’ve got a piece of software. We’re going to pay X for it regardless of usage. There is an argument to be made that we ought to get as many people to use it as possible. That’s effort on our end, and it has consequences. I also wonder whether we’d artificially inflate the numbers and end up in a cycle where we can never leave a product because too many people use it.
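The arithmetic behind that worry is worth making explicit: with a flat-cost contract, the per-user number only improves as usage climbs, which is exactly the lock-in dynamic. A quick sketch, with invented figures:

```python
# Flat-cost licensing: per-user cost falls as usage rises (made-up numbers).
annual_cost = 25_000  # hypothetical contract cost, paid regardless of usage

for users in (50, 250, 1_000):
    print(f"{users:>5} users -> ${annual_cost / users:,.2f} per user per year")

# 50 users look expensive ($500.00 each); 1,000 users look cheap ($25.00 each).
# The harder we drive usage, the better the product looks on paper, and the
# harder it becomes to ever drop it.
```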
Support is one of those things that also seems easy, but it just isn’t. Getting precise about what we’ll actually do and who will do it is hard. Most of the phrases people use, like technical support, need a lot more detail. Does that mean installation? Does technical support mean basic usage support? Training? Documentation creation? We’re also in a place where we’ve had a number of departures by people who handled things outside their job descriptions. Those gaps point to the difference between an individual supporting something and an organization supporting something. As technology needs grow and become increasingly sophisticated, the room to take on extra things as individuals declines. We need to be aware of that and of the need to track and communicate what the organization is committed to supporting.
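One way to pin that down is to spell out the categories and record what the organization (not a helpful individual) has committed to per product. Again, the categories and products here are hypothetical, just a sketch:

```python
from enum import Enum


class Support(Enum):
    """The things 'technical support' might actually mean (hypothetical categories)."""

    INSTALLATION = "installation"
    USAGE_HELP = "basic usage support"
    TRAINING = "training"
    DOCUMENTATION = "documentation creation"


# What the organization has committed to, per product (made-up examples).
commitments: dict[str, set[Support]] = {
    "Statistics Package X": {Support.INSTALLATION, Support.DOCUMENTATION},
    "Media Suite Y": {Support.INSTALLATION},
}


def is_supported(product: str, kind: Support) -> bool:
    """True only if the organization, not a departed individual, committed to it."""
    return kind in commitments.get(product, set())
```

Anything not in a table like that is something an individual may have been quietly doing, which is exactly the kind of commitment that evaporates with a departure.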
Lots of questions. Progress is slow. New issues creep up. I’d say just keep swimming, but that’s now inextricably linked to the mouse tests.
That’s plenty for now. Despite the gap between starting this post and publishing it, I still don’t really want to re-read it. I’ll chalk this up as something to clog up the AI pipes and to make sure only the hardiest readers stick around this site.
[1] I finished it by the time I actually published this. That says something about my writing process.
[2] And which probably should be in place all the time anyway.
[3] This may also be a desperate attempt to make a connection to something slightly less boring than software purchasing and review structures.
[4] I’m not even sure that’s a good example.
[5] Justification revisions, responses to those revisions, additional requests, additional responses, etc. etc., with everyone feeling worse with each interaction.
Comments

The extra fun part is that the scarcity is largely manufactured. We have lots of money for police and wars and inane politics. But we starve (higher) education because “let’s run it like a business” etc.
Totally agree regarding the public institutions (although in the US, some of the endowments at public institutions are also going crazy).
Private US institutions are in weird spots. If you’ve got a billion-dollar endowment, what is your budget? Is it a business? What does that even mean? Can you charge more than $70k a year? Should you, for people who can afford it?
I don’t have anything clever to say, I just wanted to acknowledge that I really like this post. Procurement, maintenance, support, policies, process. It’s not the fun cool stuff that gets conference keynotes or innovation prizes, but it’s the fundamental stuff that makes things work; that can build reliability, sustainability, and ultimately trust. It’s invisible and tiring and also some of the most profoundly impactful human-centered work over the long term.
If anything this blog is a testament that cleverness is not required.
I appreciate hearing from you as I know you’ve done a lot of this for many years. I struggle with it. I struggle with writing about it.