The THOR team is hard at work helping forge the path to sustainable persistent identifier (PID) services – including ORCID iDs. As with any long-term goal, a bit of self-reflection is helpful for tracking your progress, considering your successes, and psyching yourself up to tackle challenges along the way. In the case of a project like THOR, we can support this self-reflection by developing a structure that lets us properly measure our success as we go. But this is often tougher than you might think.
In the early days of PID services, it was fine to be concerned only with uptake, since the priority was to get the word out. While we still have some work to do there, PID services have now matured to the point that we can no longer be satisfied with simply “getting the numbers up.” We need to tailor our messages in order to drive further innovation towards the interoperable future that THOR and our partners dream of. Having better information about the underlying motivations for adopting PIDs, and about who might be ready to do so, will help us drive the creation of services that make the whole system better. To further this warm and friendly mission, we need cold, hard facts. So how do we go about finding those facts? And how do we turn them into something useful and, quite frankly, a bit less prickly?
What can be measured?
The first step in evaluating our progress was to set objectives that are actionable and measurable. Though it’s tempting to set strict performance targets, this is just setting yourself up for failure. If you define success as selling 50 widgets and you only sell 48, then, by your own definition, you’ve failed. In THOR’s case, our driving purpose is infrastructure improvement, so we’re more interested in observable trends than in concrete targets. Developing key performance indicators (KPIs) is helpful here. Remember that an indicator is just a way to consider trends (e.g. “number of widgets sold”); it isn’t itself a target (e.g. 50 widgets).
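To make the distinction concrete, here is a minimal sketch in Python. The numbers are made up and the snippet isn’t taken from any THOR tooling; it simply contrasts judging a fixed target with watching an indicator’s trend.

```python
# Hypothetical monthly figures, not real project data.
monthly_widgets_sold = {"2017-01": 40, "2017-02": 44, "2017-03": 48}

# Target thinking: a single pass/fail judgement against a fixed number.
target = 50
print("Target met:", monthly_widgets_sold["2017-03"] >= target)  # False: "failure"

# Indicator thinking: the direction of the trend is what we actually care about.
values = list(monthly_widgets_sold.values())
changes = [later - earlier for earlier, later in zip(values, values[1:])]
print("Trend is upward:", all(change > 0 for change in changes))
```

The data are the same either way; the indicator framing just asks whether things are moving in the right direction rather than whether an arbitrary line has been crossed.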
How should it be measured? (With which indicators?)
The next step was to determine how to measure what we want to measure. The goal here is to select indicators that are valuable as well as meaningful. “Valuable” means that knowing the indicator’s status will help us to make a decision. “Meaningful” means that we understand what the indicator is actually tracking. If the trend line associated with our chosen indicator goes up, will we know what that means for us, and will we know how to react?
Part of the difficulty of selecting indicators in this way is that the most meaningful and valuable information for you might not be immediately available. When THOR first started down the indicator path, we just wanted easily gatherable quantitative measures; we weren’t looking to take on any complex user studies. However, some of the information we wanted wasn’t available, either because it wasn’t being tracked on a regular basis or because gathering it ourselves would have been a manual process we weren’t yet willing to take on.
How should it be measured? (Tool or no tool?)
Once you know what your objectives are and which indicators will help you track your progress towards those objectives, you need a convenient way to monitor it all. Fancy tools may not be necessary; in fact, most of the time they probably aren’t, depending on which indicators are important to your particular flavour of success. But we wanted to demonstrate some of the possibilities of having PID measures ready to aggregate (and, if we’re honest, we do like fancy), so we developed a dashboard to keep everything in one place. (Read more about our process in our report.) Creating the dashboard was a good exercise in establishing what could be measured and how, and it gave us a chance to explore what meaningful metrics might be. For instance, we can see that PID uptake is on the rise, and we can see some information about the metadata associated with those PIDs. But this doesn’t give us any insight into causal relationships, tell us precisely why the trend is happening, or show us exactly who is involved.
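As an illustration of the kind of aggregation an uptake panel on such a dashboard boils down to, here is a small Python sketch. The file layout, column names, and figures are hypothetical, and this is not the code behind our dashboard; it just turns monthly PID registration counts into the monthly and cumulative views a trend panel would plot.

```python
import csv
from io import StringIO

# Hypothetical export of monthly counts of newly registered PIDs.
# The layout and numbers are illustrative only.
monthly_counts_csv = """month,new_pids
2016-09,1200
2016-10,1350
2016-11,1500
2016-12,1480
"""

rows = list(csv.DictReader(StringIO(monthly_counts_csv)))

# An uptake panel typically shows the monthly figure alongside a running total.
running_total = 0
for row in rows:
    running_total += int(row["new_pids"])
    print(f"{row['month']}: {row['new_pids']} new PIDs this month, {running_total} in total")
```

Even a much fancier version of this is still just counting: it can show that uptake is rising, but not why.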
Because we’re all about meaningful data, these adventures in measurement have led the THOR team to identify gaps in the available metrics surrounding PID service adoption and to consider which additional indicators might be useful for future work in the PID research space. We’ve now embarked on a more detailed gap analysis that will lead to a study of some of these missing measures. Since our goal is to drive PID service adoption, we’ve identified disciplinary coverage and geographic distribution as our most promising themes to pursue. We are now collecting the data we need to analyze PID adoption in X disciplines and Y countries – a full report will be available later this year.
Moving forward
So what have we learned throughout this process? First and foremost, not everything is as concrete as you might want it to be. When you’re dealing with humans and human behaviours, things get squishy. Second, since we’re only monitoring existing trends based on factors we don’t necessarily control, some information available to us will remain just “good enough” until others can do more detailed work to either improve the data or flesh it out. Our job for the remainder of the THOR project is to point out what would be most useful to know about interoperability, so that it can be studied.
The PID field is still evolving and has a lot of growth and changes left in store. Some potentially valuable information requires further study to tease out. Our service adoption study, beginning with the gap analysis, will help us make a start on that research, and we hope to gather some useful information that can set the stage for future work. We’ll also need help from the wider PID user and integrator community to improve existing metadata and to help us consider meaningful metrics.
As always, if you have questions or comments about THOR, please get in touch.