ORCID just received the Award for Meritorious Achievement from the Council of Science Editors, in recognition of its pioneering work creating an open digital author identification system and helping to advance CSE’s mission to improve scientific communication.
We are truly honored!
ORCID has been a community effort from the start. Early adoption by researchers, editors, publishers, and associations has been critical to broader uptake by universities, funders, and the wider research community.
We sincerely thank you for your support.
As I was flying home from the CSE meeting, I had the opportunity to reflect on how we can help each other transform the way research and scholarly works are created and shared. David Haber mentioned in his comments at the meeting that we need to start thinking about publications as data rather than as collections of words. It is now nearly 50 years since Doug Engelbart’s demonstration of the power and potential of interactive linked data, and we are still struggling to make his vision a reality in publishing. With publications as data, the integration of machine-readable identifiers into processes and systems becomes natural. Identifiers can allow us to connect people, places, and things within and between systems, and then to present these connected data in interactive interfaces.
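To make the idea concrete, here is a toy sketch of a publication treated as data rather than text. The field names and values are illustrative only (not a real metadata schema); the point is that persistent identifiers like DOIs and ORCID iDs are the hooks that let systems connect works, people, and datasets:

```python
# A toy record treating a publication as data rather than text.
# Identifiers (DOIs, ORCID iDs) are the links that let systems
# connect people and things. Field names and values here are
# illustrative, not a real metadata schema.
publication = {
    "doi": "10.1234/example.5678",           # identifies the work
    "title": "An Example Article",
    "authors": [
        {"name": "A. Researcher", "orcid": "0000-0002-1825-0097"},
    ],
    "funders": [
        {"name": "Example Funder", "award": "EX-0001"},
    ],
    "datasets": ["10.1234/example.data.1"],  # cited data, by DOI
}

def linked_identifiers(record):
    """Collect every identifier in the record -- the points at which
    other systems can connect to this publication."""
    ids = [record["doi"]]
    ids += [author["orcid"] for author in record["authors"]]
    ids += record["datasets"]
    return ids

print(linked_identifiers(publication))
```

A record like this can be traversed by machines: given the ORCID iD, a system can pull the person’s other works; given the dataset DOI, it can pull the underlying data.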
But, as Siva Vaidhyanathan admonished us during his keynote address, more data is not necessarily better. Big data without the intention to understand causality does not enlighten, it just takes up big space.
One really fabulous example of what can be done with big data and intentionality is ChemSpider: a database of chemicals linked to identifiers, chemical structures, names, and uses. Wow. Antony Williams provided a demonstration at the Allen Press Emerging Trends in Scholarly Publishing meeting last week, and I am still bowled over. There are nascent initiatives by Force11 to establish identifiers and rich linkages for resources and reagents, by Science Exchange to establish standardized experimental methods, by geoscientists to establish identifiers for geosamples, by CrossRef to establish standards for acknowledging funding, and by Force11, DataCite, ORCID, and figshare to support citation of datasets, to name a few. These initiatives are driven by the need to support transparency and reproducibility in experimental research. I would argue they are also critical for engendering public trust in the scientific process itself.
Moving from text to data opens the opportunity to componentize the publication (should I still be calling it that?) and makes it possible to clearly articulate and credit authors for their specific contributions, such as images, datasets, plots, and methods. Liz Allen and her colleagues have prototyped a tool for standardizing the description and collection of contributor roles. This has been taken up by NISO and CASRAI for development into a formal standard, and one can now imagine more precision in acknowledging credit, less focus on author order, and perhaps even a shift from an authorship model to one based on contributorship.
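As an illustration of what a contributorship record might look like, here is a hedged sketch. The role names echo the draft contributor-role taxonomy being developed through NISO and CASRAI, but the names, data structure, and helper function are hypothetical, not part of any formal standard:

```python
# A hypothetical contributorship record: each person is credited for
# specific roles rather than ranked by author order. Contributor
# names are placeholders; the structure is illustrative only.
contributions = [
    {"name": "Contributor A",
     "roles": ["Conceptualization", "Writing - original draft"]},
    {"name": "Contributor B",
     "roles": ["Data curation", "Formal analysis"]},
    {"name": "Contributor C",
     "roles": ["Formal analysis", "Visualization"]},
]

def credited_with(role, records):
    """List the contributors credited with a given role."""
    return [r["name"] for r in records if role in r["roles"]]

print(credited_with("Formal analysis", contributions))
```

With a structure like this, "who did the analysis?" becomes a query rather than a guess inferred from author order.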
This makes me dream about a more seamless connection between researchers, contributors, and authors. Of acknowledging and encouraging a broader range of contributions than papers. Of a continued conversation, through annotation, re-analysis, and post-publication peer review. Of shared access.
Is research a “cycle of repeated failure,” as Siva Vaidhyanathan suggested, only because we cannot publish negative results? Will research be a less agonizing occupation if we have a more certain understanding of how an experiment was conducted, and with what resources and methods? If we can more easily share the bits that do work? Will we share more readily because we can get credit for these contributions outside (or inside) of a paper? Wouldn’t it be so very cool to be able to use the power of the Web to support three-dimensional articles: narrative integrated with datasets, methods and tools, people, interactive re-analyzable charts, samples, funding, organizations, other works, annotations, and post-publication peer review?
And I then come crashing back down to earth (it is notable that the pilot is announcing our approach to the airport). How are we going to move toward this positive vision of sharing and attribution if we still cannot move metadata from a submission system into a publication? If co-authorship is not explicitly affirmed during the publication process? If we continue to think that duct-taping the text-based process is enough?
ORCID has come a long way since our launch eighteen months ago. The CSE award is an indicator of community awareness of the potential for persistent identifiers. But as a community, we have a long way to go to realize the potential of identifiers, and more broadly the potential of a data-based contributorship model of research. Together, we can do more, and as a community we must.