Electronic Resource Management in Libraries

Author: C. Sean Burns
Date, version 1: 2024-02-14
Email: sean.burns@uky.edu
Website: cseanburns.github.io/csb/
GitHub: @cseanburns

This short book is based on a series of lectures for my course on electronic resource management.

About This Book

This book is not a comprehensive treatment of Electronic Resource Management (ERM). Rather, it is meant to introduce students to ERM work, and it should be read alongside the readings linked in each chapter. The linked readings are updated annually.

This book is a living document. The content will be updated each year that I teach my course on electronic resource management, generally each fall semester. Updates will address changes in the field and improve clarity when I discover that some aspect of the book causes confusion or does not provide enough information.

Please use the search function on this site to find specific topics or keywords. If you would like a PDF copy of this work, the printer icon at the top right of the page will print to PDF.

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Chapter 1: Electronic Resource Librarians

Many of my incoming students are unaware that there is a specific role in libraries for electronic resource management, yet electronic resource management (ERM) plays a vital, albeit often invisible, role in libraries. ERM librarians ensure the organized acquisition, access, and control of digital resources, which requires them to manage e-books, online journals, databases, and other digital content. This work is increasingly complex, especially as libraries continue to adapt to, and help shape, the digital age.

While many are aware of the classic cataloger and reference librarian, electronic resource management librarianship is a specialized field that specifically focuses on the above challenges. These professionals employ a blend of technical skills and understanding of information access to negotiate licenses, oversee subscription renewals, troubleshoot access problems, and work collaboratively with other library staff to integrate electronic resources into the library system. This specialized work requires them to be generalists, too. That is, they must be aware of and support those other library staff and the library's mission to provide seamless access to information, adapting to the evolving needs of the academic community and the wider public.

This chapter introduces students to the field of electronic resource management. It begins with a section that discusses different perspectives on electronic resource management. Section 1.2 covers the kinds of activities ERM librarians engage in at their libraries. Section 1.3 argues that regardless of an ERM librarian's specific skills and duties, a key characteristic of the field, given its need to work with continuously emerging technologies, is constant disruption.

The ERM Librarian


In this section, I will:

  • provide examples of electronic resources,
  • frame the topic of this course, and
  • discuss the readings.


This semester we're learning about electronic resources and how to manage them. Let's begin by outlining the kinds of things that count as electronic resources. Karin Wikoff (2011) outlines the major categories; two of them, ebooks and linking technologies, illustrate the complexity involved:

Ebook technology is rather complicated and differentiated depending on the copyright status, the file type (PDF, ePUB, TXT, etc.), and the purpose or genre (textbook, fiction or non-fiction, etc.). In some cases, ebooks are software applications and not just plain or marked up text. They also vary by platform or the application used to interact with the text, each of which may offer different types of functionality.

Linking technologies allow users to begin in one search system, like a discovery service, which extends the query to other systems without requiring the user to initiate separate searches in those systems. For example, a user begins a search in a library's discovery system, like UK Library's InfoKat (powered by Primo (Breeding, 2006)). InfoKat identifies multiple articles related to the query, even though those articles are located in other full-text systems, like EBSCOhost, ProQuest, or JSTOR.
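Much of this linking traffic travels as OpenURLs: the source system packages a citation's metadata into a URL aimed at the library's link resolver, which then works out where the full text lives. Below is a minimal sketch, assuming a hypothetical resolver address; the helper function is my own illustration, though the `rft.*` keys follow the real OpenURL 1.0 (ANSI/NISO Z39.88-2004) journal format:

```python
from urllib.parse import urlencode

# Hypothetical link resolver endpoint; a real library would use its own.
BASE = "https://resolver.example.edu/openurl"

def build_openurl(atitle, jtitle, issn, volume, issue, spage):
    """Assemble a journal-article OpenURL from citation metadata."""
    params = {
        "url_ver": "Z39.88-2004",
        "ctx_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.atitle": atitle,   # article title
        "rft.jtitle": jtitle,   # journal title
        "rft.issn": issn,
        "rft.volume": volume,
        "rft.issue": issue,
        "rft.spage": spage,     # starting page
    }
    return BASE + "?" + urlencode(params)

link = build_openurl(
    atitle="Technical communicator",
    jtitle="Journal of Electronic Resources Librarianship",
    issn="1941-126X", volume="28", issue="2", spage="84",
)
print(link)
```

A real resolver would parse these key/value pairs and consult the library's knowledge base to decide which full-text target, if any, to offer the user.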


The print-only era of libraries was difficult enough for many reasons, but managing and using print resources was, comparatively, a more linear process. Electronic resources have raised the stakes. That might be expected: civilizations have had over 500 years to develop and refine print technology, yet we have had only about three decades of experience with digital technology. We are a long way from stability, and many of the challenges and frustrations ahead of us are not simply technical but also social and legal.

As you may surmise from the outline at the top of this page, electronic resources are a major part of any library, whether academic, public, school, or special. The need to manage them with efficient workflows requires attending to many parts of a big system. This is much of what we will discuss and learn about, especially because these systems are complex and have had a major impact on librarianship itself.

Our Readings: The nature of ERM librarianship

Our readings this week provide introductions to electronic resource librarianship and help frame this course.

A Specialist and a Generalist

The first article, by Stachokas (2018), surveys the history of this specialist/generalist role. Stachokas (2018) finds that the electronic resource librarian has one foot in technical services and the other in collection development; that this requires a holistic understanding of the electronic resource workflow and of its embeddedness in the scholarly and library ecosystem; and that this division is leading to two areas of specialization: those who focus on "licensing, acquisitions, and collection development" (p. 15), and those who focus on "metadata, discovery, management of knowledge bases, and addressing technical problems" (p. 15). This rings true to me. In my own observations, job announcements have increasingly stressed one of these areas and not both.

The Technical Communicator as the "Bridge"

In the second article, Hulseberg (2016) uses the field of technical communication (TC) to interpret the field of electronic resource librarianship. Hulseberg (2016) takes the view that an electronic resource librarian is, among other things, a technical communicator. This is much different from being someone who helps patrons with their technical problems. Rather, this is someone who completes advanced work in documenting and reporting technical processes.

Hulseberg (2016) highlights four important themes about ERM; the most interesting to me are theme one, metaphors of "bridge" and "translator," and theme two, collaborating in a web of relationships. When I was an undergraduate, I imagined that the job I wanted would be one that connected people in different silos and helped them communicate. It turns out that, on Hulseberg's (2016) view, electronic resource librarianship does this work, which is pretty cool. The other themes are just as important, however; in particular, theme four, about jurisdiction, highlights one of the major disruptions to librarianship in the last thirty or forty years.

As an example, consider that most people, researchers and scholars included, use non-library resources to locate information. Additionally, more works, scholarly and non-scholarly, are freely and publicly available on the web, e.g., as open access (OA). The library may thus be becoming disintermediated, as people use non-library services, like Google or Google Scholar, to retrieve freely available works on the web rather than going through the library. What, then, becomes of the core jurisdiction of the librarian, and of the electronic resource librarian in particular? In concrete terms: a recent paper (Klitzing et al., 2019) reported that researchers say they use Google Scholar 83% of the time and EBSCOhost 29% of the time to find relevant material. That raises strategic and technical questions about the role of the librarian and the library in today's scholarly communication system.

To License or Not To License

The third article, by Zhu (2016), places a different theoretical lens on what it means to be an electronic resource librarian. Zhu (2016) posits that the licensing aspect of electronic resource management significantly influences ER librarianship's identity. One reason Zhu's (2016) findings are insightful is that we often license electronic resources rather than buy them.

The crux centers on copyright law, which provides librarians with an important legal justification for lending works: the First Sale Doctrine. Copyright law grants copyright owners the right to distribute their work, but the First Sale Doctrine holds that if you buy a copy of a book, you have a right to lend or sell that copy. This doctrine is fundamental to librarianship, but it also raises problems, since most digital works (ebooks, etc.) are licensed, not bought, by libraries. Thus the First Sale Doctrine does not apply to such works. The ALA has a guide on this issue (ALA, 2022).

Stachokas (2018), Hulseberg (2016), and Zhu (2016) present the historical and environmental forces that have shaped the work of electronic resource librarians and their professional identities, and each of these authors discuss important themes that function as evidence of these identities. In our discussions this week, we should focus on these themes and how to make sense of them.


Electronic resource librarianship is a fascinating area of librarianship. Because digital technologies have woven their way into all parts of the modern library ecosystem, and because these digital technologies bring with them a slew of technical, legal, and social challenges, electronic resource librarians have had, as these technologies have developed, to maintain a holistic view of this ecosystem just as they have had to specialize in key areas that require maintaining that holistic, interconnected view.

The course takes that holistic view and divides it into four parts for study. In the first part, we study the nature of the work itself: what it means to be an electronic resource librarian.

In the second part, we learn about the technologies that an electronic resource librarian uses and the conditions that shape these technologies. We will learn about integrated library systems (ILS) and how these systems conform (or not) to standards, and how they foster or obstruct interoperability and access.

In the third part, we focus on processes and their contexts. We will study the electronic resource librarian's workflow, the economics and the markets of electronic resources, what is involved in licensing these resources and negotiating with vendors.

At the end, we focus on patrons and end users; that is, those we serve. Because electronic resources are digital, when we use them, we leave behind traces of that usage. This means we will study how that usage is measured and what those measurements can validly say. Because usage leaves traces of personal information, we will examine topics related to the security of these resources and the privacy of those who use them. Electronic resources likewise means having to use websites and other e-resource interfaces, and hence we will study how electronic resource librarians are involved in user experience and usability studies.

Discussion Questions

As we start to address all of this, I want us to consider two questions:

  1. How do we manage all of this electronic stuff? Not only does it involve complicated technology and affect our patrons, but it also involves many different sorts of librarians.
  2. What exactly is an electronic resource librarian? I like this basic question because, thanks perhaps to representations in the media (movies, TV shows, books) and the interactions we've had with librarians in our lifetimes, we all have pretty well-defined, if not always accurate, images of what reference or cataloging librarians are. But what about an electronic resource librarian? This is something different, right? And it's not a position that's likely ever to be captured and presented publicly.

Readings / References

Hulseberg, A. (2016). Technical communicator: A new model for the electronic resources librarian? Journal of Electronic Resources Librarianship, 28(2), 84–92. https://doi.org/10.1080/1941126X.2016.1164555

Stachokas, G. (2018). The Electronic Resources Librarian: From Public Service Generalist to Technical Services Specialist. Technical Services Quarterly, 35(1), 1–27. https://doi.org/10.1080/07317131.2017.1385286

Zhu, X. (2016). Driven adaptation: A grounded theory study of licensing electronic resources. Library & Information Science Research, 38(1), 69–80. https://doi.org/10.1016/j.lisr.2016.02.002

Additional References

ALA. (2022, June 27). LibGuides: Copyright for Libraries: First Sale Doctrine. https://libguides.ala.org/copyright/firstsale

Breeding, M. (2006). OPAC sustenance: Ex Libris to serve up Primo. Smart Libraries Newsletter, 26(03), 1. https://librarytechnology.org/document/11856

Klitzing, N., Hoekstra, R., & Strijbos, J-W. (2019). Literature practices: Processes leading up to a citation. Journal of Documentation, 75(1). https://doi.org/10.1108/JD-03-2018-0047

Wikoff, K. (2011). Electronic Resources Management in the Academic Library: A Professional Guide. ABC-CLIO. http://www.worldcat.org/oclc/940697515

Desperately Seeking an ERM Librarian


In this section, I will:

  • frame the readings
  • discuss the readings, and
  • list some questions to guide our discussion for the week.

Our goal in this section is to understand how the job of electronic resource management (ERM) has changed throughout the years and to develop some ideas about where it is headed.


This week we read two articles (Hartnett, 2014; Murdock, 2010) that analyze electronic resource librarian job advertisements. Additionally, the reading list includes the NASIG core competencies for electronic resource librarianship. I suggest that you review the core competencies before you read the articles.

These articles are of interest because they capture descriptions of electronic resource librarianship in earlier years. Many social, political, and economic conditions have stayed about the same since these articles were published, and these conditions help anchor the work of the ERM librarian, but constant changes in technologies and in the types of electronic resources have meant that electronic resource librarians are constantly adapting to new workflows. Our Murdock (2010) reading makes this point.

Though the technology differs, the descriptions and duties of ER jobs are still on the mark in many ways. I posted links to job announcements in the discussion forum to demonstrate this. Most of those job announcements were emailed to the SERIALST email list (please subscribe to it) and are not current job openings, but they were current within the last few years.

To follow up on more current advertisements for ER librarians, I did a Google search on August 24, 2023 using the following query:

"electronic resource librarian" job

Results are consistent with past qualifications listed in the links mentioned above and with the last section's discussions on the nature of ERM librarianship. I will withhold linking to these advertisements since I don't expect the links to persist as the positions get filled. But several sources outline the following requested qualifications, and I think the themes from the prior section come through in this list:

  • provide "consistent and reliable access to the Library's electronic resources and establishing workflows that maintain discovery and use of all Library collections"
  • analyze "feasibility of technical platforms"
  • "resource licensing and contracting"
  • "copyright compliance"
  • "vendor negotiations"
  • "Acts as a bridge across multiple Library units"
  • enhance "access and use"
  • design a "system of access"
  • analyze "staff and user issues with discovery of resources"
  • coordinate "system administration responsibilities for integrated library system (Ex Libris Alma/Primo VE) with Technical Services Librarian"
  • "create reports in Alma and individual databases as needed, including but not limited to usage statistics, user experience statistics, collection analysis and overlap"
  • monitor "listservs for Alma and Primo VE"
  • manage "administrative and troubleshooting functions for EZ Proxy and e-resource vendors for access and authentication"
  • evaluate "the scope and quality of research resources available"
  • "reviews and negotiates licenses for [...] purchased resources and manages the acquisition, activation, and troubleshooting of all purchased and subscribed electronic resources for the Library."
  • gathers and analyzes "serials, e-books, database usage, and other related assessment data"
  • oversees "the activities of the serials acquisitions unit"
  • maintain "responsibility for licensing and the management of electronic information resources [...] as well as the shared consortial resources"
  • assist in "planning and developing policies and workflows"
  • partner "with the Acquisitions Librarian and staff on the ordering and payment activity of electronic resources"
  • establish and "documents library procedures and best practices for the acquisition, licensing, implementation, assessment, and budgeting of electronic resources"
  • work "with colleagues [...] to optimize resource discovery"
  • work "with vendors to resolve technical issues and manages EZproxy for remote access"
  • "collects, analyzes, and presents use, purchase, and availability data of electronic resources"
  • "works collaboratively to support metadata maintenance for electronic resources and both print and digital serials"
  • develop and implement "submissions to a shared University institutional repository"
  • works "under the supervision of the Associate Director for Technical Services"
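Several of the duties above mention EZproxy, the widely used proxy server that authenticates off-campus users before passing them on to a vendor's site. Managing it largely means maintaining per-resource "stanzas" in its configuration file. A minimal, generic sketch follows; the platform name and hostnames are placeholders, not a real vendor's published stanza:

```
# A hypothetical EZproxy database stanza.
Title Example Journals Platform
URL https://www.example-journals.com
Host www.example-journals.com
Domain example-journals.com
```

Each stanza tells EZproxy which hostnames to rewrite and proxy. When a vendor changes platforms, the ERM librarian updates or replaces the stanza, which is one reason the job ads above ask for EZproxy troubleshooting experience.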

To stay current about the position overall and about the specific duties involved, I encourage you to bookmark and stay abreast of relevant journals in the area. In addition to the two journals used in this section's readings and the SERIALST list I have asked you to subscribe to, I recommend that you bookmark titles like the Journal of Electronic Resources Librarianship, Against the Grain, and the Journal of Electronic Resources in Medical Libraries.

Our Readings

Now let's start with Murdock's (2010) overview of the electronic resource librarian's position in the first decade of this century, and then proceed to Hartnett's (2014) work, which describes the electronic resource librarian's duties around ten years ago. NASIG's core competencies were also published around ten years ago and have received only minor revisions since then. We conclude by considering the current job advertisements listed above and elsewhere. Our goal in this section is to get a sense of the electronic resource librarian's job duties and available technologies from the earlier part of this century to now: to understand where the position was, how it has evolved, and where it might be headed. Overall, you will get a sense of just how much this position has changed in the intervening years, what I refer to as constant disruption in the next section.

One of the useful aspects of Murdock's (2010) article is section 4.3, which lists a 'timeline of commercial ERM developments and standards.' These technologies and standards continue today and have, in hindsight, strongly shaped how electronic resources currently work. The timeline includes what's now referred to as the A-Z list of serials, which started in 2000; OpenURL linking technology from 2001; the combining of integrated library systems (ILS) and electronic resource management systems (ERMS) in 2004; federated searching from 2005; the SUSHI usage statistics protocol from 2006; and SERU from 2007. We will explore each of these topics in future sections.
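To make the SUSHI entry on this timeline concrete: a COUNTER/SUSHI server returns usage reports as JSON, which a library can harvest and tally programmatically rather than downloading spreadsheets by hand. Below is a sketch that sums one metric from a trimmed-down COUNTER 5 Title Report; the titles and counts are invented for illustration, and a real harvest would first fetch this JSON from the vendor's SUSHI endpoint:

```python
import json

# A trimmed-down COUNTER 5 Title Report (TR) payload of the kind a
# SUSHI endpoint returns; the titles and counts here are invented.
sample = json.loads("""
{
  "Report_Header": {"Report_ID": "TR", "Release": 5},
  "Report_Items": [
    {"Title": "Journal A",
     "Performance": [
       {"Period": {"Begin_Date": "2023-01-01", "End_Date": "2023-01-31"},
        "Instance": [{"Metric_Type": "Total_Item_Requests", "Count": 12}]}]},
    {"Title": "Journal B",
     "Performance": [
       {"Period": {"Begin_Date": "2023-01-01", "End_Date": "2023-01-31"},
        "Instance": [{"Metric_Type": "Total_Item_Requests", "Count": 7},
                     {"Metric_Type": "Unique_Item_Requests", "Count": 5}]}]}
  ]
}
""")

def total_requests(report, metric="Total_Item_Requests"):
    """Sum one metric across all titles and periods in a COUNTER TR report."""
    total = 0
    for item in report["Report_Items"]:
        for perf in item["Performance"]:
            for inst in perf["Instance"]:
                if inst["Metric_Type"] == metric:
                    total += inst["Count"]
    return total

print(total_requests(sample))  # → 19
```

Tallies like this feed directly into the usage-assessment duties listed in the job advertisements above.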

Murdock's (2010) analysis shows that some duties started to wane in the early years. Website maintenance and deployment (see Fig. 13), for example, dropped off the radar completely. This was likely due to the development of content management systems, i.e., turnkey website solutions, which are still in use today. For example, I remember Joomla and Drupal, both content management systems that can serve as library websites, taking off around 2007 to 2009, around the time Murdock shows this area of activity declining.

One of Murdock's (2010) conclusions is that employers sought "to hire those who are able to perform traditional librarian duties, such as reference and instructional service, in addition to e-resource specific tasks" (p. 37). Based on what we see in job advertisements today, this seems much less the case, as electronic resource librarians have become more specialized, as the position has divided into two areas (the technical services aspect and the collection development aspect), and as electronic resources have grown more dominant in the intervening years. ER librarians still liaise with their communities, but in different ways (i.e., not as reference librarians).

A big change since 2012, when Hartnett's (2014) data end, is that more technology has moved to the cloud. This means we rely less on onsite servers managed by library IT (or other IT). Switching from local to cloud-based IT infrastructure requires different types of technological skills and suggests that conceptual knowledge is paramount. (This is my opinion, but it's based on years of doing and teaching systems administration work.) For example, it's more important to have a conceptual understanding of how metadata works than to know how to use some particular piece of software for managing metadata (although conceptual and practical knowledge cannot be so easily divorced from each other). Metadata standards and schemes change far less often than the software used to enter or administer metadata.

Cloud-based solutions have changed the field in other ways. Hosted software is leased rather than purchased, and leasing involves outside vendors who must have the technological skill sets and resources to manage and provide the technology. We therefore rely more heavily, in very important ways, on other actors in the publishing and e-resource provider ecosystem. That involves a lot of trust, and it involves more negotiation between librarians and vendors and the ability to collaborate and develop good, ethical relationships. As more software and data are hosted in the cloud, the work that may increase is the kind that requires strong communication and negotiating skills and knowledge of how licensing works, how copyright and contract law work, and how electronic resource collections work and interoperate across platforms. Again, though, this can be complicated. A couple of years ago, a librarian asked on the SERIALST list whether librarians have retained the power to negotiate and sign contracts. The responses were mixed: some librarians had that jurisdiction, but many had lost it. Even under this scenario, though, technological understanding is necessary in order to negotiate the best deal for a library's stakeholders and to acquire the best and most seamless product for library users.

Even though IT continues to be outsourced, this doesn't mean we can become lax in our understanding of how the technology works, just as we can't become lax in our understanding of how librarianship works even though librarians answer fewer reference questions than in previous years (i.e., you still need to know how to respond systematically and thoroughly to research and reference questions). What I mean is that, in order to communicate well, to negotiate well, and to sign licenses that benefit our communities, it helps to understand and be adept with the technology so that we are not bamboozled in those negotiations. Also, if something goes wrong, e.g., with the link resolver, we have to be able to identify the problem: that is, whether the issue lies with the link resolver and not with something else, like the OPAC.


I want you to think about these job advertisement studies in relation to what you are learning about electronic resources as well as in relation to the kinds of advertisements you've seen since whenever you started paying attention to them, like those outlined at the beginning of this section. In essence, think about where you see yourself in these advertisements and how they impact you.

Although most of you may not have had a chance to learn electronic resource back ends, that doesn't disqualify you from starting to reflect on this part of librarianship. As users of these technologies, you have already gathered enough abstract and practical knowledge to begin.

Questions for Discussion

We'll soon move away from reflective questions and get our hands dirty with specific technologies, licensing, etc., but for now let's reflect on the following questions:

  • Where do you think you stand in comparison to ERM job ads and to NASIG Core Competencies?
    • Where are you strong?
    • Where would you like to improve?
  • What can you (and we, as a course community) do to help each of you get there?
    • What's your path?
    • What can you practice?
  • Do a job search for electronic resource management. How do the job ads align with how the position is described in the readings?

Readings / References

Hartnett, E. (2014). NASIG’s Core Competencies for Electronic Resources Librarians Revisited: An Analysis of Job Advertisement Trends, 2000–2012. The Journal of Academic Librarianship, 40(3), 247–258. https://doi.org/10.1016/j.acalib.2014.03.013

Murdock, D. (2010). Relevance of electronic resource management systems to hiring practices for electronic resources personnel. Library Collections, Acquisitions, and Technical Services, 34(1), 25–42. https://doi.org/10.1016/j.lcats.2009.11.001

NASIG Core Competencies Task Force. (2021, April 5). NASIG Core Competencies for E-Resources Librarians. https://nasig.org/Competencies-Eresources

Constant Disruption


I think we might conclude at this point that there are many different ways to frame the role of the electronic resource librarian. In The ERM Librarian section, Stachokas (2018) showed how the electronic resource librarian works across technical services and collection development and how this requires a holistic view as well as a specialized understanding of the various processes involved. Hulseberg (2016) illustrated how framing the electronic resource librarian as a technical communicator yields important insights into the work and the profession. Zhu (2016) used the licensing aspect of electronic resource management to show how central this activity is to the field's identity.

In the Desperately Seeking an ERM Librarian section, we reviewed a list of qualifications from current job advertisements for electronic resource librarian positions. From Murdock (2010), we learned how these kinds of qualifications are tied to various technological developments. As the technology changes, and it changes a lot (see the aside below), so do the qualifications. One of the big takeaways from Hartnett's (2014) article, for me at least, is that conceptual knowledge of the relevant technology is more important than practical knowledge, although the two are not always easily divorced from each other.

Aside: See The Library Technology Guides' page on The History of Mergers and Acquisitions in the Library Technology Industry to get a sense of how much change has taken place in the last 40+ years.

Based on the readings so far, I think it's safe to say that the work of electronic resource librarianship is one of constant disruption. By that I mean that it might be more difficult to find a thread of continuity in this role, from its early days to now, than it would be in other areas of librarianship. That might be part of what makes this area so interesting, but it does present some challenges.


The first listed reading we have this week is by Marshall Breeding. Astute observers will note Breeding is one of the first cited authors in the two additional readings we have this week. We will read more from Breeding later, but I bring him up now because he oversees a website titled Library Technology Guides. If you would like to keep abreast of the recent news on the electronic resource industry, Breeding's website should be at the top of your list.

Breeding's What is ERM? article is a good one to start the week. He provides an outline of the various components of electronic resources, and he provides some historical context for those components.

The Focus on Academic Libraries is Misleading

One caveat, though: while all the articles on our list this week focus on academic libraries, the terms, concepts, and processes they describe are relevant to other areas of librarianship, such as public librarianship and school media librarianship. Differences in processes and in some details will arise from organizational or other contextual differences among these library types. Organizationally, for example, public libraries are connected to municipalities, county governments, state libraries, and public library consortia. This presents unique organizational challenges, and it means that municipal, county, or state laws will define how some processes must be handled and who must handle them. The same is true for public schools, where school boards and school districts will likely be involved (see the aside below).

Contextual differences that shape electronic resource management include user communities. An academic library serves its students, faculty, and staff, and perhaps the public to some degree, while a public library serves its local community. Such differences shape ERM workflows and other choices, like which vendors and publishers to work with. For example, academic libraries provide more scholarly sources, and public libraries provide more ebooks and audiobooks and fewer e-journals. This means you might find services like OverDrive/Libby, Epic, and Hoopla offered by public libraries but not academic ones, and you will rarely find more advanced scholarly resources in public libraries. These differences in needs, arising from different communities, shift the emphasis on some aspects of the ERM workflow.

Aside: As an example, see the impact that current book bans are having on ebook providers, and note all the parties involved in these situations, including superintendents, county officials, county school systems, and school district employees (Ingram, 2022).

In a newer development, ChatGPT is being used to help Iowa school districts satisfy legal requirements (Opsahl, 2023) to remove books that contain any sexual content from school libraries (Paul, 2023). There's a worthwhile response to criticisms of this approach, and of the overall law, in a recent New York Times article (Exman, 2023).

Another reason our reading lists lean toward the academic setting is not that electronic resource management is irrelevant to public or other types of libraries; it is that academic librarians publish much more about electronic resources than public librarians do. For example, I conducted a search in LISTA with the following query, which returned just three results when I first ran it in 2021, four results in August 2022, and four results again in August 2023. In fact, no new article had been published since the 2021 query; rather, LISTA had simply added an additional item from 2015.

(DE "ELECTRONIC information resources management")  AND  (DE "PUBLIC libraries")

It does not get much better if I expand the query to include some additional thesaurus terms. This query returns only six results in 2021, seven in 2022, and eight in 2023:

(DE "PUBLIC librarians" OR DE "PUBLIC librarianship" OR DE "PUBLIC libraries") AND  (DE "ELECTRONIC resource librarians" OR  DE "ELECTRONIC information resources management")

However, if I focus that query on academic libraries, then the results increase substantially. The following query returns 51 hits in 2021, 57 in 2022, and 61 in 2023:

(DE "ELECTRONIC information resources management")  AND  (DE "ACADEMIC libraries")

And this query returns 82 results in 2021, 90 in 2022, and 97 in 2023:

(DE "ACADEMIC librarians" OR DE "ACADEMIC librarianship" OR DE "ACADEMIC libraries") AND  (DE "ELECTRONIC resource librarians" OR  DE "ELECTRONIC information resources management")

We could continue to explore LISTA or other databases for relevant material on ERM and public libraries, and we would find more. For example, more results are retrieved when I attach terms like e-resources, integrated library systems, discovery platforms, or ebooks to a public library search in LISTA. But the results are nearly always far fewer than those for academic library searches. As another example, my local library system, the Lexington Public Library, uses the CARL integrated library system, but there do not appear to be any articles in LISTA about it since 2015.

Anyway, you get the picture. There simply isn't a lot of material on electronic resource management from the public library perspective, and a quick search in the school media sphere mirrors this issue. The following search returned zero results in 2021, 2022, and 2023:

(DE "LIBRARY media specialists")  AND  (DE "ELECTRONIC information resources management")

If you go into public librarianship or school media librarianship, I'd encourage you to publish on electronic resources. It would greatly benefit your peers and those of us who teach courses like this.

Back to Breeding. I understand that some of the terms Breeding uses and the technologies he describes might still be new to us, so let me spend some time providing additional background information and highlighting some things to look for in these three articles.

Librarians started to migrate to electronic resource management in the 1970s. Breeding mentions this, but the seeds were planted well before this. Nearly a decade ago, I published a paper that provides a historical account of the first library automation project, which took place in the 1930s with Hollerith punched cards. By the 1960s, the primary use of computers was to manage circulation, and in the late 60s and early 70s, library automation focused on managing patron records. In the early 1970s, tools became available to manage and search bibliographic records. Hence, computers were first used mostly to manage the circulation of books, then patron records, which allowed patrons to check out works electronically, and then we had the ability to search for works. If a work of interest was located using these tools, the work could be retrieved from the shelves or ordered via interlibrary loan by snail mail. Full text search came much later with the introduction of better storage media, like CD-ROMs, and saw major growth with the introduction of the internet to more institutions in the 1980s and the web in the early 1990s, which at its heart is nothing more than a big document retrieval system.

In the process of migrating from print to electronic, all sorts of things had to change, but all that change rests on the major premise of librarianship: to provide access by organizing information in order to retrieve information. Although you may have often heard that libraries are ancient entities, libraries and librarianship as we understand them today did not modernize until the late 1800s, and more so starting in the 1920s and 1930s. It was during this later time frame that some in the profession began to home in on the major complexities and challenges, social and technological, involved in organizing and retrieving information in order to provide access. The challenges with organizing and retrieving information that they identified nearly 100 years ago were indeed major and problematic, but fortuitous, because they gave rise to what we now call library science, the rigorous study of libraries, librarianship, collections, users, communities, and so forth.

Yet consider that when those people laid the groundwork for a library science nearly 100 years ago, librarians managed only print, and the primary means of accessing print collections was through a physical building. With the introduction of computer systems in the 1960s and better networking technologies in the 1980s and 1990s, issues with organizing and retrieving information grew exponentially, and indeed, this exponential increase created new complexities and launched an entirely new field, what we now call information science.

All right, back to the ground level. Let me highlight some key terms in Breeding's article. They include:

  • Finding aids
  • Knowledge bases
  • OpenURL link resolvers
  • ERM systems
  • Library service platforms (LSP)
  • Integrated library systems (ILS)

Many, though not all, of the above terms are defined in ODLIS: Online Dictionary for Library and Information Science.
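One of these terms can be made concrete right away. An OpenURL is a standardized URL that packages a citation's metadata so a link resolver can route a user to an appropriate copy of the item. The sketch below builds one in key-encoded-value form per the OpenURL standard (ANSI/NISO Z39.88); the resolver base URL (`resolver.example.edu`) and the `make_openurl` helper are hypothetical illustrations, not any vendor's actual API.

```python
from urllib.parse import urlencode

# Hypothetical base URL for a campus link resolver (an assumption for illustration).
BASE = "https://resolver.example.edu/openurl"

def make_openurl(metadata: dict) -> str:
    """Build an OpenURL 1.0 query (ANSI/NISO Z39.88) in key-encoded-value form."""
    params = {"url_ver": "Z39.88-2004",
              "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal"}
    # The "rft." prefix marks each key as describing the referent (the item sought).
    params.update({f"rft.{k}": v for k, v in metadata.items()})
    return BASE + "?" + urlencode(params)

# A citation for the Cote and Ostergaard (2017) article from this week's readings.
url = make_openurl({"genre": "article", "issn": "0361-526X",
                    "volume": "72", "spage": "223", "date": "2017"})
```

When a user clicks a "Find it" button in a database, the database generates a URL shaped like this, and the resolver consults its knowledge base to decide where the library actually has access.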

Unless you already have some solid experience with these things, and even after reading Breeding's article, these terms may still be abstract. So, this week, you have two major tasks:

  1. Find real, practical examples. Pick one or two of the above terms and see how they work in practice. Then come back here and tell us what you found. Use the articles to help you locate actual products or examples. See Breeding's Library Technology Guides for reference.
  2. Locate how these terms appear in either the Cote and Ostergaard (2017) article or the Fu and Carmen (2015) article. Note other terms that may appear in those two articles and comment on the role they play in the ERM workflow and the migration process.


As you work on this task, I ask that you pay attention to the emphasis on workflow. The idea of a workflow is a major theme in this course because it is a major part of electronic resource management, and we'll come back to the idea throughout this work. Also, as we read the Cote and Ostergaard (2017) and Fu and Carmen (2015) articles, we will learn how migrating to new systems is a major, expensive, and time-consuming project, and one of the great things about these two articles is that they document the workflows used in these migrations. If you become involved some day in a migration process, you should use articles like these to assist you and to help you make evidence-based decisions about what you need to accomplish. And, like these authors, I would encourage you to document and publish what you learn.

In the print era, there were some cases where librarians had to migrate to new systems, too. For example, some research libraries in the U.S. started by classifying collections using the Dewey Decimal Classification system but began to convert to the Library of Congress Classification system after the mid-20th century. This was no small task. Today, migration is big business in the library world (see Breeding's site for other examples) because the technology changes fast and because there are a number of competing electronic resource management products that librarians can choose for their communities, which they might be inclined to do if a migration provides an advantage to their users, their communities, and themselves.

Readings / References

Breeding, M. (2018). What is ERM? Electronic resource management strategies in academic libraries. Computers in Libraries, 38 (3). Retrieved from https://www.infotoday.com/cilmag/apr18/Breeding--What-is-ERM.shtml

Cote, C., & Ostergaard, K. (2017). Master of “Complex and Ambiguous Phenomena”: The ERL’s Role in a Library Service Platform Migration. Serials Librarian, 72(1–4), 223–229. https://doi.org/10.1080/0361526X.2017.1285128

Fu, P., & Carmen, J. (2015). Migration to Alma/Primo: A Case Study of Central Washington University. Chinese Librarianship: an International Electronic Journal. https://digitalcommons.cwu.edu/libraryfac/30/

Additional References

Burns, C. S. (2014). Academic libraries and automation: A historical reflection on Ralph Halstead Parker. Portal: Libraries and the Academy, 14(1), 87–102. https://doi.org/10.1353/pla.2013.0051

Exman, B. (2023, September 1). Opinion | A Word on Censorship From the Book-Banning Monster of Iowa. The New York Times. https://www.nytimes.com/2023/09/01/opinion/book-ban-schools-iowa.html

Ingram, D. (2022, May 12). Some parents now want e-reader apps banned - and they're getting results. NBC News. https://www.nbcnews.com/tech/tech-news/library-apps-book-ban-schools-conservative-parents-rcna26103

Opsahl, R. (2023, May 26). Governor signs education bills, including ban on school books depicting sex. Iowa Capital Dispatch. https://iowacapitaldispatch.com/2023/05/26/governor-signs-education-bills-including-ban-on-school-books-depicting-sex/

Paul, A. (2023, August 14). School district uses ChatGPT to help ban library books. Popular Science. https://www.popsci.com/technology/iowa-chatgpt-book-ban/

Chapter Two: Technologies and Standards

This chapter offers an introductory examination of key aspects of electronic resource management (ERM) and library technology. Spanning four sections, the chapter explores:

  1. The intricate relationship between electronic resource management systems and integrated library systems. These systems are essential for the organization and distribution of digital content;
  2. The vital role of standards in ERM, which ensures uniformity and efficiency across electronic resources and workflows;
  3. Insights into data and software interoperability, allowing for seamless interaction between diverse systems and platforms; and
  4. A detailed analysis of electronic access and authentication, safeguarding and controlling access to digital resources.

Together, these sections provide both aspiring information professionals and seasoned librarians with a well-rounded understanding of the technological and methodological considerations that shape modern librarianship.



This week we learn about ERM and ILS software. What are these?

ILS, the Integrated Library System

ILS is an acronym for integrated library system. We were introduced to the newer term library services platform (LSP) in the previous section. Although the two differ in many ways (Breeding, 2015; Breeding, 2020), our discussions of the ILS this week are also relevant to the LSP, the latter of which is becoming more common these days.

The differences between the ILS and the LSP are both large and small. The main idea behind them is the same in the sense that both are "used by librarians to manage their internal work and external services," such as "acquiring and describing collection resources, making those resources available to their users through appropriate channels, and other areas of their [resource management] operations" (Breeding, 2020, para. 1).


In order to provide the above functions, the ILS/LSP has an administrative interface that librarians use to manage their resources. The interface contains a set of modules that are common among most software solutions, although they may be named variously.

In the above list, I've linked to documentation on modules provided by the Evergreen open source ILS. LibLime's Bibliovation LSP offers comparably named modules for discovery, circulation, cataloging, serials, acquisitions, and systems administration. Other ILS/LSP solutions may offer specific modules dedicated to other items in the list, or those functions might be integrated into one of the above modules. Alternatively, new modules appear in LSPs that take advantage of special LSP abilities and digital assets. For example, the Alma LSP provides modules dedicated to acquisitions, resources, discovery (via Primo), fulfillment, administration, and analytics. This is part of what differentiates an LSP from an ILS. Please take a moment to read about these modules in the documentation for Evergreen, which is an ILS, and visit the Alma and LibLime links to learn more about their specifically LSP products.

User Interface

Each of you is familiar with an ILS/LSP from a user perspective, and some of you are familiar with these systems from a librarian perspective. In your lifetimes, you have used OPACs (online public access catalogs) or discovery systems like InfoKat, which uses Alma's Primo discovery system; you have likely conducted a search for a serial; and you have most definitely borrowed a book from a library. The ILS/LSP makes these end user functions possible.

Until fairly recently, the OPAC was the primary way to locate and access items in library collections. In LSPs, however, the OPAC has evolved into a discovery system, depending on what it searches and how, among other factors. The Encyclopedia of Knowledge Organization describes the differences as follows:

OPACs replicated and extended the functionality of the card catalogues they largely replaced in providing a finding aid to the books, journals, audiovisual material and other holdings of a particular library. The term discovery system has come into use in the early Twenty-first century to describe public-facing electronic catalogues which use the technology of the Internet search engines to expand the scope of the OPAC to include not only library-held content, including entries for journal articles and book chapters that were not typically part of traditional library catalogues, but also material held elsewhere which may be of interest to clients (Wells, 2021).

In other words, OPACs generally searched against pre-defined fields recorded in MARC, such as author, title, and subject, and searched collections, at first print but later electronic, held by the library. A discovery system can search additional text, if available, and can more easily link to items not in the library collection but which can be acquired through interlibrary loan. A discovery system can also integrate with bibliographic databases and return results indexed by those databases. This saves the user from having to know about specific topical databases. For example, UK Libraries provides access to over 700 databases, and having a discovery system that can access those is beneficial. However, none of the above means that a discovery system like InfoKat is aware of the totality of a library's collections. (And it's not always clear what's left out.)
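To make the distinction concrete, here is a toy sketch (my own illustration, not any real catalog's data model or API) contrasting an OPAC-style fielded search with a discovery-style search across fuller text:

```python
# Two invented records with MARC-like fields plus fuller text (an abstract).
records = [
    {"author": "Salmon, Stephen R.", "title": "Library automation systems",
     "subjects": ["Libraries -- Automation"],
     "abstract": "Surveys early computer-based circulation and cataloging."},
    {"author": "Wells, David", "title": "Online public access catalogues",
     "subjects": ["Online library catalogs"],
     "abstract": "Compares card catalogues, OPACs, and discovery systems."},
]

def opac_search(field: str, term: str) -> list:
    """OPAC-style: match only within one pre-defined field (author, title, ...)."""
    return [r for r in records if term.lower() in str(r.get(field, "")).lower()]

def discovery_search(term: str) -> list:
    """Discovery-style: match anywhere in the record, including fuller text."""
    return [r for r in records
            if term.lower() in " ".join(str(v) for v in r.values()).lower()]
```

Because "circulation" appears only in an abstract, a title-only OPAC search misses the first record, while the discovery-style search finds it. Scale that difference up to millions of records and full-text article indexes, and you have the practical gap between the two systems.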

In Totality

These administrative and end user interfaces make up the totality of the ILS/LSP software. In short:

  • an administrative interface is used by librarians to manage tasks provided through modules.
  • a public interface, such as an OPAC or discovery system, is used by librarians and patrons to access the library's collections.

An ILS/LSP is therefore, as Stephen Salmon stated in 1975, a "non-traditional" way of doing traditional things, such as "acquisitions, cataloging, and circulation," though it has now become fairly routine!

Electronic resource librarians might work extensively with specific resources or modules in order to administer the library's digital assets (e.g., contracts, etc.), but all librarians use one or more of the ILS/LSP modules. For example, when I worked in reference at a small academic library, I used the Millennium ILS to check out books to users, to fix borrowing issues, and to search for works in the OPAC. Later, when I moved to technical services, I primarily used the cataloging module. What a librarian uses frequently depends on the organizational structure of a library. As our reading by Miller, Sharp, and Jones (2014) shows, the rise in electronic resources has vastly influenced the ways librarians structure their organizations, which were originally informed by the dictates of a "print-based world".

To learn more about the ILS and the current iterations that we now call LSPs, see the links in the text above.

ERMS, the Electronic Resource Management System

ERMS is an acronym for electronic resource management system. Its function is born from the need to manage a library's digital assets, for example, the licenses that a library has signed. In order to manage assets like licenses, the ERMS can keep track of the signatories, the terms of the licenses/contracts, specific documents related to these processes, and more. An ERMS may or may not be integrated with a library's ILS software, but it is most likely integrated in an LSP solution. Alma, for example, is an LSP that also provides electronic resource management. In short, a library that uses an ILS may also use a separate ERMS product, but many LSPs have built-in ERMS functionality, thereby not requiring a second product.

Like the ILS/LSP, ERM software is generally divided into modules that focus the librarian's work on particular duties and allow librarians to create workflows and knowledge management systems. In an ERMS like the open source CORAL system, the modules include:

  • Resources: a module "provides a robust database for tracking data related to your organization's resources ..." and "provides a customizable workflow tool that can be used to track, assign, and complete workflow tasks."
  • Licensing: a module for a "flexible document management system" that provides options to manage licensing agreements and to automate parts of the process.
  • Organizations: this module acts as a type of advanced directory to manage the various organizations that impact or are involved in the management of electronic resources, including "publishers, vendors, consortia, and more."
  • Usage Statistics: a module providing librarians with usage statistics of digital assets by platform and by publisher. Supports COUNTER and SUSHI. We'll cover COUNTER and SUSHI later in the semester, but as a preamble:
    • COUNTER "sets and maintains the standard known as the Code of Practice and ensures that publishers and vendors submit annually to a rigorous independent audit", and,
    • SUSHI is a type of protocol to automate collecting data on usage statistics.
  • Management: this module provides a document management system aimed at "storing documents, such as policies, processes, and procedures, related to the overall management of electronic resources".
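The Usage Statistics module above mentions COUNTER and SUSHI. As a rough preview of how SUSHI automates this work, the sketch below shows the shape of a COUNTER Release 5 SUSHI exchange: building the GET URL for a usage report and totaling one metric from the JSON a server returns. The base URL, customer ID, and sample response are invented placeholders for illustration; real endpoints and credentials come from each vendor or platform.

```python
from urllib.parse import urlencode

def sushi_report_url(base: str, report: str, customer_id: str,
                     begin: str, end: str) -> str:
    """Build the GET URL for a COUNTER R5 report, e.g. tr_j1 (journal requests)."""
    query = urlencode({"customer_id": customer_id,
                       "begin_date": begin, "end_date": end})
    return f"{base}/reports/{report}?{query}"

def total_item_requests(report_json: dict) -> int:
    """Sum the Total_Item_Requests metric across a COUNTER R5 report's items."""
    total = 0
    for item in report_json.get("Report_Items", []):
        for perf in item.get("Performance", []):
            for inst in perf.get("Instance", []):
                if inst["Metric_Type"] == "Total_Item_Requests":
                    total += inst["Count"]
    return total

# Placeholder endpoint and credentials for illustration only.
url = sushi_report_url("https://sushi.example.com/r5", "tr_j1",
                       "demo", "2023-01-01", "2023-12-31")

# A tiny hand-made response in the R5 JSON shape (not real vendor data).
sample = {"Report_Items": [{"Performance": [{"Instance": [
    {"Metric_Type": "Total_Item_Requests", "Count": 42},
    {"Metric_Type": "Unique_Item_Requests", "Count": 30}]}]}]}
```

An ERMS with SUSHI support issues requests like this on a schedule, so librarians no longer download COUNTER spreadsheets from each vendor portal by hand.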


In our readings this week, we have three articles that speak to ILS/LSP and ERM software solutions and the relationship between the two.

As Fournie (2020) notes, the electronic resource market is consolidating into a few heavyweights, but this trend does not have to force libraries into solutions that lead to vendor lock-in or acceptance of walled gardens. In the process, Fournie (2020) describes two ERM solutions: CORAL and FOLIO. The author's descriptions are helpful in understanding what these two software solutions are capable of providing.

The readings by Miller, Sharp, and Jones (2014) and Bahnmaier, Sherfey, and Hatfield (2020) provide some context about how these technologies impact librarianship. Miller et al. (2014) describe a case study (the literature review is also helpful) that shows how electronic resources have impacted organizational structure, job titles, budgets, and more. Likewise, Bahnmaier et al. (2020) discuss aspects of this as well as reflect on various changes in library staffing and how these raise the importance of the library-vendor relationship.


With that background in mind, in this week's forum, I'll introduce you to several of these systems.

We will see what services and modules they provide, and how they function. Be sure to visit the links on this page, especially any documentation. I'll ask that you log into the relevant services, test the demo sites, or watch the demo videos. This will help you get some hands-on experience with them and also demystify what each does.


In prior semesters, we read articles by Wang and Dawes (2012) and Wilson (2011). I replaced those readings for the Fall 2022 semester, but for those interested, I briefly describe them below.

In the article by Wang and Dawes (2012), the authors describe the "next generation integrated library system", which should meet a few criteria, including the ability to merge ILS software with ERM software. ERM software solutions exist because integrated library systems (ILS) failed to include functions to manage digital assets. Basically, the ILS was still behaving with a print mindset, so to speak, and was growing stagnant. Around the time the article was published, more ILS and ERM software began moving to the cloud, as was common across many software markets. This changed the game because it placed a bigger burden on software companies to maintain their software. Based on demand and need, the LSP was created as a next-generation ILS that included ERM functionality. So even though the LSP might replace the ILS/ERM combination someday, it could be that we'll live in a dual world where some libraries use an LSP and some use the ILS/ERM combination.

Despite the technical aspects of these solutions, at their most basic, both ILS/LSP and ERM software solutions focus on managing assets (books, serials, realia, etc.) so that librarians can organize those assets and users and librarians can retrieve them. There's no requirement to use a solution offered by a library vendor, and that's the point of the Wilson (2011) article, which shows how regular software can be used as a homegrown solution for creating and implementing an ERM workflow.

Readings / References

Bahnmaier, S., Sherfey, W., & Hatfield, M. (2020). Getting more bang for your buck: Working with your vendor in the age of the shrinking staff. The Serials Librarian, 78(1–4), 228–233. https://doi.org/10.1080/0361526X.2020.1717032

Fournie, J. (2020). Managing electronic resources without buying into the library vendor singularity. The Code4Lib Journal, 47. https://journal.code4lib.org/articles/14955

Miller, L. N., Sharp, D., & Jones, W. (2014). 70% and climbing: E-resources, books, and library restructuring. Collection Management, 39(2–3), 110–126. https://doi.org/10.1080/01462679.2014.901200

Additional References

Anderson, E. K. (2014). Chapter 4: Electronic Resource Management Systems and Related Products. Library Technology Reports, 50(3), 30–42. https://journals.ala.org/index.php/ltr/article/view/4491

Breeding, M. (2015). Library Technology Reports, 51(4). Chapters 1-5. https://journals.ala.org/index.php/ltr/issue/view/509

Breeding, M. (2020). Smart libraries Q&A: Differences between ILS and LSP. Smart Libraries Newsletter, 40(10), 3–4. https://librarytechnology.org/document/25609

Hosburgh, N. (2016). Approaching discovery as part of a library service platform. In K. Varnum (Ed.), Exploring Discovery: The Front Door to your Library’s Licensed and Digitized Content. (pp. 15-25). Chicago, IL: ALA Editions. https://scholarship.rollins.edu/as_facpub/138/

Salmon, S. R. (1975). Library automation systems. New York: Marcel Dekker.

Wang, Y., & Dawes, T. A. (2012). The Next generation integrated library system: A promise fulfilled? Information Technology and Libraries, 31(3), 76–84. https://doi.org/10.6017/ital.v31i3.1914

Wells, D. (2021). Online public access catalogues and library discovery systems. In B. Hjørland & C. Gnoli (Eds.), Encyclopedia of Knowledge Organization (Vol. 48, pp. 457–466). https://www.isko.org/cyclo/opac

Wilson, K. (2011). Beyond library software: New tools for electronic resources management. Serials Review, 37(4), 294–304. https://doi.org/10.1080/00987913.2011.10765404

Standardizing Terms for Electronic Resource Management


A while ago now, I conducted some historical research on a librarian named Ralph Parker. Inspired by technological advances in automation, specifically the use of punched cards and machines, Parker began to apply this technology to library circulation processes in the 1930s and thus became the first person to automate part of the library's workflow. By the mid-1960s, Parker's decades-long pursuit of library automation had led to some major advances, including the founding of OCLC. Meanwhile, the punched card system he continued to develop eventually led to massive increases in circulation and better service to patrons. In the mid-60s, he wrote the following about the installation and launch of a new punched card system to help automate circulation:

To the delight of the patrons it requires only four seconds to check out materials (as cited in Burns, 2014).

I think about that quote often. When I read it in his annual report in the archives at the University of Missouri, I could feel his giddiness with these results. Until this achievement, when a patron borrowed an item from the library, the process involved completing multiple forms in order to be sure that accurate records were kept. Accurate record keeping is important: libraries need to protect their collections but also provide access to them. As stated in Flexner (1927):

it is necessary that the library have control of these circulating books in several ways. It [the library] must know where they are, it must lay down rules to see that thoughtless people do not retain the books in their possession unfairly, and it must provide means for securing their prompt return. These and many other considerations combine to make it necessary for the [ circulation ] department to install and maintain very efficient methods to control the circulation of books, which are commonly known as routines (p. 6).

What were those routines in the 1930s and thereabouts? Why was Parker so excited about his system taking only four seconds to check out a work? Well, two routines are important for circulation. The first involves membership and the second involves charging, or checking out, works.


First, if the patron was not yet a member of the library, then they had to register to become one; hence, the first routine was to check their membership and register them as borrowers if they were not yet a member or if their membership had expired. In a public library, the process might vary a bit depending on whether the member was an adult or a youth (or a juvenile, in the lingo of the time). Regardless, this routine basically involved completing an application card, creating a member record and filing it away for the library's use, and then giving the borrower a card of their own, i.e., their borrower's card.


Once membership status was confirmed or created, then the circulation librarian employed a system to charge books to the borrower. Several systems had been employed up through the late 1920s, including the ledger system, the dummy system, the temporary slip system, the permanent slip or card system, the Browne system, and eventually the Newark charging system (see Flexner, 1927, pp. 73-82 for details). Assuming the librarian in the 1930s used the Newark system, when a book was to be checked out, the librarian needed to enter the details on a "book card, a date slip and a book pocket for each book" (Flexner, 1927, p. 78). Flexner goes on to outline the process:

The date slip is pasted opposite the pocket at the back of the book. The date which indicates when the book is due to be returned or when issued is stamped on each of three records, the reader's card, the book card and the date slip. The borrower's number is copied opposite the date on the book card. The date on the date slip indicates at once the file in which the book card is to be found, and the [librarian] assistant is able to discharge the book and release the borrower immediately on the return of the volume (Flexner, 1927, pp. 78-79).

In essence, charging books or works to patrons involved a lot of paperwork, and you can imagine that it was prone to error. However, the number of systems at the time and the discussions and debates around them show that these processes and routines were steadily becoming standardized, and standardization is a necessary prerequisite to automation.

Parker's achievement in automation eventually improved the library experience for patrons as well as for the librarians at the circulation desk, and indirectly for their colleagues throughout the library. That is, once circulation standards stabilized, and once technology like punched cards became generally available, it became possible to automate this process for the library. And this was good: automation increased circulation, and an automated circulation process Saved The Time Of The Reader, down to four seconds to be exact!

This is all to say that standards and technology go hand in hand and that the details matter when thinking about standards. How does this relationship work? Standards enable multiple groups of competing interests to form consensus around how technology should work, and when this happens, multiple parties receive payoffs at the expense of any single party acquiring a monopoly. This is true for the design of screwdrivers, the width of railroad tracks, the temperature scale, and certainly also for how information is managed and exchanged. The internet and the web would not exist, or certainly not as we know them, if not for the standardization of the Internet Protocol (IP), the Transmission Control Protocol (TCP), the Hypertext Transfer Protocol (HTTP), and other internet and web related technologies that enable the internet and the web to work for so many users regardless of the operating system and hardware they use.


Our first article this week, by Harris (2006), covers the basic reasons for the existence of NISO (the National Information Standards Organization) and the kinds of standards NISO is responsible for maintaining and creating. These standards are directly related to libraries and fall under three broad categories: Information Creation & Curation, Information Discovery & Interchange, and Information Policy & Analysis. There are standards that touch on bibliographic information, indexing, abstracting, controlled vocabularies, and many other issues important to libraries. If you have not paid attention to NISO before, you might now start seeing more references to the organization and the standards it publishes, especially because the international library community has worked closely with NISO to develop standards for various aspects of library work.

Another historical note: as Harris (2006) elaborates in the article, NISO came into existence in the mid-1930s. This was about the same time that Ralph Parker began working on his punched card system. Not long before this, in the late 1920s, the first library science graduate program launched at the University of Chicago, and in the early 1930s, the first research-based journal, The Library Quarterly, started. We often hear how long libraries have existed, and it's true that there were quite a few accomplishments before the 1930s, but it is this time period (for these and a number of other reasons) that marks the modern era of libraries.

We are not simply interested in standardizing things like the forms used to catalog and charge a book, to create member records, or to draw up licenses for electronic resources, as we'll discuss later. We are also interested in standardizing, as Flexner (1927) would say, "routines", processes, or workflows. Thus, our additional readings are on TERMS: Techniques for Electronic Resource Management. TERMS is not a true standard but more of a de facto or proposed standard that helps outline the electronic resource management workflow. It was developed so that librarians and others dealing with electronic resources could come to a consensus on the processes of electronic resource management. Version 1 of TERMS is described by the TERMS authors in an issue of Library Technology Reports. Although it has been replaced by a newer version, it still functions as a thorough introduction to the ERM workflow and provides guidance and suggestions on all aspects of electronic resource management. For example, in chapter 7 of the report on TERMS version 1, the authors provide information on the importance of working with providers or vendors in case of cancellation of a resource. They write:

Do not burn any bridges! Many resources have postcancellation access, which means you need to keep up a working relationship with suppliers; this might also incur a platform access fee going forward, so this needs to be budgeted for in future years. Review the license to fully understand what your postcancellation rights to access may be. In addition, you may resubscribe to the resources in future years. Content is bought and sold by publishers and vendors. Therefore, you may end up back with your original vendor a year or two down the line!

Some of this material is repeated in version 2 of TERMS, but version 2 was created to address changes in the field and to include more input from the community. Version 2 also has a slightly modified outline, consisting of the following parts:

  1. Investigating new content for purchase or addition
  2. Acquiring new content
  3. Implementation
  4. Ongoing evaluation and access, and annual review
  5. Cancellation and replacement review
  6. Preservation

At the same link just provided, they also write about this new version:

In addition to the works mentioned or cited in the original TERMS report, much has been written in the past few years that can help the overwhelmed or incoming electronic resources librarian manage their daily workflow. In the end, however, most of the challenges facing the management of electronic resources is directly related to workflow management. How we manage these challenging or complex resources is more important than what we do, because how we do it informs how successful and how meaningful the work is, and how well it completes our goal of getting access to patrons who want to use these resources.

As such, the outline and content described in these two versions of TERMS are very much centered on the ERM workflow. TERMS is a guide and framework for thinking about the different aspects of the electronic resource life cycle within the library, and it helps provide librarians with a set of questions and points of investigation. For example, let's consider TERMS item 1, which is to investigate new content for purchase or addition. In a presentation, Emery and Stone (2014) suggest that this involves the following steps, partly paraphrased:

  • outline what you want to achieve
  • create a specification document
  • assemble the right team
  • review the market and literature and set up trial
  • speak with suppliers and vendors
  • make a decision (Emery & Stone, 2014, slide 12)

Emery and Stone (2014) provide other examples, and the TERMS listed in this slide are from the first version. TERMS no. 6, Preservation, was added in version 2, and TERMS nos. 4 and 5 from version 1 were joined together.


This week you have a two-part exercise:


Visit the NISO website and search for documentation on a standard, a recommended practice, or a technical report, and post about it. The differences among these publication types follow:

Technical reports:

NISO Technical Reports provide useful information about a particular topic, but do not make specific recommendations about practices to follow. They are thus "descriptive" rather than "prescriptive" in nature. Proposed standards that do not result in consensus may get published as technical reports.

Recommended Practices:

NISO Recommended Practices are "best practices" or "guidelines" for methods, materials, or practices in order to give guidance to the user. These documents usually represent a leading edge, exceptional model, or proven industry practice. All elements of Recommended Practices are discretionary and may be used as stated or modified by the user to meet specific needs.

Published and Approved NISO Standards:

These are the final, approved definitions that have been achieved by a consensus of the community.

See https://www.niso.org/niso-io/2014/03/state-standards for the descriptions.


After reading about TERMS, try to place TERMS in a broader electronic resource management context. Please draw from your experience using the ILS and ERM software, from the readings, from your personal work experience in a library (if you have that), or use your imagination. Specifically, it would be interesting if you could pick out aspects of systems like Coral or Folio that appear to facilitate standardized workflows.

Sources for NISO Tasks

Readings / References

Emery, J., & Stone, G. (2017, March 17). Announcing TERMS ver2.0. TERMS: Techniques for electronic resource management. https://library.hud.ac.uk/archive/projects/terms/announcing-terms-ver2-0/

Harris, P. (2006). Library-vendor relations in the world of information standards. Journal of Library Administration, 44(3–4), 127–136. https://doi.org/10.1300/J111v44n03_11

Heaton, R. (2020). Evaluation for evolution: Using the ERMI standards to validate an Airtable ERMS. The Serials Librarian, 79(1–2), 177–191. https://doi.org/10.1080/0361526X.2020.1831680

Hosburgh, N. (2014). Managing the electronic resources lifecycle: Creating a comprehensive checklist using techniques for electronic resource management (TERMS). The Serials Librarian, 66(1–4), 212–219. https://doi.org/10.1080/0361526X.2014.880028

Additional References

Burns, C. S. (2014). Academic libraries and automation: A historical reflection on Ralph Halstead Parker. Portal: Libraries and the Academy, 14(1), 87–102. https://doi.org/10.1353/pla.2013.0051, or: http://uknowledge.uky.edu/slis_facpub/6/

Breeding, M. (2015). Library Technology Reports, 51(4). Chapters 1-5. https://journals.ala.org/index.php/ltr/issue/view/509

Emery, J., & Stone, G. (2013). Library Technology Reports, 49(2). Chapters 1-8. https://journals.ala.org/index.php/ltr/issue/view/192

Emery, J., & Stone, G. (2014, July). Techniques for Electronic Resource Management (TERMS): From Coping to Best Practices [Conference]. 2014 AALL Annual Meeting and Conference, Henry B. Gonzalez Convention Center San Antonio, TX. http://eprints.hud.ac.uk/id/eprint/19420/

Flexner, J. M. (1927). Circulation Work in Public Libraries. American Library Association. https://hdl.handle.net/2027/mdp.39015027387052



Managing electronic resources in libraries involves a complex web of technologies and services, each with its own set of challenges. One such challenge is the intricacy of navigating paywalls to access a library's digital content. This section examines the complications that arise when accessing paywalled materials. We explore how technologies like OpenURL link resolvers can streamline this process and enhance interoperability between multiple services.


We take it for granted that we can seamlessly follow links to websites and webpages on the web, or at least do so without too much fuss. Things get more complicated when we want access to works that sit behind paywalls, regardless of where such works have been found: search engines, bibliographic databases, OPACs (online public access catalogs), or discovery services. In these cases, direct links to sources identified in these services do not always work.

The issue becomes complex when a library subscribes to a journal or magazine. Access is often provided through third-party services, not just the publisher's default site. Example third-party services include bibliographic databases, like EBSCOhost or ProQuest. Also, libraries provide multiple discovery points and multiple ways to access the same works, such as through bibliographic databases with overlapping scopes. Bibliographic databases can tell us that an item exists when we search for it, but a library may not subscribe to the publication, or the item might be in the stacks, stored off site, or held at another library altogether. These issues, along with the challenges presented by paywalls, add further layers, such as proxy servers for user authentication, that complicate access.

Let's consider an example. The journal Serials Librarian is published by Taylor & Francis Online / Routledge, and has the following site as its homepage:


The journal is indexed in EBSCOhost's Library, Information Science & Technology Abstracts (LISTA) database and in ProQuest's Social Science Premium Collection (SSPC) database, among other places (e.g., it can also be found in Google Scholar, Google Search, a library's discovery platform, and more). This means that an article like the following can show up based on a query on any of the above platforms, even if none of these search or discovery platforms provide full text access to the article:

Brown, D. (2021). "Through a glass, darkly": Lessons learned starting over as an electronic resources librarian. The Serials Librarian, 81(3–4), 246–252. https://doi.org/10.1080/0361526X.2021.2008581

All these access points are good for the user, but they present a technological problem, too. That problem is: how do I link to the main source?

One way to know whether our library provides access to the above source, and others like it, is through a link resolver. We see UK's link resolver in action when we see a View Now @ UK button or link. When we click that button or link in a database like the above-mentioned LISTA or SSPC, we trigger the link resolver, which routes us through the library's discovery service. In LISTA, that link looks like this:


In Social Science Premium Collection, the link looks like this:


Clicking on either of the above links in their respective databases sends us to Primo, UK Library's discovery layer.

If we click on EBSCOhost's View Now link, the resulting Primo link looks like the following:


Look closely at those links (scroll to the right to view their entirety), and you will see that the article's metadata is embedded in the URLs. Among other things, you can see the publication title, the article title, the author's name, the DOI, and more.

That metadata is used to trigger a search query in the library's discovery platform (at UK, that's InfoKat Discovery by Primo). Specifically, it initiates an HTTP GET request, which requests data from a resource/server, in this case InfoKat Discovery, using the metadata embedded in the URLs.

This is primarily the work of an OpenURL link resolver, which is designed to provide access to a target (the main content) regardless of the source (where the item was found) by initiating queries in an OPAC or discovery platform using the metadata embedded in a URL.

OpenURL is a technical solution to the paywall problem. It is designed to help users of electronic resources access a source in a library's collection based on a citation/record that the user discovered in a search result, an article's list of references, or wherever else the link resolver might show up. It is meant to work for all items in a library's collection, including print items, since print items have records in the catalog or discovery service, and those records have their own URLs.
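To make that mechanism concrete, here is a minimal Python sketch of how citation metadata gets packed into an OpenURL-style query string. The base URL and the exact field names are illustrative assumptions, not the configuration of any real institution; the citation is the Brown (2021) article cited earlier.

```python
from urllib.parse import urlencode

# Hypothetical discovery-service base URL; each institution has its own.
base = "https://discovery.example.edu/openurl"

# Citation metadata for the Brown (2021) article cited earlier.
metadata = {
    "rft.genre": "article",
    "rft.jtitle": "The Serials Librarian",
    "rft.aulast": "Brown",
    "rft.date": "2021",
    "rft_id": "info:doi/10.1080/0361526X.2021.2008581",
}

# urlencode percent-encodes each value and joins the fields with "&",
# producing the kind of query string a link resolver receives via HTTP GET.
openurl = base + "?" + urlencode(metadata)
print(openurl)
```

A link resolver on the receiving end reverses the process: it parses the query string back into metadata and uses that metadata to search for matching holdings.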

Use Cases

Google Scholar Example

Let's consider a search scenario in Google Scholar. To start, users can affiliate themselves with a specific library through Google Scholar's settings. Once that affiliation has been set up, Google Scholar leverages a knowledge base (a structured database containing details about a library's collections) to facilitate access to paywalled content.

It works like this:

  • Metadata extraction: Google Scholar extracts the article's (or other content) metadata, which includes details such as the title, author, DOI, and publication year, from its database.
  • Administrative metadata: Additional metadata about the institution, such as an institutional ID number, is added to this information.
  • URL query formation: This collective metadata is converted into a URL query designed to search the library's collections through its discovery service.
  • User presentation: Users are presented with various target options for retrieving the article, or in some cases, they are taken directly to the full text (e.g., if only one target option exists in their library's collections).

The term target options refers to the different ways to obtain the article or acquire access to other content. These options may include:

  • Full text access from various and possibly multiple vendors or publishers. This is why a record in a discovery service may have multiple links to content.
  • Information about the article's physical location if available on a library's shelves.
  • Options to request the work through interlibrary loan.
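The target-presentation logic described above can be sketched in a few lines of Python. The knowledge base entries below are hypothetical holdings data, keyed by DOI only for simplicity:

```python
# Hypothetical knowledge base: which providers hold the full text of a work.
knowledge_base = {
    "10.1080/0361526X.2021.2008581": [
        "Full text from Taylor & Francis Online",
        "Full text from EBSCOhost LISTA",
    ],
    "10.1108/10650740510632208": ["Full text from Emerald eJournals Premier"],
}

def resolve(doi: str) -> str:
    """Mimic a link resolver's target-presentation step for one work."""
    targets = knowledge_base.get(doi, [])
    if not targets:
        # Nothing in the collection: fall back to interlibrary loan.
        return "No holdings: offer an interlibrary loan request"
    if len(targets) == 1:
        # Only one target: send the user straight to the full text.
        return "Redirect to: " + targets[0]
    # Multiple targets: present a menu of options.
    return "Choose a target: " + "; ".join(targets)

print(resolve("10.1108/10650740510632208"))
```

This is why clicking the same kind of link sometimes yields a menu of providers and other times drops you straight into the full text.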

To link Google Scholar to an affiliation:

  1. Go to https://scholar.google.com/
  2. Open Settings
  3. Click on the Library Links tab
  4. Search for your affiliation
    • e.g., University of Kentucky
  5. Add and save

Now when you search in Google Scholar, you should see View Now @ UK links (if your affiliation is University of Kentucky) next to search results that your affiliation has in its collections.

See Link Resolver 101 for additional details and this historical piece on link resolvers (McDonald & Van de Velde, 2004). Also, Alma provides Google Scholar documentation that is useful to read through. See also Google Scholar's documentation.

Consider conducting a basic keyword search in Google Scholar using the term electronic resources. If you are affiliated with a specific library (let's take the University of Kentucky (UK) as an example), you should see a View Now @ UK link next to at least some of the search results. An OpenURL, which we will dissect in detail later, contains the article's metadata and identifies Google as the source:


The original publisher of this article is Emerald, and the full text is available through Emerald eJournals Premier. This information is processed by Primo, UK's discovery and delivery service. Primo redirects our query to the UK Library's proxy service, OpenAthens (as of the summer of 2023; formerly it was EZProxy). After authenticating ourselves using our secure (we hope!) university account login, we gain access to the full text from Emerald.

Should alternative databases like EBSCOhost and/or ProQuest provide access, rather than the original publisher (e.g., Emerald in this case), Primo would present us with options to select our preferred database for viewing the full text.

In scenarios where the library provides only one source for the content, the handoff from Primo to OpenAthens to the Emerald full text occurs swiftly, providing a seamless user experience.

Dissecting an OpenURL

Understanding the anatomy of an OpenURL can help us comprehend how metadata is transmitted and processed within library systems. As an example, let's dissect a specific Primo URL to identify its individual components.

The following Primo URL is an OpenURL link, which means Primo follows the OpenURL standard. (See ANSI/NISO Z39.88-2004 (R2021), The OpenURL Framework for Context-Sensitive Services.) It is composed of various fields and values that make up the metadata. For readability, I have broken the URL into individual lines by metadata field. The metadata fields begin after the openurl? keyword:


The link resolver technology, which operates in the background, translates the metadata to interact with appropriate library services. In this specific URL, the 'institution', 'vid', and 'sid' fields serve as administrative metadata that help to identify the institution and source information. The key fields used to retrieve the article are:

  • aulast: The author's last name
  • id: The DOI (Digital Object Identifier)
  • auinit: The author's first two initials
  • atitle: The title of the article

These fields play crucial roles in ensuring that the correct resources are fetched from the library's database, making the OpenURL an important element in providing access.
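To see how these fields travel inside the URL, here is a short Python sketch that parses them back out of a hypothetical, abbreviated Primo-style OpenURL (real OpenURLs carry many more fields than shown here):

```python
from urllib.parse import parse_qs, urlsplit

# A hypothetical, abbreviated OpenURL for the Brown (2021) article.
url = ("https://discovery.example.edu/openurl?"
       "aulast=Brown&auinit=D.&"
       "atitle=Lessons%20learned%20starting%20over&"
       "id=doi:10.1080/0361526X.2021.2008581")

# parse_qs decodes the percent-encoding and maps each field to its values.
fields = parse_qs(urlsplit(url).query)
print(fields["aulast"][0])   # Brown
print(fields["atitle"][0])   # Lessons learned starting over
print(fields["id"][0])       # doi:10.1080/0361526X.2021.2008581
```

This is essentially what the link resolver does before it builds a search query against the discovery platform.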

One feature of the above URL is percent-encoding, a process used to encode URL-unfriendly characters, such as spaces, into a parsable format. Percent-encoding employs UTF-8, a common character encoding standard. Read about UTF-8 percent-encodings and the characters they correspond to.
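Python's standard library can demonstrate percent-encoding directly; a quick sketch:

```python
from urllib.parse import quote, unquote

# quote() percent-encodes URL-unfriendly characters.
title = "Through a glass, darkly"
encoded = quote(title)
print(encoded)           # Through%20a%20glass%2C%20darkly
print(unquote(encoded))  # Through a glass, darkly

# Non-ASCII characters become the percent-encoded form of their UTF-8 bytes.
print(quote("é"))        # %C3%A9
```

Here the space becomes %20 and the comma %2C, which is exactly the kind of encoding you see in the metadata fields of the Primo URL above.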

In Case of Interlibrary Loan

We can see another instance of this within Primo itself. In UK's InfoKat, I search for the phrase electronic resources and filter by the WorldCat options from the drop-down box to the right of the search box. By filtering for WorldCat options, I'm more likely to retrieve records that are not in UK Library's collections.

The first option is a work titled Electronic Resources. Selection and bibliographic control. Since this is not available via UK Libraries, I would have to request the item through interlibrary loan. When I do that, the link resolver triggers ILLiad, which is used for interlibrary loan. Note how the OpenURL looks much different here. Essentially, the OpenURL is contextual, and its context reflects the service being used (i.e., EBSCOhost, ProQuest, Google Scholar, Primo, ILLiad, etc.), which determines the metadata elements in the URL. Note that some elements are empty (e.g., rft.date=& is an empty value for the date field versus rft.genre=book&, which holds the value book for the genre field).
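Empty fields like these are easy to inspect in code. In this Python sketch, the query string fragment is hypothetical but uses the rft fields mentioned above:

```python
from urllib.parse import parse_qs

# Fragment of an ILL-style query string with an empty date field.
query = "rft.genre=book&rft.date=&rft.btitle=Electronic%20Resources"

# keep_blank_values=True preserves fields, like rft.date, that are present
# but carry no value; by default parse_qs silently drops them.
fields = parse_qs(query, keep_blank_values=True)
print(fields["rft.genre"])  # ['book']
print(fields["rft.date"])   # ['']
```

The distinction matters when troubleshooting: a field that is present but empty tells a different story than a field that is missing entirely.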



Our readings this week by Kasprowski (2012), Johnson et al. (2015), and Chisare et al. (2017) discuss link resolver technology, migration to new link resolver services, and methods to evaluate link resolver technology from both a systems and a user's perspective. It may not be necessary to learn how to hack your way through the OpenURL syntax, as I have above (or below: see Appendix A), or other aspects of link resolver URL formatting, but it is a good idea to acquire a basic understanding of how the URLs work in this process.

Let me re-emphasize that the key way that link resolvers work is by embedding citation metadata within the link resolver URL, including administrative metadata. This is another reason to have high quality metadata for our records, as our readings note. By implication, if we find, perhaps by an email from a library patron, that a link has broken in this process, it might be that the metadata is incorrect or has changed in some important way. Knowing the parts of this process aids us in deciphering possible errors that exist when the technology breaks.

For this week, see the Ex Libris Alma link resolver documentation; Alma is the link resolver product used by UK Libraries. Let's discuss this documentation in the forum. I want you to find and explain other instances of link resolvers. Be sure to provide links to these examples and articulate ways the technology can be evaluated.

Documentation to read and discuss:

Link Resolver, Usage

Additional information

Appendix A

How I Enhanced Zotero by Hacking OpenURL

Since OpenURL-compatible link resolver technology is partly based on query strings (more on these below), as we have seen, we can glean all sorts of information by examining these URLs: the query string component that contains the metadata for the source, but also the base component that contains the vendor and institutional information and the URL type. When I worked on this section, I learned that Primo/Alma uses two URL types to request resources: a search URL and an OpenURL. We can see this in the URLs. The base search URL looks like this:


The base OpenURL differs just a bit (see the end of the URL):


The base search URL appears when searching the university's discovery service. However, the OpenURL appears only when needed, in transit between the source and the target: e.g., after clicking on a View Now @ UK link and before being redirected to the full text version. I copied my institution's specific OpenURL when I clicked on a View Now @ UK link and before it redirected to the OpenAthens page.

My students often identify great problems to solve or are the source of great ideas. In a previous semester, a student in my electronic resource management class noticed that Zotero has a Locate menu that uses OpenURL resolvers to look up items in a library. By default, Zotero uses WorldCat, but it can use a specific institution's OpenURL resolver. I had completely forgotten about this. When I investigated whether my institution was listed in the Zotero Locate menu, I found that it was not, nor was it listed on Zotero's page of OpenURL resolvers.

At the time, I didn't know what my institution's exact OpenURL was, but I was able to figure it out by comparing the syntax and values from other Primo URLs listed on Zotero's page of OpenURL resolvers. By comparing these OpenURLs, I was able to derive my institution's specific OpenURL (base component plus institutional info), which is:


I added that to Zotero, and it worked. I then posted the OpenURL info to Zotero's forum, and they added it to their OpenURL resolver page. If others are curious about how to add this info to Zotero, another library has created a video on this. The directions cover adding a specific OpenURL to Zotero and using Zotero's Library Lookup functionality.

Appendix B

A Basic URL

I mentioned query strings above. These are the part of a URL that contains instructions for query engines, databases, or websites (like Wikipedia). The parameters (i.e., search terms) are part of the query string, too. It's also important to understand the base part of a URL (the link), because the link in link resolver is central to the whole process. A URL for an article can look like this:


This URL contains the following components:

  • https:// : indicates the secure version of the hypertext transfer protocol (HTTPS)
  • www : indicates the subdomain
  • emerald : indicates the second level domain name
  • .com : indicates the top level domain

Under a standard web server configuration, the rest of the URL (after the .com) would indicate a directory path to the location of the article on the Emerald web server, but the content management system Emerald uses likely does not map these paths to actual directories or folders on its servers.
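Python's urllib.parse can pull these components apart. The article URL below is an illustrative stand-in with the same shape (the path is invented for this example, though the embedded DOI is the one discussed next):

```python
from urllib.parse import urlsplit

# Hypothetical Emerald-style article URL; only its shape matters here.
url = "https://www.emerald.com/insight/content/doi/10.1108/10650740510632208/full/html"

parts = urlsplit(url)
print(parts.scheme)  # https  (the protocol)
print(parts.netloc)  # www.emerald.com  (subdomain + second-level domain + TLD)
print(parts.path)    # the path-like remainder that the CMS interprets
```

Note that urlsplit returns the host as one netloc string; the subdomain, second-level domain, and top-level domain are simply its dot-separated pieces.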

The DOI (digital object identifier) for this article is embedded in the above URL and is specifically 10.1108/10650740510632208. The DOI is composed of a prefix and a suffix. The prefix includes the following elements:

  • 10 : signifies the DOI registration agency, which is always "10" for DOIs managed by the International DOI Foundation (IDF)
  • 1108 : the registrant code that distinguishes the organization that registered the DOI (e.g., publisher, data center, library)

The suffix refers to the following element:

  • 10650740510632208 : a character string (in this case, of numbers) that refers to the article. This string is created by the registrant

The DOI itself can be used to create a permanent URL for the above work by adding https://doi.org/ to the beginning:
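As a small Python sketch using the DOI given above, the prefix/suffix split and the permanent URL construction look like this:

```python
# The DOI from the article above, split into its prefix and suffix.
doi = "10.1108/10650740510632208"
prefix, suffix = doi.split("/", 1)
print(prefix)  # 10.1108 ("10" plus the registrant code "1108")
print(suffix)  # 10650740510632208 (the registrant-created string)

# Prepending the doi.org resolver produces a permanent URL for the work.
permanent_url = "https://doi.org/" + doi
print(permanent_url)  # https://doi.org/10.1108/10650740510632208
```
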


Readings / References

Chisare, C., Fagan, J. C., Gaines, D., & Trocchia, M. (2017). Selecting link resolver and knowledge base software: Implications of interoperability. Journal of Electronic Resources Librarianship, 29(2), 93–106. https://doi.org/10.1080/1941126X.2017.1304765

Johnson, M., Leonard, A., & Wiswell, J. (2015). Deciding to change OpenURL link resolvers. Journal of Electronic Resources Librarianship, 27(1), 10–25. https://doi.org/10.1080/1941126X.2015.999519

Kasprowski, R. (2012). NISO’s IOTA initiative: Measuring the quality of OpenURL links. The Serials Librarian, 62(1–4), 95–102. https://doi.org/10.1080/0361526X.2012.652480

Additional References

McDonald, J., & Van de Velde, E. F. (2004, April 1). The lure of linking. Library Journal. Library Journal Archive Content. https://web.archive.org/web/20140419201741/http://lj.libraryjournal.com:80/2004/04/ljarchives/the-lure-of-linking/

Electronic Access


Access is the paramount principle of librarianship, and all other issues, from censorship to information retrieval to usability, are on some level derived from or framed by that principle of Access.

This week we devote ourselves to a discussion of electronic access. Let's begin with Samples and Healy (2014), who provide a nice framework for thinking about managing electronic access. They describe two broad categories: proactive troubleshooting and reactive troubleshooting of access.

  • proactive troubleshooting of access: "defined as troubleshooting access problems before they are identified by a patron". Some examples include:
    • "letting public-facing library staff know about planned database downtime"
    • "doing a complete inventory to make sure that every database paid for is in fact 'turned on'"
  • reactive troubleshooting of access: "defined as troubleshooting access issues as problems are identified and reported by a patron". Some examples include:
    • "fixing broken links"
    • "fixing incorrect coverage date ranges in the catalog"
    • "patron education about accessing full text"

The goal here, as suggested by Samples and Healy (2014), is to maximize proactive troubleshooting and to minimize reactive troubleshooting. The Samples and Healy (2014) report is a great example of a systematic study. The authors identified a problem that had grown "organically," collected and analyzed data, and then generalized from it by outlining a "detailed workflow" to "improve the timeliness and accuracy of electronic resource work." Practically, studies like this promise to improve productivity and workflows and to foster job and patron satisfaction. Such studies also help librarians identify the kinds of software solutions that align with their own workflows and patron information behaviors.

If interested, I suggest reading Lowe et al. (2021) about the impact of Covid-19 on electronic resource management. Six authors individually describe access issues at their respective institutions and show how issues of pricing, acquisitions, training, user expectations, and budgets affect electronic access. I suggest reading articles like this in light of the framework provided by Samples and Healy (2014), because stories like these, about the impact of the pandemic on electronic access, can help guide us in developing proactive troubleshooting procedures that minimize future issues, pandemic or otherwise, at our own institutions.

Samples and Healy (2014) say something important against a common assumption about electronic resources, particularly those provided by vendors:

The impression that once a resource is acquired, it is then just 'accessible' belies the actual, shifting nature of electronic resources, where continual changes in URLs, domain names, or incompatible metadata causes articles and ebooks to be available one day, but not the next (The Complexity of ERM section, para. 6).

Hence, unlike a printed work from the print-only era that, once cataloged, might sit on a shelf for decades or longer without major problems of access, electronic resources require constant and active attention to maintain accessibility. Ebooks, for example, can create metadata problems. Often what's important about scholarly ebooks, in particular, is the chapters they include, and hence metadata describing ebook components is important, along with links to those chapters in discovery systems. This difference between item-level cataloging and title-level cataloging, as Samples and Healy describe, can lead to confusing and problematic results when considering different genres and what those genres contain.

Note, too, their discussion of the series of links involved from the source of discovery, e.g., an OPAC or a discovery layer, to the retrieved item, and how difficult it can be to determine which of those links and services is broken when access becomes problematic.

Let me highlight a few key findings from their report:

  • Workflows: why does this keep coming up? It's because workflows help automate a process---simplify and smooth out what needs to be done, and because this is only possible when things are standardized.
  • Staffing: we'll discuss this more in another section, but part of the problem here is that ERM has had a major impact on organizational structure, and different libraries have responded differently. This lack of organizational standardization has its benefits regarding overall management practices and cultures, but it also has a huge drawback---the difficulty of establishing effective, generalized workflows that include key participants and that minimize dependencies on any one person.
  • Tracking: if there's no tracking, there's no method to systematically identify patterns in problems. And if that's not possible, then there's no method to solve those problems proactively. Everything becomes reactive troubleshooting, and reactive troubleshooting, as Samples and Healy indicate, results in poor patron experiences. We'll return to tracking during the week on Evaluation and Statistics.

We commonly hear that discovery systems are a great solution to all the disparate resources libraries subscribe to. Or, if we do think about problems with such systems, we are often presented with a basic information retrieval problem: the larger the collection to search, the more likely a relevant item will get lost in the mix. Carter and Traill (2017) point out that these discovery systems also tend to reveal access problems as they are used. The authors provide a checklist to help track issues and improve existing workflows.

Buhler and Cataldo (2016) provide an important reminder that the mission of the electronic resource librarian is to serve the patron. This should remind us that the internet and the web have flattened genres. By that I mean they have made it difficult to distinguish among works like magazine articles, news articles, journal articles, encyclopedia articles, ebooks, etc. Though the Buhler and Cataldo (2016) reading is student-focused, other studies have hinted at the same issue across other populations. It's important to recognize these issues as ERM librarians and to work to resolve them in whatever ways we can.

I myself grew up learning the differences between encyclopedia articles, journal articles, magazine articles, newspaper articles, book chapters, handbooks, indexes, and dictionaries because I grew up with the print versions, which, by definition, were tangible things that looked different from each other. Today, a traditional first-year college student was born around 2004 and learned to read sometime in the last decade. The problem this raises is that although electronic resources are electronic or digital, they are still based on genres that originated in the print age, yet they lack the physical characteristics that distinguished one from the other. For example, what's the difference between a longer NY Times article (traditionally a newspaper article) and an article in the New Yorker (traditionally a magazine article) in their online forms? Aside from some aesthetic differences, both are presented on web pages, and it's not altogether obvious, from any kind of cursory examination, that regular users can tell they're entirely different genres. However, there are important informational differences between the two, in how they were written, how they were edited, how long they are, and who wrote them, that might still lead us to consider them different genres. Even Wikipedia articles pose this problem. Citing an encyclopedia article was never an accepted practice, but this was only true for general encyclopedias. It was generally okay to cite articles from special encyclopedias because they focused on limited subject matters like art, music, science, or culture, and were usually more in-depth in their coverage. Examples include the Encyclopedia of GIS, the Encyclopedia of Evolution, The Kentucky African American Encyclopedia, The Encyclopedia of Virtual Art Carving Toraja--Indonesia, and so forth. There are studies showing that Wikipedia provides the same kind of in-depth coverage as some special encyclopedias, thus helping to flatten the encyclopedia genre, too.

The flattening holds true for things like Google. The best print analogy for Google is the index, which was used to locate keywords that referred to source material. The main difference between those indexes and Google is that the indexes covered specific publications, like a newspaper, or specific areas, like the Social Science Citation Index or the Science Citation Index, both of which are actual, documented, historical precursors to Google and to Google Scholar. Today, however, these search engines are erroneously considered source material (e.g., "I found it on Google"). Few, I think, would have considered a print index to be source material; rather, it was a reference item, since it referred users to sources. Nowadays, it's all mixed up, but who can blame anyone?

Example print indexes:

Access and Authentication

In this section, we'll delve into the technological frameworks that facilitate access to and authentication of library electronic collections. Given that a significant portion of these resources are behind paywalls, libraries employ specialized software to verify user credentials before granting access. These authentication measures are not just best practices but are often mandated by contractual agreements with content providers.

There are two main technologies used to authenticate users. The first is through an IP / proxy server, and the second is through what is called SAML authentication. We address these two authentication types below.

Proxy Authentication

EZproxy (OCLC) is the main product of the first type. When we access a paywalled work, like a journal article, we may notice something like ezproxy.uky.edu in the string of text in a URL. For example, the following is an EZproxy URL:


Note that UK Libraries, which I use in these examples, is transitioning away from EZproxy and adopting OpenAthens, which is SAML-based. More on that below.

The interesting thing about this URL is that it has a uky.edu address even though the article is in a journal hosted in Elsevier's ScienceDirect database. The www-sciencedirect-com part of the address is a simple subdomain of ezproxy.uky.edu (you can tell because the components are separated by dashes instead of periods). As a subdomain, it is no different than the www in www.google.com or the maps in maps.google.com. The original URL is in fact:


Unlike the first URL, the original URL is in fact a sciencedirect.com address. Even though "sciencedirect" appears in the uky.edu URL, it is not a sciencedirect.com server. They are two different servers, from two different organizations, and are as different as uky.edu and google.com.
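The hostname rewriting described above can be sketched in a few lines. This is a simplification and an assumption on my part: actual EZproxy configurations vary by institution, and the article path below is a hypothetical placeholder, not a real URL:

```python
from urllib.parse import urlparse, urlunparse

# Hypothetical proxy domain, following the UK Libraries example in the text.
PROXY_DOMAIN = "ezproxy.uky.edu"

def proxify(url: str) -> str:
    """Rewrite an origin URL into an EZproxy-style proxied URL.

    Dots in the origin hostname become dashes, and the proxy domain is
    appended, so the result is a subdomain of the proxy server itself.
    (Real EZproxy rewriting rules are more involved; this mimics the pattern.)
    """
    parts = urlparse(url)
    proxied_host = parts.hostname.replace(".", "-") + "." + PROXY_DOMAIN
    return urlunparse(parts._replace(netloc=proxied_host))

# A placeholder article path on sciencedirect.com:
print(proxify("https://www.sciencedirect.com/some/article/path"))
# → https://www-sciencedirect-com.ezproxy.uky.edu/some/article/path
```

The dash-for-dot substitution is why the proxied address reads www-sciencedirect-com rather than www.sciencedirect.com: it keeps the whole origin hostname inside a single subdomain label structure under the proxy's own domain.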

The reason we read an article or some other paywalled content at a uky.edu address, and not at, for example, a sciencedirect.com address, is because of the way proxy servers work. In essence, when we request a resource provided by a library, like a journal article or a bibliographic database, our browser makes the request to the proxy server and not to the original server. The proxy server then requests the resource from the original server, which relays the content back to the proxy server (EZproxy), which then sends the content to our browser. This means that when we request an article in a journal at sciencedirect.com or jstor.com, our browser never actually makes a connection to those servers. Instead, the proxy server acts as a go-between. See Day (2017) for a more technical yet accessible description of the process.

Proxy servers provide access either through a login server or based on the user's IP address. If we're on campus, then our authentication is IP based, since all devices attached to the university's network are assigned an IP from a pre-defined range of IP addresses. This makes access to paywalled content fairly seamless when we are on campus.

If we are off-campus, access is authenticated via a login to the proxy server. When we attempt to access paywalled content from off-campus, we will see an EZproxy login URL, which looks something like this for the ScienceDirect database:


Aside from ScienceDirect, you can see a list of other subscribed content that requires EZproxy authentication here:


SAML Authentication

The second main technology used to authenticate and provide access is based on what is called SAML authentication. The main product that provides SAML authentication for libraries is OpenAthens.

SAML, or Security Assertion Markup Language, is an XML-based standard for exchanging authentication and authorization data between parties, in particular between an identity provider (IdP) and a service provider (SP).

Unlike a proxy / IP authentication process, SAML's main function is that of an identity verification system. Under this method, libraries offer a single sign-on process, and once authenticated, patrons have access to all SAML-ready content or service providers. The process is similar to the Duo Single Sign-On service many universities use for authentication. In the OpenAthens case, users are authenticated via an identity provider, which would be the library or the broader institution (and usually via some other software service). The library provides identification by connecting to its organization's identity management system, such as ADFS (Active Directory Federation Services). Once a patron has been authenticated, a confirmation is sent to the content provider, which then provides the patron access to the content. For more details, see What is SAML? and this detailed OpenAthens software demo.

One of the benefits of this method is that URLs are not proxied, which means that content is not delivered to the patron from a proxy server like EZproxy. Instead, patrons access the original source directly. From a patron's perspective, this facilitates sharing clean, unproxied URLs. As far as I can tell, one of the downsides might be privacy related. With a proxy server, users don't access the original source; instead, the source is delivered through the proxy server, which, by definition, masks the patron's IP address and browser information. This wouldn't be true under the SAML method.

Note: The library would have access to EZproxy logs, which would include much of the user's activity while using the proxy.

In a bit more detail, a SAML-based authentication process is described below:

  1. User Request: A user tries to access a resource on the service provider (e.g., a paywalled library article).
  2. Redirection: If the user is not already authenticated, the service provider redirects the user to the identity provider (IdP), often passing along a SAML request.
  3. Authentication: The IdP challenges the user to provide valid credentials (e.g., username and password). If the user is already authenticated with the IdP (e.g., already logged into a university portal), this step may be skipped.
  4. Assertion Creation: Upon successful authentication, the IdP generates a SAML assertion, which is an XML document that includes the user's authorization information.
  5. Response: The IdP sends this SAML assertion back to the service provider, often as part of a SAML response package.
  6. Verification: The service provider verifies the SAML assertion (often by checking a digital signature) to ensure it came from a trusted IdP.
  7. Access Granted: Once the assertion is verified, the service provider grants the user access to the requested resource.
  8. Session: A session is established for the user, allowing them to access other resources without needing to re-authenticate for a certain period.

In the context of a library, the IdP could be a university's authentication system, and the service provider could be a database of academic journals. When a student tries to access an article, they would be redirected to log in through the university's system. Once authenticated, the university's system would send a SAML assertion to the journal database, confirming that the student is authorized to access the content.
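The assertion and verification steps above can be sketched in miniature. This is only an analogy under stated assumptions: real SAML exchanges digitally signed XML assertions (typically verified with X.509 certificates), not the HMAC-signed JSON used here, and the key and user names are invented for illustration:

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared signing key standing in for the IdP's certificate.
IDP_KEY = b"idp-signing-key"

def idp_issue_assertion(user: str) -> dict:
    """Identity provider: after authenticating the user, sign an assertion."""
    payload = json.dumps({"subject": user, "issued": int(time.time())})
    signature = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def sp_verify_assertion(assertion: dict) -> bool:
    """Service provider: recompute the signature and check it matches,
    which proves the assertion came from the trusted IdP."""
    expected = hmac.new(IDP_KEY, assertion["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["signature"])

assertion = idp_issue_assertion("student@uky.edu")  # invented example subject
print(sp_verify_assertion(assertion))  # → True
```

The point of the sketch is the trust relationship: the service provider never sees the patron's credentials, only a signed statement from the identity provider that it can independently verify.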

This method is particularly useful for organizations like universities that have multiple service providers (e.g., different databases, internal services, etc.) but want to offer a single sign-on (SSO) experience for their users.


The Samples & Healy (2014) and Carter & Traill (2017) articles address troubleshooting strategies for electronic resources. One additional thing to note about these readings is how organizational structure influences workflows and how the continued transition from a print-era model of library processes to an electronic one remains problematic. Even once that transition is complete, both readings make the case that strategy and preparation are needed to deal with these issues. The Buhler & Cataldo (2016) article shows how confusing e-resources are to patrons and how the move to digital has complicated all genres, or "containers", as the authors call them. Such "ambiguity" has implications not only for how users find and identify electronic resources but also for how librarians manage access to them.

I added the EZproxy and OpenAthens content in order to complete the technical discussions we have had in recent weeks on integrated library systems, electronic resource management systems, link resolvers, and standards. These authentication and access technologies complete those discussions, which, altogether, cover the major technologies that electronic resource librarians work with to provide access to paywalled content in library collections. Both technologies aim to provide access that is nearly as seamless as accessing content via a search engine or other source. Although neither will ever offer completely seamless access as long as there are paywalled sources in library collections, the job of an electronic resource librarian is often to make sure they work as well as possible. This will often mean working with vendors and colleagues.

Additional Sources

Readings / References

Samples, J., & Healy, C. (2014). Making it look easy: Maintaining the magic of access. Serials Review, 40, 105-117. https://doi.org/10.1080/00987913.2014.929483

Carter, S., & Traill, S. (2017). Essential skills and knowledge for troubleshooting e-resources access issues in a web-scale discovery environment. Journal of Electronic Resources Librarianship, 29(1), 1–15. https://doi.org/10.1080/1941126X.2017.1270096

Buhler, A., & Cataldo, T. (2016). Identifying e-resources: An exploratory study of university students. Library Resources & Technical Services, 60, 22-37. https://doi.org/10.5860/lrts.60n1.23

Additional References

Breeding, M. (2008). OCLC Acquires EZproxy. Smart Libraries Newsletter, 28(03), 1–2. https://librarytechnology.org/document/13149

OCLC. (2017, September 22). EZproxy. OCLC Support. https://help.oclc.org/Library_Management/EZproxy

OpenAthens transforms user access to library resources, replacing EZproxy and IP address authentication. (2021, June 2). About UBC Library. https://about.library.ubc.ca/2021/06/02/openathens-transforms-user-access-to-library-resources-replacing-ezproxy-and-ip-address-authentication/

Botyriute, K. (2018). Access to online resources. Springer International Publishing. https://doi.org/10.1007/978-3-319-73990-8

Day, J. M. (2017, April 25). Proxy servers: Basics and resources. Library Technology Launchpad. https://libtechlaunchpad.com/2017/04/25/proxy-servers-basics-and-resources/

Lowe, R. A., Chirombo, F., Coogan, J. F., Dodd, A., Hutchinson, C., & Nagata, J. (2021). Electronic Resources Management in the Time of COVID-19: Challenges and Opportunities Experienced by Six Academic Libraries. Journal of Electronic Resources Librarianship, 33(3), 215–223. https://doi.org/10.1080/1941126X.2021.1949162

Chapter 3: E-Resource Stewardship

This chapter on e-resource stewardship explores the complex and essential aspects of managing electronic resources within libraries. The following sections cover:

  1. the ERM workflow, which details the procedures and practices involved in the effective handling of digital materials;
  2. markets and economics of e-resources, which offers insights into how copyright law influences the financial dynamics and market trends that shape electronic resource availability and pricing;
  3. licensing, which focuses on the legal considerations and contractual agreements essential to obtaining and providing electronic content;
  4. negotiations for e-resources, which examines the strategies and techniques employed in securing favorable terms and conditions for libraries and their patrons; and
  5. acquisitions and collection development, which helps illuminate the careful planning and execution involved in curating a diverse and relevant digital collection.

The goal of these sections is to provide a holistic view of e-resource stewardship, equipping you with the knowledge and skills necessary for the contemporary electronic resource landscape.



If all goes according to plan, this week's readings on electronic resource management and on workflow analysis should put prior material into context and act as a bridge to the material we discuss in the remaining sections of this work.

To recap: in the beginning we learned about:

  • what it means to be a librarian who oversees or is a part of electronic resource management,
  • what kinds of criteria are sought for in new hires, and
  • why electronic resources have introduced so much disruption across libraries.

Among other things, this latter point is largely due to the fact that the print era involved a more linear process of collection management and information use, a process that was fundamentally altered by the introduction of electronic resources.

Then we learned about the functions and modules offered by:

  • electronic resource management software,
  • integrated library system software, and
  • library service platforms.

To understand those systems better, we learned about standards, technical reports, and recommended best practices by studying documents prepared by NISO and its members, and component technologies such as:

  • OpenURL link resolvers
  • Access and authentication technologies:
    • EZProxy
    • OpenAthens

We also learned about:

  • technical and workflow standards

And we discussed:

  • why standards are important, whether they are technical or address workflows,
  • why interoperability is required, and
  • what happens when access to electronic resources breaks.


This week things will start to make connections at a faster pace. In the first Anderson article (chapter 2), we gain a clearer idea of what a knowledge base is and how it works. We learn more about how integrated library systems and ERM systems work together (or fail to). We dip our toes into newer topics like licensing, COUNTER, and SUSHI, which we'll cover in greater detail towards the end of this module.

In the second Anderson article (chapter 3), we learn how to carefully consider a library's workflow before selecting which ERM software to purchase. (This is why workflow-based standards are important, even if they are not true technical standards.) We do this because we select systems based on the needs of the librarians, which may differ vastly across libraries, each relying on different aspects of the overall process. As you read this chapter, keep in mind the Samples and Healy article from the previous section and the discussions of proactive versus reactive troubleshooting.

As hinted at in these readings, especially in the section on acquisitions, budgets, subscriptions, and purchasing in Anderson's paper on the Elements of Electronic Resource Management, and in the multiple discussions about the role vendors play in electronic resource management, the market and the economics of this area of librarianship weigh heavily on everyday realities. We will follow up on this in the next section when we begin to read more about the market and the economics of electronic resources. For example, in both Anderson readings, we learn about the CORE recommended practice (RP), or the Cost of Resource Exchange, developed by NISO. CORE brings together three aspects of our previous discussions: software, funds, and interoperability. The CORE RP describes how ILS and ERM systems can communicate the costs of electronic resources to each other. Its existence hints at the pressure librarians face in dealing with complex budget issues. Although these articles were published before the pandemic, the pandemic has made these issues more complicated for libraries.

While we spent time discussing technical standards, we also learned about TERMS, an attempt to standardize the language and processes involved in electronic resource management. We see more connections in this week's readings. Aside from the CORE standard, we learn about attempts to standardize licensing, as well as the COUNTER and SUSHI usage-related standards that outline the communication, collection, presentation, and formatting of usage statistics for electronic resources such as ebooks, journals, databases, and more.

We have discussed interoperability and what it takes for multiple systems to connect and transfer data between each other. We primarily discussed this with respect to link resolver technology, not just because link resolvers are important components of electronic resource management, but also because they are a good example of the kind of work involved in getting systems to communicate properly. There are other forms of interoperability, though, and coming back to CORE again, the Anderson article (chapter 2) provides a link to a white paper on the interoperability of acquisitions modules between integrated library systems and electronic resource management systems. This paper defines 13 data elements determined to be desirable in any exchange between ILS software and ERM software. That is, the data elements enable meaningful use of both the ILS software and the ERM software, and they include:

  • purchase order number
  • price
  • start/end dates
  • vendor
  • vendor ID
  • invoice number
  • fund code
  • invoice date
  • selector
  • vendor contact information
  • purchase order note
  • line item note
  • invoice note (Medeiros, 2008).
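As an aside, the thirteen elements above can be pictured as a single record type. Here is a minimal sketch; the field names are my own illustration, not the CORE RP's actual element names (the RP specifies an XML exchange format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CostExchangeRecord:
    """One hypothetical cost-exchange record carrying the data elements
    identified in the Medeiros et al. (2008) white paper."""
    purchase_order_number: str
    price: float
    start_date: str            # start/end dates count as one element
    end_date: str
    vendor: str
    vendor_id: str
    invoice_number: str
    fund_code: str
    invoice_date: str
    selector: str
    vendor_contact: str
    purchase_order_note: Optional[str] = None
    line_item_note: Optional[str] = None
    invoice_note: Optional[str] = None

# An invented example record:
record = CostExchangeRecord(
    purchase_order_number="PO-2024-001", price=1500.0,
    start_date="2024-01-01", end_date="2024-12-31",
    vendor="Example Vendor", vendor_id="V-9", invoice_number="INV-7",
    fund_code="SERIALS", invoice_date="2024-01-15",
    selector="J. Librarian", vendor_contact="sales@example.com")
```

An ILS and an ERM system that can each populate and read such a record are, in effect, what the white paper means by useful communication between the two.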

That white paper contains examples and worthwhile use cases and stories from major libraries, and these cases are helpful reads. The paper provides a sense of how standards are created through a process of comparing, contrasting, and coordinating needs and contexts among different entities.

These Anderson readings are great because they illustrate the whole ERM process. If you are able, visit the journal issue for these two readings and read the other chapters that Anderson has written, in particular the chapter titled Electronic Resource Management Systems and Related Products.

In short, this section's topic provides a foundation for the remaining topics we study. In particular, they will help frame what we learn when we study the markets and economics of the electronic resource industry, the process of licensing and negotiation, and about the evaluation and statistics of usage. Think of this section as a foundation, a transition between, and a reflection on all we have studied thus far, and what we will study going forward.

Readings / References

Anderson, E. K. (2014). Chapter 2: Elements of electronic resource management. Library Technology Reports, 50(3). https://journals.ala.org/index.php/ltr/article/view/4492/5257

Anderson, E. K. (2014). Chapter 3: Workflow analysis. Library Technology Reports, 50(3). https://journals.ala.org/index.php/ltr/article/view/4493/5259

Additional References

Anderson, E. K. (2014). Chapter 4: Electronic resource management systems and related products. Library Technology Reports, 50(3). https://journals.ala.org/index.php/ltr/article/view/4491

CORE Standing Committee (NISO). (2010). CORE: Cost of Resource Exchange. https://www.niso.org/standards-committees/core-cost-resource-exchange

Medeiros, N., Miller, L., Chandler, A., & Riggio, A. (2008). White Paper on Interoperability between Acquisitions Modules of Integrated Library Systems and Electronic Resource Management Systems (p. 28) [White Paper]. Library of Congress. https://old.diglib.org/standards/ERMI_Interop_Report_20080108.pdf

Samples, J., & Healy, C. (2014). Making it look easy: Maintaining the magic of access. Serials Review, 40, 105-117. https://doi.org/10.1080/00987913.2014.929483

Market and Economics


I think it's fair to claim that current copyright laws heavily influence the price of electronic resources. In this section, we cover the basics of copyright: how copyright creates monopolies; how those monopolies, in the digital era, are able to demand substantial sums of money for electronic resources; and, finally, how this impacts library budgets.

Below I discuss copyright and the first sale doctrine and show how digital works have disrupted some basic ways that libraries function. Then I discuss the impact this law has on e-resource collections and costs.

Copyright law grants a monopoly to the person or corporate owner of an intellectual property. That is, copyright owners have exclusive rights over the material that they own, where the owners may be a person or an organizational entity. Section 106 of the law grants copyright owners the following rights:

(1) to reproduce the copyrighted work in copies or phonorecords;

(2) to prepare derivative works based upon the copyrighted work;

(3) to distribute copies or phonorecords of the copyrighted work to the public by sale or other transfer of ownership, or by rental, lease, or lending;

(4) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and motion pictures and other audiovisual works, to perform the copyrighted work publicly;

(5) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and pictorial, graphic, or sculptural works, including the individual images of a motion picture or other audiovisual work, to display the copyrighted work publicly; and

(6) in the case of sound recordings, to perform the copyrighted work publicly by means of a digital audio transmission.

Source: Copyright Section 106

These exclusive and all-encompassing rights are designed to grant copyright owners a monopoly over their property. While such a monopoly is good for motivating the creation of intellectual property, without restrictions there would be little to no benefit to the public. For example, if those exclusive rights were followed without limitation, then the exchange of money between a copyright holder and a buyer for something like a printed book or a DVD would not entail a transfer of ownership of that physical copy; that is, the buyer would have no distribution rights over the item once the first exchange had been made. Under such a scenario, libraries would be able to buy physical books but would not be able to lend them.

To address this, the First Sale Doctrine (also see the Justice Department's explanation) avoids the problem created by the exclusive rights listed in Section 106. Because of the first sale doctrine, established as precedent in the early 20th century and then codified into law in 1976, you, I, or a library may buy a physical copy of a work, like a book, a DVD, or a painting, and literally own that specific copy. The first sale doctrine does not grant reproduction rights, as listed in Section 106 of the 1976 copyright law, but it does allow anyone to distribute the singular, physical embodiment of the work that they have purchased. Thus, the first sale doctrine is why libraries were able to thrive throughout the 20th century, lend material, and preserve it. More mundanely, it's also why I can buy a book at a bookstore and later give it away or sell it to someone.

The digital medium makes things messier, as it tends to do. There are two big reasons for this. First, digital works are not subject to the same distribution constraints as physical works are, and the first sale doctrine is about distribution rights and not reproduction rights. If I have a physical copy of some book and give you my copy of that book, then I no longer have that copy. However, if I have a copy of a digital file, then as we all know, it is relatively trivial to share that file with you without losing access to my own copy. Since digital works can be copied and distributed without losing access to the copies or to the original, the First Sale Doctrine does not necessarily apply to digital copies. Consequently, in the digital space, there are fewer limitations on supply, including on lending.

Second, many digital works are like software, or at least they are intertwined with the software and hardware needed to display them. This is true for all kinds of documents: HTML pages need a web browser or a text editor to read them; audio files need a media player to listen to them. But consider ebooks as an example. Whether a printed book is the size of a quarto or as large as a folio, whether it is all text, whether it includes images, or whether it has pop-ups makes no difference to its basic distribution potential and therefore its copyright status. These physical works are, in essence, self-contained. Ebooks, however, arrive in all shapes and sizes. Project Gutenberg distributes public domain ebooks in various file formats: plain text documents with no presentation markup (no bold, italics, and the like), HTML documents with markup, XML-based documents like EPUB, as well as PDFs and others. Recently, they started using AI to convert their books to audiobooks. Why so many file formats? Text is text, right? In the print space, a book is simply text printed on pages of paper, even if it is sometimes printed on different sized pages or using different type settings. But these various markups exist because they each offer technological or presentational advantages and are often tied to specific pieces of software and hardware.

This is especially true for proprietary, encrypted file formats, like the ones Amazon created for use only on Kindles (other e-readers and stores have their own formats), or the popular MP3 file format for audio recordings, which only recently became patent free. While file formats like these may not necessarily count as software, depending on how we define software, it is certainly true that file formats and the specific software applications that display or play them are intertwined. If you are old enough, you may remember the headaches caused by files created as .doc in some early version of Microsoft Word that later failed to display properly in a newer version of Word, in some other word processing software, or on some other operating system. WordPerfect 5.1 was a popular word processing application in the 1990s, and it's not clear whether files created with that application, or other popular word processing applications of that time, would open today, at least without intervention. In short, these complexities introduce obstacles to the first sale doctrine and raise other copyright issues because of the connection to software, which is also often copyrighted.

The main idea here, though, is that copyright holders and publishers have little financial interest in selling actual digital copies of works since they cannot prevent future distribution without special technologies. Instead, they are motivated to license material, and retain ownership, and sometimes explicitly tie that material to specific pieces of software and hardware, like the Kindle, which would have to be bought. That, we should note, adds additional expense.

Take note of the recent lawsuit against the Internet Archive (IA) in Hachette v. Internet Archive. In the early days of the pandemic, the Internet Archive started lending books through OpenLibrary.org that were mostly not available as ebooks. The IA lent these books only if a library owned a physical copy and limited lending to the number of physical copies owned. This is called Controlled Digital Lending (CDL), which the IA argued was fair use. Four publishers sued the IA and won the initial suit in March 2023. The decision will be appealed but has already emboldened other media companies to sue the IA. At stake is the basic notion of whether ebooks can be bought or must be perpetually licensed, at least until they enter the public domain.

Impact on Ebook Collections

What does this mean for libraries in the digital age? It means that libraries buy less and rent or license more, and renting means that they continually pay for something for as long as they want access to it. As Sanchez (2015) puts it,

At its simplest, this takes the form of paying x dollars per year per title during the length of the contract (Forecasting sect, para 4).

When the total supply of works increases, as the total number of published books does each year, licensing means renting more and more without ever completely acquiring. (It also means, if collections are held at a stable number, providing access to less.) When budgets are cut or remain stagnant, this ultimately entails a decline in the collection a library has to offer, or, if not a decline in the collection, then cuts in some other area of the library, like the number of librarians or other staff. This is the conundrum that Sanchez raises in his article.
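To make the licensing arithmetic concrete, here is a minimal sketch of the recurring cost model Sanchez describes (paying x dollars per year per title); the dollar amounts and growth rate are hypothetical:

```python
def cumulative_license_cost(annual_fee: float, years: int,
                            inflation: float = 0.0) -> float:
    """Total paid for one licensed title over a number of years,
    with an optional annual price increase (geometric growth)."""
    return sum(annual_fee * (1 + inflation) ** y for y in range(years))

# A $100/year title over 10 years, flat pricing vs. 6% annual increases:
print(round(cumulative_license_cost(100, 10)))        # → 1000
print(round(cumulative_license_cost(100, 10, 0.06)))  # → 1318
```

The contrast with a one-time purchase is the point: under licensing, the library pays indefinitely and still owns nothing at the end, and any annual price increase compounds the gap.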

If that alone were the issue, maybe librarians could discern other sustainable ways to proceed, but Sanchez (2015) raises additional issues and questions: what if publishers raise prices for digital content at an annual rate faster than what they already raise for print content (a reasonable assumption)? If so, will librarians be able to afford fewer titles, digital or print, unless they raise their budgets, and, as they weed, how will that impact the physical space of the library? (See figure 2.3, specifically, from Sanchez's article. The plot shows just how much could be lost and how little gained if the forecasts Sanchez discusses come true.)

In the digital landscape, there are many ways to constrain the supply of an item; in the physical space, there are fewer. That is, it's relatively easy for publishers and others to restrict the supply of physical works: they simply limit how many of those works are manufactured (e.g., the number of print runs). But given the nature of digital content, restricting supply depends on the technologies available to do so, and since there are so many publishers and distribution points, each one will often create its own unique type of constraint on supply. The result is a number of confusing methods for limiting supply, even if these limitations are marketed as selling points. In practice, this may mean that only a limited number of people may check out a work from a library at one time, or access a database at one time, and so forth. Thus, the budget issue has an impact on access and usability.

There have been recent attempts to address these issues. Paganelli (2022) describes some state-by-state efforts to lessen the financial burdens that e-content places on libraries. However, as Paganelli notes, these efforts have largely not succeeded. And Brewster Kahle, the founder of the Internet Archive, has recently highlighted growing attacks on library budgets that make the outlook for those budgets more pessimistic. From a publisher's perspective, Sisto (2022) argues that the general narrative about the tension between libraries and publishers is misleading. Instead, the author argues, the landscape is much more complex, and publishers have made a number of attempts to "make their e-lending policies better for librarians" (2018-2019: Policy Updates and Different Opinions sect, para 6). Personally, I'm not sure I buy (or lease) many of Sisto's arguments, but I think one thing is clear: the e-lending market is complex and miscommunication abounds.


Impact on eJournal Collections

Although ebooks likely represent the biggest impact on public library budgets, academic libraries are largely concerned with scholarly journals. Like Sanchez (2015), Bosch, Albee, and Henderson (2018) show that the major issue is that academic library budgets are declining or holding flat even though journal prices continue to increase and even though the number of published articles increases. This raises an interesting phenomenon: although researchers are hurt by the lack of access to research, researchers, by publishing more, are also part of the cause of the growing supply.

The authors also note that part of the drive to publish includes a drive to publish in so-called prestigious journal titles, where prestige is determined by how well cited the title is. The authors refer to a few citation-based metrics that the research community uses to determine prestige. These include the long-established Impact Factor, which can be examined in the Journal Citation Reports (JCR) provided by Clarivate Analytics, as well as newer ones, such as the Eigenfactor and the Article Influence Score, which can also be examined in JCR (as of this writing, the eigenfactor.org site is not regularly updated).

One motivation for using a citation metric as the basis of evaluating journal titles is because citation metrics indicate, at some level, the use of the work. That is, a citation to an article in a journal title means, hopefully, that the authors citing that article have read the article and used the knowledge from that work to add new knowledge. Historically, when Eugene Garfield invented the Impact Factor, it was partly as a tool for librarians to use in collection management because he recognized this use-based theory of citations.

However, citation metrics should never be the sole or even primary tool used to evaluate research. While they may provide good information, there are many caveats. First, fields of research cite at different rates, at different volumes, and for different reasons. This is why, in Table 5 of the Bosch, Albee, and Henderson (2018) article, the cost per cite for journals in the Philosophy & Religion category is so much higher than the cost per cite of titles in other categories: authors in P&R simply have different citation and publishing behaviors than authors in other categories. Second, citations do not capture all uses of a journal. For example, there are many journal titles that I might use in my courses but not in my research, and this is true for other faculty, yet citation metrics won't reflect that kind of use. The authors refer to altmetrics, which were invented to help capture additional, non-citing uses of scholarly products, but altmetrics are still in their infancy and largely depend on data sources and scholarly behaviors that are problematic themselves. Third, there are various issues with the metrics themselves. The Impact Factor is based on an outdated calculation and is thus not a very appropriate statistical measure; the other metrics were created to address that but may have problems of their own. Fourth, the use of these metrics, regardless of which one, tends to drive publishing behavior: journal titles with higher metrics tend to attract more submissions and more attention, thus driving more citations to them, and such skewing drives demand to publish in those journals. Citation-based metrics are thus comparable to a kind of capitalist economic system where, as the sociologist of science Robert Merton noted, the rich get richer (in citations) and the poor get poorer. The issue, then, is that prestige defined in this way does not necessarily indicate quality: just use.
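For reference, the standard two-year Impact Factor discussed above is a simple ratio: citations received in a given year to a journal's articles from the previous two years, divided by the number of citable items published in those two years. A minimal sketch with hypothetical numbers (the journal figures below are invented for illustration):

```python
def two_year_impact_factor(citations_to_prior_two_years: int,
                           citable_items_prior_two_years: int) -> float:
    """Two-year Journal Impact Factor: citations received in year Y to
    items published in years Y-1 and Y-2, divided by the number of
    citable items published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 600 citations in 2023 to its 2021-2022 articles,
# of which 200 were citable items.
print(two_year_impact_factor(600, 200))  # 3.0
```

Because the numerator depends entirely on field-specific citing behavior, two journals of equal quality in different fields can end up with very different scores, which is one reason the caveats above matter.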

The authors also discuss some issues with Gold Open Access (OA) and the idea that Gold OA may compound the cost problem. This is the model where authors pay a publication fee, or an article processing charge (APC), once a manuscript has been accepted by a journal (there are other types of Gold OA cost models). We can do a quick, rough, off-the-cuff calculation to see why this might compound the problem. As an example, PLOS ONE is one of the largest Gold OA journals, and in 2023 it charged an APC of $1,931. This is $336 more than what it charged in 2020, when I wrote the first draft of this section, which represents an overall 21.07% increase (see Table 1 below). In 2018, 32 papers that included at least one author from the University of Kentucky were published in PLOS ONE, totaling $51,040 in APCs across the 50 total institutions associated with these papers. This amounts to about $1,020 per institution, in 2018 dollars, paid by the authors and not libraries. For UKY authors, this also amounts to about $32,640 spent on APCs (32 * $1,020). This is about $27K more than the average price of the most expensive category, Chemistry, as reported in Table 1 of the reading. So even if open access reduces costs to libraries, it still may not reduce costs to taxpayers, who fund much of this research.

Year | PLOS ONE APC Fee | $ increase | % increase
2020 | $1,595           | —          | —
2023 | $1,931           | $336       | 21.07%

Table 1: APC fees for PLOS ONE
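The back-of-the-envelope APC arithmetic above can be reproduced in a few lines (all figures are taken from, or implied by, the totals in the text):

```python
# PLOS ONE APC fees as reported in the text (USD).
apc_2020 = 1595
apc_2023 = 1931

increase = apc_2023 - apc_2020                 # dollar increase, 2020-2023
pct_increase = increase / apc_2020 * 100       # percent increase

# 2018 papers with at least one University of Kentucky author.
# (The text's total of $51,040 implies the 2018 fee matched the 2020 fee.)
papers = 32
institutions = 50
total_apcs = papers * apc_2020                 # total APCs across all papers
per_institution = total_apcs / institutions    # rough per-institution share

print(increase, round(pct_increase, 2))        # 336 21.07
print(total_apcs, per_institution)             # 51040 1020.8
```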

The 2022 public access mandate issued by The White House Office of Science and Technology Policy might make things more interesting. The OSTP memorandum states that "federally funded research must be publicly accessible without an embargo on their free and public release." This mandate requires federal agencies with research and development expenditures to develop their own policies, which could result in agency-specific Gold OA or Green OA mandates. Green OA allows authors to make pre-prints (article versions before peer review) or post-prints (article versions after peer review) available, but not publisher versions (versions after formatting, etc.). We'll see how this plays out.

Readings / References

Bosch, S., Albee, B., & Henderson, K. (2018). Death by 1,000 cuts. Library Journal, 143(7), 28–33. https://www.libraryjournal.com/story/death-1000-cuts-periodicals-price-survey-2018

Sanchez, J. (2015). Chapter 2. Forecasting public library e-content costs. Library Technology Reports, 51(8), 9–15. Retrieved from https://journals.ala.org/index.php/ltr/article/view/5833

Additional References

Paganelli, A. (2022). Legally Speaking—States Unsuccessful in Providing Financial Relief of eBook Terms for Libraries. Against the Grain, 34(3). https://www.charleston-hub.com/2022/07/legally-speaking-states-unsuccessful-in-providing-financial-relief-of-ebook-terms-for-libraries/

Sisto, M. C. (2022). Publishing and Library E-Lending: An Analysis of the Decade Before Covid-19. Publishing Research Quarterly, 38(2), 405–422. https://doi.org/10.1007/s12109-022-09880-7

Licensing Basics


I think it's fair to say that licensing is what most characterizes electronic resource management. If something is licensed, then it's most likely e-content or related; by definition, it's not owned by the library, and as a temporary item in the collection, it requires special management.

Licensing requires understanding other aspects of electronic resource management, too. While an ERM librarian's job duties might be solely focused on the technical aspects of the work, i.e., those things we covered in prior sections, an ERM librarian whose primary duty is to participate in the licensing process must have a good grasp of the technical side, if for no other reason than that it is the technology that is licensed. In other words, it's probably a good thing to understand what is being licensed. This is why we spent time on the technical aspects before turning to licensing.

Before we get into licensing, we should refresh ourselves on the basics of copyright law, since what is being licensed exists, first, as a copyrighted work. We should also know something about contract law and how the two bodies of law are related, since licensing entails negotiating, signing, and managing contracts.

There is a complicated tension between copyright law and contract law. In short, copyright is a temporary right because of its close connection to the concept of the public domain. In the US, copyright was established in the Constitution under Article 1, Section 8, Clause 8, which states:

[The Congress shall have Power ...] To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

The clause in the U.S. Constitution grants copyright holders specific, exclusive rights, which inherently creates a tension between public and private interests. On one hand, these exclusive rights operate much like private property, allowing copyright owners to control the use of their creations and even transfer or sell these rights through contracts, which are in personam by nature (Rub, 2017). On the other hand, these rights are not absolute; they are granted for a "limited time" and with the ultimate aim of serving the public interest by enriching the cultural and intellectual commons.

This dual nature of copyright--its function as a form of private property that can be transacted, and its overarching goal to promote public welfare--adds layers of complexity to its legal treatment. While copyright law grants certain exclusive rights that can be enforced against individuals through in personam legal actions, these rights are also designed to be temporary and ultimately benefit the public. Therefore, copyright exists as both a personal right that can be legally enforced against specific individuals and a mechanism intended to serve broader societal goals.

in personam: made against or affecting a specific person only; imposing a personal liability (Google Dictionary).

In short, copyright exists in a unique space where it functions as a basic right, embedded in the earliest part of the US Constitution; it exists as a temporary bulwark against public ownership for the public good; and unlike other basic rights, copyright is transferable, since it is a type of property right.

Copyright is a type of economic right. However, other types of rights cannot be treated as property. These include natural rights (life, liberty, and the pursuit of happiness), civil rights (right to vote, right to a fair trial, protections against discrimination), political rights (right to protest, run for public office), and social rights (right to healthcare, education, and housing).

This all gets more complicated when licensing enters the picture, largely for three reasons. First, the history of software reflects the history of print, inasmuch as source code once relied on a physical medium, like a floppy disk or magnetic tape, to be distributed, just as a book had to be printed to be distributed. Second, the internet and the greater availability of high bandwidth enabled software, like books, to be distributed as easily as any other digital object. Third, it wasn't originally clear that software, or source code in particular, was copyrightable, but the Copyright Office at the Library of Congress eventually decided that it was eligible, and this eligibility was codified in a 1980 amendment to the 1976 Copyright Act. This cemented the ability of software copyright owners to lease their source code to users without transferring ownership of the expression, and thereby, especially once that code was freed from physical media, to evade the First Sale Doctrine.


Licensing Agreements

Licensing agreements offer copyright or intellectual property (patent, trademark) owners a contractual framework. This framework functions as an agreement among two or more parties. It enables the parties involved to use an owner's intellectual property under certain conditions and within some range of time (all contracts must have start and end dates). Librarians enter into licensing agreements of all sorts. Licenses cover bibliographic databases, ILS/ERM software, and of course, e-content.

Entering a licensing agreement for e-content means that libraries do not own that content but only have access for a limited period of time, as defined in the contract. This is unlike print works, which fall under the first-sale doctrine. The existence of a licensing agreement between a library and an intellectual property owner thus entails lack of ownership of the item (think of item as defined by the Functional Requirements for Bibliographic Records (FRBR) model).


Weir (2016) provides a nice outline of licenses and what they include. According to Weir, there are two general types of licensing agreements:

  • End user agreements: these are generally the kind that people accept when they use some kind of software or some service.
  • Site agreements: these are the agreements librarians get involved in when they negotiate for things like databases. Here, site refers to the organizational entity.

Licenses must include a variety of components. Weir (2016) outlines the parts of a standard license. They include:

  • Introductions: this includes information about the licensee and the licensor, date information, some information about payments and the schedule.
  • Definitions: this section defines the major terms of the contract. Weir includes, as examples, the licensee, the licensor, authorized user, user population, and whether the contract entails a single or multi-user site.
  • Access: This covers topics such as IP authentication and proxy access.
  • Acceptable use: Included here are issues related to downloading, storage, print rights, interlibrary loan (ILL), and preservation.
  • Prohibited use: What people cannot do: download restrictions, etc.
  • Responsibilities: What the licensee's (the library's) responsibilities are. Be careful about accepting responsibility for actions that the library would have a difficult time monitoring. This section also covers the licensor's responsibilities, which might include topics such as 24-hour access.
  • Term and terminations: Details about the terms of the contract and how the contract may be terminated. Be aware that many libraries are attached to either municipal, county, or state governments and must adhere to relevant laws.
  • Various provisions

As an example, the California Digital Library, via the University of California, provides a checklist and a copy of their standard license agreement.

The checklist covers four main sections, additional subsections, and is well worth a read:

  • Content and Access
  • Licensing
  • Business
  • Management

Licenses generally must include a scope (what rights are being granted and what limitations exist), a duration (how long the license will last), territory (where the license applies), fees or royalties (whether the licensee has to pay for the license, and if so, how much), revocation terms (under what circumstances the license can be revoked), and warranties and liabilities (any guarantees provided by either party and limitations on liability).

Standardizing Agreements

As with the various technologies that we have covered, NISO and its members moved toward creating a document that establishes a common license framework. The result was SERU: A Shared Electronic Resource Understanding. Although all licenses share some basic similarities, as discussed in Weir (2016) above, the details of the hundreds of licenses a large library has to handle can get lost in a sea of variability.

SERU is a NISO recommended practice that helps resolve this. It fosters a common approach to some aspects of the licensing process and, in fact, can be used as "an alternative to a license agreement" if a provider and a library agree to use it. Like the standard licensing structure that Weir (2016) outlines, SERU includes parts that describe use, inappropriate use, access, and more, but it also covers other stipulations, such as confidentiality and privacy.

NASIG Core Competencies

We have addressed the NASIG Core Competencies for Electronic Resources Librarians (NCC) in earlier sections. The NCC is a reminder of the centrality of licensing for the ERM librarian. Section 1.2 states:

Thorough knowledge of electronic resource licensing and the legal framework in which it takes place. Since licenses govern the use of most library electronic resources and have conditions that cannot knowingly be violated, an ERL with responsibilities related to licensing must demonstrate familiarity with how and for whom an organization licenses content, as well as the concepts, implications, and contract language pertaining to such issues as archival rights, perpetual access and interlibrary loan. A practical working understanding of issues such as copyright and fair use will allow ERLs to obtain the least restrictive, most library-friendly licensing terms during publisher/vendor license negotiations.

Even though I have covered the NCC in earlier sections, it bears repeating that when we talk about electronic resource management, we are talking about a comprehensive list of responsibilities, skills, technologies, and more, and we should keep this on our radar. Second, Regan (2015) specifically mentions these competencies with respect to the importance of learning more about the licensing process.

I think it's still true, seven years after publication, that few library schools cover licensing and license negotiation, and more broadly, electronic resource management, as Regan (2015) states, but I think that's changing. Even when licensing is covered, as it is here, Regan's advice is still important, and the essential questions they ask are relevant even if you have extensive experience with the process.

What is also relevant is the additional theme that Regan (2015) covers about the importance of advocacy. Many people are nervous about the idea of having to negotiate a license for an e-product, but in reality, such work is not done outside of a team, and that team will likely include people who work outside of libraries. This makes it important to advocate for the library during the licensing process.


LIBLICENSE is a resource to assist librarians in crafting, adopting, and managing licenses for electronic resources. The project is aimed at university libraries, but is relevant to other library types, too. The model licenses page provides a link to the main template, but also links to additional model licenses that cover the United States, the United Kingdom, and Canada from various institutional perspectives, including consortial licensing.

The LIBLICENSE model license includes helpful details, such as types of authorized uses and provisions on:

  • course reserves
  • course packs
  • electronic links
  • scholarly sharing
  • scholarly citation
  • text and data mining

It's important to note that anything covered in a license is subject to negotiation between the library and the vendor. That does not mean that all terms will be accepted by both parties, but if something is unfavorable or not in the best interests of your institution and patrons, then it needs to be discussed by all parties.

These model licenses are invaluable not only from a practical perspective but also from an educational perspective. The more you review them, the more comfortable you will become working with them. And, as we will learn in the next section, one of the most important parts of the negotiating process is being prepared to negotiate. That entails being familiar with the basic license model.

Readings / References

North American Serials Interest Group. (2013). NASIG core competencies for electronic resources librarians. https://nasig.org/Competencies-Eresources

Regan, S. (2015). Lassoing the Licensing Beast: How Electronic Resources Librarians Can Build Competency and Advocate for Wrangling Electronic Content Licensing. The Serials Librarian, 68(1–4), 318–324. https://doi.org/10.1080/0361526X.2015.1026225

SERU: A Shared Electronic Resource Understanding, NISO, http://www.niso.org/publications/rp/RP-7-2012_SERU.pdf

Additional References

Rub, G. A. (2017). Copyright survives: Rethinking the copyright-contract conflict (SSRN Scholarly Paper No. 2926253). https://papers.ssrn.com/abstract=2926253

Weir, R. O. (2012). Licensing Electronic Resources and Contract Negotiation. In R. O. Weir (Ed.), Managing electronic resources: a LITA guide. Chicago: ALA TechSource, an imprint of the American Library Association.

Licensing and Negotiating


Now that we know the basic structure and contents of a license for electronic resources, we can discuss the negotiation process.

We know that because of projects like SERU, some licensing may be avoided, and projects like LIBLICENSE will help streamline the process because they provide helpful boilerplate. When we enter negotiations for some products, especially big ticket products, it's helpful to know what that process might look like, and of course, how to go about it.

Principled Negotiation

Abbie Brown (2014) offers a number of great tips in her talk, and I want to highlight a few things.

First, her discussion of principled negotiation and assertive communication is important. Principled negotiation is about keeping negotiations professional, even when it feels personal or when something triggers our anger in the licensing process. Before you enter negotiations, keep in mind the objective you want to attain; and use reason, creativity, and problem solving skills to get there.

Assertive Communication

Assertive communication means being willing to express yourself. This is not the same as being aggressive, which is not warranted. Brown's (2014) suggestion to have other people review emails that might have been written in anger before sending them is golden.


Brown (2014) also discusses, based on her experience, some stereotypes that get in the way of negotiating. These include:

  • Librarian stereotypes
  • Library stereotypes
  • Vendor stereotypes

Stereotypes, from either direction, should be avoided. People exist on each side of the table, and although each side has their own self-interests, stereotypes prevent connection. And reflecting on any stereotypes we have also reduces anxiety in the negotiation process.


As a negotiator, Brown (2014) described having to work with many people, including having to negotiate with people in her own library. This is a great insight: it's very important to talk through things with colleagues as well as vendors.

Put it in Writing

Brown's (2014) point about putting things in writing will help you. You want to write well and succinctly, but if it's in writing, then it's documented and can be easily archived and retrieved.

Negotiation Workflows

Smith and Hartnett (2015) provide a real-world example of the negotiating process that includes a workflow around licensing. Remember: document everything and revisit your documentation. Importantly, use that documentation to formalize checklists. Having a workflow in place around licensing will make your work more efficient and help ensure that all bases are covered.

Dygert and Barrett (2016) cover the specifics of licensing: what to look for, what shouldn't be given away, how to negotiate principally, and more. Likewise, Dunie (2015) gets into the specifics of the negotiation process, which includes definitions of terms, business models, and strategies.


Becoming a skillful negotiator takes practice, but this section will help you prepare for the process. The main point I want to make is this: if you find yourself in a position where one of your job responsibilities is to negotiate with vendors for e-resources (or for anything else), then come back to these sources of information and spend additional time studying them and taking notes. Sources like these, and others in the literature, such as those listed by Garofalo (2017), will prepare you if you study them. Being prepared is the most important step.

Readings / References

Brown, A. (2014). Negotiation of E Resource licensing pricing terms. (2014, September 17). https://www.youtube.com/watch?v=LET4MWO7egI

Dunie, M. (2015). Chapter 3. Negotiating with content vendors: An art or a science? Library Technology Reports, 51(8), Article 8. https://journals.ala.org/index.php/ltr/article/view/5834

Dygert, C., & Barrett, H. (2016). Building your licensing and negotiation skills toolkit. The Serials Librarian, 70(1–4), 333–342. https://doi.org/10.1080/0361526X.2016.1157008

Garofalo, D. A. (2017). Tips from the trenches. Journal of Electronic Resources Librarianship, 29(2), 107–109. https://doi.org/10.1080/1941126X.2017.1304766

Smith, J., & Hartnett, E. (2015). The licensing lifecycle: From negotiation to compliance. The Serials Librarian, 68(1–4), 205–214. https://doi.org/10.1080/0361526X.2015.1017707

Additional References

ALA. (2006, August 25). Libraries and licensing. https://web.archive.org/web/20180611070938/http://www.ala.org/advocacy/copyright/librariesandlicensing/LibrariesAndLicensing

Chesler, A., & McKee, A. (2014). The shared electronic resource understanding (seru): Six years and still going strong. Information Standards Quarterly, 26(04), 20. https://doi.org/10.3789/isqv26no4.2014.05

LIBLICENSE: http://liblicense.crl.edu/

Acquisitions and Collections Development


Collection development and acquisitions is a complex problem for electronic resources. To rehash: in the print-only days, acquiring resources was a more linear process than it is now. Librarians became aware of an item, sought reviews of it, possibly collected it, described it, and then shelved it. And maybe, depending on the type of library, they weeded it from the collection at some future date during the regular course of collection assessment.

The above is a simplistic take, but there are additional vectors to be aware of with electronic resources. First, as we have learned, libraries may not own digital works, and different subscription services require different kinds of contracts. Second, electronic resources (ebooks, journal articles, databases, etc.) require different handling and disseminating procedures due to differences in technologies and licenses. Martin et al. (2009) pinpoint the issue when they write that:

As much as we would like to think our primary concerns about collecting are based on content, not format [emphasis added], e-resources have certainly challenged many long-established notions of how we buy, collect, preserve, and provide access to information (p. 217).

Collection Development

Although a world where format dictates so much is intriguing, it is problematic and worrisome that it does dictate so much. We think that content, and not format, should be king and that "collection managers should focus on the content of the information provided, regardless of the actual form in which the information arrives" (Harloe & Budd, 1994, p. 83). We have already learned that some formats cost more, but we must also ask new questions: how does format (or form) either prevent or facilitate access? If you catch the implicit gotcha there, you can see that a thread connects acquisitions, collection development, and usability, since usability is an access problem. We will cover usability issues later.

In a collection development course, you would unquestionably focus on content and on the work involved in creating a collection development policy (CDP), which I hope you adopt or spearhead if your library does not have one. Content and CDPs are relevant to the acquisition, collection, and management of e-resources. However, in a major way, it is also important to understand how the management of electronic resources has impacted librarian workflows and how that has shaped, or re-formed, library organizational hierarchy.

Organizational Hierarchy

A quick note about organizational hierarchy, which is often graphically presented in organizational charts. I developed an organizational chart based on my readings of librarian departmental reports written during the late 1950s and early 1960s by librarians at the University of Kentucky, and thus well before electronic resources took over. These departmental reports are archived at the University of Kentucky's Special Collections Research Center. Organizational charts have been around since the 1800s, but I do not think they were commonly used in libraries until at least the latter half of the 20th century. (I didn't see one in my research on UK Libraries for this time period.) Thus, I inferred an organizational structure based on the detailed reports written by the various department heads in the library at the time (see Fig. 1). When you compare my chart based on the past to the most recent one provided by UK Libraries (see aside below), it's clear that additional complexities have been added.

Organizational chart UK Libraries late 50s / early 60s
Fig. 1. This is a derived organizational chart based on annual reports of University of Kentucky departmental head librarians. Research is based on reports held at the UK Library's Special Collections Research Center.

Aside 1: The most recent organizational chart for UK Libraries is from 2019, and they may have decided to stop making them. Much has changed since then, including a new Dean, but the general point I am making should hold about the complexity of the modern library.

Aside 2: The concept of organizational charts dates back to the 19th century. Based on some cursory searching, the first known chart was created in 1855 by Daniel McCallum, a railway general superintendent. McCallum, with the assistance of a draftsman and civil engineer named George Holt Henshaw, designed an organizational chart for the New York and Erie Railway to showcase the division of administrative duties and the number and class of employees engaged in each department (Organimi 2020, Lanteria 2021). This chart was initially referred to as a "Diagram representing a plan of organization" and was not yet called an organizational chart (Organimi 2020, Pingboard).

The terminology "organization chart" became more common in the early 20th century, and by 1914, a certain Brinton advocated for broader use of organizational charts (Wikipedia Organizational Chart). The use of organizational charts gained more traction in industrial engineering circles and became more popular among businesses and enterprises in the latter half of the 20th century (Miro 2021).

These early developments set the stage for modern organizational charts, which have become crucial tools for delineating responsibilities, hierarchy, and the structural framework within organizations across various sectors. From a historical perspective, a study of an organization's charts over the years can shed light on how the organization evolves, especially when cross-referencing that evolution with other changes, such as the introduction of electronic resources in libraries.

This complexity is very interesting. The growth in electronic resources, associated technologies, and markets do not explain all of it: knowledge has become more specialized, and library organizational structure reflects that; student populations have grown considerably since then, in size and heterogeneity, and library structure will reflect that; and the theory and praxis of library management has evolved throughout the decades, and library structure will reflect that. Other issues are at play, and it is certainly true that they are all interconnected. However, I do think that technology and e-resources account for a large portion of the increasing complexity that we see here in the difference between these two organizational charts at different times of the Library's history.

But again, consider the influence of format. Lamothe (2015) finds that if e-reference sources are collected and perpetually updated, then they get continually used; if an e-resource is static, usage declines. I hope additional studies pursue this line of questioning because it raises questions about the expectations patrons have about content, perhaps about how fresh they expect that content to be. It also suggests that a resource like Wikipedia has an advantage, since many articles on Wikipedia are regularly updated (although not all), which might lead to a perception of Wikipedia as fresh relative to what is in a library's collection.

Open Educational Resources

Let's switch topics to discuss Open Educational Resources (OER), which are a hot topic these days. Textbook prices, England et al. (2017) note, have skyrocketed in recent decades. It's suggested that college students budget up to $1,240 per year for books and supplies, and public elementary and secondary schools expend nearly 2.5 billion dollars per year on textbooks. In response, libraries have notably moved to highlight open educational resources at some level. For example, UK Libraries provides resources about Open Educational Resources and a LibGuide on OER.

Sites such as oercommons.org, OpenStax, LibreTexts, and others function as digital libraries of open access educational resources. The idea is to eliminate these exorbitant costs, which place great burdens on the taxpayers, families, and students who pay them, by providing high quality, open access textbooks and other educational resources.

Most of these resources are, I believe, promoted to faculty as replacements for proprietary textbooks. However, it's useful to ask whether libraries ought to collect and acquire these resources, which would involve promoting OER at a whole other level. For example, librarians could catalog OER items and add records for them in their catalogs or discovery systems (see Hill & Bossaller, 2013 for a comparable discussion). Or should libraries not be involved at all? This might seem like a question with an obvious answer, but libraries, public or academic, have not traditionally collected textbooks. So, should they? Would this change their fundamental mission? Would it change the game for them as educational institutions?

Aside: If interested in following developments in open educational resources, then I highly recommend subscribing to the SPARC Open Education Forum email list.

Collection Development Policies of Electronic Resources

Finally, I'd be remiss if I did not discuss the importance of having a collection development policy (CDP) and using that policy to guide the collection, acquisition, and assessment of electronic resources. I want to emphasize the importance of a CDP for e-resources. Unfortunately, not all libraries, even at major institutions, create or use a CDP. If you end up working at such a library, I highly encourage you to convince your colleagues of its importance. A CDP should define a collection and then include most if not all of the following topics:

  • mission, vision, and values statement
  • purpose of CDP statement (scope may be included here)
  • selection criteria: this could be general but it could also include subsections that focus on specific populations, genres, resource types, and more
  • assessment and maintenance criteria
  • challenged materials criteria (especially important at public and K-12 libraries)
  • weeding and/or replacement criteria

It can be helpful to see how libraries with policies that cover electronic resources treat them. The following two CDPs, one from the University of Louisiana (UL) and one from the Lexington Public Library (LPL), do contain sections on electronic resources. The UL CDP is not their main policy but a sub-CDP that focuses on electronic resources. The LPL's policy is their main policy; although it does not include a long discussion of electronic resources, they are mentioned. Neither approach is wrong, because each is tailored to the specific library's purposes, community, and vision statement.


This section addressed collection development of electronic resources, how the format of electronic resources influences workflows, how electronic resources influence organizational hierarchies, how the movement to make educational resources open has extended to libraries, and how collection development policies address electronic resources.

Readings / References

England, L., Foge, M., Harding, J., & Miller, S. (2017). ERM Ideas & Innovations. Journal of Electronic Resources Librarianship, 29(2), 110–116. https://doi.org/10.1080/1941126X.2017.1304767

Lamothe, A. R. (2015). Comparing usage between dynamic and static e-reference collections. Collection Building, 34(3), 78–88. https://doi.org/10.1108/CB-04-2015-0006

Martin, H., Robles-Smith, K., Garrison, J., & Way, D. (2009). Methods and Strategies for Creating a Culture of Collections Assessment at Comprehensive Universities. Journal of Electronic Resources Librarianship, 21(3–4), 213–236. https://doi.org/10.1080/19411260903466269

Additional References

Harloe, B., & Budd, J. M. (1994). Collection development and scholarly communication in the era of electronic access. The Journal of Academic Librarianship, 20(2), 83–87. https://doi.org/10.1016/0099-1333(94)90043-4

Hill, H., & Bossaller, J. (2013). Public library use of free e-resources. Journal of Librarianship and Information Science, 45(2), 103–112. https://doi.org/10.1177/0961000611435253

Chapter Four: Patrons

The chapter on patrons addresses the critical intersection between users and electronic resources within libraries. Its three main sections explore: 1. user experience, which offers an analysis of how patrons interact with digital materials, emphasizing usability, accessibility, and satisfaction; 2. the evaluation and management of e-resource usage, which provides an examination of methods to assess, monitor, and enhance the utilization of e-resources to help ensure that librarians meet the diverse and evolving needs of their communities; and 3. security and privacy concerns, which sheds light on the imperative of safeguarding user information and privacy in the digital age, including considerations related to authentication, data protection, and ethical handling of personal information. Together, these sections reflect a user-centric approach to e-resource management and serve as an essential guide for ERM librarians committed to enhancing patron engagement, trust, and satisfaction.

User Experience


What is user experience? Dickson-Deane and Chen (2018) write that "user experience determines the quality of an interaction being used by an actor in order to achieve a specific outcome" (Intro section, para 1). Parush (2017) highlights adjacent terms like human-computer interaction (HCI) and usability. Let's say then that HCI encompasses the entire domain of interaction between people and computers and how that interaction is designed and that user experience (UX) focuses on the quality of that interaction. These are not precise definitions. Some might use the terms UX and HCI interchangeably. As ERM librarians, though, the job is to focus on the quality of the patron's experience with electronic services, and this entails understanding both the systems and technologies involved and the users interacting with these systems and technologies.

Dickson-Deane and Chen (2018) outline the parameters involved with UX. Let me modify their example and frame it within the context of an ERM experience for a patron:

  • Actor: A user of the web resource, like a library website.
  • Object: The web resource, or some part of it.
  • Context: The setting.
    • What's happening?
    • What's the motivation for use?
    • What's the background knowledge?
    • What's the action?
  • User Interface: The tools made available by the object, and its look and feel.
    • More specifically, Parush (2017) states that "the user interface mediates between the user and computer" and it includes three basic components:
      • controls: The tools used to control and interact with the system: buttons, menus, voice commands, keyboards, etc.
      • displays: The information presented to the user or the hardware used to present the information: screens, speakers, etc.
      • interactions and dialogues: The exchange between the system and the user including the use of the controls and responding to feedback.
  • Interaction: What the actor is doing with the UI
  • (Intended/Expected/Prior) User experience: The intended or expected use of the object. The user's expectations based on prior use.
  • (Actual) User experience: The actions that took place; the actions that had to be modified based on unintended results.

These parameters would be helpful for devising a UX study that involves observing patrons interacting with a system and then interviewing them to fill in the details. Note that the same kind of systematic thinking can be applied to evaluate other user experiences, like those between a librarian and an electronic resource management system. Often the focus is on patron user experience, but it's just as important to evaluate UX for librarians and to consider UX when selecting an ERM system or an ILS.
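
To make these parameters concrete, here is a minimal sketch, in Python, of how they might structure the notes from a single observed interaction in such a study. The class name, field names, and the example scenario are my own, not from Dickson-Deane and Chen (2018):

```python
from dataclasses import dataclass

@dataclass
class UXObservation:
    """One observed interaction, organized by the UX parameters above."""
    actor: str          # the user of the web resource
    obj: str            # the web resource, or some part of it
    context: str        # setting, motivation, background knowledge, action
    interface: str      # controls, displays, and dialogues involved
    interaction: str    # what the actor did with the UI
    expected_ux: str    # the intended/expected use, based on prior experience
    actual_ux: str      # what actually happened, including workarounds

obs = UXObservation(
    actor="first-year student",
    obj="library A-Z database list",
    context="finding sources for a history paper; little prior library use",
    interface="search box and subject filter menu",
    interaction="typed the course name into the database search box",
    expected_ux="expected article results, as with a web search engine",
    actual_ux="got zero matching databases and left the page",
)
# The gap between expected and actual experience is what the follow-up
# interview would explore.
print(obs.expected_ux != obs.actual_ux)
```

Recording observations in a consistent structure like this makes it easier to compare sessions across participants and to spot recurring mismatches between expected and actual experience.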

In any case, these parameters help us step through and highlight the complicated process of interacting with a computer, generally, or a library resource, more specifically. As with many other topics we've discussed here, we can also incorporate these parameters into a workflow for evaluating UX.

Complex Library Websites

It is due to these complexities, and to an overemphasis on the systems themselves, that Pennington (2015) argues for a more UX-centered approach to library website design. Think about your own state of knowledge of ERM before you started learning about this area of librarianship. For example, now that you know somewhat how link resolvers function, your experience using them as patrons and your understanding of their technical aspects as librarians provide you with a set of skills and experiences that make you more likely to identify the cause of a malfunction when you find one. With this ability to suss out an issue, it becomes easier to solve, and the experience itself involves less anxiety. However, most users and patrons will not have any technical knowledge of these systems. Thus, when the systems break on them, their frustration might lead to unfortunate outcomes: they may not retrieve the information they need; they may not reach out to a librarian for help; or they may stop using the library's resources in favor of something of inferior quality. We need to remember that this happens and, if possible, build in proactive troubleshooting processes that anticipate and solve problems before patrons encounter them.

Here's the crux, though. As you gain more skill and expertise with these systems, you will eventually lose the ability to see these systems as a novice user, and that distance will only grow over time. It is therefore, as Pennington (2015) argues, important to gather data from users. User experience research nurtures a user centered mindset.

Indeed, Kraft et al. (2022) used focus groups and surveys to collect user experience data on a library's implementation of its A-Z Database List. The results of this study are interesting. As Kraft et al. (2022) point out, librarians have long made efforts to reduce the use of library terminology in their messaging to patrons, since it only serves as a point of confusion. Their focus group participants, however, described contrasting opinions about how color was used on the site, and that use of color had some fairly dramatic effects on which sources participants selected to pursue.

The Data That Exists

In addition to user studies, which require conducting direct research, and prior studies, which require literature searches, we should know that libraries already possess a wealth of data to explore, and this data can provide needed insight. Here, as we've learned before, workflows play an important role in applying mechanisms to track, report, and fix problems with various electronic resource systems. Browning (2015), for example, describes the use of Bugzilla, bug-tracking software commonly used in software development, to track and generate reports about what breaks. Once problems are identified, they can be categorized and assigned to facilitate quick solutions. Thus, whereas one approach is to understand what we can learn from data about usage (Fry, 2016), Browning (2015) describes what we can learn from data about breakage. Both kinds of data offer substantial understanding of user experience.
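
A DIY version of this kind of categorization does not require much tooling. The sketch below, with entirely invented ticket data and cause categories, tallies problem reports so the most frequent kind of breakage can be prioritized:

```python
from collections import Counter

# Invented problem reports, in the spirit of Browning's (2015) DIY
# breakage tracking: each ticket is a (date, cause category) pair.
tickets = [
    ("2024-09-02", "broken link resolver target"),
    ("2024-09-03", "proxy authentication failure"),
    ("2024-09-05", "broken link resolver target"),
    ("2024-09-09", "vendor platform outage"),
    ("2024-09-11", "broken link resolver target"),
]

# Tally tickets by cause; the most common cause is the first fix to assign.
by_cause = Counter(cause for _, cause in tickets)
print(by_cause.most_common(1))  # [('broken link resolver target', 3)]
```

Even this tiny tally illustrates the point of Browning's approach: once breakage reports are categorized consistently, the data itself suggests where troubleshooting effort should go first.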


I agree with McDonald (2016) that despite having around 30 or so years of experience with web-based and other electronic resource types, we are still in the throes of disruption. There's much yet to learn about design for the web, just like there's a lot left to learn about how to design a home or office, and nothing will be settled for a while. Although I doubt there will ever be any single dominant user experience or user interface, since there are many cultures, backgrounds, and aesthetics, I'm fairly sure the low-hanging-fruit problems will be worked out soon enough. Remember, though, that much of this complexity is due to copyright issues, which necessitate the entire electronic resource ecosystem and the complications introduced by working with vendors who work with different, but overlapping, publishers, and so on. If something were to change about copyright, then it's a whole new ballgame.

On a final note, you might be wondering how information seeking is related to HCI and to UX. For example, we learned from Kraft et al. (2022) that color can influence which resources patrons investigate. Anytime we interact with a computer (broadly speaking) in order to seek information, we have an overlap with UX. There are areas of non-overlap, too. We don't always use computers to look for information, and we don't always look for information on computers. UX is like this, too: UX is not always about computers but can be about user experience generally. I bring this up because if you do become involved with UX work at a library (or elsewhere), then I'd encourage you to refer also to the information seeking and related literature when it's appropriate to do so. Remember, it's all about users, and it's also all interconnected.

Readings / References

Browning, S. (2015). Data, Data, Everywhere, nor Any Time to Think: DIY Analysis of E-Resource Access Problems. Journal of Electronic Resources Librarianship, 27(1), 26–34. https://doi.org/10.1080/1941126X.2015.999521

Kraft, A., Scronce, G., & Jones, A. (2022). Virtual focus groups for improved A-Z list user experience. The Journal of Academic Librarianship, 48(4), 102541. https://doi.org/10.1016/j.acalib.2022.102541

Pennington, B. (2015). ERM UX: Electronic Resources Management and the User Experience. Serials Review, 41(3), 194–198. https://doi.org/10.1080/00987913.2015.1069527

Additional References

Adams, A. L., & Hanson, M. (2020). Primo on the Go: A Usability Study of the Primo Mobile Interface. Journal of Web Librarianship. http://www.tandfonline.com/doi/abs/10.1080/19322909.2020.1784820

Dickson-Deane, C., & Chen, H. O. (2018). Understanding user experience. In M. Khosrow-Pour (Ed.), Encyclopedia of Information Science and Technology (4th ed., pp. 7599–7608). IGI Global. https://doi.org/10.4018/978-1-5225-2255-3.ch661

Hamlett, A., & Georgas, H. (2019). In the Wake of Discovery: Student Perceptions, Integration, and Instructional Design. Journal of Web Librarianship, 13(3), 230–245. https://doi.org/10.1080/19322909.2019.1598919

Parush, A. (2017). Human-computer interaction. In S. G. Rogelberg (Ed.), The SAGE Encyclopedia of Industrial and Organizational Psychology (2nd edition, pp. 669–674). SAGE Publications, Inc. https://doi.org/10.4135/9781483386874.n229

Pennington, B., Chapman, S., Fry, A., Deschenes, A., & McDonald, C. G. (2016). Strategies to Improve the User Experience. Serials Review, 42(1), 47–58. https://doi.org/10.1080/00987913.2016.1140614

Evaluation and Statistics


We've discussed problems with defining terms, and we have learned that much effort has been expended on standardizing them. We have also seen that the topics we've covered---technologies, standards, access, usability, workflow, markets, licensing---are linked in some way. All this complexity makes the measurement of usage that much more complicated. The complication arises because when electronic resources (or basically any activity on the web and internet) are accessed from client machines, a server somewhere keeps a log of that access. Having logs available makes it seem that we can have accurate data about usage, but that's not guaranteed, and the insight we may glean is difficult to acquire no matter how much data is available to us.

999.999.999.999 - - [18/Nov/2022:04:40:38 +0000] "GET /index.html HTTP/1.1" 200 494 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0"

Example web server access log entry, with obfuscated IP address. Simply by visiting a site, the web server is able to log the client's IP address, the timestamp, the page the client requested, the client's operating system type and version, and the client's web browser type and version.

Access log data like the above can be good data to explore, but we have to be mindful that all data has limitations and that there are different ways to define what usage means. For example, the log snippet above indicates that I visited a page named index.html on that server, but does that mean I really used that website even though I accessed it? Even if we can claim that I did, what kind of use was it? Can we tell? (We can actually learn quite a lot from web server access logs, and there is software, like Google Analytics, that can collect additional usage data.)
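
To see how much structure such an entry carries, here is a minimal Python sketch that parses a combined-format log line like the one above into named fields. The pattern covers the common Apache/nginx combined log format; real logs vary, and the field names are my own:

```python
import re

# Combined log format: IP, identity, user, [timestamp], "request",
# status, size, "referrer", "user agent".
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

entry = ('999.999.999.999 - - [18/Nov/2022:04:40:38 +0000] '
         '"GET /index.html HTTP/1.1" 200 494 "-" '
         '"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) '
         'Gecko/20100101 Firefox/107.0"')

m = LOG_PATTERN.match(entry)
if m:
    # Each named group is now a separate, analyzable field.
    print(m.group("path"), m.group("status"))  # /index.html 200
```

Once lines are split into fields like this, questions about usage (which pages, which clients, which time periods) become simple counting problems, though the interpretive questions raised above remain.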

As with other things we have discussed, there have been efforts to standardize electronic resource usage. It's an important process because usage data informs collection development and benefits the library in other ways. Discussions about usage do belong to the domain of electronic resource librarianship, but it also overlaps with other areas of librarianship, such as systems librarianship or collection development. Here we might see job titles like library systems administrator.

Project Counter

Project Counter is the primary attempt to standardize how usage is defined, measured, collected, and shared. It maintains a Code of Practice that provides for informative and consistent reporting of electronic resource usage. From Project Counter:

Since its inception in 2002, COUNTER has been focused on providing a code of practice that helps ensure librarians have access to consistent, comparable, and credible usage reporting for their online scholarly information. COUNTER serves librarians, content providers, and others by facilitating the recording and exchange of online usage statistics. The COUNTER Code of Practice provides guidance on data elements to be measured and definitions of these data elements, as well as guidelines for output report content and formatting and requirements for data processing and auditing. To have their usage statistics and reports designated COUNTER compliant, content providers MUST provide usage statistics that conform to the current Code of Practice.

These reports were designed to address a problem that will likely never be completely solved, but they are still an important and useful effort. The main goal of Counter is to provide usage reports, and for version 5 of Counter the reports cover four major areas:

  • Platforms
  • Databases
  • Titles
  • Items

And you can see which reports these four replace in a table in Appendix B of the Code of Practice.

Counter 5 was designed to include better reporting consistency, better clarity of metrics that measure usage activity, better views of the data, and more. In order to clarify the purpose of Counter, let's review the introduction to the Code of Practice, which articulates the purpose, scope, application, and more of Counter.

Pesch (2017) provides a helpful introduction to the history of Project Counter and the migration from Counter version 4 to version 5. Table 1 in Pesch describes the four major reports. Most of the reports are self-explanatory. Database, Title, and Item reports cover what they describe, but Platform reports might be less obvious. These reports include usage metrics at the broadest level, for things like EBSCOhost databases, ProQuest databases, SAGE resources, Web of Science databases, and so on. They come into play when users/patrons search in the overall platform but not in any single database provided by the platform. For example, UK Libraries subscribes to the ProQuest databases, and for us, that includes 35 primary databases. Users can search many at the same time or search any single one. The same holds for platforms like EBSCOhost, Web of Science, and others. This is the platform level.
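
As a rough illustration of what working with a delivered report looks like, the sketch below totals a heavily simplified, invented COUNTER-style Title report by metric type. Real COUNTER 5 reports carry a multi-row header, many more columns, and month-by-month counts, so treat this only as the general shape of the task:

```python
import csv
import io

# Invented, heavily simplified COUNTER-style Title report data.
report = """\
Title,Publisher,Metric_Type,Reporting_Period_Total
Journal of Examples,Example Press,Total_Item_Requests,412
Journal of Examples,Example Press,Unique_Item_Requests,388
Annals of Samples,Sample House,Total_Item_Requests,57
"""

# Sum the reporting-period totals for each metric type across titles.
totals = {}
for row in csv.DictReader(io.StringIO(report)):
    metric = row["Metric_Type"]
    totals[metric] = totals.get(metric, 0) + int(row["Reporting_Period_Total"])

print(totals)  # {'Total_Item_Requests': 469, 'Unique_Item_Requests': 388}
```

The value of the Code of Practice is precisely that the column names and metric definitions are standardized, so a small script like this works across compliant vendors rather than needing to be rewritten for each one.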

Scott (2016) illustrates a nice use case for how Counter reports can inform collection development. We've addressed the Big Deal packages that more libraries are trying to move away from because such deals often include access to titles that are not used or not relevant to a library community. Here Scott shows that it might be possible to use this data to avoid subscribing to some services, but it's also important to closely read and understand the problems associated with interlibrary loan, the metrics, and other limitations described in the Conclusion section of the article.

The Value of Metrics

We move away from Project Counter with Stone and Ramsden (2013). I introduce this article because it highlights how metrics can be used to assess the value of a library, which is often underestimated by administrations yet constantly needed in order to garner the resources required to improve or sustain a library's services. Here Stone and Ramsden investigate the correlation (not causation) between library usage and student retention. Increasing the latter is the Holy Grail of colleges and universities. If this were a public library report, it might be interesting to see how well electronic library usage correlates with continued usage and how such a correlation might result in various outcomes defined by the library. One nice thing about the Stone and Ramsden article is that it does not depend on quantitative metrics alone but supports its findings through qualitative research. There's only so much a usage metric can say.

Using Metrics

I would like you to be aware of the code{4}lib Journal, and this article by Zou (2015) is pretty cool. Although the article overlaps with some security issues, a topic we'll cover in the final section, it also provides a way of thinking outside the box about the metrics you have access to as an electronic resource librarian. Here, Zou describes a process of taking EZproxy logs (compare the example entry with the web server entry I included above) and turning them into something useful and dynamic by incorporating some additional technologies. Recall that EZproxy is software that authenticates users and provides access given that authentication. We use EZproxy at UK whenever we access a paywalled journal article. That is, you've noticed the ezproxy.uky.edu string in any URL for a journal that you've accessed via UK Libraries' installation of EZproxy, and https://login.ezproxy.uky.edu/login is the login URL. Zou specifically references the standard way of analyzing these logs (take a look at the page at that link), which can be insightful and helpful, but Zou's method makes the analysis of these logs more visual and real-time. The main weakness with Zou's method is that it seems highly dependent on Zou doing the work; if Zou leaves their library, then this customized analysis might not last. Still, it's good to know that if you have an interest in developing skills with systems administration, with various other technologies, and with some basic scripting, this kind of thing, and more, is possible.

Getting Creative

Smith & Arneson (2017) detail very creative and fun ways to collect data about resource usage when vendors do not provide usage data. In the first part of the article, Smith describes how they analyzed their link resolver reports to infer what users were accessing in their collections. Arneson's section describes using grep, a Unix text-search utility, to construct search queries against the EZproxy logs and deduce usage of specific electronic resources. Since both methods require sifting through log entries like the one I highlighted above, the process requires some sleuthing, testing, time, and patience. However, once figured out, the process and reports can easily be automated.
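
Here is a minimal sketch of the grep approach, using an invented, simplified log. Real EZproxy log formats are configurable and will differ from this toy layout, so the field positions and patterns below are assumptions for illustration only:

```shell
# Build a tiny, invented EZproxy-style log for illustration.
cat > /tmp/ezproxy_sample.log <<'EOF'
10.0.0.1 user1 [18/Nov/2022:04:40:38 +0000] "GET http://www.jstor.org/stable/123 HTTP/1.1" 200 494
10.0.0.2 user2 [18/Nov/2022:04:41:02 +0000] "GET http://www.example.com/page HTTP/1.1" 200 101
10.0.0.1 user1 [18/Nov/2022:04:42:10 +0000] "GET http://www.jstor.org/stable/456 HTTP/1.1" 200 500
EOF

# How many log lines touched the resource of interest?
grep -c 'jstor\.org' /tmp/ezproxy_sample.log   # prints 2

# How many distinct users accessed it? (field 2 is the username here)
grep 'jstor\.org' /tmp/ezproxy_sample.log | awk '{print $2}' | sort -u
```

Once queries like these are worked out against the library's actual log format, they can be dropped into a cron job so the counts arrive as regular reports, which is the automation Smith and Arneson describe.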


Librarians used a variety of techniques to collect usage data in the print era, but like many things we've learned about, electronic resources have complicated matters. Because more data is available about usage with electronic resources, before that data can be used, it has to be defined. Project Counter is an attempt to define what usage means and how to report it.

Quantitative metrics will never be able to provide a complete picture of how a library's collections are used, but they are an important part. Not only do they help librarians manage their collections, they also help librarians show proof of their collections' importance to their communities. Furthermore, with a little skill, practice, and creativity, usage logs can also be used to build cool apps (Zou, 2015) or to help fill the gaps when vendors fall short in their requirements (Smith & Arneson, 2017).

Readings / References

Pesch, O. (2017). COUNTER Release 5: What’s New and What It Means to Libraries. The Serials Librarian, 73(3–4), 195–207. https://doi.org/10.1080/0361526X.2017.1391153

Scott, M. (2016). Predicting Use: COUNTER Usage Data Found to be Predictive of ILL Use and ILL Use to be Predictive of COUNTER Use. Serials Librarian, 71(1), 20–24. https://doi.org/10.1080/0361526X.2016.1165783

Stone, G., & Ramsden, B. (2013). Library impact data project: Looking for the link between library usage and student attainment. College & Research Libraries, 74(6). http://doi.org/10.5860/crl12-406

Zou, Q. (2015). A novel open source approach to monitor Ezproxy Users’ activities. code{4}lib Journal, 29. http://journal.code4lib.org/articles/10589

Smith, K., & Arneson, J. (2017). Determining usage when vendors do not provide data. Serials Review, 43(1), 46–50. https://doi.org/10.1080/00987913.2017.1281788

Privacy and Security

Breeding (2016) begins with the following statement:

Libraries have a long tradition of taking extraordinary measures to ensure the privacy of those who use their facilities and access their materials.

This is mostly true but not entirely so. When I was an undergraduate, I remember going to the library to look for books on a sensitive topic. I saw a book on the shelves that looked relevant, and when I pulled it off the shelf and opened it, I noticed that a friend of mine had checked the book out before me because their name was written on the due date card in their handwriting. Even though I had grown up with these due date cards in library books, it had never occurred to me before then how these cards could pose a problem with privacy. At the time, I decided not to check out that book because of that issue.

We might be comforted in thinking that the kind of information revealed to me in the book my friend had checked out would not scale up easily. It was a serendipitous event: I was looking for a book on the same topic and just happened to pick the one book my friend had used. It's not likely, then, that this would pose a big problem at scale.

However, let's think of the information on that due date card as metadata, and then ask: how could we use it? The sociologist Kieran Healy did that kind of thing with membership lists from colonial times. He showed that using limited data like what I found in that book, some important things could be discovered. For example, Healy imagined that if the British had had access to simple social network analysis methods in 1772, they could have identified Paul Revere as a patriot and used that information to prevent or interfere with the American Revolution. I encourage you to read his blog entry and his follow-up reflection because it is a neat what-if case study.
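
The arithmetic behind this kind of analysis is simple enough to sketch. Treat each due date card as metadata linking borrowers to books; projecting that borrower-by-book table onto borrowers alone reveals who shares reading interests, which is the same move (multiplying a membership matrix by its transpose) that Healy demonstrates with group memberships. The names and titles below are invented:

```python
# Invented checkout metadata: which borrowers' names appeared on which
# books' due date cards.
checkouts = {
    "alice": {"Book A", "Book B", "Book C"},
    "bob":   {"Book B", "Book C"},
    "carol": {"Book D"},
}

# Project the borrower-by-book table into borrower-by-borrower counts
# of co-borrowed titles, via pairwise set intersections.
borrowers = list(checkouts)
co_borrowed = {
    (a, b): len(checkouts[a] & checkouts[b])
    for i, a in enumerate(borrowers)
    for b in borrowers[i + 1:]
}
print(co_borrowed)
# {('alice', 'bob'): 2, ('alice', 'carol'): 0, ('bob', 'carol'): 0}
```

With more borrowers and books, counts like these become a social network, and even this thin metadata starts to expose reading habits, which is exactly why circulation records deserve protection.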

Most libraries in North America have replaced due date slips with barcodes, and while this has removed the problem above, the overall migration from paper-based workflows to electronic ones has raised other problems. Not long after the Patriot Act was passed after 9/11, FBI agents ordered a Connecticut librarian to "identify patrons who had used library computers online at a specific time one year earlier". Per the law, the librarians involved were placed under a gag order, which prevented them from speaking out. This led to a lawsuit against the US Attorney General. Eventually the librarians were released from the gag order and allowed to discuss the event.

There are occasionally big, dramatic cases like the one described above, but privacy and security issues are often much more mundane, though still quite important. Since many users of libraries of all types visit library homepages, encrypting all of the web traffic is important. A couple of years ago, the major web browsers announced that they would no longer support Transport Layer Security (TLS) protocol versions 1.1 or earlier, and that any site that had not yet migrated to TLS version 1.2 or above would not be accessible. TLS is used to encrypt web traffic. This news came out in early March 2020, and the pandemic followed soon after, so the browser vendors postponed blocking poorly encrypted websites for a few months. It still took a while for some websites to begin using the new version of TLS, and some sites were inaccessible for a time even with the advance notice.

In fact, for a while I had to enable an insecure connection to libraries.uky.edu in Firefox if I wanted to visit it. When I enabled it, my activity on libraries.uky.edu was potentially visible to others under certain conditions. Note, however, that once I signed into my library account, I was transferred to Primo's cloud service and then to UK's OAuth page, and those parts of the encryption chain are fairly strong. So it was only activity specifically on libraries.uky.edu that was poorly protected at the time.
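
The browsers' requirement can be expressed in a few lines of code. As a sketch using Python's standard-library ssl module, a client can be configured to refuse anything older than TLS 1.2, which is essentially the policy the browser vendors adopted:

```python
import ssl

# A client context that refuses TLS 1.1 and earlier, mirroring the
# minimum-version policy the major browsers began enforcing in 2020.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version.name)  # TLSv1_2
```

A client configured this way would simply fail to connect to a server stuck on TLS 1.0 or 1.1, which is the experience patrons had with laggard websites during that transition.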

Breeding (2016) introduces a variety of technologies and policies related to security and privacy. These encompass important technological considerations, like web traffic encryption, as well as important policy considerations, like how third-party vendors implement privacy and security mechanisms (as in the Primo example above). DiVittorio and Gianelli (2021) discuss privacy and security issues with third-party vendors. Overall, their findings highlight the lack of alignment between the values of librarians and the profit-based motives of vendors. It's also important to note how unresponsive many vendors were to their requests to participate in data collection. Remember that SERU has a section in its recommended practice dedicated to Confidentiality and Privacy. Even if you work at a library that does not use SERU, this is how SERU can be useful to you: it can inform us of the kinds of provisions a library ought to have in a license if the default provisions a vendor proposes do not include the necessary components.

There are pros and cons to intentionally choosing services that make library usage less private. A number of library sites use Google Analytics to track site usage and other metrics. Understanding how patrons use library websites is important, but it means that our actions on these sites, albeit somewhat anonymized, are being collected and stored by Google (or some other analytics service). Also, many websites, library websites included, use fonts that are hosted on other servers, and doing so entails another tracking mechanism: anytime someone visits a page containing a font sourced from another server, a log entry of that request is added on the font server. There's a trade-off, as I mentioned. If we want to learn how users interact with library websites in order to improve usability and accessibility, then we have to have some of this data. Here's where a privacy policy might come into play.


Like all things I've covered in this work, the move to electronic resources has disrupted how we think about and handle the privacy and security of our patrons. The DiVittorio and Gianelli (2021) article, in fact, highlights how more than ever, much of what we might like to protect and keep secure is completely out of librarians' control, and that this has potential ramifications for our communities, and especially, for our marginalized and unprotected ones.

Readings / References

Breeding, M. (2016). Chapter 1. Issues and technologies related to privacy and security. In Privacy and Security for Library Systems. Library Technology Reports, 52(4), 5-12. http://dx.doi.org/10.5860/ltr.52n4

DiVittorio, K., & Gianelli, L. (2021). Ethical financial stewardship: One library’s examination of vendors’ business practices. In the Library with the Lead Pipe. https://www.inthelibrarywiththeleadpipe.org/2021/ethical-financial-stewardship/