Accepted Conference Talks

This list represents the accepted conference talks that you can expect to hear at the conference. Thank you to all who submitted talks this year.

Programmatic approaches to bias in descriptive metadata

“Cleaning” descriptive metadata is a frequent task in digital library work, often enabled by scripting or OpenRefine. But what about when the issue at hand isn’t an odd schema, trailing whitespace, or inconsistent capitalization, but pervasive racial or gender bias in the descriptive language? Currently, the work of seeking to...
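The abstract doesn't name a specific technique, but one minimal, scriptable starting point is flagging records whose descriptions contain terms from a review lexicon. The term list and record layout below are hypothetical illustrations only; real work would draw on community-maintained vocabularies and human judgment, not a toy set:

```python
import csv
import io

# Hypothetical lexicon of outdated or biased descriptors to flag for human
# review. A real project would use community-maintained lists, not this toy set.
FLAG_TERMS = {"primitive", "exotic", "savage"}

def flag_records(rows):
    """Yield (record_id, matched_terms) for records whose description
    contains any flagged term, case-insensitively."""
    for row in rows:
        words = {w.strip(".,;:").lower() for w in row["description"].split()}
        hits = sorted(FLAG_TERMS & words)
        if hits:
            yield row["id"], hits

# Illustrative two-record CSV in place of a real metadata export.
sample = io.StringIO(
    "id,description\n"
    "rec1,Photograph of an exotic dancer\n"
    "rec2,Portrait of a family\n"
)
flagged = list(flag_records(csv.DictReader(sample)))
```

Flagging is only triage: each hit still needs a person to decide whether and how to remediate the language.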

Building A Better Database List with APIs

With a combination of automation and human attention, we were able to build a better Database A-Z List for our users and streamline our workflow by feeding LibGuides the data from our ILS’s available APIs. In this talk, I will cover how we achieved our goals and the lessons we...

Looking for Some Hot Stuff: Collaborating on the Disco Index Project

The MIT Libraries has recently experimented with creating our own Discovery Index, an indexing platform that will be used to populate searches and discovery from multiple sources across the Libraries via a public API. The “Disco Index” will allow us to rely less on vended sources while maintaining more control...

Providing Computational Access to Records of American Capital Punishment

This talk will overview a two-year project to digitize and expose data from the most complete collection on American executions using the open architectures of Hyrax and Arclight. We’ll show how connecting data to digital source material provides important context for executions with limited or conflicting documentation, and allows for...

Machine Learning and Metadata with the Charles Teenie Harris Archive

In July 2018, the project team (co-led by an archivist and a creative technologist) conducted a one-week intensive dedicated to scripting and testing experimental code, documenting the limitations, capabilities, and costs of machine learning, text parsing, computer vision, and crowdsourcing technologies in making a meaningful contribution to archival...

Enrich Library Collection Analysis using Python

At our library, we have initiated a new project to harvest data from the CrossRef API using Python in order to understand faculty publication and citation preferences. I’ll present how we can query data based on author, DOI, or title. I’ll describe how I pull and parse the JSON data...
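As a rough sense of what querying CrossRef by author or title looks like, here is a stdlib-only sketch. It is not the speaker's code; the choice of `query.author` and `query.bibliographic` parameters reflects the public CrossRef REST API, but the helper names and response handling are our own assumptions:

```python
import json
import urllib.parse
import urllib.request

CROSSREF = "https://api.crossref.org/works"

def build_query_url(author=None, title=None, rows=5):
    """Build a CrossRef /works query URL. (For a known DOI you would
    request CROSSREF + '/' + doi directly instead.)"""
    params = {"rows": rows}
    if author:
        params["query.author"] = author
    if title:
        params["query.bibliographic"] = title
    return CROSSREF + "?" + urllib.parse.urlencode(params)

def parse_items(payload):
    """Pull DOI and first title out of a CrossRef JSON response body."""
    return [
        {"doi": item.get("DOI"), "title": (item.get("title") or [""])[0]}
        for item in payload["message"]["items"]
    ]

# Live use (network call, so commented out here):
# payload = json.load(urllib.request.urlopen(build_query_url(author="Smith")))
# results = parse_items(payload)
```

The same `parse_items` shape works for both search results and single-DOI lookups, since CrossRef wraps both in a `message` envelope.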

Of Ethics, Users, and Data: Building a National Conversation for Web Privacy and Web Analytics

Privacy is still a thing today—despite the best efforts of our favorite commercial web giants and state governments to turn privacy fully into a thing of the past. In the face of overwhelming surveillance and tracking pressures, libraries continue fighting for privacy on behalf of ourselves and our communities.

To...

Natural Language Processing for Discovery of Born-Digital Records

How do we move from discussing new technologies to actually implementing them? This presentation will cover several applications of NLP (natural language processing) for improving discovery of born-digital records in special collections and archives, focusing on two NLP-centered projects at the North Carolina State University Special Collections Research Center. The...
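The abstract doesn't specify which NLP techniques the projects used, but a common entry point for improving discovery is keyword extraction over full text. The sketch below is a deliberately crude term-frequency stand-in (real pipelines would use a library such as spaCy or NLTK); the stopword list and sample document are invented:

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real pipelines use much larger ones.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "for", "is", "on"}

def keywords(text, n=3):
    """Return the n most frequent non-stopword terms in a document --
    a crude stand-in for the keyword/entity extraction NLP provides."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return [term for term, _ in counts.most_common(n)]

doc = ("Correspondence regarding the university budget. Budget memos "
       "from the provost discuss the library budget for 1982.")
top_terms = keywords(doc)
```

Terms extracted this way can be indexed as access points, giving researchers something to search against before item-level description exists.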

Building REST API-backed Single Page Applications (SPAs) with Vue.js

Vue.js is a progressive JavaScript framework that is rapidly becoming a popular alternative to the likes of React and Angular for front-end web development, with adoption by companies including Adobe, GitLab, Facebook, and Alibaba. As a progressive framework, Vue.js does not need to be implemented as an entire framework, and...

Ethics for information professionals

“People who organize information for discovery and use not only make information accessible but also provide the lens through which others experience it. Designing information spaces involves making and imposing value choices, which positions us firmly in the realm of ethics. This topic is especially relevant as we hear more...

Consortial discovery and resource sharing: making it happen with (mostly) standard tools

With decreasing buying power in collections budgets and increasing emphasis on collaborative collection building across local and regional consortia, institutions may be looking for easier ways to expose and deliver these shared resources. The Triangle Research Libraries Network members (Duke, NCCU, NCSU, and UNC-CH libraries) have been building shared collections...

"Blockchain for Libraries" is Snake Oil

Blockchain technology has proven to be a plausible, perhaps miraculous, underpinning for the sale, transfer and tracking of large integers. Libraries need to become adept in blockchain technology to the extent that they want to license, track and lend large integers. In other words, not ever.

“Blockchain” is being used...

Aggregation Without Aggravation: auditing metadata at scale

As one of the Digital Public Library of America (DPLA)’s largest service hubs, Mountain West Digital Library aggregates metadata from over 70 institutions in the US Intermountain West. How can we efficiently and comprehensively audit thousands of metadata records for digitized special collections? We meet this challenge through the adoption...
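The abstract doesn't say which tooling the hub adopted, but the simplest form of at-scale auditing is a completeness check: every record must carry a non-blank value for each required field. The field names, identifiers, and sample records below are hypothetical:

```python
def audit(records, required=("title", "rights", "date")):
    """Report records missing or blank on any required field -- the kind
    of rule a hub might run before pushing metadata upstream to DPLA."""
    problems = {}
    for rec in records:
        missing = [f for f in required if not str(rec.get(f, "")).strip()]
        if missing:
            problems[rec.get("identifier", "<no id>")] = missing
    return problems

# Illustrative harvested records; field names are assumptions.
records = [
    {"identifier": "oai:1", "title": "Mining camp, 1901",
     "rights": "CC0", "date": "1901"},
    {"identifier": "oai:2", "title": "", "rights": "CC0"},
]
report = audit(records)
```

Because the rule set is data, partner-specific requirements can be layered on without changing the audit code itself.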

Webrecorder: Developing an Open-Source High-Fidelity Web Archiving Toolset

The talk will present the open source technology stack that powers Webrecorder and address some of the many challenges and possible solutions facing web archiving today. First, it will include a brief technical overview of high-fidelity web capture and replay, and general approaches to make high-fidelity web content accessible in...

GDPR for American Public Libraries

The General Data Protection Regulation of the European Union came into full effect in May 2018. The most visible impact of GDPR has been a cascade of cookie approvals and subscription confirmations to conform with the law. But what does a regulation in the EU mean for American public libraries? How...

Get To Know WCAG 2.1

In the summer of 2018, W3C published a new version of Web Content Accessibility Guidelines (WCAG). WCAG 2.1 fills in gaps that were identified in WCAG 2.0 to improve accessibility across devices and for users with additional types of needs, particularly those with low vision or cognitive disabilities. This session...

Machine Learning based metadata generation for library archives

As libraries and cultural heritage organizations continue to acquire and digitize cultural and historical treasures, in the hopes of making them available to the general public, it is important to create quality descriptive metadata to increase the visibility of content. Creating quality metadata is a time-consuming process, and not all...

Using MediaWiki + WikiBase as a platform for library linked data: a pilot study

In this talk, we will provide a high-level overview of the MediaWiki and Wikibase platform, share the details of our recent 16 library pilot project, highlight the advantages and disadvantages of the platform, review our extensions, and share lessons learned and evaluations from the project participant libraries. Wikidata has evolved...

Algorithm Bias Study

Recent discourse in information literacy has raised questions about bias in the Google search algorithm. In our study, we consider whether pedagogy that raises awareness about how databases are designed by humans with pre-existing biases should be an important aspect of how librarians teach information literacy. As a first step...

Ringers of Jupyter: The Jupyter Notebook As Faux Web App

In the dark basement of an academic library, a project manager ponders. The current dilemma, involving data entry, metadata manipulation, and file management, is a poser. It requires a hefty dose of automation, a smattering of written instruction, a handful of hyperlinks, and manual examination of image files. Oh, and...

Shear forces: a conceptual model for understanding (and coping with) risk, change, and technical debt

A student searches in vain for a seat close enough to a power outlet to plug in her laptop. The IT department maintains a Windows XP server to support critical software for which there’s no modern replacement. Your local fitness studio has a never-used, wall-mounted iPod dock set into the...

Automating link management: When institutional infrastructure works against you

Managing resource links in academic libraries is increasingly challenging, especially in an e-preferred environment, as it involves keeping millions of links up to date at any given time. This presentation outlines a project undertaken in 2018 to solve this problem and the challenges encountered along the way.

I will begin...

Why building a complete index of open access to research articles is hard and how you can help

Our nonprofit has gathered every open access scholarly article (over 20 million of them) into one free database. Come hear about the obstacles in assembling the index: confusing definitions! poor metadata quality! standards that weren’t standard! And more! We’ll detail the issues, how we’ve overcome them with a completely open-source solution, and...

The Websites that Librarians Love... but are they ACCESSIBLE?!?

DuckDuckGo. LibGuides. Google Scholar. Libraries “love” these sites. Librarians recommend them. Our patrons sometimes use them. But is that a good thing? We have many metrics to evaluate how well a site performs. One of these (albeit often overlooked) is web accessibility. So how well do these sites fare when...
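Many accessibility criteria require human evaluation, but some checks are mechanical. As one illustrative example (not the speakers' methodology), the stdlib sketch below finds `<img>` tags with no `alt` attribute, a basic WCAG text-alternative check; note that an empty `alt=""` is valid for decorative images and is deliberately not flagged:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collect <img> tags that lack an alt attribute entirely.
    (alt="" is allowed for decorative images, so it is not flagged.)"""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing_alt.append(attrs.get("src", "<no src>"))

checker = AltChecker()
checker.feed('<p><img src="logo.png" alt="Library logo">'
             '<img src="chart.png"></p>')
```

Automated checks like this are a floor, not a ceiling: a page can pass them all and still be unusable with a screen reader.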

Project ReShare: Development & Piloting of an Open Source Resource Sharing Platform

Libraries desire a user-centered resource sharing state, where patrons have seamless and informed access to information in any library, in any format. ReShare, as an open source, community-owned resource sharing platform, will significantly expand libraries’ current resource sharing capabilities and capacity, putting the patron and learner at the center of...

Optimizing Library Web Content for Voice Search

About 20% of Google searches are currently voice searches. By 2020, it is likely that 50% of all searches in the United States will be done by voice. How can libraries ensure that the content we provide is adapted and optimized for people searching from their mobile devices or voice-activated...
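The abstract doesn't list its recommendations, but one widely used optimization for voice assistants is embedding schema.org structured data as JSON-LD so that question-and-answer content is machine-readable. The question text and hours below are invented values; only the `@context`/`@type` vocabulary comes from schema.org:

```python
import json

# Schema.org FAQPage markup -- a structured-data format that search
# engines and voice assistants can consume. Values are illustrative.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What are the library's hours?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "We are open 8am to 10pm, Monday through Friday.",
        },
    }],
}
snippet = ('<script type="application/ld+json">'
           + json.dumps(faq) + "</script>")
```

The resulting `<script>` block goes in the page `<head>` or `<body>`; a template engine would typically generate it from the library's hours data rather than hard-coding it.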

Code It Yourself! Teaching Collections Staff to Script

We will share the University of Georgia Libraries’ method for training collections staff to script using Python through a combination of a peer learning group and expert training from our Libraries’ developer. The peer learning group (Lib Learn Tech) provides a support group for staff to work through online training...