How To Import Excel Data into VuFind

by Jonathan Jacobsen Tuesday, January 08, 2019 4:52 PM

Recently we had a new client come to us looking for help with several subscription-based VuFind sites they manage, and ultimately to have us host them as part of our managed hosting service. This client had a unique challenge for us: 3 million records, available as tab-separated text files of up to 70,000 records each.

Most of the data sets we work with are relatively small: libraries with a few thousand records, archives with a few tens of thousands, and every so often, databases of a few hundred thousand, like those in the Arctic Health bibliography.

While VuFind and the Apache Solr search engine that powers it (and also powers our Andornot Discovery Interface) have no trouble with that volume of records, transforming the data from hundreds of tab-separated text files into something Solr could use efficiently was a pleasant challenge.

VuFind has excellent tools for importing traditional library MARC records, using the SolrMarc tool to post data to Solr. For other types of data, such as records exported from DB/TextWorks databases, we’ve long used the PHP-based tools in VuFind that use XSLTs to transform XML into Solr's schema and post it to Solr. While this has worked well, XSLTs are especially difficult to debug, so we considered alternatives.

For this new project, we knew we needed to write some code to transform the 3 million records in tab-separated text files into XML, and we knew from our extensive experience with Solr that it's best to post small batches of records at a time, in separate files, rather than one large post of 3 million! So we wrote a Python script to split the source data into separate files of about 1,000 records each, and also to remove invalid characters that had crept into the data over time (this data set goes back decades and has likely been stored in many different character encodings on many different systems, so it's no surprise there were some gremlins).
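
A minimal sketch of the kind of batching-and-cleanup script described above (the paths, field handling, batch size and cleanup rules here are illustrative assumptions, not our production code):

import csv
import glob
import html
import re

BATCH_SIZE = 1000  # Solr ingests many small files more gracefully than one giant post

# Control characters in these ranges are not valid in XML 1.0
INVALID_XML = re.compile(r'[\x00-\x08\x0b\x0c\x0e-\x1f]')

def clean(value):
    return html.escape(INVALID_XML.sub('', value or ''))

def write_batch(batch, batch_num):
    # Write one batch of records as a Solr XML update document
    with open(f'solr-batch-{batch_num:05d}.xml', 'w', encoding='utf-8') as out:
        out.write('<add>\n')
        for row in batch:
            out.write('<doc>\n')
            for field, value in row.items():
                out.write(f'<field name="{field}">{clean(value)}</field>\n')
            out.write('</doc>\n')
        out.write('</add>\n')

batch, batch_num = [], 0
for path in sorted(glob.glob('source/*.txt')):
    with open(path, encoding='utf-8', errors='replace') as f:
        for row in csv.DictReader(f, delimiter='\t'):
            batch.append(row)
            if len(batch) == BATCH_SIZE:
                batch_num += 1
                write_batch(batch, batch_num)
                batch = []
if batch:
    write_batch(batch, batch_num + 1)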

Once the script was happily creating Solr-ready XML files, it seemed more straightforward to push the XML directly to Solr than to use VuFind's PHP tools and an XSLT to index the data. For this, we wrote a bash shell script that uses the post tool that ships with Solr to iterate through the thousands of data files, pushing each to Solr and logging the results.
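
That script is essentially a loop around the post tool. A minimal sketch, with the Solr path and core name as assumptions (biblio is VuFind's default core name):

#!/bin/bash
# Iterate over the generated batch files, posting each to Solr and logging results
for f in solr-batch-*.xml; do
    echo "Posting $f" >> post.log
    /opt/solr/bin/post -c biblio "$f" >> post.log 2>&1
done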

The combination of a Python script to convert the tab-separated text files into Solr-ready XML and a bash script to push it to Solr worked extremely well for this project. Python is lightning fast at processing text, and pushing data directly to Solr is definitely faster than invoking XSLT transformations.

This approach would work well for any data. Python is a very forgiving language to develop with, making it easy and quick to write scripts to process any data source. In fact, since this project, we've used Python to manipulate a FileMaker Pro database export for indexing in our Andornot Discovery Interface (also powered by Apache Solr) and to harvest data from the Internet Archive and Online Archive of California, for another Andornot Discovery Interface project (watch this blog for news of both when they launch).

We look forward to more challenges like this one! Contact us for help with your own VuFind, Solr and similar projects.

Java 11 date parsing? Locale, locale, locale.

by Peter Tyrrell Monday, January 07, 2019 11:39 AM

Java is undergoing some considerable licensing changes, prompting us to plan an all-out move from Oracle Java 8 to OpenJDK Java 11 this Spring for every Solr instance we host. I have been running covertly about the hills setting traps for Java 11.0.1 to see what I might snare before unleashing it on our live servers. I caught something this week.

Dates! Of course it's about parsing dates! I noticed that the Solr Data Import Handler (DIH) transforms weren't producing created dates during ingest. (In DIH, we use a script transformer and manipulate some Java classes with javascript. This includes the parsing of dates from text.) Up until now, our DIH has used an older method of parsing dates with a Java class called SimpleDateFormat. If you look for info on parsing dates in Java, you will find years and years of advice related to that class and its foibles, and then you will notice that in recent times experts advise using the java.time classes introduced in Java 8. Since SimpleDateFormat didn't work during DIH, I assumed it had been deprecated in Java 11 (it isn't, actually), and moved to convert the relevant DIH code to use java.time.
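
For reference, the older approach amounts to something like this (the class wrapper is mine, for illustration; it is not our actual DIH code):

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class OldApp {

    // The pre-java.time approach: SimpleDateFormat, with all the foibles
    // the internet warns about, e.g. a pattern like "yyyy-MM-dd hh:mm:ss a"
    public Date Parse(String dateText, String pattern) throws ParseException {
        return new SimpleDateFormat(pattern).parse(dateText);
    }
}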

Many hours passed here, during which the output of two lines of code* made no goddamn sense at all. The javadocs that describe the behaviour of java.time classes are completely inadequate, with their stupid little "hello, world" examples, when dates are tricky, slippery, malicious dagger-worms of pure hatred. Long story short, a date like '2004-09-15 12:00:00 AM' produced by Inmagic ODBC from a DB/Textworks database could not be parsed. The parser choked on the string at "AM," even though my match pattern was correct: 'uuuu-MM-dd hh:mm:ss a'. Desperate to find the tiniest crack to exploit, I changed every variable I could think of, one at a time. That was how I found that, when I switched to Java 8, the exact same code worked. Switch back to Java 11. Not working. Back to Java 8. Working. WTF?

I thought, maybe the Nashorn scripting engine that allows javascript to be interpreted inside the Java JVM is to blame, because this scenario does involve Java inside javascript inside Java, which is weird. So I set up a Java project with Visual Studio Code and Maven and wrote some unit tests in pure Java. (That was pretty fun. It was about the same effort as ordering a pizza in Italian when you don’t speak Italian: everything about the ordering process was tantalizingly familiar but different enough to delay my pizza for quite some time.) The problem remained: parsing worked as-written in Java 8, but not Java 11.

I started writing a Stack Overflow question. In so doing, I realized I hadn't tried an overload of java.time.format.DateTimeFormatter.ofPattern() which takes a locale. I had already dotted many i's and crossed a thousand t's, but I wanted to impress upon anyone reading the question that I had done my homework, because I hate looking ignorant, so I wrote another unit test that passed in Locale.ENGLISH and, ohmigawd, that solved the problem entirely. If you have been following along, that means that "AM/PM" could not be understood by the parser, even with the right pattern matcher, without the context of a locale, and obviously the default locale used by the simpler version of DateTimeFormatter.ofPattern() was inadequate to the task. I tested further: Locale.ENGLISH and Locale.US both worked with "AM/PM", but Locale.CANADA did not. Likely the latter is my default locale, because I do reside in Canada. Really? Really, Java? We have AM and PM here in the Great White North, I assure you.

I don’t know if this is a bug in Java 11. I’m merely happy to have understood the problem at this point. Just another day in the developer life, eh? Something that should be a snap becomes a grueling carnival ride that deposits you at the exit, white-faced and shaking, with an underwhelming sense of minor accomplishment. How do you explain to people that you spent 8 hours teaching a computer to treat an ordinary date as a date? Write a blog post, I guess. ;)

* Two lines of code. 8 hours of frustration. Here it is, ready?

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class App {

    // Parses dateText (e.g. '2004-09-15 12:00:00 AM') against a java.time
    // pattern (e.g. 'uuuu-MM-dd hh:mm:ss a'). The explicit Locale.ENGLISH is
    // the crucial part: without it, ofPattern() falls back to the JVM's
    // default locale, and on Java 11 a default like Locale.CANADA rejects
    // "AM"/"PM".
    public LocalDateTime Parse(String dateText, String pattern) {

        DateTimeFormatter parser = DateTimeFormatter.ofPattern(pattern, Locale.ENGLISH);
        LocalDateTime date = LocalDateTime.parse(dateText, parser);
        return date;

    }
}
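
A hypothetical call with the troublesome values from above, which now parses cleanly on Java 8 and Java 11 alike:

LocalDateTime dt = new App().Parse("2004-09-15 12:00:00 AM", "uuuu-MM-dd hh:mm:ss a");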

Restore Local Params in Solr 7.5

by Peter Tyrrell Thursday, December 06, 2018 12:13 PM

Local params are a way of modifying Solr's query parser within the query itself, setting all related parameters with a shorthand syntax. They are super convenient for modifying query behaviour on the fly, but as of Solr 7.5 they are disabled by default when the edismax query parser is employed.

TL;DR

Modify the request handler in solrconfig.xml to add the so-called 'magic field' _query_ back to the uf (User Fields) parameter, restoring the kind of local params behaviour that was the default prior to Solr 7.5.

<str name="uf">* _query_</str>

The above allows users to do fielded searches on any field, plus allows them to use local params in their queries.
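
For context, uf sits among the request handler's defaults in solrconfig.xml. A minimal sketch, assuming a typical /select handler using edismax (the handler name and the other defaults here are illustrative):

<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <str name="uf">* _query_</str>
  </lst>
</requestHandler>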

Why Local Params? An Example

Say we are developers who want to use the MoreLikeThis feature of Solr. There are multiple ways of setting this up, as described in the Solr Reference Guide. But, say we are also developers who are using SolrNet to create requests to and handle responses from Solr. (As indeed is the case for use here at Andornot, in our Solr-backed Discovery Interface.)

One of SolrNet's strengths is that it maps Solr responses to strongly-typed classes. On the other hand, its weakness is that you can really only map query result documents to one strongly-typed class. (Not strictly true, but true from a practical, please-don't-make-me-do-contortions point of view.)

No response from Solr can deviate too far from that mapping. Other bits can be tacked on to the response and be handled by SolrNet (highlighting, spellcheck, facets, etc.), but these must be components that are somehow related to the context of the documents in the main response. In the case of MoreLikeThis, you have to set up the component so that each query result document generates a list of 'like' documents. Having to generate such a list for each document returned slows down the query response time and bloats the size of the response. Quite unnecessary, in my opinion. I much prefer to generate the list of 'like' documents on the fly when the user has asked for them. An easy way of doing that without messing with the SolrNet mapping setup is to use local parameters.

Say our user finds an intriguing book in their search results called "More Armadillos Stacked on a Bicycle". Perhaps, our user muses, this book is a sequel to a previous publication regarding such matters. They feel a thrill of anticipation as they click on a 'More Like This' link. (I know I would.)

{ 
   'id': 123, 
   'title': 'More Armadillos Stacked on a Bicycle', 
   'topic': [ 
      'armadillos', 
      'fruitless pursuits', 
      'bicycles' 
   ] 
}

When using local params, the 'More Like This' query can use the same Solr request handler and all the parameters embedded within it, but swap out the query parser for the MLTQParser. The bits that are needed to complete the MoreLikeThis request are passed in via local param syntax, still within the main query parameter! (Perhaps I did not need that exclamation mark, but the armadillos-upon-a-bicycle adrenaline has not yet worn off.)

/select?q={!mlt qf=title,topic mintf=1 mindf=1}123

The local params syntax above says "find other documents like id=123 where extracted keywords from its various fields find matches in title and topic." The convenient part for the developer using SolrNet is that the response maps neatly to the kind of response we expect from a regular query: a set of result documents mapped to a strongly-typed class, which makes the response easy to handle and display using existing mechanisms.

Why Not Local Params?

I suppose we can imagine a clever and malicious user who is able to use the power of local params to hack Solr queries in order to get at information that perhaps they otherwise shouldn't. If, as a developer, you need to ensure that users are limited in their scope, then disabling local params and even further locking down the uf (User Fields) parameter to deny certain fielded searches is right and good.

Manitoba Law Library Launches New Catalogue, including Collection of Historic Judgments

by Jonathan Jacobsen Thursday, October 11, 2018 8:56 AM

The Manitoba Law Library has launched a new online catalogue featuring not only their print and electronic library resources, but a collection of over 17,500 judgments from Manitoba courts spanning 1970 to 1998. 

The new site is available at https://catalog.lawlibrary.ca and is powered by our Andornot Discovery Interface on top of Inmagic DB/TextWorks databases.

While Manitoba judgments issued since 1998 are already available digitally in CanLII, the historic judgments in this collection were not previously available online or in any electronic form. Law Library staff scanned print copies of these judgments, then turned to Andornot to create a search engine for the collection.

"The Great Library has long been known to have this "secret" database of unreported judgments. Our goal was to make this collection available to everyone who wanted it, and to be able to retrieve it themselves."

-- Karen Sawatzky, Director of Legal Resources, Manitoba Law Library Inc.

Andornot created a DB/TextWorks database of judgment records out of a combination of a spreadsheet of metadata, listings of the scanned judgment PDF files on disk, and custom programming to extract additional metadata, such as Court Name, from acronyms in an Accession Number.
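
As a rough sketch of that last step, the court name lookup could be as simple as a mapping keyed on a prefix of the accession number. The accession number format and codes below are invented for illustration; the actual data structure differs:

# Hypothetical accession numbers like 'MBCA-1975-0042'
COURTS = {
    'MBCA': 'Manitoba Court of Appeal',
    'MBQB': "Manitoba Court of Queen's Bench",
}

def court_from_accession(accession):
    prefix = accession.split('-')[0]
    return COURTS.get(prefix, 'Unknown court')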

As the scanned print copies had not yet been OCRed to convert the images to text, we ran a process to do so for all 17,500 files. This allows the full text of each judgment to be indexed and made searchable in the new site.

This Judgments database, along with a library catalogue database also now managed with DB/TextWorks, is indexed in the https://catalog.lawlibrary.ca site.

This new site offers users the features they expect from library catalogues and all search engines: spelling corrections, "did you mean" search suggestions, relevancy ranked results powered by sophisticated algorithms, and facets such as subject, name, date and type of material to quickly and easily refine a search. When searching the historic judgments, users can also refine their search by Court.

If any search words were found in the full text of a judgment, a snippet of the relevant passage showing the words in context is displayed in search results. The user may then click a single button to open the judgment in their browser, showing the original scanned document with their search words pre-highlighted, wherever they appear in the document. This feature saves the user from having to download the PDF, open it, and search all over again for the relevant passage.

"We wanted to make it easier for our users to find material, whether it is an e-book, a print book, or a report, as well as upgrade the look and feel of our catalog. This system also allows us to create useful reports that help us demonstrate the value of our collection."

-- Karen Sawatzky

Contact Andornot for information management and search solutions for your legal or unique collections.

Galt Museum and Archives Launches New Collections Search

by Jonathan Jacobsen Thursday, October 04, 2018 9:43 AM

The Galt Museum and Archives in Lethbridge, Alberta has launched a new search engine for their cultural collections at https://collections.galtmuseum.com 

This new site is powered by our Andornot Discovery Interface. This modern search engine provides features that users have come to expect, including spelling corrections, "did you mean" search suggestions, results ranked by relevancy, and facets to help narrow down the results further, such as by name, topic and date.

Previously, users were only able to search the archives, museum artifacts and library collections through three separate searches. Now, with the Andornot Discovery Interface, researchers can search all materials at once and discover related records quickly and easily. Over eighty percent of the resources in the site include photographs, especially of artifacts in the museum, making for a visually engaging experience researching the history of Lethbridge and surrounding area.

Once results are found, a user can save them for later review, share them on Pinterest, Google+ and other social media, or request more information from the museum and archives.

The graphic design of the site was adapted from the fonts, colours and layout of the main museum website, for a seamless transition between the two. The bright colours add to the fun factor when using the site, without detracting from the resources and the many historic photos in search results.

Like many museums and archives, the Galt has for many years managed their collections with Inmagic software. A series of DB/TextWorks databases continue to be home to metadata about the archives, museum artifacts, and a small library. The museum is running the latest version, giving them access to many new features within the familiar, easy-to-use interface they know well.

"This is a big step forward in terms of both appeal and usability, and the integrated search -- across archives, collections and library databases -- is the feature that we long wished for."

-- Andrew Chernevych, Archivist, Galt Museum & Archives

Contact Andornot to discuss options for better management and searching of your cultural collections.
