Andornot’s Professional Development Grant for 2019 Awarded to Marla Dobson

by Jonathan Jacobsen Wednesday, February 06, 2019 12:24 PM

For the third year in a row, Andornot is pleased to award a Professional Development Grant to a working professional, to aid them in attending a conference or workshop.

This year's recipient of the $1,000 grant is Marla Dobson, Curator of the Museum of Health Care in Kingston, ON.

In her application for the grant, Marla writes:

"As the Curator for the Museum of Health Care at Kingston, I have responsibility for planning, organizing, and supervising exhibition development, collections development and maintenance, as well as programming support. I care for a collection of 40,000 objects related to the history of medicine and health care in Canada. I also act as an ambassador for the museum, building its public profile within the regional community as well as at national and even international events."

The collection is available at https://mhc.andornot.com, with a search interface developed from our Andornot Discovery Interface, and hosted by our Managed Hosting service.

Marla adds:

"I wish to attend the Canadian Museums Association National Conference because it is vital that I develop and expand my professional network within the Canadian museum community. I am new in my position and as an emerging professional, wish to expose myself to workshops and networking events that will firstly, improve my ability to be a successful curator, and secondly, help me make connections with other organizations with which we could partner on projects and exhibitions."

Andornot strongly believes in the value of attending conferences to foster professional development. We attend events across Canada all year long to learn about new trends and technologies, meet with clients, and share our expertise with like-minded folks.

We receive many excellent applications for this grant each year and face a tough decision in choosing just one. We thank all who showed an interest in the grant and only wish we could send everyone to a conference.

We look forward to meeting you at one of the conferences we'll be attending this year.

VuFind 5.1 Released

by Jonathan Jacobsen Monday, February 04, 2019 11:48 AM

Version 5.1 of the VuFind Open Source discovery software has just been released. This minor release adds several new features and fixes.

Some key additions:

  • Configurable user account notifications, making activity (such as fines, available holds, overdues, etc.) more readily visible to the user.
  • A richer, fully customizable user feedback system, allowing the creation of custom forms in the VuFind interface for collecting not just feedback, but also purchase suggestions, survey responses, or anything else the administrator configures.
  • Optional dynamic DOI-based link augmentation in search results (currently supporting Third Iron's BrowZine service, but also extensible for other applications).
  • An experimental driver for integration with the FOLIO platform, available for early adopters (but subject to change as the platform evolves).
  • Better code generation tools, increasing the ease of creating new VuFind plug-ins.
  • Full Vietnamese language support in the user interface.

Additionally, several bug fixes, new configuration options, performance enhancements and minor improvements have been incorporated. Full details of this release are available at https://vufind.org/wiki/changelog#release_51_-_2_4_2019

Andornot offers development and hosting of VuFind as part of our Managed Hosting service. VuFind is an ideal entry-level discovery interface for small special libraries with primarily bibliographic information, providing the style of search experience users expect in 2019. For other kinds of cultural information, we recommend our Andornot Discovery Interface.

Learn more about VuFind and some of the sites we've developed here, then contact us to discuss VuFind, hosting and our other solutions for managing and searching cultural information.

How To Import Excel Data into VuFind

by Jonathan Jacobsen Tuesday, January 08, 2019 4:52 PM

Recently we had a new client come to us looking for help with several subscription-based VuFind sites they manage, and ultimately to have us host them as part of our managed hosting service. This client had a unique challenge for us: 3 million records, available as tab-separated text files of up to 70,000 records each.

Most of the data sets we work with are relatively small: libraries with a few thousand records, archives with a few tens of thousands, and every so often, databases of a few hundred thousand, like those in the Arctic Health bibliography.

While VuFind and the Apache Solr search engine that powers it (and also powers our Andornot Discovery Interface) have no trouble with that volume of records, transforming the data from hundreds of tab-separated text files into something Solr can use, in an efficient manner, was a pleasant challenge.

VuFind has excellent tools for importing traditional library MARC records, using the SolrMarc tool to post data to Solr. For other types of data, such as records exported from DB/TextWorks databases, we've long used VuFind's PHP-based tools, which apply XSLT transformations to convert XML into Solr's schema and post it to Solr. While this has worked well, XSLTs are especially difficult to debug, so we considered alternatives.

For this new project, we knew we needed to write some code to transform the 3 million records in tab-separated text files into XML, and we knew from our extensive experience with Solr that it's best to post small batches of records at a time, in separate files, rather than one enormous post of 3 million! So we wrote a Python script to split the source data into separate files of about 1,000 records each, and to remove invalid characters that had crept into the data over time (this data set goes back decades and has likely been stored in many different character encodings on many different systems, so it's no surprise there were some gremlins).
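A rough sketch of that splitting-and-cleaning step (file paths, column names, and batch size here are hypothetical, not the actual client data):

import csv
import glob
import re
import xml.etree.ElementTree as ET

# Strip control characters that are invalid in XML 1.0 (tab, CR and LF are fine).
INVALID_XML_CHARS = re.compile(r'[\x00-\x08\x0b\x0c\x0e-\x1f]')

def write_batch(docs, batch_num):
    # Write one batch of records as a Solr <add> XML file.
    add = ET.Element('add')
    for doc in docs:
        doc_el = ET.SubElement(add, 'doc')
        for name, value in doc.items():
            field = ET.SubElement(doc_el, 'field', name=name)
            field.text = INVALID_XML_CHARS.sub('', value or '')
    ET.ElementTree(add).write('solr-batch-%05d.xml' % batch_num,
                              encoding='utf-8', xml_declaration=True)

batch, batch_num = [], 0
for path in glob.glob('source-data/*.txt'):
    with open(path, encoding='utf-8', errors='replace', newline='') as f:
        # Assumes a header row whose column names match the Solr schema.
        for row in csv.DictReader(f, delimiter='\t'):
            batch.append(row)
            if len(batch) >= 1000:
                write_batch(batch, batch_num)
                batch, batch_num = [], batch_num + 1
if batch:
    write_batch(batch, batch_num)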

Once the script was happily creating Solr-ready XML files, rather than use VuFind's PHP tools and an XSLT to index the data, it just seemed more straightforward to push the XML directly to Solr. For this, we wrote a bash shell script that uses the post tool that ships with Solr to iterate through the thousands of data files and push each to Solr, logging the results.
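Our production script used Solr's bundled bin/post tool from bash; as an illustrative equivalent in Python (the Solr URL and core name are assumptions), the loop looks like this:

import glob
import logging
import requests

logging.basicConfig(filename='solr-post.log', level=logging.INFO)

# Hypothetical Solr core; bin/post targets the same /update handler via its -c flag.
SOLR_UPDATE_URL = 'http://localhost:8983/solr/mycore/update'

for path in sorted(glob.glob('solr-batch-*.xml')):
    with open(path, 'rb') as f:
        resp = requests.post(SOLR_UPDATE_URL, data=f,
                             headers={'Content-Type': 'text/xml'})
    logging.info('%s -> HTTP %s', path, resp.status_code)

# Commit once at the end, rather than after every file, for speed.
requests.post(SOLR_UPDATE_URL, params={'commit': 'true'})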

The combination of a Python script to convert the tab-separated text files into Solr-ready XML and a bash script to push it to Solr worked extremely well for this project. Python is lightning fast at processing text, and pushing data directly to Solr is definitely faster than invoking XSLT transformations.

This approach would work well for any data. Python is a very forgiving language to develop with, making it easy and quick to write scripts to process any data source. In fact, since this project, we've used Python to manipulate a FileMaker Pro database export for indexing in our Andornot Discovery Interface (also powered by Apache Solr) and to harvest data from the Internet Archive and Online Archive of California, for another Andornot Discovery Interface project (watch this blog for news of both when they launch).

We look forward to more challenges like this one! Contact us for help with your own VuFind, Solr and similar projects.

Java 11 date parsing? Locale, locale, locale.

by Peter Tyrrell Monday, January 07, 2019 11:39 AM

Java is undergoing some considerable licensing changes, prompting us to plan an all-out move from Oracle Java 8 to OpenJDK Java 11 this Spring for every Solr instance we host. I have been running covertly about the hills setting traps for Java 11.0.1 to see what I might snare before unleashing it on our live servers. I caught something this week.

Dates! Of course it's about parsing dates! I noticed that our Solr Data Import Handler (DIH) transforms were failing to generate created dates during ingest. (In DIH, we use a script transformer and manipulate some Java classes with javascript. This includes the parsing of dates from text.) Up until now, our DIH has used an older method of parsing dates with a Java class called SimpleDateFormat. If you look for info on parsing dates in Java, you will find years and years of advice related to that class and its foibles, and then you will notice that in recent times experts advise using the java.time classes introduced in Java 8. Since SimpleDateFormat wasn't working during DIH, I assumed it had been deprecated in Java 11 (it isn't, actually), and moved to convert the relevant DIH code to use java.time.

Many hours passed here, during which the output of two lines of code* made no goddamn sense at all. The javadocs that describe the behaviour of java.time classes are completely inadequate, with their stupid little "hello, world" examples, when dates are tricky, slippery, malicious dagger-worms of pure hatred. Long story short, a date like '2004-09-15 12:00:00 AM' produced by Inmagic ODBC from a DB/Textworks database could not be parsed. The parser choked on the string at "AM," even though my match pattern was correct: 'uuuu-MM-dd hh:mm:ss a'. Desperate to find the tiniest crack to exploit, I changed every variable I could think of one at a time. That was how I found that, when I switched to Java 8, the same exact code worked. Switch back to Java 11. Not working. Back to Java 8. Working. WTF?

I thought, maybe the Nashorn scripting engine that allows javascript to be interpreted inside the Java JVM is to blame, because this scenario does involve Java inside javascript inside Java, which is weird. So I set up a Java project with Visual Studio Code and Maven and wrote some unit tests in pure Java. (That was pretty fun. It was about the same effort as ordering a pizza in Italian when you don’t speak Italian: everything about the ordering process was tantalizingly familiar but different enough to delay my pizza for quite some time.) The problem remained: parsing worked as-written in Java 8, but not Java 11.

I started writing a Stack Overflow question. In so doing, I realized I hadn't tried an overload of java.time.format.DateTimeFormatter.ofPattern() that takes a locale. I had already dotted many i's and crossed a thousand t's, but I wanted to show anyone reading the question that I had done my homework, because I hate looking ignorant, so I wrote another unit test that passed in Locale.ENGLISH and, ohmigawd, that solved the problem entirely. If you have been following along, that means that "AM/PM" could not be understood by the parser, even with the right pattern, without the context of a locale, and obviously the default locale used by the simpler version of DateTimeFormatter.ofPattern() was inadequate to the task. I tested further: Locale.ENGLISH and Locale.US both worked with "AM/PM", but Locale.CANADA did not. The latter is likely my default locale, because I do reside in Canada. Really? Really, Java? We have AM and PM here in the Great White North, I assure you.

I don’t know if this is a bug in Java 11. I’m merely happy to have understood the problem at this point. Just another day in the developer life, eh? Something that should be a snap becomes a grueling carnival ride that deposits you at the exit, white-faced and shaking, with an underwhelming sense of minor accomplishment. How do you explain to people that you spent 8 hours teaching a computer to treat an ordinary date as a date? Write a blog post, I guess. ;-)

* Two lines of code. 8 hours of frustration. Here it is, ready?

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class App {

    // e.g. parse("2004-09-15 12:00:00 AM", "uuuu-MM-dd hh:mm:ss a")
    public LocalDateTime parse(String dateText, String pattern) {

        // Passing Locale.ENGLISH explicitly is the whole fix: under Java 11,
        // the default locale (Locale.CANADA here) cannot parse "AM"/"PM".
        DateTimeFormatter parser = DateTimeFormatter.ofPattern(pattern, Locale.ENGLISH);
        LocalDateTime date = LocalDateTime.parse(dateText, parser);
        return date;

    }
}

Restore Local Params in Solr 7.5

by Peter Tyrrell Thursday, December 06, 2018 12:13 PM

Local params are a way of modifying Solr's query parser within a query, setting all related parameters with a shorthand syntax. They're super convenient for modifying query behaviour on the fly, but as of Solr 7.5 they are disabled by default when employing the edismax query parser.

TL;DR

Modify the request handler in solrconfig.xml to add the so-called 'magic field' _query_ back to the uf (User Fields) parameter, restoring the kind of local params behaviour that was the default prior to Solr 7.5.

<str name="uf">* _query_</str>

The above allows users to do fielded searches on any field, plus allows them to use local params in their queries.
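For example, with that setting in place, a query that selects its own query parser via local params is once again legal under edismax (the field name and term here are illustrative):

/select?q={!lucene df=title}armadillos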

Why Local Params? An Example

Say we are developers who want to use the MoreLikeThis feature of Solr. There are multiple ways of setting this up, as described in the Solr Reference Guide. But, say we are also developers who are using SolrNet to create requests to and handle responses from Solr. (As indeed is the case for use here at Andornot, in our Solr-backed Discovery Interface.)

One of SolrNet's strengths is that it maps Solr responses to strongly-typed classes. On the other hand, its weakness is that you can really only map query result documents to one strongly-typed class. (Not strictly true, but true from a practical, please-don't-make-me-do-contortions point of view.)

No response from Solr can deviate too far from that mapping. Other bits can be tacked on to the response and be handled by SolrNet (highlighting, spellcheck, facets, etc.), but these must be components that are somehow related to the context of the documents in the main response. In the case of MoreLikeThis, you have to set up the component so that each query result document generates a list of 'like' documents. Having to generate such a list for each document returned slows down the query response time and bloats the size of the response. Quite unnecessary, in my opionion. I much prefer to generate the list of 'like' documents on the fly when the user has asked for them. An easy way of doing that without messing with the SolrNet mapping setup is to use local parameters.

Say our user finds an intriguing book in their search results called "More Armadillos Stacked on a Bicycle". Perhaps, our user muses, this book is a sequel to a previous publication regarding such matters. They feel a thrill of anticipation as they click on a 'More Like This' link. (I know I would.)

{
   "id": 123,
   "title": "More Armadillos Stacked on a Bicycle",
   "topic": [
      "armadillos",
      "fruitless pursuits",
      "bicycles"
   ]
}

When using local params, the 'More Like This' query can use the same Solr request handler and all the parameters embedded within it, but swap out the query parser for the MLTQParser. The bits that are needed to complete the MoreLikeThis request are passed in via local param syntax, still within the main query parameter! (Perhaps I did not need that exclamation mark, but the armadillos-upon-a-bicycle adrenaline has not yet worn off.)

/select?q={!mlt qf=title,topic mintf=1 mindf=1}123

The local params syntax above says "find other documents like id=123 where extracted keywords from its various fields find matches in title and topic." The convenient part for the developer using SolrNet is that the response maps neatly to the kind of response we expect from a regular query: a set of result documents mapped to a strongly-typed class, which makes the response easy to handle and display using existing mechanisms.
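In our case that request is built with SolrNet, but the HTTP request itself is client-agnostic. A quick sketch in Python (the Solr URL, core name, and fields are assumptions) shows how little is involved:

import requests

SOLR_SELECT_URL = 'http://localhost:8983/solr/mycore/select'

def more_like_this(doc_id):
    # MLTQParser via local params: same request handler, different parser.
    params = {
        'q': '{!mlt qf=title,topic mintf=1 mindf=1}%s' % doc_id,
        'wt': 'json',
    }
    response = requests.get(SOLR_SELECT_URL, params=params).json()
    return response['response']['docs']

for doc in more_like_this(123):
    print(doc['title'])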

Why Not Local Params?

I suppose we can imagine a clever and malicious user who is able to use the power of local params to hack Solr queries in order to get at information that perhaps they otherwise shouldn't. If, as a developer, you need to ensure that users are limited in their scope, then disabling local params and even further locking down the uf (User Fields) parameter to deny certain fielded searches is right and good.
