Sample Solr DataImportHandler for XML Files

by Peter Tyrrell Thursday, June 07, 2012 11:51 AM

I spent a lot of time in trial and error getting the Solr DataImportHandler (DIH) set up and working the way I wanted, mainly due to a paucity of practical examples on the internet at large. So here is a short post on Solr DIH for XML, with working samples; may it save you many hours of nail-chewing.

This post assumes you are already somewhat familiar with Solr, but would like to know more about how to import XML data with the DataImportHandler.

DIH Overview

The DataImportHandler (DIH) is a mechanism for importing structured data from a data store into Solr. It is often used with relational databases, but can also handle XML with its XPathEntityProcessor. You can pass incoming XML through an XSLT stylesheet, and parse and transform the XML with built-in DIH transformers. You could translate your arbitrary XML to Solr's standard input XML format via XSLT, or map/transform the arbitrary XML to the Solr schema fields right there in the DIH config file, or a combination of both. DIH is flexible.
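
For reference, Solr's standard input (update) XML wraps each record in add/doc/field elements. The field names below are just placeholders, with pipe-delimited values of the kind the splitBy transforms in the samples expect:

<add>
    <doc>
        <field name="id">record-001</field>
        <field name="title_t">A sample title</field>
        <field name="subject_t">First subject|Second subject</field>
    </doc>
</add>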

Sample 1: dih-config.xml with FileDataSource

Here's a sample dih-config.xml from an actual working site (no pseudo-samples here, my friend). Note that it picks up XML files from a local directory on the LAMP server. If you prefer to post XML files directly via HTTP, you would need to configure a ContentStreamDataSource instead (see Sample 2 below).

It so happens that the incoming XML is already in standard Solr update XML format in this sample, and all the XSL does is remove empty field nodes, while the real transforms, such as building the content of "ispartof_t" from "ignored_seriestitle", "ignored_seriesvolume", and "ignored_seriesissue", are done with DIH regex transformers. (The XSLT is performed first, and its output is then handed to the DIH transformers.) The attribute "useSolrAddSchema" tells DIH that the XML is already in standard Solr update format. If that were not the case, an "xpath" attribute on the XPathEntityProcessor would be required to select content from the incoming XML document.
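
For instance, if an incoming document carried these fields (the values here are made up for illustration):

<field name="ignored_seriestitle">Technical Reports</field>
<field name="ignored_seriesvolume">4</field>
<field name="ignored_seriesissue">17</field>

the three "ispartof_t" rules in the config below would chain together to produce "Technical Reports; vol. 4; no. 17".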

<dataConfig>
    <dataSource name="filesrc" encoding="UTF-8" type="FileDataSource" />
    <document>
        <!--
            Pickupdir fetches all files matching the filename regex in the supplied directory
            and passes them to other entities which parse the file contents. 
        -->
        <entity
            name="pickupdir"
            processor="FileListEntityProcessor"
            rootEntity="false"
            dataSource="null"
            fileName="^[\w\d-]+\.xml$"
            baseDir="/var/lib/tomcat6/solr/xxx/import/"
            recursive="true"
            newerThan="${dataimporter.last_index_time}"
        >

        <!--
            The "xml" entity parses standard Solr update XML from each file
            found by pickupdir. Incoming values are split into multiple tokens
            when given a splitBy attribute, and the regex and template
            transformers build composite fields from the ignored_* source columns.
        -->
        <entity 
            name="xml"
            processor="XPathEntityProcessor"
            transformer="RegexTransformer,TemplateTransformer"
            dataSource="filesrc"
            stream="true"
            useSolrAddSchema="true"
            url="${pickupdir.fileAbsolutePath}"
            xsl="xslt/dih.xsl"
        >

            <field column="abstract_t" splitBy="\|" />
            <field column="coverage_t" splitBy="\|" />
            <field column="creator_t" splitBy="\|" />
            <field column="creator_facet" template="${xml.creator_t}" />
            <field column="description_t" splitBy="\|" />
            <field column="format_t" splitBy="\|" />
            <field column="identifier_t" splitBy="\|" />
            <field column="ispartof_t" sourceColName="ignored_seriestitle" regex="(.+)" replaceWith="$1" />
            <field column="ispartof_t" sourceColName="ignored_seriesvolume" regex="(.+)" replaceWith="${xml.ispartof_t}; vol. $1" />
            <field column="ispartof_t" sourceColName="ignored_seriesissue" regex="(.+)" replaceWith="${xml.ispartof_t}; no. $1" />
            <field column="ispartof_t" regex="\|" replaceWith=" " />
            <field column="language_t" splitBy="\|" />
            <field column="language_facet" template="${xml.language_t}" />
            <field column="location_display" sourceColName="ignored_class" regex="(.+)" replaceWith="$1" />
            <field column="location_display" sourceColName="ignored_location" regex="(.+)" replaceWith="${xml.location_display} $1" />
            <field column="location_display" regex="\|" replaceWith=" " />
            <field column="othertitles_display" splitBy="\|" />
            <field column="publisher_t" splitBy="\|" />
            <field column="responsibility_display" splitBy="\|" />
            <field column="source_t" splitBy="\|" />
            <field column="sourceissue_display" sourceColName="ignored_volume" regex="(.+)" replaceWith="vol. $1" />
            <field column="sourceissue_display" sourceColName="ignored_issue" regex="(.+)" replaceWith="${xml.sourceissue_display}, no. $1" />
            <field column="sourceissue_display" sourceColName="ignored_year" regex="(.+)" replaceWith="${xml.sourceissue_display} ($1)" />
            <field column="src_facet" template="${xml.src}" />
            <field column="subject_t" splitBy="\|" />
            <field column="subject_facet" template="${xml.subject_t}" />
            <field column="title_t" sourceColName="ignored_title" regex="(.+)" replaceWith="$1" />
            <field column="title_t" sourceColName="ignored_subtitle" regex="(.+)" replaceWith="${xml.title_t} : $1" />
            <field column="title_sort" template="${xml.title_t}" />
            <field column="toc_t" splitBy="\|" />
            <field column="type_t" splitBy="\|" />
            <field column="type_facet" template="${xml.type_t}" />
        </entity>
        </entity>
    </document>
</dataConfig>
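
The dih.xsl file referenced by the entity above is not reproduced here, but an XSLT that does nothing except drop empty field nodes from Solr add XML could be as small as the following identity-transform sketch (an illustration, not the actual file from the site):

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="xml" encoding="UTF-8" indent="yes" />

    <!-- Identity template: copy every node and attribute through unchanged. -->
    <xsl:template match="@*|node()">
        <xsl:copy>
            <xsl:apply-templates select="@*|node()" />
        </xsl:copy>
    </xsl:template>

    <!-- Drop any field element with no (non-whitespace) text content. -->
    <xsl:template match="field[not(normalize-space())]" />
</xsl:stylesheet>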

Sample 2: dih-config.xml with ContentStreamDataSource

This sample receives XML files posted directly to the DIH handler. Whereas Sample 1 can batch process any number of files at once, this sample works on one file at a time, as each is posted.

<dataConfig>
    <dataSource name="streamsrc" encoding="UTF-8" type="ContentStreamDataSource" />
    
    <document>
        <!--
            Parses standard Solr update XML passed as stream from HTTP post.
            Strips empty nodes with dih.xsl, then applies transforms.
        -->
        <entity
            name="streamxml"
            dataSource="streamsrc"
            processor="XPathEntityProcessor"
            rootEntity="true"
            transformer="RegexTransformer,DateFormatTransformer"
            useSolrAddSchema="true"
            xsl="xslt/dih.xsl"
        >
            <field column="contributor_t" splitBy="\|" />
            <field column="coverage_t" splitBy="\|" />
            <field column="creator_t" splitBy="\|" />
            <field column="date_t" splitBy="\|" />
            <field column="date_tdt" dateTimeFormat="M/d/yyyy h:m:s a" />
            <field column="description_t" splitBy="\|" />
            <field column="format_t" splitBy="\|" />
            <field column="identifier_t" splitBy="\|" />
            <field column="language_t" splitBy="\|" />
            <field column="publisher_t" splitBy="\|" />
            <field column="relation_t" splitBy="\|" />
            <field column="rights_t" splitBy="\|" />
            <field column="source_t" splitBy="\|" />
            <field column="subject_t" splitBy="\|" />
            <field column="title_t" splitBy="\|" />
            <field column="type_t" splitBy="\|" />
        </entity>        
    </document>
</dataConfig>
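
With the request handler from the setup section below in place, a file can be posted straight to the handler. For example, with curl (the file name records.xml is arbitrary):

curl "http://localhost:8983/solr/update/dih?command=full-import&clean=false&commit=true" -H "Content-Type: text/xml;charset=utf-8" --data-binary @records.xml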

How to set up DIH

1. Ensure the DIH jars are referenced from solrconfig.xml, as they are not included in the Solr WAR file by default. One easy way is to create a lib folder in the Solr instance directory and copy the DIH jars into it, as solrconfig.xml looks in the lib folder by default. The DIH jars are found in the apache-solr-x.x.x/dist folder of the Solr download package.
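
Alternatively, solrconfig.xml can reference the jars right where they sit with <lib> directives; the dir path below is just an example and must be adjusted to your own layout:

<lib dir="../../dist/" regex="apache-solr-dataimporthandler-.*\.jar" />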

2. Create your dih-config.xml (as above) in the Solr "conf" directory.

3. Add a DIH request handler to solrconfig.xml if it's not there already.

<requestHandler name="/update/dih" startup="lazy" class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
        <str name="config">dih-config.xml</str>
    </lst>
</requestHandler>

How to trigger DIH

There is a lot more info on full-import vs. delta-import, and on whether to commit, optimize, etc., in the wiki description of Data Import Handler Commands, but the following would trigger the DIH operation without deleting the existing index first (clean=false), and commit the changes after all the files had been processed (commit=true). Sample 1 above would collect all the files found in the pickup directory, transform them, index them, and finally commit the updates to the index, which would make them searchable the instant the commit finished.

http://localhost:8983/solr/update/dih?command=full-import&clean=false&commit=true

Video: Slay Monster PDFs with pdfbox

by Peter Tyrrell Friday, October 21, 2011 3:38 PM

A screencast of the lightning talk presentation I gave at the Access 2011 Conference on October 21, 2011 in Vancouver, BC, entitled "Quick and Dirty's Guide to Slaying Monster PDFs," in which I show how to use pdfbox to slice up large PDFs for indexing to make search more meaningful.

http://youtu.be/Pn4MW6bs7a8

Tags: Solr | video

Solr and the Trend to Open Source Search

by Peter Tyrrell Thursday, June 02, 2011 11:06 AM

On Saturday I caught a cold. On Sunday, I caught a flight to San Francisco to attend Lucene Revolution, the biggest open source search conference on the planet, to catch up on the latest developments with Apache Lucene/Solr.

From the Solr project website:

“Solr is the popular, blazing fast open source enterprise search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting, faceted search, dynamic clustering, database integration, rich document (e.g., Word, PDF) handling, and geospatial search. Solr is highly scalable, providing distributed search and index replication, and it powers the search and navigation features of many of the world's largest internet sites.”

I was joined there by developers and representatives from AT&T, CareerBuilder.com, Corbis, Ebay, eHarmony, EMC, Etsy.com, HathiTrust, Healthwise, Intuit, Travelocity, Trulia, Twitter, Woot, and Yelp, to name a few. (Yes, I’m name dropping – just trying to appear cool here.)

I’ve been working with Solr for the better part of a year, and I thought it very impressive, but the conference blew my socks off in terms of what Solr can do. To think that the best-of-breed search performance in the world is open source! (Solr beats Google in overall search performance. No, I’m not exaggerating.)

Solr is a game-changer, there’s no doubt. No longer is open source just a freebie alternative, it is the go-to standard that is beating the pants off of proprietary search engines. I heard quite a few stories of prominent household-name enterprises switching to Solr and reducing costs while simultaneously vastly increasing their capabilities and performance. 

Happily, Solr not only scales up to the largest data collections ever created by our species, but also down to the relatively modest needs of the rest of us. It democratizes search. A kid making a website in his parents' basement can utilize the same cutting-edge search features as a multinational corporation, and that's not just convenient, it's necessary, because search is now vital to every level of our interaction with information.

I can’t wait to apply what I learned at the conference back at El Rancho Andornot. Also, I need to keep ahead of that kid in the basement.

Tags: Solr
