Bug 6891

Summary: LDIF interoperability - a proposal
Product: Koha
Component: Tools
Status: NEW
Severity: enhancement
Priority: P5 - low
Version: unspecified
Hardware: All
OS: All
Reporter: Xan Charbonnet <xan>
Assignee: Galen Charlton <gmcharlt>
QA Contact: Bugs List <koha-bugs>
CC: chris, robin, ruth
Change sponsored?: ---
Patch complexity: ---
Attachments: Export Apollo screenshot

Description Xan Charbonnet 2011-09-21 00:56:50 UTC
Hello Koha Community,

I'm Xan Charbonnet with Biblionix.  We're a small company that provides a hosted ILS, Apollo, to small and medium-sized public libraries in the US.

We're not open-source, but we do strongly believe in open interchange: library data belongs to libraries, and shouldn't be used as a hostage to keep libraries chained to their automation systems.

Far too many systems do treat their customers this way.  There are even Koha consortia which are charging their librarians multi-thousand dollar extraction fees just to get their own data out!

We've developed what we call LDIF (library data interchange format), which is a W3C XML Schema that defines a way for library data to be stored and exchanged in a vendor-independent manner.  We believe this is an important capability which has been missing in the library world.  Any Apollo library is able to generate an LDIF file at any time, and download (practically) everything that Apollo knows about their library in a single, well-defined file, complete with the schema.  Ideally, that file would be easily importable into any other ILS.

Part of making LDIF "real", of course, is engaging other ILS vendors.  The Koha Community seemed like the place to start.  Is there anyone working on the Koha project who would have an interest in developing support for this fledgling standard?  And of course, in working with you, there will likely be changes to be made to LDIF, which will make it better for everyone.

LDIF Schema: http://www.biblionix.com/ldif.xsd
Example library export (I recommend that you download this rather than view it in your browser; it's a 4MB XML file): http://www.biblionix.com/demolibrary_2011-09-20.xml
Comment 1 Robin Sheat 2011-09-21 02:54:40 UTC
I'm all for methods of importing and exporting data, and a quick look at the schema suggests that this one is effective. It doesn't cover absolutely everything, but it does seem to have all the critical bits.

A few quibbly points:
* Having the MARC as base64-encoded content in a field misses the more useful aspects of XML. It seems to me that it'd be better to namespace it in as MARC-XML; that would make it possible to run XSLT transforms on it if needed (see the sketch after this list).
* LDIF is a confusing name, as it's already the LDAP Data Interchange Format. Although, I suspect that it's hard to pick unique names these days...
* The address element is very US-centric, and doesn't include country.
* How does it work that there are three people with the same surname and barcode and ID? Assuming it's children, I hope none of them keeps a surname different from their mother's, as that would make the schema very awkward.
* A schema and an example aren't quite enough to make good use of it (although they're a billion times better than being forced to reverse-engineer!). Is there actual documentation somewhere? Perhaps a heavily annotated version of the XSD would do.
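For instance, instead of a base64 blob, a record could be embedded roughly like this (just a sketch; the <biblio> wrapper and its id are borrowed from the example export, and the marc: namespace is the official LoC one):

    <biblio id="b51350263">
      <marc:record xmlns:marc="http://www.loc.gov/MARC21/slim">
        <marc:leader>01172cam a2200301   4500</marc:leader>
        <!-- ...fields... -->
      </marc:record>
    </biblio>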

Also, your file is 40MB rather than 4, which really is too big for Firefox to handle gracefully :)
Comment 2 Xan Charbonnet 2011-09-21 04:10:08 UTC
Thanks so much for the thoughts!

* Agreed on using XML instead of base64 for the MARC.  That was kind of a shortcut to start off with, and you've given some good reasons to make the switch.
* Good point about the LDIF name.  That is by no means set in stone either.  I'll see if we can come up with something that's less ambiguous.  If you or anyone else has any suggestions for another name, by all means let's hear them.
* You're right about the US-centric addresses.
* When you're talking about three people having the same surname and barcode and ID, I assume that's when a single <patron> has multiple <firstName>s?  Some libraries issue "family" cards, where everybody in the family shares the same account.  It's a little clunky and fairly rare, but Apollo does support it, which is why the schema allows for multiple first names for a patron.  For most libraries there will be exactly one first name per patron.
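To illustrate, a family card could look something like this (only <patron> and <firstName> are known element names here; the rest is guesswork, so check the schema for the real content model):

    <patron id="p51002345" barcode="20467">
      <lastName>Smith</lastName>
      <firstName>Alice</firstName>
      <firstName>Ben</firstName>
      <firstName>Cora</firstName>
    </patron>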

Let me tweak the spec, as well as our importer/exporter, and I'll post back here when there's a new version that addresses these issues.  Should be able to annotate the XSD as well, which is a great idea.

And no wonder my browser crashed and burned when opening that "4MB" file!  :-)
Comment 3 Robin Sheat 2011-09-21 05:42:42 UTC
A couple of other things:
* It doesn't appear to have authority records anywhere.
* Holidays are just a list of dates; they don't seem to support repeating values. Understandably, some software may not support repeats, but something like <date repeat="yearly">... should perhaps work where possible.
* The holdingTypeLists are a bit odd.
For example, they group holding types (I assume these are similar in spirit to item types or collection codes) together with funds and vendors. Also, holdings look like this:

    <holding status="active" added="2009-07-08" barcode="78061" biblio="b51350263" usageCount="2" priceCents="2000" call="791.43" id="h51006265" edited="2009-07-08">
      <type>ht12</type>
      <type>ht49</type>
    </holding>

where ht12 says it's a 700-799 item, and ht49 says that it's from Brodart. Semantically, these things are pretty much unrelated and yet they're grouped together, which is weird.

* What does usageCount on biblios do? I assume on items it's total checkouts, but I'm not sure what it means for biblios.
* We'd probably end up adding a koha-specific namespace to it in the end to allow the addition of things that it doesn't make sense to support in the spec, but it's still best to get a solid foundation for the lowest-common-denominator interchange.
Comment 4 Xan Charbonnet 2011-09-21 14:58:08 UTC
Robin,

* Apollo doesn't import or export authority records separately; we only deal with what's in the bibliographic records.  I'm certainly okay with adding authority records to this spec.

* Repeating holidays could work...  If the exporting software doesn't support them, it can export the full list of dates; and if the importing software doesn't support them, when reading "LDIF" it can expand each repeating date into individual holidays for the following few years.

But would this really be very helpful?  Something like Christmas is fixed, but at least around here, the "observed" dates for various holidays wander all over the calendar.  Columbus Day, MLK Day, Washington's birthday, and others are usually observed on a Monday.  And of course Good Friday and Easter move all over.  Ideally, there'd be a way in the spec to say, eg, "the library is always closed on Columbus Day (observed)", but I don't think there's any kind of standard code that allows us to refer to holidays, and we don't want this spec to have to include the names of all the international holidays.

So I'm not sure that repeating dates are worth the trouble.  Maybe it is still worth it, even if it only works for the holidays that are fixed.

* Here's the thinking on holding types (or item types, or collection codes).  Every system handles these things a little differently.  Some systems (including now Apollo, because we import from these other systems) have two orthogonal sets of holding types.  One may be based on call number and the other on medium, for example.  It's up to the library what they want to use them for (if they want them at all).  Some systems in fact have three or more such sets of types.

We wanted "LDIF" to allow for all of these, so rather than having a "type" attribute on the holding, and elements that define "type", and then also a "type2" attribute, and elements that define "type2", and so on to "typeN", we went with a way to store as many sets of types as are required.

From there, it became apparent that funds, vendors, and what we and other systems call "shelf location" were also lists of basically the same nature.  So rather than explicitly support each of these, they are also considered to be "holding types" in the XML.

When migrating from one system to another, a library very often wants to shuffle these types around: putting the secondary types into primary and vice-versa, swapping fund codes and vendors (I suppose because they've been putting things in the wrong place before), using shelf location as the material type, ignoring one list or another and not importing it, etc.  There are all kinds of things that people want to do with these "types".

What this approach allows is for the importing software to configure a mapping for what to do with each of the types.  This way, no matter what the library wants to do with them, the importer code doesn't have to be changed, nor does there need to be a step transforming the XML.
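To make that concrete, an importer-side mapping might be configured along these lines (a hypothetical Perl sketch; the list ids and Koha column names are invented for illustration):

    # Which LDIF membership list feeds which Koha item column.
    # Adjusted once per migration; the importer code never changes.
    my %membership_map = (
        ml1 => 'items.itype',       # call-number-based types
        ml2 => 'items.ccode',       # medium-based types
        ml3 => 'items.location',    # shelf locations
        ml4 => undef,               # vendor list: ignore on import
    );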

I'm not married to this approach.  Do you think it's too confusing?

* Some systems (including Apollo) keep a usage count for biblios.  Whenever an item checks out, the usage count for that item is incremented, as well as the usage count for that biblio.  It isn't hugely important, but it means that for biblios with a lot of items and a lot of turnover on those items, items can be weeded without losing the count for the number of times the title has circulated.  The biblio usageCount can be absent, in which case the importer (for a system that supports it) should probably add up the item usageCounts to create the biblio usageCount.
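As a sketch of what an importer might do (Perl with XML::LibXML; the element and attribute names are taken from the example export, and this assumes the LDIF elements aren't in a default namespace, which would otherwise call for an XPathContext):

    use XML::LibXML;

    my $doc = XML::LibXML->load_xml( location => 'library.xml' );

    # Sum the item-level counts per biblio; holdings reference
    # their biblio via the "biblio" attribute in the sample export.
    my %usage;
    for my $holding ( $doc->findnodes('//holding[@usageCount]') ) {
        $usage{ $holding->getAttribute('biblio') }
            += $holding->getAttribute('usageCount');
    }

    # Fill in any biblio that doesn't carry its own usageCount.
    for my $biblio ( $doc->findnodes('//biblio[not(@usageCount)]') ) {
        $biblio->setAttribute(
            usageCount => $usage{ $biblio->getAttribute('id') } // 0 );
    }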

* ILS-specific extensions should be fine, and that makes sense.  Of course we'll want to make sure that anything that makes sense in the main spec goes there.  Also it would be nice to have an export including the extensions validate against the schema, but I suppose an ILS could use its own schema, and declare that it's the same as "LDIF Version X" plus extensions.

The main ILS-specific stuff that might be required (at least, that I can think of right now) would be the set of configuration options for a library.  Those wouldn't really translate from one system to another, generally, but would be handy to have in the export for backup/restore purposes.  Maybe there's a way to allow the storage of those in key/value pairs in the main spec.
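Something as simple as this could work for that (element and attribute names are hypothetical):

    <configuration>
      <option key="fine_grace_days" value="3"/>
      <option key="receipt_footer" value="Thank you for visiting!"/>
    </configuration>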

Let us work on some updates based on this conversation and we'll see how the XSD looks after that.  Thanks for all your help!
Comment 5 Xan Charbonnet 2011-09-21 15:46:42 UTC
What would you think about using this for addresses?
http://docs.oasis-open.org/ciq/v3.0/prd02/xsd/default/xsd/
(specifically, xAL.xsd)
Comment 6 Robin Sheat 2011-09-21 21:10:41 UTC
* Koha would need to be able to export authorities, but they're just MARC records, so it'll be pretty simple.

* I'll have a look at how Koha does repeating holidays; I think it's for things like Christmas and so on. Here we have a few that are always on, say, the 25th of April (which sucks when that's a weekend and you don't get the day off :) I think some libraries also include days they don't open, e.g. Sundays, but I'd have to check.

* Viewed that way, your holdingTypes approach makes sense. Although I'd be inclined to rename it, since it's no longer just about holding types; holdingAttributes, perhaps? Your point about these typically being remapped is quite valid: it wouldn't be too bad to present a list of them all and have the person doing the import work out which fields they should be linked to. It's also a place to put things we haven't thought of, or that some systems do and others don't.

* Ah, I don't think Koha tracks usage at a biblio level specifically (although generating it from stats would be easy) so doing that didn't occur to me.

* My knowledge of how XML namespaces work falls down a bit here, but I wonder if it's possible to include a namespace such that the file still validates as LDIF (so anything LDIF-compatible will work just fine), while software that knows the extra namespace can handle that data too. That way, anything that can't would be blind to the extra data but would still get the core stuff. It should be possible; I need to get a bit further in my reading about such things to know for sure.
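From what I can tell, the usual XSD mechanism for this is an element wildcard, something like:

    <xs:any namespace="##other" processContents="lax"
            minOccurs="0" maxOccurs="unbounded"/>

which (if the LDIF content models can accommodate it) would let an exporter emit, say, a koha:-prefixed element alongside the standard ones while everything else safely skips it. I'd want to verify that, though.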

* Oh, sorta related to the above: it may be good to have a <system name="koha" version="3.04.04" /> type entry so that things like the holding type mappings can perhaps be worked out by default.

* I think that xAL.xsd would cover every eventuality we're likely to run into :)
Comment 7 Xan Charbonnet 2011-09-27 03:09:43 UTC
Okay!  There's a new version of the schema posted at http://www.biblionix.com/ldif.xsd , and a new demonstration export at http://www.biblionix.com/demolibrary_2011-09-26.xml .  Mostly because of the change to MARC XML, the file is now 84MB, but it compresses very well!

Along those lines (and I'm getting way ahead of myself here) it might be beneficial to specify a standard way to distribute this file.  Say a .tar.bz2, including the schemas and the document at particular locations within that archive.  That would turn LDIF (or whatever we want to call it) into a single-file solution, and the compressed file would be much easier to deal with.  bzip2 gets the 84MB file down to 4MB.  Importers and exporters would deal with that archive, decompressing and compressing as required without making the user do it.
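For what it's worth, building such a bundle is only a few lines of Perl with Archive::Tar (file names and the internal layout here are placeholders, not part of any spec yet):

    use Archive::Tar;

    # Bundle the document and its schema into one compressed archive.
    Archive::Tar->create_archive(
        'demolibrary.ldif.tar.bz2', COMPRESS_BZIP,
        'demolibrary.xml', 'ldif.xsd',
    );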

Here is the list of changes since the last version.  This list also appears in the schema itself.

    * Added documentation
    * Added the <system> element for describing the software
      that generated a document
    * Added a way to represent days of the week the library
      is closed
    * Internationalized phone numbers
    * Converted from base64-encoded MARC to MARC in XML,
      referring to the official LoC MARC XML schema
    * Added support for authorities, via the same MARC in XML
      schema
    * Changed "types" (eg, holdingTypeLists) to "memberships"
      (eg holdingMembershipLists)
    * Removed the concept of "age" in favor of a
      patronTypeMembership
    * Replaced patronType and holdingType from checkouts
      with a reference to applicable membershipLists
    * Swapped the order of holdingMembershipLists and
      patronMembershipLists (only because it's easier to
      explain the holding version in the inline documentation)
    * Removed cardRenewals (didn't seem worth the trouble)
    * Simplified purchaseRequests
    * Moved reserves inside of patrons
    * A few other minor housekeeping things

Still need to figure out addresses; I'm a little worried that xAL.xsd is too broad and complex for use here...  But I wanted to get out what I had so far.  Any thoughts appreciated!
Comment 8 Robin Sheat 2011-09-27 03:49:43 UTC
This is from just looking at it very quickly.

* Specifying a recommended compression, e.g. bz2, is a good thing. I don't think we need to bundle the XSD along with it, as it should be validatable from the information in the header anyway. That also makes it easier for the loading software to see what version it's expected to work with.

I think most XML validators will download the XSD if they don't know it already.
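For instance, a minimal check with XML::LibXML goes something like this (a sketch; the file locations are placeholders):

    use XML::LibXML;

    my $doc    = XML::LibXML->load_xml( location => 'export.xml' );
    my $schema = XML::LibXML::Schema->new( location => 'ldif.xsd' );
    eval { $schema->validate($doc) };
    die "invalid: $@" if $@;
    print "valid\n";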

* The internal documentation is great.

* Using a bitmask for days_of_week_closed is a bit odd. To be more XML-y, might it be better to have <saturday /><sunday /> elements inside there, or something along those lines?

* I note you say that MARC::Record is capable of creating MARC-XML to go in the biblio records "with a little tweaking". What tweaking is that, and is it something we can/should send upstream?

I'll hopefully get a deeper look at it some time later this week.
Comment 9 Xan Charbonnet 2011-09-27 04:03:46 UTC
Once again you've correctly pointed out where I'm not doing things the XML way.  Obviously I need the help!  It does make sense to be a little more verbose in the XML, and let bzip2 do the work of making it smaller.

MARC::File::XML's as_xml_record() output looks like this by default:
<?xml version="1.0" encoding="UTF-8"?>
<record
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"
    xmlns="http://www.loc.gov/MARC21/slim">
  <leader>01172cam a2200301   4500</leader>
...etc etc...
</record>

which I change by stripping the XML declaration, by removing the attributes on the <record> tag, and by adding the "marc:" prefix to all the elements, thus:
<marc:record>
  <marc:leader>01172cam a2200301   4500</marc:leader>
...etc etc...
</marc:record>

So it's not an issue of the output of that module being invalid against the official MARC XML Schema; it just needs a little massaging to fit into our document.  So I don't think it needs to be an upstream issue.
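For reference, the massaging amounts to something like this (a crude string-level sketch, not our exact code; a DOM or XSLT rewrite would be more robust):

    use MARC::Record;
    use MARC::File::XML ( BinaryEncoding => 'utf8' );

    sub marc_record_to_fragment {
        my ($record) = @_;                        # a MARC::Record object
        my $xml = $record->as_xml_record();
        $xml =~ s/\A<\?xml[^>]*\?>\s*//;          # strip the XML declaration
        $xml =~ s/<record\b[^>]*>/<record>/;      # drop the <record> attributes
        $xml =~ s{<(/?)([A-Za-z])}{<$1marc:$2}g;  # add the "marc:" prefix
        return $xml;
    }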
Comment 10 Xan Charbonnet 2011-09-27 04:51:58 UTC
The bitmask has been removed.  Here's what the <holidays> specification looks like now:

<xs:complexType name="holidays">
    <xs:sequence>
        <xs:element name="days_of_week_closed" minOccurs="0">
            <xs:complexType>
                <xs:attribute name="sunday" type="xs:boolean" default="false"/>
                <xs:attribute name="monday" type="xs:boolean" default="false"/>
                <xs:attribute name="tuesday" type="xs:boolean" default="false"/>
                <xs:attribute name="wednesday" type="xs:boolean" default="false"/>
                <xs:attribute name="thursday" type="xs:boolean" default="false"/>
                <xs:attribute name="friday" type="xs:boolean" default="false"/>
                <xs:attribute name="saturday" type="xs:boolean" default="false"/>
            </xs:complexType>
        </xs:element>
        <xs:element name="date" type="xs:date" maxOccurs="unbounded"/>
    </xs:sequence>
</xs:complexType>

Much more readable and XML-like in the document:
  <system name="apollo" vendor="biblionix" version="2011-09-26.02"/>
  <holidays>
    <days_of_week_closed monday="1" thursday="1"/>
    <date>2006-11-24</date>
    <date>2006-11-25</date>
   ...
  </holidays>
Comment 11 Xan Charbonnet 2011-09-30 21:01:19 UTC
I've just posted version 013; here's the changelog:
    * Support for multiple branches
    * Added SMS email address to phones
    * Internationalized addresses
    * Give the full external URL in MARC XML's schemaLocation
    * Remove the bitmask for days of the week closed in favor
      of a more XML-like solution

The location for the latest schema has changed to:
http://www.biblionix.com/ldif/ldif.xsd
And the location for any particular version is, eg,
http://www.biblionix.com/ldif/ldif_013.xsd

Here's the latest example export:
http://www.biblionix.com/ldif/apollo_demonstration_2011-09-30.xml.bz2

Also here's a script I wrote to validate documents:
http://www.biblionix.com/ldif/ldif_validator.zip

That script accepts either an XML file or a bzipped XML file, and uses a parser which doesn't soak up all the memory on the machine (so many of them do for a file this big!).
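In case it's useful to anyone else: one way to walk a file this size without building a DOM is a pull parser such as XML::LibXML::Reader. A small sketch that counts holdings (element name from the example export; not necessarily how the validator above works):

    use XML::LibXML::Reader;

    my $reader = XML::LibXML::Reader->new( location => 'export.xml' )
        or die "cannot open export.xml";
    my $holdings = 0;
    while ( $reader->read ) {
        $holdings++
            if $reader->nodeType == XML_READER_TYPE_ELEMENT
            && $reader->localName eq 'holding';
    }
    print "$holdings holdings\n";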

I think we're really getting somewhere here; we should come up with a name for the thing.  Does the other LDIF intersect with the library world much?
Comment 12 Xan Charbonnet 2011-11-29 23:55:49 UTC
Just an update on a couple of things.

I mentioned this bug in a post to the perl4lib mailing list about a fix to MARC::File::XML:
http://www.nntp.perl.org/group/perl.perl4lib/2011/11/msg2948.html

Version 015 of "LDIF" has been posted:
http://www.biblionix.com/ldif/ldif.xsd
http://www.biblionix.com/ldif/ldif_015.xsd

    Changes from version 014:
    * Added support for patron password
    * Added support for custom fields ("userdefs") for patrons
      and holdings

    Changes from version 013:
    * Added an "urgent" option for patronNotes
    * Removed the "require"ment for a number of fields
    * Changed amountPaidCents in fines to default "0" rather
      than required
Comment 13 Xan Charbonnet 2012-07-12 17:03:46 UTC
LDIF has continued to improve, and has proven its usefulness.  We have written a number of converters which take uploaded data from other systems and convert it to LDIF, so that we only need to maintain one importer.  Taking this step has made it possible to improve our importer in ways which would not have been practical with the one-importer-per-system method.

If LDIF is this useful when only one side of the equation deals with it, imagine how useful it will be when multiple library systems can import and export it.

This is the right thing to do.  Libraries need to be able to get their data out of any system in a standardized way.  I believe this is far more important than the development methodology used for any particular system.  If your goal is truly ILS openness and vendor independence, let's work together and make LDIF interoperability a reality!



Back in May we got to version 017, with the following changes:

Changes from version 016:
* Copy and volume fields added for holdings
* Call suffix for holdings added
* Added regular expressions for the ILS to map call numbers to material types
* Reserve "placed" and "resolved" now dateTime instead of date
* Added "latest activity" attribute for patrons

Changes from version 015:
* holidays element no longer requires at least one date
Comment 14 Chris Cormack 2012-07-12 21:01:45 UTC
(In reply to comment #13)
> LDIF has continued to improve, and has proven its usefulness.  We have
> written a number of converters which take uploaded data from other systems
> and convert it to LDIF, so that we only need to maintain one importer. 
> Taking this step has made it possible to improve our importer in ways which
> would not have been practical with the one-importer-per-system method.
> 
> If LDIF is this useful when only one side of the equation deals with it,
> imagine how useful it will be when multiple library systems can import and
> export it.
> 
> This is the right thing to do.  Libraries need to be able to get their data
> out of any system in a standardized way.  I believe this is far more
> important than the development methodology used for any particular system. 
> If your goal is truly ILS openness and vendor independence, let's work
> together and make LDIF interoperability a reality!
> 
> 
While LDIF is undoubtedly a good thing, free software is not a development methodology.

Perhaps you should reread your last paragraph; you will find that it is easier to win allies if you don't imply they are liars when talking to them.
Comment 15 Xan Charbonnet 2012-07-12 21:23:22 UTC
Please pardon my terminology.  What I'm trying to say is that having an open standard for ILS data, and having the freedom to move from one ILS to another easily, is, for most librarians of limited skill and/or budget, at least as important a freedom as being able to view, modify, and distribute the source code of their ILS.  These freedoms are by no means mutually exclusive, and I think that an open interchange format would work very well as part of what Koha can offer.

I think the tone of the last paragraph in my previous comment came across poorly.  It was meant to be a cheerleading call to action to get people excited about making something cool happen.  It seems to have come across as an accusation, which it was not, and I'm sorry for the miscommunication.  I'm not sure where I called anybody a liar but I'm quite sure that was not my intention!

In any case, I do hope that we can work together and make LDIF interoperability a reality.
Comment 16 Jared Camins-Esakov 2012-07-12 22:25:52 UTC
If you take a look at bug 8268, you can see how the Export tool's user interface works. It's pretty straightforward. You could also, of course, submit a patch in the form of a command-line script to put in misc/migration_tools, if you're more comfortable with that.
Comment 17 Xan Charbonnet 2012-07-12 22:34:54 UTC
Jared,

Thanks; that discussion is informative and does dovetail with this topic.  We may do just what you suggest, and write a Koha patch to do LDIF.  My hesitancy with that is that we may be putting the cart before the horse; we aren't Koha experts and don't know what assumptions are in LDIF that might not apply to other ILSes.  I was hoping for some feedback on specifically how the Koha data would fit into the current LDIF specification.  Robin has been very helpful along those lines with this thread.  Do you think the thing to do from here is to write some code and see how things shake out?
Comment 18 Jared Camins-Esakov 2012-07-12 22:46:47 UTC
(In reply to comment #17)
> My hesitancy with that is that we may be putting the cart before the horse; we
> aren't Koha experts and don't know what assumptions are in LDIF that might
> not apply to other ILSes. [...] Do you think the thing to do from here is to
> write some code and see how things shake out?

Well, based on your comment earlier, it sounds like LDIF is working as an intermediate export format for the data from most ILSes that you run into, so I think the next obvious step is to modify your Koha->LDIF script so that it is suitable for inclusion in Koha. Incidentally, I'd be very interested in seeing what the LDIF export in Apollo looks like to librarians, if you'd be able to point me at a demo.
Comment 19 Xan Charbonnet 2012-07-12 23:00:28 UTC
Created attachment 10804 [details]
Export Apollo screenshot
Comment 20 Xan Charbonnet 2012-07-12 23:03:55 UTC
Makes sense to me.  The Koha that we've run into is whatever version of LibLime's fork is being run by two consortia in west Texas.  I hope that the converter we've got mostly applies!  And our converters are geared towards the data that we want to bring in; I'm sure there would be other features to add to LDIF in order to get full Koha->Koha functionality going.  But this would be a good start.  I'll see what we can put together.

I've also attached a screenshot of our export page to this bug.  It's just a file download for the librarians.
Comment 21 Chris Cormack 2012-07-12 23:12:58 UTC
(In reply to comment #20)
> Makes sense to me.  The Koha that we've run into is whatever version of
> LibLime's fork is being run by two consortia in west Texas.  I hope that the
> converter we've got mostly applies!  And our converters are geared towards
> the data that we want to bring in; I'm sure there would be other features to
> add to LDIF in order to get full Koha->Koha functionality going.  But this
> would be a good start.  I'll see what we can put together.
> 
Hmm, unfortunately it may not :( There are now 4 years of divergence in codebase and DB structure. But it will at least be a good start.


Koha to Koha will be easier (in fact the bug Jared linked does this, including all your configuration); forks based on Koha ... might be a bit trickier. That's where LDIF would be handy, if you can get it added to the forks, that is.


> I've also attached a screenshot of our export page to this bug.  It's just a
> file download for the librarians.

Cool.
Comment 22 Xan Charbonnet 2012-07-12 23:22:03 UTC
Yes, I'm afraid that the forks won't be too interested in such a feature.  Hopefully we'll be able to convince them.  Getting it into mainline Koha would be a great first step.
Comment 23 Xan Charbonnet 2012-07-18 03:53:44 UTC
Okay!  I followed the instructions on the site and have installed the development version of Koha on a Debian Squeeze VM.  It's up and running.

Since I'm not intimately familiar with the database structure, and am attempting to write an exporter, it would be really helpful to have a database which is populated with reasonable data.

I could probably muddle around and create a few biblios, holdings, circulations, etc, but it would make for a much better exporter if I had a bigger sample of less-contrived data to work with.

Is there any reasonable way for me to get a .sql file of a populated Koha database?  I appreciate any advice.
Comment 24 Robin Sheat 2012-07-18 08:12:21 UTC
I haven't looked to see what they contain, but here are a couple that are used in the patch testing sandboxes:

http://git.koha-community.org/gitweb/?p=contrib/global.git;a=tree;f=sandbox;h=83030c458a12c39c56fddbab3c60b329655bc612;hb=HEAD

Additionally, the Koha database schema is here:
http://schema.koha-community.org/
Comment 25 Xan Charbonnet 2014-10-08 01:45:10 UTC
Hello Koha Community,

I'd like to resurrect this discussion, if possible.  Since we last spoke, LDIF has gone through a number of revisions (you can see the latest .xsd, including the changelog, here: https://www.biblionix.com/ldif/ldif.xsd).

(We haven't yet come up with a better name than LDIF, to resolve the conflict with the LDAP standard.  Certainly open for ideas on that.)

And we now have a script that reads directly from a Koha database and outputs LDIF.  That script is available here: https://www.biblionix.com/ldif/koha_to_ldif.pl.txt

It would be really exciting if you would consider integrating an LDIF extraction feature into Koha, perhaps based on this script.  I don't know whether such a feature would ultimately be better as part of the UI or as a command-line script, but figured that this script would be a good starting point.  I'm sure there are many things that Koha experts would do differently, and there's probably a place or two where things are US-centric (processing states in addresses, most prominently), but if you have any interest in adapting this for inclusion in Koha, then we'd be glad to make any changes that would be appropriate, including of course relicensing it under GPL3.

Let me know if there's anything I can clarify or do to make this a more worthwhile feature.

Thanks!