To store RDF data, we need to be able to connect to a triplestore. This enhancement simply adds the ability to connect to a separate, external triplestore.
Created attachment 63393 [details] [review] Bug 18585 - Connect to RDF triplestore This commit adds a 'triplestore' method to C4::Context, which returns an RDF::Trine::Model object if triplestore.yaml is properly configured. (At the moment, it just supports RDF::Trine::Store::SPARQL, but it would be trivial to add support for RDF::Trine::Store::DBI too.) This code will provide a base from which people can use RDF::Trine for querying and updating an RDF triplestore.
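To give a sense of how this would be used, here's a minimal usage sketch (hypothetical caller code, not part of the patch; the example URIs are made up), using only documented RDF::Trine calls:

  use C4::Context;
  use RDF::Trine qw(iri literal statement);

  # Returns an RDF::Trine::Model backed by the configured store,
  # or undef if triplestore.yaml isn't configured.
  my $model = C4::Context->triplestore();

  # Write a triple to the triplestore...
  $model->add_statement( statement(
      iri('http://example.org/biblio/1'),
      iri('http://purl.org/dc/terms/title'),
      literal('RDF and Koha'),
  ) );

  # ...and read it back.
  my $iter = $model->get_statements( iri('http://example.org/biblio/1'), undef, undef );
  while ( my $st = $iter->next ) {
      print $st->as_string, "\n";
  }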
Created attachment 63403 [details] [review] Bug 18585 - Connect to RDF triplestore (add dependency) Add RDF::Trine to the list of Perl dependencies
Created attachment 63411 [details] [review] [SIGNED-OFF] Bug 18585 - Connect to RDF triplestore This commit adds a 'triplestore' method to C4::Context, which returns an RDF::Trine::Model object if triplestore.yaml is properly configured. (At the moment, it just supports RDF::Trine::Store::SPARQL, but it would be trivial to add support for RDF::Trine::Store::DBI too.) This code will provide a base from which people can use RDF::Trine for querying and updating an RDF triplestore. Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Created attachment 63412 [details] [review] [SIGNED-OFF] Bug 18585 - Connect to RDF triplestore (add dependency) Add RDF::Trine to the list of Perl dependencies Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Created attachment 63413 [details] [review] [SIGNED-OFF] Bug 18585 - Followup - fix pod Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
At least the Fuseki triplestore has different endpoints for reading and writing data. With Fuseki as the triplestore, this bug only adds a read-only RDF::Trine::Model object. Would it be possible to do a followup that would allow us to create both read and write objects? (As two separate objects.)
(In reply to Magnus Enger from comment #6)
> At least the Fuseki triplestore has different endpoints for reading and
> writing data. With Fuseki as the triplestore, this bug only adds a read-only
> RDF::Trine::Model object. Would it be possible to do a followup that would
> allow us to create both read and write objects? (As two separate objects.)

Hi Magnus,

I'm not sure why you think it adds a read-only object. I can 100% assure you that the RDF::Trine::Model object is both read and write. The URL you provide is to the dataset (e.g. http://semantikoha.libriotech.no:3030/david-koha-test). From there, RDF::Trine::Model will build the "update" and "query" URLs dynamically for reading and writing using the SPARQL 1.1 standard. (In fact, this is why RDF::Trine doesn't work with older versions of Virtuoso, since Virtuoso before 6.1.7 or so didn't comply with SPARQL 1.1.)
(In reply to David Cook from comment #7)
> (In reply to Magnus Enger from comment #6)
> > At least the Fuseki triplestore has different endpoints for reading and
> > writing data. With Fuseki as the triplestore, this bug only adds a read-only
> > RDF::Trine::Model object. Would it be possible to do a followup that would
> > allow us to create both read and write objects? (As two separate objects.)
>
> Hi Magnus,
>
> I'm not sure why you think it adds a read-only object. I can 100% assure you
> that the RDF::Trine::Model object is both read and write. The URL you
> provide is to the dataset (e.g.
> http://semantikoha.libriotech.no:3030/david-koha-test). From there,
> RDF::Trine::Model will build the "update" and "query" URLs dynamically for
> reading and writing using the SPARQL 1.1 standard. (In fact, this is why
> RDF::Trine doesn't work with older versions of Virtuoso, since Virtuoso
> before 6.1.7 or so didn't comply with SPARQL 1.1.)

Actually, as I read through the code at http://cpansearch.perl.org/src/GWILLIAMS/RDF-Trine-1.016/lib/RDF/Trine/Store/SPARQL.pm, it looks like they don't dynamically change the URL. Rather, they just use the "update" and "query" parameters depending on the operation specified, but it's all sent to the same URL, which, as I mentioned above, points to the dataset.
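In other words, the store is constructed against a single dataset URL, and RDF::Trine decides per operation whether to send a "query" or an "update" request to that same URL. A quick sketch (with a made-up endpoint):

  use RDF::Trine;

  # One URL for the whole dataset; no separate /query or /update endpoints.
  my $store = RDF::Trine::Store::SPARQL->new('http://example.org:3030/dataset');
  my $model = RDF::Trine::Model->new($store);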
Looking at https://jena.apache.org/documentation/serving_data/, it looks like "/data" is a read/write endpoint, and after visiting http://semantikoha.libriotech.no:3030/david-koha-test/ and http://semantikoha.libriotech.no:3030/david-koha-test/data, it seems that both map to the same place. In which case... maybe it is worthwhile having separate read and write endpoints in the configuration file...
However, RDF::Trine::Store::Memory is read/write and I imagine RDF::Trine::Store::DBI is as well. Maybe worth looking at a few triplestores to see what the norm is.
Virtuoso still seems to use non-standard parameters, but it looks like it has a read/write endpoint (see the "HTTP Request Methods" section at https://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/VOSSparqlProtocol).
More info about Virtuoso even though we can't use it anyway with RDF::Trine (https://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/VirtTipsAndTricksGuideSPARQLEndpoints).
Thanks for clarifying, David! I think the problem was that I put http://example.com/david-koha-test/query in the config, and that made the object read-only. On the other hand, having separate read and read/write objects could perhaps make some sense, like using the read-only object when you know you are only going to display data, as in the OPAC. (Especially for my librarian-configurable queries, to protect against people doing stupid/evil things :-)
(In reply to Magnus Enger from comment #13)
> Thanks for clarifying, David! I think the problem was that I put
> http://example.com/david-koha-test/query in the config, and that made the
> object read-only.
>
> On the other hand, having separate read and read/write objects could perhaps
> make some sense, like using the read-only object when you know you are only
> going to display data, as in the OPAC. (Especially for my
> librarian-configurable queries, to protect against people doing stupid/evil
> things :-)

That's very true, and that's something that I had been thinking about before and totally forgot about! I think using a read-only endpoint for the librarian-configurable queries is a great idea, and it closes off a security hole. Does Fuseki allow you to configure permissions for user accounts as well, so that even if you do use a read/write endpoint, only certain users can insert or delete triples?
I'm thinking a little bit about what this would look like... I'm thinking maybe:

Code:

  C4::Context::triplestore($model_name)

Config (one stanza per model name):

  query:
    module:
    url:
    username:
    password:
    realm:
  update:
    module:
    url:
    username:
    password:
    realm:
  custom:
    module:
    url:
    username:
    password:
    realm:

#NOTE: I doubt we'd ever use "custom"... but the flexibility would be there. Maybe there will be a case in the future where we want to be able to connect to multiple triplestores for some reason. Who knows?

This would actually work well with how RDF::Trine::Store::DBI is set up, since it uses "model name" in its constructor: http://search.cpan.org/dist/RDF-Trine/lib/RDF/Trine/Store/DBI.pm
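To make that concrete, here's a rough sketch of how triplestore($model_name) might read such a config (not the actual patch: the file location and key names above are assumptions, only the SPARQL backend is handled, and username/password/realm handling is omitted):

  use YAML::XS;
  use RDF::Trine;

  my %triplestore_cache;

  sub triplestore {
      my ( $self, $model_name ) = @_;
      $model_name //= 'query';
      return $triplestore_cache{$model_name} if $triplestore_cache{$model_name};

      # Assumes a triplestore.yaml keyed by model name, as sketched above.
      my $config = eval { YAML::XS::LoadFile('triplestore.yaml') } or return;
      my $conf   = $config->{$model_name} or return;

      if ( $conf->{module} && $conf->{module} eq 'RDF::Trine::Store::SPARQL' ) {
          my $store = RDF::Trine::Store::SPARQL->new( $conf->{url} );
          $triplestore_cache{$model_name} = RDF::Trine::Model->new($store);
      }
      return $triplestore_cache{$model_name};
  }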
I ran out of time today and it's Friday, but I'll look at posting a revised patch on Monday. The patch needed to be rebased anyway because of some changes to koha-conf.xml, I believe.
Created attachment 64252 [details] [review] Bug 18585 - Connect to RDF triplestore This commit adds a 'triplestore' method to C4::Context. It takes a model name as a parameter and returns an RDF::Trine::Model object if triplestore.yaml is properly configured. (At the moment, it just supports RDF::Trine::Store::SPARQL, but it would be trivial to add support for RDF::Trine::Store::DBI too, or any other RDF::Trine::Store::* backend.) This code will provide a base from which people can use RDF::Trine for querying and updating an RDF triplestore.
Created attachment 64253 [details] [review] Bug 18585 - Connect to RDF triplestore This commit adds a 'triplestore' method to C4::Context. It takes a model name as a parameter and returns an RDF::Trine::Model object if triplestore.yaml is properly configured. (At the moment, it just supports RDF::Trine::Store::SPARQL, but it would be trivial to add support for RDF::Trine::Store::DBI too, or any other RDF::Trine::Store::* backend.) This code will provide a base from which people can use RDF::Trine for querying and updating an RDF triplestore.
Magnus, how does that look now? That's probably a safer way of doing things: even if someone injected malicious or flawed SPARQL into a "read" query, they could do zero damage to the triplestore. In the long run, having separate endpoints might also make it easier to manage load on the triplestore. A person could update the canonical triplestore and then send queries to a mirror specified in the configuration file.
Updating my existing code now, and this is turning out to be interesting... In a test, I was using a "fake" model name, but my test failed because one of the unrelated methods is hard-coded to use the "query" model. I changed the model name in the test and it worked fine, but it's just something to keep in mind.
Assigning to David as he is the author of the patch. Patch still applies cleanly.
Created attachment 69632 [details] [review] Bug 18585 - Connect to RDF triplestore This commit adds a 'triplestore' method to C4::Context. It takes a model name as a parameter and returns an RDF::Trine::Model object if triplestore.yaml is properly configured. (At the moment, it just supports RDF::Trine::Store::SPARQL, but it would be trivial to add support for RDF::Trine::Store::DBI too, or any other RDF::Trine::Store::* backend.) This code will provide a base from which people can use RDF::Trine for querying and updating an RDF triplestore. Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Looks like this one doesn't apply anymore... I'll rebase it in a minute
Created attachment 71016 [details] [review] Bug 18585 - Connect to RDF triplestore This commit adds a 'triplestore' method to C4::Context. It takes a model name as a parameter and returns an RDF::Trine::Model object if triplestore.yaml is properly configured. (At the moment, it just supports RDF::Trine::Store::SPARQL, but it would be trivial to add support for RDF::Trine::Store::DBI too, or any other RDF::Trine::Store::* backend.) This code will provide a base from which people can use RDF::Trine for querying and updating an RDF triplestore.
Because of a few new dependencies, the patch doesn't apply anymore: > CONFLICT (content): Merge conflict in C4/Installer/PerlDependencies.pm
Created attachment 73897 [details] [review] Bug 18585 - Connect to RDF triplestore This commit adds a 'triplestore' method to C4::Context. It takes a model name as a parameter and returns an RDF::Trine::Model object if triplestore.yaml is properly configured. (At the moment, it just supports RDF::Trine::Store::SPARQL, but it would be trivial to add support for RDF::Trine::Store::DBI too, or any other RDF::Trine::Store::* backend.) This code will provide a base from which people can use RDF::Trine for querying and updating an RDF triplestore.
Created attachment 78568 [details] [review] Bug 18585 - Connect to RDF triplestore This commit adds a 'triplestore' method to C4::Context. It takes a model name as a parameter and returns an RDF::Trine::Model object if triplestore.yaml is properly configured. (At the moment, it just supports RDF::Trine::Store::SPARQL, but it would be trivial to add support for RDF::Trine::Store::DBI too, or any other RDF::Trine::Store::* backend.) This code will provide a base from which people can use RDF::Trine for querying and updating an RDF triplestore.
Rebased against master
Created attachment 78975 [details] [review] Bug 18585: Missing triplestore configuration causes warnings in logs This patch checks that the triplestore configuration is defined before trying to find it and read it.
Created attachment 83610 [details] [review] Bug 18585 - Connect to RDF triplestore This commit adds a 'triplestore' method to C4::Context. It takes a model name as a parameter and returns an RDF::Trine::Model object if triplestore.yaml is properly configured. (At the moment, it just supports RDF::Trine::Store::SPARQL, but it would be trivial to add support for RDF::Trine::Store::DBI too, or any other RDF::Trine::Store::* backend.) This code will provide a base from which people can use RDF::Trine for querying and updating an RDF triplestore. Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Created attachment 83611 [details] [review] Bug 18585: Missing triplestore configuration causes warnings in logs This patch checks that the triplestore configuration is defined before trying to find it and read it. Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
For what it's worth, I'm no longer convinced what I have here is the best way forward, but I suppose it's better than nothing.
I'd like to ask for a second sign-off, see also bug 18713 comment 32
(In reply to Katrin Fischer from comment #33)
> I'd like to ask for a second sign-off, see also bug 18713 comment 32

I've added some comments to 18713, but I think it's worth pausing on this bug too.

I've been working with Fedora Commons 4.x for the past year or two, and it's opened my eyes more to RDF. Fedora uses RDF as its native metadata format, but it doesn't use a triplestore to store the RDF. Instead, it uses ModeShape (a JCR, i.e. Java Content Repository, data store) and Infinispan to store the data in a single file or a database. It works pretty well, although there are some performance issues with ModeShape in terms of parent-child relationships. What I mean to say is that I'm not convinced that we need a triplestore just yet.

I've also worked with Apache Fuseki a bit over the past few years, and it's not the best bit of software. I've found and helped resolve a few major bugs, but it isn't super robust. I've noticed it seems to have issues with fragmentation as well, which means its disk usage can skyrocket when you update or delete data.

Like Fedora, I think the best bet is to have a more conventional data store, and then, if we want to use a triplestore, we can add the RDF to the triplestore like we add metadata to ElasticSearch or Zebra. It's nice to trust a more established data store and be able to destroy and rebuild the external triplestore. For bigger libraries, it's worth noting that Apache Fuseki doesn't scale at all, so things like load balancing and replication aren't really possible.

--

Going back to Fedora... I haven't found a need for a triplestore yet. If you wanted to have a SPARQL endpoint, it could be useful. Otherwise, you just want to index your data into a search engine or display the record on a screen... and for those you don't need a triplestore.
In practice, I've found you only really need an RDF triplestore if you want to query your data or provide public access to it. Before doing that, we'd want to store the RDF in the database first and then "index" it into a triplestore.
I'm going to refer to this Quora answer (https://www.quora.com/Does-wikidata-store-data-as-RDF-Triples-If-yes-what-kind-of-datastore-is-used) where someone from Wikimedia talks about Wikidata:

"The primary data storage is dumb JSON blobs in an SQL database. MediaWiki thinks of the data as “pages”, so we also store them as pages. This also makes versioning a lot easier. I know of no scalable solution for versioned data in a triple store.

The Wikidata Query Service does use an RDF triple store to allow SPARQL queries against the current version of the data. We use BlazeGraph to store the data and run the queries. It scales well, and is fully free software. Virtuoso was also an option, but the free version lacks some critical features."

Leveraging that idea, we could store RDF/XML (or JSON-LD) in Koha, and then we could "index" it into an RDF triplestore, so that Koha could use it, and so that the public could use a public SPARQL endpoint. I'd be interested in checking out BlazeGraph as well.
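If we went that way, the "indexing" step could be fairly simple: parse the serialization stored in Koha's database into a model backed by the external triplestore. A sketch (the endpoint URL and the source of $rdfxml are assumptions, and any SPARQL 1.1 store should work, not just BlazeGraph):

  use RDF::Trine;

  # $rdfxml would come from Koha's own database (the canonical copy).
  my $store  = RDF::Trine::Store::SPARQL->new('http://example.org:9999/blazegraph/sparql');
  my $model  = RDF::Trine::Model->new($store);
  my $parser = RDF::Trine::Parser->new('rdfxml');

  # Push the record's triples into the triplestore "index".
  $parser->parse_into_model( 'http://example.org/biblio/1', $rdfxml, $model );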