Summary: | would be nice to remove records from breeding tables. | | |
---|---|---|---|
Product: | Koha | Reporter: | Michael Hafen <michael.hafen> |
Component: | Staff interface | Assignee: | Paul Poulain <paul.poulain> |
Status: | CLOSED FIXED | QA Contact: | Bugs List <koha-bugs> |
Severity: | enhancement | | |
Priority: | P2 | CC: | chris, katrin.fischer, magnus |
Version: | Main | | |
Hardware: | PC | | |
OS: | All | | |
Change sponsored?: | --- | Patch complexity: | --- |
Documentation contact: | | Documentation submission: | |
Text to go in the release notes: | | Version(s) released in: | |
Bug Depends on: | | | |
Bug Blocks: | 8149 | | |
Attachments: | proposed patches and new files. | | |
Description
Chris Cormack 2010-05-20 23:38:40 UTC
already done (checked in 3.4)

---

How is it done in 3.4? Is there a way to delete unused records from the breeding tables?

---

When you've imported data into your catalogue, you have a "clean" link to remove the content of that import.

---

Yes, but when searching Z39.50 targets I thought more than just the downloaded record was added to the breeding tables? I see no way to delete those at the moment.

---

Well spotted, Katrin. This could easily be achieved with the following SQL requests:

```sql
DELETE FROM import_records
WHERE import_batch_id IN (
    SELECT import_batches.import_batch_id
    FROM import_batches
    WHERE import_batches.batch_type = 'z3950'
)
AND import_records.upload_timestamp <= DATE_SUB(NOW(), INTERVAL 1 DAY);

DELETE FROM import_batches
WHERE import_batch_id NOT IN (SELECT DISTINCT import_batch_id FROM import_records);
```

The first request deletes all z3950 entries older than one day (a longer retention than necessary, but harmless). The second deletes the "fake batch header" that is generated on each z3950 query.

We could run this SQL in either of two places:

* On every z3950 search: clean the database before running the new search. Easy to set up and effective, at the cost of a small performance hit; I think that hit is small compared to how long z3950 servers take to answer.
* In a nightly cron job. A little harder to write, and it has to be set up during installation, which puts some burden on the sysadmin.

I prefer the first option; let me know which one you prefer.

---

The first option seems fine by me, as does a simple "clean Z39.50 search results" button.

---

(In reply to comment #5)

Either the first option, or adding those SQL statements to the cleanup_database script with a default of one day; either of those options would be fine with me. I think I'd prefer having it in cleanup_database, if only so the delay before removal can be set by the sysadmin, but I'm fine with whichever gets the job done.

---

I believe this is fixed by an option of the cleanup_database cronjob:

```
--z3950    purge records from import tables that are the result
           of Z39.50 searches
```

Please open a new bug if more functionality is needed.
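For anyone wanting to see what the proposed cleanup would touch before running it, here is a sketch of two read-only preview queries. They assume only the schema visible in the SQL proposal above (import_records, import_batches, batch_type, upload_timestamp); the queries themselves are illustrative and not part of any patch attached to this bug.

```sql
-- Count the stale Z39.50 breeding records the first DELETE would remove
-- (same one-day retention interval as in the proposal above).
SELECT COUNT(*)
FROM import_records
WHERE import_batch_id IN (
    SELECT import_batch_id
    FROM import_batches
    WHERE batch_type = 'z3950'
)
AND upload_timestamp <= DATE_SUB(NOW(), INTERVAL 1 DAY);

-- List the orphaned "fake batch headers" the second DELETE would remove.
SELECT import_batch_id
FROM import_batches
WHERE import_batch_id NOT IN (
    SELECT DISTINCT import_batch_id FROM import_records
);
```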
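And a minimal crontab sketch for the cron-based resolution mentioned in the last comment. Only the --z3950 flag is confirmed above; the script path and the schedule are assumptions based on where cleanup_database.pl usually lives in a packaged Koha install, so adjust both to your setup.

```
# Assumed path; adjust to your Koha installation. Purges Z39.50 search
# results from the import (breeding) tables every night at 02:30.
30 2 * * * /usr/share/koha/bin/cronjobs/cleanup_database.pl --z3950
```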