Bug 6199 - Allow bulkmarcimport to blank duplicate barcodes rather than skipping items
Summary: Allow bulkmarcimport to blank duplicate barcodes rather than skipping items
Status: CLOSED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Tools
Version: 3.8
Hardware: All
OS: All
Importance: P3 enhancement
Assignee: Robin Sheat
QA Contact: Bugs List
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2011-04-18 07:23 UTC by Robin Sheat
Modified: 2013-12-05 20:04 UTC
CC: 5 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments
Patch for 3.2.x (6.15 KB, patch)
2011-04-18 07:51 UTC, Robin Sheat
Patch for master (6.18 KB, patch)
2011-04-18 07:52 UTC, Robin Sheat
sample records (1.26 KB, application/octet-stream)
2011-05-30 06:23 UTC, Katrin Fischer
Another test MARC file (385 bytes, application/octet-stream)
2011-05-30 07:37 UTC, Robin Sheat
Bug 6199 - allow bulkmarkimport.pl to remove duplicate barcodes (7.88 KB, patch)
2011-10-12 04:42 UTC, Robin Sheat
Bug 6199 - allow bulkmarkimport.pl to remove duplicate barcodes (7.99 KB, patch)
2012-03-10 16:59 UTC, Jared Camins-Esakov

Description Robin Sheat 2011-04-18 07:23:45 UTC
If bulkmarcimport.pl sees a duplicate barcode, it throws away the whole item. The forthcoming patch allows it to optionally blank out the barcode of any duplicates found. This is useful when migrating from software that may have created duplicates.
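
(A rough sketch of the idea in Perl, illustrative only and not the actual patch; the %seen lookup and the 952$p barcode subfield are assumptions:)

use strict;
use warnings;
use MARC::Field;

my %seen;    # barcodes already in the database or seen earlier in the run

# Keep the item, but blank its barcode if the barcode is a duplicate,
# instead of throwing the whole item away.
sub blank_if_duplicate {
    my ($item_field) = @_;
    my $barcode = $item_field->subfield('p');
    return unless defined $barcode and length $barcode;
    $item_field->update( p => '' ) if $seen{$barcode}++;
}

my $item = MARC::Field->new( '952', ' ', ' ', p => 'duplicate' );
blank_if_duplicate($item);    # first occurrence: barcode kept
blank_if_duplicate($item);    # duplicate: barcode blanked, item survives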
Comment 1 Robin Sheat 2011-04-18 07:51:35 UTC Comment hidden (obsolete)
Comment 2 Robin Sheat 2011-04-18 07:52:02 UTC Comment hidden (obsolete)
Comment 3 Katrin Fischer 2011-05-30 05:52:18 UTC
I tested the 'patch for master' on current master.

I tested with a file containing 5 records and a few items, some with duplicate and some with unique barcodes.
bulkmarcimport.pl -b -v -file test.mrc -l test.txt

Before applying the patch I got errors indicating the duplicate barcodes. 5 items were added to my database. Reimporting the same file gave me more duplicate errors and no additional items were added.
bulkmarcimport.pl -b -v -file test.mrc -l test.txt

I applied the patch and ran: 
bulkmarcimport.pl -b -v -file test.mrc -l test.txt -dedupbarcode

I got no error messages: 
.....
5 MARC records done in 0.341071128845215 seconds

I reindexed, searched Koha and checked my items table. No new items were added.
Comment 4 Katrin Fischer 2011-05-30 06:23:50 UTC
Created attachment 4296 [details]
sample records
Comment 5 Robin Sheat 2011-05-30 07:35:25 UTC
$ KOHA_CONF=~/koha-dev/etc/koha-conf.xml misc/migration_tools/bulkmarcimport.pl -file duplicates2.mrc -v
.Item not added (bib 65, item tag #2, barcode duplicate): duplicate barcode duplicate
..Item not added (bib 67, item tag #1, barcode duplicate2): duplicate barcode duplicate2
.
4 MARC records done in 0.205533981323242 seconds

> select count(*) from items;
+----------+
| count(*) |
+----------+
|        3 |
+----------+

This is as expected.

(after deleting all the stuff added: )

$ KOHA_CONF=~/koha-dev/etc/koha-conf.xml misc/migration_tools/bulkmarcimport.pl -file duplicates2.mrc -v -dedupbarcode
....
4 MARC records done in 0.162705183029175 seconds

> select count(*) from items;
+----------+
| count(*) |
+----------+
|        5 |
+----------+

This is also as I'd expect.

Running it again without deleting anything gives a whole lot of SQL error messages due to the item numbers being embedded in the records.
Comment 6 Robin Sheat 2011-05-30 07:37:56 UTC
Created attachment 4297 [details]
Another test MARC file

This is another MARC file that I created for testing this. It has the bare minimum of information required to get it to import.
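
(To build a similarly minimal file yourself, something like the following MARC::Record sketch should work; the 952 subfields assume Koha's default MARC21 item mapping, with $a/$b as home/holding branch and $p as barcode, and the title is made up:)

use strict;
use warnings;
use MARC::Record;
use MARC::Field;

my $record = MARC::Record->new();
$record->append_fields(
    MARC::Field->new( '245', '0', '0', a => 'Minimal test record' ),
    # One embedded item: home/holding branch C, duplicate barcode
    MARC::Field->new( '952', ' ', ' ', a => 'C', b => 'C', p => 'duplicate' ),
);

open my $fh, '>:raw', 'testdata.marc' or die "cannot write: $!";
print {$fh} $record->as_usmarc();
close $fh;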
Comment 7 Katrin Fischer 2011-05-30 09:04:57 UTC
Hi Robin, 
I will test with your file tonight. At the moment I have no idea what I did differently from what you did. Perhaps if you are online we can go through it step by step.
Comment 8 Katrin Fischer 2011-05-30 21:12:40 UTC
Dropped my database and recreated it with the English (en) sample data.
Created a library with the branch code C.

Ran: ./bulkmarcimport.pl -b -file ../../testdata.marc -dedupbarcode -v

Checked:
- Only 3 items in my items table, no null barcodes

Ran: ./bulkmarcimport.pl -b -file ../../testdata.marc -dedupbarcode -v

Checked: 
- Still only 3 items in my items table.

Perhaps someone else should try this.
Comment 9 Robin Sheat 2011-05-30 23:48:44 UTC
You don't say if you got any errors when you ran it. Also, can you put the output of:

show create table items;

here. Mine is:
CREATE TABLE `items` (
  `itemnumber` int(11) NOT NULL AUTO_INCREMENT,
  `biblionumber` int(11) NOT NULL DEFAULT '0',
  `biblioitemnumber` int(11) NOT NULL DEFAULT '0',
  `barcode` varchar(20) DEFAULT NULL,
  `dateaccessioned` date DEFAULT NULL,
  `booksellerid` mediumtext,
  `homebranch` varchar(10) DEFAULT NULL,
  `price` decimal(8,2) DEFAULT NULL,
  `replacementprice` decimal(8,2) DEFAULT NULL,
  `replacementpricedate` date DEFAULT NULL,
  `datelastborrowed` date DEFAULT NULL,
  `datelastseen` date DEFAULT NULL,
  `stack` tinyint(1) DEFAULT NULL,
  `notforloan` tinyint(1) NOT NULL DEFAULT '0',
  `damaged` tinyint(1) NOT NULL DEFAULT '0',
  `itemlost` tinyint(1) NOT NULL DEFAULT '0',
  `wthdrawn` tinyint(1) NOT NULL DEFAULT '0',
  `itemcallnumber` varchar(255) DEFAULT NULL,
  `issues` smallint(6) DEFAULT NULL,
  `renewals` smallint(6) DEFAULT NULL,
  `reserves` smallint(6) DEFAULT NULL,
  `restricted` tinyint(1) DEFAULT NULL,
  `itemnotes` mediumtext,
  `holdingbranch` varchar(10) DEFAULT NULL,
  `paidfor` mediumtext,
  `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `location` varchar(80) DEFAULT NULL,
  `permanent_location` varchar(80) DEFAULT NULL,
  `onloan` date DEFAULT NULL,
  `cn_source` varchar(10) DEFAULT NULL,
  `cn_sort` varchar(30) DEFAULT NULL,
  `ccode` varchar(10) DEFAULT NULL,
  `materials` varchar(10) DEFAULT NULL,
  `uri` varchar(255) DEFAULT NULL,
  `itype` varchar(10) DEFAULT NULL,
  `more_subfields_xml` longtext,
  `enumchron` text,
  `copynumber` varchar(32) DEFAULT NULL,
  `stocknumber` varchar(32) DEFAULT NULL COMMENT 'stores the inventory number',
  PRIMARY KEY (`itemnumber`),
  UNIQUE KEY `itembarcodeidx` (`barcode`),
  KEY `itembinoidx` (`biblioitemnumber`),
  KEY `itembibnoidx` (`biblionumber`),
  KEY `homebranch` (`homebranch`),
  KEY `holdingbranch` (`holdingbranch`),
  KEY `itemstocknumberidx` (`stocknumber`)
) ENGINE=InnoDB AUTO_INCREMENT=50 DEFAULT CHARSET=utf8
Comment 10 Katrin Fischer 2011-05-31 05:52:19 UTC
No errors at all.

CREATE TABLE  `koha`.`items` (
  `itemnumber` int(11) NOT NULL AUTO_INCREMENT,
  `biblionumber` int(11) NOT NULL DEFAULT '0',
  `biblioitemnumber` int(11) NOT NULL DEFAULT '0',
  `barcode` varchar(20) DEFAULT NULL,
  `dateaccessioned` date DEFAULT NULL,
  `booksellerid` mediumtext,
  `homebranch` varchar(10) DEFAULT NULL,
  `price` decimal(8,2) DEFAULT NULL,
  `replacementprice` decimal(8,2) DEFAULT NULL,
  `replacementpricedate` date DEFAULT NULL,
  `datelastborrowed` date DEFAULT NULL,
  `datelastseen` date DEFAULT NULL,
  `stack` tinyint(1) DEFAULT NULL,
  `notforloan` tinyint(1) NOT NULL DEFAULT '0',
  `damaged` tinyint(1) NOT NULL DEFAULT '0',
  `itemlost` tinyint(1) NOT NULL DEFAULT '0',
  `wthdrawn` tinyint(1) NOT NULL DEFAULT '0',
  `itemcallnumber` varchar(255) DEFAULT NULL,
  `issues` smallint(6) DEFAULT NULL,
  `renewals` smallint(6) DEFAULT NULL,
  `reserves` smallint(6) DEFAULT NULL,
  `restricted` tinyint(1) DEFAULT NULL,
  `itemnotes` mediumtext,
  `holdingbranch` varchar(10) DEFAULT NULL,
  `paidfor` mediumtext,
  `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `location` varchar(80) DEFAULT NULL,
  `permanent_location` varchar(80) DEFAULT NULL,
  `onloan` date DEFAULT NULL,
  `cn_source` varchar(10) DEFAULT NULL,
  `cn_sort` varchar(30) DEFAULT NULL,
  `ccode` varchar(10) DEFAULT NULL,
  `materials` varchar(10) DEFAULT NULL,
  `uri` varchar(255) DEFAULT NULL,
  `itype` varchar(10) DEFAULT NULL,
  `more_subfields_xml` longtext,
  `enumchron` text,
  `copynumber` varchar(32) DEFAULT NULL,
  `stocknumber` varchar(32) DEFAULT NULL,
  PRIMARY KEY (`itemnumber`),
  UNIQUE KEY `itembarcodeidx` (`barcode`),
  KEY `itemstocknumberidx` (`stocknumber`),
  KEY `itembinoidx` (`biblioitemnumber`),
  KEY `itembibnoidx` (`biblionumber`),
  KEY `homebranch` (`homebranch`),
  KEY `holdingbranch` (`holdingbranch`),
  CONSTRAINT `items_ibfk_1` FOREIGN KEY (`biblioitemnumber`) REFERENCES `biblioitems` (`biblioitemnumber`) ON DELETE CASCADE ON UPDATE CASCADE,
  CONSTRAINT `items_ibfk_2` FOREIGN KEY (`homebranch`) REFERENCES `branches` (`branchcode`) ON UPDATE CASCADE,
  CONSTRAINT `items_ibfk_3` FOREIGN KEY (`holdingbranch`) REFERENCES `branches` (`branchcode`) ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8
Comment 11 Magnus Enger 2011-06-15 08:37:51 UTC
Downloaded the 2 sample files as sample6199-1.mrc and sample6199-2.mrc. 

mysql> select count(*) from items;
+----------+
| count(*) |
+----------+
|      142 |
+----------+

* TEST BEFORE PATCH, SAMPLE 1

$ perl misc/migration_tools/bulkmarcimport.pl -b -v -file sample6199-1.mrc -l test-1-pre.txt
.Item not added (bib 5173, item tag #2, barcode duplicate): duplicate barcode duplicate
..Item not added (bib 5175, item tag #1, barcode duplicate2): duplicate barcode duplicate2
.
4 MARC records done in 0.396538972854614 seconds

$ more test-1-pre.txt 
id;operation;status
5173;insert;ok
5173;insert;ok
5174;insert;ok
5174;insert;ok
5175;insert;ok
5175;insert;ok
5176;insert;ok
5176;insert;ok
file : sample6199-1.mrc
4 MARC records done in 0.396538972854614 seconds

mysql> select count(*) from items;
+----------+
| count(*) |
+----------+
|      145 |
+----------+

* TEST BEFORE PATCH, SAMPLE 2

$ perl misc/migration_tools/bulkmarcimport.pl -b -v -file sample6199-2.mrc -l test-2-pre.txt
.Item not added (bib 5177, item tag #1, barcode barcode1): invalid homebranch C
.Item not added (bib 5178, item tag #1, barcode barcode1): invalid homebranch C
.Item not added (bib 5179, item tag #1, barcode barcode2): invalid homebranch C
.Item not added (bib 5180, item tag #1, barcode barcode2): invalid homebranch C
.Item not added (bib 5181, item tag #1, barcode barcode3): invalid homebranch C
5 MARC records done in 0.272064924240112 seconds

mysql> select count(*) from items;
+----------+
| count(*) |
+----------+
|      145 |
+----------+

So, I create a library with branchcode C and repeat the import:

$ perl misc/migration_tools/bulkmarcimport.pl -b -v -file sample6199-2.mrc -l test-2-pre.txt
..Item not added (bib 5183, item tag #1, barcode barcode1): duplicate barcode barcode1
..Item not added (bib 5185, item tag #1, barcode barcode2): duplicate barcode barcode2
.
5 MARC records done in 0.248023986816406 seconds

$ more test-2-pre.txt 
id;operation;status
5182;insert;ok
5182;insert;ok
5183;insert;ok
5183;insert;ok
5184;insert;ok
5184;insert;ok
5185;insert;ok
5185;insert;ok
5186;insert;ok
5186;insert;ok
file : sample6199-2.mrc
5 MARC records done in 0.248023986816406 seconds

mysql> select count(*) from items;
+----------+
| count(*) |
+----------+
|      148 |
+----------+

Before proceeding, I load a dump I made of my database before the first import, and reindex, then apply the patch and do:

mysql> select count(*) from items;
+----------+
| count(*) |
+----------+
|      142 |
+----------+

To verify that the -dedupbarcode option is present in the script:
$ perl misc/migration_tools/bulkmarcimport.pl -h

* TEST AFTER PATCH, SAMPLE 1

$ perl misc/migration_tools/bulkmarcimport.pl -b -v -file sample6199-1.mrc -l test-1-post.txt -dedupbarcode
....
4 MARC records done in 0.420637130737305 seconds

$ more test-1-post.txt 
id;operation;status
5173;insert;ok
5173;insert;ok
5173;insert;ok
5174;insert;ok
5174;insert;ok
5175;insert;ok
5175;insert;ok
5175;insert;ok
5176;insert;ok
5176;insert;ok
file : sample6199-1.mrc
4 MARC records done in 0.420637130737305 seconds

mysql> select count(*) from items;
+----------+
| count(*) |
+----------+
|      145 |
+----------+

* TEST AFTER PATCH, SAMPLE 2

$ perl misc/migration_tools/bulkmarcimport.pl -b -v -file sample6199-2.mrc -l test-2-post.txt -dedupbarcode
.....
5 MARC records done in 0.232513904571533 seconds

$ more test-2-post.txt 
id;operation;status
5177;insert;ok
5177;insert;ok
5178;insert;ok
5178;insert;ok
5178;insert;ok
5179;insert;ok
5179;insert;ok
5180;insert;ok
5180;insert;ok
5180;insert;ok
5181;insert;ok
5181;insert;ok
file : sample6199-2.mrc
5 MARC records done in 0.214529037475586 seconds

mysql> select count(*) from items;
+----------+
| count(*) |
+----------+
|      148 |
+----------+

* DOUBLE CHECKING

mysql> select biblionumber, barcode from items where biblionumber > 5172;
+--------------+------------+
| biblionumber | barcode    |
+--------------+------------+
|         5173 | duplicate  |
|         5174 | duplicate2 |
|         5176 | unique     |
|         5177 | barcode1   |
|         5179 | barcode2   |
|         5181 | barcode3   |
+--------------+------------+

Checking the number of items manually in the OPAC: 

5173: 1 item
5174: 1 
5175: 0 
5176: 1
5177: 1
5178: 0
5179: 1
5180: 0
5181: 1

Here's the output of "show create table items;": 

CREATE TABLE `items` (
  `itemnumber` int(11) NOT NULL AUTO_INCREMENT,
  `biblionumber` int(11) NOT NULL DEFAULT '0',
  `biblioitemnumber` int(11) NOT NULL DEFAULT '0',
  `barcode` varchar(20) DEFAULT NULL,
  `dateaccessioned` date DEFAULT NULL,
  `booksellerid` mediumtext,
  `homebranch` varchar(10) DEFAULT NULL,
  `price` decimal(8,2) DEFAULT NULL,
  `replacementprice` decimal(8,2) DEFAULT NULL,
  `replacementpricedate` date DEFAULT NULL,
  `datelastborrowed` date DEFAULT NULL,
  `datelastseen` date DEFAULT NULL,
  `stack` tinyint(1) DEFAULT NULL,
  `notforloan` tinyint(1) NOT NULL DEFAULT '0',
  `damaged` tinyint(1) NOT NULL DEFAULT '0',
  `itemlost` tinyint(1) NOT NULL DEFAULT '0',
  `wthdrawn` tinyint(1) NOT NULL DEFAULT '0',
  `itemcallnumber` varchar(255) DEFAULT NULL,
  `issues` smallint(6) DEFAULT NULL,
  `renewals` smallint(6) DEFAULT NULL,
  `reserves` smallint(6) DEFAULT NULL,
  `restricted` tinyint(1) DEFAULT NULL,
  `itemnotes` mediumtext,
  `holdingbranch` varchar(10) DEFAULT NULL,
  `paidfor` mediumtext,
  `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `location` varchar(80) DEFAULT NULL,
  `permanent_location` varchar(80) DEFAULT NULL,
  `onloan` date DEFAULT NULL,
  `cn_source` varchar(10) DEFAULT NULL,
  `cn_sort` varchar(30) DEFAULT NULL,
  `ccode` varchar(10) DEFAULT NULL,
  `materials` varchar(10) DEFAULT NULL,
  `uri` varchar(255) DEFAULT NULL,
  `itype` varchar(10) DEFAULT NULL,
  `more_subfields_xml` longtext,
  `enumchron` text,
  `copynumber` varchar(32) DEFAULT NULL,
  `stocknumber` varchar(32) DEFAULT NULL,
  PRIMARY KEY (`itemnumber`),
  UNIQUE KEY `itembarcodeidx` (`barcode`),
  UNIQUE KEY `itemstocknumberidx` (`stocknumber`),
  KEY `itembinoidx` (`biblioitemnumber`),
  KEY `itembibnoidx` (`biblionumber`),
  KEY `homebranch` (`homebranch`),
  KEY `holdingbranch` (`holdingbranch`),
  CONSTRAINT `items_ibfk_1` FOREIGN KEY (`biblioitemnumber`) REFERENCES `biblioitems` (`biblioitemnumber`) ON DELETE CASCADE ON UPDATE CASCADE,
  CONSTRAINT `items_ibfk_2` FOREIGN KEY (`homebranch`) REFERENCES `branches` (`branchcode`) ON UPDATE CASCADE,
  CONSTRAINT `items_ibfk_3` FOREIGN KEY (`holdingbranch`) REFERENCES `branches` (`branchcode`) ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=151 DEFAULT CHARSET=utf8

* CONCLUSION

With the patch applied and -dedupbarcode there are no warnings about duplicate barcodes and the log file produced by -l seems to indicate that more items are added, but the items do not show up in the items table. Marking as failed QA. 

Also: the POD says the option is called -dedupbarcodes, but GetOptions is looking for dedupbarcode, without the s: 'dedupbarcode' => \$dedup_barcode,
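
(For reference, the mismatch looks roughly like this; only the quoted option line is from the script, the rest is reconstructed:)

use strict;
use warnings;
use Getopt::Long;

my $dedup_barcode;
GetOptions(
    # Registered without the trailing "s", so only -dedupbarcode is
    # accepted; the -dedupbarcodes spelling from the POD is rejected.
    'dedupbarcode' => \$dedup_barcode,
) or die "bad command line options\n";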
Comment 12 Robin Sheat 2011-10-12 03:39:21 UTC
I'm not sure what's changed, but now I'm seeing the same behaviour that you are.
Comment 13 Robin Sheat 2011-10-12 04:42:16 UTC Comment hidden (obsolete)
Comment 14 Paul Poulain 2011-10-24 11:38:17 UTC
Updating Version: This ENH will be for Koha 3.8
Comment 15 Paul Poulain 2011-10-25 15:05:53 UTC
Bug versioned for master. Entries will be made against rel_3_8 once the patch has been applied (see the thread about that on koha-devel yesterday).
Comment 16 Sophie MEYNIEUX 2012-02-03 13:00:38 UTC
I have made a sample file with only one item that has a duplicate barcode.

Without the patch, bulkmarcimport says "Item not added : duplicate barcode" and the item is not created.

With the patch and the -dedupbarcode parameter, bulkmarcimport does not report the duplicated barcode, and the item is created without a barcode, as expected.

The problem I have is that the item is created with its own notice rather than using the existing one. I don't know if I can sign off on the patch then.
Comment 17 Robin Sheat 2012-02-06 21:43:37 UTC
What do you mean by "created with its own notice"?
Comment 18 Robin Sheat 2012-02-07 05:16:46 UTC
Assuming you mean record, it's supposed to create its own item record. It's for when you're coming from a bad ILS that allows multiple barcodes and sometimes these are on totally unrelated records. So, it creates an item just like normal, but without the barcode.
Comment 19 Jared Camins-Esakov 2012-02-07 16:37:47 UTC
Sophie,

I think you may have confused this feature with record overlay, which is similar, but not what this is supposed to do. This is for when you specifically *do not* want to overlay records, but the library has items with duplicate barcodes (I have seen this). The behavior you describe is exactly what this is supposed to accomplish.

Regards,
Jared
Comment 20 Jared Camins-Esakov 2012-03-10 16:59:50 UTC
Created attachment 8146 [details] [review]
Bug 6199 - allow bulkmarkimport.pl to remove duplicate barcodes

This adds the -dedupbarcode option that allows bulkmarkimport to erase
a barcode but keep the item of any items it finds with duplicate
barcodes.

Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Comment 21 Paul Poulain 2012-03-21 15:27:06 UTC
QA comment:
I don't see why we have those lines:
+    # Clone record as it gets modified
+    $record = $record->clone();

Same for 
+    # We modify the record, so lets work on a clone so we don't change the
+    # original.
+    $record = $record->clone();

$record can already be modified before this patch. What am I missing?

Marking failed QA. Switch back to Signed Off if you have an explanation...
Comment 22 Robin Sheat 2012-03-21 20:18:10 UTC
(Just working from memory here, but) in the first case, ModBiblioMarc modifies the MARC record. This a) isn't really good for it to do (it's an undocumented side-effect), and b) impacts the process of re-adding if adding a record failed, as without this, the copy you have will be changed (and damaged.) In particular, it removes items that fail, so tweaking and readding them is impossible.

For the second one, I think the reasoning is similar, although I can't remember the details. But basically, the function modifies the MARC passed into it, which is bad.
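
(A minimal illustration of the clone pattern being defended; mutating_add() is a hypothetical stand-in for a callee like ModBiblioMarc that modifies the record it is given:)

use strict;
use warnings;
use MARC::Record;

# Stand-in for a routine that mutates its argument, e.g. by stripping
# item fields that failed to add.
sub mutating_add {
    my ($rec) = @_;
    my $item = $rec->field('952');
    $rec->delete_field($item) if $item;
    return 0;    # pretend the add failed
}

sub add_but_keep_original {
    my ($record) = @_;
    # Work on a clone so the caller's copy survives a failed add and
    # can be tweaked and re-added later.
    return mutating_add( $record->clone() );
}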
Comment 23 Paul Poulain 2012-03-28 15:54:43 UTC
(In reply to comment #22)
> (Just working from memory here, but) in the first case, ModBiblioMarc
> modifies the MARC record. This a) isn't really good for it to do (it's an
> undocumented side-effect), and b) impacts the process of re-adding if adding
> a record failed, as without this, the copy you have will be changed (and
> damaged.) In particular, it removes items that fail, so tweaking and
> readding them is impossible.
I don't understand why that's a problem, but I accept the argument and mark this passed QA, because it can't do any harm and doesn't change a lot of things.

My feeling was that, once you have read a record, you have a $record that you can do whatever you want with; it's not related to the MARC::Record in the file you're reading.
So $record or $record->clone() can be used the same way.
Comment 24 Jared Camins-Esakov 2012-12-31 00:49:38 UTC
There have been no further reports of problems, so I am marking this bug resolved.