Version 6 (modified 12 years ago)
Version 1 of the i2b2 pathology data integration load uses two SSIS packages, deployed on the UHL data warehouse server, to move data from the DWPATH database on that server to the i2b2 databases on the UHLSQLBRICCSDB\UHLBRICCSDB server. One package loads the production i2b2 database, the other the test database.
Packages
The two packages are identical except for the destination database and the set of patients that they extract.
| Environment | Package Name | Destination Database | Patient Set |
|---|---|---|---|
| Test | Pathology To I2B2.dtsx | i2b2_b1_data | Test patients from i2b2_b1_data |
| Production | Pathology To I2B2 APP03.dtsx | i2b2_app03_b1_data | Actual patients from i2b2_app03_b1_data |
Procedure
Both packages follow the same three steps, which use the DWBRICCS database on the UHL data warehouse server and the i2b2ClinDataIntegration database on the i2b2 server as intermediate databases.
1. Import I2B2 Patients to DWBRICCS
Runs the stored procedure USP_DWH_IMPORT_BRICCS_PATIENTS in the DWBRICCS database on the UHL data warehouse server, which copies patients from the destination database on the i2b2 database server into the DWBRICCS database via a linked server.
2. Delete Pathology from I2B2
Runs the stored procedure USP_DWH_DELETE_PATHOLOGY_FROM_I2B2 in the i2b2ClinDataIntegration database, which deletes all pathology data from the Observation_Fact table of the destination i2b2 database. These are identified as all records where the concept_cd begins with 'PAT:'.
3. Insert Pathology into I2B2
Runs the stored procedure USP_DWH_INSERT_PATHOLOGY_I2B2 in the i2b2ClinDataIntegration database, which loads data into the destination i2b2 database from the view UVW_BRICCS_PATHOLOGY_RESULTS in the DWBRICCS database on the UHL data warehouse server, accessed as a linked server.
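The delete-then-reload pattern of steps 2 and 3 can be sketched as follows. This is an illustrative sketch only: the real load runs the T-SQL stored procedures named above against SQL Server, whereas this example uses an in-memory SQLite table with a simplified, assumed Observation_Fact layout.

```python
import sqlite3

# Simplified stand-in for the i2b2 Observation_Fact table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Observation_Fact (
        patient_num INTEGER,
        concept_cd  TEXT,
        start_date  TEXT
    )
""")
conn.executemany(
    "INSERT INTO Observation_Fact VALUES (?, ?, ?)",
    [
        (1, "PAT:HB", "2012-01-01"),     # old pathology row: will be deleted
        (1, "ICD10:I21", "2012-01-01"),  # non-pathology row: untouched
    ],
)

# Step 2: delete all existing pathology facts, identified by the
# concept_cd prefix 'PAT:'.
conn.execute("DELETE FROM Observation_Fact WHERE concept_cd LIKE 'PAT:%'")

# Step 3: reload pathology facts from the source (here a literal
# stand-in for the UVW_BRICCS_PATHOLOGY_RESULTS view).
fresh_results = [(1, "PAT:HB", "2012-06-01")]
conn.executemany("INSERT INTO Observation_Fact VALUES (?, ?, ?)", fresh_results)

rows = conn.execute(
    "SELECT concept_cd, start_date FROM Observation_Fact ORDER BY concept_cd"
).fetchall()
print(rows)  # non-pathology row preserved, pathology row replaced
```

Because the delete is keyed purely on the concept code prefix, every run fully replaces the pathology facts while leaving all other observation types alone.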
Duplicate Processing
Version 1 of the data load identifies records as duplicates when they share the same patient, sample collection datetime and concept code. When a duplicate is identified, the most recent record is discarded. This is probably not correct, for several reasons:
- If there are more than two duplicates, it only discards one record, so a duplicate will remain.
- Common sense, and reason 1, suggest that it should be keeping the most recent record instead.
- There may be a better way to identify which record is correct: for example, checking whether the result has been suppressed (although result suppression alone will not solve the problem).
- Both records may be valid.
Paul Smalley has looked at the duplicates and thinks that most of them are not duplicates, as they have different specimen numbers. He also says that the received date and time should be unique for them, but that the sample time may be 'unknown' if the sample was not entered into an electronic system.
ACTION (RB): Check for duplicates with the same specimen number. Are all duplicates suppressed?
ACTION (RB): Check for sample date and time population.
SOLUTION: Would date and time received be a reasonable start date?
SOLUTION: Can we add the specimen number as a modifier? No: modifiers are for different aspects of the same observation, so the observation would have a modifier code of 'specimen number' and the actual unique number would be the value. The record would therefore still be a duplicate.
SOLUTION: Since we are reloading all the data every time, could we just add one second to the start date for every preceding duplicate?
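That last proposal could look something like the following. This is a sketch of the idea under discussion, not the implemented load, and the field layout is assumed: for each record sharing a duplicate key, offset the start date by one second per preceding duplicate, which is safe precisely because every run reloads all the data from scratch.

```python
from collections import defaultdict
from datetime import datetime, timedelta

rows = [
    (1, "2012-03-01 09:00:00", "PAT:HB"),
    (1, "2012-03-01 09:00:00", "PAT:HB"),
    (1, "2012-03-01 09:00:00", "PAT:HB"),
]

seen = defaultdict(int)  # duplicate key -> count seen so far
adjusted = []
for patient, start, concept in rows:
    key = (patient, start, concept)
    offset = seen[key]   # number of preceding duplicates of this key
    seen[key] += 1
    ts = datetime.strptime(start, "%Y-%m-%d %H:%M:%S") + timedelta(seconds=offset)
    adjusted.append((patient, ts.strftime("%Y-%m-%d %H:%M:%S"), concept))

print([t for _, t, _ in adjusted])
# ['2012-03-01 09:00:00', '2012-03-01 09:00:01', '2012-03-01 09:00:02']
```

In T-SQL the same effect could presumably be achieved with ROW_NUMBER() over the duplicate key, added as seconds to the start date, though that detail is speculation here.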