Dear Marcel,
I have used the Conquest suite on many different machines for over 10 years now, very successfully, and I cannot thank you enough for all the fantastic work.
As I have become more familiar with Conquest, my demands on it have grown in parallel, and I have now run into a few issues for which I hope you can recommend a good strategy:
I now have 400 persons in our research system, each with 1000-3000 images from 1-3 studies and 7-12 series; this is expected to increase to 4000-5000 persons once we go multicenter.
1. The series needed for research are automatically retrieved from our hospital PACS (or, in the future, from our global repository) using batch files generated by the research database, once a patient has consented to use of the series.
2. In a second step, batch files move the data to a pseudonymization instance, where they are modified by anonymize_script.lua, which generates new UIDs.
3. In a third step, the series are sent to the instance in our research system, where they are stored, backed up, and converted to NIfTI-formatted files for later use in whole-brain analysis tools. One batch file handles one command on one series.
So the flow is PACS -> CQ1_CLEARNAME -> CQ2_PSEUDO -> CQ3_RESEARCH (CQ1 and CQ2 reside on the same Windows 7 Pro 64-bit machine with 32 GB RAM and an E5 CPU, with an SSD as system drive and a 4 TB external USB3 HD).
Everything went well moving the series from CQ1 to CQ2. The pseudonymization step also went well for the first million or so images, but then SQLite started throwing errors about not being able to delete images from the database. Rerunning my batch scripts picked up the remaining images, but allocated different new UIDs to studies and series, so I now have a huge number of split studies and series.
I found the reason why SQLite threw the errors: even on an SSD, once the database grew above 1 GB, SQLite in multithreaded mode simply could not modify the entries fast enough, because they were still locked by the previous step. I have since moved CQ2_PSEUDO to MySQL; initial performance was slightly lower, but it picked up significantly beyond 1 million entries, and so far there have been no errors.
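In case the details matter, the MySQL-related entries in the CQ2_PSEUDO dicom.ini now look roughly like this (quoted from memory, so the exact key names may be slightly off; host, database name, and credentials are placeholders for my real values):

```ini
# dicom.ini excerpt for CQ2_PSEUDO -- illustrative values only
MySQL                    = 1
SQLHost                  = localhost
SQLServer                = conquest_pseudo
Username                 = conquest
Password                 = *****
DoubleBackSlashToDB      = 1
UseEscapeStringConstants = 1
```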
Now, to reduce error sources, I would like to change anonymize_script.lua into a pseudonymize_script.lua that looks up patient_id and patient_pseudonym in a list file while maintaining de-pseudonymization capability, so that any series sent to CQ2_PSEUDO using --moveseries/--movestudy/--movepatient is automatically pseudonymized correctly by an import filter, but can also be retrieved back.
I just cannot get my head around how to change the scripting to do this in 1.4.19b.
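For what it is worth, my current idea for pseudonymize_script.lua is sketched below. It is untested against 1.4.19b; the file name pseudolist.txt and its tab-separated id/pseudonym format are placeholders of mine, and I am assuming the reversible UID handling of the stock anonymize_script.lua can simply be kept alongside it:

```lua
-- pseudonymize_script.lua -- a sketch only, not tested on 1.4.19b.
-- Assumes a tab-separated mapping file "pseudolist.txt" (my placeholder
-- name and format) with one line per patient:
--   <patient_id><TAB><patient_pseudonym>

-- Read the mapping file into a Lua table keyed by patient_id.
local function load_map(filename)
  local map = {}
  local f = io.open(filename, "r")
  if f == nil then return map end
  for line in f:lines() do
    local id, pseudo = string.match(line, "^([^\t]+)\t([^\t\r\n]+)")
    if id ~= nil then map[id] = pseudo end
  end
  f:close()
  return map
end

local map = load_map("pseudolist.txt")

-- Data is the incoming DICOM object as exposed to Conquest Lua converters.
local pseudo = map[Data.PatientID]
if pseudo ~= nil then
  Data.PatientID   = pseudo
  Data.PatientName = pseudo
end

-- UID handling: I would keep the new-UID generation from the stock
-- anonymize_script.lua here unchanged, since (as I understand it) it
-- maps UIDs consistently and reversibly via the database.
```

The open questions are then how to hook this in as an import filter so it also fires for --moveseries/--movestudy/--movepatient, and how a retrieval back through CQ2_PSEUDO would apply the reverse mapping.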
If you find the time, I would appreciate your help.
Regards,
Julian