Posts by Echinos

    When I use the sample HL7 file that comes with the Conquest install, it doesn't populate the Study UID field in the worklist (I used a DBF viewer to look).
    There is a field with a Study UID in the sample file, but it doesn't show up in the worklist. I have put test data all through the HL7 file, and I can't find the field that maps to the Study UID field.


    Is there a field that does? Is there another way I could get a Study UID in there? I need to query the worklist from two locations, and the UID needs to be the same in both; one source is the dicomized paperwork such as requisitions, worksheets, etc.
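

    For anyone else hunting for the field, here is a rough Python sketch that prints the segment/field position of anything UID-shaped in the file (the file name is a placeholder). If the feed followed the IHE Scheduled Workflow convention I'd expect the Study Instance UID in ZDS-1, but I don't know whether Conquest's HL7 mapping reads that segment.


    import re

    # Print the segment and field position of every UID-shaped value in
    # an HL7 v2 file. "sample.hl7" is a placeholder name. Requiring at
    # least three dotted components avoids matching plain decimals.
    UID_RE = re.compile(r'^\d+(\.\d+){2,}\.?$')

    with open('sample.hl7') as f:
        raw = f.read()

    for segment in re.split(r'[\r\n]+', raw):
        fields = segment.split('|')
        for i, field in enumerate(fields[1:], start=1):
            for j, comp in enumerate(field.split('^'), start=1):
                if UID_RE.match(comp):
                    print(f'{fields[0]}-{i}.{j}: {comp}')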


    Thanks!

    Could I also use an ImportConverter to check whether the last character is a period, and then append a couple of characters to the UID to repair it?


    I have written scripts that do this using dcmtk, and it works, but I'd like to be able to just send images to Conquest and have it repair them if they are invalid.


    I'd rather not generate new UIDs when the existing UID is valid, as that would cause other issues - maybe I can use newuids only if the last character is a period?


    Edit: It seems I can. This should work great - thanks Marcel.
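

    For reference, the rule I have in mind, sketched in Python (the appended "999" component is an arbitrary example - anything that turns the trailing period into a valid component would do):


    import re

    # A valid DICOM UID is dot-separated numeric components, none with a
    # leading zero (a lone "0" is fine), at most 64 characters total.
    VALID_UID = re.compile(r'^(0|[1-9]\d*)(\.(0|[1-9]\d*))*$')

    def is_valid_uid(uid):
        return len(uid) <= 64 and bool(VALID_UID.match(uid))

    def repair_uid(uid):
        # Only touch UIDs that are valid except for a trailing period;
        # anything already valid passes through untouched.
        if uid.endswith('.') and is_valid_uid(uid[:-1]):
            return uid + '999'
        return uid

    assert repair_uid('1.2.840.12345.') == '1.2.840.12345.999'
    assert repair_uid('1.2.840.12345') == '1.2.840.12345'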

    Just for the sake of completeness, I have discovered that, for some reason, the Study UIDs on the cases that cause this error end in a period, which of course is a Bad Thing(tm). Currently I am blaming it on the (non-Conquest) server I moved the images from. I'm going to write a script that uses dcmtk's dcmodify command to generate new, clean Study UIDs.
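

    Something along these lines is what I have in mind - pydicom to find the broken files and dcmtk's dcmodify to rewrite them (the root path is made up). The thing to be careful of is that every file in a study has to receive the same new UID, hence the per-study map; and since the files change behind Conquest's back, the database will presumably need a regeneration afterwards.


    import os
    import subprocess
    import pydicom
    from pydicom.uid import generate_uid

    # Map each broken Study UID to ONE replacement so files stay grouped.
    new_uids = {}

    for dirpath, _, filenames in os.walk(r'Y:\archive'):  # placeholder root
        for name in filenames:
            if not name.lower().endswith('.dcm'):
                continue
            path = os.path.join(dirpath, name)
            ds = pydicom.dcmread(path, stop_before_pixels=True)
            uid = str(ds.StudyInstanceUID)
            if not uid.endswith('.'):
                continue
            new_uid = new_uids.setdefault(uid, generate_uid())
            # -nb skips .bak backups; -m rewrites the tag in place
            subprocess.run(['dcmodify', '-nb',
                            '-m', f'(0020,000d)={new_uid}', path],
                           check=True)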

    I've done yet more investigation, and it turns out that:


    Status: a900 Status Information:-
    Data Set does not match SOP Class (Failure)


    This error is from another device that uses the dcmtk library and has the same images on it. The problem didn't originate on the Conquest server, so I may have to look elsewhere for a solution - but is there any way I can work around this problem in Conquest?
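

    For what it's worth, one thing that seems worth checking is whether the file meta header disagrees with the data set itself, since the a900 status is the SCP complaining that the data set doesn't match the SOP class it was told to expect. A rough pydicom sketch (the file name is made up):


    import pydicom

    ds = pydicom.dcmread('suspect.dcm', stop_before_pixels=True)
    meta = ds.file_meta

    # Compare the SOP Class / SOP Instance UIDs in the meta header
    # against the ones inside the data set proper.
    checks = [
        ('SOP Class', meta.MediaStorageSOPClassUID, ds.SOPClassUID),
        ('SOP Instance', meta.MediaStorageSOPInstanceUID, ds.SOPInstanceUID),
    ]
    for label, meta_uid, ds_uid in checks:
        status = 'OK' if meta_uid == ds_uid else 'MISMATCH'
        print(f'{label}: meta={meta_uid} dataset={ds_uid} [{status}]')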

    I did a little testing, and it seems that the affected time period runs from around the beginning of 2008 until April-ish 2009. There have been a couple of instances where one modality will transfer from a given day but another won't - for example, CT transfers but CR doesn't.

    I have an issue sending some of the images from our Conquest server. It seems like images dated 2008 or newer don't want to go anywhere, but older cases have no issues being pushed or pulled. It might not be specifically January 1st, 2008 that is the dividing line; it just seems that 2008 and newer generally don't transfer.


    Other DICOM devices that query the Conquest server get a "C004" error, and when I try to push from Conquest, I see the following:


    Query Distinct Tables: DICOMImages, DICOMSeries, DICOMStudies


    Columns : DICOMImages.SOPClassUI, DICOMImages.SOPInstanc, DICOMSeries.Modality, DICOMSeries.SeriesDesc, DICOMSeries.SeriesInst, DICOMSeries.SeriesNumb, DICOMStudies.StudyDate, DICOMStudies.PatientNam, DICOMStudies.PatientID, DICOMStudies.StudyInsta,DICOMImages.ObjectFile,DICOMImages.DeviceName


    Where : DICOMStudies.StudyDate = '20080112' and DICOMStudies.PatientID = '81499' and DICOMSeries.StudyInsta = DICOMStudies.StudyInsta and DICOMImages.SeriesInst = DICOMSeries.SeriesInst


    Order : (null)
    Records = 3
    Number of images to send: 3
    MyPatientRootRetrieveGeneric :: RetrieveOn
    Locating file:MAG2 81499\1.2.392.200036.9125.3.69122024135.64542514381.12929284_1001_001001_12628080750c27.dcm
    Locating file:MAG2 81499\1.2.392.200036.9125.3.69122024135.64542514382.12929285_1002_001002_12628080770c28.dcm
    Locating file:MAG2 81499\1.2.392.200036.9125.3.69122024135.64542514382.12929286_1003_001003_12628080790c29.dcm
    Sending file : Y:\81499\1.2.392.200036.9125.3.69122024135.64542514381.12929284_1001_001001_12628080750c27.dcm
    Image Loaded from Read Ahead Thread, returning TRUE
    Retrieve: remote connection dropped after 0 images, 3 not sent
    C-Move (PatientRoot)
    UPACS THREAD 199766: ENDED AT: Mon May 17 12:37:42 2010
    UPACS THREAD 199766: TOTAL RUNNING TIME: 4 SECONDS

    Marcel - I do need the data to be clean after a move, so thanks for that. I have been considering the dirty data / clean data separation option; it may be what I need to do.


    Mattb - Thanks for the info; I'd certainly appreciate the VB project if you can find it. What you've explained is what I'll have to do... Wrong data is going to cause problems on another device that the data is being sent to. There are probably over 10,000 records that will really need to be corrected, so it'll be done programmatically or not at all. :)


    I'll start looking into how to use dgate to clean up the data. Many thanks.


    Brian

    I have a Conquest server with almost 250,000 studies in it, or around 10TB of data. There are thousands of cases that have no accession number, or where the accession number is 0. And because of a common (but bad) practice in the past, there are also thousands of duplicate accession numbers.


    There are also other fields, such as the patient birth date, that we would like to clean up.


    It seems that the easiest way to accomplish this would be to dump the data that we are missing from the RIS into a CSV, make a new table and import it, and then update the DICOMStudies and DICOMPatients tables.


    Are there any caveats to doing this? The system is still receiving new cases.


    I suppose I could always regenerate the database if I mess it up, but of course, a regen would take quite a while.


    There are some fields that appear in two tables - will it cause weird behaviour if I only update one of them? I assume it would just mean that the results differ depending on whether you are doing a patient-level or study-level query.
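

    For the record, this is the shape of what I have in mind, assuming MySQL, a hypothetical ris_fix staging table loaded from the CSV, and Conquest's usual truncated column names (AccessionN, PatientBir, StudyInsta) - all worth double-checking against the actual schema first:


    import pymysql

    # ris_fix is a hypothetical staging table loaded from the RIS CSV,
    # with columns patientid, accessionnum, birthdate, studyuid.
    conn = pymysql.connect(host='localhost', user='conquest',
                           password='...', database='conquest')  # placeholder credentials
    with conn.cursor() as cur:
        # Accession number is study-level, keyed on Study Instance UID
        cur.execute("""
            UPDATE DICOMStudies s
            JOIN ris_fix r ON r.studyuid = s.StudyInsta
            SET s.AccessionN = r.accessionnum
        """)
        # Birth date lives in BOTH tables; update both so patient-level
        # and study-level queries agree
        cur.execute("""
            UPDATE DICOMPatients p
            JOIN ris_fix r ON r.patientid = p.PatientID
            SET p.PatientBir = r.birthdate
        """)
        cur.execute("""
            UPDATE DICOMStudies s
            JOIN ris_fix r ON r.patientid = s.PatientID
            SET s.PatientBir = r.birthdate
        """)
    conn.commit()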


    Brian

    Yes, I have some tablet PCs I would like to use KPACS on, but they are used in portrait mode, and KPACS won't run at that resolution.


    Is this possible?

    Also, it seems that the best way to do this would be to uninstall Conquest, re-install after I have the SQL database in place, and then re-initialize the database with the existing data... sound good?


    Is there any issue with using MySQL 5.1?

    The main thing I'm trying to do is just speed things up. The server isn't crashing, although I do sometimes get "out of memory" errors.


    The error is probably because I haven't increased the dicom.ini setting for more images - the image count is around a million. After I migrate, that should disappear. ;)


    I have more modalities coming online, and I am going to be migrating data from another non-Conquest PACS to this one, so I just want to try to help it handle the incoming data faster.


    The other thing is that the server is starting to accumulate a lot of data, and once in a while it re-indexes by itself (maybe after the "out of memory" errors?). There's nearly a terabyte on there so far, so indexing takes a while, and it makes waves: the modalities start reporting errors, and I start getting calls. :)


    Brian

    I also just tried exporting the .dcm files onto a CD from the modality and importing them into KPACS. The import failed. No more info than that - even at debug level, the log just said that it failed.


    Time to go home for the weekend.

    I get the following when transferring a particular mammo series to KPACS from Conquest:


    DIMSE Warning: (ARCHIVE,TECH): DIMSE_receiveDataSetInMemory: dset->read() Failed (Illegal Call, perhaps wrong parameters)
    storescp: Store SCP Failed:
    0006:020d DIMSE Failed to receive message
    storescp: DIMSE Failure (aborting association)



    It happens on multiple KPACS workstations, and I've only encountered it with this series so far. I deleted the series from Conquest and re-sent from the modality, and I get the same problem. So, my Best Guess(tm) is that there is something particular about this series that's causing the issue - but what?
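

    In the meantime, a rough way to compare this series against one that sends fine is to dump the transfer syntax and basic pixel geometry for each file (pydicom sketch; the directory name is made up):


    import os
    import pydicom

    # Print the transfer syntax, SOP class, and pixel geometry for each
    # file in the suspect series; run the same thing on a good series
    # and diff the output.
    for name in sorted(os.listdir('mammo_series')):
        ds = pydicom.dcmread(os.path.join('mammo_series', name),
                             stop_before_pixels=True)
        print(name,
              ds.file_meta.TransferSyntaxUID,
              ds.get('SOPClassUID', '?'),
              ds.get('Rows', '?'), ds.get('Columns', '?'),
              ds.get('BitsAllocated', '?'))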


    Regards,


    Brian

    I have a server downtime coming up, and I'm thinking of changing the database backend on the Conquest server to something other than the default DBaseIII backend.


    I am assuming that a real SQL backend will be the best choice for this - any comments, suggestions, or caveats?


    Regards,


    Brian