Posts by blub_smile

    @radtreveller
    That decision sounds reasonable, although we went with compression and v2 to keep the server tower as small as possible while still keeping the option of growing to up to 4TB in the future.
    By the way, what stripe size did you choose for the RAID?
    greetz

    Hi Marcel!


    Right now I have cleared the DB again in order to reproduce the error again.


    Here is what I can say for now:
    A)
    1) I start with an empty Conquest archive and DB
    2) I copy a dozen studies to Conquest using eFilm
    3) then I copy the studies of patient 040544 (separately)
    - no error, everything works


    B)
    1) I start with an empty Conquest archive and DB
    2) I copy about 20 to 30 studies at a time (312 in total) to Conquest using eFilm
    3) during that copy operation, at some point, patient 040544 is sent to Conquest
    - the error occurs; so far I have only been able to reproduce it once!


    I will try it again tomorrow.


    I get the impression that the heavier the load on the CPU, or maybe on Conquest itself, the more likely an error becomes!?


    C) eFilm always copies multiple studies at once - maybe this could cause problems?
    What if eFilm sends one image from study 1 and one image from study 2 of patient 040544 at the same time - would the entry be "locked" in this situation?
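    For illustration, here is a minimal sketch of the kind of check-then-insert race that would produce exactly a "Duplicate entry" error when two images of the same new patient arrive at the same time. SQLite stands in for MySQL here, and the assumption that the server does an existence check before inserting is mine - the table and column names are just taken from the log:

    ```python
    import sqlite3

    # Hypothetical miniature of the DICOMPatients table: PatientID is the
    # primary key, matching the "Duplicate entry ... for key 1" message.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE DICOMPatients (PatientID TEXT PRIMARY KEY, PatientNam TEXT)")

    def patient_exists(pid):
        """Check whether a patient row already exists."""
        return con.execute(
            "SELECT 1 FROM DICOMPatients WHERE PatientID = ?", (pid,)
        ).fetchone() is not None

    # Two images of the same patient arrive at (almost) the same time.
    # Both handlers run the existence check BEFORE either has inserted:
    seen_by_handler_1 = patient_exists("040544")   # False
    seen_by_handler_2 = patient_exists("040544")   # False

    # Handler 1 inserts first and succeeds ...
    con.execute("INSERT INTO DICOMPatients VALUES ('040544', 'ABC')")

    # ... handler 2 still believes the patient is new and inserts too:
    try:
        con.execute("INSERT INTO DICOMPatients VALUES ('040544', 'ABC')")
    except sqlite3.IntegrityError as e:
        # SQLite's equivalent of MySQL's "Duplicate entry '040544' for key 1"
        print(e)
    ```

    Under heavy load the window between check and insert gets wider, which would fit the observation that the error is easier to trigger when many studies are sent at once.
    
    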


    D) Is there any way to keep the debug level permanently switched on? That would make it possible to run it on our real server with Conquest in the background. I hope to catch more of these errors that way, because they are somehow hard to reproduce.


    greetz
    Stephan

    Hi!
    If all your data is already on the HD where you want Conquest to store it, you can point the MAG0 path in the dicom.ini to that path and just re-initialize the DB with the Conquest user interface.
    If you do this, the already stored files will NOT be modified according to your file syntax setting - this setting only affects incoming files in the future, for example those sent to Conquest by your CT / MRI scanner.


    If you want your files stored according to your desired file syntax, you have to send them to Conquest via a DICOM client, e.g. KPACS, eFilm etc.


    For moving files in the future:
    Just copy the archive directories to your new drive and edit the MAG0 device path in the dicom.ini according to the new storage path - that's all :-)
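    For illustration, the relevant dicom.ini lines could look like this (key names as in a standard Conquest dicom.ini; the path is of course just an example):

    ```ini
    # Example only: point MAG0 at the folder that already contains the files,
    # then re-initialize the database from the Conquest GUI.
    MAGDevices = 1
    MAGDevice0 = E:\dicom_archive\
    ```
    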



    Regarding compatibility I am not sure, but that only matters if you want to read the files directly from the folders with other programs.
    Settings 8 and 9 should be OK, I think.
    greetz


    PS: what kind of program did you use to copy the 800GB of files?

    Hi!


    I had quite a hard time reproducing the SQL error.
    After moving almost 350 studies I finally got one SQL error message.
    It happened with a patient who was examined twice in two days, so the database entries in the table DICOMPatients already existed.
    I was, however, not able to reproduce the error with this patient a second time, which is why I think it is some random error.
    The log file reads as follows (debug level 4):


    PACS trouble:
    20070212 11:20:48 ***Failed MYSQLExec : INSERT INTO DICOMPatients (PatientID, PatientNam, PatientBir, AccessTime) VALUES ('040544', ABC, '19440504', 1171275616)
    20070212 11:20:51 ***Error: Duplicate entry '040544' for key 1




    20070212 11:21:06 ***SQL: INSERT INTO DICOMPatients (PatientID, PatientNam, PatientBir, AccessTime) VALUES ('040544', 'ABC, '19440504', 1171275616)
    20070212 11:21:07 ***Error: Duplicate entry '040544' for key 1
    20070212 11:21:09 ***Error: 0: :


    20070212 11:21:13 ***Error saving to SQL: 040544\1.2.840.113619.2.135.3596.3078814.7796.1170078362.911\1.2.840.113619.2.135.3596.3078814.7709.1170078713.951\1.2.840.113619.2.135.3596.3078814.7709.1170078714.17.v2


    Server log:
    12.02.2007 11:21:21 [PACS] Add to Table: DICOMSeries
    12.02.2007 11:21:21 [PACS] Columns: SeriesInst, SeriesNumb, SeriesDate, SeriesTime, SeriesDesc, Modality, PatientPos, Manufactur, ModelName, ProtocolNa, FrameOfRef, SeriesPat, StudyInsta, AccessTime
    12.02.2007 11:21:21 [PACS] Values: '1.2.840.113619.2.135.3596.3078814.7709.1170078717.781', '2', '20070210', '112355', 'T2 SAG frFSE', 'MR', 'HFS', 'GE MEDICAL SYSTEMS', 'SIGNA EXCITE', '060155/', '1.2.840.113619.2.135.3596.3078814.7796.1170078362.935', '060155', '1.2.840.113619.2.135.3596.3078814.7796.1170078362.936', 1171275680
    12.02.2007 11:21:21 [PACS] ***Error saving to SQL: 040544\1.2.840.113619.2.135.3596.3078814.7796.1170078362.911\1.2.840.113619.2.135.3596.3078814.7709.1170078713.951\1.2.840.113619.2.135.3596.3078814.7709.1170078714.17.v2
    12.02.2007 11:21:21 [PACS] Server Command := 0001
    12.02.2007 11:21:21 [PACS] 0000,0002 26 UI AffectedSOPClassUID "1.2.840.10008.5.1.4.1.1.4"
    12.02.2007 11:21:21 [PACS] 0000,0100 2 US CommandField 1
    12.02.2007 11:21:21 [PACS] 0000,0110 2 US MessageID 3597
    12.02.2007 11:21:21 [PACS] 0000,0700 2 US Priority 0
    12.02.2007 11:21:21 [PACS] 0000,0800 2 US DataSetType 0
    12.02.2007 11:21:21 [PACS] 0000,1000 54 UI AffectedSOPInstanceU "1.2.840.113619.2.135.3596.3078814.7709.1170078704.551"
    12.02.2007 11:21:21 [PACS] Server Command := 0001
    12.02.2007 11:21:21 [PACS] 0000,0002 26 UI AffectedSOPClassUID "1.2.840.10008.5.1.4.1.1.2"
    12.02.2007 11:21:21 [PACS] 0000,0100 2 US CommandField 1
    12.02.2007 11:21:21 [PACS] 0000,0110 2 US MessageID 3600
    12.02.2007 11:21:21 [PACS] 0000,0700 2 US Priority 0
    12.02.2007 11:21:21 [PACS] 0000,0800 2 US DataSetType 0
    12.02.2007 11:21:21 [PACS] 0000,1000 50 UI AffectedSOPInstanceU "1.2.840.113619.2.5.1762864713.5114.1168260429.843"
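    For what it's worth, the classic way to make such an insert tolerant of a duplicate key is to treat the error as "row already exists" and fall back to an update (MySQL itself offers INSERT ... ON DUPLICATE KEY UPDATE for this). A sketch of the pattern, with SQLite standing in for MySQL and the column list shortened - this is just an illustration, not a claim about how Conquest is implemented:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE DICOMPatients (PatientID TEXT PRIMARY KEY, "
                "PatientNam TEXT, AccessTime INTEGER)")

    def upsert_patient(pid, name, access_time):
        """Insert the patient; if the row already exists (duplicate key),
        refresh it instead of failing."""
        try:
            con.execute("INSERT INTO DICOMPatients VALUES (?, ?, ?)",
                        (pid, name, access_time))
        except sqlite3.IntegrityError:
            # Another image of the same patient got here first - update instead.
            con.execute("UPDATE DICOMPatients SET PatientNam = ?, AccessTime = ? "
                        "WHERE PatientID = ?", (name, access_time, pid))

    upsert_patient("040544", "ABC", 1171275616)
    upsert_patient("040544", "ABC", 1171275680)   # second arrival: no error
    row = con.execute("SELECT AccessTime FROM DICOMPatients "
                      "WHERE PatientID = '040544'").fetchone()
    ```

    With this pattern the second arrival of the same patient simply refreshes the row, which matches the observation that the images themselves were all stored correctly despite the error.
    
    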

    Hi!
    I encountered the same SQL error several times before.
    It really seems to happen randomly, and in my case only with images from the GE MRI machine.
    I noticed an increased frequency when updating from 1.4.11 to 1.4.12.
    Since version 1.4.12c I have not found this message in the logs again.
    However, 1.4.12c has only been running for a few days now.
    On one day we had about 100 SQL error entries within just a few minutes; the SQL server itself ran smoothly during that time.
    I managed to identify a couple of the studies causing this error message and tried to reproduce it, but so far I have not been successful.
    However, I found that all images of these studies were stored correctly.
    I might have a theory about this:
    maybe this error only occurs when Conquest is under heavy load.
    I get this idea because we send all our MRI studies from an eFilm machine to Conquest at the end of the day, and eFilm sends a few studies at a time instead of one by one.
    greetz

    Hi!
    The manual itself is quite long at 147 pages.
    What do you think of making two different PDF files: one for the DICOM conformance statement and appendix 1 with all changes and new features, and the other with "just" the user's guide / manual and appendix 2?
    I think that would make it easier to navigate within the user's guide.
    The way it is now, there is a lot of information that many users do not need, or that at least confuses them in the beginning.
    greetz

    Hi!
    Yesterday I tried to delete a few patients from the archive.


    However, Conquest was not able to delete anything - neither patient, study nor series.
    At home I was able to reproduce this behaviour on my notebook: Conquest states the job was done, but in reality nothing ever changed.
    The serverstatus log shows no error, and neither does pacstrouble;
    just the message "deleting database entry ABC, deleting from gui." is written to the log.
    But all files remain on disk and the database keeps all its entries.


    Anyone else having this problem!?


    Conquest 1.4.12b
    MySQL 5.0.27


    greetz
    Stephan

    Hi Buschranger


    You have to copy libmysql.dll from the MySQL installation folder into the server directory.
    When you then first launch the server interface for installation, the MySQL option will be available for selection.
    I think you will benefit from SQL more the bigger your archive gets, and you will be able to perform more flexible searches - I think BDE only saves patient IDs
    (please correct me if I'm wrong).


    For MySQL I can recommend using MySQL Administrator for setup; it is very easy to use and offers many options for maintenance.
    greetz Stephan

    Hi!
    Thank you very much for this quick answer!
    I think this would be the corresponding birthdate error:


    ***Inconsistent PatientBir in DICOMPatients: PatientID = '171086' PatientID = '171086', Old='19861010', New='19861017'


    greetz
    blub

    Hi!
    We very often get error messages in the PACSerror logfile similar to this one:


    Inconsistent PatientNam in DICOMPatients: PatientID = '290132' PatientID = '290132', Old='AAAAA', New='BBBBB'


    When I search the SQL database I can only find patient A or B, but never both - yet on queries from eFilm both show up :-).


    I do understand what it means and why it comes up (we always use the DOB as the Patient ID), but I am not sure how Conquest handles such an error and whether it is important.
    So it would be very nice if someone could tell me how Conquest handles certain IDs/fields when they already exist, and whether there are IDs that should never be the same.
    thanks
    blub

    Hi, web based would be very nice.


    Useful would be:
    - study / patient management: a page where the admin can run queries, select patients/studies, and has options to delete / move / rename etc.
    An option to list studies from a certain period would also be very interesting, for example to delete all studies from 2004.
    - a page to manage the acrnema map
    - edit dicom.ini from the web browser
    - the main maintenance options, like rebuild, restart etc.


    ...that's what comes to my mind so far...


    I cannot be of any help in development, but I can offer help with testing :-)

    Hi!


    Well, the next time I need the program I will have about 4 million files to transfer :-). I hope it will work.
    But I will test it in a few weeks with only one million; if it completes that job I am confident it will work with even more.
    greetz

    Bushranger


    Mhh.. I have used Nero for amounts up to 500GB; I am pretty sure it was not the size but the total number of files (660k) that was the problem.
    But you are right, I will not use it again next time :-).


    Bookworm
    I just took a quick look at the program - sounds promising. I will try it out when I have some time left :-).
    By the way do you know how many files you copied?

    @buschranger


    Yes, this is exactly what I did :-).
    I just had software problems during this copy operation - Nero BackItUp and even Windows Explorer crashed/stopped twice - so it was a little difficult to manage.
    So my question was merely whether there are other, more reliable possibilities
    for such a file copy operation (maybe special programs etc.), especially when there are TBs of data to be copied in the future.
    greetz

    The 360GB were stored on a 500GB drive, so I just hooked it up to the new PC and started copying.
    Well, when I find the time I might try to copy our archive over Ethernet to check that out; maybe that is more sophisticated than copying it with Windows Explorer :-)

    About 8 weeks ago we moved the Conquest server from our test installation to a more powerful workstation.
    By that time we had gathered about 360GB in 660k files.
    I found it really difficult to move all the files to their new location on the 2TB RAID 5.
    Nero BackItUp 2.7 showed an error message at around half of the job stating that there wasn't enough space on the destination drive - lol - and Windows Explorer crashed two times.
    Overall this operation took me two and a half days to complete - ouch!
    Since we will most likely run out of space in about 18 months, I am already getting a headache thinking about it :-) - by then we will have about 5 million files. (We will most likely build a more powerful, and then dedicated, server.)


    I am very interested in any suggestion / solution for copying such an archive, maybe even with a verify option that will not take weeks to complete :-)
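    For what it's worth, a copy-plus-verify pass is simple to script; here is a minimal sketch in Python (paths and the choice of SHA-256 are just examples, and for millions of files you would probably want logging and resume support on top):

    ```python
    import hashlib
    import os
    import shutil

    def sha256_of(path, bufsize=1 << 20):
        """Stream a file through SHA-256 so even multi-GB files fit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def copy_and_verify(src_root, dst_root):
        """Mirror src_root into dst_root, checksumming every file after copying.

        Returns the list of source files whose copy did not verify.
        """
        mismatches = []
        for dirpath, _dirnames, filenames in os.walk(src_root):
            rel = os.path.relpath(dirpath, src_root)
            target_dir = os.path.join(dst_root, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in filenames:
                src = os.path.join(dirpath, name)
                dst = os.path.join(target_dir, name)
                shutil.copy2(src, dst)  # copy2 keeps file timestamps
                if sha256_of(src) != sha256_of(dst):
                    mismatches.append(src)
        return mismatches
    ```

    Reading every file twice roughly doubles the I/O, so it is still slow for TBs of data, but unlike a plain Explorer copy it tells you exactly which files did not arrive intact.
    
    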


    greetz