Posts by akanarya

    Thanks Marcel,


    Generally there are no error messages, but on the last try it deleted a few patients, then printed a "can not delete image x" message for one study and cancelled out without warning (a cancel/break message would be good). I checked that study: it had copied all 102 images to the jukebox successfully, but deleted only 3 of them from the source before breaking off the process. Maybe there is a collision between simultaneous read/write requests to the same disk, I don't know.


    So what should I do?


    - Delete the studies by hand? If so, should I query the database to find out which images are labelled as being on the jukebox, or simply compare the source and target disks and delete the source copies?
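    For reference, a query along these lines is what I have in mind (an untested sketch, assuming PostgreSQL with the psycopg2 driver and the stock DICOMImages table with its DeviceName and ObjectFile columns; 'JUKEBOX%' is a placeholder device-name pattern):

        import psycopg2  # PostgreSQL driver; connection settings are placeholders

        conn = psycopg2.connect(dbname="conquest", user="postgres", password="secret")
        cur = conn.cursor()
        # List the images whose database record already points at the jukebox device.
        cur.execute("SELECT DeviceName, ObjectFile FROM DICOMImages "
                    "WHERE DeviceName LIKE 'JUKEBOX%'")
        for devicename, objectfile in cur.fetchall():
            print(devicename, objectfile)
        cur.close()
        conn.close()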


    If I repeat movedatatodevice without selectlruforarchival, what does it do?
    Does it re-copy, re-compare and delete? What is the process?


    One more question: is there any limit on the size of a movedata process, i.e. can I move 1 TB at once?

    Hi,


    I have been using Conquest for over 2 years in my hospital; thanks again to the developers.


    Since we reached our online-storage limit, I have begun moving some of my data to offline disks (a jukebox) to free up space on the online disks.
    I use the --selectlruforarchival and --movedatatodevice commands successively.


    The problem is that I select the data (say 25 GB), then dgate copies and compares, but when the delete operation is invoked by dgate after the compare, it does not work as it should.
    I tested with 1 GB of data and it worked perfectly,
    then tried 10 GB of data and it worked,
    but for 25 GB it did not delete the original images after the select, copy and compare.
    Now I have the images on the server disks, but the database dicomimages table points to the jukebox device (which is the correct state, of course; it should do so, because they were copied).
    So how can I find and delete the original images that the database says are on the jukebox?
    Is there an option in dgate?
    I can query the database and then delete the studies by hand, but that is hard for 25 GB of data. :)
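    To be concrete, this is roughly what I would script if I have to do it myself (an untested sketch; the device roots and connection settings are placeholders, and it assumes ObjectFile holds the path relative to the device root):

        import os
        import psycopg2

        MAG_ROOT = "/conquest/data/"   # placeholder: root of the source MAG device
        JUKE_ROOT = "/mnt/jukebox/"    # placeholder: root of the jukebox device

        conn = psycopg2.connect(dbname="conquest", user="postgres", password="secret")
        cur = conn.cursor()
        cur.execute("SELECT ObjectFile FROM DICOMImages "
                    "WHERE DeviceName LIKE 'JUKEBOX%'")
        for (objectfile,) in cur.fetchall():
            src = os.path.join(MAG_ROOT, objectfile)
            dst = os.path.join(JUKE_ROOT, objectfile)
            # Only delete the leftover source file if the jukebox copy really exists.
            if os.path.isfile(src) and os.path.isfile(dst):
                os.remove(src)
        cur.close()
        conn.close()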


    I use 1.4.13 on Linux.


    Thanks, Ali


    PS: the problem may arise because I run archiving during prime time (in between heavy send/receive transfers, while the disks are in use) :?:

    Hi Conquest users,


    I am opening this topic to collect data from you about Conquest installations working in hospital/clinic environments,
    so that people researching PACS systems can easily use Conquest as a reference.


    I begin with myself:


    I have run Conquest for a total of two years in our hospital in Turkey. The first half year was for testing, the following 1.5 years in production.
    3 modalities are connected (MR, CT, CR).
    We have 2x Sun Fire Intel Xeon servers with a Sun Storage Array (SAN).
    Each server has a single quad-core Xeon processor with 4 GB RAM and 2x146 GB SAS disks. The disks are in RAID 1.
    The SAN has 6x1 TB SATA disks, combined in RAID 1, which gives 3 TB of total archiving capacity.
    The servers run SUSE Linux Enterprise Server 10.
    The database is PostgreSQL. I chose pgcluster to replicate the database.
    Clustering is done with the SUSE heartbeat tool.
    Images are stored on the storage array. The database is stored on the server disks and replicated between the two servers synchronously.
    The Conquest version is 1.4.13.
    The viewer is K-PACS.


    We now have:
    100,883 studies
    1,944 GB of images
    2.9 GB DB size
    Querying the whole study list takes 59 seconds.


    Because of time constraints, this is all the information I could think of.
    Any additions are welcome, and your own data is appreciated.


    Ali Kanarya

    Good news, the problem is fixed. I started the kpserver with the "--fork" parameter and then it worked.
    Anyway, any other recommendations or tuning tips for the cluster are welcome.
    Ali

    Hi,


    I have been using the Conquest server for approximately 4 months in my hospital.
    We are at the milestone of migrating our simple server to a 2-node cluster.
    Nowadays I am doing some experiments and settings on the cluster servers.
    I installed SUSE Linux Enterprise and configured clustering (high availability) on them.
    Then I installed Conquest on both of them, each with a separate PostgreSQL database.
    Since I am still experimenting, both machines' configurations, paths and Conquest
    databases are completely identical.


    The cluster works well: when one node fails, the other node handles the retrieve requests from
    K-PACS.


    But when an active-node failure occurs while a patient's data is being fetched from K-PACS,
    K-PACS locks up.


    I have to restart the K-PACS GUI and server on my Windows client.
    Then K-PACS works, and the other (migrated-to) node responds.


    I tried "DicomMoveOnSeriesLevel = 0" in k-pacs.ini.
    In that case the K-PACS GUI does not lock and retrieves the patient data (the portion received up to the failover),
    but I have to restart the K-PACS server in order to retrieve the next patient's data from the migrated node.


    I hope I have explained my situation; I am writing here because I think there is no problem with Conquest itself.
    Surely there are some friends here who have configured a cluster and K-PACS clients as in my situation.
    Help is appreciated. Ali


    Note: I haven't done any move operation TO the cluster yet; it requires a shared device, database replication, etc.

    Thanks friends,


    The mysqldump file size reached 104 MB after 50 days (with CR connected).
    Creating a dump file of that size takes 21 seconds on our Pentium E2180 2 GHz single-CPU machine with 2 GB RAM, with the database on a Seagate SCSI disk.


    As far as I understood from ljaszcza and blub, I now think that I won't take
    a dump file; instead I will take a copy of the MySQL data directory periodically, e.g. daily (see the sketch below).


    In fact I think the procedure I mentioned at the beginning of the post
    is a bit more secure: it takes hourly increments, so when a breakdown occurs only the data from the last hour will be missing.


    But after 5 years, a 10 GB (single) database dump file with, say, a 35-minute creation time may be a serious drawback.


    Since I am not a database expert, I am just sharing my thoughts with you.
    Ali

    When NKI compression is selected, the server status window lists messages about the compression being made. But when I chose JPEG there were no
    such messages, only "write .. to MAG" messages, and looking at the
    MAG I see that the DICOM file sizes are not reduced,
    so I conclude that no compression is being made.
    Is there any other package needed beyond conquest.zip?
    If it works on your system, a description of a fresh install would be highly appreciated. I think I am skipping something. Ali

    Sure,


    I didn't use the 1414beta server package. I used the 1413 server
    with the default libs and dgate, or dgate.exe version 1414,
    or dcmtk from the OFFIS web page. Ali

    Hi Marcel,


    I tried both sets of executables (the ones that come with conquest.zip and the ones from dcmtk) and two dgate versions (1413 and 1414). All cross-combinations failed.
    Btw, some time ago I had tried over the web, and compression through the client
    succeeded. Ali

    Hi wolbi,


    I think the problem comes from the patient ID entered at the CR console.
    We use a Fuji CR and we have no problem sending data to Conquest.
    eFilm does not use the patient ID to store to its local disk; it uses the study UID, which
    is unique and absolute. Try a simple patient ID entry such as 12345 (only numbers, without spaces or delimiters) at the CR console. Ali

    Thanks LJJaszczak,


    I understand that you back up the MySQL data directory with RAID, right?
    What operation do you use to back up to the offsite server? Only a file copy,
    or mysqldump, or InnoDB hot backup? Ali

    I cannot compress the incoming images to Conquest as JPEG.
    In the GUI, I select "enable jpeg support using offis tools" and "with
    lossless jpeg comp."
    The images on disk are named "DCM".
    I send the images from K-PACS on another computer.
    This node is listed in acrnema.map and its compression option is j2.
    I also removed the existing dcmcjpeg and dcmdjpeg from the Conquest dir and
    copied the ones from OFFIS DCMTK 3.5.4.
    But no compression occurs.
    However, NKI fast compression works.
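    To narrow it down, I will also test whether the DCMTK encoder works on one of the stored files by itself, outside Conquest, roughly like this (the paths are placeholders; dcmcjpeg's basic usage is just an input file and an output file):

        import os
        import subprocess

        src = "sample.dcm"        # placeholder: one uncompressed file from the MAG
        dst = "sample_jpeg.dcm"   # placeholder: output path

        # Basic documented usage: dcmcjpeg <dcmfile-in> <dcmfile-out>, run from
        # the directory that holds the DCMTK binaries.
        subprocess.run(["dcmcjpeg", src, dst], check=True)
        # If the output is clearly smaller, the encoder itself is fine.
        print("before:", os.path.getsize(src), "after:", os.path.getsize(dst))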


    Ali

    Hi,


    Our Conquest server has been running online for 42 days.
    It holds MR and CT images.
    Today's numbers are:
    2,605 studies, 146,786 images and 49 GB of image storage.


    I back up the Conquest MySQL InnoDB tables with mysqldump.
    The procedure is:
    1. At 00:10, a folder named with the current date is created.
    2. At 00:20, a complete full backup is taken with mysqldump.
    3. Each hour, an incremental backup is run on top of the full backup.


    Each step is written in a separate .bat file, and the timing is handled
    by Windows Scheduled Tasks (a sketch of steps 1 and 2 is shown below).
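    For reference, a minimal sketch of what the .bat files for steps 1 and 2 do, written out in Python (the database name, credentials and paths are placeholders):

        import datetime
        import os
        import subprocess

        BACKUP_ROOT = "D:/backup/conquest"   # placeholder: backup target
        DB_NAME = "conquest"                 # placeholder: database name

        # 1. Create a folder named with the current date.
        folder = os.path.join(BACKUP_ROOT, datetime.date.today().isoformat())
        os.makedirs(folder, exist_ok=True)

        # 2. Take the full dump; --single-transaction gives a consistent
        #    snapshot of InnoDB tables without locking the running server.
        with open(os.path.join(folder, "full.sql"), "w") as out:
            subprocess.run(
                ["mysqldump", "--single-transaction",
                 "--user=root", "--password=secret", DB_NAME],
                stdout=out, check=True)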


    Today the dump file reached 79 MB.


    So,


    After one year, and after integrating CR into Conquest, I guess the dump file will reach
    approximately 1 GB.
    What about after 2 years, 3 years, ... 10 years?
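    The 1 GB guess is a simple extrapolation from today's numbers:

        # Rough growth extrapolation from the figures in this post.
        days, size_mb = 42, 79
        per_day = size_mb / days    # about 1.9 MB/day for MR + CT only
        print(per_day * 365)        # about 690 MB after one year, before adding CR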


    What do you suggest for MySQL backup?
    I searched several tools but didn't find a solution.


    Thanks Ali

    Hi Marcel,


    I analyzed the problem. The image that Conquest cannot delete does not exist in the source patient directory, but it does exist in the previously moved directory of the same patient. That is, the file was moved in the first operation, but Conquest somehow still remembers it. In the first move operation Conquest successfully removes the source directory. I am sending the bug report to portal@nki.nl.
    I tested it with two different patients; the same problem occurred again. Ali