Dicom server 1.4.16(k) released!

  • Hi,


    can you explain 'everything'? If the file is stored with a different extension than it was previously saved with (V2 or DCM), the specific file is deleted. What FileNameSyntax do you have configured? The delete function that removes the file searches on SOP and PatientID. If the sent items are empty, there may be an issue.


    Marcel

  • Hi,


    The update to 1.4.16f broke a Lua script that inserts a ReferencedRTPlanSequence into RTImages.
    These lines are part of it:

    Code
    Data.ReferencedRTPlanSequence={}
    Data.ReferencedRTPlanSequence.ReferencedSOPClassUID=RTPlanStorage
    Data.ReferencedRTPlanSequence.ReferencedSOPInstanceUID=a[1][1]


    Using only the first line correctly inserts an empty ReferencedRTPlanSequence. The second line causes the server to crash - even if the third line is commented out.
    To get the system working again, I downgraded to 1.4.16d.
    1.4.16e has this bug too.
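    In case it helps narrow things down, here is a variant I have been considering (only an untested sketch; I have not verified whether the Lua binding accepts a nested item table like this on the crashing builds) that builds the single sequence item as its own table first:

    Code
    -- Untested sketch: build the item as a plain Lua table and attach it in one
    -- assignment, instead of setting fields directly on the sequence object.
    -- RTPlanStorage and a[1][1] are defined earlier in the script.
    local item = {}
    item.ReferencedSOPClassUID    = RTPlanStorage
    item.ReferencedSOPInstanceUID = a[1][1]
    Data.ReferencedRTPlanSequence = { item }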


    Gerald

  • I am patched up to 1.4.16h at this point, but I seem to be getting segfaults after the service has been running for a few days. I'm running on a 64-bit CentOS install. I'm currently attempting to crank up the logging level, but the method for doing so eludes me (I start dgate from an init.d file, so a command-line argument to set debugging on server start would be very helpful).

  • Hi,


    The most useful thing is to run dgate in gdb as follows:


    gdb dgate
    run -v
    <wait for crash>
    bt


    and provide me the log.


    Additionally, the log level can be set as follows from the binary folder once the server is running:


    dgate --debuglevel:4


    Actually, many parameters can be set this way; see dgate -?


    Marcel

  • I'll go with the debuglevel to start; this problem takes anywhere from 1 to 3 days to kick up and kill the server. Further details for the server: it's currently in the null database configuration and runs as a service user on a VM.

  • This is what got dumped:


    Program received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x4281d940 (LWP 6215)]
    0x00000000004b936f in Array<AbstractSyntax>::RemoveAt(unsigned int) ()
    Hangup detected on fd 0
    Error detected on fd 0
    error detected on stdin
    A debugging session is active.


    Unfortunately I didn't get to do a stack trace since this ran over the holiday and my computer crashed before I got to review the output.

  • Hi Marcel and happy new year :-)


    I have a little issue with Conquest 1.4.16(h) from the web interface.
    The Dicom Server runs on a 32-bit Windows environment with 3 GB of system RAM (less than 600 MB committed by the system when running) with the Apache httpd server.


    Using the web interface:
    When I try to push a selected study containing up to 770 CT images as a .ZIP archive, everything goes well and I can download the .zip archive perfectly.
    When I try to push a selected study containing more than 770 CT images as a .ZIP archive, the .zip archive is truncated after a few KB and I can't get the images.


    After several tries with huge CT studies of more than 3300 images, at a certain point the Dicom server crashed and I got a VR memory allocation error in the server status (for example "can't allocate 650 000 000 bytes"); the server then stopped working until I killed and restarted it.


    I suspect it could be a memory allocation problem, but the Conquest log isn't helpful for finding at what point it happens.


    So I've tried to trace the operations performed during this procedure.


    a) When I click Push ==> ZIP file, the server copies all the selected study images to the directory /printer_files.
    b) After the copy has finished, the .ZIP archive is created; I can see 7za.exe running in the background and the .zip archive growing.
    c) After the archive is created, it "disappears" from /printer_files and I get the download prompt in my internet browser.


    This is the Apache log when the server crashes.
    ------Apache log----
    [Wed Jan 11 00:55:13 2012] [error] [client 192.168.0.12] Premature end of script headers: dgate.exe, referer: http://192.168.0.232:85/cgi-bi…0147437.372&source=(local)
    ------End of Apache log----


    What happens after the zip archive is created? Is it deleted from the file system and kept in system memory until the download has finished? I wasn't able to find it anywhere on the file system.


    If the archive is more than 200 MB, could it be a memory allocation issue with 3 GB of system RAM?


    Considering that the Dicom server log doesn't report any problem or anomaly, could it be an issue with the dgate.exe used by the web server?


    Could it be an Apache memory configuration problem?


    Sorry for my English. I hope you can understand what I have done.


    Thank you for your attention and best regards, Davide.

  • Quote from marcelvanherk

    Hi,


    the zip file is transmitted as a DICOM object and is sent and received in memory.


    Marcel



    Ok Marcel and thanks for your reply.
    In the past 5 days one of my friends told me about this issue:
    ------
    It could be a matter of how the data are managed. For example, assuming the following variables:
    A = 500 MB of data (memory allocated)
    B = 600 MB of data (memory allocated)
    C = A + B


    The total committed system memory will be 500 + 600 + (500 + 600) = 2200 MB of system RAM, and depending on how the allocation is requested, the whole commitment may have to fit in physical system memory alone (without taking advantage of swap / virtual memory).
    ------
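    To make the arithmetic above concrete, here is a rough sketch in plain Lua (the sizes are invented and the server itself is written in C++, so this only mirrors the idea, not the real code):

    Code
    -- Hypothetical sizes chosen to match the example above.
    local a = string.rep("x", 500 * 1024 * 1024)  -- ~500 MB buffer "A"
    local b = string.rep("x", 600 * 1024 * 1024)  -- ~600 MB buffer "B"
    local c = a .. b                              -- "C": a new ~1100 MB buffer
    -- While c is being built, a, b and c all exist at the same time:
    -- roughly 500 + 600 + 1100 = 2200 MB committed for an 1100 MB result.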


    Perhaps we are facing a similar memory issue, and without changing anything the only available solution is to "upgrade" the system to a 64-bit environment with more system memory in order to push the memory limit up by 2x or 3x.
    Otherwise, would it be possible to consider using temporary files for these kinds of huge DICOM objects in order to save system memory?
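    What I mean by "temporary files" is something like the following sketch (plain Lua, not Conquest code, with invented file names): the data are copied in small chunks so that only one chunk at a time is held in memory.

    Code
    -- Sketch only: append two hypothetical input files to a temporary output
    -- file one 1 MB chunk at a time, so peak memory use stays around 1 MB.
    local function append_in_chunks(src_path, out)
      local src = assert(io.open(src_path, "rb"))
      while true do
        local chunk = src:read(1024 * 1024)
        if not chunk then break end
        out:write(chunk)
      end
      src:close()
    end

    local out = assert(io.open("huge_object.tmp", "wb"))
    append_in_chunks("part_a.dat", out)
    append_in_chunks("part_b.dat", out)
    out:close()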


    I'm asking this only as a sort of "academic discussion", and remember I'm not a programmer; I'm an idiot who likes to use the computer and (unfortunately for you) loves Conquest :-)


    Thanks for your patience.


    Best Regards, Davide.

  • Hi Marcel and thanks again for your reply.


    Unfortunately our CT scanner is a latest-generation 64-slice model; it generates a lot of images for each series, and it happens very often that a single series contains more than 770 images.
    For example, for a body scan or an angiographic examination a single series can sometimes contain up to 3000 images. OK, that is a rather rare extreme, but it's normal to need to handle 1000 images for a single series.


    With the web interface I can't select a small slice group... I can download a single image or a single series (the single atom or the entire building :-), but this isn't a big problem. I use the web interface as a facility to get the series from everywhere instead of modifying the DICOM node list, because in certain situations I have a dynamic IP and that doesn't fit DICOM node maps, which require static IPs.


    One of my friends suggested using objects and libraries that work with streams, which can be kept in memory or spilled to disk.
    A low-impact change for the Windows version could be to replace the malloc() calls with Windows APIs that let you specify whether the memory must be physical RAM or may be backed by virtual memory when physical RAM isn't enough to satisfy the request.


    I repeat again... I'm not a programmer; I'm only copying and pasting from discussions between me and a friend who tried to help me, and I'm writing to you for an academic/philosophical discussion, nothing more, nothing less.


    Thanks again and have a nice weekend.


    Davide.

  • Hi,
    I found that 16-bit (CR) images do not get successfully retrieved or pushed to Kpacs (J1 or un); US images are fine. Using external compression (not forgetting to copy DCMJPEG from the 1415 package) solves the problem. Sending to another Conquest works OK, but sometimes, even if the image file got copied (the file size is correct), I get the error messages ***[DecompressJPEGL]: No jpeg data found and ***[DecompressImage]: JPEG library decompression error. Deleting the series and retrieving it again solves the problem. Tried with the h and i versions. It looks like, even when there are no errors in the log file, the image transfer somehow did not finish completely.
    Thank you.
