Posts by Davidone

    Mmmm... no, I didn't!


    I did some tests with JPEG several months ago, but they weren't aimed at benchmarking system speed, I'm sorry.
    For sure JPEG is faster than JPEG 2000 and should place "in the middle" between NKI and JPEG 2000.
    In the next few days I'll report a more complete speed comparison.


    For now I can publish these numbers, obtained with Conquest on an Intel i5 @ 3.2 GHz running Windows XP 32-bit.
    From a General Electric workstation I sent a big CT study of 3,087 images to the Conquest system over a 100 Mbit LAN connection, and these were the results.
    The uncompressed data was about 1.6 GB.


    Saving mode   Total bytes    Space per image   Time to complete
    n4            773,947,822    223-224 KB        210 seconds  (NKI V2 format)
    jk            600,396,168    169-171 KB        443 seconds  (lossless JPEG 2000)
    jl            458,773,692    122-140 KB        477 seconds  (lossy JPEG 2000 @ 95% quality)
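For reference, the compression ratios implied by these numbers can be checked with a short script (assuming the "about 1.6 GB" figure means decimal gigabytes):

```python
# Bytes on disk for the 3,087-image CT study, per compression mode.
sizes = {
    "n4": 773_947_822,  # NKI V2
    "jk": 600_396_168,  # lossless JPEG 2000
    "jl": 458_773_692,  # lossy JPEG 2000 @ 95% quality
}
uncompressed = 1.6e9  # "about 1.6 GB" (assumed decimal units)

for mode, size in sizes.items():
    print(f"{mode}: {size / uncompressed:.0%} of original size")

# How much smaller lossless JPEG 2000 is compared to NKI:
print(f"jk is {1 - sizes['jk'] / sizes['n4']:.0%} smaller than n4")
```

So NKI roughly halves the data, and lossless JPEG 2000 shaves off another ~22% at about twice the compression time.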


    It seems that with the JPEG 2000 algorithm the time needed to compress the images doesn't change much between lossless and lossy settings, and it takes about 2x the time needed by NKI.
    This time could be reduced by using a multi-core CPU and a multi-core-enabled compression algorithm. If I remember correctly, only one core of the dual-core CPU was used during compression.


    Give me a few days and I'll post a more complete table of results.


    Davide.

    It appears this post was left in a "to be continued" state for some time, so I decided to "explore" this direction and make some personal tests.


    I set up two test systems with very different hardware in order to compare NKI compression against the XP compressed file system.
    I used Conquest 1.4.15 and MySQL 5 on an NTFS file system.
    - The first is an Intel i5 (mid-range dual-core CPU @ 3.2 GHz) with Windows XP 32-bit.
    - The second is a set-top box based on an Intel Atom 330 (dual-core CPU @ 1.6 GHz) with Windows XP 32-bit (my "portable" PACS :-)
    In both cases I alternated NKI compression and file-system compression to store about 0.5 TB of CT and MRI DICOM images, then tested the system response.


    The NKI format compresses the bitmap image and leaves the DICOM header uncompressed, so in theory storing uncompressed DICOM images on a compressed file system should be the best combination.


    Unfortunately the results were different, but we need to make some considerations.


    One 512x512 CT image takes about 0.5 MB of disk space when uncompressed.
    The same image compressed in NKI format takes about 0.2-0.25 MB.
    The Windows XP compressed file system needs about 0.17 MB for the same uncompressed image (the file system compresses both the image and the DICOM header).


    Considering 0.5 TB of disk space, this corresponds to about:
    - 1 million uncompressed DICOM images
    - 2-2.5 million NKI-compressed DICOM images
    - 3 million XP-FS-compressed DICOM images
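These counts follow directly from the per-image sizes above; a quick sanity check, assuming decimal units (0.5 TB = 0.5 × 10^12 bytes):

```python
disk = 0.5e12  # 0.5 TB of archive space (assumed decimal units)

# Approximate on-disk size per 512x512 CT image, from the figures above.
per_image = {
    "uncompressed": 0.5e6,    # ~0.5 MB
    "NKI (n4)":     0.225e6,  # midpoint of the 0.2-0.25 MB range
    "XP-FS":        0.17e6,
}

for fmt, size in per_image.items():
    print(f"{fmt}: ~{disk / size / 1e6:.1f} million images")
```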


    With these numbers, the most important factor is the time the file system needs to access the files.


    Well, my tests showed a TOTAL inefficiency of the XP-FS during disk access. This file system has to compress/decompress the files "on the fly" at every access, and since this must be transparent to the system, the process can't take too many CPU cycles or it would slow down the other processes.
    The situation is more evident on the Atom CPU, where the CPU power isn't very high; this caused some timeouts when accessing large image series (DICOM queries involving more than 3,000 images).
    Results were better with a good CPU like the i5 @ 3.2 GHz, but as the number of compressed files grows, performance degrades too much for this to be used on large disk sets.


    Using NKI compression (compression level = n4), the result was a more reliable system on both hardware platforms.
    Obviously, on the Atom system the transfer speed is lower than when storing uncompressed DICOM images, but this system was intended to be used as a "transportable little PACS", so performance wasn't a priority.


    Using the same settings on the i5 system, the CPU power was enough to make the "on the fly" compression/decompression of the NKI images absolutely transparent, without taking more time than archiving uncompressed DICOM images.


    My personal opinion after these tests: AVOID the XP file-system compression except for very small numbers of images, and save the images in NKI format instead (n1 for slow systems).
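For reference, the storage compression mode is selected in Conquest's dicom.ini. A minimal fragment along these lines should do it, though the exact key names may vary between releases, so check the manual for your version:

```ini
# dicom.ini (fragment) -- storage compression settings
# un = uncompressed, n1..n4 = NKI levels, jk/jl = JPEG 2000 lossless/lossy
DroppedFileCompression = n4
IncomingCompression    = n4
```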


    I hope these tests can help someone, and finally, sorry for my bad English!


    I made some tests using the new 1.4.16 release with JPEG 2000 support enabled as lossless compression.
    Well... on the Atom system the performance was far too slow for large amounts of data, so the tests were interrupted.
    Results were better on the i5 CPU, where the JPEG 2000 algorithm took twice the time of storing the same study in NKI format, but improved the compression ratio by about 25% over NKI.


    Perhaps with an i7 or another high-end CPU the compression time could be reduced significantly, but I suggest using JPEG 2000 compression for transfers over slow network connections (10 Mbit/s or less) and keeping NKI compression for the best system efficiency inside a 100/1000 Mbit/s LAN.



    Davide.

    Hi to all. This is my first post in this forum, and I hope I have respected the rules.
    First of all, thanks to everyone working to maintain the Conquest project. This is great work and fantastic software: light, fast, efficient, configurable, written by people who use it, and that is simply the best way to do it!


    Now, about this post. I'm a "newbie" Conquest user and not an expert on DICOM issues, but I'm trying to set up a test system. I was very impressed by the possibility of using Conquest through the HTTP interface, and I appreciated being able to submit a ZIP file containing a DICOM study in order to import it into the system.
    Is there a way to do it in reverse? For example, after locating a study, the possibility of making a ZIP archive containing the study and then downloading it with an HTTP browser, instead of doing a "push" to a DICOM device?


    This could be very useful in situations where there is a firewall/proxy or it is not possible to use the DICOM transfer protocol.



    Another question, hoping not to bore you too much.
    I've set up a Windows-based PC with several GB of disk space dedicated to the image archive as MAG0.
    After this, I connected a Gigabit NAS (Western Digital ShareSpace) as MAG1 in order to automatically move the older images to this device nightly and free some space on MAG0.
    It works, but I'm concerned about the NAS transfer efficiency, which is very poor for a lot of small files such as DICOM images.
    After some tests I found that this issue isn't caused by Conquest; it's a sort of overhead in the Windows network protocol. The transfer rate is about 1 image per second when sending to the NAS and 2-5 images per second during retrieval.
    If I try to save a big file (for example, 1,000 images zipped into a single archive), the transfer rate is "nominal" at about 400-600 Mbit/s (the network connection is Gigabit class).


    The questions are:
    - To achieve reasonable transfer efficiency, do I need to replace the NAS with a DAS (for example, a USB HDD or a Fibre Channel storage unit)? Do you have more experience with this issue?
    - What about the possibility of packing the older studies that need to be transferred to the NAS into a ZIP file "on the fly" before transferring them, and, in case of a retrieve, doing the reverse operation by unpacking the ZIP archive "on the fly" to the MAG0 (or another) unit before initiating the DICOM move?


    Am I just writing nonsense, or is there something useful in this post?


    Thanks in advance for reading this, and sorry for my bad English.



    Davide.