When (not to) use NKI compression (20070127)

  • Since there has been some discussion about the validity of the private NKI compression (see topic Conquest and Ultrasound), here are my views.

    NKI compression is an old and well-tested feature of the Conquest DICOM server. We use it to store TBs of information. The compression ratio and speed are quite OK.

    It is totally transparent for clients accessing Conquest over the network, guaranteed error free, and safe to use.

    Also, when set in acrnema.map, it can be used as the transfer syntax to transfer data from one server to another. This is OK for Conquest-to-Conquest transfers but should NEVER be done for transfers from Conquest to non-Conquest clients.
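    As an illustration, an acrnema.map entry along these lines (the AE title, address, and port here are hypothetical) tells Conquest to use NKI compression when sending to that peer - which, as said above, is only safe if the peer is another Conquest server:

```
# AE title        IP address      port   compression
CONQUEST2         192.168.1.20    5678   n4
```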

    The compressed data is saved in a private tag if possible. If not, the images are not compressed and remain valid.

    Files stored by Conquest under NKI compression are NOT DICOM COMPLIANT and are unreadable by other software. Originally, it was not possible to store NKI-compressed images with the .dcm extension (set with FileNameSyntax), but only with the .v2 extension (raw VR dump). That the combination of .dcm and NKI compression is now possible can actually be considered a bug - .dcm files must be DICOM compliant. Since version 1.4.12b, NKI compression is disabled for .dcm files.
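    A minimal sketch of the relevant dicom.ini settings (option names as used by recent Conquest versions; treat the exact values as assumptions and check the manual for your release):

```
# Use a FileNameSyntax value that produces .v2 files (raw VR dump), so NKI
# compression applies; with a .dcm-producing syntax, NKI compression is
# refused since 1.4.12b and the stored files stay DICOM compliant.
FileNameSyntax      = 3
IncomingCompression = n4
```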

    If you store data with NKI compression, you will need to keep a version of the Conquest software running indefinitely to be able to read the data (or reuse our code in nkiqrsop.cxx).

    As David Clunie mentioned, a better way to implement NKI compression is a private transfer syntax. We will consider this change, but it has a lot of implications for the code.


  • Hi,

    You can change the compression mode and send the images from the server to itself to compress them. BUT: NKI compression can only be used if the files were stored as .v2, and JPEG only if they were stored as .dcm. The reason is that sending to itself rewrites to the same filename, and the different compressions need different filenames.
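    As a rough sketch of that workflow (the AE title and patient ID here are hypothetical, and the exact dgate option syntax may vary between versions): set the desired compression in dicom.ini, restart the server, then ask it to move a patient to itself, e.g.:

```
dgate "--movepatient:CONQUESTSRV1,CONQUESTSRV1,1234567"
```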


  • Hi

    v1.4.12c using MySQL database on Windows XP (inexpert user!)

    I tried NKI compression and it didn't work, which seems to be expected behaviour with this release (having read this sticky). JPEG didn't work either.

    Next I tried just telling XP to keep the DICOM store directory compressed. This seems to work fine and provides good compression (×0.2 or less).

    Is there a downside to this approach?

  • It appears this post was left in a "to be continued" state for some time, so I decided to "explore" this route and make some personal tests.

    I set up two test systems with very different hardware in order to compare NKI vs XP-compressed file system performance.
    I used Conquest 1.4.15 and MySQL 5 over an NTFS file system.
    - The first one is an Intel i5 CPU (mid-range dual-core CPU @ 3.2 GHz) with Windows XP 32 bit
    - The second one is a set-top box based on an Intel Atom 330 (dual-core CPU @ 1.6 GHz) with Windows XP 32 bit (my "portable" PACS :-)
    In both cases I alternated NKI compression and file-system compression to store about 0.5 TB of CT and MRI DICOM images, then tested the system response.

    The NKI format compresses the pixel data and leaves the DICOM header uncompressed, so in theory uncompressed DICOM images on a compressed file system should be the best combination.

    The results unfortunately were different, but we need to make some considerations.

    One 512x512 CT image takes about 0.5 MB of disk space when uncompressed.
    The same image compressed in NKI format takes about 0.2-0.25 MB.
    The Windows XP compressed file system needs about 0.17 MB for the same uncompressed image (the file system compresses both the image and the DICOM header).

    Considering 0.5 TB of disk space, this corresponds to about:
    - 1 million uncompressed DICOM images
    - 2-2.5 million NKI-compressed DICOM images
    - 3 million XP-FS-compressed DICOM images
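    The arithmetic above can be checked with a quick sketch (the per-image sizes are the approximate figures from this post):

```python
# Approximate number of images fitting in 0.5 TB at the per-image
# sizes observed above (all sizes in MB, rough averages).
TOTAL_MB = 0.5 * 1_000_000  # 0.5 TB expressed in MB

sizes_mb = {
    "uncompressed": 0.5,    # ~0.5 MB per 512x512 CT slice
    "nki":          0.225,  # ~0.2-0.25 MB with NKI compression
    "xp_fs":        0.17,   # ~0.17 MB with XP/NTFS file-system compression
}

for name, size in sizes_mb.items():
    print(f"{name}: ~{TOTAL_MB / size / 1e6:.1f} million images")
```

    This reproduces the rounded 1 / 2-2.5 / 3 million figures quoted above.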

    With these numbers, the most important factor is the time the file system needs to access the files.

    Well, my tests showed a TOTAL inefficiency of XP-FS during disk access. This file system needs to compress/decompress the files on the fly at every access, and this has to be transparent for the system, so the process can't take too many CPU cycles or it would slow down the other processes.
    This situation is more evident on the Atom CPU, where the CPU power isn't very high; this caused some timeouts when accessing large image series (DICOM queries involving more than 3,000 images).
    Results were better with a good CPU like the i5 @ 3.2 GHz, but as the number of compressed files grows, performance degrades too much to be usable on large disk sets.

    Using NKI compression (compression level n4), the result was a more reliable system on both hardware setups.
    Obviously on the Atom system the transfer speed is lower than when storing uncompressed DICOM images, but this system was intended to be used as a "transportable little PACS", so performance wasn't a priority.

    Using the same settings on the i5 system, the CPU power was enough to make the on-the-fly compression/decompression of the NKI images absolutely transparent, without taking more time than archiving uncompressed DICOM images.

    My personal opinion after these tests: AVOID the XP file-system compression except for very small amounts of images, and choose to save the images in NKI format (n1 for slow systems).

    I hope these tests help someone, and finally, sorry for my bad English!

    I made some tests using the new 1.4.16 release with JPEG 2000 support enabled as lossless compression.
    Well... on the Atom system performance was far too slow to be usable for large amounts of data, so the tests were interrupted.
    Results were better on the i5 CPU, where the JPEG 2000 algorithm took about twice the time of storing the same study in NKI format, but improved the compression ratio by about 25% over NKI.

    Perhaps with an i7 or other high-end CPU the compression time could be reduced significantly, but I suggest using JPEG 2000 compression for transfers over slow network connections (10 Mbit or less) and keeping NKI compression for the best system efficiency inside a 100/1000 Mbit/s LAN.


  • Mmmm... No I didn't!

    I did some tests with JPEG several months ago, but they weren't oriented to benchmarking system speed, I'm sorry.
    For sure JPEG is faster than JPEG 2000 and should place "in the middle" between NKI and JPEG 2000.
    In the next few days I'll post a more complete speed comparison.

    For now I can publish these numbers obtained using Conquest on an Intel i5 @ 3.2 GHz and Windows XP 32 bit.
    From a General Electric workstation I sent a big CT study composed of 3,087 images to the Conquest system using a 100 Mbit LAN connection, and these were the results.
    The uncompressed data was about 1.6 GB.

    Saving mode   Total bytes    Space per image   Time to complete
    n4            773,947,822    223-224 KB        210 seconds  (NKI .v2 format)
    jk            600,396,168    169-171 KB        443 seconds  (lossless JPEG 2000)
    jl            458,773,692    122-140 KB        477 seconds  (lossy JPEG 2000 @ 95% quality)

    It seems that with the JPEG 2000 algorithm the time needed to compress the images doesn't change much between lossless and lossy settings, and takes about 2x the time needed by NKI.
    This time could be reduced by using a multicore CPU and a multicore-enabled compression algorithm. If I remember correctly, only one core of the dual-core CPU was used during compression.

    Give me a few days and I'll post a more complete table of results.


  • OK. This is a very long post.... I hope not to be too boring. :!:

    Finally I have done some tests in order to evaluate the global efficiency of Conquest's image compression algorithms.
    The hardware used was a common PC equipped with an Intel i5 @ 3.2 GHz, Windows XP 32 bit, and Conquest 1.4.16 + MySQL 5.
    I selected a very big study composed of about 3,500 images generated by a 64-slice CT scanner made by General Electric (Discovery CT750-HD).
    I sent the entire study to the Conquest server several times through a 100 Mbit/sec LAN, changing at each try the method used by Conquest to save the images to disk and noting the elapsed time needed to complete the task.

    Images sent: 3492

    UN = Uncompressed – standard DICOM 3 format.
    Time needed: 192 seconds
    Archived data: 1,809 MB
    Compression ratio: 1:1 = 100%
    Speed: 18 images/sec. Equivalent network speed = 77 Mbit/sec

    N4 = Lossless compressed – custom NKI V2 format.
    Time needed: 212 seconds
    Archived data: 818 MB
    Compression ratio: 1:2.21 = 45%
    Speed: 16.5 images/sec. Equivalent network speed = 32 Mbit/sec

    J2 = Lossless compressed – standard JPEG DICOM 3 format.
    Time needed: 262 seconds
    Archived data: 735 MB
    Compression ratio: 1:2.46 = 41%
    Speed: 13.3 images/sec. Equivalent network speed = 23 Mbit/sec

    JK = Lossless compressed – standard JPEG 2000 DICOM 3 format.
    Time needed: 473 seconds
    Archived data: 630 MB
    Compression ratio: 1:2.87 = 35%
    Speed: 7.4 images/sec. Equivalent network speed = 11 Mbit/sec

    J6 = Lossy compressed – standard JPEG DICOM 3 format @ 95% quality.
    Time needed: 260 seconds
    Archived data: 734 MB
    Compression ratio: 1:2.47 = 40%
    Speed: 13.4 images/sec. Equivalent network speed = 23 Mbit/sec

    JL = Lossy compressed – standard JPEG 2000 DICOM 3 format @ 95% quality.
    Time needed: 516 seconds
    Archived data: 481 MB
    Compression ratio: 1:3.76 = 27%
    Speed: 6.8 images/sec. Equivalent network speed = 7.6 Mbit/sec
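    The derived figures can be recomputed from the raw measurements above (3,492 images; seconds and archived MB per mode); small differences from the posted numbers are down to rounding:

```python
# Recompute images/sec, equivalent network speed (archived MB * 8 / seconds,
# in Mbit/s) and compression ratio from the raw benchmark numbers above.
N_IMAGES = 3492
UNCOMPRESSED_MB = 1809

results = {  # mode: (seconds, archived MB)
    "UN": (192, 1809),
    "N4": (212, 818),
    "J2": (262, 735),
    "JK": (473, 630),
    "J6": (260, 734),
    "JL": (516, 481),
}

for mode, (secs, mb) in results.items():
    print(f"{mode}: {N_IMAGES / secs:4.1f} img/s, "
          f"{mb * 8 / secs:5.1f} Mbit/s, "
          f"1:{UNCOMPRESSED_MB / mb:.2f}")
```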

    Here are my personal considerations after these tests.

    1. On an up-to-date PC/server there is no reason NOT to use NKI compression as the default saving method on a dedicated server (PACS). The difference in time needed to save the study in NKI format is minimal compared to the possibility of doubling the archive capacity. Obviously you need to keep Conquest as a "media converter" to be able to read the images with a third-party workstation or system, but the work is done on the fly and is totally transparent to the users.

    2. Using the lossless JPEG compression algorithm achieves a better compression ratio than NKI and keeps the DICOM compliance of the saved images for JPEG-enabled DICOM workstations/viewers, but you have to "pay" a small amount of extra time to complete the task.
    In addition, JPEG algorithms are usually asymmetric: the time needed to save/encode an image in JPEG format is longer than the time needed to read/decode the same image.
    The storage capacity can be increased by about 10% over NKI compression. It could be an NKI alternative, and moreover could be a good way to send images to a workstation in order to speed up transfer time over a slow network.

    3. The lossless JPEG 2000 algorithm is an improvement over "standard" JPEG. It was born at the beginning of the third millennium and introduced big enhancements like wavelet compression and other features. Unfortunately it isn't very widespread, and there aren't many DICOM viewers able to display this image format.
    Using this image format "costs" a lot of time (2x the NKI compression time) but it increases the storage capacity by 20-25%, so it could be a valid solution for long-term archiving or for transporting images between geographically distributed Conquest servers connected by a low-bandwidth link (10 Mbit/sec or less).

    4. The use of lossy JPEG / JPEG 2000 algorithms is more difficult to evaluate, because there are a lot of medical and legal considerations about the image quality threshold and the possibility of losing diagnostic information inside the images. This is a simple compression test and I don't want to get involved in medical and ethical discussions, I hope you understand :-)
    The main difference between JPEG and JPEG 2000 is that JPEG is much faster, but with JPEG 2000 you can achieve a better compression ratio while introducing a very small amount of image artifacts. In my test, 95% image quality allowed the images to be displayed with no apparent artifacts while reducing the image size in a very significant way.
    This is perfect for transporting images over low-bandwidth networks like internet/VPN in a relatively short time (in my test the equivalent transfer rate was 7.6 Mbit/sec... very close to HDSL / VPN / internet connection bandwidth).
    In this way, for telemedicine, second opinions and so on, it's possible to shorten transfer times by about 4x without losing too much quality.
    It could also be a way to definitively archive very old long-term archives that can no longer be involved in medical or legal issues.

    5. Other image formats / the future.
    If you look around the internet, there is a lot of discussion about the possibility of introducing an advanced JPEG 2000 3D compression algorithm able to increase the compression ratio in a significant way.
    Today we compress/decompress every image as is... as a standalone image, but for some modalities the image is part of a volume (think of a helical CT scan, for example, or fluoroscopy or ultrasound and so on).
    Both lossless and lossy algorithms could gain a lot if the images of a series were considered and compressed as a single "3D volume" pack, increasing the compression ratio.
    In this vision every slice is correlated with both the previous and the next slice, and you only need to save the differences between a "key image" and the following images.
    This is already a reality when you watch a YouTube video or DivX / Xvid / H.264 video.
    Of course the best results will be achievable with lossy compression algorithms, with a big compression ratio compared to the introduced artifacts.
    Some tests are demonstrating that this method could multiply the compression ratio by 2 or 4 without significant loss of quality, enabling compression ratios of 1:8 or 1:16 without risks!!!
    My dream is a future implementation of this possibility, but to do this we need more CPU power than today and the definition of a new updated DICOM image format (or an NKI V3 proprietary image format) to be used for telemedicine or other network-related activities.

    Have a nice Easter time.
