routing CTs

  • I've had a look through both the manual and the forum, but not found an answer to my problem - apologies if I've missed it and it has been answered already...


    Since we upgraded to 64 slice CTs, we are eating up disk space on PACS at a great rate.


    What I want to do is forward all of the incoming studies to another Conquest server with a large (12TB) array, and forward only the thick slice MPRs and other recons, plus the scout, to PACS.


    This way, we'll save our expensive PACS disk space but keep all of the original thin slice data for a couple of years in case we need to do further processing on it.


    I'm already using Conquest to buffer and forward studies to PACS because PACS is rather slow at receiving images (120 per minute) while the CT can send to Conquest at 1500 images per minute. This allows the CT to be freed up much faster to send to local modality workstations.


    This, alone, is already a huge advantage for us, so thanks once again to the Conquest developers.

  • Hi,


    Thanks for your nice words.


    To make this happen you will have to find tags in the dicom header to filter on and use commands like:


    exportconverters = 2
    exportconverter0 = ifnotequal "%m", "CT"; stop; ifnumless "%V0018,0050", "2"; export series to PACS.
    exportconverter1 = ifequal "%m", "CT"; stop; export series to PACS.


    This example should forward to PACS only the CT series with a slice thickness of less than 2 mm (sorry, the threshold must be an integer), while all non-CT data are sent as well. It will probably block scout views if they are in separate series; you may have to add another filter to let those through, but this is scanner dependent.
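

    For example, if the scout views come down as their own series, an extra converter along the lines below could let them through as well. This is only a sketch: the Image Type string (0008,0008) is scanner dependent and would have to match exactly (there is no substring matching), so check what your scanner actually writes, and remember to raise exportconverters to 3.


    exportconverters = 3
    exportconverter2 = ifnotequal "%m", "CT"; stop; ifnotequal "%V0008,0008", "ORIGINAL\PRIMARY\LOCALIZER"; stop; export series to PACS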


    Marcel

  • Ah yes, thanks, that's an avenue to try.


    There is obviously still a risk of images not always getting to the right place. How about filtering by series description instead - is that feasible?


    We could always modify the protocols on the CT so that the appropriate series include something unique in the series description (NON-PACS?). It would be a bit of work to change so many protocols, but perhaps more reliable in the long run?
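

    On the buffering Conquest I was picturing something roughly along these lines (completely untested, and I have made up CONQ_12TB as the AE title of the big archive server; I assume the description would have to match exactly, e.g. be set to just "NON-PACS"):


    exportconverters = 2
    exportconverter0 = export series to CONQ_12TB
    exportconverter1 = ifequal "%V0008,103E", "NON-PACS"; stop; export series to PACS


    i.e. everything goes to the archive, and everything except the series marked NON-PACS goes to PACS.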


    regards,


    dermot

  • Hi,


    you can do that. An easy alternative is to define the same Conquest server twice in your scanner with different AEs, e.g. conquest_thin and conquest_thick, both with the same IP and port. You would then configure the scanner protocols to send the thin slices to conquest_thin and the thick slices to conquest_thick: the exportconverters can test the called AE to decide which forwarding to use.


    This has the advantage that it does not require substring matching, which is not available at present, and does not depend on editing human-readable strings.
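

    From memory the converters would then look something like the lines below, with CONQ_12TB standing in for the AE title of your big archive server. Please verify the % substitution codes against the converter section of the manual; I am assuming here that %c expands to the called AE title:


    exportconverters = 2
    exportconverter0 = ifnotequal "%c", "conquest_thin"; stop; export series to CONQ_12TB
    exportconverter1 = ifnotequal "%c", "conquest_thick"; stop; export series to PACS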


    Marcel

  • "I've had a look through both the manual and the forum, but not found an answer to my problem - apologies if I've missed it and it has been answered already...


    Since we upgraded to 64 slice CTs, we are eating up disk space on PACS at a great rate.


    What I want to do is forward all of the incoming studies to another Conquest server with a large (12TB) array and forward only the thick slice MPRs and other recons, plus the scout, to PACS"



    No, this is not a Conquest solution. You need to set up autosend destinations at the series level in your scanner protocols.


    Siemens allows 3 destinations per series. I can't remember off the top of my head how many GE allows.


    i.e. autosend the thick slices, MPRs, recons and scouts to PACS, then send the thins and scouts to Conq_12TB (there is no sense in storing what you can easily recreate from the thin slice data you have already stored).


    Or continue to use the existing Conquest to buffer to PACS, autosending only the PACS-specific series there to be forwarded, and autosending your thins to the 12TB server.


    Or, if your PACS is one of those that charge by the byte, then maybe increase the storage of the current Conquest to cover the near term (up to one year) and reduce the PACS storage to a 3-month limit.
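

    If I remember the dicom.ini options correctly, the Conquest retention side of that can be handled with the nightly cleanup threshold, which works on free disk space rather than on study age, so you would size it to leave roughly a year online:


    # value is in MB; the nightly maintenance deletes the oldest studies
    # once free space on the MAG device drops below this
    NightlyCleanThreshold = 1000000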
