Posts by garlicknots

    Hey Marcel, long time! I'm finally getting our cluster prepped for a migration to 1.5.0c, on Linux of course. In my testing I'm finding that my DelayedForwarderThreads seem to be working on the same object at the same time. I'm assuming this is related to the discussion above... would really love a code fix so we can use this sweet sweet parallelization if that is the case. If not, any ideas?


    This is with

    ForwardAssociationLevel = Series (I've also tried Image and Study. I did have some success with Study, but it would initially fail to send, and the send speed it eventually achieved was less than a single thread.)

    DelayedForwarderThreads = 2


    Code
    Fri Mar 17 18:13:46 2023 Starting 2 DelayedForwarderThreads
    Fri Mar 17 18:13:46 2023 Started 18 export queue thread(s)
    Fri Mar 17 18:13:47 2023 Queue: retrying processing of file /conquest/data/20162809/1.3.12.2.1107.5.2.50.176059.30000023011712250368900000012_0006_000003_16790975700000.dcm
    Fri Mar 17 18:13:47 2023 Exportconverter0.0 executes: sh /conquest/scripts/cqStats/cqExportNotifier.sh PACSSERVER MR CONQUESTSRV1 exporting PACSQC-CC 1.3.12.2.1107.5.2.50.176059.30000023011712250368900000016 20162809 10006833
    Fri Mar 17 18:13:47 2023 Queue: retrying processing of file /conquest/data/20162809/1.3.12.2.1107.5.2.50.176059.30000023011712250368900000012_0006_000003_16790975700000.dcm
    Fri Mar 17 18:13:47 2023 Exportconverter0.0 executes: sh /conquest/scripts/cqStats/cqExportNotifier.sh PACSSERVER MR CONQUESTSRV1 exporting PACSQC-CC 1.3.12.2.1107.5.2.50.176059.30000023011712250368900000016 20162809 10006833
    Fri Mar 17 18:13:47 2023 ExportConverter0.1: forward /conquest/data/20162809/1.3.12.2.1107.5.2.50.176059.30000023011712250368900000012_0006_000003_16790975700000.dcm to PACSSERVER
    Fri Mar 17 18:13:47 2023 ExportConverter0.1: forward /conquest/data/20162809/1.3.12.2.1107.5.2.50.176059.30000023011712250368900000012_0006_000003_16790975700000.dcm to PACSSERVER



    /edit: I'm using the precompiled binary with SQLite

    hammad, this is a collection of (my own) poorly scripted tools that require specific import/export converter configurations and a Grafana/InfluxDB environment. I can make them available as a package, but they were never intended for distribution, so you will need to edit them to make them work. That, and you're going to need Grafana/InfluxDB. If you can handle all of that, I'll start thinking about making it downloadable.

    I'm starting to see some very, very large ultrasound cine clips and have begun running into issues with 32-bit memory spaces. While the vendor of the imaging device is looking to resolve this at the source, I'm hoping to come up with a tactic to split the multiframe into single frames. Curious if anyone here has already woven this into a Lua script or other ExportConverter.
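
    In case it helps anyone searching later: here is a minimal, untested sketch of the split done outside ConQuest with pydicom (an assumption on my part; it's not part of CQ). The input path, output naming, and per-frame UID handling are all my own choices, and compressed or color sources would need extra care beyond this:

    Code
    # Hedged sketch: split a multiframe cine into single-frame DICOM files.
    # Assumes pydicom + numpy are installed; naming and UID handling are my
    # own convention, and compressed/color edge cases are not handled here.
    import copy

    import pydicom
    from pydicom.uid import ExplicitVRLittleEndian, generate_uid

    src = pydicom.dcmread("cine.dcm")   # hypothetical input path
    frames = src.pixel_array            # shape: (n_frames, rows, cols[, 3])

    series_uid = generate_uid()         # put the split frames in a new series
    for i, frame in enumerate(frames, start=1):
        ds = copy.deepcopy(src)
        ds.NumberOfFrames = 1
        ds.PixelData = frame.tobytes()
        ds.SeriesInstanceUID = series_uid
        ds.SOPInstanceUID = generate_uid()   # each frame is a new instance
        ds.InstanceNumber = i
        ds.file_meta.MediaStorageSOPInstanceUID = ds.SOPInstanceUID
        # pixel_array decompresses, so declare an uncompressed syntax
        ds.file_meta.TransferSyntaxUID = ExplicitVRLittleEndian
        ds.save_as(f"frame_{i:04d}.dcm")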

    Hi,


    NEVER use V2 with modern images. Your only salvage path would be to look inside the images and make sure that all sequences in there are known in dgate.dic.


    Marcel

    What do you recommend, Marcel? We're beginning to handle very large ultrasound content and I'm running into compression bottlenecks. Our cluster is 4 hosts for HA and throughput, but they are only 3 CPUs / 3 GB of memory each, so we are going to need to bump that up. We generally prefer jp2k and have not yet moved to 1.5, though I'm trying to make some progress on that today.

    hammad, I've had a similar challenge in the past and was never quite able to arrive at a workable solution. Marcel has a stickied post on the forum about not changing patient IDs in this manner. I didn't like the UID regeneration, so I never actually went through with it.


    Why not to change patientID of a dicom image


    Adding to the complexity, depending on your use case, you can also end up with the object in memory carrying metadata different from the object on disk.

    I really would like the ability to control the data the way you do, but CQ's architecture doesn't support it.


    You could update the Lua (or call another script) and have it rename the folders as it updates the objects.


    I've not tried it and the idea only came to me while writing this, but you may be able to have the ImportConverter do something a bit tricky.


    pseudo:

    ImportConverter0 = update the Patient ID, delete the stored study (delete, not destroy!), then forward at study level so the corrected object re-enters through ImportConverter1

    ImportConverter1 = whatever your normal ImportConverter rule would be



    As for modifying your saved data, I've got no experience there, so I'd be hacking it together, probably with an OS-hosted script along the lines of the sketch below.
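
    Purely as illustration, a minimal sketch of what that OS-hosted script could look like, assuming pydicom is available. The folder, the new ID, and the rename convention are all hypothetical, and ConQuest would still need a database regeneration afterwards so the index matches the files:

    Code
    # Hedged sketch: rewrite PatientID on files already saved to disk and
    # rename the patient folder to match. Paths and IDs are hypothetical.
    import pathlib

    import pydicom

    OLD_DIR = pathlib.Path("/conquest/data/20162809")  # hypothetical folder
    NEW_ID = "20169999"                                # hypothetical new ID
    NEW_DIR = OLD_DIR.with_name(NEW_ID)

    NEW_DIR.mkdir(exist_ok=True)
    for path in sorted(OLD_DIR.glob("*.dcm")):
        ds = pydicom.dcmread(path)
        ds.PatientID = NEW_ID            # update the header on disk
        ds.save_as(NEW_DIR / path.name)
        path.unlink()                    # drop the original copy
    OLD_DIR.rmdir()

    # CQ's database still indexes the old ID, so follow this with a
    # database regeneration so the DB matches what's on disk.

    Note this deliberately leaves all UIDs untouched, which sidesteps the UID regeneration I disliked, though Marcel's sticky still applies.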

    We run a cluster of 4 nodes and would like to enable query/retrieve through them. Currently they each go by the same AE title, as we have replicated the dicom.ini to each server. We enforce config versioning through git: all servers in the cluster pull their dicom.ini from a single git project. This way we only have to change one location, and we get proper replication with historic versioning available for rollback.



    The problem is that MyACRNema is the same for all app servers. While Query works, Retrieve most often results in the store triggered by the move being load-balanced to the wrong node.


    Can we launch dgate from the command line with parameters, including MyACRNema? Does this parameter update dynamically, and would putparam be a viable option?
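
    In the meantime, one workaround that keeps the single-git-project model is to track a template instead of the finished dicom.ini and render a per-node AE title at deploy time. A minimal sketch, where the {MYACRNEMA} placeholder and the host-to-AE mapping are both assumptions:

    Code
    # Hedged sketch: render a per-node dicom.ini from one git-tracked
    # template. Placeholder token and host-to-AE map are assumptions.
    import socket

    AE_BY_HOST = {                  # hypothetical node -> AE title map
        "cq-node1": "CONQUEST1",
        "cq-node2": "CONQUEST2",
        "cq-node3": "CONQUEST3",
        "cq-node4": "CONQUEST4",
    }

    with open("dicom.ini.tmpl") as f:
        template = f.read()

    ae_title = AE_BY_HOST[socket.gethostname()]
    with open("dicom.ini", "w") as f:
        f.write(template.replace("{MYACRNEMA}", ae_title))

    Each node still pulls the same project; only the rendered file differs, so the rollback history stays in one place.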

    Somewhat common with our instances of dgate as well. For our service start and restart, we put in something like 100 tries before it fails. If you do a netstat -aln, you will see there are still live sockets on 5678.


    I've found that smaller instances with less work in flight terminate dgate more quickly and reliably.
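
    A hedged alternative to the fixed retry count is to poll until the port can actually be bound again before starting dgate. The port and timeout below are assumptions, so match them to your dicom.ini:

    Code
    # Hedged sketch: wait until dgate's listen port is truly free before
    # restarting. Port and timeout are assumptions; match your TCPPort.
    import socket
    import time

    PORT = 5678        # dgate listen port
    TIMEOUT_S = 120    # give up after two minutes

    def port_is_free(port: int) -> bool:
        """Try to bind; lingering sockets make the bind fail."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("0.0.0.0", port))
                return True
            except OSError:
                return False

    deadline = time.time() + TIMEOUT_S
    while time.time() < deadline:
        if port_is_free(PORT):
            print("port free, safe to start dgate")
            break
        time.sleep(1)
    else:
        raise SystemExit("port still held; refusing to restart dgate")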

    I built on Ubuntu 18.04 and 16.04 boxes and did not have the same problem. I'm guessing it's something to do with the glibc versions that are standard in different distributions. We're probably going to migrate the cluster to Ubuntu based on this.


    /edit: for fun, I changed the PDU parameter to 256k without the if/else. The build still works even with that!

    I've built a new environment for a new project. It's running CentOS 7, as that is our standard flavor given the scoping towards enterprise roles.


    I've now run 1419d and 1419b (copied from a known-functional host) on this new box and am seeing the same odd image interleaving, which I've never seen CQ do before. When I retrieve these images from any number of different viewers, the problem remains. I'm about to blow up the VM and take a clone of a previous environment, though I was hoping to start fresh and create some new standards for our implementations moving forward. Any thoughts, Marcel?


    This occurred today. I was able to use the webserver to move the object to the destination successfully. It's got something to do with the object in memory in the ExportConverter.


    I wonder if this could be resolved by adding some sort of time buffer on the ImportConverter so that cine clips aren't released to ExportConverters for X number of seconds; see the sketch below.
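
    I haven't tried this against ConQuest, but since converters can shell out (as in the logs above), a hypothetical gate script could hold a file back until it has stopped changing. The settle time and the idea of calling it from a converter are both assumptions:

    Code
    # Hedged sketch: block until a file has been quiet for QUIET_S seconds,
    # so a large cine clip isn't exported while it is still being written.
    # The threshold and the converter hookup are untested assumptions.
    import os
    import sys
    import time

    QUIET_S = 30  # assumed settle time for large cine clips

    def wait_until_quiet(path: str, quiet_s: float = QUIET_S) -> None:
        """Return once the file's mtime is at least quiet_s seconds old."""
        while True:
            age = time.time() - os.path.getmtime(path)
            if age >= quiet_s:
                return
            time.sleep(quiet_s - age)

    if __name__ == "__main__":
        wait_until_quiet(sys.argv[1])

    A converter would invoke it with the file path ahead of the forward step; when it exits, the clip has settled.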

    We have 4 nodes in a balanced cluster, each with its own DB. Would the changeUID function result in differing UIDs on each environment if repeated sends were balanced to other nodes?


    We haven't really considered using shared storage or a shared db for these systems, but doing so could possibly help with tasks like this.
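
    If a shared DB stays off the table, one hedged workaround is to derive the new UID from the old one rather than from local state, so every node computes the same mapping independently. pydicom's generate_uid is deterministic when given entropy sources, though the org root prefix below is a placeholder:

    Code
    # Hedged sketch: map an original UID to a new one deterministically, so
    # repeated sends landing on different nodes agree without a shared DB.
    # The org root prefix is a placeholder; substitute your own.
    from pydicom.uid import generate_uid

    ORG_ROOT = "1.2.826.0.1.3680043.9999."   # placeholder prefix

    def remap_uid(original_uid: str) -> str:
        # Same input -> same hash -> same UID on every node.
        return generate_uid(prefix=ORG_ROOT, entropy_srcs=[original_uid])

    print(remap_uid("1.3.12.2.1107.5.2.50.176059.30000023011712250368900000016"))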