Posts by garlicknots

    Hi Marcel


    We have this method in place because we wanted to ensure the UID generation was unique per exam and not per send. With newuids, someone could send the same dataset repeatedly and duplicate it, couldn't they?


    When you mention inconsistent dataset, where would the inconsistency be? The filesystem / the database?


    The edit occurs on a single object, creating a new series. I am not clear on how an inconsistency with the UIDs can exist. This runs on an ImportConverter, so it should all be pre-filesystem & db, right?

Forgot to post the resolution on this: it was resolved by removing a trailing & from the stats script.


    export reporter:

    curl -i -XPOST 'http://0.0.0.0:8086/write?db=conquest_stats' --data-binary "conquest_exports,hostname=`hostname -s`,destinationae=$1,modality=$2,calledae=$3,state=$4 callingae=\"$5\",sopuid=\"$6\",mrn=\"$7\",accession=\"$8\""


    import reporter:
    curl -i -XPOST 'http://0.0.0.0:8086/write?db=conquest_stats' --data-binary "conquest_imports,hostname=`hostname -s`,modality=$1,calledae=$2 callingae=\"$3\",sopuid=\"$4\",mrn=\"$5\",accession=\"$6\""
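The reporters above build InfluxDB v1 line-protocol strings inline. As a minimal sketch of what that string looks like once assembled (tags before the space, quoted fields after it), here is the import case pulled into a standalone function; `build_import_line` is a hypothetical helper name for illustration, not part of Conquest:

```shell
#!/bin/sh
# Illustrative sketch of the line-protocol string the import reporter sends.
# Shape: measurement,tag=val,tag=val field="val",field="val"
# Tag values (hostname, modality, calledae) must contain no spaces or commas;
# field values are double-quoted strings.
build_import_line() {
  modality=$1; calledae=$2; callingae=$3; sopuid=$4; mrn=$5; accession=$6
  host=$(hostname -s)
  printf 'conquest_imports,hostname=%s,modality=%s,calledae=%s callingae="%s",sopuid="%s",mrn="%s",accession="%s"\n' \
    "$host" "$modality" "$calledae" "$callingae" "$sopuid" "$mrn" "$accession"
}

build_import_line CT CONQUEST PACS1 1.2.3.4 12345 ACC001
```

The resulting line is what `--data-binary` posts to `/write?db=conquest_stats`; separating the build step makes the tag/field split easier to eyeball when a write is rejected.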

    We have an issue with our diagnostic viewer which causes cine clips to sometimes blend into the stack of stills when we'd like to have them easily identifiable as cine clips.


    To split these clips, we built a lua which generates a new (and consistent) series instance UID and changes the series description to CINE. This fires properly, but at times it writes an object which will not successfully export. The failure is not consistent: I do not know how to make it fail, and I do not have access to a study which will cause this issue. We see it several times a month.


    I noticed yesterday that when this file is written, it's a DCM file instead of a v2.

    We use FileNameSyntax=3 so I'd expect to see a v2 object.


    Here is the lua:


    Got it, thanks.


    Hey marcelvanherk I've got a problem for ya.


    We had an exam arrive today with 1024 characters in ReasonForStudy (0032,1030), which choked our Exports. We've been slowly building a DICOM normalizer lua that we run against all incoming images to clean up VL mismatches that our long-term archive does not like, and ReasonForStudy is one of the tags we fire against.


    I spent a little while tonight trying to figure out if we had an issue in our lua, and after some trial and error I've found that if a DICOM tag value is greater than 255 characters, the comparison will not fire. What can be done to extend that character length limitation?


    Here's a snip of the lua


    Code
    for i, TAG in pairs(VL64) do
      if Data[TAG] and Data[TAG]:len() > 64 then
        Data[TAG] = string.sub(Data[TAG], 1, 64)
        print('ic-dicomnormalizer.lua UPACS THREAD ' .. Association.Thread ..
              ': has truncated: ' .. Data[TAG] .. ' to 64 characters')
      end
    end
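For reference, the truncation rule itself is simple enough to reproduce outside dgate. Here is a sketch of the same 64-character cap in plain shell; `truncate_64` is a hypothetical name used only for this illustration:

```shell
#!/bin/sh
# Illustrative re-implementation of the 64-character truncation rule from
# the lua snippet above. printf '%.64s' emits at most 64 characters;
# shorter values pass through unchanged.
truncate_64() {
  value=$1
  if [ "${#value}" -gt 64 ]; then
    printf '%.64s\n' "$value"
  else
    printf '%s\n' "$value"
  fi
}

# Feed it a 100-character value; the result is cut to 64 characters.
truncate_64 "$(printf 'x%.0s' $(seq 1 100))"
```

This can be handy for spot-checking values pulled out of a dump when testing whether the lua's comparison should have fired.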

    Definitely can't go without retries. We've given that an attempt in the past, but there are too many erroneous failures to rely on it. The retry firing on the entire converter is helpful for a few reasons: we have the converters notifying an InfluxDB database of attempts so we can monitor through Grafana, and seeing in Grafana when something is retrying repeatedly is one of our countermeasures for managing 'clogs.'


    Moving to 1.5.0 is something we could consider. We've not moved to 1.4.19d yet so an upgrade is somewhat due. When do you foresee 1.5.0 releasing?

    We are using CQ for many workflows and enforce retries when transfers fail. The behavior we see is that a failed send will retry indefinitely until it succeeds, and no other objects can send through the EC until the failure is handled as the EC expects.


    Because of this, we are separating workflows into different ECs so that they do not impact one another when there is a transfer problem. We're getting closer and closer to the documented limit of 20 ECs. Can this limit be extended, and/or is there a way to review the retry behavior so the ECs do not completely halt when there are transfer problems?

    Hello,


    My site is looking to try out DICOM TLS, but we don't seem to have a software solution which supports it natively. marcelvanherk, is this feature possibly coming in a future release?


    Is there another way I could make this work without native support within dgate?

    The memory increase has had no effect. We observed this issue again today.


    What's so curious is that this affects ExportConverters themselves. We have more than one ExportConverter sending to the same destination, and one will fire successfully while the other does not. Stop/start dgate and everything is fine again.

    Hi frank, we've got a CentOS 7 cluster running CQ. We did not compile it ourselves, but we do have a working (though rarely used) web portal and can probably help. Our webserver setup is a bit hacky from my point of view, but then again that could be on me more than anything else.


    I presume you have dgate and the supporting webserver config in your cgi-bin; is that true?

    File permissions accurate?

    File ownership accurate?


    Newweb or classicweb?

    This occurred again today on one node in the cluster. This time, it didn't appear to be driven by load as we have observed in the past.

    The red line in the top-left of the graph marks the rough time window in which vlconquest01 was misbehaving (failing to send all studies it stored). We stopped and started dgate and the behavior returned to normal.

    Hi Marcel - do you have a preference for how requests come to you? I can name a few right now, but I don't want to dump them in the wrong location. One (seemingly) small enhancement that would be nice for us would be for timestamps in logfiles to include milliseconds.