Posts by Flyboy

    Just to keep other interested parties here updated on this issue:
    We set up a VPN to a new medical facility we are dealing with.
    When they sent test images to confirm everything was working correctly, they were sending us images at their maximum throughput speed, 200MB/sec.
    This was over a 50ms latency line, about the same latency as I have been testing with internally here.
    The VPN setup and everything else is the same as before.


    So this proved to me that Conquest is able to achieve higher speeds at that latency.


    Seeing that, I wanted to do some more testing here to see if I could figure out our slowness when sending.
    I tested sending images from several other programs:
    K-PACS: same speed issue.
    SendToPACS (Java based): was able to send to Conquest at maximum speed.


    When checking some logs, I found they are sending the files in different packet sizes, but I don't know whether that is causing this or not:
    (screenshot: http://i.imgur.com/Dt4kyuV.jpg)
    (screenshot: http://i.imgur.com/pdWeK7f.jpg)


    Tested more by tweaking network settings, network cards, VPN tunnels, ...
    But I could not get Conquest to go at the correct speeds.
    3Mbps seems to be what I get stuck at.


    Next I hooked up a new test lab by VPN to my live environment and sent images across.
    To my surprise, I suddenly got the maximum speed out of my connection.
    The only difference is that the test servers run Windows 2012R2 while my live ones run Windows 2008R2.
    Hardware, Conquest versions, and settings are all the same.


    I spun up another 2012R2 server in a different location and was able to replicate the vastly improved speeds (going from 3Mbps to maximum connection throughput).


    Unless this finding makes a lightbulb go off for someone, my next recommendation to my customer is going to be to migrate all their image servers from Windows 2008R2 to Windows 2012R2.


    David

    They basically mean you opened Conquest and browsed the database and patients through the "Browse database" tab.

    [HPACS] db extract full patientlist for GUI

    "HPACS" - is your pacs server name
    "db extract full patientlist for GUI" - the first time you open the browse database tab it will pull in a full list of patients


    [HPACS] db extract studies for GUI of patient: A 2688042-6
    [HPACS] db extract series for GUI of patient: A 2688042-6
    [HPACS] db extract series for GUI of patient: A 2688042-6
    [HPACS] db extract studies for GUI of patient: A 2502854-1
    [HPACS] db extract series for GUI of patient: A 2502854-1
    [HPACS] db extract series for GUI of patient: A 2502854-1

    After pulling in the patient list, it starts getting the details for all patients, to put them on the grid.
    "db extract studies for GUI of patient" - starting with the studies for that particular patient.
    "db extract series for GUI of patient" - then going more granular for each of the studies, to see how many and which series were taken for them.


    Re-reading acrnema.map from GUI
    This is most likely because you made a change to the known DICOM providers and pressed Save. Conquest will then re-read all providers into memory again.
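
    For context, acrnema.map is the plain-text file that holds the known DICOM providers, one per line: AE title, IP address or hostname, port, and compression code (e.g. un for uncompressed). The two entries below are made-up examples, not anything from your setup:

    Code
    CONQUESTSRV1    127.0.0.1       5678    un
    SOMEMODALITY    192.168.1.50    104     un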

    That would be great.
    If there is a reason for the name and PatientID to change together that I haven't thought of, it can stay, of course.
    But then the label of the option might need to change, since currently it says "Change PatientID" and doesn't mention the name.
    It is also a departure from the previous version, where only the ID was changed.

    I was able to recover the images.


    After modifying the SQL table with the 2 extra fields, the images get modified and are no longer deleted.


    But with some testing (on test images) it seems there is no longer an option to just change the patient ID.
    All the options (change ID for patient, study, series) seem to also change the name.
    In most cases, when a tech puts a wrong ID in, I just need to change the ID and the name needs to stay the same.
    Would there be an option in the final version to just change the ID again?

    I just changed a patient ID on 2 of our PACS servers.
    One is running 1.4.17e and this one changed the patient ID correctly, as I've done a million times before.
    When I did it on the 1.4.19 server, I got a couple of screenfuls of errors in the logs, and all except for 1 image of each study were deleted.
    All details on the images were also changed to the patient ID instead of the name, ...


    The only thing I did differently than normal is that I selected "change patient ID" for the patient; under the old interface, the only option available was to change it study by study.


    Sounds about right for CPU load.
    Conquest processes everything single-threaded, so on a 12-core CPU, 1 core under full load translates to about 8.3% overall (1/12 of total capacity).
    The way to speed up Conquest would be to go for a faster-clocked CPU; more cores are not necessarily better.
    One of the Xeons at 3.5GHz would be a good bet, or an overclocked PC might work well if speed is the attribute you care about most.

    We are planning on migrating to new servers and recompressing all our images from nk1 to jpg compression.
    During this migration we are also planning on updating all our patient IDs to unique values.
    We are currently running into issues where certain patients generate duplicates.


    One way we thought of doing this would be to first migrate all the data and then run batch files against dgate with --modifypatid...
    One run for each patient that needs to be modified.
    That way, we would need to touch all images multiple times, which will take more time.


    I started testing with importconverters and was wondering if we could make all changes on the fly, as the images come in,
    through an importconverter with something like:
    Code
    ImportConverter0 = Data.PatientID = string.gsub(Data.PatientID, 'oldpatientid1', 'newpatientid1');Data.PatientID = string.gsub(Data.PatientID, 'oldpatientid2', 'newpatientid2');Data.PatientID = string.gsub(Data.PatientID, 'oldpatientid3', 'newpatientid3');Data.PatientID = string.gsub(Data.PatientID, 'oldpatientid4', 'newpatientid4')


    I was wondering if I can put 150k patients in a row in an importconverter,
    or if there is a way to read them from an array, a CSV, or something else.


    Or is there a better way of mass-changing patient IDs like this?
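
    One possibility (just a sketch, not something I have tested against Conquest yet): keep the old-to-new mapping in an external CSV and do a table lookup in the converter instead of 150k chained gsub calls. The file path, the file layout, and the idea of caching the table in a global so the CSV is only read once per Lua state are all my own assumptions, and how to hook a multi-line script into an ImportConverter (e.g. via an external .lua file) depends on the Conquest version:

    Code
    -- Hypothetical remap script. Assumes a CSV at C:\conquest\patid_map.csv
    -- with one "oldid,newid" pair per line (both path and layout are made up).
    if patidmap == nil then
      patidmap = {}
      for line in io.lines('C:\\conquest\\patid_map.csv') do
        local oldid, newid = string.match(line, '([^,]+),([^,]+)')
        if oldid and newid then
          newid = string.gsub(newid, '%s+$', '')   -- strip trailing CR/whitespace
          patidmap[oldid] = newid
        end
      end
    end
    -- Only touch the PatientID if it appears in the mapping table
    if patidmap[Data.PatientID] ~= nil then
      Data.PatientID = patidmap[Data.PatientID]
    end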

    below are some test results using different compressions and different versions:


    The log below is from a server that is running the latest 1.4.19beta.
    This was working just fine before I put the update in place.


    Code
    20160625 16:25:17 Started zip and cleanup thread
    20160625 16:25:17 Monitoring for files in: E:\pacsimagesupmc\incoming\
    20160625 16:25:17 DGATE (1.4.19beta, build Sat Mar 19 16:31:20 2016, bits 64) is running as threaded server
    20160625 16:25:17 Database type: built-in SQLite driver
    20160625 16:25:17 Started 1 export queue thread(s)

    As part of some testing I also upgraded one of my live servers to the beta release here.
    The ExportConverters I have in place to forward images are no longer working.
    It seems like they are completely ignored now.
    No sign of life in any of the logs either.
    I tested on a brand new installation by just adding the code below to the bottom of my dicom.ini file, and that is also not forwarding any images anymore.


    Code
    # Configuration of forwarding and/or converter programs to export DICOM slices
    ExportConverters = 1
    ExportConverter0 = forward to STENTOR_SCP
    ForwardCollectDelay = 600
    MaximumExportRetries = 0
    MaximumDelayedFetchForwardRetries = 0

    Just some more info from testing I have done between 2 virtual machines, with simulated latency introduced.
    Hopefully this might help in finding some kind of solution.


    Test setup:
    2 virtual machines on Hyper-V
    each with a virtual 10Gbps network card
    same amount of data to be copied: 7 patients with 3.3GB of data in total


    Transfer speed without any limits imposed: 450Mbps
    Transfer speed with 25ms latency: 100Mbps
    Transfer speed with 50ms latency: 60Mbps
    Transfer speed with 100ms latency: 50Mbps
    Transfer speed with 200ms latency: 8Mbps

    Checked the logs from last night; the only PDU length I can find is 16384.
    Just ran it with the new dgate executable again, and even with full debug logging on, this is the only mention of PDU length I get:
    6/23/2016 7:29:22 AM [CONQUEST] Application Context : "1.2.840.10008.3.1.1.1", PDU length: 16384


    And I just went back to double-check: it was running the new version:
    20160623 07:32:40 DGATE (1.4.19beta, build Wed Jun 22 22:23:34 2016, bits 64) is running as threaded server
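
    One back-of-the-envelope thought (purely my own assumption, not something the logs confirm): if the sender only keeps about one 16384-byte PDU in flight per round trip, the throughput ceiling is roughly the window size divided by the RTT, which lands in the low single-digit Mbps range at these latencies. Whether that is actually what is happening here I can't say:

    Code
    -- stop-and-wait style ceiling: at most one window of data per round trip
    pdu_bytes = 16384                                -- PDU length from the log above
    for _, rtt in ipairs({0.020, 0.050}) do          -- the ~20ms and ~50ms links in these tests
      print(rtt, pdu_bytes / rtt * 8 / 1000000)      -- ~6.6 Mbps and ~2.6 Mbps respectively
    end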

    Thanks for the quick turnaround on the test exe file.
    I tested with the new dgate executable and the speed stays at 3.1Mbps.
    Tested from Conquest to Conquest.
    Tested requesting images from K-PACS in the second location.
    The speed is the same in both instances, so it seems like an issue on the sending side, i.e. on the Conquest server.


    Tested with the server serving NKI-compressed and uncompressed dcm files, and sending them both as jpg and uncompressed between servers; no difference there.

    Our latency is around 20ms between the servers I am currently testing with.
    The speed I get stuck at is always 3.1Mbps; it doesn't matter whether they are large images, small images, uncompressed, jpg compression, ...


    Any help would be greatly appreciated