Posts by marcelvanherk

    Hi,


    If you have the list of invalid patient IDs (and optionally the studies they are in), modifypatientid is the right choice. It will change the patient ID, change UIDs to avoid UID clashes, enter the new file into the server, and delete the original file. The scripts would have to run at two levels: an image lister (dgate --imagelister) to list all files in the incorrect studies and generate a batch file. That batch file is then run to change all of them. These scripts can be found on the forum (http://www.image-systems.biz/f…5&hilit=imagelister#p6565). Detecting which patient IDs are wrong is beyond the scope of the server.
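
    A minimal sketch of what the generated batch file could look like (the exact --modifypatientid argument syntax here is an assumption, and the file names and new ID are hypothetical; the forum scripts linked above show the real form):


    dgate --modifypatientid:file0001.dcm,NEWID0001
    dgate --modifypatientid:file0002.dcm,NEWID0001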


    The virtual server will work, but it only collects data on request. To get the latest studies (or to prefetch in a virtual server context), you would need to use the "get" script command, which has no option (yet) to access a preset number of studies, but does filter on study date and modality. For example, this command would prefetch all CT studies of the last year when receiving a new CT of the same patient:


    ImportConverter0 = ifequal "%m", "CT"; get patient modality CT now-365+000 from PACS


    The presence of the "modality" and/or "now" items forces the server to collect all SERIES with matching modality and date for the incoming patient. The get will by default execute 10 minutes after the ImportConverter runs, to avoid multiple executions for each incoming image.
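
    The same pattern works for other modalities; for instance, a straightforward variation of the line above for MR:


    ImportConverter0 = ifequal "%m", "MR"; get patient modality MR now-365+000 from PACS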


    Marcel

    Hi,


    this means there was a dgate.exe still running that was not controlled by the GUI app, so you were unable to restart it, as the port was blocked. Kill the left-over dgate.exe with the task manager. Port 5678 is then free.
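
    If you prefer the command line over the task manager, the standard Windows taskkill tool should do the same (a sketch, assuming Windows XP or later):


    taskkill /F /IM dgate.exe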


    Marcel

    Hi,


    Stop the server.


    Edit dicom.ini, replacing the FileNameSyntax = 3 line with, e.g.:


    FileNameSyntax = %name\%studydate\%series\%modality%sopuid[46,56].dcm
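
    With this syntax, a stored file would end up at a path like the following (the patient name, date, series number and UID fragment are illustrative):


    DOE^JOHN\20070812\1001\CT1243911268.dcm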


    Start the server.


    This should work! Which version are you using?


    Marcel

    Hi,


    it is 100 MB: 2000 chars are needed per image to be transmitted (filenames, UIDs, etc.). So if you try to move 1 million images, you would need 2 GB of memory, which won't work as it does not fit into memory on a 32-bit machine.


    The movepatient trick will not do what you want, but a movestudy will be workable.


    The following command lists date and UID of all studies (unwanted information is formatted as %0.0s, i.e., not printed at all):


    dgate "--studyfinder:local||%s,%0.0s%0.0s%0.0s%s" > studies.txt


    Studies.txt will contain lines such as:


    20070812,1.2.826.0.1.3680043.2.135.733552.44047153.7.1243911268.234.67


    Import this file into Excel, sort it on date (the part before the comma), and use Excel or some kind of macro editor to extract the UIDs into a batch file with lines like these:


    dgate --movestudy:local,server2,1.2.826.0.1.3680043.2.135.733552.44047153.7.1243911268.234.67
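
    As an alternative to Excel, a small Windows batch sketch could generate the same commands (sorted.txt and moveall.bat are hypothetical names; a plain sort on lines starting with YYYYMMDD orders them by date):


    sort studies.txt > sorted.txt
    for /f "tokens=2 delims=," %%u in (sorted.txt) do echo dgate --movestudy:local,server2,%%u>> moveall.bat


    Note that the %%u form is for use inside a .bat file; typed directly on the command line it would be %u.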


    The batch file will move data study by study. Alternatively, you can create a batch file like this, with one line per date (range); it will move data based on the study date. If a single move is too big, though, you will run into the same memory problem:


    dgate --movestudies:local,server2,20010101-20011231
    dgate --movestudies:local,server2,20020101


    Marcel

    Hi,


    you can use LRUSortOrder to make the system purge (when deleting because the disk is full) based on the study date: it will first delete the patients whose most recent study date is oldest. But typically, people do not delete at all; they just add disk space.
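
    A sketch of what that could look like in dicom.ini (the value shown here is an assumption; check the manual of your version for the exact allowed values):


    LRUSortOrder = StudyDate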


    Marcel

    Hi,


    There are 3 programs (threads) at work: the c-move sender (query/move page), the server that sends, and the server that receives. It is indeed possible that the link with the c-move sender is broken (no "slice xx out of xx" is shown), but that the sender and receiver are still communicating. It all depends on the timeouts.


    Just from the memory size, though, it is impossible to send all data at once. The query is stored in memory, and that takes thousands of chars per image. Trying to send a batch of 50,000 images, the sending system needed 100 MB to store all that information (50,000 images × 2,000 chars ≈ 100 MB).


    Marcel

    Hi


    To get going, you could also set "EnableReadAheadThread" to 0 in dicom.ini. This avoids a lot of preprocessing (on which it seems to time out), and the copying starts at once. I will also try to optimize the preprocessing speed.
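
    In dicom.ini that would be a single line (its placement in the [sscscp] section is an assumption; the option name comes from the text above):


    EnableReadAheadThread = 0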


    But a batch file is preferable: without it, if an error occurs you would not know where to restart.


    Marcel

    Hi,


    The problem seems to be a TCP/IP timeout, because it takes so long to inspect all files before sending.


    I would suggest creating a batch file with dgate --movepatient:local,server1,patientID commands and running it on server2. At least this would be restartable in case of error, and it avoids the timeout problem.
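
    A sketch of such a batch file (the patient IDs are hypothetical placeholders):


    dgate --movepatient:local,server1,0001234
    dgate --movepatient:local,server1,0001235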


    Marcel

    Hi,


    I believe Conquest with MySQL would make a good setup, and your hardware sounds quite adequate. With MySQL and Conquest, the database tables will take about 0.4 GB per million images. Your existing DICOM files might be integrated into Conquest without problems: just set the data directory to the same folder and try a regenerate (that will take a few weeks to complete).
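
    The regeneration can be run from the command line (assuming the classic -r switch of dgate, with -v for verbose output; check the manual of your version):


    dgate -v -r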


    As long as the database is fast (MySQL is), there is no size limit. People have reported storing tens of millions of images; we have 20 million images in MSSQL. MySQL is better, I believe.


    Marcel