A basic movescu example for retrieving DICOM images

I'm trying to use dcm4che for downloading images from the free http://www.dicomserver.co.uk/. I've cloned and checked out the 5.13.2 version and built it using mvn install. Now when I go into the dcm4che-assembly/target/dcm4che-5.13.2-bin/dcm4che-5.13.2/bin directory and try to download a StudyInstanceUID:
./movescu -c DCMQRSCP@www.dicomserver.co.uk:104 -m StudyInstanceUID=1.2.826.0.1.3680043.11.105 --dest STORESCP
I get the error:
...
(0000,0902) LO [Unknown Move Destination: STORESCP] ErrorComment
...
The error indicates that it can't connect to the receiver. I've tried to run:
./storescp -b STORESCP:11112
without much success. I've also tried to run the dcmqrscp with similar outcomes.
My humble request: Please provide a working example of the movescu.
Details
I can get the findscu to work without issues, e.g.:
./findscu -c DCMQRSCP@www.dicomserver.co.uk:104 -m StudyInstanceUID=1.2.826.0.1.3680043.11.105 -r PatientID
gives:
(0008,0005) CS [] SpecificCharacterSet
(0008,0052) CS [STUDY] QueryRetrieveLevel
(0008,0054) AE [DCMQRSCP] RetrieveAETitle
(0010,0020) LO [PAT004] PatientID
(0020,000D) UI [1.2.826.0.1.3680043.11.105] StudyInstanceUID
Similarly the getscu command seems to work:
./getscu -c DCMQRSCP@www.dicomserver.co.uk:104 -m StudyInstanceUID=1.2.826.0.1.3680043.11.105
Creates the following DICOM files:
ls 1* -lh
-rw-rw-r-- 1 max max 12M jul 7 12:16 1.2.276.0.7230010.3.1.4.39332053.7432.1527748041.31
-rw-rw-r-- 1 max max 150K jul 7 12:17 1.2.276.0.7230010.3.1.4.8323329.11391.1527939718.955155
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.100
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.104
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.108
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.112
-rw-rw-r-- 1 max max 6,0M jul 7 12:16 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.80
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.84
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.88
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.92
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.96
Lastly, I'm sorry if this question falls into the duplicate category. After spending days without finding a working movescu example on either StackOverflow or the dcm4che-forum, I've given up searching. The goal is to have an example to use so that I can modify the underlying Java code for my own purposes. Also let me know if you're interested in the entire movescu dump.
Update
After Tarmo's helpful tip I tried to (1) use the correct AE title & port and (2) switch to Orthanc. Unfortunately I still can't retrieve an image from dicomserver.co.uk, but the Orthanc solution worked.
Below is the summary of the outcomes:
Alt. 1: AE title & port compliance
As it turns out, part of my issue was RTFM-related. The site's instructions state:
Use any calling and called AE titles you like (making them specific to you will assist if logs need to be examined), but if you wish to use C-MOVE, ensure that the calling and destination AETs are the same, and that you listen on port 104.
My first attempt was to align the two AE-titles:
./movescu -c STORESCP@www.dicomserver.co.uk:104 -m StudyInstanceUID=1.2.826.0.1.3680043.11.105 --dest STORESCP
This does not work, and it turns out that the local port of the association is random. At both ends (server log + local) one can see which port was used:
14:23:47,539 INFO - MOVESCU->APA(1): close Socket
[addr=www.dicomserver.co.uk/88.202.185.144,port=104,localport=57985]
The localport changes between each attempt. Things that I've tried so far:
Variants of --dest: (1) STORESCP:104, (2) STORESCP@localhost:104, (3) other AE titles
Starting up an SCP through sudo ./dcmqrscp -b STORESCP:104 --dicomdir /home/max/tmp/dcm (the sudo is due to the low port number) and calling with the AE title only as dest
Same as above but with the -b option: ./movescu -c STORESCP@www.dicomserver.co.uk:104 -b STORESCP@localhost:104 -m StudyInstanceUID=1.2.826.0.1.3680043.11.105 --dest STORESCP
Same as above without the SCP and with my local IP/external IP (no firewall changes made)
I've also tried USB tethering through my phone to circumvent the router, but the phone operated over IPv6, not IPv4
It would still be nice to know how to set this up, as it could be quite useful. My guess is that since the server connects back to the caller's raw IP address, port 104 needs to be forwarded to the current machine. Being new to the DICOM protocol, I find many of these features somewhat cryptic...
Alt 2: Local Orthanc server (WORKS!)
Here's the full set-up for anyone that wants to get a test system up and running (using Ubuntu 18.04):
sudo apt install orthanc and check that the service is started: systemctl status orthanc.service
In /etc/orthanc/orthanc.json uncomment the line with "sample" : [ "STORESCP", "localhost", 2000 ] and restart the server: systemctl restart orthanc.service
Go to http://localhost:8042 (unless you've changed the web-port at /etc/orthanc/orthanc.json)
Navigate into upload and find a dcm-file for uploading (you can find dcm-files to download here: https://www.dicomlibrary.com/ or you can use the getscu from above)
Drag and drop the dcm-file into http://localhost:8042/app/explorer.html#upload + press "Start the upload"
Go to patients and get the new StudyInstanceUID for the uploaded image
Start an SCP service with the STORESCP AE title and port 2000 that you allowed in /etc/orthanc/orthanc.json, e.g. ./dcmqrscp -b STORESCP:2000 --dicomdir /home/max/tmp/dcm
Call the movescu with the -b to the above SCP with the new StudyInstanceUID (shortened below for readability), e.g.:
./movescu
-c ORTHANC@localhost:4242
-m StudyInstanceUID=1.2.826.0.1.3680043.8.....
-b STORESCP@localhost:2000
--dest STORESCP
And that's it!
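Digging the new StudyInstanceUID out of the web UI can also be scripted against Orthanc's REST API. A minimal sketch using only the Python stdlib, assuming the default web port 8042 and no authentication:

```python
import json
import urllib.request

BASE = "http://localhost:8042"   # Orthanc's default web port

def study_instance_uids(base: str = BASE) -> list[str]:
    """Ask Orthanc for the StudyInstanceUID of every stored study."""
    with urllib.request.urlopen(f"{base}/studies") as r:
        study_ids = json.load(r)          # Orthanc-internal identifiers
    uids = []
    for sid in study_ids:
        with urllib.request.urlopen(f"{base}/studies/{sid}") as r:
            uids.append(json.load(r)["MainDicomTags"]["StudyInstanceUID"])
    return uids

# print(study_instance_uids())
```

Note that the entries of /studies are Orthanc's own identifiers; the DICOM UID you feed to movescu lives under MainDicomTags.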

Please read the C-MOVE information on the http://www.dicomserver.co.uk/ homepage again to figure out how to set up your query. Your syntax for the command is correct, but some details are wrong.
Basically you need two things:
Your calling AE title must be the same as the destination AE title; you have them different at the moment
Your storescp must be accessible from the public internet on the same port that you used to connect to dicomserver.co.uk; in your example that is 104. Their server needs to make a new TCP connection back to your computer for it to work.
I think it would be easier to install a lightweight PACS on your local machine to test your applications with (e.g. Orthanc). Getting DICOM C-MOVE to work over public internet is asking for trouble in my opinion.
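That second point is the one that usually bites people behind NAT: unlike C-GET, which reuses the association you opened, C-MOVE makes the remote server dial a brand-new TCP connection back to you. A toy sketch of that pattern in plain Python sockets (no DICOM involved; every port and name here is made up for illustration):

```python
import socket
import threading

def fake_pacs(listen_port: int) -> None:
    """Plays the remote PACS: accepts a 'C-MOVE request' naming a
    callback port, then dials BACK to the caller to deliver data."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    conn, (peer_ip, _) = srv.accept()
    callback_port = int(conn.recv(16).decode())   # the "MOVE destination"
    conn.close()
    back = socket.socket()                        # the reverse connection
    back.connect((peer_ip, callback_port))        # blocked by NAT in real life
    back.sendall(b"IMAGE BYTES")
    back.close()
    srv.close()

def move_scu(pacs_port: int, store_port: int) -> bytes:
    """Plays movescu + storescp: must already be listening before
    the request goes out, or the callback has nowhere to land."""
    scp = socket.socket()
    scp.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    scp.bind(("127.0.0.1", store_port))
    scp.listen(1)
    req = socket.socket()
    req.connect(("127.0.0.1", pacs_port))
    req.sendall(str(store_port).encode())         # tell the PACS where to send
    req.close()
    conn, _ = scp.accept()                        # the PACS calls us back here
    data = conn.recv(1024)
    conn.close()
    scp.close()
    return data

t = threading.Thread(target=fake_pacs, args=(10104,))
t.start()
received = move_scu(10104, 11112)
t.join()
print(received.decode())
```

If the callback connect() in fake_pacs were aimed at a NATed address with no port forwarding, it would simply time out, which is what the failed movescu attempts above amount to.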

Related

How to grant nginx permissions to phpMyAdmin on synology diskstation

I have a Synology Diskstation DS216se running DSM 6.2.3-25426. I've installed MariaDB 10, Web Station, PHP 7.2, and myPhpAdmin, but when I open it at http://diskstation/phpMyAdmin/ I get this error message
"Sorry, the page you are looking for is not found."
I'm using an nginx server in Web Station, and the error log at /var/log/nginx/error.log contains multiple entries like the following
*621 open() "/var/services/web/phpMyAdmin/js/vendor/jquery/jquery.debounce-1.0.5.js" failed (13: Permission denied)
The file, and all other files with permission denied entries in the logs, exist in the /var/services/web/phpMyAdmin/ directory - what permissions need to be granted to the directory for this to succeed?
I hit this as well. I managed to recover, but it effectively amounts to hard clearing any evidence of prior installs of Web Station, PHP 7.2, phpMyAdmin, and any other web related services. Then manually ripping out some bad directories with broken symlinks/permissions.
My hypothesis is that I tried to install adminer prior to this and, having not done any setup for Web Station et al., it put the filesystem in a bad state.
I am not willing to try installing adminer again to test this hypothesis.
What I did to fix this:
Backup what you need (e.g., any personal web site).
SSH into your diskstation. Please be aware of what you are doing and keep in mind the big picture. Don't go deleting random things.
Uninstall Web Station, PHP 7.2, Apache, phpMyAdmin, etc. Anything that Web Station would ultimately be inclined to read and serve up.
Verify that /var/services/web doesn't contain anything you care about, and delete it (sudo rm -rf /var/services/web).
Verify that /volume1/web doesn't contain anything you care about, and delete everything inside it (sudo rm -rf /volume1/web/*). You may need to chmod permissions for this. I ended up leaving the web directory itself intact, but nothing inside.
Reboot. Mount any encrypted disks, etc.
Check that /var/services/web now shows it is symlinked to /volume1/web, e.g. sudo readlink -e /var/services/web.
Also check permissions for /volume1/web, e.g. ls -al /volume1. It should be owned by root:root and have permissive (777) bits.
Install Web Station, PHP 7.2, and phpMyAdmin in that order.
After this, I could open phpMyAdmin and be served its login screen.
Debugging notes:
For me, when I SSH in, I see similar issues in the logs:
2020/12/17 10:36:35 [error] 32658#32658: *1028 "/var/services/web/phpMyAdmin/index.php" is forbidden (13: Permission denied),
ps says that the nginx workers run as the http user (uid=1023(http) gid=1023(http) groups=1023(http)).
The directory /var/services/web/ appears to be owned by root, both group and user:
# ls -al /var/services/web/
total 424
drwxr-xr-x 3 root root 4096 Dec 17 10:29 .
drwxr-xr-x 3 root root 4096 Dec 17 10:22 ..
-rw-r--r-- 1 root root 27959 Apr 13 2016 adminer.css
-rw-r--r-- 1 root root 82 Apr 13 2016 .htaccess
-rw-r--r-- 1 root root 387223 Apr 13 2016 index.php
drwxr-xr-x 10 root root 4096 Dec 17 10:29 phpMyAdmin
It's not clear to me how Web Station's nginx is intended to work at all given the mismatch - perhaps some set of actions I took prior caused it to decide to install with bad ownership.
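As a side note, the classic mode bits in that listing (drwxr-xr-x, rw-r--r--) would actually let the http user read everything, so a plain-bits scan coming back empty would point the finger at something else, such as Synology's ACLs (the + suffix seen in some listings). A hedged stdlib sketch of such a scan; the 1023 uid/gid comes from the ps output above and is hypothetical anywhere else:

```python
import os
import stat

def unreadable_by(root: str, uid: int, gid: int) -> list[str]:
    """List paths under `root` whose classic mode bits deny read to the
    given uid/gid (ownership + rwx bits only; ACLs are ignored)."""
    denied = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_uid == uid:
                ok = st.st_mode & stat.S_IRUSR      # owner bits apply
            elif st.st_gid == gid:
                ok = st.st_mode & stat.S_IRGRP      # group bits apply
            else:
                ok = st.st_mode & stat.S_IROTH      # "other" bits apply
            if not ok:
                denied.append(path)
    return denied

# Example: 1023 is the http uid/gid from the ps output above.
# print(unreadable_by("/var/services/web", 1023, 1023))
```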
I decided to leave everything owned by root, but changed group permissions so that http can access:
# chown -R root:http /var/services/web/
# chmod -R 775 /var/services/web/
This got past the initial error, but revealed a new one:
"/usr/syno/synoman/phpMyAdmin/index.cgi" is not found (2: No such file or directory)
Indeed, there was no trace of phpMyAdmin anywhere in that directory. Evidence of a bad install.
I decided to uninstall anything web related: phpMyAdmin, PHP 7, Apache (happened to be installed), nginx, and Web Station. Once I did, I still had two files in /var/services/web: adminer.css index.php.
I had tried adminer prior to this. In /var/services, there were symlinks to specific volume locations, e.g.:
# ls -al /var/services/
total 12
drwxr-xr-x 3 root root 4096 Dec 17 10:22 .
drwxr-xr-x 17 root root 4096 Dec 17 10:21 ..
lrwxrwxrwx 1 root root 18 Jan 20 2020 download -> /volume1/#download
lrwxrwxrwx+ 1 root root 14 Dec 17 10:22 homes -> /volume1/homes
lrwxrwxrwx 1 root root 24 Jan 20 2020 pgsql -> /volume1/#database/pgsql
lrwxrwxrwx 1 root root 13 Dec 17 10:22 tmp -> /volume1/#tmp
lrwxrwxrwx 1 root root 13 Dec 17 10:22 web
Interestingly, web was not symlinked. I fully deleted /var/services/web.
Looking over at /volume1, I do see a /volume1/web, again fully owned by root but with extremely constrained permission:
d---------+ 1 root root 52 Dec 17 10:14 web
There are only a few things in here, which look related to a blank install of Web Station. I fully deleted everything within /volume1/web, but left it as is. With everything maximally cleaned I rebooted.
Upon boot, /var/services/web was now symlinked to /volume1/web, which now also had useful permission bits (777) and was owned by root:root. Maybe this was done by some boot recovery process, who knows. (I still had nothing web related installed at this point.)
I installed Web Station, then PHP 7.2, then phpMyAdmin.
I had the same issue when accessing my server via
<name>.local/phpMyAdmin/
It worked when I accessed it via
<local ip>/phpMyAdmin/

Is the editor Atom able to open projects on a remote server?

Atom is able to open a project, and to show the whole tree of the project on the left side, a really nice feature.
Now I'm using SSH on the host OS to access a guest OS (say Red Hat Enterprise Linux, RHEL) on VirtualBox. Is there a way for Atom on the host OS to open a project located on RHEL?
Well yes there is!
You just need to configure sshfs, optionally with autofs. Then you can access the files as if they are stored locally. I've used this with Atom and it works seamlessly.
Instructions for Ubuntu
Install sshfs
$ sudo apt-get install sshfs
Mount the remote directory on a local mountpoint
$ sshfs [user@]host:[dir] mountpoint
Combining it with autofs
The following link has instructions for a setup using autofs.
Note: This requires you to set up SSH for the root user.
http://www.mccambridge.org/blog/2007/05/totally-seamless-sshfs-under-linux-using-fuse-and-autofs/
Additionally to that post, I've added some tricks for an even more seamless experience.
Enhance performance
I've noticed a significant performance boost by adding this SSH config to /root/.ssh/config:
Ciphers arcfour
Compression no
Note: This does make the connection less secure.
Make it appear as a disk
If you set the mount point to a directory in /media, the mount point will show up as a disk in your file browser. For example /media/sshfs.
I would recommend the Remote Sync plugin for this. I have a Python environment set up on a Linux box and I connect to it from my PC.
It allows me to upload changes automatically when I save a file, and also to define files to be monitored for changes.
Not 100% what you're looking for, but there's the Remote-Edit package: https://atom.io/packages/remote-edit
This will allow you to define the connection parameters for the server, and will then allow you to browse and edit the files found on the server.
Complement to Remco's sshfs answer above:
If you use different users in the client and server hosts, consider using the 'idmap' option of sshfs.
I use different users in my working host and in the development or testing VMs.
Example:
Using the option '-o idmap=user' will automatically translate the UID/GID of the remote host to the UID/GID of the connecting user on the local host
Files owned by remote user (devuser) in remote host (devhost1) will appear as belonging to the connecting user (locuser) in local host (clienthost)
locuser@clienthost:~$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=user
locuser@clienthost:~$ ls -lR ~/dev/www
(...)
-rw-rw-r-- 1 locuser locuser 269 abr 1 11:37 index.html
-rw-rw-r-- 1 locuser locuser 249 abr 3 03:59 page1.html
-rw-rw-r-- 1 locuser locuser 1118 abr 2 15:07 page2.html
-rw-rw-r-- 1 locuser locuser 847 abr 3 03:20 page3.html
(...)
The mapping can also be made explicit (userx <-> usery). For more details see man sshfs
I am writing this answer because none of the other answers worked for me.
Mounting as a directory and browsing with Atom (@Remco Haszing's answer) is a brilliant approach,
but in my case Atom wants to index the whole remote project, which is a heavy one, and it becomes unresponsive.
Using the remote-sync package is good when you work locally and then want to upload the files to the server.
Actually, remote-edit is the package meant to do this job (editing files remotely over SSH).
The problem is that it has been abandoned.
These worked for me as replacements:
https://atom.io/packages/remote-edit-ni
https://atom.io/packages/remote-editor

Error trying to start Notification Server

I was trying to start Phabricator's Notification Server, but experienced the following error:
/phabricator/phabricator/bin/aphlict start
[2015-11-16 18:41:08] EXCEPTION: (FilesystemException) Requested path '/var/tmp/aphlict/pid' is not writable. at [<phutil>/src/filesystem/Filesystem.php:1081]
arcanist(head=master, ref.master=9dd6eafb5254), phabricator(head=master, ref.master=50d158a8c4d9), phutil(head=master, ref.master=e9ed72483a14)
#0 Filesystem::assertWritable(string) called at [<phutil>/src/filesystem/Filesystem.php:73]
#1 Filesystem::assertWritableFile(string) called at [<phutil>/src/filesystem/Filesystem.php:89]
#2 Filesystem::writeFile(string, string) called at [<phabricator>/src/applications/aphlict/management/PhabricatorAphlictManagementWorkflow.php:140]
#3 PhabricatorAphlictManagementWorkflow::willLaunch() called at [<phabricator>/src/applications/aphlict/management/PhabricatorAphlictManagementWorkflow.php:249]
#4 PhabricatorAphlictManagementWorkflow::executeStartCommand() called at [<phabricator>/src/applications/aphlict/management/PhabricatorAphlictManagementStartWorkflow.php:15]
#5 PhabricatorAphlictManagementStartWorkflow::execute(PhutilArgumentParser) called at [<phutil>/src/parser/argument/PhutilArgumentParser.php:406]
#6 PhutilArgumentParser::parseWorkflowsFull(array) called at [<phutil>/src/parser/argument/PhutilArgumentParser.php:301]
#7 PhutilArgumentParser::parseWorkflows(array) called at [<phabricator>/support/aphlict/server/aphlict_launcher.php:23]
The directory in question seems to be writable:
ls -l /var/tmp/aphlict
total 4
drwxr-xr-x 2 root root 4096 Nov 16 13:40 pid
If it matters, I'm running all operations as non-'root' on Ubuntu 14.04 LTS system.
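For what it's worth, a quick stdlib check shows what the exception sees, namely writability for the effective uid, rather than what ls -l suggests at a glance (the path below is the one from the traceback and is hypothetical on any other machine):

```python
import os

def explain_writability(path: str) -> str:
    """Report whether the current process may write `path`, using the
    same effective-uid access check the exception is based on."""
    if os.access(path, os.W_OK):
        return f"uid {os.geteuid()} can write {path}"
    st = os.stat(path)
    return (f"uid {os.geteuid()} cannot write {path} "
            f"(mode {oct(st.st_mode & 0o777)}, owner uid {st.st_uid})")

# drwxr-xr-x root root is mode 0o755: only uid 0 holds the write bit,
# so any non-root user lands in the "cannot write" branch, hence the
# FilesystemException even though the listing looks harmless.
# print(explain_writability("/var/tmp/aphlict/pid"))
```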
I have just figured this out. As I said in the update, I was trying to start the Notification Server as non-'root'. Looking again at the permissions of the /var/tmp/aphlict/pid folder, the problem suddenly became crystal clear and trivial.
ls -l /var/tmp/aphlict
total 4
drwxr-xr-x 2 root root 4096 Nov 16 13:40 pid
Therefore, all that needed to be done to fix the problem is to make the directory writable for everyone (I hope that this approach does not create a potential security issue):
chmod go+w /var/tmp/aphlict/pid
su MY_NON_ROOT_USER_NAME -c './bin/aphlict start'
Aphlict Server started.
Problem solved. By the way, for the Notification Server to work properly, do I need to open port 22281, in addition to the already opened 22280? (Please answer in comments. Thank you!)

goaccess nginx log IP lookup

I am using the latest version of goaccess (0.6) to analyze an nginx log. The program seems to be parsing the file OK, since I am getting the host IPs, statistics, etc. But the GeoIP lookup is either not working, or I am not able to figure out how to get to the details page for hosts. As you can see, the binary is linked against GeoIP:
ldd -r /usr/local/bin/goaccess
..
libGeoIP.so.1 => /usr/lib/libGeoIP.so.1 (0x00007fae4ea58000)
...
The .dat file is also in place:
ls -al /usr/share/GeoIP/GeoIP.dat
-rw-r--r-- 1 root root 1664511 Jan 3 2012 /usr/share/GeoIP/GeoIP.dat

SQLite3: Unable to Open Database

I've been attempting to figure out this problem for quite some time and have looked at all the normal solutions. I am attempting to run a .backup on an SQLite database. I don't think it matters, but this particular database is being used by Membase and is also running on the Amazon cloud. Both the folder that I am backing up to and the folder that the database is coming from have 777 permissions (which is the usual cause of this message). If I sudo the backup command, it gets partway through the backup and then the process hangs while consuming CPU, leading me to eventually kill the sqlite process. I even went through and chmod 777 the database file itself.
Here is what's happening:
/opt/membase/bin/sqlite3 /mnt/data-store/default-data/default-0.mb '.backup /mnt/data-backup/mbfiles/test.mb'
Error: unable to open database file
When I ls -la the folder:
drwxrwxrwx 2 membase membase 4096 Sep 10 15:41 .
drwxrwxrwx 4 membase root 4096 Aug 5 01:10 ..
-rw-r--r-- 1 membase membase 53248 Sep 10 15:41 default
-rwxrwxrwx 1 membase membase 849593344 Sep 10 15:41 default-0.mb
And the backup folder:
drwxrwxrwx 2 ec2-user ec2-user 4096 Sep 10 15:41 .
drwxrwxrwx 4 root root 4096 Sep 3 00:26 ..
Also, because I hear it matters, the permission of /tmp
drwxrwxrwt 3 root root 4096 Sep 10 03:32 .
I've been trying to fix this for over a week now, and any new ideas would be appreciated. It should be noted that this is a production environment so restarting is not an option.
EDIT: I checked and I can back up the smaller "default" file, just not the larger db, so this rules out any sort of issue with folder permissions. Any help would be hugely appreciated.
Thanks!
This seems like an sqlite issue. We've seen sporadic occurrences of this error at other customers but have not been able to track it down or resolve it yet. According to the sqlite experts, this should never happen ;-)
Can you shut down the Membase process to test further? If so, trying to take a backup at that point (make sure the 'memcached' process is stopped) would rule out any issue with software accessing the file. If it still doesn't work, I know there are tools to verify an SQLite DB (I just don't have them off the top of my head).
You can also use a combo of ".dump" and ".restore" via sqlite, but I wouldn't recommend running that on a running Membase node as we haven't tested out the effects.
Perry
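As a variant of the .dump/.restore route, Python's stdlib sqlite3 module exposes SQLite's online backup API, which copies the database a batch of pages at a time so readers are only blocked briefly. This is only a sketch and has not been tested against a live Membase node; the commented-out paths are the ones from the question:

```python
import sqlite3

def backup_db(src_path: str, dst_path: str, batch: int = 1000) -> None:
    """Copy an SQLite database using the online backup API,
    `batch` pages at a time."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    with dst:
        src.backup(dst, pages=batch)
    dst.close()
    src.close()

# With the paths from the question:
# backup_db("/mnt/data-store/default-data/default-0.mb",
#           "/mnt/data-backup/mbfiles/test.mb")
```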
