Transmission script-torrent-done not executing?

settings.json
"script-torrent-done-enabled": true,
"script-torrent-done-filename": "/var/lib/transmission/.config/transmission-daemon/torrent_complete.sh"
The owner of torrent_complete.sh is transmission and its permissions are 777, but the script is still not executing. What am I doing wrong?
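For context, a minimal torrent-done script would look roughly like the sketch below (the log file path is just an example, not my exact script). Transmission executes the file directly, so it needs a shebang line in addition to execute permission, and settings.json has to be edited while the daemon is stopped, otherwise the daemon overwrites the change when it exits.
#!/bin/sh
# minimal sketch; TR_TORRENT_NAME is an environment variable Transmission sets for the finished torrent
echo "$(date) finished: ${TR_TORRENT_NAME}" >> /var/lib/transmission/torrent-done.log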

Related

Cannot delete Cinder volume with error message "image still has watchers"

I run OpenStack Cinder with Ceph as its storage backend. When I recently tried to delete one of the Cinder volumes, it failed.
So I turned to the rbd commands to troubleshoot the issue; below is the error message printed by the command rbd rm ${pool}/${volume-id}:
rbd: error: image still has watchers
This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.
Then rbd status ${pool}/${volume-id} shows:
Watchers:
watcher=172.18.0.1:0/523356342 client.230016780 cookie=94001004445696
I am confused about why the watcher sticks to the volume and prevents it from being deleted. Is there a reason for this, or did I do something wrong?
And how can I delete the volume in this case?
I found a solution to this issue. The idea is to add the watcher to the blacklist with ceph osd blacklist; the volume then becomes removable, and after deleting it you remove the watcher from the blacklist again.
Add the watcher to the blacklist:
$ ceph osd blacklist add 172.18.0.1:0/523356342
blacklisting 172.18.0.1:0/523356342
Check the status and delete the volume:
$ rbd status ${pool}/${volume-id}
Watchers: none
$ rbd rm ${pool}/${volume-id}
Removing image: 100% complete...done.
Remove the watcher from the blacklist:
$ ceph osd blacklist rm 172.18.0.1:0/523356342
un-blacklisting 172.18.0.1:0/523356342
That's all, but I am still looking for the root cause.
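For convenience, the steps above can be strung together in a small shell sketch (untested; the pool and volume id are passed as arguments, and the awk expression assumes the rbd status output format shown above):
#!/bin/sh
# sketch: blacklist the watcher, delete the image, then un-blacklist again
POOL=$1
VOLUME_ID=$2
WATCHER=$(rbd status "${POOL}/${VOLUME_ID}" | awk '/watcher=/ {sub("watcher=", "", $1); print $1}')
ceph osd blacklist add "${WATCHER}"
rbd rm "${POOL}/${VOLUME_ID}"
ceph osd blacklist rm "${WATCHER}"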

Remote logs in Airflow

I have two machines. Machine 1 runs airflow-webserver and airflow-scheduler; machine 2 runs an airflow-worker on a specific queue. I am using the CeleryExecutor. The task on machine 2 runs successfully (writing and deleting files on the local drive), but in the web UI on machine 1 I cannot read the log files.
*** Log file does not exist: /home/airflow/logs/delete_images_by_ttl/delete_images/2018-10-29T12:24:23.299741+00:00/1.log
*** Fetching from: http://localhost-int.localdomain:8793/log/delete_images_by_ttl/delete_images/2018-10-29T12:24:23.299741+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='localhost-int.localdomain', port=8793): Max retries exceeded with url: /log/delete_images_by_ttl/delete_images/2018-10-29T12:24:23.299741+00:00/1.log
To solve this problem, edit /etc/hosts on the webserver machine.
The HTTPConnectionPool error means the webserver is not able to communicate with the worker node.
Add the worker node's IP address and hostname to the /etc/hosts file.
Also verify the following:
base_log_folder = /home/airflow/logs/
sudo chmod -R 777 /home/airflow/logs/
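For example, on the webserver machine you would map the hostname the worker advertises (localhost-int.localdomain in the log above) to the worker's real IP address; the address below is only a placeholder:
# /etc/hosts on the webserver machine (10.0.0.2 stands in for the worker's IP)
10.0.0.2   localhost-int.localdomain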

[Atom][Remote-ftp][colfax] Unable to connect to remote server

I have recently been trying to work on the Intel® AI DevCloud; please see Connecting from Linux or a Mac.
I can connect to the remote server colfax via SSH, but I am not able to set up the atom remote-ftp .ftpconfig correctly for colfax.
Here is what I did:
Download the Linux access key and put it at key_path.
Add the following to ~/.ssh/config:
Host colfax
User xxxxxx
IdentityFile key_path
ProxyCommand ssh -T -i key_path guest@cluster.colfaxresearch.com
Then log in using:
ssh colfax
Would anyone please let me know what the host, user (xxxxxx), and pass ("") should be?
{
"protocol": "ftp",
"host": "***FTP_HOSTNAME_HERE***",
"port": 21,
"user": "***YOUR_USERNAME_HERE***",
"pass": "***YOUR_PASSWORD_HERE***",
"promptForPass": false,
"remote": "***REMOTE_PATH_HERE***",
"secure": true,
"secureOptions": {"rejectUnauthorized": false, "requestCert": true, "agent": false},
"connTimeout": 10000, // integer - How long (in milliseconds) to wait for the control connection to be established. Default: 10000
"pasvTimeout": 10000, // integer - How long (in milliseconds) to wait for a PASV data connection to be established. Default: 10000
"keepalive": 10000, // integer - How often (in milliseconds) to send a 'dummy' (NOOP) command to keep the connection alive. Default: 10000
"watch":[]
}
The config above is taken from @Sanjay Verma's answer at [Atom][Remote-ftp] Unable to connect ftps/ftpes. Thank you!
Please find the procedure below. Add the following to ~/.ssh/config:
Host colfax
User uXXXX
IdentityFile ~/Downloads/colfax-access-key-xxxx
ProxyCommand ssh -T -i ~/Downloads/colfax-access-key-xxxx guest@cluster.colfaxresearch.com
Set the correct restrictive permissions on the private SSH key. To do this, run the following commands in a terminal:
chmod 600 ~/Downloads/colfax-access-key-xxxx
chmod 600 ~/.ssh/config
After the preparation steps above, you should be able to log in to your login node:
ssh colfax
Once your connection is set up, you can copy local files to your login node like this:
scp /path/to/local/file colfax:/path/to/remote/directory/
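As for the .ftpconfig itself, remote-ftp also supports SFTP with a private key, so something along the lines of the sketch below may work. Note that the host, user, remote path, and key path here are only assumptions based on the SSH config above, and since the package cannot use a ProxyCommand, a direct SFTP connection may still not reach colfax:
{
    "protocol": "sftp",
    "host": "cluster.colfaxresearch.com",
    "port": 22,
    "user": "uXXXX",
    "promptForPass": false,
    "remote": "/home/uXXXX",
    "privatekey": "/home/you/Downloads/colfax-access-key-xxxx",
    "connTimeout": 10000,
    "keepalive": 10000,
    "watch": []
}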

Hg clone, pull, or incoming command from any repository on an HgLab server throws a mismatch error

My question is pretty much in the title: when trying to issue a clone, pull, or incoming command with Mercurial against any repository on an HgLab server (whether the repository was created from scratch on the server or pushed to the server, in both cases before issuing the failing command), I get a mismatch error. Here's the log:
hg --verbose --debug --traceback incoming http://user@server:81/hg/project/repository
using http://server:81/hg/project/repository
http auth: user user, password not set
sending capabilities command
[HgKeyring] Keyring URL: http://server:81/hg/project/repository
[HgKeyring] Looking for password for user user and url http://server:81/hg/project/repository
[HgKeyring] Keyring password found. Url: http://server:81/hg/project/repository, user: user, passwd: *****
comparing with http://user@server:81/hg/project/repository
query 1; heads
sending batch command
searching for changes
all local heads known remotely
sending getbundle command
Traceback (most recent call last):
File "mercurial\dispatch.pyo", line 204, in _runcatch
File "mercurial\dispatch.pyo", line 887, in _dispatch
File "mercurial\dispatch.pyo", line 632, in runcommand
File "mercurial\dispatch.pyo", line 1017, in _runcommand
File "mercurial\dispatch.pyo", line 978, in checkargs
File "mercurial\dispatch.pyo", line 884, in
File "mercurial\util.pyo", line 1005, in check
File "mercurial\commands.pyo", line 5067, in incoming
File "mercurial\hg.pyo", line 820, in incoming
File "mercurial\hg.pyo", line 783, in _incoming
File "mercurial\bundlerepo.pyo", line 509, in getremotechanges
File "mercurial\bundle2.pyo", line 1319, in writebundle
File "mercurial\changegroup.pyo", line 102, in writechunks
File "mercurial\bundle2.pyo", line 1312, in chunkiter
File "mercurial\changegroup.pyo", line 228, in getchunks
File "mercurial\changegroup.pyo", line 48, in getchunk
File "mercurial\changegroup.pyo", line 43, in readexactly
abort: stream ended unexpectedly (got 0 bytes, expected 4)
Before anyone offers the easy solutions, it should suffice to know that I've already tried the following:
Looking up existing solutions on Stack Overflow, none of which worked. Some of them are:
Using an older version of Mercurial (downgrading from 3.5.1 to 3.4.2)
Running hg verify on both the local machine and the server to fix inconsistencies in both repositories
hg pull -r 0 http://user@server:81/hg/project/repository (gives the same error)
hg pull -f -r 0 http://user@server:81/hg/project/repository (gives the same error)
hg incoming -r 0 http://user@server:81/hg/project/repository (gives the same error)
hg incoming -f -r 0 http://user@server:81/hg/project/repository (gives the same error)
It should also be noted that hg outgoing and hg push don't give any problems whatsoever.
Please help!
Thanks guys :)
There's a bug in HgLab in the component that handles bundling the response to hg incoming or hg pull. The exact details are unclear; you'll want to contact their customer support for details (they're very responsive).
If version 1.10.6 does not have the fix, versions after that should have it.

Boot script execution order (rc.local)?

With some great help from another user on here, I've managed to create a script which writes the necessary network configuration to /etc/network/interfaces and allows public access to a DomU server.
I’ve placed this script in the /etc/rc.local file, and executed chmod u+x /etc/rc.local to enable it.
The server is a DomU Ubuntu server on a host (Dom0), and rc.local doesn't seem to be executing before the network is brought up at boot/creation time.
So the configuration changes are being made to the /etc/network/interfaces file, but are not active once the boot process completes. I have to reboot once more before the changes take effect.
I've tried adding /etc/init.d/networking restart to the end of the rc.local script (before exit 0), but with no joy.
I also tried adding the script to the S35networking file, but again without success.
Any advice or suggestions on getting this script to execute before the network device is brought up would be greatly appreciated.
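One possible approach (a sketch on my part, not verified on this setup, and assuming S35networking lives in /etc/rcS.d as on a stock sysvinit-style Ubuntu): install the config-writing script as its own init script and link it so that it runs just before networking does; write-interfaces is only a placeholder name.
# sketch: run the config writer before S35networking at boot
sudo cp /path/to/your-script.sh /etc/init.d/write-interfaces
sudo chmod +x /etc/init.d/write-interfaces
sudo ln -s ../init.d/write-interfaces /etc/rcS.d/S34write-interfaces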
