I can't get an HTTP response while an exec or system function is executing.
A specific request launches the exec function with sudo find / -name toto; this takes a while.
During the execution, the client sends other requests,
but I don't get any responses.
Has anyone had a similar problem? I tried shell_exec, exec, system...
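For reference, this is roughly how I reproduce it (the endpoint names are placeholders):
curl http://localhost/long-exec.php &   # calls exec('sudo find / -name toto')
curl http://localhost/hello.php         # never answers until find finishes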
I also tried this case:
A request launches an infinite loop.
A second request launches another script that returns 'toto'.
That works fine. What's wrong with exec?
Thank you for your ideas.
I'm trying to send logs into datadog using rsyslog. Ideally, I'm trying to do this without having the logs stored on the server hosting rsyslog. I've run into an error in my config that I haven't been able to find out much about. The error occurs on startup of rsyslog.
omfwd: could not get addrinfo for hostname '(null)':'(null)': Name or service not known [v8.2001.0 try https://www.rsyslog.com/e/2007 ]
Here's the portion I've added to the default rsyslog.conf:
module(load="imudp")
input(type="imudp" port="514" ruleset="datadog")
ruleset(name="datadog"){
action(
type="omfwd"
action.resumeRetryCount="-1"
queue.type="linkedList"
queue.saveOnShutdown="on"
queue.maxDiskSpace="1g"
queue.fileName="fwdRule1"
)
$template DatadogFormat,"00000000000000000 <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% - - - %msg%\n "
$DefaultNetstreamDriverCAFile /etc/ssl/certs/ca-certificates.crt
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer *.logs.datadoghq.com
*.* @@intake.logs.datadoghq.com:10516;DatadogFormat
}
First things first.
The module imudp enables log reception over UDP.
The module omfwd enables log forwarding over TCP, UDP, etc.
So most probably - or at least as far as I can tell - with rsyslog you just want to log messages locally and then send them to Datadog.
I don't know anything about the $ActionSendStreamDriver directives, so I can't help you there. But what jumps out is that in your action you haven't defined where the logs should be sent.
ruleset(name="datadog"){
action(
type="omfwd"
target="10.100.1.1"
port="514"
protocol="udp"
...
)
...
}
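For the Datadog case specifically: the $template and $Action... lines are legacy-format directives and, as far as I know, can't be mixed inside the ruleset braces. Here is a hedged sketch of the same action with the stream-driver settings moved into action() parameters, assuming a v8 rsyslog that supports them (the DatadogFormat template and the CA file would still be declared at global scope, e.g. via global(DefaultNetstreamDriverCAFile="...")):
ruleset(name="datadog"){
action(
type="omfwd"
target="intake.logs.datadoghq.com"
port="10516"
protocol="tcp"
template="DatadogFormat"
StreamDriver="gtls"
StreamDriverMode="1"
StreamDriverAuthMode="x509/name"
StreamDriverPermittedPeers="*.logs.datadoghq.com"
action.resumeRetryCount="-1"
queue.type="linkedList"
queue.saveOnShutdown="on"
queue.maxDiskSpace="1g"
queue.fileName="fwdRule1"
)
}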
I was in riak-shell when ssh lost its connection to the server. After reconnecting, I do the following:
sudo riak-shell
and get:
An instance of riak-shell is already running
So, I restarted the riak node in question. This did not seem to solve the problem. I do not see anything using ps -aux to kill. According to the docs, only one instance can run at a time. That makes sense, but when I run riak-shell from another node and try to connect to any node, I now get the following:
Error: invalid function call : connection_EXT:connect ["riak@<<<ip_address_elided>>>"]
You can connect to a specific node (whether in your riak_shell.config
or not) by typing 'connect "dev1@127.0.0.1";' substituting your
node name for dev1.
You may need to change the Erlang cookie to do this.
See also the 'reconnect' command.
Unhandled message received is {#Ref<0.0.0.135>,disconnected}
riak-shell(3)>
I have not changed the cookies during this process, and the cookie appears to be the same (at least in /etc/riak/riak_shell.config). (I am running the Riak TS AMI on AWS.)
riak-shell runs in its own Erlang VM - entirely separate from the riak node
(You don't need to run riak-shell from the machine your node is on - it uses the normal riak-erlang-client to talk to riak)
If you are on Linux, run ps aux | grep riak_shell_app; it will give you the process number you need to kill that instance:
08:30:45:~ $ ps aux | grep riak_shell_app
vagrant 4671 0.0 0.3 493260 34884 pts/4 Sl+ Aug17 0:03 /home/vagrant/riak_ee/dev/dev1/erts-5.10.3/bin/beam.smp -- -root /home/vagrant/riak_ee/dev/dev1 -progname erl -- -home /home/vagrant -- -boot /home/vagrant/riak_ee/dev/dev1/releases/2.1.1/start_clean -run riak_shell_app boot debug_off /home/vagrant/riak_ee/dev/dev1/bin/../log/riak_shell/riak_shell -noshell -config /home/vagrant/riak_ee/dev/dev1/bin/../etc/riak
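Then kill it using the PID from the second column (4671 in the example output above):
08:30:45:~ $ kill 4671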
I wrote a good chunk of it, so let me know how you got on:
https://github.com/basho/riak_shell/graphs/contributors
To monitor a log file, I have to connect over SSH and redirect the output of the log file (let's call it RemoteLog.txt) to a local machine so it can be read by a Java program and displayed in a GUI.
Right now I have the output redirected out of the ssh connection and onto the local machine with the command:
ssh remote@ip.address tail logs/RemoteLog.txt -f > ~/Log/LocalLog.txt
and technically everything works fine, with one exception: for some reason LocalLog.txt only gets updated with the changes to RemoteLog.txt every 35 seconds, to the millisecond.
The number of changes to RemoteLog, the number of lines specified with the tail command, and the choice of >> vs > make no difference; there is always a 35-second delay between updates of LocalLog.txt while RemoteLog is constantly updating.
Does anyone have any clue why this might be?
I would like to run scrapyd as a service, but when I start scrapyd
and then close the SSH session, scrapyd stops automatically.
When I try to start it as a service like this, I get an error:
root#vps:~# service scrapyd start
scrapyd: Failed to start scrapyd.service: Unit scrapyd.service failed to load: No such file or directory.
And when I try to start the scrapyd daemon, the curl request returns:
{"status": "error", "message": "Use \"scrapy\" to see available commands", "node_name": "vps"}
Can someone help me start scrapyd as a service, please?
daemon --chdir=/home/Crawler scrapyd
I needed to set --chdir so the service runs in the Scrapy project folder!
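That Unit scrapyd.service failed to load: No such file or directory error just means no scrapyd.service unit file exists yet. As an alternative to the daemon command, here is a minimal sketch of one (the scrapyd path is an assumption; check it with which scrapyd):
[Unit]
Description=Scrapyd
After=network.target
[Service]
ExecStart=/usr/local/bin/scrapyd
WorkingDirectory=/home/Crawler
[Install]
WantedBy=multi-user.target
Save it as /etc/systemd/system/scrapyd.service, then run systemctl daemon-reload && systemctl start scrapyd.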
I could not get scrapyd to start through /etc/rc.local or crontab, so I found a workaround. I am sure there is a better way, but in the meantime this worked for me.
I created a Python file, start.py.
import os
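# start scrapyd in the background and send its output to a log file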
os.system('/usr/bin/python3 /home/ubuntu/.local/bin/scrapyd > /home/ubuntu/scrapy.log &')
Then simply call the Python file start.py through crontab.
Add this to the crontab file via crontab -e:
@reboot /usr/bin/python3 /home/ubuntu/start.py
Basically, what I did is run scrapyd through the shell by calling the Python file from crontab.
If anyone finds something better, please comment.
I'm trying to write a daemon on Unix. I understand how to get a daemon up and running. Now I want the daemon to respond when I type commands in the shell, if they are targeted at it.
For example:
Let us assume the daemon name is "mydaemon"
In terminal 1 I type mydaemon xxx.
In terminal 2 I type mydaemon yyy.
"mydaemon" should be able to receive the argument "xxx" and "yyy".
If I interpret your question correctly, then you have to do this as an application-level construct. That is, this is something specific to your program you're going to have to code up yourself.
The approach I would take is to write "mydaemon" with the idea of it being a wrapper: it checks the process table or a pid file to see if a "mydaemon" is already running. If not, then fork/exec your new daemon. If so, then send the arguments to it.
For "send the arguments to it", I would use named pipes, like are explained here: What are named pipes? Essentially, you can think of named pipes as being like "stdin", except they appear as a file to the rest of the system, so you can open them in your running "mydaemon" and check them for inputs.
Finally, it should be noted that all of this check-if-running-send-to-pipe stuff can either be done in your daemon program, using the API of the *nix OS, or it can be done in a script by using e.g. 'ps', 'echo', etc...
The easiest, most common, and most robust way to do this in Linux is using a systemd socket service.
Example contents of /usr/lib/systemd/system/yoursoftware.socket:
[Unit]
Description=This is a description of your software
Before=yoursoftware.service
[Socket]
ListenStream=/run/yoursoftware.sock
Service=yoursoftware.service
# E.g.: use SocketMode=0666 to give rw access to everyone
# E.g.: use SocketMode=0640 to give rw access to root and read-only to SocketGroup
SocketMode=0660
SocketUser=root
# Use socket group to grant access only to specific processes
SocketGroup=root
[Install]
WantedBy=sockets.target
NOTE: If you are creating a local-user daemon instead of a root daemon, then your systemd files go in /usr/lib/systemd/user/ (see pulseaudio.socket for example) or ~/.config/systemd/user/, and your socket is at /run/user/$(id -u)/yoursoftware.sock (note that you can't actually use command substitution in pathnames in systemd).
Example contents of /lib/systemd/system/yoursoftware.service
[Unit]
Description=This is a description of your software
Requires=yoursoftware.socket
[Service]
ExecStart=/usr/local/bin/yoursoftware --daemon --yourarg yourvalue
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
Also=yoursoftware.socket
Run systemctl daemon-reload && systemctl enable yoursoftware.socket yoursoftware.service as root
Use systemctl --user daemon-reload && systemctl --user enable yoursoftware.socket yoursoftware.service if you're creating the service to run as a local-user
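After enabling, you can start the socket right away and check that it is listening:
systemctl start yoursoftware.socket
systemctl status yoursoftware.socket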
A functional example of the software in C would be way too long, so here's an example in NodeJS. Here is /usr/local/bin/yoursoftware:
#!/usr/bin/env node
var SOCKET_PATH = "/run/yoursoftware.sock";
function errorHandle(e) {
  if (e) console.error(e), process.exit(1);
}
// process.argv[0] is the node binary itself, so check the whole
// argument list for the flag rather than argv[0]
if (process.argv.includes("--daemon")) {
  // daemon side: append everything received on the socket to the log
  var logFile = require("fs").createWriteStream(
    "/var/log/yoursoftware.log", {flags: "a"});
  require("net").createServer(s => s.pipe(logFile, {end: false}))
    .on("error", errorHandle)
    .listen(SOCKET_PATH);
} else {
  // client side: pipe stdin through to the daemon's socket
  process.stdin.pipe(
    require("net")
      .createConnection(SOCKET_PATH)
      .on("error", errorHandle)
  );
}
In the example above, you can run many yoursoftware instances at the same time, and the stdin of each of the instances will be piped through to the daemon, which appends all the stuff it receives to a log file.
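As a quick smoke test (assuming the units above are installed and started), piping something into the client should end up appended to the log:
echo "hello" | /usr/local/bin/yoursoftware
tail -n 1 /var/log/yoursoftware.log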
For non-Linux OSes and distros without systemd, you would use the (typically shell-scripted) startup system to begin your process at boot and the user would receive an error like could not connect to socket /run/yoursoftware.sock when something goes wrong with your daemon.