All,
I am trying to run multiple shell commands on a remote server through Jenkins.
I have tried the below code with the Execute Shell plugin:
sudo su
ssh -o StrictHostKeyChecking=no -i /home/ec2-user/card.pem ec2-user@10.205.75.204 cat /home/ec2-user/testfile.txt
The problem with this is that I can only run one command; for more than one I need to run:
sudo su
ssh -o StrictHostKeyChecking=no -i /home/ec2-user/card.pem ec2-user@10.205.75.204 cat /home/ec2-user/testfile.txt
ssh -o StrictHostKeyChecking=no -i /home/ec2-user/card.pem ec2-user@10.205.75.204 rm -rf /home/ec2-user/testfile.txt
How can we achieve running multiple commands like this?
Hey @Siraj Syed, check the following example:
String commandToRun = 'cat /home/ec2-user/testfile.txt; rm -rf /home/ec2-user/testfile.txt'
// pipeline step
sh "ssh -o StrictHostKeyChecking=no -i /home/ec2-user/card.pem ec2-user#10.205.75.204 /bin/bash -c '\"${commandToRun}\"'"
Can you just do:
ssh -o StrictHostKeyChecking=no -i /home/ec2-user/card.pem ec2-user@10.205.75.204 "cat /home/ec2-user/testfile.txt; rm -rf /home/ec2-user/testfile.txt"
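If the list of commands gets longer, another option is to feed them to the remote shell on stdin with a quoted heredoc, so everything runs in a single SSH session. A minimal sketch, assuming the same key, user and host as above:

# run several commands in one SSH session by piping them to a remote shell
ssh -o StrictHostKeyChecking=no -i /home/ec2-user/card.pem ec2-user@10.205.75.204 'bash -s' <<'EOF'
cat /home/ec2-user/testfile.txt
rm -rf /home/ec2-user/testfile.txt
EOF
# the quoted 'EOF' keeps variables from being expanded locally before they reach the remote shell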
I'm trying to run a Pabot / robotframework-browser script in Docker.
I have tried to use the command:
docker run --rm -v "$(pwd):/test" --ipc=host --user pwuser --security-opt seccomp=Docker/seccomp_profile.json -e "enviroment=***" -e "ROBOT_THREADS=10" -e PABOT_OPTIONS="--testlevelsplit" marketsquare/robotframework-browser:latest bash -c "pabot . -i smoke --outputdir /test/output /test"
Result: bash: pabot: command not found
What's wrong in that syntax?
If I use "robot -i Smoke --outputdir /test/output /test" then the execution works OK (no errors).
I checked and confirmed that the image does not have pabot installed. You can build an image locally with docker build -f Dockerfile ..
Here is an example of running that image while installing the missing dependencies first:
sudo docker run --rm -v $(pwd)/atest:/atest -v /tmp:/tmp -e "ROBOT_THREADS=10" -e PABOT_OPTIONS="--testlevelsplit" --ipc=host --user pwuser --security-opt seccomp=seccomp_profile.json marketsquare/robotframework-browser:latest bash -c "pip install robotframework-pabot psutil && pabot --outputdir /tmp/test/output atest/test"
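If you'd rather not reinstall pabot on every run, building a small derived image is an alternative; a minimal sketch (switching to root for the install is an assumption about the base image's default user):

FROM marketsquare/robotframework-browser:latest
# install the missing pabot dependency once, at build time (pip is already in the image)
USER root
RUN pip install robotframework-pabot psutil
# drop back to the unprivileged user the image normally runs as
USER pwuser

Then build it with something like docker build -t browser-pabot . (the tag name is just an example) and use that tag in your original docker run command.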
I'm trying to create a Dockerfile which runs as a non-root user.
When I build this, everything works fine, but nginx cannot write its log file because it doesn't have enough permissions. Can I, when building a Docker image, give root permissions only to nginx?
I've tried chmod and chown on the blocked directories; it doesn't work.
FROM php:7.1-fpm-alpine
RUN apk add --no-cache shadow
RUN apk add --no-cache --virtual .ext-deps \
    openssl \
    unzip \
    libjpeg-turbo-dev \
    libwebp-dev \
    libpng-dev \
    freetype-dev \
    libmcrypt-dev \
    imagemagick-dev \
    nodejs-npm \
    nginx \
    git \
    inkscape
# imagick
RUN apk add --update --no-cache autoconf g++ imagemagick-dev libtool make pcre-dev \
&& pecl install imagick \
&& docker-php-ext-enable imagick \
&& apk del autoconf g++ libtool make pcre-dev
# Install Blackfire
RUN version=$(php -r "echo PHP_MAJOR_VERSION.PHP_MINOR_VERSION;") \
&& curl -A "Docker" -o /tmp/blackfire-probe.tar.gz -D - -L -s https://blackfire.io/api/v1/releases/probe/php/linux/amd64/$version \
&& tar zxpf /tmp/blackfire-probe.tar.gz -C /tmp \
&& mv /tmp/blackfire-*.so $(php -r "echo ini_get('extension_dir');")/blackfire.so \
&& printf "extension=blackfire.so\nblackfire.agent_socket=tcp://blackfire:8707\n" > $PHP_INI_DIR/conf.d/blackfire.ini
RUN apk add --no-cache icu-dev \
&& docker-php-ext-configure intl \
&& docker-php-ext-install intl
RUN docker-php-ext-configure pdo_mysql && \
docker-php-ext-configure opcache && \
docker-php-ext-configure exif && \
docker-php-ext-configure pdo && \
docker-php-ext-configure zip && \
docker-php-ext-configure gd \
--with-jpeg-dir=/usr/include --with-png-dir=/usr/include --with-webp-dir=/usr/include --with-freetype-dir=/usr/include && \
docker-php-ext-configure sockets && \
docker-php-ext-configure mcrypt
RUN docker-php-ext-install pdo zip pdo_mysql opcache exif gd sockets mcrypt && \
docker-php-source delete
RUN ln -s /usr/bin/php7 /usr/bin/php && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
mkdir -p /run/nginx
COPY ./init.sh /
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./.env /
RUN chmod +x /init.sh
EXPOSE 80
RUN addgroup -g 1001 node \
&& adduser -u 1001 -G node -s /bin/sh -D node
ARG UID=1001
ARG GID=1001
ENV UID=${UID}
ENV GID=${GID}
RUN usermod -u $UID node \
&& groupmod -g $GID node
RUN chown 1001:1001 /var/lib/nginx -R
RUN mkdir -p /var/tmp/nginx
RUN chown 1001:1001 /var/tmp/nginx -R
USER node
ENTRYPOINT [ "/init.sh" ]
There are quite a few unknowns in your question, for example, the contents of your default.conf file. By default the nginx logs are stored in /var/log/nginx, but I'll assume you're overriding that in the configuration.
The next thing is that the nginx master process needs to run as root if you want it to be able to bind to system ports (0-1023), so if you are using nginx as a web server and intend to use ports 80 and 443, you should stick with running the nginx master process as root.
If you plan to use other ports and are set on the idea of running the master process as non-root, you can check this answer for suggestions on how to do that: https://stackoverflow.com/a/42329561/5359953
I keep using the term master process here because nginx spawns worker processes to handle the actual requests, and those can run as a different user (defined in the nginx configuration file via the user directive).
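For the fully non-root variant, the main points are a non-privileged listen port and pid/log paths the user can write; a minimal nginx.conf sketch under those assumptions (the paths and port are examples, not taken from your default.conf):

worker_processes  1;
pid               /var/tmp/nginx/nginx.pid;   # pid file in a directory the non-root user owns
error_log         /var/lib/nginx/error.log;   # log locations writable by that user
events {
    worker_connections  1024;
}
http {
    access_log  /var/lib/nginx/access.log;
    server {
        listen  8080;                          # non-privileged port, so root is not needed to bind
        root    /var/www/html;
    }
}

When the master process does run as root (e.g. to bind to port 80), the worker user is chosen with a top-level user directive instead.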
I found the solution. I just changed RUN chown 1001:1001 /var/lib/nginx -R to RUN chown -R 1001:1001 /var/. That works fine:
RUN chown -R 1001:1001 /var/
Sometimes that will actually be a bad decision.
You can try adding permissions like this instead:
RUN chown -R 1001:1001 /var/tmp/nginx
RUN chown -R 1001:1001 /var/lib/nginx
RUN chown -R 1001:1001 /var/log/nginx
RUN chown -R 1001:1001 /run/nginx
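If you prefer fewer image layers, the same directories can be chowned in a single instruction; a small sketch of that variant:

# chown accepts several paths at once, so one layer is enough
RUN chown -R 1001:1001 /var/tmp/nginx /var/lib/nginx /var/log/nginx /run/nginx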
I guess RUN chown 1001:1001 /var/lib/nginx -R worked wrong because I put the -R flag too late.
When I try to use the below code in RStudio, I get an error.
Code:
system(paste("sshpass -v -f -N -o StrictHostKeyChecking=no -i '<Path>/id_rsa.ppk' -L 3306:localhost:3306 root#127.0.0.1 sleep 20"))
Warning:
running command 'sshpass -v -f -N -o StrictHostKeyChecking=no -i '<Path>/id_rsa.ppk' -L 3306:localhost:3306 root@127.0.0.1 sleep 20' had status 127
How can I resolve the above issue in R?
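Status 127 from the shell usually means the command itself could not be found, so the first thing worth checking is whether sshpass is installed and on the PATH that RStudio sees; a minimal sketch of that check from a terminal (the apt-based install line is an assumption about your distribution):

# prints the full path if sshpass is on PATH, nothing otherwise
command -v sshpass

# if it is missing, install it (Debian/Ubuntu example; adjust for your package manager)
sudo apt-get install -y sshpass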
Running Snort (in packet dump mode) with the command sudo snort -C snort.conf -A console -i eth0, the following problem occurred:
--== Initializing Snort ==--
Initializing Output Plugins!
Snort BPF option: snort.conf
pcap DAQ configured to passive.
The DAQ version does not support reload.
Acquiring network traffic from "eth0".
ERROR: Can't set DAQ BPF filter to 'snort.conf' (pcap_daq_set_filter: pcap_compile: syntax error)!
Fatal Error, Quitting..
Can someone please suggest a solution?
You're using the wrong option to load the configuration; it should be the lowercase '-c':
sudo snort -c snort.conf -A console -i eth0
Also, you can test your configuration with '-T' before running it:
sudo snort -T -c snort.conf
just put "-i" before eth0 in command it will solve the problem
Try this:
sudo service snort start
ps ax | grep snort
The output I got was
/usr/sbin/snort -m 027 -D -d -l /var/log/snort -u snort -g snort -c
/etc/snort/snort.conf -S HOME_NET=[192.168.0.0/16] -i enp4s0
The man page says
-D Run Snort in daemon mode. Alerts are sent to
/var/log/snort/alert unless otherwise specified.
So when I drop the -D and add the -A
sudo /usr/sbin/snort -m 027 -d -l /var/log/snort -u snort -g snort -c /etc/snort/snort.conf -S HOME_NET=[192.168.0.0/16] -i enp4s0 -A console
This works for Snort Version 2.9.7.0 GRE (Build 149).
My site gives error 521 all the time.
This is the error I found on my server:
$sudo service varnish reload
* Reloading HTTP accelerator varnishd
Connection failed (localhost:6082)
Error: vcl.load 8d6fb6be-9a0a-4896-be47-e2678e3c2617 /etc/varnish/default.vcl failed
Moreover,
varnishlog
shows nothing.
I am following this tutorial to set the server up, and I changed:
DAEMON_OPTS="-a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-u www-data -g www-data \
-S /etc/varnish/secret \
-s malloc,256m"
The /etc/varnish/default.vcl file is copied from the tutorial. All occurrences of &amp; have been corrected to &.
It is a fresh VPS. No firewall.
Any clue how to resolve it?
Thanks!!!!
Three things come to mind:
Start Varnish in foreground mode and check what it says:
varnishd -F -a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-u www-data -g www-data \
-S /etc/varnish/secret \
-s malloc,256m
Try changing -T localhost:6082 to -T 127.0.0.1:6082
Port 6082 might already be taken. Change it, or check whether it's listed among the already open ports with:
netstat -tlnep
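Since the reload error is about vcl.load failing, it can also help to compile just the VCL on its own; a minimal check, assuming varnishd is on PATH (the -C flag prints the generated C code and exits, so any syntax error in default.vcl shows up immediately):

varnishd -C -f /etc/varnish/default.vcl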
Restart your Varnish:
sudo /etc/init.d/varnish restart
then
sudo /etc/init.d/varnish reload