Curl connection refused on CircleCI but works on local machine - HTTP

I have a CircleCI pipeline, and after deployment I run a smoke test to check the application status. Here is the job:
smoke-test:
  docker:
    - image: python:3.10.5-alpine3.16
  steps:
    - checkout
    - run:
        name: Install dependencies
        command: |
          apk add --update --no-cache curl aws-cli tar gzip jq
    - run:
        name: Backend smoke test
        command: |
          export BACKEND_IP=$(aws ec2 describe-instances \
            --filters "Name=tag:Name,Values=UdaPeople-backend-${CIRCLE_WORKFLOW_ID:0:5}" \
                      'Name=instance-state-name,Values=running' \
            --query 'Reservations[*].Instances[*].PublicIpAddress' \
            --output text)
          export API_URL="http://${BACKEND_IP}:3030/api/status"
          echo "${API_URL}"
          wget "${API_URL}"
          if curl -s -v "${API_URL}" | grep "ok"
          then
            return 0
          else
            return 1
          fi
More details:
The server I am trying to query is an EC2 instance with a security group that allows all IP addresses on port 3030.
I downloaded the container image used in CircleCI and tested the curl and wget commands from it locally; they work perfectly.
I have made more than 30 deployments, and the result is always the same.
The error output from CircleCI shows that it does hit the correct IP address.
I increased the timeout seconds and also set the retries to 5 (see the sketch below).
What could I be missing?
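For reference, the retry/timeout variant mentioned above looked roughly like this; a sketch using curl's standard flags (the exact values are illustrative, not the precise ones from my config):

# retry up to 5 times; --retry-connrefused also retries on
# "connection refused", which plain --retry does not cover
if curl -s --retry 5 --retry-connrefused --connect-timeout 10 "${API_URL}" | grep "ok"
then
  return 0
else
  return 1
fi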

Related

How to test Firestore Security Rules with Jenkins?

I'm developing some Firestore security rules locally. I use mocha to test the rules, and locally everything works. I have a Jenkins pipeline that publishes the rules to Firebase in the cloud every time I merge a PR into develop. What I want to do is run my unit tests within Jenkins. However, every time Jenkins calls yarn test from the pipeline, I get an error that says:
@firebase/firestore: Firestore (7.18.0): Could not reach Cloud Firestore backend. Connection failed 1 times. Most recent error: FirebaseError: [code=internal]: 13 INTERNAL: Received RST_STREAM with code 2 triggered by internal client error: Protocol error
This typically indicates that your device does not have a healthy Internet connection at the moment. The client will operate in offline mode until it is able to successfully connect to the backend.
Is there a way to run the firebase emulators from Jenkins?
Thanks!
I found a way to do that.
Using firebase-tools-docker, I can easily run my tests inside a Docker container that brings up the emulator suite.
The Jenkinsfile goes like this:
def jenkinsUser = 1001
def firebaseDocker = 'andreysenov/firebase-tools:9.14.0'

stage('Pull docker image') {
    sh "docker pull $firebaseDocker"
}

stage('Unit tests') {
    sh "docker run -d --rm \
        --user $jenkinsUser:$jenkinsUser \
        -p 8080:8080 \
        -v ${pwd()}:/home/node \
        --name firebase-emulators \
        $firebaseDocker \
        firebase emulators:start"
    sleep(5)
    sh "docker exec firebase-emulators /bin/bash -c 'cd tests && yarn test'"
    sh "docker stop firebase-emulators"
}
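For context, firebase emulators:start reads its configuration from the firebase.json in the mounted project root; a minimal sketch matching the port published above (the file contents are an assumption, not taken from the original setup):

firebase.json:
{
  "emulators": {
    "firestore": {
      "port": 8080
    }
  }
}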
Hope this helps 😉

Varnish 6.0.8 Secret file is not created

We're facing some issues when installing Varnish 6.0.8 on Ubuntu 18.04.6: it doesn't create the secret file inside the /etc/varnish directory.
We use the following script for installation:
curl -s https://packagecloud.io/install/repositories/varnishcache/varnish60lts/script.deb.sh | sudo bash
Can someone please help?
PS: we tried installing later versions (6.6 and 7.0.0) and got the same issue.
From a security point of view, remote CLI access is not enabled by default. You can see this when looking at /lib/systemd/system/varnish.service:
[Unit]
Description=Varnish Cache, a high-performance HTTP accelerator
After=network-online.target nss-lookup.target
[Service]
Type=forking
KillMode=process
# Maximum number of open files (for ulimit -n)
LimitNOFILE=131072
# Locked shared memory - should suffice to lock the shared memory log
# (varnishd -l argument)
# Default log size is 80MB vsl + 1M vsm + header -> 82MB
# unit is bytes
LimitMEMLOCK=85983232
# Enable this to avoid "fork failed" on reload.
TasksMax=infinity
# Maximum size of the corefile.
LimitCORE=infinity
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,256m
ExecReload=/usr/sbin/varnishreload
[Install]
WantedBy=multi-user.target
There are no -T and -S parameters in the standard systemd configuration. However, you can enable remote CLI access by modifying the systemd configuration yourself.
Just run sudo systemctl edit --full varnish to edit the runtime configuration and add a -T parameter to enable remote CLI access.
Be careful with this and make sure you restrict access to this endpoint via firewall rules.
Additionally, you'll need to add -S /etc/varnish/secret as a varnishd runtime parameter in /lib/systemd/system/varnish.service.
You can use the following command to add a random unique value to the secret file:
uuidgen | sudo tee /etc/varnish/secret
This is what your runtime parameters would look like:
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,2g \
-S /etc/varnish/secret \
-T :6082
When you're done just run the following command to restart Varnish:
sudo systemctl restart varnish
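Once Varnish is back up, the remote CLI can be sanity-checked with varnishadm, pointing it at the -T endpoint and secret file configured above:

sudo varnishadm -T localhost:6082 -S /etc/varnish/secret ping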

Docker and Plumber issue with port forwarding

I'm trying to expose my plumber server over Docker. I get a log from plumber in RStudio saying it's listening on my desired port, and Swagger launches and runs fine (the API works via Swagger in my browser, connecting over port 8787, which is exposed for RStudio).
I'm running this command:
docker run -e PASSWORD=rstudio --rm -it -p 8787:8787 -p 3038:3038 \
  -v "/Users/my_name/Google Drive/r_files":"/home/rstudio/r-docker-tutorial" \
  rocker/verse
If I do:
curl http://localhost:3038
I get 'Empty reply from server'.
Likewise, if I attempt to hit my endpoint in Postman, I get 'Could not get any response'.
So it appears that the port isn't being successfully proxied by Docker, but I'm a bit stuck, as this doesn't make much sense!
Does anyone have any ideas?
Thanks
Dan
Here is a template Dockerfile we use to expose a plumber API:
Dockerfile
FROM rocker/r-ver:4.0.0
WORKDIR /src
RUN apt-get update && apt-get install -y --no-install-recommends \
git-core \
libssl-dev \
libz-dev \
libcurl4-gnutls-dev \
libsodium-dev
RUN install2.r --error plumber \
data.table \
xgboost
COPY ./startup.R /var
COPY ./plumber.R /var
EXPOSE 8004
ENTRYPOINT ["R", "-f", "/var/startup.R", "--slave"]
startup.R
library(plumber)
pr <- plumb("plumber.R")
pr$run(host = "0.0.0.0", port = 8004)
plumber.R
#* Health check
#* @get /
#* @serializer unboxedJSON
function() {
  list(status = "OK")
}
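To try this against the original port-forwarding problem, build the image and publish the EXPOSEd port; a usage sketch (the image tag is a placeholder):

docker build -t plumber-api .
docker run --rm -p 8004:8004 plumber-api
# from the host:
curl http://localhost:8004/
# {"status":"OK"}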
I was trying to configure a plumber service in Docker; while plumber replied successfully to queries made within the container, queries made from the host would always fail with the error curl: (56) Recv failure: Connection reset by peer.
I followed the code from the following example, part 1 and part 2.
The clue to the solution was provided in Bruno Tremblay's answer:
#startup.R
library(plumber)
pr <- plumb("plumber.R")
pr$run(host = "0.0.0.0", port = 8004)
The use of host="0.0.0.0" is the relevant bit (plumber uses host="127.0.0.1" by default).

TensorFlow Serving (Docker): how to get the gRPC request log sent by the client?

I've started TensorFlow Serving using Docker:
docker run -d -p 8500:8500 -p 8501:8501 -p 8503:8503 \
  -v /tfserving/models:/models \
  -v /tfserving/config:/tfconfig \
  tensorflow/serving:1.14.0 \
  --model_config_file=/tfconfig/models.config \
  --monitoring_config_file=/tfconfig/monitor.config \
  --enable_batching=true \
  --batching_parameters_file=/tfconfig/batching.config \
  --model_config_file_poll_wait_seconds=30 \
  --file_system_poll_wait_seconds=30 \
  --rest_api_port=8503 \
  --allow_version_labels_for_unavailable_models=true
I can get the log with docker logs containerid, but it only shows the basic startup log:
2019-11-22 07:03:48.912930: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:103]
2019-11-22 07:03:48.938568: I tensorflow_serving/model_servers/server.cc:324] Running gRPC ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
2019-11-22 07:03:48.947993: I tensorflow_serving/model_servers/server.cc:344] Exporting HTTP/REST API at:localhost:8503 ...
[evhttp_server.cc : 239] RAW: Entering the event loop ...
When I send a gRPC request from the client, no further logs are produced.
I've tried passing -v=1 at startup, but it didn't work.
What I want is the gRPC request log from the server, for troubleshooting. How can I do that? Thanks!
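One thing that may be worth trying (an assumption on my part, not something confirmed for TensorFlow Serving): the TensorFlow 1.x C++ runtime reads its log verbosity from the TF_CPP_MIN_LOG_LEVEL and TF_CPP_MIN_VLOG_LEVEL environment variables, which can be injected into the container. Whether this surfaces per-request gRPC detail depends on what the serving binary actually VLOGs:

# sketch: raise TensorFlow's C++ log verbosity inside the container
docker run -d -p 8500:8500 -p 8503:8503 \
  -e TF_CPP_MIN_LOG_LEVEL=0 \
  -e TF_CPP_MIN_VLOG_LEVEL=2 \
  -v /tfserving/models:/models \
  -v /tfserving/config:/tfconfig \
  tensorflow/serving:1.14.0 \
  --model_config_file=/tfconfig/models.config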

run apps using audio in a docker container

This question is inspired by Can you run GUI apps in a docker container?
The basic idea is to run apps with audio and a UI (vlc, firefox, skype, ...) in a docker container, for security sandboxing of the applications.
I was searching for docker containers using PulseAudio, but all containers I found were streaming PulseAudio over TCP:
https://gist.github.com/hybris42/ce429de428e5af3a344a
https://github.com/jlund/docker-chrome-pulseaudio
https://github.com/tomparys/docker-skype-pulseaudio
In my case I would prefer playing audio from an app inside the container directly on my host PulseAudio (without SSH tunneling and bloated docker images).
PulseAudio because my Qt app is using it ;)
It took me some time until I found out what is needed (on Ubuntu).
We start with the docker run command:
docker run -ti --rm myContainer sh -c "echo run something"
ALSA:
We need /dev/snd and, by the looks of it, some hardware access. Putting this together, we have:
docker run -ti --rm \
  -v /dev/snd:/dev/snd \
  --lxc-conf='lxc.cgroup.devices.allow = c 116:* rwm' \
  myContainer sh -c "echo run something"
In new docker versions without LXC flags you should use this:
docker run -ti --rm \
  -v /dev/snd:/dev/snd \
  --privileged \
  myContainer sh -c "echo run something"
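To check that ALSA actually works from inside the container, a quick test (assuming alsa-utils is installed in the image; the sample file ships with Ubuntu's ALSA packages):

docker run -ti --rm -v /dev/snd:/dev/snd --privileged \
  myContainer aplay /usr/share/sounds/alsa/Front_Center.wav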
PULSEAUDIO:
Update: it may be enough to mount the PulseAudio socket into the container using the -v option; this depends on your version and preferred access method. See other answers for the socket method.
Here we basically need /dev/shm, /etc/machine-id and /run/user/$uid/pulse. But that is not all (maybe because of Ubuntu and how they did it in the past). The environment variable XDG_RUNTIME_DIR has to be the same on the host system and in your docker container. You may also need /var/lib/dbus, because some apps access the machine id from there (it may contain only a symbolic link to the 'real' machine id). And finally you may need the hidden home folder ~/.pulse for some temp data (I am not sure about this).
docker run -ti --rm \
  -v /dev/shm:/dev/shm \
  -v /etc/machine-id:/etc/machine-id \
  -v /run/user/$uid/pulse:/run/user/$uid/pulse \
  -v /var/lib/dbus:/var/lib/dbus \
  -v ~/.pulse:/home/$dockerUsername/.pulse \
  myContainer sh -c "echo run something"
In new docker versions you might need to add --privileged.
Of course you can combine both and use it together with X server UI forwarding like here: https://stackoverflow.com/a/28971413/2835523
Just to mention:
you can handle most of this (everything except the user id) in the dockerfile
using uid=$(id -u) to get the user id and gid=$(id -g) for the group id
creating a docker user with this id
create user script:
mkdir -p /home/$dockerUsername && \
echo "$dockerUsername:x:${uid}:${gid}:$dockerUsername,,,:/home/$dockerUsername:/bin/bash" >> /etc/passwd && \
echo "$dockerUsername:x:${uid}:" >> /etc/group && \
mkdir /etc/sudoers.d && \
echo "$dockerUsername ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/$dockerUsername && \
chmod 0440 /etc/sudoers.d/$dockerUsername && \
chown ${uid}:${gid} -R /home/$dockerUsername
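A sketch of wiring this into a build, using build args for the ids (the ARG names and image tag are my own, not from the original setup):

FROM ubuntu:20.04
ARG UID=1000
ARG GID=1000
ARG USERNAME=dockeruser
# same idea as the create user script above, run at build time
RUN mkdir -p /home/$USERNAME && \
    echo "$USERNAME:x:${UID}:${GID}:$USERNAME,,,:/home/$USERNAME:/bin/bash" >> /etc/passwd && \
    echo "$USERNAME:x:${UID}:" >> /etc/group && \
    chown ${UID}:${GID} -R /home/$USERNAME
USER $USERNAME

Then build with the host's ids: docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) -t myimage .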
Inspired by the links you've posted, I was able to create the following solution. It is as lightweight as I could get it. However, I'm not sure if it is (1) secure, and (2) entirely fits your use-case (as it still uses the network).
Install paprefs on your host system, e.g. using sudo apt-get install paprefs on an Ubuntu machine.
Launch PulseAudio Preferences, go to the "Network Server" tab, and check the "Enable network access to local sound devices" checkbox [1]
Restart your computer. (Only restarting Pulseaudio didn't work for me on Ubuntu 14.10)
Install Pulseaudio in your container, e.g. sudo apt-get install -y pulseaudio
In your container, run export "PULSE_SERVER=tcp:<host IP address>:<host Pulseaudio port>". For example, export "PULSE_SERVER=tcp:172.16.86.13:4713" [2]. You can find out your IP address using ifconfig and the Pulseaudio port using pax11publish [1].
That's it. Step 5 should probably be automated if the IP address and Pulseaudio port are subject to change. Additionally, I'm not sure if Docker permanently stores environment variables like PULSE_SERVER: If it doesn't then you have to initialize it after each container start.
Suggestions to make my approach even better would be greatly appreciated, since I'm currently working on a similar problem as the OP.
References:
[1] https://github.com/jlund/docker-chrome-pulseaudio
[2] https://github.com/jlund/docker-chrome-pulseaudio/blob/master/Dockerfile
UPDATE (and probably the better solution):
This also works using a Unix socket instead of a TCP socket:
Start the container with -v /run/user/$UID/pulse/native:/path/to/pulseaudio/socket
In the container, run export "PULSE_SERVER=unix:/path/to/pulseaudio/socket"
The /path/to/pulseaudio/socket can be anything, for testing purposes I used /home/user/pulse.
Maybe it will even work with the same path as on the host (taking care of the $UID part) as the default socket; that way the ultimate solution would be -v /run/user/$UID/pulse/native:/run/user/<UID in container>/pulse. I haven't tested this, however.
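Putting the two socket steps into a single invocation, a minimal sketch (the image name and in-container path are placeholders; paplay comes with pulseaudio-utils):

docker run --rm \
  -v /run/user/$UID/pulse/native:/tmp/pulse/native \
  -e PULSE_SERVER=unix:/tmp/pulse/native \
  myContainer paplay /usr/share/sounds/alsa/Front_Center.wav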
After trying most of the solutions described here, I found only PulseAudio over network to be really working. However, you can make it safe by keeping the authentication.
Install paprefs (on host machine):
$ apt-get install paprefs
Launch paprefs (PulseAudio Preferences) > Network Server > [X] Enable network access to local sound devices.
Restart PulseAudio:
$ service pulseaudio restart
Check that it worked (or restart the machine):
$ (pax11publish || xprop -root PULSE_SERVER) | grep -Eo 'tcp:[^ ]*'
tcp:myhostname:4713
Now use that socket:
$ docker run \
-e PULSE_SERVER=tcp:$(hostname -i):4713 \
-e PULSE_COOKIE=/run/pulse/cookie \
-v ~/.config/pulse/cookie:/run/pulse/cookie \
...
Check that the user running inside the container has access to the cookie file ~/.config/pulse/cookie.
To test it works:
$ apt-get install mplayer
$ mplayer /usr/share/sounds/alsa/Front_Right.wav
For more info, you may check the Docker Mopidy project.
Assuming PulseAudio is installed on the host and in the image, one can provide PulseAudio sound over TCP with only a few steps. PulseAudio does not need to be restarted, and no configuration has to be done on the host or in the image either. This way it is included in x11docker, without the need of VNC or SSH:
First, find a free tcp port:
read LOWERPORT UPPERPORT < /proc/sys/net/ipv4/ip_local_port_range
while : ; do
  PULSE_PORT="$(shuf -i $LOWERPORT-$UPPERPORT -n 1)"
  ss -lpn | grep -q ":$PULSE_PORT " || break
done
Get the IP address of the docker daemon. I always find it to be 172.17.42.1/16:
ip -4 -o a | grep docker0 | awk '{print $4}'
Load pulseaudio tcp module, authenticate connection to docker ip:
PULSE_MODULE_ID=$(pactl load-module module-native-protocol-tcp port=$PULSE_PORT auth-ip-acl=172.17.42.1/16)
On docker run, create the environment variable PULSE_SERVER:
docker run -e PULSE_SERVER=tcp:172.17.42.1:$PULSE_PORT yourimage
Afterwards, unload the TCP module. (Note: for unknown reasons, unloading this module can stop the PulseAudio daemon on the host.)
pactl unload-module $PULSE_MODULE_ID
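For convenience, the four steps can be chained into one wrapper script (assembled from the snippets above; yourimage is the placeholder from this answer):

# 1. find a free tcp port
read LOWERPORT UPPERPORT < /proc/sys/net/ipv4/ip_local_port_range
while : ; do
  PULSE_PORT="$(shuf -i $LOWERPORT-$UPPERPORT -n 1)"
  ss -lpn | grep -q ":$PULSE_PORT " || break
done
# 2. docker bridge address (CIDR, e.g. 172.17.42.1/16)
DOCKER_IP="$(ip -4 -o a | grep docker0 | awk '{print $4}')"
# 3. let that subnet reach pulseaudio on the chosen port
PULSE_MODULE_ID=$(pactl load-module module-native-protocol-tcp port=$PULSE_PORT auth-ip-acl=$DOCKER_IP)
# 4. run the container, then unload the module again
docker run -e PULSE_SERVER=tcp:${DOCKER_IP%/*}:$PULSE_PORT yourimage
pactl unload-module $PULSE_MODULE_ID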
Edit: How-To for ALSA and Pulseaudio in container
I managed to dockerize a Java game in the following way, effectively passing the game's sound through.
This approach requires building an image, making sure the app has all the dependencies it needs, in this case PulseAudio and X11. If you're sure your image has everything it needs, you may proceed as stated in the previous answers.
Here, we need to build the image first, then we can actually launch it:
docker build -t my-unciv-image .   # run from the directory containing the Dockerfile
docker run --name unciv \
  --device /dev/dri \
  -e DISPLAY=$DISPLAY \
  -e PULSE_SERVER=unix:/run/user/1000/pulse/native \
  --privileged \
  -u $(id -u):$(id -g) \
  -v /path/to/Unciv:/App \
  -v /run/user/$(id -u)/pulse:/run/user/$(id -u)/pulse \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -w /App \
  my-unciv-image \
  java -jar /App/Unciv.jar
In the second command the following is specified:
--name: a name is given to the container
--device: video device*
-e: required environment variables:
DISPLAY: the display number
PULSE_SERVER: the PulseAudio audio server socket
--privileged: run it privileged*, so it can access all devices
-v: mounted volumes:
the path to the game, mounted into /App in the container**
the audio server socket
the display server socket
-w: the working directory
Here is a docker-compose.yml version of it:
# docker-compose.yml
version: '3'
services:
  unciv:
    build: .
    container_name: unciv
    devices:
      - /dev/dri:/dev/dri  # * either this
    entrypoint: java -jar /App/Unciv.jar
    environment:
      - DISPLAY=$DISPLAY
      - PULSE_SERVER=unix:/run/user/1000/pulse/native
    privileged: true       # * or this
    user: 1000:1000
    volumes:
      - /path/to/game/:/App
      - /run/user/1000/pulse:/run/user/1000/pulse
      - /tmp/.X11-unix:/tmp/.X11-unix
    working_dir: /App
And the Dockerfile:
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y openjdk-11-jre
RUN apt-get install -y xserver-xorg-video-all
RUN apt-get install -y libgl1-mesa-glx libgl1-mesa-dri
RUN apt-get install -y pulseaudio
# the user referenced by USER below has to exist in the image
RUN useradd -m unciv
USER unciv
Notes:
*Only required for a game or anything that uses OpenGL. Either pass the device explicitly or run it privileged; passing the device should be enough, making it privileged may be overkill.
**The game itself could be bundled into the docker image, but for a demo it is mounted from the host.
For the audio, it's required to pass the PULSE_SERVER environment variable and mount the PulseAudio socket.
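With the compose file and Dockerfile in the same directory, the whole thing can then be brought up with:

docker-compose up --build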
