How to test Firestore Security Rules with Jenkins?

I'm developing some Firestore security rules locally. I use mocha to test the rules, and locally everything works. I have a Jenkins pipeline that publishes the rules to Firebase in the cloud every time I merge a PR into develop. What I want to do is run my unit tests within Jenkins. However, every time Jenkins calls yarn test from the pipeline, I get an error that says
@firebase/firestore: Firestore (7.18.0): Could not reach Cloud Firestore backend. Connection failed 1 times. Most recent error: FirebaseError: [code=internal]: 13 INTERNAL: Received RST_STREAM with code 2 triggered by internal client error: Protocol error
This typically indicates that your device does not have a healthy Internet connection at the moment. The client will operate in offline mode until it is able to successfully connect to the backend.
Is there a way to run the firebase emulators from Jenkins?
Thanks!

I found a way to do that.
Using firebase-tools-docker, I can easily run my tests inside a Docker container that brings up the emulator suite.
The Jenkinsfile goes like this:
def jenkinsUser = 1001
def firebaseDocker = 'andreysenov/firebase-tools:9.14.0'

stage('Pull docker image') {
    sh "docker pull $firebaseDocker"
}

stage('Unit tests') {
    sh "docker run -d --rm \
            --user $jenkinsUser:$jenkinsUser \
            -p 8080:8080 \
            -v ${pwd()}:/home/node \
            --name firebase-emulators \
            $firebaseDocker \
            firebase emulators:start"
    sleep(5)
    sh "docker exec firebase-emulators /bin/bash -c 'cd tests && yarn test'"
    sh "docker stop firebase-emulators"
}
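Note that sleep(5) gives the emulators a fixed head start and can race a slow boot. A more robust variant (just a sketch; it assumes curl is available on the Jenkins agent and that the Firestore emulator answers HTTP on the mapped port 8080) polls the port instead:

sh '''
    # Wait up to 30s for the Firestore emulator to answer on port 8080
    for i in $(seq 1 30); do
        curl -s http://localhost:8080 > /dev/null && break
        sleep 1
    done
'''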
For reference, my tests live in a tests/ folder at the repository root, which the -v flag above mounts into the container at /home/node.
Hope this helps 😉

Related

Curl connection refused on circleci but works on local machine

I have a CircleCI pipeline, and after deployment I run a smoke test to check the application status. Here is the code:
smoke-test:
  docker:
    - image: python:3.10.5-alpine3.16
  steps:
    - checkout
    - run:
        name: Install dependencies
        command: |
          apk add --update --no-cache curl aws-cli tar gzip jq
    - run:
        name: Backend smoke test
        command: |
          export BACKEND_IP=$(aws ec2 describe-instances \
            --filters "Name=tag:Name,Values=UdaPeople-backend-${CIRCLE_WORKFLOW_ID:0:5}" \
            'Name=instance-state-name,Values=running' \
            --query 'Reservations[*].Instances[*].PublicIpAddress' \
            --output text)
          export API_URL="http://${BACKEND_IP}:3030/api/status"
          echo "${API_URL}"
          wget "${API_URL}"
          if curl -s -v "${API_URL}" | grep "ok"
          then
            exit 0  # "return" is only valid inside a shell function; use exit here
          else
            exit 1
          fi
More details:

- The server I am trying to query is an EC2 instance with a security group that allows all IP addresses on port 3030.
- I pulled the image I am using in CircleCI and tested the curl and wget commands locally; they work perfectly.
- I have made more than 30 deployments, and the result is the same.
- The error output from CircleCI shows that it actually hits the IP address.
- I increased the timeout seconds and also set the retries to 5.

Please, what could I be missing?
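One sanity check worth adding to the smoke test (a sketch; it assumes that an empty BACKEND_IP means the describe-instances filter matched nothing, in which case curl would be hitting a malformed URL rather than the instance):

# Insert right after the aws ec2 describe-instances call
if [ -z "${BACKEND_IP}" ]; then
  echo "No running instance matched the tag filter"
  exit 1
fi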

404 error when using Google Cloud Scheduler to run Docker container on Cloud Run

I am posting a follow-on question to this one that I posted recently: Docker container failed to start when deploying to Google Cloud Run. I am new to GCP, and am trying to teach myself by deploying a simple R script in a Docker container that connects to BigQuery and writes the system time. I've been able to successfully deploy the Docker container, but I cannot invoke it. I believe I'm misunderstanding something fundamental about APIs, and I'd greatly appreciate any input!
So far, I have:
1.- Used the plumber R package to expose the R code as a service by "decorating" it with special annotations
# script called big-query-tutorial.R
library(bigrquery)
library(tidyverse)

project = "xxxx-project"
dataset = "xxxx-dataset"
table = "xxxx-table"

bq_auth("/home/rstudio/xxxx-xxxx.json", email = "xxxx@xxxx.com")

#* @get /time
systime <- function(){
  # upload Sys.time() to BigQuery
  insert_upload_job(project = project, data = dataset, table = table,
                    write_disposition = "WRITE_APPEND",
                    values = Sys.time() %>% as_tibble(), billing = project)
}
2.- Translated the R code from (1) to a plumber API with this R script
# script called main.R
library(plumber)
r <- plumb("/home/rstudio/big-query-tutorial.R")
r$run(host="0.0.0.0", port=8080)
3.- Made the Dockerfile
FROM rocker/tidyverse:latest
# BEGIN rstudio/plumber layers
RUN apt-get update -qq && apt-get install -y --no-install-recommends \
git-core \
libssl-dev \
libcurl4-gnutls-dev \
curl \
libsodium-dev \
libxml2-dev
RUN R -e "install.packages('plumber', repos='http://cran.us.r-project.org')"
RUN R -e "install.packages('bigrquery', repos='http://cran.us.r-project.org')"
# add json file for authentication with BigQuery and necessary R scripts
ADD xxxx-xxxx.json /home/rstudio
ADD big-query-tutorial.R /home/rstudio
ADD main.R /home/rstudio
# open port 8080 to traffic
EXPOSE 8080
# when the container starts, start the main.R script
ENTRYPOINT ["Rscript", "/home/rstudio/main.R", "--host", "0.0.0.0"]
4.- Successfully run the container locally on my machine, with the system time being written to BigQuery when I visit http://0.0.0.0:8080/time and then refresh the browser.
5.- Pushed the container to my container registry in Google Cloud
6.- Successfully deployed the container to Cloud Run.
7.- Created a service account (i.e., xxxx@xxxx.iam.gserviceaccount.com) that has roles "Cloud Run Invoker" and "Cloud Scheduler Service Agent".
8.- Set up a Cloud Scheduler job by filling out the fields in the console as follows
Frequency: * * * * * (i.e., once per minute)
Timezone: Pacific Standard Time (PST)
Target: HTTP
URL: xxxx-xxxx.run.app
HTTP method: GET
Auth header: Add OIDC token
Service account: xxxx@xxxx.iam.gserviceaccount.com (i.e., account from (7))
Audience: xxxx-xxxx.run.app (I leave this field blank, it is automatically added)
When I click on "RUN NOW" in Cloud Scheduler, I get the error
httpRequest: {
status: 404
}
When I check the log for Cloud Run, every minute there is the 404 error. The request count under the "METRICS" tab averages out to 0.02/s.
Thank you!
-H.
A couple of recommendations:
Make sure your service account has roles/iam.serviceAccountTokenCreator and roles/cloudscheduler.serviceAgent, which enable impersonation, as well as roles/run.invoker so it can call Cloud Run.
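Granting the invoker role could look like this (a sketch; $PROJECT_ID and the service-account address stand in for the placeholders from the question):

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:xxxx@xxxx.iam.gserviceaccount.com" \
  --role="roles/run.invoker"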
Also, double-check the OIDC audience you have chosen.
A bit about the audience field in OIDC tokens: you must set this field for the invoking service and specify the fully qualified URL of the receiving service. For example, if you are invoking Cloud Run or Cloud Functions, the id_token must include the URL/path of the service.
Example declaration:
gcloud beta scheduler jobs create http oidctest \
  --schedule "5 * * * *" \
  --http-method=GET \
  --uri=https://hello-6w42z6vi3q-uc.a.run.app \
  --oidc-service-account-email=schedulerunner@$PROJECT_ID.iam.gserviceaccount.com \
  --oidc-token-audience=https://hello-6w42z6vi3q-uc.a.run.app
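Separately, the plumber API in the question only exposes a /time route, which may explain the 404s when the scheduler calls the bare service URL. A manual test of the full path might look like this (a sketch; it assumes your own account is permitted to invoke the service):

curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
  https://xxxx-xxxx.run.app/time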

Firestore authorization for Google Compute engine for app on a docker container

I have deployed a Node.js app on a Google compute instance via a Docker container. Is there a recommended way to pass the GOOGLE_APPLICATION_CREDENTIALS to the docker container?
I see the documentation states that GCE has Application Default Credentials (ADC), but these are not available in the docker container. (https://cloud.google.com/docs/authentication/production)
I am a bit new to docker & GCP, so any help would be appreciated.
Thank you!
I found some documentation on injecting your GOOGLE_APPLICATION_CREDENTIALS into a Docker container in order to test Cloud Run locally. I know this is not Cloud Run, but I believe the same commands can be used to inject your credentials into this container.
Since links and their contents can change over time, I will copy the steps needed to inject the credentials.
Refer to Getting Started with Authentication for instructions on generating, retrieving, and configuring your Service Account credentials.
The following Docker run flags inject the credentials and configuration from your local system into the local container:
Use the --volume (-v) flag to inject the credential file into the container (assumes you have already set your GOOGLE_APPLICATION_CREDENTIALS environment variable on your machine):
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro
Use the --environment (-e) flag to set the GOOGLE_APPLICATION_CREDENTIALS variable inside the container:
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json
Optionally, use this fully configured Docker run command:
PORT=8080 && docker run \
  -p 9090:${PORT} \
  -e PORT=${PORT} \
  -e K_SERVICE=dev \
  -e K_CONFIGURATION=dev \
  -e K_REVISION=dev-00001 \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
  -v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro \
  gcr.io/PROJECT_ID/IMAGE
Note that the path /tmp/keys/FILE_NAME.json shown in the example above is a reasonable location to place your credentials inside the container. However, other directory locations will also work. The crucial requirement is that the GOOGLE_APPLICATION_CREDENTIALS environment variable must match the bind-mount location inside the container.
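To verify that the variable and the bind mount line up, one option is to list the file from inside the container (a sketch; it assumes the image contains /bin/sh and temporarily overrides the entrypoint for this check):

docker run --rm \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
  -v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro \
  --entrypoint /bin/sh gcr.io/PROJECT_ID/IMAGE \
  -c 'ls -l "$GOOGLE_APPLICATION_CREDENTIALS"'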
Hope this works for you.

Spinnaker Nexus Integration

I'm facing an issue while integrating Spinnaker with Nexus.
Here is my process: I build a Docker image with Jenkins and upload it to Nexus. Next, I want to trigger Spinnaker pipelines whenever a new image is available on Nexus, in order to deploy apps on Kubernetes.
I've used these 2 commands
hal config provider docker-registry enable
hal config provider docker-registry account add my-docker-registry \
--address <pvtIP>:9082 \
--repositories repository/<repoName> \
--username <userName> \
--password
Getting error as below
+ Get current deployment
Success
- Add the my-docker-registry account
Failure
Problems in default.provider.dockerRegistry.my-docker-registry:
! ERROR Unable to fetch tags from the docker repository:
repository/test-docker-snapshots/, Unrecognized SSL message, plaintext
connection?
? Can the provided user access this repository?
- WARNING None of your supplied repositories contain any tags.
Spinnaker will not be able to deploy any docker images.
? Push some images to your registry.
- Failed to add account my-docker-registry for provider
dockerRegistry.
Is it mandatory to have Nexus on HTTPS? I'm running on HTTP, in an internal network only.
Please advise. Thanks.
If your Nexus repo is running on HTTP, you should set the --insecure-registry flag in your command. So your final command would be as follows:
hal config provider docker-registry account add my-docker-registry \
--address <pvtIP>:9082 \
--repositories repository/<repoName> \
--insecure-registry true \
--username <userName> \
--password
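Note that Halyard configuration changes only take effect once they are applied to your Spinnaker deployment:

hal deploy apply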

Error when trying to run a meteor app in docker using meteorhacks:meteord

I have just begun playing around with Docker. While trying out method 1 from meteorhacks:meteord, I get the following error:
=> You don't have an meteor app to run in this image.
Here is what I have done after creating the basic counter demo meteor app.
docker build -t app .
Sending build context to Docker daemon 11.75 MB
Step 0 : FROM meteorhacks/meteord:base
---> 528baf8d4263
Step 1 : MAINTAINER MeteorHacks Pvt Ltd.
---> Running in 6d7e7eb6ebce
---> d69fefdbeb70
Removing intermediate container 6d7e7eb6ebce
Step 2 : ONBUILD copy ./ /app
---> Running in e68618104dfa
---> c253ae966ea1
Removing intermediate container e68618104dfa
Step 3 : ONBUILD run bash $METEORD_DIR/on_build.sh
---> Running in e51e557c2b05
---> a6a6a1be9147
Removing intermediate container e51e557c2b05
Successfully built a6a6a1be9147
then (I had already started a mongo container exposing 27017 and grabbed the internal IP address, which was 172.17.0.1)
docker run -d \
-e ROOT_URL=http://localhost:3000 \
-e MONGO_URL=mongodb://172.17.0.1:27017/ \
-e MONGO_OPLOG_URL=mongodb://172.17.0.1:27017/ \
-p 8080:80 \
app
I get the error when I do this and then run docker logs <container id>
Can someone guide me on this?
Thanks in advance.
This error comes from scripts/run_app.sh, which is the ENTRYPOINT of the base Dockerfile.
It checks for the presence of:
a /bundle folder, or
a /built_app folder, or
a $BUNDLE_URL environment variable.
If your counter demo Dockerfile didn't populate the /bundle or /built_app folders, then you need to make sure you are defining ENV BUNDLE_URL with the right URL.
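Starting from the run command in the question, that could look like this (a sketch; the bundle URL is a hypothetical placeholder for a tarball produced by meteor build):

docker run -d \
  -e ROOT_URL=http://localhost:3000 \
  -e MONGO_URL=mongodb://172.17.0.1:27017/ \
  -e BUNDLE_URL=https://example.com/counter-bundle.tar.gz \
  -p 8080:80 \
  app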
