Is it possible to use firebase-functions from my laptop? If not, is firebase-admin the only option remaining?
Here are some related questions:
How can I rent and use my own servers for cloud functions?
Listen only to additions to a cloud firestore collection?
Does Firebase Admin SDK perform any caching?
I am able to create an index.js file on my laptop, npm install the firebase-admin module, connect to my Firestore database and make changes to data just fine, using admin credentials. But when I also npm install firebase-functions and try to make use of the event triggers onCreate/onWrite/onUpdate/onDelete, they never receive any updates.
To my understanding, the only way to make use of event triggers is to upload the code to Cloud Functions, since you need Google's infrastructure for those and you can't run them on your local machine, which you can with the firebase-admin package. You can use the local emulator(?), but it isn't production-ready and isn't meant for that use case(?).
So, in order to listen for new events on my Firestore database using only my laptop (not the Google Cloud Functions platform or some other server-hosted option), I have to use .onSnapshot() from the firebase-admin package.
However, that module doesn't cache, so you are left querying the whole Firestore database and downloading every document.
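For reference, here is roughly the kind of listener I mean (a minimal sketch with firebase-admin; the service-account path and the items collection are placeholders). docChanges() at least lets me react only to additions, although the first snapshot still delivers every existing document as an "added" change:
const admin = require('firebase-admin');
// Placeholder service-account key; any admin credential works here.
admin.initializeApp({
  credential: admin.credential.cert(require('./serviceAccountKey.json')),
});
const db = admin.firestore();
db.collection('items').onSnapshot((snapshot) => {
  snapshot.docChanges().forEach((change) => {
    // React only to newly added documents; updates and deletes are ignored.
    if (change.type === 'added') {
      console.log('New document:', change.doc.id, change.doc.data());
    }
  });
});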
Is this correct? Or is there any way to make firebase-functions work from my laptop using firebase-admin + admin credentials, almost as if I had uploaded the file to the cloud platform? I don't require this part of the data to be on the cloud, so I want to make changes and adjust the Firestore database from my laptop's terminal.
You will need to balance scalability against what you want to achieve. One approach uses a Parse Server in a Docker container that works with Express. This method's advantage is its flexibility as to where the Parse Server can run: you can run it on your laptop or move it to Google Cloud if you need more processing power. However, it is worth noting that the container cannot access all Firebase trigger types.
I am not sure which operating system you are using, but on Ubuntu you can install Docker and run Parse Server like this:
# Update the apt package index first
$ sudo apt-get update
# install dependencies
$ sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common \
git
# Add Docker’s GPG key:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# add the Docker repository
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
# Update the apt package index again
$ sudo apt-get update
# Install Docker
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
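You can verify the engine installed correctly with Docker's standard test image:
$ sudo docker run hello-world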
# Get the Parse Server source and build the image
$ git clone https://github.com/parse-community/parse-server
$ cd parse-server
$ docker build --tag parse-server .
# Start a MongoDB container for Parse Server to use
$ docker run --name my-mongo -d mongo
To run the Parse Server:
$ docker run --name my-parse-server -v config-vol:/parse-server/config \
-p 1337:1337 --link my-mongo:mongo -d parse-server --appId APPLICATION_ID \
--masterKey MASTER_KEY --databaseURI mongodb://mongo/test
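You can then check that the server came up; Parse Server exposes a health endpoint (assuming the image's default /parse mount path):
$ curl http://localhost:1337/parse/health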
To link Firebase and the Docker Parse Server, you will need an adapter. The container above is an example, but it should be enough to get you started running from your laptop.
Related
I'm trying to put my Shiny app in a Docker container. My Shiny app works totally fine on my local computer, but after dockerizing it I always get an error message on my localhost like: The application failed to start. The application exited during initialization.
I have no idea why that happens. I'm new to Docker. How can I find the error logs when I run the Docker image? I need the logs to know what went wrong.
Here is my Dockerfile:
# Install R version 3.6
FROM r-base:3.6.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxt-dev \
libssl-dev
# Download and install ShinyServer (latest version)
RUN wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb
# Install R packages that are required
# TODO: add further packages if you need them!
RUN R -e "install.packages(c( 'tidyverse', 'ggplot2','shiny','shinydashboard', 'DT', 'plotly', 'RColorBrewer'), repos='http://cran.rstudio.com/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
# Make the ShinyApp available at port 80
EXPOSE 80
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
I built the image and ran it like below:
docker build -t myshinyapp .
docker run -p 80:80 myshinyapp
Usually the logs for any (live or dead) container can be found by just using:
docker logs full-container-name
or
docker logs CONTAINERID
(replacing CONTAINERID with the actual ID of your container)
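You can also stream the output live while the container runs with the follow flag:
docker logs -f CONTAINERID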
As said above, this usually works even for stopped (but not yet removed) containers, which you can list with:
docker container ls -a
or just
docker ps -a
However, sometimes you won't even have a log, because the container was never created at all (which, from experience, I think fits your case better).
That can happen simply because the Docker engine is unable to allocate the resources that your service definition requires.
The application failed to start. The application exited during initialization
is usually a sign that your Docker engine couldn't get those required resources.
And the most common cause is as simple as your host ports:
If another service (dockerized or not) is already using the port you want for your service (in your case, port 80), Docker will simply be unable to start your container.
So, in short, the easiest fix for that situation (and the first thing to try whenever you face this kind of issue) is to bind some other host port (say 8080) to the port 80 that your service listens on internally (inside your container):
docker run -p 8080:80 myshinyapp
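If you want to confirm that the host port is actually taken before rebinding, you can check what is listening on it (on a Linux host):
# See which process is listening on host port 80
sudo lsof -i :80
# Or check whether another container already publishes port 80
docker ps --filter "publish=80"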
The same principle applies to volumes that can't be allocated (e.g. trying to bind-mount, as read-only, a path that doesn't actually exist on the host).
As an aside comment/trick:
Since you're not setting a name for your container, you will need to use the container ID instead when looking for its logs.
But instead of typing (or copy-pasting) the full container ID (usually something like 1283c66babea, or even longer), you can just type its first few digits and it will still work as expected:
docker logs 1283c6 or docker logs 1283 or even docker logs 128
(of course, as long as you don't have any other container whose ID starts with 128)
I want to run JupyterHub in a subdomain. Here are the Dockerfile, jupyterhub_config.py and .gitlab-ci.yml.
My first question is how to configure jupyterhub_config.py: how can I load jupyterhub_config.py into the container during the build?
How do I start JupyterHub in the .gitlab-ci.yml for tests, and how do I copy the application to the subdomain? I wrote a README.md. I need a little help with JupyterHub. If it all works fine, I will write a complete HOWTO for installing JupyterHub on a local machine and in a subdomain at a provider.
FROM continuumio/miniconda3
# Updating packages
RUN apt-get update -y \
&& apt-get install -y --no-install-recommends \
git \
nano \
unzip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Update conda and install Jupyter packages
RUN conda update -y conda
RUN conda install -c conda-forge jupyter_nbextensions_configurator \
jupyterhub \
jupyterlab \
matplotlib \
pandas \
scipy
# Setup application
EXPOSE 8000
CMD ["jupyterhub", "--ip='*'", "--port=8000", "--no-browser", "--allow-root"]
The .gitlab-ci.yml
image: docker:latest
variables:
CONTAINER_IMAGE: registry.gitlab.com/joklein
DOCKER_IMAGE: jupyterhub
TAG: 0.1.0
services:
- docker:dind
stages:
- build
- test
- release
- deploy
before_script:
- echo "$GITLAB_PASSWORD" | docker login registry.gitlab.com --username $GITLAB_USER --password-stdin
build:
stage: build
script:
- docker build -t $CONTAINER_IMAGE/$DOCKER_IMAGE .
- docker push $CONTAINER_IMAGE/$DOCKER_IMAGE
test:
stage: test
script:
- docker pull $CONTAINER_IMAGE/$DOCKER_IMAGE
# - docker run -dt -p 8000:8000 --name $DOCKER_IMAGE $CONTAINER_IMAGE/$DOCKER_IMAGE
release:
stage: release
script:
- docker pull $CONTAINER_IMAGE/$DOCKER_IMAGE
- docker tag $CONTAINER_IMAGE/$DOCKER_IMAGE:latest $CONTAINER_IMAGE/$DOCKER_IMAGE:$TAG
- docker push $CONTAINER_IMAGE/$DOCKER_IMAGE:$TAG
only:
- master
deploy:
stage: deploy
image: alpine:latest
before_script:
- apk update && apk add git openssh-client rsync
script:
- mkdir .public
- cp -r * .public
- mv .public public
- mkdir "${HOME}/.ssh"
- echo "${SSH_HOST_KEY}" > "${HOME}/.ssh/known_hosts"
- echo "${SSH_PRIVATE_KEY}" > "${HOME}/.ssh/id_rsa"
- chmod 700 "${HOME}/.ssh/id_rsa"
- rsync -hrvz --delete --exclude=_ public/ user@example.com:www/jupyter/
only:
- master
The jupyterhub_config.py
c = get_config()
# Use Let's Encrypt (https://letsencrypt.org/) to obtain a free, trusted SSL
# certificate.
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/example.com/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'
c.JupyterHub.port = 443
#
# Change from JupyterHub to JupyterLab
c.Spawner.default_url = '/lab'
c.Spawner.debug = True
#
# # Specify users and admin
c.Authenticator.whitelist = {"systemuser"}
c.Authenticator.admin_users = {"systemuser"}
Docker base image of JupyterHub and JupyterLab
JupyterHub is a multi-user server for Jupyter notebooks. JupyterLab is the next-generation web-based user interface for Project Jupyter. This repository provides a Docker base image for JupyterHub and JupyterLab that works as a stand-alone application and in a (sub)domain.
Images derived from this image can either run as a stand-alone server, or
function as a volume image for your server. You can also use them in a CI/CD
system such as GitLab CI to build your content prior to bundling it into a
standalone server container.
Building your JupyterHub image
Based on this structure, you can easily build an image for your needs. There are two options for using the image you generate:
as a stand-alone image
as a volume image for your webserver
The simplest way to build your own image is with a Dockerfile. This is only an example; if you need more software packages, you can install them with this Dockerfile and conda.
Build the container
docker build -t jupyterhub .
Your JupyterHub with JupyterLab is automatically generated during this build.
Run the container
docker run -p 8000:8000 -d --name jupyterhub jupyterhub jupyterhub
-p is used to map your local port 8000 to the container port 8000
-d is used to run the container in the background. JupyterHub will just write logs, so there is no need to output them in your terminal unless you want to troubleshoot a server error.
--name jupyterhub names your container jupyterhub
the first jupyterhub after the options is the image
the final jupyterhub is the command that starts the JupyterHub server inside the container
Your JupyterHub with JupyterLab is now available at http://localhost:8000.
Start / Stop JupyterHub
docker start jupyterhub / docker stop jupyterhub
Configure JupyterHub
Let's encrypt certificates for JupyterHub
To enable HTTPS on your website, you need to get a certificate (a type of file) from a Certificate Authority (CA). Let’s Encrypt is a CA. In order to get a certificate for your website’s domain from Let’s Encrypt, you have to
demonstrate control over the domain. With Let’s Encrypt, you do this using
software that uses the ACME protocol, which typically runs on your web host.
Go to zerossl.com and generate a certificate for your domain. As the result you get four files: domain-key.txt, domain-crt.txt, domain-csr.txt and account-key.txt. These files are base64-encoded, readable as ASCII rather than binary. The certificates are already in PEM format; just change the extension to .pem.
For JupyterHub, only the files domain-key.txt and domain-crt.txt are needed.
cp domain-crt.txt fullchain.pem
cp domain-key.txt privkey.pem
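To make those files visible at the paths referenced in jupyterhub_config.py, one option (a sketch; the host path is a placeholder) is to mount them read-only when starting the container:
docker run -p 443:443 -d --name jupyterhub \
  -v /path/to/certs:/etc/letsencrypt/live/example.com:ro \
  jupyterhub jupyterhub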
Add a System user in the container
By default JupyterHub searches for users on the server. In order to be able to
log in to our new JupyterHub server we need to connect to the JupyterHub docker
container and create a new system user with a password.
docker exec -it jupyterhub bash
useradd --create-home systemuser
passwd systemuser
exit
The command docker exec -it jupyterhub bash will spawn a root shell in your
docker container. You can use the root shell to create system users in the
container. These accounts will be used for authentication in JupyterHub's
default configuration.
The first command, useradd, creates a new user named systemuser. The second, passwd, will ask you for a password.
The whole process might be simpler with GitLab 12.0 (June 2019) and its
Git integration for JupyterHub
Deploying JupyterHub via GitLab’s Kubernetes integration provides an easy way to get started with Jupyter notebooks, which can be used to create and share documents that contain live code, visualizations, and even runbooks.
Starting with GitLab 12.0, JupyterLab’s Git extension is automatically provisioned and configured when installing JupyterHub onto your Kubernetes cluster.
This integration enables full version control of your notebooks as well as issuance of Git commands within Jupyter. Git commands can be issued via the Git tab on the left panel or via Jupyter’s command line prompt.
See documentation and gitlab-ce issue 47138.
jupyterhub --generate-config
This is what the documentation suggests. It creates a jupyterhub_config.py file in the current directory (here /srv/jupyterhub).
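To bake the configuration into the image and load it at start-up, one option (a sketch, assuming jupyterhub_config.py sits next to the Dockerfile) is to COPY it in and point jupyterhub at it with the -f flag:
COPY jupyterhub_config.py /srv/jupyterhub/jupyterhub_config.py
CMD ["jupyterhub", "-f", "/srv/jupyterhub/jupyterhub_config.py"]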
I have used the Firebase CLI to host a website. Today I tried to push files from my local machine to Firebase Storage using the Firebase CLI, but when I run the command firebase deploy nothing happens. Can anyone tell me how to push my files to Firebase Storage?
Install gsutil using this tutorial:
https://cloud.google.com/storage/docs/gsutil_install#deb
Example for Ubuntu:
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get install apt-transport-https ca-certificates -y
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install google-cloud-sdk
Login to google cloud
gcloud auth login
Go to the displayed link, log in and paste the verification code back into the console. Select the project:
gcloud config set project PROJECT_ID
Send a file. For example:
gsutil cp backup.$(date +%F).gz.gpg gs://PROJECT_ID.appspot.com/backups
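You can then confirm the upload (the bucket path is copied from the example above):
gsutil ls gs://PROJECT_ID.appspot.com/backups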
I'm trying to set some variables on Dokku for deployment. As far as I can see from the dev files, one should create a .env file in the directory and put the variables in there, but this is not updating anything.
.env file
DOKKU_NGINX_PORT=3000
MYSQL_URL=http://blabla
MYSQL_USER=mysqluser
I'm trying to map the app's port to port 3000 and inject the MySQL vars into the runtime environment.
I know I can set them with dokku config:set on the server, but I want to be able to automate it during deployment.
Any ideas? Or an example?
You'll need to install a Dokku client, or CLI, in order to interact locally with the remote application on your Dokku instance.
Here are a few options:
(node.js) dokku-toolbelt
Dokku toolbelt is a node-based CLI wrapper that proxies requests to
the Dokku command running on remote hosts.
You can install it via the following shell command (assuming you have node and npm installed):
$ npm install -g dokku-toolbelt
See documentation here for more information.
(python) dokku-client
Dokku client is an extensible python-based cli wrapper for remote
Dokku hosts.
You can install it via the following shell command (assuming you have python and pip installed):
$ pip install dokku-client
See documentation here for more information.
(ruby) Dokku CLI
Dokku CLI is a rubygem that acts as a client for your Dokku
installation.
You can install it via the following shell command (assuming you have ruby and rubygems installed):
$ gem install dokku-cli
See documentation here for more information.
After the Dokku client is installed locally, make sure that the dokku app remote is set inside the repository directory.
You can verify this by running $ git remote -v.
If the output doesn't show your dokku application instance, set it with the following command:
$ git remote add dokku dokku@example.com:your-app-name
Here's an example from my terminal with some information redacted for security purposes.
seth#linuxmint ~/repos/Adopt-a-Pet $ git remote -v
dokku dokku@example.com:adopt-a-pet (fetch)
dokku dokku@example.com:adopt-a-pet (push)
origin https://github.com/sethbergman/Adopt-a-Pet.git (fetch)
origin https://github.com/sethbergman/Adopt-a-Pet.git (push)
Then you can set environment variables with the following commands:
$ dokku config:set DOKKU_NGINX_PORT=3000
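config:set accepts several KEY=VALUE pairs at once, so you can set everything from your .env in one command (values copied from your question):
$ dokku config:set DOKKU_NGINX_PORT=3000 MYSQL_URL=http://blabla MYSQL_USER=mysqluser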
You can optionally set environment variables with the .env file:
$ dokku config:set:file <path/to/.env>
If the .env file is in the root directory of the repository, then the command would be:
$ dokku config:set:file .env
If you're using Ruby, you can use the dokku-cli gem. With it, you can set config from any file by issuing the command:
dokku config:set:file <path/to/file>
See the Ruby docs for more information.
What is a workflow for deploying to Digital Ocean with Phusion Docker and Node/Meteor support?
I tried :
FROM phusion/passenger-nodejs:0.9.10
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# ssh
ADD private/keys/akey.pub /tmp/your_key
RUN cat /tmp/your_key >> /root/.ssh/authorized_keys && rm -f /tmp/your_key
## Install dependencies
RUN apt-get update
RUN apt-get install -qq -y python-software-properties software-properties-common curl git build-essential
RUN npm install fibers@1.0.1
# install meteor
RUN curl https://install.meteor.com | /bin/sh
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Enable nginx
# RUN rm -f /etc/service/nginx/down
#setup app
RUN mkdir /home/app/someapp
ADD . /home/app/someapp
WORKDIR /home/app/someapp
EXPOSE 4000
CMD passenger start -p 4000
But nothing is working, and I'm not sure how to manage updating/deploying/running the app.
E.g., how would you handle updating the app without rebuilding the Docker image?
Here is my suggested workflow:
Create an account on Docker Hub; you can get one private repository for free. If you want completely private repositories hosted on your own server, you can run an entire Docker registry and use it to host your images.
Create your image on your development machine (locally or on a server), then push the image to the repository using docker push
Update the image when needed and commit your changes with docker commit, then push the updated image to your repository (you should properly version and tag all your images).
You can start a DigitalOcean droplet with Docker pre-installed (from the Applications tab) and simply pull your image and run your container. Whenever you update and push your image from your development machine, simply pull it again from the droplet.
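As a sketch of that loop (the repository and tag names are placeholders):
# On the development machine
docker build -t youruser/someapp:1.0.1 .
docker push youruser/someapp:1.0.1
# On the droplet
docker pull youruser/someapp:1.0.1
docker run -d -p 4000:4000 --name someapp youruser/someapp:1.0.1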
For large and complex infrastructure, I would recommend looking into Ansible to configure your Docker containers and manage DigitalOcean droplets as well.
Be aware that data written inside the container will be lost if you remove the container, so consider defining a volume in your container that is mapped to a shared folder on your host machine.
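For example, to keep the app's data on the host (a sketch; both paths are placeholders):
docker run -d -p 4000:4000 \
  -v /home/user/appdata:/home/app/someapp/data \
  youruser/someapp:1.0.1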
I suggest you test your Dockerfile in a local VirtualBox VM first. I wrote a tutorial about deploying a Node.js app with Docker. I build several images (layers) instead of just one; when you update your app, you only need to rebuild the top layer. Hope it helps: http://vinceyuan.blogspot.com/2015/05/deploying-web-app-redis-postgres-and.html