I'm having some problems running gcloud's Datastore emulator in Travis CI.
I'm currently running it like this:
script:
- export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"
- echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
- curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
- sudo apt-get update && sudo apt-get install google-cloud-sdk
- nohup gcloud beta emulators datastore start &
But this seems less than ideal.
It's not clear what is wrong with this setup; you say it is 'less than ideal', which suggests that it works.
If you want the setup steps to be cleaner, you can install google-cloud-sdk directly, because it's whitelisted by Travis:
dist: trusty
addons:
  apt:
    packages:
    - google-cloud-sdk
before_script:
- gcloud beta emulators datastore start &
- $(gcloud beta emulators datastore env-init)
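Because the emulator is started in the background, a test step can occasionally run before it is listening. If that bites you, a small polling step between the two lines above hedges against the race (a sketch; the emulator's default port 8081 and the availability of curl are assumptions):
- |
  # Wait up to ~30s for the emulator's HTTP endpoint to respond before continuing.
  for i in $(seq 1 30); do
    curl -s http://localhost:8081 >/dev/null && break
    sleep 1
  done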
When I try to build an image from the following Dockerfile, it fails. (I am using Portainer, and it simply shows "build failed" with no further explanation.)
FROM rhub/r-minimal:4.0.5
RUN apk update
RUN installr -d -t "R-dev file linux-headers libxml2-dev gnutls-dev openssl-dev libsass-dev libx11-dev cairo-dev libxt-dev libuv-dev geos-dev gdal-dev proj-dev sqlite-dev cmark-dev http-parser-dev" \
-a "libxml2 libuv cmark libgit2 openssl cairo libsass libx11 font-xfree86-type1 sqlite proj gdal geos http-parser" later
RUN rm -rf /var/cache/apk/*
RUN addgroup --system app && adduser --system --ingroup app app
I am trying to build an image for shiny/leaflet, and this package seems to be what keeps the build from getting there...
I also tried the r-hub/r-minimal example, but it failed too.
I am aware of this discussion but cannot act on it.
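Since Portainer only reports "build failed", one way to see which step actually breaks is to run the build from a terminal, where each RUN step's output is printed in full (a sketch; the tag name shiny-leaflet-test is a placeholder):
# Run from the directory containing the Dockerfile and keep the complete log for inspection.
docker build -t shiny-leaflet-test . 2>&1 | tee build.log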
Interestingly, the following Dockerfile works:
FROM rhub/r-minimal:4.0.5
RUN apk update
RUN installr -d -t "R-dev file linux-headers libsodium-dev" \
-a "libsodium font-xfree86-type1" later
RUN rm -rf /var/cache/apk/*
RUN addgroup --system app && adduser --system --ingroup app app
Any ideas on how to get a small Docker image for shiny/leaflet would be much appreciated.
I want to mount a Cloud Filestore instance in a GCP AI Platform Jupyter notebook instance so that I don't have to upload all of my data into the notebook.
I followed the instructions at https://cloud.google.com/filestore/docs/mounting-fileshares, but get these error messages:
root@0084329abd1b:/home# mount <IP_ADDRESS>:/streams cfs
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
root@0084329abd1b:/home# mount -o nolock <IP_ADDRESS>:/streams cfs
mount.nfs: Operation not permitted
From your terminal, you can do something like this.
mkdir des_bucket
gcsfuse --debug_gcs --implicit-dirs src_bucket des_bucket
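When you are done with the bucket, the FUSE mount can be detached again (des_bucket is the directory created above):
# Unmount the gcsfuse-mounted bucket.
fusermount -u des_bucket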
Create a Filestore instance
Create a Google VM instance
Create a Notebook AI instance
On the VM instance run the commands:
sudo apt-get -y update
sudo apt-get -y install nfs-common
sudo mkdir test
# fileshare remote target
sudo mount 111.11.111.11:/fileshare test
sudo chmod go+rw test
echo 'This is a test' > test/testfile
ls test
#testfile
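If the share should survive a reboot, the same mount can be declared in /etc/fstab (a sketch; fstab needs an absolute mount point, so /mnt/test stands in for the test directory above):
# /etc/fstab entry: mount the Filestore share at boot over NFS.
111.11.111.11:/fileshare /mnt/test nfs defaults 0 0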
On the Notebook AI instance run the commands:
sudo apt-get -y update
sudo apt-get -y install nfs-common
sudo mkdir test
# fileshare remote target
sudo mount 111.11.111.11:/fileshare test
ls test
#testfile
I'm trying to use the JFrog CLI with CircleCI 2.0 to publish my Docker image to my JFrog Artifactory. After some research I've found this tutorial: https://circleci.com/docs/1.0/Artifactory/ but it's based on the CircleCI 1.0 specification.
My config.yml file currently is:
version: 2
jobs:
  build:
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76
      - run:
          name: Setup JFrog
          command: |
            wget http://dl.bintray.com/jfrog/jfrog-cli-go/1.7.1/jfrog-cli-linux-amd64/jfrog
            chmod +x jfrog
            ./jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
            docker login -e $ARTIFACTORY_EMAIL -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_DOCKER_REPOSITORY
But I'm getting the following error:
#!/bin/sh -eo pipefail
wget http://dl.bintray.com/jfrog/jfrog-cli-go/1.7.1/jfrog-cli-linux-amd64/jfrog
chmod +x jfrog
./jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
docker login -e $ARTIFACTORY_EMAIL -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_DOCKER_REPOSITORY
Connecting to dl.bintray.com (35.162.24.14:80)
Connecting to akamai.bintray.com (23.46.57.209:80)
jfrog 100% |*******************************| 9543k 0:00:00 ETA
/bin/sh: ./jfrog: not found
Exited with code 127
Does anyone know what is the correct way to use JFrog CLI with CircleCI 2.0?
I've fixed this by installing the JFrog CLI through npm (the './jfrog: not found' error happens on Alpine-based images most likely because the downloaded binary is linked against glibc, and Alpine's musl libc lacks the loader it expects):
version: 2
jobs:
  build:
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1 \
              openssl \
              nodejs
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76
      - run:
          name: Setup JFrog
          command: |
            npm install -g jfrog-cli-go
            jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
            docker login -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_DOCKER_REPOSITORY
Now it's working.
As an alternative to installing with Node.js (which is perfectly possible too, especially if you're running a Node.js build in CircleCI), you can use a cURL command to install it for you.
curl -fL https://getcli.jfrog.io | sh
This script will download the latest released version of the JFrog CLI based on your operating system and your architecture (32 vs 64 bits).
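In a CircleCI 2.0 config, that would look something like the following run step (a sketch; it assumes curl exists in the build image and that the binary lands in the working directory):
- run:
    name: Install JFrog CLI
    command: |
      # Download the latest JFrog CLI for this OS/architecture, then configure it.
      curl -fL https://getcli.jfrog.io | sh
      ./jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD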
I have used the Firebase CLI to host a website. Today I tried to push my files from my local machine to Firebase Storage using the Firebase CLI, but when I ran the command firebase deploy, nothing happened. Can anyone tell me how to push my files to Firebase Storage?
Install gsutil using this tutorial:
https://cloud.google.com/storage/docs/gsutil_install#deb
Example for Ubuntu:
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get install apt-transport-https ca-certificates -y
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install google-cloud-sdk
Log in to Google Cloud:
gcloud auth login
Go to the displayed link, log in, and paste the verification code back into the console. Then select your project:
gcloud config set project PROJECT_ID
Send the file. For example:
gsutil cp backup.$(date +%F).gz.gpg gs://PROJECT_ID.appspot.com/backups
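To push a whole directory instead of a single file, gsutil can copy recursively and in parallel (the local directory ./backups is a placeholder):
# -m parallelizes the transfer; -r recurses into the directory.
gsutil -m cp -r ./backups gs://PROJECT_ID.appspot.com/backups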
I'm migrating my standard environment app to the flexible environment in GAE and facing issues.
app.yaml snippet
runtime: custom
env: flex
api_version: 1
threadsafe: true
handlers:
- url: /.*
  script: main.app
Dockerfile
FROM gcr.io/google_appengine/python-compat-multicore
RUN apt-get update -y
RUN apt-get install -y python-pip build-essential libssl-dev libffi-dev python-dev libxml2-dev libxslt1-dev xmlsec1
RUN apt-get install -y curl unzip
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud
RUN tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz
RUN /usr/local/gcloud/google-cloud-sdk/install.sh
RUN curl https://storage.googleapis.com/appengine-sdks/featured/google_appengine_1.9.40.zip > /tmp/google_appengine_1.9.40.zip
RUN unzip /tmp/google_appengine_1.9.40.zip -d /usr/local/gae
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
ENV PATH $PATH:/usr/local/gae/google_appengine/
COPY . /app
WORKDIR /app
ENV MODULE_YAML_PATH app.yaml
RUN pip install -r requirements.txt
Issue while running gcloud app deploy (stack trace):
File "/env/local/lib/python2.7/site-packages/google/appengine/ext/vmruntime/vmconfig.py", line 63, in BuildVmAppengineEnvConfig
escaped_appid = appid.replace(':', '_').replace('.', '_')
AttributeError: 'NoneType' object has no attribute 'replace'
Is there anything I'm missing in the Dockerfile? What other configuration changes should be made so that there are not many application-level code changes? Is it advisable to use webapp2 in the flexible environment?
We're working on a better error message, but this is happening because you're trying to use the python-compat-multicore runtime. That runtime is not supported on env: flex and has been deprecated. We're asking folks to follow this guide to upgrade to runtime: python:
https://cloud.google.com/appengine/docs/flexible/python/migrating
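For reference, a minimal sketch of what the migrated app.yaml can look like under runtime: python (the gunicorn entrypoint and the main:app module name are assumptions about your app's layout):
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

# Keep Python 2 while migrating a webapp2 app; switch to 3 once the code is ready.
runtime_config:
  python_version: 2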