How to deploy a Docker image to a GCP VM - R

I'm trying to deploy a simple R Shiny app containerized in a Docker image, onto a virtual machine hosted by Google Cloud Platform, but I'm having problems.
The files are stored in a GitHub repo and the Docker image is built using a trigger on GCP Cloud Build. The Dockerfile is based on the rocker/shiny image.
The build triggers correctly and starts, but it keeps timing out after 10 minutes:
TIMEOUT ERROR: context deadline exceeded
Is there a command I can add to the Dockerfile to extend the build time, or is my Dockerfile wrong?

You can extend the timeout with a Cloud Build config (cloudbuild.yaml). The default timeout for a build is 10 minutes. Note that you can define timeouts for each step as well as for the entire build: https://cloud.google.com/cloud-build/docs/build-config
For your app, the cloudbuild.yaml would look something like:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--tag=gcr.io/$PROJECT_ID/linear', '.'] # build from Dockerfile
images: ['gcr.io/$PROJECT_ID/linear'] # push tagged images to Container Registry
timeout: '1200s' # extend timeout for build to 20 minutes
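If a single step is the slow part (with rocker/shiny it is usually the R package installation during docker build), you can also put a timeout on that step in addition to the overall build timeout. A minimal sketch, reusing the image name from above; the exact durations are placeholders:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--tag=gcr.io/$PROJECT_ID/linear', '.']
  timeout: '1200s' # this single build step may run for up to 20 minutes
images: ['gcr.io/$PROJECT_ID/linear']
timeout: '1500s' # the whole build may run for up to 25 minutes and must cover all steps
Also note that the trigger must be set to use the cloudbuild.yaml configuration file rather than building directly from the Dockerfile, otherwise the timeout setting is never read.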

Related

CrashLoopBackOff when trying to run .NET applications in an AKS cluster across 2 pods

Apologies from the start, but please bear with me; I am still rather new at this, so if you see any glaringly obvious issues, please forgive me.
I am working at a company where some devs are trying to have us deploy some .NET Core applications to containers in Azure Kubernetes Service (AKS). From my understanding, they were written in .NET Core 3.1. The goal is to run this process through a CI/CD Azure Pipeline, using Azure Repos as the repository and a build pipeline to create the Docker image, push the image to our Azure Container Registry, and produce an artifact for the release pipeline to then deploy containers into AKS (using helm).
File Structure is as follows:
Dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
COPY ["AppFolder1\App.csproj", "."]
RUN dotnet restore "AppFolder1\App.csproj"
COPY . .
RUN dotnet build "AppFolder1\App.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "AppFolder1\App.csproj" -c Release -o /app/publish
FROM base AS final
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "DotNet.Docker.dll"]
ERROR
Question: Could there be an issue with the 6.0 SDK when trying to deploy an app made with .NET Core 3.1?
running "kubectl get pods -n redacted-namespace"
a) retrieves two pods with CrashLoopBackOff Status showing 9 restarts
running "kubectl define pod -n redacted-namespace" retrieves information on pods
a) both pods show successful pod scheduling - Successfully assigned redacted-namespace/ to aks-nodepool1-02 AND aks-nodepool1-00
b) Both show multiple successful pulls of the image
c) Both show creation and start of the container
d) End message:
Warning BackOff 58s (x117 over 26m) kubelet Back-off restarting failed container
--ATTEMPTS TO SOLVE--
It was suggested that the Dockerfile was to blame. I spent time creating and running the pipeline with multiple iterations of the Dockerfile, including changing the .NET version from 6.0 to 3.1. No successful pipelines using these Dockerfiles yet.
running kubectl logs <pod-name> -n redacted-namespace:
Could not execute because the application was not found or a compatible .NET SDK is not installed.
Possible reasons for this include:
* You intended to execute a .NET program:
The application 'DotNet.Docker.dll' does not exist.
* You intended to execute a .NET SDK command:
It was not possible to find any installed .NET SDKs.
Install a .NET SDK from:
https://aka.ms/dotnet-download
I had figured that the installation of the .NET SDK should have been handled by line 1 of the Dockerfile, but it doesn't seem to be working properly. In the meantime, I am adding the 'Use .NET Core sdk 6.0' agent task to the release pipeline and deleting the previous pods to try again.
Re-running the release pipeline - no effect. Likely the .NET Core SDK install agent task does not run inside each pod and is therefore not available as an installed resource within the pods and replicas.
Apparently there were TWO problems with the Dockerfile. First and foremost, @Hans Kilian, you're absolutely right: they were using .NET 3.1. The other issue was that the ENTRYPOINT I had set up was not pointing to the right .dll file. I found the correct name by going to Solutions/App.sln and pulling it from the Project line (something like Project("################################") = "Project_name"...). It's working and running just fine now. Thank you!

gcloud deploy with a SQLite database without it being wiped out

I have an issue when trying to deploy an application to GCP that uses a SQLite database on the backend. My problem is that on each deployment the database is wiped out, and I can't find a way to make it permanent.
Assume that database.sqlite is placed in api/db/database.sqlite. The initial deployment works fine, as the database is created and placed in the /db folder. But when I deploy the app again, the folder and the database are wiped out. I have also tried placing it in a folder outside the api folder (e.g. /database in the root), but again the folder is wiped out.
I don't want to create/migrate the db on each build and pass it as an artifact to the deploy job.
# gitlab-ci.yaml
image: node:latest

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - yarn
    - yarn test:api
    - yarn test:ui
    - yarn build
    # this runs for the first time to create the db, then I remove it but the database is gone
    # - yarn db:migrate --env production
  artifacts:
    paths:
      - api/dist/
      # this runs for the first time to upload the db
      # - api/db
      - ui/dist/
    expire_in: 1 day

deploy:
  only:
    - master
  stage: deploy
  image: google/cloud-sdk:alpine
  dependencies:
    - build
  script:
    - gcloud app deploy --project=PROJECT_ID ui-app.yaml api-app.yaml
# api-app.yaml
service: service-name
runtime: nodejs
env: flex
skip_files:
  - node_modules/
  - ui/
manual_scaling:
  instances: 1
resources:
  volumes:
    - name: ramdisk1
      volume_type: tmpfs
      size_gb: 1
  memory_gb: 6
  disk_size_gb: 10
Ideally I need a folder somewhere in the instance which will not be wiped out on each deployment. I am sure that I am missing something. Any ideas?
You simply can't! App Engine, flex or standard, is serverless and stateless. Your volume type is explicitly tmpfs, a "temporary filesystem".
App Engine flex instances are restarted at least once a week for patching and updates of the underlying server, so you will lose your database at least every week.
When you deploy a new version, a new instance is created from scratch, and thus your in-memory volume is empty when the instance starts.
Serverless products are stateless. If you want to persist your data, you need to store it outside of the serverless product, for example on a Compute Engine VM with a persistent disk, which is, as the name says, persistent.
If you use an ORM, you can also easily switch from SQLite to MySQL or PostgreSQL, and thus leverage a managed Cloud SQL database instead.
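If you go the Cloud SQL route, the flex service connects to the managed database through a connection name declared in its app.yaml. A minimal sketch, keeping the service and scaling settings from the question; the instance name my-instance and REGION are placeholders for whatever you actually create in Cloud SQL:
# api-app.yaml (sketch)
service: service-name
runtime: nodejs
env: flex
manual_scaling:
  instances: 1
beta_settings:
  # exposes the Cloud SQL instance to the app through a unix socket under /cloudsql/
  cloud_sql_instances: PROJECT_ID:REGION:my-instance
Your ORM then points at that socket instead of a file path, so nothing is lost when instances are recreated or redeployed.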

Cannot find file './aws-exports' in './src'

I'm on the third module of this AWS tutorial to build a React app with AWS, Amplify and GraphQL, but the build keeps breaking. When I ran amplify push --y, the CLI generated ./src/aws-exports.js and added the same file to the .gitignore. So I'm not surprised the build is failing, since that file isn't included when I push my changes.
So I'm not sure what to do here. Considering it's automatically added to the .gitignore, I'm hesitant to remove it from there.
Any suggestions?
I'm assuming you are trying to build your app in a CI/CD environment?
If that's the case, then you need to build the backend part of your Amplify app before you can build the frontend component.
For example, my app builds from the AWS Amplify console, and in my build settings I have:
version: 0.1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - yarn install --frozen-lockfile
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - "**/*"
  cache:
    paths:
      - node_modules/**/*
Note that the backend builds first, with the amplifyPush --simple command. This is what generates the aws-exports.js file.
The 'aws-exports.js' file gets created automatically when AWS Amplify runs the CI/CD deployment build process and gets configured with the appropriate settings for the environment you are deploying to.
For this reason it is included in the .gitignore: you don't want your local test configuration to be used in your production deployment, for example.
As per Matthe's answer above, the file should be generated when the build script runs the 'amplifyPush' command. For some reason this is not working for me at the moment, though!
AWS has since added support for automatically generating aws-exports.js at build time to avoid this error: https://docs.aws.amazon.com/amplify/latest/userguide/amplify-config-autogeneration.html

How do I read a file from the local file system from inside a meteor app?

I have a meteor app that needs to periodically read a file located on the host's file system, outside of the app package. I am using node's fs to accomplish this, and it works fine on my (macOS) development machine.
However, when I run mup deploy to deploy it to my (Ubuntu 14) server, mup returns the following error after starting meteor:
Error: ENOENT: no such file or directory, open '/home/sam/data/all_data.json'
at Object.fs.openSync (fs.js:652:18)
at Object.fs.readFileSync (fs.js:553:33)
Does anyone know why this might be happening?
You should follow the mup documentation closely. Have you seen the volumes setup in the mup config? Try this to solve your issue.
Reason: mup runs the app in Docker without any access to the host file system unless specified. The volumes setup does this for you as part of the mup deployment.
Below is the relevant part of the mup config from http://meteor-up.com/docs.html ("Everything Configured"); read more there to get a better idea.
name: 'app',
path: '../app',
// lets you add docker volumes (optional). Can be used to
// store files between app deploys and restarts.
volumes: {
  // passed as '-v /host/path:/container/path' to the docker run command
  '/host/path': '/container/path',
  '/second/host/path': '/second/container/path'
},
The user that is running your Meteor build on the server needs read access to that folder. I would store the file in a different directory than the home one, because you don't want to mess it up. Either way, something like chmod -R a+r /home/sam/data should give every user read access to all files in that directory (keep the execute bit on the directories themselves so they can be traversed). You are probably running Meteor as your local user (sam?) in development mode on macOS, but the built app gets run as meteor or some other user on Ubuntu, because of mup and forever.

"Meteor create my-app" taking forever installing npm dependencies

I am a newbie in web development. I have installed Meteor on Ubuntu. When I try to create an app using something like:
meteor create my-app
It creates the my-app folder, but it never gets past "Installing npm dependencies". I have been waiting for more than half an hour. Is this normal? How much longer should I wait for it to finish?
I'm working behind a company proxy but I have set the proxy using the following lines (and that allowed me to install Meteor in the first place):
export http_proxy=http://username:password@proxy:port
export https_proxy=http://username:password@proxy:port
I was stuck in a similar situation on Windows.
Cancel the job using Ctrl+C. Since Meteor creates the app directory even before the command completes, you will have the directory intact; go into the directory and run
meteor npm install
It will install all the dependencies (took 7s for me),
then run meteor to start the server.
