gcloud deploy with a SQLite database without it being wiped out

I have an issue when deploying an application to GCP that uses a SQLite database in the backend. My problem is that on each deployment the database is wiped out, and I can't find a way to make it permanent.
So assume that database.sqlite is placed in api/db/database.sqlite. The initial deployment works fine, as the database is created and placed in the /db folder. But when I deploy the app again, the folder is wiped out, and the database with it. I have also tried placing it in a folder outside of the api folder (e.g. /database in the root), but again the folder is wiped out.
I don't want to create/migrate the db on each build and pass it as an artifact to the deploy job.
# gitlab-ci.yaml
image: node:latest

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - yarn
    - yarn test:api
    - yarn test:ui
    - yarn build
    # this runs for the first time to create the db, then I remove it but the database is gone
    # - yarn db:migrate --env production
  artifacts:
    paths:
      - api/dist/
      # this runs for the first time to upload the db
      # - api/db
      - ui/dist/
    expire_in: 1 day

deploy:
  only:
    - master
  stage: deploy
  image: google/cloud-sdk:alpine
  dependencies:
    - build
  script:
    - gcloud app deploy --project=PROJECT_ID ui-app.yaml api-app.yaml
# api-app.yaml
service: service-name
runtime: nodejs
env: flex

skip_files:
  - node_modules/
  - ui/

manual_scaling:
  instances: 1

resources:
  volumes:
    - name: ramdisk1
      volume_type: tmpfs
      size_gb: 1
  memory_gb: 6
  disk_size_gb: 10
Ideally I need a folder somewhere on the instance which will not be wiped out on each deployment. I am sure that I am missing something. Any ideas?

You simply can't! App Engine, whether flex or standard, is serverless and stateless. Your volume type is explicitly tmpfs ("TeMPorary FileSystem").
App Engine flex instances are restarted at least once a week for patching and updates of the underlying servers, so you will lose your database at least every week.
When you deploy a new version, a new instance is created from scratch, and thus its memory is empty when the instance starts.
Serverless products are stateless. If you want to persist your data, you need to store it outside of the serverless product; on a Compute Engine instance, for example, with a persistent disk, which is, as the name says, persistent.
If you use an ORM, you can also easily switch from SQLite to MySQL or PostgreSQL, and thus leverage a Cloud SQL managed database for your use case.
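A minimal sketch of what api-app.yaml could look like once the data lives in Cloud SQL instead of a local SQLite file; the instance connection name below is a placeholder, not something from the question:

# api-app.yaml (sketch)
service: service-name
runtime: nodejs
env: flex
beta_settings:
  # placeholder connection name: <project>:<region>:<instance>
  cloud_sql_instances: "PROJECT_ID:europe-west1:api-db"
manual_scaling:
  instances: 1
resources:
  memory_gb: 6
  disk_size_gb: 10

The app then connects through the socket or TCP endpoint that App Engine exposes for that instance (typically under /cloudsql/<connection name>, depending on the database engine and driver), instead of opening a .sqlite file on the instance's disk.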

Related

Strange behavior restoring a package from a local feed in an Azure DevOps pipeline

I have a strange problem. I am trying to restore a .NET Core application with the dotnet restore command in an Azure pipeline. One of my packages is on my local feed (the repo and the feed are in the same Azure DevOps project).
With the classic editor (UI editor) I have no problem, but with the YAML file the restore fails. I get the error: error NU1301: Unable to load the service index for source
I exported the YAML from the classic editor and copied it into a new YAML pipeline file, but the build fails there as well... The configuration of the restore task is the same, but it doesn't work with YAML.
What is stranger is that the restore task worked on Friday (with YAML) but has been failing since yesterday, without any changes...
I don't understand it at all... Does anyone have an idea?
Thanks
(The agent is an Azure-hosted agent)
The content of the YAML file:
jobs:
  - job: Job_1
    displayName: Agent job 1
    pool:
      vmImage: windows-2019
    steps:
      - checkout: self
        fetchDepth: 1
      - task: DotNetCoreCLI@2
        displayName: "dotnet restore"
        inputs:
          command: restore
          projects: '**\*.sln'
          vstsFeed: "afcdef6a-5c72-4a0e-90c5-33d9e751869c/ab1da1d1-3562-4a0a-9781-4f4d80de93ba"
For a classic pipeline, your build job authorization scope may be Project Scope (Current project).
(screenshot: Classic pipeline options)
When you use a YAML pipeline, the build job authorization scope is defined by Project settings -> Limit job authorization scope to current project for non-release pipelines.
(screenshot: Project settings)
If it is off, the build job authorization scope is Organization Scope (Project Collection).
If it is on, the build job authorization scope is Project Scope (Current project).
You could try adding Project Collection Build Service (OrgName) to Feed settings -> Permissions.
(screenshot: Feed settings)

Building different images for each environment with Gitlab-CI AutoDevOps

Dockerfiles accept build-time variables via --build-arg. These vars are mandatory for NextJS, are used for static pages (to call remote APIs), and are "hard coded" into the built image.
GitLab-CI AutoDevOps has an ENV var to pass these args (AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS). But this only works if you are using one environment/image. When multiple environments (staging and production) are needed, the URLs differ (https://staging.exmpl.com and https://www.exmpl.com).
How do I have to modify the Gitlab AutoDevOps to have two different images built?
In the CI/CD settings my AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS is set to:
--build-arg=API_URL=https://staging.exmpl.at/backend --build-arg=NEXT_PUBLIC_API_URL=https://staging.exmpl.at/backend
# $NEXT_PUBLIC_API_URL is set there as well
Currently this is my complete gitlab-ci.yml:
include:
  - template: Auto-DevOps.gitlab-ci.yml

# added vars for build
build:
  stage: build
  variables:
    API_URL: $NEXT_PUBLIC_API_URL
    NEXT_PUBLIC_API_URL: $NEXT_PUBLIC_API_URL
How can I build two images without "leaving" AutoDevOps? I assume I have to customize the build stage.
Another idea is to create a second Git repository called production, with the production URL set for $NEXT_PUBLIC_API_URL:
Staging gets built and runs tests.
If successful, it gets published.
The staging repo content is copied to the production repo.
The production repo gets built (with the production URL), tested, and then published too.
Then I would have two images.
Does someone have a better idea?
Thank you in advance
There are a couple of ways you can do this. It's pretty easy if you only care about the build job.
One way would be to make a second job which extends: from build
build:
  variables:
    MY_ENV: staging

build production:
  extends: build
  variables:
    MY_ENV: production
Another way might be to add a parallel:matrix: key to the build job
build:
  parallel:
    matrix:
      - MY_ENV: staging
      - MY_ENV: production
Keep in mind, however, that if any jobs downstream of build depend on its artifacts, you'll need to manage those as well.
For example:
build:
  variables:
    MY_ENV: staging

build production:
  extends: build
  variables:
    MY_ENV: production

# some other job inherited from a template that depends on `build`
# we also want two of these jobs for each environment
downstream:
  variables:
    MY_ENV: staging

# extend it for production
downstream production:
  extends: downstream
  dependencies: # make sure we get artifacts from the correct upstream build job
    - build production
  variables:
    MY_ENV: production
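Applied to the question, the same extends: pattern could be used to override AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS per environment. A rough sketch, assuming the staging value stays in the project's CI/CD settings and the production URL below is only a placeholder based on the question:

include:
  - template: Auto-DevOps.gitlab-ci.yml

build production:
  extends: build
  variables:
    AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS: "--build-arg=API_URL=https://www.exmpl.com/backend --build-arg=NEXT_PUBLIC_API_URL=https://www.exmpl.com/backend"

Keep in mind that, by default, both builds push to the same registry path and tag (derived from the commit), so the production job would likely also need a distinct image name or tag to avoid one image overwriting the other.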

Cannot find file './aws-exports' in './src'

I'm on the third module of this AWS tutorial to build a React app with AWS, Amplify and GraphQL but the build keeps breaking. When I ran amplify push --y the CLI generated ./src/aws-exports.js and added the same file to the .gitignore. So I'm not surprised the build is failing, since that file isn't included when I push my changes.
So I'm not sure what to do here. Considering it's automatically added to the .gitignore I'm hesitant to remove it.
Any suggestions?
I'm assuming you are trying to build your app in a CI/CD environment?
If that's the case then you need to build the backend part of your amplify app before you can build the frontend component.
For example, my app builds from the AWS Amplify console, and in my build settings I have:
version: 0.1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - yarn install --frozen-lockfile
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - "**/*"
  cache:
    paths:
      - node_modules/**/*
Note that the backend builds first, with the amplifyPush --simple command. This is what generates the aws-exports.js file.
The 'aws-exports.js' file gets created automatically when AWS Amplify runs the CI/CD deployment build process and gets configured with the appropriate settings for the environment you are deploying to.
And for this reason it is included in the .gitignore. You don't want your local test configuration to be used in your production deployment for example.
As per Matthe's answer above, the file should be generated when the build script runs the 'amplifyPush' command. For some reason this is not working for me at the moment, though!
AWS added support to automatically generate the aws-exports.js at build time to avoid getting the error: https://docs.aws.amazon.com/amplify/latest/userguide/amplify-config-autogeneration.html

How to deploy a Docker image in a GCP VM

I'm trying to deploy a simple R Shiny app containerized in a Docker image, onto a virtual machine hosted by Google Cloud Platform, but I'm having problems.
The files are stored on a Github repo and the Docker image is built using a trigger on GCP/Cloud Build. The Docker file is based on the rocker/shiny format.
The build is triggered correctly and starts to build, but the build keeps timing out after 10min.
TIMEOUT ERROR: context deadline exceeded
Is there a command I can add to the Dockerfile to extend the build time, or is my Dockerfile wrong?
You can extend the timeout with a Cloud Build config (cloudbuild.yaml). The default timeout for a build is 10 minutes. Note that you can define timeouts for each step as well as for the entire build: https://cloud.google.com/cloud-build/docs/build-config
For your app, the cloudbuild.yaml would look something like
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--tag=gcr.io/$PROJECT_ID/linear', '.'] # build from Dockerfile
images: ['gcr.io/$PROJECT_ID/linear'] # push tagged images to Container Registry
timeout: '1200s' # extend timeout for build to 20 minutes
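If one particular step is the slow part, a timeout can also be set on that step; the build-level timeout still caps the whole build. A sketch under the same assumptions as the config above:

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--tag=gcr.io/$PROJECT_ID/linear', '.']
  timeout: '1200s' # timeout for this step only
images: ['gcr.io/$PROJECT_ID/linear']
timeout: '1500s' # overall build timeout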

How to publish builds to Artifactory from GitLab CI?

I am looking for an easy and clean way to publish artifacts built with GitLab CI to Artifactory.
I was able to spot https://gitlab.com/gitlab-org/omnibus/blob/af8af9552966348a15dc1bf488efb29a8ca27111/lib/omnibus/publishers/artifactory_publisher.rb but I wasn't able to find any documentation on how I am supposed to configure it to make it work.
Note: I am looking for a .gitlab-ci.yml approach, not one implemented externally.
At a basic level, this can be done with the JFrog CLI tools. Unless you want to embed configuration in your .gitlab-ci.yml (I don't) you will first need to run (on your runner):
jfrog rt c
This will prompt for your Artifactory URL and an API key by default. After entering these items, you'll find ~/.jfrog/jfrog-cli.conf containing JSON like so:
{
  "artifactory": {
    "url": "http://artifactory.localdomain:8081/artifactory/",
    "apiKey": "AKCp2V77EgrbwK8NB8z3LdvCkeBPq2axeF3MeVK1GFYhbeN5cfaWf8xJXLKkuqTCs5obpzxzu"
  }
}
You can copy this file to the GitLab runner's home directory - in my case, /home/gitlab-runner/.jfrog/jfrog-cli.conf
Once that is done, the runner will authenticate with Artifactory using that configuration. There are a bunch of other possibilities for authentication if you don't want to use API keys - check the JFrog CLI docs.
Before moving on, make sure the 'jfrog' executable is in a known location, with execute permissions for the gitlab-runner user. From here you can call the utility within your .gitlab-ci.yml - here is a minimal example for a node.js app that will pass the Git tag as the artifact version:
stages:
  - build-package

build-package:
  stage: build-package
  script:
    - npm install
    - tar -czf test-project.tar.gz *
    - /usr/local/bin/jfrog rt u --build-name="Test Project" --build-number="${CI_BUILD_TAG}" test-project.tar.gz test-repo
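If you would rather not copy the config file onto the runner, the server entry can usually be created non-interactively from CI variables instead. A sketch, assuming ARTIFACTORY_URL and ARTIFACTORY_API_KEY are defined as protected CI/CD variables (both names are placeholders) and that the JFrog CLI version in use supports these legacy rt config flags:

before_script:
  # hypothetical server id and variable names; adjust to your setup
  - jfrog rt c my-artifactory --url="$ARTIFACTORY_URL" --apikey="$ARTIFACTORY_API_KEY" --interactive=false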
If you're building with Maven, this is how I managed to do mine:
Note: you need to have your Artifactory credentials (user and password) ready.
Create a master password and generate an encrypted password from it. The procedure for creating a master password can be found here.
In your pipeline settings in GitLab, create two secret variables: one for the username and the other for your encrypted password.
Update or create a settings.xml file in the .m2 directory for Maven builds. Your settings.xml should look like this:
<settings xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.1.0 http://maven.apache.org/xsd/settings-1.1.0.xsd"
          xmlns="http://maven.apache.org/SETTINGS/1.1.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <servers>
    <server>
      <id>central</id>
      <username>${env.ARTIFACTORY_USER}</username>
      <password>${env.ENCRYPTED_PASS}</password>
    </server>
  </servers>
</settings>
In your .gitlab-ci.yml file, you need to use this settings.xml like this:
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile
And that's it. This should work. You can visit here for more about how to use Artifactory with Maven.
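To actually publish to Artifactory (rather than only compile), the job would run mvn deploy, and the pom would need a distributionManagement repository whose <id> matches the central server from the settings.xml above. A rough sketch of such a job, under those assumptions:

deploy:
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS deploy
  only:
    - master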
I know this doesn't exactly answer your question, but I got to this question from a related search, so I thought it might be relevant to others too:
I ended up using an mvn deploy job that was bound to the deploy stage for gitlab.
Here is the relevant job portion:
deploy:jdk8:
  stage: test
  script:
    - 'mvn $MAVEN_CLI_OPTS deploy site site:stage'
  only:
    - master
  # Archive up the built documentation site.
  artifacts:
    paths:
      - target/staging
  image: maven:3.3.9-jdk-8
