Download a docker image from Artifactory using curl or wget? - unix

Is there any option/way to download a docker image using wget or curl?
My docker image is in JFrog Artifactory.

First, any curl command to an Artifactory repo would need the API key of your account. See "How to use docker registry API with Artifactory Docker Repository when not using docker client?"
You can use the header "X-JFrog-Art-Api" and pass the API key of the user to authenticate. The API key of the user can be retrieved from the "User Profile" page in Artifactory. The Artifactory REST API supports three forms of authentication, and you can use any one of them with the docker repository.
Second, downloading an image is not trivial (as you need to get all the layers).
You might have some luck adapting the moby contrib script download-frozen-image-v2.sh.
Or try docker-registry-debug, which will print a curl command for fetching a layer, as explained here.
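For reference, the raw pull flow against the Docker registry v2 API looks roughly like this in Python - a minimal sketch, assuming the Artifactory instance exposes the registry under api/docker/<repo-key>/v2/ and that the requests package is available; the host, repository key, image name, tag and API key below are all placeholders:

import requests

# Placeholders - adjust for your Artifactory instance.
ARTIFACTORY = "https://artifactory_url/artifactory"
REPO_KEY = "docker_repo"   # Artifactory Docker repository key
IMAGE = "image"
TAG = "tag"
API_KEY = "YOUR_API_KEY"

base = f"{ARTIFACTORY}/api/docker/{REPO_KEY}/v2/{IMAGE}"
headers = {
    "X-JFrog-Art-Api": API_KEY,
    "Accept": "application/vnd.docker.distribution.manifest.v2+json",
}

# 1. The manifest lists the config blob and the digest of every layer.
manifest = requests.get(f"{base}/manifests/{TAG}", headers=headers).json()

# 2. Download each layer as a blob; this is why "downloading an image" is not
#    trivial - you still have to reassemble the layers before docker load.
for layer in manifest["layers"]:
    digest = layer["digest"]
    blob = requests.get(f"{base}/blobs/{digest}", headers=headers)
    blob.raise_for_status()
    with open(digest.replace("sha256:", "") + ".tar.gz", "wb") as f:
        f.write(blob.content)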

I found this answer while looking to do the same thing with GitLab. I modified the suggested moby contrib script to do the same thing for a GitLab instance.
Download download-gitlab-frozen-docker-image.sh
Mark it executable (chmod +x download-gitlab-frozen-docker-image.sh)
Run the script:
./download-gitlab-frozen-docker-image.sh <FOLDER_NAME> <DOCKER_URL>
where FOLDER_NAME is the folder to store the frozen docker image and DOCKER_URL is the URL straight out of the GitLab container registry.
Import the frozen folder into docker (at your convenience/any future date):
tar -cC '<FOLDER_NAME>' . | docker load

Related

Unable to download docker image layer from JFrog Artifactory using curl

I am using the command:
curl -O -u username:API_KEY https://artifactory_url/artifactory/docker_repo/image/tag/sha
The file created after curl completes successfully is empty
This can happen if the redirect download feature is enabled on the Artifactory side. When a repository is configured to redirect downloads, a client requesting an artifact (exceeding a specific file size) hosted in that repository initially receives a 302 response with a Location header containing a signed direct download URL, and the client then uses that URL to download the actual file.
Hence we need to make sure the client attempting the download follows redirects. With curl this is achieved by adding -L:
curl -O -L -u username:API_KEY https://artifactory_url/artifactory/docker_repo/image/tag/sha
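If you are downloading from a script instead of curl, the same point applies: the client has to follow the 302. With Python's requests, for example, redirects are followed by default; a small sketch, with the URL and credentials as placeholders:

import requests

url = "https://artifactory_url/artifactory/docker_repo/image/tag/sha"

# requests follows the 302 to the signed direct-download URL automatically
# (allow_redirects=True is the default for GET requests).
resp = requests.get(url, auth=("username", "API_KEY"), stream=True)
resp.raise_for_status()

with open("layer.blob", "wb") as f:
    for chunk in resp.iter_content(chunk_size=1 << 20):
        f.write(chunk)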

AWS amplify/dynamo/appsync - how to sync data locally

All I want to do is essentially take the exact DynamoDB tables, with their data, that exist in a remote instance (e.g. the Amplify staging environment/API) and import those locally.
I looked at datasync but that seemed to be FE only. I want to take the exact data from staging and sync that data to my local Amplify instance - is this even possible? I can't find any information that is helping right now.
I'm very used to using Mongo/Postgres etc. and literally being able to take a DB dump and just import that... I may be missing something here?
How about using dynamodump?
Download the data from AWS to your local machine:
python dynamodump.py -m backup -r REGION_NAME -s TABLE_NAME
Then import to Local DynamoDB:
dynamodump -m restore -r local -s SOURCE_TABLE_NAME -d LOCAL_TABLE_NAME --host localhost --port 8000
You have to build a custom script that reads from the online DynamoDB and then populates the local DynamoDB. I found the Docker image to be just perfect for running a local instance; make sure to pass the jar arguments explicitly (the image's default command runs DynamoDB Local in-memory) so the container is not ephemeral and the data persists.
Rough, high-level instructions:
Download Docker Desktop (if you want)
Start Docker Desktop and, in a terminal, pull the official DynamoDB Local image:
https://hub.docker.com/r/amazon/dynamodb-local/
docker pull amazon/dynamodb-local
And then run the docker container:
docker run --name dynamodb -p 8000:8000 -d amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb
Now you can write a Python script that gets the data from the online DB and copies it into the local DynamoDB, as in the official docs:
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-amazon-dynamodb-tables-across-accounts-using-a-custom-implementation.html
Once you work out the connection to the local container (localhost:8000), you will be able to copy all the data. A rough sketch of such a script is below.
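Assuming boto3 is installed, the region and table name below are placeholders, and the local table already exists with the same key schema, the sketch simply scans the remote table and batch-writes the items into the local one:

import boto3

# Placeholders - adjust region, table name and credentials/profile as needed.
REMOTE_REGION = "us-east-1"
TABLE_NAME = "MyAmplifyTable-staging"

# Remote (online) DynamoDB - uses your normal AWS credentials.
remote = boto3.resource("dynamodb", region_name=REMOTE_REGION)

# Local DynamoDB running in the docker container on port 8000.
# DynamoDB Local accepts any dummy credentials.
local = boto3.resource(
    "dynamodb",
    region_name="local",
    endpoint_url="http://localhost:8000",
    aws_access_key_id="fake",
    aws_secret_access_key="fake",
)

src = remote.Table(TABLE_NAME)
dst = local.Table(TABLE_NAME)

# Scan the whole remote table (paginating) and write every item locally.
with dst.batch_writer() as batch:
    kwargs = {}
    while True:
        page = src.scan(**kwargs)
        for item in page["Items"]:
            batch.put_item(Item=item)
        if "LastEvaluatedKey" not in page:
            break
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]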
I'm not too well versed on local Amplify instances, but for DynamoDB there is a product which you can use locally called DynamoDB Local.
Full list of download instructions for DynamoDB Local are available here: Downloading And Running DynamoDB Local
If you have Docker installed on your local machine, you can easily download and start the DynamoDB Local service with a few commands:
Download
docker pull amazon/dynamodb-local
Run
docker run --name dynamodb -p 8000:8000 -d amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb
This will allow you to avail of 90% of DynamoDB's features locally. However, migrating data from the DynamoDB web service to DynamoDB Local is not something that is provided out of the box. For that, you would need to create a small script which you run locally to read data from your existing table and write it to your local instance.
An example of reading from one table and writing to a second can be found in the docs here: Copy Amazon Dynamodb Tables Across Accounts
One change you will have to make is manually setting the endpoint_url for DynamoDB Local:
dynamodb_client = boto3.Session(
    aws_access_key_id=args['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=args['AWS_SECRET_ACCESS_KEY'],
    aws_session_token=args['TEMPORARY_SESSION_TOKEN'],
).client('dynamodb', endpoint_url='YOUR_DDB_LOCAL_ENDPOINT_URL')  # endpoint_url is a client() parameter, e.g. 'http://localhost:8000'

Firestore authorization for Google Compute engine for app on a docker container

I have deployed a Node.js app on a Google compute instance via a Docker container. Is there a recommended way to pass the GOOGLE_APPLICATION_CREDENTIALS to the docker container?
I see the documentation states that GCE has Application Default Credentials (ADC), but these are not available in the docker container. (https://cloud.google.com/docs/authentication/production)
I am a bit new to docker & GCP, so any help would be appreciated.
Thank you!
I found this documentation on how to inject your GOOGLE_APPLICATION_CREDENTIALS into a Docker container in order to test Cloud Run locally. I know this is not Cloud Run, but I believe the same command can be used to inject your credentials into the container.
Since the community often needs the steps and commands themselves (links and their contents can change), I will copy the steps needed to inject the credentials:
Refer to Getting Started with Authentication for instructions
on generating, retrieving, and configuring your Service Account
credentials.
The following Docker run flags inject the credentials and
configuration from your local system into the local container:
Use the --volume (-v) flag to inject the credential file into the
container (assumes you have already set your
GOOGLE_APPLICATION_CREDENTIALS environment variable on your machine):
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro
Use the --environment (-e) flag to set the
GOOGLE_APPLICATION_CREDENTIALS variable inside the container:
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json
Optionally, use this fully configured Docker run command:
PORT=8080 && docker run \
-p 9090:${PORT} \
-e PORT=${PORT} \
-e K_SERVICE=dev \
-e K_CONFIGURATION=dev \
-e K_REVISION=dev-00001 \
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro \
gcr.io/PROJECT_ID/IMAGE
Note that the path
/tmp/keys/FILE_NAME.json
shown in the example above is a reasonable location to place your
credentials inside the container. However, other directory locations
will also work. The crucial requirement is that the
GOOGLE_APPLICATION_CREDENTIALS environment variable must match the
bind mount location inside the container.
Hope this works for you.
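As a quick sanity check that the mounted key is actually visible inside the container, you can let the auth library resolve the credentials itself. A tiny sketch in Python using the google-auth package, purely for illustration - the Node.js client libraries read GOOGLE_APPLICATION_CREDENTIALS in the same way:

import google.auth

# google.auth.default() walks the Application Default Credentials chain,
# which includes the file pointed to by GOOGLE_APPLICATION_CREDENTIALS.
credentials, project_id = google.auth.default()
print("Loaded credentials for project:", project_id)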

Artifactory: upload with api key (not password)

How would you upload an artifact to Artifactory without using a password?
If I create a new user specifically for uploads, that user by default doesn't get the 'upload' permission unless they are an administrator.
To upload with credentials:
curl -u admin:'correct-horse-battery-staple' -T foo.zip https://artifactory_url/artifactory/repo/foo.zip
To upload with an API key:
curl --header 'X-JFrog-Art-Api: 1234567890' -T foo.zip https://artifactory_url/artifactory/repo/foo.zip
Alternatively you can use the syntax <username:apikey>:
curl -u admin:1234567890 -T foo.zip https://artifactory_url/artifactory/repo/foo.zip
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API
You can create the api key on the user profile page.
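If you are uploading from a script rather than from the shell, the same X-JFrog-Art-Api header works with any HTTP client. A minimal Python sketch using the requests package; the URL, repository path, file name and key are placeholders:

import requests

url = "https://artifactory_url/artifactory/repo/foo.zip"
api_key = "1234567890"

# Stream the file as the PUT body, authenticating with the API key header.
with open("foo.zip", "rb") as f:
    resp = requests.put(url, data=f, headers={"X-JFrog-Art-Api": api_key})

resp.raise_for_status()
print(resp.status_code, resp.text)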
See the various authentication options, including authentication using API key, in the JFrog CLI for Artifactory documentation page:
https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory
If you want to use .pypirc you can just put:
[distutils]
index-servers = local
[local]
repository: https://artifactory-url/repo
username: <username>
password: <api-key>
Then you can upload using python setup.py bdist_wheel upload -r local.
Though my user is an admin at the moment so it answers only the API key part of the question.
If you're looking at a nuget artifact, here's the one line CLI command below.
nuget push <your-package-name.nupkg> -source <artifactory-repo-url>/nuget-local/ -ApiKey <your-user-name>:<apikey>
It's buried in the JFrog documentation. I would think uploading other artifacts would follow a similar pattern.

Jenkins CI integration with NodeJS and GitHub - problems configuring the build

We have built our first Node.js app and I want to integrate Jenkins for continuous integration. We are running the Node server behind Nginx as a proxy, with source control in GitLab. I need example configurations or steps.
I am looking for any doc or wiki link; if you can point me in the right direction it would be helpful.
I have a CentOS server and managed to install and configure Jenkins, but I haven't found the proper way to connect my GitLab server. I need to run npm commands after each build. If anyone has already done that, please let me know.
Thanks
Your question is still vague, but I will try to describe how I set up Jenkins and Node.js with GitLab integration. I have CentOS 6 and have tested this.
Steps
Java (OpenJDK) should be installed beforehand.
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install jenkins
sudo service jenkins start
Login as jenkins
sudo -s -H -u jenkins
Now generate an SSH key in the folder /var/lib/jenkins/.ssh and copy the public key to GitLab:
ssh-keygen
Install the GitLab Hook Plugin and the GitLab Plugin in Jenkins.
Create a new item (project) in Jenkins through the browser.
After creating the project, go to its Configure page (left-side menu). Most options there are self-explanatory: set up the Git repo URL and the Git browser URL.
Under Build Triggers, check the option "Build when a change is pushed to GitLab" and note the GitLab CI Service URL displayed next to it.
Paste that URL into your GitLab repo's webhooks in Settings.
To run npm commands after the build:
There is a section called SSH Publisher.
In the Exec command section (I have put my example; you can write your own commands):
cd project_dir
rm -rf public server package.json
tar -xvf projectname.tgz 
ls
npm install --production
export NODE_ENV=production
forever restartall
jasmine-node spec/api/frisbyapi_spec.js
rm -rf projectname.tgz 
I have written most of the steps that I took to set up Jenkins, Node.js and GitLab.
I might have forgotten a step. If you face any error, please post that as well.
