I am trying to set up a GitLab CI configuration that executes the unit tests of a Symfony project. The same configuration works in an older project, but in my new one the composer update command fails. The error message says that it isn't possible to clone our own bundle out of our GitLab.
$ composer update
Loading composer repositories with package information
In Git.php line 471:
Failed to execute git clone --mirror -- 'https://glp...XXX:private-token@gitlab.company.com/bundle/test-bundle.git' '/tmp/cache/vcs/https---gitlab.company.com-bundle-test-bundle.git/'
Cloning into bare repository '/tmp/cache/vcs/https---gitlab.company.com-bundle-test-bundle.git'...
fatal: unable to access 'https://gitlab.company.com/bundle/test-bundle.git/': Failed to connect to gitlab.company.com port 443 after 0 ms: Connection refused
At first I tried to use a personal access token in my .gitlab-ci.yml, but I got the previously mentioned error.
test:
  image: composer:latest
  stage: test
  before_script:
    - composer config gitlab-token.gitlab.company.com $PERSONAL_CI_TOKEN
After that, I tried access via username/password.
echo "{\"http-basic\":{\"gitlab.company.com\":{\"username\":\"user\",\"password\":\"password\"}}}" > $HOME/.composer/auth.json
Both possibilities work in my local composer Docker container, and in my other project the access is still possible. I don't know how to solve this error.
Please check whether you have added your self-hosted GitLab domain to the config section of your composer.json file.
It should look like this:
"config": {
    "gitlab-domains": ["gitlab.company.com"]
}
I am running my WordPress site hosted on AWS EC2 [Amazon Linux 2 AMI].
I have a pipeline to deploy the latest source from GitHub.
The CodeBuild buildspec file is as follows (only the relevant part shown):
version: 0.2
phases:
  install:
    runtime-versions:
      php: 7.4
The CodeBuild project in CloudFormation looks like this:
Image: 'aws/codebuild/amazonlinux2-x86_64-standard:3.0'
Using the above configuration, deployment works without problems. The problem happens when I update it as below (only the relevant part shown):
In the cfn template:
Image: 'aws/codebuild/amazonlinux2-x86_64-standard:4.0'
In the buildspec:
runtime-versions:
  php: 8.1
The validation script in the deployment hooks runs some validation commands to check the RDS connection, and it fails:
LifecycleEvent - ValidateService
Script - deploy_hooks/validate_service.sh
[stdout]OK SO FAR
[stderr]mysqlcheck: Got error: 1045: Access denied for user 'root'@'10.37.253.207' (using password: NO) when trying to connect
After deployment, I log into the EC2 instance and the WP-CLI command below fails as well:
$ wp db check
The problems above do not occur when I roll back to the previous PHP version:
In the cfn template:
Image: 'aws/codebuild/amazonlinux2-x86_64-standard:3.0'
In the buildspec:
runtime-versions:
  php: 7.4
What might be the problem?
Reference:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html
https://docs.aws.amazon.com/codebuild/latest/userguide/available-runtimes.html#linux-runtimes
UPDATE:
CodeDeploy is failing to write the DB endpoint, password, and other information into the wp-config.php file when CodeDeploy is executed by CodePipeline.
If I terminate the EC2 instance, auto scaling launches a new instance for me, CodeDeploy triggers the deployment, and the deployment succeeds. I can access the site.
Does this mean that executing CodeDeploy through CodePipeline is the problem?
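One way to confirm this on the instance is to check whether the DB constants actually made it into wp-config.php (a sketch using standard WP-CLI commands; the web root path is assumed):

# Print the DB settings the deployment should have written (path assumed)
wp config get DB_HOST --path=/var/www/html
wp config get DB_USER --path=/var/www/html
wp config get DB_PASSWORD --path=/var/www/html
# mysqlcheck connecting as root with no password suggests these are missing or empty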
I have an AWS Amplify project that had been building without problems but is now failing.
# Starting phase: build
2021-11-20T00:40:02.506Z [INFO]: Failed to get profile: Profile configuration is missing for: amplify
2021-11-20T00:40:02.564Z [ERROR]: !!! Build failed
2021-11-20T00:40:02.564Z [ERROR]: !!! Non-Zero Exit Code detected
2021-11-20T00:40:02.564Z [INFO]: # Starting environment caching...
2021-11-20T00:40:02.565Z [INFO]: # Environment caching completed
Terminating logging...
The problem seemed to start after I made an error doing a pull request (in the wrong direction!); however, the problem has persisted despite reverting to an earlier commit.
I have also ensured all the Amplify code is up to date with amplify pull, as well as trying amplify configure and amplify init on my development machine.
Other posts that describe problems with 'Profile Configuration' seem to be related to the development machine and setting up the CLI. This failure happens when I try to build on AWS using continuous deploys; building locally works fine.
I got it to work.
Just delete aws-exports.js and the amplify folder.
Then run the command from Amplify, which is something like:
amplify pull --appId XXXXXXXXXXX --envName dev
After a few minutes, it will prompt you to select one of:
AWS PROFILE
AWS KEYS
Select AWS KEYS, enter the credentials for a programmatic user, and it should be fine.
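For reference, the whole sequence looks roughly like this (the app ID placeholder comes from the original command; the aws-exports.js path assumes a default project layout):

# Remove the local Amplify state (paths assumed)
rm -rf amplify src/aws-exports.js
# Re-pull the backend; choose "AWS access keys" when prompted and enter
# the credentials of a programmatic IAM user
amplify pull --appId XXXXXXXXXXX --envName dev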
I'm facing a 400 Bad Request error when requesting the API (v2.6.5) after deploying it on Heroku, but I can't figure out why.
What I did:
Added a Procfile in the root of /api
Added .htaccess in /api/public (via the composer require symfony/apache-pack command)
Defined APP_ENV and DATABASE_URL in the Heroku dashboard app settings
Added the PostgreSQL add-on in the Heroku dashboard
Inside the /api folder: git init >> git add . >> git commit -m "..." >> heroku create >> git push heroku master
Sent a GET HTTP request to the /greetings endpoint via Postman (response with a 400 error code)
This is a brand new API project; I did nothing except the steps described above.
(At first I followed the tutorial in the official API Platform documentation using the app.json manifest, but it was not taken into account, so I did the configuration directly in the Heroku dashboard.)
In the Procfile, just remove the quotes:
Before : 'web: heroku-php-apache2 public/'
After : web: heroku-php-apache2 public/
OR
Maybe you should try :
/api/greetings/
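For a quick check outside Postman, a curl call against the deployed app (the app name is a placeholder) should return 200 once the Procfile is fixed:

curl -i https://your-app.herokuapp.com/greetings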
I'm trying to use Concourse to grab a Dockerfile definition from a Git repository, do some work, build the Docker image, and push the new image to Artifactory. See below for the pipeline definition. At this time I have all stages up to the artifactory stage (the one that pushes to Artifactory) working. The artifactory stage exits with an error and the following output:
waiting for docker to come up...
sha256:c6039bfb6ac572503c8d97f42b6a419b94139f37876ad331d03cb7c3e8811ff2
The push refers to repository [artifactory.server.com:2077/base/golang/alpine]
a4ab5bf94afd: Preparing
unauthorized: The client does not have permission to push to the repository.
This would seem straightforward as an Artifactory permissions issue, except that I've tested locally with the Docker CLI and am able to push using the same user/pass as specified in destination_username and destination_password. I double-checked the credentials to make sure I'm using the same ones, and I am.
Question #1: Is there any other known cause for this error? I've scoured the resource's GitHub page without finding anything. Any idea why I might be getting the permissions error?
Without having an answer to the above question, I'd really like to dig deeper into troubleshooting the problem. To do so, I use fly hijack to get a shell in the corresponding container. I notice that Docker is installed in the container, so the next step, I think, would be to do a docker import of the tarball for the image I'm trying to push and then perform a docker push to push it to the repo. When attempting to run the import, I get the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Question #2: Why can't I use Docker commands from within the container? Perhaps this has something to do with the issue I'm seeing when pushing to the repo from the pipeline (I don't think so)? Is it because the container isn't running with privilege? I thought that the privileged argument would be supplied in the resource type definition, but if not, how can I run with privilege?
resources:
- name: image-repo
  type: git
  source:
    branch: master
    private_key: ((private_key))
    uri: ssh://git@git-server/repo.git
- name: artifactory
  type: docker-image
  source:
    repository: artifactory.server.com:2077/((repo))
    tag: latest
    username: ((destination_username))
    password: ((destination_password))
jobs:
- name: update-image
  plan:
  - get: image-repo
  - task: do-stuff
    file: image-repo/scripts/do-stuff.yml
    vars:
      repository-directory: ((repo))
  - task: build-image
    privileged: true
    file: image-repo/scripts/build-image.yml
  - put: artifactory
    params:
      import_file: image/image.tar
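Regarding Question #2: Docker commands only work while the resource's own Docker daemon is up, and a hijacked container typically won't have it running once the step has finished. Built-in resource types like docker-image normally already run privileged on the worker; if you ever need to make that explicit, you can redefine the resource type yourself. A sketch (the registry-image type and the concourse/docker-image-resource repository are the standard upstream names, not part of the original pipeline):

resource_types:
- name: docker-image
  type: registry-image
  privileged: true
  source:
    repository: concourse/docker-image-resource
    tag: latest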
Arghhhh. I found after much troubleshooting that the destination_password wasn't being picked up properly due to special characters and a lack of quotes. I fixed the issue by properly quoting the password in the YAML file passed via the --load-vars-from flag.
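For anyone hitting the same thing, the fix amounts to quoting the value in the vars file (the filename and values below are placeholders):

# credentials.yml, loaded with:
#   fly -t my-target set-pipeline -p update-image -c pipeline.yml --load-vars-from credentials.yml
destination_username: ci-user
destination_password: "p@ss:w0rd!"   # quotes keep YAML from misparsing special characters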
As part of our Chef infrastructure, I'm trying to set up and configure a berks-api server. I have created an Ubuntu server in Azure, bootstrapped it, and it appears as a node in my Chef server.
I have followed the instructions at github - berkshelf-api installation to install the berks-api via a cookbook. I have run
sudo chef-client
on my node and the cookbook appears to have been run successfully.
The problem is that the berks-api doesn't appear to run. My Linux terminology isn't great, so sorry if I'm making mistakes in what I say, but it appears as if the berks-api service isn't able to run. If I navigate to /etc/service/berks-api and run this command
sudo berks-api
I get this error
I, [2015-07-23T11:56:37.490075 #16643] INFO -- : Cache manager starting...
I, [2015-07-23T11:56:37.491006 #16643] INFO -- : Cache Builder starting...
E, [2015-07-23T11:56:37.493137 #16643] ERROR -- : Actor crashed!
Errno::EACCES: Permission denied @ rb_sysopen - /etc/chef/client.pem
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `read'
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `initialize'
If anyone could help me figure out what is going on, I'd really appreciate it. If you need me to explain the setup any further, let me know.
It turns out I misunderstood the configuration of the berks-api. I needed to get a new private key for my client (berkshelf) from manage.chef.io for our organization. I then needed to upload the new key (berkshelf.pem) to /etc/berkshelf/api-server and reconfigure the berks-api to use it. So my config for the berks-api now looks like this:
{
  "home_path": "/etc/berkshelf/api-server",
  "endpoints": [
    {
      "type": "chef_server",
      "options": {
        "url": "https://api.opscode.com/organizations/my-organization",
        "client_key": "/etc/berkshelf/api-server/berkshelf.pem",
        "client_name": "berkshelf"
      }
    }
  ],
  "build_interval": 5.0
}
I couldn't upload berkshelf.pem directly to the target location; I had to upload it to my home directory, then copy it into place from within Linux.
Having done this, the service starts and works perfectly.
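For reference, the copy step looked roughly like this (hostname, user names, and ownership are placeholders):

# From my workstation: upload the key to my home directory on the node
scp berkshelf.pem azureuser@berks-api-node:~/
# On the node: copy it into place and make it readable by the service user (name assumed)
sudo cp ~/berkshelf.pem /etc/berkshelf/api-server/berkshelf.pem
sudo chown berkshelf:berkshelf /etc/berkshelf/api-server/berkshelf.pem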