I have looked at the various posts about GitLab, Docker, and SSH issues without finding any help, so I am asking my question here.
I have the following setting:
linux box with ubuntu server 14.04 and IP 192.168.1.104
DNS: git.mydomain.com = 192.168.1.104
a GitLab Docker container that I start, following the official docs, this way:
sudo docker run --detach --name gitlab_app --publish 8080:80 --publish 2222:22 --volumes-from gitlab_data gitlab_image
or
sudo docker run --detach --name gitlab_app --publish 8080:80 --publish 2222:22 -e "GITLAB_SHELL_SSH_PORT=2222" --volumes-from gitlab_data gitlab_image
the Linux box runs nginx, which proxies (proxy_pass) git.mydomain.com to 192.168.1.104:8080
I access git.mydomain.com without any issue, everything works.
I generated an SSH key, added it to my profile on GitLab, and added the following lines to my ~/.ssh/config:
Host git.mydomain.com
User git
Port 2222
IdentityFile /home/user/.ssh/id_rsa
If I try
ssh -p 2222 git@git.mydomain.com
the connection is closed. I assume it is because only a git-shell is permitted.
But, if I try
mkdir test
cd test
git init
touch README.md
git add README.md
git commit -m "first commit"
git remote add origin git@git.mydomain.com:user/test.git
git push -u origin master
it gets stuck with
Connection closed by 192.168.1.104
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I also tried with
git remote add origin git@git.mydomain.com:2222/user/
and the result was the same.
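(For reference: the scp-style git@host:path syntax has no port field, so host:2222/user/ is parsed as a path, not a port. If the port has to live in the URL rather than in ~/.ssh/config, the ssh:// form is the one that supports it. A sketch, using the repository path from above:)
git remote add origin ssh://git@git.mydomain.com:2222/user/test.git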
Note that the logs of the GitLab Docker container include
[2015-03-06T11:04:43+00:00] INFO: group[git] created
[2015-03-06T11:04:43+00:00] INFO: user[git] created
[2015-03-06T11:04:44+00:00] INFO: group[gitlab-www] created
[2015-03-06T11:04:44+00:00] INFO: user[gitlab-www] created
Any idea how I can fix this issue?
Thanks in advance for your help.
I would guess that you have an authentication problem.
Here are a few things you can try:
Make sure you added your public key in GitLab.
Check the permissions of your id_rsa file.
Try temporarily disabling host key verification with
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
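Putting those together, a minimal ~/.ssh/config entry for testing could look like this (a sketch; paths and hostnames follow the question's setup, and the two host-key options should only be used temporarily):
Host git.mydomain.com
    User git
    Port 2222
    IdentityFile /home/user/.ssh/id_rsa
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null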
I have the same setup as you (docker container in VM, DNS points to VM). I also configured .ssh/config like you.
But when I log in with ssh, I get:
ssh -p 2222 git@gitlab
PTY allocation request failed on channel 0
Welcome to GitLab, tomzo!
Connection to gitlab closed.
Git remotes do not need port 2222 configured. This is OK (works for me):
$ git remote -v
origin git@gitlab:lab/workflow.git (fetch)
origin git@gitlab:lab/workflow.git (push)
And I can push and pull with git.
$ git push
Everything up-to-date
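The reason the port does not need to appear in the remote URL is that ssh reads it from ~/.ssh/config. An alias entry along these lines (a sketch mirroring the setup above; the short name gitlab is whatever you choose) is what makes plain git@gitlab:lab/workflow.git remotes work:
Host gitlab
    HostName git.mydomain.com
    User git
    Port 2222
    IdentityFile ~/.ssh/id_rsa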
Related
I'm setting up our GitLab server, and it works well when I disable SELinux.
How do I configure SELinux to allow GitLab to work?
Environment:
CentOS 7.4.1708 and update all packages.
Gitlab 10.5.2
nginx 1.13.10
I've installed GitLab and nginx and followed this link to configure GitLab to work with the already-installed nginx:
https://docs.gitlab.com/omnibus/settings/nginx.html#using-a-non-bundled-web-server
When I clicked the link to GitLab, I could not reach it, and I found this error message in /var/log/nginx/error.log:
2018/04/05 11:39:27 [crit] 4092#4092: *3 connect() to unix:/var/opt/gitlab/gitlab-workhorse/socket failed (13: Permission denied) while connecting to upstream, client: xx.xx.xx.xx, server: localhost, request: "POST /gitlab/api/v4/jobs/request HTTP/1.1", upstream: "http://unix:/var/opt/gitlab/gitlab-workhorse/socket:/gitlab/api/v4/jobs/request", host: "xx.xx.xx.xx"
After I changed SELinux to 'permissive' mode, it worked as expected.
And in the /var/log/audit/audit.log file, I found the message:
type=AVC msg=audit(1522905628.444:872): avc: denied { write } for pid=12407 comm="nginx" name="socket" dev="dm-2" ino=8871 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=sock_file
Then I tried to follow the instructions below:
https://gitlab.com/gitlab-org/gitlab-recipes/tree/master/web-server/apache#selinux-modifications
but I cannot find the files/directories it references:
setsebool -P httpd_can_network_connect on
setsebool -P httpd_can_network_relay on
setsebool -P httpd_read_user_content on
semanage -i - <<EOF
fcontext -a -t user_home_dir_t '/home/git(/.*)?'
fcontext -a -t ssh_home_t '/home/git/.ssh(/.*)?'
fcontext -a -t httpd_sys_content_t '/home/git/gitlab/public(/.*)?'
fcontext -a -t httpd_sys_content_t '/home/git/repositories(/.*)?'
EOF
restorecon -R /home/git
The git user's home directory is /var/opt/gitlab instead of /home/git, and the /var/opt/gitlab directory has no gitlab or repositories directory.
How can I configure SELinux to work with my environment?
I'm currently figuring this out. The documentation is a mix of old and new information and doesn't distinguish between the standard and "Omnibus" installs. The problem is that they don't label their socket file properly to allow access by Nginx. I've had success running this each time after running gitlab-ctl reconfigure:
chcon -t httpd_var_run_t /var/opt/gitlab/gitlab-workhorse/socket
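If re-running chcon after every reconfigure gets tedious, a persistent file-context rule is an alternative (a sketch; semanage comes from the policycoreutils-python package on CentOS 7, and the path matches the socket above):
semanage fcontext -a -t httpd_var_run_t '/var/opt/gitlab/gitlab-workhorse/socket'
restorecon -v /var/opt/gitlab/gitlab-workhorse/socket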
And also don't forget these bits of setup:
usermod -aG git,gitlab-www nginx
chmod g+rx /var/opt/gitlab/
chown git:git /var/opt/gitlab
As well, I couldn't get Nginx to start with the provided config; I had to create a proxy cache directory:
mkdir /usr/share/nginx/proxy_cache
restorecon -vFR /usr/share/nginx
chown nginx /usr/share/nginx/proxy_cache/
I just had this issue myself (also on a CentOS server) and was able to solve it using the command posted by miken32:
chcon -t httpd_var_run_t /var/opt/gitlab/gitlab-workhorse/socket
In my case I installed the Omnibus gitlab-ce package using the docs provided by GitLab.
Afterwards I followed the instructions for using a non-bundled web server. If you read carefully, you'll notice the "5. Download the right web server configs" step, which contains a link to the GitLab recipes repository.
Follow this link and you will find the configs for several different web servers, including the ones for nginx. Be careful, since within the nginx web server directory you will be redirected to the official GitLab repository again...
Download the required config (with or without SSL etc.) into the /etc/nginx/conf.d/ directory (this is specific to at least CentOS). Carefully inspect the downloaded file, since you will need to modify it with the correct paths for the Omnibus package.
Also don't forget to add the nginx user to the git group as mentioned in the documentation. I'm not sure if it's really necessary, but my nginx user is also a member of the gitlab-www group.
After all this I was still unable to load the GitLab site; the browser just showed the 502 error page.
The /var/log/nginx/gitlab-error.log showed a permission-denied error for the workhorse socket, which led me to this page; it can be solved (at least in my case) with the command provided by miken32.
I want nginx in a Docker container to host a simple static hello-world HTML website. I want to simply start it with "docker run imagename", so I added the run parameters to the Dockerfile. The reason I want to do this is that I would like to host the application on Cloud Foundry as a next step. Unfortunately I get the following error when doing it like this:
Dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 5000
CMD ["nginx -d -p 5000:5000"]
Error
Error starting userland proxy: Bind for 0.0.0.0:5000: unexpected error Permission denied.
From the Docker docs:
https://docs.docker.com/engine/reference/builder/#expose
EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number.
So publishing belongs on the docker run command line; -p is a docker flag, not an nginx flag, which means this line cannot work (and in exec form the whole quoted string is treated as a single executable name, so it would not even start):
CMD ["nginx -d -p 5000:5000"]
Your Dockerfile starts from
FROM nginx:alpine
which already starts nginx, so no CMD is needed for that.
After you build from your Dockerfile, run the image with:
docker run -d -p 5000:5000 <your_image>
Edit: if you want to map the container's port 80 to host port 5000:
docker run -d -p 5000:80 <your_image>
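For completeness, a corrected Dockerfile for the original goal might look like this (a sketch: the stock nginx image already provides the right CMD, nginx listens on port 80 inside the container, and the port mapping happens at docker run time):
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 80
Build and run it with:
docker build -t imagename .
docker run -d -p 5000:80 imagename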
I'm following Digital Ocean's tutorial on how to start an nginx Docker container (currently on Step 4). This is their output:
$ docker run --name docker-nginx -p 80:80 -d nginx
d3ccb73a91985651ec61231bca9f9c716f0dec807e354a29eeef2144f883a01c
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b91f3ce26553 nginx "nginx -g 'daemon off" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, 443/tcp docker-nginx
But when I run it, this is my output (notice the different IP of the container):
C:\>docker run --name docker-nginx -p 80:80 -d nginx
d3ccb73a91985651ec61231bca9f9c716f0dec807e354a29eeef2144f883a01c
C:\>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3ccb73a9198 nginx "nginx -g 'daemon off" 14 hours ago Up 2 seconds 10.0.75.2:80->80/tcp, 443/tcp docker-nginx
Why does this happen? And how can I get the same results as Digital Ocean's? (Getting the server to start on localhost)
Edit: I'm using Docker for Windows (recently released), which apparently runs natively using Hyper-V. My output for docker-machine ls is this:
C:\>docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
C:\>
But when I run it, this is my output (notice the different IP of the container)
Since this is a Windows machine, I assume you're using Docker for Windows (not Docker Toolbox). 10.0.75.2 is the IP of the virtual machine that Docker runs in.
If you are using Windows or Mac OS, you will need some form of virtualization in order to run Docker. The IP you just saw is the IP of that lightweight virtual machine.
And how can I get the same results as Digital Ocean's? (Getting the server to start on localhost)
Use a Linux distribution! Alternatively, you can enable "Expose container ports on localhost" in the Docker for Windows settings.
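Once that setting is enabled (or on a native Linux host), the published port is reachable on localhost; a quick check, assuming the container from the question is running:
curl http://localhost:80/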
Although you created the containers on your local machine, they are actually running on a different machine (a virtual machine).
First, check the IP of your docker machine (the virtual machine):
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100
Then run a curl command (or open a browser) to view the default site served by nginx inside the container:
curl http://192.168.99.100:80
If you are using a virtual machine on Windows:
docker-machine ip default
https://docs.docker.com/machine/concepts/
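Combining the two, one way to reach the container without hard-coding the IP (a sketch, assuming the machine is named default, as in the listing above):
curl "http://$(docker-machine ip default)/"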
When I ran this command for the first time: docker run -d -p 80:80 --name docker-tutorial docker101tutorial
I got this error:
docker: Error response from daemon: Conflict. The container name
"/docker-tutorial" is already in use by container "LONG_CONTAINER_ID".
You have to remove (or rename) that container to be able to reuse that
name.
So, I removed this container using: docker rm -f LONG_CONTAINER_ID
Then I ran: docker run -d -p 3080:80 --name docker-tutorial docker101tutorial
Note 3080:80 instead of 80:80. Had I run this from Docker Desktop, I would have seen that port option offered by default.
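If you want to keep the old container instead of deleting it, renaming it also frees up the name (docker rename is a standard CLI command; the new name here is arbitrary):
docker rename docker-tutorial docker-tutorial-old
docker run -d -p 3080:80 --name docker-tutorial docker101tutorial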
I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom IP" address, so I had to update my local /etc/hosts file to access it. From my local machine I am able to access the backend API without problems.
But the problem is that Docker somehow cannot resolve this "custom IP", even when the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up I see this error:
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into root of project, where Dockerfile is located
build image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker cloud workflow
Add an extra_hosts directive to your Stackfile, like this, and then click Redeploy in Docker Cloud so that the changes take effect:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
Optimization tip
ignore as many folders as possible to speed up the process of sending data to the Docker daemon
add a .dockerignore file
typically you want to ignore folders like node_modules, bower_components, and tmp
in my case the tmp folder contained about 1.3 GB of small files, so ignoring it sped up the process significantly
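For example, a minimal .dockerignore along those lines (just the folders mentioned above; adjust to your project):
node_modules
bower_components
tmp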
I'm attempting to set up Nginx as a load balancer using the StrongLoop Nginx Controller. Nginx will act as a load balancer for a StrongLoop LoopBack application hosted by the standalone StrongLoop Process Manager. However, I've been unsuccessful at getting the Nginx deployment to work following the official directions from StrongLoop. Here are the steps I've taken:
Step #1 -- My first step was to install Nginx and the StrongLoop Nginx Controller on an AWS EC2 instance. I launched an EC2 server (Ubuntu 14.04) to host the load balancer, and attached an Elastic IP to the server. Then I executed the following commands:
$ ssh -i ~/mykey.pem ubuntu@[nginx-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install nginx
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-nginx-controller
$ sudo sl-nginx-ctl-install -c 444
Then I opened up port 444 in the security group of the EC2 instance using a Custom TCP Rule.
Step #2 -- My second step was to setup two Loopback application servers. To accomplish this I launched two more EC2 servers (both Ubuntu 14.04) for the application servers, and attached an Elastic IP to each server. Then I ran the following series of commands, once on each application server:
$ ssh -i ~/mykey.pem ubuntu@[application-server-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-pm
$ sudo sl-pm-install
$ sudo /sbin/initctl start strong-pm
Step #3 -- My third step was to deploy the application to each of the application servers. For this I used StrongLoop Arc:
$ cd /path/to/loopback-getting-started-intermediate # my application
$ slc arc
Once in the StrongLoop Arc web console, I built a tar for the application, and deployed it to both application servers. Then in the Arc Process Manager, I connected to both application servers. Once connected, I clicked "load balancer," and entered the Nginx host and port into the form and pressed save. This caused a message to pop up saying "load balancer config saved."
Something strange happened at this point: The fields in StrongLoop Arc where I just typed the settings for the load balancer (host and port) reverted back to the original values the fields had before I started typing. (The original port value was 555 and the original value in the host field was the address of my second application server.)
Don't know what to do next -- This is where I really don't know what to do. (I tried opening my web browser and navigating to the IP address of the Nginx load balancer, using several different port values. I tried 80, 8080, and 3001, having opened up each in the security group, in an attempt to find the place I need to navigate to in order to see "load balancing" in action. However, I saw nothing at any of them, with the exception of port 80, which served up the "Welcome to nginx" page, not what I'm looking for.)
How do I set up Nginx as a load balancer using the StrongLoop Nginx Controller? What's the next step in the process, assuming all of my steps listed are correct?
What I usually do is this:
sudo sl-nginx-ctl-install -c http://0.0.0.0:444
Maybe this can solve your problem.
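To verify that the controller actually came up on the address you gave it, a quick check from the load-balancer host might help (a sketch; 444 matches the port used above):
curl -i http://127.0.0.1:444/
sudo netstat -tlnp | grep 444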