How can I restart bundle nginx in gitlab separately? - nginx

I have installed the GitLab CE version. I can see that nginx comes bundled with GitLab, but I cannot find a way to restart nginx separately. I have tried sudo service nginx restart but it gives:
* Restarting nginx nginx [fail]
I have checked all the documentation but cannot find a solution. I am trying to add a vhost to the bundled nginx according to this tutorial, but I am stuck at that step. Is there another way to add a vhost to the nginx bundled with GitLab? Or how can I check whether my nginx configuration works?
Edit: I have solved the 502 error.
I tried to use a NON-bundled nginx according to this doc, but after I modified gitlab.rb and ran sudo gitlab-ctl reconfigure, I got a 502 "Whoops, GitLab is taking too much time to respond." error.
Here is my gitlab.conf for nginx.
upstream gitlab {
  server unix:/var/opt/gitlab/gitlab-git-http-server/sockets/gitlab.socket fail_timeout=0;
}

server {
  listen *:80;
  server_name blcu.tk;
  server_tokens off;
  root /opt/gitlab/embedded/service/gitlab-rails/public;
  client_max_body_size 250m;

  access_log /var/log/gitlab/nginx/gitlab_access.log;
  error_log /var/log/gitlab/nginx/gitlab_error.log;

  # Ensure Passenger uses the bundled Ruby version
  passenger_ruby /opt/gitlab/embedded/bin/ruby;

  # Correct the $PATH variable to include packaged executables
  passenger_env_var PATH "/opt/gitlab/bin:/opt/gitlab/embedded/bin:/usr/local/bin:/usr/bin:/bin";

  # Make sure Passenger runs as the correct user and group to
  # prevent permission issues
  passenger_user git;
  passenger_group git;

  # Enable Passenger and keep at least one instance running at all times
  passenger_enabled on;
  passenger_min_instances 1;

  location / {
    try_files $uri $uri/index.html $uri.html @gitlab;
  }

  location @gitlab {
    # If you use https, make sure you disable gzip compression
    # to be safe against the BREACH attack
    proxy_read_timeout 300;     # Some requests take more than 30 seconds.
    proxy_connect_timeout 300;  # Some requests take more than 30 seconds.
    proxy_redirect off;

    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Frame-Options SAMEORIGIN;

    proxy_pass http://gitlab;
  }

  location ~ ^/(assets)/ {
    root /opt/gitlab/embedded/service/gitlab-rails/public;
    # gzip_static on;  # to serve pre-gzipped versions
    expires max;
    add_header Cache-Control public;
  }

  error_page 502 /502.html;
}

To restart only one component of GitLab Omnibus, you can run sudo gitlab-ctl restart <component>. Therefore, to restart nginx:
sudo gitlab-ctl restart nginx
As a further note, the same pattern applies to nearly all of the gitlab-ctl commands. For example, sudo gitlab-ctl tail shows all GitLab logs, so sudo gitlab-ctl tail nginx tails only the nginx logs.
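For reference, a few other per-service gitlab-ctl invocations that follow the same pattern (this assumes a standard Omnibus install; run sudo gitlab-ctl help to see what your version supports):

```shell
sudo gitlab-ctl status nginx     # check whether the bundled nginx is running
sudo gitlab-ctl hup nginx        # send HUP to reload nginx without a full restart
sudo gitlab-ctl restart nginx    # restart only nginx
sudo gitlab-ctl tail nginx       # follow only the nginx logs
```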

My tutorial explains how to add vhosts to a NON-bundled nginx server, not the bundled one.
The steps are:
disable the bundled version,
install a standalone nginx version compiled with the Passenger module,
configure it to serve GitLab as a vhost,
and then configure other custom vhosts on it.
If sudo service nginx restart returns
* Restarting nginx nginx [fail]
then you probably installed nginx separately with something like sudo apt-get install nginx, or you installed the version recompiled with the Phusion Passenger module as I explain in my tutorial.
Are you really using the bundled version, or did you misunderstand this step in my tutorial?
Please answer these questions in the comments and I will edit this answer to describe the solution you really need.
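One quick way to tell which nginx is actually serving is to look at the running processes and the binary path (a diagnostic sketch; the Omnibus path below is typical, not guaranteed for every version):

```shell
ps -eo pid,cmd | grep '[n]ginx'       # bundled nginx runs from /opt/gitlab/embedded/...
sudo ss -tlnp | grep ':80'            # which process owns port 80?
/opt/gitlab/embedded/sbin/nginx -v    # version of the bundled binary, if present
```

If the master process path starts with /usr/sbin/nginx rather than /opt/gitlab/embedded/, you are running a separately installed nginx, not the bundled one.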

To restart the bundled nginx, run sudo gitlab-ctl restart nginx (sudo gitlab-ctl restart on its own restarts all components, including nginx).

Related

Nuxt not serving 404 but nginx is

I've just uploaded my nuxt.js site with pre-rendered HTML in universal mode. However, the server is serving the 404 errors itself: I get nginx's "An error occurred" page instead of the Nuxt error page.
I'm using Docker with an nginx server.
How can I get my server to let nuxt.js handle the routing and serve the errors?
It's good to use a reverse proxy in front of your application; here I show you my nginx/Nuxt configuration.
In my case the configuration is under /etc/nginx/sites-available/default.
server {
  server_name homepage.com www.homepage.com;

  listen 443 ssl http2;
  ssl_certificate /etc/letsencrypt/live/homepage.at/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/homepage.at/privkey.pem;

  location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }

  location ~* \.(png|jpg|jpeg|ico)$ {
    root /root/AutoProjekt/static;
    expires 1y;
    add_header Cache-Control "public, no-transform";
  }
}

server {
  server_name homepage.com www.homepage.com;
  listen 80;
  return 301 https://$host$request_uri;
}
At the top you define your website name.
The port for HTTPS is 443, so you need to listen on it.
You can get the two certificates easily with https://certbot.eff.org/; just follow the instructions.
In the location block you tell nginx where your Nuxt project is; in my case it is at /.
The important part is to pass the client through to your application, which in my case is running on port 3000.
The next location block isn't necessary; it's optional. In my application I can upload images, and the problem I had was that my images didn't have a Cache-Control header signalling the browser to cache them. However, it's not essential for you here.
In the last server block you listen on port 80, which is HTTP, and redirect incoming requests to HTTPS.
After making changes, always test for syntax errors with nginx -t, and if it's OK, restart nginx with systemctl restart nginx.
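The test-then-restart step above can be chained so that nginx is only touched when the configuration parses (assuming nginx was installed as a systemd service):

```shell
sudo nginx -t && sudo systemctl reload nginx   # reload only if the config is valid
```

reload is gentler than restart: it re-reads the configuration without dropping in-flight connections, which is usually what you want after a config change.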
I also suggest using pm2: https://www.npmjs.com/package/pm2
npm i pm2 -g
It's a production process manager that handles all your node.js processes. The problem you get if you just start the application with npm start is that your application stops working when you close, for example, PuTTY.
Usually you start the application with npm start, but with pm2 you can start it like this:
pm2 start npm --name MyCoolApp -- start
You should now see your app with status online. To stop it, type pm2 stop MyCoolApp, and to run it again, pm2 start MyCoolApp.

Link domain to shiny server with nginx

I own a website bought on GoDaddy, and I recently created a subdomain "tools" for it.
My goal is for the URL http://tools.website.com to redirect to my Shiny dashboard hosted on my Shiny server. I know I need nginx for that, which I set up, but apparently not successfully.
I am using Linux on an EC2 instance; my Shiny dashboard runs on http://5.82.382.227:3838/myapp/ (example). After I created the subdomain, I noticed it also runs on:
http://tools.website.com:3838/myapp/
I followed instructions and updated the shiny.conf file with the following:
server {
  listen 80;
  listen [::]:80;

  server_name tools.website.com;

  location / {
    proxy_pass http://5.82.382.227:3838;
    # proxy_redirect http://localhost:3838/ $scheme://$host/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 20d;
    proxy_buffering off;
  }
}
I also figured out I had to add:
include /etc/nginx/sites-enabled/*;
inside the http block of the nginx.conf file, so I modified it with:
sudo nano /etc/nginx/nginx.conf
Finally, I ran:
sudo nginx -t
and get:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
and restarted nginx:
sudo systemctl restart nginx
But it still does not work.
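When the config tests clean but the proxy still does not answer, a few checks usually narrow down where the chain breaks (the hostname and port are the question's examples):

```shell
dig +short tools.website.com           # does the subdomain resolve to the EC2 IP?
curl -I http://127.0.0.1:3838/myapp/   # is Shiny answering locally?
curl -I -H 'Host: tools.website.com' http://127.0.0.1/   # does nginx route the vhost?
sudo tail -n 20 /var/log/nginx/error.log                 # any proxy errors?
```

Also check the EC2 security group: port 80 must be open to the outside, otherwise nginx works locally but the browser never reaches it.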

Nginx as Reverse Proxy for Docker VHosts

I'm currently trying to build my own webserver/services and wanted to set things up like this:
Wordpress for the main "blog"
Gitlab for my git repositories
Owncloud for my data storage
I've been using Docker to get a nice little GitLab running, which works perfectly fine, mapped to port :81 on my webserver with my domain.
What annoys me a bit is that Docker images are always bound to a specific port number and are thus not really easy to remember, so I'd love to do something like this:
git.mydomain.com for gitlab
mydomain.com (no subdomain) for my blog
owncloud.mydomain.com for owncloud
As far as I understand, I need a reverse proxy for this, and I decided to use nginx. So I set things up like this:
http {
  include mime.types;
  default_type application/octet-stream;
  sendfile on;
  keepalive_timeout 65;

  server {
    listen 80;
    server_name mydomain.com;

    location / {
      proxy_pass http://localhost:84;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
      root /usr/share/nginx/html;
    }
  }

  server {
    listen 80;
    server_name git.mydomain.com;

    location / {
      proxy_pass http://localhost:81;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
This way, I have git.mydomain.com up and running flawlessly, but my WordPress just shows me a blank webpage. My DNS is set up like this:
Host    Type    MX    Destination
*       A             IP
#       A             IP
www     CNAME         #
Am I missing something obvious, or what's going on here?
I know your question is more specifically about your Nginx proxy configuration, but I thought it would be useful to give you this link which details how to set up an Nginx docker container that automagically deploys configurations for reverse-proxying those docker containers. In other words, you run the reverse proxy and then your other containers, and the Nginx container will route traffic to the others based on hostname.
Basically, you pull the proxy container and run it with a few parameters set in the docker run command, and then you bring up the other containers which you want proxied. Once you've got docker installed and pulled the nginx-proxy image, the specific commands I use to start the proxy:
docker run -d --name="nginx-proxy" --restart="always" -p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
And now the proxy is running. You can verify by pointing a browser at your address, which should return an Nginx 502 or 503 error. You'll get the errors because nothing is yet listening. To start up other containers, it's super easy, like this:
docker run -d --name="example.com" --restart="always" \
-e "VIRTUAL_HOST=example.com" w3b1x/mywebcontainer
That -e "VIRTUAL_HOST=example.com" is all it takes to get your Nginx proxy routing traffic to the container you're starting.
I've been using this particular method since I started with Docker and it's really handy for exactly this kind of situation. The article I linked gives you step-by-step instructions and all the information you'll need. If you need more information (specifically about implementing SSL in this setup), you can check out the git repository for this software.
Your nginx config looks sane; however, you are hitting localhost:xx, which is wrong. It should be either gatewayip:xx or, better, target_private_ip:80.
An easy way to deal with this is to start your containers with --link and to "inject" the IP via a shell script: keep the "original" nginx config with a placeholder instead of the IP, then sed -i it with the value from the environment.
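A minimal sketch of that placeholder approach (the file paths, placeholder token, and variable name are illustrative; with `docker run --link gitlab:gitlab ...`, Docker exports variables such as GITLAB_PORT_80_TCP_ADDR into the linked container's environment):

```shell
# IP of the linked container, with a fallback default for illustration:
TARGET_IP="${GITLAB_PORT_80_TCP_ADDR:-172.17.0.2}"

# A config template with a placeholder where the container IP belongs:
cat > /tmp/git.conf.template <<'EOF'
server {
    listen 80;
    server_name git.mydomain.com;
    location / {
        proxy_pass http://__TARGET_IP__:80;
    }
}
EOF

# Substitute the real IP before starting nginx:
sed "s/__TARGET_IP__/${TARGET_IP}/" /tmp/git.conf.template > /tmp/git.conf
grep proxy_pass /tmp/git.conf
```

An entrypoint script in the nginx container would run this substitution at startup and then exec nginx, so the proxy always picks up the current container IP.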

Using GitLab behind nginx enabled basic_auth?

I've successfully installed GitLab for management of private repositories (it's quite awesome!).
The problem I am having is that, by default, the GitLab login is presented when anyone hits my subdomain. I would like to protect the entire area with a basic_auth layer before the user gets the GitLab login screen. Unfortunately, enabling it breaks my ability to push/pull from GitLab.
My nginx config to enable basic_auth:
auth_basic "Restricted";
auth_basic_user_file htpasswd;
Any ideas on how I can enable basic_auth without breaking git/GitLab functionality?
Add this to /etc/gitlab/gitlab.rb:
nginx['custom_gitlab_server_config'] = "auth_basic 'Restricted';\n auth_basic_user_file htpasswd;\n"
And run gitlab-ctl reconfigure
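The htpasswd file referenced above can be generated with the htpasswd tool from apache2-utils, or with plain openssl if that isn't installed. A sketch using openssl (the username, password, and output path are placeholders):

```shell
# Generate an htpasswd entry using openssl's APR1 (MD5) scheme,
# which nginx's auth_basic understands:
printf 'gituser:%s\n' "$(openssl passwd -apr1 's3cret')" > /tmp/htpasswd
cat /tmp/htpasswd   # -> gituser:$apr1$<salt>$<hash>
```

Point auth_basic_user_file at the resulting file; appending more `user:hash` lines adds more accounts.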
Kind of a hack at the moment, but give this a shot.
Edit your nginx site configuration to add/modify the following locations:
location ^~ /api/v3/internal/allowed {
  proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
  proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
  proxy_redirect off;

  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header Host $http_host;
  proxy_set_header X-Real-IP $remote_addr;

  proxy_pass http://gitlab;
}

location / {
  auth_basic "Gitlab Restricted Access";
  auth_basic_user_file /home/git/gitlab/htpasswd.users;

  # serve static files from defined root folder
  # @gitlab is a named location for the upstream fallback, see below
  try_files $uri $uri/index.html $uri.html @gitlab;
}
Leave your @gitlab location block as is.
The trick is that you let /api/v3/internal/allowed bypass the authentication. If you look at the logs when you do a git pull/push, a request is made to the server to decide whether or not to allow it. With the standard nginx config plus htpasswd, that request would be blocked, because the server knows nothing about the required authentication.
Anyway, I'm not sure if there's a better alternative (I couldn't find any), but this seems to work for me.
Your issue is that you want to set a password restriction for public access to GitLab, but let GitLab Shell access the local GitLab instance without restriction.
You can have two nginx configurations depending on the IP interface. Change the line listen 0.0.0.0:80 default_server to listen 127.0.0.1:80 default_server.
https://github.com/gitlabhq/gitlabhq/blob/v7.7.2/lib/support/nginx/gitlab#L37-38
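A sketch of that interface split, using the stock GitLab nginx config as a starting point (the auth_basic file path is a placeholder, and the elided locations stand for the stock GitLab location blocks):

```nginx
# Unrestricted server for local GitLab Shell traffic:
server {
    listen 127.0.0.1:80 default_server;
    # ... stock GitLab locations, no auth_basic ...
}

# Password-protected server for everyone else:
server {
    listen 0.0.0.0:80;
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd;
    # ... same stock GitLab locations ...
}
```

Whether GitLab Shell really connects via 127.0.0.1:80 depends on the gitlab_url setting in its config, so verify that before relying on this split.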

Installed gitlab, but only nginx welcome page shows

I installed GitLab using its installation guide. Everything was OK, but when I open localhost:80 in the browser all I see is the message Welcome to nginx!. I can't find any log file with any errors in it.
I am running Ubuntu in VirtualBox. My /etc/nginx/sites-enabled/gitlab config file reads:
# GITLAB
# Maintainer: @randx
# App Version: 3.0

upstream gitlab {
  server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket;
}

server {
  listen 192.168.1.1:80;         # e.g., listen 192.168.1.1:80;
  server_name aridev-VirtualBox; # e.g., server_name source.example.com;
  root /home/gitlab/gitlab/public;

  # individual nginx logs for this gitlab vhost
  access_log /var/log/nginx/gitlab_access.log;
  error_log /var/log/nginx/gitlab_error.log;

  location / {
    # serve static files from defined root folder
    # @gitlab is a named location for the upstream fallback, see below
    try_files $uri $uri/index.html $uri.html @gitlab;
  }

  # if a file which is not found in the root folder is requested,
  # then the proxy passes the request to the upstream (gitlab unicorn)
  location @gitlab {
    proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
    proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
    proxy_redirect off;

    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;

    proxy_pass http://gitlab;
  }
}
The nginx documentation says:
Server names are defined using the server_name directive and determine which server block is used for a given request.
That means in your case that you have to enter aridev-VirtualBox in your browser instead of localhost.
To get this working, you have to add aridev-VirtualBox to your local hosts file and point it to the IP of your VirtualBox machine.
This would look something like the following:
192.168.1.1 aridev-VirtualBox
I removed /etc/nginx/sites-enabled/default to get rid of that problem.
Try following orkoden's advice of removing the default site from /etc/nginx/sites-enabled/, but also comment out your listen line, since the implied default there should be sufficient.
Also, when you make changes to these configurations, shut down both the gitlab and nginx services and start them again in the order gitlab first, then nginx.
Your configuration file /etc/nginx/sites-enabled/gitlab is right. But maybe the symlink for your gitlab file is wrong, for example:
ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/gitlab
Please check whether the content of default is the same as the content of your /etc/nginx/sites-enabled/gitlab.
In my case, I changed this line:
proxy_pass http://gitlab;
to this:
proxy_pass http://localhost:3000;
3000 is the port of my unicorn server.
Moreover, I did a chown root:nginx on the conf file, and it works now.
Old topic, but this may happen when there is a previously installed nginx.
$ gitlab-ctl reconfigure
or a restart will not complain, but the previous nginx instance may actually be running instead of the one bundled with GitLab.
This just happened to me.
Shut down and disable the old nginx instance and run again:
$ gitlab-ctl reconfigure
