docker does not create folder in nginx html root - nginx

I am trying to create an nginx docker container (to provide file upload/download).
Under the html root I'm looking to create some additional subfolders (upload and download).
I took the nginx docker image as my base and added some additional lines to create/initialise the subfolders.
FROM nginx
MAINTAINER Carl Wainwright <carl.wainwright#ipaccess.com>
ENV HTML_PATH /var/www/html
COPY nginx.conf /etc/nginx/nginx.conf
RUN mkdir -p $HTML_PATH/upload && mkdir -p $HTML_PATH/download
RUN chmod 755 $HTML_PATH/upload && chmod 755 $HTML_PATH/download
RUN chown nginx:nginx $HTML_PATH/upload && chown nginx:nginx $HTML_PATH/download
In my docker-compose file I am creating my container as follows:
wbh-device-asset-server:
  restart: always
  image: wbh-device-asset-server/nginx:test
  container_name: wbh-device-asset-server
  volumes:
    - /www-data:/var/www/html
  ports:
    - "8081:8081"
  networks:
    - mynetwork
My nginx configuration has the following server configuration.
server {
    error_log /var/log/nginx/error.log debug;
    access_log /var/log/nginx/access.log main;
    # Running port
    listen 8081;
    # Proxy requests to get SDP's
    location ~ \.sdp {
        root /var/www/html;
        try_files $uri =404;
        limit_except GET { deny all; }
    }
    # Proxy requests to put APD's
    location ~ \.(apd) {
        dav_methods PUT;
        limit_except PUT { deny all; }
        client_body_temp_path /tmp/files/;
        client_body_in_file_only on;
        client_body_buffer_size 128K;
        client_max_body_size 30M;
    }
}
On my local machine /www-data exists and has write permissions.
Each time I bring the container up the contents of /www-data are empty.
Why can't I create folders under /var/www/html/? What is stopping me from doing this?
NOTE: As part of my troubleshooting I created a docker image based on centos and installed nginx from packages and I faced the same issue.

The Dockerfile creates an image. The image is the definition used to run containers, not the container itself, and it is only built once per image creation. So all the RUN commands happen there and update your image with the directories you expect.
In your docker-compose.yml, which creates containers, you have a host volume mount. Volumes are applied to the container, not during the image build, so this directory is mounted after your Dockerfile's RUN commands have already updated the image. With a host volume, the contents of that directory on the host completely overlay the contents of the image: they aren't deleted, but you won't be able to see them in any container with that volume mount. If you used a named volume, and that named volume happened to be empty (e.g. right after you created it for the first time), then Docker includes a feature to copy the contents of the image's directory into the volume before mounting it into your container.
So your 3 options are:
Don't use a volume at all and your files will be visible. Not recommended if you want to preserve this data between containers.
Use a named volume. That can be as easy as changing the volume source from a fully qualified directory, /www-data, to a name, www-data (see the sketch after this list). If you do this, you won't be able to manage the contents of that folder easily from your host; Docker will manage it via its internal directory structure, and you'll want to manage it via containers.
Simply add your desired files to the directory on your host. This is the easiest solution when you're starting out, but be aware that users on the host may not match users in the container, so you may see permission and uid errors that you'll need to fix with chmod or chown commands.
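For option 2, a minimal sketch of the compose file with a named volume (the volume name www-data is just an example):
services:
  wbh-device-asset-server:
    image: wbh-device-asset-server/nginx:test
    volumes:
      # named volume: on first use Docker copies the image's /var/www/html into it
      - www-data:/var/www/html
    ports:
      - "8081:8081"
# top-level entry declares the named volume
volumes:
  www-data: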

Related

Wordpress Docker behind Nginx Reverse Proxy

I have used this site and its threads to solve problems for years, but now I have to ask a question myself.
I have tried to install the WordPress Docker image on my vServer machine. It pretty much works, but only over HTTP.
To install the WordPress Docker container I used the tutorial from the following link.
Additionally, I added --restart always to the docker run -e ... command.
Then I installed nginx 1.12.xxx to act as a reverse proxy, but SSL didn't work. After that, I tried installing a newer version, 1.15.xx, from the nginx repository, with no better results.
I installed a certificate with Let's Encrypt and Certbot.
After that WordPress was running and the wp-admin.php was accessible.
But I can't get SSL/HTTPS working. I have already tried many configurations and even my workmates can't find a solution.
I hope you can get one :)
I tried to configure wp-config.php to enable HTTPS with directives like "$_SERVER['HTTPS'] = 'on';" and others, which didn't work and sometimes had destructive effects.
I also tried enabling "X-Forwarded-Proto $scheme;" and "FastCGI", which didn't work either. I tried many variations of them.
I tried some SSL plugins for WordPress but none of them worked.
https://www.bilder-upload.eu/upload/a0eb85-1554884646.png
https://www.bilder-upload.eu/upload/028dc9-1554883515.png
I hope it's a small mistake and you can help me easily.
First Install Docker on Ubuntu
Either you go with a Docker provider like Bluemix or you get a virtual machine from SoftLayer or any other provider. In my case I chose a virtual server, so I had to install Docker on Ubuntu LTS, which is really easy. Basically you add a new repository entry to your apt sources and install the latest stable Docker packages. There is also a script available on get.docker.com, but I don't feel comfortable executing a shell script straight from the net with root access. But it's up to you.
wget -qO- https://get.docker.com/ | sh
Unlike the Docker installation on Mac, Docker on Linux does not include docker-compose. Installing Docker Compose is straightforward; the binary can be downloaded from GitHub: https://github.com/docker/compose/releases.
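For example, something along these lines (the version number is only an illustration; check the releases page for the current one):
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version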
Docker-compose
Docker-compose takes care of a Docker setup containing more than one container, including networking and basic monitoring. The following file builds and starts all containers: nginx, mysql and wordpress. It also exports the volumes onto the host file system for easy backup and persistence across container rebuilds, and monitors whether the containers are up and running.
version: '3'
services:
  db:
    image: mysql:latest
    volumes:
      - ./db:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: easytoguess
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: eveneasier
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    restart: always
    volumes:
      - ./wordpress:/var/www/html/wp-content
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: eveneasier
      WORDPRESS_DB_NAME: wordpress
  nginx:
    depends_on:
      - wordpress
    restart: always
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - "80:80"
Mysql is the first container we bring up, with environment variables for the database such as the username, password and database name. The volume mapping ./db:/var/lib/mysql saves the database files outside the docker container, so you can delete the container, start a new one and still have the same database up and running. Point this wherever you want; in this case it is "db" under the same directory. Also make sure you come up with decent passwords.
The second container is wordpress. The same applies here for the host folder in the mapping ./wordpress:/var/www/html/wp-content. Furthermore, make sure you have the same user, password and db name configured as in the mysql container configuration.
The last one is nginx as the internet-facing container. You expose port 80 here. While you just specify an image for the other two, for this one you configure a Dockerfile and a build context to customize nginx for the network setup. If you only want to host static files you can add them via volume mounts (see the short sketch below), but in our case we need to configure nginx itself, so we need a customized Dockerfile as described below.
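For the static-files-only case, that volume mount would look something like this in the compose file (the host path ./static is just an example):
nginx:
  image: nginx:latest
  ports:
    - "80:80"
  volumes:
    # serve files from ./static on the host without a custom Dockerfile
    - ./static:/usr/share/nginx/html:ro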
Dockerfile for nginx setup
FROM nginx:latest
COPY default.conf /etc/nginx/conf.d/default.conf
VOLUME /var/log/nginx/log/
EXPOSE 80
This Dockerfile inherits everything from the latest nginx image and copies the default.conf file into it. See the next chapter for how to set up the config file.
Nginx config file
server {
    listen 80;
    listen [::]:80;
    server_name www.23-5.eu ansi.23-5.eu;
    access_log /var/log/nginx/log/unsecure.access.log main;
    location / {
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_pass http://wordpress;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
The two listen directives configure the port we want to listen on; we need one for IPv4 and one for IPv6. The important part is the proxy configuration in the location / block: proxy_pass forwards all calls to "/" (so without a path in the URL) to the server wordpress. As we used docker-compose, Docker takes care of making that name resolvable via its internal DNS server. The proxy_set_header directives rewrite the HTTP headers in order to map everything to the external URL; otherwise we would end up with auto-generated links pointing to http://wordpress.
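Since the original question was about HTTPS, here is a rough sketch of a second server block that terminates SSL in the nginx container and proxies to WordPress over plain HTTP. It assumes Let's Encrypt certificates are mounted into the container under /etc/letsencrypt and that port 443 is published in docker-compose; the certificate paths are illustrative, not taken from the tutorial:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name www.23-5.eu ansi.23-5.eu;
    # assumed certbot paths, mounted into the container as a read-only volume
    ssl_certificate     /etc/letsencrypt/live/www.23-5.eu/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.23-5.eu/privkey.pem;
    access_log /var/log/nginx/log/secure.access.log main;
    location / {
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_pass http://wordpress;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        # tells WordPress the original request came in over HTTPS so it generates https:// links
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
WordPress itself usually also needs a small wp-config.php snippet that honours X-Forwarded-Proto (setting $_SERVER['HTTPS'] when the header says https), which is exactly the part the question was struggling with.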
Start the System
If everything is configured and the docker-compose.yml, default.conf, Dockerfile-nginx and the folders db and wordpress are in the same folder, we can start everything being in this folder with:
docker-compose up --build -d
The parameter “-d” starts the setup in the background (daemon). For the very first run I would recommend using it without the “-d” parameter to see all debug messages.
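If you did start it with "-d", you can still follow the combined output of all containers afterwards:
docker-compose logs -f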

Authorization Issue with Certbot(Standalone)+Nginx+Chef

I use Nginx on my server and want to serve my application on HTTPS using Let's Encrypt certs. I do the following on a fresh server before the application code gets deployed:
Install Nginx
Write the following nginx configuration file to sites-available, for certbot. Then symlink to sites-enabled and restart nginx
server {
    listen 80;
    server_name foo.bar.com;
    allow all;
    location ^~ /.well-known/acme-challenge/ {
        proxy_pass http://0.0.0.0:22000;
    }
}
Then run certbot
certbot certonly -m foo#bar.com --standalone --http-01-port 22000 --preferred-challenges http --cert-name bar.com -d foo.bar.com --agree-tos --non-interactive
All of the above work fine when run manually.
I use Chef to automate the above process. Certbot gets a 404 the first time I deploy. It works on subsequent deployments though.
Keep a note of the following detail:
The phenomenon happens only when I freshly install Nginx and then run my deploy script through Chef and disappears on subsequent deploys.
I use a custom LWRP to run the above steps in Chef, except for the nginx installation, which is taken care of by chef_nginx. I've pasted the snippet of the LWRP that runs the above steps.
vhost_file = "#{node['certbot']['sites_configuration_path']}/#{node['certbot']['sites_configuration_file']}"
template vhost_file do
  cookbook 'certbot'
  source 'nginx-letsencrypt.vhost.conf.erb'
  owner 'root'
  group 'root'
  variables(
    server_names: new_resource.sans,
    certbot_port: node['certbot']['standalone_port'],
    mode: node['certbot']['standalone_mode']
  )
  mode 00644
  only_if "test -d #{node['certbot']['sites_configuration_path']}"
end
nginx_site node['certbot']['sites_configuration_file']
Using certbot in standalone mode on port 22000
How do I make things work even on the first deployment?
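Not a confirmed fix, but one thing worth checking on a fresh install is whether nginx has actually reloaded the new vhost before certbot runs; if it hasn't, the acme-challenge proxy isn't live yet, which would explain a 404 on the first deploy only. In Chef this can be expressed with a notification on the template resource; a minimal sketch, assuming chef_nginx defines a service resource named 'nginx':
template vhost_file do
  cookbook 'certbot'
  source 'nginx-letsencrypt.vhost.conf.erb'
  # ... same attributes as above ...
  # reload nginx immediately so the acme-challenge location is active before certbot runs
  notifies :reload, 'service[nginx]', :immediately
end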

How to configure NGINX routes?

I have 5 lamp containers (tutum/lamp) with mounted ports as follows:
127.0.0.1:81:80
127.0.0.1:82:80
127.0.0.1:83:80
127.0.0.1:84:80
127.0.0.1:85:80
What I would like to do is put an NGINX in front of them so that it redirects to the appropriate container depending on the URL. For example, let's assume that the host IP is 12.45.5.113. Then, when I visit 12.45.5.113/c1/ I want to get redirected to the home page of container 127.0.0.1:81:80, when I visit 12.45.5.113/c2/ I want to get redirected to the home page of container 127.0.0.1:82:80, and so on and so forth.
What should the NGINX configuration look like? Should I install NGINX on the host with apt-get install, or could it be installed as an additional container too?
I think the easiest approach is to launch nginx in a container:
docker run -p 80:80 --link c1 ... --link cn ... nginx
with a config like the following (it can be mounted from the host via the --volume argument to docker run):
server {
    listen 80;
    location /c1/ {
        # trailing slash so the /c1/ prefix is stripped before proxying to the container
        proxy_pass http://c1/;
    }
    ...
    location /cn/ {
        proxy_pass http://cn/;
    }
}
This way it will route all requests as you wish, using the Docker container linking mechanism (all requests will be routed through the bridge network).
For more information check Docker documentation: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/#/connect-with-the-linking-system
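As an aside, container links are a legacy feature; the same topology can also be expressed with docker-compose, which puts all services on one network so nginx can reach them by service name. A rough sketch (the service names c1..c5 and the tutum/lamp image are carried over from the question):
version: '2'
services:
  proxy:
    image: nginx
    ports:
      - "80:80"
    volumes:
      # mount the config shown above
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro
  c1:
    image: tutum/lamp
  c2:
    image: tutum/lamp
  # ... c3, c4 and c5 follow the same pattern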

Docker containers: how do they work together?

I have started working with docker and built a working example as seen in https://codeable.io/wordpress-developers-intro-docker. I need a fairly small footprint for the docker containers since the deployment will be on an embedded system.
But I have no clue how this fits together.
There are two Dockerfiles, one for Nginx:
FROM nginx:1.9-alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
The nginx.conf is defined as:
server {
    server_name _;
    listen 80 default_server;
    root /var/www/html;
    index index.php index.html;
    access_log /dev/stdout;
    error_log /dev/stdout info;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass my-php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
The other Dockerfile is for PHP:
Dockerfile.php-fpm:
FROM php:7.0.6-fpm-alpine
RUN docker-php-ext-install -j$(grep -c ^processor /proc/cpuinfo 2>/dev/null || 1) \
iconv gd mbstring fileinfo curl xmlreader xmlwriter spl ftp mysqli
VOLUME /var/www/html
Finally everything comes together in a docker-compose.yml:
version: '2'
services:
  my-nginx:
    build: .
    volumes:
      - .:/var/www/html
    ports:
      - "8080:80"
    links:
      - my-php
  my-php:
    build:
      context: .
      dockerfile: Dockerfile.php-fpm
    volumes:
      - .:/var/www/html
    ports:
      - "9000:9000"
The docker containers are started up using
$ docker-compose build
$ docker-compose up
And everything works - it's a kind of magic!
Here are (some of) my questions to understand what's going on:
How does the nginx-container know about the php-container?
When PHP is invoked from nginx, which container does the PHP process run in?
How is the data passed from nginx to PHP and back?
Is this docker usage (3 containers for a simple web server application) the right way to use docker, or is this an overkill of containers?
How can this docker architecture be scaled for increasing load? Can I use it for production?
The containers use the same volume (./) on the host. When using a PHP framework such as Yii2, wouldn't it be better to move the volume to either the PHP or Nginx container?
How does the nginx-container know about the php-container?
Under links you listed the my-php container; this, among other things, creates a mapping between the name of the container and its IP in the /etc/hosts file.
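Inside the my-nginx container that mapping looks roughly like this (the address is only illustrative; Docker assigns it at runtime):
# excerpt of /etc/hosts in the my-nginx container
172.17.0.3    my-php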
When PHP is invoked from nginx, which container does the PHP process run in?
As you would expect, any PHP code will run in the my-php container; this is defined in the nginx config file, which passes the processing of PHP requests to the FPM engine running on my-php:9000.
How is the data passed from nginx to PHP and back?
Over regular socket communication. Both containers have their own addresses and can communicate with each other, like any other computers connected to the same network.
Is this docker usage (3 containers for a simple web server application) the right way to use docker or is this an overkill of containers?
I only see 2 containers here. Some would say a container should only run one process (as here, so you have built a minimal system), and some would say each container should run whatever the service needs. This, however, is a matter of preference, and there are different opinions on it.
How can this docker architecture be scaled for increasing load? Can I use it for production?
Yes, you could use it for production. It can scale easily, but in order to achieve scale you are missing some pieces to balance the load, e.g. a load balancer that can send new requests to an instance which isn't already busy. A very common tool for this is HAProxy.
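To sketch what that could look like with nginx itself (HAProxy would play a similar role in front of the web tier), an upstream block can spread PHP requests across several FPM containers; my-php1 and my-php2 are hypothetical replicas, not services from the compose file above:
upstream php_backend {
    # hypothetical replicas of the my-php service
    server my-php1:9000;
    server my-php2:9000;
}
...
location ~ \.php$ {
    include fastcgi_params;
    # hand requests to the pool instead of a single container
    fastcgi_pass php_backend;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}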
The containers use the same volume (./) on the host. When using a PHP framework such as Yii2, wouldn't it be better to move the volume to either the PHP or Nginx container?
As the PHP container does all the processing in this case, it should be safe to only mount the volume on my-php.
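In the compose file that would mean dropping the volumes entry from my-nginx and keeping it only on the PHP service, roughly:
my-php:
  build:
    context: .
    dockerfile: Dockerfile.php-fpm
  volumes:
    # only the PHP container sees the application code
    - .:/var/www/html
Whether nginx still needs the files for static assets depends on the application, so treat this as a sketch rather than a drop-in change.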

How to serve other vhosts next to Gitlab Omnibus server? [Full step-by-step solution]

I installed Gitlab CE on a dedicated Ubuntu 14.04 server edition with Omnibus package.
Now I want to install three other virtual hosts next to GitLab.
Two are node.js web applications launched by a non-root user on two distinct ports > 1024; the third is a PHP web application that needs a web server to serve it.
They are:
a private bower registry running on 8081 (node.js)
a private npm registry running on 8082 (node.js)
a private composer registry (PHP)
But Omnibus listens on 80 and doesn't seem to use either Apache2 or Nginx, so I can't use them to serve my PHP app and reverse-proxy my two other node apps.
What serving mechanism does GitLab Omnibus use to listen on 80?
How should I create the three other virtual hosts to be able to provide the following vhosts?
gitlab.mycompany.com (:80) -- already in use
bower.mycompany.com (:80)
npm.mycompany.com (:80)
packagist.mycompany.com (:80)
About these
But Omnibus listens on 80 and doesn't seem to use either Apache2 or Nginx [, thus ...].
and #stdob's comment:
Did omnibus not use nginx as a web server ??? –
To which I responded:
I guess not because nginx package isn't installed in the system ...
In fact
From the official GitLab docs:
By default, omnibus-gitlab installs GitLab with bundled Nginx.
So yes!
The Omnibus package actually uses Nginx!
But it is bundled, which explains why it doesn't need to be installed as a dependency on the host OS.
Thus YES! Nginx can, and should, be used to serve my PHP app and reverse-proxy my two other node apps.
Then now
Omnibus-gitlab allows webserver access through the user gitlab-www, which resides in the group with the same name. To allow an external webserver access to GitLab, the external webserver user needs to be added to the gitlab-www group.
To use another web server like Apache or an existing Nginx installation you will have to perform the following steps:
Disable bundled Nginx by specifying in /etc/gitlab/gitlab.rb
nginx['enable'] = false
# For GitLab CI, use the following:
ci_nginx['enable'] = false
Check the username of the non-bundled web-server user. By default, omnibus-gitlab has no default setting for the external webserver user; you have to specify its username in the configuration!
Let's say, for example, that the webserver user is www-data.
In /etc/gitlab/gitlab.rb set
web_server['external_users'] = ['www-data']
This setting is an array so you can specify more than one user to be added to gitlab-www group.
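For example, to add both www-data and an nginx user (the usernames here are purely illustrative):
web_server['external_users'] = ['www-data', 'nginx']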
Run sudo gitlab-ctl reconfigure for the change to take effect.
Setting the NGINX listen address or addresses
By default NGINX will accept incoming connections on all local IPv4 addresses.
You can change the list of addresses in /etc/gitlab/gitlab.rb.
nginx['listen_addresses'] = ["0.0.0.0", "[::]"] # listen on all IPv4 and IPv6 addresses
For GitLab CI, use the ci_nginx['listen_addresses'] setting.
Setting the NGINX listen port
By default NGINX will listen on the port specified in external_url or
implicitly use the right port (80 for HTTP, 443 for HTTPS). If you are running
GitLab behind a reverse proxy, you may want to override the listen port to
something else. For example, to use port 8080:
nginx['listen_port'] = 8080
Similarly, for GitLab CI:
ci_nginx['listen_port'] = 8081
Supporting proxied SSL
By default NGINX will auto-detect whether to use SSL if external_url
contains https://. If you are running GitLab behind a reverse proxy, you
may wish to keep the external_url as an HTTPS address but communicate with
the GitLab NGINX internally over HTTP. To do this, you can disable HTTPS using
the listen_https option:
nginx['listen_https'] = false
Similarly, for GitLab CI:
ci_nginx['listen_https'] = false
Note that you may need to configure your reverse proxy to forward certain
headers (e.g. Host, X-Forwarded-Ssl, X-Forwarded-For, X-Forwarded-Port) to GitLab.
You may see improper redirections or errors (e.g. "422 Unprocessable Entity",
"Can't verify CSRF token authenticity") if you forget this step. For more
information, see:
What's the de facto standard for a Reverse Proxy to tell the backend SSL is used?
https://wiki.apache.org/couchdb/Nginx_As_a_Reverse_Proxy
To go further you can follow the official docs at https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md#using-a-non-bundled-web-server
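For illustration, the header forwarding on the external reverse proxy could look roughly like this, assuming it terminates SSL and talks to the GitLab NGINX on port 8080 as in the example above (certificate directives omitted):
server {
    listen 443 ssl;
    server_name gitlab.mycompany.com;
    # ssl_certificate and ssl_certificate_key go here
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # tell GitLab the original request was HTTPS
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header X-Forwarded-Port 443;
    }
}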
Configuring our gitlab virtual host
Installing Phusion Passenger
We need to install ruby globally in the OS (GitLab itself runs within Omnibus with a bundled ruby):
$ sudo apt-get update
$ sudo apt-get install ruby
$ sudo gem install passenger
Recompile nginx with the passenger module
Unlike Apache2, for example, nginx cannot load binary modules on the fly. It must be recompiled for each new module you want to add.
The Phusion Passenger developer team worked hard to provide, so to speak, "a bundled nginx version of Passenger": nginx binaries compiled with the Passenger module.
So, lets use it:
Requirement: we need outbound access on TCP port 11371 (the APT keyserver port).
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 561F9B9CAC40B2F7
$ sudo apt-get install apt-transport-https ca-certificates
creating passenger.list
$ sudo nano /etc/apt/sources.list.d/passenger.list
with these lines:
# Ubuntu 14.04
deb https://oss-binaries.phusionpassenger.com/apt/passenger trusty main
use the right repo for your ubuntu version. For Ubuntu 15.04 for example:
deb https://oss-binaries.phusionpassenger.com/apt/passenger vivid main
Edit permissions:
$ sudo chown root: /etc/apt/sources.list.d/passenger.list
$ sudo chmod 600 /etc/apt/sources.list.d/passenger.list
Updating package list:
$ sudo apt-get update
Allowing it in unattended-upgrades
$ sudo nano /etc/apt/apt.conf.d/50unattended-upgrades
Find or create this config block on top of the file:
// Automatically upgrade packages from these (origin:archive) pairs
Unattended-Upgrade::Allowed-Origins {
// you may have some instructions here
};
Add the following:
// Automatically upgrade packages from these (origin:archive) pairs
Unattended-Upgrade::Allowed-Origins {
// you may have some instructions here
// To check "Origin:" and "Suite:", you could use e.g.:
// grep "Origin\|Suite" /var/lib/apt/lists/oss-binaries.phusionpassenger.com*
"Phusion:stable";
};
Now (re)install nginx-extras and passenger:
$ sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak_"$(date +%Y-%m-%d_%H:%M)"
$ sudo apt-get install nginx-extras passenger
configure it
Uncomment the passenger_root and passenger_ruby directives in the /etc/nginx/nginx.conf file:
$ sudo nano /etc/nginx/nginx.conf
... to obtain something like:
##
# Phusion Passenger config
##
# Uncomment it if you installed passenger or passenger-enterprise
##
passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/passenger_free_ruby;
create the nginx site configuration (the virtual host conf)
$ nano /etc/nginx/sites-available/gitlab.conf
server {
    listen *:80;
    server_name gitlab.mycompany.com;
    server_tokens off;
    root /opt/gitlab/embedded/service/gitlab-rails/public;
    client_max_body_size 250m;
    access_log /var/log/gitlab/nginx/gitlab_access.log;
    error_log /var/log/gitlab/nginx/gitlab_error.log;
    # Ensure Passenger uses the bundled Ruby version
    passenger_ruby /opt/gitlab/embedded/bin/ruby;
    # Correct the $PATH variable to include packaged executables
    passenger_env_var PATH "/opt/gitlab/bin:/opt/gitlab/embedded/bin:/usr/local/bin:/usr/bin:/bin";
    # Make sure Passenger runs as the correct user and group to
    # prevent permission issues
    passenger_user git;
    passenger_group git;
    # Enable Passenger & keep at least one instance running at all times
    passenger_enabled on;
    passenger_min_instances 1;
    error_page 502 /502.html;
}
Now we can enable it:
$ sudo ln -s /etc/nginx/sites-available/gitlab.conf /etc/nginx/sites-enabled/
There is no a2ensite equivalent that comes natively with nginx, so we use ln, but if you want one, there is a project on GitHub:
nginx_ensite:
nginx_ensite and nginx_dissite for quick virtual host enabling and disabling
This is a shell (Bash) script that replicates for nginx the Debian a2ensite and a2dissite for enabling and disabling sites as virtual hosts in Apache 2.2/2.4.
It's done :-). Finally, restart nginx:
$ sudo service nginx restart
With this new configuration, you are able to run other virtual hosts next to gitlab to serve whatever you want.
Just create new configs in /etc/nginx/sites-available.
In my case, I run and serve the following on the same host:
gitlab.mycompany.com - the awesome git platform written in ruby
ci.mycompany.com - the gitlab continuous integration server written in ruby
npm.mycompany.com - a private npm registry written in node.js
bower.mycompany.com - a private bower registry written in node.js
packagist.mycompany.com - a private packagist for composer registry written in php
For example, to serve npm.mycompany.com :
Create a directory for logs:
$ sudo mkdir -p /var/log/private-npm/nginx/
And fill a new vhost config file:
$ sudo nano /etc/nginx/sites-available/npm.conf
With this config
server {
    listen *:80;
    server_name npm.mycompany.com;
    client_max_body_size 5m;
    access_log /var/log/private-npm/nginx/npm_access.log;
    error_log /var/log/private-npm/nginx/npm_error.log;
    location / {
        proxy_pass http://localhost:8082;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Then enable it and restart it:
$ sudo ln -s /etc/nginx/sites-available/npm.conf /etc/nginx/sites-enabled/
$ sudo service nginx restart
As I would not like to change the nginx server bundled with GitLab (which has other integrations), the safest way would be the solution below,
also as per
GitLab: Nginx => Inserting custom settings into the NGINX config
edit the /etc/gitlab/gitlab.rb of your gitlab:
nano /etc/gitlab/gitlab.rb
and scroll to nginx['custom_nginx_config'] and modify it as below; make sure to uncomment it:
# Example: include a directory to scan for additional config files
nginx['custom_nginx_config'] = "include /etc/nginx/conf.d/*.conf;"
create the new config dir:
mkdir -p /etc/nginx/conf.d/
nano /etc/nginx/conf.d/new_app.conf
and add content to your new config
# my new app config : /etc/nginx/conf.d/new_app.conf
# set location of new app
upstream new_app {
    server localhost:1234; # wherever it might be
}
# set the new app server
server {
    listen *:80;
    server_name new_app.mycompany.com;
    server_tokens off;
    access_log /var/log/new_app_access.log;
    error_log /var/log/new_app_error.log;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    location / { proxy_pass http://new_app; }
}
and reconfigure gitlab to get the new settings inserted
gitlab-ctl reconfigure
to restart nginx
gitlab-ctl restart nginx
to check nginx error log:
tail -f /var/log/gitlab/nginx/error.log

Resources