I'm trying to set up SSL for a GitLab server by following the steps below:
1-Change the rb file to set the external_url
/etc/gitlab/gitlab.rb
external_url 'https://10.1.43.111:443/gitlab'
2-Define the ssl cert in nginx - gitlab-http.conf
/var/opt/gitlab/nginx/conf
However, when I run the reconfigure command for GitLab (sudo gitlab-ctl reconfigure), the content of gitlab-http.conf reverts to the original file.
Did I define the SSL settings correctly? Any ideas?
Thanks
One possible cause comes from the documentation, "Configuration options for the GitLab Linux package / Specify the external URL at the time of installation":
As part of package updates, if you have the EXTERNAL_URL variable set inadvertently, it replaces the existing value in /etc/gitlab/gitlab.rb without any warning.
Check the content of gitlab.rb after the reconfigure command: if it changes, that would explain why gitlab-http.conf is, in turn, affected.
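Also note that with the Omnibus package, the files under /var/opt/gitlab/nginx/conf are regenerated from templates on every reconfigure, so manual edits to gitlab-http.conf will always be overwritten. The supported place for SSL settings is gitlab.rb itself; a minimal sketch, with example certificate paths you would need to adjust:
external_url 'https://10.1.43.111:443/gitlab'
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.crt"      # example path
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.key"  # example path
Then run sudo gitlab-ctl reconfigure again and let it render gitlab-http.conf for you.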
I'm using vagrant (VVV actually) to run local wordpress installs. I want to test different behaviors for different GEOs on my local machine instead of uploading it every time to the server, which is annoying.
So I've tried to install the GeoIP nginx module on the local machine with the following guide https://piwik.org/faq/how-to/faq_166/ (and a bit more googling, but that doesn't matter at the moment).
When I run ./configure, the following appears:
checking for GeoIP library ... found
checking for GeoIP IPv6 support ... found
I've also set the .dat files in my conf file and set the $_SERVER (fastcgi_param) parameters - so they are displayed when I print the $_SERVER var.
But those GeoIP vars are empty. I'm not sure about the reason, but two things are bothering me. First, when I run nginx -V in the terminal, the argument --with-http_geoip_module is missing. Second, could it actually work if the REMOTE_ADDR (IP) is not my real IP (192.168.1.50, for example)?
nginx is a bit strange for me, so sorry if something isn't exact.
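For reference, the relevant parts of my config look roughly like this (paths and parameter names are just what I set up, not necessarily correct):
http {
    geoip_country /usr/share/GeoIP/GeoIP.dat;        # country .dat file
    geoip_city    /usr/share/GeoIP/GeoLiteCity.dat;  # city .dat file
    ...
}
location ~ \.php$ {
    fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code;
    fastcgi_param GEOIP_COUNTRY_NAME $geoip_country_name;
    fastcgi_param GEOIP_CITY         $geoip_city;
    ...
}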
--
Operating system - macOS, nginx version - 1.3.15, running with VVV (vagrant box)
If there is a reverse proxy in front of your nginx, use geoip_proxy to define the IPs whose X-Forwarded-For header can be trusted.
You can also use that without actually having a reverse proxy while you're developing: add your local IP to the geoip_proxy list and set the X-Forwarded-For header to your public IP in your browser (use a plugin like Modify Headers).
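A minimal sketch of that idea (the addresses are examples; use whatever your Vagrant box actually sees as the client address):
http {
    geoip_country /usr/share/GeoIP/GeoIP.dat;   # example .dat path
    geoip_proxy   192.168.1.0/24;               # trust X-Forwarded-For from these clients
    geoip_proxy   10.0.2.2;                     # e.g. the VirtualBox host address
    geoip_proxy_recursive on;
}
With that in place, requests arriving from those addresses have their GeoIP lookup done against the address taken from X-Forwarded-For rather than REMOTE_ADDR.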
Setup: Windows 10; Docker running with Boot2Docker on Hyper-V; PHPStorm 9
The webserver on the VM is Nginx. I've configured xdebug.ini for php5-fpm as:
zend_extension=xdebug.so
xdebug.remote_enable=on
xdebug.remote_port=9000
xdebug.remote_connect_back=On
xdebug.remote_handler=dbgp
xdebug.profiler_enable=0
If I set a breakpoint and reload the page, I get an incoming connection from Xdebug in PHPStorm.
I wonder why there is only one file shown and not the entire project, which is much bigger. If I accept the connection I can debug the very first line, but it does not stop on my breakpoint, and it creates a server entry.
What is very strange is that the host field is empty.
I already added the server with the correct mappings, but it got ignored.
So how to get Xdebug to stop on breakpoints?
What is very strange is that the host field is empty.
PhpStorm requires this field to be filled, as it uses it to recognize which server entry (and therefore which path mappings) to use -- the IDE supports debugging the same code base running on different domains / remote servers.
In this particular case the server_name in your nginx configuration is empty. You can fix this by providing some value in the nginx config file.
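For example (the name itself is arbitrary here; it just has to be non-empty and match the host of a server entry in PhpStorm):
server {
    listen 80;
    server_name myproject.local;   # example name; PhpStorm matches its server entry against this
    root /usr/share/nginx/html;
    ...
}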
I have a small nginx-based test application that I want to run inside a Docker container, so I followed the example given here: docker installation.
So I have a folder named restartTest, and it contains an index.html file with a single line that says Docker Test 1. I mount this as a volume at runtime for the Docker container. The command I use is
docker run -dP -v /Users/Sachin/restartTest/:/usr/share/nginx/html --name engine2 nginx
And it runs fine. I use curl to verify that the volume has mounted properly and the application is running as desired. Now I change the content of the index.html file (from my localhost) to Docker Test 2 and then restart the container. I execute the following command to verify that the content has indeed changed inside the Docker container:
docker exec engine2 cat /usr/share/nginx/html/index.html
And as expected, the file reads Docker Test 2. However, when I use curl to check whether the webpage also reflects the change, I still get Docker Test 1 as the response. The index.html reflects the change, yet whether I run curl or access the app from the browser, I still get the old content. I have tried the following, but to no avail:
Restart the service
Stop and start the container
Stop and start the boot2docker VM and docker daemon.
I have no clue as to why this is happening.
So I found that this is a known bug with the VirtualBox VM that is used for running Docker on Mac.
The bug only shows up when there is shared content between the host machine and the VirtualBox VM. Web servers like nginx, apache (and apparently vertx) use an optimisation for static files: whenever we request a static file, the server uses sendfile to deliver it. The bug is that with VirtualBox shared folders (in the scenario described above) we always get the first version of the file, no matter what we try. The workaround for nginx and apache is to turn sendfile off (a config sketch follows the list below). For vertx, the hack we use is:
rename the file, say login.html, to login.html.moved (anything)
curl :/….../login.html (we won’t get anything)
rename the file back to its original name, from login.html.moved to login.html
Hard refresh the page (Command + Shift + R).
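For nginx, the sendfile workaround mentioned above is a one-line change; a minimal sketch (placed wherever your config currently sets sendfile, usually in the http block):
http {
    sendfile off;   # work around the stale-file behaviour on VirtualBox shared folders
    ...
}
Reload nginx afterwards so the change takes effect.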
For further reading about this bug consult the following
Link1
Link2
Link3
Link4
I assume it is a caching problem. Did you try setting expires -1 in your index.html location configuration to disable caching for static files?
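That would look something like this (a sketch, assuming the default static-file location):
location / {
    root   /usr/share/nginx/html;
    index  index.html;
    expires -1;   # send headers that discourage caching of the response
}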
I have a website which is already running on nginx.
The nginx conf file is /etc/nginx/nginx.conf.
Now I want to integrate Lua into that project, so I installed OpenResty.
I created a folder named "work" as per the instructions in the docs, and the website is working fine at port 8080 as per the instructions.
Now I want to use the same code in my /etc/nginx/nginx.conf file,
so that I can use statements like 'content_by_lua' there.
I am not able to configure this.
I am getting the error below:
Starting nginx: nginx: [emerg] unknown directive "content_by_lua" in /etc/nginx/nginx.conf:25
nginx: configuration file /etc/nginx/nginx.conf test failed
Let me know what I am doing wrong.
I started from the same point: had nginx, had Lua, installed openresty and went from there. I was getting the exact same error. After spending considerable time trying to make the openresty packages play nice with my nginx installation, I found it easiest to uninstall nginx and move forward with openresty's nginx alone. Just make backups of your current nginx.conf and any vhost files.
When installing openresty I made sure to include the --with-luajit option, set up a "hello, world" test, and everything worked wonderfully. My biggest complaint was not being able to start and stop nginx as a service anymore; the issue is the lack of an init.d file in the openresty installation. Luckily I ran across this:
https://groups.google.com/forum/#!topic/openresty-en/7UOz-y77CY4
Just change the name to openresty (instead of openresty.init.d), place it in /etc/init.d/ (assuming Ubuntu), and start/stop/reload with sudo service openresty start.
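For the "hello, world" test, a minimal location in openresty's nginx.conf is enough; a sketch (port and path are arbitrary):
server {
    listen 8080;
    location /hello {
        default_type text/plain;
        # older releases use: content_by_lua 'ngx.say("hello, world")';
        content_by_lua_block {
            ngx.say("hello, world")
        }
    }
}
If this works under openresty's nginx but the same directive fails under /etc/nginx/nginx.conf, you are still running the stock nginx binary, which is exactly the error above.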
The error shows that your nginx was not compiled with the required module.
Try nginx -V to see if your nginx was configured with the Lua module (lua-nginx-module).
Maybe you should find out where the openresty nginx is and use that nginx instead of the default one.
I'm just starting to explore nginx on my Ubuntu 10.04. I installed nginx and I'm able to get the "Welcome to Nginx" page on localhost. However, I'm not able to add a new server_name.
I've made the changes in sites-available/default and also tried reloading/restarting nginx, but nothing works.
To build on mark's answer: on Debian/Ubuntu distros the default configuration file has an include /etc/nginx/sites-enabled/*; directive, with site configuration files stored in /etc/nginx/sites-available/; a default site is usually included in that dir.
For examples beyond the default config, follow the nginx beginner's guide or see wiki.nginx.org for more details.
After creating a new configuration in sites-available, create a symbolic link with this command, assuming that your conf file is named "myapp" and nginx is at /etc/nginx (could also be at /usr/local/etc/nginx):
ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp
By the way, you could always create your conf file directly in sites-enabled, but the recommended way above allows you to "enable and disable" sites on the server very quickly without actually moving/deleting your conf files.
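For example, to disable the site again later, remove the symlink and reload; the file in sites-available stays untouched:
rm /etc/nginx/sites-enabled/myapp
nginx -s reload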
P.S: Don't trust the tutorials: check your configuration!
P.P.S: You can use the command nginx -t to test your site's conf and nginx -s reload to reload the conf.
The usual way to add another site in Nginx in Ubuntu is to copy the sites-available/default file to sites-available/new-site-name, then create a symbolic link in sites-enabled to sites-available/new-site-name.
In the new configuration file, you need to edit the listen and server_name directives. Use listen to specify the IP address and port, and server_name to specify the hostnames. For more details, see HttpCoreModule.
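A minimal sketch of such a server block (names and paths are examples):
server {
    listen 80;                                  # IP address and/or port to listen on
    server_name example.com www.example.com;    # hostnames this block answers to
    root /var/www/example.com;
    index index.html;
}
After editing, test and reload with nginx -t and nginx -s reload.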