Heroku + nginx on port 80

I'm trying to start an nginx server on a Heroku free environment. I've read every how-to and tutorial I could find, but I can't get it running.
First of all, I would like to run nginx as the default web server on port 80. Afterwards I would like to configure nginx as a proxy for the underlying Express server (another Heroku instance).
For 4 days I have been trying to start just nginx on my Heroku instance. I always get the exception that I am not permitted to bind to port 80.
I forked the nginx buildpack (https://github.com/moohkooh/nginx-buildpack) from (https://github.com/benmurden/nginx-pagespeed-buildpack) to adjust some configuration. If I run nginx via heroku bash on port 2555, nginx starts, but I get "connection refused" in the web browser.
If I start nginx via the dyno, I get this error message in the logs:
State changed from starting to crashed
The Procfile of my dyno:
web: bin/start-nginx
My nginx.config.erb:
daemon off;
# Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;

events {
    use epoll;
    accept_mutex on;
    worker_connections 1024;
}

http {
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 512;

    server_tokens off;

    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log logs/nginx/access.log l2met;
    error_log logs/nginx/error.log;

    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    server {
        listen <%= ENV['PORT'] %>;
        server_name _;
        keepalive_timeout 5;
        root /app/www;
        index index.html;

        location / {
            autoindex on;
        }
    }
}
I also set the PORT variable to 80:
heroku config:get PORT
80
Some other configuration:
heroku config:get NGINX_WORKERS
8
heroku buildpacks
https://github.com/heroku/heroku-buildpack-multi.git
heroku stack
cedar-14
My .buildpacks file:
https://github.com/moohkooh/nginx-buildpack
https://codon-buildpacks.s3.amazonaws.com/buildpacks/heroku/ruby.tgz
My guess is that Heroku doesn't use the variable I set to 80. What's wrong? Big thanks to anyone who can help.
Btw: my Express server runs without any errors on port 1000 (as a test I also started it on port 80 without any errors).

I solved my problem with this configuration:
daemon off;
# Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;
pid nginx.pid;

events {
    worker_connections 1024;
}

http {
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 512;

    server_tokens off;

    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log logs/nginx/access.log l2met;
    error_log logs/nginx/error.log;

    include mime.types;

    server {
        listen <%= ENV['PORT'] %>;
        server_name localhost;
        port_in_redirect off;
        keepalive_timeout 5;
        root /app/www;
        index index.html;

        location / {
            autoindex on;
        }
    }
}
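For what it's worth, the decisive detail is that listen still reads <%= ENV['PORT'] %>: Heroku assigns every web dyno its own port at start-up and the process has to bind to exactly that value, while binding to port 80 inside a dyno is not possible for an unprivileged process, which matches the "not permitted" error above. At runtime the ERB line is rendered into something like (the number here is only an illustrative value):
listen 47829;
so the router forwards to the port Heroku assigned, not necessarily to a PORT value set with heroku config:set.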

For those who are trying to deploy to an NGINX container (React App in my case):
With the help of this docker image I was able to do it. You will need the following in a file called /etc/nginx/conf.d/default.conf.template inside your container:
server {
    listen $PORT;
    listen [::]:$PORT;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
Then in your Dockerfile, use:
CMD /bin/sh -c "envsubst '\$PORT' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf" && nginx -g 'daemon off;'
Now when you run:
> heroku container:push web
> heroku container:release web
NGINX will use Heroku's assigned PORT to serve your application.
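For context, a minimal Dockerfile around that CMD could look like the sketch below; the nginx:alpine base image, the build/ output directory and the template location in the build context are assumptions, only the CMD line is taken from above:
FROM nginx:alpine
# Static bundle produced by the React build (assumed to be ./build)
COPY build /usr/share/nginx/html
# Template containing the $PORT placeholders shown above
COPY default.conf.template /etc/nginx/conf.d/default.conf.template
# Substitute Heroku's runtime PORT into the template, then run nginx in the foreground
CMD /bin/sh -c "envsubst '\$PORT' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf" && nginx -g 'daemon off;'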

Related

Get the certificate and key file names stored on Heroku to set up SSL on Nginx server

I want to add the certificate and key to the Nginx server that my application is served on and hosted by Heroku. This is what I currently have in my Nginx config file. Does proxying the SSL server work for this instead while keeping the server secure? If not, how am I supposed to get the file names for the .pem and .key files that I uploaded to Heroku for my specific application?
nginx.conf.erb
daemon off;
# Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;

events {
    use epoll;
    accept_mutex on;
    worker_connections <%= ENV['NGINX_WORKER_CONNECTIONS'] || 1024 %>;
}

http {
    server_tokens off;

    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log <%= ENV['NGINX_ACCESS_LOG_PATH'] || 'logs/nginx/access.log' %> l2met;
    error_log <%= ENV['NGINX_ERROR_LOG_PATH'] || 'logs/nginx/error.log' %>;

    include mime.types;
    default_type text/html;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # Must read the body in 65 seconds.
    keepalive_timeout 65;

    # handle SNI
    proxy_ssl_server_name on;

    upstream app_server {
        server unix:/tmp/nginx.socket fail_timeout=0;
    }

    server {
        listen <%= ENV["PORT"] %>;
        server_name _;

        # Define the specified charset to the “Content-Type” response header field
        charset utf-8;

        location / {
            proxy_ssl_name <%= ENV["HEROKU_DOMAIN"] %>;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app_server;
            client_max_body_size 5M;
        }

        location /static {
            alias /app/flask_app/static;
        }
    }
}
If you create an SSL certificate with CloudFlare, you can't access it through the Heroku CLI, but you can download it through CloudFlare.
Please check that you have routed your domain through CloudFlare as described in Configure-Cloudflare-and-Heroku-over-HTTPS.
Download the SSL cert via CloudFlare.
Set up the SSL cert for Nginx: Setup SSL Cert.
Hope it helps.
EDIT
Put the SSL cert .key and .pem into the same folder as nginx.conf.erb, i.e. domain_name.key & domain_name.pem.
Deploy to Heroku.
Use config like this:
ssl_certificate domain_name.pem;
ssl_certificate_key domain_name.key;
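For orientation, those two directives go inside the server block of the nginx.conf.erb shown above, next to the listen directive; a minimal sketch, where the ssl parameter on listen and the server_name are assumptions (nginx only uses the certificate on listeners marked ssl):
server {
    listen <%= ENV["PORT"] %> ssl;
    server_name example.com;
    ssl_certificate     domain_name.pem;
    ssl_certificate_key domain_name.key;
    # ... keep the existing location / and /static blocks from the question
}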

cgit + uwsgi + nginx not generating the pages for repositories

I am trying to configure cgit with nginx through uwsgi. I managed to get the main page working on example.com/ and added my repos, but when I try to access a repo at example.com/somerepo I get a 502 error.
I know cgit is working fine because I can run cgit.cgi with and without the QUERY_STRING="url=somerepo" environment variable and it generates the correct HTML for the main page and the somerepo page respectively.
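(For reference, this is roughly how I invoke cgit.cgi directly; the cgi-bin path matches the cgit.ini further down:)
cd /usr/share/webapps/cgit/1.2.1/hostroot/cgi-bin
./cgit.cgi                                 # HTML for the main page
QUERY_STRING="url=somerepo" ./cgit.cgi     # HTML for the somerepo page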
I have been trying to debug the issue using the nginx error logs at debug level, strace and gdb on both nginx and cgit.cgi, and the output from uwsgi. This is what I've found so far:
When I click a somerepo link on cgit's main page, uwsgi makes a GET request to /somerepo and nginx tries to open a directory at /htdocs/somerepo, which it can't find because it doesn't exist (I suppose cgit.cgi should generate this on the fly). I know this from strace: stat("/usr/share/webapps/cgit/1.2.1/htdocs/olisrepo/", 0x7ffdf4c817c0) = -1 ENOENT (No such file or directory)
When I click a somerepo link I get read(8, 0x561749c8afa0, 65536) = -1 EAGAIN (Resource temporarily unavailable) from cgit.cgi's strace.
When I try to visit an invalid URL like somerepotypo, it correctly generates a 404 page saying 'no repositories found'.
These are my configuration files:
/etc/nginx/nginx.conf
user nginx nginx;
worker_processes 1;

error_log /var/log/nginx/error_log debug;

events {
    worker_connections 1024;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main
        '$remote_addr - $remote_user [$time_local] '
        '"$request" $status $bytes_sent '
        '"$http_referer" "$http_user_agent" '
        '"$gzip_ratio"';

    client_header_timeout 10m;
    client_body_timeout 10m;
    send_timeout 10m;

    connection_pool_size 256;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 2k;
    request_pool_size 4k;

    gzip off;

    output_buffers 1 32k;
    postpone_output 1460;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 75 20;
    ignore_invalid_headers on;

    # Cgit
    server {
        listen 80;
        server_name example.com;
        root /usr/share/webapps/cgit/1.2.1/htdocs;

        access_log /var/log/nginx/access_log main;
        error_log /var/log/nginx/error_log debug;

        location ~* ^.+(cgit.(css|png)|favicon.ico|robots.txt) {
            root /usr/share/webapps/cgit/1.2.1/htdocs;
            expires 30d;
        }

        location / {
            try_files $uri @cgit;
        }

        location @cgit {
            include uwsgi_params;
            uwsgi_modifier1 9;
            uwsgi_pass unix:/run/uwsgi/cgit.sock;
        }
    }
}
cgit.ini (I load this using uwsgi --ini /etc/uwsgi.d/cgit.ini)
[uwsgi]
master = true
plugins = cgi
chmod-socket = 666
socket = /run/uwsgi/%n.sock
uid = nginx
gid = nginx
processes = 1
threads = 1
cgi = /usr/share/webapps/cgit/1.2.1/hostroot/cgi-bin/cgit.cgi
/etc/cgitrc
css=/cgit.css
logo=/cgit.png
mimetype-file=/etc/mime.types
virtual-root=/
remove-suffix=1
enable-git-config=1
scan-path=/usr/local/cgitrepos
Can you help me fix this? Thanks in advance

NGINX resolving a non-configured domain, why?

I have one server running on: http://localhost:8080
I'm configuring a sample NGINX server.
I copied the following configuration from the internet:
# user nobody;
worker_processes 1;

error_log logs/error.log;
error_log logs/error.log notice;
error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;
    # gzip on;

    server {
        listen 80;
        server_name mydomain01.com www.mydomain01.com;

        location / {
            proxy_pass http://localhost:8080;
            include "../proxy_params.conf";
        }
    }
}
In the hosts file I have just the following entries:
127.0.0.1 mydomain01.com
127.0.0.1 www.mydomain01.com;
127.0.0.1 mydomain02.com
127.0.0.1 www.mydomain02.com;
When I go to http://mydomain01.com I get the same content as at http://localhost:8080.
My question is:
Why, when I go to http://mydomain02.com, do I also get the same content as at http://localhost:8080?
I think I should not get that content, because this last domain is not in the NGINX configuration.
Do I have an error in the configuration above?
Thanks!
nginx always has a default server which handles requests whose host does not match any server_name directive. If you do not mark a server block as default_server, nginx uses the first server block that listens on the matching port. See this document for details.
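If you want requests for hosts that are not configured, such as mydomain02.com, to be rejected instead, a minimal sketch is an explicit catch-all server (return 444 is nginx's non-standard "close the connection" code):
server {
    listen 80 default_server;
    server_name _;
    return 444;
}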

configuring nginx as proxy to work with devpi mirror on HP-cloud

I'm trying to create a devpi mirror on HP Cloud that will be accessed via nginx, i.e. nginx listens on port 80 and is used as a proxy to devpi, which runs on port 4040 on the same machine.
I have configured an HP Cloud security group that is open on all ports (inbound and outbound) just for the beginning (I'll change it later, of course), and started an Ubuntu 14 instance.
I have allocated a public IP to the instance that I created.
I have installed devpi-server using pip, and nginx using apt-get.
I have followed the instructions on devpi's tutorial page here (the steps are collected as shell commands below):
Ran devpi-server --port 4040 --gen-config and copied the contents of the generated nginx-devpi.conf into nginx.conf.
Then I started the server using devpi-server --port 4040 --start.
Started nginx using sudo nginx.
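Collected as shell commands, the sequence above was roughly (the location of the generated nginx-devpi.conf is wherever --gen-config wrote it):
sudo apt-get install nginx
pip install devpi-server
devpi-server --port 4040 --gen-config
# copy the contents of the generated nginx-devpi.conf into /etc/nginx/nginx.conf
devpi-server --port 4040 --start
sudo nginx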
My problem is as follows:
When I SSH to the HP instance on which nginx and devpi are running and execute pip install -i http://<public-ip>:80/root/pypi/ simplejson, it succeeds.
But when I run the same command from my laptop I get:
Downloading/unpacking simplejson
Cannot fetch index base URL http://<public-ip>:80/root/pypi/
http://<public-ip>:80/root/pypi/simplejson/ uses an insecure transport scheme (http). Consider using https if <public-ip>:80 has it available
Could not find any downloads that satisfy the requirement simplejson
Cleaning up...
No distributions at all found for simplejson
Storing debug log for failure in /home/hagai/.pip/pip.log
I thought it might be a security/network issue, but I don't think that's the case, because curl http://<public-ip>:80 returns the same thing whether I execute it from my laptop or from the HP instance:
{
  "type": "list:userconfig",
  "result": {
    "root": {
      "username": "root",
      "indexes": {
        "pypi": {
          "type": "mirror",
          "bases": [],
          "volatile": false
        }
      }
    }
  }
}
I have also tried starting another instance on HP Cloud and executing pip install -i http://<public-ip>:80/root/pypi/ simplejson, but I got the same error as on my laptop.
I can't understand what the difference between these two cases is, and I'd be happy if someone has a solution for this, or any idea what the problem might be.
My nginx.conf file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    server {
        server_name localhost;
        listen 80;

        gzip on;
        gzip_min_length 2000;
        gzip_proxied any;
        #gzip_types text/html application/json;

        proxy_read_timeout 60s;
        client_max_body_size 64M;

        # set to where your devpi-server state is on the filesystem
        root /home/ubuntu/.devpi/server;

        # try serving static files directly
        location ~ /\+f/ {
            error_page 418 = @proxy_to_app;
            if ($request_method != GET) {
                return 418;
            }
            try_files /+files$uri @proxy_to_app;
        }

        # try serving docs directly
        location ~ /\+doc/ {
            try_files $uri @proxy_to_app;
        }

        location / {
            error_page 418 = @proxy_to_app;
            return 418;
        }

        location @proxy_to_app {
            proxy_pass http://localhost:4040;
            #dynamic: proxy_set_header X-outside-url $scheme://$host:$server_port;
            proxy_set_header X-outside-url http://localhost:80;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";

    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    #include /etc/nginx/sites-enabled/*;
}
Edit:
I have tried to use devpi-client from my laptop, and when I execute devpi use http://<public-ip>:80 from my laptop I get the following:
using server: http://localhost/ (not logged in)
no current index: type 'devpi use -l' to discover indices
~/.pydistutils.cfg : no config file exists
~/.pip/pip.conf : no config file exists
~/.buildout/default.cfg: no config file exists
always-set-cfg: no
You can try changing this:
location @proxy_to_app {
    proxy_pass http://localhost:4040;
    #dynamic: proxy_set_header X-outside-url $scheme://$host:$server_port;
    proxy_set_header X-outside-url http://localhost:80;
    proxy_set_header X-Real-IP $remote_addr;
}
to this:
location @proxy_to_app {
    proxy_pass http://localhost:4040;
    proxy_set_header X-outside-url $scheme://$host;
    proxy_set_header X-Real-IP $remote_addr;
}
This has worked for me :-).
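As I understand it, the reason this helps is that devpi builds its outward-facing URLs from the X-outside-url header, so hard-coding http://localhost:80 makes external clients see http://localhost/, exactly as in the devpi use output in the question. With the header derived from $scheme://$host, a re-check from the laptop (same <public-ip> placeholder as in the question) should report the public address:
devpi use http://<public-ip>:80
pip install -i http://<public-ip>:80/root/pypi/ simplejson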

nginx configuration for rhodecode & redmine on ubuntu 12.04

I am trying to set up rhodecode + redmine on Ubuntu with the following configuration:
http://my_ip/redmine
and
http://my_ip/rhodecode
I am using nginx as the web server, with redmine running on localhost:3000 and rhodecode running on localhost:5000; somehow I am missing something in configuring nginx.conf.
I am able to reach both redmine on port 3000 (while testing with WEBrick) and rhodecode on port 5000 individually, but I am not able to serve them as
http://my_ip/redmine
and
http://my_ip/rhodecode
Following is my nginx.conf file
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    passenger_root /usr/local/rvm/gems/ruby-1.9.3-p374/gems/passenger-3.0.19;
    passenger_ruby /usr/local/rvm/wrappers/ruby-1.9.3-p374/ruby;

    upstream rhodecode {
        server 127.0.0.1:5000;
    }

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        root /var/data/redmine/public;
        passenger_enabled on;
        client_max_body_size 25m; # Max attachment size

        location /rhodecode/ {
            try_files $uri @rhodecode;
            proxy_pass http://127.0.0.1:5000;
        }

        location /rhodecode {
            proxy_pass http://127.0.0.1:5000;
        }
    }
}
It would be easier to use the subdomains redmine.yoursite.com and rhodecode.yoursite.com. It's also prettier and more flexible: you can easily move one of the apps to another server.
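A minimal sketch of that layout, reusing the paths and ports from the question (the subdomain names are placeholders, and the passenger_root / passenger_ruby lines from the existing http block are assumed to stay in place):
server {
    listen 80;
    server_name redmine.yoursite.com;
    root /var/data/redmine/public;
    passenger_enabled on;
    client_max_body_size 25m;
}

server {
    listen 80;
    server_name rhodecode.yoursite.com;
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
    }
}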
