I had WordPress running on Apache, but it was giving me slow performance (I don't know whether Apache is the cause), so I tried to migrate it to nginx.
I configured nginx.conf, but now it gives me: 502 Bad Gateway
My WordPress path is: C:\nginx\html
My PHP path is: C:\nginx\php
And my nginx.conf is:
server {
listen 443 ssl;
server_name domain.com www.domain.com;
root html;
index index.php;
try_files $uri $uri/ /index.php?$args;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_certificate $CERTPATH;
ssl_certificate_key $CERTKEYPATH;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
root "html";
index index.php index.html index.htm;
}
location ~ \.php$ {
root html;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
include fastcgi_params;
}
}
I tried without the location ~ \.php$ block, and then the browser downloaded a file every time I opened the website.
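For reference, this is a minimal sketch of how the PHP handler is usually wired up on a Windows layout like the one above. It assumes php-cgi.exe has been started separately and bound to 127.0.0.1:9000 (if nothing is listening there, nginx returns exactly the 502 described), and it uses $document_root instead of the /scripts placeholder so PHP can actually find the WordPress files:

location ~ \.php$ {
    root           html;   # resolves to C:\nginx\html
    # assumes php-cgi has been started first, e.g.:
    #   C:\nginx\php\php-cgi.exe -b 127.0.0.1:9000
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    # use the real document root, not the /scripts placeholder from nginx's
    # sample config, otherwise PHP cannot locate the requested script
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include        fastcgi_params;
}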
In sites-enabled, file test.com.conf:
map $http_host $blogid {
default 0;
test.com 1;
}
server {
listen 5.187.1.93:80;
server_name test.com *.test.com;
root /home/fornex/wordpress;
access_log /var/log/nginx/test.com-access.log;
error_log /var/log/nginx/test.com-error.log;
include conf.d/restrictions.conf;
# include /home/fornex/wordpress/nginx.conf;
include conf.d/wordpress-mu.conf;
}
File site.com.conf:
server {
listen 5.187.1.93:80;
server_name site.com *.site.com;
return 301 https://$host$request_uri;
}
server {
listen 5.187.1.93:443 ssl;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_certificate /etc/letsencrypt/live/site.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/site.com/privkey.pem;
server_name site.com *.site.com;
root /home/fornex/site.com;
index index.php;
client_max_body_size 7m;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~* /\. {
deny all;
}
location ~*\.(php)$ {
fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
}
When I open test.com (it should be a WordPress installation), site.com opens instead. What is wrong? How can I make them separate sites? I searched a lot on the web but didn't find anything that helps in my situation. Adding *.test.com didn't help.
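One thing worth noting about the config above (an observation, not a confirmed diagnosis): test.com only has a plain port-80 server, while site.com owns the only listen ...:443 ssl server on that IP. Any request for test.com that arrives over HTTPS (a cached redirect, HSTS, or a browser auto-upgrade) therefore lands in the site.com block, which acts as the de facto default for port 443. A sketch of a separate HTTPS server for test.com follows; the certificate paths are placeholders, not taken from the original config:

server {
    listen 5.187.1.93:443 ssl;
    server_name test.com *.test.com;
    # placeholder certificate paths; a real certificate covering test.com is required
    ssl_certificate     /etc/letsencrypt/live/test.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/test.com/privkey.pem;
    root /home/fornex/wordpress;
    access_log /var/log/nginx/test.com-access.log;
    error_log /var/log/nginx/test.com-error.log;
    include conf.d/restrictions.conf;
    include conf.d/wordpress-mu.conf;
}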
I'm currently working on creating a dockerized server with two sites. I want them both to run over port 443. So far, I've managed to get one of them running on its own using the nginx reverse proxy, but when I try to run both, it seems to ignore my server entirely.
stream {
upstream shop_local_xposi_com {
server 127.0.0.1:9000;
}
upstream sockets_local_xposi_com {
server 127.0.0.1:9001;
}
map $ssl_preread_server_name $upstream {
shop.local.xposi.com shop_local_website_com;
socket.local.xposi.com sockets_local_website_com;
}
# SHOP webserver
server {
# SSL
listen 127.0.0.1:9000 ssl;
ssl_certificate /etc/nginx/certs/website.com.crt;
ssl_certificate_key /etc/nginx/certs/website.com.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
ssl_prefer_server_ciphers on;
index index.php index.html;
root /var/www/public;
location / {
try_files $uri /index.php?$args;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass app:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
}
# SOCKET webserver
server {
# SSL
listen 127.0.0.1:9001 ssl;
ssl_certificate /etc/nginx/certs/website.com.crt;
ssl_certificate_key /etc/nginx/certs/website.com.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
ssl_prefer_server_ciphers on;
index index.php index.html;
root /var/www/public;
location / {
try_files $uri /index.php?$args;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass socket:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
}
server {
listen 443;
ssl_preread on;
proxy_pass $upstream;
}
}
When running just one server, this config file was just one of the larger server sections, which worked perfectly. But with the setup I'm trying to create (diagram below), it instantly redirects to the API on my accept environment. My guess as to why it hits that specific API is that it's the next available line with the same domain in my Windows hosts file, so the browser gets told to go there(?).
For any further information that I forgot to give, please ask.
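For comparison, the ssl_preread approach is normally split across two contexts: the stream block only inspects the SNI name and forwards the raw TLS connection, while the actual shop and socket servers (root, index, location and fastcgi_pass are http-only directives and are not valid inside stream) terminate TLS in the http block. A rough sketch under that assumption, with the map values pointing at addresses that actually exist (note that the original map references shop_local_website_com/sockets_local_website_com, which do not match the defined upstream names):

stream {
    # route by SNI only; no certificates and no TLS termination here
    map $ssl_preread_server_name $upstream {
        default                127.0.0.1:9000;  # fallback
        shop.local.xposi.com   127.0.0.1:9000;
        socket.local.xposi.com 127.0.0.1:9001;
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $upstream;
    }
}

http {
    # the real servers terminate TLS on the loopback ports
    server {
        listen 127.0.0.1:9000 ssl;
        server_name shop.local.xposi.com;
        ssl_certificate     /etc/nginx/certs/website.com.crt;
        ssl_certificate_key /etc/nginx/certs/website.com.key;
        root /var/www/public;
        index index.php index.html;
        location / {
            try_files $uri /index.php?$args;
        }
        location ~ \.php$ {
            fastcgi_pass app:9000;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }
    # the socket.local.xposi.com server mirrors this block, listening on
    # 127.0.0.1:9001 and passing PHP to socket:9000
}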
I have a similar setup, but I use different server blocks listening for different server_name values:
server {
listen 80;  # or: listen 443;
server_name shop-local.website.com ;
location / {
... some code
proxy_pass http://shoplocalwebsiteIP:port;
}
}
server {
listen 80;  # or: listen 443;
server_name socket-local.website.com ;
location / {
... some code
proxy_pass http://socketlocalwebsiteIP:port;
}
}
You could encapsulate the server name inside the desired block and then set the correct proxy_pass to the backend.
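If both names are supposed to be served over TLS on port 443, as in the question, the same idea looks roughly like this; the backend addresses are placeholders and the certificate is simply reused in both blocks:

server {
    listen 443 ssl;
    server_name shop.local.xposi.com;
    ssl_certificate     /etc/nginx/certs/website.com.crt;
    ssl_certificate_key /etc/nginx/certs/website.com.key;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:9000;  # placeholder shop backend
    }
}
server {
    listen 443 ssl;
    server_name socket.local.xposi.com;
    ssl_certificate     /etc/nginx/certs/website.com.crt;
    ssl_certificate_key /etc/nginx/certs/website.com.key;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:9001;  # placeholder socket backend
    }
}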
I'm trying to configure nginx with two domains, each with a different ssl certificate and a different root location.
The following config contains essentially the same thing twice, once per domain.
domain1.com should go to /home/ubuntu/web/html/domain1 and should use this: /etc/letsencrypt/live/domain1.com/fullchain.pem certificate. domain2.com should go to /home/ubuntu/web/html/domain2 and use this /etc/letsencrypt/live/domain2.com/fullchain.pem certificate.
I tried the following:
server {
listen 80;
server_name www.domain1.com;
return 301 https://domain1.com$request_uri;
}
server {
listen 80;
server_name domain1.com;
return 301 https://domain1.com$request_uri;
}
server {
listen 443;
ssl on;
ssl_certificate /etc/letsencrypt/live/domain1.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain1.com/privkey.pem;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
root /home/ubuntu/web/html/domain1;
index index.php index.html index.htm;
server_name domain1.com, www.domain1.com;
location / {
try_files $uri $uri/ $uri.html $uri.php?$query_string;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_param MAGE_MODE "developer";
}
}
server {
listen 80;
server_name www.domain2.com;
return 301 https://domain2.com$request_uri;
}
server {
listen 80;
server_name domain2.com;
return 301 https://domain2.com$request_uri;
}
server {
listen 443;
ssl on;
ssl_certificate /etc/letsencrypt/live/domain2.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain2.com/privkey.pem;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
root /home/ubuntu/web/html/domain2;
index index.php index.html index.htm;
server_name domain2.com, www.domain2.com;
location / {
try_files $uri $uri/ $uri.html $uri.php?$query_string;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_param MAGE_MODE "developer";
}
}
It turns out that one server block is always chosen as the default and is then also used for the other domain. Adding an extra default server doesn't help either.
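One detail worth checking (assuming the paste matches the real config): server_name takes space-separated names, so the line server_name domain1.com, www.domain1.com; registers the literal name domain1.com, with a trailing comma, which never matches a real Host header. With no name matching on port 443, nginx falls back to the first 443 block for every request, which would produce exactly this behaviour. The combination of listen 443; plus ssl on; is also the legacy form; listen 443 ssl; is preferred on current versions. A trimmed sketch of the relevant lines:

server {
    listen 443 ssl;
    server_name domain1.com www.domain1.com;   # space-separated, no comma
    ssl_certificate     /etc/letsencrypt/live/domain1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain1.com/privkey.pem;
    root /home/ubuntu/web/html/domain1;
    # ... rest of the block unchanged
}
server {
    listen 443 ssl;
    server_name domain2.com www.domain2.com;
    ssl_certificate     /etc/letsencrypt/live/domain2.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain2.com/privkey.pem;
    root /home/ubuntu/web/html/domain2;
    # ... rest of the block unchanged
}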
I'm new to nginx and I have a problem with a virtual host. The virtual host doesn't work: when I try to access the vhost, it redirects to the localhost "Welcome to nginx" page. Here are the contents of my config:
/etc/hosts config:
127.0.0.1 localhost localhost.localdomain
::1 localhost localhost.localdomain
****Generated by Admin****
18.200.10.50 mail.testingweb.com
18.200.10.50 testingweb.com
SSL config in /etc/nginx/conf.d/ssl.conf:
server {
listen 443 default_server ssl;
server_name testingweb.com;
ssl_certificate /etc/nginx/sslcert/xxxx.crt;
ssl_certificate_key /etc/nginx/sslcert/xxxxx.key;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
keepalive_timeout 70;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNU$
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
location / {
root /usr/share/nginx/html;
index index.php index.html index.htm;
}
location ~ \.php$ {
try_files $uri =404;
# With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
/etc/nginx/sites-available/default config:
server {
listen 80 default_server;
# listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/xhtml;
index index.php index.html index.htm;
# Make site accessible from http://localhost/
server_name testingweb.com;
return 301 https://$host$request_uri;
location / {
try_files $uri $uri/ =404;
}
error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
try_files $uri =404;
# # With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
deny all;
}
}
server {
listen 80;
listen 443;
return 403;
}
I want to serve another site from a new root directory, /usr/share/nginx/html/www; in that www directory there is a WordPress install.
/etc/nginx/sites-available/testingweb config:
server {
listen 80 default_server;
# listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html/www;
index index.php index.html index.htm;
# Make site accessible from http://localhost/
server_name testingweb.com;
# rewrite ^ https://$http_host$request_uri? permanent;
return 301 https://$host$request_uri;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ /index.php?q=$uri&$args;
# Uncomment to enable naxsi on this location
# include /etc/nginx/naxsi.rules;
}
error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location ~ \.php$ {
try_files $uri =404;
# # With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
# location = /favicon.ico {
# alias /usr/share/nginx/html/favicon.ico;
# }
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
deny all;
}
}
Given these configs, what's wrong with my setup? I cannot access the WordPress files in the /usr/share/nginx/html/www directory via the domain testingweb.com; it always redirects to the default host instead of the testingweb host. Sorry for my bad English.
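Two details stand out in the configs above (observations based on the paste, not a confirmed diagnosis): both sites-available/default and sites-available/testingweb declare listen 80 default_server, and nginx allows only one default_server per address:port, so the second file will be rejected if both are enabled; and every HTTP request is 301-redirected to HTTPS, where the default 443 server in ssl.conf serves root /usr/share/nginx/html rather than the www subdirectory, so the WordPress install is never reached. A minimal sketch of the ssl.conf change, assuming WordPress is what the HTTPS server should serve:

server {
    listen 443 default_server ssl;
    server_name testingweb.com;
    # serve WordPress here, since all HTTP traffic is redirected to HTTPS
    root  /usr/share/nginx/html/www;
    index index.php index.html index.htm;
    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }
    # ssl_certificate, ssl_certificate_key and the PHP location block
    # stay as in the original ssl.conf
}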
This is a revised version of the nginx configuration from your pastebin code:
server {
listen 80;
# listen [::]:80 default_server ipv6only=on;
# Make site accessible from http://devdev.com/
server_name devdev.com;
return 301 https://$host$request_uri;
}
# HTTPS server
#
server {
listen 443 default_server ssl;
server_name devdev.com;
root /var/www;
index index.php index.html index.htm;
# uncomment to add your access log path here
# access_log /var/log/nginx/devdev.com.access.log main;
ssl_certificate /etc/ssl/ssl-unified.crt;
ssl_certificate_key /etc/ssl/ssl-my-private-decrypted.key;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
keepalive_timeout 70;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS +RC4 RC4";
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
location @default {
rewrite ^/(.*) /index.php?uri=$request_uri last;
}
location / {
try_files $uri $uri/index.php @default;
}
location ~ \.php$ {
try_files $uri =404;
# With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
The first server block listening on port 80 just redirects to https://devdev.com/. This will redirect all http requests to https so you don't need any other processing rules.
The second server block listens on port 443 and will proxy requests with a path ending with .php to php-fpm (you want to double-check that it's running on a unix socket and your permissions are correct).
The location block matching the / prefix (location /) will try to match files in the request URI and handle the request appropriately. For example:
If the request is for /index.php and the file exists, the following block will match the .php suffix and proxy to php-fpm.
If the request is for /foo and there's no match for a file by that name, nginx will try to match /foo/index.php and then proxy to php-fpm.
If there is still no match, try_files will use the @default location block, which just sends the request to your top-level /index.php with the request URI as parameters.
If your WordPress site is located in /var/www -- the top-level entry point should be /var/www/index.php -- this configuration should work. You might need to tweak the configurations based on your WordPress settings -- though this is generic enough that it should work without a lot of changes.
I'm very new to nginx, so forgive me if my explanations are off. I'll do my best to explain what I am trying to achieve.
Using WordPress and nginx, I would like user accounts to be mapped to a subdomain of the main domain. For example, if the user creates an account called "sample", the subdomain for that user would be sample.example.com.
When the user goes to sample.example.com, the subdomain should be mapped to example.com/sample/. Similarly, if a user visits sample.example.com/account/, it should map to example.com/sample/account/, and so on and so forth. It should be noted that the example.com/sample/ URLs are rewrites of this type of structure: example.com/index.php?user=sample.
There are also a few reserved subdomains that should not be redirected, such as cdn and admin. They should be ignored by these rules if they are requested.
How can I achieve this automatically when a user creates an account? The goal here is automation - set it up once correctly and not worry about it. Since I have literally just started working with nginx a few days ago, I'm not sure where to start at all. Any advice to move me in the right direction would be incredibly helpful. Here is my current config file for the domain:
server {
listen 80;
server_name www.example.com;
rewrite ^(.*) $scheme://example.com$1 permanent;
}
server {
listen 443 ssl;
server_name www.example.com;
rewrite ^(.*) $scheme://example.com$1 permanent;
}
server {
listen 80;
server_name example.com;
access_log /var/www/example.com/logs/access.log;
error_log /var/www/example.com/logs/error.log;
root /var/www/example.com/public;
index index.php;
location / {
try_files $uri $uri/ @wordpress /index.php?q=$request_uri;
}
location @wordpress {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param SCRIPT_FILENAME /var/www/example.com/public/index.php;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_NAME /index.php;
}
# Pass the PHP scripts to FastCGI server listening on UNIX sockets.
#
location ~ \.php$ {
try_files $uri @wordpress;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/example.com/public$fastcgi_script_name;
include fastcgi_params;
}
}
server {
listen 443 ssl;
ssl on;
keepalive_timeout 70;
server_name example.com;
ssl_certificate ssl/example.com.chained.crt;
ssl_certificate_key ssl/example.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_prefer_server_ciphers on;
root /var/www/example.com/public;
index index.php;
location / {
try_files $uri $uri/ @wordpress /index.php?q=$request_uri;
}
location @wordpress {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param SCRIPT_FILENAME /var/www/example.com/public/index.php;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_NAME /index.php;
}
# Pass the PHP scripts to FastCGI server listening on UNIX sockets.
#
location ~ \.php$ {
try_files $uri @wordpress;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/example.com/public$fastcgi_script_name;
include fastcgi_params;
}
}
I understand that what I am trying to achieve probably needs to go into the /etc/nginx/nginx.conf file if I want it to be automated, and I am actively trying to learn how to achieve this. I'm just stuck where I am at now and am looking for any advice/help that would point me in the right direction. I'm eager to learn!
ANSWER
After days of searching, tweaking, and configuring, I've gotten down the code needed to map subdomains to URLs exactly like in my example. Here is my vhost for example.com: https://gist.github.com/thomasgriffin/4733283
server {
listen 80;
listen 443 ssl;
server_name ~^(?<user>[a-zA-Z0-9-]+)\.example\.com$;
location / {
resolver 8.8.8.8;
rewrite ^([^.]*[^/])$ $1/ permanent;
proxy_pass_header Set-Cookie;
proxy_pass $scheme://example.com/user/$user$request_uri;
}
}
server {
listen 80;
listen 443 ssl;
server_name www.example.com;
return 301 $scheme://example.com$request_uri;
}
server {
listen 80;
server_name example.com;
access_log /var/www/example.com/logs/access.log;
error_log /var/www/example.com/logs/error.log;
root /var/www/example.com/public;
index index.php;
location / {
try_files $uri $uri/ @wordpress /index.php?q=$request_uri;
}
location @wordpress {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_NAME /index.php;
}
# Pass the PHP scripts to FastCGI server listening on UNIX sockets.
#
location ~ \.php$ {
try_files $uri @wordpress;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
server {
listen 443 ssl;
ssl on;
keepalive_timeout 70;
server_name example.com;
ssl_certificate ssl/example.com.chained.crt;
ssl_certificate_key ssl/example.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_prefer_server_ciphers on;
root /var/www/example.com/public;
index index.php;
location / {
try_files $uri $uri/ @wordpress /index.php?q=$request_uri;
}
location @wordpress {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_NAME /index.php;
}
# Pass the PHP scripts to FastCGI server listening on UNIX sockets.
#
location ~ \.php$ {
try_files $uri @wordpress;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
The main chunk of the mapping is done in the first server block. I'm targeting any subdomain (I will have already weeded out restricted subdomains with other, non-relevant code) and rewriting it to ensure that it has a trailing slash, to avoid any internal redirects by WordPress for URLs without one. From there, the resolver directive is required to resolve URLs defined in proxy_pass, so I am resolving with Google's DNS. I'm also using the proxy_pass_header directive to send over cookies in order to keep WordPress login authentication intact. proxy_pass defines the URL to map to.
It should also be noted that if you want to use login authentication as well with subdomains, you need to define your custom cookie domain in wp-config.php like this:
define('COOKIE_DOMAIN', '.example.com');
And that should be it. You can now enjoy URLs like subdomain.example.com that map to example.com/user/subdomain/ or whatever you want. From there, you can use WordPress's Rewrite API to map the rewritten URL to specific query args that can be sent to $wp_query for loading custom templates, etc.
The following should do it:
server {
listen 80; listen 443;
server_name *.example.com;
if ($host ~ "^(.*)\.example\.com$" ) { set $subdomain $1;}
rewrite ^ $scheme://example.com/$subdomain/$request_uri permanent;
}
(As an aside: the bare regex ^ matches all URLs most efficiently, and the standard nginx variable $request_uri already holds the URI including arguments, so you don't need the (.*) capture group in the rewrite.)
Additionally, add a second server block for the domains you don't want redirected:
server {
listen 80; listen 443;
server_name cdn.example.com admin.example.com;
# do whatever with the requests of the reserved subdomains;
}
I think .htaccess files don't work with nginx. I use nginx as a reverse proxy server on port 80 and Apache as the web server.
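In that arrangement .htaccess keeps working, because Apache is still the one serving the PHP; nginx only forwards the requests. A minimal sketch of such a front end, assuming Apache has been moved to 127.0.0.1:8080 (the port and the server name are placeholders):

server {
    listen 80;
    server_name example.com;                 # placeholder name
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8080;    # Apache backend; port is an assumption
    }
}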