Vagrant server (Homestead) does not load CSS files - WordPress

I'm currently using the Homestead Vagrant box to develop several WordPress websites. However, there are significant issues with the Vagrant server, as it fails to update my 'style.css' files. Sometimes it loads an old version of the file, and sometimes the GET request returns 'ERR_EMPTY_RESPONSE'. Other files such as 'index.php' load promptly and consistently. Restarting nginx fixes the issue from time to time, but not always.
Here is a copy of my nginx configs:
# nginx.conf
#
# Customizations should be done in conf.d ideally.
# Run as vagrant rather than the default www-data
user vagrant;
# Default to the number of CPU cores available
worker_processes auto;
# Process identifier
pid /run/nginx.pid;
# Enable just-in-time compilation of regex during config parsing
#pcre_jit on;
events {
# max clients = worker_processes * worker_connections
worker_connections 2000;
# Accept all new connections on a worker process rather than one at a time
multi_accept on;
# Most efficient connection processing method on linux 2.6+
use epoll;
}
http {
## Define MIME types
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Default error log
error_log /var/log/nginx/error.log;
# Default access log
access_log /var/log/nginx/access.log;
# PHP Upstream
upstream php {
server unix:/var/run/php5-fpm.sock;
}
# HHVM Upstream
upstream hhvm {
server unix:/var/run/hhvm/hhvm.sock;
}
# Default index pages
index index.html index.php index.hh;
# Turn sendfile off in a virtual machine because of issues
#
# The nginx default for sendfile is on, which appears to not jive with something
# about the VM for some things, causing weird encoding issues in Javascript
# that create syntax errors and weird encoding issues in CSS that make it seem
# like your file has been cached forever. Crazy stuff - so off it is.
#
# See - http://jeremyfelt.com/code/2013/01/08/clear-nginx-cache-in-vagrant/
# From - https://github.com/Varying-Vagrant-Vagrants/VVV
#
# Note that this should most likely be turned on in a production environment
sendfile off;
# Don't send out partial TCP frames
tcp_nopush on;
tcp_nodelay on;
# How long each connection should stay idle
keepalive_timeout 65;
# Reset lingering timed out connections. Deflect DDoS and free memory.
reset_timedout_connection on;
# If a request line or header field does not fit into this buffer, then larger
# buffers via large_client_header_buffers are allocated
client_header_buffer_size 1k;
# The maximum number and size of large headers to accept from a client
large_client_header_buffers 4 8k;
# If the requested body size is more than the buffer size, the entire body is
# written to a temporary file. Default is 8k or 16k depending on the platform.
client_body_buffer_size 16k;
# Max size of a body to allow. Essentially the max upload size
client_max_body_size 16M;
# Accommodate server directives that have hundred(s) of server_names, such as large multisite networks
types_hash_max_size 2048;
server_names_hash_max_size 512;
server_names_hash_bucket_size 512;
# Hide nginx version information
server_tokens off;
# Hide PHP version and other related fastcgi headers
fastcgi_hide_header X-Powered-By;
fastcgi_hide_header X-Pingback;
fastcgi_hide_header Link;
proxy_hide_header X-Powered-By;
proxy_hide_header X-Pingback;
proxy_hide_header X-Link;
# Define a zone for limiting the number of simultaneous connections nginx accepts.
# 1m means 32000 simultaneous sessions. We need to define for each server the limit_conn
# value referring to this or other zones.
limit_conn_zone $binary_remote_addr zone=arbeit_conn:10m;
# Define a zone for limiting the number of simultaneous requests nginx accepts.
# Like the connection zone above.
limit_req_zone $binary_remote_addr zone=arbeit_req:10m rate=250r/m;
# Additional configuration (including gzip and SSL)
include /etc/nginx/conf.d/*.conf;
# Virtual hosts
include /etc/nginx/sites-enabled/*;
}
ssl.conf:
# Default SSL certificates
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
# Protocols and ciphers based on Cloudflare's sslconfig minus RC4
# See - https://github.com/cloudflare/sslconfig
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5:!RC4;
ssl_prefer_server_ciphers on;
# Use a stronger DHE key of 2048-bits
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
# Disable gzip of dynamic content over SSL/TLS
#gzip off;
# Enable sessions cache
# 1m is equal to about 4000 sessions.
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Name servers to resolve upstream servers
resolver 8.8.4.4 8.8.8.8 valid=300s;
# Enable spdy
spdy_headers_comp 5;
add_header Alternate-Protocol 443:npn-spdy/3.1;
# Enable Strict Transport Security (HSTS)
#map $scheme $hsts_header {
# https max-age=31536000;
#}
#add_header Strict-Transport-Security $hsts_header;
gzip.conf:
# Enable Gzip compression
gzip on;
gzip_static on;
gzip_vary on;
gzip_proxied any;
gzip_http_version 1.1;
gzip_buffers 16 8k;
# Compression level (1-9)
gzip_comp_level 5;
# Don't compress anything under 256 bytes
gzip_min_length 256;
# Compress output of these MIME-types
gzip_types
application/atom+xml
application/javascript
application/json
application/rss+xml
application/vnd.ms-fontobject
application/x-font-ttf
application/x-javascript
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/svg+xml
image/x-icon
text/css
text/xml
text/javascript
text/plain
text/x-component;
# Disable gzip for bad browsers
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
Does anyone know what could be wrong, or have any suggestions on why this is happening?
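Besides the global `sendfile off;`, it can help to make sure nothing between nginx and the browser caches the stylesheets while developing. A minimal sketch of a dev-only location block for the site's vhost (the extension list and header values are illustrative, not taken from the original config):

```nginx
# Development only: never cache CSS/JS served from a VirtualBox shared folder
location ~* \.(css|js)$ {
    sendfile off;   # shared-folder metadata caching can serve stale bytes
    expires off;    # suppress the Expires header
    add_header Cache-Control "no-store, must-revalidate";
}
```

If stale files persist even with this in place, the problem is more likely the shared-folder layer itself than nginx.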

Related

Can Cloudflare or Nginx strip the body of an inbound request?

I'm trying to create a basic API which does stuff, as an API does. However, it sits behind both an Nginx instance and a Cloudflare layer for security, and every time I make a request all the headers go through fine, but the body of the request (application/json) seems to be getting removed.
I have tried logging it on the nginx instance and I just get '-' for every request, so I think it could be Cloudflare. I have tested locally and I am definitely able to receive the body. I've looked through the req object and there is no body anywhere; all the auth headers are fine, just no body.
EDIT (in response to AD7six): Sorry, I'll clear my question up: I'm saying that both the access log is missing the body and that my code behind the proxy does not receive it. I'll attach the nginx config / log now.
On further inspection, my nginx config is listening on port 80, but all the requests are going to https... I hope that makes sense.
NGINX Config
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
log_format postdata $request_body;
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
server {
listen 80;
listen [::]:80;
server_name dev.ru-pirie.com;
location / {
access_log /var/log/nginx/postdata.log postdata;
proxy_pass http://192.168.1.74:3000;
}
}
}
All the log contains is '-' per request.
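Note that the config above only defines a server listening on port 80, so HTTPS traffic from Cloudflare is being handled by some other (default) server block, and `$request_body` is only populated for requests whose body nginx actually reads, e.g. when proxying. A sketch of a matching 443 server, with certificate paths that are placeholders to be filled in for the real setup:

```nginx
server {
    listen 443 ssl;
    server_name dev.ru-pirie.com;
    # ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder path
    # ssl_certificate_key /etc/nginx/ssl/example.key;   # placeholder path
    location / {
        access_log /var/log/nginx/postdata.log postdata;
        proxy_pass http://192.168.1.74:3000;   # the body is read here, so $request_body gets logged
    }
}
```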
When requests are proxied via Cloudflare, by default they are modified with additional headers, for example CF-Connecting-IP that shows the IP of the original client that has sent the request (full list here).
There are other features that Cloudflare users can implement that may alter the request, but only when explicitly configured to do so: for example, someone could write a Cloudflare Worker that arbitrarily modifies the incoming request before forwarding it to the origin server. Other general HTTP request changes are possible using Cloudflare Rules.
Cloudflare would not alter the body of an incoming request before passing it to the origin unless explicitly configured to do so, for example with Workers.

Nginx remembering old subdomain

I am having a weird issue. My Ubuntu server still remembers an old subdomain, even though the configuration file has been deleted, both the symlink and the original.
The nginx service has been restarted, and the health is just fine, as seen below:
website:/etc/nginx/sites-enabled# sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Why on earth is the old subdomain still accessible? There are no specific DNS settings for the subdomain; all I have is:
*.domain.com | ip
www.domain.com | ip
This is the contents of nginx:
website:/etc/nginx/sites-enabled# ls
- is empty
website:/etc/nginx/sites-available# ls
- is empty
Nginx conf:
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
I have tried to disable nginx caching, but it still behaves as if it remembers the old subdomain somehow. Has anyone experienced this?
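One way to rule nginx out (a sketch, not part of the original config): add an explicit catch-all server that refuses unknown hosts, then reload. If the old subdomain still responds with content, the response is coming from DNS/browser caching or another machine, not from this nginx.

```nginx
# Catch-all: any Host not matched by another server block gets dropped
server {
    listen 80 default_server;
    server_name _;
    return 444;   # non-standard nginx code: close the connection without a response
}
```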

Nginx returns old HTTP responses

I am having an issue where Nginx caches some HTTP responses, and I don't know why. I have installed Nginx on Ubuntu 18.04 and never changed any config in the nginx.conf file.
I am serving a Flask application that works with MySQL. When I go to the website, delete a record from a table and refresh the page, the record is still there; after multiple requests the record goes away. However, after further requests the record appears again, even though when I check the database the record is not there and was deleted successfully!
So apparently Nginx is returning old HTTP responses from old HTTP requests for no reason, and sometimes it does return fresh/correct HTTP responses.
Note that when I run my Flask server standalone, without Nginx, this issue is no longer there.
I also checked the Flask logs in real time, and sometimes when I request something from the database, Nginx returns a response without even sending the request to Flask. So I can see no request being sent to Flask, even though I still get a response, and it always has old data.
So it is definitely Nginx that is causing it.
nginx.conf (File)
user root;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile off;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 6500;
types_hash_max_size 2048;
server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
client_max_body_size 50M;
proxy_read_timeout 5m;
proxy_send_timeout 300;
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
I am also using uWSGI with Flask, with multiple workers, and run them as a service via systemctl. Here is the mywebsite.ini file for it. But as far as I know, uWSGI does not cache by default.
[uwsgi]
module = application:app
master = true
processes = 5
socket = mywebsite.sock
chmod-socket = 660
vacuum = true
enable-threads = true
die-on-term = true
UPDATE 05/03/2019
The issue is with the uWSGI processes, not Nginx. When I changed them to 0 in mywebsite.ini it works fine, without incorrect old data being returned. But I still want multiple processes for better performance! Any idea how?
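A common cause of this symptom is a database connection (or in-app cache) created at import time in the uWSGI master process and then shared by all forked workers, each of which can end up stuck inside a stale MySQL transaction. One hedged sketch of a fix on the uWSGI side is `lazy-apps`, which loads the application after forking so every worker opens its own connection; the rest of the file is unchanged from the question:

```ini
[uwsgi]
module = application:app
master = true
processes = 5
socket = mywebsite.sock
chmod-socket = 660
vacuum = true
enable-threads = true
die-on-term = true
; load the app per worker, after fork, so connections are not shared
lazy-apps = true
```

If the app manages its own DB sessions (e.g. SQLAlchemy), also make sure each request commits or closes its session, so no worker holds a long-lived transaction snapshot.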

Ubuntu 18.04 Nginx - always only shows the welcome page

I am using the Ubuntu 18.04 default nginx configuration, with some minor changes, but I always see nginx's default welcome message. All config files follow:
/etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
/etc/nginx/conf.d/default.conf:
This file is empty, and the directory contains only this one file.
/etc/nginx/sites-enabled/default:
upstream backend {
server 127.0.0.1:8068;
}
server {
listen 80;
listen [::]:80;
# server_name _;
location / {
proxy_pass http://backend;
}
}
On the server, the http server runs correctly:
$ curl 127.0.0.1:8068
some contents...
On another computer, only the welcome message shows:
$ curl http://the-server-ip-address
the default welcome to nginx message follows...
I've also tried accessing the IP address in Chrome with caching disabled, but the same welcome page always shows.
/var/log/nginx/access.log and /var/log/nginx/error.log show no access logs and no errors.
Does my nginx config files have a problem? Thanks!
(I just tried the same config files on a 16.04 ubuntu server and it works correctly.)
I've run the following commands to test and reload the configs:
$ nginx -t
$ nginx -s reload
Updated
Sorry, I just tried some other operations. I ran the following command on the server:
$ ifconfig
and the IP address is 192.*** rather than the IP address I use to access (ssh to) this server.
I ran
$ curl 192.***:80
and the server responds correctly.
But why does the IP address I use to ssh to the server always show the welcome page?
$ curl the-server-ip-address-for-ssh:80
still the welcome page...
Thanks!
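If the welcome page were served by some other catch-all vhost on this machine, one diagnostic sketch is to make this server block the explicit default and reload; `default_server` is the only addition to the block from the question:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    location / {
        proxy_pass http://backend;
    }
}
```

If the welcome page still appears on the ssh IP after this, that IP most likely terminates on a different machine (e.g. a NAT router or gateway forwarding only the ssh port), which would also explain why this server's access log shows nothing.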

NGINX not serving CSS, Images and other media files

I have installed NGINX on my Ubuntu 16.04 LTS server to satisfy the need to navigate to different applications on the same Linux server.
I installed it following this tutorial: https://www.youtube.com/watch?v=PTmFbYG0hK4&t=677s
I defined everything exactly as the tutorial shows, but I ran into a problem where NGINX does not serve any media files (CSS, images, stylesheets, etc.) for a specific application. To be clearer: I defined a configuration file inside sites-available as follows (and of course made a symbolic link to it in the sites-enabled directory):
server {
    listen 80;
    listen 443 ssl;
    location / {
        root /home/agent/lexicala;
    }
    location /test {
        proxy_pass "http://127.0.0.1:5000";
        rewrite ^/test(.*) $1 break;
    }
}
The "location /" block serves my HTML files and website perfectly.
But when I try to reach "MyServersIP/test/" (serving a Node app), which is supposed to be served by "location /test", the routing works but NGINX serves the page without any media.
In the Chrome console I inspected the page and see the following errors:
GET http://MyServersIP/stylesheets/style.css net::ERR_ABORTED
GET http://MyServersIP/scripts/jquery.multiselect.js net::ERR_ABORTED
GET http://MyServersIP/css/jquery.multiselect.css net::ERR_ABORTED
I have tried to follow posts by people who ran into the same problem:
Nginx fails to load css files ;
https://superuser.com/questions/923237/nginx-cannot-serve-css-files-because-of-mime-type-error ; https://www.digitalocean.com/community/questions/css-files-not-interpreted-by-the-client-s-browser-i-think-my-nginx-config-is-not-good
And many more, but nothing worked for me.
Another thing worth mentioning: when I swap the routings like this:
server {
    listen 80;
    listen 443 ssl;
    location / {
        proxy_pass "http://127.0.0.1:5000";
    }
    location /test {
        root /home/agent/lexicala;
        rewrite ^/test(.*) $1 break;
    }
}
the Node app is served perfectly, but this is no good for me, as I want users to reach the Node app through the 'test' URL.
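The console errors point at `/stylesheets/...`, `/scripts/...` and `/css/...`, i.e. the Node app's pages reference root-relative URLs, so those asset requests match `location /` and never reach the proxy. As a sketch (not from the original tutorial): `proxy_pass` with a URI part replaces the matched location prefix, so the rewrite can be dropped, but the app still needs to emit `/test/`-prefixed asset URLs (or the assets need their own location block) for the media to resolve:

```nginx
location /test/ {
    # the trailing slashes strip the /test prefix before proxying,
    # making the rewrite unnecessary
    proxy_pass http://127.0.0.1:5000/;
}
```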
This is my nginx.conf file (I have made no changes):
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
I have tried to supply as many details as I could, but if something is missing I would be glad to add it.
I hope you can help me find a solution to this bug, because I have spent good working days on it.
