I'm trying to deploy my Angular application to Kubernetes, inside a container with nginx.
I created my Dockerfile:
FROM node:10-alpine as builder
COPY package.json package-lock.json ./
RUN npm ci && mkdir /ng-app && mv ./node_modules ./ng-app
WORKDIR /ng-app
COPY . .
RUN npm run ng build -- --prod --output-path=dist
FROM nginx:1.14.1-alpine
COPY nginx/default.conf /etc/nginx/conf.d/
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /ng-app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
My nginx config:
server {
    listen 80;
    sendfile on;
    default_type application/octet-stream;

    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 1100;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;

    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }

    location /api {
        proxy_pass https://my-api;
    }
}
If I launch this image locally it works perfectly, but when I deploy the container inside a Kubernetes cluster the site loads fine while every API request fails with ERR_CONNECTION_REFUSED.
I'm deploying to GCP: I build the image and then publish it through the GCP dashboard.
Any idea what causes this ERR_CONNECTION_REFUSED?
I found the solution. The problem was with my requests: I was using localhost in the URLs, so they resolved to the wrong pod. I changed the requests to use the service IP directly and that sorted out my problem.
Kubernetes Engine nodes are provisioned as instances in Compute Engine. As such, they adhere to the same stateful firewall mechanism as other instances. Have you configured the firewall rules?
https://cloud.google.com/solutions/prep-kubernetes-engine-for-prod#firewalling
Good that you have figured out the issue. But did you try using the service names instead of the pod IPs? That is the suggested method (https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls) of reaching services by name from within the Kubernetes cluster, while a NodeIP or LoadBalancer IP is used from outside the cluster.
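For example, instead of hard-coding a pod or service IP in the frontend's /api proxy, nginx running inside the cluster can address the backend Service by its DNS name. A minimal sketch, assuming a Service called my-api exposing port 80 in the default namespace (both names are placeholders, not taken from the question):
location /api {
    # cluster DNS resolves the Service name to its stable ClusterIP,
    # so pod restarts and rescheduling do not break the proxy
    proxy_pass http://my-api.default.svc.cluster.local;
}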
I've been following this article trying to host multiple websites on the same machine using IIS and Nginx.
Based on the provided article I produced the following nginx.conf:
http {
    server {
        listen 80;
        server_name localhost;
        keepalive_timeout 1;

        gzip_types text/css text/plain text/xml application/xml application/javascript application/x-javascript text/javascript application/json text/x-json;
        gzip_proxied no-store no-cache private expired auth;
        gzip_disable "MSIE [1-6]\.";

        # new website
        location /bacon/ {
            proxy_pass http://127.0.0.1:1500/;
            proxy_http_version 1.1;
            gzip_static on;
        }

        # old website
        location / {
            proxy_pass http://127.0.0.1:8881;
            proxy_http_version 1.1;
            gzip_static on;
        }
    }
}
My old website is working just fine.
Yet when I try to access my new website I get the following errors:
Note that my new website works just fine if directly requested through http://127.0.0.1:1500/.
What am I missing here?
The URL rewriting done by the proxy_pass directive applies only to the request and to HTTP redirects in the response. That means that if http://127.0.0.1:1500/ replies with an HTTP 30x Location: http://127.0.0.1:1500/aaaa/, nginx will rewrite it to http://localhost/bacon/aaaa/.
But this rewrite does not touch the response body. Any links in the response HTML stay the same, e.g. <a href="/aaaa/">, so there is no /bacon/ part there.
To fix it there are two ways. The first is to edit your application: replace all links so they carry the /bacon/ prefix, or use relative URLs and add <base href="/bacon/"> to the <head> of each file.
If editing the files is not possible, you can rewrite the response body with ngx_http_sub_module. The module documentation is at http://nginx.org/en/docs/http/ngx_http_sub_module.html
With this approach you need to add a sub_filter for every HTML construct that contains a link. For example:
sub_filter_once off;
sub_filter ' href="/' ' href="/bacon/';
sub_filter ' src="/' ' src="/bacon/';
Just be careful: all the sub_filter directives should go inside the /bacon/ location.
Fixing the backend application is much preferred, but sometimes only sub_filter can help.
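To make the placement concrete, here is a minimal sketch of the /bacon/ location with the filters in place, reusing the backend from the question. Clearing Accept-Encoding is an extra assumption on my part: sub_filter only operates on uncompressed responses, so the backend must not gzip them.
location /bacon/ {
    proxy_pass http://127.0.0.1:1500/;
    proxy_http_version 1.1;
    # ask the backend for uncompressed HTML so sub_filter can rewrite it
    proxy_set_header Accept-Encoding "";

    sub_filter_once off;
    sub_filter ' href="/' ' href="/bacon/';
    sub_filter ' src="/' ' src="/bacon/';
}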
Also, there is a third method, but it helps only in some rare cases. If /flutter_servivce_worker.js doesn't exist in the 127.0.0.1:8881 backend, you can add a custom location for this file and proxy_pass it to the bacon backend:
location = /flutter_servivce_worker.js {
proxy_pass http://127.0.0.1:1500;
}
Of course, this method helps only in very limited cases, when you are missing just a few files and do not use any links.
I believe your first app is loading main.dart.js from the second one (at the root path) because you forgot to change <base href="/"> to <base href="/bacon/"> in the index.html file.
This has nothing to do with NGINX.
For the new site, the request is routed fine and the HTML is loaded in the browser. But the application's dependent static file references still point to the base location path '/'.
Depending on the chosen frontend framework, the base path should be changed to /bacon
(or)
create a folder named bacon, place the built files in that folder, and serve the static content directly with Nginx as a plain web server, as sketched below.
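A minimal sketch of that second option, assuming the build output has been copied to /var/www/bacon (a hypothetical path, not taken from the question):
# serve the new site's build output as static files instead of proxying it
location /bacon/ {
    root /var/www;                            # files live under /var/www/bacon/
    try_files $uri $uri/ /bacon/index.html;   # SPA-style fallback to the app's index
    gzip_static on;
}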
Have you tried this?
# new website
location /new/ {
    proxy_pass http://127.0.0.1:1500;
    proxy_http_version 1.1;
    gzip_static on;
}

# old website
location /old/ {
    proxy_pass http://127.0.0.1:8881;
    proxy_http_version 1.1;
    gzip_static on;
}
I have a site served from S3 through Nginx, with the following Nginx configuration.
server {
    listen 80 default_server;
    server_name localhost;
    keepalive_timeout 70;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript application/javascript text/xml application/xml application/xml+rss text/javascript;

    location / {
        proxy_pass http://my-bucket.s3-website-us-west-2.amazonaws.com;
        expires 30d;
    }
}
At present, whenever I build a new version, I just delete the target bucket's contents and upload the new frontend files to it.
Since I am deleting the bucket contents, there is no way to go back to a previous version of the frontend, even with versioning enabled on the bucket. So I want to upload the new frontend files into a version directory (for example 15) in the S3 bucket and then set up a redirect from http://my-bucket.s3-website-us-west-2.amazonaws.com/latest to http://my-bucket.s3-website-us-west-2.amazonaws.com/15.
Does anyone know how this can be done?
There are multiple ways to do this:
The easiest may be through a symbolic link, provided that your environment allows that.
ln -fhs ./15 ./latest
Another option is an explicit external redirect issued to the user, where the user would see the new URL. This has the benefit that multiple versions can be accessed at the same time without any synchronisation issues; for example, if a client decides to do a partial download, everything should still work, because the resume will most likely hit the actual target, not the /latest shortcut.
location /latest {
rewrite ^/latest(.*) /15$1 redirect;
}
The final option is an internal redirect within nginx; this is usually called URL masquerading in some third-party applications. It may or may not be recommended, depending on requirements; an obvious deficiency is partial downloads, where resuming a big download may result in corrupted files:
location /latest {
rewrite ^/latest(.*) /15$1 last;
}
References:
http://nginx.org/r/location
http://nginx.org/r/rewrite
One of the simple ways to handle this situation is using variables. You can include a file that sets the current latest version. With this method you will need to reload your nginx config whenever you update the version.
Create a simple configuration file that sets the latest version:
# /path/to/latest.conf
set $latest 15;
Include your latest.conf in the server block, and add a location that proxies to the latest version.
server {
    listen 80 default_server;
    server_name localhost;

    # SET LATEST
    include /path/to/latest.conf;

    location / {
        proxy_pass http://s3host;
        expires 30d;
    }

    # Note the / at the end of the location and the proxy_pass directive.
    # This will strip the "/latest/" part of the request uri and pass the
    # rest like so: /$latest/$remaining_request_uri
    location /latest/ {
        proxy_pass http://s3host/$latest/;
        expires 30d;
    }

    ...
}
Another way to do this dynamically would be to use Lua to script the behavior. That is a little more involved, though, so I will not get into it in this answer.
I have 5 microservices up and running. One of them is an nginx server that acts as a gateway (reverse proxy for the other services). Another service, called 'web', is an nginx server that serves all the client-side static bundles. I have enabled gzipping on the web nginx server, but when the compressed response comes through the gateway nginx server, the gateway decompresses the files and sends them back to the client. I tried setting gzip off and gunzip off in the gateway nginx server, but it is not working.
Here is the configuration of the web-nginx server:
gzip on;
gzip_comp_level 3;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
gzip_min_length 100;
gzip_buffers 4 32k;
Here is the configuration for the gateway nginx server:
gzip off;
gunzip off;
Any kind of help is appreciated.
You need to add gzip_proxied any; to the backend nginx servers (the ones serving static files).
Compress data even for clients that are connecting to us via proxies,
identified by the "Via" header (required for CloudFront/Cloudflare).
The default value is off, which disables compression for all proxied requests regardless of the other parameters. For more info, check out the nginx docs.
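Applied to the web nginx config shown in the question, that would look roughly like the sketch below. Note that nginx treats a request as proxied only if it carries a Via request header, so the gateway has to set or forward that header:
gzip on;
gzip_comp_level 3;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
gzip_min_length 100;
gzip_buffers 4 32k;
# also compress responses for proxied requests (detected via the "Via" header)
gzip_proxied any;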
I found my mistake: I had failed to forward the header from the proxy server to the actual server with proxy_pass. With the help of the above answer, it worked.
I want to enable gzip compression on my virtual host with nginx. My control panel is Plesk 17, but I have root access to the server. I found the vhost nginx config file in this directory:
/etc/nginx/plesk.conf.d/vhosts
and added these directives in the server block to enable gzip:
gzip on;
gzip_disable msie6;
gzip_proxied any;
gzip_buffers 16 8k;
gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
gzip_vary on;
After all that and restarting nginx, when I check the gzip status it still looks disabled!
For your information, I also have these comments at the top of my config file:
#ATTENTION!
#
#DO NOT MODIFY THIS FILE BECAUSE IT WAS GENERATED AUTOMATICALLY,
#SO ALL YOUR CHANGES WILL BE LOST THE NEXT TIME THE FILE IS GENERATED.
What's wrong? How can I enable gzip?
To enable gzip compression for a particular domain, open Domains > example.com > Apache & nginx Settings > Additional nginx directives and add the directives to that section.
If you want to enable it server-wide, just create a new file /etc/nginx/conf.d/gzip.conf, add the directives there, and restart nginx.
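A minimal sketch of such a server-wide file, reusing the directives from the question and assuming your main nginx.conf includes /etc/nginx/conf.d/*.conf in the http context (the stock config does):
# /etc/nginx/conf.d/gzip.conf
gzip on;
gzip_disable msie6;
gzip_proxied any;
gzip_buffers 16 8k;
gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
gzip_vary on;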
Are there any steps to set up uWSGI with nginx for a simple WSGI Python script? In most places I only see Django, Flask, and other frameworks being set up. I also need steps to serve static files. Are there any?
Obviously there are two steps: uwsgi configuration and nginx configuration.
The simplest uwsgi configuration is as follows (uwsgi accepts many different configuration formats, in this example I use xml):
<uwsgi>
    <chdir>/path/to/your/script/</chdir>
    <pythonpath>/path/to/your/script/</pythonpath>
    <processes>2</processes>
    <module>myscript.wsgi:WSGIHandler()</module>
    <master/>
    <socket>/var/run/uwsgi/my_script.sock</socket>
</uwsgi>
The only tricky option here is module; it should point to your WSGI handler class.
Also, make sure that /var/run/uwsgi/my_script.sock is readable and writeable for both uwsgi and nginx.
The corresponding nginx configuration would look like this:
server {
    listen 80;
    server_name my.hostname;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/var/run/uwsgi/my_script.sock;
    }
}
If you need to serve static files, the simplest way would be to add the following block to the server clause:
location /static/ {
    alias /path/to/static/root/;
    gzip on;
    gzip_types text/css application/x-javascript application/javascript;
    expires +1M;
}
This example already includes gzip compression and support for browser caching.