I've struggled with this configuration for a couple of weeks. What I want to achieve can be listed as follows.
1. I registered a domain not long ago, and I've set up some web services on my VPS, such as a blog, a forum and Owncloud. Now I want to configure Nginx so that I can run all the services on one VPS and one IP address. In order to run Owncloud, I have to modify /etc/php5/fpm/pool.d/www.conf to listen = 9000. In this case only one service (Owncloud) works, because if I want to run the forum I must uncomment listen = /var/run/php5-fpm.sock. What's more, I've tried to uncomment both of them, and Nginx showed a 502 afterwards.
2. I'm using Hexo for my blog. When I start the server, I can access my blog at IP:4000. So I wonder if I could run the blog server in the background and edit the posts online via a subdomain redirected to port 4000. If that's possible, should I modify nginx.conf or add something in sites-available?
3. Can I deploy different web services on different subdomains? Which file do I have to modify? I've read that I can achieve this by using a reverse proxy?
Sorry for my poor English and phrasing. Thanks in advance.
Going at it point by point:
The advantage of PHP-FPM, which you are using, is that you can have multiple separate interpreters running in your pool. To do so, simply copy the file at /etc/php5/fpm/pool.d/www.conf to somewhere else, say /etc/php5/fpm/pool.d/forum.conf, change the listen directive, and you have a second PHP interpreter running, entirely separate from the first one. That way Owncloud (www) and your forum (forum) each have their own distinct PHP.
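As a minimal sketch (the pool name, socket path and location blocks below are illustrations, not your exact config), the second pool and the two matching fastcgi_pass directives could look like this:

; /etc/php5/fpm/pool.d/forum.conf -- copied from www.conf, only these lines changed
[forum]
listen = /var/run/php5-fpm-forum.sock

# in the Owncloud server block -- keeps using the original www pool
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}

# in the forum server block -- uses the new forum pool
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm-forum.sock;
}

After changing the pools, restart php5-fpm (service php5-fpm restart) so both listeners are created.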
This is called reverse-proxying. nginx does that well. You simply add a new site definition in sites-available that reverse-proxies to port 4000 on your server, then symlink (or copy) that site definition to sites-enabled and restart nginx. You will have to set up Hexo to start automatically for that to work.
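As a rough sketch (blog.mydomain.com is just an assumed subdomain; adjust names and paths to your setup), the site definition could look like this:

# /etc/nginx/sites-available/blog -- reverse proxy to the Hexo server on port 4000
server {
    listen 80;
    server_name blog.mydomain.com;

    location / {
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Then enable it and restart nginx:

ln -s /etc/nginx/sites-available/blog /etc/nginx/sites-enabled/blog
service nginx restart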
You can deploy different web services on different subdomains. As long as the DNS is configured to point each name to your server, you can configure the server to respond differently for every subdomain using site definitions. You need to modify the files in sites-enabled to determine which names nginx knows how to respond to.
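In short, each subdomain gets its own server block with its own server_name; a bare-bones sketch (the names and roots are examples only):

server {
    listen 80;
    server_name cloud.mydomain.com;
    root /var/www/owncloud;
    # PHP handling via the www pool, as above
}

server {
    listen 80;
    server_name forum.mydomain.com;
    root /var/www/forum;
    # PHP handling via the forum pool, as above
}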
Related
Is it possible to configure two different NGINX instances on different servers with the same domain, where each serves a different context? Let me give you more information. I have an existing NGINX that is serving a production environment; for example
mydomain.com/prod
I want to create
mydomain.com/dev
but I don't want to change the NGINX conf on the prod environment. I will spin up a dev environment with a new NGINX server using the same domain, but have the location redirect to /dev.
After some research, this is actually not a recommended approach.
I'm planning to build a website to host static files. Users will upload their files, and I will deploy a bunch of deployments with nginx images for them on a Kubernetes node. My main goal is that, at some point, users will deploy their apps to a subdomain like my-blog-app.mysite.com. After some time, users can use custom domains.
I understand that when I deploy an nginx image on a pod, I have to create a service to expose port 80 (or 443) to the internet via a load balancer.
I also read about Ingress; it looks like what I need, but I don't think I understand the concept.
My question is: for example, if I have 500 nginx pods running (each a different website), do I need a service for every pod in that node (in this case 500 services)?
You are looking for https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting.
With this type of Ingress, you route the traffic to the different nginx instances, based on the Host header, which perfectly matches your use-case.
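A rough sketch of such an Ingress (the hostnames, Service names and the networking.k8s.io/v1 API version are assumptions for illustration):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-sites
spec:
  rules:
  - host: my-blog-app.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-blog-app-svc   # Service in front of that user's nginx pod
            port:
              number: 80
  - host: another-app.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: another-app-svc
            port:
              number: 80

One Ingress (behind a single ingress controller and external load balancer) can route any number of hostnames, so the per-site Services can stay internal (ClusterIP) rather than each needing its own load balancer.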
In any case, yes, with your current architecture you need to have a service for each pod. Have you considered a different approach, like having a general listener (nginx instances) and serving the correct content based on authorization or something similar?
Is it possible, without editing the /etc/hosts file, to tell my computer to redirect to 127.0.0.1 every time I visit domain1.com through the web browser, as well as when I request the content of the same pages through curl?
Run a DNS server/resolver on your machine, configure it to forward every query that it can't resolve to the upstream DNS resolvers, and set /etc/resolv.conf to direct all queries to the locally running resolver.
Then in the local resolver add entries for the domains you want to blackhole toward localhost.
There are several options to choose from. The most popular caching resolver at the moment is unbound, but you can also use dnscache for this.
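For example, with unbound, a sketch along these lines in unbound.conf would blackhole the name toward localhost while everything else is forwarded upstream (the forward address is just an example):

server:
    # answer these names locally instead of resolving them
    local-zone: "domain1.com." redirect
    local-data: "domain1.com. A 127.0.0.1"

forward-zone:
    # send everything else to an upstream resolver
    name: "."
    forward-addr: 8.8.8.8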
Is there any way I can configure nginx other than through the normal nginx.conf file?
Like an XML configuration, or memcache, or any other way?
My objective is to add/remove upstreams in the configuration dynamically. Nginx doesn't seem to have a direct solution for this, so I was planning to play with the configuration file, but I am finding it very difficult and error-prone to modify the file through scripts/programs.
Any suggestions ?
No, you can't. The only way to "dynamically" reconfigure nginx is to process the config files in external software and then reload the server. Nor can you "program" the config like in Apache. The nginx config is mostly a static thing, which is praised for its performance.
Source: I needed this too and did some research.
Edit: I have a "supervising" tool installed on my hosts that monitors load, clusters and such. I've ended up implementing the upstream scaling through it. Whenever a new upstream is ready, it notifies my "supervisor" on all web servers. The "supervisors" then query the new upstream for the "virtual hosts" it serves and add all of them to their context on the nginx host, then just nginx -t && nginx -s reload everything. This is for nginx passing FastCGI to php-fpm instances.
Edit 2: I have many server blocks for different server_names (sites), each with an upstream associated with it on one or more other hosts. In the server block I have an include /path/to/where/my/upstream/configs/are/us-<unique_site_id>.conf line. The us-<unique_site_id>.conf file is generated when the server block is created and is populated with the existing upstream(s) information. When there are changes in the upstream pool or the site configuration, the file is rewritten to reflect them.
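A hedged sketch of that layout (the paths and the example id are made up; note that nginx only allows upstream{} at the http level, so in this sketch the include sits just above the server block in the same site file):

# /etc/nginx/upstreams/us-example.conf -- regenerated whenever the pool changes
upstream us-example {
    server 10.0.0.21:9000;
    server 10.0.0.22:9000;
}

# /etc/nginx/sites-enabled/example-site.conf
include /etc/nginx/upstreams/us-example.conf;

server {
    listen 80;
    server_name example-site.com;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass us-example;
    }
}

After rewriting the upstream file: nginx -t && nginx -s reload.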
On my Nginx I've got two hosts.
One with the values
server_name www.mydomain.com;
root /var/www/production/myFirstWebSite;
and the other with
server_name localhost;
root /var/www/development/mySecondWebSite;
In my domain registrar account I configured the DNS with two A records:
www IN A myIP
IN A myIP
This is cool, I can reach my first website with www.mydomain.com or mydomain.com.
Now the problem is how to reach my second website, which is in development and for which I haven't bought a domain name. And myIP/development/mySecondWebSite no longer works...
I think the problem comes from the DNS entries, but I'm not sure.
Do you have any ideas?
Thanks in advance.
There are a couple of ways I can think of to access the localhost one.
Creating a subdomain instead of localhost
This is the one I'd recommend most; try doing something like server_name localhost.mydomain.com.
If you need further security, you could make it allow only a certain IP (or IPs) or a range of IPs.
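A rough sketch of that server block (the allowed address is a placeholder):

server {
    listen 80;
    server_name localhost.mydomain.com;
    root /var/www/development/mySecondWebSite;

    # optional: only let your own address in
    allow 203.0.113.10;
    deny all;
}

You'd also add an A record for localhost.mydomain.com pointing at the same IP.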
Play with your hosts file
In this specific case I would not recommend this, because you'd be messing with localhost itself, which might break some other stuff on your machine; if it were any other name, I would have said it's fine.
Use an ssh tunnel to the server
In this method you create a dynamic forwarding port on your ssh connection and set your browser to pass all traffic through the tunnel, which goes to the server and is handled from there. So if you open localhost, for example, it would be like opening localhost from over there. But since this involves a browser setting, you need to remember to disable it after you disconnect the ssh connection; otherwise the browser will return an error saying that the proxy server is refusing the connection.
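For example (the port and the login are placeholders):

# open a SOCKS proxy on local port 1080 that tunnels through the server
ssh -D 1080 user@your-server

Then set the browser's SOCKS proxy to localhost:1080.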
Using a local Nginx as a proxy
This one I just came up with right now, and I can't say whether it would work or not; the three before it I've used before and I know they work.
You'd set a certain domain name that your local nginx would capture and then proxy to the remote server, but edit the Host header, setting it to localhost instead, so that it matches the localhost server block on the remote machine. If this one works, it would not need any setting to be turned on and off every time.
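If you want to try it, a sketch on your local machine might look like this (dev.mydomain.local is an arbitrary local-only name, and myIP stands for your server's address as in the question):

server {
    listen 80;
    server_name dev.mydomain.local;

    location / {
        proxy_pass http://myIP;
        # make the remote nginx match its localhost server block
        proxy_set_header Host localhost;
    }
}

You'd still need that name to resolve to 127.0.0.1 on your machine (a one-off entry for a made-up name like this is harmless, unlike overriding localhost itself).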
Out of all these, I'd recommend the first one (if it's an option), then try the last one if you don't want to keep turning things on and off before and after each session.