Hosting multiple meteor apps on one server - meteor

I have two Meteor apps running on one Ubuntu server on DigitalOcean. I have also set up nginx to serve them.
Config files:
sailsadria.conf : http://pastebin.com/eCicpNxK
ytp.klancir.work.conf : http://pastebin.com/cNKtA0dV
Now...
http://sailsadria.com/, which is on port 3000, works smoothly as expected, while http://ytp.klancir.work/ lands on the nginx root. On the other hand, http://ytp.klancir.work:3010 goes to the right app running on that port (but I suppose that any URL or IP I forward with the appended port will end up at the right location).
Symlinks are also set up
The domains are configured:
sailsadria: http://screencast.com/t/iqKUlQlDgj8
ytp.klancir.work: http://screencast.com/t/DJJdLfqna
I don't know how to set it up so that http://ytp.klancir.work/ goes directly to port 3010, in other words, directly to the app...

The SOLUTION: sudo service nginx restart....
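For reference, a server block along these lines is what usually forwards the bare domain to the app's port once nginx picks it up (a minimal sketch, not the actual pastebin contents: the domain and port come from the question, everything else is assumed, including the WebSocket headers Meteor needs for DDP):

server {
    listen 80;
    server_name ytp.klancir.work;

    location / {
        # forward everything to the Meteor app listening on 3010
        proxy_pass http://127.0.0.1:3010;
        proxy_http_version 1.1;
        # Meteor's DDP runs over WebSockets, so pass the upgrade headers through
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

nginx only reads sites-enabled at startup or on reload, so a new or symlinked config file does nothing until a restart or reload happens, which is why the restart fixed it.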

Related

module.run not executing in state.highstate, but works with state.sls

I'm attempting to re-run a state from another state. I'm not using watch or watch_in etc. because I want it to run each time. I configure all my nginx virtual hosts and then at the end another state runs, called nginx-certs; the relevant portion is here:
nginx-frontend:
  module.run:
    - name: state.sls
    - mods:
      - nginx-frontend
During the highstate I see the state ID is executed but has no comments, nor does it show that it reruns that state; it just completes as Result: True. I can then jump to the salt master and run
sudo salt webserver state.sls nginx-certs
and when it hits nginx-frontend, it does reload all of the virtual hosts, putting the new cert in the config.
I'm curious why this does not run in the highstate.
I have attempted all sorts of different variations of the simple block outlined above. This one works, but not in the highstate, which is what I'm trying to fix.
If you wonder why I do it this way: all certificates for production and staging terminate at HAProxy, and nginx only serves up 80/http1 and 81/h2, but when building out dev servers I want to assign the cert directly to the server, as it will be public facing. I need to build out the virtual hosts first to get port 80 open, which is used for Let's Encrypt. Then, when the cert is available, update the nginx vhosts' listen directive and cert paths.
From what I understand: you have one server which you want temporarily configured with nginx on port 80, then generate its certificate with letsencrypt, then change the nginx configuration to listen on port 443.
What you can do is:
have one state which installs and configures Nginx to listen on port 80
have another state which installs/configures/runs letsencrypt
a third state which configures Nginx as you want it to be at the end [1]
then you just include them in Salt so that they run in that specific order, like:
# custom_nginx.sls
include:
  - temp_nginx_on_port_80
  - letsencrypt_cert
  - nginx
[1] For this I think it's better to use a formula like the one from the community, https://github.com/saltstack-formulas/nginx-formula/, and configure it with pillar data. Obviously, if you use it for step 3, you won't be able to use it for step 1 (or at least I don't see right now how).
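As an illustration of the middle step, the letsencrypt_cert state can be little more than a guarded certbot call (a hypothetical sketch: the domain, webroot path and e-mail are placeholders, not values from the question):

# letsencrypt_cert.sls (hypothetical sketch)
certbot:
  pkg.installed: []

obtain_cert:
  cmd.run:
    # webroot mode relies on the port-80 vhost created by the previous state
    - name: certbot certonly --webroot -w /var/www/html -d example.com --non-interactive --agree-tos -m admin@example.com
    # skip the call once the cert already exists, keeping the run idempotent
    - creates: /etc/letsencrypt/live/example.com/fullchain.pem
    - require:
      - pkg: certbot

The creates argument keeps the include order safe to re-run, since certbot is skipped once the certificate is on disk.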

Xdebug with PHPStorm and a Docker container

Setup: Windows 10; Docker running with Boot2Docker on Hyper-V; PHPStorm 9
Webserver on the VM is Nginx. I've configured the xdebug.ini for php5-fpm as:
zend_extension=xdebug.so
xdebug.remote_enable=on
xdebug.remote_port=9000
xdebug.remote_connect_back=On
xdebug.remote_handler=dbgp
xdebug.profiler_enable=0
If I set a breakpoint and reload the page I get an incoming connection from Xdebug in PHPStorm:
I wonder why only one file is shown and not the entire project, which is much bigger. If I accept the connection I can debug the very first line, but it does not stop on my breakpoint, and it creates a server entry which looks like:
What is very strange is that the host field is empty.
I already added the server with the correct mapping but it got ignored.
So how to get Xdebug to stop on breakpoints?
What is very strange is that the host field is empty.
PhpStorm requires this field to be filled in, as it uses it to recognize which server entry (and therefore which path mappings) to use -- the IDE supports debugging the same code base running on different domains / remote servers.
In this particular case the server_name directive of your nginx configuration is empty. You can fix this by providing some value for it in the nginx config file.
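For example, something along these lines in the site's server block (a sketch; the name, docroot and php-fpm socket path are assumptions, not values from the question):

server {
    listen 80;
    # any non-empty name will do; PhpStorm matches it against its server entry
    server_name myapp.local;

    root /usr/share/nginx/html;
    index index.php;

    location ~ \.php$ {
        # fastcgi_params passes SERVER_NAME ($server_name) on to PHP and Xdebug
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}

Then create a server entry in PhpStorm with exactly that name (myapp.local) and the right path mappings, and the incoming connection should be matched to it.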

Docker restart not showing the desired effect

I have a small nginx-based test application that I want to run inside a Docker container, so I followed the example given here: docker installation.
So I have a folder named restartTest and it contains an index.html file with a single line in it that says "Docker Test 1". I mount this as my volume at runtime for the Docker container, so the command I use is
docker run -dP -v /Users/Sachin/restartTest/:/usr/share/nginx/html --name engine2 nginx
And it runs fine. I use curl to verify that the volume has mounted properly and the application is running as desired. Now what I do is change the content of the index.html file (from my localhost) to "Docker Test 2" and then restart the container. I execute the following command to verify that the content has indeed changed inside the Docker container:
docker exec engine2 cat /usr/share/nginx/html/index.html
And as expected, the file reads "Docker Test 2". However, when I use the curl command to see if the webpage also reflects the change, I still get "Docker Test 1" as the response. The index.html reflects the change, yet when I run the curl command or access the app from the browser, I still get the same result. I have tried the following, but to no avail:
Restart the service
Stop and start the container
Stop and start the boot2docker VM and docker daemon.
I have no clue as to why this is happening.
So I found this known bug with the VirtualBox VM that is used for running Docker on Mac.
We only hit this bug when there is shared content between the host machine and the VirtualBox VM. There is an optimisation in web servers like nginx and apache (and apparently vertx): whenever we request a static file from the server, it uses sendfile to deliver it. The bug is that in the case of VirtualBox (in the scenario described above) we always get the first version of the file no matter what we try. The workaround for nginx and apache is to turn sendfile off. However, there is a hack that we use for vertx:
rename the file, say login.html, to login.html.moved (anything)
curl :/….../login.html (we won’t get anything)
rename the file back to its original name login.html.moved to login.html
Hard refresh the page (Command + Shift + R).
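For nginx, the sendfile workaround mentioned above is a one-line change in the container's site config, roughly the default.conf shipped with the official image plus sendfile off (a minimal sketch):

server {
    listen 80;
    server_name localhost;

    # disable the sendfile optimisation so the VirtualBox shared folder
    # serves the current file contents instead of a stale copy
    sendfile off;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}

After editing it, reload nginx inside the container (docker exec engine2 nginx -s reload) or restart the container.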
For further reading about this bug consult the following
Link1
Link2
Link3
Link4
I assume it is a caching problem. Did you try setting expires -1 in your index.html location configuration to disable caching of static files?
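Something like this in the location that serves index.html (a sketch of that suggestion; the paths match the nginx image defaults):

location / {
    root /usr/share/nginx/html;
    index index.html;
    # sends Cache-Control: no-cache so clients revalidate on every request
    expires -1;
}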

Exposing localhost to the internet via tunneling (using ngrok): HTTP error 400: bad request; invalid hostname

From previous versions of the question, there is this: Browse website with ip address rather than localhost, which outlines pretty much what I've done so far...I've got the local IP working. Then I found ngrok, and apparently I don't need to connect via the IP.
What I am trying to do is expose my website running on localhost to the internet. I found a tool that will do this: ngrok.
Running the website in Visual Studio, it starts up on localhost/port#. I run the command "ngrok http port#" in the command line. Everything seems to start up fine: I generate a couple of URLs, and the ngrok inspection URL (localhost:4040) works.
The only problem is that when I go to the generated URLs, I get an HTTP error 400: bad request, invalid hostname. This is a different error than when I run "ngrok http wrongport#", which gives a host not found error... so I think something good is happening, I just can't tell what...
Is there a step I am missing in exposing my site to the internet via the tunneling service? If there is, I can't find it in the ngrok documentation.
Troubleshot this issue with ngrok. In the words of inconshreveable, some applications get angry when they see a different host header than expected.
Running the following command should fix the problem:
ngrok http [port] --host-header="localhost:[port]"
Depending on the version, you may also want to try the single-dash form:
ngrok http [port] -host-header="localhost:[port]"
The following command will fix the issue:
ngrok http -host-header=localhost 8080
That didn't work for me, so you could do the following instead:
For IIS Express
In VS 2015:
Go to the .vs\config\applicationhost.config file in your project
In VS 2013 and earlier:
Go to %USERPROFILE%\My Documents\IISExpress\config\applicationhost.config
Find the binding that says:
<binding protocol="http" bindingInformation="*:5219:localhost" />
For me, it was a project running on port 5219.
Change it to:
<binding protocol="http" bindingInformation="*:5219:" />
IIS Express will now accept all incoming connections on that port.
Disadvantage: you need to run IIS Express as admin.
Or you could rewrite the host header in ngrok:
ngrok.exe http -host-header=rewrite localhost:5219
For https this works:
ngrok http https://localhost:<PORT> --host-header="localhost:<PORT>"
UPDATED COMMAND FOR LATEST VERSION
Tested with: (Windows) (ngrok v3.0.5)
Use -- instead of -
ngrok http --host-header=localhost 8080
The simplest thing for me was using iisexpress-proxy + ngrok.
First I install iisexpress-proxy globally with npm
npm install -g iisexpress-proxy
Then I proxy my localhost with it. Say for instance my site is running on 3003.
iisexpress-proxy 3003 to 12345, where 12345 is the new HTTP port I want to proxy to.
Then I can run ngrok on it.
./ngrok.exe http 12345
It just works! 😃
But I think it works only with HTTP. Right now I don't use HTTPS for testing, but even if it works, it's usually a lot of work, as always.
Try different regions from Global infrastructure > Locations:
ngrok http -region eu 8080
You can make a request and view any traffic passing through your tunnel using the ngrok traffic inspector at http://localhost:4040.
OR in command line
ngrok http -region eu 8080 --log=stdout
If one region fails then try with another.
ngrok runs tunnel servers in datacenters around the world. The location of the datacenter within a given region may change without notice (e.g. the European servers may move from Frankfurt to London).
us - United States (Ohio)
eu - Europe (Frankfurt)
ap - Asia/Pacific (Singapore)
au - Australia (Sydney)
sa - South America (Sao Paulo)
jp - Japan (Tokyo)
in - India (Mumbai)
First, open the ngrok configuration YAML file by running this from a terminal:
ngrok config edit
Example YAML for a localhost setup (client & server):
version: "2"
authtoken: {YOUR_AUTH_TOKEN_FROM_NGROK_WEBSITE}
tunnels:
  client:
    addr: 3000
    proto: http
    host_header: localhost
  server:
    addr: 4000
    proto: http
    host_header: localhost
Save the config file based on your client and server ports and run the following command:
ngrok start --all
This will make ngrok open a tunnel for every configuration declared in the YAML file.
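If you only want one of the named tunnels rather than all of them, ngrok start also accepts tunnel names (using the client/server names from the YAML above):
ngrok start client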
I had an IIS Express .NET web API and ngrok installed in Docker (Windows as the host).
I got the "Bad Request" error, and the following command worked for me:
docker run -it -e NGROK_AUTHTOKEN=<token> ngrok/ngrok --host-header=localhost:21852 http host.docker.internal:21852
As I understood later, --host-header is needed because IIS Express refuses all requests from outside (the header must be "localhost:port"), and I used host.docker.internal instead of localhost because ngrok was running inside Docker while IIS Express was running on the Windows host.
I had the same issue and used the following solution:
Make sure your application binding in IIS is set to All Unassigned IP addresses
Run ngrok http 127.0.0.1:173 --region=eu --hostname=yourcustomdomain.eu.ngrok.io
That's it; it works perfectly. This solution also applies to paid pro accounts.
Steps:
Run the command from your ngrok.exe directory: ngrok http <port>, e.g. ngrok http 80 (https://www.screencast.com/t/oyuEPlR6Z).
Set the ngrok URL in your app.
It will create a tunnel to your application.
Thanks.

Is my nginx config correct?

Hi, I'm trying to run my Ruby on Rails app in nginx using
passenger start -e production
but it is missing the cache: [HEAD /] miss.
I'm guessing this is because I don't actually have a file in public. Sorry for this question, it may be too easy to answer. Also, when I route to www.tock.com it renders a live page from the internet :(
server {
    listen 80;
    server_name www.tock.com;
    passenger_enabled on;
    root /home/led/Aptana\ Studio\ 3\ Workspace/djors/public;
}
Wherever you point the webserver (nginx in this case), you need your DNS to match the location. If this is your production server, then you need DNS records that point www.tock.com to your server.
If this is your development or local machine, you probably don't want to name your server something that will override the public DNS records. For example, I name all of my apps in my local nginx config like the following:
server_name my_app_name.local;
Once you've given it a name, you'll need to add "my_app_name.local" to your hosts file (your local DNS records). Your hosts file should then have entries like the ones below:
127.0.0.1 localhost
127.0.0.1 my_app_name.local
Restart nginx, and you can now go to my_app_name.local in your browser.
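Putting the pieces together, the local server block would then look roughly like this (a sketch that reuses the root path from the question; only the server_name changes):

server {
    listen 80;
    # matches the hosts-file entry above
    server_name my_app_name.local;
    passenger_enabled on;
    root /home/led/Aptana\ Studio\ 3\ Workspace/djors/public;
}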
You could get rid of Passenger and the nginx config altogether, since it looks like you are doing this locally; if you want named links (as opposed to just running bundle exec rails server), use Pow to facilitate this. Personally, I'm a rails server guy, but YMMV.
