Make Laravel Homestead Accessible via the Internet - networking

How can I make Laravel Homestead (a Vagrant vm) accessible via the internet? Currently, I have set my router to port-forward to my host machine's local IP. However, that causes the Laravel site to think that all incoming requests are coming from 10.0.2.2.
What would be the correct way to make the site accessible via the internet? Would I have to get the VM assigned an IP from the router's DHCP? If so, how do I do that?

The correct answer these days is to use Homestead's share alias on the command line, from an SSH session inside the VM.
e.g. share acme.app
Behind the scenes, this uses ngrok and is documented in the Laravel documentation.
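A minimal session might look like this, reusing the acme.app site name from above (substitute whatever hostname your Homestead.yaml maps):

vagrant ssh
share acme.app

ngrok then prints a public forwarding URL that tunnels traffic to the site inside the VM.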

You can also make it work with the xip.io service. More details here: http://christoph-rumpel.com/2014/10/access-laravel-homestead-projects-through-other-devices-in-three-little-steps/
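The idea: any hostname of the form myapp.192.168.10.10.xip.io resolves to 192.168.10.10 (Homestead's default private IP), so other devices on the same network can reach the VM by that name without editing their hosts files.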

Chances are you need to tell Laravel to trust the router as a proxy:
// e.g. in bootstrap/app.php or a service provider's boot() method
Request::setTrustedProxies([
    '10.0.2.2',
]);
This works as long as the router correctly sets the X-Forwarded-For family of headers.
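On newer Laravel versions (5.5+), the same idea is expressed via the App\Http\Middleware\TrustProxies middleware instead; setting its $proxies property should have the equivalent effect:

// app/Http/Middleware/TrustProxies.php
protected $proxies = [
    '10.0.2.2',
];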

Related

I want to access Zammad (which runs on nginx) from the mobile app, but I get the standard "Welcome to nginx" page?

Like the title says, I need to configure my server so that it is reachable via its IP. I'm just asking where in the config I can redirect/reroute it.
Thanks
So I found out that in the nginx config you have to add the IP and a DNS entry as server_name values, and it works :)
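For anyone landing here later, a sketch of what that looks like (the IP, hostname, and Zammad app port are placeholders/assumptions):

server {
    listen 80;
    # answer for the bare IP as well as the DNS name
    server_name 203.0.113.10 zammad.example.com;

    location / {
        # forward to the Zammad application server
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}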

Netlify and ngrok linking

I have a front-end deployed on Netlify and a back-end running on localhost, exposed using ngrok.
Is it possible to link them so that when I click on the Netlify link, it would send request to my localhost server exposed from ngrok ?
Netlify can proxy to a dynamic backend; that is an intended use case. The problem we'll have is using "localhost" - Netlify needs a valid hostname to connect to. So, if your ngrok is exposed (not firewalled) at some public IP, you can put that into your redirects configuration:
/backend-stuff-in-this-path/* https://1.2.3.4/:splat 200!
will send all requests to the path /backend-stuff-in-this-path/ANYTHING to the server at 1.2.3.4/ANYTHING
This may not be incredibly useful, since your machine will change IP addresses sometimes, one presumes; but if you were using localhost anyway, you weren't planning to put it in production quite yet. Note that redirects are deploy-specific, so you do need to redeploy to change the location if your IP changes.
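A hostname works just as well as an IP in that rule, and ngrok assigns each tunnel one; a sketch of a _redirects file (the ngrok subdomain is hypothetical, and on the free tier it changes per tunnel session):

# _redirects
/backend-stuff-in-this-path/*  https://abc12345.ngrok.io/:splat  200!

The 200! status makes Netlify proxy rather than redirect, so the browser never sees the ngrok URL, and :splat carries over whatever the * matched.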

Setting up SSL on AWS EC2

I'm trying to set up SSL on my WordPress site.
I have an EC2 instance running WordPress on nginx and Ubuntu, with the database on RDS.
I've launched an application load balancer with listeners on ports 80 and 443 and attached the SSL certificate which I got via ACM. I've set my targets to point to the EC2 instance I am using.
At this point the how-to guides and information stop. Apparently that's all there is to it and it should now all be working. However, it's not: I'm getting connection refused errors when I add https to my site's URL.
When I put my URL into https://www.sslchecker.com/sslchecker I'm told that no certificates are found.
So clearly I need to do something more to get this working. Can anyone point me to the next step?
Using the ELB and ACM is the way to go here. It sounds like you might be using the wrong type of ELB, though. You mentioned an application load balancer; you should use a classic load balancer. Also make sure your security groups are set up correctly to allow your ELB to talk to the EC2 instance.
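For the security-group piece, the usual pattern is to allow the load balancer's security group as a source on the instance's group; a sketch with hypothetical group IDs:

# allow the ELB's security group to reach the instance on port 80
aws ec2 authorize-security-group-ingress \
  --group-id sg-0instanceexample \
  --protocol tcp --port 80 \
  --source-group sg-0elbexample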
You didn't mention Route 53, but I assume you have the DNS entry set up to point at the ELB as well.
Share more and I will help more. Good luck.

Looking up a container's address via its hostname dynamically in Nginx

I'm currently trying to run two containers on a single host, one being an application (Ruby on Rails) and the other Nginx as a reverse proxy and cache. The app is running on TCP port 80. What I want to be able to do is bring down my application container, remove it and then bring it up again without having to restart nginx. The problem is that Nginx only seems to look up the IP of the container once, so if it goes down and comes back up at a different address, Nginx just complains that there's nothing there.
I've tried a few things:
Using resolver 127.0.0.11 valid=5 to use Docker's DNS
Using an upstream block
Using a variable to try to get nginx to resolve at runtime
I'm not sure where else to look, but none of these options works if the application is brought up on a different IP address. Is there something I'm missing that makes this impossible?
Thanks.
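(For reference, combining the first and third items is what normally makes nginx re-resolve at request time; a sketch, where the container name app is an assumption. Note that a static upstream block defeats this, because upstream addresses are resolved once at startup.)

server {
    listen 80;
    # Docker's embedded DNS, re-checked every 5 seconds
    resolver 127.0.0.11 valid=5s;

    location / {
        # a variable in proxy_pass defers DNS resolution to request time
        set $backend http://app:80;
        proxy_pass $backend;
    }
}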
Ended up reading through the Twelve-Factor App, which inspired me to remove the Nginx proxying to the Rails upstream altogether and instead use Nginx as a proxy cache whose upstream is the external DNS name.

Configuring BIND for Ubuntu Web Server

I am looking for some assistance configuring BIND to host a DNS server on my web server.
I recently acquired a dedicated server running Ubuntu 14.04 LTS, and I already have Nginx, PHP-FPM, and MariaDB installed and working perfectly. My knowledge of Postfix & Dovecot is slim, so I followed this guide: A Mailserver on Ubuntu 14.04: Postfix, Dovecot, MySQL. The good news is that I've got mail coming in and going out as expected, but I've come across another issue: some ISPs and providers are rejecting the mail since there is no PTR record.
So, I'm assuming I need to install and configure BIND to set up DNS and a PTR record so that my mail will reach its destinations. I've tried Google for tutorials, but none of them seems clear for my purpose.
Installing a control panel, or one of those all-in-one scripts is out of the question since I already have the web server configured. Another issue is that some of them don't work with Nginx or use a different configuration of PHP. Plus, I want to learn how to do this on my own.
You don't have to install BIND. Whoever has reverse DNS authority for your IP block will typically create the reverse name for you. Just request a reverse pointer (PTR) record with the mail domain name for your IP.
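Once the provider has added it, you can verify the PTR record with dig (the IP and hostname here are placeholders):

dig -x 203.0.113.10 +short
# should print your mail host's name, e.g. mail.example.com.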
