Google Compute Engine: how to make requests from outside? - nginx

I'm completely new to Google Cloud and Google Compute Engine. I have a VM instance set up in GCE, and would like to make requests to it.
Inside the instance, I have a basic Nginx server running (which I admittedly also have only a very limited understanding of), with the following configuration:
events {}  # required by nginx, even if empty

http {
    server {
        listen 80 default_server;
        return 200 hello;
    }
}
If I access it from inside the instance through the Google Cloud console, for instance with curl, it does work, but I don't know how to access it from outside.
In the list of Compute Engine VM instances, the instance has an external IP associated with it (let's say, for example, 35.204.94.110), but requests to http://35.204.94.110:80 get no response.
How can I access the instance from the outside?

I would make sure that HTTP access is enabled on the VM instance. When creating a VM instance, there are two checkboxes:
Allow HTTP traffic
Allow HTTPS traffic
If the “Allow HTTP traffic” box is unchecked, that would explain the behavior. Go into your console, click on the affected VM instance, and scroll down until you see whether the “Allow HTTP traffic” box is checked. If not, click Edit, check the box to allow HTTP traffic, and save the changes. You should now be able to load the page externally.
I tested this myself by just installing and enabling nginx on a VM instance. If I disable “Allow HTTP traffic” the page does not load. When it is enabled, I am able to load the default web page of nginx successfully.
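If you prefer the command line, a rough equivalent exists with gcloud: checking that box simply attaches the http-server network tag to the instance, which the built-in default-allow-http firewall rule targets. A sketch (INSTANCE_NAME and ZONE are placeholders for your own values):

gcloud compute instances add-tags INSTANCE_NAME --zone=ZONE --tags=http-server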

Looks like you don't have HTTP access enabled. Check the firewall rules, and add the http-server network tag (the one targeted by the default-allow-http firewall rule) to your GCE instance.
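If your project doesn't have the default-allow-http rule at all, you can create an equivalent one yourself with gcloud; a sketch (the rule name and tag just follow the usual convention, adjust as needed):

gcloud compute firewall-rules create default-allow-http \
    --allow=tcp:80 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=http-server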

Related

JupyterLab does not work when redirected using TLS

I have a local JupyterLab instance running on the mint-2 computer with the command jupyter lab --ip "*", and it listens on port 8888. I can access it just fine via the URL mint-2:8888.
I also have a server instance, ubuntu-2. I reverse SSH tunnel from mint-2:8888 to ubuntu-2:8888, meaning I can access it just fine from my mint-1 laptop via the URL ubuntu-2:8888 anywhere in the world.
However, it is not encrypted with TLS, so I wanted to improve this. On ubuntu-2 I have an nginx load balancer container that terminates TLS and forwards the decrypted HTTP traffic to other locations. I have set up jupyter.ubuntu-2:443 so that it proxies to ubuntu-2:8888, which in turn tunnels to mint-2:8888. This version initially seems to open up just fine, and I can navigate directories. However, whenever I want to launch a new terminal or notebook instance, or even create new directories, it doesn't work. Here's the network log when I save a modified notebook:
My question is: why won't these requests go through, considering I can still interact with the interface just fine everywhere else, but not when creating folders/notebooks/terminals? I was thinking that JupyterLab might be using UDP and considered passing UDP traffic through nginx, but that doesn't really make sense, as this is clearly a PUT request. Any other help regarding where to find more logs, or speculation on what might have gone wrong, is much appreciated.
I dug into it a little more and managed to figure it out.
JupyterLab has a CORS policy that doesn't allow requests via ubuntu-2. I then added c.NotebookApp.allow_origin = "*" to JupyterLab's config at ~/.jupyter/jupyter_lab_config.py, as mentioned here.
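For reference, the relevant lines in that config file might look like this (the specific origin in the commented line is an assumption based on the hostnames above; allowing only your real origin is safer than "*"):

# ~/.jupyter/jupyter_lab_config.py
c.NotebookApp.allow_origin = "*"
# safer, if the exact origin is known (assumed name here):
# c.NotebookApp.allow_origin = "https://jupyter.ubuntu-2"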
Then I found out that everything was still not functional, because Jupyter requires both HTTP and WebSocket protocols, and my server setup at that point only allowed HTTP traffic. So I needed to enable generic TCP traffic on ubuntu-2's HAProxy load balancer. Because I have multiple virtual hosts on the server, I needed to distinguish between them, so I used Server Name Indication (SNI), the server name included in the TLS handshake.
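A minimal sketch of what SNI-based TCP routing in HAProxy can look like, assuming jupyter.ubuntu-2 is the virtual host name; the backend names and addresses are placeholders for the actual tunnel endpoints, not the poster's exact setup:

frontend tls_in
    mode tcp
    bind *:443
    # wait for the TLS ClientHello so the SNI field can be inspected
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend jupyter_backend if { req_ssl_sni -i jupyter.ubuntu-2 }
    default_backend other_vhosts

backend jupyter_backend
    mode tcp
    # the reverse SSH tunnel endpoint on ubuntu-2
    server jupyter 127.0.0.1:8888

backend other_vhosts
    mode tcp
    server fallback 127.0.0.1:8443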

Pass mixed content with reverse proxy

I have a website where users create their own apps. But I can't embed these apps on my website via an iframe, because my website has an SSL certificate and I get this error:
Mixed Content: The page at 'https://domain' was loaded over HTTPS, but requested an insecure resource 'http://IP_ADDR'. This request has been blocked; the content must be served over HTTPS.
My workflow is like that:
Click create button
Deploy EC2 instance from AWS
Get the EC2 IP address from AWS
Embed this app via iframe
I want to embed these apps into my website by IP, but the IP addresses are dynamic; anyone can create a machine at any time.
What is the best practice solution for this issue?
The best practice solution (and also the only one I can think of, IMHO) would be to use proper HTTPS for the iframe content as well. You'd need a way to create DNS records automatically, though (you can do so with AWS Route 53). Regarding SSL, you could use a wildcard certificate (e.g. from Let's Encrypt). Nginx can be configured to proxy_pass by DNS name as opposed to IP (see the sketch after the list below). Then your workflow would become this:
Click create button
Deploy EC2 instance from AWS
Get the EC2 IP address from AWS
Create DNS record
Embed this app via iframe
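A rough nginx sketch of that last proxying piece, assuming a hypothetical wildcard domain *.apps.example.com served with a wildcard certificate, and per-app DNS records like app123.internal.example.com created in step 4 (every name, path, and address here is a placeholder):

server {
    listen 443 ssl;
    # capture the app name from the requested host, e.g. app123.apps.example.com
    server_name ~^(?<app>[^.]+)\.apps\.example\.com$;

    # wildcard certificate covering *.apps.example.com
    ssl_certificate     /etc/letsencrypt/live/apps.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/apps.example.com/privkey.pem;

    location / {
        # a resolver is required because proxy_pass uses a variable
        resolver 10.0.0.2;
        # proxy to the per-app DNS record created after the EC2 deploy
        proxy_pass http://$app.internal.example.com;
        proxy_set_header Host $host;
    }
}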

Building Proxy Site with Nginx and Rotating Proxy Service

I'm looking to build an application similar to https://www.proxysite.com/ but am not sure of the best architecture.
I'm looking to have a data flow like this:
User Web Browser -> myproxysite.com -> Nginx Proxy Server (somehow rotating IP for each client session) -> Targetsite.com
Then the user would need to maintain a full session on Targetsite.com as a logged-in user.
In this example, targetsite.com is always the same site and is pre-determined. The challenge we are facing is that targetsite.com is blocking our users based on IP, many of whom are accessing it from the same office network.
So my questions are:
Does this seem correct?
Is there any way for me to configure nginx with a rotating proxy service like Luminati? Or do I need to add an API software layer to handle the actual IP changes?
Any guidance on this one would be greatly appreciated!
While I can't help you with your application, I do want to suggest an alternative. You mentioned an office, so it sounds like the users who will use the proxy are workers.
Luminati (now BrightData) has a proxy manager which you can host on any server. The proxy manager allows you to create ports (e.g. port 24000) and configure each one with whatever proxy you want (it doesn't have to be BrightData's proxy). It has a ton of different parameters you can set for each proxy (including IP rotation), and each port can be configured with its own unique setup.
Then you simply go to your user's PC, open the browser proxy settings, type in the IP address of the server the proxy manager is running on and the specific port you configured, and voila: you have central control of the proxies, and your user's browser is proxied.
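A quick way to sanity-check such a port before touching any browser settings (the host and port here stand in for whatever you configured in the manager):

curl -x http://PROXY_MANAGER_HOST:24000 -v https://targetsite.com/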
A big benefit of this is that the logs in the proxy manager show all activity on each port you set up, so you can monitor traffic and success rates right there.
Proxy manager: https://prnt.sc/13uyjgj

When I run my daemon, the service is an http proxy instead of http

I am currently running a service with systemctl, and it is running as an HTTP proxy, not normal HTTP. Is this something that Google does? I am using port 8080 and I can't connect to it via HTTP. My daemon is using port 8080 with the service type http-proxy (I am seeing this with the command nmap -sV -sC -p 8080 35.208.25.61 -vvvv -Pn). Instead, I want the daemon I'm running (wings.service) to use HTTP, so it can use that type of connection to connect to my panel.
The panel is part of a piece of software that goes along with the daemon; it's called Pterodactyl. Anyway, I have tried everything I could find, and I think the problem I am describing here is what causes the dysfunction of my panel. I might just have to move to a different service to host my Discord bots.
Let me know if there's anything I can do to fix this.
As far as I can understand, you are unable to access the panel via its web URL.
The Pterodactyl web server can be installed using the NGINX or Apache web servers, and both listen on port 80 by default according to the Pterodactyl web server installation guide, so you must enable HTTP port 80 traffic on your Compute Engine VM instance.
The default firewall rules on GCP do not allow HTTP or HTTPS connections to your instances. However, it is fairly simple to add a rule that does allow them by following these steps:
1. Go to the VM instances page.
2. Click the name of the desired instance.
3. Click the Edit button at the top of the page.
4. Scroll down to the Firewalls section.
5. Check the Allow HTTP or Allow HTTPS options under your desired VPC network.
6. Click Save.
Note: The Pterodactyl panel and daemon installations are not the same for each operating system. If, after checking the VPC firewall rules in the VM settings and the status of the web server on the instance (NGINX or Apache), you still cannot access your panel, please provide a step-by-step list of all commands you followed to complete the installation, including the OS version you used.
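Independent of the firewall, it may also help to confirm on the instance itself what is actually listening and whether it answers plain HTTP locally; something along these lines (standard Linux tooling, ports taken from the question):

sudo ss -tlnp | grep -E ':(80|8080)'   # which processes listen on 80/8080
curl -I http://localhost:8080/         # does the daemon answer plain HTTP locally?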

Docker on Google Cloud

I have a CentOS VM instance in Google Cloud, and I have installed Docker on it. I have created a container with a web interface, but I am not able to access it when I try from outside (in another browser tab). What do I need to do to access it from outside of the cloud?
There are several leaps between your browser and your containerised web interface.
The first will be from the IP through the GCP firewall into the instance. You might be getting stuck here: when you created the instance, did you select "Allow HTTP traffic" and "Allow HTTPS traffic" in the Firewall section?
If you click through to your instance details in the GCP dashboard, you can see under Firewalls whether this is selected. Also, if you look under Network, you can see which network profile your instance is using; you can click the listed network to check whether it is set up to allow the traffic you are trying to send through.
If this all looks right and traffic is getting to the instance but not to the web interface, it could be that the port from Docker is not mapped to a port on the host. When you started the container, did you use the -p option to map the ports?
If this is also right, then it could be that the Docker image is not exposing its port internally. In the Dockerfile used to create the image for the container, is there a line starting with EXPOSE, or does it build FROM an image that does?
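For illustration, a minimal sketch of both pieces (the image name, app port, and host port are placeholders):

# In the Dockerfile: declare the port the web app listens on inside the container
EXPOSE 8080

# When starting the container: map host port 80 to container port 8080
docker run -d -p 80:8080 my-web-image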
There are more possible points of failure in this chain but I have tried to list some likely answers. If none of this helps then let me know in the comments and we can try and debug the issue.
