I have created a Windows VM in GCloud and have updated the proxy settings to ensure all calls go through the proxy.
There are two cases:
I set the proxy setting to point at the proxy server. This ensures that all calls made through any browser go through the proxy.
I have set up the http_proxy and https_proxy environment variables; with this, any curl commands I run from Command Prompt or Bash also go via the proxy.
Now I have a case where a few calls need to bypass the proxy and not go through it.
This is only required by some desktop apps on my VM, not for browser calls.
CASE 1: From some research, to bypass browser calls there is a .pac file to which we can add the domains to bypass.
CASE 2: For non-browser calls, the only way I could find is to add a no_proxy environment variable.
Following are my questions related to CASE 2.
Question 1: When I set the no_proxy env variable, Git Bash does not seem to respect it unless I set it explicitly in Git Bash before making any call. Is this the right way to do it, or am I missing something?
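(Git Bash should also inherit a no_proxy variable defined in the Windows user or system environment for newly started shells; otherwise, a minimal sketch of making it stick per shell, where the host list is only an example based on the setup described further down:)

# added to ~/.bashrc so every new Git Bash session exports it
export no_proxy="metadata.google.internal,localhost,127.0.0.1"
export NO_PROXY="$no_proxy"   # some tools only read the uppercase variant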
Question 2: Google makes a few internal calls from the VM to fetch metadata, and those calls are getting proxied. Even though I update the no_proxy env variable, it is not respected and the calls still go through the proxy. Where should I set this up so that these internal VM calls bypass the proxy?
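(One thing to keep in mind: environment variables only affect processes started after the variable was set, and only tools that actually read no_proxy, such as curl and git; Windows apps that use the system proxy settings follow the proxy exception list or the .pac file instead. A sketch of setting the variable machine-wide from an elevated Command Prompt, where the host list, including the metadata server's conventional IP 169.254.169.254, is an assumption to adapt:)

REM run from an elevated Command Prompt; only newly started processes see the change
setx no_proxy "metadata.google.internal,169.254.169.254,localhost,127.0.0.1" /M
setx NO_PROXY "metadata.google.internal,169.254.169.254,localhost,127.0.0.1" /M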
Following is my setup:
The VM is on GCP with a Windows image.
The proxy server is Squid, set up on a static public IP.
The applications are calling some internal APIs.
The VM calls the http://metadata.google.internal API.
Any help on this would be highly appreciated.
TIA
I have a local JupyterLab instance running on the mint-2 computer with the command jupyter lab --ip "*", and it listens on port 8888. I can access it just fine via the URL mint-2:8888.
I also have a server instance ubuntu-2. I reverse ssh tunnel from mint-2:8888 to ubuntu-2:8888, meaning I can access it on my mint-1 laptop just fine via the URL ubuntu-2:8888 anywhere in the world.
However, it is not encrypted with TLS, so I wanted to improve this. On ubuntu-2 I have an nginx load balancer container that terminates HTTPS traffic and forwards the decrypted HTTP traffic to other locations. I have set up jupyter.ubuntu-2:443 so that it forwards to ubuntu-2:8888, which in turn tunnels to mint-2:8888. This version initially seems to open up just fine, and I can navigate directories. However, whenever I want to launch a new terminal or notebook instance, or even create new directories, it doesn't work. Here's the network log when I save a modified notebook:
My question is: why won't the requests go through, considering I can still interact with the interface just fine everywhere else, but not when creating folders/notebooks/terminals? I am thinking that JupyterLab might be using UDP and I'm considering passing UDP traffic through nginx, but this doesn't really make sense, as this is clearly a PUT request. Any other help regarding where to find more logs, or speculation on what might have gone wrong, is much appreciated.
I dug into it a little more and managed to figure it out.
JupyterLab has a CORS policy that doesn't allow requests coming from the ubuntu-2 origin. I then added c.NotebookApp.allow_origin = "*" to JupyterLab's config at ~/.jupyter/jupyter_lab_config.py, as mentioned here.
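(For reference, a minimal sketch of that change, assuming the default config path; newer Jupyter Server releases use c.ServerApp.allow_origin instead of c.NotebookApp.allow_origin:)

$ cat >> ~/.jupyter/jupyter_lab_config.py <<'EOF'
# allow cross-origin requests; a specific origin such as "https://jupyter.ubuntu-2" is stricter than "*"
c.NotebookApp.allow_origin = "*"
EOF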
Then I found out that everything was still not functional, because Jupyter requires both the HTTP and WebSocket protocols, and my server setup only allowed HTTP traffic. So I needed to enable generic TCP traffic on ubuntu-2's HAProxy load balancer. Because I have multiple virtual hosts on the server, I need to distinguish between them, so I used Server Name Indication (SNI), the server name included in the TLS handshake.
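(A rough sketch of what such SNI-based TCP routing can look like in haproxy.cfg; the frontend/backend names and the backend address are assumptions, and in this mode HAProxy only forwards raw bytes, so whatever backend is selected has to terminate TLS and handle the WebSocket upgrade itself:)

# route incoming TLS connections by SNI without terminating them here
frontend tls-in
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend bk_jupyter if { req_ssl_sni -i jupyter.ubuntu-2 }

# placeholder backend address; the service behind it must terminate TLS itself
backend bk_jupyter
    mode tcp
    server jupyter 127.0.0.1:8443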
My application currently supports both HTTP and HTTPS, and I would like to force the use of the latter when someone tries to access the former (which also happens to be the default). However, I am a bit unsure of how to set this up given how I've deployed things.
To give a higher-level perspective, I have 3 nodes running on Heroku corresponding to:
A Next.js frontend app
An Express backend server
An nginx reverse proxy that acts as the entry point of the system and routes requests to either the frontend or the backend.
How would one go about forcing the use of HTTPS? Is that configured at the proxy level? At the frontend level? Or maybe at the DNS config level?
I think that's usually done at the proxy level, but I'm not sure; plus, the fact that I'm using the SSL certificate that Heroku provides out of the box makes things even more confusing.
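(For context, the proxy-level approach I've seen described is a sketch along these lines, assuming Heroku's router terminates TLS in front of the nginx dyno and sets the X-Forwarded-Proto header on the forwarded request:)

# inside the existing nginx server block for the proxy dyno
if ($http_x_forwarded_proto != "https") {
    return 301 https://$host$request_uri;
}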
Any suggestions?
I have deployed my Meteor Application on my local machine using:
https://guide.meteor.com/deployment.html#custom-deployment
Now during the process I used:
$ export ROOT_URL='http://192.168.100.2:9000'
Now my app is not accessible on http://192.168.100.2:9000; instead it is accessible on http://192.168.100.2:46223, so every time I run node main.js, it chooses a random port for my application.
How can I specify a port of my own choice here?
You should also supply the PORT environment variable to instruct the app which port to listen on, as it is not inferred from the ROOT_URL. It is also not necessarily the same, as apps may have a reverse proxy in front of them.
See the official documentation for more environment variables.
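For example, a sketch using the port from the question, with both variables exported in the same shell that starts the app:

$ export ROOT_URL='http://192.168.100.2:9000'
$ export PORT=9000
$ node main.js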
I'm trying to understand how a web client and a server connect and how those connections are handled in dev mode versus production mode.
The part that I am having trouble wrapping my mind around is how making a request to localhost from the client differs from making a request to a server that is in production (hosted on Heroku, for example).
I know how the client makes a request to the API, but how does the client know whether to make the request to localhost:3000 in dev mode or to a different URL in production mode?
My idea of production mode is that the server is hosted (by Heroku for example) and therefore can no longer be queried at localhost.
Any insight greatly appreciated.
This is almost always handled through configuration files (e.g., *.properties) that are different for each environment. The difference is usually handled in the build (selecting different properties files for the build artifact) or by passing arguments or environment variables to the application when it is started.
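As a sketch of the environment-variable flavor of this, where the variable name API_URL and the production hostname are made-up placeholders:

# development: point the client at the locally running server
$ export API_URL='http://localhost:3000'

# production: set the same variable on the host instead (Heroku example)
$ heroku config:set API_URL='https://my-api.example.com'

The application then reads API_URL at build or start time instead of hard-coding a host.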
I have the following situation.
The webapp in my company is deployed to several environments before going live. Every testing environment is called qa-X and has a different IP address. What I would like to do is specify, in the Jenkins job "test app in qa-X", the app's IP for the X environment, so that my tests start running knowing only the app's URL.
Jenkins itself is outside the qa-X environments.
I have been looking around for solutions, but all of them break the other qa-X tests; for instance, changing /etc/hosts or changing the DNS server. What would be great is if I could specify only the IP in that job as a config parameter, and have that definition stay local to the job.
Any thoughts/ideas?
If I'm understanding your query correctly, you should look into creating a parameterized build, which would expose an environment variable with the desired server IP that your test script could consume.
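As a sketch, assuming a string parameter named APP_IP is added to the job (the script name and flag below are placeholders), an "Execute shell" build step could consume it like this:

# Jenkins exposes build parameters to build steps as environment variables
./run_tests.sh --base-url "http://${APP_IP}/"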