How to redirect certain outgoing requests to my local server?

We have a microservice setup with multiple small services.
I want to redirect requests for a specific service to my local server. Of course, I could change BASE_URL in my webapp, but I'm looking for a way that doesn't require changing the codebase or environment variables.
So, is there a way to do that at the macOS level?
Will mitmproxy do this job?

Related

What will happen if an SSL-configured Nginx reverse proxy passes to a web server without SSL?

I use Nginx to manage a lot of my web services. They listen on different ports, but all are accessed through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server is HTTPS now -- but those original servers are still using HTTP.
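As a sketch, the setup described looks roughly like this (certificate paths and backend ports are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name my-domain;

    ssl_certificate     /etc/nginx/certs/my-domain.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/my-domain.key;

    # TLS terminates here; the hops to the upstreams are plain HTTP.
    location /api/ {
        proxy_pass http://127.0.0.1:8080/;  # placeholder port
    }
    location /video/ {
        proxy_pass http://127.0.0.1:8081/;  # placeholder port
    }
}
```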
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of serving sites over HTTPS is to prevent the data transmitted between two endpoints from being intercepted by outside parties, either to modify it, as in a man-in-the-middle attack, or to steal it and use it for bad purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services run just fine over plain HTTP on private networks. However, there are a couple of points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Say your web server runs on port 8080 and the NGINX reverse proxy forwards certain traffic to localhost:8080 -- can the site still be accessed at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on. You can always unblock them later if you add a service that requires an open port.
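A sketch of that default-deny approach on an Ubuntu-style host with ufw (adapt to your firewall of choice):

```sh
# Deny everything inbound by default, then open only the proxy's ports.
sudo ufw default deny incoming
sudo ufw allow 80/tcp    # keep 80 open only if you redirect it to HTTPS
sudo ufw allow 443/tcp
sudo ufw enable
```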
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. While it's within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if authentication is required to access localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in. Then that service can capture everything between the two endpoints.
One strategy to mitigate the danger of such snooping is to use virtualisation to separate a single server into logical servers, or to use different hardware for things that are not related. This at least keeps things separate, so the people responsible for application A won't assume that service X belongs to the team running application B. Anything out of place will more likely stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Follow security best practices on the server. For instance, don't run as root; use a non-root user for administrative tasks. For long-lived services, don't run them as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups, assign the users to them, and then adjust file ownership and permissions with chown and chmod so that each service only has access to what it needs and nothing more. As an example, I've often wondered why NGINX needs read access to its logs. In theory it should only need write access to them. If the service were somehow compromised, the worst it could do is write a bunch of garbage to the logs, while an attacker would find their hands tied when it comes to retrieving sensitive information from them.
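A sketch of that idea (the group name and paths are illustrative):

```sh
# Dedicated group for log writers; nginx's www-data user joins it.
sudo groupadd logwriters
sudo usermod -aG logwriters www-data

# root owns the logs; the group may write to them but not read them back.
sudo chown root:logwriters /var/log/nginx/access.log
sudo chmod 620 /var/log/nginx/access.log   # rw- owner, -w- group, --- others
```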
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is a self-signed certificate. The other is a tool called mkcert, which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, shared by the author of mkcert, is that this is recommended for development only, not production. I've yet to find a good solution for localhost in production. I don't think it exists, and in my experience, I've never seen anyone worry about it.
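For development, mkcert usage is short:

```sh
# Create a local CA and add it to the system/browser trust stores,
# then issue a certificate covering localhost.
mkcert -install
mkcert localhost 127.0.0.1 ::1
# Produces ./localhost+2.pem and ./localhost+2-key.pem for the server config.
```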

Deployment of web application delegating to a fixed OpenAM SSO URL

I have the following problem:
I have a web application that delegates SSO login to OpenAM.
I deploy the web application, the database, and OpenAM as Docker containers using docker-compose.
When I want to deploy it, I need to specify the SSO login URL as a deployment property in the format:
http://OPENAM_IP_ADDRESS:PORT/openam/UI/Login
Please note the port number is external to the Docker network the containers live in.
It has to be that way because the browser redirects to the OpenAM URL described above.
This implies I need to know the IP address and port beforehand, which makes the deployment process cumbersome, especially as the number of environments I deploy to grows. Instead of having one deployment properties file, I have one for each environment.
I was told a reverse proxy could solve the problem: I could store a record, e.g. openam.sso.url, and the web application could somehow be registered with the reverse proxy (see the sketch below).
Is there a solution to the problem above?
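For illustration, that reverse-proxy idea might look like this in Nginx, assuming the OpenAM container is named openam on the compose network (the names and ports are hypothetical):

```nginx
# Only the proxy is published outside the Docker network, so the
# browser always sees one stable address, e.g. http://sso.example.com.
server {
    listen 80;
    server_name sso.example.com;  # hypothetical stable name

    location /openam/ {
        # Docker's embedded DNS resolves the compose service name.
        proxy_pass http://openam:8080/openam/;
        proxy_set_header Host $host;
    }
}
```

The deployment property then becomes the same fixed value, e.g. http://sso.example.com/openam/UI/Login, in every environment.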

How to identify an application running on a server using a URL

How can we identify an application running on a server using a URL? What part of the URL identifies the application? Is it the IP address of the server?
If you are attempting to identify what applications a server is running (or serving) based on just the URL, it isn't that simple.
A single server could be running zero, one, or many applications. Additionally, not all applications running on a server will be accessible via HTTP. Therefore, not all applications can be identified via URL.
URLs tend to represent the file/routing structure on a web server. Your best guess at what application is running on a server, based solely on the URL, is the fully qualified domain name (FQDN), which may hint at what the web server is intended for.

Is NLB a good way to keep a website available while deploying new code?

I want to be able to deploy a new version of my asp.net/mvc website without losing client session state or causing any downtime. The way I'm thinking of accomplishing this is by creating a Windows Network Load Balancing server so that clients can reach it via a single URL such as https://mysite.org/. It would then redirect traffic to one of two other sites (A.mysite.org or B.mysite.org). I'll set the NLB's affinity to Single and disable site B, so that all sessions are directed to site A. When I need to deploy a new version of the website, I'll deploy to site B, enable site B, and disable site A. So everybody who was on site A can stay there (using version 1) until they log off, and all new sessions will connect to site B and run version 2. The next time I deploy, I'll do the reverse.
I've never used NLB. Is this appropriate? Is there a simpler, easier way?
How does NLB know when a request from client X already has a session on A or B? I.e., when they log off the website and try to log in again, will the NLB send them to the same site they were on before?
There are quite a few considerations here.
Firstly, rather than juggling the affinity on your NLB, you will probably be better off storing your ASP.NET sessions in a StateServer or SQL-based session store, which lets web clients (or web service clients) access your site without 'sticky' affinity. Once you've set up the StateServer or created the SQL session DB, it should be a simple change to your app's web.config, as sketched below.
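A minimal sketch of that web.config change for StateServer mode (the host name is a placeholder; 42424 is the state service's conventional port):

```xml
<!-- Web.config: move session state out of process so any node can serve any request -->
<system.web>
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=statesrv:42424"
                timeout="20" />
</system.web>
```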
NLB itself works great for keeping your site up while you upgrade it. You will typically drainstop a server in the cluster, reinstall your app on it, test it, and then bring it back into the NLB cluster before repeating the process with the next server.
AFAIK, NLB Single affinity works at the TCP/IP level and does not interrogate ASP.NET sessions. Basically, any connection from the same client IP to the same server IP:port combination will be directed to the same server. Also AFAIK, both servers will share the NLB IP (in addition to any existing IPs they have).
Since your site uses SSL, it seems that unless you have affinity, the SSL session keys will need to be renegotiated on each request, which could have performance implications.

Local proxy question

Is it possible to install a proxy locally (on Windows XP) and redirect, for example, all traffic from "google.com" to "yahoo.com"?
If I call http://www.google.com/test, it should redirect to http://www.yahoo.com/test and return the response from Yahoo.
Long story short: I have an old program that uses a URL (for a web service), but the value of the URL is compiled into the app.
For now it connects to production, but I'd like to run some tests in QA, so I just want to redirect "http://prod.webservice.website.com" to "http://qa.webservice.website.com" without having to recompile the old application.
Maybe Fiddler will do the job. It's a local proxy that is capable of transforming requests.
You can set up (or edit) a hosts file, if it's just your computer. Would that suffice?
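A sketch of the hosts-file approach, assuming the QA server's address is 10.0.0.5 (hypothetical):

```
# C:\WINDOWS\system32\drivers\etc\hosts
# Point the compiled-in hostname at the QA machine instead of production.
10.0.0.5    prod.webservice.website.com
```

Note this only swaps the IP behind the hostname; the path and Host header stay the same, so the QA server must answer for that name.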
