Do client services need ports?

Recently, I was chatting with a much more experienced engineer. We had a service on the server that only initiated requests to a partner. I suggested that this service required us to configure a port, and he turned down my suggestion. I believe he said something along the lines of "Since we are not hosting a service that is accessed by anyone, but rather accessing a partner's service, we don't require a port." It got me thinking: given that we have so many services on the same server, how does the server know which service a given response is for?

Broadly, the server is really acting as a client here, and the ports used for outbound connections are assigned dynamically by the networking stack. Under normal conditions, the port chosen:
is numbered above 1023 (low ports are reserved for root processes)
is not already in use
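
A quick way to see this in action (a minimal sketch; the host and port are arbitrary examples):

    import socket

    # Open an outbound connection without specifying a local port.
    with socket.create_connection(("example.com", 80)) as s:
        local_ip, local_port = s.getsockname()
        # The OS picked an ephemeral local port for us (on Linux,
        # typically from ip_local_port_range, e.g. 32768-60999).
        print("local :", local_ip, local_port)
        print("remote:", s.getpeername())

Responses from the remote service are addressed to that ephemeral port, which is how the networking stack routes each reply back to the right local process.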

Related

What will happen if an SSL-configured Nginx reverse proxy passes to a web server without SSL?

I use Nginx to manage a lot of my web services. They listen on different ports, but all are accessed through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server is HTTPS now -- but those original servers are still using HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of making sites HTTPS is to prevent the data transmitted between two endpoints from being intercepted by outside parties, whether to modify it, as in a man-in-the-middle attack, or to steal it and use it for bad purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services run over plain HTTP on private networks just fine. However, there are a couple of points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Let's say your web server runs on port 8080, and the NGINX reverse proxy forwards certain traffic to localhost:8080; can the site still be accessed at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on. You can always unblock them later if you add a service that requires that port to be open.
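
As a rough sanity check (a sketch, not a hardening tool; example.com stands in for your public hostname, and the ports are placeholders), you can probe from a machine outside the network to confirm that only the intended ports accept connections:

    import socket

    def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Ideally only 443 answers; 8080 in particular should be blocked.
    for port in (80, 443, 8080):
        state = "open" if is_reachable("example.com", port) else "blocked"
        print(port, state)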
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. While it's within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if authentication is required to access localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in. That service could then capture everything between the two endpoints.
One strategy to mitigate the dangers created by spyware of this kind is to either use virtualisation to split a single server into logical servers, or use separate hardware for things that are not related. This at least keeps things separate, so the people responsible for application A won't assume that an unfamiliar service X is something the team running application B is using. Anything out of place will more likely stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Follow security best practices on the server. For instance, don't run as root: use a non-root user for administrative tasks, and don't run long-lived services as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups, assign the users to them, and then modify file ownership and permissions, using chown and chmod, to ensure that those services have access only to what they need and nothing more. As an example, I've often wondered why NGINX needs read access to its logs. In theory, it should only need write access to them. If the service were somehow compromised, the worst it could do is write a bunch of garbage to the logs, and an attacker might find their hands tied when it comes to retrieving sensitive information from them.
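
For long-lived services this is often paired with privilege dropping. A minimal sketch (assuming a Unix-like system and an existing unprivileged user such as www-data):

    import os
    import pwd

    def drop_privileges(username: str = "www-data") -> None:
        """Switch the current process from root to an unprivileged user."""
        if os.getuid() != 0:
            return  # already unprivileged, nothing to do
        user = pwd.getpwnam(username)
        os.setgroups([])        # drop supplementary groups
        os.setgid(user.pw_gid)  # set group first, while still root
        os.setuid(user.pw_uid)  # then the user; this cannot be undone
        os.umask(0o077)         # files we create are private by default

The service claims any privileged resources (such as binding port 80) first, then calls this, so a compromise only yields the unprivileged account.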
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is with a self-signed certificate. The other uses a tool called mkcert, which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even from the author of mkcert, is that this is recommended for development purposes only, not production. I've yet to find a good solution for HTTPS on localhost in production. I don't think it exists, and in my experience, I've never seen anyone worry about it.
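
For completeness, here is what wiring such a certificate into a local development server looks like. A minimal sketch (the .pem filenames are placeholders for whatever mkcert or openssl produced; adjust to match):

    import http.server
    import ssl

    server = http.server.HTTPServer(
        ("localhost", 8443), http.server.SimpleHTTPRequestHandler
    )
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="localhost.pem", keyfile="localhost-key.pem")
    server.socket = context.wrap_socket(server.socket, server_side=True)
    print("Serving on https://localhost:8443")
    server.serve_forever()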

Is there a network communication protocol whose use won't require an app's user to grant permissions in Windows Firewall?

I want my client program to communicate with a server without making the user add an exception to Windows Firewall in elevated mode. Is there a way to do this? HTTP? For instance, uTorrent and Google Chrome can both be installed by a regular (non-admin) user, and both programs use the network quite extensively -- how do they do this? Am I missing something about how the firewall and/or ports work?
Yes, there is a way. Assuming that your client program is the one running on the user's machine, and that your client program is the one initiating communication with the server, it generally would not require the end user to open any exceptions in the Windows Firewall, as long as you stick to using HTTP over port 80. Port 80 is generally open for outbound traffic (initiated by the client), and you could therefore build your communication (and, if needed, your own protocol) on top of the HTTP protocol. This is the typical scenario for web servers and web browsers (clients).
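
To make that concrete, an ordinary client-initiated request like the sketch below needs no inbound firewall exception, because the reply rides back over the connection the client opened (the URL is a placeholder):

    import urllib.request

    # Outbound request on port 80: the firewall lets the response
    # back in because it belongs to a connection we initiated.
    with urllib.request.urlopen("http://example.com/") as resp:
        print(resp.status, len(resp.read()), "bytes")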
If you need the server to initiate the communication, it becomes more complex, and a lot of different approaches could be used. The choice of communication channel and structure should depend on factors such as whether you want to communicate with one client at a time or many (broadcast/multicast), whether you need encryption, your requirements for speed (throughput and latency), what kind of system you are trying to build, and so on.
Many web applications achieve the effect of server-initiated communication by using techniques such as polling, long polling, Comet, WebSockets, and so on. These work through HTTP on top of TCP/IP on port 80. Other systems employ subscription mechanisms that notify them through a third party when something new has happened. If you need server-initiated communication, please let me know and I will try to give a better explanation of the options.
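
As an illustration of the long-poll flavour (a sketch; the endpoint URL and timeout are assumptions), the client simply holds a request open and reconnects as soon as it completes, so every connection is still client-initiated:

    import time
    import urllib.request

    def long_poll(url: str) -> None:
        """Hold a request open; the server replies only when it has news."""
        while True:
            try:
                # The server delays its response until an event occurs
                # or the timeout is reached.
                with urllib.request.urlopen(url, timeout=60) as resp:
                    print("event:", resp.read()[:80])
            except OSError:
                pass          # timed out or transient error; poll again
            time.sleep(1)     # brief back-off between polls

    long_poll("http://example.com/events")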

Restricting access to IP addresses on a web application

I'm working on an application that will be used from different locations, so it has to be on a network; and since the workstations that will use the application are quite far apart, it will be on the internet, definitely on a dedicated Windows Server.
I have security concerns, because it is the kind of application that black-hat hackers and crackers would like to abuse for their own ends.
So I'm thinking: since I am the I.T. head of the company, I can procure a static IP address for each of the workstations that will use the application, and then compile a whitelist of IP addresses. If a request is not coming from an IP address on the whitelist, the request will be denied. Does this make sense?
I could also use more security tips on securing the server and the application.
It's an ASP.NET MVC application.
Does this make sense?
At a network level? Somewhat. At an application level? Probably not.
IP filtering makes sense at the network level: setting firewall rules to dictate which IPs are allowed to access certain ports on a server is both sensible and common.
Trying to do the same thing at the application layer is error prone and problematic. For instance, if your application is behind a load balancer, the IP address your application sees may well be the one belonging to the load balancer, not the client who originated the request.
As an additional note, just because a request comes from a trusted IP doesn't mean you don't have to be careful. Your "trusted" client systems could be compromised, or an attacker could be using a CSRF attack.
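
To make the load-balancer pitfall concrete, here is the idea in Python/WSGI terms (purely illustrative, since the question's app is ASP.NET MVC; the allow list is made up):

    ALLOWED = {"203.0.113.10", "203.0.113.11"}  # hypothetical office IPs

    def ip_allowlist(app):
        """WSGI middleware that rejects requests from unlisted IPs."""
        def wrapper(environ, start_response):
            # Behind a load balancer, REMOTE_ADDR is the balancer's
            # address, and X-Forwarded-For is client-supplied and
            # therefore forgeable; both are shaky foundations.
            client_ip = environ.get("REMOTE_ADDR", "")
            if client_ip not in ALLOWED:
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Forbidden"]
            return app(environ, start_response)
        return wrapper

Both failure modes noted in the comment are why this check is better enforced at the firewall.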

How to restrict connections to glassfish?

I'm running a web application on a GlassFish 3 server. The application should not be accessible to everyone. Instead, I want to limit access to a handful of static IP addresses. Blocking all communication via a firewall is not an option, since the server hosts other web services too.
Given this background, my question would be:
How can I tell GlassFish to respond only to requests from a given set of IP addresses?
Your help is highly appreciated!
IP-based security is not very robust and... secure (think network topology changes, IP spoofing), but it should be possible to:
create a virtual server
configure the application to be available on that virtual server only
define allowRemoteHost/denyRemoteHost properties at the virtual server level
A better alternative would be to move to certificates.
You can always write a filter that returns 404 or whatever for invalid IPs. Note that IPs can be spoofed.

Website currently being viewed

I have 50 machines on a LAN, and each of them has internet access. Can a program be developed using VC++ that will report which websites are being opened by users on each machine?
You can accomplish this by writing an application that captures packets outbound on port 80 (and the associated DNS information). The problem is that this application must run on every client computer you want to trace. The easier method, as stated by others, is to take advantage of your network architecture and tunnel all traffic through a central proxy, which can record the same information.
In the latter case, there are many enterprise tools suited to exactly this task.
Route your internet traffic through a centralized proxy and monitor the traffic from the proxy, say using Fiddler or something similar. If proxying is not possible, use Fiddler to generate data at a known location and then collate it at the required intervals.
Install a firewall, if you don't already have one, and use it to log connections.
