Why are ports below 1024 privileged? [closed] - unix

I've heard it's meant to be a security feature, but it often seems like a security problem. If I want to write a server that uses a privileged port, not only do I have to worry about how secure my code is, I have to especially worry about whether I'm using setuid right and dropping privileges.

True. But it also means that anyone talking to you knows that you must have root privileges to run that server. When you log in to a server on port 22 (say), you know you're talking to a process that was run by root (security problems aside), so you trust it with your password for that system, or other information you wouldn't entrust to just anyone with a user account on that system.
Reference: http://www.w3.org/Daemon/User/Installation/PrivilegedPorts.html.
Edit to elaborate on the reasoning: a lot of the most important network services - telnet (yes, it's still used - surprisingly often), SSH, many HTTP services, FTP etc. etc. - involve sending important data like passwords over the wire. In a secure setup some sort of encryption, whether inherent in the protocol (SSH) or wrapped around it (stunnel, IPSec), protects the data from being snooped on the wire, but all these protections end at the server.
In order to protect your data properly, you need to be sure that you're talking to the 'real' server. Today secure certificates are the most important way of doing this on the web (and elsewhere): you assume that only the 'real' server has access to the certificate, so if you verify that the server you're talking to has that certificate you'll trust it.
Privileged ports work in a very similar way: only root has access to privileged ports, so if you're talking to a privileged port you know you're talking to root. This isn't very useful on the modern web: what matters is the identity of the server, not its IP. In other types of networks, this isn't the case: in an academic network, for example, servers are often physically controlled by trusted staff in secure rooms, but students and staff have quite free access as users. In this situation it's often safe to assume you can always trust root, so you can log in and send private data to a privileged port safely. If ordinary users could listen on all ports, you'd need a whole extra layer to verify that a particular program was trusted with certain data.

You don't say what platform you are using, but on Linux at least you can use capabilities (specifically CAP_NET_BIND_SERVICE) to allow a non-root process to listen on a port less than 1024. See, for example, Is there a way for non-root processes to bind to "privileged" ports on Linux?
Another alternative is to set up iptables rules to forward traffic from the privileged port to the non-privileged port (I've used this in production, and it's fairly simple and works well). It's also described in the above link.
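As a quick illustration of the restriction itself, here is a minimal Python sketch (the port and address are arbitrary) showing what an unprivileged process sees when it tries to bind a port below 1024 on Linux:

import errno
import socket

# Sketch only: try to bind port 80 (below 1024) as an ordinary user.
# On Linux this fails with EACCES unless the process runs as root or the
# binary/interpreter has been granted CAP_NET_BIND_SERVICE.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.bind(("0.0.0.0", 80))
    print("bound to port 80 (running as root or with CAP_NET_BIND_SERVICE)")
except OSError as e:
    if e.errno == errno.EACCES:
        print("permission denied: ports below 1024 are privileged")
    else:
        raise
finally:
    sock.close()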

Related

Efficient ways to encrypt data sent between client and server [closed]

What I'm trying to do is send information in web requests between an application I've made for the desktop and, obviously, a web server.
I want this information to be encrypted for security reasons; this software may be something people want to crack, and I don't want them seeing what's being exchanged between the client and the server.
So, my question is: what is the most efficient way to encrypt data on the client side, send it to the server, and have it decrypted there, and then the same in reverse, with the server encrypting and the client decrypting?
EDIT:
I just want a valid method of encryption for the data being sent between the client and the server: a secure way to encrypt data on the client, send it to the server, and have it decrypted there. I described this whole thing very poorly. Programs such as Fiddler can monitor the requests sent from the C++ application to the server, and the response it gives back, all in plain text. I just need this data and response to be encrypted and able to be decrypted on both sides.
The tool you want is a pinned TLS certificate. See the OWASP introduction to the topic.
The point of pinning a certificate is that your HTTPS session will not trust every root in the local keystore. It will instead only trust a limited number of roots, specifically the ones you specify (and ideally control). With that, it is not possible to simply inject a rogue root certificate into the local keystore in order to monitor local traffic.
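As a rough sketch of the idea (not the OWASP reference implementation; the hostname and fingerprint below are hypothetical placeholders), pinning can be as simple as comparing the server certificate's fingerprint against a value shipped with the client:

import hashlib
import socket
import ssl

# Hypothetical placeholders: replace with your server and the known SHA-256
# fingerprint of its certificate (obtained out-of-band and shipped with the client).
HOST = "api.example.com"
PORT = 443
PINNED_SHA256 = "0123456789abcdef..."  # hex digest of the server cert in DER form

def connect_with_pin(host, port, pinned_sha256):
    context = ssl.create_default_context()  # normal chain/hostname checks still apply
    sock = socket.create_connection((host, port))
    tls = context.wrap_socket(sock, server_hostname=host)
    der_cert = tls.getpeercert(binary_form=True)
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    if fingerprint != pinned_sha256:
        tls.close()
        raise ssl.SSLError("server certificate does not match the pinned fingerprint")
    return tls  # safe to exchange application data over this socket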
That said, it is not particularly difficult to circumvent pinned certificates if you control the machine the client is running on. But it is not particularly difficult to circumvent any simple mechanism if you control the machine the client is running on. The techniques used to circumvent certificate pinning (namely, modifying the client) will circumvent every client-side encryption scheme.
This is discussed regularly on StackOverflow, and has been for years. Here is one of the various answers that links to several others:
Secure https encryption for iPhone app to webpage
The key lesson is that "anti-cracking" is not "security." It is achieved through obfuscation and other anti-reverse-engineering techniques. This is not a winnable problem. It requires ongoing improvements as attackers defeat your defenses. You should expect to allocate non-trivial resources to this on an ongoing basis, or you should apply modest resources (like pinning) and accept that they won't be very effective but they aren't very costly to manage.
(I used to do this as part of a team of over a dozen full-time people committed to preventing these kinds of attacks. We spent millions of dollars a year on the problem, working together with law enforcement around the world and deploying extensive custom security hardware. We still got beaten and had to adapt our methods as attacks improved. That's what I mean by "non-trivial resources.")
Use SSL to encrypt traffic between client and server.

Why can't torrent traffic be encrypted? [closed]

The goal of this question is that I am just trying to better understand the nature of P2P and networking and security / encryption. I am a front-end web developer and my knowledge of the networking stack is not great if we go lower than HTTP requests.
That being said, I am trying to understand how torrent traffic is "sniffed" by ISPs and the content identified. I feel like this question will expose my ignorance, but is it not possible to have some sort of HTTPS-like P2P protocol that would not be so readable?
I grasp that a given packet has to identify its destination to the network along the way, but couldn't torrent packets be configured to show ONLY their destination, so that nobody could identify their purpose along the way until they arrived at their destination? Why is it apparently an unrectifiable situation that ISPs can just look at P2P traffic and know everything about it, yet SSH is extremely safe?
Every answer here seems to have a different interpretation of the question, or rather, a different assumed purpose of the encryption. Since you compare it to https, it seems like a reasonable assumption is that you're looking for authentication and confidentiality. I'll enumerate a few attempts in decreasing level of "security". This is a bittorrent centric answer, because you tagged the question with bittorrent.
SSL
Starting with the strongest system, it is possible to run bittorrent over SSL (it's not supported by many clients, but in a fully controlled deployment it can be done). This gives you:
Authentication of every peer participating
The ability to pick which peers are let into the swarm by signing their certificate with the swarm root.
SSL encryption of all peer connections + tracker connections
The tracker can authenticate every peer connecting to it, but even if the peer list (or one peer) is leaked or guessed, there's still the peer-to-peer authentication, blocking any unauthorized access.
Bittorrent over SSL has been implemented and deployed.
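As a hypothetical sketch of the kind of mutual-TLS peer connection described above (file names, port, and CA layout are placeholders, not the actual deployment), each peer would hold a certificate signed by the "swarm root" and require the same of the other side:

import socket
import ssl

# Placeholder files: each peer's certificate is signed by the swarm-root CA,
# and both sides require the other to present such a certificate.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="peer.crt", keyfile="peer.key")
context.load_verify_locations(cafile="swarm-root.crt")
context.verify_mode = ssl.CERT_REQUIRED   # reject peers without a swarm-signed cert

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 6881))          # arbitrary example port
listener.listen()
conn, addr = listener.accept()
tls_conn = context.wrap_socket(conn, server_side=True)
# tls_conn now carries peer traffic readable only by authorized, authenticated peers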
encrypted torrents
At BitTorrent (in the uTorrent client) we added support for symmetric encryption of torrents at the disk layer:
Everything in the bittorrent engine would operate on encrypted blocks. The data integrity checks (sha-1 hashes of pieces) would be done on encrypted blocks and the .torrent file would have hashes of the encrypted data. An encrypted torrent like this is backwards compatible with clients that don't support the feature, but they won't be able to access the data (just help out the swarm and seed it).
To download the torrent in an unencrypted form, you would add the &key= argument to the magnet link, and uTorrent decrypts and encrypts data at the disk boundary (leaving the data on disk in the clear). Anyone adding the magnet link without the key would just get encrypted data.
There are some other details involved too, like encrypting some of the metadata in the .torrent file, such as the list of files.
This does not let you pick which peers get to join. You can give access to the peers you want, but since it's a symmetric key, anyone with access can invite anyone else, or publish the key. It does not give you any stronger authentication than you had when you found the magnet link.
It gives you confidentiality among trusted peers and the ability to have untrusted peers help out with seeding.
bittorrent protocol encryption
The bittorrent protocol encryption is probably better described as obfuscation. Its primary intention is not to authenticate or control access to a swarm (it derives the encryption key from the info-hash, so if you can keep that a secret you do get that property). The main purpose is to avoid trivial passive snooping and shaping of traffic. My understanding is that it's less effective to avoid being identified as bittorrent traffic these days. It also provides weak protection against sophisticated and active attacks. For instance, if the DHT is enabled, or tracker connections are not encrypted, it's easy to learn about the info-hash, which is the key.
In the case of private torrents (where DHT and peer exchange are disabled), assuming the tracker runs HTTPS, there aren't any obvious holes in it. However, my experience is that it's not uncommon for HTTPS trackers to have self-signed certificates, and for clients not to authenticate trackers, which means poisoning the DNS entry for the tracker may be enough to enter the swarm.
Torrent traffic can be encrypted, and there are VPNs/SOCKS proxies that can be used to redirect traffic, i.e., via another country through an encrypted tunnel before connecting to peers. That said, even if you use such services, there are a lot of ways of leaking traffic via side channels (e.g., DNS lookups, insecure trackers, compromised nodes), and most people aren't knowledgeable enough to follow all proper security/anonymity precautions. Furthermore, restricting yourself to communicating only with clients who have also forced encryption will limit the number of peers you can connect to.
The problem you're considering is the difference between point-to-point encryption where there are only two peers in a private context and an unbounded number of peers in a public context.
Decryption by any of the public peers can only be effected if there's a primer somewhere -- a decryption key that is available for all the public peers to use. In the case of protecting from the ISPs, they would also have access to that key unless there was some exclusionary protocol for only sharing the key amongst everyone else. It's not practical to do this.
In a point-to-point connection, a TLS key negotiation eventually creates a session encryption key that is shared by both peers. The key is pseudorandom and session-specific. Data shared on the internet this way would be unusable to clients that didn't participate in the key negotiation.
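As a small illustration (example.com is just a placeholder host), Python's ssl module exposes the per-session parameters a TLS handshake negotiates; the symmetric session key itself never leaves the TLS layer of the two endpoints:

import socket
import ssl

# Sketch only: each connection negotiates its own session parameters.
context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())   # e.g. 'TLSv1.3'
        print(tls.cipher())    # cipher suite negotiated for this session only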
Bittorrent traffic (specifically the peer-peer protocol used to transfer the bulk of the data) can be encrypted. But it's the kind of encryption that does not provide strong confidentiality/authentication guarantees, similar (but not identical) to HTTP/2's opportunistic encryption.
Client-Tracker communication can be encrypted with HTTPS.
These two components give you a working, albeit restricted, bittorrent stack that's encrypted and whose contents are not visible to a passive observer.
ISPs may still be able to identify it as "bittorrent, probably" based on side-channel data (packet sizes/traffic patterns, domains contacted, ...) but they won't know exactly what is being transferred.

Active Directory unable to recognize systems [closed]

My configuration: I'm running Windows Server 2008 in VMware (8.0.0) and it's working fine. I also have 5 Windows 7 systems, and I have created Active Directory and DNS (i.e. abc.com).
Issue: Whenever I try to add those systems to a group, AD can't detect any of the 5 systems (by hostname) at all.
To do (?):
Should I install DHCP on my server and then add those clients manually, so they'll show up in Active Directory?
or
Do I have to create a network domain that makes the server recognize the other 5 systems?
Any input on making my Active Directory recognize the 5 other Windows 7 clients will be appreciated.
OK, long-form answer, so it's not in a comment.
This is in response to the two address formats out in the market:
IPv4 - the oldest current standard, used by 90% of the NICs on the market now; 100% of NICs will be compliant with this standard.
IPv6 - the newest (roughly two years old) with roughly 50% market implementation.
Acknowledging the industry transition from IPv4 to IPv6 will help you solve some DHCP/network transmission problems. I know of the problem and have tasked more specialized network/system admins with that kind of troubleshooting; it's a task of inventory control and standardization of the infrastructure.
First off, do you have a single-node AD or a farm?
Farm: do a broad search from the main trunk of AD for the computer.
This will do a full "inventory" search. In the past, the AD implementer didn't set up a default directory for "new" computers, so they went to the top trunk for proper routing. So I would have to do a top-trunk search for the computer and, once it was found, manually drag it to the correct branch or limb.
Single-node: follow the same methodology as for "Farm" above, but you have less of a headache as your trunk is more local and not necessarily spread out amongst several locations.
There is always a need for DHCP, especially if you want to fight against static IP assignment and force users to use the computer name instead of just an IP. Most servers use static DHCP reservations for back-door administrative access. In my past computer management, I have used DHCP to enforce dynamic IP addresses for client computers, to prevent anyone from using a fixed IP address as an entry point into your network.
One way to be 100% sure that the computer is not registered in any way with AD:
1. Log in as Local Administrator for all of these steps, until otherwise stated.
2. Disconnect the computer from the network.
3. Change it from Domain to Workgroup (I always use "TEMP" as my workgroup).
4. Do the mandatory reboot.
5. Rename the computer to another valid name under your infrastructure naming convention.
6. Do the mandatory reboot.
7. During this reboot, reconnect the patch cord to the NIC (preferably before the black Windows loading screen).
8. Optional: pre-add the new computer name to the needed branch/folder.
9. Log in as Local Admin, change from Workgroup to Domain, and authenticate with the AD administrator's credentials.
   If you error out here, validate that the computer is actually on the network (it should have pulled a valid subnet IP address).
   If it pulled a non-valid IP address, check the network port and the activity lights on the back of the computer.
10. Verify the location (if you didn't pre-add the computer at point 8) and move it to the appropriate branch/folder.
All of this assumes you are doing it with a physical computer.
If you are doing it with a virtual computer in VMware, then you will also have to check these potential problem points:
The virtual machine's hardware settings - make sure they are the same as (or similar to) other valid VMs.
Make sure you haven't exceeded your licensed number of CPUs or instances.
I can't remember whether AD is still licensed per client or whether they just bundled it into the server license. They have changed the CAL licensing options so much over the years that it's hard for implementers to keep track - only the sales people can ;)

Creating a networking application that can work over internet connections

I have a somewhat basic understanding of network programming (and networking concepts in general) from taking a networking course in university a few years ago.
I remember being able to create a simple chat application, where the chat server is used as a central directory aware of which clients are currently online, but once a client knows another client it wants to chat with, the actual messages between them don't need to go through the server. I remember we could only test this over a bunch of LAN machines.
This C# chat program also has several comments mentioning that the program does not work over the internet: http://www.geekpedia.com/tutorial239_Csharp-Chat-Part-1---Building-the-Chat-Client.html
My question is why do these applications not work over the internet when "commercial" chat applications can. Surely, there is some way to make my computer accessible to the outer network even if its IP address is not valid outside the network of the ISP.
I see no problem with the linked-to code. The server doesn't even bind to a local address, which means it will listen for connections on all IP addresses on the computer. There is, however, a comment in the server article where the user changed the TcpListener creation to bind to a specific address, which means clients can only connect to that specific address.
In the original server design, using TcpListener with only a port number, there should be nothing preventing its use on an Internet-connected computer, unless there is a firewall blocking access.
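To make the binding distinction concrete (the linked tutorial is C#; this is just a hypothetical Python sketch with arbitrary port numbers), compare binding to all interfaces versus binding only to the loopback address:

import socket

# Binding to 0.0.0.0 listens on every interface, so remote clients can connect
# (firewall/NAT permitting); binding to 127.0.0.1 restricts it to this machine.
listen_all = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listen_all.bind(("0.0.0.0", 5000))    # reachable from the LAN/Internet
listen_all.listen()

local_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local_only.bind(("127.0.0.1", 5001))  # only local clients can connect
local_only.listen()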
Were you aware of networkComms.net and, in particular, the short chat example demonstrating the functionality here (it's less than 15 lines of code)? It was written specifically for people writing server-client apps in C#, and since most of the problems you might come across will already have been solved, it might save you some time. The library is completely plug & play and has no issues working over the internet (as long as you can set up the necessary port forwarding where applicable).
Generally if both of your targets are behind NAT (so no true external ip addresses) and you are unable to configure port forwarding you need to look at 'TCP / UDP hole punching', quite an advanced technique.
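For what it's worth, here is a bare-bones conceptual sketch of UDP hole punching in Python; the peer address is hypothetical and would normally be learned from a rendezvous server, and symmetric NATs can defeat this approach entirely:

import socket

# Both peers run this, each pointing at the other's public endpoint as reported
# by a rendezvous server (address below is a placeholder).
LOCAL_PORT = 40000
PEER_PUBLIC_ADDR = ("203.0.113.7", 40001)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))

# Sending first opens a mapping in the local NAT, so the peer's packets
# (arriving from the address we just sent to) are then let through.
sock.sendto(b"punch", PEER_PUBLIC_ADDR)

sock.settimeout(5)
try:
    data, addr = sock.recvfrom(1024)
    print("hole punched, received", data, "from", addr)
except socket.timeout:
    print("no reply; the NAT may be symmetric and not punchable this way")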

Two Computers Finding Each Other over Internet [closed]

Given two computers attached to the Internet that know nothing about each other before hand, is it possible for one computer to be able to broadcast a message so that the second computer could receive it and respond?
I know UDP broadcast exists, but I believe those broadcasts are generally filtered by the ISP before they reach the true Internet. Is this true?
The current best way to achieve a multinode network without centralized coordination is through the use of Distributed Hash Tables. That link explains a bit and links to various implementations you can leverage.
That said, you still need each machine to coordinate with at least some peers; it's just that you don't need it to coordinate with a central server. A solution using a central server that knows both (all) participating machines will also work, but it imposes further restrictions on anonymity and scalability - just remember what happened to Napster.
You need an intermediate third party that they both know, that could distribute messages directed towards it in a broadcast-like fashion.
A solution for this problem (where none of your peers know the final address of the other) could be relying on IM protocols.
In particular, the XMPP protocol is extensible, open and used by many providers such as Google Talk. Libraries exist for most languages and it has the plus of being able to work (slowly and going through a 3rd party server) even if both hosts are behind a NAT-box.
If communication must use another channel, you can use XMPP to exchange IP addresses and then proceed with the standard socket route (but if you encrypt your messages, there should be no problem even going through a 3rd-party server - in truth, all packets go through untrusted 3rd-party routers anyway, so you should encrypt whenever you have sensitive data).
Hope this helps.
No, you can't broadcast like that over the internet. You need to know which address you want your packets to go to.
A possible solution for you is to use a dynamic DNS service.
Your application would need to know in advance which hostname the other host will be using, but this service would at least get around the fact that you don't know exactly which IP address the other computer is on.
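A minimal sketch of how the client side might use such a hostname (the hostname, port, and dynamic DNS provider are hypothetical placeholders):

import socket

# Hypothetical hostname the other host keeps updated with its dynamic DNS provider.
PEER_HOSTNAME = "my-friend-machine.example-dyndns.net"
PEER_PORT = 5000

# Resolve whatever IP address currently sits behind the hostname, then connect directly.
addr = socket.gethostbyname(PEER_HOSTNAME)
with socket.create_connection((addr, PEER_PORT), timeout=10) as conn:
    conn.sendall(b"hello")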
Note that this won't solve the potential issue of firewalls between the two hosts blocking your packets. The only practical way around that is for both hosts to open an outbound connection to a central host which can then relay data between them.
Look at the Chord or Pastry algorithms. Each is an overlay network (DHT-based) with a discovery mechanism built in; they are P2P (peer-to-peer) routing algorithms.
UDP is a dead end here - it's just a protocol where the order in which packets are received is less important, and there are issues routing it over WANs. You said that you want to connect two computers on the "internet", presumably with the end points moving around, etc. The only way is to use a central server as a register/directory. If each end point also hosts a web service (or something similar) and registers its current IP address and name periodically, then the other end point can look up its IP address using that service. (You could host your own DNS server and code your end point to register with that DNS?)
One of the problems is that even if you have the IP address, what if one or more nodes are behind a firewall or NAT router? You will need to host a server to proxy traffic. The best example is Skype - look into how it works; it is documented and very interesting.
The simplest answer might be to piggyback on an existing service such as Messenger, Skype, BitTorrent, etc.
Simon
If the computers are running Windows, I'd look at using PNRP.
Multicasting is also a possible solution. It's certainly feasible in a corporate network.
