What I'm trying to do is send information in web requests between an application I've made for a computer and, obviously, a web server.
I want this information to be encrypted for security reasons: this software may be something people want to crack, and I don't want them seeing what's being exchanged between the client and the server.
So, my question is: what is the most efficient way to encrypt data on the client side, send it to the server, and have it decrypted there? And then also the reverse, with the server encrypting and the client decrypting.
EDIT:
I just want a valid method of encrypting the data sent between the client and the server: a secure way to encrypt data on the client, send it to the server, and have it decrypted there. I described this poorly at first. Programs such as Fiddler can monitor the requests sent from the C++ application to the server, and the response it gives back, all in plain text. I just need the request and response to be encrypted, and decryptable on both sides.
The tool you want is a pinned TLS certificate. See the OWASP introduction to the topic.
The point of pinning a certificate is that your HTTPS session will not trust every root in the local keystore. It will instead only trust a limited number of roots, specifically the ones you specify (and ideally control). With that, it is not possible to simply inject a rogue root certificate into the local keystore in order to monitor local traffic.
That said, it is not particularly difficult to circumvent pinned certificates if you control the machine the client is running on. But it is not particularly difficult to circumvent any simple mechanism if you control the machine the client is running on. The techniques used to circumvent certificate pinning (namely, modifying the client) will circumvent every client-side encryption scheme.
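To make the pinning idea concrete, here is a minimal sketch in Python (the question's client is C++, but the idea translates directly). The host name and PINNED_FINGERPRINT are placeholders you would replace with the certificate you actually control; treat it as an illustration of fingerprint pinning under those assumptions, not a drop-in implementation.

```python
# Minimal certificate-pinning sketch (Python standard library only).
# PINNED_FINGERPRINT is a placeholder; replace it with the SHA-256
# fingerprint of the server certificate you control.
import hashlib
import socket
import ssl

PINNED_FINGERPRINT = "replace-with-your-cert-sha256-fingerprint"

def open_pinned_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    # Standard CA/hostname verification still applies; the pin is an extra check.
    context = ssl.create_default_context()
    sock = context.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
    der_cert = sock.getpeercert(binary_form=True)  # peer certificate as DER bytes
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    if fingerprint != PINNED_FINGERPRINT:
        sock.close()
        raise ssl.SSLError("certificate fingerprint does not match the pin")
    return sock  # safe to send the HTTP request over this socket
```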
This is discussed regularly on StackOverflow, and has been for years. Here is one of the various answers that links to several others:
Secure https encryption for iPhone app to webpage
The key lesson is that "anti-cracking" is not "security." It is achieved through obfuscation and other anti-reverse-engineering techniques. This is not a winnable problem. It requires ongoing improvements as attackers defeat your defenses. You should expect to allocate non-trivial resources to this on an ongoing basis, or you should apply modest resources (like pinning) and accept that they won't be very effective but they aren't very costly to manage.
(I used to do this as part of a team of over a dozen full-time people committed to preventing these kinds of attacks. We spent millions of dollars a year on the problem, working together with law enforcement around the world, and deploying extensive custom security hardware. We still got beaten and had to adapt our methods as attacks improved. That's what I mean by "non-trivial resources.")
Use SSL to encrypt traffic between client and server.
We are developing a browser extension which would send all the URLs visited by a logged-in user to backend APIs to be persisted.
Since the number of requests sent to the backend API would be huge, we are torn between creating a persistent connection via WebSocket or doing it over plain HTTP, i.e. using REST API calls.
The data posted to the backend API doesn't need to be real-time, as we would only be using it in our models, which don't demand real-time input.
We are inclined towards HTTP REST API calls for the reasons below:
Easy to implement
Easy to scale (using auto-scaling techniques)
Everyone in the team is already comfortable with the rest APIs
But at the same time the cons would be:
At the scale where we would have a lot of POST requests going to the server, we're not sure it would be optimal
It feels like WebSockets could give us a more optimised infrastructure :(
I would love to hear from the community about any pitfalls of going with the REST API option.
So first of all, TCP is the transport layer. You cannot use raw TCP by itself; you have to create some protocol on top of it to give meaning to the stream of data.
REST or HTTP or even WebSockets will never be as efficient as a custom-designed protocol on top of raw TCP (or even UDP). However, the gain may not be as spectacular as one might think. I've actually done such a transition once, and we saw only a few percent of performance gain. It was neither easy to do correctly nor easy to maintain. Of course, YMMV.
Why is that? Well, the reason is that HTTP is already quite highly optimized. First of all, the "keep-alive" header keeps the connection open if it is reused, so the default HTTP mechanisms already persist connections. Secondly, HTTP supports body compression out of the box, and with HTTP/2 it also compresses headers. With HTTP/3 you get even more efficient TLS usage and better behaviour on unstable networks (e.g. mobile).
Another thing is that since you do not require real-time data, you can buffer. Instead of sending data each time it becomes available, you gather it for, say, a few seconds, minutes, or maybe even hours, and send it all in one go. With such an approach the difference between HTTP and a custom protocol will be even less noticeable.
All in all: I advise you to start with the simplest solution there is, which in your case seems to be REST. Design your code so that a transition to another protocol is as simple as possible. Optimize later if needed. Always measure.
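A minimal sketch of that buffering idea, using Python for illustration (in the extension itself this would be JavaScript); the endpoint URL, batch size and field names are invented:

```python
# Sketch: collect visited URLs locally and flush them to a REST endpoint
# in batches instead of one request per visit.
import json
import urllib.request

BATCH_SIZE = 100
ENDPOINT = "https://api.example.com/v1/visits"  # hypothetical endpoint
_buffer: list[dict] = []

def record_visit(user_id: str, url: str) -> None:
    _buffer.append({"user": user_id, "url": url})
    if len(_buffer) >= BATCH_SIZE:
        flush()

def flush() -> None:
    if not _buffer:
        return
    body = json.dumps(_buffer).encode("utf-8")
    req = urllib.request.Request(ENDPOINT, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:  # one POST per batch
        resp.read()
    _buffer.clear()
```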
Btw, there are lots of valid privacy and security concerns around your extension. For example, I'm quite surprised that you didn't mention TLS at all. It matters not only for security but also for performance: establishing TLS connections is not free (although once established, encryption does not affect performance much).
Putting my discomfort aside (privacy, anyone?)...
Assuming your extension collates the information, you might consider "pushing" to the server every time the browser starts / quits and then once again every hour or so (users hardly ever quit their browsers these days)... this would make REST much more logical.
If you aren't collating the information on the client side, you might prefer a WebSocket implementation that pushes data in real time.
However, whatever you decide, you would also want to decouple the API from the transmission layer.
This means that (ignoring authentication paradigms) the WebSockets and REST implementations would look largely the same and be routed to the same function that contains the actual business logic... a function you could also call from a script or from the terminal. The network layer details should be irrelevant as far as the API implementation is concerned.
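A sketch of what that decoupling might look like, with Python standing in for whatever backend language you use; the handler names and payload fields are invented, and the actual REST/WebSocket plumbing is omitted:

```python
# Sketch of decoupling business logic from the transport layer.
# handle_visit() holds the actual logic; the thin wrappers show how a REST
# handler and a WebSocket handler would both delegate to it.
import json

def handle_visit(user_id: str, url: str) -> dict:
    """Business logic: persist the visit and return a result."""
    # ... write to the database here ...
    return {"status": "stored", "user": user_id, "url": url}

def rest_endpoint(request_body: bytes) -> bytes:
    payload = json.loads(request_body)
    result = handle_visit(payload["user"], payload["url"])
    return json.dumps(result).encode("utf-8")

def websocket_message(message: str) -> str:
    payload = json.loads(message)
    result = handle_visit(payload["user"], payload["url"])
    return json.dumps(result)

if __name__ == "__main__":
    # The same logic is callable directly from a script or test.
    print(rest_endpoint(b'{"user": "u1", "url": "https://example.com"}'))
```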
As a last note: I would never knowingly install an extension that collects so much data on me. Especially since URLs often contain private information (used for REST API routing). Please reconsider if you want to take part in creating such a product... they cannot violate our privacy if we don't build the tools that make it possible.
The goal of this question is that I am just trying to better understand the nature of P2P and networking and security / encryption. I am a front-end web developer and my knowledge of the networking stack is not great if we go lower than HTTP requests.
That being said, I am trying to understand how torrent traffic is "sniffed" by ISPs and the content identified. I feel like this question will expose my ignorance, but is it not possible to have some sort of HTTPS-like P2P protocol that would not be so readable?
I grasp that a given packet has to identify its destination to the network along the way, but couldn't torrent packets be configured to show ONLY their destination, so that nobody could identify their purpose along the way until they arrived at it? Why is it apparently an unrectifiable situation that ISPs can just look at P2P traffic and know everything about it, yet SSH is extremely safe?
Every answer here seems to have a different interpretation of the question, or rather, a different assumed purpose of the encryption. Since you compare it to https, it seems like a reasonable assumption is that you're looking for authentication and confidentiality. I'll enumerate a few attempts in decreasing level of "security". This is a bittorrent centric answer, because you tagged the question with bittorrent.
SSL
Starting with the strongest system, it is possible to run bittorrent over SSL (it's not supported by many clients, but in a fully controlled deployment it can be done). This gives you:
Authentication of every peer participating
The ability to pick which peers are let into the swarm by signing their certificate with the swarm root.
SSL encryption of all peer connections + tracker connections
The tracker can authenticate every peer connecting to it, but even if the peer list (or one peer) is leaked or guessed, there's still the peer-to-peer authentication, blocking any unauthorized access.
Bittorrent over SSL has been implemented and deployed.
encrypted torrents
At BitTorrent (in the uTorrent client) we added support for symmetric encryption of torrents at the disk layer:
Everything in the bittorrent engine would operate on encrypted blocks. The data integrity checks (sha-1 hashes of pieces) would be done on encrypted blocks and the .torrent file would have hashes of the encrypted data. An encrypted torrent like this is backwards compatible with clients that don't support the feature, but they won't be able to access the data (just help out the swarm and seed it).
To download the torrent in an unencrypted form, you would add the &key= argument to the magnet link, and uTorrent decrypts and encrypts data at the disk boundary (leaving the data on disk in the clear). Anyone adding the magnet link without the key would just get encrypted data.
There are some other details involved too, like encrypting some of the metadata in the .torrent file, such as the list of files.
This does not let you pick which peers get to join. You can give access to the peers you want, but since it's a symmetric key, anyone with access can invite anyone else, or publish the key. It does not give you any stronger authentication than you had when you found the magnet link.
It gives you confidentiality among trusted peers and the ability to have untrusted peers help out with seeding.
bittorrent protocol encryption
The bittorrent protocol encryption is probably better described as obfuscation. Its primary intention is not to authenticate or control access to a swarm (it derives the encryption key from the info-hash, so if you can keep that a secret you do get that property). The main purpose is to avoid trivial passive snooping and shaping of traffic. My understanding is that it's less effective to avoid being identified as bittorrent traffic these days. It also provides weak protection against sophisticated and active attacks. For instance, if the DHT is enabled, or tracker connections are not encrypted, it's easy to learn about the info-hash, which is the key.
In the case of private torrents (where DHT and peer exchange are disabled), assuming the tracker runs HTTPS, there aren't any obvious holes in it. However, my experience is that it's not uncommon for https trackers to have self-signed certificates and for clients to not authenticate trackers, which means poisoning the DNS entry for the tracker may be enough to enter the swarm.
Torrent traffic can be encrypted, and there are VPNs/SOCKS proxies that can be used to redirect traffic, i.e., via another country through an encrypted tunnel before connecting to peers. That said, even if you use such services, there are a lot of ways of leaking traffic via side channels (e.g., DNS lookups, insecure trackers, compromised nodes), and most people aren't knowledgeable enough to follow all proper security/anonymity precautions. Furthermore, restricting yourself to communicating only with clients who have also forced encryption will limit the number of peers you can connect to.
The problem you're considering is the difference between point-to-point encryption where there are only two peers in a private context and an unbounded number of peers in a public context.
Decryption by any of the public peers can only be effected if there's a primer somewhere -- a decryption key that is available for all the public peers to use. In the case of protecting from the ISPs, they would also have access to that key unless there was some exclusionary protocol for only sharing the key amongst everyone else. It's not practical to do this.
In a point-to-point connection, a TLS key negotiation eventually creates a session encryption key that is shared by both peers. The key is pseudorandom and session-specific. Data shared on the internet this way would be unusable to clients that didn't participate in the key negotiation.
Bittorrent traffic (specifically the peer-peer protocol used to transfer the bulk of the data) can be encrypted. But it's the kind of encryption that does not provide strong confidentiality/authentication guarantees, similar (but not identical) to HTTP/2's opportunistic encryption.
Client-Tracker communication can be encrypted with HTTPS.
These two components give you a working, albeit restricted, bittorrent stack that's encrypted and whose contents are not visible to a passive observer.
ISPs may still be able to identify it as "bittorrent, probably" based on side-channel data (packet sizes/traffic patterns, domains contacted, ...) but they won't know exactly what is being transferred.
I'm about to start writing a web app (Asp.Net/IIS7) which will be accessible over the internet. It will be placed behind a firewall which accepts http and https.
The previous system, which we are going to replace, doesn't let this web server talk directly to a database; instead it has the web server make highly specialized web service calls (through a new firewall which only allows these kinds of calls) to a separate app server, which then goes to the DB to operate on the data.
I have worked on many systems in my day, but this is the first one which has taken security this seriously. Is this a common setup? My first thought was to use Windows Authentication in the connection string on the web server and have the user be a crippled DB user (can only view and update its own data), and then allow DB access through the inner firewall as well.
Am I Naïve? Seems like I will have to do a lot of mapping of data if we use the current setup for the new system.
Edit: The domain of this app is online ordering of goods (business to business). Users (businesses) log in, input what they can deliver in any given time period, view previous transaction history, view projected demand for goods, etc. No actual money is exchanged through this system, but it provides the information on which goods are available for sale, which is data input to the ordering system.
This type of arrangement (DMZ with web server, communicating through firewall with app server, communicating through firewall with db) is very common in certain types of environment, especially in large transactional systems (online corporate banking, for example)
There are very good security reasons for doing this, the main one being that it will slow down an attack on your systems. The traditional term for it is Defence in Depth (or Defense if you are over that side of the water)
Reasonable security assumption: your webserver will be continually under attack
So you stick it in a DMZ and limit the types of connection it can make by using a firewall. You also limit the webserver to just being a web server - this reduces the number of possible attacks (the attack surface)
2nd reasonable security assumption: at some point a zero-day exploit will be found that will get to your web server and allow it to be compromised, which could lead to an attack on your user/customer database
So you have a firewall limiting the number of connections to the application server.
3rd reasonable security assumption: zero-days will be found for the app server, but the odds of finding zero-days for the web and app servers at the same time are reduced dramatically if you patch regularly.
So if the value of your data/transactions is high enough, adding that extra layer could be essential to protect yourself.
We have an app that is configured similarly. The interface layer lives on a web server in the DMZ, the DAL is on a server inside the firewall, with a web service bridging the gap between them. In conjunction with this we have an authorization manager inside the firewall which exposes another web service that is used to control what users are allowed to see and do within the app. This app is one of our main client data tracking systems, and is accessible to our internal employees and outside contractors. It also deals with medical information, so it falls under the HIPAA rules. So while I don't think this setup is particularly common, it is not unheard of, particularly with highly sensitive data or in situations where you have to deal with audits by a regulatory body.
Any reasonably scalable, reasonably secure, conventional web application is going to abstract the database away from the web machine using one or more service and caching tiers. SQL injection is one of the leading vectors for penetration/hacking/cracking, and databases often tend to be one of the more complex, expensive pieces of the overall architecture/TCO. Using service tiers allows you to move logic out of the DB, to employ out-of-process caching, to shield the DB from injection attempts, etc. You get better, cheaper, more secure performance this way. It also allows for greater flexibility when it comes to upgrades, redundancy or maintenance.
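On the injection point specifically, a large part of what a service tier buys you is that it can enforce parameterized queries, so attacker-supplied input never becomes SQL text. A minimal Python/sqlite3 sketch of that idea (table and data invented for illustration):

```python
# Parameterized query vs. string concatenation: the placeholder keeps
# user input out of the SQL text entirely.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user(name: str):
    # Attacker-supplied input is bound as data, never interpreted as SQL.
    return conn.execute("SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()

print(find_user("alice"))          # [(1, 'alice')]
print(find_user("x' OR '1'='1"))   # [] -- the injection attempt is just data
```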
Configuring the user's access rights seems like a more robust solution to me. Also your DataAccess layer should have some security built in, too. Adding this additional layer could end up being a performance hit but it really depends on what mechanism you're using to move data from "WebServer1" to "WebServer2." Without more specific information in that regard, it's not possible to give a more solid answer.
First of all,
do those successful commercial MMORPGs use encryption for game data transmission?
I got the impression that many developers tend not to use encryption, because it cannot prevent reverse engineering for cheating or for making private servers, but doesn't it at least effectively reduce the number of those?
Encryption also impacts performance, even just a little.
Good encryption does prevent network sniffing and man-in-the-middle attacks; are these important for MMORPGs?
How about protecting chat messages for privacy concerns?
What do you think?
PS: I'm talking about game data, not user/password; auth info needs to be encrypted for sure.
Encryption is a tool. Make sure the tool fits the problem.
Encryption is useful for essentially three things: 1) a 3rd party can't view the data, 2) both parties are who they say they are, 3) the data hasn't been modified. None of those really apply here. Remember the client is on the user's (attacker's) machine. If they modify the client, it will gladly sign & encrypt any message they want.
The second thing to consider is the fact that the client has the keys, and thus you should assume the attacker also has the keys. Even if you use asymmetric encryption, the client has the key to decrypt anything it receives. If you send "private data" to the client, an attacker can find the key and decrypt it.
A good MMORPG (designed to make cheating difficult) should assume two things:
a) user/attacker can see any data sent to client (so don't send things to client you don't want user to see)
b) an attacker can send any possible command to the user (so don't rely on the client for security).
In most MMORPGs the client is little more than a dumb terminal with impressive graphics. All computation, error checking, and validation occurs server side. The client doesn't determine if you hit or miss, nor does it determine how much damage you do. The client simply tells the server "I am attacking with item 382903128." or some other action (not result). The server validates that the player has access to that option, has the item, and that the command is valid at this time. To prevent sniffing attacks the client is only given data that the user would have access to anyway.
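A toy sketch of that server-authoritative pattern in Python; the player/target fields and combat rules are made up purely to illustrate that the server, not the client, checks the command and computes the result:

```python
# The client only names an action; the server validates it and decides
# the outcome. All fields and rules here are invented for illustration.
import random

def handle_attack(player: dict, target: dict, item_id: int) -> dict:
    # Server-side checks: never trust what the client claims about itself.
    if item_id not in player["inventory"]:
        return {"error": "item not owned"}
    if not player["alive"] or player["zone"] != target["zone"]:
        return {"error": "invalid action"}
    # The server, not the client, rolls the dice and computes the damage.
    hit = random.random() < player["accuracy"]
    damage = player["weapon_damage"] if hit else 0
    target["hp"] = max(0, target["hp"] - damage)
    return {"hit": hit, "damage": damage, "target_hp": target["hp"]}

player = {"inventory": {382903128}, "alive": True, "zone": 3,
          "accuracy": 0.75, "weapon_damage": 12}
target = {"zone": 3, "hp": 100}
print(handle_attack(player, target, 382903128))
```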
In any security context, you need to think about what exactly the threat scenarios are.
Attacker A has access to a machine running the game client and wants to write a bot to automate his actions so as to win battles easily.
Attacker B is eavesdropping on packets on a local network, with the aim of
Stealing login credentials so as to play the game for free.
Spying on player-to-player private chat, perhaps to gain advantage in the game, or maybe for blackmail or harassment in the real world.
Inserting extra behaviour into the stream of commands, e.g. instructions to buy or sell items at prices that make money for the attacker.
Encryption has no effect on attacker A (since the game client can decrypt the communication, so can the attacker; counter-measures must be taken on the server) but defeats attacker B.
I disagree with some of the other answers about the value of the data being transmitted. Your private chats with other players are as worthy of protection as your instant messages with them, and your gold and possessions, earned with hours of toil, deserve some protection from attackers, if perhaps not as much as your dollars in a bank account.
During the 90s EverQuest used a low-level packet encryption. I recall it fondly, as there used to be a side application that would sniff the packet data and give you zone-wide info about everyone in the zone. The EQ team crippled this for a while when they added the packet encryption, but that didn't stop the hacker community, as they would just get the key off the client machine. So in the end, it really didn't help in any way.
As to the other MMO's out there, I've not looked at the packet data to make a determination one way or the other.
You don't need encryption for security per se.
Consider this 'packet':
<USER_ID><COMMAND><MD5HASH>
The MD5HASH is generated from the USER_ID + COMMAND + some other value both the server and client know, but is not transmitted over the wire (user email or some token supplied securely during login). The server can reconstruct the string used for hashing and verify the authenticity of the command. If some man-in-the-middle changes the COMMAND, the hash won't match.
Besides validating authenticity this method also allows you to check you received the entire instruction. (It's possible that your 'game packet' is spread across multiple TCP/IP packets, some might get lost, etc.)
This does not prevent snooping around in messages, but it does prevent tampering. It's a game, who cares about what players say? I mean, emails are unencrypted and nobody cares about those, while their contents are more valuable than the average in-game chat.
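A small Python sketch of that scheme, with the caveat that modern practice would use an HMAC (e.g. HMAC-SHA256) over the message rather than a bare MD5 of the concatenation; the shared token here is a placeholder for whatever secret both sides hold but never transmit:

```python
# Command authentication with a keyed hash: the server recomputes the
# digest from the fields plus the shared secret and rejects mismatches.
import hashlib
import hmac

SHARED_TOKEN = b"token-exchanged-securely-at-login"  # placeholder secret

def sign_command(user_id: str, command: str) -> str:
    msg = (user_id + command).encode("utf-8")
    return hmac.new(SHARED_TOKEN, msg, hashlib.sha256).hexdigest()

def verify_packet(user_id: str, command: str, digest: str) -> bool:
    expected = sign_command(user_id, command)
    return hmac.compare_digest(expected, digest)  # constant-time compare

digest = sign_command("42", "MOVE north")
assert verify_packet("42", "MOVE north", digest)
assert not verify_packet("42", "MOVE south", digest)  # tampered command fails
```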
Encryption is always a good thing if it actually protects valuable data. Examples would be bank data, mail, instant messaging and file transfers. Not because I'm terribly paranoid of my ISP or network provider, but there is a specific risk if you are on an open network (for instance, school or company networks) that someone might be sniffing network traffic.
For MMORPGs I don't see a benefit in either security or performance, since most data is highly session-based and man-in-the-middle attacks are rather unlikely (because, after all, why would you want to sniff and attack such a connection?).
What I would do is to transmit passwords and login credentials as hashed values (or even encrypt just that part), and leave the rest of the connection cleartext; so you don't suffer from CPU hogging and lag caused by encryption (especially when there is a heavy load on the server).
At least the login should be encrypted, and the client should verify the public key of the server against a white-list to prevent man-in-the-middle attacks.
Encrypting the data transfered during the game itself isn't that important.
You need to distinguish encryption and obfuscation, which have quite different goals. For example, SSL is useful as encryption but useless as obfuscation, since the encryption happens in known APIs and it's trivial to intercept the plaintext as it passes into and out of those APIs.
Obfuscation needs to be mixed into your own code and doesn't need to be cryptographically secure.
Encryption is needed against administrators of a local network or Wi-Fi; they could potentially sniff your traffic/packets and grab or change game information/passwords.
Typically (99.99999% of the time) accounts are hacked by trojans, not by packet sniffing, so in those cases encryption is useless.
Encryption is totally useless against botting or cheating. For those cases there are dedicated defences, such as anti-cheat/anti-bot systems.
I'm trying to get down to the details of what happens once a server gets a request from a client...
Open a socket on the port specified by the request...
Then access the asset or resource?
What if the resource refers to a cgi/script?
What "layers" does the request info have to pass through?
How is the response generated?
I've looked up info on "how the internet works", and "request response cycle", but I'm looking for details as to what happens inside the server.
It seems like you're having a little trouble separating out the different parts of your question so I'll do my best to help you out with that.
First and foremost, a common method for understanding communication between two computers is described using what is called the OSI model. This model attempts to distinguish the responsibilities between each protocol in a protocol stack. For example, when you surf a website on your home network the protocol stack is most likely something like
Ethernet-IPv4-TCP-HTTP
This modularization of protocols is used to create a separation of concerns so that developers don't have to "reinvent the wheel" each time they try to get two computers to communicate in some way. If you're trying to write a chat program you don't want to worry about packet loss or internet routing methodologies so you go ahead and take advantage of the lower level protocols that already exist and handle more of the nitty gritty stuff for you.
When people refer to socket communication these days they're typically using TCP or UDP. These are both known as transport protocols. If you'd like to learn more of the fine details on socket communication I would start with UDP because it's a simpler protocol and then move on to TCP.
While your web server is aware of some information in the lower level protocols it doesn't really do much with it. Primarily that's all handled by the operating system libraries which eventually hand the web server some raw HTTP data which the web server then begins to process.
To add another layer, HTTP has nothing to do with the gateway language running behind the scenes. This is fairly obvious from the fact that the protocol is the same whether the web server is serving CGI Perl scripts, PHP, ASP.NET or static HTML files. The client simply makes the HTTP request and the web server processes it accordingly.
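To make that concrete, here is a toy Python web server that does nothing but parse the request path and dispatch to a handler; the routes and handlers are invented, and a real server (Apache, IIS, nginx) does far more, but the request-to-handler shape is the same:

```python
# Minimal sketch of "the web server processes the request": parse the
# request line, map the path to a handler, return a response.
from http.server import BaseHTTPRequestHandler, HTTPServer

def static_page() -> bytes:
    return b"<html><body>Hello</body></html>"

def dynamic_script() -> bytes:
    return b"<html><body>Generated at request time</body></html>"

ROUTES = {"/index.html": static_page, "/app": dynamic_script}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        handler = ROUTES.get(self.path)
        if handler is None:
            self.send_error(404)
            return
        body = handler()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), Handler).serve_forever()
```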
Hopefully this clarifies a few concepts for you and gives you a better idea what you're trying to understand.
It depends on the server. An Apache 2 server could do any amount of request rewriting, automatic responses (301, 303, 307, 403, 404, 500) based on rules, starting a CGI script, exchanging data with a FastCGI script, passing some data to a script module like mod_php, and so on. The CouchDB web server would do something else entirely.
Basically, aside from parsing the request and sending back the appropriate response, there's no real common aspect to web servers.
You could try looking into the documentation of the various web servers: Apache, IIS, lighttpd, nginx...