What I basically want to do is have my app keep sending and receiving packets so I know which people I've met before. The sending side is easy: I just need a UUID to identify that this is my app, plus a number or string to represent which user it is. Now the question is, how can I make it secure? If anyone can detect the packet, they can spoof an identical one and broadcast it; then, standing next to me, my app will believe they are that user even though they are not.
iBeacon absolutely does not fit this requirement, because its UUID/major/minor can be easily detected. One way I can think of is to use plain BLE instead of iBeacon and write the encrypted data into a service, but then the question becomes: can anyone else easily spoof a service with the data they detect?
The typical solution is to combine the real identifier with a timestamp and encrypt the pair using a secret key. This produces an encrypted identifier that constantly changes and is worthless unless you can decode it. Once decoded, the timestamp protects against replay attacks -- it must be within a small interval from the current time for the decrypted identifier to be considered valid.
Obviously, this solution requires synchronized clocks on the transmitter and receiver.
The details of the implementation are more of an encryption question than a beacon question. And the devil is in the details.
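To make that concrete, here is a minimal sketch of the rotate-and-verify idea (my own illustration, not a drop-in BLE payload format; it assumes Python's `cryptography` package and a pre-shared secret key on both devices). The transmitter encrypts the pair (user ID, timestamp) with an authenticated cipher, and the receiver only accepts the identifier if the timestamp is within a small window of its own clock:

```python
import os, struct, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = AESGCM.generate_key(bit_length=128)   # pre-shared secret, provisioned out of band
MAX_SKEW = 30                               # seconds; requires roughly synchronized clocks

def make_payload(user_id: int) -> bytes:
    """Build the rotating payload broadcast by the transmitter."""
    nonce = os.urandom(12)
    plaintext = struct.pack(">IQ", user_id, int(time.time()))
    return nonce + AESGCM(KEY).encrypt(nonce, plaintext, None)

def verify_payload(payload: bytes) -> int:
    """Decrypt a received payload and reject stale (replayed) timestamps."""
    nonce, ciphertext = payload[:12], payload[12:]
    user_id, ts = struct.unpack(">IQ", AESGCM(KEY).decrypt(nonce, ciphertext, None))
    if abs(time.time() - ts) > MAX_SKEW:
        raise ValueError("stale payload -- possible replay")
    return user_id

print(verify_payload(make_payload(42)))     # prints 42
```

Because the mode is authenticated, a spoofed or bit-flipped payload fails to decrypt at all rather than yielding a bogus identifier; the remaining work is key distribution and fitting the payload into whatever advertisement or characteristic you use.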
Related
I don't think this is possible, but I'll ask anyway. Here is what I am trying to do:
I have an HTML5 game that users play in their browser. When the game is over, they see their final score. I want to be able to send that score to the server in an encrypted format. I don't want the players to be able to reverse engineer the server call and set their score higher than what they actually earned. Is there a way to encrypt this and make it impossible for the player to reverse engineer it?
Short Answer: No, what you want to achieve cannot be done, but not for the reasons you think.
Long Answer: You can most certainly encrypt the final score and send it to your server. You can even do this in a way that means that the user couldn't hope to decrypt it once encrypted.
The flaw lies in the fact that the user can encrypt whatever they like in the first place. Let's say you send the encrypted score to the server in an HTTP POST request at the end of the game. Nothing stops the user from taking apart your JS, finding the public encryption key and submitting that same POST request without ever actually playing your game.
To actually solve your problem: the game must be controlled on the server. The client side of the game must simply send input actions, which are then interpreted on the server. Since the game state is only ever modified by the server, no fake scores can be generated.
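As a minimal sketch of that idea (hypothetical action names and point values, not a full game server), the client only submits the actions it performed, and the server recomputes the score from its own rules, so any "score" field sent by the client is simply ignored:

```python
# Hypothetical rules table: what each client-reported action is worth.
POINTS = {"collect_coin": 10, "defeat_enemy": 50}

def score_from_actions(actions):
    """Replay the client's reported actions against the server's own rules."""
    score = 0
    for action in actions:
        if action not in POINTS:            # reject anything the rules don't allow
            raise ValueError(f"illegal action: {action}")
        score += POINTS[action]
    return score

# The POST body would contain something like this list of actions:
print(score_from_actions(["collect_coin", "collect_coin", "defeat_enemy"]))  # 70
```

A real server would also validate that the reported sequence of actions is actually achievable in the game; otherwise a cheater can simply report a long list of high-value actions.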
My understanding of HMAC is that it can help to verify the integrity of encrypted data before the data is processed i.e. it can be used to determine whether or not the data being sent to a decryption routine has been modified in any way.
That being the case, is there any advantage in incorporating it into an encryption scheme if the data is never transmitted outside of the application generating it? My use case is quite simple - a user submits data (in plaintext) to the scripts I've written to store customer details. My scripts then encrypt this data and save it to the database, and my scripts then provide a way for the user to retrieve the data and decrypt it based on the record ID they supply. There is no way for my users to send encrypted data directly to the decryption routine and I don't need to provide an external API.
Therefore, is it reasonable to assume that there is a chain of trust in the application by default because the same application is responsible for writing and retrieving the data? If I add HMAC to this scheme, is it redundant in this context or is it best practice to always implement HMAC regardless of the context? I'm intending to use the Defuse library but I'd like to understand what the benefit of HMAC is to my project.
Thanks in advance for any advice or input :)
First, you should understand that there are attacks that allow an attacker to modify encrypted data without decrypting it. See Is there an attack that can modify ciphertext while still allowing it to be decrypted? on Security.SE and Malleability attacks against encryption without authentication on Crypto.SE. If an attacker gets write access to the encrypted data -- even without any decryption keys -- they could cause significant havoc.
You say that the encrypted data is "never transmitted outside of the application generating it", but in the next two sentences you say that you "save it to the database", which appears (to me) to be something of a contradiction. Trusting the processing of encrypted data in memory is one thing; trusting its serialization to disk is quite another, especially if it's done by another program (such as a database system) and/or on a separate physical machine (now or in the future, as the system evolves).
The significant question here is: would it ever be possible for an attacker to modify or replace the encrypted data with alternate encrypted data, without access to the application and keys? If the attacker is an insider and runs the program as a normal user, then it's not generally possible to defend your data: anything the program allows the attacker to do is on the table. However, HMAC is relevant when write access to the data is possible for a non-user (or for a user in excess of their normal permissions). If the database is compromised, an attacker could possibly modify data with impunity, even without access to the application itself. Using HMAC verification severely limits the attacker's ability to modify the data usefully, even if they get write access.
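To illustrate what the HMAC buys you, here is a generic encrypt-then-MAC sketch in Python (my own illustration, not the Defuse library's actual on-disk format, which already handles this for you). The tag is checked before decryption, so a row that was modified in the database is rejected instead of being decrypted into attacker-controlled plaintext:

```python
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

ENC_KEY, MAC_KEY = os.urandom(32), os.urandom(32)    # two independent keys

def encrypt_record(plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(ENC_KEY), modes.CTR(nonce)).encryptor()
    ciphertext = enc.update(plaintext) + enc.finalize()
    tag = hmac.new(MAC_KEY, nonce + ciphertext, hashlib.sha256).digest()
    return nonce + ciphertext + tag                   # this is what gets stored in the DB

def decrypt_record(blob: bytes) -> bytes:
    nonce, ciphertext, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(MAC_KEY, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):        # verify *before* decrypting
        raise ValueError("record was modified outside the application")
    dec = Cipher(algorithms.AES(ENC_KEY), modes.CTR(nonce)).decryptor()
    return dec.update(ciphertext) + dec.finalize()
```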
My OCD usually dictates that implementing HMAC is always good practice, if for no other reason than to remove the warning from logs.
In your case I do not believe there is a defined upside to implementing HMAC other than ensuring the integrity of the plain text submission. Your script may encrypt the data but it would not be useful in the unlikely event that bad data is passed to it.
On the client device, a synced Realm can be set up with an encryption key that's unique to the user and stored in the device keychain, so data is stored encrypted on the client.
(related question: Can "data at rest" in the Realm Mobile Platform be encrypted?)
Realm Object Server and the clients can communicate via TLS, so data is encrypted in transit.
But the Realm Object Server does not appear to store data using encryption, since an admin user is able to access all the database contents via Realm Browser (https://realm.io/docs/realm-object-server/#data-browser).
Is it possible to set up the Realm Mobile Platform so user data is encrypted end-to-end, such that no one but the user (not even server admins) has access to the decryption key?
Due to the way we handle conflict resolution, we currently are unable to provide end-to-end encryption, as you correctly deduced. Let's go a tiny bit into detail with regards to the conflict resolution.
In order to handle conflicts the way we do, we use something called operational transformation. This means that instead of sending the data over directly, the client tells the server the intent of the change, rather than the result. For example, when two users edit a text field, we would tell the server insert(data='new text', offset=0) because the first user prepended data at the beginning of the text field, and insert(data='some more stuff', offset=10) because the second user added data in the middle of the field. These two separate operations allow the server to uniquely resolve what happened, and have conflictless resolution of the two writes.
This also means that if we encrypt everything, the server would be unable to handle this conflict resolution.
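As a toy illustration of why the server needs to see the operations in the clear (my own simplification, not Realm's actual algorithm), merging two concurrent inserts requires the server to read their offsets and data so it can rebase one against the other:

```python
def apply_insert(text, data, offset):
    return text[:offset] + data + text[offset:]

def transform(op, applied_op):
    """Rebase an insert against another insert that was already applied."""
    data, offset = op
    applied_data, applied_offset = applied_op
    if applied_offset <= offset:        # the earlier insert pushed our position right
        offset += len(applied_data)
    return data, offset

doc = "the quick brown fox."
op_a = ("new text ", 0)                 # first user prepends at the beginning
op_b = ("some more stuff ", 10)         # second user inserts in the middle, concurrently

doc = apply_insert(doc, *op_a)
doc = apply_insert(doc, *transform(op_b, op_a))
print(doc)                              # both intents preserved, no conflict
```

If the payloads were end-to-end encrypted, the server would only see opaque blobs and could not compute this rebasing, which is exactly the limitation described above.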
This being said, that's for the current version. We do have a number of thoughts on how we could handle this in the future, while providing some degree of encryption. Mainly this would mean more work on the client, and maybe finding a new algorithm that would allow us to tell the client the intent and let the client figure out how to merge everything. This is a quadratic problem, though, so we're hesitant to put too much work on the client side, as it could really drain the battery.
That might be acceptable for some users, which is why we're looking into it. Basically, there will be a trade-off. As the old adage goes: fast, secure, convenient: pick two. We just have to figure out how to handle this properly.
I just opened a feature request around possibly using Tresorit's ZeroKit to solve the end-to-end encryption question posed. Sounds like the conflict resolution implementation will still cause an issue though, but maybe there is a different conflict resolution level that can be applied for those that don't need the realtime dynamic editing of individual data fields (like patient health data, where only a single clinician ever really edits a record at any given time).
https://github.com/realm/realm-mobile-platform/issues/96
I can't find any flowcharts showing how the communication works between peers. I know how it works in RADIUS with PAP enabled, but it appears that with MS-CHAPv2 there's a whole lot more work involved.
I'm trying to develop a RADIUS server to receive and authenticate user requests. Please help me in the form of information, not code.
MSCHAPv2 is pretty complicated and is typically performed within another EAP method such as EAP-TLS, EAP-TTLS or PEAP. These outer methods encrypt the MSCHAPv2 exchange using TLS. The figure below, for example, shows a PEAP flowchart where a client or supplicant establishes a TLS tunnel with the RADIUS server (the Authentication Server) and performs the MSCHAPv2 exchange.
The MSCHAPv2 exchange itself can be summarized as follows:
The AS starts by generating a 16-byte random server challenge and sends it to the Supplicant.
The Supplicant also generates a random 16-byte peer challenge. Then the challenge response is calculated based on the user's password. This challenge response is transmitted back to the AS, along with the peer challenge.
The AS checks the challenge response.
The AS calculates a peer challenge response based on the password and peer challenge.
The Supplicant checks the peer challenge response, completing the MSCHAPv2 authentication.
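If it helps to see the arithmetic behind the AS's check in step 3, here is a rough sketch following the RFC 2759 procedure names (my own condensed version, assuming pycryptodome for MD4 and DES; the thesis referenced in the next paragraph gives the authoritative description). Given the username, the cleartext password, the 16-byte authenticator challenge and the 50-byte MS-CHAP2-Response attribute value, the AS recomputes the 24-byte NT-Response and compares:

```python
import hashlib
from Crypto.Cipher import DES     # pycryptodome
from Crypto.Hash import MD4       # pycryptodome (MD4 is not in hashlib everywhere)

def challenge_hash(peer_challenge, authenticator_challenge, username):
    # ChallengeHash(): first 8 bytes of SHA1(PeerChallenge | AuthenticatorChallenge | UserName)
    return hashlib.sha1(peer_challenge + authenticator_challenge + username).digest()[:8]

def nt_password_hash(password):
    # NtPasswordHash(): MD4 over the UTF-16LE encoded password
    return MD4.new(password.encode("utf-16-le")).digest()

def _des_key(key7):
    # Spread the 56 key bits over 8 bytes; DES ignores the low (parity) bit of each byte
    bits = int.from_bytes(key7, "big")
    return bytes((((bits >> (49 - 7 * i)) & 0x7F) << 1) for i in range(8))

def challenge_response(challenge, password_hash):
    # ChallengeResponse(): zero-pad the hash to 21 bytes and DES-encrypt the
    # 8-byte challenge under each 7-byte third, concatenating the results
    padded = password_hash + b"\x00" * (21 - len(password_hash))
    return b"".join(DES.new(_des_key(padded[i:i + 7]), DES.MODE_ECB).encrypt(challenge)
                    for i in (0, 7, 14))

def verify_nt_response(username, password, authenticator_challenge, ms_chap2_response):
    # MS-CHAP2-Response value layout: Ident(1) Flags(1) Peer-Challenge(16) Reserved(8) NT-Response(24)
    peer_challenge = ms_chap2_response[2:18]
    nt_response = ms_chap2_response[26:50]
    challenge = challenge_hash(peer_challenge, authenticator_challenge, username)
    return challenge_response(challenge, nt_password_hash(password)) == nt_response
```

Note that the challenge fed to DES is not the NAS's 16-byte MS-CHAP-Challenge directly, but the 8-byte ChallengeHash derived from it, the peer challenge and the username. Nothing here is "decrypted": the AS simply recomputes the expected response from the password it knows and compares it with the one it received.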
If you'd like to learn about the details and precise calculations involved, feel free to check out my thesis here. Sections 4.5.4 and 4.5.3 should contain all information you need in order to implement a RADIUS server capable of performing an MSCHAP exchange.
As you can see in the figure, many different keys are derived and used. This document provides a very intuitive insight into their functionality. However, the CSK is not explained in this document. This key is optionally used for "cryptobinding", i.e. in order to prove to the AS that both the TLS tunnel and the MSCHAPv2 exchange were performed by the same peer. It is possible to derive the MSK from only the TLS master secret, but then you will be vulnerable to a relay attack (the thesis also contains a research paper which gives an example of such an attack).
Finally, the asleap readme gives another good and general step by step description of the MSCHAPv2 protocol, which might help you further.
Unfortunately I can't add any more comments; the requirement is for me to have 50 reputation.
To your request:
My lab environment is an SSL-VPN used with RADIUS as the AS.
Constructed with the following 3 items:
End-User -> there's no 'client' installed; the connection starts through a web portal. The client = web browser.
NAS -> This is the machine that provides the web portal (the place the End-User enters the Username & Password) AND acts as a RADIUS client, transferring requests to the AS.
AS (RADIUS) -> This is me. I receive the access-requests and validate the username & password.
So in accordance with that, what I receive in the Access-Request is:
MS-CHAP2-Response:
7d00995134e04768014856243ebad1136e3f00000000000000005a7d2e6888dd31963e220fa0b700b71e07644437bd9c9e09
MS-CHAP-Challenge: 838577fcbd20e293d7b06029f8b1cd0b
According to RFC2548:
MS-CHAP-Challenge This Attribute contains the challenge sent by a NAS to a Microsoft Challenge-Handshake Authentication Protocol (MS-CHAP) user. It MAY be used in both Access-Request and Access-Challenge packets.
MS-CHAP2-Response This Attribute contains the response value provided by an MS-CHAP-V2 peer in response to the challenge. It is only used in Access-Request packets.
If I understand correctly (and please bear with me, this is all very new to me), based on your flowchart the AS is also the Authenticator who initiates the LCP.
In my case, the LCP is initiated by the NAS, so my life is made simpler and I only get the Access-Request without needing to create the tunnel.
My question now is: how do I decrypt the password? I understand there's a random 16-byte challenge, but that is held by the NAS.
From my recollection, I only need to know the shared secret and decrypt the whole thing using the algorithm described in your thesis.
But the algorithm is huge; I've tried different sites to see which part of it the AS is supposed to use, and failed in each attempt to decrypt.
Since I can't ask for help any more in this thread, I can only say this little text box cannot hold the amount of gratitude I have for your help; I'm truly lucky to have you see my thread.
Do email me; my contact info is in my profile.
Also, for some reason I can't mark your answer as a solution.
"is typically performed within another EAP method such as EAP-TLS, EAP-TTLS or PEAP."
Well...
I have a Windows 2008 RADIUS server here, configured with NO EAP, only MS-CHAPv2 encryption, to replace PAP.
This is why a lot of what you said and what I said wasn't adding up...
I'm not a MITM, I'm the AS, and my NAS (the one who knocks) is the RADIUS client/Authenticator.
When the user enters the username & password, a random encryption, which I'm now on the lookout for, is created with MS-CHAPv2, and all of the above is irrelevant.
With the items received from the Authenticator, which again are:
- Username, MS-CHAP-Challenge, MS-CHAP2-Response
The AS performs a magical ceremony to come up with the following:
- Access-Accept
- MPPE-Send-Key
- MPPE-Recv-Key
- MS-CHAP2-Success
- MS-CHAP-Domain
This is from a working scenario, where I have a RADIUS server, a RADIUS client and a user.
A NOT working scenario is the one where I am the RADIUS Server (AS), because that's my goal: building a RADIUS server, not a MITM.
So all I have left is finding out what decryption algorithm is needed for those, and how.
I did some research on the topic but could not find anything similar to my question. So I hope some of you great guys may help me out.
I want to use AES128 encryption (CFB mode) for the networking in my application between two individual clients. The data being exchanged consists only of textual strings with a specific structure; for example, the first bytes always tell the recipient the kind of message they are receiving, so they can process it. With AES I want to ensure the confidentiality of the message, but now the question of "integrity" arises.
Normally you would consider using a MAC. But isn't it guaranteed that nobody has altered the message if the recipient is able to decrypt it correctly, i.e. that the message can be used correctly in their application because of the string's format? Wouldn't a third party altering (even 1 bit of) the encrypted message result in garbage during decryption?
Furthermore, let's assume that the application is a multi-party peer-to-peer game, where two of the players are communicating with each other on a private but AES-encrypted channel. Now the originator of the message is not playing fair and intentionally sends a fraudulent encrypted message to convey the impression that the message has been altered by a random third party (to force a player to quit). Now the recipient would have no chance to determine whether the message has been altered or the sender is acting fraudulently, am I right? So integrity would not be of much use in such a situation and could be neglected?
This may sound like an odd and far-fetched example, but it's something I recently encountered in a similar application, and I am asking myself whether there is a solution to the problem or whether I have misunderstood the basic idea of AES encryption.
As you said, you may detect changes in the format of the plain text message after encryption. But at what level would it go wrong? Do you have something that is large and redundant enough to be tested? What are you going to do if the altered plain text results in some obscure exception somewhere down the line? With CFB (like most modes) an attacker can make sure that only the last part of the message is altered, for instance, and leave the first blocks intact.
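To make the malleability concrete, here is a small demonstration (my own example, using Python's `cryptography` package rather than your actual client code): flipping bits in the final ciphertext block of a CFB message flips exactly the corresponding plaintext bits, and decryption "succeeds" without complaint:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), os.urandom(16)
message = b"SCORE:000010"                        # hypothetical structured game message

enc = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
ciphertext = enc.update(message) + enc.finalize()

tampered = bytearray(ciphertext)
tampered[-1] ^= ord("0") ^ ord("9")              # attacker flips bits in the last byte only

dec = Cipher(algorithms.AES(key), modes.CFB(iv)).decryptor()
print(dec.update(bytes(tampered)) + dec.finalize())   # b'SCORE:000019' -- still "valid"
```

With longer messages the attacker can still target the tail of the message and leave the leading blocks intact, exactly as described above.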
And you are worried about cheats as well.
In my opinion, you are much better off using a MAC or HMAC algorithm, or a cipher mode that provides integrity/authentication on top of confidentiality (EAX or GCM for instance). If you are sure nobody else has the symmetric key, an authentication check (such as a MAC) will prove that the data has been signed by the correct key. So no, the user cannot claim that the data has been changed in transport if the authenticity checks succeed.
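For contrast, here is a short sketch of the authenticated-mode suggestion (again my own example with the `cryptography` package): with AES-GCM, the same kind of tampering makes decryption fail outright, so the "was it altered?" question is answered by the cipher itself rather than by inspecting the message format:

```python
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"SCORE:000010", None)

tampered = bytearray(ciphertext)
tampered[-1] ^= 0x01                      # any modification, however small

try:
    AESGCM(key).decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    print("tampering detected, message rejected")
```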
The next question becomes: can you trust that the symmetric key is only in the possession of the other player? For this you might want to use some sort of PKI scheme (using asymmetric keys) together with a key exchange mechanism such as DH. But that is for later, if you decide to go that way.
This is a bit out of my depth, but...
Yes, modifying the encrypted bytes of an AES-encrypted message should cause the decryption to fail (this has been my experience with the C# implementation). The client who decrypts will know the message is invalid. EDIT: apparently this is not the case. Looks like you'd need a CRC or hash to verify the message was successfully decrypted. The more serious problem is if the secret AES key is leaked (and in a peer-to-peer environment, the key has to be sent so the receiver can decrypt the message at all). Then a 3rd party can send messages as if they were a legitimate client, and they will be accepted as OK.
Integrity is much harder. I'm not entirely sure how robust you want things to be, but I suspect you want to use public key encryption. This allows you to include a hash of the message (like a signature or MAC) based on the private key to assert the message's validity. The receiver uses the public key to verify the hash, and thus confirm the original message is OK. The main advantage of public key encryption over symmetric encryption like AES is that you don't have to send the private key, only the public key. This makes it much harder to impersonate a client. SSL/TLS uses public key encryption.
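Here is a brief sketch of that signing idea (my own example using Ed25519 from Python's `cryptography` package, not a recommendation of a specific scheme): the sender signs with a private key it never shares, and the receiver verifies with the public key, so knowing the public key alone is not enough to forge messages:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # stays with the sender
public_key = private_key.public_key()        # can be shared freely

message = b"SCORE:000010"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)    # raises InvalidSignature on any mismatch
    print("message accepted")
except InvalidSignature:
    print("message rejected")
```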
In any case, once you have identified a client sending invalid messages, you're in the world of deciding whether to trust that client or not. That is, is the corruption due to malicious behaviour (what you're worried about)? Or a faulty client implementation (incompetence)? Or a faulty communications link? And this is where encryption (or at least my knowledge of it) won't help you any more!
Additionally, regarding integrity:
If you assume no one else has access to your secret key, a CRC, hash, or HMAC would all suffice to ensure you detect changes. Simply take the body of your message, calculate the CRC, hash, or whatever, and append it as a footer. If the hash doesn't match when you decrypt, the message has been altered.
The assumption that the secret key remains secret is quite reasonable. Especially if after some number of messages you generate new ones. SSH and WiFi's WPA both generate new keys periodically.
If you can't assume the secret key is secret, then you need to go to PKI to sign the message. With the AES key in the hands of a malicious 3rd party, they'll just generate whatever messages they want with it.
There may be some mileage in including a sequence number in your message based on an RNG. If you use the same RNG and the same seed for both parties, they should both be able to predict what sequence number comes next. A 3rd party would need to intercept the original seed, and know how many messages have been sent, in order to send valid but forged messages. (This assumes no messages can ever be lost or dropped.)
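A toy illustration of that shared-seed idea (not a substitute for a real MAC, and it breaks as soon as a message is lost): both peers walk the same deterministic number stream, so a message carrying the wrong "next" value is rejected:

```python
import random

SHARED_SEED = 0xC0FFEE                      # hypothetical value agreed out of band

class SequenceStream:
    def __init__(self, seed):
        self._rng = random.Random(seed)

    def next_value(self):
        return self._rng.getrandbits(32)

sender, receiver = SequenceStream(SHARED_SEED), SequenceStream(SHARED_SEED)

message = {"body": "MOVE:e2e4", "seq": sender.next_value()}
assert message["seq"] == receiver.next_value()   # accepted; a forger who never saw the
                                                 # seed or the message count fails here
```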