iBeacon Data Encryption (Advertising Mode) - encryption

My goal is to encrypt iBeacon data (UUID + MajorID + MinorID: 20 bytes), preferably with a timestamp included (encrypted and in the clear), which sums up to around 31/32 bytes.
As you have already found out, this exceeds the maximum allowed iBeacon Data (31 bytes, including iBeacon prefix and tx power).
Thus, encryption only works by creating a custom beacon format instead of the iBeacon format?
Talking about encryption: I would think about using a symmetric cipher in CBC mode (to avoid deciphering due to repetition and, most importantly, to avoid ciphertext manipulations resulting in a changed UUID or Major-/MinorID). It's no problem to make the IV (initialization vector) public (unencrypted), right?
Remember, the iBeacons work in advertising mode (transfer only), without being connected to beforehand, thus I am not able to exchange an initialization vector (IV) before any data is sent.
Which considerations should be made in choosing the most appropriate cipher algorithm? Is AES-128 OK? With or without a padding scheme? I also thought about an AES-GCM construction, but I don't think it's really necessary/applicable due to the advertising mode used. How should I exchange session tokens? Moreover, there's no real session in my case: the iBeacons send data 24/7, open-ended, without any real initialization of a connection.
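For illustration, here is a rough Python sketch of the AES-128-CBC-with-public-IV idea above (using the `cryptography` package; the key and payload are made-up placeholders). It mainly shows the size problem: IV plus padded ciphertext is already 48 bytes, well beyond the 31-byte advertisement.

import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)        # per-deployment AES-128 key (placeholder)
payload = os.urandom(20)    # UUID (16) + MajorID (2) + MinorID (2)

iv = os.urandom(16)         # the IV itself may be public, but it must be transmitted
padder = padding.PKCS7(128).padder()
padded = padder.update(payload) + padder.finalize()   # 20 bytes -> 32 bytes

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()

# 16-byte IV + 32-byte ciphertext = 48 bytes: larger than the entire 31-byte
# advertising payload, before the iBeacon prefix and TX power byte are added.
print(len(iv) + len(ciphertext))   # 48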
Suppose a system consisting of 100 iBeacons, 20 devices and 1 application. Each iBeacon sends data periodically (e.g. every 500 ms), which is received by nearby devices via BLE; those devices forward the data to an application via UDP.
So the system overview relation is:
n iBeacons -(ble)- k devices -(udp)- 1 Application
Is it acceptable to use the same encryption key on each iBeacon? If I worked with a tuple (iBeacon ID / encryption key), I would additionally have to add the iBeacon ID to each packet in order to be able to look up the key in a dictionary.
Should the data be decrypted on the device or only later in the application?

You can look at the Eddystone-EID spec to see how Google tried to solve a similar problem.
I'm not an expert on all the issues you bring up, but I think Google offers a good solution for your first question: how can you get your encrypted payload to fit in the small number of bytes available to a beacon packet?
Their answer is: you don't. They solve this problem by truncating the encrypted payload to make a hash, and then using an online service to "resolve" this hash and see if it matches any of your beacons. The online service simply performs the same hashing algorithm against every beacon you have registered, and sees if it comes up with the same value for the time period. If so, it can convert the encrypted hash identifier into a useful one.
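A rough Python sketch of that resolve-by-recomputation idea (this is not the actual Eddystone-EID algorithm; the AES-ECB PRF, the 8-byte truncation and all names are assumptions made purely for illustration):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ephemeral_id(beacon_key: bytes, time_counter: int) -> bytes:
    """Encrypt the quantized time counter with the beacon's secret key and truncate."""
    block = time_counter.to_bytes(16, "big")
    enc = Cipher(algorithms.AES(beacon_key), modes.ECB()).encryptor()
    return (enc.update(block) + enc.finalize())[:8]   # truncated value broadcast in the ad

# Registration: the resolving service stores one secret key per beacon.
registered = {f"beacon-{i}": os.urandom(16) for i in range(100)}

def resolve(received_eid: bytes, time_counter: int):
    """The resolver recomputes the EID for every registered beacon and compares."""
    for beacon_id, key in registered.items():
        if ephemeral_id(key, time_counter) == received_eid:
            return beacon_id
    return None

# The beacon broadcasts only the truncated ciphertext; the service maps it back.
some_key = registered["beacon-42"]
eid = ephemeral_id(some_key, time_counter=123456)
print(resolve(eid, 123456))   # -> "beacon-42"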
Before you go too far with this approach, be sure to consider @Paulw11's point that on iOS devices you must know the 16-byte iBeacon UUID up front, or the operating system blocks you from reading the rest of the packet. For this reason, you may have to stick with the Android platform if using iBeacon, or switch to the similar AltBeacon format, which does not have this restriction on iOS.

Thanks for your excellent post. I hadn't heard about the Eddystone spec before; it's really something to look into, and it has a maximum payload length of 31 bytes, just like iBeacon. In the paper it is written that, in general, it is an encryption of a timestamp value (counted since a pre-defined epoch). Did I get the processing steps right?
Define secret keys (EIKs) for each beacon and share those keys with a resolver or server (for a setup without an online resource);
Create lookup tables on the server with a PRF (like AES-CTR);
Encrypt the timestamp on the beacon with the PRF (i.e. AES-CTR?) and send the data in Eddystone format to the resolver/server;
The server compares the received ciphertext with the entries in the lookup table and thus knows the beacon ID, because each beacon has a unique secret key, which leads to a different ciphertext.
Would it work the same way with AES-EAX? All I'd have to put into the algorithm is an IV (nonce value) as well as the key and message? The other end would build the same tuple to compare ciphertext and tag?
Question: What about the IRK (Identity Resolving Key), what is it? Which nonce would be used for AES-CTR? Would each beacon use the same nonce? It has to be known to the resolver/server, too.
What about time synchronization? There's no real connection between beacon and resolver/server, but is it assumed that both ends are synchronized with the same time server? Or how should beacon and resolver/server arrive at the same ciphertext if one end thinks it is the year 2011 while the resolver has arrived in 2017?

Related

Private/Public key encryption algorithm for short messages, giving short results via ED25519?

I have short messages (<= 256 bits) that need to be encrypted and published as a (HTTP URL) QR code, along with the public key(s). Because of the QR requirement the result should also stay 256 bits long: with the scheme, server name, and base64 encoding, the resulting URL is already quite long, and so the QR code easily becomes "too" big.
RSA is out of the question for that key size.
libsodium provides crypto_box functions using ED25519; but for these I need to transport the nonce (24 bytes) as well, and the result is e.g. 48 bytes, which already makes the QR code a bit unwieldy.
Furthermore, using one (constant) key pair and another randomly generated per message means the random key needs to be embedded as well, enlarging the result.
Using a single key pair doesn't work: if I encrypt with sec1 and pub1, I need to publish exactly these values for decrypting too.
So I'm pondering using plain, raw ED25519 en- and decryption. Are there any pitfalls like with RSA (padding, bad keys (like pub exp 3)) that I need to look out for?
My plan would be to take the input, do an SHA256 of it, use the hash value to pad the input to 256 bits, and then do a plain ED25519 encryption. (I'll prepend a key marker to the result to make key rotation possible.)
What can go wrong? After all, all the complexity in libsodium has to have a reason, right?
Thanks a lot for any help!

Question: Homomorphic encryption can "read" the encrypted word?

This is a question I have. I am not an expert in this subject, so please be kind in your answer.
I understood that homomorphic encryption allows a message to be read as if it had been decrypted, but without removing the protective layer that the encryption process placed on it.
Let's suppose the word "TESTE" is encrypted, and a homomorphic operation is performed on that encrypted word.
My question is:
Will homomorphic encryption understand the "meaning" of the encrypted text? Will it know that the encrypted word is "TESTE"?
Thank you.
Let me give you a different example. I'm not sure the example can be implemented with today's systems. But it illustrates it anyway:
Party A has 10 numbers and encrypts them.
The encrypted numbers are given to party B.
Party B calculates the sum of the 10 numbers. The result is encrypted data.
Party A is given the encrypted result.
Party A decrypts the result.
The main feature is that party B does not need to decrypt the 10 numbers. Furthermore, the encryption is maintained throughout the entire sum calculation. Therefore, party B has neither knowledge of the input numbers nor of the calculated sum, because all operations are carried out on the encrypted data.
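To make this concrete, here is a hedged sketch using the python-paillier (`phe`) package, an additively homomorphic scheme rather than full FHE, but it shows exactly this sum-without-decrypting flow:

from phe import paillier   # pip install phe (python-paillier)

# Party A: generates a key pair and encrypts its 10 numbers.
public_key, private_key = paillier.generate_paillier_keypair()
numbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
encrypted = [public_key.encrypt(x) for x in numbers]

# Party B: only sees ciphertexts, yet can add them together.
encrypted_sum = encrypted[0]
for c in encrypted[1:]:
    encrypted_sum = encrypted_sum + c     # homomorphic addition on ciphertexts

# Party A: decrypts the result; B never saw the inputs or the sum in the clear.
print(private_key.decrypt(encrypted_sum))  # 39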
I understood that homomorphic encryption allows a message to be read as if it had been decrypted.
No. Homomorphic encryption is public-key encryption that allows someone to evaluate functions (technically, circuit evaluation) on the encrypted data without accessing the data. The upside: the client can offload heavy processing to the cloud without worrying that its data is compromised, as long as the scheme is not broken.
To understand FHE, we can look at textbook RSA, which has no padding. Textbook RSA enables multiplication: if you multiply two ciphertexts and then decrypt, you get the product of the plaintexts. So if you want a multiplication of your data performed in the cloud, just send your data encrypted with RSA.
RSA supports only multiplication and no other operation; this is called partially homomorphic.
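A tiny, purely illustrative example of that multiplicative property, with textbook (unpadded) RSA and toy-sized numbers:

# Toy RSA parameters (far too small for real use, only to show the property).
p, q = 61, 53
n = p * q                      # 3233
e, d = 17, 2753                # e*d = 1 mod lcm(p-1, q-1)

m1, m2 = 7, 12
c1 = pow(m1, e, n)             # Enc(m1)
c2 = pow(m2, e, n)             # Enc(m2)

# Multiply the ciphertexts only; decrypting yields the product of the plaintexts.
product_cipher = (c1 * c2) % n
assert pow(product_cipher, d, n) == (m1 * m2) % n
print(pow(product_cipher, d, n))   # 84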
There are other public-key cryptosystems that support only one operation, for example XOR. This can be used to verify fingerprints in the cloud without revealing the data to the cloud.
If two operations (such as addition and multiplication) are supported, then it is called fully homomorphic, and we can build an arbitrary circuit, in theory.
The main idea is to encrypt your data (the input for the calculation) with a semantically secure scheme, then send it, together with the circuit (the operation you want performed), to the cloud to be evaluated under your public key. The cloud evaluates the circuit and returns the result to you. Only you can decrypt the returned value, with your private key, to obtain the result.
The main point is that the calculation is done without accessing the data. As long as the cryptographic primitives are not broken, no one will access your data.
Note: The breakthrough of Gentry's seminal work was finding a novel way to deal with the noise, which grows with each multiplication.

using tiny-AES-c for encrypted wireless link

I recently came across an AES implementation in C for embedded systems and have been looking at using it in a wireless point-to-multipoint link (or point-to-point for starters).
The code compiles correctly on my platform and it seems to give the correct results when using the test code provided with CTR mode, back to back (this CRT mode is what I require, as my packets are 120 bytes each and cannot be changed).
Essentially the transmitter Tx should use the tiny-AES encryption function to generate a continuous stream of encrypted packets of 120 bytes each and transmit them with a counter (CTR) attached and the header etc., to be received and decrypted on the other end by the receiver Rx.
It is envisaged the transmitter Tx and receiver Rx use exactly the same tiny-AES code with the same key that is known to me only. This key can be set and would be different on different Tx-Rx pairs.
My question relates to the CRT mode in particular and the use of the following functions both on the Tx and the Rx :
/* Initialize context calling (iv can be NULL): */
void AES_ctx_init(AES_ctx *ctx, uint32_t keylen, const uint8_t *key, const uint8_t *iv);
/* ... or reset IV or key at some point: */
void AES_ctx_set_key(AES_ctx *ctx, uint32_t keylen, const uint8_t *key);
void AES_ctx_set_iv(AES_ctx *ctx, const uint8_t *iv);
So
What is the purpose of AES_ctx_init(), and is this done only once at the start of an encrypt/decrypt session of a secure link?
What is the purpose of the AES_ctx_set_iv() function and, again, is this done only once at the start of the secured link session? The comment "... or reset IV or key at some point" is not clear to me (hence my question): when does the IV need to be reset when using AES in CRT mode?
Do the Tx and the Rx need to use the very same IV, and can this be left as is, or does it need to be changed with every new secured link session? I understand that the secret key would normally be chosen for a Tx/Rx pair and only modified by the user on both when required, but normally it would stay the same for every session. Is this generally correct? Is this the case (or not) for the IV, and at what point does one reset the IV?
In wireless links, due to noise, and if there is no implementation of an FEC, the receiver will be forced to drop packets due to bad CRCs. In my link the CRC is verified for each received 120-byte packet (one by one as they are received); decryption is initiated to recover the original data if the CRC is correct, but the data is dropped if not. What are the implications of this for the stream of encrypted Tx packets, as they keep getting transmitted regardless, since there is no hand-shaking protocol to tell the Tx that the Rx dropped a packet due to a CRC error, if any?
If answers to these questions exist in the form of some further documentation on the tiny-AES I would appreciate some links to it as the notes provided assume people are already familiar with AES etc.
So here's something further to the comments/replies already made. Basically the Tx/Rx pair has a pre-defined packet structure with a certain payload, header, CRC etc., but let's call this the "packet". The payload is what I need to encrypt, and this is fixed to 120 bytes, not negotiable, and it's in the packet.
So what you are saying is that with each transmission of a packet the Tx and Rx will need to change the nonce, and both Tx & Rx need to be using the same nonce with each treatment of a packet?
Say I begin transmission and have packet(1), packet(2) … packet(n) etc.; then with each packet transmitted I need to update the "counter", and both transmitter and receiver need to be synchronized so that both use the same nonce in a session?
This can be problematic when and if, due to noise or interference, the Tx/Rx system loses synchronization as it can happen, and somehow the two independent counters for the nonce are no longer synced and on the same page so-to-speak…
Generally a session would not require more than 2^16 packets on average, so can you see a way around this? As I said, the option of sending a different nonce with every individual packet is totally out of the question, as the payload is already complete and full.
Another way I thought of, and perhaps you can shed some light on whether this may work, is through GPS. Say each Tx and Rx has a GPS module (which is the case); then timing information could be derived from the GPS clock on both Tx & Rx, as both would receive this independently, and some kind of synchronized counter could be rolling to update the nonce on both, say counting from 0 to 2^16 in a session… would you agree? So even if, due to noise/interference, packets are lost by the receiver, the counter would continue updating the nonce with some form of reliable "tick" in the background…
With regards to the source of entropy, well apparently the lampert circuit would provide this for a good PRNG running locally, say for a session of 2^16 packets, but this is not available on my system yet, but could be if we decide to take this further …
What are your thoughts on this as a professional in the field?
Regards,
I'm assuming the Tiny AES you're referring to is tiny-AES-c (though your code is a little different than that code; I can't find anywhere that AES_ctx_set_key is defined). I assume when you say "CRT mode" you mean CTR mode.
What is the purpose of AES_ctx_init() and is this done only once at the start of an encrypt/decrypt session of a secure link ?
This initializes the internal data structures. Specifically it performs key expansion which creates the necessary round keys from the master key. In this library, it looks like it would be called once at the beginning of the session, to set the key and IV.
In CTR mode, it is absolutely critical that the Key+Nonce (IV) combination is never reused on two different messages. (CTR mode doesn't technically have an "IV." It has a "nonce," but it looks exactly like an IV and is passed identically in every crypto library I've ever seen.) If you need to reuse a nonce, you must change the key. This generally means that you will have to keep a persistent (i.e. across system resets) running counter on your encrypting device to keep track of the last-used nonce. There are ways to use a semi-random nonce safely, but you need a high-quality source of entropy, which is often not available on embedded devices.
If you pass NULL as the IV/nonce and reuse the key, CTR mode is a nearly worthless encryption scheme.
Since you need to be able to drop packets cleanly, what you'll need to do is send the nonce along with each packet. The nonce is not a secret; it just has to be unique for a given key. The nonce is 16 bytes long. If this is too much data, a common approach is to exchange the first 12 bytes of the nonce at the beginning of the session, and then use the bottom 4 bytes as a counter starting at 0. If sessions can be longer than 2^32 blocks, then you would need to reset the session after you run out of values.
Keep in mind that "blocks" here are 16 bytes long, and each block requires its own nonce. To fit in 120 bytes, you would probably do something like send the 4 byte starting counter, and then append 7 blocks for n, n+1, n+2, ... n+6.
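A minimal sketch of that layout in Python (using the `cryptography` package instead of tiny-AES-c, purely for illustration; the 12-byte session prefix and 4-byte per-packet starting counter follow the description above):

import os, struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCKS_PER_PACKET = 7                    # 7 * 16 = 112 bytes of payload per packet

key = os.urandom(16)                     # shared Tx/Rx secret
session_prefix = os.urandom(12)          # exchanged once at session start, not secret

def packet_encrypt(payload112: bytes, packet_index: int) -> bytes:
    start_ctr = packet_index * BLOCKS_PER_PACKET                   # first block counter used
    counter_block = session_prefix + struct.pack(">I", start_ctr)  # full 16-byte CTR block
    enc = Cipher(algorithms.AES(key), modes.CTR(counter_block)).encryptor()
    # Packet = 4-byte starting counter (in the clear) + 112 bytes of ciphertext = 116 bytes.
    return struct.pack(">I", start_ctr) + enc.update(payload112) + enc.finalize()

def packet_decrypt(packet: bytes) -> bytes:
    start_ctr, = struct.unpack(">I", packet[:4])    # recovered even after dropped packets
    counter_block = session_prefix + struct.pack(">I", start_ctr)
    dec = Cipher(algorithms.AES(key), modes.CTR(counter_block)).decryptor()
    return dec.update(packet[4:]) + dec.finalize()

pkt = packet_encrypt(os.urandom(112), packet_index=3)
assert len(pkt) == 116
print(len(packet_decrypt(pkt)))   # 112

Because each packet carries its own starting counter, a dropped packet does not desynchronize decryption of later packets.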
Designing this well is a little complicated, and it's easy to do it wrong and undermine your cryptography. If this is important, you should probably engage someone with experience to design this for you. I do this kind of work, but I'm sure there are many people you could find.
So what you are saying is that with each transmission of a packet the Tx and Rx will need to change the nonce, and both Tx & Rx need to be using the same nonce with each treatment of a packet?
Correct. The Tx side will determine the nonce, and the Rx side will consume it. Though they could coordinate somehow other than sending it explicitly, as long as it never repeats, and they always agree. This is one of the benefits of CTR mode; it relies on a "counter" that doesn't necessarily have to be sent.
The clock is nice because it's non-repeating. The tricky part is staying synchronized, while being absolutely certain you don't reuse a nonce. It's easier to stay synchronized if the time window for each nonce is large. It's easier to make sure you never reuse a nonce if the time window is small. Combining a clock-derived nonce with a sequential counter might be a good approach. There will still be ongoing synchronization issues on the receiving side where the receiver needs to try a few different counters (or nonce+counters) to find the right one. But it should be possible.
Note that synchronization assumes that there is a way to distinguish valid plaintext from invalid plaintext. If there is a structure to the plaintext, then you can try one nonce and if the output is gibberish, try another. But if there is no structure to the plaintext, then you have a big problem, because you won't be able to synchronize. By "structure" I mean "can you tell the difference between valid data and random noise".
Using the clock, as long as your window is small enough that you would never reuse a starting nonce, even across resets, you could avoid needing a PRNG. The nonce doesn't have to be random; it just can never repeat.
Note that if you have a 4-byte counter portion, then you have 2^32 blocks, not 2^16.

Need details about EDIV and Rand

Two questions about EDIV and Rand:
I need to be sure I understand exactly how EDIV and Rand are used in the BLE Legacy Pairing. What I understood from the Bluetooth specs is that these are generated during the pairing phase by the slave device and exchanged with the master along with the LTK. The part that I am not sure I understood well is how it is used by the slave device during encryption setup. It seems to me that the specs give freedom to the actual BLE implementation about this: either you use EDIV/Rand as a kind of index to retrieve the associated LTK after receiving the encryption request or you re-generate the LTK each time using EDIV/Rand and a device-specific, never shared, secret value. Is that correct?
Why have they been removed from Secure Connections pairing? How is the association made between the LTK and the peer device in that case? With the Identity Address?
Thanks in advance.
You seem to be correct about all your thoughts.
For LE Legacy Pairing, according to the Bluetooth Core specification, Vol 3 Part H section 2.4.1:
Encrypted Diversifier (EDIV) is a 16-bit stored value used to identify the LTK distributed during LE legacy pairing. A new EDIV is generated each time a unique LTK is distributed.
Random Number (Rand) is a 64-bit stored value used to identify the LTK distributed during LE legacy pairing. A new Rand is generated each time a unique LTK is distributed.
So in the Legacy Pairing the idea is that the EDIV and Random Number pair identifies the LTK that is to be used.
And in section 2.4.2:
2.4.2 Generation of Distributed Keys
Any method of generation of keys that are being distributed that results in the keys having 128 bits of entropy can be used, as the generation method is not visible outside the slave device (see Section Appendix B). The keys shall not be generated only from information that is distributed to the master device or only from information that is visible outside of the slave device.
So yes, you can use any method to generate the EDIV/Rand/LTK values as long as the method provides good security. The specification itself suggests two different methods in Vol 3 Part H Appendix B "Key Management":
The first one is that EDIV is an index to a database of LTKs. I suppose here they mean that a new EDIV and LTK pair is generated for each pairing. To me it seems a bit stupid to not use both EDIV and Rand as a lookup key. A variation mentioned is to use (AES(DHK, Rand) mod 2^16) XOR EDIV (where DHK is a per device-unique secret key) as index for which I don't get their point either.
The second method is to have per-device-unique secret keys DHK and ER. Then for each new pairing you randomly generate a 16-bit DIV and a 64-bit Rand. EDIV is derived as (AES(DHK, Rand) mod 2^16) XOR DIV. The LTK is derived as AES(ER, DIV), which in my opinion is very stupid compared to simply AES(ER, Rand || EDIV) (and letting EDIV be randomly generated instead of DIV), since with their method there can only be 2^16 different keys, which, applying the birthday paradox, means that after around 256 generated keys there will probably be duplicates. (If anyone in the Bluetooth SIG is reading this - please tell me the reason for this weird method.) Anyway, by deriving the LTK from EDIV and Rand you don't need to store the LTK nor the EDIV/Rand values. A thing they seem to have forgotten is that, since a different key size (7-16 bytes) may be negotiated during pairing, you must still store, for each bonded device, the key size the resulting key should be truncated to for later encryptions. Or you can work around this by, for example, hardcoding in 4 bits of the Rand value which key size is to be used.
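A Python sketch of that second method (how the 8-byte Rand and the 2-byte DIV are padded into 16-byte AES blocks is an assumption here; the spec's dm/d1 functions define the exact construction):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes128(key: bytes, block16: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block16) + enc.finalize()

# Per-device secrets that never leave the slave.
DHK = os.urandom(16)   # Diversifier Hiding Key
ER  = os.urandom(16)   # Encryption Root

def new_pairing():
    div  = int.from_bytes(os.urandom(2), "big")               # random 16-bit DIV
    rand = os.urandom(8)                                       # 64-bit Rand sent to master
    mask = int.from_bytes(aes128(DHK, rand.rjust(16, b"\x00")), "big") & 0xFFFF
    ediv = mask ^ div                                          # EDIV sent to master
    ltk  = aes128(ER, div.to_bytes(16, "big"))                 # LTK distributed to master
    return ediv, rand, ltk

def recover_ltk(ediv: int, rand: bytes) -> bytes:
    # On a later encryption request the slave recomputes DIV, hence the LTK, from EDIV/Rand.
    mask = int.from_bytes(aes128(DHK, rand.rjust(16, b"\x00")), "big") & 0xFFFF
    div  = mask ^ ediv
    return aes128(ER, div.to_bytes(16, "big"))

ediv, rand, ltk = new_pairing()
assert recover_ltk(ediv, rand) == ltk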
There is one issue with not having a security database at all and simply relying on the fact that the LTK can always be recovered from the master's EDIV and Rand: you will never be able to "remove" the bond or revoke the key. Also, if you strictly follow GAP, you should know whether you have a usable key to start encryption for the current connection. For example, when responding with an error when reading a GATT characteristic because the characteristic requires an encrypted link, there are different error codes depending on whether an LTK is available or not: "Insufficient Encryption" if an LTK is available and "Insufficient Authentication" if not.
In LE Secure Connections, the LTK is not distributed but contributed, which means it's derived from values from both peers (using a Diffie Hellman function as the core). Therefore one part cannot select the LTK freely. The input values to the LTK generation function are the Diffie Hellman shared secret, random nonces from both peers and the Bluetooth Device Addresses of both peers (the address used in the connection, not the Identity Address). Since the input values take up more space than LTK, it's more feasible to just store the LTK in a database.
Since there must be exactly one LTK per bonded device, there is no longer any point in having EDIV and Rand, so they shall be set to 0 in encryption requests. That means we must now map device to LTK rather than EDIV/Rand to LTK. To do that, the Identity Address is used when looking up the LTK. If a random resolvable address is used for a connection, we must test all stored IRKs to get the corresponding Identity Address. If a public or static random address is used for a connection, that is the Identity Address.
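For that last step, resolving a random resolvable address back to an Identity Address with the stored IRKs, here is a minimal sketch of the ah() comparison (addresses, byte order and the prand marker bits are simplified; all names and values are illustrative):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ah(irk: bytes, prand: bytes) -> bytes:
    """AES-128(irk, padded prand), truncated to 24 bits, as used for RPA resolution."""
    block = prand.rjust(16, b"\x00")
    enc = Cipher(algorithms.AES(irk), modes.ECB()).encryptor()
    return (enc.update(block) + enc.finalize())[-3:]

# Bond database: Identity Address -> (IRK, LTK); values here are placeholders.
bonds = {"AA:BB:CC:DD:EE:01": (os.urandom(16), os.urandom(16)),
         "AA:BB:CC:DD:EE:02": (os.urandom(16), os.urandom(16))}

def resolve_rpa(prand: bytes, hash3: bytes):
    """Try every stored IRK; a match yields the Identity Address (and thus the LTK)."""
    for identity_address, (irk, ltk) in bonds.items():
        if ah(irk, prand) == hash3:
            return identity_address, ltk
    return None

# A peer advertising with an RPA generated from the first bond's IRK:
prand = os.urandom(3)
irk, _ = bonds["AA:BB:CC:DD:EE:01"]
print(resolve_rpa(prand, ah(irk, prand))[0])   # AA:BB:CC:DD:EE:01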

How ciphertext was generated in card reader using DUKPT encryption?

For
`BDK = "0123456789ABCDEFFEDCBA9876543210"`
`KSN = "FFFF9876543210E00008"`
the ciphertext generated was:
`"C25C1D1197D31CAA87285D59A892047426D9182EC11353C051ADD6D0F072A6CB3436560B3071FC1FD11D9F7E74886742D9BEE0CFD1EA1064C213BB55278B2F12"`
which I found here. I know this ciphertext is based on the BDK and KSN, but how was this 128-character ciphertext generated? What are the steps involved, or the algorithm used for this? Could someone explain it in simple steps? I found the documents I came across while googling hard to understand.
Regarding DUKPT, there are some explanations given on the Wiki. If that doesn't suffice, here is a brief explanation.
Quoting http://www.maravis.com/library/derived-unique-key-per-transaction-dukpt/
What is DUKPT?
Derived Unique Key Per Transaction (DUKPT) is a key management scheme. It uses one time encryption keys that are derived from a secret master key that is shared by the entity (or device) that encrypts and the entity (or device) that decrypts the data.
Why DUKPT?
Any encryption algorithm is only as secure as its keys. The strongest algorithm is useless if the keys used to encrypt the data with the algorithm are not secure. This is like locking your door with the biggest and strongest lock, but if you hid the key under the doormat, the lock itself is useless. When we talk about encryption, we also need to keep in mind that the data has to be decrypted at the other end.
Typically, the weakest link in any encryption scheme is the sharing of the keys between the encrypting and decrypting parties. DUKPT is an attempt to ensure that both the parties can encrypt and decrypt data without having to pass the encryption/decryption keys around.
The Cryptographic Best Practices document that VISA has published also recommends the use of DUKPT for PCI DSS compliance.
How DUKPT Works
DUKPT uses one time keys that are generated for every transaction and then discarded. The advantage is that if one of these keys is compromised, only one transaction will be compromised. With DUKPT, the originating (say, a Pin Entry Device or PED) and the receiving (processor, gateway, etc) parties share a key. This key is not actually used for encryption. Instead, another one time key that is derived from this master key is used for encrypting and decrypting the data. It is important to note that the master key should not be recoverable from the derived one time key.
To decrypt data, the receiving end has to know which master key was used to generate the one time key. This means that the receiving end has to store and keep track of a master key for each device. This can be a lot of work for someone that supports a lot of devices. A better way is required to deal with this.
This is how it works in real-life: The receiver has a master key called the Base Derivation Key (BDK). The BDK is supposed to be secret and will never be shared with anyone. This key is used to generate keys called the Initial Pin Encryption Key (IPEK). From this a set of keys called Future Keys is generated and the IPEK discarded. Each of the Future keys is embedded into a PED by the device manufacturer, with whom these are shared. This additional derivation step means that the receiver does not have to keep track of each and every key that goes into the PEDs. They can be re-generated when required.
The receiver shares the Future keys with the PED manufacturer, who embeds one key into each PED. If one of these keys is compromised, the PED can be rekeyed with a new Future key that is derived from the BDK, since the BDK is still safe.
Encryption and Decryption
When data needs to be sent from the PED to the receiver, the Future key within that device is used to generate a one time key and then this key is used with an encryption algorithm to encrypt the data. This data is then sent to the receiver along with the Key Serial Number (KSN) which consists of the Device ID and the device transaction counter.
Based on the KSN, the receiver then generates the IPEK and from that generates the Future Key that was used by the device and then the actual key that was used to encrypt the data. With this key, the receiver will be able to decrypt the data.
Source
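As a concrete illustration of that KSN structure, here is a small Python sketch splitting the 10-byte KSN from the question into its base part (identifying the device/IPEK) and the transaction counter; the 21-bit counter width is the conventional DUKPT layout and is assumed here:

ksn_hex = "FFFF9876543210E00008"          # KSN from the question: 10 bytes / 80 bits
ksn = int(ksn_hex, 16)

COUNTER_BITS = 21                          # rightmost 21 bits = transaction counter
counter = ksn & ((1 << COUNTER_BITS) - 1)
base_ksn = ksn & ~((1 << COUNTER_BITS) - 1)   # counter bits zeroed: identifies the device/IPEK

print(f"counter  = {counter}")             # 8 -> this device's 8th transaction
print(f"base KSN = {base_ksn:020X}")       # FFFF9876543210E00000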
First, let me quote the complete source code you linked, of which you provided only 3 lines...
require 'bundler/setup'
require 'test/unit'
require 'dukpt'
class DUKPT::DecrypterTest < Test::Unit::TestCase
def test_decrypt_track_data
bdk = "0123456789ABCDEFFEDCBA9876543210"
ksn = "FFFF9876543210E00008"
ciphertext = "C25C1D1197D31CAA87285D59A892047426D9182EC11353C051ADD6D0F072A6CB3436560B3071FC1FD11D9F7E74886742D9BEE0CFD1EA1064C213BB55278B2F12"
plaintext = "%B5452300551227189^HOGAN/PAUL ^08043210000000725000000?\x00\x00\x00\x00"
decrypter = DUKPT::Decrypter.new(bdk, "cbc")
assert_equal plaintext, decrypter.decrypt(ciphertext, ksn)
end
end
Now, what you're asking is how the "ciphertext" was created...
Well, the first thing we know is that it is based on the "plaintext", which is used in the code to verify that decryption works.
The plaintext is 0-padded, which fits the encryption that is being tested by verifying decryption with this DecrypterTest TestCase.
Let's look at the encoding code then...
I found the related encryption code at https://github.com/Shopify/dukpt/blob/master/lib/dukpt/encryption.rb.
As the DecrypterTest uses "cbc", it becomes apparent that the encryption uses:
#cipher_type_des = "des-cbc"
#cipher_type_tdes = "des-ede-cbc"
A bit further down in that encryption code, the following solves our quest for an answer:
ciphertext = des_encrypt(...
Which shows we're indeed looking at the result of a DES encryption.
Now, DES has a block size of 64 bits. That's (64/8=) 8 bytes binary, or - as the "ciphertext" is a hex-encoded text representation of the bytes - 16 chars hex.
The "ciphertext" is 128 hex chars long, which means it holds (128 hex chars/16 hex chars=) 8 DES blocks with each 64 bits of encrypted information.
Wrapping all this up in a simple answer:
When looking at "ciphertext", you are looking at (8 blocks of) DES encrypted data, which is being represented using a human-readable, hexadecimal (2 hex chars = 1 byte) notation instead of the original binary bytes that DES encryption would produce.
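A quick sanity check of that arithmetic in Python (the ciphertext string is the one from the question):

ciphertext_hex = ("C25C1D1197D31CAA87285D59A892047426D9182EC11353C051ADD6D0F072A6CB"
                  "3436560B3071FC1FD11D9F7E74886742D9BEE0CFD1EA1064C213BB55278B2F12")

num_bytes = len(ciphertext_hex) // 2       # 2 hex chars per byte -> 64 bytes
num_blocks = num_bytes // 8                # DES block size is 8 bytes -> 8 blocks
print(num_bytes, num_blocks)               # 64 8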
As for the steps involved in "recreating" the ciphertext, I tend to tell you to simply use the relevant parts of the Ruby project your question is based upon. Simply look at the source code. The file at "https://github.com/Shopify/dukpt/blob/master/lib/dukpt/encryption.rb" pretty much explains it all, and I'm pretty sure all the functionality you need can be found in the project's GitHub repository. Alternatively, you can try to recreate it yourself, using the preferred programming language of your choice. You only need to handle 2 things: DES encryption/decryption and bin-to-hex/hex-to-bin translation.
Since this is one of the first topics that comes up regarding this, I figured I'd share how I was able to create the ciphertext. This is the first time I've worked with Ruby, and it was specifically to work with DUKPT.
First I had to get the ipek and pek (same as in the decrypt method). Then unpack the plaintext string and convert the unpacked string to a 72-byte array (again, forgive me if my terminology is incorrect).
I noticed in the dukpt gem author example he used the following plain text string
"%B5452300551227189^HOGAN/PAUL ^08043210000000725000000?\x00\x00\x00\x00"
I feel this string is incorrect, as there shouldn't be a space after the name (AFAIK), so it should be
"%B5452300551227189^HOGAN/PAUL^08043210000000725000000?\x00\x00\x00\x00"
All in all, this is the solution I ended up with; it can encrypt a string and then decrypt it using DUKPT:
class Encrypt
  include DUKPT::Encryption

  attr_reader :bdk

  def initialize(bdk, mode=nil)
    @bdk = bdk
    self.cipher_mode = mode.nil? ? 'cbc' : mode
  end

  def encrypt(plaintext, ksn)
    ipek = derive_IPEK(bdk, ksn)
    pek = derive_PEK(ipek, ksn)
    message = plaintext.unpack("H*").first
    message = hex_string_from_unpacked(message, 72)
    encrypted_cryptogram = triple_des_encrypt(pek, message).upcase
    encrypted_cryptogram
  end

  def hex_string_from_unpacked(val, bytes)
    val.ljust(bytes * 2, "0")
  end
end
boomedukpt FFFF9876543210E00008 "%B5452300551227189^HOGAN/PAUL^08043210000000725000000?"
(my ruby gem, the KSN and the plain text string)
2542353435323330303535313232373138395e484f47414e2f5041554c5e30383034333231303030303030303732353030303030303f000000000000000000000000000000000000
(my ruby gem doing a puts on the unpacked string after calling hex_string_from_unpacked)
C25C1D1197D31CAA87285D59A892047426D9182EC11353C0B82D407291CED53DA14FB107DC0AAB9974DB6E5943735BFFE7D72062708FB389E65A38C444432A6421B7F7EDD559AF11
(my ruby gem doing a puts on the encrypted string)
%B5452300551227189^HOGAN/PAUL^08043210000000725000000?
(my ruby gem doing a puts after calling decrypt on the dukpt gem)
Look at this: https://github.com/sgbj/Dukpt.NET. I was in a similar situation, wondering how to implement DUKPT on the terminal when the terminal has its own function calls which take the INIT key and KSN to create the first key, so my only problem was to make sure the INIT key was generated the same way on the terminal as it is in the above-mentioned repo's code, which was simple enough using the OpenSSL encryption library for 3DES with ECB and applying the appropriate masks.
