Signature falsification and biometrics in 21 CFR Part 11

In 21 CFR Part 11, section 11.200 outlines the electronic signature requirements, notably:
(a) Electronic signatures that are not based upon biometrics shall:
[...]
(3) Be administered and executed to ensure that attempted use of an individual's electronic signature by anyone other than its genuine
owner requires collaboration of two or more individuals.
We interpret this as requiring, among other things, two administrators to reset a user's password (otherwise a single administrator could reset the password on their own and then happily falsify away).
But when biometrics are used, the requirements appear much weaker:
(b) Electronic signatures based upon biometrics shall be designed to
ensure that they cannot be used by anyone other than their genuine
owners.
meaning that, in the case of fingerprint authentication for instance, a single administrator could reset the fingerprints alone and then falsify away.
How did you implement that requirement? We are tempted to just ignore (b), because it appears to be quite weak, and treat biometrics just like passwords.

While I am not a lawyer, I would interpret this as follows:
(a.3) ... "use of an individual's electronic signature by anyone other than its genuine owner requires collaboration of two or more individuals."
Section (a.3) could refer to a guardian and a witness e-signing a document on a patient's behalf.
(b) ... "cannot be used by anyone other than their genuine owners"
(b), on the other hand, clearly states that no one except the patient can use their biometric eSignature.
Or,
(a) eSignatures based on UserID/Passcode may be used by others on the patient's behalf.
(b) eSignatures based on biometrics may not be used by others.
Or, for your example
(a) An eSignature based on UserID/Passcode can be reset, but requires at least a resetter and a witness to ensure that the reset is trustworthy.
(b) An eSignature based on biometrics may only be reset by the user.
In general, I would interpret (b) to have tighter use restrictions than (a).
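To make the two-person rule in (a)(3) concrete, here is a minimal sketch (in Python, with purely hypothetical names and data structures) of a password reset that refuses to execute unless two distinct administrators have approved it:

```python
import hashlib

def reset_password(user, new_password, approvers, admins):
    """Apply a password reset only if two distinct administrators approved it.

    Requiring two approvers means no single administrator can reset a
    password on their own and then falsify signatures with it, which is
    one reading of 11.200(a)(3).
    """
    valid_approvers = set(approvers) & set(admins)
    if len(valid_approvers) < 2:
        raise PermissionError("reset requires approval by two distinct administrators")
    # Placeholder hash; a real system would use a slow KDF such as bcrypt or scrypt.
    user["password_hash"] = hashlib.sha256(new_password.encode()).hexdigest()
    return True
```

Under interpretation (b) above, the analogous biometric re-enrollment function would accept only the genuine owner, not any pair of administrators.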

Related

How to convert natural language to OCL constraint?

I have a class diagram which contains a class named SYSTEM. I have written a constraint for the availability of this system.
For example :
The system should be available 24/7.
Now I want to convert the above statement into an OCL constraint. I am new to OCL. I have searched through some research papers and videos but found nothing specific to availability.
At run time, OCL evaluates and checks a query using the instantaneous system state.
OCL has no support for time, but you may Google for Temporal OCL to see what various researchers are doing. More generally, time is an active research area without solid solutions. Unchanged, OCL can only access an up-time variable and check that it is greater than 24 hours... When you first start, is your system supposed to fail because it has not yet been available 24/7?
If you consider your specific query, it is obviously impossible. In practice designers may analyze the failure rates on one/two/three/...-fold redundant systems with respect to relevant foreseeable failure mechanisms. No system is likely to survive an unforeseen failure, let alone a hostile act by some insider, or a well-informed outsider. Again more realistically, there should be an SLA that accepts a certain amount of down time per year, the smaller the downtime the higher the cost.
At design time, you may use OCL as the formulation of your design constraints. e.g. the mathematics that computes the aggregate failure rate of a single server, or the composite failure rate of redundant servers. But OCL wouldn't be my first choice for complex floating point calculations.
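As a sketch of the kind of design-time availability mathematics mentioned above (ordinary Python rather than OCL, and assuming independent failures, which real deployments rarely have):

```python
def redundant_availability(a, k):
    """Availability of k independent redundant servers, each with availability a.

    The system is down only when all k replicas are down at the same time
    (assuming independent failures), so A = 1 - (1 - a)**k.
    """
    return 1.0 - (1.0 - a) ** k

def downtime_hours_per_year(availability):
    return (1.0 - availability) * 365 * 24

single = redundant_availability(0.99, 1)   # ~87.6 hours of downtime per year
triple = redundant_availability(0.99, 3)   # ~32 seconds of downtime per year
```

This is the sort of calculation behind an SLA's "nines": the constraint is not "available 24/7" but "expected downtime below X hours per year".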

A peer-to-peer and privacy-aware data mining/aggregation algorithm: is it possible?

Suppose I have a network of N nodes, each with a unique identity (e.g. public key) communicating with a central-server-less protocol (e.g. DHT, Kad). Each node stores a variable V. With reference to e-voting as an easy example, that variable could be the name of a candidate.
Now I want to execute an "aggregation" function on all V variables available in the network. With reference to e-voting example, I want to count votes.
My question is completely theoretical (I have to prove a statement; details at the end of the question), so please don't focus on e-voting and all of its security aspects. Do I have to say it again? Don't answer that "a node may have any number of identities by generating more keys", "IPs can be traced back", etc., because that's another matter.
Let's see the distributed aggregation only from the privacy point of view.
THE question
Is it possible, in the general case, for a node to compute a function of variables stored at other nodes without learning the values as linked to the nodes' identities? Have researchers designed such a privacy-aware distributed algorithm?
I'm only dealing with privacy aspects, not general security!
Current thoughts
My current answer is no, so I claim that a central server, which obtains all the Vs and processes them without storing them, is necessary, and that the guarantees that no individual node's data is stored or retransmitted by that server are more legal than technical. I'm asking you to prove that statement false :)
In the e-voting example, I think it's impossible to count how many people voted for Alice and for Bob without asking all the nodes, one by one, "Hey, whom did you vote for?"
Real case
I'm doing research in the Personal Data Store field. Suppose you store your call log in the PDS and somebody wants to compute statistics about the phone calls (i.e. mean duration, number of calls per day, variance, standard deviation) without any aggregated or individual data about a single person being revealed (that is, nobody must learn whom I call, nor even my own mean call duration).
If a trusted broker exists, and everybody trusts it, that node can expose a double getMeanCallDuration() API that first invokes CallRecord[] getCalls() on every PDS in the network and then computes statistics over all the rows. Without the central trusted broker, each PDS exposing double getMyMeanCallDuration() isn't statistically usable (the mean of the means is generally not the overall mean) and, most importantly, reveals the identity of the individual user.
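The mean-of-means pitfall can be made concrete with a small sketch (node names and numbers are illustrative only); it also shows that publishing (sum, count) pairs, rather than raw call records, is already enough for a broker to compute the true mean:

```python
# Two PDS nodes with different numbers of calls (durations in minutes):
calls_per_node = {
    "pds1": [2.0, 4.0],   # per-node mean 3.0 over 2 calls
    "pds2": [10.0],       # per-node mean 10.0 over 1 call
}

node_means = [sum(v) / len(v) for v in calls_per_node.values()]
mean_of_means = sum(node_means) / len(node_means)      # (3 + 10) / 2 = 6.5

all_calls = [d for v in calls_per_node.values() for d in v]
true_mean = sum(all_calls) / len(all_calls)            # 16 / 3, about 5.33

assert mean_of_means != true_mean

# If each node instead publishes only (sum, count), the broker recovers the
# true mean without ever seeing an individual call record:
total = sum(sum(v) for v in calls_per_node.values())
count = sum(len(v) for v in calls_per_node.values())
assert total / count == true_mean
```

Of course, a (sum, count) pair still reveals per-node aggregates, which is exactly the leak the question wants to avoid; that is what the secure-aggregation answers below address.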
Yes, it is possible. There is work that answers your question and solves the problem, given some assumptions. Check the following paper: Privacy, efficiency & fault tolerance in aggregate computations on massive star networks.
You can have a group of nodes perform some computation (for example, a sum) at another node without the participant nodes revealing any data to each other, or even to the node doing the computing. After the computation, everyone learns the result (but no one learns any individual data besides their own, which they knew already anyway). The paper describes the protocol and proves its security (and the protocol itself gives you the privacy level I just described).
As for protecting the identity of the nodes to unlink their value from their identity, that would be another problem. You could use anonymous credentials (check this: https://idemix.wordpress.com/2009/08/18/quick-intro-to-credentials/) or something alike to show that you are who you are without revealing your identity (in a distributed scenario).
The catch of this protocol is that you need a semi-trusted node to do the computation. A fully distributed protocol (for example, in a P2P network scenario) is not that easy, though. Not because of a lack of storage (you can have a DHT, for example), but because you need to replace that trusted or semi-trusted node with the network, and that is where the issues arise: who does the computation? Why that node and not another one? What if there is collusion? Etc.
How about when each node publishes two sets of data x and y, such that
x - y = v
Assuming that I can emit x and y independently, you can correctly compute the overall mean and sum, while every single message on its own is largely worthless.
So for the voting example and candidates X, Y, Z, I might have one identity publishing the vote
+2 -1 +3
and my second identity publishes the vote:
-2 +2 -3
But of course you cannot verify that I didn't vote multiple times anymore.

time-based encryption algorithm?

I've an idea in my mind but I've no idea what the magic words are to use in Google - I'm hoping to describe the idea here and maybe someone will know what I'm looking for.
Imagine you have a database. Lots of data. It's encrypted. What I'm looking for is an encryption scheme whereby, to decrypt, a variable N must at a given time hold the value M (obtained from a third party, like a hardware token), or decryption fails.
So imagine AES - well, AES is just a single key. If you have the key, you're in. Now imagine AES modified in such a way that the algorithm itself requires an extra fact, above and beyond the key: an extra datum from an external source, one that varies over time.
Does this exist? Does it have a name?
This is easy to do with the help of a trusted third party. Yeah, I know, you probably want a solution that doesn't need one, but bear with me — we'll get to that, or at least close to that.
Anyway, if you have a suitable trusted third party, this is easy: after encrypting your file with AES, you just send your AES key to the third party, ask them to encrypt it with their own key, to send the result back to you, and to publish their key at some specific time in the future. At that point (but no sooner), anyone who has the encrypted AES key can now decrypt it and use it to decrypt the file.
Of course, the third party may need a lot of key-encryption keys, each to be published at a different time. Rather than storing them all on a disk or something, an easier way is for them to generate each key-encryption key from a secret master key and the designated release time, e.g. by applying a suitable key-derivation function to them. That way, a distinct and (apparently) independent key can be generated for any desired release date or time.
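The derive-rather-than-store idea can be sketched as follows (HMAC used as the key-derivation function; the master key and date format are of course illustrative):

```python
import hashlib
import hmac

MASTER_KEY = b"third-party master secret"   # held only by the trusted party

def kek_for_release_time(release_time):
    """Derive the key-encryption key for a given release date.

    Using HMAC(master, date) as a key-derivation function means the third
    party stores a single master secret yet can regenerate an apparently
    independent KEK for any release date on demand, instead of keeping
    one key per date on disk.
    """
    return hmac.new(MASTER_KEY, release_time.encode(), hashlib.sha256).digest()

k1 = kek_for_release_time("2025-01-01T00:00Z")
k2 = kek_for_release_time("2025-01-02T00:00Z")
assert k1 != k2 and len(k1) == 32
```

At each release time, the third party simply publishes the corresponding derived key; keys for future dates remain computable only by the holder of the master secret.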
In some cases, this solution might actually be practical. For example, the "trusted third party" might be a tamper-resistant hardware security module with a built-in real time clock and a secure external interface that allows keys to be encrypted for any release date, but to be decrypted only for dates that have passed.
However, if the trusted third party is a remote entity providing a global service, sending each AES key to them for encryption may be impractical, not to mention a potential security risk. In that case, public-key cryptography can provide a solution: instead of using symmetric encryption to encrypt the file encryption keys (which would require them either to know the file encryption key or to release the key-encryption key), the trusted third party can instead generate a public/private key pair for each release date and publish the public half of the key pair immediately, but refuse to disclose the private half until the specified release date. Anyone else holding the public key may encrypt their own keys with it, but nobody can decrypt them until the corresponding private key has been disclosed.
(Another partial solution would be to use secret sharing to split the AES key into the shares and to send only one share to the third party for encryption. Like the public-key solution described above, this would avoid disclosing the AES key to the third party, but unlike the public-key solution, it would still require two-way communication between the encryptor and the trusted third party.)
The obvious problem with both of the solutions above is that you (and everyone else involved) do need to trust the third party generating the keys: if the third party is dishonest or compromised by an attacker, they can easily disclose the private keys ahead of time.
There is, however, a clever method published in 2006 by Michael Rabin and Christopher Thorpe (and mentioned in this answer on crypto.SE by one of the authors) that gets at least partially around the problem. The trick is to distribute the key generation among a network of several more or less trustworthy third parties in such a way that, even if a limited number of the parties are dishonest or compromised, none of them can learn the private keys until a sufficient majority of the parties agree that it is indeed time to release them.
The Rabin & Thorpe protocol also protects against a variety of other possible attacks by compromised parties, such as attempts to prevent the disclosure of private keys at the designated time or to cause the generated private or public keys not to match. I don't claim to understand their protocol entirely, but, given that it's based on a combination of existing and well-studied cryptographic techniques, I see no reason why it shouldn't meet its stated security specifications.
Of course, the major difficulty here is that, for those security specifications to actually amount to anything useful, you do need a distributed network of key generators large enough that no single attacker can plausibly compromise a sufficient majority of them. Establishing and maintaining such a network is not a trivial exercise.
Yes, the kind of encryption you are looking for exists. It is called timed-release encryption, or TRE for short. Here is a paper about it: http://cs.brown.edu/~foteini/papers/MathTRE.pdf
The following is an excerpt from the abstract of the above paper:
There are nowadays various e-business applications, such as sealed-bid auctions and electronic voting, that require time-delayed decryption of encrypted data. The literature offers at least three main categories of protocols that provide such timed-release encryption (TRE).
They rely either on forcing the recipient of a message to solve some time-consuming, non-parallelizable problem before being able to decrypt, or on the use of a trusted entity responsible for providing a piece of information which is necessary for decryption.
I personally like another name, which is "time capsule cryptography", probably coined at crypto.stackoverflow.com: Time Capsule cryptography?.
A quick answer is no: the key used to decrypt the data cannot change over time, unless you decrypt and re-encrypt the whole database periodically (which I suppose is not feasible).
The solution suggested by @Ilmari Karonen is the only feasible one, but it needs a trusted third party; furthermore, once the master AES key is obtained, it is reusable in the future: you cannot use 'one-time pads' with that solution.
If you want your token to be time-based, you can use the TOTP algorithm.
TOTP can help you generate a value for variable N (the token) at a given time M. So the service requesting access to your database would attach a token generated using TOTP. When validating the token at the access provider's end, you check whether the token holds the correct value based on the current time. You'll need a shared key at both ends to generate the same TOTP.
The advantage of TOTP is that the value changes with time and one token cannot be reused.
I have implemented a similar thing for two factor authentication.
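For reference, the TOTP computation described above is small enough to sketch directly from RFC 6238 / RFC 4226 (SHA-1, 30-second steps; the key below is the RFC's own test key, not something to use in production):

```python
import hashlib
import hmac
import struct
import time

def totp(shared_key, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC the current 30-second counter with the shared key."""
    t = time.time() if for_time is None else for_time
    counter = int(t // step)
    mac = hmac.new(shared_key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 Appendix B test vector: key "12345678901234567890", time 59 s
key = b"12345678901234567890"
assert totp(key, for_time=59, digits=8) == "94287082"
```

Both ends derive the same code because they share the key and (approximately) the clock; validators typically also accept the codes for the adjacent time steps to tolerate clock skew.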
"One-time password" could be your Google words.
I believe what you are looking for is called Public Key Cryptography or Public Key Encryption.
Another good word to google is "asymmetric key encryption scheme".
Google that and I'm quite sure you'll find what you're looking for.
For more information, see Wikipedia's article.
An example of this is Diffie–Hellman key exchange.
Edit (putting things into perspective)
The second key can be generated by an algorithm that uses a specific time (for example, the time the data was inserted); that key can then be stored in another location.
As others pointed out, a one-time password may be a good solution for the scenario you proposed.
There's an OTP implementation in C# that you might take a look at: https://code.google.com/p/otpnet/.
Ideally, we want a generator that depends on the time, but I don't know any algorithm that can do that today.
More generally, if Alice wants to let Bob know about something at a specific point in time, you can consider this setup:
Assume we have a public algorithm that has two parameters: a very large random seed number and the expected number of seconds the algorithm will take to find the unique solution of the problem.
Alice generates a large seed.
Alice runs it first on her computer and computes the solution to the problem. It is the key. She encrypts the message with this key and sends it to Bob along with the seed.
As soon as Bob receives the message, Bob runs the algorithm with the correct seed and finds the solution. He then decrypts the message with this key.
Three flaws exist with this approach:
Some computers can be faster than others, so the algorithm has to be made in such a way as to minimize the discrepancies between two different computers.
It requires a proof of work which may be OK in most scenarios (hello Bitcoin!).
If Bob has some delay, then it will take him more time to see this message.
However, if the algorithm is independent of the machine it runs on, and the seed is large enough, it is guaranteed that Bob will not see the content of the message before the deadline.
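A toy version of this proof-of-work scheme, using an iterated hash as the inherently sequential problem (the seed and iteration count are illustrative):

```python
import hashlib

def sequential_key(seed, iterations):
    """Derive a key by iterated hashing.

    Each step depends on the previous one, so the work cannot be
    parallelized away: Bob must spend roughly `iterations` sequential
    hash evaluations before he can decrypt. (With a plain hash chain,
    Alice must do the same work up front; Rivest-style time-lock puzzles
    use repeated squaring modulo N instead, which gives Alice a shortcut
    via the factorization of N.)
    """
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

# Alice calibrates `iterations` to the intended delay on typical hardware,
# encrypts the message under sequential_key(seed, iterations), and sends
# (ciphertext, seed, iterations) to Bob.
key = sequential_key(b"large random seed", 100_000)
assert len(key) == 32
```

This makes the flaws listed above visible: the delay is measured in hash evaluations, so faster hardware shortens it, and the clock only starts when Bob begins computing.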

Symmetric and Asymmetric ciphers, non-repudiation?

I have read on wikipedia "However, symmetric ciphers also can be used for non-repudiation purposes by ISO 13888-2 standard."
Then again and I read on another wiki page, "Non-repudiation, or more specifically non-repudiation of origin, is an important aspect of digital signatures. By this property an entity that has signed some information cannot at a later time deny having signed it. Similarly, access to the public key only does not enable a fraudulent party to fake a valid signature. This is in contrast to symmetric systems, where both sender and receiver share the same secret key, and thus in a dispute a third party cannot determine which entity was the true source of the information."
This means one page says symmetric algorithms provide non-repudiation and another page says they don't, and therefore that they are not used for digital signatures. So do symmetric keys provide non-repudiation or not? It makes sense that they cannot be used for signatures and non-repudiation, since the symmetric keys are identical and thus the system can't distinguish which one belongs to which person, which one was used first, etc. In that case I think symmetric keys are only a tool for confidentiality, not for non-repudiation or digital signatures.
As for non-repudiation, the tricky part is that it is not a technical but rather a legal term, and it causes a lot of misunderstanding when placed in a technical context. The thing is that you can always repudiate anything you have done. And that's why there are courts.
In court, the two parties are confronted and try to prove each other wrong using evidence. Here is where technology comes in, as it allows sufficient electronic evidence to be collected to prove wrong the party that tries to deny a transaction, a message, etc.
And this is exactly what the ISO 13888 series does in part 1: it provides guidelines on what evidence to collect and how to protect it, to maximise your chances of countering a repudiation of an electronic transaction. The standard talks about a number of tokens that serve this purpose, for example: identifiers of both parties, timestamps, message hashes, etc. It then goes into detail on how you should protect these tokens so that they retain their value as evidence.
The two other parts (2 and 3) describe specific cryptographic techniques that can be applied to obtain the tokens. The symmetric ones are just keyed hashes, if I remember correctly (such as HMAC), while the asymmetric one is the digital signature.
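A minimal sketch of such a symmetric (HMAC-based) non-repudiation token issued by a trusted third party; the field names and encoding are illustrative, not the actual ISO 13888-2 format:

```python
import hashlib
import hmac
import json
import time

TTP_KEY = b"key known only to the trusted third party"

def make_nr_token(originator, recipient, message):
    """Issue a non-repudiation-of-origin token in the spirit of ISO 13888-2.

    The trusted third party binds the originator, recipient, timestamp and
    message hash under a MAC that only the TTP can produce or verify, so
    either party can later ask the TTP to confirm the token in a dispute.
    """
    fields = {
        "orig": originator,
        "recip": recipient,
        "ts": int(time.time()),
        "msg_hash": hashlib.sha256(message).hexdigest(),
    }
    payload = json.dumps(fields, sort_keys=True).encode()
    fields["mac"] = hmac.new(TTP_KEY, payload, hashlib.sha256).hexdigest()
    return fields

def verify_nr_token(token):
    fields = {k: v for k, v in token.items() if k != "mac"}
    payload = json.dumps(fields, sort_keys=True).encode()
    expected = hmac.new(TTP_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(token["mac"], expected)

token = make_nr_token("alice", "bob", b"I agree to the contract")
assert verify_nr_token(token)
```

Note how the evidential value rests entirely on the TTP: because both generation and verification use the same key, only a party trusted by everyone can hold it, which is exactly the contrast with asymmetric signatures drawn below.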
I think the answer depends on whether the shared key is public or not. If the parties agree to public source (third party) for their shared key there is non-repudiation of origin.
ISO 13888-2 introduces structures and protocols which can be used to provide non-repudiation services in the context of symmetric techniques. However, all these "tricks" rely on the existence of a trusted third party.
The point of the second Wikipedia citation in the question is that asymmetric key systems intrinsically (and without the need for third parties) offer non-repudiation features (specifically NRO, i.e. non-repudiation of origin).

Where in the world are encrypted software in cash registers required and in that case what security measures are required?

Background
Sweden is introducing a compulsory law requiring all business owners handling cash or card transactions to implement or buy partially encrypted POS (point-of-sale) cash registers:
Signing and encryption are used to securely store the information from the cash register in the control unit. The control system with a certified control unit is based on the manufacturer for each control unit model obtaining a main encryption key from the Swedish Tax Agency. The manufacturer then uses the main key to create unique encryption keys that are placed in the control unit during the manufacturing process. In order to obtain main encryption keys, manufacturers must submit an application to the Swedish Tax Agency.

Source: SKV
This has caused somewhat of an uproar among Swedish traders, because of the complexity and strong encryption required, along with the highly sophisticated technical implementation demanded from the shop owner's perspective; the alternative is to buy the system from companies who have worked through the documentation, obtained their security keys, built the software, and integrated it into the hardware.
So my first question is whether any other country in the world even comes close to the precision that the Swedish Tax Agency requires of its companies (alongside its extensive guidelines for bookkeeping)?
I'd like to hear about any other encryption schemes of interest and how they are applied through legislation when it comes to verifying transactions and bookkeeping entries. Examples of such legislation could be similar to another Swedish rule: that bookkeeping entries (transactions) must be write-only, written at most 4 days after the occurrence, and changeable only through a tuple of (date, signature of the person making the change, new bookings).
Finally, what are your opinions on these rules? Are we heading towards always-on uplinks from bookkeeping and POS systems to the tax agency's servers, which verify and detect fraudulent patterns in real time, similar to the collective-intelligence algorithms out there, or will there be a backlash against the increased complexity of running a business?
Offhand, I can't think of anywhere else in the world that implements this strict of a requirement. However, some existing POS systems may implement this sort of encryption depending on what the definition of "control unit" is and where the differentiation between "control unit" and "cash register" lies.
The reason I say this is because many POS systems (at least the ones that I've worked with) are essentially a bunch of dumb terminals that are networked to a central database and transaction processing server. No data is actually stored in the cash register itself, so there is only a need to encrypt it on the server side (and over the wire, but I'm assuming a secure network protocol is being used). You would need to get a lawyer to interpret what exactly constitutes a "control unit", but if this can be defined as something on the server side (in a networked POS case like this) then the necessary complexity to implement such a system would not be too onerous.
The difficult case would be where each individual cash register requires a unique encryption key, whether to encrypt data to be stored inside the register itself or to encrypt data before sending it to a central server. This would require a modification or replacement of each cash register, which could indeed prove costly depending on the size of the business and the age of the existing equipment. In many countries (the US in particular), mandating such an extensive and costly change would likely have to be either accompanied by a bill providing funds to businesses (to help pay for the equipment conversion) or written in a manner more like "all Point-Of-Sale equipment manufactured or sold after {{{some future date}}} must implement the following features:". Implementing rules that place expensive burdens on businesses is a good way for politicians to lose a lot of support, so it's not likely that a rule like this will get implemented over a short period of time without some kind of assistance being offered.
The possibly interesting case would be the "old-fashioned" style of cash registers which essentially consist of a cash drawer, calculator, and receipt printer, and store no data whatsoever. This law may require such systems to start recording transaction information (ask your lawyer). Related would be the case where transactions are rung up by hand and written on a paper ticket (as is commonly done in some restaurants and small stores in the US). I often find it amusing how legislation focuses on such security for high-tech systems but leaves the "analog" systems unchanged and wide open for problems. Then again, Sweden may not be using older systems like these anymore.
I'm not sure exactly what US law requires in terms of encrypted records, but I do know that certain levels of security are required by many non-government entities. For example, if a business wants to accept credit card payments, then the credit card company will require them to follow certain security and encryption guidelines when handling and submitting credit card payment information. This is in part dictated by the local rules of legal liability. If a transaction record gets tampered with, lost, or hijacked by a third party the security of the transaction and record-keeping systems will be investigated. If the business did not make a reasonable effort to keep the data secure and verified, then the business may be held at fault (or possibly the equipment manufacturer) for the security breach which can lead to large losses through lawsuits. Because of this, companies tend to voluntarily secure their systems in order to reduce the incidence of security breaches and to limit their legal liability should such a breach happen.
Since device manufacturers can sell their equipment internationally, equipment complying with these Swedish restrictions will likely end up being used in other places as well over time. If the system ends up being successful, other businesses will probably volunteer to use such an encrypted system, even in the absence of legislation forcing them to do so. I compare it to the RoHS rules that the EU passed several years ago. Many countries that did not sign the RoHS legislation now manufacture and use RoHS-certified materials, not because of a legal mandate but because they are available.
Edit: I just read this in the linked article:
Certified control unit
A certified control unit must be connected to a cash register. The control unit must read registrations made by the cash register.
To me, this sounds like the certified control unit attaches to the register but is not necessarily built into it (or necessarily unique to one register). This definition alone doesn't (to my non-lawyer ears) sound like it prohibits existing cash registers from being connected over a network to a certified control unit on the server side. If so, this might be as simple as installing some additional software (and possibly a peripheral device) on the server side. The details link may clarify this, but it's not in English so I'm not sure what it says.
These types of requirements are becoming more and more common across most of Europe (and to a lesser, but increasing, extent North America). I'm not sure exactly which Europe-based banks are moving fastest on this, but in North America one of the front-runners is First Data (who have already made available the fully-encrypted POS devices like you describe needing).
I would further postulate that most merchants will not develop systems internally that do this (due to the PCI requirements, and challenges in doing so), but will instead rely on their merchant providers for the required technology.
