"Conflicting signing certificate attributes present" - timestamp from TSA for PDF (X.509 certificate)

Our TSA recently upgraded their servers, and this is causing a "Conflicting signing certificate attributes present" error when we attempt to digitally sign and timestamp our PDFs (using C# code).
If I use the TSA's old server or the SignFiles test server, everything works as expected, but once I switch to the new server, the error occurs.
We are using SignFiles, which appears to be a thin wrapper around BouncyCastle. The error occurs within Bouncy Castle's Validate(...) method.
[Screenshot: VS Code showing the error]
I believe the error is happening as a result of the highlighted calls below.
[Screenshot: BouncyCastle code - the Validate(...) method where the error occurs]
At this point I am a bit lost, as I do not know which values the above code is checking, but I suspect it might be the "Enhanced Key Usage" values.
If so, I can see the following differences between the old timestamp (left side) and the new one that does not work (right side).
[Screenshot: working and non-working certificates returned by the TSA]
Can anyone confirm whether these are the values being checked?
Can anyone explain why this error is happening, and whether it is something the TSA can correct in their settings and in how the timestamp is generated, or whether Bouncy Castle needs to be updated to work with the new TSA servers?
We are continuing to use the legacy servers, but need to get this fixed to take advantage of the new faster servers.
I only found one other question regarding this at:
http://bouncy-castle.1462172.n4.nabble.com/Time-Stamp-with-both-SigningCertificate-and-SigningCertificateV2-td3457605.html
Thank you,
Rob

It seems that your client library internally uses a BouncyCastle version older than 1.8, which processed timestamps incorrectly. As you correctly found out, this problem was reported (by me) in JIRA issue 85 and fixed in BouncyCastle 1.8.
Original RFC 3161-compliant timestamps included only the SigningCertificate (OID 1.2.840.113549.1.9.16.2.12) CMS attribute, containing a SHA-1 hash of the TSA certificate. But since the whole world abandoned SHA-1, RFC 5035 updated RFC 3161 and allowed the use of the SigningCertificateV2 (OID 1.2.840.113549.1.9.16.2.47) CMS attribute with a SHA-256 (or any other) hash of the TSA certificate. It also allowed both attributes at the same time, to support legacy clients that may not support newer hash algorithms. So IMO it is perfectly OK for a TSA to include both attributes.
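For anyone who wants to see which of the two attributes a given timestamp token actually carries, here is a minimal sketch (C#, BouncyCastle; the token file name is hypothetical) that inspects the signed attributes for both OIDs:

```csharp
using System;
using System.IO;
using Org.BouncyCastle.Asn1.Cms;
using Org.BouncyCastle.Asn1.Pkcs;
using Org.BouncyCastle.Cms;
using Org.BouncyCastle.Tsp;

class TsaAttributeCheck
{
    static void Main()
    {
        // Raw DER-encoded timestamp token (file name is hypothetical).
        byte[] tokenBytes = File.ReadAllBytes("timestamp-token.der");
        var token = new TimeStampToken(new CmsSignedData(tokenBytes));

        AttributeTable signedAttrs = token.SignedAttributes;

        // RFC 3161 attribute: SigningCertificate (SHA-1 hash of the TSA cert).
        bool hasV1 = signedAttrs[PkcsObjectIdentifiers.IdAASigningCertificate] != null;
        // RFC 5035 attribute: SigningCertificateV2 (SHA-256 or other hash).
        bool hasV2 = signedAttrs[PkcsObjectIdentifiers.IdAASigningCertificateV2] != null;

        Console.WriteLine("SigningCertificate   (1.2.840.113549.1.9.16.2.12): " + hasV1);
        Console.WriteLine("SigningCertificateV2 (1.2.840.113549.1.9.16.2.47): " + hasV2);
        // BouncyCastle versions before 1.8 rejected tokens where both are present.
    }
}
```

If both lines print True, you are looking at exactly the situation described above.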
You have two options:
Ask the provider of your client library to upgrade it to BouncyCastle 1.8, where the issue has been fixed. IMO this would be the easier option.
Ask your timestamp provider to reconfigure their TSA server so that it does not include both CMS attributes in the timestamps it generates. IMO this would be the harder option.
Let me know if something is not clear enough or you need more details.


TOTP defaults and extensibility

Recently, I was implementing 2FA using TOTP according to RFC 6238. What caught my attention were the default values: a 30-second time step, the Unix epoch as the start of counting, and especially the widely used parameters (not directly recommended by the RFC): the secret represented in Base32, codes of length 6, and HMAC-SHA1 as the underlying algorithm. My questions:
Is it reasonable to expect widely used implementations to change these parameters? If so, that implies implementing a way to customize the parameters instead of hard-coding the default values.
Are there any known plans to "upgrade" the parameters used by widely deployed client implementations, e.g. Authy, 1Password, Google Authenticator, etc.?
The answer to the first question depends on your needs. If you have implemented 2FA on your server and are looking for an app to generate codes on the client side, you just need to choose an app that already supports different parameters; that way you can be sure the next app update won't break your auth system.
As for common practice: most auth servers use 6-digit codes with 32-character Base32 seeds and SHA-1 as the hash function, but I have seen systems with SHA-256 and 52-character seeds.
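To illustrate the customization point, here is a minimal parameterized TOTP sketch (C#, following RFC 6238/RFC 4226; Base32 decoding of the seed is omitted, and the defaults match the common values above):

```csharp
using System;
using System.Security.Cryptography;

static class Totp
{
    // secret: raw key bytes (decode from Base32 first if that's how it's stored).
    public static string Compute(byte[] secret, DateTimeOffset time,
        int stepSeconds = 30, int digits = 6, string algorithm = "HMACSHA1")
    {
        // Moving factor: number of time steps since the Unix epoch (RFC 6238).
        long counter = time.ToUnixTimeSeconds() / stepSeconds;
        byte[] msg = BitConverter.GetBytes(counter);
        if (BitConverter.IsLittleEndian) Array.Reverse(msg); // big-endian per RFC 4226

        using (HMAC hmac = algorithm == "HMACSHA256" ? new HMACSHA256(secret)
                         : algorithm == "HMACSHA512" ? (HMAC)new HMACSHA512(secret)
                         : new HMACSHA1(secret))
        {
            byte[] hash = hmac.ComputeHash(msg);

            // Dynamic truncation (RFC 4226, section 5.3).
            int offset = hash[hash.Length - 1] & 0x0F;
            int binary = ((hash[offset] & 0x7F) << 24)
                       | (hash[offset + 1] << 16)
                       | (hash[offset + 2] << 8)
                       | hash[offset + 3];

            int otp = binary % (int)Math.Pow(10, digits);
            return otp.ToString().PadLeft(digits, '0');
        }
    }
}
```

Called as Totp.Compute(secret, DateTimeOffset.UtcNow), this reproduces the common 6-digit SHA-1 behaviour; passing digits: 8 or algorithm: "HMACSHA256" exercises the variants without touching the core logic.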

Qt/C++ store IM Messages offline

I have developed a client/server IM application with Qt. So far, messages are sent and displayed on the client side, but when the program is closed the messages are no longer available, since proper storage is missing.
I would like to keep the messages on the client devices and avoid storing everything on the server. I don't want to use a DB server either, since it would need to be installed separately and I would like to keep everything simple.
I was therefore thinking of simply storing everything in an encrypted file, but I couldn't come up with a proper format for it.
Does anyone have experience with this, or suggestions for how to save the messages from different clients?
You do have a concern with data integrity in the face of unplanned termination of your software, due to bugs in your code, transient hardware errors, power outages, etc. That is the problem that everyone using "plain files" usually ignores, as it is a hard problem to solve and requires extensive testing and know-how.
That's why you should use an embedded database. It will solve that problem, and many others as well. SQLite is the de facto standard for applications like yours. You can add any encryption you wish, as SQLite provides hooks that let you implement the writing and reading of database pages; you'd do the encryption there.
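As a sketch of that approach (shown here in C# with Microsoft.Data.Sqlite purely for illustration; a Qt client would use QSqlDatabase with the QSQLITE driver, and the page-level encryption would come from an SEE- or SQLCipher-style build rather than from this snippet):

```csharp
using System;
using Microsoft.Data.Sqlite;

// Open (or create) the local message store. With an SQLCipher-based build,
// adding "Password=..." to the connection string encrypts the database pages.
using (var conn = new SqliteConnection("Data Source=messages.db"))
{
    conn.Open();

    using (var create = conn.CreateCommand())
    {
        create.CommandText =
            @"CREATE TABLE IF NOT EXISTS messages (
                  id       INTEGER PRIMARY KEY AUTOINCREMENT,
                  peer     TEXT NOT NULL,
                  sent_utc TEXT NOT NULL,
                  body     TEXT NOT NULL)";
        create.ExecuteNonQuery();
    }

    using (var insert = conn.CreateCommand())
    {
        insert.CommandText =
            "INSERT INTO messages (peer, sent_utc, body) VALUES ($peer, $ts, $body)";
        insert.Parameters.AddWithValue("$peer", "alice@example.com"); // illustrative
        insert.Parameters.AddWithValue("$ts", DateTime.UtcNow.ToString("o"));
        insert.Parameters.AddWithValue("$body", "Hello!");
        insert.ExecuteNonQuery(); // transactional: survives a crash mid-write
    }
}
```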
One little-appreciated aspect of SQLite specifically is the amount of testing it gets during development. The test harness (most of it non-public) is probably worth far more than the published SQLite code itself (>1M USD). SQLite is used in aerospace applications, e.g. IIRC in code classified as DAL-B under DO-178B.

Where can I find a description of BrowserID local verification

The FAQ recommends against doing local verification of BrowserID (Persona) security assertions; however, I've never been good at following instructions.
So... I want to implement local verification anyway. It looks like the only thing the client libraries pass to the server side is a block of encrypted stuff called an "assertion". Presumably it is encrypted or signed using some public-key scheme, but I'm having trouble finding any details.
Can anyone explain it, or point me to the details?
The spec is currently not up to date with the latest data format changes, but this Python library can verify Persona assertions by itself (i.e. without calling verifier.login.persona.org):
http://pypi.python.org/pypi/PyBrowserID

Appropriate User-Agent header value

I'm using HttpBuilder (a Groovy HTTP library built on top of Apache's HttpClient) to send requests to the last.fm API. The docs for this API say you should set the User-Agent header to "something appropriate" in order to reduce your chances of getting blocked.
Any idea what kind of values would be deemed appropriate?
The name of your application including a version number?
I work for Last.fm. "Appropriate" means something which will identify your app in a helpful way to us when we're looking at our logs. Examples of when we use this information:
investigating bugs or odd behaviour; for example if you've found an edge case we don't handle, or are accidentally causing unusual load on a system
investigating behaviour that we think is inappropriate; we might want to get in touch to help your application work better with our services
we might use this information to judge which API methods are used, how often, and by whom, in order to do capacity planning or to gather general statistics on the API ecosystem.
A helpful (appropriate) User-Agent:
tells us the name of your application (preferably something unique and easy to find on Google!)
tells us the specific version of your application
might also contain a URL at which to find out more, e.g. your application's homepage
Examples of unhelpful (inappropriate) User-Agents:
the same as any of the popular web browsers
the default user-agent for your HTTP Client library (e.g. curl/7.10.6 or PEAR HTTP_Request)
We're aware that it's not possible to change the User-Agent sent when your application is browser-based (e.g. Javascript or Flash) and don't expect you to do so. (That shouldn't be a problem in your case.)
If you're using a third-party Last.fm API library, such as one of those listed at http://www.last.fm/api/downloads , then we would prefer that you add extra information to the User-Agent to identify your application, but leave the library name and version in there as well. This is immensely useful when tracking down bugs (in either our service or the client libraries).
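For a concrete example, a header like the hypothetical one below ticks all the boxes; in C# it could be set like this (HttpBuilder exposes an equivalent headers map):

```csharp
using System.Net.Http;

var client = new HttpClient();
// App name, version, and URL are illustrative; "+URL" inside the comment
// part is a common convention for pointing at a homepage.
client.DefaultRequestHeaders.UserAgent.ParseAdd(
    "MyScrobbler/1.2.3 (+https://example.com/myscrobbler)");
```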

How to keep multiple connectionString passwords safe, separate, and easy to deploy?

I know there are plenty of questions here already about this topic (I've read through as many as I could find), but I haven't yet been able to figure out how best to satisfy my particular criteria. Here are the goals:
1. The ASP.NET application will run on a few different web servers, including localhost workstations for development. This means encrypting web.config using a machine key is out. Each "type" or environment of web server (dev, test, prod) has its own corresponding database (dev, test, prod). We want to separate these connection strings so that a developer working on the "dev" code is not able to see any "prod" connection string passwords, and so that production passwords can never be deployed to the wrong server or committed to SVN.
2. The application should be able to decide which connection string to use based on the server name (using a switch statement; see the sketch at the end of this question). For example, "localhost" and "dev.example.com" should know to use the DevDatabaseConnectionString, "test.example.com" should use the TestDatabaseConnectionString, and "www.example.com" should use the ProdDatabaseConnectionString. The reason for this is to limit the chance of deployment accidents, where the wrong type of web server connects to the wrong database.
3. Ideally, the exact same executables and web.config should be able to run in any of these environments, without needing to tailor or configure each environment separately on every deployment (something that seems easy to forget or mess up one day during a deployment, which is why we moved away from having a single connection string that had to be changed for each target). Deployment is currently accomplished via FTP. Update: Using "build events" and revising our deployment procedures is probably not a bad idea.
4. We will not have command-line access to the production web server. This means using aspnet_regiis.exe to encrypt the web.config is out. Update: We can do this programmatically, so this point is moot.
5. We would prefer not to have to recompile the application whenever a password changes, so using web.config (or db.config or whatever) seems to make the most sense.
6. A developer should not be able to get at the production database password. If a developer checks the source code out onto their localhost laptop (which would determine that it should be using the DevDatabaseConnectionString, remember?) and the laptop gets lost or stolen, it should not be possible to get at the other connection strings. Thus, having a single RSA private key to decrypt all three passwords cannot be considered. (Contrary to #3 above, it does seem like we'd need three separate key files if we went this route; these could be installed once per machine, and should the wrong key file get deployed to the wrong server, the worst that could happen is that the app can't decrypt anything, and the wrong host never gets access to the wrong database!)
UPDATE/ADDENDUM: The app has several separate web-facing components: a classic ASMX web services project, an ASPX Web Forms app, and a newer MVC app. In order not to go mad maintaining the same connection string configuration in each of these separate projects for each separate environment, it would be nice for it to appear in only one place. (Probably in our DAL class library or in a single linked config file.)
I know this is probably a subjective question (asking for a "best" way to do something), but given the criteria I've mentioned, I'm hoping that a single best answer will indeed arise.
Thank you!
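For concreteness, here is a sketch of the switch from goal #2 (the host names match the examples above; everything else is illustrative):

```csharp
using System.Configuration;

static class ConnectionStringSelector
{
    // host would come from, e.g., HttpContext.Current.Request.Url.Host.
    public static string ForHost(string host)
    {
        switch (host.ToLowerInvariant())
        {
            case "localhost":
            case "dev.example.com":
                return Get("DevDatabaseConnectionString");
            case "test.example.com":
                return Get("TestDatabaseConnectionString");
            case "www.example.com":
                return Get("ProdDatabaseConnectionString");
            default:
                // Fail closed: an unknown host must never silently reach prod.
                throw new ConfigurationErrorsException(
                    "Unrecognized host '" + host + "'; refusing to guess a database.");
        }
    }

    private static string Get(string name)
    {
        return ConfigurationManager.ConnectionStrings[name].ConnectionString;
    }
}
```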
Integrated authentication/Windows authentication is a good option: no passwords, at least none that need to be stored in web.config. In fact, it's the option I prefer unless admins have explicitly taken it away from me.
Personally, for anything that varies by machine (which isn't just the connection string), I put an external reference in the web.config using this technique: http://www.devx.com/vb2themax/Tip/18880
When I throw code over the fence to the production server admin, he gets a new web.config but doesn't get the external file; he uses the one he had from before.
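One common shape of that technique (assuming the configSource variant; all names are illustrative) keeps web.config in SVN while the machine-specific file stays on the box:

```xml
<!-- web.config (in SVN): the secrets live in a separate, non-versioned file -->
<connectionStrings configSource="db.config" />

<!-- db.config (NOT in SVN; one per machine/environment) -->
<connectionStrings>
  <add name="AppDb"
       connectionString="Data Source=DBSERVER;Initial Catalog=AppDb;Integrated Security=SSPI" />
</connectionStrings>
```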
You can have multiple web servers with the same encryption key; you would do this in machine.config. Just ensure each key is the same.
One common practice is to store the first connection string encrypted somewhere on the machine, such as the registry. After the server connects using that string, it then retrieves all the other connection strings, which are managed in the database (also encrypted). That way, connection strings can be generated dynamically based on authorization requirements (requestor, application being used, etc.); for example, the same tables can be accessed with different rights depending on context and users/groups.
I believe this scenario addresses all (or most?) of your points.
(First: wow, I think the 2 or 3 "quick paragraphs" turned out a little longer than I'd thought! Here I go...)
I've come to the conclusion (perhaps you'll disagree) that the ability to "protect" the web.config while it sits on the server (or by using aspnet_regiis) has only limited benefit, and is perhaps not such a good thing, as it may give a false sense of security. My theory is that if someone is able to gain filesystem access and read this web.config in the first place, then they can probably also create their own simple ASPX file which will "unprotect" it and reveal its secrets to them. But if unauthorized people are trouncing around your filesystem, well... then you have bigger problems at hand, so my whole concern is moot! 1
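To make that theory concrete: protected configuration is decrypted transparently for any code running inside the application, so a single rogue page (the hypothetical leak.aspx below) reveals everything:

```aspx
<%-- leak.aspx (hypothetical): drop into the web app and browse to it --%>
<%@ Page Language="C#" %>
<%@ Import Namespace="System.Configuration" %>
<script runat="server">
    void Page_Load(object sender, EventArgs e)
    {
        // The runtime un-protects encrypted sections automatically.
        foreach (ConnectionStringSettings cs in ConfigurationManager.ConnectionStrings)
        {
            Response.Write(Server.HtmlEncode(cs.Name + " = " + cs.ConnectionString) + "<br/>");
        }
    }
</script>
```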
I also realize that there isn't a foolproof way to securely hide passwords within a DLL, since they can eventually be disassembled and discovered, for instance with something like ILDASM. 2 An additional measure of obscurity can be gained by obfuscating and encrypting your binaries, for example with Dotfuscator, but this isn't to be considered "secure". And again, if someone has read access (and likely write access too) to your binaries and filesystem, you've got bigger problems at hand, methinks.
To address my concerns about not wanting the passwords to live on developer laptops or in SVN: solving this through a separate ".config" file that does not live in SVN is (now!) the blindingly obvious choice. Web.config can live happily in source control, while just the secret parts do not. However (and this is why I'm following up on my own question with such a long response), there are still a few extra steps I've taken to try to make this, if not more secure, then at least a little more obscure.
Connection strings we want to keep secret (those other than the development ones) never live as plain text in any file. They are now encrypted first with a secret (symmetric) key - using, of course, the ridiculous new Encryptinator(TM)! utility built just for this purpose - before being placed in a copy of a "db.config" file. Each db.config is then uploaded only to its respective server. The secret key is compiled directly into the DAL's DLL, which itself would then (ideally!) be further obfuscated and encrypted with something like Dotfuscator. This should at least keep out casual curiosity.
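For what it's worth, the guts of such a utility are small. A hypothetical sketch (AES-CBC; the key shown here is a placeholder for whatever random key actually gets compiled into the DAL):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class ConnectionStringVault
{
    // Illustrative only: in the real scheme a random key is compiled into the
    // DAL assembly (and the assembly obfuscated); never derive it from a string.
    private static readonly byte[] Key =
        SHA256.Create().ComputeHash(Encoding.UTF8.GetBytes("DbKey-placeholder"));

    // db.config would store Base64(IV || ciphertext).
    public static string Encrypt(string plainText)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = Key;
            aes.GenerateIV();
            using (var enc = aes.CreateEncryptor())
            {
                byte[] plain = Encoding.UTF8.GetBytes(plainText);
                byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
                byte[] blob = new byte[aes.IV.Length + cipher.Length];
                Buffer.BlockCopy(aes.IV, 0, blob, 0, aes.IV.Length);
                Buffer.BlockCopy(cipher, 0, blob, aes.IV.Length, cipher.Length);
                return Convert.ToBase64String(blob);
            }
        }
    }

    public static string Decrypt(string base64)
    {
        byte[] blob = Convert.FromBase64String(base64);
        using (var aes = Aes.Create())
        {
            aes.Key = Key;
            byte[] iv = new byte[16];
            Buffer.BlockCopy(blob, 0, iv, 0, iv.Length);
            aes.IV = iv;
            using (var dec = aes.CreateDecryptor())
            {
                byte[] plain = dec.TransformFinalBlock(blob, iv.Length, blob.Length - iv.Length);
                return Encoding.UTF8.GetString(plain);
            }
        }
    }
}
```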
I'm not going to worry much about the symmetric "DbKey" living in the DLLs, in SVN, or on developer laptops; it's the passwords themselves I'll keep out. We still need a "db.config" file in the project in order to develop and debug, but it contains fake passwords everywhere except for the development entries. Actual servers have actual copies with only their own proper secrets. The db.config file is typically reverted (using SVN) to a safe state and never stored with real secrets in our Subversion repository.
With all this said, I know it's not a perfect solution (does one exist?), and it's one that still requires a post-it note with some deployment reminders on it. But it does seem like enough of an extra layer of hassle to keep out all but the most clever and determined attackers. I've had to resign myself to "good enough" security that isn't perfect, but lets me get back to work after feeling alright about having given it the ol' college try!
1. Per my comment on June 15 at http://www.dotnetcurry.com/ShowArticle.aspx?ID=185 (let me know if I'm off base!), with more good commentary in "Encrypting connection strings so other devs can't decrypt, but app still has access", "Is encrypting web.config pointless?", and "Encrypting web.config using Protected Configuration pointless?".
2. Good discussion and food for thought, on a different subject but very related concepts, in "Securely store a password in program code?"; what really hit home is the Pidgin FAQ linked from the selected answer: if someone has your program, they can get to its secrets.
