To the best of my knowledge, OpenSSL's FIPS_mode_set function should not affect encryption results. All it does is terminate the program if a weak cipher is used.
I have a piece of code that uses EVP_aes_128_cbc encryption:
EVP_CIPHER_CTX ctx;// = EVP_CIPHER_CTX_new();
EVP_CIPHER_CTX_init(&ctx);
const EVP_CIPHER *cipher = EVP_aes_128_cbc();
EVP_EncryptInit(&ctx, cipher, key, IV);
EVP_CIPHER_CTX_set_padding(&ctx, 0);
EVP_EncryptUpdate(&ctx, encrypted.get(), &encrypted_size, paddedPlain.get(), encrypted_size);
return encrypted;
This code is consistent (I get the same output on every run) and always works as expected (the decryption function decrypts it back with no problems). But when I call FIPS_mode_set(1) at the beginning of the run, the output buffer is inconsistent: different output on every run.
the input IV:
the key file:
the input text:
the encryption output without FIPS_mode_set(1):
the encryption output with FIPS_mode_set(1):
I'm using OpenSSL version 1.0.2k.
What could possibly cause such behavior?
You're not using the API correctly: you are forgetting to call EVP_EncryptFinal_ex(). FIPS mode has more stringent requirements with regard to clearing buffers, which may be why you don't get deterministic ciphertext back before the finalization call - a call you never make.
Furthermore, you are using obsolete functions:
The functions EVP_EncryptInit(), EVP_EncryptFinal(), EVP_DecryptInit(), EVP_CipherInit() and EVP_CipherFinal() are obsolete but are retained for compatibility with existing code. New code should use EVP_EncryptInit_ex(), EVP_EncryptFinal_ex(), EVP_DecryptInit_ex(), EVP_DecryptFinal_ex(), EVP_CipherInit_ex() and EVP_CipherFinal_ex() because they can reuse an existing context without allocating and freeing it up on each call.
Please stick as closely as possible to the examples in the OpenSSL (EVP) wiki.
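For reference, here is a minimal sketch of the recommended pattern - the _ex variants plus an explicit finalization step. This is an illustration, not the poster's code: key and iv are assumed to be caller-supplied 16-byte buffers, out must have room for plain_len plus one extra block, and error handling collapses into a single bail-out:

#include <openssl/evp.h>

/* Sketch: AES-128-CBC with the _ex API and a proper final call.
 * Returns the total ciphertext length, or -1 on error. */
int encrypt_aes_128_cbc(const unsigned char *key, const unsigned char *iv,
                        const unsigned char *plain, int plain_len,
                        unsigned char *out)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, total = 0;

    if (ctx == NULL)
        return -1;

    if (EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv) != 1)
        goto err;

    if (EVP_EncryptUpdate(ctx, out, &len, plain, plain_len) != 1)
        goto err;
    total = len;

    /* Without this call the final block(s) are never flushed out. */
    if (EVP_EncryptFinal_ex(ctx, out + total, &len) != 1)
        goto err;
    total += len;

    EVP_CIPHER_CTX_free(ctx);
    return total;

err:
    EVP_CIPHER_CTX_free(ctx);
    return -1;
}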
Working with Evernote iOS SDK 3.0
I would like to retrieve a specific resource from a note using
fetchResourceByHashWith
This is how I am using it. Just for this example, to be 100% sure the hash is correct, I first download the note with a single resource using fetchNote and then request that resource by its unique hash using fetchResourceByHashWith (the hash looks correct when I print it):
ENSession.shared.primaryNoteStore()?.fetchNote(withGuid: guid, includingContent: true, resourceOptions: ENResourceFetchOption.includeData, completion: { note, error in
    if error != nil {
        print(error)
        seal.reject(error!)
    } else {
        let hash = note?.resources[0].data.bodyHash
        ENSession.shared.primaryNoteStore()?.fetchResourceByHashWith(guid: guid, contentHash: hash, options: ENResourceFetchOption.includeData, completion: { res, error in
            if error != nil {
                print(error)
                seal.reject(error!)
            } else {
                print("works")
                seal.fulfill(res!)
            }
        })
    }
})
Call to fetchResourceByHashWith fails with
Optional(Error Domain=ENErrorDomain Code=0 "Unknown error" UserInfo={EDAMErrorCode=0, NSLocalizedDescription=Unknown error})
The equivalent setup works with the Android SDK.
Everything else works so far in the iOS SDK (chunkSync, auth, getting notebooks, etc.), so this is not an issue with auth tokens.
It would be great to know whether this is an SDK bug or whether I am still doing something wrong.
Thanks
This is a bug in the SDK's "EDAM" Thrift client stub code. First the analysis and then your workarounds.
Evernote's underlying API transport uses a Thrift protocol with a documented schema. The SDK framework includes a layer of autogenerated stub code that is supposed to marshal input and output params correctly for each request and response. You are invoking the underlying getResourceByHash API method on the note store, which is defined per the docs to accept a string type for the contentHash argument. But it turns out the client is sending the hash value as a purely binary field. The service is failing to parse the request, so you're seeing a generic error on the client.
This could reflect evolution in the API definition, but more likely this has always been broken in the iOS SDK (getResourceByHash probably doesn't see a lot of usage). If you dig into the more recent Python version of the SDK, or indeed also the Java/Android version, you can see a different pattern for this method: it says it's going to write a string-type field, and then actually emits a binary one. Weirdly, this works. And if you hack up the iOS SDK to do the same thing, it will work, too.
Workarounds:
Best advice is to report the bug and just avoid this method on the note store. You can get resource data in different ways:
First of all, you actually got all the data you needed in the response to your fetchNote call, i.e. let resourceData = note?.resources[0].data.body and you're good!
You can also pull individual resources by their own guid (not their hash), using fetchResource (use note?.resources[0].guid as the param).
Of course, you may really want to use the access-by-hash pattern. In that case...
You can hack in the correct protocol behavior. In the SDK files, which you'll need to build as part of your project, find the Objective-C file called ENTProtocol.m and locate the method +sendMessage:toProtocol:withArguments:.
It has one line like this:
[outProtocol writeFieldBeginWithName:field.name type:field.type fieldID:field.index];
Replace that line with:
[outProtocol writeFieldBeginWithName:field.name type:(field.type == TType_BINARY ? TType_STRING : field.type) fieldID:field.index];
Rebuild the project and you should find that your code snippet works as expected. This is a massive hack, however. Although I don't think any other note store methods will be adversely affected by it, it's possible that other internal user store calls (or other calls) will suddenly start acting funny. You would also have to maintain the hack through SDK updates. It's probably better to report the bug and avoid the method until Evernote publishes a proper fix.
I am using the WsReceive() function of the ATEasy framework and wanted to ask what the values aioDefault and aioDisableWsReceiveEarlyReturn of the enMode parameter mean.
I found this in the ATEasy documentation:
If enMode (the input receive mode) includes aioDisableWsReceiveEarlyReturn, it prevents WsReceive from an "early return" when there is a momentary interruption in the data being received.
And this from the ATEasy online help (via a tip from an expert on the ATEasy forum):
If sEos parameter is an empty string and aioDisableWsReceiveEarlyReturn mode flag is not used (default case), the function will return immediately if characters are found in the input buffer, and the timeout will be ignored. Using the aioDisableWsReceiveEarlyReturn flag will ensure that the function will return only if the timeout is reached or all lBytes characters were received.
I have a Zope utility with a method that performs network calls. As the result is valid for a while, I'm using plone.memoize.ram to cache it:
from time import time
from plone.memoize import ram

class MyClass(object):
    @ram.cache(cache_key)
    def do_auth(self, adapter, data):
        # performing the expensive network process here
        ...
...and the cache key function:
def cache_key(method, utility, data):
    # a new key every hour, so the result is refreshed hourly
    return time() // (60 * 60)
But I want to prevent memoization from taking place when the do_auth call returns empty results (or raises network errors).
Looking at the plone.memoize code, it seems I need to raise the ram.DontCache() exception, but before doing that I need a way to inspect the previously cached value.
How can I get the cached data from the cache storage?
I put this together from several pieces of code I wrote...
It's not tested, but it may help you.
You may access the cached data using the ICacheChooser utility.
Its call method needs the dotted name of the function you cached, in your case the dotted name of do_auth itself:
from zope.component import getUtility
from plone.memoize.interfaces import ICacheChooser

key = '{0}.{1}'.format(__name__, method.__name__)
cache = getUtility(ICacheChooser)(key)
storage = cache.ramcache._getStorage()._data
cached_infos = storage.get(key)
In cached_infos there should be all the information you need.
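To connect this back to the question, here is a rough, untested sketch of how that lookup could be combined with ram.DontCache inside the key function, so the cache is bypassed whenever the stored value is empty. The storage entry layout (value first) is an assumption based on zope.ramcache internals, not something guaranteed by the API:

from time import time
from zope.component import getUtility
from plone.memoize import ram
from plone.memoize.interfaces import ICacheChooser

def cache_key(method, utility, data):
    # Hypothetical sketch: peek at what is currently cached for this
    # method and skip memoization if any stored value is empty.
    dotted = '{0}.{1}'.format(method.__module__, method.__name__)
    cache = getUtility(ICacheChooser)(dotted)
    storage = cache.ramcache._getStorage()._data
    cached_infos = storage.get(dotted)
    if cached_infos:
        # assumption: each entry is a list whose first item is the value
        if not all(entry[0] for entry in cached_infos.values()):
            raise ram.DontCache()
    return time() // (60 * 60)

Note that when DontCache is raised from the key function, plone.memoize simply calls the decorated method directly for that request, without reading from or writing to the cache.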
I saw some very strange behavior in my Rebus handler, which is self-hosted in an exe. Right after sending a response using the bus.Send method, the memory consumed by the process increases. I looked at the object graph using a memory profiler and found that Rebus is holding the response message in serialized format somewhere.
The object graph showed the following hierarchy to the root:
System.Message --> CachedBodyMessage --> stream
Give me some pointers if anybody is aware of this.
I understand that a memory leak is a grave concern, but I believe it is unlikely that Rebus contains one.
This belief is rooted in the fact that I have been running Windows Service-hosted Rebus endpoints in production for 1.5 years now, and several of them (e.g. the timeout managers) have sometimes been running for several months without being restarted.
I'd like to be absolutely bulletproof sure though, so I'm willing to investigate the issue you're reporting.
You're mentioning "CachedBodyMessage" - judging by the names of fields inside System.Messaging.Message, it sounds like it's something within MSMQ. To try to reproduce your issue, I coded the following test:
[Test, Ignore("Only works in RELEASE mode because otherwise object references are held on to for the duration of the method")]
public void DoesNotLeakMessages()
{
    // arrange
    const string inputQueueName = "test.leak.input";
    var queue = new MsmqMessageQueue(inputQueueName);
    disposables.Add(queue);

    var body = Encoding.UTF8.GetBytes(new string('*', 32768));
    var message = new TransportMessageToSend
    {
        Headers = new Dictionary<string, object> { { Headers.MessageId, "msg-1" } },
        Body = body
    };
    var weakMessageRef = new WeakReference(message);
    var weakBodyRef = new WeakReference(body);

    // act
    queue.Send(inputQueueName, message, new NoTransaction());
    message = null;
    body = null;
    GC.Collect();
    GC.WaitForPendingFinalizers();

    // assert
    Assert.That(weakMessageRef.IsAlive, Is.False, "Expected the message to have been collected");
    Assert.That(weakBodyRef.IsAlive, Is.False, "Expected the body bytes to have been collected");
}
which verifies that the sent transport message is collected as it should be (it will only do so in RELEASE mode though, because of the way DEBUG mode holds on to object references within scope)
I'll try and run the TimePrinter sample now and leave it running for a while to see if I can reproduce the issue. If you stumble upon more information about e.g. exactly which objects are leaking, it would be very helpful.
Thanks again for taking the time to report your worries to me :)
Followup:
I've modified the TimePrinter sample so that it sends 50 msg/s and includes a 64 KB random string payload with each message, and I've tracked the memory usage for almost four hours now. As you can see, it does not look like memory is being leaked.
I'll leave it running the rest of the day, just to be sure.
Maybe you can tell me some more about why you suspected there was a memory leak in the first place?
Update:
As you can see from the trace, it has now been running for 7 hours, and thus more than 1,200,000 messages containing more than 70 GB of data have been sent and consumed by the same process. If cached message bodies were leaking, I am pretty sure we would have seen something rising on the graph.
I'm new to OpenSSL. I understand that encryption should be performed using the EVP API which acts as a common interface to all the ciphers. AES CTR mode seems to be present in the version of OpenSSL that I have, but the definition for EVP_aes_128_ctr is disabled in evp.h:
#if 0
const EVP_CIPHER *EVP_aes_128_ctr(void);
#endif
Any idea why this is? Can I just remove the #if 0? Any other pointers on getting 128 bit AES CTR mode encryption to work in OpenSSL would be appreciated!
Thanks!
Btw, it looks like the answer to this is no, not yet. But maybe soon. I found this email thread indicating that a patch to address this issue may have been submitted in June 2010:
http://www.mail-archive.com/libssh2-devel@cool.haxx.se/msg01972.html
But when I downloaded the latest development branch from SVN, AES CTR was still not enabled in EVP. I ended up just implementing it directly, for which I found this link helpful:
AES CTR 256 Encryption Mode of operation on OpenSSL
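For anyone in the same spot, here is a minimal sketch of the direct approach using the low-level AES_ctr128_encrypt call from openssl/aes.h (present in the 0.9.x/1.0.x lines, removed in 1.1.0). This is an illustration rather than the exact code from the link above; key128 and iv are assumed to be caller-supplied 16-byte buffers:

#include <openssl/aes.h>
#include <string.h>

/* Sketch: AES-128 in CTR mode via the low-level AES API, for builds
 * where EVP_aes_128_ctr() is compiled out. CTR is symmetric, so the
 * same call both encrypts and decrypts. */
void ctr128_crypt(const unsigned char *key128, const unsigned char *iv,
                  const unsigned char *in, unsigned char *out, size_t len)
{
    AES_KEY aes_key;
    unsigned char ivec[AES_BLOCK_SIZE];               /* counter block  */
    unsigned char ecount_buf[AES_BLOCK_SIZE] = { 0 }; /* keystream cache */
    unsigned int num = 0;                             /* offset in cache */

    AES_set_encrypt_key(key128, 128, &aes_key);
    memcpy(ivec, iv, AES_BLOCK_SIZE);                 /* updated in place */

    AES_ctr128_encrypt(in, out, len, &aes_key, ivec, ecount_buf, &num);
}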
I'm using AES CTR 128 mode and it works. I'm using libssl1.0.0. (I'm not sure I'm answering the right question; I hope this is helpful.)
Here is a part of my code:
EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

EVP_CipherInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, iv, 1);

/* Set the padding option before the update (CTR is a stream mode,
 * so no padding is needed). */
EVP_CIPHER_CTX_set_padding(ctx, 0);

EVP_CipherUpdate(ctx, ciphertext, &len, plaintext, plaintext_len);

/* Finalise the encryption. */
if (!EVP_CipherFinal_ex(ctx, ciphertext + len, &len)) handleErrors();

/* Clean up */
EVP_CIPHER_CTX_free(ctx);