My application accesses the HSM through PKCS#11 via an ASP.NET web service. I initialize the cryptoki library and obtain a session handle. The web service holds on to this handle to perform encryption/decryption/signing/verification in batch mode.
The problem I am facing is this: the ASP.NET web service times out after 20 minutes. This, I think, unloads the cryptoki library, and the session handle held by the web service becomes invalid. Yes, I agree that the web service can be reconfigured not to time out, which would keep the cryptoki library loaded at all times.
My question is: what happens to the session handle I obtained in the first place from the HSM? Will it be lost, or will it sit there unused? I am asking because I am not closing the opened session properly by calling C_CloseSession.
The web service is implemented via a thread pool.
Thanks
You are supposed to call C_Finalize() when you are done using the cryptoki library. A well-written implementation might be robust against you not doing so, but there are no guarantees. Your open sessions may be kept alive on the HSM and perhaps in the driver.
Strongly consider calling C_Finalize() from your Application_End().
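In ASP.NET, a minimal sketch of that wiring, assuming a hypothetical HsmService class you write yourself as a wrapper around the cryptoki library (the name and methods are illustrative, not part of any real API):

// Global.asax.cs -- sketch only; HsmService is a hypothetical
// application-level wrapper around the PKCS#11 library.
public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Load the PKCS#11 module and call C_Initialize once per process.
        HsmService.Initialize();
    }

    protected void Application_End(object sender, EventArgs e)
    {
        // Close any open sessions and call C_Finalize so the HSM is not
        // left holding orphaned session handles when the app domain unloads.
        HsmService.Shutdown();
    }
}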
From the theoretical perspective, you should read the PKCS#11 spec; it is all written there, from section 6.6 onwards.
From the practical perspective, an application becomes a cryptoki application after it calls C_Initialize. The concept of a session and its identifier may be relayed by a small wrapper library to a long-running PKCS#11 process that actually talks to the HSM, but it may not be. If the process that was a cryptoki application dies, so do all of its virtual resources, and a session is exactly such a virtual resource.
Where exactly is the problem? Opening a session is a pretty cheap operation most of the time. Unless you are sure (have measured) that it is the bottleneck, don't optimize: open and close a session per request if you can't control the lifespan of the cryptoki process.
If I understood correctly, you need to create a "global" login for that token, and then open/close a session for each local operation. So:
- Keep a global variable with the "login" (done once at startup, or whenever you want).
- Check the global login status whenever you create a new session.
- Create an individual session for each action, closing the "local" session afterwards, not the global login.
With this you get a global variable holding a logged-in session, plus individual sessions that reuse that global login (see the sketch below).
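A minimal sketch of the pattern in the Pkcs11Interop style used elsewhere in this thread (constructor overloads and method names vary between library versions, so treat the details as assumptions):

// Startup: load the library once and log in on a long-lived session.
// Login state is shared by all sessions on the same token.
Pkcs11 pkcs11 = new Pkcs11(hsmPath, true);
Slot slot = pkcs11.GetSlotList(true)[0];
Session loginSession = slot.OpenSession(false);   // read/write, kept open for the app's lifetime
loginSession.Login(CKU.CKU_USER, userPin);

// Per request: a short-lived session that reuses the global login.
// secretKey is an ObjectHandle obtained elsewhere (e.g. via GenerateKey).
byte[] DecryptRequest(byte[] cipherText)
{
    using (Session s = slot.OpenSession(true))    // read-only; closed when disposed
    {
        return s.Decrypt(new Mechanism(CKM.CKM_AES_ECB), secretKey, cipherText);
    }
}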
Good luck
I also have this problem, and the year is 2020 :S
The .NET Framework + REST API combination has this problem this time.
I'm using the HSM for a decrypt method. I have an interactive login step, and we need to run a performance test. The service holds one instance of Pkcs11:
pkcs11 = new Pkcs11(hsmPath, true);
slot = GetUsableSlot(pkcs11);
TokenInfo tokenInfo = slot.GetTokenInfo();
session = slot.OpenSession(true);
session.Login(CKU.CKU_USER, userLoginPin);
secretKey = GenerateKey(session);
And this is the Decrypt method.
public byte[] Decrypt(byte[] encryptedTextByteArray)
{
    Mechanism mechanism = new Mechanism(CKM.CKM_AES_ECB);
    byte[] sourceData = encryptedTextByteArray;
    byte[] decryptedData = null;
    using (MemoryStream inputStream = new MemoryStream(sourceData), outputStream = new MemoryStream())
    {
        try
        {
            // 'session' and 'secretKey' are the shared instance fields set up above.
            session.Decrypt(mechanism, secretKey, inputStream, outputStream, 4096);
        }
        catch (Pkcs11Exception)
        {
            // CKR_OPERATION_ACTIVE / CKR_DEVICE_MEMORY surface here under load.
            throw;
        }
        decryptedData = outputStream.ToArray();
    }
    return decryptedData;
}
When I run a performance test using the Postman runner, there is no problem with one thread.
If I increase the thread count, these errors appear:
First error: CKR_OPERATION_ACTIVE
Next error: CKR_DEVICE_MEMORY
I tried these approaches:
- Closing the session after every request, and opening a new session for the next request. It did not succeed; the same errors appeared (and, of course, request/response times increased).
- Closing the connection after every request, and opening a new connection for the next request. The same errors appeared (and again, times increased).
Can anyone help me? :)
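For what it's worth, CKR_OPERATION_ACTIVE usually indicates that two threads interleaved calls on the same session: a PKCS#11 session supports only one active cryptographic operation at a time. One common workaround, sketched below with the Pkcs11Interop-style API from the question, is to serialize access to the shared session (trading throughput for correctness):

private static readonly object _sessionLock = new object();

public byte[] Decrypt(byte[] encryptedTextByteArray)
{
    // Only one operation may be active on a session at a time,
    // so concurrent callers take turns on the shared session.
    lock (_sessionLock)
    {
        return session.Decrypt(new Mechanism(CKM.CKM_AES_ECB), secretKey, encryptedTextByteArray);
    }
}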
Related
I have some fairly typical SQL calls in an app that look something like this (Dapper typically in the middle), .NET 6:
var connection = new SqlConnection("constring");
using (connection)
{
    await connection.OpenAsync();
    var command = new SqlCommand("sql", connection); // the command must be tied to the connection
    await command.ExecuteNonQueryAsync();            // or ExecuteReaderAsync, etc.
    await connection.CloseAsync();                   // redundant: Dispose() closes the connection anyway
}
A request to the app probably generates a half-dozen calls like this, usually returning in under 10 ms. I almost never see SQL utilization (it's SQL Azure) go beyond a high of 5%.
The problem comes when a bot hits the app with 50+ simultaneous requests, all arriving within the same 300 or so milliseconds. This causes the classic error:
InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached
I have the following things in place:
- The connection string sets a max pool size of 250.
- I'm running three nodes as an Azure App Service.
- The call stacks are all async.
- I do have ARR affinity on because I'm using SignalR, but I assume the load balancer would spread out the requests, as the bot likely isn't sending ARR cookies.
The app services and SQL Server do not break a sweat even with these traffic storms.
Here's the question: how do I scale this? I assume human users don't see this, and the connection pool exhaustion heals quickly, but it creates a lot of logging noise. The App Service and SQL Server instance are not at all stressed or working beyond their limits, so it appears the connection pool mechanics are the problem. They're kind of a black-box abstraction, but a leaky one, since I clearly need to know more about them to make them work right.
Here's the question: How do I scale this?
.NET's built-in rate limiting (the ASP.NET Core middleware ships with .NET 7; the underlying System.Threading.RateLimiting package is on NuGet) is really the right solution here. Test how many concurrent requests your API app and database can comfortably handle, and stall or reject additional requests.
Take the analogy of an emergency room. Do you want to let everyone who walks in the door into the back? No; once all the rooms are full, you make them wait in the waiting room, or send them away.
So put in a request throttle like:
builder.Services.AddRateLimiter(options =>
{
    options.GlobalLimiter = PartitionedRateLimiter.Create<HttpContext, string>(httpContext =>
        RateLimitPartition.GetFixedWindowLimiter(
            partitionKey: httpContext.Request.QueryString.Value!,
            factory: partition => new FixedWindowRateLimiterOptions
            {
                AutoReplenishment = true,
                PermitLimit = 50,
                QueueLimit = 10,
                Window = TimeSpan.FromSeconds(10)
            }));
    options.OnRejected = (context, cancellationToken) =>
    {
        context.HttpContext.Response.StatusCode = StatusCodes.Status429TooManyRequests;
        return new ValueTask();
    };
});
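One detail worth calling out: registering the limiter options alone does nothing; the middleware must also be added to the pipeline.

var app = builder.Build();
app.UseRateLimiter(); // without this, the limiter configured above never runs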
We are using the SQLite.NET PCL in a Xamarin application.
When putting the database under pressure by doing inserts into multiple tables we are seeing BUSY exceptions being thrown.
Can anyone explain what the difference is between BUSY and LOCKED? And what causes the database to be BUSY?
Our code uses a single connection to the database created using the following code:
var connectionString = new SQLiteConnectionString(GetDefaultConnectionString(),
    _databaseConfiguration.StoreTimeAsTicks);
var connectionWithLock = new SQLiteConnectionWithLock(new SQLitePlatformAndroid(), connectionString);
return new SQLiteAsyncConnection(() => connectionWithLock);
So our problem turned out to be that although the class we'd written ensured it created only a single connection to the database, we hadn't ensured that the class itself was a singleton; we were therefore still creating multiple connections. Once we made it a singleton, the busy errors stopped.
What I've taken from this is:
- Locked means you have multiple threads trying to access the database; the code is inherently not thread safe.
- Busy means you have a thread waiting on another thread to complete; your code is thread safe, but you are seeing contention in using the database.
...current operation cannot proceed because the required resources are locked...
I am assuming that you are using async-style inserts from different threads, and thus an insert is timing out while waiting for the lock held by a different insert to be released. You can use synchronous inserts to avoid this condition. Personally, when needed, I avoid it by creating a FIFO queue and consuming that queue synchronously on a dedicated thread. You could also handle the condition by retrying your transaction X number of times before letting the exception ripple up.
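A minimal sketch of that FIFO approach, assuming a sqlite-net style Insert method; _db, MyRecord, and the method names are illustrative:

using System.Collections.Concurrent; // BlockingCollection
using System.Threading;              // Thread

// All writes funnel through one dedicated thread, so only one
// statement touches the database at a time.
static readonly BlockingCollection<Action> WriteQueue = new BlockingCollection<Action>();

static void StartWriterThread()
{
    var writer = new Thread(() =>
    {
        // Blocks until work arrives; each insert runs synchronously, in order.
        foreach (var work in WriteQueue.GetConsumingEnumerable())
            work();
    });
    writer.IsBackground = true;
    writer.Start();
}

// Callers enqueue work instead of writing directly:
static void QueueInsert(MyRecord record)
{
    WriteQueue.Add(() => _db.Insert(record));
}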
SQLiteBusyException is a special exception that is thrown whenever SQLite returns SQLITE_BUSY or SQLITE_IOERR_BLOCKED error code. These codes mean that the current operation cannot proceed because the required resources are locked.
When a timeout is set via SQLiteConnection.setBusyTimeout(long), SQLite will attempt to get the lock during the specified timeout before returning this error.
Ref: http://www.sqlite.org/lockingv3.html
Ref: http://sqlite.org/capi3ref.html#sqlite3_busy_timeout
I have applied the following solution, which works in my case (a mobile app):
- Use the sqlitepclraw.bundle_green NuGet package with SQLitePCL.
- Use a single connection throughout the app.
- After creating the SQLiteConnection, apply a busy timeout with the following call:
var connection = new SQLiteConnection(databasePath: path);
SQLite3.BusyTimeout(connection.Handle, 5000); // 5000 milliseconds
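If your wrapper exposes raw SQL execution, the same effect can be had with a pragma; a sketch assuming sqlite-net's Execute method:

// Equivalent to the BusyTimeout call above: wait up to 5 s for locks to clear.
connection.Execute("PRAGMA busy_timeout = 5000;");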
We have a Java class that listens to a database (Oracle) queue table and processes records placed in that queue. It worked normally in the UAT and development environments. After deployment to production, there are times when it cannot read a record from the queue: when a record is inserted, the listener does not detect it and the record remains in the queue. This seldom happens, but it happens. To give a statistic: out of 30 records queued in a day, about 8 don't make it. We have to restart the whole app for it to be able to read records again.
Here is a code snippet of my class:
public class SomeListener implements MessageListener {

    public void onMessage(Message msg) {
        InputStream input = null;
        try {
            TextMessage txtMsg = (TextMessage) msg;
            String text = txtMsg.getText();
            input = new ByteArrayInputStream(text.getBytes());
        } catch (Exception e1) {
            logger.error("Parsing from the queue.... failed", e1);
            e1.printStackTrace();
        }
        // process text message
    }
}
The weird thing is we can't find any trace of exceptions in the logs.
Can anyone help? By the way, we set the receiveTimeout to 10 seconds.
We would need to restart the whole app for it to be able to read the records.
The most common reason for this is that the listener thread is "stuck" in user code (//process text message). You can take a thread dump with jstack, jvisualvm, or a similar tool to see what the thread is doing.
Another possibility (with low-volume apps like this) is that the network (most likely a router somewhere along the path) silently closes an idle socket because it has not been used for some time. If the container (actually the broker's JMS client library) doesn't know the socket is dead, it will never receive any more messages.
The solution to the first is to fix the code; the solution to the second is to enable some kind of heartbeat or keepalive on the connection so that the network/router does not close the socket when it has no "real" traffic on it.
You would need to consult your broker's documentation about configuring heartbeats/keepalives.
In my application, I'm hosting a fairly CPU-intensive engine on a web server, connected to clients via SignalR. The client signals the server to do some work (via an AJAX request), and every 200 ms the server sends down a queue of "animation events" that describe the work being done.
This is the code used to set up the connection on the client:
$.connection.hub.start({ transport: ['webSockets', 'serverSentEvents', 'longPolling'] })
And here's the related code in the backend:
private const int PUSH_INTERVAL = 200;
private ManualResetEvent _mrs;

private void SetupTimer(bool running)
{
    if (running)
    {
        UpdateTimer = new Timer(PushEventQueue, null, 0, PUSH_INTERVAL);
    }
    else
    {
        /* Lock here to prevent a race condition where the final call to PushEventQueue()
         * could be followed by the timer calling PushEventQueue() one last time, so that
         * the End event would not be the final event to arrive client-side,
         * which causes a crash. */
        _mrs = new ManualResetEvent(false);
        UpdateTimer.Dispose(_mrs);
        _mrs.WaitOne();
        Observer.End();
        PushEventQueue(null);
    }
}

private void PushEventQueue(object state)
{
    SentMessages++;
    SignalRConnectionManager<SimulationHub>.PushEventQueueToClient(ConnectionId,
        new AnimationEventSeries
        {
            AnimationPackets = SimulationObserver.EventQueue.FlushQueue(),
            UpdateTime = DateTime.UtcNow
        });
}

public static void PushEventQueueToClient(string connectionId, AnimationEventSeries series)
{
    HubContext.Clients.Client(connectionId).queue(series);
}
And for completeness' sake, the related Javascript method:
self.hub.client.queue = function (data) {
    self.eventQueue.addEvents(data);
};
When testing this functionality on localhost, it works absolutely smoothly, with no delay (as you would expect), using serverSentEvents as a transport method.
However, when used in production, this more often than not takes a very long time to complete. Using SignalR's logging and a bit of my own instrumentation, it can be seen that the first series of events reaches the client within a couple of seconds, which is totally acceptable. However, after that SignalR often gives the following error:
Keep alive has been missed, connection may be dead/slow.
Followed soon after by:
Keep alive timed out. Notifying transport that connection has been lost.
This will happen a few times, and then eventually, up to a minute later, the events arrive, with my own instrumentation showing that they were sent from the server approximately 200 ms apart, as expected. It can also be seen that in production they were sent with the primary transport method, WebSockets.
Is anyone aware of any issues that sending multiple SignalR requests on a timer might cause? As I say, this primarily seems to happen with WebSockets. I've been told that using WebSockets is best practice, so I'm keen to keep using them, but if there isn't a workaround for these kinds of issues, I'm afraid I'll have to remove them permanently.
Edit
I've now removed the option to use WebSockets on the live site, and I'm running into the same issues with server-sent events: several failed reconnect attempts after the first queue update arrives.
Summing up our discussion: I don't think there are specific issues with WebSockets/SignalR on Azure.
I have sample code here, which can be used for testing with some minor tweaks (I'll probably develop it as a test platform at some point): https://github.com/jonegerton/SignalR.StockTicker
It's based on the sample project from MS, which can be found here: https://github.com/SignalR/SignalR-StockTicker.
I've put an example on Azure here for testing purposes: http://stockticker.azurewebsites.net. It has the default transport configuration enabled (i.e. websockets >> serversentevents >> longpolling).
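As a side note on the keep-alive warnings themselves: in ASP.NET SignalR 2.x they are driven by the server's keep-alive settings, which can be tuned at startup. A sketch with illustrative values (KeepAlive must be set after DisconnectTimeout, and be at most a third of it):

// In Application_Start, before the hubs are mapped.
GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);
GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(10);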
I am writing a web app where the application runs a command on the system using the System.Diagnostics.Process class.
I want to display real-time output from a command which takes a long time to complete. After searching a bit, I found that BeginOutputReadLine can stream output to an event handler.
I am also using jQuery AJAX to call this method and have the process run asynchronously.
So far, I am trying to do it this way:
Process p2 = new Process();
p2.StartInfo = psi2; // psi2 must have UseShellExecute = false and RedirectStandardOutput = true
p2.OutputDataReceived += new DataReceivedEventHandler(opHandler); // attach before reading begins
p2.Start();
p2.BeginOutputReadLine(); // raises OutputDataReceived for each line of output
I have declared a class with a static variable to store the output of the command, since a Label on the page won't be accessible from a static method.
public class ProcessOutput
{
    public static string strOutput;

    [WebMethod]
    public static string getOutput()
    {
        return strOutput;
    }
}
In the event handler for BeginOutputReadLine, I set the variable from the line of output:
private static void opHandler(object sendingProcess, DataReceivedEventArgs outLine)
{
    if (!String.IsNullOrEmpty(outLine.Data))
    {
        // Note: this stores only the latest line; each event overwrites the previous value.
        ProcessOutput.strOutput = outLine.Data;
    }
}
and from the aspx page, I am calling the method to get the value of strOutput
$(document).ready(function () {
    setInterval(function () {
        $.ajax({
            type: "GET",
            url: "newscan.aspx/getOutput",
            data: "",
            success: function (msg) {
                $('#txtAsyncOp').append(msg.d);
            }
        });
    }, 1000);
});
I don't know why, but the label is not getting updated. If I put in an alert, I get 'undefined' in the alert box every 10 seconds.
Can anybody suggest how to do it correctly?
Each request begins a new thread as part of the request pipeline. This is by design. Each thread has its own stack and can't access the others' stacks. When a thread starts running a new method, it stores that method's arguments and local variables on its own stack. Long story short: you won't be able to assign that variable and expect to retrieve its value from another request.
There are a couple of approaches you can take. You can scope it to the session state (most common) with:
System.Web.HttpContext.Current.Session["variable"] = value;
Or you can set it at application scope using the cache (note the Cache class has no static indexer; go through HttpRuntime.Cache):
if (HttpRuntime.Cache["Key1"] == null)
    HttpRuntime.Cache.Add("Key1", "Value 1", null, DateTime.Now.AddSeconds(60),
        Cache.NoSlidingExpiration, CacheItemPriority.High, onRemove); // onRemove: a CacheItemRemovedCallback defined elsewhere
Alternatively, you can log the output to a database or file and echo out the results via the WebMethod. If your long-running process runs asynchronously, you won't have access to the HttpContext, so the Session state bag will not be available. The application Cache could be used, but it is generally not meant for this kind of mechanism (the cache exists for performance reasons, not as a persistence mechanism; it's important to remember that you cannot control when your web application recycles).
I'd highly suggest writing to a database or log file. Asynchronous processes commonly require logged output or a trace to diagnose potential problems and to validate results.
Furthermore, because you cannot control when the web app recycles, you can easily lose control of the child process you're launching. A better design would start an asynchronous method in-process, or an out-of-process application or service that polls a database to pick up jobs (possibly using the task scheduler or cron, depending on your platform).
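A minimal sketch of the log-file variant; the path, and the reuse of the earlier handler and method names, are illustrative, not from the original post:

using System.IO;

// Event handler: append each output line to a log file instead of a static field.
private static void opHandler(object sendingProcess, DataReceivedEventArgs outLine)
{
    if (!string.IsNullOrEmpty(outLine.Data))
        File.AppendAllText(@"C:\logs\scan-output.log", outLine.Data + Environment.NewLine);
}

[WebMethod]
public static string getOutput()
{
    // The page's polling AJAX call echoes whatever has been logged so far.
    const string path = @"C:\logs\scan-output.log";
    return File.Exists(path) ? File.ReadAllText(path) : string.Empty;
}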