We currently use third-party software to extract information.
We use its SDK to send and receive requests, and we noticed that the results aren't always accurate. After troubleshooting and reading the (poorly written) documentation, we realised that the SDK can only send and receive one request at a time.
This causes an issue for us: we access the SDK from an ASP.NET web application, which means multiple clients access the SDK at the same time and send multiple requests. If the SDK receives a new request while it is busy with a current request, it discards the current request and continues with the new one.
I would like to find out what would be the best way of creating a queuing system for the requests.
I was thinking of creating a WCF service with InstanceContextMode set to Single, so that only one instance of the service is running, and then setting the ThreadPool's max threads to 1 and using it to queue the calls, so that there is only one active call to the SDK at a time. Although I do not know much about ThreadPool queuing, the solution should work.
Here is what I have in mind:
Public Sub Sub1(var As String)
    'Do work
    'WaitCallback requires a Sub taking a single Object argument, so the
    'original Function1(String) As DataTable signature would not compile here.
    ThreadPool.QueueUserWorkItem(New WaitCallback(AddressOf Function1), "Text")
End Sub

Public Sub Function1(state As Object)
    Dim var = DirectCast(state, String)
    'Do work: call the SDK and store or dispatch the resulting DataTable,
    'since a WaitCallback cannot return a value.
End Sub

Sub New()
    'Note: SetMaxThreads returns False and is ignored when the value is below
    'the number of processors, so this is not a reliable way to force one thread.
    ThreadPool.SetMaxThreads(1, 1)
End Sub
How would I create the queueing using ThreadPool, or is there another way to accomplish the same result?
Will the web application wait for a response from the service?
Update 1
I found another way while fiddling with some code
If I specify that the InstanceContextMode must be Single and set the function's ReleaseInstanceMode to AfterCall, this blocks any other calls from executing while the function is busy. It uses instance deactivation (details found here).
<ServiceBehavior(InstanceContextMode:=InstanceContextMode.Single)>
Public Class Service1
    <OperationBehavior(ReleaseInstanceMode:=ReleaseInstanceMode.AfterCall)>
    Public Function DoWork() As String
        Return "WorkDone" 'placeholder result
    End Function
End Class
Will this work, and are there any specific problems that I could run into?
If I were you, I would prefer to add one more layer of abstraction, since you cannot control the restriction inside the third-party component itself.
You can inherit from the third-party class (if it is a class) and organize the queue inside it. Use it as a singleton; a sketch follows this list. Singleton
Use tasks and a task factory with a limited number of threads (very similar to your idea): Scheduler
Your idea with ThreadPool.
In case it's a service, the easiest way is to create your own WCF service that is responsible for the queueing, which WCF itself can organize: Throttling
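Here is a minimal C# sketch of the first option, assuming a hypothetical ThirdPartySdk.Send entry point (the real SDK call will differ): a singleton wrapper funnels every request through one dedicated consumer thread, so the SDK never sees two requests at once.

using System;
using System.Collections.Concurrent;
using System.Data;
using System.Threading;
using System.Threading.Tasks;

public sealed class SdkQueue
{
    public static readonly SdkQueue Instance = new SdkQueue();

    private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();

    private SdkQueue()
    {
        // Single consumer thread: queued items execute strictly one at a time.
        var consumer = new Thread(() =>
        {
            foreach (var action in _work.GetConsumingEnumerable())
                action();
        });
        consumer.IsBackground = true;
        consumer.Start();
    }

    public Task<DataTable> Enqueue(string request)
    {
        var tcs = new TaskCompletionSource<DataTable>();
        _work.Add(() =>
        {
            try { tcs.SetResult(ThirdPartySdk.Send(request)); } // hypothetical SDK call
            catch (Exception ex) { tcs.SetException(ex); }
        });
        return tcs.Task; // the web application can block on or await this result
    }
}

Callers would use SdkQueue.Instance.Enqueue(request) and wait on the returned task, which also gives the web application a response to wait for.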
Related
I have an existing service that notifies a large number of clients when an event occurs. It uses a long polling mechanism that I rolled myself. I'm exploring replacing that mechanism with a signalr hub, and have a prototype working. But it has a pretty significant inefficiency that feels like there should be a solution to, but I'm not finding it.
I understand the idea of groups in signalr, and groups are obviously intended to prevent this inefficiency, but there is a reason that I cannot use groups. I hope it suffices to say that I need to call the same client method, with the same parameter values, on many clients using each client's ConnectionId. I can explain why if necessary, but it's really beside the point.
Assume I have a list of 200 ConnectionId's and I need to call the same method on each of them passing the same object parameter. If I simply iterate through the ConnectionId's calling Clients.Client(ConnectionId).clientMethod(param), I presume that the param object would be serialized 200 times.
Is there a way to serialize the parameter(s) one time, then invoke the client method using the already-serialized parameters?
UPDATE
I've found a github issue that sounds related (maybe even this exact issue) at Allow to Send Json Strings without duplicate Serialization. It appears that the functionality was added to signalr, but the github issue doesn't say anything about how to do it, and I can't find anything regarding it in the signalr docs.
UPDATE 2
In the github issue referenced above, the new functionality was implemented for PersistentConnection only -- not hubs. With persistent connections, when sending a parameter of type ArraySegment, signalr assumes it to be pre-serialized and sends it as-is without serializing it.
For some reason, this was not implemented for hubs, although it would be useful for hubs for the same reason it was useful for persistent connections.
Store all connection ids in a static List<string> in the OnConnected event and use the following:
static List<string> allConnections = new List<string>();

public override Task OnConnected()
{
    allConnections.Add(Context.ConnectionId);
    return base.OnConnected();
}

public void YourServerMethod(object payload)
{
    // Clients.Clients takes the whole list of connection ids in a single call.
    Clients.Clients(allConnections).clientMethod(payload);
}
I'm developing an app with VS2013, using EF6.02, and Web API 2. I'm using the ASP.NET SPA template, and creating a RESTful api against an entity framework data source backed by a sql server. (In development, this resides on the SQL Server local instance.)
I've got two API methods so far (one that just reads data, one that writes data), and I'm testing them by calling them in the javascript. When I only call a single method in my script, either one works perfectly. But if I call both in script (without waiting for either's callback to fire), I get bad results and different exceptions in the debugger. Some exceptions state that the save can't be completed because there are pending transactions. Another exception stated something about a conflict with other threads. And sometimes, the read operation fails with a null pointer exception when trying to read a result set.
"New transaction is not allowed because there are other threads running in the session."
This makes me question if I'm correctly getting a new DBContext per request. My code for this looks like:
static Startup()
{
    // Static constructor: this runs once for the whole application,
    // so every request ends up sharing this one context.
    context = new Data.SqlServer.AppDbContext();
    ...
}
and then whenever instantiating a unit of work, I access Startup.context.
I've tried to implement the unit of work pattern, and each request shares a single UOW object which has a single DBContext object.
My question: do I have additional responsibility to ensure that web requests "play nicely" with each other? I hope this is a problem that others have already dealt with. Perhaps the errors I'm seeing are legitimate, in the sense that if one user's data is being touched, it is temporarily in an invalid state, and requests that come in at that exact moment will indeed fail (and I should code anticipating these failures). I guess that even if each request has its own DbContext, they still share the same underlying SQL data source, so perhaps that's causing issues.
I can try to put together a testcase, but I get differing behavior depending on where I put breakpoints and how long I spend on them, reaffirming to me that this is timing related.
Thanks for any help or suggestions...
-Ben
Your problem is where you are setting up your context. The Startup method runs when the entire application starts, so every request will use the same context. That is a per-application setup, not a per-request setup. As for why you are getting the errors: Entity Framework is NOT thread-safe, and since IIS spawns many threads to handle concurrent requests, your single context is being used across multiple threads.
As for a solution, you can look into:
- Dependency injection frameworks (such as Ninject or Unity)
- Placing a using statement in your UnitOfWork classes:
using (var context = new Data.SqlServer.AppDbContext()) { /* do stuff */ }
- Or, I have seen people create a class that gets the context for the current request and stores it in the HttpContext.Items collection (under a unique key so you can retrieve it easily in another class), so the same context is reused for the whole request. Something like this:
public AppDbContext GetDbContext()
{
    var httpContext = HttpContext.Current;
    if (httpContext == null) return new AppDbContext();

    // One context per request, cached in HttpContext.Items.
    const string contextTypeKey = "AppDbContext";
    if (httpContext.Items[contextTypeKey] == null)
    {
        httpContext.Items.Add(contextTypeKey, new AppDbContext());
    }
    return httpContext.Items[contextTypeKey] as AppDbContext;
}
To use the above method, make a simple call var context = GetDbContext();
Note
We have used all of the above methods, but this note applies specifically to the third one. It seems to work well, with two caveats. First, do not use the context in a using statement: disposing it there makes it unavailable to any other classes during the scope of the request. Second, ensure that you have a call in Application_EndRequest that actually disposes of it. We saw these little buggers hanging around in memory after the request ended, causing a huge spike in memory usage.
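Here is a minimal sketch of that cleanup in Global.asax, assuming the same "AppDbContext" key used by GetDbContext() above:

protected void Application_EndRequest()
{
    // Dispose the per-request context, if this request created one.
    var context = HttpContext.Current.Items["AppDbContext"] as AppDbContext;
    if (context != null)
    {
        context.Dispose();
    }
}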
I have an ASP.NET web application that allows end users to upload a file. Once the file is on the server, I spawn a thread to process it. The thread is passed data regarding the specific operation (UserId, file path, various options, etc.). Most of the data is passed around via objects and method parameters, but UserId needs to be available more globally, so I put it in thread-local storage.
The thread is lengthy but it just processes the file and aborts. Is my use of the named data slot safe in this circumstance? If UserA uploads a file then UserB uploads a file while the first file is still processing, is it possible that the thread for UserA will also be delegated to handle UserB, thus producing a conflict for the named slot? (i.e. The slot gets overwritten with UserB's id and the rest of the operation of UserA's file is linked to the wrong User, UserB).
Public Class FileUploadProcess
    Public UserId As String

    Public Sub ExecuteAsync()
        Dim t As New Thread(New ThreadStart(AddressOf ProcessFile))
        t.Start()
    End Sub

    Protected Sub ProcessFile()
        Dim slot As LocalDataStoreSlot = Thread.GetNamedDataSlot("UserId")
        Thread.SetData(slot, UserId)
        'lengthy operation to process file
        Thread.FreeNamedDataSlot("UserId")
        Thread.CurrentThread.Abort()
    End Sub
End Class
Note that I am not asking whether the named data store slots themselves are thread-safe. By definition, I know that they are.
In this case your use of thread-local storage is safe. No two threads will ever share the same local storage (hence it's thread local), so there is no chance that two concurrent requests will stomp on each other's data.
A couple of other comments, though:
Do avoid the use of Thread.Abort. It's a very dangerous operation and is truthfully not needed here; the thread will exit on its own once ProcessFile returns.
A better approach would be to create a class that contains the background operation and holds the UserId as an instance field. Each request gets a new class instance (a sketch follows below), which is a much easier way to pass the data around to the background tasks.
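Here is a minimal C# sketch of that approach (the question's code is VB, but the idea is identical); ProcessFile stands in for the lengthy file-processing work:

using System.Threading;

public class FileUploadProcess
{
    private readonly string userId; // captured per request; no shared slot needed

    public FileUploadProcess(string userId)
    {
        this.userId = userId;
    }

    public void ExecuteAsync()
    {
        var t = new Thread(ProcessFile);
        t.Start();
    }

    private void ProcessFile()
    {
        // lengthy operation to process the file; read this.userId directly
    }
}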
This is a safe operation.
I have to say that I think JaredPar's opinion, that it would be better to create a class and store the user id in a field of that class, is incomplete to say the least.
Where do you then store that object? Since it is created per request, you have to store it somewhere. Do you couple the page with this functionality? I wouldn't. Do you store it in the Context.Items collection? That is a possibility, but what do you do in unit tests, where you are trying to abstract the code away from ASP.Net to make it more testable?
I have personally done a hybrid of the two approaches: I create a single class that contains all of the request-specific data elements, and then I cache that object in thread-local storage. This allows the code to run in unit test frameworks without having to mock the ASP.Net runtime environment.
Another important point is this: if you intend to use asynchronous patterns in ASP.Net, you should be aware that TLS is not forwarded to new threads when the execution context switches to a new thread. It is truly "thread local".
It is safe for the time being, but be careful if you perform async operations on that thread that may run on other threads: those other threads won't have access to the originating thread's TLS. A "safer" option that will allow you to use async calls in the future is to store the user id in an AsyncLocal<T>, which is context that flows with any async tasks (see the sketch below).
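A minimal sketch of the AsyncLocal<T> idea, with ProcessFileAsync as a hypothetical entry point:

using System.Threading;
using System.Threading.Tasks;

public class FileUploadProcess
{
    private static readonly AsyncLocal<string> CurrentUserId = new AsyncLocal<string>();

    public async Task ProcessFileAsync(string userId)
    {
        CurrentUserId.Value = userId;
        await Task.Run(() =>
        {
            // Unlike a named data slot, the value flows here even though
            // this lambda may run on a different thread.
            var id = CurrentUserId.Value;
        });
    }
}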
My ASP .Net C# web application allows its users to send files from their account on my server to any remote server using FTP. I have implemented a WCF service to do this. The service instantiates a class for each user that spawns a worker thread which performs the FTP operations on the server. The client sends a command to the service, the service finds the worker thread assigned to the client and starts the FTP commands. The client then polls the service every two seconds to get the status of the FTP operation. When the client sends the "disconnect" command, the class and the worker thread doing the FTP operations is destroyed.
The FTP worker thread needed to persist between the client's queries because the FTP processing can take a long time. So, I needed a way for the client to always get the same instance of the FTP class between calls to the service. I implemented this service as a singleton, thus:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class UserFtpService : IUserFtpService
{
private SortedDictionary<string, UserFTPConnection> _clients = new SortedDictionary<string, UserFTPConnection>();
...
}
Where "UserFTPConnection" is the class containing the worker thread and the user's account name is used for the indexing in the dictionary.
The question I have is this: In the books I have read about WCF, the singleton instance is called "the enemy of scalability." And I can see why this is so. Is there a better way to make sure the client gets the same instance of UserFTPConnection between queries to the WCF service other than using a singleton?
Actually, your first problem here is synchronizing access to this static object. Dictionary<TKey, TValue> is not thread-safe, so you must ensure that only one thread accesses it at a time. You should wrap every access to this dictionary in a lock, assuming of course that some methods write and others read; if you will only ever read, you don't need to synchronize. As far as singleton being the enemy of scalability goes, that is really an exaggerated statement, and pretty meaningless without a specific scenario; it depends on the exact scenario and implementation. In your example you've only shown a dictionary, so all we can say is that you need to ensure that no thread reads from this dictionary while another is writing, and that no thread writes to it while another thread is reading.
For example, in .NET 4.0 you could use the ConcurrentDictionary<TKey, TValue> class, which is thread-safe in situations like this.
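A minimal sketch of that, reusing the _clients field from the question (the GetOrCreate accessor and the UserFTPConnection(name) constructor are assumptions for illustration):

using System.Collections.Concurrent;

private readonly ConcurrentDictionary<string, UserFTPConnection> _clients =
    new ConcurrentDictionary<string, UserFTPConnection>();

public UserFTPConnection GetOrCreate(string accountName)
{
    // GetOrAdd is atomic with respect to the dictionary: concurrent callers
    // for the same account all get the same stored instance (though the
    // value factory may run more than once under a race).
    return _clients.GetOrAdd(accountName, name => new UserFTPConnection(name));
}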
One thing's for sure though: while the singleton pattern might or might not be an enemy of scalability depending on the specific implementation, the singleton pattern is the arch-enemy of unit testability in isolation.
If you are going to use a singleton, I'd recommend also setting ConcurrencyMode to ConcurrencyMode.Multiple. For example...
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.Single)]
public class UserFtpService : IUserFtpService
{
}
If you don't do this, your WCF service will be a singleton that only allows one thread to access it at a time, which would certainly affect performance. Of course, you will then need to ensure the thread safety of your collections (as mentioned in the previous answer).
I am writing a custom Windows Workflow Foundation activity, that starts some process asynchronously, and then should wake up when an async event arrives.
All the samples I’ve found (e.g. this one by Kirk Evans) involve a custom workflow service, that does most of the work, and then posts an event to the activity-created queue. The main reason for that seems to be that the only method to post an event [that works from a non-WF thread] is WorkflowInstance.EnqueueItem, and the activities don’t have access to workflow instances, so they can't post events (from non-WF thread where I receive the result of async operation).
I don't like this design, as this splits functionality into two pieces, and requires adding a service to a host when a new activity type is added. Ugly.
So I wrote the following generic service that I call from the activity's async event handler, and that can be reused by various async activities (error handling omitted):
class WorkflowEnqueuerService : WorkflowRuntimeService
{
    public void EnqueueItem(Guid workflowInstanceId, IComparable queueId, object item)
    {
        // Resolve the instance through the runtime so this works from non-WF threads.
        this.Runtime.GetWorkflow(workflowInstanceId).EnqueueItem(queueId, item, null, null);
    }
}
Now in the activity code I can obtain and store a reference to this service, start my async operation, and when it completes, use the service to post an event to my queue (a sketch follows below). The benefits of this: I keep all the activity-specific code inside the activity, and I don't have to add a new service for each activity type.
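Here is a minimal sketch of that activity-side usage (WF 3.x; StartAsyncOperation and the "MyQueue" queue name are hypothetical placeholders):

protected override ActivityExecutionStatus Execute(ActivityExecutionContext context)
{
    var enqueuer = context.GetService<WorkflowEnqueuerService>();
    Guid instanceId = this.WorkflowInstanceId;

    // The completion callback runs on a non-WF thread; the service routes
    // the event back into this instance's queue.
    StartAsyncOperation(result => enqueuer.EnqueueItem(instanceId, "MyQueue", result));

    return ActivityExecutionStatus.Executing;
}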
But seeing the official and internet samples doing it with specialized, non-reusable services, I would like to check whether this approach is OK, or whether I'm creating some problems here?
There is a potential problem here with regard to workflow persistence.
If you create long-running workflows that are persisted to a database, they are unloaded from memory and will not be reloaded until some external event reloads them. But with your approach the workflow is responsible for triggering that event itself, which it cannot do until it is reloaded. And we have a catch-22 :-(
The proper way to do this is by using an external service. And while this might feel like dividing the code into two places, it really isn't. The workflow is responsible for the big picture, i.e. what should be done, and the runtime service is responsible for the actual implementation, i.e. how it should be done. That way you can change the how without changing the why and when.
A follow-up: regardless of all the reasons why it "should be done" using a service, this will be directly supported by .NET 4.0, which provides a clean way for an activity to start asynchronous work while suspending the persistence of the activity.
See
http://msdn.microsoft.com/en-us/library/system.activities.codeactivitycontext.setupasyncoperationblock(VS.100).aspx
for details.
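For reference, here is a minimal sketch of what that looks like in .NET 4.0 using AsyncCodeActivity, the public surface built on that async-block mechanism (DoWork is a stand-in for the real operation):

using System;
using System.Activities;

public class StartSomethingAsync : AsyncCodeActivity<string>
{
    protected override IAsyncResult BeginExecute(
        AsyncCodeActivityContext context, AsyncCallback callback, object state)
    {
        // The activity enters a no-persist zone while the async work runs.
        Func<string> work = DoWork;
        context.UserState = work;
        return work.BeginInvoke(callback, state);
    }

    protected override string EndExecute(AsyncCodeActivityContext context, IAsyncResult result)
    {
        var work = (Func<string>)context.UserState;
        return work.EndInvoke(result);
    }

    private string DoWork()
    {
        return "done"; // stand-in for the lengthy operation
    }
}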