I know that one can start a new thread by
CALL FUNCTION 'ZTEST_RFC'
STARTING NEW TASK 'ABC'.
but as I am writing a web application in ABAP, it feels wrong to have my OO handler parse an HTTP call, extract the request data, call an old-school function module, and then have that FM in turn call an OO object containing all the application logic.
Is there any way to start a new task providing an object and method?
Not really. I understand that this feels wrong, but STARTING NEW TASK uses a lot of the basic RFC mechanisms, and since classes were never really RFC-enabled (though you can see in some internal details that someone at least made some provisions to do so), you still have to rely on basic procedural programming there. On the other hand, I've rarely seen an appropriate use for parallel processing in ABAP...
ASP.NET is known to exhibit what is called "thread agility". In short, it means that multiple threads may be employed to fulfill a single request, although not more than one thread at a time. This is an optimization that means a thread waiting for asynchronous I/O may be returned to the pool and used to service other requests.
However, ASP.NET does not migrate all thread-related data when moving a request. Microsoft either forgot to do so, or thought that using thread-local storage (made easy by the ThreadStatic attribute) was something only the people coding ASP.NET themselves should do.
Based on quick googling, it seems to me that the only way to avoid the issue is to rely on HttpContext instead. The context is indeed migrated if ASP.NET decides to switch threads mid-request, so this overcomes the problem. But it creates a brand new headache instead: it ties your application logic to HttpContext, and therefore to a web context. That's not acceptable in all situations (in fact, I'd say it's unacceptable in most). Besides, since HttpContext is sealed, it cannot easily be mocked or stubbed, and therefore your logic also becomes hard to test.
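For illustration, here is a minimal sketch of what the HttpContext-based approach looks like (the class and key names are mine, purely hypothetical):

using System.Web;

// Request-scoped storage tied to HttpContext. The Items dictionary travels
// with the request, so values survive ASP.NET switching threads mid-request.
public static class RequestScoped
{
    public static void Set(string key, object value)
    {
        HttpContext.Current.Items[key] = value;
    }

    public static object Get(string key)
    {
        // This only works where HttpContext.Current is non-null,
        // i.e. inside a web request (not in a timer job or a unit test).
        return HttpContext.Current.Items[key];
    }
}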
According to this (old) blog post, CallContext does NOT work, which is pretty infuriating given that a call context is conceptually precisely a logical thread!
Is there a simple way to reliably implement "per-LOGICAL-thread" isolation that will work in ASP.NET contexts as well as other contexts?
If not, does anyone know of a lightweight third-party framework that solves the problem? Does StructureMap behave correctly when ASP.NET migrates threads?
I would like a general answer, but in case anyone wonders, the specific use case I'm looking at is the use of Entity Framework in a SharePoint context. We're unfortunately stuck with SP-2010 and EF 3.5 for a while. EF basically requires that data be saved using the same context it was originally read from, or else you have to keep track of changes yourself. I would like to introduce a "current model" concept: the first time the model is called upon while processing an HTTP request, it should be instantiated, and that same model instance should then be used for the duration of the request. But the code relying on "Model.Current" should also work if executed in the context of a timer job. I'm fine with the timer job code explicitly disposing of the model when done with it (a task I'd like to give to a handler for HttpApplication.EndRequest in the SharePoint web context).
There may be reasons not to do this, and those would be interesting too, but I would really appreciate learning of a way to achieve "logical thread isolation" in an ASP.NET context, as it would be remarkably useful.
There is a nice post related to the problem: Implicit Async Context ("AsyncLocal").
If I understood everything correctly, the logical call context, i.e. CallContext.LogicalGetData and CallContext.LogicalSetData, makes it possible to flow immutable data correctly, provided you live in the world past .NET 4.5. The immutability limitation is a nuisance, but still... this is the way to go.
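To illustrate (a minimal sketch with invented names, assuming .NET 4.5+ on the full Framework, where CallContext lives in System.Runtime.Remoting.Messaging):

using System;
using System.Runtime.Remoting.Messaging;
using System.Threading.Tasks;

static class AmbientModel
{
    private const string Slot = "CurrentModel"; // hypothetical slot name

    public static void Set(string model)
    {
        // Logical call-context data follows the logical thread of control,
        // across await points and thread-pool switches. Since .NET 4.5 the
        // context is copied on write, which is why stored values should be
        // immutable.
        CallContext.LogicalSetData(Slot, model);
    }

    public static string Get()
    {
        return (string)CallContext.LogicalGetData(Slot);
    }
}

static class Demo
{
    static void Main()
    {
        AmbientModel.Set("request-42");

        Task.Run(async () =>
        {
            await Task.Yield(); // may resume on a different pool thread
            Console.WriteLine(AmbientModel.Get()); // still prints "request-42"
        }).Wait();
    }
}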
I'm writing web services in C++/CLI (not my choice) using Microsoft's Web API. A lot of functions in Web API are async, but because I'm using C++/CLI, I don't get the async/await support of C# or VB. So the fallback position is to use ContinueWith() to schedule a continuation delegate for reading the async task's result safely.
However, because C++/CLI also doesn't support inline anonymous delegates or managed lambdas, every delegate continuation must be written as a separate function somewhere. That quickly turns into spaghetti with the number of async functions in Web API.
So, to avoid the deadlock issues of Task<T>::Result, I've been trying this:
[HttpGet, Route( "get/some/dto" )]
Task< SomeDTO ^ > ^ MyActionMethod()
{
    return Task::Run( gcnew Func< SomeDTO ^ >( this, &MyController::MyActionMethod2 ) );
}

SomeDTO ^ MyActionMethod2()
{
    // execute code and use any task->Result calls I need without deadlocking
}
Okay, so I know this isn't great, but how bad is it? I don't yet understand enough of the guts of Web API or ASP.NET to comprehend the performance or scaling ramifications this will have.
Also, what other consequences may this have that aren't necessarily related to performance? For example, exceptions get wrapped in an extra AggregateException, which represents additional complexity and work for handling exceptions.
Your memory usage will increase with your application's parallelism. For every concurrent call to MyActionMethod you will need a separate thread with its own stack. That will cost you about 1 MB of RAM for each concurrent call. If MyActionMethod runs long enough so that 10000 instances run at once, you're looking at 10 GB of RAM. There is also CPU overhead in setting up each thread.
If concurrency is low, dropping async support won't be a problem. In that case, don't bother with Task::Run. Just change MyActionMethod to return SomeDTO^ (no Task wrapper).
Another potential concern is that you lose easy use of cancellation tokens. However, for Web API it's usually fine to just let an exception propagate back to Web API, which ends up cancelling the synchronous call anyway.
Finally, if you were planning on performing any operation within your action method in parallel, you'll still need to use ContinueWith to accomplish that. Going non-async by default means you'll always perform one operation at a time. Fortunately, it's often just fine to do so.
Okay, so I know this isn't great, but how bad is it?
It's difficult to answer this without load-testing your specific scenario. But you can walk through the known semantics (taken largely from my blog).
First, when a request comes in, ASP.NET executes your handler on a thread pool thread within that request context. Your request handler calls Task.Run, which takes another thread from the thread pool and executes the actual request logic on it. The handler then returns the task returned from Task.Run; this releases the original request thread back to the thread pool.
Then, the Task.Run delegate will block on any asynchronous parts. So, this pattern has the scaling disadvantages of a regular synchronous handler, plus an extra thread context switch. Also, it uses a thread from the ASP.NET thread pool, which is not necessarily a bad thing, but in some scenarios it may throw off the ASP.NET thread pool heuristics.
Also, what other consequences may this have that aren't necessarily related to performance? For example, exceptions get wrapped in an extra AggregateException, which represents additional complexity and work for handling exceptions.
Yes, the exceptions from any .Result or Wait() calls will be wrapped in AggregateException. You may be able to avoid this by calling .GetAwaiter().GetResult() instead.
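In C# terms (C++/CLI exposes the same Task members), a small sketch of the difference:

using System;
using System.Threading.Tasks;

static class Demo
{
    static void Main()
    {
        // Build a task that has already faulted.
        var tcs = new TaskCompletionSource<int>();
        tcs.SetException(new InvalidOperationException("boom"));
        Task<int> faulted = tcs.Task;

        try
        {
            var r1 = faulted.Result; // throws AggregateException wrapping the original
        }
        catch (AggregateException ex)
        {
            Console.WriteLine(ex.InnerException.Message); // boom
        }

        try
        {
            var r2 = faulted.GetAwaiter().GetResult(); // rethrows the original exception
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine(ex.Message); // boom
        }
    }
}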
Another important consideration is that the code executing within the Task.Run is executing without a request context. So, ambient data like HttpContext.Current, current culture, thread principal, etc. are not going to be set correctly. You'll have to capture any important data before calling Task.Run and pass it down manually.
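For example, a hedged C# sketch (the controller and helper names are mine; in C++/CLI you would pass the captured values the same way, e.g. through a small state object):

using System.Globalization;
using System.Threading;
using System.Threading.Tasks;
using System.Web;

public class ExampleController // hypothetical controller
{
    public Task<string> MyActionMethod()
    {
        // Capture request-bound ambient data while still on the request thread.
        CultureInfo culture = Thread.CurrentThread.CurrentCulture;
        string userAgent = HttpContext.Current.Request.UserAgent;

        // Inside Task.Run there is no request context, so HttpContext.Current
        // cannot be relied on there; pass the captured values explicitly.
        return Task.Run(() => DoWork(culture, userAgent));
    }

    private static string DoWork(CultureInfo culture, string userAgent)
    {
        return string.Format(culture, "Agent: {0}", userAgent);
    }
}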
I'm trying to fix a problem in a classic ASP application, but I am inexperienced with it. I tried to find more information but was unable to.
The app instantiates a COM object for data retrieval which is not thread-safe, so the following instructions are used:
Set comObject = CreateObject("comServer.comObject")
returnValue = comObject.DoWork(.......)
...
Set comObject = Nothing
However, when processing two different HTTP requests at the same time, the second one seems to overwrite the first, giving the first requester an error. It looks as if the comObject variable is shared between the requests.
How can the object be instantiated in such a way that every separate request in IIS gets its own instance of the comObject?
Without knowing what the object does or how it does it, it's impossible to give specific advice. A general description will have to do:
The object is broken/buggy. It is the object's responsibility to handle the problem.
A COM object is supposed to handle all threading issues internally, or defer to COM STA apartments if it cannot do it, or doesn't want to (for those aspects that an STA can handle). This goes deep into the design of the object.
Regardless of the COM apartment choice, a DoWork(...) method whose semantics prevent separate COM objects in separate threads from handling simultaneous calls is a seriously problematic design at best. A proper design would either include mechanisms to handle the conflict explicitly, or hide the conflict from the calling code and handle it internally.
Depending on the details of what DoWork() does, there might be ways to fix the object in such a way that the calls can succeed in parallel, or block each other so the calls are effectively serialized, or to cause the second call to throw a "You already called me" error. Again, which approach is more appropriate depends heavily on what the method does.
If you can't modify this broken component, your best option would be to write a COM wrapper that serializes access to the real object.
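To show the serialization idea only, here is a C# sketch (a real fix would itself have to be a COM component, e.g. written in C++ or VB6; the ProgID is taken from the question and the DoWork signature is assumed):

using System;

// Illustrative wrapper that serializes all access to the broken object.
public class SerializingComWrapper
{
    private readonly object _gate = new object();
    private readonly dynamic _inner =
        Activator.CreateInstance(Type.GetTypeFromProgID("comServer.comObject"));

    public object DoWork(object arguments)
    {
        lock (_gate)
        {
            // Only one caller at a time ever reaches the real object.
            return _inner.DoWork(arguments);
        }
    }
}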
In any case, there is nothing reasonable you can do from the client (ASP VBScript) side.
I am creating a node.js module which communicates with a program through XML-RPC. The program's API changed recently, after a certain version. For this reason, when a client is created (createClient), I want to ask the program for its version (through XML-RPC) and base my API definitions on that.
The problem with this is that, because I do the above asynchronously, there exists a possibility that the work has not finished before the client is actually used. In other words:
var client = program.createClient();
client.doSomething();
doSomething() will fail because the API definitions have not been set, I imagine because the HTTP XML-RPC response has not yet returned from the program.
What are some ways to remedy this? I want to be able to have a variable named client and work with that, as later I will be calling methods on it to get information (which will be returned via a callback).
Set it up this way:
program.createClient(function (err, client) {
  if (err) throw err
  client.doSomething()
})
Any time there is IO, it must be async. Another approach to this would be with a promise/future/coroutine type thing, but imo, just learning to love the callback is best :)
I'm setting up a web service in Axis2 whose job it will be to take a bunch of XML and put it on a queue to be processed later. I understand it's possible to set up a client to invoke a synchronous web service asynchronously by using an "invokeNonBlocking" operation on the "Call" instance (ref http://onjava.com/pub/a/onjava/2005/07/27/axis2.html?page=4).
So, my question is: is there any advantage to using an asynchronous web service in this case? It seems redundant, because 1) the client isn't blocked, and 2) the service has to accept and write the XML to the queue regardless of whether it's synchronous or asynchronous.
In my opinion, asynchronous is the appropriate way to go. A couple of things to consider:
Do you have multiple clients accessing this service at any given moment?
How often is this process occurring?
It does take a little more effort to implement the async methods, but I guarantee that in the end you will be much happier with the result. For one, you don't have to manage threading. Your primary concern might just be the volatility of the data in the queue (i.e. race/deadlock conditions).
A "sync call" seems appropriate, I agree.
If the request from the client isn't time-consuming, then I don't see the advantage either in making the call asynchronous. From what I understand of the situation in question here, the web service will perform its "processing" against the request some time in the future.
If, on the contrary, the request had required a time-consuming process, then an async call would have been appropriate.
After ruminating some more about it, I'm thinking that the service should be asynchronous. The reason is that it would put the task of writing the data to the queue into a separate thread, thus lessening the chances of a timeout. It makes the process more complicated, but if I can avoid a timeout, then it's got to be done.