I'm having problems because of a poorly written third-party library that our system heavily depends on. This library is not thread-safe (because of some bugs and static variables) and I need to use it in an ASP.NET web service, which handles each user request in a separate thread.
I've tried many solutions for this problem. The best solution so far, in my opinion, is to let subprocesses handle the requests. One subprocess will listen for and handle the requests of one user, so I can synchronize access to the library code per user, which is much better than anything I can do while sharing static variables between requests.
How can I route requests received via IPC to the appropriate WebMethods without reinventing the wheel? If possible, I would like to use the classes from .NET that handle this in a normal ASP.NET web service, but I'm having a hard time finding their names.
TL;DR: I have a class MyWebService (inheriting from System.Web.Services.WebService) with some methods marked with WebMethodAttribute, and I want to pass a made-up HttpRequest (or HttpContext) to it and tell it "handle this as if you were receiving it from a real HTTP server, even though the current process is a console application".
First, you may want to consider using WCF instead of ASMX, which is a legacy technology, kept only for backwards compatibility.
Second, you have another option: ensure that only a single thread ever uses the third-party library at a time. Placing lock blocks around all access to the third-party library may solve the problem.
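A minimal sketch of that option, assuming a hypothetical ThirdPartyLib with the problematic static state; every call site must go through the same gate for this to work:

using System;

// All access to the non-thread-safe library funnels through this gate.
public static class ThirdPartyGate
{
    private static readonly object _sync = new object();

    public static TResult Call<TResult>(Func<TResult> operation)
    {
        lock (_sync) // only one thread inside the library at a time
        {
            return operation();
        }
    }
}

// Usage from a WebMethod (ThirdPartyLib.DoWork is made up):
// var result = ThirdPartyGate.Call(() => ThirdPartyLib.DoWork(input));

Note that this serializes all users behind a single lock, which is exactly the throughput cost the per-user subprocess idea is trying to avoid.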
I understand that in order for an ASP.NET handler to support session state you need to implement both IHttpHandler and IRequiresSessionState, but why isn't session state provided by default? If for performance reasons, then wouldn't it be better to have an interface like IDoesNotRequireSessionState?
It's because the session lock blocks concurrent operations, and a handler is usually used for long-running operations, like generating and downloading a file. If you hold the session during a long operation, you block the rest of your pages for that user.
Also, a handler is built with the idea of doing the minimum required to get a response.
About the session lock:
Web app blocked while processing another web app on sharing same session
jQuery Ajax calls to web service seem to be synchronous
ASP.NET Server does not process pages asynchronously
Replacing ASP.Net's session entirely
If for performance reasons, then wouldn't it be better to have an interface like IDoesNotRequireSessionState?
Absolutely not, because then everybody implementing a handler would have to know about the existence of this interface. An HTTP handler is the fastest thing, in terms of performance, you might ever get from ASP.NET. So if you want to pollute it with crap like session state, you'd better do it explicitly, taking full responsibility for doing so, by implementing some interface that you should know about.
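For reference, a minimal sketch of opting in explicitly; if you only need to read the session, implement IReadOnlySessionState instead, which avoids taking the exclusive session lock:

using System.Web;
using System.Web.SessionState;

public class MyHandler : IHttpHandler, IRequiresSessionState
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // context.Session is only available because of IRequiresSessionState.
        context.Session["LastHit"] = System.DateTime.UtcNow;
        context.Response.Write("ok");
    }
}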
I am very new to remoting in Flex. I am using Flex 4.5 and talking to a web application built by someone else on the team using AMF. They have used Zend_AMF to serialize and unserialize the data.
One of the main issues I am facing at the moment is that I will need to talk to a lot of services (about 60 or so).
From examples on remoting I have seen online and from Adobe, it seems that I need to define a RemoteObject for EACH service:
<mx:RemoteObject id="testservice" fault="testservice_faultHandler(event)" showBusyCursor="true" destination="account"/>
With so many services, I think I might have to define about 60 of those, which I don't think is very elegant.
At the same time, I have been playing with Pinta to test out the AMF endpoint. Pinta seems to allow one to define an arbitrary number of services, methods, and parameters without any of these limitations. Digging through the source, I found that they have actually drilled down deep into the remoting layer and are handling a lot of low-level stuff.
So, the question is: is there a way to approach this problem without having to define loads of RemoteObjects, and without having to go down too deep and handle low-level remoting events ourselves?
Cheers
It seems unusual for an application to require that many RemoteObjects. I've worked on extremely large applications, and we typically end up with no more than ~6-10 RemoteObject declarations.
Although you don't give a lot of specifics in your post about the variations of RemoteObjects, I suspect you may be confusing RemoteObject with Operation.
You typically declare a RemoteObject instance for every endpoint in your application. However, that endpoint can (and normally does) expose many different methods to be invoked. Each of these server-side methods results in a client-side Operation.
You can explicitly declare these if you wish; however, the RemoteObject builds Operations for you if you don't declare them:
// point the RemoteObject at a destination (normally declared in MXML
// or created in code)
var remoteObject:RemoteObject = new RemoteObject();
remoteObject.destination = "account";

// creates an Operation for the saveAccount RPC call, invokes it,
// and returns the AsyncToken
var token:AsyncToken = remoteObject.saveAccount(account);
token.addResponder(this);
// ... etc.
If you're interacting with a single server layer, you can often get away with a single RemoteObject pointing to a single destination on the API which exposes many methods. This approach is often referred to as an API Façade, and it can be very useful if backed with a solid dependency-injection discipline on the API.
Another common approach is to segregate your API methods by logical business area, e.g., AccountService, ShoppingCartService, etc. This has the benefit of being able to mix and match protocols between services (e.g., AccountService may run over HTTPS).
How you choose to split up these RemoteObjects is up to you. However, 60 in a single application sounds a bit suspect to me.
We have a Flex application which relies heavily on data-driven content supplied via ASP.NET. Currently the majority of this data is provided as ASP.NET objects which are XML-serialised and sent via a simple ASHX handler. This is then parsed via E4X in singleton classes to populate either themselves or arrays of subclasses, which are then available to the rest of the application without making additional data calls.
This works but is it the best way? I've read quite a few articles discussing the subject but couldn't really find any consensus.
Should I look into converting these to Web Services? If so, how should I manage the bindings: automatically import them via Flex, or build my own? What are the pros and cons? An important factor in this decision is speed; lowest latency and highest throughput are essential.
As a separate matter, our application doesn't sit at the root of the domain, and when in local development it makes data calls to our development servers. As a result we add flashvars to the application to specify the appRoot, which is then appended to the service URL as necessary.
MyService.url = GeneralData.ApplicationRootUrl + "Services/foobar.ashx";
Is this the best way? I have since discovered the rootURL property; should I be using this, and how does it work in this context? If I were to convert the services to Web Services, how would I go about implementing the same functionality to allow local development?
Many thanks
This works but is it the best way?
Best is very subjective based on your situation. If at all possible, I would recommend you use an AMF gateway. That way your objects are converted directly from server-side objects (.NET classes) to client-side objects (AS3 classes). This is a big time saver because you don't have to manually create your XML on the back end, nor manually process it on the front end. Also, the binary AMF format is going to give much smaller packets than XML or a SOAP web service would.
For .NET AMF options, I'd look into WebORB or FluorineFx.
A Flex application is always loaded in the browser, so you can use a relative URL and your application will connect to the same server it was loaded from:
MyService.url = "/Services/foobar.ashx";
"/" will certainly append host where it came from. And it is always good practice to connect to same host where the flash is loaded from.
Secondly, SOAP web services use xml serialization, so if you use your handler to do e4x serialization or you use SOAP web service generator of Flash Builder, speed will be almost same. SOAP web service will certainly be little slower, but the difference will be in micro seconds to milli seconds.
However, with Web services, your development will speed improve as you will not have to create proxy classes.
I'm looking into building an ASP.NET MVC application that exposes (other than the usual HTML pages) JSON and XML REST services, as well as Web Sockets.
In a perfect world, I would be able to use the same URLs for the Web Sockets interface as I do for the other services (and determine which data to return by what the user agent requests) but, knowing that IIS wasn't built for persistent connections, I need to know if there's a way that I can accept (and possibly even handshake) the Web Sockets connection and then pass the connection off to another service running on the server.
I do have a workaround in mind if this isn't possible: it basically involves using ASP.NET to check for the Web Sockets connection upgrade headers, and responding with an HTTP/1.1 302 Found that points to a different host, which has my Web Sockets service configured to listen directly on the appropriate endpoint(s).
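A rough sketch of that workaround; the handler name and target host are made up, and the header check follows the WebSocket upgrade handshake:

using System;
using System.Web;

public class WsRedirectHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        var upgrade = context.Request.Headers["Upgrade"];
        if (string.Equals(upgrade, "websocket", StringComparison.OrdinalIgnoreCase))
        {
            // Hand the client off to the host that actually speaks WebSockets.
            context.Response.StatusCode = 302;
            context.Response.RedirectLocation = "http://ws.example.com" + context.Request.RawUrl;
            return;
        }
        // ... otherwise fall through to the normal JSON/XML handling ...
    }
}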
If I completely understand your goal, I believe you can use the IIS7/7.5 Application Request Routing module to accomplish this.
Here's a quick reference: http://learn.iis.net/page.aspx/489/using-the-application-request-routing-module/
Rather than 302 responses, you could use ISAPI_rewrite to direct requests to an appropriate endpoint (and manipulate the HTTP headers to get them there):
http://www.isapirewrite.com/docs/
Otherwise, no: IIS cannot natively hand off an HTTP connection. The current MSFT method is to use a 302, or to have something else intercept the raw socket and perform header manipulation before the request reaches IIS (or whatever other application).
It strikes me that this would be a better question to ask Microsoft than to ask us. Web Sockets is new technology, and rather than looking for a hack, you might want to ask Microsoft how they plan to support it. IIS is their software. Poke around on http://iis.net (maybe in http://forums.iis.net) and see what you learn.
The way to do this is to use a unique session ID that is associated with the HTTP session. From the description, it seems like you might want to scope this to a single HttpApplication instance, but this is not necessary (you may also persist a session across many application instances). Either way, this session ID needs to be attached somehow to each HTTP request (with a cookie, a query string, form data, or a static variable on the HttpApplication instance). Then you persist the identifying information about the HTTP session somewhere, keyed by that ID.
This identifying information may vary depending on your needs, but it could entail the entire HTTP request or just some stripped-down representation that serves your particular purpose.
Using this session ID somewhere in the HTTP request allows you to restore whatever information you need to call and interact with the appropriate services. The service instances may also need to be scoped to the session.
Basically, what I am suggesting is that you NOT pass the HTTP connection directly to an external process, but instead pass the necessary data to the external process and create a mechanism for sending callback data. Looking into the mediator pattern may be helpful in understanding what I mean here: http://en.wikipedia.org/wiki/Mediator_pattern . I hope this helps.
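A minimal sketch of that hand-off; RequestSnapshot and IExternalService are illustrative names, not an existing API:

using System.Collections.Specialized;
using System.Web;

// Just the data the external process needs, keyed by session ID.
public class RequestSnapshot
{
    public string SessionId;
    public string Path;
    public NameValueCollection Form;
}

// The mediator: the web tier never hands over the socket itself.
public interface IExternalService
{
    void Send(RequestSnapshot snapshot);
}

public class HandoffModule
{
    private readonly IExternalService _service;

    public HandoffModule(IExternalService service) { _service = service; }

    public void Handle(HttpContext context)
    {
        var snapshot = new RequestSnapshot
        {
            SessionId = context.Session.SessionID,
            Path = context.Request.Path,
            Form = new NameValueCollection(context.Request.Form)
        };
        // Results come back over a separate callback channel, not this socket.
        _service.Send(snapshot);
    }
}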
I'm wondering whether it is a good approach in an ASP.NET project to have a field that "holds" a connection to the DB as a static field (Entity Framework):
public class DBConnector
{
    public static AdServiceDB db;
    // ...
}
That means there would be only one object for the entire application to communicate with the DB. I'm also wondering whether that object will pick up data changes from the DB tables, or maybe it shouldn't be static and I should create a connection dynamically. What do you think?
With connection pooling in .NET, generally creating a new connection for each request is acceptable. I'd evaluate the performance of creating a new one each time, and if it isn't a bottleneck, then avoid using the static approach. I have tried it before, and while I haven't run into any issues, it doesn't seem to help much.
A singleton connection to a database that is used across multiple web page requests from multiple users presents a large risk of cross-contamination of personal information across users. It doesn't matter what the performance impact is, this is a huge security risk.
If you don't have users or personal information, perhaps this doesn't apply to your project right now, but always keep it in mind. Databases and the information they contain tend to evolve in the direction of more specifics and more details over time.
This is why you should not use a singleton design pattern with your database connection
Hope it helps
Is using a singleton for the connection a good idea in ASP.NET website
Bad idea. Besides the potential mistakes you could make by not closing connections properly and so forth, accessing a static object makes it very difficult to unit test your code. I'd suggest using a class that implements an interface, and then use dependency injection to get an instance of that class wherever you need it. If you determine that you want it to be a singleton, that can be determined in your DI bindings, not as a foundational point of your architecture.
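A minimal sketch of that approach; IAccountData, EfAccountData, the Accounts set, and the Ninject binding are illustrative assumptions (AdServiceDB is the context from the question):

public interface IAccountData
{
    string GetAccountName(int id);
}

public class EfAccountData : IAccountData
{
    public string GetAccountName(int id)
    {
        // A fresh context per operation; connection pooling keeps this cheap.
        using (var db = new AdServiceDB())
        {
            var account = db.Accounts.Find(id); // assumes an Accounts set
            return account == null ? null : account.Name;
        }
    }
}

// The lifetime decision lives in the container bindings, e.g. with Ninject:
//   Bind<IAccountData>().To<EfAccountData>().InRequestScope();
// Unit tests can bind IAccountData to a fake with no database at all.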
I would say no.
A database connection should be created when needed to run a query and cleaned up after that query is done and the results are fetched.
If you use a single static instance to control all access to the DB, you may lose out on the automatic Connection Pooling that .NET provides (which could impact performance).
I think the recommendation is to "refresh often."
Since none of the answers have been marked as an answer, and I don't believe any have really addressed the question or the issue thereof...
In ASP.NET, you have Global, or HttpApplication. The way this works is that IIS caches instances of your "application" (that is, instances of your Global class). Normally (with default settings in IIS) you can have up to 10 instances of Global, and IIS will pick any one of these instances to satisfy a request.
Further, keep in mind that there could be multiple requests at any given moment in time, which means multiple instances of your Global class will be used. These instances could be ones that were previously instantiated and cached, or new instances (depending on the load your IIS server is seeing).
IIS also has a notion of app pools and worker processes. A worker process hosts your application and all the instances of your Global class (as discussed earlier). This translates to an App Domain (in .NET terms).
Just to re-cap before moving on…
Multiple instances of your Global class will exist in the worker process for your application (in IIS), each one waiting to be called upon by IIS to satisfy a request. IIS will pick any one of these instances; they are effectively threads that IIS has cached, and each thread has an instance of your Global class. When a request comes in, one of these threads is called upon to handle the request-response cycle. If multiple requests arrive simultaneously, then multiple threads (each containing an instance of your Global class) will be called upon to satisfy each of those requests.
Moving on…
Since there will be only one instance of a static class per App Domain, you'll effectively have one instance of your class shared across all (up to 10) instances of Global. This is a bad idea, because when multiple simultaneous requests hit your server, they'll either be blocked (if your class's methods use locks) or threads will step on each other's toes. In other words, this approach is not inherently thread-safe, and if you make it thread-safe using thread synchronization primitives, you're unnecessarily blocking threads and hurting the performance and scalability of your web application, with no gain whatsoever.
The real solution (and I use this in all my ASP.NET apps) is to have an instance of your BLL or DAL (as the case may be) per instance of Global. This will ensure the following:
1. Multiple threads are not an issue, since IIS guarantees one request-response cycle per instance of Global at any given moment in time, so your code is inherently thread-safe.
2. You only have up to 10 instances of your BLL/DAL up and running at any given moment, ensuring that you're not constantly creating and disposing instances of (typically) large objects to satisfy each request, which on busy sites is huge.
3. You get really good performance due to #2 above.
You do have to ensure that your BLL/DAL is truly stateless, or that you reset any state at the start of each request-response cycle. You can use the BeginRequest event in Global to do that if you need to, as in the sketch below.
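A sketch of what this looks like, where MyBll is a stand-in for your own business-layer class:

using System;
using System.Web;

// Stand-in for your (stateless, or resettable) business layer.
public class MyBll
{
    public void Reset() { /* clear any per-request state */ }
}

public class Global : HttpApplication
{
    // Instance field, NOT static: one per HttpApplication instance, and IIS
    // runs only one request-response cycle per instance at a time.
    private readonly MyBll _bll = new MyBll();

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        _bll.Reset(); // reset state at the start of each cycle
    }
}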
If you go down this route, be sure to read my blog post on this
Instantiating Business Layers – ASP.NET