What are the issues with native code in an ASP.NET app?

Is there a summary somewhere of the issues around calling an unmanaged DLL from ASP.NET? I know how to do p-invoke, but does IIS need extra configuration? Is it likely to be a performance or scalability problem? Is it necessary to use COM interop or a mixed-mode assembly? Context: early planning stages of migrating a Windows app to an ASP.NET web app.

The main issue will be that the native code was written in a different context. It expects to be a desktop application, running for a single user, and probably on a single thread. If you run it in ASP.NET, it will be handling multiple users, and will be running on multiple threads at the same time. This can easily break it.
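If you do have to call such code from ASP.NET, one blunt mitigation is to serialize all access to it so the native code never sees more than one thread. A minimal sketch, assuming a hypothetical legacy.dll exporting a ProcessRecord function:

```csharp
using System.Runtime.InteropServices;

// "legacy.dll" and ProcessRecord are placeholders for your own native library.
public static class LegacyInterop
{
    [DllImport("legacy.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern int ProcessRecord(int recordId);

    // One lock serializes all calls, so the single-threaded native code
    // never sees two ASP.NET worker threads at once.
    private static readonly object _gate = new object();

    public static int ProcessRecordSafe(int recordId)
    {
        lock (_gate)
        {
            return ProcessRecord(recordId);
        }
    }
}
```

Note the trade-off: the lock protects the single-threaded native code, but it also turns every call into a scalability bottleneck, which is the other half of the problem described above.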

Probably security would be your big problem; I take it you're not in some sort of partial-trust situation, though.
Beyond that it's obvious: generally you'd avoid it, but if you can't, then do it as little as possible, for as little time as possible.

Related

How secure are singletons in ASP.NET?

Are singletons in ASP.NET shared between users/sessions? And if they are, are there any safety considerations? Think serialization/deserialization vulnerabilities, thread safety, etc.
Is this the way to go for settings from the database that are the same for all users?
Hand crafting the anti-pattern called "Singleton" in C# code is a really bad idea in general, ASP.NET or not.
The singleton lifetime that is supported in the dependency injection framework is a good idea if it does what you need.
I would advise you to only use it for read-only data, like settings, though. You don't have a desktop application as of old: your application might be recycled on the fly, or stretched across multiple nodes in a server farm. So suddenly your "singleton" is only a singleton while there is a single instance of your program running. Building your application so that this becomes a real problem (i.e. the framework would support multiple instances, but your own code is built to fail if you actually run them) is not a smart way to go about this.
So to summarize: singleton lifetime in your dependency injection container? Might be okay, depending on your use case. An actual "Singleton" pattern in your code? Bad. Very bad. It tells me you don't actually do any unit testing, and that there is no plan to take this application beyond a few thousand hobbyist users who don't care if your app is down every time you deploy.
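For illustration, this is roughly what the container-supported singleton lifetime looks like with ASP.NET Core's dependency injection; ISettingsProvider and CachedSettingsProvider are invented names for a read-only settings holder, not from the original question:

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.DependencyInjection;

// Invented names for the sketch: a read-only holder for settings
// loaded once (e.g. from the database) at startup.
public interface ISettingsProvider
{
    string Get(string key);
}

public class CachedSettingsProvider : ISettingsProvider
{
    private readonly IReadOnlyDictionary<string, string> _settings;

    public CachedSettingsProvider(IReadOnlyDictionary<string, string> settings)
    {
        _settings = settings;
    }

    public string Get(string key)
    {
        return _settings[key];
    }
}

public static class CompositionRoot
{
    public static void Register(IServiceCollection services,
                                IReadOnlyDictionary<string, string> settings)
    {
        // Singleton lifetime from the container: one shared, read-only
        // instance per process -- note "per process", not "per farm".
        services.AddSingleton<ISettingsProvider>(new CachedSettingsProvider(settings));
    }
}
```

Because the instance is immutable after construction, it stays correct even when the app is recycled or scaled out: each node simply rebuilds its own copy.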

Referencing an unstable DLL

We are referencing a 3rd party proprietary CLI DLL in our .net project. This DLL is only an interface to their proprietary C++ library. Our project is an asp.net (MVC4/Web API) web application.
The C++ unmanaged library is rather unstable. Sometimes it crashes with e.g. dangling pointers. We have no way of solving it, and using this library is a first-class customer requirement.
When the application crashes, the application pool in IIS doesn't respond anymore. We have to restart it, and doing so takes a couple minutes (yes, that long!).
We would like to keep this unstable DLL from crashing our application. What's the best way of doing it? Can we keep the CLI DLL in a separate AppDomain? How?
Thanks in advance.
I think every answer to this question will be some kind of workaround.
My workaround would be to not interact directly with the DLL from your web application.
Instead, write the requests from the web application to either a message queue or a SQL table. You can then have another application, such as a Windows Service, which reads the requests, interacts with the DLL, and then writes the results back for your web application to read.
I'm not saying that SQL / Message Queues are the right way, I'm more thinking of the general process flow.
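As one possible shape for that flow, here is a minimal sketch of the relay service using classic MSMQ (System.Messaging, .NET Framework only); the queue paths and DoNativeWork are placeholders, not from the original post:

```csharp
using System.Messaging; // classic MSMQ API; reference System.Messaging.dll

class RelayService
{
    static void Main()
    {
        using (var requests = new MessageQueue(@".\private$\dllRequests"))
        using (var results  = new MessageQueue(@".\private$\dllResults"))
        {
            requests.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            while (true)
            {
                var message = requests.Receive();   // blocks until a request arrives
                var payload = (string)message.Body;
                var answer  = DoNativeWork(payload); // the only place the DLL is touched
                results.Send(answer);
            }
        }
    }

    static string DoNativeWork(string payload)
    {
        // P/Invoke into the unstable DLL here. If this process dies,
        // the web application's worker process is unaffected.
        return payload;
    }
}
```

The key property is the process boundary: a crash in DoNativeWork takes down this service, not the IIS application pool, and a service can be restarted far faster than the pool described in the question.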
I had this exact problem with a third party library that accessed protected memory for purposes of interacting with a hardware copy protection dongle. It worked fine in a console or winforms app, but crashed like crazy when called from an IIS application.
We tried several different things, some of which are mentioned in other answers on this page. But ultimately, the best solution for us was to use a very old technology: .NET Remoting. I know it's somewhat frowned upon these days, but it fit this particular need quite well.
The unstable code was placed in a Windows Service application. The web application made remoting calls to this service, which relayed the commands to the third-party library.
Now I'm sure you could do the same thing with WCF, sockets, etc. But remoting was quick and easy to set up, and since we only talk to the same server it works without opening any ports; it just talks over a named pipe.
It does mean a second service to install besides the web application, but that was acceptable in my particular use case.
If you did something similar, and the third-party code actually crashed the service, you could probably write some code in your main application to bring it back up.
So perhaps a process boundary is more useful than an App Domain when you have unstable code to wrangle.
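For reference, a minimal sketch of that remoting setup over a named pipe; DongleFacade and the channel/URI names are illustrative, not from the original deployment:

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// Illustrative facade: calls into the third-party library happen here,
// inside the Windows Service process.
public class DongleFacade : MarshalByRefObject
{
    public string Query(string command)
    {
        return command; // placeholder for the real third-party call
    }
}

class ServiceHost
{
    static void Main()
    {
        // Named-pipe (IPC) channel: same-machine only, no TCP port to open.
        var channel = new IpcChannel("dongleService");
        ChannelServices.RegisterChannel(channel, ensureSecurity: false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(DongleFacade), "facade", WellKnownObjectMode.Singleton);
        Console.ReadLine(); // keep the host process alive
    }
}

// In the web application, the proxy is obtained like this:
// var facade = (DongleFacade)Activator.GetObject(
//     typeof(DongleFacade), "ipc://dongleService/facade");
```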
I would first increase the IIS process recycling rate; maybe the DLL code fails after a certain number of calls, or after the process reaches a certain amount of memory usage.
You can find information on the configuration of IIS 7.0 recycling options here: http://technet.microsoft.com/en-us/library/cc753179(v=ws.10).aspx
In your case I would recycle the process at a specific time, when you know there is less load on the application, and after a certain number of requests (lower than the default) to try to have a "fresh" process most of the time.
The recycling process is graceful in the sense that the old process is not terminated until the one that will replace it is ready, so there should be no noticeable downtime.
More information about the recycling mechanism here: http://technet.microsoft.com/en-us/library/cc745955.aspx
If the above does not solve the problem, I would wrap the calls in my own code that manages the unstable DLL execution.
This code should recover from failures, for example by repeating the failing calls until a result is obtained, and by failing with a graceful error if that is not possible after a number of attempts.
Internally, the calls to the unstable DLL could be made on a spawned thread, or the code could even live in a new external executable that you launch with Process.Start.
This last option has more overhead, but it might be your only option. See this SO question for more information: How do you handle a thread that has a hung call?
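For example, a hedged sketch of the external-executable variant, assuming a hypothetical NativeWorker.exe console wrapper around the unstable DLL:

```csharp
using System;
using System.Diagnostics;

public static class NativeRunner
{
    // "NativeWorker.exe" is a made-up console wrapper around the unstable DLL.
    public static string RunWithRetries(string args, int maxAttempts = 3)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            var startInfo = new ProcessStartInfo
            {
                FileName = "NativeWorker.exe",
                Arguments = args,
                UseShellExecute = false,
                RedirectStandardOutput = true
            };
            using (var process = Process.Start(startInfo))
            {
                // Read stdout asynchronously so a hung child can't deadlock us.
                var output = process.StandardOutput.ReadToEndAsync();
                if (process.WaitForExit(30000) && process.ExitCode == 0)
                    return output.Result;   // success: hand the result back
                if (!process.HasExited)
                    process.Kill();          // hung call: kill it and retry
            }
        }
        throw new InvalidOperationException(
            "Native call failed after " + maxAttempts + " attempts.");
    }
}
```

A crash or hang costs you one short-lived child process instead of the whole application pool.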
I suggest the following solution.
Wrap this DLL in another web application. It can be one of the following; since you already use Web API, that is the most suitable for you:
Simple ASMX Web Service
WCF Service
ASP.NET MVC / Web API Service
Review your P/Invoke code so that it does not introduce bugs of its own (see the declaration sketch after this list, and the following articles):
The Black Art of P/Invoke and Marshaling in .NET
P/Invoke Revisited
Publish this application to IIS with a different application pool.
Use the standard techniques suggested before, such as recycling: I suggest configuring IIS recycling for both memory limits and scheduled times.
IIS process recycling rate
How to limit the memory used by an application in IIS?
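As a hedged illustration of what reviewing the P/Invoke code means in practice, here is a declaration sketch; "vendor.dll" and GetDeviceName are made-up stand-ins for whatever the third party actually exports:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

internal static class VendorNative
{
    // Being explicit about calling convention, character set and buffer
    // marshaling rules out a whole class of P/Invoke corruption bugs --
    // exactly the issues the two articles above cover.
    [DllImport("vendor.dll",
        CallingConvention = CallingConvention.Cdecl,
        CharSet = CharSet.Ansi,
        SetLastError = true)]
    internal static extern int GetDeviceName(StringBuilder buffer, int bufferLength);

    internal static string GetDeviceName()
    {
        var buffer = new StringBuilder(256);
        int result = GetDeviceName(buffer, buffer.Capacity);
        if (result != 0)
            throw new InvalidOperationException(
                "GetDeviceName failed: " + Marshal.GetLastWin32Error());
        return buffer.ToString();
    }
}
```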

Make a .Net DLL Thread-safe for Web App Consumption?

I've written a class in VB.NET that is consumed in an ASP.NET web application running on IIS 7. I use .NET Framework 4.0. The class performs a REST request to an online service and retrieves an XML response containing strongly typed data.
This class also performs caching using a SQL Server database. The class is compiled to a .DLL and referenced by the web application. It works excellently, and now I need to know how to make the class thread-safe.
I have no experience with making code 'thread-safe'. I don't know where to begin in determining whether or not it is thread-safe. I am assuming, because I didn't pay attention to this during development, that it is NOT thread-safe, and that since the web application will be used by many users at the same time, this must be paid attention to.
Can anyone point me to how to test for thread-safety? Are there any resources online that will give me some ideas? Are there any rules of thumb that will tip me off as to where my main concerns are?
The easiest thing to look out for is the use of "static" (C#) or "Shared" (VB.NET) variables. If these variables can be modified throughout the lifetime of the application, you will likely run into threading issues, which very often show up as "random-looking" problems.
I would also be concerned about how you are doing the caching in your database, as multiple .NET threads hitting SQL (for the cache) could cause issues as well, depending on how it's designed.
Bottom line: you are likely going to need to learn more about threading if you want to be sure this will not have issues. Probably the best book I have ever read, covering everything from simple to very complex C# topics, is C# 4.0 in a Nutshell. I would take a look at that book, especially the threading chapters. (Seriously, read the whole thing though.) If you read through it and have a good understanding of the concepts mentioned, you should be fine.
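To make the first point concrete, here is a small illustrative sketch; the counter is invented for the example, not taken from the question:

```csharp
using System.Threading;

public static class RequestStats
{
    private static int _count;

    // UNSAFE under load: _count++ is a read-modify-write, so two ASP.NET
    // worker threads can interleave and silently lose updates.
    // public static void Bump() { _count++; }

    // Thread-safe version: the increment happens atomically.
    public static void Bump()
    {
        Interlocked.Increment(ref _count);
    }

    public static int Count
    {
        get { return Thread.VolatileRead(ref _count); }
    }
}
```

The same reasoning applies to any Shared/static field in the VB.NET class: if two requests can touch it at once, it needs a lock, an Interlocked operation, or to be made immutable.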

Is caching using Application slower or more problematic than using Global.asax.cs static variable?

We have a Webforms application that stores a bunch of settings and terminology mappings (several hundred) that are used throughout the application.
http://www.dotnetperls.com/global-variables-aspnet makes these assertions:
The Application[] collection .... may be slower and harder to deal with.
the Application[] object ...is inefficient in ASP.NET.
Is this recommended? Yes, and not just by the writer of this article. It is noted in Professional ASP.NET by Apress and many sites on the Internet. It works well.
So I am wondering if these statements are true. Can anyone elaborate on why using Application is slower or what kind of problems can crop up if you use Application? I am sort of assuming that any problems or slowdowns might only surface under production loads, so that is why I am asking for real world experience, rather than just benchmarking myself.
I am aware that there are many alternatives to caching (HttpRuntime.Cache, memcached, etc) but specifically I want to know if I need to go back and rewrite my legacy code that uses Application[]. Specifically if in any way I am incurring a performance penalty I would want to get rid of that.
How are you saving these settings? I would recommend the web.config
If you're using the web.config to store these settings (if they're application variables that's a solid place to start), then no need for Application variables.
I try to steer clear of Application-level variables because they are way more expensive than Session variables.
Also, variables in the web.config / app.config files can change without having to change code and/or recompile your project.
The Application class (global variables) only exists in ASP.NET to help with backwards compatibility with classic ASP; you could say it's deprecated.
Another thing you could look into would be caching your settings so you're not always reading from disk.
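If you go the web.config route, the usual pattern looks roughly like this; the key name here is made up for illustration:

```csharp
using System.Configuration; // reference System.Configuration.dll

public static class AppConfig
{
    // "TerminologyServiceUrl" is an illustrative key, not from the question.
    // Read once into a static: appSettings is effectively immutable at
    // runtime, since editing web.config recycles the application domain.
    public static readonly string TerminologyServiceUrl =
        ConfigurationManager.AppSettings["TerminologyServiceUrl"];
}

// web.config:
// <appSettings>
//   <add key="TerminologyServiceUrl" value="https://example.local/terms" />
// </appSettings>
```

The static read-only field also addresses the caching point: after the first read, nothing touches the disk again for the lifetime of the app domain.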

Speeding up a Web Service

I have a web service running and I consume it from my desktop application, which is written on the Compact Framework.
It takes 13 seconds to retrieve 8 results, which is kinda slow. I also expect to be retrieving more results in the future. The database query runs fast.
Two questions: how do I detect where the slowdown occurs? Do I put timers in the web service's code?
I would like to detect whether it is the network or the application code.
This is my first exposure to web services in a real environment, so please bear with me.
I used ASP.NET 2.0 and C# to write a simple web service.
Another good profiler is the EQATEC Profiler. I did a write-up on it here: http://elegantcode.com/2009/07/02/eqatec-profiler-and-net-cf-profiling-and-regular-net/
It works fine for .NET CF projects, and it will let you see whether there are performance issues in unexpected places.
You're already on the right track with adding event logging, and including timers in it. Note that doing so will add to the overall time it takes, so you'll want to remove them after you track down the culprit. Also look into making the same web service call multiple times without re-initiating the connection; that may be a cause as well.
A starting point is to profile your web service to see where the delay is coming from.
Do you know the CLR Profiler? There are some tools you can use to see what is happening:
http://msdn.microsoft.com/en-us/library/ms998579.aspx
The database connectivity from your service to the DB could be a possible cause of the slowdown. Adding timers should do the trick. If the code isn't too huge, you can also look at the coding constructs to come up with an informed view of where exactly things can be slow, then add the timers there. You would get a fair idea of where things are slowing down.
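As a sketch of the timer suggestion, System.Diagnostics.Stopwatch around each step will tell you whether the time goes to the database or to serialization; the two private methods here are placeholder stubs for the service's real steps:

```csharp
using System.Diagnostics;

public class ResultsService
{
    // Placeholders standing in for the service's real work.
    private string[] GetResultsFromDb() { return new[] { "row1", "row2" }; }
    private string Serialize(string[] rows) { return string.Join(",", rows); }

    public string GetResultsTimed()
    {
        var stopwatch = Stopwatch.StartNew();
        var rows = GetResultsFromDb();
        Trace.WriteLine("DB query: " + stopwatch.ElapsedMilliseconds + " ms");

        stopwatch.Restart();
        var xml = Serialize(rows);
        Trace.WriteLine("Serialization: " + stopwatch.ElapsedMilliseconds + " ms");
        return xml;
    }
}
```

If both numbers are small but the client still waits 13 seconds, the remaining time is in the network or in the Compact Framework client itself.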
The two biggest pain points are going to be instantiating the web service reference and transferring all the data over the network. Unless some obvious blunder turns up, I would look at ways of reducing the size of your XML and ways of better handling your web service reference.
All I know about the Compact Framework is that it is a pain to work in. I've worked on a number of web projects, though, and profiling your server and putting in logging to record the time taken will be helpful. If all the time is being taken after the server responds, however, it won't do much more than prove your server is working quickly.
SoapUI is a fantastic Java application for consuming web services. It has a lot of functionality, including time metrics. I would start with that and see how long it takes to consume the same thing your client would. Failing issues there, start with what I recommended above.
