I want to understand how to work with unmanaged resources and when I need the SafeHandle class. What kind of problem statement would make me say: "Oh, here I need the SafeHandle class!"?
I would be grateful for links to articles, examples, and explanations.
I think the MSDN documentation is pretty clear in its definition:
The SafeHandle class provides critical finalization of handle
resources, preventing handles from being reclaimed prematurely by
garbage collection and from being recycled by Windows to reference
unintended unmanaged objects. Before the .NET Framework version 2.0,
all operating system handles could only be encapsulated in the IntPtr
managed wrapper object.
The SafeHandle class contains a finalizer that ensures that the handle
is closed and is guaranteed to run, even during unexpected AppDomain
unloads when a host may not trust the consistency of the state of the
AppDomain.
For more information about the benefits of using a SafeHandle, see
Safe Handles and Critical Finalization.
This class is abstract because you cannot create a generic handle. To
implement SafeHandle, you must create a derived class. To create
SafeHandle derived classes, you must know how to create and free an
operating system handle. This process is different for different
handle types because some use CloseHandle, while others use more
specific methods such as UnmapViewOfFile or FindClose. For this
reason, you must create a derived class of SafeHandle for each
operating system handle type; such as MySafeRegistryHandle,
MySafeFileHandle, and MySpecialSafeFileHandle. Some of these derived
classes are prewritten and provided for you in the
Microsoft.Win32.SafeHandles namespace.
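For a concrete feel, here is a minimal sketch of what such a derived class looks like for a Win32 file handle (the class name and the CloseHandle P/Invoke are illustrative; the essential part is overriding ReleaseHandle):

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

// Minimal sketch: a SafeHandle wrapper for a Win32 file handle.
// SafeHandleZeroOrMinusOneIsInvalid already treats 0 and -1 as invalid handle values.
internal sealed class MyFileSafeHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    // The interop marshaler creates instances when a P/Invoke call (e.g. CreateFile)
    // declares this type as its return type; ownsHandle: true means the critical
    // finalizer will release the handle even if Dispose is never called.
    private MyFileSafeHandle() : base(true) { }

    // Runs exactly once (on Dispose or during finalization) to free the OS handle.
    protected override bool ReleaseHandle()
    {
        return CloseHandle(handle);
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool CloseHandle(IntPtr hObject);
}

The reason the docs call for one derived class per handle type is visible here: another handle kind would swap CloseHandle for FindClose, UnmapViewOfFile, or whatever its release function is. So the problem statement is essentially: "I hold a raw OS handle (an IntPtr obtained from native code) and I need it released reliably, exactly once, even if the caller forgets to Dispose or the AppDomain is torn down."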
This is related to this question. Context: .NET Core 3.1, using Microsoft.Extensions.Logging.
Loggers are singletons in the application's IHost. If I inject (DI) an ILogger<T> into my class or method, the injected object is the same instance other classes or methods receive if they ask for ILogger<T>. This raises the question of what happens when I use logger.BeginScope($"Processing {transactionId}") in one thread. What happens with the other threads? Do they change the logging scope as well? Do logging scopes get mixed up? If they don't, how does that work, given that their loggers are the same object? If they do mix scopes, how can I make two threads use different logging scopes for a given ILogger<T> type?
This depends on the logger implementation, but typically scopes are implemented using a stack-like structure held within an AsyncLocal.
A call to BeginScope pushes a new item onto that stack, and the corresponding Dispose pops it back off.
When the logger is invoked via LogInformation or otherwise, the data on the current stack is copied and written to the console or whatever output that logger instance is configured for.
The AsyncLocal is what gives the framework the ability to flow this information across threads and tasks.
For reference, check out the Microsoft.Extensions.Logging.Console source code:
ConsoleLogger.cs#L67
LoggerExternalScopeProvider.cs#L14
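To make the behaviour concrete, here is a small usage sketch (the class, the _logger field, and transactionId are illustrative); because the scope stack lives in an AsyncLocal, two requests logging concurrently never see each other's scopes:

using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public class TransactionProcessor
{
    private readonly ILogger<TransactionProcessor> _logger;

    public TransactionProcessor(ILogger<TransactionProcessor> logger) => _logger = logger;

    public async Task ProcessAsync(string transactionId)
    {
        // BeginScope pushes onto the AsyncLocal-backed stack of *this* async flow only.
        using (_logger.BeginScope("Processing {TransactionId}", transactionId))
        {
            _logger.LogInformation("Started");   // carries this flow's scope
            await Task.Delay(100);               // stand-in for real work
            _logger.LogInformation("Finished");  // still the same scope after the await
        } // Dispose pops the scope; other threads and requests are unaffected
    }
}

So the shared ILogger<T> instance is safe to use from many threads; it is the scope state, not the logger, that is per-async-flow.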
We're writing a class we'll use in our ASP.NET site. This class will pull down some JSON using HttpClients and such, and use it to provide information to other clients.
Some of this information will change very infrequently and it doesn't make sense to query for it on each client request.
For that reason I'm thinking of making a static constructor in this new class that fetches the slow-changing information and stashes the results in a few static member variables. That'll save us a few HTTP requests down the line, I think.
My question is, how long can I expect that information to be there before the class is recycled by ASP.NET and a new one comes into play, with the static constructor called once more? Is what I'm trying to do worth it? Are there better ways in ASP.NET to go about this?
I'm no expert on ASP.NET thread pooling, how it works, or what objects get recycled and when.
Typical use of the new class (MyComponent, let's call it) would be as below, if that helps any.
//from mywebpage.aspx.cs:
var myComponent = new MyComponent();
myComponent.doStuff(); //etc etc.
//Method calls like the above may rely on some
//of the data we stored from the static constructor call.
Static fields last as long as the AppDomain. The strategy you have in mind is a good one, but consider that the ASP.NET runtime may recycle the app pool, or someone may restart the web site/server.
As an extension to your idea, save the data locally (via a separate service dedicated to this, or simply to the hard drive) and refresh it at specific intervals as required.
You will still use a static field in ASP.NET to store the value, but you will acquire it from the above local service or disk. Here I recommend a System.Lazy with thread-safe instantiation and publication options (see the Lazy<T> constructor documentation).
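A rough sketch of that, assuming a hypothetical LoadSlowChangingData method that calls the dedicated service or reads the local copy from disk:

using System;
using System.IO;
using System.Threading;

public class MyComponent
{
    // Thread-safe lazy initialization: ExecutionAndPublication guarantees the
    // factory runs at most once, even when many requests hit it concurrently.
    private static readonly Lazy<string> _slowData = new Lazy<string>(
        LoadSlowChangingData, LazyThreadSafetyMode.ExecutionAndPublication);

    public void doStuff()
    {
        var data = _slowData.Value; // first access triggers the load; later calls reuse it
        // ... use data ...
    }

    private static string LoadSlowChangingData()
    {
        // Hypothetical: call the dedicated service, or fall back to the local copy on disk.
        return File.ReadAllText(@"C:\cache\slow-data.json");
    }
}

Unlike a static constructor, the Lazy approach defers the expensive load until the first request that actually needs the data.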
@WebListener
public class AllRequestsWebListener implements ServletRequestListener {

    @Inject HttpRequestProducer producer;

    public void requestInitialized(ServletRequestEvent sre) {
        producer.requestInitialized(sre);
    }
}

...

@RequestScoped
public class HttpRequestProducer {
    ...
}
I don't know how to inject a request bean as a method parameter, so I can only guess that it works properly if request-bean injection is thread-local. Can someone explain to me how it's implemented in a thread-safe manner?
What you have injected into your bean is a proxy representing the real deal. The proxy will always forward the invocation to the correct bean instance for the current request.
Intuition-based answer
I believe it is thread safe, as request scope is thread safe (session scope and above are not, since a user can open multiple browser windows that share the same session ID).
I tested it, and although that is only empirical evidence, the injected HttpRequestProducer gets a new instance for each request.
Note that requestInitialized and requestDestroyed can be (and in practice are) called on different threads, so I would investigate further if you intend to use the same injected object in both methods.
Spec-backed answer
The hard part was to find hard evidence for this claim in the specs.
I looked into the CDI spec and couldn't quickly find conclusive evidence that a @RequestScoped object is thread safe (e.g. using a thread local); however, I assume that a @RequestScoped bean uses the same scope as the scoped beans in Java EE 5 (see here).
This clause in there is interesting:
Controlling Concurrent Access to Shared Resources

In a multithreaded server, it is possible for shared resources to be accessed concurrently. In addition to scope object attributes, shared resources include in-memory data (such as instance or class variables) and external objects such as files, database connections, and network connections.
Concurrent access can arise in several situations:
Multiple web components accessing objects stored in the web context.
Multiple web components accessing objects stored in a session.
Multiple threads within a web component accessing instance variables.
A web container will typically create a thread to handle each request.
If you want to ensure that a servlet instance handles only one request
at a time, a servlet can implement the SingleThreadModel interface. If
a servlet implements this interface, you are guaranteed that no two
threads will execute concurrently in the servlet’s service method. A
web container can implement this guarantee by synchronizing access to
a single instance of the servlet, or by maintaining a pool of web
component instances and dispatching each new request to a free
instance. This interface does not prevent synchronization problems
that result from web components accessing shared resources such as
static class variables or external objects. In addition, the Servlet
2.4 specification deprecates the SingleThreadModel interface.
So in theory, it seems that the object itself is going to have one instance per request thread; however, I couldn't find any hard evidence that this is guaranteed.
The context:
(Note: in the following I am using "project" to refer to a collection of software deliverables, intended for a single customer or a specific market. I am not referring to "project" as it is used in Visual Studio to refer to a configuration that builds a single EXE or DLL, within a solution.)
We have a sizable system that consists of three layers:
A layer containing code that is shared across projects
A layer containing code that is shared across different applications within a project
A layer containing code that is specific to a particular application or website within a project.
The first two layers are built into DLL assemblies. The top layer is an assortment of EXEs and/or .aspx web applications.
We have a number of different projects that use this pattern; IIRC, four of them at the moment. All four share layer 1 (though often in slightly different versions, as managed by the VCS). Each of them has its own layer 2. Each of them has its own set of deliverables, which can range from a website, or a website and a background service, to our largest and most complex (and the bread-and-butter of our business), which consists of something like five independent web applications, 20+ console applications/background services, three or four independent web services, half a dozen desktop GUI apps, etc.
It's been our intent to push as much code into levels 1 and 2 as possible, to avoid duplicating logic in the top layers. We've pretty much accomplished that.
Each of layers 1 and 2 produces three deliverables: a DLL containing the code that is not web-related, a DLL containing the web-related code, and a DLL containing unit tests.
The problem:
The lower levels were written to make extensive use of singletons.
The non-web DLL in layer 1 contains classes to handle INI files, logging, a custom-built object-relational mapper which handles database connections, etc. All of these used singletons.
And when we started building things on the web, all of those singletons became a problem. Different users would hit the website, log in, and start doing different things. They'd do something that generated a query, which would result in a call into the singleton ORM to get a new database connection, which would access the singleton configuration object to get the connection string, and then the connection would be asked to perform a query. And in the query the connection would access the singleton logger to log the SQL statement that was generated, and the logger would access the singleton configuration object to get the current username, so as to include it in the log, and if someone else had logged in in the meantime that singleton configuration object would have a different current user. It was a mess.
So what we did, when we started writing web applications using this code base, was to create a factory class that was itself a singleton. Every one of the other singletons had a public static instance() method that had been calling a private constructor. Instead, the public static instance() method now obtained a reference to the singleton factory object, then called a method on that to get a reference to the single instance of the class in question.
In other words, instead of having a dozen classes that each maintained its own private static reference, we now had a single class that maintained a single static reference, and the object that it maintained a reference to contained a dozen references to the other, formerly singleton classes.
Now we had only one singleton to deal with. And in its public static instance() method, we added some web-specific logic. If we had an HTTPContext and that context had an instance of the factory in its session, we'd return the instance from the session. If we had an HTTPContext, and it didn't have a factory in its session, we'd construct a new factory and store it in the session, and then return it. If we had no HTTPContext, we'd just construct a new factory and return it.
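Roughly, that instance() logic looks like this (simplified; SingletonFactory and the session key are illustrative names):

using System.Web;

public class SingletonFactory
{
    private const string SessionKey = "SingletonFactory";

    public static SingletonFactory Instance()
    {
        var context = HttpContext.Current;
        if (context == null)
        {
            // No HttpContext (console app, background service, unit test): build a new factory.
            return new SingletonFactory();
        }

        // Web request: keep one factory per user session.
        var factory = context.Session[SessionKey] as SingletonFactory;
        if (factory == null)
        {
            factory = new SingletonFactory();
            context.Session[SessionKey] = factory;
        }
        return factory;
    }

    // ... holds references to the formerly-singleton config, logger, ORM, etc. ...
}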
The code for this was placed in classes we derived from Page, WebControl, and MasterPage, and then we used our classes in our higher-level code.
This worked fine for .aspx web applications, where users logged in and maintained a session. It worked fine for .asmx web services running within those web applications. But it has real limits.
In particular, it won't work in situations where there is no session. We're feeling pressure to provide websites that serve a larger user base - that might have tens or hundreds of thousands of users hitting them dynamically. Up to now our users have been pretty typical desktop business users. They log into our websites, and stay in them much of the day, using our web apps as an alternative to a desktop app. A given customer might have as many as six users who might use our websites, and while we have a thousand or more customers, combined they don't make for all that heavy a load. But our current architecture will not scale to that.
We're also running into situations where ASP.NET MVC would be a better fit for building the web UI than .aspx web forms. And we're exploring building mobile apps that would communicate with stand-alone WCF web services. While in both cases it looks like it's possible to run them in an environment that has a session, doing so seems to limit their flexibility and performance fairly severely.
So, we're really looking at ways to eliminate these singletons.
What I'd really like:
I'm trying to envision a series of refactors, that would eventually lead to a better-structured, more flexible architecture. I could easily see the advantages of an IoC framework, in our situation.
But here's the thing: from what I've seen of IoC frameworks, they need their dependencies provided to them externally via constructor parameters. My logger class, for example, needs an instance of my config class, from which to obtain the current user. Currently, it is using the public static instance() method on the config class to obtain it. To use an IoC framework, I'd need to pass it in as a constructor parameter.
In other words, from where I sit, the first and unavoidable task is to change every class that uses any of these singletons so that it takes the singleton factory as a constructor parameter. And that's a huge amount of work.
As an example, I just spent the afternoon doing exactly that, in the level 1 libraries, to see just how much work it is. I ended up changing over 1300 lines of code. The level 2 libraries will be worse.
So, are there any alternatives?
Typically, you should try to wrap the contextual information into its own instance and provide a static accessor method to refer to it. For example, consider HttpContext: it is available everywhere in a web application via HttpContext.Current.
You should try to devise something similar, so that instead of returning a singleton instance you return the instance from the current context. That way, you do not need to change the consumer code that refers to these static methods (e.g. Logger.Instance()).
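For example, the existing accessor can stay in place and simply delegate to the context (here Logger and a Logger property on the context class are hypothetical, just to show the shape):

public class Logger
{
    // Call sites keep calling Logger.Instance(); only the lookup behind it changes.
    public static Logger Instance()
    {
        return AppContext.Current.Logger; // hypothetical property on the context class
    }
}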
I generally roll up information such as the logger, current user, configuration, and security permissions into an application context (it can be more than one class if the need arises). The static AppContext.Current property returns the current context. The implementation goes something like this:
public interface IContextStorage
{
    // Gets the stored context
    AppContext Get();

    // Stores the context; context can be null
    void Set(AppContext context);
}

public class AppContext
{
    private static IContextStorage _storageProvider, _defaultStorageProvider;

    public static AppContext Current
    {
        get
        {
            var value = _storageProvider.Get();

            // If the context is not available in storage, then look it up
            // using the default provider for worker (thread pool) threads.
            if (null == value && _storageProvider != _defaultStorageProvider
                && Thread.CurrentThread.IsThreadPoolThread)
            {
                value = _defaultStorageProvider.Get();
            }

            return value;
        }
    }

    ...
}
IContextStorage implementations are application specific. The static variable _storageProvider gets injected at application start-up, while _defaultStorageProvider is a simple implementation that looks into the current call context.
App context creation happens in multiple stages. For example, global information such as configuration gets read and cached at application start-up, while specific information such as the user and security gets formed at the authentication stage. Once all the info is available, the actual instance is created and stored in the app-specific storage location. A desktop application will use a singleton instance, while a web application can probably store the instance in the session state. For a web application, you may have logic at the start of each request to ensure that the context is initialized.
For scalable web applications, you can have a storage provider that stores the context instance in a cache and rebuilds it if it is not present in the cache.
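As an illustration, the session-backed provider mentioned above might look something like this (a sketch, assuming classic ASP.NET session state; the class name and key are arbitrary). A cache-backed provider for the scalable case would have the same shape, with a rebuild on a cache miss:

using System.Web;

// Stores the AppContext in the current user's session.
public class SessionContextStorage : IContextStorage
{
    private const string Key = "AppContext";

    public AppContext Get()
    {
        var session = HttpContext.Current != null ? HttpContext.Current.Session : null;
        return session != null ? (AppContext)session[Key] : null;
    }

    public void Set(AppContext context)
    {
        HttpContext.Current.Session[Key] = context;
    }
}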
I'd recommend starting by implementing the "Poor Man's DI" pattern. This is where you define two constructors in your classes: one that accepts instances of the dependencies (for IoC), and a default constructor that news them up (or calls a singleton).
This way you can introduce IoC incrementally, and still have everything else work using the default constructors. Eventually when you have IoC being used in most places you can start to remove the default constructors (and the singletons).
public class Foo {
    private readonly ILogger _logger;
    private readonly IConfig _config;

    // Constructor used by the IoC container (or by tests) to supply dependencies.
    public Foo(ILogger log, IConfig config) {
        _logger = log;
        _config = config;
    }

    // "Poor Man's DI" default constructor: falls back to the existing singletons.
    public Foo() : this(Logger.Instance(), Config.Instance()) {}
}