Is there a thread-safety issue in this ObjectBuilder code?

I suspect a problem with an old version of the ObjectBuilder, which was once part of the WCSF Extension project and has since moved into Unity. I am not sure whether I am on the right track, so I hope someone out there with stronger thread-safety skills can explain whether this could be an issue or not.
I use this (outdated) ObjectBuilder implementation in an ASP.NET WCSF web app, and occasionally the logs show the ObjectBuilder complaining that a particular property of a class cannot be injected; the problem is always that this property should never have been injected at all. The affected property and class vary from occurrence to occurrence. I traced the code down to a method where a dictionary holds the information about whether a property is handled by the ObjectBuilder or not.
My question basically comes down to this: is there a thread-safety issue in the following code that could cause the ObjectBuilder to read inconsistent data from its dictionary?
The class holding this code (ReflectionStrategy.cs) is created as a singleton, so all requests to my web application use it to create their view/page objects. The dictionary is a private field, used only in this method, and declared like this:
private Dictionary<int, bool> _memberRequiresProcessingCache = new Dictionary<int, bool>();
private bool InnerMemberRequiresProcessing(IReflectionMemberInfo<TMemberInfo> member)
{
    bool requires;
    lock (_readLockerMrp)
    {
        if (!_memberRequiresProcessingCache.TryGetValue(member.MemberInfo.GetHashCode(), out requires))
        {
            lock (_writeLockerMrp)
            {
                if (!_memberRequiresProcessingCache.TryGetValue(member.MemberInfo.GetHashCode(), out requires))
                {
                    requires = MemberRequiresProcessing(member);
                    _memberRequiresProcessingCache.Add(member.MemberInfo.GetHashCode(), requires);
                }
            }
        }
    }
    return requires;
}
The code above is not the latest version you can find on CodePlex, but I still want to know whether it might be the cause of my ObjectBuilder exceptions. In the meantime I am working on an update to replace this old code with the latest version. Below is the latest implementation; unfortunately I cannot find any information on why it was changed. It might have been for a bug, it might have been for performance...
private bool InnerMemberRequiresProcessing(IReflectionMemberInfo<TMemberInfo> member)
{
    bool requires;
    if (!_memberRequiresProcessingCache.TryGetValue(member.MemberInfo, out requires))
    {
        lock (_writeLockerMrp)
        {
            if (!_memberRequiresProcessingCache.TryGetValue(member.MemberInfo, out requires))
            {
                Dictionary<TMemberInfo, bool> tempMemberRequiresProcessingCache =
                    new Dictionary<TMemberInfo, bool>(_memberRequiresProcessingCache);
                requires = MemberRequiresProcessing(member);
                tempMemberRequiresProcessingCache.Add(member.MemberInfo, requires);
                _memberRequiresProcessingCache = tempMemberRequiresProcessingCache;
            }
        }
    }
    return requires;
}

The use of the hash code as the dictionary key looks problematic: two different members can produce the same hash code, and with a very large number of classes/members (as can happen with the singleton approach you mentioned) a collision would make the cache return the cached answer for the wrong member. That matches your symptom of a property being injected that should never be injected at all.
The double lock in the old version was simply odd: the outer lock on _readLockerMrp already serializes the whole section, so only one thread can ever be inside it, and the inner lock on _writeLockerMrp adds nothing. Note that taking a lock as the very first thing on every read also hurts performance. The new version makes a different trade-off: reads are lock-free, and a writer copies the dictionary, adds the new entry to the copy, and then swaps the field to point at the copy, so the dictionary a reader sees is never modified while it is being read. (For that reference swap to be entirely safe, the field should be declared volatile so all threads reliably observe the new reference.)
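On a current .NET Framework you would not hand-roll this cache at all. Here is a minimal sketch of the same idea using ConcurrentDictionary, keyed by the member itself rather than by its hash code; the MemberProcessingCache class and the delegate passed to it are my own illustrative names, not ObjectBuilder API:
using System;
using System.Collections.Concurrent;
using System.Reflection;

// Hypothetical replacement for the cache: keyed by the MemberInfo itself,
// so hash collisions can no longer mix up two different members.
internal sealed class MemberProcessingCache
{
    private readonly ConcurrentDictionary<MemberInfo, bool> _cache =
        new ConcurrentDictionary<MemberInfo, bool>();
    private readonly Func<MemberInfo, bool> _memberRequiresProcessing;

    public MemberProcessingCache(Func<MemberInfo, bool> memberRequiresProcessing)
    {
        _memberRequiresProcessing = memberRequiresProcessing;
    }

    public bool RequiresProcessing(MemberInfo member)
    {
        // GetOrAdd is thread-safe; under contention the factory may run more
        // than once for the same key, but only one result is ever stored.
        return _cache.GetOrAdd(member, _memberRequiresProcessing);
    }
}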

Where should I put a logic for querying extra data in CQRS command flow

I'm trying to implement a simple DDD/CQRS architecture, without event sourcing for now.
Currently I need to write some code for adding a notification to a document entity (a document can have multiple notifications).
I've already created a NotificationAddCommand, an ICommandService and an IRepository.
Before inserting the new notification through IRepository, I have to query the current user_id from the database using the NotificationAddCommand.User_name property.
I'm not sure how to do this right, because I could either:
use IQuery from the read flow, or
pass user_name to the domain entity and resolve user_id in the repository.
Code:
public class DocumentsCommandService : ICommandService<NotificationAddCommand>
{
    private readonly IRepository<Notification, long> _notificationsRepository;

    public DocumentsCommandService(
        IRepository<Notification, long> notifsRepo)
    {
        _notificationsRepository = notifsRepo;
    }

    public void Handle(NotificationAddCommand command)
    {
        // command.user_id = Resolve(command.user_name) ??
        // command.source_secret_id = Resolve(command.source_id, command.source_type) ??
        foreach (var receiverId in command.Receivers)
        {
            var notificationEntity = _notificationsRepository.Get(0);
            notificationEntity.TargetId = receiverId;
            notificationEntity.Body = command.Text;
            _notificationsRepository.Add(notificationEntity);
        }
    }
}
What if I need more complex logic before inserting? Is it ok to use IQuery or should I create additional services?
Reusing your IQuery somewhat defeats the purpose of CQRS: your read side is supposed to be optimized for pulling data for display/query purposes, meaning it can be denormalized, distributed, etc. in any way you deem necessary, without being restricted by, or having implications for, the command side. A key example is that the read side might not be immediately consistent, while your command side obviously needs to be for integrity/validity purposes.
With that in mind, you should look to implement a contract for your write side that will resolve the necessary information for you. Driving from the consumer, that might look like this:
public DocumentsCommandService(IRepository<Notification, long> notifsRepo,
                               IUserIdResolver userIdResolver)

public interface IUserIdResolver
{
    string ByName(string username);
}
With IUserIdResolver implemented as appropriate.
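For example, here is a minimal sketch of such an implementation backed by plain ADO.NET; the connection string handling, table and column names are illustrative assumptions, not something from the original post:
using System.Data.SqlClient;

// Hypothetical implementation that resolves a user id from the write-side
// store. The Users table and its columns are illustrative assumptions.
public class SqlUserIdResolver : IUserIdResolver
{
    private readonly string _connectionString;

    public SqlUserIdResolver(string connectionString)
    {
        _connectionString = connectionString;
    }

    public string ByName(string username)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT user_id FROM Users WHERE user_name = @name", connection))
        {
            command.Parameters.AddWithValue("@name", username);
            connection.Open();
            // Assumes user_id is stored as a string, matching the interface.
            return (string)command.ExecuteScalar();
        }
    }
}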
Of course, if both this and the query-side use the same low-level data access implementation (e.g. an immediately-consistent repository) that's fine - what's important is that your architecture is such that if you need to swap out where your read side gets its data for the purposes of, e.g. facilitating a slow offline process, your read and write sides are sufficiently separated that you can swap out where you're reading from without having to untangle reads from the writes.
Ultimately the most important thing is to know why you are making the architectural decisions you're making in your scenario - then you will find it much easier to make these sorts of decisions one way or another.
In a project I'm working on I have similar issues. I see three options to solve this problem:
1) What I did was make a UserCommandRepository that has a query option and inject that repository into the service (see the sketch after this list). Since the few queries I needed were so simplistic (just returning single values), it seemed like a fine trade-off in my case.
2) Another way of handling it is to force the caller to raise the command with the user_id already filled in; then the client does the querying.
3) A third option is to ask yourself why you need the user_id at all. If it is only there to establish relations when querying the data, you could also handle that on the query side (or when propagating your write DB to your read DB).
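A minimal sketch of option 1; the interface shape and the GetUserIdByName query are my assumptions, not code from that project:
// Hypothetical write-side repository that also exposes the one or two
// simple single-value lookups the command handlers need.
public interface IUserCommandRepository
{
    long GetUserIdByName(string userName); // simple single-value query
    void Add(Notification notification);   // normal write-side operation
}

public class DocumentsCommandService : ICommandService<NotificationAddCommand>
{
    private readonly IUserCommandRepository _repository;

    public DocumentsCommandService(IUserCommandRepository repository)
    {
        _repository = repository;
    }

    public void Handle(NotificationAddCommand command)
    {
        var userId = _repository.GetUserIdByName(command.User_name);
        // ... build the notification(s) using userId and add them ...
    }
}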

Why is there no static QDir::makepath()?

I know that to create a new path in Qt from a given absolute path, you use QDir::mkpath() as dir.mkpath(path), as suggested in this question. I have no trouble using it and it works fine. My question is why the developers would not provide a static function that could be called like QDir::mkpath("/Users/me/somepath/"); needing to create a new QDir instance seems unnecessary to me.
I can only think of two possible reasons:
1. The developers were "lazy" or did not have time, so they did not add one, as it is not absolutely necessary.
2. The QDir instance on which mkpath(path) is called will be set to path as well, so it would be convenient for further usage - but I cannot seem to find any hints in the docs that this is the actual behaviour.
I know I repeat myself, but again: I do not need help with how to do it; I am much more interested in why one has to do it that way.
Thanks for any reason I might have missed.
Let's have a look at the code of said method:
bool QDir::mkdir(const QString &dirName) const
{
    const QDirPrivate* d = d_ptr.constData();
    if (dirName.isEmpty()) {
        qWarning("QDir::mkdir: Empty or null file name");
        return false;
    }
    QString fn = filePath(dirName);
    if (d->fileEngine.isNull())
        return QFileSystemEngine::createDirectory(QFileSystemEntry(fn), false);
    return d->fileEngine->mkdir(fn, false);
}
Source: http://code.qt.io/cgit/qt/qtbase.git/tree/src/corelib/io/qdir.cpp#n1381
As we can see, a static version would be simple to implement:
bool QDir::mkdir(const QString &dirName) // hypothetical static version (declared static in the class, hence no trailing const)
{
    if (dirName.isEmpty()) {
        qWarning("QDir::mkdir: Empty or null file name");
        return false;
    }
    return QFileSystemEngine::createDirectory(QFileSystemEntry(dirName), false);
}
(see also http://code.qt.io/cgit/qt/qtbase.git/tree/src/corelib/io/qdir.cpp#n681)
First, the non-static method comes with a few advantages. Obviously there is something to be gained from using the object's existing file engine. I can also imagine the use case of creating several directories under a specific directory (one the QDir already points to).
So why not provide both?
Verdict (tl;dr): I think the reason is simple code hygiene. When you use the API, the difference between QDir::mkpath(path); and QDir().mkpath(path); is slim. The performance hit of creating the object is also negligible, since you would reuse the same object anyway if you performed the operation more often. On the side of the code maintainers, however, it is arguably much more convenient (less work and less error-prone) not to maintain two versions of the same method.

ASP.NET and ThreadStatic as part of TransactionScope's implementation

I was wondering how the TransactionScope class manages to keep the transaction across different method calls (without it being passed as a parameter), and this doubt came up. I have two considerations about this question:
1
Looking into TransactionScope's implementation through Telerik JustDecompile, I found that the current transaction is stored in a [ThreadStatic] member of the System.Transactions.ContextData class (code below).
internal class ContextData
{
    internal TransactionScope CurrentScope;
    internal Transaction CurrentTransaction;
    internal DefaultComContextState DefaultComContextState;

    [ThreadStatic]
    private static ContextData staticData;

    internal WeakReference WeakDefaultComContext;

    internal static ContextData CurrentData
    {
        get
        {
            ContextData contextDatum = ContextData.staticData;
            if (contextDatum == null)
            {
                contextDatum = new ContextData();
                ContextData.staticData = contextDatum;
            }
            return contextDatum;
        }
    }

    public ContextData()
    {
    }
}
The CurrentData property is called by TransactionScope's PushScope() method, and the latter is used by most of the TransactionScope constructors.
private void PushScope()
{
    if (!this.interopModeSpecified)
    {
        this.interopOption = Transaction.InteropMode(this.savedCurrentScope);
    }
    this.SetCurrent(this.expectedCurrent);
    this.threadContextData.CurrentScope = this;
}

public TransactionScope(TransactionScopeOption scopeOption)
{
    // ...
    this.PushScope();
    // ...
}
Ok, I guess I've found how they do that.
2
I've read about how bad it is to use ThreadStatic members to store objects within ASP.NET (http://www.hanselman.com/blog/ATaleOfTwoTechniquesTheThreadStaticAttributeAndSystemWebHttpContextCurrentItems.aspx), due to the thread switching ASP.NET may perform, which means such data can be lost between worker threads.
So it looks like TransactionScope should not work with ASP.NET, right? But in all the time I have used it in my web applications, I don't remember ever running into a problem with transaction data being lost.
My question here is: "what is TransactionScope's trick for dealing with ASP.NET's thread switching?".
Did I make a superficial analysis of how TransactionScope stores its transaction objects? Or was the TransactionScope class simply not made to work with ASP.NET, and can I consider myself a lucky guy who never had any pain with it?
Could anyone who knows the "very deep buried secrets" of .NET explain this to me?
Thanks
I believe ASP.NET thread switching happens only in specific situations (involving asynchronous I/O operations) and early in the request life cycle. Typically, once control is passed to the actual HTTP handler (for example, a Page), the thread does not get switched. I believe that in most situations a transaction scope gets initialized only after that (after Page_Init/Page_Load), so it should not be an issue.
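To illustrate the point, here is the typical usage pattern as a sketch with hypothetical page and data-access names: the scope is created, used and disposed synchronously inside a single handler method, so it never outlives the thread whose [ThreadStatic] slot holds it.
using System;
using System.Transactions;

public partial class OrdersPage : System.Web.UI.Page
{
    // Hypothetical button handler: by this point in the request life cycle,
    // ASP.NET has settled on one worker thread for the request.
    protected void SaveButton_Click(object sender, EventArgs e)
    {
        using (var scope = new TransactionScope())
        {
            SaveOrder();      // hypothetical data-access call
            SaveOrderLines(); // both calls enlist in the same ambient transaction
            scope.Complete();
        } // Dispose runs on the same thread that created the scope
    }

    private void SaveOrder() { /* ... */ }
    private void SaveOrderLines() { /* ... */ }
}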
Here are a few links that might interest you:
http://piers7.blogspot.com/2005/11/threadstatic-callcontext-and_02.html
http://piers7.blogspot.com/2005/12/log4net-context-problems-with-aspnet.html

Asynchronous Callback in GWT - why final?

I am developing an application in GWT as my Bachelor's Thesis and I am fairly new to this. I have researched asynchronous callbacks on the internet. What I want to do is this: I want to handle the login of a user and display different data if they are an admin or a plain user.
My call looks like this:
serverCall.isAdmin(new AsyncCallback<Boolean>() {
    public void onFailure(Throwable caught) {
        // display error
    }

    public void onSuccess(Boolean admin) {
        if (!admin) {
            // do something
        }
        else {
            // do something else
        }
    }
});
Now, the code examples I have seen handle the data directly in the //do something// part. We discussed this with the person supervising me, and I had the idea of firing an event upon success and loading the page accordingly when this event is fired. Is this a good idea, or should I stick with loading everything in the inner function? What confuses me about async callbacks is that I can only use final variables inside the onSuccess function, so I would rather not handle things in there. Insight would be appreciated.
Thanks!
This is standard Java, not something GWT-specific: an anonymous inner class cannot access the enclosing method's stack frame, which may be long gone by the time the callback runs, so the compiler copies the captured local variables into the generated class. Requiring those variables to be final guarantees that the copy and the original can never diverge. Class fields are not affected, because they are reached through a reference to the enclosing instance rather than copied. Here is a great discussion of this topic.
When I use an AsyncCallback, I do exactly what you suggested: I fire an event through GWT's EventBus. This allows several different parts of my application to respond when a user logs in.

Debugging FLEX/AS3 memory leaks

I have a pretty big Flex & Papervision3D application that creates and destroys objects continually. It also loads and unloads SWF resource files. While it runs, the SWF slowly consumes memory until, at about 2 GB, it kills the player. I am pretty sure I let go of references to instances I no longer want, expecting the GC to do its job, but I am having a heck of a time figuring out where the problem lies.
I've tried using the profiler and its options for capturing memory snapshots, etc., but my problem remains elusive. I think there are also known problems with the debug Flash player? But I get no joy using the release version either.
How do you go about tracking down memory leaks in Flex/AS3? What are some strategies, tricks, or tools you have used to locate what is consuming the memory?
I usually implement a cleanup method in every class I make (since AS doesn't have destructors). The main problem I've noticed with the GC is event listeners. In addition to what dirkgently said, try to avoid anonymous listener functions, as you cannot explicitly remove them. Here are a few links you may find useful:
Understanding Memory Leaks in ActionScript
Garbage Collection with Flex and Adobe Air
I stumbled across something explaining how to use the Flex Profiler in Flex Builder, and it was a HUGE help to me in debugging memory leaks. I would definitely suggest trying it out; it's very easy to use. Some things I found when profiling my applications:
Avoid using collections (at least LARGE collections) as properties of value objects. I had several types of value object classes in my Cairngorm application, and each had a "children" property which was an ArrayCollection used for filtering. When profiling, I found that these were among my biggest memory eaters, so I changed my application to instead store the "parentId" as an int and use that for filtering. The memory used was cut drastically. Something like this:
Old way:
public class Owner1
{
    public var id:int;
    public var label:String;
    public var children:ArrayCollection; // Stores any number of Owner2 Objects
}

public class Owner2
{
    public var id:int;
    public var label:String;
    public var children:ArrayCollection; // Stores any number of Owner3 Objects
}

public class Owner3
{
    public var id:int;
    public var label:String;
}
New Way:
public class Owner1
{
    public var id:int;
    public var label:String;
}

public class Owner2
{
    public var id:int;
    public var label:String;
    public var parentId:int; // Refers to id of Owner1 Object
}

public class Owner3
{
    public var id:int;
    public var label:String;
    public var parentId:int; // Refers to id of Owner2 Object
}
I would also suggest removing event listeners when they are no longer needed.
Because of issues like this, I've developed an open-source library that helps monitor all the events you have running at any given time. It's really easy to implement, and I've refactored projects in 10-15 minutes, converting them to use the EventController I've developed.
Basically, for your scenario I would run through all the events and replace them from:
obj.addEventListener(...);
to:
EC.add(obj,...);
The rest stays the same. What that does is register the event and make it extremely easy to see all your events at any point you want using EC.log();
All the details and documentation are on my site. I would love to know if this helps you and whether you start working with it. If you have any feedback, good or bad, please feel free to post it and I will look into it!
The site is:
http://fla.as/ec/
If your memory usage keeps growing, it probably means the GC is failing to do its job. Take a look at your code and see where you can decrease your objects' reference counts (by setting references to null). Make event handlers weak. And re-profile.
