I'm setting up some testing to automatically check that our DI has been configured correctly. In particular, we want to ensure that the lifestyles of dependencies match (so we don't get transients injected into singletons) and to avoid using the service locator as much as possible, relying on constructor injection instead.
In the past we've used Castle.Windsor as our service provider, which comes with diagnostic classes and functions to help catch these problems. Are there similar functions for MS.DI or is it something we'll have to roll ourselves?
While I agree with the advice from Steven & Chris, developers are not always in charge of the infrastructure they have to use. Where possible I will be pushing for Castle.Windsor, since it's what my team is most familiar with, but in this case we managed to cobble together the tests we wanted ourselves.
I'm presenting the test pattern here in case someone stumbles across this question and has the same struggles, but please do consider using a better DI provider, especially if you're still at the start of your project.
[Test]
public void Assert_Lifetimes_Are_Consistent()
{
    var missing = new List<string>();
    var errors = new HashSet<Tuple<string, string>>();
    foreach (var service in _serviceCollection.Where(s => IsInYourAssembly(s.ServiceType)))
    {
        var serviceLifetimeRanking = LifetimeRanking(service.Lifetime);
        foreach (var fieldInfo in ((System.Reflection.TypeInfo)service.ServiceType).DeclaredFields.Where(fi => fi.FieldType.IsAbstract && IsInYourAssembly(fi.FieldType)))
        {
            var dependencyLifetime = _serviceCollection.SingleOrDefault(sd => sd.ServiceType == fieldInfo.FieldType)?.Lifetime;
            if (dependencyLifetime == null)
            {
                missing.Add($"No service found for {fieldInfo.FieldType.FullName} as a dependency for {service.ServiceType.FullName}");
                continue;
            }
            var dependencyLifetimeRanking = LifetimeRanking(dependencyLifetime.Value);
            if (dependencyLifetimeRanking > serviceLifetimeRanking)
                errors.Add(
                    Tuple.Create(
                        $"{service.ServiceType.Name} ({service.Lifetime})",
                        $"{fieldInfo.FieldType.Name} ({dependencyLifetime})"
                    )
                );
        }
    }
    if (missing.Any() || errors.Any())
    {
        var sb = new StringBuilder();
        sb.AppendJoin(Environment.NewLine, missing);
        if (errors.Any())
        {
            sb.AppendLine("The following dependency pairs have inconsistent lifestyles:");
            sb.AppendLine(string.Join(Environment.NewLine, errors.Select(err => $"{err.Item1} -> {err.Item2}")));
        }
        Assert.Fail(sb.ToString());
    }
}

private bool IsInYourAssembly(Type type)
{
    // Matches any assembly whose full name starts with your project's root namespace.
    return (type.Assembly.FullName?.IndexOf("YOUR_PROJECT_ASSEMBLY_HERE") ?? -1) == 0;
}

private int LifetimeRanking(ServiceLifetime serviceLifetime)
{
    switch (serviceLifetime)
    {
        case ServiceLifetime.Singleton:
            return 1;
        case ServiceLifetime.Scoped:
            return 2;
        case ServiceLifetime.Transient:
            return 3;
        default:
            throw new ArgumentOutOfRangeException(nameof(serviceLifetime), serviceLifetime,
                "Value is not a known member of the ServiceLifetime enum");
    }
}
If the test fails, it will return a list of missing dependencies followed by a list of incompatible dependency lifetimes.
The _serviceCollection field needs to be populated, and Startup(config, env).ConfigureServices(_serviceCollection) needs to be called, before running the test.
The IsInYourAssembly function is important for filtering out all the generic framework types which are also returned in the _serviceCollection.
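To make that setup explicit, ours looks roughly like the sketch below (the BuildTestConfiguration and BuildTestHostingEnvironment helpers are placeholders for however your host builds its configuration and environment; adjust to your own Startup constructor):

private IServiceCollection _serviceCollection;

[OneTimeSetUp]
public void BuildServiceCollection()
{
    // Placeholders: replace with however your application builds its
    // IConfiguration and hosting environment for tests.
    var config = BuildTestConfiguration();
    var env = BuildTestHostingEnvironment();

    _serviceCollection = new ServiceCollection();
    new Startup(config, env).ConfigureServices(_serviceCollection);
}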
I'm starting with REDIS and the StackExchange Redis client. I'm wondering if I'm getting the best performance for getting multiple items at once from REDIS.
Situation:
I have an ASP.NET MVC web application that shows a personal calendar on the dashboard of the user. Because the dashboard is the landing page it's heavily used.
To show the calendar items, I first get all calendar item IDs for that particular month:
RedisManager.RedisDb.StringGet("calendaritems_2016_8");
// this returns JSON Serialized List<int>
Then, for each calendar item ID I build a list of corresponding cache keys:
"CalendarItemCache_1"
"CalendarItemCache_2"
"CalendarItemCache_3"
etc.
With this collection I reach out to REDIS with a generic function:
var multipleItems = CacheHelper.GetMultiple<CalendarItemCache>(cacheKeys);
That's implemented like:
public List<T> GetMultiple<T>(List<string> keys) where T : class
{
    var taskList = new List<Task>();
    var returnList = new ConcurrentBag<T>();
    foreach (var key in keys)
    {
        Task<T> stringGetAsync = RedisManager.RedisDb.StringGetAsync(key)
            .ContinueWith(task =>
            {
                if (!string.IsNullOrWhiteSpace(task.Result))
                {
                    var deserializeFromJson = CurrentSerializer.Serializer.DeserializeFromJson<T>(task.Result);
                    returnList.Add(deserializeFromJson);
                    return deserializeFromJson;
                }
                else
                {
                    return null;
                }
            });
        taskList.Add(stringGetAsync);
    }
    Task[] tasks = taskList.ToArray();
    Task.WaitAll(tasks);
    return returnList.ToList();
}
Am I implementing pipelining correctly? The Redis CLI monitor shows:
1472728336.718370 [0 127.0.0.1:50335] "GET" "CalendarItemCache_1"
1472728336.718389 [0 127.0.0.1:50335] "GET" "CalendarItemCache_2"
etc.
I'm expecting some kind of MGET command.
Many thanks in advance.
I noticed an overload of StringGet that accepts a RedisKey[]. Using this, I see an MGET command in the monitor.
public List<T> GetMultiple<T>(List<string> keys) where T : class
{
    List<RedisKey> list = new List<RedisKey>(keys.Count);
    foreach (var key in keys)
    {
        list.Add(key);
    }
    RedisValue[] result = RedisManager.RedisDb.StringGet(list.ToArray());
    var redisValues = result.Where(x => x.HasValue);
    var multiple = redisValues.Select(x => DeserializeFromJson<T>(x)).ToList();
    return multiple;
}
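One thing to be aware of: MGET returns the values in the same order as the keys, with missing keys coming back as empty values, so the Where(x => x.HasValue) silently drops the cache misses. If the caller needs to know which items were not cached, pairing results back to their keys is straightforward; a rough sketch, reusing the RedisManager and serializer names from the question:

public Dictionary<string, T> GetMultipleWithKeys<T>(List<string> keys) where T : class
{
    RedisKey[] redisKeys = keys.Select(k => (RedisKey)k).ToArray();
    RedisValue[] values = RedisManager.RedisDb.StringGet(redisKeys);

    var result = new Dictionary<string, T>(keys.Count);
    for (int i = 0; i < keys.Count; i++)
    {
        // Missing keys come back as a null RedisValue (HasValue == false).
        result[keys[i]] = values[i].HasValue
            ? CurrentSerializer.Serializer.DeserializeFromJson<T>(values[i])
            : null;
    }
    return result;
}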
I'm creating an app that needs to do parallel HTTP requests, and I'm using HttpClient for this.
I'm looping over the URLs, and for each URL I start a new Task to do the request.
After the loop I wait until every task finishes.
However, when I check the calls being made with Fiddler I see that the requests are made synchronously. It's not like a bunch of requests are made at once; they go one by one.
I've searched for a solution and found that other people have experienced this too, but not with UWP. The solution was to increase the DefaultConnectionLimit on the ServicePointManager.
The problem is that ServicePointManager does not exist for UWP. I've looked in the APIs and I thought I could set the DefaultConnectionLimit on HttpClientHandler, but no.
So I have a few questions:
Is DefaultConnectionLimit still a property that can be set somewhere?
If so, where do I set it?
If not, how do I increase the connection limit?
Is there still a connection limit in UWP?
This is my code:
var requests = new List<Task>();
var client = GetHttpClient();
foreach (var show in shows)
{
    requests.Add(Task.Factory.StartNew((x) =>
    {
        ((Show)x).NextEpisode = GetEpisodeAsync(((Show)x).NextEpisodeUri, client).Result;
    }, show));
}
await Task.WhenAll(requests.ToArray());
and this is the request:
public async Task<Episode> GetEpisodeAsync(string nextEpisodeUri, HttpClient client)
{
    try
    {
        if (String.IsNullOrWhiteSpace(nextEpisodeUri)) return null;
        HttpResponseMessage content = await client.GetAsync(nextEpisodeUri);
        if (content.IsSuccessStatusCode)
        {
            return JsonConvert.DeserializeObject<EpisodeWrapper>(await content.Content.ReadAsStringAsync()).Episode;
        }
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
    return null;
}
OK, I have the solution. I do need to use async/await inside the task. The problem was that I was using StartNew instead of Run, but I have to use StartNew because I'm passing along a state object.
With StartNew, the task inside the task is not awaited unless you call Unwrap, so Task.Factory.StartNew(...).Unwrap(). This way Task.WhenAll() will wait until the inner task is complete.
When you use Task.Run() you don't have to do this.
Task.Run vs Task.StartNew
The Stack Overflow answer
var requests = new List<Task>();
var client = GetHttpClient();
foreach (var show in shows)
{
    requests.Add(Task.Factory.StartNew(async (x) =>
    {
        ((Show)x).NextEpisode = await GetEpisodeAsync(((Show)x).NextEpisodeUri, client);
    }, show)
    .Unwrap());
}
Task.WaitAll(requests.ToArray());
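For comparison, a rough sketch of the Task.Run variant. The Task.Run overload that takes a Func&lt;Task&gt; already unwraps the inner task, so no Unwrap is needed, and the loop variable can simply be captured by the lambda instead of being passed as state (foreach loop variables are safe to close over in C# 5 and later):

var requests = new List<Task>();
var client = GetHttpClient();
foreach (var show in shows)
{
    var current = show; // captured by the closure instead of passed as state
    requests.Add(Task.Run(async () =>
    {
        current.NextEpisode = await GetEpisodeAsync(current.NextEpisodeUri, client);
    }));
}
await Task.WhenAll(requests);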
I think an easier way to solve this is not to "manually" start requests, but instead to use LINQ with an async delegate to query the episodes and then set them afterwards.
You basically make it a two-step process:
Get all next episodes.
Set them in the foreach.
This also has the benefit of decoupling your querying code from the side effect of setting the show.
var shows = Enumerable.Range(0, 10).Select(x => new Show());
var client = new HttpClient();
(Show, Episode)[] nextEpisodes = await Task.WhenAll(shows
.Select(async show =>
(show, await GetEpisodeAsync(show.NextEpisodeUri, client))));
foreach ((Show Show, Episode Episode) tuple in nextEpisodes)
{
tuple.Show.NextEpisode = tuple.Episode;
}
Note that I am using the new tuple syntax of C# 7. Change to the old tuple syntax accordingly if it is not available.
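If the C# 7 tuple syntax is not available, roughly the same shape can be expressed with System.Tuple (an untested sketch, same names as above):

Tuple<Show, Episode>[] nextEpisodes = await Task.WhenAll(shows
    .Select(async show =>
        Tuple.Create(show, await GetEpisodeAsync(show.NextEpisodeUri, client))));

foreach (var tuple in nextEpisodes)
{
    // Item1 is the Show, Item2 is the Episode fetched for it.
    tuple.Item1.NextEpisode = tuple.Item2;
}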
I'm working on an API being developed with .NET Web API 2. I've seen many blog posts and SO questions about Web API version 1, but answers using the changes made in version 2 seem to be scarce by comparison.
Compare these two ways of handling 'errors' in a controller ItemsController
A. Using methods that create objects from System.Web.Http.Results
// GET api/user/userID/item/itemID
[Route("{itemID:int}", Name="GetItem")]
[ResponseType(typeof(ItemDTO))]
public IHttpActionResult Get(int userID, int itemID)
{
    if (userID < 0 || itemID < 0) return BadRequest("Provided user id or item id is not valid");

    ItemDTO item = _repository.GetItem(itemID);

    if (item == null) return NotFound();
    if (item.UserID != userID) return BadRequest("Item userID does not match route userID");

    return Ok<ItemDTO>(item);
}
B. Throwing exceptions that can be caught by registering a custom Global Exception Handler
// ex) in WebApiConfig.cs
// config.Services.Replace(typeof(IExceptionHandler), new GlobalExceptionHandler());
public class GlobalExceptionHandler : ExceptionHandler
{
    public override void Handle(ExceptionHandlerContext context)
    {
        Exception exception = context.Exception;

        HttpException httpException = exception as HttpException;
        if (httpException != null)
        {
            context.Result = new SimpleErrorResult(context.Request, (HttpStatusCode)httpException.GetHttpCode(), httpException.Message);
            return;
        }
        if (exception is RootObjectNotFoundException)
        {
            context.Result = new SimpleErrorResult(context.Request, HttpStatusCode.NotFound, exception.Message);
            return;
        }
        if (exception is BadRouteParametersException || exception is RouteObjectPropertyMismatchException)
        {
            context.Result = new SimpleErrorResult(context.Request, HttpStatusCode.BadRequest, exception.Message);
            return;
        }
        if (exception is BusinessRuleViolationException)
        {
            context.Result = new SimpleErrorResult(context.Request, (HttpStatusCode)422, exception.Message);
            return;
        }
        context.Result = new SimpleErrorResult(context.Request, HttpStatusCode.InternalServerError, exception.Message);
    }
}
// GET api/user/userID/item/itemID
[Route("{itemID:int}", Name="GetItem")]
[ResponseType(typeof(ItemDTO))]
public IHttpActionResult Get(int userID, int itemID)
{
    if (userID < 0 || itemID < 0)
        throw new BadRouteParametersException("Provided user or item ID is not valid");

    ItemDTO item = _repository.GetItem(itemID);

    if (item.UserID != userID)
        throw new RouteObjectPropertyMismatchException("Item userID does not match route userID");

    return Ok<ItemDTO>(item);
}
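(SimpleErrorResult is not a framework type; a minimal sketch of what it might look like is below, assuming all it carries is a status code and a message. The real implementation can return whatever payload your API contract needs.)

public class SimpleErrorResult : IHttpActionResult
{
    private readonly HttpRequestMessage _request;
    private readonly HttpStatusCode _statusCode;
    private readonly string _errorMessage;

    public SimpleErrorResult(HttpRequestMessage request, HttpStatusCode statusCode, string errorMessage)
    {
        _request = request;
        _statusCode = statusCode;
        _errorMessage = errorMessage;
    }

    public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
    {
        // CreateErrorResponse is the standard Web API extension on HttpRequestMessage.
        return Task.FromResult(_request.CreateErrorResponse(_statusCode, _errorMessage));
    }
}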
Both of these seem like valid options. Since I am able to return System.Web.Http.Results objects, it seems like solution A is the best one.
But consider the case where my repository's GetItem method is implemented like so:
public ItemDTO GetItem(int itemId)
{
    ItemInfo itemInfo = ItemInfoProvider.GetItemInfo(itemId);
    if (itemInfo == null) throw new RootObjectNotFoundException("Item not found");
    ItemDTO item = _autoMapper.Map<ItemDTO>(itemInfo);
    return item;
}
Here I can skip calling AutoMapper on a null result in GetItem and also skip checking for null in the controller.
Questions
Which way makes more sense?
Should I attempt a combination of A & B?
Should I try to keep my Controllers thin or should this type of validation & processing logic be kept there since I have access to the NotFound() and BadRequest() methods?
Should I be performing this type of logic somewhere else in the framework pipeline?
I realize my question is more architectural than "how do I use this feature", but again, I haven't found too many explanations of how and when to use these different features.
From my standpoint, a global exception handler makes unit testing each action easier (read: more legible). You're now checking against a specific, expected exception versus (essentially) comparing status codes (404 vs. 500, etc.). It also makes changing or logging error notifications (at a global/unified level) much easier, as you have a single unit of responsibility.
For instance, which unit test do you prefer to write?
[Test]
public void Id_must_not_be_less_than_zero()
{
    var fooController = new FooController();
    var actual = fooController.Get(-1);
    Assert.IsInstanceOfType(actual, typeof(BadRequestResult));
}

[Test]
[ExpectedException(typeof(BadRouteParametersException))]
public void Id_must_not_be_less_than_zero()
{
    var fooController = new FooController();
    var actual = fooController.Get(-1);
}
Generally speaking, I would say this is more a preference than a hard-and-fast rule, and you should go with whatever you find to be the most maintainable and easiest to understand from both an on-boarding perspective (new eyes working on the project) and/or later maintenance by yourself.
As Brad notes, this partly comes down to preference.
Using HTTP codes is consistent with how the web works, so it's the way I lean.
The other consideration is that throwing exceptions has a cost. If you're OK with paying that cost, and take that into account in your design, it's fine to make that choice. Just be aware of it, particularly when you're using exceptions for situations that aren't really exceptional but rather are things you know you may encounter as part of normal application flow.
It's an older post, but there's an interesting discussion on the topic of exceptions and performance here:
http://blogs.msdn.com/b/ricom/archive/2006/09/14/754661.aspx
and the follow-up:
http://blogs.msdn.com/b/ricom/archive/2006/09/25/the-true-cost-of-net-exceptions-solution.aspx
I have implemented a custom test type for Visual Studio. The custom test type reads its test elements from dlls. My ITip implementation is working like a charm. The test elements are loaded and are displayed on the Test View tool window.
When I select the test elements and run them, they end up in a Not Executed status. While debugging this issue I found out that a FileNotFoundException is thrown from QTAgent32.exe. It tells me that it cannot find the dll that defines the test cases. Also, it fails before my TestAdapter.Initialize method is called. I copied my test dll to the PrivateAssemblies directory of Visual Studio, and when I do that my test elements pass. I can also debug the code in my custom test adapter. So, the meaning of all of this is that QTAgent32.exe cannot find my test dll in its original directory.
My question is: what should I do to make QTAgent32 find my test dll in the original directory? For completeness, here is my Tip's Load method:
public override ICollection Load(string location, ProjectData projectData, IWarningHandler warningHandler)
{
    Trace.WriteLine("Started RegexTestTip Load.");

    if (string.IsNullOrEmpty(location))
    {
        throw new ArgumentException("File location was not specified!", "location");
    }

    var fileInfo = new FileInfo(location);
    if (!fileInfo.Exists)
    {
        throw new ErrorReadingStorageException(
            string.Format("Could not find a file on the specified location: {0}", location));
    }

    var result = new List<ITestElement>();
    var extension = fileInfo.Extension.ToLower();
    if (extension != ".dll")
    {
        return result;
    }

    Assembly testAssembly = Assembly.LoadFrom(location);
    var testClasses = testAssembly.GetTypes().
        Where(t => Attribute.IsDefined(t, typeof(RegexTestClassAttribute)));

    foreach (Type testClass in testClasses)
    {
        PropertyInfo property = testClass.GetProperties().
            SingleOrDefault(p => Attribute.IsDefined(p, typeof(TestedRegexAttribute)));
        if (property == null || !TestedRegexAttribute.Validate(property))
        {
            throw new InvalidDataInStorageException("A Regex test must define a Tested Regex property with type Regex");
        }

        var testCases = testClass.GetProperties().
            Where(p => Attribute.IsDefined(p, typeof(RegexTestCaseAttribute)));
        foreach (PropertyInfo testCase in testCases)
        {
            if (!RegexTestCaseAttribute.Validate(testCase))
            {
                throw new InvalidDataInStorageException("A test case property must return a String value.");
            }

            var testElement = new RegexTestElement(property, testCase);
            testElement.Storage = location;
            testElement.Name = testCase.Name;
            testElement.Description = "A simple description";
            testElement.ProjectData = projectData;
            result.Add(testElement);
        }
    }

    Trace.WriteLine("Finished RegexTestTip Load.");
    return result;
}
Have you tried just putting the dll in the same directory as the executable? Forgive the obviousness of that, but sometimes it's the really simple things that bite us. But it's in the GAC, right?
I'm working with a programmatically configured WCF client (System.ServiceModel.ClientBase). This WCF client is configured using a CustomBinding, which has a TextMessageEncodingBindingElement by default.
Now when I try to switch to MTOM encoding, I change the client's Endpoint.Binding property, which works fine; the Endpoint.Binding property shows it has changed.
Unfortunately, when I execute one of the methods the WCF service exposes, it still uses text message encoding and I can't figure out why.
I've got it working though, by constructing a new ClientBase and passing the new EndPointBinding in the constructor:
socialProxy = new SocialProxyClient(SocialProxyClientSettings.SocialProxyMTomEndPointBinding, new EndpointAddress(SocialProxyClientSettings.SocialProxyEndPointAddress));
But when I try this it doesn't work:
socialProxy.Endpoint.Binding = SocialProxyClientSettings.SocialProxyMTomEndPointBinding;
These are my definitions for the EndPointBindings:
public static TextMessageEncodingBindingElement TextMessageEncodingBindingElement
{
    get
    {
        if (_textMessageEncodingBindingElement == null)
        {
            _textMessageEncodingBindingElement = new TextMessageEncodingBindingElement() { MessageVersion = MessageVersion.Soap11 };
            _textMessageEncodingBindingElement.ReaderQuotas = new System.Xml.XmlDictionaryReaderQuotas()
            {
                MaxDepth = 32,
                MaxStringContentLength = 5242880,
                MaxArrayLength = 204800000,
                MaxBytesPerRead = 5242880,
                MaxNameTableCharCount = 5242880
            };
        }
        return _textMessageEncodingBindingElement;
    }
}

public static MtomMessageEncodingBindingElement MtomMessageEncodingBindingElement
{
    get
    {
        if (_mtomMessageEncodingBindingElement == null)
        {
            _mtomMessageEncodingBindingElement = new MtomMessageEncodingBindingElement();
            _mtomMessageEncodingBindingElement.MaxReadPoolSize = TextMessageEncodingBindingElement.MaxReadPoolSize;
            _mtomMessageEncodingBindingElement.MaxWritePoolSize = TextMessageEncodingBindingElement.MaxWritePoolSize;
            _mtomMessageEncodingBindingElement.MessageVersion = TextMessageEncodingBindingElement.MessageVersion;
            _mtomMessageEncodingBindingElement.ReaderQuotas.MaxDepth = TextMessageEncodingBindingElement.ReaderQuotas.MaxDepth;
            _mtomMessageEncodingBindingElement.ReaderQuotas.MaxStringContentLength = TextMessageEncodingBindingElement.ReaderQuotas.MaxStringContentLength;
            _mtomMessageEncodingBindingElement.ReaderQuotas.MaxArrayLength = TextMessageEncodingBindingElement.ReaderQuotas.MaxArrayLength;
            _mtomMessageEncodingBindingElement.ReaderQuotas.MaxBytesPerRead = TextMessageEncodingBindingElement.ReaderQuotas.MaxBytesPerRead;
            _mtomMessageEncodingBindingElement.ReaderQuotas.MaxNameTableCharCount = TextMessageEncodingBindingElement.ReaderQuotas.MaxNameTableCharCount;
        }
        return _mtomMessageEncodingBindingElement;
    }
}
Can someone explain why changing the Endpoint.Binding programmatically doesn't work?
I believe that during construction of the ClientBase, the original Binding is used to create some helper objects. Changing the binding later does not change those helper objects.
To make any adjustments after construction, you likely need a custom binding behavior through which you can tweak the internals of the binding as needed. Use that during construction so all helper objects are prepared for your later changes. As usual, all you want is one simple behavior change, but you will also need to write the ancillary helper classes to support that one change.
See the SO thread: ONVIF Authentication in .NET 4.0 with Visual Studio 2010
For a discussion of CustomBinding issues.
See the blog post: Supporting the WS-I Basic Profile Password Digest in a WCF Client Proxy
For an example of a custom Behavior that lets you change the Username Token on the fly.
Perhaps something similar can be done to let you control the local endpoint binding on the fly.
UPDATE: After more reading here on Stack Overflow, and the pages it links to, I believe I have found the answer you are looking for.
For PasswordDigestBehavior:
see: ONVIF Authentication in .NET 4.0 with Visual Studios 2010
and: http://benpowell.org/supporting-the-ws-i-basic-profile-password-digest-in-a-wcf-client-proxy/
For local NIC binding:
see: Specify the outgoing IP address to use with WCF client
// ASSUMPTIONS:
// 1: DeviceClient is generated by svcutil from your WSDL.
// 1.1: DeviceClient is derived from
// System.ServiceModel.ClientBase<Your.Wsdl.Device>
// 2: serviceAddress is the Uri provided for your service.
//
private static DeviceClient CreateDeviceClient(IPAddress nicAddress,
Uri serviceAddress,
String username,
String password)
{
if (null == serviceAddress)
throw new ArgumentNullException("serviceAddress");
//////////////////////////////////////////////////////////////////////////////
// I didn't know how to put a variable set of credentials into a static
// app.config file.
// But I found this article that talks about how to set up the right kind
// of binding on the fly.
// I also found the implementation of PasswordDigestBehavior to get it all to work.
//
// from: https://stackoverflow.com/questions/5638247/onvif-authentication-in-net-4-0-with-visual-studios-2010
// see: http://benpowell.org/supporting-the-ws-i-basic-profile-password-digest-in-a-wcf-client-proxy/
//
EndpointAddress serviceEndpointAddress = new EndpointAddress(serviceAddress);
HttpTransportBindingElement httpBinding = new HttpTransportBindingElement();
if (!String.IsNullOrEmpty(username))
{
httpBinding.AuthenticationScheme = AuthenticationSchemes.Digest;
}
else
{
httpBinding.AuthenticationScheme = AuthenticationSchemes.Anonymous;
}
var messageElement = new TextMessageEncodingBindingElement();
messageElement.MessageVersion =
MessageVersion.CreateVersion(EnvelopeVersion.Soap12, AddressingVersion.None);
CustomBinding bind = new CustomBinding(messageElement, httpBinding);
////////////////////////////////////////////////////////////////////////////////
// from: https://stackoverflow.com/questions/3249846/specify-the-outgoing-ip-address-to-use-with-wcf-client
// Adjust the serviceEndpointAddress to bind to the local NIC, if at all possible.
//
ServicePoint sPoint = ServicePointManager.FindServicePoint(serviceAddress);
sPoint.BindIPEndPointDelegate = delegate(
System.Net.ServicePoint servicePoint,
System.Net.IPEndPoint remoteEndPoint,
int retryCount)
{
// if we know our NIC local address, use it
//
if ((null != nicAddress)
&& (nicAddress.AddressFamily == remoteEndPoint.AddressFamily))
{
return new System.Net.IPEndPoint(nicAddress, 0);
}
else if (System.Net.Sockets.AddressFamily.InterNetworkV6 == remoteEndPoint.AddressFamily)
{
return new System.Net.IPEndPoint(System.Net.IPAddress.IPv6Any, 0);
}
else // if (System.Net.Sockets.AddressFamily.InterNetwork == remoteEndPoint.AddressFamily)
{
return new System.Net.IPEndPoint(System.Net.IPAddress.Any, 0);
}
};
/////////////////////////////////////////////////////////////////////////////
DeviceClient client = new DeviceClient(bind, serviceEndpointAddress);
// Add our custom behavior
// - this requires the Microsoft WSE 3.0 SDK file: Microsoft.Web.Services3.dll
//
PasswordDigestBehavior behavior = new PasswordDigestBehavior(username, password);
client.Endpoint.Behaviors.Add(behavior);
return client;
}
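A hypothetical call site, just to show the intent (the address, credentials and NIC are placeholders):

// Hypothetical usage; substitute your own device address, credentials and local NIC.
DeviceClient client = CreateDeviceClient(
    IPAddress.Parse("192.168.1.10"),                       // local NIC to bind to, or null
    new Uri("http://192.168.1.20/onvif/device_service"),   // ONVIF service address
    "admin",
    "secret");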