Is it better to have one servlet running multiple tasks, or have multiple servlets?
E.g., at the moment I have it like this:
ViewCarsServlet
CarViewSalesServlet
AddCarSaleServlet
With each servlet handling my requests.
But would it be better to have one servlet, such as CarServlet,
and then pass a task variable into an if statement?
Which would be better coding practice?
It's better to have multiple servlets for multiple tasks. It will not affect performance, and it is friendlier to work with: for a particular task we can hit a separate servlet instead of making one servlet complex with lots of if/else conditions. If we use only one servlet, every request has to check the conditions before executing the respective task.
Define "better".
My personal taste would group related operations into a single servlet. I'd think about it as exposing a REST API of operations that went together. But that's just my personal opinion. I don't know of a "right" answer that everyone would agree to.
I'm designing an API and I want to allow my users to combine a GET parameter with AND operators. What's the best way to do this?
Specifically I have a group_by parameter that gets passed to a Mongo backend. I want to allow users to group by multiple variables.
I can think of two ways:
?group_by=alpha&group_by=beta
or:
?group_by=alpha,beta
Is either one to be preferred? I've consulted a few API design references, but no one seems to have a view on this.
There is no strict preference. The advantage of the first approach is that many frameworks will turn group_by into an array or similar structure for you, whereas with the second approach you need to parse the values out yourself. The second approach is also less verbose, which may be relevant if your query string is particularly large.
With the first approach, you may also want to test that the query parameters always reach your framework in the order the client sent them; some frameworks have a bug where that doesn't happen.
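For illustration, here's a minimal C# sketch that accepts both styles at once (the helper name is invented, and HttpUtility assumes a .NET backend, which the question doesn't specify): read every occurrence of group_by, then split each value on commas.

using System;
using System.Linq;
using System.Web; // HttpUtility; available on .NET Core via the System.Web.HttpUtility package

static class GroupByParser
{
    // Accepts ?group_by=alpha&group_by=beta as well as ?group_by=alpha,beta
    public static string[] Parse(string queryString)
    {
        var parsed = HttpUtility.ParseQueryString(queryString);
        // GetValues returns one entry per repeated key, e.g. ["alpha", "beta"]
        var raw = parsed.GetValues("group_by") ?? Array.Empty<string>();
        // Splitting each entry on commas also handles the single-parameter form
        return raw
            .SelectMany(v => v.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries))
            .Select(v => v.Trim())
            .ToArray();
    }
}

Parse("group_by=alpha&group_by=beta") and Parse("group_by=alpha,beta") both yield ["alpha", "beta"], so clients can use whichever form their tooling makes easier.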
I want to generate a very short unique ID in my web app that can be used to handle sessions - sessions as in users connecting to each other's session with this ID.
But how can I keep track of these IDs? Basically, I want to generate a short ID, check whether it is already in use, and create a new one if it is.
Could I simply have a static class, that has a collection of these IDs? Are there other smarter, better ways to do this?
I would like to avoid using a database for this, if possible.
Generally, static variables, regardless of where they are declared, stay alive for the application's lifetime. The application's lifetime ends after the last request has been processed and a configurable idle period (set in web.config) has elapsed. As a result, you can define a static variable to store your short IDs wherever it is convenient.
However, there are also a number of well-known third-party tools that are candidates to choose from. Memcached is one of the major facilities that deserves your notice, as it is widely used in giant applications such as Facebook and others.
Based on how you want to arrange your software architecture, you may prefer your own hand-written facilities or a third party. The most considerable advantage of the third parties is that they are industry-standard and well-tested in different situations, embodying accumulated best practices. Writing your own functions, on the other hand, gives you the power to write minimal code with more tailored behavior, as well as the ability to debug it, which cannot be ignored in such situations.
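As a concrete sketch of the static-variable approach (the class name, alphabet, and ID length are arbitrary choices for illustration), an in-process registry in C# might look like this; note that it loses all IDs whenever the application recycles:

using System;
using System.Collections.Concurrent;
using System.Security.Cryptography;

public static class ShortIdRegistry
{
    // Alphabet omits easily confused characters (0/O, 1/I)
    private const string Alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789";
    private static readonly ConcurrentDictionary<string, byte> Ids =
        new ConcurrentDictionary<string, byte>();

    public static string Create(int length = 6)
    {
        while (true)
        {
            var id = Generate(length);
            // TryAdd is atomic, so generate-check-insert cannot race
            if (Ids.TryAdd(id, 0)) return id;
        }
    }

    public static bool Release(string id) => Ids.TryRemove(id, out _);

    private static string Generate(int length)
    {
        var bytes = new byte[length];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(bytes);
        var chars = new char[length];
        for (int i = 0; i < length; i++)
            chars[i] = Alphabet[bytes[i] % Alphabet.Length]; // slight modulo bias; fine for a sketch
        return new string(chars);
    }
}

Because TryAdd either claims the ID or fails atomically, two simultaneous requests can never end up holding the same ID.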
There is an existing third-party REST API available which accepts one set of inputs and returns the corresponding output. (Think of it as Bing's geocoding service, which accepts an address and returns location details.)
My need is to call this API multiple times (say 500-1000) for a single ASP.NET request, and each call may take close to 500ms to return.
I can think of three approaches for how to do this. I need your input on which could be the best possible approach, keeping speed as the criterion.
1. Using HTTP requests in a for loop
Write a simple for loop and, for each input, call the REST API and add the output to the result. This could by far be the slowest approach, but there is no overhead of threads or context switching.
2. Using async and await
Use the async and await mechanisms to call the REST API. This could be efficient, as the thread continues to do other activities while waiting for the REST call to return. The problem I am facing is that, per the recommendations, I should be using await all the way up to the top-most caller, which is not possible in my case. Not following that may lead to deadlocks in ASP.NET, as mentioned here: http://msdn.microsoft.com/en-us/magazine/jj991977.aspx
3. Using the Task Parallel Library
Use Parallel.ForEach with the synchronous API to invoke the server in parallel, and a ConcurrentDictionary to hold the results. This may result in thread overhead, though.
Also, let me know if there is any other, better way to handle this. I understand people might suggest measuring the performance of each approach, but I would like to understand how people have solved this problem before.
The best solution is to use async and await, but in that case you will have to take it async all the way up the call stack to the controller action.
The for loop keeps it all sequential and synchronous, so it would definitely be the slowest solution. Parallel will block multiple threads per request, which will negatively impact your scalability.
Since the operation is I/O-based (calling a REST API), async is the most natural fit and should provide the best overall system performance of these options.
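For illustration, a rough sketch of "async all the way" in an ASP.NET MVC controller might look like the following (the controller, action, and endpoint URL are invented for the example, and it is unthrottled for brevity):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc; // assumes ASP.NET MVC 4+, which supports async actions

public class GeoController : Controller
{
    private static readonly HttpClient Client = new HttpClient();

    // The action itself is async, so no thread is blocked while calls are in flight
    public async Task<ActionResult> Lookup(IEnumerable<string> addresses)
    {
        var tasks = addresses.Select(a =>
            Client.GetStringAsync("https://api.example.com/geocode?q=" + Uri.EscapeDataString(a)));
        string[] results = await Task.WhenAll(tasks); // all requests run concurrently
        return Json(results, JsonRequestBehavior.AllowGet);
    }
}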
First, I think it's worth considering some issues that you didn't mention in your question:
500-1000 API calls sounds like quite a lot. Isn't there a way to avoid that? Doesn't the API have some kind of bulk query functionality? Or can't you download their database and query it locally? (More open organizations like Wikimedia or Stack Exchange often support this; more closed ones like Microsoft or Google usually don't.)
If those options are not available, then at least consider some kind of caching, if that makes sense for you.
The number of concurrent requests to the same server that ASP.NET allows is only 10 by default. If you want to make more concurrent requests, you will need to raise ServicePointManager.DefaultConnectionLimit.
Making this many requests could be considered abuse by the service provider and could lead to blocking of your IP. Make sure the provider is okay with this kind of usage.
Now, to your actual question: I think that the best option is to use async-await, even if you can't use it all the way. You can avoid deadlocks either by using ConfigureAwait(false) at every await (which is the correct solution) or by using something like Task.Run(() => /* your async code here */).Wait() to escape the ASP.NET context (which is the simple solution).
Using something like Parallel.ForEach() is not great, because it unnecessarily wastes ThreadPool threads.
If you go with async, you should probably also consider throttling. A simple way to achieve that is by using SemaphoreSlim.
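Putting those pieces together, a minimal sketch might look like this (the class name and concurrency limit are arbitrary; SemaphoreSlim caps the number of calls in flight, and ConfigureAwait(false) keeps continuations off the ASP.NET context):

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class ThrottledClient
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<string[]> FetchAllAsync(string[] urls, int maxConcurrency = 20)
    {
        // Raise the per-host connection cap, which is very low by default
        ServicePointManager.DefaultConnectionLimit = maxConcurrency;

        var gate = new SemaphoreSlim(maxConcurrency, maxConcurrency);
        var tasks = urls.Select(async url =>
        {
            await gate.WaitAsync().ConfigureAwait(false); // throttle
            try
            {
                return await Client.GetStringAsync(url).ConfigureAwait(false);
            }
            finally
            {
                gate.Release();
            }
        });
        return await Task.WhenAll(tasks).ConfigureAwait(false);
    }
}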
Is model injection on the fly possible? In other words, if I ask for a model of the type IPhotoModel, I should get one of its implementations based on the current state of the view. If I am looking at a UserPage, I should get a user-specific implementation of that model. If I am looking at a LocationPage, I should get a location-specific implementation.
Currently, the only way that I see is introducing a command that specifies the model mapping, with a concrete one based on the current view state ...
something like...
injector.mapValue(IPhotoViewModel, injector.getInstance(UserPhotoViewModel)) or
injector.mapValue(IPhotoViewModel, injector.getInstance(LocationPhotoViewModel))
is this the best way possible? I do not really want to introduce much coupling logic outside of the context, but ...
That's how I do it, and I believe that this is the recommended way. In fact, I think that many advanced RobotLegs users will break out most of the mappings into Commands for convenience, reuse, and to make it easier to read the program--even if the Command is only run once at startup. I've used it for things like swapping out mock services for real services--the Command that maps the dependencies is different, but everything else is the same.
I don't see this as "that much" coupling logic. The Command is merely setting up the program based on current Application state. There's not really that much difference between using a Command to change Injector state vs your own custom Model state.
You may even find that you can reuse your injection mapping Commands across Applications, whereas you might not be able to reuse the entire Context.
HTH;
Amy
The short of it is: is it costly to check an Application variable such as Application("WebAppName") 10-20 times each time a page loads?
Background: (feel free to critique)
Some includes in my site contain many links and images which cannot use relative URLs, because the includes are pulled in from different paths.
Hence these includes contain frequent instances of
<img src="<%=Application("Webroot")%>images\image.gif">
Is it expensive to keep calling an Application variable like this?
Should I just put the Application value in some local variable to use where needed?
IMPORTANT NOTE:
I need my webapp to run fine on a server whether it be in the root web ("/") or in a virtual subweb ("/app").
Thanks in advance for any wisdom shared.
It's cheap - very, very cheap - just a dictionary lookup. Compared with almost anything else you'll do in the app (loading something from disk or the network) this will be statistical noise.
In general though, the best thing to do if you're worried about things like this is to measure it. Arbitrarily put 10,000 calls into a page, and see how that affects performance. See how it affects concurrency as well - can you still get the throughput you need when processing multiple concurrent requests?
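If it helps, a throwaway micro-benchmark along those lines could be as simple as this (a hypothetical WebForms code-behind; Stopwatch and the 10,000 count are just for the experiment):

using System;
using System.Diagnostics;

public partial class BenchPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var sw = Stopwatch.StartNew();
        string s = null;
        for (int i = 0; i < 10000; i++)
            s = (string)Application["Webroot"]; // the lookup under test
        sw.Stop();
        Response.Write(string.Format("10,000 lookups took {0} ms (last value: {1})",
                                     sw.ElapsedMilliseconds, s));
    }
}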
Just for info, another option is:
<img src="<%=VirtualPathUtility.ToAbsolute("~/images/image.gif")%>"
This works well especially in MVC, where you might write an extension method to do the job, i.e.
<%=Html.Image("~/images/image.gif")%>
The Application object is a synchronized collection that uses ReadWriteObjectLock (an internal class that just uses the lock keyword). If you are only reading from the collection, it will be as fast as a hash-table lookup, as Jon mentioned; but if someone is writing to the collection at the same time, readers will block until the write is complete. If you are that worried about performance, call the indexer once, store the value in a local variable, and use that variable in your views.
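As a sketch of that advice (the helper name is hypothetical), you could read the value once per request and cache it in HttpContext.Items, so every later reference is a plain unsynchronized lookup:

using System.Web;

public static class AppPaths
{
    // Reads Application["Webroot"] once per request; later reads hit per-request Items instead
    public static string Webroot
    {
        get
        {
            var ctx = HttpContext.Current;
            var cached = ctx.Items["Webroot"] as string;
            if (cached == null)
            {
                cached = (string)ctx.Application["Webroot"];
                ctx.Items["Webroot"] = cached;
            }
            return cached;
        }
    }
}

A view can then write <%=AppPaths.Webroot%> as often as it likes without touching the synchronized Application collection again.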
Use Request.ApplicationPath instead (only works if your app is set as a virtual directory in IIS)
Short answer - measure it and decide in your own environment. I would say it does not matter.
Longer answer - you should have the call wrapped in something anyway... like WebConfiguration.Root.
That will give you the option to apply whatever optimization you like at any time in the future.