Which is faster? Loop over object properties or LINQ query? - asp.net

Is this faster:

var query = from prop in obj.GetType().GetProperties()
            where prop.Name == "Id"
            select prop;
var singleProperty = query.SingleOrDefault();
// do stuff with singleProperty

than this?

var type = obj.GetType();
foreach (var prop in type.GetProperties())
{
    if (prop.Name == "Id")
    {
        // do stuff
    }
}
Is it the other way around? Or are they the same?
Why and how would you know?
Sorry to be overly direct in my question. I prefer the first one but I don't know why or if I should.
Thanks.

Technically, the first case may allocate more memory and do more processing than the second because of the intermediate data and the LINQ abstractions. But the time and memory involved are so negligible in the grand scheme of things that you're far better off optimizing for readability than for raw efficiency in this scenario. Worrying about it is probably a case of premature optimization.
Here are some references on why the first may be slightly slower:
http://www.schnieds.com/2009/03/linq-vs-foreach-vs-for-loop-performance.html
http://ox.no/posts/linq-vs-loop-a-performance-test
http://geekswithblogs.net/BlackRabbitCoder/archive/2010/04/23/c-linq-vs-foreach---round-1.aspx
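
If you want to check for yourself rather than take those links on faith, a minimal benchmark sketch like the one below will do; the Sample type and the iteration count are made up for illustration.

using System;
using System.Diagnostics;
using System.Linq;

class ReflectionBenchmark
{
    // Hypothetical type; any class with an "Id" property will do.
    class Sample { public int Id { get; set; } }

    static void Main()
    {
        object obj = new Sample();
        const int iterations = 1000000;

        // LINQ version: query over GetProperties(), then SingleOrDefault.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            var prop = obj.GetType().GetProperties()
                          .SingleOrDefault(p => p.Name == "Id");
        }
        sw.Stop();
        Console.WriteLine("LINQ:    {0} ms", sw.ElapsedMilliseconds);

        // Plain loop version: same GetProperties() call, no LINQ machinery.
        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            foreach (var p in obj.GetType().GetProperties())
            {
                if (p.Name == "Id")
                    break;
            }
        }
        sw.Stop();
        Console.WriteLine("foreach: {0} ms", sw.ElapsedMilliseconds);
    }
}

In both versions the reflection call (GetProperties) tends to dominate; the LINQ-vs-loop difference is noise by comparison.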

The correct answer is: use Reflector to see what's being generated by the compiler.
That said, your LINQ query uses the same mechanism to retrieve the property list as your other snippet, so the plain loop should technically be faster only by the amount of LINQ overhead it avoids. I'd expect that difference to be minimal (i.e., imperceptible), so this really comes down to a code readability and maintainability decision.
And I hate LINQ, so skip it.
Retrospective, one year later:
I've found that LINQ isn't the demon that I thought it was. I'm actually pretty impressed with its implementation, and spent quite a bit of time looking at the IL trying to find a legitimate reason not to like it.
That said, LINQ to Objects is pretty slick. However, to future generations working on projects with a database: don't use this as a reason to perform all of your queries on the client instead of letting your database server do what it's very, very good at.
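
To make that last point concrete, here is a sketch of where the filtering runs; db, Orders, and Status are hypothetical LINQ to SQL names.

// Server-side: the Where clause is translated to SQL, so only matching
// rows ever leave the database.
var active = db.Orders
               .Where(o => o.Status == "Active")
               .ToList();

// Client-side: AsEnumerable() ends SQL translation at that point, so every
// order is pulled across the wire and filtered in memory; this is the
// mistake warned about above.
var activeClientSide = db.Orders
                         .AsEnumerable()
                         .Where(o => o.Status == "Active")
                         .ToList();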

They are the same in terms of performance because LINQ makes use of deferred execution; however, the LINQ option may use a trivially larger (and irrelevant) amount of memory.
Therefore, they are (effectively) identical in performance and behaviour!
I would go with the LINQ version for readability because I like LINQ but to each their own.
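
For completeness, there is a third option neither snippet uses: reflection can look a property up by name directly, skipping both the query and the loop. A sketch, reusing the question's obj variable:

// Type.GetProperty does the name match for you; it returns null if the
// property doesn't exist.
var singleProperty = obj.GetType().GetProperty("Id");
if (singleProperty != null)
{
    // do stuff with singleProperty
}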

Related

Java Collections - ArrayList and LinkedList

Which is better for a cache-based application, ArrayList or LinkedList, and why?
You'll want to use the ArrayList. More often than not, there is a better alternative to using a LinkedList, though there are scenarios where LinkedLists are the right choice.
Why?
A LinkedList scatters its nodes across many small chunks of memory, which yields very poor performance because CPU caches depend on locality of reference; an ArrayList's contiguous backing array is far more cache-friendly.
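
A rough way to see the locality effect yourself is the sketch below. It uses the C# analogues List<T> and LinkedList<T> to stay in one language for this page; the element count is arbitrary.

using System;
using System.Collections.Generic;
using System.Diagnostics;

class LocalityDemo
{
    static void Main()
    {
        const int n = 10000000;
        var arrayBacked = new List<int>();
        var linked = new LinkedList<int>();
        for (int i = 0; i < n; i++)
        {
            arrayBacked.Add(i);
            linked.AddLast(i);
        }

        // List<T> keeps its elements in one contiguous array, so a
        // sequential scan hits the CPU cache almost every time.
        var sw = Stopwatch.StartNew();
        long sum = 0;
        foreach (var x in arrayBacked) sum += x;
        sw.Stop();
        Console.WriteLine("List<T>:       {0} ms", sw.ElapsedMilliseconds);

        // LinkedList<T> allocates a separate node object per element;
        // chasing Next pointers jumps around the heap and misses cache.
        sw = Stopwatch.StartNew();
        sum = 0;
        foreach (var x in linked) sum += x;
        sw.Stop();
        Console.WriteLine("LinkedList<T>: {0} ms", sw.ElapsedMilliseconds);
    }
}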

Is GetHashCode just cargo-cult here?

HttpContext.Current.Items["ctx_" + HttpContext.Current.GetHashCode().ToString("x")]
I see this exact code all ... over ... the ... place but I must be overlooking something. It's common in responses to these posts to question the appropriateness of using HttpContext, but no one points out that GetHashCode is redundant and a fixed string will do.
What am I overlooking here?
EDIT: The question is, GetHashCode() will be the same for every HttpContext.Current, so why use GetHashCode() in the four links I provided? Some of them are posts with quite a bit of work put into them, so I feel like perhaps they are addressing some threading or context issue I'm overlooking. I don't see why just HttpContext.Current.Items["ctx_"] wouldn't do exactly the same.
This is horrible. For one, HttpContext.Current.Items is local to the current HttpContext anyway, so there's no need to try to make the keys "more unique". Second, depending on how this technique is used, the keys will occasionally collide, causing spurious, hard-to-debug failures.
If you need a unique key (maybe because you are developing a library), use a saner solution like Guid.NewGuid().ToString(). This is guaranteed to work and is even simpler.
So to answer your question :)
It doesn't make much sense to use GetHashCode to create the key.
The authors of the posts you linked to probably wanted a key that would be unique. But doing this doesn't stop another team member from using the same key somewhere else in the code base.
I think it's better to just use handwritten, longer keys. Instead of
["ctx_" + HttpContext.Current.GetHashCode().ToString("x")]
just use
["object_context_key"]
or something like that. That way you know exactly what it is (which may be useful in, for example, post-mortem debugging), and if you have to come up with a longer key yourself, it will quite possibly be 'more unique' than the one built with GetHashCode.
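
For example, here is a minimal sketch of the fixed-key approach; the key name and the stored value are placeholders:

using System.Web;

static class RequestCache
{
    // Items is already scoped to the current request, so a fixed,
    // descriptive key is just as unique as one with a hash suffix.
    private const string ContextKey = "object_context_key";

    public static void Set(object value)
    {
        HttpContext.Current.Items[ContextKey] = value;
    }

    public static object Get()
    {
        return HttpContext.Current.Items[ContextKey];
    }
}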

View or stored procedure or LINQ query

I need help with database access in a 3-tier architecture. I am developing an application in ASP.NET and LINQ. The database has at least 9 master tables and 22 tables in total. For managing data I need to use a view for each master table.
My question is: which is more convenient and faster at runtime?
1. Page-level queries (using multiple joins involving at least 5-6 tables) in the DataAccessLayer, using LINQ
2. Views, referenced from the DataAccessLayer
3. Putting all queries in stored procedures and binding to those
Which one is best practice? Do views make the page heavy at runtime?
Linq2SQL queries generally wind up as parameterized queries on your database.
There are many discussions on SO comparing the difference in performance like Are Stored Procedures more efficient, in general, than inline statements on modern RDBMS's? and Stored Procedures vs Parameterized Queries
I believe the consensus is that the flexibility an ORM like Linq2SQL gives generally outweighs any perceived performance loss.
IMO, Linq2SQL will do 90% of the job just fine for most of your data access requirements, and if you have any special needs where a stored proc makes more sense (e.g. a really data-intensive or batch transaction), you can write one or two procs and add these to your DataContext.
However, while we are at it, I wouldn't consider Linq2SQL on a new project. Why not look at Entity Framework 4+? (OP is using .NET 3.5)
Edit
If your tables' foreign keys are set up correctly, when you drag your underlying tables into your LINQ DBML, you will find that you hardly ever need to join 'manually', as the ORM will handle the underlying navigation for you.
e.g.
var myUpvotes = Users.Single(u => u.UserId == someUserId)
                     .Votes
                     .Count(v => v.Upvote);
One concept you will need to understand is eager vs. lazy loading; this will definitely impact the performance of your 'joins'.
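
For instance, here is a sketch of eager loading in LINQ to SQL, assuming the User/Votes association from the example above; MyDataContext is a hypothetical generated context.

using System;
using System.Data.Linq;

var db = new MyDataContext();

// Tell the context to fetch each user's Votes in the same query.
var options = new DataLoadOptions();
options.LoadWith<User>(u => u.Votes);
db.LoadOptions = options;

// Without LoadWith, touching u.Votes would lazily issue one extra query
// per user (the classic N+1 problem).
foreach (var u in db.Users)
{
    Console.WriteLine("{0}: {1} votes", u.Name, u.Votes.Count);
}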
I think best practice would be your #1, but you must keep an open mind about bringing #2 or #3 to bear on the problem if performance demands it. In other words, run with #1 as far as you can, but be willing to use #2 or #3 to improve those parts of the code if/when you need to.
I've found (and agree with @nonnb) that the productivity improvement and flexibility of using Linq2SQL / ORMs makes it the right choice most of the time, but there are a few times when you need to be willing to make use of a strategic SP in your overall plan. It's not an either/or decision; use both as necessary. Generally SPs will be faster, but most of the time not by enough to make a difference in your application. Keep them in your toolset, though, because in the right scenarios they can make HUGE improvements when they are really needed.

MSMQ - Message Queue Abstraction and Pattern

Let me define the problem first and why a message queue has been chosen. I have a data layer that will be transactional and EXTREMELY insert-heavy, and rather than attempt to deal with these issues when they occur, I am hoping to design my application from the ground up with this in mind.
I have decided to tackle this problem by using Microsoft Message Queuing and performing the inserts asynchronously as time permits. However, I quickly ran into a problem: certain inserts may need to be recalled (i.e. retrieved) immediately (imagine this is for a POS system and you need to recall the last transaction, one that still hasn't been inserted).
The way I decided to tackle this is by abstracting the MessageQueue and combining it with my data access layer, thereby creating the illusion of a single set of data being returned to the user of the data layer. (I have considered the other issues that occur in such a scenario, essentially dirty reads and such, and have concluded that for my purposes I can control them.)
However, this is where things get a little nasty... I've worked out how to get the messages back and such (a trivial enough problem), but where I am stuck is this: how do I create a generic (or at least somewhat generic) way of querying my message queue, one where I can minimize the duplication between the SQL queries and the MessageQueue queries? I have considered using LINQ (but have a very limited understanding of the technology) and have also attempted an implementation with predicates, which so far is pretty smelly.
Are there any patterns for such a problem that I can utilize? Am I going about this the wrong way? Does anyone have any of their own ideas about how I can tackle this problem? Does anyone even understand what I am talking about? :-)
Any and ALL input would be highly appreciated and seriously considered…
Thanks again.
For anyone interested: I decided in the end to simply cache the transaction in another location and use MSMQ as intended and described below.
If the queue has a large-ish number of messages on it, then enumerating those messages will become a serious bottleneck. MSMQ was designed for first-in-first-out kind of access and anything that doesn't follow that pattern can cause you lots of grief in terms of performance.
The answer depends greatly on the sort of queries you're going to be executing, but it may be some kind of NoSQL database (CouchDB, BerkeleyDB, etc.).
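
To illustrate why, here is a sketch of the kind of scan such a 'query' implies; the queue path, the PosTransaction type, and targetId are hypothetical.

using System.Messaging;

var queue = new MessageQueue(@".\private$\transactions");
queue.Formatter = new XmlMessageFormatter(new[] { typeof(PosTransaction) });

PosTransaction match = null;
using (var enumerator = queue.GetMessageEnumerator2())
{
    while (enumerator.MoveNext())
    {
        // Every message must be deserialized just to test the predicate,
        // so the cost grows linearly with the queue depth.
        var tx = (PosTransaction)enumerator.Current.Body;
        if (tx.Id == targetId)
        {
            match = tx;
            break;
        }
    }
}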

Heavy calculations in mysql

I'm using ASP.NET together with MySQL, and my site requires some quite heavy calculations on the data. These calculations are done in MySQL; I thought it would be easier to do them there and then simply work with the data in ASP.NET, databinding my GridViews and other data controls without much code.
But I'm starting to think I made a mistake doing all the calculations in the back end, because everything seems quite slow. In one of my queries I need to get data from several tables at once and add them together in my GridView, so what I do in that query is:
select from four different tables, each with some inner joins; union the results together using UNION ALL; then do some summing and grouping.
I can't post the full query here because it's quite long. But do you think it's a bad way to do the calculations as I've done? Would it be better to do them in ASP.NET? What is your opinion?
MySQL does not allow embedded assembly, so writing a query which will input a sequence of RGB records and output MPEG-4 is probably not the best idea.
On the other hand, it's good at summing and averaging.
Seeing what calculations you are talking about would surely help improve this answer.
Self-update:
"MySQL does not allow embedded assembly, so writing a query which will input a sequence of RGB records and output MPEG-4 is probably not the best idea."
I'm really thinking about how to implement this query, and, worse than that, I feel it's possible :)
My experience with MySQL is that it can be quite slow for calculations. I would suggest moving a substantial amount of the work (particularly grouping and summing) to ASP.NET. This has not been my experience with other database engines, but a 30-day moving average in MySQL seems quite slow.
Without knowing the actual query, it sounds like you are doing relational/table work in your query, which an RDBMS is good at, so it seems you are doing it in the correct place. The problem may be the optimization of the query itself; you may want to run EXPLAIN on it to get an idea of the query plan MySQL is using, and try to optimize from there.
