I am fairly new to using packages. My team is deciding on whether or not to use packages in our applications. We have 4 applications that currently use in-line SQL. We are considering putting each SQL statement in a stored procedure and then logically grouping the stored procedures into packages (these stored procedures will be shared among the applications). What are the potential pros and cons of using packages?
Our applications are written in ASP.NET using C#.
So you want to start a debate about the advantages versus the disadvantages of using a package? Ok, then I would leave the disadvantages part to you, if you could share them with us. I will just share the advantages, not in my own words, since it would be a repetition of what Thomas Kyte already said here:
break the dependency chain (no cascading invalidations when you install a new package body -- if you have procedures that call procedures -- compiling one will invalidate your database)
support encapsulation -- I will be allowed to write MODULAR, easy to understand code -- rather than MONOLITHIC, non-understandable procedures
increase my namespace measurably. Package names have to be unique in a schema, but I can have many procedures across packages with the same name without colliding
support overloading
support session variables when you need them
promote overall good coding techniques, stuff that lets you write code that is modular, understandable, logically grouped together....
If you are a programmer, you would see the benefits of packages over a proliferation of standalone procedures in a heartbeat.
If you're going to have different applications accessing the same tables, with the same business logic happening on the database (e.g. constraints, etc.), then stored procs are the way to go. Think of them as the interface between the "front end" (i.e. anything that's not the database) and the database, in much the same way as a web service provides an interface to the mid-tier (I think; I'm really not up on the non-db related architecture!).
Your publicly callable stored procedures would typically be things like create_a_new_customer, add_a_new_address, retrieve_customer_details, etc., where the logic behind each action is coded and related procedures are grouped into the same package. You wouldn't want to code a series of procedures that just do DML on the tables and expect the applications to work out when to call each procedure.
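For illustration, here is a minimal sketch of how one of the C# applications might call such a packaged procedure through ADO.NET, assuming an Oracle database and the ODP.NET provider; the package, procedure and parameter names (customer_pkg.create_a_new_customer, p_name) are hypothetical:

    // Sketch: calling a procedure that lives inside a package. The package
    // name qualifies the procedure, so another package may safely contain
    // its own create_a_new_customer.
    using Oracle.DataAccess.Client;

    public static class CustomerDal
    {
        public static void CreateNewCustomer(string connectionString, string name)
        {
            using (var conn = new OracleConnection(connectionString))
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandType = System.Data.CommandType.StoredProcedure;
                cmd.CommandText = "customer_pkg.create_a_new_customer";
                cmd.Parameters.Add("p_name", OracleDbType.Varchar2).Value = name;

                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }

Because the application only ever references the package specification, a new package body can be installed without the callers noticing.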
Here's a handy decision diagram for this question.
Just adding a side effect of using packages...
If I have a package with all the stored procedures and functions that are required for my application to run, suppose one small stored procedure suddenly fails due to some issue (e.g. an alteration in a table structure) or some other minor problem, so that only that one stored procedure becomes invalid.
In this case, will the entire package become invalid, affecting the entire application?
On the other hand, if I do not use packages and create standalone procs and functions, then only the one proc will fail.
I started using Meteor a few months ago.
I would like to know if using cursor.observeChanges for business objects is a good idea.
I want to separate operations and views so I can use the same operations in many views/events, and I want to know if it is a good idea.
Someone told me we should not separate operations on Mongo from views.
So my question is: is it a good idea to do Business Objects with Meteor?
Thanks for reading.
cursor.observeChanges is essentially what you get behind the scenes when you do normal find() queries and bind to template helpers due to its context being reactive.
In the meteor world, the traditional model/view/controller paradigm is shifted towards a reactive data-on-the-wire concept including features like latency compensation.
What you refer to as a business object is basically a representation of your business data which is strongly typed, has a type of its own, is atomic, and has representing that data as its only task.
You can achieve that kind of separation of concerns in any language/framework, including meteor. That only depends on how you lay out, structure and abstract your code.
What Meteor brings into the equation is the toolset to build up an interface to your data with modern ux features that are otherwise very hard/expensive to get.
The only concern over business-class applications could be the fact that Meteor currently employs MongoDB by default. MongoDB has its own discussions around business applications whether they need transaction support, ad-hoc aggregation, foreign key relationships etc. But that is another topic.
I have been thinking about using JDO in my new project. However, there is always the possibility of changing the programming language, or rewriting/adding some modules in other languages. But I fear that in that case the data may not be accessible (parseable, understandable).
Are my fears justified?
If you persist data using JDO to, say, an RDBMS, then the data is held in the RDBMS (in a logical, well-defined structure) which is equally accessible using SQL from a different programming language. The datastore is what holds the data, not some mysterious "JDO" thing, which is simply an API to get the data there from a Java app and Java objects.
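For instance, rows persisted by a Java application through JDO are ordinary relational data, so a C# program (or anything else with a SQL driver) can read them directly; a tiny sketch, where the connection string and PRODUCT table are hypothetical:

    using System;
    using System.Data.SqlClient;

    class ReadJdoPersistedData
    {
        static void Main()
        {
            // The table a Java app filled via JDO is just a table.
            using (var conn = new SqlConnection("Server=.;Database=Shop;Integrated Security=true"))
            using (var cmd = new SqlCommand("SELECT NAME FROM PRODUCT", conn))
            {
                conn.Open();
                using (var rdr = cmd.ExecuteReader())
                    while (rdr.Read())
                        Console.WriteLine(rdr.GetString(0));
            }
        }
    }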
Can using Modules or Shared/Static references to the BLL/DAL improve the performance of an ASP.NET website?
I am working on a site that consists of two projects: one the website, the other a VB.NET class library which acts as a combination of DAL and BLL.
The library is used to communicate with databases and sometimes transform/validate the data going into/coming from the DBs.
Currently each page on the site that needs db access (vast majority) will create an instance of the relevant class in the library to access specific tables.
As I understand it this leads to a class from the library being instantiated and garbage collected for each request, with the possibility of multiple concurrent instances if multiple users view the same page.
If I converted the classes to modules (shared/static class) would performance increase and memory be saved as only one instance of each module exists at a time and a new instance is not having to be created for each request?
(if so, does anyone know if having TableAdapters as global variables in the modules would cause problems due to threading?)
Alternatively, would making the references to the library class in the ASP.NET page Shared/Static have the same effect? (Except I would have to re-write a lot less.)
I'm no expert, but I think that the absence of examples of this static class / session object model in books and online is indicative of it being a bad idea.
I inherited a Linq-To-Sql application where the db contexts were static, and after n requests the whole thing just fell apart. The standard model for L2Sql is the Unit-of-Work pattern (define a task or set of tasks - do them and close). Let the framework worry about connection pooling and efficient GC.
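For reference, the unit-of-work shape looks something like this in C#; a minimal sketch where MyDataContext and its Customers table are hypothetical, designer-generated names:

    using System.Linq;

    public class CustomerService
    {
        public string GetCustomerName(int customerId)
        {
            // One short-lived context per task/request, never a static field;
            // Dispose returns the underlying connection to the pool.
            using (var db = new MyDataContext())
            {
                return db.Customers
                         .Where(c => c.Id == customerId)
                         .Select(c => c.Name)
                         .Single();
            }
        }
    }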
Are you just trying to be efficient or do you have performance issues? If the latter it's usually more effective to look at caching or improving query efficiency (use stored procedures, root out queries in loops) than looking at object instantiation.
Statics don't play well with unit tests either (another reason why they have dropped out of fashion).
Instances are only a problem if they are not collected by the GC (a memory leak). Instances are also more flexible than statics, because you can configure an instance for the specific context in which you are using it.
When an application has poor performance or memory problems, it's usually a sign that:
instances are not properly released (IDisposable)
the amount of data retrieved is too big (not paging large sets of data)
a large number of queries are executed (select n+1, or just a lot of queries)
poorly constructed sql statements (missing indexes, FK, too many joins, etc)
too many remote calls (either to other servers, or disk)
These are the first things I would check; then start looking at the number of instantiated objects. Chances are that correcting the above list will solve most performance bottlenecks.
Can using Modules or Shared/Static references to the BLL/DAL improve the performance of an ASP.NET website?
It's possible, but it depends heavily on how you use your data. One tradeoff in using a single shared instance of an object instead of one per request is that you will need to apply locking unless the objects are strictly read-only, and locking can both slow things down and complicate your code (not to mention being a common source of bugs).
However, if each object is going to contain the exact same data, then the tradeoff may be worth it -- even more so if it can save a DB round-trip.
You might consider using either a Singleton or a small number of parameterized objects rather than a static, though -- and use caching to manage them. That would give you the flexibility to let go of objects that you no longer need, which is harder to do when you're dealing with statics.
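To make that concrete, here is a minimal sketch of the cached, read-mostly variant: one shared instance guarded by a lock, with an explicit way to let go of stale data. The ReferenceData type and its load method are hypothetical stand-ins:

    public class ReferenceData
    {
        public string[] CountryNames { get; set; }

        // Stand-in for the real database load.
        public static ReferenceData LoadFromDatabase()
        {
            return new ReferenceData { CountryNames = new[] { "..." } };
        }
    }

    public static class ReferenceDataCache
    {
        private static readonly object _sync = new object();
        private static ReferenceData _current;

        public static ReferenceData Current
        {
            get
            {
                lock (_sync)
                {
                    if (_current == null)
                        _current = ReferenceData.LoadFromDatabase();
                    return _current;
                }
            }
        }

        // Call this after the underlying tables change so the next
        // reader reloads fresh data.
        public static void Invalidate()
        {
            lock (_sync) { _current = null; }
        }
    }

Note that the locking only stays this simple because callers treat the returned object as read-only; the moment they mutate it, you are back to the concurrency problems described above.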
I am working on the architecture of a mid-sized web application, and for my DAL layer I have 3 options:
1) Traditional stored proc based architecture (using the NTiers template of CodeSmith)
2) LINQ to SQL (or the PLINQO template of CodeSmith)
3) LINQ to Entities
Of the above, LINQ to Entities is out of reach, as we need to start the application very quickly and we don't have the skill set for it; since the team has never worked with any O/RM tool, it would be a steep learning curve for them (this is what I read somewhere).
I prefer to go ahead with LINQ to SQL (my only fear is that Microsoft is not going to support or enhance LINQ to SQL further). From my point of view, even if Microsoft doesn't enhance it further that isn't an issue, as it already has every feature my project requires.
Now my issue is: should I use LINQ to SQL, or should I stick to the traditional architecture?
Or is there any other option...
EDIT: I am going to use SQL Server as the database, and the application does not need to interact with any other database.
One of the most important objectives in designing the DAL layer is faster development and maintainability in the face of future database table changes, as fields may well be added or removed in the future.
Also, if you feel that some ORM tool is really good and does not have a steep learning curve, we could use that as well.
Please provide suggestions.
As you are working on a medium-sized project, I would suggest you use LINQ to SQL because of these advantages:
Advantages using LINQ to SQL:
• No magic strings, like you have in SQL queries (see the sketch after this list)
• Intellisense
• Compile check when the database changes
• Faster development
• Unit of work pattern (context)
• Auto-generated domain objects that are usable in small projects
• Lazy loading
• Learning to write LINQ queries/lambdas is a must for .NET developers.
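To illustrate the first few points (no magic strings, compile-time checking), a minimal sketch; NorthwindDataContext and its Customers table are assumed to be generated by the Linq-to-Sql designer:

    using System;
    using System.Linq;

    class QueryDemo
    {
        static void Main()
        {
            using (var db = new NorthwindDataContext())
            {
                // No SQL strings: rename the City column and regenerate the
                // model, and this query stops compiling instead of failing
                // at runtime.
                var londoners = from c in db.Customers
                                where c.City == "London"
                                orderby c.CompanyName
                                select c;

                foreach (var c in londoners)
                    Console.WriteLine(c.CompanyName);
            }
        }
    }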
Regarding performance:
• Most likely performance is not going to be a problem in most solutions. To pre-optimize is an anti-pattern. If you later see that some areas of the application are too slow, you can analyze those parts, and in some cases even swap some LINQ queries with stored procedures or ADO.NET.
•In many cases the lazy loading feature can speed up performance, or at least simplify the code a lot.
Regarding debugging:
• In my opinion debugging Linq2Sql is much easier than both stored procedures and ADO.NET. I recommend that you take a look at the Linq2Sql Debug Visualizer, which enables you to see the query, and even trigger an execute to see the result while debugging.
•You can also configure the context to write all sql queries to the console window, more information here
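For reference, routing the generated SQL to the console is a one-liner on the context; a sketch where MyDataContext is a hypothetical Linq-to-Sql context:

    using System;
    using System.Linq;

    class LoggingDemo
    {
        static void Main()
        {
            using (var db = new MyDataContext())
            {
                // DataContext.Log accepts any TextWriter; Console.Out makes
                // every generated SQL statement visible while debugging.
                db.Log = Console.Out;

                var count = db.Customers.Count();
                Console.WriteLine(count);
            }
        }
    }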
Regarding another layer:
• Linq2Sql can be seen as another layer, but it is a purely data access layer. Stored procedures are also another layer of code, and I have seen many cases where part of the business logic has been implemented in stored procedures. This is much worse in my opinion, because you are then splitting the business layer into two places, and it will be harder for developers to get a clear view of the business domain.
There is no absolutely preferred way of writing a DAL. These are all options. Which one to choose depends on your project, your skills and your inclinations.
Normally, with LINQ you can expect to be more productive. On the other hand, the DAL built with stored procedures can be expected to perform faster.
The issue only comes when you need some specific queries that the default LINQ to SQL provider won't be able to generate to be blazingly fast. In that case you will have to tap into your LINQ code to plug in your custom stored procedures where needed.
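One way to do that in Linq-to-Sql is DataContext.ExecuteQuery, which materializes the results of hand-written SQL (or a stored procedure call) into your entity type; a minimal sketch, with MyDataContext, Customer and get_top_customers as hypothetical names:

    using System.Collections.Generic;
    using System.Linq;

    class HotPathQueries
    {
        // Fall back to a hand-tuned stored procedure for one hot query
        // while the rest of the DAL stays plain Linq-to-Sql.
        public static List<Customer> TopCustomers(int howMany)
        {
            using (var db = new MyDataContext())
            {
                return db.ExecuteQuery<Customer>(
                    "EXEC get_top_customers {0}", howMany).ToList();
            }
        }
    }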
Regarding LINQ to SQL support and further development: it was shelved a long time ago, so there is no official further development. Note: that is true for the LINQ to SQL relational solution (it will be taken over by EF), not for the main LINQ functionality.
Entity Framework in its v1 received massive criticism. You're advised to wait until v2 comes out.
The most important limitation of LINQ to SQL (compared to Entity Framework or any other popular ORM) is that it doesn't support 1-to-n mappings. That is, each of your LINQ classes can only map to a single table, not represent some sort of view over several others. Maybe that's not important to you, but maybe it is. It depends on your project.
The argument of stored procedures vs ORM's is long-standing and unlikely to be resolved any time soon. My recommendation would be to go with an ORM (Linq-to-Sql in your case).
Yes, stored procedures will always be faster since the queries are precompiled. The real question you have to ask yourself is whether you have such a performance-intensive system that your users will actually notice the difference. Keep in mind that using stored procedures means that you will need to manually write all your own queries, whereas an ORM does this for you. This usually means that an ORM will speed up your development.
Since you mention that speeding up development time is one of your goals I would recommend Linq-to-Sql - otherwise you will basically write the entire DAL yourself.
All of the options you've provided have significant drawbacks. None of them meet the requirements you've set out.
You need to prioritize what is most important for you.
If learning curve is your biggest issue, stay away from all ORMs if you are already comfortable with ADO.NET, DataTables, etc.
If development speed is your biggest issue, you should learn an ORM and go that route. The easiest ORM to recommend is NHibernate. Every other ORM has significant weaknesses. NHibernate works in the vast majority of projects, whereas other ORMs are much more situationally appropriate (depending on your DB design, model design, agility requirements, legacy schema support, etc.). All ORMs have learning curves, they just come into play at different times and in different ways.
Just to expand on #Developer Art, using the traditional stored proc approach enables you to put business logic in the database. Usually you will want to avoid this, but sometimes it is necessary to do. Not to mention you could also enforce constraints and permissions at the database level using this approach. It all depends on your requirements.
With the limitations mentioned, I would say just stick to ad-hoc/custom queries with ADO.NET and not go for any jazzy stuff. Also, the notion that a stored procedure based DAL is faster rests on lame arguments such as "stored procedures are precompiled", which they are not; all they have is a cached query plan. So the less you invest in stored procedures, the better off you are. My advice: ADO.NET and custom dynamic queries constructed from entity objects.
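A minimal sketch of that style, assuming SQL Server and hypothetical table/column names: the query text is assembled dynamically from whichever filters the caller supplies, but every value still goes through a parameter:

    using System.Data.SqlClient;
    using System.Text;

    class CustomerSearch
    {
        public static SqlCommand BuildSearch(SqlConnection conn,
                                             string city, string name)
        {
            var sql = new StringBuilder("SELECT Id, Name FROM Customers WHERE 1=1");
            var cmd = new SqlCommand { Connection = conn };

            if (city != null)
            {
                sql.Append(" AND City = @city");
                cmd.Parameters.AddWithValue("@city", city);
            }
            if (name != null)
            {
                sql.Append(" AND Name LIKE @name");
                cmd.Parameters.AddWithValue("@name", name + "%");
            }

            cmd.CommandText = sql.ToString();
            return cmd;
        }
    }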
I'm thinking through data access for an ASP.NET application. Coming from a company that uses a lot of Windows applications with client DataSets, there is a natural tendency towards a DataSet approach for dealing with data.
I'm more keen on a Business Object approach and I don't like the idea of caching a DataSet in the session then applying an update.
Does anyone have any experience / help to pass on about the pros and cons of both approaches?
You are smart to be thinking of designing a Data Layer in your app. In an ASP.NET application this will help you standardize and pretty dramatically simplify your data access. You will need to learn how to create and use ObjectDataSources but this is quite straightforward.
The other advantage of a data access layer (built using a separate project/DLL) is that it makes Unit testing much simpler. I'd also encourage you to build a Business Layer to do much of the processing of data (the business layer, for example, would be responsible for pulling ObjectDataSources from the DAL to hand to the UI code). Not only does this let you encapsulate your business logic, it improves the testability of the code as well.
You do not want to be caching DataSets (or DAL objects, for that matter) in the session! You will build a Web app so that record modifications work through a Unique ID (or other primary key spec) and feed changes directly to the DAL as they are made. If you were to cache everything you would dramatically reduce the scalability of your app.
Update: Others on this thread are promoting the idea of using ORMs. I would be careful about adopting a full-blown ORM for reasons that I have previously outlined here and here. I do agree, though, that it would be wise to avoid DataSets. In my own work, I make extensive use of DataReaders to fill my ObjectDataSources (which is trivial due to the design of my DAL) and find it to be very efficient.
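As an illustration of that DataReader-to-object approach, a minimal sketch; the Customer shape, connection string and schema are hypothetical:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class CustomerRepository
    {
        // Fill lightweight business objects straight from a DataReader
        // instead of going through a DataSet.
        public static List<Customer> GetCustomers(string connStr)
        {
            var result = new List<Customer>();
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn))
            {
                conn.Open();
                using (var rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read())
                    {
                        result.Add(new Customer
                        {
                            Id = rdr.GetInt32(0),
                            Name = rdr.GetString(1)
                        });
                    }
                }
            }
            return result;
        }
    }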
DataSets can be incredibly inefficient compared even to other ADO.NET objects like DataReaders. I would suggest going the BO/ORM route based on what you are saying.
If you're going to follow Microsoft's direction, then the trend is definitely towards LINQ (ORM) vs. DataSets. When DataSets came into being (ASP.NET 1.0), LINQ wasn't even possible. With LINQ you get type-safety and built-in functions to Create / Update / Delete from the database.
Microsoft has even tried to make the transition easier through LINQ to DataSet.
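A small sketch of what that bridge looks like; the table layout here is made up, standing in for one filled by a TableAdapter:

    using System;
    using System.Data;
    using System.Linq;

    class LinqToDataSetDemo
    {
        static void Main()
        {
            var orders = new DataTable("Orders");
            orders.Columns.Add("Id", typeof(int));
            orders.Columns.Add("Total", typeof(decimal));
            orders.Rows.Add(1, 250m);
            orders.Rows.Add(2, 75m);

            // AsEnumerable() (from System.Data.DataSetExtensions) brings the
            // DataTable into LINQ; Field<T>() gives typed column access.
            var bigOrders = from row in orders.AsEnumerable()
                            where row.Field<decimal>("Total") > 100m
                            select row.Field<int>("Id");

            foreach (var id in bigOrders)
                Console.WriteLine(id);
        }
    }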
We're about to do a big update to an existing ASP.NET app that used DataSet objects heavily; although I am not looking forward to the pain, I am going to insist on going down the BO route. Just the thought of trying to make DataSets work now causes me to break out in a sweat.
I think we are going to go down the LINQ route and use lightweight entity objects.
The company where I work makes heavy use of DataSets as well, although there is a business layer too; the BL mainly loads DataSets from the DB.
I personally dislike this approach. There is also a practice of directly modifying the DataSets after load/before save to meet some immediate need here and there. To me it really violates the idea of business objects, but that's how it is done.
ORM frameworks can really save you a great deal of time, especially in enterprise applications with lots of views with similar buttons and operations.
But it's also easy to lose control, and from that point it will slowly turn into a mess.
Both options are good when used in right cases. Just don't mix them. Decide to do it one way and follow it.
Mark Brittingham's answer is accurate for two-tier applications. But what if I want to use a service tier? DataSets are serializable. Typed DataSets save time over hand-coding your own objects. Typed DataSets are extendable. LINQ to Entities has performance issues; LINQ to SQL is now dead. LINQ to DataSet will always be an option.
I will use Typed DataSets and a multi-layered architecture to save time and organize code. I've tried hand-coded BOs, and the extra development and maintenance time is not worth it.