I'm publishing this post to get at the underlying idea behind the real use of this technology.
I know this isn't a common question, but it doesn't mean that it isn't important.
If you were working with lots of tables in a database, and you were using lots of BPEL services, would you choose SDO (Service Data Objects) instead of DBAdapters (Database Adapters)?
I have been working with SDOs for a few weeks and I find them really useful, but I'm not sure whether using SDOs is better than DBAdapters or not...
What do you think about this? SDOs or DBAdapters?
Thanks in advance.
Basically SDO is Oracle SOA's attempt at an ORM, so you can simply look for information on ORM compared to JDBC. The DBAdapter is slightly different from plain JDBC in that it has extra features around polling and stored procedure integration.
DBAdapter -> simple SQL, stored procedures, basic read, write, delete, and update, and polling
SDO -> highly reusable code and everything that doesn't suit the DBAdapter.
Here is a thread to look at http://forum.spring.io/forum/spring-projects/data/14117-jdbc-or-orm-framework-what-are-the-pros-and-cons
So I just added my servers to Zenoss and installed some PostgreSQL ZenPacks. Unfortunately, I don't care for most of the PostgreSQL monitoring tools. Instead of what they gave me, I am wondering if it is possible to run a custom query that I wrote and then graph the result using Zenoss? How do I go about doing this? Are there any good resources that you know of?
Thanks.
You will need to write your own ZenPack (or at least a Template). Check the Development Guide: http://wiki.zenoss.org/ZenPack_Development_Guide or http://zenosslabs.readthedocs.org/en/latest/zenpack_development/
IMHO you will need a zencommand datasource, which will execute your custom SQL query; the query's output (a number only) will be the metric value processed by Zenoss.
Or you can expose the metric(s) via SNMP, and then it will just be a standard SNMP metric in Zenoss.
It's up to you how you implement it. I recommend the community forum http://www.zenoss.org/forum for Zenoss-related questions.
I'm working on the following scenario:
I have a console app that populates a SQL Server database with some data. I also have a web app that reads the same database and displays the data on a front end. Both applications use Entity Framework to communicate with the database (they have the same connection string).
I wonder how the web app can be notified of any changes that have occurred in the database. Bear in mind that the two applications do not reference each other at all.
Is there an event provided by EF that fires when something has changed? In essence, I would like to know when a change has happened, as well as the nature of that change.
I had a similar requirement and I solved it using the EF function:
[context].Database.CompatibleWithModel(throwIfNoMetadata: true)
It returns whether your model matches the underlying database structure, using the metadata table.
Note that I was using a Code First approach.
The MSDN definition is below:
http://msdn.microsoft.com/en-us/library/system.data.entity.database.compatiblewithmodel(v=vs.103).aspx
Edit:
Just found an amazing article with a demonstration:
http://blog.oneunicorn.com/2011/04/08/code-first-what-is-that-edmmetadata-table/
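For completeness, here is a minimal sketch of how that check can be wired up (Code First, DbContext API from EF 4.1+); the context, entity, and output messages are hypothetical, not from the original post.

using System;
using System.Data.Entity;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CatalogContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

class Program
{
    static void Main()
    {
        using (var context = new CatalogContext())
        {
            // Compares the model hash stored in the EdmMetadata table with the
            // hash of the current code-first model; throws if the metadata
            // table is missing because throwIfNoMetadata is true.
            bool compatible = context.Database.CompatibleWithModel(throwIfNoMetadata: true);
            Console.WriteLine(compatible
                ? "Model matches the database."
                : "Model has drifted from the database.");
        }
    }
}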
This is not something that is related to EF at all. EF is just a library that makes SQL calls and maps them to objects. It has no inside knowledge of the database. As such, when data changes in one application, another application doesn't know unless it queries to see whether that data has changed (and you're not going to be constantly running queries for that; it's too impractical).
There are, potentially some ways to do this, such as adding triggers to the database, which then call extended stored procs to send messages to the app, but this is a lot of work to go through, and it can possibly compromise the robustness of the database.
There used to be something called Notification Services, but that was deprecated. There's now something called SqlDependency objects, which may help you in some cases... but it all depends on what you're trying to do exactly.
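If you do go the SqlDependency route, here is a rough sketch of what it looks like (the connection string and table are made up; it requires Service Broker to be enabled on the database, and the query has to follow the notification rules, e.g. an explicit column list and two-part table names):

using System;
using System.Data.SqlClient;

class ChangeListener
{
    const string ConnectionString =
        "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";

    static void Main()
    {
        SqlDependency.Start(ConnectionString);

        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM dbo.Customers", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                // e.Info gives the kind of change (Insert/Update/Delete).
                // The notification fires only once, so re-subscribe if you
                // need to keep listening.
                Console.WriteLine("Change detected: " + e.Info);
            };

            connection.Open();
            command.ExecuteReader().Dispose(); // the query must run for the subscription to register

            Console.WriteLine("Waiting for changes; press Enter to quit.");
            Console.ReadLine();
        }

        SqlDependency.Stop(ConnectionString);
    }
}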
In any event, it's usually easier to find a different way to do what you want. This is a complex topic, and it really requires a lot of SQL Server knowledge.
Python --> SQLite --> ASP.NET C#
I am looking for an in-memory database application that does not have to write the data it receives to disk. Basically, I'll have a Python server which receives gaming UDP data, translates the data, and stores it in the in-memory database engine.
I want to stay away from writing to disk as it takes too long. The data is not important; if something goes wrong, it simply flushes and fills up with the next wave of data sent by players.
Next, another ASP.NET server must be able to connect to this in-memory database via TCP/IP at regular intervals, say once every second, or every 10 seconds. It has to pull this data, which will in turn update a website that displays "live" game data.
I'm looking at SQLite and wondering: is this the right tool for the job? Does anyone have any suggestions?
Thanks!!!
This sounds like a premature optimization (apologies if you've already done the profiling). What I would suggest is to go ahead and write the system in the simplest, cleanest way, but put a bit of abstraction around the database bits so they can easily be swapped out. Then profile it and find your bottleneck.
If it turns out it is the database, optimize the database in the usual way (indexes, query optimizations, etc.). If it's still too slow, most databases support an in-memory table format. Or you can create a RAM disk and put individual tables or the whole database on it.
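To illustrate the in-memory option, here is a minimal sketch using SQLite's :memory: database from .NET (via the Microsoft.Data.Sqlite package; the table is made up). Note that an in-memory SQLite database lives inside a single process, so by itself it won't give you the cross-process access described in the question.

using System;
using Microsoft.Data.Sqlite;

class InMemoryDemo
{
    static void Main()
    {
        // Everything lives in RAM and disappears when the connection closes.
        using (var connection = new SqliteConnection("Data Source=:memory:"))
        {
            connection.Open();

            var create = connection.CreateCommand();
            create.CommandText = "CREATE TABLE scores (player TEXT, points INTEGER)";
            create.ExecuteNonQuery();

            var insert = connection.CreateCommand();
            insert.CommandText = "INSERT INTO scores VALUES ('alice', 42), ('bob', 17)";
            insert.ExecuteNonQuery();

            var count = connection.CreateCommand();
            count.CommandText = "SELECT COUNT(*) FROM scores";
            Console.WriteLine("Rows held in memory: " + count.ExecuteScalar());
        }
    }
}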
Totally not my field, but I think Redis is along these lines.
Whether SQLite is applicable depends on your data complexity.
If you need to perform complex queries on relational data, then it might be a viable option. If your data is flat (i.e. not relational) and processed as a whole, then some Python-internal data structures might be more applicable.
Perhaps AppFabric would work for you?
http://msdn.microsoft.com/en-us/windowsserver/ee695849.aspx
SQLite doesn't allow remote "connections" as far as I know; it only supports being used as an in-process library. However, you could try MySQL which, while heavier, supports remote connections and does have in-memory tables.
See http://dev.mysql.com/doc/refman/5.5/en/memory-storage-engine.html
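From the .NET side it might look roughly like this (using Connector/NET, i.e. the MySql.Data package; the server, credentials, and table are made up):

using System;
using MySql.Data.MySqlClient;

class MemoryTableDemo
{
    static void Main()
    {
        var connectionString = "Server=localhost;Database=gamestats;Uid=app;Pwd=secret;";

        using (var connection = new MySqlConnection(connectionString))
        {
            connection.Open();

            // Rows in a MEMORY table are kept in RAM; the table definition
            // survives a server restart but the data does not.
            new MySqlCommand(
                "CREATE TABLE IF NOT EXISTS live_scores (player VARCHAR(32), points INT) ENGINE=MEMORY",
                connection).ExecuteNonQuery();

            new MySqlCommand(
                "INSERT INTO live_scores VALUES ('alice', 42)",
                connection).ExecuteNonQuery();

            var count = new MySqlCommand(
                "SELECT COUNT(*) FROM live_scores",
                connection).ExecuteScalar();
            Console.WriteLine("Rows held in memory: " + count);
        }
    }
}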
So I have a challenge to build a site that people online can use to interact with organizations: an ASP.NET MVC customer application.
One of the requirements is financial processing and accounting.
I'm very comfortable using SQL transactions and stored procedures to do this; i.e. CreateCustomer also creates an entity and an account record. We have a stored procedure for this that begins a transaction, creates some setup records we need, then commits. I'm not seeing a good way to do this with an ORM, and after reading some great blog articles I'm starting to wonder if I'm going down the wrong path.
Part of the complexity here is the data itself:
I'm querying x databases (one per existing customer) to get some of my data, though my app has its own data store as well. I need to query the x databases, run stored procedures on the x databases, and also work against my own data store.
I'm not seeing strong support for things like stored procedures (and thereby transactions), though some support does seem to be present.
Maybe I'm just trying to make my app a nail here, because the MVC hammer is so shiny. I'm plenty comfortable with raw ADO.NET of course, but I'm in love with the expressive feel of writing LINQ code in C# and I'd rather not give up on it.
Down to the question:
Is this a bad idea? Should I try to use LINQ / Entity Framework, or something like NHibernate, and stick with the ORM pattern, or should I trash it and use raw ADO.NET data access?
Edit: a note on scale; from a queries per second standpoint this app is not "huge". But, from a data complexity perspective, it does need to query against 50+ databases (all identical, or close to it) to read data from an external application and publish data back to that application. ORM feels right when dealing with "my" data store, but feels very wrong for accessing the data from the external application.
From a certain size (number of databases) up, you have to change the paradigm. Are you at that size?
When you deploy what is ultimately a distributed application and yet try to control it as an ordinary local application, you are going to run into a set of fundamental issues around availability, scalability, and correctness. If you use concepts like 'distributed transactions', 'linked servers', and 'ORM', you are down the wrong path. True distributed applications will use terms like 'message', 'queue', and 'service'. Terms like LINQ, EF, and NHibernate are all fine and good, but none will bring you anything beyond what a simple Transact-SQL SELECT statement brings. In other words, if a SELECT solves your issues, then the various client-side ORMs will work. If not, they won't add any miraculous value.
I recommend you go over the SQLCAT slides on High Performance Distributed Applications in Real World Deployments, which explain how a site like MySpace manages to read and write into a store of nearly 500 servers and thousands of databases.
Ultimately, what you need to internalize is this: one database can have 95% availability (uptime and acceptable service response time). Because the availabilities of independent components multiply, a system consisting of 10 databases each with 95% availability has about 59% availability. A system of 100 databases each with 99.5% availability has about 60% availability. 1000 databases with 99.95% availability (5 minutes of downtime per week) have about 60% availability. And this is for an ideal situation. In reality there is always a snowball effect caused by resource consumption (e.g. threads blocked trying to access an unavailable or slow resource) that makes things far worse.
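A quick sketch of that arithmetic (the outputs correspond to the percentages above):

using System;

class Availability
{
    static void Main()
    {
        // n databases, each with individual availability p, give p^n overall.
        Console.WriteLine(Math.Pow(0.95, 10));     // ~0.599  (10 databases at 95%)
        Console.WriteLine(Math.Pow(0.995, 100));   // ~0.606  (100 databases at 99.5%)
        Console.WriteLine(Math.Pow(0.9995, 1000)); // ~0.607  (1000 databases at 99.95%)
    }
}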
This means that one cannot write a large distributed system relying on synchronous, tightly coupled operations and transactions. It is simply impossible. You always rely on asynchronous operations (usually messaging and queues), which is something completely different from your run-of-the-mill database application.
Use the TransactionScope object available in System.Transactions.
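A minimal sketch of what that looks like (the connection string, tables, and SQL are hypothetical; if more than one connection enlists, the transaction gets promoted to a distributed MSDTC transaction):

using System;
using System.Data.SqlClient;
using System.Transactions;

class TransactionScopeDemo
{
    static void Main()
    {
        var connectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";

        using (var scope = new TransactionScope())
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open(); // enlists in the ambient transaction

                new SqlCommand("INSERT INTO dbo.Customers (Name) VALUES ('Contoso')", connection)
                    .ExecuteNonQuery();
                new SqlCommand("INSERT INTO dbo.Accounts (CustomerName) VALUES ('Contoso')", connection)
                    .ExecuteNonQuery();
            }

            // If Complete() is never reached (e.g. an exception above),
            // everything rolls back when the scope is disposed.
            scope.Complete();
        }
    }
}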
What I have chosen is to use Entity Framework to allow access to the application's main data store, and create a custom DAL for access to external application data and for access to stored procedures within the application.
Here's hoping Entity Framework 4.0 fixes the issue. For now, I'm using the concept listed here.
http://social.msdn.microsoft.com/forums/en-US/adodotnetentityframework/thread/44a0a7c2-7c1b-43bc-98e0-4d072b94b2ab/
Is it a best practice to use stored procedure for every single SQL call in .NET applications?
Is it encouraged for performance reasons and to reduce surface area for SQL injection attacks (in web applications)?
Stored procedures have a few advantages over parameterized queries:
When used exclusively, you can turn off CREATE, INSERT, SELECT, UPDATE, ALTER, DROP, DELETE, etc. access for your application accounts, and in this way add a small amount of security.
They provide a consistent, manageable interface when you have multiple applications using the same database.
Using procedures allows a DBA to manage and tune queries even after an application is deployed.
Deploying small changes and bug fixes is much simpler.
They also have a few disadvantages:
The number of procedures can quickly grow to the point where maintaining them is difficult, and current tools don't provide a simple method for adequate documentation.
Parameterized queries put the database code next to the place where it's used. Stored procedures keep it far separated, making finding related code more difficult.
Stored procedures are harder to version.
You'll need to weigh those costs/benefits for your system.
No.
If you send your queries to SQL Server as parameterized queries, SQL Server will cache the execution plan AND will sanitize your parameter inputs properly to avoid SQL injection attacks.
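For comparison, here is a minimal sketch of a parameterized query next to an equivalent stored procedure call (the connection string, table, and procedure names are made up):

using System;
using System.Data;
using System.Data.SqlClient;

class ParameterizedVsProc
{
    static void Main()
    {
        var connectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Parameterized query: the value travels as a typed parameter, never
            // as concatenated SQL text, so it can't inject, and the plan is cached.
            var query = new SqlCommand("SELECT Name FROM dbo.Customers WHERE Id = @id", connection);
            query.Parameters.AddWithValue("@id", 42);
            Console.WriteLine(query.ExecuteScalar());

            // Equivalent stored procedure call, for comparison.
            var proc = new SqlCommand("dbo.GetCustomerName", connection)
            {
                CommandType = CommandType.StoredProcedure
            };
            proc.Parameters.AddWithValue("@id", 42);
            Console.WriteLine(proc.ExecuteScalar());
        }
    }
}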
I prefer stored procs over inline SQL, because that way the SQL is in one consolidated place; however, I prefer using a tool like NHibernate which will auto-generate the SQL for me; then you have no SQL to worry about!
There is one more advantage: when it comes to tuning, especially per customer, it can easily be done with a stored procedure (by adding hints or even rewriting the code). With embedded SQL it is practically impossible.
It's just one way of doing things. Upsides include keeping all your SQL code in one place, procs being verified for syntax at creation time, and being able to set permissions on procs, which usually represent some kind of "action" and are well suited to a conceptual security model.
Downsides include massive numbers of procs for any medium or larger application, and all the housekeeping that comes with that.
My employer's product uses procs for everything, and I must say with the right practices in place it's quite bearable.