AFAIK, Memcached does not support synchronization with a database (at least not SQL Server or Oracle). We are planning to use Memcached (it is free) with our OLTP database.
In some business processes we do heavy validations that require a lot of data from the database. We cannot keep a static copy of this data because we don't know whether it has been modified, so we fetch it every time, which slows the process down.
One possible solution could be
Write triggers on the database to create/update prefixed-postfixed (table-PK1-PK2-PK3-column) files whenever records change
Monitor these file changes using FileSystemWatcher and expire the key (table-PK1-PK2-PK3-column) so the next read gets updated data
Problem: There would be around 100,000 users using any combination of data for 10 hours, so we would end up with a lot of files, e.g. categ1-subcateg5-subcateg-78-data100, categ1-subcateg5-subcateg-78-data250, categ2-subcateg5-subcateg-78-data100, categ1-subcateg5-subcateg-33-data100, etc.
I am expecting at least 5 million files. Now it looks like a pathetic solution :(
Other possibilities are
Call a web service asynchronously from the trigger, passing the key to be expired.
Call an exe from the trigger without waiting for it to finish; the exe would then expire the key. (I have had some success with this approach on SQL Server using xp_cmdshell to call an exe; calling an exe from an Oracle trigger looks a bit difficult. See the sketch below.)
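For what it's worth, a minimal T-SQL sketch of that second approach might look like the following. The table, key format, and expire.exe path are hypothetical, xp_cmdshell has to be explicitly enabled, and the trigger naively assumes single-row updates:

```sql
-- Hypothetical trigger that shells out to an exe which expires the Memcached key.
-- Requires xp_cmdshell to be enabled via sp_configure, which has security implications.
CREATE TRIGGER trg_Data100_ExpireCache ON dbo.Data100
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @cmd VARCHAR(500);

    -- Simplistic: assumes a single-row update; multi-row changes would need a cursor or a queue table.
    -- "start" makes cmd return immediately, so the trigger does not block on the exe.
    SELECT @cmd = 'cmd /c start "" c:\tools\expire.exe categ1-subcateg5-'
                  + CAST(i.PK1 AS VARCHAR(20)) + '-data100'
    FROM inserted AS i;

    IF @cmd IS NOT NULL
        EXEC master..xp_cmdshell @cmd, no_output;
END
```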
Still sounds pathetic, doesn't it?
Any intelligent suggestions, please?
It's not clear (to me) whether the use of Memcached is mandatory. I would personally avoid it and instead use SqlDependency and OracleDependency. Both allow you to pass a db command and get notified when the data that the command would return changes.
If Memcached is mandatory, you can still use these two classes to trigger the invalidation.
MS SQL Server has a "Change Tracking" feature that may be of use to you. You enable change tracking on the database and configure which tables you wish to track. SQL Server then creates change records on every insert, update, and delete on a table and lets you query for the changes made since the last time you checked. This is very useful for syncing changes and is more efficient than using triggers. It's also easier to manage than building your own tracking tables. This has been a feature since SQL Server 2008.
How to: Use SQL Server Change Tracking
Change tracking only captures the primary keys of the tracked tables and lets you query which columns might have been modified. You can then join back to the tables on those keys to get the current data. If you want the data itself captured as well, you can use Change Data Capture, but it requires more overhead and at least SQL Server 2008 Enterprise Edition.
Change Data Capture
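To give a feel for the Change Tracking workflow, here is a minimal sketch (the database, table, and key names are placeholders, and this assumes SQL Server 2008 or later):

```sql
-- Enable change tracking at the database and table level (names are placeholders).
ALTER DATABASE MyDb SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Category ENABLE CHANGE_TRACKING
    WITH (TRACK_COLUMNS_UPDATED = ON);

-- Later: fetch the keys of rows changed since the sync version you stored last time.
DECLARE @last_sync BIGINT = 0;   -- persist this value between polls

SELECT ct.PK1, ct.SYS_CHANGE_OPERATION, c.*
FROM CHANGETABLE(CHANGES dbo.Category, @last_sync) AS ct
LEFT JOIN dbo.Category AS c ON c.PK1 = ct.PK1;   -- PK1 stands in for the table's primary key

-- Remember the current version for the next poll.
SELECT CHANGE_TRACKING_CURRENT_VERSION();
```

Each poll, you would expire (or refresh) only the cache keys whose primary keys show up in CHANGETABLE.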
I have no experience with Oracle, but I believe it has some tracking functionality as well. This article might get you started:
20 Using Oracle Streams to Record Table Changes
Related
I have a SQL Server (SQL Azure) table that is queried at a high rate but is updated only a few times a month.
I wonder what options I have for caching the result set on the application side so that it doesn't have to hit SQL Server all the time.
One option is to just [OutputCache] the action methods that return the views. You may even be able to get away with SQL Dependency caching, though I'm not sure whether that works with Azure.
Another option is to try implementing a second-level cache for EF.
Another option is to have an entirely different read model. This way, you wouldn't query against the table, but something else that is closer to IIS and/or faster than SQL Azure (like NoSQL or JSON from Azure cache).
I have been successfully using SQLite as a data store for my web applications, but now I am implementing a web site with mod_perl, and am running into database locking issues.
As expected, my entire web application is loaded by the Plack Apache handler (Plack::Handler::Apache2) when the web server is started. Well, the first db query creates a lock on the entire database, and any subsequent query that has to modify the db fails.
What is my way out? Can I use SQLite in a persistent web environment or not? Should I be looking for some other db store?
I am not a fan of MySQL and don't want to use it. I could potentially use Postgres, but I'd rather use something lightweight, and preferably SQL-based, as using key/value databases such as Tokyo Cabinet would require learning a whole new approach. I'd really rather use SQLite.
Keeping a statement handle open against the database can cause this issue. I have had problems where iterating over a result set during a long process causes the lock to stick around.
Try to fetch all the rows for the query and call $sth->finish() to clear the lock. You will use a little more memory, but you will avoid the locking.
If you know you are going to do this, you can make use of $sth->fetchall_arrayref() or $sth->fetchall_hashref().
Use Tokyo Cabinet's table database.
I was wondering what the easiest way is to see the total number of database queries from my ASP.NET (.NET 2.0) application.
My application makes heavy use of a SQL Server 2005 database because all data is dynamic, and everything goes through one connection string in web.config. Connection pooling is enabled there.
So I am wondering how many select statements are executed for a particular page I load in my browser.
I don't care whether I see that information from the .NET side or the DB side, as long as I see only connections to MY database, not all connections to that server, because I use a shared DB server and there are a lot of other databases on it.
The best way to do this is to set up a profiler on your database and then make a single request to your ASP.NET application. The profiler will aggregate any data you wish and you will be able to use that data to determine what queries were sent to SQL Server from your application.
The SQL Server Profiler will list all actions performed on your DB. If you use a different db login name for your project (probably a really good idea if you aren't already), you can filter so it only shows actions from your login (see Events Selection, Column Filters, then Login Name).
Use SQL Profiler. You can configure it to filter by the database you want and to just show select statements.
If you have some sort of database layer in your code, you could modify it to write out a log message every time you run a select statement. Then just load the page once and count the number of log statements. This may or may not work, depending on how your code is structured, but it's an option.
Edit: I misread the question. I thought you had multiple clients connecting to the same database, not the same database server. In that case, a profiler probably is the best choice.
Do you have access to SQL Server Profiler? You can set up traces to monitor this sort of thing by loading a page and looking at the effects in the profiler.
Just be aware that Profiler can affect performance, so it is best to do this on dev.
I need to write a web application using SQL Server 2005, asp.net, and ado.net. Much of the user data stored in this application must be encrypted (read HIPAA).
In the past, for projects that required encryption, I encrypted/decrypted in the application code. However, that was generally for passwords or credit card information, so only a handful of columns in a couple of tables. For this application, far more columns in several tables need to be encrypted, so I suspect pushing the encryption responsibilities into the data layer will perform better, especially given SQL Server 2005's native support for several encryption types. (I could be convinced otherwise if anyone has real, empirical evidence.)
I've consulted BOL, and I'm fairly adept at using Google, so I don't want links to online articles or MSDN documentation (it's likely I've already read them).
One approach I've wrapped my head around so far is to use a symmetric key which is opened using a certificate.
So the one-time setup steps (performed by a DBA, in theory) are as follows; a T-SQL sketch appears after the list:
Create a Master Key
Back up the Master Key to a file, burn it to CD, and store it off site.
Open the Master Key and create a certificate.
Back up the certificate to a file, burn it to CD, and store it off site.
Create the symmetric key with the encryption algorithm of your choice using the certificate.
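A hedged T-SQL sketch of those steps; the key, certificate, and database names, passwords, and file paths are all placeholders:

```sql
-- One-time setup, run by a DBA. All names, passwords, and paths below are placeholders.
USE MyAppDb;

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword!1';

BACKUP MASTER KEY TO FILE = 'c:\backup\MyAppDb_master.key'
    ENCRYPTION BY PASSWORD = 'AnotherStrongPassword!1';

CREATE CERTIFICATE MyAppCert WITH SUBJECT = 'Protects the symmetric key for encrypted columns';

BACKUP CERTIFICATE MyAppCert TO FILE = 'c:\backup\MyAppCert.cer'
    WITH PRIVATE KEY (FILE = 'c:\backup\MyAppCert.pvk',
                      ENCRYPTION BY PASSWORD = 'YetAnotherStrongPassword!1');

CREATE SYMMETRIC KEY MyAppSymKey
    WITH ALGORITHM = AES_256          -- any supported algorithm
    ENCRYPTION BY CERTIFICATE MyAppCert;
```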
Then, any time a stored procedure (or a human user via Management Studio) needs to access encrypted data, you have to first open the symmetric key, execute any T-SQL statements or batches, and then close the symmetric key.
As far as the ASP.NET application is concerned (in my case, the application code's data access layer), the data encryption is then entirely transparent.
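In that pattern, a stored procedure reading encrypted data might look roughly like this (the table, columns, and varchar sizes are placeholders; the key and certificate names carry over from the setup sketch above):

```sql
-- Hypothetical sproc: open the key, decrypt, close the key.
CREATE PROCEDURE dbo.GetPatient
    @PatientId INT
AS
BEGIN
    SET NOCOUNT ON;

    OPEN SYMMETRIC KEY MyAppSymKey DECRYPTION BY CERTIFICATE MyAppCert;

    SELECT PatientId,
           CONVERT(VARCHAR(100), DECRYPTBYKEY(SSN_Encrypted))  AS SSN,
           CONVERT(VARCHAR(200), DECRYPTBYKEY(Name_Encrypted)) AS PatientName
    FROM dbo.Patient
    WHERE PatientId = @PatientId;

    CLOSE SYMMETRIC KEY MyAppSymKey;
END
```

Writes work the same way, except you use ENCRYPTBYKEY(KEY_GUID('MyAppSymKey'), @value) in the INSERT/UPDATE.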
So my questions are:
Do I want to open the key, execute T-SQL statements/batches, and then close the symmetric key all within the sproc? The danger I see is: what if something goes wrong with the T-SQL execution and the sproc never reaches the statement that closes the key? I assume this means the key will remain open until SQL Server kills the SPID the sproc executed on.
Should I instead consider making three database calls for any given procedure I need to execute (only when encryption is necessary)? One database call to open the key, a second call to execute the sproc, and a third call to close the key. (Each call would be wrapped in its own try/catch block to maximize the odds that an open key is ultimately closed.)
Are there any considerations if I need to use client-side transactions (meaning my code is the client and initiates a transaction, executes several sprocs, and then commits the transaction, assuming success)?
1) Look into using TRY..CATCH in SQL Server 2005. Unfortunately there is no FINALLY, so you'll have to handle both the success and error cases individually (there's a sketch after these points).
2) Not necessary if (1) handles the cleanup.
3) There isn't really a difference between client and server transactions with SQL Server. Connection.BeginTransaction() more or less executes "BEGIN TRANSACTION" on the server (and System.Transactions/TransactionScope does the same, until it's promoted to a distributed transaction). As for opening/closing the key multiple times inside a transaction, I don't know of any issues to be aware of.
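A sketch of point 1; since there is no FINALLY, the CATCH block has to close the key on the error path as well (names carry over from the hypothetical setup above):

```sql
BEGIN TRY
    OPEN SYMMETRIC KEY MyAppSymKey DECRYPTION BY CERTIFICATE MyAppCert;

    -- ... statements that use ENCRYPTBYKEY / DECRYPTBYKEY ...

    CLOSE SYMMETRIC KEY MyAppSymKey;   -- success path
END TRY
BEGIN CATCH
    -- Error path: make sure the key does not stay open on this connection.
    IF EXISTS (SELECT 1 FROM sys.openkeys WHERE key_name = 'MyAppSymKey')
        CLOSE SYMMETRIC KEY MyAppSymKey;

    -- Re-raise something meaningful to the caller (SQL Server 2005 has no THROW).
    DECLARE @msg NVARCHAR(2048);
    SELECT @msg = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH
```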
I'm a big fan of option 3.
Pretend for a minute you were going to set up transaction infrastructure anyway, where:
Whenever a call to the datastore was about to be made, a transaction was created if one hadn't already been started.
If a transaction is already in place, calls to the datastore hook into that transaction. This is often useful for business rules that are raised by save/going-to-the-database events. E.g., if you had a rule that whenever you sold a widget you needed to update a WidgetAudit table, you'd probably want to wrap the widget-audit insert in the same transaction as the call telling the datastore a widget has been sold.
Whenever the original caller to the datastore (from step 1) is finished, it commits or rolls back the transaction, which affects all the database actions that happened during its call (using a try/catch/finally).
Once this type of transaction handling is in place, it becomes simple to tack on an "open key" at the beginning (when the transaction opens) and a "close key" at the end (just before the transaction ends). Making "calls" to the datastore isn't nearly as expensive as opening a connection to the database; it's really things like SqlConnection.Open() that burn resources (even if ADO.NET is pooling them for you).
If you want an example of this kind of code, I would consider looking at NetTiers. It has quite an elegant solution for the transaction handling we just described (assuming you don't already have something in mind).
Just 2 cents. Good luck.
You can use @@ERROR to see if any errors occurred during the call to a sproc in SQL.
No, too complicated.
You can, but I prefer to use transactions in SQL Server itself.
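For the first point, a quick sketch of checking @@ERROR after calling a sproc (dbo.GetPatient and the key name are the hypothetical examples from earlier in this thread):

```sql
DECLARE @rc INT;
EXEC @rc = dbo.GetPatient @PatientId = 42;

IF @@ERROR <> 0 OR @rc <> 0
BEGIN
    -- Something went wrong inside the sproc; make sure the key is not left open.
    IF EXISTS (SELECT 1 FROM sys.openkeys WHERE key_name = 'MyAppSymKey')
        CLOSE SYMMETRIC KEY MyAppSymKey;
END
```

Note that @@ERROR is reset by every statement, so it has to be checked (or copied into a variable) immediately after the EXEC.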
I'm looking for a good way to visualize ASP.NET session state data stored in SQL server, preferably without creating a throwaway .aspx page. Is there a good way to get a list of the keys (and serialized data, if possible) directly from SQL server?
Ideally, I'd like to run some T-SQL commands directly against the database to get a list of session keys that have been stored for a given session ID. It would be nice to see the serialized data for each key as well.
Can you elaborate slightly? Is there no reference to an HttpContext available (you can use this from backend code as well, FYI), which prevents you from utilizing the built-in serialization and keys dictionary?
EDIT, in response to your update: I believe the ASPState database creates and destroys temporary tables as needed; it does not have permanent tables you can query. Take a look at the stored procedures and you should find one along the lines of "TempGetItem"; you can either use this sproc directly or examine its source for more insight.
When you run an ASP.NET application with SQL Server session mode, it creates two tables, dbo.ASPStateTempApplications and dbo.ASPStateTempSessions. You can find your application in the first table and use it to query open sessions in the second. The ASPStateTempSessions table stores the session payload in two columns, SessionItemShort and SessionItemLong. All session information is binary, so you need to know the object types stored in the session if you want to deserialize them and view the contents.
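For example, against the default ASPState database (the exact table and column names can vary slightly between ASP.NET versions and session-state install modes, so treat this as a sketch):

```sql
-- Find your application's row (the AppName filter is a placeholder).
SELECT AppId, AppName
FROM dbo.ASPStateTempApplications
WHERE AppName LIKE '%myapp%';

-- List live sessions. The SessionId column is the ASP.NET session id with the
-- application id appended in hex, which is how sessions tie back to an application.
SELECT SessionId, Created, Expires,
       DATALENGTH(SessionItemShort) AS ShortBytes,
       DATALENGTH(SessionItemLong)  AS LongBytes
FROM dbo.ASPStateTempSessions
ORDER BY Expires DESC;
```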
I have tried this recently and it works fine. In fact, for a complex application it is worth having some tools to view and parse session data to make sure you don't store unwanted objects and leave them in the database for long; that has the potential to slow things down.