I have been successfully using SQLite as a data store for my web applications, but now I am implementing a web site with mod_perl, and am running into database locking issues.
As expected, my entire web application is loaded by the Plack Apache handler (Plack::Handler::Apache2) when the web server is started. Well, the first db query creates a lock on the entire database, and any subsequent query that has to modify the db fails.
What is my way out? Can I use SQLite in a persistent web environment or not? Should I be looking for some other db store?
I am not a fan of MySQL, and don't want to use it. I could potentially use PostgreSQL, but I'd rather use something lightweight, and preferably SQL-based, as using key/value databases such as Tokyo Cabinet would require learning a whole new approach. I'd really rather use SQLite.
If you have an open handle to the database, it can cause this issue. I have had problems where iterating over a result set during a long process caused the lock to stick around.
Try to fetch all the rows for the query and call $sth->finish() to clear up the lock. You will use a little more memory, but you will avoid the locking.
Since you know you are going to do this, you can make use of $sth->fetchall_arrayref() or $sth->fetchall_hashref().
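The thread is about Perl's DBI, but purely as an illustration, here is the same "fetch everything, then release the statement" pattern with Python's sqlite3 module (the file, table and column names below are made up):

    # Illustration only: fetch the whole result set, then close the cursor so
    # the statement (and the read lock it holds on the SQLite file) goes away.
    import sqlite3

    conn = sqlite3.connect("app.db")
    cur = conn.cursor()
    cur.execute("SELECT id, name FROM users")

    rows = cur.fetchall()   # pull everything into memory
    cur.close()             # release the statement and its lock

    for user_id, name in rows:
        print(user_id, name)   # iterate the in-memory copy; no lock is held now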
Use Tokyo Cabinet's table database.
AFAIK, Memcached does not support synchronization with a database (at least not SQL Server or Oracle). We are planning to use Memcached (it is free) with our OLTP database.
In some business processes we do heavy validations which require a lot of data from the database. We cannot keep a static copy of this data because we don't know whether it has been modified, so we fetch it every time, which slows the process down.
One possible solution could be:
Write triggers on the database to create/update files named with a prefixed/postfixed key (table-PK1-PK2-PK3-column) whenever records change
Monitor these file changes using FileSystemWatcher and expire the key (table-PK1-PK2-PK3-column) so the next read fetches updated data
Problem: there would be around 100,000 users using any combination of data for 10 hours, so we would end up with a lot of files, e.g. categ1-subcateg5-subcateg-78-data100, categ1-subcateg5-subcateg-78-data250, categ2-subcateg5-subcateg-78-data100, categ1-subcateg5-subcateg-33-data100, etc.
I am expecting at least 5 million files, so this now looks like a pathetic solution :(
Other possibilities are:
call a web service asynchronously from the trigger, passing the key to be expired
call an exe from the trigger without waiting for it to finish, and have the exe expire the key (I have had some success with this approach on SQL Server, using xp_cmdshell to call an exe; calling an exe from an Oracle trigger looks a bit difficult)
Still sounds pathetic, doesn't it?
Any intelligent suggestions, please?
It's not clear (to me) whether the use of Memcached is mandatory or not. I would personally avoid it and instead use SqlDependency and OracleDependency. Both allow you to pass a db command and get notified when the data that the command would return changes.
If Memcached is mandatory, you can still use these two classes to trigger the invalidation.
MS SQL Server has "Change Tracking" features that maybe be of use to you. You enable the database for change tracking and configure which tables you wish to track. SQL Server then creates change records on every update, insert, delete on a table and then lets you query for changes to records that have been made since the last time you checked. This is very useful for syncing changes and is more efficient than using triggers. It's also easier to manage than making your own tracking tables. This has been a feature since SQL Server 2005.
How to: Use SQL Server Change Tracking
Change tracking only captures the primary keys of the tables and lets you query which columns might have been modified. You can then join those keys back to the tables to get the current data. If you want it to capture the data as well, you can use Change Data Capture, but that requires more overhead and at least SQL Server 2008 Enterprise edition.
Change Data Capture
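Purely as a sketch of the polling flow (shown here in Python via pyodbc for brevity; the T-SQL is what matters, and the table, column and connection details such as dbo.Orders/OrderId are placeholders, not your schema):

    # Rough sketch of polling SQL Server Change Tracking.
    # One-time setup (run once, with appropriate permissions):
    #   ALTER DATABASE MyDb SET CHANGE_TRACKING = ON
    #       (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
    #   ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING;
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
        "DATABASE=MyDb;Trusted_Connection=yes"
    )
    cur = conn.cursor()

    last_version = 0  # persist this between polls

    # Ask only for rows changed since the version we last synced from.
    # (The version is coerced to int, so interpolating it into the SQL is safe.)
    cur.execute(
        "SELECT ct.OrderId, ct.SYS_CHANGE_OPERATION "
        f"FROM CHANGETABLE(CHANGES dbo.Orders, {int(last_version)}) AS ct"
    )
    for order_id, operation in cur.fetchall():
        print(order_id, operation)  # 'I' = insert, 'U' = update, 'D' = delete

    # Remember the current version as the baseline for the next poll.
    cur.execute("SELECT CHANGE_TRACKING_CURRENT_VERSION()")
    last_version = cur.fetchone()[0]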
I have no experience with Oracle, but I believe it has some tracking functionality as well. This article might get you started:
20 Using Oracle Streams to Record Table Changes
I have a SQL Server (SQL Azure) table that is queried at a high rate but gets updated only a few times a month.
I wonder what options I have for caching the result set on the application side so that it does not have to hit SQL Server all the time.
One option is to just [OutputCache] the action methods which return the views. You may even be able to get away with SQL Dependency caching, though I'm not sure whether that works with Azure.
Another option is to try implementing a second-level cache for EF.
Another option is to have an entirely different read model. This way, you wouldn't query against the table, but something else that is closer to IIS and/or faster than SQL Azure (like NoSQL or JSON from Azure cache).
When working with CRUD operations in Flex, I want to know how to open the database connection only once for all update and delete operations. How can I do this?
As Flextras mentioned in his comment above, Flex doesn't access the database directly. You'll have to make a RemoteObject call to another layer that will perform the actual CRUD operations.
That being said, unless you want to perform all of your CRUD operations in one giant batch, I would not recommend keeping the database connection open just waiting for something to do. That's generally considered poor practice. I would recommend opening and closing the connection to your database as needed when performing each operation.
It depends on the server-side technology if you aren't talking about a local SQLite database.
PHP has mysql_pconnect() for a persistent connection, for example.
Python --> SQLite --> ASP.NET C#
I am looking for an in-memory database application that does not have to write the data it receives to disk. Basically, I'll have a Python server which receives gaming UDP data, translates it, and stores it in the in-memory database engine.
I want to stay away from writing to disk as it takes too long. The data is not important; if something goes wrong, it simply flushes and fills up with the next wave of data sent by players.
Next, another ASP.NET server must be able to connect to this in-memory database via TCP/IP at regular intervals, say once every second, or every 10 seconds. It has to pull this data, which will in turn be shown on a website that displays "live" game data.
I'm looking at SQLite and wondering: is this the right tool for the job? Anyone have any suggestions?
Thanks!!!
This sounds like a premature optimization (apologies if you've already done the profiling). What I would suggest is to go ahead and write the system in the simplest, cleanest way, but put a bit of abstraction around the database bits so they can easily be swapped out. Then profile it and find your bottleneck.
If it turns out it is the database, optimize the database in the usual way (indexes, query optimizations, etc.). If it's still too slow, most databases support an in-memory table format. Or you can mount a RAM disk and put individual tables or the whole database on it.
Totally not my field, but I think Redis is along these lines.
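For what it's worth, the writing side of that idea is only a few lines with the redis-py client (the key names, fields and TTL below are made up):

    # Small sketch of the Redis approach on the Python/UDP side, using redis-py.
    import json
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def store_player_update(player_id, packet):
        # Keep only the latest state per player; the TTL lets stale entries
        # disappear if a player stops sending packets.
        r.set(f"player:{player_id}", json.dumps(packet), ex=30)

    store_player_update(42, {"x": 10.5, "y": 3.2, "score": 1200})

The ASP.NET side could then read the same keys over TCP with any .NET Redis client at whatever polling interval it likes.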
Whether SQLite is a good fit depends on your data complexity.
If you need to perform complex queries on relational data, then it might be a viable option. If your data is flat (i.e. not relational) and processed as a whole, then some Python-internal data structures might be applicable.
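For example, the in-memory flavour of SQLite is trivial to use from Python (the schema below is invented), with the caveat that a :memory: database lives inside that one process, so the ASP.NET server could not connect to it remotely:

    # Minimal sketch of an in-memory SQLite store on the Python side.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE players (id INTEGER PRIMARY KEY, x REAL, y REAL, score INTEGER)"
    )

    def record_packet(player_id, x, y, score):
        # Overwrite the player's previous state with the latest packet.
        conn.execute(
            "INSERT OR REPLACE INTO players (id, x, y, score) VALUES (?, ?, ?, ?)",
            (player_id, x, y, score),
        )
        conn.commit()

    record_packet(42, 10.5, 3.2, 1200)
    print(conn.execute("SELECT * FROM players").fetchall())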
Perhaps AppFabric would work for you?
http://msdn.microsoft.com/en-us/windowsserver/ee695849.aspx
SQLite doesn't allow remote "connections" as far as I know; it only supports being invoked as an in-process library. However, you could try MySQL which, while heavier, supports remote connections and does have in-memory tables.
See http://dev.mysql.com/doc/refman/5.5/en/memory-storage-engine.html
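As a rough sketch (using the pymysql driver; the credentials, database and table names are placeholders), the Python server could write into a MEMORY table that the ASP.NET side then reads over a normal MySQL connection:

    # Rough sketch of writing to a MySQL MEMORY table from the Python side.
    import pymysql

    conn = pymysql.connect(host="localhost", user="game", password="secret",
                           database="gamestats")
    with conn.cursor() as cur:
        # The MEMORY engine keeps the table's rows in RAM only.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS live_players (
                id INT PRIMARY KEY,
                x FLOAT, y FLOAT, score INT
            ) ENGINE=MEMORY
        """)
        cur.execute(
            "REPLACE INTO live_players (id, x, y, score) VALUES (%s, %s, %s, %s)",
            (42, 10.5, 3.2, 1200),
        )
    conn.commit()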
I was wondering what the easiest way is to see the total number of database queries from my ASP.NET (.NET 2.0) application.
My application heavily uses a SQL Server 2005 database because all the data is dynamic, and everything goes through one connection string in web.config. Connection pooling is enabled there.
So, I am wondering how many select statements are executed for a particular page I load in my browser.
I don't care whether I see that information from the .NET side or from the db side, as long as I can see only connections to MY database, not all connections to that db server, because I use a shared db server and there are a lot of other databases on it.
The best way to do this is to set up a profiler on your database and then make a single request to your ASP.NET application. The profiler will aggregate any data you wish and you will be able to use that data to determine what queries were sent to SQL Server from your application.
The SQL Server Profiler will list all actions performed on your DB. If you use a different db login name for your project (probably a really good idea if you are not already), you can filter so it only shows actions from your login (see Events Selection, Column Filters, then Login Name).
Use SQL Profiler. You can configure it to filter by the database you want and to just show select statements.
If you have some sort of database layer in your code, you could modify it to write out a log message every time you run a select statement. Then just load the page once and count the number of log statements. This may or may not work, depending on how your code is structured, but it's an option.
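To make that concrete, a counting wrapper might look roughly like this (sketched in Python with an in-memory SQLite table purely for brevity; the same pattern applies to an ADO.NET data layer):

    # Route every query through one helper so each statement gets logged and counted.
    import logging
    import sqlite3

    logging.basicConfig(level=logging.INFO)
    query_count = 0

    def run_query(conn, sql, params=()):
        global query_count
        query_count += 1
        logging.info("query #%d: %s", query_count, sql)
        cur = conn.cursor()
        cur.execute(sql, params)
        return cur.fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (n INTEGER)")
    run_query(conn, "SELECT n FROM t")
    print("queries for this 'page load':", query_count)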
Edit: I misread the question. I thought you had multiple clients connecting to the same database, not the same database server. In that case, a profiler probably is the best choice.
Do you have access to SQL Server Profiler? You can set up traces to monitor this sort of thing by loading a page and looking at the effects in the profiler.
Just be aware that Profiler can affect performance, so it is best to do this on dev.