Oracle sequence cache aging too often - asp.net

My ASP.NET application uses a few sequences to generate table primary keys. The DB administrators have set the cache size to 20. The application is now under test and only a few records are added daily (say 4 per user test session).
I've found that each new test session always uses a fresh portion of the cache, as if the previous day's cached numbers had expired, so tens of keys are lost every day. I'd like to understand whether this is due to some mistake I might have made in my application (disposing of TableAdapters or whatever) or whether it is the usual behaviour. Are there programming best practices to take into account when handling Oracle sequences?
Since the application will not have to bear a heavy load (say 20-40 new records per day), I was thinking of setting a smaller cache size, or none at all.
Does resizing the sequence cache imply a reset of the current value?
Thank you in advance for any hints.

The answer from Justin Cave in this thread might be interesting for you:
http://forums.oracle.com/forums/thread.jspa?threadID=640623
In a nutshell: if the sequence is not accessed frequently enough, but you have a lot of "traffic" in the library cache, then the sequence may be aged out and removed from the cache. In that case the pre-allocated values are lost.
If this happens to you very frequently, it suggests that your sequence is not used very often.
I guess that reducing the cache size (or disabling it completely) will not have a noticeable impact on performance in your case, especially given your figure of 20-40 new records a day.
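If you want to check how the sequence is configured and how far the allocated numbers have drifted from what is actually in the table, a quick look at the data dictionary helps (USERS_SEQ is a hypothetical name, adjust it to your own sequence):

-- A minimal check against the data dictionary; USERS_SEQ is a hypothetical name.
SELECT sequence_name,
       cache_size,   -- how many values are pre-allocated in memory
       last_number   -- last number written to disk; with caching, the end of the current cache batch
FROM   user_sequences
WHERE  sequence_name = 'USERS_SEQ';

Comparing LAST_NUMBER with the highest key actually present in the table gives a rough idea of how many cached values are being thrown away.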

Oracle sequences are not gap-free. Reducing the cache size will reduce the gaps... but you will still have gaps.
The sequence is not associated with the table by the database, but by your code (via NEXTVAL in the insert, whether through a trigger, plain SQL, or a package API). On that note, you may use the same sequence across multiple tables (it is not like SQL Server's identity, which is tied to a column/table);
thus changing the sequence will have no impact on the indexes.
You would just need to make sure that, if you drop the sequence and recreate it, you 'reseed' it to the current value + 1 (e.g. create sequence seqer start with 125 nocache;).
But:
If your application requires a
gap-free set of numbers, then you
cannot use Oracle sequences. You must
serialize activities in the database
using your own developed code.
But be forewarned: you may increase disk I/O and possibly introduce transaction locking if you choose not to use sequences.
The sequence generator is useful in
multiuser environments for generating
unique numbers without the overhead of
disk I/O or transaction locking.
To reiterate a_horse_with_no_name's comments: what is the issue with gaps in the IDs?
Edit
Also have a look at the caching logic you should use, described here:
http://download.oracle.com/docs/cd/E11882_01/server.112/e17120/views002.htm#i1007824
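To illustrate the point that the sequence lives in your code rather than in the table definition, here is a minimal sketch with hypothetical names (an ORDERS table and an ORDERS_SEQ sequence); the direct assignment of NEXTVAL in the trigger assumes Oracle 11g or later:

-- Hypothetical objects; adapt names and columns to your own schema.
CREATE SEQUENCE orders_seq START WITH 1 CACHE 20;

CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  created  DATE DEFAULT SYSDATE
);

-- Either reference the sequence directly in the INSERT ...
INSERT INTO orders (order_id) VALUES (orders_seq.NEXTVAL);

-- ... or let a trigger populate the key so callers never mention it.
CREATE OR REPLACE TRIGGER orders_bi
BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
  IF :NEW.order_id IS NULL THEN
    :NEW.order_id := orders_seq.NEXTVAL;
  END IF;
END;
/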

If you are using the sequence for PKs and not to enforce some application logic then you shouldn't worry about gaps. However, if there is some application logic tied to sequential sequence values, you will have holes if you use sequence caching and do not have a busy system. Sequence cache values can be aged out of the library cache.
You say that your system is not very busy; in that case, alter your sequence to NOCACHE. You are taking a negligible performance hit to fix a logic issue, so you might as well.

As people have mentioned: gaps shouldn't be a problem, so if you require gap-free numbers you are doing something wrong (but I don't think that is what you need).
Reducing the cache should reduce the number of lost values, but it will also decrease the performance of the sequence, especially under concurrent access (which shouldn't be a problem in your use case).
Changing the sequence with the ALTER SEQUENCE statement (http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_2011.htm) does not reset the current/next value of the sequence.
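For the volumes described in the question, the change suggested above could be as simple as this (USERS_SEQ is a hypothetical name; only the caching behaviour changes, values already handed out stay consumed):

ALTER SEQUENCE users_seq NOCACHE;

-- or keep a small cache instead of disabling it entirely:
-- ALTER SEQUENCE users_seq CACHE 5;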

Related

How to defragment/consolidate partially filled sqlite pages in order to reclaim space?

I am designing a small server application with a local data store, and sqlite3 seems to be the way to manage the persistent data. But I am worried about malicious users who know the internal logic and might trick the server into creating (and subsequently deleting) lots of records, in a way that leaves a few valid records in each data page. The database size might explode quite soon.
The documentation and recommendations like https://blogs.gnome.org/jnelson/2015/01/06/sqlite-vacuum-and-auto_vacuum/ imply that even auto_vacuum=incremental would not help me in this scenario, because it is only effective for completely released pages, not for used pages with internal gaps (i.e. fragmentation).
Is there a good way to tell SQLite to consolidate such data on the fly?
A full VACUUM is not an option because of its long-lived global DB lock.
SQLite will automatically merge an almost-empty page with its neighbors, which helps reduce the kind of fragmentation you describe.
From an email from D Richard Hipp on the sqlite mailing list:
Once a sufficient number of rows are removed from a page, and the free space on that page gets to be a substantial fraction of the total space for the page, then the page is merged with adjacent pages, freeing up a whole page for reuse. But as doing this reorganization is expensive, it is deferred until a lot of free space accumulates on the page. (The exact thresholds for when a rebalance occurs are written down some place, but they do not come immediately to my mind, as the whole mechanism just works and we haven't touched it in about 15 years.)
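If you want to watch whether this merging actually keeps the file size under control, here is a small sketch of what can be monitored (the batch size below is illustrative only):

-- Total pages in the database file vs. pages that are completely free.
PRAGMA page_count;
PRAGMA freelist_count;

-- With auto_vacuum=incremental, completely freed pages can be handed back
-- to the OS in small batches, without the global lock of a full VACUUM.
PRAGMA incremental_vacuum(100);   -- release up to 100 free pages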

High write concurrency backend for storing large set/array based data?

The problem:
I have a web service that needs to check membership of a given string against a set of strings, where the number of elements in the set is under constant growth, potentially numbering in the hundreds of millions.
If the string is not a member of the set, it gets added to the set. The string size will be a constant 32 bytes. Only one set variable is required, no other variables need to be persisted.
This check is performed as part of a callback on a webhook, thus performance is critical.
While my use case fits a Bloom filter pretty much perfectly, I'm having trouble finding a solution for the persistent storage vs. I/O concurrency part of the problem.
Environment:
DigitalOcean/Linux/Python/Flask, but open to change if required
Possible Solutions:
redis, storing the variable in a set and then querying via SISMEMBER for a nice O(1) solution. This is what we are currently using, but this solution doesn't scale well with a large number of keys, given that everything must fit in memory, and it also has issues with write concurrency when traffic increases.
sqlite, with WAL mode turned on. concerned about lock contention when the server gets hit with a significant number of webhook requests (SQLITE_BUSY). Local server file doesn't scale across host machines.
postgres, seems like a nice middle ground solution, but might have to deal with lock contention here as well for write concurrency.
cassandra, given its focus on write performance. Overkill for storing a single column though?
custom bloom filter backend, not sure if something like this exists that provides the functionality of a bloom filter with a high i/o concurrency storage backend.
Thoughts?
The Redis solution can scale well with data sharding. You can set up several Redis instances (or use Redis Cluster), split your data into several parts, i.e. shards, and save each part in a different Redis instance.
When you want to check the membership of a given string, you send the SISMEMBER command to the corresponding Redis instance. Take this answer as an example of how to split data with hash functions.
Also, you can implement a Bloom filter with Redis (GETBIT and SETBIT). Just a reminder: a Bloom filter can report false positives.
First, you don't need to use SISMEMBER. Just do SADD systematically and test the returned value: if it's 0, the value was already in the set and so was not added. This way you easily reduce the number of requests to Redis.
Second, the description of your problem looks like a perfect match for HBase, which is made for storing very large data sets and querying them using Bloom filters. But you'll probably find it's overkill, just like Cassandra.
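If you do end up trying the Postgres option from your question, the same "add and test in one round trip" idea is available there as well; a minimal sketch with a hypothetical table name (ON CONFLICT needs PostgreSQL 9.5+):

-- "members" and its column are hypothetical; the value is the 32-byte string.
CREATE TABLE IF NOT EXISTS members (
    value CHAR(32) PRIMARY KEY
);

-- Insert-if-absent in one statement; the affected row count tells you whether
-- the value was new (1) or already present (0), much like testing SADD's result.
INSERT INTO members (value)
VALUES ('0123456789abcdef0123456789abcdef')
ON CONFLICT (value) DO NOTHING;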

Teradata DELETE ALL vs DROP+CREATE

I've been recently assigned on a project using Teradata.
I've been told to strictly use DROP+CREATE instead of DELETE ALL, because the latter "leaves some space allocated someway". This is counter-intuitive to me, and I think it's probably wrong. I searched the web for a comparison between the two methods, but I found nothing.
This only reinforces my belief that DELETE ALL doesn't suffer from the issue above.
However, if this is the case, I must prove it (both practically and theoretically).
So, my question is: is there a difference in space allocation between the two methods? If not, is there an official document (user guide, technical specification, whatever else) that proves it?
Thank you!
There's a discussion here: http://teradataforum.com/teradata/20120403_105705.htm about the very same subject (although it does not really answer the "leaves some space allocated someway" part). They actually recommend DELETE ALL but for other (performance) reasons:
I'll quote just in case the link goes dead:
"Delete all" will be quicker, although being practical there often isn't a lot of difference in the performance of them.
However, especially for a process that is run regularly (say a daily batch process) then I recommend the "delete all" approach. This will do less work as it only removes the data and leaves the definition in place. Remember that if you remove the definition then this requires accessing multiple dictionary tables, and of course you then have to access those same tables (typically) when you re-create the object.
Apart from the performance aspect, the downside of the drop/create approach is that every time you create an object Teradata inserts "default rows" into the AccessRights table, even if subsequent access to the object is controlled via Role security and/or database level security. As you may well know the AccessRights table can easily get large and very skewed. In my experience many sites have a process which cleans this table on a regular basis, removing redundant rows. If your (typically batch) processes regularly drop/create objects, then you're simply adding rows into the table which have previously been removed by a clean process, and which will be removed in the future by the same process. This all sounds like a complete waste of time to me.
Your impression is correct: you didn't find any reference to "DELETE leaves some space allocated" anywhere because it's simply wrong :-)
DELETE ALL is similar to a TRUNCATE in other DBMSes and in most cases uses fast-path processing.
First of all, you cannot do DROP/CREATE in one transaction in Teradata (and in Oracle there are other problems with doing DDL every day), so when ETL processes become complicated you might end up with dependencies where more important business processes depend on less important ones (for example, you might see the customers table empty just because the interest rates were not refreshed, or because a single minor column had a value exceeding its VARCHAR length).
My opinion: Use transactions and modular programming. In Teradata this means avoiding DDL where possible and using DELETE/UPDATE/MERGE/INSERT instead of DROP/CREATE.
We have a slightly different situation in Postgres where DDL statements are transactional.
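For completeness, a minimal sketch of the two approaches using a hypothetical staging table (stg_customers):

-- DELETE ... ALL keeps the table definition (and its AccessRights rows) in
-- place and, being unconditional, can use Teradata's fast-path processing.
DELETE FROM stg_customers ALL;

-- The DROP/CREATE alternative touches the dictionary tables on every run and
-- cannot be combined with the reload in a single transaction:
-- DROP TABLE stg_customers;
-- CREATE TABLE stg_customers ( ...column definitions... );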

Cache results from sql database, or query each time?

I'm generating pages based on an SQL query.
This is the query:
CREATE PROCEDURE sp_searchUsersByFirstLetter
    @searchQuery nvarchar(1)
AS
BEGIN
    SET NOCOUNT ON;

    SELECT UserName
    FROM Users
    JOIN aspnet_Users asp ON Users.UserId = asp.UserId
    WHERE (LoweredUserName LIKE @searchQuery + '%')
END
I can call this procedure for each letter in the alphabet, and get all the users that start with that letter. Then, I put these users into a list on one of my pages.
My question is this: would it be better to cache the list of users to my webserver, rather than query the database each time? Like this:
HttpRuntime.Cache.Insert("users", listOfUsersReturnedFromQuery, null, DateTime.Now.AddHours(1), System.Web.Caching.Cache.NoSlidingExpiration);
It's OK for me if the list of users is an hour out of date. Will this be more efficient than querying the database each time?
Using a cache is best reserved for situations where your query meets the following constraints:
The data is not time critical, i.e. make sure a cache hit won't break your application by causing your code to miss a recent update of the data.
The data isn't sequenced, i.e. A, B, C, D, E are cached, F is inserted by another user, your user inserts G and hits the cache, resulting in ABCDEG instead of ABCDEFG.
The data doesn't change much.
The data is queried and re-used frequently.
Size isn't really a factor unless it's going to really tax your RAM.
I have found that one of the best tables to cache is a settings table, where the data is practically static, gets queried on nearly every page request, and changes don't have to be immediate.
The best thing for you to do would be to test which queries are performed most often, then select those that tax the database server the most. Out of those, cache whatever you can afford to. You should also look at tweaking maximum cached object ages: if you're performing a query 100 times a second, you can cut that rate by 99% simply by caching the result for 1 second, which negates the update-delay problem in most practical situations.
If you have several servers, in-memory caching isn't so good, because it will consume memory on each server and in each w3wp process on every server.
It will also be hard to keep the data consistent.
I would advise choosing between:
a basic output cache (assuming you are using MVC, this is zero effort and a good improvement)
a DB cache using a smaller pre-calculated table that maps the input string to its possible results (see the sketch below)
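A sketch of what that pre-calculated table could look like, with hypothetical names; rebuild it on a schedule (e.g. hourly) instead of hitting the joined tables for every letter:

CREATE TABLE UsersByFirstLetter (
    FirstLetter nvarchar(1)   NOT NULL,
    UserName    nvarchar(256) NOT NULL
);

INSERT INTO UsersByFirstLetter (FirstLetter, UserName)
SELECT LEFT(asp.LoweredUserName, 1), u.UserName
FROM Users u
JOIN aspnet_Users asp ON u.UserId = asp.UserId;

-- Page lookups then become a cheap read of a narrow table:
-- SELECT UserName FROM UsersByFirstLetter WHERE FirstLetter = @searchQuery;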
It really depends. Do you bottleneck at your database server (I would hope that answer is no)? If you are hitting the database 26 times, that is nothing compared to what typically happens. You should be considering caching data in a Dataset or some other offline model if you are hitting the database hundreds of thousands of times.
So I would say, no. You should be fine with your round trips to the database.
But there is no replacement for testing. That'll tell you for sure.
Considering that each DB call is expensive in terms of network and DB load, I would prefer to avoid such extra operations and cache items even if they are requested only a few times per hour.
The only case I see against it is when the list of users amounts to many megabytes of memory.
Caching data and reading it back is fastest, but it also depends on the data size... if there is a large amount of data, it will cause performance issues.
So it mostly depends on your requirements.
I would suggest using paging, or a mixed mode where half of the users are loaded into the cache and the rest are loaded on demand...

Better performance to Query the DB or Cache small result sets?

Say I need to populate 4 or 5 dropdowns w/ items from a database. Each drop down will have < 15 items in it. These items almost never change.
Now I could query the DB each time the page is accessed or I could grab the values from a custom class that would check to see if they already exist in ASP.Net's cache and only if they don't query the DB to update the cache.
It's trivial for me to write, but I'm unsure whether the performance would be better or not. I think it would be (although probably nothing huge).
What do you think?
When dealing with performance issues you should always:
Do things the simplest way first (avoid premature optimisation)
Performance test your code against set performance goals (e.g. 200 ms response time under a load of N concurrent users)
Then, IF your code doesn't perform then profile your code to determine what is slow, and profile your proposed performance fixes to accurately measure what the real-world performance change will be.
Having said that then yes, what you are suggesting seems sensible (you would usually expect an in-memory cache to be quicker than a database), however it also depends on what data is being returned, what the memory load of your application is, how expensive the query is, what the query parameters are etc...
You should performance test your changes before and after to determine the actual effect of your changes (including things like memory load), and you should only really be doing things like this once you have identified that these dropdowns are the cause of an unacceptable performance problem.
That's what the System.Web.Helpers.WebCache class exists for.
IO is usually more expensive than memory operations (by orders of magnitude). Especially if your database is in another machine, then you would even be using network resources, and it will definitely be faster to just use the cache.
But indeed, optimize in the end when you have really identified it as a performance bottleneck by measuring.
Quick answer to your question:
Use the built in .Net cache.
Additional points to ponder over..
Preferably, retrieve all master data in a single database retrieval (think stored procedure and DataSet; see the sketch at the end of this answer), though I do not advocate the use of stored procs in all scenarios.
As you rightly said, ensure that your data access layer checks the cache before making a round trip to the database
Also, as your dropdown values do not change very often, remember to use a long expiry duration.
Finally, based on your page design you could also look at Fragment Caching (partial page caching: user controls) which could give you bigger benefits since now you neither access the data cache nor the database.
Performance:
Again, the performance depends more on the application's overall load than on these direct round trips for fetching the master data. Put simply, as Thomas suggested, use the cache class!
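As a sketch of the "single database retrieval" point above, one stored procedure can return all the dropdown lists in a single round trip and fill one DataSet (the table and column names below are hypothetical):

CREATE PROCEDURE GetDropdownLists
AS
BEGIN
    SET NOCOUNT ON;
    -- One result set per dropdown; each becomes a DataTable in the DataSet.
    SELECT CountryId, CountryName FROM Countries;
    SELECT StatusId,  StatusName  FROM Statuses;
    SELECT RoleId,    RoleName    FROM Roles;
END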
