Oracle materialized view: will table DML performance be affected? - oracle11g

Maybe a dumb question. I just need a quick estimate.
If I create a materialized view on TableX, will the performance of INSERT INTO TableX, UPDATE TableX, DELETE FROM TableX, etc. degrade, or will these DML statements be unaffected?
I would like to improve the performance of some SELECTs, but it would be a deal breaker if DML suffered even a little as a result.

It will depend on how your materialized view is configured to refresh, and how often. Any of the refresh options will have some effect on the system, as you're introducing at least some new load, so it's a question of which form of load affects your DML the least.
Fast refreshes shouldn't have any effect (at least none that anyone would notice), but test to make sure as there is a tiny bit of overhead associated with updating the materialized view logs as DML is run. Use these if possible, especially if the parent table has a lot of data or processes a lot of DML transactions. Fast refreshes generally have the lowest and most predictable overall resource impact on your whole system.
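For illustration, a minimal sketch of a fast-refresh setup, with hypothetical table and column names (an aggregate MV also needs the COUNT columns below to stay fast-refreshable):

    -- hypothetical names; the MV log must capture the columns the MV references
    CREATE MATERIALIZED VIEW LOG ON tablex
      WITH ROWID, SEQUENCE (category, amount)
      INCLUDING NEW VALUES;

    -- fast-refreshable aggregate MV; COUNT(*) and COUNT(amount) are needed
    -- so that SUM(amount) can be maintained incrementally
    CREATE MATERIALIZED VIEW mv_tablex_sum
      REFRESH FAST ON DEMAND
    AS
      SELECT category,
             COUNT(*)      AS cnt,
             COUNT(amount) AS cnt_amount,
             SUM(amount)   AS total_amount
      FROM   tablex
      GROUP  BY category;

DBMS_MVIEW.EXPLAIN_MVIEW will tell you whether a given MV is actually eligible for fast refresh.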
Complete refreshes shouldn't have any effect either, unless (like any big query) they consume too many compute resources and affect the whole system. They won't add any overhead to the actual DML. Use these if you can't use fast refreshes for some reason, but try to limit how often you refresh if there's a lot of data or high compute requirements for the refresh itself. Too many complete refreshes can drive up your overall transaction rate and redo log rollover rate more than fast refreshes will.
On commit refreshes will absolutely have an effect on DML, as the compute and I/O required to update the MV is added directly to the DML transaction on the parent table. Don't do this unless you have very strict real-time update requirements on the MV, and your source data doesn't change much.
It's a bit counter-intuitive, but if you have fast refreshes it actually makes sense to run them as often as possible. How often depends on how fast your source data changes, how close to real-time you need your MV data to be, and how resource-intensive your MV query is. A higher refresh frequency keeps the amount of change processed by any single refresh as small as possible, which helps keep refreshes from spiking compute resource usage.
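As a sketch of how frequent fast refreshes might be scheduled, one option is a scheduler job calling DBMS_MVIEW.REFRESH; the job name, MV name, and five-minute interval below are placeholders:

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'REFRESH_MV_TABLEX_SUM',   -- placeholder job name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''MV_TABLEX_SUM'', ''F''); END;',
        repeat_interval => 'FREQ=MINUTELY; INTERVAL=5',   -- every 5 minutes
        enabled         => TRUE);
    END;
    /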

Related

To speed up data refresh, should I create a view or use query folding?

I have a 5 GB SQLite database which gets updated a few times a day and is used to refresh a Power BI dashboard. While it was below 1 GB I could refresh the dashboard in under a minute, but now it takes around 20 minutes.
Should I create views so merges, joins, etc. are made in the view instead of loading the table itself and using Power Query to perform data manipulation?
Should I use incremental refresh? Is it possible in SQLite?
I would implement views on the database side in this case. In my eyes this is the benefit of having control of the DB yourself: pushing these data transforms to the DB instead of tinkering in Power Query is a major win.
Once the views are set up, work on incremental refresh if the SQLite connector supports it; since it isn't the "regular" SQL Database connector, it may not, according to the official documentation.
5GB is tiny in size and your refreshes should not be overly long. How complicated are your queries and what kind of transformations are you doing?
I would first look at making sure you're not breaking query folding. Seemingly simple steps do not always support folding but by rearranging them you can ensure folding happens and have a drastic effect on refresh times. Check at what stage folding is breaking and see if you can move things around.
Next, I'd look at moving your transformations upstream to a view (Roche's maxim).
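For illustration, a minimal sketch of pushing a merge into a SQLite view (the table and column names are made up):

    -- the join and filter happen in the database instead of in Power Query
    CREATE VIEW v_sales_report AS
    SELECT s.sale_id,
           s.sale_date,
           s.amount,
           c.customer_name,
           c.region
    FROM   sales s
    JOIN   customers c ON c.customer_id = s.customer_id
    WHERE  s.sale_date >= '2020-01-01';

Power BI then loads v_sales_report directly, and the transformation steps in Power Query shrink to almost nothing.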
Incremental refresh will also work, so it is really up to you to pick from the available options.

How to defragment/consolidate partially filled sqlite pages in order to reclaim space?

I am designing a little server application with a local data store, and sqlite3 seems to be the way to manage the persistent data. But I am worried about malicious users who know the internal logic and might trick the server into creating (and subsequently deleting) lots of records, in a way that leaves only a few valid records in each data page. The database size might explode quite soon.
Following the documentation and recommendations like https://blogs.gnome.org/jnelson/2015/01/06/sqlite-vacuum-and-auto_vacuum/ implies that even auto_vacuum=incremental would not help me in this scenario, because it is only effective for fully released pages, not for used pages with internal gaps (i.e. fragmentation).
Is there a good way to tell sqlite to consolidate such data on-the-fly?
A full VACUUM is not an option due to its long-lived global DB lock.
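For reference, this is the mechanism I am referring to; it only returns fully freed pages to the filesystem and does not compact partially filled ones:

    -- auto_vacuum must be set before the first table is created,
    -- or the change requires a one-off VACUUM to rebuild the file
    PRAGMA auto_vacuum = INCREMENTAL;
    VACUUM;

    -- later, on demand: release up to 100 pages from the freelist back to the OS
    PRAGMA incremental_vacuum(100);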
SQLite will automatically merge an almost-empty page with its neighbors to help reduce the kind of fragmentation you describe.
From an email by D. Richard Hipp on the SQLite mailing list:
Once a sufficient number of rows are removed from a page, and the free space on that page gets to be a substantial fraction of the total space for the page, then the page is merged with adjacent pages, freeing up a whole page for reuse. But as doing this reorganization is expensive, it is deferred until a lot of free space accumulates on the page. (The exact thresholds for when a rebalance occurs are written down some place, but they do not come immediately to my mind, as the whole mechanism just works and we haven't touched it in about 15 years.)

A fast way to store page views and when to save them in the database

I want to implement a views counter like most forums, Youtube and several others have. So every time a user reads an article, that is stored and remembered. I also want to know who looked at the article.
My question is: How do you implement this efficiently? What is the best practice?
One way would be to call a stored procedure for every view, but that would result in a lot of unneeded calls to the database.
Another way would be to store this to some global application object, and then store in DB every 5 minutes or so (and can you even do that in a good way?)
What's the best way to do this?
Database operations are surprisingly cheap and really are not worth worrying about. If a DB operation were even marginally expensive, you could always delegate the blocking operation to a new thread, freeing up your page-generation thread (you can trivially do this for UPDATE and INSERT operations that return nothing from the database; they are inconsequential).
Sprocs aren't really in fashion right now: the performance advantage they might have had from pre-computed execution plans is almost eliminated because modern servers cache plans from all previous queries, and for trivial SELECTs, INSERTs, and UPDATEs you begin to suffer from increased code complexity. There's nothing wrong with inline SQL commands now.
Anyway, back on topic and in summary: your assumptions are wrong. There is nothing wrong with running UPDATE Pages SET ViewCount = ViewCount + 1 WHERE PageId = #pageId on every page view. Nor is there anything wrong with this: INSERT INTO UserPageviews (UserId, PageId, DateTime) VALUES ( #userId, #pageId, NOW() ). Both operations are very cheap and will execute in under 2-3 milliseconds even on an old database server.
Another way would be to store this to some global application object, and then store in DB every 5 minutes or so (and can you even do that in a good way?)
This method is very prone to data loss unless you use a durable queueing mechanism (like MSMQ). Unless you anticipate massive traffic, I wouldn't even think about this approach.
Writes of this nature are inexpensive, and hundreds of operations per second are not a big deal. I recently built a comment/rating framework that achieves throughput of 3000+ complete transactions per second just on my local all-in-one workstation. This included processing the request, validation, and creating multiple records within a transaction.
As a note, you should take steps to ensure that your statistics data isn't vulnerable to artificial inflation/manipulation. This part of the process will probably be more complex than the view tracking itself. For example, a user should not be able to sit and hold down the F5 key to inflate the number of views on their video. Nor should these values be manipulable over HTTP (e.g. by a small script sending an AJAX request over and over).
This suggests that each INSERT would be preceded by a SELECT to ensure that the same user ID or IP hadn't already been recorded in some period of time. Of course, this isn't foolproof (unless you invest a great deal of effort), but it errs on the side of conservatism which is usually a good approach.
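As a rough sketch of that pattern, reusing the question's #userId/#pageId placeholders and the UserPageviews columns from above (MySQL-flavoured date arithmetic to match the NOW() calls; the 10-minute window and index name are just examples):

    -- 1) has this user (or IP) viewed the page within the window?
    SELECT COUNT(*)
    FROM   UserPageviews
    WHERE  UserId = #userId
      AND  PageId = #pageId
      AND  DateTime > NOW() - INTERVAL 10 MINUTE;

    -- 2) only if the count is 0, record the view
    INSERT INTO UserPageviews (UserId, PageId, DateTime)
    VALUES (#userId, #pageId, NOW());

    -- an index like this keeps both statements cheap as the table grows
    CREATE INDEX IX_UserPageviews_User_Page_Date
        ON UserPageviews (UserId, PageId, DateTime);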
One way would be to call a stored procedure for every view, but that would result in a lot of unneeded calls to the database.
I regularly have to remind myself (and other developers) to not fear the database. People (me included) sometimes go to great lengths to avoid a few simple database calls. Keep your tables narrow and well-indexed, and operations like this are faster than you might think.

Better performance to Query the DB or Cache small result sets?

Say I need to populate 4 or 5 dropdowns w/ items from a database. Each drop down will have < 15 items in it. These items almost never change.
Now I could query the DB each time the page is accessed, or I could grab the values from a custom class that checks whether they already exist in ASP.NET's cache and only queries the DB to update the cache if they don't.
It's trivial for me to write, but I'm unsure whether the performance would be better or not. I think it would be (although probably not by anything huge).
What do you think?
When dealing with performance issues you should always:
Do things the simplest way first (avoid premature optimisation)
Performance test your code with set performance goals (e.g. 200 ms response time under a load of N concurrent users)
Then, IF your code doesn't perform, profile it to determine what is slow, and profile your proposed performance fixes to accurately measure the real-world performance change.
Having said that: yes, what you are suggesting seems sensible (you would usually expect an in-memory cache to be quicker than a database), but it also depends on what data is being returned, the memory load of your application, how expensive the query is, what the query parameters are, etc.
You should performance test your changes before and after to determine the actual effect of your changes (including things like memory load), and you should only really be doing things like this once you have identified that these dropdowns are the cause of an unacceptable performance problem.
That's what the System.Web.Helpers.WebCache class exists for.
I/O is usually more expensive than memory operations (by orders of magnitude). Especially if your database is on another machine, you would also be using network resources, and it will definitely be faster to just use the cache.
But indeed, optimize at the end, once you have actually identified a performance bottleneck by measuring.
Quick answer to your question:
Use the built in .Net cache.
Additional points to ponder over..
Preferably, retrieve all master data in a single database round trip (think stored procedure and dataset), though I do not advocate the use of stored procs in all scenarios (see the sketch after these points)
As you rightly said, ensure that your data access layer checks the cache before making a round trip to the database
Also, as your dropdown values do not change very often, remember to set a long expiry duration
Finally, based on your page design you could also look at Fragment Caching (partial page caching: user controls) which could give you bigger benefits since now you neither access the data cache nor the database.
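For the single-round-trip retrieval mentioned in the first point, a hypothetical sketch (the procedure and lookup table names are made up) could return every dropdown's items at once; the caller loads the result sets into one DataSet and caches it:

    CREATE PROCEDURE dbo.GetDropdownData
    AS
    BEGIN
        SET NOCOUNT ON;
        -- one result set per dropdown, all in a single round trip
        SELECT StatusId,   StatusName   FROM dbo.Statuses;
        SELECT CountryId,  CountryName  FROM dbo.Countries;
        SELECT CategoryId, CategoryName FROM dbo.Categories;
    END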
Performance:
Again, overall performance depends more on the application's load than on your direct round trips for fetching the master data. Put simply, as Thomas suggested, use the cache class!

ASP.NET/SQL 2008 Performance issue

We've developed a system with a search screen that looks a little something like this:
[screenshot of the lead search screen omitted - source: nsourceservices.com]
As you can see, there is some fairly serious search functionality. You can use any combination of statuses, channels, languages, campaign types, and then narrow it down by name and so on as well.
Then, once you've searched and the leads pop up at the bottom, you can sort the headers.
The query uses ROW_NUMBER() to implement a paging scheme, so we only return something like 70 rows at a time.
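(For reference, the paging query is shaped roughly like this; the table, column, and filter names are placeholders rather than our real schema:)

    SELECT *
    FROM (
        SELECT l.*,
               ROW_NUMBER() OVER (ORDER BY l.LastName, l.LeadId) AS rn
        FROM   dbo.Leads l
        WHERE  l.StatusId IN (1, 2)
          AND  l.ChannelId = 5
    ) AS paged
    WHERE paged.rn BETWEEN 71 AND 140;   -- e.g. the second page of 70 rows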
The Problem
Even though we're only returning 70 rows, an awful lot of IO and sorting is going on. This makes sense of course.
This has always caused some minor spikes to the Disk Queue. It started slowing down more when we hit 3 million leads, and now that we're getting closer to 5, the Disk Queue pegs for up to a second or two straight sometimes.
That would actually still be workable, but this system has another area with a time-sensitive process, let's say for simplicity that it's a web service, that needs to serve up responses very quickly or it will cause a timeout on the other end. The Disk Queue spikes are causing that part to bog down, which is causing timeouts downstream. The end result is actually dropped phone calls in our automated VoiceXML-based IVR, and that's very bad for us.
What We've Tried
We've tried:
Maintenance tasks that reduce the number of leads in the system to the bare minimum.
Added the obvious indexes to help.
Ran the index tuning wizard in profiler and applied most of its suggestions. One of them was going to more or less reproduce the entire table inside an index so I tweaked it by hand to do a bit less than that.
Added more RAM to the server. It was a little low but now it always has something like 8 gigs idle, and the SQL server is configured to use no more than 8 gigs, however it never uses more than 2 or 3. I found that odd. Why isn't it just putting the whole table in RAM? It's only 5 million leads and there's plenty of room.
Pored over query execution plans. I can see that at this point the indexes seem to be mostly doing their job -- about 90% of the work is happening during the sorting stage.
Considered partitioning the Leads table out to a different physical drive, but we don't have the resources for that, and it seems like it shouldn't be necessary.
In Closing...
Part of me feels like the server should be able to handle this. Five million records is not so many given the power of that server, which is a decent quad core with 16 gigs of RAM. However, I can see how the sorting part is causing millions of rows to be touched just to return a handful.
So what have you done in situations like this? My instinct is that we should maybe slash some functionality, but if there's a way to keep this intact that will save me a war with the business unit.
Thanks in advance!
Database bottlenecks can frequently be relieved by improving your SQL queries. Without knowing what those look like, consider creating an operational data store (ODS) or a data warehouse that you populate on a scheduled basis.
Sometimes flattening out your complex relational databases is the way to go. It can make queries run significantly faster, and make it a lot easier to optimize your queries, since the model is very flat. That may also make it easier to determine if you need to scale your database server up or out. A capacity and growth analysis may help to make that call.
Transactional/highly normalized databases are not usually as scalable as an ODS or data warehouse.
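As a rough sketch of such a flattened structure populated on a schedule (all object names here are hypothetical):

    -- denormalized search table, rebuilt or merged by a scheduled job
    CREATE TABLE dbo.LeadSearch (
        LeadId       INT NOT NULL PRIMARY KEY,
        LeadName     NVARCHAR(200),
        StatusName   NVARCHAR(50),
        ChannelName  NVARCHAR(50),
        LanguageName NVARCHAR(50),
        CampaignType NVARCHAR(50),
        CreatedOn    DATETIME
    );

    -- scheduled refresh: flatten the joins once, so searches hit a single wide table
    TRUNCATE TABLE dbo.LeadSearch;

    INSERT INTO dbo.LeadSearch
        (LeadId, LeadName, StatusName, ChannelName, LanguageName, CampaignType, CreatedOn)
    SELECT l.LeadId, l.Name, s.Name, c.Name, lg.Name, ct.Name, l.CreatedOn
    FROM   dbo.Leads l
    JOIN   dbo.Statuses s       ON s.StatusId = l.StatusId
    JOIN   dbo.Channels c       ON c.ChannelId = l.ChannelId
    JOIN   dbo.Languages lg     ON lg.LanguageId = l.LanguageId
    JOIN   dbo.CampaignTypes ct ON ct.CampaignTypeId = l.CampaignTypeId;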
Edit: Your ORM may also support optimizations of its own that are worth looking into, rather than only optimizing the queries it sends to your database. Perhaps bypassing your ORM altogether for the reports could be one way to get full control over your queries and gain better performance.
Consider how your ORM is creating the queries.
If you're having poor search performance perhaps you could try using stored procedures to return your results and, if necessary, multiple stored procedures specifically tailored to which search criteria are in use.
Determine which ad-hoc queries will most likely be run, or limit the search criteria with stored procedures. Can you summarize data? Treat this app like a data warehouse.
Create indexes on each column involved in the search to avoid table scans (see the sketch after this list).
Create fragments on expressions.
Periodically reorganize the data and update statistics as more leads are loaded.
Put the temporary files created by queries (result sets) on a RAM disk.
Consider migrating to a high-performance RDBMS engine such as Informix OnLine.
Initiate another thread to start displaying N rows from the result set while the query continues to execute.
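For the indexing and statistics points above, a SQL Server-flavoured sketch with placeholder names might look like this:

    -- an index on each search column to avoid table scans
    CREATE INDEX IX_Leads_StatusId   ON dbo.Leads (StatusId);
    CREATE INDEX IX_Leads_ChannelId  ON dbo.Leads (ChannelId);
    CREATE INDEX IX_Leads_LanguageId ON dbo.Leads (LanguageId);

    -- periodic maintenance as more leads are loaded
    UPDATE STATISTICS dbo.Leads;
    ALTER INDEX ALL ON dbo.Leads REORGANIZE;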
