The first DB call is much slower (OpenEdge)

The first call in the morning takes 15 seconds:

    FOR EACH ... NO-LOCK:
    END.

The second call takes only 1.5 seconds.
What causes this delay?
What can I log to identify it?
Even when I restart the DB I can't reproduce the behaviour of the first call.
(For complex queries I measure a difference of 15 minutes versus 2 seconds.)

The most likely cause for this is caching. There are two caches in place:
The -B buffer pool of the database, which caches database blocks in memory. It is a typical observation that, once this cache has warmed up after a restart of the DB server, queries execute much faster. Of course this all depends on the size of your DB and the size of the -B buffer pool; relatively small databases may fit largely into a relatively large -B buffer pool.
The OS disk cache, which will also play its part in your observation.
A simple timing harness, as in the sketch below, can make the warm-up effect visible: the first execution is dominated by disk reads that fill both caches, while the second is served mostly from memory.
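
For illustration, a minimal Java/JDBC sketch that runs the same query twice in one session and compares wall-clock times. The connection URL (DataDirect OpenEdge driver format is assumed), credentials, and the PUB.Customer table are placeholders, not details from the question; the original ABL code would show the same pattern.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class WarmupTimer {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:datadirect:openedge://dbhost:5566;databaseName=mydb",
                    "user", "password")) {
                for (int run = 1; run <= 2; run++) {
                    long start = System.nanoTime();
                    try (Statement st = conn.createStatement();
                         ResultSet rs = st.executeQuery(
                                 "SELECT COUNT(*) FROM PUB.Customer")) {
                        rs.next();
                    }
                    // Run 1 is dominated by disk reads filling the -B buffer
                    // pool and the OS cache; run 2 should hit memory.
                    System.out.printf("Run %d: %.1f ms%n", run,
                            (System.nanoTime() - start) / 1e6);
                }
            }
        }
    }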

Related

NiFi memory management

I just want to understand how we should plan for the capacity of a NiFi instance.
We have a NiFi instance with around 500 flows, so the total number of processors enabled on the NiFi canvas is around 4000. We run 2-5 flows simultaneously, which takes no more than half an hour, i.e. we process data in MBs.
It was working fine until now, but we are seeing OutOfMemory errors very often. So we increased the Xms and Xmx parameters from 4g to 8g, which has resolved the problem for now. But going forward we will have more flows and we may face the OutOfMemory issue again.
So, can anyone help with a capacity planning matrix, or any suggestion to avoid such issues before they happen? E.g.: if we have 3000 processors enabled, with or without any processing, then X amount of memory is required.
Any input on NiFi capacity planning would be appreciated.
Thanks in advance.
OOM errors can occur due to specific memory-consuming processors. For example, SplitXML loads your whole record into memory, so it could load a 1 GiB file in its entirety.
Each processor documents what resource considerations should be taken into account. All of the Apache processors (as far as I can tell) are documented in this manner, so you can rely on that documentation.
In our example, by the way, SplitXML can be replaced with SplitRecord, which doesn't load the whole record into memory.
So even if you use 1000 processors simultaneously, they might not consume as much memory as one processor that loads your whole FlowFile's content into memory.
Check which processors you are using and make sure you don't use one like that (there are more like this one that load the whole document into memory). The sketch below illustrates the difference.
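
Here is a hedged sketch of the two patterns inside a custom processor's onTrigger() logic, using the standard ProcessSession API; the class and method names other than session.read are illustrative only:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.ProcessSession;

    class StreamingReadSketch {
        void handle(ProcessSession session, FlowFile flowFile) {
            // Memory-hungry pattern (avoid): buffering the entire content
            // means a 1 GiB FlowFile needs at least 1 GiB of heap at once.
            //   byte[] whole = new byte[(int) flowFile.getSize()];
            //   session.read(flowFile, in -> StreamUtils.fillBuffer(in, whole));

            // Streaming pattern: only one small buffer is on the heap at a
            // time, regardless of how large the FlowFile content is.
            session.read(flowFile, in -> {
                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(in, StandardCharsets.UTF_8));
                String line;
                while ((line = reader.readLine()) != null) {
                    // handle one record/line at a time
                }
            });
        }
    }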

Maximum concurrent http connections for same domain between two servers

I have a Tomcat server which connects to a Tableau server. I need to make about 25 GET calls from Tomcat to Tableau. I am trying to thread this and let each thread create its own HTTP connection object and make the call. On my local system (Tomcat local, Tableau remote), I notice that in this case each of my threads takes about 10 seconds on average, so the total is about 10 seconds.
However, if I do this sequentially, each request takes 2 seconds, for a total of 50 seconds.
My question is: when making requests in parallel, why does each take more than 2 seconds when it takes just 2 seconds when done sequentially?
Does this have anything to do with the maximum number of concurrent connections to the same domain from one client (browser)? But here the request is going from my Tomcat server, not a browser.
If yes, what is the default rule, and is there any way to change it?
In my opinion, it's most likely the context-switching overhead that the system incurs for each request; that is why you see longer times for individual requests (compared to one sequential thread) but a significant gain in overall processing time.
It makes sense to go for parallel processing when the context-switching overhead is negligible compared to the time taken by the overall activity. A sketch for measuring both the per-request and total times follows.
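
For what it's worth, a minimal sketch (Java 11+ java.net.http; the host and path are placeholders, not the actual Tableau endpoint) that times each parallel request as well as the whole batch, making the per-request overhead visible:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    public class ParallelGets {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://tableau.example.com/api/resource"))
                    .GET().build();
            long start = System.nanoTime();
            List<CompletableFuture<Long>> futures = new ArrayList<>();
            for (int i = 0; i < 25; i++) {
                long reqStart = System.nanoTime();
                // Each request records its own duration when it completes.
                futures.add(client.sendAsync(request,
                                HttpResponse.BodyHandlers.discarding())
                        .thenApply(r -> (System.nanoTime() - reqStart) / 1_000_000));
            }
            futures.forEach(f -> System.out.println("Request took " + f.join() + " ms"));
            System.out.println("Total: " + (System.nanoTime() - start) / 1_000_000 + " ms");
        }
    }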

Odd Asp.Net threadpool sizing behavior

I am load testing a .NET 4.0 MVC application hosted on IIS 7.5 (default config, in particular processModel autoConfig=true), and am observing odd behavior in how .NET manages the threads.
http://msdn.microsoft.com/en-us/library/0ka9477y(v=vs.100).aspx mentions that "When a minimum is reached, the thread pool can create additional threads or wait until some tasks complete".
It seems the duration for which threads are blocked plays a role in whether it creates new threads or waits for tasks to complete, not necessarily resulting in optimal throughput.
Question: Is there any way to control that behavior, so threads are generated as needed and request queuing is minimized?
Observation:
I ran two tests on a test controller action that does not do much besides Thread.Sleep for an arbitrary time.
50 requests/second with the page sleeping 1 second
5 requests/second with the page sleeping for 10 seconds
In both cases .NET would ideally use 50 threads to keep up with the incoming traffic. What I observe is that in the first case it does not do that; instead it chugs along executing some 20-odd requests concurrently, letting the incoming requests queue up. In the second case threads seem to be added as needed.
Both tests generated traffic for 100 seconds. Here are the corresponding perfmon screenshots.
In both cases the Requests Queued counter is highlighted (note the 0.01 scaling)
50/sec Test
For most of the test, 22 requests are handled concurrently (turquoise line). As each takes about a second, that means almost 30 requests/sec queue up until the test stops generating load after 100 seconds, and the queue then gets slowly worked off. Briefly the concurrency jumps to just above 40, but never to 50, the minimum needed to keep up with the traffic at hand.
It is almost as if the threadpool management algorithm determines that it doesn't make sense to create new threads, because it has a history of ~22 tasks completing (i.e. threads becoming available) per second. Completely ignoring the fact that it has a queue of some 2800 requests waiting to be handled.
5/sec Test
Conversely in the 5/sec test threads are added at a steady rate (red line). The server falls behind initially, and requests do queue up, but no more than 52, and eventually enough threads are added for the queue to be worked off with more than 70 requests executing concurrently, even while load is still being generated.
Of course the workload is higher in the 50/sec test, as 10x the number of http requests is being handled, but the server has no problem at all handling that traffic, once the threadpool is primed with enough threads (e.g. by running the 5/sec test).
It just seems unable to deal with a sudden burst of traffic, because it decides not to add any more threads to deal with the load (it seems it would rather throw 503 errors than add more threads in this scenario). I find this hard to believe, as a burst of 50 requests/second is surely something IIS is supposed to be able to handle on a 16-core machine. Is there some setting that would nudge the threadpool towards erring slightly more on the side of creating new threads, rather than waiting for tasks to complete?
Looks like it's a known issue:
"Microsoft recommends that you tune the minimum number of threads only when there is load on the Web server for only short periods (0 to 10 minutes). In these cases, the ThreadPool does not have enough time to reach the optimal level of threads to handle the load."
Exactly describes the situation at hand.
Solution: slightly increase minWorkerThreads in machine.config to handle the expected traffic burst (a value of 4 gives us 64 threads on the 16-core machine, since the setting is per CPU). A sketch of the change is below.
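
For reference, this is roughly what the machine.config change looks like; verify the element against your own machine.config before applying it:

    <system.web>
      <processModel autoConfig="true" minWorkerThreads="4" />
    </system.web>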

The Case of the Missing '14 second SQLite database' performance

I have developed a program which uses an SQLite 3.7 ... database; in it there is a rather extensive write/read module that imports, checks and updates data. This process takes 14 seconds on my PC and I'm pleased as punch with the performance.
I use transactions for everything, with parameters. My PC is an Intel i7 with 18 GB of RAM. I have not set anything on the database. I used SQLite Expert to create the database and the data structures, including tables and columns, and checked that all indexes are created. In other words, it's all OK.
I have since deployed the program/database to 2 other machines. That 14-second process takes over 5 minutes on the other machines. Same program, identical data, identical database. The machines are up to date; one is a 3rd-gen Intel i7 bought last week, the other is quite fast as well, so hardware should not be an issue.
I'm just not understanding what the problem could be. Is it the database itself? I have not set anything other than encryption on it. Remember that I run the same process and it takes 14 seconds. Could it be that the database is 'optimised' for my PC, so when I give it to others it's not optimised?
I know I could turn off journaling to get better performance, but that would only speed up the process and still leave the problem.
Any ideas would be welcome.
EDIT:
I have tested the program on my 7-year-old dual Athlon with 3 GB of RAM running XP on an HDD, and the procedure took 35 seconds. Well within tolerable limits, considering. I just don't get what could be making 2 modern machines take 5 minutes.
I have an idea that it's a write issue, as reads on those machines are slower but quite acceptable.
SQLite's speed is affected most by how well the disk does random reads and writes; any SSD is much better at this than any rotating disk.
Whenever changes overflow the internal cache, they must be written to disk. You should use PRAGMA cache_size to increase the cache beyond the default 2 MB.
Changed data must be written to disk at the end of every transaction. Make sure that there are as many changes as possible in one transaction.
If much of your processing involves temporary tables or indexes, the speed is affected by the speed of the main disk. If your machines have enough RAM, you can force temporary data into RAM with PRAGMA temp_store.
You should enable Write-Ahead Logging. (A sketch applying these settings follows the note below.)
Note: the default SQLite distribution does not have encryption.
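
For illustration, a sketch of applying those settings from code. The original program's language isn't stated, so the sqlite-jdbc driver and the file name below are assumptions made purely for the example:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SqliteTuning {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:data.db");
                 Statement st = conn.createStatement()) {
                st.execute("PRAGMA cache_size = -65536"); // negative value = size in KiB, so ~64 MB
                st.execute("PRAGMA temp_store = MEMORY"); // keep temp tables/indexes in RAM
                st.execute("PRAGMA journal_mode = WAL");  // enable Write-Ahead Logging
                conn.setAutoCommit(false);                // batch many changes per transaction
                // ... import/check/update work goes here ...
                conn.commit();
            }
        }
    }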

Membase Blocking on Key Eviction?

We've been using Memcached for a while and recently started testing Membase in AWS. We're testing a single instance of Membase 1.6.0 on a large EC2 instance with 5GB RAM, 750GB disk (Linux FC8).
We've noticed that SQLite seems to block on eviction purges on an hourly basis when expiryPagerSleeptime wakes up. Although this was expected (because SQLite uses database level locking), we didn't expect that Membase would block as well.
In this case, it seems that while SQLite is deleting old keys, Membase "operations per second" fall to zero or near zero for several minutes. After the eviction process has finished, the Membase server quickly recovers. I would have anticipated that reads from Membase RAM would still proceed while SQLite was locked but this doesn't seem to be the case. Everything stops; the spy clients throw streams of exceptions as they time-out waiting for data that never arrives.
My impression from the docs was that Membase was asynchronous and would continue to serve reads from RAM. I would appreciate any help or suggestions to prevent Membase from blocking on key evictions. This is a serious issue for us because it seems to take about 4 minutes for this eviction process to finish and for the backlog in the disk queue to clear. That means every hour, Membase is effectively offline for 4 minutes.
I should also mention that this happens once the data grows larger than RAM (and its size on disk is increasing). We didn't notice any issues with key eviction when the data fit in RAM (presumably because key eviction in RAM happens too quickly to be noticeable).
To avoid duplicating information, this question is answered and explained here: http://www.couchbase.com/forums/thread/membase-blocking-key-eviction
Perry
