Resizing BoneCP pool - bonecp

Is it possible to grow and shrink the pool dynamically?
I would like to grow the pool when needed and then shrink it back down when the load drops, all without restarting the application.
I tried setting the max connections variable, but it then simply defaults to 9 connections.

You can reduce or increase "acquireIncrement", which controls how many connections are created at a time: "When the available connections are about to run out, BoneCP will dynamically create new ones in batches."
The default for BoneCP is 10. If, for example, you set "acquireIncrement" to 1, your pool will grow or shrink dynamically one connection at a time, always keeping at least the number of connections given by minConnectionsPerPartition.
http://jolbox.com/index.html?page=http://jolbox.com/configuration.html
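
A minimal sketch of how those settings fit together (Java, using the BoneCP API; the JDBC URL, credentials, and pool sizes are placeholders to adapt):

import com.jolbox.bonecp.BoneCP;
import com.jolbox.bonecp.BoneCPConfig;

public class PoolSetup {
    public static void main(String[] args) throws Exception {
        BoneCPConfig config = new BoneCPConfig();
        config.setJdbcUrl("jdbc:mysql://localhost/test"); // placeholder URL
        config.setUsername("user");                       // placeholder credentials
        config.setPassword("password");
        config.setPartitionCount(1);
        config.setMinConnectionsPerPartition(5);  // pool never shrinks below this
        config.setMaxConnectionsPerPartition(30); // hard upper bound per partition
        config.setAcquireIncrement(1);            // grow by one connection at a time
        BoneCP pool = new BoneCP(config);
        // use pool.getConnection() / connection.close() as usual
        pool.shutdown();
    }
}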

Related

cloudera swap space reached max threshold

I'm getting this alarm on Cloudera, is there any way to increase the swap space capacity?
While you ask how to increase the swap space capacity, I think it is safe to assume that what you are really looking for is a way to solve the problem of a full swap space.
Increasing the swap space is only one way of dealing with the issue - the other is simply to use less swap space. Cloudera recommends using minimal to no swap space, because swapping degrades performance substantially. This is controlled by setting the kernel 'swappiness' (vm.swappiness) to 1 instead of the default of 60. See the documentation for instructions and more rationale.
If the swappiness is already set to 1, then you can try clearing the swap by toggling swap off, then on.
swapoff -a
swapon -a
Before toggling swap you should make sure that
the amount of swap space in use is less than the amount of free memory (as the contents of swap may be shifted to memory).
currently running processes are not using swap (running vmstat produces output with columns labeled 'si' and 'so', telling you the amount of memory swapped in and out per second; if these are both 0, you should be safe).

Is there a way to read from a DynamoDB stream with a fixed number of workers and leases without any issues

I am continuously publishing data into a DynamoDB table which has a stream enabled. I am reading this stream using the DynamoDB Streams adapter for the KCL.
I am using 1 KCL worker with 5 leases. At the time of creation my DynamoDB table had 1 partition (1 RCU and 999 WCU). As I keep publishing data, the number of partitions grows, and so does the number of active shards. Reading is fine while the number of active shards is at most 5; as soon as it crosses 5, the KCL is unable to read from one of the shards (TPS drops).
Is there any config/parameter I can set that will allow me to read from a growing number of shards using a fixed number of leases?
You're looking for the maxLeasesForWorker property.
From the javadoc:
Worker will not acquire more than the specified max number of leases even if there are more shards that need to be processed. This can be used in scenarios where a worker is resource constrained or to prevent lease thrashing when small number of workers pick up all leases for small amount of time during deployment.
Make sure to take note of the warning in the javadoc as well:
Note that setting a low value may cause data loss (e.g. if there aren't enough Workers to make progress on all shards). When setting the value for this property, one must ensure enough workers are present to process shards and should consider future resharding, child shards that may be blocked on parent shards, some workers becoming unhealthy, etc.
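A minimal sketch (assuming KCL 1.x with the DynamoDB Streams adapter; the application name, stream ARN, worker id, and the limit of 5 are illustrative), showing where the cap goes on the client library configuration:

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;

public class StreamConfig {
    public static void main(String[] args) {
        KinesisClientLibConfiguration config =
            new KinesisClientLibConfiguration(
                    "my-streams-app",                         // application name (illustrative)
                    "arn:aws:dynamodb:region:acct:table/T/stream/label", // stream ARN placeholder
                    new DefaultAWSCredentialsProviderChain(),
                    "worker-1")                               // worker id (illustrative)
                .withMaxLeasesForWorker(5)                    // never hold more than 5 leases
                .withInitialPositionInStream(InitialPositionInStream.TRIM_HORIZON);
        // pass this config, the DynamoDB Streams adapter client and a record
        // processor factory to the KCL Worker as usual
    }
}

With this cap in place, the single worker stops at 5 leases even as the table reshards, which is exactly the trade-off the warning above describes.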

H2 DB persistence.mv.db file size increases even on data cleanup in CorDapp

persistence.mv.db size increases even after wiping out old data, and once the size grows beyond 71 MB we get a handshake timeout (Netty connection) and the nodes stop responding to REST services.
We have cleared data from tables like NODE_MESSAGE_IDS and NODE_OUR_KEY_PAIRS, which grow large due to the high number of hops between our six nodes and the temporary key pairs generated for each session. We did the same for many other tables, e.g. node_transactions, but even after clearing them the size increases.
Also, when we declare:
val session = serviceHub.jdbcSession()
"session.autoCommit is false" everytime. Also I tries to set its value to true, and execute sql queries.But it did not decrease database size.
This is in reference to the same project. We solved the pagination issue by removing data from the tables, but the DB size still increases, so it is not completely solved:
Buffer overflow issue when rows in vault is more than 200
There might be an issue with your flows, as the node is doing a lot of checkpointing.
Besides that, I cannot think of any other scenario that would cause the database to grow constantly.

scaling an azure website

I have a Standard website in Azure with a small instance, (1 core and 1.75 GB memory). It seems to be coping fine and handling the requests smoothly, although I am expecting a lot more within the week.
It is unclear, though, under what circumstances I should be looking to scale the instance size to the next level, i.e. to Medium. (Besides MemoryWorkingSet of course, rather obvious :))
I.e. will moving up to a Medium instance resolve high CPU time?
What other telltale signs should I be watching for?
I am NOT comfortable scaling the number of instances to more than one at the moment until I resolve some cache issues.
I think the key point I am trying to understand is the link between the metrics provided and the means of scaling available, regardless of whether the scaling is horizontal or vertical.
I am trying to keep the average response time as low as possible as the number of users that interact with the website increase.
Which of the other metrics will alert me when the load on the server is reaching its limits and I will need to scale vertically?
The idea behind scaling in Azure is to scale horizontally, i.e. add more instances. Azure can do this for you automatically. If you can't add more instances, Azure can't do the scaling for you automatically.
You can move to a Medium instance and the overall capacity will increase, but it is impossible to say what your application will require under heavy load. I suggest you run a profiler and a load test to find the weak parts of your app and improve them before you see an actual increase in usage.

When to scale up an Azure Standard Instance Size

I have 19 websites running on Azure Standard Websites, with the instance size set to Small.
Right now I can't scale out to multiple instances (or use auto scale) because some of these sites are legacy sites that won't play nice across multiple instances.
The sites running now are fairly basic, but there are 3 sites that are growing fast, and I don't want to have them all bogged down because of the small instance, but I also don't want to pay for a large instance if I don't have to.
How do I know when I should scale up to a medium or large instance?
There doesn't seem to be any way to see CPU load in the portal, only CPU time.
Instead of scaling up, you need to scale out. You can set a CPU metric and set the number of instances you need for that metric.
