Executing a Kusto function residing on leader cluster from follower cluster - azure-data-explorer

Let's say I have a user-defined function MyFunction() created in database MyDatabase on a leader cluster, and that database is being followed by a follower cluster. If I execute the function from the follower cluster, which cluster does it actually run on?

It will be executed on the follower cluster. The function always runs in the context of its database, so if that database is a follower database, the function executes on the follower cluster.

Related

Raft.Next - persistent cluster configuration fails when running multiple processes

I'm currently investigating Raft in dotNext and would like to move from the fairly simplistic example which registers all the nodes in the cluster at startup to using an announcer to notify the leader when a new node has joined.
To my understanding, this means that I should start the initial node in ColdStart, but subsequent nodes should use the ClusterMemberAnnouncer to add themselves to the cluster, like so:
services.AddTransient<ClusterMemberAnnouncer<UriEndPoint>>(serviceProvider => async (memberId, address, cancellationToken) =>
{
    // Register the node with the configuration storage
    var configurationStorage = serviceProvider.GetService<IClusterConfigurationStorage<UriEndPoint>>();
    if (configurationStorage == null)
        throw new Exception("Unable to resolve the IClusterConfigurationStorage when adding the new node member");

    await configurationStorage.AddMemberAsync(memberId, address, cancellationToken);
});
It makes sense to me that the nodes should use a shared/persisted configuration storage so that when the second node tries to start up and announce itself, it's able to see the first cold-started active node in the cluster. However, if I use the documented services.UsePersistentConfigurationStorage("configurationStorage") approach and then run the nodes in separate console windows, i.e. separate processes, the second node understandably says:
The process cannot access the file 'C:\Projects\RaftTest\configurationStorage\active.list' because it is being used by another process.
Has anyone perhaps got an example of using an announcer in Raft dotNext?
And does anyone know the best way (hopefully with an example) to use persistent cluster configuration storage so that separate processes (potentially running in different Docker containers) are able to access the active list?

Horizontal scaling and cron jobs

I was recently forced to move my app to Amazon and use auto-scaling, and I have stumbled onto an issue with cron jobs and automatic scaling.
I have a cron job running every 15 minutes which checks whether subscriptions should be charged: the query selects all subscriptions that are past due and attempts to charge them. It changes their status once processed, but they are fetched in a batch, and the process takes 1-3 minutes.
If I have multiple instances with the same cron job, it could fire simultaneously and charge the subscriptions multiple times. This has actually happened once.
What is the best approach here? Somehow locking the table?
I am using Amazon Elastic Beanstalk and Symfony 3.
At the very least, you can use a dedicated micro instance for subscription charging (not auto-scaled, of course) that only runs the cron jobs. This is the simplest approach, and also the safest: you move your subscription-handling logic off the front-end servers, which can potentially be hacked, onto a server in a VPC subnet that is not reachable from the public network.
If you don't want to do that, there is another approach. You mentioned that you use Beanstalk, and Beanstalk allows delayed jobs.
So possible approach is:
1) When you create a subscription, calculate when it should be charged and push a job with that delay to a Beanstalk tube.
2) A worker picks up the job (with the subscription) when the delay expires. Only one worker receives a particular job, so this works with autoscaling.
3) In the worker, check the subscription (it might have been deleted or become inactive, etc.), and if it is ready to be charged, run the charging code. Then calculate the next charging time and push a new delayed job (with the subscription) to the queue.
Beanstalk has a Symfony bundle and a powerful PHP library; a sketch of the flow is shown below.
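The schedule-then-reschedule loop from the three steps above could look roughly like this. It is only an illustrative sketch in Java: the TubeClient interface is a hypothetical stand-in for whatever Beanstalk client you actually use (for example the PHP library mentioned above), and the 30-day renewal period is an assumption.

import java.time.Duration;
import java.time.Instant;

public class SubscriptionScheduler {

    /** Hypothetical queue client; stands in for a real Beanstalk tube client. */
    public interface TubeClient {
        void put(String payload, Duration delay);
    }

    private final TubeClient tube;

    public SubscriptionScheduler(TubeClient tube) {
        this.tube = tube;
    }

    /** Step 1: on subscription creation, enqueue a charge job delayed until the due date. */
    public void scheduleCharge(long subscriptionId, Instant chargeAt) {
        Duration delay = Duration.between(Instant.now(), chargeAt);
        if (delay.isNegative()) {
            delay = Duration.ZERO; // already past due: charge as soon as a worker is free
        }
        tube.put("charge:" + subscriptionId, delay);
    }

    /** Steps 2-3: the single worker that reserved the job charges and re-enqueues the next cycle. */
    public void handleChargeJob(long subscriptionId, boolean stillActive) {
        if (!stillActive) {
            return; // subscription was deleted or deactivated in the meantime
        }
        charge(subscriptionId);
        scheduleCharge(subscriptionId, Instant.now().plus(Duration.ofDays(30))); // assumed renewal period
    }

    private void charge(long subscriptionId) {
        // Call to the payment provider would go here.
    }
}

Because only one worker ever reserves a given job, each subscription is charged at most once per cycle even when the worker fleet scales out.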
Alternatively, you can make the job run on only one instance, i.e. make your charge-subscription functionality execute on only one of the instances.
You can use the AWS API to fetch all running instances and then compare them with the instance the code is currently running on:
require 'aws-sdk-ec2'
require 'net/http'

ec2 = Aws::EC2::Resource.new(
  region: 'region',
  credentials: Aws::Credentials.new('IAM_KEY', 'IAM_SECRET')
)

# Ask the EC2 instance metadata service for the id of the instance this code runs on.
metadata_endpoint = 'http://169.254.169.254/latest/meta-data/'
current_server_id = Net::HTTP.get(URI.parse(metadata_endpoint + 'instance-id'))

# Collect the ids of all running instances.
instances = []
ec2.instances.each do |i|
  instances << i.id if i.state.name == 'running'
end

# Only the instance that happens to be first in the list runs the job.
if instances.first == current_server_id
  # your functionality
end

Hierarchical CLH lock behaviour

Could anyone explain how an HCLH lock handles new nodes that are created in the local cluster after the cluster master has merged the local queue into the global queue?
Once the local queue is merged onto the global queue, the cluster master sets the tailWhenSpliced field to true. A new local node added afterwards will know that it is the cluster master when it checks its predecessor's tailWhenSpliced flag. (I have cut a longer answer short.)
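For reference, here is a minimal sketch (in Java, following the textbook HCLH algorithm; field and method names are illustrative rather than taken from any specific implementation) of how a newly arriving local node spins on its predecessor and decides whether it received the lock locally or has become the new cluster master:

import java.util.concurrent.atomic.AtomicBoolean;

/** Sketch of the per-thread queue node used by the HCLH algorithm. */
class QNode {
    final int clusterId;
    final AtomicBoolean successorMustWait = new AtomicBoolean(true);
    final AtomicBoolean tailWhenSpliced = new AtomicBoolean(false);

    QNode(int clusterId) {
        this.clusterId = clusterId;
    }
}

class HclhWait {
    /**
     * Spin on the predecessor until either the lock is handed over inside the local
     * queue (return true), or the predecessor turns out to be the spliced local tail
     * or a node from another cluster, in which case the caller becomes the new
     * cluster master for a fresh local queue (return false).
     */
    static boolean waitForGrantOrClusterMaster(QNode pred, int myClusterId) {
        while (true) {
            if (pred.clusterId == myClusterId
                    && !pred.tailWhenSpliced.get()
                    && !pred.successorMustWait.get()) {
                return true;  // predecessor released the lock within the local queue
            }
            if (pred.clusterId != myClusterId || pred.tailWhenSpliced.get()) {
                return false; // local queue was merged into the global queue; start a new one
            }
        }
    }
}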

Doctrine2: Cannot find concurrently persisted entity with findById

I have the current setup:
A regular Symfony2 web request can create and persist a Job entity, which also creates a Gearman Job; let's say this occurs in process 1. The Gearman Job is executed by a Gearman Worker, which is passed the Job entity's ID.
I also use Symfony to create a Gearman Worker; this is run as a PHP CLI process, let's call this process 2.
For those not familiar with Gearman the worker code operates something like so:
for loop 5 times
    get job from gearman (blocking method call)
    get job entity from database
    do stuff
Essentially this code keeps a Symfony2 instance running to handle 5 Jobs before the worker dies.
My issue is this: On the first job that the worker handles Doctrine2 is able to retrieve the created job from the database without issue using the following code:
$job = $this->doctrine
    ->getRepository('AcmeJobBundle:Job')
    ->findOneById($job->workload()); // workload is the job id
However, once this job completes and the for loop increments to wait for a second job, let's say one arriving from another Symfony2 web request on process 3 that creates the Job with ID 2, the call to the Doctrine2 repository returns null even though the entity is definitely in the database.
Restarting the worker solves the issue, so when it carries out its first loop it can pick up Job 2.
Does anyone know why this happens? Does the first call of getRepository or findOneById do some sort of table caching from MySQL that doesn't allow it to see the subsequently added Job 2?
Does MySQL only show a snapshot of the DB to a given connection as long as it is held open?
I've also tried resetting the entityManager before making the second call to findOneBy to no avail.
Thanks for any advice in advance, this one is really stumping me.
Update:
I've created a single process test case to rule out whether or not it was the concurrency causing the problem, and the test case executes as expected. It seems the only time the repository can't find job 2 is when it is added to the DB on another process.
// Job 1 already exists
$job = $this->doctrine
    ->getRepository('AcmeJobBundle:Job')
    ->findOneById(1);
$job->getId(); // this is fine.

$em->persist(new Job()); // creates job 2
$em->flush();

$job = $this->doctrine
    ->getRepository('AcmeJobBundle:Job')
    ->findOneById(2);
$job->getId(); // this is fine too, no exception.
Perhaps one process tries to load the entity before it has been saved by the second process.
Doctrine caches loaded entities by their id, so when you make a second request for the same object it is loaded without another query to the database. You can read more about the Doctrine IdentityMap here.
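Doctrine itself is PHP, but the identity-map behaviour described in this answer is easy to illustrate in any language. The sketch below (Java, all names hypothetical) shows the core idea: once an id is in the map, a repeated find returns the cached instance and does not go back to the database until the map is cleared, which is what clearing the EntityManager does in Doctrine.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Toy identity map: at most one in-memory instance per id for the lifetime of the "manager". */
class IdentityMap<T> {
    private final Map<Long, T> loaded = new HashMap<>();
    private final Function<Long, T> databaseLookup; // stands in for the real SQL query

    IdentityMap(Function<Long, T> databaseLookup) {
        this.databaseLookup = databaseLookup;
    }

    T find(long id) {
        // A hit in the map short-circuits the database; only a miss falls through to the lookup.
        return loaded.computeIfAbsent(id, databaseLookup);
    }

    void clear() {
        loaded.clear(); // roughly analogous to clearing the EntityManager
    }
}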

How to lock node for deletion process

Within Alfresco, I want to delete a node, but I don't want it to be used by any other user in a cluster environment while I do so.
I know that I can use LockService to lock a node (in a cluster environment), as in the following lines:
lockService.lock(deleteNode);
nodeService.deleteNode(deleteNode);
lockService.unlock(deleteNode);
The last line may cause an exception because the node has already been deleted, and indeed the exception it raises is:
A system error happened during the operation: Node does not exist: workspace://SpacesStore/cb6473ed-1f0c-4fa3-bfdf-8f0bc86f3a12
So how do I ensure correct concurrent behaviour in a cluster environment when deleting a node, i.e. prevent two users from accessing the same node at the same time when one of them wants to update it and the other wants to delete it?
Depending on your cluster environment (e.g. the same DB server used by all Alfresco instances), transactions will most likely be enough to ensure that no stale content is used:
serverA(readNode)
serverB(deleteNode)
serverA(updateNode) <--- transaction failure
The JobLockService allows more control for more complex operations, which might involve multiple, dynamic nodes (or no nodes at all, e.g. sending emails or similar); a code sketch follows the sequence below:
serverA(acquireLock)
serverB(acquireLock) <--- wait for the lock to be released
serverA(readNode1)
serverA(if something then updateNode2)
serverA(updateNode1)
serverA(releaseLock)
serverB(readNode2)
serverB(releaseLock)
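As a rough illustration, a delete guarded by the JobLockService could look like the sketch below. It assumes the standard getLock/releaseLock methods and an injected NodeService; the lock name and time-to-live are arbitrary choices for this example, and error handling is reduced to the bare minimum.

import org.alfresco.repo.lock.JobLockService;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeService;
import org.alfresco.service.namespace.NamespaceService;
import org.alfresco.service.namespace.QName;

public class GuardedNodeDeletion {
    // Arbitrary lock name; every cluster member must use the same one (assumption for this sketch).
    private static final QName DELETE_LOCK =
            QName.createQName(NamespaceService.SYSTEM_MODEL_1_0_URI, "nodeDeletionLock");
    private static final long LOCK_TTL_MS = 30_000L;

    private final JobLockService jobLockService;
    private final NodeService nodeService;

    public GuardedNodeDeletion(JobLockService jobLockService, NodeService nodeService) {
        this.jobLockService = jobLockService;
        this.nodeService = nodeService;
    }

    public void deleteWithClusterLock(NodeRef nodeRef) {
        // getLock throws if another cluster member currently holds the lock.
        String token = jobLockService.getLock(DELETE_LOCK, LOCK_TTL_MS);
        try {
            if (nodeService.exists(nodeRef)) {
                nodeService.deleteNode(nodeRef);
            }
        } finally {
            // Release by token; there is no need to unlock the node itself after it is gone.
            jobLockService.releaseLock(token, DELETE_LOCK);
        }
    }
}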
