How to create asynchronous backups of shared data structures in Hazelcast?

I'm trying to add a number of objects via a simple for-loop to a distributed Hazelcast queue (IQueue).
HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
BlockingQueue<String> configs = hazelcastInstance.getQueue("test");
for (int i = 0; i < 1000; i++) {
    configs.add("Some string" + i);
}
Changing the values of backup-count and async-backup-count in the config (see below) doesn't have any influence on the execution speed. I'd assume that increasing backup-count would block the insert operations and that increasing async-backup-count would not (the loop should then run through roughly as quickly as if the add operation were on a local queue). However, the time to execute the for-loop is the same, even if I set both values to 0. Why is that (it's a two-node cluster with one node on a different VM)?
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.7.xsd">
    <network>
        <port auto-increment="true" port-count="20">5701</port>
        <join>
            <multicast enabled="false"/>
            <tcp-ip enabled="true">
                <member>172.105.66.xx</member>
            </tcp-ip>
        </join>
    </network>
    <queue name="test">
        <statistics-enabled>false</statistics-enabled>
        <max-size>0</max-size>
        <backup-count>0</backup-count>
        <async-backup-count>1</async-backup-count>
        <empty-queue-ttl>-1</empty-queue-ttl>
    </queue>
</hazelcast>

Async backups do not block your calls, so there should be minimal difference between setting the value to 0 or 1. Any other value is meaningless on a two-node cluster.
What makes the difference is whether the owner of the partition holding your data structure is local or remote. Performance issues in such cases are usually caused by the network latency between the caller (your test) and the data structure owner (the remote Hazelcast instance).
HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
IQueue<String> configs = hazelcastInstance.getQueue("test");
for (int i = 0; i < 1000; i++) {
    configs.add("Some string" + i);
}

// Check whether the partition holding the queue is owned by this member or by a
// remote one; remote ownership means every add() pays a network round trip.
Member localMember = hazelcastInstance.getCluster().getLocalMember();
Member partitionOwner = hazelcastInstance.getPartitionService().getPartition(configs.getName()).getOwner();
boolean localCall = localMember.equals(partitionOwner);
System.out.println("Local calls to IQueue: " + localCall);

Related

Interrupt ZStream mapMPar processing

I have the following code which, because of Excel's max row limitation, is restricted to ~1 million rows:
ZStream.unwrap(generateStreamData).mapMPar(32) { m =>
  streamDataToCsvExcel
}
All fairly straightforward, and it works perfectly. I keep track of the number of rows streamed and then stop writing data. However, I want to interrupt all the child fibers spawned in mapMPar, something like this:
ZStream.unwrap(generateStreamData).interruptWhen(effect.true).mapMPar(32) { m =>
  streamDataToCsvExcel
}
Unfortunately the process is interrupted immediately here. I'm probably missing something obvious...
Editing the post as it needs some clarity.
My stream of data is generated by an expensive process in which data is pulled from a remote server (this data is itself calculated by an expensive process), using n fibers.
I then process the streams and stream them out to the client.
Once the processed row count has reached ~1 million, I need to stop pulling data from the remote server (i.e. interrupt all the fibers) and end the process.
Here's what I can come up with after your clarification. The ZIO 1.x version is a bit uglier because of the lack of .dropRight.
Basically, we can use takeUntilM to count the size of the elements we've received and stop once we reach the maximum size (and then use .dropRight, or the additional filter, to discard the last element that would take it over the limit).
This ensures both of the following:
You only run streamDataToCsvExcel until the last possible message before hitting the size limit.
Because streams are lazy, expensiveQuery only gets run for as many messages as fit within the limit (or N+1 if the last value is discarded because it would go over the limit).
import zio._
import zio.stream._

object Main extends zio.App {
  override def run(args: List[String]): URIO[zio.ZEnv, ExitCode] = {
    val expensiveQuery = ZIO.succeed(Chunk(1, 2))
    val generateStreamData = ZIO.succeed(ZStream.repeatEffect(expensiveQuery))

    def streamDataToCsvExcel = ZIO.unit

    def count(ref: Ref[Int], size: Int): UIO[Boolean] =
      ref.updateAndGet(_ + size).map(_ > 10)

    for {
      counter <- Ref.make(0)
      _ <- ZStream
        .unwrap(generateStreamData)
        .takeUntilM(next => count(counter, next.size)) // Count size of messages and stop when it's reached
        .filterM(_ => counter.get.map(_ <= 10)) // Filter last message from `takeUntilM`. Ideally should be .dropRight(1) with ZIO 2
        .mapMPar(32)(_ => streamDataToCsvExcel)
        .runDrain
    } yield ExitCode.success
  }
}
If relying on the laziness of streams doesn't work for your use case, you can trigger an interrupt of some sort from the takeUntilM condition.
For example, you could update the count function to:
def count(ref: Ref[Int], size: Int): UIO[Boolean] =
  ref.updateAndGet(_ + size).map(_ > 10)
    .tapSome { case true => someFiber.interrupt } // someFiber: whatever fiber(s) pull from the remote server

How do I get a value from a dictionary when the key is a value in another dictionary in Lua?

I am writing some code where I have multiple dictionaries for my data. The reason is that I have multiple core objects and multiple smaller assets, and the user must be able to choose a smaller asset and have some function elsewhere run the code with the parent object noted.
An example of one of the dictionaries: (I'm working in ROBLOX Lua 5.1 but the syntax for the problem should be identical)
local data = {
    character = workspace.Stores.NPCs.Thom,
    name = "Thom", npcId = 9,
    npcDialog = workspace.Stores.NPCs.Thom.Dialog
}
local items = {
    item1 = {
        model = workspace.Stores.Items.Item1.Main,
        npcName = "Thom",
    }
}
This is my function:
local function function1(item)
    if not items[item] and data[items[item[npcName]]] then return false end
end
As you can see, I try to index the dictionary using a key from another dictionary. Usually this is no problem.
local thisIsAVariable = item[item1[npcName]]
but the method I use above tries to index the data dictionary for data that is in the items dictionary.
Without a ton of local variables and clutter, is there a way to do this? I had an idea to wrap the conflicting dictionary reference in a tostring() function to separate them - would that work?
Thank you.
As I see it, your issue is that:
data[items[item[npcName]]]
is looking for data["Thom"], but you do not have such a key in the data table. You have a "name" key whose value is "Thom". You could reverse the key and value in the data table, i.e. key the table by the name ("Thom") instead of storing it in a "name" field.
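A minimal sketch of that idea, keying the data table by the NPC name (the npcs table and getNpcForItem function are made-up names here; the workspace paths are from the question):
local npcs = {
    ["Thom"] = {
        character = workspace.Stores.NPCs.Thom,
        npcId = 9,
        npcDialog = workspace.Stores.NPCs.Thom.Dialog,
    },
}
local items = {
    item1 = {
        model = workspace.Stores.Items.Item1.Main,
        npcName = "Thom",
    },
}
-- Look up an item's parent NPC through the item's npcName field
local function getNpcForItem(itemKey)
    local item = items[itemKey]
    if not item then return nil end
    return npcs[item.npcName] -- e.g. npcs["Thom"]
end
print(getNpcForItem("item1").npcId) --> 9
That way the lookup resolves directly to the NPC entry, with no extra local variables needed.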

In Terraform, how can you easily create 10 alarms per table

A single terraform alarm looks something like this:
resource "aws_cloudwatch_metric_alarm" "ecs_cpu_reservation" {
alarm_name = "ecs-cpu-reservation-${var.environment}"
alarm_description = "my description"
namespace = "AWS/ECS"
metric_name = "CPUReservation"
dimensions {
ClusterName = "${var.environment}"
}
statistic = "Average"
period = "300"
evaluation_periods = "${var.acceptable_cpu_reservation_eval_period}"
comparison_operator = "GreaterThanOrEqualToThreshold"
threshold = "${var.acceptable_cpu_reservation}"
alarm_actions = ["${data.terraform_remote_state.vpc.my_topic_arn}"]
actions_enabled = "${var.alerting_enabled}"
}
I have 10 alarms per table, and 50 tables.
Therefore, the tf file will contain 500 of those resource blocks. That's a huge file!
The vast majority of the alarms are identical... the only difference being what table the alarm is for.
Is there a way to loop over a table name list and create the alarms?
From what I read, using the "count" variable (or iterating over a list) will lead to maintenance nightmares.
One other option, to avoid looping, would be to wrap the alarms in a module, so you only need to source it once to get everything inside the module.
So if you had a module that looked like this:
variable "dynamodb_table_name" {}
variable "consumed_read_units_threshold" {}
variable "consumed_write_units_threshold" {}
...
resource "aws_cloudwatch_metric_alarm" "consumed_read_units" {
alarm_name = "dynamodb_${var.dynamodb_table_name}_consumed_read_units"
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = "2"
metric_name = "ConsumedReadCapacityUnits"
namespace = "AWS/DynamoDB"
period = "120"
statistic = "Average"
dimensions {
TableName = "${var.dynamodb_table_name}"
}
threshold = "${var.consumed_read_units_threshold}"
alarm_description = "This metric monitors DynamoDB ConsumedReadCapacityUnits for ${var.dynamodb_table_name}"
insufficient_data_actions = []
}
resource "aws_cloudwatch_metric_alarm" "consumed_write_units" {
alarm_name = "dynamodb_${var.dynamodb_table_name}_consumed_write_units"
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = "2"
metric_name = "ConsumedWriteCapacityUnits"
namespace = "AWS/DynamoDB"
period = "120"
statistic = "Average"
dimensions {
TableName = "${var.dynamodb_table_name}"
}
threshold = "${var.consumed_write_units_threshold}"
alarm_description = "This metric monitors DynamoDB ConsumedWriteCapacityUnits for ${var.dynamodb_table_name}"
insufficient_data_actions = []
}
...
You could define all of your DynamoDB table alarms in a single place and then source that module whenever you create a DynamoDB table, and every table would get metric alarms for everything you care about. It's then easy to add/remove/modify these alarms in a single place (the module) and have that automatically applied to your tables on the next terraform apply.
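For example, sourcing the module for one table could look like this (the module path, table name, and threshold values here are placeholders):
module "orders_table_alarms" {
  source = "./modules/dynamodb_table_alarms"
  dynamodb_table_name = "orders"
  consumed_read_units_threshold = "240"
  consumed_write_units_threshold = "120"
}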
Ideally you'd be able to create a single DynamoDB module that creates the table as well as the alarms at the same time, but the DynamoDB table resource isn't particularly flexible, so it would be a nightmare to design a module that allows for the flexibility you need (mostly different defined attributes and indexes).
If you wanted, you could combine this with looping to reduce some of the module code, in exchange for issues around changing the looped things: Terraform will see those resources changing and force recreation even of things that shouldn't be affected. Since you are just creating alarms (nothing stateful or that needs to be up 100% of the time), this isn't a huge issue, but be aware that it could require a second terraform apply to fully apply changes made that way.
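For completeness, a sketch of what the count-based variant might look like in the interpolation syntax used above, with the recreation caveat in mind (the list variable and resource names are placeholders):
variable "dynamodb_table_names" {
  type = "list"
  default = []
}
resource "aws_cloudwatch_metric_alarm" "consumed_read_units_per_table" {
  count = "${length(var.dynamodb_table_names)}"
  alarm_name = "dynamodb_${element(var.dynamodb_table_names, count.index)}_consumed_read_units"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods = "2"
  metric_name = "ConsumedReadCapacityUnits"
  namespace = "AWS/DynamoDB"
  period = "120"
  statistic = "Average"
  dimensions {
    TableName = "${element(var.dynamodb_table_names, count.index)}"
  }
  threshold = "${var.consumed_read_units_threshold}"
  insufficient_data_actions = []
}
Note that removing a name from the middle of dynamodb_table_names shifts the indexes, so Terraform will want to recreate the alarms that follow it; as noted above, that's tolerable for alarms but worth knowing about.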

How to mock 2 or multiple prepared statements per connection in jMock?

public void testCreate() throws ApplicationException {
    DutyDrawback drawback = new DutyDrawback();
    drawback.setSerialNumber("TEST123");
    drawback.setSnProcessInd("Y");
    drawback.setMediaNumber("TEST111");
    drawback.setMnProcessInd("Y");
    drawback.setConfirmedInd("Y");
    drawback.setResponseCode("1");
    drawback.setResponseMsg("MSG");
    drawback.setLastChangedUser("USER");
    drawback.setLastChangedDate(new Date(System.currentTimeMillis()));

    mockILogger.expects(atLeastOnce()).method("informationalEvent");
    Logger.setMockLogger((ILogger) mockILogger.proxy());

    // Set up the statement expectations
    Mock mockStatement = mock(PreparedStatement.class);
    mockStatement.expects(atLeastOnce()).method("setString");
    mockStatement.expects(once()).method("executeUpdate").will(returnValue(1));
    Mock mockStatement1 = mock(PreparedStatement.class);
    mockStatement1.expects(atLeastOnce()).method("setString");
    mockStatement1.expects(once()).method("executeUpdate").will(returnValue(1));

    // Set up the connection expectations
    mockConnection.expects(once()).method("setAutoCommit");
    mockConnection.expects(once()).method("commit");
    mockConnection.expects(once()).method("prepareStatement").will(returnValue(mockStatement1.proxy()));
    mockConnection.expects(once()).method("prepareStatement").will(returnValue(mockStatement.proxy()));
    TVSUtils.setMockConnection((Connection) mockConnection.proxy());

    DutyDrawbackDAO drawbackDAO = new DutyDrawbackDAO();
    int count = drawbackDAO.create(drawback);
    assertEquals(2, count);
}
I am getting the following trace:
org.jmock.core.DynamicMockError: mockPreparedStatement: unexpected invocation
Invoked: mockPreparedStatement.executeUpdate()
Allowed:
expected at least once and has been invoked: setString, is void
expected once and has been invoked: executeUpdate, returns <1>
at org.jmock.core.AbstractDynamicMock.mockInvocation(AbstractDynamicMock.java:96)
at org.jmock.core.CoreMock.invoke(CoreMock.java:39)
at $Proxy2.executeUpdate(Unknown Source)
at com.cat.tvs.dao.DutyDrawbackDAO.create(DutyDrawbackDAO.java:102)
at com.cat.tvs.dao.DutyDrawbackDAOTest.testCreate(DutyDrawbackDAOTest.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:58)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:60)
at java.lang.reflect.Method.invoke(Method.java:391)
at junit.framework.TestCase.runTest(TestCase.java:164)
at org.jmock.core.VerifyingTestCase.runBare(VerifyingTestCase.java:39)
at junit.framework.TestResult$1.protect(TestResult.java:106)
at junit.framework.TestResult.runProtected(TestResult.java:124)
at junit.framework.TestResult.run(TestResult.java:109)
at junit.framework.TestCase.run(TestCase.java:120)
at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
I am not sure what I am doing wrong here.
Thanks.
Firstly, you seem to be using jMock 1; you might consider moving to jMock 2.
The failure says that you're calling executeUpdate() when it's not expected. Are you calling it more than once?
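If DutyDrawbackDAO.create really does call executeUpdate() twice on the same statement, the expectations have to allow for that. A minimal sketch in the jMock 1 style already used above (whether the DAO actually behaves this way is an assumption to verify):
// Allow two executeUpdate() calls by registering the expectation twice
// (each once() expectation is satisfied by one invocation):
mockStatement.expects(once()).method("executeUpdate").will(returnValue(1));
mockStatement.expects(once()).method("executeUpdate").will(returnValue(1));
// ...or, if the exact call count doesn't matter for this test:
mockStatement.expects(atLeastOnce()).method("executeUpdate").will(returnValue(1));
Also be aware that the two prepareStatement expectations on mockConnection are interchangeable as written, so if the two statements need different behaviour it's safer to match on the SQL string passed to prepareStatement.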
Finally, I usually recommend not writing mock tests against JDBC, even though I wrote a paper many years ago showing how to do it. We talk about "only mock types you own" for a number of reasons, one of which is that mock tests assume that the third party implementation doesn't change. For JDBC, I'd recommend writing integration tests that run against a small instance of the real database.

AX 2009: Adjusting User Group Length

We're looking into refining our User Groups in Dynamics AX 2009 into more precise and fine-tuned groupings due to the wide range of variability between specific people within the same department. With this plan, it wouldn't be uncommon for the majority of our users to fall under 5+ user groups.
Part of this would involve us expanding the default length of the User Group ID from 10 to 40 (as per Best Practice for naming conventions) since 10 characters don't give us enough room to adequately name each group as we would like (again, based on Best Practice Naming Conventions).
We have found that the main information seems to come from the UserGroupInfo table, but that table isn't present under the Data Dictionary (it's under the System Documentation, so it can't be changed that way, as I understand it). We've also found the UserGroupName EDT, but that is already set to 40 characters. The form itself doesn't seem to restrict the length of the field either. We've discussed changing the field in SQL directly, but again my understanding is that a full synchronization would overwrite this change.
Where can we go to change this particular setting, or is it possible to change?
The size of the user group id is defined by a system extended data type (here \System Documentation\Types\userGroupId), and you cannot change any of its properties, including the length of 10.
You will have to live with that; don't try to fool the system using direct SQL changes. Even if you did, AX would still believe the length is 10.
You could change the SysUserInfo form to show the group name only. The groupId might as well be assigned by a number sequence in your context.
I wrote a job to change the string size via X++, and it works for EDTs, but it can't seem to find "userGroupId". From the general feel I get of AX, I'd be willing to guess that it just lives in a different location, but maybe not. I wonder if this could be tweaked to work:
static void Job9(Args _args)
{
    #AOT
    TreeNode treeNode;
    Struct propertiesExt;
    Map mapNewPropertyValues;

    void setTreeNodePropertyExt(
        Struct _propertiesExt,
        Map _newProperties
        )
    {
        Counter propertiesCount;
        Array propertyInfoArray;
        Struct propertyInfo;
        str propertyValue;
        int i;
        ;
        _newProperties.insert('IsDefault', '0');
        propertiesCount = _propertiesExt.value('Entries');
        propertyInfoArray = _propertiesExt.value('PropertyInfo');
        for (i = 1; i <= propertiesCount; i++)
        {
            propertyInfo = propertyInfoArray.value(i);
            if (_newProperties.exists(propertyInfo.value('Name')))
            {
                propertyValue = _newProperties.lookup(propertyInfo.value('Name'));
                propertyInfo.value('Value', propertyValue);
            }
        }
    }
    ;
    treeNode = TreeNode::findNode(#ExtendedDataTypesPath);
    // This doesn't seem to be able to find the system type
    //treeNode = treeNode.AOTfindChild('userGroupId');
    treeNode = treeNode.AOTfindChild('AccountCategory');
    propertiesExt = treeNode.AOTgetPropertiesExt();
    mapNewPropertyValues = new Map(Types::String, Types::String);
    mapNewPropertyValues.insert('StringSize', '30');
    setTreeNodePropertyExt(propertiesExt, mapNewPropertyValues);
    treeNode.AOTsetPropertiesExt(propertiesExt);
    treeNode.AOTsave();
    info("Done");
}
