I'm using the DAX client along with DynamoDBMapper, with the following dependencies:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-dynamodb</artifactId>
    <version>1.11.751</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>amazon-dax-client</artifactId>
    <version>1.0.205917.0</version>
</dependency>
The following code shows how the mapper is initialized:
AmazonDaxClientBuilder builder = AmazonDaxClientBuilder.standard();
builder.withRegion("<region>");
builder.withEndpointConfiguration("<DAX endpoint>");
client = builder.build();
DynamoDBMapper dynamoDBMapper = new DynamoDBMapper(client);
With this setup, the DAX metrics show 0 cache hits and 0 cache misses for queries. But when I remove the mapper and query the table directly, I can see cache hits.
I'm not using consistent reads, as described here: AWS DAX cluster has zero cache hits and cache miss
Am I doing something wrong, or am I missing a configuration step?
I'd also like to know whether there's a way to get DAX cache hit/miss details for each request, so debugging would be a bit easier.
The reason for the cache misses was the DynamoDBQueryExpression, which sets the consistentRead flag to true by default.
Although DynamoDBMapperConfig defaults to EVENTUAL consistent reads, DynamoDBQueryExpression defaults to consistent reads. Because of this, if you are using a DynamoDBQueryExpression you have to explicitly set consistentRead to false.
When consistent reads are requested, DAX forwards the requests directly to DynamoDB, which is why we didn't see cache hits in the DAX metrics.
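For example, a minimal sketch of an eventually consistent query (MyEntity is a placeholder for your @DynamoDBTable-annotated class):

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBQueryExpression;
import java.util.List;

MyEntity hashKey = new MyEntity();
hashKey.setId("some-id"); // placeholder hash key value

DynamoDBQueryExpression<MyEntity> queryExpression =
        new DynamoDBQueryExpression<MyEntity>()
                .withHashKeyValues(hashKey)
                // default is true, which makes DAX pass the query through to DynamoDB
                .withConsistentRead(false);

List<MyEntity> results = dynamoDBMapper.query(MyEntity.class, queryExpression);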
I have a list of entities, List<Entity> entitiesList. I need to publish and store a list of events, one for each entity.
I have an aggregate for Entity, all the necessary handlers, a CreateEntityCommand, and an EntityCreatedEvent.
Currently what I do:
1. Create commands in a loop and send them via the command gateway for each entity from entitiesList:
for (Entity entity : entitiesList) {
    CreateEntityCommand createEntityCommand = new CreateEntityCommand();
    // … here I set the command’s fields …
    commandGateway.send(createEntityCommand);
}
Inside the aggregate I have:
@CommandHandler
public EntityAggregate(CreateEntityCommand createEntityCommand) {
    EntityCreatedEvent entityCreatedEvent = new EntityCreatedEvent();
    // … here I set the event’s fields …
    AggregateLifecycle.apply(entityCreatedEvent);
}
As a result, the events are created, published, and saved into the DomainEventEntry table one by one inside the loop.
If I have 10,000 entities, this process takes a lot of time.
My question is: how can I improve this process of creating, publishing, and saving a list of entities?
I use this version of Axon:
<dependency>
    <groupId>org.axonframework</groupId>
    <artifactId>axon-spring-boot-starter</artifactId>
    <version>4.3</version>
    <exclusions>
        <exclusion>
            <groupId>org.axonframework</groupId>
            <artifactId>axon-server-connector</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Spring Boot is configured with the @SpringBootApplication annotation. I haven't configured anything Axon-specific.
What I believe you need is to parallelize the processing of the commands to speed up the work. There are two ways to achieve this:
Change the local CommandBus
Distribute your application
I am assuming you are at point 1, hence my answer will be tailored towards this.
When you have a single instance of an Axon application using Spring Boot, a SimpleCommandBus will be auto-configured for you. This does not provide any possibility for concurrent work, so configuring a different CommandBus bean should be the way to go.
I'd suggest starting with the AsynchronousCommandBus. This implementation uses an Executor (which you can configure further if you so desire) to spin up threads that dispatch (and handle) each command going through it.
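A minimal sketch of such a bean, assuming plain Spring Boot auto-configuration (the class name and pool size are made-up examples):

import java.util.concurrent.Executors;
import org.axonframework.commandhandling.AsynchronousCommandBus;
import org.axonframework.commandhandling.CommandBus;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AxonCommandBusConfig {

    // Providing our own CommandBus bean overrides the auto-configured
    // SimpleCommandBus. The pool size of 8 is an arbitrary example value;
    // tune it to your workload.
    @Bean
    public CommandBus commandBus() {
        return AsynchronousCommandBus.builder()
                .executor(Executors.newFixedThreadPool(8))
                .build();
    }
}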
If this is still too slow for your liking, I'd try out the DisruptorCommandBus (for specifics on what the "Disruptor" is, you can look here). This CommandBus implementation uses two thread pools: one for handling commands and another for storing the events.
Lastly, if you are already working with a distributed version of the CommandBus (with Axon Server, or with the DistributedCommandBus), you will have to provide a CommandBus bean with the qualifier "localSegment" attached to it. For a quick overview of the command buses provided by Axon, have a look at their Reference Guide (here).
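For completeness, a sketch of such a "localSegment" bean, assuming the axon-disruptor module is on the classpath (note that a DisruptorCommandBus typically needs further wiring, such as creating your repositories through it):

import org.axonframework.commandhandling.CommandBus;
import org.axonframework.disruptor.commandhandling.DisruptorCommandBus;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;

// The "localSegment" qualifier tells Axon's auto-configuration to use this
// bus for local command handling behind the distributed bus.
@Bean
@Qualifier("localSegment")
public CommandBus localSegment() {
    return DisruptorCommandBus.builder().build();
}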
I have a web application (Spring) which I want to instrument using AWS X-Ray. I have added AWSXRayServletFilter in my web.xml, and the snippet below in my Spring configuration class, as per the documentation.
static {
    AWSXRayRecorderBuilder builder = AWSXRayRecorderBuilder.standard()
            .withPlugin(new EC2Plugin()).withPlugin(new ECSPlugin());
    builder.withSamplingStrategy(new DefaultSamplingStrategy());
    AWSXRay.setGlobalRecorder(builder.build());
}
The following dependency is also added in pom.xml:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-xray-recorder-sdk-aws-sdk-instrumentor</artifactId>
    <version>1.2.0</version>
</dependency>
During application startup, I get the exception below:
com.amazonaws.xray.exceptions.SegmentNotFoundException: Failed to begin subsegment named 'AmazonDynamoDBv2': segment cannot be found
Any pointers to solve this would be helpful.
When you initialize the global recorder, you should also start the parent segment. You're trying to create a subsegment without a segment.
AWSXRay.setGlobalRecorder(AWSXRayRecorderBuilder.defaultRecorder());
AWSXRay.beginSegment("MySeg");
I only got this in JUnit tests where DynamoDB was mocked. To fix it, I put
AWSXRay.beginSegment("AmazonDynamoDBv2");
in the test setup (@Before), as sketched below.
I did not touch the implementation, since I believe this is something Amazon already does in its SDK?
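A minimal sketch of that test setup (JUnit 4; the class name is arbitrary):

import com.amazonaws.xray.AWSXRay;
import org.junit.After;
import org.junit.Before;

public class MyDynamoDbTest {

    @Before
    public void openSegment() {
        // Open a parent segment so instrumented AWS SDK calls made in the
        // test have something to attach their subsegments to.
        AWSXRay.beginSegment("AmazonDynamoDBv2");
    }

    @After
    public void closeSegment() {
        AWSXRay.endSegment();
    }
}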
I use CppSQLite 3.5.7.
Is there a way to log / get information about whether an SQLite query used the disk or the cache to get its results?
The SQLite C API has the sqlite3_db_status() function, which allows querying the SQLITE_DBSTATUS_CACHE_HIT/MISS counters.
The CppSQLite library does not expose this API; you'd have to modify it.
I need an alerting system where I can define my own metrics and thresholds to report anomalies (basically, alerting on the basis of logs and data in a DB). I explored Bosun but am not sure how to make it work. I have the following issues:
There are pre-defined items, which are all system-level, but I couldn't find a way to add new, i.e. custom, items.
How will Bosun ingest data other than via scollector? As I understand it, could I use Logstash as a data source and skip OpenTSDB entirely (I really don't like the HBase dependency)?
By "items" I think you mean metrics. Bosun learns about metrics, and their tag relationships, when you do one of the following:
Relay OpenTSDB data through Bosun (http://bosun.org/api#sending-data)
Get copies of metrics sent to the api/index route (http://bosun.org/api#apiindex)
There are also metadata routes, which tell Bosun about the metric, such as whether it is a counter or gauge, its unit, and its description.
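As an illustration of the first option, here is a hedged sketch that POSTs one custom data point to Bosun's OpenTSDB-compatible /api/put route; the host, port, metric name, and tags are placeholder assumptions:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class BosunPut {
    public static void main(String[] args) throws Exception {
        // One OpenTSDB-style data point; Bosun indexes it as it relays it on.
        String json = "{\"metric\":\"myapp.errors\","
                + "\"timestamp\":" + (System.currentTimeMillis() / 1000) + ","
                + "\"value\":42,"
                + "\"tags\":{\"host\":\"web01\"}}";
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://bosun-host:8070/api/put").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}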
The Logstash datasource will be deprecated in favor of an Elastic datasource in the coming 0.5.0 release. The Elastic replacement is better, but requires Elasticsearch 2+. To use those expressions, see the raw documentation (the bosun.org docs will be updated next release): https://raw.githubusercontent.com/bosun-monitor/bosun/master/docs/expressions.md. To add it, you would have something like the following in the config:
elasticHosts=http://ny-lselastic01.ds.stackexchange.com:9200,http://ny-lselastic02.ds.stackexchange.com:9200,http://ny-lselastic03.ds.stackexchange.com:9200
The functions to query the various backends are only loaded into the expression library when that backend is configured.
I have set up Redis as a caching mechanism on my server with a WordPress site. Basically, on each request I check whether a cache of the page exists, and if so I serve the cached copy.
I'm using Predis (https://github.com/nrk/predis) as an interface to the Redis database.
When I get the info on Redis usage, however, I see only 1 key in the system:
used_memory:103810376
used_memory_human:99.00M
used_memory_rss:106680320
used_memory_peak:222011768
used_memory_peak_human:211.73M
mem_fragmentation_ratio:1.03
mem_allocator:jemalloc-2.2.5
loading:0
aof_enabled:0
changes_since_last_save:8
bgsave_in_progress:0
last_save_time:1396168319
bgrewriteaof_in_progress:0
total_connections_received:726918
total_commands_processed:1240245
expired_keys:22
evicted_keys:0
keyspace_hits:1158841
keyspace_misses:699
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:21712
vm_enabled:0
role:master
db0:keys=1,expires=0
How could this be? I'd expect to see more keys listed, as each cached copy of a page's HTML should have its own key.
What am I missing here?
Without looking at the technical implementation, it could be several things:
1) The pages didn't get a hit, so they are not in the cache
2) The keys have already expired
3) The mechanism uses, for example, an HSET, where you can have N key/value pairs registered under 1 main key. You can check this by running the TYPE command on the single key you've got, as sketched below.
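A minimal sketch of that check. The question uses Predis (PHP), but the commands are the same in any client; this Java/Jedis version and the key name are illustrative assumptions:

import redis.clients.jedis.Jedis;

public class InspectKey {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "page-cache"; // placeholder for the single key you see
            String type = jedis.type(key); // TYPE key
            System.out.println("TYPE " + key + " -> " + type);
            if ("hash".equals(type)) {
                // Each cached page would then be a field under this one key.
                System.out.println("HLEN " + key + " -> " + jedis.hlen(key));
            }
        }
    }
}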