Kafka Producer for internal topics with individual configs - spring-kafka

One of my topologies generates an internal state store, e.g. KSTREAM-AGGREGATE-STATE-STORE-0000000031 (see the snippet below), for which an internal topic <app-id>KSTREAM-AGGREGATE-STATE-STORE-0000000031-changelog is created.
<...>
Processor: KSTREAM-FLATMAPVALUES-0000000022 (stores: [])
  --> KSTREAM-AGGREGATE-0000000032, KSTREAM-FLATMAP-0000000027, KSTREAM-MAP-0000000023, KSTREAM-MAP-0000000025, KSTREAM-MAP-0000000029
  <-- KSTREAM-TRANSFORMVALUES-0000000017
Processor: KSTREAM-AGGREGATE-0000000032 (stores: [KSTREAM-AGGREGATE-STATE-STORE-0000000031])
  --> KTABLE-TOSTREAM-0000000033
  <-- KSTREAM-FLATMAPVALUES-0000000022
Processor: KTABLE-TOSTREAM-0000000033 (stores: [])
  --> KSTREAM-PEEK-0000000034
  <-- KSTREAM-AGGREGATE-0000000032
<...>
The topology is defined as follows (BusObjKey and BusObj are both Avro objects with corresponding serdes; TransformBusObj provides the business logic for the aggregation and the later mapping):
<...>
KStream<BusObjKey, BusObj> busObjStream = otherBusObjStream
        .groupByKey()
        .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
        .aggregate(BusObj::new,
                   TransformBusObj::aggregate,
                   Materialized.with(busObjKeySerde, busObjSerde))
        .toStream()
        .map(TransformBusObj::map);
<...>
How can I control the properties used by the producer that creates, as well as sends messages to, <app-id>KSTREAM-AGGREGATE-STATE-STORE-0000000031-changelog? In particular, I would need to turn compression on (e.g. config.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy")). Since I do not want to have compression on all the other producers, I wonder how to achieve this in Spring Boot.

If you use config.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy"), it will turn on compression for all the producers in the Kafka Streams application. The only workaround I can think of would be to provide your own producer via the overloaded KafkaStreams constructor that accepts a KafkaClientSupplier. In your custom producer, you can inspect the topic name before sending and manually compress the data. Since you're manually compressing the data, I believe you'd also have to provide a custom restore consumer that "knows" to decompress. But I'm not sure this suggestion would even work or would be worth the effort.
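To illustrate the hook point only, here is a minimal sketch (the class name is made up, and DefaultKafkaClientSupplier lives in an internal Kafka Streams package, so you may prefer to implement the public KafkaClientSupplier interface directly). Note that per-topic compression would additionally require wrapping the returned producer so it can inspect each record's topic:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.streams.processor.internals.DefaultKafkaClientSupplier;

// Sketch: forces snappy on the producer Kafka Streams creates. As written this
// still affects every topic that producer writes to, not only the changelog;
// per-topic behavior needs a wrapping Producer that checks each record's topic().
public class CompressingClientSupplier extends DefaultKafkaClientSupplier {
    @Override
    public Producer<byte[], byte[]> getProducer(Map<String, Object> config) {
        Map<String, Object> withCompression = new HashMap<>(config);
        withCompression.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        return new KafkaProducer<>(withCompression,
                new ByteArraySerializer(), new ByteArraySerializer());
    }
}

You would then plug it in via the overloaded constructor, e.g. new KafkaStreams(topology, props, new CompressingClientSupplier()).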
HTH,
Bill

Related

Is there a syslog private enterprise number for custom/internal use?

So I was recently looking for a way to add extra metadata to logs and found out that syslog has me covered. I can add custom metadata using the SD-ID feature like this:
[meta@1234 project="project-name" version="1.0.0-RC5" environment="staging" user="somebody@example.com"]
The problem is that 1234 has to be a syslog private enterprise number.
I assume those are given to big companies like Microsoft or Apple, but not to indie developers.
So my question is: is there a reserved number that everyone can use without registration for internal purposes?
If you use RFC 5424-formatted messages, you can create custom fields in the SDATA (Structured Data) part of the message.
The latter part of a custom field's SD-ID is, as you mentioned, the private enterprise number (or enterpriseId).
RFC 5424 defines it as follows:
7.2.2. enterpriseId
The "enterpriseId" parameter MUST be a 'SMI Network Management Private Enterprise Code', maintained by IANA, whose prefix is iso.org.dod.internet.private.enterprise (1.3.6.1.4.1). The number that follows MUST be unique and MUST be registered with IANA as per RFC 2578 [RFC2578].
Of course it depends on what you're using it for: if it's only for local logs, you can use any enterpriseId, or you can even use a predefined SDATA field with a reserved SD-ID and rewrite its value. (See: syslog-ng Guide)
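For what it's worth, IANA reserves enterprise number 32473 for documentation use (RFC 5612), so a complete RFC 5424 message carrying a custom SD element might look like this (hostname, app name, and field values are made up):
<165>1 2023-10-11T22:14:15.003Z myhost myapp 4711 ID47 [meta@32473 project="project-name" environment="staging"] Application started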

Whats the difference between 'options' and 'overrides' in Rblpapi?

In the documentation here, Bloomberg does not make a distinction in the request. Requests can only have three things: securities, fields, and overrides.
So what are options? How do they get used? Is this a distinction imposed by Rblpapi? Can someone explain the distinction?
Please let me know if I am incorrectly understanding something.
Options are parameters that change how a Request or Subscription should behave. For example, a ref data request with returnEID=true will return the EID(s) of each security in response messages. A Subscription with interval=5.0 will make it an Intervalized Subscription.
Overrides, on the other hand, are field/value pairs that you specify in Requests to alter the form or content of the returned fields. For example, GICS_SECTOR_NAME will normally return the sector name in English (or, more precisely, in the default terminal language); you can specify the SECURITY_NAME_LANG=9 override to get the name in Korean. You can also "request" the SECURITY_NAME_LANG field to know the language used in the GICS_SECTOR_NAME field. Overrides can be used in Request/Response only (not in subscriptions), and are applied to the entire request, i.e. to all fields that react to that override.
option.names = "optName", option.values = "optVal"
in R, maps to:
request.set("optName", optVal);
in Java. E.g:
option.names="periodicitySelection", option.values="MONTHLY")
request.set("periodicitySelection", "MONTHLY");

Maneuver's direction attribute in Routing service's Response

I'm testing the routing service in API 3.0 and I can't find the "direction" attribute in maneuver. This attribute exists in API 2.5; it indicates the direction of the instruction, for example "forward, straight, right...".
Does anybody know if there is some attribute that indicates the direction of the instruction in API 3.0?
Thanks.
As discussed in the migration guide, there is a fundamental shift in the use of services between the 2.x and 3.0 HERE Maps APIs for JavaScript: previously the Manager objects dictated a fixed format for the request to the underlying REST APIs and encapsulated the response, whereas now the full range of parameters can (and should) be set by the developer.
In the routing case the question is not so much "What can the 3.0 API do?" as "How was the REST request fixed by the 2.x API and how can I mimic the parts of that request that I need?".
Looking at the Legacy API playground simple routing example, the underlying REST request is:
http://route.cit.api.here.com/routing/7.2/calculateroute.json?routeattributes=shape&maneuverattributes=all&jsonAttributes=1&waypoint0=geo!52.516,13.388&waypoint1=geo!52.517,13.395&language=en-GB&mode=shortest;car;traffic:default;tollroad:-1&app_id=APP_ID&app_code=TOKEN...
This can be reproduced precisely in the 3.x API with the following:
var router = platform.getRoutingService(),
    routeRequestParams = {
      routeattributes: 'shape',
      maneuverattributes: 'all',
      jsonAttributes: '1',
      waypoint0: '52.516,13.388',
      waypoint1: '52.517,13.395',
      language: 'en-GB',
      mode: 'shortest;car;traffic:default;tollroad:-1'
    };
router.calculateRoute(...);
The next question is which parameters you really need for your application. The parameter list for the calculateRoute endpoint of the underlying REST Routing 7.2 API includes the description of maneuverattributes, which shows how to obtain directions: add direction to the list, i.e. maneuverattributes=...,direction
So it may be possible to reduce the routeRequestParams to something like:
var routeRequestParams = {
      routeattributes: 'shape',
      maneuverattributes: 'position,length,direction',
      ...etc...
    };
So in summary, you'll need to consult the REST Routing API documentation to define what you need first, before passing those parameters into the query of the Maps API for JavaScript calculateRoute() call.

Dynamics AX 2009 AIF Tables

Background
I have an issue where roughly once a month the AIFQueueManager table is populated with ~150 records relating to messages that were sent to AX over 6 months ago (where they "successfully failed", i.e. errored due to violation of business rules, but returned an exception as expected).
Question
What tables are involved in the AIF inbound message process, and in what order do events occur? E.g. the XML file is picked up and recorded in the AifDocumentLog, the data is extracted and added to the AifQueueManager and AifGatewayQueue tables, records from there are then inserted into the AifMessageLog, etc.
Thanks in advance.
There are 4 main AIF classes; I will be talking about inbound only, focusing on the included file system adapter and flat XML files. I hope this makes things a little less hazy.
AIFGatewayReceiveService - Uses adapters/channels to read messages in from different sources, and dumps them in the AifGatewayQueue table
AIFInboundProcessingService - Processes the AifGatewayQueue table data and sends it to the Ax[Document] classes
AIFOutboundProcessingService - The inverse of #2; it creates XMLs with the relevant metadata
AIFGatewaySendService - The inverse of #1; it uses adapters/channels to send messages out to different locations from the AifGatewayQueue
For #1
So #1 basically fills the AifGatewayQueue, which is just a queue of work. It loops through all of your channels and then finds the relevant adapter by ClassId. The adapters are classes that implement AifIntegrationAdapter and AifReceiveAdapter, if you wanted to make your own custom one. As it loops over the different channels, it loops over each "message" and tries to receive it into the queue.
If it can't process a file for some reason, it catches the exception and writes it to the SysExceptionTable [Basic > Periodic > Application Integration Framework > Exceptions]. These messages are scraped from the infolog, and are generated mostly by the receive adapter, which would be AifFileSystemReceiveAdapter in my example.
For #2
So #2 processes the inbound messages sitting in the queue (status ready/in process). AifRequestProcessor\processServiceRequest does the work.
From this method, it will call:
Various calls to Classes\AifMessageManager, which puts records in the AifMessageLog and the AifDocumentLog.
This key line: responseMessage = AifRequestProcessor::executeServiceOperation(message, endpointActionPolicy); which actually does the operation against the Ax[Document] classes by eventually getting to AifDispatcher::callServiceMethod(...)
It gets the return XML, packages it into an AifMessage called responseMessage, and returns it so that it may be logged. It also takes that return value and, if there is a response channel, submits it back into the AifGatewayQueue.
AifQueueManager is actually cleared and populated on the fly by calling AifQueueManager::createQueueManagerData();.

Apache camel using seda

I want to have a behavior like this:
Camel reads a file from a directory, splits it into chunks (using streaming), sends each chunk to a seda queue for concurrent processing, and after the processing is done, a report generator is invoked.
This is my camel route:
from("file://c:/mydir?move=.done")
.to("bean:firstBean")
.split(ExpressionBuilder.beanExpression("splitterBean", "split"))
.streaming()
.to("seda:processIt")
.end()
.to("bean:reportGenerator");
from("seda:processIt")
.to("bean:firstProcessingBean")
.to("bean:secondProcessingBean");
When I run this, the reportGenerator bean is run concurrently with the seda processing.
How can I make it run only once, after the whole seda processing is done?
The splitter has built-in parallel processing, so you can do this more easily as follows:
from("file://c:/mydir?move=.done")
.to("bean:firstBean")
.split(ExpressionBuilder.beanExpression("splitterBean", "split"))
.streaming().parallelProcessing()
.to("bean:firstProcessingBean")
.to("bean:secondProcessingBean");
.end()
.to("bean:reportGenerator");
You can see more details about the parallel option at the Camel splitter page: http://camel.apache.org/splitter
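If you would rather keep the separate seda route from the question, one possible alternative (a sketch, assuming your Camel version supports the waitForTaskToComplete and concurrentConsumers options on seda endpoints) is to make the split block until each chunk is fully processed, and move the concurrency to the consumer side:

from("file://c:/mydir?move=.done")
    .to("bean:firstBean")
    .split(ExpressionBuilder.beanExpression("splitterBean", "split"))
        .streaming()
        // block until the seda consumer has fully processed each chunk
        .to("seda:processIt?waitForTaskToComplete=Always")
    .end()
    .to("bean:reportGenerator");

// concurrency now comes from the consumer side of the queue
from("seda:processIt?concurrentConsumers=5")
    .to("bean:firstProcessingBean")
    .to("bean:secondProcessingBean");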
I think you can use Camel's delayer pattern on the second route to achieve the purpose: delay(long), where the argument is the time in milliseconds. You can read more about this pattern here.
For example:
from("seda:processIt").delay(2000)
    .to("bean:firstProcessingBean"); // delays this route by 2 seconds
I'd also suggest using startupOrder to configure the startup of the routes.
The official documentation provides good details on the topic; kindly read it here.
Point to note: "The routes with the lowest startupOrder is started first. All startupOrder defined must be unique among all routes in your CamelContext."
So, I'd suggest something like this -
from("endpoint1").startupOrder(1)
.to("endpoint2");
from("endpoint2").startupOrder(2)
.to("endpoint3");
Hope that helps.
PS: I'm new to Apache Camel and to Stack Overflow as well. Kindly pardon any mistake that might have occurred.
