SMPP optional parameters - asynchronous

Good day! I'm currently working on a system that uses JMS queues to send messages over SMPP (using the Logica SMPP library).
My problem is that I need to attach an internal id (one that we manage within our system) to the message sequence number, so that when a response arrives in async mode, the proper action can be taken for that particular message.
The first option I tried was to use optional parameters, as defined in SMPP 3.4, but I do not receive the optional parameters in the response (I've read that whether the response carries the optional parameters depends on the provider).
A second approach was to keep an in-memory mapping for those messages until their responses are received, but it saturates the memory, so it is a no-go.
Can anyone think of a viable solution for correlating an internal system id of a message with its sequence number in an asynchronous SMPP environment?
Thank you for your time.

You need to keep a map of seq_nr → internal message id and delete entries from this map as soon as you get an async response back from the SMSC.
It should not saturate the memory, as it only holds in-flight messages, but you do need to periodically iterate over the map and delete orphaned entries (sometimes you will not get a response back from the SMSC).
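A minimal sketch of such a map (class and method names here are my own, not part of the Logica API): a ConcurrentHashMap keyed by sequence number, with a creation timestamp per entry so orphaned entries can be purged by a periodic task.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Correlates SMPP sequence numbers with internal message ids for in-flight messages. */
public class PendingMessageMap {
    private static class Entry {
        final String internalId;
        final long createdAt;
        Entry(String internalId, long createdAt) {
            this.internalId = internalId;
            this.createdAt = createdAt;
        }
    }

    private final Map<Integer, Entry> pending = new ConcurrentHashMap<>();

    /** Call just after submitting: remembers which internal id owns this sequence number. */
    public void register(int sequenceNumber, String internalId) {
        pending.put(sequenceNumber, new Entry(internalId, System.currentTimeMillis()));
    }

    /** Call from the async response handler: returns the internal id and frees the slot. */
    public String complete(int sequenceNumber) {
        Entry e = pending.remove(sequenceNumber);
        return e == null ? null : e.internalId;
    }

    /** Run periodically (e.g. from a ScheduledExecutorService) to drop orphaned entries. */
    public int purgeOlderThan(long maxAgeMillis) {
        long cutoff = System.currentTimeMillis() - maxAgeMillis;
        int removed = 0;
        for (Iterator<Map.Entry<Integer, Entry>> it = pending.entrySet().iterator(); it.hasNext(); ) {
            if (it.next().getValue().createdAt < cutoff) {
                it.remove();
                removed++;
            }
        }
        return removed;
    }

    public int size() {
        return pending.size();
    }
}
```

Since the map only ever holds messages awaiting a response, its size is bounded by your in-flight window plus whatever the purge task has not yet cleaned up.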

Related

How to Improve Performance of Kafka Producer when used in Synchronous Mode

I have developed an application against Kafka version 0.9.0.1 that cannot afford to lose any messages.
I have a constraint that the messages must be consumed in the correct sequence.
To ensure I do not lose any messages, I have implemented retries within my application code and configured my producer with acks=all.
To enforce exception handling and to fail fast, I immediately call get() on the Future returned by Producer.send(), e.g.
final Future<RecordMetadata> futureRecordMetadata = KAFKA_PRODUCER.send(producerRecord);
futureRecordMetadata.get();
This approach works fine for guaranteeing the delivery of all messages; however, the performance is completely unacceptable.
For example, it takes 34 minutes to send 152,125 messages with acks=all.
When I comment out the futureRecordMetadata.get(), I can send 1,089,125 messages in 7 minutes.
When I change acks=all to acks=1, I can send 815,038 messages in 30 minutes. Why is there such a big difference between acks=all and acks=1?
However, by not blocking on get(), I have no way of knowing whether the message arrived safely.
I know I can pass a Callback into send() and have Kafka retry for me; however, this approach has the drawback that messages may end up out of sequence.
I thought request.required.acks config could save the day for me, however when I set any value for it I receive this warning
130 [NamedConnector-Monitor] WARN org.apache.kafka.clients.producer.ProducerConfig - The configuration request.required.acks = -1 was supplied but isn't a known config.
Is it possible to asynchronously send Kafka messages, with a guarantee they will ALWAYS arrive safely and in the correct sequence?
UPDATE 001
Is there any way I can consume messages in Kafka message KEY order directly from the topic? Or would I have to consume messages in offset order and then sort them programmatically into Kafka message key order?
If you expect a total order, send performance will be poor (in practice, a total-order requirement is very rare).
If per-partition order is acceptable, you can use multiple producer threads: one producer/thread per partition.
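On the configuration warning above: request.required.acks is the old Scala producer's name for this setting; the new Java producer in 0.9 simply calls it acks, which is why ProducerConfig rejects it. A sketch of a producer config that lets Kafka retry for you without reordering (the property names are real new-producer configs; the values are illustrative):

```properties
# Wait for the full in-sync replica set to acknowledge each write.
acks=all
# Let the producer retry transient failures itself...
retries=3
# ...but allow only one unacknowledged request per connection, so a retry
# cannot leapfrog a later message and break per-partition ordering.
max.in.flight.requests.per.connection=1
```

With the producer handling retries and only one in-flight request per connection, per-partition order is preserved, and you can block on a whole batch of returned Futures at the end of the batch, rather than after every send, to recover most of the throughput.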

Erlang: difference between using gen_server:cast/2 and standard message passing

I was working through a problem and noticed some code where a previous programmer was passing messages using the standard convention of PID ! Message. I have been using gen_server:cast/2. I was wondering if somebody could explain the critical differences, and the considerations when choosing between the two?
There are a few minor differences:
Obviously, the gen_server handles casts in handle_cast and "normal" messages in handle_info.
A cast never fails; it always returns ok. Sending a message with ! fails with badarg if you are sending a message to an atom that is currently not registered by a process. (Sending a message to a pid never causes an error, even if the process is dead.)
If the gen_server is running on a remote node that is not currently connected to the local node, then gen_server:cast will spawn a background process to establish the connection and send the message, and return immediately, while ! only returns once the connection is established. (See the code for gen_server:do_send.)
As for when to choose one or the other, it's mostly a matter of taste. I'd say that if the message could be thought of as an asynchronous API function for the gen_server, then it should use cast, and have a specific API function in the gen_server callback module. That is, instead of calling gen_server:cast directly, like this:
gen_server:cast(foo_proc, {some_message, 42})
make a function call:
foo_proc:some_message(42)
and implement that function like the direct cast above. That encapsulates the specific protocol of the gen_server inside its own module.
In my mind, "plain" messages would be used for events, as opposed to API calls. An example would be monitor messages, {'DOWN', Ref, process, Id, Reason}, and events of a similar kind that might happen in your system.
In addition to legoscia's post, I would say that it is easier to trace a dedicated API function than raw messages, especially in a production environment.

Meteor DDP - "ready" and "update" messages clarification

I'm currently implementing a DDP client based on the specs available on this page:
https://github.com/meteor/meteor/blob/master/packages/livedata/DDP.md
I just have a doubt concerning the two message types called "ready" and "updated".
Let's start with "ready"; according to the spec:
When one or more subscriptions have finished sending their initial
batch of data, the server will send a ready message with their IDs.
Does this mean that we can receive several "added" messages from the server until the whole collection has been transferred to the client? Should we store these in a temporary place and then wait for the "ready" semaphore before making them public, i.e. putting them in the real collection?
The same question applies to remote procedure calls: should I store the result in a temporary collection and only return (process) the result once the "updated" message is received?
This part is unclear:
Once the server has finished sending the client all the relevant data messages based on this procedure call, the server should send an
updated message to the client with this method's ID.
"Should", so I'm stuck if I do rely on it but nothing ?
Should we store this in a temporary place and then wait for the "ready" semaphore before making it public, i.e. in the real collection?
The standard Meteor JavaScript client makes added documents available in the client collection as they come in from the server. So if, for example, the collection is being displayed on the web page and 5 of 100 documents have arrived so far, the user will be able to see the 5 documents.
When the subscription "ready" message arrives, the subscription on the client is marked as "ready", which the client can use if they're doing something that needs to wait for all the data to arrive.
Whether you want to wait in your client for all the data to arrive before making it public is up to you... it depends on what you're doing with your client and if you want to show documents as they arrive or not.
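If you do decide to buffer in your own client, a minimal sketch of the idea (the class and method names are hypothetical, not part of any DDP library; documents are shown as raw JSON strings for brevity): stage incoming "added" documents per subscription id and publish them only when that subscription's "ready" arrives.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Stages DDP "added" documents until the subscription's "ready" message arrives. */
public class SubscriptionBuffer {
    private final Map<String, List<String>> staging = new HashMap<>();
    private final List<String> publicCollection = new ArrayList<>();

    /** Handle a DDP "added" message for the given subscription id. */
    public void onAdded(String subId, String documentJson) {
        staging.computeIfAbsent(subId, k -> new ArrayList<>()).add(documentJson);
    }

    /**
     * Handle the DDP "ready" message. Note the real "ready" message carries a
     * "subs" array of subscription ids, so call this once per listed id.
     */
    public void onReady(String subId) {
        List<String> docs = staging.remove(subId);
        if (docs != null) {
            publicCollection.addAll(docs);
        }
    }

    /** The "real" collection, visible to the rest of the application. */
    public List<String> getPublicCollection() {
        return publicCollection;
    }
}
```

The Meteor client itself skips the staging step and publishes documents as they arrive, so this buffering is only worthwhile if your UI genuinely must not show partial data.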
"Should", so I'm stuck if I do rely on it but nothing ?
The Meteor server does send the "updated" message, so you can rely on it.
The same question regarding the remote procedure calls. Should I store the result in a temporary collection and only return (process) the result once the "updated" message is received ?
There are two outcomes from making a method call: the return value (or error) returned by the method (the "result" message), and documents that may have been inserted / updated / removed by the method call (the "updated" message). Which one you want to listen to is up to you: whether it's important for you to know when you've received all the document changes coming from the method call or if you just want the method return value.
The "updated" message is used by the Meteor client to perform "latency compensation": when the client changes a local document, the change is applied immediately to the local document (and the changes will be visible to the user)... on the assumption that the changes will likely be accepted by the server. Then the client makes a method call requesting the change, and waits for the updated documents to be sent from the server (which may include the changes if they were accepted, or not, if they were rejected). When the "update" message is received, the local changes are thrown away and replaced by the real updates from the server. If you're not doing latency compensation in your own client then you may not care about the "updated" message.

How to force the current message to be suspended and be retried later on from within a custom BizTalk **send** pipeline component?

Here is my scenario. BizTalk needs to transfer a file from a shared/central document library. First BizTalk receives an incoming message with a reference/path to this document in the library. Then it simply needs to read it from this library and send it out (potentially through different adapters). This is, in essence, a scenario not far removed from the Claim Check EAI pattern.
Some ways to implement a claim check have been documented, notably the BizTalk ESB Toolkit Claim Check, and BizTalk 2009: Dealing with Extremely Large Messages, Part I & Part II. These implementations do, however, assume that the send pipeline can immediately read the stream that has been "checked in."
That is not my case: the document will take some time to become available in the shared library, and I cannot delay the initial received message. That leaves me with two options: either introduce some delay via an orchestration or ensure the send port will retry later if the document is not there yet.
(A delay can only be introduced via an orchestration; there are no time-based subscriptions in BizTalk. Right?)
Since this is a message-only flow, I figured I could skip the orchestration. I have seen ways to implement "Custom Retry Logic in a Message-Only Solution Using Pipeline", but what I need is not only a way to control the retry behavior (as performed by the adapter) but also a way to trigger it right from within the pipeline…
Every attempt I made so far just ended up with a suspended message that won't be automatically retried, even though the send adapter had retries configured… If this is indeed possible, then where/how should I do it?
Oh right… and there is queuing… but unfortunately neither on premises nor in the cloud ;)
OK I may be pushing the limits… but just out of curiosity…
Many thanks for your help and suggestions!
I'm puzzled as to how this could be done without an orchestration. The only way I can think of would be along these lines:
The receive port for the initial messages just 'eats' the messages,
e.g. by subscribing them to a dummy send port with the Null Adapter,
ignoring them totally.
You monitor the shared document library with a receive port, looking for any new documents there.
Any located documents are subscribed to by a send port and sent downstream.
An orchestration based approach would be along the lines of:
The orchestration is triggered by receipt of the initial notification of an 'upcoming' new file in the library. If your initial notification is request-response (e.g. an exposed web service), you can immediately and synchronously issue the response.
Another receive port is used to monitor the availability of, and retrieve, the file from the shared library, correlating with the original notification message (e.g. by filename, or another key).
A mechanism handles the retry if the document isn't available, and potentially an eventual timeout, e.g. if the document never makes it to the shared library.
And on success, a send port then sends the document downstream.
Placing a Delay shape in the orchestration will offer more scalability than e.g. using Thread.Sleep() or similar in custom adapter or pipeline code, since BTS just calculates and stamps the 'awaken' timestamp on the SQL record and can then dehydrate the orchestration, freeing up the thread.
The 'is the file there yet?' check can be done with a retry loop, delaying after each failed check, with a parallel branch enforcing a timeout, e.g. after an hour or so.
The polling interval can be controlled in the receive location, so I do not understand what you mean by there being no time-based subscriptions in BizTalk. You also have a schedule window.
One way to introduce a delay is to send the initial message to an internal web service, which simply posts the message back to BizTalk after a specified time interval.
There are also loopback adapters, which simply post the message back into the MessageBox; this can be amended to add a delay.

In Mate, Sending two or more requests to the server simultaneously?

I'm using Mate's RemoteObjectInvoker to call methods in my FluorineFX-based API. However, all requests seem to be sent to the server sequentially. That is, if I dispatch a group of messages at the same time, the 2nd one isn't sent until the first returns. Is there any way to change this behavior? I don't want my app to be unresponsive while a long request is processing.
This thread will help you understand what happens (it talks about BlazeDS/LiveCycle, but I assume Fluorine uses the same approach). In a few words, what happens is:
a) The Flash player groups all your calls into one HTTP POST.
b) The server (BlazeDS, Fluorine, etc.) receives the request and starts executing the methods serially, one after another.
Solutions
a) Have one HTTP POST per method, instead of one HTTP POST containing all the AMF messages. For that you can use HTTPChannel instead of AMFChannel (internally it uses flash.net.URLLoader instead of flash.net.NetConnection). You will be limited to the maximum number of parallel connections allowed by your browser.
b) Have only one HTTP POST but implement a clever solution on the server (it will cost you a lot of development time). Basically, you can write your own parallel processor and use message consumers/publishers in order to send the results of your methods to the client.
c) There is a workaround similar to a) at https://bugs.adobe.com/jira/browse/BLZ-184: create your RemoteObject by hand and append a random id to the end of the endpoint.
