Let's imagine that we have a bank and an ATM. They communicate over a network, which can fail. Is it possible to design a scheme where communication between them is 100% reliable? In this case that means:
the client has withdrawn the physical money
<=>
the account balance has been updated accordingly
Let's check a couple of scenarios:
The ATM sends a request and the bank sends a confirmation. The confirmation gets lost: the bank updated the account, but the client never got the money.
(If the bank waits for a confirmation from the ATM before updating the balance:) the ATM sends a request, the bank sends a confirmation, and the ATM sends an ack for its reception. The ack gets lost: the ATM issued the money, but the bank never updated the account.
So I could never come up with a solution where a failing network cannot cause money to be lost on one side or the other.
Please advise.
Actually, if I am not misunderstanding your question, you're probably talking about the Long Wait algorithm.
In your first step, I'd suggest you wait until the confirmation is received (acknowledged) by the ATM, or vice versa. This is the only viable solution in this case: you set up a minimum fixed time bound after which, if the acknowledgement hasn't been received, you request it again from the bank at each regular interval of n time units (the minimum time unit for the ATM to check for an acknowledgement from the bank). If it repeatedly fails, this means there is something wrong with the code or the concept.
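A minimal sketch of that retry-until-acknowledged loop (plain Python pseudocode; the `bank` object with `send_request`/`wait_for_ack` and the timing values are assumptions, not an existing API):

```python
import time

RETRY_INTERVAL_S = 5   # hypothetical "n time units" between checks
MAX_ATTEMPTS = 10      # after this, report a failure instead of retrying forever


def withdraw_with_retries(bank, request):
    """Re-send the same request until the bank acknowledges it."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        bank.send_request(request)                  # hypothetical transport call
        ack = bank.wait_for_ack(request.id, timeout=RETRY_INTERVAL_S)
        if ack is not None:
            return ack                              # confirmation received
    # Repeated failure suggests something is wrong with the code or the concept.
    raise TimeoutError(f"no acknowledgement after {MAX_ATTEMPTS} attempts")
```

The request carries an ID so that the bank can treat re-sends of the same request as one withdrawal rather than several.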
Also, do utilise the concept of a redo log buffer, as that is the best option for storing and updating the bank balances! Don't keep just one copy, but two or three copies of the account information; make changes in a temporary copy and only update the final account info in the redo log once the acknowledgement from the ATM has been received by the bank, or vice versa. Make sure you receive the acknowledgement before updating values in the redo log!
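A rough illustration of that "temporary copy first, commit on ack" idea (the names `pending_updates`, `redo_log` and the toy data are made up for the sketch):

```python
# Stage the balance change in a temporary copy; commit only after the ack arrives.
pending_updates = {}          # request_id -> (account_id, new_balance), the temporary copy
redo_log = []                 # durable record of committed changes
balances = {"acct-1": 500}    # toy account store


def stage_withdrawal(request_id, account_id, amount):
    """Record the intended change in the temporary copy only."""
    pending_updates[request_id] = (account_id, balances[account_id] - amount)


def on_ack_from_atm(request_id):
    """Only now write the change to the redo log and the real balance."""
    account_id, new_balance = pending_updates.pop(request_id)
    redo_log.append((request_id, account_id, new_balance))   # log first, then apply
    balances[account_id] = new_balance
```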
I'm trying to send data from multiple ESP-8266 boards to feeds on my Adafruit IO account.
The problem is that when I try to send new values, I'm faced with a ban from publishing, because the 2-second time limit is violated when two or more of my MCUs happen to send data at the same time (I can't synchronize them to avoid this).
Is there any possible solution to this problem?
I suggest considering these three options:
A sending token which is passed from one ESP to the next. So basically no ESP is allowed to send by default; when an ESP receives the token, it is allowed to send, waits the appropriate time limit, and then hands the token to the next ESP. This solution has all the boards connected via an AP/router and would use client-to-client communication. It can be set up fail-safe, so that if the next ESP is not available (reset, out of battery, etc.) you take the next one on the list and issue an additional warning to the server.
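A conceptual sketch of that token ring (Python-style pseudocode; the helpers passed in as parameters are hypothetical, and on a real ESP this logic would live in your Arduino/MicroPython loop):

```python
import time

SEND_INTERVAL_S = 2    # minimum gap the service requires between sends


def run_node(receive_token, pass_token, read_sensor, send_to_adafruit, warn, peers):
    """One board in the token ring: only send while holding the token."""
    while True:
        receive_token()                    # block until the token arrives
        send_to_adafruit(read_sensor())    # this node's single allowed send
        time.sleep(SEND_INTERVAL_S)        # respect the rate limit
        for peer in peers:                 # hand the token on, skipping dead peers
            if pass_token(peer):
                break
        else:
            warn("no peer reachable, keeping the token")
```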
The second solution could be (more flexible and dynamic, but a SPOF, i.e. a single point of failure) to set up one ESP as a data collector that does all the sending.
If the ESPs are in different locations, you have to set them up so that they meet the following requirement:
If you have a free Adafruit IO Account, the rate limit is 30 data points per minute. If you exceed this limit, a notice will be sent to the {username}/throttle MQTT topic. You can subscribe to the topic if you wish to know when the Adafruit IO rate limit has been exceeded for your user account. This limit applies to all Data record modification actions over the HTTP and MQTT APIs, so if you have multiple devices or clients publishing data, be sure to delay their updates enough that the total rate is below your account limit.
So it's not a 2-second limit but 30/min (60/min on the Pro plan), so you limit each ESP's sending according to the formula:
30 / number of ESPs sending to IO -> e.g. 30 / 5 = 6 ==> 5 incl. safety margin
This means each ESP is only allowed to send 5 times within a minute. Important: once the 5 allowed sends are used up, it HAS to wait until the minute is over before the next send.
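A small sketch of that per-board send budget (plain Python for illustration; the board count and the `publish`/`read_sensor` helpers are assumptions):

```python
import time

RATE_LIMIT_PER_MIN = 30                          # free Adafruit IO account limit
NUM_BOARDS = 5                                   # hypothetical number of ESPs sharing the account
BUDGET = RATE_LIMIT_PER_MIN // NUM_BOARDS - 1    # 30 / 5 = 6, minus a safety margin -> 5
MIN_GAP_S = 60 / BUDGET                          # spread 5 sends over the minute -> 12 s apart


def send_loop(publish, read_sensor):
    """Keep this board within its share of the shared rate limit."""
    while True:
        publish(read_sensor())    # one data point to the Adafruit IO feed
        time.sleep(MIN_GAP_S)     # 12 s gap = at most 5 sends per minute
```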
The answer is simple: just don't send that frequently.
In the IoT world:
If data needs frequent updates (such as motor/servo positions, accelerometer readings, etc.), you often want to keep it local and won't want or need to send it to the cloud.
If the data needs to be in the cloud, it often does not need to be updated that frequently (such as temperature/humidity).
Alternatively, if you still think that your data is so critical that it needs to be updated that frequently, dedicate one ESP as your Edge Gateway to collect the data from the sensor nodes and send it to the cloud in one go; that is actually the proper way to design an IoT network with multiple sensor nodes.
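A rough sketch of that gateway pattern (Python pseudocode; `collect_from_nodes` and `publish_batch` are hypothetical stand-ins for whatever local transport and cloud API you use):

```python
import time

UPLOAD_INTERVAL_S = 60    # one batched upload per minute stays well under the rate limit


def gateway_loop(collect_from_nodes, publish_batch):
    """One board gathers readings from all sensor nodes and uploads them together."""
    while True:
        readings = collect_from_nodes()   # e.g. over ESP-NOW, UDP, or a local MQTT broker
        if readings:
            publish_batch(readings)       # a single cloud call for the whole batch
        time.sleep(UPLOAD_INTERVAL_S)
```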
If that still doesn't work for you, you still have the choice of paying for the premium service to raise the rate limit, or building your own cloud service and integrating it with your Edge Gateway.
We have a solution that takes a message and sends it to a web API.
Every day, an automatic procedure run by another department passes thousands of records into the MessageBox, which seems to cause errors related to the API solicit-response port (strangely, these errors don't allude to a timeout, but they only trigger when such a massive quantity of data is sent downstream).
I've contacted the service supplier to determine the capacity of their API calls, so I'll be able to tailor our flow once I have a better idea.
I've been reading up on Rate Based Throttling this morning, and have a few questions I can't find an answer to:
If throttling is enabled, does it only process the Minimum number of samples/messages? If so, what happens to the remaining messages? I read somewhere that they're queued in memory, but only up to a maximum of 100, so where do all the others go?
If I have 2,350 messages flood through in the space of 2 seconds and I want to control the flow, would changing my Sampling Window duration down to 1 second and setting the Throttling override to initiate throttling make a difference?
If you are talking about the Host Throttling settings, the remaining messages will be in the MessageBox database and will show as being in a Dehydrated state.
You would have to test the throttling settings under load. If you get them wrong it can be very bad. I've come across one server where the settings were configured incorrectly and it was constantly throttling.
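For intuition only, here is a generic sliding-window rate throttle in Python; this is not how BizTalk implements Rate Based Throttling internally, just an illustration of the sampling-window concept with made-up numbers:

```python
import time
from collections import deque

SAMPLING_WINDOW_S = 1.0    # how far back we count samples
MAX_PER_WINDOW = 100       # threshold that triggers throttling


class RateThrottle:
    def __init__(self):
        self.timestamps = deque()   # send times inside the current window

    def admit(self):
        """Block until sending one more message keeps us under the threshold."""
        while True:
            now = time.monotonic()
            while self.timestamps and now - self.timestamps[0] > SAMPLING_WINDOW_S:
                self.timestamps.popleft()              # drop samples that aged out
            if len(self.timestamps) < MAX_PER_WINDOW:
                self.timestamps.append(now)
                return
            # Over the threshold: wait until the oldest sample falls out of the window.
            time.sleep(SAMPLING_WINDOW_S - (now - self.timestamps[0]) + 0.01)
```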
I've developed a channel in Mirth that sends an ORU message. The ACK will then be sent back asynchronously to a different channel on a specific port.
In order to be able to resend the ORU message in case an AR or AE comes back in the ACK, I need to store the ORU somewhere so I can access it later when the ACK is received (remember, it is asynchronous).
I am trying to figure out how to achieve this. My idea looks like this:
send the ORU message and store it in a database
in the other channel, wait for incoming ACKs
for an incoming ACK, look up the related ORU in the database and, depending on whether the ACK was positive or not, remove the ORU or resend it
It would be nice if someone here has some experience with this and can tell me whether this is a proper way to do it, and if not, how it should be done.
In case the idea is good, how should I implement the third step? I have already tried with a single channel, but I cannot manage to resend the ORU.
I think your approach is reasonable. We tend to shy away from hard deletes and instead mark the message as "acknowledged" with the time the ACK (or NACK) message arrived. This gives you the ability to query for ACK/NACK/NULL and gives you a history of responses.
We keep track of our ORUs by sampleId/timestamp and make a note of the messageID. The messageID comes back in the ACK/NACK, so it's easy to match up.
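A minimal sketch of that matching step (plain Python against a hypothetical SQLite table rather than Mirth-specific code; the table, column and helper names are made up):

```python
import sqlite3

conn = sqlite3.connect("oru_tracking.db")
conn.execute("""CREATE TABLE IF NOT EXISTS oru_messages (
                    message_id  TEXT PRIMARY KEY,   -- MSH-10 of the outbound ORU
                    raw_message TEXT,               -- stored so it can be resent
                    ack_code    TEXT,               -- NULL until AA/AE/AR arrives
                    acked_at    TEXT)""")


def on_ack_received(message_id, ack_code, alert):
    """Look up the stored ORU by the message control ID echoed back in the ACK (MSA-2)."""
    conn.execute("UPDATE oru_messages SET ack_code = ?, acked_at = datetime('now') "
                 "WHERE message_id = ?", (ack_code, message_id))
    conn.commit()
    if ack_code in ("AE", "AR"):   # negative ACK: flag it instead of blindly resending
        row = conn.execute("SELECT raw_message FROM oru_messages WHERE message_id = ?",
                           (message_id,)).fetchone()
        alert(message_id, ack_code, row[0])   # e.g. email the team, who fix and resend manually
```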
You probably don't want to automatically resend NACK'ed messages, because whatever caused the NACK in the first place (like a format issue) is not likely to be resolved by a simple resend. Instead you may want a NACK to trigger an alert (like an email) to some group that can make the fix and resend manually.
You'll have to decide whether you can safely resend messages whose ACKs never arrived, and how long you should wait. Those are more business decisions than technical ones. For instance, you didn't receive an ACK/NACK, but was that because of a network hiccup, an issue with Mirth, or did your original message actually not go through? If the receiving system is counting on one and only one copy of each message, then you'll need some way to reconcile before resending those messages that received no ACK.
Not sure what category this question falls into; perhaps general networking / design / algorithms.
For a project I am looking at having one server with multiple connected clients. After some time, when all clients have connected, the server should send a message to each client instructing them to take some action. I need to guarantee that each client will execute this action at exactly the same time. Theoretically, how can this be done? What are the practical complications I will come up against? My target platform is mobile.
One solution I can think of:
The server actively and continuously keeps track of the round-trip latency for each client. Provided this latency doesn't change too quickly over time, the server should be able to compensate for each client's lag and send the messages such that they all start execution at roughly the same time. Is there a better way?
One not-really-related question: Client side and server side events not firing simultaneously
It can easily be done.
You don't care about latency, nor do you need the same machine time on the clients.
The key here is to create a precise appointment.
Since clients communicate with the server, and not vice versa (you didn't say anything about it, though), I can give you the following solution:
When a client connects to the server, it should send its local time.
When the server decides it's time for the event, it should send an appointment event to each client, with the appointment expressed in that client's local time. The server can calculate this from the time the client reported at connect.
Then each client knows exactly when it needs to act, by setting a timer that fires when its local appointment time arrives.
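A bare-bones sketch of that appointment scheme (Python; the `send` callback stands in for whatever transport you use, and, as the answer assumes, network latency is ignored and only clock offsets are compensated):

```python
import time
import threading

# --- server side ---------------------------------------------------------
clock_offsets = {}    # client_id -> (client clock - server clock), learned at connect


def on_client_connect(client_id, client_local_time):
    clock_offsets[client_id] = client_local_time - time.time()


def schedule_event(client_ids, delay_s, send):
    """Tell every client when to act, expressed in its own local clock."""
    event_server_time = time.time() + delay_s
    for client_id in client_ids:
        local_deadline = event_server_time + clock_offsets[client_id]
        send(client_id, {"act_at": local_deadline})   # hypothetical transport call


# --- client side ---------------------------------------------------------
def on_appointment(message, action):
    delay = message["act_at"] - time.time()          # compare against the local clock
    threading.Timer(max(delay, 0), action).start()   # fire at the agreed moment
```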
In theory yes, you can, but not in real life.
At the very least you should add a validity time slot: all actions have to happen within that predefined time slot in order to be considered valid.
So basically "same moment" = "a predefined time slot".
The predefined time slot can be any value that is close enough to "the same moment" or real time for your purposes.
Say my network connection drops for a few seconds and I miss some SignalR server-pushed messages.
When I regain network connectivity, are the messages I missed lost, or does SignalR handle them and push them out when I reconnect?
If it can't handle missed messages, then what is the recommended approach for ensuring consistency?
Periodically (every 2-3 minutes) poll to check the server data?
Somehow detect the loss of network on the client side and do an AJAX call to get the data when the network is restored?
Something else?
Here are a couple of thoughts:
If you aren't sending a lot of messages per second, consider sending no data in the messages themselves. Instead, the message is just a "ping" to the clients telling them to go get the server data when they can. Combine that with a periodic poll, as you said, and you can be assured that you won't miss messages. They just might be delayed.
If you are sending a lot of messages quickly, how about adding a sequential ID to each one? Think of a SQL Identity column. Your clients would need to keep track of the most recent ID received. After a network reconnect, the client could ask for all messages since [Last ID]. If a message is received whose ID is not contiguous with the most recently received, you know that there was a disconnect and can ask the server for the missing information.
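A small sketch of the client-side bookkeeping for that sequential-ID idea, which also covers the "go get the data when told to" thought above (library-agnostic Python pseudocode; `fetch_messages_since` stands in for an endpoint you would implement on the server):

```python
last_seen_id = 0    # most recent sequential ID this client has processed


def handle_push(msg, fetch_messages_since, process):
    """Process a pushed message, back-filling any gap in the ID sequence."""
    global last_seen_id
    if msg["id"] > last_seen_id + 1:
        catch_up(fetch_messages_since, process)   # we missed something in between
    if msg["id"] > last_seen_id:                  # ignore duplicates
        process(msg)
        last_seen_id = msg["id"]


def catch_up(fetch_messages_since, process):
    """Called after a reconnect (or a detected gap) to ask for everything we missed."""
    global last_seen_id
    for missed in fetch_messages_since(last_seen_id):   # all messages with id > last_seen_id
        process(missed)
        last_seen_id = missed["id"]
```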