I want to implement a chat system.
I am stuck at the point where the user sends multiple messages really fast. All of the messages reach the server, but in no particular order.
So I thought of implementing a queue, where each message shall:
1) First be placed in the queue
2) Wait for its turn
3) Make the POST request on its turn
4) Wait around 5 seconds for the response from the server
5) If the response arrives within that time frame and the status is OK, the message was sent; otherwise sending failed
In either case of point 5, the message shall be dequeued and the next message given its chance.
Now, the major problem is that there could be multiple queues, one for each chat head (each user we are talking to). How would I implement this? I am really new to Dart and Flutter. Please help. Thanks!
It sounds like you are describing a Stream - a series of asynchronous events which are ordered.
https://www.dartlang.org/guides/language/language-tour#handling-streams
https://www.dartlang.org/guides/libraries/library-tour#stream
Create a StreamController, and add messages to it as they come in:
var controller = StreamController<String>();
// whenever you have a message
controller.add(message);
Listen on that stream and upload the messages:
await for (var message in controller.stream) {
  await uploadMessage(message);
}
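For the multiple-chat-heads part, you can keep one controller per chat and drain each one sequentially. A minimal Dart sketch, where postMessage() is a placeholder for your actual HTTP POST and the 5-second timeout matches what you described:

import 'dart:async';

// One StreamController per chat head, keyed by chat id and created lazily.
final _queues = <String, StreamController<String>>{};

StreamController<String> _queueFor(String chatId) {
  return _queues.putIfAbsent(chatId, () {
    final controller = StreamController<String>();
    _drain(controller.stream); // not awaited: drains this chat's queue for its lifetime
    return controller;
  });
}

Future<void> _drain(Stream<String> messages) async {
  await for (final message in messages) {
    try {
      // postMessage() is a placeholder for your actual HTTP POST.
      // Give the server ~5 seconds before treating the send as failed.
      await postMessage(message).timeout(const Duration(seconds: 5));
      // mark the message as sent
    } catch (_) {
      // mark the message as failed (timeout or error response)
    }
  }
}

// Enqueue a message on the queue belonging to its chat head.
void send(String chatId, String message) => _queueFor(chatId).add(message);

Each chat head then has its own ordered queue, so a slow or failed send in one conversation does not block the others.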
I have a very simple application for sending bulk messages.
It sends a single message to 20 groups.
The delay I set between sent messages is 8 seconds.
That means about 7-8 send-message requests per minute.
The documentation says "telegram api has a limit of 20 request per minute".
It means I am using less than half of the limit.
But still, I am getting lots of flood wait errors.
And those wait errors come with wait limits like 84,000 seconds.
I am facing two errors when the flood wait happens:
1-
Security error while unpacking a received message: Server replied with a wrong session ID
2-
Floodwaiterror invoked while sending a message; forcing 70792 second wait interval for ....
I really don't know why this is happening.
The number I am sending from is brand new: clean, unbanned, never flagged for spam.
As I said, the wait interval between two messages is 8 seconds, which is under the per-minute limit.
The session is valid, because it sends a couple of messages, but after a few it instantly gets tons of FloodWaitErrors with waits ranging from 0 to 70,000 seconds.
Could you help me understand what is causing this, please?
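A minimal Telethon sketch of the sending loop described above (the groups list and message are placeholders), showing where the FloodWaitError surfaces:

import asyncio
from telethon import TelegramClient
from telethon.errors import FloodWaitError

async def broadcast(client: TelegramClient, groups, message):
    # One message to each of the 20 groups, with the 8-second delay described above.
    for group in groups:
        try:
            await client.send_message(group, message)
        except FloodWaitError as e:
            # Telegram reports how long to back off; e.seconds can be tens of thousands.
            await asyncio.sleep(e.seconds)
        await asyncio.sleep(8)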
I had the same problem, and the solution was to pass the entity object instead of the string name of the receiving party.
was:
results = await client.send_message("mybot", message="hello")
now:
async with TelegramClient(StringSession(session_id), api_id, api_hash) as client:
    # Resolve the receiver to an input entity first, then send to that entity
    bot_entity = await client.get_input_entity(peer="mybot")
    results = await client.send_message(entity=bot_entity, message=message)
We are building a real-time chat app using Firestore. We need to handle a situation when Internet connection is absent. Basic message sending code looks like this
let newMsgRef = database.document("/users/\(userId)/messages/\(docId)")
newMsgRef.setData(payload) { err in
    if let error = err {
        // handle error
    } else {
        // handle OK
    }
}
When the device is connected, everything works OK. When the device is not connected, the callback is not called, and we don't get the error status.
When the device goes back online, the record appears in the database and the callback fires. However, this is not acceptable for us, because in the meantime the application could have been terminated, and then we would never get the callback and could not mark the message as sent.
We thought that disabling offline persistence (which is on by default) would make the failure callback trigger immediately, but unexpectedly it does not.
We also tried adding a timeout after which the send operation would be considered failed, but there is no way to cancel message delivery once the device is back online, as Firestore uses its own queue. That causes more confusion, because the message is delivered on the receiver's side while I can't handle that on the sender's side.
If we could decrease the timeout, that could be a good solution: we would quickly get a success/failure state, but Firebase doesn't provide such a setting.
The built-in offline cache could be another option: I could treat all writes as successful and rely on Firestore's sync mechanism, but if the application is terminated while offline, the message is never delivered.
Ultimately we need a consistent feedback mechanism that triggers a callback, or a way to monitor the message in the queue, so we know for sure whether and when the message has been sent.
The completion callbacks for Firestore are only called when the data has been written (or rejected) on the server. There is no callback for when there is no network connection, as this is considered a normal condition for the Firestore SDK.
Your best option is to detect whether there is a network connection in another way, and then update your UI accordingly. Some relevant search results:
Check for internet connection with Swift
How to check for an active Internet connection on iOS or macOS?
Check for internet connection availability in Swift
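For example, on iOS 12+ one way to do this is with NWPathMonitor from the Network framework; a minimal sketch, independent of your Firestore code:

import Foundation
import Network

let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    let isOnline = (path.status == .satisfied)  // .satisfied means a usable network path
    DispatchQueue.main.async {
        // Update the UI here, e.g. mark queued messages as "waiting for network"
        // whenever isOnline is false.
        print("online:", isOnline)
    }
}
monitor.start(queue: DispatchQueue(label: "NetworkMonitor"))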
Alternatively, you can use Firestore's built-in metadata to determine whether messages have been delivered. As shown in the documentation on events for local changes:
Retrieved documents have a metadata.hasPendingWrites property that indicates whether the document has local changes that haven't been written to the backend yet. You can use this property to determine the source of events received by your snapshot listener:
db.collection("cities").document("SF")
    .addSnapshotListener { documentSnapshot, error in
        guard let document = documentSnapshot else {
            print("Error fetching document: \(error!)")
            return
        }
        let source = document.metadata.hasPendingWrites ? "Local" : "Server"
        print("\(source) data: \(document.data() ?? [:])")
    }
With this you can also show the message correctly in the UI.
Background
We have an app that handles a considerable number of requests per second. This app needs to notify an external service by making a GET call via HTTPS to one of our servers.
Objective
The objective here is to use HTTPoison to make async GET requests. I don't really care about the response of the requests; all I care about is knowing whether they failed or not, so I can write any errors to a logger.
If it succeeds I don't want to do anything.
Research
I have checked the official documentation for HTTPoison and I see that they support async requests:
https://hexdocs.pm/httpoison/readme.html#usage
However, I have 2 issues with this approach:
1) They use flush to show the request was completed. I can't log into the app and manually flush to see how the requests are going; that would be insane.
2) They don't show any notification mechanism for when the responses or errors arrive.
So, I have a simple question:
How do I get asynchronously notified that my request failed or succeeded?
I assume that the default HTTPoison.get is synchronous, as shown in the documentation.
This could be achieved by spawning a new process per-request. Consider something like:
notify = fn response ->
  # Any handling logic - write to DB? Send a message to another process?
  # Here, I'll just print the result
  IO.inspect(response)
end

spawn(fn ->
  resp = HTTPoison.get("http://google.com")
  notify.(resp)
end) # spawn does not block, so the next spawn runs straight away

spawn(fn ->
  resp = HTTPoison.get("http://yahoo.com")
  notify.(resp)
end) # This is executed immediately after the previous `spawn`
Please take a look at the documentation of spawn/1 I've pointed out here.
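If you only care about logging failures, the same idea can also be written with Task.start/1 and Logger; a small sketch with a placeholder URL:

require Logger

Task.start(fn ->
  case HTTPoison.get("https://example.com/notify") do
    {:ok, %HTTPoison.Response{status_code: _code}} ->
      # Success: nothing to do
      :ok

    {:error, %HTTPoison.Error{reason: reason}} ->
      Logger.error("Notification request failed: #{inspect(reason)}")
  end
end)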
Hope that helps!
I have an application that does the following:
1) After the app receives a GET request, it reads the client's cookies for identification.
2) It stores the identification information in a PostgreSQL DB.
3) It sends the appropriate response and finishes the handling process.
But this way the client is also waiting for me to store the data in PostgreSQL. I don't want this; what I want is:
1) After the app receives a GET request, it reads the client's cookies for identification.
2) It sends the appropriate response and finishes the handling process.
3) It stores the identification information in a PostgreSQL DB.
In the second version, the storing happens after the client has received the response, so the client doesn't have to wait for it. I've searched for a solution but haven't found anything so far; I believe I'm searching with the wrong keywords, because this seems like a common problem.
Any feedback is appreciated.
You should add a callback to the IOLoop, via some code like this:
from functools import partial
from tornado import ioloop

def somefunction(*args):
    # call the DB
    ...

# ... now in your get() or post() handler ...
io_loop = ioloop.IOLoop.instance()
io_loop.add_callback(partial(somefunction, arg, arg2))
# ... rest of your handler ...
self.finish()
The callback runs on the next iteration of the event loop, after the response has been returned to the user, and invokes your DB processor somefunction.
If you don't want to wait for Postgres to respond, you could try:
1) An async Postgres driver
2) Putting the DB jobs on a queue and letting the queue handle the DB write; try RabbitMQ (a rough sketch below)
Remember that because you return to the user before you write to the DB, you have to think about how to handle write errors.
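A rough in-process sketch of option 2, using a plain queue and a background worker thread rather than RabbitMQ; write_to_postgres and log_write_error are hypothetical helpers:

import queue
import threading

db_jobs = queue.Queue()

def db_worker():
    # Runs in the background and performs the actual DB writes.
    while True:
        payload = db_jobs.get()
        try:
            write_to_postgres(payload)  # hypothetical DB helper
        except Exception as exc:
            # The client already got its response, so failures must be
            # handled here, e.g. logged and/or retried.
            log_write_error(payload, exc)  # hypothetical error logger
        finally:
            db_jobs.task_done()

threading.Thread(target=db_worker, daemon=True).start()

# Inside the request handler you would then do:
#     db_jobs.put(identification_info)
#     self.finish()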
I am using ActiveMQ to send messages. Due to the number of messages being sent and the amount of other work the application is doing, I've enabled async sending.
When the application ends I want to be sure the remaining messages are actually delivered, as we sometimes miss one of the last messages that go out on the connection.
Googling around, I found only warnings stating that you might miss some messages. That would be acceptable, but we miss some of the last ones about 40% of the time, and I wish to improve this situation.
Q: Is there any method to check whether the ActiveMQ producer/sender has sent all its async messages?
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory( brokerUrl );
connectionFactory.setUseAsyncSend( true );
RedeliveryPolicy policy = connectionFactory.getRedeliveryPolicy();
policy.setInitialRedeliveryDelay( 500 );
policy.setBackOffMultiplier((short)2);
policy.setUseExponentialBackOff(true);
policy.setMaximumRedeliveries(10);
You could set the async send mode back to off for the last message, which would cause the client to wait on the send until the broker acknowledges that the message has been received. The Connection object would need to be cast to ActiveMQConnection, and then you can call setUseAsyncSend(false).
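Roughly like this, assuming the producer and the final lastMessage already exist in your sending code:

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnection;

Connection connection = connectionFactory.createConnection();

// Switch this connection back to synchronous sends before the final message,
// so send() blocks until the broker has acknowledged receipt.
((ActiveMQConnection) connection).setUseAsyncSend(false);
producer.send(lastMessage);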