I wrote a piece of code which sends messages to a Telegram bot. In order to do so, I use the chat_id of the last conversation retrieved via the getUpdates method.
id = requests.get(f"https://api.telegram.org/bot{token}/getUpdates").json()['result'][-1]['message']['chat']['id']
My understanding is that a conversation exists if someone started one with the bot via /start.
How can I initiate, from my code, a conversation to make sure a chat_id is available? (= that there is a conversation I can query).
I am also sure that the conversations, should they exist, are not kept indefinitely (this is another reason why requesting an update can yield empty results)
My understanding is that a conversation exists if someone started one with the bot via /start.
Yes, a conversation is always initiated by the user:
Bots can't initiate conversations with users. A user must either add them to a group or send them a message first. People can use telegram.me/ links or username search to find your bot.
Note that /start is not the only option here.
If you try to send a message to a user who hasn't started a conversation with the bot, you will receive something like this: {"ok":false,"error_code":400,"description":"Bad Request: chat not found"}.
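For instance (a minimal sketch; the token and chat id below are placeholders for a real bot token and a chat that never messaged the bot):

import requests

token = "123456:ABC-DEF"        # placeholder: your real bot token
unknown_chat_id = 111111111     # placeholder: a chat that never messaged the bot

resp = requests.get(
    f"https://api.telegram.org/bot{token}/sendMessage",
    params={"chat_id": unknown_chat_id, "text": "hello"},
)
print(resp.json())
# {'ok': False, 'error_code': 400, 'description': 'Bad Request: chat not found'}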
How can I initiate, from my code, a conversation to make sure a chat_id is available? (= that there is a conversation I can query).
Normally, you shouldn't worry about that. The bot does not query a specific user's actions/requests with getUpdates; it receives all interactions from all users and then decides what to do according to the internal logic you provide.
You may want to store information about users and/or their requests in a database every time you receive an Update from a particular user via getUpdates.
Based on that, the bot can decide, for example, to send a message to that user.
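A minimal sketch of that idea, assuming the same bot token as in the question and a local SQLite file as the database:

import sqlite3
import requests

token = "123456:ABC-DEF"   # placeholder: your real bot token
db = sqlite3.connect("bot.db")
db.execute("CREATE TABLE IF NOT EXISTS chats (chat_id INTEGER PRIMARY KEY)")

updates = requests.get(f"https://api.telegram.org/bot{token}/getUpdates").json()
for update in updates["result"]:
    message = update.get("message")
    if message:  # some update types (edited messages, callbacks, ...) carry no 'message'
        db.execute("INSERT OR IGNORE INTO chats VALUES (?)", (message["chat"]["id"],))
db.commit()

# Later, the bot can decide to message any chat it has seen before:
for (chat_id,) in db.execute("SELECT chat_id FROM chats"):
    requests.get(f"https://api.telegram.org/bot{token}/sendMessage",
                 params={"chat_id": chat_id, "text": "hello again"})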
I am also sure that the conversations, should they exist, are not kept indefinitely (this is another reason why requesting an update can yield empty results)
Yes, the docs clearly state that:
Incoming updates are stored on the server until the bot receives them either way, but they will not be kept longer than 24 hours.
An Update on the Telegram servers is a short-lived entity.
If you haven't saved information about existing users, or you have lost your database, there's no way to retrieve that data from Telegram's servers.
P.S.: as a side note, I'd suggest using long polling, since the Telegram Bot API is designed to be used that way with getUpdates. The most important thing is the timeout request parameter of the getUpdates method:
(timeout is) Timeout in seconds for long polling. Defaults to 0, i.e. usual short polling. Should be positive, short polling should be used for testing purposes only.
As written in the question, you're currently using short polling.
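A minimal long-polling loop could look like this (a sketch; the token is a placeholder and the 60-second timeout is just an example value):

import requests

token = "123456:ABC-DEF"   # placeholder: your real bot token
offset = None

while True:
    params = {"timeout": 60}        # long polling: the request blocks for up to 60 s
    if offset is not None:
        params["offset"] = offset   # confirms updates that were already processed
    updates = requests.get(
        f"https://api.telegram.org/bot{token}/getUpdates",
        params=params,
        timeout=70,                 # HTTP timeout slightly above the API timeout
    ).json()["result"]
    for update in updates:
        offset = update["update_id"] + 1
        # ... handle the update according to your bot's logic ...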
Related
I'm currently working on an IRC-Chat and we want to add the option to chat with other people privately (User-To-User) which works fine, but the messages aren't stored, meaning that a user loses all private messages after disconnecting. They also can't message a person once they have disconnected.
All of this isn't the case with channels, where messages are stored for X time, allowing asynchronous communication.
Is there a way of allowing asynchronous messaging for private User-To-User messages without storing the messages in an extra system? Or is this simply a limitation of IRC?
IRC isn't designed to store messages - it's a feature, not a bug. One way to get around this is to configure the server to 'simulate' past messages (essentially pretending to be the other user to send past messages). To do this, you would need to store messages - however, it would be on the server itself, not on a separate database or a third party.
I'm trying to get all the chat_ids in the Telegram Bot API
When I call
https://api.telegram.org/botmybot_token/getUpdates
I get this {"ok":true,"result":[]}
You get empty updates because there were no messages (or other actions) from users.
To obtain user's chat_id follow these steps:
The user must first find the bot and send it any message, because
Bots can't initiate conversations with users. A user must either add them to a group or send them a message first. People can use telegram.me/ links or username search to find your bot.
Get the following URL: https://api.telegram.org/botmybot_token/getUpdates?timeout=300. Note that you should use timeout= because this is a long polling method (i.e. you will receive a response only when there is a new incoming update or the timeout is reached):
(timeout is) Timeout in seconds for long polling. Defaults to 0, i.e. usual short polling. Should be positive, short polling should be used for testing purposes only.
You can still receive updates without the timeout setting, but that's not the right way to read user interactions. Instead, just run long-polling requests (i.e. with the timeout setting) in a loop.
You can read more about getting a user's chat_id here:
https://stackoverflow.com/a/56093268/2315573
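Putting the steps together in Python (a sketch; replace the placeholder token with your real bot token):

import requests

token = "mybot_token"   # placeholder: your real bot token
response = requests.get(
    f"https://api.telegram.org/bot{token}/getUpdates",
    params={"timeout": 300},
).json()
# Collect the chat_id of every user who has messaged the bot so far
chat_ids = {u["message"]["chat"]["id"] for u in response["result"] if "message" in u}
print(chat_ids)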
I'm trying to get started implementing Web Push in one of my apps. In the examples I have found, the client's endpoint URL is generally stored in memory with a comment saying something like:
In production you would store this in your database...
Since only registered users of my app can/will get push notifications, my plan was to store the endpoint URL in the user's meta data in my database. So far, so good.
The problem comes when I want to allow the same user to receive notifications on multiple devices. In theory, I will just add a new endpoint to the database for each device the user subscribes with. However, in testing I have noticed that endpoints change with each subscription/unsubscription on the same device. So, if a user subscribes/unsubscribes several times in a row on the same device, I wind up with several endpoints saved for that user (all but one of which are bad).
From what I have read, there is no reliable way to be notified when a user unsubscribes or an endpoint is otherwise invalidated. So, how can I tell if I should remove an old endpoint before adding a new one?
What's to stop a user from effectively mounting a denial of service attack by filling my db with endpoints through repeated subscription/unsubscription?
That's meant more as a joke (I can obviously limit the total endpoints for a given user), but the problem I see is that when it comes time to send a notification, I will blast notification services with hundreds of notifications for invalid endpoints.
I want the subscribe logic on my server to be:
Check if we already have an endpoint saved for this user/device combo
If not add it, if yes, update it
The problem is that I can't figure out how to reliably do #1.
I will just add a new endpoint to the database for each device the user subscribes with
The best approach is to have a table like this (a sketch in code follows after the list):
endpoint | user_id
add a unique constraint (or a primary key) on the endpoint: you don't want to associate the same browser with multiple users, because it's a mess (if an endpoint is already present but with a different user_id, just update the user_id associated with it)
user_id is a foreign key that points to your users table
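A minimal sketch of that table using SQLite (the table name push_subscriptions is an assumption, and an existing users table is assumed for the foreign key):

import sqlite3

db = sqlite3.connect("push.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS push_subscriptions (
        endpoint TEXT PRIMARY KEY,                      -- one browser, one row
        user_id  INTEGER NOT NULL REFERENCES users(id)  -- assumes an existing users table
    )
""")

def save_subscription(endpoint, user_id):
    # If the endpoint already exists with a different user_id, reassign it to the new user.
    db.execute(
        """
        INSERT INTO push_subscriptions (endpoint, user_id) VALUES (?, ?)
        ON CONFLICT(endpoint) DO UPDATE SET user_id = excluded.user_id
        """,
        (endpoint, user_id),
    )
    db.commit()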
if a user subscribes/unsubscribes several times in a row on the same device, I wind up with several endpoints saved for that user (all but one of which are bad).
Yes, unfortunately the push API has a wild unsubscription mechanism and you have to deal with it.
The endpoints can expire or can be invalid (or even malicious, like android.chromlum.info). You need to detect failures (using the HTTP status code, timeouts, etc.) when you try to send the push message from your application server. Then, for some kinds of failures (permanent ones, like expiration) you need to delete the endpoint.
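For example, a sketch assuming the pywebpush library; the VAPID key path and claims are placeholders, and delete_endpoint stands for whatever cleanup function you use against your own storage:

from pywebpush import webpush, WebPushException

def send_push(subscription_info, payload, delete_endpoint):
    # subscription_info is the JSON the browser gave you: {'endpoint': ..., 'keys': {...}}
    try:
        webpush(
            subscription_info=subscription_info,
            data=payload,
            vapid_private_key="path/to/vapid_private_key.pem",   # placeholder
            vapid_claims={"sub": "mailto:admin@example.com"},    # placeholder
        )
    except WebPushException as exc:
        status = exc.response.status_code if exc.response is not None else None
        if status in (404, 410):
            # Permanent failure: the subscription has expired or was revoked.
            delete_endpoint(subscription_info["endpoint"])
        else:
            raise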
What's to stop a user from effectively mounting a denial of service attack by filling my db with endpoints through repeated subscription/unsubscription?
As described above, you need to delete the invalid endpoints as soon as you realize they are expired or invalid; that way each of them produces at most one failed request. Moreover, if you have high throughput, it takes only a few seconds for your server to make requests for thousands of endpoints.
My suggestions are based on a lot of experiments and thinking done when I was developing Pushpad.
Another way is to have a keep-alive field on your server and have your service worker update it whenever it receives a push notification. Then regularly purge endpoints whose keep-alive hasn't been updated recently.
I'm new to Telegram Bots API so I'm still at abstraction phase.
Anyway, I'm testing some Telegram methods using Postman. When I send a getUpdates request to the server, it sends me all the updates, but the problem is that if I send getUpdates again, I receive the previous updates too.
So is there any way to consume incoming updates so that server doesn't send them again?
Telegram Bots API is very compact and lacks good explanations. That's why I'm asking here.
Quote from Telegram's Bots FAQ:
The getUpdates method returns the earliest 100 unconfirmed updates. To confirm an update, use the offset parameter when calling getUpdates like this:
offset = update_id of last processed update + 1
All updates with update_id less than or equal to offset will be marked
as confirmed on the server and will no longer be returned.
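In practice that looks something like this (a minimal sketch; the token is a placeholder):

import requests

token = "123456:ABC-DEF"   # placeholder: your real bot token
offset = 0

while True:
    updates = requests.get(
        f"https://api.telegram.org/bot{token}/getUpdates",
        params={"offset": offset, "timeout": 60},
    ).json()["result"]
    for update in updates:
        # ... process the update ...
        offset = update["update_id"] + 1   # the next call confirms everything up to this update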
Could you validate my approach for using Firebase to manage a user notification system?
Basically I want to have user specific channels as well as more general channels which hold notifications. These notifications would appear on an intranet if the user has not viewed them before.
The idea is that a server-side action will update Firebase endpoints using the REST API, either for a specific user or as a broadcast to everyone. The user-specific messages I can easily mark as read and therefore not show again; it's the general broadcasts I am struggling with slightly.
I could add a flag (user ID) to the general broadcast to indicate it's read, but I am concerned about performance, as the client would have to check historic broadcast messages for the existence of this flag. Alternatively, I could use the user ID to create a new endpoint, which should be quicker.
e.g. /notification/general/ contains the message; this triggers the client, which then checks whether /users/USERID/MessageID exists. If it doesn't, it displays the message and creates this endpoint.
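In code, that check could look roughly like this (a sketch; the database URL, the IDs and the display_message function are placeholders):

import requests

DATABASE = "https://your-project.firebaseio.com"   # placeholder database URL
user_id = "USERID"                                  # placeholder
message_id = "MESSAGEID"                            # placeholder

# The REST API returns null for a path that doesn't exist yet.
seen = requests.get(f"{DATABASE}/users/{user_id}/{message_id}.json").json()
if seen is None:
    display_message(message_id)   # hypothetical function that shows the notification
    requests.put(f"{DATABASE}/users/{user_id}/{message_id}.json", json=True)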
Is there something I am missing or is that the best approach?
Are the messages always consumed in order? If so, you could have each client remember the ID of the last message it read in each public channel. You could then use "startAt" on the queue to limit it to only new messages.
If they're not consumed in order, then you'll need some way of storing data about which ones were read and which ones weren't. Perhaps you can have each message get sent out to everyone's personal queue, and then have each user remove read messages.
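For the in-order case, the startAt query against the REST API could look like this (a sketch; the database URL and the stored last-read ID are placeholders):

import requests

DATABASE = "https://your-project.firebaseio.com"   # placeholder database URL
last_read_id = "-LastReadMessageId"                 # the ID the client stored last time

new_messages = requests.get(
    f"{DATABASE}/notification/general.json",
    params={"orderBy": '"$key"', "startAt": f'"{last_read_id}"'},
).json()
# Note: startAt is inclusive, so the last read message comes back too and can be skipped.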
Since there are already individual user messages, why not just deliver the broadcasts to everyone individually (think email), rather than trying to store a single copy and figure out who read it?
To reduce bulk, you could store the message content separately, and simply store the ids in a user's queue. Then when they are viewed, you flag them per-user without any additional complexity.
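A sketch of that fan-out via the REST API (the database URL and paths are placeholders):

import requests

DATABASE = "https://your-project.firebaseio.com"   # placeholder database URL

def broadcast(message_id, body, user_ids):
    # Store the message body once...
    requests.put(f"{DATABASE}/messages/{message_id}.json", json=body)
    # ...and fan only the ID out to every user's personal queue, initially unread.
    for uid in user_ids:
        requests.put(f"{DATABASE}/users/{uid}/queue/{message_id}.json", json={"read": False})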
At 100k users receiving 100 messages a day including broadcasts, with a standard Firebase push ID (around 20 characters), that comes out to roughly 200 million characters a day (modest for a database, and probably still far less than the actual bulk of storing the message bodies), assuming the entries never expire and get deleted.