I used the Graph API to add a webhook for group conversations. Because I need to monitor conversations for all groups, I read the group list and add a webhook subscription for every group.
After 20-30 groups (the exact number varies between accounts), the Graph API begins returning an error:
{
"error": {
"code": "",
"message": "Server could not process subscription creation payload.",
"innerError": {
"request-id": "af7d109a-fb6c-4b41-9aa1-988fc21309ad",
"date": "2016-09-28T03:06:11"
}
}
}
It seems that the Graph API blocks further requests after receiving too many subscription requests. Is that right?
If so, is there a way for me to monitor conversations for all groups?
I don't think there's any way to accomplish this except to cycle through all the groups and request conversation information one at a time. The drawback is that it's generally a bad idea to issue a new request before the previous one has finished (Microsoft will throttle your connection), which limits you to about 2-3 requests per second at best. Depending on how many groups you're looking at, it may be several minutes between refreshes.
This is based on my personal experience; I can't find any documentation that supports or denies it.
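For what it's worth, a minimal sketch of that polling loop (TypeScript, assuming you already have an OAuth access token and the list of group IDs; the 500 ms delay keeps you near 2 requests per second, and a 429 response is handled by honouring Retry-After):

// Sequentially poll each group's conversations, staying under Graph's throttling limits.
// `accessToken` and `groupIds` are assumed to come from your existing auth/group-list code.
const GRAPH = 'https://graph.microsoft.com/v1.0';

async function pollGroupConversations(accessToken: string, groupIds: string[]): Promise<void> {
  for (const groupId of groupIds) {
    const res = await fetch(`${GRAPH}/groups/${groupId}/conversations`, {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    if (res.status === 429) {
      // Graph is throttling us: wait for the advertised interval, then move on.
      const retryAfter = Number(res.headers.get('Retry-After') ?? '5');
      await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
      continue; // this group will be picked up again on the next pass
    }
    const conversations = await res.json();
    // ...diff `conversations` against the previous snapshot to detect new posts...
    await new Promise((resolve) => setTimeout(resolve, 500)); // ~2 requests/second
  }
}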
Problem
Consider a social networking website supporting the following actions:
MediaUploadService: You can upload media files (images and videos), either a single file or multiple files.
TaggingService: Once the files are uploaded, all the persons identified in the files are tagged automatically.
NotificationService: Once the files are tagged, all the persons get notified.
The following requirements must be satisfied:
The user can cancel the upload at any time, which means the uploading should also be stopped. It also means that the tagging and notification services should not even be triggered for such requests.
All the services should be able to retry the failed jobs.
All the services communicate through messaging infrastructure.
The services must be scalable and available.
My Take
We can have a global task queue and the upload service can listen for new jobs. The request can be represented as:
{
"request_id":"abcd-defg-pqrs",
"total_files": 2,
"files":[
{
"id":"bcde-efgh-qrst",
"name":"cat.jpg",
"type":"image"
},
{
"id":"cdef-fghi-rstu",
"name":"kitty.mp4",
"type":"video"
}
]
}
The request is broken into single-file upload requests, which are pushed to the upload-request message queue:
{
"request_id":"abcd-defg-pqrs",
"total_files": 2,
"file":{
"id":"bcde-efgh-qrst",
"name":"cat.jpg",
"type":"image"
}
}
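A rough sketch of that fan-out step (TypeScript; the queue.publish helper is a stand-in for whatever messaging infrastructure is actually used):

interface UploadRequest {
  request_id: string;
  total_files: number;
  files: { id: string; name: string; type: 'image' | 'video' }[];
}

// Publish one message per file so each upload can be processed (and retried) independently.
async function fanOut(
  request: UploadRequest,
  queue: { publish: (queueName: string, message: object) => Promise<void> },
): Promise<void> {
  for (const file of request.files) {
    await queue.publish('upload-request', {
      request_id: request.request_id,
      total_files: request.total_files,
      file,
    });
  }
}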
Each request is picked up and processed as a background job, and the response is sent to the upload-response aggregator, which keeps count of the files uploaded so far:
{
"request_id":"abcd-defg-pqrs",
"total_files": 2,
"uploaded_files": 1,
"file":[
"bcde-efgh-qrst"
]
}
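A sketch of the aggregator's consumer, assuming a store with an atomic increment and set semantics (e.g. Redis) and the same hypothetical queue.publish helper as above:

interface UploadResponse {
  request_id: string;
  total_files: number;
  file_id: string;
}

// Called once per uploaded file; hands the request to tagging only when the last file arrives.
async function onUploadResponse(
  msg: UploadResponse,
  store: {
    add: (key: string, value: string) => Promise<void>; // add a file id to the request's set
    increment: (key: string) => Promise<number>;        // atomic per-request counter
    members: (key: string) => Promise<string[]>;        // all file ids collected so far
  },
  queue: { publish: (queueName: string, message: object) => Promise<void> },
): Promise<void> {
  await store.add(`uploaded:${msg.request_id}`, msg.file_id);
  const uploaded = await store.increment(`count:${msg.request_id}`);
  if (uploaded === msg.total_files) {
    await queue.publish('tagging-request', {
      request_id: msg.request_id,
      total_files: msg.total_files,
      files: await store.members(`uploaded:${msg.request_id}`),
    });
  }
}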
Once all the files are uploaded, the final message is sent to the tagging-request message queue:
{
"request_id":"abcd-defg-pqrs",
"total_files": 2,
"files":[
"bcde-efgh-qrst",
"cdef-fghi-rstu"
]
}
When the tagging service is done with the job, it sends a request to the notification-request message queue. Finally, once all the tasks are completed, the user can be notified via the global-response message queue.
Concerns
To retry failed jobs, we can have separate low-priority queues for each of the services. But what if we want retries to run at the same priority and be processed just as quickly?
Processing the jobs while respecting the dependencies between the services (upload → tag → notify) is handled by the message queues. Is there a better way to achieve this?
How can we immediately stop a file upload that is still in progress when the cancellation request arrives? For files that have already been uploaded, we can simply delete them.
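One possible shape for that cancellation concern, sketched in TypeScript with an AbortController per request; uploadToStorage and the queue wiring are placeholders:

// The upload worker keeps an AbortController per request_id; a "cancel" message
// aborts the in-flight transfer so tagging is never triggered for that request.
const inFlight = new Map<string, AbortController>();

declare function uploadToStorage(file: { id: string; name: string }, signal: AbortSignal): Promise<void>;

async function handleUpload(msg: { request_id: string; file: { id: string; name: string } }): Promise<void> {
  const controller = new AbortController();
  inFlight.set(msg.request_id, controller);
  try {
    await uploadToStorage(msg.file, controller.signal); // pass the signal down to fetch/SDK calls
  } catch (err) {
    if (controller.signal.aborted) return;              // cancelled: do not enqueue tagging
    throw err;                                          // real failure: let the retry queue handle it
  } finally {
    inFlight.delete(msg.request_id);
  }
}

function handleCancel(msg: { request_id: string }): void {
  inFlight.get(msg.request_id)?.abort();                // stops the upload mid-transfer
}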
Look at temporal.io, which provides a much better way to model such use cases. It is essentially a workflow engine driven directly by code, without any intermediate representation. Cancellation and compensation are supported out of the box.
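For illustration, a minimal sketch of how the pipeline might look as a Temporal workflow using the TypeScript SDK; the activity names are hypothetical, and retries and cancellation propagation come from the engine rather than hand-rolled queues:

// Workflow code (runs inside a Temporal worker); the activities are assumed to be
// implemented separately in ./activities.
import { proxyActivities } from '@temporalio/workflow';
import type * as activities from './activities';

const { uploadFile, tagPeople, sendNotifications } = proxyActivities<typeof activities>({
  startToCloseTimeout: '10 minutes',
  retry: { maximumAttempts: 5 }, // retries are declarative, per the requirements
});

export async function mediaPipeline(fileIds: string[]): Promise<void> {
  // Upload all files in parallel; cancelling the workflow also cancels these activities.
  await Promise.all(fileIds.map((id) => uploadFile(id)));
  const taggedPeople = await tagPeople(fileIds);
  await sendNotifications(taggedPeople);
}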
I'm having trouble detecting sharing/permission changes (e.g. a shared link) for drive items. Here are a couple of issues I'm running into:
Issue 1:
Calling the delta endpoint returns very little information about a shared drive item, e.g.:
"shared": {
"scope": "users"
}
If I need more information, I can call the permissions API:
https://learn.microsoft.com/en-us/graph/api/resources/permission?view=graph-rest-1.0
So I thought I would try to expand permissions via $expand when calling delta, e.g.:
https://graph.microsoft.com/v1.0/drives/b!2sYXPZYs-EWuKr_Zuq-PuJXgC5oupbFGksDDgkXp5Grd_x1DWcntTY1FyJEH9caq/root/delta?$expand=permissions
Unfortunately, I'm receiving the following error response:
{
"error": {
"code": "invalidRequest",
"message": "The request is malformed or incorrect.",
"innerError": {
"request-id": "ea0ed04a-a4f7-4fbe-a16e-61ff0770fcc0",
"date": "2019-07-29T19:31:37"
}
}
}
I'm trying to avoid a separate permissions call for each shared item (I see no point in calling the API once per drive item). Any suggestions?
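One option that might cut down the per-item calls is Graph's JSON batching endpoint, which bundles up to 20 requests per round trip. A sketch in TypeScript; the drive ID, item IDs and access token are placeholders:

const GRAPH = 'https://graph.microsoft.com/v1.0';

// Fetch permissions for many items, 20 at a time, via POST /$batch.
async function fetchPermissionsInBatches(accessToken: string, driveId: string, itemIds: string[]) {
  const permissionsByItem: Record<string, unknown> = {};
  for (let i = 0; i < itemIds.length; i += 20) { // $batch allows at most 20 sub-requests
    const requests = itemIds.slice(i, i + 20).map((id) => ({
      id,
      method: 'GET',
      url: `/drives/${driveId}/items/${id}/permissions`,
    }));
    const res = await fetch(`${GRAPH}/$batch`, {
      method: 'POST',
      headers: { Authorization: `Bearer ${accessToken}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({ requests }),
    });
    const { responses } = await res.json();
    for (const r of responses) permissionsByItem[r.id] = r.body; // keyed by the item id set above
  }
  return permissionsByItem;
}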
Issue 2:
I'm using "Notifications/Webhooks" to receive notifications about drive item changes. Notifications work well enough for modify, create, delete, etc...
However, I noticed that when there are "sharing/permission" changes, notifications are not sent.
Ideas? Is this a limitation? (Why is it not documented?).
Thank you.
I have an update and a partial answer:
It is possible to solve these two issues, but the solution is undocumented.
This may change in the future, and Microsoft may document it at some point.
(If and when it is officially documented, I'll update my answer.)
If anyone is running into the same issue, the best course of action is to reach out to Microsoft through partnership channels and/or support.
I have an application that's been running since 2015. It both reads and writes to approx 16 calendars via a service account, using the Google node.js library (calendar v3 API). We also have G Suite for Education.
The general process is:
Every 30 seconds it caches all calendar data via a list operation.
Periodically a student will request an appointment "slot"; the app first checks whether the slot is still open (via a list call) and then performs an insert.
That's all it does. It had been running fine until the past few days, when API insert calls started failing:
{
"code": 403,
"errors": [{
"domain": "usageLimits",
"reason": "quotaExceeded",
"message": "Calendar usage limits exceeded."
}]
}
This isn't all that special - the documentation has three "solutions":
Read more on the Calendar usage limits in the G Suite Administrator help.
If one user is making a lot of requests on behalf of many users of a G Suite domain, consider using a Service Account with authority delegation (setting the quotaUser parameter).
Use exponential backoff.
I'm not exceeding any of the stated limits as far as I can tell.
While I'm using a service account, it isn't making requests on behalf of a user. The service account has write access to the calendar and adds the user as an attendee.
Finally, I don't think exponential backoff will help, although I have not implemented it. The time between one insert request and the next is measured in seconds, not milliseconds. Additionally, running the calls directly from the command line with a simple script produces the same problem.
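For completeness, a minimal backoff wrapper around the insert call, assuming the googleapis Node.js client and an already-authorised service-account client (auth, calendarId and event are placeholders); it only backs off on quota/rate-limit errors:

import { google } from 'googleapis';

const calendar = google.calendar({ version: 'v3' });

// Retry events.insert with exponential backoff plus jitter when the error is quota-related.
async function insertWithBackoff(auth: any, calendarId: string, event: object, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await calendar.events.insert({ auth, calendarId, requestBody: event });
    } catch (err: any) {
      const reason =
        err?.errors?.[0]?.reason ?? err?.response?.data?.error?.errors?.[0]?.reason;
      const retryable = ['quotaExceeded', 'rateLimitExceeded', 'userRateLimitExceeded'].includes(reason);
      if (!retryable || attempt === maxRetries) throw err;
      const delayMs = 1000 * 2 ** attempt + Math.random() * 1000; // 1s, 2s, 4s, ... plus jitter
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}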
Some stats:
2015 - 2,466 inserts, 186 errors
2016 - 25,747 inserts, 237 errors
2017 - 42,815 inserts, 225 errors
2018 - 41,390 inserts, 1,074 errors (990 of which are in the past 3 days)
I have updated the code over the years, but it has remained largely untouched this term.
At this point I'm unsure what to do. There is no channel to reach Google, and while I have not implemented a backoff strategy, given how this application's timing works, subsequent calls are already delayed by seconds and handled by a queue that processes requests sequentially. The only concurrent requests would be list operations.
As per this quote I found:
registration_ids – Type String array – (Optional) [Recipients of a message]
Multiple registration tokens, min 1 max 1000.
Is this the actual limit of device tokens I can send a single message to? And do messages to topics have the same limit?
ex:
{
"to": [reg_token_01, reg_token_02, ..., reg_token_1000],
"priority": "high",
"data": {
"title": "Hi Peeps!",
"message": "This is a special message for only for you... More details are available..."
}
}
As always, thanks for the info and direction!
Update: For v1, it seems that registration_ids is no longer supported. It is strongly suggested that topics be used instead.
Since FCM has GCM at its core, the maximum number of registration tokens you can send to with the registration_ids parameter is 1000. I'm pretty sure you did see that in the official documentation.
So if you still intend to use the registration_ids parameter but need to send to more than 1000 tokens, you can follow what @Eran said in his answer here:
If you need to send the same message to more than 1000 Registration IDs, you simply split the sending process into groups of 1000 Registration IDs. Each group would be sent in a separate request to GCM server.
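A sketch of that splitting approach against the legacy HTTP endpoint (TypeScript; the server key and payload are placeholders, and note that the legacy endpoint has since been deprecated in favour of the HTTP v1 API):

const FCM_LEGACY_URL = 'https://fcm.googleapis.com/fcm/send';
const SERVER_KEY = process.env.FCM_SERVER_KEY!; // legacy server key from the Firebase console

// Send the same payload to every token, at most 1000 registration_ids per request.
async function sendToAll(tokens: string[], data: Record<string, string>): Promise<void> {
  for (let i = 0; i < tokens.length; i += 1000) {
    const chunk = tokens.slice(i, i + 1000);
    const res = await fetch(FCM_LEGACY_URL, {
      method: 'POST',
      headers: { Authorization: `key=${SERVER_KEY}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({ registration_ids: chunk, priority: 'high', data }),
    });
    if (!res.ok) throw new Error(`FCM request failed with status ${res.status}`);
  }
}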
However, when it comes to topics, there is no limit. There used to be, but it was scrapped years ago. I have mentioned it in my previous answers before:
Answer 1:
Nope. As per their blog last December 2015:
We’re now happy to announce that we’re allowing unlimited free topics for your app. This means app developers can place an unlimited number of devices within each topic and create an unlimited number of topics.
Answer 2:
Nope. Seeing that FCM has GCM as its core, there is no limit on the number of topics for any app. There used to be a 1 million limit, but it was removed. You can refer to this Google Developers Blog post for that.
Also, when creating a Topic in FCM, it would seem that it takes a day for it to be available, as per this post.
Apparently, there are legacy APIs to achieve this. See: Send FCM message to multiple registration tokens.
The sendToDevice method accepts an array of registration tokens.
Before I tackle this solution, I wanted to run it by the community to get feedback.
Questions:
Is my approach feasible? i.e. can it even be done this way?
Is it the right/most efficient solution?
If it isn’t the right solution, what would be a better approach?
Problems:
I need to send mass emails through the application.
The shared hosting server only permits a maximum of 500 emails per hour before the sender gets labeled a spammer.
The server times out while sending batch emails.
Proposed Solution:
Upon task submittal (i.e. the user provides all the necessary email information using a form and frontend template, selects the target audience, etc.), the action will:
Determine how many records (from a stored DB of contacts) the email will be sent to.
If the number of records in #1 above is more than 400:
Assign a batch number to all of these records in the DB.
Run a CRON job that (see the sketch after this list):
Every hour, selects 400 records in batch "X" and sends the saved email template until there are no more records in batch "X". Each time a batch of 400 is sent, its batch number is erased (so it won't be selected again the following hour).
If an earlier CRON job is still running (i.e. unfinished and scheduled ahead of it), the new one is placed in a queue.
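To make the hourly job concrete, here is a rough sketch of its logic: take a lock so runs don't overlap, pull up to 400 records still carrying the batch number, send, and clear the batch number as you go. It's shown in TypeScript with hypothetical db and mailer helpers purely for illustration; the real implementation would be a Symfony task using Swift Mailer:

// Hourly job: assumes `db` exposes simple query/lock helpers and `mailer.send` wraps
// whatever actually delivers one email. All names here are placeholders.
async function processHourlyBatch(db: any, mailer: any, batchId: string): Promise<void> {
  if (await db.isLocked('email_batch')) return;   // a previous run is still going: skip this one
  await db.lock('email_batch');
  try {
    // At most 400 recipients per hour, well under the 500/hour hosting limit.
    const records = await db.query(
      'SELECT * FROM contacts WHERE batch_number = ? LIMIT 400', [batchId],
    );
    for (const record of records) {
      await mailer.send(record.email, record.templateParams);
      // Clear the batch number immediately so a crash mid-run doesn't resend this recipient.
      await db.query('UPDATE contacts SET batch_number = NULL WHERE id = ?', [record.id]);
    }
  } finally {
    await db.unlock('email_batch');
  }
}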
Other clarification:
To send these emails I simply iterate over Swift Mailer using the following code:
foreach ($list as $record)
{
    // sendMemberSpam() simply wraps: sfContext::getInstance()->getMailer()->send($message);
    mailers::sendMemberSpam($record, $emailParamsArray);
}
where $list is the list of records with a batch_number of "X".
I'm not sure this is the most efficient solution: it seems to bog down the server and will eventually time out if the list or the email is long.
So, I’m just looking for opinions at this point... thanks in advance.