Insert multiple calendar.events in Google Calendar (API) - google-calendar-api

I am using the Google Calendar API, https://developers.google.com/calendar/v3/reference/events/insert, to insert events into a calendar. A single event is inserted successfully, but is there a way to insert multiple events in a single callout?

You need to use batch requests to add / delete / update multiple events.
Why use batch?
The primary reason to use the batch API is to reduce network overhead and thus increase performance: several operations travel in one HTTP request instead of one request per event.
Here is an example that uses batch to add events dynamically in JavaScript / TypeScript:
createMultipleEvents() {
  const events = [
    {
      'summary': 'sample test events1',
      'location': 'coimbatore',
      'start': {
        'date': '2018-08-29',
        'timeZone': 'America/Los_Angeles'
      },
      'end': {
        'date': '2018-08-29',
        'timeZone': 'America/Los_Angeles'
      }
    },
    {
      'summary': 'sample test events2',
      'location': 'coimbatore',
      'start': {
        'date': '2018-08-29',
        'timeZone': 'America/Los_Angeles'
      },
      'end': {
        'date': '2018-08-29',
        'timeZone': 'America/Los_Angeles'
      }
    },
  ];
  // Queue one insert request per event into a single batch.
  const batch = gapi.client.newBatch();
  events.forEach((event) => {
    batch.add(gapi.client.calendar.events.insert({
      'calendarId': 'primary',
      'resource': event
    }));
  });
  // The batch is thenable; it resolves once all queued requests complete.
  batch.then(function () {
    console.log('all jobs now dynamically done!!!');
  });
}

Global HTTP Batch Endpoints (www.googleapis.com/batch) will cease to work on August 12, 2020 as announced on the Google Developers blog. For instructions on transitioning services to use API-specific HTTP Batch Endpoints (www.googleapis.com/batch/api/version), refer to the blog post.
https://developers.googleblog.com/2018/03/discontinuing-support-for-json-rpc-and.html
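Since the global endpoint is gone, batch requests must target the API-specific endpoint (for Calendar, https://www.googleapis.com/batch/calendar/v3). Current versions of gapi.client.newBatch() handle this for you; purely as a hypothetical sketch of what goes over the wire, here is the same idea with fetch (events as in the example above; ACCESS_TOKEN is a placeholder you must supply):
// Hypothetical sketch: two inserts in one HTTP request to the
// Calendar-specific batch endpoint. Verify the details against the
// batch protocol docs before relying on it.
const boundary = 'batch_example';
const part = (event) =>
  `--${boundary}\r\n` +
  'Content-Type: application/http\r\n\r\n' +
  'POST /calendar/v3/calendars/primary/events\r\n' +
  'Content-Type: application/json\r\n\r\n' +
  JSON.stringify(event) + '\r\n';

fetch('https://www.googleapis.com/batch/calendar/v3', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${ACCESS_TOKEN}`, // placeholder
    'Content-Type': `multipart/mixed; boundary=${boundary}`,
  },
  body: events.map(part).join('') + `--${boundary}--`,
})
  .then((res) => res.text()) // the response is multipart text as well
  .then(console.log);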

As stated in this thread, if you want to insert multiple events at once, you should use batch.
var batch = gapi.client.newBatch();
batch.add(gapi.client.calendar.events.insert({
  'calendarId': 'primary',
  'resource': events[0]
}));
batch.add(gapi.client.calendar.events.insert({
  'calendarId': 'primary',
  'resource': events[1]
}));
batch.add(gapi.client.calendar.events.insert({
  'calendarId': 'primary',
  'resource': events[2]
}));
// ......
batch.then(function () {
  console.log('all jobs done!!!');
});
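If you also need the outcome of each individual insert, the batch resolves with a combined response. The shape below reflects my understanding of how the gapi client reports per-request results (a map keyed by each sub-request's batch id), so treat it as an assumption to verify:
batch.then(function (response) {
  // response.result: { <batchRequestId>: { status, result, ... }, ... }
  Object.keys(response.result).forEach(function (id) {
    var res = response.result[id];
    if (res.status !== 200) {
      console.error('insert failed for sub-request', id, res);
    }
  });
});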

Related

Flutter app - trending posts by using the Google Analytics Data API

In my app, users create posts, and I'd like to show trending posts by the number of views, comments, etc. in a specific date range. To do that, I thought I could create a custom event as below:
await FirebaseAnalytics.instance.logEvent(
  name: "trending_contents",
  parameters: {
    "content_type": EnumToString.convertToString(type),
    "content_id": contentModel.externalId,
    "action_type": "post",
    "point": 3,
  },
);
I wonder if it is possible to use the Google Analytics Data API to get trending posts for a specific date range? Or is there a better way to get trending posts than the Google Analytics Data API?
I finally found a solution for using the Google Analytics Data API to manage trending content. If anyone is looking for a solution for a similar need, here is what I've done so far:
I send a custom event in specific situations, such as when the user views the content, as below. If you name your parameters according to the predefined dimensions & metrics (see API Dimensions & Metrics), it will be easy to prepare a custom report (at least it was for me...). Later, I use contentType and contentId as dimensions and eventValue as a metric in the custom report.
await FirebaseAnalytics.instance.logEvent(
  name: "trending_contents",
  parameters: {
    "content_type": EnumToString.convertToString(event.type),
    "content_id": contentId,
    "action_type": "view",
    "value": 1,
  },
);
Lastly, I created a scheduled cloud function that runs every 6 hours and populates a Firestore collection according to the custom report results. The report returns contentIds for a specific date range, ordered by the sum of the values sent in the custom event.
P.S. You need to create a service account in the Google Cloud Console, generate JSON credentials for it, and add the file to your project (see the credentialsJsonPath variable below). Then add the service account's email address to the Google Analytics 'Property Access Management' section so it can access analytics data. For Google Analytics Data API samples, you can check their GitHub repo.
const { BetaAnalyticsDataClient } = require('@google-analytics/data');
const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp();

exports.scheduledTrendingFunction = functions.pubsub.schedule('0 */6 * * *').onRun((context) => {
  const propertyId = process.env.GA_PROPERTY_ID;
  const credentialsJsonPath = process.env.GA_CREDENTIALS_PATH;
  const analyticsDataClient = new BetaAnalyticsDataClient({
    keyFilename: credentialsJsonPath,
  });

  async function runReport(filterType) {
    // [START analyticsdata_json_credentials_run_report]
    const [response] = await analyticsDataClient.runReport({
      property: `properties/${propertyId}`,
      dateRanges: [
        {
          startDate: '3daysAgo',
          endDate: 'today',
        },
      ],
      dimensions: [
        {
          name: 'contentType',
        },
        {
          name: 'contentId',
        },
      ],
      metrics: [
        {
          name: 'eventValue',
        },
      ],
      dimensionFilter: {
        andGroup: {
          expressions: [
            {
              filter: {
                fieldName: 'eventName',
                inListFilter: {
                  values: ['trending_contents'],
                },
              },
            },
            {
              filter: {
                fieldName: 'contentType',
                inListFilter: {
                  values: [filterType],
                },
              },
            },
          ],
        },
      },
      offset: 0,
      limit: 20,
      orderBys: [
        {
          desc: true,
          metric: {
            metricName: 'eventValue',
          },
        },
      ],
    });
    // [END analyticsdata_json_credentials_run_report]

    const batch = admin.firestore().batch();
    // BATCH: delete the previous trend document before repopulating
    const trendRef = admin.firestore().collection('trends').doc(filterType);
    batch.delete(trendRef);
    const subTrendRef = admin.firestore().collection('trends').doc(filterType).collection('trendContents');
    response.rows.forEach((row, index) => {
      // BATCH: add each contentId to the trend, ranked by report order
      const contentId = row.dimensionValues[1].value;
      batch.set(subTrendRef.doc(contentId), { priority: index + 1 });
    });
    // Commit the batch
    await batch.commit();
  }

  // Return the promise so the function waits for the report and the writes.
  return runReport('book');
});
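With that in place, the client only has to read the precomputed ranking. The app in question is Flutter, but as a minimal sketch, the equivalent query with the Firebase web SDK (v9 modular API; collection names taken from the function above) would look like this:
import { getFirestore, collection, query, orderBy, limit, getDocs } from 'firebase/firestore';

// Fetch the top trending contentIds for one content type, e.g. 'book'.
async function fetchTrending(contentType) {
  const db = getFirestore();
  const q = query(
    collection(db, 'trends', contentType, 'trendContents'),
    orderBy('priority'), // priority 1 = most trending
    limit(20)
  );
  const snap = await getDocs(q);
  return snap.docs.map((doc) => doc.id); // the document ids are the contentIds
}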

Redux - how to tell the difference between unloaded content and content that doesn't exist in the back-end?

In a Redux store, for example, if I store customer data as:
{
  data: [1, 2, 3],
  customers: {
    1: { ... },
    2: { ... },
    3: { ... }
  }
}
In a component, when I try to display the customer with id 4 and see that this customer doesn't exist in the store, I attempt to fetch it with an API call. Now assume that customer 4 in fact doesn't exist in the back-end database. At the end of the API call, the store is updated and customer 4 is still not in the store.
My question is: from the component, this doesn't tell me whether the customer has not been loaded yet (in which case I need to load it again) or whether the customer in fact does not exist in the back-end database (in which case I need to display an appropriate message). How is this usually handled in Redux?
You may want to add an error field to the Redux store and update it during the API call when no customer is found in the DB.
{
  data: [1, 2, 3],
  customers: {
    1: { ... },
    2: { ... },
    3: { ... }
  },
  error: null,
}
// api.js
fetch("backend-url.com/customer/4")
  .then(res => {
    // fetch only rejects on network failure, so surface a 404 explicitly
    if (!res.ok) throw new Error("No customer found");
    return res.json(); // res.json is a method and must be called
  })
  .then(result => {
    // don't forget to clear the error in the reducer
    dispatch(addCustomer(result));
  })
  .catch(err => {
    // no customer found (or a network error)
    dispatch(storeError(err));
  });
Then in the component:
// component.js
import { useSelector } from "react-redux"; // stand-in for the answer's useReduxStore helper
...
const { customers, data, error } = useSelector(state => state);
return (
  <div>
    {error ? <Error message={error} /> : <Customers customers={customers} />}
  </div>
);
You could check in your component, after loading a customer, whether the customer is an empty object {}: then it's loaded but doesn't exist. If it's null, you still need to fire a Redux action to load it.
If you got an error, then you might display a generic message like "Something went wrong", which could be an internet-connection issue or an internal issue in your backend:
export const getCustomer = () => dispatch => {
  dispatch({ type: 'GET_CUSTOMER_START' });
  axios
    .get('your api end point')
    .then(res => {
      const customer = res.data;
      dispatch({
        type: 'GET_CUSTOMER_SUCCESS',
        payload: {
          // fall back to {} so "loaded but missing" differs from null
          customer: customer || {},
        },
      });
    })
    .catch(error => {
      dispatch({
        type: 'GET_CUSTOMER_FAIL',
        payload: { error },
      });
    });
};
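For completeness, here is a minimal reducer sketch for those three action types (the state shape and names are my assumptions, not from the answer above); the point is that customer === {} means "loaded but not found" while null means "not loaded yet":
const initialState = { customer: null, loading: false, error: null };

export const customerReducer = (state = initialState, action) => {
  switch (action.type) {
    case 'GET_CUSTOMER_START':
      return { ...state, loading: true, error: null };
    case 'GET_CUSTOMER_SUCCESS':
      // {} = loaded but missing in the back-end; null = never loaded
      return { ...state, loading: false, customer: action.payload.customer };
    case 'GET_CUSTOMER_FAIL':
      return { ...state, loading: false, error: action.payload.error };
    default:
      return state;
  }
};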

Next.js and Workbox integration

Requirement: I am trying to use a service worker to cache static files, so as to reduce HTTP requests and improve site performance.
Down the line I would add offline support, caching of images, APIs, etc.
I have seen the plugins:
https://github.com/hanford/next-offline and
https://www.npmjs.com/package/next-pwa
They seem to work, although I was trying to find examples of Next.js + Workbox specifically.
Next.js does have an example: https://github.com/vercel/next.js/tree/canary/examples/with-next-offline. But I would like to use just Workbox for this.
Anyone got any working examples? Even a basic one would do.
Currently I am not using a custom server, just the built-in Next.js builder (https://nextjs.org/docs/getting-started#manual-setup).
I figured out an answer on my own.
Reference: https://developers.google.com/web/tools/workbox/reference-docs/latest/module-workbox-build#.generateSW
I set up runtime caching for my app and register the generated Workbox service worker in the app's base file:
// Use the window load event to keep the page load performant
useEffect(() => {
  window.addEventListener("load", () => {
    const serviceWorkerScope = `/${country}/workbox-worker.js`
    navigator.serviceWorker
      .register(serviceWorkerScope)
      .then(() => {
        logger.info(`Service worker registered at ${serviceWorkerScope}`)
      })
      .catch(error => {
        logger.error("Error in serviceWorker registration: ", error)
      })
  })
}, []) // empty deps: register once on mount instead of on every render
Here is the build script, with comments added:
// File to generate the service worker.
require("dotenv").config()
const workboxBuild = require("workbox-build")

const { COUNTRY: country, NODE_ENV } = process.env
const urlPattern = new RegExp(`/${country}\/static|_next\/.*/`)

// https://developers.google.com/web/tools/workbox/reference-docs/latest/module-workbox-build#.generateSW
const buildSW = () => {
  return workboxBuild.generateSW({
    swDest: "public/workbox-worker.js",
    clientsClaim: true,
    mode: NODE_ENV,
    skipWaiting: true,
    sourcemap: false,
    runtimeCaching: [
      {
        urlPattern: urlPattern,
        // Apply a cache-first strategy.
        handler: "CacheFirst",
        options: {
          cacheName: "Static files caching",
          expiration: {
            maxEntries: 50,
            maxAgeSeconds: 15 * 60, // 15 minutes
          },
        },
      },
    ],
  })
}

buildSW()
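To wire this in without a custom server, one option is to run the script after next build; the file name workbox-build.js is only my assumption for whatever you saved the script above as:
// package.json (scripts section)
"scripts": {
  "build": "next build && node workbox-build.js"
}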

Issue with sending LiveChat messages via DDP in RocketChat

I am trying to use the DDP Realtime API to initiate a LiveChat conversation, but I am facing issues.
https://rocket.chat/docs/developer-guides/realtime-api/livechat-api
I am doing all the steps as per the documentation. In the first API outcome below, you can see that it says numAgents: 2 and online: true. However, when I try to send a message to the same department, it says: "Sorry, no online agents".
Is there a way to find out the problem?
Result of livechat:getInitialData
{ enabled: true,
  title: 'xyz.com',
  color: '#C1272D',
  registrationForm: false,
  room: null,
  triggers: [],
  departments:
   [ { _id: 'CxCTgXL4csw3TcW6S',
       enabled: true,
       name: 'Support',
       description: '',
       numAgents: 2,
       showOnRegistration: true,
       _updatedAt: 2017-09-24T06:46:39.657Z } ],
  allowSwitchingDepartments: true,
  online: true,
  offlineColor: '#666666',
  offlineMessage: 'We are not online right now. Please leave us a message:',
  offlineSuccessMessage: '',
  offlineUnavailableMessage: '',
  displayOfflineForm: true,
  videoCall: true,
  offlineTitle: 'Leave a message',
  language: '',
  transcript: false,
  transcriptMessage: 'Would you like a copy of this chat emailed?' }
Result of livechat:registerGuest
{ userId: 'j65Cp5peeLJLYhWQi',
  token: 'J8IpnpB1yN1AYtO0e0EzLhuaRhe0zaZkjHBAamsehSO' }
Result of Login
{ id: 'j65Cp5peeLJLYhWQi',
  token: 'J8IpnpB1yN1AYtO0e0EzLhuaRhe0zaZkjHBAamsehSO',
  tokenExpires: 2017-12-23T07:45:01.928Z }
Result of sendMessageLivechat
{ isClientSafe: true,
  error: 'no-agent-online',
  reason: 'Sorry, no online agents',
  message: 'Sorry, no online agents [no-agent-online]',
  errorType: 'Meteor.Error' }
These are the parameters I am sending to sendMessageLivechat:
"_id": "j65Cp5peeLJLYhWQi",
"rid": "a_random_string",
"msg": "Hello",
"token": "J8IpnpB1yN1AYtO0e0EzLhuaRhe0zaZkjHBAamsehSO"
Could someone help me?
This is how I called registerGuest:
ddpClient.call("livechat:registerGuest", [{ "token": authToken, "name": "test1", "email": "test2@gmail.com", "department": department._id }, 25], function (err, info) {
});
The token passed here is the admin's authToken.
The ddpClient object is obtained using the DDP npm package.
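For context, a minimal connection sketch with that ddp npm package (host and option names as I understand that package's API; treat them as assumptions):
var DDPClient = require("ddp");

var ddpClient = new DDPClient({
  host: "your-rocketchat-host", // placeholder
  port: 443,
  ssl: true
});

ddpClient.connect(function (err) {
  if (err) return console.error("DDP connection failed", err);
  // ddpClient.call("livechat:registerGuest", [...], callback) from here
});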
I solved this with a combination of:
- setting the bot as livechat agent & manager at the same time (I've read that tip somewhere; it might be nonsense)
- in Admin -> Omnichannel -> Routing, enabling 'accept even when no agents are online' (since my bot was never online, but it was replying when direct-messaged) + 'assign bot agents to new conversations'
- setting up livechat-manager + livechat-agent roles for myself but staying in a different department, so that I can take over
The Rocket.Chat realtime API docs are quite out of date; I only got stream-room-messages working because of a random forum post. Generally, registerGuest works with very minimal parameters as well, namely a random, self-generated token + a name.
Here's my code for the complete setup:
async subscribeToLiveRoom(message) {
  var _self = this
  // let initial = await this.api
  //   .call("livechat:getInitialData", [token])

  // register a guest with a random, self-generated token
  const token = this.randomString()
  var guestUser = await this.api
    .call(
      'livechat:registerGuest',
      [{
        token: token,
        name: _self.$auth.user.name
      }]
    )
    .catch(console.error)
  console.log('guest', guestUser.visitor.token)
  this.setActiveGuest(guestUser)

  var roomId = this.randomString()
  this.setActiveRoom(roomId)

  let msg = await this.api
    .call(
      'sendMessageLivechat',
      [{
        _id: _self.randomString(),
        rid: roomId,
        msg: message,
        token: guestUser.visitor.token
      }])
    .catch(console.error)

  // room status changes (e.g. agent switches) arrive on this stream
  try {
    let liveStream = await this.$subscribe("stream-livechat-room", [
      roomId,
      {
        "useCollection": true,
        "args": [
          {
            "visitorToken": guestUser.visitor.token
          }
        ]
      }
    ])
    this.msgLive = await this.find('stream-livechat-room')
  } catch (e) {
    console.log(e)
  }

  // the actual room messages arrive on this stream
  try {
    var roomStream = await this.$subscribe("stream-room-messages", [
      roomId,
      {
        "useCollection": true,
        "args": [
          {
            "visitorToken": guestUser.visitor.token
          }
        ]
      }
    ])
    console.log('roomstream')
    var update = this.find('stream-room-messages')
  } catch (e) {
    console.log('an error occurred', e)
  }
  console.log(this.msg)
},
async sendToLiveRoom(message, rId) {
  var _self = this
  let msg = await this.api
    .call(
      'sendMessageLivechat',
      [{
        _id: _self.randomString(),
        rid: rId,
        msg: message,
        token: _self.guest.visitor.token
      }])
    .catch(console.error)
},
By the way, since it's not well documented: you receive room messages in livechat rooms by subscribing to stream-room-messages, while you get room status changes (like a switch to another agent) by subscribing to stream-livechat-room.

Meteor: filtering a complete collection that is published with limit for lazy loading

I have a collection of posts that I want to load in chunks, aka pagination/lazy loading. But I also want to:
- search all posts
- alternatively load and show all posts of that collection that have an attribute 'important', without limit
I didn't get along with multiple collections or subscriptions yet, so instead of setting the limit in .publish() in the server code, I set it on the client side. It looks like this:
// Server side
export const Posts = new Mongo.Collection('posts');

if (Meteor.isServer) {
  Meteor.publish('posts', function () {
    return Posts.find({}, {
      sort: { createdAt: -1 }
    });
  });
}

// Client side
export default createContainer(() => {
  Session.setDefault('lazyloadLimit', 10);
  Meteor.subscribe('posts'); // subscription name normalized to match the publication
  return {
    posts: Posts.find({}, { sort: { createdAt: -1 }, limit: Session.get('lazyloadLimit') }).fetch(),
    importantPosts: Posts.find({ important: true }, { sort: { createdAt: -1 } }).fetch(),
    importantPostsCount: Posts.find({ important: true }).count(),
  };
}, App);
The results look right, but now my main question is: does this load all posts to the client, or just those within the limit? I guess it loads all of them, making the limit useful only for rendering time, not for bandwidth.
Just for completeness, my previous attempt looked like this, but I couldn't get to the ImportantPosts collection:
// Server side
export const Posts = new Mongo.Collection('posts');
export const ImportantPosts = new Mongo.Collection(null); // <- where do I put this?

if (Meteor.isServer) {
  Meteor.publish('posts', function (limit) {
    return Posts.find({}, {
      limit: limit,
      sort: { createdAt: -1 }
    });
  });
  Meteor.publish('importantPosts', function () {
    return Posts.find({ important: true }, {
      sort: { createdAt: -1 }
    }); // <- how do I get these into ImportantPosts-collection?
  });
}

// Client side
export default createContainer(() => {
  Session.setDefault('lazyloadLimit', 10);
  Tracker.autorun(function () {
    Meteor.subscribe('posts', Session.get('lazyloadLimit'));
  });
  Meteor.subscribe('importantPosts');
  return {
    posts: Posts.find({}, { sort: { createdAt: -1 }, limit: Session.get('lazyloadLimit') }).fetch(),
    importantPosts: ImportantPosts.find({}, { sort: { createdAt: -1 } }).fetch(),
    importantPostsCount: ImportantPosts.find({}).count(),
  };
}, App);
Let's talk about what these two publications do, from your earlier attempt, but calling the collection 'posts':
export const Posts = new Mongo.Collection('posts');

Meteor.publish('posts', function (limit) {
  return Posts.find({}, {
    limit: limit,
    sort: { createdAt: -1 }
  });
});

Meteor.publish('importantPosts', function () {
  return Posts.find({ important: true }, {
    sort: { createdAt: -1 }
  }); // <- how do I get these into ImportantPosts-collection?
});
The first clearly publishes all posts. The second publishes a subset of all posts; as is, the second is redundant. (From the client, you would do all your processing on the collection 'posts'.) Meteor calls this the "merge box": it combines all published items from the same collection into the same "box", i.e. it's the union of all the publishes from the same collection.
The downside of that first publish is that you are publishing all the posts to all clients just so you can do a search. That could potentially be a lot of data. You already mentioned paging, so conceptually that will help with what you're trying to do.
Looking at the code pasted above, you're thinking of the two publish statements as two separate collections, when really they're one. So let's proceed with that in mind.
You want to do all of this:
1. page posts
2. show all important posts
3. search through posts
Let's start with #3. I will assert that searching should be done on the server. You can write a Meteor method that runs the search with the user's search term and, instead of returning that data, simply publishes the results (a sketch follows below). Meteor's merge box will ensure your client now has those posts, in addition to whatever posts it already has.
So upon searching, your client need merely filter by the search value, and it should just work.
Now for #2: you already have the publish written, filtering by important=true. If you subscribe to that on the client, those results are written to the merge box.
That leaves #1, which you already have written. As your user pages through posts, all of those will be written to the merge box.
So your client, in addition to subscribing to two things and making a method call, now just has to handle the filtering based on the user either searching, asking to see the important ones, or both (i.e. searching through just the important ones).
And all of these operations will be on the collection 'posts'.
The end result of doing it this way is that instead of loading all the posts at once, you're loading them incrementally as the user pages, so the user won't get all the posts unless they go through all the pages.
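As a minimal sketch of that server-side search, written here as a plain publication for simplicity rather than the method described above (the 'title' field and the limit are assumptions, not from the question):
// Server side
Meteor.publish('postSearch', function (term) {
  check(term, String);
  // 'title' is an assumed field name; adjust to your schema
  return Posts.find(
    { title: { $regex: term, $options: 'i' } },
    { sort: { createdAt: -1 }, limit: 50 }
  );
});

// Client side: subscribe when the user searches; results land in the merge box
Meteor.subscribe('postSearch', searchTerm);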
A solution I just found that seems to work (without me fully understanding how) is Find From Publication, which looks like this:
$ meteor add percolate:find-from-publication
// Server side
export const Posts = new Mongo.Collection('posts');

if (Meteor.isServer) {
  Meteor.publish('posts', function (limit) {
    return Posts.find({}, {
      limit: limit,
      sort: { createdAt: -1 }
    });
  });
  FindFromPublication.publish('importantPosts', function () {
    return Posts.find({ important: true }, {
      sort: { createdAt: -1 }
    });
  });
}

// Client side
export default createContainer(() => {
  Session.setDefault('lazyloadLimit', 10);
  Tracker.autorun(function () {
    Meteor.subscribe('posts', Session.get('lazyloadLimit'));
  });
  Meteor.subscribe('importantPosts');
  return {
    posts: Posts.find({}, { sort: { createdAt: -1 }, limit: Session.get('lazyloadLimit') }).fetch(),
    importantPosts: Posts.findFromPublication('importantPosts', {}, { sort: { createdAt: -1 } }).fetch(),
    importantPostsCount: Posts.findFromPublication('importantPosts', {}).count(),
  };
}, App);
