Cronofy available_periods does not do anything

I am trying to get a user's availability, but in one use case I want to ignore their actual availability rules and even their current schedule and calendar. Basically, I am using Cronofy in this use case just to provide me a list of times.
According to the docs https://docs.cronofy.com/developers/api/scheduling/availability/
I should be able to specify participants.members.available_periods.start and participants.members.available_periods.end. I've read and re-read these params over and over and am sure I am sending them as specified, but Cronofy still returns only times when the user is not "busy".
Am I still not understanding this param? Is there another way to ignore a user's calendar, i.e. ignore their "busy" time slots?

The intent of the participants.members.available_periods parameter is to define a specific participant's availability hours for the query. This is useful in multi-person queries when one or more participants have ad-hoc shift patterns or other complicated working hours. You can choose to specify these here, or use our Available Periods endpoint along with Managed Availability to have the availability query consider the latest set of Available Periods when it is run.
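For illustration, a member entry using available_periods might look roughly like this sketch (the sub value is a placeholder; field names follow the Availability docs linked in the question):

// One member of a participants group, with per-query availability hours.
// These periods constrain when the member counts as available, in
// addition to (not instead of) their calendar's busy times.
const member = {
  sub: "acc_1234567890", // placeholder account id
  available_periods: [
    { start: "2024-07-01T09:00:00Z", end: "2024-07-01T12:00:00Z" },
    { start: "2024-07-01T14:00:00Z", end: "2024-07-01T17:00:00Z" },
  ],
};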
The Availability query isn't designed to ignore all calendar events for an individual participant but there is another way you can achieve what you're looking for.
Application Calendars are designed as drop-in replacements for synced-calendar Accounts in Cronofy. So you can create one or more of these via the API and use them in the Availability query as a stand-in for the participant.
They still support Managed Availability and can have events created in them. So if you wanted to ensure that your application doesn't double-book over the events it already knows about, you can just create the events as your application books them.
I hope this helps. Our support team (support@cronofy.com) are always happy to talk through the specifics of your use case if that would be helpful.
-- UPDATE --
We've decided to support this as a first class concept in our API.
You can now pass an empty array to the participants.members.calendar_ids attribute to indicate that you don't want any calendars included within the availability query. Thus only the Availability Rules or query periods will be considered. Thanks for the question.
More information is available in the Cronofy documentation.
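For example, a query that ignores the member's calendars entirely might look like this sketch (Node.js with fetch, run inside an async function; the token and sub are placeholders, and query_periods was named available_periods in older versions of the API):

// POST an Availability query with an empty calendar_ids array so the
// member's calendar events are not considered at all.
const response = await fetch("https://api.cronofy.com/v1/availability", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${ACCESS_TOKEN}`, // placeholder token
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    participants: [{
      members: [{
        sub: "acc_1234567890", // placeholder account id
        calendar_ids: [],      // empty array: consider no calendars
      }],
      required: "all",
    }],
    required_duration: { minutes: 30 },
    query_periods: [
      { start: "2024-07-01T09:00:00Z", end: "2024-07-01T17:00:00Z" },
    ],
  }),
});
const availability = await response.json();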

Related

How to find which kinds are not being used in Google Datastore

Is there any way to list the kinds that are not being used by our App Engine app in Google's Datastore, without having to look into our code and/or logic? :)
I'm not talking about indexes, which I can list by issuing
gcloud datastore indexes list
and then comparing with datastore-indexes.xml or index.yaml.
I tried checking Datastore kind statistics and other metadata, but I could not find anything useful to help me on this matter.
Should I give up on Datastore providing useful stats, and code something that keeps collecting statistics (like data size) over a long period, to get at least a clue about which kinds are not being used, and only after this research look into our app code to see whether a kind's model was removed?
Example:
SELECT bytes FROM __Stat_Kind__
Store it somewhere and keep updating it for a period. If the kind's byte size does not change, the kind is probably not being used anymore (see the sketch below).
The idea is to do some cleaning in Datastore.
I would like to find which kinds are not being used anymore, or haven't been used for a long time, or were created manually to be used once. You know, like a table in Oracle that no one knows what it is used for, where the statistics show it was only used once, five years ago. I'm trying to achieve the same in Datastore: I want to know which kinds are not being used anymore, or were last used a while ago, then ask around and back them up/delete them if no owner is found.
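For what it's worth, a sketch of that snapshot idea with the Node.js Datastore client, assuming the built-in __Stat_Kind__ statistics entities are populated for your project (they carry properties such as kind_name, bytes and count):

// Read the per-kind statistics entities and reduce them to a snapshot
// that can be stored and diffed between runs.
const { Datastore } = require("@google-cloud/datastore");
const datastore = new Datastore();

async function snapshotKindStats() {
  const [stats] = await datastore.runQuery(
    datastore.createQuery("__Stat_Kind__")
  );
  // Persist this somewhere (a file, another kind, BigQuery, ...) and
  // compare bytes/count across runs: a kind that never changes is a
  // candidate for the "not used anymore" list.
  return stats.map((s) => ({
    kind: s.kind_name,
    bytes: s.bytes,
    count: s.count,
  }));
}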
It's an interesting question.
I think you would be best placed to audit your code, and to instill an organizational practice that requires this documentation to be maintained going forward as a business/technical pre-production requirement.
IIRC, Datastore doesn't automatically timestamp entities, and keys (rightly) aren't incremental. So there appears to be no intrinsic mechanism to track changes, short of taking a snapshot (expensive) and comparing your in-flight and backup copies for changes (also expensive, and inconclusive).
One challenge with identifying a Kind that appears to be non-changing is that it could be referenced (rarely) by another Kind and so, while it does not change, it is required.
Auditing your code and documenting it for posterity should not only give you a definitive answer (and identify owners), it also pays off significant technical debt and heads off this problem and similar future requirements (e.g. GDPR-like ones) as they arise.
Assuming you are referring to records being created/updated, I can think of the following options:
Via the Cloud Console (Datastore > Dashboard) - this lists all your kinds and the number of records in each. Theoretically, you can take a screenshot and compare the counts later, so you know which kinds have grown and which have not.
Use of Created/LastModified date columns - I usually add these two columns to most of my Datastore kinds. If you have them, you can write a function that queries them: for example, sort each kind in descending order of creation (or last-modified) date and pull only the first record. That tells you the last time a record was created or modified, as sketched below.
I would write a function as part of my App, put it behind a page which requires admin privilege (only app creator can run it) and then just clicking a link on my App would give me the information.
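A rough sketch of that last-modified lookup with the Node.js Datastore client, e.g. as the function behind such an admin page (the kind names and the LastModified property are hypothetical; adapt to your schema):

// For each kind, fetch the single most recently modified record.
const { Datastore } = require("@google-cloud/datastore");
const datastore = new Datastore();

const KINDS = ["Task", "Invoice", "AuditLog"]; // hypothetical kind names

async function reportLastTouched() {
  for (const kind of KINDS) {
    const query = datastore
      .createQuery(kind)
      .order("LastModified", { descending: true })
      .limit(1);
    const [[latest]] = await datastore.runQuery(query);
    console.log(kind, "last touched:", latest ? latest.LastModified : "never");
  }
}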

In Firebase, can I trigger a conversion only if an event has a certain parameter?

I have a Firebase Event called conversion_performed. It has a parameter success.
Can I trigger a conversion only if the parameter success is "true"? Or do I need to have two different events: conversion_succeeded and conversion_failed?
Well, after reading some documentation: Google Analytics for Firebase provides event reports that help you understand how users interact with your app. You can create custom tags to track user behavior, and custom analytics triggers to process each event properly.
In particular, there is a kind of tag called a conversion. By definition, conversions are classified into macro and micro, and I think this classification is important for your question.
In the case of a macro conversion, if your trigger processes a large amount of information and performs multiple operations over a single event/conversion, I think you should separate it into two tags to isolate each processing path. Likewise, if you need to follow very different processing paths depending on success, I strongly suggest splitting the event in two.
On the other hand, in the case of a micro conversion, if the trigger does not do much work and knowing whether the conversion succeeded is part of a single processing path, I suggest keeping just one conversion trigger.
In a nutshell, it depends entirely on the nature of your problem, and I suggest two criteria for deciding whether to separate your conversion trigger:
Heavy processing - separate your tags in app/web, or create several pub/sub triggers to split the processing.
Isolated behavior - if the execution of the trigger is totally different depending on the success of the conversion, I recommend two conversions and two triggers (see the sketch below).
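If you do split the event in two, the client-side logging is simple. A minimal sketch with the Firebase JS SDK (namespaced API; reportConversion is a hypothetical helper):

// Log distinct events so each can be marked as a conversion on its own.
var analytics = firebase.analytics();

function reportConversion(success) {
  if (success) {
    analytics.logEvent("conversion_succeeded");
  } else {
    analytics.logEvent("conversion_failed");
  }
}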
I have completely changed this answer to try to go deeper into your problem; I think the first version was too shallow. I hope this answer is helpful to you.

Updating database at certain time

I'm looking to make my Firebase Database update at a particular time.
The way it should work: for a group, the leader sets a deadline time. The group votes on some stuff. At the deadline, I would like the database to automatically tabulate the votes and store the result.
I'm not sure how to set up this kind of rule for the database without doing a check whenever a member of the group is online and refreshes their feed. That approach would also allow any member to write to the vote-result field, which seems bad when I want it to be automatic. It seems like there should be an easier way, but I just can't find anything.
The other option seems to be a separate server that tracks all the time frames and sends an update request when the allotted time has passed. But it seems like Firebase should have this built in. I'm sure I'm missing something. Thanks in advance.
EDIT: Here is a more comprehensive look at my use case. I am looking into cron-style solutions now, as I think they may solve my problem, but I don't know.
1) Leader creates a group and invites friends to it. An event is created in the Firebase database. The group is created with a specific deadline.
2) Before the deadline, the leader and friends can vote on certain options; basically they submit a dictionary of their votes to the database.
3) On the deadline, I either just need to change the state of the group (from voting to closed) or calculate the vote result. Same problem either way: I don't know how to do it at a specific time without relying on user clients (see the sketch below).
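For what it's worth, this is the kind of job a scheduled Cloud Function handles well: the sweep runs server-side, so no client ever writes the vote result. A rough sketch (Node.js with firebase-functions; the groups/deadline/state/votes fields are hypothetical, and deadlines are assumed to be millisecond timestamps):

// Close groups whose deadline has passed and tabulate votes server-side.
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.closeExpiredGroups = functions.pubsub
  .schedule("every 5 minutes")
  .onRun(async () => {
    const now = Date.now();
    const snapshot = await admin
      .database()
      .ref("groups")
      .orderByChild("deadline")
      .endAt(now)
      .once("value");

    const updates = {};
    snapshot.forEach((group) => {
      if (group.child("state").val() === "voting") {
        // Count each member's vote; votes are assumed to be simple values.
        const tally = {};
        group.child("votes").forEach((vote) => {
          const choice = vote.val();
          tally[choice] = (tally[choice] || 0) + 1;
        });
        updates[`${group.key}/state`] = "closed";
        updates[`${group.key}/result`] = tally;
      }
    });
    return admin.database().ref("groups").update(updates);
  });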

Firebase data structure and url to use

I'm really new to Firebase and want to try out a simple mixed-client app on it (Android, JS). I have a users table and a tasks table. The very first question that comes to mind is how to store them (and thus what the URLs should be). For example, for the tasks table, should I use:
/tasks/{userid}/task1, /tasks/{userid}/task2, ...
Or
/{userid}/tasks/task1, /{userid}/tasks/task2, ...
The next question, based on the answer to the first one: why use one version over the other?
In my opinion, the first version is good because domains are separated.
The second approach is good because data is stored per-user which may make some of the operations easier.
Any ideas/suggestions?
Update: For the current case, let's say there are the following features:
show list of tasks for each user
add new task to the list
edit/delete a task by user.
Simple operations.
This answer might come in late, but here's how I feel about the question after a year's experience with Firebase.
For your very first question, it totally depends on which data your application will mostly read, and how and in which order (kind of like sorting) you expect to read it.
Your first proposed data structure, "/tasks/{userid}/task1", "/tasks/{userid}/task2", ... is good if the application will oftentimes read tasks per user, with the added advantage of being able to sort the data by any task "attribute", if I might call it so.
Say each task has a priority attribute; then:

// Get all of a user's tasks with a priority of 25.
// Note the backticks: ${auth.uid} only interpolates in a template literal.
var userTasksRef = firebase.database().ref(`tasks/${auth.uid}`);
userTasksRef.orderByChild("priority").equalTo(25).on(
  "value", // or whichever event you need, e.g. "child_added"
  (snapshot) => {
    // do something with the matching tasks here.
  });
2. I would strongly advise against the second approach. Generally, most if not all of the data associated with a user will be stored under the "/{userid}/" node, and with Firebase's read mechanism, if you ever need a single datum at that path level, fetching it will pull down all the other data associated with that user's node (tasks and everything else included). I wouldn't want that behavior in my database. Nonetheless, this approach still permits you to store tasks per user, or to make multiple RESTful requests collecting the required data datum by datum. Consider fanning out the data structure if you hit this situation. It is a totally valid structure only if there is no use case where a single datum at the first level of the path is needed on its own, rather than the whole block of data at that level along with everything beneath it (i.e. the 2nd, 3rd, ... levels).
As for the use cases you've described, if the database structure you've given is exhaustive, I'd say it isn't enough to cover them.
I suggest reading the Firebase documentation; it is great and exhaustive.
All in all, the first approach is the better way to model this use case in NoSQL, and more specifically in Firebase's NoSQL database.
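And for completeness, a minimal write under the first structure, matching the read example above (same namespaced JS SDK; the task fields are made up):

// Create a new task under the current user's branch of /tasks.
var newTaskRef = firebase.database().ref(`tasks/${auth.uid}`).push();
newTaskRef.set({
  title: "Write the report", // hypothetical fields
  priority: 25,
  done: false,
});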

How to build large/busy RSS feed

I've been playing with RSS feeds this week, and for my next trick I want to build one for our internal application log. We have a centralized database table that our myriad batch and intranet apps use for posting log messages. I want to create an RSS feed off of this table, but I'm not sure how to handle the volume- there could be hundreds of entries per day even on a normal day. An exceptional make-you-want-to-quit kind of day might see a few thousand. Any thoughts?
I would make the feed a static file (you can easily serve thousands of those), regenerated periodically. Then you have a much broader choice of implementation, because generation doesn't have to run in under a second; it can even take minutes. And users still get perfect download speed and a reasonable update frequency.
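A sketch of that static-file approach in Node.js (fetchRecentLogEntries is a hypothetical function that reads your central log table, and XML escaping is omitted for brevity):

// Rebuild the feed file on a timer; the web server serves it statically.
const fs = require("fs");

async function regenerateFeed() {
  const entries = await fetchRecentLogEntries(); // hypothetical
  const items = entries
    .map((e) => `<item><title>${e.title}</title><pubDate>${e.date}</pubDate></item>`)
    .join("\n");
  const rss =
    `<?xml version="1.0"?>\n` +
    `<rss version="2.0"><channel><title>App log</title>${items}</channel></rss>`;
  fs.writeFileSync("/var/www/feeds/app-log.xml", rss);
}

setInterval(regenerateFeed, 5 * 60 * 1000); // regenerate every five minutes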
If you are building a system with notifications that must not be missed, then a pub-sub mechanism (using XMPP, one of the other protocols supported by ApacheMQ, or something similar) will be more suitable than a syndication mechanism. You need some measure of coupling between the system that is generating the notifications and the ones consuming them, to ensure that consumers don't miss notifications.
(You can do this using RSS or Atom as a transport format, but it's probably not a common use case; you'd need to vary the notifications shown based on the consumer and which notifications it has previously seen.)
I'd split up the feeds as much as possible and let users recombine them as desired. If I were doing it I'd probably think about using Django and the syndication framework.
Django's models could probably handle representing the data structure of the tables you care about.
You could have a URL pattern that captures the whole tail of the path and splits it in the view, something like: r'^rss/(?P<feeds>[\w-]+(?:/[\w-]+)*)/$' (Django only hands you the last match of a repeated group, so capturing the tail as one string is simpler; untested, so it might not be perfect).
That way you could use URLs like:
http://feedserver/rss/batch-file-output/
http://feedserver/rss/support-tickets/
http://feedserver/rss/batch-file-output/support-tickets/ (the first two combined into one)
Then in the view:
def get_batch_file_messages():
    # Grab all the recent batch file messages here.
    # Maybe cache the result and only regenerate every so often.
    ...

# Other feed functions here.

feed_mapping = {'batch-file-output': get_batch_file_messages}

def rss(request, feeds):
    items_to_display = []
    for feed in feeds.split('/'):
        items_to_display += feed_mapping[feed]()
    # Processing/returning the feed.
Having individual, chainable feeds means that users can subscribe to one feed at a time, or merge the ones they care about into one larger feed. Whatever's easier for them to read, they can do.
Without knowing your application, I can't offer specific advice.
That said, it's common in these sorts of systems to have a level of severity. You could have a query-string parameter that you tack onto the end of the URL to specify the severity. If set to "DEBUG" you would see every event, no matter how trivial; if set to "FATAL" you'd only see the events that were "system failure" in magnitude.
If there are still too many events, you may want to subdivide your events into some sort of category system. Again, I would make this a query-string parameter.
You can then have multiple RSS feeds for the various categories and severities. This should allow you to tune the volume of alerts you receive to an acceptable level.
In this case, it's more of a manager's dashboard: how much work was put into support today, is there anything pressing in the log right now, and, first thing in the morning, a measure of what went wrong with the overnight batch jobs.
Okay, I decided how I'm going to handle this. I'm using the timestamp field on each row and grouping by day. It takes a little SQL-fu since, of course, there's a full timestamp there and I need to be semi-intelligent about how I pick the log message to show from within each group, but it's not too bad. Further, I'm building it to let you select which application to monitor, and then showing every message (max 50) from a specific day.
That gets me down to something reasonable.
I'm still hoping for a good answer to the more generic question: "How do you syndicate many important messages, where missing a message could be a problem?"
