Drupal Date repetition interval inheritance - drupal

I need a global interval template for recurring events.
I am constructing a schedule management web app. I have a set of events happening periodically up to a certain moment in time. For example, I have train schedules: they repeat themselves every week during certain winter or summer periods. Using the Date module, I would have to enter the beginning and ending dates of, say, the summer period for each train route.
What I want to do is simply add a taxonomy term which would hold the repetition interval, with some holiday exclusions...
This is more of a Drupal philosophy question, yet I think others have run into similar issues before.
Maybe I am looking at this problem from the wrong angle; could somebody lend me a fresh set of eyes?

Related

Configuring WooCommerce Bookings for purchasing a preset number of sessions/hours

I'm developing a site for a tutoring service and they have 3 types of services they sell:
Hourly Tutoring
Package Deals (Buy 10 hours, get a reduced hourly price, customer chooses when to use each of the 10 hours)
Study Courses (Sign up for a class with 10 preset meeting dates; the customer doesn't get to choose the dates, they are already set).
WCB works great already for Service 1, where people can just pick an hour or more and book it.
I'm wondering if someone could give me any insight as to the best way to setup service 2 where someone can buy a set number of hours and then request dates for them.
And for service 3, can this be done through WCB or would it be better to just make a WooCommerce product and put the dates in the product description?
I'm new to WCB and it seems like it can do this, having someone book 10 sessions/hours and then schedule them at a later date.
I'm just not sure the best way to do it. Thanks in advance for any help!

How can I use the occurrences to calculate the end date of a recurring event in EAS

Can anyone tell me the best way of calculating the end date of a recurring event from the number of occurrences and the pattern in which the event occurs?
For example:
I have an event which has a start date of 10/07/2014 (a Tuesday) and occurs every week on Tuesday. This event will end after 10 occurrences (say). So my method should return the end date 12/09/2014.
The method should also handle more complex situations, for example an event that occurs yearly on the first Monday of October and has a total of 10 occurrences.
(This isn't an answer which gives you a complete solution by any means, but hopefully it's a step in the right direction.)
Good luck. I've worked on an ActiveSync implementation, and recurrent events are fundamentally painful. You'll need to think about all kinds of corner cases - if something occurs every month on the 30th, what happens in February? What happens if it happens at 1.30am, and the clocks go forward or backward in the event's time zone so that 1.30am happens 0 or 2 times for a particular day?
Noda Time can help with this, but it doesn't provide a complete solution, partly because all the requirements will vary so much.
The important types you'll need to know about are LocalDate and LocalDateTime to provide time-zone-neutral dates/times, and Period which represents a not-necessarily-fixed period of time, such as "1 month". That will help with things like "add a week" - and there are methods on LocalDate for things like "next Monday after this date". It gets harder for events which are "weekly, on Monday and Wednesday" - you'll want to step through the weeks, working out which days occur within a particular week, until you've gone through all the events you need.
Noda Time 2.0 has the concept of "adjusters" which will make life somewhat simpler for things like "the first Monday of October" but everything you need to do can be done with Noda Time 1.3. (Don't wait for Noda Time 2.0, which I wouldn't expect to be released for another 6 months at least.)
I think my strongest pieces of advice would be:
Keep it simple. Focus on getting the right results first, then work out any optimizations you need. (For example, don't try to "guess" when the 100th instance of an event will occur - stepping through 100 instances with simple steps will be slower, but get you to the right answer. Do measure the performance, but make sure you have good tests before you optimize.)
Introduce your own types to represent exactly what you know about the event. Use the Noda Time types where they match, of course, but don't be tempted to use an existing type just because it's quite like what you're trying to represent. The small differences will bite you eventually.
Make sure you know what you actually want the results to be. Write lots of tests. Date and time work is a naturally data-oriented domain, so invest in making it as easy as possible to write tests for all the corner cases you should be thinking of. (And you really should be thinking about them. Pay particular attention to leap years and time zones.)
Be aware that time arithmetic doesn't follow the normal rules of arithmetic: x + 1 month + 1 month isn't the same as x + 2 months.
If/when behaviour surprises you, do come back to ask specific questions here. There aren't very many of us working on Noda Time, but questions tend to be answered quickly :)
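To make the "step through instances" advice concrete, here is a minimal sketch in plain Python (standard library only, not Noda Time; the function names are made up for the example) covering the two patterns from the question:

    from datetime import date, timedelta

    def weekly_end_date(start: date, occurrences: int) -> date:
        """End date of a weekly recurrence: just step one week at a time."""
        current = start
        for _ in range(occurrences - 1):
            current += timedelta(weeks=1)
        return current

    def first_monday_of_october(year: int) -> date:
        """First Monday of October in the given year (Monday is weekday 0)."""
        first = date(year, 10, 1)
        return first + timedelta(days=(0 - first.weekday()) % 7)

    def yearly_rule_end_date(start_year: int, occurrences: int) -> date:
        """End date of a 'yearly on the first Monday of October' rule."""
        return first_monday_of_october(start_year + occurrences - 1)

    # The question's example: weekly on Tuesdays from 10/07/2014 (7 October 2014),
    # 10 occurrences -> 12/09/2014 (9 December 2014).
    print(weekly_end_date(date(2014, 10, 7), 10))  # 2014-12-09
    print(yearly_rule_end_date(2014, 10))          # first Monday of October 2023

It is deliberately unoptimized, per the advice above: get the right answer by simple stepping first, and only then worry about shortcuts.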

How Do You Deal With Time Zones in Time Series Graphs?

I imagined there would be more literature on this, but I'm having trouble finding any. I have a lot of non-algebraically-aggregatable time series data (that is to say, points for which no function exists that I could use to aggregate them to a higher granularity: things like unique active users, unique contributors, etc., where knowing the amount I had for every minute of some hour does not tell me how many I had in total during the hour). Currently, I'm just storing and presenting all of this data in UTC. The problem is that many of my clients find this confusing, understandably so. Because the data is non-algebraically-aggregatable, there's no way to get from UTC data for one day (midnight to midnight) to, say, PST data from midnight to midnight. Recalculation would need to be done from raw data.
So:
Recalculation from raw data is prohibitively expensive for some complicated analytics graphs
We could store all data for all time zones, but this would increase the amount of data we store 24-fold.
All of that said, how do other people deal with this issue? Here's how Google Analytics does it, but this seems insufficient for my use case because I know if I open the multiple timezone can of worms, clients will ask for more than one. This will also take a lot of work that doesn't seem worth the effort as just adding timezone support won't be extremely noticeable or a huge win. What I'm really hoping for is some clever design solution that just presents the UTC data in some intuitive enough way that it's no longer confusing for people in other timezones. Has anyone dealt with similar problems and come upon a solution I'm missing?
First of all, you should recognize that there are a lot more than 24 time zones. In order to accurately take into account how people actually use time worldwide, you should be using IANA time zones, of which there are over 500. See also Wikipedia and the timezone tag wiki.
If you are dealing with individual points (discrete timestamps), then you can certainly convert from UTC to any time zone you wish, on the fly, as you render your graph. Just keep in mind that the range of data you query will also need to be translated to that time zone.
But if you are talking about aggregating data by the "day" of a specific time zone, then there is no magic bullet. You will need to decide ahead of time which time zones you want to support and calculate each one separately. When you do this, recognize that it's not just the view that's changing. Since the day boundaries are different for each time zone, then the data for each time zone could potentially have very different daily totals.
You should also be aware that not every day has 24 hours. If the day happens to be the date of a daylight saving time transition, it could have 23, 23.5, 24.5, or 25 hours. This could potentially affect how you draw your graph.
One approach you might consider is to be time zone ignorant in your aggregations, rather than using UTC or any specific time zone. Of course this depends heavily on the context of your data, but it is appropriate in certain circumstances. For example, on an invoice, you might care less about the specific timestamps, and more about which calendar date the invoice was assigned to. In that case, once a date is assigned, you would just aggregate on that date. Even if the company operates over multiple time zones, you wouldn't care about that in aggregate.
As far as some clever design that abstracts this from the user, I'm afraid I haven't seen much. The only two choices you really have are timezone-adjusted aggregations (UTC or otherwise), and time zone ignorant aggregations for calendar-date contexts.
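As a concrete illustration of the "aggregate each supported zone separately" point, here is a minimal sketch in Python using the standard-library zoneinfo module; the event data and function name are invented for the example:

    from collections import Counter
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # IANA zone database, Python 3.9+

    def daily_counts(utc_timestamps, tz_name):
        """Bucket UTC event timestamps by calendar date in one IANA time zone."""
        tz = ZoneInfo(tz_name)
        return Counter(ts.astimezone(tz).date() for ts in utc_timestamps)

    # Hypothetical raw events, stored in UTC.
    events = [
        datetime(2014, 3, 9, 6, 30, tzinfo=timezone.utc),
        datetime(2014, 3, 9, 23, 45, tzinfo=timezone.utc),
        datetime(2014, 3, 10, 1, 15, tzinfo=timezone.utc),
    ]

    # The same raw data lands in different daily buckets per zone, which is why
    # each supported zone needs its own aggregation pass over the raw data.
    print(daily_counts(events, "UTC"))                  # {Mar 9: 2, Mar 10: 1}
    print(daily_counts(events, "America/Los_Angeles"))  # {Mar 8: 1, Mar 9: 2}

    # A "day" is not always 24 hours: 2014-03-09 in America/Los_Angeles
    # (the spring-forward date) is only 23 hours long.
    tz = ZoneInfo("America/Los_Angeles")
    start = datetime(2014, 3, 9, tzinfo=tz).astimezone(timezone.utc)
    end = datetime(2014, 3, 10, tzinfo=tz).astimezone(timezone.utc)
    print(end - start)  # 23:00:00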
We had similar issues rolling up generation data in renewables. We went with three options: User / Farm / UTC.
If the user selects User, then all the data is based on their browser time zone, and "yesterday" means the 24 hours up to the last midnight in the user's local time.
Similarly, if the selection is Farm, we take the farm's local time zone and derive the same.
UTC is the standard option, similar to what you have implemented.

When filtering search results by date, should time zone be a criterion for the date?

We have a website with a large number of events whose dates and times are created by admins. Admins choose a time zone for each date/time entered, and the values are stored in UTC. We are trying to support a global audience and be completely localized in terms of dates.
We have a search page, that allows dates to be entered as search criteria.
So users could say: show me all events between "12:01 AM July 1, 2011" and "11:59 PM July 10, 2011".
I'm trying to figure out what the best approach is to determining what time zone to consider the date filter criteria in.
1. Force end users to select a time zone when creating date filters. This is cumbersome, and our designers are pushing back. It is what I would prefer.
2. Assume the entered dates are in the user's "preferred" time zone, which is set upon logging in.
3. Store times in local time, without converting to UTC. This way the end users are searching on the admin-created date. I hate this idea; I need help explaining why this is bad.
Please help!
The second option is a possible solution to your problem, and it is probably the best.
You could possibly get the current time zone offset from the web browser (with JavaScript), but the problem is that certain time zones currently share the same offset yet switch Daylight Saving Time on different dates, so search results would be inaccurate. By having the user choose his/her preferred time zone and storing that information in the profile, you can always present correct dates and times, as well as use this information for searching. However, I would add a note near the search box so that the end user knows which time zone the dates refer to (with JavaScript that would be obvious: the current one; with a profile setting the user might have forgotten).
BTW, time zone information is best shown as "UTC+02:00 (Warsaw, Zagreb, Skopje)" instead of "Central European Time"...
As for other options:
1. Too much clicking. As well as "don't make me think, I want to have it in my local time zone, isn't that obvious?".
3. Local times will not be comparable against each other. You will soon end up with two different dates referring to the same point in time (at least in terms of the numbers). Really bad idea.
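As a rough sketch of option 2, here is how the entered calendar dates could be turned into a UTC query window using the user's stored preferred zone (Python standard library; the function name and example zone are illustrative):

    from datetime import date, datetime, time, timedelta, timezone
    from zoneinfo import ZoneInfo  # IANA zone names, Python 3.9+

    def filter_range_to_utc(start_date, end_date, preferred_zone):
        """Interpret a user-entered calendar-date range in the user's preferred
        IANA zone and return a UTC [start, end) window for querying."""
        tz = ZoneInfo(preferred_zone)
        start_utc = datetime.combine(start_date, time.min,
                                     tzinfo=tz).astimezone(timezone.utc)
        # Exclusive end: midnight at the start of the day after end_date.
        end_utc = datetime.combine(end_date + timedelta(days=1), time.min,
                                   tzinfo=tz).astimezone(timezone.utc)
        return start_utc, end_utc

    # Example: the profile says Europe/Warsaw; the stored event times are UTC,
    # so the search compares them against this window.
    print(filter_range_to_utc(date(2011, 7, 1), date(2011, 7, 10), "Europe/Warsaw"))
    # (2011-06-30 22:00:00+00:00, 2011-07-10 22:00:00+00:00)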

Freezing Previous Years Data

I have a database that tracks employees' data for the current year and previous years.
In the examples below I will use calendar years (years start in Jan and end in Dec); this is not always the case: some users have their year running from July to June, or April to March, etc.
There are many tables that, with a few heavy calculations, make a view of the employee's data at this point in time.
This current year's data is what users look at mostly, but data from previous years impacts this year (so a change made to the 2008 data will have a knock-on effect on 2009, then on 2010, and so on up to this year).
Obviously, this has a negative impact on performance when viewing reports, as viewing this year's data means trudging through all the previous years, calculating and creating views until the end result is found. As the application ages, this problem will get worse and worse; say in 2015, anybody using the system from its inception (2008) will be waiting a long time to get their data.
We plan to freeze previous years' data, so instead of having data from 2008, 2009, 2010 and this year, we would have one block with the previous years' data (with all calculations done for those previous years) and this year's data.
In this way, we would have the end results for all the previous years already calculated, and we would only need to add this year to get the final result.
Obviously, we would have to prevent users from entering/updating data in previous years.
My question is what is the best way to achieve this? I presume you would need some process that waits until the new year and does some calculations.
Thanks in advance,
ViperMAN.
The approach you describe is normally referred to as data archiving. You can have some queries that a DBA runs manually every year on the first working day after the new year, so the calculated data is prepared and stored.
Also, your application needs to prevent users from modifying previous years' data, if I have understood you correctly.
One approach I was thinking works as follows:
Create a table that holds the result of the previous years calculations.
Prevent all additions/deletions/updates to previous years from the app tier.
Change reporting so that queries would consult this table instead of trudging along, calculating everything out each time.
Have a daily process that would:
Check if today is the first day of an employee's year.
If yes, get all of the elapsed year's data and add it to the previous years' totals.
Obviously, this is a simplified version, but one that I think could work.
Thoughts?
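A minimal sketch of that daily process in Python; calculate_year, read_archive and write_archive are hypothetical hooks into your existing calculation and storage layers:

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class Employee:
        id: int
        year_start_month: int  # 1 for Jan-Dec, 7 for Jul-Jun, 4 for Apr-Mar, ...

    def run_daily_rollup(employees, today, calculate_year, read_archive, write_archive):
        """Daily job: when an employee's new reporting year begins, calculate the
        just-finished year once and fold it into the frozen archive total."""
        for emp in employees:
            if today.month == emp.year_start_month and today.day == 1:
                finished_start = date(today.year - 1, emp.year_start_month, 1)
                finished_end = today - timedelta(days=1)
                year_total = calculate_year(emp.id, finished_start, finished_end)
                write_archive(emp.id, read_archive(emp.id) + year_total)

    # Reporting then becomes read_archive(emp.id) plus the current year's
    # calculation, instead of recalculating every year back to 2008.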
If you have data that should not be edited, and you can define what that data is, then I would use a combination of stored procedures and security settings to ensure the old data stays accurate.
If you use stored procedures as a filter, you can have logic in the stored procedure that checks the record against the current DateTime and only allows the update if everything fits your requirements.
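As an illustration of that filter logic (shown here at the application tier in Python rather than as an actual stored procedure; the function name is made up):

    from datetime import date

    def assert_editable(record_date: date, current_year_start: date) -> None:
        """Reject writes to records dated before the start of the current
        reporting year; the same check can live in the stored procedure."""
        if record_date < current_year_start:
            raise PermissionError(
                f"{record_date} falls in a frozen year "
                f"(current year starts {current_year_start})"
            )

    # A July-June year that started on 2011-07-01:
    assert_editable(date(2011, 8, 15), date(2011, 7, 1))       # allowed
    try:
        assert_editable(date(2010, 12, 31), date(2011, 7, 1))  # frozen year
    except PermissionError as err:
        print(err)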
