Flex: Calculate hours between 2 times? - apache-flex

I am building a scheduling system. The current system is just an Excel spreadsheet, and they type in times like 9:3-5 (meaning 9:30am-5pm). I haven't decided how these times will be stored yet. I think I may have to use military time in order to calculate the hours, but I would like to avoid that if possible. Basically, I need to be able to calculate the hours: for example, 9:3-5 would be 7.5 hours. I am open to different ways of storing the times as well; I just need to display them in a way that's easy for the user to understand, and be able to calculate how many hours a range represents.
Any ideas?
Thanks!!

A quick, dirty, ugly solution:
public static const millisecondsPerHour:int = 1000 * 60 * 60;

// Note: Math.ceil rounds up to whole hours, so 7.5 hours comes back as 8.
// Return a Number without the rounding if you need fractional results.
private function getHoursDifference(minDate:Date, maxDate:Date):uint {
    return Math.ceil((maxDate.getTime() - minDate.getTime()) / millisecondsPerHour);
}

OK, it sounds like you're talking about replacing a schedule or plan that's currently developed by a person using an Excel spreadsheet, and you want to "computerize" the process. 1st Warning: "Scheduling is not trivial." How you store the time isn't all that important, but it is common to establish some level of granularity and convert task times to integer multiples of that interval to simplify the scheduling task.
If you want to automate the process, or simply error-check it, you'll want to abstract things a bit. A rough sketch of these records follows below.
- A basic weekly calendar with start and stop times, and perhaps shift info, will be needed.
- An exception calendar is a good idea to plan for from the start; it allows holidays and other exceptions.
- A table containing resource and capacity info will be needed.
- A table containing all the tasks to schedule, and any dependencies between tasks, will be needed.
Some questions to answer up front:
- Do you want to consider concurrent requirements? (I need a truck and a driver...)
- Do you want to consider intermittent scheduling of resources?
- Should you support forward or backward scheduling?
- Do you plan to support what-if scenarios? (Then you'll want a master schedule that's independent of the planning schedule(s).)
- Do you want to prioritize how tasks are placed on the schedule? (A lot of thought is needed here, depending on the work to be done.)
You may very well want to identify a subset of the tasks to actually schedule, then simply provide a reporting mechanism to show whether the remaining work fits into the white space in the schedule. (If you can't get the most demanding 10% done in the time available, who cares about the other 90%?)
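As a concrete starting point, the records above might look something like this (TypeScript; every name here is illustrative, not from the answer):

interface WeeklyCalendar {
    dayOfWeek: number;   // 0 = Sunday .. 6 = Saturday
    startMinute: number; // minutes from midnight, in granularity-sized steps
    stopMinute: number;
    shift?: string;
}

interface CalendarException {
    date: string;         // ISO date of the holiday or exception
    closed: boolean;
    startMinute?: number; // override the hours when not fully closed
    stopMinute?: number;
}

interface Resource {
    id: string;
    capacity: number; // concurrent units available (trucks, drivers, ...)
}

interface Task {
    id: string;
    durationSlots: number; // integer multiples of the granularity interval
    requires: string[];    // resource ids needed concurrently
    dependsOn: string[];   // task ids that must finish first
    priority: number;
}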
2nd Warning: "If God wrote the schedule most companies couldn't follow it."

Related

Firebase, how to implement scheduler?

Each document stored in Firestore holds a specific time in the future, and at that time an event should fire in the user's app.
The first option I found was the Cloud Functions Pub/Sub scheduler. However, I could not use this because its schedule is fixed.
The second method was to use Cloud Functions + Cloud Tasks. I referenced this:
https://medium.com/firebase-developers/how-to-schedule-a-cloud-function-to-run-in-the-future-in-order-to-build-a-firestore-document-ttl-754f9bf3214a
This did exactly what I wanted, but Cloud Tasks has a fatal drawback: a task can only be scheduled up to 30 days out. In other words, times more than 30 days in the future can't use it.
I want these events to be stored long-term, and I want the approach to hold up reasonably well under heavy traffic.
I'm using Flutter/Firebase. How can I implement the requirements above?
Thank you for reading, and happy new year!
You could check, in the function that is triggered on document creation, whether the task is due in more than 30 days, and if so, store it somewhere else (maybe another document). Then have another process that checks whether any of those deferred tasks are now within the 30-day range, and have it handle them the same way as newly created ones. This second process could run every week or two.
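A sketch of that two-step approach, using the v1 firebase-functions API in TypeScript. The collection names ("tasks", "deferredTasks"), the dueAt Timestamp field, and the weekly schedule are all assumptions for illustration:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

export const onTaskCreated = functions.firestore
    .document("tasks/{taskId}")
    .onCreate(async (snap) => {
        const dueAt = snap.data().dueAt.toMillis(); // assumed Timestamp field
        if (dueAt - Date.now() > THIRTY_DAYS_MS) {
            // Too far out for Cloud Tasks: park it for later.
            await admin.firestore().collection("deferredTasks").doc(snap.id).set(snap.data());
            await snap.ref.delete();
            return;
        }
        // ...otherwise enqueue a Cloud Task as in the linked article.
    });

// Weekly pass: promote deferred tasks that are now within the 30-day window.
export const promoteDeferred = functions.pubsub
    .schedule("0 3 * * 1") // every Monday at 03:00
    .onRun(async () => {
        const cutoff = admin.firestore.Timestamp.fromMillis(Date.now() + THIRTY_DAYS_MS);
        const due = await admin.firestore()
            .collection("deferredTasks")
            .where("dueAt", "<=", cutoff)
            .get();
        for (const doc of due.docs) {
            // Re-creating under "tasks" re-triggers onTaskCreated, which enqueues it.
            await admin.firestore().collection("tasks").doc(doc.id).set(doc.data());
            await doc.ref.delete();
        }
    });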

Schedule function in firebase

The problem
I have a Firebase application in combination with Ionic. I want the user to create a group and define a time at which the group is deleted automatically. My first idea was to create a setTimeout(), save it, and override it whenever the user changes the time. But as I have read, setTimeout() is a bad solution for long durations (because of the Firebase billing model). Later I heard about cron, but as far as I have seen, cron only allows calling functions at specific times, not relative to a given time (e.g. one hour from now). Ideally, the user can pick any time with a datetime picker.
My idea
So my idea is as follows:
User defines the date via native datepicker and the hour via some spinner
The client writes the time into a separate Firebase database with a reference of the following form: /scheduledJobs/{date}/{hour}/{groupId}
Every hour, the Cron task will check all the groups at the given location and delete them
If a user plans to change the time, he will just delete the old value in scheduledJobs and create a new one
My question
What is the best way to schedule the automatic deletion of the group? I am not sure my approach is a good fit, since querying by date may create a very flat, long list in my database. My approach is also limited in that only full hours can be used as the deletion time, not an arbitrary time. Additionally, I need two inputs (date + hour) from the user instead of a single datetime (which would also give me the minutes).
I believe what you're looking for is node-schedule. Basically, it lets you run server-side cron-style jobs, and it can take date-time objects and schedule a job at that time. Since I'm assuming you're running a server for this, it would let you schedule the deletion at whatever time you wish, based on the user input.
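A minimal sketch of that idea (TypeScript; deleteGroup is a hypothetical stand-in for your actual Firebase delete call). Note that node-schedule keeps jobs in process memory, so on a restart you'd need to re-read pending deletions from the database and re-schedule them:

import * as schedule from "node-schedule";

// Hypothetical helper; replace with your real delete, e.g.
// admin.database().ref(`groups/${groupId}`).remove().
async function deleteGroup(groupId: string): Promise<void> {
    console.log(`deleting group ${groupId}`);
}

function scheduleGroupDeletion(groupId: string, deletionTime: Date): schedule.Job {
    // deletionTime comes straight from the user's datetime picker.
    return schedule.scheduleJob(deletionTime, () => {
        deleteGroup(groupId).catch(console.error);
    });
}

// If the user changes the time, cancel the old job and schedule a new one:
// const job = scheduleGroupDeletion("group42", new Date("2025-01-01T12:00:00"));
// job.cancel();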
An alternative to TheCog's answer (which relies on running a node server) is to use Cloud Functions for Firebase in combination with a third party server (e.g. cron-jobs.org) to schedule their execution. See this video for more or this blog post for an alternative trigger.
In either of these approaches I recommend keeping only upcoming triggers in your database. So delete the jobs after you've processed them. That way you know it won't grow forever, but rather will have some sort of fixed size. In fact, you can query it quite efficiently because you know that you only need to read jobs that are scheduled before the next trigger time.
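For example, if you keep the jobs flat as /scheduledJobs/{jobId} = { groupId, scheduledAt } (a simplification of the nested /{date}/{hour} layout from the question; the field names here are made up), each trigger run only needs to read the jobs that are already due:

import * as admin from "firebase-admin";

admin.initializeApp();

async function runDueJobs(): Promise<void> {
    // Only jobs scheduled at or before "now" are ever read.
    const due = await admin.database()
        .ref("scheduledJobs")
        .orderByChild("scheduledAt")
        .endAt(Date.now())
        .once("value");

    const updates: Record<string, null> = {};
    due.forEach((job) => {
        const { groupId } = job.val();
        updates[`groups/${groupId}`] = null;        // delete the group
        updates[`scheduledJobs/${job.key}`] = null; // delete the processed job
    });
    await admin.database().ref().update(updates);   // one multi-location write
}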
If you're having problems implementing your approach, I recommend sharing the minimum code that reproduces where you're stuck as it will be easier to give concrete help that way.

Is there a way to define a trigger that runs reliably at a datetime specified as a field in the updated/created object?

The question
Is it possible (and if so, how) to make it so when an object's field x (that contains a timestamp) is created/updated a specific trigger will be called at the time specified in x (probably calling a serverless function)?
My Specific context
In my specific instance the object can be seen as a task. I want to make it so when the task is created a serverless function tries to complete the task and if it doesn't succeed it updates the record with the partial results and specifies in a field x when the next attempt should happen.
The attempts should not happen at a fixed interval. For example, a task may require 10 successive attempts roughly every 30 seconds, but then it may need to wait 8 hours.
There currently is no way to (re)trigger a Cloud Function on a node after a certain timespan.
The closest you can get is by regularly running a cron job over the list of tasks. For more on that, see this sample in the functions-samples repo, this blog post by Abe, and this video where Jen explains them.
I admit I never like using this cron-job approach, since you have to query the list to find the items to process. A while ago, I wrote a more efficient solution that runs a priority queue in a node process. My code was a bit messy, so I'm not quite ready to share it, but it wasn't a lot (<100 lines). So if the cron-trigger approach doesn't work for you, I recommend investigating that direction.
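For reference, the priority-queue idea can indeed be quite small. This is a from-scratch sketch (TypeScript, not the original code mentioned above): keep tasks sorted by due time and arm a single timer for the earliest one.

interface ScheduledTask {
    id: string;
    dueAt: number; // epoch millis
    run: () => Promise<void>;
}

class TaskQueue {
    private tasks: ScheduledTask[] = [];
    private timer?: NodeJS.Timeout;

    add(task: ScheduledTask): void {
        this.tasks.push(task);
        this.tasks.sort((a, b) => a.dueAt - b.dueAt); // fine for small queues
        this.rearm();
    }

    private rearm(): void {
        if (this.timer) clearTimeout(this.timer);
        const next = this.tasks[0];
        if (!next) return;
        this.timer = setTimeout(async () => {
            this.tasks.shift();
            // A failed run() can re-add itself with a later dueAt,
            // which matches the variable-interval retries in the question.
            await next.run();
            this.rearm();
        }, Math.max(0, next.dueAt - Date.now()));
    }
}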

Dealing with huge amount of select statements

I'm designing an ASP.NET application which builds an overview of all sales per partner over a period of time.
How it works so far:
Select all partnerNos (SQL Server) and add them to a List (ASP.NET)
Select the sales of partnerNo1 over the period (SQL Server), summarize them (ASP.NET), and add them to a DataTable (ASP.NET)
Select the sales of partnerNo2 over the period, summarize them, and add them to the DataTable
Select the sales of partnerNo3 over the period, summarize them, and add them to the DataTable
and so on
Now here is the problem: if I select only the TOP 100 partnerNos, it takes a while, but I get a result. If I change the TOP to 1000, SQL Server processes the statements (I can see it working in the Activity Monitor) and IIS keeps feeding it new SELECTs, but after a while IIS terminates the page request from the browser, so no result is shown.
I really hope I have explained it well enough for someone to help me.
With regards
Dirk Th.
That's the RBAR ("row by agonizing row") anti-pattern. It should be possible to write one SQL query that returns the summarized information for all partners at once.
That's typically much faster: less data has to go over the line, and less often. A roundtrip to a database can cost 50ms. If you do 600 of those, you're at the 30 second timeout for web pages.
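Sketched as a single set-based query (shown here as a query string in TypeScript; the Sales table and its column names are invented for illustration):

const summaryQuery = `
    SELECT   partnerNo,
             SUM(amount) AS totalSales,
             COUNT(*)    AS saleCount
    FROM     Sales
    WHERE    saleDate BETWEEN @from AND @to
    GROUP BY partnerNo
`;
// Execute this once per page load and bind the single result set,
// instead of issuing one SELECT per partnerNo.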
If you have .NET Framework 4.5, AND computing the summary data for each partnerNo is independent of the others, you can try parallel tasks.
http://msdn.microsoft.com/en-us/library/dd460720.aspx
Now, that's not a simple subject. But it would allow you to take advantage of multiple processors.
Number one rule: you CANNOT RELY ON SEQUENCE when tasks run in parallel.
Option 2, a more "traditional" approach, is to hit the database once for everything you need.
I would abandon DataTables, and start using DTO or POCO objects.
Then you can author mini "read only properties" that replace your calculated/derived data-table columns.
Go to the database, do not use cursors or looping, and fetch all the info you need in one shot. After you get it back, stuff it into DTOs/POCOs, relying on read-only properties where you can (for derived values). Then, if you have to run some business logic to figure out other derived values, do that.
If you're "stuck" with a DataSet/DataTable for the presentation layer, you can loop over your DTOs/POCOs and stuff them into a DataSet/DataTable.

How to build large/busy RSS feed

I've been playing with RSS feeds this week, and for my next trick I want to build one for our internal application log. We have a centralized database table that our myriad batch and intranet apps use for posting log messages. I want to create an RSS feed off of this table, but I'm not sure how to handle the volume- there could be hundreds of entries per day even on a normal day. An exceptional make-you-want-to-quit kind of day might see a few thousand. Any thoughts?
I would make the feed a static file (you can easily serve thousands of those), regenerated periodically. That gives you much broader options, because generation no longer has to finish in under a second; it can even take minutes. Users still get perfect download speed and a reasonable update frequency.
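A minimal sketch of that setup (TypeScript; the file path, interval, and buildFeedXml helper are all placeholders):

import { writeFile } from "node:fs/promises";

async function buildFeedXml(): Promise<string> {
    // Hypothetical: query the log table and render the RSS XML.
    return "<rss/>";
}

async function regenerateFeed(): Promise<void> {
    const xml = await buildFeedXml();
    // The web server serves this file directly; no per-request work.
    await writeFile("/var/www/static/feed.xml", xml);
}

setInterval(() => regenerateFeed().catch(console.error), 5 * 60 * 1000);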
If you are building a system with notifications that must not be missed, then a pub-sub mechanism (using XMPP, one of the other protocols supported by ActiveMQ, or something similar) will be more suitable than a syndication mechanism. You need some measure of coupling between the system that generates the notifications and the ones that consume them, to ensure that consumers don't miss notifications.
(You can do this using RSS or Atom as a transport format, but it's probably not a common use case; you'd need to vary the notifications shown based on the consumer and which notifications it has previously seen.)
I'd split up the feeds as much as possible and let users recombine them as desired. If I were doing it I'd probably think about using Django and the syndication framework.
Django's models could probably handle representing the data structure of the tables you care about.
You could have a URL pattern that catches everything, like: r'^rss/(?P<feeds>([\w-]+/)+)$'. (Django can't capture each repetition of a group separately, so capture the whole tail and split it apart in the view.)
That way you could use URLs like:
http://feedserver/rss/batch-file-output/
http://feedserver/rss/support-tickets/
http://feedserver/rss/batch-file-output/support-tickets/ (the first two combined into one feed)
Then in the view:
def get_batch_file_messages():
    # Grab all the recent batch file messages here.
    # Maybe cache the result and only regenerate every so often.
    return []

# Other feed functions here.

feed_mapping = {'batch-file-output': get_batch_file_messages}

def rss(request, feeds):
    items_to_display = []
    for feed in feeds.rstrip('/').split('/'):
        items_to_display += feed_mapping[feed]()
    # Process items_to_display and return the feed.
Having individual, chainable feeds means that users can subscribe to one feed at a time, or merge the ones they care about into one larger feed. Whatever's easier for them to read, they can do.
Without knowing your application, I can't offer specific advice.
That said, it's common in these sorts of systems to have a level of severity. You could have a query string parameter tacked onto the end of the URL that specifies the severity. If set to "DEBUG" you would see every event, no matter how trivial. If set to "FATAL" you'd only see the events that were "system failure" in magnitude.
If there are still too many events, you may want to sub-divide your events into some sort of category system. Again, I would make this a query string parameter.
You can then have multiple RSS feeds for the various categories and severities. This should allow you to tune the volume of alerts you get to an acceptable level.
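A sketch of that filtering (TypeScript/Express purely for illustration; the severity ladder and the loadEvents and renderRss helpers are invented placeholders):

import express from "express";

const SEVERITIES = ["DEBUG", "INFO", "WARN", "ERROR", "FATAL"];

interface LogEvent { severity: string; category: string; title: string; }

function loadEvents(): LogEvent[] {
    return []; // hypothetical: query the central log table
}

function renderRss(events: LogEvent[]): string {
    return "<rss/>"; // hypothetical: build the feed XML
}

const app = express();
app.get("/rss", (req, res) => {
    // e.g. /rss?severity=ERROR&category=batch-file-output
    const minSeverity = String(req.query.severity ?? "DEBUG").toUpperCase();
    const category = req.query.category as string | undefined;
    const minIndex = Math.max(0, SEVERITIES.indexOf(minSeverity));

    // Keep only events at or above the requested severity, in the
    // requested category if one was given.
    const events = loadEvents().filter((e) =>
        SEVERITIES.indexOf(e.severity) >= minIndex &&
        (!category || e.category === category));

    res.type("application/rss+xml").send(renderRss(events));
});

app.listen(8080);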
In this case, it's more of a manager's dashboard: how much work was put into support today, is there anything pressing in the log right now, and for when we first arrive in the morning as a measure of what went wrong with batch jobs overnight.
Okay, I decided how I'm going to handle this. I'm using the timestamp field on each row and grouping by day. It takes a little bit of SQL-fu since there's a full timestamp there and I need to be semi-intelligent about which log message to show from within each group, but it's not too bad. Furthermore, I'm building it to let you select which application to monitor, and then showing every message (max 50) from a specific day.
That gets me down to something reasonable.
I'm still hoping for a good answer to the more generic question: "How do you syndicate many important messages, where missing a message could be a problem?"
