Any way to end the queue wrap-up time?
We define a wrap-up time of 60 seconds to allow agents to finish their notes from a call. In some cases the full 60 seconds is not needed, and our agents would like to end their wrap-up time early.
Note: I don't want to use the pause/unpause commands, because I am tracking pause/unpause events to measure break times (meeting, lunch, break, etc.).
No, there is no way to change wrapuptime dynamically.
However, you could build dialplan or web-page support that pauses/unpauses the agent, or simply keeps calls from reaching the device while the agent is writing notes.
You can also mark the pauses created by the web page with their own reason, so they are excluded when you track breaks.
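A rough sketch of that web-page idea over the Asterisk Manager Interface, in TypeScript; the host, credentials, interface name, and reason string are illustrative assumptions, and you should verify the QueuePause fields against your Asterisk version:

import { Socket } from 'net';

// Serialize an AMI action; frames are "Key: Value" lines ending in a blank line.
function amiSend(socket: Socket, action: Record<string, string>): void {
  const frame = Object.entries(action)
    .map(([key, value]) => `${key}: ${value}`)
    .join('\r\n') + '\r\n\r\n';
  socket.write(frame);
}

const ami = new Socket();
ami.connect(5038, 'pbx.example.com', () => {
  amiSend(ami, { Action: 'Login', Username: 'webui', Secret: 'secret' });
  amiSend(ami, {
    Action: 'QueuePause',
    Interface: 'SIP/1001',
    Paused: 'true',
    Reason: 'wrapup-notes', // marks this pause as web-page generated
  });
});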
A bit of an ancient question, but there is actually a tool that lets you manage wrap-up/pause/unpause of agents and gives you many more features to control and monitor your call center, so it might be worth looking at. Have a look at OrderlyStatsSE.
The problem
I have a Firebase application in combination with Ionic. I want the user to create a group and define a time at which the group is automatically deleted. My first idea was to create a setTimeout(), save it, and override it whenever the user changes the time. But from what I have read, setTimeout() is a bad solution for long durations (because of the Firebase billing service). Later I heard about cron, but as far as I have seen, cron only allows calling functions at a specific time, not relative to a given time (e.g. one hour from now). Ideally, the user can define any given time with a datetime picker.
My idea
So my idea is as following:
User defines the date via native datepicker and the hour via some spinner
The client writes the time into a separate firebase-database with a reference of the following form: /scheduledJobs/{date}/{hour}/{groupId}
Every hour, the Cron task will check all the groups at the given location and delete them
If a user plans to change the time, he will just delete the old value in scheduledJobs and create a new one
My question
What is the best way to schedule the automatic deletion of the group? I am not sure my approach is a good fit, since querying by date may create a very flat and long list in my database. My approach is also limited in that only full hours can be used as the deletion time, not an arbitrary time. Additionally, I need two inputs (date + hour) from the user instead of a single datetime (which would also give me the minutes).
I believe what you're looking for is node-schedule. Basically, it allows you to run server-side cron jobs, and it can take date-time objects and schedule a job at that time. Since I'm assuming you're running a server for this, it would allow you to schedule the deletion at whatever time you wish based on the user input.
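A minimal sketch of that idea; the deletion helper, the /groups path, and the firebase-admin setup are my assumptions, not part of the answer:

import * as schedule from 'node-schedule';
import * as admin from 'firebase-admin';

admin.initializeApp(); // assumes credentials are configured in the environment

// Schedule a one-off job at the exact Date the user picked.
function scheduleGroupDeletion(groupId: string, deleteAt: Date): schedule.Job {
  return schedule.scheduleJob(deleteAt, async () => {
    await admin.database().ref(`/groups/${groupId}`).remove(); // assumed path
  });
}

// If the user changes the time: job.cancel(), then schedule a new job.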
An alternative to TheCog's answer (which relies on running a node server) is to use Cloud Functions for Firebase in combination with a third-party service (e.g. cron-jobs.org) to schedule their execution. See this video for more, or this blog post for an alternative trigger.
In either of these approaches I recommend keeping only upcoming triggers in your database. So delete the jobs after you've processed them. That way you know it won't grow forever, but rather will have some sort of fixed size. In fact, you can query it quite efficiently because you know that you only need to read jobs that are scheduled before the next trigger time.
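A hedged sketch of that combination: an HTTP-triggered Cloud Function that the external cron service calls every hour. I've flattened /scheduledJobs to a single level keyed by a numeric dueAt timestamp (my assumption, not the layout from the question) so that reading only the jobs due before now stays cheap:

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

export const cleanupGroups = functions.https.onRequest(async (req, res) => {
  const now = Date.now();
  // Read only the jobs whose due time has passed.
  const due = await admin.database()
    .ref('/scheduledJobs')
    .orderByChild('dueAt')
    .endAt(now)
    .once('value');

  const updates: { [path: string]: null } = {};
  due.forEach(job => {
    const groupId = job.child('groupId').val();
    updates[`/groups/${groupId}`] = null;        // delete the group
    updates[`/scheduledJobs/${job.key}`] = null; // drop the processed trigger
  });

  await admin.database().ref().update(updates);
  res.status(200).send(`Deleted ${due.numChildren()} group(s)`);
});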
If you're having problems implementing your approach, I recommend sharing the minimum code that reproduces where you're stuck as it will be easier to give concrete help that way.
I am using Firebase's Cloud Functions to execute triggers when a client adds something to the database, but these triggers seem to take a long time to execute.
For example, I have a trigger that adds a creation date to a post whenever a post is added to the database, and it takes like 10 seconds to complete.
Also, I have larger triggers that take even longer.
Is there like a "best practice" I am missing? Am I doing anything wrong?
I've found that the reason Cloud Functions sometimes take time to respond is that they have a "warm-up" period: the first time you call a cloud function it begins to warm up, and it won't be fully responsive until it is warm.
After it warms up, it will be as responsive as you would expect.
The reason for this behavior is resource balancing: if you don't use a function for some time, it is shut down and its resources are freed so that other functions can be more responsive.
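A common mitigation (my addition, not something this answer prescribes) is to do expensive initialization in global scope, where it runs once per container instance rather than once per invocation, and to keep the trigger body minimal. A sketch for the creation-date trigger from the question, with an assumed /posts path:

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

// Global scope: runs once per container instance, reused across warm calls.
admin.initializeApp();

export const addCreationDate = functions.database
  .ref('/posts/{postId}') // assumed location of the posts
  .onCreate(snapshot =>
    // Only this cheap write happens on every invocation.
    snapshot.ref.child('createdAt').set(admin.database.ServerValue.TIMESTAMP)
  );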
This might be a very easy one for many of you, but I'm stuck trying to figure out a strategy for rendering updates to the view while the server is performing a time-consuming operation.
Here is the case: I have a view with an "Approve" button. Clicking it needs to call an action or background process of some kind to perform a heavy operation that might take 20-30 seconds.
During that time I want to update the view with some kind of processing GIF animation and append texts like "performing operation A", "performing operation B", and so on.
What is the best strategy for achieving this?
Here's an answer you might not like: don't even bother trying to get "progress updates" from the server.
Look at this task from a commercial point of view. The purpose of providing feedback is to give the user some warm and fuzzy feeling that they have not been forgotten and that the task they asked their computer to do has not been abandoned. How much cost are you willing to incur to deliver this feature?
The simplest such device is the humble progress bar. Even though most experienced users would not trust it to tell them when a task will finish, they do still trust that if it's moving, something is happening.
My solution would be to post an async operation to the server to kick the work off, then show a progress bar that is entirely managed by JavaScript. It starts off rapidly but slows down as it progresses, such that it never actually completes but does appear to be making progress. When the async operation completes, briefly show the progress bar reaching completion, then remove it.
The cost of other solutions is much, much greater, but their benefit over this approach is almost negligible, if not actually negative; after all, they are complex to implement and more likely to go wrong.
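A rough sketch of such a self-managed progress bar; the element id, endpoint, and tuning constants are illustrative assumptions:

async function approveWithFakeProgress(): Promise<void> {
  const bar = document.getElementById('progress')!;
  let progress = 0;

  // Each tick covers a fraction of the remaining distance, so the bar
  // starts off rapidly and slows down without ever completing on its own.
  const timer = setInterval(() => {
    progress += (95 - progress) * 0.05;
    bar.style.width = `${progress.toFixed(1)}%`;
  }, 200);

  try {
    await fetch('/approve', { method: 'POST' }); // kick off the heavy operation
  } finally {
    clearInterval(timer);
    bar.style.width = '100%';            // briefly show completion
    setTimeout(() => bar.remove(), 500); // then remove it
  }
}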
I must admit that I haven't tried it myself, but I am willing to. I think SignalR could be worth a try.
While this sounds good in theory, I would make this a background operation that then sends a notification to the user via email or SMS when the task is done.
Otherwise you need to (see the sketch after this list):
set up a cache (not the ASP.NET cache) on the server to store the current state of the long-running process
set up a JS timer to poll the server
update the UI with the current state stored in the cache
Not impossible, but a lot of moving parts.
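A rough sketch of the polling half in browser TypeScript; the status endpoint and response shape are illustrative assumptions:

interface JobStatus {
  message: string; // e.g. "performing operation A"
  done: boolean;
}

function pollJobStatus(jobId: string): void {
  const label = document.getElementById('status')!;

  const timer = setInterval(async () => {
    // Ask the server for the state it cached for this job.
    const res = await fetch(`/approve/status?jobId=${jobId}`);
    const status: JobStatus = await res.json();

    label.textContent = status.message; // "performing operation B", ...
    if (status.done) clearInterval(timer);
  }, 1000); // poll once per second
}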
I am building a scheduling system. The current system is just Excel, where they type in times like 9:3-5 (meaning 9:30am-5pm). I haven't settled on the format these times will be stored in yet. I think I may have to use military time in order to calculate the hours, but I would like to avoid that if possible. Basically I need to be able to calculate the hours: for example, 9:3-5 would be 7.5 hours. I am open to different ways of storing the times as well; I just need to display them in a way that's easy for the user to understand, and to be able to calculate how many hours each entry represents.
Any ideas?
Thanks!!
Quick dirty ugly solution (ActionScript 3):

// Number of milliseconds in one hour.
public static const millisecondsPerHour:int = 1000 * 60 * 60;

// Difference between two dates in whole hours, rounded up.
private function getHoursDifference(minDate:Date, maxDate:Date):uint {
    return Math.ceil((maxDate.getTime() - minDate.getTime()) / millisecondsPerHour);
}
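And a hedged sketch (TypeScript, for illustration) of the other half: turning the "9:3-5" shorthand into a decimal hour count. The conventions assumed here (a single digit after the colon means tens of minutes; an end time at or before the start rolls into the afternoon) are guesses at the spreadsheet's informal format, not a spec:

// "9:3" -> 9.5, "5" -> 5.0
function parseClock(token: string): number {
  const [h, m = '0'] = token.split(':');
  const minutes = m.length === 1 ? Number(m) * 10 : Number(m); // "3" means 30
  return Number(h) + minutes / 60;
}

function shiftHours(shorthand: string): number {
  const [startTok, endTok] = shorthand.split('-');
  const start = parseClock(startTok);
  let end = parseClock(endTok);
  if (end <= start) end += 12; // assume the shift ends in the afternoon
  return end - start;
}

console.log(shiftHours('9:3-5')); // 7.5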
Ok, it sounds like you're talking about taking a schedule that's currently developed by a person using an Excel spreadsheet and "computerizing" the process. 1st Warning: "Scheduling is not trivial." How you store the time isn't all that important, but it is common to establish some level of granularity and convert task times to integer multiples of this interval to simplify the scheduling task.
If you want to automate the process, or simply error-check it, you'll want to abstract things a bit. A basic weekly calendar with start and stop times, and perhaps shift info, will be needed. An exception calendar, which allows for holidays and other exceptions, is a good idea to plan for from the start. You'll need a table containing resource and capacity info, and a table containing all the tasks to schedule along with any dependencies between tasks. Then come the design questions (a rough sketch of the core data model follows):
Do you want to consider concurrent requirements? (I need a truck and a driver...)
Do you want to consider intermittent scheduling of resources?
Should you support forward or backward scheduling?
Do you plan to support what-if scenarios? (Then you'll want a master schedule that's independent of the planning schedule(s).)
Do you want to prioritize how tasks are placed on the schedule? (A lot of thought is needed here, depending on the work to be done.)
You may very well want to identify a subset of the tasks to actually schedule, then simply provide a reporting mechanism to show whether the remaining work fits into the white space in the schedule. (If you can't get the most demanding 10% done in the time available, who cares about the other 90%?)
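A hedged sketch of that data model in TypeScript; every name and field here is an illustration of the abstractions above, not a prescribed schema:

interface CalendarDay {
  weekday: number;   // 0-6
  startSlot: number; // first working slot of the day, in granularity units
  endSlot: number;   // last working slot of the day
}

interface Resource {
  id: string;
  capacity: number;             // units available per slot
  weeklyCalendar: CalendarDay[];
  exceptionDates: string[];     // holidays etc., e.g. "2024-12-25"
}

interface Task {
  id: string;
  durationSlots: number;        // length in granularity units
  requiredResources: string[];  // concurrent requirements (truck AND driver)
  dependsOn: string[];          // task ids that must finish first
  priority: number;             // placement order on the schedule
}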
2nd Warning: "If God wrote the schedule most companies couldn't follow it."
I've been playing with RSS feeds this week, and for my next trick I want to build one for our internal application log. We have a centralized database table that our myriad batch and intranet apps use for posting log messages. I want to create an RSS feed off of this table, but I'm not sure how to handle the volume: there could be hundreds of entries per day even on a normal day, and an exceptional make-you-want-to-quit kind of day might see a few thousand. Any thoughts?
I would make the feed a static file (you can easily serve thousands of these), regenerated periodically. Then you have a much broader choice, because generation doesn't have to run in under a second; it can even take minutes. And users still get perfect download speed and a reasonable update rate.
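A rough sketch of that approach; the query helper, output path, and interval are illustrative assumptions:

import { writeFile } from 'fs/promises';

interface LogEntry { message: string; loggedAt: Date; }

async function fetchRecentLogEntries(limit: number): Promise<LogEntry[]> {
  return []; // placeholder: query the central log table here
}

async function regenerateFeed(): Promise<void> {
  const entries = await fetchRecentLogEntries(50);
  const items = entries.map(e =>
    `<item><title>${e.message}</title><pubDate>${e.loggedAt.toUTCString()}</pubDate></item>`
  ).join('');
  const feed = `<?xml version="1.0"?><rss version="2.0"><channel>${items}</channel></rss>`;
  // The web server serves this file statically, so delivery stays cheap.
  await writeFile('/var/www/feeds/applog.xml', feed);
}

// Regenerate every five minutes; generation time no longer affects readers.
setInterval(regenerateFeed, 5 * 60 * 1000);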
If you are building a system with notifications that must not be missed, then a pub-sub mechanism (using XMPP, one of the other protocols supported by ApacheMQ, or something similar) will be more suitable than a syndication mechanism. You need some measure of coupling between the system that is generating the notifications and the ones consuming them, to ensure that consumers don't miss notifications.
(You can do this using RSS or Atom as a transport format, but it's probably not a common use case; you'd need to vary the notifications shown based on the consumer and which notifications it has previously seen.)
I'd split up the feeds as much as possible and let users recombine them as desired. If I were doing it I'd probably think about using Django and the syndication framework.
Django's models could probably handle representing the data structure of the tables you care about.
You could have a URL pattern that catches everything, something like: r'^rss/(?P<feeds>(?:[\w-]+/)+)$' (a repeated group only captures its last match, so it's easiest to capture the whole trail as one string and split it in the view).
That way you could use URLs like:
http://feedserver/rss/batch-file-output/
http://feedserver/rss/support-tickets/
http://feedserver/rss/batch-file-output/support-tickets/ (the first two combined into one)
Then in the view:
def get_batch_file_messages():
    # Grab all the recent batch file messages here.
    # Maybe cache the result and only regenerate every so often.
    return []

# Other feed functions here.

feed_mapping = {'batch-file-output': get_batch_file_messages}

def rss(request, feeds):
    items_to_display = []
    for feed in feeds.rstrip('/').split('/'):
        items_to_display += feed_mapping[feed]()
    # Process items_to_display and return the feed response.
Having individual, chainable feeds means that users can subscribe to one feed at a time, or merge the ones they care about into one larger feed. Whatever's easier for them to read, they can do.
Without knowing your application, I can't offer specific advice.
That said, it's common in these sorts of systems to have a level of severity. You could have a query string parameter tacked onto the end of the URL that specifies the severity. If set to "DEBUG" you would see every event, no matter how trivial; if set to "FATAL" you'd only see the events that are "system failure" in magnitude.
If there are still too many events, you may want to sub-divide your events into some sort of category system. Again, I would make this a query string parameter.
You can then have multiple RSS feeds for the various categories and severities. This should allow you to tune the level of alerts you get to an acceptable level.
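A hedged sketch of that filtering (TypeScript/Express for illustration; the endpoint, severity scale, and helpers are all assumptions):

import express from 'express';

const SEVERITIES = ['DEBUG', 'INFO', 'WARN', 'ERROR', 'FATAL'];

interface LogEvent { severity: string; category: string; message: string; }

async function getEvents(): Promise<LogEvent[]> {
  return []; // placeholder: query the central log table here
}

function renderFeed(events: LogEvent[]): string {
  const items = events
    .map(e => `<item><title>[${e.severity}] ${e.message}</title></item>`)
    .join('');
  return `<?xml version="1.0"?><rss version="2.0"><channel>${items}</channel></rss>`;
}

const app = express();

// e.g. /rss?severity=ERROR&category=batch
app.get('/rss', async (req, res) => {
  const threshold = SEVERITIES.indexOf(String(req.query.severity ?? 'DEBUG'));
  const category = req.query.category ? String(req.query.category) : null;

  const events = (await getEvents()).filter(e =>
    SEVERITIES.indexOf(e.severity) >= threshold &&
    (category === null || e.category === category)
  );
  res.type('application/rss+xml').send(renderFeed(events));
});

app.listen(8080);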
In this case, it's more of a manager's dashboard: how much work was put into support today, is there anything pressing in the log right now, and for when we first arrive in the morning as a measure of what went wrong with batch jobs overnight.
Okay, I decided how I'm going to handle this. I'm using the timestamp field on each row and grouping by day. It takes a little bit of SQL-fu to make it happen, since of course there's a full timestamp there and I need to be semi-intelligent about how I pick the log message to show from within each group, but it's not too bad. Further, I'm building it to let you select which application to monitor, and then showing every message (max 50) from a specific day.
That gets me down to something reasonable.
I'm still hoping for a good answer to the more generic question: "How do you syndicate many important messages, where missing a message could be a problem?"