Probably best described by example:
I have a project with ten 1-day tasks to be done by one resource. They therefore have to happen sequentially, which is easy to arrange by levelling or linking, and the project will last 10 days. But this sets a task order, and Project seems to expect the tasks to be done in that order.
If they are, it is easy to get an indication of overall project progress from the standard reports; a "late tasks" report will show exactly that.
But the reality in our projects is that each task is done a bit at a time. So after 5 days we are just as likely to have done 50% of every task, or 100% of the last five, as 100% of the first five, and for us that is equally satisfactory progress.
I have tried, but I can't seem to find a way of knowing whether the overall project is on track by work done, regardless of task order. Is there an easy answer for this?
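To make "on track by work done, regardless of task order" concrete: it amounts to comparing total work performed against total work planned, independent of which tasks the work was booked to. A rough sketch (the task shape here is illustrative, not Project's data model):

```typescript
// Order-independent progress: compare total work done to total work planned,
// ignoring which tasks the hours were booked against.
interface Task {
  plannedDays: number;
  percentComplete: number; // 0..100
}

function overallWorkComplete(tasks: Task[]): number {
  const planned = tasks.reduce((sum, t) => sum + t.plannedDays, 0);
  const done = tasks.reduce(
    (sum, t) => sum + (t.plannedDays * t.percentComplete) / 100,
    0
  );
  return (done / planned) * 100;
}

// Ten 1-day tasks: 100% of the last five is the same overall progress
// as 50% of every task -- both come out at 50%.
const lastFiveDone: Task[] = [
  ...Array(5).fill({ plannedDays: 1, percentComplete: 0 }),
  ...Array(5).fill({ plannedDays: 1, percentComplete: 100 }),
];
const halfOfAll: Task[] = Array(10).fill({ plannedDays: 1, percentComplete: 50 });
```

Either scenario yields 50% overall, which is the order-independent measure the question is after.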
I have a problem regarding the organization of my data.
What I want to achieve
TL;DR: one data point updated in real time in many different groups. How should I organize it?
Each user sets a daily goal (goal) they want to achieve.
As a user works, their time spent (daily_time_spent) increases toward that daily goal (say, from 1 minute spent to 2 minutes spent).
Each user can also be in a group with other users.
Within a group, users can see each other's progress (goal/daily_time_spent) in real time ("real time" here meaning every 2-5 minutes, for cost reasons).
It will later also be possible to set a daily goal for a specific group. Your own daily time would count toward each of your groups.
Say you are part of three groups with goals of 10m/20m/30m and you have already done 10m: you would have completed the first group's goal, 50% of the second, and about 33% of the third. Your own progress (daily_time_spent) counts toward all groups, regardless of the individual goals (group_daily_goal).
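The arithmetic in that example can be sketched as follows; the field names mirror the ones in the question:

```typescript
// A user's single daily_time_spent measured against several group goals.
// Progress is capped at 100% once a goal is met.
function groupProgress(dailyTimeSpent: number, groupDailyGoals: number[]): number[] {
  return groupDailyGoals.map(goal => Math.min(100, (dailyTimeSpent / goal) * 100));
}

// 10 minutes done against goals of 10m/20m/30m:
// first group complete, half of the second, a third of the third.
const progress = groupProgress(10, [10, 20, 30]);
```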
My ideas
How would I organize that? One idea: whenever a user increments their time, the new value is written into each group the user is part of. But this seems pretty inefficient, because I would be writing the same data in many different places (coming from a SQL background, it might also be expensive?).
Another option: each user tracks their time under userTimes/{user}, and then there are the groups, groups/{groupname}, with links to userTimes. But then I don't know how to get realtime updates.
Thanks a lot for your help!
Both approaches can work fine, and there is no single best approach here; as Alex said, it all depends on the use cases of your app and your comfort level with the code each of them requires.
Duplicating the data under each relevant user will complicate the code that performs the write operation, and store more data. But in return for that, reading the data will be really simple and scale very well to many users.
Reading the data from under all followed users will complicate the code that performs the read operation, and slow it down a bit (though not nearly as much as you may expect, as Firebase can pipeline the requests). But it does keep your data minimal and your write operations simple.
If you choose to duplicate the data, that is an operation that you can usually do well in a (RTDB-triggered) Cloud Function, but it's also possible to do it through a multi-path write operation from the client.
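A multi-path write from the client can be sketched like this; the paths mirror the userTimes/{user} and groups/{groupname} structure from the question, and the exact field names are assumptions:

```typescript
// Build a fan-out (multi-path) update object for the Realtime Database.
// The real write would then be a single ref.update(updates) call with the
// Firebase SDK, which applies all paths atomically.
function buildFanOutUpdate(
  userId: string,
  groupIds: string[],
  dailyTimeSpent: number
): Record<string, number> {
  const updates: Record<string, number> = {};
  // Canonical copy under the user...
  updates[`userTimes/${userId}/daily_time_spent`] = dailyTimeSpent;
  // ...duplicated into every group the user is a member of.
  for (const groupId of groupIds) {
    updates[`groups/${groupId}/members/${userId}/daily_time_spent`] = dailyTimeSpent;
  }
  return updates;
}
```

Because all paths go out in one update call, readers never see a group copy that disagrees with the canonical value.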
I'm currently trying to figure out the best way to save entries for my app in a way that I can effectively query them by day. I am stuck between two different approaches.
To simplify my problem, let's say I'm making a journal app. For this app, a journal entry contains {title, timestamp}.
Approach #1:
- Journal
  - [user_id]
    - journal entry

Approach #2:
- Journal
  - [user_id]
    - [Unix timestamp of beginning of day]
      - journal entry
As of now, I'm leaning towards approach #1, specifically because it could potentially allow me to grab entries within the last 24 hours by querying rather than for a specific day.
At the same time, however, approach #2 would let me more easily handle the potentially sparse set of entries with client-side logic, and it would let me avoid querying functions entirely. The appeal of that approach is that I could write very simple functions to generate the timestamp for the beginning of any given day and just fetch the journal entries nested under that timestamp.
I really want to go with approach #1, as it seems like the right thing to do and would give me freedom in many ways that approach #2 would not, but I worry that the querying required for approach #1 is not very straightforward and would be a huge hassle.
If I go with approach #1, is there a specific way to query?
I'm not sure whether using orderByChild("timestamp").startAt(startTimestamp).endAt(endTimestamp) would work as it intuitively should; I've tried this approach in the past and it didn't behave correctly.
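For what it's worth, a range query of that shape does work in the Realtime Database, provided the timestamps are stored as numbers and the day boundaries are computed consistently. A sketch of the boundary arithmetic (the node names follow approach #1; the SDK call itself is shown only as a comment):

```typescript
// Millisecond boundaries for "entries on a given day", for use with
// orderByChild("timestamp").startAt(start).endAt(end). The field name
// "timestamp" matches the entry shape {title, timestamp} in the question.
function dayBounds(anyTimeThatDay: number): { start: number; end: number } {
  const d = new Date(anyTimeThatDay);
  d.setHours(0, 0, 0, 0); // local midnight at the start of that day
  const start = d.getTime();
  const end = start + 24 * 60 * 60 * 1000 - 1; // last millisecond of the day
  return { start, end };
}

// With the Firebase JS SDK the query would then look roughly like:
//   ref.child(`Journal/${userId}`)
//      .orderByChild("timestamp")
//      .startAt(bounds.start)
//      .endAt(bounds.end)
```

One common reason this "didn't behave correctly" in the past is mixing units (seconds vs. milliseconds) between the stored timestamps and the query bounds, or forgetting an .indexOn rule for the timestamp child.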
Please let me know which approach I should go with, and if it's approach 1, how I should be properly querying it.
Thanks.
The question
Is it possible (and if so, how) to make it so when an object's field x (that contains a timestamp) is created/updated a specific trigger will be called at the time specified in x (probably calling a serverless function)?
My specific context
In my specific instance the object can be seen as a task. I want to make it so when the task is created a serverless function tries to complete the task and if it doesn't succeed it updates the record with the partial results and specifies in a field x when the next attempt should happen.
The attempts are not spaced at a fixed interval. For example, a task may require 10 successive attempts at approximately every 30 seconds, but then it may need to wait 8 hours.
There currently is no way to (re)trigger a Cloud Function on a node after a certain timespan.
The closest you can get is by regularly scheduling a cron job to run on the list of tasks. For more on that, see this sample in the function-samples repo, this blog post by Abe, and this video where Jen explains them.
I admit I never like using this cron-job approach, since you have to query the list to find the items to process. A while ago, I wrote a more efficient solution that runs a priority queue in a node process. My code was a bit messy, so I'm not quite ready to share it, but it wasn't a lot (<100 lines). So if the cron-trigger approach doesn't work for you, I recommend investigating that direction.
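The in-process priority-queue idea can be sketched roughly like this; the task shape is an assumption, and a real version would re-arm the timer whenever a task is added or completed:

```typescript
// Keep pending tasks ordered by due time and only ever arm one timer,
// for the soonest task, instead of polling the whole list on a cron.
interface Pending {
  id: string;
  dueAt: number; // epoch milliseconds, i.e. the field "x" from the question
}

function nextTimerDelay(tasks: Pending[], now: number): number | null {
  if (tasks.length === 0) return null; // nothing to schedule, no timer needed
  const soonest = Math.min(...tasks.map(t => t.dueAt));
  return Math.max(0, soonest - now); // overdue tasks fire immediately
}

// In the real node process you would then do roughly:
//   const delay = nextTimerDelay(tasks, Date.now());
//   if (delay !== null) setTimeout(processDueTasks, delay);
```

The point is that work happens only when something is actually due, rather than querying the task list on every cron tick.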
I'm currently working on a project that uses EventStore, CommonDomain, and NServiceBus. When I have NumberOfWorkerThreads set to 1, all of our services (NServiceBus; we have 6 of them, each with their own event store) run perfectly, but when I set NumberOfWorkerThreads to more than one, I start seeing a ton of deadlocks, at least 50 per minute. All of the deadlocks are on the Commits table. From what I've found, it looks like I'm updating the same aggregate in multiple threads, which could easily happen during an import of a catalog, say: I update the quantity in one thread while updating the price in another, so both threads are trying to update the same aggregate.
Has anyone else had this issue, and how have you gotten around it?
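For context, the usual way around this kind of contention is to ensure commands for the same aggregate are never processed concurrently, for example by routing them to a worker chosen deterministically from the aggregate id. A sketch of such a routing function (an illustration of the idea, not NServiceBus or EventStore API):

```typescript
// Route every command for a given aggregate to the same worker, so two
// threads never commit to the same event stream concurrently.
function workerFor(aggregateId: string, workerCount: number): number {
  // Stable string hash (djb2-xor variant), reduced modulo the worker count.
  let hash = 5381;
  for (let i = 0; i < aggregateId.length; i++) {
    hash = ((hash * 33) ^ aggregateId.charCodeAt(i)) >>> 0;
  }
  return hash % workerCount;
}
```

With this kind of partitioning, the quantity update and the price update for the same catalog item land on the same worker and are applied serially instead of deadlocking on the Commits table.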
I am building a scheduling system. The current system is just Excel: users type in times like 9:3-5 (meaning 9:30am-5pm). I haven't settled on a storage format for these times yet. I think I may have to use 24-hour time to be able to calculate the hours, but I would like to avoid that if possible. Basically, I need to be able to calculate the hours, so that, for example, 9:3-5 comes out as 7.5 hours. I am open to different ways of storing the times as well; I just need to display them in a way that is easy for the user to understand, and be able to calculate how many hours a shift is.
Any ideas?
Thanks!!
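For what it's worth, the 7.5-hour example can be computed directly from the shorthand. This sketch bakes in two assumptions that are not stated in the question: a single digit after ":" means tens of minutes (9:3 is 9:30), and an end hour no later than the start hour is taken to be PM (5 means 17:00):

```typescript
// Parse the "9:3-5" shorthand into decimal hours.
function shiftHours(shorthand: string): number {
  const [startPart, endPart] = shorthand.split("-");
  const toDecimal = (part: string): number => {
    const [h, m = ""] = part.split(":");
    // Assumption: a single digit after ":" is tens of minutes ("9:3" -> 9:30).
    const minutes =
      m === "" ? 0 : m.length === 1 ? parseInt(m, 10) * 10 : parseInt(m, 10);
    return parseInt(h, 10) + minutes / 60;
  };
  const start = toDecimal(startPart);
  let end = toDecimal(endPart);
  if (end <= start) end += 12; // assumption: "5" after a 9:30 start means 5 PM
  return end - start;
}
```

Storing the parsed values as minutes since midnight would keep the display format ("9:3-5") separate from the arithmetic, which avoids forcing users to read 24-hour time.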
Quick dirty ugly solution
public static const millisecondsPerHour:int = 1000 * 60 * 60;

// Return a Number rather than a rounded-up uint, so a 7.5-hour
// shift isn't reported as 8 hours (Math.ceil would round it up).
private function getHoursDifference(minDate:Date, maxDate:Date):Number {
    return (maxDate.getTime() - minDate.getTime()) / millisecondsPerHour;
}
Ok, it sounds like you're talking about moving away from a schedule or plan that's currently developed by a person in an Excel spreadsheet, and you want to "computerize" the process. 1st warning: scheduling is not trivial. How you store the time isn't all that important, but it is common to establish some level of granularity and convert task times to integer multiples of that interval to simplify the scheduling problem.
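The granularity idea can be sketched like this, with 15-minute slots as an assumed interval (the interval itself is a design choice, not a standard):

```typescript
// Pick a slot size and store every task duration as an integer number
// of slots; all scheduling arithmetic then happens on integers.
const SLOT_MINUTES = 15; // chosen granularity; an assumption for illustration

function toSlots(durationMinutes: number): number {
  // Round up so a 20-minute task still reserves enough capacity.
  return Math.ceil(durationMinutes / SLOT_MINUTES);
}

function fromSlots(slots: number): number {
  return slots * SLOT_MINUTES;
}
```

Note that rounding up means a 20-minute task occupies 30 minutes of schedule; a finer slot size trades that waste against a larger scheduling problem.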
If you want to automate the process, or simply error-check, you'll want to abstract things a bit. You will need:
- A basic weekly calendar with start and stop times, and perhaps shift info.
- An exception calendar, to allow for holidays and other exceptions; plan for this from the start.
- A table containing resource and capacity info.
- A table containing all the tasks to schedule and any dependencies between tasks.

Then some design questions:
- Do you want to consider concurrent requirements? (I need a truck and a driver...)
- Do you want to consider intermittent scheduling of resources?
- Should you support forward or backward scheduling?
- Do you plan to support what-if scenarios? (Then you'll want a master schedule that's independent of the planning schedule(s).)
- Do you want to prioritize how tasks are placed on the schedule? (A lot of thought is needed here, depending on the work to be done.)

You may very well want to identify a subset of the tasks to actually schedule, then simply provide a reporting mechanism to show whether the remaining work can fit into the white space in the schedule. (If you can't get the most demanding 10% done in the time available, who cares about the other 90%?)
2nd Warning: "If God wrote the schedule most companies couldn't follow it."