The current interval of 1 second and maximum window of 60 seconds are too small, and issues may be missed.
When viewing the Live Metrics Stream page of Application Insights, the interval on all graphs is 1 second, and the window only goes up to 60 seconds. I am trying to use this as a monitoring page to keep an eye on recently released or updated function apps. For that I need to be able to change the interval so I can view more data at once without having to keep watch on it constantly. Right now, if we don't check it every minute, we may miss something important.
I have searched the Microsoft documentation, the Git repository, Stack Overflow, and various other sites trying to find an answer, but the only thing I found was from over 4 years ago, and I would hope that this has changed since then.
Live Metrics Stream lets you peek at what's going on right now with 1-second resolution. But it doesn't persist data anywhere, so the data is only stored in the UX (browser), and right now only for 60 seconds.
For bigger intervals it might make sense to refer to other Application Insights experiences (including Analytics).
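For example, here is a minimal sketch of pulling a longer window through the Application Insights REST API instead of Live Metrics. The app id, API key, and the exact Kusto query are placeholders, not values from the question:

    import requests

    # Hypothetical identifiers; substitute your own from the API Access blade.
    APP_ID = "<your-app-id>"
    API_KEY = "<your-api-key>"

    # Request counts per 1-minute bin over the last hour, instead of the
    # 60-second window Live Metrics keeps in the browser.
    query = "requests | where timestamp > ago(1h) | summarize count() by bin(timestamp, 1m)"

    resp = requests.get(
        f"https://api.applicationinsights.io/v1/apps/{APP_ID}/query",
        headers={"x-api-key": API_KEY},
        params={"query": query},
    )
    resp.raise_for_status()

    # The response is a set of tables; the first one holds the binned counts.
    print(resp.json()["tables"][0]["rows"])

The same query can be run interactively in the Analytics/Logs blade, which is usually the quicker way to eyeball a recently released function app over more than a minute.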
Let's say you have a small project. The team has estimated all the tasks as 300 days of effort.
I have 5 developers on the team, and I want MS Project to tell me when the project will complete, considering the vacations and working schedules of my team members.
In order to do that:
I'm creating a task "Development" with fixed work "300d" and task type "Fixed Work".
Then I create 5 resources, and specify a 2 week vacation for one of the developers somewhere in the middle of the schedule.
Then I assign my 5 development resources to this task.
The problem is that the 300d is distributed evenly across all 5 development resources. And if one of them has a two-week vacation in the middle, the work finishes 2 weeks later because of that particular resource, while the other 4 resources sit and do nothing for those 2 weeks. Total duration is 70 days.
(screenshot: what I get)
What I want to get is: the work is distributed unevenly across all 5 resources in a way that the whole task finishes as early as possible, making the most of the usable time from all developers.
That's how I would expect it to work. In that particular case I was distributing hours manually.
(screenshot: what I would expect)
Is there a possibility in MS Project to do something like this? Or am I doing something wrong?
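For reference, the arithmetic behind the schedule I expect, as a rough sketch (the 62-day figure is derived from the numbers above, not produced by MS Project):

    # Numbers from the question: 300 days of work, 5 developers,
    # one developer takes a two-week (10 working days) vacation.
    total_work = 300
    developers = 5
    vacation_days = 10

    # Even split: everyone gets 60 days; the developer on vacation
    # finishes 10 days late while the rest idle.
    naive_duration = total_work / developers + vacation_days   # 70 days

    # Levelled split: all five finish together after D days, where
    # 4*D + (D - vacation_days) = total_work  =>  D = (300 + 10) / 5
    levelled_duration = (total_work + vacation_days) / developers   # 62 days

    print(naive_duration, levelled_duration)  # 70.0 62.0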
There are a couple of issues with how you are approaching the problem.
1. Rather than just planning out the manpower hours estimated for the entire project on a single line item, you should plan out the tasks that will need to be done to accomplish "Small Project".
If you discretely plan out the tasks needed to satisfy the scope of "Small Project", you can establish dependency (predecessor/successor) relationships between your tasks and figure out which tasks need to be done before you can move on to others. This will give you a good idea of the total duration of the project, and it will likely be more accurate than an estimate based only on the manpower hours your developers give you. Find out what tasks they actually need to do, not just how many hours they think the whole project will take them. It will also let you plan the utilization of your resources better, because you'll be able to assign specific resources to specific tasks, and not all of your resources need to be on every task.
2. In general I would avoid using the Task Usage form.
I noticed you are altering resources in the Task Usage form, but unless you are really experienced with Microsoft Project I would avoid ever touching that, as it's really easy to set the period of performance of the resources assigned to a task to be different from the actual period of performance of the task itself. This will cause MS Project to behave unusually, and it can be hard for an inexperienced user to understand why. It usually leads to pain and frustration, which brings me to my next bit of advice:
3. If you really want to specify a resource's vacation time, it's better to adjust the calendar associated with the resource to exclude those dates as working dates.
In your situation, with only 5 resources on your project, this is fairly easy to do. You can accomplish it in 2 different ways (I'll start with the easiest option):
1. You can add resource specific exclusion dates to the default calendar in your project
You can accomplish this by opening the Resource Sheet table, then clicking the Project tab, then Change Working Time. If you have the Resource Sheet open instead of the Gantt chart, you can specify the resource that is going to be affected by the exceptions.
In this example you can see that I would be excluding (removing) 8/23/21 through 9/3/21 as working days for the SW Engineer resource, without needing to change the calendar used by the resource completely.
2. You can completely change the calendar used by particular resources to be different than the default calendar set for the project.
You can accomplish this by going into the Resource Sheet and opening the Base Calendar column.
From here you can assign any calendar that exists in the project to the resource. Of course this means you would need to create the calendars and assign exclusion dates to them.
To create a calendar, click the Project tab, then click Change Working Time. Click Create New Calendar on the form that opens and give it a name.
From there you can add exclusion dates and all that.
Note: In a larger project with many resources, I would recommend not messing with the calendar for the resources at all. It just gets hard to deal with when there are a lot of resources.
We have a couple of environments still running the Time Series Insights Preview version. These are really fast, and we are very satisfied with them. However, new environments seem a lot slower with the official release. Warm path extraction is a lot slower but still doable, while cold path extraction becomes unbearable.
EDIT: We need to add &storeType=WarmStore if we want to query warm data. Cool! This works really fast again! The question about the cold store still persists:
It is hard to compare the different environments because the datasets are not exactly the same, but for our new environment we have about 4.5 TB of sensor data imported into TSI.
The following screenshot shows a query that tries to retrieve one minute of data for one device (each device only sends data every 10 seconds) back in 2018. However, the server returns the call after 30 seconds with a continuation token, saying it couldn't retrieve all 6 values in time. Sometimes it manages to return all 6 values, but it still takes 30 seconds.
My internet download speed, while performing the query, was over 80 Mb per second, so that shouldn't be an issue either.
Is this something we should be worried about in the new release?
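For reference, a rough sketch of the warm-store query mentioned in the edit above; the endpoint shape, api-version, and body fields follow my reading of the TSI Gen2 Query API, and the environment FQDN, token, and time series ID are placeholders:

    import requests

    # Placeholders; take the FQDN from the environment overview and
    # acquire an AAD bearer token for the TSI data-access audience.
    ENV_FQDN = "<environment-id>.env.timeseries.azure.com"
    TOKEN = "<aad-bearer-token>"

    # One minute of raw events for a single device, served from warm store.
    body = {
        "getEvents": {
            "timeSeriesId": ["device-1"],
            "searchSpan": {
                "from": "2018-06-01T12:00:00Z",
                "to": "2018-06-01T12:01:00Z",
            },
        }
    }

    resp = requests.post(
        f"https://{ENV_FQDN}/timeseries/query",
        params={"api-version": "2020-07-31", "storeType": "WarmStore"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=body,
    )
    resp.raise_for_status()
    print(resp.json())

Dropping the storeType parameter is what falls back to the slow cold-path behaviour described above.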
Please submit a support ticket through the Azure portal with all of these details, and the product team will investigate.
I'm stuck building my own simple browser game.
My program: you can upgrade your tools, which allows you to gain more points per hour.
My problem:
So, for example, a user logs in and upgrades his tools from 0 to 1, which doubles the number of points gained. But upgrading takes 2 hours to complete. I don't expect my user to be online for 2 hours, so I save the time he was last seen in an SQL table. Now, when the 2 hours have passed, the points gained need to be doubled, but it's very possible that the user doesn't visit the page for another 10 hours. My current program keeps adding 1 point per hour until the user visits the page, so in this case he'd have 12 points. But the rate needs to double after 2 hours, so he should have 22 points.
Another, maybe simpler, example is a maximum number of points. Let's say the max is 10 points, but the user stays offline for 15 hours, which means he'd earn 15 points at a rate of 1 pnt/hr.
I don't have any functional code yet, because I want to know if something like this is actually possible and how, for example, CityVille (Facebook) does it.
Now my question:
Can anyone give me a tip, some info on how to get started, or at least the name of what I'm searching for? I've tried googling things like "offline database interactions" or "changing variables without user request", but nothing useful comes up.
Thanks in advance,
BlaDrzz.
You can schedule jobs with SQL Server. These jobs can run at whatever frequency you like.
http://technet.microsoft.com/en-us/library/ms191439.aspx
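Alternatively (or in combination with a job), you can compute the points lazily when the user next loads a page, which is the pattern your 22-point example describes. A minimal sketch, with the numbers taken from the question and all names illustrative:

    from datetime import datetime, timedelta

    POINTS_PER_HOUR = 1
    MAX_POINTS = 10  # the cap from the second example

    def points_earned(last_seen, now, upgrade_done_at=None, multiplier=2):
        """Points gained between last_seen and now, with the rate
        multiplied from the moment the upgrade finished."""
        def hours(delta):
            return delta.total_seconds() / 3600.0
        if upgrade_done_at is None or upgrade_done_at >= now:
            return hours(now - last_seen) * POINTS_PER_HOUR
        # Split the offline window at the moment the upgrade completed.
        boundary = max(upgrade_done_at, last_seen)
        before = hours(boundary - last_seen) * POINTS_PER_HOUR
        after = hours(now - boundary) * POINTS_PER_HOUR * multiplier
        return before + after

    now = datetime(2024, 1, 1, 12, 0)
    last_seen = now - timedelta(hours=12)          # user was away 12 hours
    upgrade_done = last_seen + timedelta(hours=2)  # upgrade finished 2h in

    print(points_earned(last_seen, now, upgrade_done))  # 2*1 + 10*2 = 22.0
    print(min(15 * POINTS_PER_HOUR, MAX_POINTS))        # offline cap: 10

On every request you would load last_seen and any pending upgrade from the SQL table, run this calculation, store the result, and update last_seen to now; a scheduled job is then only needed for things that must happen even if the user never returns.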
We've just set up a Rational Team Concert v3 system. The data was loaded on Friday, but there was an issue connecting to the report data warehouses that was not fixed until today (Monday). Now that it's fixed, the data load operations seem to be finishing correctly.
I'm desperately eager to see a burndown chart - even though I know that in 24 hours we won't really have enough data to make it useful. I'm also eager to see just about any report from the RTC server, as we want to be able to share as much information as possible with the customer, and this is a trial for RTC as a large team tool.
How long should one expect it to take before RTC is able to show reports relating to work items? We've already cached several data updates - but only within the last few hours.
Should we wait 24 hours? 48? Should it show up immediately? I haven't found any good heuristics for this on the Rational site.
You need a few things to happen to get a decent burndown chart in RTC:
Run the Data Warehouse job (this happens every 24 hours automatically, or you can trigger it manually from the Reports page in the Admin section).
Get some work done - complete tasks, set Stories to Completed, etc. The burn down is a graph over time of work done.
You should see progress on the chart after the two events above occur.
Another thing to check: is that specific burndown chart set to point to the right project and team?
If that does not work, you may want to raise the question with IBM support (it sounds like something is wrong), or raise it on the RTC forum on jazz.net.
Closure: it turns out we had several problems, including:
- incorrect account setup on the account syncing between RTC and the data warehouse - we had to both make a new account and set up more privileges for it.
- a truly messed-up set of sprints. I don't know what went wrong with the sprints that were first set up (by default!) with the project, but they never synced properly. Moving tasks to a newly made sprint caused the tasks to show up properly in reports (after a sync), but the original sprints were simply broken. Eventual workaround: make new sprints with the same dates and move all assigned stories/tasks to them.
The final answer was - the data should show up instantly after a sync. If you think your sync shows new data and you don't see a change in your report, then you have a problem.
Other notes: the data in the selection fields of "edited" reports is based on the data in the warehouse. If you don't see a sprint or release in there, it means the report's search criteria found no data in the column you are looking for. Report business logic seems to vary by report: in some cases, not being able to select a sprint (or not having a sprint in the data that matches the "current iteration") will cause empty reports.
So it means I can't see the traffic I got today? Also, is this specific to the API only, or to the overall Analytics system?
The reason Google recommends this is that, for most of the data, there is about a 24-hour delay before you see it in reports or can pull it with the API. The extra 24 hours on top of that is a buffer for insurance.
So if you look at a report or pull data with the API from, say, 12 hours ago, then wait an hour and pull the data again with the same ranges/metrics/etc., the numbers won't match up, because by then more data will have become available. But it's data that was already there (people didn't take a time machine into the past and visit your site, obviously); it just hadn't yet been processed and made available to the report/API.
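To make that concrete, here is a hedged sketch of pulling the same metric twice with the legacy Core Reporting API (v3) via google-api-python-client; the credential setup is omitted, and the view id is a placeholder:

    from googleapiclient.discovery import build

    # Your OAuth2 Credentials object (analytics.readonly scope); setup omitted.
    creds = ...
    service = build("analytics", "v3", credentials=creds)

    def sessions_today(view_id):
        result = service.data().ga().get(
            ids=f"ga:{view_id}",
            start_date="today",
            end_date="today",
            metrics="ga:sessions",
        ).execute()
        return result.get("totalsForAllResults", {}).get("ga:sessions")

    # Run this now and again an hour from now: the totals for "today"
    # will usually differ, because hits from the last ~24 hours are
    # still being processed, not because new historical traffic appeared.
    print(sessions_today("12345678"))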
A delay in data for reports (or through an API) is not unique to GA. Different reporting tools have different "lags" in data availability, depending on how their databases are set up, how they process the data, how much you are paying for the services, etc. For instance (these are the 4 major tools I've used):
Yahoo Web Analytics data is more or less real-time
Adobe/Omniture SiteCatalyst: they say real-time, but in practice I've seen it take anywhere from instant to an hour
WebTrends has a 24 hour delay
GA has a 24 hour delay
But this isn't as big a deal as you might think. Most companies look at reports by the week, month, quarter, or year, so the delay really isn't a problem for the people who matter. The only people who really feel it are the implementers who have to sit there and wait for data to come in when they are trying to QA an implementation or debug a potential problem.
But even then, there are a lot of tools out there that let you see in real time what is physically being sent to the tool (like Firebug, Charles Proxy, etc.), which greatly helps with QA. It doesn't help as much for QAing things that require settings/alterations within the tool's interface, but still, it's a big help.