Pivotal Tracker: why are my epics "done" when they are not "done"?

I can't seem to manually control the "done" state of my epics, and it is often wrong.
It seems an epic should only be "done" when all of its stories are complete, but this is not the case. Perhaps it's a bug? Or perhaps this happens when all stories are complete, but then I add a story later?
The most frustrating aspect of this is that "done" epics are hidden by default, so epics I'm actively working on keep disappearing.

An epic in Pivotal Tracker is done when all of its stories are either done or in the icebox.
Found this here:
Completing epics
As you complete your stories, epics are considered done (and appear in green) when all prioritized stories in them are accepted. Stories in the icebox are not considered to be “in play” – they’re on ice, and in most cases epics will have stories in the icebox indefinitely. It’s often the case that everything you wanted doesn’t actually get done, yet the feature is perfectly usable. However, as soon as you drag a completed epic’s story from the icebox to the backlog, it will stop being green.
Done epics remain in the Epics panel until the iteration in which stories in them were accepted is over. Then you’ll see “Show x Done Epics” at the top of the epics panel instead.


Redux randomly resets a reducer's state

I've run into a problem and I need a hand.
This is more of a theoretical approach rather than practical.
My problem is:
I am making an app in my spare time which is rather complex - it's supposed to resemble a social network of sorts, to an extent.
I have 10 reducers in my combineReducers call, and a crapload of actions fire when a user goes to their dashboard or the start/home page - to fetch the user's uploaded images, videos, their profile picture, posts from their friends, posts directed to them, all of their posts together (whether image, video, audio, or text), and the groups they're a part of.
However, some of the reducers affect others and simply reset their state, even though they are supposed to act independently of one another.
In the screenshots below, you can clearly see that some textual posts were loaded by one dispatched action, only to be completely reset by another action from a completely different reducer, with a different type constant, while loading videos.
In case it matters, some of my separate reducer state objects share the property loading, initially set to null or false.
The maintainer on GitHub might have also been stumped, as they recommended that I ask my question here.
[Screenshot: the reducer loads]
[Screenshot: a different reducer randomly resets another one]
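The question doesn't include the reducers themselves, but one classic cause of exactly this symptom is a reducer whose default branch returns its initial state instead of the current state. Since combineReducers runs every dispatched action through every reducer, such a slice gets wiped whenever an action belonging to a different slice fires. A minimal sketch, with hypothetical names (POSTS_LOADED, VIDEOS_LOADED, and the state shape are made up, not from the question):

```javascript
const initialPosts = { items: [], loading: false };

// BUGGY: the default branch returns initialPosts instead of state, so every
// action handled by ANY other reducer (e.g. loading videos) resets this slice.
function buggyPostsReducer(state = initialPosts, action) {
  switch (action.type) {
    case "POSTS_LOADED":
      return { ...state, items: action.payload, loading: false };
    default:
      return initialPosts; // <-- the bug
  }
}

// FIXED: unknown actions must return the CURRENT state unchanged.
function fixedPostsReducer(state = initialPosts, action) {
  switch (action.type) {
    case "POSTS_LOADED":
      return { ...state, items: action.payload, loading: false };
    default:
      return state;
  }
}

// Demonstration: dispatch a posts action, then an unrelated videos action.
let buggy = buggyPostsReducer(undefined, { type: "@@INIT" });
buggy = buggyPostsReducer(buggy, { type: "POSTS_LOADED", payload: ["hello"] });
buggy = buggyPostsReducer(buggy, { type: "VIDEOS_LOADED", payload: [] });
console.log(buggy.items.length); // 0 - the loaded posts were wiped

let fixed = fixedPostsReducer(undefined, { type: "@@INIT" });
fixed = fixedPostsReducer(fixed, { type: "POSTS_LOADED", payload: ["hello"] });
fixed = fixedPostsReducer(fixed, { type: "VIDEOS_LOADED", payload: [] });
console.log(fixed.items.length); // 1 - the posts survive the unrelated action
```

This would also explain why the shared loading property looks implicated: it snaps back to its initial value at the same moment the slice resets.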

Adding and removing listeners continually

I add a listener when user goes to a certain page. If the user leaves the page, I remove the listener. I'm doing this because I don't want to download the updates that the user doesn't need anymore. Can we say this is a good practice? User can go back and forth between the pages and it will cause several listeners to be added and removed continually. What are the potential issues from the aspect of performance and cost?
From what you're describing, there will be no negative impact on performance or cost. Adding and removing listeners is cheap, and page navigation doesn't happen anywhere near fast enough for that churn to matter.
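The attach-on-enter, detach-on-leave pattern the question describes can be sketched in plain JavaScript. The subscribe-returns-an-unsubscribe-function shape below mirrors Firestore-style listener APIs, but the source itself is made up since the question doesn't name the library:

```javascript
// Hypothetical data source whose subscribe() returns an unsubscribe function.
function createSource() {
  const listeners = new Set();
  return {
    subscribe(cb) {
      listeners.add(cb);
      return () => listeners.delete(cb); // detach: stop receiving updates
    },
    emit(update) {
      listeners.forEach((cb) => cb(update));
    },
    count() {
      return listeners.size;
    },
  };
}

const source = createSource();
let received = 0;

// "User enters the page": attach the listener.
const unsubscribe = source.subscribe(() => { received += 1; });
source.emit("update-1"); // delivered while the page is open

// "User leaves the page": detach so further updates aren't downloaded.
unsubscribe();
source.emit("update-2"); // not delivered

console.log(received, source.count()); // 1 0
```

The key point is that each attach is paired with exactly one detach; the only real risk in this pattern is forgetting the detach, which leaks listeners (and, for a billed backend, keeps downloading updates nobody is looking at).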

domInteractive vs Time to Interactive - what's the difference?

Google offers a number of polyfill libraries for measuring and tracking First Input Delay (FID) and Time to Interactive (TTI) on analytics platforms. However, these metrics do not come standard with GA.
domInteractive however is a metric you can track out of the box with GA.
What's the difference? The only explanation I've found for the competing interactive metrics is a vague forum post explaining that TTI may offer a more complex look at interactive delays, but without much in the way of details.
Am I better off tracking TTI on my users if I'm concerned about input delays affecting conversion, or am I fine to stick with domInteractive?
My understanding is the following:
Time to Interactive (TTI) is when the website is visually usable and engaging - for example, when a user can click around on the UI and the website is functional. Ideally, we want all experiences to become interactive as soon as possible. Examples of websites with poor TTI are ones where a user can be actively engaging with the UI for a good amount of time before anything actually happens. Poor TTI is caused by too much (main-thread) JavaScript, which delays interactivity for visible UI elements. An example of this is here. This is an especially important metric to consider for mobile, since not everyone has a fast phone (so it will take longer to parse the JavaScript needed to load a site), and because of the variance across network speeds: Wi-Fi, 3G, 4G, etc.
domInteractive however is when a page's primary content is visible and meaningful paints have occurred. At this stage a user can visually see the webpage and the corresponding UI elements that represent the site's DOM.
First Input Delay (FID) is the measurement of how long it took to respond to a user event. For example, how long did it take for a button's event handler to take over and respond once the user clicked the button.
As far as I know FID and TTI are experimental metrics right now so they probably wouldn't be baked into Google Analytics by default. As for your question: "Am I better off tracking TTI on my users if I'm concerned about input delays affecting conversion, or am I fine to stick with domInteractive?" You actually want to track FID if you're concerned with input delays affecting conversion. TTI is still a very useful metric to track since it measures when your site as a whole is interactive and both TTI and FID will provide more value than domInteractive.
If you're still interested check out this explanation on the Cost of JavaScript by Addy Osmani. He does a beautiful job explaining the performance issues we are facing with JavaScript as well as talking about TTI and FID.
Cheers
According to this link, domInteractive is "when the parser finished its work on the main document". Time to Interactive is when all page scripts (including libraries such as Angular, and your own) have finished initialization, the page is not frozen, and the user can start interacting with it.
Had to dig into the Spec but I think I found what I was looking for:
The DOMContentLoaded event fires after the transition to "interactive" but before the transition to "complete", at the point where all subresources apart from async script elements have loaded.
Basically domInteractive will not reflect async scripts that are still loading in, which is why your TTI metric can vary so widely.
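For concreteness, here's roughly how these numbers are read in the browser. domInteractive comes straight from the Navigation Timing API, while FID needs a PerformanceObserver; computing FID as processingStart minus startTime matches how Google's polyfill defines it. Treat this as a sketch (the observer part is guarded so it only runs where "first-input" entries exist):

```javascript
// First Input Delay = time from the user's interaction to when its event
// handler could actually start running.
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// domInteractive is available out of the box via Navigation Timing.
function readDomInteractive() {
  const [nav] = performance.getEntriesByType("navigation");
  return nav ? nav.domInteractive : undefined; // undefined outside a browser
}

// FID requires observing "first-input" entries (browser-only, hence the guard).
if (
  typeof PerformanceObserver !== "undefined" &&
  PerformanceObserver.supportedEntryTypes &&
  PerformanceObserver.supportedEntryTypes.includes("first-input")
) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log("FID (ms):", firstInputDelay(entry));
    }
  }).observe({ type: "first-input", buffered: true });
}

// Worked example with a fake entry: the user clicked at t=1000ms, but the
// main thread was busy and the handler only started at t=1120ms.
console.log(firstInputDelay({ startTime: 1000, processingStart: 1120 })); // 120
```

This also makes the difference concrete: domInteractive is a single parser milestone the browser hands you, whereas FID only exists once a real user interacts, which is why it has to be collected in the field via an observer.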

Long delay after button is clicked

I'm just learning ASP.NET using VB 2010, and although I've made a lot of good progress, I am stumped by one issue that I can't resolve. I've also searched the web for answers, but I haven't found anything that is exactly what I am dealing with. ...though I may not be using the correct search terms.
Anyway, this is an app that will run on our company internet site which requires users to enter information into text boxes and click a button to accept it. Then it will show a modal pop-up asking the user to confirm. The pop-up has a "Confirm" button and a "Cancel" Button. The cancel button works immediately (hides the confirmation pop-up), but the confirm button hangs up for several seconds before it moves to the next step, which is a modal "Thank You" pop-up. The Confirm button writes data to a database.
Now, that's how it works inside the development environment. However, when it's on the production server, it will sit there for who-knows-how-long before it does anything. I can tell that it is writing to the database, and then displaying the data on the page, but the Confirmation pop-up stays up, and the Thank You pop-up never shows up. Also, the app is supposed to send an email to the user as acknowledgement, but it doesn't do that.
When it hangs up like this, I have never waited long enough to see when it catches up. And when it's live like that, I don't know of a way to debug it.
More info about the page: There are several update panels, one that responds to a timer tick every second to update fields on the page. The others are set to "conditional," being updated by other events. For example, the Confirmation and Thank You modals are in conditional update panels which respond to different events.
So I have two questions: Can anyone advise me about the hangup, and is there a way to debug from a live site? Oh, and maybe a third: Can you have too many update panels?
Update: Follow up question: Can it be going off on a different thread, going off track from the correct thread? I've never really understood threading, but this seems like a possibility.
This could be any number of things, so it's going to probably be something you're going to need to dive into and troubleshoot and it's probably not something we'll be able to help with too much.
First, the obligatory request: please post your code :)
Now, something that works quickly in dev and slowly in production is usually a resource issue or a code/data issue. First, take a quick look at the server and make sure it's up to the task of handling multiple users and all of that. It's worth a quick look, but it's usually an issue with the code or data.
What is that update command doing? Is the SQL behind it written well and efficiently? Are there any database locks happening, where another user is doing something and your code is actually waiting for it to complete before doing the update? How many rows are in the database, and how many are being affected?
I'd start by running a SQL trace to see what's really happening and to get an idea of how many database calls there are and how long each one takes to execute. If that's not the answer, look at the VB code and see if it's efficiently written. If not, go back to the server resources. Without seeing any code or having any idea what the application is supposed to do, I'd bet on the database queries being the culprit.
My bad. I hadn't mentioned one aspect because I had no idea it would be a factor, but it is. Part of the process was to log certain events to a log file. The way our IIS is set up, that's a big no-no. So it was throwing an error, but the error only manifested itself as a long delay. I commented out the code that opens, writes to, and closes the log file, and it's all good.

How can the delay before displaying meteor collections be best addressed?

Before displaying the items from a collection, Meteor seems to do some processing that leaves the client window without updates. You can see this live if you surf to http://madewith.meteor.com on a reasonable machine. My 2.6GHz 4GB RAM laptop takes about 5 seconds to render the items in the list, during which there is no indication of progress, and a new user in a hurry could reasonably believe the page has finished loading.
Is there a way to incrementally display items from a collection, such that the server pushes to the client the first items of data on the wire, and the browser renders them, while new items are received? Akin to HTTP's chunked transfer.
Or is the only solution to display a spinner graphic while loading the collection, similar to what https://atmosphere.meteor.com/ does (the "doing something smart" message)?
If you examine the xhr of the madewith app, you'll see that all (87 at this moment) apps load in the same request. So I don't think 'incrementally' displaying data is going to help in this case.
The issue is just that meteor apps take a while to initialise. I'm not sure if this can be improved in the future, but for now, yes, I think displaying a spinner is the best solution.
Regarding how to know when the data is ready, you can use the onReady callback on a collection, or see this PR for a better solution coming soon.
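The spinner approach boils down to a ready flag that flips inside the subscription's onReady callback. Below is a framework-free sketch of that idea; fakeSubscribe is a stand-in for Meteor's subscribe call (which does accept an onReady), and the render function and item names are made up for illustration:

```javascript
// Stand-in for Meteor.subscribe: delivers the documents, then signals ready.
function fakeSubscribe(onReady, deliver) {
  deliver(); // simulate the server pushing the collection's documents
  onReady(); // subscription is now ready
}

// Render either a spinner or the list, depending on the ready flag.
function render(state) {
  return state.ready
    ? state.items.map((item) => `<li>${item}</li>`).join("")
    : '<div class="spinner">Loading…</div>';
}

const state = { ready: false, items: [] };
const before = render(state); // spinner while the subscription initializes

fakeSubscribe(
  () => { state.ready = true; },
  () => { state.items = ["app-1", "app-2"]; }
);

const after = render(state); // the real list once onReady fires
console.log(before.includes("spinner")); // true
console.log(after); // <li>app-1</li><li>app-2</li>
```

In a real Meteor app the re-render after the flag flips would come for free from reactivity; the point of the sketch is just that the UI needs an explicit "not ready yet" state rather than silently showing nothing.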
