What is "Blocking Others" - phabricator

I'm very new to Phabricator, and I do not understand the term "Blocking Others" as it is used in the 'Active Revisions' dashboard. There are several bugs going back to (at least) Oct 2013 addressing this (https://secure.phabricator.com/T1279#42118, https://secure.phabricator.com/T4144, https://secure.phabricator.com/T10031), but I remain confused.
If I accept a revision, how can I configure Phabricator to move that revision to an unobtrusive location until it has changed and requires my attention again? I do not understand why all of the revisions under the top heading "Blocking Others" are revisions on which I am the last person to have taken action; they clearly do not require my attention. My current workflow is to ignore that section completely. I would like it to go away, but perhaps there is something I should understand that would make that section useful rather than annoying.

If you have Blocking Reviewers set up, the "Blocking Others" section can be confusing, since it is probably not you who is blocking the other user. The item is blocked, though. There are some upstream tasks on sorting this out better; see https://secure.phabricator.com/T4144.


How do I fix NPCs not showing a cast bar for some custom spells?

I am currently creating a module that creates custom boss fights in vanilla dungeons. To accomplish this without having to make edits to existing spells used by other creatures, I've been using Stoneharry Spell Editor to create custom spells that the bosses use.
The spells that I created are doing exactly what I want them to do but the majority of the spells that aren't instant cast will not have a cast timer shown while the boss is casting the spell.
Some of these spells haven't been edited aside from the damage; they're straight-up copies of a basic spell like Lightning Bolt. I searched through all of the attributes, and there was no difference between spells that would show the cast timer and those that wouldn't.
What determines whether or not a spell will have a visible cast bar, and how do I fix the spells that don't?
I have gone through the process of creating a custom MPQ file to patch my client in addition to the server side DBC file.
I was unable to find the cause of why some of these spells were not showing a cast bar, but I did find a flag that can be set to force the display of the cast bar, as a fix for the issue.
Setting AttributeEx4 with the value 268435456 (hex 0x10000000) will force the cast bar to display. I confirmed this flag worked with all of the spells that were previously not showing a cast bar.
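For reference, since the attribute columns are plain bitmasks, applying the fix is just a bitwise OR against the spell's existing AttributeEx4 value. A minimal sketch of the arithmetic (in TypeScript; the function name is mine):

// 268435456 decimal == 0x10000000 hex: a single bit in the AttributeEx4 mask.
const FORCE_CAST_BAR_FLAG = 0x10000000;

// OR the flag in without disturbing any bits already set on the spell.
function withForcedCastBar(attributeEx4: number): number {
  return attributeEx4 | FORCE_CAST_BAR_FLAG;
}

console.log(withForcedCastBar(0x00400000).toString(16)); // "10400000"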
Might be related, might be unrelated, but from what I understand the 3.3.5 client and Blizzard's own UI use events from the combat log to show things like cast bars in the UI frame. And because the 3.3.5 client famously has a bug where the combat log freezes and gets stuck, sometimes these things disappear. People also call this famous bug other names like the "Recount bug", since it leads to addons like Recount showing the wrong values for damage and such, because they stop receiving the correct events from the combat log. Notably, the bug is very strange: it does not always completely freeze everything, but rather still lets some of the events through, so the numbers in damage meter addons keep changing but are completely wrong.
I have stumbled onto the same problem with regular mobs and bosses, and noticed that some of them suddenly stop showing things like cast bars and buffs/debuffs after the combat log bug happens. The bosses still play their casting animations and such properly, but nothing shows in the UI. That's what led me to think that the things happening in the "world" are handled by the server sending opcodes, while the things shown in your UI frame come from the combat log.
So first make sure you're using a combat log clearing addon or a macro, like this one:
https://github.com/anzz1/CLFix
Yes, I know that code runs CombatLogClearEntries() every single frame, but from what I have tested, every other addon that clears the log less frequently still let the combat log bug happen occasionally. Only clearing it every frame has spared me from any more combat log bugs. The thing about this bug is that you have to clear the log before it happens; clearing it afterwards won't usually help, and you need to reload the whole UI.
Secondly, you could check what your client sees happening in the combat log by printing the combat log events and comparing the different spell events that way. This is easily done by creating a frame, registering the COMBAT_LOG_EVENT_UNFILTERED event, and printing the results.
Like this, just wrap that code into a .lua addon to see what's what:
local f = CreateFrame("Frame", nil, UIParent)
f:RegisterEvent("COMBAT_LOG_EVENT_UNFILTERED")
f:SetScript("OnEvent", function(self, event, ...)
    -- 3.3.5 payload: timestamp, eventType, srcGUID, srcName, srcFlags,
    -- destGUID, destName, destFlags, spellID, spellName, and any
    -- event-specific args (amount, school, etc.)
    print(...)
end)

Reassign user story during sprint?

If a story is in progress, and the next swim lanes are Code Review and QA-Ready, how should the assignment of stories work? Should a story remain assigned to the developer, with the code review and QA tasks created as sub-tasks in it? Or should the story be re-assigned when the developer moves it to code review, and then, when code review is done, moved to the QA lane by the reviewer and re-assigned to QA? Re-assigning tickets as they move from in-progress to later states seems like an anti-pattern. It looks okay to re-assign tickets before they are brought into the sprint, but not after.
Scrum does not have anything to say about how the work is done or how a board is managed. However, many teams look at Kanban's "pull" approach to answer this. In that case, work is never assigned or given; it is only claimed/taken on. Therefore, work would be moved to "Code Review" by the reviewer when they began the work. Similarly, the work would be moved to QA by the tester when they started. "Ready" columns are a bit of a misnomer, as they are not states; rather, they are statuses of the previous state. If your order is Code Review - QA Ready - QA, then in fact QA Ready is a possible designation on work in Code Review. This may seem minor, but it is very important for preventing pile-ups in your process where work stalls without owners.
There is no single answer, but one way of doing it is to think of a user story as a container of tasks, where each task is a small technical deliverable of any kind. With this mindset you can effectively stop thinking about who the assignee is, as each developer makes their own small contribution towards the goal.
One of the problems with task re-assignment is that at some point you can lose traceability of who has done what, and of productivity on a per-developer basis. So in this sense, having each team member do their own tasks and deliver towards the completion of a user story can solve this.
Then you can assign the user story to the product owner, or you can assign it to a developer who holds ownership of its delivery to test, at which point the tester takes over. But assigning the user story to a developer does not mean that they own it; it just means that it is their responsibility to ensure the hand-over to test, nothing more, nothing less.
When a tester encounters a bug, they create a bug ticket attached to the user story.
Not recommended, though it is feasible. You have to assess your current work situation. If the user story is something that can make a whole difference, then it would be better to just stop the sprint, reassess your situation, and make the necessary changes - then continue. Either way, when you are adding a new user story to the backlog, deadlines can hardly be met.
We use a slightly different approach, with the following columns on our Jira board:
To-do
In_progress
Ready for Review
Ready for QA
In-Testing
Rework/Rejected
Done
A developer picks a task from To-do, assigns it to himself, and moves it to In-progress. Once he is done, he moves it to Ready for Review and leaves it unassigned. Someone will pick it up, assign it to himself, and review it. After reviewing, that person moves the case to Ready for QA without assigning it to anyone. Whoever is free or planning to work on the case assigns it to himself, and when he starts working on it, he moves it to In-Testing. Depending on the result of testing, the case goes to Rework/Rejected or to Done. If it moves to Rework/Rejected, he assigns it to the person who originally worked on it, and that person, when reworking it, moves the case to In-progress again.
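If it helps to see the flow at a glance, here is the same column/assignment policy expressed as a small lookup table; a sketch only (TypeScript), with the column names matching our board:

// Board columns and who holds the assignment while a card sits in each one.
type Column =
  | "To-do" | "In-progress" | "Ready for Review"
  | "Ready for QA" | "In-Testing" | "Rework/Rejected" | "Done";

const assignmentPolicy: Record<Column, string> = {
  "To-do": "unassigned until a developer picks it up",
  "In-progress": "the developer who picked it",
  "Ready for Review": "unassigned; a reviewer self-assigns",
  "Ready for QA": "unassigned; a tester self-assigns",
  "In-Testing": "the tester who picked it",
  "Rework/Rejected": "reassigned to the original developer",
  "Done": "whoever moved it there",
};

console.log(assignmentPolicy["Ready for QA"]);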

How Do I Know What AOT Class I Need to Change to Modify the Behavior of Canceling PO Lines

Despite knowing AX well enough to get around, I still find it thoroughly confusing. I have a background in Visual Studio and C#, where I could always figure out where a particular segment of code was performing the unexpected behavior, but after a year and a half of AX 2012, it's still a mystery.
There's a legacy system that is not receiving POs from AX whenever a PO line is cancelled; my objective is to change AX to guarantee that cancelled lines are sent back to this legacy system.
I need to modify the behavior after PO lines are cancelled. I know that users cancel the line by going to Procurement and sourcing > Common > Purchase orders > All purchase orders. They Request Change on a PO, then under the Purchase Order Lines section they navigate to Update Line > Deliver Remainder; doing a Personalize on this form shows that the form is called PurchUpdateRemain, a Foundation form. I hit the Cancel Line button, then confirm the change.
I know that a workflow is triggered on this, and I've completed the whole process of approving the change, but no AIF service is called according to the trace I ran on it, so I'm confused as to what AIF service should handle it.
My question is: how do I find where a file should be sent out in AX? AX does not seem to give any indication of what logic is called after a line is cancelled. If I could just see the whole flow of the code, as I could in Visual Studio, I could determine where to make my change in AX, but I've not yet figured out how to do that. Any tips? I'm at my wits' end here.
A lot of it is just knowledge gained from experience, and pretty much figuring out where to look. Putting a breakpoint in, tracing the code, and stepping into methods is often the best way.
Here's a stack trace that shows where the status changes, so you can figure out where to put your code. I cut off the line numbers because my code is customized and they wouldn't line up.
Hopefully this won't be too late.
Check the class PurchCancel; in its run method you can see the process of how a purchase order is cancelled.

DTM giving a different report suite for custom links and page calls?

I'm getting some very strange behavior in DTM. When our page loads (from a local instance of the website), we get the expected call going out with the proper dev report suite. When a custom link call is made from that page, for some reason DTM sends it with a production report suite. If I look in Adobe Analytics for the custom link name reported under the prod rsid, it does not show up there.
Any ideas on what is going on and how I can fix this issue?
This is my shot in the dark based on what you have said, and it rests on the assumption that your statements are true (e.g. you aren't seeing pink elephants; the request was indeed showing your prod rsid in the proper portion of the request URL; you did in fact check your prod rsid after an acceptable amount of time had passed; no segment or other filter shenanigans; etc. In short, that you do know how to accurately perform the basic QA song and dance).
Under that assumption, the below is a scenario that can plausibly reproduce what you are describing. I could be partially right or totally off for your specific situation, but there's really no way for me to know for sure without having access to your DTM instance.
The Scenario
Long story short: it sounds like you have a blend of custom code and DTM automatic settings enabled, and DTM is overriding and/or ignoring your custom code for link tracking.
More specifically, it sounds to me like you have AA implemented as a tool in DTM, and in the config settings, you have your production and staging rsids specified in the text fields.
Then in the General section, you either do NOT have values specified for Tracking Server and Tracking Server Secure, or else they are set to the wrong values.
Then, in the Library Management section, you have either selected "Managed by Adobe" in which case DTM takes care of the library, or else you have selected "Custom" and you are adding the library yourself AND you have NOT checked "Set report suites using custom code below".
Then, somewhere in DTM (e.g. the Library Management > Custom code box, or Customize Page Code codebox) you have code that pops rsid stuff (e.g. s.account, s_account, dynamicAccountList stuff), and possibly also trackingServer and trackingServerSecure.
Finally, you (like most other people, because DTM's double script include for staging vs. prod is.. dumb) just use the prod script include on your page, and either use debug/staging mode or rely on whatever rsid routing logic you've set up to route to dev.
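For concreteness, the "custom code that pops rsid stuff" I mean above usually looks something like this; a minimal sketch (TypeScript-flavored; the rsids, hostname test, and tracking servers are all hypothetical):

declare const s: any; // the AppMeasurement object DTM instantiates

// Typical custom-code-box rsid routing: dev hosts get the dev report
// suite, everything else gets prod.
const isDev = /^(localhost|dev\.)/.test(window.location.hostname);
s.account = isDev ? "mycompanydev" : "mycompanyprod"; // hypothetical rsids
s.trackingServer = "metrics.example.com";             // hypothetical
s.trackingServerSecure = "smetrics.example.com";      // hypothetical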
So.. when the page is first loaded, DTM loads the AA library and it sets variables and stuff based on what you specified in the tool config. During this time, it is also popping any custom code blocks you have in the tool config, which may or may not override what you have specified in the tool config fields, depending on what you enabled. Then after that, it pops stuff you have in page load rules (if any), etc..
But then comes the link click.. As I have mentioned in other posts on SO, DTM has this caveat (IMO bug) about how it references the AA object after the initial page load/AA request: basically, it doesn't. Instead, it makes use of internal methods (the main one being a .getS() method) to create a new instance of the AA object, based on whatever things you have configured in the tool config section. Well here's the rub.. it does NOT account for or execute any custom coding you have done in code boxes in the tool config section.
So that basically happens whenever an event based or direct call rule is triggered, and it effectively screws you. Why does DTM do this? I do not know. IMO Adobe needs to change this caveat/bug: either they should refactor DTM to execute the code boxes, or they could, you know.. just reference the original AA object, like any normal script would do.
But in any case..
So for example, my theory here is that the page loads fine and points to the dev rsid based on your setup. But then you click a link and an event triggers, and DTM makes a new AA object without caring about your custom code, so all it has to go on is what you have in the tool's config fields.
Since DTM doesn't actually have any rules around the prod vs. dev rsids you specify in those fields (you have to write custom code in the custom code boxes - that DTM ignores!), it just pops the prod rsid, because that's the script include you have on your page.
Then as far as not seeing the data actually show up in your prod rsid: again, since DTM ignores what you set in your custom code boxes, it defaults to what is specified in the trackingServer fields in the tool config, and my assumption here is that they are either blank or wrong (you should be able to look at the request URL to Adobe to verify this). This theory follows from what you said: the prod rsid is right and you see a request being made, so the next culprit would be a wrong tracking server.
So, that is my theory of what's going on. Maybe it's all right, maybe it's some right, hopefully it may point you in the right direction at least.
Edit:
If you can confirm that this is indeed how you have things setup, then you will naturally ask "Okay, well what do I do about that?". As I have said in a lot of my other SO answers.. basically, your only option is to uncheck all the settings that make DTM automate AA, and in all your rules, keep the AA section disabled and whatever AA vars you wanna set, set them yourself and make the s.t() or s.tl() call yourself in a 3rd party script code box, so that it continues to reference and pop based off the originally instantiated AA object.
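For example, the third-party script box of a direct call or event based rule might look something like this; a sketch that assumes the page-load AA object is the global s (the prop/event names and link name are hypothetical):

declare const s: any; // the AA object created on the initial page load

// Limit the hit to the variables this link call needs, then fire a
// custom link ("o") request off the original object, so any custom
// code that ran at page load still applies.
s.linkTrackVars = "prop1,events";
s.linkTrackEvents = "event1";
s.prop1 = "my link name"; // hypothetical variable and value
s.events = "event1";
s.tl(true, "o", "my link name");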
Update
Based on your comments below: okay, so yeah, that sounds like what I described, and it accounts for the prod rsid popping. As for data not showing up in the report: if you are certain the tracking server is set correctly (the request URL looks good), then this isn't a DTM issue. Here are some other explanations for why the data wouldn't show up:
Are you sure the request is being sent to your prod rsid? I don't know what you are looking at to verify this, but this is where you should be looking, in the request URL to AA: "http://[trackingServer value]/b/ss/[s.account value]/1..."
Click request isn't making it to Omniture. Verify in a packet sniffer that the request is actually made and that you are getting a 200 OK or NS_Binding_Aborted response.
You aren't waiting long enough to check for the data. Even basic hit data and looking at "real time" reports takes a little bit of time to show up.
You have a segment/filter active that's not jiving with the data you are trying to look at. Make sure that you don't have anything applied. Or, if you are using those things to find your data (and aren't seeing it), ensure that you are correctly applying it.
You recently created the rsid and the "go live" date hasn't passed yet. Data will not show up in the report suite until up to 24 hours after the specified "go live" date.
You have a vista rule in place that's affecting data showing up. Some companies have a vista rule in place for a number of reasons and there are a million ways it could affect data (e.g. routing to a different report suite). For shits and grins, check your dev (or other rsids) to see if your data showed up there. Even if that doesn't make sense, at least it's a step forward.
You have a bots / ip exclusion rule in place that's catching data from your location.
The data sent in from the link click isn't relevant to the report. For example, maybe you are looking at e.g. prop10 report and prop10 isn't actually sent in the click request.
I know a lot of these are basic things to check, and no doubt you've checked, but check again. Have someone else check for you to be sure. I'm not questioning your abilities here, but even the best of coders forget to cross their t's and dot their i's sometimes, and manage to miss obvious things. If you are sure about all of these, then contact Adobe ClientCare, because I really can't think of anything else that wouldn't involve an issue with Adobe's backend.
I ran into a similar problem with my implementation. Essentially what I did was set the s.account variable directly inside doPlugins, so it would be set on all tracking calls. I wrote up the specifics here: DTM Tracking Account
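A minimal sketch of that approach (the rsid values and hostname test are hypothetical); doPlugins fires on every s.t() and s.tl() call once usePlugins is enabled, so the rsid gets forced consistently no matter which object instance DTM created:

declare const s: any; // the AppMeasurement object

s.usePlugins = true;
s.doPlugins = function (sObj: any) {
  // Runs on every page view and custom link call.
  sObj.account = /^(localhost|dev\.)/.test(window.location.hostname)
    ? "mycompanydev"   // hypothetical dev rsid
    : "mycompanyprod"; // hypothetical prod rsid
};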

Best Strategies for preventing addresses with PO Boxes?

I have a client that ships via UPS and therefore cannot deliver to Post Office boxes. I would like to validate customer address fields to prevent them from entering addresses that include a PO box. Ideally this would be implemented as a regex, so that I could use a client-side regex validation control (ASP.NET).
I realize there's probably no way to get a 100% detection rate, I'm just looking for something that will work most of the time.
UPS also has tools that you can integrate to do this... that way you can verify exactly whether they will ship to an address, what the cost would be, schedules, etc. I suggest visiting the UPS IT Solutions page for more information.
This should get you started. Test to see if the Address field matches this regex.
"^P\.?\s?O\.?\sB[Oo][Xx]."
Translation to English: that's a P at the beginning of the line, followed by an optional period and an optional space, followed by an O and an optional period, followed by a required space, followed by "Box" (with the "o" and "x" in either case), followed by any character.
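If it helps, here's a quick way to sanity-check the pattern, using the "ignore case" flag suggested further down; a TypeScript sketch with test strings of my own:

// Roughly the pattern above with the "i" flag, and the space before
// "Box" made optional so "POBox" also matches.
const poBox = /^P\.?\s?O\.?\s?Box/i;

const samples = [
  "PO Box 123",                // true
  "p.o. box 55",               // true
  "Portland Oaks Boulevard 9", // false
  "Silver Valley PO Box 3901", // false: the ^ anchor misses embedded boxes
];
for (const addr of samples) {
  console.log(addr, poBox.test(addr));
}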
You might be better off putting a disclaimer on the page warning that you cannot ship to post office boxes, as opposed to validating the input.
More than likely, if you do create a regex that catches most of the P.O. Box scenarios, there's a good chance it'll also catch things you weren't intending (e.g. a customer whose street address happens to contain the letters 'p', 'o', and 'box').
Unfortunately, UPS's online software allows P.O. Boxes to go through, but will choke on them once they're in the shipping channel.
In our case, our cart abandonment rate went up when we tried to gracefully prevent P.O. Boxes. We found it much more cost effective to leave it alone, accept the sale, bring it to the attention of customer service, and let them resolve it.
Of course, if you get a high incidence of P.O. Boxes, this may not be the case for you.
I'd start with a regex à la Lizard's (but use the "ignore case" flag :)), test on historical data, then iterate as you see which invalid inclusions and exclusions turn up in testing.
Most shipping providers (for example FedEx) will validate the shipping address. For example, with FedEx web services, there is a call to validate a shipping address and get the estimated cost. This not only ensures that the address is not a PO Box, but also makes sure that the rest of the address is valid.
Regarding the OP's comment to Jason Coco's answer:
Since you're in a position to add regex validation to the shipping address, I assume that you have control of the application (i.e., you have the source and can modify it). If that's the case, then you should be able, on receipt of the submitted data, to check whether it is to be shipped via USPS, FedEx, or UPS, and submit a request to the appropriate shipper-specific address validator, gaining all the benefits suggested in Jason's answer.
By making it shipper-specific, this would also let you avoid one-size-fits-all rules such as "no PO boxes because UPS doesn't deliver to them", even when the user can select non-UPS shippers that do deliver to PO boxes.
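A sketch of that shape (TypeScript; all names and rules here are hypothetical, and the real validators would call each carrier's address-validation service rather than a local regex):

// Dispatch address validation to a shipper-specific rule.
type Shipper = "UPS" | "FedEx" | "USPS";

const poBoxPattern = /\bP\.?\s?O\.?\s?Box\b/i;
const noPoBoxes = (address: string): boolean => !poBoxPattern.test(address);

const canShipTo: Record<Shipper, (address: string) => boolean> = {
  UPS: noPoBoxes,   // UPS does not deliver to PO boxes
  FedEx: noPoBoxes, // hypothetical: same restriction applied
  USPS: () => true, // USPS delivers to PO boxes
};

console.log(canShipTo.UPS("Silver Valley PO Box 3901"));  // false
console.log(canShipTo.USPS("Silver Valley PO Box 3901")); // true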
What if it doesn't start with "PO Box" or "P.O. Box"?
Example:
John Schmidt |
Silver Valley PO Box 3901 |
Whereswaldoville, SI. 78946
I used an onblur event on the address field that calls a JavaScript function which uppercases the input and uses indexOf to check whether "PO BOX" or "P.O" appears at a position >= 0.
If neither search string is found, indexOf returns -1; otherwise it returns the match's starting position, which is always 0 or more.
This ensures that lazy typing such as 'po box', 'p.o box', and 'p.o. box' will all be recognized. I suppose you could add 'po. box' as well.
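For illustration, the handler could look something like this; a TypeScript sketch with hypothetical element ids:

// Toggle a warning when the blurred address looks like a PO box.
const addressField = document.getElementById("address") as HTMLInputElement;
const warning = document.getElementById("pobox-warning") as HTMLElement;

addressField.addEventListener("blur", () => {
  const value = addressField.value.toUpperCase();
  // indexOf returns -1 when the substring is absent, else its 0-based position.
  const looksLikePoBox =
    value.indexOf("PO BOX") >= 0 || value.indexOf("P.O") >= 0;
  warning.style.display = looksLikePoBox ? "block" : "none";
});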
Anyway, the condition triggers an unobtrusive message saying "We can't ship to a PO Box address." It's a feature that you don't see it if it doesn't apply to you; users who don't have JS or CSS enabled will simply always see the message. The only failure of this graceful degradation is if a user has CSS enabled but not JS (in which case they won't see the message at all). I only came up with the solution today, but if I think of a better way, I'll come back and post it here.
