How can I set <UFTApp>.<TestOrComponent>.Settings.Run.OnError non-programmatically for interactive (i.e. IDE-based debug) runs of a BPT component?

In UFT 14.50 (but I don't think this is version-specific), I face the following problem:
For action-based GUI tests, I can configure for each test, using File/Settings/Run, what should happen in case of error.
For BPT GUI components, I cannot; the settings dialog has no "Run" section.
The application area for the component, which would be the second place where this setting would make sense, has no such setting either.
I do understand that I can set this programmatically using <App>.<TestOrComponent>.Settings.Run.OnError.
I also understand that one can configure this setting in the execution settings for each component call in a BPT test flow script or BPT test script. But what about interactive component execution for debugging? Is it correct that I have to change this setting programmatically at runtime in every component if I want to define or change it for debug runs in the UFT IDE?
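For reference, the programmatic route looks roughly like this; a minimal VBScript sketch via UFT's Automation Object Model, assuming the standard QuickTest.Application COM class (the OnError values shown are the commonly documented ones, and the BusinessComponent line mirrors the <TestOrComponent> notation above):

    ' Run outside UFT, with the test or component already open in the IDE.
    Dim qtApp
    Set qtApp = CreateObject("QuickTest.Application")
    qtApp.Launch
    qtApp.Visible = True
    ' For an action-based test:
    qtApp.Test.Settings.Run.OnError = "Dialog"   ' or "Stop" / "NextStep"
    ' For a business component:
    ' qtApp.BusinessComponent.Settings.Run.OnError = "Dialog"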
Bonus question: what is the rationale for hiding this setting for a given component?

It's a good question, and one I had to think about, as there are quite a few components involved.
You build your code for BPT application areas in UFT, but the BPT test itself is designed to be managed and executed from ALM. Around version 11 or 12, HP (the vendor at the time) updated the remote agent to include a "debug mode" option (the top option in its settings dialog).
If you've not seen this window, you get to it by right-clicking the remote agent icon in your system tray.
(You can also set it by updating the mic.ini file; shout if you need more info on this.)
I've not used BPT for about a decade, but I am very aware of that remote agent option. Potentially, the run option you're after doesn't exist because, for BPT and its ALM dependency, this is all now controlled through that debug-mode setting.
With it CHECKED, when you run a test from ALM, you get the error popup.
With it UNCHECKED, when you run a test from ALM and it hits an issue, the error popup is suppressed.
You don't need to set your options programmatically.
Bonus points answer: logically this makes sense, because you're potentially kicking off an entire test set from ALM, and if one bad object or line in the first test blocked an entire overnight run, you'd be fairly angry. At least this way you can have your local machine set up for debug runs while all remote execution machines have it unchecked, so they just keep going. It becomes a machine configuration rather than a script configuration.
If this doesn't work as you expect, there are other ways, such as using a common function library with environment variables to switch everything to debug mode or to carry-on mode; one possible sketch follows.
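As one hedged reading of that suggestion: the library function below switches on an assumed environment variable. RunMode and InDebugMode are illustrative names, not UFT built-ins, and plain VBScript error suppression stands in here for the hidden OnError setting:

    ' Shared function library (illustrative sketch).
    Public Function InDebugMode()
        ' "RunMode" is an assumed environment variable, e.g. defined in the
        ' application area or an external environment file.
        InDebugMode = (LCase(Environment.Value("RunMode")) = "debug")
    End Function

    ' At the top of each component: VBScript error-handling state is scoped
    ' to the procedure (or script body) that sets it, so each component
    ' applies the mode itself.
    If Not InDebugMode() Then
        On Error Resume Next   ' carry-on mode: suppress errors and keep going
    End If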

Related

AEM Launcher run mode issue in Publish Instance

I am running a launcher in a publish instance. The launcher doesn't invoke the workflow when the run mode is publish, or publish and author, but it works when I set the run mode to author.
Can someone help me understand this behaviour of AEM?
Workflow launchers are tied to run modes, and you can change the behaviour so the launcher runs on either or both of them. The run-mode field is easy to miss because the default dialog height is small; you need to scroll down to see it.
Taken from the Adobe documentation:
When using one of the above run modes (author, publish, samplecontent, nosamplecontent), the value used at installation time defines the run mode for the entire lifetime of that installation.
These run modes cannot be changed after installation.
You cannot use the author AND publish run modes at the same time (I wonder what AEM would do if you tried to set both), and switching the run mode is also a bad idea.
See https://docs.adobe.com/docs/en/aem/6-0/deploy/configuring/configure-runmodes.html for details.
There must be something wrong with your instance. Any log messages?

asp.net config transforms - don't apply for normal builds, only publish

We have a number of config transforms which enable us to publish to a particular environment with the correct options specified in web.config.
However, it would be useful to run the application locally while specifying a particular build configuration. This would enable us to run the app locally connected to the live database, for example - quite handy when tracking down bugs.
However, when we press F5 to run the app locally, regardless of the build configuration currently selected, no transform of the web.config file appears to occur.
Is this the normal behaviour and is it possible to change it?
Reposted from comment:
Yes, it is the normal behaviour. It's a nuisance because it makes the whole thing feel half-a-job-ish, and I agree there should be an option to opt in to the same transformations being applied during a standard build. I haven't found any VS extensions that can do this for you yet, though I imagine it could be done. Personally, I make a ".Local" version of all my build configs and publish to a local IIS, which I can attach to very quickly and easily if I want to use a different environment/config's web.config. It requires some duplication, but it does the job.
Thanks David

Problem with Flex unit testing in IntelliJ

I have some problems running FlexUnit tests in IntelliJ.
Every time I execute a test, Internet Explorer (which is not even set as the default browser) pops up and blocks the unit test, i.e. blocks it as an ad, so I must allow access through that dumb top bar, then click through another confirmation, and only then does the test finally run. Is there any way to reconfigure it to use another browser, or to run it some other way, so that I can just hit the Run button in IDEA and see the results right away?
Thank you for your help.
As you are using Windows, have a look at the first comment by Alexander Doroshko in this bug report: http://youtrack.jetbrains.net/issue/IDEA-49795
Current behavior:
if 'Use system default browser' is selected at Settings | Web Browsers | Default Web Browser, then swf/html is started in the default OS application (either the default browser or the standalone Flash Player)
if the default browser is overridden in IDEA, then it is always used for both swf and html.
It would be more convenient if the standalone Flash Player were used for swfs independently of this setting, as long as it is the OS default program for swfs.
I recommend configuring the standalone Flash Player executable for executing unit tests. As you can also see from the report, this has been improved in IDEA 10.

Questions on setting up automated builds

When using an automated build system, it is usually a source control check-in that triggers the tests (though I assume this can be configured not to run on every check-in in a large team). How come build applications have actions for source code check-ins; is there any need for this? So, to summarise: is a build script executed on a source control check-in, or at a certain time every day?
Also, the term "break the build": does this mean code is put into source control and, when the build is executed, it fails because the code doesn't pass a unit test, or a code coverage tool returns results below a certain threshold?
Finally, what does a "step" mean (e.g. a one-step build)?
Thanks
So, to summarise: is a build script executed on a source control check-in, or at a certain time every day?
This depends. Some teams use a commit in the version control system as the trigger; some teams use a temporal event (e.g. every hour). If you run the build after each change, you get immediate feedback. If you let some time pass between two builds, you delay that feedback and, in case of a build failure, it's harder to identify the change(s) that caused it; it may require more investigation.
Just to clarify the vocabulary: for me, "the build" is actually the script/tool that automates all the things that need to be done (compiling, running tests, etc.). Running this automated build continuously is what people call "continuous integration". And triggering a build on an event (time-based or on commit), pulling the sources from the repository, running the build script, and notifying people in case of failure is the responsibility of a "continuous integration engine".
Also, the term "break the build": does this mean code is put into source control and, when the build is executed, it fails because the code doesn't pass a unit test, or a code coverage tool returns results below a certain threshold?
This is very binary indeed: the build passes, or it doesn't. When it doesn't, there can be many reasons: the code didn't compile, a test failed, a quality check failed (coding standards, code coverage, etc.). If you commit some code that causes a build failure (whatever the reason), then you "broke the build".
Finally, what does a "step" mean (e.g. a one-step build)?
In my opinion, a one-step build means that you can build your entire application, run the tests, run the quality checks, produce reports, assemble the application, deploy it, etc. with one command. This is a synonym for an automated build: if you can't run your build in one step, i.e. if it requires human intervention, then it isn't fully automated.
Also, the term "break the build": does this mean code is put into source control and, when the build is executed, it fails because the code doesn't pass a unit test, or a code coverage tool returns results below a certain threshold?
This could mean different things depending on company, project or team.
Usually "build" is some reference (usually automated) procedure which is either succeeds or not.
Thus "breaking the build" is doing something that leads failing of this reference procedure.
It could include or exclude unit-tests running, or regression test running, or deployment of your product, or whatever your team thinks should never fail.

proliferation of rocket tray icons

Each time I run a test using TestDriven, it creates another "rocket" icon in my system tray. I have to manually right-click and Quit each one to get rid of them. How can I avoid this?
Check for any open file handles you may be creating in your tests. Depending on the size of your test suite, that may be too time-consuming and tedious. There's also an option in the TestDriven.Net options to turn off caching the test process between test runs, which seems designed for cases like the one you're seeing. From their documentation:
Cache test process between test runs
By default the external test process will be cached when the 'Run Test(s)' command is used. This process appears in the tool tray as a rocket icon which can be used to kill the process. This is fine unless one of your tests starts leaking native resources (such as leaving a file handle open). The best solution is to fix the resource leak, but you now have the option to work around the issue by killing the test process at the end of each test run. This can be useful if the resource leak is in a 3rd party DLL which can't easily be changed.
From here: http://weblogs.asp.net/nunitaddin/archive/2008/12/03/testdriven-net-options-pane.aspx
I realize you asked this a year ago, so you may have already figured out a way to fix the problem. In that case, I would ask that you let us know what you did.
