Can I filter error messages from certain sources in the Chrome DevTools console?

In the Chrome DevTools console, I keep getting error messages from certain places that do not actually affect my application's performance. Is there a way to filter out errors from those sources? (e.g., a YouTube iframe with errors, certain Chrome Extensions, etc.)

Yes, you can. You can filter out messages from any source file by right-clicking the file name and line number shown next to a message (something like main.js:15) and selecting Hide messages from *filename*. This hides all messages coming from that file.
Warning: this also hides that file's console.log() output, which can hamper your debugging, as well as future errors that might be important (and which you will now not see). Use it with caution on your own files; it should be harmless with files that aren't yours (again, things like iframes and extensions).
You can reverse the block by going to the Filter box near the top of the console (to the right of the eye icon) and deleting the entry. (Clearing the whole box removes all filters at once; you can also delete just the one you want to lift.) You can also filter messages more specifically using the Filter box, but for the purposes of the question (blocking messages from a certain file), the right-click approach does the job fastest and best.
For more information, see https://developers.google.com/web/tools/chrome-devtools/console/reference#filter, which also covers the other ways to filter console messages more specifically.

At the top of your console is a filter box. Here you can do:
plain text search
regular expression search, such as /\d+\s\d+/
attribute search, such as url:pagead
negate any of the above with the prefix -
combine (AND) any of the above with a space
For example, -url:pagead will filter out all messages whose source URL contains pagead. There are two other attributes, context: and source:, but I don't know what they do.
For example, def anon will show only messages that contain both def and anon (not necessarily adjacent).
I have not found any way to OR two expressions (UNION).
See the documentation.
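To try these out, here is a small sketch you can paste into the console to generate test messages, and then experiment with the filter box (the strings are arbitrary; note that url: matches a message's source file, not its text):
// Arbitrary sample messages to exercise the filter box.
console.log("def anon");        // shown by the filter: def anon
console.log("anon def extra");  // also shown (terms need not be adjacent)
console.log("def only");        // hidden by: def anon ("anon" is missing)
console.log("12 34");           // shown by the regex filter: /\d+\s\d+/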

Related

Customized JSON report for Karate framework [duplicate]

I want to have an option on the Cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios. At the end it produces nice Cucumber HTML reports. On "overview-features.html", the top right contains "Features", "Tags", "Steps" and "Failures"; I would like an option added there, called "Excluded Fails" or something like that, which when clicked provides exactly the same information as overview-features.html, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail due to defects in our own software, defects that might not get fixed for six months to a year. We've tagged them with a designated tag, "#bug=abc-12345". I want them muted/excluded from the Cucumber report produced at the end of the Bamboo build so I can quickly check whether the number of passed features/scenarios is 100%. If it is, great, that build is good; if not, I need to look into it further, as we appear to have some regression. Without excluding these scenarios, which are expected to fail and will continue to fail until they're resolved, it is very tedious and time-consuming to go through all the individual feature reports, look at the failing scenarios, and then look into why each failed. I don't want them removed completely: when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system, with the following key changes:
after the Runner completes you can massage the results and even re-try some tests
you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) An example of how to "post-process" result data before rendering a report: RetryTest.java; also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where you can, in theory, implement a new SuiteReports. The Runner also has a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047

Web test recording: automatically insert assertions during recording?

I need to automate, as much as possible, the recording of web test scenarios. Selenium IDE or, better, the Katalon plugin for Chrome seems very effective for this. However, what's missing from the recording are the assertions. I've so far found no real alternative to adding them by hand after the recording is done.
Now, I know which parts of my pages contain relevant output text, i.e. are subject to test: for instance, based on ID patterns, class names, tag hierarchy, etc.
So given that my web app is in a "known good state", I could theoretically grab the text content of the relevant tags during the recording, and insert my assertions in the recorded scenario right there and then. My aim is to automate this.
Is there any way to do this in the Katalon plugin, Selenium IDE, or any other automated web recording tool? I've read about Katalon Extension Scripts, but as far as I understand them, they cannot do what I want.
-- edit -- trying to rephrase and be more concrete --
During my recording, on certain events (e.g. on page load) I want the tool to find all elements that match certain selectors, and for each match store an assertion in the scenario that asserts the actual current value (e.g. div.innerText or input.value) of the element on the page. I want to define the events and the selectors that should trigger the insertion of assertions and the expression that defines the asserted value.
example
Suppose my webapp has a search page. I enter data in input fields, and hit the "search" button. These actions are recorded by most tools like Katalon Recorder. Now on the next page, the search results will show. Each search result will be in a div class="result". Suppose while recording I got two search results "foo" and "bar". So I want the tool to store in the scenario, while recording, an assertion that the first result should be "foo" and the second should be "bar", based on my rule that all $("div.result") should have their "innerText" asserted upon page load.
Avoid using Selenium IDE: compatibility with Firefox has been discontinued since Firefox 55, so you will not be able to run your tests on recent versions of Firefox.
When performing actions in the browser, it is relatively easy to record those actions to re-run later; it is 100% clear which button you just pressed.
But you could make a million different assertions on a page; it would be difficult for any tool to guess which things you would like to assert and add those assertions automatically, so I would be surprised if you found a tool that does exactly what you want.
What is keeping you from writing your own automated tests in code, from scratch? In my experience, coding your own tests is not that much slower, and once you are used to doing it you will be able to tackle more complex problems with much more ease.
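For what it's worth, a minimal hand-coded version of the search example from the question might look like the sketch below, using the selenium-webdriver package for Node.js (the URL, field selectors, and expected texts are assumptions taken from the question's example):
// Sketch: hand-written test for the hypothetical search page described
// in the question. The URL, selectors, and expected values are illustrative.
const assert = require("assert");
const { Builder, By, until } = require("selenium-webdriver");

(async function searchTest() {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://example.com/search");
    await driver.findElement(By.css("input[name='query']")).sendKeys("term");
    await driver.findElement(By.css("button.search")).click();
    await driver.wait(until.elementLocated(By.css("div.result")), 5000);

    // Assert the innerText of every div.result, per the rule in the question.
    const results = await driver.findElements(By.css("div.result"));
    const texts = await Promise.all(results.map(el => el.getText()));
    assert.deepStrictEqual(texts, ["foo", "bar"]);
  } finally {
    await driver.quit();
  }
})();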
I have no experience with Katalon.
You can't add assertions at recording time, but you can use Selenese after recording, too.
Check official reference here: https://docs.katalon.com/display/KD/Selenese+%28Selenium+IDE%29+Commands+Reference
For what it's worth, I've managed to get what I needed as follows:
locate the Extension directory of Katalon Recorder in my Chrome
copy the entire contents to Eclipse
modify the source content/recorder.js, method Recorder.attach(), by adding the following:
var self = this;
// For each element matching the selectors (the "..." below; see the note
// underneath), record an assertion on its current value or text.
$(...).each(function(i, el) {
    var target = self.locatorBuilders.buildAll(el);
    if (el.tagName == "SELECT" || el.tagName == "INPUT")
        recorder.record("assertValue", target, el.value, false);
    else
        recorder.record("assertText", target, el.innerText, false);
});
(Note: the ... stands for the jQuery selectors that define the areas known to contain relevant data in my application. This could be tweaked either in this source, e.g. by adding more selectors, or in the application itself, e.g. by adding a signaling class to certain tags in the HTML just to trigger assertions.)
In Chrome, activate "Developer mode" on the extensions page and load the modified extension ("Load unpacked").
While recording, assertions are now automatically added on each page load for the relevant parts (the ... in the above) of my web app.
happy!

Log of everything VoiceOver is saying

I'm using VoiceOver during development to test accessibility changes.
Many times VoiceOver detects changes properly and starts reading them, but is interrupted by new information, so the announcement that matters is essentially cancelled when additional changes present themselves.
In my case I have an alert that's very important, but ancestor changes seem to get read instead.
If I could see a log of everything VoiceOver is saying, I could at least be confident the text is being read and figure out a way to mitigate the problem (possibly by delaying it).
Is there any way to get a VoiceOver log?
I don't believe there is any way to print out a log, but you can save the output to an audio file by pressing Ctrl-Option-Shift-Z. If the audio runs too quickly, you could try slowing it down or using some commands to repeat the output. Some of the commands listed here might be helpful:
http://lab.dotjay.co.uk/notes/voiceover-commands/

Different "clickable" log items in Chorome Dev Tools console

When I console.log a JavaScript object or array in Chrome Dev Tools, I get a nice, clickable "drilldown" tree representation where I can inspect the various keys and values, with syntax highlighting, the (i) icon, .length shown, etc.
Is there some extension API for changing this behaviour so it is different for some other classes/instances? My idea is to format Clojure data structures so one can inspect them the same way.
EDIT: I know I can do simple formatting in console.log via %c etc., but that is only a tiny fraction of what I want to do.
So far I haven't been successful with googling. If you know where the relevant Chrome extension API is documented, can you please point me in the right direction?
There are no current extension APIs for customizing console output. https://code.google.com/p/chromium/issues/detail?id=142783 tracks that item. The team is open to a patch for this, if you'd like to look into tackling the implementation.
What you can do is override console.log and, when an object you are interested in is being printed, use your own function to print it. To achieve something similar to the default object output, you should probably use console.group and console text formatting (a sketch is shown below).
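A minimal sketch of that override; isClojureValue() is a hypothetical predicate for the structures you care about, and the styling is arbitrary:
// Sketch: intercept console.log and pretty-print matching values using
// console.group and %c text formatting. isClojureValue() is hypothetical.
function isClojureValue(v) {
  // Stand-in predicate; replace with a real check for your data structures.
  return v !== null && typeof v === "object" && v.clojure === true;
}

const originalLog = console.log.bind(console);
console.log = function (...args) {
  args.forEach(function (arg) {
    if (isClojureValue(arg)) {
      console.group("%cClojure value", "color: purple; font-weight: bold");
      Object.entries(arg).forEach(function ([key, value]) {
        originalLog("%c" + key + ":", "color: gray", value);
      });
      console.groupEnd();
    } else {
      originalLog(arg);
    }
  });
};
For example, console.log({ clojure: true, a: 1, b: [2, 3] }) now prints a grouped, styled listing; everything else falls through to the original console.log.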
See the official docs for more tips on using the Console.
