What is nyc's instrument command used for?
nyc's instrument command can be used to instrument source files outside of the context of your unit-tests:
I assume it will do coverage outside of unit testing. I tried it with
nyc instrument src coverage/instrument
then ran the application and tried hitting an endpoint:
npm start
But when I do the above, it doesn't generate any files in .nyc_output, so nothing can be reported.
Do I have to finish the nyc instrument command somehow? If so, how?
nyc instrument is used to instrument your code. It produces output that, when run, will gather coverage data. This is useless unless you actually do something with that data, like report it or use it somehow. When you run a file that has been instrumented, it stores coverage data in global.__coverage__, I believe. You can then do what you want with that data. So, you could create a reporter that runs the instrumented file, then looks at global.__coverage__ to see what the coverage is like. Simply running an instrumented file won't generate any output.
To see the coverage of a file that has been instrumented, you can either create your own reporter, where you require the instrumented file and then look at global.__coverage__, or you can run the nyc command to generate coverage data as normal.
Here are a few examples:
Let's say you have a file file.js that you want to check the coverage of and you've run the command:
nyc instrument file.js > file_instrumented.js
Now, you'll have a file named file_instrumented.js that has all the code necessary to generate code coverage.
If I run that file with node file_instrumented.js, nothing visible happens; the file executes the same as file.js.
But, if I create a file named coverage.js with this code:
require("./file_instrumented.js");
console.log(global.__coverage__)
Then, when I run node coverage.js, you'll be able to see the coverage data. You can then output whatever data you want. It's sort of lower-level access to the coverage data.
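To make the "create your own reporter" idea concrete, here is a minimal sketch. The summarize function and the mock object are my own illustration; the s map of statement hit counts follows istanbul's coverage format, which is what nyc-instrumented code writes into global.__coverage__. In real use you would require() the instrumented file first instead of using a mock.

```javascript
// A minimal custom-reporter sketch that summarizes a coverage object.
// In real use: require("./file_instrumented.js"); then pass
// global.__coverage__ to summarize(). Here a mock stands in for it.

function summarize(coverage) {
  const rows = [];
  for (const [file, data] of Object.entries(coverage)) {
    const hits = Object.values(data.s);            // per-statement hit counts
    const covered = hits.filter((n) => n > 0).length;
    rows.push({
      file,
      statements: hits.length,
      covered,
      pct: Math.round((covered / hits.length) * 100),
    });
  }
  return rows;
}

// Mock shaped like istanbul's coverage format: statement 2 was never hit.
const mockCoverage = {
  "file.js": { s: { 0: 1, 1: 1, 2: 0, 3: 5 } },
};

console.log(summarize(mockCoverage));
// e.g. [ { file: 'file.js', statements: 4, covered: 3, pct: 75 } ]
```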
If you want to generate a report in nyc_output you'll need to use the nyc command against the instrumented file. For example, something like this:
nyc --reporter=text --report-dir=./nyc_output node file_instrumented.js
A command like this would work too if you made the file_instrumented.js file executable:
nyc --reporter=text --report-dir=./nyc_output file_instrumented.js
However, if we try to run that same command against the original file.js like this:
nyc --reporter=text --report-dir=./nyc_output node file.js
You'll see that we get a report showing no coverage. This is because file.js isn't instrumented and therefore gives the nyc reporter no data to report.
You are correct that using nyc instrument will do coverage outside unit-testing frameworks as I've demonstrated above. It's a bit confusing because the docs aren't as clear as they should be. There are no good examples that I could find on how to get coverage of files outside of test frameworks so I figured this all out by looking at the source code for nyc as well as some of the testing frameworks.
The thing is that the testing frameworks instrument the file for you so when you run a command like this using the Mocha testing framework for example:
nyc --reporter=text mocha --ui bdd test.js
What's happening is:
nyc is executing mocha...
then mocha is instrumenting your code for you behind the scenes (correction, as per Dining Philosopher: it looks like it's not actually mocha that instruments your code, but nyc itself, because it knows about mocha. Note: I haven't verified this in the code)
then mocha is running that instrumented code
which runs tests while collecting coverage data
which gives nyc the global.__coverage__ that it needs to generate a report
finally, nyc uses that data to output a report in your nyc_output folder
Hope this all makes sense...
In nyc v15.1.0, we can create the report without an instrumented file; just run
nyc --reporter=text --report-dir=./nyc_output node file.js
It worked!
But I think that if we want the running-time report, we still need to run the instrumented file.
Instrumentation in this context means adding some code in between the original code. You can see this with a simple nyc instrument example.js. The output looks confusing but is just valid JavaScript that still does the same things as the original program.
The important thing to know is that the added code has some side-effects. In the case of nyc the added code modifies the global.__coverage__ object (thanks Ray Perea).
Confusingly, nyc only instruments code that is tested (it knows about some testing frameworks such as mocha). To override this behavior, pass the --all flag to nyc.
A simple example running instrumented javascript:
nyc --all node example.js
Where example.js contains some uninstrumented JavaScript (e.g. console.log("Hello world")). This will instrument all JavaScript files (nested) in the current directory and report their coverage when running node example.js.
P.S. I would have edited Ray Perea's answer but their edit queue is full.
Related
I recently started looking into Makefiles to keep track of the scripts inside my research project. To really understand what is going on, I would like to understand the contents of .Rout files produced by R CMD BATCH a little better.
Christopher Gandrud is using a Makefile for his book Reproducible research with R and RStudio. The sample project (https://github.com/christophergandrud/rep-res-book-v3-examples/tree/master/data) has only three .R files: two of them download and clean data, the third one merges both datasets. They are invoked by the following lines of the Makefile:
# Key variables to define
RDIR = .
# Run the RSOURCE files
$(RDIR)/%.Rout: $(RDIR)/%.R
R CMD BATCH $<
None of the first two files outputs data; nor does the merge script explicitly import data - it just uses the objects created in the first two scripts. So how is the data preserved between the scripts?
To me it seems like the batch execution happens within the same R environment, preserving both objects and loaded packages. Is this really the case? And is it the .Rout file that transfers the objects from one script to the other or is it a property of the batch execution itself?
If the working environment is really preserved between the scripts, I see a lot of potential for issues if there are objects with the same names or functions with the same names from different packages. Another issue of this setup seems to be that the Makefile cannot propagate changes in the first two files downstream because there is no explicit input/prerequisite for the merge script.
I would appreciate learning whether my intuition is right and whether there are better ways to execute R files in a Makefile.
By default, R CMD BATCH will save your workspace to a hidden .RData file after running unless you pass --no-save. That's why it's not really the recommended way to run R scripts. The recommended way is with Rscript, which does not save by default; you must write code explicitly to save the workspace if that's what you want. This is different from the .Rout file, which should only contain the output from the commands run in the script.
In this case, execution doesn't happen in the exact same environment. R is still started three times, but the environment is serialized and reloaded between runs.
You are correct that there can be a lot of problems with saving and re-loading workspaces by default; that's why most people recommend against it. But in this case, the author just figured it made things easier for their workflow, so they used it. It would be better to be more explicit about input and output files in general, though.
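For illustration, a more explicit setup might look like the sketch below. The file names are hypothetical, chosen to match the three-script structure described in the question; each script would read its inputs and write its outputs explicitly (e.g. with readRDS/saveRDS), so make can propagate changes downstream.

```makefile
# Hypothetical names for the three scripts described above.
# Rscript starts a fresh R session each time and does not save the
# workspace, so every script must read and write its data explicitly.

all: merged.rds

# The merge step declares the two datasets as prerequisites, so make
# re-runs it whenever either upstream script's output changes.
merged.rds: merge.R clean1.rds clean2.rds
	Rscript merge.R

clean1.rds: download_clean1.R
	Rscript download_clean1.R

clean2.rds: download_clean2.R
	Rscript download_clean2.R
```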
I ran code coverage in QuestaSim and got a UCDB file as output. But I need to exclude the coverage of some modules that connect to the top module. I don't need those modules to be covered, and excluding them will improve the coverage report.
How can I do it without running the simulation again?
Thanks.
I used a command as shown below:
coverage exclude -du <design_unit>
or
coverage exclude -srcfile <design_file>
Reference: QuestaSim User Manual
I am trying to convert my istanbul code coverage commands to nyc.
It appears as though nyc is now the command line interface for the istanbul test coverage library.
With istanbul, we'd get coverage like so:
istanbul cover foo.js --dir coverage
then we'd get a consolidated report like so:
istanbul report --dir coverage --include **/*coverage.json lcov
so I am trying to determine what the equivalent command is with nyc -
to get coverage with nyc, it looks like I can do this:
nyc node foo.js # writes coverage data to .nyc_output
but when I look in .nyc_output, there are a bunch of .json files but they don't seem to have any data in them.
If I try to get a report, using
nyc report --reporter=lcov
That report command doesn't seem to do anything; the .nyc_output directory looks the same as before.
Note I am fine with using configuration files and avoiding extra commands at the command line.
Official documentation proposes using it like this:
nyc --reporter=lcov npm test
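Since you mentioned being fine with configuration files: the same options can go in a .nycrc file (or under the nyc key in package.json) so the command line stays short. The values below are just examples of commonly used keys, not a required setup:

```json
{
  "reporter": ["lcov", "text"],
  "report-dir": "./coverage",
  "all": true
}
```

With that in place, running nyc npm test picks up the configuration automatically.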
I need to find code coverage metrics for my Qt-based GUI code. Please suggest a tool that would allow me to create test cases and generate coverage values.
Thanks,
Nayan
gcov can be used to get statement and branch coverage.
Reference: Qt and gcov, coverage files are not generated
Another tool is TUG.
Reference:
http://pedromateo.github.io/tug_qt_unit_testing_fw/
I am using PHPUnit to do functional tests. I use the --log-junit option to generate results in JUnit XML format. I then use Phing to read this XML and generate an HTML report. The report is fine and neat. However, I have two questions:
Can I also show the results in graphical format in the same JUnit HTML report file (generated by Phing)? A pie chart or any other chart for that matter? (In terms of passed versus failed tests.)
The JUnit summary generated by the --log-junit option when running PHPUnit tests shows the test times in seconds, which is not easily readable when the number is big. Can I convert this to minutes by setting some command-line option? Is there any other way to do this?
I am trying to do this without the use of jenkins.
Please share if you know something about this.
Thanks.
What you can do is use PHPUnit to generate an HTML report for test coverage. It's similar to the JUnit log:
phpunit --log-junit results/phpunit/junit.xml --coverage-html=results/phpunit/coverage -c tests/phpunit.xml
Then you can use Jenkins Post-build Actions:
Publish JUnit test result report
Publish HTML reports
This way you will have test results but also a very useful code coverage report.