nyc equivalent of istanbul command

I am trying to convert my istanbul code coverage commands to nyc.
It appears as though nyc is now the command line interface for the istanbul test coverage library.
With istanbul, we'd get coverage like so:
istanbul cover foo.js --dir coverage
then we'd get a consolidated report like so:
istanbul report --dir coverage --include **/*coverage.json lcov
so I am trying to determine what the equivalent command is with nyc.
To get coverage with nyc, it looks like I can do this:
nyc node foo.js # writes coverage data to .nyc_output
but when I look in .nyc_output, there are a bunch of .json files but they don't seem to have any data in them.
If I try to get a report using
nyc report --reporter=lcov
that report command doesn't seem to do anything; the .nyc_output directory looks the same as before.
Note I am fine with using configuration files and avoiding extra commands at the command line.

Official documentation proposes using it like this:
nyc --reporter=lcov npm test
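Since the question mentions being fine with configuration files, the flags can also live in a .nycrc file in the project root. A minimal sketch (reporter and report-dir are documented nyc options; the values shown are just an example):
{
  "reporter": ["lcov", "text"],
  "report-dir": "coverage"
}
With that in place, a plain nyc npm test writes the lcov report into coverage/ with no extra flags.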

Related

Exclude some design unit from code coverage on Questasim

I ran code coverage on QuestaSim and got a UCDB file as output. But I need to exclude the code coverage of some modules that connect to the top module. I don't need some of the modules to be covered, and excluding them would improve the coverage report.
How can I do this without running the simulation again?
Thanks.
I used a command as shown below.
coverage exclude -du <design_unit>
or
coverage exclude -srcfile <design_file>
Reference: QuestaSim User Manual

What is instrumentation in nyc istanbul?

What is the instrumentation used in nyc?
nyc's instrument command can be used to instrument source files outside of the context of your unit-tests:
I assume it will do coverage outside unit-testing. I tried it with
nyc instrument src coverage/instrument
then ran the application and tried hitting an endpoint
npm start
but when I do the above, it doesn't generate a file in .nyc_output, so I can't report anything.
Do I have to finish the nyc instrument command? How do I do so?
nyc instrument is used to instrument your code. It produces output that, when run, will gather coverage data. This is useless unless you actually do something with that data... like report it or use it somehow. When you run a file that has been instrumented, it will store coverage data in global.__coverage__, I believe. You can then do what you want with that data. So, you could create a reporter that runs the instrumented file and then takes a look at global.__coverage__ to see what the coverage is like. Simply running an instrumented file won't generate any output.
To see what the coverage is of a file that has been instrumented, you can either create your own reporter where you require the instrumented file, then take a look at global.__coverage__ or you could run the nyc command to generate coverage data like normal.
Here are a few examples:
Let's say you have a file file.js that you want to check the coverage of and you've run the command:
nyc instrument file.js > file_instrumented.js
Now, you'll have a file named file_instrumented.js that has all the code necessary to generate code coverage.
If I run that file with node file_instrumented.js, nothing happens... other than the file executing the same as file.js.
But, if I create a file named coverage.js with this code:
require("./file_instrumented.js");
console.log(global.__coverage__)
Then, if you run node coverage.js, you'll be able to see the coverage data. You can then output whatever data you want. It's sort of lower-level access to the coverage data.
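Building on that, here is a minimal sketch of my own (not an official nyc API) that writes the collected data where nyc report looks for it, matching the .json files in .nyc_output mentioned in the question:
// dump.js - run the instrumented file, then persist its counters
const fs = require("fs");
require("./file_instrumented.js");
fs.mkdirSync(".nyc_output", { recursive: true });
fs.writeFileSync(".nyc_output/out.json", JSON.stringify(global.__coverage__));
After node dump.js, running nyc report --reporter=text should be able to pick the data up.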
If you want to generate a report in nyc_output you'll need to use the nyc command against the instrumented file. For example, something like this:
nyc --reporter=text --report-dir=./nyc_output node file_instrumented.js
A command like this would work too if you made the file_instrumented.js file executable:
nyc --reporter=text --report-dir=./nyc_output file_instrumented.js
However, if we try to run that same command against the original file.js like this:
nyc --reporter=text --report-dir=./nyc_output node file.js
You'll see that we get a report that shows no coverage. That is because file.js isn't instrumented and therefore doesn't give the nyc reporter any data to report.
You are correct that using nyc instrument will do coverage outside unit-testing frameworks as I've demonstrated above. It's a bit confusing because the docs aren't as clear as they should be. There are no good examples that I could find on how to get coverage of files outside of test frameworks so I figured this all out by looking at the source code for nyc as well as some of the testing frameworks.
The thing is that the testing frameworks instrument the file for you so when you run a command like this using the Mocha testing framework for example:
nyc --reporter=text mocha --ui bdd test.js
What's happening is:
nyc is executing mocha...
then mocha is instrumenting your code for you behind the scenes (correction, as per @Dining Philosopher: it looks like it's not actually mocha that instruments your code; it's nyc that instruments the code because it knows about mocha. Note: I haven't verified this in code)
then mocha is running that instrumented code
which runs tests while collecting coverage data
which gives nyc the global.__coverage__ that it needs to generate a report
finally, nyc uses that data to output a report in your nyc_output folder
Hope this all makes sense...
In nyc v15.1.0, we can create the report without an instrumented file; just run
nyc --reporter=text --report-dir=./nyc_output node file.js
and it works!
But I think if we want to get the coverage at run time, we still need the instrumented file to run.
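For that run-time case (like the endpoint scenario in the question), one possible approach, building on the global.__coverage__ mechanism described above, is to have the instrumented app write its counters out on shutdown; dump-coverage.js and the file name app.json here are hypothetical:
// dump-coverage.js - require this from the instrumented app's entry point
const fs = require("fs");
process.on("SIGINT", () => {
  fs.mkdirSync(".nyc_output", { recursive: true });
  fs.writeFileSync(".nyc_output/app.json", JSON.stringify(global.__coverage__ || {}));
  process.exit(0);
});
Exercise the endpoint, stop the app with Ctrl-C, then run nyc report --reporter=text as usual.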
Instrumentation in this context means adding some code in between the original code. You can see this with a simple nyc instrument example.js. The output looks confusing but is just valid javascript that still does the same things as the original program.
The important thing to know is that the added code has some side-effects. In the case of nyc the added code modifies the global.__coverage__ object (thanks Ray Perea).
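To make that concrete, here is a hand-written sketch of the idea (nyc's real output is far more elaborate, but the shape is similar):
// hand-instrumented sketch of: function add(a, b) { return a + b; }
global.__coverage__ = global.__coverage__ || {};
const cov = (global.__coverage__["add.js"] = { s: { "0": 0 } });
function add(a, b) {
  cov.s["0"]++;  // side effect: record that statement 0 ran
  return a + b;
}
add(1, 2);
console.log(global.__coverage__);  // { 'add.js': { s: { '0': 1 } } }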
Confusingly, nyc only instruments code that is tested (it knows about some testing frameworks such as mocha). To override this behavior, pass the --all flag to nyc.
A simple example running instrumented javascript:
nyc --all node example.js
Where example.js contains some uninstrumented javascript (e.g. console.log("Hello world")). This will instrument all javascript files (nested) in the current directory and report their coverage when running node example.js.
P.S. I would have edited Ray Perea's answer but their edit queue is full.

Running a windows executable within R using wine in ubuntu

I am trying to execute a Windows-only executable called groundfilter.exe (from FUSION) within RStudio on Ubuntu.
I am able to run groundfilter.exe from a terminal using wine as follows:
wine C:/FUSION/groundfilter.exe /gparam:0 /wparam:1 /tolerance:1 /iterations:10 test_Grnd.las 1 test.las
This executes fine and produces the file test_Grnd.las OK.
But when I try to do this from within RStudio using system() it doesn't quite work, and no output file is produced (unlike from the terminal). I do this:
command <- paste("wine C:/FUSION/groundfilter.exe",
                 "/gparam:0 /wparam:1 /tolerance:1 /iterations:10",
                 "/home/martin/Documents/AUAV_Projects/test_FUSION/test_FUSION/test_GroundPts.las",
                 "1",
                 "/home/martin/Documents/AUAV_Projects/test_FUSION/test_FUSION/test.las",
                 sep = " ")
system(command)
The executable appears to be called OK in the RStudio console, but runs as if no file names were supplied. The output (truncated) is:
system(command)
GroundFilter v1.75 (FUSION v3.60) (Built on Oct 6 2016 08:45:14) DEBUG
--Robert J. McGaughey--USDA Forest Service--Pacific Northwest Research Station
Filters a point cloud to identify bare-earth points
Syntax: GroundFilter [switches] outputfile cellsize datafile1 datafile2 ...
outputfile Name for the output point data file (stored in LDA format)
This is the same output as in the terminal if the file names are left off, so somehow my system call in R is not correct?
I think wine will not find paths like /home/martin/....
One possibility would be to put groundfilter.exe (and possibly dlls it needs) into the directory you want to work with, and set the R working directory to that directory using setwd().
The other possibility I see would be to give a path that wine understands, like Z:/home/martin/....
This is not an authoritative answer, just my thoughts, so please refer to the documentation for the real story.
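For example, the asker's call rewritten along the lines of the second suggestion (an untested sketch; wine conventionally maps the host filesystem root to the Z: drive):
command <- paste("wine C:/FUSION/groundfilter.exe",
                 "/gparam:0 /wparam:1 /tolerance:1 /iterations:10",
                 "Z:/home/martin/Documents/AUAV_Projects/test_FUSION/test_FUSION/test_GroundPts.las",
                 "1",
                 "Z:/home/martin/Documents/AUAV_Projects/test_FUSION/test_FUSION/test.las")
system(command)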

Generating knitr reports

I'm pretty new to knitr, but I've written a script that generates a report for a county. One of the first lines in the first code chunk is display_county <- "King", and it queries a database to make all sorts of nice things about King County. Now I want to create reports for every county in my state. The only line in the script that needs to be changed is the definition of display_county.
I know the brew package is set up for stuff like this, and I know there's overlap between brew and knitr, but I don't know what I should be using.
This answer using Brew and Sweave would work with minor modifications, but is there a nice knitr way to bypass brew?
If I understand correctly, you are going to use the same Rnw file for each county, so only the variable display_county will be different for each county. I would first make the call to the database to get all the names of counties and store them in a vector (say... myCounties). After that, your reports can be generated with a script containing the following:
library(knitr)
for (dc in myCounties) {
  display_county <- dc  # seen by the Rnw chunks via knit2pdf's default envir
  # output is the .tex file; the compiled PDF gets the same base name
  knit2pdf(input = 'county_report.Rnw', output = paste0(dc, '_county_report.tex'))
}
To handle errors more effectively, you can also wrap the knit2pdf call in a tryCatch statement with an error handler (tryCatch does nothing without one):
for (dc in myCounties) {
  tryCatch(
    knit2pdf(input = 'county_report.Rnw', output = paste0(dc, '_county_report.tex')),
    error = function(e) message('Report failed for ', dc, ': ', conditionMessage(e))
  )
}
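For this to work, the first chunk of county_report.Rnw should use the value set by the loop instead of hard-coding it; a minimal sketch of such a chunk (the exists() guard keeps standalone runs of the Rnw working):
<<setup>>=
# hypothetical first chunk: use the loop's display_county when present
if (!exists("display_county")) display_county <- "King"
@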

Best way to utilize bash script (Ubuntu) Rscript, pdflatx, and API

I'm currently writing some code that
connects to a server via API and fetches a bunch of data,
organizes that data by case ID,
generates an individual case report,
creates one pdf (case overview) file per case, and finally
pushes these files back to the server.
I'm quite familiar with R and somewhat familiar with pdflatex. I've just found out about bash scripts, as I have started to work in an Ubuntu environment, and I am now starting to realize that it is not straightforward which programs are best suited for the job.
My current plan is to fetch the data using RCurl in R, organize the data in R, and generate a bunch of .tex files. After that I plan to use pdflatex to create the PDF files, and finally use R again to push the newly created PDF files back to the server. I've started writing a small bash script,
for f in *.Rnw
do
  # do something on ${f%%.*}
  Rscript -e 'source("fetch.data.and.generate.Rnw.R")'            # 1 through 3
  Rscript -e "library(knitr); knit('${f%%.*}.Rnw')"               # 4
  pdflatex "${f%%.*}.tex"                                         # 4 continued
  rm "${f%%.*}.tex" "${f%%.*}.aux" "${f%%.*}.log" "${f%%.*}.out"  # cleanup after 4
  Rscript -e 'source("push.pdf.R")'                               # 5
done
I hoped someone out there could advise me about what software is best suited for the individual parts of the job and what would give me the best performance.
The data is not that extensive; I will be working with about 500 to 2000 cases and approximately 20 to 30 variables.
@flodel and @shellter make excellent points. I'll only add that, if you decide to keep using bash in your solution, you might find it easier to calculate your filename variable once and then use that elsewhere:
for f in *.Rnw; do
  stem="${f%%.*}"
  Rscript -e 'source("fetch.data.and.generate.Rnw.R")'   # fetch data, generate $stem.Rnw
  Rscript -e "library(knitr); knit('$stem.Rnw')"
  pdflatex "$stem.tex"
  Rscript -e 'source("push.pdf.R")'                      # push $stem.pdf
  rm "$stem.tex" "$stem.aux" "$stem.log" "$stem.out"     # keep $stem.Rnw and $stem.pdf
done
