How to generate nyc report from json results (no .nyc_output)? - istanbul

I've inherited a JS code base with Jasmine unit tests. The testing framework uses Karma and istanbul-combine to get code coverage. It seems istanbul-combine isn't working with the present node modules, and besides it is no longer maintained: the recommended replacement is nyc. I'm having trouble replacing istanbul-combine with nyc in the Makefile.
I succeeded in merging my separate coverage-result JSON files into a single coverage-final.json file (see this SO question), but now I need to generate the summary report.
How do I generate a summary report from a coverage.json file?
One problem here, I think, is that I have no .nyc_output directory with intermediate results, since I'm not using nyc to generate coverage data. All my coverage data is in a coverage directory and its child directories.
I've tried specifying a filename:
npx nyc report --include coverage-final.json
Also tried specifying the directory:
npx nyc report --include coverage
Neither works.
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
----------|---------|----------|---------|---------|-------------------
The CLI help documentation says
--temp-dir, -t directory to read raw coverage information from
But when I point that at the coverage directory (viz., npx nyc report -t coverage), I get the same unsatisfactory result. nyc is apparently fairly rigid about the formats in which it will accept this data.
Here's the original Makefile line that I'm replacing:
PATH=$(PROJECT_HOME)/bin:$$PATH node_modules/istanbul-combine/cli.js \
-d coverage/summary -r html \
coverage/*/coverage-final.json

Using this line in my Makefile worked:
npx nyc report --reporter html --reporter text -t coverage --report-dir coverage/summary
It grabs the JSON files from the coverage directory and puts them all together into an HTML report in the coverage/summary subdirectory. (I didn't need the nyc merge command from my previous question/answer.)
I'm not sure why the -t option didn't work before. It may be that I was using the wrong version of nyc (15.0.0 instead of 14.1.1, fwiw).

After trying multiple nyc commands to produce the report from JSON with no luck, I found an interesting behavior of nyc: you have to be in the parent directory of the instrumented code when generating a report. For example: suppose the code I instrumented is in /usr/share/node/** and the merged coverage.json result is in the /tmp directory. If I run nyc report --temp-dir=/tmp --reporter=text under /tmp, I won't get anything:
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
----------|---------|----------|---------|---------|-------------------
But if I run the same command under /usr/share/node or /, I'm able to get the correct output with coverage numbers.
I'm not sure if it's a weird permission issue in nyc or expected behavior.
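A plausible explanation (an assumption, not documented nyc behavior): the merged coverage JSON is keyed by the instrumented source files' paths, and nyc only reports on files it can resolve from the current working directory, so running the report elsewhere yields the all-zero table. A minimal sketch of that path-keyed structure, using a throwaway file rather than real coverage data:

```shell
# Sketch: coverage-final.json entries are keyed (and self-described) by
# source-file paths; the paths here are illustrative, not real output.
demo=$(mktemp -d)
cat > "$demo/coverage-final.json" <<'EOF'
{"/usr/share/node/app.js": {"path": "/usr/share/node/app.js", "s": {"0": 1}}}
EOF
# The entry's key is an absolute path into the instrumented tree:
grep -c '/usr/share/node/app.js' "$demo/coverage-final.json"
```

If those paths cannot be resolved relative to where the report runs, every file shows 0% coverage, which matches the behavior above.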

How do I ONLY zip the subfolders' content rather than the subfolder itself?

I have some folders with content in it like this:
Mainfolder
|
|---SubFolder1
|
|----ContentFiles1
|----MoreContentFiles1
|
|---SubFolder2
|
|----ContentFiles2
|----MoreContentFiles2
|
|---SubFolder3
|
|----ContentFiles3
|----MoreContentFiles3
and would like to zip the contents like this:
Subfolder1.zip
|
|----ContentFiles1
|----MoreContentFiles1
Subfolder2.zip
|
|----ContentFiles2
|----MoreContentFiles2
Subfolder3.zip
|
|----ContentFiles3
|----MoreContentFiles3
However, there's a problem: there are almost 600 of these folders which need to be zipped. I have tried various methods which almost worked but just did not make the cut, like:
for /d %%X in (*) do "F:\Program Files\7-Zip\7z.exe" a "%%X.zip" "%%X\" -mx=5 -tzip
This bat file gave me almost what I needed, but the format of those zip files was:
Subfolder1.zip
|
|--Subfolder1(folder)
|----ContentFiles1
|----MoreContentFiles1
Subfolder2.zip
|
|--Subfolder2(folder)
|----ContentFiles2
|----MoreContentFiles2
Subfolder3.zip
|
|--Subfolder3(folder)
|----ContentFiles3
|----MoreContentFiles3
These were sadly unusable.
This didn't seem like too tough of a task at first, but even after researching a little about batch coding and watching multiple YouTube videos, I still don't have a proper solution. The problem is that many of the methods I find either simply don't work or are outdated.
Example pictures (omitted): what I get with the methods I have tried vs. what I need.
Codes which did not even open (either outdated or been used improperly by me):
@echo off
setlocal
set zip="C:\Program Files\WinRAR\rar.exe" a -r -u
dir C:\MyPictures /ad /s /b > C:\MyPictures\folders.txt
for /f %%f in (C:\MyPictures\folders.txt) do if not exist C:\MyPictures\%%~nf.rar %zip% C:\MyPictures \%%~nf.rar %%f
endlocal
exit
and:
@echo off
setlocal
for /d %%x in (C:\MyPictures\*.*) do "C:\Program Files\7-Zip\7z.exe" a -tzip "%%x.zip" "%%x\"
endlocal
exit
both found from: Batch file to compress subdirectories
Note: I did indeed change the directories in these batch codes, but it did not work, and I am unsure what is meant by the "folders.txt" directories.
So if anyone knows anything, or could tell me if it's not possible, that would be great :).
Use .\folder\* syntax (dot and asterisk):
for /d %%G in (*) do "F:\Program Files\7-Zip\7z.exe" a "%%G.zip" ".\%%G\*" -mx=5 -tzip
Credits to https://superuser.com/a/340062

copy first 100 files from directory which have a specific sentence inside the file

I want to copy the first 100 files from one directory, /ctm/logs, to another, /data/input.
These files must have the sentence "completed with error code zero" inside them.
Is there any method?
cd into your /ctm/logs directory, then use this:
grep -lr "completed with error code zero" . | head -100 | xargs -I% cp "%" /data/input
Double check to make sure the desired 100 files are being copied. If there is an issue with the ordering this is better suited to be solved with a small program/script rather than a one-liner.
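If any log names might contain spaces or other odd characters, a NUL-delimited variant is safer. This sketch assumes GNU grep, head, and xargs (for -Z, -z, and -0) and demonstrates on throwaway directories standing in for /ctm/logs and /data/input:

```shell
# NUL-safe sketch (GNU tools assumed); scratch dirs stand in for the
# real /ctm/logs and /data/input.
src=$(mktemp -d); dest=$(mktemp -d)
printf 'job start\ncompleted with error code zero\n' > "$src/a.log"
printf 'completed with error code zero\n'            > "$src/b.log"
printf 'failed with code 3\n'                        > "$src/c.log"
grep -lrZ "completed with error code zero" "$src" \
  | head -z -n 100 \
  | xargs -0 -I{} cp {} "$dest"/
ls "$dest"   # a.log and b.log are copied; c.log is skipped
```

The same caveat about ordering applies: grep emits matches in directory-traversal order, not any meaningful "first 100".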

How to move files based on a list (which contains the filename and destination path) in terminal?

I have a folder that contains a lot of files. In this case images.
I need to organise these images into a directory structure.
I have a spreadsheet that contains the filenames and the corresponding path where the file should be copied to. I've saved this file as a text document named files.txt
+--------------+-----------------------+
| image01.jpg | path/to/destination |
+--------------+-----------------------+
| image02.jpg | path/to/destination |
+--------------+-----------------------+
I'm trying to use rsync with the --files-from flag but can't get it to work.
According to man rsync:
--include-from=FILE
This option is related to the --include option, but it specifies a FILE that contains include patterns (one per line). Blank lines in the file and lines starting with ';' or '#' are ignored. If FILE is -, the list will be read from standard input
Here's the command I'm using: rsync -a --files-from=/path/to/files.txt path/to/destinationFolder
And here's the rsync error: syntax or usage error (code 1) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/rsync/rsync-52.200.1/rsync/options.c(1436) [client=2.6.9]
It's still pretty unclear to me how the files.txt document should be formatted/structured and why my command is failing.
Any help is appreciated.
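One thing worth noting: --files-from takes a list of source paths and a single destination for the whole transfer, so a two-column list with a different destination per file doesn't map onto it. A plain read loop is a closer fit. This is only a sketch; it assumes files.txt is tab-separated and runs against scratch data rather than the real image folder:

```shell
# Sketch: copy files to per-line destinations from a tab-separated files.txt
# (assumed format: "filename<TAB>destination/path"). Demo data is created
# in a scratch directory.
cd "$(mktemp -d)"
touch image01.jpg image02.jpg
printf 'image01.jpg\tsorted/2020/jan\nimage02.jpg\tsorted/2020/feb\n' > files.txt
while IFS=$'\t' read -r file dest; do
  mkdir -p "$dest" && cp "$file" "$dest"/   # swap cp for mv to move instead
done < files.txt
ls sorted/2020/jan
```

Exporting the spreadsheet as tab-separated text keeps the parsing simple; a comma-separated export would need IFS=',' instead.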

How to make a single makefile that applies the same command to sub-directories?

For clarity, I am running this on windows with GnuWin32 make.
I have a set of directories with markdown files in at several different levels - theoretically they could be in the branch nodes, but I think currently they are only in the leaf nodes. I have a set of pandoc/LaTeX commands to run to turn the markdown files into PDFs - and obviously only want to recreate the PDFs if the markdown file has been updated, so a makefile seems appropriate.
What I would like is a single makefile in the root, which iterates over any and all sub-directories (to any depth) and applies the make rule I'll specify for running pandoc.
From what I've been able to find, recursive makefiles require you to have a makefile in each sub-directory (which seems like an administrative overhead that I would like to avoid) and/or require you to list out all the sub-directories at the start of the makefile (again, would prefer to avoid this).
Theoretical folder structure:
root
|-make
|-Folder AB
| |-File1.md
| \-File2.md
|-Folder C
| \-File3.md
\-Folder D
|-Folder E
| \-File4.md
|-Folder F
\-File5.md
How do I write a makefile to deal with this situation?
Here is a small set of Makefile rules that hopefully will get you going:
%.pdf : %.md
	pandoc -o $@ --pdf-engine=xelatex $^
PDF_FILES=FolderA/File1.pdf FolderA/File2.pdf \
FolderC/File3.pdf FolderD/FolderE/File4.pdf FolderD/FolderF/File5.pdf
all: ${PDF_FILES}
Let me explain what is going on here. First we have a pattern rule that tells Make how to convert a Markdown file to a PDF file. The --pdf-engine=xelatex option is here just for the purpose of illustration.
Then we need to tell Make which files to consider. We put the names together in a single variable, PDF_FILES. The value of this variable can be built via a separate script that scans all subdirectories for .md files.
Note that one has to be extra careful if filenames or directory names contain spaces.
Then we ask Make to check whether any of the PDF_FILES should be updated.
If you have other targets in your makefile, make sure that all is the first non-pattern target, or call make as make all.
Updating the Makefile
If shell functions works for you and basic utilities such as sed and find are available, you could make your makefile dynamic with a single line.
%.pdf : %.md
	pandoc -o $@ --pdf-engine=xelatex $^
PDF_FILES:=$(shell find . -name "*.md" | xargs echo | sed 's/\.md/\.pdf/g')
all: ${PDF_FILES}
MadScientist suggested just that in the comments
Otherwise you could implement a script using the tools available on your operating system and add an additional target update: that would compute the list of files and replace the line starting with PDF_FILES with an updated list of files.
The final version of the code that worked for Windows, based on @DmitiChubarov's and @MadScientist's suggestions, is as follows:
%.pdf: %.md
	pandoc $^ -o $@
PDF_FILES:=$(shell dir /s /b *.md | sed "s/\.md/\.pdf/g")
all: ${PDF_FILES}

Efficient way of getting listing of files in large filesystem

What is the most efficient way to get a "ls"-like output of the most recently created files in a very large unix file system (100 thousand files +)?
I have tried ls -a and some other variants.
You can also use less to search and scroll it easily.
ls -la | less
If I'm understanding your question correctly, try
ls -a | tail
More information here
If the files are in a single directory, then you can use:
ls -lt | less
The -t option to ls sorts the files by modification time, and less lets you scroll through them.
If you want recent files across an entire file system, i.e., in different directories, then you can use the find command:
find dir -mtime -1 -print | xargs ls -ld
Substitute the directory where you want to start the search for "dir". The find command prints the names of all files that have been modified in the last day (-mtime -1 means modified within the last day), and xargs feeds that list of files to ls, giving you the ls-like output you want.
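For very large trees it can also be cheaper to let find print the timestamps itself and sort them, instead of handing the list to ls. This sketch assumes GNU find (for -printf) and demonstrates on a scratch directory with one backdated file:

```shell
# Sketch (GNU find assumed): newest files first, no extra ls invocations.
dir=$(mktemp -d)
touch -d '2001-01-01' "$dir/old.txt"   # backdated for the demo
touch "$dir/new.txt"
find "$dir" -type f -printf '%T@ %p\n' | sort -rn | head -5
```

%T@ prints the modification time as seconds since the epoch, so a numeric reverse sort puts the most recently modified files at the top; on BSD/macOS find, a stat-based pipeline would be needed instead.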