How to set JSHint options on a per-directory basis - jshint

I see that the ability to specify JSHint options on a per-directory basis was added here.
However, it is not clear to me how you actually take advantage of this. What do I do to set JSHint options in a single directory so that the options differ from other directories?

It appears that the change in question actually allows you to specify overriding options on a per-file basis. You can add an overrides property to your config, the value of which should be an object. The keys of this object are treated as regular expressions against which file names are tested. If the name of the file being analysed matches an overrides regex, the options specified for that override will apply to that file.
There's an example of this in the cli.js test file diff in the commit you linked to:
{
  "asi": true,
  "overrides": {
    "bar.js$": {
      "asi": false
    }
  }
}
In that example there is a single override which will apply to any files that match the bar.js$ regular expression (which looks like a bit of an oversight, since the . will match any character and presumably was intended to only match a literal . character).
Having said all that, it doesn't look like the overrides property is going to help you. I think what you actually want is a new .jshintrc file in the directory in question. JSHint looks for that file starting in the directory of the file being analysed and moves up the directory tree until it finds one. Whichever it finds first is the one that gets used. From the docs:
In case of .jshintrc, JSHint will start looking for this file in the same directory as the file that's being linted. If not found, it will move one level up the directory tree all the way up to the filesystem root.
A common use case for this is to have separate JSHint configurations for your application code and your test code. This allows you to define the different environments and globals separately.
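For example (the directory names and options below are illustrative, not from the linked commit), a layout like this lints the test code with its own options, because JSHint uses whichever .jshintrc is closest to the file being linted:

project/
  .jshintrc          (used for src/app.js)
  src/
    app.js
  test/
    .jshintrc        (used for test/app.test.js)
    app.test.js

where test/.jshintrc might be as small as:

{
  "extends": "../.jshintrc",
  "mocha": true,
  "globals": {
    "expect": false
  }
}

The "extends" key, if your JSHint version supports it, lets the test config inherit the parent options rather than replace them entirely.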

Related

Artifactory: download all root files of latest published build, non-recursively

I would like to retrieve from artifactory the files that follow this pattern:
"my-artifactory-repo/project-name/branch-name/1234/*"
Where 1234 is the build number of the latest published build, and where * should not recurse into folders but just match the immediate files. I use build-publish to publish each build under the build name "project-name.branch-name".
I have written a filespec which specifies a "build" but not a build number, causing Artifactory to automatically select the latest published build. This works fine if I simplify the pattern to my-artifactory-repo/project-name/branch-name/*, which results in all artifacts for the latest build being downloaded.
However, where I run into trouble is with the requirement to download only the immediate files instead of recursing into folders. So I tried to use a regex as follows.
{
  "files": [
    {
      "pattern": "my-artifactory-repo/project-name/branch-name/(\\d+/[^/]+)",
      "build": "project-name.branch-name",
      "recursive": "false",
      "regexp": "true",
      "target": "./artifactory-tmp/"
    }
  ]
}
However, each time I try to use a regex, I get zero results.
Some other things I've tried which didn't work:
Not surrounding the regex with parentheses.
Using a simpler regex like ..../build.bat (because I currently have 4-digit build numbers, and I know a build.bat file is there), with or without parentheses.
Not using a regex and instead using the pattern my-artifactory-repo/project-name/branch-name/*/*, but this causes recursion in spite of "recursive": "false".
Using the following search command to retrieve the available build numbers first, so that I can extract the last one and insert it into the filespec. However, there's no way to tell whether the latest build folder is already complete and published, or still currently uploading.
jfrog search --recursive=false --include-dirs=true "my-artifactory-repo/project-name/branch-name/"
The "regexp" option is supported only for the upload command, and not the download command. I believe that the file spec schema under the jfrog-cli documentation shows that. Options that are included in the file specs and are not supported are ignored. This is perhaps something that should be improved.
When the "recursive" option is set to "false" when downloading files, the command indeed avoids recursive directory search. However patterns that include multiple wildcards together with non recursive search, cause some ambiguity in the sense of how deep should the search get. Notice that a single wildcard can include multiple directories. It easy to understand what recursive means when the pattern includes a single wildcard, but with more than one, recursive is practically undefined.
See if you can make your pattern more specific, so that it catches exactly what you need. If this is impossible, consider splitting your pattern into multiple patterns. All patterns can still be included into the same file spec, because "files" inside the spec is a list, which can include multiple pattern and target pairs.
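As a rough sketch of that splitting approach (the file names and extensions below are purely illustrative), each pattern is made specific enough that the non-recursive behaviour is unambiguous:

{
  "files": [
    {
      "pattern": "my-artifactory-repo/project-name/branch-name/*/build.bat",
      "build": "project-name.branch-name",
      "recursive": "false",
      "target": "./artifactory-tmp/"
    },
    {
      "pattern": "my-artifactory-repo/project-name/branch-name/*/*.zip",
      "build": "project-name.branch-name",
      "recursive": "false",
      "target": "./artifactory-tmp/"
    }
  ]
}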

Flow-typed - Generate Libdef

I'm using Flow to help author a JS project. If I want to provide a libdef file to supplement it do I need to create it manually, or am I able to execute some magic command that I'm not aware of yet which will generate the lib def for me?
Something like $ flow-typed doyourmagic would be nice.
EDIT:
Found this https://stackoverflow.com/a/38906578/192999
Which says:
There are two things:
If the file is owned by you (i.e. not a third party lib inside node_modules or such), then you can create a *.js.flow file next to it that documents its exports.
If the file is not owned by you (i.e. third party lib inside node_modules or such), then you can create a libdef file inside flow-typed/name-of-library.js
For .js.flow files
you write the definitions like this:
// @flow
declare module.exports: { ... }
For libdef files you write the definitions like this:
declare module "my-third-party-library" { declare module.exports: {... } }
For my question I fall into the "is owned by you" camp.
I guess I'm confused as to:
How I write these files.
How/where I publish these files to package it up for another project to reference.
Also, why do I need to create the .js.flow file manually? Can this not be magically generated? Perhaps that's the intention going forward but not implemented yet.
I found a nice guide showing how to package Flow code together with the compiled code. So:
You do not have to write your own libdefs; you can use the entire Flow-annotated source code. If you want a definition with only the type declarations, you can look into flow gen-flow-files, although that is still experimental and might fail.
You can package the annotated sources as *.js.flow files, and the Flow checker will automatically pick those up when you import your library.
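A minimal sketch of that packaging step, assuming a typical Babel layout where src/ is compiled into lib/ (the paths and script below are illustrative, not taken from the guide). In package.json the build script simply copies the annotated source next to its compiled counterpart:

"scripts": {
  "build": "babel src/ -d lib/ && cp src/index.js lib/index.js.flow"
}

A consumer that installs the package and runs Flow will resolve the import to lib/index.js but read the types from the lib/index.js.flow file sitting next to it.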

What is proper syntax for wildcard driven filter rules file for rsync?

I am trying to use a filter rules file that uses wildcards for include/exclude lines to drive what gets included and excluded for running rsync to backup a server.
I am using the command line like:
rsync -avzh --filter="merge rsync_backup_filter" / user@backupserver:/backup-data
where rsync_backup_filter is currently something like
+ /var
+ /var/www
+ /var/www/webapp1/***
+ /var/www/webapp2/***
- /var/www/*/repo
+ /home/***
+ /etc/***
- *
but that filter rule syntax above is not quite right, because the results don't match what I intend: the repo subfolder is not excluded as I am trying to achieve.
I want to be able to use something like the last entry to exclude anything not explicitly included (via the wildcard rules) above it.
I want to be able to include certain paths (in the example above, the webapp paths) and indicate that all files and folders below that point should be included, but also be able to add exclusions within that previous wildcard (in the example above I want to exclude the repo subdir in any webapp path), essentially saying "all except" for certain paths.
The above is just a snippet; the longer version of the file would be a full backup strategy for the server from the root on.
What is the correct way to structure the filter rules file, and what is the proper way to use wildcards, to achieve what I've described?
Filter rules constitute an ordered list, and evaluation stops at the first rule that matches. So all repo folders will be excluded if you move the exclude rule up, at least to line 3 of the filter file (before the webapp include rules).
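Applied to the snippet above, the reordered filter file would be:

+ /var
+ /var/www
- /var/www/*/repo
+ /var/www/webapp1/***
+ /var/www/webapp2/***
+ /home/***
+ /etc/***
- *

Now a path such as /var/www/webapp1/repo hits the exclude rule before the webapp1 include rule, so that directory (and everything under it, since rsync does not descend into excluded directories) is left out, while the rest of each webapp is still copied.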

Grunt-init copyAndProcess function: Can I pass in multiple values to 'noProcess' option?

I'm using grunt-init to build a template for a site structure I repeat regularly.
The template.js file uses the init.copyAndProcess function to customize most of the files, but a few of them get corrupted by the file processing (some fonts and image files), and I want to include those files in the 'noProcess' option. If these files all existed in the same directory, I could use the noProcess option as mentioned in the documentation [ See: http://gruntjs.com/project-scaffolding#copying-files ] and pass in a string, and it works:
var files = init.filesToCopy(props);
init.copyAndProcess(files, props, {noProcess: 'app/fonts/**'} );
Unfortunately the files that I need to have no processing performed on are not all in the same directory and I'd like to be able to pass in an array of them, something like the following block of code, but this does not work.
var files = init.filesToCopy(props);
init.copyAndProcess(files, props, {noProcess: ['app/fonts/**', 'app/images/*.png', 'app/images/*.jpg']} );
Any thoughts on how I can have multiple targets for the 'noProcess' option?
As soon as I posted the question, I realized that my proposed code did work. I simply had an invalid path when I'd renamed my 'app' directory to 'dev'.

What is job.get() and job.getBoolean() in mapreduce

I am working on PDF document clustering over Hadoop, so I am learning MapReduce by reading some examples on the internet. The WordCount example has these lines:
job.get("map.input.file")
job.getBoolean()
What is the function of these methods? What exactly is map.input.file and where is it set? Or is it just a name given to the input folder?
Please post an answer if anyone knows.
For the code, see the WordCount 2.0 example: http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
These are job configurations, i.e. the set of configuration properties that is passed on to each mapper and reducer. These configurations consist of well-defined MapReduce/Hadoop properties as well as user-defined ones.
In your case, map.input.file is a pre-defined configuration: for each map task it is set to the path of the input file that task is currently processing (drawn from the paths you set as the input path).
wordcount.skip.patterns, on the other hand, is a custom configuration set according to the user's input, and you can see it being set in run() as follows:
conf.setBoolean("wordcount.skip.patterns", true);
As for when to use get and when to use getBoolean, it should be self-explanatory: whenever you work with a value of type boolean, you use getBoolean and setBoolean to get and set the specific config value, respectively. Similarly, there are specific methods for other data types; if the value is a string, you use get().
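To make the distinction concrete, here is a minimal sketch of a mapper in the old mapred API style that the linked tutorial uses. The property names match the tutorial; the class itself is illustrative, not the tutorial's actual code.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class WordCountMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private boolean caseSensitive = true;
  private String inputFile;

  // configure() receives the JobConf, which carries both the framework-defined
  // and the user-defined configuration values for this task.
  public void configure(JobConf job) {
    // user-defined boolean, set in the driver via conf.setBoolean(...)
    caseSensitive = job.getBoolean("wordcount.case.sensitive", true);
    // framework-defined: the path of the input file this map task is reading
    inputFile = job.get("map.input.file");
  }

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    reporter.setStatus("Processing " + inputFile);
    String line = caseSensitive ? value.toString() : value.toString().toLowerCase();
    for (String word : line.split("\\s+")) {
      if (!word.isEmpty()) {
        output.collect(new Text(word), new IntWritable(1));
      }
    }
  }
}

The driver pairs these reads with the corresponding setters, e.g. conf.setBoolean("wordcount.case.sensitive", false) alongside the conf.setBoolean("wordcount.skip.patterns", true) quoted above; map.input.file needs no setter because the framework fills it in per task.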
