What is the proper syntax for a wildcard-driven filter rules file for rsync?

I am trying to use a filter rules file with wildcard include/exclude lines to drive what gets included and excluded when running rsync to back up a server.
I am using the command line like:
rsync -avzh --filter="merge rsync_backup_filter" / user@backupserver:/backup-data
where rsync_backup_filter is currently something like
+ /var
+ /var/www
+ /var/www/webapp1/***
+ /var/www/webapp2/***
- /var/www/*/repo
+ /home/***
+ /etc/***
- *
but the filter rule syntax above is not quite right, because the results don't match what I intend: the repo subfolder is not excluded as I am trying to achieve.
I want to be able to use something like the last entry to exclude anything not explicitly included (via wildcards) by the rules above.
I want to be able to include certain paths (in the example above, the webapp paths) and indicate that all files and folders below that point should be included, but also to add exclusions within that previous wildcard (in the example above, I want to exclude the repo subdir in any webapp path), essentially saying "all except" for certain paths.
The above is just a snippet; the longer version of the file would be a full backup strategy for the server from the root down.
What is the correct way to structure the filter rules file, and the proper way to use wildcards, to achieve what I've described?

Filter rules form an ordered list, and evaluation stops at the first match. So all repo folders will be excluded if you move the exclude rule up, at least to line #3.
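Applied to the snippet above, a reordered version of the filter file (same paths, with the exclude moved ahead of the webapp includes) would look like:
+ /var
+ /var/www
- /var/www/*/repo
+ /var/www/webapp1/***
+ /var/www/webapp2/***
+ /home/***
+ /etc/***
- *
Because the exclude now precedes the webapp includes, any repo directory under /var/www matches it first and is skipped before the *** includes can claim it.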

Related

Artifactory: download all root files of latest published build, non-recursively

I would like to retrieve from artifactory the files that follow this pattern:
"my-artifactory-repo/project-name/branch-name/1234/*"
Where 1234 is the build number of the latest published build, and where * should not recurse into folders but just match the immediate files. I use build-publish to publish each build under the build name "project-name.branch-name".
I have written a filespec which specifies a "build" but not a build number, causing Artifactory to automatically select the latest published build. This works fine if I simplify the pattern to my-artifactory-repo/project-name/branch-name/*, which results in all artifacts for the latest build being downloaded.
However, where I run into trouble is with the requirement to download only the immediate files instead of recursing into folders. So I tried to use a regex as follows.
{
  "files": [
    {
      "pattern": "my-artifactory-repo/project-name/branch-name/(\\d+/[^/]+)",
      "build": "project-name.branch-name",
      "recursive": "false",
      "regexp": "true",
      "target": "./artifactory-tmp/"
    }
  ]
}
However, each time I try to use a regex, I get zero results.
Some other things I've tried which didn't work:
Not surrounding the regex with parentheses
Using a simpler regex like ..../build.bat (because I currently have 4-digit build numbers, and I know a build.bat file is there), with or without parentheses
Not using a regex and instead using the pattern my-artifactory-repo/project-name/branch-name/*/*; but this causes recursing in spite of "recursive":"false"
Using this search command to retrieve the available build numbers first, so that I can extract the last one and insert it into the filespec; however, there's no way to tell whether the latest build folder is already complete and published, or still currently uploading:
jfrog search --recursive=false --include-dirs=true "my-artifactory-repo/project-name/branch-name/"
The "regexp" option is supported only for the upload command, and not the download command. I believe that the file spec schema under the jfrog-cli documentation shows that. Options that are included in the file specs and are not supported are ignored. This is perhaps something that should be improved.
When the "recursive" option is set to "false" when downloading files, the command indeed avoids recursive directory search. However patterns that include multiple wildcards together with non recursive search, cause some ambiguity in the sense of how deep should the search get. Notice that a single wildcard can include multiple directories. It easy to understand what recursive means when the pattern includes a single wildcard, but with more than one, recursive is practically undefined.
See if you can make your pattern more specific, so that it catches exactly what you need. If this is impossible, consider splitting your pattern into multiple patterns. All patterns can still be included into the same file spec, because "files" inside the spec is a list, which can include multiple pattern and target pairs.
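For example, a single spec can carry several narrower patterns; build.bat comes from the attempts above, while the *.zip pattern is just an illustrative assumption:
{
  "files": [
    {
      "pattern": "my-artifactory-repo/project-name/branch-name/*/build.bat",
      "build": "project-name.branch-name",
      "recursive": "false",
      "target": "./artifactory-tmp/"
    },
    {
      "pattern": "my-artifactory-repo/project-name/branch-name/*/*.zip",
      "build": "project-name.branch-name",
      "recursive": "false",
      "target": "./artifactory-tmp/"
    }
  ]
}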

Is it possible to separate each endpoint into its own file in R Plumber?

I'm looking to separate out my complex API structure so that I have the following layout. I am wondering: is there a way to mount all files under the users/ folder to the same ./api/v1/users route, and the same for projects/? One key point of consideration is that I will have dynamic routes defined within these files too (e.g. ./projects/<project_id>).
In Shiny, to accomplish something like this I'd use source('file.R', local=TRUE), but Plumber doesn't work in the same way.
The reason I am structuring it this way is to reduce complexity during development (as opposed to adding multiple verbs to the same endpoint).
+-- v1/
    +-- users/
    |   +-- GET.R
    |   +-- POST.R
    +-- projects/
        +-- GET.R
        +-- POST.R
I've tested mounting, but unfortunately I cannot mount multiple files from each folder to the same route name. See the example code:
# Each router is built from a single file; mounting a second router
# at the same path does not merge it with the first
v2 <- plumber::Plumber$new("api/v1/projects/GET.R")
root$mount(paste0(ROOT_URI, "/v1"), v2)
v1 <- plumber::Plumber$new("api/v1/projects/POST.R")
root$mount(paste0(ROOT_URI, "/v1"), v1)
(each of the GET.R and POST.R files contains one function named "projects" that handles one of the two verbs)
The answer is: sort of. The 'here' package allows you to import functions defined in files relative to your plumber file. Then, in your plumber file, you fill in your decorator and place your function after it.
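A minimal sketch of that approach (the file layout is from the question; the function names get_projects/post_projects are assumptions):
# plumber.R
library(here)

# GET.R and POST.R now define plain functions rather than full routers
source(here::here("api/v1/projects/GET.R"))   # assumed to define get_projects()
source(here::here("api/v1/projects/POST.R"))  # assumed to define post_projects()

#* @get /api/v1/projects
function() {
  get_projects()
}

#* @post /api/v1/projects
function(req) {
  post_projects(req)
}
Both verbs end up on the same route, and dynamic routes (e.g. /api/v1/projects/<project_id>) can be added the same way with their own decorators.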

Automatic toctree update

I'm not sure if this is possible, but I'm hoping to put multiple .rst files in a directory and, during compilation, have these files automatically inserted into the toctree. How can I go about this?
You can use the glob option, which enables wildcards, like this:
.. toctree::
   :glob:

   *
This adds all other *.rst files in the same directory to the toctree.
Reference: "Use “globbing” in toctree directives"

How to set JSHint options on a per-directory basis

I see that the ability to specify JSHint options on a per-directory basis was added here.
However, it is not clear to me how you actually take advantage of this. What do I do to set JSHint options in a single directory, so that the options differ from other directories?
It appears that the change in question actually allows you to specify overriding options on a per-file basis. You can add an overrides property to your config, the value of which should be an object. The keys of this object are treated as regular expressions against which file names are tested. If the name of the file being analysed matches an overrides regex then the options specified for that override will apply to that file:
There's an example of this in the cli.js test file diff in the commit you linked to:
{
  "asi": true,
  "overrides": {
    "bar.js$": {
      "asi": false
    }
  }
}
In that example there is a single override which will apply to any files that match the bar.js$ regular expression (which looks like a bit of an oversight, since the . will match any character and presumably was intended to only match a literal . character).
Having said all that, it doesn't look like the overrides property is going to help you. I think what you actually want is a new .jshintrc file in the directory in question. JSHint looks for that file starting in the directory of the file being analysed and moves up the directory tree until it finds one. Whichever it finds first is the one that gets used. From the docs:
In case of .jshintrc, JSHint will start looking for this file in the same directory as the file that's being linted. If not found, it will move one level up the directory tree all the way up to the filesystem root.
A common use case for this is to have separate JSHint configurations for your application code and your test code. This allows you to define the different environments and globals separately.
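For example (the option values here are illustrative): a .jshintrc at the project root might configure application code, and a second one in the test directory overrides it for anything under test/.
.jshintrc:
{
  "browser": true,
  "undef": true
}
test/.jshintrc:
{
  "mocha": true,
  "undef": true
}
A file under test/ is linted with test/.jshintrc, because that is the first config JSHint finds when walking up from the file being linted.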

Ignoring a directory using ack's .ackrc

I'm not sure what they're for, but the code I'm working on has a bunch of folders called "save.d"; it looks like they're used for some sort of version control (we also have .svn folders).
How can I update my .ackrc file to ignore those directories by default?
My .ackrc is currently
--type-set=inc=.inc
--ignore-dir=pear
--type-set=tpl=.tpl
Our folder structure can look like:
program/parsers/save.d
program/modules/save.d
Adding another line --ignore-dir=save.d did the trick
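The updated .ackrc:
--type-set=inc=.inc
--ignore-dir=pear
--ignore-dir=save.d
--type-set=tpl=.tpl
--ignore-dir matches the directory name at any depth, so both program/parsers/save.d and program/modules/save.d are covered.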
