I would like to retrieve from Artifactory the files that follow this pattern:
"my-artifactory-repo/project-name/branch-name/1234/*"
Where 1234 is the build number of the latest published build, and where * should match only the immediate files rather than recursing into folders. I use build-publish to publish each build under the build name "project-name.branch-name".
I have written a filespec which specifies a "build" but not a build number, causing Artifactory to automatically select the latest published build. This works fine if I simplify the pattern to my-artifactory-repo/project-name/branch-name/*, which results in all artifacts for the latest build being downloaded.
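For reference, the working spec with the simplified pattern looks roughly like this:
{
  "files": [
    {
      "pattern": "my-artifactory-repo/project-name/branch-name/*",
      "build": "project-name.branch-name",
      "target": "./artifactory-tmp/"
    }
  ]
}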
However, where I run into trouble is with the requirement to download only the immediate files instead of recursing into folders. So I tried to use a regex as follows:
{
  "files": [
    {
      "pattern": "my-artifactory-repo/project-name/branch-name/(\\d+/[^/]+)",
      "build": "project-name.branch-name",
      "recursive": "false",
      "regexp": "true",
      "target": "./artifactory-tmp/"
    }
  ]
}
However, each time I try to use a regex, I get zero results.
Some other things I've tried which didn't work:
Not surrounding the regex with parentheses
Using a simpler regex like ..../build.bat (because I currently have 4-digit build numbers, and I know a build.bat file is there), with or without parentheses
Not using a regex and instead using the pattern my-artifactory-repo/project-name/branch-name/*/*. But this recurses into subfolders in spite of "recursive": "false"
Using this search command to retrieve the available build numbers first, so that I can extract the last one and insert it into the file spec. However, there's no way to tell whether the latest build folder is already complete and published, or still uploading:
jfrog search --recursive=false --include-dirs=true "my-artifactory-repo/project-name/branch-name/"
The "regexp" option is supported only for the upload command, not for the download command. I believe the file spec schema in the jfrog-cli documentation shows that. Options that are included in a file spec but are not supported are ignored. This is perhaps something that should be improved.
When the "recursive" option is set to "false" when downloading files, the command indeed avoids a recursive directory search. However, patterns that include multiple wildcards together with a non-recursive search cause some ambiguity about how deep the search should go. Notice that a single wildcard can span multiple directories. It is easy to understand what recursive means when the pattern includes a single wildcard, but with more than one, recursive is practically undefined.
See if you can make your pattern more specific, so that it catches exactly what you need. If this is impossible, consider splitting your pattern into multiple patterns. All patterns can still be included in the same file spec, because "files" inside the spec is a list, which can include multiple pattern and target pairs.
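For example, a sketch of such a spec, split into a few more specific non-recursive patterns (build.bat is taken from your question; the *.zip pattern is only a placeholder for whatever other top-level files you expect):
{
  "files": [
    {
      "pattern": "my-artifactory-repo/project-name/branch-name/*/build.bat",
      "build": "project-name.branch-name",
      "recursive": "false",
      "target": "./artifactory-tmp/"
    },
    {
      "pattern": "my-artifactory-repo/project-name/branch-name/*/*.zip",
      "build": "project-name.branch-name",
      "recursive": "false",
      "target": "./artifactory-tmp/"
    }
  ]
}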
I'm using Insomnia to make calls to the Artifactory API.
I have the following query, which works really well:
items.find({"repo":{"$eq":"my-repository-virt"}}, {"$and":[{"#my.fileType":{"$match": "jar"}},{"#my.otherType":{"$match": "type2"}},{"#prodVersion":{"$match": "false"}}]})
But I have a problem in that there are duplicate files in some sub-folders with the same properties/filename that I would like to exclude.
I would like to add path to this query, but I can never get any results returned.
The repository is a virtual repository that links to 3 other real repositories.
One of my colleagues can call the following query with the command line tool and get the expected results:
jfrog rt search my-repo-snapshots/myproject/subfolder/jars/*.jar
I have tried adding the path parameter to my query, and I've tried removing everything except the repo and the path, like this:
items.find({"repo":{"$eq":"my-repo-snapshots"}},{"path" : "my-repo-snapshots/myproject/subfolder/jars/*.jar"})
I've tried with just the path, with variations on the path, including/excluding the repo name, using the virtual repo, the actual repo, but I always get a successful search with 0 results returned.
How can I build this query to search the virtual repo, along a certain path, and including certain properties?
EDIT:
I've also tried:
items.find({"repo":{"$eq":"my-repo-snapshots"}},{"path" : {"$match":"my-repo-snapshots/myproject/subfolder/jars/*.jar"}})
Both with the repo in the path and without, I still get 0 results.
OK, I figured it out.
The path part needs to be added inside the {"$and": ...} section where the properties are included, like so:
items.find({"repo":{"$eq":"my-repository-virt"}},
{"$and":[
{"path":{"$match":"path/to/relevant/folders/*"}},
{"#my.fileType":{"$match": "jar"}},
{"#my.otherType":{"$match": "type2"}},
{"#prodVersion":{"$match": "false"}}
]})
The easier fix would have been:
items.find({"repo":{"$eq":"my-repo-snapshots"}},{"path":{"$eq":"myproject/subfolder/jars"}},{"name":{"$match":"*.jar"}})
So the problem with your initial attempt is that the "path" should match the folder (relative to the repo) and the "name" should match the filename.
I am trying to use a filter rules file with wildcards in the include/exclude lines to drive what gets included and excluded when running rsync to back up a server.
I am using the command line like:
rsync -avzh --filter="merge rsync_backup_filter" / user@backupserver:/backup-data
where rsync_backup_filter is currently something like
+ /var
+ /var/www
+ /var/www/webapp1/***
+ /var/www/webapp2/***
- /var/www/*/repo
+ /home/***
+ /etc/***
- *
but that filter rule syntax above is not quite right, because the results don't match what I intend: the repo subfolder is not excluded as I am trying to achieve.
I want to be able to use something like the last entry to exclude anything not explicitly included (via wildcards) by the rules above it.
I want to be able to include certain paths (in the example above, the webapp paths), indicating that all files and folders below that point should be included, but also be able to add exclusions within that included subtree (in the example above, I want to exclude the repo subdir in any webapp path), essentially saying "all except" in certain paths.
The above is just a snippet; the longer version of the file would be a full backup strategy for the server from the root down.
What is the correct way to structure the filter rules file, and the proper way to use wildcards, to achieve what I've described?
Filter rules form an ordered list, and evaluation continues until the first match occurs. So all repo folders will be excluded if you move the exclude rule up, to at least line 3 (above the webapp include rules).
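For illustration, the filter file from the question with the exclude rule moved above the *** include rules:
+ /var
+ /var/www
- /var/www/*/repo
+ /var/www/webapp1/***
+ /var/www/webapp2/***
+ /home/***
+ /etc/***
- *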
I see that the ability to specify JSHint options on a per-directory basis was added here.
However, it is not clear to me how you actually take advantage of this. What do I do to set JSHint options in a single directory, so that the options differ from other directories?
It appears that the change in question actually allows you to specify overriding options on a per-file basis. You can add an overrides property to your config, the value of which should be an object. The keys of this object are treated as regular expressions against which file names are tested. If the name of the file being analysed matches an overrides regex then the options specified for that override will apply to that file:
There's an example of this in the cli.js test file diff in the commit you linked to:
{
  "asi": true,
  "overrides": {
    "bar.js$": {
      "asi": false
    }
  }
}
In that example there is a single override which will apply to any files that match the bar.js$ regular expression (which looks like a bit of an oversight, since the . will match any character and presumably was intended to only match a literal . character).
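If a literal dot was intended, the key could simply be escaped in the JSON config, for example:
"bar\\.js$": {
  "asi": false
}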
Having said all that, it doesn't look like the overrides property is going to help you. I think what you actually want is a new .jshintrc file in the directory in question. JSHint looks for that file starting in the directory of the file being analysed and moves up the directory tree until it finds one. Whichever it finds first is the one that gets used. From the docs:
In case of .jshintrc, JSHint will start looking for this file in the same directory as the file that's being linted. If not found, it will move one level up the directory tree all the way up to the filesystem root.
A common use case for this is to have separate JSHint configurations for your application code and your test code. This allows you to define the different environments and globals separately.
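For example, a hypothetical layout (the option values here are only illustrative):
project/.jshintrc          (used for application code)
{
  "node": true,
  "undef": true
}

project/test/.jshintrc     (found first for files under test/, so it wins there)
{
  "mocha": true,
  "expr": true
}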
I'm looking to use the Artifactory property search:
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-ArtifactSearch%28QuickSearch%29
Currently this will pull JSON listing any artifact that matches my properties:
"results" : [
  {
    "uri": "http://localhost:8080/artifactory/api/storage/libs-release-local/org/acme/lib/ver/lib-ver.pom"
  },
  {
    "uri": "http://localhost:8080/artifactory/api/storage/libs-release-local/org/acme/lib/ver2/lib-ver2.pom"
  }
]
I need to be able to filter the artifacts I get back, as I'm only interested in a certain classifier. The GAVC search has this with &c=classifier.
I can do it in code if this isn't possible via the interface.
Any help appreciated.
Since the release of AQL in Artifactory 3.5, it is now the official and preferred way to find artifacts.
Here's an example similar to your needs:
items.find
(
{
"$and":[
{"#license":{"$eq":"GPL"}},
{"#version":{"$match":"1.1.*"}},
{"name":{"$match":"*.jar"}}
]
}
)
To run the query in Artifactory, copy the query to a file and name it aql.query.
Run the following command from the directory that contains the aql.query file:
curl -X POST -uUSER:PASSWORD 'http://HOST:PORT/artifactory/api/search/aql' -Taql.query
Don't forget to replace the placeholders (USER, PASSWORD, HOST and PORT) with real values.
In the example:
The first two criteria filter items by properties.
The third criterion filters items by the artifact name (in our case the artifact name should end with .jar).
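For the classifier in your question, the same idea should work by tightening the name pattern, for example (assuming the classifier appears as a filename suffix, as in the standard Maven layout):
{"name":{"$match":"*-sources.jar"}}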
For more details on how to write AQL queries, see the AQL documentation.
Old answer
Currently you can't combine the property search with the GAVC search.
So you have two options:
Executing one of them (whichever gives you more precise results) and then filtering the JSON list on the client side with a script
Writing an execution user plugin that executes the search by using the Searches service and then filters the results on the server side.
Of course, the latter is preferable.
I would like to move all my output files to a custom location: a Run directory created based on the date and time at run time. The datetime-based output folder is created in the test setup.
I have a function "Process_Output_files" which moves the files to the Run folder (Run1, Run2, Run3 folders).
I have tried using the -d argument and calling "Process_Output_files" as the suite teardown to move the output files to the respective Run directory.
But I get the following error: "The process cannot access the file because it is being used by another process". I know this is because Robot Framework (RIDE) is currently using these files.
If I don't use the -d argument, the output files are saved in temp folders:
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\output.xml
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\log.html
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\report.html
My question is: is there a way to move the files to a custom location at run time, within Robot Framework?
You can use the following syntax in RIDE (Arguments:) to create the output in new folders dynamically:
--outputdir C:/AutomationLogs/%date:~-4,4%%date:~-10,2%%date:~-7,2% --timestampoutputs
The above syntax gives you the output in the following folders:
Output: C:\AutomationLogs\20151125\output-20151125-155017.xml
Log: C:\AutomationLogs\20151125\log-20151125-155017.html
Report: C:\AutomationLogs\20151125\report-20151125-155017.html
Hope this helps :)
I understand the end result you want is to have your output files in their custom folders. If this is your desire, it can be accomplished at runtime and you won't have to move them as part of your post processing. This will not work in RIDE, unfortunately, since the folder structure is created dynamically. I have two options for you.
Option 1: Use a script to kick off your tests
RIDE is awesome, but in my humble opinion, one shouldn't be using it to run one's tests, only to build and debug them. Scripts are far more powerful and flexible.
Assuming you have a test, test2.txt, which you wish to run, the script you use to do this could be something like:
from time import gmtime, strftime
import os
#strftime returns string representations of a date-time tuple.
#gmtime returns the date-time tuple representing greenwich mean time
dts=strftime("%Y.%m.%d.%H.%M.%S", gmtime())
cmd="pybot -d Run%s test2"%(dts,)
os.system(cmd)
As an aside, if you do intend to do post-processing of your files using rebot, be aware that you may not need to create intermediate log and report files. The output.xml files contain everything you need, so if you don't want to create superfluous files, use --log NONE --report NONE.
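For example, the cmd line in the script above could become the following, with rebot used afterwards to build the log and report from the collected output.xml files (the Results folder name is only illustrative):
cmd="pybot -d Run%s --log NONE --report NONE test2"%(dts,)
# later, combine the collected results into a single log/report, e.g.:
#   rebot --outputdir Results Run1/output.xml Run2/output.xml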
Option 2: Use a listener to do post processing
A listener is a program you write that responds to events (x_start, x_end, etc.). The close() event is akin to the teardown function and is the last thing called. So, assuming you have a function moveFiles(), you simply need to create a listener class (myListener), define the close() method to call your moveFiles() function, and tell your test run to report to the listener with the argument --listener myListener.
This option should be compatible with RIDE though I admit I have never tried to use listeners with the IDE.
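A minimal sketch of such a listener, following the description above (the folder and file names are placeholders, and moveFiles is reduced to relocating the three standard output files from the current working directory):
# myListener.py
import os
import shutil
import time

RUN_DIR = time.strftime("Run%Y.%m.%d.%H.%M.%S")

def moveFiles(run_dir):
    # move the generated output files into the run folder
    if not os.path.isdir(run_dir):
        os.makedirs(run_dir)
    for name in ("output.xml", "log.html", "report.html"):
        if os.path.exists(name):
            shutil.move(name, os.path.join(run_dir, name))

class myListener(object):
    ROBOT_LISTENER_API_VERSION = 2

    def close(self):
        # close() fires at the very end of the run, so the files are no longer locked
        moveFiles(RUN_DIR)

You would then kick off the run with something like pybot --listener myListener test2.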
You can also write a custom run script that handles moving the files after the test execution. At that point the files are no longer in use by pybot.