How to fetch the SHA-256 value from a manifest.json file in JFrog Artifactory

I am trying to fetch the SHA-256 value from a manifest.json file but am unable to get it using AQL.
Below is the command I am using:
ubuntu@test:~$ curl -sS -u sumkumar:$pw -XPOST -k -H "Content-type: text/plain" https://<URL>/artifactory/api/search/aql -d 'items.find({"repo":"xyz"},{"path":"a/b/c"}).include("*")'
{
  "results" : [ {
    "repo" : "xyz",
    "path" : "a/b/c",
    "name" : "manifest.json",
    "type" : "file",
    "size" : 1579,
    "created" : "2018-03-13T11:58:33.771Z",
    "created_by" : "uex-sp-cd",
    "modified" : "2018-03-15T14:17:38.299Z",
    "modified_by" : "uex-sp-cd",
    "updated" : "2018-03-15T14:17:38.299Z",
    "depth" : 4,
    "actual_md5" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "actual_sha1" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "original_md5" : "NO_ORIG",
    "original_sha1" : "NO_ORIG",
    "virtual_repos" : [ ]
  },{
However, if you look at the original manifest.json file in the UI, it contains the SHA-256 value.

According to the AQL documentation, the item entity has the sha256 field. But it mentions the following note:
SHA-256 is supported from Artifactory version 5.5
You can only do an AQL search on an Artifact that has been deployed to Artifactory version 5.5 or above, or if you have migrated your database as described in SHA-256 Support after upgrading Artifactory to version 5.5 and above.
Please verify your Artifactory fits the requirements above.
In addition, assuming this is a Docker manifest.json, it specifically should also have the SHA256 in a property named docker.manifest.digest (and maybe also a property named sha256). To get the property values you can add "property.*" to the include(..) part of the query.
For example:
items.find({"repo":"xyz","path":"a/b/c","name":"manifest.json"})
.include("repo","path","name","sha256","property.*")
This will return something like:
{
  "results" : [ {
    "repo" : "xyz",
    "path" : "a/b/c",
    "name" : "manifest.json",
    "sha256" : "34cb6f8e1e1aca...",
    "properties" : [ {
      "key" : "docker.manifest.digest",
      "value" : "sha256:34cb6f8e1e1aca..."
    },{
      "key" : "sha256",
      "value" : "34cb6f8e1e1aca..."
    }
    ...
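To pull just the digest out of that response on the command line, something like the following should work (a sketch assuming jq is installed; the URL and credentials are the placeholders from the question):
# run the AQL query and extract the sha256 field of the first result
curl -sS -u sumkumar:$pw -XPOST -k -H "Content-type: text/plain" \
  https://<URL>/artifactory/api/search/aql \
  -d 'items.find({"repo":"xyz","path":"a/b/c","name":"manifest.json"}).include("repo","path","name","sha256","property.*")' \
  | jq -r '.results[0].sha256'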

Related

Kibana and OpenSearch incompatible versions

I'm trying to integrate Kibana with my OpenSearch (is it possible?). Unfortunately, I get a version error.
Is there any way to use OpenSearch in Kibana?
These are the versions I get via curl:
curl -k -u "admin:PASSWORD" "https://IP:9200/"
{
  "name" : "node-1",
  "cluster_name" : "cluster",
  "cluster_uuid" : "he6gqhl2S-6dlVv6dyPOEA",
  "version" : {
    "number" : "7.10.2",
    "build_type" : "rpm",
    "build_hash" : "e505b10357c03ae8d26d675172402f2f2144ef0f",
    "build_date" : "2022-01-14T03:38:06.881862Z",
    "build_snapshot" : false,
    "lucene_version" : "8.10.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}
Kibana and Elasticsearch are very tightly coupled, so Kibana and OpenSearch aren't really supposed to work together; I doubt they will (or it would be very brittle).
If you want to have Kibana, why not use Elasticsearch under the hood?

Using jq to loop over objects and return an iterated value

I have an interesting TODO that I'd like some eyes on. I'm using journalctl to grab system journal entries and output them as JSON. journalctl outputs JSON objects separated by newlines.
Here's a sample of a JSON object outputted from journalctl:
{ "__CURSOR" : "s=bd3afe956aec45d89cf3939e839c3647;i=3e084;b=6dd8a3060bdb448d848f4694339c6e94;m=5d0fd53a6;t=5c797ae76d38f;x=6cdb742ba83df8f5", "__REALTIME_TIMES
TAMP" : "1626829164613612", "__MONOTONIC_TIMESTAMP" : "24980054950", "_BOOT_ID" : "6dd8a3030bdb446d858f4694337c6e94", "PRIORITY" : "6", "SYSLOG_FACILITY" : "3",
"CODE_FILE" : "../src/core/job.c", "CODE_LINE" : "845", "CODE_FUNC" :
"job_log_status_message", "SYSLOG_IDENTIFIER" : "systemd", "JOB_TYPE" : "start", "JOB_RES
ULT" : "done", "MESSAGE_ID" : "39f53479d3a045ac8e11786248231fbf", "_TRANSPORT" : "journal", "_PID" : "11282", "_UID" : "1162", "_GID" : "1162", "_COMM" : "syste
md", "_EXE" : "/lib/systemd/systemd", "_CMDLINE" : "/lib/systemd/systemd --user", "_CAP_EFFECTIVE" : "0", "_SELINUX_CONTEXT" : "unconfined\n", "_AUDIT_SESSION"
: "974", "_AUDIT_LOGINUID" : "1162", "_SYSTEMD_CGROUP" : "/user.slice/user-1162.slice/user#1162.service/init.scope", "_SYSTEMD_OWNER_UID" : "1162", "_SYSTEMD_UN
IT" : "user#1162.service", "_SYSTEMD_USER_UNIT" : "init.scope", "_SYSTEMD_SLICE" : "user-1162.slice", "_SYSTEMD_USER_SLICE" : "-.slice", "_SYSTEMD_INVOCATION_ID
" : "1f72568e79c34f14a525f98ce6a151c2", "_MACHINE_ID" : "32c9faf3dd90422881ce03690ebf0015", "_HOSTNAME" : "ip-192-168-22-27", "MESSAGE" : "Listening on port.", "USER_UNIT" : "dirmngr.socket", "USER_INVOCATION_ID" :
"7c1e914aada947cd80a45d68275751dc", "_SOURCE_REALTIME_TIMESTAMP" :
"1626829165607533" }
I'm using --after-cursor to get a subset of the journal. There is no --before-cursor option as far as I know, so I'm trying to find a way to stop at a specific cursor and search the JSON objects between the first and last cursor.
At the moment I'm using the following snippet (sloppy by my own admission) to search the journal after a cursor and count the objects where a match is found.
journalctl -u my-service --after-cursor="$FIRST_CURSOR" -o json | jq -n 'inputs|select(.MESSAGE|test(".*My search string .*"))' | jq length | wc -l
I'd like to do this more intelligently with possibly an if/else statement but I'm a jq novice.
A jq-only solution would be:
[inputs|select(.MESSAGE|test("My search string")) | length] | length
Notice there is no need for the initial or final .* in the regex
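Since the filter relies on inputs, it has to be run with jq -n; otherwise the first JSON object is consumed before inputs is evaluated and silently skipped. Wired into the original pipeline, a sketch:
journalctl -u my-service --after-cursor="$FIRST_CURSOR" -o json \
  | jq -n '[inputs | select(.MESSAGE | test("My search string")) | length] | length'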
For future reference, please follow the "mcve" guidelines at http://stackoverflow.com/help/mcve.
In particular, your JSON got mangled while copying and pasting into SO, and an actual example, with the expected output, would be appreciated.

Providing header dependency in meson

I am using the Meson build system 0.49.0 on Ubuntu 18.04. My project has some IDL files, and I want to include header files from another folder. How do I provide include_directories in Meson?
idl_compiler = find_program('widl')
idl_generator = generator(idl_compiler,
    output : [ '#BASENAME#.h' ],
    arguments : [ '-h', '-o', '#OUTPUT#', '#INPUT#' ])
idl_files = [ .... ]
header_files = idl_generator.process(idl_files)
You can add include directories directly to generator() arguments:
idl_generator = generator(idl_compiler,
    output : '#BASENAME#.h',
    arguments : [
        '-h',
        '-o', '#OUTPUT#',
        '-I', '#0#/src/'.format(meson.current_source_dir()),
        '#INPUT#' ])
I added the -I option which, according to the docs, can be used to:
Add a header search directory to path. Multiple search directories are allowed.
and I used Meson's string formatting together with the meson object's current_source_dir() method, which
returns a string with the path to the current source directory.
Note also that the output argument is a string, not a list.
Or, for example, if you have several of them and need to use them as a dependency later, you can have an array:
my_inc_dirs = ['.', 'include/xxx', 'include']
generate the arguments for the generator:
idl_gen_args = [ '-h', '-o', '#OUTPUT#', '#INPUT#' ]
foreach dir : my_inc_dirs
    idl_gen_args += ['-I', '#0#/#1#'.format(meson.current_source_dir(), dir)]
endforeach
idl_generator = generator(idl_compiler,
    output : '#BASENAME#.h',
    arguments : idl_gen_args)
and use them later for the dependency:
my_exe = executable(
    ...
    include_directories : [my_inc_dirs],
    ...)
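For completeness, a minimal sketch of how the generated headers and the include directories could come together in a target (the target name and the main.c source are placeholders):
header_files = idl_generator.process(idl_files)
# listing header_files among the sources makes Meson run the generator before compiling
my_exe = executable('my_exe',
    ['main.c', header_files],
    include_directories : [my_inc_dirs])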

JSON DDL request failed - pipeline rowset is missing

Objective
Refresh a partition using a query (override). Using these as a guide:
https://www.sqlbi.com/articles/using-process-add-in-tabular-models/
https://gist.github.com/dgosbell/a7bc9fe9ff5a99fdb4df5819b8760217#file-refresh-with-override-example-txt
Apparently the MS example is not correct: https://learn.microsoft.com/en-us/bi-reference/tmsl/refresh-command-tmsl#examples
TMSL Script
{
  "refresh" : {
    "type" : "add",
    "objects" : [{
      "database" : "dbname",
      "table" : "tblname"
    }],
    "overrides" : [{
      "partitions" : [{
        "originalObject" : {
          "database" : "dbname",
          "table" : "tblname",
          "partition" : "partname"
        },
        "source" : {
          "query" : "SELECT * FROM source.view WHERE date_field = '2014-12-06'"
        }
      }]
    }]
  }
}
Error Message
The JSON DDL request failed with the following error: Failed to execute XMLA. Error returned: 'The Process command for partition 'partname' in table 'tblname' cannot be executed because the pipeline rowset is missing.
'..
Technical Details:
RootActivityId: 89a6f9ac-e5d4-4eaa-b049-455190039b4b
Date (UTC): 6/28/2019 3:20:36 PM
0: PFError::SetLastError() line 2158 + 0x0 (sql\picasso\engine\src\pf\eh\pferror.cpp)
1: PFSetLastError() line 2906 + 0x0 (sql\picasso\engine\src\pf\eh\pferror.cpp)
2: ConvertExceptionsToPFResult<<lambda_764f81a97ea803a6bb1663c7971ce151> >() line 424 + 0x34 (sql\picasso\engine\src\pf\kernel\shared\pfshmacros.inl)
3: PFSetLastErrorExTag() line 3461 + 0x2e (sql\picasso\engine\src\pf\eh\pferror.cpp)
4: 0x00007FFAB599CC7E (symbolic name unavailable)
Other Info
Executed on SSMS directly and in Powershell (via Runbook) with same error message.
Question
What does this error message mean exactly? (It is very hard to find helpful documentation.) Or, is there an alternative solution to refreshing a partition using a query override?
You can only use either a Query or an M partition source.
If you are using an M partition source, the syntax is:
"source":{
    "type":"m",
    "expression":"…"
}
And if you are using a Query partition source, the syntax is:
"source":{
    "type":"query",
    "query":"…",
    "dataSource":"…"
}
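Applied to the script from the question, the override's source block would then become something like this (a sketch; the dataSource name is hypothetical and must match a data source already defined in the model):
"source" : {
    "type" : "query",
    "query" : "SELECT * FROM source.view WHERE date_field = '2014-12-06'",
    "dataSource" : "MyDataSource"
}
With "type" and "dataSource" supplied, the engine knows which connection to build the rowset from, which appears to be what the "pipeline rowset is missing" error is complaining about.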

How to query a Document by ObjectId with rmongodb

Here is what you get in the mongo shell:
db.col.find(ObjectId("5571849db1969e0a6eb32731")).pretty()
{
  "_id" : ObjectId("5571849db1969e0a6eb32731"),
  "name" : "Some name",
  "logo" : "Some logo",
  "users" : [
    ObjectId("5571830031c7fc341bc2e105"),
    ObjectId("5571830031c7fc341bc2e107")
  ],
  "admins" : [ ],
  "__v" : 0,
  "steps" : 5782
}
Here is what I get in rmongodb:
myResult <- mongo.find(Connexion, "db.col", query='ObjectId("5571849db1969e0a6eb32731")')
#Error in mongo.bson.from.JSON(arg) :
# Not a valid JSON content: ObjectId("5571849db1969e0a6eb32731")
So, how do I do it right?
Just in case: I had a look at this already, but mongolite doesn't support authentication (which is therefore a no-go), and I don't understand what to do with the second answer. If I try
result <- mongo.find(Connexion, "db.col", query=mongo.oid.from.string("5571849db1969e0a6eb32731"))
I get
# Error in mongo.bson.from.argument(query) : Can't convert to bson: argument should be one of 'list', 'mongo.bson' or 'character'(valid JSON)
This should work:
result <- mongo.find(Connexion, "db.col", query=list('_id' = mongo.oid.from.string("5571849db1969e0a6eb32731")))
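mongo.find returns a cursor rather than the document itself, so the result still has to be iterated, roughly like this (a sketch; the field name follows the document shown above):
# iterate the cursor and convert each BSON document to an R list
while (mongo.cursor.next(result)) {
  doc <- mongo.bson.to.list(mongo.cursor.value(result))
  print(doc$name)  # e.g. "Some name"
}
mongo.cursor.destroy(result)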
