I have built a parcel and a CSD that go together and work fine when deployed to a cluster.
However, when I stop the service, a child process started by a startup script keeps running in the background.
I have tried many things, but I went back to something more "brutal".
CSD extract
"startRunner" : {
"program" : "scripts/rexster.sh",
"args" : [ "start" ],
"environmentVariables" : {
"CONF_FILE" : "${conf_file}",
"REXSTER_PORT" : "${rexster_port}",
"HBASE_ZK" : "${hbase_zk_quorum}",
"REXSTER_SHUTDOWN_PORT" : "${rexster_shutdown_port}",
"HBASE_TABLE_NAME" : "${hbase_tablename}",
"MEM_XMX" : "${memory_xmx}",
"MEM_XMS" : "${memory_xms}"
}
},
"stopRunner" : {
"relevantRoleTypes" : ["TITAN_REXTER_SERVER"],
"runner" : {
"program" : "scripts/rexster.sh",
"args" : [ "stop" ]
}
}
}
scripts/rexster.sh extract
...
;;
(stop)
echo "Stopping rexster"
exec kill -9 `ps aux | grep java | grep titan | awk '{print $2}'`
# exec stopRexster.sh $TITAN_HOME
;;
(*)
echo "Don't understand [$CMD]"
;;
esac
But the process keeps on running in the background:
/usr/java/jdk1.7.0_67-cloudera/bin/java -server -Xms128m -Xmx512m -Dtitan.logdir=../log com.tinkerpop.rexster.Application -s -c /opt/cloudera/parcels/TITAN-1.0/lib/titan-0.5.2-hadoop2-CDH5.3/conf/rexster-hbase.xml
The stopRunner wasn't placed correctly.
It must be defined per role, after the role definition.
<...ROLE...>}]
,
"stopRunner" : {
"masterRole":"TITAN_REXTER_SERVER",
"relevantRoleTypes" : ["TITAN_REXTER_SERVER"],
"runner" : {
"program" : "scripts/rexster.sh",
"args" : [ "stop" ],
"environmentVariables" : {
"CONF_FILE" : "${conf_file}",
"REXSTER_PORT" : "${rexster_port}",
"HBASE_ZK" : "${hbase_zk_quorum}",
"REXSTER_SHUTDOWN_PORT" : "${rexster_shutdown_port}",
"HBASE_TABLE_NAME" : "${hbase_tablename}"
}
}
}
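As an aside, the ps | grep pipeline in the stop script can match unrelated processes (and, classically, the grep itself). A slightly safer sketch uses pgrep -f, which matches against the full command line; the 'java.*titan' pattern is an assumption about how the Rexster JVM is launched:

```shell
# Find PIDs whose full command line matches "java...titan" and kill them;
# do nothing (rather than kill indiscriminately) when nothing matches.
pids=$(pgrep -f 'java.*titan')
if [ -n "$pids" ]; then
  kill -9 $pids
else
  echo "no matching process"
fi
```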
I am trying to fetch the SHA-256 value from the manifest.json file, but I am unable to get it using AQL.
Below is the command I am using:
ubuntu@test:~$ curl -sS -u sumkumar:$pw -XPOST -k -H "Content-type: text/plain" https://<URL>/artifactory/api/search/aql -d 'items.find({"repo":"xyz"},{"path":"a/b/c"}).include("*")'
{
"results" : [ {
"repo" : "xyz",
"path" : "a/b/c",
"name" : "manifest.json",
"type" : "file",
"size" : 1579,
"created" : "2018-03-13T11:58:33.771Z",
"created_by" : "uex-sp-cd",
"modified" : "2018-03-15T14:17:38.299Z",
"modified_by" : "uex-sp-cd",
"updated" : "2018-03-15T14:17:38.299Z",
"depth" : 4,
"actual_md5" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"actual_sha1" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"original_md5" : "NO_ORIG",
"original_sha1" : "NO_ORIG",
"virtual_repos" : [ ]
},{
However, if you look at the original manifest.json file in the UI, it contains the SHA-256 value.
According to the AQL documentation, the item entity has the sha256 field. But it mentions the following note:
SHA-256 is supported from Artifactory version 5.5
You can only do an AQL search on an Artifact that has been deployed to Artifactory version 5.5 or above, or if you have migrated your database as described in SHA-256 Support after upgrading Artifactory to version 5.5 and above.
Please verify your Artifactory fits the requirements above.
In addition, assuming this is a Docker manifest.json, it specifically should also have the SHA256 in a property named docker.manifest.digest (and maybe also a property named sha256). To get the property values you can add "property.*" to the include(..) part of the query.
For example:
items.find({"repo":"xyz","path":"a/b/c","name":"manifest.json"})
.include("repo","path","name","sha256","property.*")
This query will return something like:
{
"results" : [ {
"repo" : "xyz",
"path" : "a/b/c",
"name" : "manifest.json",
"sha256" : "34cb6f8e1e1aca...",
"properties" : [ {
"key" : "docker.manifest.digest",
"value" : "sha256:34cb6f8e1e1aca..."
},{
"key" : "sha256",
"value" : "34cb6f8e1e1aca..."
}
...
I have an interesting TODO that I'd like some eyes on. I'm using journalctl to grab system journal entries and output them as JSON. journalctl outputs JSON objects separated by newlines.
Here's a sample of a JSON object outputted from journalctl:
{ "__CURSOR" : "s=bd3afe956aec45d89cf3939e839c3647;i=3e084;b=6dd8a3060bdb448d848f4694339c6e94;m=5d0fd53a6;t=5c797ae76d38f;x=6cdb742ba83df8f5",
"__REALTIME_TIMESTAMP" : "1626829164613612", "__MONOTONIC_TIMESTAMP" : "24980054950", "_BOOT_ID" : "6dd8a3030bdb446d858f4694337c6e94",
"PRIORITY" : "6", "SYSLOG_FACILITY" : "3", "CODE_FILE" : "../src/core/job.c", "CODE_LINE" : "845", "CODE_FUNC" : "job_log_status_message",
"SYSLOG_IDENTIFIER" : "systemd", "JOB_TYPE" : "start", "JOB_RESULT" : "done", "MESSAGE_ID" : "39f53479d3a045ac8e11786248231fbf",
"_TRANSPORT" : "journal", "_PID" : "11282", "_UID" : "1162", "_GID" : "1162", "_COMM" : "systemd", "_EXE" : "/lib/systemd/systemd",
"_CMDLINE" : "/lib/systemd/systemd --user", "_CAP_EFFECTIVE" : "0", "_SELINUX_CONTEXT" : "unconfined\n", "_AUDIT_SESSION" : "974",
"_AUDIT_LOGINUID" : "1162", "_SYSTEMD_CGROUP" : "/user.slice/user-1162.slice/user@1162.service/init.scope", "_SYSTEMD_OWNER_UID" : "1162",
"_SYSTEMD_UNIT" : "user@1162.service", "_SYSTEMD_USER_UNIT" : "init.scope", "_SYSTEMD_SLICE" : "user-1162.slice", "_SYSTEMD_USER_SLICE" : "-.slice",
"_SYSTEMD_INVOCATION_ID" : "1f72568e79c34f14a525f98ce6a151c2", "_MACHINE_ID" : "32c9faf3dd90422881ce03690ebf0015", "_HOSTNAME" : "ip-192-168-22-27",
"MESSAGE" : "Listening on port.", "USER_UNIT" : "dirmngr.socket", "USER_INVOCATION_ID" : "7c1e914aada947cd80a45d68275751dc",
"_SOURCE_REALTIME_TIMESTAMP" : "1626829165607533" }
I'm using --after-cursor to get a subset of the journal. There is no --before-cursor option as far as I know, so I'm trying to find a way to stop at a specific cursor and search the JSON objects between the first and last cursors.
At the moment I'm using the following snippet (sloppy, by my own admission) to search the journal after a cursor and count the objects where a match is found.
journalctl -u my-service --after-cursor="$FIRST_CURSOR" -o json | jq -n 'inputs|select(.MESSAGE|test(".*My search string .*"))' | jq length | wc -l
I'd like to do this more intelligently, possibly with an if/else statement, but I'm a jq novice.
A jq-only solution would be:
[inputs|select(.MESSAGE|test("My search string")) | length] | length
Notice there is no need for the initial or final .* in the regex.
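To see the filter in action without journalctl, you can feed it newline-delimited JSON directly; the sample messages below are made up:

```shell
# Three journal-style JSON objects, two of which match the search string.
printf '%s\n' \
  '{"MESSAGE":"My search string here"}' \
  '{"MESSAGE":"something else"}' \
  '{"MESSAGE":"My search string again"}' |
jq -n '[inputs|select(.MESSAGE|test("My search string")) | length] | length'
# prints 2
```

The -n flag is required so that jq's inputs reads every object from stdin instead of consuming the first one implicitly.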
For future reference, please follow the "mcve" guidelines at http://stackoverflow.com/help/mcve .
In particular, your JSON got mangled while copying-and-pasting into SO; and an actual example, with expected output, would be appreciated.
When I try to use httpoison to query an elasticsearch server like
iex(1)> HTTPoison.get "http://localhost:9200"
I get
{:error, %HTTPoison.Error{id: nil, reason: :econnrefused}}.
If I do
curl -XGET "http://localhost:9200"
I get
{
"name" : "es01",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "Wik-EpMkQ8ummJE6ctNAOg",
"version" : {
"number" : "7.0.1",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "e4efcb5",
"build_date" : "2019-04-29T12:56:03.145736Z",
"build_snapshot" : false,
"lucene_version" : "8.0.0",
"minimum_wire_compatibility_version" : "6.7.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Does anyone know what this behavior is due to and how to fix it?
P.S.: Changing localhost to 127.0.0.1 does not solve the problem.
Here's my setup:
elasticsearch Version: 7.0.1
{:httpoison, "~> 1.5"} #=> mix.lock shows version 1.5.1 was installed
curl results:
$ curl -XGET "http://localhost:9200"
{
"name" : "My-MacBook-Pro.local",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "vEFl3B5TTYaBxPhQFuXPyQ",
"version" : {
"number" : "7.0.1",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "e4efcb5",
"build_date" : "2019-04-29T12:56:03.145736Z",
"build_snapshot" : false,
"lucene_version" : "8.0.0",
"minimum_wire_compatibility_version" : "6.7.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
HTTPoison results:
$ iex -S mix
Erlang/OTP 20 [erts-9.3] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:10] [hipe] [kernel-poll:false]
===> Compiling parse_trans
===> Compiling mimerl
===> Compiling metrics
===> Compiling unicode_util_compat
===> Compiling idna
==> ssl_verify_fun
Compiling 7 files (.erl)
Generated ssl_verify_fun app
===> Compiling certifi
===> Compiling hackney
==> httpoison
Compiling 3 files (.ex)
Generated httpoison app
==> hello
Compiling 15 files (.ex)
Generated hello app
Interactive Elixir (1.6.6) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> HTTPoison.get "http://localhost:9200"
{:ok,
%HTTPoison.Response{
body: "{\n \"name\" : \"My-MacBook-Pro.local\",\n \"cluster_name\" :
\"elasticsearch\",\n \"cluster_uuid\" : \"vEFl3B5TTYaBxPhQFuXPyQ\",\n
\"version\" : {\n \"number\" : \"7.0.1\",\n \"build_flavor\" :
\"default\",\n \"build_type\" : \"tar\",\n \"build_hash\" :
\"e4efcb5\",\n \"build_date\" : \"2019-04-29T12:56:03.145736Z\",\n
\"build_snapshot\" : false,\n \"lucene_version\" : \"8.0.0\",\n
\"minimum_wire_compatibility_version\" : \"6.7.0\",\n
\"minimum_index_compatibility_version\" : \"6.0.0-beta1\"\n },\n
\"tagline\" : \"You Know, for Search\"\n}\n",
headers: [
{"content-type", "application/json; charset=UTF-8"},
{"content-length", "522"}
],
request: %HTTPoison.Request{
body: "",
headers: [],
method: :get,
options: [],
params: %{},
url: "http://localhost:9200"
},
request_url: "http://localhost:9200",
status_code: 200
}}
iex(2)>
Next, I stopped the elasticsearch server, then I ran the HTTPoison request again:
iex(2)> HTTPoison.get "http://localhost:9200"
{:error, %HTTPoison.Error{id: nil, reason: :econnrefused}}
I got similar results for the curl request:
$ curl -XGET "http://localhost:9200"
curl: (7) Failed to connect to localhost port 9200: Connection refused
What happens if you issue two curl requests in a row? Do they both succeed? Try issuing the HTTPoison request first, then the curl request. Which one fails? Try the reverse order. Same results?
I'm almost positive you have the same problem that I did. Check to make sure you are not forcing IPv6 for localhost in your /etc/hosts file.
If you have something like this:
::1 localhost
...get rid of it and HTTPoison should work again.
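A quick, read-only way to check (nothing here edits the file):

```shell
# Show how localhost is mapped. If "::1 localhost" appears without a
# "127.0.0.1 localhost" line, an IPv4-only listener will refuse connections
# from clients that resolve localhost to the IPv6 loopback first.
grep -E '^[[:space:]]*(::1|127\.0\.0\.1)[[:space:]]' /etc/hosts
```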
I installed elasticsearch 5.0.1 with the ingest attachment plugin and tried indexing a PDF in elasticsearch from a shell script using the command below:
#!/bin/ksh
var=$(base64 sample.pdf | perl -pe 's/\n/\\n/g')
var1=$(curl -XPUT 'http://localhost:9200/my_index5/my_type/my_id?pipeline=attachment&pretty' -d' { "data" : "'$var'" }')
echo $var1
I got this error:
{ "error" : { "root_cause" : [ { "type" : "exception", "reason" : "java.lang.IllegalArgumentException: ElasticsearchParseException[Error parsing document in field [data]]; nested: IllegalArgumentException[Illegal base64 character a];", "header" : { "processor_type" : "attachment" } } ]
Can anyone please help with resolving the above error?
I rectified the error.
The cause was that the base64-encoded content contained \n sequences, which triggered the "Illegal base64 character" exception.
As a solution, the following worked:
var=$(base64 sample.pdf | perl -pe 's/\n//g')
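Alternatively, with GNU coreutils the perl step can be avoided entirely: base64 -w 0 disables line wrapping. Note that -w is a GNU extension and is not available in every base64 implementation; the throwaway file below stands in for sample.pdf:

```shell
# -w 0 tells GNU base64 not to wrap its output, so there are no newlines
# to strip afterwards. Replace the demo file with sample.pdf in practice.
printf 'Lorem ipsum' > /tmp/sample.bin
var=$(base64 -w 0 /tmp/sample.bin)
echo "$var"
# prints TG9yZW0gaXBzdW0=
```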
I installed elasticsearch 5.0.1 and the corresponding ingest attachment plugin. I tried indexing a PDF document from a shell script as below:
#!/bin/ksh
var=$(base64 file_name.pdf)
var1=$(curl -XPUT 'http://localhost:9200/my_index4/my_type/my_id?pipeline=attachment&pretty' -d' { "data" : $var }')
echo $var1
I got this error:
{ "error" : { "root_cause" : [ { "type" : "exception", "reason" :
"java.lang.IllegalArgumentException: ElasticsearchParseException[Error parsing document in field
[data]]; nested: IllegalArgumentException[Illegal base64 character 24];",
"header" : { "processor_type" : "attachment" } } ]...
Can anyone please help with resolving the above issue? I am not sure whether I am passing an invalid base64 character.
Please note that when I pass it like this, it works!
var1=$(curl -XPUT 'http://localhost:9200/my_index4/my_type/my_id?pipeline=attachment&pretty'
-d' { "data" : "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=" }')
I guess the issue has to do with the shell not expanding variables within single quotes; you need double quotes to expand them, i.e.
change -d' { "data" : $var }'
to
-d '{"data" : "'"$(base64 file_name.pdf)"'"}'
to pass the base64 stream directly.
(or)
-d '{"data" : "'"$var"'"}'
More about quoting and variables in ksh here.
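A quick way to see the difference between the two quoting styles (a standalone sketch, not tied to curl):

```shell
# Single quotes keep $var literal; closing the single quotes and wrapping
# the variable in double quotes lets the shell expand it.
var=hello
printf '%s\n' '{ "data" : $var }'
printf '%s\n' '{"data" : "'"$var"'"}'
# prints:
# { "data" : $var }
# {"data" : "hello"}
```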