How to use the complex-number data type of KairosDB from REST - opentsdb

I am evaluating KairosDB for my project. While checking out the REST APIs, I found a use case where I need to save both my device state and status (state tells whether the device is on or off, and status tells whether the device is occupied or empty).
kairosdb version: 1.1.1
I came across this link https://kairosdb.github.io/docs/build/html/restapi/AddDataPoints.html
but when I try to post data from a REST client I get a 400 Bad Request error. The error is:
{"errors":["Unregistered data point type 'complex-number'"]}
The request I am posting is:
{
  "name": "device_data",
  "type": "complex-number",
  "datapoints": [
    [
      1470897496,
      {
        "state": 0,
        "status": "empty"
      }
    ]
  ],
  "tags": {
    "device_id": "abc123"
  }
}
I tried doing the same in Java as specified in https://kairosdb.github.io/docs/build/html/kairosdevelopment/CustomData.html and I get the same error.
Please let me know how to use complex numbers or custom data types from REST.

Recently, I figured out how to use this.
Using the example from the official KairosDB documentation:
Create two files called ComplexDataPoint.java and ComplexDataPointFactory.java and paste in the code provided by the tutorial: https://kairosdb.github.io/docs/build/html/kairosdevelopment/CustomData.html#example-for-creating-custom-types
Download the KairosDB source, then extract the .zip file.
Paste the two files into /KAIROSDB_DOWNLOADED_SOURCE/src/main/java/org/kairosdb/core/datapoints/
Configure CoreModule.java at /KAIROSDB_DOWNLOADED_SOURCE/src/main/java/org/kairosdb/core/ by adding the following line in the method protected void configure():
bind(ComplexDataPointFactory.class).in(Singleton.class);
Open a terminal, cd to KAIROSDB_DOWNLOADED_SOURCE/, then follow the instructions in the file how_to_build.txt.
When the build completes, it creates a folder called build; the compiled KairosDB jar file is located in KAIROSDB_DOWNLOADED_SOURCE/build/jar.
In your KairosDB installation folder, back up the kairosdb-X.X.X.jar file in YOUR_KAIROSDB_INSTALLATION/lib:
mv kairosdb-X.X.X.jar kairosdb-X.X.X.jar.backup
Move the newly compiled jar file into YOUR_KAIROSDB_INSTALLATION/lib.
Modify the configuration file by adding the following line:
kairosdb.datapoints.factory.complex=org.kairosdb.core.datapoints.ComplexDataPointFactory
Restart your KairosDB.
For your query, since the registered name is kairosdb.datapoints.factory.complex, replace complex-number with complex in your request.
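For example, the POST body from the question would then look roughly like this (a sketch: the value object has to match whatever JSON your custom factory parses, and the tutorial's ComplexDataPoint stores a real/imaginary pair, so for the state/status use case the factory would need to be adapted accordingly):
{
  "name": "device_data",
  "type": "complex",
  "datapoints": [
    [
      1470897496,
      {
        "state": 0,
        "status": "empty"
      }
    ]
  ],
  "tags": {
    "device_id": "abc123"
  }
}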
Hope this will help you! I am now having trouble plotting the complex data; I am still figuring that out...

Related

Publishing zip from TeamCity to Artifactory using FileSpec, does it even work?

I've tried everything I can think of to get this to work, but in every attempt no errors are returned (the logs say everything succeeded and one file was uploaded); yet in Artifactory, although the build info is published, nothing appears in the target repo.
I tried using PowerShell and the REST API to publish, but that just gave me a 502 gateway error. Again, no matter what combination of values I used, always this useless error.
I have been forced to revert to using the 'legacy patterns' because that is the only thing that works at all.
The pattern I am using is:
%ProjectPublishDir%\**\*.* => path/to/my/APP/%Major-Version%.%Minor-Version%/%ArtifactNamePrefix%/%ArtifactFileName%
Where all these values are parameters defined in TeamCity. This works like a charm.
However, if I add this at the end of my PowerShell step to create the artifact:
#Write-Host "##teamcity[publishArtifacts '%ProjectPublishDir%\**\*.* => %ArtifactsDir%\%ArtifactFileName%']"
and then replace legacy patterns with file specs for my PowerShell step, using this file spec:
{
  "files": [
    {
      "pattern": "%ArtifactsDir%\%ArtifactFileName%",
      "target": "my_repo_name/path/to/my/APP/%Major-Version%.%Minor-Version%/%ArtifactNamePrefix%/",
      "props": "type=zip;status=ready"
    }
  ]
}
The log says the artifact was generated (which I verified on the build agent is true) and that it was sent to Artifactory successfully, but when I check in Artifactory, the build info is there but the artifact is missing.
Every example I can find is too simple, and based on those basic examples, what I have above should work. So what are the docs missing, or what am I doing incorrectly / not doing?

Cloud Build Trigger - Not in Firebase app directory error

I have a problem with deploying a Node.js (Angular) application from a Bitbucket repository to Firebase using a Cloud Build Trigger.
I performed the steps from this article: https://cloud.google.com/build/docs/deploying-builds/deploy-firebase
Necessary APIs are turned on,
Service account has all required permissions,
Firebase community builder is deployed and visible in Container Registry,
Cloudbuild.json file is added to repository,
Cloud Build Trigger is created and it points to one specific branch of my repository.
The problem is that after running the Cloud Build Trigger I receive the following error: "Error: Not in Firebase app directory (could not locate firebase.json)"
What could be the reason for such an error? Is it possible to tell the trigger where in the repository the Firebase application and firebase.json file are located?
EDIT:
My cloudbuild.json file is quite simple:
{
  "steps": [
    {
      "name": "gcr.io/PROJECT_NAME/firebase",
      "args": [
        "deploy",
        "--project",
        "PROJECT_NAME",
        "--only",
        "functions"
      ]
    }
  ]
}
Logs in Cloud Build Trigger history:
starting build "build_id"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/link
* branch branch_id -> FETCH_HEAD
HEAD is now at d8bc2f1 Add cloudbuild.json file
BUILD
Pulling image: gcr.io/PROJECT/firebase
Using default tag: latest
latest: Pulling PROJECT/firebase
1e987daa2432: Already exists
a0edb687a3da: Already exists
6891892cc2ec: Already exists
684eb726ddc5: Already exists
b0af097f0da6: Already exists
154aee36a7da: Already exists
37e5835696f7: Pulling fs layer
62eb6e670f1d: Pulling fs layer
47e62615d9f9: Pulling fs layer
428cea824ccd: Pulling fs layer
765a2c722bf9: Pulling fs layer
b3f5d0a285e3: Pulling fs layer
428cea824ccd: Waiting
765a2c722bf9: Waiting
b3f5d0a285e3: Waiting
47e62615d9f9: Verifying Checksum
47e62615d9f9: Download complete
62eb6e670f1d: Verifying Checksum
62eb6e670f1d: Download complete
765a2c722bf9: Verifying Checksum
765a2c722bf9: Download complete
b3f5d0a285e3: Verifying Checksum
b3f5d0a285e3: Download complete
37e5835696f7: Verifying Checksum
37e5835696f7: Download complete
428cea824ccd: Verifying Checksum
428cea824ccd: Download complete
37e5835696f7: Pull complete
62eb6e670f1d: Pull complete
47e62615d9f9: Pull complete
428cea824ccd: Pull complete
765a2c722bf9: Pull complete
b3f5d0a285e3: Pull complete
Digest: sha256:4b6b7214d6344c8247130bf3f5443a2851e39aed3ececb32dfb2cc25a5c07e44
Status: Downloaded newer image for gcr.io/PROJECT/firebase:latest
gcr.io/PROJECT/firebase:latest
Error: Not in a Firebase app directory (could not locate firebase.json)
ERROR
ERROR: build step 0 "gcr.io/PROJECT/firebase" failed: step exited with non-zero status: 1
The firebase.json file is placed in the repository at the following path: /apps/firebase/firebase.json
I can't see any way to point to that path in the Cloud Build Trigger config.
I'm pretty sure that you aren't in the correct directory when you run the command. To check this, you can add this step:
{
  "steps": [
    {
      "name": "gcr.io/cloud-builders/gcloud",
      "entrypoint": "ls",
      "args": ["-la"]
    }
  ]
}
If you aren't in the correct directory and you need to be in ./apps/firebase to run your deploy, you can add the dir parameter to your step:
{
  "steps": [
    {
      "name": "gcr.io/PROJECT_NAME/firebase",
      "dir": "apps/firebase",
      "args": [
        "deploy",
        "--project",
        "PROJECT_NAME",
        "--only",
        "functions"
      ]
    }
  ]
}
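Putting the two snippets together, a complete cloudbuild.json for this layout could look roughly like this (only a sketch; the ls step is a debugging aid and can be removed once the path is confirmed):
{
  "steps": [
    {
      "name": "gcr.io/cloud-builders/gcloud",
      "entrypoint": "ls",
      "dir": "apps/firebase",
      "args": ["-la"]
    },
    {
      "name": "gcr.io/PROJECT_NAME/firebase",
      "dir": "apps/firebase",
      "args": [
        "deploy",
        "--project",
        "PROJECT_NAME",
        "--only",
        "functions"
      ]
    }
  ]
}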
Let me know if that works better.

How can I use mesos-execute to run a jar

I have set up my Mesos cluster correctly with one master and two slaves. What I am trying to do is use mesos-execute to run jar files on the cluster. I can use it to run simple commands like:
mesos-execute --master=mesosr:5050 --name="simple-test" --command=echo "hello"
This runs as expected. However, if I try to replace that echo "hello" command with something like "java -jar helloWorld.jar", it won't work.
I managed to identify the problem, but I don't know how to fix it. The issue is that the command doesn't run from the home directory; it runs from something similar to this:
/var/lib/mesos/slaves/3f5439b1-7fab-45d6-876e-7e75b7c15fc9-S0/frameworks/3f5439b1-7fab-45d6-876e-7e75b7c15fc9-0043/executors/java-test/runs/7c20baff-080f-48ee-95fc-3662c388744b
I got that path by running "pwd" as a command on mesos-execute.
Now, my question is how do I get out from there? cd doesn't work.
Is there any way for me to get to the home folder or to a special folder where I can put my jars to make them accessible to mesos-execute?
The use case for this application is that there will be a lot of small jar files that will have to be run on the cluster. They don't have to stay alive, so I am not using anything like Marathon for these jars.
Thank you.
From mesos-execute -h
--task_group=VALUE The value could be a JSON-formatted string of TaskGroupInfo or a file path containing the JSON-formatted TaskGroupInfo. Path must be of the form file:///path/to/file or /path/to/file. See the TaskGroupInfo message in mesos.proto for the expected format. NOTE: agent_id need not to be set.
Example:
{
  "tasks": [
    {
      "name": "Name of the task",
      "task_id": {"value": "Id of the task"},
      "agent_id": {"value": ""},
      "resources": [
        {
          "name": "cpus",
          "type": "SCALAR",
          "scalar": {
            "value": 0.1
          }
        },
        {
          "name": "mem",
          "type": "SCALAR",
          "scalar": {
            "value": 32
          }
        }
      ],
      "command": {
        "value": "sleep 1000"
      }
    }
  ]
}
What interests you most is the command part. There you can define your task along with all the files it needs to download in order to run correctly. All possible configuration options for command are specified in CommandInfo.
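As an illustration (a sketch only; the download URL and jar name are hypothetical, not taken from the question), a task whose command first fetches the jar into the sandbox and then runs it could look roughly like this:
{
  "tasks": [
    {
      "name": "run-jar",
      "task_id": {"value": "run-jar-1"},
      "agent_id": {"value": ""},
      "resources": [
        {"name": "cpus", "type": "SCALAR", "scalar": {"value": 0.5}},
        {"name": "mem", "type": "SCALAR", "scalar": {"value": 128}}
      ],
      "command": {
        "uris": [
          {"value": "http://fileserver.example.com/jars/helloWorld.jar"}
        ],
        "value": "java -jar helloWorld.jar"
      }
    }
  ]
}
Saved as, say, /path/to/task.json, it would be submitted with mesos-execute --master=mesosr:5050 --task_group=file:///path/to/task.json; the fetched jar lands in the task sandbox, which is exactly the working directory the command runs from.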
OK, so I figured out how to do it.
I was going about it all wrong; it was perhaps not the best idea to tackle a new thing at the end of a workday.
What I was trying to do was change directory to the home directory as part of the mesos-execute command, which is not allowed. The way to run a jar that is located in the home directory is to specify the full path of the jar in the java -jar command. So the final, working command looks like this:
mesos-execute --master=mesosr:5050 --name="simple-test" --command="java -jar /home/user/jarFile.jar"
This works, and the jar is executed on the cluster.

retrieve artifact with maven timestamp from artifactory

Is there a way to retrieve an artifact with the Maven timestamp, as it was originally uploaded by Maven?
From the Jenkins logs:
Uploading: http://artifactory.foo/artifactory/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-20160504.182015-2.tar.gz
Results from the Artifactory REST API:
$ curl -X GET 'http://artifactory.foo/artifactory/api/search/gavc?g=com.foo&a=foo-web-service&v=1.16.0-SNAPSHOT&c=*&repos=libs-snapshot-local'
{
  "results" : [ {
    "uri" : "http://artifactory.foo/artifactory/api/storage/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-SNAPSHOT-sources.jar"
  }, {
    "uri" : "http://artifactory.foo/artifactory/api/storage/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-SNAPSHOT.pom"
  }, {
    "uri" : "http://artifactory.foo/artifactory/api/storage/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-SNAPSHOT.tar.gz"
  }, {
    "uri" : "http://artifactory.foo/artifactory/api/storage/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-SNAPSHOT.war"
  } ]
}
I'd like to get the same name as it was uploaded with, via wget or equivalent...
What I want to achieve:
jenkins uploads foo-web-service-1.16.0-20160504.182015-2.tar.gz to libs-snapshot-local
query the REST API to get the latest artifact link that includes the timestamp in the name, with parameters a=foo-web-service&version=1.16.0&...
wget $artifact_link_with_timestamp
What I currently achieve, which does not satisfy my need:
jenkins uploads foo-web-service-1.16.0-20160504.182015-2.tar.gz to libs-snapshot-local
query the REST API via gavc search with parameters a=foo-web-service&version=1.16.0&...
wget $artifact_link
Conclusion: as stated in the accepted answer, the problem was in the Artifactory config itself. To achieve what I wanted, I needed the snapshots to be unique.
As long as your repository is configured to use unique snapshots (or to use the client snapshot policy, and you use Maven 3 and up), you can always use the Maven timestamp as a version. Replacing it with -SNAPSHOT is a "runtime" trick to make the resolution easier.
If your repository is configured to use non-unique snapshots, the files are actually stored with -SNAPSHOT instead of the timestamped version and overwrite previous snapshots (don't do that).
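Once unique snapshots are enabled, one way to recover the timestamped name is Artifactory's latest-version search followed by a plain download, roughly like this (a sketch; the path layout below just mirrors the URLs from the question and may need adjusting for your repository layout):
# Ask Artifactory for the latest unique snapshot version string,
# e.g. "1.16.0-20160504.182015-2"
latest=$(curl -s "http://artifactory.foo/artifactory/api/search/latestVersion?g=com.foo&a=foo-web-service&v=1.16.0-SNAPSHOT&repos=libs-snapshot-local")

# Download the artifact under its original, timestamped file name
wget "http://artifactory.foo/artifactory/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-${latest}.tar.gz"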

gclient runhooks fails

I'm trying to build Chrome under Windows. I got the Chromium trunk using TortoiseSVN and I believe I got everything correctly, but when I run "gclient runhooks" I get the error: "Error: client not configured; see 'gclient config'".
Now, I know that it happens because I don't have a ".gclient" file in the same directory, but I couldn't find a .gclient file anywhere in the project. I tried to create the .gclient file myself, but it says there's a solution missing.
I'm probably missing something; can anyone help me with that? I'm pretty stuck!
Thanks!
gclient config http://src.chromium.org/svn/trunk/src
gclient runhooks
Or make a .gclient file with the following content, which skips the huge number of WebKit layout tests:
solutions = [
  {
    "name": "src",
    "url": "http://src.chromium.org/svn/trunk/src",
    "deps_file": "DEPS",
    "managed": True,
    "custom_deps": {
      "src/third_party/WebKit/LayoutTests": None,
      "src/chrome_frame/tools/test/reference_build/chrome": None,
      "src/chrome/tools/test/reference_build/chrome_mac": None,
      "src/chrome/tools/test/reference_build/chrome_win": None,
      "src/chrome/tools/test/reference_build/chrome_linux": None,
    },
    "safesync_url": "",
  },
]
The above solution is outdated. Running with the SVN repository results in:
Error:
The chromium code repository has migrated completely to git.
Your SVN-based checkout is now obsolete; you need to create a brand-new
git checkout by following these instructions:
http://www.chromium.org/developers/how-tos/get-the-code
Now you need to create a .gclient file like this
solutions = [
  {
    "managed": False,
    "name": "src",
    "url": "https://chromium.googlesource.com/chromium/src.git",
    "custom_deps": {},
    "deps_file": ".DEPS.git",
    "safesync_url": "",
  },
]
and do:
gclient sync
Chromium does not include a preconfigured .gclient file for the Chromium build, and it does not automatically handle Visual Studio versioning changes or default depot_tools hints. After you have successfully downloaded depot_tools and the Chromium source code as described at chromium.org, perform the following in the root directory where your depot_tools and src code are located.
NOTE: If you receive errors, start a new command prompt session and try again.
set DEPOT_TOOLS_WIN_TOOLCHAIN=0
set GYP_MSVS_VERSION=2015
gclient config https://chromium.googlesource.com/chromium/src.git
gclient sync
gclient runhooks
cd src
ninja -C out\Debug chrome
The build will take some time; gclient runhooks should generate the build folder.
