I sent metrics with non-ASCII characters into Graphite; the whisper file was created successfully, but I can't query it.
The file was created in the file system: whisper_root/test/测试/recs.wsp
But the metric does not appear in the Graphite tree view (see the Graphite tree snapshot).
When I put the query URL (http://192.168.16.35:8085/metrics/find/?_dc=1488781861984&query=test.%E6%B5%8B%E8%AF%95.*&format=treejson&path=test.%E6%B5%8B%E8%AF%95&node=test.%E6%B5%8B%E8%AF%95) in the browser, I get this response: "Missing required parameter 'query'"
How can I get Graphite working with metrics that contain non-ASCII characters? Thanks.
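The "Missing required parameter 'query'" error suggests the query parameter is getting mangled somewhere in the hand-built URL. As a sketch (host, port, and metric path are taken from the question above), letting the `requests` library build the URL ensures the non-ASCII metric path is percent-encoded consistently:

```python
import requests

# Sketch: reproduce the tree-view query with requests, which
# percent-encodes the non-ASCII metric path automatically.
# Host and port are copied from the question; adjust as needed.
base = "http://192.168.16.35:8085/metrics/find/"
params = {
    "query": "test.测试.*",
    "format": "treejson",
}

# Build (without sending) the request to inspect the final URL.
prepared = requests.Request("GET", base, params=params).prepare()
print(prepared.url)
```

This makes it easy to compare the URL requests generates against the one the browser sent and spot any encoding mismatch.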
I have started using Azure ML Studio and have come across an issue with an Automated ML model. I create an AutoML run and get decent precision. I deploy the model and get an endpoint using the out-of-the-box deploy button. I use Postman to test the endpoint and get a response, but the response is in text format.
What I'm getting:
"{\"result\": [\"Prediction Label X\"]}"
What I'm expecting:
{"result":["Prediction Label X"]}
Postman has Accept and Content-Type both set to application/json.
Of course I could clean this text response up and parse it as JSON, but I'd rather get it directly from Azure in the correct format.
There doesn't appear to be anywhere in ML Studio to modify the code or the response format, and I'm new to Azure Studio.
Any thoughts?
The service is returning the raw text; you can use json.loads(response.content.decode("utf-8")) to convert it to JSON.
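Since the body shown in the question is a JSON-encoded string whose contents are themselves JSON (double-encoded), one pass of json.loads yields a string and a second pass yields the actual object. A minimal sketch using the response text from the question:

```python
import json

# Raw response body from the question: a JSON string containing JSON.
body = '"{\\"result\\": [\\"Prediction Label X\\"]}"'

inner = json.loads(body)   # first pass: still a str
data = json.loads(inner)   # second pass: a real dict
print(data["result"])      # ['Prediction Label X']
```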
I am running a server with ownCloud for a bunch of users.
However, I totally forbid the use of this cloud for illegal content, like movies/audio/albums/zips containing TV shows, etc.
How can I be sure that users won't store files downloaded from torrent websites?
Is there a Unix binary that can grab some screenshots from an mkv/avi file and check whether there is a watermark or a known picture (20th Century Fox, Warner, etc.)?
I cannot search for strange names like 'DVDRIP', since the filenames are encrypted.
I am a beginner, so please be kind. I want to download the CPU utilization rate of some VMs installed on a server. The server has Graphite installed. I installed the Python graphite-api package and I have the server connection details. How do I make the REST API call to start pulling the data?
Use the requests package:
>>> r = requests.get('https://your_graphite_host.com/render?target=app.numUsers&format=json', auth=('user', 'pass'))
>>> r.json() # this will give you the JSON response with the data
Keep in mind that you will have to replace app.numUsers with the appropriate metric name. You can also request other formats and time ranges; see the graphite-api docs.
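For pulling a CPU series over a specific window, the same render call accepts from/until parameters. A sketch with a hypothetical host and metric path (both are assumptions; replace them with your own):

```python
import requests

url = "https://your_graphite_host.com/render"  # placeholder host
params = {
    "target": "servers.vm01.cpu.utilization",  # assumed metric path
    "from": "-1h",      # last hour
    "until": "now",
    "format": "json",
}

# Build the request to inspect the final URL without sending it
# (the host and credentials here are placeholders).
prepared = requests.Request("GET", url, params=params).prepare()
print(prepared.url)

# To actually fetch the data:
# r = requests.get(url, params=params, auth=("user", "pass"))
# for series in r.json():
#     print(series["target"], series["datapoints"][:3])
```

Each element of the JSON response contains a "target" name and a "datapoints" list of [value, timestamp] pairs.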
I can see from querying our Elasticsearch nodes that they contain internal statistics showing, for example, disk, memory, and CPU usage (e.g. via the GET _nodes/stats API).
Is there any way to access these in Kibana 4?
Not directly, as Elasticsearch doesn't natively push its internal statistics to an index. However, you could easily set something like this up on a *nix box:
Poll your Elasticsearch box via REST periodically (say, once a minute). The /_status or /_cluster/health endpoints probably contain what you're after.
Pipe these to a log file in a simple CSV format along with a time stamp.
Point logstash to these log files and forward the output to your ElasticSearch box.
Graph your data.
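The poll-and-log steps above could be sketched like this in Python (the Elasticsearch URL and the choice of CSV columns are assumptions; adapt them to the stats you need):

```python
import csv
import json
import time
import urllib.request

# Assumed cluster location; adjust for your Elasticsearch box.
ES_URL = "http://localhost:9200/_cluster/health"

def poll_once(url=ES_URL):
    """Fetch the cluster health JSON once."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

def append_row(path, health):
    """Append a timestamped CSV row for logstash to pick up."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([
            int(time.time()),
            health.get("status"),
            health.get("number_of_nodes"),
            health.get("active_shards"),
        ])

# Example loop (run from cron or a long-lived process):
# while True:
#     append_row("/var/log/es_stats.csv", poll_once())
#     time.sleep(60)
```

Logstash can then tail the CSV file and index the rows back into Elasticsearch for graphing in Kibana.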
Is there an R package to connect to graphite (whisper)?
It seems I am looking for the same thing. For now I see only these ways:
Using jsonlite within R to access the Graphite render URL API and get JSON- or CSV-formatted data.
Getting whisper data via whisper-fetch (example usage is described in a Russian IT blog, automatically translated to English by Google).