download data from Graphite using Python

I am a beginner, so please be kind. I want to download the CPU utilization rate from some VMs installed on a server. The server has Graphite installed. I installed the Python graphite-api package and I have the server connection details. How do I make the REST API call to start pulling the data?

Use the requests package:
>>> import requests
>>> r = requests.get('https://your_graphite_host.com/render?target=app.numUsers&format=json', auth=('user', 'pass'))
>>> r.json()  # this gives you the parsed JSON data
Keep in mind that you will have to replace app.numUsers with the appropriate metric name. You can also request other formats and time ranges; see the graphite-api docs.
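For longer pulls, it is tidier to pass the query as parameters and walk the returned series. A minimal sketch along the same lines; the host, credentials, and the metric path servers.vm01.cpu.percent are placeholders for your own values:

import requests

# Placeholder metric, host, and credentials; adjust to your installation.
params = {
    "target": "servers.vm01.cpu.percent",
    "from": "-24h",  # last 24 hours
    "format": "json",
}
resp = requests.get("https://your_graphite_host.com/render",
                    params=params, auth=("user", "pass"))
resp.raise_for_status()

# Each series looks like {"target": name, "datapoints": [[value, timestamp], ...]};
# value is None where Graphite has no data for that interval.
for series in resp.json():
    for value, timestamp in series["datapoints"]:
        print(series["target"], timestamp, value)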

Related

How to Access SAS Server with IOM Protocol in R

I'm trying to read SAS data into R, similar to this quick tutorial: https://josezea.wordpress.com/2019/05/02/connect-sas-server-from-r/
The issue is that the server I'm trying to connect to uses the IOM protocol, which doesn't seem to be supported by the RCurl package. Does anyone have suggestions for reading data from a SAS server over this protocol in R? Reading from a file path or from a library both work for my scenario. Thanks!
Below is the code I attempted to run in R:
library(RCurl)
library(haven)
protocol <- "IOM"
server <- "server.com:5555"
userpwd <- "username:password"
sasfileRoute <- "/path_to_data/bonus_schedule.sas7bdat"
## Read Data as data frame
url <- paste0(protocol, "://", server, sasfileRoute)
binary_sasdata <- getBinaryURL(url = url, userpwd=userpwd)
df_data = read_sas(binary_sasdata)
I think you're misunderstanding what the linked page does: it shows how to use R to read a SAS dataset, not how to connect to SAS.
SAS datasets are usually stored as .sas7bdat files. You should connect via SFTP, a network share, or similar to access the dataset files; this won't work if the datasets are stored in a LASR server or another in-memory location, of course.
If you need to connect to SAS itself (to execute code or to access in-memory data), you can do so if the SAS server is a SAS Viya server. See R SWAT for more information; it uses SAS's APIs to do what you need.
Otherwise, you will have to run the SAS executable from inside R (if you have access to that version of SAS), or have a SAS user export your data for you from inside SAS. I am not familiar with a way to connect to SAS 9 from R directly, and the link in the comments seems out of date (CRAN, at least, no longer seems to have that package).
SASPy does allow Python to do something similar with SAS 9.4, so perhaps that's a better route if you have SAS 9.4.
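If you do have SAS 9.4 and can use Python, a minimal SASPy sketch might look like the following; the connection profile name, libref, and table are hypothetical and would need to match your own setup:

import saspy

# Assumes SASPy is installed and an IOM connection profile named
# "iomcon" (hypothetical) is defined in your sascfg_personal.py.
sas = saspy.SASsession(cfgname="iomcon")

# Hypothetical libref and table; point these at where the dataset lives.
df = sas.sasdata2dataframe(table="bonus_schedule", libref="mylib")
print(df.head())

sas.endsas()  # close the SAS session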
IOM is a SAS protocol used by its Integration Technologies product, and it is usually used for code submission and SAS object management. Instead of IOM, find the network path to the file and use that. See the resources on using IOM (C#, Java, PowerShell, etc.) on the SAS website.

How to connect to a socket.io from R?

I am trying to connect to a socket.io data source using R.
Specifically, I am trying to connect to CoinCap: https://github.com/CoinCapDev/CoinCap.io.
I started by trying the websockets package from here, but I could not get a connection. Maybe it is not socket.io compliant.
The best example appears to be in this post which asks the same question.
It seems the answer was to create a socket.io server as a middleman and then connect to R.
The problem is that I am not nearly as advanced as jeromefroe, have no experience with sockets or JavaScript, and do not understand how the server he created works or how to build and start it.
jeromefroe provides his JavaScript server code in the post, and I don't know what to do with it.
I am trying to collect the data in R and use it for analysis.
Can somebody help me get the connection running and/or help me set up the server like jeromefroe did?
If I understand your question correctly, you are trying to "collect data in R and use for analysis". The website provides REST URLs, so it is a matter of doing an HTTP GET to retrieve the data. An example using the httr package follows. The result is returned in JSON format, so you need the jsonlite package to convert it into an R data structure.
library(httr)
library(jsonlite)
## GET the REST endpoint, then parse the raw JSON body into an R object
resp <- httr::GET("http://coincap.io/coins")
jsonlite::fromJSON(rawToChar(resp$content))

Is there any way to "ETL" data out of Graphite DB (Grafana)

We are trying to move some of our monitoring away from Grafana / Graphite DB into another system. Is there any way to pull the full DB data into a SQL database?
You may use the tools shipped with Whisper DB to extract the data from its files and then upload it into your own DB, e.g.:
[boss#DPU101 gn_bytes_total_card0]$ whisper-fetch gauge.wsp | head
1499842740 51993482526.000000
1499842800 51014501995.000000
1499842860 51011637567.000000
1499842920 51301789613.000000
1499842980 50994189020.000000
1499843040 50986821344.000000
This tool also allows you to extract data in JSON:
$ whisper-fetch --help
[...]
--json Output results in JSON form
You can use the Whisper utilities provided with Graphite. You need to install them separately, using the following command on Ubuntu 14.04:
apt-get install python-whisper
The whisper-fetch.py program will let you dump the data in JSON format (or in pretty format, separated by tabs).
The data points will be at your storage schema's resolution, in this case one every 60 seconds.
Whisper Link
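Since the target is a SQL database, here is a minimal sketch of the ETL step in Python, using the whisper module and the standard library's sqlite3; the .wsp path and table layout are assumptions to adapt:

import sqlite3
import whisper  # pip install whisper; run where the .wsp files are accessible

# Hypothetical paths; point these at a real .wsp file and target DB.
WSP_PATH = "/opt/graphite/storage/whisper/gauge.wsp"
DB_PATH = "metrics.db"

# fetch() returns ((start, end, step), [values]) for the requested window;
# fromTime=0 is clamped to the oldest point the archive retains.
(start, end, step), values = whisper.fetch(WSP_PATH, fromTime=0)

conn = sqlite3.connect(DB_PATH)
conn.execute("CREATE TABLE IF NOT EXISTS datapoints (ts INTEGER, value REAL)")
rows = [
    (start + i * step, v)
    for i, v in enumerate(values)
    if v is not None  # gaps in the archive come back as None
]
conn.executemany("INSERT INTO datapoints VALUES (?, ?)", rows)
conn.commit()
conn.close()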

Execute R Script on AWS via API

I have an R package that I would like to host on Amazon Web Services and make accessible via an API. The script should take a couple of input values and return the R output in JSON format. The API should also be able to handle multiple requests simultaneously.
So, for example, calling http://sampleapi.com/?location=USA&state=Florida would run the R package and return the output data to the calling application.
Has anyone done this before, or do you know of resources that explain how to do so? Thanks!
Thanks for all the suggestions. I decided to use Ruby for the API with the rinruby and rails-api gems and will host that through AWS Elastic Beanstalk. See this question for how I am setting it up - Ruby API - Accept parameters and execute script
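For anyone who would rather avoid Ruby, a comparable minimal sketch in Python: a Flask app that shells out to Rscript and relays its output. The script path and parameters are hypothetical, and it assumes the R script prints JSON to stdout:

import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/run")
def run_model():
    # Hypothetical query parameters, mirroring the example URL above.
    location = request.args.get("location", "USA")
    state = request.args.get("state", "Florida")
    # Hypothetical script path; the R script must print JSON to stdout.
    out = subprocess.run(
        ["Rscript", "/opt/app/run_model.R", location, state],
        capture_output=True, text=True, check=True,
    )
    return app.response_class(out.stdout, mimetype="application/json")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

Run this behind a production WSGI server (e.g. gunicorn with several workers) to cover the concurrent-requests requirement.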

Is it possible to access Elasticsearch's internal stats via Kibana

I can see from querying our Elasticsearch nodes that they contain internal statistics showing, for example, disk, memory, and CPU usage (e.g. via the GET _nodes/stats API).
Is there any way to access these in Kibana 4?
Not directly, as Elasticsearch doesn't natively push its internal statistics to an index. However, you could easily set something like this up on a *nix box (a minimal polling sketch follows the steps):
1. Poll your Elasticsearch box via REST periodically (say, once a minute). The /_status or /_cluster/health endpoints probably contain what you're after.
2. Pipe the results to a log file in a simple CSV format, along with a timestamp.
3. Point logstash at these log files and forward the output to your Elasticsearch box.
4. Graph your data.
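A minimal sketch of steps 1 and 2 in Python; the host, output path, polling interval, and chosen health fields are assumptions to adjust:

import csv
import time
from datetime import datetime, timezone

import requests

# Hypothetical host and output path; adjust to your cluster.
ES_URL = "http://localhost:9200/_cluster/health"
LOG_PATH = "/var/log/es_health.csv"
POLL_SECONDS = 60

while True:
    stats = requests.get(ES_URL).json()
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            stats.get("status"),
            stats.get("number_of_nodes"),
            stats.get("active_shards"),
        ])
    time.sleep(POLL_SECONDS)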
