Suppose there are several hosts and ports for the Redis server, like
10.0.1.1:6381
10.0.1.1:6382
10.0.1.2:6381
10.0.1.2:6382
how can I configure redux::hiredis()?
I have googled around but can't find a solution. I noticed a note on the db parameter of the redis_config() function saying "Do not use in a redis clustering context.", so I was wondering whether there is a way to connect to a cluster. I have also tried passing redis://10.0.1.1:6381,10.0.1.1:6382,10.0.1.2:6381,10.0.1.2:6382 to the url parameter, but that failed as well.
Any suggestions? Or is there another package you would suggest?
My initial solution is a function that redirects to the correct node based on the error message:
library(stringr)

check_redis <- function(key = "P10000", host = "10.0.1.1", port = 6381) {
  r <- redux::hiredis(host = host, port = port)
  status <- tryCatch(
    {
      r$EXISTS(key = key)
    },
    error = function(e) {
      # Parse "host:port" out of the redirection error message
      address <- str_match(e$message,
                           "[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+:[0-9]+")
      host <- str_split(address, ":", simplify = TRUE)[1]
      port <- str_split(address, ":", simplify = TRUE)[2]
      list(host = host, port = port)
    }
  )
  if (is.list(status)) {
    # Reconnect to the node indicated by the error
    r <- redux::hiredis(host = status$host, port = as.integer(status$port))
  }
  r
}
It helps redirect to the correct node, but this solution is neither elegant nor efficient, so please advise.
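For completeness, here is a slightly more general version of the same idea as a sketch (not a real cluster client, no slot hashing): run an arbitrary redux call and, if the node replies with a redirection error, reconnect to the host:port named in the message and retry once. The with_redirect() name and the error-parsing regex are my own assumptions about how the cluster reports redirections.

with_redirect <- function(fun, host = "10.0.1.1", port = 6381) {
  r <- redux::hiredis(host = host, port = port)
  tryCatch(
    fun(r),
    error = function(e) {
      # Expect something like "MOVED 1234 10.0.1.2:6382" in the message
      m <- stringr::str_match(e$message, "([0-9.]+):([0-9]+)")
      if (is.na(m[1, 1])) stop(e)  # not a redirection error, so re-raise it
      r2 <- redux::hiredis(host = m[1, 2], port = as.integer(m[1, 3]))
      fun(r2)  # retry once on the node named in the error
    }
  )
}

# Usage: with_redirect(function(r) r$EXISTS("P10000"))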
I have an R script which consumes a feed periodically and then produces signals or actions, which need to be sent to a TCP server implemented in another language. I have handled the signal-generation part in this fashion:
sendViaSocket <- function(object, socket) {
  # Length-prefix framing: write the payload size as a little-endian integer,
  # then the raw payload itself
  byteLen <- as.integer(object$byteLen())
  writeBin(byteLen, socket, endian = "little", useBytes = TRUE)
  writeBin(object$toRaw(), socket, useBytes = TRUE)
}

readFromSocket <- function(socket) {
  # Read the length prefix first, then exactly that many raw bytes
  packetLen <- readBin(socket, integer(), endian = "little")
  rawVec <- readBin(socket, "raw", n = packetLen)
  # application-specific deserialization of the message
  parse(messages.MyMessage, rawVec)
}
getOHLC <- function() {
  # feed reading function
  ...
}

generateSignal <- function(candleStick) {
  # signal generating function
  ...
}

performAction <- function(message) {
  # Action on server message
}
clientLoop <- function() {
  socket <- socketConnection(
    host = "localhost",
    port = 42000,
    blocking = TRUE,
    server = FALSE,
    open = "a+b"
  )
  while (TRUE) {
    candleStick <- getOHLC()
    signal <- generateSignal(candleStick)
    sendViaSocket(signal, socket)
    # Reading any message from the server
    object <- readFromSocket(socket)
    Sys.sleep(10)
  }
}

clientLoop()
I have deliberately made the socket connection blocking because I want to read all the data available on the connection in one go, and send the message in one go.
Currently I read any notification from the client loop itself, but if the server does not send anything, the client loop blocks.
What I want is a callback that fires when the socket has received data, so I can process the data in the callback without blocking the client loop. Something along the following lines:
onServerDataReceived <- function (serverMessage) {
# Perform related action on receiving server message
}
socket$onDataReceived(callback = onServerDataReceived)
Is there a way I can use a client socket connection with a callback for when the server actually sends a message, or otherwise do this in a reactive/asynchronous way?
Edit: As a bonus, I also don't want to do while (TRUE) {...}; I would prefer an API or package that provides a better way to call the clientLoop() function periodically every n seconds (minimum interval is 30 seconds). One direction I have been considering is sketched below.
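Here is that sketch, only an idea and not tested: it polls the connection with base R's socketSelect() instead of blocking on readBin(), and uses the later package to reschedule the loop instead of while (TRUE)/Sys.sleep(). onServerDataReceived(), getOHLC(), generateSignal() and the framing helpers are assumed to be defined as above.

library(later)

pollSocket <- function(socket) {
  # socketSelect() with timeout = 0 returns immediately and reports whether
  # there is data waiting to be read on the connection
  if (socketSelect(list(socket), write = FALSE, timeout = 0)) {
    onServerDataReceived(readFromSocket(socket))
  }
}

scheduleClient <- function(socket, interval = 30) {
  candleStick <- getOHLC()
  sendViaSocket(generateSignal(candleStick), socket)
  pollSocket(socket)
  # Re-arm the timer; later() runs the callback on R's event loop, so it
  # executes when R is idle (or when later::run_now() is called)
  later::later(function() scheduleClient(socket, interval), interval)
}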
Good afternoon!
I'm a self-taught R developer, so I might not be using the correct jargon, but please bear with me.
I have written a script to clean and process data which has been manually exported from Qualtrics; however, I would like to further automate my data-processing workflow by tapping directly into the Qualtrics API. I am attempting to use the qualtRics package to handle the API call, but when I execute the function I get "port 443: Connection refused" back. I have yet to find a workable solution for this. I have followed the standard workflow; code below:
library(qualtRics)

# Save API key and base URL as environment variables
qualtrics_api_credentials(
  api_key = "<API_KEY>",
  base_url = "<BASE_URL>",
  overwrite = TRUE,
  install = TRUE
)

# Refresh environment
readRenviron("~/.Renviron")

# Pull list of surveys
Surveys <- all_surveys()

# Fetch survey data based on survey ID
SurveyData <- fetch_survey(
  "Survey_ID",
  last_response = NULL,
  start_date = "2019-11-18",
  end_date = NULL,
  unanswer_recode = NULL,
  limit = NULL,
  include_questions = NULL,
  save_dir = NULL,
  force_request = FALSE,
  verbose = TRUE,
  label = TRUE,
  convert = TRUE,
  import_id = FALSE,
  local_time = FALSE
)
For both the all_surveys() and fetch_survey() functions, if I use https:// I get:
"Error in curl::curl_fetch_memory(url, handle = handle) :
Failed to connect to port 443: Connection refused"
and if I use http:// I get:
"Error in curl::curl_fetch_memory(url, handle = handle) :
Failed to connect to port 80: Connection refused"
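For reference, here is a minimal check I can run outside qualtRics to see whether the data-center host and token are reachable at all. This is only a sketch: it assumes the standard Qualtrics v3 REST endpoint /API/v3/surveys, uses httr, and reads the token from the QUALTRICS_API_KEY environment variable written by qualtrics_api_credentials() (check ~/.Renviron for the exact name); "yourdatacenter.qualtrics.com" is a placeholder.

library(httr)

# Hit the surveys endpoint directly with the stored API token
resp <- GET(
  "https://yourdatacenter.qualtrics.com/API/v3/surveys",
  add_headers(`X-API-TOKEN` = Sys.getenv("QUALTRICS_API_KEY"))
)
status_code(resp)  # 200 means the host and token are reachable and valid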
Is there a workaround or a different approach I can take to pull Qualtrics survey data into a data frame via the API?
Best regards
I am using the svSocket package in R to create a socket server. I have successfully created the server using startSocketServer(...). I am able to connect my application to the server and send data from the server to the application, but I am struggling with reading the messages sent by the application. I couldn't find any example for that on the internet. I found only the processSocket(...) example in the svSocket documentation (see below), which describes the function that processes a command coming from the socket. But I only want to read the socket messages coming to the server in a repeat block and print them on the screen for testing.
## Not run:
# ## A simple REPL (R eval/process loop) using basic features of processSocket()
# repl <- function ()
# {
# pars <- parSocket("repl", "", bare = FALSE) # Parameterize the loop
# cat("Enter R code, hit <CTRL-C> or <ESC> to exit\n> ") # First prompt
# repeat {
# entry <- readLines(n = 1) # Read a line of entry
# if (entry == "") entry <- "<<<esc>>>" # Exit from multiline mode
# cat(processSocket(entry, "repl", "")) # Process the entry
# }
# }
# repl()
# ## End(Not run)
Thanks for your input.
EDIT:
Here is a more specific example of socket server creation and sending a message:
require(svSocket)
#start server
svSocket::startSocketServer(
port = 9999,
server.name = "test_server",
procfun = processSocket,
secure = FALSE,
local = FALSE
)
#test calls
svSocket::getSocketClients(port = 9999) #ip and port of client connected
svSocket::getSocketClientsNames(port = 9999) #name of client connected
svSocket::getSocketServerName(port = 9999) #name of socket server given during creation
svSocket::getSocketServers() #server name and port
#send message to client
svSocket::sendSocketClients(
text = "send this message to the client",
sockets = svSocket::getSocketClientsNames(port = 9999),
serverport = 9999
)
... and the output of the code above is:
> require(svSocket)
>
> #start server
> svSocket::startSocketServer(
+ port = 9999,
+ server.name = "test_server",
+ procfun = processSocket,
+ secure = FALSE,
+ local = FALSE
+ )
[1] TRUE
>
> #test calls
> svSocket::getSocketClients(port = 9999) #ip and port of client connected
sock0000000005C576B0
"192.168.2.1:55427"
> svSocket::getSocketClientsNames(port = 9999) #name of client connected
[1] "sock0000000005C576B0"
> svSocket::getSocketServerName(port = 9999) #name of socket server given during creation
[1] "test_server"
> svSocket::getSocketServers() #server name and port
test_server
9999
>
> #send message to client
> svSocket::sendSocketClients(
+ text = "send this message to the client",
+ sockets = svSocket::getSocketClientsNames(port = 9999),
+ serverport = 9999
+ )
>
What you can see is:
successful creation of the socket server
successful connection of the external client sock0000000005C576B0 (192.168.2.1:55427) to the server
successful sending of a message to the client (no explicit output appears in the console here, but the client reacts as expected)
What I am still not able to implement is fetching client messages sent to the server. Could somebody provide an example of that?
For interaction with the server from the client side, see ?evalServer.
Otherwise, it is your processSocket() function (either the default one, or a custom function you provide) that is the entry point triggered when the server gets data from a connected client. From there, you have two possibilities:
The simplest one is just to use the default processSocket() function. Apart from some special code between <<<>>>, which is interpreted as special commands, the default version evaluates R code on the server side. So, just call the function you want on the server. For instance, define f <- function(txt) paste("Fake process", txt) on the server, and call evalServer(con, "f('some text')") on the client. Your custom f() function is executed on the server. Just take care that you need to double-quote expressions that contain text here.
An alternative is to define your own processSocket() function to capture the messages the client sends to the server. This is safer for a server that needs to process a limited number of message types without parsing and evaluating arbitrary R code received from the client; see the sketch below.
Now, the server is asynchronous, meaning that you still have the prompt available on the server while it is listening to clients and processing their requests.
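As an illustration of the second option, here is a minimal sketch of a custom procfun that simply prints whatever the client sent and returns a reply string. The argument names follow the processSocket() interface as I understand it; treat the exact signature as an assumption and check ?processSocket before using it.

library(svSocket)

printSocket <- function(msg, socket, serverport, ...) {
  # msg holds the text the client sent; show it on the server console
  cat("Client said: ", msg, "\n", sep = "")
  # whatever is returned here is sent back to the connected client
  "ok\n"
}

svSocket::startSocketServer(
  port = 9999,
  server.name = "print_server",
  procfun = printSocket,
  secure = FALSE,
  local = FALSE
)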
Apache Thrift is a way to declare data types and interfaces. You can compile the Thrift IDL into many other languages, producing what are called "bindings." Is there a compiler that can produce an R binding for Thrift? I don't see one.
Still in its early adoption phase, but you can try thriftr. Example pingpong.thrift:
service PingPong {
string ping(),
}
Server:
library(thriftr)
pingpong_thrift = thriftr::t_load("pingpong.thrift",
                                  module_name = "pingpong_thrift")
Dispatcher <- R6::R6Class("Dispatcher",
public = list(
ping = function() {
return('pong')
}
)
)
server = thriftr::make_server(pingpong_thrift$PingPong, Dispatcher$new(),
'127.0.0.1', 6000)
server$serve()
Client:
library(thriftr)
pingpong_thrift = thriftr::t_load("pingpong.thrift",
                                  module_name = "pingpong_thrift")
client = thriftr::make_client(pingpong_thrift$PingPong, "127.0.0.1", 6000)
cat(client$ping())
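Run the server in one R session and the client in another; client$ping() should then print "pong". I have not tested this beyond the package examples, so treat it as a starting point.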
I am trying to use Terraform to deploy some machines on an OpenStack cloud.
I have no problem creating networks, subnets, keys, security groups and rules, floating IPs, and network ports (with security groups attached), but when I try to create compute instances with two NICs (the network ports created before), I get a syntax error with no hint on how to resolve it.
Could you help me, please?
My code is:
resource "openstack_compute_instance_v2" "RNGPR-REBOND-01" {
name = "RNGPR-REBOND-01"
flavor_name = "${var.MyFlavor}"
image_id = "${var.MyImage}"
key_pair = "${var.CODOB}-keypair"
network {
port = "${openstack_networking_port_v2.RNGPR-REBOND-01-eth0.id}"
access_network = true
}
network {
port = "${openstack_networking_port_v2.RNGPR-REBOND-01-eth1.id}"
}
floating_ip = "${openstack_compute_floatingip_v2.FloatingIp-RNGPR-REBOND-01.address}"
}
resource "openstack_compute_instance_v2" "RNGPR-LB-01" {
name = "RNGPR-LB-01"
flavor_name = "${var.MyFlavor}"
image_id = "${var.MyImage}"
key_pair = "${var.CODOB}-keypair"
network {
port = "${openstack_networking_port_v2.RNGPR-LB-01-eth0.id}"
}
network {
port = "${openstack_networking_port_v2.RNGPR-LB-01-eth1.id}"
}
floating_ip = "${openstack_compute_floatingip_v2.FloatingIp-RNGPR-LB-01.address}"
}
And the syntax error is:
Error applying plan:
2 error(s) occurred:
* openstack_compute_instance_v2.RNGPR-REBOND-01: Error creating OpenStack server: Invalid request due to incorrect syntax or missing required parameters.
* openstack_compute_instance_v2.RNGPR-LB-01: Error creating OpenStack server: Invalid request due to incorrect syntax or missing required parameters.
From my experience, these error messages aren't very helpful.
I would first set TF_LOG=DEBUG and OS_DEBUG=1 wherever you are running Terraform. This will print error messages that are actually useful.
One time I was trying to create a server with a key pair that my user didn't have access to in OpenStack. I was receiving the same error and didn't figure it out until debugging was enabled.