I am writing a gRPC-based server and client. The server is running on Linux and the client is running on Windows.
I am trying to handle the scenario where the server is not started but the client is already up.
auto state = m_channel->GetState(true); // true => ask the channel to try to connect
// Keep waiting while the channel is neither READY nor SHUTDOWN.
while (state != GRPC_CHANNEL_READY && state != GRPC_CHANNEL_SHUTDOWN)
{
    auto deadline = std::chrono::system_clock::now() + std::chrono::seconds(30);
    if (m_channel->WaitForStateChange(state, deadline))
    {
        state = m_channel->GetState(true);
        std::cout << "new state is: " << static_cast<int>(state) << "\n";
    }
}
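For reference, the same wait can be written more compactly with the channel's built-in helper. This is only a sketch, assuming m_channel is the std::shared_ptr<grpc::Channel> returned by grpc::CreateChannel:
auto deadline = std::chrono::system_clock::now() + std::chrono::seconds(30);
// Ask the channel to connect and block until it is READY or the deadline expires;
// this replaces the manual GetState(true)/WaitForStateChange loop above.
if (m_channel->WaitForConnected(deadline))
    std::cout << "channel is READY\n";
else
    std::cout << "server still not reachable after 30s\n";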
When I run the original loop above, it fails with the following error:
I0929 22:24:05.748000000 14812 subchannel.cc:905] subchannel 0123CF78 {address=ipv4:192.168.175.130:40051, args={grpc.client_channel_factory=0x121dd68, grpc.default_authority=192.168.175.130:40051, grpc.internal.channel_credentials=0x121dce8, grpc.internal.security_connector=0x1235f28, grpc.internal.subchannel_pool=0x1225db0, grpc.max_receive_message_length=-1, grpc.primary_user_agent=grpc-c++/1.49.0-dev, grpc.resource_quota=0x1225990, grpc.server_uri=dns:///192.168.175.130:40051}}: connect failed (UNAVAILABLE:WSA Error {syscall:"ConnectEx", os_error:"No connection could be made because the target machine actively refused it.\r\n", grpc_status:14, wsa_error:10061, created_time:"2022-09-29T20:24:05.748604482+00:00"}), backing off for -1057 ms
Whereas when I run the client on Linux, I see it properly waiting until the server is up and running.
Is there a specific firewall setting that is needed on Windows?
An interesting thing to notice is that the backoff time is negative, whereas on Linux it is a positive value that increases as per the backoff strategy.
The issue was fixed when I started using the gRPC Conan package to build the server and client instead of gRPC built locally from source. I believe some mistakes were made while building gRPC locally from source.
I did not get time to look into the root cause in detail.
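For anyone reproducing the fix: consuming the prebuilt recipe from Conan Center boils down to something like the conanfile.txt below. The version and generators are assumptions (not necessarily what was used here); adjust them to your toolchain:
[requires]
grpc/1.50.1

[generators]
CMakeDeps
CMakeToolchain
After conan install, find_package(gRPC) in CMake should then resolve to the Conan-provided package rather than the local source build.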
I'm trying to send logs into Datadog using rsyslog. Ideally, I'm trying to do this without having the logs stored on the server hosting rsyslog. I've run into an error in my config that I haven't been able to find much about. The error occurs on startup of rsyslog.
omfwd: could not get addrinfo for hostname '(null)':'(null)': Name or service not known [v8.2001.0 try https://www.rsyslog.com/e/2007 ]
Here's the portion I've added to the default rsyslog.conf:
module(load="imudp")
input(type="imudp" port="514" ruleset="datadog")
ruleset(name="datadog"){
action(
type="omfwd"
action.resumeRetryCount="-1"
queue.type="linkedList"
queue.saveOnShutdown="on"
queue.maxDiskSpace="1g"
queue.fileName="fwdRule1"
)
$template DatadogFormat,"00000000000000000 <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% - - - %msg%\n "
$DefaultNetstreamDriverCAFile /etc/ssl/certs/ca-certificates.crt
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer *.logs.datadoghq.com
*.* ##intake.logs.datadoghq.com:10516;DatadogFormat
}
First things first.
The module imudp enables log reception over UDP.
The module omfwd enables log forwarding over TCP, UDP, etc.
So most probably - or at least as far as I can tell - with rsyslog you just want to receive the log messages and then send them on to Datadog.
I don't know anything about the $ActionSendStreamDriver tags, so I can't help you there. But what jumps out is that in your action you haven't defined where the logs should be sent to.
ruleset(name="datadog"){
action(
type="omfwd"
target="10.100.1.1"
port="514"
protocol="udp"
...
)
...
}
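If you also want to replace the legacy $ActionSendStreamDriver* lines and the *.* selector, omfwd exposes equivalent parameters in the modern action() syntax. The following is only a sketch based on the omfwd/global() parameter names documented for rsyslog v8 (verify them against your version); the Datadog endpoint, CA file and template are taken from the question:
global(DefaultNetstreamDriverCAFile="/etc/ssl/certs/ca-certificates.crt")

template(name="DatadogFormat" type="string"
         string="00000000000000000 <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% - - - %msg%\n")

ruleset(name="datadog"){
    action(
        type="omfwd"
        target="intake.logs.datadoghq.com"
        port="10516"
        protocol="tcp"
        template="DatadogFormat"
        StreamDriver="gtls"
        StreamDriverMode="1"
        StreamDriverAuthMode="x509/name"
        StreamDriverPermittedPeers="*.logs.datadoghq.com"
        action.resumeRetryCount="-1"
        queue.type="linkedList"
        queue.saveOnShutdown="on"
        queue.maxDiskSpace="1g"
        queue.fileName="fwdRule1"
    )
}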
I am trying to set up a websocket server as described in this Boost.Beast example.
Everything works fine except that the websocket stream read throws unexpected system errors with the error codes "End of file" and "Operation canceled".
// Runs in the per-session handling thread, inside its read loop.
beast::flat_buffer buffer;
try {
    ws->read(buffer); // ws is in the free store
}
catch (beast::system_error const& se) {
    if (se.code() == websocket::error::closed) {
        LOG_INFO << "ws closed, exiting handling thread..";
        break;
    }
    LOG_WARNING << "exception: " << se.code() << ", " << se.code().message();
}
After a client connects to this server, the server starts to read incoming messages from the client with
ws->read(buffer);
From time to time, one "End of file" system_error and many "Operation canceled" system_errors are caught and printed as below:
WARNING exception: asio.misc:2, End of file
WARNING exception: system:125, Operation canceled
WARNING exception: system:125, Operation canceled
WARNING exception: system:125, Operation canceled
I googled around; "End of file" is probably caused by the underlying TCP socket being closed, but the issue is that the disconnect happens very often, which does not make sense. And what exactly causes the "Operation canceled" system error?
It turned out to be caused by a bad network. When I disabled a VPN, the issue was gone.
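For what it's worth, "End of file" (asio.misc:2) generally means the peer closed the underlying TCP connection, and "Operation canceled" (system:125, i.e. operation_aborted) means the pending read was cancelled, typically because the socket was closed or shut down locally. A sketch of a catch block that treats both as ordinary disconnects, assuming the same ws, buffer, namespace aliases and logging macros as in the question:
try {
    ws->read(buffer);
}
catch (beast::system_error const& se) {
    if (se.code() == websocket::error::closed ||
        se.code() == boost::asio::error::eof ||
        se.code() == boost::asio::error::operation_aborted) {
        // Peer went away or the read was cancelled locally: leave the read loop quietly.
        LOG_INFO << "ws disconnected (" << se.code().message() << "), exiting handling thread..";
        break;
    }
    LOG_WARNING << "unexpected exception: " << se.code() << ", " << se.code().message();
}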
I made a server and made requests to it successfully on Windows, then I moved it to the Linux development machine at my company.
I get a Connect Failed error when I run client.py. I have changed the host IP to localhost, 0.0.0.0, 127.0.0.1 and the IP of the development machine, but it doesn't work.
I also tried the docker inspect command, but the IPAddress is empty, like "".
I changed the port to 8500, 8501 and 4321 and still got the error.
I tried going through the company proxy; the request returns a Name resolution failure error. So I think maybe the cause is the company network?
Any answer would be greatly appreciated!
The Error Message:
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Connect Failed"
debug_error_string = "{"created":"#1551677753.381330483","description":"Failed to create subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":2721,"referenced_errors":[{"created":"#1551677753.381328206","description":"Pick Cancelled","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":241,"referenced_errors":[{"created":"#1551677753.381303114","description":"Connect Failed","file":"src/core/ext/filters/client_channel/subchannel.cc","file_line":689,"grpc_status":14,"referenced_errors":[{"created":"#1551677753.381278484","description":"Failed to connect to remote host: OS Error","errno":111,"file":"src/core/lib/iomgr/tcp_client_posix.cc","file_line":205,"os_error":"Connection refused","syscall":"connect","target_address":"ipv4:0.0.0.0:8501"}]}]}]}"
I am searching for a robust solution to perform extensive computations on a remote server dedicated to computational tasks. The server is on Windows 2008 R2 and has R x64 3.4.1 installed on it. I've searched for free solutions and am now focusing on the Rserve/RSclient package solution.
However, I can't connect any client (using RSclient) to the running Rserve instance.
This is how I'm proceeding at the moment from the server side:
library(Rserve)
run.Rserve(config.file = "Rserv.conf")
using the following Rserv.conf file:
port 6311
remote enable
plaintext enable
control enable
r-control enable
The server is now instantiated from the R session (it's a bit ugly, but I will change that later on):
running Rserve in this R session (pid=...), 1 server(s)
Now, I'm trying to connect from a remote computer (client side) using:
library(RSclient)
c = RS.connect(host = "...")
The connection then seems to succeed; checking c gives:
> c
Rserve QAP1 connection 0x000000000fbe9f50 (socket 764, queue length 0)
The error occurs when I try to eval anything, for example:
> RS.server.eval(c,"0<1")
Error in RS.server.eval(c, "0<1") : command failed with status code 0x4e: no control line present (control commands disabled or server shutdown)
I've read the available guides but still fail to connect. What is wrong? It seems to be related to control commands, but I enabled them in the config file.
For me the problem was solved by starting the Rserve instance with the command:
R CMD Rserve --RS-port 9000 --RS-enable-remote --RS-enable-control
instead of starting it in the R environment (library(Rserve), run.Rserve(config.file = "Rserv.conf")). You may try this on Windows as well.
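If you would rather keep the settings in a file while still starting Rserve from the command line, there is also a config-file flag; --RS-conf is an assumption based on Rserve's documented command-line options, so confirm it with R CMD Rserve --help:
R CMD Rserve --RS-port 9000 --RS-conf /path/to/Rserv.conf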
Refer to https://github.com/s-u/Rserve/wiki/rserve.conf.
port 6311
remote enable      -> it should be "remote true"
plaintext enable
control enable
r-control enable
Likewise, refer to the link and try the actual accepted values for the other options as well.
I am trying to connect to a remote SFTP server using the JSch library, version 0.1.49. Every time I run the program I receive the following error:
Initializing...
Connection to SFTP server is successfully
com.jcraft.jsch.JSchException: Unable to connect to SFTP server.com.jcraft.jsch.JSchException: failed to send channel request
at shell.MainClass.JschConnect(MainClass.java:95)
at shell.MainClass.main(MainClass.java:30)
Line 30 is sftpChannel.connect(), from the code below:
System.out.println("Initializing...");
JSch jsch = new JSch();
Session session = null;
try {
session = jsch.getSession(ProjectConstants.rmUsername,ProjectConstants.rmHost, 22);
session.setPassword(ProjectConstants.rmPassword);
java.util.Properties config = new java.util.Properties();
config.put("StrictHostKeyChecking", "no");
session.setConfig(config);
session.connect();
if (session.isConnected() == true) {
System.out.println("Connection to SFTP server is successfully");
}
ChannelSftp sftpChannel = (ChannelSftp) session.openChannel("sftp");
try {
sftpChannel.connect();
} catch (Exception e) {
throw new JSchException("Unable to connect to SFTP server. "
+ e.toString());
}
The credentials I am using are correct (it connects through FileZilla with the same data), and I also disabled the proxy for that server (either way, I get the same error with or without the proxy).
If anyone could help me I would greatly appreciate it, as I have been stuck on this error for about a week now...
Thank you.
Check if the SFTP server is started and running.
I had encountered the same issue: I was not able to open an SFTP channel to my server, but I could connect with WinSCP. It took me some time to notice that WinSCP would fall back to SCP, which confused me. Starting the server solved this issue.
Check the Subsystem sftp /usr/lib/openssh/sftp-server line in /etc/ssh/sshd_config.
In /etc/ssh/sshd_config I changed:
Subsystem sftp /usr/lib/openssh/sftp-server
to:
Subsystem sftp internal-sftp
It helps (restart sshd afterwards so the change takes effect).