I am using gRPC version 0.14.0 in C#. It is part of an API provided by a vendor. When the server is up, all remote invocations work fine. When the server is not available, the invocation crashes and no exception is bubbled up.
This is what my code looks like:
try
{
    Channel channel = new Channel("127.0.0.1", 12122, ChannelCredentials.Insecure);
    UtilityService.IUtilityServiceClient stub = UtilityService.NewClient(channel);
    var fiveSecondsInFuture = DateTime.Now.AddSeconds(5).ToUniversalTime();
    var asynchCall = stub.KeepAlive(new Ping { }, deadline: fiveSecondsInFuture);
    Console.WriteLine(asynchCall.GetStatus());
    var result = asynchCall.ResponseStream.ToListAsync().Result;
}
catch (Exception e)
{
    Console.WriteLine("Something went wrong");
}
What is the best way to handle situations when the server is not available?
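One way to surface the failure (a minimal sketch, not taken from the vendor's documentation, reusing the stub, Ping, and KeepAlive names from the snippet above) is to await the call inside an async method instead of blocking on .Result, and catch RpcException, whose status code is typically Unavailable or DeadlineExceeded when the server cannot be reached:

Channel channel = new Channel("127.0.0.1", 12122, ChannelCredentials.Insecure);
var stub = UtilityService.NewClient(channel);
try
{
    var call = stub.KeepAlive(new Ping { }, deadline: DateTime.UtcNow.AddSeconds(5));
    // Awaiting lets the RpcException surface directly instead of being
    // wrapped in an AggregateException by .Result.
    var result = await call.ResponseStream.ToListAsync();
}
catch (RpcException ex) when (ex.Status.StatusCode == StatusCode.Unavailable ||
                              ex.Status.StatusCode == StatusCode.DeadlineExceeded)
{
    Console.WriteLine($"Server not reachable: {ex.Status.Detail}");
}
finally
{
    await channel.ShutdownAsync();
}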
How can we reuse the session and port number in TCP-TLS communication using the Cloudbees TcpSyslogMessageSender? We have a syslog client in our application, implemented with the Cloudbees TcpSyslogMessageSender, and we create the SSL context and connection as shown below.
Will this be handled by Cloudbees, or do we have to configure some setting explicitly? Here is our code.
With this code, a new port is used every time.
TcpSyslogMessageSender messageSendertcp = new TcpSyslogMessageSender();
messageSendertcp.setSyslogServerHostname("localhost");
messageSendertcp.setSyslogServerPort(6514);
messageSendertcp.setMessageFormat(MessageFormat.RFC_5425);
messageSendertcp.setDefaultMessageHostname(this.getHostName());
messageSendertcp.setDefaultAppName("test");
messageSendertcp.setDefaultFacility(Facility.LOCAL0);
messageSendertcp.setDefaultSeverity(Severity.NOTICE);

logger.info("entering getsslcontext");
SSLContext context = getSSLContext(); // SSLContext is built from the client keystore and truststore
logger.info("context object");
messageSendertcp.setSSLContext(context);
messageSendertcp.setSsl(true);
}
try {
    logger.info("sending message tcp");
    messageSendertcp.sendMessage(syslogMessage);
} catch (IOException e) {
    return false;
} finally {
    try {
        if (messageSendertcp != null)
            messageSendertcp.close();
    } catch (IOException e) {
        return false;
    }
}
Here your code closes the TCP sender every time, and whenever a new message arrives it creates and uses a new socket. To keep sending on the same port, do not close the socket (the TcpSyslogMessageSender) after each message. Instead, cache the senders, for example in a map that uses the server details as the key and the TCP sender object as the value, and reuse the cached sender for later messages.
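A rough sketch of that caching idea (the SenderCache class and its method names are hypothetical, not part of the Cloudbees library; SSL context setup is omitted for brevity):

import com.cloudbees.syslog.sender.TcpSyslogMessageSender;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SenderCache {
    private final Map<String, TcpSyslogMessageSender> senders = new ConcurrentHashMap<>();

    // Return the cached sender for host:port, creating it only on the first call,
    // so later messages reuse the same TCP/TLS connection instead of opening a new port.
    public TcpSyslogMessageSender get(String host, int port) {
        return senders.computeIfAbsent(host + ":" + port, key -> {
            TcpSyslogMessageSender sender = new TcpSyslogMessageSender();
            sender.setSyslogServerHostname(host);
            sender.setSyslogServerPort(port);
            sender.setSsl(true); // plus setSSLContext(...) as in the question
            return sender;
        });
    }

    // Close every cached sender once, e.g. on application shutdown.
    public void shutdown() throws IOException {
        for (TcpSyslogMessageSender s : senders.values()) {
            s.close();
        }
    }
}

With a cache like this, sendMessage reuses the same connection and close() is deferred until shutdown rather than being called after every message.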
I have an infinitely running process that pushes events from a server to subscribed SignalR clients. There may be long periods where no events take place on the server.
Currently, the process works fine for a short period of time, but eventually the client stops responding to events pushed by the server. I can see the events taking place on the server side, but the client becomes unaware of them. I am assuming this means some timeout has been reached and the client has unsubscribed from the Hub.
I added some code to reconnect if the connection was dropped, and that has helped, but the client still eventually stops seeing new events. I know there are many different timeout values that can be adjusted, but it's all pretty confusing to me and I am not sure whether I should even be tinkering with them.
try
{
    myHubConnection = new HubConnectionBuilder()
        .WithUrl(hubURL, HttpTransportType.WebSockets)
        .AddMessagePackProtocol()
        .AddJsonProtocol(options =>
        {
            options.PayloadSerializerSettings.ContractResolver = new DefaultContractResolver();
        })
        .Build();

    // Client method that can be called by server
    myHubConnection.On<string>("ReceiveInfo", json =>
    {
        // Action performed when method called by server
        pub.ShowInfo(json);
    });

    try
    {
        // connect to Hub
        await myHubConnection.StartAsync();
        msg = "Connected to Hub";
    }
    catch (Exception ex)
    {
        appLog.WriteError(ex.Message);
        msg = "Error: " + ex.Message;
    }

    // Reconnect lost Hub connection
    myHubConnection.Closed += async (error) =>
    {
        try
        {
            await Task.Delay(new Random().Next(0, 5) * 1000);
            await myHubConnection.StartAsync();
            msg = "Reconnected to Hub";
            appLog.WriteWarning(msg);
        }
        catch (Exception ex)
        {
            appLog.WriteError(ex.Message);
            msg = "Error: " + ex.Message;
        }
    };
This all works as expected for a while, then stops without errors. Is there something I can do to (1) ensure the client NEVER unsubscribes, and (2) ensure the client resubscribes to the events if the connection is lost (a network outage, for example)? This client must NEVER time out or give up trying to reconnect.
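One option worth trying (a sketch, assuming a SignalR client version that supports WithAutomaticReconnect, i.e. 3.0 or later; the ForeverRetryPolicy name is made up) is to let the built-in reconnect logic retry forever and to raise the client's server timeout above the server's keep-alive interval:

public class ForeverRetryPolicy : IRetryPolicy
{
    public TimeSpan? NextRetryDelay(RetryContext retryContext)
    {
        // Never return null, so the client never stops retrying;
        // back off gradually, up to 30 seconds between attempts.
        return TimeSpan.FromSeconds(Math.Min(30, retryContext.PreviousRetryCount * 5 + 1));
    }
}

myHubConnection = new HubConnectionBuilder()
    .WithUrl(hubURL, HttpTransportType.WebSockets)
    .WithAutomaticReconnect(new ForeverRetryPolicy())
    .AddMessagePackProtocol()
    .Build();

// If the client hears nothing for longer than ServerTimeout it closes the
// connection, so keep this comfortably above the server's KeepAliveInterval.
myHubConnection.ServerTimeout = TimeSpan.FromSeconds(60);

Note that automatic reconnect only covers connections that were lost after a successful start; a failed initial StartAsync still needs its own retry loop, as in the Closed handler above.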
I have created a web API in AWS that I am trying to get some JSON back from using a web page built in ASP.NET Webforms (most up-to-date version). I can't get the asynchronous part to work. Either the GET method hangs seemingly forever, or - following the best practice approach from the Microsoft documentation - I get this error after a little while:
[TimeoutException: An asynchronous operation exceeded the page
timeout.] System.Web.UI.d__554.MoveNext() +984
I know this has something to do with the async/await portion of the code and with being in ASP.NET, because of the following:
If I use very similar code in a console application, it works fine.
If I call the web API using Postman, it works fine.
I have set Async="true" in the page directive. Here is my Page_Load:
protected void Page_Load(object sender, EventArgs e)
{
    try
    {
        RegisterAsyncTask(new PageAsyncTask(GetStuffAsync));
    }
    catch (Exception ex)
    {
        renderStoreCards.Text = ex.Message;
    }
}
Here is my method
private async Task GetStuffAsync()
{
    string testHtml = string.Empty;
    try
    {
        var signer = new AWS4RequestSigner("AccessKey", "SecretKey");
        var request = new HttpRequestMessage
        {
            Method = HttpMethod.Get,
            RequestUri = new Uri("https://some-aws-address-changed-for-stack-overflow.execute-api.ap-southeast-2.amazonaws.com/Prod/tables/InSiteStoreInformation/ServerName")
        };
        request = await signer.Sign(request, "execute-api", "ap-southeast-2");
        var client = new HttpClient();
        var response = await client.SendAsync(request).ConfigureAwait(false);
        string responseString = await response.Content.ReadAsStringAsync();
    }
    catch (Exception ex)
    {
        renderStoreCards.Text = ex.Message;
    }
}
The above example produces a TimeoutException. Before that, I was trying the following code, which works fine in a console app but not in the ASP.NET page.
class Program
{
    static void Main(string[] args)
    {
        try
        {
            MainAsync().Wait();
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Exception occured {ex.Message}");
            Console.ReadKey();
        }
    }

    static async Task MainAsync()
    {
        try
        {
            var signer = new AWS4RequestSigner("AccessKey", "SecretKey");
            var request = new HttpRequestMessage
            {
                Method = HttpMethod.Get,
                RequestUri = new Uri("https://<Hiddenforstackoverflowpost>.execute-api.ap-southeast-2.amazonaws.com/Prod/tables/InSiteStoreInformation/ServerName")
            };
            request = await signer.Sign(request, "execute-api", "ap-southeast-2");
            var client = new HttpClient();
            var response = await client.SendAsync(request);
            var responseStr = await response.Content.ReadAsStringAsync();
            dynamic sales = Newtonsoft.Json.JsonConvert.DeserializeObject(responseStr);
            Console.WriteLine($"Server = {sales[0].ServerName}");
            Console.ReadKey();
            Console.Write(responseStr);
            Console.ReadKey();
        }
        catch (Exception ex)
        {
            throw (ex);
        }
    }
}
I am by no means an expert in async/await, but it appears that the HttpClient I'm using has no synchronous alternative, so I have to figure this out.
I know this is an old question, but I just came across it.
The timeouts are most likely caused by ReadAsStringAsync, since you neglected to use .ConfigureAwait(false) on it. Running async tasks inside ASP.NET can be very finicky, especially around execution contexts. It is most likely some kind of deadlock when the async method tries to restore the execution context on its return, which tends to fail under IIS hosting. I am not sure whether it is actually a deadlock or some other issue under the hood; just make sure to always use .ConfigureAwait(false).
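Applied to the GetStuffAsync method from the question, the suggestion amounts to something like this (a sketch, not tested against the original page):

private async Task GetStuffAsync()
{
    var signer = new AWS4RequestSigner("AccessKey", "SecretKey");
    var request = new HttpRequestMessage
    {
        Method = HttpMethod.Get,
        RequestUri = new Uri("https://some-aws-address-changed-for-stack-overflow.execute-api.ap-southeast-2.amazonaws.com/Prod/tables/InSiteStoreInformation/ServerName")
    };
    // ConfigureAwait(false) on every await keeps the continuations off the
    // ASP.NET synchronization context and avoids the suspected deadlock.
    request = await signer.Sign(request, "execute-api", "ap-southeast-2").ConfigureAwait(false);

    using (var client = new HttpClient())
    {
        var response = await client.SendAsync(request).ConfigureAwait(false);
        string responseString = await response.Content.ReadAsStringAsync().ConfigureAwait(false);
        // RegisterAsyncTask waits for this task before the page renders,
        // so assigning the result to the control here is still safe.
        renderStoreCards.Text = responseString;
    }
}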
I know this is old, but maybe it helps someone else coming across this issue.
I'm trying to access the Azure Elastic Scale Split/Merge tool from an ASP.NET application. I can open the page in my browser after I use the certificate that I uploaded on Azure. But when I try to connect to the page in ASP.NET I keep getting 500 Internal Server Error, even though I used the certificate in my request.
Is there something wrong with the code below? Have I been forgetting something?
var handler = new WebRequestHandler();
handler.ServerCertificateValidationCallback = delegate { return true; };
handler.ClientCertificateOptions = ClientCertificateOption.Manual;
handler.ClientCertificates.Add(Cert); // Cert is the X509Certificate2 I use

using (var client = new HttpClient(handler))
{
    try
    {
        var response = await client.GetAsync(Endpoint); // Endpoint = https://foobar.cloudapp.net/
        if (response.IsSuccessStatusCode)
        {
            var a = await response.Content.ReadAsStringAsync();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        throw;
    }
}
Found where I went wrong: when creating the certificate I was using the .cer file, but it works now with the .pfx file (which also contains the private key).
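In other words, the client certificate has to be loaded from the .pfx (public certificate plus private key), not from the .cer (public part only). A minimal sketch, with a placeholder path and password:

// Load the client certificate from the .pfx so the private key is available
// for the TLS client-authentication handshake.
var cert = new X509Certificate2(@"C:\certs\splitmerge-client.pfx", "pfx-password");

var handler = new WebRequestHandler();
handler.ClientCertificateOptions = ClientCertificateOption.Manual;
handler.ClientCertificates.Add(cert);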
I am writing an integration web service which will consume various web services from a couple of different backend systems. I want to be able to parallelize non-dependent service calls and to cancel requests that take too long (since I have an SLA to meet).
To aid in parallel backend calls, I am using the async client APIs (generated by wsimport using the client-side JAX-WS binding customization files).
The issue I am having is that when I try to cancel a request, the Response<> appropriately marks the request as cancelled, but the actual request is not really cancelled. Apparently some part of the JAX-WS runtime submits a com.sun.xml.ws.api.pipe.Fiber to the run queue, and that is what actually performs the request. Cancelling the Response<> does not prevent these Fibers from running on the queue and making the request.
Has anyone run into this issue, or a similar one, before?
My code looks like this:
List<Response<QuerySubscriberResponse>> resps = new ArrayList<Response<QuerySubscriberResponse>>();

for (int i = 0; i < 10; i++) {
    resps.add(FPPort.querySubscriberAsync(req));
}

for (int i = 0; i < 10; i++) {
    logger.info("Waiting for " + i);
    try {
        // execution time for this request is 15 seconds, so we should always get a TimeoutException
        QuerySubscriberResponse re = resps.get(i).get(1, TimeUnit.SECONDS);
        logger.info("Got: " + new Marshaller().marshalDocumentToString(re));
    } catch (TimeoutException e) {
        logger.error(e);
        logger.error("Cancelled: " + resps.get(i).cancel(true));
        try {
            logger.info("Waiting for my timed out thing to finish -- technically I've cancelled it");
            // this causes a CancellationException as we would expect
            QuerySubscriberResponse re = resps.get(i).get();
            logger.info("Finished waiting for the cancelled req");
        } catch (Exception e1) {
            e1.printStackTrace();
        }
    } catch (Exception e) {
        logger.error(e);
    } finally {
        logger.info("");
        logger.info("");
    }
}
I would expect all of these requests to end up being cancelled; however, in reality they all continue to execute and only return when the backend finally decides to send a response.
As it turns out, this was indeed a bug in the JAX-WS implementation. Oracle has issued a patch (RHEL) against WLS 10.3.3 to address this issue.