TcpClient.BeginRead method collision when using asynchronous callbacks

I am trying to create a TCP socket server that accepts multiple clients. However, for the past couple of days I haven't been able to overcome a certain obstacle. I believe I've isolated the problem to the TcpClient.BeginRead(callbackMethod) method.
Basically, distinct clients start this method, but the callback isn't invoked until they actually send data on their outgoing stream. However, the Encoding.ASCII.GetString() call I perform on the bytes that come in via the stream outputs an unwanted "0/0/0/" depending on the order in which the BeginRead calls were started. Why is this happening? Please help.
The Situation/Scenario, in Order
Event 1) ClientOne connects, which triggers a BeginRead with an asynchronous callback. (The callback is now waiting for data.)
Event 2) ClientTwo connects, which triggers a BeginRead with an asynchronous callback. (The callback is now waiting for data.)
Event 3) If ClientOne sends a message first, the data is definitely serviced, but Encoding.ASCII.GetString(3 arguments) outputs "0/" for every byte. I think ClientTwo's BeginRead is somehow interfering with ClientOne's BeginRead.
Event 3 (not 4)) If ClientTwo sends a message first, the data is serviced and decoded/stringified correctly using Encoding.ASCII.GetString(3 arguments).
Source Code
void onCompleteAcceptTcpClient(IAsyncResult iar)
{
    TcpListener tcpl = (TcpListener)iar.AsyncState;
    try
    {
        mTCPClient = tcpl.EndAcceptTcpClient(iar);
        var ClientEndPoint = mTCPClient.Client.RemoteEndPoint;
        Console.WriteLine(ClientEndPoint.ToString());
        Console.WriteLine("Client Connected...");
        _sockets.Add(mTCPClient);
        tcpl.BeginAcceptTcpClient(onCompleteAcceptTcpClient, tcpl);
        mRx = new byte[512];
        _sockets.Last().GetStream().BeginRead(mRx, 0, mRx.Length, onCompleteReadFromTCPClientStream, mTCPClient);
    }
    catch (Exception exc)
    {
        MessageBox.Show(exc.Message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}
void onCompleteReadFromTCPClientStream(IAsyncResult iar)
{
    foreach (string message in messages) // for testing previously saved messages
    {
        printLine("Checking previous saved messages: " + message);
    }
    TcpClient tcpc;
    int nCountReadBytes = 0;
    try
    {
        tcpc = (TcpClient)iar.AsyncState;
        nCountReadBytes = tcpc.GetStream().EndRead(iar);
        printLine(nCountReadBytes.GetType().ToString());
        if (nCountReadBytes == 0)
        {
            MessageBox.Show("Client disconnected.");
            return;
        }
        string foo;
        /* THE ENCODING OUTPUTS "0/" FOR EVERY BYTE WHEN AN OLDER CALLBACK'S DATA IS DECODED */
        foo = Encoding.ASCII.GetString(mRx, 0, nCountReadBytes);
        messages.Add(foo);
        foreach (string message in messages)
        {
            printLine(message);
        }
        mRx = new byte[512];
        // (re-arms the read callback)
        tcpc.GetStream().BeginRead(mRx, 0, mRx.Length, onCompleteReadFromTCPClientStream, tcpc);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}
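For reference, the symptom described above is consistent with all connections sharing the single mRx buffer and mTCPClient field: when a second client connects, mRx is reassigned, so an older callback ends up decoding the freshly allocated (all-zero) array instead of the buffer its own BeginRead actually filled. A minimal sketch of keeping a buffer per connection (the ConnectionState class and method names below are illustrative, not part of the original code):

// Requires: using System; using System.Net.Sockets; using System.Text;

// Hypothetical per-connection state: each accepted client carries its own buffer.
class ConnectionState
{
    public TcpClient Client { get; }
    public byte[] Buffer { get; } = new byte[512];
    public ConnectionState(TcpClient client) { Client = client; }
}

void OnAccept(IAsyncResult iar)
{
    var listener = (TcpListener)iar.AsyncState;
    var client = listener.EndAcceptTcpClient(iar);
    listener.BeginAcceptTcpClient(OnAccept, listener); // keep accepting further clients

    var state = new ConnectionState(client);
    client.GetStream().BeginRead(state.Buffer, 0, state.Buffer.Length, OnRead, state);
}

void OnRead(IAsyncResult iar)
{
    var state = (ConnectionState)iar.AsyncState;
    int n = state.Client.GetStream().EndRead(iar);
    if (n == 0) return; // client disconnected

    string text = Encoding.ASCII.GetString(state.Buffer, 0, n);
    Console.WriteLine(text);

    // Re-arm the read with the same per-connection buffer.
    state.Client.GetStream().BeginRead(state.Buffer, 0, state.Buffer.Length, OnRead, state);
}

Because the buffer travels in IAsyncResult.AsyncState, each callback decodes exactly the bytes its own read produced, regardless of how many clients connect in between.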

Related

.Net Core SignalR: how to persist connections

I have an infinitely running process that pushes events from a server to subscribed SignalR clients. There may be long periods where no events take place on the server.
Currently, the process works fine for a short period of time, but eventually the client stops responding to events pushed by the server. I can see the events taking place on the server side, but the client becomes unaware of them. I am assuming this symptom means some timeout period has been reached and the client has unsubscribed from the Hub.
I added some code to reconnect if the connection was dropped, and that has helped, but the client still eventually stops seeing new events. I know there are many different timeout values that can be adjusted, but it's all pretty confusing to me and not sure if I should even be tinkering with them.
try
{
    myHubConnection = new HubConnectionBuilder()
        .WithUrl(hubURL, HttpTransportType.WebSockets)
        .AddMessagePackProtocol()
        .AddJsonProtocol(options =>
        {
            options.PayloadSerializerSettings.ContractResolver = new DefaultContractResolver();
        })
        .Build();

    // Client method that can be called by server
    myHubConnection.On<string>("ReceiveInfo", json =>
    {
        // Action performed when method called by server
        pub.ShowInfo(json);
    });

    try
    {
        // connect to Hub
        await myHubConnection.StartAsync();
        msg = "Connected to Hub";
    }
    catch (Exception ex)
    {
        appLog.WriteError(ex.Message);
        msg = "Error: " + ex.Message;
    }

    // Reconnect lost Hub connection
    myHubConnection.Closed += async (error) =>
    {
        try
        {
            await Task.Delay(new Random().Next(0, 5) * 1000);
            await myHubConnection.StartAsync();
            msg = "Reconnected to Hub";
            appLog.WriteWarning(msg);
        }
        catch (Exception ex)
        {
            appLog.WriteError(ex.Message);
            msg = "Error: " + ex.Message;
        }
    };
This all works as expected for a while, then stops without errors. Is there something I can do to (1) ensure the client NEVER unsubscribes, and (2) ensure the client resubscribes to the events if the connection is lost (a network outage, for example)? This client must NEVER time out or give up trying to reconnect.
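One direction worth noting (a sketch that assumes the client can move to the ASP.NET Core SignalR 3.x+ client package, which is not stated in the original post): the .NET client's WithAutomaticReconnect accepts a custom IRetryPolicy, and a policy that never returns null keeps retrying indefinitely, instead of relying on a hand-rolled Closed handler:

// Sketch only: assumes the Microsoft.AspNetCore.SignalR.Client 3.x+ package.
public class RetryForeverPolicy : IRetryPolicy
{
    public TimeSpan? NextRetryDelay(RetryContext retryContext)
    {
        // Returning null would stop reconnecting, so always return a delay (capped at 30s).
        var seconds = Math.Min(30, Math.Pow(2, retryContext.PreviousRetryCount));
        return TimeSpan.FromSeconds(seconds);
    }
}

var connection = new HubConnectionBuilder()
    .WithUrl(hubURL, HttpTransportType.WebSockets)
    .WithAutomaticReconnect(new RetryForeverPolicy())
    .Build();

// Events raised by the server while the client was disconnected are not replayed,
// so the Reconnected handler is the natural place to re-sync missed state.
connection.Reconnected += async connectionId =>
{
    await connection.InvokeAsync("ResyncState"); // hypothetical hub method name
};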

Is there a way to specify the wait time of retrying a message?

Is there a way to specify the wait time for retrying a message for a particular exception?
E.g. if an object is in SomethingInProgress status, it throws a SomethingInProgressException and I want the message to be retried after 40m. Or is it more appropriate to raise a SomethingInProgressEvent and use bus.Defer?
This is part of the reason why Rebus does not have the concept of second-level retries - I've simply not seen any way that this function could be created in a way that was generic and still flexible enough.
To answer your question shortly: no, there's no (built-in) way of varying the time between retries for a particular exception. In fact, there's no way to configure a wait time between retries at all - failing messages will be retried as fast as possible, and then moved to the error queue if they keep failing, to avoid "clogging up the pipes".
In your case, I suggest you do something like this:
public void Handle(MyMessage message)
{
    var headers = MessageContext.GetCurrent().Headers;
    var deliveryAttempt = headers.ContainsKey("attempt_no")
        ? Convert.ToInt32(headers["attempt_no"])
        : 0;

    try
    {
        DoWhateverWithThe(message);
    }
    catch (OneKindOfException e)
    {
        if (deliveryAttempt > 5)
        {
            bus.Advanced.Routing.ForwardCurrentMessage("error");
            return;
        }
        bus.AttachHeader(message, "attempt_no", (deliveryAttempt + 1).ToString());
        bus.Defer(TimeSpan.FromSeconds(20), message);
    }
    catch (AnotherKindOfException e)
    {
        if (deliveryAttempt > 5)
        {
            bus.Advanced.Routing.ForwardCurrentMessage("error");
            return;
        }
        bus.AttachHeader(message, "attempt_no", (deliveryAttempt + 1).ToString());
        bus.Defer(TimeSpan.FromMinutes(2), message);
    }
}
which I just wrote off the top of my head without being 100% certain that it actually compiles ... but the gist of it is that we track how many delivery attempts we've made in a custom header on the message, bus.Defer-ing the message for an appropriate time span on each failed delivery attempt, and forwarding the message straight to the error queue once our maximum number of delivery attempts has been exceeded.
I hope that makes sense :)
A more recent example of how to do this is:
public async Task Handle(IFailed<MyMessage> message)
{
    var maxAttempts = 10;
    var optionalHeaders = new Dictionary<string, string>();

    if (message.Headers != null && message.Headers.ContainsKey("attemptNumber"))
    {
        // increment the attempt number
        var attemptNumber = int.Parse(message.Headers["attemptNumber"]);
        attemptNumber++;
        optionalHeaders.Add("attemptNumber", attemptNumber.ToString());

        if (attemptNumber > maxAttempts)
        {
            // log an "I give up" message; the message will move to the dead-letter queue
            return;
        }
    }
    else
    {
        optionalHeaders.Add("attemptNumber", "1");
    }

    // if the message failed to process, defer processing for 5 minutes and try again
    await Bus.Defer(TimeSpan.FromMinutes(5), message.Message, optionalHeaders);
}
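For completeness, an IFailed<MyMessage> handler like the one above is only invoked if second-level retries are enabled when the bus is configured. A rough sketch of that configuration (the transport and queue name below are placeholders, not taken from the question, and newer Rebus versions may name the retry options slightly differently):

// Sketch: without this, Rebus only performs plain retries and the IFailed<T> handler is never called.
var activator = new BuiltinHandlerActivator();

Configure.With(activator)
    .Transport(t => t.UseMsmq("my-input-queue"))   // placeholder transport and queue name
    .Options(o => o.SimpleRetryStrategy(secondLevelRetriesEnabled: true))
    .Start();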

SignalR Context Thread Safety

We are using SignalR to push messages from server to client. Some of the things we are using server broadcast for:
Live notifications
Updating changes of shared data
Chat like functionality
One of our devs started with the StockTicker example, and we expanded it to push all of our different message types. Here is our general scheme:
private void RunJobs()
{
    _jobs = GetAllJobs();
    while (true)
    {
        bool workDone = false;
        for (int i = 0; i < _jobs.Count; i++)
        {
            var j = _jobs.ElementAt(i);
            bool workToDo = j.MessageAvailable();
            workDone = workDone || workToDo;
            if (workToDo)
            {
                var message = j.GetMessage();
                _threadPool.QueueWorkItem(ProcessJob, j, message);
            }
        }
        if (!workDone)
        {
            Thread.Sleep(_sleepTime);
        }
    }
}

/// <summary>
/// Method called by threads to process a queued-up work item (ISignalRJob).
/// </summary>
/// <param name="job">Job to run.</param>
/// <param name="message">Message to process.</param>
private void ProcessJob(ISignalRJob job, QueueMessage message)
{
    try
    {
        job.ProcessMessage(message);
    }
    catch (Exception e)
    {
        // handle exception
    }
}
As each job processes, it performs an operation like:
protected override void ProcessMessage(QueueMessage message)
{
    var nqm = JsonConvert.DeserializeObject<NotificationQueueMessage>(message.Body);
    var notification = webService.GetNotification(notification.Id);
    foreach (var userConnectionId in GetUserConnectionIds(nqm.UserId))
    {
        _signalRConnectionContext.Clients.Client(userConnectionId).pushNotification(notification);
    }
}
A thread monitors a series of queues for messages. If a message turns up, it is popped off the queue and a new thread is started to process it (ProcessJob). The job then makes any service/DB calls necessary to build the client message and pushes it to the client.
The service seems to work, but periodically the client stops receiving messages, although I have verified they are being sent from the server. Is it possible that pushing to a client connection from multiple threads puts it in a bad state?
Should I perhaps return the results of the QueueMessage processing to the main SignalR thread and push them all from there synchronously?
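For reference, one way to try the "return the results to a single thread" idea from the question is to have the worker threads only build the payloads and hand them to one dedicated sender loop, so all SignalR pushes happen sequentially. A rough sketch (the PendingPush type and the field/method names are illustrative, not existing code):

// Requires: using System.Collections.Concurrent;
// Hypothetical type: carries everything the sender loop needs for one push.
public class PendingPush
{
    public string ConnectionId { get; set; }
    public object Payload { get; set; }
}

private readonly BlockingCollection<PendingPush> _pushQueue = new BlockingCollection<PendingPush>();

// Started once on a dedicated thread (or long-running task); the only place that touches SignalR.
private void SendLoop()
{
    foreach (var push in _pushQueue.GetConsumingEnumerable())
    {
        try
        {
            _signalRConnectionContext.Clients.Client(push.ConnectionId).pushNotification(push.Payload);
        }
        catch (Exception)
        {
            // log and continue; one failed push should not kill the sender loop
        }
    }
}

// Worker threads call this instead of pushing to SignalR directly.
private void QueuePush(string connectionId, object payload)
{
    _pushQueue.Add(new PendingPush { ConnectionId = connectionId, Payload = payload });
}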

Is there a notification when ASP.NET Web API completes sending to the client

I'm using Web API to stream large files to clients, but I'd like to log if the download was successful or not. That is, if the server sent the entire content of the file.
Is there some way to get a callback or event when the HttpResponseMessage completes sending data?
Perhaps something like this:
var stream = GetMyStream();
var response = new HttpResponseMessage(HttpStatusCode.OK);
response.Content = new StreamContent(stream);
response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

// This doesn't exist, but it illustrates what I'm trying to do.
response.OnComplete(context =>
{
    if (context.Success)
        Log.Info("File downloaded successfully.");
    else
        Log.Warn("File download was terminated by client.");
});
EDIT: I've now tested this using a real connection (via fiddler).
I inherited StreamContent and added my own OnComplete action which checks for an exception:
public class StreamContentWithCompletion : StreamContent
{
    public StreamContentWithCompletion(Stream stream) : base(stream) { }

    public StreamContentWithCompletion(Stream stream, Action<Exception> onComplete) : base(stream)
    {
        this.OnComplete = onComplete;
    }

    public Action<Exception> OnComplete { get; set; }

    protected override Task SerializeToStreamAsync(Stream stream, TransportContext context)
    {
        var t = base.SerializeToStreamAsync(stream, context);
        t.ContinueWith(x =>
        {
            if (this.OnComplete != null)
            {
                // The task will be in a faulted state if something went wrong.
                // I observed the following exception when I aborted the fiddler session:
                // 'System.Web.HttpException (0x800704CD): The remote host closed the connection.'
                if (x.IsFaulted)
                    this.OnComplete(x.Exception.GetBaseException());
                else
                    this.OnComplete(null);
            }
        }, TaskContinuationOptions.ExecuteSynchronously);
        return t;
    }
}
Then I use it like so:
var stream = GetMyStream();
var response = new HttpResponseMessage(HttpStatusCode.OK);
response.Content = new StreamContentWithCompletion(stream, ex =>
{
    if (ex == null)
        Log.Info("File downloaded successfully.");
    else
        Log.Warn("File download was terminated by client.");
});
response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
return response;
I am not sure if there is a direct signal that everything went OK, but you can use a trick: check that the connection still exists right after you have fully sent the file and just before you end the response.
For example, Response.IsClientConnected returns true if the client is still connected, so you can check something like:
// send the file, then flush
Response.Flush();

// now that the file is fully sent, check if the client is still connected
if (Response.IsClientConnected)
{
    // log that everything looks OK up to the last byte
}
else
{
    // the client is not connected, so some data may have been lost
}

// and now close the connection
Response.End();
"if the server sent the entire content of the file"
Actually, there is nothing to do :)
This might sound very simplistic, but you will know because an exception is raised if sending fails - that covers the server delivering the content, as opposed to the client cancelling halfway. Note that IsClientConnected is based on the ASP.NET HttpResponse, not on Web API.

quartz scheduler sending multiple email notifications

Hi all, I am using the Quartz scheduler for scheduling a job in an ASP.NET MVC application. This scheduler runs a job at a fixed interval.
http://quartznet.sourceforge.net/
The service I have created basically runs every minute. It reads messages from the message queue (a database in my case) every minute, sends an email, and updates the message's sent status to true.
I am having some problems though. To be specific, the service sends the same email twice, for the reasons mentioned below.
In some cases the service is invoked again as soon as an email has been sent, before the database update happens. Because the database update has not yet happened when the service runs again, the already-processed message is read from the database as unread and gets re-sent. The same message is thus read twice from the database, and the service ends up sending it twice.
How do I handle this case in my code?
public void Execute(JobExecutionContext context)
{
    List<QueuedEmail> lstQueuedEmail =
        _svcQueuedEmail.Filter(x => x.IsSent == false).Take(NO_OF_MAILS_TO_SEND).ToList();

    if (lstQueuedEmail.Count > 0)
    {
        foreach (var queuedEmail in lstQueuedEmail)
        {
            try
            {
                bool emailSendStatus = false;
                emailSendStatus = EmailHelper.SendEmail(queuedEmail.From, queuedEmail.To, queuedEmail.Subject,
                    queuedEmail.Body, queuedEmail.FromName);

                QueuedEmail objQueuedEmail =
                    _svcQueuedEmail.Filter(x => x.Id == queuedEmail.Id).FirstOrDefault();

                if (emailSendStatus)
                {
                    objQueuedEmail.IsSent = true;
                    objQueuedEmail.SentOnUtc = DateTime.UtcNow;
                }
                else
                {
                    objQueuedEmail.IsSent = false;
                    if (objQueuedEmail.SentTries == null)
                    {
                        objQueuedEmail.SentTries = 1;
                    }
                    else
                    {
                        objQueuedEmail.SentTries += 1;
                    }
                }
                _svcQueuedEmail.Update(objQueuedEmail);
            }
            catch (Exception)
            {
                // log error
            }
        }
    }
}
Assuming you have two states for an email, "Pending" and "Sent":
You should add a third, intermediate state called "Sending", and as soon as you read the email from the queue you should change its status to "Sending" so other threads/services won't pick it up again.
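A rough sketch of that claim-before-send idea, reusing the names from the question's job (the EmailStatus enum and Status property are assumed additions to the schema, not existing code):

public void Execute(JobExecutionContext context)
{
    // 1. Claim a batch: flip Pending -> Sending before sending anything, so a second
    //    run of the job cannot pick up the same rows while this one is still working.
    //    Ideally this claim is atomic (a transaction or a conditional UPDATE).
    List<QueuedEmail> batch = _svcQueuedEmail.Filter(x => x.Status == EmailStatus.Pending)
                                             .Take(NO_OF_MAILS_TO_SEND)
                                             .ToList();
    foreach (var email in batch)
    {
        email.Status = EmailStatus.Sending;
        _svcQueuedEmail.Update(email);
    }

    // 2. Send the claimed batch; only mark Sent (or put back to Pending) afterwards.
    foreach (var email in batch)
    {
        bool sent = EmailHelper.SendEmail(email.From, email.To, email.Subject, email.Body, email.FromName);
        email.Status = sent ? EmailStatus.Sent : EmailStatus.Pending;
        if (sent) email.SentOnUtc = DateTime.UtcNow;
        _svcQueuedEmail.Update(email);
    }
}

If overlapping executions of the same job are what triggers the race, Quartz.NET's [DisallowConcurrentExecution] attribute on the job class is also worth a look, since it prevents two instances of the job from running at the same time.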
