POCO - Socket Reactor - Avoid Polling When Client Abnormally Ends

When a client abnormally ends (abends), the reactor seems to go into an indefinite polling state, consuming roughly 15% of the processor. If the client reconnects I'm still losing that 15%; I'm trying to determine what is lacking in my code to handle this properly.
When the client abends, _socket.available() immediately returns false, so in the else block I'm attempting to do the right thing. Doing the same thing I do when a client terminates normally, delete this, eliminates the processor issue, but the next time a client connects I get an allocation error. I'd like to understand why that is; what's the difference? Just putting a sleep in there solves everything, but onSocketReadable continues to be called with _socket.available() == false, so it remains a sort of orphaned active reactor. What am I missing? I also tried stopping the reactor; that stops the processor use, but the restarted client will no longer connect. There's something I don't understand there as well: it seems like a new reactor would be created just as it was initially?
void onSocketReadable(const AutoPtr<ReadableNotification>& pNf)
{
    // some socket implementations (windows) report available
    // bytes on client disconnect, so we double-check here
    if (_socket.available())
    {
        // No FIFO for now
        //int len = _socket.receiveBytes(_fifoIn);
        char* buffer = new char[65535];
        memset(buffer, 0, 65535);
        _socket.setReceiveBufferSize(65535);
        int n = _socket.receiveBytes(buffer, 65535);
        // use the actual byte count; the buffer is not null-terminated
        // if it fills up completely
        std::string json(buffer, n > 0 ? n : 0);
        delete [] buffer;
        if (json == "SHUTDOWN\r\n")
        {
            delete this;
            return;
        }
        try
        {
            std::string result = _processor.process(json, _sm);
            result.append("\r\n");
            _socket.sendBytes(result.data(), (int)result.length());
        }
        catch (Poco::Exception& e)
        {
            std::cout << e.message();
        }
    }
    else
    {
        // delete this;
        // return;
        // _reactor.stop();
        Sleep(10);
    }
}
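A note on the delete this path: POCO's reactor samples (e.g. the EchoServer example) unregister the handler's observers from the reactor before the handler is destroyed; deleting the handler while its observers are still registered leaves the reactor dispatching notifications into freed memory, which is consistent with the allocation error on reconnect described above. A minimal sketch of that pattern, with assumed names (ServiceHandler, _reactor, _socket are illustrative, not the asker's exact code):

#include <Poco/AutoPtr.h>
#include <Poco/NObserver.h>
#include <Poco/Net/SocketNotification.h>
#include <Poco/Net/SocketReactor.h>
#include <Poco/Net/StreamSocket.h>

using Poco::AutoPtr;
using Poco::NObserver;
using Poco::Net::ReadableNotification;
using Poco::Net::ShutdownNotification;
using Poco::Net::SocketReactor;
using Poco::Net::StreamSocket;

// Hypothetical handler class; names are assumptions.
class ServiceHandler
{
public:
    ServiceHandler(StreamSocket& socket, SocketReactor& reactor):
        _socket(socket),
        _reactor(reactor)
    {
        _reactor.addEventHandler(_socket, NObserver<ServiceHandler, ReadableNotification>(
            *this, &ServiceHandler::onSocketReadable));
        _reactor.addEventHandler(_socket, NObserver<ServiceHandler, ShutdownNotification>(
            *this, &ServiceHandler::onSocketShutdown));
    }

    ~ServiceHandler()
    {
        // Unregister both observers so the reactor stops polling this
        // (soon to be destroyed) handler.
        _reactor.removeEventHandler(_socket, NObserver<ServiceHandler, ReadableNotification>(
            *this, &ServiceHandler::onSocketReadable));
        _reactor.removeEventHandler(_socket, NObserver<ServiceHandler, ShutdownNotification>(
            *this, &ServiceHandler::onSocketShutdown));
    }

    void onSocketReadable(const AutoPtr<ReadableNotification>& pNf)
    {
        if (!_socket.available())
        {
            // Peer disconnected abnormally: clean up instead of spinning.
            delete this; // destructor deregisters the observers
            return;
        }
        // ... normal receive/process/send path ...
    }

    void onSocketShutdown(const AutoPtr<ShutdownNotification>& pNf)
    {
        delete this; // graceful shutdown takes the same cleanup path
    }

private:
    StreamSocket   _socket;
    SocketReactor& _reactor;
};

With the observers removed in the destructor, the reactor no longer invokes onSocketReadable for a dead connection, and a reconnecting client can be given a fresh handler without touching freed state.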

Related

How can I get Windows.Storage.Streams.IInputStream inputStream length?

I use HoloLens 2 as a client and my Unity server on my PC. (More discussion about this: How can I read byte array coming from server in UWP app?) My debugging gets stuck at await reader1.LoadAsync(256);. I tried everything to get my stream data but I couldn't. I don't want a constant value for the buffer; I need the exact stream size for the buffer. I tested this and it works if and only if the buffer size and the data stream size are equal. Or can you suggest other approaches?
try
{
    // Create the StreamSocket and establish a connection to the server.
    using (var streamSocket = new Windows.Networking.Sockets.StreamSocket())
    {
        // The server hostname that we will be establishing a connection to.
        var hostName = new Windows.Networking.HostName(host);
        // client is trying to connect...
        await streamSocket.ConnectAsync(hostName, port);
        // client connected!
        // Read data from the server.
        using (Windows.Storage.Streams.IInputStream inputStream = streamSocket.InputStream)
        {
            using (var reader1 = new Windows.Storage.Streams.DataReader(inputStream))
            {
                reader1.InputStreamOptions = Windows.Storage.Streams.InputStreamOptions.ReadAhead;
                reader1.UnicodeEncoding = Windows.Storage.Streams.UnicodeEncoding.Utf8;
                reader1.ByteOrder = Windows.Storage.Streams.ByteOrder.LittleEndian;
                // Should be the stream size !!!
                await reader1.LoadAsync(256);
                while (reader1.UnconsumedBufferLength > 0)
                {
                    var bytes1 = new byte[reader1.UnconsumedBufferLength];
                    reader1.ReadBytes(bytes1);
                    // Handle byte array internally!
                    HandleData(bytes1);
                    await reader1.LoadAsync(256);
                }
                reader1.DetachStream();
            }
        }
    }
    // close socket
}
catch (Exception ex)
{
    Windows.Networking.Sockets.SocketErrorStatus webErrorStatus =
        Windows.Networking.Sockets.SocketError.GetStatus(ex.GetBaseException().HResult);
}
The IInputStream of a StreamSocket can't seek, so we can't get the length of the stream. That's why we need to keep loading the input stream into a buffer until the stream is finished.
Looking at the code above: you have set InputStreamOptions to ReadAhead, which only moves on to the next step when the 256-byte buffer fills up. Please try setting InputStreamOptions to Partial instead:
reader1.InputStreamOptions = InputStreamOptions.Partial;
Update
If you want to get the length of the current message before loading it, we suggest you write the message length into the stream as a message header.
For example
string stringToSend = "PC client uses System.Net.Sockets.TcpClient, System.Net.Sockets.NetworkStream but UWP (HoloLens) uses Windows.Networking.Sockets.StreamSocket.";
var bytes = Encoding.UTF8.GetBytes(stringToSend);
writer.WriteInt32(bytes.Length);
writer.WriteBytes(bytes);
Receive client
while (true)
{
    // Read first 4 bytes (length of the subsequent string).
    uint sizeFieldCount = await reader.LoadAsync(sizeof(uint));
    if (sizeFieldCount != sizeof(uint))
    {
        // The underlying socket was closed before we were able to read the whole data.
        return;
    }
    // Read the string.
    int bytesLength = reader.ReadInt32();
    uint actualByteLength = await reader.LoadAsync((uint)bytesLength);
    if (actualByteLength != bytesLength)
    {
        // The underlying socket was closed before we were able to read the whole data.
        return;
    }
    // The message is complete; copy it out of the reader's buffer.
    var bytes = new byte[actualByteLength];
    reader.ReadBytes(bytes);
}
Hi Nico, sorry for the late update. I tried everything, but TCP is nearly impossible for HoloLens with a UWP app. So I tried UDP and it works perfectly (https://github.com/mbaytas/HoloLensUDP). I hope Microsoft puts out a TCP example for HoloLens 1 and 2 in the near future.

Setting ASIO no_delay option

I'm having trouble setting the no_delay option on an asio socket. The following code runs fine, except for the delay: my server receives the messages only after the 5000 ms expire.
#include <boost/asio.hpp>
#include <boost/thread.hpp>

using namespace boost::asio;

struct Client
{
    io_service svc;
    ip::tcp::socket sock;

    Client() : svc(), sock(svc)
    {
        ip::tcp::resolver resolver(svc);
        ip::tcp::resolver::iterator endpoint = resolver.resolve(
            boost::asio::ip::tcp::resolver::query("127.0.0.1", "32323"));
        connect(sock, endpoint);
    }

    void send(std::string const& message)
    {
        sock.send(buffer(message));
    }
};

int main()
{
    Client client;
    client.send("hello world\n");
    client.send("bye world\n");
    boost::this_thread::sleep_for(boost::chrono::milliseconds(5000));
}
When trying to set the no_delay option I have a few choices:
1) Set the option before connecting:
Client() : svc(), sock(svc)
{
    ip::tcp::resolver resolver(svc);
    ip::tcp::resolver::iterator endpoint = resolver.resolve(
        boost::asio::ip::tcp::resolver::query("127.0.0.1", "32323"));
    sock.set_option(ip::tcp::no_delay(true));
    connect(sock, endpoint);
}
However, this throws set_option: Bad file descriptor.
2) Set the option after connecting:
Client() : svc(), sock(svc)
{
    ip::tcp::resolver resolver(svc);
    ip::tcp::resolver::iterator endpoint = resolver.resolve(
        boost::asio::ip::tcp::resolver::query("127.0.0.1", "32323"));
    connect(sock, endpoint);
    sock.set_option(ip::tcp::no_delay(true));
}
However, in this case the option has no effect and I still see the delay. According to boost::asio with no_delay not possible?, I need to set the option after I've opened the socket but before I've connected it. So I've tried this:
Client() : svc(), sock(svc)
{
    ip::tcp::endpoint endpoint(ip::address::from_string("127.0.0.1"), 32323);
    sock.open(ip::tcp::v4());
    sock.set_option(ip::tcp::no_delay(true));
    sock.connect(endpoint);
}
However, I still see no effect. How can I set this option?
Edit: It's possible that I am not setting the option correctly on the server-side. This is the complete server code:
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::acceptor acceptor(io_service,
        boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 32323));
    boost::asio::ip::tcp::socket socket(io_service);
    acceptor.accept(socket);
    socket.set_option(boost::asio::ip::tcp::no_delay(true));
    boost::asio::streambuf sb;
    boost::system::error_code ec;
    while (boost::asio::read(socket, sb, ec))
    {
        std::cout << "received:\n" << &sb;
    }
}
The client is properly setting the ip::tcp::no_delay option. However, the delay being observed is not the result of this option. Instead, it is the result of the server attempting to read more data than the client has sent; when the client exits after sleeping 5000 ms, the server's read operation completes with an error.
The read() operation initiated by the server will complete when either it has read streambuf.max_size() bytes or an error occurs. The streambuf's max size defaults to std::numeric_limits<std::size_t>::max() and can be configured in its constructor. In this case, the server attempts to read std::numeric_limits<std::size_t>::max() bytes, but the client only sends 22 bytes, sleeps 5000 ms, then closes the socket. When the server observes that the connection has closed, the read() operation completes with 22 bytes read and an error code of boost::asio::error::eof.
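As a side note, to see each message as it arrives (and hence observe the effect of no_delay), the server can complete one read per newline-terminated message instead of reading until EOF. A minimal sketch, assuming the same port and framing as above:

#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::acceptor acceptor(io_service,
        boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 32323));
    boost::asio::ip::tcp::socket socket(io_service);
    acceptor.accept(socket);
    socket.set_option(boost::asio::ip::tcp::no_delay(true));

    boost::asio::streambuf sb;
    boost::system::error_code ec;
    // Complete one read per newline-terminated message instead of
    // waiting for the connection to close.
    while (boost::asio::read_until(socket, sb, '\n', ec))
    {
        std::istream is(&sb);
        std::string line;
        std::getline(is, line);
        std::cout << "received: " << line << '\n';
    }
}

With this loop, "hello world" and "bye world" are printed as soon as they arrive rather than after the client's 5000 ms sleep.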

How can I write code using threading in my ASP.NET application using C#?

I am developing an ASP.NET application using C#. In it I wrote code for sending a single mail to multiple mail IDs, using a for-loop to send the mails one after another.
So here are my questions:
1. How can I stop or pause sending mails when I click a "stop" button?
2. Is it possible to kill or pause the process of continuously sending mails?
for (int i = 0; i < B.Length; i++)
{
    if (txt_To.Text == "")
    {
        txt_To.Text = B[i].ToString();
        Methord1(); ////////////// UID, PWD code
        int k = i + 1;
        Session["num"] = k;
        txt_To.Text = "";
        Label4.Text = Session["NUM"].ToString() + " Mail sent ...";
    }
}
If I am understanding your question correctly, you have an emailing process you have started in a thread, and you want to be able to terminate the thread when a stop button is clicked. Is that correct?
The correct way to do this is to create a flag you can set on the threaded class to 'ask' it to terminate; force-terminating a thread is a terrible, terrible thing.
So, using your existing method, I have added a bool check that determines whether the thread keeps executing. You will also need a bool field in the class definition that runs all this code:
private volatile bool KeepRunning = true;

public void SendEmails()
{
    for (int i = 0; i < B.Length; i++)
    {
        if (!KeepRunning) return; //<--- this is the new line
        if (txt_To.Text == "")
        {
            txt_To.Text = B[i].ToString();
            Methord1(); ////////////// UID, PWD code
            int k = i + 1;
            Session["num"] = k;
            txt_To.Text = "";
            Label4.Text = Session["NUM"].ToString() + " Mail sent ...";
        }
    }
}
To be able to access the KeepRunning variable it needs to be marked as volatile, to indicate that you will access it from multiple threads. Now you can invoke the SendEmails() method in a separate thread, and you have a way of asking it to stop later on.
If that is the case then you will need to retain a reference to the thread you have started the process in:
Thread MyThread = new Thread(new ThreadStart(SendEmails));
MyThread.Start();
Now the thread is running and looping.
To terminate the thread (in your 'stop' button handler or wherever), you just set KeepRunning to false, and the next time the loop executes it will drop out naturally on that line. You should also wait for the worker thread to rejoin the main thread before continuing:
KeepRunning = false;
MyThread.Join();
Please note this is all example code and hasn't been tested.
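For reference, here is the same cooperative-cancellation pattern sketched in C++, with std::atomic<bool> playing the role of the volatile flag (all names here are illustrative, not part of the asker's code):

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> keepRunning{true};  // the cooperative stop flag

void sendEmails()
{
    for (int i = 0; i < 100; ++i)
    {
        if (!keepRunning) return;     // checked once per iteration
        // ... send one mail here (simulated by a short sleep) ...
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        std::cout << i + 1 << " mail sent ...\n";
    }
}

int main()
{
    std::thread worker(sendEmails);   // start the loop on a worker thread
    std::this_thread::sleep_for(std::chrono::seconds(1));
    keepRunning = false;              // the 'stop' button was pressed
    worker.join();                    // wait for the worker to rejoin
}

The worker only ever exits at the check it performs itself, which is what makes this safe compared to force-terminating the thread.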

TinyOS reception after second reply doesn't work

I'm having trouble with my nesC code. In my code I send a first packet using AMSend.send(AM_BROADCAST_ADDR, &packet, sizeof(rd_message)).
After that, when a message is received in event message_t* Receive.receive(message_t* bufPtr, void* payload, uint8_t len), a reply is generated and sent successfully, but the other nodes are not able to receive the reply. In particular, I have to process an RREP reply, following the basics of the DSR protocol.
This is my code:
implementation
{
    /**********************Variables used*****************************/
    short phase = 0;
    message_t packet;
    bool locked;

    event void Boot.booted()
    {
        dbg("Boot", "Node %hhu booted\n", TOS_NODE_ID);
        call AMControl.start();
    }

    [cut]

    event void MilliTimer.fired()
    {
        /*This contains the discovery message*/
        rd_message *rreq = NULL;
        if (phase == 0)
        {
            //Route discovery phase
            rreq = (rd_message *) call Packet.getPayload(&packet, (int) NULL);
            if (call AMSend.send(AM_BROADCAST_ADDR, &packet, sizeof(rd_message)) == SUCCESS)
            {
                //locked = TRUE;
            }
            return;
        }
    }

    event message_t* Receive.receive(message_t* bufPtr, void* payload, uint8_t len)
    {
        rd_message *received_mex = NULL;
        rd_message *reply_mex = NULL;
        int i, j;
        received_mex = (rd_message*) payload; //cast to rd_message
        if (received_mex->type == RREQ)
        {
            reply_mex = (rd_message*) call Packet.getPayload(&packet, (int) NULL); //reply packet is created
            if (received_mex->sender_id == TOS_NODE_ID)
            {
                //The original sender received its RREQ. Stopping the forward procedure
                return bufPtr; //FIXME: see if it's correct to return null here
            }
            //RREQ message case 1: I am not the receiver_id
            if (received_mex->receiver_id != TOS_NODE_ID)
            {
            }
            else if (received_mex->receiver_id == TOS_NODE_ID)
            {
                //I am the receiver of the RREQ message. I can now reply with a RREP
            }
            if (call AMSend.send(AM_BROADCAST_ADDR, &packet, sizeof(rd_message)) == SUCCESS)
            {
                dbg("dsr", "packet sent\n");
                //locked = TRUE;
            }
            else
            {
                dbg("dsr", "failed to send reply packet.\n");
            }
        }
        else if (received_mex->type == RREP)
        {
            //DO SOMETHING WITH THE NEW RECEIVED MESSAGE HERE
        }
        return bufPtr;
    }

    event void AMSend.sendDone(message_t* bufPtr, error_t error)
    {
        if (&packet == bufPtr)
        {
            //locked = FALSE;
        }
    }
}
I removed all the logic from the code to focus on the message exchange calls. I hope that someone can help me... thanks.
TinyOS follows an ownership discipline almost everywhere: at any point in time, every "memory object" - a piece of memory, typically a whole variable or a single array element - should be owned by a single module. A command like send is said to pass ownership of its msg argument from caller to callee.
The main problem with your code is that in the Receive.receive event you are using the packet variable in two ways:
- as the outgoing packet, by calling call AMSend.send(AM_BROADCAST_ADDR, &packet, sizeof(rd_message))
- as the buffer for the next incoming packet, by executing return bufPtr;
The result of this code is unpredictable (since receiving a packet will corrupt the outgoing packet). To solve your problem, you should use a Pool<message_t> component. The typical pseudocode for a program like yours looks like this:
receive(m):
    if I don't need to process this message, return m
    if my free packet list is empty, return m
    else
        process/forward m
        return entry from free packet list
This is a rough implementation of a module that uses Pool<message_t> as its free packet list to manage communication:
module Foo
{
    /* this is our free packet list */
    uses interface Pool<message_t>;
    uses interface Receive;
    uses interface AMSend;
    uses interface Timer<TMilli> as MilliTimer;
}
implementation
{
    event void MilliTimer.fired()
    {
        message_t *packet;
        /* get a free packet */
        packet = call Pool.get();
        if (packet)
        {
            /* code to send the packet */
        }
    }

    event void AMSend.sendDone(message_t *msg, error_t error)
    {
        /* the send has finished; put the packet back into the free packet pool */
        /* check here that msg was actually taken from the Pool */
        call Pool.put(msg);
    }

    event message_t* Receive.receive(message_t* msg, void* payload, uint8_t len)
    {
        if (!haveToProcess(msg))
            return msg; // don't have to process this message
        if (call Pool.empty())
            return msg; // memory exhausted
        /* ... */
        /* code that processes the packet */
        call AMSend.send(AM_BROADCAST_ADDR, msg, sizeof(rd_message));
        /* return a free message_t* as buffer to store the next received packet */
        return call Pool.get();
    }
}
If you don't like Pool, you can use a message_t array as a circular buffer; take a look at the BaseStation code for a hint on how to do so (a rough sketch of the swap idea follows).
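This C-style sketch shows the buffer-swap idea behind that approach; all names are hypothetical, and in a real nesC module the variables would live inside implementation { }:

/* Hypothetical sketch of a BaseStation-style buffer swap; not TinyOS API. */
typedef struct { unsigned char data[128]; } message_t;  /* stand-in type */

enum { QUEUE_SIZE = 4 };

static message_t  pool[QUEUE_SIZE];       /* backing storage */
static message_t* freeBufs[QUEUE_SIZE];   /* ring of free buffers */
static unsigned   head = 0, freeCount = 0;

static void initBuffers(void)
{
    unsigned i;
    for (i = 0; i < QUEUE_SIZE; i++)
        freeBufs[i] = &pool[i];
    freeCount = QUEUE_SIZE;
}

/* On receive: keep msg for processing and hand the stack a free buffer. */
static message_t* onReceive(message_t* msg)
{
    message_t* spare;
    if (freeCount == 0)
        return msg;               /* no spare buffer: drop this message */
    spare = freeBufs[head];
    head = (head + 1) % QUEUE_SIZE;
    freeCount--;
    /* ... queue msg for forwarding; once sendDone fires, push msg back
       into freeBufs and increment freeCount ... */
    return spare;
}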
For more details, I suggest you read the TinyOS programming book, especially section 3.5.1.
As for your comment:
return bufPtr; //FIXME: see if it's correct to return null here
you can never return NULL from a receive event, since TinyOS always needs a buffer in which to store incoming packets.

JAX-WS client ASYNC service invocation using WLS 10.3.3

I am writing an integration web service which will consume various web services from a couple of different backend systems. I want to be able to parallelize non-dependent service calls and to cancel requests that take too long (since I have an SLA to meet).
To aid in parallel backend calls, I am using the async client APIs (generated by wsimport using the client-side JAX-WS binding alteration files).
The issue I am having is that when I try to cancel a request, the Response<> appropriately marks the request as cancelled, but the actual request is not really cancelled. Apparently some part of the JAX-WS runtime submits a com.sun.xml.ws.api.pipe.Fiber to the run queue, and that is what actually performs the request. Calling cancel on the Response<> does not prevent these Fibers from running on the queue and making the request.
Has anyone run into this issue or a similar issue before?
My code looks like this:
List<Response<QuerySubscriberResponse>> resps = new ArrayList<Response<QuerySubscriberResponse>>();
for (int i = 0; i < 10; i++) {
    resps.add(FPPort.querySubscriberAsync(req));
}
for (int i = 0; i < 10; i++) {
    logger.info("Waiting for " + i);
    try {
        // execution time for this request is 15 seconds, so we should
        // always get a TimeoutException
        QuerySubscriberResponse re = resps.get(i).get(1, TimeUnit.SECONDS);
        logger.info("Got: " + new Marshaller().marshalDocumentToString(re));
    } catch (TimeoutException e) {
        logger.error(e);
        logger.error("Cancelled: " + resps.get(i).cancel(true));
        try {
            logger.info("Waiting for my timed out request to finish -- technically I've cancelled it");
            // this causes a CancellationException, as we would expect
            QuerySubscriberResponse re = resps.get(i).get();
            logger.info("Finished waiting for the cancelled req");
        } catch (Exception e1) {
            e1.printStackTrace();
        }
    } catch (Exception e) {
        logger.error(e);
    } finally {
        logger.info("");
        logger.info("");
    }
}
I would expect all of these requests to end up being cancelled; however, in reality they all continue to execute and only return when the backend finally decides to send us a response.
As it turns out, this was indeed a bug in the JAX-WS implementation. Oracle has issued a patch (RHEL) against WLS 10.3.3 to address this issue.
