I'm having trouble with my nesC code. In my code I send a first packet using AMSend.send(AM_BROADCAST_ADDR, &packet, sizeof(rd_message)).
After that, when a message is received in event message_t* Receive.receive(message_t* bufPtr, void* payload, uint8_t len), a reply is generated and sent successfully, but the other nodes are not able to receive the reply. In particular, I have to process an RREP reply, following the basics of the DSR protocol.
This is my code:
implementation{
/**********************Variables used*****************************/
short phase = 0;
message_t packet;
bool locked;
event void Boot.booted(){
dbg("Boot", "Node %hhu booted\n", TOS_NODE_ID);
call AMControl.start();
}
[cut]
event void MilliTimer.fired(){
/*This contains the discovery message*/
rd_message *rreq = NULL;
if (phase == 0){
//Route discovery phase
rreq = (rd_message *) call Packet.getPayload(&packet, (int) NULL);
if(call AMSend.send(AM_BROADCAST_ADDR, &packet, sizeof(rd_message)) == SUCCESS){
//locked = TRUE;
}
return;
}
}
event message_t* Receive.receive(message_t* bufPtr, void* payload, uint8_t len){
rd_message *received_mex = NULL;
rd_message *reply_mex = NULL;
int i,j;
received_mex = (rd_message*) payload; //cast to rd_message
if (received_mex->type == RREQ){
reply_mex = (rd_message*) call Packet.getPayload(&packet, (int) NULL); //reply packet is created.
if (received_mex->sender_id == TOS_NODE_ID){
//The original sender received its RREQ. Stopping the forward procedure
return bufPtr; //FIXME: see if it's correct to return null here
}
//RREQ message case 1: I am not the receiver_id
if (received_mex->receiver_id != TOS_NODE_ID){
}
else if (received_mex->receiver_id == TOS_NODE_ID){
//I am the receiver of the RREQ message. I can now reply with a RREP
}
if (call AMSend.send(AM_BROADCAST_ADDR, &packet, sizeof(rd_message)) == SUCCESS) {
dbg("dsr", "packet sent\n");
//locked = TRUE;
}
else{
dbg("dsr", "failed to send reply packet.\n");
}
}
else if (received_mex->type == RREP){
//DO SOMETHING WITH THE NEW RECEIVED MESSAGE HERE
}
return bufPtr;
}
event void AMSend.sendDone(message_t* bufPtr, error_t error) {
if (&packet == bufPtr) {
//locked = FALSE;
}
}
I removed all the logic from the code to focus on the message exchange calls. I hope that someone can help me... thanks.
TinyOS follows an ownership discipline almost everywhere: at any point in time, every
"memory object" (a piece of memory, typically a whole variable or a single array element) should be owned by a single module. A command like send is said to pass ownership of its msg argument from caller to callee.
The main problem of your code is that in the Receive.receive event you are using the packet variable in two ways:
as outgoing packet by calling call AMSend.send(AM_BROADCAST_ADDR, &packet, sizeof(rd_message))
as the buffer for the next incoming packet by executing return bufPtr;
The result of this code is unpredictable, since receiving a packet will corrupt the outgoing packet. To solve your problem, you should use a Pool<message_t> component. The typical pseudocode for a program like yours looks like this:
receive (m):
if I don't need to process this message, return m
if my free packet list is empty, return m
else
process/forward m
return entry from free packet list
This is a rough implementation of a module that uses Pool<message_t> as a list of free packets to manage communication:
module Foo
{
/* this is our free packet list */
uses interface Pool<message_t>;
uses interface Receive;
uses interface AMSend;
uses interface Timer<TMilli> as MilliTimer;
}
implementation
{
event void MilliTimer.fired()
{
message_t *packet;
/* get a free packet */
packet = call Pool.get();
if (packet)
{
/* code to send the packet */
}
}
event void AMSend.sendDone(message_t *msg, error_t error)
{
/* the send function ended, put back the packet in the free packet pool */
/* check here if msg was taken from Pool */
call Pool.put(msg);
}
event message_t* Receive.receive(message_t* msg, void* payload, uint8_t len)
{
if (!haveToProcess(msg))
return msg; // don't have to process this message
if (call Pool.empty())
return msg; // memory exhausted
/* ... */
/* code that processes the packet */
call AMSend.send(AM_BROADCAST_ADDR, msg, sizeof(rd_message));
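/* note: if send() does not return SUCCESS, sendDone will never fire,
   so in real code msg should be put back into the Pool here */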
/* return a free message_t* as buffer to store the next received packet */
return call Pool.get();
}
}
If you don't like Pool, you can use a message_t array as a circular buffer; a rough sketch of the idea follows. Take a look at the BaseStation code for a hint on how to do so.
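A minimal sketch of that buffer-swap idea, in plain C with a stand-in message_t so it compiles on its own (the names, the queue size, and the stand-in type are mine, not taken from BaseStation):

#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t data[128]; } message_t; /* stand-in for TinyOS message_t */

#define QUEUE_SIZE 4

static message_t  storage[QUEUE_SIZE];  /* backing memory for the spare buffers */
static message_t *spares[QUEUE_SIZE];   /* circular queue of free buffers */
static uint8_t head = 0, count = 0;

static void buffers_init(void)
{
    uint8_t i;
    for (i = 0; i < QUEUE_SIZE; i++)
        spares[i] = &storage[i];
    head = 0;
    count = QUEUE_SIZE;
}

static message_t *buffer_take(void)
{
    message_t *b;
    if (count == 0)
        return NULL;
    b = spares[head];
    head = (head + 1) % QUEUE_SIZE;
    count--;
    return b;
}

static void buffer_put(message_t *b)
{
    spares[(head + count) % QUEUE_SIZE] = b;
    count++;
}

/* receive-style usage: keep 'incoming' for processing and hand back a
 * spare buffer; if no spare is left, return 'incoming' itself so the
 * radio stack always owns a valid buffer (the packet is dropped). */
static message_t *on_receive(message_t *incoming)
{
    message_t *spare = buffer_take();
    if (spare == NULL)
        return incoming;
    /* ...queue 'incoming' for processing, and buffer_put(incoming)
       once you are done with it... */
    return spare;
}

The point is the same as with Pool: the buffer returned from receive and the buffer you keep for processing are never the same object.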
For more details, I suggest you read the TinyOS programming book, especially section 3.5.1.
As for your comment:
return bufPtr; //FIXME: see if it's correct to return null here
you can never return NULL from a receive event, since TinyOS always needs a buffer to store incoming packets.
Related
When a client abnormally ends ("abends"), the reactor seems to go into an indefinite polling state, using roughly 15% of the processor. If the client reconnects I'm still losing that 15%. I'm trying to determine what is missing in my code to handle this properly.
When the client abends, _socket.available() immediately returns false, so in the else block I'm attempting to do the right thing. Doing the same thing I do when a client terminates normally, delete this, eliminates the processor issue, but the next time a client connects I get an allocation error; I'd like to understand why that is. What's the difference? Just putting a sleep in there solves everything, but onSocketReadable continues to be called with _socket.available() == false, so it remains a sort of orphaned active reactor. What am I missing? I also tried stopping the reactor; that stops the processor use, but a restarted client will no longer connect. There's something I don't understand there as well: it seems like a new reactor would be created just as it was initially?
void onSocketReadable(const AutoPtr<ReadableNotification>& pNf)
{
// some socket implementations (windows) report available
// bytes on client disconnect, so we double-check here
if (_socket.available())
{
// No FIFO for now
//int len = _socket.receiveBytes(_fifoIn);
char* buffer = new char[65535];
memset(buffer, 0, 65535);
_socket.setReceiveBufferSize(65535);
int n = _socket.receiveBytes(buffer, 65535);
std::string json = buffer;
delete [] buffer;
if (json == "SHUTDOWN\r\n")
{
delete this;
return;
}
try
{
std::string result = _processor.process(json,_sm);
result.append("\r\n");
_socket.sendBytes(result.data(), (int)result.length());
}
catch (Poco::Exception& e)
{
std::cout << e.message();
}
}
else
{
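// reached when a readable notification arrives but no data is
// available (observed above when the client abends)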
// delete this;
// return;
// _reactor.stop();
Sleep(10);
}
}
I wrote this source code to implement an lwIP TCP server.
If I remove the ST-LINK wire and run it without debugging, the server itself doesn't work.
I've tried to find the cause, but I don't know where the problem is.
(If the LED on the board blinks, it means the board has gone down.)
Is there a problem that explains why the server is not working?
When I run it under the debugger, it receives without any problems, generates a message, and then sends it.
So is there really a problem with the source code?
If the connection itself doesn't work when no debugger is attached, I don't think the server is even being created.
But if it were a hardware problem, it would be strange for it to work as a server while debugging.
I don't know where to look.
When debugging, the client connected from Hercules or a Raspberry Pi and sent and received properly.
void Tcp_Task(void const * argument)
{
/* USER CODE BEGIN Tcp_Task */
struct netconn *conn, *newconn;
err_t err, accept_err;
struct netbuf *buf;
void *data;
u16_t len;
MX_LWIP_Init();
LWIP_UNUSED_ARG(argument);
conn = netconn_new(NETCONN_TCP);
if (conn!=NULL)
{
// netconn_bind(conn, NULL, port number)
err = netconn_bind(conn, NULL, 5001);
if (err == ERR_OK)
{
netconn_listen(conn);
while (1)
{
accept_err = netconn_accept(conn, &newconn);
if (accept_err == ERR_OK)
{
while (netconn_recv(newconn, &buf) == ERR_OK)
{
do
{
netbuf_data(buf, &data, &len);
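// note: assumes len <= sizeof(receivemsg); there is no bounds check on the copy below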
memcpy(receivemsg, data, len);
transmitmsg = procPacket();
msg_len = getsendPackSize();
netconn_write(newconn, transmitmsg, msg_len, NETCONN_COPY);
}
while (netbuf_next(buf) >= 0);
netbuf_delete(buf);
}
netconn_close(newconn);
netconn_delete(newconn);
}
osDelay(100);
}
}
else
{
netconn_delete(conn); // bind failed: free the listening netconn (newconn is still uninitialized here)
}
}
/* Infinite loop */
for(;;)
{
osDelay(100);
}
/* USER CODE END Tcp_Task */
}
lwIP TCP server source code site:
https://blog.naver.com/eziya76/221867311729
I don't think the server works if I power-cycle the board or run it without debugging, and I don't know where to find the problem.
Please let me know if the source code is the problem; if the problem is somewhere else, please tell me how to find it.
I'm writing this using a translator, so please understand.
I'm having trouble setting the no_delay option on an asio socket. The following code runs well, except for the delay. My server receives the messages only after the 5000 ms expire.
#include <boost/asio.hpp>
#include <boost/thread.hpp>
using namespace boost::asio;
struct Client
{
io_service svc;
ip::tcp::socket sock;
Client() : svc(), sock(svc)
{
ip::tcp::resolver resolver(svc);
ip::tcp::resolver::iterator endpoint = resolver.resolve(boost::asio::ip::tcp::resolver::query("127.0.0.1", "32323"));
connect(sock, endpoint);
}
void send(std::string const& message) {
sock.send(buffer(message));
}
};
int main()
{
Client client;
client.send("hello world\n");
client.send("bye world\n");
boost::this_thread::sleep_for(boost::chrono::milliseconds(5000));
}
When trying to add the option I have a few choices:
1) Add the option before connection:
Client() : svc(), sock(svc)
{
ip::tcp::resolver resolver(svc);
ip::tcp::resolver::iterator endpoint = resolver.resolve(boost::asio::ip::tcp::resolver::query("127.0.0.1", "32323"));
sock.set_option(ip::tcp::no_delay(true));
connect(sock, endpoint);
}
However this throws set_option: Bad file descriptor
2) Add the option after the connection:
Client() : svc(), sock(svc)
{
ip::tcp::resolver resolver(svc);
ip::tcp::resolver::iterator endpoint = resolver.resolve(boost::asio::ip::tcp::resolver::query("127.0.0.1", "32323"));
connect(sock, endpoint);
sock.set_option(ip::tcp::no_delay(true));
}
However, in this case the option has no effect and I still see the delay. According to "boost::asio with no_delay not possible?", I need to set the option after I've opened the socket but before I've connected the socket. So I've tried this:
Client() : svc(), sock(svc)
{
ip::tcp::endpoint endpoint( ip::address::from_string("127.0.0.1"), 32323);
sock.open(ip::tcp::v4());
sock.set_option(ip::tcp::no_delay(true));
sock.connect(endpoint);
}
However, I still see no effect. How can I set this option?
Edit: It's possible that I am not setting the option correctly on the server-side. This is the complete server code:
#include <boost/asio.hpp>
#include <iostream>
int main() {
boost::asio::io_service io_service;
boost::asio::ip::tcp::acceptor acceptor(io_service, boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 32323));
boost::asio::ip::tcp::socket socket(io_service);
acceptor.accept(socket);
socket.set_option(boost::asio::ip::tcp::no_delay(true));
boost::asio::streambuf sb;
boost::system::error_code ec;
while (boost::asio::read(socket, sb, ec)) {
std::cout << "received:\n" << &sb;
}
}
The client is properly setting the ip::tcp::no_delay option. However, the delay being observed is not the result of this option. Instead, it is the result of the server attempting to read more data than the client has sent, and when the client exits after sleeping 5000ms, the server's read operation completes with an error.
The read() operation initiated by the server will complete when either it has read streambuf.max_size() bytes or an error occurs. The streambuf's max size defaults to std::numeric_limits<std::size_t>::max() and can be configured in its constructor. In this case, the server attempts to read std::numeric_limits<std::size_t>::max() bytes, but the client only sends 22 bytes, sleeps 5000 ms, then closes the socket. When the server observes that the connection has closed, the read() operation completes with 22 bytes read and an error code of boost::asio::error::eof.
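To see the same mechanics at the plain-sockets level, here is a small sketch (POSIX C, my own illustration rather than Asio internals) of a reader that only completes at end-of-stream; this is exactly why the server's output appears only once the client exits and the connection closes:

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Read from a connected socket until the peer closes. This mirrors the
 * behaviour of asio::read() into an effectively unbounded streambuf:
 * the operation ends only at EOF (recv returns 0) or on an error. */
static void read_until_eof(int fd)
{
    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
        fwrite(buf, 1, (size_t) n, stdout);   /* data received so far */
    if (n == 0)
        puts("(peer closed: the read completes, cf. boost::asio::error::eof)");
    else
        perror("recv");
}

If the server instead framed its reads (for example with boost::asio::read_until and a '\n' delimiter), each message would be printed as soon as it arrived, no_delay or not.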
I have been digging into creating a custom filtering module, written in C, that can process POST messages: read the JSON payload and decide whether to proxy the message or return a 204 to the client.
I am having trouble reading the payload of the message. I have read through quite a few of the modules but am missing something. I have tried quite a few ways to parse the payload but can't figure out how to simply print it.
I have read through the echo module, the redis module, Evan Miller's documentation, etc.
I know that I need to read the full message before I can process the payload, but the callback has me confused. I don't really understand the callback in ngx_http_read_client_request_body. I have tried to read the code, but there is almost no documentation in it.
In the code below I have tried multiple things to get the request body: I tried reading r->request_body->buf and bufs once the full body was read, but when I try to print the request body I get a segfault. I think the echo module is the closest to what I want to do, but I got confused about how ngx_http_echo_wev_handler processes the request.
static ngx_int_t ngx_http_ortb_handler(ngx_http_request_t *r) {
ngx_int_t rc;
ngx_flag_t read_body = 0;
if (r->method == NGX_HTTP_POST) { /* only read the body for POST requests */
read_body = 1;
}
ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "Received Post");
if (read_body) {
rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init);
if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
return rc;
}
ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "return code %ld", rc);
// What do I need to do here to read the body.
} else {
r->main->count++;
rc = ngx_http_discard_request_body(r);
if (rc != NGX_OK) {
return rc;
}
ngx_http_upstream_init(r);
}
return NGX_DECLINED;
}
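For what it's worth, here is a rough sketch of a dedicated body callback (the function name and the 204 finalization are my assumptions, and it only prints the in-memory case; with larger bodies nginx buffers chunks in a temp file):

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/* Passed as the second argument to ngx_http_read_client_request_body();
 * nginx calls it once the whole body has been read. */
static void
ngx_http_ortb_body_handler(ngx_http_request_t *r)
{
    ngx_chain_t *cl;
    ngx_buf_t   *b;

    if (r->request_body == NULL || r->request_body->bufs == NULL) {
        ngx_http_finalize_request(r, NGX_HTTP_NO_CONTENT);
        return;
    }

    /* the body is a chain of buffers, not one contiguous block */
    for (cl = r->request_body->bufs; cl != NULL; cl = cl->next) {
        b = cl->buf;

        if (b->in_file) {
            /* spilled to disk (client_body_buffer_size exceeded);
             * reading it back needs ngx_read_file(), omitted here */
            ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                          "body chunk is in a temp file, not printed");
        } else {
            ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                          "body chunk: \"%*s\"",
                          (size_t) (b->last - b->pos), b->pos);
        }
    }

    ngx_http_finalize_request(r, NGX_HTTP_NO_CONTENT);
}

The content handler would pass this function instead of ngx_http_upstream_init and then return NGX_DONE, since reading the body may complete asynchronously.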
I am trying to create a TCP socket server that accepts multiple clients. However, for the past couple of days I haven't been able to overcome a certain obstacle. I believe I've isolated the problem to the TcpClient.BeginRead(callbackMethod) method.
Basically, distinct clients activate this method, but the callback isn't invoked until they actually send data on their outgoing stream. However, the Encoding.ASCII.GetString() call I perform on the bytes that come in via the stream outputs unwanted null characters ("\0") depending on the order in which the BeginRead calls were started. Why is this happening? Please help.
The Situation/Scenario in Order
Event 1.) ClientOne Connects which then triggers a BeginRead with asynchronous call back.(Now callback is waiting for data)
Event 2.) ClientTwo Connects which then triggers a BeginRead with asynchronous call back. (Now callback is waiting for data)
Event 3.) If ClientOne sends a message first, the data is definitely serviced; however, Encoding.ASCII.GetString(3 arguments) outputs "\0" for every byte. I think ClientTwo's BeginRead is interfering with ClientOne's BeginRead somehow.
Event 3.) (the alternative, not a fourth event) If ClientTwo sends a message first, the data is serviced and decoded correctly using Encoding.ASCII.GetString(3 arguments).
Source Code
void onCompleteAcceptTcpClient(IAsyncResult iar)
{
TcpListener tcpl = (TcpListener)iar.AsyncState;
try
{
mTCPClient = tcpl.EndAcceptTcpClient(iar);
var ClientEndPoint = mTCPClient.Client.RemoteEndPoint;
Console.WriteLine(ClientEndPoint.ToString());
Console.WriteLine("Client Connected...");
_sockets.Add(mTCPClient);
tcpl.BeginAcceptTcpClient(onCompleteAcceptTcpClient, tcpl);
mRx = new byte[512];
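// NOTE: mRx is a single class-level buffer shared by all clients; every
// accept replaces it, yet the read callback decodes whatever array mRx
// currently points to, not the one a given client's read actually filled.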
_sockets.Last().GetStream().BeginRead(mRx, 0, mRx.Length, onCompleteReadFromTCPClientStream, mTCPClient);
}
catch (Exception exc)
{
MessageBox.Show(exc.Message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
}
}
void onCompleteReadFromTCPClientStream(IAsyncResult iar)
{
foreach (string message in messages)//For Testing previous saved messages
{
printLine("Checking previous saved messages: " + message);
}
TcpClient tcpc = new TcpClient();
int nCountReadBytes = 0;
try
{
tcpc = (TcpClient)iar.AsyncState;
nCountReadBytes = tcpc.GetStream().EndRead(iar);
printLine(nCountReadBytes.GetType().ToString());
if (nCountReadBytes == 0)
{
MessageBox.Show("Client disconnected.");
return;
}
string foo;
/* THE ENCODING OUTPUTS "\0" FOR EVERY BYTE WHEN AN OLDER CALLBACK'S DATA IS DECODED */
foo = Encoding.ASCII.GetString(mRx, 0, nCountReadBytes);
messages.Add(foo);
foreach (string message in messages)
{
Console.WriteLine(message);
}
mRx = new byte[512];
//(reopens the callback)
tcpc.GetStream().BeginRead(mRx, 0, mRx.Length, onCompleteReadFromTCPClientStream, tcpc);
}
catch (Exception ex)
{
MessageBox.Show(ex.Message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
}
}