I have an app that is connected to a balance through the serial port. The balance is quite large, and pressing the PRINT button is not an option, so my app asks the balance to print programmatically upon a certain user action. The balance interface allows this and defines a print command. Everything works for a while. Then, after weighing a few items, the balance starts outputting the previous weight. I am baffled at this point, since there are only a few commands defined and there aren't many options for what can be done. I am already flushing the output buffer each time, so I don't know why it keeps giving me the old value.
Here is my code:
if (askedToPrint)
{
    _sp.DiscardOutBuffer();
    // ask the balance to print
    _sp.Write("P\r\n");
}
_sp is a SerialPort object.
I am using WinCE 6.0 and Compact Framework 2.0/C#
If you are reading data from the serial port using ReadLine() or Read(), there is a possibility that the balance has sent multiple packets that are still queued, so before reading you have to discard the already-pending packets. Another approach is to call the ReadExisting() method before writing the print request, to consume all available data. If the balance still sends old packets after your command, then there might be a problem with the balance itself.
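As an illustration of that pattern (the question's code is C#, but the idea is the same in any language), here is a minimal sketch in Python with pyserial; the port name, baud rate, and response format are assumptions:

import serial

# Port name and baud rate are placeholders for this sketch.
sp = serial.Serial("/dev/ttyS0", 9600, timeout=1)

def ask_to_print():
    # Throw away any stale, already-queued responses first,
    # so the next read cannot return an old weight.
    sp.reset_input_buffer()
    sp.write(b"P\r\n")      # ask the balance to print
    return sp.readline()    # read only the fresh response

The key point is that the input buffer, not the output buffer, is where the old weight is sitting.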
Recently I have been debugging a problem on a Unix system using the command
netstat -s
and I get an output with
$ netstat -s
// other fields
// other fields
TCPBacklogDrop: 368504
// other fields
// other fields
I have searched for a while to understand what this field means, and I have found mainly two different answers:
1. It means that your TCP data receive buffer is full, and some packets have overflowed it.
2. It means your TCP accept queue is full, and some connections have been dropped.
Which one is correct? Is there any official document to support it?
Interpretation #2 refers to the queue of sockets waiting to be accepted, possibly because its size is set (more or less) by the value of the backlog parameter passed to listen(). This interpretation, however, is not correct.
To understand why interpretation #1 is correct (although incomplete), we will need to consult the source. First note that the string "TCPBacklogDrop" is associated with the Linux identifier LINUX_MIB_TCPBACKLOGDROP (see, e.g., this). This is incremented here in tcp_add_backlog.
Roughly speaking, there are 3 queues associated with the receive side of an established TCP socket. If the application is blocked on a read when a packet arrives, it will generally be sent to the prequeue for processing in user space in the application process. If it can't be put on the prequeue, and the socket is not locked, it will be placed in the receive queue. However, if the socket is locked, it will be placed in the backlog queue for subsequent processing.
If you follow through the code you will see that the call to sk_add_backlog called from tcp_add_backlog will return -ENOBUFS if the receive queue is full (including that which is in the backlog queue) and the packet will be dropped and the counter incremented. I say this interpretation is incomplete because this is not the only place where a packet could be dropped when the "receive queue" is full (which we now understand to be not as straightforward as a single queue).
I wouldn't expect such drops to be frequent or problematic under normal operating conditions, as the sender's TCP stack should honor the advertised window of the receiver and not send data exceeding the capacity of the receive queue (with the exception of zero-window probes and older kernel versions whose calculations could cause drops when the receive window was not actually full). If it is somehow indicative of a problem, I would start worrying about malicious clients (some form of DDoS, maybe) or some failure causing a socket's lock to be held for an extended period of time.
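That window-honoring behavior is easy to observe from user space. In this rough Python sketch (loopback address, chunk size, and timeout are arbitrary choices), the server accepts a connection but never reads from it, so the advertised window closes and the client's send() eventually stalls rather than anything being dropped:

import socket, threading

srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # pick any free port
srv.listen(1)
port = srv.getsockname()[1]

def client():
    c = socket.create_connection(("127.0.0.1", port))
    c.settimeout(2)               # treat a stalled send as "window closed"
    sent = 0
    try:
        while True:
            sent += c.send(b"x" * 65536)
    except socket.timeout:
        print("send stalled after", sent, "bytes: zero window, no drops")

t = threading.Thread(target=client)
t.start()
conn, _ = srv.accept()            # accept, but never recv()
t.join()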
Let's suppose I have a custom server that listens for connections on some port and, once it has received a connection, starts sending data (a sort of logger). Here's the first question:
Can it be just binary data? Actually, I need just two non-zero 8-bit values, and I was thinking of a 0-value byte to separate each new portion of data.
These three bytes will be sent once or maybe twice a second.
So, now I am looking for some code snippet in Swift 2 to properly read this data. Normally, I would expect calling
connectSocket(IP,port)
which would connect to the socket, and once it receives the first chunk of data,
socketCallBack()
is called, or something like that.
Intuitively, I don't like the idea of checking for data in a while (true) loop. Or is this the proper way?
I've seen an example where the client first sends a 'get' request to the server and immediately starts waiting for the response. I could probably call it from a timer, once a second. Would that be correct?
What I am concerned about is traffic. Right now I have implemented it through a web server, but I don't like that it spends way too much traffic on the HTTP overhead.
Probably, with TCP connections on a timer the traffic would be much lower, and it would save even more traffic if I establish just one connection at the beginning and transmit all the data within that connection. Am I right?
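Whatever language ends up being used, the zero-delimited framing itself is trivial to parse. A rough sketch in Python, shown here only to pin down the protocol (host and port are placeholders):

import socket

sock = socket.create_connection(("192.0.2.1", 9000))  # placeholder address
buf = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:                 # server closed the connection
        break
    buf += chunk
    while b"\x00" in buf:         # a complete record has arrived
        record, buf = buf.split(b"\x00", 1)
        if len(record) == 2:      # the two non-zero 8-bit values
            print("values:", record[0], record[1])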
I have seen a number of examples of paho clients reading sensor data and then publishing, e.g., https://github.com/jamesmoulding/motion-sensor/blob/master/open.py. None that I have seen start a network loop as suggested in https://eclipse.org/paho/clients/python/docs/#network-loop. I am wondering if the network loop is unnecessary for publishing? Perhaps it is only needed if I am subscribed to something?
To expand a bit on what @hardillb has said, his point 2, "To send the ping packets needed to keep a connection alive", is only strictly necessary if you aren't publishing at a rate sufficient to match the keepalive you set when connecting. In other words, it's entirely possible the client will never need to send a PINGREQ and hence never need to receive a PINGRESP.
However, the more important point is that it is impossible to guarantee that calling publish() will actually complete sending the message without using the network loop. It may work some of the time, but could fail to complete sending a message at any time.
The next version of the client will allow you to do this:
m = mqttc.publish("class", "bar", qos=2)
m.wait_for_publish()
But this will require that the network loop is being processed in a separate thread, as with loop_start().
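Put together, the safe pattern looks roughly like this sketch (the broker hostname is a placeholder, and wait_for_publish() assumes a client version that has it):

import paho.mqtt.client as mqtt

mqttc = mqtt.Client()
mqttc.connect("broker.example.com")   # placeholder broker
mqttc.loop_start()                    # network loop in a background thread

m = mqttc.publish("class", "bar", qos=2)
m.wait_for_publish()                  # block until the QoS 2 flow completes

mqttc.loop_stop()
mqttc.disconnect()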
The network loop is needed for a number of things:
To deal with incoming messages
To send the ping packets needed to keep a connection alive
To handle the extra packets needed for high QoS
To send messages that take up more than one network packet (e.g., bigger than the local MTU)
The ping messages are only needed if you have a low message rate (less than one message per keepalive period).
Given that you can start the network loop in the background on a separate thread these days, I would recommend starting it regardless. A minimal subscriber sketch follows to show why.
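The first item on that list is also why a subscriber cannot work without the loop at all; as an illustration (broker and topic are placeholders):

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Runs inside the network loop whenever a message arrives.
    print(msg.topic, msg.payload)

sub = mqtt.Client()
sub.on_message = on_message
sub.connect("broker.example.com")     # placeholder broker
sub.subscribe("class")
sub.loop_forever()                    # blocking form of the network loop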
I want the server to be able to actively send a message, for example over UDP or TCP/IP, to a client running on an Arduino. It is known that this is possible if the user has port-forwarded the specific port to the device on the local network. However, I don't want the user to have to set up port forwarding manually. Would this be possible, perhaps by using another protocol?
1 Arduino Side
I think the closest you can get to this is opening a connection to the server from the arduino, then using available() to wait for the server to stream some data to the arduino. Your code will be polling the open connection, but you are avoiding all the back-and-forth communication of opening and closing connections, passing headers back and forth, etc. A sketch of the shape of this loop follows.
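The device side itself would be Arduino C++ using client.available(); just to show the shape of the "one open connection, block until the server pushes" pattern, here is a rough Python sketch (the server address is a placeholder):

import socket

conn = socket.create_connection(("server.example.com", 9000))  # placeholder
while True:
    data = conn.recv(256)     # blocks until the server writes something
    if not data:              # server closed; reconnect logic would go here
        break
    print("pushed from server:", data.decode().strip())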
2 Server Side
This means the bulk of the work will be on the server side, where you will need to manage open connections so you can instantly write to them when a user triggers some event which requires a message to be pushed to the arduino. How to do this varies a bit depending on what type of server application you are running.
2.1 Node.js "walk-through" of main issues
In Node.js, for example, you can res.write() on a connection without closing it - this should give a similar effect to having an open serial connection to the arduino. That leaves you with the issue of managing the connection - should the server periodically check a database for messages for the arduino? That simply removes one link from the arduino -> server -> database polling chain, so we should be able to do better.
We can attach a function triggered by the event of a message being added to the database. Node-orm2 is a database Object Relational Model driver for node.js, and it offers hooks such as afterSave and afterCreate which you can utilize for this type of thing. Depending on your application, you may be better off not using a database at all and simply using javascript objects.
The only remaining issue, then, is: once the hook is activated, how do we get the correct connection into scope so we can write to it? Well, you can save all the relevant data you have about the request in some global data structure, maybe a dictionary indexed by an arduino ID; in the triggered function you fetch that data, i.e., the request context, and you write to it!
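The pattern is the same in any language; here is a rough Python sketch of such a connection registry (the IDs, port, and identification handshake are all made up for illustration):

import socket, threading

open_connections = {}                 # arduino_id -> connected socket

def serve(port=9000):
    srv = socket.socket()
    srv.bind(("", port))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        # Assume each device sends its ID as one line when it connects.
        arduino_id = conn.recv(64).decode().strip()
        open_connections[arduino_id] = conn

def push(arduino_id, message):
    # Call this from the afterSave-style hook when a new message appears.
    conn = open_connections.get(arduino_id)
    if conn:
        conn.sendall(message.encode() + b"\n")

threading.Thread(target=serve, daemon=True).start()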
See this blog post for a great example, including node.js code which manages open connections, closing them properly and clearing from memory on timeout etc.
3 Conclusion
I haven't tested this myself - but I plan to since I already have an existing application using arduino and node.js which is currently implemented using normal polling. Hopefully I will get around to it soon and return here with results.
Typically in long-polling (from what I've read) the connection is closed once data is sent back to the client (arduino), although I don't see why this would be necessary. I plan to try keeping the same connection open for multiple messages, only closing after a fixed time interval to re-establish the connection - and I hope to set this interval fairly high, 5-15 minutes maybe.
We use Pubnub to send notifications to a client web browser so a user can know immediately when they have received a "message" and stuff like that. It works great.
This seems to have the same constraints that you are looking at: No static IP, no port forwarding. User can theoretically just plug the thing in...
It looks like Pubnub has an Arduino library:
https://github.com/pubnub/arduino
I am designing and testing a client-server program based on TCP sockets (Internet domain). Currently, I am testing it on my local machine and am not able to understand the following about SIGPIPE.
1. SIGPIPE appears quite randomly. Can it be made deterministic?
The first tests involved a single small (25-character) send operation from the client and a corresponding receive at the server. The same code on the same machine sometimes succeeds and sometimes fails with SIGPIPE, completely out of my control. The failure rate is about 45% (quite high). So, can I tune the machine in any way to minimize this?
2. The second round of testing was to send 40,000 small (25-character) messages from the client to the server (1 MB of data in total) and then have the server respond with the total size of the data it actually received. The client sends data in a tight loop, and there is a SINGLE receive call at the server. It works only for a maximum of 1200 bytes of total data sent, and again there are these non-deterministic SIGPIPEs, about 70% of the time now (really bad).
Can someone suggest an improvement to my design (probably on the server side)? The requirement is that the client shall be able to send a medium to very high amount of data (again, about 25 characters per message) after a single socket connection has been made to the server.
I have a feeling that multiple sends against a single receive will always be lossy and very inefficient. Should we combine the messages and send them in one send() operation only? Is that the only way to go?
SIGPIPE is sent when you try to write to a pipe/socket that has been closed on the other end. Ignoring the signal will make send() return an error (EPIPE) instead:
signal(SIGPIPE, SIG_IGN);  /* ignore SIGPIPE process-wide */
Alternatively, on BSD-derived systems (including macOS), you can disable SIGPIPE for an individual socket:
int n = 1;
/* SO_NOSIGPIPE is BSD/macOS-specific; on Linux use MSG_NOSIGNAL per send() instead */
setsockopt(thesocket, SOL_SOCKET, SO_NOSIGPIPE, &n, sizeof(n));
Also, the data amounts you're mentioning are not very high. Likely there's a bug somewhere that causes your connection to close unexpectedly, giving a SIGPIPE.
SIGPIPE is raised because you are attempting to write to a socket that has been closed. This does indicate a probable bug so check your application as to why it is occurring and attempt to fix that first.
Attempting to just mask SIGPIPE is not a good idea because you don't really know where the signal is coming from and you may mask other sources of this error. In multi-threaded environments, signals are a horrible solution.
In the rare cases where you cannot avoid this, you can suppress the signal on send. If you set the MSG_NOSIGNAL flag on send()/sendto(), it will prevent SIGPIPE from being raised. If you do trigger this error, send() returns -1 and errno will be set to EPIPE. Clean and easy. See man send for details.
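The same flag is exposed by higher-level languages that wrap send(2); as an illustration, a rough Python sketch on Linux (the peer address is a placeholder):

import errno
import socket

sock = socket.create_connection(("192.0.2.1", 9000))  # placeholder peer
try:
    # MSG_NOSIGNAL (Linux-specific) suppresses SIGPIPE; a dead peer
    # surfaces as an EPIPE OSError instead of killing the process.
    sock.send(b"hello", socket.MSG_NOSIGNAL)
except OSError as e:
    if e.errno == errno.EPIPE:
        print("peer closed the connection")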