
LWIP: SSI output lengths may cause TCP/IP checksum errors
I am using LWIP 1.4.1 running on an ARM LPC4357 (LPCOpen 2.1.12, date 5/15/2014).
I am using httpserver_raw and have added a few SSI handlers.
I find that the system is sensitive to both the size of the line that contains the tag and the number of bytes the SSI handler outputs.
The tag is included in the output.
When expanding the tag:
\r\n
spacespace<!--#s_add-->\r\n
(two spaces precede the tag, forum formatting!)
The browser fails to receive the output and Wireshark reports a checksum error when the tag is expanded to an odd number of characters (1, 3, 5, 7, etc.).
The browser receives the output and Wireshark reports OK when the tag is expanded to an even number of characters (2, 4, 6, 8, etc.).
If I remove a space from before the tag then the situation is reversed.
Additionally, if the tag expands to more than 129 characters (I suspect the real threshold is 122) then the output is always OK (I have not seen a fault yet).
When the SSI output is >= 129 bytes, the http_write() function is called with lengths 122 (header and tag), 122 (initial part of the SSI output) and 7 (remaining part of the SSI output).
Having the second tcp_write() of 122 seems to cure the checksum problem.
(Splitting the SSI output into segments is itself an oddity. The sending in chunks of up to 122 seems to be driven by the size of the resource file up to the tag.)
I have inspected the SSI output using UART printf()s and in Wireshark, and all appears to be correct; there are no corruptions in the output. The http_write() lengths are correct, as is the output.
I have tracked it as far as tcp_write(), at which point I start to doubt myself; surely tcp_write() cannot be broken?
Has anyone used LWIP SSI on an embedded target or indeed tcp_write()?

Commenting out/removing:
#define LWIP_CHECKSUM_ON_COPY
seems to resolve the problem.
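For reference, the workaround amounts to a one-line change in lwipopts.h. This is only a sketch of the option as I understand it (when enabled, lwIP computes the TCP checksum while copying application data into the segment); disabling it makes the stack fall back to checksumming the assembled segment instead:

/* lwipopts.h -- workaround: disable checksum-on-copy so the checksum
   is computed over the finished segment rather than during tcp_write() */
#define LWIP_CHECKSUM_ON_COPY   0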

Related

In a UNIX pipeline, how do I get the user-tool interaction of the first stage piped into the next stage?

Page 32 of the book "The UNIX Programming Environment" makes this profound statement about UNIX pipes:
The programs in a pipeline actually run at the same time, not one
after another. This means that the programs in a pipeline can be
interactive; the kernel looks after whatever scheduling and
synchronization is needed to make it all work.
Wow! By using pipes I can get parallel processing for free!
"I've got to illustrate this awesome capability to my colleagues" I thought. I will implement a demo: create a simple interactive tool that mimics what I type in at the command window. I'll name that tool mimic. Use a pipe to connect it to another tool which counts the number of lines and characters. I'll name that tool wc (sadly, I am working on Windows, which doesn't have the UNIX wc program so I must implement my own). The two tools will run on a command line like this:
mimic | wc
The output of mimic is piped into wc.
Well, that's the idea.
I implemented the mimic tool with a very simple Flex lexer, which I show below. Compiling it generated mimic.exe.
When I run mimic.exe from the command line it does indeed mimic what I type:
> mimic
hello world
hello world
greetings
greetings
ctrl-c
I implemented wc using AWK. The AWK program (wc.awk) is shown below. When I run wc from the command line it does indeed count the lines and characters:
> echo Hello World | awk -f wc.awk
lines 1 chars 13
However, when I put them together with a pipe, they don't work as I imagined they would. Here's a sample run:
> mimic | awk -f wc.awk
hello world
greetings
ctrl-c
Hmm, nothing. No mimicking. No line counts. No char counts.
How come it's not working? What am I doing wrong, please?
What can I do to make it work as I expected? I expected it to work like this: I type something in at the command line. mimic repeats it, sending it to the pipe which sends it to wc which reports the number of lines and characters. I type in the next thing at the command line. mimic repeats it, sending it to the pipe which sends it to wc which reports the number of lines and characters. And so forth. That's the behavior I thought I was implementing.
Here is my simple Flex lexer (mimic):
%option noyywrap
%option always-interactive
%%
%%
int main(int argc, char *argv[])
{
    yyin = stdin;
    yylex();
    return 0;
}
Here is my simple AWK program (wc):
{ nchars = nchars + length($0) + 1 }
END { printf("lines %-10d chars %d\n", NR, nchars) }
This question has little or nothing to do with Flex, Bison, or Awk, and really not that much to do with Unix, either (since you're experimenting with Windows).
I don't have Windows handy, but the underlying issue is basically about stdio buffering, so it's reproducible on Unix as well.
To simplify, I only implemented mimic, which I did directly rather than using Flex (which is clearly overkill):
#include <stdio.h>
int main(void) {
    for (int ch; (ch = getchar()) != EOF; ) putchar(ch);
    return 0;
}
Since you use %option always-interactive, which forces Flex to always read one character at a time with fgetc(), this has basically the same sequence of standard library calls as your program, except that the fwrite of one byte is simplified to the equivalent putchar.
It certainly has the same execution characteristic:
$ ./mimic
Here we go round the mulberry bush,
Here we go round the mulberry bush,
the mulberry bush, the mulberry bush.
the mulberry bush, the mulberry bush.
In the above, I signalled end-of-input by typing Ctrl-D at the start of the third line. On Windows, I would have had to type Ctrl-Z followed by Enter to get the same effect. If I kill the execution by typing Ctrl-C instead, I get roughly the same result (other than the fact that the Ctrl-C shows up in the console):
$ ./mimic
Here we go round the mulberry bush,
Here we go round the mulberry bush,
the mulberry bush, the mulberry bush.
the mulberry bush, the mulberry bush.
^C
Now, since mimic just copies stdin to stdout, I might expect to be able to pipe it into itself and get the same result. But the output is a little different:
$ ./mimic | ./mimic
Here we go round the mulberry bush,
the mulberry bush, the mulberry bush.
Here we go round the mulberry bush,
the mulberry bush, the mulberry bush.
Again, I signaled end of input by typing Ctrl-D. And it was only after I typed the Ctrl-D that any output appeared; at that point, both lines were echoed. And if I terminate the program abruptly with Ctrl-C, I don't get any output at all:
$ ./mimic | ./mimic
Here we go round the mulberry bush,
the mulberry bush, the mulberry bush.
^C
OK, that's the data. Now, we need an explanation. And the explanation has to do with the way C standard library streams are buffered, which you can read about in the setvbuf manpage (and in many places on the web). I'll summarise:
The C standard specifies that all input functions execute "as if" implemented by repeated calls to fgetc, and all output functions "as if" implemented by repeated calls to fputc. Note that this means that there is nothing special about the chunk of bytes written by a single printf or fwrite. The library does not do anything to ensure that the sequence of calls is atomic. If you have two processes both writing to stdout, the messages can get interleaved, and that will happen from time to time.
The data written by fputc (and, consequently, all stdio output functions) is actually placed into an output buffer. This buffer is not part of the Operating System (which may add another layer of buffering). It's strictly part of the stdio library functions, which are ordinary userland functions. You could write them yourself (and it's a pretty good exercise to do so). From time to time, the contents of this buffer are sent to the appropriate operating system interface in order to be transferred to the output device.
"From time to time" is deliberately unspecific. There are three standard buffering modes, although the standard doesn't require them all to be used by a particular library implementation, and it also doesn't restrict the library implementation from using different buffering modes (although the three specified modes do basically cover the useful possibilities). However, most C library implementations you're likely to use do implement all three, pretty well as described in the standard. So take the following as a description of a common implementation technique, but be aware that on certain idiosyncratic platforms, it might not be accurate.
The three buffering modes are:
Unbuffered. In this mode, each byte written is transferred to the operating system (and, it is hoped, to the actual output device) as soon as possible.
Fully buffered. In this mode, there is a buffer of a predetermined size (often 8 kilobytes, but different library implementations have different defaults for different platforms). If you want to, you can supply your own buffer (of an arbitrary size) for a particular output stream, using the setvbuf standard library function (q.v.). Fully-buffered output might stay in the output buffer until it is full (although a given implementation may release output earlier). The buffer will, however, be sent to the operating system if you call fflush or fclose on the stream, or if fclose is called automatically when main returns. (It's not sent if the process dies, though.)
Line-buffered. In this mode, the stream again has an output buffer of a predetermined size, which is usually exactly the same as the buffer used in "fully-buffered" mode. The difference is that the buffer is also sent to the operating system when a new line character ('\n') is written. (If the buffer gets full before an end-of-line character is written, then it is sent to the operating system, just as in Fully-Buffered mode. But most of the time, lines will be fairly short, so the buffer will be sent to the OS at the end of each line.)
Finally, you need to know that stdout is fully-buffered by default unless the standard library can determine that stdout is connected to some kind of console device, in which case it is not fully-buffered. (On Unix, it is typically line-buffered.) By contrast, stderr is not fully-buffered. (On Unix, it is typically unbuffered.) You can change the buffering mode, and the buffer size if relevant, by calling setvbuf before the first write operation to the stream.
The above-mentioned defaults are one of the reasons you are encouraged to write error messages to stderr rather than stdout: since stderr is unbuffered, the error message will appear as soon as possible. Also, you should normally put \n at the end of each output line; if standard output is a terminal and therefore line-buffered, the \n will ensure that the line is actually output, rather than languishing in the output buffer.
You can see all of this in action in the above examples. When I just ran ./mimic, leaving stdout mapped to the terminal, the output showed up each time I entered a line. (That also has to do with the way terminal input is handled by the terminal driver, which is another kettle of fish.)
But when I piped mimic into itself, the first mimic's standard output is redirected to a pipe. A pipe is not a terminal, so that mimic's stdout is fully-buffered by default. Since the buffer is longer than the total input, the entire program runs without sending anything to stdout, until the buffer is flushed when stdout is implicitly closed by main returning.
Moreover, if I kill the process (by typing Ctrl-C, for example, or by sending it a SIGKILL signal), then the output buffer is never sent to the operating system, and nothing appears on the console.
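So if you want mimic to behave the same way in a pipeline as it does on a terminal, you have to defeat the full buffering yourself: either switch stdout to line-buffered (or unbuffered) mode with setvbuf before writing anything, or call fflush after each line. A minimal sketch of the modified copy loop, showing both options (you only need one of them):

#include <stdio.h>

int main(void) {
    /* Option 1: ask for line-buffering even when stdout is a pipe. */
    setvbuf(stdout, NULL, _IOLBF, BUFSIZ);

    for (int ch; (ch = getchar()) != EOF; ) {
        putchar(ch);
        /* Option 2: push the buffer out at the end of every line. */
        if (ch == '\n') fflush(stdout);
    }
    return 0;
}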
If you're writing console apps using standard C library calls, it's very important to understand how stdio output buffering affects the sequence of outputs you see. That's just as true on Windows as on Unix. (Of course, it doesn't apply if you use native Windows or Posix I/O interfaces.)

Read pdf page size on .NET Core

I have only one requirement: I need to read the PDF page size and determine whether a page is bigger than 17x17 inches, so that I don't send such PDFs to an external service which rejects them.
Is there any free library that works on .NET Core? I wasn't able to find one. Or has anyone implemented this by reading the binary file?
A PDF does not HAVE TO declare the page size in a single place, since every page can be a different size; thus 100 pages may have 100 different page sizes.
However, many PDFs will contain a text entry for one or more pages, so you can (depending on construction) parse the file as text for the /MediaBox and/or potentially /CropBox dimensions.
So the first PDF example I pick and open in WordPad to search for /MediaBox tells me it's 210 mm x 297 mm (i.e. my local A4): /MediaBox [0 0 594.95996 841.91998], and for a 3-page file all 3 entries are the same.
You can try that from the command line as
type "filename.pdf" | find /i "/media"
but it may not work in all cases, so for a bigger chance of a result (but more chaff) use
type "filename.pdf" | findstr /i "^/media ^/crop"
The values are in the default unit of 72 points per inch (so they can be divided by 72 as a rough guide); however, that's not your real aim, since you know you don't want more than 17 x 72 = 1224.
So in simple terms, if either value is over 1224 then you could reject the file as "TOO BIG".
HOWEVER, you also need to consider those two 0 values (the box origin): if one were +100 then the limit becomes 100 more, and more importantly, if one were -100 then your desired 17" restriction is already reached when the upper value is only 1124.
So you can write a simple test in any method or language (even CMD); however, it will require too much expanding to cover all cases, so:
Seriously, I would use/shell out to a one-line command tool like xpdf/poppler pdfinfo to parse all the different types of PDF, and then grep its output.
The output is similar for both, with many lines, but for your need:
xpdf\pdfinfo -box filename
gives Page size: 594.96 x 841.92 pts (A4) (rotated 0 degrees)
and
poppler\pdfinfo -box filename
gives Page size: 594.96 x 841.92 pts (A4)
Thus, to check that the file does not exceed 17" in either direction, it should be easy to set up a comparison testing that both values are under 1224.01.
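The question asks about .NET Core, but the shape of the check is the same in any language: run pdfinfo, pick out the "Page size:" line, and compare both numbers against 17 x 72 = 1224. A rough C sketch of that idea (error handling kept minimal, pdfinfo assumed to be on the PATH, only the first reported page is checked, and popen/pclose become _popen/_pclose on Windows):

#include <stdio.h>

/* Returns 1 if both page dimensions are within 17 inches (1224 pts),
   0 if the page is too big, -1 if pdfinfo could not be run or parsed. */
int page_fits_17in(const char *pdfpath) {
    char cmd[512], line[256];
    double w = 0.0, h = 0.0;

    snprintf(cmd, sizeof cmd, "pdfinfo -box \"%s\"", pdfpath);
    FILE *p = popen(cmd, "r");
    if (!p) return -1;
    while (fgets(line, sizeof line, p)) {
        /* e.g. "Page size: 594.96 x 841.92 pts (A4)" */
        if (sscanf(line, "Page size: %lf x %lf", &w, &h) == 2)
            break;
    }
    pclose(p);
    if (w <= 0.0 || h <= 0.0) return -1;
    return (w < 1224.01 && h < 1224.01);
}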

jpg file difference : from wireshark tcp stream and from a C++ socket

I'm trying to record a JPEG image sent by an Ethernet camera in an MJPG stream.
The image I obtain with my Borland C++ application (VSPCIP) looks identical in Notepad++ to the TCP stream saved from Wireshark, except for the number of characters: 15540 in my file versus 15342 in the Wireshark file, whereas the JPEG Content-Length is announced to be 15342.
That is to say, I have 198 more non-displayable characters than expected, but both files have 247 lines.
Here are the two files :
http://demo.ovh.com/fr/a61295d39f963998ba1244da2f55a27d/
Which tool could I use, in Notepad++ or another editor, to view the non-displayable characters? (In Notepad++ I tried displaying the files as UTF-8 and as ANSI: they still look identical, even though they don't have the same number of characters.)
std::ofstream by default opens the file in text mode, which means it might translate newline characters ('\n', binary 0x0a) into carriage-return/newline sequences ("\r\n", binary 0x0d 0x0a). That would account for the 198 extra bytes: one inserted 0x0d for each 0x0a in the image data, with the number of lines unchanged.
Open the output file in binary mode and it will most likely solve your problem:
std::ofstream os("filename", std::ios_base::out | std::ios_base::binary);

Using NPH mode in a CGI executable (EXE) on IIS 7.0 - can this be "realtime"?

I am writing a prototype CGI application to run on an IIS 7.0 server, which will ultimately send a number of HTTP chunked responses to a query that may take some time to generate the actual data.
Before delving into that side of things (i.e. generating the real data), I wanted to test whether my CGI app is in fact running in non-parsed-header mode (i.e. the CGI app itself generates all headers, and IIS just returns all output to the requesting HTTP browser/agent).
So I wrote a simple app that returns the following (note the timestamps: there is a 1-second pause between each timestamped line, and as the text implies, this takes just over 10 seconds to execute; you can in fact run the CGI app on the command line and observe this timing):
HTTP/1.1 200 OK
Date: Sun, 17 Jul 2011 09:07:47 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
17;
NPH CGI Chunked Example
4D;
the fat cat sat on the mat, perplexed at the antics of the quick brown fox.
2D;
counting ... 1... the time is now 09:07:48
2D;
counting ... 2... the time is now 09:07:49
2D;
counting ... 3... the time is now 09:07:50
2D;
counting ... 4... the time is now 09:07:50
2D;
counting ... 5... the time is now 09:07:51
2D;
counting ... 6... the time is now 09:07:52
2D;
counting ... 7... the time is now 09:07:53
2D;
counting ... 8... the time is now 09:07:54
2D;
counting ... 9... the time is now 09:07:55
2E;
counting ... 10... the time is now 09:07:56
18;
NPH Chunked Example Ends
0
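For reference, the shape of the program that produces output like this is roughly the following sketch (not the original source; write_chunk() is a hypothetical helper, and on Windows stdout would also need to be switched to binary mode with _setmode, with Sleep(1000) in place of sleep(1)). The detail that matters for the question below is the fflush() after every chunk, so each line leaves the process as soon as it is generated:

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* Emit one HTTP chunk: size in hex, CRLF, data, CRLF, then flush. */
static void write_chunk(const char *s) {
    printf("%zX;\r\n%s\r\n", strlen(s), s);
    fflush(stdout);
}

int main(void) {
    printf("HTTP/1.1 200 OK\r\n"
           "Content-Type: text/plain\r\n"
           "Transfer-Encoding: chunked\r\n\r\n");
    fflush(stdout);

    write_chunk("NPH CGI Chunked Example\n");
    for (int i = 1; i <= 10; i++) {
        char line[80];
        time_t t = time(NULL);
        struct tm *tm = localtime(&t);
        snprintf(line, sizeof line,
                 "counting ... %d... the time is now %02d:%02d:%02d\n",
                 i, tm->tm_hour, tm->tm_min, tm->tm_sec);
        write_chunk(line);
        sleep(1);
    }
    write_chunk("NPH Chunked Example Ends\n");
    printf("0\r\n\r\n");   /* terminating zero-length chunk */
    return 0;
}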
When I install the exe on the server and open it in Internet Explorer, the expected text is displayed (times differ as I ran them at a different time of day, and in a different time zone):
NPH CGI Chunked Example
the fat cat sat on the mat, perplexed at the antics of the quick brown fox.
counting ... 1... the time is now 16:14:58
counting ... 2... the time is now 16:14:59
counting ... 3... the time is now 16:15:00
counting ... 4... the time is now 16:15:01
counting ... 5... the time is now 16:15:02
counting ... 6... the time is now 16:15:03
counting ... 7... the time is now 16:15:04
counting ... 8... the time is now 16:15:05
counting ... 9... the time is now 16:15:06
counting ... 10... the time is now 16:15:07
NPH Chunked Example Ends
HOWEVER - nothing is displayed until after the entire 10 seconds has elapsed.
(Note: same result in Chrome.)
To make sure, I crafted an HTTP request in a text editor and pasted it into telnet on port 80 to the server, and observed a 10-second gap before the entire response appeared as shown above.
I get the same result regardless of whether the exe has the prefix "nph-" or not; I performed the same test in telnet with "nph-hello.exe", "nphhello.exe" and "hello.exe", using the following:
(note: identifying addresses are adjusted for this post)
GET /cgi-bin/nph-hello.exe HTTP/1.0
Host: www.myserver.com:80
From: someone#gmail.com
User-Agent: telnet/1.0
(For the relevance of the exe naming prefix, read the notes at http://support.microsoft.com/default.aspx?scid=kb;EN-US;q176113 under "Resolution", beginning "as a workaround...".)
I do not observe any discernible output difference in telnet relating to the exe name prefix, which leads me to believe this information is either out of date, or that something else needs to be enabled to turn on this feature. (That clearly needs to happen before the CGI app is started, which explains why a name prefix is used rather than something the app itself does: IIS needs to know whether to wait for a complete response.)
Whilst on the surface it is achieving the desired result (the data finds its way to the requesting HTTP agent), I would prefer it to happen in real time, so progress updates can be displayed while the request is processed.
So my questions:
1) Is there some setting that needs to be made to turn on NPH (or is the name prefix the correct mechanism)?
2) Is there some test the CGI app can make to tell whether it is really in NPH mode?
3) Is it intended that stdout (i.e. the response data) be piped to the requesting agent in real time, or does IIS absolutely have to buffer and parse it in some way first?

Can I determine if the terminal interprets the C1 control codes?

ISO/IEC 2022 defines the C0 and C1 control codes. The C0 set are the familiar codes between 0x00 and 0x1f in ASCII, ISO-8859-1 and UTF-8 (eg. ESC, CR, LF).
Some VT100 terminal emulators (eg. screen(1), PuTTY) support the C1 set, too. These are the values between 0x80 and 0x9f (so, for example, 0x84 moves the cursor down a line).
I am displaying user-supplied input. I do not wish the user input to be able to alter the terminal state (e.g. move the cursor). I am currently filtering out the character codes in the C0 set; however, I would like to conditionally filter out the C1 set too, if the terminal will interpret them as control codes.
Is there a way of getting this information from a database like termcap?
The only way to do it that I can think of is using C1 requests and testing the return value:
$ echo `echo -en "\x9bc"`
^[[?1;2c
$ echo `echo -e "\x9b5n"`
^[[0n
$ echo `echo -e "\x9b6n"`
^[[39;1R
$ echo `echo -e "\x9b0x" `
^[[2;1;1;112;112;1;0x
The above ones are:
CSI c Primary DA; request Device Attributes
CSI 5 n DSR; Device Status Report
CSI 6 n CPR; Cursor Position Report
CSI 0 x DECREQTPARM; Request Terminal Parameters
The terminfo/termcap that ESR maintains (link) has a couple of these requests in user strings 7 and 9 (user7/u7, user9/u9):
# INTERPRETATION OF USER CAPABILITIES
#
# The System V Release 4 and XPG4 terminfo format defines ten string
# capabilities for use by applications, .... In this file, we use
# certain of these capabilities to describe functions which are not covered
# by terminfo. The mapping is as follows:
#
# u9 terminal enquire string (equiv. to ANSI/ECMA-48 DA)
# u8 terminal answerback description
# u7 cursor position request (equiv. to VT100/ANSI/ECMA-48 DSR 6)
# u6 cursor position report (equiv. to ANSI/ECMA-48 CPR)
#
# The terminal enquire string should elicit an answerback response
# from the terminal. Common values for u9 will be ^E (on older ASCII
# terminals) or \E[c (on newer VT100/ANSI/ECMA-48-compatible terminals).
#
# The cursor position request (u7) string should elicit a cursor position
# report. A typical value (for VT100 terminals) is \E[6n.
#
# The terminal answerback description (u8) must consist of an expected
# answerback string. The string may contain the following scanf(3)-like
# escapes:
#
# %c Accept any character
# %[...] Accept any number of characters in the given set
#
# The cursor position report (u6) string must contain two scanf(3)-style
# %d format elements. The first of these must correspond to the Y coordinate
# and the second to the X coordinate. If the string contains the sequence %i, it is
# taken as an instruction to decrement each value after reading it (this is
# the inverse sense from the cup string). The typical CPR value is
# \E[%i%d;%dR (on VT100/ANSI/ECMA-48-compatible terminals).
#
# These capabilities are used by tack(1m), the terminfo action checker
# (distributed with ncurses 5.0).
Example:
$ echo `tput u7`
^[[39;1R
$ echo `tput u9`
^[[?1;2c
Of course, if you only want to prevent display corruption, you can take the approach less uses and let the user switch between displaying/not displaying control characters (the -r and -R options in less). Also, if you know your output charset, the ISO-8859 charsets have the C1 range reserved for control codes (so they have no printable characters in that range).
Actually, PuTTY does not appear to support C1 controls.
The usual way of testing this feature is with vttest, which provides menu entries for switching the input and output separately to use 8-bit controls. PuTTY fails the sanity check for each of those menu entries, and if the check is disabled, the result confirms that PuTTY does not honor those controls.
I don't think there's a straightforward way to query whether the terminal supports them. You can try nasty hacky workarounds (like printing them and then querying the cursor position), but I really don't recommend anything along these lines.
I think you could just filter out these C1 codes unconditionally. Unicode declares the U+0080..U+009F range as control characters anyway; I don't think you should ever use them for anything else.
(Note: you used the example 0x84 for cursor down. It's in fact U+0084 encoded in whichever encoding the terminal uses, e.g. 0xC2 0x84 for UTF-8.)
Doing it 100% automatically is challenging at best. Many, if not most, Unix interfaces are smart (xterms and whatnot), but you don't actually know whether you are connected to an ASR33 or a PC running MSDOS.
You could try some of the terminal interrogation escape sequences and timeout if there is no reply. But then you might have to fall back and maybe ask the user what kind of terminal they are using.
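A rough sketch of that interrogate-and-timeout idea, assuming a POSIX tty (this is the hacky approach discouraged above, shown only to make the mechanism concrete): put the terminal into raw mode, send a C1 CSI followed by 'c' (Primary DA), and see whether any reply arrives within a short timeout. A reply suggests the terminal interpreted the 8-bit CSI as a control code.

#include <unistd.h>
#include <termios.h>
#include <sys/select.h>

/* Returns 1 if the terminal answered the C1 DA request, 0 if it timed
   out, -1 if stdin is not a tty we can reconfigure. */
int terminal_answers_c1(void) {
    struct termios saved, raw;
    char buf[64];
    int got = 0;

    if (tcgetattr(STDIN_FILENO, &saved) != 0) return -1;
    raw = saved;
    cfmakeraw(&raw);                         /* BSD/glibc, not strictly POSIX */
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    write(STDOUT_FILENO, "\x9b" "c", 2);     /* C1 CSI + 'c' = Primary DA */

    fd_set fds;
    struct timeval tv = { 0, 200000 };       /* 200 ms timeout */
    FD_ZERO(&fds);
    FD_SET(STDIN_FILENO, &fds);
    if (select(STDIN_FILENO + 1, &fds, NULL, NULL, &tv) > 0)
        got = read(STDIN_FILENO, buf, sizeof buf) > 0;

    tcsetattr(STDIN_FILENO, TCSANOW, &saved);
    return got;
}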
