FileOutputStream: write to serial port

I'm trying to write single bytes to a serial port in Vala using a FileOutputStream:
var dev = File.new_for_path("/dev/ttyACM0");
var dev_io = dev.open_readwrite();
var dev_o = dev_io.output_stream as FileOutputStream;
dev_o.write({0x13});
dev_o.flush();
My aim is to do something similar to echo -en '\x13' > /dev/ttyACM0, but it just behaves weirdly. The byte 0x13 seems to be written multiple times; sometimes /dev/ttyACM0 is blocked for a few seconds, sometimes it's even blocked after the Vala program has exited, and sometimes it's not blocked at all. If I write my FileOutputStream to a file and send that to the serial port via cat byte_file > /dev/ttyACM0, everything is fine.
It seems to me that GIO struggles with the fact that the file is a device. My problem is that I need GIO to monitor whether /dev/ttyACM0 is plugged in, and for asynchronous reading.

The problem is most likely that you have to configure the serial port to set things like baud rate, flow control, and parity. If you don't get all those options right there is a good chance that you'll end up with garbage data like you describe.
Basically, you first need an integer descriptor for the file; the easiest way to get one is probably to just open the file using Posix.open, but you can also use GLib.FileStream.fileno to get the integer descriptor of a GLib.FileStream, etc. Next, use Posix.cfmakeraw and Posix.cfsetspeed to configure it. Then, to get your nice GIO streams, just pass the integer descriptor to the default GLib.UnixInputStream/GLib.UnixOutputStream constructors.
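For illustration, here's a minimal C sketch of that sequence (Vala's Posix.open, Posix.cfmakeraw, and Posix.cfsetspeed are thin bindings over these C calls; the function name and error handling are my own, and B115200 is just an example speed):
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Open a serial device, put it in raw mode, and set the baud rate.
   Returns the descriptor, which can then be handed to
   GLib.UnixInputStream / GLib.UnixOutputStream from Vala. */
int open_serial_raw(const char *path, speed_t speed)
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    if (tcgetattr(fd, &tio) != 0) {
        close(fd);
        return -1;
    }
    cfmakeraw(&tio);            /* no echo, no canonical mode, no CR/LF mangling */
    cfsetspeed(&tio, speed);    /* e.g. B115200; sets input and output rates */
    if (tcsetattr(fd, TCSANOW, &tio) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}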
I wrote a class to handle serial communication in Vala many years ago. As an example it's a bit horrible: it's convoluted (I had plans to use it as an abstraction layer), doesn't use GIO or async (Vala didn't have the async keyword yet), uses char[] instead of uint8[] (we hadn't yet standardized on uint8[]), and so on, but it should help you understand what you need to do. Between that example and what I wrote above you should be able to get it working, but if you're still having trouble after you've played with it, let me know and I can throw together a quick example.

Related

How to get raw (un-decoded) byte arrays from a TCPSocket in Ruby

I've been using Ruby to set up a TCPSocket with a server and I've hit a snag. When receiving data from the socket (with either a socket.gets or a socket.recv) I get something like this:
x00\x03!\xB2\x00\x00*\xCF
What I get when I capture the packets in Wireshark is
x00\x03\x21\xB2\x00\x00\x2a\xCF
As you can see, the \x21 is decoded into the ASCII equivalent ! and the \x2a is decoded into the ASCII equivalent *.
I've checked and googled a ton of times and have not yet found a solution to get the raw, un-decoded information. I have a parser built that will search for the relevant data from the stream and grab what I need, but I don't want to have to waste time re-encoding it before I have to decode it. Alternatively, I could incorporate ASCII into my parser, but that would be a huge pain. There are a lot of bytes in this stream, and re-encoding them all would be time consuming. I also see that netcat returns the same output from the TCP stream that Ruby does. I could not figure out how to get netcat to output the un-decoded byte arrays either.
Code:
require 'socket'

s = TCPSocket.new "10.0.0.3", 27000
while true do
  item = s.recv(5000)
  puts item
  puts item.inspect
end
This is my first foray into socket programming, so I apologize if I missed something very obvious.
I kind of invented the problem in my head; I am dumb.
To solve this, all you need to do is take the string of TCP information and call unpack("H*") on it like this:
"x00\x03!\xB2\x00\x00*\xCF".unpack("H*")
=> ["7830300321b200002acf"]
Which is exactly like x00\x03\x21\xB2\x00\x00\x2a\xCF
Now I just need to adjust my parser to split it, or deal with the big clump of byte arrays.
More info on unpack

Pharo: read byte from Arduino serial port

I have an Arduino hanging off /dev/ttyUSB1, communicating at 115k baud. The statements below work fine up to the 's next' method call, where Pharo hangs. The Arduino responds to the '99' command by sending a single character $1 back to the computer. If I pull out the cable, the program continues and s contains the character $1 just like it should, but not until I pull out the cable. So it's my impression that 's next' does not return after it reads just a single byte (OK, sure, there's nothing that says it should return after reading a single byte). How do I read a single byte from a stream in Pharo? Or how do I open a read/write byte stream? I haven't found anything in the source classes that seems to do this. I've tried setting the stream to ascii, to binary, to text, and it doesn't change the behavior.
s := FileStream oldFileNamed: '/dev/ttyUSB1'.
s readWrite.
s nextPutAll: '99'. "'99' is successfully received by Arduino"
s next. "hangs here"
s close.
Thanks for your help.
Take a look at the class side of FileStream. There you'll notice that you are getting a MultiByteFileStream (the concreteStream) when asking FileStream for an oldFileNamed:.
There can be a TextConverter or buffer involved. open:forWrite: of MultiByteFileStream is called, and that calls super; StandardFileStream>>open:forWrite: calls enableReadBuffering.
You probably want to call disableReadBuffering on your stream.
There is an Arduino package that has all these issues solved, take a look at this repo:
http://ss3.gemstone.com/ss/Arduino.html

Arduino Bootloader

Can someone please explain how the Arduino bootloader works? I'm not looking for a high level answer here, I've read the code and I get the gist of it.
There's a bunch of protocol interaction that happens between the Arduino IDE and the bootloader code, ultimately resulting in a number of inline assembly instructions that self-program the flash with the program being transmitted over the serial interface.
What I'm not clear on is on line 270:
void (*app_start)(void) = 0x0000;
...which I recognize as the declaration, and initialization to NULL, of a function pointer. There are subsequent calls to app_start in places where the bootloader is intended to delegate to execution of the user-loaded code.
Surely, somehow app_start needs to get a non-NULL value at some point for this to all come together. I'm not seeing that in the bootloader code... is it magically linked by the program that gets loaded by the bootloader? I presume that main of the bootloader is the entry point into software after a reset of the chip.
Wrapped up in the 70 or so lines of assembly must be the secret decoder ring that tells the main program where app_start really is? Or perhaps it's some implicit knowledge being taken advantage of by the Arduino IDE? All I know is that if someone doesn't change app_start to point somewhere other than 0, the bootloader code would just spin on itself forever... so what's the trick?
Edit
I'm interested in trying to port the bootloader to a Tiny AVR that doesn't have separate memory space for boot loader code. As it becomes apparent to me that the bootloader code relies on certain fuse settings and chip support, I guess what I'm really interested in knowing is: what does it take to port the bootloader to a chip that doesn't have those fuses and hardware support (but still has self-programming capability)?
On NULL
Address 0 does not a null pointer make. A "null pointer" is something more abstract: a special value that applicable functions should recognize as being invalid. C says the special value is 0, and while the language says dereferencing it is "undefined behavior", in the simple world of microcontrollers it usually has a very well-defined effect.
ATmega Bootloaders
Normally, on reset, the AVR's program counter (PC) is initialized to 0, thus the microcontroller begins executing code at address 0.
However, if the Boot Reset Fuse ("BOOTRST") is set, the program counter is instead initialized to an address of a block at the upper end of the memory (where that is depends on how the fuses are set, see a datasheet (PDF, 7 MB) for specifics). The code that begins there can do anything—if you really wanted you could put your own program there if you use an ICSP (bootloaders generally can't overwrite themselves).
Often, though, it's a special program (a bootloader) that is able to read data from an external source (often via UART, I2C, CAN, etc.) to rewrite program code (stored in internal or external memory, depending on the micro). The bootloader will typically look for a "special event", which can literally be anything, but for development is most conveniently something on the data bus it will pull the new code from. (For production it might be a special logic level on a pin, since that can be checked nearly instantly.) If the bootloader sees the special event, it enters bootloading mode, where it reflashes the program memory; otherwise it passes control off to user code.
As an aside, the point of the bootloader fuse and upper memory block is to allow the use of a bootloader with no modifications to the original software (so long as it doesn't extend all the way up into the bootloader's address). Instead of flashing with just the original HEX and desired fuses, one can flash the original HEX, bootloader, and modified fuses, and presto, bootloader added.
Anyways, in the case of the Arduino, which I believe uses the protocol from the STK500, it attempts to communicate over the UART, and if it gets either no response in the allotted time:
uint32_t count = 0;
while (!(UCSRA & _BV(RXC))) {   // loops until a byte received
    count++;
    if (count > MAX_TIME_COUNT) // 4 seconds or whatever
        app_start();
}
or if it errors too much by getting an unexpected response:
if (++error_count == MAX_ERROR_COUNT)
    app_start();
It passes control back to the main program, located at 0. In the Arduino source seen above, this is done by calling app_start();, defined as void (*app_start)(void) = 0x0000;.
Because it's couched as a C function call, before the PC hops over to 0 it will push the current PC value onto the stack, which also contains other variables used in the bootloader (e.g. count and error_count from above). Does this steal RAM from your program? Well, after the PC is set to 0, the operations that are executed blatantly "violate" what a proper C function (one that would eventually return) should do. Among other initialization steps, it resets the stack pointer (effectively obliterating the call stack and all local variables), reclaiming RAM. Global/static variables are initialized to 0, and their addresses can freely overlap with whatever the bootloader was using, because the bootloader and user programs were compiled independently.
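To make the hand-off concrete, here's a minimal sketch of the pattern described above; bootload_event_detected is a hypothetical stand-in for the UART-polling and timeout logic in the real source:
#include <stdint.h>

extern uint8_t bootload_event_detected(void);  /* hypothetical: UART/timeout check */

/* Function pointer aimed at address 0, where the user program's
   reset vector lives. */
void (*app_start)(void) = 0x0000;

int main(void)
{
    if (!bootload_event_detected())
    {
        /* "Calling" address 0 pushes a return address that is never used:
           the user program's startup code resets the stack pointer. */
        app_start();
    }
    /* ...otherwise run the STK500-style flash-programming protocol... */
    return 0;
}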
The only lasting effects from the bootloader are modifications to hardware (peripheral) registers, which a good bootloader won't leave in a detrimental state (turning on peripherals that might waste power when you try to sleep). It's generally good practice to also fully initialize peripherals you will use, so even if the bootloader did something strange you'll set it how you want.
ATtiny Bootloaders
On ATtinys, as you mentioned, there is no luxury of the bootloader fuses or memory, so your code will always start at address 0. You might be able to put your bootloader into some higher pages of memory and point your RESET vector at it, then whenever you receive a new hex file to flash with, take the command that's at address 0:1, replace it with the bootloader address, then store the replaced address somewhere else to call for normal execution. (If it's an RJMP ("relative jump") the value will obviously need to be recalculated)
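As an illustration of that recalculation (my own sketch, not from any bootloader source): an AVR RJMP is encoded as 1100 kkkk kkkk kkkk, where k is a signed 12-bit word offset relative to the next instruction, so building a replacement jump looks roughly like this:
#include <stdint.h>

/* Build an RJMP opcode that jumps from word address 'from' to word
   address 'to'. Offsets are in 16-bit words, relative to from + 1,
   and must fit in 12 signed bits. */
uint16_t make_rjmp(uint16_t from, uint16_t to)
{
    int16_t k = (int16_t)(to - from - 1);
    return (uint16_t)(0xC000 | ((uint16_t)k & 0x0FFF));
}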
Edit
I'm interested in trying to port the bootloader to a Tiny AVR that doesn't have separate memory space for boot loader code. As it becomes apparent to me that the bootloader code relies on certain fuse settings and chip support, I guess what I'm really interested in knowing is: what does it take to port the bootloader to a chip that doesn't have those fuses and hardware support (but still has self-programming capability)?
Depending on your ultimate goal it may be easier to just create your own bootloader rather than try to port one. You really only need to learn a few items for that part.
1) uart tx
2) uart rx
3) self-flash programming
Each of these can be learned separately and then combined into a bootloader. You will want a part whose flash you can also write over SPI or the like, so that if your bootloader doesn't work, or whatever the part came with gets messed up, you can still continue development.
Whether you port or roll your own you will still need to understand those three basic things with respect to that part.
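As a rough sketch of item 1 on a classic ATmega (the register names match the UCSRA/RXC usage in the bootloader snippet above, but they vary by part, so check your datasheet):
#include <avr/io.h>
#include <stdint.h>

/* Polled UART transmit: wait for the data register to empty, then load it. */
static void uart_tx(uint8_t c)
{
    while (!(UCSRA & _BV(UDRE)))
        ;
    UDR = c;
}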

Windows XP embedded - RS485 problems

We've got a system running XP embedded, with COM2 being a hardware RS485 port.
In my code, I'm setting up the DCB with RTS_CONTROL_TOGGLE. I'd assume that would do what it says... turn off RTS in kernel mode once the write empty interrupt happens. That should be virtually instant.
Instead, we see on a scope that the PC is driving the bus anywhere from 1-8 milliseconds longer than the end of the message. The device that we're talking to responds in about 1-5 milliseconds. So... communication corruption galore. No, there's no way to change the target's response time.
We've now hooked up to the RS232 port and connected the scope to the TX and RTS lines, and we're seeing the same thing. The RTS line stays high 1-8 milliseconds after the message is sent.
We've also tried turning off the FIFO, or setting the FIFO depths to 1, with no effect.
Any ideas? I'm about to try manually controlling the RTS line from user mode with REALTIME priority during the "SendFile, clear RTS" cycle. I don't have many hopes that this will work either. This should not be done in user mode.
RTS_CONTROL_TOGGLE does not work on our embedded XP platform (there's a variable 1-15 millisecond delay before it turns RTS off after transmit). It's possible I could get that down if I altered the time quantum to 1 ms using timeBeginPeriod(1), etc., but I doubt it would be reliable or enough to matter. (The device sometimes responds in about 1 millisecond.)
The final solution is really ugly but it works on this hardware. I would not use it on anything where the hardware is not fixed in stone.
Basically:
1) set the FIFOs on the serial port's device manager page to off or 1 character deep
2) send your message + 2 extra bytes using this code:
int WriteFile485(HANDLE hPort, void* pvBuffer, DWORD iLength, DWORD* pdwWritten, LPOVERLAPPED lpOverlapped)
{
    int iOldClass = GetPriorityClass(GetCurrentProcess());
    int iOldPriority = GetThreadPriority(GetCurrentThread());

    /* Boost priority so nothing preempts us between SETRTS and CLRRTS. */
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    EscapeCommFunction(hPort, SETRTS);   /* take the RS485 bus */
    BOOL bRet = WriteFile(hPort, pvBuffer, iLength, pdwWritten, lpOverlapped);
    EscapeCommFunction(hPort, CLRRTS);   /* release the bus */

    SetPriorityClass(GetCurrentProcess(), iOldClass);
    SetThreadPriority(GetCurrentThread(), iOldPriority);

    return bRet;
}
The WriteFile() returns when the last byte or two have been written to the serial port. They have NOT gone out the port yet, thus the need to send 2 extra bytes. One or both of them will get trashed when you do CLRRTS.
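A hypothetical call site might look like this (hPort assumed to be a handle opened on COM2 with CreateFile; the last two bytes are the sacrificial padding):
#include <windows.h>

/* Send a 3-byte payload plus 2 pad bytes that CLRRTS may cut off. */
void send_example(HANDLE hPort)
{
    BYTE msg[] = { 0x01, 0x02, 0x03, 0x00, 0x00 };
    DWORD dwWritten = 0;
    WriteFile485(hPort, msg, sizeof(msg), &dwWritten, NULL);
}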
Like I said... it's ugly.
Any ideas?
You may find that there's source code for the serial port driver in the DDK, which would let you see how that option is supposed to be implemented: i.e. whether it's at interrupt-level, at DPC-level, or worse.
Other possibilities include rewriting the driver; using a 3rd-party RS485 driver if you can find one; or using 3rd-party RS485 hardware with its own driver (e.g. at least in the past, 3rd parties used to make "intelligent serial port boards" with 32 ports, deep buffers, and their own microprocessor; I expect that RS485 is a problem that's been solved by someone).
8 milliseconds does seem like a disappointingly long time; I know that XP isn't an RTOS but I'd expect it to (usually) do better than that. Another thing to look at is whether there are other high-priority threads running which may be interfering. If you've been boosting the priorities of some threads in your own application, perhaps instead you should be reducing the priorities of other threads.
I'm about to try manually controlling the RTS line from user mode with REALTIME priority during the "SendFile, clear RTS" cycle.
Don't let that thread spin out of control: IME a thread like that, if it's buggy, can preempt every other user-mode thread forever.

Piping as interprocess communication

I am interested in writing separate program modules that run as independent threads that I could hook together with pipes. The motivation would be that I could write and test each module completely independently, perhaps even write them in different languages, or run the different modules on different machines. There are a wide variety of possibilities here. I have used piping for a while, but I am unfamiliar with the nuances of its behaviour.
It seems like the receiving end will block waiting for input, which I would expect, but will the sending end block sometimes waiting for someone to read from the stream?
If I write an EOF to the stream, can I keep writing to that stream until I close it?
Are there differences in the behaviour of named and unnamed pipes?
Does it matter which end of the pipe I open first with named pipes?
Is the behaviour of pipes consistent between different Linux systems?
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
Are there any other questions I should be asking or issues I should be aware of if I want to use pipes in this way?
Wow, that's a lot of questions. Let's see if I can cover everything...
It seems like the receiving end will block waiting for input, which I would expect
You expect correctly: an actual read call will block until something is there. However, I believe there are some C functions that will allow you to 'peek' at what (and how much) is waiting in the pipe. Unfortunately, I don't remember if this blocks as well.
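For what it's worth, FIONREAD is one such mechanism on Linux-like systems: the ioctl itself does not block, it just reports how many bytes are queued (a quick sketch):
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    int available = 0;

    pipe(fd);
    write(fd[1], "hi", 2);

    /* Peek without reading: how many bytes are sitting in the pipe? */
    if (ioctl(fd[0], FIONREAD, &available) == 0)
        printf("%d bytes waiting\n", available);
    return 0;
}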
will the sending end block sometimes waiting for someone to read from the stream
No, sending should never block. Think of the ramifications if this were a pipe across the network to another computer. Would you want to wait (through possibly high latency) for the other computer to respond that it received it? Now this is a different case if the reader handle of the destination has been closed. In this case, you should have some error checking to handle that.
If I write an EOF to the stream, can I keep writing to that stream until I close it?
I would think this depends on what language you're using and its implementation of pipes. In C, I'd say no. In a Linux shell, I'd say yes. Someone else with more experience would have to answer that.
Are there differences in the behaviour of named and unnamed pipes?
As far as I know, yes. However, I don't have much experience with named vs unnamed. I believe the difference is:
Single direction vs Bidirectional communication
Reading AND writing to the "in" and "out" streams of a thread
Does it matter which end of the pipe I open first with named pipes?
Generally no, but you could run into problems on initialization trying to create and link the threads with each other. You'd need to have one main thread that creates all the sub-threads and syncs their respective pipes with each other.
Is the behaviour of pipes consistent between different Linux systems?
Again, this depends on what language, but generally yes. Ever heard of POSIX? That's the standard (at least for Linux; Windows does its own thing).
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
This is getting into a little more of a gray area. The answer should be no since the shell should essentially be making system calls. However, everything up until that point is up for grabs.
Are there any other questions I should be asking?
The questions you've asked show that you have a decent understanding of the system. Keep researching and focus on what level you're going to be working on (shell, C, and so on). You'll learn a lot more by just trying it, though.
This is all based on a UNIX-like system; I'm not familiar with the specific behavior of recent versions of Windows.
It seems like the receiving end will block waiting for input, which I would expect, but will the sending end block sometimes waiting for someone to read from the stream?
Yes, although on a modern machine it may not happen often. The pipe has an intermediate buffer that can potentially fill up. If it does, the write side of the pipe will indeed block. But if you think about it, there aren't a lot of files that are big enough to risk this.
If I write an EOF to the stream, can I keep writing to that stream until I close it?
Um, you mean like a CTRL-D, 0x04? Sure, as long as the stream is set up that way. Viz.
506 # cat | od -c
abc
^D
efg
0000000   a   b   c  \n 004  \n   e   f   g  \n
0000012
Are there differences in the behaviour of named and unnamed pipes?
Yes, but they're subtle and implementation dependent. The biggest one is that you can write to a named pipe before the other end is running; with unnamed pipes, the file descriptors get shared during the fork/exec process, so there's no way to access the transient buffer without the processes being up.
Does it matter which end of the pipe I open first with named pipes?
Nope.
Is the behaviour of pipes consistent between different Linux systems?
Within reason, yes. Buffer sizes etc may vary.
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
No. When you create a pipe, under the covers what happens is this: your parent process (the shell) creates a pipe, which has a pair of file descriptors, then does a fork/exec like this pseudocode:
Parent:
    create pipe, returning two file descriptors, call them fd[0] and fd[1]
    fork write-side process
    fork read-side process

Write-side:
    close fd[0]
    connect fd[1] to stdout
    exec writer program

Read-side:
    close fd[1]
    connect fd[0] to stdin
    exec reader program
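To make that concrete, here's a runnable C version of the same sequence (my own sketch; ls and wc -l are arbitrary example programs):
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) != 0) {                 /* fd[0] = read end, fd[1] = write end */
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {                   /* write-side process */
        close(fd[0]);
        dup2(fd[1], STDOUT_FILENO);      /* connect fd[1] to stdout */
        close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(127);
    }

    if (fork() == 0) {                   /* read-side process */
        close(fd[1]);
        dup2(fd[0], STDIN_FILENO);       /* connect fd[0] to stdin */
        close(fd[0]);
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(127);
    }

    close(fd[0]);                        /* parent keeps neither end open */
    close(fd[1]);
    while (wait(NULL) > 0)
        ;
    return 0;
}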
Are there any other questions I should be asking or issues I should be aware of if I want to use pipes in this way?
Is everything you want to do really going to lay out in a line like this? If not, you might want to think about a more general architecture. But the insight that having lots of separate processes interacting through the "narrow" interface of a pipe is desirable is a good one.
[Updated: I had the file descriptor indices reversed at first. They're correct now, see man 2 pipe.]
As Dashogun and Charlie Martin noted, this is a big question. Some parts of their answers are inaccurate, so I'm going to answer too.
I am interested in writing separate program modules that run as independent threads that I could hook together with pipes.
Be wary of trying to use pipes as a communication mechanism between threads of a single process. Because you would have both read and write ends of the pipe open in a single process, you would never get the EOF (zero bytes) indication.
If you were really referring to processes, then this is the basis of the classic Unix approach to building tools. Many of the standard Unix programs are filters that read from standard input, transform it somehow, and write the result to standard output. For example, tr, sort, grep, and cat are all filters, to name but a few. This is an excellent paradigm to follow when the data you are manipulating permits it. Not all data manipulations are conducive to this approach, but there are many that are.
The motivation would be that I could write and test each module completely independently, perhaps even write them in different languages, or run the different modules on different machines.
Good points. Be aware that there isn't really a pipe mechanism between machines, though you can get close to it with programs such as rsh or (better) ssh. However, internally, such programs may read local data from pipes and send that data to remote machines, but they communicate between machines over sockets, not using pipes.
There are a wide variety of possibilities here. I have used piping for a while, but I am unfamiliar with the nuances of its behaviour.
OK; asking questions is one (good) way to learn. Experimenting is another, of course.
It seems like the receiving end will block waiting for input, which I would expect, but will the sending end block sometimes waiting for someone to read from the stream?
Yes. There is a limit to the size of a pipe buffer. Classically, this was quite small - 4096 or 5120 were common values. You may find that modern Linux uses a larger value. You can use fpathconf() and _PC_PIPE_BUF to find out the size of a pipe buffer. POSIX only requires the buffer to be 512 (that is, _POSIX_PIPE_BUF is 512).
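If you want to check your own system, a quick sketch (note that _PC_PIPE_BUF reports the atomic-write limit, the classical "pipe buffer size"; as the test program further down shows, the total capacity can be larger):
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) != 0) {
        perror("pipe");
        return 1;
    }
    /* Ask the system how large an atomic pipe write can be. */
    printf("_PC_PIPE_BUF = %ld\n", fpathconf(fd[1], _PC_PIPE_BUF));
    return 0;
}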
If I write an EOF to the stream, can I keep writing to that stream until I close it?
Technically, there is no way to write EOF to a stream; you close the pipe descriptor to indicate EOF. If you are thinking of control-D or control-Z as an EOF character, then those are just regular characters as far as pipes are concerned - they only have an effect like EOF when typed at a terminal that is running in canonical mode (cooked, or normal).
Are there differences in the behaviour of named and unnamed pipes?
Yes, and no. The biggest differences are that unnamed pipes must be set up by one process and can only be used by that process and children who share that process as a common ancestor. By contrast, named pipes can be used by previously unassociated processes. The next big difference is a consequence of the first; with an unnamed pipe, you get back two file descriptors from a single function (system) call to pipe(), but you open a FIFO or named pipe using the regular open() function. (Someone must create a FIFO with the mkfifo() call before you can open it; unnamed pipes do not need any such prior setup.) However, once you have a file descriptor open, there is precious little difference between a named pipe and an unnamed pipe.
Does it matter which end of the pipe I open first with named pipes?
No. The first process to open the FIFO will (normally) block until there's a process with the other end open. If you open it for reading and writing (unconventional but possible), then you won't be blocked; if you use the O_NONBLOCK flag, you won't be blocked.
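Here's a minimal sketch of the writer side, using an example path; the open() is what blocks until a reader appears:
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* Create the FIFO if it doesn't already exist (EEXIST is fine on reruns). */
    if (mkfifo("/tmp/demo.fifo", 0644) != 0)
        perror("mkfifo");

    int fd = open("/tmp/demo.fifo", O_WRONLY);  /* blocks until a reader opens */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    write(fd, "hello\n", 6);
    close(fd);                                  /* reader then sees EOF */
    return 0;
}
Run something like cat /tmp/demo.fifo in another shell to act as the reader.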
Is the behaviour of pipes consistent between different Linux systems?
Yes. I've not heard of or experienced any problems with pipes on any of the systems where I've used them.
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
No: pipes and FIFOs are independent of the shell you use.
Are there any other questions I should be asking or issues I should be aware of if I want to use pipes in this way?
Just remember that you must close the reading end of a pipe in the process that will be writing, and the writing end of the pipe in the process that will be reading. If you want bidirectional communication over pipes, use two separate pipes. If you create complicated plumbing arrangements, beware of deadlock - it is possible. A linear pipeline does not deadlock, however (though if the first process never closes its output, the downstream processes may wait indefinitely).
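As a sketch of the two-pipe arrangement for bidirectional parent/child communication, with the unused ends closed as just described:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int to_child[2], from_child[2];
    if (pipe(to_child) != 0 || pipe(from_child) != 0) {
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {                      /* child: echo one message back */
        close(to_child[1]);                 /* close the ends we don't use */
        close(from_child[0]);
        char buf[64];
        ssize_t n = read(to_child[0], buf, sizeof(buf));
        if (n > 0)
            write(from_child[1], buf, n);
        close(to_child[0]);
        close(from_child[1]);
        _exit(0);
    }

    close(to_child[0]);                     /* parent: close unused ends */
    close(from_child[1]);
    write(to_child[1], "ping\n", 5);
    close(to_child[1]);                     /* child's read sees EOF after this */

    char buf[64];
    ssize_t n = read(from_child[0], buf, sizeof(buf));
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(from_child[0]);
    wait(NULL);
    return 0;
}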
I observed both above and in comments to other answers that pipe buffers are classically limited to quite small sizes. @Charlie Martin counter-commented that some versions of Unix have dynamic pipe buffers and these can be quite large.
I'm not sure which ones he has in mind. I used the test program that follows on Solaris, AIX, HP-UX, MacOS X, Linux and Cygwin / Windows XP (results below):
#include <unistd.h>
#include <signal.h>
#include <stdio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>

static const char *arg0;

static void err_syserr(char *str)
{
    int errnum = errno;
    fprintf(stderr, "%s: %s - (%d) %s\n", arg0, str, errnum, strerror(errnum));
    exit(1);
}

int main(int argc, char **argv)
{
    int pd[2];
    pid_t kid;
    size_t i = 0;
    char buffer[2] = "a";
    int flags;

    arg0 = argv[0];
    if (pipe(pd) != 0)
        err_syserr("pipe() failed");
    if ((kid = fork()) < 0)
        err_syserr("fork() failed");
    else if (kid == 0)
    {
        /* Child: close the write end and sleep until signalled. */
        close(pd[1]);
        pause();
    }
    /* else */
    close(pd[0]);
    /* F_GETFL returns the flags; F_SETFL takes them as a plain int. */
    if ((flags = fcntl(pd[1], F_GETFL)) == -1)
        err_syserr("fcntl(F_GETFL) failed");
    flags |= O_NONBLOCK;
    if (fcntl(pd[1], F_SETFL, flags) == -1)
        err_syserr("fcntl(F_SETFL) failed");
    /* Write one byte at a time until the pipe refuses any more. */
    while (write(pd[1], buffer, sizeof(buffer)-1) == sizeof(buffer)-1)
    {
        putchar('.');
        if (++i % 50 == 0)
            printf("%u\n", (unsigned)i);
    }
    if (i % 50 != 0)
        printf("%u\n", (unsigned)i);
    kill(kid, SIGINT);
    return 0;
}
I'd be curious to get extra results from other platforms. Here are the sizes I found. All the results are larger than I expected, I must confess, but Charlie and I may be debating the meaning of 'quite large' when it comes to buffer sizes.
8196 - HP-UX 11.23 for IA-64 (fcntl(F_SETFL) failed)
16384 - Solaris 10
16384 - MacOS X 10.5 (O_NONBLOCK did not work, though fcntl(F_SETFL) did not fail)
32768 - AIX 5.3
65536 - Cygwin / Windows XP (O_NONBLOCK did not work, though fcntl(F_SETFL) did not fail)
65536 - SuSE Linux 10 (and CentOS) (fcntl(F_SETFL) failed)
One point that is clear from these tests is that O_NONBLOCK works with pipes on some platforms and not on others.
The program creates a pipe, and forks. The child closes the write end of the pipe, and then goes to sleep until it gets a signal - that's what pause() does. The parent then closes the read end of the pipe, and sets the flags on the write descriptor so that it won't block on an attempt to write on a full pipe. It then loops, writing one character at a time, and printing a dot for each character written, and a count and newline every 50 characters. When it detects a write problem (buffer full, since the child is not reading a thing), it stops the loop, writes the final count, and kills the child.
