2.3x DM Serial Port Connections

I'd like to acquire an image, send a command to a serial port, take another image, send a new command, etc (using GMS 2.3x).
It seems like LaunchExternalProcess() would be cumbersome to use (you'd have to have a unique command prompt expression for each command to the serial port, right?).
I would assume SerialControl.dll would be easier to use, but I don't know where to get it. Would using the commands in SerialControl.dll be more efficient than using LaunchExternalProcess()?
The image acquisition times are long so communication speed is not a major factor.

Unfortunately, I don't know with which GMS configurations SerialControl.dll is installed - it is for sure only in online installations - but if you have the DLL, then using the commands therein seems to be the better way to go about the serial port communication, in particular if you want it to be a two-way communication where the script should also receive something via that connection.
However, if your installation doesn't have the DLL I would strongly advise against copying it from a different installation which has it, as compatibility between versions is not guaranteed and a mismatch can really mess with you.
I have never tried LaunchExternalProcess for serial communication so far. If you give it a test and it is fast enough for you, I don't see any problems there either. It surely gives you more flexibility, and I wouldn't be too concerned about it being "cumbersome". In the end, somebody always has to put some code somewhere. Depending on what you need, you can neatly wrap one or two script methods around the commands, and from there on it is just "a simple call" to be used as well.
Making a two-way communication with 'LaunchExternalProcess' is trickier, though, and requires one to be a bit creative, e.g. by storing things intermediately at a file location.
One should note that there is currently no official documentation on the script commands in SerialControl.dll and that they are not a Gatan-supported feature, which generally means: you can use them if they work, but you can't complain or request help if they don't work or get removed in later versions.
The commands in SerialControl.dll for the RS232C interfaces are:
Number SPOpen( Number port, Number baud, Number stop, Number parity, Number data )
Number SPOpen( String prefix )
void SPClose( Number serialPortL )
Number SPSendString( Number serialPortL, String string )
Number SPSendHex( Number serialPortL, String string )
void SPFlushInput( Number serialPortL )
Number SPGetPendingBytes( Number serialPortL )
Number SPGetTime( )
String SPReceiveString( Number serialPortL, Number maxLength, NumberVariable actual )
String SPReceiveHexString( Number serialPortL, Number maxLength, NumberVariable actual )
void SPSetRTS( Number serialPortL, Boolean on )
void SPSetDTR( Number serialPortL, Boolean on )
Any serial port opened by the commands must be closed by the script too, or it will remain open (and hence blocked).
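For illustration, here is a minimal, untested DM-script sketch of how these commands could be strung together. The port number, baud settings, and the command string "CMD1\r" are pure assumptions for the example, and the units of SPGetTime() are undocumented:
number port = SPOpen( 1, 9600, 1, 0, 8 )    // port, baud, stop, parity, data - assumed values
SPFlushInput( port )                        // discard any stale input
SPSendString( port, "CMD1\r" )              // "CMD1\r" is a placeholder command
// ... acquire the image here ...
number actual
string reply = SPReceiveString( port, SPGetPendingBytes( port ), actual )
SPClose( port )                             // always close, or the port stays blocked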
Edit: Later versions of GMS call the DLL SerialControlPlugin.dll

Related

Strange characters coming from Arduino when writing to SD card

I am connecting an SD card to an Arduino, which is then communicating over serial to Visual Studio. Everything works fine independently and 99% collectively. Now if I write this code in setup() it works fine. If I pop it into a function which is called when a specific character is sent from Visual Studio, I get the strange characters at the bottom.
I have debugged each step of the code and nothing seems abnormal. Unfortunately I cannot post the code as
1) it's far too long...
2) it's confidential...
:(
I understand that without code I cannot get a complete solution, but what are those characters? Why does it work perfectly in setup() but in a function I get all kinds of randomness?
myFile = SD.open("test.txt");
if (myFile) {
  Serial.println("test.txt:");
  // read from the file until there's nothing else in it:
  while (myFile.available()) {
    Serial.write(myFile.read());
  }
  // close the file:
  myFile.close();
} else {
  // if the file didn't open, print an error:
  Serial.println("error opening test.txt");
}
整瑳湩⁧ⰱ㈠‬⸳ࠀ -- Copied straight from the text file
整瑳湩%E2%81%A7ⰱ㈠%E2%80%AC⸳ࠀ -- Output when pasted into google
It's the Arduino DUE, and yes, lots of "String", including 4 x 2D string arrays we are using to upload to a TFT screen. I ran into memory issues with the UNO but thought we would be OK with the DUE as it's got considerably more RAM?
Well, there's your problem. I like how you ended that with a question mark. :) Having extra RAM is no guarantee that the naughty String will behave, NOT EVEN ON SERVERS WITH GIGABYTES OF RAM. To summarize my Arduino forum post:
Don't use String™
The solutions using C strings (i.e., char arrays) are very efficient. If you're not sure how to convert a String usage to a character array, please post the snippet. You will get lots of help!
String will add at least 1600 bytes of FLASH to your program size and 10 bytes per String variable. Although String is fairly easy to use and understand, it will also cause random hangs and crashes as your program runs longer and/or grows in size and complexity. This is due to dynamic memory fragmentation caused by the heap management routines malloc and free.
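As a hedged illustration (not from the original post), this is the kind of fixed char-array replacement meant above; the buffer size and message are arbitrary assumptions:
char msg[32];                                    // fixed stack buffer, no heap use
snprintf(msg, sizeof(msg), "testing %d, %d, %d.", 1, 2, 3);
Serial.println(msg);                             // Print accepts plain C strings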
Commentary from systems experts elsewhere on the net:
The Evils of Arduino Strings (required reading!)
Why is malloc harmful in embedded systems?
Dr. Dobb's Journal
Memory Fragmentation in servers (MSDN)
Memory Fragmentation, your worst nightmare (nice graphics)
Another answer of mine.

FileOutputStream: write to serial port

I'm trying to write single bytes to a serial port in Vala using a FileOutputStream:
var dev = File.new_for_path("/dev/ttyACM0");
var dev_io = dev.open_readwrite();
var dev_o = dev_io.output_stream as FileOutputStream;
dev_o.write({0x13});
dev_o.flush();
My aim is to do this similar to echo -en '\x13' > /dev/ttyACM0, but it just behaves weirdly. The byte 0x13 seems to be written multiple times; sometimes /dev/ttyACM0 is blocked for a few seconds, sometimes it's even blocked after the Vala program has exited, and sometimes it's not blocked at all. If I write my FileOutputStream to a file and send that to the serial port via cat byte_file > /dev/ttyACM0, everything is fine.
It seems to me that GIO struggles with the fact that the file is a device. My problem is that I need GIO to monitor whether /dev/ttyACM0 is plugged in, and for asynchronous reading.
The problem is most likely that you have to configure the serial port to set things like baud rate, flow control, and parity. If you don't get all those options right there is a good chance that you'll end up with garbage data like you describe.
Basically, you first need an integer descriptor for the file; the easiest way to get one is probably to just open the file using Posix.open, but you can also use GLib.FileStream.fileno to get the integer descriptor of a GLib.FileStream, etc. Next, use Posix.cfmakeraw and Posix.cfsetspeed to configure it. Then, to get your nice GIO streams, just pass the integer descriptor to the default GLib.UnixInputStream/GLib.UnixOutputStream constructors.
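For reference, a minimal C sketch of the underlying configuration (the same calls the Vala Posix namespace wraps; untested, and the device path and baud rate are assumptions):
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_serial(const char *path)              /* e.g. "/dev/ttyACM0" */
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;
    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                           /* raw mode: no echo, no line editing */
    cfsetispeed(&tio, B9600);                  /* assumed baud rate */
    cfsetospeed(&tio, B9600);
    tcsetattr(fd, TCSANOW, &tio);
    return fd;                                 /* wrap in UnixInput-/UnixOutputStream */
}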
I wrote a class to handle serial communication in Vala many years ago. As an example it is a bit horrible—it's convoluted (I had plans to use it as an abstraction layer), doesn't use GIO or async (Vala didn't have the async keyword), uses char[] instead of uint8[] (we hadn't yet standardized on uint8[]), etc., but it should help you understand what you need to do. Between that example and what I wrote above, you should be able to get it working, but if you are still having trouble after you've played with it let me know and I can throw together a quick example.

Why is there a CL_DEVICE_MAX_WORK_GROUP_SIZE?

I'm trying to understand the architecture of OpenCL devices such as GPUs, and I fail to see why there is an explicit bound on the number of work items in a local work group, i.e. the constant CL_DEVICE_MAX_WORK_GROUP_SIZE.
It seems to me that this should be taken care of by the compiler, i.e. if a (one-dimensional for simplicity) kernel is executed with local workgroup size 500 while its physical maximum is 100, and the kernel looks for example like this:
__kernel void test(__global float* input) {
    int i = get_global_id(0);
    someCode(i);
    barrier(CLK_LOCAL_MEM_FENCE);
    moreCode(i);
    barrier(CLK_LOCAL_MEM_FENCE);
    finalCode(i);
}
then it could be converted automatically to an execution with work group size 100 on this kernel:
__kernel void test(__global float* input) {
    int i = get_global_id(0);
    someCode(5*i);
    someCode(5*i+1);
    someCode(5*i+2);
    someCode(5*i+3);
    someCode(5*i+4);
    barrier(CLK_LOCAL_MEM_FENCE);
    moreCode(5*i);
    moreCode(5*i+1);
    moreCode(5*i+2);
    moreCode(5*i+3);
    moreCode(5*i+4);
    barrier(CLK_LOCAL_MEM_FENCE);
    finalCode(5*i);
    finalCode(5*i+1);
    finalCode(5*i+2);
    finalCode(5*i+3);
    finalCode(5*i+4);
}
However, it seems that this is not done by default. Why not? Is there a way to make this process automated (other than writing a pre-compiler for it myself)? Or is there an intrinsic problem which can make my method fail on certain examples (and can you give me one)?
I think that the origin of the CL_DEVICE_MAX_WORK_GROUP_SIZE lies in the underlying hardware implementation.
Multiple threads are running simultaneously on the compute units, and every one of them needs to keep state (for call, jmp, etc.). Most implementations use a stack for this, and if you look at the AMD Evergreen family there is a hardware limit on the number of stack entries that are available (every stack entry has subentries), which in essence limits the number of threads every compute unit can handle simultaneously.
As for whether the compiler could do this to make it possible: it could work, but understand that it would mean recompiling the kernel, which isn't always possible. I can imagine situations where developers dump the compiled kernel for each platform in a binary format and ship it with their software, just for "not so open-source" reasons.
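For what it's worth, a hedged sketch of how such a binary dump can be done with the standard API (single-device program assumed, error checks omitted):
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

void dump_binary(cl_program prog, const char *path)
{
    size_t size = 0;
    unsigned char *bin;
    clGetProgramInfo(prog, CL_PROGRAM_BINARY_SIZES, sizeof(size), &size, NULL);
    bin = malloc(size);
    clGetProgramInfo(prog, CL_PROGRAM_BINARIES, sizeof(bin), &bin, NULL);
    FILE *f = fopen(path, "wb");
    fwrite(bin, 1, size, f);
    fclose(f);
    free(bin);
}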
Those constants are queried from the device by the compiler in order to determine a suitable work group size at compile-time (where compiling of course refers to compiling the kernel). I might be getting you wrong, but it seems you're thinking of setting those values by yourself, which wouldn't be the case.
The responsibility is within your code to query the system capabilities to be prepared for whatever hardware it will run on.
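A hedged sketch of that query on the host side (standard OpenCL C API; clamp your requested local size to the returned value):
#include <CL/cl.h>

size_t max_wg_size(cl_device_id dev)
{
    size_t max_wg = 0;
    clGetDeviceInfo(dev, CL_DEVICE_MAX_WORK_GROUP_SIZE,
                    sizeof(max_wg), &max_wg, NULL);
    return max_wg;
}
Note that the per-kernel limit can be lower still; clGetKernelWorkGroupInfo with CL_KERNEL_WORK_GROUP_SIZE reports it.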

Piping as interprocess communication

I am interested in writing separate program modules that run as independent threads that I could hook together with pipes. The motivation would be that I could write and test each module completely independently, perhaps even write them in different languages, or run the different modules on different machines. There are a wide variety of possibilities here. I have used piping for a while, but I am unfamiliar with the nuances of its behaviour.
It seems like the receiving end will block waiting for input, which I would expect, but will the sending end block sometimes waiting for someone to read from the stream?
If I write an EOF to the stream, can I continue writing to that stream until I close it?
Are there differences in the behaviour of named and unnamed pipes?
Does it matter which end of the pipe I open first with named pipes?
Is the behaviour of pipes consistent between different Linux systems?
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
Are there any other questions I should be asking or issues I should be aware of if I want to use pipes in this way?
Wow, that's a lot of questions. Let's see if I can cover everything...
It seems like the receiving end will block waiting for input, which I would expect
You expect correctly: an actual 'read' call will block until something is there. However, I believe there are some C functions that will allow you to 'peek' at what (and how much) is waiting in the pipe. Unfortunately, I don't remember if this blocks as well.
will the sending end block sometimes waiting for someone to read from the stream
No, sending should never block. Think of the ramifications if this were a pipe across the network to another computer. Would you want to wait (through possibly high latency) for the other computer to respond that it received it? Now this is a different case if the reader handle of the destination has been closed. In this case, you should have some error checking to handle that.
If I write an EOF to the stream, can I continue writing to that stream until I close it?
I would think this depends on what language you're using and its implementation of pipes. In C, I'd say no. In a Linux shell, I'd say yes. Someone else with more experience would have to answer that.
Are there differences in the behaviour of named and unnamed pipes?
As far as I know, yes. However, I don't have much experience with named vs unnamed pipes. I believe the differences are:
Single direction vs Bidirectional communication
Reading AND writing to the "in" and "out" streams of a thread
Does it matter which end of the pipe I open first with named pipes?
Generally no, but you could run into problems on initialization trying to create and link the threads with each other. You'd need to have one main thread that creates all the sub-threads and syncs their respective pipes with each other.
Is the behaviour of pipes consistent between different Linux systems?
Again, this depends on what language, but generally yes. Ever heard of POSIX? That's the standard (at least for Linux; Windows does its own thing).
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
This is getting into a little more of a gray area. The answer should be no since the shell should essentially be making system calls. However, everything up until that point is up for grabs.
Are there any other questions I should be asking?
The questions you've asked show that you have a decent understanding of the system. Keep researching and focus on the level you're going to be working at (shell, C, and so on). You'll learn a lot more by just trying it, though.
This is all based on a UNIX-like system; I'm not familiar with the specific behavior of recent versions of Windows.
It seems like the receiving end will block waiting for input, which I would expect, but will the sending end block sometimes waiting for someone to read from the stream?
Yes, although on a modern machine it may not happen often. The pipe has an intermediate buffer that can potentially fill up. If it does, the write side of the pipe will indeed block. But if you think about it, there aren't a lot of files that are big enough to risk this.
If I write an EOF to the stream, can I continue writing to that stream until I close it?
Um, you mean like a CTRL-D, 0x04? Sure, as long as the stream is set up that way. Viz.
506 # cat | od -c
abc
^D
efg
0000000 a b c \n 004 \n e f g \n
0000012
Are there differences in the behaviour of named and unnamed pipes?
Yes, but they're subtle and implementation dependent. The biggest one is that you can write to a named pipe before the other end is running; with unnamed pipes, the file descriptors get shared during the fork/exec process, so there's no way to access the transient buffer without the processes being up.
Does it matter which end of the pipe I open first with named pipes?
Nope.
Is the behaviour of pipes consistent between different Linux systems?
Within reason, yes. Buffer sizes etc may vary.
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
No. When you create a pipe, under the covers what happens is your parent process (the shell) creates a pipe which has a pair of file descriptors, then does a fork/exec like this pseudocode:
Parent:
    create pipe, returning two file descriptors, call them fd[0] and fd[1]
    fork write-side process
    fork read-side process
Write-side:
    close fd[0]
    connect fd[1] to stdout
    exec writer program
Read-side:
    close fd[1]
    connect fd[0] to stdin
    exec reader program
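In real C, that pseudocode corresponds roughly to the following hedged sketch ("writer" and "reader" are hypothetical program names; error handling omitted):
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                          /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {                 /* write-side child */
        close(fd[0]);
        dup2(fd[1], STDOUT_FILENO);    /* connect write end to stdout */
        close(fd[1]);
        execlp("writer", "writer", (char *)NULL);
    }
    if (fork() == 0) {                 /* read-side child */
        close(fd[1]);
        dup2(fd[0], STDIN_FILENO);     /* connect read end to stdin */
        close(fd[0]);
        execlp("reader", "reader", (char *)NULL);
    }
    close(fd[0]);                      /* parent must close both ends */
    close(fd[1]);
    return 0;
}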
Are there any other questions I should be asking or issues I should be aware of if I want to use pipes in this way?
Is everything you want to do really going to lay out in a line like this? If not, you might want to think about a more general architecture. But the insight that having lots of separate processes interacting through the "narrow" interface of a pipe is desirable is a good one.
[Updated: I had the file descriptor indices reversed at first. They're correct now, see man 2 pipe.]
As Dashogun and Charlie Martin noted, this is a big question. Some parts of their answers are inaccurate, so I'm going to answer too.
I am interested in writing separate program modules that run as independent threads that I could hook together with pipes.
Be wary of trying to use pipes as a communication mechanism between threads of a single process. Because you would have both read and write ends of the pipe open in a single process, you would never get the EOF (zero bytes) indication.
If you were really referring to processes, then this is the basis of the classic Unix approach to building tools. Many of the standard Unix programs are filters that read from standard input, transform it somehow, and write the result to standard output. For example, tr, sort, grep, and cat are all filters, to name but a few. This is an excellent paradigm to follow when the data you are manipulating permits it. Not all data manipulations are conducive to this approach, but there are many that are.
The motivation would be that I could write and test each module completely independently, perhaps even write them in different languages, or run the different modules on different machines.
Good points. Be aware that there isn't really a pipe mechanism between machines, though you can get close to it with programs such as rsh or (better) ssh. However, internally, such programs may read local data from pipes and send that data to remote machines, but they communicate between machines over sockets, not using pipes.
There are a wide variety of possibilities here. I have used piping for a while, but I am unfamiliar with the nuances of its behaviour.
OK; asking questions is one (good) way to learn. Experimenting is another, of course.
It seems like the receiving end will block waiting for input, which I would expect, but will the sending end block sometimes waiting for someone to read from the stream?
Yes. There is a limit to the size of a pipe buffer. Classically, this was quite small - 4096 or 5120 were common values. You may find that modern Linux uses a larger value. You can use fpathconf() and _PC_PIPE_BUF to find out the size of a pipe buffer. POSIX only requires the buffer to be 512 (that is, _POSIX_PIPE_BUF is 512).
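A hedged sketch of that query (strictly, _PC_PIPE_BUF reports the guaranteed atomic-write size rather than the total capacity, which is why POSIX's 512 is a floor, not the whole buffer):
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) != 0)
        return 1;
    long sz = fpathconf(fd[1], _PC_PIPE_BUF);   /* at least 512 per POSIX */
    printf("PIPE_BUF: %ld\n", sz);
    return 0;
}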
If I write an EOF to the stream, can I continue writing to that stream until I close it?
Technically, there is no way to write EOF to a stream; you close the pipe descriptor to indicate EOF. If you are thinking of control-D or control-Z as an EOF character, then those are just regular characters as far as pipes are concerned - they only have an effect like EOF when typed at a terminal that is running in canonical mode (cooked, or normal).
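To make that concrete, a minimal sketch: EOF on a pipe comes from closing the write end, not from any byte value:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char c;
    pipe(fd);
    write(fd[1], "x", 1);
    close(fd[1]);                      /* this, not 0x04, is what produces EOF */
    while (read(fd[0], &c, 1) == 1)    /* read() returns 0 (EOF) after 'x' */
        putchar(c);
    putchar('\n');
    return 0;
}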
Are there differences in the behaviour named and unnamed pipes?
Yes, and no. The biggest differences are that unnamed pipes must be set up by one process and can only be used by that process and children who share that process as a common ancestor. By contrast, named pipes can be used by previously unassociated processes. The next big difference is a consequence of the first; with an unnamed pipe, you get back two file descriptors from a single function (system) call to pipe(), but you open a FIFO or named pipe using the regular open() function. (Someone must create a FIFO with the mkfifo() call before you can open it; unnamed pipes do not need any such prior setup.) However, once you have a file descriptor open, there is precious little difference between a named pipe and an unnamed pipe.
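A hedged sketch of the named-pipe setup just described (the path is an assumption):
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* Create the FIFO once; unrelated processes can then open it by name. */
    if (mkfifo("/tmp/myfifo", 0666) != 0)
        perror("mkfifo");                      /* may simply already exist */
    int fd = open("/tmp/myfifo", O_WRONLY);    /* blocks until a reader opens */
    if (fd >= 0) {
        write(fd, "hello\n", 6);
        close(fd);
    }
    return 0;
}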
Does it matter which end of the pipe I open first with named pipes?
No. The first process to open the FIFO will (normally) block until there's a process with the other end open. If you open it for reading and writing (unconventional but possible) then you won't be blocked; if you use the O_NONBLOCK flag, you won't be blocked either.
Is the behaviour of pipes consistent between different Linux systems?
Yes. I've not heard of or experienced any problems with pipes on any of the systems where I've used them.
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
No: pipes and FIFOs are independent of the shell you use.
Are there any other questions I should be asking or issues I should be aware of if I want to use pipes in this way?
Just remember that you must close the reading end of a pipe in the process that will be writing, and the writing end of the pipe in the process that will be reading. If you want bidirectional communication over pipes, use two separate pipes. If you create complicated plumbing arrangements, beware of deadlock - it is possible. A linear pipeline does not deadlock, however (though if the first process never closes its output, the downstream processes may wait indefinitely).
I observed both above and in comments to other answers that pipe buffers are classically limited to quite small sizes. @Charlie Martin counter-commented that some versions of Unix have dynamic pipe buffers and these can be quite large.
I'm not sure which ones he has in mind. I used the test program that follows on Solaris, AIX, HP-UX, MacOS X, Linux and Cygwin / Windows XP (results below):
#include <unistd.h>
#include <signal.h>
#include <stdio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>

static const char *arg0;

static void err_syserr(char *str)
{
    int errnum = errno;
    fprintf(stderr, "%s: %s - (%d) %s\n", arg0, str, errnum, strerror(errnum));
    exit(1);
}

int main(int argc, char **argv)
{
    int pd[2];
    pid_t kid;
    size_t i = 0;
    char buffer[2] = "a";
    int flags;

    arg0 = argv[0];
    if (pipe(pd) != 0)
        err_syserr("pipe() failed");
    if ((kid = fork()) < 0)
        err_syserr("fork() failed");
    else if (kid == 0)
    {
        /* Child: hold the read end open but never read from it. */
        close(pd[1]);
        pause();
    }
    /* else: parent */
    close(pd[0]);
    if ((flags = fcntl(pd[1], F_GETFL)) == -1)
        err_syserr("fcntl(F_GETFL) failed");
    flags |= O_NONBLOCK;
    if (fcntl(pd[1], F_SETFL, flags) == -1)
        err_syserr("fcntl(F_SETFL) failed");
    /* Write one byte at a time until the pipe buffer is full. */
    while (write(pd[1], buffer, sizeof(buffer)-1) == sizeof(buffer)-1)
    {
        putchar('.');
        if (++i % 50 == 0)
            printf("%u\n", (unsigned)i);
    }
    if (i % 50 != 0)
        printf("%u\n", (unsigned)i);
    kill(kid, SIGINT);
    return 0;
}
I'd be curious to get extra results from other platforms. Here are the sizes I found. All the results are larger than I expected, I must confess, but Charlie and I may be debating the meaning of 'quite large' when it comes to buffer sizes.
8196 - HP-UX 11.23 for IA-64 (fcntl(F_SETFL) failed)
16384 - Solaris 10
16384 - MacOS X 10.5 (O_NONBLOCK did not work, though fcntl(F_SETFL) did not fail)
32768 - AIX 5.3
65536 - Cygwin / Windows XP (O_NONBLOCK did not work, though fcntl(F_SETFL) did not fail)
65536 - SuSE Linux 10 (and CentOS) (fcntl(F_SETFL) failed)
One point that is clear from these tests is that O_NONBLOCK works with pipes on some platforms and not on others.
The program creates a pipe, and forks. The child closes the write end of the pipe, and then goes to sleep until it gets a signal - that's what pause() does. The parent then closes the read end of the pipe, and sets the flags on the write descriptor so that it won't block on an attempt to write on a full pipe. It then loops, writing one character at a time, and printing a dot for each character written, and a count and newline every 50 characters. When it detects a write problem (buffer full, since the child is not reading a thing), it stops the loop, writes the final count, and kills the child.

TCP send queue depth

How do I discover how many bytes have been sent to a TCP socket but have not yet been put on the wire?
Looking at the diagram here (a socket send-queue diagram; the image is not reproduced):
I would like to know the total of Categories 2, 3, and 4, or the total of 3 and 4. This is in C(++) and on both Windows and Linux. Ideally there is an ioctl that I could use, but there doesn't seem to be one.
Under Linux, see the man page for tcp(7).
It appears that you can get the number of untransmitted bytes with ioctl(sock, SIOCOUTQ, ...); SIOCINQ is the corresponding query for unread data in the receive buffer.
Other stats might be available from members of the structure given back by the TCP_INFO getsockopt() call.
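A hedged, Linux-only sketch of that ioctl (SIOCOUTQ lives in <linux/sockios.h>):
#include <sys/ioctl.h>
#include <linux/sockios.h>

/* Bytes accepted by the kernel but not yet sent; returns -1 on error. */
int outq_bytes(int sock)
{
    int unsent = 0;
    if (ioctl(sock, SIOCOUTQ, &unsent) < 0)
        return -1;
    return unsent;
}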
Some Unix flavors may have an API way to do this, but there is no way to do it that is portable across different variants.
If you want to determine whether to add data or not: don't worry, send will block until the data is in the queue. If you don't want it to block, you can tell send(2) so:
send(socket, buf, buflen, MSG_DONTWAIT);
But this only works on Linux.
You can also set the socket to non-blocking:
fcntl(socket, F_SETFL, fcntl(socket, F_GETFL) | O_NONBLOCK);
This way write will return an error (EAGAIN) if the data cannot be written to the stream.
