Strange characters coming from Arduino when writing to SD card

I am connecting an SD card to an Arduino, which is then communicating over serial with Visual Studio. Everything works fine independently, and 99% collectively. If I put this code in setup() it works fine. If I pop it into a function which is called when a specific character is sent from Visual Studio, I get the strange characters at the bottom.
I have debugged each step of the code and nothing seems abnormal. Unfortunately I cannot post the code as
1) it's far too long...
2) it's confidential...
:(
I understand that without code I cannot get a complete solution, but what are those characters? And why does it work perfectly in setup() yet produce all kinds of randomness in a function?
myFile = SD.open("test.txt");
if (myFile) {
    Serial.println("test.txt:");
    // read from the file until there's nothing else in it:
    while (myFile.available()) {
        Serial.write(myFile.read());
    }
    // close the file:
    myFile.close();
} else {
    // if the file didn't open, print an error:
    Serial.println("error opening test.txt");
}
整瑳湩⁧ⰱ㈠‬⸳ࠀ -- Copied straight from the text file
整瑳湩%E2%81%A7ⰱ㈠%E2%80%AC⸳ࠀ -- Output when pasted into google

It's the Arduino Due, and yes, lots of String, including 4 x 2D String arrays we are using to upload to a TFT screen. I ran into memory issues with the Uno but thought we would be OK with the Due as it's got considerably more RAM?
Well, there's your problem. I like how you ended that with a question mark. :) Having extra RAM is no guarantee that the naughty String will behave, NOT EVEN ON SERVERS WITH GIGABYTES OF RAM. To summarize my Arduino forum post:
Don't use String™
The solutions using C strings (i.e., char arrays) are very efficient. If you're not sure how to convert a String usage to a character array, please post the snippet. You will get lots of help!
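For example, a typical conversion looks like this (the function name, buffer size, and temperature format are illustrative, not from the poster's code):

// A char-array version of a typical String usage: no heap, no fragmentation.
void printTemperature(int tenths) {        // e.g. 215 means 21.5 degrees
    char msg[32];                          // fixed buffer, sized for the worst case
    snprintf(msg, sizeof(msg), "Temp: %d.%d C",
             tenths / 10, tenths % 10);    // snprintf never overruns the buffer
    Serial.println(msg);
}

The tenths trick is used here because float formatting (%f) is not compiled into printf on the AVR boards by default.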
String will add at least 1600 bytes of FLASH to your program size and 10 bytes per String variable. Although String is fairly easy to use and understand, it will also cause random hangs and crashes as your program runs longer and/or grows in size and complexity. This is due to dynamic memory fragmentation caused by the heap management routines malloc and free.
Commentary from systems experts elsewhere on the net:
The Evils of Arduino Strings (required reading!)
Why is malloc harmful in embedded systems?
Dr Dobbs Journal
Memory Fragmentation in servers (MSDN)
Memory Fragmentation, your worst nightmare (nice graphics)
Another answer of mine.


ESP8266 failing post requests

I'm hesitant to ask this on here; I have made an issue on GitHub, and I really need it resolved.
Basically, I'm sending POSTs to dweetPro every 15 seconds. It will work flawlessly for hours on end, then it randomly stops, sending one successful request for every 20 failed ones.
I have tried everything I can think of. I'm pretty sure my code is right; I'm not sure if it is something with the library or what. I can send the POSTs from Python on my PC indefinitely and it never fails, so it's definitely on the ESP's side.
I'm using a clean 2 A power supply, so it's not a power issue.
Here are links to a couple of Wireshark logs from the ESP sending POSTs. Both valid and invalid requests are in them.
One
Two
Some more info can be found on the issue on GitHub. This is the code I'm using, with debug lines used:
void send_to_server(String* time_sent, float magnitude, String status,
                    int earthquake_occured, float* data, int data_size)
{
    int content_length;
    String content = make_json_content(&content_length, magnitude,
                                       status, earthquake_occured, data, data_size);
    int s = -100;
    int c = 0;
    while (s < 0 && c < 10)
    {
        HTTPClient http;
        http.setTimeout(1000);
        bool suc = http.begin("http://dweetpro.io/v2/dweets");
        Serial.print("Success?: "); Serial.println(suc);
        http.addHeader("Content-Type", "application/json");
        http.addHeader("Accept", "application/json");
        http.addHeader("X-DWEET-AUTH", "xxx");
        unsigned long timerr = millis();  // millis() returns unsigned long
        Serial.println("Posting");
        s = http.POST(content);
        Serial.print("Posted: "); Serial.println(millis() - timerr);
        Serial.print("Post len: "); Serial.println(s);
        http.writeToStream(&Serial);
        Serial.println("");
        http.end();
        ++c;
    }
    Serial.println("Ended");
}
When successful, s prints 200; when it fails, it prints -1.
On Wireshark, every successful post will show data, but for unsuccessful posts, it will show some data for the first several failed ones, then it just stops. It's like it overloads itself or something.
Once again, sorry for this cluster of a question, but I just don't know what else to do. I have spent so much time on this with no end in sight. Thank you all so very much.
EDIT: Some other information I should have added: if it isn't sending valid POSTs, I can reset the ESP and it will still not work. It's really weird; that's why I am so at a loss. If it is on a roll sending valid POSTs, I can reset/unplug it and it will still work. If it isn't working and I do the same, it still won't work.
Is there any chance that it could just be some sort of interference at my home location?
Anytime I hear about an Arduino project that crashes after some time and I see the String class, I get suspicious of it right away. In older versions of the Arduino core it had a nasty memory leak. Current versions have fixed that leak but still leave the problem with concatenation. Basically, when you concatenate two Strings it has to deallocate the spot in memory where they were and allocate new space for the larger String. If nothing else has gone on the heap or stack since the last concatenation then that might not be a problem.
But what happens is that someone is building two Strings that are both growing larger and larger. And each time it has to allocate that memory further and further up in memory, leaving holes where the smaller Strings used to be. Eventually, the heap looks like swiss cheese and there's no garbage collection to come through and defragment it. After the project has run fine for quite some time then suddenly the heap and the stack get together and it's lights out.
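If you must keep String for now, one partial mitigation (assuming the stock Arduino String API, which has provided reserve() since IDE 1.0) is to pre-allocate the worst case once so the buffers never have to move. A sketch:

String log1, log2;       // the two ever-growing Strings from the scenario above

void setup() {
    Serial.begin(115200);
    log1.reserve(256);   // illustrative sizes: one allocation each, up front,
    log2.reserve(256);   // so later concatenation does not have to reallocate
}

This only postpones the problem if the Strings can outgrow the reserved size, so fixed char buffers remain the safer cure.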
I don't know if there is a function like freemem for the Arduino that works for the ESP8266, but it would be worth finding a way to check how much free memory you have. If it is steadily shrinking to 0 as the program runs, then you've found your problem.
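For what it's worth, the ESP8266 Arduino core does expose a free-heap query, ESP.getFreeHeap(). A sketch of the kind of instrumentation I mean (the interval and print format are just examples):

void loop() {
    static unsigned long last = 0;
    if (millis() - last >= 15000) {        // matches the 15-second post cycle
        last = millis();
        Serial.print("Free heap: ");
        Serial.println(ESP.getFreeHeap()); // provided by the ESP8266 core
        // send_to_server(...) would go here; if this number trends toward
        // zero as the program runs, you've found your problem
    }
}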

Memory limits on Arduino

I have recently bought an Arduino Uno, and now I am experimenting a bit with it. I have a couple of 18B20 sensors and an ENC28J60 network module connected to it, and I am writing a sketch that lets me connect to it from a browser and read out the temperatures, either as a simple web page or as JSON. The code that makes the web pages looks like this:
client.print("Inne: ");
client.print(tempin);
client.println("<br />");
client.print("Ute: ");
client.print(tempout);
client.print("<br /><br />");
client.println(millis()/1000);
// client.print("j");
The strange thing is: if I uncomment the last line, the sketch compiles fine and uploads fine, but I cannot connect to the board. The same thing happens if I add some more characters to some of the other printouts. Thus, it looks to me as if I'm running into some kind of memory limit (the total size of the sketch is about 15 KB, and there are some other strings used elsewhere in the code - and yes, I know, I will rewrite it to use an array to store the temperatures; I've just stolen some code from an example).
Is there any limit on how much memory I can use to store strings on an Arduino, and is there any way to get around it? (Using the Arduino GUI v1.0.1 on a Debian PC with GCC-AVR 4.3.5 and AVR Libc 1.6.8.)
The RAM is rather small, as the Uno's ATmega328 has only 2 KB. You may just be running out of RAM. I learned that when it runs out, the board just kind of sits there.
I suggest reading the readme from this library to get the free RAM. It mentions how ".print" can consume both RAM and ROM.
I now always use (for Arduino IDE 1.0+)
Serial.print(F("HELLO"));
versus
Serial.print("HELLO");
as it saves RAM, and this should be true for lcd.print as well. And I always put a
Serial.println(freeMemory(), DEC); // print how much RAM is available.
at the beginning of the code, and pay attention to it, noting that there needs to be room to run the actual code and recurse into its subroutines.
For IDEs prior to 1.0.0, the library provides getPSTR().
IDE 1.0.3 now displays the expected RAM usage at the end of a compile. However, I find it is often short, as it is only an estimate.
I also recommend that you look at Webduino, as it has a library that supports JSON. Its examples are very quick to get going. However, it does not directly support the ENC28J60.
I use the following code to get the available free RAM:
int getFreeRam()
{
    extern int __heap_start, *__brkval;
    int v;
    v = (int) &v - (__brkval == 0 ? (int) &__heap_start : (int) __brkval);
    Serial.print("Free RAM = ");
    Serial.println(v, DEC);
    return v;
}
You can check the memory usage with a small lib called memoryFree.
When there is RAM left, you might be hitting the serial buffer limit instead of the RAM limit. If so, you can increase SERIAL_BUFFER_SIZE in HardwareSerial.cpp
(C:\Program Files (x86)\Arduino\hardware\arduino\cores\arduino on a Windows machine).
Be careful though: RAM and the serial buffer are both stored in SRAM. Increasing the serial buffer will leave less memory available for your variables.
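For reference, in the 1.0-era AVR core the definition looks roughly like this (the exact form varies between IDE versions, so treat this as a sketch):

// HardwareSerial.cpp, Arduino 1.0.x AVR core (approximate):
#if (RAMEND < 1000)
  #define SERIAL_BUFFER_SIZE 16    // parts with very little SRAM
#else
  #define SERIAL_BUFFER_SIZE 64    // e.g. the Uno's ATmega328
#endif
// Raising 64 to 128 or 256 buys buffering headroom at the cost of SRAM.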
For playing with JSON on the Arduino there is a really nice lib called aJson.
Add this function and call it in setup() and every now and then in your loop to make sure RAM is not being used up:
// Private function: from http://arduino.cc/playground/Code/AvailableMemory
int freeRam() {
    extern int __heap_start, *__brkval;
    int v;
    return (int) &v - (__brkval == 0 ? (int) &__heap_start : (int) __brkval);
}
You need to call it, for example, inside a print:
Serial.println(freeRam());

Solaris de-referencing bus error

I have a function where I'm trying to read a 16-bit value out of a large chunk of data. I'm running this code on a Solaris box, and it compiles without warning or error. When I run it, however, whenever it gets to the part where I dereference my pointer, I instantly get a bus error. The code looks something like:
void find_info(unsigned char* packet) {
    int offset = 9;
    uint16_t short_value = *(uint16_t*)(packet + offset);
}
The bus error occurs when I try to dereference that "packet + offset" pointer in order to get a short. I know for a fact there is data at packet[offset] and packet[offset+1]. On Linux and Cygwin this code works fine. As far as I know, I'm not doing anything revolutionary. What's going on here?
Sounds like an alignment problem. On the Sun SPARC processor, you can only access a multi-byte value through a pointer aligned to that value's size; a 16-bit short must be read from an even address, so the value offset = 9 is clearly going to cause a problem.
See http://blogs.oracle.com/d/entry/on_misaligned_memory_accesses for more details.
I can't recommend any way of fixing this without seeing more context; but if you're reading data from some input source, you could just read the bytes and convert to a short using ntohs (see the man page for ntohs for details).
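For illustration, a minimal sketch of that fix, using memcpy() (which has no alignment requirement) plus ntohs() on the assumption that the packet bytes are in network order:

#include <string.h>       /* memcpy */
#include <stdint.h>
#include <arpa/inet.h>    /* ntohs */

uint16_t find_info(unsigned char *packet)
{
    int offset = 9;
    uint16_t short_value;

    /* copy byte-by-byte instead of dereferencing a misaligned pointer */
    memcpy(&short_value, packet + offset, sizeof(short_value));
    return ntohs(short_value);   /* drop this if the bytes are host order */
}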

Bus error when recompiling the binary

Sometimes, on various Unix architectures, recompiling a program while it is running causes the program to crash with a "Bus error". Can anyone explain under which conditions this happens? First, how can updating the binary on disk do anything to the code in memory? The only thing I can imagine is that some systems mmap the code into memory, and when the compiler rewrites the disk image, this causes the mmap to become invalid. What would the advantages of such a method be? It seems very suboptimal that you can crash a running program by changing its executable.
On local filesystems, all major Unix-like systems support solving this problem by removing the file. The old vnode stays open and, even after the directory entry is gone and then reused for the new image, the old file is still there, unchanged, and now unnamed, until the last reference to it (in this case the kernel) goes away.
But if you just start rewriting the file, then yes, it is mmap(2)'ed. When a block is rewritten, one of two things can happen, depending on which mmap(2) options the dynamic linker uses:
the kernel will invalidate the corresponding page, or
the disk image will change but existing memory pages will not
Either way, the running program is possibly in trouble. In the first case, it is essentially guaranteed to blow up, and in the second case it will get clobbered unless all of the pages have been referenced, paged in, and are never dropped.
There were two mmap flags intended to fix this. One was MAP_DENYWRITE (prevent writes) and the other was MAP_COPY, which kept a pure version of the original and prevented writers from changing the mapped image.
But DENYWRITE has been disabled for security reasons, and COPY is not implemented in any major Unix-like system.
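This removal trick is why careful installers and linkers write the new image under a temporary name and then rename(2) it over the old one: the directory entry is swapped atomically, while running instances keep their old, now-unnamed vnode. A sketch with illustrative paths:

#include <stdio.h>

/* Write the fresh binary to prog.tmp first (not shown), then: */
int install_binary(void)
{
    if (rename("/usr/local/bin/prog.tmp", "/usr/local/bin/prog") != 0) {
        perror("rename");    /* the old image is untouched on failure */
        return -1;
    }
    return 0;                /* running copies keep the unlinked old file */
}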
Well, this is a somewhat complex scenario that might be happening in your case. The usual reason for this error is a memory alignment issue. Bus errors are more common on FreeBSD-based systems. Consider a scenario where you have a structure something like this:
struct MyStruct {
    char ch[29]; // 29 bytes
    int32 i;     // 4 bytes
};
So the total size of this structure would be 33 bytes. Now consider a system with 32-byte cache lines. This structure cannot be loaded in a single cache line. Now consider the following statements:
struct MyStruct abc;
char *cptr = (char *)&abc;          // char pointer points at the start of the structure
int32 *iptr = (int32 *)(cptr + 1);  // iptr points at the 2nd byte of the structure
The total structure size is 33 bytes and your int pointer points at the 2nd byte, so a read through the int pointer stays within the allocated memory. But if the structure is allocated at the border of a cache line, it is not possible to read the value in a single access: the current cache line contains only part of the data, and the rest is on the next cache line. The misaligned access results in a "Bus Error". Most operating systems handle this scenario by generating two memory reads internally, but some Unix systems don't. To avoid this, it is recommended to take care of memory alignment. This mostly happens when you type-cast a structure into another datatype and then read its memory through that pointer.
The scenario is a bit complex, so I am not sure if I can explain it more simply. I hope you understand it.

Piping as interprocess communication

I am interested in writing separate program modules that run as independent threads that I could hook together with pipes. The motivation would be that I could write and test each module completely independently, perhaps even write them in different languages, or run the different modules on different machines. There are a wide variety of possibilities here. I have used piping for a while, but I am unfamiliar with the nuances of its behaviour.
It seems like the receiving end will block waiting for input, which I would expect, but will the sending end block sometimes waiting for someone to read from the stream?
If I write an eof to the stream, can I continue writing to that stream until I close it?
Are there differences in the behaviour of named and unnamed pipes?
Does it matter which end of the pipe I open first with named pipes?
Is the behaviour of pipes consistent between different Linux systems?
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
Are there any other questions I should be asking or issues I should be aware of if I want to use pipes in this way?
Wow, that's a lot of questions. Let's see if I can cover everything...
It seems like the receiving end will block waiting for input, which I would expect
You expect correctly: an actual read() call will block until something is there. However, I believe there are some C functions that will allow you to 'peek' at what (and how much) is waiting in the pipe. Unfortunately, I don't remember if they block as well.
will the sending end block sometimes waiting for someone to read from the stream
No, sending should never block. Think of the ramifications if this were a pipe across the network to another computer. Would you want to wait (through possibly high latency) for the other computer to respond that it received it? Now this is a different case if the reader handle of the destination has been closed. In this case, you should have some error checking to handle that.
If I write an eof to the stream can I continue writing to that stream until I close it
I would think this depends on what language you're using and its implementation of pipes. In C, I'd say no. In a Linux shell, I'd say yes. Someone else with more experience would have to answer that.
Are there differences in the behaviour of named and unnamed pipes?
As far as I know, yes. However, I don't have much experience with named vs unnamed pipes. I believe the differences are:
Single direction vs Bidirectional communication
Reading AND writing to the "in" and "out" streams of a thread
Does it matter which end of the pipe I open first with named pipes?
Generally no, but you could run into problems on initialization trying to create and link the threads with each other. You'd need to have one main thread that creates all the sub-threads and syncs their respective pipes with each other.
Is the behaviour of pipes consistent between different Linux systems?
Again, this depends on what language you use, but generally yes. Ever heard of POSIX? That's the standard (at least for Linux; Windows does its own thing).
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
This is getting into a little more of a gray area. The answer should be no since the shell should essentially be making system calls. However, everything up until that point is up for grabs.
Are there any other questions I should be asking
The questions you've asked show that you have a decent understanding of the system. Keep researching and focus on what level you're going to be working at (shell, C, and so on). You'll learn a lot more by just trying it, though.
This is all based on a UNIX-like system; I'm not familiar with the specific behavior of recent versions of Windows.
It seems like the receiving end will block waiting for input, which I would expect, but will the sending end block sometimes waiting for someone to read from the stream?
Yes, although on a modern machine it may not happen often. The pipe has an intermediate buffer that can potentially fill up. If it does, the write side of the pipe will indeed block. But if you think about it, there aren't a lot of files that are big enough to risk this.
If I write an eof to the stream can I continue writing to that stream until I close it?
Um, you mean like a CTRL-D, 0x04? Sure, as long as the stream is set up that way. Viz.
506 # cat | od -c
abc
^D
efg
0000000 a b c \n 004 \n e f g \n
0000012
Are there differences in the behaviour of named and unnamed pipes?
Yes, but they're subtle and implementation dependent. The biggest one is that you can write to a named pipe before the other end is running; with unnamed pipes, the file descriptors get shared during the fork/exec process, so there's no way to access the transient buffer without the processes being up.
Does it matter which end of the pipe I open first with named pipes?
Nope.
Is the behaviour of pipes consistent between different Linux systems?
Within reason, yes. Buffer sizes etc may vary.
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
No. When you create a pipe, under the covers what happens is that your parent process (the shell) creates a pipe, which has a pair of file descriptors, and then does a fork/exec like this pseudocode:
Parent:
    create pipe, returning two file descriptors, call them fd[0] and fd[1]
    fork write-side process
    fork read-side process
Write-side:
    close fd[0]
    connect fd[1] to stdout
    exec writer program
Read-side:
    close fd[1]
    connect fd[0] to stdin
    exec reader program
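For the curious, here is a minimal C version of that pseudocode, wiring up the equivalent of "ls | wc -l" (error handling trimmed to keep the shape visible):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) != 0) { perror("pipe"); return 1; }

    if (fork() == 0)                /* write side: ls */
    {
        close(fd[0]);               /* not reading from the pipe */
        dup2(fd[1], STDOUT_FILENO); /* stdout now feeds the pipe */
        close(fd[1]);
        execlp("ls", "ls", (char *)0);
        _exit(127);                 /* only reached if exec fails */
    }
    if (fork() == 0)                /* read side: wc -l */
    {
        close(fd[1]);               /* not writing to the pipe */
        dup2(fd[0], STDIN_FILENO);  /* stdin now drains the pipe */
        close(fd[0]);
        execlp("wc", "wc", "-l", (char *)0);
        _exit(127);
    }
    close(fd[0]);                   /* parent must close both ends, */
    close(fd[1]);                   /* or wc never sees EOF */
    while (wait(NULL) > 0)
        ;
    return 0;
}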
Are there any other questions I should be asking or issues I should be aware of if I want to use pipes in this way?
Is everything you want to do really going to lay out in a line like this? If not, you might want to think about a more general architecture. But the insight that having lots of separate processes interacting through the "narrow" interface of a pipe is desirable is a good one.
[Updated: I had the file descriptor indices reversed at first. They're correct now, see man 2 pipe.]
As Dashogun and Charlie Martin noted, this is a big question. Some parts of their answers are inaccurate, so I'm going to answer too.
I am interested in writing separate program modules that run as independent threads that I could hook together with pipes.
Be wary of trying to use pipes as a communication mechanism between threads of a single process. Because you would have both read and write ends of the pipe open in a single process, you would never get the EOF (zero bytes) indication.
If you were really referring to processes, then this is the basis of the classic Unix approach to building tools. Many of the standard Unix programs are filters that read from standard input, transform it somehow, and write the result to standard output. For example, tr, sort, grep, and cat are all filters, to name but a few. This is an excellent paradigm to follow when the data you are manipulating permits it. Not all data manipulations are conducive to this approach, but there are many that are.
The motivation would be that I could write and test each module completely independently, perhaps even write them in different languages, or run the different modules on different machines.
Good points. Be aware that there isn't really a pipe mechanism between machines, though you can get close to it with programs such as rsh or (better) ssh. However, internally, such programs may read local data from pipes and send that data to remote machines, but they communicate between machines over sockets, not using pipes.
There are a wide variety of possibilities here. I have used piping for a while, but I am unfamiliar with the nuances of its behaviour.
OK; asking questions is one (good) way to learn. Experimenting is another, of course.
It seems like the receiving end will block waiting for input, which I would expect, but will the sending end block sometimes waiting for someone to read from the stream?
Yes. There is a limit to the size of a pipe buffer. Classically, this was quite small - 4096 or 5120 were common values. You may find that modern Linux uses a larger value. You can use fpathconf() and _PC_PIPE_BUF to find out the size of a pipe buffer. POSIX only requires the buffer to be 512 (that is, _POSIX_PIPE_BUF is 512).
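A quick sketch of that query (noting that _PC_PIPE_BUF strictly reports the atomic-write limit, which may be smaller than the pipe's total capacity):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int pd[2];
    if (pipe(pd) != 0)
    {
        perror("pipe");
        return 1;
    }
    printf("PIPE_BUF: %ld\n", fpathconf(pd[0], _PC_PIPE_BUF));
    return 0;
}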
If I write an eof to the stream can I continue writing to that stream until I close it?
Technically, there is no way to write EOF to a stream; you close the pipe descriptor to indicate EOF. If you are thinking of control-D or control-Z as an EOF character, then those are just regular characters as far as pipes are concerned - they only have an effect like EOF when typed at a terminal that is running in canonical mode (cooked, or normal).
Are there differences in the behaviour of named and unnamed pipes?
Yes, and no. The biggest differences are that unnamed pipes must be set up by one process and can only be used by that process and children who share that process as a common ancestor. By contrast, named pipes can be used by previously unassociated processes. The next big difference is a consequence of the first; with an unnamed pipe, you get back two file descriptors from a single function (system) call to pipe(), but you open a FIFO or named pipe using the regular open() function. (Someone must create a FIFO with the mkfifo() call before you can open it; unnamed pipes do not need any such prior setup.) However, once you have a file descriptor open, there is precious little difference between a named pipe and an unnamed pipe.
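To make the contrast concrete, here is a small sketch of the FIFO side (the path is illustrative); the reader can be a previously unassociated process, even just "cat /tmp/demo_fifo":

#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* the one-time setup step that unnamed pipes don't need */
    if (mkfifo("/tmp/demo_fifo", 0600) != 0)
        perror("mkfifo");            /* EEXIST is harmless on reruns */

    /* open() blocks until some other process opens the read end */
    int fd = open("/tmp/demo_fifo", O_WRONLY);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }
    write(fd, "hello\n", 6);
    close(fd);                       /* the reader then sees EOF */
    return 0;
}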
Does it matter which end of the pipe I open first with named pipes?
No. The first process to open the FIFO will (normally) block until there's a process with the other end open. If you open it for reading and writing (unconventional but possible), then you won't be blocked; if you use the O_NONBLOCK flag, you won't be blocked either.
Is the behaviour of pipes consistent between different Linux systems?
Yes. I've not heard of or experienced any problems with pipes on any of the systems where I've used them.
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
No: pipes and FIFOs are independent of the shell you use.
Are there any other questions I should be asking or issues I should be aware of if I want to use pipes in this way?
Just remember that you must close the reading end of a pipe in the process that will be writing, and the writing end of the pipe in the process that will be reading. If you want bidirectional communication over pipes, use two separate pipes. If you create complicated plumbing arrangements, beware of deadlock - it is possible. A linear pipeline does not deadlock, however (though if the first process never closes its output, the downstream processes may wait indefinitely).
I observed, both above and in comments to other answers, that pipe buffers are classically limited to quite small sizes. @Charlie Martin counter-commented that some versions of Unix have dynamic pipe buffers, and these can be quite large.
I'm not sure which ones he has in mind. I used the test program that follows on Solaris, AIX, HP-UX, MacOS X, Linux and Cygwin / Windows XP (results below):
#include <unistd.h>
#include <signal.h>
#include <stdio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>

static const char *arg0;

static void err_syserr(char *str)
{
    int errnum = errno;
    fprintf(stderr, "%s: %s - (%d) %s\n", arg0, str, errnum, strerror(errnum));
    exit(1);
}

int main(int argc, char **argv)
{
    int pd[2];
    pid_t kid;
    size_t i = 0;
    char buffer[2] = "a";
    int flags;

    arg0 = argv[0];
    if (pipe(pd) != 0)
        err_syserr("pipe() failed");
    if ((kid = fork()) < 0)
        err_syserr("fork() failed");
    else if (kid == 0)
    {
        close(pd[1]);
        pause();
    }
    /* else */
    close(pd[0]);
    /* F_GETFL returns the flags; F_SETFL takes them by value */
    if ((flags = fcntl(pd[1], F_GETFL)) == -1)
        err_syserr("fcntl(F_GETFL) failed");
    flags |= O_NONBLOCK;
    if (fcntl(pd[1], F_SETFL, flags) == -1)
        err_syserr("fcntl(F_SETFL) failed");
    while (write(pd[1], buffer, sizeof(buffer)-1) == sizeof(buffer)-1)
    {
        putchar('.');
        if (++i % 50 == 0)
            printf("%u\n", (unsigned)i);
    }
    if (i % 50 != 0)
        printf("%u\n", (unsigned)i);
    kill(kid, SIGINT);
    return 0;
}
I'd be curious to get extra results from other platforms. Here are the sizes I found. All the results are larger than I expected, I must confess, but Charlie and I may be debating the meaning of 'quite large' when it comes to buffer sizes.
8196 - HP-UX 11.23 for IA-64 (fcntl(F_SETFL) failed)
16384 - Solaris 10
16384 - MacOS X 10.5 (O_NONBLOCK did not work, though fcntl(F_SETFL) did not fail)
32768 - AIX 5.3
65536 - Cygwin / Windows XP (O_NONBLOCK did not work, though fcntl(F_SETFL) did not fail)
65536 - SuSE Linux 10 (and CentOS) (fcntl(F_SETFL) failed)
One point that is clear from these tests is that O_NONBLOCK works with pipes on some platforms and not on others.
The program creates a pipe, and forks. The child closes the write end of the pipe, and then goes to sleep until it gets a signal - that's what pause() does. The parent then closes the read end of the pipe, and sets the flags on the write descriptor so that it won't block on an attempt to write on a full pipe. It then loops, writing one character at a time, and printing a dot for each character written, and a count and newline every 50 characters. When it detects a write problem (buffer full, since the child is not reading a thing), it stops the loop, writes the final count, and kills the child.
