Sending HTTP requests with cookies using ESP8266 - Arduino

I wrote this API:
https://github.com/prp-e/iot-api-temp-humid
When I tested it, I used this command to log in and save the session cookie:

    curl -c cookies.txt http://localhost:8000/login/username/password

and each time I want to check the data in the "Enviroment" table, I use:

    curl -b cookies.txt http://localhost:8000/env/username

I need the cookie to be stored somewhere, or regenerated each time the ESP8266 sends data to the API. Is there any way to do this?

If the cookie data is small (fewer than 4096 bytes), you might store it using the EEPROM class. Note that the ESP8266 doesn't really have an EEPROM (Arduinos generally do), so this is just writing the data to a reserved area of its flash storage. Be sure to call EEPROM.commit() after you write or your changes won't actually be saved. The EEPROM documentation includes links to some examples of how to use it.
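For instance, here's a minimal sketch of the idea (the 256-byte size and the saveCookie/loadCookie helpers are just illustrative, not part of any library):

    #include <EEPROM.h>

    const size_t COOKIE_MAX = 256;  // assumed maximum cookie length

    void saveCookie(const String &cookie) {
      EEPROM.begin(COOKIE_MAX);     // reserve a flash-backed RAM copy
      for (size_t i = 0; i < COOKIE_MAX; i++) {
        EEPROM.write(i, i < cookie.length() ? cookie[i] : 0);
      }
      EEPROM.commit();              // without this, nothing hits the flash
    }

    String loadCookie() {
      EEPROM.begin(COOKIE_MAX);
      String cookie;
      for (size_t i = 0; i < COOKIE_MAX; i++) {
        char c = EEPROM.read(i);
        if (c == 0) break;          // the stored string is NUL-padded
        cookie += c;
      }
      return cookie;
    }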
If the cookie data is larger, you can store it in a file using SPIFFS. SPIFFS lets you use part of the ESP8266's flash storage as a simple filesystem.
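Along these lines (a sketch only; "/cookie.txt" is an arbitrary file name, and your board's flash layout must reserve space for SPIFFS):

    #include <FS.h>  // SPIFFS on the ESP8266 Arduino core

    void saveCookieToFile(const String &cookie) {
      SPIFFS.begin();
      File f = SPIFFS.open("/cookie.txt", "w");
      if (f) {
        f.print(cookie);
        f.close();
      }
    }

    String loadCookieFromFile() {
      SPIFFS.begin();
      File f = SPIFFS.open("/cookie.txt", "r");
      if (!f) return String();   // no cookie stored yet
      String cookie = f.readString();
      f.close();
      return cookie;
    }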
ESP8266 boards usually have low-quality flash storage which can only handle at most a few hundred thousand writes, so you don't want to write to the flash very frequently. For instance, if you updated the cookies in flash once per second, in just one day you'd write to the flash 86,400 times. Within two days you'd quite possibly wear out the sector of flash being used to store the cookie values. So be careful about how often you change the values of the cookies and how often you write them to flash.
The ESP8266 also has 512 bytes of RAM associated with its real-time clock (RTC). Data stored there will persist across reboots but will be lost if power is removed from the chip. Because it's ordinary RAM and not flash, it doesn't suffer from wear problems and can be rewritten safely.
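Here's a minimal sketch of how to use it (the struct layout, sizes, and marker value are just assumptions for illustration):

    // Keep the cookie in RTC user memory: survives reboots, not power loss.
    struct RtcCookie {
      uint32_t magic;      // marker so valid data can be told from garbage
      char     value[248]; // 252 bytes total, well inside the 512-byte area
    };

    const uint32_t RTC_MAGIC = 0xC00C1E5A;  // arbitrary marker value

    void saveCookieToRtc(const String &cookie) {
      RtcCookie rc = {};
      rc.magic = RTC_MAGIC;
      cookie.toCharArray(rc.value, sizeof(rc.value));
      ESP.rtcUserMemoryWrite(0, (uint32_t *)&rc, sizeof(rc));
    }

    bool loadCookieFromRtc(String &cookie) {
      RtcCookie rc;
      ESP.rtcUserMemoryRead(0, (uint32_t *)&rc, sizeof(rc));
      if (rc.magic != RTC_MAGIC) return false;  // nothing valid stored yet
      rc.value[sizeof(rc.value) - 1] = 0;
      cookie = rc.value;
      return true;
    }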

Related

Using PIC18F K40 microcontroller flash memory as storage

I'm using the PIC18F67K40 microcontroller in my project.
It has 1kB of EEPROM and 128kB of program memory (flash).
For now I'm using the EEPROM to store my settings.
The application is "growing" and I realized that at some point 1kB will not be enough. Some of the settings are arrays of pretty big structures.
I realize that flash memory has only 100k write cycles and that I could add an external EEPROM, but I don't want to change anything in the hardware, and the memory in this product will never reach 2k writes for sure.
My questions are:
How can I switch from EEPROM storage to flash storage?
Do I have to recalculate some CRC after the program memory changes?
Do I have to declare somewhere in the project settings that I'm using some of the flash for storage?
Is there anything else I have to do in order to use flash memory like this?
100k writes is the endurance of the data EEPROM only, not of the flash memory (which is rated for only 10k writes). You can extend the endurance with EEPROM emulation.
There is a really nice library from Microchip for EEPROM emulation in flash memory.
Have a look here: EEPROM emulation
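This is not Microchip's library, but the core trick of EEPROM emulation is easy to sketch: spread writes across an entire erased page, appending a new record each time a value changes, and erase only when the page fills up. The flash_erase_page()/flash_write_word()/flash_read_word() primitives below are hypothetical wrappers around the device's self-programming sequence:

    #include <stdint.h>

    // Hypothetical low-level primitives; on a PIC18 these would wrap the
    // part's table-write / NVM-unlock sequence from the datasheet.
    void     flash_erase_page(uint32_t page_addr);
    void     flash_write_word(uint32_t addr, uint16_t word);
    uint16_t flash_read_word(uint32_t addr);

    #define PAGE_ADDR  0x1F800u  // assumed page, kept clear of program code
    #define PAGE_WORDS 64u       // assumed number of words per erase page
    #define EMPTY      0xFFFFu   // erased flash reads back as all ones

    // Append a new value to the next free slot, so one page absorbs
    // PAGE_WORDS settings updates per erase cycle. (A real implementation
    // needs a framing marker so that a stored value of 0xFFFF is legal.)
    void settings_write(uint16_t value) {
      for (uint32_t i = 0; i < PAGE_WORDS; i++) {
        if (flash_read_word(PAGE_ADDR + 2 * i) == EMPTY) {
          flash_write_word(PAGE_ADDR + 2 * i, value);
          return;
        }
      }
      flash_erase_page(PAGE_ADDR);  // page full: erase and start over
      flash_write_word(PAGE_ADDR, value);
    }

    // Read back the most recently appended value.
    uint16_t settings_read(void) {
      uint16_t last = EMPTY;
      for (uint32_t i = 0; i < PAGE_WORDS; i++) {
        uint16_t w = flash_read_word(PAGE_ADDR + 2 * i);
        if (w == EMPTY) break;
        last = w;
      }
      return last;
    }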
I did this for a client a couple of years ago. I can't post the code due to NDA and copyright reasons, but the basic trick was to use something called RTSP (Run-Time Self-Programming). RTSP may be going obsolete now, but whatever replaces it probably works in a similar way.
Essentially the flash looks like a series of pages which can be written a word at a time, but only erased a page at a time. What you will need to do is write some code that can unlock and erase a page and then write to it. Once you have done this, the page can be read as ordinary memory.
You don't need to change any project settings. However, make sure that the page you use is well clear of the program code.
If you want a CRC (usually a good move) you'll have to calculate it yourself.
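For instance, a plain CRC-16-CCITT over the stored settings (nothing PIC-specific here; where you store the resulting CRC, e.g. in the last word of the page, is up to you):

    #include <stdint.h>
    #include <stddef.h>

    // Bitwise CRC-16-CCITT (polynomial 0x1021, initial value 0xFFFF),
    // computed over the settings block so corruption can be detected at boot.
    uint16_t crc16_ccitt(const uint8_t *data, size_t len) {
      uint16_t crc = 0xFFFF;
      while (len--) {
        crc ^= (uint16_t)(*data++) << 8;
        for (int bit = 0; bit < 8; bit++) {
          crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                               : (uint16_t)(crc << 1);
        }
      }
      return crc;
    }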

Synchronizing multiple audio sources over a network

The problem in short form is the following:
I'm buffering (real-time analysis) multiple audio sources from different locations and I want to sync them with millisecond precision when I've retrieved them in my server.
What I've discovered so far:
The reason I'm doing this is to analyze the sound the microphones are capturing. For this to work, NTP is implemented, so that should be sufficient clock-synchronization-wise.
Right now I'm able to access an RTP stream, but is it possible to access a timestamp in that stream? How should I buffer?
Other ideas are welcome if they are better, of course ;)
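For what it's worth, RTP (RFC 3550) does carry a 32-bit media timestamp in every packet, at byte offset 4 of the fixed header; note that it counts media samples, and only maps to wall-clock (NTP) time via RTCP sender reports. A minimal sketch of pulling it out of a raw packet:

    #include <cstdint>
    #include <cstddef>

    // RFC 3550: the RTP timestamp is a big-endian 32-bit field at bytes 4-7.
    // Returns false if the buffer is too short to hold an RTP header.
    bool rtp_timestamp(const uint8_t *pkt, size_t len, uint32_t &ts) {
      if (len < 12) return false;  // fixed RTP header is 12 bytes
      ts = ((uint32_t)pkt[4] << 24) | ((uint32_t)pkt[5] << 16) |
           ((uint32_t)pkt[6] << 8)  |  (uint32_t)pkt[7];
      return true;
    }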

What are some tips for buffer usage and tuning in custom TCP services?

I've been researching a number of networking libraries and frameworks lately such as libevent, libev, Facebook Tornado, and Concurrence (Python).
One thing I notice in their implementations is the use of application-level per-client read/write buffers (e.g. IOStream in Tornado) -- even HAProxy has such buffers.
In addition to these application-level buffers, there's the OS kernel TCP implementation's buffers per socket.
I think I can understand the app/lib's use of a read buffer: the app/lib reads from the kernel buffer into the app buffer and the app does something with the data (deserializes a message therein, for instance).
However, I have confused myself about the need for a write buffer. Why not just write to the kernel's send/write buffer? Is it to avoid the overhead of system calls (write)? I suppose the point is to be ready with more data to push into the kernel's write buffer when the kernel notifies the app/lib that the socket is "writable" (e.g. EPOLLOUT). But why not just do away with the app write buffer and configure the kernel's TCP write buffer to be equally large?
Also, consider a service for which disabling the Nagle algorithm makes sense (e.g. a game server). In such a configuration, I suppose I'd want the opposite: no kernel write buffer but an application write buffer, yes? When the app is ready to send a complete message, it writes the app buffer via send() etc. and the kernel passes it through.
Help me clear up my understanding here, if you would. Thanks!
Well, speaking for haproxy: it makes no distinction between read and write buffers; a single buffer is used for both purposes, which saves a copy. However, this makes some changes really painful. For instance, sometimes you have to rewrite an HTTP header, and you have to manage moving the data around correctly for your rewrite, and to save some state about the header's previous value. In haproxy, the Connection header can be rewritten, and its previous and new states are saved because they are needed later, after the rewrite. With separate read and write buffers, you don't have this complexity, as you can always look back in your read buffer if you need any original data.
Haproxy is also able to make use of splicing between sockets on Linux. This means that it neither reads nor writes the data itself; it just tells the kernel what to take from where and where to move it. The kernel then moves TCP segments from one network card to another (when possible) by shuffling pointers rather than copying data, so the payload never enters user space, avoiding a double copy.
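A rough sketch of the underlying Linux mechanism (not haproxy's code; socket-to-socket splicing has to bounce through a pipe, and error handling is trimmed):

    // Linux-only: splice() needs _GNU_SOURCE (g++ on glibc defines it by default).
    #include <fcntl.h>
    #include <unistd.h>

    // Move up to 64kB from one connected socket to another without the
    // payload ever being copied into user space.
    ssize_t forward(int src_fd, int dst_fd) {
      int pipefd[2];
      if (pipe(pipefd) < 0) return -1;
      ssize_t n = splice(src_fd, nullptr, pipefd[1], nullptr,
                         65536, SPLICE_F_MOVE | SPLICE_F_MORE);
      if (n > 0) {
        n = splice(pipefd[0], nullptr, dst_fd, nullptr,
                   (size_t)n, SPLICE_F_MOVE | SPLICE_F_MORE);
      }
      close(pipefd[0]);
      close(pipefd[1]);
      return n;
    }

(A real proxy would keep the pipe around between transfers rather than creating one per call.)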
You're completely right about the fact that in general you don't need to copy data between buffers; it's a waste of memory bandwidth. With splicing, haproxy runs at 10Gbps with 20% CPU; without splicing (two more copies), it's close to 100%. But then consider the complexity of the alternatives, and make your choice.
Hoping this helps.
When you use asynchronous socket I/O, the read/write call returns immediately. Since a single invocation is not guaranteed to deal with all the data (i.e. to put all the required data into the TCP socket buffer, or to get all the required data out of it), the partial data must survive across multiple operations. So you need application buffer space to keep the data for as long as the I/O operations last.
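A sketch of that pattern for the write side (POSIX sockets; the Conn struct and the flush_out() name are just illustrative):

    #include <sys/socket.h>
    #include <cerrno>
    #include <string>

    struct Conn {
      int fd;            // non-blocking socket
      std::string out;   // application-level write buffer
    };

    // Push as much of the app buffer into the kernel as it will accept;
    // whatever is left stays buffered until the next EPOLLOUT. Returns
    // false on a fatal socket error.
    bool flush_out(Conn &c) {
      while (!c.out.empty()) {
        ssize_t n = send(c.fd, c.out.data(), c.out.size(), MSG_NOSIGNAL);
        if (n < 0) {
          if (errno == EAGAIN || errno == EWOULDBLOCK) return true;
          if (errno == EINTR) continue;
          return false;
        }
        c.out.erase(0, (size_t)n);  // drop the bytes the kernel accepted
      }
      return true;  // fully flushed: stop watching for EPOLLOUT
    }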

How to pipe raw PCM data from /dev/ttyUSB0 to a soundcard?

I'm currently working on a small microphone, connected to a PC via an FPGA. The FPGA spits a raw data stream into my computer via UART/USB. I'm able to record, play and analyze the data.
But I can't play the "live" audio stream directly.
What works is saving the data stream in raw PCM format with a custom-made C program, and piping the contents of the file into aplay. But that adds a 10-second lag to the stream... not so nice for demoing or testing.

    tail -f snd.raw | aplay -t raw -f S16_LE -r 9000

Does someone have another idea how to get the audio stream into my ears faster? Why does

    cat /dev/ttyUSB0 | aplay

not work? (Nothing happens.)
Thanks so far
marvin
You need an API that lets you stream audio buffers directly to the soundcard. I haven't done it on Linux, but I've used FMOD for this purpose. You might find another API in this question. SDL seems popular.
The general idea is that you set up a streaming buffer, and your C program stuffs the incoming bytes into an array. The array size is chosen to balance lag against jitter in the incoming stream. When the array is full, you pass it to the API and start filling another one while the first plays.
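A sketch of that pattern using SDL2's queueing API (the device path, rate and format are assumptions matching the aplay flags above; the serial port must already be configured for the right baud rate and raw mode):

    #include <SDL2/SDL.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdint>

    int main() {
      SDL_Init(SDL_INIT_AUDIO);

      SDL_AudioSpec want = {};
      want.freq = 9000;            // matches aplay -r 9000
      want.format = AUDIO_S16LSB;  // matches aplay -f S16_LE
      want.channels = 1;
      want.samples = 512;          // small buffer keeps latency low

      SDL_AudioDeviceID dev = SDL_OpenAudioDevice(nullptr, 0, &want,
                                                  nullptr, 0);
      SDL_PauseAudioDevice(dev, 0);  // start playback

      int fd = open("/dev/ttyUSB0", O_RDONLY);
      uint8_t buf[1024];
      for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n <= 0) break;
        SDL_QueueAudio(dev, buf, (Uint32)n);  // bytes go straight to playback
      }

      close(fd);
      SDL_CloseAudioDevice(dev);
      SDL_Quit();
      return 0;
    }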
That would seem to be the domain of the alsaloop program. However, this program requires two ALSA devices to work with, and you can see from its options that it goes to considerable effort to match the data flow of the two devices, something you would not necessarily want to do yourself.
This Stack Overflow topic talks about how to create a virtual userspace device available to ALSA: maybe that is a route worth pursuing.

Is it possible to downsample an audio stream at runtime with Flash or FMS?

I'm no expert in audio, so if any of you folks are, I'd appreciate your insights on this.
My client has a handful of MP3 podcasts stored at a relatively high bit rate, and I'd like to be able to serve those files to her users at "different" bit rates depending on that user's credentials. (For example, if you're an authenticated user, you might get the full, unaltered stream, but if you're not, you'd get a lower-bit-rate version -- or at least a purposely tweaked lower-quality version than the original.)
Seems like there are two options: downsampling at the source and downsampling at the client. In this case, knowing of course that the source stream would arrive at the client at a high bit rate (and that there are considerations to be made about that, which I realize), I'd prefer to alter the stream at the client somehow, rather than on the server, for several reasons.
Is doing so possible with the Flash Player and ActionScript alone, at runtime (even with a third-party library), or does a scenario like this one require a server-based solution? If the latter, can Flash Media Server handle this requirement specifically? Again, I'd like to avoid using FMS if I can, since she doesn't really have the budget for it, but if that's the only option and it's really an option, I'm open to considering it.
Thanks in advance...
Note: Please don't question the sanity of the request -- I realize it might sound a bit strange, but the requirements are what they are. In that light, for purposes of answering the question, you can ignore the source and delivery path of the bits; all I'm really looking for is an explanation of whether (and ideally how) a Flash client can downsample an MP3 audio stream at runtime, irrespective of whether the audio's arriving over a network connection or being read directly from disk. Thanks much!
I'd prefer to alter the stream at the client somehow, rather than on the server, for several reasons.
Please elucidate the reasons, because resampling on the client end would normally be considered crazy: it wastes bandwidth sending the higher-quality version to a user who isn't allowed to hear it, and risks a canny user ripping the higher-quality stream as it comes in over the network.
In any case the Flash Player doesn't give you the tools to process audio, only play it.
You shouldn't need FMS to process audio at the server end. You could have a server-side script that loads newly-uploaded podcasts and saves them back out as lower-bitrate files, which could then be served to lowly users via a normal web server. For Python, see e.g. PyMedia or py-lame; even a shell script invoking lame or ffmpeg from the command line should be pretty easy to pull off.
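For instance, something along these lines, shelling out to the lame CLI (file names are placeholders, and this assumes lame is on the PATH; --mp3input tells it the input is already MP3):

    #include <cstdlib>
    #include <string>

    // Re-encode an uploaded podcast at a lower bitrate for non-premium
    // users. Run once per upload, e.g. from a cron job or upload hook.
    bool make_low_bitrate(const std::string &in, const std::string &out) {
      std::string cmd = "lame --mp3input -b 64 \"" + in + "\" \"" + out + "\"";
      return std::system(cmd.c_str()) == 0;
    }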
If storage is at a premium, have you looked into AAC audio? I believe Flash 9 and 10 on desktop browsers will play it. In my experience AAC takes only half the size of a comparable MP3 (i.e. an 80kbps AAC will sound the same as a 160kbps MP3).
As for playback quality, if I recall correctly there are audio playback settings in the Publish Settings section of the Flash editor. Whether or not the playback bitrate can be changed at runtime is something I'm not sure of.
