I want to prevent my file from being copied from any device. That is, I have a memory card, and when I insert it into any device, such as an Android phone or a computer, my file should not be copyable from it.
Are there any resources to read, or anywhere I can get some information about copy prevention?
Maybe you could partition the SD card, leave some space unpartitioned, and write some magic bytes to it. When your program starts, it determines the device it was run from. If that is the SD card, it tries to read the raw bytes from the SD card and compares them with the magic bytes; if the program was not run from the SD card, or if the magic bytes do not match, it refuses to execute. Done!
Please don't get me wrong, this won't be easy, but maybe it could work. Copying would still work, but the copied file would be useless. Also, this is not a ready-made solution, but rather an outline of how you could achieve your goal.
For accessing an SD card's raw data, please see
http://www.codeproject.com/Articles/28314/Reading-and-Writing-to-Raw-Disk-Sectors
And for partitioning http://geeks.lockergnome.com/profiles/blogs/how-to-partition-an-sd-card
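To make the idea a little more concrete, here is a minimal desktop-side sketch (Win32) of the check: open the physical drive raw, seek to a sector-aligned offset inside the unpartitioned region, and compare the first bytes against the expected signature. The drive number, offset, and signature below are placeholder assumptions, not a working solution.

```cpp
#include <windows.h>
#include <cstdio>
#include <cstring>

bool cardHasMagic(int physicalDriveNumber, long long magicOffset) {
    char path[64];
    std::snprintf(path, sizeof(path), "\\\\.\\PhysicalDrive%d", physicalDriveNumber);

    // Raw access typically requires administrator rights.
    HANDLE h = CreateFileA(path, GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           nullptr, OPEN_EXISTING, 0, nullptr);
    if (h == INVALID_HANDLE_VALUE) return false;

    LARGE_INTEGER pos;
    pos.QuadPart = magicOffset;                     // must be sector-aligned
    if (!SetFilePointerEx(h, pos, nullptr, FILE_BEGIN)) { CloseHandle(h); return false; }

    unsigned char sector[512];                      // raw reads are whole sectors
    DWORD read = 0;
    bool ok = ReadFile(h, sector, sizeof(sector), &read, nullptr) && read == sizeof(sector);
    CloseHandle(h);
    if (!ok) return false;

    static const unsigned char magic[] = { 0xC0, 0xFF, 0xEE, 0x42 };  // made-up signature
    return std::memcmp(sector, magic, sizeof(magic)) == 0;
}
```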
I am trying to write firmware for an RFID device which will need config data storage as well as temporary storage that can be read and then, if convenient, erased.
I am using the Arduino IDE to program this on an ESP32-WROOM-32. I have tried to understand how the storage actually works from various resources, one of them being the datasheet, which says up to 4 MB of program code storage is possible, and that sounds fantastic. My question is: if, for example, I use the EEPROM library and save about 214 bytes of config data that will rarely be touched, where exactly is it stored? Is it simply in NVS? I can see that the default settings give me about 1310720 bytes of sketch storage, and I know that I can use other partitions as well in case I ever need more sketch storage than 1310720 bytes.
My question is: if I am trying to store data such as config and real-time data, how much would I be able to store? Is there a limit? Would it cause any kind of problem if I try to use the other partitions to write to? Will it be only NVS that stores that data, or can I use the other partitions (app0, app1, spiffs, etc.) to store extra bytes? A lot of the resources are confusing me; here are the data I am referring to from online: 1 and 2. Any idea would help me move forward.
P.S. I am aware that the EEPROM library has been deprecated and that I should use either Preferences or LittleFS for better management, but if I understand correctly I can still use it without much issue, since compatibility is still maintained. I am also curious about using the RTC's built-in SRAM via the RTC_DATA_ATTR attribute, since I hope to use deep sleep mode as well.
My question is if I am trying to store data such as config and real time data, how much would I possibly be able to store? Is there a limit?
It depends. First, on the module: there is the ESP32-WROOM with 4 MB flash, but you can also order modules with different flash sizes.
Then the question is: how big is your application (code)? Obviously this needs to be stored in flash as well, reducing the total amount available for data storage by the size of the application. The bootloader also needs a small amount of space.
Next, the ESP32 uses a partition scheme. The bootloader and the partition table occupy a small fixed region at the start of flash; the rest can be divided between one or more application partitions, NVS partitions, and possibly other utility partitions (e.g. otadata).
If you are using the OTA functions, there will be at least two application partitions of equal size (possibly plus a factory partition), further reducing the total amount available for data storage.
So the absolute upper limit of what you can store using NVS functions is the size of your NVS partition. However since it's a key-value storage, you must take into account the size of the key, which can be considerably larger than the data you store (up to 12 times for a 12 character key and a uint8 value).
So there is no way to say exactly how much data you can put into the system without knowing exactly how you're going to use it. For example, you could store one very large "blob" value that could take "up to 97.6%" of the partition size. But you could not store 10 "blob" values of 1/10 (9.76%) the size since you must take into account the keys and some flash metadata used internally.
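To make the NVS side concrete: on the Arduino-ESP32 core the Preferences library is a wrapper around NVS, so a small config struct like the ~214 bytes mentioned above can be stored as a single blob. A minimal sketch (namespace, key name, and struct fields are made up for illustration):

```cpp
#include <Preferences.h>

struct Config {
  uint32_t readInterval;
  char     deviceName[32];
  uint8_t  reserved[178];   // pad to roughly the 214 bytes from the question
};

Preferences prefs;

void setup() {
  Serial.begin(115200);

  Config cfg = {};
  prefs.begin("rfid-cfg", false);              // false = read/write
  size_t got = prefs.getBytes("config", &cfg, sizeof(cfg));
  if (got != sizeof(cfg)) {                    // first boot: write defaults
    cfg.readInterval = 1000;
    strlcpy(cfg.deviceName, "reader-01", sizeof(cfg.deviceName));
    prefs.putBytes("config", &cfg, sizeof(cfg));
  }
  prefs.end();

  Serial.printf("interval=%u name=%s\n", cfg.readInterval, cfg.deviceName);
}

void loop() {}
```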
Would it cause any kind of problems if I try to use the other such partitions to write the code?
That depends on what those partitions are used for. If you overwrite the partition table, the bootloader, or your application code, yes, there will be problems. If there is "free space" then it won't be a problem, but then you should redefine that free space as NVS space. It's nice of Espressif to provide this NVS library; don't work around it, work with it.
Using a custom partition table (a partitions.csv processed by Espressif's partition tools and flashed with esptool) you can minimize the size of the application partition to just barely fit your application, and maximize the NVS partition size. This way you will get the most storage out of your device without manually implementing a filesystem. If you are using OTA, you should leave some empty room in your application partition in case your application code grows, as it usually does.
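For illustration only, a custom partitions.csv along these lines (4 MB flash, OTA dropped, sizes picked arbitrarily) keeps one application slot and hands most of the remaining flash to a second, larger NVS data partition. Note that an extra NVS partition has to be opened by its label (for example via nvs_flash_init_partition() in ESP-IDF):

```
# Name,     Type, SubType,  Offset,   Size
nvs,        data, nvs,      0x9000,   0x5000
phy_init,   data, phy,      0xe000,   0x1000
factory,    app,  factory,  0x10000,  0x180000
big_nvs,    data, nvs,      0x190000, 0x270000
```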
Will it be only NVS that is storing that data or can I utilise the other app0, app1, spiffs etc to store extra Bytes?
You absolutely can, but you will destroy whatever data is on that partition. And you will have lots of work to do, because you'll have to implement all of this yourself (basically roll your own flash driver).
If you don't need OTA, you don't need the app0/app1 pair at all; a single application partition is enough.
Note that SPIFFS is also a way to store data, except it's not key-value but file-based. If you don't need it, remove that partition and grow your NVS partition into the freed space.
On the other hand, SPIFFS is probably a better alternative if you are really tight on flash space, since you can omit the key and do your own referencing.
I have been writing lots of different kinds of sketches for Arduino, but there is something I have not been able to find out whether it is possible.
I would like to be able to get an Arduino to save preferences that can be restored back when the Arduino restarts. The kind of data that you want to change (in the field) without having to hardcode it in the sketch, or needing to upload changes from the IDE.
Examples:
Thermostat settings
Time lapse camera frame rate
So if I set the thermostat for 68° and the power goes out, I want it to remember what temperature I set when the power comes back on.
Unless you have a large amount of settings that you wish to save, EEPROM should suffice for your purposes. What I would do is define a struct to store the relevant options, and use the EEPROM library to read/write the entire struct.
A wonderful example of how to do that exists on the Arduino reference pages:
EEPROM Put
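A minimal sketch of that struct-plus-EEPROM.put() approach; the Settings fields are made-up examples matching the thermostat and time-lapse use cases above:

```cpp
#include <EEPROM.h>

// Example settings struct; the fields are illustrative.
struct Settings {
  float    thermostatF;     // e.g. 68.0
  uint16_t frameRateSec;    // time-lapse interval, seconds between frames
};

const int kSettingsAddr = 0;   // EEPROM offset where the struct is kept

void setup() {
  Serial.begin(9600);

  Settings s;
  EEPROM.get(kSettingsAddr, s);        // restore whatever was saved last time

  // A fresh AVR EEPROM reads back as 0xFF bytes, which decodes to NaN here,
  // so treat NaN as "never saved yet" and write sensible defaults.
  if (isnan(s.thermostatF)) {
    s.thermostatF  = 68.0;
    s.frameRateSec = 30;
    EEPROM.put(kSettingsAddr, s);      // put() only rewrites bytes that changed
  }

  Serial.println(s.thermostatF);
}

void loop() {
  // Whenever the user changes a setting, update the struct and call
  // EEPROM.put(kSettingsAddr, newSettings) again.
}
```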
I was hoping to do some kind of open-file, read/write operation like you can do in PHP, where I can read/write whole strings at a time.
There is really no need to do this unless you have a large number of options. Otherwise your best bet is to use something like an SD card, which does support file I/O operations.
It seems you are looking for the Arduino EEPROM library.
EEPROM: memory whose values are kept when the board is turned off (like a tiny hard drive). This library enables you to read and write those bytes.
I'm working on a project with an Arduino, and I'd like to be able to save some data persistently. I'm already using an Ethernet shield, which has a MicroSD reader.
The data I'm saving will be incredibly small. At the moment, I'll just be saving 3 bytes at a time. What I'd really like is a way to open the SD card for writing starting at byte x and then write y bytes of data. When I want to read it back, I just read y bytes starting at byte x.
However, all the code I've seen involves working with a filesystem, which seems like an unneeded overhead. I don't need this data to be readable on any other system, storage space isn't an issue, and there's no other data on the card to worry about. Is there a way to just write binary data directly to an SD card?
It is possible to write raw binary data to an SD card. Most people do this using the 4-pin SPI interface supported by the SD card. Unfortunately, data isn't byte-addressed, but block-addressed (block size usually 512 bytes).
This means if you wanted to write 4 bytes at byte 516, you'd have to read in block 0x00000001 (the second block), and then calculate an offset, write your data, then write the entire block back. (I can't say that this limitation applies to the SD interface using more pins, I have no experience with it)
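A rough sketch of that read-modify-write cycle, using the low-level Sd2Card block primitives bundled with the stock SD library (names and init arguments vary between library versions, so treat this as an outline rather than drop-in code):

```cpp
#include <SPI.h>
#include <SD.h>                 // brings in the low-level Sd2Card utility class

const uint8_t kChipSelect = 4;  // SD chip-select pin on many Ethernet shields
Sd2Card card;

// Write `len` bytes at absolute byte offset `pos` on the card (no filesystem).
// Kept to a single 512-byte block to keep the example short.
bool rawWrite(uint32_t pos, const uint8_t* data, uint16_t len) {
  uint32_t block  = pos / 512;
  uint16_t offset = pos % 512;
  if (offset + len > 512) return false;

  static uint8_t buf[512];
  if (!card.readBlock(block, buf)) return false;    // fetch the whole block
  memcpy(buf + offset, data, len);                   // patch our few bytes
  return card.writeBlock(block, buf);                // write the block back
}

void setup() {
  Serial.begin(9600);
  if (!card.init(SPI_HALF_SPEED, kChipSelect)) {
    Serial.println("card init failed");
    return;
  }
  uint8_t payload[3] = {1, 2, 3};                    // the 3 bytes from the question
  rawWrite(516, payload, sizeof(payload));           // lands in block 1, offset 4
}

void loop() {}
```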
This complication is why a lot of people opt for using libraries that include "unneeded overhead".
With that said, I've had to do this in the past, because I needed a way of logging data that was robust in the face of power failures. I found the following resource very helpful:
http://elm-chan.org/docs/mmc/mmc_e.html
You'll probably find it easier to make your smaller writes to a memory buffer, and dump them to the SD card when you have a large enough amount of data to make it worthwhile.
If you look around, you'll find plenty of open-source code dealing with the SD SPI interface to make use of directly, or as reference to implement your own system.
This question may look like a duplicate, but I am not getting the answer I am looking for.
The problem is that, on Unix, one of our 4GL binaries is fetching data from a table using a cursor and writing the data to a .txt file.
The table contains around 50 million records.
The binary takes a long time and does not complete; the .txt file also stays at 0 bytes.
I want to know the possible reasons why the records are not being written to the .txt file.
Note: There is enough disk space available.
Also, for 30 million records, I get the data in the .txt file as expected.
The information you provide is insufficient to tell for sure why the file is not written.
In UNIX, a text file is just like any other file - a collection of bytes. No specific limit (or structure) is enforced on "row size" or "row count," although obviously, some programs might have certain limits on maximum supported line sizes and such (depending on their implementation).
When a program starts writing data to a file (i.e. once the internal buffer is flushed for the first time) the file will no longer be zero size, so clearly your binary is doing something else all that time (unless it wipes out the file as part of the cleanup).
Try running your executable via strace to see the file I/O activity - that would give some clues as to what is going on.
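If strace is available on your system, something along these lines (the binary name and syscall filter are placeholders) will show whether write() calls against the .txt file ever happen:

```
strace -f -o trace.log -e trace=open,openat,write,close ./your_4gl_binary
grep '\.txt' trace.log        # was the output file ever opened or written?
```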
Try closing the writer if you are using one to write to the file. It achieves the dual purpose of closing the resource along with flushing the remaining contents of the buffer.
Computed output needs to be flushed if you are using any kind of buffered writer. I have encountered such situations a few times, and in almost all cases the issue was that the output had not been flushed.
In Java specifically, the usual best practice for writing data involves buffers. When the buffer limit is reached, the data gets written to the file, but nothing is written while the buffer is still only partially full. This is what happens when a program exits without flushing its buffered writer.
So, in your case, if the processing time it takes is reasonable and the output still does not appear in the file, it may mean that the output has been computed and is sitting in RAM, but was never written to the file on disk because it was not flushed.
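The answer above talks about Java, but the behaviour is the same with any buffered writer; here is the same failure mode and fix sketched in C++:

```cpp
#include <fstream>

int main() {
    std::ofstream out("rows.txt");
    for (int i = 0; i < 1000000; ++i) {
        out << "row " << i << '\n';   // buffered: may not have hit the disk yet
    }
    out.flush();   // push buffered output to the OS now...
    out.close();   // ...and closing flushes as well (the destructor would too)
    return 0;      // if the process dies before flush/close, the tail is lost
}
```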
You can also consider the answers to this question.
I have a fairly generic question, so please pardon if it is a bit vague.
So, let's assume a file of 1 GB that needs to be encrypted and later decrypted on a given system.
The problem is that the system has less than 512 MB of free memory and about 1.5 GB of storage space (give or take), so with the file "onboard" we have roughly 500 MB of "hard drive scratch space" and less than 512 MB of RAM to "play with".
The system may well experience an "unscheduled power down" at any moment during encryption or decryption, and needs to be able to successfully resume the encryption/decryption process after being powered up again (and this seems like an extra-unpleasant nut to crack).
The questions are:
1) is it at all doable :) ?
2) what would be the best strategy to go about
a) encrypting/decrypting with so little scratch space (can't have the entire file lying around while decrypting/encrypting, need to truncate it "on the fly" somehow...)
and
b) implementing a disaster recovery that would work in such a constrained environment?
P.S.:
The cipher used has to be AES.
I looked into AES-CTR specifically but it does not seem to bode all that well for the disaster recovery shenanigan in an environment where you can't keep the entire decrypted file around till the end...
[edited to add]
I think I'll be doing it the Iserni way after all.
It is doable, provided you have a means to save the AES status vector together with the file position.
1. Save the AES status and file position P to files STAGE1 and STAGE2
2. Read one chunk (say, 10 megabytes) of encrypted/decrypted data
3. Write the decrypted/encrypted chunk to an external scratch file SCRATCH
4. Log the fact that SCRATCH is completed
5. Write SCRATCH over the original file at the same position
6. Log the fact that SCRATCH has been successfully copied
7. Go to step 1
If you get a hard crash after stage 1, and STAGE1 and STAGE2 disagree, you just restart and assume the stage with the earliest P to be good.
If you get a hard crash during or after stage 2, you lose 10 megabytes worth of work: but the AES and P are good, so you just repeat stage 2.
If you crash at stage 3, then on recovery you won't find the marker of stage 4, and so will know that SCRATCH is unreliable and must be regenerated. Having STAGE1/STAGE2, you are able to do so.
If you crash at stage 4, you will BELIEVE that SCRATCH must be regenerated, even if you could avoid this -- but you lose nothing in regenerating except a little time.
By the same token, if you crash during 5, or before 6 is committed to disk, you just repeat stages 5 and 6. You know you don't have to regenerate SCRATCH because stage 4 was committed to disk. If you crash after stage 1, you will still have a good SCRATCH to copy.
All this assumes that 10 MB is more than a cache's (OS + hard disk if writeback) worth of data. If it is not, raise to 32 or 64 MB. Recovery will be proportionately slower.
It might help to flush() and sync(), if these functions are available, after every write-stage has been completed.
Total write time is a bit more than twice normal, because of the need of "writing twice" in order to be sure.
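To illustrate the bookkeeping part of this (stage 1 plus the recovery rule), here is a rough C++ sketch; the file names and record layout are made up, and a real implementation would also serialize the AES/CTR state next to the position:

```cpp
#include <cstdint>
#include <cstdio>

struct StageRecord {
    uint64_t position;   // next byte offset to process in the big file
    uint64_t checksum;   // trivial integrity check against a torn write
};

static uint64_t sum(const StageRecord& r) { return r.position ^ 0x5AFE5AFE5AFE5AFEULL; }

bool writeStage(const char* path, uint64_t position) {
    StageRecord r{position, 0};
    r.checksum = sum(r);
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    bool ok = std::fwrite(&r, sizeof r, 1, f) == 1;
    std::fflush(f);               // and fsync(fileno(f)) where available
    std::fclose(f);
    return ok;
}

bool readStage(const char* path, uint64_t& position) {
    StageRecord r{};
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    bool ok = std::fread(&r, sizeof r, 1, f) == 1 && r.checksum == sum(r);
    std::fclose(f);
    if (ok) position = r.position;
    return ok;
}

// On startup: if both records are readable, take the earliest P (as suggested
// above when they disagree); otherwise take whichever one survived.
uint64_t recoverPosition() {
    uint64_t p1 = 0, p2 = 0;
    bool ok1 = readStage("STAGE1", p1);
    bool ok2 = readStage("STAGE2", p2);
    if (ok1 && ok2) return (p1 < p2) ? p1 : p2;
    if (ok1) return p1;
    if (ok2) return p2;
    return 0;                     // nothing saved yet: start from the beginning
}
```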
You have to work with the large file in chunks. Break a piece of the file off, encrypt it, and save it to disk; once saved, discard the unencrypted piece. Repeat. To decrypt, grab an encrypted piece, decrypt it, store the unencrypted chunk. Discard the encrypted piece. Repeat. When done decrypting the pieces, concatenate them.
Surely this is doable.
The "largest" (not large at all however) problem is that when you encrypt say 128 Mb of original data, you need to remove them from the source file. To do this you need to copy the remainder of the file to the beginning and then truncate the file. This would take time. During this step power can be turned off, but you don't care much -- you know the size of data you've encrypted (if you encrypt data by blocks with size multiple to 16 bytes, the size of the encrypted data will be equal to size that was or has to be removed from the decrypted file). Unfortunately it seems to be easier to invent the scheme than to explain it :), but I really see no problem other than extra copy operations which will slowdown the process. And no, there's no generic way to strip the data from the beginning of the file without copying the remainder to the beginning.