Options in BitBake to link files to the boot partition

I have a BitBake configuration that generates two partitions: the BOOT (FAT) partition containing U-Boot, uEnv.txt, etc., and a root file system that gets mounted read-only. In some instances the filesystem isn't a separate partition but a ramdisk, so I'm trying to enforce a design pattern that works in both cases:
What I'm trying to do is provide some of the files in the root filesystem as links to locations on the SD card. This way I can build a single SD card image, and the minor edits for node IDs or names can be easily tweaked by end-users. For example, if /etc/special_config.conf were a useful one, then rather than storing it on the read-only partition, a link would be created pointing back to the real file on the BOOT partition.
So far I've tried making a recipe that, for that case, does the following:
IMAGE_BOOT_FILES += "special_config.conf"
do_install () {
    install -d ${D}${sysconfdir}
    ln -s /media/BOOT/special_config.conf \
        ${D}${sysconfdir}/special_config.conf
}
This doesn't seem to do anything: IMAGE_BOOT_FILES doesn't collect special_config.conf into the BOOT partition, as if those changes get wiped out when the system image is populated.
Has anyone seen a clever way to enforce this kind of behavior in BitBake?

If I understand correctly, you get your ${sysconfdir}/special_config.conf symlink in the image (via a package built from the recipe mentioned), but you don't get the special_config.conf file on your BOOT partition when using the wic image fstype.
If that's the case, then the only problem is that you define IMAGE_BOOT_FILES in the package recipe rather than in the image recipe; this variable is only evaluated at image build time. Drop it from your config-file recipe, add it to the image recipe, and it should work.
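A minimal sketch of that split (the recipe and image names here are hypothetical):

```bitbake
# special-config.bb (package recipe): installs only the symlink
# into the root filesystem.
do_install () {
    install -d ${D}${sysconfdir}
    ln -s /media/BOOT/special_config.conf ${D}${sysconfdir}/special_config.conf
}

# my-image.bb (image recipe): IMAGE_BOOT_FILES is evaluated here,
# at image build time, when wic assembles the boot partition.
IMAGE_BOOT_FILES += "special_config.conf"
```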

Related

Delete a Google Storage folder including all versions of objects inside

Hi and thanks in advance. I want to delete a folder from Google Cloud Storage, including all the versions of all the objects inside. That's easy when you use gsutil from your laptop (you can just use the folder name as a prefix and pass the flag to delete all versions/generations of each object).
...but I want it in a script that is triggered periodically (for example while I'm on holiday). My current ideas are Apps Script and Google Cloud Functions (or Firebase Functions). The problem is that in these cases I don't have an interface as powerful as gsutil; I have to use the REST API, so I cannot say "delete everything with this prefix" or "delete all the versions of this object". Thus the best I can do is:
a) List all the object given a prefix. So for prefix "myFolder" I receive:
myFolder/obj1 - generation 10
myFolder/obj1 - generation 15
myFolder/obj2 - generation 12
... and so on for hundreds of files and at least 1 generation/version per file.
b) For each file-generation delete it giving the complete object name plus its generation.
As you can see, that seems like a lot of work. Do you know a better alternative?
Listing the objects you want to delete and then deleting them is the only way to achieve what you want.
The only alternative is Object Lifecycle Management, which can delete objects for you automatically based on conditions, if those conditions satisfy your requirements.
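As a sketch, the list-then-delete loop is short with the google-cloud-storage Python client (the bucket and prefix names below are assumptions, not from the question). The filtering step is shown as a plain function so it can be followed without credentials:

```python
def deletion_targets(listing, prefix):
    """From (object_name, generation) pairs, keep every generation under prefix."""
    return [(name, gen) for name, gen in listing if name.startswith(prefix)]

# The versioned listing from the question, as (name, generation) pairs:
listing = [
    ("myFolder/obj1", 10),
    ("myFolder/obj1", 15),
    ("myFolder/obj2", 12),
    ("otherFolder/obj3", 7),
]
print(deletion_targets(listing, "myFolder/"))

# With the client library, the same loop would be (not run here):
#   from google.cloud import storage
#   client = storage.Client()
#   for blob in client.list_blobs("my-bucket", prefix="myFolder/", versions=True):
#       blob.delete()  # blob.generation is set, so this targets that version
```

This fits naturally in a scheduled Cloud Function, which sidesteps building REST calls by hand.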

adding arbitrary information to recorder

I'm running an optimization in OpenMDAO. One of the components in the model writes a few files to a directory which is given a random name. I track the progress of the optimization using a SqliteRecorder. I would like to be able to correlate iterations in the sqlite database to the directories of each evaluation.
Is there a way to attach arbitrary information to a recorder - in this case, the directory name?
I suggest that you add a string-typed output to the component and set it to the folder name. Then the recorder will capture it.

Can Graphite (whisper) metrics be renamed?

I'm preparing to refactor some Graphite metric names, and would like to be able to preserve the historical data. Can the .wsp files be renamed (and possibly moved to new directories if the higher level components change)?
Example: group.subgroup1.metric is stored as:
/opt/graphite/storage/whisper/group/subgroup1/metric.wsp
Can I simply stop loading data and move metric.wsp to metricnew.wsp?
Can I move metric.wsp to whisper/group/subgroup2/metric.wsp?
Yes.
The storage architecture is pretty flexible. Rename, move, and delete away; just make sure you update your storage-schemas and aggregation settings for the new location/pattern.
More advanced use cases, like merging into existing Whisper files, can get tricky, but they can also be done with the help of the included scripts. The project's README gives an overview of the Whisper scripts; check it out:
https://github.com/graphite-project/whisper
That said, it sounds like you don't already have existing data in the new target location so you can just move them.
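A minimal sketch of both moves, run here against a throwaway directory tree rather than a live install (stop carbon-cache before doing this for real, so no points are written mid-move):

```shell
# Stand-in for /opt/graphite/storage/whisper
WHISPER=$(mktemp -d)
mkdir -p "$WHISPER/group/subgroup1"
touch "$WHISPER/group/subgroup1/metric.wsp"

# Rename a metric in place:
mv "$WHISPER/group/subgroup1/metric.wsp" "$WHISPER/group/subgroup1/metricnew.wsp"

# Move it elsewhere in the hierarchy (create the directory first):
mkdir -p "$WHISPER/group/subgroup2"
mv "$WHISPER/group/subgroup1/metricnew.wsp" "$WHISPER/group/subgroup2/metric.wsp"
```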

One-to-one correspondence to files in Unix (log files)

I am writing a Log Unifier program. That is, I have a system that produces logs:
my.log, my.log.1, my.log.2, my.log.3...
On each iteration I want to store the number of lines I've read from a certain file, so that on the next iteration I can continue reading from that place.
The problem is that when the files are full, they roll:
The last log is deleted
...
my.log.2 becomes my.log.3
my.log.1 becomes my.log.2
my.log becomes my.log.1
and a new my.log is created
I can of course keep track of them using inodes, which are almost a one-to-one correspondence to files.
I say "almost" because I fear the following scenario:
Between two of my iterations, some files are deleted (say the logging is very fast), and then new files are created, some of which reuse the inodes of the files just deleted. The problem is that I will then mistake these new files for old ones and start reading from line 500 (for example) instead of 0.
So I am hoping to find a way to solve this. Here are a few directions I thought about that may help you help me:
Another one-to-one correspondence to files, other than inodes.
An ability to mark a file. I thought about using chmod +x to mark a file as an existing one, so that new files without that permission bit would be known to be new; but if somebody changed the permissions manually, that would confuse my program. So any other way to mark would help.
Creating soft links to each file that are deleted when the file is deleted, which would let me know which files got deleted.
Any way to get the "creation date".
Any idea that comes to mind, maybe using timestamps (atime, ctime, mtime) in some clever way, will be good, as long as it lets me know which files are new, or gives me a one-to-one correspondence to files.
Thank you
I can think of a few alternatives:
Use POSIX extended attributes to store metadata about each log file that your program can use for its operation.
It should be a safe assumption that the contents of old log files are not modified after being archived, i.e. after my.log becomes my.log.1. You could generate a hash for each file (e.g. SHA-256) to uniquely identify it.
All decent log formats embed a timestamp in each entry. You could use the timestamp of the first entry - or even the whole entry itself - in the file for identification purposes. Log files are usually rolled on a periodic basis, which would ensure a different starting timestamp for each file.
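A sketch of the hashing idea, assuming the rolled copies' leading bytes are never modified after archiving (the function name and byte count are illustrative choices):

```python
import hashlib

def log_identity(path, nbytes=4096):
    """Identify a log file by a SHA-256 of its first bytes.

    A rolled copy (my.log -> my.log.1) keeps the same leading content, so
    its identity survives the rename; a freshly created my.log starts with
    different content and gets a new identity, even if it reuses an inode.
    """
    with open(path, "rb") as f:
        head = f.read(nbytes)
    return hashlib.sha256(head).hexdigest()
```

The unifier can then keep a map of identity to lines-read, instead of keying on inode numbers.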

How to keep the sequence files created by map in Hadoop

I'm using Hadoop, working with a map task that creates files I want to keep. Currently I pass these files through the collector to the reduce task, and the reduce task then passes them on to its own collector, which lets me retain them.
My question is: how do I reliably and efficiently keep the files created by map?
I know I can turn off the automatic deletion of map output, but that is frowned upon. Are there any better approaches?
You could split it up into two jobs.
First, create a map-only job that outputs the sequence files you want.
Then take your existing job (its map now doing nothing, though you could still do some crunching there depending on your implementation and use cases) and reduce as you do now, using the output of the map-only job as the input to this second job.
You can wrap this all up in one jar that runs the two jobs in sequence, passing the first job's output path as the second job's input path.
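A sketch of that pipeline (paths and job names are hypothetical):

```text
job 1 (map-only; set the number of reduce tasks to 0):
    input:   /raw-input
    map:     emit the sequence files you want to keep
    output:  /stage              <- preserved as normal job output

job 2 (your existing job, with a pass-through map):
    input:   /stage
    reduce:  as before
    output:  /final-output
```

Because the sequence files are now the official output of job 1, they survive without touching any deletion settings.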
