Is it possible to link a huge library with small memory? - qt

I've tried several times to cross-compile Qt 4.8.2 with mingw32 4.7 on my poor little Linux box, which only has 2 GB of memory, and every attempt failed because of the memory limitation. The MinGW ld just keeps swallowing memory until its belly explodes. I just want to ask whether it is even possible to link such a big-ass lib within such a small amount of memory. If it's definitely a no-go, I'll have to resort to some other approach. Thanks in advance.
~~~~~
Haha! Finally, I found the answer. I just needed to temporarily increase my swap space by creating a swap file on the hard drive. The specific steps are shown below:
sudo dd if=/dev/zero of=/mnt/swapfile bs=1M count=2048 #create a 2GB empty file
sudo mkswap /mnt/swapfile #format it as a swapfile
sudo swapon /mnt/swapfile #turn on the newly created swap
#... build your big-ass package ...
sudo swapoff /mnt/swapfile #turn off the swap; swapping falls back to the swap partition you originally set up on your HD
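Before kicking off the build, it's worth double-checking that the new swap is actually active; a quick sanity check (not Qt-specific) could be:
free -m #total swap should now be roughly 2048 MB larger
swapon -s #the new swapfile should be listed alongside any existing swap partition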
Happy hacking! ;)

Related

rsync with --fake-super not preserving owner after restore - Monterey/Synology DS920+/rsync 3

Working through debugging a backup/restore script on:
Mac Studio M1 / macOS Monterey <-> Synology DS920+
On the Mac, I've installed Homebrew rsync 3.2.4.
On the Synology, I'm running what it shipped with: rsync 3.1.2.
For debugging, I used /Volumes/Recovery, which has files with owner set to root and group set to wheel.
src="/Volumes/Recovery/"
dest="$userID#$remoteIP::NetBackup/MacStudio1/Volumes/Recovery/
restore="/tmp/RestoreBackup/"
userID has admin privileges on the NAS.
rsync services are enabled on the NAS.
user directories are enabled on the NAS.
Backup:
rsync -ahX --delete -M--fake-super $src $dest
Restore:
rsync -ahX --delete -M--fake-super $dest $restore
It all seems to work without error. Files are restored as expected, except that I'm seeing the files have their owner set to my ID.
For example, ls -laR shows (abridged):
/Volumes/Recovery/E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 root wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
/tmp/RestoreBackup//E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 myID wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
I've looked at the rsync man (more than once) and I see words like "To affect the remote side of a remote-shell connection...".
However, I'm not sure how to apply that to a backup or a restore.
Do I want to affect the remote side on the backup?
Do I want to affect the remote side on the restore?
Any guidance on what I should set the options to?
So looks like I'm not getting any responses. Guess I'll wrap this up with my observations.
In testing I've done on a user directory (with test data files), the rsync is working to save and restore files with extended attributes (I verified they got set and that they matched on restore). So I think the overall switches on the rsync commands are correct.
Backing up and restoring the "Recovery" volume has the following issues:
(1) All regular files have the wrong owner. The groups look correct.
(2) The one linked directory has the wrong owner and the wrong group.
I believe problem (1) is caused because I need to use sudo rsync on the restore. I'm guessing that the files that are backed up have the correct owner/group in metadata, but the restore doesn't have the authority to set the owner to 'root'. I tried using sudo briefly and it died with some errors I didn't quite understand; I believe I need to set up the /etc/sudoers file with some information. Problem (2) may partially go away if I fix (1), or it may need some additional rsync flags to do with linked files and directories.
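For reference, a minimal sketch of what that restore might look like (untested; the Homebrew rsync path and the sudoers rule are assumptions, not taken from the original setup):
sudo rsync -ahX --delete -M--fake-super $dest $restore #run the local, receiving side as root so it is allowed to chown files back to root:wheel
# an optional passwordless rule could be added via visudo (path assumed for Homebrew on an M1 Mac):
# myID ALL=(root) NOPASSWD: /opt/homebrew/bin/rsync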
Overall, my backup script is working, but I'm now starting to question whether I know enough about what to back up on macOS. A rather lengthy article by the CCC folks seems to explain this, but it leaves me feeling I don't know enough about macOS data structures, and it seems some of this may change over time as new versions are released. I had started with the idea of just backing up everything under /* (Macintosh HD), and perhaps this would work, though there are at least some things that need to be excluded (like /Volumes/* and perhaps /tmp/*). I also noticed that there is a /System tree that doesn't show up with ls /* which the CCC folks say to leave alone. So I don't exactly have a good feeling that I understand what I need to know.
So for the moment I'm going to sideline this effort. I've got Time Machine running to my NAS, and I need to get the NAS backed up to a cloud first. My fallback positions are either (1) to just be dependent on Time Machine only, (2) to buy and use CCC as a secondary backup, or (3) to create a secondary backup of just my user directories - which would require reinstalling any 3rd party software in the event that I can't recover with Time Machine.

Data loss under Linux

I work under Linux Debian Buster.
This morning I was working as usual when my PC crashed. I forced it to shut down, and when I restarted, it dropped me into the initramfs terminal (if I'm not mistaken), inviting me to run an fsck.
This is not the first time this has happened to me. I usually run fsck -y /dev/sda1 and then fsck -y /dev/sda3 for my root and home partitions.
But this morning, after the crash, when I did that, it quickly scrolled through several messages, and that worried me. At the end I restarted my PC, and voilà: I can no longer find my work folder. In fact, I had a folder containing two other folders; now only one folder is still visible. All of my shortcuts to the missing folder no longer work.
When I run df -h, the reported usage suggests the data is still there, but it is impossible to see it. It is not in /lost+found.
I did a global search in my home, and found nothing.
I can no longer work; all my work was there. I have a one-month-old backup, but still.
If you really, really have a solution, please help; I'm desperate.
My disk has 4 partitions: 3 for Linux and one NTFS.
Thank you
Too happy!
I found my data.
What gave me 0.5% of hope (I must admit I was at the edge of the window with the PC, looking for my data one last time) was the size of my partition. When I right-click on home and look at the size, I see 31 GB, while
df -h
returns:
/dev/sda3 192G 95G 87G 53% /home
That is 95 GB used, compared to the 31 GB above, so where are the missing 60-odd GB?
Before the problem, my home was around 95 GB.
It is true that several messages containing the word inode (or node, I don't remember) along with numbers scrolled by quickly during the fsck -y.
Someone suggested I take a look at /home/lost+found, and when I did, I saw nothing. But when I logged in as root in a terminal, ran "cd /home" and "ls lost+found", I saw entries like #13032, #13036, #1181667, and a folder with the number #4703. So I ran "chmod 777 -R lost+found" in order to be able to access it from my own (regular user) account. Once the command had finished, after a few minutes, I opened /home/lost+found with the "nemo" file explorer and TADAM, all my data was there.
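For anyone in the same situation, the steps above boil down to something like this (run as root; the partition and mount point match this particular setup):
cd /home
ls lost+found #entries recovered by fsck show up as #<inode-number>
chmod -R 777 lost+found #crude but effective; chowning the entries to your own user would be cleaner
# then browse /home/lost+found with a regular file manager and copy the data back out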
I’ve made SEVERAL BACKUPS and vowed not to blindly trust fsck -y anymore; even though it’s a great tool, I’ll use it with caution.
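For the record, a more cautious workflow (a sketch, assuming the filesystem is unmounted, e.g. from the initramfs shell) is to preview the damage before letting fsck change anything:
fsck -n /dev/sda3 #report problems only, fix nothing
fsck /dev/sda3 #then repair interactively, answering each question instead of passing -y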

cvxopt uses just one core, need to run on all / some

I call cvxopt.glpk.ilp in Python 3.6.6, cvxopt==1.2.3 for a boolean optimization problem with about 500k boolean variables. It is solved in 1.5 hours, but it seems to run on just one core! How can I make it run on all or a specific set of cores?
The server with Linux Ubuntu x86_64 has 16 or 32 physical cores. My process affinity is 64 cores (I assume due to hyperthreading).
> grep ^cpu\\scores /proc/cpuinfo | uniq
16
> grep -c ^processor /proc/cpuinfo
64
> taskset -cp <PID>
pid <PID> current affinity list: 0-63
However top shows only 100% CPU for my process, and htop shows that only one core is 100% busy (some others are slightly loaded presumably by other users).
I set OMP_NUM_THREADS=32 and started my program again, but still one core. It's a bit difficult to restart the server itself. I don't have root access to the server.
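One way to confirm whether the solver process is really limited to a single thread (a quick check on Linux, assuming the PID is known):
ps -o nlwp= -p <PID> #number of threads in the process; 1 means the solver itself is single-threaded
top -H -p <PID> #per-thread view; watch whether more than one thread gets significant CPU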
I installed cvxopt from a company's internal repo which should be a mirror of PyPI. The following libs are installed in /usr/lib: liblapack, liblapack_atlas, libopenblas, libblas, libcblas, libatlas.
Here some SO user writes that GLPK is not multithreaded. This is the solver used by default, as cvxopt has no MIP solver of its own.
As cvxopt only supports GLPK as an open-source mixed-integer programming solver, you are out of luck.
Alternatively you can use CoinOR's Cbc, which is usually a much better solver than GLPK while still being open-source. This one can also be compiled with parallelization support. See some benchmarks, which also indicate that GLPK really has no parallel support.
But as there is no support for it in cvxopt, you will need some alternative access point (a pip-based installation sketch is given below):
own C/C++ based wrapper
pulp (binary install available)
python-mip (binary install available)
Google's ortools (binary install available)
cylp
cvxpy + cylp (binary install available for cvxpy; without cylp build)
These options have very different modelling styles (from completely low-level, cylp, to very high-level, cvxpy), and I'm not sure whether all of those builds are compiled with enable-parallel (which is needed when compiling Cbc).
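A minimal sketch of getting one of the binary-install options onto a box without root access (package names as on PyPI; --user keeps everything under the home directory):
pip install --user mip #python-mip, ships with a pre-built Cbc
pip install --user pulp #PuLP also bundles a CBC binary
pip install --user ortools #Google's OR-Tools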
Furthermore: don't expect too much gain from multithreading. It's usually way worse than linear speedup (as for all combinatorial-optimization problems which are not based on brute-force).
(Imho the GIL does not matter as all those are C-extensions where the GIL is not in the way)

Extract files from Chrome OS / Chromebook recovery image

My Problem: I am trying to get hold of the official Chrome WideVine CDM plugin for an ARM architecture.
My Understanding So Far: Given that ARM-based Chromebooks can stream Netflix (and Netflix uses the WideVine CDM plugin), I am led to believe a Chrome OS installation should contain the files I'm after. As I don't have access to an ARM-based Chromebook, my next best option is a Chromebook recovery image.
Where I'm up to: I have downloaded an HP Chromebook 11 recovery image, chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin, from here (the HP Chromebook 11 is ARM-based).
What I'd like to do next: Extract two files from the recovery image.
Note: I don't have access to an ARM based Chromebook to just copy the files from :/
Does anyone know how I could do such a thing?
The .bin file is just a disk image that contains many partitions. You can "load" the image by running sudo kpartx -av chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin (the -v is for verbose mode). This will load 12 partitions (from /dev/mapper/loop0p1 to /dev/mapper/loop0p12) and make them available for mounting, and you should see some additional drives in your file manager.
In this case, the partition you're looking for is labelled ROOT-A, and corresponds to the third partition (/dev/mapper/loop0p3). For some reason, opening it in my file manager directly didn't work, so I had to mount it manually by running sudo mount -t ext2 /dev/mapper/loop0p3 -o ro /media/saikrishna/chromeos/. This will mount the ext2 partition in read-only mode in the /media/saikrishna/chromeos directory (change the last part to an existing empty directory on your system).
To remove the mappings, run sudo kpartx -dv chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin. If that doesn't print out anything (which was the case for me), run sudo kpartx -dv /dev/loop0.
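Once ROOT-A is mounted, locating the CDM files could be as simple as a case-insensitive search (the exact file names inside the image aren't guaranteed, hence the wildcard):
sudo find /media/saikrishna/chromeos -iname '*widevine*' #copy the matches out before unmounting
sudo umount /media/saikrishna/chromeos #unmount before removing the kpartx mappings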

Rsync freezing mid transfer with no warning

Intermittently, I'll run into an issue where my rsync script will simply freeze mid-transfer. This freeze may occur while downloading a file, or amidst listing up-to-date files.
I'm running this on my Mac; here's the command:
rsync -vvhrtplHP -e "ssh" --rsync-path="sudo rsync" --filter=". $FILTER" --delete --delete-excluded --log-file="$BACKUP/log" --link-dest="$BACKUP/current/" $CONNECT:$BASE $BACKUP/$DATE/
For example, the console will output the download progress of a file, and stop at an arbitrary percentage and speed. The log doesn't even list the file (probably because it's incomplete).
I'll try numerous attempts and it'll freeze on different files or steps with no rhyme or reason. Terminal will show the loading icon while it's working, the output will freeze, and after a few seconds the loading icon vanishes.
Any ideas what could be causing this? I'm using rsync 3.1.0 on Mavericks. Could it be a connectivity issue or a system max execution time issue?
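One thing worth ruling out first is a silently dropped SSH connection; a tentative tweak of the command above (the interval and timeout values are arbitrary) keeps the session alive and makes rsync give up instead of hanging forever:
rsync -vvhrtplHP -e "ssh -o ServerAliveInterval=15" --timeout=300 --rsync-path="sudo rsync" --filter=". $FILTER" --delete --delete-excluded --log-file="$BACKUP/log" --link-dest="$BACKUP/current/" $CONNECT:$BASE $BACKUP/$DATE/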
I have had rsync freezes in the past and I recall reading somewhere that it may have to do with rsync having to look for files to link, something increasingly difficult as you accumulate backup over backup. I suggest you skip the --link-dest in the next backup if your disk space allows it (to break the chain, so to speak).
As mentioned in https://serverfault.com/a/207693, you could use the hardlink command afterwards to win the space back; I haven't tried it yet.
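A sketch of that approach (the hardlink invocation is an assumption based on the linked answer, not something verified on macOS; on Linux the tool ships in the util-linux or hardlink package):
# 1) break the chain: run the next backup without --link-dest (a full copy, so it needs the disk space)
rsync -vvhrtplHP -e "ssh" --rsync-path="sudo rsync" --filter=". $FILTER" --delete --delete-excluded --log-file="$BACKUP/log" $CONNECT:$BASE $BACKUP/$DATE/
# 2) then deduplicate the new snapshot against the previous one after the fact
hardlink -v "$BACKUP/current" "$BACKUP/$DATE"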
I just had a similar problem while rsyncing from a hard disk to a FAT32 USB drive. In my case rsync froze in less than a second and did not react at all after that.
It turned out the problem was a combination of hard links being used on the hard disk and the FAT32 filesystem on the USB drive, which does not support hard links.
Formatting the USB drive with ext4 solved the problem for me.
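To check for the same situation, something along these lines should do (the device name is a placeholder; reformatting erases everything on the stick):
lsblk -f #check the filesystem type of the USB partition (vfat means FAT32)
sudo mkfs.ext4 /dev/sdX1 #reformat to ext4; replace sdX1 with the actual partition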
