.customize_environment failing for previously working apt-get update in cloud shell - google-cloud-shell

.customize_environment was failing and forcing me to boot as root (safe mode).
I've selectively recreated the script, but even a simple apt-get update gets a bunch of errors like:
Err:1 http://packages.cloud.google.com/apt gcsfuse-buster InRelease Temporary failure resolving 'packages.cloud.google.com'
Not sure if this is the root cause; I'm ultimately trying to run:
apt-get install -y libxss1
apt-get install -y libgbm-dev
apt-get install -y parallel
In the meantime I'm having to manually run a whole bunch of installs, which is getting pretty repetitive.

Normally, customize_environment scripts cannot cause shell startup to fail. Unfortunately, there was a bug that made this possible. The release containing the fix should be out in the next few days.
Would you mind trying to create the script again in a few days and letting me know if it works?
Thanks

Related

Trying to run mariadb using homebrew and receiving the following error Bootstrap failed: 5: Input/output error

When I enter "brew services start mariadb" on the command line I receive the following error -
Bootstrap failed: 5: Input/output error
Try re-running the command as root for richer errors.
Error: Failure while executing; /bin/launchctl bootstrap gui/501 /Users/jordanjohnston/Library/LaunchAgents/homebrew.mxcl.mariadb.plist exited with 5.
I've seen folks having the same error and have tried entering -
"launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mariadb.plist"
followed by -
"launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mariadb.plist"
which does nothing for me; I still receive the same error. I have also entered "brew restart mariadb", which does not work either, and I have uninstalled and reinstalled mariadb, which did not help. Thank you in advance for any help!
I also tried the same steps you described, with no luck. I finally ended up reinstalling Homebrew:
1. Make any pending updates for Xcode. (It might also ask you to update the developer tools later, in step 4.)
2. Remove Homebrew. Make sure to have a backup of anything you want to save; for me, that was the MariaDB and PostgreSQL databases. I also kept track of any important packages I wanted to reinstall later:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)"
3. Remove any other leftover directories, as Homebrew suggests:
sudo rm -r /opt/homebrew
4. Reinstall Homebrew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
5. Install MariaDB:
brew install mariadb
6. Start MariaDB:
brew services start mariadb
7. Check that all is good:
brew services list
You should see something like this:

Centos7: Can't install nginx (or epel)

I have a clean install of Centos 7 on a RaspberryPi3b+. I am trying to install nginx and am running into problems with each approach.
Most of the research I've done points to installing epel, and then installing nginx. When I run yum install epel-release, I get the error:
No package epel-release available.
Error: Nothing to do.
Some searching led me to wget it directly from Fedora, which I was able to do. I then ran rpm -ivh epel-release-latest-7.noarch.rpm successfully and tried yum install nginx. That gave me this long error:
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot retrieve metalink for repository: epel/armhfp. Please verify its path and try again
So, I found another method that doesn't require epel. I created a .repo file for nginx at /etc/yum.repos.d/nginx.repo, and added:
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
I ran yum repolist and got an error:
http://nginx.org/packages/centos/7/armhfp/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
For laughs, I tried installing nginx anyway and got an error similar to the long one above, saying the nginx repo failed.
Finally, I tried going to nginx.org and finding the correct link and hard-coding it in the repo file. That didn't work either, and now I am well and truly stuck.

Symlink lost within Docker image

I am defining a Dockerfile where I install sqlite3 in an Ubuntu-based image, something very similar to this (I also install grpc and Rust, as well as all the necessary dependencies):
FROM ubuntu
RUN apt-get update && \
    apt-get install -y sqlite3 libsqlite3-dev && \
    apt-get clean && \
    apt-get autoremove
I use this image to build my Rust project within it. The issue I am facing is that cargo build fails on my GitLab CI due to a linking issue:
Compiling migrations_macros v1.4.0
error: linking with `cc` failed: exit code: 1
...
= note: /usr/bin/ld: cannot find -lsqlite3
I found out that this is due to this symlink not being present in the Docker image that is running on CI:
libsqlite3.so -> /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
while the file libsqlite3.so.0.8.6 exists. So if I create the symlink during the CI job (see the command below), I have a working workaround. The weird thing is that if I pull the exact same image from my registry on my PC and run the container, I can build without any issue or change, because the symlink is actually there.
What could be the cause of the problem, and how can I solve it?
After quite a bit of thinking, the following ideas come to mind that could help.
Docker history
Docker has a built-in feature to view the history of a built image, which gives you the option to identify the problematic command in the Dockerfile.
docker history <image id or name>
For more visual filtering I recommend the dive tool, but others are also available via Google.
Correct Docker version
Since the two Docker instances in this scenario behave differently, the obvious question is: are they on the same version of the Docker daemon and the same filesystem (storage) driver?

Atom on Raspbian?

Very nooby question, but I'm trying to install the Atom text editor on Raspbian Stretch. Is it possible? I've heard that because it runs on Electron, it's quite slow on Raspbian. I keep getting an error saying:
E: Unable to locate package atom
I'm following the official instructions for Debian. How can I fix this?
As of today you can't install the official package provided for Debian, because it doesn't match the hardware platform. The provided binary is built for x86 hardware, but the RPi doesn't come with an Intel/AMD processor; it uses ARM. So you most probably need to build it from source yourself.
Primer
So, if you really want to build this from source, you should be aware of the waste of disk space caused by the (IMHO poorly implemented) build tool, which downloads tons of deps and copies and transpiles code around, so you'll end up with 2GB+ of files, with 80% accounted for by dependencies alone. Since my RPi works with an 8GB SD card only, I could never meet the need for disk space, even though I was bleeding Linux dry by manually removing docs, manpages, locales, tons of outdated and mostly unused apps, etc. The build also requires a whole build toolchain and tons of dev packages for libraries, so there is a limit to how far you can milk the system ... an 8GB disk drive simply isn't enough for this.
Eventually I tried moving all the files to a USB pen drive. But that drive must be formatted with a filesystem capable of symlinking, so you can't use vfat or FAT32. I didn't succeed in getting a 16GB stick formatted with any version of extfs: mkfs always ended up in a deadlock when trying to write its superblocks. Astonishingly, I couldn't even kill the mkfs with -KILL, but unplugging the drive did help in that case.
So, as a conclusion: here is a short list of the steps I went through expecting to get this working, though in the end I didn't finish due to the disk-space issues above. And frankly, I stopped caring ... I'd rather work with nano/vi in a terminal than use this ridiculous Lego-like piece of software. I guess Atom is today's version of Emacs with regard to the latter's acronym. Maybe you'll succeed with this, but I won't ...
Build from Source
Inspired by https://discuss.atom.io/t/atom-on-the-raspberry-pi/33332
Install toolchain for building native stuff
sudo apt-get install build-essential git libgnome-keyring-dev fakeroot gconf2 gconf-service libgtk2.0-0 libudev1 libgcrypt20 python rpm libsecret-1-dev xorg-dev
This set of tools was sufficient to build the core files without error. Since I didn't start with a fresh installation of Raspbian, there might have been some tool I had been using before, so in your case there may be more tools to install here. Look out for error messages in the early stage of the build and check whether some library or header file isn't found. This mostly indicates that some package with a name ending in -dev needs to be installed, too. Start by searching for the package using apt search <name-of-mentioned-library> and look for a package combining the missing library's name with the suffix -dev, then install it the usual way by invoking sudo apt-get install <package-name>, as sketched below.
Install up-to-date nodejs
Raspbian Stretch comes with support for Node.js 8.11, which is basically okay. Install it and its package manager npm using this command:
sudo apt-get install nodejs npm
Check installed versions with
node -v
npm -v
This should display 8.x.x for Node.js. Use n afterwards if you want to step up:
sudo npm i -g n
sudo n lts
This will switch Node.js to the latest LTS release, which is 10.x as of now. Upgrading Node.js is optional, but you are well advised to always use the latest version of npm:
sudo npm i -g npm
Check that the upgrades succeeded:
node -v
npm -v
Adjust the configuration of npm and install an essential dep:
sudo npm config set -g python /usr/bin/python2
sudo npm i -g node-gyp
Build Atom
Get the source. One option is to pull the latest code from its repository:
git clone https://github.com/atom/atom.git
This creates a subfolder atom containing all the source files. You might want to download the sources of a recent release instead, but this tutorial was made with the sources fetched from GitHub. Either way, make sure there is a subfolder called atom containing sources similar to the ones fetched above.
It's time to start the beast:
cd atom
./script/build
This process will take a while. And it is the culprit that never finished successfully in my case, due to eating up all my disk space over and over again.
Whenever the script fails with an error, try to analyze the error, find the cause, fix it, then start the script again by repeating the last command above. As long as you don't remove any files in subfolder atom in between, the build script skips the build steps it has already passed successfully.
Install atom
According to the original tutorial linked above, the script should eventually finish successfully. Then it's time to install with:
./script/grunt install
I guess this makes atom available as a command from the CLI, so try it out (a quick smoke test is sketched below). If everything looks fine, you are finally ready to remove the pile of files in subfolder atom.
Feel free to report if this was working in your case.
From what I recall, Atom requires a 64-bit architecture, so you'd need the latest Raspberry Pi. Run the following:
wget https://atom.io/download/deb && sudo dpkg -i deb

Enabling rtmp_swfurl option in FFmpeg

On Unix, in order to process RTMP live-stream URLs, some live streams need extra parameters such as swfUrl and pageUrl. We are able to pass these parameters to rtmpdump and pipe its output to FFmpeg.
However, some FFmpeg builds have rtmp_swfurl built in. How can we enable these options in FFmpeg? Not all builds have them; although I have the latest version, I do not have them.
I solved the problem by first removing libav-tools and then installing ffmpeg manually, after downloading the latest version from its website. When I tried to install ffmpeg without removing libav-tools it did not work, hence you should remove libav-tools, install ffmpeg, and then install libav-tools again:
sudo apt-get --purge remove libav-tools
sudo apt-get --purge autoremove
