How to root your own AOSP build?

We have root access to our AOSP build when we use:
lunch smarc_mx8mq-eng
How can we get a root shell after using:
lunch smarc_mx8mq-user
This can be any form of temporary hack to the AOSP source, but the build must remain a user build.
I've read about / being mounted with nosuid, but that doesn't seem to be the case for us:
smarc_mx8mq:/ # mount
/dev/block/mmcblk1p5 on / type ext4 (ro,seclabel,relatime,block_validity,delalloc,barrier,user_xattr,acl,inode_readahead_blks=8)
(I will check this again once I have built a user build.)
What's the best way of rooting our own AOSP build?

The solution was to flash the boot-debug.img into the boot partition.
$ sudo fastboot flash boot boot-debug.img
target reported max download size of 419430400 bytes
sending 'boot_a' (43252 KB)...
OKAY [ 2.270s]
writing 'boot_a'...
OKAY [ 2.163s]
finished. total time: 4.433s
...
$ adb shell
smarc_mx8mq:/ $ exit
$ adb root
restarting adbd as root
$ adb shell
smarc_mx8mq:/ #
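
For completeness, a hedged sanity check after flashing; the adb root step is what the transcript above already shows, while the disable-verity/remount step is only needed if you also want a writable /system (and only works if the device allows it):
$ adb root && adb wait-for-device
$ adb shell id                      # expect uid=0(root)
$ adb disable-verity && adb reboot  # optional: make /system writable
$ adb wait-for-device && adb root && adb remount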

Related

How to unmount gphoto2 device as root

As a regular user I can see my mounted camera with gio mount,
user@localhost $ gio mount -l
Volume(0): NIKON DSC D3200
Type: GProxyVolume (GProxyVolumeMonitorGPhoto2)
Mount(0): NIKON DSC D3200 -> gphoto2://%5Busb%3A002,007%5D/
Type: GProxyShadowMount (GProxyVolumeMonitorGPhoto2)
Mount(1): NIKON DSC D3200 -> gphoto2://%5Busb%3A002,007%5D/
Type: GDaemonMount
But after switching to root, it becomes invisible:
root@localhost $ gio mount -l
Volume(0): Filesystem root
Type: GUnixVolume
Mount(0): Filesystem root -> file:///
Type: GUnixMount
So when running a script as root, I cannot unmount the camera with the following command:
gio mount -s gphoto2
This is because GIO uses a different backend for enumerating mounts when running as root: the GVFS daemons which provide (for example) gphoto2 support run in a user session (on the D-Bus session bus) rather than system-wide, so root can't talk to them.
Run your script as non-root, or you’ll have to do some plumbing to give your script explicit access to your D-Bus session bus (but then it’ll only work when your user session is active).
You shouldn’t need root privileges to list or unmount GIO mounts: permission for that is controlled by polkit, and you should get an authorisation prompt for it if it’s not allowed by default.
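If you do need to run the script as root, one possible form of that plumbing is to drop back to the desktop user and point gio at that user's session bus; a minimal sketch, assuming the desktop user is "user" with UID 1000 (both are placeholders), and it only works while that user's session and its GVFS daemons are running:
sudo -u user DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus gio mount -s gphoto2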

Is the editor Atom able to open projects on a remote server?

Atom is able to open a project, and to show the whole tree of the project on the left side, a really nice feature.
Now I'm using SSH on the host OS to access a guest OS (say Red Hat Enterprise Linux, RHEL) on VirtualBox. Is there a way for Atom, located on the host OS, to open a project located on RHEL?
Well yes there is!
You just need to configure sshfs, optionally with autofs. Then you can access the files as if they are stored locally. I've used this with Atom and it works seamlessly.
Instructions for Ubuntu
Install sshfs
$ sudo apt-get install sshfs
Mount the remote directory on a local mountpoint
$ sshfs [user@]host:[dir] mountpoint
Combining it with autofs
The following link has instructions for a setup using autofs.
Note: This requires you to set up SSH for the root user.
http://www.mccambridge.org/blog/2007/05/totally-seamless-sshfs-under-linux-using-fuse-and-autofs/
In addition to that post, I've added some tricks for an even more seamless experience.
Enhance performance
I've noticed a significant performance boost by adding this SSH config to /root/.ssh/config:
Ciphers arcfour
Compression no
Note: This does make the connection less secure.
Make it appear as a disk
If you set the mount point to a directory in /media, the mount point will show up as a disk in your file browser. For example /media/sshfs.
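Putting these pieces together, a hedged example (the host name and paths are placeholders for your own setup):
$ sudo mkdir /media/sshfs && sudo chown $USER /media/sshfs
$ sshfs -o reconnect user@devhost:/home/user/project /media/sshfs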
I would recommend the Remote sync plugin for this. I have a Python environment set up on a Linux box and I connect to it from my PC.
It allows me to upload changes automatically when I save a file and also to define files to be monitored for changes.
Not 100% what you're looking for, but there's the Remote-Edit package: https://atom.io/packages/remote-edit
This will allow you to define the connection parameters for the server, and will then allow you to browse and edit the files found on the server.
Complement to Remco's sshfs answer above:
If you use different users in the client and server hosts, consider using the 'idmap' option of sshfs.
I use different users in my working host and in the development or testing VMs.
Example:
Using the option '-o idmap=user' will automatically translate the UID/GID of the remote host to the UID/GID of the connecting user on the local host.
Files owned by the remote user (devuser) on the remote host (devhost1) will appear as belonging to the connecting user (locuser) on the local host (clienthost):
locuser@clienthost:~$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=user
locuser@clienthost:~$ ls -lR ~/dev/www
(...)
-rw-rw-r-- 1 locuser locuser 269 Apr 1 11:37 index.html
-rw-rw-r-- 1 locuser locuser 249 Apr 3 03:59 page1.html
-rw-rw-r-- 1 locuser locuser 1118 Apr 2 15:07 page2.html
-rw-rw-r-- 1 locuser locuser 847 Apr 3 03:20 page3.html
(...)
The mapping can also be made explicit (userx <-> usery). For more details see man sshfs
I am writing this answer because none of the other answers worked for me.
Mounting as a directory and browsing with Atom (@Remco Haszing's answer) was a brilliant one,
but in my case Atom wants to index the whole remote project, which is a heavy one, and it stops responding.
Using the remote-sync package is fine when you work locally and then want to upload the files to the server.
Actually, remote-edit is the package meant to do this job (editing files remotely over SSH);
the problem is that it has been abandoned.
These helped me as replacements:
https://atom.io/packages/remote-edit-ni
https://atom.io/packages/remote-editor
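For reference, a hedged install sketch using Atom's package manager for the packages listed above:
$ apm install remote-edit-ni
$ apm install remote-editor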

Spark - No Space Left on device error

I am getting the below error. The Spark local dir (SPARK_LOCAL_DIRS) has been set and has enough space and inodes left.
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at org.apache.spark.storage.TimeTrackingOutputStream.write(TimeTrackingOutputStream.java:58)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
at org.xerial.snappy.SnappyOutputStream.dumpOutput(SnappyOutputStream.java:294)
at org.xerial.snappy.SnappyOutputStream.compressInput(SnappyOutputStream.java:306)
at org.xerial.snappy.SnappyOutputStream.rawWrite(SnappyOutputStream.java:245)
at org.xerial.snappy.SnappyOutputStream.write(SnappyOutputStream.java:107)
at org.apache.spark.io.SnappyOutputStreamWrapper.write(CompressionCodec.scala:190)
at org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:218)
at org.apache.spark.util.collection.ChainedBuffer.read(ChainedBuffer.scala:56)
at org.apache.spark.util.collection.PartitionedSerializedPairBuffer$$anon$2.writeNext(PartitionedSerializedPairBuffer.scala:137)
at org.apache.spark.util.collection.ExternalSorter.writePartitionedFile(ExternalSorter.scala:757)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:70)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
cat spark-env.sh |grep -i local
export SPARK_LOCAL_DIRS=/var/log/hadoop/spark
disk usage
df -h /var/log/hadoop/spark
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/meta 200G 1.1G 199G 1% /var/log/hadoop
inodes
df -i /var/log/hadoop/spark
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/meta 209711104 185 209710919 1% /var/log/hadoop
I also encountered the same issue. To resolve it, I first checked my hdfs disk usage by running hdfs dfsadmin -report.
The Non DFS Used column was above 250 GB. This implied that my logs or tmp or intermediate data was consuming too much space.
After running du -lh | grep G from the root folder, I figured out that spark/work was consuming over 200 GB.
After looking at the folders inside spark/work, I realized that I had by mistake forgotten to comment out a System.out.println statement, and hence the logs were consuming a lot of space.
If you're running YARN in yarn-cluster mode then the local dirs used by both Spark executors and driver will be taken from YARN config (yarn.nodemanager.local-dirs). spark.local.dir and your env variable will be ignored.
If you're running YARN in yarn-client mode, then the executors will again use the local dirs configured in the YARN config, but the driver will use the one you specified in your env variable, because in that mode the driver is not run on the YARN cluster.
So try setting that config.
You can find a bit more information in the documentation, and there's even a whole section on running Spark on YARN.
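A minimal sketch of what setting that config can look like; the paths and the application jar are placeholders:
# Driver side (yarn-client mode): point the driver's scratch space at a large volume.
$ spark-submit --master yarn --deploy-mode client \
    --conf spark.local.dir=/var/log/hadoop/spark \
    your_app.jar
# Executor side: yarn.nodemanager.local-dirs in yarn-site.xml on each node must point
# at directories with enough free space (changing it requires a NodeManager restart).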
Please check how many inodes are used by Hadoop. If they are all used up, you will get the same generic error, "no space left on device", even though there is still free space.

Nginx service not starting on Windows 10 - nginx: [alert] could not open error log file: CreateFile()

I have an Nginx service that's configured to start automatically on my Windows 10; however, this morning, the service wouldn't start.
The error log says: nginx: [alert] could not open error log file: CreateFile() "C:\someForlderName\build\distribution\.\nginx/logs/error.log" failed (3: The system cannot find the path specified)
Looking at the path in the error log above, I do NOT have the /logs/ folder on my local system so it looks like Nginx doesn't have the proper permissions to create that folder?
I'm set up as an admin user, and my service is set to Log On As - Local System Account.
This only happens on Windows 10; the service starts and works on older Windows versions, i.e. 8.1.
So does anyone know how to grant administrator permissions to Nginx so that it can create folders and files on Windows 10?
To install nginx on Windows, download the latest mainline version distribution (1.13.8), since the mainline branch of nginx contains all known fixes. Then unpack the distribution, go to the nginx-1.13.8 directory, and run nginx. Here is an example for the C: drive root directory (run cmd as administrator):
cd c:\
unzip nginx-1.13.8.zip
cd nginx-1.13.8
start nginx
Go to http://localhost:80 to test the install.
Go back to the cmd console and run: nginx -s stop
To run it the next time:
Configure it in the file "C:\nginx-1.13.8\conf\nginx.conf"
Open cmd as administrator
Run: cd C:\nginx-1.13.8
Run nginx with: start nginx. If you run it with just "nginx", you will have trouble exiting nginx.
The control commands are:
nginx -s stop #fast shutdown
nginx -s quit #graceful shutdown
nginx -s reload #changing configuration, starting new worker processes with a new configuration, graceful shutdown of old worker processes
nginx -s reopen #re-opening log files
Under the directory where you run nginx.exe, create a directory named logs and a file named error.log under logs.
That should get you past this error.
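A hedged sketch in cmd, assuming nginx was unpacked to C:\nginx-1.13.8:
cd C:\nginx-1.13.8
mkdir logs
type nul > logs\error.log
start nginx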
After downloading the zip file, you have to unzip it.
Make sure that you don't have nested folder names: copy the folder that contains the nginx.exe file and paste it into C:\.
While running commands like nginx -s stop, make sure that your current working directory is the same one that contains nginx.exe.
Nginx starts on port 80 by default, not 8080. Try localhost:80 in the browser.
If you want to change the port, open C:\nginx-1.16.1\conf\nginx.conf with a text editor
and change the port number to whatever you want to use instead of the default:
server {
listen 80;
server_name localhost;
to:
server {
listen 8080;
server_name localhost;
I had a similar issue with starting the nginx server, but after looking at it closely and trying to run the command in different consoles, I realized it was just a simple issue of a missing path.
How I solved it was to cd into the folder containing the nginx.exe file (which actually contains the error logs and all the necessary files) and then run the nginx command, which started the server and fixed it for me.

After configuring Nexus 3 SSL Nexus no longer runs without sudo

I had my new Nexus 3 repository running okay. I was able to configure some of the basic settings. Then I went through the process of enabling SSL. I used the instructions here. I also watched the video on that page, which does not give instructions that match the page.
My system info: Ubuntu 14.04 with Java 8.
Install directory: /opt/nexus-3.0.0-b2016011501/
To simplify the path, I created a link to this directory: nexus -> /opt/nexus-3.0.0-b2016011501/ therefore the path to nexus is /opt/nexus
I generated my keystore as follows:
Created directory: /opt/nexus/etc/ssl
Changed to that directory and ran: keytool -keystore keystore -alias jetty -genkey -keyalg RSA -validity 3650. This generated a file called keystore. I then copied that file to keystore.jks.
Updated the following files: in /opt/nexus/etc/org.sonatype.nexus.cfg I added application-port-ssl=443 and added ${karaf.etc}/jetty-https.xml (this is different from the written instructions) to the end of the nexus-args=$ line. Then (this is in the video, but not the written instructions) I edited the /opt/nexus/etc/jetty-https.xml file and replaced the password in three places with the password I specified when I generated my keystore.
After this, if I start Nexus with ./nexus run, I get the following error:
2016-01-27 02:20:41,013+0000 ERROR [jetty-main-1] *SYSTEM org.sonatype.nexus.bootstrap.jetty.JettyServer - Failed to start
java.net.SocketException: Permission denied
at sun.nio.ch.Net.bind0(Native Method) [na:1.8.0_72]
at sun.nio.ch.Net.bind(Net.java:433) [na:1.8.0_72]
at sun.nio.ch.Net.bind(Net.java:425) [na:1.8.0_72]
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) [na:1.8.0_72]
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) [na:1.8.0_72]
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:326) [org.eclipse.jetty.server:9.3.5.v20151012]
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80) [org.eclipse.jetty.server:9.3.5.v20151012]
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:244) [org.eclipse.jetty.server:9.3.5.v20151012]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [org.eclipse.jetty.util:9.3.5.v20151012]
at org.eclipse.jetty.server.Server.doStart(Server.java:384) [org.eclipse.jetty.server:9.3.5.v20151012]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [org.eclipse.jetty.util:9.3.5.v20151012]
at org.sonatype.nexus.bootstrap.jetty.JettyServer$JettyMainThread.run(JettyServer.java:274) [org.sonatype.nexus.bootstrap:3.0.0.b2016011501]
If I start it with sudo ./nexus run, it works but shows me the nag message saying I should not run it as root.
I have verified that my user is the owner of all the files and directories in /opt/nexus.
On Linux (and other Unix-type systems) you can't bind to port numbers below 1024 unless you are root. The best way to solve this is to run Nexus behind a reverse proxy. You can find instructions for this here:
http://books.sonatype.com/nexus-book/reference/install-sect-proxy.html
The above was written for Nexus 2.x, but the configuration needed will be the same in Nexus 3.
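If you'd rather not set up a full reverse proxy, a commonly used alternative (not from the linked instructions) is to keep Nexus on a high port as an unprivileged user and redirect port 443 to it; a hedged sketch, assuming application-port-ssl is set to 8443 instead of 443:
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443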
Regarding running as non-root as a service, there is a bug in 3.0m7 that makes this problematic:
https://issues.sonatype.org/browse/NEXUS-9437
The fix is to edit the "bin/nexus" startup script and replace this line:
INSTALL4J_JAVA_PREFIX="su - $run_as_user -c"
With this:
exec su - $run_as_user "$prg_dir/$progname" $@
This fix will be in the next release.
Once that change is made, symlink $NEXUS_HOME/bin/nexus to /etc/init.d/nexus and install the service. Then edit "$NEXUS_HOME/bin/nexus.rc" and set "run_as_user" appropriately.
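A hedged sketch of that service setup on Ubuntu 14.04 with sysvinit; the /opt/nexus path comes from the question above, and the "nexus" run_as_user is an assumed example:
$ sudo ln -s /opt/nexus/bin/nexus /etc/init.d/nexus
$ sudo update-rc.d nexus defaults
# then edit /opt/nexus/bin/nexus.rc and set:  run_as_user="nexus"
$ sudo service nexus start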
