Wi-Fi disconnected during crouton installation - Chromebook

I purchased an Asus C300M with the sole aim of developing my Linux skills.
I followed the instructions to boot into developer mode and execute the following command to start downloading crouton/Ubuntu on it:
sudo sh -e ~/Downloads/crouton -t xfce
It was going well until my Wi-Fi disconnected temporarily and I got the following error:
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Failed to complete chroot setup
Unmounting /mnt/stateful_partition/crouton/chroots/precise...
Then I tried to run the sudo command again but I got the following:
/usr/local/chroots/precise already has stuff in it!
Either delete it, specify a different name (-n) or specify -u to update it
However, I'm not sure how to modify the command so I can resume the installation or restart it.

You could try sudo sh ~/Downloads/crouton -u -n xfce, but it's unlikely that will work. That suggestion is from the crouton docs. (Note that -n takes the chroot name; judging by the errors above, yours is named precise.)
The best approach, since you never finished installing and therefore don't need to recover any data, is simply to delete the install directory and start again. There is no good way for crouton to pick up where it left off.
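Concretely, that means something like this (a sketch; the chroot name and path are taken from the error messages above):
sudo rm -rf /usr/local/chroots/precise   # delete the partially installed chroot
sudo sh -e ~/Downloads/crouton -t xfce   # start the install from scratch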
Also, during the install you don't want the Chromebook going to sleep. There is no way to prevent that in the built-in ChromeOS settings, but according to this article you can go to the Chrome Web Store and install Keep Awake from Google.
This adds an icon in the upper right of the Chrome browser showing a sun, a sunset, or a moon depending on which setting you choose. Before you start your next install, click it to the sun so the machine won't sleep.

I had the same problem. I ran this and it's downloading:
sudo sh ~/Downloads/crouton -e -t xfce -u
I'm not sure if it picked up where I left off, but it is definitely downloading and reinstalling after the interrupted connection.

Just type sudo start[desktop environment]
and press y. It should keep going.
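For the xfce target used in the question, that placeholder would be, for example:
sudo startxfce4
(crouton provides one such start command per desktop environment.)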

Related

Network manager flickering between device not managed, and no network devices available

I ran sudo apt-get upgrade.
After rebooting the machine, when I hover over the network icon in the top right it flickers between "device not managed" and "no network devices available".
I changed /etc/NetworkManager/NetworkManager.conf to flip managed to true.
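That is, the file now contains:
[ifupdown]
managed=true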
Running ip link sometimes displays wlan0 and sometimes does not, but when it does, the index number before it keeps increasing.
I can't copy the output because the machine has no connection, but after qdisc it says noop state DOWN.
For ifconfig I get the output wlan0: error fetching interface information: Device not found, and sometimes it does not find the device at all.
iwconfig behaves similarly: sometimes it gives information about wlan0 and sometimes it states No such device.
Tailing syslog gives an iwlwifi Firmware not running - cannot dump error.
ip link set wlan0 up returns either cannot find device or RTNETLINK answers: Input/output error
So first, looking into the dump error led to this forum thread:
https://bbs.archlinux.org/viewtopic.php?id=247575
From there it led to this bug report: https://bugs.archlinux.org/task/63117
It says to downgrade the version of iwlwifi.
Unfortunately, I was not able to downgrade and have it work (maybe someone else can explain that), but another workaround is to reboot, go into Advanced options at the GRUB menu, and pick the previous kernel.
https://www.linuxuprising.com/2018/10/how-to-keep-package-from-updating-in.html
https://www.tecmint.com/remove-old-kernel-in-debian-and-ubuntu/
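If you want GRUB to keep booting that older kernel by default in the meantime, one approach (a sketch; the exact menu-entry title varies per machine and can be copied from /boot/grub/grub.cfg) is to pin it in /etc/default/grub and regenerate the config:
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 4.19.0-5-amd64"
sudo update-grub
The kernel version in that entry title is only a hypothetical example; use the one uname -sr reports when the Wi-Fi works.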
Now type sudo apt-mark hold firmware-iwlwifi
sudo apt-get upgrade will now report that there is nothing to update
Run uname -sr to get the current kernel (the one where the Wi-Fi works)
Run dpkg -l | grep linux-image | awk '{print $2}' to list the installed kernel packages
sudo apt remove --purge linux-image-<version> (substituting the buggy kernel's version from the list above; keep the one where the Wi-Fi works)
sudo update-grub2
sudo reboot
Now when you boot, you should boot into the previously saved kernel.
Run sudo apt-mark showhold to make sure firmware-iwlwifi is still held
sudo apt-get update
sudo apt upgrade
Now when you reboot you should have the updates without the new iwlwifi, which has the bug.
If I am wrong, please correct me. This came from my own issue with the upgrade and this is how I fixed it, so if there is an easier way, please let those who come across this know.

badblocks: Resource busy while trying to determine device size

I am trying to run badblocks on macOS High Sierra 10.13.6. I installed badblocks using MacPorts. I keep encountering errors when attempting to run it and I am not sure how to even get badblocks running.
sudo badblocks -c 4096 -s -w -o /Users/mcbeav/Desktop/blocks.txt /dev/disk0s2
This keeps returning the error
badblocks: Resource busy while trying to determine device size
If I try
sudo badblocks -c 4096 -s -w -o /Users/mcbeav/Desktop/blocks.txt /dev/disk0
I get the error
badblocks: Value too large to be stored in data type invalid end block (7813820416): must be 32-bit value
Can anyone please help me out?
My recommendation is that you run badblocks via the macOS console in Recovery Mode.
High Sierra (10.13+) along with APFS (Apple's file system) prevents certain operations on disk. You'll have to be in Recovery Mode or turn off disk protection to do what you propose.
Turn off your Mac (Apple > Shut Down).
Hold down Command-R and press the Power button. ...
Wait for OS X to boot into the OS X Utilities window.
Choose Utilities > Terminal.
Enter csrutil disable.
Enter reboot.
Mac OS X workaround:
My sense from past experience is that you are hitting the macOS security features (disk protection and app certification).
Booting to Ubuntu from a USB stick and running the badblocks test that way is going to be easier, in my opinion.
I hope this points you in the right direction.
I had the same issue, but then I opened Disk Utility and pressed Eject on the physical device (make sure it's the hard drive and not the volume). This unmounts the volumes but keeps the device available, which you can check by running:
diskutil list
Now run the badblocks command again and it should work fine.
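The command-line equivalent of those steps looks roughly like this (a sketch; /dev/disk2 is a placeholder for your disk identifier, and the -w write test destroys all data on the device):
diskutil list                      # find the disk identifier, e.g. /dev/disk2
diskutil unmountDisk /dev/disk2    # unmounts its volumes but keeps the device node available
sudo badblocks -c 4096 -s -w -o ~/Desktop/blocks.txt /dev/disk2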
I was able to get badblocks working on OS X 10.15 by:
1) disabling csrutil, as explained here
2) unmounting the drive I wanted to check via Disk Utility
3) running badblocks: sudo badblocks -b 4096 -w -s -v "$MOUNT_POINT" > "badblocks.info", where MOUNT_POINT=/dev/disk2
I installed badblocks via brew install e2fsprogs, as described here
Tangentially, I also did this in order to query the USB-connected drive via smartctl.
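In case it helps, a typical smartctl invocation for a USB-attached disk looks like this (hedged; smartctl comes from brew install smartmontools, many USB-to-SATA bridges need the -d sat flag, and /dev/disk2 is a placeholder):
sudo smartctl -a -d sat /dev/disk2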

Something missing in configuration for publishing from VS to Docker on Ubuntu?

I want to publish my projects from Visual Studio to the Docker service on my own server, and some questions arise:
1) Install Docker on Ubuntu - plenty of manuals, for example: http://blog.tonysneed.com/2015/05/25/develop-and-deploy-asp-net-5-apps-to-docker-on-linux/
For me it ends (I think) at the point where he goes to "dockerize" something, but okay, at least I have Docker installed.
2) Somehow find a way to publish VS projects to Docker. Again, plenty of manuals: http://www.hanselman.com/blog/PublishingAnASPNET5AppToDockerOnLinuxWithVisualStudio.aspx
3) And the problem: when I finally choose "Publish" and specify the connection and other settings, it fails checking the connection. So Docker out of the box isn't ready to receive deployments from VS? What do I need to fill the gap?
Edit for some details:
Docker was installed with these exact commands with no further configuration:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
sudo sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
sudo apt-get update
sudo apt-get install lxc-docker
What I'm deploying is an ASP.NET 5 beta 7 app, specifying:
URL: tcp://19.85.23.13:2376
Image: microsoft/aspnet
And leaving other parameters default. What I get is error:
An error occured during publish. The command [docker -H
tcp://19.85.23.13:2376 build -t microsoft/aspnet -f
"C:\Users\adski\AppData\Local\Temp\PublishTemp\DockTest185\approot\src\DockTest1\Dockerfile"
"C:\Users\adski\AppData\Local\Temp\PublishTemp\DockTest185"] exited
with code [1]: Post
http://19.85.23.13:2376/v1.20/build?cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=approot%2Fsrc%2FDockTest1%2FDockerfile&memory=0&memswap=0&rm=1&t=microsoft%2Faspnet&ulimits=null:
dial tcp 19.85.23.13:2376: ConnectEx tcp: No connection could be made
because the target machine actively refused it..
* Are you trying to connect to a TLS-enabled daemon without TLS?
* Is your docker daemon up and running?
Please visit http://go.microsoft.com/fwlink/?LinkID=529706 for
troubleshooting guide.
Well, I'm not really a web-security expert. I've found yet another manual: http://sheerun.net/2014/05/17/remote-access-to-docker-with-tls/ but can't really tell whether it is what I need. After all, none of those "Visual Studio publish to Docker" guides mentioned that I need a certificate or anything.
But obviously I need some credentials to access my server; otherwise, if it is on the web, anyone could dock something in it. So what are those cursed credentials? Any guides for dummies?
Edit 2: found something that looks like relevant: https://docs.docker.com/articles/https/
Er, is this really that complicated? But goddammit, none of those ASP.NET/Docker tutorials mentioned that. Guides for dummies, please?
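For what it's worth, the Docker HTTPS article above boils down to generating a CA, a server certificate, and a client certificate, then pointing the daemon at them. On an Ubuntu install of that era this roughly means editing /etc/default/docker (a sketch, assuming the certificate paths from that guide):
DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem"
sudo service docker restart
The docker client on the Windows side then has to present the matching client certificate and key (for the plain CLI, that is what DOCKER_CERT_PATH points at).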

How to install R on Solaris on a VirtualBox virtual machine?

This Q&A is a response to this comment. The answer to the question in the comment is not trivial, is too big for a comment, and not suitable as an answer to the question in that thread (answering my own question is officially encouraged). If you have a better answer please post it!
The question is: How to install R on Solaris on a VirtualBox virtual machine?
A more up-to-date version is available from csw: r_base. To install, see the example in Getting started where you replace vim with r_base:
pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -a r_base
/opt/csw/bin/pkgutil -y -i r_base
To install a development environment, you might also want:
/opt/csw/bin/pkgutil -y -i gcc4g++
/opt/csw/bin/pkgutil -y -i texlive
Start by downloading and installing Oracle VM VirtualBox.
Then download and unzip the Oracle Solaris 11.1 VirtualBox Template. After you unzip the Oracle template you should see a file called OracleSolaris11_1.ova, that's what you'll open in VirtualBox.
Start VirtualBox, click File, then Import Appliance, and navigate to choose the .ova file you just extracted. It will take some time to import.
Start the Solaris virtual machine by clicking the Start button in VirtualBox. It will take some time to start up, and you'll be prompted to set a root password, user name, and user password. You'll then use those details to log in. Wait for the system to load, choose GNOME to ensure you get a desktop environment, and choose your time zone, keyboard layout, and language (mine highlighted Chinese as the default choice, so be careful not to click through that one too quickly).
Eventually you'll get a desktop. Right-click on the desktop and click Open Terminal, then in the terminal type (or paste):
sudo wget https://oss.oracle.com/ORD/ord-3.0.1-sol10-x86-64-sunstudio12u3.tar.gz && sudo wget https://oss.oracle.com/ORD/ord-3.0.1-supporting-sol10-x86-64-sunstudio12u3.tar.gz
That will connect to the internet and download two files you need. The next line will unpack those two archives:
sudo tar -xzvf ord-3.0.1-sol10-x86-64-sunstudio12u3.tar.gz && sudo tar -xzvf ord-3.0.1-supporting-sol10-x86-64-sunstudio12u3.tar.gz
And then this next line installs R; watch for the prompts after you run it:
sudo bash install.sh
A lot will flash by in the terminal, concluding with Installation of <ORD> was successful.
Now the next bit is where I deviate from the instructions here, because I didn't understand them. Move all the files beginning with lib from the archives you unpacked into the directory where R needs them:
sudo mv lib* /usr/lib/64/R/lib/
That will return nothing in the terminal. Then we can run R simply by typing in the terminal:
R
And now you should have a regular R session running in the terminal.
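A quick way to confirm which build you ended up with:
R --version
Inside R itself, sessionInfo() reports the same details.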

Why doesn't wireshark detect my interface?

I just installed Wireshark, but when I click Capture > Interfaces, the dialog box that appears does not contain my network interface. What can cause this?
This is usually caused by incorrectly set up permissions for running Wireshark. While you can avoid the issue by running Wireshark with elevated privileges (e.g. with sudo), that should generally be avoided (see here, specifically here). It sometimes results from an incomplete or partially successful installation of Wireshark. Since you are running Ubuntu, it can be resolved by following the instructions given in this answer on the Wireshark Q&A site. In summary, after installing Wireshark, execute the following commands:
sudo dpkg-reconfigure wireshark-common
sudo usermod -a -G wireshark $USER
Then log out and log back in (or reboot), and Wireshark should work correctly without needing additional privileges. If the problem is still not resolved, it may be that dumpcap was not correctly configured, or something else is preventing it from operating correctly. In that case, you can set the setuid bit on dumpcap so that it always runs as root.
sudo chmod 4711 `which dumpcap`
On some distros you might get the following error when you execute the command above:
chmod: missing operand after ‘4711’
Try 'chmod --help' for more information.
In this case try running
sudo chmod 4711 `sudo which dumpcap`
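Either way, you can check that the setuid bit took effect; the owner permissions should show an s, as in -rws--x--x:
ls -l "$(which dumpcap)"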
In Windows, with Wireshark 2.0.4, running as Administrator did not solve this for me. What did was restarting the NetGroup Packet Filter Driver (npf) service:
Open a Command Prompt with administrative privileges.
Execute the command sc query npf and verify if the service is running.
Execute the command sc stop npf followed by the command sc start npf.
Open WireShark and press F5.
Source: http://dynamic-datacenter.be/?p=1279
For *nix OSes, run Wireshark with sudo privileges. You need to be superuser in order to view interfaces, just like running tcpdump -D vs sudo tcpdump -D: the first one won't show any of the interfaces, and won't complain or prompt for sudo privileges either.
So, from terminal, run:
$ sudo wireshark
As described in the other answer, this is usually caused by incorrectly set up permissions for running Wireshark.
Windows machines:
Run Wireshark as administrator.
By restarting npf, I can see the interfaces with Wireshark 1.6.5:
Open a Command Prompt with administrative privileges.
Execute the command sc stop npf.
Then start npf with the command sc start npf.
Open Wireshark.
That's it.
On Fedora 29 with Wireshark 3.0.0 only adding a user to the wireshark group is required:
sudo usermod -a -G wireshark $USER
Then log out and log back in (or reboot), and Wireshark should work correctly.
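You can confirm the membership took effect after logging back in with:
id -nG | grep -w wireshark
If it prints wireshark, the group is active in your session.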
I hit the same problem on my laptop (Windows 10) with Wireshark 3.2.0, and unfortunately none of the above solutions helped.
So I simply uninstalled Wireshark and reinstalled it.
After that, the problem was solved.
I'm putting the solution here in the hope it helps someone.
Just uninstall Npcap and install WinPcap instead. This will fix the issue.
