fwrite(): write of XX bytes failed with errno=5 Input/output error - homestead

I had asked two similar questions before; however, after more debugging I came to the conclusion that the problem was (probably) not in my own code.
In my code I am trying to unzip a gzipped file, for which I wrote a small helper:
<?php
namespace App\Helpers;

class Gzip
{
    public static function unzip($filePath)
    {
        $outFilePath = str_replace('.gz', '', $filePath);
        // Open our files (in binary mode)
        $file = gzopen($filePath, 'rb');
        $outFile = fopen($outFilePath, 'wb');
        // Keep repeating until the end of the input file
        while (!gzeof($file)) {
            // Read buffer-size bytes
            // Both fwrite and gzread are binary-safe
            fwrite($outFile, gzread($file, 4096));
        }
        // Files are done, close files
        fclose($outFile);
        gzclose($file);
    }
}
Calling it should result in the unzipped file:
Gzip::unzip('path/to/file.csv.gz');
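For completeness, here is a more defensive variant of the same helper (a sketch, not the code I actually run): it checks the return values of gzopen()/fopen() and the byte count from fwrite(), so an errno=5 failure surfaces as an exception immediately instead of silently producing a truncated file.

<?php
namespace App\Helpers;

class Gzip
{
    public static function unzip($filePath)
    {
        $outFilePath = str_replace('.gz', '', $filePath);
        $file = gzopen($filePath, 'rb');
        $outFile = fopen($outFilePath, 'wb');
        if ($file === false || $outFile === false) {
            throw new \RuntimeException("Could not open {$filePath} or {$outFilePath}");
        }
        while (!gzeof($file)) {
            $chunk = gzread($file, 4096);
            if ($chunk === false) {
                throw new \RuntimeException("gzread() failed on {$filePath}");
            }
            // fwrite() returns the number of bytes written; a short write means
            // the underlying filesystem call failed (e.g. errno=5).
            if (fwrite($outFile, $chunk) !== strlen($chunk)) {
                throw new \RuntimeException("Short write to {$outFilePath}");
            }
        }
        fclose($outFile);
        gzclose($file);
    }
}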
This is where it gets tricky: sometimes it will unzip the file, and sometimes it will throw the fwrite() exception from the title (keep in mind that this has nothing to do with the StreamHandler itself; this is a pure input/output error problem).
I can refresh the page as many times as I want, but nothing changes; if I try the gunzip command on the command line, it fails with more or less the same error.
Which file I am unzipping does not matter; it happens randomly, to random files.
It also does not matter how many times I run the gunzip command, but as I said, these exceptions / errors happen randomly, so they also randomly "fix" themselves.
The application is written in Laravel 8.0 on PHP 7.4, running in a Homestead environment (Ubuntu 18.04.5 LTS); the host laptop runs Windows 10.
To me it is super weird that this exception / error happens randomly and then, out of nowhere, randomly "fixes" itself; so my question is: how does this happen, why does this happen, and ultimately how can I fix it?

errno=5 Input/output error is a failure to read/write the Linux file system.
On a real server, you would need to check the disk with fsck, etc.
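On bare metal that check would look something like this (the device name /dev/sdb1 is just an example; the filesystem must be unmounted first):
sudo umount /dev/sdb1    # never fsck a mounted filesystem
sudo fsck -f /dev/sdb1   # -f forces a full check even if the fs looks clean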
With Homestead running on Windows, I think we should look at the known Windows 10 + Homestead errno=5 issue:
winnfsd - https://github.com/winnfsd/vagrant-winnfsd/issues/96#issuecomment-336105685

If your Vagrant on Windows is using VirtualBox, Hyper-V can cause this.
Try disabling Hyper-V for VirtualBox from PowerShell, and reboot Windows:
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Hypervisor
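If you later need Hyper-V back (for example for WSL 2, which requires the hypervisor), the counterpart command should restore it:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Hypervisor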

The problem lay in my using Homestead (a Vagrant box) with NFS turned on; Vagrant + NFS + Windows = problems. There are many possible solutions, and most exceptions involving errno=5 come down to NFS + Vagrant.
The solution for me was to stop using NFS; for now this will be the accepted answer, as it fixes my problem. However, if someone manages to find an actual solution to this error, I will accept that instead.
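Concretely, disabling NFS just means dropping the type option from the folder mapping in Homestead.yaml and running vagrant reload; a sketch with placeholder paths:
folders:
    - map: ~/code/project
      to: /home/vagrant/project
      # type: "nfs"   (remove or comment out this line to fall back to the default shared-folder driver)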

I solved it and I keep using NFS.
This usually happens to me when I'm dropping a database schema, or creating a new one and filling it with a lot of data, on my VirtualBox machine; I'm talking about volumes of around 50 megabytes. I guess this is enough for VirtualBox to start resizing the virtual hard disk, which drives Ubuntu crazy and causes kernel panics.
So the solution is to reboot vagrant as many times as it takes for it to fix the issue.
This is what usually works for me:
run vagrant halt - it will exit with errors
then vagrant up - it probably will not work
then vagrant halt - it will probably exit with errors again
then vagrant up --provision - it will take time and probably also give errors
then vagrant halt - it should work this time
then vagrant up --provision - because "why not provision it again", and that is usually enough.
In 9 out of 10 cases this is enough. When it is not, I just create a new Homestead box.
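If you do not feel like typing that sequence by hand, a rough shell equivalent of "keep cycling until it comes up" (my own shorthand, not part of the original recipe) would be:
# retry the halt/up cycle until vagrant up finally succeeds
until vagrant up --provision; do
    vagrant halt --force   # the halt itself often errors; force it
    sleep 5
done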

Related

Pintos - UserProg all tests fail is_kernel_vaddr()

I am doing the Pintos project on the side to learn more about operating systems. I had tons of devops trouble at first with it not running well on an 18.04 Ubuntu droplet. I am now running it on the VirtualBox image that UCCS tells students to download for pintos.
I finished project 1 and started to map out my solution to project 2. Following the instructions to create a simulated filesystem disk, I ran
pintos-mkdisk filesys.dsk --filesys-size=2
pintos -- -f -q
but I am getting this error:
Kernel PANIC at ../../threads/vaddr.h:87 in vtop(): assertion
`is_kernel_vaddr (vaddr)' failed.
I then tried running make check (all the tests). They are all failing for the same reason.
Am I missing something? Is there something I need to implement to fix this? I reread the instructions and didn't see anything.
Would appreciate help!
Thanks
I had a similar problem. My code for Project 1 ran fine, but I could not format the filesystem for Project 2.
The failure for me came from the following call chain:
thread_init() -> ... -> thread_schedule_tail() -> process_activate() -> pagedir_activate() -> vtop()
The problem is that init_page_dir is still NULL when pagedir_activate() is called. init_page_dir should have been initialized in paging_init() but this is called after thread_init().
The root cause was that my scheduler was being called too early, i.e. before the call to thread_start(). The reason was that I had built in a call to thread_yield() upon completion of every call to lock_release(), which makes sense from a priority-donation standpoint. Unfortunately, locks are used before the scheduler is ready! To fix this, I installed a flag called threading_started that bails out in the first line of my thread_block() and thread_yield() functions if thread_start() has not yet been called.
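A minimal sketch of that guard, reusing the threading_started name from the description above (the surrounding functions are paraphrased Pintos, so treat this as an outline, not a drop-in patch):
/* threads/thread.c (outline) */
static bool threading_started = false;   /* true once thread_start() has run */

void
thread_start (void)
{
  /* ... existing Pintos setup (create idle thread, enable interrupts) ... */
  threading_started = true;
}

void
thread_yield (void)
{
  if (!threading_started)
    return;   /* locks are released before the scheduler exists; just bail */
  /* ... normal yield path ... */
}

void
thread_block (void)
{
  if (!threading_started)
    return;
  /* ... normal block path ... */
}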
Good luck!

How to rotate a redirected log

I am running my application on Ubuntu 16.04. The application does not write its logs to a file itself; it logs to the console when I run it, and I preserve the logs by redirecting stdout with > to a file.
But the file is becoming quite large over time.
I need to rotate the logs, for which I tried logrotate, but it does not rotate my logs. Below is a snippet from my logrotate config file:
/home/rranjan/my-app/logs/log {
    su rranjan rranjan
    missingok
    size 100k
    hourly
    create 0660 rranjan rranjan
    rotate 20
}
I tried truncating the file as well, but that failed too. I am not sure, but could it be that in the case of redirection the file handle is never released?
How do I get my logs rolling?
Instead of redirecting, you could pipe the output through the logger command, which hands it to syslog (note that logger's -p flag takes a syslog priority such as user.info, not a file path; my-app here is just a placeholder tag):
<your_script> | logger -t my-app -p user.info
Then, if this is not enough, maybe try asking on https://superuser.com/; you may find related answers there.
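That said, if you want to keep plain shell redirection, the usual trick is logrotate's copytruncate directive combined with appending (>>) rather than truncating (>) redirection: copytruncate copies the log and empties it in place, so the shell's open file descriptor stays valid, and O_APPEND keeps the write offset correct after truncation. Adapted from the config above (a sketch; the create line is dropped because copytruncate keeps the original file):
/home/rranjan/my-app/logs/log {
    su rranjan rranjan
    missingok
    size 100k
    hourly
    copytruncate
    rotate 20
}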

Systrace - error truncating /sys/kernel/debug/tracing/set_ftrace_filter: No such device (19) unable to start

I am currently working on a project which aims to find out what the system is doing behind a series of user interactions in the Android UI. For example, if the user clicks the send button in Facebook Messenger, the measured response time for that action is 1.2 seconds. My goal is to figure out what that 1.2 seconds consists of. A friend suggested that I take a look at Systrace.
However, when I tried Systrace on my HTC One M8, I encountered some problems.
First: error opening /sys/kernel/debug/tracing/options/overwrite - no such file or directory. I solved this by building ftrace support into the kernel, following http://opensourceforu.com/2010/11/kernel-tracing-with-ftrace-part-1/, and running mount -t debugfs none /sys/kernel/debug. After that I could find the tracing directory. I also set ro.debuggable=1 in the default.prop file within the ramdisk and flashed the boot.img to my phone.
Now I encounter another problem: when I run python systrace.py --time=10 -o mynewtrace.html sched gfx view wm, the following error (19) pops up: error truncating /sys/kernel/debug/tracing/set_ftrace_filter: No such device (19). I don't know whether the way I built kernel support for Systrace is incorrect or whether anything is missing.
Could anyone help me out with this problem, please?
I think I have worked out the solution. My environment is Ubuntu 16.04 + HTC One M8. The steps are as follows:
Open a terminal and enter: adb shell
Then run (1) su and (2) mount -t debugfs none /sys/kernel/debug. Now you should be able to see many directories under /sys/kernel/debug/ (you may cd into /sys/kernel/debug to confirm this).
Open a new terminal and enter: dd if=/dev/block/platform/msm_sdcc.1/by-name/boot of=/sdcard/boot.img to dump the boot.img kernel image from your device.
Use AndroidImageKitchen to unpack the boot.img and find default.prop within the Ramdisk folder. Then change ro.debuggable=0 to ro.debuggable=1, repack the boot.img and flash it to your device.
Once the device boots, enter adb root in a terminal; a message like restarting adbd as root may pop up. Disconnect the USB and connect it again.
cd to the systrace folder, e.g. ~/androidSDK/platform-tools/systrace, and run:
python systrace.py --time=10 -o mynewtrace.html sched gfx view wm
Now you should be able to generate your own systrace files.
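If the "No such device" error comes back even with debugfs mounted, it is worth verifying that the kernel was built with dynamic ftrace, because set_ftrace_filter only exists when CONFIG_DYNAMIC_FTRACE is enabled. A quick check from a root adb shell (the /proc/config.gz part assumes the kernel was built with CONFIG_IKCONFIG_PROC):
ls /sys/kernel/debug/tracing/set_ftrace_filter   # present only with dynamic ftrace
zcat /proc/config.gz | grep FTRACE               # inspect the running kernel's config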

OpenCPU - Failed to set rlimit. ENOSYS

I already installed OpenCPU on an Ubuntu server - Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-32-generic x86_64) - and everything worked perfectly, without any problems.
Here I want to say that I really like this API and I am very thankful for all the effort from the people (I think mostly Jeroen Ooms) working on it.
Now I have installed it again, but on a server hosted at another provider. It is also an Ubuntu server - Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-042stab093.4 x86_64) - so I expected it to work as smoothly as before.
But now I have a problem. After installing and starting the service, I wanted to check through my browser whether everything was OK.
So I just opened http://xxx.xxx.xxx.xxx/ocpu, as on my other server. This time my browser did not show the OpenCPU API Explorer, but the following message:
Failed to set rlimit. ENOSYS
In call:
rlimit_wrapper("rlimit_as", hardlim, softlim, pid, verbose)
The server only has 1 GB of physical memory, so I thought changing rlimit.as to 1e9 instead of the standard 2e9 would fix the problem (I also tried 750000000 and 500000000), but nothing helped (of course I restarted the opencpu service after each change).
I also suspect this is not the real problem, because I would expect the server to fall back to virtual memory when an operation needs more than one GB.
I think the problem has to do with RAppArmor. So I tried to disable it and restart opencpu, but the problem didn't vanish:
$ sudo aa-disable usr.bin.r
Disabling /etc/apparmor.d/usr.bin.r.
Traceback (most recent call last):
  File "/usr/sbin/aa-disable", line 30, in <module>
    tool.cmd_disable()
  File "/usr/lib/python3/dist-packages/apparmor/tools.py", line 148, in cmd_disable
    raise apparmor.AppArmorException(cmd_info[1])
apparmor.common.AppArmorException: 'Warning: unable to find a suitable fs in /proc/mounts, is it mounted?\nUse --subdomainfs to override.\n'
So does anyone know what the problem here could be or has any suggestions where to look for a solution (I tried to google already, but didn't find anything helpful)?
I don't think any of the following is the cause of the problem, but since I'm not sure, I'll add these warnings anyway:
The only strange thing I encountered during the OpenCPU installation was this message (which appeared 4 times):
iptables v1.4.21: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
But afterwards it said:
* Reloading nginx configuration nginx [ OK ]
OK
Setting up opencpu (1.4.4-trusty15) ...
Also when I tried to install RAppArmor separately, I got the following warning:
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = (unset)
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Selecting previously unselected package r-cran-rapparmor.
And also this one:
Warning: unable to find a suitable fs in /proc/mounts, is it mounted?\nUse --subdomainfs to override.\n
Thanks in advance!
It looks like your new hosting provider uses some sort of virtualization system with a shared kernel (the -042stab kernel version suggests OpenVZ) that limits all kinds of Linux functionality, including rlimit, iptables and probably AppArmor. Is it an actual cloud host, or something you set up yourself?
It would be helpful to debug this in R (outside of opencpu). On your server, start R in the console and type:
library(RAppArmor, lib="/usr/lib/opencpu/library")
rlimit_as(1e9)
rlimit_fsize(1e9)
rlimit_cpu(1e5)

How to solve "Device 0 (vif) could not be connected. Hotplug scripts not working."?

When starting a virtual machine, xm shows:
Device 0 (vif) could not be connected. Hotplug scripts not working.
Why does xm show this, and how can I solve it?
From the Xen wiki:
Error: Device 0 (vif) could not be connected. Hotplug scripts not working.
This problem is often caused by not having "xen-netback" driver loaded in dom0 kernel.
The hotplug scripts are located in /etc/xen/scripts by default, and are labeled with the prefix vif-*. Those scripts log to /var/log/xen/xen-hotplug.log, and more detailed information can be found there.
http://wiki.xen.org/wiki/Xen_Common_Problems
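A quick way to confirm the driver situation in dom0 (plain module tools; nothing beyond the module name is assumed):
lsmod | grep xen_netback              # is the backend driver loaded?
sudo modprobe xen-netback             # load it if it is missing
tail /var/log/xen/xen-hotplug.log     # the hotplug scripts log details here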
As weird as it sounds, I encountered this error in a situation where the total VM memory I had assigned left dom0 with too little memory to complete the addition of a virtual interface. Sizing down the virtual machines was the solution.
I agree with PypeBros. I once put a new entry in /etc/fstab to mount /tmp as tmpfs and allocated 10G of memory to it. After that, the Xen guest would not start and gave me this error:
Error: Device 0 (vif) could not be connected. Hotplug scripts not working.
It worked fine once I removed the /tmp tmpfs mount, so I think this error can be due to a memory problem.
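For reference, the kind of /etc/fstab entry I mean looked roughly like this (the exact options are from memory, so treat them as an illustration):
# a 10G tmpfs on /tmp competes with the memory the guests need
tmpfs  /tmp  tmpfs  defaults,size=10G  0  0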
