launchd jobs run at wrong time under OS X Yosemite - plist

I have a number of launchd agents that were working fine until I upgraded to Yosemite. The jobs still work when run manually. They do not run when they're supposed to, but they do run automatically on occasion. I don't know what triggers these runs; it is not always at the same time of day, and it can happen while I'm in the middle of doing something (not when I wake the computer from sleep).
I've boiled it down to the simplest job I can think of, just an AppleScript command that displays the time the job was run (so I can tell that the time is wrong). I've pasted the plist at the bottom of this post. LaunchControl believes that the job is loaded and it shows up in launchctl list:
$ launchctl list | grep "PID\|show time"
PID  Status  Label
-    0       0 - tmp show time
I'm usually at my computer at the time the job is scheduled to run.
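For what it's worth, the job's command runs fine when invoked by hand; this is just the ProgramArguments from the plist below, pasted into a terminal:
/usr/bin/osascript -e 'display dialog (current date) as string'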
Here's the plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>0 - tmp show time</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/osascript</string>
        <string>-e</string>
        <string>display dialog (current date) as string</string>
    </array>
    <key>RunAtLoad</key>
    <false/>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>7</integer>
        <key>Minute</key>
        <integer>45</integer>
    </dict>
</dict>
</plist>
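For completeness, this is how I load the agent and kick it by hand to confirm the job itself works (the plist filename here is hypothetical; launchctl start takes the job's label):
launchctl load '/Users/kuzzooroo/Library/LaunchAgents/show time.plist'
launchctl start '0 - tmp show time'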

I was having the same issue - it worked the day before I upgraded to Yosemite and broke the next day. Manual runs still worked, but the timed launch would fire many hours late. I noticed a lot of these items in the logs:
2/18/15 8:05:36.000 AM kernel[0]: BUG in process suhelperd[454]: over-released legacy external boost assertions (1 total, 1 external, 0 legacy-external)
suhelperd is the Software Update helper daemon.
I went into System Preferences / App Store and shut off all the automatic updates.
This morning my agent ran perfectly on time again and those entries in the log were gone.
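If you want to check whether the entries have stopped on your machine, grepping the system log works (this is the standard OS X log location):
grep suhelperd /var/log/system.log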

Potential solution
I was running LaunchControl and I noticed it displaying this warning about one of my jobs:
Program arguments contain globbing symbols but shell globbing is not supported by launchd. This might be intentional
The globbing symbols in question weren't being used for globbing; they were part of a URL: http://www.weather.com/weather/hourbyhour/graph/New+York+NY+10014:4:US?pagenum=2. Though this job predates Yosemite, removing it seems to have largely solved the problem, though I'm not yet confident it's a complete fix.
Workaround
Since no one has what they know to be a complete solution, I'm leaving this workaround up:
I still don't know what's going on, but here's a somewhat unpleasant workaround. I noticed that unloading and reloading broke the logjam, so I added a crontab entry to unload and then reload a launchd item. Effectively I'm using cron (which you're supposed to avoid in favor of launchd under Mac OS whenever possible) to prod launchd. I ran crontab -e and added this line:
50 * * * * launchctl unload '/Users/kuzzooroo/Library/LaunchAgents/foo bar.plist' && launchctl load '/Users/kuzzooroo/Library/LaunchAgents/foo bar.plist'

I've been unable to find a reliable way to get this to work purely within launchd, but here's the workaround that seems to be working for me: remove the StartCalendarInterval section from the plist, and run crontab -e to create a cron job that starts the task in question:
50 * * * * launchctl start start.example.taskName
Of course, you could just skip launchd and use cron directly to launch your job. For my particular situation that doesn't work (I'm doing a git pull based on ssh credentials that are loaded into memory, which launchd / launchctl handles correctly but standalone cron doesn't), but cron is great for lots of situations.

"cron" doesn't work if your crontab entry specifies a time when your computer is asleep. What I had to do was change my job to read a "stamp-file" that contains the date of last run. If today's date matches the stamp date, the job quits. But if different, it rewrites the stamp with today's date, and does its job. At least that way, it does its job only ONCE, but "cron" fires it up every 30 minutes. My crontab looks like this: 15,45 * * * * /Users/etc.

Related

How to extract the maps via AC Dashboard?

Everything went well with the compilation and the database, but when I start the worldserver, I get an error:
Loading world information...
> RealmID: 1
> Version DB world: ACDB 335.6-dev
Will clear `logs` table of entries older than 1209600 seconds every 10 minutes.
Using DataDir /azerothcore-wotlk/data/
WORLD: VMap support included. LineOfSight:true, getHeight:true, indoorCheck:true PetLOS:true
Map file '/azerothcore-wotlk/data/maps/0004331.map': does not exist!
exit code: 1
worldserver terminated, restarting...
worldserver Terminated after 1 seconds, termination count: : 6
worldserver Restarter exited. Infinite crash loop prevented. Please check your system
What could be the problem? I rechecked the permissions on the directory, including the owner, and everything is fine. I tried different DataDir paths; it is now set to DataDir = "/home/azcore/azerothcore-wotlk/data". I still get the error. How can I fix this?
First of all, if you only need to get the latest maps compatible with AzerothCore, you can download them from here.
Otherwise, set CTOOLS_BUILD='all' in your config.sh file, then run the build again using
./acore.sh compiler build
This will generate the extraction binaries inside azerothcore-wotlk/env/dist/bin/.
Once you have the binaries, you can follow the guide here to extract the data manually; you only need to move the binaries into the WoW client directory and run them in the right order, as sketched below.
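As a rough sketch, the manual extraction amounts to something like the following. The binary names and argument forms here are assumptions based on similar 3.3.5 emulator toolchains and may differ in your build; check what actually lands in env/dist/bin/:
cd /path/to/World_of_Warcraft                     # your 3.3.5a client directory
cp /path/to/azerothcore-wotlk/env/dist/bin/* .    # copy the extractor binaries in
./mapextractor                                    # DBC + maps first (name may differ)
./vmap4extractor                                  # vmaps next (name may differ)
./vmap4assembler Buildings vmaps                  # assemble vmaps (argument form may differ)
./mmaps_generator                                 # mmaps last; optional and slow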

fwrite(): write of XX bytes failed with errno=5 Input/output error

I had 2 similar questions before; however, after more debugging I came to the conclusion that the problem was (probably) not within my own code.
In my code I am trying to unzip a gzipped file; for this I wrote a small method:
<?php
namespace App\Helpers;

class Gzip
{
    public static function unzip($filePath)
    {
        $outFilePath = str_replace('.gz', '', $filePath);

        // Open our files (in binary mode)
        $file = gzopen($filePath, 'rb');
        $outFile = fopen($outFilePath, 'wb');

        // Keep repeating until the end of the input file
        while (!gzeof($file)) {
            // Read buffer-size bytes
            // Both fwrite and gzread are binary-safe
            fwrite($outFile, gzread($file, 4096));
        }

        // Files are done, close files
        fclose($outFile);
        gzclose($file);
    }
}
This should result in the unzipped file:
Gzip::unzip('path/to/file.csv.gz');
This is where it gets tricky: sometimes it will unzip the file, and sometimes it will throw the fwrite() exception from the title (keep in mind this has nothing to do with the StreamHandler itself; it is a pure input/output error problem).
I can refresh the page as many times as I want and nothing changes; if I try the gunzip command on the command line, it fails with sort of the same error.
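For reference, gunzip -t is a handy way to reproduce this outside PHP, since it only test-reads the archive and writes nothing:
gunzip -t path/to/file.csv.gz   # should exit non-zero with a similar error when the FS layer is the problem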
Which file I am unzipping does not matter; it happens randomly to random files.
It also doesn't matter if I run the gunzip command multiple times; like I said, these exceptions / errors happen randomly, so they also randomly "fix" themselves.
The application is written in Laravel 8.0 on PHP 7.4, running in a Homestead environment (Ubuntu 18.04.5 LTS); my host laptop runs Windows 10.
To me it's super weird that this exception / error happens randomly and also randomly "fixes" itself out of nowhere, so my question is: how does this happen, why does this happen, and ultimately how can I fix it?
errno=5 Input/output error is a failure to read from or write to the underlying Linux file system.
On a real server you would check the disk with fsck, etc.
Since Homestead is running on Windows, I think we should look at the Windows 10 Homestead errno=5 issue.
winnfsd - https://github.com/winnfsd/vagrant-winnfsd/issues/96#issuecomment-336105685
If your Vagrant on Windows is using VirtualBox, Hyper-V can cause this.
Try disabling Hyper-V for VirtualBox in PowerShell, and reboot Windows:
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Hypervisor
The problem lay in my using Homestead (a Vagrant box) with NFS turned on; Vagrant + NFS + Windows = problems. There are many possible solutions, and most exceptions involving errno=5 come down to NFS + Vagrant.
The solution for me was to stop using NFS. For now this is the accepted answer, as it fixes my problem; however, if someone finds an actual solution to this error, I will accept that instead.
I solved it and I keep using NFS.
This situation usually happens to me when I'm dropping a database schema, or creating a new one and filling it with a lot of data, on my VirtualBox VM. I'm talking about volumes of around 50 megabytes. I guess this is enough for VirtualBox to start resizing the virtual hard disk, which drives Ubuntu crazy and causes kernel panics.
So the solution is to reboot Vagrant as many times as it takes for it to fix the issue.
This is what usually works for me:
run vagrant halt - it will probably exit with errors
then vagrant up - it probably won't work
then vagrant halt - it will probably exit with errors again
then vagrant up --provision - it will take time and probably also give errors
then vagrant halt - it should work this time
then vagrant up --provision - because "why not provision it again", and that is usually enough.
In 9 out of 10 cases this is enough. When it is not, I just create a new Homestead.
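Spelled out as plain commands, the sequence from the list above is simply:
vagrant halt
vagrant up
vagrant halt
vagrant up --provision
vagrant halt
vagrant up --provision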

Pintos - UserProg all tests fail is_kernel_vaddr()

I am doing the Pintos project on the side to learn more about operating systems. I had tons of devops trouble at first with it not running well on an Ubuntu 18.04 droplet. I am now running it on the VirtualBox image that UCCS tells students to download for Pintos.
I finished project 1 and started to map out my solution to project 2. Following the instructions to create a filesystem disk, I ran
pintos-mkdisk filesys.dsk --filesys-size=2
pintos -- -f -q
but am getting error
Kernel PANIC at ../../threads/vaddr.h:87 in vtop(): assertion
`is_kernel_vaddr (vaddr)' failed.
I then tried running make check (all the tests). They all fail for the same reason.
Am I missing something? Is there something I need to implement to fix this? I reread the instructions and didn't see anything.
Any help would be appreciated!
I had a similar problem. My code for Project 1 ran fine, but I could not format the filesystem for Project 2.
The failure for me came from the following call chain:
thread_init() -> ... -> thread_schedule_tail() -> process_activate() -> pagedir_activate() -> vtop()
The problem is that init_page_dir is still NULL when pagedir_activate() is called. init_page_dir should have been initialized in paging_init(), but that is called after thread_init().
The root cause was that my scheduler was being invoked too early, i.e. before the call to thread_start(). In my case, I had added a call to thread_yield() at the end of every lock_release(), which makes sense from a priority-donation standpoint. Unfortunately, locks are used before the scheduler is ready! To fix this, I added a flag called threading_started that bails out in the first line of my thread_block() and thread_yield() functions if thread_start() has not yet been called.
Good luck!

Systrace - error truncating /sys/kernel/debug/tracing/set_ftrace_filter: No such device (19) unable to start

I am currently working on a project that aims to find out what the system is doing behind a series of user interactions with the Android UI. For example, if a user clicks the send button in Facebook Messenger, the measured response time for that action is 1.2 seconds. My goal is to figure out what those 1.2 seconds consist of. My friend suggested that I take a look at Systrace.
However, when I tried systrace on my HTC one M8, I have encountered some problems:
First, I got error opening /sys/kernel/debug/tracing/options/overwrite - no such file or directory. I solved this by building tracing support into the kernel, following http://opensourceforu.com/2010/11/kernel-tracing-with-ftrace-part-1/, and running mount -t debugfs none /sys/kernel/debug. After that I could find the tracing directory. I also set ro.debuggable=1 in the default.prop file within the ramdisk and flashed the boot.img onto my phone.
Now I've hit another problem: when I run python systrace.py --time=10 -o mynewtrace.html sched gfx view wm, the following error (19) pops up: error truncating /sys/kernel/debug/tracing/set_ftrace_filter: No such device (19). I don't know whether the way I built kernel support for systrace is incorrect or something is missing.
Could anyone help me out with this problem, please?
I think I have worked out the solution. My environment is Ubuntu 16.04 + HTC One M8. The steps are as follows:
Open a terminal and enter: adb shell
Then enter: (1) su (2) mount -t debugfs none /sys/kernel/debug. Now you should be able to see many directories under /sys/kernel/debug/. (You may cd into /sys/kernel/debug to confirm this.)
Open a new terminal and enter: dd if=/dev/block/platform/msm_sdcc.1/by-name/boot of=/sdcard/boot.img to dump the boot.img kernel image from your device.
Use AndroidImageKitchen to unpack the boot.img and find default.prop within the Ramdisk folder. Then change ro.debuggable=0 to ro.debuggable=1. Repack the boot.img and flash it to your device.
Once the device boots, enter adb root in a terminal; a message like restarting adbd as root may pop up. Disconnect the USB cable and connect it again.
cd to the systrace folder, e.g. ~/androidSDK/platform-tools/systrace, and run:
python systrace.py --time=10 -o mynewtrace.html sched gfx view wm
Now you should be able to generate your own systrace files.
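Condensed, the device-side commands from the first steps look like this (the boot partition path is specific to the HTC One M8; yours may differ):
adb shell
su
mount -t debugfs none /sys/kernel/debug
dd if=/dev/block/platform/msm_sdcc.1/by-name/boot of=/sdcard/boot.img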

resume Rsnapshot to same drive

Sometimes, during a large rsync run by rsnapshot, the NFS mount we are syncing to will drop.
Then when you run:
rsnapshot monthly
to resume it, it behaves as if this is a brand-new backup, rotating monthly.0 to monthly.1 and so on.
Is there a way to resume the rsync using rsnapshot monthly if something gets interrupted, without starting a brand-new backup?
The answer is: not really, but sort of. rsnapshot runs as a batch job, generally triggered from cron. It does not keep any state between runs apart from the backups themselves. If your NFS mount goes away mid-backup, after a while you'll get some sort of I/O error, rsnapshot will give up and die with an error, and the backup will fail. The next time it runs after the failure, it will start the backup as if from scratch.
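For context, a typical cron trigger looks something like this (the schedule here is arbitrary):
0 2 1 * * /usr/bin/rsnapshot monthly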
However, if you use the sync_first config option and not the link_dest option, rsnapshot will do a better job of recovery. It will leave the files it has already transferred in place and won't have to transfer them again, though it will still have to check that the source and destination are the same in the usual rsync way. The man page gives some detail on this.
This is not the case with the link_dest method, which for most errors removes the work it has done and starts again. Specifically, on error link_dest does a "rollback" like this:
ERROR: /usr/bin/rsync returned 255 while processing sam#localhost:..
WARNING: Rolling back "localhost/"
/bin/rm -rf /tmp/rs-test/backups/hourly.0/localhost/
/bin/cp -al /tmp/rs-test/backups/hourly.1/localhost \
/tmp/rs-test/backups/hourly.0/localhost
touch /tmp/rs-test/backups/hourly.0/
rm -f /tmp/rs-test/rsnapshot-test.lock
If you're not using it already, and you can use it (apparently some non-UNIX systems can't), use the sync_first method.
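With sync_first enabled, the transfer and the rotation become separate steps, so a failed transfer can simply be re-run before anything rotates. Assuming monthly is your smallest backup level, the usual invocation becomes:
# in rsnapshot.conf (fields must be tab-separated):
# sync_first	1
rsnapshot sync && rsnapshot monthly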
I don't use rsnapshot, but I've written my own equivalent wrapper over rsync (in Perl) and I do delta backups, so I've encountered similar problems.
To solve the problem of creating "false whole backups", the basic idea is the following (the rsnapshot invocation here is kept from my original pseudocode; adapt it to your wrapper):
while true; do
    # placeholder invocation from the pseudocode; substitute your own wrapper
    if rsnapshot remote_whatever/tmp; then
        # rename into place only after a fully successful run
        mv remote_whatever/tmp remote_whatever/monthly_whatever
        break
    fi
done
The above mv should be atomic, even over NFS.
I just answered a similar question with more details here: https://serverfault.com/questions/741346/rsync-directory-so-all-changes-appear-atomically/741420#741420
