jekyll/knitr: automatically regen rmd AND --watch - r

I'm struggling to get knitr and jekyll to play nicely. I want to automatically regenerate my site, recompiling RMDs if an RMD changes (handled by servr), or rebuilding if anything else changes that would trigger a usual rebuild (as in jekyll serve --watch).
At the moment I'm using servr::jekyll(), which automatically regenerates my site any time I modify an RMD - that's great.
However, if I modify just an ordinary MD file (that would usually trigger a rebuild in jekyll serve --watch), the site is not rebuilt.
If I try to pass --watch e.g.
servr::jekyll(command='jekyll serve --watch')
# or
servr::jekyll(command='jekyll build --watch')
then:
with jekyll serve --watch, it looks like the site gets regenerated every time I change an MD or RMD file, but the RMDs are not recompiled - jekyll will update MDs if they've changed and serve them, but not recompile an RMD.
with jekyll build --watch, the site isn't served (I guess this is not surprising, though even with serve = TRUE it does not serve), and the updating behaviour appears to be the same as with serve --watch: MDs are updated but RMDs are not recompiled.
if I do this in RStudio I can't return back to the prompt - I get the dialog that tries to get me to force quit and eventually the session crashes. I then have to find the jekyll process that is still running and kill it.
I guess this is because with --watch the call to jekyll doesn't return, so you never get back to R for servr to do its own watching? Even with daemon=TRUE, servr doesn't seem to re-knit the RMDs.
So my question is, how can I get (can I get?):
normal site regeneration as per jekyll serve --watch behaviour, and
automatic RMD re-compilation as per servr::jekyll() ?
I'm using jekyll 2.4.0 at the moment.

For now, if you run jekyll build --watch in a separate terminal, they work concurrently. Not an ideal solution, but good enough in the meantime.
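As a sketch, that interim setup is two concurrent processes; the servr call shown is the one from the question, and the working directory is assumed to be the Jekyll project root:

```shell
# Terminal 1: let Jekyll itself rebuild when ordinary .md files change
jekyll build --watch

# Terminal 2: an R session in the same project, where servr watches the
# .Rmd files, re-knits them, and serves the generated site
R -e 'servr::jekyll()'
```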


In Docker WordPress image what causes a delay in copying application files?

I have created a new Dockerfile based on the official WordPress image, and I have been trying to troubleshoot why I cannot remove the default themes. I discovered that the reason is that, at the time the command is executed, the files do not actually exist yet.
Here are the relevant lines from my Dockerfile:
FROM wordpress
RUN rm -rf /var/www/html/wp-content/themes/twenty*
The delete command works as expected if I run it manually after the container is running.
As a side note, I have also discovered that when I copy additional custom themes to the /var/www/html/wp-content/themes directory from the Dockerfile, it does work, but not quite as I would expect: any files in the official Docker image will overwrite my custom versions of the same file. I would have imagined it to work the other way around, in case I want to supply my own config file.
So I actually have two questions:
Is this behavior Docker-related? Or is it in the WordPress-specific image?
How can I resolve this? It feels like a hack, but is there a way to asynchronously run a delayed command from the Dockerfile?
What's up, Ben!
Your issue is related to a concept introduced by Docker called an entrypoint. It's typically a script that is executed when the container is run, and it contains actions that need to be run at runtime, not build time. That script is run right after you start the image; it's what makes containers behave like services. The parameters set with the CMD directive are, by default, the ones passed directly to the entrypoint, and they can be overridden.
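As a minimal illustration of the ENTRYPOINT/CMD split (the image name and script contents here are generic placeholders, not taken from the WordPress image):

```dockerfile
FROM debian:stable-slim
COPY docker-entrypoint.sh /usr/local/bin/
# The entrypoint runs when the container starts (runtime), not during the build
ENTRYPOINT ["docker-entrypoint.sh"]
# CMD supplies default arguments, which the entrypoint receives as "$@"
CMD ["apache2-foreground"]
```

Such an entrypoint script typically does its runtime setup and then ends with exec "$@", handing control over to whatever CMD (or the docker run command line) specified.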
You can find the Debian template of the Dockerfile of the image you are pulling here. As you can see, it calls an entrypoint named docker-entrypoint.sh. Without diving into it too much: basically, it performs the installation of your application.
Since you are inheriting from the WordPress image, the entrypoint of that image is executed. Overwriting it so that it no longer runs is not a good idea either, since that would render your image useless.
A simple hack that would work in this case is the following:
FROM wordpress
RUN sed -i 's|^exec "$@"|rm -rf /var/www/html/wp-content/themes/twenty* \&\& exec "$@"|' /usr/local/bin/docker-entrypoint.sh
That rewrites the entrypoint so that the final exec clause first removes those files and then runs whatever service it was going to run (typically Apache, but I don't know which is the case in this container).
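To see what that sed edit does, here it is applied to a stand-in entrypoint file; the real docker-entrypoint.sh is much longer, so this is a simplified sketch:

```shell
# Create a stand-in entrypoint whose last line hands off to the service
printf '#!/bin/sh\nset -e\nexec "$@"\n' > /tmp/docker-entrypoint.sh

# Prepend the cleanup to the final exec, as the RUN line in the Dockerfile would
sed -i 's|^exec "$@"|rm -rf /var/www/html/wp-content/themes/twenty* \&\& exec "$@"|' /tmp/docker-entrypoint.sh

# The last line now removes the default themes before starting the service
cat /tmp/docker-entrypoint.sh
```

Using | as the sed delimiter avoids escaping every / in the path, and the & in the replacement must be written \& because a bare & means "the whole match".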
I hope that helps! :)

Exclude files from auto-rebuilding meteor

I'm building an app in Meteor, and am a big fan of the auto-rebuild behaviour that listens for any file change, rebuilds the app and refreshes my browser. However, I recently started to use flycheck with eslint, and flycheck creates temporary files called flycheck_my-module.js in the current directory whenever I make a change in a file (as opposed to only on save). Meteor sees these files being created, rebuilds, and reloads. This causes a lot of churn, is heavy on my browser, and is ruining my workflow.
I can customize the prefix for these files, e.g. to .#flycheck_, and that will make Meteor ignore them; however, it will break the eslint checker.
So my question is: is there any way to tell Meteor to prevent all files starting with flycheck_ from triggering a rebuild, something like meteor run --ignore "flycheck_*.js"?
Meteor handles the whole build process, so introducing something like flycheck isn't really ideal (same goes for transpilers, etc).
One possibility is to save those files inside the test folder, which is ignored by Meteor. If the file is required to be in the same folder, you could quickly edit the source of flycheck to look in 'test/' + currentDir and recreate the same folder structure within the test folder. Not really ideal, but that's about your only option if you want to keep using flycheck + emacs.

Trouble designing grunt workflow, with rev and usemin, for webdev

I'm going to start using grunt-rev & grunt-usemin with grunt-watch for my web development needs (a RESTful Web App specifically).
I have a local development machine which will run grunt-watch to attach revision identifiers on my JS files. I git commit and git push my tree to a git repo, and then ask the production server to git pull the changes from the git repo to show them to the web visitors.
The problem is that I don't want my git repo to store different filenames (due to grunt-rev) on each commit. That would be bad, because then I wouldn't be able to do git diff between commits without having my screen get flooded with the contents of files that appear and disappear, and also it could (sometimes) take up a lot more storage than if it only stored the small diffs of the files.
The only solution I see is to add the build directory (the one containing the versioned filenames) to my .gitignore, so those files with constantly changing filenames are not stored in git. But wouldn't that mean I would have to run grunt-watch on my production server as well, in order to produce the build directory with the versioned filenames there too? That gets complicated: a new process has to run on the remote server, with its own small chance of errors while processing the files. Not the solution I was hoping for.
Do you people have another solution? What would you suggest I do?
What I do to solve this is remove the previous "build" files before committing and deploying a new build. There is no need to keep older generated files, because you can always rebuild them from the source files (which are in git).
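A sketch of that flow in a throwaway repository; the filenames and revision hashes are invented for the example, and in the real project grunt-rev would generate them:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

# An earlier build committed a revisioned file
mkdir build
echo old > build/app.a1b2c3.js
git add -A
git commit -qm "old build"

# Before committing a new build, drop the previously generated files...
git rm -rq build

# ...then regenerate (grunt would do this) and commit the fresh ones
mkdir -p build
echo new > build/app.d4e5f6.js
git add -A
git commit -qm "new build"

# Only the current build's filename remains tracked
git ls-files build
```

This keeps the history free of stale revved filenames, at the cost of the build outputs being regenerated rather than diffed.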

DiscoverMeteor: what is the purpose of bundling?

In Chapter 2.5 of Discover Meteor by Tom Coleman, the instructions for deploying to EC2 require me to first bundle the app, move the tarball one directory up, then unzip that tarball again, before running node on the app.
This seems like a rather tedious procedure, as I'll have to bundle and unzip after every commit to test the app. Also, isn't this just the same as moving the entire app one directory up, or does bundling and then unzipping do some magic that makes it run right?
Bundling creates a production build of the application, optimized to run in the live environment, while your working directory is more suitable for development. For example, a bundled app has only one JS and one CSS file instead of tens to hundreds.
Additionally, a bundled app contains everything needed to run with plain Node, so you can run it in an environment without Meteor installed.
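For reference, the procedure described in the question roughly corresponds to the following commands (from the era of the old `meteor bundle` command; newer Meteor releases replaced it with `meteor build`, and the MONGO_URL/PORT values are illustrative):

```shell
# Build the production tarball inside the project
meteor bundle myapp.tgz

# Unpack it outside the project directory
mv myapp.tgz ..
cd ..
tar xzf myapp.tgz

# Run the bundled app with plain Node, no Meteor needed
cd bundle
MONGO_URL=mongodb://localhost:27017/myapp PORT=3000 node main.js
```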

CSS is looking different on heroku

As you should see in the images below, the css on my local host site is spaced much better at the top than it is on heroku.
Has anyone had this type of problem before? You can see it best on this page http://pltcpal.herokuapp.com/forums/
I'm using Twitter bootstrap, which recommends adding
`padding-top: 40px;`
to body if using the top nav bar. Somehow it's not working...
The problem is related to how the asset pipeline is handled on Heroku. There are several ways to handle this; see http://devcenter.heroku.com/articles/rails31_heroku_cedar
I fixed the issue in my application by pre-compiling the assets locally on my machine and then pushing them to Heroku.
Pre-compile the assets:
RAILS_ENV=production bundle exec rake assets:precompile
Add/commit the changes to git repository:
git add public/assets
git commit -m "vendor compiled assets"
To be safe I tested the whole thing on a local branch on my machine first which I pushed to Heroku using the following command (Heroku normally ignores all branches except the master branch, thus the trick):
git push -f heroku heroku-assetpipeline:master
FWIW, I had this same issue and checked everything I could think of, as well as the suggestions above. It turned out I must have zoomed out in my browser while on localhost, while I had the standard zoom on my production URL.
It was as simple as resetting the zoom in my browser on both pages. Hope this helps someone else with the same problem.
I have the very same issue.
When I compare the development and production code, it turns out that on the development machine the stylesheets and JavaScript files from Bootstrap are all loaded individually, whereas on the production site (Heroku) there is only one application-XYZ.css and one application-XYZ.js.
I am not sure if this could be an issue with the asset pipeline.
Could someone elaborate on what needs to be done to (pre-)compile the asset pipeline so that deployment on Heroku succeeds?
Is it possible that you pre-compiled your assets locally at some point? To force heroku to compile your assets during slug compilation you can rename your public/assets/manifest.yml to public/assets/manifest.yml.bak, commit your source, and push to heroku.
Heroku assumes you compiled your assets locally when it sees the manifest.yml file.
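In commands, the rename-and-push suggested above looks like this (assuming the default heroku remote and a master branch):

```shell
git mv public/assets/manifest.yml public/assets/manifest.yml.bak
git commit -m "force Heroku to precompile assets"
git push heroku master
```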
I had this same problem and followed the instructions on a couple of different pages, including Heroku's own documentation. I'm posting here to help the next person, because, possibly due to changes in Rails 4, Heroku, or GitHub, the above directions did not work at all for me. However, I did get it to work, and here's how.
Yes, you probably should precompile your assets using RAILS_ENV=production bundle exec rake assets:precompile, but after that, go into your public/assets folder and copy all .css, .css.gz, .json, .yml and .js files that start with either application or manifest. Move them to a folder outside of the application's directories, just in case anything goes wrong. Verify that all of those files are deleted from the app's public/assets/ folder. Next, restart your local Rails server and verify your app still behaves as intended. Then go to your GitHub account, go into the public/assets/ directory of your repository, and delete the same files you just deleted locally. Then add/commit locally, push to git, then to Heroku, and voilà, you're done; it should be working.
The rationale behind it, I assume, is that when you push to Heroku it checks your repository for compiled assets, and because of this, even though I had precompiled locally, it was still pulling some asset configurations from previous commits. By removing those files, Heroku is forced to compile the assets during the push. One thing I didn't try, which may work, is switching to another branch, deleting those files there, and deploying that branch to Heroku, so you may want to try that first; but this is what worked for me.
One other note: when I renamed the files to .bak or .old, Heroku still treated them as the regular files and served them as if they were the originals that were not displaying properly.
