Second Schedule for a Morea Course Site - morea-framework

I'm running my course in parallel for two student tracks, but there are some differences in submission dates, so I'd like to mark assessments with two different dates and have each track get its own schedule.
Would you add another Morea keyword, e.g., morea_start_date2, and then process those into a different file? Or is there a quicker approach?
Thanks

The simplest approach I can think of is (unfortunately) a hack:
In master/src/_plugins/MoreaGenerator.rb, search for "ScheduleInfoFile". You'll see the creation of an instance near the top and the definition of the class near the bottom. That code looks for the morea_start_date (and morea_end_date) labels and writes out the schedule-info.js file into the schedule/ directory.
One approach: you can create a new class called ScheduleInfoFile2 that is a copy of ScheduleInfoFile but looks for your new keywords and writes out a file called schedule-info2.js into a schedule2/ directory.
The schedule2/ directory is a copy of the schedule/ directory, but its index.html file loads the schedule-info2.js file.
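In outline, the duplicated class might look something like this (a hypothetical sketch, not the actual MoreaGenerator.rb code; the entry keys and the writing logic would need to be copied from the real ScheduleInfoFile class):

require 'json'
require 'fileutils'

# Hypothetical sketch only -- copy the real ScheduleInfoFile class from
# master/src/_plugins/MoreaGenerator.rb and adapt it along these lines.
# This version looks for morea_start_date2 (and morea_end_date2) and
# writes schedule-info2.js into the schedule2/ directory.
class ScheduleInfoFile2
  def initialize(morea_pages)
    # Keep only pages that carry the new keyword, and collect their dates.
    @entries = morea_pages.select { |p| p.data['morea_start_date2'] }.map do |p|
      { 'title'     => p.data['morea_title'],
        'startDate' => p.data['morea_start_date2'],
        'endDate'   => p.data['morea_end_date2'] }
    end
  end

  def write(site_dest)
    dir = File.join(site_dest, 'schedule2')
    FileUtils.mkdir_p(dir)
    File.write(File.join(dir, 'schedule-info2.js'),
               "var scheduleInfo = #{JSON.generate(@entries)};")
  end
end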
That should work fine and is simple to implement, but creates a lot of duplicate code and so as a software engineer I am embarrassed to be recommending that approach. However, this use case is sufficiently rare that I haven't been motivated to generalize Morea to support N calendar pages.
Good luck and feel free to follow up if you run into problems.

Related

Using git for feedback from proofreaders

I am currently writing a text with R bookdown and asked two friends to read my text and give comments, corrections and general feedback. My source files for the text are stored on GitHub and I would like my collaborators to make changes in the files (one for each chapter) with the help of git. However, none of us are really experts on git. This makes it hard to figure out what a suitable workflow is.
For now, we decided that each of them creates his own branch so that he does not push directly into the master branch. After I have read their changes, I would like to decide what to merge into the master branch and what not. So far, it looks like each change needs to be in a separate commit, because I am not able to merge single lines from a specific commit (not sure if that is at all possible). However, this seems to require a lot of annoying and unnecessary commits. So I guess I am looking for a way to avoid that, and/or general pointers towards a good workflow for this kind of project.
A useful command here is git cherry-pick; it allows you to apply specific commits from another branch.
A good general practice is for commits to be self-contained (they make sense if applied alone) and to target one specific change (in the use case mentioned, that could be a paragraph, a section or a chapter).
In the end, if you would like to apply only specific changes from a commit, that has to happen manually: someone has to decide which parts to apply and which not. A commit can be edited using git rebase -i <branch name> before being merged. This question might also be useful.
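For example (the branch name and commit hash are placeholders):

# on master, apply one specific commit from a reviewer's branch
git checkout master
git log reviewer-branch --oneline    # find the hash of the commit you want
git cherry-pick <commit-hash>

# or reorder/split/edit the reviewer's commits before merging
git checkout reviewer-branch
git rebase -i master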
I finally found what worked for me. Basically, on my master branch I had to use
git merge --no-commit --no-ff branch-to-merge
This merges all changes into my master branch but does not immediately commit them, so they can still be staged and unstaged. Then I can decide which line changes to include by staging the ones I want to keep and discarding all the others. Finally, I commit all staged line changes et voilà, that's what I wanted.
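Spelled out as a console-only sequence (the branch name is a placeholder):

git checkout master
git merge --no-commit --no-ff branch-to-merge
git reset -- .        # the merge stages everything, so unstage it first
git add -p            # interactively re-stage only the line changes to keep
git checkout -- .     # discard the unstaged remainder from the working tree
git commit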
Sidenote: I am using GitKraken, and as a git beginner I enjoy using the GUI, but the merge with the "no-commit" and "no-fast-forward" options had to be done via the git console (at least I could not find a way to do that in the GUI). Choosing which lines to stage and which to discard is then an easy task via the GUI.

Is there a way to make R code only able to be run and not edited? Essentially read-only?

I am in the midst of writing some scripts to perform data analysis on large Excel sheets faster than by hand. However, my company has a strict quality review system where the program used needs to be validated and secure (i.e. no one can edit it, there is proof of what code was run, etc.). So essentially I would like my coworkers to be able to run my code without being able to edit the script. I am also interested in inserting prompts that they can fill in (e.g. "Which column would you like to analyze?").
Is all of this possible? I have read a few things online about file permissions but I know that these can easily be changed by the user. I also read about obfuscators but am entirely unfamiliar with their use.
One thought I have is to use R Markdown as a way of displaying which lines were run for which results. However, I believe that document could be edited as well, and it would still leave the issue of the script itself being editable.
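The kind of prompt I have in mind would be something like this in base R (a rough sketch; dat stands for the already-imported sheet, and readline() only works in an interactive session):

# ask the user which column to analyze, then summarize it
col <- readline(prompt = "Which column would you like to analyze? ")
if (!(col %in% names(dat))) stop("No such column: ", col)
print(summary(dat[[col]]))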

WordPress bloginfo function

I am just wondering how much performance would improve if we replaced every bloginfo() call in a WordPress installation with static data - for example, using http://blog.com/wp-content/themes/theme instead of bloginfo('template_url'), and so on.
My second question is: is there a way to automatically pass through all files in the plugin and theme directories and replace all the bloginfo() calls that we want to replace with static strings?
Thank you for your time and best regards.
For your first question: WordPress always performs some basic queries on every load, and one of these reads the data used by the bloginfo() calls. So using them in a template adds only the very slight overhead of the function call; it does not add any additional query overhead. Pretty negligible unless you are serving millions of hits a day.
Second question: it would be pretty trivial to write a script that does such a replacement. I really don't think it is warranted or necessary, but yes, it could be done.
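For instance, something along these lines run from the installation root (the URL is a placeholder, this assumes GNU sed, and a backup first is strongly advised):

# replace every template_url call in themes and plugins with a static URL
grep -rl "bloginfo('template_url')" wp-content/themes wp-content/plugins \
  | xargs sed -i "s|<?php bloginfo('template_url'); ?>|http://blog.com/wp-content/themes/theme|g"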
On the first question: in theory it will improve performance a little, but only very little, and from a web-development standpoint it makes future changes inconvenient.
On the second question: as far as I know, the only way is to connect to your host through FTP, modify the files locally, and then upload them back to the remote host.
As for modifying the files locally in an automated way, there are tools you can use, such as the ones reviewed here:
25 Text Batch Processing Tools Reviewed:
http://www.smashingmagazine.com/2009/04/10/25-text-batch-processing-tools-reviewed/
If you are comfortable with batch tools on Windows, you can also do it yourself with a .bat file.

Mass Thunderbird folder to Gnus nnfolder conversions

I'm pondering the idea of importing a few thousand Thunderbird folders, each folder containing many emails of course, as a set of Emacs' Gnus mailgroups. Each mailgroup name would be derived from the folder hierarchy. Because of the quantity, the work is going to be fairly tedious, so I would automate this massive import if possible.
Among the available backends, nnfolder seems the most promising in this case. I presume it would be better to populate the mailgroups from within Gnus. Otherwise, I would have to thoroughly understand the nnfolder format, and this might require many iterations before I really get it right. Moreover, as email continues to flow in, iterations may become difficult to organize properly without losing anything.
I guess I have to respool everything, under the constraint that the selected mailgroup is a function of the Thunderbird origin, overriding the standard Gnus selection mechanism. I did some Gnus coding in the past, but since I have not touched Emacs for a dozen years, it is all very rusty. I'm a bit lost about how to approach this task as efficiently and quickly as possible. So my question: how would you handle it? Or is there some clever hidden corner of Gnus that I should explore more deeply? :-)
François
P.S. After I wrote this question, I found out that Gnus has a nice helper function for this goal. The idea is to first copy all Thunderbird folder files into the ~/Mail directory, keeping their contents as they are but renaming them properly. Once this is done, M-x nnfolder-generate-active-file does everything at once: for each copied folder it edits the contents, leaves a ~ backup, generates NOV data, creates one mailgroup and, of course, adjusts the ~/Mail/active file.
To copy the folders underneath the ~/.thunderbird/LOGIN/Mail/Local Folders/ directory, I wrote a small Python script. It ignores all .msf files and recurses into .sbd directories. The folder path name, relative to Local Folders/, has all its .sbd/ components turned into periods to produce the mailgroup name, also lowercasing, turning spaces and underscores into dashes, and handling other special characters appropriately. In particular, non-ASCII characters are not handled properly yet; nnfolder confuses UTF-8 and ISO-8859-1 here and there. The script also has to skip msgfilterrules.dat and probably drafts, junk and similar folders.
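Since the script itself is not included here, a rough reconstruction of its copy-and-rename core might look like this (a sketch only, not the original; the special-character handling mentioned above is reduced to the basics):

#!/usr/bin/env python
# Sketch: walk Thunderbird's Local Folders, skip .msf index files and
# filter rules, and copy each mail folder into ~/Mail under a
# Gnus-friendly mailgroup name.
import os, re, shutil

SRC = os.path.expanduser('~/.thunderbird/LOGIN/Mail/Local Folders')
DST = os.path.expanduser('~/Mail')

os.makedirs(DST, exist_ok=True)
for dirpath, dirnames, filenames in os.walk(SRC):
    for name in filenames:
        if name.endswith('.msf') or name.lower() == 'msgfilterrules.dat':
            continue  # Drafts, Junk and similar folders would also be skipped here
        rel = os.path.relpath(os.path.join(dirpath, name), SRC)
        group = rel.replace('.sbd' + os.sep, '.')    # .sbd/ components become periods
        group = re.sub(r'[ _]', '-', group.lower())  # lowercase, spaces/underscores to dashes
        shutil.copyfile(os.path.join(dirpath, name),
                        os.path.join(DST, group))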
I notice two details requiring attention:
Thunderbird itself can be used to compact folders before copying them; otherwise one might unwittingly recover messages which were already deleted.
(setq nnmail-use-long-file-names t) is needed in ~/.emacs prior to the whole operation.
The batch transformation aborted, saying it was not able to decrypt one of the messages. I moved the offending folder out of the way, and then the lengthy operation succeeded.

Drupal - Attach files automatically by name to nodes

I need a better file attachment function. Ideally, files uploaded via FTP that have a name similar to a node's name (containing the same word) would automatically appear under that node, so that I don't have to add each file separately when many nodes are involved. Can you think of a solution? Alternatively, one that is not as laborious as always adding each file manually.
Dan.
This would take a fair bit of coding. Basically you want to implement hook_cron() and run a function that loops through every file in your FTP folder. The function will look for names of files that have not already been added to any node and then decide which node to add them to.
Bear in mind there will be a delay between uploading your files and their being attached to a node, since that only happens when the next cron job runs.
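In Drupal 7 terms, the skeleton would be something like the following (a sketch only; the upload location, the naive title-matching rule and the field_attachments field are all assumptions):

<?php
/**
 * Implements hook_cron().
 *
 * Sketch: scan an FTP upload directory and attach each untracked file to
 * a node whose title contains the file's base name.
 */
function mymodule_cron() {
  foreach (file_scan_directory('public://ftp-uploads', '/.*/') as $uri => $found) {
    // Skip files that are already tracked in the file_managed table.
    if (file_load_multiple(array(), array('uri' => $uri))) {
      continue;
    }
    // Naive matching: the first node whose title contains the file name.
    $nid = db_query_range('SELECT nid FROM {node} WHERE title LIKE :name',
      0, 1, array(':name' => '%' . $found->name . '%'))->fetchField();
    if ($nid) {
      // Register the file with Drupal, then attach it to the node.
      $file = new stdClass();
      $file->uri = $uri;
      $file->filename = $found->filename;
      $file->status = FILE_STATUS_PERMANENT;
      $saved = file_save($file);
      $node = node_load($nid);
      $node->field_attachments[LANGUAGE_NONE][] = (array) $saved;
      node_save($node);
    }
  }
}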
This is not a good solution, and if I could give you any advice it would be not to do it. The reason you upload files through the Drupal interface is so that they are tracked in the files table and can be reused.
Also, the approach you're proposing leaves massive ambiguity as to which file will go where. Consider this:
You have two nodes, one about cars and one about motorcycle sidecars. Your code will have to be extremely complex to make the decision of which node to add to if the file you've uploaded is called 'my-favourite-sidecar.jpg'.
