zope.testrecorder on Plone 4.1

I want to install zope.testrecorder to create some functional tests for a Plone 4.1 package I'm writing.
I followed the instructions in both the package's INSTALL.txt file and Martin Aspeli's tutorial, Using zope.testrecorder to record functional tests, but they seem to be outdated.

Have a look at http://davidjb.com/blog/2010/03/plonezope-utilising-zope-testrecorder-for-unit-testing
There he says:
"""
Go to http://zopehost:port/++resource++recorder/index.html to access the test recorder, and not try and add an object in the root of Zope (or elsewhere).
"""
Also make sure to install zope.testrecorder as an egg and register its ZCML (there's no autoinclude at the moment) in your buildout -- instead of unzipping the tarball inside the products/ directory as Aspeli suggested in his somewhat outdated tutorial.
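A minimal sketch of what that can look like in buildout (assuming your Zope instance part is named [instance] and uses plone.recipe.zope2instance; adjust to your own setup):

[instance]
recipe = plone.recipe.zope2instance
eggs =
    Plone
    zope.testrecorder
# no ZCML autoinclude yet, so it must be listed under zcml explicitly
zcml =
    zope.testrecorder

After re-running bin/buildout and restarting the instance, the recorder should be reachable at the ++resource++recorder URL quoted above.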

Nix tutorial on installing in home directory

I am trying to follow this tutorial in order to install the Nix package manager in my home directory instead of /nix.
I am doing the PRoot installation (see step 2 in the tutorial). At the end, in the
"Building native packages" section, the tutorial proposes a way to run
packages without PRoot:
To run packages natively (without PRoot) they have to be built from source because all paths to the nix store are hard-coded. It is simple, really:
mkdir $HOME/nix
nix-channel --update
env NIX_STORE_DIR=$HOME/nix nix-env -i nix
And now your Nix store gets built up using the new paths. The built binaries can be run directly from there.
I did that, but I don't see how it frees me from PRoot. If I don't set up the /nix mount point with PRoot, nothing works (there is no nix-env executable and
I can't install new packages).
Should this NIX_STORE_DIR environment variable be put in my .bashrc?
It seems I always need to run PRoot because ~/.nix-profile points to
a /nix/... directory:
.nix-profile -> /nix/var/nix/profiles/default
There are more steps in the tutorial (5 and 6) - should I follow them? They seem to apply only when using the manual installation (step 4),
although that is not explicit.
Any help would be appreciated :)
For anyone stumbling on this old question: there is no currently supported way to install Nix without root. The above wiki was moved to https://nixos.wiki/wiki/Nix_Installation_Guide. It may well be out of date. PRoot could work, but even then, rebuilding the whole store at a different path is not a good idea, not least because the binary caches won't help and you'll need to build everything.
I suggest trying Nix in a virtual machine or cloud server.
Future people from Google: it's still unsupported, but it does work. The script here installs a couple of dependencies, builds a temporary Nix, and uses that to install a proper version in your directory of choice.
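For reference, the environment-variable setup the question asks about would go into ~/.bashrc roughly like this; it is only a sketch of what the (now-outdated) tutorial describes, and the caveats above about the install being unsupported still apply:

# Store location from the tutorial quoted above; exporting it in ~/.bashrc
# makes it apply to every new shell, which is what the question is asking.
export NIX_STORE_DIR="$HOME/nix"
# ~/.nix-profile still points at /nix/var/nix/profiles/default, so this PATH
# entry is an assumption and only helps once that symlink is repointed.
export PATH="$HOME/.nix-profile/bin:$PATH"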

R FAQ for package tcltk mentions "teacup". What is this and how can I use it?

In the R FAQ section 4.6 (Package TclTk does not work) I found the following sentence:
... although they [missing Tcl/tk packages] may be downloaded via the Teacup facility
What is "teacup"? How can I install and use it?
I am using RStudio running on Ubuntu Linux and Windows 7.
Teacup is a program that ships as part of ActiveTcl, a commercial zero-cost distribution of Tcl (and Tk and many other packages) for various platforms. It does package management, handling the download, installation and upgrading of packages from a remote repository. It is not open source, though Tcl itself is (as are the majority of packages that aren't single-company-specific).
If you've got it installed, you use these commands from a shell:
teacup update-self
teacup update
Depending on where your Tcl installation is, you might need to elevate privileges to make these commands work. How you do this is platform-dependent; on Unix it's usually simplest to use sudo for each of the commands, whereas on Windows it is probably easier to create an elevated command shell and run them inside that.
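On Unix that typically amounts to the following (just the two commands above run via sudo):

sudo teacup update-self
sudo teacup update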
Depending on your site, you might need to configure a web proxy with teacup proxy. Try without first.
If you're using a non-ActiveTcl installation but you have an ActiveTcl installation present, you can still use teacup. You just need to use teacup link to connect that Tcl installation to the teacup local repository. This is slightly more complex because you can have multiple repositories on one system (though I've never needed that).
First, you find where the repository is:
teacup default
Then you need to link the shell to the repository:
teacup link make $PATH_FROM_TEACUP_DEFAULT $LOCATION_OF_TCLSH_TO_LINK
Making this work with RStudio will be a matter of determining which Tcl installation it is using. If it's already an ActiveTcl, you just need the first part of this answer. Otherwise, you need the second part as well. Also note that this pretty much requires that you be using either Tcl 8.5 or 8.6; there are no guarantees for older, unsupported versions.
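One way to see which Tcl installation R (and therefore RStudio) is linked against is to ask the tcltk package itself; a sketch, run from a shell with R on the PATH, assuming tcltk loads at all:

# Prints the Tcl library directory and patch level that R's tcltk is using
Rscript -e 'library(tcltk); cat(tclvalue(.Tcl("info library")), tclvalue(.Tcl("info patchlevel")), sep = "\n")'

If the reported library path is inside an ActiveTcl installation, only the first part of this answer applies; otherwise you need the teacup link step as well.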

Compiling haskell module Network on win32/cygwin

I am trying to compile Network.HTTP (http://hackage.haskell.org/package/network) on win32/cygwin. However, it fails with the following message:
Setup.hs: Missing dependency on a foreign library:
* Missing (or bad) header file: HsNet.h
This problem can usually be solved by installing the system package that
provides this library (you may need the "-dev" version). If the library is
already installed but in a non-standard location then you can use the flags
--extra-include-dirs= and --extra-lib-dirs= to specify where it is.
If the header file does exist, it may contain errors that are caught by the C
compiler at the preprocessing stage. In this case you can re-run configure
with the verbosity flag -v3 to see the error messages.
Unfortunately it does not give more clues. HsNet.h includes sys/uio.h, which actually should not be included and should be configured correctly.
Don't use Cygwin; instead follow Johan Tibell's way:
Installing MSYS
Install the latest Haskell Platform. Use the default settings.
Download version 1.0.11 of MSYS. You'll need the following files:
MSYS-1.0.11.exe
msysDTK-1.0.1.exe
msysCORE-1.0.11-bin.tar.gz
The files are all hosted on haskell.org as they're quite hard to find in the official MinGW/MSYS repo.
Run MSYS-1.0.11.exe followed by msysDTK-1.0.1.exe. The former asks you if you want to run a normalization step. You can skip that.
Unpack msysCORE-1.0.11-bin.tar.gz into C:\msys\1.0. Note that you can't do that using an MSYS shell, because you can't overwrite the files in use, so make a copy of C:\msys\1.0, unpack it there, and then rename the copy back to C:\msys\1.0.
Add C:\Program Files\Haskell Platform\VERSION\mingw\bin to your PATH (see the sketch below). This is necessary if you ever want to build packages that use a configure script, like network, as configure scripts need access to a C compiler.
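In an MSYS shell that PATH change might look like this (a sketch only; the exact VERSION directory depends on your Haskell Platform installation):

export PATH="/c/Program Files/Haskell Platform/VERSION/mingw/bin:$PATH"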
These steps are what Tibell uses to compile the Network package for Windows, and I have used this myself successfully several times on most of the Haskell Platform releases.
It is possible to build network on win32/cygwin, and the above steps (by Jonke), though useful, may not be necessary.
While doing the configuration step, specify
runghc Setup.hs configure --configure-option="--build=mingw32"
so that the library is configured for mingw32; otherwise you will get link errors or "undefined reference" errors when you try to link against or use the network library.
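For completeness, the whole Setup.hs sequence with that option would be along these lines (a sketch; build and install are the standard Cabal follow-up steps, nothing specific to network):

runghc Setup.hs configure --configure-option="--build=mingw32"
runghc Setup.hs build
runghc Setup.hs install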
This, combined with @Yogesh Sajanikar's answer, made it work for me (on win64/cygwin):
Make sure the gcc on your path is NOT the Mingw/Cygwin one, but the
C:\ghc\ghc-6.12.1\mingw\bin\gcc.exe
(Run
export PATH="/cygdrive/.../ghc-7.8.2/mingw/bin:$PATH"
before running cabal install network in the Cygwin shell)

What makes "paster addcontent" fail for a package created with Zopeskel template "archetype"?

Here is the situation:
fresh Plone 4.2 buildout
a fresh package created using Zopeskel 2.21.1 with template 'archetype'
and configured in my buildout using mr.developer
Trying to add some content types inside my package fails with:
[ajung#dev1 nva.aktionsmittel]$ bin/paster addcontent
Command 'addcontent' not known (you may need to run setup.py egg_info)
Running setup.py egg_info did not help.
setup.py contains:
paster_plugins=["ZopeSkel"]
setup.cfg contains:
[zopeskel]
template = archetype
What is the magic behind the local commands that makes "paster addcontent" work?
It worked in other contexts as it should?!
ZopeSkel 2 issues
You are following a bad tutorial. Please make sure that:
You follow the specific instructions here: http://collective-docs.readthedocs.org/en/latest/getstarted/paste.html#adding-zopeskel-to-your-buildout - the paster command must come from buildout (see the sketch after this list).
If you are not following the link above then please give the link to the page whose instructions you are following, and I can burn that page as it contains misleading instructions.
Make sure that the paster you are using comes from buildout (from your command line it doesn't seem to be so).
Make sure your egg is registered correctly in your buildout's eggs = section.
Make sure your setup.py contains the necessary boilerplate: http://collective-docs.readthedocs.org/en/latest/getstarted/paste.html#how-paster-local-commands-work (note: the example is for ZopeSkel 3+).
This is the way to make paster correctly aware of your egg and its dependencies, so that local commands can work.
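A sketch of the kind of buildout part the linked documentation describes (the part name and egg list are illustrative; the key point is that bin/paster and bin/zopeskel are generated by buildout and therefore see your egg):

[buildout]
parts =
    instance
    zopeskel

[zopeskel]
recipe = zc.recipe.egg
eggs =
    ZopeSkel
    PasteScript
    ${buildout:eggs}

After re-running bin/buildout, call bin/paster (not a globally installed paster) for the local commands.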
ZopeSkel 3 issue (seemingly unrelated)
There was a recent change in ZopeSkel, meaning that if you use ZopeSkel 3+ you need to be in the src folder when running the command.
See note here:
https://github.com/collective/templer.plone.localcommands/#executing-local-commands
In order for the paster local command to run, it must be called from the same directory that contains the .egg-info directory (or a child directory inside it). If paster is unable to find the .egg-info directory, it cannot run a local command. Paster uses the location of the .egg-info directory to locate setup.cfg, which is then used to determine if any local command entry points are available.
Check to see that you have an .egg-info directory generated inside your package, and that you are invoking paster from the same location or a child folder.
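In practice that means something like the following, assuming your package lives under src/ via mr.developer (the paths and the template name are illustrative):

cd src/nva.aktionsmittel        # the directory that holds the .egg-info
../../bin/paster addcontent contenttype

Here contenttype is one of the ZopeSkel content templates; the important part is running the buildout-generated paster from inside the package directory.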

Dependency management in R

Does R have a dependency management tool to facilitate project-specific dependencies? I'm looking for something akin to Java's maven, Ruby's bundler, Python's virtualenv, Node's npm, etc.
I'm aware of the "Depends" clause in the DESCRIPTION file, as well as the R_LIBS facility, but these don't seem to work in concert to provide a solution to some very common workflows.
I'd essentially like to be able to check out a project and run a single command to build and test the project. The command should install any required packages into a project-specific library without affecting the global R installation. E.g.:
my_project/.Rlibs/*
Unfortunately, Depends: within the DESCRIPTION file is all you get, for the following reasons:
R itself is reasonably cross-platform, but that means we need this to work across platforms and OSs
Encoding Depends: beyond R packages requires encoding the Depends in a portable manner across operating systems---good luck encoding even something simple such as 'a PNG graphics library' in a way that can be resolved unambiguously across systems
Windows does not have a package manager
AFAIK OS X does not have a package manager that mixes what Apple ships and what other Open Source projects provide
Even among Linux distributions, you do not get consistency: just take RStudio as an example, which comes in two packages (both of which provide their dependencies!) for RedHat/Fedora and Debian/Ubuntu
This is a hard problem.
The packrat package is precisely meant to achieve the following:
install any required packages into a project-specific library without affecting the global R installation
It allows installing different versions of the same packages in different project-local package libraries.
I am adding this answer even though this question is 5 years old, because this solution apparently didn't exist yet at the time the question was asked (as far as I can tell, packrat first appeared on CRAN in 2014).
Update (November 2019)
The new R package renv replaced packrat.
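A minimal sketch of bootstrapping either tool from a shell (the package names and init() functions are real; the CRAN mirror choice is just an example):

# packrat: create and use a project-local library in the current directory
Rscript -e 'install.packages("packrat", repos = "https://cloud.r-project.org"); packrat::init()'
# renv, its successor, follows the same pattern
Rscript -e 'install.packages("renv", repos = "https://cloud.r-project.org"); renv::init()'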
As a stop-gap, I've written a new rbundler package. It installs project dependencies into a project-specific subdirectory (e.g. <PROJECT>/.Rbundle), allowing the user to avoid using global libraries.
rbundler on Github
rbundler on CRAN
We've been using rbundler at Opower for a few months now and have seen a huge improvement in developer workflow, testability, and maintainability of internal packages. Combined with our internal package repository, we have been able to stabilize development of a dozen or so packages for use in production applications.
A common workflow:
Check out a project from github
cd into the project directory
Fire up R
From the R console:
library(rbundler)
bundle('.')
All dependencies will be installed into ./.Rbundle, and an .Renviron file will be created with the following contents:
R_LIBS_USER='.Rbundle'
Any R operations run from within this project directory will adhere to the project-specific library and package dependencies. Note that, while this method uses the package DESCRIPTION to define dependencies, it needn't have an actual package structure. Thus, rbundler becomes a general tool for managing an R project, whether it be a simple script or a full-blown package.
You could use the following workflow:
1) create a script file containing everything you want to set up, and store it in your project directory as e.g. projectInit.R
2) source this script from your .Rprofile (or any other file executed by R at startup) with a try statement
try(source("./projectInit.R"), silent=TRUE)
This will guarantee that even when no projectInit.R is found, R starts without an error message.
3) if you start R in your project directory, the projectInit.R file will be sourced if present in the directory and you are ready to go
This is from a Linux perspective, but it should work the same way under Windows and Mac as well.
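As an illustration, such a projectInit.R could contain something like the following (a hypothetical sketch; the .Rlibs directory name is taken from the question, not prescribed by this workflow):

cat > projectInit.R <<'EOF'
# Put a project-local library first on the library path, creating it if needed.
local_lib <- file.path(getwd(), ".Rlibs")
dir.create(local_lib, showWarnings = FALSE, recursive = TRUE)
.libPaths(c(local_lib, .libPaths()))
EOF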

Resources