plone buildout with kgs in offline mode - plone

My buildout.cfg for a plone project uses a kgs (known good set):
[buildout]
extends = http://dist.plone.org/release/4.2/versions.cfg
Since this is a network dependency, buildout fails when I'm offline:
$ bin/buildout -o
While:
Initializing.
Error: Couldn't download 'http://dist.plone.org/release/4.2/versions.cfg' in offline mode.
What is the best practice for working in offline mode while using KGS references? I assume there is some way to cache external references. Of course I could run a caching proxy locally, but IMHO there must be a more lightweight solution.

We always download the KGS URLs to local files and use those as the extends targets instead:
curl -o plone-versions.cfg http://dist.plone.org/release/4.2.4/versions.cfg
where our versions.cfg reads:
[buildout]
extends =
zopeapp-versions.cfg
ztk-versions.cfg
zope-versions.cfg
plone-versions.cfg
We add a header to each file naming the original source, and comment out the URL extends in the downloaded files:
# Sourced from http://dist.plone.org/release/4.2.4/versions.cfg
[buildout]
# extends = http://download.zope.org/zopetoolkit/index/1.0.7/zopeapp-versions.cfg
# http://download.zope.org/Zope2/index/2.13.19/versions.cfg
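If you have many nested extends files, the commenting-out step can be scripted. A rough Python sketch of the idea; `localize_extends` is just an illustrative helper (not part of buildout), and the URL-to-filename mapping is supplied by hand after you download each file:

```python
import re

def localize_extends(cfg_text, url_to_file):
    """Comment out remote extends URLs in a buildout cfg and insert
    the local filenames they were downloaded to."""
    out = []
    for line in cfg_text.splitlines():
        m = re.search(r"https?://\S+", line)
        if m and m.group(0) in url_to_file:
            # keep the original source visible as a comment
            out.append("# " + line)
            out.append(line.replace(m.group(0), url_to_file[m.group(0)]))
        else:
            out.append(line)
    return "\n".join(out)
```

Run it over each downloaded versions.cfg so the whole extends chain resolves locally.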

You can use the extends cache (which can also be shared between different machines such as your development machine and the production machine).
Setup
Add a file at ~/.buildout/default.cfg for enabling the cache for all buildouts on this machine:
[buildout]
extends-cache = /path/to/your/extends/cache
Or you can do the same configuration in a specific buildout.
This will create files with hashed names in the directory you configure. Since the filename is derived from the URL of the extends target, the cache can easily be copied around. So if your server never has an internet connection, you can run the buildout on another machine with extends-cache enabled and copy the cache directory across.
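For illustration, the cache key is just a hash of the URL (if I remember correctly buildout derives it with MD5; the exact scheme may differ between buildout versions), which is why the same URL maps to the same cached file on every machine:

```python
import hashlib

def extends_cache_key(url):
    # A sketch of the idea: the cached extends file is named after a
    # hash of its URL, so the cache directory is portable as-is.
    return hashlib.md5(url.encode("utf-8")).hexdigest()
```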

I just made an odd observation which could be of interest:
Changing the extends-url from
extends = http://dist.plone.org/release/4.2/versions.cfg
to
extends = http://dist.plone.org/release/4-latest/versions.cfg
will let buildout run without any errors (why?)
Might be a faster solution in your case, but Martijn's answer is of course the way to go for a replicable, controlled development environment.

Related

Vaadin Flow 14, Jetty embedded and static files

I'm trying to create an app based on Jetty 9.4.20 (embedded) and Vaadin Flow 14.0.12.
It is based on the very nice project vaadin14-embedded-jetty.
I want to package the app as one main JAR, with all dependency libs in a 'libs' folder next to it.
I removed maven-assembly-plugin and use maven-dependency-plugin and maven-jar-plugin instead. In maven-dependency-plugin I added an <execution>get-dependencies</execution> section where I unpack the directories META-INF/resources/ and META-INF/services/ from the Vaadin Flow libs into the resulting JAR.
In this case the app works fine. But if I comment out the <execution>get-dependencies</execution> section, the resulting package doesn't contain those directories and the app doesn't work.
It simply cannot serve some static files from the Vaadin Flow libs.
This error only occurs if I launch the packaged app with ...
$ java -jar vaadin14-embedded-jetty-1.0-SNAPSHOT.jar
... but from IntelliJ IDEA it launches correctly.
There was an opinion that Jetty is starting with the wrong ClassLoader and cannot serve requests for static files inside the JAR libs.
The META-INF/services/ files from the Jetty libs MUST be preserved.
That's important for Jetty to use java.util.ServiceLoader.
If you are merging the contents of JAR files into a single JAR file, that's called an "uber jar".
There are many techniques for doing this, but if you use maven-assembly-plugin or maven-dependency-plugin to build this uber jar, you will not be merging critical files that have the same name across multiple JAR files.
Consider using maven-shade-plugin and its associated Resource Transformers to merge these files properly.
http://maven.apache.org/plugins/maven-shade-plugin/
http://maven.apache.org/plugins/maven-shade-plugin/examples/resource-transformers.html
The ServicesResourceTransformer is the one that merges META-INF/services/ files, use it.
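A minimal shade-plugin configuration enabling that transformer might look like this (the version number is illustrative; check the plugin page for the current release):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- concatenates META-INF/services/* files with the same name
               instead of letting one jar's copy overwrite the others -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```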
As for static content, that works fine, but you have to set up your Base Resource properly.
Looking at your source, you do the following ...
final URI webRootUri = ManualJetty.class.getResource("/webapp/").toURI();
final WebAppContext context = new WebAppContext();
context.setBaseResource(Resource.newResource(webRootUri));
That won't work reliably in 100% of cases (as you have noticed when running in the IDE vs command line).
Class.getResource(String) is only reliable if you look up a file (not a directory).
Consider that the Jetty Project Embedded Cookbook recipes have techniques for this.
See:
WebAppContextFromClasspath.java
ResourceHandlerFromClasspath.java
DefaultServletFileServer.java
DefaultServletMultipleBases.java
XmlEnhancedServer.java
MultipartMimeUploadExample.java
Example:
// Figure out what path to serve content from
ClassLoader cl = ManualJetty.class.getClassLoader();
// We look for a file, as ClassLoader.getResource() is not
// designed to look for directories (we resolve the directory later)
URL f = cl.getResource("webapp/index.html");
if (f == null)
{
    throw new RuntimeException("Unable to find resource directory");
}
// Resolve file to directory
URI webRootUri = f.toURI().resolve("./").normalize();
System.err.println("WebRoot is " + webRootUri);
WebAppContext context = new WebAppContext();
context.setBaseResource(Resource.newResource(webRootUri));

How do I add a Nexus resolver after local repository, but before the default repositories?

We have an internal Nexus repository that we use to publish artifacts to and also to cache external dependencies (from Maven Central, Typesafe, etc.)
I want to add the repository as a resolver in my SBT build, under the following restrictions:
The settings need to be part of the build declaration (either .sbt or .scala, but not in the "global" sbt settings)
If a dependency exists in the local repository, it should be taken from there. I don't want to have to access the network to get all the dependencies for every build.
If a dependency doesn't exist locally, sbt should first try to get it from the Nexus repository before trying the external repositories.
I saw several similar questions here, but didn't find any solution that does exactly this. Specifically, the code I currently have is:
externalResolvers ~= { rs => nexusResolver +: rs }
But when I show externalResolvers the Nexus repo appears before the local one.
So far, I've come up with the following solution:
externalResolvers ~= { rs =>
  val grouped = rs.groupBy(_.isInstanceOf[FileRepository])
  val fileRepos = grouped(true)
  val remoteRepos = grouped(false)
  fileRepos ++ (nexusResolver +: remoteRepos)
}
It works, but it's kinda dirty... If anyone has a cleaner solution, I'd love to hear it.

Getting a Buildout Cache, so that buildout will work when download.zope.org is down

This blog post summarizes how to use buildout when download.zope.org is down: http://devblog.4teamwork.ch/blog/2013/06/06/download-dot-zope-dot-org-is-down-how-to-fix-buildout/ However, it is specific to Plone 4.2.
How do I go about getting a similar cache for Plone 4.3.1 so that my buildout won't fail when download.zope.org is down?
In the case of this particular outage, you don't need a cache[1]: you need a valid extends = target. I've just fixed my Plone 4.3 buildout to avoid download.zope.org[2]. This should work for you:
[buildout]
extends = https://raw.github.com/plock/pins/master/plone-4-3
[plone]
#eggs +=
[1] Because Plone extends configuration files located on download.zope.org: http://dist.plone.org/release/4.3.1/versions.cfg
[2] As soon as I find the appropriate Zope configuration files, I'll fix 4.3.1 too.

"instance" part is not deleted by buildout when switching to 2 Plone instances using buildout section extension

We're switching from one to two Zope instances for our production Plone deployment. I have the following buildout structure defined:
buildout.cfg
[buildout]
extends = app.cfg
... some environment specific stuff
app.cfg
[buildout]
extends = base.cfg
parts =
zope2
productdistros
instance1
instance2
zopepy
supervisor
[instance1]
<= instance
http-address = 18081
[instance2]
<= instance
http-address = 18082
base.cfg
[buildout]
parts =
zope2
productdistros
instance
zopepy
... bulk of buildout configuration suitable for both server and development
Testing this, I expected this buildout configuration to delete the existing instance part and replace it with instance1 and instance2. However, the instance part is not deleted: it can still be found in the bin and parts directories.
[zopetest#dev home]$ bin/buildout
Updating zope2.
Updating fake eggs
Updating productdistros.
Updating instance1.
Updating instance2.
Updating instance.
Updating zopepy.
Updating supervisor.
I have a very similar set-up on a different zope instance that was configured this way from the start and it has no "instance" part.
We're running zc.buildout 1.4.4 with Python 2.4.6 building Plone 3.3.6.
I've tried the following with no change:
* upgrading to buildout 1.5.2
* removing the parts assignment from base.cfg
This is a "feature" of plone.recipe.zope2instance. Traditionally the recipe has avoided removing the instances and scripts it creates for running Plone (whether by poor design or deliberate decision, I am not sure).
For whatever it is worth, as of version 4.2.0 there is support for generating non-Plone scripts (similar to zc.recipe.egg), and those scripts are managed properly. See:
https://github.com/plone/plone.recipe.zope2instance/blob/master/src/plone/recipe/zope2instance/__init__.py#L119
for all the gory details. (I believe the "feature" is that the install method does not return a tuple, unless you are using scripts in which case a tuple containing the scripts is returned.)
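For context, this is the contract buildout relies on: whatever paths install() returns get recorded in .installed.cfg and deleted when the part disappears. A minimal recipe sketch (not the actual plone.recipe.zope2instance code) to show the idea:

```python
import os

class MinimalRecipe:
    """Skeleton zc.buildout recipe illustrating the uninstall contract."""

    def __init__(self, buildout, name, options):
        self.name, self.options = name, options
        self.path = os.path.join(
            buildout["buildout"]["parts-directory"], name)

    def install(self):
        os.makedirs(self.path, exist_ok=True)
        # Paths returned here are written to .installed.cfg; buildout
        # deletes them when the part is removed from the parts list.
        # A recipe that returns nothing (as zope2instance effectively
        # does for its instance) leaves buildout with nothing to clean up.
        return (self.path,)

    def update(self):
        pass
```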
This was in fact due to the zc.buildout automatic part selection feature:
When a section with a recipe is referred to, either through variable substitution or by an initializing recipe, the section is treated as a part and added to the part list before the referencing part
I had the following section
[zopepy]
# For more information on this step and configuration options see:
# http://pypi.python.org/pypi/zc.recipe.egg
recipe = zc.recipe.egg
eggs = ${instance:eggs}
Since it referenced the "instance" section, "instance" was included in the list of parts.
To fix this, I changed it to copy the eggs value of instance:
eggs =
Plone
${buildout:eggs}
and then ran bin/buildout
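For completeness, the fixed section might read like this in full (assuming your [buildout] section defines an eggs option, as in the original setup):

```cfg
[zopepy]
recipe = zc.recipe.egg
eggs =
    Plone
    ${buildout:eggs}
interpreter = zopepy
```

With no ${instance:...} substitution left anywhere, the old instance part is no longer pulled back into the parts list.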

How do I get configuration from buildout in my Plone products?

How do I include configuration information from Buildout in my Plone products?
One of the Plone products I'm working on reads and writes info to and from the filesystem. It currently does that inside the egg namespace (for example inside plone/product/directory), but that doesn't look quite right to me.
The idea is to store that information in a configurable path, just like iw.fss and iw.recipe.fss do.
For example, save that info to ${buildout:directory}/var/mydata.
You can add configuration sections to your zope.conf file via the zope-conf-additional option of the plone.recipe.zope2instance part:
[instance]
recipe = plone.recipe.zope2instance
...
zope-conf-additional =
<product-config foobar>
spam eggs
</product-config>
Any named product-config section is then available as a simple dictionary to any Python product that cares to look for it; the above example creates a 'foobar' entry, which is a dict with a 'spam': 'eggs' mapping. Here is how you access it from your code:
from App.config import getConfiguration
config = getConfiguration()
configuration = config.product_config.get('foobar', dict())
spamvalue = configuration.get('spam')
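Building on that, your product code can fall back to a default when no product-config is present; a small sketch (the 'foobar' and 'datadir' names are just this example's, not a Plone convention):

```python
import os

def data_dir_from_config(product_config, default):
    """Return the data directory named in a <product-config foobar>
    section, falling back to a default such as
    ${buildout:directory}/var/mydata."""
    cfg = product_config.get("foobar", {})
    path = cfg.get("datadir", default)
    os.makedirs(path, exist_ok=True)  # make sure the directory exists
    return path

# Inside Zope you would obtain product_config via:
#   from App.config import getConfiguration
#   product_config = getConfiguration().product_config
```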
