Drupal 9 in an airgap or without composer - drupal

Does anyone have any experience with Drupal 9, either without composer or in an air gap? Basically we're trying to run it on an air-gapped server. Composer obviously wants to access the internet to check for and download packages.

You'll need to run composer to install your packages and create your autoload files to make it all work.
You could create your own local package repository and store the packages you need there; however, this would be a large undertaking given all the dependencies Drupal core and contrib modules use. You'd need to manage them all yourself and keep your local versions synced with the public versions, especially for security updates.
If you need to do that anyway, you're better off just using the public repos.
Documentation on composer repos is here:
https://getcomposer.org/doc/05-repositories.md
Near the bottom it shows how to disable the default packagist repo:
https://getcomposer.org/doc/05-repositories.md#disabling-packagist-org
A much simpler alternative would be to do development in a non-air-gapped environment, so you have access to the packages you need and can run Composer commands to install everything. Then, once your code is in the state you need, copy it to your air-gapped server to run. After composer install has run, nothing else is required; just make sure you include the vendor directory with all your dependencies, as well as Drupal core and contrib modules.
The server you run your Drupal instance on does not even require composer to be installed.
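If you do go the local-repository route, disabling packagist.org and pointing Composer at your own mirror is only a few lines of composer.json (the internal URL below is a placeholder):

```json
{
    "repositories": [
        { "packagist.org": false },
        { "type": "composer", "url": "https://composer.mirror.internal" }
    ]
}
```

With packagist.org disabled, Composer will only resolve packages from the repositories you list, so every dependency has to be available on your mirror.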

Related

Can a MSIX package use an external file for user settings?

We are evaluating the migration from our current client/server application to .NET Core. The 3.0 release added support for WinForms, which we need for our client, but ClickOnce will not be supported.
Our solution is installed on-premise, and we need to include settings (among others) such as the address of the application server. We dynamically create ClickOnce packages that can be used to install and update the clients and that include the settings. This works like a charm today: the users install the client using the ClickOnce package, and every time we update the software we regenerate these packages at the customer's site, so they automatically get the new version with the right settings.
We are looking at MSIX as an alternative, but we have got a question:
- Is it possible to add some external settings files to the MSIX package that will be used (deployed) when installing?
The package for the software itself could be statically generated, but how could we distribute the settings to the clients on first install / update?
MSIX has support for modification packages. This is close to what you want: the customization is done with a separate package, installed after you install the main MSIX package of your app.
It cannot be installed at the same time as your main app. The OS checks whether the main app is installed when you try to install the modification package, and rejects the installation if the main app is not found on the machine.
The modification package is a standalone package, installed in a separate location. Check the link I included: there is a screenshot of a PowerShell window where you can see that the install paths for the main package and the modification package are different.
At runtime (when the user launches the app) the OS knows these two packages are connected and merges their virtual file systems and registries, so the app "believes" all the resources are in one package.
This means you can update the main app and the modification package separately, and deploy them as you wish.
And if we update the modification package itself (without touching the main), will it be re-installed to all the clients that used it?
How do you deploy the updates? Do you want to use an auto-updater tool over the internet, or are these users managed inside an internal company network, getting all their app updates from tools like SCCM?
Modification packages were designed mostly for IT departments to use, and that is what I understood you would need too.
A modification package is deployed via SCCM or other such tools just like the main package; there are no differences.
For ISVs, I believe optional packages are a better solution.

How to install graphite and all its prerequisites as a user without root permissions?

We are trying to install Graphite to capture Neo4j database metrics. The installation will be done under the neo4j user, which does not have root permissions. On the web there are multiple pages which detail this procedure, but most of them fail at one stage or another. Is there a way to install all components of Graphite and its prerequisites as a non-root user?
If you don't have root, then you are most likely not supposed to install applications on that server, and should ask your system administrator to install it for you.
That said: on which prerequisite does it fail? You can install all the Python parts in a virtualenv. For cairo and other system requirements it's a bit harder, but still doable. You'll also have some issues getting it started automatically after reboot.
I'm actually working on making installation easier and updating the documentation. I could use some feedback.
https://github.com/piotr1212/graphite-web/blob/setuptools/docs/install.rst#using-virtualenv <- work in progress. You will have to follow virtualenv -> install from source, but replace "graphite-project" with "piotr1212" in the GitHub URLs and run git checkout setuptools in every directory before running pip install .
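A minimal sketch of the virtualenv route, assuming Python 3 is available to the unprivileged user (the path is a scratch directory here so the sketch is self-contained; in practice you'd use something like $HOME/graphite):

```shell
# create an isolated environment owned by the current user; no root needed
VENV=$(mktemp -d)/graphite
python3 -m venv "$VENV"
. "$VENV/bin/activate"
# from here on, installs land inside $VENV, not the system site-packages, e.g.:
#   pip install whisper carbon graphite-web
command -v pip    # now resolves to the virtualenv's pip
```

This covers only the Python parts; as noted above, cairo and the other system libraries still have to be present on the machine, and you'll need your own arrangement for starting the daemons at boot.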

Deploying to a production environment with Symfony Flex and --no-dev

I have a couple of large Symfony projects, and have noticed that after updating everything to Symfony 4 (Flex), when our deployment automation runs its normal process of:
composer install --no-dev
We end up with (for example) this:
Symfony operations: 2 recipes (72fad9713126cf1479bb25a53d64d744)
- Unconfiguring symfony/maker-bundle (>=1.0): From github.com/symfony/recipes:master
- Unconfiguring phpunit/phpunit (>=4.7): From github.com/symfony/recipes:master
Then, as expected, this results in changes to symfony.lock and config/bundles.php, plus whatever else, depending on what was included in require-dev in composer.json.
None of this is breaking, exactly, but it is annoying to have a production deploy that no longer has a clean git status output, and can lead to confusion as to what is actually deployed.
There are various workarounds for this, for example I could just put everything in require rather than require-dev since there is no real harm in deploying that stuff, or I could omit the --no-dev part of the Composer command.
But really, what is the right practice here? It seems odd that there is no way to tell Flex to make no changes to configuration if you are just deploying a locked piece of software. Is this a feature request, or have I missed some bit of configuration here?
This was briefly the case on old Symfony Flex 1 versions.
It was reported here, and fixed here.
If this happens to you, it means you have a very old installation on your hands, and you should at least update Symfony Flex to a more current version (Symfony Flex 1 does not even work anymore, so switching to Flex 2 if possible is the only way, or else simply remove Flex in production).
If you deploy to prod from your master branch, you can set up a deploy branch instead, and in that branch you can prevent certain files from being merged in. See this post for more detail. This creates a situation in which you have a master branch and a version branch (for example 3.21.2): devs check out master, work on it, then merge their changes into the version branch, and from there you pick and choose what gets deployed to prod. (There is a small balancing act here: you'll want to merge all dev changes into master until it matches your version branch, and make sure master matches the version after you've deployed. This adds a bit of work and you have to keep an eye on it.)
Another option is to separate your git repository from your deployment directory. In this example, a bare git directory is created in /var/repo/site.git, the deploy directory is /var/www/domain.com, and a post-receive git hook is used to automatically update the www directory after a push is received by the repo. You then run composer, npm, gulp, and whatever else in the www directory, and the git directory is left as is.
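The hook pattern described here is only a line or two. It is sketched below as a function so both paths are visible; in a real post-receive hook they would be hardcoded (e.g. /var/repo/site.git and /var/www/domain.com):

```shell
# force-checkout the latest pushed tree from a bare repo into the web root
deploy_checkout() {
    bare_repo=$1    # repository that receives the push, e.g. /var/repo/site.git
    web_root=$2     # live directory to refresh, e.g. /var/www/domain.com
    GIT_DIR="$bare_repo" GIT_WORK_TREE="$web_root" git checkout -f
}
```

In the actual hook file (hooks/post-receive inside the bare repo, made executable) this collapses to a single GIT_WORK_TREE=/var/www/domain.com git checkout -f line, since GIT_DIR is already set when the hook runs.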
Without getting into commercial options such as continuous-deployment apps, you can write a deployment script. There are many ways to write a shell script that takes one directory and copies it over, runs composer, runs npm, etc., all in one command, separating your git directory from your deploy directory. A simple approach uses the current datetime to name a release directory and then symlinks it to the deployment directory.
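A minimal version of such a script might look like this, with scratch directories standing in for the real repo and deploy paths so the sketch is self-contained:

```shell
REPO=$(mktemp -d)                        # stands in for your git checkout
DEPLOY=$(mktemp -d)                      # e.g. /var/www/domain.com
echo '<?php' > "$REPO/index.php"         # dummy file for the demo

# name each release after the current datetime
RELEASE="$DEPLOY/releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$RELEASE"
cp -a "$REPO/." "$RELEASE/"
# build steps (composer install --no-dev, npm install, gulp, ...) would run
# here, inside $RELEASE, keeping the git checkout itself untouched
ln -sfn "$RELEASE" "$DEPLOY/current"     # flip the live symlink to the new release
```

Because the switchover is a single symlink change, a broken build never replaces the running release, and rolling back is just repointing the symlink at the previous directory.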

Composer And Wordpress

I'm a little new to Composer so this is just a general question. I've been developing a WP plugin, and I'm requiring a few libraries in via composer. I've uploaded the plugin to a server and I'm having problems. Am I required to run composer install on the server as well as on my localhost?
If you prefer to manage your dependencies with composer during development, you can do so.
But none of the WP workflows (neither the base installation nor plugins) use composer. WordPress simply expects a folder with your code; it doesn't care about any of the internals, as long as you follow some simple rules.
If your plugin is public, you will have to submit it to the WordPress SVN, but there's no such thing as a build process. Also, WP plugin users will usually neither be interested in, nor have the possibility to, execute composer.
It is up to you if you create your own build process before committing to the WP SVN, or if you create your plugin in a way that it can run from your development code.
However, if you “build” your code before committing it to WP SVN (e.g. creating cache files, removing development-only dependencies etc.), you could run into discussions with people who insist on getting the original sources, too.
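As a sketch, such a build step can be as small as copying the plugin tree minus dev-only files before committing it to SVN. The names below are examples, and a scratch tree stands in for a real plugin so the sketch is self-contained:

```shell
SRC=$(mktemp -d)                  # stands in for your plugin checkout
OUT=$(mktemp -d)/my-plugin        # the tree that would go into the WP SVN
mkdir -p "$SRC/vendor" "$SRC/tests"
touch "$SRC/my-plugin.php" "$SRC/vendor/autoload.php" "$SRC/tests/test.php"

# ship the runtime dependencies (vendor/), drop dev-only files
mkdir -p "$OUT"
cp -a "$SRC/." "$OUT/"
rm -rf "$OUT/tests"
```

The point is that vendor/ travels with the plugin, so end users never have to run composer; only the directories that exist purely for development get stripped.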

Recommended way to require Node.js modules in Meteor

I'm hacking my way through my first Meteor app, and I've opened a bit of a rabbit hole in trying to connect to S3. I've installed awssum using meteorite, but it appears that I need to install the Node.js module of the same name to actually work through the examples. I'll eventually deploy my app to Heroku, and I'd like to be able to package my dependencies with my code. Googling a bit I've found a number of ways to do this, and I'm wondering which is close to being the best practice:
install the package I need in /public (https://github.com/possibilities/meteor-node-modules) (seems risky)
hack the buildpack I'm using (https://github.com/oortcloud/heroku-buildpack-meteorite) to require the node packages I need
Deploy my project as a Node module itself, thereby allowing dependencies (https://github.com/matb33/heroku-meteor-npm)
bundle your project, untar it, and install in the created node_modules dir (Recommended way to use node.js modules with meteor)
Which route should I take?
We are closing in on a release that interoperates with npm packages. See Avi's writeup on meteor-talk.
He also gave a tech talk at last month's Devshop previewing the work, using S3 as the example: http://youtu.be/kA-QB9rQCq8
