I'm hacking my way through my first Meteor app, and I've fallen down a bit of a rabbit hole trying to connect to S3. I've installed awssum using Meteorite, but it appears that I need to install the Node.js module of the same name to actually work through the examples. I'll eventually deploy my app to Heroku, and I'd like to be able to package my dependencies with my code. Googling a bit, I've found a number of ways to do this, and I'm wondering which is closest to best practice:
install the package I need in /public (https://github.com/possibilities/meteor-node-modules) (seems risky)
hack the buildpack I'm using (https://github.com/oortcloud/heroku-buildpack-meteorite) to require the node packages I need
Deploy my project as a Node module itself, thereby allowing dependencies (https://github.com/matb33/heroku-meteor-npm)
bundle your project, untar it, and install in the created node_modules dir (Recommended way to use node.js modules with meteor)
Which route should I take?
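For reference, here is roughly what the bundle-and-untar option looks like as I understand it (a sketch only; the bundle layout and the install location are guesses on my part):

    # Build a deployable bundle, unpack it, and install the missing npm
    # module inside it. Paths depend on the Meteor version, so treat this
    # as a sketch rather than a recipe.
    cd myapp
    meteor bundle ../myapp.tar.gz
    cd ..
    tar -xzf myapp.tar.gz          # unpacks into ./bundle
    cd bundle/server               # server code lives here in older bundles
    npm install awssum             # vendor the node module with the bundle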
We are closing in on a release that interoperates w/ NPM packages. See Avi's writeup on meteor-talk.
He also gave a tech talk at last month's Devshop previewing the work, using S3 as the example: http://youtu.be/kA-QB9rQCq8
We are currently developing custom operators and sensors for our Airflow (>2.0.1) on Cloud Composer, and we use the official Docker image for testing and development.
As of Airflow 2.0, the recommended way is not to put them in Airflow's plugins directory but to build them as a separate Python package. That approach, however, seems quite complicated when developing DAGs and testing them against the Airflow Docker image.
To follow Airflow's recommended approach we would use two separate repos for our DAGs and the operators/sensors; we would then mount the custom operators/sensors package into Docker to quickly test it there while editing it on the local machine. For use on Composer we would then need to publish the package to our private PyPI repo and install it on Cloud Composer.
The old approach, putting everything in the local plugins folder, is quite straightforward and doesn't have these problems.
Based on your experience, what is your recommended way of developing and testing custom operators/sensors?
You can put the "common" code (custom operators and such) in the dags folder and exclude it from being processed by the scheduler via an .airflowignore file. This allows for rather quick iterations when developing.
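For example, with a layout like the one below (folder names are just an illustration), a one-line .airflowignore keeps the scheduler from trying to parse the shared code as DAGs:

    # Layout sketch (folder names are illustrative):
    #   dags/my_dag.py         <- regular DAG files
    #   dags/common/           <- custom operators/sensors, importable from DAGs
    #   dags/.airflowignore    <- tells the scheduler not to parse common/ as DAGs
    echo "common" > dags/.airflowignore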
You can still keep the DAGs and the "common" code in separate repositories to make things easier; a "submodule" pattern works well for that. Add the "common" repo as a submodule of the DAG repo so you can check them out together. You can even keep different DAG directories (for different teams) on different versions of the common packages this way, simply by pointing each submodule at a different version.
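A minimal sketch of the submodule wiring (repo URLs, paths, and version tags are placeholders):

    # Link the "common" repo into the DAG repo as a submodule.
    cd dag-repo
    git submodule add https://example.com/acme/airflow-common.git dags/common
    git commit -m "Add common operators/sensors as a submodule"

    # Pin a team's DAG directory to a specific version of the common code.
    cd dags/common
    git checkout v1.2.0
    cd ../..
    git add dags/common
    git commit -m "Pin common code to v1.2.0"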
I think the "package" pattern is more of a production deployment thing than a development one. Once you have developed the common code locally, it is worth packaging it as a proper common package and versioning it accordingly (same as any other Python package). Then you can test it, release it, version it, and so on.
In "development" mode you can check out the code with a recursive submodule update and add the "common" subdirectory to PYTHONPATH. In production, even if you use git-sync, your ops team could deploy the custom operators via a custom image (by installing the appropriate released version of your package), while your DAGs are git-synced separately WITHOUT the submodule checkout. The submodule would only be used for development.
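As a sketch (repo URL, paths, and package name below are placeholders), development versus production would look roughly like this:

    # Development: check out DAGs plus the linked common code, put it on the path.
    git clone --recurse-submodules https://example.com/acme/dag-repo.git
    export PYTHONPATH="$PWD/dag-repo/dags/common:$PYTHONPATH"

    # Production: bake the released package into the custom image instead,
    # and git-sync the DAGs without the submodule checkout.
    pip install acme-airflow-common==1.2.0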
It would also be worth running CI/CD on the DAGs you push to your DAG repo, checking that they keep working with the "released" custom code on the "stable" branch, while running the same CI/CD with the common code synced via submodule on the "development" branch (that way you test the latest DAG code against the linked common code).
This is what I'd do. It allows quick iteration during development while turning the common code into "freezable" artifacts that provide a stable environment in production, lets your DAGs evolve quickly, and uses CI/CD to keep the "stable" things genuinely stable.
Does anyone have any experience with Drupal 9 either without Composer or in an air gap? Basically we're trying to run it on an air-gapped server. Composer obviously wants internet access to check for and download packages.
You'll need to run composer to install your packages and create your autoload files to make it all work.
You could create your own local package repository and store the packages you need there, however this would be a large undertaking given all the dependencies Drupal Core and contrib modules use. You'd need to manage them all yourself, and keep your local versions synced with the public versions, especially for security updates.
If you need to do that anyway, you're better off just using the public repos.
Documentation on composer repos is here:
https://getcomposer.org/doc/05-repositories.md
Near the bottom it shows how to disable the default packagist repo:
https://getcomposer.org/doc/05-repositories.md#disabling-packagist-org
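In practice (paths and repo names below are placeholders, and I'd double-check the exact CLI syntax against that page), the configuration amounts to adding a local repository and switching Packagist off:

    # Point Composer at a local directory of zipped packages ("artifact" repo)
    # and disable the default packagist.org repository.
    composer config repositories.local artifact /opt/composer-mirror
    composer config repo.packagist false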
A much simpler alternative would be to do development in a non-air-gapped environment, so you have access to the packages you need and can run Composer commands to install everything. Then, once your code is in the state you need, copy it to your air-gapped server to run. After composer install has run, nothing else is required; just make sure you include the vendor directory with all your dependencies, as well as Drupal core and contrib modules.
The server you run your Drupal instance on does not even need Composer installed.
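A rough sketch of that workflow (paths are placeholders):

    # On the connected machine: resolve and install everything.
    composer install --no-dev --optimize-autoloader

    # Ship the whole tree, including vendor/, core, and contrib.
    tar -czf drupal-site.tar.gz --exclude='.git' .

    # On the air-gapped server: just unpack and serve; Composer is not needed.
    tar -xzf drupal-site.tar.gz -C /var/www/mysite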
I'm currently working on a fork of the Meteor application Rocket Chat. I have a requirement to stand up the application for testing and development on an isolated network, so no internet access whatsoever.
I can't just get it running on a connected system and then copy it wholesale into the disconnected lab. Rather, I need to be able to check out a copy of the source code (from a local SCM) and then run Meteor, letting it perform all necessary compilation and dependency resolution on the fly.
Even though it is a huge kludge, I was hoping that I could just copy the .meteor folder from a working system directly onto the target system so that it would already have a cache of all required packages and therefore not need to reach out to any repositories. However, from what I have found, that only works for Meteor dependencies downloaded from Atmosphere.
Within Rocket Chat, there are several private packages (such as rocketchat-ldap) that have dependencies on NPM packages (in this case, ldapjs). When the application is run and these packages are built, the .npm folder in the user's home directory gets populated with those NPM packages. So, I tried to package that folder up along with the .meteor folder to accomplish the same task.
Unfortunately, when I tested it on the offline system, despite having the populated .npm folder, Meteor spits out the following error:
While building package rocketchat:ldap:
error: Can't install npm dependencies. Are you connected to the internet?
Obviously, I'm not connected - by design.
So, I am currently looking into Sinopia to stand up an NPM repository mirror on our local network that can host these dependencies. However, I have no idea how I'm supposed to point Meteor to the alternate server. The Meteor documentation covers the Npm.depends and Npm.require directives, which the application uses, but I can't find anything about specifying a URL from which to fetch said packages.
Further, is it possible to do something similar with the Atmosphere packages? Or is copying the .meteor folder the only way? As in, is there some application out there that I can use to host some of the Meteor packages? Or am I going about this in the wrong way?
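The closest thing I've found so far is pointing npm itself at the mirror via its registry setting (the URL below is a placeholder); what I don't know is whether Meteor's bundled npm honors this when it builds packages:

    # Point npm at the local Sinopia instance.
    npm config set registry http://npm-mirror.local:4873/
    # equivalently, in ~/.npmrc:
    #   registry=http://npm-mirror.local:4873/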
The solution I went with, which isn't as elegant as I'd hoped, was the following:
First, I copied the .meteor folder from the user account of a "working" system (this contains the Meteor executable and all of the Meteor packages downloaded from Atmosphere) to the user account of the disconnected target system. This allowed the target system to run Meteor.
Second, the NPM packages in question were being downloaded directly into the private packages in the source, but the .gitignore file on the source was set to ignore the node_modules folders. So I altered that and then checked those node_modules folders into the source with the rest of the application.
So, for example, the application source included a /packages/rocketchat-ldap/.npm/package folder. Then, when the application was run using meteor, the associated NPM packages (such as ldapjs) would get downloaded directly into a node_modules folder in that folder structure, at which point the private packages could be built.
Now, the source code in Git already contains those downloaded packages, so when a copy is checked out onto the disconnected target system, there is no need to download them.
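Concretely, the change amounted to something like the following (the exact ignore patterns and paths depend on your repo layout):

    # Stop ignoring node_modules and commit the downloaded npm dependencies.
    sed -i '/node_modules/d' .gitignore
    git add .gitignore packages/rocketchat-ldap/.npm/package/node_modules
    git commit -m "Vendor npm dependencies for offline builds"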
Fortunately, this did not increase the size of the source very much (just a few hundred kilobytes).
The result is that when running meteor to run the application on the target system, all dependencies are already in place, and no internet connection is required.
I just saw a package that, in order to run properly, asks you to put something in the public section of settings.json. That made me wonder whether the rest of the information there (sometimes sensitive, like AWS keys) is accessible as well.
So, should I be worried about this, or does Meteor hide this information from packages?
Any package you install from any package manager including NPM, Ruby Gems, and the Meteor package server can run arbitrary code on your computer as your user, including using the fs module to read and write files, accessing the network to send and receive data, etc.
In fact, you place the same trust in the developer whenever you install an application from the internet - almost any application on your computer could read your settings.json file, for example Dropbox, Chrome, etc.
Therefore, there is no way to completely secure the settings.json file from package code. The only way to be sure that packages are safe is to use only community-approved packages or read the source code of the packages you are using.
While investigating CI tools, I've found that many CI installations also integrate with artifact repositories like Sonatype Nexus and JFrog Artifactory.
Those tools sound highly integrated with Maven. We do not use Maven; we don't even compile Java. We compile C++ using Qt/qmake/make, and this build works really well for us. We are still investigating CI tools.
What is the point of using an Artifact repository?
Is archiving to Nexus or Artifactory (or Archiva) supposed to be a step in our make chain, or part of the CI chain, or could it be either?
How might I make our "make" builds or perl/bash/batch scripts interact with them?
An artifact repository has several purposes. The main purpose is to keep a copy of Maven Central (or any other Maven repo) for faster download times and so you can build even when the internet is down. Since you're not using Maven, this is irrelevant for you.
The second purpose is to store files that you want to use as dependencies but cannot freely download from the internet. You buy them or get them from your vendors and put them in your repo. This too is more applicable to Maven users and their dependency mechanism.
The third important purpose is to have a central place to store your releases. If you build a release v1.0, you can upload it to such a repository, and thanks to Maven's clean naming scheme it's easy to know where to find v1.0 and to use it with other tools. So you could write a script that downloads your release with wget and installs it on a host.
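For example, such a script only needs the repository's predictable URL layout (the server, group/artifact names, and version below are made up):

    # Maven-style layout: <repo>/<group path>/<artifact>/<version>/<artifact>-<version>.<ext>
    REPO=https://nexus.example.com/repository/releases
    wget "$REPO/com/example/myapp/1.0/myapp-1.0.tar.gz"
    tar -xzf myapp-1.0.tar.gz -C /opt/myapp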
Most of the time these repos support a staging process: you store v1.0 in the staging repo, someone tests it, and when it's fine they promote it to the release repo where everybody can find and use it.
They are simple to integrate with Maven projects, and plenty of other build tools connect to them easily (Ant/Ivy, Groovy Grape, and so on). Because the naming scheme is just a predictable URL convention, there is nothing stopping you from using bash or Perl to download/upload files.
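Publishing from a plain script works the same way; for instance, both Nexus and Artifactory accept a simple authenticated HTTP PUT (the URL and credentials below are placeholders):

    # Upload a C++ build artifact with nothing but curl.
    curl -u ci-user:API_KEY -T build/myapp-1.1.tar.gz \
      "https://nexus.example.com/repository/releases/com/example/myapp/1.1/myapp-1.1.tar.gz"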
So if you have releases or files that should be shared between projects and you don't yet have a good solution for that, an artifact repository could be a good starting point to see how this could work.
As mentioned here:
Providing stable and reliable access to repositories
Supporting a large number of common binaries across different environments
Security and access control
Tracing any action done to a file back to the user
Transferring a large number of binaries to a remote location
Managing infrastructure configuration across different environments