Where does Meteor store Atmosphere package templates/files?

How can I modify the source/templates of packages that I am using inside my Meteor project?
For example, I download the package from Atmosphere, begin using the package and want to make some minor tweaks to the package.
What is the best approach to making changes to an installed Atmosphere package?

You should fork the package from its GitHub source and make changes to your own version. If you think those changes might benefit others, you can open a pull request and, if the original author(s) agree, your changes will be merged upstream.
If you make changes to the deployed package under your project directory, those will probably be clobbered the next time you run $ meteor update
If you're still curious as to where they live, look under .meteor
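A common way to work against your fork, rather than editing files under .meteor, is to clone it into your app's local packages/ directory, which Meteor checks before Atmosphere. The repository URL and package names below are placeholders, so treat this as a rough sketch:

    # clone your fork into the app's local packages/ directory (placeholder names)
    cd /path/to/your-app
    mkdir -p packages
    git clone https://github.com/yourname/package-name.git packages/package-name

    # the local copy now shadows the Atmosphere version of author:package-name
    meteor add author:package-name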

Related

Global cache versus library -- what is used to load packages?

Can someone clarify how the global cache differs from a project library in renv? Does renv first search the project library, then the global cache?
The global cache is an implementation detail. It never determines which packages are part of the local library or what gets loaded in a project.
The only importance of the global cache is, well, caching installed packages, which makes setting up a new project faster because, rather than having to download and install a package, ‘renv’ can just link the already installed, cached package into the local library:
Future calls to renv::restore() and renv::install() will become much faster, as renv will be able to find and re-use packages already installed in the cache.
Because it is not necessary to have duplicate versions of your packages installed in each project, the renv cache should also help you save disk space relative to an approach with project-specific libraries without a global cache.
If you prefer, you can (mostly) ignore the existence of the global cache entirely. ‘renv’ should work as if it didn’t exist.
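To make the relationship concrete, here is a rough sketch of the usual renv workflow; the package name is just an example, and whether a fresh install or a link from the cache happens depends on what is already cached:

    # set up a project library and lockfile for the current project
    renv::init()

    # install into the project library; if a matching build already sits in the
    # global cache, renv links it in instead of downloading and installing again
    renv::install("dplyr")

    # record the project library's package versions in renv.lock
    renv::snapshot()

    # on another machine or after a fresh clone: reinstall the recorded versions,
    # again re-using the global cache where possible
    renv::restore()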

Wordpress plugins: commit or not commit

If you are not changing the plugins and are not writing your own plugin, which is better: should I commit the plugin folder or add it to the .gitignore file?
As a general rule I'd say, if you are making any changes to an external plugin/library you should obviously commit it along with your solution.
If you are not making any changes, there is usually no reason why you'd want to do that.
However, you will want some means of package management. If someone changes the version of that plugin, others need to get that same version updated on git pull; otherwise the dependent code may break. Hence, if you are working with some kind of package manager and your plugin is treated as a package, your problem is solved: the packages themselves will be ignored by git, you will only version some kind of package configuration file, and everything will be updated regularly by the package manager.
Now, if you do not have any package manager, then you will probably want to consider versioning your plugin code even if you are not making changes to it. That way, whoever updates the plugin to a new version (and that is likely to happen pretty often with WordPress) pushes the new version, and it is updated for everyone on the next git pull.
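As a rough illustration of the package-managed setup (one common choice is Composer with the WPackagist mirror of the wordpress.org plugin directory; the folder names below are made up), the ignore rules might look like this, with composer.json and composer.lock kept under version control so everyone gets the same plugin versions after a pull:

    # .gitignore: ignore third-party plugins that the package manager installs,
    # but keep any plugin you develop yourself under version control
    wp-content/plugins/*
    !wp-content/plugins/my-own-plugin/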

Big R project with several packages and developers: Best setup for easy version control based on packages

I have to restructure a big project written in R, which will later consist of several packages and involve several developers. Everything is set up on a git server.
The question is: how do I manage frequent changes inside packages without having to build them every time, and without developers having to rebuild them after every pull? Is there any best practice or automation for that? I don't want to source() unbuilt packages and R files, but would like to stick with a package-like structure as much as possible. We will work in a Windows environment.
Thanks.
So I fiddled around for a while, tried different setups, and came up with an arrangement that fits my needs.
It basically consists of two git repositories. The first (let's call it the base-repo) contains most of the scripts on which all later packages are based. The second repo we will call the "package-repo".
Most development work should be done on the base-repo. The base-repo is under CI control via a build server and unit tests.
The package-repo contains a folder for each package we want to build, plus the base-repo as a git submodule.
Each package can now be constructed via a very simple bash/shell script ("build script"), sketched after this list, which does the following:
- check out the commit/tag of the base-repo submodule on which the stable package build should be based
- copy the files which are necessary for the package into that package's folder
- check and build the package
- optionally, create a history file for the package
The script can be invoked either manually or by a build server.
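Here is a minimal sketch of what such a build script could look like; the package name, the pinned tag, and the copied file paths are all placeholders:

    #!/usr/bin/env bash
    # sketch of the per-package build script described above (placeholder values)
    set -euo pipefail

    PKG=mypackage          # folder of this package inside the package-repo
    BASE_REF=v1.2.0        # commit/tag of the base-repo this build is based on

    # 1. check out the pinned commit/tag of the base-repo submodule
    git submodule update --init base-repo
    git -C base-repo checkout "$BASE_REF"

    # 2. copy the scripts this package needs into its R/ directory
    cp base-repo/R/shared-utils.R "$PKG/R/"

    # 3. build the package and check the resulting tarball
    R CMD build "$PKG"
    R CMD check "$PKG"_*.tar.gz

    # 4. optionally record which base-repo commit this build used
    git -C base-repo rev-parse HEAD > "$PKG/inst/BASE_REPO_COMMIT"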
This approach can also be combined with packrat. Additional code which is very package-specific can now also be added to the package-repo and is under version control while remaining independent of the base-repo.
The approach could be further extended to trigger the build of packages from the package-repo based on pushes to the base-repo. Packages whose build script points at master will always be up to date, and if they are under the control of a build server, this ensures that changes to the base-repo do not break the package. It is also possible to create several packages containing the same scripts from the base-repo.
See also: git: symlink/reference to a file in an external repository

How to include a full R distribution in my GitHub repository

I build transport models for various government agencies. My model is managed through GitHub, and it depends on R to perform certain calculations. I currently have my entire R installation folder in the repository. This can't be the right solution, but here are some of my constraints:
My clients are usually even less sophisticated programmers than I am. When they download/clone the model, it just needs to work.
This needs to be the case 10 years from now - regardless of what the current build of R and all the package dependencies are.
Placing my entire R folder in the repo solves these two problems, but creates some new ones:
The repository is much larger than it needs to be / longer download time.
If the transport model is updated to a new version (say v2.0), I'd want to update R and its packages to the latest versions. I'm afraid this would increase the size of the repo even further.
One solution I understand is submodules. I could place the full R folder in a separate repo and bring it in as a submodule. This, at the very least, cleans up the model repository.
What about zipping the R folder? Some early testing showed that git can diff the zip file, but I don't know if it is treating it as a flat file or reading the contents. Also, is GitHub going to complain about a 100 MB+ zip file? I'd like to avoid Git LFS if I can, but asking my clients to unzip that file wouldn't be a problem.
I also looked at packrat, but as far as I can tell, that only works for R projects.
Lastly, I don't entirely understand makefiles / recipes, but it would be nice if there were a script I could run that would download specific versions of R and its libraries. One complicating thing is that some of the R packages are private GitHub repos.
Anyway, I'm happy to provide more info if needed. Thank you for your help!

Are there any R package repository management tools?

I'm creating a custom R package repository and would like to replicate the CRAN archive structure whereby old versions of packages are stored in the src/contrib/Archive/packageName/ directory. I'd like to use the install_version function in devtools, but that function depends on having a CRAN-like archive structure rather than having all package versions in src/contrib/.
Are there any R package repository management tools that facilitate the creation of this directory structure and other related tasks (e.g. updating the Archive.rds file)?
It would also be nice if the management tools handled the package type logic on the repository side so that I can use the same install.packages() or install_version() code on a Linux server as on my local Mac (i.e. I don't have to use type="both" or type="source" when installing locally on a Mac).
Short answer:
Not really for off-the-shelf use.
Long answer:
There are a couple of tools that one can use to manage their repo, but there isn't a coherent off-the-shelf ecosystem yet.
The CRAN maintainers keep a bevy of scripts here to manage the CRAN repository, but it's unclear how they all work together or which parts are needed to update the package index, run package checks, or manage the directory structure.
The tools::write_PACKAGES function can be used to update the package index, but it needs to be re-run each time a package is added, updated, or removed from the repository.
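For instance, here is a small sketch of re-generating the index for a local, CRAN-like repository; the repository path and package name are placeholders:

    # rebuild PACKAGES, PACKAGES.gz and PACKAGES.rds for the source packages
    repo <- "/srv/my-r-repo"
    tools::write_PACKAGES(file.path(repo, "src/contrib"), type = "source")

    # installing from the custom repository then works with the usual tooling
    install.packages("mypackage", repos = paste0("file://", repo), type = "source")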
M.eik Michalke has created the roxyPackage package, which has the ability to automatically update a given repository, install it, etc. The developer has also recently added the ability to have the archive structure mimic that of CRAN with the archive_structure function. The downside is the package isn't on CRAN and would probably be better if integrated with devtools. It's also brand new and isn't ready for wide use yet.
Finally, I created a small Ruby script that watches a given repository and updates the package index if any files change. However, this is made to work for my specific organization and will need to be refactored for external use. I can make it more general if anyone is interested in it.
