I suspect this might already have been answered, but I cannot find anything.
What is the difference between the minimongo/mongo Atmosphere packages and what is already supplied in Meteor?
I added them in early on and I was wondering if they are still needed.
Thanks,
minimongo is a dependency of mongo, and mongo is definitely part of the meteor package, so yes, it is part of the distribution.
If I run `meteor show minimongo`, I get:
meteor show minimongo master
Package: minimongo#1.0.10
Maintainers: mdg
Exports: LocalCollection, Minimongo
`minimongo` is a reimplementation of (almost) the entire MongoDB API, against an
in-memory JavaScript database. It is like a MongoDB emulator that runs inside
your web browser. You can insert data into it and search, sort, and update that
data. This is great if you like the MongoDB API (which you may already be using
on the server), need a way to store data on the client that you've fetched and
are using to render your interface, and want to use that familiar API.
Minimongo is used as a temporary data cache in the standard Meteor stack. To
learn more about mini-databases and what they can do, see [the project page on
www.meteor.com](https://www.meteor.com/mini-databases).
Recent versions:
1.0.6 December 19th, 2014
1.0.7 March 17th, 2015
1.0.8 March 31st, 2015 installed
1.0.9 September 21st, 2015 installed
1.0.10 September 28th, 2015 installed
Older and pre-release versions of minimongo have been hidden. To see all 56 versions, run 'meteor show --show-all minimongo'.
Related
I am new to Artifactory and trying to figure out the best strategy for my company's needs. We've been using an in-house package management system so far and want to move to a more industry-standard solution.
Our situation:
We have on-prem deployed software. Since each customer has their own story and strategy for software updates, we need to support more than just the latest release (let's say we support the last two releases).
We have 40+ git repos that form a single installer. Some of those repos produce an npm or NuGet package, and others consume them (and sometimes produce their own NuGet/npm package in turn).
Every release gets its own branch and CI pipeline, so that updating a package in the release-1.1 pipeline will not accidentally leak to any consumer of that package in release-1.0. This happens across all 40+ git repos.
A new release branch/CI pipeline is spawned about twice a year.
I see that Artifactory provides a multiple-repositories feature. Its recommended repository best-practices doc, https://jfrog.com/whitepaper/best-practices-structuring-naming-artifactory-repositories/, suggests using a separator for maturity, such as dev vs. prod.
To apply this to our situation, one idea is to treat each release as a maturity level, so we would have dev, release-1.0, release-1.1, etc. Artifactory repos, with each release repo tied to its own branch. This would work, but it takes more automation on the Artifactory side. I can see creating separate Artifactory repos to manage permissions, but creating new repos just to filter packages feels like overkill for us; there is no permission difference between releases.
Alternatively, we could go with a single Artifactory repo, with each package labeled by release; say, the CI pipeline for release 1.0 publishes packages with the label release-1.0. With tooling like GitVersion guaranteeing that each CI pipeline produces unique version numbers, this would provide a nice filtering/grouping mechanism for all the packages without the burden of per-release repos. The catch: this only works if `nuget update` or `npm update` can update package versions with label-based filtering.
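With unique version numbers guaranteed per pipeline, the "latest version carrying a given release label" lookup reduces to a filter plus a version sort. A minimal sketch, assuming the artifact list has already been fetched (e.g. via an Artifactory properties query); the tuple shape and label names here are hypothetical:

```python
def latest_for_release(artifacts, release):
    """Return the artifact with the highest version among those labeled
    with the given release, or None if none match.

    `artifacts` is a list of (package_id, version, labels) tuples, where
    versions are dotted numeric strings such as "1.0.7".
    """
    def version_key(version):
        return tuple(int(part) for part in version.split("."))

    matching = [a for a in artifacts if release in a[2]]
    return max(matching, key=lambda a: version_key(a[1]), default=None)

# Example: two builds labeled release-1.0, one labeled release-1.1.
artifacts = [
    ("Acme.Core", "1.0.3", {"release-1.0"}),
    ("Acme.Core", "1.0.7", {"release-1.0"}),
    ("Acme.Core", "1.1.2", {"release-1.1"}),
]
print(latest_for_release(artifacts, "release-1.0")[1])  # 1.0.7
```

Note the numeric sort: a plain string sort would rank "1.0.10" below "1.0.9", which matters once pipelines run long enough.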
The JFrog CLI provides a way to download files from a given Artifactory repo based on labels. To build one git repo's package, I could download all the packages from the 40+ repos with label filtering and then run `nuget update` against a local folder, but that doesn't sound ideal.
I am surprised that nuget and npm don't already have an update-with-label-filtering feature. They support labels, but only for searching. The only approach I can think of is a custom script that walks each package reference in packages.config (or package.json for npm), queries the JFrog CLI (or API) for the latest matching version of the package, and updates them one by one. It would work, but I wonder whether it is the right solution, since it involves a fair amount of custom work.
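Such a script need not be large. Here is a sketch of the packages.config side in Python, with the Artifactory lookup left to the caller (a real implementation would shell out to the JFrog CLI or call the REST API; the names below are placeholders, not a real tool):

```python
import xml.etree.ElementTree as ET

def bump_packages_config(xml_text, lookup):
    """Rewrite each <package id="..." version="..."/> entry in a
    packages.config document to the version returned by lookup(package_id).
    Entries for which lookup returns None are left untouched."""
    root = ET.fromstring(xml_text)
    for pkg in root.iter("package"):
        new_version = lookup(pkg.get("id"))
        if new_version is not None:
            pkg.set("version", new_version)
    return ET.tostring(root, encoding="unicode")

# Usage with a canned lookup table standing in for a live Artifactory query:
config = '<packages><package id="Acme.Core" version="1.0.3" /></packages>'
print(bump_packages_config(config, {"Acme.Core": "1.0.7"}.get))
```

The same shape works for package.json; only the parsing half changes (JSON instead of XML).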
Any advice from a package-management guru is much appreciated.
My problem is resolvable by utilizing paths, as noted (with baseRev) in this article:
https://www.jfrog.com/confluence/display/JFROG/Repository+Layouts
It was important to recognize that our releases are not maturity levels (as in dev vs. prod) but long-living branches. The difference is that a long-living branch's artifacts are compiled again, whereas prod artifacts are promoted from dev artifacts as-is. So when I tried to solve the long-living-branch problem by applying the maturity practice, it created awkward flows here and there.
Each long-living branch's set of 40+ repos provides and consumes its own NuGet packages internally. To address this without creating new repos for each release, we can utilize paths within the local repo, such as artifactory.my-server.com/api/nuget/nuget-local/master/* vs. artifactory.my-server.com/api/nuget/nuget-local/release-1/*.
Unfortunately, you can use a path for push, but not for install.
So for the consumption side, you need to create one virtual repo per release, which is not too big a deal for us.
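For example, consumers on release-1 would then point NuGet at that release's virtual repo in NuGet.Config (the repo and host names below are illustrative, not actual endpoints):

```xml
<configuration>
  <packageSources>
    <!-- Hypothetical virtual repo aggregating nuget-local/release-1/* -->
    <add key="artifactory-release-1"
         value="https://artifactory.my-server.com/api/nuget/nuget-virtual-release-1" />
  </packageSources>
</configuration>
```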
I have an R project that generates some Solr and RDF output. The project is in a GitHub repo, and I have a pre-release tagged 1.0.0
My team has decided that any knowledge artifacts we create should have an internal indication of the version of the software that created them.
Currently, I manually enter the release tag into a JSON configuration file after manually/interactively making the release on the GitHub website.
Could I either:
- automatically enter the release number into the config file when the release is built, or
- automatically determine the release version when running the R scripts?
And is either of those approaches a good idea? A user could theoretically make local changes to the R code and get out of sync with the cited release, right?
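For the first option, a sketch of what the release-pipeline step could look like, with the tag passed in as a parameter (in CI it would come from `git describe --tags` or the release event payload; the `config.json` file name and `software_version` key are hypothetical):

```python
import json

def stamp_version(config_path, tag):
    """Record the release tag in the JSON config that the scripts read,
    so generated artifacts can cite the software version that made them."""
    with open(config_path) as f:
        config = json.load(f)
    config["software_version"] = tag
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

# Demo: create a minimal config, then stamp it with the release tag.
with open("config.json", "w") as f:
    json.dump({"output_dir": "artifacts"}, f)

stamp_version("config.json", "1.0.0")
with open("config.json") as f:
    print(json.load(f)["software_version"])  # 1.0.0
```

Stamping at build time only guarantees the config matches the code that was actually packaged; it does not remove the out-of-sync risk from locally modified checkouts that the question raises.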
I am trying to install riak-ts version 1.0.0. On this page they mention that the download package is available from ZenDesk, but I did not find such a link on the ZenDesk site.
ZenDesk is a customer support site. If Basho has put files there, you will likely need to get an account from them in order to log in and download them.
Riak TS is currently (at the time of this answer) only available to Riak Enterprise customers, who can download its package from their ZenDesk panel.
Update (Oct 2016)
Riak TS has been open sourced since that time. The download packages are at http://docs.basho.com/riak/ts/latest/downloads/.
Does Meteor have a backend admin panel, like rails_admin or ActiveAdmin in Rails, for CRUD operations on models?
One of the teams at the first Meteor Summer Hackathon wrote the z-mongo-admin package that gives you a panel for basic CRUD operations. This should have the functionality that you're looking for.
Update 6/1/2015 - YES, since version 1.0.2. Once your app is running via `meteor`, run `meteor shell` in the same directory in a separate tab and you'll have a REPL.
Not yet. You can run `meteor mongo` in the app directory to access the database. Currently, you need the app running for this to work.
Observatory is a burgeoning logging and testing framework. Perhaps some kind of REPL will fit in the future.
Meteor Admin is an alternative to Houston based on the Autoform package.
It offers full CRUD based on your collection schemas.
Meteor Candy is an admin panel made just for Meteor. The idea is that everyone builds their Meteor app differently, but there are commonalities, such as the use of the Accounts packages, and that's a good place to start.
The package uses dynamic imports, available since Meteor 1.5, which means it adds virtually no weight to your client bundle.
Disclosure: I am the creator of the package
You should try Houston: https://github.com/gterrono/houston
Watch the video presentation here:
https://www.youtube.com/watch?v=8ASwWEZsAog
I'm creating a custom R package repository and would like to replicate the CRAN archive structure, whereby old versions of packages are stored in the src/contrib/Archive/packageName/ directory. I'd like to use the install_version function in devtools (source here), but that function depends on having a CRAN-like archive structure instead of having all package versions in src/contrib/.
Are there any R package repository management tools that facilitate the creation of this directory structure and other related tasks (e.g. updating the Archive.rds file)?
It would also be nice if the management tools handled the package-type logic on the repository side, so that I could use the same install.packages() or install_version() code on a Linux server as on my local Mac (i.e., I wouldn't have to use type="both" or type="source" when installing locally on a Mac).
Short answer:
Not really for off-the-shelf use.
Long answer:
There are a couple of tools that one can use to manage their repo, but there isn't a coherent off-the-shelf ecosystem yet.
The CRAN maintainers keep a bevy of scripts here to manage the CRAN repository, but it's unclear how they all work together or which parts are needed to update the package index, run package checks, or manage the directory structure.
The tools::write_PACKAGES function can be used to update the package index, but it must be re-run each time a package is added, updated, or removed from the repository.
M.eik Michalke has created the roxyPackage package, which can automatically update a given repository, install packages from it, etc. The developer has also recently added the ability to mimic CRAN's archive structure via the archive_structure function. The downsides are that the package isn't on CRAN and would probably be better integrated with devtools; it's also brand new and isn't ready for wide use yet.
Finally, I created a small Ruby script that watches a given repository and updates the package index if any files change. However, this is made to work for my specific organization and will need to be refactored for external use. I can make it more general if anyone is interested in it.
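The archiving half of that kind of tooling is plain filesystem work. A sketch (in Python rather than R or Ruby) that moves superseded <pkg>_<version>.tar.gz tarballs out of src/contrib/ into src/contrib/Archive/<pkg>/, CRAN-style; the version comparison here is a naive numeric sort, and tools::write_PACKAGES would still need to be re-run afterwards to refresh the index:

```python
import re
import shutil
from pathlib import Path

def archive_old_versions(contrib_dir):
    """Keep only the newest <pkg>_<version>.tar.gz for each package in
    src/contrib/ and move earlier versions to src/contrib/Archive/<pkg>/."""
    contrib = Path(contrib_dir)
    by_package = {}
    for tarball in contrib.glob("*.tar.gz"):
        match = re.match(r"(.+)_([0-9.\-]+)\.tar\.gz$", tarball.name)
        if match:
            by_package.setdefault(match.group(1), []).append(
                (match.group(2), tarball))

    for pkg, versions in by_package.items():
        # Sort by version components; R versions use "." and "-" separators.
        versions.sort(key=lambda v: [int(x) for x in re.split(r"[.\-]", v[0])])
        for _, tarball in versions[:-1]:  # everything but the newest
            archive_dir = contrib / "Archive" / pkg
            archive_dir.mkdir(parents=True, exist_ok=True)
            shutil.move(str(tarball), str(archive_dir / tarball.name))
```

Run against src/contrib/ after each upload, followed by a package-index rebuild, and the directory layout that install_version expects falls out.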