I am just starting with Meteor, creating some test/practice apps. After I create an app and run it, the .meteor folder balloons to about 500 MB. Each practice app adds roughly 500 MB to my working folder.
I am not playing with any huge data sets or anything; my database will be less than 10 MB.
Since I sync my work folder with my laptop, backing it up is a major pain. How can I reduce the size of the default MongoDB database when creating a practice app, so that backing it up or syncing the folder stays manageable?
Also, even when I copy the whole app folder to a new location, it does not run, likely because the database is stored somewhere else.
Can I save the database to the same folder as the app, so that just copying the folder over will enable me to continue working on the laptop as well?
Sorry if the question is too noobish.
Thanks for your time.
Note: meteor reset deletes my database. I want to be able to preserve it.
Yes, this can be a pain and is unavoidable by default at present. However, here are a couple of ideas that might be useful:
If you have multiple Meteor apps, it's possible to use the same DB for each, as per #elfoslav: link. However, note that you have to supply the environment variable every time (or create a shell script for starting Meteor); otherwise Meteor will create a new DB for you if you run meteor on its own even once!
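A minimal sketch of that (assuming a standalone mongod is already running on the default port 27017; "shared-practice" is just an illustrative database name):

# Every practice app started this way shares one database,
# so no per-app .meteor/db files are created for your data
MONGO_URL=mongodb://127.0.0.1:27017/shared-practice meteor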
If it's just portability of the app you're concerned about, get comfortable with mongodump and mongorestore, which will yield BSON files containing just your database contents (i.e. about 10 MB). These are pretty easy to load back into another instance of MongoDB, so you only have to copy them back and forth. Here is a guide to doing this with your Meteor DB, and here is a great gist from #olizilla.
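A minimal sketch of the round trip (assuming the default development setup, where Meteor's bundled mongod listens on port 3001, i.e. the app port plus one, with a database named meteor):

# Dump just the database contents (roughly your 10 MB) while the app runs
mongodump --host 127.0.0.1 --port 3001 --db meteor --out ./dump

# On the other machine, with its copy of the app running, load it back
mongorestore --host 127.0.0.1 --port 3001 --db meteor ./dump/meteor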
Have you tried the MongoDB configuration options below to limit the space it occupies?
storage.smallFiles
Type: boolean. Default: false
Sets MongoDB to use a smaller default file size. The storage.smallFiles option reduces the initial size for data files and limits the maximum size to 512 megabytes. storage.smallFiles also reduces the size of each journal file from 1 gigabyte to 128 megabytes. Use storage.smallFiles if you have a large number of databases that each holds a small quantity of data.
storage.journal.enabled
Type: boolean. Default: true on 64-bit systems, false on 32-bit systems
Enables the durability journal to ensure data files remain valid and recoverable. This option applies only when you specify the --dbpath option. The mongod enables journaling by default on 64-bit builds of versions after 2.0.
Refer to: http://docs.mongodb.org/manual/reference/configuration-options/
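If you run your own mongod for development, the command-line equivalents are the quickest way to try these options (a sketch; the data path is illustrative, and these flags apply to the MMAPv1-era mongod the docs above describe):

# Small preallocated files, 128 MB journal files instead of 1 GB
mkdir -p ./small-db
mongod --dbpath ./small-db --smallfiles --nojournal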
The blobs folder on my Sonatype Nexus has completely filled the server's disk.
Does anyone know how to make room? Is there an automatic way to free that space, or do I have to do it manually?
And, at last: what happens if I completely remove all the data in the directory blobs/default/content?
Thank you all in advance
Marco
In NXRM3, the blobstore contains all the components of your repository manager. If your disk is full, you will not be able to write anything more to NXRM and risk corruption of existing data.
Cleanup can be performed using scheduled tasks. What you need varies based around what formats your system is using. You can find more general information here: https://help.sonatype.com/display/NXRM3/System+Configuration#SystemConfiguration-ConfiguringandExecutingTasks
It is important to note that you must run the "Compact blob store" task after any cleanup is done, otherwise the space will not be freed.
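To confirm the compaction actually freed space, compare the blobstore's on-disk footprint before and after the task runs (the path below is the typical default layout; adjust to your data directory):

# Check the default blobstore's disk usage
du -sh /path/to/sonatype-work/nexus3/blobs/default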
However, if you have reached full disk space, it is advisable to shut down and restore from a backup in case there is corruption, preferably giving yourself a larger disk for your blobstore before restarting.
RE "what happens if I completety remove all the data in the directory blobs/default/content": That is in effect removing all data from NXRM in the default blobstore. You will have no components if you do that.
Unfortunately, I have a more than 500 GB ZODB (Data.fs) in my Plone site (Plone 5.0.5), so I have no way to use bin/zeopack to pack it, and it is seriously affecting performance. What should I do?
I assume you're running out of space on the volume containing your data.
First, try turning off pack-keep-old in your zeoserver settings:
[zeoserver]
recipe = plone.recipe.zeoserver
...
pack-keep-old = false
This will disable the creation of a .old copy of the Data.fs file and matching blobs. That may allow you to complete your pack.
Alternatively, create a matching Zope/Plone install on a separate machine or volume with more storage and copy over the data files. Run zeopack there. Copy the now-packed storage back.
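A rough sketch of that second approach (hostnames and paths are illustrative; adjust to your buildout layout):

# Copy the unpacked storage to a host/volume with enough headroom
rsync -av var/filestorage/Data.fs big-host:/srv/plone/var/filestorage/
rsync -a var/blobstorage/ big-host:/srv/plone/var/blobstorage/

# On big-host, pack there, then copy the now-smaller files back
bin/zeopack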
I'm working on a migration from Alfresco 4 to 5, and applying add-ons to Alfresco 4 for this purpose is not applicable. The databases used by the two versions are different from each other. I have tried ACP files, and it is very time consuming. Is there a size limitation on ACP files? What other methods can be used?
Use Standard Upgrade Procedure
What is your main intention? "Just" doing an upgrade from 4 to 5?
In that case the robust, easy way would be to:
Install required modules having custom models in your target system (or, if you customized models in the extension path, you have to copy that config)
backup and restore the Alfresco repo database to your new (5.x) system. If your target system uses a different DB product (not just a different version), you need to manage the DB migration using DB-specific migration tools. Alfresco export/import is not an alternative here.
sync alf_data/contentstore to your new system (make sure the DB dump is always older than the content store, or do an offline sync); see the sketch after this list
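A sketch of that sync step (hostname and paths are illustrative defaults):

# Mirror the content store onto the new 5.x host; repeat until the final,
# offline pass so the files are never newer than the database dump
rsync -av /opt/alfresco4/alf_data/contentstore/ new-host:/opt/alfresco5/alf_data/contentstore/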
During startup, Alfresco recognizes that the repo needs to be upgraded and does everything. Check catalina.out for any output during the migration.
If you need only a subset of your previous system, it is much easier to delete the unneeded content afterwards (don't forget to purge the trash, and you should configure the cleaner job not to wait 14 days).
Some words concerning ACP
It is nice tooling for exporting single directories, but unfortunately it is limited:
no support across Alfresco versions (exactly your case)
no support for site metadata / no site export/import (maybe it works after the changes in 4.x that put site metadata in nodes, but I suppose nobody has tested this)
must run in one transaction, so hard limits depend on your hardware / JVM configuration, but I wouldn't recommend exporting/importing more than a few thousand nodes at once
If you really need to export/import a huge number of documents, you should run the import/export in a separate Java process, which means your Alfresco needs to be shut down. See https://wiki.alfresco.com/wiki/Export_and_Import#Export_Tool
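For orientation only, a standalone run has roughly the shape below; treat the classpath, class name, and flags as assumptions to verify against the wiki page and your Alfresco version before relying on them:

# Run only while Alfresco is shut down; every value here is illustrative
java -classpath "<alfresco jars and config>" org.alfresco.tools.Export \
     -user admin -pwd admin -store workspace://SpacesStore \
     -path /app:company_home my-export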
ACP does have a file limit (I can't remember the actual number), but we've had problems with files below that limit too. We've given up on this approach in favor of the Alfresco bulk import tool.
One big advantage this tool has: it can continue a failed import from the point of failure, with no need to delete the partially imported batch and start all over again. It can also update files as needed, something the ACP method can't do (it would fail with DuplicateChildNameNotAllowed).
I've been cleaning up my directories and noticed that each Meteor.js project takes up at least 77 MB (and typically more like 150 MB)! To figure out what was happening, I went ahead and created a new app:
meteor create myapp
At this point, the folder takes up about 7 KB.
But after I do this
cd myapp
meteor
the folder size balloons up to 77 MB.
After some digging around, I managed to pinpoint the size increase to the .meteor/db folder. More specifically, running the app creates local.* files inside .meteor/db, each larger than 16 MB. I opened these and they're mainly just long runs of zeros with a few non-zero bytes here and there.
If I start doing more (adding data to Meteor collections, etc.), the size balloons to 100+ MB.
My questions:
What are these files for and why are they so huge?
Is there any way to make my app smaller? (Zipping the folder cuts the size down to 1.8 MB, so a lot of the additional bloat looks like it could be stripped away somehow.)
Running meteor in development mode (the default) creates an instance of MongoDB for you under your .meteor directory. It's huge, I know: the local.* files full of zeros are MongoDB's preallocated data and journal files, which the server reserves up front before any real data is stored. But don't worry, this is only for development, so you don't need to set up your own MongoDB instance on your localhost. You can clean it up at any time by running:
$ meteor reset
When you go to deploy your app, you will bundle your project, and the bundle does not include any of these files.
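For instance (a sketch; on older Meteor releases the equivalent command was meteor bundle app.tar.gz):

# Produce a deployable bundle; the dev database under .meteor/db stays behind
meteor build ../output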
To add to what David Weldon said.
If the size of the app locally is an issue, you could always use a Mongo database that is not stored locally, e.g. a MongoDB-as-a-service provider such as MongoLab or MongoHQ.
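You'd then start the app pointed at the hosted instance, using the connection string your provider gives you (the URI below is a made-up example):

# With MONGO_URL set, Meteor skips spawning its local MongoDB entirely
MONGO_URL=mongodb://user:password@ds012345.mongolab.com:12345/myapp meteor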
Also, for me, using Jasmine tests created a mirrors folder totaling 15 GB...
I am implementing a file sharing system in ASP.NET MVC3. I suppose most file sharing sites store files in a standard binary format on a server's file system, right?
I have two options storage-wise: the file system, or a binary data field in a database.
Are there any advantages to storing files (including large ones) in a database rather than on the file system?
MORE INFO:
Expected average file size is 800 MB, and roughly 3 files per minute will typically be requested for download by users.
If the files are as big as that, then using the filesystem is almost certainly a better option. Databases are designed to contain relational data grouped into small rows and are optimized for consulting and comparing the values in these relations. Filesystems are optimized for storing fairly large blobs and recalling them by name as a bytestream.
Putting files that big into a database will also make it difficult to manage the space the database occupies. The tools for querying space used in a filesystem, and for removing and replacing data, are better.
The only caveat to using the filesystem is that your application has to run under an account that has the necessary permission to write the (portion of the) filesystem you use to store these files.
Use FILESTREAM when:
Objects that are being stored are, on average, larger than 1 MB.
Fast read access is important.
You are developing applications that use a middle tier for application logic.
Here is the MSDN link: https://msdn.microsoft.com/en-us/library/gg471497.aspx
How to use it: https://www.simple-talk.com/sql/learn-sql-server/an-introduction-to-sql-server-filestream/