Is there a limit on the migration file size in Flyway?

We have massive reference data that we want to load when creating environments using Flyway. We can split it, but it will be useful to know if there is a limit on script file size in Flyway.
I understand it's most probably related to memory and heap size; in that case, is there a way to calculate the maximum file size?

Flyway Community Edition will load and parse your files in memory, so the heap size will be the biggest constraint.
Flyway Pro and Flyway Enterprise 5.1 and newer also have the flyway.stream flag, which makes Flyway stream the contents instead of loading them fully into memory. This lets Flyway handle much larger files (multiple GB are not an issue) with drastically lower memory requirements. See https://flywaydb.org/documentation/commandline/migrate#stream
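For illustration, a minimal sketch of turning streaming on through the Java API, assuming a recent Flyway with the fluent configuration API and a Pro/Enterprise (Teams) licence; the JDBC URL and credentials are placeholders, and stream(true) mirrors the flyway.stream flag mentioned above:

    import org.flywaydb.core.Flyway;

    public class StreamingMigration {
        public static void main(String[] args) {
            // Configure Flyway against the target database; stream(true) asks
            // Flyway to stream each migration instead of holding it fully in memory.
            // Streaming is honoured only by the paid editions.
            Flyway flyway = Flyway.configure()
                    .dataSource("jdbc:postgresql://localhost:5432/mydb", "user", "password")
                    .stream(true)
                    .load();

            flyway.migrate();
        }
    }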

Related

Can I compile an XQuery into a package using Saxon - or - how to minimize compile times

I'm using the dotnet Saxon9ee-api.
I have a very large schema (180,000 lines), and a schema aware XQuery.
When I compile it, it understandably takes several seconds. That's life.
But is there a way that I can compile it once, and serialise it to disk as a compiled entity? So that I can load it again later and use it?
(The XSLT compiler allows me to compile into XsltPackages, which I'm pretty sure would let me do this with XSLT.)
There's no filestore format for compiled XQuery code in Saxon (unlike XSLT), but there is a filestore format for compiled schemas (the SCM format) and this may help. However, loading a schema this large will not be instantaneous.
Note that the compile time for XSD schemas can be very sensitive to the actual content of the schema. In particular, large finite bounds can be very costly (for example maxOccurs="1000"). This is due to the algorithms used to turn a grammar into a finite state machine; Saxon optimises the textbook algorithm for some cases, but not all. The finite state machine is held in the SCM file, so you won't incur the compilation cost when loading from an SCM file; however, the FSMs that are expensive to compute also tend to be very large, so if you're in this situation the SCM is going to be big and therefore slower to read.
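For what it's worth, a rough sketch of compiling a schema once and reloading it from SCM, using the Java s9api (the asker is on .NET, where the SchemaManager API is analogous; exportComponents/importComponents need Saxon-EE, and the file names here are just placeholders):

    import net.sf.saxon.s9api.Processor;
    import net.sf.saxon.s9api.SaxonApiException;
    import net.sf.saxon.s9api.SchemaManager;

    import javax.xml.transform.stream.StreamSource;
    import java.io.File;

    public class ScmRoundTrip {
        public static void main(String[] args) throws SaxonApiException {
            Processor proc = new Processor(true);           // true = licensed, schema-aware (EE)
            SchemaManager schemas = proc.getSchemaManager();

            // First run: compile the large schema once and export it as SCM.
            schemas.load(new StreamSource(new File("big-schema.xsd")));
            schemas.exportComponents(proc.newSerializer(new File("big-schema.scm")));

            // Later runs: import the precompiled SCM instead of recompiling the XSD.
            // (Loading a very large SCM is faster than recompiling, but not instant.)
            SchemaManager reload = new Processor(true).getSchemaManager();
            reload.importComponents(new StreamSource(new File("big-schema.scm")));
        }
    }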

Meteor mongodb file size too big

I am just starting with Meteor, creating some test/practice apps. After I have created an app and run it, the .meteor folder size balloons to 500 MB. Each practice app adds 500 MB or so to my working folder.
I am not playing with any huge data sets on anything, my database will be less than 10 MB.
As I sync my work folder with my laptop, it is a major pain to back it up. How can I reduce the size of the default MongoDB when creating a practice app, so that backing it up or syncing the folder is easier?
Also, even when I copy the whole app folder to a new location, it does not run, likely because the database is stored somewhere else.
Can I save the database to the same folder as the app, so that just copying the folder over will enable me to continue working on the laptop as well?
Sorry if the question is too noobish.
Thanks for your time.
meteor reset deletes my database; I want to be able to preserve it.
Yes, this can be a pain and is unavoidable by default at present. However, a couple of ideas that might be useful:
If you have multiple Meteor apps, it's possible to use the same DB for each, as per @elfoslav's answer (link). However, note that you have to supply the env variable every time or create a shell script for starting Meteor, otherwise it'll create a new db for you if you run meteor on its own just once!
If it's just portability of the app you're concerned about, get comfortable with mongodump and mongorestore, which will yield BSON files containing just your database contents (i.e. about 10 MB) that are pretty easy to insert back into another instance of MongoDB, so you only have to copy these back and forth. Here is a guide to doing this with your Meteor DB, and here is a great gist from @olizilla.
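As a rough illustration of that dump/restore round trip, driven from Java for want of a shell example: this assumes meteor is running on its default port 3000, so the bundled mongod listens on localhost:3001 and the database is named "meteor" (both of which are Meteor defaults, not something from the question):

    import java.io.IOException;

    public class MeteorDbBackup {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Dump just the ~10 MB of data (BSON files) into ./dump/meteor
            run("mongodump", "--host", "127.0.0.1", "--port", "3001", "--db", "meteor", "--out", "dump");

            // On the other machine, with the app running, restore it back:
            // run("mongorestore", "--host", "127.0.0.1", "--port", "3001", "--db", "meteor", "dump/meteor");
        }

        private static void run(String... cmd) throws IOException, InterruptedException {
            new ProcessBuilder(cmd).inheritIO().start().waitFor();
        }
    }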
Have you tried the MongoDB configuration options below to limit the space it occupies?
storage.smallFiles
Type: boolean. Default: False.
Sets MongoDB to use a smaller default file size. The storage.smallFiles option reduces the initial size for data files and limits the maximum size to 512 megabytes. storage.smallFiles also reduces the size of each journal file from 1 gigabyte to 128 megabytes. Use storage.smallFiles if you have a large number of databases that each holds a small quantity of data.

storage.journal.enabled
Type: boolean. Default: true on 64-bit systems, false on 32-bit systems.
Enables the durability journal to ensure data files remain valid and recoverable. This option applies only when you specify the --dbpath option. The mongod enables journaling by default on 64-bit builds of versions after 2.0.
Refer to: http://docs.mongodb.org/manual/reference/configuration-options/

A file storage format for file sharing site

I am implementing a file sharing system in ASP.NET MVC3. I suppose most file sharing sites store files in a standard binary format on a server's file system, right?
I have two options storage wise - a file system, or binary data field in a database.
Are there any advantages to storing files (including large ones) in a database, rather than on the file system?
MORE INFO:
The expected average file size is 800 MB, and usually around 3 files per minute are requested to be fed back to the downloading user.
If the files are as big as that, then using the filesystem is almost certainly a better option. Databases are designed to contain relational data grouped into small rows and are optimized for consulting and comparing the values in these relations. Filesystems are optimized for storing fairly large blobs and recalling them by name as a bytestream.
Putting files that big into a database will also make it difficult to manage the space occupied by the database. The tools for querying space used, and for removing and replacing data, are better on a filesystem.
The only caveat to using the filesystem is that your application has to run under an account that has the necessary permission to write the (portion of the) filesystem you use to store these files.
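The question is ASP.NET MVC, but the usual pattern is language-agnostic: stream the upload straight to disk and keep only metadata (a generated name, original name, size) in the database. A rough sketch in Java, with made-up table and column names:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.UUID;

    public class FileStore {
        private final Path root;      // a directory the app's account is allowed to write to
        private final Connection db;

        public FileStore(Path root, Connection db) {
            this.root = root;
            this.db = db;
        }

        // Stream the upload to disk, then record only metadata in the DB.
        public String save(String originalName, InputStream upload) throws Exception {
            String storedName = UUID.randomUUID().toString();
            Path target = root.resolve(storedName);
            Files.copy(upload, target);   // no need to buffer an 800 MB file in memory

            try (PreparedStatement ps = db.prepareStatement(
                    "INSERT INTO shared_file (stored_name, original_name, size_bytes) VALUES (?, ?, ?)")) {
                ps.setString(1, storedName);
                ps.setString(2, originalName);
                ps.setLong(3, Files.size(target));
                ps.executeUpdate();
            }
            return storedName;
        }
    }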
Use FileStream when:
Objects that are being stored are, on average, larger than 1 MB.
Fast read access is important.
You are developing applications that use a middle tier for application logic.
Here is MSDN link https://msdn.microsoft.com/en-us/library/gg471497.aspx
How to use it: https://www.simple-talk.com/sql/learn-sql-server/an-introduction-to-sql-server-filestream/

Can you use rsync to replicate block changes in a Berkeley DB file?

I have a Berkeley DB file that is quite large (~1GB) and I'd like to replicate small changes that occur (weekly) to an alternate location without having the entire file be re-written at the target location.
Does rsync properly handle Berkeley DBs with its block-level algorithm?
Does anyone have an alternative so that only the changes are written to the Berkeley DB files that are replication targets?
Thanks!
Rsync handles files perfectly, at the block level. With databases, problems can come into play in a number of ways:
Caching
File locking
Synchronization/transaction logs
If you can ensure that no application has the Berkeley DB open during the rsync, then rsync should work fine and offer a significant advantage over copying the entire file. However, depending on the configuration and version of BDB, there are transaction logs. You probably want to investigate the same mechanisms used for backups and hot backups. They also have a "snapshot" feature that might better facilitate a working solution.
You should probably read this carefully: http://www.cs.sunysb.edu/documentation/BerkeleyDB/ref/transapp/archival.html
I'd also recommend you consider using replication as an alternative solution that is blessed by BDB https://idlebox.net/2010/apidocs/db-5.1.19.zip/programmer_reference/rep.html
They now call this High Availability -> http://www.oracle.com/technetwork/database/berkeleydb/overview/high-availability-099050.html

Oracle 11g External Tables size limit

Is there a limit on the files that are defined as external tables in Oracle 11g? As per http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/limits002.htm, the last parameter, "External Tables file - Maximum size", is listed as dependent on the operating system.
Does this mean that external tables can be as big as the underlying OS or File System can handle?
Although I haven't been able to find a definitive answer, my feeling is that any file used for an external table can be as big as the OS can handle. You can have multiple files for each external table definition, so your external table can, theoretically at least, be very large, although performance is going to be a limiting factor here. Again, there doesn't seem to be a definitive answer on the number of files you can have per external table definition. Here's the link to the 11g limits, which are much the same as the 10g page you posted.
The limit on the number of files specified in the LOCATION clause is 32767.
Each location is passed to the access driver as an ODCIArgDesc; the VARRAY ODCIArgDescList has a size of 32767 (do a "describe ODCIArgDescList").
The size of the external files is limited/determined by the OS system calls which access the files, and this is OS-port dependent. Most modern OSes support 64-bit file sizes, though some may still be stuck with 32-bit files.
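To make the multi-file point concrete, a hedged sketch of one external table spread across several OS files; the directory object, columns and file names are invented, and the DDL is issued through JDBC only to keep these examples in one language:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ExternalTableDemo {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 Statement st = con.createStatement()) {

                // One external table can point at many files (up to 32767 LOCATION
                // entries); each individual file is limited only by what the OS allows.
                st.execute(
                    "CREATE TABLE ref_data_ext (" +
                    "  id   NUMBER," +
                    "  name VARCHAR2(100)" +
                    ") ORGANIZATION EXTERNAL (" +
                    "  TYPE ORACLE_LOADER" +
                    "  DEFAULT DIRECTORY ext_data_dir" +
                    "  ACCESS PARAMETERS (" +
                    "    RECORDS DELIMITED BY NEWLINE" +
                    "    FIELDS TERMINATED BY ','" +
                    "  )" +
                    "  LOCATION ('ref_data_1.csv', 'ref_data_2.csv', 'ref_data_3.csv')" +
                    ")");
            }
        }
    }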