A Flyway deployment script name starts with a version.
What is the maximum length one can use? I see that the table column holding the version is 50 characters long.
There are a number of limits:
Version must be 50 characters or less
Description must be 200 characters or less
Migration filenames must fit within the OS's filename length limit
Do you have a specific use case for a version string longer than 50 characters? We're in the middle of work on Flyway 7, and this is a chance for us to change the history table if there's a good reason to do so.
If you read the documentation located here, you'll find that the limit is not one imposed by Flyway. Rather, the limit on the length of the version comes from the OS and its limit on the length of a file name. You must ensure that you increment your version numbers in the appropriate order. However, as you can see in the docs, Flyway supports a wide variety of formats, and the length of the string defining the version number is not something you need to worry about.
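For reference, all of the following are valid versioned migration filenames; Flyway accepts dots or underscores as version separators (the descriptions here are made up):
V1__init.sql
V1.2.3__add_orders_table.sql
V2_1__fix_index.sql
V20200115.1030__backfill_totals.sql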
As I understand from the documentation for Azure resource naming rules, I should be able to create a VMSS running Windows with a name of up to 15 characters.
Source: https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/resource-name-rules#microsoftcompute
Yet, when I deploy my ARM template for a Service Fabric Cluster, the deployment fails because the name, at 13 characters, is too long.
I am using "apiVersion": "2019-07-01", for the Microsoft.Compute/virtualMachineScaleSets type in the template, which appears to be the most recent version.
I can definitely make the deployment work by shortening the node type name, but I'd prefer to take advantage of the additional characters that appear to be allowed per the documentation.
Has anyone else been able to use more than nine characters for the node type name? If so, how?
The last 6 characters are reserved for incremental node names (starting at 000000 and incremented by 1 for each node). There is nothing you can do about it; use 9 characters max.
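In other words, the 15-character Windows computer-name cap minus the 6-character instance suffix leaves 9 characters for the node type name. A quick check of the arithmetic, sketched in Python with hypothetical names:

# Windows computer names are capped at 15 characters, and the
# 6-digit instance suffix is appended to the node type name.
MAX_COMPUTER_NAME = 15
SUFFIX = "000000"  # first node's suffix; later nodes get 000001, ...

for node_type in ["FrontEndNode1", "FrontEnd9"]:  # 13 and 9 characters
    computer_name = node_type + SUFFIX
    fits = len(computer_name) <= MAX_COMPUTER_NAME
    print(node_type, len(computer_name), "OK" if fits else "too long")
# FrontEndNode1 19 too long   <- why a 13-character name is rejected
# FrontEnd9 15 OK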
In Firebase Realtime DB, what are the limits on:
keys
paths
nesting levels?
I mean restrictions on length, as well as disallowed or special characters and values.
And any other restrictions (or discouragements) there might be.
Is this deprecated pre-Google-integration document (link here) still up to date?
Length of a key: 768 bytes
Depth of child nodes: 32
I don't see max path length mentioned there.
What is the non-deprecated location for this documentation?
I cannot find an equivalent in https://firebase.google.com/docs/ .
As if some of the docs "got lost in the shuffle"...
Thanks for any hints.
EDIT: I've broadened it slightly - not just lengths but any restrictions that might apply.
The Firebase documentation says 768 bytes is still the limit for a key, and that keys use UTF-8 encoding. With UTF-8, a character is 1-4 bytes.
However, most characters are 1 byte, unless you use a character such as ♥, which is 3 bytes. Therefore, for normal use of a key, the character limit is 768. If you want to anticipate some outlandish characters, it may be best to be conservative and limit the total characters to 500, 600, or 700, depending on how you want to use the keys.
Test your characters and strings here:
https://mothereff.in/byte-counter
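Or check programmatically; a minimal sketch in Python (the 768-byte cap and the forbidden characters . $ # [ ] /, plus ASCII control characters, are from the Firebase docs; the function itself is just illustrative):

def is_valid_rtdb_key(key: str) -> bool:
    # Keys are UTF-8 encoded and limited to 768 bytes.
    if not key or len(key.encode("utf-8")) > 768:
        return False
    # Keys may not contain . $ # [ ] / ...
    if any(c in '.$#[]/' for c in key):
        return False
    # ... or ASCII control characters.
    return not any(ord(c) < 32 or ord(c) == 127 for c in key)

print(is_valid_rtdb_key("user_42"))   # True
print(is_valid_rtdb_key("♥" * 300))   # False: 900 bytes in UTF-8
print(is_valid_rtdb_key("bad.key"))   # False: contains '.'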
Documentation here:
https://firebase.google.com/docs/database/usage/limits
This documentation mentions that a Firebase Realtime Database can be nested up to 32 levels deep, but it also notes that deeply nesting your data is not good practice. Denormalising data may seem redundant, but it gives you more flexibility when writing security rules and when writing queries against the database, as sketched below.
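To make that concrete, here is a sketch with hypothetical chat data. Nested, everything hangs off each room:

{"rooms": {"room1": {"name": "General", "messages": {"m1": {"text": "hi", "author": "alice"}}}}}

Denormalized, the trees are shallow, so a rule or query can target one room's messages without pulling in the room metadata:

{"rooms": {"room1": {"name": "General"}},
 "messages": {"room1": {"m1": {"text": "hi", "author": "alice"}}}}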
I am migrating my PostgreSQL 9.1 schema to Oracle 11g. I could not find online what the maximum length of a CITEXT column is. What size should I use in Oracle for the corresponding VARCHAR2 column?
Note: I know CITEXT provides case-insensitive comparison, but I am not much concerned with that.
According to the citext documentation, citext is just a case-insensitive version of text, and text itself is of unlimited length (well, 1 GB actually). Therefore you cannot assume any meaningful upper limit. You have to ask the app developers for the practical limit of each column.
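One practical step before picking a VARCHAR2 size is to measure what is actually stored. A minimal sketch, assuming a hypothetical users table with an email column (note that Oracle VARCHAR2(n) uses byte semantics by default, so the byte count is the one that matters):

import psycopg2  # or any other PostgreSQL client

conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
cur = conn.cursor()
# length() counts characters, octet_length() counts bytes.
cur.execute("SELECT max(length(email)), max(octet_length(email)) FROM users")
print(cur.fetchone())  # e.g. (120, 134) -> VARCHAR2(200) leaves headroom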
ICU provides a way of cutting down the size of the .dat file. I'm almost certain I don't need most of the encodings that are included by default. If I want to build a CJK .dat file specifically for SQLite, which ones can I cut out?
I just need the tokenizer to work, and possibly collation. It seems that all those character conversions may not really be necessary. At 17 MB, it is too FAT! For every database, we use
PRAGMA encoding = UTF8;
Data Customizer Link: http://apps.icu-project.org/datacustom/
To put it another way, if I'm using UTF-8 in SQLite to collate and index, what parts of the .dat file do I really need? I bet the majority is never used. I suspect I don't need the Charset Mapping Tables, and maybe not some of the Misc data.
From the Data Customizer page:
This tool will generate a data library that can only be used with the 4.8 series of ICU. The help page provides information on how to use this tool.
Charset Mapping Tables (4585 KB) <-- axe?
Break Iterator (1747 KB) <-- seems like i need this
Collators (3362 KB) <-- seems like i need this for sorting (but maybe not)
Rule Based Number Format (292 KB) <-- axe?
Transliterators (555 KB) <-- axe?
Formatting, Display Names and Other Localized Data (856 KB) <-- axe?
Miscellaneous Data (5682 KB) <-- axe?
Base Data (311 KB) <-- seems basic
Update. It seems that everything can be removed except for Base Data and Break Iterator. Regarding the Collators from http://userguide.icu-project.org/icudata:
The largest part of the data besides conversion tables is in collation
for East Asian languages. You can remove the collation data for those
languages by removing the CollationElements entries from those
source/data/locales/*.txt files. When you do that, the collation for
those languages will become the same as the Unicode Collation
Algorithm.
This seems "good enough".
On Collation
Starting in release 1.8, the ICU Collation Service is updated to be
fully compliant to the Unicode Collation Algorithm (UCA)
(http://www.unicode.org/unicode/reports/tr10/) and conforms to ISO
14651. There are several benefits to using the collation algorithms defined in these standards. Some of the more significant benefits
include:
Unicode contains a large set of characters. This can make it difficult
for collation to be a fast operation or require collation to use
significant memory or disk resources. The ICU collation implementation
is designed to be fast, have a small memory footprint and be highly
customizable.
The algorithms have been designed and reviewed by experts in
multilingual collation, and therefore are robust and comprehensive.
Applications that share sorted data but do not agree on how the data
should be ordered fail to perform correctly. By conforming to the
UCA/14651 standard for collation, independently developed
applications, such as those used for e-business, sort data identically
and perform properly.
The ICU Collation Service also contains several enhancements that are
not available in UCA. For example:
Additional case handling: ICU allows case differences to be ignored or
flipped. Uppercase letters can be sorted before lowercase letters, or
vice-versa.
Easy customization: Services can be easily tailored to address a wide
range of collation requirements.
Flexibility: ICU offers both sort key generation and fast incremental
string comparison. It also provides low-level access to collation data
through the collation element iterator.
Update 2. If Break Iterator is removed from the .dat, the following occurs:
sqlite> CREATE VIRTUAL TABLE test USING fts4(tokenize=icu);
sqlite> CREATE VIRTUAL TABLE testaux USING fts4aux(test);
sqlite> .import test.csv test
Error: SQL logic error or missing database
(We're talking about the Data Customizer page.)
I started with the biggest items, and was able to omit these entirely:
Charset mapping tables
Miscellaneous Data
I had to include Collators, but only the languages I was supporting.
I tried to trim Break Iterator, but it broke, so I stopped there. Nothing else is nearly as big.
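If you want to sanity-check a trimmed .dat outside SQLite first, one option is to point ICU at it and exercise collation directly, e.g. with PyICU (the data path is hypothetical, and ICU_DATA must be set before the library loads):

import os
os.environ["ICU_DATA"] = "/path/to/trimmed"  # hypothetical; must precede the import

from icu import Collator, Locale

# With CJK collation data removed, this falls back to root (plain UCA)
# ordering rather than failing outright.
collator = Collator.createInstance(Locale("ja"))
print(sorted(["漢", "ア", "あ"], key=collator.getSortKey))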
Is there a maximum length for a file extension? The longest one I've seen is .compiled (8 chars).
Useless Background
I'm creating an IHttpHandler to return image icons for a specific filename. I'm simply calling FileImage.axd?ext=pptx. I'm generating the files on the fly using SHGetFileInfo, similar to my post for WPF, then caching them locally in a folder with the filename 'pptx.png'. I'd like to validate the length and trim it to prevent a DoS-type attack where someone would try to generate images for an infinite number of junk characters (e.g. FileImage.axd?ext=asdfasdfweqrsadfasdfwqe...).
As far as I know, there is no limit, except the maximum length of the file name. The extension is not treated specially except in FAT16.
I agree with Arkadiy - there is no formal limit now that the DOS 8.3 system is a thing of the past (or other similar, limited systems). I would say that the majority of the extensions I've seen are in the range 1-3; Java uses 4 for .java and 5 for .class. Your example with 8 is longer than I recall. If I were scoping, I'd aim for 'unlimited'; if that's not feasible, allow at least 16 characters - with the confident expectation that in fact 16 would be quite sufficient for current systems.
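For the handler in the question, a guard along these lines would do; a sketch (the 16-character cap follows the suggestion above, and the alphanumeric whitelist is an assumption):

import re

# Accept 1-16 alphanumeric characters; reject anything else before
# it reaches SHGetFileInfo or the icon cache.
EXT_RE = re.compile(r"[A-Za-z0-9]{1,16}")

def safe_extension(ext: str):
    return ext.lower() if EXT_RE.fullmatch(ext) else None

print(safe_extension("pptx"))           # "pptx"
print(safe_extension("a" * 40))         # None: too long
print(safe_extension("../etc/passwd"))  # None: bad characters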