What is the maximum value of a sequence in MariaDB?

What is the maximum value of a sequence in MariaDB?
Normally we create a table to store the sequence number.

The maximum of a sequence number or auto-increment column depends on the column (and its type) which stores the sequence number:
TINYINT UNSIGNED: 0xFF
SMALLINT UNSIGNED: 0xFFFF
MEDIUMINT UNSIGNED: 0xFFFFFF
INT UNSIGNED: 0xFFFFFFFF
BIGINT UNSIGNED: 0xFFFFFFFFFFFFFFFF
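In decimal, these limits work out as follows; a minimal C sketch for illustration (it uses the stdint.h maxima; MEDIUMINT has no 3-byte stdint counterpart, so its maximum is spelled out):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Decimal equivalents of the hex maxima listed above. */
    printf("TINYINT UNSIGNED   max: %llu\n", (unsigned long long)UINT8_MAX);
    printf("SMALLINT UNSIGNED  max: %llu\n", (unsigned long long)UINT16_MAX);
    printf("MEDIUMINT UNSIGNED max: %llu\n", (unsigned long long)0xFFFFFF);
    printf("INT UNSIGNED       max: %llu\n", (unsigned long long)UINT32_MAX);
    printf("BIGINT UNSIGNED    max: %llu\n", (unsigned long long)UINT64_MAX);
    return 0;
}

This prints 255, 65535, 16777215, 4294967295 and 18446744073709551615 respectively.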

Related

I need to convert an int value of seconds into a byte depending on the percentage of the original value

Let me clarify.
I have an int value of 3600. The timer ticks down, and I now have 3000. How much is that compared to the original value, but on the scale of a byte (0-255)?
So with 3600 as the original value, half of it (1800) as the current value should map to half of a byte.
Further example:
Starting value 600 ---> current value is now 500. This gives a byte value of about 212.
Linear dependence:
bytevalue = 255 * currentvalue / maxvalue
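One caveat worth noting in C-style integer math, sketched below (the function name and types are illustrative, not from the question): multiply before dividing, and in a type wide enough to hold 255 * currentvalue, or the result will truncate.

#include <stdint.h>

/* Map currentvalue in [0, maxvalue] linearly onto [0, 255]. */
uint8_t scale_to_byte(uint16_t currentvalue, uint16_t maxvalue) {
    /* 255UL forces the product into (at least) 32-bit math, so it
       cannot overflow a 16-bit int (as it would on AVR Arduinos). */
    return (uint8_t)(255UL * currentvalue / maxvalue);
}

For the examples above, scale_to_byte(1800, 3600) gives 127 (about half of 255) and scale_to_byte(500, 600) gives 212.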

Will this addition always produce a unique number?

I don't know how to test the following exhaustively without brute-forcing it, so I'll just ask whether the concept is sound.
I have two 64-bit unsigned int variables, which are both used as bit fields. Both variables can have up to 60 bits set, in positions 1-60. Any number of those 60 bits can be set, and they can be set in any order. Bits 61, 62, and 63 never get set in either variable. Additionally, one, and only one, of the variables always has the 64th bit set.
Given the above description, am I correct in thinking that hash will be unique for all possible combinations of field1 and field2?
uint64_t field1 = ...;
uint64_t field2 = ...;
uint64_t hash = field1 + field2;
No. Simple example:
0b0011 + 0b0100 = 0b0111
0b0010 + 0b0101 = 0b0111
It is not possible to provide a unique n-bit hash for all pairs of n-bit values. Note that there are about 2^60 * 2^60 = 2^120 combinations here, so the 2^64 possible 64-bit hashes cannot fit them all.
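A minimal C sketch of that counterexample (the two pairs are the arbitrary 4-bit patterns above, widened to 64 bits):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t a1 = 0x3, a2 = 0x4;  /* 0b0011 and 0b0100 */
    uint64_t b1 = 0x2, b2 = 0x5;  /* 0b0010 and 0b0101 */
    /* Two different input pairs, one identical sum: a collision. */
    printf("%llu %llu\n",
           (unsigned long long)(a1 + a2),
           (unsigned long long)(b1 + b2));  /* prints "7 7" */
    return 0;
}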

What is meant by Extended Data Type in VARCHAR2?

What is the difference between VARCHAR2(32767) and VARCHAR2(4000), and when is each used?
Are these 12c or 11g features?
Will any other datatype accept the maximum value?
When you work with 11g or earlier, the max size for a column of type VARCHAR2 is 4000 bytes, but in PL/SQL procedures or functions, the max size for a variable or parameter of type VARCHAR2 is 32767 bytes.
This page could be useful
https://blogs.oracle.com/oraclemagazine/working-with-strings
See the official documentation (http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements001.htm#i54330):
Variable-length character string having maximum length size bytes or characters. Maximum size is 4000 bytes or characters, and minimum is 1 byte or 1 character. You must specify size for VARCHAR2. BYTE indicates that the column will have byte length semantics; CHAR indicates that the column will have character semantics.
But in Oracle Database 12c it may be 32767 (http://docs.oracle.com/database/121/SQLRF/sql_elements001.htm#SQLRF30020):
Variable-length character string having maximum length size bytes or characters. You must specify size for VARCHAR2. Minimum size is 1 byte or 1 character. Maximum size is:
32767 bytes or characters if MAX_STRING_SIZE = EXTENDED
4000 bytes or characters if MAX_STRING_SIZE = STANDARD

Can I convert a decimal int to a HEX number?

I would like to convert a decimal number (between 0 and 65536) to a hex number. Can I do this in an Arduino sketch? Thanks.
You can use sprintf to format a number as hex, e.g. something like:
// make sure the integer is in the desired range first
myinteger = min(max(myinteger, 0), 65535);
// buffer big enough for 4 hex digits + terminating null
char hexbuffer[5];
sprintf(hexbuffer, "%04x", myinteger);
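For completeness, here is a minimal sketch showing the above in context (the value 4660 is an arbitrary example):

void setup() {
  Serial.begin(9600);
  long myinteger = 4660;                        // 0x1234 in hex
  myinteger = min(max(myinteger, 0L), 65535L);  // clamp to what 4 hex digits can hold
  char hexbuffer[5];                            // 4 digits + terminating null
  sprintf(hexbuffer, "%04x", (unsigned int)myinteger);
  Serial.println(hexbuffer);                    // prints "1234"
}

void loop() {}

Arduino's built-in Serial.print(myinteger, HEX) also prints in hex, but without the leading-zero padding.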

A negative unsigned int?

I'm trying to wrap my head around the TrueType specification. On this page, in the section 'cmap' format 4, the parameter idDelta is listed as an unsigned 16-bit integer (UInt16). Yet, further down, a few examples are given, and here idDelta is given the values -9, -18, -27 and 1. How is this possible?
This is not a bug in the spec. The reason they show negative numbers in the idDelta row for the examples is that "All idDelta[i] arithmetic is modulo 65536" (quoted from the section just above). Here's how that works.
The formula to get the glyph index is
glyphIndex = idDelta[i] + c
where c is the character code. Since this expression must be computed modulo 65536, it is equivalent to the following expression if you were using integers larger than 2 bytes:
glyphIndex = (idDelta[i] + c) % 65536
idDelta is a u16, so let's say it had the maximum value 65535 (0xFFFF); then glyphIndex would be equal to c - 1, since:
0xFFFF + 2 = 0x10001
0x10001 % 0x10000 = 1
You can think of this as a 16-bit integer wrapping around to 0 when an overflow occurs.
Now remember that a modulo can be computed as repeated subtraction, keeping what remains. In this case, since idDelta is only 16 bits, the modulo needs at most one subtraction: the largest value you can get from adding two 16-bit integers is 0x1FFFE, which is smaller than 0x20000. That means a shortcut is to subtract 65536 (0x10000) once instead of performing the modulo:
glyphIndex = (idDelta[i] - 0x10000) + c
And this is what the example shows as the values in the table. Here's an actual example from a .ttf file I've decoded:
I want the index for the character code 97 (lowercase 'a').
97 is greater than 32 and smaller than 126, so we use index 2 of the mappings.
idDelta[2] == 65507
glyphIndex = (65507 + 97) % 65536 == 68, which is the same as (65507 - 65536) + 97 == 68
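A minimal C sketch of that arithmetic: with uint16_t, the wraparound performs the modulo automatically (values taken from the example above):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t idDelta = 65507;  /* stored as unsigned in the font */
    uint16_t c = 97;           /* character code for 'a' */
    /* The addition promotes to int, but casting back to uint16_t
       reduces the result modulo 65536, exactly as the spec requires. */
    uint16_t glyphIndex = (uint16_t)(idDelta + c);
    printf("%u\n", (unsigned)glyphIndex);  /* prints 68 */
    return 0;
}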
The definition and use of idDelta on that page is not consistent. In the struct subheader it is defined as an int16, while a little earlier the same subheader is listed as UInt16*4.
It's probably a bug in the spec.
If you look at actual implementations, like this one from Perl Tk, you'll see that idDelta is usually given as signed:
typedef struct SUBHEADER {
    USHORT firstCode;      /* First valid low byte for subHeader. */
    USHORT entryCount;     /* Number valid low bytes for subHeader. */
    SHORT  idDelta;        /* Constant adder to get base glyph index. */
    USHORT idRangeOffset;  /* Byte offset from here to appropriate
                            * glyphIndexArray. */
} SUBHEADER;
Or see the implementation from libpdfxx:
struct SubHeader
{
    USHORT firstCode;
    USHORT entryCount;
    SHORT  idDelta;
    USHORT idRangeOffset;
};
