If I have a hash table that I know will store 13 items, how can I initialize my table to an appropriate size? I read in my book that the load factor should be at or below 2/3. Does this mean that if I already know that the maximum number of items in my table at any point will be 13, I could do something like:
tableSize = nextPrime((numEntries * 3)/2);
My thinking with the above assignment is that numEntries represents the number 13, and since I know the load factor has to be at or below 2/3, I solve for the table size that makes the ratio exactly 2/3.
You can initialize a Hashtable as new Hashtable(initialCapacity, loadFactor).
In simple terms, the load factor determines when to allocate more memory to the hashtable.
Be aware that, unlike with arrays, you need not specify a capacity when creating a hashtable; an appropriate load factor helps reduce the overhead of repeated memory allocation.
AFAIK a load factor of 2/3 means the hashtable is reallocated (rehashed into a larger table) once it is two-thirds full.
Check out : http://docs.oracle.com/javase/1.4.2/docs/api/java/util/Hashtable.html
If you know that the hashtable will be storing 13 entries, why not initialize it with new Hashtable(13) and not worry about the load factor, which only comes into the picture when a new allocation is needed?
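If you do want to size it up front, here is a minimal sketch (the key names are just illustrative) that keeps 13 entries at or below a 2/3 load factor:

import java.util.Hashtable;

public class HashtableSizing {
    public static void main(String[] args) {
        int numEntries = 13;
        float loadFactor = 2.0f / 3.0f;
        // Smallest capacity with numEntries / capacity <= 2/3; using ceil
        // avoids the truncation in (numEntries * 3) / 2, which yields 19
        // (and 13/19 is already above 2/3) instead of 20.
        int capacity = (int) Math.ceil(numEntries / loadFactor); // 20

        Hashtable<String, Integer> table = new Hashtable<>(capacity, loadFactor);
        for (int i = 0; i < numEntries; i++) {
            table.put("key" + i, i); // stays at or below the load factor, so no rehash
        }
        System.out.println(table.size()); // 13
    }
}

Note that java.util.Hashtable does not require a prime capacity; it rehashes to roughly double the size on its own, so the nextPrime step matters more for hand-rolled open-addressing tables.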
I want to publish metrics about my RocksDB instance, including the disk size of each column over time. I've come across a likely API method to calculate disk size for a column:
GetApproximateSizes():
Status s = GetApproximateSizes(options, column_family, ranges.data(), NUM_RANGES, sizes.data());
This is a nice API, but I don't know how to provide a Range that will specify my entire column. Is there a way to do so without finding the min/max key in the column?
For the whole database, you can approximate it by using 0x00 (or the empty byte string) as the start key and an arbitrarily big key, such as 0xFFFFFF, as the end key.
Otherwise, if the column's keys share a common prefix, use the following function to compute the end key:
def strinc(key):
    # Strip trailing 0xff bytes, then bump the last remaining byte.
    # Assumes key is non-empty and not made up entirely of 0xff bytes.
    key = key.rstrip(b"\xff")
    return key[:-1] + bytes([key[-1] + 1])
strinc computes the first byte string that is greater than every string prefixed by key; together, key and strinc(key) describe the whole keyspace having key as a prefix.
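As a quick sanity check (the prefix value is just an illustration), the pair (key, strinc(key)) brackets exactly the keys under that prefix:

prefix = b"metrics/"
begin, end = prefix, strinc(prefix)       # end == b"metrics0"

assert begin <= b"metrics/cpu" < end      # any key under the prefix is inside
assert not (begin <= b"metricsz" < end)   # keys past the prefix fall outside
# (begin, end) can then serve as the single Range handed to GetApproximateSizes().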
My Tcl application uses dictionaries to store large databases and has to ensure memory does not blow up significantly. I am looking for a simple approach to freeing the memory of an entire dictionary.
In the code sequence below, I am reassigning a blank dict to a variable which already holds a large dict. I can see that the contents of the new dict are empty, but will this also free the memory? In other words, is it equivalent to executing an unset for every key-value pair?
set db [dict create]
dict set db key1 value1
dict set db key2 value2
# Will this next step recover memory of all previous key-value assignments?
set db [dict create]
Background Info: Tcl's language model is that every value is a string. Strictly, at the implementation level every value is formally a subtype of string and values are passed by reference, with the references being immutable (and hence copied when they are written to) if they are shared. Unshared values are writable, though Tcl's got a very strict interpretation of that internally and the details are typically hidden entirely from scripts; you're not supposed to think of this stuff until you are dealing with optimising performance and even then not much.
So…
Consequences: The memory used to implement a dictionary is reclaimed automatically when the last reference to that dictionary goes away, exactly as with lists or large strings. (Well, small strings and numbers too, but they're usually not a Big Deal.)
So, if I do this:
set db [dict create big "bigger" biggest "even more"]
set db2 $db
unset db
then the memory is still allocated as the db2 variable is holding a reference. Replacing unset db with set db {} or set db [dict create] will have pretty much the same effect; that original dictionary is still hanging around. However, once the last reference to it goes away (which could be even from inside another dictionary or list) then the memory is tidied up.
So yes, in your exact example, the memory is freed. We can prove this by running this loop:
while true {
    set db [dict create]
    dict set db key1 value1
    dict set db key2 value2
    set db [dict create]
}
and seeing that the OS thinks that the memory usage for the process is static even if the CPU usage is (close to) 100% of a CPU core. If it leaked memory, you would see it! (You can confirm that this is a reasonable test by adding an lappend save $db in that loop and seeing that memory usage grows fast. You'll want to kill that process fairly quickly once you've seen that it indeed is a memory hog of the worst kind…)
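For contrast, the leaking variant described above looks like this (deliberately pathological; kill the process once memory starts climbing):

set save {}
while true {
    set db [dict create]
    dict set db key1 value1
    dict set db key2 value2
    # Each appended copy keeps a live reference, so no dictionary
    # is ever reclaimed and memory grows without bound.
    lappend save $db
}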
I am not sure if this is possible using a SQL statement.
I am looking for a SQL query that reports the number of CPUs available on the hardware, to be included in a PARALLEL clause. We do not have access to our Linux environment, hence we are seeking any possibility of obtaining this value. Is it possible using SQL? Kindly suggest.
Actually, my index creation script is taking longer than expected, and it was implemented with the "NOLOGGING PARALLEL COMPRESS" clause.
Kindly suggest whether leaving out the number "N" in the PARALLEL and COMPRESS clauses is OK.
How does Oracle manage the degree of parallelism if we omit the CPU count in the PARALLEL clause?
In SQL*Plus you can use the command below to see the number of CPUs:
show parameter CPU_COUNT;
SQL> show parameter CPU_COUNT;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     2
Alternatively, you can query v$parameter to get the same value:
SQL> select value from v$parameter where name like 'cpu_count';
VALUE
--------------------------------------------------------------------------------
2
Creating an index with NOLOGGING PARALLEL COMPRESS is optional, but those clauses bring some value when you use them. Compressed indexes save space and reduce CPU time because they occupy fewer blocks. If you have to scan 100 blocks, you do 100 latches and 100 consistent gets, which takes a certain amount of CPU. If the index is compressed, you might scan maybe 60 blocks and use 60% of the CPU as before. In addition, you store more index entries per leaf block.
For how Oracle works in parallel, read below:
https://docs.oracle.com/cd/E11882_01/server.112/e25523/parallel002.htm
If the PARALLEL clause is specified but no degree of parallelism is listed, the object gets the default DOP. Default parallelism uses a formula to determine the DOP based on the system configuration, as in the following:
For a single instance: DOP = PARALLEL_THREADS_PER_CPU x CPU_COUNT
For an Oracle RAC configuration: DOP = PARALLEL_THREADS_PER_CPU x CPU_COUNT x INSTANCE_COUNT
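For example, with the CPU_COUNT of 2 shown above and PARALLEL_THREADS_PER_CPU at its common default of 2, the default single-instance DOP would be 2 x 2 = 4.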
You can query v$parameter to get both of the above parameters, cpu_count and parallel_threads_per_cpu:
SQL> SELECT name, value FROM v$parameter WHERE UPPER(name) IN ('CPU_COUNT','PARALLEL_THREADS_PER_CPU');
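Tying this back to the original script, here is a sketch of the DDL (table and index names are hypothetical) that leaves the DOP for Oracle to compute:

-- With no integer after PARALLEL, Oracle derives the default DOP
-- (PARALLEL_THREADS_PER_CPU x CPU_COUNT on a single instance).
CREATE INDEX invoices_inv_no_idx ON invoices (invoice_no)
  NOLOGGING
  PARALLEL
  COMPRESS;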
I have this strange situation where I am currently doing this:
if (!this.randomize(delay) with {delay inside {strm};})
......
where
rand bit [2:0] delay;
bit [15:0] strm [bit [15:0]];
Now I want this delay to go round-robin from 0 -> ... -> 7 -> 0 and so on, but it should satisfy the condition that it is present in strm. So I want something like:
while (!(delay inside {strm})) begin
    delay += 1;
end
Other than going through each and every index (2^16-1), is there any other way of finding whether it exists in this packed + unpacked array? Thanks in advance!
If you are not aware, you are declaring strm as an associative array (aka a hash), where the key and the data are both 16-bit values.
If you want strm to be a fixed-size 2^16 entry array of 16-bit values, the declaration would be:
bit[15:0] strm [2**16];
With associative arrays you can use array.exists(key) to determine if the key is in the array. But it seems this may not be what you are trying to do.
With unpacked arrays, you can use the inside operator to test for set membership, but my simulator (Incisive) does not like this usage outside of a randomization call, so you may need to search for it yourself. However, you don't have to iterate the entire array if you sort it first, and you can do that with array.sort().
I would also point out that you are looking for a 3-bit value in some sort of array of 16-bit values. This doesn't really make sense, so you may want to clarify your question.
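If the intent is to test whether a value (rather than a key) occurs in the associative array, a minimal sketch of the round-robin search could look like this; it assumes "present in strm" means present among strm's values:

module tb;
  bit [2:0]  delay;
  bit [15:0] strm [bit [15:0]];

  // Linear scan over the stored entries only (not all 2^16 slots);
  // returns 1 if val occurs among strm's values.
  function automatic bit value_exists(bit [15:0] val);
    foreach (strm[k])
      if (strm[k] == val) return 1;
    return 0;
  endfunction

  initial begin
    strm[16'h0001] = 16'h0003;
    strm[16'h0002] = 16'h0005;

    delay = 0;
    // Round-robin 0 -> 7 -> 0; this would spin forever if no value
    // in 0..7 were stored, so guard against that in real code.
    while (!value_exists(delay))
      delay += 1; // 3-bit counter wraps 7 -> 0 automatically

    $display("delay = %0d", delay); // prints 3
  end
endmodule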
I'm extremely new to AX and am starting with something very simple. I need to increase the size of a column named Invoice. In the AOT, the StringSize property on the column is greyed out so I cannot change it there.
In SQL Server (2005) the column is a nvarchar(20) so I'm thinking AX might just be using whatever DataType is defined in the db.
If I attempt to increase the size of the column in SQL Server it tells me that the table would need to be dropped and re-created.
What is the best way to increase a column size in AX?
To increase the capacity of the column you would normally change the StringSize property on the InvoiceId extended data type.
However, in this case the InvoiceId extended data type extends the Num extended data type, so you will need to make the change there. This size increase will also affect all other extended data types that extend Num.
This extended data type can be found in the AOT at \Data Dictionary\Extended Data Types\Num.