How can I determine how much disk space a database will consume once it has been fully restored? The purpose is to make sure there is enough free storage space before attempting the restore.
There's a nice example in the documentation. I'll pull it in here:
Table 108. Sample PROREST -list output
OpenEdge Release 10.2B1P as of Wed Oct 21 19:01:48 EDT 2009
Area Name: Schema Area
Size: 11264, Records/Block: 32, Area Number: 6, Cluster Size: 1
Area Name: Info Area
Size: 1024, Records/Block: 32, Area Number: 7, Cluster Size: 1
Area Name: Customer/Order Area
Size: 6656, Records/Block: 32, Area Number: 8, Cluster Size: 8
Area Name: Primary Index Area
Size: 112, Records/Block: 1, Area Number: 9, Cluster Size: 8
Area Name: Customer Index Area
Size: 256, Records/Block: 1, Area Number: 10, Cluster Size: 64
Area Name: Order Index Area
Size: 8192, Records/Block: 32, Area Number: 11, Cluster Size: 64
Area Name: Encryption Policy Area
Size: 20448, Records/Block: 32, Area Number: 12, Cluster Size: 64
Area Name: Audit Area
Size: 4608, Records/Block: 32, Area Number: 20, Cluster Size: 8
Area Name: Audit Index
Size: 8704, Records/Block: 32, Area Number: 22, Cluster Size: 8
Use the output of PROREST -list to calculate the size of each restored area as follows:
area-size = (Size / records-per-block) * database-block-size
For example, the size of the restored schema area is:
area-size = (Size / records-per-block) * database-block-size
1,441,792 = (11264 / 32) * 4096
The result (1,441,792 here) is in bytes, so divide it by 1024 to get kilobytes, or by 1024 * 1024 to get megabytes, and so on.
https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/dmadm%2Fprorest-utility.html%23
You'll have to do some math, but you can use prorest's -list parameter.
prorest -list <restoredb> <backupfile>
You'll get a list of each area along with its size and records per block.
Area Name: Schema Area
Size: 12345, Records/Block: 32, Area Number: 6, Cluster Size: 1
Divide the size by the records per block, then multiply that by the block size. Do that for each area, add them up, and that should be your database size.
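If you want to script it, here is a rough sketch (not an official utility) that parses the -list output and applies that formula to every area, then totals the result. The block size is not part of the -list output, so the 4096 used below is only an assumption taken from the documentation example above; substitute your database's actual block size.

# Rough estimate of restored database size from `prorest -list` output.
# Usage: prorest -list <restoredb> <backupfile> | python estimate_restore_size.py
import re
import sys

BLOCK_SIZE = 4096  # assumption: set this to your database's block size

text = sys.stdin.read()
total_bytes = 0
for size, rpb in re.findall(r"Size:\s*(\d+),\s*Records/Block:\s*(\d+)", text):
    # area-size = (Size / records-per-block) * database-block-size
    total_bytes += (int(size) // int(rpb)) * BLOCK_SIZE

print(f"Estimated restored size: {total_bytes} bytes "
      f"({total_bytes / (1024 * 1024):.1f} MB)")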
I have an rdkafka producer written the following way, where localhost:9095 is tunneled to a DigitalOcean server in Germany running a Kafka broker, while I (the localhost) am in New York.
I am noticing that at around ~150 milliseconds of thread::sleep(Duration::from_millis(150)), the last few of my messages stop being acknowledged by the callback (messages 8 and 9, for example), as if they never arrived. I know they did, because I can see them in the Kafka topic.
I'm wondering whether my guess is correct that it is solely the latency between NY and Germany that dictates how long the broker takes to respond, or whether there is some config setting like max.response.wait.ms I could set so the acknowledgements arrive sooner.
Of course, if I increase the sleep time to ~200ms or higher, I can see each message logged loud and clear. If I decrease the sleep to 50ms, the producer is dropped before any messages are sent and they never arrive at the topic... right?
// ProducerLogger is my custom ProducerContext; its delivery callback
// prints the "Produced message ..." lines shown below.
use std::thread;
use std::time::Duration;

use rand::RngCore;
use rdkafka::config::ClientConfig;
use rdkafka::producer::{BaseRecord, ThreadedProducer};

let producer: ThreadedProducer<ProducerLogger> = ClientConfig::new()
    .set("bootstrap.servers", "localhost:9095")
    .create_with_context(ProducerLogger {})
    .expect("Failed to create producer");

for i in 1..10 {
    let mut data = [0u8; 8];
    rand::thread_rng().fill_bytes(&mut data);
    println!("Sending msg with data {:x?}", data);

    producer
        .send(
            BaseRecord::to("rt-test")
                .key(&format!("key-{}", i))
                .payload(&format!("payload-{}", i)),
        )
        .expect("couldn't send message");

    // producer.flush(Duration::from_secs(1));
    thread::sleep(Duration::from_millis(150)); // <--- Parameter to play with
}
Sending msg with data [da, 44, 1c, f0, 6, 1d, da, 9b]
Sending msg with data [37, 82, b2, 58, e2, 91, 40, 33]
Sending msg with data [8d, f8, 5c, df, 68, 12, a1, 26]
Sending msg with data [bd, 13, 7c, 81, 62, 5c, 93, 75]
Produced message key-1 successfully. Offset: 193. Partition :0
Produced message key-2 successfully. Offset: 194. Partition :0
Produced message key-3 successfully. Offset: 195. Partition :0
Produced message key-4 successfully. Offset: 196. Partition :0
Sending msg with data [d, 9d, 1c, 73, a7, 9f, b4, 2]
Produced message key-5 successfully. Offset: 197. Partition :0
Sending msg with data [c9, 1a, 3b, 8c, 31, cc, 84, f4]
Produced message key-6 successfully. Offset: 198. Partition :0
Sending msg with data [8a, 33, 26, 92, 2a, bf, d1, 7d]
Produced message key-7 successfully. Offset: 199. Partition :0
Sending msg with data [78, eb, e2, 41, 8d, b9, 29, 68]
I have a quadtree. The root node (level 0) is positioned at 0,0 by its centre. It has a width of 16, so its corners are at -8,-8 and 8,8. Since it's a quadtree, the root contains four children, each of which contain four children of their own and so on. The deepest level is level 3 at width 2. Here's a dodgy Paint drawing to better illustrate what I mean:
The large numbers indicate the centre position and width of each node. The small numbers around the sides indicate positions.
Given a valid position, how can I figure out what level or size of node exists at that position? It seems like this should be obvious but I can't get my head around the maths. I see the patterns in the diagram but I can't seem to translate it into code.
For instance, the node at position 1,1 is size 2/level 3. Position 4,6 is invalid because it's between nodes. Position -6,-2 is size 4/level 2.
Additional Notes:
Positions are addresses. They are exact and not worldspace, which is why it's possible to have an invalid address.
In practice the root node size could be as large as 4096, or even larger.
Observe that the coordinate values for the centre of each node are always +/- odd multiples of a power-of-2, the latter being related to the node size:
Node size | Allowed centre coordinates | Factor
-----------------------------------------------------
2 | 1, 3, 5, 7, 9, 11 ... | 1 = 2/2
-----------------------------------------------------
4 | 2, 6, 10, 14, 18, 22 ... | 2 = 4/2
-----------------------------------------------------
8 | 4, 12, 20, 28, 36, 44 ... | 4 = 8/2
-----------------------------------------------------
16 | 8, 24, 40, 56, 72, 88 ... | 8 = 16/2
The root node is a special case since it is always centred on 0,0, but in a larger quad-tree the 16x16 nodes would follow the same pattern.
Crucially, both X,Y values of the coordinate must share the same power-of-2 factor. This means that the binary representations of their absolute values must have the same number of trailing zeros. For your examples:
Example | Binary | Zeros | Valid
----------------------------------
X = 1 | 000001 | 0 | Y
Y = 1 | 000001 | 0 | S = 2
----------------------------------
X = 4 | 000100 | 2 | N
Y = 6 | 000110 | 1 |
----------------------------------
X =-6 | 000110 | 1 | Y
Y =-2 | 000010 | 1 | S = 4
Expressions for the desired results:
Size (S) = 2 ^ (Zeros + 1)
Level = [Maximum Level] - Zeros
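If it helps, here is a minimal Python sketch of that rule (Python purely for illustration). It assumes, as in the question, a maximum level of 3 and a root of width 2 ^ (max_level + 1) = 16 centred on 0,0:

def node_at(x, y, max_level=3):
    """Return (size, level) of the node centred at (x, y), or None if
    the position is invalid (i.e. it falls between nodes)."""
    if x == 0 and y == 0:
        return 2 ** (max_level + 1), 0       # the root is the special case
    if x == 0 or y == 0:
        return None                          # only the root sits on an axis
    if abs(x) >= 2 ** max_level or abs(y) >= 2 ** max_level:
        return None                          # outside the root node

    def trailing_zeros(v):
        v = abs(v)
        n = 0
        while v % 2 == 0:
            v //= 2
            n += 1
        return n

    zx, zy = trailing_zeros(x), trailing_zeros(y)
    if zx != zy:
        return None                          # X and Y must share the same power-of-2 factor
    return 2 ** (zx + 1), max_level - zx     # Size = 2^(Zeros+1), Level = Max - Zeros

print(node_at(1, 1))    # (2, 3)  -> size 2, level 3
print(node_at(4, 6))    # None    -> between nodes
print(node_at(-6, -2))  # (4, 2)  -> size 4, level 2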
HTML:
<span class="number"> - Sep 15, 1991<br><strong>Some Number: </strong>123, 123, 145</span>
Scrapy:
samples = response.css('ul li.somthing')
for sample in samples:
    loader = ItemLoader(item=CatelogItem(), selector=sample)
    loader.add_css('some', 'span.number::text')
    yield loader.load_item()
Item.py
some = Field(
    input_processor=MapCompose(str.strip),
    output_processor=Join()
)
Result
- Sep 15, 1991
Expected
- Sep 15, 1991 Some Number: 123, 123, 145
Why does this happen, and how do I get the full value loaded into the ItemLoader?
You need to grab the text of all of the span's nested elements, not just its own text nodes:
loader.add_css('some', 'span.number *::text')
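If you want to see why outside of the spider, here is a small standalone sketch comparing the two selectors directly against the HTML from the question, using Scrapy's Selector:

from scrapy.selector import Selector

html = ('<span class="number"> - Sep 15, 1991<br>'
        '<strong>Some Number: </strong>123, 123, 145</span>')
sel = Selector(text=html)

# 'span.number::text' only matches text nodes whose parent is the span
# itself, so the "Some Number: " text inside <strong> is skipped.
print(sel.css('span.number::text').getall())

# 'span.number *::text' also matches the text nodes of nested elements
# such as <strong>, which is why the full value reaches the ItemLoader.
print(sel.css('span.number *::text').getall())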
Most recent AAPL 10-k XBRL Instance Document for example:
doc <- "https://www.sec.gov/Archives/edgar/data/320193/000032019319000119/0000320193-19-000119-index.htm"
Run xbrlDoAll and xbrl_get_statements from XBRL and finstr packages, respectively
get_xbrl_doc <- xbrlDoAll(doc)
statements <- xbrl_get_statements(get_xbrl_doc)
Error: Each row of output must be identified by a unique combination of keys.
Keys are shared for 34 rows:
* 6, 8
* 5, 7, 9
* 49, 51
* 48, 50
* 55, 57
* 54, 56
* 11, 13
* 10, 12
* 25, 27
* 24, 26
* 59, 61
* 58, 60
* 29, 31
* 28, 30
* 63, 64, 66
* 62, 65
This sequence works perfectly up until 2019, when Apple switched from "XBRL Instance Document" to "Extracted XBRL Instance Document". Has anybody found a workaround?
Not only have they switched that, but the fact IDs now carry some kind of 32-digit key that seems to have no connection with any of the other files in the taxonomy. This is probably related to the iXBRL process, and I've seen it with another company as well.
However, if the problem were just the word "Extracted", you would only need to change the script; I'm guessing that's not the case, and that the real fix involves working out what those 32-digit keys mean.
I'm not an R user and I don't use finstr, but I guess we have the same problem. So the answer to your question would be: you need to write your own parser now, or wait until someone finishes theirs.
We are using Graphite to store stats about our websites. Everything works fine when we look at the data for the last 24 hours or the last 7 days. When we try to look at the last month of data, Graphite does not show anything.
We collect data for one metric every 5 minutes and for the other ones once an hour.
When I use the GUI this "query" works:
width=1188&height=580&target=identifierXYP.value&lineMode=connected&from=-8days
And this one does not return any data
width=1188&height=580&target=identifierXYP.value&lineMode=connected&from=-9days
The only thing that changed was the "from" part.
I already ran
find ./ -type f -name '*.wsp' -exec whisper-resize.py --nobackup {} 5m:365d \;
but it did not help.
whisper-info.py value.wsp outputs:
maxRetention: 157680000
xFilesFactor: 0.5
aggregationMethod: average
fileSize: 2521504
Archive 0
retention: 691200
secondsPerPoint: 10
points: 69120
size: 829440
offset: 64
Archive 1
retention: 2678400
secondsPerPoint: 60
points: 44640
size: 535680
offset: 829504
Archive 2
retention: 31536000
secondsPerPoint: 600
points: 52560
size: 630720
offset: 1365184
Archive 3
retention: 157680000
secondsPerPoint: 3600
points: 43800
size: 525600
offset: 1995904
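For what it's worth, converting those retention figures into days with a quick bit of arithmetic (values copied from the whisper-info.py output above) shows that the highest-resolution archive covers exactly 8 days, which lines up with the point where the -8days query still returns data and the -9days query does not:

# retention / 86400 = coverage in days for each archive
archives = [
    (0, 10, 691200),
    (1, 60, 2678400),
    (2, 600, 31536000),
    (3, 3600, 157680000),
]
for index, seconds_per_point, retention in archives:
    print(f"Archive {index}: {seconds_per_point}s per point, "
          f"{retention / 86400:.0f} days")
# Archive 0: 10s per point, 8 days
# Archive 1: 60s per point, 31 days
# Archive 2: 600s per point, 365 days
# Archive 3: 3600s per point, 1825 days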