I'm studying for my Cisco CCENT and I'm having a really hard time understanding wildcard masking. This video was pretty straightforward and simple:
Wildcard Mask Video
However, when applied to this question it yields the wrong answer:
You need to create a wildcard mask for the entire Class B private IPv4 address space
172.16.0.0/16 through 172.31.0.0/16. What is the wildcard mask?
Based on the video, one is tempted to answer 0.0.255.255; however, this is incorrect, the correct answer being 0.15.255.255???
No matter how much I tinker with the bits to try to make sense of it, I feel I am missing something. Can someone explain?
It occurs to me they could be super strict and only want the wildcard mask for the given range, but in that case I would answer 2^5 for 32 = 255.248.0.0 = WCM 0.7.255.255. Alas, this was not the answer either, not even close. What am I missing?
Here is another similar question...
Which address and wildcard mask combination will match all IPv4 addresses in the networks
192.168.0.0/24 through 192.168.63.0/24?
What I wanted to say: 192.168.0.0 0.0.0.255
Answer: 192.168.0.0 0.0.63.255
Create an IP slide rule with the mask values on the first row and the bit values on the second row.
Mask: 128 192 224 240 248 252 254 255
Bit:  128  64  32  16   8   4   2   1
Now you need to convert the IP addresses to binary. Here are the examples:
Mask: 128 192 224 240 248 252 254 255
Bit:  128  64  32  16   8   4   2   1

192.168.  0   0   0   0   0   0   0   0 .00000000   (third octet of 192.168.0.0)
192.168.  0   0   1   1   1   1   1   1 .00000000   (third octet of 192.168.63.0)

x.x = 192.168, since those octets are the same.
Now find, from the IP slide rule, the last bit position where the two addresses are still identical. In this case it is the 64 bit, and the mask for 64 is 192. Now subtract the new mask from 255:

  255.255.255.255
- 255.255.192.  0
=   0.  0. 63.255

The wildcard mask is 0.0.63.255, so the answer is 192.168.0.0 0.0.63.255.
You need to create a wildcard mask for the entire Class B private IPv4 address space
172.16.0.0/16 through 172.31.0.0/16. What is the wildcard mask?
Answer: 0.15.255.255
Mask: 128 192 224 240 248 252 254 255
Bit:  128  64  32  16   8   4   2   1

172.      0   0   0   1   0   0   0   0 .0.0   (second octet of 172.16.0.0)
172.      0   0   0   1   1   1   1   1 .0.0   (second octet of 172.31.0.0)
Now find, from the IP slide rule, the last bit position where the two addresses are still identical. In this case it is the 16 bit, and the mask for 16 is 240. Now subtract the new mask from 255:

  255.255.255.255
- 255.240.  0.  0
=   0. 15.255.255

The wildcard mask is 0.15.255.255, so the answer is 172.16.0.0 0.15.255.255.
The results of the wildcard mask calculation give the first and last IP addresses in the wildcard-mask range.
If it were only 192.168.0.0/24, your output would be 0.0.0.255, but your question is a combination of 192.168.0.0/24 through 192.168.63.0/24, so you must calculate it.
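The whole slide-rule procedure boils down to one operation: XOR the first and last network addresses, then turn every bit from the highest differing bit downward into a "don't care" (1) bit. A minimal Python sketch (the function name is mine, for illustration):

```python
import ipaddress

def wildcard_for_range(first_net, last_net):
    # XOR the first and last network addresses; the highest set bit of
    # the result marks the last bit position where the two still agree.
    a = int(ipaddress.IPv4Address(first_net))
    b = int(ipaddress.IPv4Address(last_net))
    diff = a ^ b
    # From that bit downward everything is "don't care" (1) in the
    # wildcard mask -- the 255-minus-mask step of the slide rule.
    wild = (1 << diff.bit_length()) - 1
    return str(ipaddress.IPv4Address(wild))

print(wildcard_for_range("192.168.0.0", "192.168.63.0"))  # 0.0.63.255
print(wildcard_for_range("172.16.0.0", "172.31.0.0"))     # 0.15.255.255
```

Note this assumes the range spans whole power-of-two blocks, as both exam questions do.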
Reason behind the IP ranges for these classes:
Class A starts from 0 to 126 (127)
Class B starts from 128 to 191 (63)
Class C starts from 192 to 223 (31)
Why do we assign Class A IPs from 0 to 127 and not 0 to 100 or 200? Is there any specific reason for this kind of allocation?
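The boundaries fall where they do because the class is encoded in the leading bits of the first octet, so every boundary has to sit on a power of two; a range like 0 to 100 cannot be expressed as a binary prefix. A small Python sketch of the rule (the function name is mine):

```python
def ipv4_class(first_octet):
    # The class is determined by the leading bits of the first octet:
    # 0xxxxxxx -> A, 10xxxxxx -> B, 110xxxxx -> C,
    # 1110xxxx -> D (multicast), 1111xxxx -> E (experimental)
    if first_octet < 0b10000000:   # 0-127
        return "A"
    if first_octet < 0b11000000:   # 128-191
        return "B"
    if first_octet < 0b11100000:   # 192-223
        return "C"
    if first_octet < 0b11110000:   # 224-239
        return "D"
    return "E"                     # 240-255

print(ipv4_class(172))  # B
```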
I have the following dataframe. I would like to know which bacteria contribute more when comparing the location of the bacteria (categorical) and the temperature (numeric).
For instance, at the end I would like to be able to say that a certain bacterial type is more frequently found in a certain location when looking at the temperature.
Bacillus Lactobacillus Janibacter Brevibacterium Lawsonella Location temperature
Sample1 2 30 164 8 21 48 bedroom 27
Sample2 0 211 0 996 195 108 bedroom 35
Sample3 1 938 1 21 38 43 pool 45
Sample4 0 95 17 1 4 334 pool 10
Sample5 0 192 91 25 1207 1659 soil 14
Sample6 0 12 33 6 12 119 soil 21
Sample7 0 16 3 0 0 805 soil 12
The idea is to run randomForest to select those features (bacteria) that are most important with respect to both the location and the temperature.
Is randomForest suitable for this? When I run the following command I get this error:
randomForest(Location+Temperature ~.,data=mydf)
Error in Location + Temperature : non-numeric argument to binary operator.
From the error it looks like I cannot use a continuous and a categorical variable together as the response. How can I fix this?
Would converting the numeric temperature variable into ranges of temperatures, as a categorical variable, be a solution?
In fact I tried it, and it worked: I converted the numeric temperature to ranges and pasted it onto the location, so that I have a combination of location and temperature:
randomForest(Location_temperature ~.,data=dat)
I get the list of important bacteria, which is what I was looking for. Now how can I know which one contributes more to one location or another, since my model was using all sites? For example, how can I check that an important variable (let's say Bacillus is the most important in the randomForest model) is important in the pool location (how much variation does it explain in the pool)?
Hope it is clear....
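For what it's worth, the combined-label trick carries over outside R too; here is a hypothetical scikit-learn sketch of the same idea (the toy data and names are mine, not from the original dataframe):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy counts for two made-up bacteria across six samples.
X = np.array([[2, 30], [0, 211], [1, 938], [0, 95], [0, 192], [0, 12]])
# Combined label: location pasted together with a temperature range,
# mirroring randomForest(Location_temperature ~ ., data=dat).
y = ["bedroom_hot", "bedroom_hot", "pool_hot",
     "pool_cold", "soil_cold", "soil_cold"]

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.feature_importances_)  # one importance score per bacterium
```

For the per-location question, one common approach is one-vs-rest: refit with a binary label (pool vs. not pool) and read off the importances of that model alone.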
I feel dumb, but I can't see the time step used to print the following graph from this data (it was retrieved via tcpdump, and I'm supposed to produce the same kind of plot on my own for various websites):
18:43:39.577369 0 out
18:43:39.577449 0 out
18:43:39.819272 0 in
18:43:39.819300 0 out
18:43:39.819531 194 out
18:43:39.827914 0 out
18:43:39.829722 0 in
18:43:39.829741 0 out
18:43:39.829944 194 out
18:43:40.059952 0 in
18:43:40.061021 1448 in
18:43:40.061050 0 out
18:43:40.061108 1448 in
18:43:40.061124 0 out
18:43:40.061163 1200 in
18:43:40.061176 0 out
18:43:40.064159 0 in
18:43:40.064225 0 out
18:43:40.064864 194 out
18:43:40.069418 1448 in
18:43:40.069436 0 out
18:43:40.070015 859 in
18:43:40.070023 0 out
18:43:40.076474 126 out
18:43:40.081113 0 in
18:43:40.082162 1448 in
18:43:40.082174 0 out
18:43:40.082194 1448 in
18:43:40.082199 0 out
18:43:40.082208 1200 in
18:43:40.082212 0 out
18:43:40.094615 1448 in
18:43:40.094636 0 out
etc
Any help would be greatly appreciated; I really need to know this quickly!
The data has a time stamp, a packet size (bytes) and an indication of in or out.
The graph divides time into slots of 10 ms and sums up the bytes sent (out) and received (in) within each time slot. A data point is created at the end of each time slot.
E.g. between 30 and 40 ms, packets of sizes 1448, 1448 and 1200 are received, accounting for a data point of ca. 4100 at 40 ms in the red graph.
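The binning described above can be reproduced in a few lines of Python (stdlib only; the sample lines are taken from the dump above):

```python
from collections import defaultdict
from datetime import datetime

dump = """\
18:43:40.059952 0 in
18:43:40.061021 1448 in
18:43:40.061108 1448 in
18:43:40.061163 1200 in
18:43:40.064864 194 out
"""

SLOT = 0.010  # 10 ms time slots
bins = defaultdict(lambda: {"in": 0, "out": 0})
for line in dump.splitlines():
    ts, size, direction = line.split()
    t = datetime.strptime(ts, "%H:%M:%S.%f")
    seconds = t.hour * 3600 + t.minute * 60 + t.second + t.microsecond / 1e6
    bins[int(seconds / SLOT)][direction] += int(size)

# Each bin becomes one data point at the end of its slot.
for slot, totals in sorted(bins.items()):
    print(round(slot * SLOT, 3), totals)
```

The three incoming packets of 1448, 1448 and 1200 bytes land in one slot and sum to 4096, matching the "ca. 4100" data point in the red graph.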
Thank you jakub and Hack-R!
Yes, these are my actual data. The data I am starting from are the following:
[A] #first, longer dataset
CODE_t2 VALUE_t2
111 3641
112 1691
121 1271
122 185
123 522
124 0
131 0
132 0
133 0
141 626
142 170
211 0
212 0
213 0
221 0
222 0
223 0
231 95
241 0
242 0
243 0
244 0
311 129
312 1214
313 0
321 0
322 0
323 565
324 0
331 0
332 0
333 0
334 0
335 0
411 0
412 0
421 0
422 0
423 0
511 6
512 0
521 0
522 0
523 87
In the above table we can see the 44 land use CODEs (which I inappropriately named "class" in my first entry) for a certain city. Some values are just 0, meaning that there are no land uses of that type in that city.
Starting from this table, which displays all the land use types for t2 and their corresponding values ("VALUE_t2"), I have to reconstruct the previous amount of land use ("VALUE_t1") for each type.
To do so, I have to add and subtract the value for each land use (if not 0) using the "change land use table" from t2 to t1, which is the following:
[B] #second, shorter dataset
CODE_t2 CODE_t1 VALUE_CHANGE1
121 112 2
121 133 12
121 323 0
121 511 3
121 523 2
123 523 4
133 123 3
133 523 4
141 231 12
141 511 37
So, in order to get VALUE_t1 from VALUE_t2 I have, for instance, to subtract 2 + 12 + 0 + 3 + 2 hectares (the first 5 values of the second, shorter table) from the value of land use type/code 121 in the first, longer table (1271 ha), and add 2 hectares to land type 112, 12 hectares to land type 133, 3 hectares to land type 511 and 2 hectares to land type 523. I have to do that for all the land use types different from 0, and later also from t1 to t0.
What I need is a sort of loop that would both add and subtract, for each land use type/code, the values to go from VALUE_t2 to VALUE_t1, and then from VALUE_t1 to VALUE_t0.
Once I estimated VALUE_t1 and VALUE_t0, I will put the values in a simple table showing the relative variation (here the values are not real):
CODE VALUE_t0 VALUE_t2 % VAR t2-t0
code1 50 100 ((100-50)/50)*100
code2 70 80 ((80-70)/70)*100
code3 45 34 ((34-45)/45)*100
What I could do so far is:
land_code <- names(A)[-1]
land_code
A$VALUE_t1 <- for(code in land_code{
cbind(A[1], A[land_code] - B[match(A$CODE_t2, B$CODE_t2), land_code])
}
If I use the loop I get an error, while if I take it away:
A$VALUE_t1 <- cbind(A[1], A[land_code] - B[match(A$CODE_t2, B$CODE_t2), land_code])
it works, but I don't really get what I want... so far I have been working on how to get a new column containing the new "add & subtract" values, but I haven't succeeded yet. So I worked on getting a new column that would at least match the land use types first, to then include the "add and subtract" formula.
Another problem is that, by using "match", I get a shorter A$VALUE_t1 table (13 rows instead of 44), while I would like to keep all the land use types in dataset A, because I will then have to match it with the table including VALUES_t0 (which I haven't shown here).
Sorry that I cannot do better than this at the moment... I hope to have explained better what I have to do. I am extremely grateful for any help you can provide.
thanks a lot
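Not an R answer, but the add/subtract bookkeeping itself can be sketched with plain dicts in Python, using the rows shown above (only the subset of table [A] touched by table [B] is included here):

```python
# VALUE_t2 for the codes touched by table [B] (subset of table [A]).
value_t2 = {121: 1271, 112: 1691, 133: 0, 323: 565, 511: 6,
            523: 87, 123: 522, 141: 626, 231: 95}
# (CODE_t2, CODE_t1, VALUE_CHANGE1) rows of table [B].
changes = [
    (121, 112, 2), (121, 133, 12), (121, 323, 0), (121, 511, 3),
    (121, 523, 2), (123, 523, 4), (133, 123, 3), (133, 523, 4),
    (141, 231, 12), (141, 511, 37),
]

value_t1 = dict(value_t2)          # start from t2, keeping every code
for code_t2, code_t1, ha in changes:
    value_t1[code_t2] -= ha        # hectares that changed into code_t2 ...
    value_t1[code_t1] += ha        # ... belonged to code_t1 back at t1

print(value_t1[121])  # 1271 - (2 + 12 + 0 + 3 + 2) = 1252
print(value_t1[112])  # 1691 + 2 = 1693
```

Translated back to R, the same effect is typically achieved with two grouped sums over B (one by CODE_t2, one by CODE_t1) merged onto table A, which also keeps all 44 rows instead of the 13 that match returns.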
I have just built a proof of concept for an ASP.NET MVC controller to (1) generate a barcode from user input using Barcode Rendering Framework and (2) embed it in a PDF document using wkhtmltopdf.exe.
Before telling my client it's a working solution, I want to know it's not going to bring down their website. My main concern is long-term reliability: whether, for instance, creating and disposing the unmanaged system process for wkhtmltopdf.exe might leak something. (Peak performance and load are not expected to be an issue; only a few requests per minute at peak.)
So, I ran a couple of tests from the Windows command line:
(1) 1,000 Requests in Sequence (ie 1 at a time)
for /l %i in (1,1,1000) do curl ^
"http://localhost:8003/Home/Test?text=Iteration_%i___012345&scale=2&height=50" -o output.pdf
(2) Up to 40 requests sent within 2 seconds
for /l %i in (1,1,40) do start "Curl %i" curl ^
"http://localhost:8003/Home/Test?text=Iteration_%i___012345&scale=2&height=50" -o output%i.pdf
And I recorded some performance counters in perfmon before, during and after. Specifically, I looked at total processes, threads, handles, memory and disk use, both machine-wide and for the IIS process specifically.
So, my questions:
1) What would you consider acceptable evidence that the solution looks to be at low risk of bringing down the server? Would you amend what I've done, or would you do something completely different?
2) Given my concern is reliability, I think that the 'Before' vs 'After' figures are the ones I most care about. Agree or not?
3) Looking at the Before vs After figures, the only concern I see is the 'Processes Total Handle Count'. I conclude that launching wkhtmltopdf.exe nearly a thousand times has probably not leaked anything or destabilised the machine. But I might be wrong and should run the same tests for hours or days to reduce the level of doubt. Agree or not?
(The risk level: A couple of people's jobs might be on the line if it went pear-shaped. Revenue on the site is £1,000s per hour).
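As a cross-check on the handle-count question, the same "spawn many children, compare before/after" idea can be smoke-tested in a few lines; a hypothetical Python sketch (a trivial child process stands in for wkhtmltopdf.exe, and the /proc trick is Linux-only; on Windows you would watch Handle Count in perfmon as you already do):

```python
import os, subprocess, sys

def open_fd_count():
    # Number of open file descriptors for this process (Linux /proc);
    # returns -1 where /proc is unavailable.
    path = "/proc/self/fd"
    return len(os.listdir(path)) if os.path.isdir(path) else -1

before = open_fd_count()
for _ in range(50):
    # Spawn and fully dispose of a child process, as the controller
    # does with wkhtmltopdf.exe on each request.
    subprocess.run([sys.executable, "-c", "pass"], capture_output=True)
after = open_fd_count()
print(before, after)  # roughly equal counts suggest no descriptor leak
```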
My perfmon results were as follows.
700 Requests in Sequence

                                             During test (1-5 mins)
Counter                        Before test    Min     Ave     Max    After test (10 mins)
System
  System Processes                      95      97     100     101      95
  System Threads                      1220    1245    1264    1281    1238
  Memory Available MB                 4888    4840    4850    4868    4837
  Memory % Committed                    23      24      24      24      24
  Processes Total Handle Count       33255   34147   34489   34775   34029
  Processor % Processor Time       4 to 30      40      57      78    1 to 30
  Physical Disk % Disk Time               1      0       7      75    0 to 30
IIS Express
  % Processor Time                       0       0       2       6       0
  Handle Count                         610     595     640     690     614
  Thread Count                          34      35      35      35      35
  Working Set (MB)                     138     139     139     139     139
  IO Data KB/sec                         0     453     491     691       0
20 Requests sent within 2 seconds, followed by 40 Requests sent within 3 seconds

                                             During test (1-5 mins)
Counter                        Before test    Min     Ave     Max    After test (10 mins)
System
  System Processes                      95      98     137     257      96
  System Threads                      1238    1251    1425    1913    1240
  Memory Available MB                 4837    4309    4694    4818    4811
  Memory % Committed                    24      24      25      29      24
  Processes Total Handle Count       34029   34953   38539   52140   34800
  Processor % Processor Time       1 to 30       1      48     100    1 to 10
  Physical Disk % Disk Time         0 to 30      0       7     136    0 to 10
IIS Express
  % Processor Time                       0       0       1      29       0
  Handle Count                         610     664     818     936     834
  Thread Count                          34      37      50      68      37
  Working Set (MB)                     138     139     142     157     141
  IO Data KB/sec                         0       0     186    2559       0