Hashtables and dealing with collisions - hashtable

Assume a hashtable is represented as an array of size 7. We want to store strings consisting of three digits. The primary hash key is the numerical value of the second digit modulo 7. The secondary hash key is the numerical value of the third digit modulo 4 increased by one. Insert the following strings into the initially empty hashtable: "111", "222", "737", "323" and "234".
My response:
0 - 234
1 - 111
2 - 222
3 - 737
4 - 323
5 -
6 -
111; 1 mod 7 = 1
222; 2 mod 7 = 2
737; 3 mod 7 = 3
323; 3 mod 4 + 1 = 4
234; 4 mod 4 + 1 = 4 (0)
is that correct?

You might want to mention what type of hashing scheme you are using. I assume from your description that it is cuckoo hashing. If that is the case, you are fine up until the last insertion. Before 234 is inserted you have:
0:
1: 111
2: 222
3: 737
4: 323
5:
6:
Trying to insert 234 with h1 gives a key of 3 mod 7 = 3, but slot 3 already contains 737. Moving on to h2 we get 4 mod 4 + 1 = 1, but slot 1 already contains 111. At this point there are no more hash functions, so we insert 234 at 1 and rehash 111.
0:
1: 234
2: 222
3: 737
4: 323
5:
6:
Hashing 111 with h1 gives 1 again; h2 gives 1 mod 4 + 1 = 2, but 2 already contains 222, so we store 111 at 2 and rehash 222, and so on. Note that in this particular example the process never terminates: the five keys only ever hash to slots 1 through 4 (h1 gives 1, 2, 3, 2, 3 and h2 gives 2, 3, 4, 4, 1), so all five can never fit, and the reinsertion enters a cycle with 737 and 323 repeatedly evicting each other from slot 4. Whenever the entries don't all fit (i.e. the reinsertion enters an infinite cycle), the table needs to be resized and rehashed with new hash functions.
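To make the displacement loop concrete, here is a rough R sketch (my own illustration, not part of the question); the two hash functions are hard-wired from the exercise, and the number of displacements is simply capped instead of triggering a real resize:
h1 <- function(key) as.integer(substr(key, 2, 2)) %% 7      # second digit mod 7
h2 <- function(key) as.integer(substr(key, 3, 3)) %% 4 + 1  # third digit mod 4, plus 1

cuckoo_insert <- function(tab, key, max_kicks = 10) {
  for (kick in seq_len(max_kicks)) {
    for (h in list(h1, h2)) {            # try h1, then h2
      slot <- h(key) + 1                 # slot i lives at position i + 1 (R is 1-based)
      if (is.na(tab[slot])) { tab[slot] <- key; return(tab) }
    }
    # Both candidate slots are occupied: evict the occupant of the h2 slot,
    # as in the walk-through above, and carry on inserting the evicted key.
    slot <- h2(key) + 1
    evicted <- tab[slot]
    tab[slot] <- key
    key <- evicted
  }
  stop("insertion cycled; resize the table and pick new hash functions")
}

tab <- rep(NA_character_, 7)             # 7 slots, indices 0..6
for (key in c("111", "222", "737", "323")) tab <- cuckoo_insert(tab, key)
tab                                      # 111, 222, 737, 323 occupy slots 1 to 4
cuckoo_insert(tab, "234")                # 737 and 323 keep swapping slot 4 until the cycle error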

I'm not sure what this problem wants you to do if there is still a collision after the secondary hash key is checked, but I think it goes like this:
111: 1 mod 7 = 1
222: 2 mod 7 = 2
737: 3 mod 7 = 3
323: 2 mod 7 = 2 => Collision: 3 mod 4 + 1 = 3 + 1 = 4
234: 3 mod 7 = 3 => Collision: 4 mod 4 + 1 = 0 + 1 = 1 => Collision
If you advance by one after the second collision, the result would be
0 -
1 - 111
2 - 222
3 - 737
4 - 323
5 - 234
6 -
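Under that reading (primary slot, then the secondary slot, then stepping forward one position at a time), a small R sketch of the insertion looks like this; the wrap-around linear probe after the second collision is my assumption, since the exercise doesn't spell out what to do at that point:
insert_probe <- function(tab, key) {
  primary   <- as.integer(substr(key, 2, 2)) %% 7      # second digit mod 7
  secondary <- as.integer(substr(key, 3, 3)) %% 4 + 1  # third digit mod 4, plus 1
  # Candidate slots: primary, secondary, then one step at a time (wrapping around).
  for (slot in c(primary, secondary, (secondary + 1:6) %% 7)) {
    if (is.na(tab[slot + 1])) { tab[slot + 1] <- key; return(tab) }  # positions are 1-based
  }
  stop("table is full")
}

tab <- rep(NA_character_, 7)
for (key in c("111", "222", "737", "323", "234")) tab <- insert_probe(tab, key)
data.frame(index = 0:6, key = tab)       # 234 ends up at index 5, matching the table above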

Related

Cannot introduce offset into negative binomial regression

I'm performing a count data analysis in R on a dataset called 'doctor', which looks like this:
   V2  V3        L4 V5
 1  1  32 10.866795  1
 2  2 104 10.674706  1
 3  3 206 10.261581  1
 4  4 186  9.446440  1
 5  5 102  8.578665  1
 6  1   2  9.841080  2
 7  2  12  9.275472  2
 8  3  28  8.649974  2
 9  4  28  7.857481  2
10  5  31  7.287561  2
The best model according to stepwise AIC was V3~V2+L4+V5+V2:L4:V5. Now I want to set L4 as the offset and perform a negative binomial regression including the interaction, so I used nbinom=glm.nb(V3~V2+V5+V2:V5,offset=L4), but I get an error message that says Error in glm.control(...) : unused argument (offset = L4). What have I done wrong here?
Offsets are entered using an offset term in the model formula:
nbinom=glm.nb(V3~V2+V5+V2:V5+offset(L4))
You can also use V2*V5 as shorthand for V2+V5+V2:V5.
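Put together, and assuming your data frame is called doctor with the columns shown above (glm.nb comes from the MASS package), the call would look roughly like this:
library(MASS)   # provides glm.nb()

# offset(L4) goes inside the formula; V2*V5 expands to V2 + V5 + V2:V5
nbinom <- glm.nb(V3 ~ V2 * V5 + offset(L4), data = doctor)
summary(nbinom)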

Column-specific arguments to lapply in data.table .SD when applying rbinom

I have a data.table to which I want to add columns of random binomial draws, using one column as the number of trials and other columns as the probabilities:
require(data.table)
DT = data.table(
  ID = letters[sample.int(26, 10, replace = T)],
  Quantity = as.integer(100 * runif(10))
)
prob.vecs <- LETTERS[1:5]
DT[,(prob.vecs):=0]
set.seed(123)
DT[,(prob.vecs):=lapply(.SD, function(x){runif(.N,0,0.2)}), .SDcols=prob.vecs]
DT
ID Quantity A B C D E
1: b 66 0.05751550 0.191366669 0.17790786 0.192604847 0.02856000
2: l 9 0.15766103 0.090666831 0.13856068 0.180459809 0.08290927
3: u 38 0.08179538 0.135514127 0.12810136 0.138141056 0.08274487
4: d 27 0.17660348 0.114526680 0.19885396 0.159093484 0.07376909
5: o 81 0.18809346 0.020584937 0.13114116 0.004922737 0.03048895
6: f 44 0.00911130 0.179964994 0.14170609 0.095559194 0.02776121
7: d 81 0.10562110 0.049217547 0.10881320 0.151691908 0.04660682
8: t 81 0.17848381 0.008411907 0.11882840 0.043281587 0.09319249
9: x 79 0.11028700 0.065584144 0.05783195 0.063636202 0.05319453
10: j 43 0.09132295 0.190900730 0.02942273 0.046325157 0.17156554
Now I want to add five columns, Quantity_A Quantity_B Quantity_C Quantity_D Quantity_E, which apply rbinom with the corresponding probability and the quantity from the second column.
So for example the first entry for Quantity_A would be:
set.seed(741)
sum(rbinom(66,1,0.05751550))
> 2
This problem seems very similar to this post: How do I pass column-specific arguments to lapply in data.table .SD? but I cannot seem to make it work. My attempt:
DT[,(paste0("Quantity_", prob.vecs)):= mapply(function(x, Quantity){sum(rbinom(Quantity, 1 , x))}, .SD), .SDcols = prob.vecs]
Error in rbinom(Quantity, 1, x) :
argument "Quantity" is missing, with no default
Any ideas?
I seem to have found a workaround, though I am not quite sure why it works (probably something to do with rbinom not being vectorized in both arguments).
First define an index:
DT[,Index:=.I]
and then do it by index:
DT[,(paste0("Quantity_", prob.vecs)):= lapply(.SD,function(x){sum(rbinom(Quantity, 1 , x))}), .SDcols = prob.vecs, by=Index]
set.seed(789)
ID Quantity A B C D E Index Quantity_A Quantity_B Quantity_C Quantity_D Quantity_E
1: c 37 0.05751550 0.191366669 0.17790786 0.192604847 0.02856000 1 0 4 7 8 0
2: c 51 0.15766103 0.090666831 0.13856068 0.180459809 0.08290927 2 3 5 9 19 3
3: r 7 0.08179538 0.135514127 0.12810136 0.138141056 0.08274487 3 0 0 2 2 0
4: v 53 0.17660348 0.114526680 0.19885396 0.159093484 0.07376909 4 8 4 16 12 3
5: d 96 0.18809346 0.020584937 0.13114116 0.004922737 0.03048895 5 17 3 12 0 4
6: u 52 0.00911130 0.179964994 0.14170609 0.095559194 0.02776121 6 1 3 8 6 0
7: m 43 0.10562110 0.049217547 0.10881320 0.151691908 0.04660682 7 6 1 7 6 2
8: z 3 0.17848381 0.008411907 0.11882840 0.043281587 0.09319249 8 1 0 2 1 1
9: m 3 0.11028700 0.065584144 0.05783195 0.063636202 0.05319453 9 1 0 0 0 0
10: o 4 0.09132295 0.190900730 0.02942273 0.046325157 0.17156554 10 0 0 0 0 0
The numbers look about right to me. A solution without the index would still be appreciated.
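One index-free variant, if drawing the binomial total directly is acceptable: summing Quantity draws of rbinom(., 1, p) has the same distribution as a single draw of rbinom(1, Quantity, p), and rbinom recycles size and prob when its first argument is the number of rows, so no by=Index is needed. (The individual numbers will differ from the row-by-row version even with the same seed; only the distribution matches.)
set.seed(789)
# One Binomial(Quantity, p) draw per row and per probability column
DT[, (paste0("Quantity_", prob.vecs)) :=
     lapply(.SD, function(p) rbinom(.N, size = Quantity, prob = p)),
   .SDcols = prob.vecs]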

R 3.4.1 gsub on Windows 10 - find and replace all strings except for

I am trying to clean up data for a class project. The data deals with NOAA Storm data from 1950 to 2011. The storm types (EVTYPE) are only supposed to be 48 different levels, but there are over 1000 unique entries. I am trying to find all the snow related entries, which gives me:
table(grep("snow", temp$EVTYPE, ignore.case = TRUE, value = TRUE))
ACCUMULATED.SNOWFALL BLOWING.SNOW COLD.AND.SNOW DRIFTING.SNOW
4 5 1 1
EARLY.SNOWFALL EXCESSIVE.SNOW FALLING.SNOW.ICE FIRST.SNOW
7 25 2 2
HEAVY.SNOW HEAVY.SNOW.SHOWER HEAVY.SNOW.SQUALLS ICE.SNOW
13988 1 1 4
LAKE.EFFECT.SNOW LATE.SEASON.SNOW LATE.SEASON.SNOWFALL LATE.SNOW
656 1 3 2
LIGHT.SNOW LIGHT.SNOW.FLURRIES LIGHT.SNOW.FREEZING.PRECIP LIGHT.SNOWFALL
174 3 1 1
MODERATE.SNOW MODERATE.SNOWFALL MONTHLY.SNOWFALL MOUNTAIN.SNOWS
1 101 1 1
RECORD.MAY.SNOW RECORD.SNOW RECORD.SNOWFALL RECORD.WINTER.SNOW
1 2 2 3
SEASONAL.SNOWFALL SNOW SNOW.ACCUMULATION SNOW.ADVISORY
1 425 1 1
SNOW.AND.ICE SNOW.AND.SLEET SNOW.BLOWING.SNOW SNOW.DROUGHT
4 5 6 4
SNOW.ICE SNOW.SHOWERS SNOW.SLEET SNOW.SQUALL
1 5 5 5
SNOW.SQUALLS THUNDERSNOW.SHOWER UNUSUALLY.LATE.SNOW
14 1 1
There is a storm type called "Lake.Effect.Snow", which is one of the 48 storm types. How can I replace all of the other entries while excluding that particular storm type? I've tried:
table(grep("([^lake]?)snow", temp$EVTYPE, ignore.case = TRUE, value = TRUE))
to try and ignore the Lake.Effect.Snow entries, but no good.
Use stringr::str_detect with ifelse. Since the raw entries are upper case (LAKE.EFFECT.SNOW in your table), compare case-insensitively when excluding the lake-effect rows:
library("stringr")
temp$EVTYPE <- ifelse(str_detect(temp$EVTYPE, regex("snow", ignore_case = TRUE)) & toupper(temp$EVTYPE) != "LAKE.EFFECT.SNOW", "Snow", temp$EVTYPE)
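On a toy vector (hypothetical values standing in for EVTYPE) this collapses everything snow-related except the lake-effect entries:
library(stringr)
evtype <- c("HEAVY.SNOW", "LAKE.EFFECT.SNOW", "LIGHT.SNOWFALL", "HAIL")
ifelse(str_detect(evtype, regex("snow", ignore_case = TRUE)) &
         toupper(evtype) != "LAKE.EFFECT.SNOW",
       "Snow", evtype)
# "Snow" "LAKE.EFFECT.SNOW" "Snow" "HAIL"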

Identifying maximum number and longest set of time intervals

Say I have data that look like this:
level start end
1 1 133.631 825.141
2 2 133.631 155.953
3 3 146.844 155.953
4 2 293.754 302.196
5 3 293.754 302.196
6 4 293.754 301.428
7 2 326.253 343.436
8 3 326.253 343.436
9 4 333.827 343.436
10 2 578.066 611.766
11 3 578.066 611.766
12 4 578.066 587.876
13 4 598.052 611.766
14 2 811.228 825.141
15 3 811.228 825.141
or this:
level start end
1 1 3.60353 1112.62000
2 2 3.60353 20.35330
3 3 3.60353 8.77526
4 2 72.03720 143.60700
5 3 73.50530 101.13200
6 4 73.50530 81.64660
7 4 92.19030 101.13200
8 3 121.28500 143.60700
9 4 121.28500 128.25900
10 2 167.19700 185.04800
11 3 167.19700 183.44600
12 4 167.19700 182.84600
13 2 398.12300 418.64300
14 3 398.12300 418.64300
15 2 445.83600 454.54500
16 2 776.59400 798.34800
17 3 776.59400 796.64700
18 4 776.59400 795.91300
19 2 906.68800 915.89700
20 3 906.68800 915.89700
21 2 1099.44000 1112.62000
22 3 1099.44000 1112.62000
23 4 1100.14000 1112.62000
They produce the following graphs:
As you can see there are several time intervals at different levels. The level-1 interval always spans the entire duration of the time of interest. Levels 2+ have time intervals that are shorter.
What I would like to do is select the maximum number of non-overlapping time intervals covering each period, containing as much total time as possible within them. I have marked in pink which ones those would be.
For small data frames it is possible to brute-force this, but there should be a more systematic way of doing it. I'm interested in hearing ideas about what I should try.
EDIT:
I think one thing that could help here is the column 'level'. The results come from Kleinberg's burst detection algorithm (package 'bursts'). You will note that the levels are hierarchically organized: levels of the same number cannot overlap, but levels that successively increase (e.g. 2, 3, 4 in successive rows) can overlap.
In essence, I think the problem could be shortened to this. Take the levels produced, but remove level 1. This would be the vector for the 2nd example:
2 3 2 3 4 4 3 4 2 3 4 2 3 2 2 3 4 2 3 2 3 4
Then, look at the 2s... if there is at most one '3' between successive 2s, then that 2 is the longest interval. But if there are two or more 3s between successive 2s, then those 3s should be counted. Do this iteratively for each level. I think that should work...?
e.g.
library(magrittr)   # for the %>% pipe
vec <- df$level %>% as.vector() %>% .[-1]
vec
#[1] 2 3 2 3 4 4 3 4 2 3 4 2 3 2 2 3 4 2 3 2 3 4
max(vec) # 4
vec3 <- vec # need to find two or more 4's between 3s
vec3[vec3 == 3] <- NA
names(vec3) <- cumsum(is.na(vec3))
# 0  1  1  2  2  2  3  3  3  4  4  4  5  5  5  6  6  6  7  7  8  8
# 2 NA  2 NA  4  4 NA  4  2 NA  4  2 NA  2  2 NA  4  2 NA  2 NA  4
vec3.res <- which(table(vec3, names(vec3))["4", ] > 1)
which(names(vec3) == names(vec3.res) & vec3 == 4) # 5 6
The above identifies rows 5 and 6 (which equate to rows 6 and 7 in the original df) as having two fours that lie between 3s. Perhaps something using this sort of approach might work?
OK, here is a stab at it using your second data set as a test. This might not be correct in all cases!
library(data.table)
dat <- fread("data.csv")
dat[,use:="maybe"]
make.pass <- function(dat, low, high, the.level, use) {
  check <- dat[(use != "no" & level > the.level)]
  check[, contained.by.above := (low <= start & end <= high)]
  check[, consecutive.contained.by.above :=
          (contained.by.above &
             !is.na(shift(contained.by.above, 1)) &
             shift(contained.by.above, 1)), by = level]
  if (!any(check[, consecutive.contained.by.above])) {
    # Cause a side effect where we've learned we don't care:
    dat[check[(contained.by.above), rownum], use := "no"]
    print(check)
    return("yes")
  } else {
    return("no")
  }
}
dat[,rownum:=.I]
dat[level==1,use:=make.pass(dat,start,end,level,use),by=rownum]
dat
dat[use=="maybe" & level==2,use:=make.pass(dat,start,end,level,use),by=rownum]
dat
dat[use=="maybe" & level==3,use:=make.pass(dat,start,end,level,use),by=rownum]
dat
#Finally correct for last level
dat[use=="maybe" & level==4,use:="yes"]
I wrote these last steps out so you can trace them in your own interactive session and see what's happening (the print gives an idea), but you can remove the print and condense the last steps into something like lapply(1:dat[,max(level)-1], function(the.level) dat[use=="maybe" & level==the.level,use:=make.pass......]). In response to your comment: if there are an arbitrary number of levels, you will definitely want to use this formalism, and follow it with a final call to dat[use=="maybe" & level==max(level),use:="yes"].
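Written out, that condensed loop might look like this (a sketch only; it assumes dat, make.pass and rownum from the listing above are already defined):
# One pass per level except the deepest, then keep whatever is still undecided there
for (the.level in seq_len(dat[, max(level)] - 1)) {
  dat[use == "maybe" & level == the.level,
      use := make.pass(dat, start, end, level, use),
      by = rownum]
}
dat[use == "maybe" & level == max(level), use := "yes"]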
Output:
> dat
level start end use rownum
1: 1 3.60353 1112.62000 no 1
2: 2 3.60353 20.35330 yes 2
3: 3 3.60353 8.77526 no 3
4: 2 72.03720 143.60700 no 4
5: 3 73.50530 101.13200 no 5
6: 4 73.50530 81.64660 yes 6
7: 4 92.19030 101.13200 yes 7
8: 3 121.28500 143.60700 yes 8
9: 4 121.28500 128.25900 no 9
10: 2 167.19700 185.04800 yes 10
11: 3 167.19700 183.44600 no 11
12: 4 167.19700 182.84600 no 12
13: 2 398.12300 418.64300 yes 13
14: 3 398.12300 418.64300 no 14
15: 2 445.83600 454.54500 yes 15
16: 2 776.59400 798.34800 yes 16
17: 3 776.59400 796.64700 no 17
18: 4 776.59400 795.91300 no 18
19: 2 906.68800 915.89700 yes 19
20: 3 906.68800 915.89700 no 20
21: 2 1099.44000 1112.62000 yes 21
22: 3 1099.44000 1112.62000 no 22
23: 4 1100.14000 1112.62000 no 23
level start end use rownum
On the off chance this is correct, the algorithm can roughly be described as follows:
1. Mark all the intervals as possible.
2. Start with a given level. Pick a particular interval (by=rownum), say X. With X in mind, subset a copy of the data to all higher-level intervals.
3. Mark any of these that are contained in X as "contained in X".
4. If consecutive intervals at the same level are contained in X, X is no good because it wastes intervals. In this case label X's "use" variable as "no" so we'll never think about X again. [Note: if it's possible that non-consecutive intervals are contained in X, or that containing multiple intervals across levels could ruin X's viability, then this logic might need to be changed to count contained intervals instead of finding consecutive ones. I didn't think about this at all, but it's just occurring to me now, so use at your own risk.]
5. On the other hand, if X passed the test, then we've already established it's good. Mark it as a "yes." But importantly, we also have to mark any single interval contained in X as "no," or else when we iterate the step it will forget that it was contained inside a good interval and mark itself as "yes" as well. This is the side-effect step.
6. Now iterate, ignoring any results that we've already determined.
7. Finally, any "maybe"s left over at the highest level are automatically in.
Let me know what you think of this; it is a rough draft and some aspects might not be correct.

Identification of items by use of wildcards

I have a dataset with several hundred items, looking like this:
ID 01_ab_dog 01_ae_cat 02_ae_dog 02_hg_horse 01_oq_cat etc ...
 1         1         3         5           8        10 ...
 2       654        12        89           7       112 ...
 3         4         9         4         978        64 ...
 4        19        86        95          46         8 ...
I am looking to identify all items that include the word, let's say, 'cat'. A solution that uses wildcards (e.g. 01_**_cat) would be great; I was looking for something like this but did not succeed. How do I solve this problem?
I am not sure what you mean by item. To get all columns whose names contain cat, you could use grepl:
df <- data.frame(ab = 1, b = 1, cat_a = 1, bb_bbcat = 1)
df[, grepl("cat", names(df))]
# cat_a bb_bbcat
# 1 1 1
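If you want to keep the wildcard way of thinking, translate the pattern into a regular expression: each * becomes ., and anchoring with ^ and $ avoids partial matches. A small sketch with the column names from the question (the values are made up):
df <- data.frame(ID = 1:4, `01_ab_dog` = 1, `01_ae_cat` = 1, `02_ae_dog` = 1,
                 `02_hg_horse` = 1, `01_oq_cat` = 1, check.names = FALSE)

# 01_**_cat  ->  ^01_.._cat$  (any two characters between the underscores)
grep("^01_.._cat$", names(df), value = TRUE)        # "01_ae_cat" "01_oq_cat"
df[, grepl("^01_.._cat$", names(df)), drop = FALSE] # the matching columns themselves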
