R: function for home inventory count?

I have a list of home sales data in my neighborhood, listed as
address, listingdate, saledate
101 Street, 2017/01/01, 2017/06/06
106 Street, 2017/03/01, 2017/08/11
102 Street, 2017/05/04, 2017/06/13
109 Street, 2017/07/04, 2017/11/24
...
I would like to calculate the number of homes listed for sale (and not yet sold) as of each listing date, to see how home sales and listings vary throughout the year.
In the example:
address, listingdate, saledate, inventory
101 Street, 2017/01/01, 2017/06/06, 1
106 Street, 2017/03/01, 2017/08/11, 2
102 Street, 2017/05/04, 2017/06/13, 3
109 Street, 2017/07/04, 2017/11/24, 2
...
E.g. 109 Street was listed when only 106 and 109 Street were for sale.
Is there a simple 1-step R expression that can calculate that?

I guess this is 3 simple steps. I'll just set the bar, I'm sure someone else will be able to go under it.
library(data.table)
library(lubridate)
dt <- data.table(
  address = paste(c(101, 106, 102, 109), "Street"),
  listing_date = ymd(c('2017/01/01', '2017/03/01', '2017/05/04', '2017/07/04')),
  saledate = ymd(c("2017/06/06", "2017/08/11", "2017/06/13", "2017/11/24")),
  key = 'listing_date'
)
# events: +1 when a home is listed, -1 when it sells
dt2 <- rbind(dt[, .(date = listing_date, x = 1)], dt[, .(date = saledate, x = -1)])
# running total of homes on the market at each event date
dt3 <- dt2[, .(x = sum(x)), keyby = date][, .(date, inventory = cumsum(x))]
# look up the running total at each listing date
dt[, inventory := dt3[dt, on = c('date' = 'listing_date'), inventory]]
Or, instead, as a one-liner:
dt[,inventory:=dt[,.(d=listing_date:saledate),.(address)][,.N,key=d][dt,N]]
dt[]
#>       address listing_date   saledate inventory
#> 1: 101 Street   2017-01-01 2017-06-06         1
#> 2: 106 Street   2017-03-01 2017-08-11         2
#> 3: 102 Street   2017-05-04 2017-06-13         3
#> 4: 109 Street   2017-07-04 2017-11-24         2
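The one-liner works by expanding each listing into one row per day on the market, counting rows per day, and looking that count up at each listing date. A rough unrolled equivalent (a sketch; it uses seq() rather than the integer trick, and a hypothetical inventory2 column so as not to clobber the result above):
dt_days <- dt[, .(d = seq(listing_date, saledate, by = "day")), by = address]  # one row per home per day on market
day_counts <- dt_days[, .N, keyby = d]                                         # number of homes on the market each day
dt[, inventory2 := day_counts[dt, on = .(d = listing_date), N]]                # count in effect at each listing date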

I couldn't use the specific solution because of incompatibilities between data.table and tibbles, but the general algorithm was very enlightening. I could translate the general idea to tidyverse land with a couple of changes:
library(tidyverse)

# import data from data file
homesale_file <- "Home sales data.csv"
homesales <- read_csv(homesale_file,
                      col_types = cols(listingdate = col_date(format = "%m/%d/%Y"),
                                       saledate = col_date(format = "%m/%d/%Y")))
#
# calculation for inventory
#
listingdate <- tibble(address = homesales$address, listingdate = homesales$listingdate, type = "listing", y = 1)
saledate <- tibble(address = homesales$address, listingdate = homesales$saledate, type = "sale", y = -1)
summation <- bind_rows(listingdate, saledate) %>%
  arrange(listingdate) %>%
  mutate(inventory = cumsum(y)) %>%
  select(-y) %>%
  filter(type == "listing")
homesales <- homesales %>%
  inner_join(summation, by = c("address", "listingdate")) %>%
  select(-type)
#pseudopin, thanks for the help!
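For what it's worth, there is also a tidyverse one-liner that skips the event table entirely: for each listing date, count the homes whose listing/sale interval covers it (a sketch; O(n^2), but fine at neighborhood scale):
homesales <- homesales %>%
  mutate(inventory = purrr::map_int(listingdate,
                                    ~ sum(homesales$listingdate <= .x & homesales$saledate >= .x)))
Note this counts the newly listed home itself, matching the desired inventory column.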

Related

R: Comparing Subgroups From Different Datasets

I am working with the R programming language.
I have the following dataset that contains the heights (cm) and weights (kg) of people from Canada. I split weight into bins based on ntiles and calculated the average value of height within each ntile bin:
library(dplyr)
library(gtools)
set.seed(123)
canada = data.frame(height = rnorm(10000,150,10), weight = rnorm(10000,90, 10))
Part_1 = canada %>%
  mutate(quants = quantcut(weight, 100),
         rank = as.numeric(quants)) %>%
  group_by(quants) %>%
  mutate(min = min(weight), max = max(weight), count = n(), avg_height = mean(height))
Part_1 = Part_1 %>% distinct(rank, .keep_all = TRUE)
> Part_1
# A tibble: 100 x 8
# Groups:   quants [100]
  height weight quants          rank   min   max count avg_height
   <dbl>  <dbl> <fct>          <dbl> <dbl> <dbl> <int>      <dbl>
1   144.  114.  (110.2,113.9]     99 110.  114.    100       150.
2   148.   88.3 (88.12,88.38]     44  88.1  88.4   100       149.
3   166.   99.3 (99.1,99.52]      83  99.1  99.5   100       152.
4   151.   84.3 (84.14,84.44]     29  84.1  84.4   100       150.
For example, I see that there are 100 people in the weight range of 110.2 - 113.9 kg, and the average height of these people is 150 cm.
Now, suppose I have a similar dataset for people from the USA:
set.seed(124)
usa = data.frame(height = rnorm(10000,150,10), weight = rnorm(10000,90, 10))
My Question: Based on the weight ranges I calculated using the Canada dataset, I want to find out how many people from the USA fall within each of these Canadian ranges and what the average height of the Americans within each range is.
For example:
In the Canada dataset, I saw that there are 100 people in the weight range of 110.2 - 113.9 kg, and the average height of these people is 150 cm.
How many Americans are between the weight range of 100.2 - 113.9 kg and what is the average height of these Americans?
I know that I can do this manually for each rank:
americans_in_canadian_rank99 = usa %>%
  filter(weight > 110.2 & weight < 113.9) %>%
  summarize(count = n(), avg_height = mean(height))
americans_in_canadian_rank44 = usa %>%
  filter(weight > 88.1 & weight < 88.4) %>%
  summarize(count = n(), avg_height = mean(height))
In the end, I would be looking for a desired output like this:
# number of rows should be = number of unique ranks
  canadian_rank min_weight max_weight canadian_count canadian_avg_height american_count american_avg_height
1            99      110.2      113.9            100                 150            116                 150
2            44       88.1       88.4            100                 149            154                 150
Can someone please help me figure out a better way to do this?
Thanks!
Note: updated based on the desired output format combining the two sets.
This can be done in a straightforward manner using the non-equi join functionality of data.table.
library(data.table)
library(gtools)
set.seed(123)
canada = data.table(height = rnorm(10000,150,10), weight = rnorm(10000,90, 10))
set.seed(124)
usa = data.table(height = rnorm(10000,150,10), weight = rnorm(10000,90, 10))
## You can also use data.table to generate your Part_1 summary table
Part_1 <- canada[, .(min = min(weight),
                     max = max(weight),
                     count = .N,
                     avg_height = mean(height)),
                 keyby = .(quants = quantcut(weight, 100))]
Part_1[, rank := as.numeric(quants)]
## Join using a non-equi join to combine data sets
usa[Part_1, on = .(weight >= min,
                   weight < max)
    ## On the join result, compute the same summary stats by quants & rank
][, .(usa_count = .N,
      usa_avg_height = mean(height)),
  keyby = .(rank,
            quants,
            ## whenever we do a non-equi join, the foreign key values, in this case min/max,
            ## overwrite the local keys. Since we used weight twice, the Canadian min/max
            ## will show up in the join result table as weight and weight.1
            min_weight = weight,
            max_weight = weight.1,
            ## To keep both sets of results distinct, we can rename columns in our "by" statement
            canadian_count = count,
            canadian_avg_height = avg_height)]
Gives results as follows:
   rank        quants min_weight max_weight canadian_count canadian_avg_height usa_count usa_avg_height
1:    1 [55.11,66.71]   55.11266   66.69011            100            149.2101       114       149.8116
2:    2 (66.71,69.48]   66.70575   69.46055            100            149.0639       119       148.6486
3:    3 (69.48,71.15]   69.48011   71.13895            100            150.5331        94       148.4336
4:    4 (71.15,72.44]   71.14747   72.43042            100            150.4779       104       149.8926
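If the key-overwriting behavior mentioned in the comments is unfamiliar, here is a minimal illustration with toy tables (a hypothetical example, not part of the solution):
library(data.table)
X <- data.table(lo = c(0, 10), hi = c(10, 20), label = c("low", "high"))
Y <- data.table(val = c(3, 15))
X[Y, on = .(lo <= val, hi > val)]
#>    lo hi label
#> 1:  3  3   low
#> 2: 15 15  high
# note: 'lo' and 'hi' in the result hold Y's 'val', not X's original bounds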
Also, another option would be to assign result columns for the usa table directly back to your Part_1 summary table in place.
## This is a two-part nested join
Part_1[
  ## Start by creating a result that matches Part_1 ranks to all usa data
  Part_1[usa, on = .(min <= weight,
                     max > weight)
  ## Compute aggregated results on the join table result
  ][, .(usa_count = .N,
        usa_avg_height = mean(height)), by = .(rank)],
  ## Finally, assign results back to the Part_1 summary table joined by rank
  c("usa_count",
    "usa_avg_height") := .(usa_count,
                           usa_avg_height), on = .(rank)]
Gives the following:
           quants      min      max count avg_height rank usa_count usa_avg_height
1: [55.11,66.71] 55.11266 66.69011   100   149.2101    1       114       149.8116
2: (66.71,69.48] 66.70575 69.46055   100   149.0639    2       119       148.6486
3: (69.48,71.15] 69.48011 71.13895   100   150.5331    3        94       148.4336
4: (71.15,72.44] 71.14747 72.43042   100   150.4779    4       104       149.8926
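The assign-back pattern above (`c("usa_count", "usa_avg_height") := ...` with `on = .(rank)`) is data.table's update-join idiom: it adds columns to Part_1 by reference, matching rows by rank. In miniature (hypothetical toy tables):
library(data.table)
main  <- data.table(id = 1:3, x = c(10, 20, 30))
extra <- data.table(id = c(1, 3), y = c("a", "b"))
main[extra, y := i.y, on = .(id)]  # add y to main by reference; unmatched id 2 gets NA
main
#>    id  x    y
#> 1:  1 10    a
#> 2:  2 20 <NA>
#> 3:  3 30    b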
With data.table you can do this:
library(data.table)
library(stringr)
library(dplyr)   # for select() used below

dt1 <- as.data.table(usa)
dt1 <- dt1[, c("min", "max") := weight]

dt2 <- as.data.table(Part_1 %>% select("quants", "rank"))
dt2 <- cbind(dt2[, .(rank)],
             setDT(tstrsplit(str_sub(dt2$quants, 2, -2), ",", fixed = TRUE, names = c("min", "max"))))
dt2 <- dt2[, lapply(.SD, as.numeric)]
setkey(dt2, min, max)

dt1 <- dt1[, rank := dt2$rank[foverlaps(dt1, dt2, by.x = c("min", "max"), by.y = c("min", "max"), which = TRUE)$yid]] %>%
  select(-c("min", "max"))
EDIT
Totally missed the last part. But if you wish to do that, it should be relatively straightforward from the last point (you could use dplyr for that if you wish):
dt3 <- rbind(canada %>%
               mutate(quants = quantcut(weight, 100),
                      rank = as.numeric(quants),
                      country = "Canada") %>%
               as.data.table(),
             copy(dt1)[, country := "USA"], fill = TRUE)
dt3 <- dt3[, .(count = .N, avg_height = mean(height)), by = c("rank", "country")] %>%
  dcast(rank ~ country, value.var = c("count", "avg_height")) %>%
  merge(dt2 %>% rename("min_weight" = "min", "max_weight" = "max"), by = c("rank"), all.x = TRUE)
EDIT 2
Alternatively, you could do something similar using the cut function, without having to learn any data.table:
rank_breaks <- Part_1 %>%
  mutate(breaks = sub(",.*", "", str_sub(quants, 2)) %>% as.numeric()) %>%
  arrange(rank) %>%
  pull(breaks)
# Here I change the minimum of group 1 and the maximum of group 100 to -Inf and Inf respectively.
# If you do not wish to do so, you can disregard this and run `rank_breaks <- c(rank_breaks, max(canada$weight))` instead
rank_breaks[1] <- -Inf
rank_breaks <- c(rank_breaks, Inf)

usa <- usa %>%
  mutate(rank = cut(weight, breaks = rank_breaks, labels = c(1:100)))
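To sanity-check the cut()-based binning, the per-rank counts and averages can be tabulated directly (a sketch, using the usa frame as modified above):
library(dplyr)
usa %>%
  group_by(rank) %>%
  summarize(usa_count = n(), usa_avg_height = mean(height))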
You can use fuzzyjoin for this.
library(fuzzyjoin)

# take percentile ranges and join US data
us_by_canadian_quantiles <- Part_1 |>
  ungroup() |>
  distinct(rank, min, max, height_avg_can = avg_height) |>
  fuzzy_full_join(usa, by = c(min = "weight", max = "weight"), match_fun = c(`<`, `>=`))

# get count and height average per bin
us_by_canadian_quantiles |>
  group_by(rank) |>
  summarize(n_us = n(),
            height_avg_us = mean(height),
            height_avg_can = first(height_avg_can))
#> # A tibble: 101 × 4
#>     rank  n_us height_avg_us height_avg_can
#>    <dbl> <int>         <dbl>          <dbl>
#>  1     1   114          150.           149.
#>  2     2   119          149.           149.
#>  3     3    94          148.           151.
#>  4     4   104          150.           150.
#>  5     5   115          152.           150.
#>  6     6    88          150.           149.
#>  7     7    86          150.           150.
#>  8     8    86          150.           151.
#>  9     9   102          151.           151.
#> 10    10    81          152.           150.
#> # … with 91 more rows
Note that there are a number of cases in the US frame which fall outside of the Canadian percentile ranges. They are grouped together here with rank being NA, but you could also add ranks 0 and 101 if you wanted to distinguish them.
I should note that fuzzyjoin tends to be much slower than data.table. But since you have already gotten a data.table solution, this might be more to your liking.

Get nearest n matching strings

Hi, I am trying to match strings in one dataframe against strings in another and get the nearest n matches based on a score.
For example: each string_2 (df_2) needs to be matched against string_1 (df_1) to get the nearest 3 matches within each ID group.
ID = c(100, 100,100,100,103,103,103,103,104,104,104,104)
string_1 = c("Jack Daniel","Jac","JackDan","steve","Mark","Dukes","Allan","Duke","Puma Nike","Puma","Nike","Addidas")
df_1 = data.frame(ID,string_1)
ID = c(100, 100, 185, 103,103, 104, 104,104)
string_2 = c("Jack Daniel","Mark","Order","Steve","Mark 2","Nike","Addidas","Reebok")
df_2 = data.frame(ID,string_2)
My output dataframe df_out would look like this:
ID = c(100, 100,185,103,103,104,104,104)
string_2 = c("Jack Daniel","Mark","Order","Steve","Mark 2","Nike","Addidas","Reebok")
nearest_str_match_1 = c("Jack Daniel","JackDan","NA","Duke","Mark","Nike","Addidas","Nike")
nearest_str_match_2 =c("JackDan","Jack Daniel","NA","Dukes","Duke","Addidas","Nike","Puma Nike")
nearest_str_match_3 =c("Jac","Jac","NA","Allan","Allan","Puma","Puma","Addidas")
df_out = data.frame(ID,string_2,nearest_str_match_1,nearest_str_match_2,nearest_str_match_3)
I have tried manually with the "stringdist" package ('jw' method) to get the nearest value:
stringdist::stringdist("Jack Daniel","Jack Daniel","jw")
stringdist::stringdist("Jack Daniel","Jac","jw")
stringdist::stringdist("Jack Daniel","JackDan","jw")
Thanks in advance
library(dplyr)
library(tidyr)

merge(df_1, df_2, by = 'ID') %>%
  group_by(string_2) %>%
  mutate(dist = stringdist::stringdist(string_2, string_1, 'jw') %>%
           rank(ties.method = 'last')) %>%
  slice_min(dist, n = 3) %>%
  pivot_wider(names_from = dist, names_prefix = 'nearest_str_match_',
              values_from = string_1)
# A tibble: 7 x 5
# Groups:   string_2 [7]
     ID string_2    nearest_str_match_1 nearest_str_match_2 nearest_str_match_3
  <dbl> <chr>       <chr>               <chr>               <chr>
1   104 Addidas     Addidas             Nike                Puma
2   100 Jack Daniel Jack Daniel         JackDan             Jac
3   100 Mark        JackDan             Jack Daniel         Jac
4   103 Mark 2      Mark                Duke                Allan
5   104 Nike        Nike                Addidas             Puma
6   104 Reebok      Nike                Puma Nike           Addidas
7   103 Steve       Duke                Dukes               Allan
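One caveat: merge() drops ID 185 ("Order"), which has no match in df_1, whereas the desired df_out keeps it with NAs. If that matters, a sketch of a fix is to assign the pipeline above to a variable (say res) and left-join it back onto df_2:
library(dplyr)
df_out <- df_2 %>%
  left_join(ungroup(res), by = c("ID", "string_2"))  # unmatched rows get NA in the nearest_* columns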

converting an abbreviation into a full word

I am trying to avoid writing a long nested ifelse statement in Excel.
I am working with two datasets: one has abbreviations and county names.
Abbre
  COUNTY_NAME
1    AD Adams
2   AS Asotin
3   BE Benton
4   CH Chelan
5  CM Clallam
6    CR Clark
And another data set that contains the county abbreviation and votes.
  CountyCode Votes
1         WM    97
2         AS    14
3         WM   163
4         WM   144
5         SJ    21
For the second table, how do I convert the countycode (abbreviation) into the full spelled-out text and add that as a new column?
I have been trying to solve this unsuccessfully using grep, match, and %in%. Clearly I am missing something and any insight would be greatly appreciated.
We can use a join
library(dplyr)
library(tidyr)
df2 <- df2 %>%
  left_join(Abbre %>%
              separate(COUNTY_NAME, into = c("CountyCode", "FullName")),
            by = "CountyCode")
Or use base R
tmp <- read.table(text = Abbre$COUNTY_NAME, header = FALSE,
                  col.names = c("CountyCode", "FullName"))
df2 <- merge(df2, tmp, by = 'CountyCode', all.x = TRUE)
Another base R option using match (assuming a lookup table df1 with separate Abbre and COUNTY_NAME columns):
df2$COUNTY_NAME <- with(
df1,
COUNTY_NAME[match(df2$CountyCode, Abbre)]
)
gives

> df2
  CountyCode Votes COUNTY_NAME
1         WM    97        <NA>
2         AS    14      Asotin
3         WM   163        <NA>
4         WM   144        <NA>
5         SJ    21        <NA>
A data.table option (under the same assumption about df1):
> setDT(df1)[setDT(df2), on = .(Abbre = CountyCode)]
   Abbre COUNTY_NAME Votes
1:    WM        <NA>    97
2:    AS      Asotin    14
3:    WM        <NA>   163
4:    WM        <NA>   144
5:    SJ        <NA>    21
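For completeness, the same lookup can also be done with a plain named vector (a sketch, under the same assumption of a two-column df1 with character columns):
lookup <- setNames(as.character(df1$COUNTY_NAME), df1$Abbre)  # abbreviation -> full name
df2$COUNTY_NAME <- unname(lookup[df2$CountyCode])             # unmatched codes yield NA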

How to rbind reshaped data tables of different column sizes and with different names

I checked similar entries on SO; none answers my question exactly.
My problem is this:
Let's say, User1 has 6 purchases, User2 has 2.
Purchase data is something like this:
set.seed(1234)
purchase <- data.frame(id = c(rep("User1", 6), rep("User2", 2)),
                       purchaseid = sample(seq(1, 100, 1), 8),
                       purchaseDate = seq(Sys.Date(), Sys.Date() + 7, 1),
                       price = sample(seq(30, 200, 10), 8))
#
users <- data.frame(id = c("User1", "User2"),
                    uname = c("name1", "name2"),
                    uaddress = c("add1", "add2"))
> purchase
     id purchaseid purchaseDate price
1 User1         12   2019-09-27   140
2 User1         62   2019-09-28   110
3 User1         60   2019-09-29   200
4 User1         61   2019-09-30   190
5 User1         83   2019-10-01    60
6 User1         97   2019-10-02   150
7 User2          1   2019-10-03   160
8 User2         22   2019-10-04   120
The end data required includes one row per user, holding the user name, address, etc., followed by columns for up to 20 purchases, placed one after another in the same row. The rule: only one row per user. If a user has fewer than 20 purchases, the remaining fields should be empty.
End data should therefore look like this:
     id uname uaddr p1id     p1date p1price p2id     p2date p2price p3id     p3date p3price p4id
1 User1 name1  add1   12 2019-09-27     140   62 2019-09-28     110   60 2019-09-29     200   61
2 User2 name2  add2    1 2019-10-03     160   22 2019-10-04     120   NA       <NA>      NA   NA
      p4date p4price
1 2019-09-30     190
2       <NA>      NA
enddata <- data.frame(id = c("User1", "User2"),
                      uname = c("name1", "name2"),
                      uaddr = c("add1", "add2"),
                      p1id = c(12, 1),
                      p1date = c("2019-09-27", "2019-10-03"),
                      p1price = c(140, 160),
                      p2id = c(62, 22),
                      p2date = c("2019-09-28", "2019-10-04"),
                      p2price = c(110, 120),
                      p3id = c(60, NA),
                      p3date = c("2019-09-29", NA),
                      p3price = c(200, NA),
                      p4id = c(61, NA),
                      p4date = c("2019-09-30", NA),
                      p4price = c(190, NA))
I used reshape to get the data for each user into the wide format. The idea was to do this in a loop for each user id, then combine the results with rbindlist and fill = TRUE, but this time I am having a problem with column names: after reshape, each user gets different column names, and without a fixed number of columns you cannot set names either.
Any elegant solution to this?
There's no need to process each id separately. Instead we can operate by id within a single data frame. Below is a tidyverse approach. You can stop the chain at any point to see the intermediate output. I've added comments to explain what the code is doing, but let me know if anything is unclear.
library(tidyverse)

dat = users %>%
  # Join purchase data to user data
  left_join(purchase) %>%
  arrange(purchaseDate) %>%
  # Create a count column to assign a sequence number to each purchase within each id.
  # We'll use this later to create columns for each purchase event with a unique
  # sequence number for each purchase.
  group_by(id) %>%
  mutate(seq = 1:n()) %>%
  ungroup %>%
  # Reshape data frame from "wide" to "long" format
  gather(key, value, purchaseid:price) %>%
  arrange(seq) %>%
  # Paste together the "key" and "seq" columns (the resulting column will still be
  # called "key"). This will allow us to spread the data frame to one row per id
  # with each purchase event properly numbered.
  unite(key, key, seq, sep="_") %>%
  mutate(key = factor(key, levels=unique(key))) %>%
  spread(key, value) %>%
  # Convert date columns back to Date class
  mutate_at(vars(matches("Date")), as.Date, origin="1970-01-01")
dat
     id uname uaddress purchaseid_1 purchaseDate_1 price_1 purchaseid_2 purchaseDate_2 price_2
1 User1 name1     add1           12     2019-09-27     140           62     2019-09-28     110
2 User2 name2     add2            1     2019-10-03     160           22     2019-10-04     120
  purchaseid_3 purchaseDate_3 price_3 purchaseid_4 purchaseDate_4 price_4 purchaseid_5 purchaseDate_5
1           60     2019-09-29     200           61     2019-09-30     190           83     2019-10-01
2           NA           <NA>      NA           NA           <NA>      NA           NA           <NA>
  price_5 purchaseid_6 purchaseDate_6 price_6
1      60           97     2019-10-02     150
2      NA           NA           <NA>      NA
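Since gather()/spread()/unite() are now superseded, the same reshape can be sketched with pivot_wider() in current tidyr (names_vary requires tidyr >= 1.2 and keeps each purchase's columns together):
library(tidyverse)

dat <- users %>%
  left_join(purchase, by = "id") %>%
  arrange(id, purchaseDate) %>%
  group_by(id) %>%
  mutate(seq = row_number()) %>%  # purchase sequence number within each user
  ungroup() %>%
  pivot_wider(names_from = seq,
              values_from = c(purchaseid, purchaseDate, price),
              names_vary = "slowest")  # purchaseid_1, purchaseDate_1, price_1, purchaseid_2, ...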
Another option using data.table:
#pivot to wide format
setDT(users)
setDT(purchase)[, pno := rowid(id)]
ans <- dcast(purchase[users, on=.(id)], id + uname + uaddress ~ pno,
value.var=c("purchaseid","purchaseDate", "price"))
#reorder columns
nm <- grep("[1-9]$", names(ans), value=TRUE)
setcolorder(ans, c(setdiff(names(ans), nm), nm[order(gsub("(.*)_", "", nm))]))
ans
output:
id uname uaddress purchaseid_1 purchaseDate_1 price_1 purchaseid_2 purchaseDate_2 price_2 purchaseid_3 purchaseDate_3 price_3 purchaseid_4 purchaseDate_4 price_4 purchaseid_5 purchaseDate_5 price_5 purchaseid_6 purchaseDate_6 price_6
1: User1 name1 add1 12 2019-09-30 140 62 2019-10-01 110 60 2019-10-02 200 61 2019-10-03 190 83 2019-10-04 60 97 2019-10-05 150
2: User2 name2 add2 1 2019-10-06 160 22 2019-10-07 120 NA <NA> NA NA <NA> NA NA <NA> NA NA <NA> NA

Ifelse with different lengths of data frame

I have a dataset (panel data) that contains the following variables:
1. Country
2. Company
3. Monthly date
4. Revenue
A <- data.frame(Country = as.factor(rep('A', 138)),
                Company = as.factor(c(rep('AAA', 12), rep('BBB', 8), rep('CCC', 72), rep('DDD', 46))),
                Date = c(seq(as.Date('2010-01-01'), as.Date('2011-01-01'), by = 'month'),
                         seq(as.Date('2010-01-01'), as.Date('2010-07-01'), by = 'month'),
                         seq(as.Date('2010-01-01'), as.Date('2015-12-01'), by = 'month'),
                         seq(as.Date('2012-03-01'), as.Date('2015-12-01'), by = 'month')),
                Revenue = sample(10000:25000, 138))

B <- data.frame(Country = as.factor(rep('B', 108)),
                Company = as.factor(c(rep('EEE', 36), rep('FFF', 36), rep('GGG', 36))),
                Date = c(seq(as.Date('2013-01-01'), as.Date('2015-12-01'), by = 'month'),
                         seq(as.Date('2013-01-01'), as.Date('2015-12-01'), by = 'month'),
                         seq(as.Date('2013-01-01'), as.Date('2015-12-01'), by = 'month')),
                Revenue = sample(10000:25000, 108))
I want to add another variable to the dataset, competitor's revenue: the total revenue of all other companies in the same country for the corresponding month.
I wrote the following code:
new_B <- data.frame()
for (i in 1:nlevels(B$Company)) {
  temp_i <- B[which(B$Company == levels(B$Company)[i]), ]
  temp_j <- B[which(B$Company != levels(B$Company)[i]), ]
  agg_temp <- aggregate(temp_j$Revenue, by = list(temp_j$Date), sum)
  temp_i$competitor_value <- ifelse(agg_temp$Group.1 %in% temp_i$Date, agg_temp$x, 0)
  new_B <- rbind(new_B, temp_i)
}
I created two temporary datasets inside the for loop: one containing company i only, and the other all other companies. I summed the revenues of the other companies by month, then, using ifelse on matching dates, added the new variable to temp_i. It works nicely for companies that operated during the same period, but in country A there are companies that operated over different periods, and when I try to use my code I get an error that the objects are not of the same length:
new_A <- data.frame()
for (i in 1:nlevels(A$Company)) {
  temp_i <- A[which(A$Company == levels(A$Company)[i]), ]
  temp_j <- A[which(A$Company != levels(A$Company)[i]), ]
  agg_temp <- aggregate(temp_j$Revenue, by = list(temp_j$Date), sum)
  temp_i$competitor_value <- ifelse(agg_temp$Group.1 %in% temp_i$Date, agg_temp$x, 0)
  new_A <- rbind(new_A, temp_i)
}
I found a similar answer (ifelse statements with dataframes of different lengths) but still do not know how to solve my problem.
I would really appreciate help.
I suggest a different approach using the dplyr package:
library(dplyr)
A %>%
  bind_rows(B) %>%
  group_by(Country, month = format(Date, "%Y-%m")) %>%
  mutate(revComp = sum(Revenue)) %>%
  group_by(Company, .add = TRUE) %>%
  mutate(revComp = revComp - Revenue)
# Source: local data frame [246 x 6]
# Groups: Country, month, Company [246]
#
#    Country Company       Date Revenue   month revComp
#      (chr)   (chr)     (date)   (int)   (chr)   (int)
# 1        A     AAA 2010-01-01   10657 2010-01   30356
# 2        A     AAA 2010-02-01   11620 2010-02   22765
# 3        A     AAA 2010-03-01   17285 2010-03   33329
# 4        A     AAA 2010-04-01   22886 2010-04   33469
# 5        A     AAA 2010-05-01   20129 2010-05   39974
# 6        A     AAA 2010-06-01   22865 2010-06   26896
# 7        A     AAA 2010-07-01   13087 2010-07   29542
# 8        A     AAA 2010-08-01   19451 2010-08   14842
# 9        A     AAA 2010-09-01   12364 2010-09   15309
# 10       A     AAA 2010-10-01   19375 2010-10   14090
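The same sum-minus-own-revenue trick translates directly to data.table for anyone who prefers it (a sketch):
library(data.table)
AB <- rbindlist(list(A, B))
AB[, revComp := sum(Revenue) - Revenue, by = .(Country, month = format(Date, "%Y-%m"))]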
