If I have a table like this:
| FileName | Category| Value | Number |
|:--------:|:-------:|:-----:|:------:|
| File1 | Time | 123 | 1 |
| File1 | Size | 456 | 1 |
| File1 | Final | 789 | 1 |
| File2 | Time | 312 | 2 |
| File2 | Size | 645 | 2 |
| File2 | Final | 978 | 2 |
| File3 | Time | 741 | 1 |
| File3 | Size | 852 | 1 |
| File3 | Final | 963 | 1 |
| File1 | Time | 369 | 2 |
| File1 | Size | 258 | 2 |
| File1 | Final | 147 | 2 |
| File3 | Time | 741 | 2 |
| File3 | Size | 734 | 2 |
| File3 | Final | 942 | 2 |
| File1 | Time | 997 | 3 |
| File1 | Size | 245 | 3 |
| File1 | Final | 985 | 3 |
| File2 | Time | 645 | 3 |
| File2 | Size | 285 | 3 |
| File2 | Final | 735 | 3 |
| File3 | Time | 198 | 3 |
| File3 | Size | 165 | 3 |
| File3 | Final | 753 | 3 |
What means could I use in an R script to declare a variable that is the Value for each FileName where Number is minimum and Category is Time?
(EDIT: It should be noted that there are null entries in the Value column. Therefore, this code should be constructed to treat null entries as though they didn't exist so New Column doesn't end up filled with NA values.)
Then I'd like to merge this to form a new column on the existing table so that it now looks like this:
| FileName | Category | Value | Number | New Column |
|:--------:|:--------:|:-----:|:------:|------------|
| File1 | Time | 123 | 1 | 123 |
| File1 | Size | 456 | 1 | 123 |
| File1 | Final | 789 | 1 | 123 |
| File2 | Time | 312 | 2 | 312 |
| File2 | Size | 645 | 2 | 312 |
| File2 | Final | 978 | 2 | 312 |
| File3 | Time | 741 | 1 | 741 |
| File3 | Size | 852 | 1 | 741 |
| File3 | Final | 963 | 1 | 741 |
| File1 | Time | 369 | 2 | 369 |
| File1 | Size | 258 | 2 | 369 |
| File1 | Final | 147 | 2 | 369 |
| File3 | Time | 741 | 2 | 741 |
| File3 | Size | 734 | 2 | 741 |
| File3 | Final | 942 | 2 | 741 |
| File1 | Time | 997 | 3 | 997 |
| File1 | Size | 245 | 3 | 997 |
| File1 | Final | 985 | 3 | 997 |
| File2 | Time | 645 | 3 | 645 |
| File2 | Size | 285 | 3 | 645 |
| File2 | Final | 735 | 3 | 645 |
| File3 | Time | 198 | 3 | 198 |
| File3 | Size | 165 | 3 | 198 |
| File3 | Final | 753 | 3 | 198 |
Using data.table:
(Edited to reflect @Frank's comments)
DT[, Benchmark := Value[Category == "Time"][which.min(Number[Category == "Time"])], by = FileName]
Breaking this down:
Number[Category == "Time"]
Take all Number where Category == "Time"
which.min(^^^)
Find which of those is the minimum
Benchmark := Value[Category == "Time"][^^^]
Set the new Benchmark column to the Value at that position
by = FileName
Do all of this per FileName group
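Given the EDIT about null entries, here is an untested NA-aware sketch of the same idea: drop missing Values before picking the row, so a group whose smallest-Number "Time" row has a missing Value still gets a usable benchmark, and falls back to NA only when no non-missing "Time" Value exists.
DT[, Benchmark := {
  v <- Value[Category == "Time" & !is.na(Value)]
  n <- Number[Category == "Time" & !is.na(Value)]
  if (length(v)) v[which.min(n)] else NA_real_  # NA_real_ assumes Value is numeric
}, by = FileName]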
Untested, but should get you started:
Ref <- Table1 %>%
  mutate(Category2 = factor(Category, c("Time", "Size", "Final")),
         FileNumber = as.numeric(sub("File", "", FileName)),
         FilePrefix = "File") %>%
  arrange(FilePrefix, FileNumber, Category2, Value) %>%
  group_by(FilePrefix, FileNumber, Category2) %>%
  mutate(NewColumn = Value[1])
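Judging from the desired output table above, where each FileName/Number block gets that block's own "Time" Value, a more direct dplyr sketch might look like the following. Untested; it assumes the data is in Table1 as above and skips missing Values per the EDIT.
library(dplyr)

Ref <- Table1 %>%
  group_by(FileName, Number) %>%
  mutate(NewColumn = first(Value[Category == "Time" & !is.na(Value)])) %>%
  ungroup()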
I have a table in a Mariadb version 10.3.27 database that looks like this:
+----+------------+---------------+-----------------+
| id | channel_id | timestamp | value |
+----+------------+---------------+-----------------+
| 1 | 2 | 1623669600000 | 2882.4449252449 |
| 2 | 1 | 1623669600000 | 295.46914369742 |
| 3 | 2 | 1623669630000 | 2874.46365243 |
| 4 | 1 | 1623669630000 | 295.68124546516 |
| 5 | 2 | 1623669660000 | 2874.9638893452 |
| 6 | 1 | 1623669660000 | 295.69561247521 |
| 7 | 2 | 1623669690000 | 2878.7120274678 |
and I want to have a result like this:
+------+-------+-------+
| hour | valhh | valwp |
+------+-------+-------+
| 0 | 419 | 115 |
| 1 | 419 | 115 |
| 2 | 419 | 115 |
| 3 | 419 | 115 |
| 4 | 419 | 115 |
| 5 | 419 | 115 |
| 6 | 419 | 115 |
| 7 | 419 | 115 |
| 8 | 419 | 115 |
| 9 | 419 | 115 |
| 10 | 419 | 115 |
| 11 | 419 | 115 |
| 12 | 419 | 115 |
| 13 | 419 | 115 |
| 14 | 419 | 115 |
| 15 | 419 | 115 |
| 16 | 419 | 115 |
| 17 | 419 | 115 |
| 18 | 419 | 115 |
| 19 | 419 | 115 |
| 20 | 419 | 115 |
| 21 | 419 | 115 |
| 22 | 419 | 115 |
| 23 | 419 | 115 |
+------+-------+-------+
but with valhh (valwp) being, for each hour of the day, the average over all days of the values where channel_id is 1 (2), not the overall average. So far, I've tried:
select h.hour, hh.valhh, wp.valwp from
(select hour(from_unixtime(timestamp/1000)) as hour from data) h,
(select hour(from_unixtime(timestamp/1000)) as hour, cast(avg(value) as integer) as valhh from data where channel_id = 1) hh,
(select hour(from_unixtime(timestamp/1000)) as hour, cast(avg(value) as integer) as valwp from data where channel_id = 2) wp group by h.hour;
which gives the result above (average of all values).
I can get what I want by querying the channels separately, i.e.:
select hour(from_unixtime(timestamp/1000)) as hour, cast(avg(value) as integer) as value from data where channel_id = 1 group by hour;
gives
+------+-------+
| hour | value |
+------+-------+
| 0 | 326 |
| 1 | 145 |
| 2 | 411 |
| 3 | 142 |
| 4 | 143 |
| 5 | 171 |
| 6 | 160 |
| 7 | 487 |
| 8 | 408 |
| 9 | 186 |
| 10 | 214 |
| 11 | 199 |
| 12 | 942 |
| 13 | 521 |
| 14 | 196 |
| 15 | 247 |
| 16 | 364 |
| 17 | 252 |
| 18 | 392 |
| 19 | 916 |
| 20 | 1024 |
| 21 | 1524 |
| 22 | 561 |
| 23 | 249 |
+------+-------+
but I want to have both channels in one result set as separate columns.
How would I do that?
Thanks!
After a steep learning curve I think I figured it out:
select
hh.hour, hh.valuehh, wp.valuewp
from
(select
hour(from_unixtime(timestamp/1000)) as hour,
cast(avg(value) as integer) as valuehh
from data
where channel_id=1
group by hour) hh
inner join
(select
hour(from_unixtime(timestamp/1000)) as hour,
cast(avg(value) as integer) as valuewp
from data
where channel_id=2
group by hour) wp
on hh.hour = wp.hour;
gives
+------+---------+---------+
| hour | valuehh | valuewp |
+------+---------+---------+
| 0 | 300 | 38 |
| 1 | 162 | 275 |
| 2 | 338 | 668 |
| 3 | 166 | 38 |
| 4 | 152 | 38 |
| 5 | 176 | 37 |
| 6 | 174 | 38 |
| 7 | 488 | 36 |
| 8 | 553 | 37 |
| 9 | 198 | 36 |
| 10 | 214 | 38 |
| 11 | 199 | 612 |
| 12 | 942 | 40 |
| 13 | 521 | 99 |
| 14 | 187 | 38 |
| 15 | 209 | 38 |
| 16 | 287 | 39 |
| 17 | 667 | 37 |
| 18 | 615 | 39 |
| 19 | 854 | 199 |
| 20 | 1074 | 44 |
| 21 | 1470 | 178 |
| 22 | 665 | 37 |
| 23 | 235 | 38 |
+------+---------+---------+
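A possible alternative (untested sketch): conditional aggregation produces both columns in a single pass over the table, and also keeps hours for which only one of the two channels has data, which the inner join would drop. avg() ignores NULLs, so each CASE expression averages only its own channel's rows.
select
  hour(from_unixtime(timestamp/1000)) as hour,
  cast(avg(case when channel_id = 1 then value end) as integer) as valuehh,
  cast(avg(case when channel_id = 2 then value end) as integer) as valuewp
from data
group by hour
order by hour;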
I need to run the Mann-Kendall test (package trend in R, https://cran.r-project.org/web/packages/trend/index.html) on varying length time series data. Currently the time series analysis is run with the start year that I manually specify, but that may not be the actual start date. A lot of my sites contain differing start years and some may have different ending years. I condensed my data into the following. This is water quality data, so has issues with missing data and varying start/end dates.
I also deal with NAs in the middle of the time series and at the beginning. I would like to smooth out the missing NAs when in the middle of a time series. If the NAs are at the beginning, I would like to start the time series with the first actual value.
+---------+------------+------+--------------+-------------+-------------+---------------+--------------+
| SITE_ID | PROGRAM_ID | YEAR | ANC_UEQ_L | NO3_UEQ_L | SO4_UEQ_L | SBC_ALL_UEQ_L | SBC_NA_UEQ_L |
+---------+------------+------+--------------+-------------+-------------+---------------+--------------+
| 1234 | Alpha | 1992 | 36.12 | 0.8786 | 91.90628571 | 185.5595714 | 156.2281429 |
| 1234 | Alpha | 1993 | 22.30416667 | 2.671258333 | 86.85733333 | 180.5109167 | 154.1934167 |
| 1234 | Alpha | 1994 | 25.25166667 | 3.296475 | 92.00533333 | 184.3589167 | 157.3889167 |
| 1234 | Alpha | 1995 | 23.39166667 | 1.753436364 | 97.58981818 | 184.5251818 | 160.2047273 |
| 5678 | Beta | 1983 | 4.133333333 | 20 | 134.4333333 | 182.1 | 157.4 |
| 5678 | Beta | 1984 | 2.6 | 21.85 | 137.78 | 170.67 | 150.64 |
| 5678 | Beta | 1985 | 3.58 | 20.85555556 | 133.7444444 | 168.82 | 150.09 |
| 5678 | Beta | 1986 | -5.428571429 | 40.27142857 | 124.9 | 152.4 | 136.2142857 |
| 5678 | Beta | 1987 | NA | 13.75 | 122.75 | 137.4 | 126.3 |
| 5678 | Beta | 1988 | 4.666666667 | 26.13333333 | 123.7666667 | 174.9166667 | 155.4166667 |
| 5678 | Beta | 1989 | 6.58 | 31.91 | 124.63 | 167.39 | 148.68 |
| 5678 | Beta | 1990 | 2.354545455 | 39.49090909 | 121.6363636 | 161.6454545 | 144.5545455 |
| 5678 | Beta | 1991 | 5.973846154 | 30.54307692 | 119.8138462 | 165.4661185 | 147.0807338 |
| 5678 | Beta | 1992 | 4.174359 | 16.99051285 | 124.1753846 | 148.5505115 | 131.8894862 |
| 5678 | Beta | 1993 | 6.05 | 19.76125 | 117.3525 | 148.3025 | 131.3275 |
| 5678 | Beta | 1994 | -2.51666 | 17.47167 | 117.93266 | 129.64167 | 114.64501 |
| 5678 | Beta | 1995 | 8.00936875 | 22.66188125 | 112.3575 | 166.1220813 | 148.7095813 |
| 9101 | Victor | 1980 | NA | NA | 94.075 | NA | NA |
| 9101 | Victor | 1981 | NA | NA | 124.7 | NA | NA |
| 9101 | Victor | 1982 | 33.26666667 | NA | 73.53333333 | 142.75 | 117.15 |
| 9101 | Victor | 1983 | 26.02 | NA | 94.9 | 147.96 | 120.44 |
| 9101 | Victor | 1984 | 20.96 | NA | 82.98 | 137.4 | 110.46 |
| 9101 | Victor | 1985 | 29.325 | 0.157843137 | 84.975 | 144.45 | 118.45 |
| 9101 | Victor | 1986 | 28.6 | 0.88504902 | 81.675 | 139.7 | 114.45 |
| 9101 | Victor | 1987 | 25.925 | 1.065441176 | 74.15 | 131.875 | 108.7 |
| 9101 | Victor | 1988 | 29.4 | 1.048529412 | 80.625 | 148.15 | 122.5 |
| 9101 | Victor | 1989 | 27.7 | 0.907598039 | 81.025 | 143.1 | 119.275 |
| 9101 | Victor | 1990 | 27.4 | 0.642647059 | 77.65 | 126.825 | 104.775 |
| 9101 | Victor | 1991 | 24.95 | 1.228921569 | 74.1 | 138.55 | 115.7 |
| 9101 | Victor | 1992 | 29.425 | 0.591911765 | 73.85 | 130.675 | 106.65 |
| 9101 | Victor | 1993 | 22.53333333 | 0.308169935 | 64.93333333 | 117.3666667 | 96.2 |
| 9101 | Victor | 1994 | 29.93333333 | 0.428431373 | 67.23333333 | 124.0666667 | 101.2333333 |
| 9101 | Victor | 1995 | 39.33333333 | 0.57875817 | 65.36666667 | 128.8333333 | 105.0666667 |
| 1121 | Charlie | 1987 | 12.39 | 0.65 | 99.48 | 136.37 | 107.75 |
| 1121 | Charlie | 1988 | 10.87333333 | 0.69 | 104.6133333 | 131.9 | 105.2 |
| 1121 | Charlie | 1989 | 5.57 | 1.09 | 105.46 | 136.125 | 109.5225 |
| 1121 | Charlie | 1990 | 13.4725 | 0.8975 | 99.905 | 134.45 | 108.9875 |
| 1121 | Charlie | 1991 | 11.3 | 0.805 | 100.605 | 134.3775 | 108.9725 |
| 1121 | Charlie | 1992 | 9.0025 | 7.145 | 99.915 | 136.8625 | 111.945 |
| 1121 | Charlie | 1993 | 7.7925 | 6.6 | 95.865 | 133.0975 | 107.4625 |
| 1121 | Charlie | 1994 | 7.59 | 3.7625 | 97.3575 | 129.635 | 104.465 |
| 1121 | Charlie | 1995 | 7.7925 | 1.21 | 100.93 | 133.9875 | 109.5025 |
| 3812 | Charlie | 1988 | 18.84390244 | 17.21142857 | 228.8684211 | 282.6540541 | 260.5648649 |
| 3812 | Charlie | 1989 | 11.7248 | 21.21363636 | 216.5973451 | 261.3711712 | 237.4929204 |
| 3812 | Charlie | 1990 | 2.368571429 | 35.23448276 | 216.7827586 | 286.0034483 | 264.3137931 |
| 3812 | Charlie | 1991 | 33.695 | 40.733 | 231.92 | 350.91075 | 328.443 |
| 3812 | Charlie | 1992 | 18.49111111 | 26.14818889 | 219.1488 | 301.3785889 | 281.8809222 |
| 3812 | Charlie | 1993 | 17.28181818 | 27.65394545 | 210.6605091 | 290.064 | 271.9205455 |
+---------+------------+------+--------------+-------------+-------------+---------------+--------------+
Here is the code that currently runs the time series analysis on my actual data if I change the start year so that it skips the NAs in the earlier data. It works great for sites that have values for that entire period, but gives me odd results when differing start/end years come into play.
Mann_Kendall_Values_Trimmed <- filter(LTM_Data_StackOverflow_9_22_2020, YEAR >1984) %>% #I manually trimmed the data here to prevent some errors
group_by(SITE_ID) %>%
filter(n() > 2) %>% #filter sites with more than 2 years of data
gather(parameter, value, SO4_UEQ_L, ANC_UEQ_L, NO3_UEQ_L, SBC_ALL_UEQ_L, SBC_NA_UEQ_L ) %>%
#, DOC_MG_L)
group_by(parameter, SITE_ID, PROGRAM_ID) %>% nest() %>%
mutate(ts_out = map(data, ~ts(.x$value, start=c(1985, 1), end=c(1995, 1), frequency=1))) %>%
#this is where I would like to specify the first year in the actual time series with data. End year would also be tied to the last year of data.
mutate(mk_res = map(ts_out, ~mk.test(.x, alternative = c("two.sided", "greater", "less"),continuity = TRUE)),
sens = map(ts_out, ~sens.slope(.x, conf.level = 0.95))) %>%
#run the Mann-Kendall Test
mutate(mk_stat = map_dbl(mk_res, ~.x$statistic),
p_val = map_dbl(mk_res, ~.x$p.value)
, sens_slope = map_dbl(sens, ~.x$estimates)
) %>%
#Pull the parameters we need
select(SITE_ID, PROGRAM_ID, parameter, sens_slope, p_val, mk_stat) %>%
mutate(output = case_when(
sens_slope == 0 ~ "NC",
sens_slope > 0 & p_val < 0.05 ~ "INS",
sens_slope > 0 & p_val > 0.05 ~ "INNS",
sens_slope < 0 & p_val < 0.05 ~ "DES",
sens_slope < 0 & p_val > 0.05 ~ "DENS"))
How do I handle the NAs in the middle of the data?
How do I get the time series to automatically start and end on the dates with actual data? For reference, each of the site_ids has the following date ranges (not including NAs):
+-----------+-----------+-------------------+-----------+-----------+
| 1234 | 5678 | 9101 | 1121 | 3812 |
+-----------+-----------+-------------------+-----------+-----------+
| 1992-1995 | 1983-1995 | 1982 OR 1985-1995 | 1987-1995 | 1988-1993 |
+-----------+-----------+-------------------+-----------+-----------+
To make the data more consistent, I decided to organize the data as individual time-series (grouping by parameter, year, site_id, program) in Oracle before importing into R.
+---------+------------+------+--------------+-----------+
| SITE_ID | PROGRAM_ID | YEAR | Value | Parameter |
+---------+------------+------+--------------+-----------+
| 1234 | Alpha | 1992 | 36.12 | ANC |
| 1234 | Alpha | 1993 | 22.30416667 | ANC |
| 1234 | Alpha | 1994 | 25.25166667 | ANC |
| 1234 | Alpha | 1995 | 23.39166667 | ANC |
| 5678 | Beta | 1990 | 2.354545455 | ANC |
| 5678 | Beta | 1991 | 5.973846154 | ANC |
| 5678 | Beta | 1992 | 4.174359 | ANC |
| 5678 | Beta | 1993 | 6.05 | ANC |
| 5678 | Beta | 1994 | -2.51666 | ANC |
| 5678 | Beta | 1995 | 8.00936875 | ANC |
| 9101 | Victor | 1990 | 27.4 | ANC |
| 9101 | Victor | 1991 | 24.95 | ANC |
| 9101 | Victor | 1992 | 29.425 | ANC |
| 9101 | Victor | 1993 | 22.53333333 | ANC |
| 9101 | Victor | 1994 | 29.93333333 | ANC |
| 9101 | Victor | 1995 | 39.33333333 | ANC |
| 1121 | Charlie | 1990 | 13.4725 | ANC |
| 1121 | Charlie | 1991 | 11.3 | ANC |
| 1121 | Charlie | 1992 | 9.0025 | ANC |
| 1121 | Charlie | 1993 | 7.7925 | ANC |
| 1121 | Charlie | 1994 | 7.59 | ANC |
| 1121 | Charlie | 1995 | 7.7925 | ANC |
| 3812 | Charlie | 1990 | 2.368571429 | ANC |
| 3812 | Charlie | 1991 | 33.695 | ANC |
| 3812 | Charlie | 1992 | 18.49111111 | ANC |
| 3812 | Charlie | 1993 | 17.28181818 | ANC |
| 1234 | Alpha | 1992 | 0.8786 | NO3 |
| 1234 | Alpha | 1993 | 2.671258333 | NO3 |
| 1234 | Alpha | 1994 | 3.296475 | NO3 |
| 1234 | Alpha | 1995 | 1.753436364 | NO3 |
| 5678 | Beta | 1990 | 39.49090909 | NO3 |
| 5678 | Beta | 1991 | 30.54307692 | NO3 |
| 5678 | Beta | 1992 | 16.99051285 | NO3 |
| 5678 | Beta | 1993 | 19.76125 | NO3 |
| 5678 | Beta | 1994 | 17.47167 | NO3 |
| 5678 | Beta | 1995 | 22.66188125 | NO3 |
| 9101 | Victor | 1990 | 0.642647059 | NO3 |
| 9101 | Victor | 1991 | 1.228921569 | NO3 |
| 9101 | Victor | 1992 | 0.591911765 | NO3 |
| 9101 | Victor | 1993 | 0.308169935 | NO3 |
| 9101 | Victor | 1994 | 0.428431373 | NO3 |
| 9101 | Victor | 1995 | 0.57875817 | NO3 |
| 1121 | Charlie | 1990 | 0.8975 | NO3 |
| 1121 | Charlie | 1991 | 0.805 | NO3 |
| 1121 | Charlie | 1992 | 7.145 | NO3 |
| 1121 | Charlie | 1993 | 6.6 | NO3 |
| 1121 | Charlie | 1994 | 3.7625 | NO3 |
| 1121 | Charlie | 1995 | 1.21 | NO3 |
| 3812 | Charlie | 1990 | 35.23448276 | NO3 |
| 3812 | Charlie | 1991 | 40.733 | NO3 |
| 3812 | Charlie | 1992 | 26.14818889 | NO3 |
| 3812 | Charlie | 1993 | 27.65394545 | NO3 |
+---------+------------+------+--------------+-----------+
Once in R, I was able to edit the beginning of the code to the following; the remaining code was the same.
Mann_Kendall_Values_Trimmed <- filter(LTM_Data_StackOverflow_9_22_2020, YEAR >1989, PARAMETER != 'doc') %>%
#filter data to start in 1990 as this removes nulls from pre-1990 sampling
group_by(SITE_ID) %>%
filter(n() > 10) %>% #filter sites with more than 10 years of data
#gather(SITE_ID, PARAMETER, VALUE) #I believe this is now redundant %>%
group_by(PARAMETER, SITE_ID, PROGRAM_ID) %>% nest() %>%
mutate(ts_out = map(data, ~ts(.x$VALUE, start = c(min(.x$YEAR), 1), end = c(max(.x$YEAR), 1), frequency = 1)))
This achieved the result I needed for all time series that have sufficient length (greater than 2, I believe) to run the Mann-Kendall test. The parameter that had those issues will be dealt with in separate R code.
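For the interior-NA part of the question, one option (an untested sketch; zoo::na.approx is my suggestion, not part of the original code, and it assumes one row per year as in the restructured data) is to drop leading and trailing NAs, interpolate NAs in the middle, and let each series start and end at the first and last year that actually has data. It is meant as a drop-in for the ts_out line above:
library(zoo)

mutate(ts_out = map(data, ~{
  d  <- arrange(.x, YEAR)
  ok <- which(!is.na(d$VALUE))
  d  <- d[min(ok):max(ok), ]                    # trim leading/trailing NAs
  v  <- zoo::na.approx(d$VALUE, na.rm = FALSE)  # linearly fill NAs in the middle
  ts(v, start = min(d$YEAR), end = max(d$YEAR), frequency = 1)
}))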
I have a sample table which looks somewhat like this:
| Date | Vendor_Id | Requisitioner | Amount |
|------------|:---------:|--------------:|--------|
| 1/17/2019 | 98 | John | 2405 |
| 4/30/2019 | 1320 | Dave | 1420 |
| 11/29/2018 | 3887 | Michele | 596 |
| 11/29/2018 | 3887 | Michele | 960 |
| 11/29/2018 | 3887 | Michele | 1158 |
| 9/21/2018 | 4919 | James | 857 |
| 10/25/2018 | 4919 | Paul | 1162 |
| 10/26/2018 | 4919 | Echo | 726 |
| 10/26/2018 | 4919 | Echo | 726 |
| 10/29/2018 | 4919 | Andrew | 532 |
| 10/29/2018 | 4919 | Andrew | 532 |
| 11/12/2018 | 4919 | Carlos | 954 |
| 5/21/2018 | 2111 | June | 3580 |
| 5/23/2018 | 7420 | Justin | 224 |
| 5/24/2018 | 1187 | Sylvia | 3442 |
| 5/25/2018 | 1187 | Sylvia | 4167 |
| 5/30/2018 | 3456 | Ama | 4580 |
For each requisitioner and vendor id, I need to find the difference between consecutive dates, so that it looks something like this:
| Date | Vendor_Id | Requisitioner | Amount | Date_Diff |
|------------|:---------:|--------------:|--------|-----------|
| 1/17/2019 | 98 | John | 2405 | NA |
| 4/30/2019 | 1320 | Dave | 1420 | 103 |
| 11/29/2018 | 3887 | Michele | 596 | NA |
| 11/29/2018 | 3887 | Michele | 960 | 0 |
| 11/29/2018 | 3887 | Michele | 1158 | 0 |
| 9/21/2018 | 4919 | James | 857 | NA |
| 10/25/2018 | 4919 | Paul | 1162 | NA |
| 10/26/2018 | 4919 | Paul | 726 | 1 |
| 10/26/2018 | 4919 | Paul | 726 | 0 |
| 10/29/2018 | 4919 | Paul | 532 | 3 |
| 10/29/2018 | 4919 | Paul | 532 | 0 |
| 11/12/2018 | 4917 | Carlos | 954 | NA |
| 5/21/2018 | 2111 | Justin | 3580 | NA |
| 5/23/2018 | 7420 | Justin | 224 | 2 |
| 5/24/2018 | 1187 | Sylvia | 3442 | NA |
| 5/25/2018 | 1187 | Sylvia | 4167 | 1 |
| 5/30/2018 | 3456 | Ama | 4580 | NA |
Now, within each requisitioner and vendor id, if the difference in the dates is <= 3 days and the sum of the amount is > 5000, I need to create a subset of those rows. The final output should be something like this:
| Date | Vendor_Id | Requisitioner | Amount | Date_Diff |
|-----------|:---------:|--------------:|--------|-----------|
| 5/24/2018 | 1187 | Sylvia | 3442 | NA |
| 5/25/2018 | 1187 | Sylvia | 4167 | 1 |
Initially, when I tried working with date difference, I used the following code:
df=df %>% mutate(diffdate= difftime(Date,lag(Date,1)))
However, the differences don't make sense, as they are huge numbers such as 86400 and other seemingly random values. I tried the above code when the 'Date' field was still of type POSIXct; after I converted it to the 'Date' class, the date differences were still the same huge numbers.
Also, is it possible to group the date differences based on requisitioners and vendor id's as mentioned in the 2nd table above?
EDIT:
I'm coming across a new challenge now. In the problem set, I need to filter the data down to the rows whose date differences are no more than 3 days. Let us assume that the table with the date differences looks something like this:
| MasterCalendarDate | Vendor_Id | Requisitioner | Amount | diffdate |
|--------------------|:---------:|--------------:|--------|----------|
| 1/17/2019 | 98 | John | 2405 | #N/A |
| 4/30/2019 | 1320 | Dave | 1420 | 103 |
| 11/29/2018 | 3887 | Michele | 596 | #N/A |
| 11/29/2018 | 3887 | Michele | 960 | 0 |
| 11/29/2018 | 3887 | Michele | 1158 | 0 |
| 9/21/2018 | 4919 | Paul | 857 | #N/A |
| 10/25/2018 | 4919 | Paul | 1162 | 34 |
| 10/26/2018 | 4919 | Paul | 726 | 1 |
| 10/26/2018 | 4919 | Paul | 726 | 0 |
When we look at the requisitioner 'Paul', the date diff between 9/21/2018 and 10/25/2018 is 34 and between that of 10/25/2018 and 10/26/2018 is 1 day. However, when I filter the data for date difference <=3 days, I miss out on 10/25/2018 because of 34 days difference. I have multiple such occurrences. How can I fix it?
I think you need to convert your date variable using as.Date(), then you can compute the lagged time difference using difftime().
# create toy data frame
df <- data.frame(date=as.Date(paste(sample(2018:2019,100,T),
sample(1:12,100,T),
sample(1:28,100,T),sep = '-')),
req=sample(letters[1:10],100,T),
amount=sample(100:10000,100,T))
# compute lagged time difference in days -- diff output is numeric
df %>% arrange(req,date) %>% group_by(req) %>%
mutate(diff=as.numeric(difftime(date,lag(date),units='days')))
# as above plus filtering based on time difference and amount
df %>% arrange(req,date) %>% group_by(req) %>%
mutate(diff=as.numeric(difftime(date,lag(date),units='days'))) %>%
filter(diff<10 | is.na(diff), amount>5000)
# A tibble: 8 x 4
# Groups: req [7]
date req amount diff
<date> <fct> <int> <dbl>
1 2018-05-13 a 9062 NA
2 2019-05-07 b 9946 2
3 2018-02-03 e 5697 NA
4 2018-03-12 g 7093 NA
5 2019-05-16 g 5631 3
6 2018-03-06 h 7114 6
7 2018-08-12 i 5151 6
8 2018-04-03 j 7738 8
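For the EDIT (keeping the 10/25 "anchor" row that is only close to its following transaction), one possible approach is to look both backwards and forwards: keep a row if it is within 3 days of either its previous or its next transaction in the same vendor/requisitioner group, then keep only groups whose remaining amounts sum to more than 5000. An untested sketch using the column names from the question's table, assuming Date is already a Date object:
df %>%
  arrange(Vendor_Id, Requisitioner, Date) %>%
  group_by(Vendor_Id, Requisitioner) %>%
  mutate(gap_prev = as.numeric(difftime(Date, lag(Date),  units = "days")),
         gap_next = as.numeric(difftime(lead(Date), Date, units = "days"))) %>%
  filter((gap_prev <= 3 & !is.na(gap_prev)) | (gap_next <= 3 & !is.na(gap_next))) %>%
  filter(sum(Amount) > 5000) %>%
  ungroup()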
I am trying to find the symbol with the smallest difference, but I don't know what to do after computing the differences in order to compare them.
I have this set:
+------+------+-------------+-------------+--------------------+------+--------+
| clid | cust | Min | Max | Difference | Qty | symbol |
+------+------+-------------+-------------+--------------------+------+--------+
| 102 | C6 | 11.8 | 12.72 | 0.9199999999999999 | 1500 | GE |
| 110 | C3 | 44 | 48.099998 | 4.099997999999999 | 2000 | INTC |
| 115 | C4 | 1755.25 | 1889.650024 | 134.40002400000003 | 2000 | AMZN |
| 121 | C9 | 28.25 | 30.27 | 2.0199999999999996 | 1500 | BAC |
| 130 | C7 | 8.48753 | 9.096588 | 0.609058000000001 | 5000 | F |
| 175 | C3 | 6.41 | 7.71 | 1.2999999999999998 | 1500 | SBS |
| 204 | C5 | 6.41 | 7.56 | 1.1499999999999995 | 5000 | SBS |
| 208 | C2 | 1782.170044 | 2004.359985 | 222.1899410000001 | 5000 | AMZN |
| 224 | C10 | 153.350006 | 162.429993 | 9.079986999999988 | 1500 | FB |
| 269 | C6 | 355.980011 | 392.299988 | 36.319976999999994 | 2000 | BA |
+------+------+-------------+-------------+--------------------+------+--------+
So far I have this query:
select d.clid,
d.cust,
MIN(f.fillPx) as Min,
MAX(f.fillPx) as Max,
MAX(f.fillPx)-MIN(f.fillPx) as Difference,
d.Qty,
d.symbol
from orders d
inner join mp f on d.clid=f.clid
group by f.clid
having SUM(f.fillQty) < d.Qty
order by d.clid;
What am I missing so that I can compare the min and max and get the symbol with the smallest difference?
mp table:
+------+------+--------+------+------+---------+-------------+--------+
| clid | cust | symbol | side | oQty | fillQty | fillPx | execid |
+------+------+--------+------+------+---------+-------------+--------+
| 123 | C2 | SBS | SELL | 5000 | 273 | 7.37 | 1 |
| 157 | C9 | C | SELL | 1500 | 167 | 69.709999 | 2 |
| 254 | C9 | GE | SELL | 5000 | 440 | 13.28 | 3 |
| 208 | C2 | AMZN | SELL | 5000 | 714 | 1864.420044 | 4 |
| 102 | C6 | GE | SELL | 1500 | 136 | 12.32 | 5 |
| 160 | C7 | INTC | SELL | 1500 | 267 | 44.5 | 6 |
| 145 | C10 | GE | SELL | 5000 | 330 | 13.28 | 7 |
| 208 | C2 | AMZN | SELL | 5000 | 1190 | 1788.609985 | 8 |
| 161 | C1 | C | SELL | 1500 | 135 | 72.620003 | 9 |
| 181 | C5 | FCX | BUY | 1500 | 84 | 12.721739 | 10 |
orders table:
+------+------+--------+------+------+
| cust | side | symbol | qty | clid |
+------+------+--------+------+------+
| C1 | SELL | C | 1500 | 161 |
| C9 | SELL | INTC | 2000 | 231 |
| C10 | SELL | BMY | 1500 | 215 |
| C1 | BUY | SBS | 2000 | 243 |
| C4 | BUY | AMZN | 2000 | 226 |
| C10 | BUY | C | 1500 | 211 |
If you want one symbol, you can use order by and limit:
select d.clid,
d.cust,
MIN(f.fillPx) as Min,
MAX(f.fillPx) as Max,
MAX(f.fillPx)-MIN(f.fillPx) as Difference,
d.Qty,
d.symbol
from orders d join
mp f
on d.clid = f.clid
group by d.clid, d.cust, d.Qty, d.symbol
having SUM(f.fillQty) < d.Qty
order by difference
limit 1;
Notice that I added the rest of the unaggregated columns to the group by; standard SQL (and MariaDB/MySQL with ONLY_FULL_GROUP_BY enabled) requires every non-aggregated selected column to appear there.
When a counter is made up from a fixed number of digits (2 in the title), standard counting-up works by incrementing the least significant digit and, on overflow, resetting it and carrying into the next more significant digit.
I want to count differently: a 4 digit number in base-10 would be counted up in base-2 until the overflow back to 0000 would happen, but instead the base is increased to base-3 while omitting previously counted numbers, so it only continues with 0002, 0012, 0020, 0021, 0022, 0102, 0112, 0120, 0121, 0122, 0200, 0201, 0202, 0210, 0211, ...(fixed!) and all other numbers with at least one 2 in them. Upon 2222 the switch to base-4 happens, so all 4-digit combinations with at least one 3 in them follow. In the end all numbers from 0000 to 9999 are in this sequence, but in a different ordering.
This way a 9 would not show up anywhere until the last 10% of the sequence.
How would such a counter be implemented in theory (without the naive digit-presence1 check)? Can I easily jump to the n-th element of this ordering or count backwards? And is there an actual name instead of "broad counting" for it?
1: A "naive digit-presence check" would count up to base-2, and when switching to base-3 all generated numbers are checked to ensure that at least on 2 is in them. Upon switching to base 4 (i.e. the 2222-0003 step) all numbers must contain at least one 3. So after 2222 the numbers 0000, 0001, 0002 are omitted because they lack a 3 and thus have been enumerated before. And so on, base-N means the digit N-1 has to be present at least once.
So first you want all numbers that use only the digit 0, in order: i.e. just 00.
Then all numbers with the digits 0,1: 00, 01, 10, 11, but excluding 00.
Then all numbers with digits 0,1,2: 00, 01, 02, 10, 11, 12, 20, 21, 22, but excluding 00, 01, 10, 11, i.e. excluding all those which do not contain a digit 2.
It is simplest to implement by going through all combinations and excluding those which have already been printed.
for (int maxdigit = 0; maxdigit < 10; ++maxdigit) {
    for (int digit1 = 0; digit1 <= maxdigit; ++digit1) {
        for (int digit2 = 0; digit2 <= maxdigit; ++digit2) {
            for (int digit3 = 0; digit3 <= maxdigit; ++digit3) {
                for (int digit4 = 0; digit4 <= maxdigit; ++digit4) {
                    // skip numbers that do not contain the digit maxdigit:
                    // those were already printed in an earlier pass
                    if (digit1 < maxdigit && digit2 < maxdigit
                        && digit3 < maxdigit && digit4 < maxdigit) continue;
                    printf("%d%d%d%d\n", digit1, digit2, digit3, digit4);
                }}}}
}
To understand the theory of how this works you can arrange the 2 digit version in a grid
00 | 01 | 02 | 03 | 04
----- | | |
10 11 | 12 | 13 | 14
---------- | |
20 21 22 | 23 | 24
--------------- |
30 31 32 33 | 34
--------------------
40 41 42 43 44
Note how each set of numbers forms "shells". The first shell has just one number, the first and second shells together have 4 = 2^2 numbers, the first three shells have 9 = 3^2 numbers, etc.
We can use this to work out how many numbers are in each shell. In the two-digit case it is 1^2 = 1, 2^2 - 1^2 = 3, 3^2 - 2^2 = 5, 4^2 - 3^2 = 7.
With three digits it's cubes instead: 1^3 = 1, 2^3 - 1^3 = 8 - 1 = 7, 3^3 - 2^3 = 27 - 8 = 19, etc.
With four digits it's fourth powers.
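The per-shell counts follow the pattern above: the phase whose largest digit is m contributes (m+1)^d - m^d new numbers. A tiny C sketch (mine, not from the answer) that prints these counts for d = 4 digits:
#include <stdio.h>

int main(void) {
    const int d = 4;                 /* number of digit positions */
    long prev = 0;                   /* m^d: count of all previous phases */
    for (int m = 0; m <= 9; ++m) {   /* m = largest digit allowed in this phase */
        long total = 1;
        for (int i = 0; i < d; ++i) total *= (m + 1);   /* (m+1)^d */
        printf("phase with max digit %d: %ld new numbers\n", m, total - prev);
        prev = total;
    }
    return 0;
}
The running total (m+1)^d also tells you where each phase starts, which is the first step toward jumping straight to the n-th element of the ordering.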
By considering the shells we could work out a more efficient way of doing things. In the 3 digit case we have shells of cubes, and would need to work out the path through the shell. I'm not sure if there is much to be gained unless you have a large number of digits.
To get the order for this, consider the 3 digit case and let x,y,z be the digits in order. If we are looking at the k-th shell, we want to get all the solutions on the three planes x=k, y=k, z=k. These solutions split into those with x < k (which must have y=k or z=k) and those with x = k (the whole face).
In pseudocode
for(shell=0;shell<=9;++shell) {
// Do the L shaped part
for(x=0;x<shell;++x) {
// Do the y-leg, which has z=k
for(y=0;y<shell;++y)
print(x,y,shell);
// Do the z-leg, which has y=k
for(z=0;z<=shell;++z)
print(x,shell,z);
}
// Now the x=shell face
for(y=0;y<=shell;++y)
for(z=0;z<=shell;++z)
print(shell,y,z);
}
It should be possible to generalise this to d dimensions. Let our coordinates here be x1, x2, ..., xd. The solutions in the k-th shell will lie on the (d-1)-dimensional hyperplanes x1=k, x2=k, ..., xd=k. Again we loop through x1=0 to x1=k-1; for each such fixed x1 the remaining digits form the same problem in d-1 dimensions, which suggests a recursive approach.
// solve the dim-dimensional problem for
// prefix the output with prefix
function solve(int shell,int dim,String prefix) {
if(dim==1) {
// only one solution with the last digit being the shell
print(prefix+shell);
return;
}
// loop through the first digit
for(int x=0;x<shell;++x) {
String prefix2 = prefix + x;
solve(shell,dim-1,prefix2);
}
// Now to the x=k hypercube,
// need to do this in a separate recursion
String prefix2 = prefix + shell;
hypercube(shell,dim-1,prefix2);
}
// all solutions in dim dimensions with a given prefix
function hypercube(int shell,int dim,String prefix) {
if(dim==1) {
for(int x=0;x<=shell;++x)
println(prefix+x);
}
else {
for(int x=0;x<=shell;++x) {
String prefix2 = prefix + x;
hypercube(shell,dim-1,prefix2);
}
}
}
// Now call, here we do the 4 digit version
for(shell=0;shell<=9;++shell) {
solve(shell,4,"");
}
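For concreteness, here is a minimal runnable C sketch of the same recursion (function and buffer names are mine): solve() handles the "first digit below the shell digit" branch and hypercube() fills in the x=k face, exactly as in the pseudocode above.
#include <stdio.h>

/* all combinations 0..shell for the remaining dim digits, appended to prefix */
static void hypercube(int shell, int dim, char *prefix, int len) {
    if (dim == 0) { prefix[len] = '\0'; printf("%s\n", prefix); return; }
    for (int x = 0; x <= shell; ++x) {
        prefix[len] = (char)('0' + x);
        hypercube(shell, dim - 1, prefix, len + 1);
    }
}

/* dim more digits, appended to prefix, that contain the digit `shell` */
static void solve(int shell, int dim, char *prefix, int len) {
    if (dim == 1) {
        prefix[len] = (char)('0' + shell); prefix[len + 1] = '\0';
        printf("%s\n", prefix);
        return;
    }
    for (int x = 0; x < shell; ++x) {      /* first digit below the shell digit */
        prefix[len] = (char)('0' + x);
        solve(shell, dim - 1, prefix, len + 1);
    }
    prefix[len] = (char)('0' + shell);     /* first digit equals the shell digit */
    hypercube(shell, dim - 1, prefix, len + 1);
}

int main(void) {
    char buf[8];
    for (int shell = 0; shell <= 9; ++shell)   /* 4-digit version */
        solve(shell, 4, buf, 0);
    return 0;
}
For shell 2 this prints 0002, 0012, 0020, 0021, 0022, 0102, ..., matching the sequence in the question.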
I've made a spreadsheet to look for patterns. The ones up to base 3 and the first few base 4s are shown below, though my spreadsheet goes higher which shows the patterns more clearly.
In the 'table' (apologies for lack of table formatting in SO), d3 to d0 are the 4 digits, str is a string representation of the same digits, b is the current number base, and d is the depth - number of digits without leading zeros. pos is the position in the list. (dec) is a decimal representation of the display number with the base interpreted in the normal way, which proves useless since there are duplicates.
Please check the table to see that I have interpreted what you are asking for correctly.
Some patterns are emerging; for example, there seems to be some kind of exponential-ish relationship between the base and the number of entries at each depth for that base. I'm out of time to spend on this right now, but will further edit this answer when I get the chance in the next day or so, unless someone else beats me to it.
As for a name for this, I have no idea. Note, however, that the number of digits you are allowing very much affects the ordering of the outcome. On the other hand, there is no theoretical need to stop at 9; the math continues up to any base you like, and you could use A, B, C etc. just as we usually do for hex counting. This continuation is limited only by the number of symbols you allow.
| d3 | d2 | d1 | d0 | str  | b | d | pos | (dec) |
|:--:|:--:|:--:|:--:|:----:|:-:|:-:|:---:|:-----:|
| 0  | 0  | 0  | 0  | 0000 | 1 | 0 | 0   | 0     |
| 0  | 0  | 0  | 1  | 0001 | 2 | 1 | 1   | 1     |
| 0  | 0  | 1  | 0  | 0010 | 2 | 2 | 2   | 2     |
| 0  | 0  | 1  | 1  | 0011 | 2 | 2 | 3   | 3     |
| 0  | 1  | 0  | 0  | 0100 | 2 | 3 | 4   | 4     |
| 0  | 1  | 0  | 1  | 0101 | 2 | 3 | 5   | 5     |
| 0  | 1  | 1  | 0  | 0110 | 2 | 3 | 6   | 6     |
| 0  | 1  | 1  | 1  | 0111 | 2 | 3 | 7   | 7     |
| 1  | 0  | 0  | 0  | 1000 | 2 | 4 | 8   | 8     |
| 1  | 0  | 0  | 1  | 1001 | 2 | 4 | 9   | 9     |
| 1  | 0  | 1  | 0  | 1010 | 2 | 4 | 10  | 10    |
| 1  | 0  | 1  | 1  | 1011 | 2 | 4 | 11  | 11    |
| 1  | 1  | 0  | 0  | 1100 | 2 | 4 | 12  | 12    |
| 1  | 1  | 0  | 1  | 1101 | 2 | 4 | 13  | 13    |
| 1  | 1  | 1  | 0  | 1110 | 2 | 4 | 14  | 14    |
| 1  | 1  | 1  | 1  | 1111 | 2 | 4 | 15  | 15    |
| 0  | 0  | 0  | 2  | 0002 | 3 | 1 | 16  | 2     |
| 0  | 0  | 1  | 2  | 0012 | 3 | 2 | 17  | 5     |
| 0  | 0  | 2  | 2  | 0022 | 3 | 2 | 18  | 8     |
| 0  | 1  | 0  | 2  | 0102 | 3 | 3 | 19  | 11    |
| 0  | 1  | 1  | 2  | 0112 | 3 | 3 | 20  | 14    |
| 0  | 1  | 2  | 2  | 0122 | 3 | 3 | 21  | 17    |
| 1  | 0  | 0  | 2  | 1002 | 3 | 4 | 22  | 29    |
| 1  | 0  | 1  | 2  | 1012 | 3 | 4 | 23  | 32    |
| 1  | 0  | 2  | 2  | 1022 | 3 | 4 | 24  | 35    |
| 1  | 1  | 0  | 2  | 1102 | 3 | 4 | 25  | 38    |
| 1  | 1  | 1  | 2  | 1112 | 3 | 4 | 26  | 41    |
| 1  | 1  | 2  | 2  | 1122 | 3 | 4 | 27  | 44    |
| 2  | 0  | 0  | 2  | 2002 | 3 | 4 | 28  | 56    |
| 2  | 0  | 1  | 2  | 2012 | 3 | 4 | 29  | 59    |
| 2  | 0  | 2  | 2  | 2022 | 3 | 4 | 30  | 62    |
| 2  | 1  | 0  | 2  | 2102 | 3 | 4 | 31  | 65    |
| 2  | 1  | 1  | 2  | 2112 | 3 | 4 | 32  | 68    |
| 2  | 1  | 2  | 2  | 2122 | 3 | 4 | 33  | 71    |
| 2  | 2  | 2  | 2  | 2222 | 3 | 4 | 34  | 80    |
| 0  | 0  | 0  | 3  | 0003 | 4 | 1 | 35  | 3     |
| 0  | 0  | 1  | 3  | 0013 | 4 | 2 | 36  | 7     |
| 0  | 0  | 2  | 3  | 0023 | 4 | 2 | 37  | 11    |
| 0  | 0  | 3  | 3  | 0033 | 4 | 2 | 38  | 15    |
| 0  | 1  | 0  | 3  | 0103 | 4 | 3 | 39  | 19    |
| 0  | 1  | 1  | 3  | 0113 | 4 | 3 | 40  | 23    |
Here's a table of all the 4-digit base-n numbers for n up to 4, with any already listed in a previous column omitted. In this arrangement, some patterns are evident, and it seems the most you will ever have to skip to find the next unused one is n-1 (ignoring zero). You can also start counting for each new base at n-1. The majority of numbers in a given base n are usable, including all from (n-1),(n-2),0,0 up.
This arrangement suggests that the naive elimination approach might be ok for finding the next or previous number, but, once again, you'd have to look more algorithmically at the patterns to answer questions like 'what does the one at (x) look like' or 'what ordinal position is (yyyy) at' without looping.
+-----+------+------+------+------+
| | - | 2 | 3 | 4 |
+-----+------+------+------+------+
| 0 | 0000 | | | |
| 1 | | 0001 | | |
| 2 | | 0010 | 0002 | |
| 3 | | 0011 | | 0003 |
| 4 | | 0100 | | |
| 5 | | 0101 | 0012 | |
| 6 | | 0110 | 0020 | |
| 7 | | 0111 | 0021 | 0013 |
| 8 | | 1000 | 0022 | |
| 9 | | 1001 | | |
| 10 | | 1010 | | |
| 11 | | 1011 | 0102 | 0023 |
| 12 | | 1100 | | 0030 |
| 13 | | 1101 | | 0031 |
| 14 | | 1110 | 0112 | 0032 |
| 15 | | 1111 | 0120 | 0033 |
| 16 | | | 0121 | |
| 17 | | | 0122 | |
| 18 | | | 0200 | |
| 19 | | | 0201 | 0103 |
| 20 | | | 0202 | |
| 21 | | | 0210 | |
| 22 | | | 0211 | |
| 23 | | | 0212 | 0113 |
| 24 | | | 0220 | |
| 25 | | | 0221 | |
| 26 | | | 0222 | |
| 27 | | | | 0123 |
| 28 | | | | 0130 |
| 29 | | | 1002 | 0131 |
| 30 | | | | 0132 |
| 31 | | | | 0133 |
| 32 | | | 1012 | |
| 33 | | | 1020 | |
| 34 | | | 1021 | |
| 35 | | | 1022 | 0203 |
| 36 | | | | |
| 37 | | | | |
| 38 | | | 1102 | |
| 39 | | | | 0213 |
| 40 | | | | |
| 41 | | | 1112 | |
| 42 | | | 1120 | |
| 43 | | | 1121 | 0223 |
| 44 | | | 1122 | 0230 |
| 45 | | | 1200 | 0231 |
| 46 | | | 1201 | 0232 |
| 47 | | | 1202 | 0233 |
| 48 | | | 1210 | 0300 |
| 49 | | | 1211 | 0301 |
| 50 | | | 1212 | 0302 |
| 51 | | | 1220 | 0303 |
| 52 | | | 1221 | 0310 |
| 53 | | | 1222 | 0311 |
| 54 | | | 2000 | 0312 |
| 55 | | | 2001 | 0313 |
| 56 | | | 2002 | 0320 |
| 57 | | | 2010 | 0321 |
| 58 | | | 2011 | 0322 |
| 59 | | | 2012 | 0323 |
| 60 | | | 2020 | 0330 |
| 61 | | | 2021 | 0331 |
| 62 | | | 2022 | 0332 |
| 63 | | | 2100 | 0333 |
| 64 | | | 2101 | |
| 65 | | | 2102 | |
| 66 | | | 2110 | |
| 67 | | | 2111 | 1003 |
| 68 | | | 2112 | |
| 69 | | | 2120 | |
| 70 | | | 2121 | |
| 71 | | | 2122 | 1013 |
| 72 | | | 2200 | |
| 73 | | | 2201 | |
| 74 | | | 2202 | |
| 75 | | | 2210 | 1023 |
| 76 | | | 2211 | 1030 |
| 77 | | | 2212 | 1031 |
| 78 | | | 2220 | 1032 |
| 79 | | | 2221 | 1033 |
| 80 | | | 2222 | |
| 81 | | | | |
| 82 | | | | |
| 83 | | | | 1103 |
| 84 | | | | |
| 85 | | | | |
| 86 | | | | |
| 87 | | | | 1113 |
| 88 | | | | |
| 89 | | | | |
| 90 | | | | |
| 91 | | | | 1123 |
| 92 | | | | 1130 |
| 93 | | | | 1131 |
| 94 | | | | 1132 |
| 95 | | | | 1133 |
| 96 | | | | |
| 97 | | | | |
| 98 | | | | |
| 99 | | | | 1203 |
| 100 | | | | |
| 101 | | | | |
| 102 | | | | |
| 103 | | | | 1213 |
| 104 | | | | |
| 105 | | | | |
| 106 | | | | |
| 107 | | | | 1223 |
| 108 | | | | 1230 |
| 109 | | | | 1231 |
| 110 | | | | 1232 |
| 111 | | | | 1233 |
| 112 | | | | 1300 |
| 113 | | | | 1301 |
| 114 | | | | 1302 |
| 115 | | | | 1303 |
| 116 | | | | 1310 |
| 117 | | | | 1311 |
| 118 | | | | 1312 |
| 119 | | | | 1313 |
| 120 | | | | 1320 |
| 121 | | | | 1321 |
| 122 | | | | 1322 |
| 123 | | | | 1323 |
| 124 | | | | 1330 |
| 125 | | | | 1331 |
| 126 | | | | 1332 |
| 127 | | | | 1333 |
| 128 | | | | |
| 129 | | | | |
| 130 | | | | |
| 131 | | | | 2003 |
| 132 | | | | |
| 133 | | | | |
| 134 | | | | |
| 135 | | | | 2013 |
| 136 | | | | |
| 137 | | | | |
| 138 | | | | |
| 139 | | | | 2023 |
| 140 | | | | 2030 |
| 141 | | | | 2031 |
| 142 | | | | 2032 |
| 143 | | | | 2033 |
| 144 | | | | |
| 145 | | | | |
| 146 | | | | |
| 147 | | | | 2103 |
| 148 | | | | |
| 149 | | | | |
| 150 | | | | |
| 151 | | | | 2113 |
| 152 | | | | |
| 153 | | | | |
| 154 | | | | |
| 155 | | | | 2123 |
| 156 | | | | 2130 |
| 157 | | | | 2131 |
| 158 | | | | 2132 |
| 159 | | | | 2133 |
| 160 | | | | |
| 161 | | | | |
| 162 | | | | |
| 163 | | | | 2203 |
| 164 | | | | |
| 165 | | | | |
| 166 | | | | |
| 167 | | | | 2213 |
| 168 | | | | |
| 169 | | | | |
| 170 | | | | |
| 171 | | | | 2223 |
| 172 | | | | 2230 |
| 173 | | | | 2231 |
| 174 | | | | 2232 |
| 175 | | | | 2233 |
| 176 | | | | 2300 |
| 177 | | | | 2301 |
| 178 | | | | 2302 |
| 179 | | | | 2303 |
| 180 | | | | 2310 |
| 181 | | | | 2311 |
| 182 | | | | 2312 |
| 183 | | | | 2313 |
| 184 | | | | 2320 |
| 185 | | | | 2321 |
| 186 | | | | 2322 |
| 187 | | | | 2323 |
| 188 | | | | 2330 |
| 189 | | | | 2331 |
| 190 | | | | 2332 |
| 191 | | | | 2333 |
| 192 | | | | 3000 |
| 193 | | | | 3001 |
| 194 | | | | 3002 |
| 195 | | | | 3003 |
| 196 | | | | 3010 |
| 197 | | | | 3011 |
| 198 | | | | 3012 |
| 199 | | | | 3013 |
| 200 | | | | 3020 |
| 201 | | | | 3021 |
| 202 | | | | 3022 |
| 203 | | | | 3023 |
| 204 | | | | 3030 |
| 205 | | | | 3031 |
| 206 | | | | 3032 |
| 207 | | | | 3033 |
| 208 | | | | 3100 |
| 209 | | | | 3101 |
| 210 | | | | 3102 |
| 211 | | | | 3103 |
| 212 | | | | 3110 |
| 213 | | | | 3111 |
| 214 | | | | 3112 |
| 215 | | | | 3113 |
| 216 | | | | 3120 |
| 217 | | | | 3121 |
| 218 | | | | 3122 |
| 219 | | | | 3123 |
| 220 | | | | 3130 |
| 221 | | | | 3131 |
| 222 | | | | 3132 |
| 223 | | | | 3133 |
| 224 | | | | 3200 |
| 225 | | | | 3201 |
| 226 | | | | 3202 |
| 227 | | | | 3203 |
| 228 | | | | 3210 |
| 229 | | | | 3211 |
| 230 | | | | 3212 |
| 231 | | | | 3213 |
| 232 | | | | 3220 |
| 233 | | | | 3221 |
| 234 | | | | 3222 |
| 235 | | | | 3223 |
| 236 | | | | 3230 |
| 237 | | | | 3231 |
| 238 | | | | 3232 |
| 239 | | | | 3233 |
| 240 | | | | 3300 |
| 241 | | | | 3301 |
| 242 | | | | 3302 |
| 243 | | | | 3303 |
| 244 | | | | 3310 |
| 245 | | | | 3311 |
| 246 | | | | 3312 |
| 247 | | | | 3313 |
| 248 | | | | 3320 |
| 249 | | | | 3321 |
| 250 | | | | 3322 |
| 251 | | | | 3323 |
| 252 | | | | 3330 |
| 253 | | | | 3331 |
| 254 | | | | 3332 |
| 255 | | | | 3333 |
+-----+------+------+------+------+
Edit: the patterns are even clearer when you go wider. Here are all the 2-digit ones up to base 11 (because... why not?)
+-----+----+----+----+----+----+----+----+----+----+----+----+
| | - | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
+-----+----+----+----+----+----+----+----+----+----+----+----+
| 0 | 00 | | | | | | | | | | |
| 1 | | 01 | | | | | | | | | |
| 2 | | 10 | 02 | | | | | | | | |
| 3 | | 11 | | 03 | | | | | | | |
| 4 | | | | | 04 | | | | | | |
| 5 | | | 12 | | | 05 | | | | | |
| 6 | | | 20 | | | | 06 | | | | |
| 7 | | | 21 | 13 | | | | 07 | | | |
| 8 | | | 22 | | | | | | 08 | | |
| 9 | | | | | 14 | | | | | 09 | |
| 10 | | | | | | | | | | | 0A |
| 11 | | | | 23 | | 15 | | | | | |
| 12 | | | | 30 | | | | | | | |
| 13 | | | | 31 | | | 16 | | | | |
| 14 | | | | 32 | 24 | | | | | | |
| 15 | | | | 33 | | | | 17 | | | |
| 16 | | | | | | | | | | | |
| 17 | | | | | | 25 | | | 18 | | |
| 18 | | | | | | | | | | | |
| 19 | | | | | 34 | | | | | 19 | |
| 20 | | | | | 40 | | 26 | | | | |
| 21 | | | | | 41 | | | | | | 1A |
| 22 | | | | | 42 | | | | | | |
| 23 | | | | | 43 | 35 | | 27 | | | |
| 24 | | | | | 44 | | | | | | |
| 25 | | | | | | | | | | | |
| 26 | | | | | | | | | 28 | | |
| 27 | | | | | | | 36 | | | | |
| 28 | | | | | | | | | | | |
| 29 | | | | | | 45 | | | | 29 | |
| 30 | | | | | | 50 | | | | | |
| 31 | | | | | | 51 | | 37 | | | |
| 32 | | | | | | 52 | | | | | 2A |
| 33 | | | | | | 53 | | | | | |
| 34 | | | | | | 54 | 46 | | | | |
| 35 | | | | | | 55 | | | 38 | | |
| 36 | | | | | | | | | | | |
| 37 | | | | | | | | | | | |
| 38 | | | | | | | | | | | |
| 39 | | | | | | | | 47 | | 39 | |
| 40 | | | | | | | | | | | |
| 41 | | | | | | | 56 | | | | |
| 42 | | | | | | | 60 | | | | |
| 43 | | | | | | | 61 | | | | 3A |
| 44 | | | | | | | 62 | | 48 | | |
| 45 | | | | | | | 63 | | | | |
| 46 | | | | | | | 64 | | | | |
| 47 | | | | | | | 65 | 57 | | | |
| 48 | | | | | | | 66 | | | | |
| 49 | | | | | | | | | | 49 | |
| 50 | | | | | | | | | | | |
| 51 | | | | | | | | | | | |
| 52 | | | | | | | | | | | |
| 53 | | | | | | | | | 58 | | |
| 54 | | | | | | | | | | | 4A |
| 55 | | | | | | | | 67 | | | |
| 56 | | | | | | | | 70 | | | |
| 57 | | | | | | | | 71 | | | |
| 58 | | | | | | | | 72 | | | |
| 59 | | | | | | | | 73 | | 59 | |
| 60 | | | | | | | | 74 | | | |
| 61 | | | | | | | | 75 | | | |
| 62 | | | | | | | | 76 | 68 | | |
| 63 | | | | | | | | 77 | | | |
| 64 | | | | | | | | | | | |
| 65 | | | | | | | | | | | 5A |
| 66 | | | | | | | | | | | |
| 67 | | | | | | | | | | | |
| 68 | | | | | | | | | | | |
| 69 | | | | | | | | | | 69 | |
| 70 | | | | | | | | | | | |
| 71 | | | | | | | | | 78 | | |
| 72 | | | | | | | | | 80 | | |
| 73 | | | | | | | | | 81 | | |
| 74 | | | | | | | | | 82 | | |
| 75 | | | | | | | | | 83 | | |
| 76 | | | | | | | | | 84 | | 6A |
| 77 | | | | | | | | | 85 | | |
| 78 | | | | | | | | | 86 | | |
| 79 | | | | | | | | | 87 | 79 | |
| 80 | | | | | | | | | 88 | | |
| 81 | | | | | | | | | | | |
| 82 | | | | | | | | | | | |
| 83 | | | | | | | | | | | |
| 84 | | | | | | | | | | | |
| 85 | | | | | | | | | | | |
| 86 | | | | | | | | | | | |
| 87 | | | | | | | | | | | 7A |
| 88 | | | | | | | | | | | |
| 89 | | | | | | | | | | 89 | |
| 90 | | | | | | | | | | 90 | |
| 91 | | | | | | | | | | 91 | |
| 92 | | | | | | | | | | 92 | |
| 93 | | | | | | | | | | 93 | |
| 94 | | | | | | | | | | 94 | |
| 95 | | | | | | | | | | 95 | |
| 96 | | | | | | | | | | 96 | |
| 97 | | | | | | | | | | 97 | |
| 98 | | | | | | | | | | 98 | 8A |
| 99 | | | | | | | | | | 99 | |
| 100 | | | | | | | | | | | |
| 101 | | | | | | | | | | | |
| 102 | | | | | | | | | | | |
| 103 | | | | | | | | | | | |
| 104 | | | | | | | | | | | |
| 105 | | | | | | | | | | | |
| 106 | | | | | | | | | | | |
| 107 | | | | | | | | | | | |
| 108 | | | | | | | | | | | |
| 109 | | | | | | | | | | | 9A |
| 110 | | | | | | | | | | | A0 |
| 111 | | | | | | | | | | | A1 |
| 112 | | | | | | | | | | | A2 |
| 113 | | | | | | | | | | | A3 |
| 114 | | | | | | | | | | | A4 |
| 115 | | | | | | | | | | | A5 |
| 116 | | | | | | | | | | | A6 |
| 117 | | | | | | | | | | | A7 |
| 118 | | | | | | | | | | | A8 |
| 119 | | | | | | | | | | | A9 |
| 120 | | | | | | | | | | | AA |
+-----+----+----+----+----+----+----+----+----+----+----+----+