MariaDB: select average by hour and other column

I have a table in a MariaDB version 10.3.27 database that looks like this:
+----+------------+---------------+-----------------+
| id | channel_id | timestamp | value |
+----+------------+---------------+-----------------+
| 1 | 2 | 1623669600000 | 2882.4449252449 |
| 2 | 1 | 1623669600000 | 295.46914369742 |
| 3 | 2 | 1623669630000 | 2874.46365243 |
| 4 | 1 | 1623669630000 | 295.68124546516 |
| 5 | 2 | 1623669660000 | 2874.9638893452 |
| 6 | 1 | 1623669660000 | 295.69561247521 |
| 7 | 2 | 1623669690000 | 2878.7120274678 |
and I want to have a result like this:
+------+-------+-------+
| hour | valhh | valwp |
+------+-------+-------+
| 0 | 419 | 115 |
| 1 | 419 | 115 |
| 2 | 419 | 115 |
| 3 | 419 | 115 |
| 4 | 419 | 115 |
| 5 | 419 | 115 |
| 6 | 419 | 115 |
| 7 | 419 | 115 |
| 8 | 419 | 115 |
| 9 | 419 | 115 |
| 10 | 419 | 115 |
| 11 | 419 | 115 |
| 12 | 419 | 115 |
| 13 | 419 | 115 |
| 14 | 419 | 115 |
| 15 | 419 | 115 |
| 16 | 419 | 115 |
| 17 | 419 | 115 |
| 18 | 419 | 115 |
| 19 | 419 | 115 |
| 20 | 419 | 115 |
| 21 | 419 | 115 |
| 22 | 419 | 115 |
| 23 | 419 | 115 |
+------+-------+-------+
but with valhh (valwp) being the average of the values for the hour of the day for all days where the channel_id is 1 (2) and not the overall average. So far, I've tried:
select h.hour, hh.valhh, wp.valwp from
(select hour(from_unixtime(timestamp/1000)) as hour from data) h,
(select hour(from_unixtime(timestamp/1000)) as hour, cast(avg(value) as integer) as valhh from data where channel_id = 1) hh,
(select hour(from_unixtime(timestamp/1000)) as hour, cast(avg(value) as integer) as valwp from data where channel_id = 2) wp group by h.hour;
which gives the result above (average of all values).
I can get what I want by querying the channels separately, i.e.:
select hour(from_unixtime(timestamp/1000)) as hour, cast(avg(value) as integer) as value from data where channel_id = 1 group by hour;
gives
+------+-------+
| hour | value |
+------+-------+
| 0 | 326 |
| 1 | 145 |
| 2 | 411 |
| 3 | 142 |
| 4 | 143 |
| 5 | 171 |
| 6 | 160 |
| 7 | 487 |
| 8 | 408 |
| 9 | 186 |
| 10 | 214 |
| 11 | 199 |
| 12 | 942 |
| 13 | 521 |
| 14 | 196 |
| 15 | 247 |
| 16 | 364 |
| 17 | 252 |
| 18 | 392 |
| 19 | 916 |
| 20 | 1024 |
| 21 | 1524 |
| 22 | 561 |
| 23 | 249 |
+------+-------+
but I want to have both channels in one result set as separate columns.
How would I do that?
Thanks!

After a steep learning curve I think I figured it out:
select
    hh.hour, hh.valuehh, wp.valuewp
from
    (select
        hour(from_unixtime(timestamp/1000)) as hour,
        cast(avg(value) as integer) as valuehh
    from data
    where channel_id = 1
    group by hour) hh
inner join
    (select
        hour(from_unixtime(timestamp/1000)) as hour,
        cast(avg(value) as integer) as valuewp
    from data
    where channel_id = 2
    group by hour) wp
on hh.hour = wp.hour;
gives
+------+---------+---------+
| hour | valuehh | valuewp |
+------+---------+---------+
| 0 | 300 | 38 |
| 1 | 162 | 275 |
| 2 | 338 | 668 |
| 3 | 166 | 38 |
| 4 | 152 | 38 |
| 5 | 176 | 37 |
| 6 | 174 | 38 |
| 7 | 488 | 36 |
| 8 | 553 | 37 |
| 9 | 198 | 36 |
| 10 | 214 | 38 |
| 11 | 199 | 612 |
| 12 | 942 | 40 |
| 13 | 521 | 99 |
| 14 | 187 | 38 |
| 15 | 209 | 38 |
| 16 | 287 | 39 |
| 17 | 667 | 37 |
| 18 | 615 | 39 |
| 19 | 854 | 199 |
| 20 | 1074 | 44 |
| 21 | 1470 | 178 |
| 22 | 665 | 37 |
| 23 | 235 | 38 |
+------+---------+---------+
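A more compact alternative (not from the original post, just a sketch against the same data table) is conditional aggregation: average each channel in a single pass and group by hour once. Unlike the inner join, it returns NULL for an hour in which one of the channels has no rows instead of dropping that hour.
select hour(from_unixtime(timestamp/1000)) as hour,
       cast(avg(case when channel_id = 1 then value end) as integer) as valuehh,
       cast(avg(case when channel_id = 2 then value end) as integer) as valuewp
from data
group by hour
order by hour;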

Related

Calculating weighted average buy and hold return per ID in R

Thanks to @langtang, I was able to calculate the Buy and Hold Return around the event date for each company (Calculating Buy and Hold return around event date per ID in R). But now I am facing a new problem.
Below is the data I currently have.
+----+------------+-------+------------+------------+----------------------+
| ID | Date | Price | EventDate | Market Cap | BuyAndHoldIndividual |
+----+------------+-------+------------+------------+----------------------+
| 1 | 2011-03-06 | 10 | NA | 109 | NA |
| 1 | 2011-03-07 | 9 | NA | 107 | -0.10000 |
| 1 | 2011-03-08 | 12 | NA | 109 | 0.20000 |
| 1 | 2011-03-09 | 14 | NA | 107 | 0.40000 |
| 1 | 2011-03-10 | 15 | NA | 101 | 0.50000 |
| 1 | 2011-03-11 | 17 | NA | 101 | 0.70000 |
| 1 | 2011-03-12 | 12 | 2011-03-12 | 110 | 0.20000 |
| 1 | 2011-03-13 | 14 | NA | 110 | 0.40000 |
| 1 | 2011-03-14 | 17 | NA | 100 | 0.70000 |
| 1 | 2011-03-15 | 14 | NA | 101 | 0.40000 |
| 1 | 2011-03-16 | 17 | NA | 107 | 0.70000 |
| 1 | 2011-03-17 | 16 | NA | 104 | 0.60000 |
| 1 | 2011-03-18 | 15 | NA | 104 | NA |
| 1 | 2011-03-19 | 16 | NA | 102 | 0.06667 |
| 1 | 2011-03-20 | 17 | NA | 107 | 0.13333 |
| 1 | 2011-03-21 | 18 | NA | 104 | 0.20000 |
| 1 | 2011-03-22 | 11 | NA | 105 | -0.26667 |
| 1 | 2011-03-23 | 15 | NA | 100 | 0.00000 |
| 1 | 2011-03-24 | 12 | 2011-03-24 | 110 | -0.20000 |
| 1 | 2011-03-25 | 13 | NA | 110 | -0.13333 |
| 1 | 2011-03-26 | 15 | NA | 107 | 0.00000 |
| 2 | 2011-03-12 | 48 | NA | 300 | NA |
| 2 | 2011-03-13 | 49 | NA | 300 | NA |
| 2 | 2011-03-14 | 50 | NA | 290 | NA |
| 2 | 2011-03-15 | 57 | NA | 296 | 0.14000 |
| 2 | 2011-03-16 | 60 | NA | 297 | 0.20000 |
| 2 | 2011-03-17 | 49 | NA | 296 | -0.02000 |
| 2 | 2011-03-18 | 64 | NA | 299 | 0.28000 |
| 2 | 2011-03-19 | 63 | NA | 292 | 0.26000 |
| 2 | 2011-03-20 | 67 | 2011-03-20 | 290 | 0.34000 |
| 2 | 2011-03-21 | 70 | NA | 299 | 0.40000 |
| 2 | 2011-03-22 | 58 | NA | 295 | 0.16000 |
| 2 | 2011-03-23 | 65 | NA | 290 | 0.30000 |
| 2 | 2011-03-24 | 57 | NA | 296 | 0.14000 |
| 2 | 2011-03-25 | 55 | NA | 299 | 0.10000 |
| 2 | 2011-03-26 | 57 | NA | 299 | NA |
| 2 | 2011-03-27 | 60 | NA | 300 | NA |
| 3 | 2011-03-18 | 5 | NA | 54 | NA |
| 3 | 2011-03-19 | 10 | NA | 50 | NA |
| 3 | 2011-03-20 | 7 | NA | 53 | NA |
| 3 | 2011-03-21 | 8 | NA | 53 | NA |
| 3 | 2011-03-22 | 7 | NA | 50 | NA |
| 3 | 2011-03-23 | 8 | NA | 51 | 0.14286 |
| 3 | 2011-03-24 | 7 | NA | 52 | 0.00000 |
| 3 | 2011-03-25 | 6 | NA | 55 | -0.14286 |
| 3 | 2011-03-26 | 9 | NA | 54 | 0.28571 |
| 3 | 2011-03-27 | 9 | NA | 55 | 0.28571 |
| 3 | 2011-03-28 | 9 | 2011-03-28 | 50 | 0.28571 |
| 3 | 2011-03-29 | 6 | NA | 52 | -0.14286 |
| 3 | 2011-03-30 | 6 | NA | 53 | -0.14286 |
| 3 | 2011-03-31 | 4 | NA | 50 | -0.42857 |
| 3 | 2011-04-01 | 5 | NA | 50 | -0.28571 |
| 3 | 2011-04-02 | 8 | NA | 55 | 0.00000 |
| 3 | 2011-04-03 | 9 | NA | 55 | NA |
+----+------------+-------+------------+------------+----------------------+
This time, I would like to make a new column called BuyAndHoldWeightedMarket, where I calculate the weighted average (by Market Cap) Buy and Hold return for each ID around -5 ~ +5 days of the event date. For example, for ID = 1, starting from 2011-03-19, BuyAndHoldWeightedMarket is calculated as the sum product of (price of each ID at t / price of each ID at (event date - 6) - 1) and the Market Caps of each ID for that day, divided by the sum of the Market Caps of each ID on that day.
Please check the below picture for the details. The equations are listed for each case of colored blocks.
Please note that for the uppermost BuyAndHoldWeightedMarket, ID = 2, 3 are not involved because they begin later than 2011-03-06. For the third block (grey colored area), the calculation of the weighted return only includes ID = 1, 2 because ID = 3 begins later than 2011-03-14. Also, for the last block (mixed color), the first four rows use all three IDs, the blue area uses only ID = 2, 3 because ID = 1 ends on 2011-03-26, and the yellow block uses only ID = 3 because ID = 1, 2 end before 2011-03-28.
Eventually, I would like to get a nice data table that looks as below.
+----+------------+-------+------------+------------+----------------------+--------------------------+
| ID | Date | Price | EventDate | Market Cap | BuyAndHoldIndividual | BuyAndHoldWeightedMarket |
+----+------------+-------+------------+------------+----------------------+--------------------------+
| 1 | 2011-03-06 | 10 | NA | 109 | NA | NA |
| 1 | 2011-03-07 | 9 | NA | 107 | -0.10000 | -0.10000 |
| 1 | 2011-03-08 | 12 | NA | 109 | 0.20000 | 0.20000 |
| 1 | 2011-03-09 | 14 | NA | 107 | 0.40000 | 0.40000 |
| 1 | 2011-03-10 | 15 | NA | 101 | 0.50000 | 0.50000 |
| 1 | 2011-03-11 | 17 | NA | 101 | 0.70000 | 0.70000 |
| 1 | 2011-03-12 | 12 | 2011-03-12 | 110 | 0.20000 | 0.20000 |
| 1 | 2011-03-13 | 14 | NA | 110 | 0.40000 | 0.40000 |
| 1 | 2011-03-14 | 17 | NA | 100 | 0.70000 | 0.70000 |
| 1 | 2011-03-15 | 14 | NA | 101 | 0.40000 | 0.40000 |
| 1 | 2011-03-16 | 17 | NA | 107 | 0.70000 | 0.70000 |
| 1 | 2011-03-17 | 16 | NA | 104 | 0.60000 | 0.60000 |
| 1 | 2011-03-18 | 15 | NA | 104 | NA | NA |
| 1 | 2011-03-19 | 16 | NA | 102 | 0.06667 | 0.11765 |
| 1 | 2011-03-20 | 17 | NA | 107 | 0.13333 | 0.10902 |
| 1 | 2011-03-21 | 18 | NA | 104 | 0.20000 | 0.17682 |
| 1 | 2011-03-22 | 11 | NA | 105 | -0.26667 | -0.07924 |
| 1 | 2011-03-23 | 15 | NA | 100 | 0.00000 | 0.07966 |
| 1 | 2011-03-24 | 12 | 2011-03-24 | 110 | -0.20000 | -0.07331 |
| 1 | 2011-03-25 | 13 | NA | 110 | -0.13333 | -0.09852 |
| 1 | 2011-03-26 | 15 | NA | 107 | 0.00000 | 0.02282 |
| 2 | 2011-03-12 | 48 | NA | 300 | NA | NA |
| 2 | 2011-03-13 | 49 | NA | 300 | NA | NA |
| 2 | 2011-03-14 | 50 | NA | 290 | NA | NA |
| 2 | 2011-03-15 | 57 | NA | 296 | 0.14000 | 0.059487331 |
| 2 | 2011-03-16 | 60 | NA | 297 | 0.20000 | 0.147029703 |
| 2 | 2011-03-17 | 49 | NA | 296 | -0.02000 | -0.030094118 |
| 2 | 2011-03-18 | 64 | NA | 299 | 0.28000 | 0.177381404 |
| 2 | 2011-03-19 | 63 | NA | 292 | 0.26000 | 0.177461929 |
| 2 | 2011-03-20 | 67 | 2011-03-20 | 290 | 0.34000 | 0.24836272 |
| 2 | 2011-03-21 | 70 | NA | 299 | 0.40000 | 0.311954459 |
| 2 | 2011-03-22 | 58 | NA | 295 | 0.16000 | 0.025352941 |
| 2 | 2011-03-23 | 65 | NA | 290 | 0.30000 | 0.192911011 |
| 2 | 2011-03-24 | 57 | NA | 296 | 0.14000 | 0.022381918 |
| 2 | 2011-03-25 | 55 | NA | 299 | 0.10000 | 0.009823098 |
| 2 | 2011-03-26 | 57 | NA | 299 | NA | NA |
| 2 | 2011-03-27 | 60 | NA | 300 | NA | NA |
| 3 | 2011-03-18 | 5 | NA | 54 | NA | NA |
| 3 | 2011-03-19 | 10 | NA | 50 | NA | NA |
| 3 | 2011-03-20 | 7 | NA | 53 | NA | NA |
| 3 | 2011-03-21 | 8 | NA | 53 | NA | NA |
| 3 | 2011-03-22 | 7 | NA | 50 | NA | NA |
| 3 | 2011-03-23 | 8 | NA | 51 | 0.14286 | 0.178343199 |
| 3 | 2011-03-24 | 7 | NA | 52 | 0.00000 | 0.010691161 |
| 3 | 2011-03-25 | 6 | NA | 55 | -0.14286 | -0.007160905 |
| 3 | 2011-03-26 | 9 | NA | 54 | 0.28571 | 0.106918456 |
| 3 | 2011-03-27 | 9 | NA | 55 | 0.28571 | 0.073405953 |
| 3 | 2011-03-28 | 9 | 2011-03-28 | 50 | 0.28571 | 0.285714286 |
| 3 | 2011-03-29 | 6 | NA | 52 | -0.14286 | -0.142857143 |
| 3 | 2011-03-30 | 6 | NA | 53 | -0.14286 | -0.142857143 |
| 3 | 2011-03-31 | 4 | NA | 50 | -0.42857 | -0.428571429 |
| 3 | 2011-04-01 | 5 | NA | 50 | -0.28571 | -0.285714286 |
| 3 | 2011-04-02 | 8 | NA | 55 | 0.00000 | 0.142857143 |
| 3 | 2011-04-03 | 9 | NA | 55 | NA | NA |
+----+------------+-------+------------+------------+----------------------+--------------------------+
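To make the arithmetic concrete, here is a minimal R sketch of just the weighted-average step described above (the helper name weighted_bhr is purely illustrative, not part of an intended solution):
# market-cap-weighted mean of the individual buy-and-hold returns on one day,
# ignoring IDs that are not active on that day
weighted_bhr <- function(ret, mcap) {
  ok <- !is.na(ret) & !is.na(mcap)
  sum(ret[ok] * mcap[ok]) / sum(mcap[ok])
}

# ID 1 on 2011-03-19: the window's base date is 2011-03-18 (ID 1's event date minus 6 days).
# Returns relative to that date are 16/15 - 1, 63/64 - 1 and 10/5 - 1 for IDs 1, 2 and 3,
# weighted by that day's market caps 102, 292 and 50.
weighted_bhr(c(16/15 - 1, 63/64 - 1, 10/5 - 1), c(102, 292, 50))
# [1] 0.117652   (matches the 0.11765 shown above)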
I have tried the following code so far, with the help of the previous question, but I am having a hard time figuring out how to calculate the weighted buy-and-hold return, which begins around a different event date for each ID.
#choose rows with no NA in event date and only show ID and event date
events = unique(df[!is.na(EventDate),.(ID,EventDate)])
#helper column
#:= is defined for use in j only. It adds or updates or removes column(s) by reference.
#It makes no copies of any part of memory at all.
events[, eDate:=EventDate]
#make new (temporary) columns for the lower and upper boundary of the window
df[, `:=`(s=Date-6, e=Date+6)]
#non-equi match
bhr = events[df, on=.(ID, EventDate>=s, EventDate<=e), nomatch=0]
#Generate the BuyHoldReturn column, by ID and EventDate
bhr2 = bhr[, .(Date, BuyHoldReturnM1=c(NA, (Price[-1]/Price[1] -1)*MarketCap[-1])), by = .(Date)]
#merge back to get the full data
bhr3 = bhr2[df,on=.(ID,Date),.(ID,Date,Price,EventDate=i.EventDate,BuyHoldReturn)]
I would be grateful if you could help.
Thank you very much in advance!

Calculating date difference of subsequent rows based on each group in R

I have a sample table which looks somewhat like this:
| Date | Vendor_Id | Requisitioner | Amount |
|------------|:---------:|--------------:|--------|
| 1/17/2019 | 98 | John | 2405 |
| 4/30/2019 | 1320 | Dave | 1420 |
| 11/29/2018 | 3887 | Michele | 596 |
| 11/29/2018 | 3887 | Michele | 960 |
| 11/29/2018 | 3887 | Michele | 1158 |
| 9/21/2018 | 4919 | James | 857 |
| 10/25/2018 | 4919 | Paul | 1162 |
| 10/26/2018 | 4919 | Echo | 726 |
| 10/26/2018 | 4919 | Echo | 726 |
| 10/29/2018 | 4919 | Andrew | 532 |
| 10/29/2018 | 4919 | Andrew | 532 |
| 11/12/2018 | 4919 | Carlos | 954 |
| 5/21/2018 | 2111 | June | 3580 |
| 5/23/2018 | 7420 | Justin | 224 |
| 5/24/2018 | 1187 | Sylvia | 3442 |
| 5/25/2018 | 1187 | Sylvia | 4167 |
| 5/30/2018 | 3456 | Ama | 4580 |
For each requisitioner and vendor id, I need to find the difference between consecutive dates, so that the result looks something like this:
| Date | Vendor_Id | Requisitioner | Amount | Date_Diff |
|------------|:---------:|--------------:|--------|-----------|
| 1/17/2019 | 98 | John | 2405 | NA |
| 4/30/2019 | 1320 | Dave | 1420 | 103 |
| 11/29/2018 | 3887 | Michele | 596 | NA |
| 11/29/2018 | 3887 | Michele | 960 | 0 |
| 11/29/2018 | 3887 | Michele | 1158 | 0 |
| 9/21/2018 | 4919 | James | 857 | NA |
| 10/25/2018 | 4919 | Paul | 1162 | NA |
| 10/26/2018 | 4919 | Paul | 726 | 1 |
| 10/26/2018 | 4919 | Paul | 726 | 0 |
| 10/29/2018 | 4919 | Paul | 532 | 3 |
| 10/29/2018 | 4919 | Paul | 532 | 0 |
| 11/12/2018 | 4917 | Carlos | 954 | NA |
| 5/21/2018 | 2111 | Justin | 3580 | NA |
| 5/23/2018 | 7420 | Justin | 224 | 2 |
| 5/24/2018 | 1187 | Sylvia | 3442 | NA |
| 5/25/2018 | 1187 | Sylvia | 4167 | 1 |
| 5/30/2018 | 3456 | Ama | 4580 | NA |
Now, if the date difference within each requisitioner and vendor id is <= 3 days and the sum of the amounts is > 5000, I need to create a subset of those rows. The final output should be something like this:
| Date | Vendor_Id | Requisitioner | Amount | Date_Diff |
|-----------|:---------:|--------------:|--------|-----------|
| 5/24/2018 | 1187 | Sylvia | 3442 | NA |
| 5/25/2018 | 1187 | Sylvia | 4167 | 1 |
Initially, when I tried working with date difference, I used the following code:
df=df %>% mutate(diffdate= difftime(Date,lag(Date,1)))
However, the differences don't make sense, as they are huge numbers such as 86400 and other seemingly random large values. I tried the above code when the data type of the 'Date' field was POSIXct. Later, when I changed it to the 'Date' data type, the date differences were still the same huge numbers.
Also, is it possible to group the date differences by requisitioner and vendor id, as shown in the second table above?
EDIT:
I'm coming across a new challenge now. In the problem set, I need to keep the rows whose date differences are at most 3 days. Let us assume that the table with the date differences looks something like this:
| MasterCalendarDate | Vendor_Id | Requisitioner | Amount | diffdate |
|--------------------|:---------:|--------------:|--------|----------|
| 1/17/2019 | 98 | John | 2405 | #N/A |
| 4/30/2019 | 1320 | Dave | 1420 | 103 |
| 11/29/2018 | 3887 | Michele | 596 | #N/A |
| 11/29/2018 | 3887 | Michele | 960 | 0 |
| 11/29/2018 | 3887 | Michele | 1158 | 0 |
| 9/21/2018 | 4919 | Paul | 857 | #N/A |
| 10/25/2018 | 4919 | Paul | 1162 | 34 |
| 10/26/2018 | 4919 | Paul | 726 | 1 |
| 10/26/2018 | 4919 | Paul | 726 | 0 |
When we look at the requisitioner 'Paul', the date diff between 9/21/2018 and 10/25/2018 is 34 days, and between 10/25/2018 and 10/26/2018 it is 1 day. However, when I filter the data for a date difference <= 3 days, I miss out on 10/25/2018 because of the 34-day difference. I have multiple such occurrences. How can I fix it?
I think you need to convert your date variable using as.Date(), then you can compute the lagged time difference using difftime().
library(dplyr)

# create toy data frame
df <- data.frame(date = as.Date(paste(sample(2018:2019, 100, T),
                                      sample(1:12, 100, T),
                                      sample(1:28, 100, T), sep = '-')),
                 req = sample(letters[1:10], 100, T),
                 amount = sample(100:10000, 100, T))
# compute lagged time difference in days -- diff output is numeric
df %>% arrange(req,date) %>% group_by(req) %>%
mutate(diff=as.numeric(difftime(date,lag(date),units='days')))
# as above plus filtering based on time difference and amount
df %>% arrange(req,date) %>% group_by(req) %>%
mutate(diff=as.numeric(difftime(date,lag(date),units='days'))) %>%
filter(diff<10 | is.na(diff), amount>5000)
# A tibble: 8 x 4
# Groups: req [7]
date req amount diff
<date> <fct> <int> <dbl>
1 2018-05-13 a 9062 NA
2 2019-05-07 b 9946 2
3 2018-02-03 e 5697 NA
4 2018-03-12 g 7093 NA
5 2019-05-16 g 5631 3
6 2018-03-06 h 7114 6
7 2018-08-12 i 5151 6
8 2018-04-03 j 7738 8
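For the later edit (losing 10/25/2018 because its own gap is 34 days even though the next row follows within 1 day), one possible tweak, sketched here with the question's column names and assuming Date is already of class Date, is to keep a row when either its own gap or the next row's gap within the group is small; the sum-of-amount > 5000 condition would still have to be checked per run afterwards:
df %>% arrange(Vendor_Id, Requisitioner, Date) %>%
  group_by(Vendor_Id, Requisitioner) %>%
  mutate(diffdate = as.numeric(difftime(Date, lag(Date), units = 'days'))) %>%
  filter(diffdate <= 3 | lead(diffdate) <= 3)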

SQL getting the smallest difference between max and min

I am trying to find the symbol with the smallest difference, but I don't know what to do after finding the difference in order to compare them.
I have this set:
+------+------+-------------+-------------+--------------------+------+--------+
| clid | cust | Min | Max | Difference | Qty | symbol |
+------+------+-------------+-------------+--------------------+------+--------+
| 102 | C6 | 11.8 | 12.72 | 0.9199999999999999 | 1500 | GE |
| 110 | C3 | 44 | 48.099998 | 4.099997999999999 | 2000 | INTC |
| 115 | C4 | 1755.25 | 1889.650024 | 134.40002400000003 | 2000 | AMZN |
| 121 | C9 | 28.25 | 30.27 | 2.0199999999999996 | 1500 | BAC |
| 130 | C7 | 8.48753 | 9.096588 | 0.609058000000001 | 5000 | F |
| 175 | C3 | 6.41 | 7.71 | 1.2999999999999998 | 1500 | SBS |
| 204 | C5 | 6.41 | 7.56 | 1.1499999999999995 | 5000 | SBS |
| 208 | C2 | 1782.170044 | 2004.359985 | 222.1899410000001 | 5000 | AMZN |
| 224 | C10 | 153.350006 | 162.429993 | 9.079986999999988 | 1500 | FB |
| 269 | C6 | 355.980011 | 392.299988 | 36.319976999999994 | 2000 | BA |
+------+------+-------------+-------------+--------------------+------+--------+
So far I have this query:
select d.clid,
d.cust,
MIN(f.fillPx) as Min,
MAX(f.fillPx) as Max,
MAX(f.fillPx)-MIN(f.fillPx) as Difference,
d.Qty,
d.symbol
from orders d
inner join mp f on d.clid=f.clid
group by f.clid
having SUM(f.fillQty) < d.Qty
order by d.clid;
What am I missing so that I can compare the min and max and get the symbol with the smallest difference?
mp table:
+------+------+--------+------+------+---------+-------------+--------+
| clid | cust | symbol | side | oQty | fillQty | fillPx | execid |
+------+------+--------+------+------+---------+-------------+--------+
| 123 | C2 | SBS | SELL | 5000 | 273 | 7.37 | 1 |
| 157 | C9 | C | SELL | 1500 | 167 | 69.709999 | 2 |
| 254 | C9 | GE | SELL | 5000 | 440 | 13.28 | 3 |
| 208 | C2 | AMZN | SELL | 5000 | 714 | 1864.420044 | 4 |
| 102 | C6 | GE | SELL | 1500 | 136 | 12.32 | 5 |
| 160 | C7 | INTC | SELL | 1500 | 267 | 44.5 | 6 |
| 145 | C10 | GE | SELL | 5000 | 330 | 13.28 | 7 |
| 208 | C2 | AMZN | SELL | 5000 | 1190 | 1788.609985 | 8 |
| 161 | C1 | C | SELL | 1500 | 135 | 72.620003 | 9 |
| 181 | C5 | FCX | BUY | 1500 | 84 | 12.721739 | 10 |
orders table:
+------+------+--------+------+------+
| cust | side | symbol | qty | clid |
+------+------+--------+------+------+
| C1 | SELL | C | 1500 | 161 |
| C9 | SELL | INTC | 2000 | 231 |
| C10 | SELL | BMY | 1500 | 215 |
| C1 | BUY | SBS | 2000 | 243 |
| C4 | BUY | AMZN | 2000 | 226 |
| C10 | BUY | C | 1500 | 211 |
If you want one symbol, you can use order by and limit:
select d.clid,
d.cust,
MIN(f.fillPx) as Min,
MAX(f.fillPx) as Max,
MAX(f.fillPx)-MIN(f.fillPx) as Difference,
d.Qty,
d.symbol
from orders d join
mp f
on d.clid = f.clid
group by d.clid, d.cust, d.Qty, d.symbol
having SUM(f.fillQty) < d.Qty
order by difference
limit 1;
Notice that I added the rest of the unaggregated columns to the group by.
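If only the symbol itself is needed, the same query can be trimmed down to a single output column; this is just a variant of the query above, not tested against your tables:
select d.symbol
from orders d join
     mp f
     on d.clid = f.clid
group by d.clid, d.cust, d.Qty, d.symbol
having SUM(f.fillQty) < d.Qty
order by MAX(f.fillPx) - MIN(f.fillPx)
limit 1;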

Selecting a unique value from an R data frame

If I have a table like this:
| FileName | Category| Value | Number |
|:--------:|:-------:|:-----:|:------:|
| File1 | Time | 123 | 1 |
| File1 | Size | 456 | 1 |
| File1 | Final | 789 | 1 |
| File2 | Time | 312 | 2 |
| File2 | Size | 645 | 2 |
| File2 | Final | 978 | 2 |
| File3 | Time | 741 | 1 |
| File3 | Size | 852 | 1 |
| File3 | Final | 963 | 1 |
| File1 | Time | 369 | 2 |
| File1 | Size | 258 | 2 |
| File1 | Final | 147 | 2 |
| File3 | Time | 741 | 2 |
| File3 | Size | 734 | 2 |
| File3 | Final | 942 | 2 |
| File1 | Time | 997 | 3 |
| File1 | Size | 245 | 3 |
| File1 | Final | 985 | 3 |
| File2 | Time | 645 | 3 |
| File2 | Size | 285 | 3 |
| File2 | Final | 735 | 3 |
| File3 | Time | 198 | 3 |
| File3 | Size | 165 | 3 |
| File3 | Final | 753 | 3 |
What could I use in an R script to create a variable that holds, for each FileName, the Value of the row where Number is at its minimum and Category is Time?
(EDIT: It should be noted that there are null entries in the Value column. Therefore, this code should be constructed to treat null entries as though they didn't exist so New Column doesn't end up filled with NA values.)
Then I'd like to merge this to form a new column on the existing table so that it now looks like this:
| FileName | Category | Value | Number | New Column |
|:--------:|:--------:|:-----:|:------:|------------|
| File1 | Time | 123 | 1 | 123 |
| File1 | Size | 456 | 1 | 123 |
| File1 | Final | 789 | 1 | 123 |
| File2 | Time | 312 | 2 | 312 |
| File2 | Size | 645 | 2 | 312 |
| File2 | Final | 978 | 2 | 312 |
| File3 | Time | 741 | 1 | 741 |
| File3 | Size | 852 | 1 | 741 |
| File3 | Final | 963 | 1 | 741 |
| File1 | Time | 369 | 2 | 369 |
| File1 | Size | 258 | 2 | 369 |
| File1 | Final | 147 | 2 | 369 |
| File3 | Time | 741 | 2 | 741 |
| File3 | Size | 734 | 2 | 741 |
| File3 | Final | 942 | 2 | 741 |
| File1 | Time | 997 | 3 | 997 |
| File1 | Size | 245 | 3 | 997 |
| File1 | Final | 985 | 3 | 997 |
| File2 | Time | 645 | 3 | 645 |
| File2 | Size | 285 | 3 | 645 |
| File2 | Final | 735 | 3 | 645 |
| File3 | Time | 198 | 3 | 198 |
| File3 | Size | 165 | 3 | 198 |
| File3 | Final | 753 | 3 | 198 |
Using data.table:
(Edited to reflect @Frank's comments)
DT[, Benchmark := Value[Category == "Time"][which.min(Number[Category == "Time"])], by = FileName]
Breaking this down:
Number[Category == "Time"]
Take all Number where Category == Time
which.min(^^^)
Find which one is the minimum
Benchmark := Value[Category == "Time"][^^^]
Set the new column of benchmark to the value at this minimum
by = FileName
Do this by group
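For a self-contained check of the line above (assuming the table has been read into a data.table called DT; only File1 and File2 with Numbers 1 and 2 are included here):
library(data.table)

DT <- data.table(
  FileName = rep(c("File1", "File2"), each = 3),
  Category = rep(c("Time", "Size", "Final"), times = 2),
  Value    = c(123, 456, 789, 312, 645, 978),
  Number   = c(1, 1, 1, 2, 2, 2)
)

DT[, Benchmark := Value[Category == "Time"][which.min(Number[Category == "Time"])],
   by = FileName]
# every File1 row gets Benchmark 123 and every File2 row gets 312,
# matching the "New Column" values above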
Untested, but should get you started:
Ref <- Table1 %>%
  mutate(Category2 = factor(Category, c("Time", "Size", "Final")),
         FileNumber = as.numeric(sub("File", "", FileName)),
         FilePrefix = "File") %>%
  arrange(FilePrefix, FileNumber, Category2, Value) %>%
  group_by(FilePrefix, FileNumber, Category2) %>%
  mutate(NewColumn = Value[1])

Counting up "broadly": Not 0,1,2,..,9,10,11,..,99 but 00, 01, 10, 11, 02, 12, 20, 21, 22, 03, .., 99

When a counter is made up of a fixed number of digits (2 in the title), standard counting-up works by incrementing from the least to the most significant digit and resetting on overflow.
I want to count differently: a 4-digit number in base-10 would be counted up in base-2 until the overflow back to 0000 would happen, but instead the base is increased to base-3 while omitting previously counted numbers, so we only continue with 0002, 0012, 0020, 0021, 0022, 0102, 0112, 0120, 0121, 0122, 0200, 0201, 0202, 0210, 0211, ... and all other numbers with at least one 2 in them. Upon 2222 the switch to base-4 happens, so all 4-digit combinations with at least one 3 in them follow. In the end all numbers from 0000 to 9999 are in this sequence, but in a different ordering.
This way a 9 would not show up anywhere until late in the sequence.
How would such a counter be implemented in theory (without the naive digit-presence check[1])? Can I easily jump to the n-th element of this ordering or count backwards? And is there an actual name for it instead of "broad counting"?
[1]: A "naive digit-presence check" would count up in base-2, and when switching to base-3 all generated numbers are checked to ensure that at least one 2 is in them. Upon switching to base-4 (i.e. the 2222-to-0003 step) all numbers must contain at least one 3. So after 2222 the numbers 0000, 0001, 0002 are omitted because they lack a 3 and thus have been enumerated before. And so on: base-N means the digit N-1 has to be present at least once.
So first you want all numbers using only the digit 0, in order: i.e. just 00.
Then all numbers with the digits 0,1: 00, 01, 10, 11, but excluding 00.
Then all numbers with digits 0,1,2: 00, 01, 02, 10, 11, 12, 20, 21, 22, but excluding 00, 01, 10, 11, i.e. all those which do not contain a digit 2.
It is simplest to implement by going through all combinations and excluding those which have already been printed.
for(maxdigit=0; maxdigit<10; ++maxdigit) {
    for(digit1 = 0; digit1 <= maxdigit; ++digit1) {
        for(digit2 = 0; digit2 <= maxdigit; ++digit2) {
            for(digit3 = 0; digit3 <= maxdigit; ++digit3) {
                for(digit4 = 0; digit4 <= maxdigit; ++digit4) {
                    if( digit1 < maxdigit && digit2 < maxdigit
                        && digit3 < maxdigit && digit4 < maxdigit) continue;
                    print( digit1, digit2, digit3, digit4);
    }}}}
}
To understand the theory of how this works you can arrange the 2 digit version in a grid
00 | 01 | 02 | 03 | 04
---+    |    |    |
10   11 | 12 | 13 | 14
--------+    |    |
20   21   22 | 23 | 24
-------------+    |
30   31   32   33 | 34
------------------+
40   41   42   43   44
Note how each set of numbers forms "shells". The first shell has just one number, the first and second shells together have 4 = 2^2 numbers, the first, second and third shells together have 9 = 3^2 numbers, etc.
We can use this to work out how many numbers are in each shell. In the two-digit case it is 1^2 = 1, 2^2 - 1^2 = 3, 3^2 - 2^2 = 5, 4^2 - 3^2 = 7.
With three digits it's cubes instead: 1^3 = 1, 2^3 - 1^3 = 8 - 1 = 7, 3^3 - 2^3 = 27 - 8 = 19, etc.
With four digits it's fourth powers.
By considering the shells we could work out a more efficient way of doing things. In the 3 digit case we have shells of cubes, and would need to work out the path through the shell. I'm not sure if there is much to be gained unless you have a large number of digits.
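To make the shell bookkeeping concrete, here is a small sketch (in Python, chosen only for brevity) that uses the shell sizes above to find which shell the n-th element of the sequence falls in; this is the first step towards jumping straight to position n:
def shell_size(k, d):
    # numbers of d digits whose largest digit is exactly k
    return (k + 1) ** d - k ** d

def shell_of(n, d):
    # return (shell, offset inside that shell) for the n-th element, counting from 0
    k = 0
    while n >= shell_size(k, d):
        n -= shell_size(k, d)
        k += 1
    return k, n

print(shell_of(16, 4))   # (2, 0): position 16 is 0002, the first number that needs a 2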
To get the order for this consider the 3 digit case and let x,y,z be the digits in order. If we are looking at the k-th shell, we want to get all the solutions on the three planes x=k, y=k, z=k. These solutions split into those with x < k, which form an L-shaped part on the y=k and z=k planes, and those with x = k, which make up the whole x=k face.
In pseudocode
for(shell=0; shell<=9; ++shell) {
    // Do the L shaped part
    for(x=0; x<shell; ++x) {
        // Do the y-leg, which has z=k
        for(y=0; y<shell; ++y)
            print(x, y, shell);
        // Do the z-leg, which has y=k
        for(z=0; z<=shell; ++z)
            print(x, shell, z);
    }
    // Now the x=shell face
    for(y=0; y<=shell; ++y)
        for(z=0; z<=shell; ++z)
            print(shell, y, z);
}
It should be possible to generalise this to d dimensions. Let our coordinates be x1, x2, ..., xd. The solutions in the k-th shell lie on the (d-1)-dimensional hyperplanes x1=k, x2=k, ..., xd=k. Again we loop through x1=0 to x1=k-1; for each such x1 the remaining digits pose the same problem in d-1 dimensions, which suggests a recursive approach. The x1=k face is then a full (d-1)-dimensional hypercube.
// solve the dim-dimensional problem for one shell,
// prefixing the output with prefix
function solve(int shell, int dim, String prefix) {
    if(dim==1) {
        // only one solution, with the last digit being the shell
        print(prefix+shell);
        return;
    }
    // loop through the first digit
    for(int x=0; x<shell; ++x) {
        String prefix2 = prefix + x;
        solve(shell, dim-1, prefix2);
    }
    // Now the x=k hypercube,
    // which needs to be done in a separate recursion
    String prefix2 = prefix + shell;
    hypercube(shell, dim-1, prefix2);
}

// all solutions in dim dimensions with a given prefix
function hypercube(int shell, int dim, String prefix) {
    if(dim==1) {
        for(int x=0; x<=shell; ++x)
            println(prefix+x);
    }
    else {
        for(int x=0; x<=shell; ++x) {
            String prefix2 = prefix + x;
            hypercube(shell, dim-1, prefix2);
        }
    }
}

// Now call; here we do the 4 digit version
for(shell=0; shell<=9; ++shell) {
    solve(shell, 4, "");
}
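For reference, a direct Python translation of the pseudocode above (same structure, just runnable); for 4 digits it reproduces the order from the question, 0001 ... 1111, then 0002, 0012, 0020, 0021, 0022, 0102, ...:
def solve(shell, dim, prefix=""):
    # all dim-digit strings whose largest digit is exactly `shell`, in "broad" order
    if dim == 1:
        print(prefix + str(shell))              # last digit must be the shell itself
        return
    for x in range(shell):                      # first digit below the shell:
        solve(shell, dim - 1, prefix + str(x))  # the rest must still contain `shell`
    hypercube(shell, dim - 1, prefix + str(shell))  # first digit == shell: rest is free

def hypercube(shell, dim, prefix):
    # all dim-digit strings with digits 0..shell, in ordinary counting order
    if dim == 1:
        for x in range(shell + 1):
            print(prefix + str(x))
    else:
        for x in range(shell + 1):
            hypercube(shell, dim - 1, prefix + str(x))

for shell in range(10):   # the 4-digit version
    solve(shell, 4)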
I've made a spreadsheet to look for patterns. The ones up to base 3 and the first few base 4s are shown below, though my spreadsheet goes higher which shows the patterns more clearly.
In the 'table' (apologies for lack of table formatting in SO), d3 to d0 are the 4 digits, str is a string representation of the same digits, b is the current number base, and d is the depth - number of digits without leading zeros. pos is the position in the list. (dec) is a decimal representation of the display number with the base interpreted in the normal way, which proves useless since there are duplicates.
Please check the table to see that I have interpreted what you are asking for correctly.
Some patterns are emerging, such as an apparently exponential-ish relationship between the number of entries at each depth for a given base and the base itself. I'm out of time to spend on this right now, but will further edit this answer when I get the chance in the next day or so, unless someone else beats me to it.
As for a name for this, I have no idea. Note, however, that the number of digits you are allowing very much affects the ordering of the outcome. On the other hand, there is no theoretical need to stop at 9; the math will continue up to any base you like, and you could use A, B, C etc. just like we usually do for hex counting if you wish. This continuation is limited only by the number of symbols you allow.
| d3 | d2 | d1 | d0 | str  | b | d | pos | (dec) |
|----|----|----|----|------|---|---|-----|-------|
| 0  | 0  | 0  | 0  | 0000 | 1 | 0 | 0   | 0     |
| 0  | 0  | 0  | 1  | 0001 | 2 | 1 | 1   | 1     |
| 0  | 0  | 1  | 0  | 0010 | 2 | 2 | 2   | 2     |
| 0  | 0  | 1  | 1  | 0011 | 2 | 2 | 3   | 3     |
| 0  | 1  | 0  | 0  | 0100 | 2 | 3 | 4   | 4     |
| 0  | 1  | 0  | 1  | 0101 | 2 | 3 | 5   | 5     |
| 0  | 1  | 1  | 0  | 0110 | 2 | 3 | 6   | 6     |
| 0  | 1  | 1  | 1  | 0111 | 2 | 3 | 7   | 7     |
| 1  | 0  | 0  | 0  | 1000 | 2 | 4 | 8   | 8     |
| 1  | 0  | 0  | 1  | 1001 | 2 | 4 | 9   | 9     |
| 1  | 0  | 1  | 0  | 1010 | 2 | 4 | 10  | 10    |
| 1  | 0  | 1  | 1  | 1011 | 2 | 4 | 11  | 11    |
| 1  | 1  | 0  | 0  | 1100 | 2 | 4 | 12  | 12    |
| 1  | 1  | 0  | 1  | 1101 | 2 | 4 | 13  | 13    |
| 1  | 1  | 1  | 0  | 1110 | 2 | 4 | 14  | 14    |
| 1  | 1  | 1  | 1  | 1111 | 2 | 4 | 15  | 15    |
| 0  | 0  | 0  | 2  | 0002 | 3 | 1 | 16  | 2     |
| 0  | 0  | 1  | 2  | 0012 | 3 | 2 | 17  | 5     |
| 0  | 0  | 2  | 2  | 0022 | 3 | 2 | 18  | 8     |
| 0  | 1  | 0  | 2  | 0102 | 3 | 3 | 19  | 11    |
| 0  | 1  | 1  | 2  | 0112 | 3 | 3 | 20  | 14    |
| 0  | 1  | 2  | 2  | 0122 | 3 | 3 | 21  | 17    |
| 1  | 0  | 0  | 2  | 1002 | 3 | 4 | 22  | 29    |
| 1  | 0  | 1  | 2  | 1012 | 3 | 4 | 23  | 32    |
| 1  | 0  | 2  | 2  | 1022 | 3 | 4 | 24  | 35    |
| 1  | 1  | 0  | 2  | 1102 | 3 | 4 | 25  | 38    |
| 1  | 1  | 1  | 2  | 1112 | 3 | 4 | 26  | 41    |
| 1  | 1  | 2  | 2  | 1122 | 3 | 4 | 27  | 44    |
| 2  | 0  | 0  | 2  | 2002 | 3 | 4 | 28  | 56    |
| 2  | 0  | 1  | 2  | 2012 | 3 | 4 | 29  | 59    |
| 2  | 0  | 2  | 2  | 2022 | 3 | 4 | 30  | 62    |
| 2  | 1  | 0  | 2  | 2102 | 3 | 4 | 31  | 65    |
| 2  | 1  | 1  | 2  | 2112 | 3 | 4 | 32  | 68    |
| 2  | 1  | 2  | 2  | 2122 | 3 | 4 | 33  | 71    |
| 2  | 2  | 2  | 2  | 2222 | 3 | 4 | 34  | 80    |
| 0  | 0  | 0  | 3  | 0003 | 4 | 1 | 35  | 3     |
| 0  | 0  | 1  | 3  | 0013 | 4 | 2 | 36  | 7     |
| 0  | 0  | 2  | 3  | 0023 | 4 | 2 | 37  | 11    |
| 0  | 0  | 3  | 3  | 0033 | 4 | 2 | 38  | 15    |
| 0  | 1  | 0  | 3  | 0103 | 4 | 3 | 39  | 19    |
| 0  | 1  | 1  | 3  | 0113 | 4 | 3 | 40  | 23    |
Here's a table of all the 4-digit base-n numbers for n up to 4, with any already listed in a previous column omitted. In this arrangement, some patterns are evident, and it seems the most you will ever have to skip to find the next unused one is n-1 (ignoring zero). You can also start counting for each new base at n-1. The majority of numbers in a given base n are usable, including all from (n-1)(n-2)00 up.
This arrangement suggests that the naive elimination approach might be ok for finding the next or previous number, but, once again, you'd have to look more algorithmically at the patterns to answer questions like 'what does the one at (x) look like' or 'what ordinal position is (yyyy) at' without looping.
+-----+------+------+------+------+
| | - | 2 | 3 | 4 |
+-----+------+------+------+------+
| 0 | 0000 | | | |
| 1 | | 0001 | | |
| 2 | | 0010 | 0002 | |
| 3 | | 0011 | | 0003 |
| 4 | | 0100 | | |
| 5 | | 0101 | 0012 | |
| 6 | | 0110 | 0020 | |
| 7 | | 0111 | 0021 | 0013 |
| 8 | | 1000 | 0022 | |
| 9 | | 1001 | | |
| 10 | | 1010 | | |
| 11 | | 1011 | 0102 | 0023 |
| 12 | | 1100 | | 0030 |
| 13 | | 1101 | | 0031 |
| 14 | | 1110 | 0112 | 0032 |
| 15 | | 1111 | 0120 | 0033 |
| 16 | | | 0121 | |
| 17 | | | 0122 | |
| 18 | | | 0200 | |
| 19 | | | 0201 | 0103 |
| 20 | | | 0202 | |
| 21 | | | 0210 | |
| 22 | | | 0211 | |
| 23 | | | 0212 | 0113 |
| 24 | | | 0220 | |
| 25 | | | 0221 | |
| 26 | | | 0222 | |
| 27 | | | | 0123 |
| 28 | | | | 0130 |
| 29 | | | 1002 | 0131 |
| 30 | | | | 0132 |
| 31 | | | | 0133 |
| 32 | | | 1012 | |
| 33 | | | 1020 | |
| 34 | | | 1021 | |
| 35 | | | 1022 | 0203 |
| 36 | | | | |
| 37 | | | | |
| 38 | | | 1102 | |
| 39 | | | | 0213 |
| 40 | | | | |
| 41 | | | 1112 | |
| 42 | | | 1120 | |
| 43 | | | 1121 | 0223 |
| 44 | | | 1122 | 0230 |
| 45 | | | 1200 | 0231 |
| 46 | | | 1201 | 0232 |
| 47 | | | 1202 | 0233 |
| 48 | | | 1210 | 0300 |
| 49 | | | 1211 | 0301 |
| 50 | | | 1212 | 0302 |
| 51 | | | 1220 | 0303 |
| 52 | | | 1221 | 0310 |
| 53 | | | 1222 | 0311 |
| 54 | | | 2000 | 0312 |
| 55 | | | 2001 | 0313 |
| 56 | | | 2002 | 0320 |
| 57 | | | 2010 | 0321 |
| 58 | | | 2011 | 0322 |
| 59 | | | 2012 | 0323 |
| 60 | | | 2020 | 0330 |
| 61 | | | 2021 | 0331 |
| 62 | | | 2022 | 0332 |
| 63 | | | 2100 | 0333 |
| 64 | | | 2101 | |
| 65 | | | 2102 | |
| 66 | | | 2110 | |
| 67 | | | 2111 | 1003 |
| 68 | | | 2112 | |
| 69 | | | 2120 | |
| 70 | | | 2121 | |
| 71 | | | 2122 | 1013 |
| 72 | | | 2200 | |
| 73 | | | 2201 | |
| 74 | | | 2202 | |
| 75 | | | 2210 | 1023 |
| 76 | | | 2211 | 1030 |
| 77 | | | 2212 | 1031 |
| 78 | | | 2220 | 1032 |
| 79 | | | 2221 | 1033 |
| 80 | | | 2222 | |
| 81 | | | | |
| 82 | | | | |
| 83 | | | | 1103 |
| 84 | | | | |
| 85 | | | | |
| 86 | | | | |
| 87 | | | | 1113 |
| 88 | | | | |
| 89 | | | | |
| 90 | | | | |
| 91 | | | | 1123 |
| 92 | | | | 1130 |
| 93 | | | | 1131 |
| 94 | | | | 1132 |
| 95 | | | | 1133 |
| 96 | | | | |
| 97 | | | | |
| 98 | | | | |
| 99 | | | | 1203 |
| 100 | | | | |
| 101 | | | | |
| 102 | | | | |
| 103 | | | | 1213 |
| 104 | | | | |
| 105 | | | | |
| 106 | | | | |
| 107 | | | | 1223 |
| 108 | | | | 1230 |
| 109 | | | | 1231 |
| 110 | | | | 1232 |
| 111 | | | | 1233 |
| 112 | | | | 1300 |
| 113 | | | | 1301 |
| 114 | | | | 1302 |
| 115 | | | | 1303 |
| 116 | | | | 1310 |
| 117 | | | | 1311 |
| 118 | | | | 1312 |
| 119 | | | | 1313 |
| 120 | | | | 1320 |
| 121 | | | | 1321 |
| 122 | | | | 1322 |
| 123 | | | | 1323 |
| 124 | | | | 1330 |
| 125 | | | | 1331 |
| 126 | | | | 1332 |
| 127 | | | | 1333 |
| 128 | | | | |
| 129 | | | | |
| 130 | | | | |
| 131 | | | | 2003 |
| 132 | | | | |
| 133 | | | | |
| 134 | | | | |
| 135 | | | | 2013 |
| 136 | | | | |
| 137 | | | | |
| 138 | | | | |
| 139 | | | | 2023 |
| 140 | | | | 2030 |
| 141 | | | | 2031 |
| 142 | | | | 2032 |
| 143 | | | | 2033 |
| 144 | | | | |
| 145 | | | | |
| 146 | | | | |
| 147 | | | | 2103 |
| 148 | | | | |
| 149 | | | | |
| 150 | | | | |
| 151 | | | | 2113 |
| 152 | | | | |
| 153 | | | | |
| 154 | | | | |
| 155 | | | | 2123 |
| 156 | | | | 2130 |
| 157 | | | | 2131 |
| 158 | | | | 2132 |
| 159 | | | | 2133 |
| 160 | | | | |
| 161 | | | | |
| 162 | | | | |
| 163 | | | | 2203 |
| 164 | | | | |
| 165 | | | | |
| 166 | | | | |
| 167 | | | | 2213 |
| 168 | | | | |
| 169 | | | | |
| 170 | | | | |
| 171 | | | | 2223 |
| 172 | | | | 2230 |
| 173 | | | | 2231 |
| 174 | | | | 2232 |
| 175 | | | | 2233 |
| 176 | | | | 2300 |
| 177 | | | | 2301 |
| 178 | | | | 2302 |
| 179 | | | | 2303 |
| 180 | | | | 2310 |
| 181 | | | | 2311 |
| 182 | | | | 2312 |
| 183 | | | | 2313 |
| 184 | | | | 2320 |
| 185 | | | | 2321 |
| 186 | | | | 2322 |
| 187 | | | | 2323 |
| 188 | | | | 2330 |
| 189 | | | | 2331 |
| 190 | | | | 2332 |
| 191 | | | | 2333 |
| 192 | | | | 3000 |
| 193 | | | | 3001 |
| 194 | | | | 3002 |
| 195 | | | | 3003 |
| 196 | | | | 3010 |
| 197 | | | | 3011 |
| 198 | | | | 3012 |
| 199 | | | | 3013 |
| 200 | | | | 3020 |
| 201 | | | | 3021 |
| 202 | | | | 3022 |
| 203 | | | | 3023 |
| 204 | | | | 3030 |
| 205 | | | | 3031 |
| 206 | | | | 3032 |
| 207 | | | | 3033 |
| 208 | | | | 3100 |
| 209 | | | | 3101 |
| 210 | | | | 3102 |
| 211 | | | | 3103 |
| 212 | | | | 3110 |
| 213 | | | | 3111 |
| 214 | | | | 3112 |
| 215 | | | | 3113 |
| 216 | | | | 3120 |
| 217 | | | | 3121 |
| 218 | | | | 3122 |
| 219 | | | | 3123 |
| 220 | | | | 3130 |
| 221 | | | | 3131 |
| 222 | | | | 3132 |
| 223 | | | | 3133 |
| 224 | | | | 3200 |
| 225 | | | | 3201 |
| 226 | | | | 3202 |
| 227 | | | | 3203 |
| 228 | | | | 3210 |
| 229 | | | | 3211 |
| 230 | | | | 3212 |
| 231 | | | | 3213 |
| 232 | | | | 3220 |
| 233 | | | | 3221 |
| 234 | | | | 3222 |
| 235 | | | | 3223 |
| 236 | | | | 3230 |
| 237 | | | | 3231 |
| 238 | | | | 3232 |
| 239 | | | | 3233 |
| 240 | | | | 3300 |
| 241 | | | | 3301 |
| 242 | | | | 3302 |
| 243 | | | | 3303 |
| 244 | | | | 3310 |
| 245 | | | | 3311 |
| 246 | | | | 3312 |
| 247 | | | | 3313 |
| 248 | | | | 3320 |
| 249 | | | | 3321 |
| 250 | | | | 3322 |
| 251 | | | | 3323 |
| 252 | | | | 3330 |
| 253 | | | | 3331 |
| 254 | | | | 3332 |
| 255 | | | | 3333 |
+-----+------+------+------+------+
Edit: the patterns are even clearer when you go wider. Here are all the 2-digit ones up to base 11 (because... why not?)
+-----+----+----+----+----+----+----+----+----+----+----+----+
| | - | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
+-----+----+----+----+----+----+----+----+----+----+----+----+
| 0 | 00 | | | | | | | | | | |
| 1 | | 01 | | | | | | | | | |
| 2 | | 10 | 02 | | | | | | | | |
| 3 | | 11 | | 03 | | | | | | | |
| 4 | | | | | 04 | | | | | | |
| 5 | | | 12 | | | 05 | | | | | |
| 6 | | | 20 | | | | 06 | | | | |
| 7 | | | 21 | 13 | | | | 07 | | | |
| 8 | | | 22 | | | | | | 08 | | |
| 9 | | | | | 14 | | | | | 09 | |
| 10 | | | | | | | | | | | 0A |
| 11 | | | | 23 | | 15 | | | | | |
| 12 | | | | 30 | | | | | | | |
| 13 | | | | 31 | | | 16 | | | | |
| 14 | | | | 32 | 24 | | | | | | |
| 15 | | | | 33 | | | | 17 | | | |
| 16 | | | | | | | | | | | |
| 17 | | | | | | 25 | | | 18 | | |
| 18 | | | | | | | | | | | |
| 19 | | | | | 34 | | | | | 19 | |
| 20 | | | | | 40 | | 26 | | | | |
| 21 | | | | | 41 | | | | | | 1A |
| 22 | | | | | 42 | | | | | | |
| 23 | | | | | 43 | 35 | | 27 | | | |
| 24 | | | | | 44 | | | | | | |
| 25 | | | | | | | | | | | |
| 26 | | | | | | | | | 28 | | |
| 27 | | | | | | | 36 | | | | |
| 28 | | | | | | | | | | | |
| 29 | | | | | | 45 | | | | 29 | |
| 30 | | | | | | 50 | | | | | |
| 31 | | | | | | 51 | | 37 | | | |
| 32 | | | | | | 52 | | | | | 2A |
| 33 | | | | | | 53 | | | | | |
| 34 | | | | | | 54 | 46 | | | | |
| 35 | | | | | | 55 | | | 38 | | |
| 36 | | | | | | | | | | | |
| 37 | | | | | | | | | | | |
| 38 | | | | | | | | | | | |
| 39 | | | | | | | | 47 | | 39 | |
| 40 | | | | | | | | | | | |
| 41 | | | | | | | 56 | | | | |
| 42 | | | | | | | 60 | | | | |
| 43 | | | | | | | 61 | | | | 3A |
| 44 | | | | | | | 62 | | 48 | | |
| 45 | | | | | | | 63 | | | | |
| 46 | | | | | | | 64 | | | | |
| 47 | | | | | | | 65 | 57 | | | |
| 48 | | | | | | | 66 | | | | |
| 49 | | | | | | | | | | 49 | |
| 50 | | | | | | | | | | | |
| 51 | | | | | | | | | | | |
| 52 | | | | | | | | | | | |
| 53 | | | | | | | | | 58 | | |
| 54 | | | | | | | | | | | 4A |
| 55 | | | | | | | | 67 | | | |
| 56 | | | | | | | | 70 | | | |
| 57 | | | | | | | | 71 | | | |
| 58 | | | | | | | | 72 | | | |
| 59 | | | | | | | | 73 | | 59 | |
| 60 | | | | | | | | 74 | | | |
| 61 | | | | | | | | 75 | | | |
| 62 | | | | | | | | 76 | 68 | | |
| 63 | | | | | | | | 77 | | | |
| 64 | | | | | | | | | | | |
| 65 | | | | | | | | | | | 5A |
| 66 | | | | | | | | | | | |
| 67 | | | | | | | | | | | |
| 68 | | | | | | | | | | | |
| 69 | | | | | | | | | | 69 | |
| 70 | | | | | | | | | | | |
| 71 | | | | | | | | | 78 | | |
| 72 | | | | | | | | | 80 | | |
| 73 | | | | | | | | | 81 | | |
| 74 | | | | | | | | | 82 | | |
| 75 | | | | | | | | | 83 | | |
| 76 | | | | | | | | | 84 | | 6A |
| 77 | | | | | | | | | 85 | | |
| 78 | | | | | | | | | 86 | | |
| 79 | | | | | | | | | 87 | 79 | |
| 80 | | | | | | | | | 88 | | |
| 81 | | | | | | | | | | | |
| 82 | | | | | | | | | | | |
| 83 | | | | | | | | | | | |
| 84 | | | | | | | | | | | |
| 85 | | | | | | | | | | | |
| 86 | | | | | | | | | | | |
| 87 | | | | | | | | | | | 7A |
| 88 | | | | | | | | | | | |
| 89 | | | | | | | | | | 89 | |
| 90 | | | | | | | | | | 90 | |
| 91 | | | | | | | | | | 91 | |
| 92 | | | | | | | | | | 92 | |
| 93 | | | | | | | | | | 93 | |
| 94 | | | | | | | | | | 94 | |
| 95 | | | | | | | | | | 95 | |
| 96 | | | | | | | | | | 96 | |
| 97 | | | | | | | | | | 97 | |
| 98 | | | | | | | | | | 98 | 8A |
| 99 | | | | | | | | | | 99 | |
| 100 | | | | | | | | | | | |
| 101 | | | | | | | | | | | |
| 102 | | | | | | | | | | | |
| 103 | | | | | | | | | | | |
| 104 | | | | | | | | | | | |
| 105 | | | | | | | | | | | |
| 106 | | | | | | | | | | | |
| 107 | | | | | | | | | | | |
| 108 | | | | | | | | | | | |
| 109 | | | | | | | | | | | 9A |
| 110 | | | | | | | | | | | A0 |
| 111 | | | | | | | | | | | A1 |
| 112 | | | | | | | | | | | A2 |
| 113 | | | | | | | | | | | A3 |
| 114 | | | | | | | | | | | A4 |
| 115 | | | | | | | | | | | A5 |
| 116 | | | | | | | | | | | A6 |
| 117 | | | | | | | | | | | A7 |
| 118 | | | | | | | | | | | A8 |
| 119 | | | | | | | | | | | A9 |
| 120 | | | | | | | | | | | AA |
+-----+----+----+----+----+----+----+----+----+----+----+----+
