I have data that I created as a simple example:
Col1 Col2 Col3 Col4
2014/1/1 A Y 10
2014/4/1 A Y 15
2015/1/1 A Z 15
2015/4/1 A Z 30
2014/1/1 B Y 20
2014/4/1 B Y 30
2015/1/1 B Z 40
2015/4/1 B Z 80
I want to create a measure in Power BI so I can build an interactive visualization. The data above is only an example, so assume that Col2 and Col3 can each have many levels.
The measure I want is relativity: the value in Col4 divided by the first Col4 value, where "first" means the value at the earliest Col1 within each Col2 group.
This is the result I expect, but I don't need these columns stored in the data table, because when I build the visualization I will add filters on other columns (Col5, Col6, etc., not shown in this example):
Col1 Col2 Col3 Col4 relativity_Col3ALL relativity_Col3EqualsYorZ
2014/1/1 A Y 10 1 1
2014/4/1 A Y 15 1.5 1.5
2015/1/1 A Z 15 1.5 1
2015/4/1 A Z 30 3 2
2014/1/1 B Y 20 1 1
2014/4/1 B Y 30 1.5 1.5
2015/1/1 B Z 40 2 1
2015/4/1 B Z 80 4 2
So I plot it and add filters beside the plot. When I select Y in the Col3 filter, the plot should update automatically.
For reference, here is how I would express it in R:
dt <- data.table::as.data.table(dt)
dt[, relativity := Col4 / Col4[1], by = .(Col1, Col2)]
But the code above is not quite right because it does not take Col3 into account; I only include it to illustrate the Col4 / Col4[1] (or Col4 / first(Col4)) idea.
I tried this measure in Power BI:
relativity = CALCULATE(DIVIDE(dt[Col4], dt[AnotherMeasure]), MIN(dt[Col1]))
I know this is wrong.
Can anyone help?
UPDATE
I tried @Alexis Olson's code and modified it as follows:
relativity =
VAR YR = MIN(dt[Col1].[Year])
VAR QT = MIN(dt[Col1].[Quarter])
VAR PF = CALCULATE(TOTALQTD(SUM(dt[Col4]), dt[Col1].[Date]), dt[Col1].[Year] = YR, dt[Col1].[Quarter] = QT)
RETURN
DIVIDE(SUM(dt[Col4]), PF)
However, when I visualize it in the report, every row shows 1.
I also tried this:
relativity =
VAR YR = CALCULATE(MIN(dt[Col1].[Year]), ALLEXCEPT(dt, dt[Col2]))
VAR QT = CALCULATE(MIN(dt[Col1].[Quarter]), ALLEXCEPT(dt, dt[Col2]))
VAR PFQTD = TOTALQTD(SUM(dt[Col4]), dt[Col1].[Date])
VAR MPFQTD = CALCULATE(MAX(PFQTD), FILTER(dt, dt[Col1].[Year] = YR), FILTER(dt, dt[Col1].[Quarter] = QT))
RETURN
MPFQTD
That failed as well.
Using the logic from this Q&A, you can create a calculated column as follows:
relativity =
VAR FirstCol1 = CALCULATE ( MIN ( dt[Col1] ), ALLEXCEPT ( dt, dt[Col2], dt[Col3] ) )
VAR FirstCol4 = CALCULATE ( VALUES ( dt[Col4] ), dt[Col1] = FirstCol1 )
RETURN
DIVIDE ( dt[Col4], FirstCol4 )
This looks up the first date among the rows where Col2 and Col3 match the current row, then finds the Col4 value on that first date, and finally divides the current Col4 value by that first Col4 value.
ALLEXCEPT removes the filters coming from the row context for every column except the ones you specify. If you want relativity_Col3ALL, simply remove dt[Col3] from the ALLEXCEPT specification.
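For reference, since the question also sketches the logic in R, here is a minimal data.table sketch of what this calculated column computes, using the example data above; the "all Col3" version simply drops Col3 from the grouping, mirroring the ALLEXCEPT remark:
library(data.table)

dt <- data.table(
  Col1 = as.Date(c("2014-01-01", "2014-04-01", "2015-01-01", "2015-04-01",
                   "2014-01-01", "2014-04-01", "2015-01-01", "2015-04-01")),
  Col2 = rep(c("A", "B"), each = 4),
  Col3 = rep(c("Y", "Y", "Z", "Z"), times = 2),
  Col4 = c(10, 15, 15, 30, 20, 30, 40, 80)
)

# divide Col4 by the Col4 value at the earliest Col1 within each group
dt[, relativity_Col3EqualsYorZ := Col4 / Col4[which.min(Col1)], by = .(Col2, Col3)]
dt[, relativity_Col3ALL        := Col4 / Col4[which.min(Col1)], by = Col2]
This reproduces the two relativity columns in the expected table shown in the question.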
library(data.table)
DATA = data.table(STUDENT = c(1,1,2,2,2,2,2,3,3,3,3,3,4),
                  SCORE = c(5,6,8,3,14,5,6,9,0,12,13,14,19))
WANT = data.table(STUDENT = c(1,1,4),
SCORE = c(5,6,19))
I have DATA and wish to create WANT, which takes a random sample of 2 STUDENTs and includes all of their rows. WANT above is just one possible outcome.
I tried this with no success:
WANT = WANT[ , .SD[sample(x = .N, size = 2)], by = STUDENT]
Sample the unique values of STUDENT and then filter all the rows for those students:
library(data.table)
set.seed(1357)
DATA[STUDENT %in% sample(unique(STUDENT), 2)]
# STUDENT SCORE
#1: 1 5
#2: 1 6
#3: 4 19
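If you want to keep the sampled ids around (for checking or reuse), an equivalent two-step sketch using a join instead of %in%:
library(data.table)
set.seed(1357)

# draw the two student ids first, then pull all of their rows
ids <- sample(unique(DATA$STUDENT), 2)
DATA[.(ids), on = "STUDENT"]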
I would like to filter a data frame based on the values present in a second data frame.
For example, keep the rows of the first data frame whose "BP" value is greater than or equal to a row's "start_pos" value and less than or equal to that row's "end_pos" value in the second data frame.
I need to repeat this procedure for all the values in the second data frame. Currently, I am performing these using a for loop. However, I would like to do it in a single command.
Data frame 1
CHR BP
29 836019
29 4417047
29 7589996
29 11052921
29 14009294
29 33174196
Data frame 2
start_pos end_pos gene_id
19774 19899 ENSBTAG00000046619
34627 35558 ENSBTAG00000006858
69695 71121 ENSBTAG00000039257
83323 84281 ENSBTAG00000035349
124849 179713 ENSBTAG00000001753
264298 264843 ENSBTAG00000005540
for (j in 1:nrow(tmp_markers)) {
  temp_out_markers <- tmp_markers[j, ]
  tmp_search <- tmp_gene[which(
    (tmp_markers[j, "BP"] >= tmp_gene[, "start_pos"] & tmp_markers[j, "BP"] <= tmp_gene[, "end_pos"]) |
    (tmp_markers[j, "BP"] + interval >= tmp_gene[, "start_pos"] & tmp_markers[j, "BP"] + interval <= tmp_gene[, "end_pos"]) |
    (tmp_markers[j, "BP"] + interval >= tmp_gene[, "start_pos"] & tmp_markers[j, "BP"] + interval <= tmp_gene[, "end_pos"]) |
    (tmp_markers[j, "BP"] + interval >= tmp_gene[, "start_pos"] & tmp_markers[j, "BP"] + interval >= tmp_gene[, "end_pos"] & tmp_markers[j, "BP"] <= tmp_gene[, "start_pos"]) |
    (tmp_markers[j, "BP"] - interval <= tmp_gene[, "end_pos"] & tmp_markers[j, "BP"] - interval >= tmp_gene[, "start_pos"]) |
    (tmp_markers[j, "BP"] - interval <= tmp_gene[, "end_pos"] & tmp_markers[j, "BP"] - interval <= tmp_gene[, "start_pos"] & tmp_markers[j, "BP"] >= tmp_gene[, "end_pos"])
  ), ]
  if (nrow(tmp_search) > 0) {
    temp_out <- cbind(temp_out_markers[rep(seq_len(nrow(tmp_search))), ], tmp_search)
    temp_out[, "Distance_from_gene_start"] <- temp_out[, "BP"] - temp_out[, "start_pos"]
    temp_out[, "Distance_from_gene_end"]   <- temp_out[, "BP"] - temp_out[, "end_pos"]
    output_genes <- rbind(temp_out, output_genes)
  }
}
At the end, I want a data frame with all the rows that are within my tested intervals.
As I stated in a comment, your mock data won't result in a match, as the smallest BP value (836019) is larger than the largest end_pos (264843).
It could also be that I misunderstood your problem altogether!
I understand that you want to match the rows in df1 to those in df2 for which BP >= start_pos and BP <= end_pos. If so, we can achieve that using the non-equi joins provided by the data.table package.
library(data.table)
result <- dt1[dt2,
.(BP, CHR, gene_id),
on = .(BP >= start_pos, BP <= end_pos),
nomatch = NULL,
by = .EACHI]
setnames(result, 1:2, names(dt2)[1:2])
result
start_pos end_pos BP CHR gene_id
1: 0.000000 2.000000 0 29 ABCD01
2: 4.571429 6.571429 6 30 ABCD03
3: 11.428571 13.428571 12 31 ABCD06
4: 16.000000 18.000000 18 32 ABCD08
5: 22.857143 24.857143 24 33 ABCD011
6: 29.714286 31.714286 30 34 ABCD014
In case you need the full 15 rows of dt2, simply omit the nomatch = NULL part.
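As a concrete sketch of that remark, the same call without nomatch = NULL (so the default NA is used) keeps every dt2 interval, filling BP and CHR with NA where no marker falls inside:
result_all <- dt1[dt2,
                  .(BP, CHR, gene_id),
                  on = .(BP >= start_pos, BP <= end_pos),
                  by = .EACHI]
setnames(result_all, 1:2, names(dt2)[1:2])
result_all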
DATA USED:
dt1 <- data.table(CHR = 29:34,
BP = seq(0, 30, length.out = 6),
key = "BP")
dt2 <- data.table(start_pos = seq(0, 32, length.out = 15),
gene_id = paste0("ABCD", rep(0, 3), 1:15))
dt2[, end_pos := start_pos + 2]
setcolorder(dt2, c(1, 3, 2))
Alternative with foverlaps
As @r2evans mentioned in a comment, data.table has another function, foverlaps, that can be useful here. It checks whether a range overlaps with one in another table, so we need a small trick to create a zero-width range in dt1:
dt1[, BP2 := BP]
We also need to have keyed data.tables:
setkey(dt1, "BP", "BP2")
setkey(dt2, "start_pos", "end_pos")
And then pass everything to the workhorse:
foverlaps(dt1, dt2)
start_pos end_pos gene_id CHR BP BP2
1: 0.000000 2.000000 ABCD01 29 0 0
2: 4.571429 6.571429 ABCD03 30 6 6
3: 11.428571 13.428571 ABCD06 31 12 12
4: 16.000000 18.000000 ABCD08 32 18 18
5: 22.857143 24.857143 ABCD011 33 24 24
6: 29.714286 31.714286 ABCD014 34 30 30
Of course we can get rid of BP2 afterwards with dt1[, BP2 := NULL].
If we want the full 15 rows of dt2, it's just a matter of inverting the order of the objects in the call:
foverlaps(dt2, dt1)
Thank you very much!
I ended up with this solution and it is working very well.
foverlaps(tmp_gene, tmp_markers, by.x = c("start_pos", "end_pos"),
          by.y = key(tmp_markers), nomatch = 0)
Cheers.
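To also get the distance columns from the original loop, here is a sketch building on that call; it assumes tmp_markers carries CHR and BP plus a zero-width helper BP2 = BP and is keyed on c("BP", "BP2"), as in the answer above:
res <- foverlaps(tmp_gene, tmp_markers,
                 by.x = c("start_pos", "end_pos"),
                 by.y = key(tmp_markers), nomatch = 0)

# distance of the marker position from the gene boundaries
res[, Distance_from_gene_start := BP - start_pos]
res[, Distance_from_gene_end   := BP - end_pos]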
Pivoting and subtotals are common auxiliary steps in spreadsheets and SQL.
Assume a data.table with the fields date, myCategory, and revenue. Assume you want to know, for each subgroup, its proportion of all revenue and its proportion of that day's revenue, along the lines of
b[,{
#First auxiliary variable of all revenue
totalRev = sum(revenue) #SUBGROUP OF ALL REV
#Second auxiliary variable of revenue by date, syntax wrong! How to do this?
{totalRev_date=sum(revenue), by=list(date)} #DIFFERENT SUBGROUP, by DATE's rev
#Within the subgroup by date and myCategory, we will use 1st&2nd auxiliary vars
.SD[,.(Revenue_prop_of_TOT=revenue/totalRev,
,Revenue_prop_of_DAY=revenue/totalRev_date) ,by=list(myCategory,date)]
},]
where we need to compute the auxiliary sums: all revenue for a specific day and all revenue over the whole history.
The end result should look like this:
date myCategory Revenue_prop_of_TOT Revenue_prop_of_DAY
2019-01-01 Cat1 0.002 0.2
...
where the auxiliary variables are only helpers and do not appear in the output.
How can you pivot and compute subtotals within R data.table?
Another option using data.table::cube:
# sum(value) over every grouping set of (date, category); id = TRUE adds a
# 'grouping' column: 0 = detail rows, 1 = category rolled up (totals per date),
# 2 = date rolled up (totals per category), 3 = grand total
cb <- cube(DT, sum(value), by=c("date","category"), id=TRUE)
cb[grouping==0L, .(date, category,
    PropByDate = V1 / cb[grouping==1L][.SD, on="date", x.V1],
    PropByCategory = V1 / cb[grouping==2L][.SD, on="category", x.V1],
    PropByTotal = V1 / cb[grouping==3L, V1]
)]
output:
date category PropByDate PropByCategory PropByTotal
1: 1 1 0.3333333 0.2500000 0.1
2: 1 2 0.6666667 0.3333333 0.2
3: 2 1 0.4285714 0.7500000 0.3
4: 2 2 0.5714286 0.6666667 0.4
data:
DT <- data.table(date=c(1, 1, 2, 2), category=c(1, 2, 1, 2), value=1:4)
# date category value
#1: 1 1 1
#2: 1 2 2
#3: 2 1 3
#4: 2 2 4
Hopefully I'm understanding correctly what you intend but please let me know in the comments if you need a different output.
b = data.table(date = rep(seq.Date(Sys.Date()-99, Sys.Date(), "days"), each=2),
               myCategory = c("a", "b"),
               revenue = rnorm(100, 200))
# global total, just create a constant
totalRev = b[, sum(revenue)]
# Total revenue at myCategory and date level / total Revenue
b[, Revenue_prop_of_TOT:=sum(revenue)/totalRev, by=.(myCategory, date)]
# you can calculate totalRev_date independently
b[, totalRev_date:=sum(revenue), by=date]
# If these are all the columns you have you don't need the sum(revenue) and by calls
b[, Revenue_prop_of_DAY:=sum(revenue)/totalRev_date, by=.(myCategory, date)]
Finally I would wrap it in a function.
revenue_total <- function(b){
totalRev = b[, sum(revenue)]
b[, Revenue_prop_of_TOT:=sum(revenue)/totalRev, by=.(myCategory, date)]
b[, totalRev_date:=sum(revenue), by=date]
b[, Revenue_prop_of_DAY:=sum(revenue)/totalRev_date, by=.(myCategory, date)]
b
}
b = revenue_total(b)
Options for pivoting and subtotals in R
cube answered here
groupingsets commented by marbel here
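For completeness, here is a small sketch of the groupingsets route mentioned above, on the same DT used in the cube answer; unlike cube, it computes only the grouping sets you list explicitly:
library(data.table)
DT <- data.table(date = c(1, 1, 2, 2), category = c(1, 2, 1, 2), value = 1:4)

# detail rows plus the three subtotal slices used in the cube answer
gs <- groupingsets(DT, j = sum(value),
                   by = c("date", "category"),
                   sets = list(c("date", "category"), "date", "category", character()),
                   id = TRUE)
gs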
I have data consisting of tree growth measurements (diameter and height) for trees at known X & Y coordinates. I'd like to determine the distance to each tree's nearest neighbor of equal or greater size.
I've seen other SE questions asking about nearest neighbor calculations (e.g., see here, here, here, here, etc.), but none specify constraints on the nearest neighbor to be searched.
Is there a function (or other work around) that would allow me to determine the distance of a point's nearest neighbor given that nearest point meets some criteria (e.g., must be equal to or greater in size than the point of interest)?
[An even more complex set of constraints would be even more helpful...]
For my example: specifying that the neighbor must also be in the same plot as the tree of interest, or be the same species as the tree of interest.
I'd do it with non-equi joins and data.table.
EDIT: (fyi, this requires data.table 1.9.7, which you can get from github)
EDIT2: I did it with a copy of the data.table, since it seemed like it was joining on its own thresholds. I'll fix that in the future, but this works for now.
library(data.table)
dtree <- data.table(id = 1:1000,
x = runif(1000),
y = runif(1000),
height = rnorm(1000,mean = 100,sd = 10),
species = sample(LETTERS[1:3],1000,replace = TRUE),
plot = sample(1:3,1000, replace = TRUE))
dtree_self <- copy(dtree)
dtree_self[,thresh1 := height + 10]
dtree_self[,thresh2 := height - 10]
# Join on a range, must be a cartesian join, since there are many candidates
test <- dtree[dtree_self, on = .(height >= thresh2,
height <= thresh1),
allow.cartesian = TRUE]
# Calculate the squared distance (sufficient for finding the nearest neighbour)
test[, dist := (x - i.x)**2 + (y - i.y)**2]
# Exclude identical matches and
# Take the minimum distance grouped by id
final <- test[id != i.id, .SD[which.min(dist)],by = id]
The final dataset contains each tree paired with its nearest neighbour within the given thresholds.
EDIT:
With Additional variables:
If you want to join on additional parameters, this approach allows it. (It's probably even faster if you additionally join on things like plot or species, since the cartesian join will be smaller.)
Here's an example joining on two additional categorical variables, species and plot:
library(data.table)
dtree <- data.table(id = 1:1000,
x = runif(1000),
y = runif(1000),
height = rnorm(1000,mean = 100,sd = 10),
species = sample(LETTERS[1:3],1000,replace = TRUE),
plot = sample(1:3,1000, replace = TRUE))
dtree_self <- copy(dtree)
dtree_self[,thresh1 := height + 10]
dtree_self[,thresh2 := height - 10]
# Join on a range, must be a cartesian join, since there are many candidates
test <- dtree[dtree_self, on = .(height >= thresh2,
height <= thresh1,
species == species,
plot == plot),
nomatch = NA,
allow.cartesian = TRUE]
# Calculate the squared distance (sufficient for finding the nearest neighbour)
test[, dist := (x - i.x)**2 + (y - i.y)**2]
# Exclude identical matches and
# Take the minimum distance grouped by id
final <- test[id != i.id, .SD[which.min(dist)],by = id]
final
> final
id x y height species plot height.1 i.id i.x i.y i.height dist
1: 3 0.4837348 0.4325731 91.53387 C 2 111.53387 486 0.5549221 0.4395687 101.53387 0.005116568
2: 13 0.8267298 0.3137061 94.58949 C 2 114.58949 754 0.8408547 0.2305702 104.58949 0.007111079
3: 29 0.2905729 0.4952757 89.52128 C 2 109.52128 333 0.2536760 0.5707272 99.52128 0.007054301
4: 37 0.4534841 0.5249862 89.95493 C 2 109.95493 72 0.4807242 0.6056771 99.95493 0.007253044
5: 63 0.1678515 0.8814829 84.77450 C 2 104.77450 289 0.1151764 0.9728488 94.77450 0.011122404
---
994: 137 0.8696393 0.2226888 66.57792 C 2 86.57792 473 0.4467795 0.6881008 76.57792 0.395418724
995: 348 0.3606249 0.1245749 110.14466 A 2 130.14466 338 0.1394011 0.1200064 120.14466 0.048960849
996: 572 0.6562758 0.1387882 113.61821 A 2 133.61821 348 0.3606249 0.1245749 123.61821 0.087611511
997: 143 0.9170504 0.1171652 71.39953 C 3 91.39953 904 0.6954973 0.3690599 81.39953 0.112536771
998: 172 0.6834473 0.6221259 65.52187 A 2 85.52187 783 0.4400028 0.9526355 75.52187 0.168501816
NOTE: in the final answer there are columns height and height.1; these come from data.table's non-equi join and hold the lower and upper join boundaries (thresh2 and thresh1), respectively.
A memory-efficient solution
One of the issues here for @theforestecologist was that this approach requires a lot of memory
(in that case, there were an additional 42 columns being multiplied out by the cartesian join, which caused memory issues).
However, we can do this in a more memory-efficient way by using .EACHI (I believe), since that way we do not materialize the full cartesian join in memory. That solution follows:
library(data.table)
dtree <- data.table(id = 1:1000,
x = runif(1000),
y = runif(1000),
height = rnorm(1000,mean = 100,sd = 10),
species = sample(LETTERS[1:3],1000,replace = TRUE),
plot = sample(1:3,1000, replace = TRUE))
dtree_self <- copy(dtree)
dtree_self[,thresh1 := height + 10]
dtree_self[,thresh2 := height - 10]
# In order to navigate the sometimes unusual nature of scoping inside a
# data.table join, I set the second table to have its own uniquely named id
dtree_self[,id2 := id]
dtree_self[,id := NULL]
# for clarity inside the brackets,
# I define the squared Euclidean distance
eucdist <- function(x,xx,y,yy) (x - xx)**2 + (y - yy)**2
# Join on a range, must be a cartesian join, since there are many candidates
# Return a table of matches, using .EACHI to keep from loading too much into mem
test <- dtree[dtree_self, on = .(height >= thresh2,
height <= thresh1,
species,
plot),
.(id2, id[{z = eucdist(x,i.x,y,i.y); mz <- min(z[id2 != id]); mz == z}]),
by = .EACHI,
nomatch = NA,
allow.cartesian = TRUE]
# join the metadata back onto each id
test <- dtree[test, on = .(id = V2), nomatch = NA]
test <- dtree[test, on = .(id = id2), nomatch = NA]
> test
id x y height species plot i.id i.x i.y i.height i.species i.plot i.height.2 i.height.1 i.species.1 i.plot.1
1: 1 0.17622235 0.66547312 84.68450 B 2 965 0.17410840 0.63219350 93.60226 B 2 74.68450 94.68450 B 2
2: 2 0.04523011 0.33813054 89.46288 B 2 457 0.07267547 0.35725229 88.42827 B 2 79.46288 99.46288 B 2
3: 3 0.24096368 0.32649256 103.85870 C 3 202 0.20782303 0.38422814 94.35898 C 3 93.85870 113.85870 C 3
4: 4 0.53160655 0.06636979 101.50614 B 1 248 0.47382417 0.01535036 103.74101 B 1 91.50614 111.50614 B 1
5: 5 0.83426727 0.55380451 101.93408 C 3 861 0.78210747 0.52812487 96.71422 C 3 91.93408 111.93408 C 3
This way we should keep total memory usage low.
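If the actual nearest-neighbour distance is also wanted (the .EACHI step above only returns the matching id), here is a small sketch recomputing it from the joined coordinates in test, taking the square root since the join worked with squared distances:
test[, nn_dist := sqrt((x - i.x)^2 + (y - i.y)^2)]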
I am creating correlations using R, with the following code:
Values<-read.csv(inputFile, header = TRUE)
O<-Values$Abundance_O
S<-Values$Abundance_S
cor(O,S)
pear_cor<-round(cor(O,S),4)
outfile<-paste(inputFile, ".jpg", sep = "")
jpeg(filename = outfile, width = 15, height = 10, units = "in", pointsize = 10, quality = 75, bg = "white", res = 300, restoreConsole = TRUE)
rx<-range(0,20000000)
ry<-range(0,200000)
plot(rx,ry, ylab="S", xlab="O", main="O vs S", type="n")
points(O,S, col="black", pch=3, lwd=1)
mtext(sprintf("%s %.4f", "pearson: ", pear_cor), adj=1, padj=0, side = 1, line = 4)
dev.off()
pear_cor
I now need to find the lower quartile for each set of data and exclude data that is within the lower quartile. I would then like to rewrite the data without those values and use the new column of data in the correlation analysis (because I want to threshold the data by the lower quartile). If there is a way I can write this so that it is easy to change the threshold by applying arguments from Java (as I have with the input file name) that's even better!
Thank you so much.
I have now implemented the answer below and it is working; however, I need to keep the pairs of data together for the correlation. Here is an example of my data (from the csv):
Abundance_O Abundance_S
3635900.752 1390.883073
463299.4622 1470.92626
359101.0482 989.1609251
284966.6421 3248.832403
415283.663 2492.231265
2076456.856 10175.48946
620286.6206 5074.268802
3709754.717 269.6856808
803321.0892 118.2935093
411553.0203 4772.499758
50626.83554 17.29893001
337428.8939 203.3536852
42046.61549 152.1321255
1372013.047 5436.783169
939106.3275 7080.770535
96618.01393 1967.834701
229045.6983 948.3087208
4419414.018 23735.19352
So I need to exclude both values in a row if either one does not meet my quartile threshold (the 0.25 quantile). For example, if the lower quartile for O were 45000, then the row "42046.61549, 152.1321255" would be removed. Is this possible? If I read both columns in as a data frame, can I test each column separately, or should I find the quartiles first and then plug those values into code that removes the appropriate rows?
Thanks again, and sorry for the evolution of the question!
Please try to provide a reproducible example, but if you have data in a data.frame, you can subset it using the quantile function as the logical test. For instance, in the following data we want to select only rows from the dataframe where the value of the measured variable 'Val' is above the bottom quartile:
# set.seed so you can reproduce these values exactly on your system
set.seed(39856)
df <- data.frame( ID = 1:10 , Val = runif(10) )
df
ID Val
1 1 0.76487516
2 2 0.59755578
3 3 0.94584374
4 4 0.72179297
5 5 0.04513418
6 6 0.95772248
7 7 0.14566118
8 8 0.84898704
9 9 0.07246594
10 10 0.14136138
# Now to select only rows where the value of our measured variable 'Val' is above the bottom 25% quartile
df[ df$Val > quantile(df$Val , 0.25 ) , ]
ID Val
1 1 0.7648752
2 2 0.5975558
3 3 0.9458437
4 4 0.7217930
6 6 0.9577225
7 7 0.1456612
8 8 0.8489870
# And check the value of the bottom 25% quantile...
quantile(df$Val , 0.25 )
25%
0.1424363
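Following up on the updated question (keeping the pairs together) and the wish to pass the threshold in from outside, here is a sketch; the second command-line argument is hypothetical and defaults to the lower quartile if not supplied:
args <- commandArgs(trailingOnly = TRUE)
inputFile <- args[1]
# hypothetical second argument for the quantile threshold
thr <- if (length(args) >= 2) as.numeric(args[2]) else 0.25

Values <- read.csv(inputFile, header = TRUE)

# keep a row only if BOTH columns exceed their own lower-quantile cutoff
keep <- Values$Abundance_O > quantile(Values$Abundance_O, thr) &
        Values$Abundance_S > quantile(Values$Abundance_S, thr)
Values_kept <- Values[keep, ]

cor(Values_kept$Abundance_O, Values_kept$Abundance_S)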
Although this is an old question, I came across it during research of my own and I arrived at a solution that someone may be interested in.
I first defined a function that converts a numerical vector into its quantile groups. The parameter n determines the number of quantile groups (n = 4 for quartiles, n = 10 for deciles).
qgroup = function(numvec, n = 4){
  # n + 1 cut points: the 0, 1/n, ..., 1 quantiles of the vector
  qtile = quantile(numvec, probs = seq(0, 1, 1/n))
  # for each value, count how many of the lower n cut points it reaches,
  # giving its group number from 1 to n
  out = sapply(numvec, function(x) sum(x >= qtile[-(n+1)]))
  return(out)
}
Function example:
v = rep(1:20)
> qgroup(v)
[1] 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4
Consider now the following data:
dt = data.table(
A0 = runif(100),
A1 = runif(100)
)
We apply qgroup() across the data to obtain two quartile group columns:
cols = colnames(dt)
qcols = c('Q0', 'Q1')
dt[, (qcols) := lapply(.SD, qgroup), .SDcols = cols]
head(dt)
> A0 A1 Q0 Q1
1: 0.72121846 0.1908863 3 1
2: 0.70373594 0.4389152 3 2
3: 0.04604934 0.5301261 1 3
4: 0.10476643 0.1108709 1 1
5: 0.76907762 0.4913463 4 2
6: 0.38265848 0.9291649 2 4
Lastly, we keep only the rows for which both quartile groups are above the first quartile. Note that filtering on the sum (Q0 + Q1 > 2) would also keep rows where one column sits in the bottom quartile while the other is high, so test each column explicitly:
dt = dt[Q0 > 1 & Q1 > 1]