Heatmap of PWM with image() function - r

I want to use the image() function to plot a heatmap with two colours, black and white, but I don't know how.
Could anyone help me?

We can use the fields package to plot the image; see the example below.
First, let's get example PWM data from the seqLogo package and convert it to a matrix object:
# source("https://bioconductor.org/biocLite.R")
# biocLite("seqLogo")
library(seqLogo) # bioconductor
# get example matrix
mFile <- system.file("Exfiles/pwm1", package="seqLogo")
m <- read.table(mFile)
plotDat <- as.matrix(m)
plotDat
# V1 V2 V3 V4 V5 V6 V7 V8
# [1,] 0.0 0.0 0.0 0.3 0.2 0.0 0.0 0.0
# [2,] 0.8 0.2 0.8 0.3 0.4 0.2 0.8 0.2
# [3,] 0.2 0.8 0.2 0.4 0.3 0.8 0.2 0.8
# [4,] 0.0 0.0 0.0 0.0 0.1 0.0 0.0 0.0
Now we can plot with fields::image.plot, specifying the two colours. Note that we could simply use graphics::image(plotDat), but fields::image.plot has more plot customisation options:
library(fields)
image.plot(plotDat, col = c("black", "white"))
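For comparison, the plain graphics::image() call mentioned above would be the following; it draws the same two-colour heatmap, just without a colour legend:
graphics::image(plotDat, col = c("black", "white"))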

Related

Show what the calculated bin breaks are in a histogram

It is my understanding that when plotting a histogram, it's not that every unique data point gets its own bin; there is an algorithm that calculates how many bins to use. How do I find out how the data were partitioned to create those bins, e.g. 0-5, 6-10, ...? How do I get R to show me where the breaks are via text output?
I've found various methods to calculate the number of bins, but that's just theory.
I think you need to use $breaks:
set.seed(10)
hist(rnorm(200,0,1),20)$breaks
[1] -2.4 -2.2 -2.0 -1.8 -1.6 -1.4 -1.2 -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 2.2 2.4
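The same information is available without drawing the plot at all: store the histogram object and inspect its components (a minimal sketch).
h <- hist(rnorm(200, 0, 1), breaks = 20, plot = FALSE)
h$breaks  # bin boundaries
h$counts  # number of observations in each bin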

Use of different priors for groups of independent variables in a Bayesian probit regression model (rstanarm)

I am using Bayesian probit regression from the rstanarm package to train a model on default events. As inputs the model accepts some financial ratios and some qualitative data. Is there a way I can regularise the coefficients, for the qualitative data only, to always be positive?
For example, when I use a single prior for everything I get these results (I calibrate the model using MCMC, with set.seed(12345)):
prior <- rstanarm::normal(location = 0, scale = NULL, autoscale = TRUE)
model.formula <-
  formula(paste0('default_events ~ fin_ratio_1 + ',
                 'fin_ratio_2 + fin_ratio_3 + ',
                 'fin_ratio_4 + fin_ratio_5 + ',
                 'fin_ratio_6 + fin_ratio_7 + ',
                 'fin_ratio_8 + Qual_1 + Qual_2 + ',
                 'Qual_3 + Qual_4'))
bayesian.model <- rstanarm::stan_glm(model.formula,
                                     family = binomial(link = "probit"),
                                     data = as.data.frame(ds), prior = prior,
                                     prior_intercept = NULL,
                                     init_r = .1, iter = 600, warmup = 200)
The coefficients are the following:
summary(bayesian.model)
Estimates:
mean sd 2.5% 25% 50% 75% 97.5%
(Intercept) -2.0 0.4 -2.7 -2.3 -2.0 -1.7 -1.3
fin_ratio_1 -0.7 0.1 -0.9 -0.8 -0.7 -0.6 -0.4
fin_ratio_2 -0.3 0.1 -0.5 -0.4 -0.3 -0.2 -0.1
fin_ratio_3 0.4 0.1 0.2 0.4 0.4 0.5 0.6
fin_ratio_4 0.3 0.1 0.1 0.2 0.3 0.3 0.4
fin_ratio_5 0.2 0.1 0.1 0.2 0.2 0.3 0.4
fin_ratio_6 -0.2 0.1 -0.4 -0.2 -0.2 -0.1 0.0
fin_ratio_7 -0.3 0.1 -0.5 -0.3 -0.3 -0.2 -0.1
fin_ratio_8 -0.2 0.1 -0.5 -0.3 -0.2 -0.1 0.0
Qual_1 -0.2 0.1 -0.3 -0.2 -0.2 -0.1 -0.1
Qual_2 0.0 0.1 -0.1 -0.1 0.0 0.0 0.1
Qual_3 0.2 0.0 0.1 0.1 0.2 0.2 0.3
Qual_4 0.0 0.2 -0.3 -0.1 0.0 0.1 0.3
The question is: can I use two different distributions, e.g. normal priors for the fin_ratio_x variables and exponential or Dirichlet priors for the Qual_x variables?
Neither different prior families per coefficient nor inequality restrictions on coefficients are possible with the models supplied by the rstanarm package. Either, or both, is fairly easy to do with the brms package or by writing your own Stan program.
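For illustration only, here is a minimal, untested sketch of per-coefficient priors with brms, reusing the ds data and model.formula from the question; the prior choices are placeholders, and positivity constraints would additionally need bounds (e.g. a lower bound on the coefficient class) or a hand-written Stan program:
library(brms)
# Illustrative priors: one family for the coefficients in general,
# and a different prior set individually on each Qual_x coefficient.
priors <- c(
  set_prior("normal(0, 2.5)", class = "b"),
  set_prior("normal(0, 1)", class = "b", coef = "Qual_1"),
  set_prior("normal(0, 1)", class = "b", coef = "Qual_2"),
  set_prior("normal(0, 1)", class = "b", coef = "Qual_3"),
  set_prior("normal(0, 1)", class = "b", coef = "Qual_4")
)
fit <- brm(model.formula,
           family = bernoulli(link = "probit"),
           data = as.data.frame(ds),
           prior = priors,
           iter = 600, warmup = 200, seed = 12345)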

How to meta-analyze p-values of different observations

I am trying to meta-analyze p-values from different studies. I have a data frame DF1:
DF1
p-value1 p-value2 p-value3 m
0.1 0.2 0.3 a
0.2 0.3 0.4 b
0.3 0.4 0.5 c
0.4 0.4 0.5 a
0.6 0.7 0.9 b
0.6 0.7 0.3 c
I am trying to get a fourth column of meta-analyzed p-values combining p-value1 to p-value3.
I tried to use the metap package:
p <- rbind(DF1$p-value1, DF1$p-value2, DF1$p-value3)
pv <- split(p, p$m)
library(metap)
for (i in 1:length(pv))
{pvalue <- sumlog(pv[[i]]$pvalue)}
But it results in one p value. Thank you for any help.
You can try applying sumlog row-wise over the three p-value columns and extracting its $p component, which holds the combined p-value:
apply(DF1[, 1:3], 1, function(x) sumlog(x)$p)
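As a self-contained sketch (with column names rewritten without hyphens, since DF1$p-value1 is not valid R syntax), this adds the fourth column of Fisher-combined p-values:
library(metap)
# Reconstruct the example data with syntactic column names.
DF1 <- data.frame(pvalue1 = c(0.1, 0.2, 0.3, 0.4, 0.6, 0.6),
                  pvalue2 = c(0.2, 0.3, 0.4, 0.4, 0.7, 0.7),
                  pvalue3 = c(0.3, 0.4, 0.5, 0.5, 0.9, 0.3),
                  m       = c("a", "b", "c", "a", "b", "c"))
# Combine the three p-values row by row (Fisher's method / sumlog).
DF1$meta_p <- apply(DF1[, 1:3], 1, function(p) sumlog(p)$p)
DF1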

What are the closeness and shortest.paths functions in the igraph package calculating?

I found a weird result in some data I am working on and decided to test the closeness and shortest.paths functions with the following matrix.
test<-c(0,0.3,0.7,0.9,0.3,0,0,0,0.7,0,0,0.5,0.9,0,0.5,0)
test<-matrix(test,nrow=4)
colnames(test)<-c("A","B","C,","D")
rownames(test)<-c("A","B","C,","D")
test
A B C D
A 0.0 0.3 0.7 0.9
B 0.3 0.0 0.0 0.0
C 0.7 0.0 0.0 0.5
D 0.9 0.0 0.5 0.0
grafo=graph.adjacency(abs(test),mode="undirected",weighted=TRUE,diag=FALSE)
When I measure closeness() I get this:
> closeness(grafo)
A B C D
0.5263158 0.4000000 0.4545455 0.3846154
This is merely the inverse of the sum of the weights, and NOT of the distances (1 - weights):
> 1/(0.7+(0.7+0.3)+0.5)
[1] 0.4545455
When I define distance as 1-weight, I get this
> 1/((1-0.7)+((1-0.7)+(1-0.3))+(1-0.5))
[1] 0.5555556
The igraph manual says, in the formula, that closeness is based on the sum of distances. My question is: does the function treat the weights themselves as distances (and is this therefore a bug), or should we convert (and modify) our graphs' edge weights into distances before running this function? A sketch of that conversion is below.
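For reference, a minimal sketch of the conversion (assuming 1 - weight is the intended distance): copy the graph, replace the edge weights with distances, and recompute.
grafo_dist <- grafo
E(grafo_dist)$weight <- 1 - E(grafo_dist)$weight
closeness(grafo_dist)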
The same issue occurs with the shortest.paths function, by the way: it gives me the sum of the weights, NOT of distances.
> shortest.paths(grafo)
A B C D
A 0.0 0.3 0.7 0.9
B 0.3 0.0 1.0 1.2
C 0.7 1.0 0.0 0.5
D 0.9 1.2 0.5 0.0
Thanks.

rollapply function on specific columns of data frames within a list

I must admit to complete lunacy when trying to understand how functions within functions are defined and passed in R. The examples always presume you understand every nuance and don't provide descriptions of the process. I have yet to come across a plain-English, idiot's-guide breakdown of the process. So the first question is: do you know of one?
Now to my actual problem.
I have a list of data.frames: fileData.
I want to use the rollapply() function on specific columns in each data.frame, and then combine all the results (lists). So, starting with one of the data.frames, using the built-in mtcars data frame as an example:
Of course I need to tell rollapply() to use the function PPI() along with the associated parameters, which are the columns.
PPI <- function(a, b){
  value <- a + b
  PPI <- sum(value)
  return(PPI)
}
I tried this:
f <- function(x) PPI(x$mpg, x$disp)
fileData<- list(mtcars, mtcars, mtcars)
df <- fileData[[1]]
and got stopped at
rollapply(df, 20, f)
Error in x$mpg : $ operator is invalid for atomic vectors
I think this is related to zoo using matrices, but numerous other attempts couldn't resolve the rollapply issue. So, moving on to what I believe is next:
lapply(fileData, function(x) rollapply ......
Seems a mile away. Some guidance and solutions would be very welcome.
Thanks.
I will try to help you and show how you can debug the problem. One very helpful trick in R is learning how to debug; generally I use the browser() function.
Problem:
Here I am changing your function f by adding one line:
f <- function(x) {
  browser()
  PPI(x$changeFactor_A, x$changeFactor_B)
}
Now, when you run:
rollapply(df, 1, f)
The debugger stops and you can inspect the value of the argument x:
Browse[1]> x
[1,] 1e+05
As you can see, it is a scalar value, so you can't apply the $ operator to it; hence you get the error:
Error in x$changeFactor_A : $ operator is invalid for atomic vectors
General guidance:
Now I will explain how you should do this.
Either you change your PPI function to take a single parameter, excess, so that you do the subtraction outside of it (easier),
Or you use mapply to get a generalized solution. (Harder but more general and very useful)
Avoid using $ within functions. Personally, I use it only on the R console.
Complete solution:
I assume that your data.frames (zoo objects) have changeFactor_A and changeFactor_B columns.
sapply(fileData, function(dat){
  dat <- transform(dat, excess = changeFactor_A - changeFactor_B)
  rollapply(dat[, 'excess'], 2, sum)
})
Or, more generally:
sapply(fileData, function(dat){
  excess <- get_excess(dat, 'changeFactor_A', 'changeFactor_B')
  rollapply(excess, 2, sum)
})
Where
get_excess <- function(data, colA, colB){
  ### do whatever you want here
  ### return a vector
  excess
}
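For instance, a minimal filling of that skeleton, assuming the excess is simply the difference of the two columns:
get_excess <- function(data, colA, colB){
  # return the elementwise difference of the two named columns as a vector
  data[, colA] - data[, colB]
}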
Look at the "Usage" section of the help page to ?rollapply. I'll admit that R help pages are not easy to parse, and I see how you got confused.
The problem is that rollapply can deal with ts, zoo or general numeric vectors, but only a single series. You are feeding it a function that takes two arguments, asset and benchmark. Granted, your f and PPI can trivially be vectorized, but rollapply simply isn't made for that.
Solution: calculate your excess outside rollapply (excess is easily vectorially calculated, and it does not involve any rolling calculations), and only then rollapply your function to it:
> mtcars$excess <- mtcars$mpg-mtcars$disp
> rollapply(mtcars$excess, 3, sum)
[1] -363.2 -460.8 -663.1 -784.8 -893.9 ...
You may possibly be interested in mapply, which vectorizes a function for multiple arguments, similarly to apply and friends, which work on single arguments. However, I know of no analogue of mapply with rolling windows.
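For illustration, a minimal sketch (assuming zoo is loaded as above) of combining the two steps for the original PPI, which sums a + b over each window: compute the elementwise combination first, then roll a sum over it.
elementwise <- mapply(function(a, b) a + b, mtcars$mpg, mtcars$disp)
rollapply(elementwise, 20, sum)  # same result as PPI() applied to each 20-row window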
I sweated away and took some time to slowly understand how to break down the process and protocol of calling a function with arguments from another function. A great site that helped was Advanced R from the one and only Hadley Wickham, again! The pictures showing the process breakdown are nearly ideal, although I still needed my thinking cap on for a few details.
Here is a complete example with notes. Hopefully someone else finds it useful.
library(zoo)
#Create a list of dataframes for the example.
listOfDataFrames<- list(mtcars, mtcars, mtcars)
#Give each element a name.
names(listOfDataFrames) <- c("A", "B", "C")
#This is a simple function just for the example!
#I want to perform this function on column 'col' of matrix 'm'.
#Of course to make the whole task worthwhile, this function is usually something more complex.
fApplyFunction <- function(m, col){
  mean(m[, col])
}
#This function is called from lapply() and does 'something' to the dataframe that is passed.
#I created this function to keep lapply() very simple.
#The something is to apply the function fApplyFunction(), which requires an argument 'thisCol'.
fOnEachElement <- function(thisDF, thisCol){
  #Convert to matrix for zoo library.
  thisMatrix <- as.matrix(thisDF)
  rollapply(thisMatrix, 5, fApplyFunction, thisCol, partial = FALSE, by.column = FALSE)
}
#This is where the program really starts!
#
#Apply a function to each element of list.
#The list is 'listOfDataFrames', with each element being a dataframe.
#The function to apply to each element is 'fOnEachElement'
#The additional argument for 'fOnEachElement' is "vs", which is the name of the column I want the function performed on.
#lapply() returns each result as an element of a list.
listResults <- lapply(listOfDataFrames, fOnEachElement, "vs")
#Combine all elements of the list into one dataframe.
combinedResults <- do.call(cbind, listResults)
#Now that I understand the argument passing, I could call rollapply() directly from lapply()...
#Note that ONLY the additional arguments of rollapply() are passed. The primary argument is passed automatically by lapply().
listResults2 <- lapply(listOfDataFrames, rollapply, 5, fApplyFunction, "vs", partial = FALSE, by.column = FALSE)
Results:
> combinedResults
A B C
[1,] 0.4 0.4 0.4
[2,] 0.6 0.6 0.6
[3,] 0.6 0.6 0.6
[4,] 0.6 0.6 0.6
[5,] 0.6 0.6 0.6
[6,] 0.8 0.8 0.8
[7,] 0.8 0.8 0.8
[8,] 0.8 0.8 0.8
[9,] 0.6 0.6 0.6
[10,] 0.4 0.4 0.4
[11,] 0.2 0.2 0.2
[12,] 0.0 0.0 0.0
[13,] 0.0 0.0 0.0
[14,] 0.2 0.2 0.2
[15,] 0.4 0.4 0.4
[16,] 0.6 0.6 0.6
[17,] 0.8 0.8 0.8
[18,] 0.8 0.8 0.8
[19,] 0.6 0.6 0.6
[20,] 0.4 0.4 0.4
[21,] 0.2 0.2 0.2
[22,] 0.2 0.2 0.2
[23,] 0.2 0.2 0.2
[24,] 0.4 0.4 0.4
[25,] 0.4 0.4 0.4
[26,] 0.4 0.4 0.4
[27,] 0.2 0.2 0.2
[28,] 0.4 0.4 0.4
> listResults
$A
[1] 0.4 0.6 0.6 0.6 0.6 0.8 0.8 0.8 0.6 0.4 0.2 0.0 0.0 0.2 0.4 0.6 0.8 0.8 0.6
[20] 0.4 0.2 0.2 0.2 0.4 0.4 0.4 0.2 0.4
$B
[1] 0.4 0.6 0.6 0.6 0.6 0.8 0.8 0.8 0.6 0.4 0.2 0.0 0.0 0.2 0.4 0.6 0.8 0.8 0.6
[20] 0.4 0.2 0.2 0.2 0.4 0.4 0.4 0.2 0.4
$C
[1] 0.4 0.6 0.6 0.6 0.6 0.8 0.8 0.8 0.6 0.4 0.2 0.0 0.0 0.2 0.4 0.6 0.8 0.8 0.6
[20] 0.4 0.2 0.2 0.2 0.4 0.4 0.4 0.2 0.4
> listResults2
$A
[1] 0.4 0.6 0.6 0.6 0.6 0.8 0.8 0.8 0.6 0.4 0.2 0.0 0.0 0.2 0.4 0.6 0.8 0.8 0.6
[20] 0.4 0.2 0.2 0.2 0.4 0.4 0.4 0.2 0.4
$B
[1] 0.4 0.6 0.6 0.6 0.6 0.8 0.8 0.8 0.6 0.4 0.2 0.0 0.0 0.2 0.4 0.6 0.8 0.8 0.6
[20] 0.4 0.2 0.2 0.2 0.4 0.4 0.4 0.2 0.4
$C
[1] 0.4 0.6 0.6 0.6 0.6 0.8 0.8 0.8 0.6 0.4 0.2 0.0 0.0 0.2 0.4 0.6 0.8 0.8 0.6
[20] 0.4 0.2 0.2 0.2 0.4 0.4 0.4 0.2 0.4
