Suppose I have data that looks like this:
> print(dat)
V1 V2
1 1 11613
2 2 6517
3 3 2442
4 4 687
5 5 159
6 6 29
# note that V2 is the frequency and V1 does not always start with 1.
> plot(dat,main=title,type="h")
# legend()??
Now what I want to do is to plot a histogram, and have the mean
and standard deviation included in the legend. In the above example the standard deviation equals 0.87 and the mean equals 1.66.
How can I achieve that automatically in R?
This solves the problem with legend creation that Gavin noticed.
require(Hmisc)
myMean <- wtd.mean(dat$V1, dat$V2)
mySD <- sqrt(wtd.var(dat$V1, dat$V2))
plot(dat,main="title",type="h")
L <- list(bquote(Mean == .(myMean)), bquote(SD == .(mySD)))
legend('topright', legend = sapply(L, as.expression))
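If you want the legend to show the rounded values quoted in the question rather than full floating-point precision, one small variation is:
round(myMean, 2) # 1.66
round(mySD, 2)   # 0.87
L <- list(bquote(Mean == .(round(myMean, 2))),
          bquote(SD == .(round(mySD, 2))))
legend('topright', legend = sapply(L, as.expression))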
This was pulled from an answer on R-help that I posted in 2010, which attributed the strategy for the solution to a 2005 exchange between Gabor Grothendieck and Thomas Lumley.
This gets pretty close:
dat <- data.frame(V1 = 1:6, V2 = c(11613, 6517, 2442, 687, 159, 29))
addMyLegend <- function(data, where = "topright", digits = 3, ...) {
    MEAN <- round(mean(data), digits = digits)
    SD <- round(sd(data), digits = digits)
    legend(where, legend = list(bquote(Mean == .(MEAN)),
                                bquote(SD == .(SD))),
           ...)
}
plot(dat, type = "h")
addMyLegend(dat$V1, digits = 2, bty = "n")
Which gives a plot with the mean and SD shown in the legend.
I'm not sure why the plotmath code is not displaying the == as a typeset =... Will have to look into that.
To see what is going on read ?bquote which explains that it can be used to replace components of an expression with dynamic data. Anything wrapped in .( ) will be replaced by the value of the object named in the wrapped part of the expression. Thus foo == .(bar) will look for an object named bar and insert the value of bar into the expression. If bar contained 1.3 then the result after applying bquote(foo == .(bar)) would be similar to expression(foo == 1.3).
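For instance, a quick check at the console (with a hypothetical object bar):
bar <- 1.3
e <- bquote(foo == .(bar))
e
## foo == 1.3
as.expression(e) # the expression form that legend() accepts for typeset output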
The rest of my function addMyLegend() should be fairly self-explanatory; if not, read ?legend. Note you can pass on any arguments to legend() via the ... in addMyLegend().
I have a set of data below which shows a number of cars at an imaginary car dealership. The 'Current_Price' variable is obviously the price the car is currently set to sell at. The 'Minimum_Price' variable shows the hard floor price that the car must not, under any circumstances, sell at. (This can be assumed to be the purchase price.)
I'm trying to create a function where the user can select a subset of the cars in the database (using the 'user defined parameters' as noted below), and then reduce or increase the 'Current_Price' by either a percentage or pound (£) value.
The 'minimum profit parameters' set the minimum profit which is to be made on all cars. In this example they have been set to £10 and 10%. Meaning the profit for each car must be either £10, or 10% of the current price - whichever is larger.
The price change parameters set how much the price is to move and whether it should move up or down.
# Dummy data
Type <- rep(c("Car", "Van"),each=3)
Age <- as.numeric(c(2, 2, 5, 4, 8,1))
Colour <- c("Red", "Red", "Yellow", "Red", "Black", "Red")
Make <- c("Ford", "VW", "VW", "VW", "BMW", "Ford")
Current_Price <- as.numeric(c(1050, 1000, 1500, 995, 2200, 2100))
Minimum_Price <- as.numeric(c(900, 600, 500, 850, 1900, 1950))
df1 <- data.frame(Type, Age, Colour, Make, Current_Price, Minimum_Price)
# User-defined parameters - price to be changed for all cars matching the selection below
Input_Type <- "Car"
Input_Min_Age <- 2 # All cars this age and above
Input_Max_Age <- 10 # All cars this age and below
Input_Colour <- "Red"
Input_Make <- c("Ford", "VW")
# Minimum profit parameters
Input_Min_Pounds <- 10
Input_Min_Percentage <- 0.10
# Price change parameters
Input_Change_Type <- "Percentage" # "Percentage" or "Pound"
Input_Change_Value <- -0.10 # "-" sign to represent price reduction
Given the above, I would expect lines 1 & 2 to be affected by the change. Line 1's price should move down from £1,050 to £1,000, because £1,000 is the lowest possible price at which 10% of the price is profit (900/(1-0.10) = 1000).
Line 2's price should simply move down by 10% to £900.
Has anyone got an idea how to put this into a function which will be fairly intuitive to use for someone who is not used to using R?
This answer uses a data.table to support the price changes in the original "data.frame" (as you have explained in the comments to your question); the solution could look like this.
I am still ignoring the pricing logic because I want to focus on the usability aspect
(this specialized pricing logic is arbitrary and not of particular interest to anybody else here at SO; if you have a specific problem implementing it yourself, please open a new question and explain the problem in detail).
library(data.table)
data <- as.data.table(df1)
calc_price <- function(Current_Price,
                       Minimum_Price,
                       price_change_type,
                       price_change_value,
                       min_profit_pounds,
                       min_profit_percentage) {
  # TODO implement your pricing logic here...
  return(Current_Price + 1)
}
update_car_prices <- function(data,
                              filter,
                              price_change_type = c("Percentage", "Pound"),
                              price_change_value = 0,
                              min_profit_pounds = 10,
                              min_profit_percentage = 0.10) {
  stopifnot(is.data.table(data))
  price_change_type <- match.arg(price_change_type) # use the first value if none was provided
  filter_exp <- substitute(filter) # "parse" the passed value as expression
  # update the price using a separate function to encapsulate the logic
  data[eval(filter_exp), Current_Price := calc_price(Current_Price,
                                                     Minimum_Price,
                                                     price_change_type,
                                                     price_change_value,
                                                     min_profit_pounds,
                                                     min_profit_percentage)][]
  return(data)
}
The usage is still similar to my data.frame answer, e.g.:
update_car_prices(data, Type == "Car" & Age >= 2 & Age <= 10 & Colour == "Red" & Make %in% c("Ford", "VW"))
update_car_prices(data, Colour == "Red")
update_car_prices(data, Colour == "Red", "Pound", 500)
The differences are:
- The whole data.table (data) is returned, so you can see the impact.
- The original data is changed, since data.tables are passed by reference and I am updating the price "by reference" using the data.table syntax :=.
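For reference, here is one possible body for the calc_price() stub above. This is only a sketch under my reading of the question's rules, where the 10% floor is interpreted as profit as a share of the final price (matching the 900/(1-0.10) = 1000 example):
# Sketch only: implements "profit must be at least £X, or X% of the price,
# whichever is larger", plus a percentage or pound price change.
calc_price_sketch <- function(Current_Price, Minimum_Price,
                              price_change_type, price_change_value,
                              min_profit_pounds, min_profit_percentage) {
  # apply the requested price change
  new_price <- if (price_change_type == "Percentage") {
    Current_Price * (1 + price_change_value)
  } else {
    Current_Price + price_change_value
  }
  # lowest admissible price: cost plus the pound floor, or the price at which
  # min_profit_percentage of the price is profit, whichever is larger
  floor_pounds  <- Minimum_Price + min_profit_pounds
  floor_percent <- Minimum_Price / (1 - min_profit_percentage)
  pmax(new_price, floor_pounds, floor_percent)
}
With the dummy data and a "Percentage" change of -0.10, this moves line 1 from 1050 to 1000 and line 2 from 1000 to 900, as expected in the question.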
This answer is based on a data.frame...
Your question addresses multiple aspects (pricing logic, filtering logic and usability).
I am focusing on the usability (and ignoring the pricing logic, since that is just an arbitrary detail here).
I see at least three options:
1. Use a strongly typed function:
get_car_prices1 <- function(data, Input_Type, Input_Min_Age, Input_Max_Age, Input_Colour, Input_Make, Input_Min_Pounds, Input_Min_Percentage)
2. Use an "untyped" function with a variable number of arguments via ... to support filtering by passing only the required arguments:
get_car_prices2 <- function(data, Input_Min_Pounds, Input_Min_Percentage, ...)
3. Use metaprogramming with substitute + eval.
I have decided on option 3 as being the best (user-friendly + flexible) option IMHO:
get_car_prices <- function(data,
                           filter,
                           price_change_type = c("Percentage", "Pound"),
                           price_change_value = 1)
{
  price_change_type <- match.arg(price_change_type) # use the first value if none was provided
  filter_exp <- substitute(filter) # "parse" the passed value as expression
  data_subset <- subset(data, eval(filter_exp))
  # TODO add your pricing logic here (e.g. using "ifelse")
  return(data_subset)
}
# Usage examples:
get_car_prices(df1, Colour == "Red")
# Type Age Colour Make Current_Price Minimum_Price
# 1 Car 2 Red Ford 1050 900
# 2 Car 2 Red VW 1000 600
# 4 Van 4 Red VW 995 850
# 6 Van 1 Red Ford 2100 1950
get_car_prices(df1, Type == "Car" & Age >= 2 & Age <= 10 & Colour == "Red" & Make %in% c("Ford", "VW"))
# Type Age Colour Make Current_Price Minimum_Price
# 1 Car 2 Red Ford 1050 900
# 2 Car 2 Red VW 1000 600
get_car_prices(df1, Colour == "Red", "Pound", 500)
# ...
get_car_prices(df1, Colour == "Red", "dumping price", 1)
# Error in match.arg(price_change_type) :
# 'arg' should be one of “Percentage”, “Pound”
But: the user has to learn at least the expression logic of R, and that variables (and values) are case-sensitive:
get_car_prices(df1, Colour == "red")
# [1] Type Age Colour Make Current_Price Minimum_Price
# <0 rows> (or 0-length row.names)
# Error: Assignment operator (=) used instead of comparison operator (==)
get_car_prices(df1, Colour = "Red")
# Error in get_car_prices(df1, Colour = "Red") :
# unused argument (Colour = "Red")
First time question asker here. I wasn't able to find an answer to this question in other posts (love stackexchange, btw).
Anyway...
I'm creating a rarefaction curve via the vegan package and I'm getting a very messy plot that has a very thick black bar at the bottom of the plot which is obscuring some low diversity sample lines.
Ideally, I would like to generate a plot with all of my lines (169; I could reduce this to 144), but make a composite graph, coloring by Sample Year and using different line types for each Pond (i.e. 2 sample years: 2016 and 2017, and 3 ponds: 1, 2 and 5). I've used phyloseq to create an object with all my data, then separated my OTU abundance table from my metadata into distinct objects (jt = OTU table and sampledata = metadata). My current code:
jt <- as.data.frame(t(j)) # transform it to make it compatible with the proceeding commands
rarecurve(jt
, step = 100
, sample = 6000
, main = "Alpha Rarefaction Curve"
, cex = 0.2
, color = sampledata$PondYear)
# A very small subset of the sample metadata
Pond Year
F16.5.d.1.1.R2 5 2016
F17.1.D.6.1.R1 1 2017
F16.1.D15.1.R3 1 2016
F17.2.D00.1.R2 2 2017
Here is an example of how to plot a rarefaction curve with ggplot. I used data available in the phyloseq package from Bioconductor.
To install phyloseq:
source('http://bioconductor.org/biocLite.R')
biocLite('phyloseq')
library(phyloseq)
other libraries needed
library(tidyverse)
library(vegan)
data:
mothlist <- system.file("extdata", "esophagus.fn.list.gz", package = "phyloseq")
mothgroup <- system.file("extdata", "esophagus.good.groups.gz", package = "phyloseq")
mothtree <- system.file("extdata", "esophagus.tree.gz", package = "phyloseq")
cutoff <- "0.10"
esophman <- import_mothur(mothlist, mothgroup, mothtree, cutoff)
extract OTU table, transpose and convert to data frame
otu <- otu_table(esophman)
otu <- as.data.frame(t(otu))
sample_names <- rownames(otu)
out <- rarecurve(otu, step = 5, sample = 6000, label = T)
Now you have a list; each element corresponds to one sample.
Clean the list up a bit:
rare <- lapply(out, function(x){
  b <- as.data.frame(x)
  b <- data.frame(OTU = b[,1], raw.read = rownames(b))
  b$raw.read <- as.numeric(gsub("N", "", b$raw.read))
  return(b)
})
label list
names(rare) <- sample_names
convert to data frame:
rare <- map_dfr(rare, function(x){
  z <- data.frame(x)
  return(z)
}, .id = "sample")
Let's see how it looks:
head(rare)
sample OTU raw.read
1 B 1.000000 1
2 B 5.977595 6
3 B 10.919090 11
4 B 15.826125 16
5 B 20.700279 21
6 B 25.543070 26
plot with ggplot2
ggplot(data = rare) +
  geom_line(aes(x = raw.read, y = OTU, color = sample)) +
  scale_x_continuous(labels = scales::scientific_format())
vegan plot:
rarecurve(otu, step = 5, sample = 6000, label = T) #low step size because of low abundance
One can make an additional column of groupings and color according to that.
Here is an example of how to add another grouping. Let's assume you have a table of the form:
groupings <- data.frame(sample = c("B", "C", "D"),
                        location = c("one", "one", "two"),
                        stringsAsFactors = FALSE)
groupings
sample location
1 B one
2 C one
3 D two
where samples are grouped according to another feature. You could use lapply or map_dfr to go over groupings$sample and label rare$location.
rare <- map_dfr(groupings$sample, function(x){ # loop over samples
  z <- rare[rare$sample == x,] # subset rare according to sample
  loc <- groupings$location[groupings$sample == x] # subset groupings according to sample; if more than one grouping, repeat for all
  z <- data.frame(z, loc) # make a new data frame with the subsets
  return(z)
})
head(rare)
sample OTU raw.read loc
1 B 1.000000 1 one
2 B 5.977595 6 one
3 B 10.919090 11 one
4 B 15.826125 16 one
5 B 20.700279 21 one
6 B 25.543070 26 one
Let's make a decent plot out of this:
ggplot(data = rare) +
  geom_line(aes(x = raw.read, y = OTU, group = sample, color = loc)) +
  geom_text(data = rare %>% # here we need coordinates of the labels
              group_by(sample) %>% # first group by samples
              summarise(max_OTU = max(OTU), # find max OTU
                        max_raw = max(raw.read)), # find max raw read
            aes(x = max_raw, y = max_OTU, label = sample),
            check_overlap = TRUE, hjust = 0) +
  scale_x_continuous(labels = scales::scientific_format()) +
  theme_bw()
I know this is an older question but I originally came here for the same reason and along the way found out that in a recent (2021) update vegan has made this a LOT easier.
This is an absolutely bare-bones example.
Ultimately we're going to be plotting the final result in ggplot so you'll have full customization options, and this is a tidyverse solution with dplyr.
library(vegan)
library(dplyr)
library(ggplot2)
I'm going to use the dune data within vegan and generate a column of random metadata for the site.
data(dune)
metadata <- data.frame("Site" = as.factor(1:20),
"Vegetation" = rep(c("Cactus", "None")))
Now we will run rarecurve, but provide the argument tidy = TRUE which will export a dataframe rather than a plot.
One thing to note here is that I have also used the step argument. The default step is 1, and this means by default you will get one row per individual per sample in your dataset, which can make the resulting dataframe huge. Step = 1 for dune gave me over 600 rows. Reducing the step too much will make your curves blocky, so it will be a balance between step and resolution for a nice plot.
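To get a feel for that trade-off, you can simply compare row counts (a quick check; exact numbers depend on your data and vegan version):
nrow(rarecurve(dune, step = 1, tidy = TRUE))  # one row per individual per site
nrow(rarecurve(dune, step = 20, tidy = TRUE)) # far fewer rows, blockier curves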
Then I piped a left join right into the rarecurve call:
dune_rare <- rarecurve(dune,
                       step = 2,
                       tidy = TRUE) %>%
  left_join(metadata)
Now it will be plottable in ggplot, with a color/colour call to whatever metadata you attached.
From here you can customize other aspects of the plot as well.
ggplot(dune_rare) +
  geom_line(aes(x = Sample, y = Species, group = Site, colour = Vegetation)) +
  theme_bw()
I have the following data:
attributes <- c("apple-water-orange", "apple-water", "apple-orange", "coffee", "coffee-croissant", "green-red-yellow", "green-red-blue", "green-red","black-white","black-white-purple")
attributes
attributes
1 apple-water-orange
2 apple-water
3 apple-orange
4 coffee
5 coffee-croissant
6 green-red-yellow
7 green-red-blue
8 green-red
9 black-white
10 black-white-purple
What I want is another column that assigns a category to each row, based on observation similarity.
category <- c(1,1,1,2,2,3,3,3,4,4)
df <- as.data.frame(cbind(df, category))
attributes category
1 apple-water-orange 1
2 apple-water 1
3 apple-orange 1
4 coffee 2
5 coffee-croissant 2
6 green-red-yellow 3
7 green-red-blue 3
8 green-red 3
9 black-white 4
10 black-white-purple 4
It is clustering in the broader sense, but I think most clustering methods are for numeric data only, and one-hot encoding has a lot of disadvantages (that's what I read on the internet).
Does anyone have an idea how to do this task? Maybe some word-matching approaches?
It would be also great if I could adjust degree of similarity (rough vs. decent "clustering") based on a parameter.
Thanks in advance for any idea!
So I have whipped up two possibilities. Option 1 uses "one-hot encoding", which is simple and straightforward so long as you are happy for apple/apples to be treated as just as different as apple/orange, for example. I use the Jaccard index for the distance metric because it does reasonably well with overlapping sets. Option 2 uses a local sequence alignment algorithm and should be quite robust against things like apple/apples vs. apple/orange; it will also have more tuning parameters, which could take time to optimize for your problem.
library(reshape2)
library(proxy)
library(gplots) # for heatmap.2()
attributes <- c("apple-water-orange", "apple-water", "apple-orange", "coffee",
                "coffee-croissant", "green-red-yellow", "green-red-blue",
                "green-red", "black-white", "black-white-purple")
dat <- data.frame(attr = attributes, row.names = paste("id", seq_along(attributes), sep = ""))
attributesList <- strsplit(attributes, "-")
df <- data.frame(id = paste("id", rep(seq_along(attributesList), sapply(attributesList, length)), sep = ""),
                 word = unlist(attributesList))
df.wide <- dcast(data=df, word ~ id, length)
rownames(df.wide) <- df.wide[, 1]
df.wide <- as.matrix(df.wide[, -1])
df.dist <- dist(t(df.wide), method="jaccard")
plot(hclust(df.dist))
abline(h=c(0.6, 0.8))
heatmap.2(df.wide, trace="none", col=rev(heat.colors(15)))
res <- merge(dat, data.frame(cat1=cutree(hclust(df.dist), h=0.8)), by="row.names")
res <- merge(res, data.frame(cat2=cutree(hclust(df.dist), h=0.6)), by.y="row.names", by.x="Row.names")
res
You'll see you can control the granularity of the categorization by adjusting where you cut the dendrogram.
Here is a method using the "Smith-Waterman" (local) alignment.
Biostrings is part of the Bioconductor project. The SW algorithm finds the optimal local (non-end-to-end) alignment of two sequences (strings). In this case you can again use cutree to set your categories but you can also tune the scoring function to suit your needs.
library(Biostrings)
strList <- lapply(attributes, BString)
swDist <- matrix(apply(expand.grid(seq_along(strList), seq_along(strList)), 1, function(x) {
  pairwiseAlignment(strList[[x[1]]], strList[[x[2]]], type = "local")@score
}), nrow = 10)
heatmap.2(swDist, trace = "none", col = rev(heat.colors(15)),
          labRow = paste("id", 1:10, sep = ""),
          labCol = paste("id", 1:10, sep = ""))
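Note that swDist holds similarity scores (higher = more similar), not distances. One hedged way to turn the scores into categories, analogous to cutree above, is to flip them into a distance first:
# sketch: flip scores into a distance (as.dist() ignores the diagonal),
# then cluster and cut as before
d <- as.dist(max(swDist) - swDist)
cat3 <- cutree(hclust(d), k = 4)
cat3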
I'm trying to use the 'relsurv' package in R to compare the survival of a cohort to national life tables. The code below shows my problem using the example from relsurv but changing the life-table data. I've just used two years and two ages in the life-table data below; the actual data is much larger but gives the same error. The error is 'invalid ratetable argument', but I've formatted the table as per the example life tables 'slopop' and 'survexp.us'.
library(survival)
library(relsurv)
data(rdata) # example data from relsurv
raw = read.table(header=T, stringsAsFactors = F, sep=' ', text='
Year Age sex qx
1980 30 1 0.00189
1980 31 1 0.00188
1981 30 1 0.00191
1981 31 1 0.00191
1980 30 2 0.00077
1980 31 2 0.00078
1981 30 2 0.00076
1981 31 2 0.00074
')
ages = c(30,40) # in years
years = c(1980, 1990)
rtab = array(data=NA, dim=c(length(ages), 2, length(years))) # set up blank array: ages, sexes, years
for (y in unique(raw$Year)){
  for (s in 1:2){
    # probability of death in next year, transformed to hazard (see ratetables help)
    rtab[, s, y - min(years) + 1] = -1 * log(1 - subset(raw, Year == y & sex == s)$qx) / 365.24
  }
}
attributes(rtab)$dimnames[[1]] = as.character(ages)
attributes(rtab)$dimnames[[2]] = c('male','female')
attributes(rtab)$dimnames[[3]] = as.character(years)
attributes(rtab)$dimid <- c("age", "sex", 'year')
attributes(rtab)$dim <- c(length(ages), 2, length(years))
attributes(rtab)$factor = c(0,0,1)
attributes(rtab)$type = c(2,1,4)
attributes(rtab)$cutpoints[[1]] = ages*365.24 # must be in days
attributes(rtab)$cutpoints[[2]] = NULL
attributes(rtab)$cutpoints[[3]] = as.date(paste("1Jan", years, sep='')) # must be date
attributes(rtab)$class = "ratetable"
# example from relsurv
rsmul(Surv(time, cens) ~ sex + as.factor(agegr) +
        ratetable(age = age * 365.24, sex = sex, year = year),
      data = rdata, ratetable = rtab, int = 1)
Try using the transrate function from the relsurv package to reformat the data. That should give you a compatible dataset.
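For example, a rough sketch with the toy data above. Note the assumptions: transrate() expects matrices of one-year survival probabilities (hence the 1 - qx) with ages as rows and calendar years as columns, and it assumes consecutive ages, so treat this purely as an illustration of the call shape:
men   <- matrix(1 - subset(raw, sex == 1)$qx, nrow = 2,
                dimnames = list(c("30", "31"), c("1980", "1981")))
women <- matrix(1 - subset(raw, sex == 2)$qx, nrow = 2,
                dimnames = list(c("30", "31"), c("1980", "1981")))
rtab2 <- transrate(men, women, yearlim = c(1980, 1981), int.length = 1)
is.ratetable(rtab2)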
Three things to add:
You should set attributes(rtab)$factor = c(0,1,0), since sex (the second dimension) is a factor (i.e., doesn't change over time).
A good way to check whether something is a valid rate table is to use the is.ratetable() function. is.ratetable(rtab, verbose = TRUE) will even return a message stating what was wrong.
Check the result of is.ratetable() without verbose first, because the verbose check can report a problem even for perfectly valid rate tables.
The rest of this answer is about that lie.
If the type attribute isn't given, is.ratetable will calculate it using the factor attribute; you can see this by just printing the function. However, it seems to do so incorrectly. It uses type <- 1 * (fac == 1) + 2 * (fac == 0) + 4 * (fac > 0), where fac is attributes(rtab)$factor.
But the next section, which checks the type attribute if it's provided, says the only valid values are 1, 2, 3, and 4. It's impossible to get 1 from the code above.
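Evaluating the quoted formula for a few hypothetical factor values makes the point concrete:
fac <- c(0, 1, 2) # hypothetical factor attribute values
1 * (fac == 1) + 2 * (fac == 0) + 4 * (fac > 0)
## [1] 2 5 4 (never 1; fac == 1 even yields the invalid value 5)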
For example, let's examine the slopop ratetable provided with the relsurv package.
library(relsurv)
data(slopop)
is.ratetable(slopop)
# [1] TRUE
is.ratetable(slopop, verbose = TRUE)
# [1] "wrong length for cutpoints 3"
I think this is where your rate table is being hung up.
I have a large dataframe where a variable id (first column) recurs with different values in the second column. My idea is to order the dataframe, split it into a list, and then lapply a function which cbinds the sequence 1:nrow(group) to each group. My code so far:
DF <- DF[order(DF[,1]),]
DF <- split(DF,DF[,1])
DF <- lapply(1:length(DF), function(i) cbind(DF[[i]], 1:length(DF[[i]])))
But this gives me an error: arguments imply different number of rows.
Can you elaborate?
> head(DF, n=50)
cell area
1 1 121.2130
2 2 81.3555
3 3 81.5862
4 4 83.6345
...
33 1 121.3270
34 2 80.7832
35 3 81.1816
36 4 83.3340
DF <- DF[order(DF$cell),]
What I want is:
> head(DF, n=50)
cell area counter
1 1 121.213 1
33 1 121.327 2
65 1 122.171 3
97 1 122.913 4
129 1 123.697 5
161 1 124.474 6
...and so on.
This is my code:
cell.areas.t <- function(file) {
  dat = paste(file)
  DF <- read.table(dat, col.names = c("cell", "area"))
  DF <- splitstackshape::getanID(DF, "cell")[] # thanks to akrun's answer
  ggplot2::ggplot(data = DF, aes(x = .id, y = area, color = cell)) +
    geom_line(aes(group = cell)) + geom_point(size = 0.1)
}
And the plot looks like this:
Most cells increase in area, only some decrease. This is only a first try to visualize my data, so what you can't see very well is that the areas drop down periodically due to cell division.
Additional question:
There is a problem I didn't take into account beforehand, which is that after a cell division a new cell is added to the data.frame and is handed the initial index 1 (you can see in the image that all cells start from .id = 1, not later), which is not what I want - it needs to inherit the index of its creation time. The first thing that comes to mind is a parsing mechanism that does this job for a newly added cell variable:
DF$.id[DF$cell != temporary.cellindex] <- max(DF$.id[DF$cell != temporary.cellindex])
Do you have a better idea? Thanks.
There is a boundary condition which may ease the problem: fixed number of cells at the beginning (32). Another solution would be to cut away all data before the last daughter cell is created.
Update: Additional question solved, here's the code:
cell.areas.t <- function(file) {
  dat = paste(file)
  DF <- read.table(dat, col.names = c("cell", "area"))
  DF$.id <- c(0, cumsum(diff(DF$cell) < 0)) + 1L # indexing
  title <- getwd()
  myplot <- ggplot2::ggplot(data = DF, aes(x = .id, y = area, color = factor(cell))) +
    geom_line(aes(group = cell)) + geom_point(size = 0.1) +
    theme(legend.position = "none") + ggtitle(title)
  # save the plot
  ggsave(file = "cell_areas_time.svg", plot = myplot, width = 10, height = 8)
}
We can use getanID from splitstackshape
library(splitstackshape)
getanID(DF, "cell")[]
There's a much easier method to accomplish that goal: use ave with a per-group sequence.
DF$counter <- ave(seq_len(nrow(DF)), DF[,1], FUN = seq_along)
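A quick sanity check on a toy frame (made-up data):
toy <- data.frame(cell = c(1, 2, 1, 2, 1))
ave(seq_len(nrow(toy)), toy$cell, FUN = seq_along)
## [1] 1 1 2 2 3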