Finding patterns through better visualization in R

I have the following time series data with 60 data points (a simple plot of the data is shown below; I am using R for plotting). I think that if I draw a moving-average curve through the points, the patterns in the data will be easier to see, but I don't know how to do that in R. Could someone help me with that? I am also not sure whether this is a good way to identify patterns; please suggest a better approach if there is one. Thank you.
x <- c(18,21,18,14,8,14,10,14,14,12,12,14,10,10,12,6,10,8,
14,10,10,6,6,4,6,2,8,6,2,6,4,4,2,8,6,6,8,12,8,8,6,6,2,2,4,
4,4,8,14,8,6,6,2,6,6,4,4,8,6,6)

To answer your question about moving averages: you can compute one with rollmean(), which is in the zoo package.
From Joshua's comment: you could also look into the TTR package, which depends on xts, which in turn depends on zoo. There are also other moving averages in TTR: check ?MA.
library(zoo)   # provides rollmean()
library(TTR)   # optional: further moving averages, see ?MA
# assuming your vector is loaded in dat
dat <- x
# sliding window / moving average of size 5
dat.k5 <- rollmean(dat, k = 5)
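For example, you could overlay the smoothed series on the original points (a sketch; with the default align = "center", rollmean() drops (k - 1)/2 = 2 points at each end, so the x positions must be shifted):
plot(dat, xlab = "index", ylab = "value")
# the first smoothed value corresponds to index (5 + 1)/2 = 3
lines(seq(3, length.out = length(dat.k5)), dat.k5, col = "red", lwd = 2)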

One reasonable possibility:
library(ggplot2)
d <- data.frame(x = scan("tmp.dat"))   # tmp.dat contains the data shown above
qplot(x = seq(nrow(d)), y = x, data = d) + geom_smooth(method = "loess")
With regard to "is this a good way to identify patterns" (which is a little off-topic for StackOverflow, but whatever); I think rolling means are perfectly respectable, although more sophisticated methods (such as the locally-weighted regression [loess/lowess] shown here) do exist. However, it doesn't look to me as though there is much of a complicated pattern to detect here: the data seem to initially decline with time, then level off. Rolling means and more sophisticated approaches may look prettier, but I don't think they will identify any deeper patterns in this data set ...
If you want to do this sort of thing for multiple data sets at once (as indicated in your comment), you may like ggplot's capabilities for automatically producing multi-line or faceted versions of the same plot.
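For instance, a minimal sketch with two series stacked into long format and faceted (the second series s2 is made up here purely for illustration):
library(ggplot2)
d2 <- data.frame(
  t      = rep(seq_along(x), 2),
  value  = c(x, x + rnorm(length(x), sd = 2)),  # s2 is just x plus noise
  series = rep(c("s1", "s2"), each = length(x))
)
ggplot(d2, aes(t, value)) +
  geom_point() +
  geom_smooth(method = "loess") +
  facet_wrap(~ series)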


Plotting a subset of data from a prcomp matrix without re-running prcomp

I am asking essentially the same question as a post from two years ago that never received a full answer (subset of prcomp object in R). P.S. Sorry for commenting on that post to ask for an answer.
Basically, my question is the same. I have generated a PCA table using prcomp that has 10000+ genes and 1700+ cells, made up of 7 timepoints. Plotting all of them in a single file makes it difficult to see anything.
I would like to plot each timepoint separately, using the same PCA results table (i.e. without re-running prcomp).
Thanks Dean for giving me tips on posting. Coming up with a way to describe my dataset without actually loading it here would take me a week, I believe. I also tried the
dput(droplevels(head(object, 2)))
option, but it produced far too much output, since I have such a large dataset. In short, it is a large matrix of a single-cell dataset of the kind commonly seen in packages such as Seurat (https://satijalab.org/seurat/pbmc3k_tutorial_1_4.html). EDIT: I have posted a screenshot of a subset of my matrix here ().
Sorry, I don't know how to re-create this or even export it in a text format, but this is what I can provide:
My TPM matrix has 16541 rows (genes) and 1798 columns (cells).
In it, I have "re-labelled" my columns based on timepoints, using code such as:
D0<-c(colnames(TPM[,grep("20180419-24837-1-*", colnames(TPM))])) #D0: 286 cells
D7<-c(colnames(TPM[,grep("20180419-24837-2-*", colnames(TPM))])) #D7: 237 cells
D10<-c(colnames(TPM[,grep("20180419-24947-5-*", colnames(TPM))])) #D10: 304 cells
...... and I continued to label each timepoint.
Each timepoint was also given a specific colour.
rc<-rep("white", ncol(TPM))
rc[grep("20180419-24837-1-*", colnames(TPM))] <- "magenta"
...... and I continued to give colour to each timepoint.
I performed a PCA using this code:
pcaRes<-prcomp(t(log(TPM+1)), center= TRUE, scale. = TRUE)
Then I proceeded to plot a PCA plot using:
plot(pcaRes$x[,1], pcaRes$x[,2], xlab="PC1", ylab="PC2",
cex=1.0, col= rc, pch=16, main="")
Then, when I wanted to plot a PCA plot with only D0, using the same PCA output (pcaRes)... this is where I am stuck.
P.S. If anyone has an easier way to include example data from my large matrix here, I welcome any help. Thanks so much! Sorry, I am very new to bioinformatics.
Stack Exchange for Bioinformatics is where you will need to go to ask questions and learn about the packages and functions for your area of specialty. Stack Exchange for Bioinformatics is linked with Stack Overflow, so you will just need to join; you'll have the same login.
Classes S3, S4 and base.
This is a very basic overview of classes in R. Think of a class as the parent you inherit all of your skills and abilities from: as a result you are able to do certain tasks better than others, and in some cases you will not be able to do a task at all.
In R, as in all programming, parent classes are created to save re-inventing the wheel, so that the average person does not have to repeatedly write a function to do something simple like plot() a graph. This machinery is hidden; to access it, you inherit from the parent. The child reads the traits off the parent(s), and then it either performs the task or gives you a cryptic error message.
Base and S3 classes work well together; they are like the working-class people of the R world. S4 is a specialized class made for specific fields of study, to provide the specific functionality needed in those fields. This means you can only use certain base and S3 functions with S4 objects; most are just not compatible. So it's nothing you've done wrong: plot() and ggplot() just have the wrong parent(s) to work with your dataset.
Typical base/S3 class data frame: a box-like structure, with all the column names nice and neatly stacked along the left-hand side.
Seurat S4 class object: a tree-like structure, formatted to be read by specific functions.
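A tiny illustration of the difference, using a toy S4 class (the class name is made up for the example):
# S3/base: generic functions like plot() dispatch on the class attribute
df <- data.frame(x = 1:5, y = (1:5)^2)
plot(df)               # works: data frames have a plot method

# S4: slots are accessed with @, and generics must have methods defined
setClass("Point", slots = c(x = "numeric", y = "numeric"))
p <- new("Point", x = 1, y = 2)
p@x                    # slot access; plot(p) would fail, as no method exists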
Well, I hope that helps, and I wish you well in your career. Cheers, Conrad
Thanks @ConradThiele for your suggestion, I will check out that site.
I had a chat with other bioinformaticians around the institute. My query has little to do with the object being an S4 class, since I am performing prcomp outside of the package: I have extracted my matrix out of the object and then run prcomp on it.
The solution is simple: run prcomp on the full dataset, transform the prcomp output into a data frame, add extra columns for additional details such as the timepoint, create new data frame(s) containing only the timepoint/variable of interest from the prcomp result, and then plot these sub-data-frames using plot() or whatever function you use.
This was not my own solution but came from a bioinformatician at my institute whom I went to for help. Hope this helps others! Thanks again for your time.
P.S. If I have the time, I will post a copy of the code I suggested soon.
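Until then, here is a minimal sketch of the approach described above (the grep patterns are taken from the labelling code in the question; everything else is illustrative):
# run the PCA once on the full matrix, as before
pcaRes <- prcomp(t(log(TPM + 1)), center = TRUE, scale. = TRUE)

# turn the scores into a data frame; rows are cells, named after colnames(TPM)
scores <- as.data.frame(pcaRes$x[, 1:2])

# add a column with additional details, e.g. the timepoint of each cell
scores$timepoint <- NA
scores$timepoint[grep("20180419-24837-1-", rownames(scores))] <- "D0"
scores$timepoint[grep("20180419-24837-2-", rownames(scores))] <- "D7"
# ... and so on for the remaining timepoints

# subset to one timepoint and plot, without re-running prcomp
d0 <- scores[scores$timepoint %in% "D0", ]
plot(d0$PC1, d0$PC2, xlab = "PC1", ylab = "PC2",
     cex = 1.0, col = "magenta", pch = 16, main = "D0 only")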

R plot data.frame to get a more effective overview of data

At work when I want to understand a dataset (I work with portfolio data in life insurance), I would normally use pivot tables in Excel to look at e.g. the development of variables over time or dependencies between variables.
I remembered from university the nice R function that plots every column of a data frame against every other column, as in plot(data.frame):
For the dependency between issue.age and duration this plot is actually interesting, because you can clearly see that high issue ages come with shorter policy durations (there is a maximum age for each policy). However, the plots involving the issue year iss.year are much less visual; in fact you can't see anything in them. I would like to see at a glance whether the distribution of issue ages has changed over the different issue years, something like
a chart where you could see immediately that the average age of newly issued policies increased from 2014 to 2016.
I don't want to write code that needs to be customized for every dataset that I put in because then I can also do it faster manually in Excel.
So my question is: is there an easy way to plot each column of a data frame against every other column, with more flexible chart types than the standard plot(data.frame)?
Try the ggpairs() function from the GGally package. It has a lot of capability for visualizing columns of all different types, and provides a lot of control over what to visualize.
For example, here is a snippet from the package vignette:
library(GGally)
data(tips, package = "reshape")
ggpairs(tips)
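ggpairs() also accepts a column selection and an aesthetic mapping, so you can restrict and colour the matrix; a sketch (column names taken from the tips data):
library(GGally)
library(ggplot2)
data(tips, package = "reshape")
ggpairs(tips,
        columns = c("total_bill", "tip", "sex"),
        mapping = aes(colour = sex))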

Clustering time series in R

I have a problem with clustering time series in R.
I googled a lot and found nothing that fits my problem.
I have made an STL decomposition of the time series.
The trend components are in a matrix with 64 columns, one for every series.
Now I want to cluster these series into similar groups, taking into account both the curve shapes and the time shift between them. I found functions that cover one of these aspects, but not both.
First I tried to calculate a distance matrix with the DTW distance; the resulting clusters were based on the values and allowed for the time shift, but not for the shape of the time series. After that I tried some correlation-based clustering, but then the time shift was not recognized and the result did not satisfy my requirements.
Is there a function that covers my problem, or do I have to build something of my own? I am thankful for every kind of help; after two days of tutorials and examples I am totally uninspired. I hope I have explained the problem well enough.
I attached a picture showing some example time series. There you can see the problem: the two series in the middle are put into one cluster, although the top and bottom series each have the same shape as one of the middle ones.
Have you tried the R package dtwclust?
https://cran.r-project.org/web/packages/dtwclust/index.html
(I'm just starting to explore this package, but it seems to cover a lot of aspects of time series clustering, and it has lots of good references.)
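For instance, a minimal sketch with tsclust() (the parameter choices are illustrative; trend stands for your 64-column matrix of trend components):
library(dtwclust)
# one list element per series, taken from the columns of the matrix
series_list <- lapply(seq_len(ncol(trend)), function(i) trend[, i])
# partitional clustering under the DTW distance, with DBA centroids
cl <- tsclust(series_list, type = "partitional", k = 4L,
              distance = "dtw_basic", centroid = "dba")
plot(cl)  # plots the series grouped by cluster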
You can use the kml package. It is designed specifically for longitudinal data; you can consult its help. It includes the following example:
library(kml)
### Generation of some data
cld1 <- generateArtificialLongData(25)
### We suspect 3, 4 or 6 clusters, and we want 3 redrawings.
### We want to "see" what happens (so toPlot is 'both').
kml(cld1, c(3, 4, 6), 3, toPlot = 'both')
### 4 seems to be the best. We want 10 more redrawings,
### this time without watching; we want the result as fast as possible.
kml(cld1, 4, 10)
(Figure: an example clustering result.)

How can I create a histogram for all variables in a data set with minimal effort in R?

Exploring a new data set: What is the easiest, quickest way to visualise many (all) variables?
Ideally, the output shows the histograms next to each other with minimal clutter and maximum information. Key to this question is flexibility and stability to deal with large and different data sets. I'm using RStudio and usually deal with large and messy survey data.
One example which comes out of the box with Hmisc and works quite well here is:
library(ggplot2)
str(mpg)
library(Hmisc)
hist.data.frame(mpg)
Unfortunately, elsewhere I run into problems with data labels (Error in plot.new() : figure margins too large). It also crashed for a data set larger than mpg, and I haven't figured out how to control the binning. Moreover, I'd prefer a flexible solution in ggplot2. Note that I have just started learning R and am used to the comfortable solutions provided by commercial software.
More questions on this topic:
R histogram - too many variables
There may be three broad approaches:
Commands from packages such as hist.data.frame()
Looping over variables or similar macro constructs
Stacking variables and using facets
Packages
Other commands that may be helpful:
library(psych)
# multi.hist(mpg) # error: not all columns are numeric
multi.hist(mpg[, sapply(mpg, is.numeric)])
or perhaps multhist from plotrix, which I haven't explored. Neither offers the flexibility I was looking for.
Loops
As an R beginner, I was advised by everyone to stay away from loops. So I did, but perhaps it is worth a try here; any suggestions are very welcome. Perhaps you could also comment on how to combine the graphs into one file, as in the sketch below.
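One hedged sketch along those lines, using base graphics: loop over the numeric columns and send every histogram into a single PDF (the output file name is arbitrary).
library(ggplot2)                 # only for the mpg example data
num_cols <- names(mpg)[sapply(mpg, is.numeric)]
pdf("mpg-loop-histograms.pdf")
par(mfrow = c(2, 2))             # several panels per page
for (v in num_cols) {
  hist(mpg[[v]], main = v, xlab = v)
}
dev.off()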
Stacking
My first suspicion was that stacking variables might get out of hand. However, it might be the best strategy for a reasonable number of variables.
One example I came up with uses the melt function:
library(ggplot2)
library(plyr)      # for mutate()
library(reshape2)
mpgid <- mutate(mpg, id = as.numeric(rownames(mpg)))
mpgstack <- melt(mpgid, id = "id")
pp <- qplot(value, data = mpgstack) + facet_wrap(~ variable, scales = "free")
# pp + stat_bin(geom = "text", aes(label = ..count.., vjust = -1))
ggsave("mpg-histograms.pdf", pp, scale = 2)
(As you can see I tried to put value labels on the bars for more information density, but that didn't go so well. The labels on the x-axis are also less than ideal.)
No solution here is perfect and there won't be a one-size-fits-all command. But perhaps we can get closer to ease exploring a new data set.

Time series smoothing, avoiding revisions

This time my question is more methodological than technical. I have weekly time series data which gets updated every week. Unfortunately the series is quite volatile, so I would like to apply a filter/smoothing method. I tried Hodrick-Prescott and LOESS. Both results look fine, with the downside that when a new data point arrives that diverges strongly from the historic points, the older smoothed values have to be revised and keep changing. Does somebody know of a method, implemented in R, which could do what I want? The name of a method or function would probably be completely sufficient. It should, however, be something more sophisticated than a left-sided moving average, because I would not like to lose data at the beginning of the time series. Every helpful comment is appreciated! Thank you very much!
I think (?) that the term you may be looking for is causal filtering, i.e. filtering that doesn't depend on future values. Within this category, probably the simplest and best-known approach is exponential smoothing, which is implemented in the forecast and expsmooth packages (library("sos"); findFn("{exponential smoothing}")).
Does that help?
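For instance, a minimal sketch with ses() from the forecast package (the smoothing parameter alpha is illustrative, and the vector x from the first question stands in for your weekly series):
library(forecast)
y <- ts(x)                      # your weekly series
fit <- ses(y, alpha = 0.3, initial = "simple")
smoothed <- fitted(fit)         # one-sided: new points never revise old values
plot(y)
lines(smoothed, col = "red")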
It seems you need a robust two-sided smoother. The problem is that an outlier at an end-point is indistinguishable from a sudden change in the trend. It only becomes clear that it is an outlier after several more observations are collected (and even then some strong assumptions of trend smoothness are required).
I think you will find it hard to do better than loess(), but other functions that aim to do robust smoothing include
smooth() for Tukey's running-median smoothers;
supsmu() for Friedman's super smoother.
Hodrick-Prescott smoothing, by contrast, is not robust to outliers.
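A quick way to compare these smoothers on your own series (a sketch, again reusing the vector x from the first question as stand-in data):
t <- seq_along(x)
plot(t, x)
lines(supsmu(t, x), col = "blue")        # Friedman's super smoother
lines(t, smooth(x), col = "darkgreen")   # Tukey's running-median smoother
lines(loess.smooth(t, x), col = "red")   # loess fit, for comparison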
