Improving multivariate regression model in R
I recently conducted a survey within an IT company concerning user satisfaction with a specific data management solution. There was one question about overall satisfaction (the dependent variable for my regression) and then various questions about more specific aspects such as data quality (the independent variables in my regression).
With the help of R, I created a multivariate regression in order to figure out which of the various aspects matter most for customer satisfaction. However, I believe my results are not entirely correct, since some of them don't make sense. For instance, according to the standardized coefficients, increasing data quality reduces user satisfaction. From my point of view, the coefficient should be positive for all variables.
Maybe somebody here can help me or give me some tips on how to improve my model. Below you can find my code and the (anonymized) results. The rows labeled M-AV are my independent variables; the columns to the right show the standardized coefficient, the standard error, the t value and the p-value.
#https://www.youtube.com/watch?v=EUbujtw5Azc
# Load libraries
library(lmtest)
library(car)
library(sandwich)
# Read in the data
daten <- read.csv(file.choose(), header = T, sep=";")
# Transform column K (read in as chr, but it is actually numeric)
daten <- transform(daten, K = as.numeric(K))
str(daten)
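If column K is read in as character because the file uses a decimal comma (an assumption about the export; common with sep = ";"), read.csv can be told about it directly and the transform() step above becomes unnecessary:
# Alternative (assuming decimal commas in the file):
# daten <- read.csv(file.choose(), header = TRUE, sep = ";", dec = ",")
# read.csv2() defaults to sep = ";" and dec = "," and would also work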
# Regression model
#modell <- lm(H ~ M + N + O + P + X + Y + Z + AA + AB + AE + AF + AG + AJ + AL + AM + AN + AQ + AR + AS + AU + AV, daten)
modell <- lm(C ~ M + N + O + P + X + Y + Z + AA + AB + AE + AF + AG + AJ + AL + AM + AN + AQ + AR + AS + AU + AV, daten)
# Assumptions
# 1 Normality of the residuals
# In the plot the points should lie roughly on the line (corresponds to a normal distribution). Deviations at the start and end are OK.
plot(modell, 2)
# 2a Homoscedasticity (do the residuals scatter evenly?)
plot(modell, 1) # should lie roughly on the ideal line
# Breusch-Pagan test, null hypothesis: homoscedasticity is present
# if p-value > 0.05, the null hypothesis is retained
bptest(modell)
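If you want to apply the decision rule from the comment programmatically, the p-value can be taken straight from the test object (a small sketch, not part of the original script):
bp <- bptest(modell)
bp$p.value > 0.05  # TRUE: the null hypothesis of homoscedasticity is retained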
# 3 No multicollinearity (independent variables correlating too strongly with each other)
# VIF should definitely be below 10, more conservatively below 6
vif(modell)
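To apply the VIF thresholds from the comment directly, the returned vector can be sorted and filtered (sketch; 6 is the conservative cut-off mentioned above):
v <- vif(modell)
sort(v, decreasing = TRUE)  # strongest collinearity first
names(v)[v > 6]             # predictors above the conservative threshold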
# 4 Outliers / influential cases
#https://bjoernwalther.com/cook-distanz-in-r-ermitteln-und-interpretieren-ausreisser-erkennen/
plot(modell, 4)
# Robust standard errors
coeftest(modell, vcov=vcovHC(modell, type ="HC3"))
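If the Breusch-Pagan test indicates heteroscedasticity, these robust standard errors are the ones to report; matching confidence intervals can be obtained with coefci() from lmtest (a sketch under that assumption):
coefci(modell, vcov. = vcovHC(modell, type = "HC3"))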
# Evaluation
summary(modell)
# The F statistic tests the null hypothesis that the model has no explanatory power --> here p < .05, so it is rejected!
# R^2 value --> about 60% of the variance in the dependent variable is explained by the predictors (actually 40%, see adjusted R^2)
# Standardized coefficients to find the most influential variable
zmodell <- lm(scale(C) ~ scale(M)+ scale(N) + scale(O) + scale(P) + scale(X) + scale(Y) + scale(Z) + scale(AA) + scale(AB) + scale(AE) + scale(AF) + scale(AG) + scale(AJ) + scale(AL) + scale(AM) + scale(AN) + scale(AQ) + scale(AR) + scale(AS) + scale(AU) + scale(AV), data = daten)
summary(zmodell)
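As a cross-check on the standardized coefficients (and on the counterintuitive negative sign for data quality), they can also be derived from the unstandardized model and compared with the simple correlations between C and each predictor; a sign flip between the two usually points to strongly correlated predictors (collinearity/suppression) rather than a data error. A sketch, assuming the column names used in the model above (praediktoren is just a helper vector introduced here):
praediktoren <- c("M", "N", "O", "P", "X", "Y", "Z", "AA", "AB", "AE", "AF", "AG", "AJ", "AL", "AM", "AN", "AQ", "AR", "AS", "AU", "AV")
# standardized coefficient = b * sd(x) / sd(y); matches summary(zmodell) exactly only without missing values
coef(modell)[praediktoren] * sapply(daten[praediktoren], sd, na.rm = TRUE) / sd(daten$C, na.rm = TRUE)
# simple (bivariate) correlation of each predictor with overall satisfaction C
sapply(daten[praediktoren], function(x) cor(x, daten$C, use = "complete.obs"))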
dput(head(daten, 20))
structure(list(A = c(6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L,
15L, 16L, 17L, 19L, 20L, 21L, 22L, 23L, 24L, 25L, 26L), B = c(NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA), C = c(10L, 5L, 9L, 9L, 7L, 10L, 10L, 5L, 10L, 8L,
1L, 8L, 10L, 7L, 8L, 10L, 8L, 2L, 8L, 3L), D = c(NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA), E = c(5L, 3L, 4L, 5L, 4L, 4L, 6L, 3L, 5L, 3L, 4L, 2L, 4L,
2L, 3L, 5L, 3L, 4L, 3L, 2L), F = c(5L, 2L, 6L, 5L, 4L, 2L, 6L,
4L, 5L, 6L, 4L, 4L, 6L, 5L, 5L, 6L, 4L, 3L, 5L, 5L), G = c(NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA), H = c(6L, 3L, 5L, 4L, 5L, 4L, 5L, 4L, 5L, 4L, 2L,
5L, 5L, 4L, 4L, 6L, 4L, 5L, 4L, 1L), I = c(6L, 2L, 5L, 4L, 4L,
4L, 5L, 3L, 5L, 4L, 2L, 5L, 5L, 3L, 4L, 5L, 3L, 2L, 4L, 1L),
J = c(3L, 6L, 6L, 5L, 6L, 2L, 5L, 4L, 6L, 6L, 5L, 2L, 5L,
5L, 2L, 6L, 5L, 5L, 6L, 6L), K = c(5, 3.67, 5.33, 4.33, 5,
3.33, 5, 3.67, 5.33, 4.67, 3, 4, 5, 4, 3.33, 5.67, 4, 4,
4.67, 2.67), L = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), M = c(4L, 2L, 6L,
6L, 5L, 6L, 6L, 4L, 6L, 6L, 5L, 6L, 5L, 5L, 5L, 6L, 6L, 6L,
6L, 3L), N = c(6L, 5L, 5L, 5L, 6L, 6L, 6L, 5L, 6L, 6L, 4L,
4L, 4L, 3L, 5L, 5L, 4L, 5L, 5L, 2L), O = c(5L, 1L, 5L, 4L,
6L, 6L, 5L, 2L, 6L, 6L, 1L, 5L, 5L, 3L, 4L, 5L, 4L, 2L, 5L,
3L), P = c(6L, 1L, 4L, 4L, 4L, 6L, 6L, 2L, 5L, 3L, 2L, 5L,
5L, 3L, 5L, 5L, 4L, 5L, 2L, 1L), Q = c(NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
), R = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA), S = c(NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
), T = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA), U = c(NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
), V = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA), W = c(NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
), X = c(4L, 1L, 3L, 4L, 5L, 6L, 5L, 3L, 5L, 4L, 1L, 5L,
4L, 1L, 4L, 1L, 5L, 2L, 4L, 1L), Y = c(5L, 1L, 3L, 3L, 3L,
6L, 5L, 2L, 6L, 4L, 1L, 3L, 4L, 1L, 5L, 5L, 3L, 2L, 3L, 2L
), Z = c(5L, 1L, 3L, 4L, 3L, 6L, 5L, 2L, 5L, 4L, 2L, 3L,
5L, 3L, 5L, 3L, 2L, 1L, 4L, 1L), AA = c(6L, 4L, 4L, 5L, 5L,
6L, 5L, 3L, 4L, 5L, 3L, 4L, 4L, 3L, 5L, 6L, 5L, 3L, 6L, 2L
), AB = c(6L, 6L, 4L, 4L, 3L, 6L, 5L, 3L, 5L, 3L, 2L, 6L,
5L, 6L, 5L, 5L, 5L, 5L, 6L, 2L), AC = c(NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
), AD = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA), AE = c(5L, 1L, 6L, 4L, 6L,
5L, 4L, 3L, 5L, 5L, 2L, 2L, 4L, 1L, 5L, 3L, 3L, 4L, 4L, 1L
), AF = c(4L, 1L, 6L, 2L, 5L, 5L, 4L, 3L, 6L, 4L, 2L, 4L,
5L, 4L, 5L, 4L, 3L, 4L, 6L, 2L), AG = c(4L, 1L, 5L, 2L, 5L,
5L, 4L, 4L, 4L, 4L, 2L, 4L, 5L, 5L, 4L, 2L, 3L, 2L, 6L, 2L
), AH = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA), AI = c(0L, 0L, 1L, 1L, 1L,
1L, 1L, 0L, 1L, 1L, 0L, 0L, 1L, 0L, 0L, 0L, 1L, 0L, 0L, 0L
), AJ = c(3L, 2L, 5L, 3L, 4L, 4L, 6L, 3L, 5L, 5L, 2L, 5L,
5L, 3L, 5L, 5L, 4L, 2L, 5L, 1L), AK = c(NA, NA, 5L, 3L, 4L,
4L, 5L, NA, 6L, 5L, NA, NA, 6L, NA, NA, NA, 4L, NA, NA, NA
), AL = c(4L, 4L, 6L, 4L, 6L, 5L, 5L, 3L, 6L, 5L, 4L, 6L,
5L, 3L, 5L, 4L, 5L, 3L, 6L, 1L), AM = c(5L, 1L, 6L, 4L, 5L,
2L, 4L, 2L, 6L, 4L, 2L, 2L, 6L, 1L, 5L, 3L, 2L, 1L, 4L, 3L
), AN = c(1L, 1L, 6L, 3L, 2L, 6L, 4L, 1L, 6L, 2L, 1L, 4L,
5L, 2L, 5L, 5L, 4L, 4L, 5L, 1L), AO = c(NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
), AP = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA), AQ = c(3L, 1L, 6L, 3L, 6L,
1L, 5L, 2L, 6L, 5L, 6L, 3L, 6L, 1L, 5L, 3L, 2L, 2L, 4L, 2L
), AR = c(1L, 4L, 4L, 3L, 6L, 1L, 5L, 1L, 6L, 5L, 5L, 4L,
6L, 2L, 5L, 4L, 2L, 2L, 4L, 2L), AS = c(1L, 1L, 6L, 4L, 6L,
1L, 5L, 3L, 6L, 5L, 6L, 5L, 6L, 5L, 5L, 5L, 4L, 2L, 5L, 2L
), AT = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA), AU = c(5L, 3L, 4L, 4L, 6L,
3L, 5L, 3L, 6L, 5L, 4L, 4L, 4L, 6L, 5L, 6L, 5L, 6L, 5L, 2L
), AV = c(6L, 3L, 5L, 4L, 6L, 2L, 6L, 2L, 6L, 4L, 4L, 4L,
4L, 6L, 4L, 6L, 3L, 6L, 2L, 3L), AW = c(NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
), AX = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA), AY = c(NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
), AZ = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA), BA = c(NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
), BB = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA), BC = c(NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
), BD = c(5.25, 2.25, 5, 4.75, 5.25, 6, 5.75, 3.25, 5.75,
5.25, 3, 5, 4.75, 3.5, 4.75, 5.25, 4.5, 4.5, 4.5, 2.25),
BE = c(5.2, 2.6, 3.4, 4, 3.8, 6, 5, 2.6, 5, 4, 1.8, 4.2,
4.4, 2.8, 4.8, 4, 4, 2.6, 4.6, 1.6), BF = c(4.333333333,
1, 5.666666667, 2.666666667, 5.333333333, 5, 4, 3.333333333,
5, 4.333333333, 2, 3.333333333, 4.666666667, 3.333333333,
4.666666667, 3, 3, 3.333333333, 5.333333333, 1.666666667),
BG = c(3.25, 2, 5.75, 3.5, 4.25, 4.25, 4.75, 2.25, 5.75,
4, 2.25, 4.25, 5.25, 2.25, 5, 4.25, 3.75, 2.5, 5, 1.5), BH = c(1.666666667,
2, 5.333333333, 3.333333333, 6, 1, 5, 2, 6, 5, 5.666666667,
4, 6, 2.666666667, 5, 4, 2.666666667, 2, 4.333333333, 2),
BI = c(5.5, 3, 4.5, 4, 6, 2.5, 5.5, 2.5, 6, 4.5, 4, 4, 4,
6, 4.5, 6, 4, 6, 3.5, 2.5)), row.names = c(NA, 20L), class = "data.frame")
Related
Convert multiple columns from numeric to factor
I thought this task is simple, then I was surprised that it wasn't. I have multiple selected columns with coded responses (likert-scales). I want to transform them into a factor variable with factor levels (some of them were never chosen). The questionnair is in German, that is why I you probably won't be able to understand the labels. df[,c(3:21,23:25)] <- apply(df[,c(3:21,23:25)],2, function (x) factor(x, levels = c(0,1,2,3,4), labels = c("gar nicht", "gering", "eher schwach", "eher stark", "sehr stark"))) df[,22] <- apply(df[,22],1, function (x) factor(x, levels = c(0,1,2,3), labels = c("gar nicht", "sofort", "mittelfristig", "langfristig"))) I will need to split those data frames because of the different scales. Nevertheless, it does not transform my data accurately. The outcome is a character. Here is my test data: structure(list(ï..lfdNr = 1:20, company = c("Nationalpark Thayathal", "Naturpark Heidenreichsteiner Moor", "Naturpark Hohe Wand", "Tierpark Stadt Haag", "Ötscher Tropfsteinhöhle", "Carnuntum", "Stift Heiligenkreuz", "Ruine Kollmitz", "Schlosshof", "Retzer Erlebniskeller", "LOISIUM Weinwelt", "Bio Imkerei Stögerer", "Amethyst Welt Maissau", "Donau Niederösterreich tourismus", "Niederösterreich Bahnen", "Benediktinerstift Melk", "Kunstmeile Krems", "Die Garten Tulln", "Winzer Krems ", "Domäne Wachau"), A2_1_hitz = c(4L, NA, NA, 3L, NA, NA, 3L, 2L, 3L, NA, 3L, NA, 3L, NA, 2L, 3L, 3L, 4L, 2L, 3L), A2_2_trock = c(3L, NA, NA, 3L, NA, NA, 3L, NA, 3L, NA, 2L, NA, 1L, NA, 2L, 4L, 3L, 4L, 2L, 3L), A2_3_reg = c(2L, NA, NA, 2L, NA, NA, 3L, 2L, 3L, NA, 3L, NA, 2L, NA, 3L, 4L, 2L, 3L, 4L, 2L), A2_4_schnee = c(4L, NA, NA, 3L, NA, NA, NA, 3L, 3L, NA, 1L, NA, 0L, NA, 4L, NA, 3L, 4L, 4L, 1L), B1_1_hitz = c(4L, NA, NA, 1L, NA, NA, NA, 3L, 3L, NA, 2L, NA, NA, NA, 2L, 3L, 2L, 4L, 0L, 2L), B1_2_trock = c(3L, NA, NA, 2L, NA, NA, NA, NA, 3L, NA, 0L, NA, NA, NA, 2L, 3L, 2L, 4L, 3L, 1L), B1_3_reg = c(2L, NA, NA, 1L, NA, NA, NA, NA, 3L, NA, 3L, NA, NA, NA, 3L, 3L, 1L, 2L, 3L, 3L), B1_4_schnee = c(1L, NA, NA, 0L, NA, NA, 0L, 0L, 1L, NA, NA, NA, NA, NA, 4L, 1L, 0L, 4L, 0L, 0L), B2_1_nZuk = c(3L, NA, NA, 0L, NA, NA, NA, 0L, 0L, NA, 0L, NA, 0L, 3L, 3L, 0L, 3L, 2L, 0L, 0L), B2_2_mZuk = c(3L, NA, NA, 0L, NA, NA, NA, 0L, 2L, NA, 2L, NA, 0L, 2L, 3L, 0L, 3L, 2L, 3L, 0L), B2_3_fZuk = c(3L, NA, NA, 2L, NA, NA, NA, NA, 2L, NA, 2L, NA, 0L, 2L, 3L, 0L, 3L, NA, 3L, 0L), C1_1_aktEin = c(2L, NA, NA, 1L, NA, NA, NA, NA, 2L, NA, NA, NA, NA, NA, NA, 0L, 1L, 3L, 2L, 3L), C1_2_zukEin = c(3L, NA, NA, 2L, NA, NA, NA, NA, 3L, NA, NA, NA, NA, NA, NA, 0L, 2L, 4L, 3L, 3L), C2_1_bisVer = c(2L, NA, NA, 1L, NA, NA, NA, NA, 2L, NA, NA, NA, NA, NA, 2L, 2L, 1L, 3L, 2L, 2L), C2_2_zukVer = c(3L, NA, NA, 2L, NA, NA, NA, NA, 3L, NA, NA, NA, NA, NA, 2L, 2L, 2L, 3L, 3L, 2L), C3_1_bisVer = c(NA, NA, NA, 1L, NA, NA, 2L, NA, 3L, NA, NA, NA, NA, NA, 1L, 1L, 1L, NA, 2L, 2L), C3_2_zukVer = c(NA, NA, NA, 2L, NA, NA, 3L, NA, 3L, NA, NA, NA, NA, NA, 1L, 2L, 2L, NA, 3L, 2L), C4_1_EinKlim = c(NA, NA, NA, 2L, NA, NA, NA, NA, 2L, NA, 2L, NA, NA, NA, 3L, 0L, 1L, NA, 3L, 1L), D1a_1_StÃ.rke = c(NA, NA, NA, 3L, NA, NA, NA, NA, 3L, NA, NA, NA, 3L, NA, 2L, 3L, 2L, 3L, 3L, 3L), D1b_1_Dring = c(NA, NA, NA, NA, NA, NA, 2L, 3L, NA, NA, NA, NA, 2L, NA, 1L, 1L, 1L, 1L, 1L, 1L), D5_1_bestBed = c(NA, NA, NA, 0L, NA, NA, NA, NA, 3L, NA, NA, NA, NA, NA, NA, 2L, 1L, NA, 3L, 3L), E1_1_zuBesuch = c(NA, NA, NA, 2L, NA, NA, NA, NA, 3L, NA, NA, NA, NA, NA, 4L, 1L, 4L, NA, 4L, NA), E1_2_wirtBed = c(NA, NA, NA, 3L, NA, NA, 3L, NA, 2L, NA, NA, NA, NA, NA, 1L, 1L, 4L, NA, 3L, 
NA)), row.names = c(NA, 20L), class = "data.frame") Thanks, nadine
We need lapply and not apply, because apply converts the data frame to a matrix and a matrix can only hold a single class: df[,c(3:21,23:25)] <- lapply(df[,c(3:21,23:25)], function (x) factor(x, levels = c(0,1,2,3,4), labels = c("gar nicht", "gering", "eher schwach", "eher stark", "sehr stark")))
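A quick way to confirm the conversion (not part of the original answer) is to check the class of the affected columns afterwards:
sapply(df[, c(3:21, 23:25)], class)  # should now report "factor" for every selected column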
set factor level in ggplot2
Hello I would need help in order to sort geom_segment in my plot by the column end_scaffold. Here is the code I used to produce the following plot : library(ggplot2) #Here I try to sort the data in order to get geom_segment sorted in the plot but it does not work tab<-tab[with(tab, order(-end_scaff,-end_gene)), ] ggplot(tab, aes(x = start_scaff, xend = end_scaff, y = molecule, yend = molecule)) + geom_segment(size = 3, col = "grey80") + geom_segment(aes(x = ifelse(direction == 1, start_gene, end_gene), xend = ifelse(direction == 1, end_gene, start_gene)), data = tab, arrow = arrow(length = unit(0.1, "inches")), size = 2) + geom_text(aes(x = start_gene, y = molecule, label = gene), data = tab, nudge_y = 0.2) + scale_y_discrete(limits = rev(levels(tab$molecule))) + theme_minimal() does someone have an idea in order to sort the geom_segment by the column end_scaffold (descending) (where scaffold_1254 should be on the top of the plot and scaffold_74038 shoudl be on the bottom). here are the data > dput(tab) structure(list(molecule = structure(c(2L, 6L, 6L, 3L, 7L, 4L, 5L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("", "scaffold_1254", "scaffold_15158", "scaffold_7180", "scaffold_74038", "scaffold_7638", "scaffold_8315"), class = "factor"), gene = structure(c(8L, 6L, 5L, 3L, 7L, 4L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("", "G1", "G2", "G3", "G4", "G5", "G6", "G7"), class = "factor"), start_gene = c(6708L, 9567L, 3456L, 10105L, 2760L, 9814L, 1476L, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), end_gene = c(11967L, 10665L, 4479L, 10609L, 3849L, 10132L, 2010L, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), start_scaff = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), end_scaff = c(20072, 15336, 15336, 13487, 10827, 10155, 2010, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), strand = structure(c(2L, 2L, 3L, 2L, 3L, 2L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("", "backward", "forward"), class = "factor"), direction = c(-1L, -1L, 1L, -1L, 1L, -1L, 1L, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA)), row.names = c(7L, 5L, 4L, 2L, 6L, 3L, 1L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L, 20L, 21L, 22L, 23L, 24L, 25L, 26L, 27L, 28L, 29L, 30L, 31L, 32L, 33L, 34L, 35L, 36L, 37L, 38L, 39L), class = "data.frame")
For a solution inside ggplot you can remove the limits to the scale_y_discrete (it would reorder based on the factor levels) and use y = reorder(molecule, end_scaff) inside the aes: library(dplyr) library(ggplot2) tab <- tab %>% filter(!is.na(start_gene)) ggplot(tab, aes(x = start_scaff, xend = end_scaff, y = reorder(molecule, end_scaff), yend = molecule)) + geom_segment(size = 3, col = "grey80") + geom_segment(aes(x = ifelse(direction == 1, start_gene, end_gene),xend = ifelse(direction == 1, end_gene, start_gene)), arrow = arrow(length = unit(0.1, "inches")), size = 2) + geom_text(aes(x = start_gene, y = molecule, label = gene), nudge_y = 0.2) + scale_y_discrete() + theme_minimal() Created on 2020-09-04 by the reprex package (v0.3.0)
The trick is to reorder the levels of molecule, not the entire data.frame. Instead of tab <- tab[with(tab, order(-end_scaff,-end_gene)), ] run i <- with(tab, order(-end_scaff,-end_gene)) mol <- unique(tab$molecule[i]) tab$molecule <- factor(tab$molecule, levels = mol) The same plotting code now produces the following graph.
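To verify the new level order before re-plotting (a small addition, not from the original answer):
levels(tab$molecule)  # inspect the new level order used by ggplot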
How to add a variable and assign a value based on the value of another variable
I want to create a new variable (final_session) and assign a value based on the filtered value of another variable which is in date format (date). I have been able to add the variable and assign a value, then been able to filter and change the value of the variable, but I lose the rest of the observations (which I do not want to do). I have the code below: ## add `final_session` column, defualt value 0 old_sp_long2 <- old_sp_long %>% add_column(final_session = 0) ## select most recent date of sessions 1--15 and mark as final session == 1 df <- old_sp_long2 %>% filter(wave <= 15) %>% group_by(uci) %>% slice(which.max(date)) %>% mutate(final_session = replace(final_session, final_session == 0, 1)) I minimal dataset below: structure(list(uci = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), .Label = c("10001h", "10268h", "10431h"), class = "factor"), wave = c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L), date = structure(c(17042, 17053, 17060, 17074, 17086, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 17003, 17010, 17015, 17055, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 16994, 17000, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), class = "Date"), session = c(1L, 2L, 3L, 4L, 5L, 6L, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 1L, 2L, 3L, 4L, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 1L, 2L, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA)), class = "data.frame", row.names = c(NA, -51L)) I'm sure this is possible but I just cannot figure it out. Does anyone have a solution to my problem? Please let me know if you need any more information.
Do you need something like this ? library(dplyr) old_sp_long2 %>% group_by(uci) %>% mutate(max_date = max(date[wave <= 15], na.rm = TRUE), max_wave = wave[which.max(date == max_date)], final_session = replace(final_session, date == max_date, 1)) # uci wave date session final_session max_date max_wave # <fct> <int> <date> <int> <dbl> <date> <int> # 1 10001h 1 2016-08-29 1 0 2016-10-12 5 # 2 10001h 2 2016-09-09 2 0 2016-10-12 5 # 3 10001h 3 2016-09-16 3 0 2016-10-12 5 # 4 10001h 4 2016-09-30 4 0 2016-10-12 5 # 5 10001h 5 2016-10-12 5 1 2016-10-12 5 # 6 10001h 6 NA 6 0 2016-10-12 5 # 7 10001h 7 NA NA 0 2016-10-12 5 # 8 10001h 8 NA NA 0 2016-10-12 5 # 9 10001h 9 NA NA 0 2016-10-12 5 #10 10001h 10 NA NA 0 2016-10-12 5 # … with 41 more rows This keeps same number of observation as in your original old_sp_long2.
Return an average of last or first two rows from a different group (indicated by a variable)
This is a follow-up to this question. With a data like below: data <- structure(list(seq = c(1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 7L, 7L, 7L, 7L, 7L, 8L, 8L, 9L, 9L, 9L, 10L, 10L, 10L), new_seq = c(2, 2, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 2, 2, 2, 2, NA, NA, NA, NA, NA, 4, 4, 4, 4, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 6, 6, 6, 6, 6, NA, NA, 8, 8, 8, NA, NA, NA), value = c(2L, 0L, 0L, 3L, 0L, 5L, 5L, 3L, 0L, 3L, 2L, 3L, 2L, 3L, 4L, 1L, 0L, 0L, 0L, 1L, 1L, 0L, 2L, 5L, 3L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 3L, 5L, 3L, 1L, 1L, 1L, 0L, 1L, 0L, 4L, 3L, 0L, 3L, 1L, 3L, 0L, 0L, 1L, 0L, 0L, 3L, 4L, 5L, 3L, 5L, 3L, 5L, 0L, 1L, 1L, 3L, 2L, 1L, 0L, 0L, 0L, 0L, 5L, 1L, 1L, 0L, 4L, 1L, 5L, 0L, 3L, 1L, 2L, 1L, 0L, 3L, 0L, 1L, 1L, 3L, 0L, 1L, 1L, 2L, 2L, 1L, 0L, 4L, 0L, 0L, 3L, 0L, 0L)), row.names = c(NA, -100L), class = c("tbl_df", "tbl", "data.frame")) for every value of new_seq, which is not NA I need to calculate a mean of 2 observations from respective group in seq (value of new_seq refers to a value of seq). The issue is that: for those rows, where new_seq refers to a value of seq which appears after (rows 1:2 in an example) it should be a mean of 2 FIRST rows from respective group, for those rows where new_seq refers to a value of seq which appears before it should be a mean of 2 LAST rows from respective group #Z.Lin provided excellent solution for the second case, but how it can be tweaked to handle both cases? Or maybe is there another solution with tidyverse?
I think I got it, so I post an answer for anybody who'll come here from search. Note that replace_na() needs tidyr in addition to dplyr: library(dplyr) library(tidyr) lookup_backwards <- data %>% group_by(seq) %>% mutate(rank = seq(n(), 1)) %>% filter(rank <= 2) %>% summarise(backwards = mean(value)) %>% ungroup() lookup_forwards <- data %>% group_by(seq) %>% mutate(rank = seq(1, n())) %>% filter(rank <= 2) %>% summarise(forwards = mean(value)) %>% ungroup() data %>% left_join(lookup_backwards, by = c('new_seq' = 'seq')) %>% left_join(lookup_forwards, by = c('new_seq' = 'seq')) %>% replace_na(list(backwards = 0, forwards = 0)) %>% mutate(new_column = ifelse(new_seq > seq, forwards, backwards))
Transpose and calculate Pearson correlation
I am really new to coding and I need to run a number of statistics in a dataset, for example the pearson correlation, but I am having some trouble manipulating the data. From what I understood I need to transpose my data in order to calculate the pearson correlation, but here's where I'm having some problems. For starters, the column names turn into a new row instead of becoming the new column names. Then I get a message that my values are not numeric. I also have some NA and I am trying to calculate the correlation with this code cor(cr, use = "complete.obs", method = "pearson") Error in cor(cr1, use = "complete.obs", method = "pearson") : 'x' must be numeric I need to know the correlation between Victoria and Nuria which should yield 0.3651484 here is the dput of my dataset: > dput(cr) structure(list(User = structure(c(8L, 10L, 2L, 17L, 11L, 1L, 18L, 9L, 7L, 5L, 3L, 14L, 13L, 4L, 20L, 6L, 16L, 12L, 15L, 19L ), .Label = c("Ana", "Anton", "Bernard", "Carles", "Chris", "Ivan", "Jim", "John", "Marc", "Maria", "Martina", "Nadia", "Nerea", "Nuria", "Oriol", "Rachel", "Roger", "Sergi", "Valery", "Victoria" ), class = "factor"), Star.Wars.IV...A.New.Hope = c(1L, 5L, NA, NA, 4L, 2L, NA, 4L, 5L, 4L, 2L, 3L, 2L, 3L, 4L, NA, NA, 4L, 5L, 1L), Star.Wars.VI...Return.of.the.Jedi = c(5L, 3L, NA, 3L, 3L, 4L, NA, NA, 1L, 2L, 1L, 5L, 3L, NA, 4L, NA, NA, 5L, 1L, 2L), Forrest.Gump = c(2L, NA, NA, NA, 4L, 4L, 3L, NA, NA, NA, 5L, 2L, NA, 3L, NA, 1L, NA, 1L, NA, 2L), The.Shawshank.Redemption = c(NA, 2L, 5L, NA, 1L, 4L, 1L, NA, 4L, 5L, NA, NA, 5L, NA, NA, NA, NA, 5L, NA, 4L), The.Silence.of.the.Lambs = c(4L, 4L, 2L, NA, 4L, NA, 1L, 3L, 2L, 3L, NA, 2L, 4L, 2L, 5L, 3L, 4L, 1L, NA, 5L), Gladiator = c(4L, 2L, NA, 1L, 1L, NA, 4L, 2L, 4L, NA, 5L, NA, NA, NA, 5L, 2L, NA, 1L, 4L, NA), Toy.Story = c(2L, 1L, 4L, 2L, NA, 3L, NA, 2L, 4L, 4L, 5L, 2L, 4L, 3L, 2L, NA, 2L, 4L, 2L, 2L), Saving.Private.Ryan = c(2L, NA, NA, 3L, 4L, 1L, 5L, NA, 4L, 3L, NA, NA, 5L, NA, NA, 2L, NA, NA, 1L, 3L), Pulp.Fiction = c(NA, NA, NA, 4L, NA, 4L, 2L, 3L, NA, 4L, NA, 1L, NA, NA, 3L, NA, 2L, 5L, 3L, 2L), Stand.by.Me = c(3L, 4L, 1L, NA, 1L, 4L, NA, NA, 1L, NA, NA, NA, NA, 4L, 5L, 1L, NA, NA, 3L, 2L), Shakespeare.in.Love = c(2L, 3L, NA, NA, 5L, 5L, 1L, NA, 2L, NA, NA, 3L, NA, NA, NA, 5L, 2L, NA, 3L, 1L), Total.Recall = c(NA, 2L, 1L, 4L, 1L, 2L, NA, 2L, 3L, NA, 3L, NA, 2L, 1L, 1L, NA, NA, NA, 1L, NA), Independence.Day = c(5L, 2L, 4L, 1L, NA, 4L, NA, 3L, 1L, 2L, 2L, 3L, 4L, 2L, 3L, NA, NA, NA, NA, NA), Blade.Runner = c(2L, NA, 4L, 3L, 4L, NA, 3L, 2L, NA, NA, NA, NA, NA, 2L, NA, NA, NA, 4L, NA, 5L), Groundhog.Day = c(NA, 2L, 1L, 5L, NA, 1L, NA, 4L, 5L, NA, NA, 2L, 3L, 3L, 2L, 5L, NA, NA, NA, 5L), The.Matrix = c(4L, NA, 1L, NA, 3L, NA, 1L, NA, NA, 2L, 1L, 5L, NA, 5L, NA, 2L, 4L, NA, 2L, 4L), Schindler.s.List = c(2L, 5L, 2L, 5L, 5L, NA, NA, 1L, NA, 5L, NA, NA, NA, 1L, 3L, 2L, NA, 2L, NA, 3L ), The.Sixth.Sense = c(5L, 1L, 3L, 1L, 5L, 3L, NA, 3L, NA, 1L, 2L, NA, NA, NA, NA, 4L, NA, 1L, NA, 5L), Raiders.of.the.Lost.Ark = c(NA, 3L, 1L, 1L, NA, NA, 5L, 5L, NA, NA, 1L, NA, 5L, NA, 3L, 3L, NA, 2L, NA, 3L), Babe = c(NA, NA, 3L, 2L, NA, 2L, 2L, NA, 5L, NA, 4L, 2L, NA, NA, 1L, 4L, NA, 5L, NA, NA)), .Names = c("User", "Star.Wars.IV...A.New.Hope", "Star.Wars.VI...Return.of.the.Jedi", "Forrest.Gump", "The.Shawshank.Redemption", "The.Silence.of.the.Lambs", "Gladiator", "Toy.Story", "Saving.Private.Ryan", "Pulp.Fiction", "Stand.by.Me", "Shakespeare.in.Love", "Total.Recall", "Independence.Day", "Blade.Runner", "Groundhog.Day", "The.Matrix", "Schindler.s.List", 
"The.Sixth.Sense", "Raiders.of.the.Lost.Ark", "Babe"), row.names = c(NA, -20L), class = c("tbl_df", "tbl", "data.frame")) Can someone help me?
This code should give you the correlation matrix between all users. cr2<-t(cr[,2:21]) # Transpose (first column contains names) colnames(cr2)<-cr[,1] # Assign names to columns cor(cr2,use="complete.obs") # Gives an error because there are no complete obs # Error in cor(cr2, use = "complete.obs") : no complete element pairs cor(cr2,use="pairwise.complete.obs") # use pairwise deletion Correlation between Victoria and Nuria is 0.36514837 (using pairwise deletion) Edit:To get just the correlation between Victoria and Nuria with listwise deletion, run the above and then cr2<-as.data.frame(cr2) with(cr2, cor(Victoria, Nuria, use = "complete.obs", method = "pearson")) [1] 0.3651484
As a summary in addition to #Niek's answer. First transpose the data frame by t() by excluding first column (which contains the names and is not numeric and thus cannot used for correlation calculations); assign these names to new columns in same step. Then calculate specific correlations. The solution in one piece would be: cr2 <- setNames(as.data.frame(t(cr[, -1])), cr[, 1]) with(cr2, cor(Victoria, Nuria, use = "complete.obs")) [1] 0.3651484 Or for the whole correlation matrix: cor(cr2, use = "pairwise.complete.obs")