Abline command is not showing a regression line? - r

I'm new to R programming and I'm trying to plot a regression line for this data set, but it doesn't seem to be working.
I followed exactly what my professor did, but with no luck. I've also tried swapping the abline command for abline(lm(batters$EBH~batters$TB)), with similar results.
Here is my code for it:
batters<-read.table(header=TRUE, text="
X AVG EBH TB OPS K.to.BB.Ratio
1 LeMahieu 0.327 61 312 0.893 1.95
2 Urshela 0.314 55 236 0.889 3.48
3 Torres 0.278 64 292 0.871 2.64
4 Judge 0.272 46 204 0.921 2.21
5 Sanchez 0.232 47 208 0.841 3.13
6 Wong 0.285 40 202 0.784 1.76
7 Molina 0.270 34 167 0.711 2.52
8 Goldschmidt 0.260 60 284 0.821 2.13
9 Ozuna 0.243 53 230 0.804 1.84
10 DeJong 0.233 62 259 0.762 2.39
11 Altuve 0.298 61 275 0.903 1.98
12 Bregman 0.296 80 328 1.015 0.69
13 Springer 0.292 62 283 0.974 1.69
14 Reddick 0.275 36 205 0.728 1.83
15 Chirinos 0.238 40 162 0.791 2.45
16 Bellinger 0.305 84 351 1.035 1.14
17 Turner 0.290 51 244 0.881 1.72
18 Seager 0.272 64 236 0.817 2.23
19 Taylor 0.262 45 169 0.794 3.11
20 Muncy 0.251 58 251 0.889 1.65
21 Meadows 0.291 69 296 0.922 2.43
22 Garcia 0.282 47 227 0.796 4.03
23 Pham 0.273 56 255 0.818 1.52
24 Choi 0.261 41 188 0.822 1.69
25 Adames 0.254 46 222 0.735 3.32
26 Yelich 0.329 76 328 1.101 1.48
27 Braun 0.285 55 232 0.849 3.09
28 Moustakas 0.254 66 270 0.845 1.85
29 Grandal 0.246 56 240 0.848 1.28
30 Arcia 0.223 32 173 0.633 2.53")
plot(batters$EBH,batters$TB,main="Attribute Pairing 5",xlab="EBH",ylab="TB")
lm(formula = batters$EBH~batters$TB)
#Call:
#lm(formula = batters$EBH ~ batters$TB)
#Coefficients:
#(Intercept) batters$TB
# -4.1275 0.2416
lin_model_1<-lm(formula = batters$EBH~batters$TB)
summary(lin_model_1)
abline(-4.12752, 0.24162)
I apologize for the messy code; this is for a class.

Your formula is backwards in the lm() function call. The dependent variable is on the left side of the "~".
In your plot the y-axis (dependent) variable is TB, but in your linear regression model it is used as the independent variable. For abline() to draw the fitted line on that plot, you need to swap EBH and TB in the formula.
plot(batters$EBH,batters$TB,main="Attribute Pairing 5",xlab="EBH",ylab="TB")
model <- lm(formula = batters$TB ~ batters$EBH)
model
Call:
lm(formula = batters$TB ~ batters$EBH)

Coefficients:
(Intercept)  batters$EBH
     46.510        3.603
abline(model)
# or
abline(46.51, 3.60)
Also, if you pass model to abline() you avoid having to specify the intercept and slope by hand.
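As a side note (a sketch, not part of the original answer): fitting the model with a data argument keeps the formula readable and works the same way with abline():
model <- lm(TB ~ EBH, data = batters)   # same fit, cleaner formula
plot(TB ~ EBH, data = batters, main = "Attribute Pairing 5", xlab = "EBH", ylab = "TB")
abline(model)                           # abline() reads intercept and slope from the fitted model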

Related

I have the following message "'collapse' argument should be a formula" while using the mixture function (drc package)

While using the drc package for mixture prediction on microalgae growth-inhibition data, specifically the mixture function, I receive the error message "'collapse' argument should be a formula", which I don't really understand.
I'm using almost exactly the same script as in the drc package pdf available online, which uses the "glymet" data (also growth-inhibition data), with just a few changes to fit my own data. The drm function estimates the parameters of the dose-response curves for my data without problems.
Not being that proficient with RStudio as a whole, I am stuck with this error message, which blocks me from doing further analysis.
Does anyone have an idea of what could cause this error? I didn't find anyone raising the same issue online, and the problem doesn't happen with the glymet data.
my data :
> dbp
dose rgr pct
1 0.00 1.502 100
2 0.00 1.449 100
3 0.00 1.611 100
4 0.00 1.468 100
5 0.00 1.506 100
6 0.00 1.495 100
7 1.81 1.249 100
8 1.81 1.303 100
9 1.81 1.316 100
10 3.19 0.968 100
11 3.19 1.057 100
12 3.19 1.083 100
13 5.43 1.003 100
14 5.43 0.964 100
15 5.43 0.943 100
16 8.25 0.849 100
17 8.25 0.781 100
18 8.25 0.697 100
19 15.67 0.587 100
20 15.67 0.660 100
21 15.67 0.591 100
22 26.65 0.485 100
23 26.65 0.497 100
24 26.65 0.532 100
25 45.50 0.286 100
26 45.50 0.370 100
27 45.50 0.326 100
28 0.00 1.686 75
29 0.00 1.580 75
30 0.00 1.499 75
31 0.00 1.528 75
32 0.00 1.540 75
33 0.00 1.653 75
34 1.32 1.380 75
35 1.32 1.421 75
36 1.32 1.468 75
37 2.65 1.174 75
38 2.65 1.137 75
39 2.65 1.167 75
40 5.30 0.726 75
41 5.30 0.810 75
42 5.30 0.797 75
43 10.59 0.626 75
44 10.59 0.471 75
45 10.59 0.416 75
46 21.18 0.468 75
47 21.18 0.415 75
48 21.18 0.487 75
49 42.36 0.252 75
50 42.36 0.303 75
51 42.36 0.320 75
52 0.00 1.620 50
53 0.00 1.713 50
54 0.00 1.659 50
55 0.00 1.678 50
56 0.00 1.700 50
57 0.00 1.581 50
58 1.58 1.298 50
59 1.58 1.226 50
60 1.58 1.189 50
61 3.16 1.021 50
62 3.16 1.062 50
63 3.16 0.925 50
64 6.33 0.863 50
65 6.33 0.823 50
66 6.33 0.711 50
67 12.65 0.548 50
68 12.65 0.611 50
69 12.65 0.597 50
70 25.30 0.394 50
71 25.30 0.363 50
72 25.30 0.319 50
73 50.60 0.255 50
74 50.60 0.241 50
75 50.60 0.219 50
76 0.00 1.500 25
77 0.00 1.541 25
78 0.00 1.527 25
79 0.00 1.491 25
80 0.00 1.468 25
81 0.00 1.353 25
82 1.80 1.512 25
83 1.80 1.313 25
84 1.80 1.442 25
85 3.60 1.437 25
86 3.60 1.364 25
87 3.60 1.291 25
88 7.20 1.231 25
89 7.20 1.389 25
90 7.20 1.286 25
91 14.40 0.802 25
92 14.40 1.069 25
93 14.40 0.865 25
94 28.80 0.474 25
95 28.80 0.597 25
96 28.80 0.411 25
97 57.61 0.321 25
98 57.61 0.216 25
99 57.61 0.239 25
100 0.00 1.512 0
101 0.00 1.390 0
102 0.00 1.388 0
103 0.00 1.391 0
104 0.00 1.328 0
105 0.00 1.390 0
106 1.56 1.467 0
107 1.56 1.422 0
108 1.56 1.371 0
109 2.92 1.255 0
110 2.92 1.359 0
111 2.92 1.354 0
112 5.25 1.211 0
113 5.25 1.232 0
114 5.25 1.353 0
115 7.66 1.271 0
116 7.66 1.168 0
117 7.66 0.970 0
118 17.03 0.927 0
119 17.03 0.689 0
120 17.03 1.034 0
121 31.22 0.611 0
122 31.22 0.758 0
123 31.22 0.752 0
124 55.19 0.449 0
125 55.19 0.303 0
126 55.19 0.434 0
dose corresponds to the exposure concentration, rgr to the growth rate, and pct to the percentage of the first substance in the mixture.
I'm using the following script for this data:
> library(drc)
>
> dbp<-read.table("dbp.txt",header=TRUE, sep="")
>
### data glymet
> ## Fitting the model with freely varying ED50 values
> dbp.free <- drm(rgr~dose, pct, data = dbp,
+ fct = LL.3())
>
> ## Lack-of-fit test
> modelFit(dbp.free) # acceptable
Lack-of-fit test
ModelDf RSS Df F value p value
ANOVA 89 0.46595
DRC model 111 0.68575 22 1.9083 0.0181
> summary(dbp.free)
Model fitted: Log-logistic (ED50 as parameter) with lower limit at 0 (3 parms)
Parameter estimates:
Estimate Std. Error t-value p-value
b:100 0.836148 0.060986 13.710 < 2.2e-16 ***
b:75 0.990562 0.067962 14.575 < 2.2e-16 ***
b:50 0.819426 0.057170 14.333 < 2.2e-16 ***
b:25 1.651083 0.147962 11.159 < 2.2e-16 ***
b:0 1.217393 0.112309 10.840 < 2.2e-16 ***
d:100 1.511861 0.031439 48.089 < 2.2e-16 ***
d:75 1.605332 0.030678 52.329 < 2.2e-16 ***
d:50 1.658598 0.031896 52.001 < 2.2e-16 ***
d:25 1.472612 0.026126 56.365 < 2.2e-16 ***
d:0 1.414731 0.027300 51.822 < 2.2e-16 ***
e:100 10.070587 0.871177 11.560 < 2.2e-16 ***
e:75 6.293939 0.474130 13.275 < 2.2e-16 ***
e:50 5.670343 0.491531 11.536 < 2.2e-16 ***
e:25 20.022984 1.153494 17.359 < 2.2e-16 ***
e:0 27.492564 2.023952 13.584 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error:
0.07859961 (111 degrees of freedom)
> ## Plotting isobole structure
> isobole(dbp.free)
>
> ## Fitting the concentration addition model
> dbp.ca <- mixture(dbp.free, model = "CA")
Error in mixture(dbp.free, model = "CA") :
'collapse' argument should be a formula
Thank you in advance! It's my first post, so don't hesitate to tell me if any information is missing.

How to remove the variable names from the diagonal and put them on axes in R function scatterplotMatrix?

I am trying to reproduce a matrix plot from a book. Here is the plot in the book:
Here is my code:
y <- read.table("T1-2.dat")
colnames(y) <- c("density", "mach-dir", "cross-dir")
library(car)
scatterplotMatrix(y, smooth = FALSE, regLine = FALSE, var.labels = colnames(y),
                  diagonal = list(method = "boxplot"))
And this is what it looks like right now:
How can I remove the names from the diagonal and put them on the sides of the matrix, just like in the book?
Thanks in advance.
Data:
> y
density mach-dir cross-dir
1 0.801 121.41 70.42
2 0.824 127.70 72.47
3 0.841 129.20 78.20
4 0.816 131.80 74.89
5 0.840 135.10 71.21
6 0.842 131.50 78.39
7 0.820 126.70 69.02
8 0.802 115.10 73.10
9 0.828 130.80 79.28
10 0.819 124.60 76.48
11 0.826 118.31 70.25
12 0.802 114.20 72.88
13 0.810 120.30 68.23
14 0.802 115.70 68.12
15 0.832 117.51 71.62
16 0.796 109.81 53.10
17 0.759 109.10 50.85
18 0.770 115.10 51.68
19 0.759 118.31 50.60
20 0.772 112.60 53.51
21 0.806 116.20 56.53
22 0.803 118.00 70.70
23 0.845 131.00 74.35
24 0.822 125.70 68.29
25 0.971 126.10 72.10
26 0.816 125.80 70.64
27 0.836 125.50 76.33
28 0.815 127.80 76.75
29 0.822 130.50 80.33
30 0.822 127.90 75.68
31 0.843 123.90 78.54
32 0.824 124.10 71.91
33 0.788 120.80 68.22
34 0.782 107.40 54.42
35 0.795 120.70 70.41
36 0.805 121.91 73.68
37 0.836 122.31 74.93
38 0.788 110.60 53.52
39 0.772 103.51 48.93
40 0.776 110.71 53.67
41 0.758 113.80 52.42
And by the way, can we also display "Max, Med, Min" and the corresponding values on the diagonal as well? Thanks.
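For the first part (removing the names from the diagonal), a minimal sketch, assuming the installed version of car accepts empty strings in var.labels; putting the names on the outer axes is not something scatterplotMatrix exposes directly, as far as I know:
library(car)
# Blank labels suppress the variable names in the diagonal panels;
# the boxplots themselves are kept.
scatterplotMatrix(y, smooth = FALSE, regLine = FALSE,
                  var.labels = rep("", ncol(y)),
                  diagonal = list(method = "boxplot"))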

Averaging Duplicate Values in an R data frame

I have a df named ColorMap in which I am looking to average all numerical values corresponding to the same feature (further explanation below). Here is the df.
> ColorMap
KEGGnumber Colors
1 c("C00489" 0.162
2 "C06104" 0.162
3 "C02656") 0.162
4 C00163 -0.173
5 c("C02656" -0.140
6 "C00036" -0.140
7 "C00232" -0.140
8 "C01571" -0.140
9 "C00422") -0.140
10 c("C00402" 0.147
11 "C06664" 0.147
12 "C06687" 0.147
13 "C02059") 0.147
14 c("C00246" 0.069
15 "C00902") 0.069
**16 C00033 0.011
...
25 C00033 -0.073**
26 C00048 0.259
**27 c("C00803" 0.063
...
37 C00803 -0.200
38 C00803 -0.170**
39 c("C00164" -0.020
40 "C01712" -0.020
...
165 c("C00246" 0.076
166 "C00902") 0.076
**167 C00163 -0.063
...
169 C00163 0.046**
170 c("C00058" -0.208
171 "C00036") -0.208
172 C00121 -0.178
173 C00033 -0.193
174 C00163 -0.085
I would like the final result to look something like this:
> ColorMap
KEGGnumber Colors
1 C00489 0.162
2 C06104 0.162
3 C02656 0.162
4 C00163 -0.173
5 C02656 -0.140
6 C00036 -0.140
7 C00232 -0.140
8 C01571 -0.140
9 C00422 -0.140
10 C00402 0.147
11 C06664 0.147
12 C06687 0.147
13 C02059 0.147
14 C00246 0.069
15 C00902 0.069
**16 C00033 0.031**
26 C00048 0.259
**27 C00803 -0.100**
39 C00164 -0.020
40 C01712 -0.020
...
165 C00246 0.076
166 C00902 0.076
**167 C00163 0.0085**
170 C00058 -0.208
171 C00036 -0.208
172 C00121 -0.178
173 C00033 -0.193
174 C00163 -0.085
They do not need to be next to each other; I chose those rows simply for easy visualization. I would like the mean of all Colors values for each KEGGnumber, so that each KEGGnumber is unique, with no duplicates.
You can clean that column using
library(stringr)
ColorMap$KEGGnumber <- str_extract(ColorMap$KEGGnumber, "[C][0-9]+")
The pattern argument lets you match with a regular expression, in this case a simple one: the capital letter C followed by one or more digits.
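A quick check of what the pattern pulls out of one of the messy entries:
library(stringr)
str_extract('c("C00489"', "[C][0-9]+")
# [1] "C00489"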
Afterwards, grouping with dplyr we have:
library(dplyr)
ColorMap %>% group_by(KEGGnumber) %>% summarize(mean(Colors))
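Putting both steps together (a sketch, assuming the cleaned IDs replace the originals and the averaged column keeps the name Colors):
library(stringr)
library(dplyr)

ColorMap_clean <- ColorMap %>%
  mutate(KEGGnumber = str_extract(KEGGnumber, "[C][0-9]+")) %>%  # drop the c(" and ") wrappers
  group_by(KEGGnumber) %>%
  summarize(Colors = mean(Colors, na.rm = TRUE))                 # one averaged value per KEGG id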

Error in producing the output

I have a problem with my code and can't trace the error. I have coordinate data coor (a 40 by 2 matrix), shown below, and rainfall data (a 14610 by 40 matrix).
No Longitude Latitude
1 100.69 6.34
2 100.77 6.24
3 100.39 6.11
4 100.43 5.53
5 100.39 5.38
6 101.00 5.71
7 101.06 5.30
8 100.80 4.98
9 101.17 4.48
10 102.26 6.11
11 102.22 5.79
12 102.28 5.31
13 102.02 5.38
14 101.97 4.88
15 102.95 5.53
16 103.13 5.32
17 103.06 4.94
18 103.42 4.76
19 103.42 4.23
20 102.38 4.24
21 101.94 4.23
22 103.04 3.92
23 103.36 3.56
24 102.66 3.03
25 103.19 2.89
26 101.35 3.70
27 101.41 3.37
28 101.75 3.16
29 101.39 2.93
30 102.07 3.09
31 102.51 2.72
32 102.26 2.76
33 101.96 2.74
34 102.19 2.36
35 102.49 2.29
36 103.02 2.38
37 103.74 2.26
38 103.97 1.85
39 103.72 1.76
40 103.75 1.47
rainfall= 14610 by 40 matrix;
coor= 40 by 2 matrix
my_prog=function(rainrain,coordinat,misss,distance)
{
rain3<-rainrain # target station i**
# neighboring stations for target station i
a=coordinat # target station i**
diss=as.matrix(distHaversine(a,coor,r=6371))
mmdis=sort(diss,decreasing=F,index.return=T)
mdis=as.matrix(mmdis$x)
mdis1=as.matrix(mmdis$ix)
dist=cbind(mdis,mdis1)
# NA creation
# create missing values in rainfall data
set.seed(100)
b=sample(1:nrow(rain3),(misss*nrow(rain3)),replace=F)
k=replace(rain3,b,NA)
# pick i closest stations
neig=mdis1[distance] # neighbouring selection distance
# target (with NA) and their neighbors
rainB=rainfal00[,neig]
rainA0=rainB[,2:ncol(rainB)]
rainA<-as.matrix(cbind(k,rainA0))
rain2=na.omit(rainA)
x=as.matrix(rain2[,1]) # used to calculate the correlation
n1=ncol(rainA)-1
#1) normal ratio(nr)
jum=as.matrix(apply(rain2,2,mean))
nr0=(jum[1]/jum)
nr=as.matrix(nr0[2:nrow(nr0),])
m01=as.matrix(rainA[is.na(k),])
m1=m01[,2:ncol(m01)]
out1=as.matrix(sapply(seq_len(nrow(m1)),
function(i) sum(nr*m1[i,],na.rm=T)/n1))
print(out1)
}
impute=my_prog(rainrain=rainfall[,1],coordinat=coor[1,],misss=0.05,distance=mdis<200)
I have run this code and the output obtained is:
Error in my_prog(rainrain = rainfal00[, 1], misss = 0.05, coordinat = coor[1, :
object 'mdis' not found
I have checked the program but cannot trace the problem. I would really appreciate it if someone could help me.
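One likely cause, offered as a note rather than a verified fix: the argument distance = mdis < 200 is evaluated in the environment where my_prog is called, and mdis only exists inside the function, so R cannot find it. A minimal sketch of one way around this, assuming the intent is to keep stations within a 200 km cutoff and that distHaversine comes from the geosphere package; the rest of the function (NA creation, normal-ratio imputation) is left as in the original, where 'rainfal00' would also need to be the matrix that actually exists in the workspace:
library(geosphere)  # distHaversine()

my_prog <- function(rainrain, coordinat, misss, max_dist) {
  diss  <- as.matrix(distHaversine(coordinat, coor, r = 6371))
  mmdis <- sort(diss, decreasing = FALSE, index.return = TRUE)
  mdis  <- as.matrix(mmdis$x)   # sorted distances
  mdis1 <- as.matrix(mmdis$ix)  # station indices, nearest first

  neig <- mdis1[mdis < max_dist]  # the comparison now happens inside the function

  # ... NA creation and normal-ratio imputation as in the original code ...
  neig
}

impute <- my_prog(rainrain = rainfall[, 1], coordinat = coor[1, ],
                  misss = 0.05, max_dist = 200)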

Write a dataframe formatted to a csv sheet

I have a dataframe which looks like this:
> (eventStudyList120_After)
Dates Company Returns Market Returns Abnormal Returns
1 25.08.2009 4.81 0.62595516 4.184045
2 26.08.2009 4.85 0.89132960 3.958670
3 27.08.2009 4.81 -0.93323011 5.743230
4 28.08.2009 4.89 1.00388875 3.886111
5 31.08.2009 4.73 2.50655343 2.223447
6 01.09.2009 4.61 0.28025201 4.329748
7 02.09.2009 4.77 0.04999239 4.720008
8 03.09.2009 4.69 -1.52822071 6.218221
9 04.09.2009 4.89 -1.48860354 6.378604
10 07.09.2009 4.85 -0.38646531 5.236465
11 08.09.2009 4.89 -1.54065680 6.430657
12 09.09.2009 5.01 -0.35443455 5.364435
13 10.09.2009 5.01 -0.54107231 5.551072
14 11.09.2009 4.89 0.15189458 4.738105
15 14.09.2009 4.93 -0.36811321 5.298113
16 15.09.2009 4.93 -1.31185921 6.241859
17 16.09.2009 4.93 -0.53398643 5.463986
18 17.09.2009 4.97 0.44765285 4.522347
19 18.09.2009 5.01 0.81109101 4.198909
20 21.09.2009 5.01 -0.76254262 5.772543
21 22.09.2009 4.93 0.11309704 4.816903
22 23.09.2009 4.93 1.64429117 3.285709
23 24.09.2009 4.93 0.37294212 4.557058
24 25.09.2009 4.93 -2.59894035 7.528940
25 28.09.2009 5.21 0.29588776 4.914112
26 29.09.2009 4.93 0.49762314 4.432377
27 30.09.2009 5.41 2.17220569 3.237794
28 01.10.2009 5.21 1.67482716 3.535173
29 02.10.2009 5.25 -0.79014302 6.040143
30 05.10.2009 4.97 -2.69996146 7.669961
31 06.10.2009 4.97 0.18086490 4.789135
32 07.10.2009 5.21 -1.39072582 6.600726
33 08.10.2009 5.05 0.04210020 5.007900
34 09.10.2009 5.37 -1.14940251 6.519403
35 12.10.2009 5.13 1.16479551 3.965204
36 13.10.2009 5.37 -2.24208216 7.612082
37 14.10.2009 5.13 0.41327193 4.716728
38 15.10.2009 5.21 1.54473332 3.665267
39 16.10.2009 5.13 -1.73781565 6.867816
40 19.10.2009 5.01 0.66416288 4.345837
41 20.10.2009 5.09 -0.27007314 5.360073
42 21.10.2009 5.13 1.26968917 3.860311
43 22.10.2009 5.01 0.29432965 4.715670
44 23.10.2009 5.01 1.73758937 3.272411
45 26.10.2009 5.21 0.38854011 4.821460
46 27.10.2009 5.21 2.72671890 2.483281
47 28.10.2009 5.21 -1.76846884 6.978469
48 29.10.2009 5.41 2.95523593 2.454764
49 30.10.2009 5.37 -0.22681024 5.596810
50 02.11.2009 5.33 1.38835160 3.941648
51 03.11.2009 5.33 -1.83751398 7.167514
52 04.11.2009 5.21 -0.68721323 5.897213
53 05.11.2009 5.21 -0.26954741 5.479547
54 06.11.2009 5.21 -2.24083342 7.450833
55 09.11.2009 5.17 0.39168239 4.778318
56 10.11.2009 5.09 -0.99082271 6.080823
57 11.11.2009 5.17 0.07924735 5.090753
58 12.11.2009 5.81 -0.34424802 6.154248
59 13.11.2009 6.21 -2.00230195 8.212302
60 16.11.2009 7.81 0.48655978 7.323440
61 17.11.2009 7.69 -0.21092848 7.900928
62 18.11.2009 7.61 1.55605852 6.053941
63 19.11.2009 7.21 0.71028798 6.499712
64 20.11.2009 7.01 -2.38596631 9.395966
65 23.11.2009 7.25 0.55334705 6.696653
66 24.11.2009 7.21 -0.54239847 7.752398
67 25.11.2009 7.25 3.36386413 3.886136
68 26.11.2009 7.01 -1.28927630 8.299276
69 27.11.2009 7.09 0.98053264 6.109467
70 30.11.2009 7.09 -2.61935612 9.709356
71 01.12.2009 7.01 -0.11946242 7.129462
72 02.12.2009 7.21 0.17152317 7.038477
73 03.12.2009 7.21 -0.79343095 8.003431
74 04.12.2009 7.05 0.43919792 6.610802
75 07.12.2009 7.01 1.62169804 5.388302
76 08.12.2009 7.01 0.74055990 6.269440
77 09.12.2009 7.05 -0.99504492 8.045045
78 10.12.2009 7.21 -0.79728245 8.007282
79 11.12.2009 7.21 -0.73784636 7.947846
80 14.12.2009 6.97 -0.14656077 7.116561
81 15.12.2009 6.89 -1.42712116 8.317121
82 16.12.2009 6.97 0.95988962 6.010110
83 17.12.2009 6.69 0.22718293 6.462817
84 18.12.2009 6.53 -1.46958638 7.999586
85 21.12.2009 6.33 -0.21365446 6.543654
86 22.12.2009 6.65 -0.17256757 6.822568
87 23.12.2009 7.05 -0.59940253 7.649403
88 24.12.2009 7.05 NA NA
89 25.12.2009 7.05 NA NA
90 28.12.2009 7.05 -0.22307263 7.273073
91 29.12.2009 6.81 0.76736750 6.042632
92 30.12.2009 6.81 0.00000000 6.810000
93 31.12.2009 6.81 -1.50965723 8.319657
94 01.01.2010 6.81 NA NA
95 04.01.2010 6.65 0.06111069 6.588889
96 05.01.2010 6.65 -0.13159651 6.781597
97 06.01.2010 6.65 0.09545081 6.554549
98 07.01.2010 6.49 -0.32727619 6.817276
99 08.01.2010 6.81 -0.07225296 6.882253
100 11.01.2010 6.81 1.61131397 5.198686
101 12.01.2010 6.57 -0.40791980 6.977920
102 13.01.2010 6.85 -0.53016383 7.380164
103 14.01.2010 6.93 1.82016604 5.109834
104 15.01.2010 6.97 -0.62552046 7.595520
105 18.01.2010 6.93 -0.80490241 7.734902
106 19.01.2010 6.77 2.02857647 4.741424
107 20.01.2010 6.93 1.68204556 5.247954
108 21.01.2010 6.89 1.02683875 5.863161
109 22.01.2010 6.90 0.96765669 5.932343
110 25.01.2010 6.73 -0.57603687 7.306037
111 26.01.2010 6.81 0.50990350 6.300096
112 27.01.2010 6.81 1.64994011 5.160060
113 28.01.2010 6.61 -1.13511086 7.745111
114 29.01.2010 6.53 -0.82206204 7.352062
115 01.02.2010 7.03 -1.03993428 8.069934
116 02.02.2010 6.93 0.61692305 6.313077
117 03.02.2010 7.73 2.53012795 5.199872
118 04.02.2010 7.97 1.96223075 6.007769
119 05.02.2010 9.33 -0.76549820 10.095498
120 08.02.2010 8.01 -0.34391479 8.353915
Here is how I write it to a csv sheet:
write.table(eventStudyList120_After$`Abnormal Returns`, file = "C://Users//AbnormalReturns.csv", sep = ";")
In fact I want it to look like this:
So my question is:
How can I write the data frame as it is into a csv, and how can I transpose the Abnormal Returns column and put the header as in the example sheet?
Two approaches: transpose the data in R or in Excel
In R
Add an index column, select the columns you want, and transpose the data using the function t().
d <- anscombe
d$index <- 1:nrow(anscombe)
td <- t(d[c("index", "x1")])
write.table(td, "filename.csv", col.names = F, sep = ";")
Result:
"index";1;2;3;4;5;6;7;8;9;10;11
"x1";10;8;13;9;11;14;6;4;12;7;5
In Excel
Excel allows you to transpose data as well: http://office.microsoft.com/en-us/excel-help/switch-transpose-columns-and-rows-HP010224502.aspx
