Running a logistic regression on panel data - r

I want to run a logistic regression to calculate the probability of a student pursuing a master's degree.
I have a dataset containing many students who have taken certain courses in certain years. Each course also receives a rating (as does the tutor), and this rating is course- and year-specific.
These students may or may not do a master's at the same university. Based on the results a student gets, the ratings the courses get, and the number of resits a student does, I want to predict the probability of a student pursuing a master's.
To do so, I want to run a logistic regression, and hence I need to split the data into a training and a validation/test set. However, as you can see below, multiple rows can revolve around the same student; e.g. rows 1 to 12 all belong to student 9000006.
The problem with running a logistic regression now is that the regression treats every row as a separate unit, while in fact the students are 'grouped'.
Programme Resits Student_ID Course_code Academic_year Course_Grade_Binned Graduated Master_Student Course.rating_M Rating.tutor_M Selfstudy_M
1 IB 0 9000006 ABC1198 2013 B TRUE 1 7.5 8.2 14.1
2 IB 0 9000006 ABC1192 2014 B TRUE 1 8.4 8.8 13.0
3 IB 0 9000006 ABC1277 2014 A TRUE 1 6.0 6.4 10.6
4 IB 0 9000006 ABC1448 2013 B TRUE 1 5.7 7.8 14.4
5 IB 0 9000006 ABC1120 2014 B TRUE 1 7.1 7.4 11.2
6 IB 0 9000006 ABC1362 2013 B TRUE 1 6.7 7.5 15.8
7 IB 0 9000006 ABC1213 2013 C TRUE 1 7.7 8.1 11.4
8 IB 0 9000006 ABC1382 2013 B TRUE 1 6.6 7.1 16.3
9 IB 0 9000006 ABC1108 2013 C TRUE 1 7.1 7.6 15.7
10 IB 1 9000006 ABC1329 2014 B TRUE 1 7.5 7.9 10.7
11 IB 0 9000006 ABC1126 2013 B TRUE 1 6.7 7.5 15.3
12 IB 0 9000006 ABC1003 2013 B TRUE 1 7.3 8.5 12.6
13 IB 0 9000014 ABC1309 2014 B TRUE 0 6.9 6.1 12.4
14 IB 0 9000014 ABC1198 2013 A TRUE 0 7.5 8.2 14.1
15 IB 0 9000014 ABC1277 2014 A TRUE 0 6.0 6.4 10.6
16 IB 0 9000014 ABC1448 2013 A TRUE 0 5.7 7.8 14.4
17 IB 0 9000014 ABC1362 2013 B TRUE 0 6.7 7.5 15.8
18 IB 0 9000014 ABC1213 2013 B TRUE 0 7.7 8.1 11.4
19 IB 0 9000014 ABC1152 2014 A TRUE 0 7.0 7.6 12.3
20 IB 0 9000014 ABC1382 2013 A TRUE 0 6.6 7.1 16.3
21 IB 0 9000014 ABC1108 2013 B TRUE 0 7.1 7.6 15.7
22 IB 0 9000014 ABC1455 2014 A TRUE 0 6.7 7.3 11.2
23 IB 0 9000014 ABC1126 2013 B TRUE 0 6.7 7.5 15.3
24 IB 0 9000014 ABC1003 2013 A TRUE 0 7.3 8.5 12.6
25 IB 1 9000028 ABC1213 2014 C TRUE 0 7.8 8.6 10.7
26 IB 0 9000028 ABC1198 2014 B TRUE 0 7.1 8.0 15.5
Does anyone have tips on how to perform a logistic regression on this kind of data? If you have another suggestion for calculating the probability of a student pursuing a master's, please let me know as well :)
Cheers!
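One way to respect the grouping - a minimal sketch, not a definitive answer - is to aggregate to one row per student and split train/test by Student_ID rather than by row, so no student ends up in both sets. This assumes the dataframe is called Academic as printed above; if you want to keep the course-level rows instead, a mixed model such as lme4::glmer with a random intercept per student would be the usual alternative.
library(dplyr)

set.seed(42)
ids       <- unique(Academic$Student_ID)
train_ids <- sample(ids, floor(0.8 * length(ids)))  # sample 80% of students, not rows

# collapse the course-level rows to one row per student
per_student <- Academic %>%
  group_by(Student_ID) %>%
  summarise(Master_Student  = first(Master_Student),
            Resits          = sum(Resits),
            Course.rating_M = mean(Course.rating_M),
            Rating.tutor_M  = mean(Rating.tutor_M),
            Selfstudy_M     = mean(Selfstudy_M))

train <- filter(per_student,  Student_ID %in% train_ids)
test  <- filter(per_student, !Student_ID %in% train_ids)

fit <- glm(Master_Student ~ Resits + Course.rating_M + Rating.tutor_M + Selfstudy_M,
           data = train, family = binomial)
predict(fit, newdata = test, type = "response")  # P(master) per held-out student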

Related

How to sample data non-randomly

I have a weather dataset and my data is date-dependent.
I want to predict the temperature from 07 May 2008 until 18 May 2008 (which is maybe a total of 10-15 observations); my data size is around 200.
I will be using decision trees/RF and SVM & NN to make my prediction.
I've never handled data like this, so I'm not sure how to sample non-random data.
I want to split the data into 80% train data and 20% test data, but I want to sample the data in the original order, not randomly. Is that possible?
install.packages("rattle")
install.packages("RGtk2")
library("rattle")
seed <- 42
set.seed(seed)
fname <- system.file("csv", "weather.csv", package = "rattle")
dataset <- read.csv(fname, encoding = "UTF-8")
dataset <- dataset[1:200,]
dataset <- dataset[order(dataset$Date),]
set.seed(321)
sample_data = sample(nrow(dataset), nrow(dataset)*.8)
test<-dataset[sample_data,] # 30%
train<-dataset[-sample_data,] # 80%
Output:
> head(dataset)
Date Location MinTemp MaxTemp Rainfall Evaporation Sunshine WindGustDir WindGustSpeed
1 2007-11-01 Canberra 8.0 24.3 0.0 3.4 6.3 NW 30
2 2007-11-02 Canberra 14.0 26.9 3.6 4.4 9.7 ENE 39
3 2007-11-03 Canberra 13.7 23.4 3.6 5.8 3.3 NW 85
4 2007-11-04 Canberra 13.3 15.5 39.8 7.2 9.1 NW 54
5 2007-11-05 Canberra 7.6 16.1 2.8 5.6 10.6 SSE 50
6 2007-11-06 Canberra 6.2 16.9 0.0 5.8 8.2 SE 44
WindDir9am WindDir3pm WindSpeed9am WindSpeed3pm Humidity9am Humidity3pm Pressure9am
1 SW NW 6 20 68 29 1019.7
2 E W 4 17 80 36 1012.4
3 N NNE 6 6 82 69 1009.5
4 WNW W 30 24 62 56 1005.5
5 SSE ESE 20 28 68 49 1018.3
6 SE E 20 24 70 57 1023.8
Pressure3pm Cloud9am Cloud3pm Temp9am Temp3pm RainToday RISK_MM RainTomorrow
1 1015.0 7 7 14.4 23.6 No 3.6 Yes
2 1008.4 5 3 17.5 25.7 Yes 3.6 Yes
3 1007.2 8 7 15.4 20.2 Yes 39.8 Yes
4 1007.0 2 7 13.5 14.1 Yes 2.8 Yes
5 1018.5 7 7 11.1 15.4 Yes 0.0 No
6 1021.7 7 5 10.9 14.8 No 0.2 No
> head(test)
Date Location MinTemp MaxTemp Rainfall Evaporation Sunshine WindGustDir WindGustSpeed
182 2008-04-30 Canberra -1.8 14.8 0.0 1.4 7.0 N 28
77 2008-01-16 Canberra 17.9 33.2 0.0 10.4 8.4 N 59
88 2008-01-27 Canberra 13.2 31.3 0.0 6.6 11.6 WSW 46
58 2007-12-28 Canberra 15.1 28.3 14.4 8.8 13.2 NNW 28
96 2008-02-04 Canberra 18.2 22.6 1.8 8.0 0.0 ENE 33
126 2008-03-05 Canberra 12.0 27.6 0.0 6.0 11.0 E 46
WindDir9am WindDir3pm WindSpeed9am WindSpeed3pm Humidity9am Humidity3pm Pressure9am
182 E N 2 19 80 40 1024.2
77 N NNE 15 20 58 62 1008.5
88 N WNW 4 26 71 28 1013.1
58 NNW NW 6 13 73 44 1016.8
96 SSE ENE 7 13 92 76 1014.4
126 SSE WSW 7 6 69 35 1025.5
Pressure3pm Cloud9am Cloud3pm Temp9am Temp3pm RainToday RISK_MM RainTomorrow
182 1020.5 1 7 5.3 13.9 No 0.0 No
77 1006.1 6 7 24.5 23.5 No 4.8 Yes
88 1009.5 1 4 19.7 30.7 No 0.0 No
58 1013.4 1 5 18.3 27.4 Yes 0.0 No
96 1011.5 8 8 18.5 22.1 Yes 9.0 Yes
126 1022.2 1 1 15.7 26.2 No 0.0 No
> head(train)
Date Location MinTemp MaxTemp Rainfall Evaporation Sunshine WindGustDir WindGustSpeed
7 2007-11-07 Canberra 6.1 18.2 0.2 4.2 8.4 SE 43
9 2007-11-09 Canberra 8.8 19.5 0.0 4.0 4.1 S 48
11 2007-11-11 Canberra 9.1 25.2 0.0 4.2 11.9 N 30
16 2007-11-16 Canberra 12.4 32.1 0.0 8.4 11.1 E 46
22 2007-11-22 Canberra 16.4 19.4 0.4 9.2 0.0 E 26
25 2007-11-25 Canberra 15.4 28.4 0.0 4.4 8.1 ENE 33
WindDir9am WindDir3pm WindSpeed9am WindSpeed3pm Humidity9am Humidity3pm Pressure9am
7 SE ESE 19 26 63 47 1024.6
9 E ENE 19 17 70 48 1026.1
11 SE NW 6 9 74 34 1024.4
16 SE WSW 7 9 70 22 1017.9
22 ENE E 6 11 88 72 1010.7
25 SSE NE 9 15 85 31 1022.4
Pressure3pm Cloud9am Cloud3pm Temp9am Temp3pm RainToday RISK_MM RainTomorrow
7 1022.2 4 6 12.4 17.3 No 0.0 No
9 1022.7 7 7 14.1 18.9 No 16.2 Yes
11 1021.1 1 2 14.6 24.0 No 0.2 No
16 1012.8 0 3 19.1 30.7 No 0.0 No
22 1008.9 8 8 16.5 18.3 No 25.8 Yes
25 1018.6 8 2 16.8 27.3 No 0.0 No
I use mtcars as an example. One option to non-randomly split your data into train and test sets is to first compute a sample size based on the number of rows in your data. After that you can use split() to cut the data exactly at 80% of the rows, using the following code:
smp_size <- floor(0.80 * nrow(mtcars))
split <- split(mtcars, rep(1:2, each = smp_size))
With the following code you can turn the split into train and test sets:
train <- split$`1`
test <- split$`2`
Let's check the number of rows:
> nrow(train)
[1] 25
> nrow(test)
[1] 7
Now the data is split into train and test without losing its order.
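The same ordered split can also be written without split(), as a small sketch using plain indexing (this assumes the rows are already in the order you want to keep):
smp_size <- floor(0.80 * nrow(mtcars))
train <- mtcars[seq_len(smp_size), ]           # first 80% of the rows
test  <- mtcars[(smp_size + 1):nrow(mtcars), ] # remaining 20%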

R - Dataframe (group_by/aggregate/pivot_wider) Manipulation [duplicate]

I'm currently having an issue manipulating/aggregating my dataframe. The data frame I currently have is as follows:
Farm    Year   Cow  Duck  Chicken  Sheep  Horse
Farm 1  2020    22    12      100     30     25
Farm 1  2020     0    12      120     20     20
Farm 1  2019    16     6       80     10     16
Farm 1  2019    12     0       50      0     11
Farm 1  2018     8     0        0     16      0
Farm 1  2018     0     0       10     13     12
Farm 2  2020    31    28       27     10     14
Farm 2  2020     0    13       31     20      0
Farm 2  2019     3    31        0     20     43
Farm 2  2019    20    50       43     17     42
Farm 2  2018    39    33        0     48     10
Farm 2  2018    34    20       28     12     12
Farm 3  2020    27     0       37     30     42
Farm 3  2020    50     9        0      0      0
Farm 3  2019     0    19        0     20     16
Farm 3  2019     0     2        0      0      7
Farm 3  2018     0     0        5     27      0
Farm 3  2018     0     7       43     49     42
For simplicity, the code for the data frame is as follows:
Farms = c(rep("Farm 1", 6), rep("Farm 2", 6), rep("Farm 3", 6))
Year = rep(c(2020,2020,2019,2019,2018,2018),3)
Cow = c(22,0,16,12,8,0,31,0,3,20,39,34,27,50,0,0,0,0)
Duck = c(12,12,6,0,0,0,28,13,31,50,33,20,0,9,19,2,0,7)
Chicken = c(100,120,80,50,0,10,27,31,0,43,0,28,37,0,0,0,5,43)
Sheep = c(30,20,10,0,16,13,10,20,20,17,48,12,30,0,20,0,27,49)
Horse = c(25,20,16,11,0,12,14,0,43,42,10,12,42,0,16,7,0,42)
Data = data.frame(Farms, Year, Cow, Duck, Chicken, Sheep, Horse)
Does anyone know how I can change the dataframe into the table below using group_by and/or aggregate and/or pivot_wider, or any other way? The table below aggregates each farm by year and takes the average of each animal for that year.
Farm    Year   Cow   Duck  Chicken  Sheep  Horse
Farm 1  2020  11.0   12.0    110.0   25.0   22.5   (Cow: average of 2020 = (22+0)/2 = 11)
Farm 1  2019  14.0    3.0     65.0    5.0   13.5
Farm 1  2018   4.0    0.0      5.0   14.5    6.0
Farm 2  2020  15.5   20.5     29.0   15.0    7.0
Farm 2  2019  11.5   40.5     21.5   18.5   42.5
Farm 2  2018  36.5   26.5     14.0   30.0   11.0
Farm 3  2020  38.5    4.5     18.5   15.0   21.0
Farm 3  2019   0.0   10.5      0.0   10.0   11.5
Farm 3  2018   0.0    3.5     24.0   38.0   21.0
Thank you in advance, and a happy 2022 to all!
aggregate(.~Year + Farms, Data, mean)
Year Farms Cow Duck Chicken Sheep Horse
1 2018 Farm 1 4.0 0.0 5.0 14.5 6.0
2 2019 Farm 1 14.0 3.0 65.0 5.0 13.5
3 2020 Farm 1 11.0 12.0 110.0 25.0 22.5
4 2018 Farm 2 36.5 26.5 14.0 30.0 11.0
5 2019 Farm 2 11.5 40.5 21.5 18.5 42.5
6 2020 Farm 2 15.5 20.5 29.0 15.0 7.0
7 2018 Farm 3 0.0 3.5 24.0 38.0 21.0
8 2019 Farm 3 0.0 10.5 0.0 10.0 11.5
9 2020 Farm 3 38.5 4.5 18.5 15.0 21.0
aggregate(.~Farms + Year, Data, mean)
Farms Year Cow Duck Chicken Sheep Horse
1 Farm 1 2018 4.0 0.0 5.0 14.5 6.0
2 Farm 2 2018 36.5 26.5 14.0 30.0 11.0
3 Farm 3 2018 0.0 3.5 24.0 38.0 21.0
4 Farm 1 2019 14.0 3.0 65.0 5.0 13.5
5 Farm 2 2019 11.5 40.5 21.5 18.5 42.5
6 Farm 3 2019 0.0 10.5 0.0 10.0 11.5
7 Farm 1 2020 11.0 12.0 110.0 25.0 22.5
8 Farm 2 2020 15.5 20.5 29.0 15.0 7.0
9 Farm 3 2020 38.5 4.5 18.5 15.0 21.0
Data%>%
group_by(Farms, Year) %>%
summarise(across(everything(), mean), .groups = 'drop')
# A tibble: 9 x 7
Farms Year Cow Duck Chicken Sheep Horse
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Farm 1 2018 4 0 5 14.5 6
2 Farm 1 2019 14 3 65 5 13.5
3 Farm 1 2020 11 12 110 25 22.5
4 Farm 2 2018 36.5 26.5 14 30 11
5 Farm 2 2019 11.5 40.5 21.5 18.5 42.5
6 Farm 2 2020 15.5 20.5 29 15 7
7 Farm 3 2018 0 3.5 24 38 21
8 Farm 3 2019 0 10.5 0 10 11.5
9 Farm 3 2020 38.5 4.5 18.5 15 21
Onyambu's answer is good. But one small thing - and I know you didn't ask for this - you might want to consider whether by average you want the mean or the median. At first glance, it looks like the data might be rather skewed, and the median might be better for you. A quick density plot per animal makes this easy to check:
Data %>%
pivot_longer(names_to = 'names', values_to = 'values', 3:7) %>%
ggplot(aes(x = values)) + geom_density() + facet_wrap(~names)
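If the densities do look skewed, switching Onyambu's summary from mean to median is a one-word change; a sketch under the same assumptions:
library(dplyr)
Data %>%
  group_by(Farms, Year) %>%
  summarise(across(everything(), median), .groups = 'drop')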

For loop to check if value exists in other dataframe

I have a large dataframe with 31181 observations and 9 variables. In this dataframe, the academic performance of students is registered.
I also have a second dataframe, in which each student is represented by a single row. In that row I would like to store his/her results from the academic-performance dataframe.
Dataframe 1 (let's call it Academic) looks as follows:
Programme Resits Student_ID Course_code Academic_year Course_Grade_Binned Graduated Master_Student Course.rating_M Rating.tutor_M Selfstudy_M
1 IB 0 9000006 ABC1198 2013 B TRUE 1 7.5 8.2 14.1
2 IB 0 9000006 ABC1192 2014 B TRUE 1 8.4 8.8 13.0
3 IB 0 9000006 ABC1277 2014 A TRUE 1 6.0 6.4 10.6
4 IB 0 9000006 ABC1448 2013 B TRUE 1 5.7 7.8 14.4
5 IB 0 9000006 ABC1120 2014 B TRUE 1 7.1 7.4 11.2
6 IB 0 9000006 ABC1362 2013 B TRUE 1 6.7 7.5 15.8
7 IB 0 9000006 ABC1213 2013 C TRUE 1 7.7 8.1 11.4
8 IB 0 9000006 ABC1382 2013 B TRUE 1 6.6 7.1 16.3
9 IB 0 9000006 ABC1108 2013 C TRUE 1 7.1 7.6 15.7
10 IB 1 9000006 ABC1329 2014 B TRUE 1 7.5 7.9 10.7
11 IB 0 9000006 ABC1126 2013 B TRUE 1 6.7 7.5 15.3
12 IB 0 9000006 ABC1003 2013 B TRUE 1 7.3 8.5 12.6
13 IB 0 9000014 ABC1309 2014 B TRUE 0 6.9 6.1 12.4
14 IB 0 9000014 ABC1198 2013 A TRUE 0 7.5 8.2 14.1
15 IB 0 9000014 ABC1277 2014 A TRUE 0 6.0 6.4 10.6
16 IB 0 9000014 ABC1448 2013 A TRUE 0 5.7 7.8 14.4
17 IB 0 9000014 ABC1362 2013 B TRUE 0 6.7 7.5 15.8
18 IB 0 9000014 ABC1213 2013 B TRUE 0 7.7 8.1 11.4
19 IB 0 9000014 ABC1152 2014 A TRUE 0 7.0 7.6 12.3
20 IB 0 9000014 ABC1382 2013 A TRUE 0 6.6 7.1 16.3
21 IB 0 9000014 ABC1108 2013 B TRUE 0 7.1 7.6 15.7
22 IB 0 9000014 ABC1455 2014 A TRUE 0 6.7 7.3 11.2
23 IB 0 9000014 ABC1126 2013 B TRUE 0 6.7 7.5 15.3
24 IB 0 9000014 ABC1003 2013 A TRUE 0 7.3 8.5 12.6
25 IB 1 9000028 ABC1213 2014 C TRUE 0 7.8 8.6 10.7
26 IB 0 9000028 ABC1198 2014 B TRUE 0 7.1 8.0 15.5
Dataframe 2 (let's call it NewData) looks like this:
Student_ID Master Resits Programme ABC1198 ABC1192 ABC1277 ABC1448 ABC1120 ABC1362 ABC1213 ABC1382 ABC1108 ABC1329 ABC1126 ABC1003 ABC1309 ABC1152 ABC1455 ABC1123 ABC1409
1 9000006 1 1 IB NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
2 9000014 0 0 IB NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
3 9000028 0 5 IB NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
4 9000045 1 5 EBE NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
As you can see, all the course columns are still NA. I would like to create a loop that checks whether a course code exists within a group (i.e. per Student_ID) in the Academic dataframe, and then puts a 1 in that course column in the NewData dataframe, or a 0 if the student didn't take that course.
The end result (the NewData) should thus look like this:
Student_ID Master Resits Programme ABC1198 ABC1192 ABC1277 ABC1448 ABC1120 ABC1362 ABC1213 ABC1382 ABC1108 ABC1329 ABC1126 ABC1003 ABC1309 ABC1152 ABC1455 ABC1123 ABC1409
1 9000006 1 1 IB 1 1 1 1 1 1 1 0 1 1 1 1 1 0 0 0 0
Using base R, we can first define the columns of NewData where the courses live. Then split Course_code by Student_ID and create a logical vector with %in%, based on the courses each student took.
cols <- 5:ncol(NewData)
NewData[cols] <- t(sapply(split(Academic$Course_code, Academic$Student_ID),
function(x) +(names(NewData)[cols] %in% x)))
You can also use tidyr:
library(tidyr)
library(dplyr)
Academic$value <- 1
NewData <- Academic %>% spread(key = Course_code, value = value)
NewData[is.na(NewData)] <- 0
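As a side note, spread() is superseded in current tidyr; a pivot_wider() sketch of the same idea, where values_fill replaces the manual NA step, would be:
library(tidyr)
library(dplyr)
NewData <- Academic %>%
  mutate(value = 1) %>%
  pivot_wider(names_from = Course_code, values_from = value, values_fill = 0)
The same caveat applies as for the spread() version: columns that vary per course (grade, ratings) will keep more than one row per student unless you drop them first.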

Check for nearest value in a column

Is there a way to check which value in a vector/column is nearest to a given value?
So, for example, I have a column with numbers of days:
days: 50, 49, 59, 180, 170, 199, 200
I want to make a new column in the dataframe that marks an X every time the days column has the value 183 or a value close to 183.
It should look like this:
DAYS new column
0
12
12
14
133
140 X
0
12
14
15
178
183 X
0
15
30
72
172 X
Hope you can help me!
You're essentially searching for local minima. Start off by normalizing your data against your target, i.e. 183, and search for the values closest to zero; those are your local minima. I added data with values greater than your target to demonstrate.
df <- data.frame(DAYS = c(0,12,12,14,133,140,0,12,14,15,178,183,184,190,0,15,30,72,172,172.5))
df$localmin <- abs(df$DAYS - 183)
df
> df
DAYS localmin
1 0.0 183.0
2 12.0 171.0
3 12.0 171.0
4 14.0 169.0
5 133.0 50.0
6 140.0 43.0
7 0.0 183.0
8 12.0 171.0
9 14.0 169.0
10 15.0 168.0
11 178.0 5.0
12 183.0 0.0
13 184.0 1.0
14 190.0 7.0
15 0.0 183.0
16 15.0 168.0
17 30.0 153.0
18 72.0 111.0
19 172.0 11.0
20 172.5 10.5
targets <- which(diff(sign(diff(c(df$localmin, 183)))) == 2) + 1L
df$targets <- 0
df$targets[targets] <- 1
df
> df
DAYS localmin targets
1 0.0 183.0 0
2 12.0 171.0 0
3 12.0 171.0 0
4 14.0 169.0 0
5 133.0 50.0 0
6 140.0 43.0 1
7 0.0 183.0 0
8 12.0 171.0 0
9 14.0 169.0 0
10 15.0 168.0 0
11 178.0 5.0 0
12 183.0 0.0 1
13 184.0 1.0 0
14 190.0 7.0 0
15 0.0 183.0 0
16 15.0 168.0 0
17 30.0 153.0 0
18 72.0 111.0 0
19 172.0 11.0 0
20 172.5 10.5 1
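If you want the literal 'X' column from the question rather than 0/1 flags, it is one more line on the same df (a small sketch):
df$new_column <- ifelse(df$targets == 1, "X", "") # mark each local minimum with an X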

How to change a column classed as NULL to class integer?

So I'm starting with a dataframe called max.mins that has 153 rows.
day Tx Hx Tn
1 1 10.0 7.83 2.1
2 2 7.7 6.19 2.5
3 3 7.1 4.86 0.0
4 4 9.8 7.37 2.7
5 5 13.4 12.68 0.4
6 6 17.5 17.47 3.5
7 7 16.5 15.58 6.5
8 8 21.5 20.30 6.2
9 9 21.7 21.41 9.7
10 10 24.4 28.18 8.0
I'm applying these statements to the dataframe to look for specific criteria:
temp_warnings <- subset(max.mins, Tx >= 32 & Tn >=20)
humidex_warnings <- subset(max.mins, Hx >= 40)
Now when I open up humidex_warnings, for example, I have this dataframe:
row.names day Tx Hx Tn
1 41 10 31.1 40.51 20.7
2 56 25 33.4 42.53 19.6
3 72 11 34.1 40.78 18.1
4 73 12 33.8 40.18 18.8
5 74 13 34.1 41.10 22.4
6 79 18 30.3 41.57 22.5
7 94 2 31.4 40.81 20.3
8 96 4 30.7 40.39 20.2
The next step is to search for runs of 2 or 3 consecutive numbers in the row.names column and count how many times this occurs (I asked this in a previous question and have a function that should work once this problem is sorted out). The issue is that row.names is of class NULL, which prevents me from applying further functions to this dataframe.
Help? :)
Thanks in advance,
Nick
If you need the row.names as data, as integers:
humidex_warnings$seq <- as.integer(row.names(humidex_warnings))
If you don't need the row.names, reset them:
row.names(humidex_warnings) <- NULL
