Running a function for each group in R

I have the following parameters for a Gompertz curve:
A <- 100 # A is always 100
mu <- 35
lambda <- 265 # day of the year. Also the start day
I can use the above parameters to run a Gompertz model with the following call:
grofit::gompertz(time,A,mu,lambda)
time is basically a vector of lambda:end.day.
Now the issue is that I know lambda (the start day) but not the end day. I want to find the end day, i.e. the day on which the curve reaches 100.
For example, with the parameters above, if I supply lambda:end.day as 265:270, I do not reach 100.
time <- 265:270
x <- round(grofit::gompertz(time,A,mu,lambda),2)
x
[1] 6.60 35.00 66.67 85.51 94.13 97.69
Through trial and error, I know that if I supply a vector of 265:277, I will reach 100.
time <- 265:277
x <- round(grofit::gompertz(time,A,mu,lambda),2)
x
[1] 6.60 35.00 66.67 85.51 94.13 97.69
[7] 99.10 99.65 99.87 99.95 99.98 99.99
[13] 100.00
I have a data frame with the lambda (start day) and mu for each id and year.
df <- data.frame(id = c(1,1,2,2), year = c(1981,1982,1981,1982), mu= c(35,32,33,28), lambda = c(275,278,284,296))
For each id and year, I want two columns: a column called day whose first value equals lambda, and a column x giving the value of the curve for each day until it reaches 100 (the end day).
How do I apply the above equation for each id and year so that I get a data frame like this:
id year day x
1 1981 275 6.6
1 1981 276 35
1 1981 277 66.67
1 1981 278 85.51
1 1981 279 94.13
1 1981 280 97.69
1 1981 281 99.1
1 1981 282 99.65
1 1981 283 99.87
1 1981 284 99.95
1 1981 285 99.98
1 1981 286 99.99
1 1981 287 100
. . . .
. . . .
2 1982 296 8
2 1982 297 33
2 1982 298 45
2 1982 299 63
2 1982 300 61
2 1982 301 73
2 1982 302 81
2 1982 303 91
2 1982 304 94
2 1982 305 98
2 1982 306 99
2 1982 307 100

Using dplyr and tidyr:
library(dplyr)
library(tidyr)
A <- 100 # A is always 100
df <- data.frame(
  id = c(1, 1, 2, 2),
  year = c(1981, 1982, 1981, 1982),
  mu = c(35, 32, 33, 28),
  lambda = c(275, 278, 284, 296)
)
df2 <- df %>%
  crossing(day = 1:365) %>%
  group_by(id, year) %>%
  filter(day >= lambda) %>%
  mutate(x = round(grofit::gompertz(day, A, mu, lambda), 2)) %>%
  group_by(id, year, x) %>%
  filter(x != 100 | row_number() == 1)
df2 %>%
  as.data.frame()
Result:
id year mu lambda day x
1 1 1981 35 275 275 6.60
2 1 1981 35 275 276 35.00
3 1 1981 35 275 277 66.67
4 1 1981 35 275 278 85.51
5 1 1981 35 275 279 94.13
6 1 1981 35 275 280 97.69
7 1 1981 35 275 281 99.10
8 1 1981 35 275 282 99.65
9 1 1981 35 275 283 99.87
10 1 1981 35 275 284 99.95
11 1 1981 35 275 285 99.98
12 1 1981 35 275 286 99.99
13 1 1981 35 275 287 100.00
14 1 1982 32 278 278 6.60
15 1 1982 32 278 279 32.01
16 1 1982 32 278 280 62.05
17 1 1982 32 278 281 81.87
18 1 1982 32 278 282 91.96
19 1 1982 32 278 283 96.55
20 1 1982 32 278 284 98.54
21 1 1982 32 278 285 99.39
22 1 1982 32 278 286 99.74
23 1 1982 32 278 287 99.89
24 1 1982 32 278 288 99.95
25 1 1982 32 278 289 99.98
26 1 1982 32 278 290 99.99
27 1 1982 32 278 291 100.00
28 2 1981 33 284 284 6.60
29 2 1981 33 284 285 33.01
30 2 1981 33 284 286 63.64
31 2 1981 33 284 287 83.17
32 2 1981 33 284 288 92.76
33 2 1981 33 284 289 96.98
34 2 1981 33 284 290 98.76
35 2 1981 33 284 291 99.49
36 2 1981 33 284 292 99.79
37 2 1981 33 284 293 99.92
38 2 1981 33 284 294 99.97
39 2 1981 33 284 295 99.99
40 2 1981 33 284 296 99.99
41 2 1981 33 284 297 100.00
42 2 1982 28 296 296 6.60
43 2 1982 28 296 297 28.09
44 2 1982 28 296 298 55.26
45 2 1982 28 296 299 75.80
46 2 1982 28 296 300 87.86
47 2 1982 28 296 301 94.13
48 2 1982 28 296 302 97.21
49 2 1982 28 296 303 98.69
50 2 1982 28 296 304 99.39
51 2 1982 28 296 305 99.71
52 2 1982 28 296 306 99.87
53 2 1982 28 296 307 99.94
54 2 1982 28 296 308 99.97
55 2 1982 28 296 309 99.99
56 2 1982 28 296 310 99.99
57 2 1982 28 296 311 100.00
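If you only need the end day itself for a single set of parameters, a minimal base-R sketch (assuming the grofit package is available; end_day_for is a hypothetical helper, not part of the answer above) is to step forward one day at a time until the rounded value first reaches 100:
end_day_for <- function(A, mu, lambda, target = 100) {
  day <- lambda
  repeat {
    # gompertz() accepts a single day as well as a vector of days
    x <- round(grofit::gompertz(day, A, mu, lambda), 2)
    if (x >= target) return(day)
    day <- day + 1
  }
}
end_day_for(A = 100, mu = 35, lambda = 265) # 277 for the example in the question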

Related

Is there an R function that turns a frequency table into a prop table?

What is the simplest way of turning a frequency data table into a prop table in R?
This is the data:
Time Total Blog News Social.Network Microblog Other Forums Pictures Video
1 15.KW 2022 1816 23 326 39 678 99 27 523 0
2 16.KW 2022 2535 32 690 42 815 135 26 644 1
3 17.KW 2022 2181 20 362 79 805 110 14 634 1
4 18.KW 2022 2583 19 895 25 692 127 6 658 0
5 19.KW 2022 2337 21 555 22 908 148 8 599 0
6 20.KW 2022 2091 23 392 18 851 119 5 554 0
7 21.KW 2022 1658 17 344 16 650 129 1 417 0
8 22.KW 2022 2476 24 798 24 937 150 7 443 0
9 23.KW 2022 1687 14 341 17 691 102 9 400 0
10 24.KW 2022 2476 21 521 29 984 110 19 509 0
11 25.KW 2022 2412 22 696 31 845 115 29 561 0
12 26.KW 2022 2197 22 715 13 709 128 59 445 0
13 27.KW 2022 2111 20 429 10 937 86 28 474 1
14 28.KW 2022 752 5 121 4 373 42 3 172 0
Your data frame df has a 2nd column called Total. It seems that you want to divide subsequent columns by this one.
df[-1] <- df[-1] / df$Total
After this, the first column Time does not change, the second column Total becomes 1, and the other columns become proportions.
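The same operation can be written with dplyr (a sketch, assuming df is the data frame above; across() leaves Time untouched and divides each count column by Total):
library(dplyr)
df %>%
  mutate(across(-c(Time, Total), ~ .x / Total),
         Total = 1)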

Putting several rows into one column in R

I am trying to run a time series analysis on the following data set:
Year 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780
Number 101 82 66 35 31 7 20 92 154 125
Year 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790
Number 85 68 38 23 10 24 83 132 131 118
Year 1791 1792 1793 1794 1795 1796 1797 1798 1799 1800
Number 90 67 60 47 41 21 16 6 4 7
Year 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810
Number 14 34 45 43 48 42 28 10 8 2
Year 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820
Number 0 1 5 12 14 35 46 41 30 24
Year 1821 1822 1823 1824 1825 1826 1827 1828 1829 1830
Number 16 7 4 2 8 17 36 50 62 67
Year 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840
Number 71 48 28 8 13 57 122 138 103 86
Year 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850
Number 63 37 24 11 15 40 62 98 124 96
Year 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860
Number 66 64 54 39 21 7 4 23 55 94
Year 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870
Number 96 77 59 44 47 30 16 7 37 74
My problem is that the data is spread across multiple rows. I am trying to make two columns from the data, one for Year and one for Number, so that it is easily readable in R. I have tried
> library(tidyverse)
> sun.df = data.frame(sunspots)
> Year = filter(sun.df, sunspots == "Year")
to isolate the Year data, and it works, but I am unsure of how to then place it in a column.
Any suggestions?
Try this:
library(tidyverse)
df <- read_csv("test.csv",col_names = FALSE)
df
# A tibble: 6 x 4
# X1 X2 X3 X4
# <chr> <dbl> <dbl> <dbl>
# 1 Year 123 124 125
# 2 Number 1 2 3
# 3 Year 126 127 128
# 4 Number 4 5 6
# 5 Year 129 130 131
# 6 Number 7 8 9
# Removing first column and transpose it to get a dataframe of numbers
df_number <- as.data.frame(as.matrix(t(df[,-1])),row.names = FALSE)
df_number
# V1 V2 V3 V4 V5 V6
# 1 123 1 126 4 129 7
# 2 124 2 127 5 130 8
# 3 125 3 128 6 131 9
# Keep the first two column (V1,V2) and assign column names
df_new <- df_number[1:2]
colnames(df_new) <- c("Year","Number")
# Iterate and rbind with subsequent columns (2 by 2) to df_new
for (i in 1:((ncol(df_number) - 2) / 2)) {
  df_mini <- df_number[(i*2+1):(i*2+2)]
  colnames(df_mini) <- c("Year","Number")
  df_new <- rbind(df_new, df_mini)
}
df_new
# Year Number
# 1 123 1
# 2 124 2
# 3 125 3
# 4 126 4
# 5 127 5
# 6 128 6
# 7 129 7
# 8 130 8
# 9 131 9
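A more compact sketch of the same idea (assuming the same df read in above; transposing the Year rows and the Number rows separately keeps each Year/Number pair aligned):
# Year rows and Number rows, read left to right, top to bottom
yr  <- as.numeric(t(df[df$X1 == "Year", -1]))
num <- as.numeric(t(df[df$X1 == "Number", -1]))
df_new <- data.frame(Year = yr, Number = num)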

Pivot / Reshape data [closed]

My sample data looks like this:
data <- read.table(header=T, text='
pid measurement1 Tdays1 measurement2 Tdays2 measurement3 Tdays3 measurment4 Tdays4
1 1356 1435 1483 1405 1563 1374 NA NA
2 943 1848 1173 1818 1300 1785 NA NA
3 1590 185 NA NA NA NA 1585 294
4 130 72 443 70 NA NA 136 79
4 140 82 NA NA NA NA 756 89
4 220 126 266 124 NA NA 703 128
4 166 159 213 156 476 145 776 166
4 380 189 583 173 NA NA 586 203
4 353 231 510 222 656 217 526 240
4 180 268 NA NA NA NA NA NA
4 NA NA NA NA NA NA 580 278
4 571 334 596 303 816 289 483 371
')
Now I would like it to look something like this:
PID Time (days) Value
1 1435 1356
1 1405 1483
1 1374 1563
2 1848 943
2 1818 1173
2 1785 1300
3 185 1590
... ... ...
How would I get there? I have looked into reshaping from wide to long format, but it doesn't seem to do the trick.
Kind regards, and thank you in advance.
Here is a base R option
u <- cbind(
  data[1],
  do.call(
    rbind,
    lapply(
      split.default(data[-1], ceiling(seq_along(data[-1]) / 2)),
      setNames,
      c("Value", "Time")
    )
  )
)
out <- `row.names<-`(
  subset(
    x <- u[order(u$pid), ],
    complete.cases(x)
  ),
  NULL
)
such that
> out
pid Value Time
1 1 1356 1435
2 1 1483 1405
3 1 1563 1374
4 2 943 1848
5 2 1173 1818
6 2 1300 1785
7 3 1590 185
8 3 1585 294
9 4 130 72
10 4 140 82
11 4 220 126
12 4 166 159
13 4 380 189
14 4 353 231
15 4 180 268
16 4 571 334
17 4 443 70
18 4 266 124
19 4 213 156
20 4 583 173
21 4 510 222
22 4 596 303
23 4 476 145
24 4 656 217
25 4 816 289
26 4 136 79
27 4 756 89
28 4 703 128
29 4 776 166
30 4 586 203
31 4 526 240
32 4 580 278
33 4 483 371
An option with pivot_longer
library(dplyr)
library(tidyr)
names(data)[8] <- "measurement4"
data %>%
  pivot_longer(cols = -pid, names_to = c('.value', 'grp'),
               names_sep = "(?<=[a-z])(?=[0-9])", values_drop_na = TRUE) %>%
  select(-grp)
# A tibble: 33 x 3
# pid measurement Tdays
# <int> <int> <int>
# 1 1 1356 1435
# 2 1 1483 1405
# 3 1 1563 1374
# 4 2 943 1848
# 5 2 1173 1818
# 6 2 1300 1785
# 7 3 1590 185
# 8 3 1585 294
# 9 4 130 72
#10 4 443 70
# … with 23 more rows
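If you also want the column names from the desired output, a small follow-up sketch (not part of the original answer; res stands for the pivoted result above) is a rename:
res %>%
  rename(PID = pid, Value = measurement, `Time (days)` = Tdays)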

Convert non-numeric rows and columns to zero

I have this data from an R package, where X is the dataset with all the data:
library(ISLR)
data("Hitters")
X=Hitters
head(X)
Here is one part of the data:
AtBat Hits HmRun Runs RBI Walks Years CAtBat CHits CHmRun CRuns CRBI CWalks League Division PutOuts Assists Errors Salary NewLeague
-Andy Allanson 293 66 1 30 29 14 1 293 66 1 30 29 14 A E 446 33 20 NA A
-Alan Ashby 315 81 7 24 38 39 14 3449 835 69 321 414 375 N W 632 43 10 475.0 N
-Alvin Davis 479 130 18 66 72 76 3 1624 457 63 224 266 263 A W 880 82 14 480.0 A
-Andre Dawson 496 141 20 65 78 37 11 5628 1575 225 828 838 354 N E 200 11 3 500.0 N
-Andres Galarraga 321 87 10 39 42 30 2 396 101 12 48 46 33 N E 805 40 4 91.5 N
-Alfredo Griffin 594 169 4 74 51 35 11 4408 1133 19 501 336 194 A W 282 421 25 750.0 A
I want to convert all the columns and rows with non-numeric values to zero. Is there any simple way to do this?
I found an example of how to remove such rows for a single column, but with that approach I would have to repeat it for every column manually.
Is there a function in R that does this for all columns and rows?
To remove non-numeric columns, perhaps something like this?
df %>%
  select(which(sapply(., is.numeric)))
# AtBat Hits HmRun Runs RBI Walks Years CAtBat CHits CHmRun
#-Andy Allanson 293 66 1 30 29 14 1 293 66 1
#-Alan Ashby 315 81 7 24 38 39 14 3449 835 69
#-Alvin Davis 479 130 18 66 72 76 3 1624 457 63
#-Andre Dawson 496 141 20 65 78 37 11 5628 1575 225
#-Andres Galarraga 321 87 10 39 42 30 2 396 101 12
#-Alfredo Griffin 594 169 4 74 51 35 11 4408 1133 19
# CRuns CRBI CWalks PutOuts Assists Errors Salary
#-Andy Allanson 30 29 14 446 33 20 NA
#-Alan Ashby 321 414 375 632 43 10 475.0
#-Alvin Davis 224 266 263 880 82 14 480.0
#-Andre Dawson 828 838 354 200 11 3 500.0
#-Andres Galarraga 48 46 33 805 40 4 91.5
#-Alfredo Griffin 501 336 194 282 421 25 750.0
or
df %>%
  select(-which(sapply(., function(x) is.character(x) | is.factor(x))))
Or much neater (thanks to #AntoniosK):
df %>% select_if(is.numeric)
Update
To additionally replace NAs with 0, you can do
df %>% select_if(is.numeric) %>% replace(is.na(.), 0)
# AtBat Hits HmRun Runs RBI Walks Years CAtBat CHits CHmRun
#-Andy Allanson 293 66 1 30 29 14 1 293 66 1
#-Alan Ashby 315 81 7 24 38 39 14 3449 835 69
#-Alvin Davis 479 130 18 66 72 76 3 1624 457 63
#-Andre Dawson 496 141 20 65 78 37 11 5628 1575 225
#-Andres Galarraga 321 87 10 39 42 30 2 396 101 12
#-Alfredo Griffin 594 169 4 74 51 35 11 4408 1133 19
# CRuns CRBI CWalks PutOuts Assists Errors Salary
#-Andy Allanson 30 29 14 446 33 20 0.0
#-Alan Ashby 321 414 375 632 43 10 475.0
#-Alvin Davis 224 266 263 880 82 14 480.0
#-Andre Dawson 828 838 354 200 11 3 500.0
#-Andres Galarraga 48 46 33 805 40 4 91.5
#-Alfredo Griffin 501 336 194 282 421 25 750.0
library(ISLR)
data("Hitters")
d = head(Hitters)
library(dplyr)
d %>%
  mutate_if(function(x) !is.numeric(x), function(x) 0) %>% # if a column is non-numeric, replace it with zeros
  mutate_all(function(x) ifelse(is.na(x), 0, x)) # if an element is NA, replace it with 0
# AtBat Hits HmRun Runs RBI Walks Years CAtBat CHits CHmRun CRuns CRBI CWalks League Division PutOuts Assists Errors Salary NewLeague
# 1 293 66 1 30 29 14 1 293 66 1 30 29 14 0 0 446 33 20 0.0 0
# 2 315 81 7 24 38 39 14 3449 835 69 321 414 375 0 0 632 43 10 475.0 0
# 3 479 130 18 66 72 76 3 1624 457 63 224 266 263 0 0 880 82 14 480.0 0
# 4 496 141 20 65 78 37 11 5628 1575 225 828 838 354 0 0 200 11 3 500.0 0
# 5 321 87 10 39 42 30 2 396 101 12 48 46 33 0 0 805 40 4 91.5 0
# 6 594 169 4 74 51 35 11 4408 1133 19 501 336 194 0 0 282 421 25 750.0 0
If you want to avoid function(x), you can use this:
d %>%
  mutate_if(Negate(is.numeric), ~0) %>%
  mutate_all(~ifelse(is.na(.), 0, .))
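With dplyr 1.0 or later, the superseded mutate_if()/mutate_all() pair can also be written with across() (a sketch of the same logic, not part of the original answer):
d %>%
  mutate(across(!where(is.numeric), ~ 0),
         across(everything(), ~ replace(.x, is.na(.x), 0)))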
You can get the numeric columns with sapply/inherits.
X <- Hitters
inx <- sapply(X, inherits, c("integer", "numeric"))
Y <- X[inx]
Then it wouldn't make much sense to remove rows with non-numeric entries, since the non-numeric columns were already removed, but you could do
inx <- apply(Y, 1, function(y) all(inherits(y, c("integer", "numeric"))))
Y[inx, ]

Filter rows having duplicate IDs [duplicate]

This question already has answers here:
Finding ALL duplicate rows, including "elements with smaller subscripts"
(9 answers)
My data is like this:
dat <- read.table(header=TRUE, text="
ID Veh oct nov dec jan feb
1120 1 7 47 152 259 140
2000 1 5 88 236 251 145
2000 2 14 72 263 331 147
1133 1 6 71 207 290 242
2000 3 7 47 152 259 140
2002 1 5 88 236 251 145
2006 1 14 72 263 331 147
2002 2 6 71 207 290 242
")
dat
ID Veh oct nov dec jan feb
1 1120 1 7 47 152 259 140
2 2000 1 5 88 236 251 145
3 2000 2 14 72 263 331 147
4 1133 1 6 71 207 290 242
5 2000 3 7 47 152 259 140
6 2002 1 5 88 236 251 145
7 2006 1 14 72 263 331 147
8 2002 2 6 71 207 290 242
Using the duplicated function:
Unique values in column 1:
dat[!duplicated(dat[,1]),]
ID Veh oct nov dec jan feb
1 1120 1 7 47 152 259 140
2 2000 1 5 88 236 251 145
4 1133 1 6 71 207 290 242
6 2002 1 5 88 236 251 145
7 2006 1 14 72 263 331 147
Duplicate values in column 1:
dat[duplicated(dat[,1]),]
ID Veh oct nov dec jan feb
3 2000 2 14 72 263 331 147
5 2000 3 7 47 152 259 140
8 2002 2 6 71 207 290 242
But I want to keep all rows whose ID is duplicated, including the first occurrence, like the following (which I am struggling to code):
ID Veh oct nov dec jan feb
2000 1 5 88 236 251 145
2000 2 14 72 263 331 147
2000 3 7 47 152 259 140
2002 1 5 88 236 251 145
2002 2 6 71 207 290 242
Try
dat[duplicated(dat[,1])|duplicated(dat[,1],fromLast=TRUE),]
# ID Veh oct nov dec jan feb
#2 2000 1 5 88 236 251 145
#3 2000 2 14 72 263 331 147
#5 2000 3 7 47 152 259 140
#6 2002 1 5 88 236 251 145
#8 2002 2 6 71 207 290 242
Or
library(data.table)
setDT(dat)[, .SD[.N>1], ID]
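A dplyr equivalent (a sketch, not part of the original answer; it keeps every row whose ID occurs more than once):
library(dplyr)
dat %>%
  group_by(ID) %>%
  filter(n() > 1) %>%
  ungroup()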
