How to transform NA values with the R mutate function?

I'm trying to use the mutate function in order to create a variable based on conditions involving three others.
These conditions were created with case_when, as you can see in the code below.
But some of my conditions use NA values, and these seem to be causing an error in the mutate function.
Check it out, please:
# About the variables being used:
unique(x1)
# [1] 1 0 NA
str(pemg$x1)
# num [1:1622989] 1 0 0 1 1 0 1 1 0 0 ...
unique(x2)
# [1] 16 66 38 11 8 6 14 17 53 59 10 31 50 19 48 42 44 21 54 55 56 18 57 61 13 43 7 4 15
# [30] 39 5 20 3 37 23 51 36 52 68 58 27 65 62 2 12 32 41 49 46 35 34 45 81 69 33 40 0 70
# [59] 9 47 63 29 25 22 64 24 60 30 67 26 71 72 28 1 75 80 87 77 73 78 76 79 74 83 92 102 85
# [88] 86 90 82 91 84 88 93 89 96 95 105 115 106 94 100 99 97 104 98 103 108 109 101 117 107 114 113 NA 112
# [117] 110 111
str(pemg$x2)
# num [1:1622989] 16 66 38 11 8 6 14 17 53 59 ...
unique(x3)
# [1] 6 3 4 5 0 8 2 1 11 9 10 7 NA 15
str(pemg$anoest)
# num [1:1622989] 6 3 4 5 3 0 5 8 4 2 ...
df <- mutate(df,
             y = case_when(
               x1 == 1 & x2 >= 7 & x3 == 0 ~ 1,
               x1 == 1 & x2 >= 8 & x3 == 1 ~ 1,
               x1 == 1 & x2 >= 10 & x3 == 3 ~ 1,
               x1 == 1 & x2 >= 11 & x3 == 4 ~ 1,
               x1 == 1 & x2 >= 12 & x3 == 5 ~ 1,
               x1 == 1 & x2 >= 13 & x3 == 6 ~ 1,
               x1 == 1 & x2 >= 14 & x3 == 7 ~ 1,
               x1 == 1 & x2 >= 15 & x3 == 8 ~ 1,
               x1 == 1 & x2 >= 16 & x3 == 9 ~ 1,
               x1 == 1 & x2 >= 17 & x3 == 10 ~ 1,
               x1 == 1 & x2 >= 18 & x3 == 11 ~ 1,
               x1 == 1 & !is.na(x3) ~ 0,
               x1 == 1 & x3 %in% 12:16 ~ 0,
               x2 %in% 0:7 ~ NA,
               x2 > 18 ~ NA,
               x1 == 0 ~ NA,
               is.na(x3) ~ NA))
# Error: Problem with `mutate()` input `defasado`.
# x must be a double vector, not a logical vector.
# i Input `defasado` is `case_when(...)`.
# Run `rlang::last_error()` to see where the error occurred.
last_error()
# <error/dplyr_error>
# Problem with `mutate()` input `y`.
# x must be a double vector, not a logical vector.
# i Input `y` is `case_when(...)`.
# Backtrace:
# 1. dplyr::mutate(...)
# 2. dplyr:::mutate.data.frame(...)
# 3. dplyr:::mutate_cols(.data, ...)
# Run `rlang::last_trace()` to see the full context.
last_trace()
# <error/dplyr_error>
# Problem with `mutate()` input `defasado`.
# x must be a double vector, not a logical vector.
# i Input `defasado` is `case_when(...)`.
# Backtrace:
# x
# 1. +-dplyr::mutate(...)
# 2. \-dplyr:::mutate.data.frame(...)
# 3. \-dplyr:::mutate_cols(.data, ...)
# <parent: error/rlang_error>
# must be a double vector, not a logical vector.
# Backtrace:
# x
# 1. +-mask$eval_all_mutate(dots[[i]])
# 2. \-dplyr::case_when(...)
# 3. \-dplyr:::replace_with(...)
# 4. \-dplyr:::check_type(val, x, name)
# 5. \-dplyr:::glubort(header, "must be {friendly_type_of(template)}, not {friendly_type_of(x)}.")
Can someone give me a hint on how to solve this?

The problem here is the result type of your case_when. if_else from dplyr is stricter than ifelse from base R: all result values have to be of the same type. Since case_when is a vectorisation of multiple if_else calls, you have to tell R which type of NA the output should be:
library(dplyr)
# does not work
dplyr::tibble(d = c(6, 2, 4, NA, 5)) %>%
  dplyr::mutate(v = case_when(d < 4 ~ 0,
                              is.na(d) ~ NA))
# works
dplyr::tibble(d = c(6, 2, 4, NA, 5)) %>%
  dplyr::mutate(v = case_when(d < 4 ~ 0,
                              is.na(d) ~ NA_real_))

You need to make sure your NAs are of the right class. In your case, wrap the NA after the ~ in as.numeric(). For example:
x2 %in% 0:7 ~ as.numeric(NA)

R has different types of NA. The one you are using is of logical type, but you need the double type NA_real_ in order to be consistent with the output of your other conditions. For more information, see this: https://stat.ethz.ch/R-manual/R-patched/library/base/html/NA.html
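A quick base R sketch of these typed NA constants (nothing here depends on the question's data):

```r
# NA is logical by default; each atomic type has its own typed NA
typeof(NA)            # "logical"
typeof(NA_real_)      # "double"
typeof(NA_integer_)   # "integer"
typeof(NA_character_) # "character"

# as.numeric(NA) produces the same double NA as NA_real_
identical(as.numeric(NA), NA_real_)  # TRUE
```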

In base R, we can construct a logical vector and assign the column values to NA based on it. Unlike case_when, we don't have to specify the type of NA, as it gets converted automatically.
df1$d[df1$d %in% 0:7] <- NA
For a simple operation like this, base R handles it in a compact way.
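If you prefer a single expression, base R's replace() does the same assignment; a minimal sketch with made-up values:

```r
x2 <- c(16, 2, 4, NA, 15)
# set every value falling in 0:7 to NA; NA %in% 0:7 is FALSE, so existing NAs are untouched
x2 <- replace(x2, x2 %in% 0:7, NA)
x2
# [1] 16 NA NA NA 15
```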

Related

How to adopt ifelse statement to NA value in R?

I am trying to create a new column by condition: if the value in A equals 1, the value is copied from column B, otherwise from column C. When A has NA, the condition does not work. I don't want to drop these NAs; instead:
If A contains NA, the value has to be taken from column C.
df <- data.frame(A = c(1, 1, NA, 2, 2, 2),
                 B = c(10, 20, 30, 40, 25, 45),
                 C = c(11, 23, 33, 45, 56, 13))
#If Customer faced defect only in Wave_3 his Boycotting score is taken from wave 3, otherwise from wave 4
df$D1 <- ifelse(df$A ==1 , df$B ,df$C)
Expected output:
A B C D1
1 1 10 11 10
2 1 20 23 20
3 NA 30 33 33
4 2 40 45 45
5 2 25 56 56
6 2 45 13 13
Use a Boolean operation here:
df$D1 <- ifelse(df$A == 1 & !is.na(df$A), df$B, df$C)
An alternative way is:
df$D1 <- ifelse(is.na(df$A), df$C, ifelse(df$A == 1, df$B, df$C))
Using case_when
library(dplyr)
df %>%
  mutate(D1 = case_when(A %in% 1 ~ B, TRUE ~ C))
A B C D1
1 1 10 11 10
2 1 20 23 20
3 NA 30 33 33
4 2 40 45 45
5 2 25 56 56
6 2 45 13 13
Same idea as @Mohanasundaram, but implemented in a dplyr chain:
library(dplyr)
df %>%
  mutate(D1 = ifelse(A == 1 & !is.na(A), B, C))
Output:
A B C D1
1 1 10 11 10
2 1 20 23 20
3 NA 30 33 33
4 2 40 45 45
5 2 25 56 56
6 2 45 13 13
Another option is to use dplyr::if_else(), which has a missing argument to handle NA values. An advantage of dplyr::if_else() is that it checks that true and false are of the same type.
dplyr::mutate(
  .data = df,
  D1 = dplyr::if_else(
    condition = A == 1, true = B, false = C, missing = C
  )
)
Output:
A B C D1
1 1 10 11 10
2 1 20 23 20
3 NA 30 33 33
4 2 40 45 45
5 2 25 56 56
6 2 45 13 13

Multiple condition `rowSums`

I would like to perform a rowSums based on specific values for multiple columns (i.e. multiple conditions). I know how to rowSums based on a single condition (see example below) but can't seem to figure out multiple conditions.
# rowSums with single, global condition
set.seed(100)
df <- data.frame(a = sample(0:100, 10),
                 b = sample(0:100, 10),
                 c = sample(0:100, 10),
                 d = sample(0:100, 10))
print(df)
a b c d
1 31 63 54 49
2 25 88 71 92
3 54 27 53 34
4 5 39 73 93
5 45 73 40 67
6 46 64 16 85
7 77 19 97 17
8 34 33 82 59
9 50 93 51 99
10 15 100 25 11
Single Condition Works
df$ROWSUMS <- rowSums(df[,1:4] <= 50)
# And produces
a b c d ROWSUMS
1 31 63 54 49 2
2 25 88 71 92 1
3 54 27 53 34 2
4 5 39 73 93 2
5 45 73 40 67 2
6 46 64 16 85 2
7 77 19 97 17 2
8 34 33 82 59 2
9 50 93 51 99 1
10 15 100 25 11 3
Multiple Conditions Don't Work
df$ROWSUMS_Multi <- rowSums(df[,1] <= 50 | df[,2] <= 25 | df[,3] <= 75)
Error in rowSums(df[, 1] <= 50 | df[, 2] <= 25 | df[, 3] <= 75) :
'x' must be an array of at least two dimensions
Desired Output
a b c d ROWSUMS_Multi
1 31 63 54 49 2
2 25 88 71 92 2
3 54 27 53 34 1
4 5 39 73 93 2
5 45 73 40 67 2
6 46 64 16 85 2
7 77 19 97 17 1
8 34 33 82 59 1
9 50 93 51 99 2
10 15 100 25 11 2
I could just be sub-setting incorrectly, but I haven't been able to find a fix.
One problem with [ when extracting a single row or column is that it coerces the data.frame to a vector. From ?Extract:
x[i, j, ... , drop = TRUE]
NOTE, drop is TRUE by default
and later in the documentation
drop - For matrices and arrays. If TRUE the result is coerced to the lowest possible dimension (see the examples). This only works for extracting elements, not for the replacement. See drop for further details.
To avoid that, either use drop = FALSE or simply drop the ,, which returns a single-column data.frame: for a data.frame, an index without a comma is by default treated as a column index, not a row index.
rowSums(df[1] <= 50 | df[2] <= 25 | df[3] <= 75)
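To see the drop behaviour concretely, a small illustration with a toy data.frame:

```r
df <- data.frame(a = 1:3, b = 4:6, c = 7:9)

class(df[, 1])               # "integer": drop = TRUE collapses one column to a vector
class(df[, 1, drop = FALSE]) # "data.frame": dimensions preserved
class(df[1])                 # "data.frame": no comma means column indexing, no dropping
```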
Update
Based on the expected output, the rowSums can be written as
df$ROWSUMS <- rowSums(df[1:3] <= c(50, 25, 75)[col(df[1:3])])
df$ROWSUMS
#[1] 2 2 1 2 2 2 1 1 2 2
NOTE: the earlier comment addressed why the rowSums didn't work; I hadn't checked the expected output. Here, we need to compare 3 columns against different values. When we do
df[1] <= 50
It is a single column of TRUE/FALSE values.
When we combine it with | as in
df[1] <= 50 | df[2] <= 25
it is still a single column of TRUE/FALSE; the only difference is that TRUE/FALSE or FALSE/TRUE in a row has been replaced by TRUE. It would be the same however many logical comparisons we combine with |. Instead of that, use +, which does the element-wise sum:
((df[1] <= 50)+ (df[2] <= 25) + (df[3] <= 75))[,1] # note it is a matrix
Here, we can do it with vector i.e. using , as well
((df[, 1] <= 50)+ (df[, 2] <= 25) + (df[, 3] <= 75)) # vector output
The only issue with this is having to write + repeatedly. If we use rowSums, then make sure the comparison values are replicated (with col) to the same dimensions as the subset of the data.frame. Another option is Map:
Reduce(`+`, Map(`<=`, df[1:3], c(50, 25, 75)))
We can also use cbind to create a matrix from the multiple conditions using column positions or column names then use rowSums like usual, e.g
> rowSums(cbind(df[,'a'] <= 50 ,df[,'b'] <= 25 ,df[,'c'] <= 75), na.rm = TRUE)
[1] 2 2 1 2 2 2 1 1 2 2
> rowSums(cbind(df['a'] <= 50 ,df['b'] <= 25 ,df['c'] <= 75), na.rm = TRUE)
[1] 2 2 1 2 2 2 1 1 2 2
Using dplyr
library(dplyr)
df %>% mutate(ROWSUMS=rowSums(cbind(.['a'] <= 50 ,.['b'] <= 25 ,.['c'] <= 75), na.rm = TRUE))

find duplicated rows of a data frame in R [duplicate]

I have the following data:
x1 x2 x3 x4
34 14 45 53
2 8 18 17
34 14 45 20
19 78 21 48
2 8 18 5
In rows 1 and 3, and in rows 2 and 5, the values in columns x1, x2, x3 are equal. How can I output only those 4 rows with equal numbers? The output should be in the following format:
x1 x2 x3 x4
34 14 45 53
34 14 45 20
2 8 18 17
2 8 18 5
Please ask me questions if something is unclear.
ADDITIONAL QUESTION: in the output
x1 x2 x3 x4
34 14 45 53
34 14 45 20
2 8 18 17
2 8 18 5
find the sum of the values in the last column:
x1 x2 x3 x4
34 14 45 73
2 8 18 22
You can do this with duplicated, which checks for rows being duplicated when passed a matrix. Since you're only checking the first three columns, you should pass dat[,-4] to the function.
dat[duplicated(dat[,-4]) | duplicated(dat[,-4], fromLast=T),]
# x1 x2 x3 x4
# 1 34 14 45 53
# 2 2 8 18 17
# 3 34 14 45 20
# 5 2 8 18 5
An alternative using ave:
dat[ave(dat[,1], dat[-4], FUN=length) > 1,]
# x1 x2 x3 x4
#1 34 14 45 53
#2 2 8 18 17
#3 34 14 45 20
#5 2 8 18 5
Learned this one the other day. You won't need to re-order the output.
s <- split(dat, do.call(paste, dat[-4]))
Reduce(rbind, Filter(function(x) nrow(x) > 1, s))
# x1 x2 x3 x4
# 2 2 8 18 17
# 5 2 8 18 5
# 1 34 14 45 53
# 3 34 14 45 20
There is another way to solve both questions using two packages.
library(DescTools)
library(dplyr)
dat[AllDuplicated(dat[1:3]), ] %>% # this line finds the duplicates
  group_by(x1, x2) %>%             # the following lines sum up
  mutate(x4 = sum(x4)) %>%
  unique()
# Source: local data frame [2 x 4]
# Groups: x1, x2
#
# x1 x2 x3 x4
# 1 34 14 45 73
# 2 2 8 18 22
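The same two steps can also be sketched in base R without extra packages, using duplicated() to flag the repeated rows and aggregate() to sum x4 (data recreated from the question):

```r
dat <- data.frame(x1 = c(34, 2, 34, 19, 2),
                  x2 = c(14, 8, 14, 78, 8),
                  x3 = c(45, 18, 45, 21, 18),
                  x4 = c(53, 17, 20, 48, 5))

# flag rows whose first three columns appear more than once
dup <- duplicated(dat[1:3]) | duplicated(dat[1:3], fromLast = TRUE)

# sum x4 within each duplicated (x1, x2, x3) combination
aggregate(x4 ~ x1 + x2 + x3, data = dat[dup, ], FUN = sum)
#   x1 x2 x3 x4
# 1  2  8 18 22
# 2 34 14 45 73
```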
Can also use table command:
> d1 = ddf[ddf$x1 %in% ddf$x1[which(table(ddf$x1)>1)],]
> d2 = ddf[ddf$x2 %in% ddf$x2[which(table(ddf$x2)>1)],]
> rr = rbind(d1, d2)
> rr2 = rr[!duplicated(rr),]
> rr2
x1 x2 x3 x4
1 34 14 45 53
3 34 14 45 20
2 2 8 18 17
5 2 8 18 5
For sum in last column:
> rrt = data.table(rr2)
> rrt[,x4:=sum(x4),by=x1]
> rrt[rrt[,!duplicated(x1),]]
x1 x2 x3 x4
1: 34 14 45 73
2: 2 8 18 22
The first one is similar to the above; let z be your data.frame:
library(DescTools)
(zz <- Sort(z[AllDuplicated(z[, -4]), ], decreasing=TRUE) )
# now aggregate
aggregate(zz[, 4], zz[, -4], FUN=sum)
# use Sort again, if needed...

How to get next number in sequence in R

I need to automate the process of getting the next number(s) in the given sequence.
Can we make a function which takes two inputs
a vector of numbers(3,7,13,21 e.g.)
how many next numbers
seqNext <- function(sequ, n) {  # note: 'next' is a reserved word in R, so use another name
  ..
}
seqNext( c(3,7,13,21), 3)
# 31 43 57
seqNext( c(37,26,17,10), 1)
# 5
By the power of maths!
x1 <- c(3,7,13,21)
dat <- data.frame(x=seq_along(x1), y=x1)
predict(lm(y ~ poly(x, 2), data=dat), newdata=list(x=5:15))
# 1 2 3 4 5 6 7 8 9 10 11
# 31 43 57 73 91 111 133 157 183 211 241
When the successive differences change sign, the pattern of output values switches from decreasing to increasing:
x2 <- c(37,26,17,10)
dat <- data.frame(x=seq_along(x2), y=x2)
predict(lm(y ~ poly(x,2), data=dat), newdata=list(x=1:10))
# 1 2 3 4 5 6 7 8 9 10
#37 26 17 10 5 2 1 2 5 10
# successive terms subtract 11, 9, 7, 5, 3, 1, -1, -3, -5 in turn,
# and the amount subtracted decreases by 2 at each step
As a function:
seqNext <- function(x,n) {
L <- length(x)
dat <- data.frame(x=seq_along(x), y=x)
unname(
predict(lm(y ~ poly(x, 2), data=dat), newdata=list(x=seq(L+1,L+n)))
)
}
seqNext(x1,5)
#[1] 31 43 57 73 91
seqNext(x2,5)
#[1] 5 2 1 2 5
This is also easily extensible to circumstances where the pattern might be n orders deep, e.g.:
x3 <- c(100, 75, 45, 5, -50)
diff(x3)
#[1] -25 -30 -40 -55
diff(diff(x3))
#[1] -5 -10 -15
diff(diff(diff(x3)))
#[1] -5 -5
seqNext <- function(x,n,degree=2) {
L <- length(x)
dat <- data.frame(x=seq_along(x), y=x)
unname(
predict(lm(y ~ poly(x, degree), data=dat), newdata=list(x=seq(L+1,L+n)))
)
}
seqNext(x3,n=5,deg=3)
#[1] -125 -225 -355 -520 -725
seqNext <- function(x, n) {
k <- length(x); d <- diff(x[(k - 2):k])
x[k] + 1:n * d[2] + cumsum(1:n) * diff(d[1:2])
}
seqNext(c(3,7,13,21),3)
# [1] 31 43 57
seqNext(c(37,26,17,10),1)
# [1] 5
seqNext(c(137,126,117,110),10)
# [1] 105 102 101 102 105 110 117 126 137 150
seqNext(c(105,110,113,114),5)
# [1] 113 110 105 98 89
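Why the closed form works: for a sequence generated by a quadratic, the second differences are constant, so each new term is the previous value plus the next first difference. A quick check on the first example:

```r
x  <- c(3, 7, 13, 21)
d  <- diff(x)   # first differences: 4 6 8
d2 <- diff(d)   # second differences: 2 2 (constant)

# next term = last value + (last first difference + second difference)
x[length(x)] + d[length(d)] + d2[1]
# [1] 31
```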

How to get all the sum in aggregate function?

Here's some sample data:
dat="x1 x2 x3 x4 x5
1 C 1 16 NA 16
2 A 1 16 16 NA
3 A 1 16 16 NA
4 A 4 64 64 NA
5 C 4 64 NA 64
6 A 1 16 16 NA
7 A 1 16 16 NA
8 A 1 16 16 NA
9 B 4 64 32 32
10 A 3 48 48 NA
11 B 4 64 32 32
12 B 3 48 32 16"
data<-read.table(text=dat,header=TRUE)
aggregate(cbind(x2,x3,x4,x5)~x1, FUN=sum, data=data)
x1 x2 x3 x4 x5
1 B 11 176 96 80
How do I get the sum of A and C as well in x1?
aggregate(.~x1, FUN=sum, data=data, na.action = na.omit)
x1 x2 x3 x4 x5
1 B 11 176 96 80
When I use sqldf:
library("sqldf")
sqldf("select sum(x2),sum(x3),sum(x4),sum(x5) from data group by x1")
sum(x2) sum(x3) sum(x4) sum(x5)
1 12 192 192 <NA>
2 11 176 96 80
3 5 80 NA 80
Why do I get <NA> in the first line, but NA in the third line?
What is the difference between them? Why do I get <NA> at all? There is no <NA> in data!
str(data)
'data.frame': 12 obs. of 5 variables:
$ x1: Factor w/ 3 levels "A","B","C": 3 1 1 1 3 1 1 1 2 1 ...
$ x2: int 1 1 1 4 4 1 1 1 4 3 ...
$ x3: int 16 16 16 64 64 16 16 16 64 48 ...
$ x4: int NA 16 16 64 NA 16 16 16 32 48 ...
$ x5: int 16 NA NA NA 64 NA NA NA 32 NA ...
The sqldf problem remains: why does sum(x4) give NA while sum(x5) gives <NA>?
I can show that the NAs in x4 and x5 are the same this way:
data[is.na(data)] <- 0
> data
x1 x2 x3 x4 x5
1 C 1 16 0 16
2 A 1 16 16 0
3 A 1 16 16 0
4 A 4 64 64 0
5 C 4 64 0 64
6 A 1 16 16 0
7 A 1 16 16 0
8 A 1 16 16 0
9 B 4 64 32 32
10 A 3 48 48 0
11 B 4 64 32 32
12 B 3 48 32 16
So the fact that sqldf treats sum(x4) and sum(x5) differently is so strange that I think there is a logic problem in sqldf. It can be reproduced on other machines; please try it first, and then let the discussion continue.
Here's the data.table way in case you're interested:
require(data.table)
dt <- data.table(data)
dt[, lapply(.SD, sum, na.rm=TRUE), by=x1]
# x1 x2 x3 x4 x5
# 1: C 5 80 0 80
# 2: A 12 192 192 0
# 3: B 11 176 96 80
If you want sum to return NA instead of the sum with NAs removed, just leave out the na.rm=TRUE argument.
.SD here is an internal data.table variable that holds, by default, all the columns not in by - here, all except x1. You can check the contents of .SD by doing:
dt[, print(.SD), by=x1]
to get an idea of what .SD is. If you're interested, check ?data.table for other internal (and very useful) special variables like .I, .N, .GRP, etc.
Because of how the formula method for aggregate handles NA values by default, you need to override that before using the na.rm argument from sum. You can do this by setting na.action to NULL or na.pass:
aggregate(cbind(x2,x3,x4,x5) ~ x1, FUN = sum, data = data,
na.rm = TRUE, na.action = NULL)
# x1 x2 x3 x4 x5
# 1 A 12 192 192 0
# 2 B 11 176 96 80
# 3 C 5 80 0 80
aggregate(cbind(x2,x3,x4,x5) ~ x1, FUN = sum, data = data,
na.rm = TRUE, na.action = na.pass)
# x1 x2 x3 x4 x5
# 1 A 12 192 192 0
# 2 B 11 176 96 80
# 3 C 5 80 0 80
Regarding sqldf, it seems like the columns are being cast to different types depending on whether the item in the first row of the first grouping variable is an NA or not. If it is an NA, that column gets cast as character.
Compare:
df1 <- data.frame(id = c(1, 1, 2, 2, 2),
                  A = c(1, 1, NA, NA, NA),
                  B = c(NA, NA, 1, 1, 1))
sqldf("select sum(A), sum(B) from df1 group by id")
# sum(A) sum(B)
# 1 2 <NA>
# 2 NA 3.0
df2 <- data.frame(id = c(2, 2, 1, 1, 1),
                  A = c(1, 1, NA, NA, NA),
                  B = c(NA, NA, 1, 1, 1))
sqldf("select sum(A), sum(B) from df2 group by id")
# sum(A) sum(B)
# 1 <NA> 3
# 2 2.0 NA
However, there is an easy workaround: assign the original name to each new column being created. Perhaps that lets SQLite inherit some type information from the underlying table? (I don't really use SQL.)
Example (with the same "df2" created earlier):
sqldf("select sum(A) `A`, sum(B) `B` from df2 group by id")
# A B
# 1 NA 3
# 2 2 NA
You can easily use paste to create your select statement:
Aggs <- paste("sum(", names(data)[-1], ") `",
              names(data)[-1], "`", sep = "", collapse = ", ")
sqldf(paste("select", Aggs, "from data group by x1"))
# x2 x3 x4 x5
# 1 12 192 192 NA
# 2 11 176 96 80
# 3 5 80 NA 80
str(.Last.value)
# 'data.frame': 3 obs. of 4 variables:
# $ x2: int 12 11 5
# $ x3: int 192 176 80
# $ x4: int 192 96 NA
# $ x5: int NA 80 80
A similar approach can be taken if you want NA to be replaced with 0:
Aggs <- paste("sum(ifnull(", names(data)[-1], ", 0)) `",
              names(data)[-1], "`", sep = "", collapse = ", ")
sqldf(paste("select", Aggs, "from data group by x1"))
# x2 x3 x4 x5
# 1 12 192 192 0
# 2 11 176 96 80
# 3 5 80 0 80
aggregate(data[, -1], by = list(data$x1), FUN = sum)
I eliminated the first column because it isn't used in the sum; it is just a grouping variable to split the data (and in fact I then used it in by).
Here's how you would do this with the reshape package:
> # x1 = identifier variable, everything else = measured variables
> data_melted <- melt(data, id="x1", measured=c("x2", "x3", "x4", "x5"))
>
> # Thus we now have (measured variable and it's value) per x1 (id variable)
> head(data_melted)
x1 variable value
1 C x2 1
2 A x2 1
3 A x2 1
4 A x2 4
5 C x2 4
6 A x2 1
> tail(data_melted)
x1 variable value
43 A x5 NA
44 A x5 NA
45 B x5 32
46 A x5 NA
47 B x5 32
48 B x5 16
> # Now aggregate using sum, passing na.rm to it
> cast(data_melted, x1 ~ ..., sum, na.rm=TRUE)
x1 x2 x3 x4 x5
1 A 12 192 192 0
2 B 11 176 96 80
3 C 5 80 0 80
Alternatively, you could have done na.rm during the melt()-ing process itself.
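For instance, a minimal sketch using melt()'s na.rm argument (this assumes the classic reshape package and uses toy data, not the question's):

```r
library(reshape)

d <- data.frame(g  = c("A", "A", "B", "B"),
                v1 = c(1, 2, NA, 4),
                v2 = c(NA, 3, 4, 5))

# drop the NA rows while melting, so no na.rm is needed at cast() time
molten <- melt(d, id = "g", na.rm = TRUE)
cast(molten, g ~ variable, sum)
```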
The great thing about learning library(reshape) is, quoting the author ("Reshaping Data with the reshape Package"):
"In R, there are a number of general functions that can aggregate data,
for example tapply, by and aggregate, and a function specifically for
reshaping data, reshape. Each of these functions tends to deal well
with one or two specific scenarios, and each requires slightly different
input arguments. In practice, you need careful thought to piece
together the correct sequence of operations to get your data into the
form that you want. The reshape package grew out of my frustrations
with reshaping data for consulting clients, and overcomes these
problems with a general conceptual framework that uses just two
functions: melt and cast."
