I have a data frame with 80 existing rows and 6 variables:
Row_ID
CatName
CatAge
Request
Friends
ID
and I need to add some outliers to the generated data by appending a row with specific values to the end.
I attempted the following but it does not work. Any tips on how to get this to work?
```{r, create row 1, echo=TRUE,include=TRUE}
Cat_dataframe %>%
  add_row(Row_ID = "30", CatName = "Carla", CatAge = "30", Request = "30", Friends = "8", ID = "500000")
```
Your command looks pretty good to me:
library(tidyverse)
df <- tribble(~Row_ID, ~CatName, ~CatAge, ~Request, ~Friends, ~ID,
              "1", "name1", "31", "request1", "2", "051245")
df %>%
  add_row(Row_ID = "30", CatName = "Carla", CatAge = "30", Request = "30", Friends = "8", ID = "500000")
#> # A tibble: 2 × 6
#> Row_ID CatName CatAge Request Friends ID
#> <chr> <chr> <chr> <chr> <chr> <chr>
#> 1 1 name1 31 request1 2 051245
#> 2 30 Carla 30 30 8 500000
Created on 2022-04-03 by the reprex package (v2.0.1)
You may have an issue with your chunk title: try {r create_row_1, echo=TRUE, include=TRUE} instead of {r, create row 1, echo=TRUE,include=TRUE}. You may also have an issue with mismatched data types, e.g. if CatAge is an integer in your original data frame but a character string in your add_row() call (CatAge = 31 and CatAge = "31" are different types).
If you edit your original question to include the error message(s) you're getting, it will very likely make it easier to troubleshoot your problem.
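For illustration, a minimal sketch of the type clash (df_int is a made-up data frame, not your data):
library(tibble)
df_int <- tibble(CatAge = c(3L, 5L))  # CatAge stored as integer
# add_row(df_int, CatAge = "30")      # errors: can't combine <integer> and <character>
add_row(df_int, CatAge = 30L)         # works: the types match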
I recognize most people have the opposite problem, but I'm trying to create an ASV table with column names as "identified OTUs" (i.e. the column name is drawn from the taxonomy information in GlobalPatterns@tax_table, rather than just being the assigned OTU code that's encoded in GlobalPatterns@otu_table), and row names as sample names.
I also want to append the metadata to the end of the ASV table, to allow for analysis based on said metadata.
I managed to generate a table without the taxonomic information with this code, using GlobalPatterns for reproducibility:
data(GlobalPatterns)
asv.matrix <- as.matrix(GlobalPatterns@otu_table@.Data)
asv <- data.frame(t(asv.matrix)) # transposing to make sample name the row name
meta.df <- as.data.frame(GlobalPatterns@sam_data)
asv.full <- data.frame(asv, meta.df)
write.csv(asv.full, file = "full_asv.csv", quote = FALSE) # write.csv always uses ",", so no sep argument is needed
However, I can't figure out how to force taxonomy information into the column names, which makes the ASV table functionally useless for analysis.
EDIT:
My preferred format (abbreviated, with faked metadata appended) is below:

| Sample-ID | Species1 | Species2 | ...etc... | Metadata1 | Metadata2 | ...etc... |
| --------- | -------- | -------- | --------- | --------- | --------- | --------- |
| Sample1   | 1        | 5        | ...etc... | lake      | summer    | ...etc... |
| Sample2   | 4        | 0        | ...etc... | bog       | spring    | ...etc... |
I think you're looking for the phyloseq::psmelt function, which combines the otu_table, tax_table and sample_data tables into a single, long format table that is suitable for analysis.
One way of dealing with unresolved taxonomy is to assign the highest known taxonomy to any unresolved level. You can use the name_na_taxa function from the fantaxtic package for this, prior to using psmelt.
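In its simplest form, that's just the following (a sketch using the built-in example data):
library(phyloseq)
data(GlobalPatterns)
ps_long <- psmelt(GlobalPatterns)  # one row per OTU per sample
head(ps_long[, c("OTU", "Sample", "Abundance", "Phylum")])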
EDIT
After seeing your updated post, I understand a bit better what you want. You can take the output from psmelt and pivot this into a semi-wide format; see the code chunk below.
require("phyloseq")
require("fantaxtic")
require("tidyverse")
# Load data
data(GlobalPatterns)
# Generate (unique) species names using fantaxtic
ps <- name_na_taxa(GlobalPatterns)
ps <- label_duplicate_taxa(ps, tax_level = "Species", asv_as_id = TRUE)
# Convert to long data format
ps_long <- psmelt(ps)
# Convert to semi-wide data format where each column has a taxon name
# and contains the abundance in each sample
meta_vars <- sample_variables(ps)
ps_wide <- ps_long %>%
select(all_of(meta_vars), Species, Abundance) %>%
pivot_wider(names_from = Species,
values_from = Abundance)
# Inspect the final table
head(ps_wide)
#> # A tibble: 6 x 19,223
#> X.SampleID Primer Final_Barcode Barcode_truncate~ Barcode_full_le~ SampleType
#> <fct> <fct> <fct> <fct> <fct> <fct>
#> 1 AQC4cm ILBC_17 ACAGCT AGCTGT CAAGCTAGCTG Freshwate~
#> 2 LMEpi24M ILBC_13 ACACTG CAGTGT CATGAACAGTG Freshwater
#> 3 AQC7cm ILBC_18 ACAGTG CACTGT ATGAAGCACTG Freshwate~
#> 4 AQC1cm ILBC_16 ACAGCA TGCTGT GACCACTGCTG Freshwate~
#> 5 M31Tong ILBC_10 ACACGA TCGTGT TGTGGCTCGTG Tongue
#> 6 M11Fcsw ILBC_05 AAGCTG CAGCTT CGACTGCAGCT Feces
#> # ... with 19,217 more variables: Description <fct>,
#> # `Unknown Stramenopiles (Order) 549656` <dbl>,
#> # `Unknown Dolichospermum (Genus) 279599` <dbl>,
#> # `Unknown Neisseria (Genus) 360229` <dbl>,
#> # `Unknown Bacteroides (Genus) 331820` <dbl>,
#> # `Haemophilusparainfluenzae 94166` <dbl>,
#> # `Unknown ACK-M1 (Family) 329744` <dbl>, ...
Created on 2022-09-26 by the reprex package (v2.0.1)
Note that this will potentially lead to a table with thousands of columns (about 20k in the case of GlobalPatterns), which might be hard to work with.
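If that's unwieldy, one option (a sketch; the taxon below is just one name picked from the output above) is to keep only the metadata plus the taxa you actually need:
taxa_of_interest <- c("Haemophilusparainfluenzae 94166")  # adjust to your taxa of interest
ps_wide %>%
  select(all_of(meta_vars), all_of(taxa_of_interest))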
I have a tibble which contains a column of base64-encoded strings like so:
mytib <- tibble(encoded_var = c("VGVzdGluZ3Rlc3Rpbmc=", "QW5vdGhlcnRlc3Q="))
When I try to decode it with base64::base64decode
mytib %>%
mutate(decoded_var = base64decode(encoded_var))
I receive an error:
Error in `mutate()`:
! Problem while computing `decoded_var = base64decode(encoded_var)`.
x `decoded_var` must be size 2 or 1, not 25.
I'm looking to have a tibble with a column of decoded, human-readable base64 strings. I'd also like to do that using the mutate tidyverse syntax. How can I achieve that?
Update: The tibble should look like this:
# A tibble: 2 × 2
encoded_var decoded_var
<chr> <chr>
1 VGVzdGluZ3Rlc3Rpbmc= Testingtesting
2 QW5vdGhlcnRlc3Q= Anothertest
base64enc::base64decode produces a raw vector, so you need to carry out the conversion rowwise and wrap the result with rawToChar:
mytib %>%
rowwise() %>%
mutate(decoded_var = rawToChar(base64decode(encoded_var)))
#> # A tibble: 2 x 2
#> # Rowwise:
#> encoded_var decoded_var
#> <chr> <chr>
#> 1 VGVzdGluZ3Rlc3Rpbmc= Testingtesting
#> 2 QW5vdGhlcnRlc3Q= Anothertest
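If you prefer to avoid rowwise(), a purrr-based sketch of the same idea:
library(purrr)
mytib %>%
  mutate(decoded_var = map_chr(encoded_var, ~ rawToChar(base64enc::base64decode(.x))))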
The problem is that the caTools::base64decode function only works on one string at a time, because a single string could contain several values. If you always have a single character value in your variable, then you can vectorize it:
library(tidyverse)
mytib <- tibble(encoded_var = c("VGVzdGluZ3Rlc3Rpbmc=", "QW5vdGhlcnRlc3Q="))
mytib %>%
mutate(decoded_var = Vectorize(caTools::base64decode)(encoded_var, "character"))
#> # A tibble: 2 × 2
#> encoded_var decoded_var
#> <chr> <chr>
#> 1 VGVzdGluZ3Rlc3Rpbmc= Testingtesting
#> 2 QW5vdGhlcnRlc3Q= Anothertest
Created on 2022-03-14 by the reprex package (v2.0.1)
EDITED TO ADD: Actually, there are (at least) four different packages that provide base64decode functions. I used caTools. There are also versions in the processx, xfun and base64enc packages. (The one in xfun is actually named base64_decode.) This is why it's important to show reproducible code here on StackOverflow. The reprex package makes this very easy.
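For instance, a sketch of calling two of them explicitly via namespacing (assuming both packages are installed):
rawToChar(base64enc::base64decode("QW5vdGhlcnRlc3Q="))  # base64enc returns a raw vector
caTools::base64decode("QW5vdGhlcnRlc3Q=", "character")  # caTools can return a character directly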
So I am trying to write an automated report in R with functions. One of the questions I am trying to answer is this: "During the first week of the month, what were the 10 most viewed products? Show the results in a table with the product's identifier, category, and count of the number of views." To do this I wrote the following function:
most_viewed_products_per_week <- function (month,first_seven_days, views){
month <- views....February.2020.2
first_seven_days <- function( month, date_1, date_2){
date_1 <-2020-02-01
date_2 <- 2020-02-07
return (first_seven_days)}
views <-function(views, desc){
return (views.head(10))}
}
print(most_viewed_products_per_week)
However, the output I get is this:
function (month,first_seven_days, views){
month <- views....February.2020.2
first_seven_days <- function( month, date_1, date_2){
date_1 <-2020-02-01
date_2 <- 2020-02-07
return (first_seven_days)}
views <-function(views, desc){
return (views.head(10))}
How do I fix that?
This report has more questions like this, so I am trying to get my function writing as correct as possible from the start.
Thanks in advance,
Edo
It is good practice to code in functions. Still, I recommend you first get your code doing what you want, and then think about which parts you want to wrap in a function (for future re-use). This is to get you going.
In general: to support your analysis, make sure that your data is in the right class, i.e. dates are formatted as dates, numbers as doubles or integers, etc. This will give you access to many helper functions and packages.
For the case at hand, read up on {tidyverse}, in particular {dplyr}, which can help you with coding pipes.
simulate data
As mentioned, you will find many friends on StackOverflow if you provide a reproducible example.
Your question suggests your data looks a bit like the following simulated data.
Adapt as appropriate (or provide an example):
library(tibble) # tibble are modern data frames
library(dplyr) # for crunching tibbles/data frames
library(lubridate) # tidyverse package for date (and time) handling
df <- tribble( # create row-tibble
~date, ~identifier, ~category, ~views
,"2020-02-01", 1, "TV", 27
,"2020-02-02", 2, "PC", 40
,"2020-02-03", 1, "TV", 12
,"2020-02-03", 2, "PC", 2
,"2020-02-08", 3, "UV", 200
) %>%
mutate(date = ymd(date)) # date is read in as character; lubridate::ymd() parses it to a date
This yields
> df
# A tibble: 5 x 4
date identifier category views
<date> <dbl> <chr> <dbl>
1 2020-02-01 1 TV 27
2 2020-02-02 2 PC 40
3 2020-02-03 1 TV 12
4 2020-02-03 2 PC 2
5 2020-02-08 3 UV 200
Notice: the date column is now in date format.
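You can verify this (a quick sketch):
class(df$date)
#> [1] "Date"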
work your algorithm
From your attempt it follows that you want to extract the first 7 days.
Since we have a date column, we can use a date function to help us here.
{lubridate}'s day() extracts the day number.
> df %>% filter(day(date) <= 7)
# A tibble: 4 x 4
date identifier category views
<date> <dbl> <chr> <dbl>
1 2020-02-01 1 TV 27
2 2020-02-02 2 PC 40
3 2020-02-03 1 TV 12
4 2020-02-03 2 PC 2
Anything outside the first 7 days is gone.
Next you want to summarise to get your product views total.
df %>%
## ---------- c.f. above ------------
filter(day(date) <= 7) %>%
## ---------- summarise in bins that you need := groups -------
group_by(identifier, category) %>%
summarise(total_views = sum(views)
, .groups = "drop" ) # if grouping is not needed "drop" it
This gives you:
# A tibble: 2 x 3
identifier category total_views
<dbl> <chr> <dbl>
1 1 TV 39
2 2 PC 42
Now pick the top-10 and sort the order:
df %>%
## ---------- c.f. above ------------
filter(day(date) <= 7) %>%
group_by(identifier, category) %>%
summarise(total_views = sum(views), .groups = "drop" ) %>%
## ---------- make use of another helper function of dplyr
top_n(n = 10, total_views) %>% # note: top-10 makes no "real" sense here with so few products :), try top_n(1, total_views)
arrange(desc(total_views)) # arrange in descending order on total_views
wrap in function
Now that the workflow is in place, think about breaking your code into the blocks you think are useful.
I leave this to you. You can assign interim results to new data frames, wrap the preparation of the data into a function, and then put the top_n() %>% arrange() step in another function, as sketched below.
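For instance, a sketch of one possible split (the function names are my own invention):
prep_first_week_views <- function(df) {
  # keep the first 7 days and total the views per product
  df %>%
    filter(day(date) <= 7) %>%
    group_by(identifier, category) %>%
    summarise(total_views = sum(views), .groups = "drop")
}

pick_top_products <- function(df, n = 10) {
  # pick the n most viewed products, most viewed first
  df %>%
    top_n(n = n, total_views) %>%
    arrange(desc(total_views))
}

df %>% prep_first_week_views() %>% pick_top_products()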
This yields:
# A tibble: 2 x 3
identifier category total_views
<dbl> <chr> <dbl>
1 2 PC 42
2 1 TV 39
I got an XLSX file with data from a questionnaire for my master's thesis.
The questions and answers for an interviewee are in one row in the second column. The first column contains the date.
The data of the second column comes in a form like this:
"age":"52","height":"170","Gender":"Female",...and so on
I started with:
library(readxl)
test12 <- read_xlsx("Testdaten.xlsx")
library(splitstackshape)
test13 <- concat.split(data = test12, split.col = "age", sep = ",")
That gave me the question and the answer in one column, divided by a ":".
E.g. column 1: "age":"52" and column 2: "height":"170".
But the data is so messy that sometimes a height question and answer turns up in the column for the age question, and for some questionnaires questions and answers are duplicated.
I would need the questions as variables and the answers as observations, but I have no clue how to get there. I could clean the data in Excel first, but since the columns are not constant (e.g. some height questions end up in the age column) and I will receive new data regularly, formatted the same way, I see no chance of doing it there.
Here is an example of the data:
A tibble: 5 x 2
partner.createdAt partner.wphg.info
<chr> <chr>
1 2019-11-09T12:13:11.099Z "{\"age_years\":\"50\",\"job_des\":\"unemployed\",\"height_cm\":\"170\",\"Gender\":\"female\",\"born_in\":\"Italy\",\"Alcoholic\":\"false\",\"knowledge_selfass\":\"5\",\"total_wealth\":\"200000\""
2 2019-11-01T06:43:22.581Z "{\"age_years\":\"34\",\"job_des\":\"self-employed\",\"height_cm\":\"158\",\"Gender\":\"male\",\"born_in\":\"Germany\",\"Alcoholic\":\"true\",\"knowledge_selfass\":\"3\",\"total_wealth\":\"10000\""
3 2019-11-10T07:59:46.136Z "{\"age_years\":\"24\",\"height_cm\":\"187\",\"Gender\":\"male\",\"born_in\":\"England\",\"Alcoholic\":\"false\",\"knowledge_selfass\":\"3\",\"total_wealth\":\"150000\""
4 2019-11-11T13:01:48.488Z "{\"age_years\":\"59\",\"job_des\":\"employed\",\"height_cm\":\"167\",\"Gender\":\"female\",\"born_in\":\"United States\",\"Alcoholic\":\"false\",\"knowledge_selfass\":\"2\",\"total_wealth\":\"1000000~
5 2019-11-08T14:54:26.654Z "{\"age_years\":\"36\",\"height_cm\":\"180\",\"born_in\":\"Germany\",\"Alcoholic\":\"false\",\"knowledge_selfass\":\"5\",\"total_wealth\":\"170000\",\"job_des\":\"employed\",\"Gender\":\"male\""
Thank you so much for your time!
You can loop through each entry, splitting at , as you did. Then you can loop through them all again, splitting at :.
The result will be a bunch of variable/value pairings, which can all be kept stacked in long format. Then you just pivot back into columns.
data
Updated the data based on your edit.
data <- tribble(~partner.createdAt, ~partner.wphg.info,
'2019-11-09T12:13:11.099Z', '{\"age_years\":\"50\",\"job_des\":\"unemployed\",\"height_cm\":\"170\",\"Gender\":\"female\",\"born_in\":\"Italy\",\"Alcoholic\":\"false\",\"knowledge_selfass\":\"5\",\"total_wealth\":\"200000\"',
'2019-11-01T06:43:22.581Z', '{\"age_years\":\"34\",\"job_des\":\"self-employed\",\"height_cm\":\"158\",\"Gender\":\"male\",\"born_in\":\"Germany\",\"Alcoholic\":\"true\",\"knowledge_selfass\":\"3\",\"total_wealth\":\"10000\"',
'2019-11-10T07:59:46.136Z', '{\"age_years\":\"24\",\"height_cm\":\"187\",\"Gender\":\"male\",\"born_in\":\"England\",\"Alcoholic\":\"false\",\"knowledge_selfass\":\"3\",\"total_wealth\":\"150000\"',
'2019-11-11T13:01:48.488Z', '{\"age_years\":\"59\",\"job_des\":\"employed\",\"height_cm\":\"167\",\"Gender\":\"female\",\"born_in\":\"United States\",\"Alcoholic\":\"false\",\"knowledge_selfass\":\"2\",\"total_wealth\":\"1000000\"',
'2019-11-08T14:54:26.654Z', '{\"age_years\":\"36\",\"height_cm\":\"180\",\"born_in\":\"Germany\",\"Alcoholic\":\"false\",\"knowledge_selfass\":\"5\",\"total_wealth\":\"170000\",\"job_des\":\"employed\",\"Gender\":\"male\"')
libraries
We need a few here. Or you can just load the whole tidyverse.
library(stringr)
library(purrr)
library(dplyr)
library(tibble)
library(tidyr)
function
This function will create a data frame (or tibble) for each question. The first column is the date, the second is the variable, the third is the value.
clean_record <- function(date, text) {
clean_records <- str_split(text, pattern = ",", simplify = TRUE) %>%
str_remove_all(pattern = "\\\"") %>% # remove double quote
str_remove_all(pattern = "\\{|\\}") %>% # remove curly brackets
str_split(pattern = ":", simplify = TRUE)
tibble(date = as.Date(date), variable = clean_records[,1], value = clean_records[,2])
}
iteration
Now we use pmap_dfr from purrr to loop over the rows, outputting each row with an id variable named record.
This will stack the data as described in the function. The mutate() line converts all variable names to lowercase. The distinct() line will filter out rows that are exact duplicates.
What we do then is just pivot on the variable column. Of course, replace data with whatever you name your data frame.
data_clean <- pmap_dfr(data, ~ clean_record(..1, ..2), .id = "record") %>%
mutate(variable = tolower(variable)) %>%
distinct() %>%
pivot_wider(names_from = variable, values_from = value)
result
The result is something like this. Note how some fields appear in a different order in the raw data, but it still works. You are probably not done just yet: all columns are now of type character, so you need to figure out the desired type for each and convert.
# A tibble: 5 x 10
record date age_years job_des height_cm gender born_in alcoholic knowledge_selfass total_wealth
<chr> <date> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 1 2019-11-09 50 unemployed 170 female Italy false 5 200000
2 2 2019-11-01 34 self-employed 158 male Germany true 3 10000
3 3 2019-11-10 24 NA 187 male England false 3 150000
4 4 2019-11-11 59 employed 167 female United States false 2 1000000
5 5 2019-11-08 36 employed 180 male Germany false 5 170000
For example, convert age_years to numeric.
data_clean %>%
mutate(age_years = as.numeric(age_years))
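Or, as a sketch, let {readr} guess all of the column types in one go (only character columns are re-parsed):
library(readr)
data_clean %>% type_convert()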
I am sure you may run into other things, but this should be a start.
I have some repeated measures data I'm trying to clean in R. At this point, it is in the long format and I'm trying to fix some entries before I move to a wide format - for example, if people took my survey too many times I'm going to drop the rows. I have two main problems that I'm trying to solve:
Changing an entry
If someone took the survey from the "pre-test link" when it was actually supposed to be a post-test, I'm fixing it with the following code:
data[data$UserID == 52118254, "Prepost"][2] <- 2
This filters out the entries from that person based on ID, then changes the second entry to be coded as a post-test. This code has enough meaning that reviewing it tells me what is happening.
Dropping a row
I'm struggling to get meaningful code to delete extra rows - for example if someone accidentally clicked on my link twice. I have data like the following:
UserID Prepost Duration..in.seconds.
1 52118250 1 357
2 52118284 1 226
3 52118284 1 11 #This is an extra attempt to remove
4 52118250 2 261
5 52118284 2 151
#to reproduce:
structure(list(UserID = c(52118250, 52118284, 52118284, 52118250, 52118284), Prepost = c("1", "1", "1", "2", "2"), Duration..in.seconds. = c("357", "226", "11", "261", "151")), class = "data.frame", row.names = c(NA, -5L), .Names = c("UserID", "Prepost", "Duration..in.seconds."))
I can filter by UserID to see who has taken it too many times, and I'm looking for a way to easily remove those rows from the dataset. In this case, UserID 52118284 has taken it three times and the second attempt needs to be removed. If it is "readable" like the other fix, that is better.
I'd use a collection of dplyr functions as shown below. To explain:
group_by(UserID, Prepost) will help to apply functions separately to each user within each survey wave (grouping on UserID alone would also flag legitimate post-tests as repeat attempts).
mutate(click_n = row_number()) iteratively counts each user's appearances and saves the count as a new variable, click_n.
library(dplyr)
data %>%
  group_by(UserID, Prepost) %>%
  mutate(click_n = row_number())
#> Source: local data frame [5 x 4]
#> Groups: UserID, Prepost [4]
#>
#> UserID Prepost Duration..in.seconds. click_n
#> <dbl> <chr> <chr> <int>
#> 1 52118250 1 357 1
#> 2 52118284 1 226 1
#> 3 52118284 1 11 2
#> 4 52118250 2 261 1
#> 5 52118284 2 151 1
filter(click_n == 1) can then be used to keep only 1st attempts as shown below.
data <- data %>%
  group_by(UserID, Prepost) %>%
  mutate(click_n = row_number()) %>%
  filter(click_n == 1)
data
#> Source: local data frame [4 x 4]
#> Groups: UserID, Prepost [4]
#>
#> UserID Prepost Duration..in.seconds. click_n
#> <dbl> <chr> <chr> <int>
#> 1 52118250 1 357 1
#> 2 52118284 1 226 1
#> 3 52118250 2 261 1
#> 4 52118284 2 151 1
Note that this approach assumes that your data frame is ordered, i.e. first clicks appear close to the top; see the sketch below if that's not guaranteed.
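If you can't rely on row order, sort first. A sketch, assuming a hypothetical StartDate timestamp column (not in the example data):
data %>%
  arrange(StartDate) %>%  # StartDate is hypothetical; use whatever marks attempt time
  group_by(UserID, Prepost) %>%
  filter(row_number() == 1)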
If you're unfamiliar with %>%, look for help on the "pipe operator".
EXTRA:
To bring the comment into the answer: once you're comfortable with what's going on here, you can skip the mutate line and just do the following:
data %>% group_by(UserID, Prepost) %>% filter(row_number() == 1)
A simple solution to remove duplicates is below:
subset(data, !duplicated(data[c("UserID", "Prepost")]))
However, you may also want to consider subsetting by duration, such as dropping attempts that took less than 30 seconds; a sketch follows below.
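A sketch of that duration idea (the 30-second threshold is arbitrary; Duration..in.seconds. is stored as character in the example data, hence the conversion):
subset(data, as.numeric(Duration..in.seconds.) >= 30)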
Thanks @Simon for the suggestions. One criterion I wanted was that the code makes sense as I "read" it. As I thought more, another criterion emerged: I wanted to be deliberate about what changes to make. So I incorporated Simon's recommendation to make a separate column and then use dplyr::filter() to exclude the marked rows. Here's what an example segment of code looked like:
#Change pre/post entries
data[data$UserID == 52118254, "Prepost"][2] <- 2
#Mark rows to delete
data$toDelete <- NA #Makes new empty column for marking deletions
data[data$UserID == 52118284,][2, "toDelete"] <- 1 #Marks row for deletion
#Filter to exclude rows
data %>% filter(is.na(toDelete))
#Optionally add "%>% select(-toDelete)" to remove the extra column
In my context, the advantages here are that everything is deliberate rather than automatic, and changes are anchored to the data rather than to row numbers that might change. I'd still welcome any feedback or other ways of achieving this (maybe in a single step).