Conversion of Matrix to Columns - R

Say you have
Name August September October November
Bob 5 4 3 2
George 3 2 2 4
Gina 1 4 2 1
And you want to convert it into 3 columns, like so:
Name Month Output
Bob August 5
Bob September 4
.....
I can see how to do it in VBA through the following link: https://www.extendoffice.com/documents/excel/2773-excel-convert-matrix-to-list.html
I'm unsure how to do it in R. All of my searching has only turned up ways to split the matrix into vectors, which isn't what I want.

If you have a dataframe, say df, its column names are available as data in their own right via names():
names(df)[2:5]
This gives the month names, which become the values of the new Month column once the data is reshaped from wide to long (see the sketch below).
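A fuller sketch of the reshape itself, assuming the wide data frame is called df with the columns shown in the question (this uses tidyr, which neither the question nor the answer mentions):
library(tidyr)
# pivot the four month columns into Month/Output pairs
long <- pivot_longer(df,
                     cols = August:November,
                     names_to = "Month",
                     values_to = "Output")
# 'long' now has one row per Name/Month combination, e.g.
#   Bob  August    5
#   Bob  September 4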

Related

Convert column with minutes and seconds in seconds only in R

My dataframe has this format:
name <- c("Carlos", "Matthew", "Toth", "Mike", "Joseph", "Andrey")
time <- c("79:45","78","74:45","65:30","64","57")
myexample <- cbind.data.frame(name, time)
> myexample
     name  time
1  Carlos 79:45
2 Matthew    78
3    Toth 74:45
4    Mike 65:30
5  Joseph    64
6  Andrey    57
How can I convert the time column, which has two formats ("79:45" and "78"), to seconds?
The time column is in character format :(
As output:
> myexample
     name  time
1  Carlos 79:45
2 Matthew 78:00
3    Toth 74:45
4    Mike 65:30
5  Joseph 64:00
6  Andrey 57:00
Here is one option using sub:
myexample$time <- sub("^(\\d{1,})$", "\\1:00", myexample$time)
myexample
     name  time
1  Carlos 79:45
2 Matthew 78:00
3    Toth 74:45
4    Mike 65:30
5  Joseph 64:00
6  Andrey 57:00
Normally the best thing to do here would be to parse the text times into a proper time type. But since you are storing non-standard values, where the minutes component can be greater than 60, I chose to leave it as text for the moment.
Another option is to use grepl to detect the presence of ":" and append ":00" to the values where ":" is not present.
myexample$time <- ifelse(grepl(":", myexample$time),
                         as.character(myexample$time),
                         paste0(myexample$time, ":00"))
myexample
#      name  time
# 1  Carlos 79:45
# 2 Matthew 78:00
# 3    Toth 74:45
# 4    Mike 65:30
# 5  Joseph 64:00
# 6  Andrey 57:00
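If the goal really is total seconds, as the title suggests, here is a minimal sketch (my own addition, not from the original answers), assuming the normalised myexample$time from above:
# split "MM:SS" on ":" and combine into total seconds;
# single-number values such as "78" are treated as whole minutes
to_seconds <- function(x) {
  vapply(strsplit(as.character(x), ":", fixed = TRUE), function(p) {
    p <- as.numeric(p)
    if (length(p) == 1) p * 60 else p[1] * 60 + p[2]
  }, numeric(1))
}
myexample$seconds <- to_seconds(myexample$time)
# e.g. "79:45" becomes 4785 and "78:00" becomes 4680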

Find the favorite and analyse sequence questions in R

We have a daily meeting when participants nominate each other to speak. The first person is chosen randomly.
I have a dataframe that consists of names and the order of speech every day.
The columns are day1, day2, day3, etc.
The values in the rows are numbers giving the order of speech on that particular day.
NA means that the person did not participate on that day.
Name day1 day2 day3 day4 ...
Albert 1 3 1 ...
Josh 2 2 NA
Veronica 3 5 3
Tim 4 1 2
Stew 5 4 4
...
I want to create two analyses. First, I want to create a dataframe of who has chosen whom the most times. (I know that the result depends on whether a participant was already nominated earlier that day and therefore cannot be nominated again; I will handle that later, but for now this is enough.)
It should look like this:
Name Favorite
Albert Stew
Josh Veronica
Veronica Tim
Tim Stew
...
My questions (feel free to answer only one if you can):
1. What code should I use for this without having to manually put the names in a different dataframe?
2. How should I handle a tie, for example if Josh chose Veronica and Tim first the same number of times? Later I want to visualise it and I have no idea how to handle ties.
I also would like to analyse the results to visualise strong connections.
Like to show that there are people who usually chose each other, etc.
Is there a good package that is specialised for these? Or how should I get to it?
I do not need DNA-style sequence analysis, only simple sequences like these, but I have not found a suitable package yet.
Thanks for your help!
If I am not misunderstanding your problem, here is some code to count the number of occurrences of who chose whom as the next speaker. I added a fourth day to have some counts greater than 1. There are ties in the result; choosing the first pair of each group by speaker ('who') may be a solution:
df <- read.table(textConnection(
"Name,day1,day2,day3,day4
Albert,1,3,1,3
Josh,2,2,,2
Veronica,3,5,3,1
Tim,4,1,2,4
Stew,5,4,4,5"),header=TRUE,sep=",",stringsAsFactors=FALSE)
library(dplyr)   # for lead(), filter(), group_by(), summarise(), arrange() and %>%
purrr::map(colnames(df)[-1],
           function(x) {
             # speakers in the order they spoke on day x (NAs dropped),
             # each paired with the person who spoke right after them
             who <- df$Name[order(df[[x]], na.last = NA)]
             data.frame(who, lead(who), stringsAsFactors = FALSE)
           }) %>%
  replyr::replyr_bind_rows() %>%
  filter(!is.na(lead.who.)) %>%
  group_by(who, lead.who.) %>%
  summarise(n = n()) %>%
  arrange(who, desc(n))
Input:
Name day1 day2 day3 day4
1 Albert 1 3 1 3
2 Josh 2 2 NA 2
3 Veronica 3 5 3 1
4 Tim 4 1 2 4
5 Stew 5 4 4 5
Result:
# A tibble: 12 x 3
# Groups: who [5]
who lead.who. n
<chr> <chr> <int>
1 Albert Tim 2
2 Albert Josh 1
3 Albert Stew 1
4 Josh Albert 2
5 Josh Veronica 1
6 Stew Veronica 1
7 Tim Stew 2
8 Tim Josh 1
9 Tim Veronica 1
10 Veronica Josh 1
11 Veronica Stew 1
12 Veronica Tim 1
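For the follow-up question about visualising strong connections, one possibility (my suggestion, not covered by the original answer) is to treat the summarised counts as a weighted edge list and draw them with igraph:
library(igraph)
# 'edges' stands for the summarised tibble above,
# with columns who, lead.who. and n
g <- graph_from_data_frame(edges, directed = TRUE)
E(g)$width <- E(g)$n              # thicker edges for more frequent choices
plot(g, edge.arrow.size = 0.5)
Mutual choices then show up as pairs of thick arrows running in both directions.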

Add column to R dataframe that is length of string in another column

This should be EASY but I can't figure it out and search didn't help. I'd like to add a column to a dataframe that is just the length of the strings in another column.
So say I have a data frame of names like such:
Name Last
1 John Doe
2 Edgar Poe
3 Walt Whitman
4 Jane Austen
I'd like to append a new column with the string length of, say, the last name, so it would look like:
Name Last Length
1 John Doe 3
2 Edgar Poe 3
3 Walt Whitman 7
4 Jane Austen 6
Thanks
We can use str_count from stringr
library(stringr)
df1$Length <- str_count(df1$Last)
df1$Length
[1] 3 3 7 6
If you want to filter by the length of that column, then do the following:
library(dplyr)
df1 <- df1 %>%
  filter(nchar(Last) <= 3)
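A base R alternative (my own note, not part of the original answer) avoids the stringr dependency; nchar() counts the characters directly:
# same result as str_count(): the number of characters in each last name
df1$Length <- nchar(as.character(df1$Last))
df1$Length
# [1] 3 3 7 6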

New column from non-standard date factor in R

I have a dataframe with an oddly formatted date column. I'd like to create a column showing just the year from the original date column, but I'm having trouble coming up with a way to do this because the date column is being treated as a factor. Any advice on how to do this efficiently would be appreciated.
Example
starting with:
org <- c("a","b","c","d")
country <- c("1","2","3","4")
date <- c("01-09-14","01-10-07","11-31-99","10-31-12")
toy <- data.frame(cbind(org,country,date))
toy
org country date
1 a 1 01-09-14
2 b 2 01-10-07
3 c 3 11-31-99
4 d 4 10-31-12
str(toy$date)
Factor w/ 4 levels "01-09-14","01-10-07",..: 1 2 4 3
Desired result:
org country Year
1 a 1 2014
2 b 2 2007
3 c 3 1999
4 d 4 2012
This should work:
transform(toy,Year=format(strptime(date,"%m-%d-%y"),"%Y"))
This produces
## org country date Year
## 1 a 1 01-09-14 2014
## 2 b 2 01-10-07 2007
## 3 c 3 11-31-99 <NA>
## 4 d 4 10-31-12 2012
I initially thought that the NA value was because the %y format indicator wasn't smart enough to handle previous-century dates, but ?strptime says:
‘%y’ Year without century (00-99). On input, values 00 to 68 are
prefixed by 20 and 69 to 99 by 19 - that is the behaviour
specified by the 2004 and 2008 POSIX standards, but they do
also say ‘it is expected that in a future version the default
century inferred from a 2-digit year will change’.
implying that it should be able to handle it.
The problem is actually that 31 November doesn't exist ...
(You can drop the date column at your leisure ...)
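A hedged alternative (my suggestion, not part of the original answer) uses lubridate, which parses the month-day-year strings and extracts the year in one step:
library(lubridate)
# mdy() parses "MM-DD-YY" strings; the factor is coerced to character first,
# and the impossible date "11-31-99" still becomes NA, with a parse warning
toy$Year <- year(mdy(as.character(toy$date)))
This gives the same 2014, 2007, NA, 2012 pattern as the strptime approach.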

selecting rows with specific conditions in R

I currently have data that looks like this for multiple ids (which range up to around 1600):
id year name status
1 1980 James 3
1 1981 James 3
1 1982 James 3
1 1983 James 4
1 1984 James 4
1 1985 James 1
1 1986 James 1
1 1987 James 1
2 1982 John 2
2 1983 John 2
2 1984 John 1
2 1985 John 1
I want to subset this data so that it only has the rows where status = 1 and the row right before that. I also want to eliminate repeated 1s and keep only the first 1. In conclusion I would want:
id year name status
1 1984 James 4
1 1985 James 1
2 1983 John 2
2 1984 John 1
I'm doing this because I'm trying to figure out, for each year, how many people changed from a given status to status 1. I only know the subset command and I don't think I can get this data from doing subset(data, subset=(status==1)). How can I also keep the row right before it?
One more note to add to this question: I did not get the same results when I applied the first reply (which uses the plyr package) and the third reply (which uses the duplicated command). I found that the first reply preserved the information accurately while the third one did not.
This does what you want.
library(plyr)
ddply(d, .(name), function(x) {
  # row index of the first status == 1 for this person
  i <- match(1, x$status)
  if (is.na(i))
    NULL
  else
    x[c(i - 1, i), ]
})
id year name status
1 1 1984 James 4
2 1 1985 James 1
3 2 1983 John 2
4 2 1984 John 1
Here's a data.table solution - for each run of equal status values (the cumsum bit), it looks at the first row of the run and, if its status is 1, takes that row and the previous one:
library(data.table)
dt = data.table(your_df)
dt[dt[, if (status[1] == 1) c(.I[1] - 1, .I[1]),
      by = cumsum(c(0, diff(status) != 0))]$V1]
# id year name status
#1: 1 1984 James 4
#2: 1 1985 James 1
#3: 2 1983 John 2
#4: 2 1984 John 1
Using base R, here is a way to do this:
# this first line is how I imported your data after highlighting and copying (i.e. ctrl+c)
d<-read.table("clipboard",header=T)
# find entries where the subsequent row's "status" is equal to 1
# really what's going on is finding rows where "status" = 1, then subtracting 1
# to find the index of the previous row
e<-d[which(d$status==1)-1 ,]
# be careful if your first "status" entry = 1...
# What you want
# Here R will look for entries where "name" and "status" are both repeats of a
# previous row and where "status" = 1, and it will get rid of those entries
e[!(duplicated(e[,c("name","status")]) & e$status==1),]
id year name status
5 1 1984 James 4
6 1 1985 James 1
10 2 1983 John 2
11 2 1984 John 1
I like the data.table solution myself, but there actually is a way to do it with subset.
# import data from clipboard
x = read.table(pipe("pbpaste"),header=TRUE)
# Get the result table that you want
x1 = subset(x, status==1 | c(status[-1],0)==1)
result = subset(x1, !duplicated(cbind(name,status)))
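For completeness, here is a dplyr sketch of the same idea (my own addition, assuming the data frame is called d as in the plyr answer): within each id, find the first status == 1 row and keep it together with the row just before it.
library(dplyr)
d %>%
  group_by(id) %>%
  # match(1, status) is the first row with status 1 in the group;
  # keep that row and the one immediately before it
  filter(row_number() %in% (match(1, status) + c(-1, 0))) %>%
  ungroup()
Groups that never reach status 1 drop out, because match() returns NA and the %in% test is then FALSE for every row.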
