deletion of leading zeros in string split in R

The code below downloads census data from the United States Census Bureau, names the columns, and aims to split the column called FIPS into two. The FIPS column is numeric. The first two characters in positions 1 and 2 should go into one column, StateFIPS, and the last two characters in positions 4 and 5 will make up the CountyFIPS column. The character in the 3rd position will be discarded. The problem I run into is that leading zeros are deleted.
In a previous post, I provided only a segment of code to learn how to split the string, which helped. However, when I applied it to my bigger code chunk it did not work. How do I prevent the deletion of leading zeros while splitting a string in the code below?
#State census data from 1990 to 1999
censusneeded <- seq(90, 99, 1)
for(i in 1:length(censusneeded)){
  URL <- paste("https://www.census.gov/popest/data/intercensal/st-co/tables/STCH-Intercensal/STCH-icen19", censusneeded[i], ".txt", sep = "")
  destfile <- paste(censusneeded[i], "statecensus.txt", sep = "")
  download.file(URL, destfile)
}
#Data fields Year, FIPS Code, FIPS code county, Age Group, Race-Sex, Ethnic Origin, POP
#We need to give names to the columns and separate the FIPS State Code and FIPS Code county
cleancensus_1990_1999 <- function(statecensus){
  colnames(statecensus_90_99) <- c("Year", "FIPS", "AgeGroup", "RaceSex",
                                   "HispanicStatus", "Population") # label the columns
  ## separate the FIPS column into a column of State FIPS code and County FIPS code
  x <- c(as.character(statecensus_90_99$FIPS))
  # x <- as.vector(as.character(statecensus_90_99$FIPS)) # I thought converting the column to a character and vector would prevent the drop of leading zeros when splitting the string
  newfips <- lapply(2:3, function(i) if(i == 2) str_sub(x, end = i) else str_sub(x, i + 1))
  StateFIPS <- newfips[[1]]
  # StateFIPS <- substr(x, 1, 2) # 2nd attempt also doesn't work
  CountyFIPS <- newfips[[2]]
  # CountyFIPS <- str_sub(x, 4, 5) # 2nd attempt also did not work because it drops leading zeros
  return(statecensus)
}
#lets apply the cleaning to census 90 to 99
for(i in 1:length(censusneeded)){
  statecensus <- read.table(paste(censusneeded[i], "statecensus.txt", sep = ""))
  newcensus <- cleancensus_1990_1999(statecensus)
  write.csv(newcensus, paste(censusneeded[i], "state1990_1999.txt", sep = ""))
}
Thank you!

I rewrote your function so that it returns the original data frame plus two additional columns, StateFIPS and CountyFIPS. (Side note: do you really only want a 2-character CountyFIPS? If so, 06001 (Alameda County, CA) and 06101 (Sutter County, CA) will both have a CountyFIPS of "01".)
cleancensus <- function(d) {
  colnames(d) <- c("Year", "FIPS", "AgeGroup", "RaceSex",
                   "HispanicStatus", "Population")
  d$FIPS <- sprintf("%05d", d$FIPS)
  d$StateFIPS <- substr(d$FIPS, 1, 2)
  d$CountyFIPS <- substr(d$FIPS, 4, 5)
  d
}
Try out the function:
data_url <- "https://www.census.gov/popest/data/intercensal/st-co/tables/STCH-Intercensal/STCH-icen1999.txt"
statecensus <- read.table(url(data_url))
d <- cleancensus(statecensus)
head(d)
# Year FIPS AgeGroup RaceSex HispanicStatus Population StateFIPS CountyFIPS
# 1 99 01001 0 1 1 218 01 01
# 2 99 01001 0 2 1 239 01 01
# 3 99 01001 1 1 1 947 01 01
# 4 99 01001 1 2 1 928 01 01
# 5 99 01001 2 1 1 1460 01 01
# 6 99 01001 2 2 1 1355 01 01
It behaves as expected (leading zeros are retained). Now, suppose we write it to csv, and read it back:
write.csv(d, "~/Desktop/census99.csv", row.names = FALSE)
d <- read.csv("~/Desktop/census99.csv")
head(d)
# Year FIPS AgeGroup RaceSex HispanicStatus Population StateFIPS CountyFIPS
# 1 99 1001 0 1 1 218 1 1
# 2 99 1001 0 2 1 239 1 1
# 3 99 1001 1 1 1 947 1 1
# 4 99 1001 1 2 1 928 1 1
# 5 99 1001 2 1 1 1460 1 1
# 6 99 1001 2 2 1 1355 1 1
The leading zeros are gone. This is because read.csv coerces character vectors to numeric where it can. There are (at least) two ways to solve this:
1. sprintf. Use the sprintf function to pad the numbers with leading zeros. For example, sprintf("%03d", 7) takes an integer value ("d"), makes it 3 characters wide, and pads it with leading 0s where necessary, returning "007":
d$FIPS <- sprintf("%05d", d$FIPS)
d$StateFIPS <- sprintf("%02d", d$StateFIPS)
d$CountyFIPS <- sprintf("%02d", d$CountyFIPS)
2. Specify the column classes when you read in the data:
d <- read.csv("~/Desktop/census99.csv",
              colClasses = c("numeric",            # Year
                             "character",          # FIPS
                             rep("numeric", 4),    # AgeGroup..Population
                             rep("character", 2)   # StateFIPS, CountyFIPS
              ))
head(d)
# Year FIPS AgeGroup RaceSex HispanicStatus Population StateFIPS CountyFIPS
# 1 99 01001 0 1 1 218 01 01
# 2 99 01001 0 2 1 239 01 01
# 3 99 01001 1 1 1 947 01 01
# 4 99 01001 1 2 1 928 01 01
# 5 99 01001 2 1 1 1460 01 01
# 6 99 01001 2 2 1 1355 01 01
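If it helps, here is a hedged sketch of how the corrected cleancensus function could slot back into your download-and-clean loop, keeping the FIPS columns as character when the csv files are read back (the file names simply follow the question's convention and are illustrative):
censusneeded <- seq(90, 99, 1)

for(i in 1:length(censusneeded)){
  statecensus <- read.table(paste0(censusneeded[i], "statecensus.txt"))
  newcensus <- cleancensus(statecensus)
  write.csv(newcensus, paste0(censusneeded[i], "state1990_1999.csv"), row.names = FALSE)
}

# Later, read a cleaned file back without losing the leading zeros:
d <- read.csv("99state1990_1999.csv",
              colClasses = c("numeric", "character", rep("numeric", 4),
                             rep("character", 2)))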

Related

Split numeric variables by decimals in R

I have a data frame with a column that contains numeric values, which represent the price.
ID    Total
1124  12.34
1232  12.01
1235  13.10
I want to split the column Total by "." and create 2 new columns with the euro and cent amount. Like this:
ID    Total  Euro  Cent
1124  12.34  12    34
1232  12.01  12    01
1235  13.10  13    10
1225  13.00  13    00
The euro and cent column should also be numeric.
I tried:
df[c('Euro', 'Cent')] <- str_split_fixed(df$Total, "(\\.)", 2)
But I get 2 new columns of type character that look like this:
ID    Total  Euro  Cent
1124  12.34  12    34
1232  12.01  12    01
1235  13.10  13    1
1225  13.00  13
If I convert the character columns (Euro and Cent) to numeric like this:
as.numeric(df$Euro)
the 00 cent value turns into NA and the 10 cents value turns into 1 cent.
Any help is welcome.
Two methods:
If class(dat$Total) is numeric, you can do this:
dat <- transform(dat, Euro = Total %/% 1, Cent = 100 * (Total %% 1))
dat
# ID Total Euro Cent
# 1 1124 12.34 12 34
# 2 1232 12.01 12 1
# 3 1235 13.10 13 10
%/% is the integer-division operator, %% the modulus operator.
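A quick illustration of the two operators on a single value (the round() caveat is my own note, not part of the original answer; decimal fractions are not stored exactly in floating point):
12.34 %/% 1              # 12
12.34 %% 1               # roughly 0.34
12.34 %% 1 * 100         # prints as 34, but may carry a tiny floating-point error
round(12.34 %% 1 * 100)  # wrap in round() if you need exact integer cents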
If class(dat$Total) is character, then
dat <- transform(dat, Euro = sub("\\..*", "", Total), Cent = sub(".*\\.", "", Total))
dat
# ID Total Euro Cent
# 1 1124 12.34 12 34
# 2 1232 12.01 12 01
# 3 1235 13.10 13 10
The two new columns are also character. For this, you may want one of two more steps:
Remove the leading 0s and keep them as character:
dat[,c("Euro", "Cent")] <- lapply(dat[,c("Euro", "Cent")], sub, pattern = "^0+", replacement = "")
dat
# ID Total Euro Cent
# 1 1124 12.34 12 34
# 2 1232 12.01 12 1
# 3 1235 13.10 13 10
Convert to numbers:
dat[,c("Euro", "Cent")] <- lapply(dat[,c("Euro", "Cent")], as.numeric)
dat
# ID Total Euro Cent
# 1 1124 12.34 12 34
# 2 1232 12.01 12 1
# 3 1235 13.10 13 10
(You can also use as.integer if you know both columns will always be such.)
Just use standard numeric functions:
df$Euro <- floor(df$Total)
df$Cent <- df$Total %% 1 * 100

Create a table out of a tibble

I do have the following dataframe with 45 million observations:
year month variable
1992 1 0
1992 1 1
1992 1 1
1992 2 0
1992 2 1
1992 2 0
My goal is to count the frequency of the variable for each month of a year.
I was already able to generate these sums with cps_data as my dataframe and SKILL_1 as my variable.
cps_data %>%
group_by(YEAR, MONTH) %>%
summarise_at(vars(SKILL_1),
list(name = sum))
Logically, I obtained 348 different rows as a tibble. Now I struggle to create a new table with these values. My new table should look similar to my tibble. How can I do that? Is there even a way? I have already tried to read in an Excel file with a date range from 01/1992 to 01/2021 in order to obtain exactly 349 rows and then merge it with the rows of the tibble, but it did not work.
# A tibble: 349 x 3
# Groups: YEAR [30]
YEAR MONTH name
<dbl> <int+lbl> <dbl>
1 1992 1 [January] 499
2 1992 2 [February] 482
3 1992 3 [March] 485
4 1992 4 [April] 457
5 1992 5 [May] 434
6 1992 6 [June] 470
7 1992 7 [July] 450
8 1992 8 [August] 438
9 1992 9 [September] 442
10 1992 10 [October] 427
# ... with 339 more rows
many thanks in advance!!
library(zoo)
createmonthyear <- function(start_date, end_date){
  ym <- seq(as.yearmon(start_date), as.yearmon(end_date), 1/12)
  data.frame(start = pmax(start_date, as.Date(ym)),
             end = pmin(end_date, as.Date(ym, frac = 1)),
             month = month.name[cycle(ym)],
             year = as.integer(ym),
             stringsAsFactors = FALSE)
}
Once you create the function, you can specify the start and end date you want:
left_table <- createmonthyear(as.Date("1991-01-01"), as.Date("2021-01-01"))
then left join the output with what you have
library(dplyr)
right_table <- data.frame(cps_data %>%
group_by(YEAR, MONTH) %>%
summarise_at(vars(SKILL_1),
list(name = sum)))
results <- left_join(left_table, right_table,
                     by = c("year" = "YEAR", "month" = "MONTH"))
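One caveat (my note, not part of the original answer): createmonthyear() returns month names ("January", "February", ...) while MONTH in the summarised data is numeric, so the month keys will not match until one side is converted. A minimal sketch of that extra step, assuming cps_data has YEAR, MONTH and SKILL_1 columns as in the question:
library(dplyr)

# Convert the month names produced by createmonthyear() to month numbers
# so the join keys have the same type, then join on year and month number.
left_table$month_num <- match(left_table$month, month.name)

results <- left_join(left_table, right_table,
                     by = c("year" = "YEAR", "month_num" = "MONTH"))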

Calculating distance between two variables and generating new variable

I would like to create a variable called spill which is given as the sum of the distances between vectors of each row multiplied by the stock value. For example, consider
firm us euro asia africa stock year
A 1 4 3 5 46 2001
A 2 0 1 3 889 2002
B 2 3 1 1 343 2001
B 0 2 1 3 43 2002
C 1 3 4 2 345 2001
I would like to create a vector which takes the distance between two firms at time t and generates the spill variable. For example, for firm A in the year 2001: the cosine distance between firm A and firm B in 2001, i.e. between (1,4,3,5) and (2,3,1,1) (the similarity between the investments in us, euro, asia and africa), is 0.204588, which is multiplied by 343; likewise, the distance between A and C in 2001 is 0.10528, multiplied by 345. Hence the spill variable for firm A in 2001 is 0.2045883 * 343 + 0.1052075 * 345 = 106.4704.
I want to get a table including spill like this
firm us euro asia africa stock year spill
A 1 4 3 5 46 2001 106.4704
A 2 0 1 3 889 2002
B 2 3 1 1 343 2001
B 0 2 1 3 43 2002
C 1 3 4 2 345 2001
Can anyone please advise?
Here is the Stata code [https://www.statalist.org/forums/forum/general-stata-discussion/general/1409182-calculating-distance-between-two-variables-and-generating-new-variable]. I have about 3,000 firms and 30 years; it runs well but very slowly.
dt <- data.frame(id=c("A","A","B","B","C"),us=c(1,2,2,0,1),euro=c(4,0,3,2,3),asia=c(3,1,1,1,4),africa=c(5,3,1,3,2),stock=c(46,889,343,43,345),year=c(2001,2002,2001,2002,2001))
Given the minimal info on how to calculate the similarity distance, I've used a formula from "Find cosine similarity between two arrays", which will return different numbers than yours but should give the same resulting info.
I split the data by year so we can compare the unique ids. I take those individual lists and use lapply to run a for loop comparing all possibilities.
dt <- data.frame(id=c("A","A","B","B","C"), us=c(1,2,2,0,1),euro=c(4,0,3,2,3),asia=c(3,1,1,1,4),africa=c(5,3,1,3,2),stock=c(46,889,343,43,345),year=c(2001,2002,2001,2002,2001))
geo <- c("us","euro","asia","africa")
s <- lapply(split(dt, dt$year), function(a) {
  n <- nrow(a)
  for(i in 1:n){
    csim <- rep(0, n) # reset the vector of cosine distance * stock results
    for(j in 1:n){
      x <- unlist(a[i, geo])
      y <- unlist(a[j, geo])
      csim[j] <- (1 - (x %*% y / sqrt(x %*% x * y %*% y))) * a[j, "stock"]
    }
    a$spill[i] <- sum(csim)
  }
  a
})
do.call(rbind, s)
# id us euro asia africa stock year spill
#2001.1 A 1 4 3 5 46 2001 106.47039
#2001.3 B 2 3 1 1 343 2001 77.93231
#2001.5 C 1 3 4 2 345 2001 72.96357
#2002.2 A 2 0 1 3 889 2002 12.28571
#2002.4 B 0 2 1 3 43 2002 254.00000
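Since you mention about 3,000 firms and 30 years, the double for loop above may get slow. Here is a hedged sketch of the same per-year calculation done with matrix algebra instead (same cosine formula as above; it assumes the columns of dt are named as in the example):
geo <- c("us", "euro", "asia", "africa")

s2 <- lapply(split(dt, dt$year), function(a) {
  m <- as.matrix(a[, geo])                     # firms x regions
  norms <- sqrt(rowSums(m^2))                  # vector lengths
  sim <- (m %*% t(m)) / (norms %o% norms)      # pairwise cosine similarity
  a$spill <- as.vector((1 - sim) %*% a$stock)  # distance weighted by stock, summed over firms
  a
})
do.call(rbind, s2)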

R separate lines into columns specified by start and end

I'd like to split a dataset made of character strings into columns specified by start and end.
My dataset looks something like this:
>head(templines,3)
[1] "201801 1 78"
[2] "201801 2 67"
[3] "201801 1 13"
and I'd like to split it by specifying my columns using the data dictionary:
>dictionary
col_name col_start col_end
year 1 4
week 5 6
gender 8 8
age 11 12
so it becomes:
year week gender age
2018 01 1 78
2018 01 2 67
2018 01 1 13
In reality the data comes from a long-running survey and the white spaces between some columns represent variables that are no longer collected. It has many variables, so I need a solution that will scale.
In tidyr::separate it looks like you can only split by specifying the position to split at, rather than the start and end positions. Is there a way to use start / end?
I thought of doing this with read_fwf but I can't seem to use it on my already loaded dataset. I only managed to get it to work by first exporting as a .txt and then reading from this .txt:
write_lines(templines, "t1.txt")
read_fwf("t1.txt",
         fwf_positions(start = dictionary$col_start,
                       end = dictionary$col_end,
                       col_names = dictionary$col_name))
is it possible to use read_fwf on an already loaded dataset?
Answering your question directly: yes, it is possible to use read_fwf with already loaded data. The relevant part of the docs is the part about the argument file:
Either a path to a file, a connection, or literal data (either a single string or a raw vector).
...
Literal data is most useful for examples and tests.
It must contain at least one new line to be recognised as data (instead of a path).
Thus, you can simply collapse your data and then use read_fwf:
templines %>%
paste(collapse = "\n") %>%
read_fwf(., fwf_positions(start = dictionary$col_start,
end = dictionary$col_end,
col_names = dictionary$col_name))
This should scale to multiple columns, and is fast for many rows (on my machine for 1 million rows and four columns about half a second).
There are a few warnings regarding parsing failures, but they stem from your dictionary. If you change the last line to age, 11, 12 it works as expected.
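For reference (my reconstruction of that fix, not part of the original answer), the adjusted dictionary would look something like this, with only the last col_end changed:
dictionary <- data.frame(col_name  = c("year", "week", "gender", "age"),
                         col_start = c(1, 5, 8, 11),
                         col_end   = c(4, 6, 8, 12))  # age ends at column 12 in the sample lines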
A solution with substring:
library(data.table)
x <- transpose(lapply(templines, substring, dictionary$col_start, dictionary$col_end))
setDT(x)
setnames(x, dictionary$col_name)
# > x
# year week gender age
# 1: 2018 01 1 78
# 2: 2018 01 2 67
# 3: 2018 01 1 13
How about this?
data.frame(year=substr(templines,1,4),
week=substr(templines,5,6),
gender=substr(templines,7,8),
age=substr(templines,11,13))
Using base R:
m = list(`attr<-`(dat$col_start,"match.length",dat$col_end-dat$col_start+1))
d = do.call(rbind,regmatches(x,rep(m,length(x))))
setNames(data.frame(d),dat$col_name)
year week gender age
1 2018 01 1 78
2 2018 01 2 67
3 2018 01 1 13
DATA USED:
x = c("201801 1 78", "201801 2 67", "201801 1 13")
dat=read.table(text="col_name col_start col_end
year 1 4
week 5 6
gender 8 8
age 11 13 ",h=T)
We could use separate from tidyverse
library(tidyverse)
data.frame(Col = templines) %>%
separate(Col, into = dictionary$col_name, sep= head(dictionary$col_end, -1))
# year week gender age
#1 2018 01 1 78
#2 2018 01 2 67
#3 2018 01 1 13
The convert = TRUE argument can also be used with separate to have numeric columns as output
tibble(Col = templines) %>%
separate(Col, into = dictionary$col_name,
sep= head(dictionary$col_end, -1), convert = TRUE)
# A tibble: 3 x 4
# year week gender age
# <int> <int> <int> <int>
#1 2018 1 1 78
#2 2018 1 2 67
#3 2018 1 1 13
data
dictionary <- structure(list(col_name = c("year", "week", "gender", "age"),
col_start = c(1L, 5L, 8L, 11L), col_end = c(4L, 6L, 8L, 13L
)), .Names = c("col_name", "col_start", "col_end"),
class = "data.frame", row.names = c(NA, -4L))
templines <- c("201801 1 78", "201801 2 67", "201801 1 13")
This is an explicit function which seems to work the way you wanted (it needs data.table):
library(data.table)

split_func <- function(char, ref, name, start, end){
  res <- data.table("ID" = 1:length(char))
  for(i in 1:nrow(ref)){
    res[, ref[[name]][i] := substr(x = char, start = ref[[start]][i], stop = ref[[end]][i])]
  }
  return(res)
}
I have created the same input files as you:
templines<-c("201801 1 78","201801 2 67","201801 1 13")
dictionary<-data.table("col_name" = c("year","week","gender","age"),"col_start" = c(1,5,8,11),
"col_end" = c(4,6,8,13))
# col_name col_start col_end
#1: year 1 4
#2: week 5 6
#3: gender 8 8
#4: age 11 13
As for the arguments,
char - The character vector with the values you want to split
ref - The reference table or dictionary
name - The column number in the reference table containing the column names you want
start - The column number in the reference table containing the start points
end - The column number in the reference table containing the stop points
If I use this function with these inputs, I get the following result:
out<-split_func(char = templines,ref = dictionary,name = 1,start = 2,end = 3)
#>out
# ID year week gender age
#1: 1 2018 01 1 78
#2: 2 2018 01 2 67
#3: 3 2018 01 1 13
I had to include an "ID" column to initiate the data table and make this easier. In case you want to drop it later you can just use:
out[,ID := NULL]
Hope this is closer to the solution you were looking for.

R Cleaning and reordering names/serial numbers in data frame

Let's say I have a data frame as follows in R:
Data <- data.frame("SerialNum" = character(), "Year" = integer(), "Name" = character(), stringsAsFactors = F)
Data[1,] <- c("983\n837\n424\n ", 2015, "Michael\nLewis\nPaul\n ")
Data[2,] <- c("123\n456\n789\n136", 2014, "Elaine\nJerry\nGeorge\nKramer")
Data[3,] <- c("987\n654\n321\n975\n ", 2010, "John\nPaul\nGeorge\nRingo\nNA")
Data[4,] <- c("424\n983\n837", 2015, "Paul\nMichael\nLewis")
Data[5,] <- c("456\n789\n123\n136", 2014, "Jerry\nGeorge\nElaine\nKramer")
What I want to do is the following:
1. Split up each string of names and each string of serial numbers so that they are their own vectors (or a list of string vectors).
2. Eliminate any character "NA" in either set of vectors or any blank spaces denoted by "...\n ".
3. Reorder each list of names alphabetically and reorder the corresponding serial numbers according to the same permutation.
4. Concatenate each vector in the same fashion it was originally (I usually do this with paste(., collapse = "\n")).
My issue is how to do this without using a for loop. What is an object-oriented way to do this? As a first attempt in this direction I originally made a list with the command LIST <- strsplit(Data$Name, split = "\n"), and from there I need a for loop to find the permutations of the names, which seems like a process that won't scale to my actual data. Additionally, once I make the list LIST, I'm not sure how to go about removing NA symbols or blank spaces. Any help is appreciated!
Using lapply I take each row of the data frame and turn it into a new data frame with one name per row. This creates a list of 5 data frames, one for each row of the original data frame.
seinfeld = lapply(1:nrow(Data), function(i) {
# Turn strings into data frame with one name per row
dat = data.frame(SerialNum=unlist(strsplit(Data[i,"SerialNum"], split="\n")),
Year=Data[i,"Year"],
Name=unlist(strsplit(Data[i,"Name"], split="\n")))
# Get rid of empty strings and NA values
dat = dat[!(dat$Name %in% c(""," ","NA")), ]
# Order alphabetically
dat = dat[order(dat$Name), ]
})
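If you also want those 5 data frames combined into a single long-format data frame (one name per row), you could bind the list afterwards, e.g. do.call(rbind, seinfeld) (my addition, not part of the original answer).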
UPDATE: Based on your comment, let me know if this is the result you're trying to achieve:
seinfeld = lapply(1:nrow(Data), function(i) {
# Turn strings into data frame with one name per row
dat = data.frame(SerialNum=unlist(strsplit(Data[i,"SerialNum"], split="\n")),
Name=unlist(strsplit(Data[i,"Name"], split="\n")))
# Get rid of empty strings and NA values
dat = dat[!(dat$Name %in% c(""," ","NA")), ]
# Order alphabetically
dat = dat[order(dat$Name), ]
# Collapse back into a single row with the new sort order
dat = data.frame(SerialNum=paste(dat[, "SerialNum"], collapse="\n"),
Year=Data[i, "Year"],
Name=paste(dat[, "Name"], collapse="\n"))
})
do.call(rbind, seinfeld)
SerialNum Year Name
1 837\n983\n424 2015 Lewis\nMichael\nPaul
2 123\n789\n456\n136 2014 Elaine\nGeorge\nJerry\nKramer
3 321\n987\n654\n975 2010 George\nJohn\nPaul\nRingo
4 837\n983\n424 2015 Lewis\nMichael\nPaul
5 123\n789\n456\n136 2014 Elaine\nGeorge\nJerry\nKramer
eipi10 offered a great answer. In addition to that, I'd like to leave what I tried, mainly with data.table. First, I split the two columns (i.e., SerialNum and Name) with cSplit(), added an index with add_rownames(), and split the data by the index. In the first lapply(), I used Stacked() from the splitstackshape package to stack SerialNum and Name; the separated SerialNum and Name become two columns, as you see in the part of temp2 shown below. In the second lapply(), I used merge() from the data.table package. Then, I removed rows with NAs (lapply(na.omit)), combined all data tables (rbindlist), and reordered the rows by rowname (which is the row number in the original data) and Name (setorder(rowname, Name)).
library(data.table)
library(splitstackshape)
library(dplyr)
cSplit(mydf, c("SerialNum", "Name"), direction = "wide",
type.convert = FALSE, sep = "\n") %>%
add_rownames %>%
split(f = .$rowname) -> temp
#a part of temp
#$`1`
#Source: local data frame [1 x 12]
#
#rowname Year SerialNum_1 SerialNum_2 SerialNum_3 SerialNum_4 SerialNum_5 Name_1 Name_2
#(chr) (dbl) (chr) (chr) (chr) (chr) (chr) (chr) (chr)
#1 1 2015 983 837 424 NA NA Michael Lewis
#Variables not shown: Name_3 (chr), Name_4 (chr), Name_5 (chr)
lapply(temp, function(x){
Stacked(x, var.stubs = c("SerialNum", "Name"), sep = "_")
}) -> temp2
# A part of temp2
#$`1`
#$`1`$SerialNum
# rowname Year .time_1 SerialNum
#1: 1 2015 1 983
#2: 1 2015 2 837
#3: 1 2015 3 424
#4: 1 2015 4 NA
#5: 1 2015 5 NA
#
#$`1`$Name
# rowname Year .time_1 Name
#1: 1 2015 1 Michael
#2: 1 2015 2 Lewis
#3: 1 2015 3 Paul
#4: 1 2015 4 NA
#5: 1 2015 5 NA
lapply(1:nrow(mydf), function(x){
merge(temp2[[x]]$SerialNum, temp2[[x]]$Name, by = c("rowname", "Year", ".time_1"))
}) %>%
lapply(na.omit) %>%
rbindlist %>%
setorder(rowname, Name) -> out
print(out)
# rowname Year .time_1 SerialNum Name
# 1: 1 2015 2 837 Lewis
# 2: 1 2015 1 983 Michael
# 3: 1 2015 3 424 Paul
# 4: 2 2014 1 123 Elaine
# 5: 2 2014 3 789 George
# 6: 2 2014 2 456 Jerry
# 7: 2 2014 4 136 Kramer
# 8: 3 2010 3 321 George
# 9: 3 2010 1 987 John
#10: 3 2010 2 654 Paul
#11: 3 2010 4 975 Ringo
#12: 4 2015 3 837 Lewis
#13: 4 2015 2 983 Michael
#14: 4 2015 1 424 Paul
#15: 5 2014 3 123 Elaine
#16: 5 2014 2 789 George
#17: 5 2014 1 456 Jerry
#18: 5 2014 4 136 Kramer
DATA
mydf <- structure(list(SerialNum = c("983\n837\n424\n ", "123\n456\n789\n136",
"987\n654\n321\n975\n ", "424\n983\n837", "456\n789\n123\n136"
), Year = c(2015, 2014, 2010, 2015, 2014), Name = c("Michael\nLewis\nPaul\n ",
"Elaine\nJerry\nGeorge\nKramer", "John\nPaul\nGeorge\nRingo\nNA",
"Paul\nMichael\nLewis", "Jerry\nGeorge\nElaine\nKramer")), .Names = c("SerialNum",
"Year", "Name"), row.names = c(NA, -5L), class = "data.frame")
