I am trying to load some data into R that is in the following format (as a text file)
Name                  Country           Age
John,Smith            United Kingdom    20
Washington,George     USA               50
Martin,Joseph         Argentina         43
The problem I have is that the "columns" are separated by spaces such that they all line up nicely, but one row may have 5 spaces between values and the next 10 spaces. So when I load it in using read.delim I get a one column data.frame with
"John,Smith United Kingdom 20"
as the first observation and so on.
Is there any way I can either:
Load the data into R into a usable format? or
Split the character strings up into separate columns once I have loaded it in the one-column format?
My thought was to split the character strings by spaces, except it would need to be between 2 and x spaces (so, for example, "United Kingdom" stays together and doesn't become "United" "" "Kingdom"). But I don't know if that is possible.
I tried strsplit(data.frame[,1], split="\\s") but it returns a list of character strings like:
"John,Smith" "" "" "" "" "" "" "" "United" "" "Kingdom" "" ""...
which I don't know what to do with.
Having columns that all "line up nicely" is a typical characteristic of fixed-width data.
For the sake of this answer, I've written your three lines of data and one line of header information to a temporary file called "x". For your actual use, replace "x" with the file name/path, as you would normally use with read.delim.
Here's the sample data:
x <- tempfile()
cat("Name Country Age\nJohn,Smith United Kingdom 20\nWashington,George USA 50\nMartin,Joseph Argentina 43\n", file = x)
R has its own function for reading fixed-width data (read.fwf), but it is notoriously slow and you need to know the widths before you can get started. We can count those if the file is small, and then use something like:
read.fwf(x, c(22, 18, 4), strip.white = TRUE, skip = 1,
         col.names = c("Name", "Country", "Age"))
# Name Country Age
# 1 John,Smith United Kingdom 20
# 2 Washington,George USA 50
# 3 Martin,Joseph Argentina 43
Alternatively, you can let fwf_empty from the "readr" package guess the widths for you, and then use read_fwf:
library(readr)
read_fwf(x, fwf_empty(x, col_names = c("Name", "Country", "Age")), skip = 1)
# Name Country Age
# 1 John,Smith United Kingdom 20
# 2 Washington,George USA 50
# 3 Martin,Joseph Argentina 43
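If you already know the widths, readr's fwf_widths builds the same column specification directly; a small sketch, not part of the original answer (NA marks a ragged final column):
library(readr)
spec <- fwf_widths(c(22, 18, NA), col_names = c("Name", "Country", "Age"))
read_fwf(x, spec, skip = 1)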
You can also do this in base R, supposing none of your values contains a run of two or more spaces:
txt = "Name Country Age
John,Smith United Kingdom 20
Washington,George USA 50
Martin,Joseph Argentina 43"
conn = textConnection(txt)
do.call(rbind, lapply(readLines(conn), function(u) strsplit(u,'\\s{2,}')[[1]]))
# [,1] [,2] [,3]
#[1,] "Name" "Country" "Age"
#[2,] "John,Smith" "United Kingdom" "20"
#[3,] "Washington,George" "USA" "50"
#[4,] "Martin,Joseph" "Argentina" "43"
I have a corpus of a couple of thousand documents and I'm trying to find the most commonly mentioned countries in the abstracts.
The library countrycode seems to have a comprehensive list of country names I can match against:
install.packages("countrycode")
# country.name.alt shows multiple potential namings for 'Congo' (yay!):
countrycode::countryname_dict |> dplyr::filter(grepl('congo', tolower(country.name.alt)))
# Also seems to work for ones like "China"/"People's Republic of China"
A reprex of the data looks something like this:
df <- data.frame(entry_number = 1:5,
                 text = c("a few paragraphs that might contain the country name congo or democratic republic of congo",
                          "More text that might contain myanmar or burma, as well as thailand",
                          "sentences that do not contain a country name can be returned as NA",
                          "some variant of U.S or the united states",
                          "something with an accent samóoa"))
I want to reduce each entry in the column "text" to contain only a country name. Ideally something like this (note the repeat entry number):
desired_df <- data.frame(entry_number = c(1, 2, 2, 3, 4, 5),
                         text = c("congo",
                                  "myanmar",
                                  "thailand",
                                  NA,
                                  "united states",
                                  "samoa"))
I've attempted this with str_extract and various other failed approaches. The corpus is in English, but the international alphabets included in countrycode::countryname_dict$country.name.alt do throw regex errors. countrycode::countryname_dict$country.name.alt contains the alternative names that countrycode::countryname_dict$country.name.en does not...
Open to any approach (dplyr, data.table, ...) that answers the initial question of how many times each country is mentioned in the corpus. The only requirement is that it be as robust as possible to different potential country names, accents, and any other hidden catches!
Thanks community!
P.S. I have reviewed the following questions but had no luck with my own example:
Matching an extracting country name from character string in R
extract country names (or other entity) from column
Extracting country names in R
Extracting Country Name from Author Affiliations
This seems to work well on the example data.
library(tidyverse)
all_country <- countrycode::countryname_dict %>%
  filter(grepl('[A-Za-z]', country.name.alt)) %>%
  pull(country.name.alt) %>%
  tolower()
pattern <- str_c(all_country, collapse = '|')
df %>%
  mutate(country = str_extract_all(tolower(text), pattern)) %>%
  select(-text) %>%
  unnest(country, keep_empty = TRUE)
# entry_number country
# <int> <chr>
#1 1 congo
#2 1 democratic republic of congo
#3 2 myanma
#4 2 burma
#5 2 thailand
#6 3 NA
#7 4 united states
#8 5 samóoa
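To get at the original question of how many times each country is mentioned, you can tally the unnested result; a sketch, assuming the pipeline above has been stored in a data frame called matches (a hypothetical name):
matches %>%
  filter(!is.na(country)) %>%   # drop the entries with no country match
  count(country, sort = TRUE)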
I am using R to extract text. The code below works well for extracting the non-bold text from a PDF, but it ignores the bold parts. Is there a way to extract both bold and non-bold text?
news <-'http://www.frbe-kbsb.be/sites/manager/ICN/14-15/ind01.pdf'
library(pdftools)
library(tesseract)
library(tiff)
info <- pdf_info(news)
numberOfPageInPdf <- as.numeric(info[2])
numberOfPageInPdf
for (i in 1:numberOfPageInPdf){
  bitmap <- pdf_render_page(news, page = i, dpi = 300, numeric = TRUE)
  file_name <- paste0("page", i, ".tiff")
  file_tiff <- tiff::writeTIFF(bitmap, file_name)
  out <- ocr(file_name)
  file_txt <- paste0("text", i, ".txt")
  writeLines(out, file_txt)
}
I like using the tabulizer library for this. Here's a small example:
devtools::install_github("ropensci/tabulizer")
library(tabulizer)
news <-'http://www.frbe-kbsb.be/sites/manager/ICN/14-15/ind01.pdf'
# note that you need to specify UTF-8 as the encoding, otherwise your special characters
# won't come in correctly
page1 <- extract_tables(news, guess=TRUE, page = 1, encoding='UTF-8')
page1[[1]]
[,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,] "" "Division: 1" "" "" "" "" "Série: A"
[2,] "" "514" "" "Fontaine 1 KBSK 1" "" "" "303"
[3,] "1" "62529 WIRIG ANTHONY" "" "2501 1⁄2-1⁄2" "51560" "CZEBE ATTILLA" "2439"
[4,] "2" "62359 BRUNNER NICOLAS" "" "2443 0-1" "51861" "PICEU TOM" "2401"
[5,] "3" "75655 CEKRO EKREM" "" "2393 0-1" "10391" "GEIRNAERT STEVEN" "2400"
[6,] "4" "50211 MARECHAL ANDY" "" "2355 0-1" "35181" "LEENHOUTS KOEN" "2388"
[7,] "5" "73059 CLAESEN PIETER" "" "2327 1⁄2-1⁄2" "25615" "DECOSTER FREDERIC" "2373"
[8,] "6" "63614 HOURIEZ CLEMENT" "" "2304 1⁄2-1⁄2" "44954" "MAENHOUT THIBAUT" "2372"
[9,] "7" "60369 CAPONE NICOLA" "" "2283 1⁄2-1⁄2" "10430" "VERLINDE TIEME" "2271"
[10,] "8" "70653 LE QUANG KIM" "" "2282 0-1" "44636" "GRYSON WOUTER" "2269"
[11,] "" "" "< 2361 >" "12 - 20" "" "< 2364 >" ""
You can also use the locate_areas function to specify a specific region if you only care about some of the tables. Note that for locate_areas to work, I had to download the file locally first; using the URL returned an error.
You'll note that each table is its own element in the returned list.
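Since the result is a list, you can also pull every table in the document in one call and pick out the pieces you need afterwards; a rough sketch, assuming the default table detection works across all pages:
all_tables <- extract_tables(news, guess = TRUE, encoding = 'UTF-8')
length(all_tables)  # one matrix per detected table across all pages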
Here's an example using a custom region to just select the first table on each page:
customArea <- extract_tables(news, guess = FALSE, page = 1, area = list(c(84, 27, 232, 569)), encoding = 'UTF-8')
This is also a more direct method than using the OCR (Optical Character Recognition) library tesseract, because you're not relying on the OCR engine to translate pixel arrangements back into text. In digital PDFs, each text element has an x and y position, and the tabulizer library uses that information to apply table-detection heuristics and extract sensibly formatted data. You'll see you still have some cleanup to do, but it's pretty manageable.
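A quick way to check whether a PDF has a text layer at all (and therefore whether OCR is even needed) is to see whether pdf_text() returns anything; a small sketch:
library(pdftools)
any(nchar(trimws(pdf_text(news))) > 0)  # should be TRUE here, since this PDF has selectable text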
Edit: just for fun, here's a little example of starting the cleanup with data.table:
library(data.table)
cleanUp <- setDT(as.data.frame(page1[[1]]))
cleanUp[ , `:=` (Division = as.numeric(gsub("^.*(\\d+{1,2}).*", "\\1", grep('Division', cleanUp$V2, value = TRUE))),
                 Series = as.character(gsub(".*:\\s(\\w).*", "\\1", grep('Série:', cleanUp$V7, value = TRUE))))
        ][, ID := tstrsplit(V2, " ", fixed = TRUE, keep = 1)
        ][, c("V1", "V3") := NULL
        ][-grep('Division', V2, fixed = TRUE)]
Here we've moved Division, Series, and ID into their own columns, and removed the Division header row. This is just the general idea, and would need a little refinement to apply to all 27 pages.
V2 V4 V5 V6 V7 Division Series ID
1: 514 Fontaine 1 KBSK 1 303 1 A 514
2: 62529 WIRIG ANTHONY 2501 1/2-1/2 51560 CZEBE ATTILLA 2439 1 A 62529
3: 62359 BRUNNER NICOLAS 2443 0-1 51861 PICEU TOM 2401 1 A 62359
4: 75655 CEKRO EKREM 2393 0-1 10391 GEIRNAERT STEVEN 2400 1 A 75655
5: 50211 MARECHAL ANDY 2355 0-1 35181 LEENHOUTS KOEN 2388 1 A 50211
6: 73059 CLAESEN PIETER 2327 1/2-1/2 25615 DECOSTER FREDERIC 2373 1 A 73059
7: 63614 HOURIEZ CLEMENT 2304 1/2-1/2 44954 MAENHOUT THIBAUT 2372 1 A 63614
8: 60369 CAPONE NICOLA 2283 1/2-1/2 10430 VERLINDE TIEME 2271 1 A 60369
9: 70653 LE QUANG KIM 2282 0-1 44636 GRYSON WOUTER 2269 1 A 70653
10: 12 - 20 < 2364 > 1 A NA
There is no need to go through the PDF -> TIFF -> OCR loop, since pdftools::pdf_text() can read this file directly:
stringi::stri_split(pdftools::pdf_text(news), regex = "\n")
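If you want per-page text files like the original loop produced, a short sketch along the same lines:
library(pdftools)
library(stringi)
txt <- pdf_text(news)                   # one character string per page
pages <- stri_split(txt, regex = "\n")  # split each page into lines
for (i in seq_along(pages)) writeLines(pages[[i]], paste0("text", i, ".txt"))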
This is my first question so please excuse the mistakes.
I have a data frame where each address is a single line of text, with many missing values and several errors.
Address
Braemor Drive, Clontarf, Co.Dublin
Meadow Avenue, Dundrum
Philipsburgh Avenue, Marino
Myrtle Square, The Coast
I would like to add a new field "District" based on whether the address contains certain values: for example, if it contains Marino, Fairview, or Clontarf, the District should be Dublin 3.
Dublin3 <- c("Marino", "Fairview", "Clontarf")
matches <- unique(grep(paste(Dublin3, collapse = "|"),
                       DubPPReg$Address, value = TRUE))
Using R, how can I update the value of District where the match is true?
# I've created an example data frame with the column Adress
df <- data.frame(Adress = c("Braemor Drive",
                            "Clontarf",
                            "Co.Dublin",
                            "Meadow Avenue",
                            "Dundrum",
                            "Philipsburgh Avenue",
                            "Marino",
                            "Myrtle Square",
                            "The Coast"))
# And the vector Dublin3
Dublin3 <- c("Marino", "Fairview", "Clontarf")
# Match names in column Adress against vector Dublin3
df$District <- ifelse(df$Adress %in% Dublin3, "Dublin 3", FALSE)
df
Adress District
1 Braemor Drive FALSE
2 Clontarf Dublin 3
3 Co.Dublin FALSE
4 Meadow Avenue FALSE
5 Dundrum FALSE
6 Philipsburgh Avenue FALSE
7 Marino Dublin 3
8 Myrtle Square FALSE
9 The Coast FALSE
Instead of FALSE you can choose something else (e.g. NA).
Edited: if your data are in a vector
df <- c("Braemor Drive, Churchtown, Co.Dublin",
"Meadow Avenue, Clontarf, Dublin 14",
"Sallymount Avenue, Ranelagh", "Philipsburgh Avenue, Marino")
Which looks like this
df
[1] "Braemor Drive, Churchtown, Co.Dublin"
[2] "Meadow Avenue, Clontarf, Dublin 14"
[3] "Sallymount Avenue, Ranelagh"
[4] "Philipsburgh Avenue, Marino"
You can find your matches using grepl like this:
match <- ifelse(grepl("Marino|Fairview|Clontarf", df, ignore.case = TRUE), "Dublin 3", FALSE)
and output is
[1] "FALSE" "Dublin 3" "FALSE" "Dublin 3"
This means that one or more of the names you are looking for (i.e. Marino, Fairview or Clontarf) appear in the second and fourth elements of df.
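Applied back to the original question, the same grepl test can fill in the District column directly; a sketch, assuming the addresses live in DubPPReg$Address as in the question:
DubPPReg$District <- ifelse(grepl("Marino|Fairview|Clontarf", DubPPReg$Address, ignore.case = TRUE),
                            "Dublin 3", NA)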
I have the following: name_total = matrix(nrow = 51, ncol = 3, NA), where each row corresponds to a state (the 51st being the District of Columbia). The first column is a string giving the name of the state (for example, name_total[1,1] = "Alabama").
The second and third are urls of CSV files from the Census, respectively linking counties with the state senate districts, and counties with state house districts.
For Alabama:
name_total[1,2] ="http://www2.census.gov/geo/relfiles/cdsld13/01/co_lu_delim_01.txt"
name_total[1,3] ="http://www2.census.gov/geo/relfiles/cdsld13/01/co_ll_delim_01.txt"
I wish to get as a final output a table which would basically be all 50 states + DC with their respective counties and linked Senate and House districts. I don't know if that's very clear so here is an example:
[,1] [,2] [,3] [,4]
[1,] "Alabama" "countyX1" "Senate District Y1" "House District Z1"
[2,] "Alabama" "countyX2" "Senate District Y2" "House District Z2"
[3,] "Alabama" "countyX3" "Senate District Y3" "House District Z3"
[4,] "Alaska" "countyX4" "Senate District Y4" "House District Z4"
[5,] "Alaska" "countyX5" "Senate District Y4" "House District Z5"
I use a for loop:
for (i in 1:51){
  senate = name_total[i, 2]
  link_senate = url(senate)
  house = name_total[i, 3]
  link_house = url(house)
  state = name_total[i, 1]
  data_senate = read.csv2(link_senate, sep = ",", header = TRUE, skip = 1)
  data_house = read.csv2(link_house, sep = ",", header = TRUE, skip = 1)
  final = cbind(state, data_senate, data_house)
}
Of course each element has a different number of rows: for Alabama (i = 1), state is just "Alabama", while the other two come back as 3 by 122 and 3 by 207 matrices respectively. I get an error message about these variations in the number of rows.
I'm pretty sure one of the issues is the use of the cbind function, but I do not know what to use to get a better result.
In case others have similar issues, I found a way to get what I wanted separately for the State Senates and State Houses. First of all, some of the states only have one of the two, and the link for Oregon was down, so I took those out of my original data.
Then I initialized for the first state outside of the loop:
senate = url(name_total[1, 2])
data_senate = read.csv2(senate, sep = ",", header = TRUE, skip = 1)
assign(paste("Base_senate_", name_total[1, 1], sep = ""), data_senate)
A = assign(paste("Base_senate_", name_total[1, 1], sep = ""), data_senate)
house = url(name_total[1, 3])
data_house = read.csv2(house, sep = ",", header = TRUE, skip = 1)
assign(paste("Base_house_", name_total[1, 1], sep = ""), data_house)
B = assign(paste("Base_house_", name_total[1, 1], sep = ""), data_house)
and then I used a for loop:
for (i in 2:48){
  senate = url(name_total[i, 2])
  house = url(name_total[i, 3])
  data_senate = read.csv2(senate, sep = ",", header = TRUE, skip = 1)
  assign(paste("Base_senate_", name_total[i, 1], sep = ""), data_senate)
  names(data_senate)[2] = "County"
  A = rbind(A, assign(paste("Base_senate_", name_total[i, 1], sep = ""), data_senate))
  data_house = read.csv2(house, sep = ",", header = TRUE, skip = 1)
  assign(paste("Base_house_", name_total[i, 1], sep = ""), data_house)
  names(data_house)[2] = "County"
  B = rbind(B, assign(paste("Base_house_", name_total[i, 1], sep = ""), data_house))
}
A and B give you the expected tables (without the state name as a string, but the first variable identifies the state).
I had to use the names(data_senate)[2] = "County" because the second column had a different name for some states.
Hope it helps!
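For what it's worth, the same idea can be written without assign() by reading each state's files into a list and binding once at the end; a rough sketch, assuming name_total has already been trimmed to states with working links and holds the state name and the two URLs as described:
read_census <- function(link, state) {
  d <- read.csv2(url(link), sep = ",", header = TRUE, skip = 1)
  names(d)[2] <- "County"  # the second column name varies by state
  cbind(State = state, d)
}
A <- do.call(rbind, lapply(seq_len(nrow(name_total)),
                           function(i) read_census(name_total[i, 2], name_total[i, 1])))
B <- do.call(rbind, lapply(seq_len(nrow(name_total)),
                           function(i) read_census(name_total[i, 3], name_total[i, 1])))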
I have a CSV file like
LocationList,Identity,Category
"New York,New York,United States","42","S"
"NA,California,United States","89","lyt"
"Hartford,Connecticut,United States","879","polo"
"San Diego,California,United States","45454","utyr"
"Seattle,Washington,United States","uytr","69"
"NA,NA,United States","87","tree"
I want to remove all 'NA' values from the 'LocationList' column.
The desired result:
LocationList,Identity,Category
"New York,New York,United States","42","S"
"California,United States","89","lyt"
"Hartford,Connecticut,United States","879","polo"
"San Diego,California,United States","45454","utyr"
"Seattle,Washington,United States","uytr","69"
"United States","87","tree"
The number of columns is not fixed and may increase or decrease. I also want to write to the CSV file without quotes and without escaping for the 'LocationList' column.
How can I achieve this in R?
I'm new to R, so any help is appreciated.
In this case, you just want to replace the literal "NA," text with nothing. Note that this is not the standard way of removing NA values: these are "NA" strings inside a field, not R's missing-value NA.
Assuming dat is your data, use
dat$LocationList <- gsub("^(NA,)+", "", dat$LocationList)
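A quick check of that pattern on the two affected values from the sample data:
gsub("^(NA,)+", "", c("NA,California,United States", "NA,NA,United States"))
# [1] "California,United States" "United States"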
Try:
my.data <- read.table(text='LocationList,Identity,Category
"New York,New York,United States","42","S"
"NA,California,United States","89","lyt"
"Hartford,Connecticut,United States","879","polo"
"San Diego,California,United States","45454","utyr"
"Seattle,Washington,United States","uytr","69"
"NA,NA,United States","87","tree"', header=T, sep=",")
my.data$LocationList <- gsub("NA,", "", my.data$LocationList)
my.data
# LocationList Identity Category
# 1 New York,New York,United States 42 S
# 2 California,United States 89 lyt
# 3 Hartford,Connecticut,United States 879 polo
# 4 San Diego,California,United States 45454 utyr
# 5 Seattle,Washington,United States uytr 69
# 6 United States 87 tree
If you get rid of the quotes when you write to a conventional csv file, you will have trouble reading the data in later. This is because you have commas already inside each value in the LocationList variable, so you would have commas both in the middle of fields and marking the break between fields. You might try using write.csv2() instead, which will indicate new fields with a semicolon ;. You could use:
write.csv2(my.data, file="myFile.csv", quote=FALSE, row.names=FALSE)
Which yields the following file:
LocationList;Identity;Category
New York,New York,United States;42;S
California,United States;89;lyt
Hartford,Connecticut,United States;879;polo
San Diego,California,United States;45454;utyr
Seattle,Washington,United States;uytr;69
United States;87;tree
(I now notice that the values for Identity and Category for row 5 are presumably messed up. You may want to switch those before writing to file.)
x <- my.data[5, 2]
my.data[5, 2] <- my.data[5, 3]
my.data[5, 3] <- x
rm(x)