Remove specific value in R or Linux

Hi, I have a tab-separated file in the terminal with several columns, as below. You can see the last column has a comma followed by one or more characters.
1 100 Japan Na pa,cd
2 120 India Ca pa,ces
5 110 Japan Ap pa,cres
1 540 China Sn pa,cd
1 111 Nepal Le pa,b
I want to keep only the last-column values before the comma, so the file looks like this:
1 100 Japan Na pa
2 120 India Ca pa
5 110 Japan Ap pa
1 540 China Sn pa
1 111 Nepal Le pa
I have looked at sed but cannot find a way to remove them.
Regards

In R you can read the file with a tab separator and remove the values after the comma:
result <- transform(read.table('file1.txt', sep = '\t'), V5 = sub(',.*', '', V5))
V5 is used on the assumption that the 5th column is the one whose values you want to change.

We can use
df1 <- read.table('file1.txt', sep = "\t")
df1$V5 <- sub("^([^,]+),.*", "\\1", df1$V5)
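Since the question also asks about doing this in the terminal, the same cleanup works with a sed one-liner. This sketch assumes commas occur only in the last column, so deleting from the first comma to the end of each line is safe:

```shell
# sample tab-separated input from the question
printf '1\t100\tJapan\tNa\tpa,cd\n2\t120\tIndia\tCa\tpa,ces\n5\t110\tJapan\tAp\tpa,cres\n1\t540\tChina\tSn\tpa,cd\n1\t111\tNepal\tLe\tpa,b\n' > file1.txt
# remove everything from the first comma to the end of each line;
# safe here because commas appear only in the last column
sed 's/,.*$//' file1.txt > file2.txt
cat file2.txt
```

If other columns could also contain commas, you would need to anchor the pattern to the last field instead.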


Importing csv with date headers

I am importing an Excel CSV file which has a large number of columns. Each column is for a different date, e.g. March 1990, April 1990.
When I import it, the column headers are changed to numbers, for example 34355, 34356.
How do I preserve the dates?
I tried using the RStudio import function:
sales <- read_csv("W:/Sales_data/sales.csv")
Expected
First_Name Sir_name Region Jan_1980 Feb_1980 Mar_1980
George Dell LA 52 23 121
Lisa Stevens NY 234 122
Peter Hunt TX 3242 12 123
Actual
First_Name Sir_name Region 34524 34525 34526
George Dell LA 52 23 121
Lisa Stevens NY 234 122
Peter Hunt TX 3242 12 123
Any help is greatly appreciated.
You need to import the first row as data rather than as headers. Then convert the date serials to the format you want. Finally, assign the first row as column names and remove it.
library(readr)
# read everything as character so the date serials can be replaced with text labels
sales <- read_csv("W:/Sales_data/sales.csv", col_names = FALSE,
                  col_types = cols(.default = "c"))
sales[1, 4:6] <- lapply(sales[1, 4:6], function(x)
  format(as.Date(as.numeric(x), origin = "1899-12-30"), "%b_%Y"))
colnames(sales) <- unlist(sales[1, ])
sales <- sales[-1, ]
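The numbers you are seeing are Excel date serials, counted in days from Excel's epoch (1899-12-30 in the Windows 1900 date system). The conversion used above can be checked on a single value:

```r
# Excel stores dates as day counts from its epoch (1899-12-30 in the 1900 system)
as.Date(34524, origin = "1899-12-30")
# [1] "1994-07-09"
```

The "%b_%Y" format string then turns such a date into the "Mon_Year" header form (note that %b is locale-dependent).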

Opening .bcp files in R

I have been trying to convert UK charity commission data which is in .bcp file format into .csv file format which could then be read into R. The data I am referring to is available here: http://data.charitycommission.gov.uk/. What I am trying to do is turn these .bcp files into useable dataframes that I can clean and run analyses on in R.
There are suggestions on how to do this through python on this github page https://github.com/ncvo/charity-commission-extract but unfortunately I haven't been able to get these options to work.
I am wondering if there is any syntax or packages that will allow me to open these data in R directly? I haven't been able to find any.
Another option would be to simply read the files into R as a single character vector using readLines. I have done this, and the files are delimited with #**# for columns and *##* for rows. (See here: http://data.charitycommission.gov.uk/data-definition.aspx). Is there an R command that would allow me to create a dataframe from a long character string, defining delimiters for both rows and columns?
R solution
I am not sure whether all .bcp files are in the same format. I downloaded the dataset you mentioned and tried a solution on the smallest file, extract_aoo_ref.bcp:
library(data.table)
#read the file as-is
text <- readChar("./extract_aoo_ref.bcp",
nchars = file.info( "./extract_aoo_ref.bcp" )$size,
useBytes = TRUE)
#replace existing semicolons with colons, since ";" will become the column separator
text <- gsub( ";", ":", text)
#replace the column and row separators
text <- gsub( "#\\*\\*#", ";", text)
text <- gsub( "\\*##\\*", "\n", text)
#read the results
result <- data.table::fread( text,
header = FALSE,
sep = ";",
fill = TRUE,
quote = "",
strip.white = TRUE)
head(result,10)
# V1 V2 V3 V4 V5 V6
# 1: A 1 THROUGHOUT ENGLAND AND WALES At least 10 authorities in England and Wales N NA
# 2: B 1 BRACKNELL FOREST BRACKNELL FOREST N NA
# 3: D 1 AFGHANISTAN AFGHANISTAN N 2
# 4: E 1 AFRICA AFRICA N NA
# 5: A 2 THROUGHOUT ENGLAND At least 10 authorities in England only N NA
# 6: B 2 WEST BERKSHIRE WEST BERKSHIRE N NA
# 7: D 2 ALBANIA ALBANIA N 3
# 8: E 2 ASIA ASIA N NA
# 9: A 3 THROUGHOUT WALES At least 10 authorities in Wales only Y NA
# 10: B 3 READING READING N NA
The same approach works for the trickier file, extract_charity.bcp:
head(result[,1:3],10)
# V1 V2 V3
# 1: 200000 0 HOMEBOUND CRAFTSMEN TRUST
# 2: 200001 0 PAINTERS' COMPANY CHARITY
# 3: 200002 0 THE ROYAL OPERA HOUSE BENEVOLENT FUND
# 4: 200003 0 HERGA WORLD DISTRESS FUND
# 5: 200004 0 THE WILLIAM GOLDSTEIN LAY STAFF BENEVOLENT FUND (ROYAL HOSPITAL OF ST BARTHOLOMEW)
# 6: 200005 0 DEVON AND CORNWALL ROMAN CATHOLIC DEVELOPMENT SOCIETY
# 7: 200006 0 THE HORLEY SICK CHILDREN'S FUND
# 8: 200007 0 THE HOLDENHURST OLD PEOPLE'S HOME TRUST
# 9: 200008 0 LORNA GASCOIGNE TRUST FUND
# 10: 200009 0 THE RALPH LEVY CHARITABLE COMPANY LIMITED
So it looks like it is working :)

Find and tag a number between a range

I have two data frames, as below:
>codes1
Country State City Start No End No
IN Telangana Hyderabad 100 200
IN Maharashtra Pune (Bund Garden) 300 400
IN Haryana Gurgaon 500 600
IN Maharashtra Pune 700 800
IN Gujarat Ahmedabad (Vastrapur) 900 1000
Now I want to tag each number in the second data frame using the ranges from the first:
>codes2
ID No
1 157
2 346
3 389
4 453
5 562
6 9874
7 98745
Now I want to tag the numbers in the codes2 df according to the ranges given in codes1 (Start No and End No) for the No column. The expected output is:
ID No Country State City
1 157 IN Telangana Hyderabad
2 346 IN Maharashtra Pune(Bund Garden)
.
.
.
Basically, I want to tag the No column in codes2 using codes1, according to the range (Start No and End No) that each No observation falls in. Also, the rows in codes2 could be in any order.
You could use the non-equi join capability of the data.table package for that:
library(data.table)
setDT(codes1)
setDT(codes2)
codes2[codes1, on = .(No > StartNo, No < EndNo), ## (1)
`:=`(cntry = Country, state = State, city = City)] ## (2)
(1) obtains matching row indices in codes2 corresponding to each row in codes1, while matching on the condition provided to the on argument.
(2) updates codes2 values for those matching rows for the columns specified directly by reference (i.e., you don't have to assign the result back to another variable).
This gives:
codes2
# ID No cntry state city
# 1: 1 157 IN Telangana Hyderabad
# 2: 2 346 IN Maharashtra Pune (Bund Garden)
# 3: 3 389 IN Maharashtra Pune (Bund Garden)
# 4: 4 453 NA NA NA
# 5: 5 562 IN Haryana Gurgaon
# 6: 6 9874 NA NA NA
# 7: 7 98745 NA NA NA
If you're comfortable writing SQL, you might consider using the sqldf package to do something like:
library('sqldf')
result <- sqldf('select * from codes2 left join codes1 on codes2.No between codes1.StartNo and codes1.EndNo')
You may have to remove special characters and spaces from the column names of your data frames beforehand.
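For completeness, the same tagging can be sketched in base R with no extra packages, assuming the range columns have been renamed to StartNo/EndNo as in the answers above:

```r
codes1 <- data.frame(Country = "IN",
                     State   = c("Telangana", "Maharashtra", "Haryana"),
                     City    = c("Hyderabad", "Pune (Bund Garden)", "Gurgaon"),
                     StartNo = c(100, 300, 500),
                     EndNo   = c(200, 400, 600),
                     stringsAsFactors = FALSE)
codes2 <- data.frame(ID = 1:4, No = c(157, 346, 453, 562))

# for each No, find the first range that contains it (NA when none does)
idx <- vapply(codes2$No, function(n) {
  hit <- which(n >= codes1$StartNo & n <= codes1$EndNo)
  if (length(hit)) hit[1] else NA_integer_
}, integer(1))
result <- cbind(codes2, codes1[idx, c("Country", "State", "City")])
```

This is a simple linear scan per value; for large data the data.table non-equi join above will be much faster.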

Convert one column into multiple columns

I am a novice. I have a data set with one column and many rows. I want to convert this column into 5 columns. For example my data set looks like this:
Column
----
City
Nation
Area
Metro Area
Urban Area
Shanghai
China
24,000,000
1230040
4244234
New york
America
343423
23423434
343434
Etc
The output should look like this
City | Nation | Area | Metro City | Urban Area
----- ------- ------ ------------ -----------
Shangai China 2400000 1230040 4244234
New york America 343423 23423434 343434
The first 5 rows of the data set (City, Nation, Area, etc.) need to become the names of the 5 columns, and I want the rest of the data populated under these columns. Please help.
Here is a one-liner (this assumes your column is character, i.e. df$column <- as.character(df$column)):
setNames(data.frame(matrix(unlist(df[-c(1:5),]), ncol = 5, byrow = TRUE)), c(unlist(df[1:5,])))
# City Nation Area Metro_Area Urban_Area
#1 Shanghai China 24,000,000 1230040 4244234
#2 New_york America 343423 23423434 343434
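To see the one-liner work end to end, here it is on a minimal reconstruction of the sample data:

```r
# one-column data frame as described in the question
df <- data.frame(column = c("City", "Nation", "Area", "Metro Area", "Urban Area",
                            "Shanghai", "China", "24,000,000", "1230040", "4244234",
                            "New york", "America", "343423", "23423434", "343434"),
                 stringsAsFactors = FALSE)
# rows 6 onward are reshaped into a 5-column matrix, row by row;
# rows 1-5 supply the column names
out <- setNames(data.frame(matrix(unlist(df[-c(1:5), ]), ncol = 5, byrow = TRUE)),
                c(unlist(df[1:5, ])))
out
```

Note that byrow = TRUE is essential here, since the values are stored in row order in the original column.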
I'm going to go out on a limb and guess that the data you're after is from the URL: https://en.wikipedia.org/wiki/List_of_largest_cities.
If this is the case, I would suggest you actually try re-reading the data (not sure how you got the data into R in the first place) since that would probably make your life easier.
Here's one way to read the data in:
library(rvest)
URL <- "https://en.wikipedia.org/wiki/List_of_largest_cities"
XPATH <- '//*[@id="mw-content-text"]/table[2]'
cities <- URL %>%
read_html() %>%
html_nodes(xpath=XPATH) %>%
html_table(fill = TRUE)
Here's what the data currently looks like. It still needs some cleanup (notice that some column names came from merged "rowspan" cells and the like):
head(cities[[1]])
## City Nation Image Population Population Population
## 1 Image City proper Metropolitan area Urban area[7]
## 2 Shanghai China 24,256,800[8] 34,750,000[9] 23,416,000[a]
## 3 Karachi Pakistan 23,500,000[10] 25,400,000[11] 25,400,000
## 4 Beijing China 21,516,000[12] 24,900,000[13] 21,009,000
## 5 Dhaka Bangladesh 16,970,105[14] 15,669,000 18,305,671[15][not in citation given]
## 6 Delhi India 16,787,941[16] 24,998,000 21,753,486[17]
From there, the cleanup might be like:
cities <- cities[[1]][-1, ]
names(cities) <- c("City", "Nation", "Image", "Pop_City", "Pop_Metro", "Pop_Urban")
cities["Image"] <- NULL
head(cities)
cities[] <- lapply(cities, function(x) type.convert(gsub("\\[.*|,", "", x)))
head(cities)
# City Nation Pop_City Pop_Metro Pop_Urban
# 2 Shanghai China 24256800 34750000 23416000
# 3 Karachi Pakistan 23500000 25400000 25400000
# 4 Beijing China 21516000 24900000 21009000
# 5 Dhaka Bangladesh 16970105 15669000 18305671
# 6 Delhi India 16787941 24998000 21753486
# 7 Lagos Nigeria 16060303 13123000 21000000
str(cities)
# 'data.frame': 163 obs. of 5 variables:
# $ City : Factor w/ 162 levels "Abidjan","Addis Ababa",..: 133 74 12 41 40 84 66 148 53 102 ...
# $ Nation : Factor w/ 59 levels "Afghanistan",..: 13 41 13 7 25 40 54 31 13 25 ...
# $ Pop_City : num 24256800 23500000 21516000 16970105 16787941 ...
# $ Pop_Metro: int 34750000 25400000 24900000 15669000 24998000 13123000 13520000 37843000 44259000 17712000 ...
# $ Pop_Urban: num 23416000 25400000 21009000 18305671 21753486 ...

readr::read_csv(), empty strings as NA not working

I was trying to load a CSV file with readr::read_csv() in which some entries are blank. I set na = "" in read_csv(), but it still loads them as blank entries.
d1 <- read_csv("sample.csv",na="") # want to load empty string as NA
where sample.csv looks like the following:
Name,Age,Weight,City
Sam,13,30,
John,35,58,CA
Doe,20,50,IL
Ann,18,45,
d1 should look like the following (using read_csv()):
Name Age Weight City
1 Sam 13 30 NA
2 John 35 58 CA
3 Doe 20 50 IL
4 Ann 18 45 NA
The first and fourth rows of City should be NA (as shown above), but they actually show up as blank.
Based on the comments and my own verification, the solution was to upgrade to readr 0.2.2.
Thanks to fg nu, akrun and Richard Scriven
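If upgrading readr is not an option, base R's read.csv can do the same job through its na.strings argument. A sketch, writing the sample data to a temporary file:

```r
tmp <- tempfile(fileext = ".csv")
writeLines(c("Name,Age,Weight,City",
             "Sam,13,30,",
             "John,35,58,CA",
             "Doe,20,50,IL",
             "Ann,18,45,"), tmp)
# treat empty fields as NA
d1 <- read.csv(tmp, na.strings = "", stringsAsFactors = FALSE)
d1$City  # first and fourth entries are NA
```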
