Separating address column into multiple columns in R

I have a dataset that has an address column for 400 records. I would like to split this column into multiple columns.
Sample data
Full_Address = c("1111 Harding St Hollywood, FL 33024",
"2222 W Broward Blvd Plantation, 33317",
"3333 SW 74 Ave Davie, 33314",
"4444 Thomas Street Hollywood, FL 33024",
"11111 Lake Road (SW 12 Street) Davie, 33325",
"555 Bryan Blvd Plantation, 33317",
"5555 NW 71 Ter Parkland, 33067",
"7777 N Oakland Forest Dr Oakland Park, 33309,
"888 Some Ave Pines Pembroke Pines, 33346",
"9999 Some Blvd Hallandale Beach, 33365",
"4440 Some 123 Ave Pompano Beach, 33389")
Desired Columns
ID = c("1111",
"2222",
"3333",
"4444",
"11111",
"555",
"5555",
"7777",
"888",
"9999",
"4440")
Street_Address = c("Harding St",
"W Broward Blvd",
"SW 74 Ave",
"Thomas Street",
"Lake Road (SW 12 Street)",
"Bryan Blvd",
"NW 71 Ter",
"N Oakland Forest Dr",
"Some Ave Pines",
"Some Blvd",
"Some 123 Ave")
City = c("Hollywood",
"Plantation",
"Davie",
"Hollywood",
"Davie",
"Plantation",
"Parkland",
"Oakland Park",
"Pembroke Pines",
"Hallandale Beach",
"Pompano Beach")
Zipcode = c("33024",
"33317",
"33314",
"33024",
"33325",
"33317",
"33067",
"33309",
"33346",
"33365",
"33389")
How can I do this in R via tidyr?
Code
library(tidyverse)
library(tidyr)
df <- data.frame(Full_Address)
df <- df %>%
tidyr::separate(Full_Address, c("ID", "Street_Address", "City", "Zipcode"),
sep = " ", extra = "merge") # stuck at this step.....

Note that this assumes a city name is a single word: multi-word cities like New York or Los Angeles will not be matched.
data.frame(Full_Address) %>%
extract(Full_Address, c("ID", "Street_Address", "City", "Zipcode"),
'(\\d+) ([^,]+) (\\w+),\\D+(\\d+)')
ID Street_Address City Zipcode
1 1111 Harding St Hollywood 33024
2 2222 W Broward Blvd Plantation 33317
3 3333 SW 74 Ave Davie 33314
4 4444 Thomas Street Hollywood 33024
5 11111 Lake Road (SW 12 Street) Davie 33325
6 555 Bryan Blvd Plantation 33317
7 5555 NW 71 Ter Parkland 33067
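If the set of city names is known, one workaround is to build the city part of the pattern from that list so multi-word cities match too. A sketch, assuming the cities vector below covers every city in the data:
library(tidyverse)
# build an alternation of the known city names (assumption: the list is complete)
cities <- c("Hollywood", "Plantation", "Davie", "Parkland", "Oakland Park",
            "Pembroke Pines", "Hallandale Beach", "Pompano Beach")
city_rx <- paste(cities, collapse = "|")
data.frame(Full_Address) %>%
  extract(Full_Address, c("ID", "Street_Address", "City", "Zipcode"),
          paste0("^(\\d+) (.*) (", city_rx, "),\\D*(\\d+)$"))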

R replace characters in a column based on a word in another column

I have a dataset that has multiple columns. In the Date_Received column there are certain rows that have the word Abandonment in them. I would like to search the rows in this column that have this word, and then replace the characters AP with AB in the corresponding rows of the AP column.
How can I do this?
Sample data df:
structure(list(id = 1:6, Date_Received = c("Addition 1/2/2018",
"Swimming Pool 1/8/2018", "Swimming Pool 1/8/2018", "Abandonment 1/9/2018",
"Swimming Pool 1/12/2017", "Abandonment 2/5/2018"), Date_Approved = c("1/2/2018",
"1/8/2018", "1/8/2018", "1/9/2018", "1/12/2017", "2/5/2018"),
AP= c("AP-18-001", "AP-18-002", "AP-18-003", "AP-18-004",
"AP-18-005", "AP-18-006"), Permit.. = c("06-SE-1812147",
"06-SS-1813516", "06-SS-1813699", "06-SE-1814032", "06-SE-1814924",
"06-SS-1820333"), Owner.Name.Agent = c("Tiny Tots Academy, Inc Mike Davis",
"Ernesto & Elizabeth Diaz Ensign Pools", "DSL Contruction & Investments LLC",
"BSD North Federal LLC EPOCA Plumbing Corp", "Maria Silva Parkwood Pools And Pavers LLC",
"HPA Borrower Westland Plumbing"), X = c("NA NA", "NA NA",
"NA NA", "NA NA", "NA NA", "NA NA"), Project.Address.City = c("61111 Washington Street Hollywood, 33024",
"1224 SW 170 Avenue SW Ranches, 33331", "1233 NW 6 Place Plantation, 33325",
"1231 N Federal Hwy Hollywood, 33020", "3223 Dawson Street",
"3691 SW 31 Avenue Fort Lauderdale")), class = c("tbl_df",
"tbl", "data.frame"), row.names = c(NA, -6L))
Code:
library(tidyverse)
library(dplyr)
df = df %>% grepl("Abandonment", df$Date_Received) %>% str_replace(df$AP) # .... stuck
This should do it:
df %>%
mutate(AP = ifelse(grepl("Abandonment", Date_Received, fixed = TRUE), gsub("AP", "AB", AP), AP))
Which gives:
# A tibble: 6 × 8
id Date_Received Date_Approved AP Permit.. Owner.Name.Agent X Project.Address.City
<int> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 1 Addition 1/2/2018 1/2/2018 AP-18-001 06-SE-1812147 Tiny Tots Academy, Inc Mike Davis NA NA 61111 Washington Street Holly…
2 2 Swimming Pool 1/8/2018 1/8/2018 AP-18-002 06-SS-1813516 Ernesto & Elizabeth Diaz Ensign Pools NA NA 1224 SW 170 Avenue SW Ranches…
3 3 Swimming Pool 1/8/2018 1/8/2018 AP-18-003 06-SS-1813699 DSL Contruction & Investments LLC NA NA 1233 NW 6 Place Plantation, 3…
4 4 Abandonment 1/9/2018 1/9/2018 AB-18-004 06-SE-1814032 BSD North Federal LLC EPOCA Plumbing Corp NA NA 1231 N Federal Hwy Hollywood,…
5 5 Swimming Pool 1/12/2017 1/12/2017 AP-18-005 06-SE-1814924 Maria Silva Parkwood Pools And Pavers LLC NA NA 3223 Dawson Street
6 6 Abandonment 2/5/2018 2/5/2018 AB-18-006 06-SS-1820333 HPA Borrower Westland Plumbing NA NA 3691 SW 31 Avenue Fort Lauder…
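A stringr variant of the same idea, closer to the original attempt (a sketch, equivalent to the grepl()/gsub() version above):
library(dplyr)
library(stringr)
df %>%
  mutate(AP = if_else(str_detect(Date_Received, fixed("Abandonment")),
                      str_replace(AP, fixed("AP"), "AB"),
                      AP))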

Drop rows after criteria

I have some data that I'm trying to clean up, and I noticed that I have 150 files with rows that are subsets of previous rows. Is there a way to drop everything after certain criteria occur? I'm not sure how I'd write out sample data for this via code, so I've listed an example of the data as text below. I'd like to drop all rows at and below "Section 2".
Name,Age,Address
Section 1,,
Abby,10,1 Baker St
Alice,12,3 Main St
Becky,13,156 F St
Ben,14,2 18th St
Cameron,15,4 Journey Road
Danny,16,123 North Ave
Eric,17,325 Hill Blvd
,,
Section 2,,
Abby,10,1 Baker St
Alice,12,3 Main St
Becky,13,156 F St
Ben,14,2 18th St
,,
Section 3,,
Becky,13,156 F St
Ben,14,2 18th St
Cameron,15,4 Journey Road
Danny,16,123 North Ave
,,
Section 5,,
Alice,12,3 Main St
Becky,13,156 F St
Ben,14,2 18th St
Cameron,15,4 Journey Road
Danny,16,123 North Ave
Eric,17,325 Hill Blvd
Expected output
Name,Age,Address
Section 1,,
Abby,10,1 Baker St
Alice,12,3 Main St
Becky,13,156 F St
Ben,14,2 18th St
Cameron,15,4 Journey Road
Danny,16,123 North Ave
Eric,17,325 Hill Blvd
Assuming your text file is called temp.txt, you can use readLines to read it in, find the line containing 'Section 2', and keep everything above it (the - 2 also drops the blank separator line just before it).
tmp <- readLines('temp.txt')
inds <- grep('Section 2', tmp) - 2
data <- read.csv(text = paste0(tmp[1:inds], collapse = '\n'))
data
# Name Age Address
#1 Section 1 NA
#2 Abby 10 1 Baker St
#3 Alice 12 3 Main St
#4 Becky 13 156 F St
#5 Ben 14 2 18th St
#6 Cameron 15 4 Journey Road
#7 Danny 16 123 North Ave
#8 Eric 17 325 Hill Blvd
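If you would rather read the whole file first, a dplyr sketch (assuming the same temp.txt) keeps every row before the first "Section 2" marker:
library(dplyr)
df <- read.csv('temp.txt')
df %>%
  filter(cumsum(Name == "Section 2") == 0) %>% # rows before the marker
  filter(Name != "")                           # drop the blank ,, separator row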
Here, I "read" in your data by calling strsplit and using the newline as the separator. If you were doing this from file, you could use readLines
I use grep to find the line number that contains "Section 2", use that to subset raw_data. I paste0(..., collapse="") the lines that do not start with "Section" and use read.table using sep="," with header=TRUE to parse as if I read just that section with read.csv.
raw_data <- strsplit(split = "\\n", "Name,Age,Address
Section 1,,
Abby,10,1 Baker St
Alice,12,3 Main St
Becky,13,156 F St
Ben,14,2 18th St
Cameron,15,4 Journey Road
Danny,16,123 North Ave
Eric,17,325 Hill Blvd
,,
Section 2,,
Abby,10,1 Baker St
Alice,12,3 Main St
Becky,13,156 F St
Ben,14,2 18th St
,,
Section 3,,
Becky,13,156 F St
Ben,14,2 18th St
Cameron,15,4 Journey Road
Danny,16,123 North Ave
,,
Section 5,,
Alice,12,3 Main St
Becky,13,156 F St
Ben,14,2 18th St
Cameron,15,4 Journey Road
Danny,16,123 North Ave
Eric,17,325 Hill Blvd")
section2_idx <- grep('Section 2', raw_data[[1]])
raw_data_clean <- trimws(raw_data[[1]][1:(section2_idx-2)])
allsect_idx <- grep('^Section', raw_data_clean)
if (length(allsect_idx) > 0)
raw_data_clean <- raw_data_clean[-allsect_idx]
read.table(text = paste0(raw_data_clean, collapse="\n"), sep=",", header = TRUE)
#> Name Age Address
#> 1 Abby 10 1 Baker St
#> 2 Alice 12 3 Main St
#> 3 Becky 13 156 F St
#> 4 Ben 14 2 18th St
#> 5 Cameron 15 4 Journey Road
#> 6 Danny 16 123 North Ave
#> 7 Eric 17 325 Hill Blvd
Created on 2020-12-06 by the reprex package (v0.3.0)
Here is a made-up example that avoids having to type in your starting data.
mixed_data is 500 elements long and each element is a string containing two commas. The string doesn't need to be broken apart if it looks like your example.
Create an empty vector to hold just one of each value, then loop through the whole mixed list and add the unique entries to that vector. This example resulted in 444 unique items in one_of_each out of the original 500 in mixed_data.
set.seed(101)
a <- sample(LETTERS,500, replace = TRUE)
b <- sample(letters,500, replace = TRUE)
d <- sample(c(1:3),500, replace = TRUE)
mixed_data <- paste0(a,",",b,",",d)
head(mixed_data)
one_of_each <- c() #starts empty
for (i in seq_along(mixed_data)) {
  if (!(mixed_data[i] %in% one_of_each)) {
    one_of_each <- c(one_of_each, mixed_data[i]) # if not found, then add
  }
}
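For what it's worth, the loop above reimplements base R's unique(), so the same result is available in one call:
one_of_each <- unique(mixed_data) # identical result, no loop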

Extract cell with AND without commas in R

I'm trying to extract the city and state from the Address column into 2 separate columns labeled City and State in r. This is what my data looks like:
df <- data.frame(address = c("Los Angeles, CA", "Pittsburgh PA", "Miami FL","Baltimore MD", "Philadelphia, PA", "Trenton, NJ")) %>%
separate(address, c("City", "State"), sep=",")
I tried using the separate function but that only gets the ones with commas. Any ideas on how to do this for both cases?
There is a pattern at the end (space, letter, letter) that I could exploit, and then remove any commas, but I'm not sure how the syntax would work using grep.
Starting from your df
df <- data.frame(address = c("Los Angeles, CA", "Pittsburgh PA", "Miami FL","Baltimore MD", "Philadelphia, PA", "Trenton, NJ"))
> df
address
1 Los Angeles, CA
2 Pittsburgh PA
3 Miami FL
4 Baltimore MD
5 Philadelphia, PA
6 Trenton, NJ
It's possible to use gsub to subset the string: the inner gsub drops the last three characters (the space and the two-letter state), and the outer one removes any remaining comma:
> city=gsub(',','',gsub("(.*).{3}","\\1",df[,1]))
> city
[1] "Los Angeles" "Pittsburgh" "Miami" "Baltimore" "Philadelphia"
[6] "Trenton"
> state=gsub(".*(\\w{2})","\\1",df[,1])
> state
[1] "CA" "PA" "FL" "MD" "PA" "NJ"
df=data.frame(City=city,State=state)
> df
City State
1 Los Angeles CA
2 Pittsburgh PA
3 Miami FL
4 Baltimore MD
5 Philadelphia PA
6 Trenton NJ
This is a little unorthodox, but it works well. It assumes that all states are 2 characters long and that there is at least 1 space between the city and state. Commas are ignored.
df <- data.frame(address = c("Los Angeles, CA", "Pittsburgh PA", "Miami FL","Baltimore MD", "Philadelphia, PA", "Trenton, NJ"))
df$city <- substring(sub(",","",df$address),1,nchar(sub(",","",df$address))-3)
df$state <- substring(as.character(df$address),nchar(as.character(df$address))-1,nchar(as.character(df$address)))
df <- within(df,rm(address))
output:
city state
1 Los Angeles CA
2 Pittsburgh PA
3 Miami FL
4 Baltimore MD
5 Philadelphia PA
6 Trenton NJ
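You can also stay with separate() and hand it a regex that tolerates the optional comma. A sketch, assuming the state is always the final two uppercase letters:
library(tidyr)
df <- data.frame(address = c("Los Angeles, CA", "Pittsburgh PA", "Miami FL",
                             "Baltimore MD", "Philadelphia, PA", "Trenton, NJ"))
df %>%
  separate(address, c("City", "State"), sep = ",?\\s+(?=[A-Z]{2}$)") # lookahead keeps the state intact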

R: Mission impossible? How to assign "New York" to a county

I run into problems assigning a county to some city places. When querying via the acs package
> geo.lookup(state = "NY", place = "New York")
state state.name county.name place place.name
1 36 New York <NA> NA <NA>
2 36 New York Bronx County, Kings County, New York County, Queens County, Richmond County 51000 New York city
3 36 New York Oneida County 51011 New York Mills village
, you can see that "New York", for instance, has a bunch of counties. So do Los Angeles, Portland, Oklahoma, Columbus etc. How can such data be assigned to a "county"?
The following code is currently used to match "county.name" with the corresponding county FIPS code. Unfortunately, it only works for queries that return a single county name.
Script
dat <- c("New York, NY","Boston, MA","Los Angeles, CA","Dallas, TX","Palo Alto, CA")
dat <- strsplit(dat, ",")
dat
library(tigris)
library(acs)
library(dplyr) # for bind_rows and left_join below
data(fips_codes) # FIPS codes with state, code, county information
GeoLookup <- lapply(dat,function(x) {
geo.lookup(state = trimws(x[2]), place = trimws(x[1]))[2,]
})
df <- bind_rows(GeoLookup)
#Rename cols to match
colnames(fips_codes) = c("state.abb", "statefips", "state.name", "countyfips", "county.name")
# Here is a problem, because it works with one item in "county.name" but not more than one (see output below).
df <- df %>% left_join(fips_codes, by = c("state.name", "county.name"))
df
Returns:
state state.name county.name place place.name state.abb statefips countyfips
1 36 New York Bronx County, Kings County, New York County, Queens County, Richmond County 51000 New York city <NA> <NA> <NA>
2 25 Massachusetts Suffolk County 7000 Boston city MA 25 025
3 6 California Los Angeles County 20802 East Los Angeles CDP CA 06 037
4 48 Texas Collin County, Dallas County, Denton County, Kaufman County, Rockwall County 19000 Dallas city <NA> <NA> <NA>
5 6 California San Mateo County 20956 East Palo Alto city CA 06 081
In order to retain data, the left_join might better match on "county.name contains place.name" (without the appended "city" suffix in the name), or choose the first item by default. It would be great to see how this could be done.
In general: I assume there's no better way than this approach?
Thanks for your help!
What about something like the code below to create a "long" data frame for joining? We use the tidyverse pipe operator to chain operations. strsplit returns a list, which we unnest to stack the list values (the county names that go with each combination of state.name and place.name) into a long data frame where each county.name now gets its own row.
library(tigris)
library(acs)
library(tidyverse)
dat = geo.lookup(state = "NY", place = "New York")
state state.name county.name place place.name
1 36 New York <NA> NA <NA>
2 36 New York Bronx County, Kings County, New York County, Queens County, Richmond County 51000 New York city
3 36 New York Oneida County 51011 New York Mills village
dat = dat %>%
group_by(state.name, place.name) %>%
mutate(county.name = strsplit(county.name, ", ")) %>%
unnest(county.name)
state state.name place place.name county.name
<chr> <chr> <int> <chr> <chr>
1 36 New York NA <NA> <NA>
2 36 New York 51000 New York city Bronx County
3 36 New York 51000 New York city Kings County
4 36 New York 51000 New York city New York County
5 36 New York 51000 New York city Queens County
6 36 New York 51000 New York city Richmond County
7 36 New York 51011 New York Mills village Oneida County
UPDATE: Regarding the second question in your comment, assuming you have the vector of metro areas already, how about this:
dat <- c("New York, NY","Boston, MA","Los Angeles, CA","Dallas, TX","Palo Alto, CA")
df <- map_df(strsplit(dat, ", "), function(x) {
geo.lookup(state = x[2], place = x[1])[-1, ] %>%
group_by(state.name, place.name) %>%
mutate(county.name = strsplit(county.name, ", ")) %>%
unnest(county.name)
})
df
state state.name place place.name county.name
1 36 New York 51000 New York city Bronx County
2 36 New York 51000 New York city Kings County
3 36 New York 51000 New York city New York County
4 36 New York 51000 New York city Queens County
5 36 New York 51000 New York city Richmond County
6 36 New York 51011 New York Mills village Oneida County
7 25 Massachusetts 7000 Boston city Suffolk County
8 25 Massachusetts 7000 Boston city Suffolk County
9 6 California 20802 East Los Angeles CDP Los Angeles County
10 6 California 39612 Lake Los Angeles CDP Los Angeles County
11 6 California 44000 Los Angeles city Los Angeles County
12 48 Texas 19000 Dallas city Collin County
13 48 Texas 19000 Dallas city Dallas County
14 48 Texas 19000 Dallas city Denton County
15 48 Texas 19000 Dallas city Kaufman County
16 48 Texas 19000 Dallas city Rockwall County
17 48 Texas 40516 Lake Dallas city Denton County
18 6 California 20956 East Palo Alto city San Mateo County
19 6 California 55282 Palo Alto city Santa Clara County
UPDATE 2: If I understand your comments, for cities (actually place names in the example) with more than one county, we want only the county that includes the same name as the city (for example, New York County in the case of New York city), or the first county in the list otherwise. The following code selects a county with the same name as the city or, if there isn't one, the first county for that city. You might have to tweak it a bit to make it work for the entire U.S. For example, for it to work for Louisiana, you might need gsub(" County| Parish"... instead of gsub(" County"....
map_df(strsplit(dat, ", "), function(x) {
geo.lookup(state = x[2], place = x[1])[-1, ] %>%
group_by(state.name, place.name) %>%
mutate(county.name = strsplit(county.name, ", ")) %>%
unnest(county.name) %>%
slice(max(1, which(grepl(sub(" [A-Za-z]*$","", place.name), gsub(" County", "", county.name))), na.rm=TRUE))
})
state state.name place place.name county.name
<chr> <chr> <int> <chr> <chr>
1 36 New York 51000 New York city New York County
2 36 New York 51011 New York Mills village Oneida County
3 25 Massachusetts 7000 Boston city Suffolk County
4 6 California 20802 East Los Angeles CDP Los Angeles County
5 6 California 39612 Lake Los Angeles CDP Los Angeles County
6 6 California 44000 Los Angeles city Los Angeles County
7 48 Texas 19000 Dallas city Dallas County
8 48 Texas 40516 Lake Dallas city Denton County
9 6 California 20956 East Palo Alto city San Mateo County
10 6 California 55282 Palo Alto city Santa Clara County
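To see the selection logic in isolation, here is the core of that slice() call applied to a single place, using the counties from the query above:
place  <- "New York city"
county <- c("Bronx County", "Kings County", "New York County",
            "Queens County", "Richmond County")
# strip the trailing "city"/"village" word, then look for the result among the counties
hit <- which(grepl(sub(" [A-Za-z]*$", "", place), gsub(" County", "", county)))
county[max(1, hit, na.rm = TRUE)]
#> [1] "New York County"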
Could you prep the data by using something like the below code?
library(stringr) # for str_trim and str_split below
new_york_data <- geo.lookup(state = "NY", place = "New York")
prep_data <- function(full_data){
output <- data.frame()
for(row in 1:nrow(full_data)){
new_rows <- replicateCounty(full_data[row, ])
output <- plyr::rbind.fill(output, new_rows)
}
return(output)
}
replicateCounty <- function(row){
counties <- str_trim(unlist(str_split(row$county.name, ",")))
output <- data.frame(state = row$state,
state.name = row$state.name,
county.name = counties,
place = row$place,
place.name = row$place.name)
return(output)
}
prep_data(new_york_data)
It's a little messy and you'll need the plyr and stringr packages. Once you prep the data, you should be able to join on it.
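As a side note, newer versions of tidyr can replace the replicateCounty() loop with separate_rows(), which splits the comma-separated county.name into one row per county (a sketch):
library(tidyr)
geo.lookup(state = "NY", place = "New York") %>%
  separate_rows(county.name, sep = ",\\s*")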

Batch Geocoding in R with Google Maps returns 'no result' for many queries

Objective: Using R, get lat/long data for a vector of addresses through Google Maps.
Approach: I began with the excellent code from Tony Breyal at: Geocoding in R with Google Maps and modified it slightly to suit my own purposes (I need the function to return a SpatialPoints object with all of the addresses in it and the projection string defined).
So we have:
library(RCurl)
library(RJSONIO)
library(maptools)
#unchanged from Breyal's code
construct.geocode.url <- function(address, return.call = "json", sensor = "false") {
root <- "http://maps.google.com/maps/api/geocode/"
u <- paste(root, return.call, "?address=", address, "&sensor=", sensor, sep = "")
return(URLencode(u))
}
#obviously this would be better vectorized, but I left it in this form to help me
#understand where the errors were coming from
batchGeoCode <- function(address.list) {
lat<-vector(mode="numeric",length=length(address.list))
lng<-vector(mode="numeric",length=length(address.list))
for(i in 1:length(address.list)){
address<-address.list[i]
u <- construct.geocode.url(address)
doc <- getURL(u)
x <- fromJSON(doc,simplify = FALSE)
#I put the try in here just so I could see if it was always getting
#hung on the same address or the same special character or the like
errtest<-try(x$results[[1]]$geometry$location$lat)
if(is(errtest, 'try-error')){
lat[i]=0
lng[i]=0
}else{
lat[i] <- x$results[[1]]$geometry$location$lat
lng[i] <- x$results[[1]]$geometry$location$lng
}
}
points.df<-data.frame(lat,lng)
coordinates(points.df)=~lng+lat
points.sp<-SpatialPoints(coords=points.df,
proj4string=CRS("+proj=longlat +datum=WGS84"))
return(points.sp)
}
#some sample data that is enough to cause a fail for me--groceries and mini marts in
#South Seattle. All of these return a valid address when entered individually in
#Google Maps
address.list<-c("T W YOUNG MARKET 7144 BEACON AVE S SEATTLE WA 98108",
"RAINIER MARKET CTR 3625 1ST AVE S SEATTLE WA 98134",
"JING JING ASIAN MARKET 12402 SE 38TH ST BELLEVUE WA 98006",
"HILAL MARKET PLACE 15061 MILITARY RD S SEATAC WA 98188",
"GUADALUPE MARKET 1111 SW 128TH ST BURIEN WA 98146",
"JAMAL MARKET 14645 TUKWILA INTERNATIONAL BL TUKWILA WA 98168",
"C & D INTL MARKET SEATTLE WA 98198",
"GUERRERO MARKET 17730 AMBAUM BLVD S # B SEATTLE WA 98148",
"MI CASA MARKET 17174 116TH AVE SE RENTON WA 98058",
"GUERRERO MARKET 17730 AMBAUM BLVD S # B SEATTLE WA 98148",
"MI CASA MARKET 17174 116TH AVE SE RENTON WA 98058",
"BILISEE MARKET 8300 RAINIER AVE S SEATTLE WA 98118",
"ALI SEATAC INTL MARKET 16324 INTERNATIONAL BLVD SEATAC WA 98188",
"OCEAN MARKET 2119 RAINIER AVE S SEATTLE WA 98144",
"RED APPLE MARKETS 9627 DES MOINES MEMORIAL DR SEATTLE WA 98108",
"ANKGOR MARKET 9660 16TH AVE SW SEATTLE WA 98106",
"SIXTEENTH AVENUE GROCERY 9001 16TH AVE SW SEATTLE WA 98106",
"RAINIER ORIENTAL MARKET 9237 RAINIER AVE S SEATTLE WA 98118",
"CLEAN MACHINE LAUNDRYMAT SEATTLE WA 98118",
"HALIN GROCERY & DELI SEATTLE WA 98118")
#call the function
result<-batchGeoCode(address.list)
With short lists this works, but as the list gets over 15 I start getting 'no result' responses. I am pretty sure this is a throttling problem (such as Slow down Geocoding a batch of Address), but not too clear on how to implement this within R so it fits the specific needs of this function.
Thanks in advance,
Chris
One approach would be to use ggmap.
library(ggmap)
address.list<-c("T W YOUNG MARKET 7144 BEACON AVE S SEATTLE WA 98108",
"RAINIER MARKET CTR 3625 1ST AVE S SEATTLE WA 98134",
"JING JING ASIAN MARKET 12402 SE 38TH ST BELLEVUE WA 98006",
"HILAL MARKET PLACE 15061 MILITARY RD S SEATAC WA 98188",
"GUADALUPE MARKET 1111 SW 128TH ST BURIEN WA 98146",
"JAMAL MARKET 14645 TUKWILA INTERNATIONAL BL TUKWILA WA 98168",
"C & D INTL MARKET SEATTLE WA 98198",
"GUERRERO MARKET 17730 AMBAUM BLVD S # B SEATTLE WA 98148",
"MI CASA MARKET 17174 116TH AVE SE RENTON WA 98058",
"GUERRERO MARKET 17730 AMBAUM BLVD S # B SEATTLE WA 98148",
"MI CASA MARKET 17174 116TH AVE SE RENTON WA 98058",
"BILISEE MARKET 8300 RAINIER AVE S SEATTLE WA 98118",
"ALI SEATAC INTL MARKET 16324 INTERNATIONAL BLVD SEATAC WA 98188",
"OCEAN MARKET 2119 RAINIER AVE S SEATTLE WA 98144",
"RED APPLE MARKETS 9627 DES MOINES MEMORIAL DR SEATTLE WA 98108",
"ANKGOR MARKET 9660 16TH AVE SW SEATTLE WA 98106",
"SIXTEENTH AVENUE GROCERY 9001 16TH AVE SW SEATTLE WA 98106",
"RAINIER ORIENTAL MARKET 9237 RAINIER AVE S SEATTLE WA 98118",
"CLEAN MACHINE LAUNDRYMAT SEATTLE WA 98118",
"HALIN GROCERY & DELI SEATTLE WA 98118")
geocode(address.list, output="latlona")
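To address the throttling question directly, here is a minimal sketch that stays with the original RCurl/RJSONIO stack: pause between requests and retry once on an empty result. The delay values are assumptions; tune them to your quota.
# assumes construct.geocode.url() from the question is defined
geocode_with_pause <- function(address, delay = 0.5, retries = 1) {
  for (attempt in 0:retries) {
    Sys.sleep(delay * (attempt + 1)) # back off a little on each retry
    doc <- getURL(construct.geocode.url(address))
    x <- fromJSON(doc, simplify = FALSE)
    if (length(x$results) > 0) return(x$results[[1]]$geometry$location)
  }
  c(lat = 0, lng = 0) # give up, mirroring the 0/0 fallback in batchGeoCode
}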
