rnoaa R package data access

I've been trying to use the R package rnoaa to download climate data from the weather stations closest to my sites of study (essentially almost every state or national park in the state of Florida) over the course of two decades.
I have not found any vignettes or tutorials that help or really make sense to me, especially considering the number of parks I'm working with. I was wondering whether someone on here has experience working with this package and could show an example of how to do this with a few parks from my list?
I also have the park longitudes and latitudes:
df<-structure(list(ParkName = structure(c(2L, 6L, 4L, 7L, 5L, 6L,
3L, 3L, 1L), .Label = c("Big Talbot Island State Park", "Fakahatchee Strand Preserve State Park",
"Jonathan Dickinson State Park", "Key Largo Hammocks", "Myakka River State Park",
"Paynes Prairie Preserve State Park", "Sebastian Inlet State Park"
), class = "factor"), ParkLatitude = c(26.02109, 29.57728, 25.25342,
27.86018, 27.2263, 29.57728, 27.00857, 27.00857, 30.47957), ParkLongitude = c(-81.42208,
-82.30675, -80.31574, -80.45221, -82.26661, -82.30675, -80.13897,
-80.13897, -81.43955), Year = c(2004L, 2000L, 1996L, 1997L, 2008L,
2002L, 2004L, 2002L, 1995L)), .Names = c("ParkName", "ParkLatitude",
"ParkLongitude", "Year"), class = "data.frame", row.names = c(NA,
-9L))
The end goal from this example data would be to have annual temperatures, humidity and other environmental variables from weather stations closest to these parks (or park coordinates) for the years listed in the data. I know that there might be missing data for those years depending on the weather station.

This should get you started (using df from your question):
library(rnoaa)
# load station data - takes some minutes
station_data <- ghcnd_stations()
# add id column for each location (necessary for next function)
df$id <- 1:nrow(df)
# retrieve all stations within a radius (e.g. 20 km) of each park using lapply
stations <- lapply(1:nrow(df), function(i) {
  meteo_nearby_stations(df[i, ],
                        lat_colname = "ParkLatitude",
                        lon_colname = "ParkLongitude",
                        radius = 20,
                        station_data = station_data)[[1]]
})
# pull data for the nearest stations - x$id[1] selects the ID of the closest station
stations_data <- lapply(stations, function(x) meteo_pull_monitors(x$id[1]))
This will give you all variables for the nearest station. Of course, you can restrict the download to just the variables you need with the var argument of meteo_pull_monitors.
Your next step would be to check whether the variables you want are actually available for these stations within your desired time frame. If not, you could use the next closest station (a sketch of a restricted pull follows the example below).
E.g.
The closest station to your first park only has precipitation, min and max temperature:
stations_data[[1]]
# # A tibble: 4,077 x 5
# id date prcp tmax tmin
# <chr> <date> <dbl> <dbl> <dbl>
# 1 USW00092826 2007-02-01 NA NA NA
# 2 USW00092826 2007-02-02 NA NA NA
# 3 USW00092826 2007-02-03 NA NA NA
# 4 USW00092826 2007-02-04 NA NA NA
# 5 USW00092826 2007-02-05 NA NA NA
# 6 USW00092826 2007-02-06 NA NA NA
# 7 USW00092826 2007-02-07 NA NA NA
# 8 USW00092826 2007-02-08 NA NA NA
# 9 USW00092826 2007-02-09 NA NA NA
#10 USW00092826 2007-02-10 NA NA NA
# # ... with 4,067 more rows
And you can see that there are missing measurements which you'll need to handle.
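If you only need a few variables for the year listed for each park, you can restrict the download with the var, date_min and date_max arguments of meteo_pull_monitors. A minimal sketch for the first park, assuming the nearest station actually reports these variables for that year (the chosen variables and the single-year window are assumptions; GHCND values come back in tenths of mm and tenths of a degree Celsius):
library(rnoaa)
# restrict the pull to precipitation and min/max temperature for the park's year
yr <- df$Year[1]
nearest_id <- stations[[1]]$id[1]
park1 <- meteo_pull_monitors(nearest_id,
                             var = c("PRCP", "TMAX", "TMIN"),
                             date_min = paste0(yr, "-01-01"),
                             date_max = paste0(yr, "-12-31"))
# annual means, ignoring missing days (still in tenths of mm / tenths of deg C)
colMeans(park1[c("prcp", "tmax", "tmin")], na.rm = TRUE)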

Related

Explain joining please?

I need some help understanding the concept of joining.
I understand how to mentally model how a join works if you have 2 data files that have a common variable. Like:
Animal  Weight  Age
Dog         12    5
Cat          4   19
Fish         2    4
Mouse        1    2

Animal  Award
Dog     1st
Cat     1st
Fish    3rd
Mouse   5th
These can be joined because the animal column is exactly the same and it just adds on another variable to the same observations of animals.
But I don't understand it when it's something like this:
Mortality Rate (Heart Attack)
Year  Place   Death Rate (Heart Attack)
2011  Paris   200
2011  Paris    94
2011  Rome     23
2009  London   15

Mortality Rate (Car Crash)
Year  Place      Death Rate (Car Crash)
2011  London     987
2012  London      34
2012  Paris       09
2007  Melbourne    12
The variable TYPES are the same (years, cities and death rates). But the year values aren't the same, they aren't in the same order, there aren't the same number of 2011s for example, the locations are different, and there are obviously two different death rates that need to be two different columns. So how does this join work? Which variable would you join by? How would it be configured once joined? Would it just result in lots of NA values if this was across a larger data set?
I understand there are different types of joins that do different things, but I'm just struggling to understand how the years and cities would sit if you were wanting to be able to compare the two different death rates in cities and years.
Thank you!
If you do
merge(heart, car, all=TRUE)
# Year Place Death_Rate_heart Death_Rate_Car
# 1 2007 Melbourne NA 12
# 2 2009 London 15 NA
# 3 2011 London NA 987
# 4 2011 Paris 200 NA
# 5 2011 Paris 94 NA
# 6 2011 Rome 23 NA
# 7 2012 London NA 34
# 8 2012 Paris NA 9
merge automatically looks for matching column names and merges on all of them. It matches Year/Place pairs across the two tables, so the two death rates won't get mixed up. More verbosely, you could do
merge(heart, car, all=TRUE, by.x=c("Year", "Place"), by.y=c("Year", "Place"))
which is actually what happens in this case.
Data:
heart <- structure(list(Year = c(2011L, 2011L, 2011L, 2009L), Place = c("Paris",
"Paris", "Rome", "London"), Death_Rate_heart = c(200L, 94L, 23L,
15L)), class = "data.frame", row.names = c(NA, -4L))
car <- structure(list(Year = c(2011L, 2012L, 2012L, 2007L), Place = c("London",
"London", "Paris", "Melbourne"), Death_Rate_Car = c(987L, 34L,
9L, 12L)), class = "data.frame", row.names = c(NA, -4L))
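For completeness, the dplyr equivalent of this outer join is full_join, which keeps every Year/Place pair from either table; an inner_join would instead keep only the pairs present in both. A small sketch using the heart and car data frames defined above:
library(dplyr)
# full (outer) join: rows with no partner in the other table get NA for the
# missing death rate - same result as merge(heart, car, all = TRUE)
full_join(heart, car, by = c("Year", "Place"))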

How to remove values in a column based on other column values equaling the column values above it?

I am currently coding in R and merged two data frames so I could include all the information together, but I don't want the "Cost" column to be repeated multiple times (the duplication came from the unique values of the last three columns). I want the cost of 100 to appear only in the first row and to be removed from every other row where the columns "State", "Market", "Date", and "Cost" are the same as above. I attached what the dataframe looks like and what I want it to be changed to. Thank you!
What it currently looks like
What it should look like
You can use indexing, as in this example:
name_of_your_dataset[nrow_init:nrow_fin, ncol] <- NA
In your case, assuming your dataset is named 'data':
data[2:4, 4] <- NA
Here is a solution using duplicated with your dataframe (df):
State Market Date Cost Word format Type
1 AZ Phoenix 10-20-2020 100 HELLO AM Sports related
2 AZ Phoenix 10-21-2020 100 GOODBYE PM Non Sports related
3 AZ Phoenix 10-22-2020 100 YES FM Country
4 AZ Phoenix 10-23-2020 100 NONE CM Rock
Set duplicates to NA
df$Cost[duplicated(df$Cost)] <- NA
Output:
State Market Date Cost Word format Type
1 AZ Phoenix 10-20-2020 100 HELLO AM Sports related
2 AZ Phoenix 10-21-2020 NA GOODBYE PM Non Sports related
3 AZ Phoenix 10-22-2020 NA YES FM Country
4 AZ Phoenix 10-23-2020 NA NONE CM Rock
The column Date is different, so I think you want to replace the duplicated Cost values within every State and Market combination.
library(dplyr)
df <- df %>%
  group_by(State, Market) %>%
  mutate(Cost = replace(Cost, duplicated(Cost), NA)) %>%
  ungroup()
df
# State Market Date Cost Word format Type
# <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
#1 AZ Phoenix 10-20-2020 100 HELLO AM Sports related
#2 AZ Phoenix 10-21-2020 NA GOODBYE PM Non Sports related
#3 AZ Phoenix 10-22-2020 NA YES FM Country
#4 AZ Phoenix 10-23-2020 NA NONE CM Rock
data
It is easier to help if you provide data in a reproducible format:
df <- structure(list(State = c("AZ", "AZ", "AZ", "AZ"), Market = c("Phoenix",
"Phoenix", "Phoenix", "Phoenix"), Date = c("10-20-2020", "10-21-2020",
"10-22-2020", "10-23-2020"), Cost = c(100, 100, 100, 100), Word = c("HELLO",
"GOODBYE", "YES", "NONE"), format = c("AM", "PM", "FM", "CM"),
Type = c("Sports related", "Non Sports related", "Country",
"Rock")), row.names = c(NA, -4L), class = "data.frame")

Sorting data via if statement in R

I have a large CSV of workout data extracted from GPX files consisting of 6 columns:
1. No (e.g., 1 through ~900 thousand)
2. latitude (e.g., 34.105329)
3. longitude (e.g., -118.299236)
4. elevation (in meters)
5. date (e.g., 10/20/2017)
6. time (e.g., 2:08:05 AM)
I would like to establish a column that notes the workout number, e.g., workout 1 encompasses rows 1 through 2000 and workout 2 encompasses rows 2001 through 5000. I was able to accomplish this in Excel with an IF statement, but have not figured out how to do it in R.
Basically if a data point was recorded on the same day AND within two hours of the preceding data point, both points belonged to the same workout. If data points were logged in the same day but were separated by more than 2 hours they belong to two separate workouts. I've pasted some data below that include the first few rows of Workout 1 and the first few rows of Workout 2 (just enough to demonstrate how the Excel formula works).
Dput Code:
dput(droplevels(mydata[1:10, ]))
Dput Output:
structure(list(No = 1:10, Latitude = c(34.092483, 34.092534,
34.092573, 34.092624, 34.092652, 34.092684, 34.092712, 34.092742,
34.092774, 34.092808), Longitude = c(-118.300414, -118.300448,
-118.300434, -118.300431, -118.300428, -118.300425, -118.300423,
-118.300425, -118.300426, -118.300427), Altitude = c(104.2, 104.2,
104.3, 104.4, 104.4, 104.5, 104.5, 104.5, 104.6, 104.6), Date = structure(c(1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = "10/20/2017", class = "factor"),
Time = structure(1:10, .Label = c("1:40:18", "1:43:06", "1:43:08",
"1:43:10", "1:43:11", "1:43:12", "1:43:13", "1:43:14", "1:43:15",
"1:43:16"), class = "factor")), row.names = c(NA, 10L), class = "data.frame")
Data Sample:
No Latitude Longitude Altitude Date Time Workout#
1 34.092483 -118.300414 104.2 10/20/2017 1:40:18 1
2 34.092534 -118.300448 104.2 10/20/2017 1:43:06 1
3 34.092573 -118.300434 104.3 10/20/2017 1:43:08 1
4 34.092624 -118.300431 104.4 10/20/2017 1:43:10 1
5 34.092652 -118.300428 104.4 10/20/2017 1:43:11 1
1332 34.092487 -118.300577 104.1 11/4/2017 1:23:24 2
1333 34.092513 -118.300565 104.2 11/4/2017 1:23:25 2
1334 34.09255 -118.30053 104.3 11/4/2017 1:23:26 2
1335 34.092592 -118.300495 104.4 11/4/2017 1:23:28 2
1336 34.092619 -118.300481 104.4 11/4/2017 1:23:29 2
1337 34.092668 -118.300467 104.5 11/4/2017 1:23:31 2
Edit:
Thank you to @AllanCameron and @GregorThomas. I ran your code and summed it up using the code below, which yields the desired results.
workout_id <- cumsum(c(1, as.numeric(diff(workout_times) > 7200)))
# Add the workout id as column 'cumsum' of the 'mydata' data frame
mydata$cumsum <- workout_id
library(sqldf)
sqldf("select distinct(cumsum) from mydata")
Assuming that your workouts are more than 30 minutes apart, you can do this:
workout_times <- as.POSIXct(paste(df$Date, df$Time), format = "%m/%d/%Y %H:%M:%S")
cumsum(c(1, as.numeric(diff(workout_times) > 1800)))
#> [1] 1 1 1 1 1 2 2 2 2 2 2
You can change the 1800 to a number of seconds between workouts that seems best for you.
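If you want to apply your exact rule (same day AND within two hours of the previous point means the same workout), here is a sketch along the same lines, assuming mydata is your full data frame with Date and Time columns as in the dput above, and that "Workout" is the column name you want:
# combine date and time into a timestamp
workout_times <- as.POSIXct(paste(mydata$Date, mydata$Time),
                            format = "%m/%d/%Y %H:%M:%S")
workout_days <- as.Date(mydata$Date, format = "%m/%d/%Y")
# gap in seconds to the previous point, and whether the calendar day changed
gap_secs <- as.numeric(difftime(workout_times[-1], workout_times[-nrow(mydata)],
                                units = "secs"))
day_changed <- workout_days[-1] != workout_days[-nrow(mydata)]
# a new workout starts whenever the gap exceeds two hours or the day changes
mydata$Workout <- cumsum(c(TRUE, gap_secs > 7200 | day_changed))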

Compare values in data.frame from different rows [closed]

I have an R data.frame of college football data, with two entries for each game (one for each team, with stats and whatnot). I would like to compare points from these to create a binary Win/Loss variable, but I have no idea how (I'm not very experienced with R).
Is there a way I can iterate through the columns and try to match them up against another column (I have a game ID variable, so I'd match on that) and create the aforementioned binary Win/Loss variable by comparing points values?
Excerpt of dataframe (many variables left out):
Team Code Name Game Code Date Site Points
5 Akron 5050320051201 12/1/2005 NEUTRAL 32
5 Akron 404000520051226 12/26/2005 NEUTRAL 23
8 Alabama 419000820050903 9/3/2005 TEAM 37
8 Alabama 664000820050910 9/10/2005 TEAM 43
What I want is to append a new column, a binary variable that's assigned 1 or 0 based on if the team won or lost. To figure this out, I need to take the game code, say 5050320051201, find the other row with that same game code (there's only one other row with that same game code, for the other team in that game), and compare the points value for the two, and use that to assign the 1 or 0 for the Win/Loss variable.
Assuming that your data has exactly two teams for each unique Game Code and there are no tie games, as in the following example:
df <- structure(list(`Team Code` = c(5L, 6L, 5L, 5L, 8L, 9L, 9L, 8L
), Name = c("Akron", "St. Joseph", "Akron", "Miami(Ohio)", "Alabama",
"Florida", "Tennessee", "Alabama"), `Game Code` = structure(c(1L,
1L, 2L, 2L, 3L, 3L, 4L, 4L), .Label = c("5050320051201", "404000520051226",
"419000820050903", "664000820050910"), class = "factor"), Date = structure(c(13118,
13118, 13143, 13143, 13029, 13029, 13036, 13036), class = "Date"),
Site = c("NEUTRAL", "NEUTRAL", "NEUTRAL", "NEUTRAL", "TEAM",
"AWAY", "AWAY", "TEAM"), Points = c(32L, 25L, 23L, 42L, 37L,
45L, 42L, 43L)), .Names = c("Team Code", "Name", "Game Code",
"Date", "Site", "Points"), row.names = c(NA, -8L), class = "data.frame")
print(df)
## Team Code Name Game Code Date Site Points
##1 5 Akron 5050320051201 2005-12-01 NEUTRAL 32
##2 6 St. Joseph 5050320051201 2005-12-01 NEUTRAL 25
##3 5 Akron 404000520051226 2005-12-26 NEUTRAL 23
##4 5 Miami(Ohio) 404000520051226 2005-12-26 NEUTRAL 42
##5 8 Alabama 419000820050903 2005-09-03 TEAM 37
##6 9 Florida 419000820050903 2005-09-03 AWAY 45
##7 9 Tennessee 664000820050910 2005-09-10 AWAY 42
##8 8 Alabama 664000820050910 2005-09-10 TEAM 43
You can use dplyr to generate what you want:
library(dplyr)
result <- df %>%
  group_by(`Game Code`) %>%
  mutate(`Win/Loss` = if (first(Points) > last(Points)) as.integer(c(1, 0)) else as.integer(c(0, 1)))
print(result)
##Source: local data frame [8 x 7]
##Groups: Game Code [4]
##
## Team Code Name Game Code Date Site Points Win/Loss
## <int> <chr> <fctr> <date> <chr> <int> <int>
##1 5 Akron 5050320051201 2005-12-01 NEUTRAL 32 1
##2 6 St. Joseph 5050320051201 2005-12-01 NEUTRAL 25 0
##3 5 Akron 404000520051226 2005-12-26 NEUTRAL 23 0
##4 5 Miami(Ohio) 404000520051226 2005-12-26 NEUTRAL 42 1
##5 8 Alabama 419000820050903 2005-09-03 TEAM 37 0
##6 9 Florida 419000820050903 2005-09-03 AWAY 45 1
##7 9 Tennessee 664000820050910 2005-09-10 AWAY 42 0
##8 8 Alabama 664000820050910 2005-09-10 TEAM 43 1
Here, we first group_by the Game Code and then use mutate to create the Win/Loss column for each group. The logic is simply that if the first Points value is greater than the last (there are only two by assumption), we set the column to c(1,0); otherwise we set it to c(0,1). Note that this logic does not handle ties, but it can easily be extended to do so (see the sketch below). Note also that we surround the column names with backticks because of special characters such as spaces and /.
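For example, here is a minimal sketch of one way to extend it; coding a tied game as NA for both rows is an assumption about how you would want ties represented:
library(dplyr)
# the higher score gets 1, the lower 0, and a tied game gets NA for both rows
# (still assumes exactly two rows per Game Code)
result <- df %>%
  group_by(`Game Code`) %>%
  mutate(`Win/Loss` = case_when(
    Points > rev(Points) ~ 1L,
    Points < rev(Points) ~ 0L,
    TRUE                 ~ NA_integer_
  )) %>%
  ungroup()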
Code your Wins variable as either 1 or 0, i.e. binary. Then you can subset on it:
footballdata$SomeVariable[footballdata$Wins == 1] <- stuff
R's data frames are nice in that you can pull out exactly the rows you want, for example only the games where Wins is 1, and then assign to some variable as above. If you populate one data frame from another, make sure they have the same number of rows.
footballdata$SomeVariable[footballdata$Wins == 1 & footballdata$Team == "Browns"] <- Hopeful

Reading names with special characters using R

I have an Excel (xlsx) table, and in the column "PLAYERS" European players have an asterisk around their names and South Americans don't. Something like this:
PLAYERS
Neymar
*Bale*
Messi
*Ronaldo*
*Benzema*
*Iniesta*
DiMaria
Is there any way I can use R (or excel itself) to split this dataset into one with Europeans (with asterisk) and another one with South Americans? Of course, the data set contains other columns like "SALARY", "SCORED GOALS", "OFFSITE", "AGE" etc. etc. etc.
Thanks,
Diego.
You could check whether there's an "*" in the player's name and write "European" or "South American" in a new column. If you want, you could then split the data frame into a list with two data.frames, one with Europeans and the other with South Americans:
df <- data.frame(PLAYERS = c("Neymar", "*Ronaldo*", "Messi"), SALARY = 5:7)
df
# PLAYERS SALARY
#1 Neymar 5
#2 *Ronaldo* 6
#3 Messi 7
# check if there's a * in the PLAYERS column
df$Location <- ifelse(grepl("\\*", df$PLAYERS), "European", "South American")
df
# PLAYERS SALARY Location
#1 Neymar 5 South American
#2 *Ronaldo* 6 European
#3 Messi 7 South American
#split the data based on location:
dflist <- split(df, df$Location)
dflist
#$European
# PLAYERS SALARY Location
#2 *Ronaldo* 6 European
#
#$`South American`
# PLAYERS SALARY Location
#1 Neymar 5 South American
#3 Messi 7 South American
Now you can access each list element (which is a data.frame) by typing
dflist[["European"]] # or "South American" instead
# PLAYERS SALARY Location
#2 *Ronaldo* 6 European
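If the table is still sitting in the xlsx file, you could read it in first with the readxl package before doing the check above. A small sketch; the file name "players.xlsx" and the sheet number are assumptions:
library(readxl)
# read the player table from the first sheet of the workbook (file name assumed)
df <- read_excel("players.xlsx", sheet = 1)
df$Location <- ifelse(grepl("\\*", df$PLAYERS), "European", "South American")
dflist <- split(df, df$Location)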
You can split this specific column and name the resulting list with split and setNames. Note that split puts the FALSE group (names without an asterisk, i.e. the South Americans) first, so name the elements accordingly:
> dat <- structure(list(PLAYERS = structure(c(6L, 1L, 5L, 7L, 2L, 4L, 3L),
.Label = c("*Bale*", "*Benzema*", "DiMaria", "*Iniesta*",
"Messi", "Neymar", "*Ronaldo*"), class = "factor")),
.Names = "PLAYERS", class = "data.frame", row.names = c(NA,-7L))
> setNames(split(dat, grepl("[*]", dat$PLAYERS)), nm = c("SoAm", "Euro"))
#$SoAm
# PLAYERS
# 1 Neymar
# 3 Messi
# 7 DiMaria
#
#$Euro
# PLAYERS
# 2 *Bale*
# 4 *Ronaldo*
# 5 *Benzema*
# 6 *Iniesta*
Create a PivotTable from your source data with PLAYERS for ROWS. Filter with Label Filters, Contains... ~* and click on Grand Total. Return to PT, select Does Not Contain... and click on Grand Total again.
