How do I add the time to the netCDF file? (R)

I want to create a time series in a netCDF file with 3 dimensions (lon, lat, time [unlimited]). The time series should be built from other netCDF files, each of which contains only one time point (for example 17856).
I know how to create the new netCDF file, how to extract the data from a source file as a 2D array, and how to get the time for that data.
My problem is:
How do I put the 2D array into the netCDF file with its correct time? And how do the start and count arguments of the ncvar_put function work?
I use the ncdf4 package and have read the tutorial at http://geog.uoregon.edu/bartlein/courses/geog490/week04-netCDF.html#create-and-write-a-netcdf-file and searched for an answer, but I still don't understand it. I'm still inexperienced with netCDF files.
Example of my problem:
# data from other netcdf files
library(ncdf4)

values = array(data = c(1:9)/10, dim = c(3, 3))
values_2 = array(data = c(9:17)/10, dim = c(3, 3)) # a 3 x 3 array needs exactly 9 values
time = 25
time_2 = 23

# set parameters
lon = 1:3
lat = 1:3

# define dimensions
# Longitude
londim = ncdim_def(name = "longitude", units = "degrees", vals = as.double(lon),
                   longname = "longitude")
# Latitude
latdim = ncdim_def(name = "latitude", units = "degrees", vals = as.double(lat),
                   longname = "latitude")
# Time (unlimited)
timedim = ncdim_def(name = "time", units = "days since 1582-10-15 00:00",
                    vals = as.double(1), unlim = TRUE, calendar = "gregorian")

# define variables
B01 = ncvar_def(name = "B01",
                units = "percent",
                dim = list(londim, latdim, timedim),
                missval = NA,
                prec = "double")

# create netcdf
nc_test = nc_create("test.nc", list(B01), force_v4 = TRUE)

# Add values
### Here is something missing --> How do I add the timestamp?
ncvar_put(nc_test, "B01", values, start = c(1, 1, 1), count = c(-1, -1, 1))
ncvar_put(nc_test, "B01", values_2, start = c(1, 1, 2), count = c(-1, -1, 1))
When I extract the data I get the 3-3-2 array, but the time steps are not correct because I didn't add them. How do I do this?
I would like to get the 3-3-2 array back and, when I read the time variable, the right times in the correct order.
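For reference, here is a minimal sketch of one way to do this, assuming the default behaviour of ncdim_def (which creates a coordinate variable named "time" along the unlimited dimension): write each time stamp into that coordinate variable with ncvar_put, using the same start/count logic as for the data. start gives the index along each dimension at which writing begins, and count gives how many values to write (-1 meaning "everything along that dimension").
# first time step: the data slice at time index 1, plus its time stamp
ncvar_put(nc_test, "B01", values, start = c(1, 1, 1), count = c(-1, -1, 1))
ncvar_put(nc_test, "time", time, start = 1, count = 1)
# second time step at index 2 along the unlimited time dimension
ncvar_put(nc_test, "B01", values_2, start = c(1, 1, 2), count = c(-1, -1, 1))
ncvar_put(nc_test, "time", time_2, start = 2, count = 1)
# close the file so everything is flushed to disk
nc_close(nc_test)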

I add time to the netCDF file using another method, with Python's netCDF4 module. This is some sample code for your reference.
from datetime import datetime
from datetime import timedelta
from netCDF4 import date2num
from netCDF4 import Dataset
import os

# generate time for netCDF with 1 hour interval
utc_now = datetime.utcnow()
time_list = [utc_now + timedelta(hours=1 * step) for step in range(6)]
trans_time = date2num(time_list, units="hours since 0001-01-01 00:00:00.0", calendar="gregorian")

with Dataset(os.getcwd() + "/sample.nc", "w") as sample:
    # Create dimensions for sample.nc
    time = sample.createDimension("time", None)
    lat = sample.createDimension("lat", 3)  # 3 is the latitude size in this sample
    lon = sample.createDimension("lon", 3)
    # Create variables for sample.nc
    time = sample.createVariable("time", "f8", ("time",))
    lat = sample.createVariable("lat", "f4", ("lat",))
    lon = sample.createVariable("lon", "f4", ("lon",))
    time[:] = trans_time
    variable_with_time = sample.createVariable("variable_with_time", "f4", ("time", "lat", "lon"))
    for key, value in sample.variables.items():
        print(key)
        print(value)
        print("*" * 70)
Output:
time
<class 'netCDF4._netCDF4.Variable'>
float64 time(time)
unlimited dimensions: time
current shape = (6,)
filling on, default _FillValue of 9.969209968386869e+36 used
**********************************************************************
lat
<class 'netCDF4._netCDF4.Variable'>
float32 lat(lat)
unlimited dimensions:
current shape = (3,)
filling on, default _FillValue of 9.969209968386869e+36 used
**********************************************************************
lon
<class 'netCDF4._netCDF4.Variable'>
float32 lon(lon)
unlimited dimensions:
current shape = (3,)
filling on, default _FillValue of 9.969209968386869e+36 used
**********************************************************************
variable_with_time
<class 'netCDF4._netCDF4.Variable'>
float32 variable_with_time(time, lat, lon)
unlimited dimensions: time
current shape = (6, 3, 3)
filling on, default _FillValue of 9.969209968386869e+36 used
**********************************************************************
You may notice that time is placed as the first dimension. For detailed information, see the netCDF4 documentation that I referenced.
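(For the R side of the question, a rough analogue of date2num, an assumption on my part rather than part of this answer, is to compute the offsets by hand with difftime:)
# numeric values for a "days since 1582-10-15" time variable
origin <- as.POSIXct("1582-10-15 00:00", tz = "UTC")
times <- as.POSIXct(c("2018-01-01", "2018-01-02"), tz = "UTC")
as.double(difftime(times, origin, units = "days"))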

Related

How can I put another table in one cell of an existing table

Using the following code, I am able to extract daily temperature data from the NOAA database for a specific weather station with a latitude of 62.1925 and a longitude of -150.5033.
install.packages("pacman")
pacman::p_load(rgdal, ggplot2, patchwork, rnoaa)

stns = meteo_distance(station_data = ghcnd_stations(), lat = 62.1925,
                      long = -150.5033, units = "deg", radius = 0.01, limit = NULL)
WXData = meteo_pull_monitors(
  monitors = stns[1, 1],
  keep_flags = FALSE,
  date_min = "1990-01-01",
  date_max = "2022-01-01",
  var = c("TMAX", "TMIN", "TAVG", "TOBS")
)
The output of the above code is a table, meaning that for every weather station we get such a table. I have a csv file called "station" from which I should import the latitude and longitude of each station. My question is: how can I insert the generated temperature table next to its coordinates in the "station" file?
I have not tried anything yet, since I am very new to R.
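(One possible approach, not from the original thread and with assumed column names for the "station" csv: pull the data for all station ids at once, then join the coordinates onto the result by id.)
library(rnoaa)
# assumed layout: one row per station with columns id, lat, lon
stations <- read.csv("station.csv")
WXData <- meteo_pull_monitors(
  monitors = stations$id,
  date_min = "1990-01-01",
  date_max = "2022-01-01",
  var = c("TMAX", "TMIN", "TAVG", "TOBS")
)
# one long table: each daily record sits next to its station's coordinates
merged <- merge(WXData, stations, by = "id")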

read.WSdata Error in data.frame

I am trying to follow the Landsat 8 example at https://cran.r-project.org/web/packages/water/vignettes/Landsat8.html. I get stuck at the read.WSdata step, where I get the error:
Error in data.frame(date = unique(WSdata$date), radiation_sum = tapply(WSdata$radiation, : arguments imply differing number of rows: 1, 0
I am using my own data, NOT the data provided in the example.
My csv file has been organized exactly like the example dataset ("INTA.csv"). The only difference I have noticed between the datasets is that mine has a datetime every 15 minutes while the example dataset has a datetime every hour.
Here is my code.
rm(list = ls())
library(water)
aoi <- createAoi(topleft = c(385387, 4776577),
                 bottomright = c(414825, 4749526), EPSG = 32612)
raw_data_folder <- system.file("rossfrk072616", package = "water")
image <- loadImage(path = raw_data_folder, aoi = aoi, sat = "L8")
image.SR <- loadImageSR(path = raw_data_folder, aoi = aoi)
plot(image)
plot(image.SR)
csvfile <- system.file("rossfrk072616", "FTHI_L8_1.csv", package = "water")
I am also assuming we use the original MTL file and NOT the surface reflectance MTL file, which, when downloaded from ESPA, has the same name as the original MTL file?
MTLfile <- system.file("rossfrk072616",
                       "LC08_L1TP_039030_20160726_20170221_01_T1_MTL.txt", package = "water")
WeatherStation <- read.WSdata(WSdata = csvfile, datetime.format = "%Y/%m/%d %H:%M",
                              columns = c("datetime", "temp", "RH", "pp", "radiation", "wind"),
                              lat = 43.07138, long = -112.4311, elev = 1354.5, height = 2.5,
                              MTL = MTLfile)
After I run read.WSdata I get the error:
Error in data.frame(date = unique(WSdata$date), radiation_sum = tapply(WSdata$radiation, : arguments imply differing number of rows: 1, 0
For some reason I was not able to get the code from the website to work with my dataset. However, I was able to read my weather station data with the following code:
WeatherStation <- read.WSdata(WSdata = csvfile, date.format = "%d/%m/%Y",
                              lat = 43.07138, long = -112.4311, elev = 1354.5, height = 2.5,
                              MTL = MTLfile)
This error is related to different formats for the date: in your first attempt the date part of datetime.format was set to "%Y/%m/%d", while the working call uses date.format = "%d/%m/%Y".
Also, you can directly specify the file in the read.WSdata() function, e.g.:
WeatherStation <- read.WSdata(WSdata = 'FTHI_L8_1.csv', date.format = "%d/%m/%Y",
                              lat = 43.07138, long = -112.4311, elev = 1354.5, height = 2.5,
                              MTL = MTLfile)
I used my data and it worked well
library(water)
aoi <- createAoi(topleft = c(810927, 2134059), bottomright = c(272751, 1985845),
                 EPSG = 32616)
plot(aoi)
csvfile <- system.file("extdata", "datos.csv", package = "water")
MTLfile <- system.file("extdata", "L8.MTL.txt", package = "water")
ws <- read.WSdata(WSdata = csvfile, date.format = "%d/%m/%Y", time.format = "%H:%M:%S",
                  cf = c(1, 1, 1), lat = 18.094, long = -89.462, elev = 279, height = 2,
                  MTL = MTLfile,
                  columns = c("date" = 1, "time" = 2, "radiation" = 3, "wind" = 4,
                              "RH" = 5, "temp" = 6, "rain" = 7))

Plot function outside the candlestick pattern in R

I have two xts objects: stock and base. I calculate the relative strength (which is simply the ratio of the closing price of the stock to that of the base index), and I want to plot the weekly relative strength outside the candlestick pattern. The links for the data are here and here.
library(quantmod)
library(xts)
library(quantmod)
library(xts)

read_stock = function(fichier) { # read and preprocess data
  stock = read.csv(fichier, header = T)
  stock$DATE = as.Date(stock$DATE, format = "%d/%m/%Y") # standardize time format
  stock = stock[!duplicated(index(stock), fromLast = T), ] # remove rows with a duplicated
                                                           # timestamp, but keep the latest one
  stock$CLOSE = as.numeric(stock$CLOSE)   # current numeric columns are of type character
  stock$OPEN = as.numeric(stock$OPEN)     # so need to convert into double
  stock$HIGH = as.numeric(stock$HIGH)     # otherwise quantmod functions won't work
  stock$LOW = as.numeric(stock$LOW)
  stock$VOLUME = as.numeric(stock$VOLUME)
  stock = xts(x = stock[, -1], order.by = stock[, 1]) # convert to xts class
  return(stock)
}

relative.strength = function(stock, base = read_stock("vni.csv")) {
  rs = Cl(stock) / Cl(base)
  rs = apply.weekly(rs, FUN = mean)
}

stock = read_stock("aaa.csv")
candleChart(stock, theme = 'white')
addRS = newTA(FUN = relative.strength, col = 'red', legend = 'RS')
addRS()
However, R returns this error:
Error in `/.default`(Cl(stock), Cl(base)) : non-numeric argument to binary operator
How can I fix this?
One problem is that "vni.csv" contains a "Ticker" column. Since xts objects are a matrix at their core, you can't have columns of different types. So the first thing you need to do is ensure that you only keep the OHLC and volume columns of the "vni.csv" file. I've refactored your read_stock function to be:
read_stock = function(fichier) {
  # read and preprocess data
  stock <- read.csv(fichier, header = TRUE, as.is = TRUE)
  stock$DATE = as.Date(stock$DATE, format = "%d/%m/%Y")
  stock = stock[!duplicated(index(stock), fromLast = TRUE), ]
  # convert to xts class
  stock = xts(OHLCV(stock), order.by = stock$DATE)
  return(stock)
}
Next, it looks like the first argument to relative.strength inside the addRS function is passed as a matrix, not an xts object. So you need to convert it to xts, but take care that the index class of the stock object is the same as the index class of the base object.
Then you need to make sure your weekly rs object has an observation for each day in stock. You can do that by merging your weekly data with an empty xts object that has all the index values for the stock object.
So I refactored your relative.strength function to:
relative.strength = function(stock, base) {
  # convert to xts
  sxts <- as.xts(stock)
  # ensure 'stock' index class is the same as 'base' index class
  indexClass(sxts) <- indexClass(base)
  index(sxts) <- index(sxts)
  # calculate relative strength
  rs = Cl(sxts) / Cl(base)
  # weekly mean relative strength
  rs = apply.weekly(rs, FUN = mean)
  # merge 'rs' with an empty xts object containing the same index values as 'stock'
  merge(rs, xts(, index(sxts)), fill = na.locf)
}
Now, this code:
stock = read_stock("aaa.csv")
base = read_stock("vni.csv")
addRS = newTA(FUN=relative.strength, col='red', legend='RS')
candleChart(stock, theme='white')
addRS(base)
Produces this chart:
The following line in your read_stock function is causing the problem:
stock = xts(x = stock[,-1], order.by = stock[,1]) # convert to xts class
vni.csv has the actual symbol name in the third column of your data, so when you put stock[,-1] you're actually including a character column and xts forces all the other columns to be characters as well. Then R alerts you about dividing a number by a character at Cl(stock) / Cl(base). Here is a simple example of this error message with division:
> x <- c(1,2)
> y <- c("A", "B")
> x/y
Error in x/y : non-numeric argument to binary operator
I suggest you remove the character column in vni.csv that contains "VNIndex" in every row, or modify your read_stock() function to better protect against this type of issue.
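(A minimal sketch of such a guard, an assumption on my part rather than code from this answer, assuming quantmod/xts are loaded as above: drop the non-numeric columns before building the xts object.)
read_stock = function(fichier) {
  stock <- read.csv(fichier, header = TRUE, as.is = TRUE)
  stock$DATE <- as.Date(stock$DATE, format = "%d/%m/%Y")
  stock <- stock[!duplicated(stock$DATE, fromLast = TRUE), ]
  # keep only numeric columns; this drops "Ticker"/"VNIndex"-style character columns
  xts(stock[, sapply(stock, is.numeric)], order.by = stock$DATE)
}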

R rnoaa annual results - No data found

I am currently trying to use the rnoaa library to connect city and state data with a weather station, and thereby output ANNUAL weather data, namely temperature. I have included a hardcoded input for reference, but I intend to feed in hundreds of geocoded cities eventually. That isn't the issue so much as retrieving the data is.
require(rnoaa)
require(ggmap)
city<-geocode("birmingham, alabama", output = "all")
bounds<-city$results[[1]]$geometry$bounds
se<-bounds$southwest$lat
sw<-bounds$southwest$lng
ne<-bounds$northeast$lat
nw<-bounds$northeast$lng
stations<-ncdc_stations(extent = c(se, sw, ne, nw),token = noaakey)
I am calculating an MBR (minimum bounding rectangle) around the geographic area, in this case Birmingham, and then getting a list of stations. I then pull out the station id and attempt to retrieve results with all kinds of parameters, with no success. I'm looking to associate annual temperatures with each city.
test <- ncdc(datasetid = "ANNUAL", locationid = topStation[1],
datatypeid = "DSNW",startdate = "2000-01-01", enddate = "2010-01-01",
limit = 1000, token = noaakey)
Warning message:
Sorry, no data found
It looks like the location ID is causing the issue. Try without it (as it is an optional field):
ncdc_locs(datasetid = "ANNUAL",datatypeid = "DSNW",startdate = "2000-01-01", enddate = "2010-01-01", limit = 1000,token = <your token key>)
and then with valid location ID
ncdc_locs(datasetid = "ANNUAL",datatypeid = "DSNW",startdate = "2000-01-01", enddate = "2010-01-01", limit = 1000,locationid='CITY:US000001',token = <your token>)
returns
$meta
NULL
$data
mindate maxdate name datacoverage id
1 1872-01-01 2016-04-16 Washington D.C., US 1 CITY:US000001
attr(,"class")
[1] "ncdc_locs"
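(A hypothetical follow-up, not shown in the original answer: once a valid location ID is known, it can be passed to ncdc() itself.)
test <- ncdc(datasetid = "ANNUAL", datatypeid = "DSNW",
             locationid = "CITY:US000001",
             startdate = "2000-01-01", enddate = "2010-01-01",
             limit = 1000, token = noaakey)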

How to add polylines from one location to others separately using leaflet in shiny?

I'm trying to add polylines from one specific location to many others in Shiny R using addPolylines from leaflet. But instead of linking from one location to the others, I am only able to link them all together in a sequence. The best example of what I'm trying to achieve is the cricket wagon-wheel diagram:
observe({
  long.path <- c(-73.993438700, locations$Long[1:9])
  lat.path <- c(40.750545000, locations$Lat[1:9])
  proxy <- leafletProxy("map", data = locations)
  if (input$paths) {
    proxy %>% addPolylines(lng = long.path, lat = lat.path, weight = 3, fillOpacity = 0.5,
                           layerId = ~locations, color = "red")
  }
})
It is in a reactive expression as I want them to be activated by a checkbox.
I'd really appreciate any help with this!
Note
I'm aware the OP asked for a leaflet answer, but this question piqued my interest enough to seek alternative solutions, so here are two.
Example - mapdeck
Mapdeck (my package) uses Deck.gl on a Mapbox map, so you need a Mapbox API key to use it. But it does let you plot 2.5D arcs.
It works on data.frames and data.tables (as well as sp and sf objects).
center <- c(144.983546, -37.820077)
df_hits$center_lon <- center[1]
df_hits$center_lat <- center[2]
df_hits$score <- sample(c(1:4, 6), size = nrow(df_hits), replace = T)

library(mapdeck)
set_token("MAPBOX")

mapdeck(
  style = mapdeck_style("satellite")
) %>%
  add_arc(
    data = df_hits
    , origin = c("center_lon", "center_lat")
    , destination = c("lon", "lat")
    , stroke_from = "score"
    , stroke_to = "score"
    , stroke_width = "score"
    , palette = "magma"
  )
Example - googleway
This example uses googleway (also my package, which interfaces the Google Maps API), and it also works on data.frames and data.tables (as well as sp and sf objects).
The trick is in the encodeCoordinates function, which encodes coordinates (lines) into a Google Polyline
library(data.table)
library(googleway)
library(googlePolylines) ## gets installed when you install googleway

center <- c(144.983546, -37.820077)
setDT(df_hits) ## data given at the end of the post

## generate a 'hit' id
df_hits[, hit := .I]
## generate a random score for each hit
df_hits[, score := sample(c(1:4, 6), size = .N, replace = T)]

df_hits[
  , polyline := encodeCoordinates(c(lon, center[1]), c(lat, center[2]))
  , by = hit
]

set_key("GOOGLE_MAP_KEY") ## you need an API key to load the map

google_map() %>%
  add_polylines(
    data = df_hits
    , polyline = "polyline"
    , stroke_colour = "score"
    , stroke_weight = "score"
    , palette = viridisLite::plasma
  )
The dplyr equivalent would be
df_hits %>%
  mutate(hit = row_number(), score = sample(c(1:4, 6), size = n(), replace = T)) %>%
  group_by(hit, score) %>%
  mutate(
    polyline = encodeCoordinates(c(lon, center[1]), c(lat, center[2]))
  )
Data
df_hits <- structure(list(lon = c(144.982933659011, 144.983487725258,
144.982804912978, 144.982869285995, 144.982686895782, 144.983239430839,
144.983293075019, 144.983529109412, 144.98375441497, 144.984103102141,
144.984376687461, 144.984183568412, 144.984344500953, 144.984097737723,
144.984065551215, 144.984339136535, 144.984001178199, 144.984124559814,
144.984280127936, 144.983990449363, 144.984253305846, 144.983030218536,
144.982896108085, 144.984022635871, 144.983786601478, 144.983668584281,
144.983673948699, 144.983577389175, 144.983416456634, 144.983577389175,
144.983282346183, 144.983244795257, 144.98315360015, 144.982896108085,
144.982686895782, 144.982617158347, 144.982761997634, 144.982740539962,
144.982837099486, 144.984033364707, 144.984494704658, 144.984146017486,
144.984205026084), lat = c(-37.8202049841516, -37.8201201023877,
-37.8199253045246, -37.8197812267274, -37.8197727515541, -37.8195269711051,
-37.8197600387923, -37.8193828925304, -37.8196964749506, -37.8196583366193,
-37.8195820598976, -37.8198956414717, -37.8200651444706, -37.8203575362288,
-37.820196509027, -37.8201032825917, -37.8200948074554, -37.8199253045246,
-37.8197897018997, -37.8196668118057, -37.8200566693299, -37.8203829615443,
-37.8204295746001, -37.8205355132537, -37.8194761198756, -37.8194040805737,
-37.819569347103, -37.8197007125418, -37.8196752869912, -37.8195015454947,
-37.8194930702893, -37.8196286734591, -37.8197558012046, -37.8198066522414,
-37.8198151274109, -37.8199549675656, -37.8199253045246, -37.8196964749506,
-37.8195862974953, -37.8205143255351, -37.8200270063298, -37.8197430884399,
-37.8195354463066)), row.names = c(NA, -43L), class = "data.frame")
I know this was asked a year ago but I had the same question and figured out how to do it in leaflet.
You are first going to have to adjust your data frame, because addPolylines just connects all the coordinates in a sequence. It seems that you know your starting location and want it to branch out to 9 separate locations. I am going to start with the ending locations. Since you have not provided them, I will make a data frame with 4 separate ending locations for the purpose of this demonstration.
dest_df <- data.frame(lat = c(41.82, 46.88, 41.48, 39.14),
                      lon = c(-88.32, -124.10, -88.33, -114.90))
Next, I am going to create a data frame with the central location repeated to the same length as the destination data frame (4 rows in this example), using your original coordinates. I will explain why shortly.
orig_df <- data.frame(lat = rep.int(40.75, nrow(dest_df)),
                      lon = rep.int(-73.99, nrow(dest_df)))
The reason I am doing this is that addPolylines connects all the coordinates in a sequence. The way to get around this, in order to create the image you described, is to go from the starting point to a destination point, then back to the starting point, then to the next destination point, and so forth. To build the data frame for that, we will have to interlace the two data frames so the rows alternate:
- starting point
- destination point 1
- starting point
- destination point 2
- and so forth...
The way I will do this is to create a key for both data frames. For the origin data frame, I will start at 1 and increment by 2 (e.g., 1, 3, 5, 7). For the destination data frame, I will start at 2 and increment by 2 (e.g., 2, 4, 6, 8). I will then combine the two data frames using a UNION ALL and sort by the sequence key so that every other row is a starting point. I am going to use sqldf for this because that is what I'm comfortable with; there may be a more efficient way (see the base R sketch after the code).
orig_df$sequence <- seq(1, length.out = nrow(orig_df), by = 2)
dest_df$sequence <- seq(2, length.out = nrow(dest_df), by = 2)

library("sqldf")
q <- "
SELECT * FROM orig_df
UNION ALL
SELECT * FROM dest_df
ORDER BY sequence
"
poly_df <- sqldf(q)
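(For reference, an assumed base R equivalent of the sqldf step, possible here because the two data frames share the same column names:)
poly_df <- rbind(orig_df, dest_df)            # stack origin and destination rows
poly_df <- poly_df[order(poly_df$sequence), ] # sort so every other row is the start point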
The new data frame looks like this (notice how the origin locations are interwoven with the destinations):
And finally, you can make your map:
library("leaflet")
leaflet() %>%
  addTiles() %>%
  addPolylines(
    data = poly_df,
    lng = ~lon,
    lat = ~lat,
    weight = 3,
    opacity = 3
  )
And finally it should look like this:
I hope this helps anyone who is looking to do something like this in the future.
Here is a possible approach based on the mapview package. Simply create SpatialLines connecting your start point with each of the end points (stored in locations), bind them together and display the data using mapview.
library(mapview)
library(raster)

## start point
root <- matrix(c(-73.993438700, 40.750545000), ncol = 2)
colnames(root) <- c("Long", "Lat")

## end points
locations <- data.frame(Long = (-78):(-70), Lat = c(40:44, 43:40))

## create and append spatial lines
lst <- lapply(1:nrow(locations), function(i) {
  SpatialLines(list(Lines(list(Line(rbind(root, locations[i, ]))), ID = i)),
               proj4string = CRS("+init=epsg:4326"))
})
sln <- do.call("bind", lst)

## display data
mapview(sln)
Just don't get confused by the Line-to-SpatialLines procedure (see ?Line, ?SpatialLines).
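Broken out step by step for a single segment (purely illustrative, using the objects defined above):
seg <- Line(rbind(root, locations[1, ]))  # one two-point line
lns <- Lines(list(seg), ID = "1")         # one feature made of that line
sl <- SpatialLines(list(lns),             # the feature list plus a CRS
                   proj4string = CRS("+init=epsg:4326"))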
