Reading and Writing CSV file in R
I want to update specific rows of a CSV file (the rows that carry dates) using a data frame that I created in R. The file looks like this:
01/04/20, Asset, Position, Price, Mark-to-Market
0, PORTFOLIO, NA, NA, 1000000
1, CASH, NA, NA, 1000000

02/04/20, Asset, Position, Price, Mark-to-Market, Position prior, Transaction, TC spent
0, PORTFOLIO, NA, NA, 999231, NA, NA, NA
1, CASH, NA, NA, 509866, NA, NA, NA
2, FUTURES, 500, 2516, 1258250, 0, 500, 629
3, VXc1, -5931, 47, -279795, 0, -5931, 140
, Total, Buys:, 1, Sells:, 1, TC spent:, 769
The full file has over 1,000 rows. However, I am unable to read this CSV file using the following code. Can anyone help me with this?
df4 <- read.csv("filename.csv")
Further, I have to add two columns (columns 2 and 3 of df3, created below) to the rows of df4 that have dates (except the first row). Can anyone help me with this as well?
The code to get df3 is as follows. However, I don't know how to add these columns to df4 selectively in R.
df1 <- read.csv("filename1.csv")
df2 <- read.csv("filename2.csv")
df3 <- cbind(df2[,c(1)], df1[,c(3)], df2[,c(3)])
I'm not sure what you need for your second question, but to address the first:
txt <- readLines("filename.csv")
# Warning in readLines("filename.csv") :
# incomplete final line found on 'filename.csv'
multidf <- by(txt, cumsum(!grepl("\\S", txt)),
FUN = function(x) read.csv(text = x, strip.white = TRUE))
multidf
# cumsum(!grepl("\\S", txt)): 0
# X01.04.20 Asset Position Price Mark.to.Market
# 1 0 PORTFOLIO NA NA 1000000
# 2 1 CASH NA NA 1000000
# ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
# cumsum(!grepl("\\S", txt)): 1
# X02.04.20 Asset Position Price Mark.to.Market Position.prior Transaction TC.spent
# 1 0 PORTFOLIO <NA> NA 999231 NA <NA> NA
# 2 1 CASH <NA> NA 509866 NA <NA> NA
# 3 2 FUTURES 500 2516 1258250 0 500 629
# 4 3 VXc1 -5931 47 -279795 0 -5931 140
# 5 NA Total Buys: 1 Sells: 1 TC spent: 769
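To see why the by() call splits the file correctly, note that grepl("\S", txt) is FALSE only for blank lines, so the cumulative sum bumps the group id at each blank line. A tiny self-contained sketch (toy lines, not the asker's file):

```r
# Each blank line increments the running count, so the lines between
# blanks share a group id.
txt <- c("a,b", "1,2", "", "c,d", "3,4")
grp <- cumsum(!grepl("\\S", txt))
grp
# [1] 0 0 1 1 1
split(txt, grp)
# $`0`
# [1] "a,b" "1,2"
# $`1`
# [1] ""    "c,d" "3,4"
```

read.csv(text = ...) skips the leading blank line in the second group (blank.lines.skip is TRUE by default), so each chunk parses cleanly.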
The multidf object is technically a "by"-class object, but that's really just a glorified list:
str(multidf)
# List of 2
# $ 0:'data.frame': 2 obs. of 5 variables:
# ..$ X01.04.20 : int [1:2] 0 1
# ..$ Asset : chr [1:2] "PORTFOLIO" "CASH"
# ..$ Position : logi [1:2] NA NA
# ..$ Price : logi [1:2] NA NA
# ..$ Mark.to.Market: int [1:2] 1000000 1000000
# $ 1:'data.frame': 5 obs. of 8 variables:
# ..$ X02.04.20 : int [1:5] 0 1 2 3 NA
# ..$ Asset : chr [1:5] "PORTFOLIO" "CASH" "FUTURES" "VXc1" ...
# ..$ Position : chr [1:5] NA NA "500" "-5931" ...
# ..$ Price : int [1:5] NA NA 2516 47 1
# ..$ Mark.to.Market: chr [1:5] "999231" "509866" "1258250" "-279795" ...
# ..$ Position.prior: int [1:5] NA NA 0 0 1
# ..$ Transaction : chr [1:5] NA NA "500" "-5931" ...
# ..$ TC.spent : int [1:5] NA NA 629 140 769
From here, you can keep it as a list (can be good, see https://stackoverflow.com/a/24376207/3358227) or try to combine into a single frame (the same link has info for that, too).
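If you do want a single frame despite the differing columns, here is a minimal base-R sketch; combine_frames is my own (hypothetical) helper name, and it assumes each chunk's first column header carries the date (e.g. "X01.04.20"), as in multidf above:

```r
# Hedged sketch: give each chunk a common first-column name, keep its
# date (which read.csv stored in the header) as a column, pad missing
# columns with NA, then rbind everything into one frame.
combine_frames <- function(frames) {
  frames <- lapply(frames, function(df) {
    df$Date <- names(df)[1]       # the date-derived header, e.g. "X01.04.20"
    names(df)[1] <- "Row"
    df
  })
  all_cols <- Reduce(union, lapply(frames, names))
  do.call(rbind, lapply(frames, function(df) {
    miss <- setdiff(all_cols, names(df))
    if (length(miss)) df[miss] <- NA   # pad columns missing in this chunk
    df[all_cols]
  }))
}

# Toy stand-ins for the two chunks shown in multidf above:
f1 <- data.frame(X01.04.20 = 0:1, Asset = c("PORTFOLIO", "CASH"),
                 Mark.to.Market = c(1000000, 1000000))
f2 <- data.frame(X02.04.20 = 0:1, Asset = c("PORTFOLIO", "CASH"),
                 Mark.to.Market = c(999231, 509866), TC.spent = c(NA, 629))
combined <- combine_frames(list(f1, f2))
```

With the real data this would be combine_frames(multidf); a "by" object behaves as a plain list for lapply purposes.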
Rayshader: Rendered polygons don't align with the surface height
This is my first post, and I will try to describe my problem as exactly as I can without writing a novel. Since English is not my native language, please forgive any ambiguities or spelling errors.

I am currently trying out the rayshader package for R in order to visualise several layers and create a representation of georeferenced data from Berlin. The data I have is a DEM (5 m resolution) and a GeoJSON containing a building layer with building heights, a water layer, and a tree layer with tree heights. For now only the DEM and the building layer are used.

I can render the DEM without any problems. The building polygons are also extruded and rendered, but their foundation height does not coincide with the corresponding height that should be read from the elevation matrix created from the DEM. I expected the polygons to be placed correctly and "stand" on the rendered surface, but most of them clip through said surface or are stuck inside the ground layer.

My assumption is that I am using the wrong function for my purpose: the creator of the package uses render_multipolygonz() for buildings, as can be seen here, timecode 12:49. I tried that, but it just renders an unextruded, continuous polygon on my base layer underneath the ground. I might also be missing an argument of the render_polygons() function. It could also well be that I am making a simple calling or assignment error, since I am anything but an expert in R. I am just starting my coding journey.
Here is my code:

#set wd to save location
setwd(dirname(rstudioapi::getActiveDocumentContext()$path))

#load libs
library(geojsonR)
library(rayshader)
library(raster)
library(sf)
library(rgdal)
library(dplyr)
library(rgl)

#load DEM
tempel_DOM <- raster("Daten/Tempelhof_Gelaende_5m_25833.tif")

#load buildings layer from GEOJSON
buildings_temp <- st_read(dsn = "Daten/Tempelhof_GeoJSON_25833.geojson", layer = "polygon") %>%
  st_transform(crs = st_crs(tempel_DOM)) %>%
  filter(!is.na(bh))

#create elevation matrix from DEM
tempel_elmat <- raster_to_matrix(tempel_DOM)

#Tempelhof Render
tempel_elmat %>%
  sphere_shade(texture = "imhof1") %>%
  add_shadow(ray_shade(tempel_elmat), 0.5) %>%
  plot_3d(
    tempel_elmat,
    zscale = 5,
    fov = 0,
    theta = 135,
    zoom = 0.75,
    phi = 45,
    windowsize = c(1000, 800),
  )

render_polygons(
  buildings_temp,
  extent = extent(tempel_DOM),
  color = 'hotpink4',
  parallel = TRUE,
  data_column_top = 'bh',
  clear_previous = T,
)

The structure of my buildings_temp using str() is:

> str(buildings_temp)
Classes ‘sf’ and 'data.frame':   625 obs. of  11 variables:
 $ t       : int  1 1 1 1 1 1 1 1 1 1 ...
 $ t2      : int  NA NA NA NA NA NA NA NA NA NA ...
 $ t3      : int  NA NA NA NA NA NA NA NA NA NA ...
 $ t4      : int  NA NA NA NA NA NA NA NA NA NA ...
 $ t1      : int  1 4 1 1 1 1 1 1 1 1 ...
 $ bh      : num  20.9 2.7 20.5 20.1 19.3 20.9 19.7 19.8 19.6 17.8 ...
 $ t5      : int  NA NA NA NA NA NA NA NA NA NA ...
 $ t6      : int  NA NA NA NA NA NA NA NA NA NA ...
 $ th      : num  NA NA NA NA NA NA NA NA NA NA ...
 $ id      : int  261 262 263 264 265 266 267 268 269 270 ...
 $ geometry:sfc_MULTIPOLYGON of length 625; first list element: List of 1
  ..$ :List of 1
  .. ..$ : num [1:12, 1:2] 393189 393191 393188 393182 393177 ...
  ..- attr(*, "class")= chr [1:3] "XY" "MULTIPOLYGON" "sfg"
 - attr(*, "sf_column")= chr "geometry"
 - attr(*, "agr")= Factor w/ 3 levels "constant","aggregate",..: NA NA NA NA NA NA NA NA NA NA
  ..- attr(*, "names")= chr [1:10] "t" "t2" "t3" "t4" ...

Thanks in advance for any help. Cheers, WiTell
How to use an API in R to be able to get data for storing into a db?
I am trying to figure out how to get data in R for the purpose of making it into a table that I can store in a database like SQL.

API <- "https://covidtrackerapi.bsg.ox.ac.uk/api/v2/stringency/date-range/{2020-01-01}/{2020-06-30}"
oxford_covid <- GET(API)

I then try to parse this data and make it into a data frame, but when I do so I get the errors:

"Error: Columns 4, 5, 6, 7, 8, and 178 more must be named. Use .name_repair to specify repair."

and

"Error: Tibble columns must have compatible sizes. * Size 2: Columns deaths, casesConfirmed, and stringency. * Size 176: Columns ..2020.12.27, ..2020.12.28, ..2020.12.29, and"

I am not sure if there is a better approach or how to parse this. Is there a method or approach? I am not having much luck online.
It looks like you're trying to take the JSON return from that API and call read.table or something on it. Don't do that: JSON should be parsed by JSON tools (such as jsonlite::parse_json).

Some work on that URL:

js <- jsonlite::parse_json(url("https://covidtrackerapi.bsg.ox.ac.uk/api/v2/stringency/date-range/2020-01-01/2020-06-30"))
lengths(js)
#     scale countries      data 
#         3       183       182 
str(js, max.level = 2, list.len = 3)
# List of 3
#  $ scale    :List of 3
#   ..$ deaths        :List of 2
#   ..$ casesConfirmed:List of 2
#   ..$ stringency    :List of 2
#  $ countries:List of 183
#   ..$ : chr "ABW"
#   ..$ : chr "AFG"
#   ..$ : chr "AGO"
#   .. [list output truncated]
#  $ data     :List of 182
#   ..$ 2020-01-01:List of 183
#   ..$ 2020-01-02:List of 183
#   ..$ 2020-01-03:List of 183
#   .. [list output truncated]

So this is rather large. Since you're hoping for a data.frame, I'm going to look at js$data only; js$countries looks relatively uninteresting,

str(unlist(js$countries))
# chr [1:183] "ABW" "AFG" "AGO" "ALB" "AND" "ARE" "ARG" "AUS" "AUT" "AZE" "BDI" "BEL" "BEN" "BFA" "BGD" "BGR" "BHR" "BHS" "BIH" "BLR" "BLZ" "BMU" "BOL" "BRA" "BRB" "BRN" "BTN" "BWA" "CAF" "CAN" "CHE" "CHL" "CHN" "CIV" "CMR" "COD" "COG" "COL" "CPV" ...

and does not correlate with js$data. The js$scale might be interesting, but I'll skip it for now.

My first go-to for joining data like this into a data.frame is one of the following, depending on your preference for R dialects:

do.call(rbind.data.frame, list_of_frames)  # base R
dplyr::bind_rows(list_of_frames)           # tidyverse
data.table::rbindlist(list_of_frames)      # data.table

But we're going to run into problems. Namely, there are entries that are NULL, when R would prefer that they be something (such as NA).
str(js$data[[1]][1])
# List of 2
#  $ ABW:List of 8
#   ..$ date_value            : chr "2020-01-01"
#   ..$ country_code          : chr "ABW"
#   ..$ confirmed             : NULL      # <--- problem
#   ..$ deaths                : NULL
#   ..$ stringency_actual     : int 0
#   ..$ stringency            : int 0
#   ..$ stringency_legacy     : int 0
#   ..$ stringency_legacy_disp: int 0

So we need to iterate over each of those and replace NULL with NA. Unfortunately, I don't know of an easy tool to recursively go through lists of lists (even rapply doesn't work well in my tests), so we'll be a little brute-force here with a triple-lapply. Long story short,

str(js$data[[1]][[1]])
# List of 8
#  $ date_value            : chr "2020-01-01"
#  $ country_code          : chr "ABW"
#  $ confirmed             : NULL
#  $ deaths                : NULL
#  $ stringency_actual     : int 0
#  $ stringency            : int 0
#  $ stringency_legacy     : int 0
#  $ stringency_legacy_disp: int 0

jsdata <- lapply(js$data, function(z) {
  lapply(z, function(y) {
    lapply(y, function(x) if (is.null(x)) NA else x)
  })
})
str(jsdata[[1]][[1]])
# List of 8
#  $ date_value            : chr "2020-01-01"
#  $ country_code          : chr "ABW"
#  $ confirmed             : logi NA
#  $ deaths                : logi NA
#  $ stringency_actual     : int 0
#  $ stringency            : int 0
#  $ stringency_legacy     : int 0
#  $ stringency_legacy_disp: int 0

(Technically, if we know that it's going to be integers, we should use NA_integer_. Fortunately, R and its dialects are able to work with this shortcut, as we'll see in a second.)

After that, we can do a double-dive rbinding and get back to the frame-making I discussed a couple of steps ago.
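As an alternative to the triple-lapply, a small recursive helper handles any nesting depth; this is my own sketch (null_to_na is a made-up name), not part of the original answer:

```r
# Hedged sketch: walk nested lists and replace NULL leaves with NA.
null_to_na <- function(x) {
  if (is.list(x)) lapply(x, null_to_na)  # recurse into sublists
  else if (is.null(x)) NA                # NULL leaf -> NA
  else x                                 # keep everything else as-is
}

# Toy stand-in shaped like one entry of js$data:
rec <- list(ABW = list(date_value = "2020-01-01", confirmed = NULL,
                       stringency = 0L))
cleaned <- null_to_na(rec)
```

With the API data, jsdata <- null_to_na(js$data) should give the same result as the triple-lapply.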
Choose one of the following, whichever dialect you prefer:

alldat <- do.call(rbind.data.frame, lapply(jsdata, function(z) do.call(rbind.data.frame, z)))
alldat <- dplyr::bind_rows(purrr::map(jsdata, dplyr::bind_rows))
alldat <- data.table::rbindlist(lapply(jsdata, data.table::rbindlist))

For simplicity, I'll show the first (base R) version:

tail(alldat)
#                date_value country_code confirmed deaths stringency_actual stringency stringency_legacy stringency_legacy_disp
# 2020-06-30.AND 2020-06-30          AND       855     52             42.59      42.59             65.47                  65.47
# 2020-06-30.ARE 2020-06-30          ARE     48667    315             72.22      72.22             83.33                  83.33
# 2020-06-30.AGO 2020-06-30          AGO       284     13             75.93      75.93             83.33                  83.33
# 2020-06-30.ALB 2020-06-30          ALB      2535     62             68.52      68.52             78.57                  78.57
# 2020-06-30.ABW 2020-06-30          ABW       103      3             47.22      47.22             63.09                  63.09
# 2020-06-30.AFG 2020-06-30          AFG     31507    752             78.70      78.70             76.19                  76.19

And if you're curious about the $scale,

do.call(rbind.data.frame, js$scale)
#                min     max
# deaths           0  127893
# casesConfirmed   0 2633466
# stringency       0     100

## or
data.table::rbindlist(js$scale, idcol = "id")
#                id   min     max
#            <char> <int>   <int>
# 1:         deaths     0  127893
# 2: casesConfirmed     0 2633466
# 3:     stringency     0     100

## or
dplyr::bind_rows(js$scale, .id = "id")
Change variable types in data frame [duplicate]
I have a dataframe with all the columns being character, like this:

ID <- c("A","A","A","A","A","A","A","A","B","B","B","B","B","B","B","B")
ToolID <- c("CCP_A","CCP_A","CCQ_A","CCQ_A","IOT_B","CCP_B","CCQ_B","IOT_B",
            "CCP_A","CCP_A","CCQ_A","CCQ_A","IOT_B","CCP_B","CCQ_B","IOT_B")
Step <- c("Step_A","Step_A","Step_B","Step_C","Step_D","Step_D","Step_E","Step_F",
          "Step_A","Step_A","Step_B","Step_C","Step_D","Step_D","Step_E","Step_F")
Measurement <- c("Length","Breadth","Width","Height",NA,NA,NA,NA,
                 "Length","Breadth","Width","Height",NA,NA,NA,NA)
Passfail <- c("Pass","Pass","Fail","Fail","Pass","Pass","Pass","Pass",
              "Pass","Pass","Fail","Fail","Pass","Pass","Pass","Pass")
Points <- as.character(c(7,5,3,4,0,0,0,0,17,15,13,14,0,0,0,0))
Average <- as.character(c(7.5,6.5,7.1,6.6,NA,NA,NA,NA,17.5,16.5,17.1,16.6,NA,NA,NA,NA))
Sigma <- as.character(c(2.5,2.5,2.1,2.6,NA,NA,NA,NA,12.5,12.5,12.1,12.6,NA,NA,NA,NA))
Tool <- c("ABC_1","ABC_2","ABD_1","ABD_2","COB_1","COB_2","COB_1","COB_2",
          "ABC_1","ABC_2","ABD_1","ABD_2","COB_1","COB_2","COB_1","COB_2")
Dose <- as.character(c(NA,NA,NA,NA,17.1,NA,NA,17.3,NA,NA,NA,NA,117.1,NA,NA,117.3))
Machine <- c("CO2","CO6","CO3","CO6","CO2,CO6","CO2,CO3,CO4","CO2,CO3","CO2",
             "CO2","CO6","CO3","CO6","CO2,CO6","CO2,CO3,CO4","CO2,CO3","CO2")
df <- data.frame(ID,ToolID,Step,Measurement,Passfail,Points,Average,Sigma,Tool,Dose,Machine)

I am trying to check these character vectors for numeric values and then convert the ones that hold numeric values to numeric. I use the "varhandle" package in R to do it:

library(varhandle)
if (all(check.numeric(df$Machine, na.rm = TRUE))) {
  # convert the vector to numeric
  df$Machine <- as.numeric(df$Machine)
}

This works, but is inefficient because I have to enter each column name manually, as above. How can I do it more efficiently, with a loop or vectorization over multiple columns? My actual dataset has around 350 columns. Can someone point me in the right direction?
We can use the parse_guess function from the readr package, which basically tries to guess the type of each column:

library(readr)
library(dplyr)

df1 <- df %>% mutate_all(parse_guess)
str(df1)
#'data.frame':	16 obs. of  11 variables:
# $ ID         : chr  "A" "A" "A" "A" ...
# $ ToolID     : chr  "CCP_A" "CCP_A" "CCQ_A" "CCQ_A" ...
# $ Step       : chr  "Step_A" "Step_A" "Step_B" "Step_C" ...
# $ Measurement: chr  "Length" "Breadth" "Width" "Height" ...
# $ Passfail   : chr  "Pass" "Pass" "Fail" "Fail" ...
# $ Points     : int  7 5 3 4 0 0 0 0 17 15 ...
# $ Average    : num  7.5 6.5 7.1 6.6 NA NA NA NA 17.5 16.5 ...
# $ Sigma      : num  2.5 2.5 2.1 2.6 NA NA NA NA 12.5 12.5 ...
# $ Tool       : chr  "ABC_1" "ABC_2" "ABD_1" "ABD_2" ...
# $ Dose       : num  NA NA NA NA 17.1 NA NA 17.3 NA NA ...
# $ Machine    : chr  "CO2" "CO6" "CO3" "CO6" ...
We can do this in base R:

df[] <- lapply(df, function(x) type.convert(as.character(x), as.is = TRUE))
str(df)
#'data.frame':	16 obs. of  11 variables:
# $ ID         : chr  "A" "A" "A" "A" ...
# $ ToolID     : chr  "CCP_A" "CCP_A" "CCQ_A" "CCQ_A" ...
# $ Step       : chr  "Step_A" "Step_A" "Step_B" "Step_C" ...
# $ Measurement: chr  "Length" "Breadth" "Width" "Height" ...
# $ Passfail   : chr  "Pass" "Pass" "Fail" "Fail" ...
# $ Points     : int  7 5 3 4 0 0 0 0 17 15 ...
# $ Average    : num  7.5 6.5 7.1 6.6 NA NA NA NA 17.5 16.5 ...
# $ Sigma      : num  2.5 2.5 2.1 2.6 NA NA NA NA 12.5 12.5 ...
# $ Tool       : chr  "ABC_1" "ABC_2" "ABD_1" "ABD_2" ...
# $ Dose       : num  NA NA NA NA 17.1 NA NA 17.3 NA NA ...
# $ Machine    : chr  "CO2" "CO6" "CO3" "CO6" ...
With varhandle and the tidyverse:

df %>% mutate_if(purrr::compose(all, check.numeric), as.numeric)
I think that the easiest solution is to use all.is.numeric from Hmisc. Here's a simple example:

Hmisc::all.is.numeric(c("A", "B", "1"), what = "vector", extras = NA)
## [1] "A" "B" "1"
Hmisc::all.is.numeric(c("3", "2", "1", NA), what = "vector", extras = NA)
## [1]  3  2  1 NA

Then you can use mutate_all from dplyr to do all the work on the data.frame:

library(dplyr)
# df constructed exactly as in the question
dt2 <- df %>%
  mutate_all(function(x) Hmisc::all.is.numeric(x, what = "vector", extras = NA))

## check classes
sapply(dt2, class)
##          ID      ToolID        Step Measurement    Passfail      Points 
## "character" "character" "character" "character" "character"   "numeric" 
##     Average       Sigma        Tool        Dose     Machine 
##   "numeric"   "numeric" "character"   "numeric" "character"
Another solution is retype from the hablar package:

library(hablar)
df %>% retype()

which gives:

# A tibble: 16 x 11
   ID    ToolID Step   Measurement Passfail Points Average Sigma Tool   Dose Machine    
   <chr> <chr>  <chr>  <chr>       <chr>     <int>   <dbl> <dbl> <chr> <dbl> <chr>      
 1 A     CCP_A  Step_A Length      Pass          7    7.50  2.50 ABC_1  NA   CO2        
 2 A     CCP_A  Step_A Breadth     Pass          5    6.50  2.50 ABC_2  NA   CO6        
 3 A     CCQ_A  Step_B Width       Fail          3    7.10  2.10 ABD_1  NA   CO3        
 4 A     CCQ_A  Step_C Height      Fail          4    6.60  2.60 ABD_2  NA   CO6        
 5 A     IOT_B  Step_D NA          Pass          0   NA    NA    COB_1  17.1 CO2,CO6    
 6 A     CCP_B  Step_D NA          Pass          0   NA    NA    COB_2  NA   CO2,CO3,CO4
 7 A     CCQ_B  Step_E NA          Pass          0   NA    NA    COB_1  NA   CO2,CO3
dplyr Mutate Creating Matrix Instead of Vector
I am creating a new column that looks at conditions in my data frame and tells me whether an issue needs to be investigated or monitored. The code to add the column looks like this:

library(dplyr)
df %>%
  mutate("Status" = ifelse(apply(.[2:7], 1, sum) > 0 & .[8] > 0,
                           "Investigate",
                           "Monitor"))

If I run class(df$Status) on this newly generated column, the class is listed as 'matrix'. What? Why isn't it listed as 'character'?

If I look at the structure of my data frame there's some oddity that may be the key, but I don't understand why. Notice that the first columns listed simply look like integers, then the third column listed, which is the same data, has all this 'attr' phrasing. What is going on?

$ 2017-08 : int NA 1 NA 1 1 2 NA NA NA NA ...
$ 2017-09 : int NA NA 1 NA NA NA NA NA NA NA ...
$ 2017-10 : int NA NA NA NA NA NA 1 NA NA NA ...
- attr(*, "vars")= chr "Material"
- attr(*, "drop")= logi TRUE
- attr(*, "indices")=List of 34
 ..$ : int 0
 ..$ : int 1
 ..$ : int 2
 ..$ : int 3
 ..$ : int 4
 ...continued...
- attr(*, "group_sizes")= int 1 1 1 1 1 1 1 1 1 1 ...
- attr(*, "biggest_group_size")= int 1
- attr(*, "labels")='data.frame': 34 obs. of 1 variable:

I grouped variables earlier, and sometimes ungrouping magically helps. In addition, I often have to convert tibbles back to data frames to get other routines to work in my code. This may or may not be related.
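The behaviour can be reproduced without the asker's data: single-bracket indexing keeps a data.frame, comparing a data.frame yields a matrix, and ifelse copies the dimensions of its test argument onto the result. A minimal sketch with toy data:

```r
# Toy reproduction: .[8] > 0 is a matrix, so ifelse() returns a matrix;
# .[[8]] > 0 is a plain vector, so ifelse() returns a character vector.
d <- data.frame(a = 1:3, flag = c(-1, 0, 2))

cond_matrix <- d[2] > 0    # single brackets: data.frame comparison -> matrix
is.matrix(ifelse(cond_matrix, "Investigate", "Monitor"))
# [1] TRUE

cond_vector <- d[[2]] > 0  # double brackets: plain numeric vector
is.matrix(ifelse(cond_vector, "Investigate", "Monitor"))
# [1] FALSE
```

So inside the mutate, something like rowSums(.[2:7]) > 0 & .[[8]] > 0 (note the double brackets) should yield a plain character column instead of a matrix.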
Why does mutate change the variable type?
activity <- mutate(activity,
                   steps = ifelse(is.na(steps), lookup_mean(interval), steps))

The "steps" variable changes from an int to a list. I want it to stay an "int" so I can aggregate it (aggregate fails because it is a list type).

Before:

> str(activity)
'data.frame':	17568 obs. of  3 variables:
 $ steps   : int  NA NA NA NA NA NA NA NA NA NA ...
 $ date    : Factor w/ 61 levels "2012-10-01","2012-10-02",..: 1 1 1 1 1 1 1 1 1 1 ...
 $ interval: int  0 5 10 15 20 25 30 35 40 45 ...

After:

> str(activity)
'data.frame':	17568 obs. of  3 variables:
 $ steps   :List of 17568
  ..$ : num 1.72
  ..$ : num 1.72

lookup_mean is defined here:

lookup_mean <- function(i) {
  return filter(daily_activity_pattern, interval == 0) %>% select(steps)
}
The problem is that lookup_mean returns a list, so R casts each value in activity$steps to a list. lookup_mean should be:

lookup_mean <- function(i) {
  interval <- filter(daily_activity_pattern, interval == 0) %>% select(steps)
  return(interval$steps)
}
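Note also that lookup_mean ignores its argument i and always filters interval == 0, so every NA would get the same replacement. If the goal is to impute each NA with the mean of its own interval, a base-R sketch avoids the lookup function entirely (the toy data below stands in for the question's activity frame):

```r
# Hedged sketch: impute each NA in steps with the mean steps for the
# same interval, using a named lookup built by tapply.
activity <- data.frame(steps    = c(NA, 4, NA, 10),
                       interval = c(0, 0, 5, 5))

means <- tapply(activity$steps, activity$interval, mean, na.rm = TRUE)
activity$steps <- ifelse(is.na(activity$steps),
                         means[as.character(activity$interval)],
                         activity$steps)
activity$steps
# [1]  4  4 10 10
```

Because the test in ifelse is a plain logical vector here, the result stays an ordinary numeric vector rather than a list.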