Any comments or answers would be much appreciated.
Context: I have a large data table of daily prices of swap rates across a dozen countries. The columns are [ID, Date, X1Y, X2Y, X3Y ... X30Y], where the X..Y columns indicate the point on the yield curve (e.g. X1Y is the 1-year swap, X3Y the 3-year swap). The two keys are ID (e.g. "AUD", "GBP") and Date (e.g. "2001-04-13", "2001-04-16").
Dummy Data:
set.seed(123)
dt <- cbind(ID=rep(c("AUD","GBP"),c(100,100)),X1Y=rnorm(200),X2Y=rnorm(200),X3Y=rnorm(200))
dt <- data.table(dt)
dt[,Date := seq(from=as.IDate("2013-01-01"), by="1 day", length.out=100)]
setkeyv(dt,c("ID","Date"))
Problem 1:
First, generate some dummy signals. What's the syntax if there are 100 columns, with a fairly complicated signal-generation formula coded in a separate function, say genSig(X1Y)? Here's what I mean, using just the 3 columns and a meaningless formula:
dt[, SIG1 := c(0, diff(X1Y, 1)), by = "ID"]
dt[, SIG2 := c(0, diff(X2Y, 1)), by = "ID"]
dt[, SIG3 := c(0, diff(X3Y, 1)), by = "ID"]
Problem 2:
Carry column(s) forward based on the "middle of the month". For example, using the SIG columns, I'd like to make everything after, say, the 15th of each month equal to the signal on the 15th, until the next month's 15th. The tricky part is that the actual data contain only trading days, so some months have no 15th when it falls on a weekend/holiday. The other issue is finding an efficient syntax. I can achieve something similar with a loop (I know...) for the start of each month, just to show what I mean:
for (i in 2:length(dt$Date)) {
  if (as.POSIXlt(dt[i, ]$Date)$mon == as.POSIXlt(dt[i - 1, ]$Date)$mon) {
    dt[i, SIG1 := dt[i - 1, SIG1]]
    dt[i, SIG2 := dt[i - 1, SIG2]]
    dt[i, SIG3 := dt[i - 1, SIG3]]
  }
}
I can't figure out how to deal with the "mid-month" issue, since the first trading day on or after mid-month can fall on the 15th, 16th, or 17th. As in Problem 1, I would appreciate a smart way to insert/update a dozen or more columns at once.
As far as problem 2 goes, you can use rolling joins:
# small sample to demonstrate
dt = data.table(date = as.Date(c('2013-01-01', '2013-01-15', '2013-01-17',
                                 '2013-02-14', '2013-02-17'), '%Y-%m-%d'),
                val = 1:5)
dt
# date val
#1: 2013-01-01 1
#2: 2013-01-15 2
#3: 2013-01-17 3
#4: 2013-02-14 4
#5: 2013-02-17 5
setkey(dt, date)
midmonth = seq(as.Date('2013-01-15', '%Y-%m-%d'),
               as.Date('2013-12-15', '%Y-%m-%d'),
               by = '1 month')
dt[, flag := 0]
dt[J(midmonth), flag := 1, roll = -Inf]
dt
# date val flag
#1: 2013-01-01 1 0
#2: 2013-01-15 2 1
#3: 2013-01-17 3 0
#4: 2013-02-14 4 0
#5: 2013-02-17 5 1
And now you can cumsum the flag to obtain the grouping you want, e.g. to do:
dt[, val1 := val[1], by = cumsum(flag)]
dt
# date val flag val1
#1: 2013-01-01 1 0 1
#2: 2013-01-15 2 1 2
#3: 2013-01-17 3 0 2
#4: 2013-02-14 4 0 2
#5: 2013-02-17 5 1 5
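To carry this over to the original multi-column problem, the same cumsum(flag) grouping can update all SIG columns in one statement. A sketch, assuming a flag column has been added to the original dt (keyed by ID and Date) with a rolling join as above:

sigcols <- paste0("SIG", 1:3)
# carry each SIG column forward from the flagged mid-month row, per ID and group
dt[, (sigcols) := lapply(.SD, function(x) x[1]),
   by = .(ID, grp = cumsum(flag)), .SDcols = sigcols]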
# problem 1
nsig <- 3L
csig <- 1:nsig + 1L                       # column positions of X1Y, X2Y, X3Y (ID is column 1)
newcols <- paste('SIG', 1:nsig, sep = '')
dt[, (newcols) := 0]                      # allocate the new columns
# j + nsig + 1L skips over the Date column to land on the matching SIG column
for (j in csig) set(dt, j = j + nsig + 1L, value = c(0, diff(dt[[j]], 1)))
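Note that set() as used here runs over whole columns and so drops the by="ID" grouping from the question. A grouped sketch that also scales to dozens of columns, where genSig stands in for the real (more complicated) signal function and dt is assumed numeric as in the aside below:

genSig <- function(x) c(0, diff(x, 1))   # hypothetical stand-in for the real function

oldcols <- paste0('X', 1:nsig, 'Y')      # "X1Y" "X2Y" "X3Y"
dt[, (newcols) := lapply(.SD, genSig), by = ID, .SDcols = oldcols]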
After looking at @eddi's answer, I see that set is not so useful for problem 2. Here's what I would do:
dt[, (newcols) := lapply(newcols, function(x) get(x)[1]), by = list(ID, month(Date - 14))]
According to this answer, you can subtract days from a date in this way.
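A quick demonstration of why grouping by month(Date - 14) puts each date with the preceding 15th:

month(as.IDate("2013-03-14") - 14)  # 2: the 14th still belongs to February's group
month(as.IDate("2013-03-15") - 14)  # 3: the 15th opens March's group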
Aside: cbind-ing vectors makes a matrix, and in your example you've got a character matrix. I think you were looking for...
# Creating better data...
set.seed(123)
dt <- data.table(ID = rep(c("AUD", "GBP"), c(100, 100)),
                 X1Y = rnorm(200), X2Y = rnorm(200), X3Y = rnorm(200),
                 Date = seq(from = as.IDate("2013-01-01"), by = "1 day", length.out = 100))
Does anyone have a solution to perform separate operations on groups of consecutive values that are a subset of a time series and are identified by a recurring, identical flag, with R?
In the example data set created by the code below, this would mean, for example, calculating the mean of "value" separately for each group where "flag" == 1 on consecutive days.
A typical case in science would be a data set recorded by an instrument that repeatedly executes a calibration procedure and flags the corresponding data with the same flag, but the user needs to evaluate each calibration separately with the same procedure.
Thanks for your suggestions. Jens
library(lubridate)
df <- data.frame(
  date = seq(ymd("2018-01-01"), ymd("2018-06-29"), by = "days"),
  flag = rep(c(rep(1, 10), rep(0, 20)), 6),
  value = seq(1, 180, 1)
)
The data.table function rleid is great for giving group IDs to runs of consecutive values. I continue to use data.table, but you could do everything except the rleid part just as well in dplyr or base R.
My answer comes down to using data.table::rleid and then picking your favorite way to take the mean by group (R-FAQ link).
library(data.table)
setDT(df)
df[, r_id := rleid(flag)]
df[flag == 1, list(
min_date = min(date),
max_date = max(date),
mean_value = mean(value)
), by = r_id]
# r_id min_date max_date mean_value
# 1: 1 2018-01-01 2018-01-10 5.5
# 2: 3 2018-01-31 2018-02-09 35.5
# 3: 5 2018-03-02 2018-03-11 65.5
# 4: 7 2018-04-01 2018-04-10 95.5
# 5: 9 2018-05-01 2018-05-10 125.5
# 6: 11 2018-05-31 2018-06-09 155.5
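And since everything except rleid works just as well in dplyr, an equivalent sketch:

library(dplyr)
df %>%
  mutate(r_id = data.table::rleid(flag)) %>%
  filter(flag == 1) %>%
  group_by(r_id) %>%
  summarize(min_date = min(date),
            max_date = max(date),
            mean_value = mean(value))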
I have two data frames, one of hospital stays, and the other of lab results. I need to identify which hospital stay a lab result takes place in, and copy the admission and discharge dates from the hospital data frame into the row for the relevant lab result.
I am doing this with a for loop to walk through the lab results, and then if statements and subsets that look for matching entries (by patient SSN and surrounding dates) in the hospital records.
This is quite a large data set and using the for loop is very slow. Is there a way to speed this kind of problem up? (I have several similar problems, so would love an answer.)
Sample data added below. Note that there are multiple hospital records for each patient; the goal is to get the dates from the record whose dates overlap the lab date. In this example, the resulting data frame should have admission and discharge dates only for patient 1, since patient 2 has no hospital data and patient 3's records do not overlap the lab date.
testDate <- as.Date(c("2017-01-15", "2017-01-15", "2017-01-15"))
patientSSN <- c("1","2","3")
labs <- data.frame(patientSSN, testDate)
# patientSSN testDate
# 1 1 2017-01-15
# 2 2 2017-01-15
# 3 3 2017-01-15
patientSSN <- c("1","1","3","3")
admissionDate <- as.Date(c("2017-01-07", "2017-02-01", "2016-12-01", "2017-01-16"))
dischargeDate <- as.Date(c("2017-01-16", "2017-02-10", "2016-12-15", "2017-02-01"))
hospitalRec <- data.frame(patientSSN, admissionDate, dischargeDate)
for (i in 1:nrow(labs)) {
  labs[i, ]$admissionDate <- hospitalRec[hospitalRec$patientSSN == labs[i, ]$patientSSN &
                                         hospitalRec$admissionDate <= labs[i, ]$testDate &
                                         hospitalRec$dischargeDate >= labs[i, ]$testDate, ]$admissionDate
  labs[i, ]$dischargeDate <- hospitalRec[hospitalRec$patientSSN == labs[i, ]$patientSSN &
                                         hospitalRec$admissionDate <= labs[i, ]$testDate &
                                         hospitalRec$dischargeDate >= labs[i, ]$testDate, ]$dischargeDate
}
The desired data frame would look like:
labs:
patientSSN testDate admissionDate dischargeDate
1 2017-01-15 2017-01-07 2017-01-16
2 2017-01-15 NA NA
3 2017-01-15 NA NA
Note that in the real data there is also the problem of multiple hospital records qualifying (discharges between departments); these records would have the same admission date but different discharge times, with the latest one being the important one. But first things first...
A non-equi join works, for example with data.table:
library(data.table)
setDT(labs); setDT(hospitalRec)
labs[hospitalRec, on=.(patientSSN, testDate >= admissionDate, testDate <= dischargeDate),
`:=`(aDate = i.admissionDate, dDate = i.dischargeDate)]
patientSSN testDate aDate dDate
1: 1 2017-01-15 2017-01-07 2017-01-16
2: 2 2017-01-15 <NA> <NA>
3: 3 2017-01-15 <NA> <NA>
in the real data, there is also the problem of multiple hospital records qualifying (discharges between departments) these records would have the same admission date, but different discharge times with the latest one being important.
If hospitalRec is sorted, adding mult="last" to the join above should work; see ?data.table for full documentation. Alternatively, you could create a version of the hospital records that excludes these "duplicates": sort and then
lastRec = unique(hospitalRec, by = c("patientSSN", "admissionDate"), fromLast = TRUE)
The setorder function is the standard tool for sorting data.tables.
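Concretely, a sketch of the sort that makes fromLast = TRUE keep the right row:

# latest discharge per (patient, admission) ends up last,
# so unique(..., fromLast = TRUE) keeps it
setorder(hospitalRec, patientSSN, admissionDate, dischargeDate)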
Assuming this is similar to what your df looks like, use dplyr::left_join:
library(dplyr)

hospital_data <- data.frame(PatientSSN = c('1234567890', '9876543210'),
                            admit = c('8/1/17', '8/5/17'),
                            discharge = c('8/10/17', '8/15/17'))
lab_data <- data.frame(specimen_id = c('foo1', 'foo2', 'foo3', 'foo4', 'foo5', 'foo6', 'foo7'),
                       PatientSSN = c('1234567890', '1234567890', '1234567890',
                                      '9876543210', '9876543210', '9876543210', '8527419600'),
                       test = c('hemoglobin', 'inr', 'platelette', 'hemoglobin',
                                'inr', 'platelette', 'inr'))
lab_data %>% left_join(hospital_data)
specimen_id PatientSSN test admit discharge
1 foo1 1234567890 hemoglobin 8/1/17 8/10/17
2 foo2 1234567890 inr 8/1/17 8/10/17
3 foo3 1234567890 platelette 8/1/17 8/10/17
4 foo4 9876543210 hemoglobin 8/5/17 8/15/17
5 foo5 9876543210 inr 8/5/17 8/15/17
6 foo6 9876543210 platelette 8/5/17 8/15/17
7 foo7 8527419600 inr <NA> <NA>
Note that your ID variable (PatientSSN) is named the same in each table, so left_join matches on it automatically.
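If you prefer to make the key explicit rather than rely on the automatic match:

lab_data %>% left_join(hospital_data, by = "PatientSSN")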
OK, here's a method. Just a quick heads-up, though: it's pretty unlikely that you'll ever work with EMR data that doesn't have an ID variable specific to a visit/account, and I would use that as the unique identifier before I used SSN. Nevertheless, this should work. I used the data you provided above.
for (i in 1:nrow(labs)) {
  # find the hospital records matching this lab's patient (SSN)
  ssn_match_df <- hospitalRec[which(as.character(labs$patientSSN[i]) ==
                                    as.character(hospitalRec$patientSSN)), ]
  # keep the records where the test date falls between admission and discharge
  ssn_match_df <- ssn_match_df[which(labs$testDate[i] >= ssn_match_df$admissionDate &
                                     labs$testDate[i] <= ssn_match_df$dischargeDate), ]
  if (nrow(ssn_match_df) > 0) {
    labs[i, 3] <- as.character(ssn_match_df[1, 2])  # admission date
    labs[i, 4] <- as.character(ssn_match_df[1, 3])  # discharge date
  } else {
    labs[i, 3] <- NA
    labs[i, 4] <- NA
  }
}
colnames(labs)[3] <- 'admitDate'
colnames(labs)[4] <- 'dischargeDate'
Consider this dataset:
mydf <- data.frame(churn_indicator = c(0, 0, 1, 0, 1),
                   resign_date = c(NA, NA, "2011-01-01", NA, "2012-02-01"),
                   join_date = c("2001-01-01", "2001-03-01", "2002-04-02",
                                 "2003-09-01", "2005-05-10"))
The task is to calculate a vector 'length', which is resign_date - join_date where churn_indicator == 1, and Sys.Date() - join_date where churn_indicator == 0.
I have already figured out how to do this using a for loop, but I want something more efficient (the apply family, maybe). Also, is it possible to do this using dplyr's mutate function?
A possible solution:
# convert column from factor/characters to Date (if not already done)
mydf$resign_date <- as.Date(mydf$resign_date)
mydf$join_date <- as.Date(mydf$join_date)
# compute the date differences
days_churn1 <- as.numeric(difftime(mydf$resign_date, mydf$join_date, units = 'days'))
days_churn0 <- as.numeric(difftime(Sys.Date(), mydf$join_date, units = 'days'))
# zero out the values where the churn indicator is not the one we want
days_churn1[mydf$churn_indicator == 0] <- 0
days_churn0[mydf$churn_indicator == 1] <- 0
# sum the two vectors
mydf$length <- days_churn1 + days_churn0
> mydf
churn_indicator resign_date join_date length
1 0 <NA> 2001-01-01 5997
2 0 <NA> 2001-03-01 5938
3 1 2011-01-01 2002-04-02 3196
4 0 <NA> 2003-09-01 5024
5 1 2012-02-01 2005-05-10 2458
Alternatively, you can combine some operations using ifelse:
# convert columns from factor/character to Date (if not already done)
mydf$resign_date <- as.Date(mydf$resign_date)
mydf$join_date <- as.Date(mydf$join_date)
mydf$length <- as.numeric(
  ifelse(mydf$churn_indicator == 1,
         difftime(mydf$resign_date, mydf$join_date, units = 'days'),
         difftime(Sys.Date(), mydf$join_date, units = 'days')
  ))
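And since the question asks about dplyr's mutate: a possible sketch, assuming (as in the sample data) that resign_date is NA exactly when churn_indicator == 0:

library(dplyr)
# coalesce() substitutes today's date wherever resign_date is NA
mydf <- mydf %>%
  mutate(length = as.numeric(difftime(coalesce(resign_date, Sys.Date()),
                                      join_date, units = 'days')))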
I have a large number of files (~1200), each of which contains a long time series of data about the height of the groundwater. The starting date and the length of the series differ from file to file, and there can be large gaps between dates. For example (a small part of such a file):
Date Height (cm)
14-1-1980 7659
28-1-1980 7632
14-2-1980 7661
14-3-1980 7638
28-3-1980 7642
14-4-1980 7652
25-4-1980 7646
14-5-1980 7635
29-5-1980 7622
13-6-1980 7606
27-6-1980 7598
14-7-1980 7654
28-7-1980 7654
14-8-1980 7627
28-8-1980 7600
12-9-1980 7617
14-10-1980 7596
28-10-1980 7601
14-11-1980 7592
28-11-1980 7614
11-12-1980 7650
29-12-1980 7670
14-1-1981 7698
28-1-1981 7700
13-2-1981 7694
17-3-1981 7740
30-3-1981 7683
14-4-1981 7692
14-5-1981 7682
15-6-1981 7696
17-7-1981 7706
28-7-1981 7699
28-8-1981 7686
30-9-1981 7678
17-11-1981 7723
11-12-1981 7803
18-2-1982 7757
16-3-1982 7773
13-5-1982 7753
11-6-1982 7740
14-7-1982 7731
15-8-1982 7739
14-9-1982 7722
14-10-1982 7794
15-11-1982 7764
14-12-1982 7790
14-1-1983 7810
28-3-1983 7836
28-4-1983 7815
31-5-1983 7857
29-6-1983 7801
28-7-1983 7774
24-8-1983 7758
28-9-1983 7748
26-10-1983 7727
29-11-1983 7782
27-1-1984 7801
28-3-1984 7764
27-4-1984 7752
28-5-1984 7795
27-7-1984 7748
27-8-1984 7729
28-9-1984 7752
26-10-1984 7789
28-11-1984 7797
18-12-1984 7781
28-1-1985 7833
21-2-1985 7778
22-4-1985 7794
28-5-1985 7768
28-6-1985 7836
26-8-1985 7765
19-9-1985 7760
31-10-1985 7756
26-11-1985 7760
20-12-1985 7781
17-1-1986 7813
28-1-1986 7852
26-2-1986 7797
25-3-1986 7838
22-4-1986 7807
27-5-1986 7785
24-6-1986 7787
26-8-1986 7744
23-9-1986 7742
22-10-1986 7752
1-12-1986 7749
17-12-1986 7758
I want to calculate the average height over 5-year spans: in the example, 14-1-1980 + 5 years, then 14-1-1985 + 5 years, and so on. The number of data points is different for each average, and the date exactly 5 years later will very likely not appear in the dataset as a data point. Hence, I think I need to tell R to average over a certain timespan rather than look for exact dates.
I searched the internet but didn't find anything that fit my needs. A lot of useful packages (uts, zoo, lubridate) and the aggregate function came up, but instead of getting closer to a solution I got more and more confused about which approach is best for my problem.
Thanks a lot in advance!
As @vagabond points out, you'll want to combine your 1200 files into a single data frame (the plyr package would allow you to do something simple like data.all <- adply(dir([DATA FOLDER]), 1, read.csv); see the sketch below).
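For instance, a minimal sketch of the stacking step using ldply (a close cousin of the adply call above), assuming CSV files with a common layout; the folder name here is hypothetical:

library(plyr)
files <- dir("groundwater_files", full.names = TRUE)
data.all <- ldply(files, read.csv)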
Once you have the data, the first step would be to transform the Date column into proper date data. Right now the dates appear to be strings, and we want them to have an underlying numerical representation (which the Date class provides):
library(lubridate)
df$date.new <- as.Date(dmy(df$Date))
Date Height date.new
1 14-1-1980 7659 1980-01-14
2 28-1-1980 7632 1980-01-28
3 14-2-1980 7661 1980-02-14
4 14-3-1980 7638 1980-03-14
5 28-3-1980 7642 1980-03-28
6 14-4-1980 7652 1980-04-14
Note that the date.new column looks like a string, but is in fact Date data, and can be handled with numerical operations (addition, comparison, etc.).
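For example:

df$date.new[2] - df$date.new[1]  # Time difference of 14 days
df$date.new[1] + 30              # "1980-02-13": adding days just works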
Next, we might construct a set of date periods over which we want to compute averages. Your example mentions 5 years, but with the data you provided that's not a very illustrative example, so here I'm creating 1-year periods starting at every day between Jan 14 1980 and Jan 14 1985:
date.start <- as.Date(as.Date('1980-01-14') : as.Date('1985-01-14'), origin = '1970-01-01')
date.end <- date.start + years(1)
dates <- data.frame(start = date.start, end = date.end)
start end
1 1980-01-14 1981-01-14
2 1980-01-15 1981-01-15
3 1980-01-16 1981-01-16
4 1980-01-17 1981-01-17
5 1980-01-18 1981-01-18
6 1980-01-19 1981-01-19
Then we can use the dplyr package to move through each row of this data frame and compute a summary average of Height:
library(dplyr)
df.mean <- dates %>%
group_by(start, end) %>%
summarize(height.mean = mean(df$Height[df$date.new >= start & df$date.new < end]))
start end height.mean
<date> <date> <dbl>
1 1980-01-14 1981-01-14 7630.273
2 1980-01-15 1981-01-15 7632.045
3 1980-01-16 1981-01-16 7632.045
4 1980-01-17 1981-01-17 7632.045
5 1980-01-18 1981-01-18 7632.045
6 1980-01-19 1981-01-19 7632.045
The foverlaps function is IMHO the perfect candidate for such a situation:
library(data.table)
library(lubridate)
# convert to a data.table with setDT()
# convert the 'Date'-column to date-format
# create a begin & end date for the required period
setDT(dat)[, Date := as.Date(Date, '%d-%m-%Y')
][, `:=` (begindate = Date, enddate = Date + years(1))]
# set the keys (necessary for the foverlaps function)
setkey(dat, begindate, enddate)
res <- foverlaps(dat, dat, by.x = c(1,3))[, .(moving.average = mean(i.Height)), Date]
The result:
> head(res,15)
Date moving.average
1: 1980-01-14 7633.217
2: 1980-01-28 7635.000
3: 1980-02-14 7637.696
4: 1980-03-14 7636.636
5: 1980-03-28 7641.273
6: 1980-04-14 7645.261
7: 1980-04-25 7644.955
8: 1980-05-14 7646.591
9: 1980-05-29 7647.143
10: 1980-06-13 7648.400
11: 1980-06-27 7652.900
12: 1980-07-14 7655.789
13: 1980-07-28 7660.550
14: 1980-08-14 7660.895
15: 1980-08-28 7664.000
Now, for each date, you have an average of all the values that lie between that date and one year ahead of it.
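For the 5-year averages the question actually asks about, only the window width changes; a sketch, reusing dat as prepared above:

dat[, enddate := Date + years(5)]
setkey(dat, begindate, enddate)
res5 <- foverlaps(dat, dat, by.x = c(1, 3))[, .(moving.average = mean(i.Height)), Date]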
Hey, I just tried this after seeing your question! I ran it on a sample data frame; try it on yours after understanding the code, and let me know.
By the way, instead of an interval of 5 years, I used just 2 months (2*30 days = approx. 2 months) as the interval!
df = data.frame(Date = c("14-1-1980", "28-1-1980", "14-2-1980", "14-3-1980", "28-3-1980",
                         "14-4-1980", "25-4-1980", "14-5-1980", "29-5-1980", "13-6-1980",
                         "27-6-1980", "14-7-1980", "28-7-1980", "14-8-1980"),
                height = 1:14)
df1 = data.frame(orig = NULL, dest = NULL, avg_ht = NULL)
orig = as.Date(df$Date, "%d-%m-%Y")[1]
dest = as.Date(df$Date, "%d-%m-%Y")[1] + 2*30   # approx 2 months
dest_final = as.Date(df$Date, "%d-%m-%Y")[14]
while (dest < dest_final) {
  m = mean(df$height[which(as.Date(df$Date, "%d-%m-%Y") >= orig &
                           as.Date(df$Date, "%d-%m-%Y") < dest)])
  df1 = rbind(df1, data.frame(orig = orig, dest = dest, avg_ht = m))
  orig = dest
  dest = dest + 2*30
  print(paste("orig:", orig, " + ", "dest:", dest))
}
> df1
orig dest avg_ht
1 1980-01-14 1980-03-14 2.0
2 1980-03-14 1980-05-13 5.5
3 1980-05-13 1980-07-12 9.5
I hope this works for you as well!
This is my best try, but please keep in mind that I am working with years instead of full dates, i.e. based on the example you provided I am averaging from the beginning of 1980 to the end of 1984.
dat <- read.csv("paixnidi.csv")
install.packages("stringr")
library(stringr)
dates <- dat[, 1]
# extract the year of each measurement
years <- as.integer(str_sub(dat[, 1], start = -4))
spread_y <- years[length(years)] - years[1]
ind <- list()
# find how many 5-year intervals there are (spread_y + 1 years of data in total)
groups <- ceiling((spread_y + 1)/5)
meangroups <- matrix(0, ncol = 2, nrow = groups)
k <- 0
for (i in 1:groups) {
  # extract the indices of the dates vector within the 5-year period
  ind[[i]] <- which(years >= (years[1] + k) & years <= (years[1] + k + 4), arr.ind = TRUE)
  meangroups[i, 2] <- mean(dat[ind[[i]], 2])
  meangroups[i, 1] <- (years[1] + k)
  k <- k + 5
}
colnames(meangroups) <- c("Year:Year+4", "Mean Height (cm)")
I have a bit of a unique question. I've tried a few different things which I'll detail after the problem itself.
The problem:
For each user ID, I need to iterate through event dates and check if each date is within 30 days of the next date. I have 260,000 records, and a not-insignificant number of IDs only have a single entry. The data look like:
id | date1 | date2
1 | 2016-01-01 | 2016-02-12
and so on
I have tried:
foreach (split out each ID's set of events, calculate, recombine; ran into memory issues).
data.table, but I don't know enough to know if I exhausted this option.
briefly dplyr, namely:
mutate(time_btwn = abs(as.numeric(difftime(data$date1, lag(data$date2, 1), units = "days"))))
and I'm currently running a straight for loop that iterates through all rows. It is extremely slow and I wish I didn't have to do it. The code:
for (i in 2:nrow(data)) {
  if (data$id[i] != data$id[i - 1]) {
    next
  } else {
    data$timebtwn[i] <- abs(as.numeric(difftime(data$date1[i], data$date2[i - 1],
                                                units = "days")))
  }
}
I've looked into apply and lapply, but can't quite work out the function to plug into apply or lapply that will do what I need (i.e. for each entry in column1, check one row back in column2 and return the difference between the dates IF both rows have the same id).
Is there a faster way than a straight for loop (or a way using foreach) that is fast and not memory intensive?
Since I do not have a sample dataset to work with, I had to make one up, and thus it is difficult to know what exactly you are after, but:
library(data.table)
library(lubridate)
# generate random date samples
# generate N random dates between st and et
latemail <- function(N, st = "2012/01/01", et = "2015/12/31") {
  st <- as.POSIXct(as.Date(st))
  et <- as.POSIXct(as.Date(et))
  dt <- as.numeric(difftime(et, st, units = "sec"))
  ev <- sort(runif(N, 0, dt))
  as_date(st + ev)
}
set.seed(42)
mydat <- data.table(id = as.character(sample.int(1000, 10000, replace = TRUE)),
                    date1 = as_date(latemail(10000)),
                    date2 = as_date(latemail(10000)))
setkey(mydat, id)
mydat[, .(timebtw = abs(as.numeric(difftime(date1, date2), "days")),
          date1 = date1,
          date2 = date2), by = id]
# id timebtw date1 date2
#1: 1 4 2012-01-15 2012-01-11
#2: 1 2 2012-03-21 2012-03-19
#3: 1 9 2012-10-01 2012-10-10
#4: 1 1 2013-08-08 2013-08-09
#5: 1 9 2014-02-11 2014-02-02
#---
#9996: 999 7 2014-10-28 2014-11-04
#9997: 999 9 2015-03-28 2015-04-06
#9998: 999 0 2015-07-22 2015-07-22
#9999: 999 10 2015-09-06 2015-09-16
#10000: 999 8 2015-10-03 2015-10-11
I got the date generating function from this nice post. Let me know if this is what you are trying to do. This example has 10,000 rows and 999 unique ids. To illustrate the speed:
system.time(
  mydat[, .(timebtw = abs(as.numeric(difftime(date1, date2), "days")),
            date1 = date1,
            date2 = date2), by = id])
#  user  system elapsed
#  0.26    0.00    0.26
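If the goal is specifically the lagged comparison from the question (row i's date1 against row i-1's date2, within the same id), shift() keeps it vectorized; a sketch on the same mydat:

# first row of each id gets NA via shift's default fill
mydat[, timebtwn := abs(as.numeric(date1 - shift(date2))), by = id]
mydat[, within30 := !is.na(timebtwn) & timebtwn <= 30]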