Remove blank lines in txt output from R

I am trying to create a specifically formatted file to use as an input file in another software. I have been able, with the help of people here, to create a file that is almost there. Now I just need to remove some empty lines in my *.txt output file. I have tried several different approaches with gsub() but can't figure out a way. Below is an example that produces a file that shows where I'm stuck.
matsplitter <- function(M, r, c) {
  # assign each element of M to an r-by-c block
  rg  <- (row(M) - 1) %/% r + 1
  cg  <- (col(M) - 1) %/% c + 1
  rci <- (rg - 1) * max(cg) + cg
  # number of blocks, then collect the blocks into a 3-D array
  N  <- prod(dim(M)) / r / c
  cv <- unlist(lapply(1:N, function(x) M[rci == x]))
  dim(cv) <- c(r, c, N)
  cv
}
B <- matrix(1:1380, ncol = 5)
capture.output(matsplitter(B, 3, 5), file = 'output.txt')
write.table(gsub('\\[.*\\]', '', readLines('output.txt')),
            file = 'output.txt', row.names = FALSE, quote = FALSE)
What I still need to remove are the two blank lines between each ", , 1", ", , 2", etc. header and the matrix of numbers that follows it.
x
, , 1


1 277 553 829 1105
2 278 554 830 1106
3 279 555 831 1107

, , 2


4 280 556 832 1108
5 281 557 833 1109
6 282 558 834 1110

, , 3


7 283 559 835 1111
8 284 560 836 1112
9 285 561 837 1113

A possible solution if you are willing to go beyond gsub. I have taken the liberty of breaking the answer up into pieces for clarity (hopefully).
#read in the file created by capture.output()
out = gsub('\\[.*\\]', '', readLines('output.txt'))
If you look at this object out, you will see that the second of the two unwanted lines is a string of five spaces (the remnant of the column-header line), while the first is an empty string "". We turn the five-space strings into empty strings with:
out = gsub("\\s{5}","",out)
Now, after each header and in front of every block of numbers there are two empty strings, while after every block there is only one. Since we only want to exclude the empty strings in front of blocks, we use the function rle to find runs of repeated elements and exclude the empty strings that occur in runs of two.
#get indicator vector: tag each element with the length of the run it belongs to
runs = rle(out)
exclvec = rep(runs$lengths, runs$lengths)
#remove the empty strings that sit in runs of two, as indicated by exclvec
out = out[!(out == "" & exclvec == 2)]
As I interpret your question, writing this object provides the desired result.
write.table(out,file='output.txt', row.names=FALSE, quote=FALSE)
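For what it's worth, here is a more compact sketch of the same idea that works directly on the file produced by capture.output(). It assumes, as above, that the two unwanted lines after each header always sit next to each other, while the separator after each block is an isolated blank; treat it as a sketch rather than a drop-in replacement.
out   <- gsub('\\[.*\\]', '', readLines('output.txt'))
blank <- trimws(out) == ""               # TRUE for empty or whitespace-only lines
prev  <- c(FALSE, blank[-length(blank)]) # is the previous line blank?
nxt   <- c(blank[-1], FALSE)             # is the next line blank?
out   <- out[!(blank & (prev | nxt))]    # drop blanks that touch another blank
writeLines(out, 'output.txt')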

Related

How to create specific columns out of text in r

Here is an example I hope you can help me with. Given that the input is a line from a txt file, I want to transform it into a table (see the desired output below) and save it as a csv or tsv file.
I have tried with the separate function but could not get it right.
Input
"PR7 - Autres produits d'exploitation 6.9 371 667 1 389"
Desired output
Variable                              note  2020  2019  2018
PR7 - Autres produits d'exploitation  6.9   371   667   1389
I'm assuming that this badly delimited data-set is the only place where you can read your data.
For the purpose of this answer, I created an example file (that I called PR.txt) containing only the two following lines.
PR6 - Blabla 10 156 3920 245
PR7 - Autres produits d'exploitation 6.9 371 667 1389
First I create a function to parse each line of this data-set. I'm assuming here that the original file does not contain the names of the columns. In reality this is probably not the case, but the function could easily be adapted to take a first "header" line into account.
readBadlyDelimitedData <- function(x) {
  # Read the data
  dat <- read.table(text = x)
  # Get the type of each column
  whatIsIt <- sapply(dat, typeof)
  # Combine the columns that are of type "character"
  variable <- paste(dat[whatIsIt == "character"], collapse = " ")
  # Put everything in a data-frame
  res <- data.frame(
    variable = variable,
    dat[, whatIsIt != "character"])
  # Change the names to match the desired output
  names(res)[-1] <- c("note", "Year2020", "Year2019", "Year2018")
  return(res)
}
Note that I do not give the yearly-figure columns purely "numeric" names (such as 2020), because giving rows or columns purely "numerical" names is not good practice in R.
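As a quick illustration of why, using a hypothetical one-column example:
d <- data.frame(`2020` = 1:2, check.names = FALSE)
# d$2020    # syntax error: a bare numeric name cannot be parsed
d$`2020`    # works, but you need backticks every single time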
Once I have this function, I can (l)apply it to each line of the data by combining it with readLines, and collapse all the lines with an rbind.
out <- do.call("rbind", lapply(readLines("PR.txt"), readBadlyDelimitedData))
out
                              variable note Year2020 Year2019 Year2018
1                         PR6 - Blabla 10.0      156     3920      245
2 PR7 - Autres produits d'exploitation  6.9      371      667     1389
Finally, I save the result with write.csv:
write.csv(out, file = "correctlyDelimitedFile.csv")
If you can get your hands on the Excel file, a simple gdata::read.xls or openxlsx::read.xlsx would be enough to read the data.
I wish I knew how to make the script simpler... maybe a tidyr magic person would have a more elegant solution?
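Edited to add a tentative answer to my own wish: the following tidyr::extract sketch should do the same job in one call, assuming every line ends with exactly four numeric fields (PR.txt is the example file from above).
library(tidyr)

dat <- data.frame(raw = readLines("PR.txt"))
extract(dat, raw,
        into  = c("variable", "note", "Year2020", "Year2019", "Year2018"),
        regex = "^(.*?)\\s+([0-9.]+)\\s+([0-9]+)\\s+([0-9]+)\\s+([0-9]+)$",
        convert = TRUE)  # convert = TRUE turns the captured fields into numbers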

R: accented characters in data frame

I'm confused about why certain characters (e.g. "Ě", "Č", and "ŝ") lose their diacritical marks in a data frame, while others (e.g. "Š" and "š") do not. My OS is Windows 10, by the way. In my sample code below, a vector czechvec has 11 single-character strings, all Slavic accented characters. R displays those characters properly. Then a data frame mydf is created with czechvec as the second column (the function I() is used so it won't be converted to a factor). But when R displays mydf or any row of mydf, it converts most of these characters to their plain-ASCII equivalents; e.g. mydf[3,] shows the character as "E", not "Ě". Yet when subscripting with both row and column, e.g. mydf[3,2], it properly shows the accented character ("Ě"). Why should it make a difference whether R displays the whole row or just one cell? And why are some characters like "Š" completely unaffected? Also, when I write this data frame to a file, it completely loses the accents, even though I specify fileEncoding="UTF-8".
> charvals <- c(193, 269, 282, 268, 262, 263, 348, 349, 350, 352, 353)
> hexvals <- as.hexmode(charvals)
> czechvec <- unlist(strsplit(intToUtf8(charvals), ""))
> czechvec
[1] "Á" "č" "Ě" "Č" "Ć" "ć" "Ŝ" "ŝ" "Ş" "Š" "š"
>
> mydf = data.frame(dec=charvals, char=I(czechvec), hex=I(format(hexvals, width=4, upper.case=TRUE)))
> mydf
dec char hex
1 193 Á 00C1
2 269 c 010D
3 282 E 011A
4 268 C 010C
5 262 C 0106
6 263 c 0107
7 348 S 015C
8 349 s 015D
9 350 S 015E
10 352 Š 0160
11 353 š 0161
> mydf[3,2]
[1] "Ě"
> mydf[3,]
dec char hex
3 282 E 011A
>
> write.table(mydf, file="myfile.txt", fileEncoding="UTF-8")
>
> df2 <- read.table("myfile.txt", stringsAsFactors=FALSE, fileEncoding="UTF-8")
> df2[3,2]
[1] "E"
Edited to add: Per Ernest A's answer, this behaviour is not reproducible in Linux. It must be a Windows issue. (I'm using R 3.4.1 for Windows.)
I cannot reproduce this behaviour, using R version 3.3.3 (Linux).
> data.frame(dec=charvals, char=I(czechvec), hex=I(format(hexvals, width=4, upper.case=TRUE)))
dec char hex
1 193 Á 00C1
2 269 č 010D
3 282 Ě 011A
4 268 Č 010C
5 262 Ć 0106
6 263 ć 0107
7 348 Ŝ 015C
8 349 ŝ 015D
9 350 Ş 015E
10 352 Š 0160
11 353 š 0161
Thanks to Ernest A's answer checking that the weird behaviour I observed does not occur in Linux, I Googled R WINDOWS UTF-8 BUG which led me to this article by Ista Zahn: Escaping from character encoding hell in R on Windows
The article confirms there is a bug in the data.frame print method on Windows, and gives some workarounds. (However, the article doesn't note the issue with write.table in Windows, for data frames with foreign-language text.)
One workaround suggested by Zahn is to change the locale to suit the particular language we are working with:
Sys.setlocale(category = "LC_CTYPE", locale = "czech")
charvals <- c(193, 269, 282, 268, 262, 263, 348, 349, 350, 352, 353)
hexvals <- format(as.hexmode(charvals), width=4, upper.case=TRUE)
df1 <- data.frame(dec=charvals, char=I(unlist(strsplit(intToUtf8(charvals), ""))), hex=I(hexvals))
print.listof(df1)
dec :
[1] 193 269 282 268 262 263 348 349 350 352 353
char :
[1] "Á" "č" "Ě" "Č" "Ć" "ć" "Ŝ" "ŝ" "Ş" "Š" "š"
hex :
[1] "00C1" "010D" "011A" "010C" "0106" "0107" "015C" "015D" "015E" "0160"
[11] "0161"
df1
dec char hex
1 193 Á 00C1
2 269 č 010D
3 282 Ě 011A
4 268 Č 010C
5 262 Ć 0106
6 263 ć 0107
7 348 S 015C
8 349 s 015D
9 350 Ş 015E
10 352 Š 0160
11 353 š 0161
Notice that the Czech characters are now displayed correctly but not "Ŝ" and "ŝ", Unicode U+015C and U+015D, which apparently are used in Esperanto. But with the print.listof command, all the characters are displayed correctly. (By the way, dput(df1) lists the Esperanto characters incorrectly, as "S" and "s".)
write.table(df1, file="special characters example.txt", fileEncoding="UTF-8")
df2 <- read.table("special characters example.txt", stringsAsFactors=FALSE, fileEncoding="UTF-8")
print.listof(df2)
dec :
[1] 193 269 282 268 262 263 348 349 350 352 353
char :
[1] "Á" "č" "Ě" "Č" "Ć" "ć" "S" "s" "Ş" "Š" "š"
hex :
[1] "00C1" "010D" "011A" "010C" "0106" "0107" "015C" "015D" "015E" "0160"
[11] "0161"
When I write.table df1 and then read.table it back as df2, the "Ŝ" and "ŝ" characters have lost their circumflex. This must be a problem with the write.table command, as confirmed when I open the file with a different application such as OpenOffice Writer. The Czech characters are all there correctly, but the "Ŝ" and "ŝ" have been changed to "S" and "s".
For the time being, the best workaround for my purposes is to record each character's Unicode value in the data frame instead of the character itself, use write.table, and then use the UNICHAR function in OpenOffice Calc to restore the actual character in the file. But this is inconvenient.
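(A possible in-R version of that round trip, sketched under the assumption that storing integer code points is acceptable: plain integers survive any encoding, and intToUtf8 can rebuild the characters after re-import.)
codes   <- vapply(czechvec, utf8ToInt, integer(1))  # e.g. 193 for "Á"
rebuilt <- vapply(codes, intToUtf8, character(1))   # back to one-character strings
identical(unname(rebuilt), czechvec)                # TRUE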
I believe this same bug is relevant to this question: how to read data in utf-8 format in R?
Edited to add: Other similar questions I've now found on Stack Overflow:
Why do some Unicode characters display in matrices, but not data frames in R?
UTF-8 file output in R
Write UTF-8 files from R
And I found a workaround for the display issue by Peter Meissner here:
http://r.789695.n4.nabble.com/Unicode-display-problem-with-data-frames-under-Windows-tp4707639p4707667.html
It involves defining your own class unicode_df and print function print.unicode_df.
This still does not solve the issue I have with using write.table to write my data frame (which contains some columns with text in a variety of European languages) to a file that can be imported to a spreadsheet or any arbitrary application. But perhaps Meissner's solution can be adapted to work with write.table.
Here's a function write.unicode.csv that uses paste and writeLines (with useBytes=TRUE) to export a data frame containing foreign-language characters (encoded in UTF-8) to a csv file. All cells in the data frame will be enclosed in quote marks in the csv file.
#function that will create a CSV file for a data frame containing Unicode text
#this can be used instead of write.csv in R for Windows
#source: https://stackoverflow.com/questions/46137078/r-accented-characters-in-data-frame
#this is not elegant, and probably not robust
write.unicode.csv <- function(mydf, filename="") { #mydf can be a data frame or a matrix
  linestowrite <- character(length = 1 + nrow(mydf))
  #first line will have the column names
  linestowrite[1] <- paste('"","', paste(colnames(mydf), collapse='","'), '"', sep="")
  if(nrow(mydf) < 1 | ncol(mydf) < 1) print("This is not going to work.") #a bit of error checking
  for(k1 in 1:nrow(mydf)) {
    r <- paste('"', k1, '"', sep="") #each row will begin with the row number in quotes
    for(k2 in 1:ncol(mydf)) {
      r <- paste(r, paste('"', mydf[k1, k2], '"', sep=""), sep=",")
    }
    linestowrite[1 + k1] <- r
  }
  writeLines(linestowrite, con=filename, useBytes=TRUE)
} #end of function
Sys.setlocale(category = "LC_CTYPE", locale = "usa")
charvals <- c(193, 269, 282, 268, 262, 263, 348, 349, 350, 352, 353)
hexvals <- format(as.hexmode(charvals), width=4, upper.case=TRUE)
df1 <- data.frame(dec=charvals, char=I(unlist(strsplit(intToUtf8(charvals), ""))), hex=I(hexvals))
print.listof(df1)
write.csv(df1, file="test1.csv")
write.csv(df1, file="test2.csv", fileEncoding="UTF-8")
write.unicode.csv(df1, filename="test3.csv")
dftest1 <- read.csv(file="test1.csv", encoding="UTF-8", colClasses="character")
dftest2 <- read.csv(file="test2.csv", encoding="UTF-8", colClasses="character")
dftest3 <- read.csv(file="test3.csv", encoding="UTF-8", colClasses="character")
print("CSV file written using write.csv with no fileEncoding parameter:")
print.listof(dftest1)
print('CSV file written using write.csv with fileEncoding="UTF-8":')
print.listof(dftest2)
print("CSV file written using write.unicode.csv:")
print.listof(dftest3)

r import csv skip first and last lines

I know many posts have already answered questions similar to mine, but I've tried to figure this out for 2 days now and it seems as if I'm not seeing the whole picture here...
I got this csv file looking like this:
Werteformat: wertabh. (Q)
Werte:
01.01.76 00:00 0,363
02.01.76 00:00 0,464
...
31.12.10 00:00 1,03
01.01.11 00:00 Lücke
I want to create a timeline with the data, but I can't import the csv properly.
I've tried this so far:
data <- read.csv2(file,
                  header = FALSE,
                  sep = ";",
                  quote = "\"",
                  dec = ",",
                  col.names = c("Datum", "Abfluss"),
                  skip = 2,
                  nrows = length(strs) - 2,
                  colClasses = c("date", "numeric"))
But then I get
"Fehler in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, :
scan() erwartete 'a real', bekam 'L�cke'"
so I deleted colClasses and it works; I got rid of all the unwanted rows. But: everything is in factors. So I use as.numeric
Abfluss1<-as.numeric(data$Abfluss)
Now I can calculate with Abfluss1, but the values are totally different from those in the original csv...
Abfluss1
[1] 99 163 250 354 398 773 927 844 796 772 1010 1468 1091 955 962 933 881 844 803 772 773 803 1006 969 834 779 755
[28] 743 739
Where did I go wrong?! I really would appreciate some helpful hints.
By the way, the files I'm working on can be downloaded here:
http://ehyd.gv.at/#
Just click on one of these blue-ish triangles and download "Q-Tagesmittel"
First of all, there seems to be a problem with the file encoding. The downloaded file obviously has a Latin encoding which is not correctly recognized, which is why it shows L�cke and not Lücke:
encoding = "latin1"
Secondly, your example is not reproducible: the variable strs is not declared in your code, and from my understanding you want to skip 28 lines, not 2 (maybe I am wrong). From what I understood, you want to skip the first 28 lines and leave the last one out, so in total:
nrows = length( readLines( file ) ) - 29
Finally, you bumped into this common R issue: How to convert a factor to an integer\numeric without a loss of information?. The entire column is interpreted as a character vector because not all of its elements can be interpreted as numeric, and when a character vector is added to a data.frame it is by default cast to a factor column. Although it is not necessary if you specify the correct range of lines, you can avoid this with
stringsAsFactors = FALSE
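To see the trap in isolation, here is a small illustration with made-up values mimicking the file:
f <- factor(c("0,363", "0,464", "Lücke"))
as.numeric(f)                               # 1 2 3 -- the internal factor codes
as.numeric(sub(",", ".", as.character(f)))  # 0.363 0.464 NA (plus a coercion warning)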
So in total:
f <- readLines("Q-Tagesmittel-204586.csv")
df <- read.csv2(
  text = f,
  header = FALSE,
  sep = ";",
  quote = "\"",
  dec = ",",
  skip = 28,
  col.names = c("Datum", "Abfluss"),
  nrows = length(f) - 29,
  encoding = "latin1",
  stringsAsFactors = FALSE
)
Oh, and just in case you want to convert as next step the Datum column to a date object, one method to achieve this would be
df$Datum <- strptime(df$Datum, "%d.%m.%Y %H:%M:%S")
str(df)
'data.frame': 12784 obs. of 2 variables:
$ Datum : POSIXlt, format: "1976-01-01" "1976-01-02" "1976-01-03" "1976-01-04" ...
$ Abfluss: num 0.691 0.799 0.814 0.813 0.795 0.823 0.828 0.831 0.815 0.829 ...

R correct use of read.csv

I must be misunderstanding how read.csv works in R. I have read the help file, but still do not understand how a csv file containing:
40900,-,-,-,241.75,0
40905,244,245.79,241.25,244,22114
40906,244,246.79,243.6,245.5,18024
40907,246,248.5,246,247,60859
read into R using: euk<-data.matrix(read.csv("path\to\csv.csv"))
produces this as a result (using tail):
Date Open High Low Close Volume
[2713,] 15329 490 404 369 240.75 62763
[2714,] 15330 495 409 378 242.50 127534
[2715,] 15331 1 1 1 241.75 0
[2716,] 15336 504 425 385 244.00 22114
[2717,] 15337 504 432 396 245.50 18024
[2718,] 15338 512 442 405 247.00 60859
It must be something obvious that I do not understand. Please be kind in your responses, I am trying to learn.
Thanks!
The issue is not with read.csv, but with data.matrix. read.csv imports any column with characters in it as a factor. The '-' entries in the first row of your dataset are character, so those columns are converted to factors. When you then pass the result of read.csv into data.matrix, as the help page states, it replaces the levels of the factor with its internal codes.
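A tiny illustration of that replacement, with a made-up one-column example:
d <- data.frame(x = c("-", "244", "246"))
data.matrix(d)  # x becomes 1 2 3: the factor codes, not the numbers you see printed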
Basically, you need to ensure that the columns of your data are numeric before you pass the data.frame into data.matrix.
This should work in your case (assuming the only characters are '-'):
euk <- data.matrix(read.csv("path/to/csv.csv", na.strings = "-", colClasses = 'numeric'))
I'm no R expert, but you may consider using scan() instead, eg:
> data = scan("foo.csv", what = list(x = numeric(), y = numeric()), sep = ",")
Where foo.csv has two columns, x and y, and is comma delimited. I hope that helps.
I took a cut/paste of your data, put it in a file, and I get this using R:
> c<-data.matrix(read.csv("c:/DOCUME~1/Philip/LOCALS~1/Temp/x.csv",header=F))
> c
V1 V2 V3 V4 V5 V6
[1,] 40900 1 1 1 241.75 0
[2,] 40905 2 2 2 244.00 22114
[3,] 40906 2 3 3 245.50 18024
[4,] 40907 3 4 4 247.00 60859
>
There must be more in your data file; for one thing, data for the header line. And the output you show seems to start with row 2713. I would check:
The format of the header line, or get rid of it and add it manually later.
That each row has exactly 6 values.
That the filename uses forward slashes and has no embedded spaces
(use the 8.3 representation as shown in my filename).
Also, if you generated your csv file from MS Excel, the internal representation for a date is a number.
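On that last point, a hedged sketch: Windows Excel counts days from an effective origin of 1899-12-30, so a serial number like the 40900 in your first column can be decoded with:
as.Date(40900, origin = "1899-12-30")  # "2011-12-23"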

Read a CSV file in R, and select each element

Sorry if the title is confusing. I can import a CSV file into R, but when I select one element by providing the row and column index, I get more than one element back. All I want is to use this imported csv as a data.frame, in which I can select any column, row, or single cell. Can anyone give me some suggestions?
Here is the data:
SKU On Off Duration(hr) Sales
C010100100 2/13/2012 4/19/2012 17:00 1601 238
C010930200 5/3/2012 7/29/2012 0:00 2088 3
C011361100 2/13/2012 5/25/2012 22:29 2460 110
C012000204 8/13/2012 11/12/2012 11:00 2195 245
C012000205 8/13/2012 11/12/2012 0:00 2184 331
CODE:
Dat = read.table("Dat.csv",header=1,sep=',')
Dat[1,][1] #This is close to what I need but is not exactly the same
SKU
1 C010100100
Dat[1,1] # Ideally, I want to have results only with C010100100
[1] C010100100
3861 Levels: B013591100 B024481100 B028710300 B038110800 B038140800 B038170900 B038260200 B038300700 B040580700 B040590200 B040600400 B040970200 ... YB11624Q1100
Thanks!
You can convert to character to get the value as a string, and no longer as a factor:
as.character(Dat[1,1])
You have just one element, but the factor contains all levels.
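A small illustration of the levels tagging along:
f <- factor(c("a", "b", "c"))
f[1]                # [1] a
                    # Levels: a b c   -- one element, all levels still attached
as.character(f[1])  # [1] "a"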
Alternatively, pass the option stringsAsFactors=FALSE to read.table when you read the file, to prevent creation of factors for character values:
Dat = read.table("Dat.csv",header=1,sep=',', stringsAsFactors=FALSE )
