I have a .txt file and am using RStudio.
200416657210340 1665721 20040608 20090930 20060910 20070910 20080827 20090804
200416657210345 1665721 20040907 20090203 20070331 20080719
200416657210347 1665721 20040914 20091026 20070213 20080114 20090302
200416657210352 1665721 20041111 20100315 20070123 20071205 20081202
I am trying to read in the .txt file using read.fwf:
gripalisti <- read.fwf(file = "gripalisti.txt",
widths = c(15,8,9,9,9,9,9,9),
header = FALSE,
#stringsAsFactors = FALSE,
col.names = c("einst","bu","faeding","forgun","burdur1",
"burdur2","burdur3","burdur4"))
This works, and the columns are the correct length.
However, "einst" and "bu" are supposed to be integer values, and the rest are supposed to be dates.
When imported all the values in the first column (ID variables) look like this:
2.003140e+14
I have been searching for a way to convert the imported column to integer (or character?) values, but everything I have tried results in an error.
One example that I tried after googling:
gripalisti <- read.fwf(file = "gripalisti.txt",
widths = c(15,8,9,9,9,9,9,9),
header = FALSE,
#stringsAsFactors = FALSE,
col.names = c("einst","bu","faeding","forgun","burdur1",
"burdur2","burdur3","burdur4"),
colclasses = c("integer", "integer", "Date", "Date",
"Date", "Date", "Date", "Date"))
results in the error:
Error in read.table(file = FILE, header = header, sep = sep, row.names = row.names, :
unused argument (colclasses = c("integer", "integer", "Date", "Date", "Date", "Date", "Date", "Date"))
There are many missing values in the dataset, which is over 100,000 lines, so other ways of importing have not worked for me. The dataset is NOT tab delimited.
Sorry if this is obvious, I am a very new R user.
edit:
Thanks for the help, I changed it to:
colClasses = c("character",
And now it looks good.
As suggested in the comments:
it is colClasses=, not colclasses= (a typo);
that first field cannot be stored as "integer", it must either be "numeric" or "character";
(additionally) those dates are not in the default format of %Y-%m-%d, you will need to convert them after reading in the data.
Prep:
writeLines("200416657210340 1665721 20040608 20090930 20060910 20070910 20080827 20090804\n200416657210345 1665721 20040907 20090203 20070331 20080719 \n200416657210347 1665721 20040914 20091026 20070213 20080114 20090302 \n200416657210352 1665721 20041111 20100315 20070123 20071205 20081202",
con = "gripalisti.txt")
Execution:
dat <- read.fwf("gripalisti.txt", widths = c(15,8,9,9,9,9,9,9), header = FALSE,
col.names = c("einst","bu","faeding","forgun","burdur1", "burdur2","burdur3","burdur4"),
colClasses = c("character", "integer", "character", "character", "character", "character", "character", "character"))
str(dat)
# 'data.frame': 4 obs. of 8 variables:
# $ einst : chr "200416657210340" "200416657210345" "200416657210347" "200416657210352"
# $ bu : int 1665721 1665721 1665721 1665721
# $ faeding: chr " 20040608" " 20040907" " 20040914" " 20041111"
# $ forgun : chr " 20090930" " 20090203" " 20091026" " 20100315"
# $ burdur1: chr " 20060910" " 20070331" " 20070213" " 20070123"
# $ burdur2: chr " 20070910" " 20080719" " 20080114" " 20071205"
# $ burdur3: chr " 20080827" " " " 20090302" " "
# $ burdur4: chr " 20090804" " " " " " 20081202"
dat[,3:8] <- lapply(dat[,3:8], as.Date, format = "%Y%m%d")
dat
# einst bu faeding forgun burdur1 burdur2 burdur3 burdur4
# 1 200416657210340 1665721 2004-06-08 2009-09-30 2006-09-10 2007-09-10 2008-08-27 2009-08-04
# 2 200416657210345 1665721 2004-09-07 2009-02-03 2007-03-31 2008-07-19 <NA> <NA>
# 3 200416657210347 1665721 2004-09-14 2009-10-26 2007-02-13 2008-01-14 2009-03-02 <NA>
# 4 200416657210352 1665721 2004-11-11 2010-03-15 2007-01-23 2007-12-05 <NA> 2008-12-02
str(dat)
# 'data.frame': 4 obs. of 8 variables:
# $ einst : chr "200416657210340" "200416657210345" "200416657210347" "200416657210352"
# $ bu : int 1665721 1665721 1665721 1665721
# $ faeding: Date, format: "2004-06-08" "2004-09-07" "2004-09-14" "2004-11-11"
# $ forgun : Date, format: "2009-09-30" "2009-02-03" "2009-10-26" "2010-03-15"
# $ burdur1: Date, format: "2006-09-10" "2007-03-31" "2007-02-13" "2007-01-23"
# $ burdur2: Date, format: "2007-09-10" "2008-07-19" "2008-01-14" "2007-12-05"
# $ burdur3: Date, format: "2008-08-27" NA "2009-03-02" NA
# $ burdur4: Date, format: "2009-08-04" NA NA "2008-12-02"
The number in the first column is a very large number; if you import it as integer or numeric it will automatically be shown in exponential format. The way to resolve this is to set scipen before reading the file:
options(scipen = 999)
I think this should resolve your problem. The date columns still need work after reading, of course. For that you can use a simple command like as.Date(gripalisti$burdur1, format = "%Y%m%d").
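Putting both pieces together, a minimal sketch of this approach (a textConnection stands in for "gripalisti.txt" so the snippet is self-contained; widths and column names as in the question):

```r
# Print large IDs in full rather than in exponent notation
options(scipen = 999)

# Stand-in for one line of gripalisti.txt
txt <- "200416657210340 1665721 20040608 20090930 20060910 20070910 20080827 20090804"
gripalisti <- read.fwf(textConnection(txt),
                       widths = c(15, 8, 9, 9, 9, 9, 9, 9),
                       col.names = c("einst", "bu", "faeding", "forgun",
                                     "burdur1", "burdur2", "burdur3", "burdur4"))

# Convert the yyyymmdd columns to Date after reading
date_cols <- c("faeding", "forgun", "burdur1", "burdur2", "burdur3", "burdur4")
gripalisti[date_cols] <- lapply(gripalisti[date_cols],
                                function(x) as.Date(as.character(x), format = "%Y%m%d"))
```

Note that scipen only changes how the number prints; the column is still stored as numeric, so reading it as "character" (as in the accepted approach above) is the safer choice if the ID must stay exact.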
I'm trying to input a CSV file, but I get the following error:
associatedata <- read.csv("AssociatedSpeciesID_1.csv", header=TRUE, fileEncoding = 'UTF-8-BOM') %>% mutate_all(na_if, "")
Error in read.table(file = file, header = header, sep = sep, quote = quote, :
more columns than column names
Here's the CSV below: I can't find where the number of columns doesn't match up. I've tried common solutions to other questions, but nothing's worked.
ObjectID,GlobalID,AssociatedSpeciesKnown,Associates,NewAssociate,UnknownSpecies_Description,AssociatedSpeciesAbundance,Coflowering,ParentGlobalID,CreationDate,Creator,EditDate,Editor
1,54e33e7c-1ff1-464f-8872-df027fcfe8ec,known,Amelanchier utahensis,,,Few,no,9fc6b840-8584-4045-b69f-f0e9488a1f06,1/7/2022 3:55:46 PM,ejob_BLM,1/7/2022 3:55:46 PM,ejob,,
2,68420bc9-d6c6-4d7f-a149-7306399ce5c1,known,NewSpecies,Genus species,,Occasional,yes,9fc6b840-8584-4045-b69f-f0e9488a1f06,1/7/2022 3:55:46 PM,ejob_BLM,1/7/2022 3:55:46 PM,ejob,,
3,88a6807b-b00c-4e58-84bb-4e8cb61409ae,unknown,,,ritiidiwjjviern bg,Common,no,9fc6b840-8584-4045-b69f-f0e9488a1f06,1/7/2022 3:55:46 PM,ejob_BLM,1/7/2022 3:55:46 PM,ejob,,
4,9fc8ea4a-e197-42cc-bd75-614d5b106364,known,Artemisia nova,,,Common,no,ea9eb086-89c2-4aa5-a2f6-95519cd35a58,1/7/2022 3:56:26 PM,ejob_BLM,1/7/2022 3:56:26 PM,ejob,,
The header has 13 fields but every data record has 15; examining the file, we see that there are two trailing commas at the end of each data line.
count.fields("abc.csv", sep = ",")
## [1] 13 15 15 15 15
1) If we remove the two trailing commas then it works. (You may not need the strip.white but it was added because the code in the Note at the end is indented 4 spaces to satisfy SO. It won't hurt.)
L <- "abc.csv" |>
readLines() |>
sub(pattern = ",,$", replacement = "")
DF <- read.csv(text = L, strip.white = TRUE)
giving
> str(DF)
'data.frame': 4 obs. of 13 variables:
$ ObjectID : int 1 2 3 4
$ GlobalID : chr "54e33e7c-1ff1-464f-8872-df027fcfe8ec" "68420bc9-d6c6-4d7f-a149-7306399ce5c1" "88a6807b-b00c-4e58-84bb-4e8cb61409ae" "9fc8ea4a-e197-42cc-bd75-614d5b106364"
$ AssociatedSpeciesKnown : chr "known" "known" "unknown" "known"
$ Associates : chr "Amelanchier utahensis" "NewSpecies" "" "Artemisia nova"
$ NewAssociate : chr "" "Genus species" "" ""
$ UnknownSpecies_Description: chr "" "" "ritiidiwjjviern bg" ""
$ AssociatedSpeciesAbundance: chr "Few" "Occasional" "Common" "Common"
$ Coflowering : chr "no" "yes" "no" "no"
$ ParentGlobalID : chr "9fc6b840-8584-4045-b69f-f0e9488a1f06" "9fc6b840-8584-4045-b69f-f0e9488a1f06" "9fc6b840-8584-4045-b69f-f0e9488a1f06" "ea9eb086-89c2-4aa5-a2f6-95519cd35a58"
$ CreationDate : chr "1/7/2022 3:55:46 PM" "1/7/2022 3:55:46 PM" "1/7/2022 3:55:46 PM" "1/7/2022 3:56:26 PM"
$ Creator : chr "ejob_BLM" "ejob_BLM" "ejob_BLM" "ejob_BLM"
$ EditDate : chr "1/7/2022 3:55:46 PM" "1/7/2022 3:55:46 PM" "1/7/2022 3:55:46 PM" "1/7/2022 3:56:26 PM"
$ Editor : chr "ejob" "ejob" "ejob" "ejob"
2) Alternatively, if sed is on your path:
read.csv(pipe("sed -e s/,,$// abc.csv"), strip.white = TRUE)
3) This would also work.
DF <- read.csv("abc.csv", header = FALSE, skip = 1, strip.white = TRUE)[1:13]
names(DF) <- read.table("abc.csv", sep = ",", strip.white = TRUE, nrows = 1)
Note
Generate file from question.
Lines <- "ObjectID,GlobalID,AssociatedSpeciesKnown,Associates,NewAssociate,UnknownSpecies_Description,AssociatedSpeciesAbundance,Coflowering,ParentGlobalID,CreationDate,Creator,EditDate,Editor
1,54e33e7c-1ff1-464f-8872-df027fcfe8ec,known,Amelanchier utahensis,,,Few,no,9fc6b840-8584-4045-b69f-f0e9488a1f06,1/7/2022 3:55:46 PM,ejob_BLM,1/7/2022 3:55:46 PM,ejob,,
2,68420bc9-d6c6-4d7f-a149-7306399ce5c1,known,NewSpecies,Genus species,,Occasional,yes,9fc6b840-8584-4045-b69f-f0e9488a1f06,1/7/2022 3:55:46 PM,ejob_BLM,1/7/2022 3:55:46 PM,ejob,,
3,88a6807b-b00c-4e58-84bb-4e8cb61409ae,unknown,,,ritiidiwjjviern bg,Common,no,9fc6b840-8584-4045-b69f-f0e9488a1f06,1/7/2022 3:55:46 PM,ejob_BLM,1/7/2022 3:55:46 PM,ejob,,
4,9fc8ea4a-e197-42cc-bd75-614d5b106364,known,Artemisia nova,,,Common,no,ea9eb086-89c2-4aa5-a2f6-95519cd35a58,1/7/2022 3:56:26 PM,ejob_BLM,1/7/2022 3:56:26 PM,ejob,,
"
cat(Lines, file = "abc.csv")
I have a file with irregular quotes like the following:
"INDICATOR,""CTY_CODE"",""MGN_CODE"",""EVENT_NR"",""EVENT_NR_CR"",""START_DATE"",""PEAK_DATE"",""END_DATE"",""MAX_EXT_ON"",""DURATION"",""SEVERITY"",""INTENSITY"",""AVERAGE_AREA"",""WIDEST_AREA_PERC"",""SCORE"",""GRP_ID"""
"Spi-3,""AFG"","""",1,1,""1952-10-01"",""1952-11-01"",""1953-06-01"",""1952-11-01"",9,6.98,0.78,19.75,44.09,5,1"
It seems irregular because the first column is only wrapped in single quotes, whereas every subsequent column is wrapped in double quotes. I'd like to read it so that every column is imported without quotes (neither in the header, nor the data).
What I've tried is the following:
# All sorts of tidyverse imports
tib <- readr::read_csv("file.csv")
And I also tried the suggestions offered here:
# Base R import
DF0 <- read.table("file.csv", as.is = TRUE)
DF <- read.csv(text = DF0[[1]])
# Data table import
DT0 <- fread("file.csv", header =F)
DT <- fread(paste(DT0[[1]], collapse = "\n"))
But even when it imports the file in the latter two cases, the variable names and some of the elements are wrapped in quotation marks.
I used data.table::fread with the quote = "" option, which disables quote handling so the quote characters are read as-is.
Then I cleaned the names and data by eliminating all the quotes.
The dates could be converted too, but I didn't do that.
library(data.table)
library(magrittr)
DT0 <- fread('file.csv', quote = "")
DT0 %>% setnames(names(.), gsub('"', '', names(.)))
string_cols <- which(sapply(DT0, class) == 'character')
DT0[, (string_cols) := lapply(.SD, function(x) gsub('\\"', '', x)),
.SDcols = string_cols]
str(DT0)
Classes ‘data.table’ and 'data.frame': 1 obs. of 16 variables:
$ INDICATOR : chr "Spi-3"
$ CTY_CODE : chr "AFG"
$ MGN_CODE : chr ""
$ EVENT_NR : int 1
$ EVENT_NR_CR : int 1
$ START_DATE : chr "1952-10-01"
$ PEAK_DATE : chr "1952-11-01"
$ END_DATE : chr "1953-06-01"
$ MAX_EXT_ON : chr "1952-11-01"
$ DURATION : int 9
$ SEVERITY : num 6.98
$ INTENSITY : num 0.78
$ AVERAGE_AREA : num 19.8
$ WIDEST_AREA_PERC: num 44.1
$ SCORE : int 5
$ GRP_ID : chr "1"
- attr(*, ".internal.selfref")=<externalptr>
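The date conversion skipped above can be done afterwards; a base-R sketch, using a small stand-in for the cleaned table (the dates are already in ISO %Y-%m-%d form, so the default as.Date format applies):

```r
# Stand-in for the cleaned data, date columns only
DF <- data.frame(START_DATE = "1952-10-01", PEAK_DATE = "1952-11-01",
                 END_DATE = "1953-06-01", MAX_EXT_ON = "1952-11-01",
                 stringsAsFactors = FALSE)

# Convert each ISO-formatted character column to Date
date_cols <- c("START_DATE", "PEAK_DATE", "END_DATE", "MAX_EXT_ON")
DF[date_cols] <- lapply(DF[date_cols], as.Date)
```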
I have following code:
url <- "https://lebensmittel-naehrstoffe.de/calciumhaltige-lebensmittel/"
page <- read_html(url) #Creates an html document from URL
Ca <- html_table(page, fill = TRUE, dec = ",") #Parses tables into data frames
Ca <- data.frame(Ca)
But the last column of my data.frame, Ca[,4], contains values with both "." and ",". Since it is a German table the decimal mark is ",", but in R the column always ends up as character. I have already tried gsub and as.numeric, but it always failed. Please note: I already set dec = ",".
Could someone help me? If possible, the solution should work on many data.frames (or html imports, or whatever) because I have many such tables...
Thank you very much!
You can use readr::parse_number :
Ca <- html_table(page, fill = TRUE, dec = ",")[[1]]
Ca$`Calciumgehalt in mg` <- readr::parse_number(Ca$`Calciumgehalt in mg`, locale = locale(decimal_mark = ",", grouping_mark = "."))
str(Ca)
# 'data.frame': 82 obs. of 4 variables:
# $ Lebensmittel : chr "Basilikum, getrocknet" "Majoran, getrocknet" "Thymian, getrocknet" "Selleriesamen" ...
# $ Kategorie : chr "Gewürze" "Gewürze" "Gewürze" "Gewürze" ...
# $ Mengenangabe : chr "je 100 Gramm" "je 100 Gramm" "je 100 Gramm" "je 100 Gramm" ...
# $ Calciumgehalt.in.mg: num 2240 1990 1890 1767 1597 ...
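If readr is not available, the same cleanup can be sketched in base R. This assumes "." only ever appears as a thousands separator and "," as the decimal mark; to_num_de is a made-up helper name:

```r
to_num_de <- function(x) {
  # Drop thousands separators ("."), then turn decimal commas into points
  as.numeric(gsub(",", ".", gsub(".", "", x, fixed = TRUE)))
}

# Stand-in for one of the scraped columns
Ca <- data.frame(`Calciumgehalt in mg` = c("2.240", "1.767,5", "44,09"),
                 check.names = FALSE, stringsAsFactors = FALSE)
Ca$`Calciumgehalt in mg` <- to_num_de(Ca$`Calciumgehalt in mg`)
```

Because it is a plain function of a character vector, it can be applied with lapply across every character column of many scraped tables.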
I have a data frame with one row and many columns, and I want to present it with the kable function in R Markdown (PDF output). To present it in a better way, I used the transpose function t() and generated a new data frame. The problem is that when I use big.mark = ",", it doesn't work on the transposed data frame, although it works on the original data frame.
I'm attaching here an example to this problem by code I wrote to demonstrate that problem:
```{r warning = FALSE, error = FALSE, message=FALSE, echo = FALSE, results =
'hide'}
library(kableExtra)
library(tidyverse)
```
```{r warning = FALSE, error = FALSE, message=FALSE, echo = FALSE}
df <- data.frame(x=1000, y=scales::percent(0.34), z=500000)
kable(df, format = "latex", caption = "big.mark problem", booktabs=TRUE,
format.args = list(big.mark = ","))
```
```{r warning = FALSE, error = FALSE, message=FALSE, echo = FALSE}
df_transpose <- t(data.frame(x=1000, y=scales::percent(0.34), z=500000))
kable(df_transpose, format = "latex", caption = "big.mark problem",
booktabs=TRUE, format.args = list(big.mark = ","))
```
```{r warning = FALSE, error = FALSE, message=FALSE, echo = FALSE}
df_transpose_df <- as.data.frame(t(data.frame(x=1000,
y=scales::percent(0.34), z=500000)))
kable(df_transpose_df, format = "latex", caption = "big.mark problem",
booktabs=TRUE, format.args = list(big.mark = ","))
```
```{r warning = FALSE, error = FALSE, message=FALSE, echo = FALSE}
df_transpose_tibble <- as.tibble(t(data.frame(x=1000,
y=scales::percent(0.34), z=500000)))
kable(df_transpose_tibble, format = "latex", caption = "big.mark problem",
booktabs=TRUE, format.args = list(big.mark = ","))
```
The first table displays the first number as 1,000, while the other tables display it as 1000.
I want them all to look like the first one.
Thanks!
You've got data types issues here. Forgetting about the kable stuff for a minute, go through and investigate the class and structure of each object you've created.
First off is the fact that scales::percent formats a number and returns a string.
library(dplyr)
library(tidyr)
scales::percent(0.34)
#> [1] "34.0%"
class(scales::percent(0.34))
#> [1] "character"
Because data.frame has a default of stringsAsFactors = TRUE, the string you've created for y is now a factor (maybe not a problem, but probably awkward and not what you might be expecting).
df <- data.frame(x=1000, y=scales::percent(0.34), z=500000)
df
#> x y z
#> 1 1000 34.0% 5e+05
class(df)
#> [1] "data.frame"
str(df)
#> 'data.frame': 1 obs. of 3 variables:
#> $ x: num 1000
#> $ y: Factor w/ 1 level "34.0%": 1
#> $ z: num 5e+05
Look at the docs for t: it returns a matrix. Matrices only have a single data type, so everything is coerced to strings.
df_transpose <- t(data.frame(x=1000, y=scales::percent(0.34), z=500000))
class(df_transpose)
#> [1] "matrix"
str(df_transpose)
#> chr [1:3, 1] "1000" "34.0%" "5e+05"
#> - attr(*, "dimnames")=List of 2
#> ..$ : chr [1:3] "x" "y" "z"
#> ..$ : NULL
When you converted that into a data frame again, you once again got factors, not any numeric values.
df_transpose_df <- as.data.frame(t(data.frame(x=1000, y=scales::percent(0.34), z=500000)))
class(df_transpose_df)
#> [1] "data.frame"
str(df_transpose_df)
#> 'data.frame': 3 obs. of 1 variable:
#> $ V1: Factor w/ 3 levels "1000","34.0%",..: 1 2 3
#> ..- attr(*, "names")= chr "x" "y" "z"
as_tibble doesn't coerce into factors, so the difference here from the previous df is that you have all strings instead of factors.
df_transpose_tibble <- as_tibble(t(data.frame(x=1000, y=scales::percent(0.34), z=500000)))
class(df_transpose_tibble)
#> [1] "tbl_df" "tbl" "data.frame"
str(df_transpose_tibble)
#> Classes 'tbl_df', 'tbl' and 'data.frame': 3 obs. of 1 variable:
#> $ V1: chr "1000" "34.0%" "5e+05"
The underlying problem with each of these is that after these transformations, you're then calling formatting functions—supplying a big.mark argument to kable, or directly using the format function kable calls—on strings, whereas they only operate on numbers.
Instead, you can start with everything numeric (or set stringsAsFactors = FALSE), set the formatting the way you want for each of these columns, then use a function for reshaping that's designed to work with data frames. One common option is tidyr::gather, which will get you the longer-shaped data you were looking for, but keep it as a data frame/tibble.
all_numeric <- data.frame(x = 1000, y = 0.34, z = 500000)
all_numeric %>%
mutate(x = formatC(x, big.mark = ","),
y = scales::percent(y)) %>%
gather(key, value)
#> key value
#> 1 x 1,000
#> 2 y 34.0%
#> 3 z 5e+05
Created on 2018-10-29 by the reprex package (v0.2.1)
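Since then, gather() has been superseded; the same reshape can be written with tidyr::pivot_longer. One wrinkle: the value column must have a single type, so here all three columns are formatted to character up front (formatting z as well, which arguably looks better anyway):

```r
library(dplyr)
library(tidyr)

all_numeric <- data.frame(x = 1000, y = 0.34, z = 500000)
long <- all_numeric %>%
  mutate(x = formatC(x, big.mark = ",", format = "d"),
         y = scales::percent(y),
         z = formatC(z, big.mark = ",", format = "d")) %>%
  pivot_longer(everything(), names_to = "key", values_to = "value")
```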
I have a problem when importing .csv file into R. With my code:
t <- read.csv("C:\\N0_07312014.CSV", na.string=c("","null","NaN","X"),
header=T, stringsAsFactors=FALSE,check.names=F)
R reports an error and does not do what I want:
Error in read.table(file = file, header = header, sep = sep, quote = quote, :
more columns than column names
I guess the problem is because my data is not well formatted. I only need data from [,1:32]. All others should be deleted.
Data can be downloaded from:
https://drive.google.com/file/d/0B86_a8ltyoL3VXJYM3NVdmNPMUU/edit?usp=sharing
Thanks so much!
Open the .csv as a text file (for example, use TextEdit on a Mac) and check to see if columns are being separated with commas.
CSV stands for "comma-separated values". For some reason, when Excel saves my CSVs it uses semicolons instead.
When opening your csv use:
read.csv("file_name.csv",sep=";")
A semicolon is just an example, but as someone else previously suggested, don't assume that because your csv looks good in Excel it is actually well formed.
That's one wonky CSV file. Multiple headers tossed about (try pasting it into CSV Fingerprint to see what I mean).
Since I don't know the data, it's impossible to be sure the following produces accurate results for you, but it involves using readLines and other R functions to pre-process the text:
# use readLines to get the data
dat <- readLines("N0_07312014.CSV")
# i had to do this to fix grep errors
Sys.setlocale('LC_ALL','C')
# filter out the repeating, and wonky headers
dat_2 <- grep("Node Name,RTC_date", dat, invert=TRUE, value=TRUE)
# turn that vector into a text connection for read.csv
dat_3 <- read.csv(textConnection(paste0(dat_2, collapse="\n")),
header=FALSE, stringsAsFactors=FALSE)
str(dat_3)
## 'data.frame': 308 obs. of 37 variables:
## $ V1 : chr "Node 0" "Node 0" "Node 0" "Node 0" ...
## $ V2 : chr "07/31/2014" "07/31/2014" "07/31/2014" "07/31/2014" ...
## $ V3 : chr "08:58:18" "08:59:22" "08:59:37" "09:00:06" ...
## $ V4 : chr "" "" "" "" ...
## .. more
## $ V36: chr "" "" "" "" ...
## $ V37: chr "0" "0" "0" "0" ...
# grab the headers
headers <- strsplit(dat[1], ",")[[1]]
# how many of them are there?
length(headers)
## [1] 32
# limit it to the 32 columns you want (Which matches)
dat_4 <- dat_3[,1:32]
# and add the headers
colnames(dat_4) <- headers
str(dat_4)
## 'data.frame': 308 obs. of 32 variables:
## $ Node Name : chr "Node 0" "Node 0" "Node 0" "Node 0" ...
## $ RTC_date : chr "07/31/2014" "07/31/2014" "07/31/2014" "07/31/2014" ...
## $ RTC_time : chr "08:58:18" "08:59:22" "08:59:37" "09:00:06" ...
## $ N1 Bat (VDC) : chr "" "" "" "" ...
## $ N1 Shinyei (ug/m3): chr "" "" "0.23" "null" ...
## $ N1 CC (ppb) : chr "" "" "null" "null" ...
## $ N1 Aeroq (ppm) : chr "" "" "null" "null" ...
## ... continues
If you only need the first 32 columns, and you know how many columns there are in total, you can set the remaining columns' classes to "NULL".
read.csv("C:\\N0_07312014.CSV", na.strings = c("", "null", "NaN", "X"),
         header = TRUE, stringsAsFactors = FALSE,
         colClasses = c(rep("character", 32), rep("NULL", 5)))
If you do not want to code up each colClass by hand and you are happy with the classes read.csv guesses, just save that csv and open it again.
Alternatively, you can skip the header and name the columns yourself and remove the misbehaved rows.
A<-data.frame(read.csv("N0_07312014.CSV",
header=F,stringsAsFactors=FALSE,
colClasses=c(rep("character",32),rep("NULL",5)),
na.string=c("","null","NaN","X")))
Yournames<-as.character(A[1,])
names(A)<-Yournames
yourdata<-unique(A)[-1,]
The code above assumes you do not want any duplicate rows. You can alternatively remove rows that have the first entry equal to the first column name, but I'll leave that to you.
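That removal can be sketched as follows, on a small stand-in frame with a stray repeated header row in the middle:

```r
# Stand-in: row 1 is the real header, row 3 is a repeated header line
A <- data.frame(V1 = c("Node Name", "Node 0", "Node Name", "Node 1"),
                V2 = c("RTC_date", "07/31/2014", "RTC_date", "07/31/2014"),
                stringsAsFactors = FALSE)
names(A) <- as.character(A[1, ])
yourdata <- A[-1, ]

# Drop any row whose first entry repeats the first column name
yourdata <- yourdata[yourdata[[1]] != names(yourdata)[1], ]
```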
try read.table() instead of read.csv()
I was also facing the same issue; it is now solved.
Just use header = FALSE:
read.csv("data.csv", header = FALSE) -> mydata
I had the same problem. I opened my data in a text editor and found the fields were separated by semicolons and the decimals were written with commas; you should replace the commas with periods, or use a reader that expects that format.
I was getting this error because of multiple rows of metadata at the top of the file. I was able to use read.csv by passing skip= to skip those rows.
data <- read.csv('/blah.csv',skip=3)
For me, the solution was using read.csv2 instead of read.csv.
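read.csv2 is read.csv preconfigured for the European convention (sep = ";" and dec = ","); a minimal illustration on inline text:

```r
# Semicolon-separated fields with comma decimal marks
dat <- read.csv2(textConnection("a;b\n1,5;2,5"))
# dat$a is 1.5 and dat$b is 2.5, parsed as numeric
```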
read.csv("file_name.csv", header = FALSE)
Setting header = FALSE will do the job for you...