What options are there for the sqsh style "csv" (or any way to get tab-delimited output)? - sqsh

With the DB tool sqsh, I want to get the column names and the data tab delimited.
The bcp option does not include the column names.
The csv option includes the column names, but uses a comma as the separator (doh). Is there a way to change it?
Currently I'm looking at post-processing the file to change the commas to tabs (ignoring the commas within strings...).

You can \set colsep="\t" to change the separator used for standard output to a tab.
Edit: \t didn't work (in my Cygwin setup), so I used <CTRL-V><TAB> instead. That works:
[228] > \set colsep=" " -- Hit CTRL-V then <TAB> here.
[229] > select 'ABC' as STRING, 12 as INT;
STRING INT
------ -----------
ABC 12
(1 row affected)

Please note that since sqsh version 2.5 it is possible to assign control characters to some variables such as colsep, linesep, bcp_colsep and bcp_rowsep, so the
\set colsep="\t"
should now work properly with sqsh 2.5.
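If you are stuck on an older sqsh and end up post-processing the comma-separated output (as mentioned in the question), a minimal R sketch that respects commas inside quoted strings could look like this (the file names output.csv and output.tsv are just placeholders, not anything sqsh produces by itself):
# Parse the csv properly so commas inside quoted strings are not split,
# then write the same table back out tab-delimited with the header row kept.
x <- read.csv("output.csv", stringsAsFactors = FALSE, check.names = FALSE)
write.table(x, "output.tsv", sep = "\t", quote = FALSE, row.names = FALSE, na = "")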

Related

R - Reading a badly formatted csv with uneven quotes and separators in fields

I have a badly formatted csv file (I did not make it) that includes both separators and broken quotes in some fields. I would like to read this into R.
Three lines of the table look something like this:
| ids |info | text |
| id 1 |extra_info;1998| text text text |
| id 2 |extra_info2 | text with broken dialogues quotes "hi! |
# the same table as an R string could be
string <- "ids;info;text\n\"id 1\";\"extra_info;1998\";\"text text text\"\n\"id 2\";extra_info2;\"text with broken dialogues quotes \"hi!\" \n"
With " quotes surrounding any field with more than one word as is common in csv-s, and semicolon ; used as a separator. Unfortunately the way it was built, the last column (and it is always last), can contain a random number of semicolons or quotes within a text bulk, and these quotes are not always escaped.
I'm looking for a way to read this file. So far I have come up with a really complicated workflow to replace the first N separators with another less used separator when they are in the beginning of line with regex (from here) - because text is always last, however this still fails currently when there is an uneven number of quotes in the line.
I'm thinking there must be an easier way to do this, as badly formed csv-s should be a reoccurring problem here. Thanks.
data.table::fread works wonders:
library(data.table)
test <- fread("test.csv")
# Remove extraneous columns
test$V1 <- NULL
test$V5 <- NULL
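If fread still trips over the unescaped quotes, a hedged variant for the semicolon-separated file described in the question is to name the separator explicitly and turn quote handling off entirely (the file name test.csv is carried over from the answer above; whether fill is needed depends on the file):
library(data.table)
# quote = "" disables quote processing, so stray " characters stay inside the text;
# fill = TRUE pads rows that come out short instead of stopping with an error.
test <- fread("test.csv", sep = ";", quote = "", fill = TRUE)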

R - extract data which changes position from file to file (txt)

I have a folder with tons of txt files from which I have to extract specific data. The problem is that the format of the file changed at some point, so the position of the data I need to extract also changed, and I need to deal with files in different formats.
To make it clearer: in column 4 I have the name of the variable and in column 5 its value, but sometimes these are in a different row. Is there a way to find the row containing the variable name and then extract its value?
Thanks in advance
EDIT:
In some files I will have the data like this:
Column 1-------Column 2.
Device ID------A.
Voltage------- 500.
Current--------28
But at some point the software was changed to add another variable, and the new file is like this:
Column 1-------Column 2.
Device ID------A.
Voltage------- 500.
Error------------5.
Current--------28
So I need to deal with these 2 types of data, extracting the same variables which are in different rows.
If these files can't be read with read.table, use readLines and then find the lines that start with the keyword you need.
For example:
Sample file 1 (with the dashes included and extra line breaks):
Column 1-------Column 2.
Device ID------A.
Voltage------- 500.
Error------------5.
Current--------28
Sample file 2 (with a comma as separator):
Column 1,Column 2.
Device ID,A.
Current,555
Voltage, 500.
Error,5.
For both cases do:
text = readLines(con = file("your filename here"))
curr = text[grepl("^Current", text, ignore.case = T)]
Which returns:
for file 1:
[1] "Current--------28"
for file 2:
[1] "Current,555"
Then use gsub to remove anything that is not a number.
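For example, continuing from the curr value above, a minimal sketch (the pattern assumes the value uses a dot, if anything, as its decimal separator):
# Drop everything that is not a digit or a dot, then convert to numeric
current_value <- as.numeric(gsub("[^0-9.]", "", curr))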

Want a command line to get the data as it is, as we get when we export the data

I have data with some thousands of records, and each record has multiple columns. One of the columns contains data with a comma "," in it.
When I tried to spool that data into a csv file and split the text into columns using comma as the delimiter, the result was wrong because the data itself contains commas.
I am looking for a way to export the data from the command line so that it looks the same as when I export the data via Toad.
Any help is much appreciated.
Note: I have been looking for a solution for many days, but only now got a chance to post it here.
When exporting the dataset in Toad, select a delimiter other than a comma, or open the "string quoting" dropdown box and select "double quote strings including NULLS".
Oh wait, if you are spooling output you'll need to add the double quotes in your select statement like this, in order to surround the columns containing commas with double quotes:
select '"' || column || '"' as column from table;
This format is pretty standard, but you could use pipes as delimiters instead and save space by not having to wrap strings in double quotes. It depends on what the consumer of the data requires, really.
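If that consumer happens to be R (as in the other questions here), a pipe-delimited, double-quoted export could then be read back with something like this sketch (export.txt is a placeholder file name, not anything Toad or the spool produces by default):
# header = TRUE assumes the export kept a header row
x <- read.table("export.txt", header = TRUE, sep = "|", quote = "\"",
                stringsAsFactors = FALSE)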

What does the "More Columns than Column Names" error mean?

I'm trying to read in a .csv file from the IRS and it doesn't appear to be formatted in any weird way.
I'm using the read.table() function, which I have used several times in the past but it isn't working this time; instead, I get this error:
data_0910<-read.table("/Users/blahblahblah/countyinflow0910.csv",header=T,stringsAsFactors=FALSE,colClasses="character")
Error in read.table("/Users/blahblahblah/countyinflow0910.csv", :
more columns than column names
Why is it doing this?
For reference, the .csv files can be found at:
http://www.irs.gov/uac/SOI-Tax-Stats-County-to-County-Migration-Data-Files
(The ones I need are under the county to county migration .csv section - either inflow or outflow.)
It uses commas as separators. So you can either set sep="," or just use read.csv:
x <- read.csv(file="http://www.irs.gov/file_source/pub/irs-soi/countyinflow1011.csv")
dim(x)
## [1] 113593 9
The error is caused by spaces in some of the values, plus unmatched quotes. With the default sep="" (any whitespace), there are no spaces in the header, so read.table thinks there is one column; then it sees multiple columns in some of the rows. For example, the first two lines (header and first row):
State_Code_Dest,County_Code_Dest,State_Code_Origin,County_Code_Origin,State_Abbrv,County_Name,Return_Num,Exmpt_Num,Aggr_AGI
00,000,96,000,US,Total Mig - US & For,6973489,12948316,303495582
And unmatched quotes, for example on line 1336 (row 1335) which will confuse read.table with the default quote argument (but not read.csv):
01,089,24,033,MD,Prince George's County,13,30,1040
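In other words, read.table also works here once the separator and quote character are set explicitly to what read.csv uses by default - a sketch of the same call as in the question, with the path shortened to a placeholder:
# sep = "," handles the commas; quote = "\"" stops the apostrophe in
# "Prince George's County" from being treated as an opening quote
data_0910 <- read.table("countyinflow0910.csv", header = TRUE, sep = ",",
                        quote = "\"", stringsAsFactors = FALSE,
                        colClasses = "character")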
You may have strange characters in your heading, such as # % -- or ,
For the Germans:
You have to change your decimal commas into full stops in your csv file (in Excel: File -> Options -> Advanced -> "Decimal separator"); then the error is solved.
Depending on the data (e.g. a .tsv extension) it may use tabs as separators, so you may try sep = '\t' with read.csv.
This error can get thrown if your data frame has sf geometry columns.

Copy to without quotes

I have a large dataset in a dbf file and would like to export it to a csv file.
Thanks to SO I already managed to do that smoothly.
However, when I try to import it into R (the environment I work in) it combines some characters together, making some rows much longer than they should be and consequently breaking the whole database. In the end, whenever I import the exported csv file I get only half of the db.
I think the main problem is with quotes in the character strings, but specifying quote="" in R didn't help (and it usually does).
I've searched for any question on how to deal with quotes when exporting in Visual FoxPro, but couldn't find the answer. I wanted to test this, but my computer throws an error stating that I don't have enough memory to complete the operation (probably due to the large db).
Any help will be highly appreciated. I've been stuck with this problem of exporting from the dbf into R for long enough, have searched everything I could, and am desperately looking for a simple way to import a large dbf into my R environment without any bugs.
(In R: I checked whether the imported file has problems, and indeed most of the columns have much longer nchars than they should, while the number of rows has halved. Reading the db with read.csv("file.csv", quote="") didn't help. Reading with data.table::fread() returns the error
Expected sep (',') but '0' ends field 88 on line 77980:
But according to verbose=T this function reads the right number of rows (read.csv imports only about 1.5 million rows):
Count of eol after first data row: 2811729 Subtracted 1 for last eol
and any trailing empty lines, leaving 2811728 data rows
When exporting with TYPE DELIMITED you have some control on the VFP side over how the export formats the output file.
To change the field delimiter (the character that surrounds character fields) from quotes to, say, a pipe character you can do:
copy to myfile.csv type delimited with "|"
so that will produce something like:
|A001|,|Company 1 Ltd.|,|"Moorfields"|
You can also change the separator from a comma to another character:
copy to myfile.csv type delimited with "|" with character "#"
giving
|A001|#|Company 1 Ltd.|#|"Moorfields"|
That may help in parsing on the R side.
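On the R side, a file exported with the second form above ('|' around character fields, '#' between fields) could then be read roughly like this sketch (header = FALSE assuming the export has no header row; myfile.csv as above):
# quote accepts any character set, so the pipe works as the quoting character
x <- read.table("myfile.csv", sep = "#", quote = "|",
                header = FALSE, stringsAsFactors = FALSE)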
There are three ways to delimit a string in VFP: the normal single and double quote characters, and square brackets. So to strip quotes out of character fields myfield1 and myfield2 in your DBF file you could do this in the Command Window:
close all
use myfile
copy to mybackupfile
select myfile
replace all myfield1 with chrtran(myfield1,["'],"")
replace all myfield2 with chrtran(myfield2,["'],"")
and repeat for other fields and tables.
You might have to write code to do the export, rather than simply using the COPY TO ... DELIMITED command.
SELECT thedbf
mfld_cnt = AFIELDS(mflds)
fh = FOPEN(m.filename, 1)
SCAN
   FOR aa = 1 TO mfld_cnt
      mcurfld = 'thedbf.' + mflds[aa, 1]
      mvalue = &mcurfld
      ** Or you can use:
      mvalue = EVAL(mcurfld)
      ** manipulate the contents of mvalue, possibly based on the field type
      DO CASE
         CASE mflds[aa, 2] = 'D'
            mvalue = DTOC(mvalue)
         CASE mflds[aa, 2] $ 'CM'
            ** Replace characters that are giving you problems in R
            mvalue = STRTRAN(mvalue, ["], '')
         OTHERWISE
            ** Etc.
      ENDCASE
      = FWRITE(fh, mvalue)
      IF aa # mfld_cnt
         = FWRITE(fh, [,])
      ENDIF
   ENDFOR
   = FWRITE(fh, CHR(13) + CHR(10))
ENDSCAN
= FCLOSE(fh)
Note that I'm using [ ] characters to delimit strings that include commas and quotation marks. That helps readability.
*create a comma delimited file with no quotes around the character fields
copy to TYPE DELIMITED WITH "" (2 double quotes)
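Once the embedded quotes have been stripped on the VFP side, the quote-free comma export should load in R with quoting switched off, e.g. this sketch (header = FALSE assuming the export has no header row):
x <- read.csv("myfile.csv", quote = "", header = FALSE,
              stringsAsFactors = FALSE)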
