I have been trying to load Sybase ASE query data into a text file. The text file data is going to be loaded into a Postgres table.
But many rows are split into 2 separate rows in the output file. The output in isql itself has the same issue.
I have tried the options below, still with no success.
Tried ltrim(rtrim(cast(column_name as varchar))) -- tried for all columns in the query.
Tried sed to streamline the output format
Tried different column widths, delimiters etc. in the isql connection.
None of the above steps fixes my issue.
Below is part of the query output exhibiting the issue described above.
3240 1MB MGMT AB -8377 NULL LEGACY PASSED
3240 1MB MGMT AB -8377 D22600484
DISCONNECT DISCONNECT
The above query result has 2 rows (the first column has the value 3240 in both rows).
As you can see, the 'DISCONNECT DISCONNECT' part of the second row wraps onto the next line and is treated as a 3rd row. The datatype of the last 2 columns is varchar(10), so there is seemingly no width issue.
There is no space before or after the column values either.
Please let me know if there is any way to overcome this issue.
As #Shelter suggested, have a look at the -w option for isql. It controls the width of isql's output. Provided -w is wider than the longest row, every row will appear on a single line.
You may also want to remove other extraneous stuff:
Column headings
Row count
Column headings can be removed with the -b option.
The row count can be removed by putting
set nocount on
go
in your SQL script.
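For example, a minimal sketch of such an invocation (the user, server, and file names here are placeholders, not from the original post):
isql -U myuser -S MYSERVER -i query.sql -o output.txt -w 2000 -b
Here -w 2000 allows rows up to 2000 characters wide and -b suppresses the column headings; query.sql would begin with the set nocount on / go shown above.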
Another alternative is to create a view from your query's SQL and use the BCP tool to export the data in character format.
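A sketch of that route (the database, view name, delimiter, and login are placeholders):
bcp mydb..my_view out output.txt -c -t '|' -U myuser -S MYSERVER
The -c flag exports in character format and -t sets the field delimiter, which is also convenient for loading into Postgres afterwards.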
Answer to a similar question on StackOverflow.
Does anyone know how to count the number of rows in a SAS table using the x command? I need to achieve this through unix. I tried wc -l, but it gave me a different result than what proc sql count(*) gives me.
Lee has got the right idea here: SAS datasets are stored in a proprietary binary format, in which line breaks are not necessarily row separators, so you cannot use tools like wc to get an accurate row count. Using SAS itself is one option, or you could use another tool such as python's pandas.read_sas function to load the table if you don't have SAS installed on your unix server.
Writing a script to do this for you is outside the scope of this answer, so have a go at writing one yourself and post a more specific question if you get stuck.
I need to use the cast function with the length of a column in Teradata.
Say I have a table with the following data:
id | name
1 | dhawal
2 | bhaskar
I need to use a cast operation, something like:
select cast(name as CHAR(<length of column>)) from table
How can I do that?
thanks
Dhawal
You have to find the length by looking at the table definition - either manually (show table) or by writing dynamic SQL that queries dbc.ColumnsV.
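For the dynamic route, a sketch of the lookup (the database and table names are placeholders; ColumnLength holds the defined length in bytes):
select ColumnLength
from dbc.ColumnsV
where DatabaseName = 'mydb'
  and TableName = 'mytable'
  and ColumnName = 'name';
The value returned can then be spliced into the CHAR(n) of your CAST.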
update
You can find the maximum length of the actual data using
select max(length(cast(... as varchar(<large enough value>)))) from TABLE
But if this is for FastExport, I think casting as varchar(large-enough-value) and postprocessing to remove the 2-byte length info FastExport includes is a better solution (since exporting a CHAR() will result in a fixed-length output file with lots of spaces in it).
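If you do take the postprocessing route, and assuming the export is plain one-record-per-line text, stripping the leading 2 bytes of each record can be as simple as:
cut -b 3- export.out > export.clean.out
(This assumes none of the length-indicator bytes happen to be newline characters; verify on a sample first.)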
You may know this already, but just in case: Teradata usually recommends switching to TPT instead of the legacy fexp.
I am trying to run teradata fexp with a simple sql script.
The select output column is a string expression and as such results in 2 extra length indicator bytes at the start of each row output.
I have searched for solutions online to the problem. I would like to avoid having to post-process if possible.
There is a thread suggesting the possibility of using an OUTMOD. I don't know what that is.
https://forums.teradata.com/forum/tools/fastexport-remove-binaryindicator-values-in-outmod
http://teradataforum.com/teradata/20100726_155313.htm
And yet another thread suggests casting to a fixed-width string type, but this would result in padding, which I'd like to avoid.
https://forums.teradata.com/forum/tools/fexp-data-doubt
The desired output is actually a delimited plain text file. Is there a way to do it?
I'm importing an .xls file using the following connection string:
If _
SetDBConnect( _
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & filepath & _
";Extended Properties=""Excel 8.0;HDR=Yes;IMEX=1""", True) Then
This has been working well for parsing through several Excel files that I've come across. However, with this particular file, when I SELECT * into a DataTable, there is a whole column of data, Item Description, missing from the DataTable. Why?
Here are some things that may set this particular workbook apart from the others that I've been working with:
The workbook has a freeze pane consisting of the first 24 rows (however, all of these rows appear in the DataTable)
There is some weird cell highlighting going on throughout the workbook
That's pretty much it. I can't see anything that would make the Item Description column not import correctly. Its data consists entirely of Strings with no special characters apart from &. Additionally, each entry in this column is a maximum of 20 characters. What is happening? Is there any other way I can get all of the data? Keep in mind that I have to use the original file and cannot alter it, as I want this to ultimately be an automated process.
Thanks!
Some initial thoughts/questions: Is the missing column the very first column? What happens if you remove the space within "Item Description"? Stupid question, but does that column have a column header?
-- EDIT 1 --
If you delete that column, does the problem move to another column (the new index 4), or is the file then complete? My reason for asking this: is the problem specific to the data in that column/header, or is the problem more general, tied to index 4?
-- EDIT 2 --
Ok, so since we know it's that column, we know it's either the header or the rows. Let's concentrate on rows for now. Start with that ampersand; dump it, and see what happens. Next, work with the first 50% of rows. Does deleting that subset affect anything? What about the latter 50% of rows? If one of those subsets changes the result, you ought to be able to narrow it down to an individual row (hopefully not plural) by halving your selection each time.
My guess is that you're going to find a unicode character or something else funky in one of the cells. Maybe there's a formula or, as you mentioned, some of that "weird cell highlighting."
It's been years since I worked with Excel access, but I recall some problems with Excel grouping content into areas that act as tables inside each sheet. Try copying/pasting the content from the problematic sheet into a new workbook and connecting to that workbook. If this works, you may be able to investigate those areas a bit further.
I tried the simple join
join query.txt source.tab
based on the 1st column in both files. It's clear that source.tab contains the query. But why does the operation yield no result?
Both the query and source files are downloadable here:
http://dl.dropbox.com/u/11482318/query.txt (2B)
http://dl.dropbox.com/u/11482318/source.tab (40KB)
The man page for join says (as suggested by shelter):
Important: FILE1 and FILE2 must be sorted on the join fields.
In your case the source.tab file is sorted naturally on the first field (r1.1, r2.1, etc.). But the sort order required by join is based on the collating sequence of sort (probably r1.1, r10.1, r100.1, r11.1, r12.1, etc.).
If you sort your source.tab file using the sort command and then join, it should work.
(Note that - perhaps by luck - the query.txt file has the correct sort order.)
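For example:
sort -k1,1 source.tab > source.sorted.tab
sort -k1,1 query.txt > query.sorted.txt
join query.sorted.txt source.sorted.tab
Sorting query.txt as well is harmless even if it already happens to be in the right order.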
The join command will not return results if the file has a header. The header causes join to consider the file unsorted, and it thus fails all matches for keys that sort before the header's field.
One way to strip the header out is to use grep -v ",Header2," file1.txt > file2.txt, then join against file2.txt (assuming the file was sorted to begin with). Another option is to just sort the file as it is, allowing the header to remain. This works if the header row will not be displayed in the final result.
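Putting that together (the file names and header pattern here are illustrative):
grep -v ",Header2," file1.txt > file2.txt
sort -t, -k1,1 file2.txt > file2.sorted.txt
join -t, file2.sorted.txt other.sorted.txt
The -t, option tells both sort and join to treat the comma as the field separator, matching the delimited format implied by the grep pattern.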