What would cause Microsoft Jet OLEDB SELECT to miss a whole column? - asp.net

I'm importing an .xls file using the following connection string:
If _
SetDBConnect( _
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & filepath & _
";Extended Properties=""Excel 8.0;HDR=Yes;IMEX=1""", True) Then
This has been working well for parsing through several Excel files that I've come across. However, with this particular file, when I SELECT * into a DataTable, there is a whole column of data, Item Description, missing from the DataTable. Why?
Here are some things that may set this particular workbook apart from the others that I've been working with:
The workbook has a freeze pane consisting of the first 24 rows (however, all of these rows appear in the DataTable)
There is some weird cell highlighting going on throughout the workbook
That's pretty much it. I can't see anything that would make the Item Description column not import correctly. Its data consists entirely of strings with no special characters apart from &, and each entry in the column is at most 20 characters. What is happening? Is there any other way I can get all of the data? Keep in mind I have to use the original file and cannot alter it, as I want this to ultimately be an automated process.
Thanks!

Some initial thoughts/questions: Is the missing column the very first column? What happens if you remove the space within "Item Description"? Stupid question, but does that column have a column header?
-- EDIT 1 --
If you delete that column, does the problem move to another column (the new index 4), or is the file then complete? My reason for asking: is the problem specific to the data in that column/header, or is it more general, tied to index 4?
-- EDIT 2 --
OK, so since we know it's that column, we know it's either the header or the rows. Let's concentrate on the rows for now. Start with that ampersand; dump it and see what happens. Next, work with the first 50% of rows. Does deleting that subset affect anything? What about the latter 50%? If one of those subsets changes the result, you ought to be able to narrow it down to an individual row (hopefully not plural) by halving your selection each time.
My guess is that you're going to find a Unicode character or something else funky in one of the cells. Maybe there's a formula or, as you mentioned, some of that "weird cell highlighting."

It's been years since I worked with Excel access, but I recall some problems with Excel grouping content into areas that act as tables inside each sheet. Try copying and pasting the content from the problematic sheet into a new workbook and connect to that workbook. If this works, you may be able to investigate those areas a bit further.


R read csv with comma in column

Update 2020-5-14
Working with a different but similar dataset from here, I found read_csv seems to work fine. I haven't tried it with the original data yet though.
Although the replies didn't solve the problem (my question was not correct to begin with), Shan's reply best fits the original question I posted, so I accepted his answer.
Update 2020-5-12
I think my original question is not correct. As mentioned in the comment, the data was quoted. Although changing the separator made row 11582 in R look the same as row 11583 in Excel, that doesn't mean it's "right". Maybe there is an incorrect line break somewhere, caused by inappropriate encoding or something, that displaces some of the columns. If I open the data in Notepad++, the record at row 11583 in Excel is at row 11596.
Original question
I am trying to read the listings.csv from this dataset on Kaggle into R. I downloaded the file and ran the code read.csv('listings.csv'). The first column, the id column, is supposed to be numeric. However, it shows:
listing$id[1:10]
[1] 2015 2695 3176 3309 7071 9991 14325 16401 16644 17409
13129 Levels: Ole Berlín!,16736423,Nerea,Mitte,Parkviertel,52.55554132116211,13.340658248460871,Entire home/apt,36,6,3,2018-01-26,0.16,1,279\n17312576,Great 2 floor apartment near Friederich Str MITTE,116829651,Selin,Mitte,Alexanderplatz,52.52349354926847,13.391003496971203,Entire home/apt,170,3,31,2018-10-13,1.63,1,92\n17316675,80² m of charm in 3 rooms with office space,116862833,Jon,Neukölln,Schillerpromenade,52.47499080234379,13.427509313575928...
I think it is because there are values with commas in the second column. For example, opening the file with Microsoft Excel, I can see one of the values in the second column is Ole,Ole...:
How can I read a csv file into R correctly when some values contain commas?
Since you have access to the data in Excel, you can 'Save As' in Excel with a separator other than a comma (,). First go into Control Panel -> Region and Language -> Additional settings, where you can change the "List separator". The most common one other than a comma is the pipe symbol (|). In R, when you read the file, specify the separator as '|'.
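For instance, a minimal sketch assuming the file was re-saved from Excel as listings_pipe.csv (the file name is hypothetical):
# sep tells read.csv which delimiter the re-saved file uses
listings <- read.csv("listings_pipe.csv", sep = "|", stringsAsFactors = FALSE)
str(listings$id)  # id should now come through as numeric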
You could try this?
listings <- read.csv("listings.csv", stringsAsFactors = FALSE)
listings$name <- gsub(",", "", listings$name)  # this removes the commas in the name column
If you don't need the information in the second column, then you can always delete it (in Excel) before importing into R. The read.csv function, which calls scan, can also omit unwanted columns using the colClasses argument. However, the fread function from the data.table package does this much more simply with the drop argument:
library(data.table)
listings <- fread("listings.csv", drop=2)
If you do need the information in that column, then other methods are needed (see other solutions).
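As mentioned in the asker's update above, readr's read_csv handled a similar dataset; a minimal sketch, assuming the readr package is installed:
library(readr)
# read_csv honours quoted fields, so commas inside quotes stay within a single column
listings <- read_csv("listings.csv")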

Choosing column values to minimize filesize of SQLite database?

I have an SQLite database in which I altered a table to add a column that will contain a kind of permanently unique ID for each row (in addition to the existing INTEGER PRIMARY KEY, which might be reassigned and is thus not permanent). I also want to avoid accidentally mixing up the normal IDs and the new "permanent IDs", therefore I decided to use a TEXT column and give each value a prefix, for example pid-.
So I simply added a column named perma_id with the type TEXT and ran UPDATE mytable SET perma_id = 'pid-' || _rowid_ to assign values for the existing rows. I then saved and compacted/vacuumed the database and compressed it into a zip-file because I will include it in an Android APK.
I noticed that the filesize had gone up from 379kB to 417kB after adding the new column. This is of course expected. But as an experiment, I thought maybe I could reduce the filesize by just using p... instead of pid-... for the perma_id column values, so I reassigned all the values. But to my surprise, the filesize had instead increased to 420kB! I experimented a bit further, and I can consistently get the (compressed) filesize to become 417kB with pid-... and 420kB with p.... As expected, using an INTEGER column reduces the filesize further, but only to 414kB.
This makes me wonder - what is the black magic behind the smaller file size when using a longer string as a prefix in the perma_id column? And is there a way to determine which string would produce the smallest filesize?
Edit
Just tried using the prefix perma-id-..., which results in a compressed file size of 414kB - i.e. same as using an INTEGER column with just the number after the prefix. So I tried very-long-permanent-id-with-the-value-... as prefix - 413kB. Mind = blown.
Did you try running the VACUUM command on the database before zipping each time?
When you shortened the perma_id values, it may have reduced the size of the data but kept the .db file the same size, as SQLite doesn't automatically shrink the file; it just marks chunks of it as overwritable. Until, that is, you run VACUUM to throw away all that spare space.
I'm guessing the overwritable portion of your file was hard to zip. Then, when it got filled up with lots of repeating text saying "permanent-id-with-the-value-", it got easier to zip!
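For what it's worth, a minimal sketch of that measure-after-VACUUM loop using R's DBI and RSQLite packages (the file, table, and zip names are placeholders; any SQLite client can run the same statements):
library(DBI)
con <- dbConnect(RSQLite::SQLite(), "mydb.sqlite")  # placeholder database file
dbExecute(con, "UPDATE mytable SET perma_id = 'pid-' || _rowid_")
dbExecute(con, "VACUUM")  # reclaim the free pages before compressing
dbDisconnect(con)
zip("mydb.zip", "mydb.sqlite")  # utils::zip
c(db = file.size("mydb.sqlite"), zipped = file.size("mydb.zip"))  # compare sizes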

Import table with irregular linebreaks

I have a 2.8 GB text file that I'm trying to import into R.
1) I used fread(file = 'file.txt', sep = ';', header = T, nrows = 1000, stringsAsFactors = F, fill = T) to take a quick look, and I saw that some rows show NAs in some columns, with the values that should be there appearing in the row below.
2) Next, I used HJSplit to view part of the file in a notepad and noticed that there are line breaks in the middle of some rows, so those rows occupy two lines. Here's an illustration of what's happening (example of a ';'-separated file with 4 columns):
id;name;age;sex
150;bob;40;F
151;luke;20;M
152;mary
20;F
153;larry;30;M
Question: Is there a way that I can solve this problem?
One thing that came to my mind was to use the fact that the number of columns is fixed, but I don't know how.
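One possible approach, sketched in R using the 4-column, ';'-separated layout from the example above (the file name is a placeholder): read the raw lines, glue any line with too few fields onto the pending row, then parse the repaired text with fread.
library(data.table)
# count the number of ';'-separated fields in a string
count_fields <- function(x) lengths(regmatches(x, gregexpr(";", x, fixed = TRUE))) + 1L
lines <- readLines("file.txt")  # placeholder file name
expected <- 4L                  # the example file has 4 columns
repaired <- character(0)
buf <- ""
for (ln in lines) {
  # join a continuation line onto the pending partial row; the illustration suggests
  # the stray newline swallowed a ';', so put it back (use paste0 instead if the
  # break fell inside a field)
  buf <- if (nzchar(buf)) paste(buf, ln, sep = ";") else ln
  if (count_fields(buf) >= expected) {  # the row is complete once it has enough fields
    repaired <- c(repaired, buf)
    buf <- ""
  }
}
dt <- fread(text = paste(repaired, collapse = "\n"), sep = ";", header = TRUE)
For a 2.8 GB file you would probably read and repair it in chunks rather than holding every line in memory, but the merging logic stays the same.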

Is it possible to remove the binary values that prefix the output of teradata fexp?

I am trying to run teradata fexp with a simple sql script.
The select output column is a string expression and as such results in 2 extra length indicator bytes at the start of each row output.
I have searched for solutions online to the problem. I would like to avoid having to post-process if possible.
There is a thread suggesting the possibility of using an OUTMOD. I don't know what that is.
https://forums.teradata.com/forum/tools/fastexport-remove-binaryindicator-values-in-outmod
http://teradataforum.com/teradata/20100726_155313.htm
And yet another thread suggests casting to a fixed-width string type, but this would result in padding, which I'd like to avoid.
https://forums.teradata.com/forum/tools/fexp-data-doubt
The desired output is actually a delimited plain text file. Is there a way to do it?

Integer zero "0" will be ignored when uploaded to SQL Server

I have a page that allows a user to upload an Excel file and insert its data into SQL Server. Now I have a small issue: there is a column in the Excel file with values such as "001", "029", "236". When they are inserted into SQL Server, the leading zero is dropped, so the data becomes "1", "29", "236". The data type for the column in SQL is varchar(10). How do I solve this?
Excel seems to automatically convert cell values to numbers. Try prefixing the cell contents with a single quote in the Excel sheet prior to processing. Eg '001. If you can't trust the users to do that, use a string formatting routine to left pad the numbers with zeroes.
Something must be converting the data in the excel cell from a string to an integer. How are you performing the insert?
If a user enters 001 into Excel, it will be converted to the number 1.
If the user enters '001 into Excel, it will be saved in the cell as text.
If the cell is pre-formatted with the number format "@", then when the user enters 001 into the cell it will be entered as the text "001". The "@" format tells Excel that the cell is a text cell, and any entry (whether it looks like a number, date, time, fraction, etc.) should simply be placed in the cell as is - as text.
Can you tell your users to pre-format this column with "@"? This is generally the most reliable way to handle this, since the user does not have to remember to enter '001.
Maybe setting the cell format to "Text" in Excel will help.
Excel is probably the culprit here. Try converting your file to CSV and see how it comes out. If the leading zeros are gone in the new CSV file, Excel is the problem.
Excel always does this, and it's a nuisance. There are three workarounds I know of:
BEFORE entering the data in any cell, format the cell as text (you can do a whole column if needed). This only works if you control the spreadsheets and users, which is basically never :-).
Assume you'll get a mix of numbers and/or text in the Excel data, and fix it in Excel before import: add a column to the spreadsheet and use the TEXT() function to convert the number to text, as in =TEXT(A2, "000"); fill down. Also assumes you can edit the worksheet.
Assume you have to fix the numbers upon insert in your code. Depending on how you are loading the data, that could happen in T-SQL or in your other code. In TSQL this expression works to pad with zeros to a width of 3 characters: right( '000' + cast ( 2 as varchar(3) ), 3 )
