Merging two text files into one in Scilab

How do I combine two text files into a single text file in Scilab? I'm using the following code to write a text file.
filename = fullfile("filepath");
csvWrite(M,filename,ascii(9),".",4);
mgetl(filename);
One text file contains text lines while the other one contains string values. Please help me combine the two so that the text lines come above the column of string values.

I am not sure I understand exactly what you want to do. If you want to concatenate two ASCII (not binary) files, you can proceed as follows:
mputl([mgetl("file1");mgetl("file2")], "file12")
If you want to form a text file from a text file and a binary file, you first have to read the text file:
t1=mgetl("file1")
Then read the data of the binary file using the mopen, mget and mclose functions; you must know how the data are stored in the file (integer, double, ...). Then format the data as you wish, using the string or msprintf function, to form a t2 array of strings. Finally, write [t1;t2] out with mputl.
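A minimal sketch of those steps, assuming the binary file holds n doubles (the file names and the count n are placeholders, not from the question):
t1 = mgetl("file1");            // read the text file as an array of lines
fd = mopen("file2.bin", "rb");  // open the binary file for reading
x = mget(n, "d", fd);           // read n doubles; change "d" for other types
mclose(fd);
t2 = msprintf("%f\n", x');      // one string per value (x' makes a column)
mputl([t1; t2], "file12");      // text lines first, then the formatted values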

Related

Optimized way to write a series of strings to a text file without quotations

I am new to Julia, so sorry if this question is obvious.
I am trying to use Julia to help me run a series of finite element models, which use a text input file to give instructions to the finite element solver. Basically, I would like to use Julia to read in the base input file, edit some parameters on some lines of the file and then write it as a new file. I am getting hung up on a couple things though.
Currently, I am reading in the file like this
mdl = "fullmodelSVTV"; #name of input file
A = readlines(mdl*".inp")
This reads each line of the file in as a separate string in a vector, which I like because it makes it easier to edit the sections I want, but it also makes things more difficult when I try to write to a new file.
I am writing the file like this:
io = open("name.inp","w")
print(io,A)
close(io)
When I try to write to a new file, the output ends up looking like this:
["string at index 1","string at index 2","string at index 3"...]
What I would like is to output this the exact same way it is read in, with the string at each index of the vector on its own line. I would also like to remove the brackets and quotation marks from the file, as they might interfere with the finite element solver.
I think I have found a way to concatenate all of the strings and separate them with newlines, as shown below.
conc = ""
for i in 1:length(A)
    conc = conc * "\n" * A[i]
end
The issue with this is that it takes a long time given the size of the input files I am working with, and I feel like there has to be a better way to achieve my goal.
I also cannot find a way to remove the brackets or quotation marks when writing the file.
So, I'm wondering if anyone has advice on a better way to write these text files, both concatenating all of the strings from the vector when outputting and outputting without the brackets and quotation marks.
Thanks, any advice is appreciated.
The issue with print(io,A) is that it is printing a representation of the vector, but in fact you want to print each element of the vector. To do so, you can simply print each line in a loop:
open("name.inp", "w") do io
for line in A
println(io, line)
end
end
This avoids the quadratic overhead of repeatedly concatenating strings.
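Equivalently, the whole vector can be written with a single call; a small sketch (join has a method that writes directly to an IO stream, so no big intermediate string is built):
open("name.inp", "w") do io
    join(io, A, '\n')   # write every element, separated by newlines
    println(io)         # final trailing newline
end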

How do I read in data sitting in a folder with a quotation mark in the name?

So let's say I have a file data.csv that I want to read into R. However, this file is in the path C:/Users/abc/Documents/"My Data's Methods". Notice the quotation mark after Data. How can I read the CSV in using fread()?
You can wrap the whole string in double quotes, fread("path/My Data's Methods").
Try:
data.table::fread('C:/Users/abc/Documents/"My Data\'s Methods"/data.csv')

Dealing with quotation marks in a quote-surrounded string

Take this CSV file:
ID,NAME,VALUE
1,Blah,100
2,"Has space",200
3,"Ends with quotes"",300
4,""Surrounded with quotes"",300
It loads just fine in most statistical programs (R, SAS, etc.) but in Excel the third row is misinterpreted because it has two quotation marks. Escaping the last quote as \" will also not work in Excel. The only way I have found so far is to replace the one double quote with two double quotes:
ID,NAME,VALUE
1,Blah,100
2,"Has space",200
3,"Ends with quotes""",300
4,"""Surrounded with quotes""",300
But that would render the file completely useless for all other programs (R, SAS, etc.)
Is there a way to format the CSV file where strings can begin or end with the same characters as that used to surround them, such that it would work in Excel as well as commonly used statistical software?
Your second representation is the normal way to generate a CSV file, so it should be easy to work with in any software. See the RFC 4180 specification: https://www.ietf.org/rfc/rfc4180.txt
So your second example represents this data:
Obs  id  name                      value
  1   1  Blah                        100
  2   2  Has space                   200
  3   3  Ends with quotes"           300
  4   4  "Surrounded with quotes"    300
If you want to represent it as a delimited file where none of the values are allowed to contain the delimiter (in other words, NOT as a standard CSV file), then it would look like:
id,name,value
1,Blah,100
2,Has space,200
3,Ends with quotes",300
4,"Surrounded with quotes",300
But if you want to allow the values to contain the delimiter, then you need some way to distinguish embedded delimiters from real delimiters. So the standard forces values that contain the delimiter to be quoted. But once you do that, you also need to add quotes around fields that contain the quote character itself (and double the embedded quotes) to avoid making an ambiguous file. For example, the quotes in the 4th observation in your first file look like optional quotes around a value instead of part of the value.
Many programs try to handle ambiguous situations. For example, SAS does not allow values to contain embedded line breaks, so you will always get four observations with your first example file.
But Excel allows end-of-line character(s) to be embedded inside quoted values. So in your original file, the value of the second field in the third observation looks like what you would get if you quoted this value:
Ends with quotes",300
4,"Surrounded with quotes",300
So instead of four complete observations with three field values each, there are only three observations, and the last observation has only two field values.
This is caused by the fact that the escape character for " in Excel is "" (see: Escaping quotes and delimiters in CSV files with Excel).
A quick and simple workaround in R is to first read the content of the CSV with readLines, then replace the doubled (escaped) double quotes with single double quotes, and then use read.table:
read.table(
  text = gsub(pattern = "\"\"", "\"", readLines("data.csv")),
  sep = ",",
  header = TRUE
)

readcsv fails to read # character in Julia

I've been using asd=readcsv(filename) to read a csv file in Julia.
The first row of the csv file contains strings which describe the column contents; the rest of the data is a mix of integers and floats. readcsv reads the numbers just fine, but only reads the first 4+1/2 string entries.
After that, it renders "". If I ask the REPL to display asd[1,:], it tells me it is 1x65 Array{Any,2}.
The fifth column in the first row of the csv file (this seems to be the entry it chokes on) is APP #1 bias voltage [V]; but asd[1,5] is just APP . So it looks to me as though readcsv has choked on the "#" character.
I tried using the quotes=false keyword in readcsv, but it didn't help.
I used to use xlsread in Matlab and it worked fine.
Has anybody out there seen this sort of thing before?
The comment character in Julia is #, and this applies when reading delimited text files.
But luckily, the readcsv() and readdlm() functions have an optional argument to help in these situations.
You should try readcsv(filename; comment_char = '/').
Of course, the example above assumes that you don't have any / characters in your first line. If you do, then you'll have to change that / above to something else.
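For reference, readcsv has since been removed from Julia; a hedged equivalent on Julia 1.x uses readdlm from the DelimitedFiles standard library, where the comments keyword controls comment handling:
using DelimitedFiles
# comments=false disables comment parsing entirely, so a literal '#'
# in a header like "APP #1 bias voltage [V]" is read as ordinary data.
asd = readdlm("data.csv", ','; comments=false)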

Integer zero, "0", will be ignored when uploaded to SQL Server

I have a page that allows a user to upload an Excel file and insert the data from the Excel file into SQL Server. Now I have a small issue: there is a column in the Excel file with values such as "001", "029", "236". When they are inserted into SQL Server, the leading zeros are dropped, so the data become "1", "29", "236". The data type for the column in SQL is varchar(10). How do I solve this?
Excel seems to automatically convert cell values to numbers. Try prefixing the cell contents with a single quote in the Excel sheet prior to processing, e.g. '001. If you can't trust the users to do that, use a string formatting routine to left-pad the numbers with zeroes.
Something must be converting the data in the excel cell from a string to an integer. How are you performing the insert?
If a user enters 001 into Excel, it will be converted to the number 1.
If the user enters '001 into Excel, it will be saved in the cell as text.
If the cell is pre-formatted with the Text number format ("@"), then when the user enters 001 into the cell it will be entered as the text "001". The "@" number format tells Excel that the cell is a text cell and any entry (whether it looks like a number, date, time, fraction, etc...) should simply be placed in the cell as is, as text.
Can you tell your users to pre-format this column as Text? This is generally the most reliable way to handle this, since the user does not have to remember to enter '001.
Maybe setting the "Text" data type for the Excel cells will help.
Excel is probably the culprit here. Try converting your file to CSV and see how it comes out. If the leading zeros are gone in the new CSV file, Excel is the problem.
Excel always does this, and it's a nuisance. There are three workarounds I know of:
1. BEFORE entering the data in any cell in Excel, format the cell as text (you can do a whole column if needed). This only works if you control the spreadsheets and users, which is basically never :-).
2. Assume you'll get a mix of numbers and/or text in the Excel data, and fix it in Excel before import: add a column to the spreadsheet and use the TEXT() function to convert the number to text, as in =TEXT(A2, "000"), and fill down. This also assumes you can edit the worksheet.
3. Assume you have to fix the numbers upon insert in your code. Depending on how you are loading the data, that could happen in T-SQL or in your other code. In T-SQL, this expression pads with zeros to a width of 3 characters: right( '000' + cast ( 2 as varchar(3) ), 3 ). See the sketch below.
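A minimal T-SQL sketch of that third approach (the table and column names are hypothetical, not from the question):
-- Pad the staged Excel value back out to three characters with leading zeros.
INSERT INTO dbo.Items (code)
SELECT RIGHT('000' + CAST(s.code AS varchar(3)), 3)
FROM dbo.ExcelStaging AS s;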
