DEFINE VARIABLE cExportData AS CHARACTER NO-UNDO FORMAT 'X(250)'.
DEFINE VARIABLE cPath AS CHARACTER NO-UNDO.
DEFINE VARIABLE cExt AS CHARACTER NO-UNDO.
DEFINE VARIABLE cSFTL AS CHARACTER NO-UNDO FORMAT 'X(150)'.
DEFINE VARIABLE cMessageDateTime AS CHARACTER NO-UNDO.
ASSIGN
cPath = "R:\Downloads\progress\".
cExt = ".Txt".
cMessageDateTime = "123456789".
OUTPUT TO VALUE (cPath + cMessageDateTime + STRING(MTIME) + cExt ).
cExportData = "Data1" + CHR(10) + "Data2" + CHR(10) + "Data3" + CHR(10) + "END.".
MESSAGE cExportData.
OUTPUT TO CLOSE.
When I open the exported text file in Notepad++ I can see the first three lines (Data1, Data2, Data3), but a fourth, empty line is also created. How do I stop the empty line from being created?
MESSAGE is not usually what you want to use for output to a file; it has many extra behaviors specific to interacting with users in the context of providing error messages etc. PUT is generally more appropriate for writing files. Embedding CHR(10) is also not a good idea -- that is a very OS-specific line terminator. CHR(10) is a Unix-style newline, but you are clearly running on Windows (which uses CHR(13) + CHR(10)).
I might re-write your code as follows:
DEFINE VARIABLE cExportData AS CHARACTER NO-UNDO FORMAT 'X(250)'.
DEFINE VARIABLE cPath AS CHARACTER NO-UNDO.
DEFINE VARIABLE cExt AS CHARACTER NO-UNDO.
DEFINE VARIABLE cSFTL AS CHARACTER NO-UNDO FORMAT 'X(150)'.
DEFINE VARIABLE cMessageDateTime AS CHARACTER NO-UNDO.
/* the "." that you had at the ends of the ASSIGN sub statements
* is turning it into 3 distinct statements, not one as your
* indentation shows
*/
ASSIGN
cPath = "R:\Downloads\progress\"
cExt = ".Txt"
cMessageDateTime = "123456789"
. /* end the ASSIGN statement */
/* if you are using MTIME because you imagine it will make your
* filename unique then you are mistaken, on a multi-user or
* networked system it is trivial for 2 processes to create files
* at the very same MTIME
*/
OUTPUT TO VALUE (cPath + cMessageDateTime + STRING(MTIME) + cExt ).
/* usually some kind of looping structure would output each line
* building the whole output by concatenating into a string will
* eventually exhaust memory.
*/
put unformatted "Data1" skip "Data2" skip "Data3" skip "End." skip.
/* the final SKIP might not be needed - it is unclear to me
* if that is a problem for your client
*/
/* as originally written this creates
* an empty file called "CLOSE"
*/
OUTPUT /*** TO ***/ CLOSE.
In my program I am outputting a .csv file which exceeds 1,000,000 lines. Currently, after the file is exported, I split the file on Linux using the commands below. However, I would like to know if we can split the files using Progress code. If so, could someone please let me know how to do it.
Below are the Linux commands I run manually.
ls -l xa*
split -1000000 filename.csv
mv xaa filename-01.csv
mv xab filename-02.csv
Without any code to work with, I invented some code that outputs to different files. You will have to adapt OUTPUT TO and set new filenames yourself.
This example will output 1050 lines split in files of 100 lines each.
DEFINE VARIABLE iLinesToOutput AS INTEGER NO-UNDO INIT 1050.
DEFINE VARIABLE iSplitAt AS INTEGER NO-UNDO INIT 100.
DEFINE VARIABLE iLine AS INTEGER NO-UNDO.
DEFINE VARIABLE cFile AS CHARACTER NO-UNDO.
DEFINE VARIABLE iFile AS INTEGER NO-UNDO.
DEFINE VARIABLE iOpen AS INTEGER NO-UNDO.
DEFINE STREAM str.
DO iLine = 1 TO iLinesToOutput:
    // Open a new stream/file
    IF (iLine - 1) MOD iSplitAt = 0 THEN DO:
        iFile = iFile + 1.
        cFile = "c:\temp\file-" + STRING(iFile, "999") + ".txt".
        OUTPUT STREAM str TO VALUE(cFile).
        EXPORT STREAM str DELIMITER "," "Customer ID" "Order Number" "Contact" "Count".
    END.
    // Output some data
    PUT STREAM str UNFORMATTED "Line " iLine SKIP.
    // Close the stream/file
    IF iLine MOD iSplitAt = 0 THEN DO:
        OUTPUT STREAM str CLOSE.
    END.
END.
/* Close last file if not exactly right number of lines */
/* This could also be checked but close twice doesn't really matter */
OUTPUT STREAM str CLOSE.
I have created a .html file from a Progress program which contains a table of rows and columns.
I would like to add the contents of the HTML file to an email body that I am sending with the "febooti" email utility on Windows.
How can I send this HTML file from my Progress program using "febooti"?
The febooti.com website says that the tool supports HTML in the body of the email:
https://www.febooti.com/products/command-line-email/commands/htmlfile/
There are a lot of options but a simple 4gl test example might look something like this:
define variable smtpServer as character no-undo.
define variable emailFrom as character no-undo.
define variable emailTo as character no-undo.
define variable emailSubject as character no-undo.
define variable altText as character no-undo.
define variable htmlFileName as character no-undo.
define variable htmlContent as character no-undo.
assign
smtpServer = "smtp.server.com"
emailFrom = "varun@email.com"
emailTo = "someone@email.com"
emailSubject = "Test email!"
altText = "Sorry, your email app cannot display HTML"
htmlFileName = "test.html"
.
/* this is obviously just an example, according to your question you
* have already created the HTML and don't actually need to do this
*/
htmlContent = "<table> <tr><td>abc</td></tr> <tr><td>...</td></tr> <tr><td>xyz</td></tr> </table>".
output to value( htmlFileName ).
put unformatted htmlContent skip.
output close.
/* end of stub html file creation - use your real code */
/* this shells out and sends the email using whatever
* is in htmlFileName as the content
*/
/* quote the arguments that may contain spaces */
os-command value( substitute( 'febootimail -SERVER &1 -FROM &2 -TO &3 -SUBJECT "&4" -HTMLFILE &5 -TEXT "&6"', smtpServer, emailFrom, emailTo, emailSubject, htmlFileName, altText )).
I have written a program that exports some text files to a specific directory. I thought using MTIME was the best way to get a unique name, but this is a problem when multiple processes export the same file name using MTIME.
Could you please tell me the best way to get a unique file name? Let me share a sample.
DEFINE INPUT PARAMETER ipData1 AS CHARACTER NO-UNDO.
DEFINE INPUT PARAMETER ipData2 AS CHARACTER NO-UNDO.
DEFINE INPUT PARAMETER ipData3 AS CHARACTER NO-UNDO.
DEFINE VARIABLE cExportData AS CHARACTER NO-UNDO FORMAT 'X(250)'.
DEFINE VARIABLE cPath AS CHARACTER NO-UNDO.
DEFINE VARIABLE cExt AS CHARACTER NO-UNDO.
DEFINE VARIABLE cSFTL AS CHARACTER NO-UNDO FORMAT 'X(150)'.
DEFINE VARIABLE cMessageDateTime AS CHARACTER NO-UNDO.
ASSIGN
cPath = "R:\Downloads\progress\"
cExt = ".Txt"
cMessageDateTime = "123456789".
OUTPUT TO VALUE (cPath + cMessageDateTime + STRING(MTIME) + cExt ).
put unformatted ipData1 skip ipData2 skip ipData3 skip "End.".
OUTPUT CLOSE.
You have several options:
Use the program that Progress has supplied: adecomm/_tmpfile.p
define variable fname as character no-undo format "x(30)".
run adecomm/_tmpfile.p ( "xxx", ".tmp", output fname ).
display fname.
Use a GUID:
define variable fname as character no-undo format "x(30)".
fname = substitute( "&1&3&2", "xxx", ".tmp", GUID( GENERATE-UUID )).
display fname.
Ask Windows to do it (if you are always running on Windows):
define variable fname as character no-undo format "x(30)".
fname = System.IO.Path:GetTempFileName().
display fname.
Trial and error:
define variable fname as character no-undo.
do while true:
    fname = substitute( "&1&3&2", "xxx", ".tmp", string( random( 1, 1000 ), "9999" )).
    file-info:file-name = fname.
    if file-info:full-pathname = ? then leave. /* if the file does NOT exist it is ok to use this name */
end.
display fname.
You'll probably also need to pass in a token or an identifier of some sort to make this truly unique. Maybe the username, or the machine's IP, something like that. Then my advice would be to concatenate that with:
REPLACE (STRING(TODAY),'/','') + STRING(MTIME).
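Putting that advice together, a minimal sketch (the path, the ".txt" extension, and the use of OS-GETENV("COMPUTERNAME") as the token are illustrative assumptions, not part of the original code):

```progress
/* Sketch: combine a per-machine token with the date and MTIME.
 * The path, extension, and COMPUTERNAME token are illustrative
 * assumptions -- substitute whatever identifier you have. */
DEFINE VARIABLE cToken    AS CHARACTER NO-UNDO.
DEFINE VARIABLE cFileName AS CHARACTER NO-UNDO.

ASSIGN
    cToken    = OS-GETENV("COMPUTERNAME")   /* machine identifier on Windows */
    cFileName = "R:\Downloads\progress\"
              + cToken + "-"
              + REPLACE(STRING(TODAY), "/", "")
              + STRING(MTIME)
              + ".txt".
```

Note this still isn't guaranteed unique if two processes on the same machine hit the same MTIME; see Tom's answer for safer options.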
Edit: even though OP has flagged my answer as correct, it's not. Check Tom's answer to this for better options.
I have used this before:
("Filename" + STRING(TODAY,"999999") + ".csv").
How do I write code for a program that can accept three input parameters: x, y, and the filename to write to?
I should be able to call the program like this:
run prog.p (input "1", input 5, input "filename1.csv").
So far I have written the code below and I am not sure how to go about it.
OUTPUT TO xxxxxx\filename1.csv".
DEFINE VARIABLE Profit AS DECIMAL FORMAT "->>,>>9.99":U INITIAL 0 NO-UNDO.
EXPORT DELIMITER "," "Amount" "Customer Number" "Invoice Date" "Invoice Number" "Total_Paid" "Profit".
FOR EACH Invoice WHERE Invoice.Ship-charge > 5.00
AND Invoice.Total-Paid > 0.01
AND Invoice.Invoice-Date GE 01/31/93 /* this is between also can use < >*/
AND Invoice.Invoice-Date LE TODAY NO-LOCK:
Profit = (Invoice.Invoice-Num / Invoice.Total-Paid) * 100.
EXPORT DELIMITER "," Amount Cust-Num Invoice-Date Invoice-Num Total-Paid Profit.
END.
OUTPUT CLOSE.
Thank you.
You're on the right track! OUTPUT TO VALUE(variable) is what might help you. Also you should possibly use a named stream.
It's not clear to me what parameters x and y should do so I just inserted them as dummies below.
Note:
You're commenting about using <> instead of GE. That might work logically but could (will) affect your performance by forcing the database to scan entire tables instead of using an index.
Something like this:
DEFINE INPUT PARAMETER pcX AS CHARACTER NO-UNDO.
DEFINE INPUT PARAMETER piY AS INTEGER NO-UNDO.
DEFINE INPUT PARAMETER pcFile AS CHARACTER NO-UNDO.
/* Bogus temp-table to make the program run... */
/* Remove this unless just testing without database ...*/
DEFINE TEMP-TABLE Invoice NO-UNDO
    FIELD Ship-Charge AS DECIMAL
    FIELD Total-Paid AS DECIMAL
    FIELD Invoice-Date AS DATE
    FIELD Invoice-Num AS INTEGER
    FIELD Amount AS INTEGER
    FIELD Cust-Num AS INTEGER.
DEFINE STREAM str.
DEFINE VARIABLE Profit AS DECIMAL FORMAT "->>,>>9.99":U INITIAL 0 NO-UNDO.
OUTPUT STREAM str TO VALUE(pcFile).
EXPORT STREAM str DELIMITER "," "Amount" "Customer Number" "Invoice Date" "Invoice Number" "Total_Paid" "Profit".
FOR EACH Invoice NO-LOCK
    WHERE Invoice.Ship-charge > 5.00
      AND Invoice.Total-Paid > 0.01
      AND Invoice.Invoice-Date GE 01/31/93 /* this is between; also can use < > */
      AND Invoice.Invoice-Date LE TODAY:
    Profit = (Invoice.Invoice-Num / Invoice.Total-Paid) * 100.
    EXPORT STREAM str DELIMITER "," Amount Cust-Num Invoice-Date Invoice-Num Total-Paid Profit.
END.
OUTPUT STREAM str CLOSE.
Now you can run this program, assuming it's named "program.p":
RUN program.p("1", 5, "c:\temp\file.txt").
or
RUN program.p(INPUT "1", INPUT 5, INPUT "c:\temp\file.txt").
(INPUT is the default direction for parameters).
EDIT:
Added a run example and changed the first input parameter to CHARACTER instead of INTEGER.
I'm trying to write a .p script that will export a table from a database as a csv. The following code creates the csv:
OUTPUT TO VALUE ("C:\Users\Admin\Desktop\test.csv").
FOR EACH table-name NO-LOCK:
EXPORT DELIMITER "," table-name.
END.
OUTPUT CLOSE.
QUIT.
However, I can't figure out how to encapsulate all of the fields with double quotes. Nor can I figure out how to get the first row of the .csv to have the column names of the table. How would one go about doing this?
I'm very new to Progress / 4GL. Originally I was using R and an ODBC connection to import and format the table before saving it as a csv. But I've learned that the ODBC driver I'm using does not work reliably...sometimes it will not return all the rows in the table.
The ultimate goal is to pass an argument (table-name) to a .p script that will export the table as a csv. Then I can import the csv in R, manipulate / format the data and then export the table again as a csv.
Any advice would be greatly appreciated.
EDIT:
The version of Progress I am using is 9.1D
Using the above code, the output might look like this...
"ACME",01,"Some note that may contain carriage returns.\n More text",yes,"01A"
The reason for trying to encapsulate every field with double quotes is because some fields may contain carriage returns or other special characters. R doesn't always like carriage return in the middle of field. So the desired output would be...
"ACME","01","Some note that may contain carriage returns.\n More text","yes","01A"
Progress version is important to know. Your ODBC issue is likely caused by the fact that formats in Progress are default display formats and don't actually limit the amount of data to be stored. Which of course drives SQL mad.
You can use this KB to learn about the DBTool utility to fix the SQL width http://knowledgebase.progress.com/articles/Article/P24496
As far as the export is concerned what you are doing will already take care of the double quotes for character columns. You have a few options to solve your header issue depending on your version of Progress. This one will work no matter your version but is not as elegant as the newer options....
Basically copy this into the procedure editor and it will generate a program with internal procedures for each table in your DB. Run csvdump.p by passing in the table name and the csv file you want (run csvdump.p ("mytable","myfile")).
Disclaimer: you may run into some odd datatypes that can't be exported, like RAW, but they aren't very common.
DEF VAR i AS INTEGER NO-UNDO.
OUTPUT TO csvdump.p.
PUT UNFORMATTED
"define input parameter ipTable as character no-undo." SKIP
"define input parameter ipFile as character no-undo." SKIP(1)
"OUTPUT TO VALUE(ipFile)." SKIP(1)
"RUN VALUE('ip_' + ipTable)." SKIP(1)
"OUTPUT CLOSE." SKIP(1).
FOR EACH _file WHERE _file._tbl-type = "T" NO-LOCK:
PUT UNFORMATTED "PROCEDURE ip_" _file._file-name ":" SKIP(1)
"EXPORT DELIMITER ~",~"" SKIP.
FOR EACH _field OF _File NO-LOCK BY _Field._Order:
IF _Field._Extent = 0 THEN
PUT UNFORMATTED "~"" _Field-Name "~"" SKIP.
ELSE DO i = 1 TO _Field._Extent:
PUT UNFORMATTED "~"" _Field-Name STRING(i,"999") "~"" SKIP.
END.
END.
PUT UNFORMATTED "." SKIP(1)
"FOR EACH " _File._File-name " NO-LOCK:" SKIP
" EXPORT DELIMITER ~",~" " _File._File-Name "." SKIP
"END." SKIP(1).
PUT UNFORMATTED "END PROCEDURE." SKIP(1).
END.
OUTPUT CLOSE.
BIG Disclaimer.... I don't have 9.1D to test with since it is well past the supported date.... I believe all of this will work though.
There are other ways to do this even in 9.1D (dynamic queries) but this will probably be easier for you to modify if needed since you are new to Progress. Plus it is likely to perform better than purely dynamic exports. You can keep nesting the REPLACE functions to get rid of more and more characters... or just copy the replace line and let it run over and over if needed.
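For instance, the nesting pattern on its own looks like this (cRaw and cClean are hypothetical variable names; CHR(13) is CR, CHR(10) is LF, CHR(9) is tab):

```progress
DEFINE VARIABLE cRaw   AS CHARACTER NO-UNDO.
DEFINE VARIABLE cClean AS CHARACTER NO-UNDO.

cRaw = "line one" + CHR(13) + CHR(10) + "line two" + CHR(9) + "tabbed".

/* Each additional nested REPLACE strips one more unwanted character. */
cClean = REPLACE(REPLACE(REPLACE(cRaw, CHR(13), ""), CHR(10), ""), CHR(9), "").
```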
DEF VAR i AS INTEGER NO-UNDO.
FUNCTION fn_Export RETURNS CHARACTER (INPUT ipExtent AS INTEGER):
IF _Field._Data-Type = "CHARACTER" THEN
PUT UNFORMATTED "fn_Trim(".
PUT UNFORMATTED _File._File-Name "." _Field._Field-Name.
IF ipExtent > 0 THEN
PUT UNFORMATTED "[" STRING(ipExtent) "]" SKIP.
IF _Field._Data-Type = "CHARACTER" THEN
PUT UNFORMATTED ")".
PUT UNFORMATTED SKIP.
RETURN "". /* a non-VOID function must return a value */
END FUNCTION.
OUTPUT TO c:\temp\wks.p.
PUT UNFORMATTED
"define input parameter ipTable as character no-undo." SKIP
"define input parameter ipFile as character no-undo." SKIP(1)
"function fn_Trim returns character (input ipChar as character):" SKIP
" define variable cTemp as character no-undo." SKIP(1)
" if ipChar = '' or ipChar = ? then return ipChar." SKIP(1)
" cTemp = replace(replace(ipChar,CHR(13),''),CHR(11),'')." SKIP(1)
" return cTemp." SKIP(1)
"end." SKIP(1)
"OUTPUT TO VALUE(ipFile)." SKIP(1)
"RUN VALUE('ip_' + ipTable)." SKIP(1)
"OUTPUT CLOSE." SKIP(1).
FOR EACH _file WHERE _file._tbl-type = "T" NO-LOCK:
PUT UNFORMATTED "PROCEDURE ip_" _file._file-name ":" SKIP(1)
"EXPORT DELIMITER ~",~"" SKIP.
FOR EACH _field OF _File NO-LOCK BY _Field._Order:
IF _Field._Extent = 0 THEN
PUT UNFORMATTED "~"" _Field-Name "~"" SKIP.
ELSE DO i = 1 TO _Field._Extent:
PUT UNFORMATTED "~"" _Field-Name STRING(i) "~"" SKIP.
END.
END.
PUT UNFORMATTED "." SKIP(1)
"FOR EACH " _File._File-name " NO-LOCK:" SKIP.
PUT UNFORMATTED "EXPORT DELIMITER ~",~"" SKIP.
FOR EACH _field OF _File NO-LOCK BY _Field._Order:
IF _Field._Extent = 0 OR _Field._Extent = ? THEN
fn_Export(0).
ELSE DO i = 1 TO _Field._Extent:
fn_Export(i).
END.
END.
PUT UNFORMATTED "." SKIP(1)
"END." SKIP(1).
PUT UNFORMATTED "END PROCEDURE." SKIP(1).
END.
OUTPUT CLOSE.
I beg to differ on one small point with @TheMadDBA: using EXPORT will not deal with quoting all the fields in your output in CSV style. Logical fields, for example, will not be quoted.
'CSV format' is the vaguest of standards, but the EXPORT command does not conform with it. It was not designed for that. (I notice that in @TheMadDBA's final example, they do not use EXPORT, either.)
If you want all the non-numeric fields quoted, you need to handle this yourself.
def stream s.
output stream s to value(v-filename).
for each tablename no-lock:
    put stream s unformatted
        '"' tablename.charfield1 '"'
        ',' string(tablename.numfield)
        ',"' tablename.charfield2 '"'
        skip.
end.
output stream s close.
In this example I'm assuming that you are okay with coding a specific dump for a single table, rather than a generic solution. You can certainly do the latter with meta-programming as in @TheMadDBA's answer, with ABL's dynamic query syntax, or even with -- may the gods forgive us both -- include files. But that's a more advanced topic, and you said you were just starting with ABL.
You will still have to deal with string truncation as per @TheMadDBA's answer.
After some inspiration from @TheMadDBA's answer and some additional thought, here is my solution to the problem...
I decided to write a script in R that would generate the .p scripts. The R script takes one input, the table name, and dumps out the .p script.
Below is a sample .p script...
DEFINE VAR columnNames AS CHARACTER.
columnNames = """" + "Company" + """" + "|" + """" + "ABCCode" + """" + "|" + """" + "MinDollarVolume" + """" + "|" + """" + "MinUnitCost" + """" + "|" + """" + "CountFreq" + """".
/* Define the temp-table */
DEFINE TEMP-TABLE tempTable
    FIELD tCompany AS CHARACTER
    FIELD tABCCode AS CHARACTER
    FIELD tMinDollarVolume AS CHARACTER
    FIELD tMinUnitCost AS CHARACTER
    FIELD tCountFreq AS CHARACTER.
FOR EACH ABCCode NO-LOCK:
    CREATE tempTable.
    tempTable.tCompany = STRING(Company).
    tempTable.tABCCode = STRING(ABCCode).
    tempTable.tMinDollarVolume = STRING(MinDollarVolume).
    tempTable.tMinUnitCost = STRING(MinUnitCost).
    tempTable.tCountFreq = STRING(CountFreq).
END.
OUTPUT TO VALUE ("C:\Users\Admin\Desktop\ABCCode.csv").
/* Output the column names */
PUT UNFORMATTED columnNames.
PUT UNFORMATTED "" SKIP.
/* Output the temp-table */
FOR EACH tempTable NO-LOCK:
    EXPORT DELIMITER "|" tempTable.
END.
OUTPUT CLOSE.
QUIT.
/* Done */
The R script makes an ODBC call to the DB to get the column names for the table of interest and then populates the template to generate the p script.
I'm not sure creating a temp table and casting everything as a character is the best way of solving the problem, but...
we have column names
everything is encapsulated in double quotes
and we can choose any delimiter (e.g. "|" instead of ",")
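If the temp-table indirection proves unnecessary, the PUT-based pattern from the earlier answer extends naturally to the "|" delimiter. A sketch only -- the ABCCode table and field names are taken from the sample above, and STRING() is used so every field can be quoted:

```progress
/* Sketch: quote every field and join with "|" directly,
 * without the intermediate temp-table. */
DEFINE STREAM sOut.

OUTPUT STREAM sOut TO VALUE("C:\Users\Admin\Desktop\ABCCode.csv").

/* Header row */
PUT STREAM sOut UNFORMATTED
    '"Company"|"ABCCode"|"MinDollarVolume"|"MinUnitCost"|"CountFreq"' SKIP.

FOR EACH ABCCode NO-LOCK:
    PUT STREAM sOut UNFORMATTED
        '"' STRING(ABCCode.Company)         '"|'
        '"' STRING(ABCCode.ABCCode)         '"|'
        '"' STRING(ABCCode.MinDollarVolume) '"|'
        '"' STRING(ABCCode.MinUnitCost)     '"|'
        '"' STRING(ABCCode.CountFreq)       '"' SKIP.
END.

OUTPUT STREAM sOut CLOSE.
```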