Save an Excel sheet as PDF programmatically through PowerBuilder - OLE

There is a requirement to save an Excel sheet as a PDF file programmatically through PowerBuilder (PowerBuilder 12.5.1).
I run the code below; however, I am not getting the right results. Please let me know if I should do something different.
OLEObject ole_excel;
ole_excel = create OLEObject;
IF ( ole_excel.ConnectToObject(ls_DocPath) = 0 ) THEN
ole_excel.application.activeworkbook.SaveAs(ls_DocPath,17);
ole_excel.application.activeworkbook.ExportAsFixedFormat(0,ls_DocPath);
END IF;
....... (Parsing values from excel)
DESTROY ole_excel;
I have searched through this community and others for a solution but no luck so far. I tried using two different commands that I found during this search. Both of them return a null object reference error. It would be great if someone can point me in the right direction.

It looks to me like you need to have a reference to the ActiveWorkbook. This would be of type OLEObject, so the declaration would be similar to: OLEObject lole_workbook.
Then you need to set this to the active workbook. Look in the Excel VBA help for something like a GetActiveWorkbook method. In PB you would then need to do something like
lole_workbook = ole_excel.application.activeworkbook
This gets PB a reference to the active workbook. Then do your SaveAs and so on like this: lole_workbook.SaveAs(ls_DocPath, 17)

The Workbook.SaveAs() documentation says that SaveAs() has the following parameters:
SaveAs(Filename, FileFormat, Password, WriteResPassword, ReadOnlyRecommended, CreateBackup, AccessMode, ConflictResolution, AddToMru, TextCodepage, TextVisualLayout, Local)
We need the first two parameters:
FileName - the full path with file name and extension, for instance: c:\myfolder\file.pdf
FileFormat - a predefined constant that represents the target file format.
According to Google (Microsoft does not list a PDF constant in the XlFileFormat enumeration), the FileFormat for PDF is 57,
so try the following call:
ole_excel.application.activeworkbook.SaveAs(ls_DocPath, 57);
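Putting both answers together, a minimal sketch; ls_PdfPath is a made-up variable for the target file name ending in .pdf, and error handling is left out:
OLEObject ole_excel, lole_workbook
ole_excel = CREATE OLEObject

IF ole_excel.ConnectToObject(ls_DocPath) = 0 THEN
    lole_workbook = ole_excel.application.activeworkbook   // explicit workbook reference (first answer)
    lole_workbook.SaveAs(ls_PdfPath, 57)                    // 57 = PDF file format (second answer)
    ole_excel.DisconnectObject()
END IF

DESTROY ole_excel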

Related

How to embed only the file name instead of the whole original file path in PGP encryption with the ChoPGP library?

I tried PGP-encrypting a file with the ChoPGP library. At the end of the process the embedded file name contains the whole original file path.
But I expected it to show only the file name without the path. How can I make that happen?
Doing the following:
using (ChoPGPEncryptDecrypt pgp = new ChoPGPEncryptDecrypt())
{
    pgp.EncryptFile(@"\\appdevtest\c$\appstest\Transporter\test\Test.txt",
        @"\\appdevtest\c$\appstest\Transporter\test\OSCTestFile_ChoPGP.gpg",
        @"\\appdevtest\c$\appstest\Transporter\pgpKey\PublicKey\atorres\atorres_publicKey.asc",
        true,
        true);
}
which results in the embedded file name containing the whole original path. But I would like only Test.txt to end up embedded in the end.
Looking at this line from the ChoPGP sources: https://github.com/Cinchoo/ChoPGP/blob/7152c7385022823013324408e84cb7e25d33c3e7/ChoPGP/ChoPGPEncryptDecrypt.cs#L221
you can see that it uses an internal GetFileName function, which for a FileStream ends up doing: return ((FileStream)stream).Name;.
And that property, according to the documentation, "Gets the absolute path of the file opened in the FileStream."
So you should either fork ChoPGP and modify this line to extract just the file name, or submit a pull request to ChoPGP. By the way, it is just a wrapper around BouncyCastle, so you could use that directly instead.
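If you go the fork route, the change could be as small as the sketch below; the method shape is assumed from the linked source line, so adjust it to the real signature in your copy:
// inside the forked ChoPGPEncryptDecrypt.cs (signature assumed for this sketch)
private static string GetFileName(System.IO.Stream stream)
{
    if (stream is System.IO.FileStream fileStream)
        // original: return ((FileStream)stream).Name;  -- the absolute path
        return System.IO.Path.GetFileName(fileStream.Name);  // keep only "Test.txt"

    return "name";  // fallback for non-file streams (assumed)
}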

In R, read.config is escaping a Google service key, but I don't want it to

For my job, I have to use a .my.cnf file to pass a password key into my R code, since I cannot (for security reasons) keep the key in the R code itself. So I have R code that looks like this:
library(configr)
my_configs = read.config(file = "~/.my.cnf")
my_googleservicekey = my_configs$api_authentication$googleServiceKeyJsonString
Separately, I have a .my.cnf file saved on my computer, and that file looks like this:
[api authentication]
googleServiceKeyJsonString = {\r\n \"type\": \"service_account\",\r\n \"project_id\": ... ... \r\n}
where the ... ... signifies that the actual googleServiceKey is much longer. This key was provided by a coworker of mine, and needs to be set exactly as such in the R variable my_googleservicekey. However, when I save the key like this in the .my.cnf file, and read it into R in the manner that I did, I get the following:
> my_googleservicekey
[1] "{\\r\\n \\\"type\\\": \\\"service_account\\\",\\r\\n \\\"project_id\\\": ... ... \\r\\n}"
So what happens is, somewhere along the way when read.config(file = "~/.my.cnf") is run, the string I saved in the .my.cnf file gets all of its quotes and backslashes escaped again, and as a result I don't have the correct value in R.
My question then is this:
Is it possible to save the key in the .my.cnf and have read.config in R read the key EXACTLY as it is saved in the .my.cnf file, so that it doesn't add the additional escapes?
Thanks in advance for any help with this!!
EDIT: In particular, are there any parameters to R's read.config function, or any other functions in R that could help out with this?
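For illustration, a minimal workaround sketch that bypasses configr entirely and reads the raw line with readLines(), so no INI-style unescaping is applied; the key name is the one from the question, but the parsing itself is an assumption, not part of configr's API:
# read the raw config file without any unescaping
raw_lines <- readLines("~/.my.cnf")

# find the line that defines the key (assumes the value sits on a single line)
key_line <- grep("^googleServiceKeyJsonString", raw_lines, value = TRUE)

# drop everything up to and including the "=" to keep the literal value
my_googleservicekey <- sub("^googleServiceKeyJsonString[[:space:]]*=[[:space:]]*", "", key_line)

cat(my_googleservicekey)  # cat() shows the string without print()'s added display escapes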

How to write some amounts to a File?

I'm doing this program for a class. I have a listbox with 4 choices that are counted each time they're selected, and I'm supposed to write the results out to a file so that they can later be retrieved and displayed when asked for.
Here's an image, so you can see what I mean.
Here's the code I have for the file part so far:
'declare a streamwriter variable
Dim outFile As IO.StreamWriter
'open file for output
outFile = IO.File.CreateText("Commercials.txt")
'write to file
For intIndex As Integer = 0 To lstCommercial.Items.Count - 1
outFile.WriteLine(lstCommercial.Items(intIndex))
Next intIndex
'close the file
outFile.Close()
So my problem is this: I can get everything to work except writing the totals to the file, with the result that nothing is displayed. How do I go about doing this? What am I doing wrong?
It depends on what lstCommercial.Items is.
If it is a Collection object, then you can get the index of an item using the following:
outFile.WriteLine(lstCommercial.Items.Item(intIndex))
If it is an array, then instead of using the Count property to find the length, you should rather use GetUpperBound(0) to find the last index:
For intIndex As Integer = 0 To lstCommercial.Items.GetUpperBound(0)
I have very little experience in Visual Basic so I am probably wrong, but I have just gathered this from reading Arrays in Visual Basic and Collection Object (Visual Basic).
Also, looking at the docs here, you may need to 'Flush' the buffer to the file before closing it:
outFile.Flush()
'close the file
outFile.Close()
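For illustration, a sketch that writes one line per choice together with its total; intCounts() is a made-up name for wherever the four counters live, since that variable isn't shown in the question:
'write each choice and its total, then let Using dispose (flush and close) the writer
Using outFile As IO.StreamWriter = IO.File.CreateText("Commercials.txt")
    For intIndex As Integer = 0 To lstCommercial.Items.Count - 1
        outFile.WriteLine(lstCommercial.Items(intIndex).ToString() & "," & intCounts(intIndex).ToString())
    Next intIndex
End Using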

SequenceFiles which map a single key to multiple values

I am trying to do some preprocessing on data that will be fed to LucidWorks Big Data for indexing. LWBD accepts SolrXML in the form of SequenceFile files. I want to create a Pig script which will take all the SolrXML files in a directory and output them in the format
filename_1 => <here goes some XML>
...
filename_N => <here goes some more XML>
Pig's native PigStorage() load function can automatically create a column that includes the name of the file from which the data was extracted, which ideally would look like this:
{"filename_1", "<here goes some XML>"}
...
{"filename_N", "<here goes some more XML>"}
However, PigStorage() also automatically uses '\n' as a line delimiter, so what I actually end up with is a bag that looks like this:
{"filename_1", "<some partial XML from file 1>"}
{"filename_1", "<some more partial XML from file 1>"}
{"filename_1", "<the end of file 1>"}
...
I'm sure you get the picture. My question is, if I were to write this bag to a SequenceFile, how would it be read by other applications? Could it be combined as
"filename_1" => "<some partial XML from file 1>
<some more partial XML from file 1>
<the end of file 1>"
, by the default handling of the application I feed it to? Or is there some post-processing that I can do to get it into this format? Thank you for your help.
Since I can't find anything about a builtin SequenceFile writer, I'm assuming you are using a UDF (and if you aren't, then you need to).
You'll have to group the files (by filename) ahead of time, and then send that to the writer UDF.
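For a concrete starting point, here is a load sketch; the '-tagFile' option of PigStorage (called '-tagsource' in older Pig releases) is my assumption for how the filename column from the question is produced, and the input path is made up:
-- Sketch: load each line and tag it with its source file name
xml = LOAD '/input/solrxml' USING PigStorage('\n', '-tagFile')
      AS (filename: chararray, xml_data: chararray) ;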
DESCRIBE xml ;
-- xml: {filename: chararray, xml_data: chararray}
B = FOREACH (GROUP xml BY filename)
GENERATE group AS filename, xml.xml_data AS all_xml_data ;
Depending on how you have written the SequenceFile writer, it may be easier to convert the all_xml_data bag ahead of time to a chararray using a Python UDF like:
@outputSchema('xml_complete: chararray')
def stringify(bag):
    # each element of a Pig bag arrives as a tuple, so take its first field
    delim = ''
    return delim.join([t[0] for t in bag])
NOTE: It is important to realize that this way the order of the XML data may become jumbled. If your data allows it, stringify could be expanded to reorder it.
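For illustration, a sketch of how the pieces might be wired together; the script name udfs.py and the alias myudfs are made-up, and the final STORE is left to whatever SequenceFile writer you have:
REGISTER 'udfs.py' USING jython AS myudfs ;

C = FOREACH B GENERATE filename, myudfs.stringify(all_xml_data) AS xml_complete ;
-- then hand the (filename, xml_complete) pairs to your SequenceFile writer, e.g.:
-- STORE C INTO '/output/path' USING <your writer UDF> ;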

How to programmatically get the file path from a directory parameter in ABAP?

The transaction AL11 returns a mapping of "directory parameters" to file paths on the application server AFAIK.
The trouble with transaction AL11 is that its program only calls C modules; there's almost no trace of SELECT statements or function calls to analyze there.
I want the ability to do this dynamically, in my code, like for instance a function module that took "DATA_DIR" as input and "E:\usr\sap\IDS\DVEBMGS00\data" as output.
This thread is about a similar topic, but it doesn't help.
Some other guy has the same problem, and he explains it quite well here.
I strongly suspect that the only way to get these values is through the kernel directly. Some of them can vary depending on the application server, so you probably won't be able to find them in the database. You could try this:
TYPE-POOLS abap.
TYPES: BEGIN OF t_directory,
log_name TYPE dirprofilenames,
phys_path TYPE dirname_al11,
END OF t_directory.
DATA: lt_int_list TYPE TABLE OF abaplist,
lt_string_list TYPE list_string_table,
lt_directories TYPE TABLE OF t_directory,
ls_directory TYPE t_directory.
FIELD-SYMBOLS: <l_line> TYPE string.
START-OF-SELECTION-OR-FORM-OR-METHOD-OR-WHATEVER.
* get the output of the program as string table
SUBMIT rswatch0 EXPORTING LIST TO MEMORY AND RETURN.
CALL FUNCTION 'LIST_FROM_MEMORY'
TABLES
listobject = lt_int_list.
CALL FUNCTION 'LIST_TO_ASCI'
EXPORTING
with_line_break = abap_true
IMPORTING
list_string_ascii = lt_string_list
TABLES
listobject = lt_int_list.
* remove the separators and the two header lines
DELETE lt_string_list WHERE table_line CO '-'.
DELETE lt_string_list INDEX 1.
DELETE lt_string_list INDEX 1.
* parse the individual lines
LOOP AT lt_string_list ASSIGNING <l_line>.
* If you're on a newer system, you can do this in a more elegant way using regular expressions
CONDENSE <l_line>.
SHIFT <l_line> LEFT DELETING LEADING '|'.
SHIFT <l_line> RIGHT DELETING TRAILING '|'.
SPLIT <l_line>+1 AT '|' INTO ls_directory-log_name ls_directory-phys_path.
APPEND ls_directory TO lt_directories.
ENDLOOP.
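As a small usage sketch on top of the table filled above, here is a lookup of the DATA_DIR example from the question (assuming the logical name appears exactly like that in the AL11 output):
* look up a single logical directory from the parsed list
READ TABLE lt_directories INTO ls_directory WITH KEY log_name = 'DATA_DIR'.
IF sy-subrc = 0.
  WRITE: / ls_directory-phys_path.
ENDIF.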
Try the following:
data : dirname type DIRNAME_AL11.
CALL 'C_SAPGPARAM' ID 'NAME' FIELD 'DIR_DATA'
ID 'VALUE' FIELD dirname.
Alternatively, if you want to use your own parameters (AL11 -> Configure), read them out of table USER_DIR.
