How to write some amounts to a File? - writetofile

I'm doing this program for a class; I have a listbox with 4 choices that are counted each time they're selected, and I'm supposed to write the results to an output file so that they can be retrieved and displayed later when asked for.
Here's an image, so you can see what I mean.
Here's the code I have for the file part so far:
'declare a StreamWriter variable
Dim outFile As IO.StreamWriter
'open the file for output
outFile = IO.File.CreateText("Commercials.txt")
'write to the file
For intIndex As Integer = 0 To lstCommercial.Items.Count - 1
    outFile.WriteLine(lstCommercial.Items(intIndex))
Next intIndex
'close the file
outFile.Close()
So my problem is this: I can get everything to work except writing the totals to the file, so nothing is displayed when I read it back. How do I go about doing this? What am I doing wrong in any case?

It depends on what lstCommercial.Items is.
If it is a Collection object, then you can get an item by its index using the following:
outFile.WriteLine(lstCommercial.Items.Item(intIndex))
If it is an array, then instead of using the Count property to find the length, you should rather use GetUpperBound(0) to find the last index:
For intIndex As Integer = 0 To lstCommercial.Items.GetUpperBound(0)
I have very little experience in Visual Basic so I am probably wrong, but I have just gathered this from reading Arrays in Visual Basic and Collection Object (Visual Basic).
Also, looking at the docs here, you may need to 'Flush' the buffer to the file before closing it:
outFile.Flush()
'close the file
outFile.Close()
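If the goal is to also write the totals and not just the item names, here is a minimal sketch, assuming the running counts live in a hypothetical Integer array named intCounts that parallels the listbox items (adjust to however your program actually stores them):
'write each choice together with its count, one pair per line
Dim outFile As IO.StreamWriter = IO.File.CreateText("Commercials.txt")
For intIndex As Integer = 0 To lstCommercial.Items.Count - 1
    outFile.WriteLine(lstCommercial.Items(intIndex).ToString() & "," & intCounts(intIndex).ToString())
Next intIndex
outFile.Close()
Reading the results back is then just a matter of splitting each line on the comma.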

Related

Adding an attribute to XmlWriter causes all namespaces to become aliased

Here is a line of XML found in an Excel workbook I created with a PivotTable/Cache in it:
<pivotCacheDefinition xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" r:id="rId1" refreshOnLoad="1" refreshedBy="m" refreshedDate="44873.446783912033" createdVersion="4" refreshedVersion="4" minRefreshableVersion="3" recordCount="4">
I am using XmlWriter (and System.IO.Packaging) to modify this XML so that it does not use cached values and instead recalculates from the original data every time it's opened (you can do this in Excel, but people always forget to). All this needs is an additional attribute in this header, saveData="0".
We use similar code to rewrite the worksheets holding the data, so I simply copy-pasted it and changed the element names to produce this:
Dim WR As XmlWriter
Dim WRSettings As XmlWriterSettings
WRSettings = New XmlWriterSettings() With {.CloseOutput = False}
pPivotPart.Data = New MemoryStream()
pPivotPart.Data.Position = 0
WR = XmlWriter.Create(pPivotPart.Data, WRSettings)
WR.WriteStartDocument(True)
WR.WriteStartElement("pivotCacheDefinition", cXl07WorksheetSchema)
WR.WriteAttributeString("xmlns", "r", Nothing, cXl07RelationshipSchema)
WR.WriteAttributeString("r", "Id", Nothing, "rId" & RelId)
WR.WriteAttributeString("saveData", "0")
'the rest of the lengthy code creates the other attributes and then copies over the original XML line by line
The r:id attribute is causing a problem. When it is present, XmlWriter adds an alias for the main namespace, and I get this:
<x:pivotCacheDefinition xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" saveData="0" refreshOnLoad="1" refreshedBy="m" refreshedDate="44816.473130671293" createdVersion="4" refreshedVersion="4" minRefreshableVersion="3" recordCount="4" r:Id="rId1" xmlns:x="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
This makes regression testing... difficult. If I comment out the single line that inserts that attribute, all the aliasing goes away - even if I leave in the line that manually inserts that namespace for it:
<pivotCacheDefinition xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" saveData="0" refreshOnLoad="1" refreshedBy="m" refreshedDate="44816.473130671293" createdVersion="4" refreshedVersion="4" minRefreshableVersion="3" recordCount="4" xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
I thought perhaps that manually adding the r NS was the issue, and tried adding just the attribute and letting XmlWriter add the namespace for me, but that did not do what I expected either; it simply ignored the "r" namespace entirely:
<x:pivotCacheDefinition saveData="0" refreshOnLoad="1" refreshedBy="m" refreshedDate="44816.473130671293" createdVersion="4" refreshedVersion="4" minRefreshableVersion="3" recordCount="4" Id="rId1" xmlns:x="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
It is difficult to search for this topic because of PHP's similarly named library, but a few threads I found present seemingly similar code.
Can someone who better understands XmlWriter's NS logic explain what's happening here, and how to avoid the renaming?
UPDATE:
I'm still playing with it using the Microsoft docs and various posts, and in doing so found that this problem occurs even if these are the only lines:
WR.WriteStartElement("pivotCacheDefinition", cXl07WorksheetSchema)
WR.WriteAttributeString("xmlns", "r", Nothing, cXl07RelationshipSchema)
WR.WriteAttributeString("r", "Id", cXl07RelationshipSchema, "rId" & RelId)
WR.WriteEndElement()
WR.WriteEndDocument()
As soon as it encounters the second NS, the first is renamed. I have tried adding the first NS explicitly, and I have tried using the ns parameter instead of the xmlns declarations; every variation seems to have the same effect: as soon as you insert an attribute in the r namespace, the first namespace gets aliased as x.
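For reference, here is a self-contained console version of that minimal repro as a sketch; the schema constant values are assumed from the XML shown above, and per the question this is the point where the default namespace comes out aliased:
Imports System.Text
Imports System.Xml

Module Repro
    'assumed values, taken from the pivotCacheDefinition XML above
    Const cXl07WorksheetSchema As String = "http://schemas.openxmlformats.org/spreadsheetml/2006/main"
    Const cXl07RelationshipSchema As String = "http://schemas.openxmlformats.org/officeDocument/2006/relationships"

    Sub Main()
        Dim sb As New StringBuilder()
        Using WR As XmlWriter = XmlWriter.Create(sb)
            WR.WriteStartDocument(True)
            WR.WriteStartElement("pivotCacheDefinition", cXl07WorksheetSchema)
            WR.WriteAttributeString("xmlns", "r", Nothing, cXl07RelationshipSchema)
            WR.WriteAttributeString("r", "Id", cXl07RelationshipSchema, "rId1")
            WR.WriteEndElement()
            WR.WriteEndDocument()
        End Using
        'per the question, the default namespace is emitted aliased as x: here
        Console.WriteLine(sb.ToString())
    End Sub
End Module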

How to automatically resize all tables to optimal column width in a docx with LibreOffice Writer?

Problem
I have 100+ *.docx files created with Microsoft Word on a Windows machine that I would like to use with LibreOffice Writer.
Unfortunately, somehow all the tables have been squished in Writer as shown:
I've tried to fix this by:
Selecting the entire table (ctrl-a, ctrl-a)
Right-click the table
Go to size
Click optimal column width
This indeed gives me the desired result:
What I've tried so far
failed approach 1
I've created the following macro that does the formatting for a single table:
REM ***** BASIC *****
sub setTableColumns
    rem ----------------------------------------------------------------------
    rem define variables
    dim document as object
    dim dispatcher as object
    rem ----------------------------------------------------------------------
    rem get access to the document
    document = ThisComponent.CurrentController.Frame
    dispatcher = createUnoService("com.sun.star.frame.DispatchHelper")
    dispatcher.executeDispatch(document, ".uno:SelectAll", "", 0, Array())
    dispatcher.executeDispatch(document, ".uno:SelectAll", "", 0, Array())
    dispatcher.executeDispatch(document, ".uno:SetOptimalColumnWidth", "", 0, Array())
end sub
I've assigned a keyboard shortcut to this macro such that I can reformat a single table by:
Clicking on the table of interest.
Running the shortcut.
This is an improvement, but it still requires a lot of manual work.
failed approach 2
I've also tinkered with the examples given in this different SO question.
I've managed to set the relative width of all tables to 100% by changing this property of all my tables:
https://www.openoffice.org/api/docs/common/ref/com/sun/star/text/TextTable.html#RelativeWidth
I did this by using the following macro (snippet):
tables = ThisComponent.TextTables
for tid = 0 to tables.count - 1
    table = tables(tid)
    table.RelativeWidth = 100
next
This does widen all the tables; however, the resulting formatting is still not what I want.
Question
Is there a way to apply the optimal column width setting to all tables in a file?
It would be awesome if I could apply it to all tables in multiple docx files at once.
However, it would already make me very happy if I could automatically format all tables in a single docx file.
The code needs to loop through all tables in the document, select each one and then optimize the selected table.
The following Basic code from Andrew Pitonyak's macro document section 7.2.1 shows how to select a table.
ThisComponent.getCurrentController().select(oTable)
Here is some python code from a project of mine.
cellsRange = table.getCellRangeByPosition(
    0, 0, endCol, numRows - 1)
controller.select(cellsRange)
dispatcher.executeDispatch(
    frame, ".uno:SetOptimalColumnWidth", "", 0, ())
There are also ways to loop through a set of documents, so the code can open, optimize, save and close each document. The most difficult part is closing properly without a crash, as closing events can sometimes interrupt each other.
If this is a problem, you may find it easier to open, save and close each by hand, running the macro to optimize all tables in the currently open document. It's also not too hard to loop through all open documents, if you find it more convenient to open multiple files at the same time.
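Putting those pieces together, here is a minimal Basic sketch, assuming the document with the tables is the one currently open, that loops over every table, selects its full cell range and dispatches .uno:SetOptimalColumnWidth on the selection:
Sub OptimizeAllTableColumns
    Dim oController As Object, oFrame As Object, oDispatcher As Object
    Dim oTables As Object, oTable As Object, oRange As Object
    Dim i As Integer

    oController = ThisComponent.CurrentController
    oFrame = oController.Frame
    oDispatcher = createUnoService("com.sun.star.frame.DispatchHelper")

    oTables = ThisComponent.TextTables
    For i = 0 To oTables.Count - 1
        oTable = oTables.getByIndex(i)
        ' select every cell of the table, then optimize the selection
        oRange = oTable.getCellRangeByPosition(0, 0, oTable.Columns.Count - 1, oTable.Rows.Count - 1)
        oController.select(oRange)
        oDispatcher.executeDispatch(oFrame, ".uno:SetOptimalColumnWidth", "", 0, Array())
    Next i
End Sub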

Reading in a binary grid file in Fortran 90

I'm having issues when trying to read a binary file I've previously written into another program. I have been able to open it and read it into an array without compilation errors; however, the array is not populated (it's all 0's). Any suggestions or thoughts would be great. Here is the open/read statement I'm using:
allocate(dummy(imax,jmax))
open(unit=io, file=trim(input), form='binary', access='stream', &
     iostat=ioer, status='old', action='READWRITE')
if(ioer/=0) then
   print*, 'Cannot open file'
else
   print*,'success opening file'
end if
read(unit=io, fmt=*, iostat=ioer) dummy
j=0
k=0
size: do j=1, imax
   do k=1, jmax
      if(dummy(j,k) > 0.) print*,dummy(j,k)
   end do
end do size
Please let me know if you need more info.
Here is how the file is originally written:
out_file = trim(output_dir)//'SEVIRI_FRP_.08deg_'//trim(season)//'.bin'
print*, out_file
print*, i_max,' i_max,',j_max,' j_max'
open (io, file = out_file, access = 'direct', status = 'replace', recl = i_max*j_max*4)
write(io, rec = 1) sev_frp
write(io, rec = 2) count_sev_frp
write(io, rec = 3) sum_sev_frp
check: do n=1, i_max
   inna: do m=1, j_max
      !if (sev_frp(n,m) > 0) print*, count_sev_frp(n,m)
   end do inna
end do check
print*,'n-',n,'m-',m
close(io)
First of all, the form= specifier takes one of two values as far as I know: "FORMATTED" or "UNFORMATTED".
Second, to read, you should use an open statement that is symmetric to the open statement you used to write the file, unless you know exactly what you are doing. I suggest that for reading, you open with:
open(unit=io, file=trim(input), access='direct', &
iostat=ioer, status='old', action='READ', recl = i_max*j_max*4)
That corresponds to the open statement that you used to save the file.
As innoSPG says, you have a mismatch in the way the file is written and how it is read.
An external file may be connected with one of three access methods: sequential; direct; stream. Further, a connection may be formatted or unformatted.
When the file is opened for writing it uses the direct access method with unformatted records. The records are unformatted because this is the default (in the absence of the form= specifier).
When you open the file for reading you use the non-standard extension of form="binary" and stream access. There is possibly nothing wrong with this, but it does require care.
However, with the read statements you are using formatted (list-directed) input. This will not be allowed.
The way suggested in the previous answer, of using a similar access method and record length will require a further change to the code. [You'll also need to set the value of the record length somehow.]
Not only will you need to remove the format, to match the unformatted records written, but you'll want to use the rec= specifier to access the records of the file.
Finally, if you are using the iostat= specifier you really should check the resulting value.
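Putting both answers together, the reading side might look something like the sketch below, assuming imax/jmax equal the i_max/j_max used when writing and that sev_frp was written as default 4-byte reals (the unit of recl= is compiler dependent, so keep it consistent with the writing program):
allocate(dummy(imax,jmax))
open(unit=io, file=trim(input), access='direct', form='unformatted', &
     recl=imax*jmax*4, iostat=ioer, status='old', action='read')
if (ioer /= 0) then
   print *, 'Cannot open file'
   stop
end if
! record 1 holds sev_frp, record 2 count_sev_frp, record 3 sum_sev_frp
read(io, rec=1, iostat=ioer) dummy
if (ioer /= 0) print *, 'Read failed with iostat = ', ioer
close(io)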

Save an Excel sheet as PDF programmatically through PowerBuilder

There is a requirement to save an Excel sheet as a PDF file programmatically through PowerBuilder (PowerBuilder 12.5.1).
I run the code below; however, I am not getting the right results. Please let me know if I should do something different.
OLEObject ole_excel;
ole_excel = create OLEObject;
IF ( ole_excel.ConnectToObject(ls_DocPath) = 0 ) THEN
    ole_excel.application.activeworkbook.SaveAs(ls_DocPath,17);
    ole_excel.application.activeworkbook.ExportAsFixedFormat(0,ls_DocPath);
END IF;
....... (Parsing values from excel)
DESTROY ole_excel;
I have searched through this community and others for a solution but no luck so far. I tried using two different commands that I found during this search. Both of them return a null object reference error. It would be great if someone can point me in the right direction.
It looks to me like you need to have a reference to the 'activeworkbook'. This would be of type OLEObject, so the declaration would be similar to: OLEObject lole_workbook.
Then you need to set this to the active workbook. Look in the Excel VBA documentation (it should be in the Excel help) for something like a 'getactiveworkbook' method. You would then (in PB) need to do something like
lole_workbook = ole_excel.application.activeworkbook
This gets PB a reference to the active workbook. Then do your SaveAs etc. like this: lole_workbook.SaveAs(ls_DocPath, 17)
The Workbook.SaveAs() documentation says that SaveAs() has the following parameters:
SaveAs(Filename, FileFormat, Password, WriteResPassword, ReadOnlyRecommended, CreateBackup, AccessMode, ConflictResolution, AddToMru, TextCodepage, TextVisualLayout, Local)
We need the first two parameters:
FileName - full path with filename and extension, for instance: c:\myfolder\file.pdf
FileFormat - a predefined constant that represents the target file format.
According to Google (MS does not list a PDF format constant for XlFileFormat), the FileFormat value for PDF is 57.
So, try the following call:
ole_excel.application.activeworkbook.SaveAs(ls_DocPath, 57);
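Putting the two answers together, a minimal PowerScript sketch might look like the following; ls_DocPath is the source workbook as in the question, and ls_PdfPath is a hypothetical string variable holding the target PDF name:
OLEObject lole_excel, lole_workbook

lole_excel = CREATE OLEObject

IF lole_excel.ConnectToObject(ls_DocPath) = 0 THEN
    // grab a reference to the active workbook, as suggested above
    lole_workbook = lole_excel.application.activeworkbook
    // 57 is the FileFormat value suggested above for PDF output
    lole_workbook.SaveAs(ls_PdfPath, 57)
END IF

IF IsValid(lole_workbook) THEN DESTROY lole_workbook
lole_excel.DisconnectObject()
DESTROY lole_excel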

Sequencefiles which map a single key to multiple values

I am trying to do some preprocessing on data that will be fed to LucidWorks Big Data for indexing. LWBD accepts SolrXML in the form of SequenceFile files. I want to create a Pig script which will take all the SolrXML files in a directory and output them in the format
filename_1 => <here goes some XML>
...
filename_N => <here goes some more XML>
Pig's native PigStorage() load function can automatically create a column that includes the name of the file from which the data was extracted, which ideally would look like this:
{"filename_1", "<here goes some XML>"}
...
{"filename_N", "<here goes some more XML>"}
However, PigStorage() also automatically uses '\n' as a line delimiter, so what I actually end up with is a bag that looks like this:
{"filename_1", "<some partial XML from file 1>"}
{"filename_1", "<some more partial XML from file 1>"}
{"filename_1", "<the end of file 1>"}
...
I'm sure you get the picture. My question is, if I were to write this bag to a SequenceFile, how would it be read by other applications? Could it be combined as
"filename_1" => "<some partial XML from file 1>
<some more partial XML from file 1>
<the end of file 1>"
, by the default handling of the application I feed it to? Or is there some post-processing that I can do to get it into this format? Thank you for your help.
Since I can't find anything about a builtin SequenceFile writer, I'm assuming you are using a UDF (and if you aren't, then you need to).
You'll have to group the files (by filename) ahead of time, and then send that to the writer UDF.
DESCRIBE xml ;
-- xml: {filename: chararray, xml_data: chararray}
B = FOREACH (GROUP xml BY filename)
GENERATE group AS filename, xml.xml_data AS all_xml_data ;
Depending on how you have written the SequenceFile writer, it may be easier to convert the all_xml_data bag ahead of time to a chararray using a Python UDF like:
@outputSchema('xml_complete: chararray')
def stringify(bag):
    # each element of the bag arrives as a single-field tuple, so take its first field
    delim = ''
    return delim.join(t[0] for t in bag)
NOTE: It is important to realize that this way the order of the XML data may become jumbled. If possible based on your data, stringify could be expanded to reorganize it.
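For completeness, the UDF might be wired up something like this (the script name stringify.py and the alias xmlutil are hypothetical):
REGISTER 'stringify.py' USING jython AS xmlutil ;

C = FOREACH B GENERATE filename, xmlutil.stringify(all_xml_data) AS xml_complete ;
-- C: {filename: chararray, xml_complete: chararray}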
