Process many EDI files through a single MFX - MapForce

I've created a mapping in MapForce 2013 and exported the MFX file. Now I need to be able to run the mapping using MapForce Server. The problem is, I need to specify both the input EDI file and the output file. As far as I can tell, the usage pattern is to run the mapping with MapForce Server using the input/output configuration stored in the MFX itself, not passed in on the command line.
I suppose I could change the input/output to some standard file name and then just write the input file to that path before performing the mapping, and then grab the output from the standard output file path when the mapping is complete.
But I'd prefer to be able to do something like:
MapForceServer run -in=MyInputFile.txt -out=MyOutputFile.xml MyMapping.mfx > MyLogFile.txt
Is something like this possible? Perhaps using parameters within the mapping?

There are two options that I've come across in dealing with a similar situation.
Option 1 - If you set the input file to *.xml in the component settings, mapforceserver.exe will automatically process every matching file in that directory (the example assumes an XML source, but this should work just the same for text). Similar to the example below, you can set up a cleanup routine to move the files into another folder after processing.
Note: It looks in the folder where the schema file is located.
Option 2 - Since your output is XML you can use Altova's RaptorXML (rack up another license charge). Generate XSLT 2.0 code from the mapping and use a batch file to execute it automatically, something like this:
@echo off
rem Process every XML file in the folder, then move it based on the result.
for %%f IN (*.xml) DO (
    RaptorXML xslt --xslt-version=2 --input="%%f" --output="out/%%f" %* "mymapping.xslt"
    if NOT errorlevel 1 move "%%f" processed
    if errorlevel 1 move "%%f" error
)
rem Wait 15 seconds, then re-run this script to check the folder again.
timeout /t 15 /nobreak >nul
mymapping.bat
I tossed in a 15-second wait so the batch loops and re-checks the folder every 15 seconds. Unfortunately this does not work if your output target is a database.


DataStage Sequence job - how to process one file at a time when the files are in 7 different folders

DataStage - There are 7 folders in a path, and each folder contains 2 files. For example, the 2 files are in the following format: filename = test_s1_YYYYMMDD.txt, test_s1_YYYYMMDD.done. The paths for these files are:
user/test/test_s1/
user/test/test_s2/
...
user/test/test_s7/
where s1, s2, ..., s7 represent the different folders. Each of these folders contains the 2 files mentioned above, so how can I process each file in a sequence job?
First you need a job to process a file and the filename needs to be a parameter of that job.
At the Sequence level you need two levels - an inner one for the two files within each folder and an outer one for the different directories.
For the inner one you can either build a loop with two iterations or simply add the processing job twice to the sequence (which reduces complexity, given it will always be two files).
The outer Sequence is a loop where you parameterize the path so that the loop counter can be used to generate your flexible s1-s7 path suffix.
Check out more details on loops here
You can use the loop counter (stage_label.$Counter) to parameterize your job.
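For example, with a loop activity named StartLoop (a hypothetical name), the job activity's path parameter could be set to an expression along these lines (Sequence expressions use BASIC-style ":" concatenation):
"user/test/test_s" : StartLoop.$Counter : "/"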
Depending on what you want to do with the files, how you process them is an important decision. Starting a job (or more) in a sequence for each file can lead to heavy overhead just for starting the jobs. Try loading all files at once in a parallel job using the Sequential File Stage.
In the Sequential File Stage, set the appropriate Format. You can also set everything to none to just put each row into one column and process it in a later job. This makes the reading very flexible and forgiving. If your files all have the same structure, define your columns as needed.
To select the files, use File Patterns. In the Options of the Sequential File Stage, choose to have a File Name Column so you can process the filenames in a later job. You might also want to add a Row Number Column.
This method works pretty fast.

Instrumenting R programs using intel-pin

I am instrumenting R programs using the pinatrace.so tool to generate a trace of read and write memory instructions. What I observe is that multiple #eof statements get printed in the trace file at different places (when #eof should only be printed once, at the end of the trace). Also, the line immediately after each #eof gets distorted and is not printed properly.
I am invoking R shell and my R program using the following command:
../../../pin -follow_execv -t obj-intel64/pinatrace.so -- /home/R-3.5.3./bin/R -f hello.R
The trace file gets printed as shown:
0 0x7ffc812cd1c8
1 0x7ffc812cd1c8
0 0x7f7f8555ee78
#eof
f6971ce8
1 0x6f4518
0 0x7ffc171a0b70
.....
.....
1 0x7ffc6da8f078
0 0x7f7c38786e78
#eof
ffc171a07c8
0 0x6f4e30
0 0x6ff918
What is wrong with this instrumentation?
When Pin is invoked with the follow_execv knob, it will create a new copy of itself in every child process that is created. The new copy is not aware that another copy is running in the parent or at all. See here:
If -follow_execv is enabled and the user has not registered to get a notification, Pin will be injected into child/exec-ed process with the same command line as of current process.
If the Pintool wasn't created with -follow_execv in mind, all copies of the Pintool will normally write to the same file. This creates strange artifacts such as what you're seeing: different processes write to the same file, and one writes the end-of-file marker while others keep writing after it.
The simplest solution is to add a PID suffix to the trace file name; another option is to use the Follow Child Process API (linked above) to determine which subprocess is the actual R program you want to trace. Finally, R may have its own support for instrumentation, which you could use to instrument the program directly.
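A minimal sketch of the PID-suffix idea (the file name is a placeholder, and the instrumentation callbacks that pinatrace.so normally registers are elided):

#include <fstream>
#include <sstream>
#include "pin.H"

static std::ofstream TraceFile;

int main(int argc, char *argv[])
{
    if (PIN_Init(argc, argv)) return 1;

    // Build a per-process file name, e.g. pinatrace.12345.out, so every
    // process injected via -follow_execv writes to its own trace file.
    std::ostringstream name;
    name << "pinatrace." << PIN_GetPid() << ".out";
    TraceFile.open(name.str().c_str());

    // ... register the usual read/write instrumentation callbacks here,
    // pointing them at TraceFile instead of a shared hard-coded file ...

    PIN_StartProgram();  // never returns
    return 0;
}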

remove log information from report and save report in desire location

I am new to Robot Framework and wanted to see if I can get any simple code for a custom report. I am also fine with an answer to my problem. I went through all the questions related to reports but could not find a specific answer to my problem. Currently my report contains the log information, and I want to remove it so that the report only shows PASS/FAIL information. I also need to know how I can save my report in a different location. Any example would be helpful. Thank you in advance.
There is a tool called Rebot which is part of Robot Framework.
By default, Robot Framework creates XML reports. The XML reports are automatically converted into HTML reports by Rebot.
You can set the location of the output files in the execution by specifying the parameter --outputdir (and thus set a different base directory for outputs).
From the documentation:
All output files can be set using an absolute path, in which case they are created to the specified place, but in other cases, the path is considered relative to the output directory. The default output directory is the directory where the execution is started from, but it can be altered with the --outputdir (-d) option. The path set with this option is, again, relative to the execution directory, but can naturally be given also as an absolute path. Regardless of how a path to an individual output file is obtained, its parent directory is created automatically, if it does not exist already.
You can call Rebot yourself to control this conversion.
You can also run Rebot after the test was run in order to create new output on a different location.
See documentation in:
http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#post-processing-outputs
The following example shows how to store the HTML reports in a different location and including only partial data:
rebot --include smoke --name Smoke_Tests c:\results\output.xml --outputdir c:\version1.0\reports
In the example above, we process the file c:\results\output.xml, create a new report called Smoke_Tests that includes only tests with the tag smoke, and save it to the output folder c:\version1.0\reports.
In addition you can also set the location of the log file (HTML) from the execution.
The command line option --log (-l) determines where log files are created.
The command line option --report (-r) determines where report files are created.
Removing log lines can be done a bit differently. If you run rebot --help you'll get the following options:
--removekeywords all|passed|for|wuks|name: *
                      Remove keyword data from all generated outputs.
                      Keywords containing warnings are not removed except
                      in `all` mode.
                      all:     remove data from all keywords
                      passed:  remove data only from keywords in passed
                               test cases and suites
                      for:     remove passed iterations from for loops
                      wuks:    remove all but the last failing keyword
                               inside `BuiltIn.Wait Until Keyword Succeeds`
                      name::   remove data from keywords that match
                               the given pattern. The pattern is matched
                               against the full name of the keyword (e.g.
                               'MyLib.Keyword', 'resource.Second Keyword'),
                               is case, space, and underscore insensitive,
                               and may contain `*` and `?` as wildcards.
                               Examples: --removekeywords name:Lib.HugeKw
                                         --removekeywords name:myresource.*
--flattenkeywords for|foritem|name: *
                      Flattens matching keywords in all generated outputs.
                      Matching keywords get all log messages from their
                      child keywords and children are discarded otherwise.
                      for:     flatten for loops fully
                      foritem: flatten individual for loop iterations
                      name::   flatten matched keywords using same
                               matching rules as with `--removekeywords name:`
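For example, to end up with just a report showing pass/fail status (keyword details stripped, no separate log file) in a custom folder, the options above can be combined along these lines (paths are placeholders):
rebot --removekeywords all --log NONE --outputdir c:\version1.0\reports c:\results\output.xml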

unix: can i write to the same file in parallel without missing entries?

I wrote a script that executes commands in parallel. I let them all write an entry to the same log file. It does not matter if the order is wrong or entries are interleaved, but I noticed that some entries are missing. I should probably lock the file before writing; however, is it true that if multiple processes try to write to a file simultaneously, it can result in missing entries?
Yes, if different processes independently open and write to the same file, it may result in overlapping writes and missing data. This happens because each process gets its own file pointer, which advances only with that process's own writes.
Instead of locking, a better option might be to open the log file once in an ancestor of all the worker processes, have it inherited across fork(), and use it in all of them for logging. That way there is a single shared file pointer, which advances whenever any of the processes writes a new entry.
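A minimal sketch of that approach on POSIX (the file name is hypothetical, error handling trimmed):

#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    // Opened once in the parent: children inherit the descriptor and
    // share one file offset. O_APPEND additionally forces every write
    // to the current end-of-file.
    int fd = open("shared.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) return 1;

    for (int i = 0; i < 4; ++i) {
        if (fork() == 0) {                  // child worker
            const char msg[] = "child: entry written\n";
            write(fd, msg, sizeof msg - 1); // one write per log entry
            _exit(0);
        }
    }
    while (wait(nullptr) > 0) {}            // reap all the children
    close(fd);
    return 0;
}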
In a shell script you should use ">> file" (double greater-than) to append output to that file; the interpreter will open the destination in append mode. If your program also wants to append, follow the advice below:
Open the text file in append mode ("a+") and give preference to printing only full lines (don't do multiple 'print' calls followed by a final 'println'; print the entire line with a single 'println').
The fopen documentation states this:
DESCRIPTION
    The fopen() function opens the file whose pathname is the string
    pointed to by filename, and associates a stream with it.

    The argument mode points to a string beginning with one of the
    following sequences:

    r or rb            Open file for reading.

    w or wb            Truncate to zero length or create file for writing.

    a or ab            Append; open or create file for writing at
                       end-of-file.

    r+ or rb+ or r+b   Open file for update (reading and writing).

    w+ or wb+ or w+b   Truncate to zero length or create file for update.

    a+ or ab+ or a+b   Append; open or create file for update, writing at
                       end-of-file.

    The character b has no effect, but is allowed for ISO C standard
    conformance (see standards(5)). Opening a file with read mode (r as
    the first character in the mode argument) fails if the file does not
    exist or cannot be read.

    Opening a file with append mode (a as the first character in the mode
    argument) causes all subsequent writes to the file to be forced to
    the then current end-of-file, regardless of intervening calls to
    fseek(3C). If two separate processes open the same file for append,
    each process may write freely to the file without fear of destroying
    output being written by the other. The output from the two processes
    will be intermixed in the file in the order in which it is written.
It is because of this intermixing that you want to give preference to
using only 'println' (or its equivalent).
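Putting the man page's append-mode behavior together with the full-line advice, a minimal sketch (the file name is hypothetical):

#include <cstdio>

int main() {
    // Open in append mode: every write lands at the current end-of-file.
    std::FILE *log = std::fopen("shared.log", "a");
    if (!log) return 1;

    // Line-buffer the stream so each complete line is flushed as a
    // single write rather than in fragments.
    std::setvbuf(log, nullptr, _IOLBF, 0);

    // One call per entry: print the entire line at once.
    std::fprintf(log, "worker %d: task finished\n", 42);

    std::fclose(log);
    return 0;
}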

How to insert text into middle of text file in QT?

I'm writing a program that performs several tests on a hardware unit, and logs both the results of each test and the steps taken to perform the test. The trick is that I want the program to log these results to a text file as they become available, so that if the program crashes the results that had been obtained are not lost, and the log can help debug the crash.
For example, assume a program consisting of two tests. If the program has finished the first test and is working on the second, the log file would look like:
Results:
Test 1 Result A: Passed
Test 1 Result B: 1.5 Volts
Log:
Setting up instruments.
Beginning test 1.
[Steps in test 1]
Finished test 1.
Beginning test 2.
[whatever test 2 steps have been completed]
Once the second test has finished, the log file would look like this:
Results:
Test 1 Result A: Passed
Test 1 Result B: 1.5 Volts
Test 2 Result A: Passed
Test 2 Result B: 2.0 Volts
Log:
Setting up instruments.
Beginning test 1.
[Steps in test 1]
Finished test 1.
Beginning test 2.
[Steps in test 2]
Finished test 2.
All tests complete.
How would I go about doing this? I've been looking at the help files for QFile and QTextStream, but I'm not seeing a way to insert text in the middle of existing text. I don't want to create separate files and merge them at the end because I'd end up with separate files in the event of a crash. I also don't want to write the file from scratch every time a change is made because it seems like there should be a faster, more elegant way of doing this.
QFile::readAll will read the entire file into a QByteArray.
On the QByteArray you can then use insert() to add text in the middle,
and then write it back to the file again.
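A minimal sketch of that approach (the path, the "Log:" marker, and the function name are placeholders):

#include <QByteArray>
#include <QFile>
#include <QString>

// Read the whole log, insert a new result line just before the "Log:"
// section, and write the file back out.
bool insertResult(const QString &path, const QByteArray &resultLine)
{
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly))
        return false;
    QByteArray contents = file.readAll();
    file.close();

    int pos = contents.indexOf("Log:");          // insertion point
    if (pos < 0)
        return false;
    contents.insert(pos, resultLine + "\n");

    if (!file.open(QIODevice::WriteOnly | QIODevice::Truncate))
        return false;
    file.write(contents);
    file.close();
    return true;
}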
Or you could use the classic C style that can modify files in the middle with the help of file pointers.
As #Roku pointed out, there is no built-in way to insert data into a file without a rewrite. However, if you know the size of the region, i.e., if the text you want to write has a fixed length, then you can write empty space in the file and replace it later. Check
this discussion on overwriting part of a file.
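A minimal sketch of that placeholder idea (names and sizes are hypothetical): reserve a fixed-width region up front, then seek back and overwrite it in place once the result is known:

#include <QByteArray>
#include <QFile>
#include <QString>

// Overwrite a fixed-size placeholder at a known offset. The replacement
// is padded to the placeholder's exact length so the file size and the
// bytes that follow are untouched.
bool patchPlaceholder(const QString &path, qint64 offset,
                      int length, QByteArray text)
{
    text = text.leftJustified(length, ' ');   // pad to the reserved width
    if (text.size() > length)
        return false;                         // would clobber what follows
    QFile file(path);
    if (!file.open(QIODevice::ReadWrite))
        return false;
    if (!file.seek(offset))
        return false;
    return file.write(text) == length;
}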
I ended up going with the "write the file from scratch" method that I mentioned being hesitant about in my question. The benefit of this technique is that it results in a single file, even in the event of a crash since the log and the results are never placed in different files to begin with. Additionally, rewriting the file only happens when adding new results (an infrequent occurrence), whereas updating the log means simply appending text to the file as usual. I'm still a bit surprised that there isn't a way to have the OS insert text into a file for you.
Oh, and for those of you who absolutely must have this functionality as efficiently as possible, the following might be of use:
http://www.codeproject.com/Articles/17716/Insert-Text-into-Existing-Files-in-C-Without-Temp
You just cannot add more stuff in the middle of a file. I would go with two separate files, one for the results and another for the logs.
