TestableMapBase XML to flat file fails in BTS2013r2 - BizTalk

I have lots of BTS2010 unit tests that check an XML file can be mapped to a flat file.
I have developed the first such test on BTS2013r2, but on executing TestableMapBase.TestMap(_inputFilename, _inputType, _outputFilename, _outputType) I get the error "Generate schema instance failure".
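For context, the call follows the usual generated-map test pattern; a minimal sketch in C# (the map class name and paths are illustrative, and the BizTalk project needs "Enable Unit Testing" set plus a reference to Microsoft.BizTalk.TestTools):

    using Microsoft.BizTalk.TestTools.Schema;

    var map = new XmlToFlatFileMap();        // generated map class; inherits TestableMapBase
    map.TestMap(@"C:\Tests\Input.xml",
                InputInstanceType.Xml,       // the input instance is plain XML
                @"C:\Tests\Output.txt",
                OutputInstanceType.Native);  // Native produces flat-file output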
I've used Reflector to debug the Microsoft assemblies and got as far as the following line within CFrameworkSchemaTreeExtensions.cs of Microsoft.BizTalk.TOM.Adapter:
infoArray = instanceGenerator.GenerateInstance(filename, xmlInstance);
On executing, infoArray is populated with the following error:
ErrorInfo: "hexadecimal value 0x00, is an invalid character. Line 2, position 1."
Prior to executing, I took the content of xmlInstance, pasted it into Notepad++ and used the Hex plugin to search for null characters (hex 0x00); there are none.
I have tried many different XML inputs to the maps on two different BizTalk development laptops and get the same result.
Has anyone been able to successfully run tests of XML to flat file in BTS2013r2?
Today I have created the most basic of solutions (1 BizTalk project + 1 unit test project) in order to test if this really is a Microsoft bug. It does seem that way because I got the same error when running this very simple test on a third BizTalk development laptop. I have added the source code to the following github repo: https://github.com/RobBowman/FFMapFailBTS2013r2

Make sure it is not an encoding issue. A 0x00 at that position suggests the input file is in UTF-16 format, while the processor expects UTF-8 or another single-byte encoding.
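If you want to double-check the bytes programmatically rather than in Notepad++, here is a minimal sketch in Python (the file path is hypothetical; point it at the instance handed to TestMap):

    # sniff for a UTF-16 BOM or embedded NUL bytes in the map input
    with open("instance.xml", "rb") as f:
        data = f.read()

    if data[:2] in (b"\xff\xfe", b"\xfe\xff"):
        print("UTF-16 BOM present; the file is two-byte encoded")
    elif b"\x00" in data:
        print("first NUL byte at offset", data.index(b"\x00"),
              "(possibly UTF-16 without a BOM)")
    else:
        print("no NUL bytes; likely a single-byte encoding such as UTF-8")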

Microsoft has published a hotfix for this; see: https://social.msdn.microsoft.com/Forums/en-US/cacecbfd-8b71-409c-bd59-2eed26950f25/test-map-to-flat-file-in-bts-2013r2-does-this-ever-work?forum=biztalkgeneral

First token could not be read or is not the keyword 'FoamFile' in OpenFOAM

I am a beginner to programming. I am trying to run a simulation of a combustion chamber using reactingFoam.
I have modified the counterflow2D tutorial.
For those who may not know OpenFOAM: it is a program built in C++, but it does not require C++ programming, just properly defining the variables in the required input files.
In one of my first tries I made a very simple model, but since I wanted to check it thoroughly I set it to run for 60 seconds with a 1e-6 timestep.
My computer is not very powerful, so it took approximately a day (by which I mean I'd like to find a solution rather than repeat the simulation).
I executed the solver reactingFoam using 4 processors in parallel with:
mpirun -np 4 reactingFoam -parallel > log
The log does not show any evidence of error.
The problem is that reconstructPar works perfectly, but when I try to view the results with paraFoam this error is shown:
From function bool Foam::IOobject::readHeader(Foam::Istream&)
in file db/IOobject/IOobjectReadHeader.C at line 88
Reading "mypath/constant/reactions" at line 1
First token could not be read or is not the keyword 'FoamFile'
I have read that this can happen when files that are supposed to have content are empty, but I have not found that problem.
My reactions file has not been modified from the tutorial and has always worked.
edit:
Sorry for the vague question. I have modified it a bit.
A typical OpenFOAM dictionary file always starts with a header dictionary named FoamFile, read through a Foam::Istream. An example from a typical system/controlDict file can be seen below:
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      controlDict;
}
During construction of the dictionary, if this header is absent, OpenFOAM stops and raises the error message you have experienced:
First token could not be read or is not the keyword 'FoamFile'
The benefit of the header is presumably to support OpenFOAM's abstraction mechanisms, by declaring the class and object each file represents, which would be difficult otherwise.
As mentioned in the comments, adding the header entity almost always solves this problem.
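In this case, that means making sure constant/reactions begins with a header of the same shape; presumably along these lines (the object entry must match the file name):

FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "constant";
    object      reactions;
}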

Tesseract use of number-dawg

I need to specify a numeric pattern. I have already completed the normal training.
I created a config file that has the line
user_patterns_suffix user-patterns
and the file user-patterns contains my patterns, for example:
:\d\d\d\d\d\d\d.
:\d\d\d\d\d\d\d\d\d;
!\d\d\d\d\d\d\d\d}
Then I launch tesseract with the config file on a TIFF, and it reports an "Error: failed to insert pattern" message for the first two patterns. It ultimately acts as if no pattern had been supplied.
I need to recognize only and always these patterns, so I also tried to train a language with a number-dawg file, but then, when running the tesseract command, I got a segmentation fault.
In the number-dawg file I used the converted form of the above patterns:
: .
: ;
! }
My questions (the Google documentation is not clear, and English is not my first language):
Where does each patterns file have to be used? I suppose the number-dawg file has to be used during training (but I got a segmentation fault, so I couldn't try it), and user-patterns during the recognition phase, when launching Tesseract (but that didn't work). Where am I going wrong?
Do I also need a dictionary when training with a number-dawg file? My set of possible characters is digits and punctuation only, and since every combination of digits is possible, a dictionary is not feasible. If I do need dictionaries, how should I go about it?
Thanks in advance for your help; any hint would be much appreciated.

How to import Geonames into SQLite?

I need to import the Geonames database (http://download.geonames.org/export/dump/) into SQLite (file is about a gigabyte in size, ±8,000,000 records, tab-delimited).
I'm using the built-in SQLite capabilities of Mac OS X, accessed through Terminal. All goes well until record 381174 (tested with an older file; the exact number varies slightly depending on the exact version of the Geonames database, as it is updated every few days), where the error "expected 19 columns of data but found 18" is displayed.
The exact line causing the problem is:
126704 Gora Kyumyurkey Gora Kyumyurkey Gora Kemyurkey,Gora
Kyamyar-Kup,Gora Kyumyurkey,Gora Këmyurkëy,Komur Qu",Komur
Qu',Komurkoy Dagi,Komūr Qū’,Komūr Qū”,Kummer Kid,Kömürköy Dağı,kumwr
qwʾ,كُمور
قوء 38.73335 48.24133 T MT AZ AZ 00 0 2471 Asia/Baku 2014-03-05
I've tested various countries separately, and the western countries all completely imported without a problem, causing me to believe the problem is somewhere in the exotic characters used in some entries. (I've put this line into a separate file and tested with several other database-programs, some did give an error, some imported without a problem).
How do I solve this error, or are there other ways to import the file?
Thanks for your help and let me know if you need more information.
Regarding the question title, a preliminary search resulted in:
the GeoNames format description ("tab-delimited text in utf8 encoding")
https://download.geonames.org/export/dump/readme.txt
some libraries (untested):
Perl: https://github.com/mjradwin/geonames-sqlite (+ autocomplete demo JavaScript/PHP)
PHP: https://github.com/robotamer/geonames-to-sqlite
Python: https://github.com/commodo/geonames-dump-to-sqlite
GUI (mentioned by @charlest):
https://github.com/sqlitebrowser/sqlitebrowser/
The SQLite tools have import capability as well:
https://sqlite.org/cli.html#csv_import
It looks like a bi-directional text issue. "كُمور قوء" is expected to be at the end of the comma-separated alternate name list. However, on account of it being dextrosinistral (or RTL), it's displaying on the wrong side of the latitude and longitude values.
I don't have visibility of your import method, but it seems likely to me that that's why it thinks a column is missing.
I found the same problem using the script from the geonames forum here: http://forum.geonames.org/gforum/posts/list/32139.page
Despite adjusting the script to run on Mac OS X (Sierra 10.12.6), I was getting the same errors. Thanks to the script author all the same, since it helped me get the SQLite database file created.
After a little while I decided to use DB Browser for SQLite (version 3.11.2) rather than continue with the script.
I had errors with this method as well and found that I had to set the "Quote character" setting in the import dialog to the blank state. Once that was done the import from the FULL allCountries.txt file ran to completion taking just under an hour on my MacBookPro (an old one but with SSD).
Although I have not dug in deeper, I am assuming that the geonames text files must not be quote-parsed in any way. Each line simply needs to be handled as tab-delimited UTF-8 strings.
At the time of writing allCountries.txt is 1.5GB with 11,930,517 records. SQLite database file is just short of 3GB.
Hope that helps.
UPDATE 1:
Further investigation has revealed that it is indeed due to the embedded quotes in the geonames files, and looking here: https://sqlite.org/quirks.html#dblquote shows that SQLite has problems with quotes. Hence you need to be able to switch off quote parsing in SQLite.
The 3.11.2 version of DB Browser is based on SQLite 3.27.2, which does not have the modifications required to ignore the quotes, so I can only assume it escapes the quotes itself when you set the "Quote character" to blank.
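For anyone scripting the import instead, here is a minimal sketch in Python that applies the same rules: tab-delimited, UTF-8, no quote parsing (table and column names are illustrative):

    import csv
    import sqlite3

    # the 19 GeoNames columns, per the readme
    cols = ("geonameid name asciiname alternatenames latitude longitude "
            "feature_class feature_code country_code cc2 admin1 admin2 "
            "admin3 admin4 population elevation dem timezone modification").split()

    csv.field_size_limit(1 << 20)  # alternatenames can be a very long field

    conn = sqlite3.connect("geonames.sqlite")
    conn.execute("CREATE TABLE IF NOT EXISTS geoname (%s)"
                 % ", ".join(c + " TEXT" for c in cols))

    with open("allCountries.txt", encoding="utf-8", newline="") as f:
        # QUOTE_NONE is the crucial part: embedded " characters are plain data
        rows = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        conn.executemany("INSERT INTO geoname VALUES (%s)"
                         % ", ".join("?" * len(cols)), rows)

    conn.commit()
    conn.close()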

Is there any equivalent of the C/C++ __FILE__ and __LINE__ macros in R?

I'm trying to get the equivalent of the __FILE__ or __LINE__ macros in C or C++ in R (or S+). Any ideas?
__FILE__ The presumed name of the current source file (a character string literal).
__LINE__ The presumed line number (within the current source file) of the current source line (an integer constant).
As for context: I have log messages being flushed to the console from different sections of the code, and given that the messages themselves are built at run-time, it is often very difficult to find out where a log message is coming from (with the R code growing to many thousands of lines and running on a distributed grid). However, if I could dump the __FILE__ and __LINE__ number along with the log messages, it would be much easier to trace the logs.
Use the #line directive. The structure is #line nn "filename". See Duncan Murdoch's article on source references for more.
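A tiny illustration (the file name is hypothetical, and source references must be kept for the directive to have a visible effect):

    options(keep.source = TRUE)  # srcrefs are needed for file/line reporting
    #line 200 "gridjob.R"
    f <- function() stop("boom")
    f()
    # traceback() and other srcref-based tools should now attribute these
    # lines to gridjob.R, starting at line 200, rather than the physical file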

How do you transfer a binary file via Connect:Direct NDM?

I'm trying to submit a binary file, in this case an Excel file, from my local server (a Solaris server with mainframe rehosting software) to a destination server (a mainframe) using Connect:Direct NDM.
Here are the environment values I set:
SODDETFL "DetailedReport.xls"
SODDETNDM "FIN.REPORT(+1)"
TDCOPTS ":DATATYPE=BINARY:XLATE=NO:STRIP.BLANKS=NO"
Here is the NDM configuration I use:
ASSGNDD ddname='SYSIN' type='INSTREAM' << !
SIGNON
SUBMIT PROC=COPYFILE -
JOBNAME=JOB00001 -
PNODE=SERVER001 -
SNODE=NDMIDS -
SNODEID=(xxxxxx,xxxxxx) -
HOLD=NO -
NOTIFY=CCACTD -
NODE=, -
DSN1=${SODDETFL} -
DSN2=${SODDETNDM} -
DCBINFO='dcb=(dsorg=ps, recfm=vb, lrecl=1504)' -
DISP1=NEW, -
DISP2=CATLG,DELETE -
UNIT=BATCH -
SYSOPTS=${TDCOPTS} -
AEFAJOB=PSIAPNB5
SEL PROC WHERE (QUEUE=A) TABLE
SIGNOFF
I'm able to send text files via NDM all day long, no problems there. However, it seems that binary is a bit more difficult. When I try with the above configuration, I get the following error:
Completion Code => 8
Message Id => XCPS009I
Short Text => Read buffer too small. Possibly src reclen > dest reclen.
Ckpt=>Y Lkfl=>N Rstr=>N Xlat=>Y Scmp=>N Ecmp=>Y Ecpr=>0.00 CRC=>N Zlvl=>1 win=>13 Zmem=>4
Can anyone shed some light as to how I can go about submitting a binary file via NDM?
Off the cuff...
Try changing RECFM=VB to RECFM=U and specify a BLKSIZE= instead of a LRECL=
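Applied to the PROC above, that would look something like this (a sketch; the BLKSIZE value is a placeholder, pick one appropriate for your device):

DCBINFO='dcb=(dsorg=ps, recfm=u, blksize=27998)' -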
This is really not all that different from how executable load modules are stored on the mainframe, except that you don't want the file to be a PDS dataset. I'm not at my office right now, but I think I have some examples of NDM jobs that transmit load modules, which I can look up if this suggestion doesn't work; I think it will, though.
Give this suggestion a shot and if it still doesn't fly let me know.
