Create bitmap file and send it to a client - biztalk

I need to create a bitmap file with the specification below and send it to a client using BizTalk:
BIT NAME Attribute Length
- Msg Type n 4
- Bit map b 64
1 Bit map, Extended b 64
2 UniqueID n …19
7 Transmission Date and Time n 10
This is the first time I am working with bitmap fields. Can someone provide an example of records that match the above bitmap field specification?
And how do we send the record using BizTalk, i.e. which pipeline should we use?

It looks like a positional flat file.
So create a positional flat file schema and use a flat file pipeline.
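This layout looks like ISO 8583 (message type, then a 64-bit bitmap whose bits indicate which numbered fields follow). As a rough illustration of how such a record is read, here is a sketch assuming the bitmap arrives as 16 ASCII hex characters; the sample record and field values are made up:

```java
import java.math.BigInteger;

public class BitmapRecordSketch {
    // Hypothetical record layout: 4-digit message type, then a 64-bit primary
    // bitmap encoded as 16 ASCII hex characters, then the present fields.
    static boolean isBitSet(long bitmap, int bitNumber) {
        // Bit 1 is the most significant bit of the 64-bit map.
        return ((bitmap >>> (64 - bitNumber)) & 1L) == 1L;
    }

    public static void main(String[] args) {
        // Made-up sample: msg type 0200, bitmap with only bit 7 set,
        // followed by the bit-7 field (transmission date/time, n 10).
        String record = "0200" + "0200000000000000" + "0731142530";
        String msgType = record.substring(0, 4);
        long bitmap = new BigInteger(record.substring(4, 20), 16).longValue();
        System.out.println("Message type: " + msgType);
        System.out.println("Extended bitmap present (bit 1): " + isBitSet(bitmap, 1));
        System.out.println("UniqueID present (bit 2): " + isBitSet(bitmap, 2));
        System.out.println("Transmission date/time present (bit 7): " + isBitSet(bitmap, 7));
    }
}
```

Note that fields like UniqueID (n …19) are variable-length in ISO 8583, which is awkward for a purely positional schema, so check whether your spec fixes the lengths before committing to the flat file disassembler.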

Related

Decode JSON RPC request to a contract

I am currently using a website to read some useful data. Using the browser's Inspect > Network tab I can see this data comes from JSON RPC requests to (https://bsc-dataseed1.defibit.io/), the publicly available BSC explorer API endpoint.
These requests have the following format:
Request params:
{"jsonrpc":"2.0","id":43,"method":"eth_call","params":[{"data":"...LONGBYTESTRING!!!","to":"0x1ee38d535d541c55c9dae27b12edf090c608e6fb"},"latest"]}
Response:
{"jsonrpc":"2.0","id":43,"result":"...OTHERVERYLONGBYTESTRING!!!"}
I know that the to field corresponds to the address of a smart contract 0x1ee38d535d541c55c9dae27b12edf090c608e6fb.
It looks like these requests "query" the contract for some data (but it costs 0 gas?).
From (the very little) I understand, the encoded data can be decoded with the schema, which I think I could get from the smart contract address. (perhaps this is it? https://api.bscscan.com/api?module=contract&action=getabi&address=0x1ee38d535d541c55c9dae27b12edf090c608e6fb)
My goal is to understand the data being sent in the request and the data given in the response so I can reproduce the data from the website without having to scrape this data from the website.
Thanks.
The zero cost is because of the eth_call method. It's a read-only method which doesn't record any state changes to the blockchain (and is mostly used for getter functions, marked as view or pure in Solidity).
The data field consists of:
1. the 0x prefix
2. 4 bytes (8 hex characters) of function signature
3. the rest, which is the arguments passed to the function
You can find an example that converts the function name to the signature in this other answer.
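For illustration, here is a minimal sketch of splitting such a data payload into its 4-byte selector and the 32-byte (64 hex character) ABI-encoded argument words. The sample payload is an assumption: it uses the well-known balanceOf(address) selector 0x70a08231 with the contract address above as an example argument:

```java
import java.util.ArrayList;
import java.util.List;

public class CallDataSketch {
    // Splits an eth_call "data" payload into the function selector and the
    // 32-byte ABI-encoded argument words that follow it.
    static List<String> decode(String data) {
        String hex = data.startsWith("0x") ? data.substring(2) : data;
        List<String> parts = new ArrayList<>();
        parts.add(hex.substring(0, 8)); // 4-byte function selector
        for (int i = 8; i < hex.length(); i += 64) {
            parts.add(hex.substring(i, Math.min(i + 64, hex.length())));
        }
        return parts;
    }

    public static void main(String[] args) {
        String data = "0x70a08231" // balanceOf(address)
                + "000000000000000000000000"
                + "1ee38d535d541c55c9dae27b12edf090c608e6fb"; // address padded to 32 bytes
        System.out.println(decode(data));
    }
}
```

To interpret the argument words you still need the contract's ABI (e.g. from the bscscan getabi endpoint you linked), since the types determine how each 32-byte word is decoded.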

What exactly is "bulk data" in WADO-RS standard?

[Referring to http://dicom.nema.org/medical/dicom/2016e/output/chtml/part18/sect_6.5.html]
When we are talking about WADO-RS, NEMA mentions that:
Every request (we'll leave out /metadata & /rendered requests for now) can have an accept type of three kinds:
1. multipart/related; type="application/dicom" [dcm-parameters]
------- (DICOM File format as mentioned in PS3.10)
2. multipart/related; type="application/octet-stream" [dcm-parameters]
------- (Bulk data)
3. multipart/related; type="{media-type}" [dcm-parameters]
------- (Bulk data)
For all these accept types, the response is created as multipart, with each part corresponding to a particular Instance. Now I understand the first case (application/dicom), in which we'll have to fill each response part with each SOP Instance's .dcm counterpart (e.g., if the WADO-RS request is for a Study, the multipart response will have one part for each SOP Instance's DICOM file stream).
But when it comes to the bulk data I have a few questions:
What exactly is bulk-data in WADO-RS standard? Is it only the 7FE00010 tag, or is it all the binary tags of an SOP Instance combined into one single binary data?
If it is just 7FE00010, then there will be one http response part each for every SOP Instance. Then how will the WADO-RS client come to know which bulk data is of which SOP Instance?
Information about this is limited on the internet, hence asking here.
If anyone has any article about this, that's welcome too.
Ps: I am new to DICOM/DICOMWeb
What I've generally done is look at what's included in (or rather, excluded by?) the Composite Instance Retrieve Without Bulk Data service class:
(7FE0,0010) Pixel Data
(7FE0,0008) Float Pixel Data
(7FE0,0009) Double Float Pixel Data
(0028,7FE0) Pixel Data Provider URL
(5600,0020) Spectroscopy Data
(60xx,3000) Overlay Data
(50xx,3000) Curve Data
(50xx,200C) Audio Sample Data
(0042,0011) Encapsulated Document
(5400,1010) Waveform Data (within a (5400,0100) Waveform Sequence)
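As a rough sketch of how a client might test whether an attribute falls into that candidate list, assuming tags are handled as (group, element) integer pairs (note the 50xx/60xx repeating groups need a masked comparison):

```java
public class BulkDataTags {
    // Returns true if the (group, element) pair matches one of the bulk data
    // candidate tags listed above, including the 50xx/60xx repeating groups.
    static boolean isBulkDataTag(int group, int element) {
        if (group == 0x7FE0 && (element == 0x0010 || element == 0x0008
                || element == 0x0009)) return true;          // Pixel Data variants
        if (group == 0x0028 && element == 0x7FE0) return true; // Pixel Data Provider URL
        if (group == 0x5600 && element == 0x0020) return true; // Spectroscopy Data
        if ((group & 0xFF00) == 0x6000 && element == 0x3000) return true; // Overlay Data
        if ((group & 0xFF00) == 0x5000
                && (element == 0x3000 || element == 0x200C)) return true; // Curve / Audio
        if (group == 0x0042 && element == 0x0011) return true; // Encapsulated Document
        if (group == 0x5400 && element == 0x1010) return true; // Waveform Data
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isBulkDataTag(0x7FE0, 0x0010)); // Pixel Data
        System.out.println(isBulkDataTag(0x6002, 0x3000)); // repeating Overlay Data group
    }
}
```

As for matching bulk data back to a SOP Instance: as far as I understand, each multipart part in a WADO-RS response carries a Content-Location header whose URL ties it back to the instance and attribute it belongs to, but check PS3.18 for the authoritative behaviour.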

C# BinaryReader/Writer equivalent in JAVA

I have a stream (hooked to an azure blob) which contains strings and integers. The same stream is consumed by a .net process also.
In C# the writing and reading is done through the type-specific methods of the BinaryWriter and BinaryReader classes, e.g., BinaryWriter.Write("path1;path2") and BinaryReader.ReadString().
In Java, I couldn't find the relevant libraries to achieve the same. Most of the InputStream methods are capable of reading the whole line of the string.
If there are such libraries in Java, please share with me.
Most of the InputStream methods are capable of reading the whole line of the string.
None of the InputStream methods is capable of doing that.
What you're looking for is DataInputStream and DataOutputStream.
If you are trying to read in data generated from BinaryWriter in C# you are going to have to mess with this on the bit level. The data you actually want is prefixed with an integer to show the length of the data. You can read about how the prefix is generated here:
C# BinaryWriter length prefix - UTF7 encoding
It's worth mentioning that, from what I tested, the length prefix is stored least-significant group first. In my case the first two bytes of the file were 0xA0 0x54; in binary that is 10100000 01010100. The first byte starts with a 1, so it is not the last byte of the prefix; the second byte starts with a 0, so it is the last. Dropping the continuation bits leaves 0100000 (32) from the first byte and 1010100 (84) from the second. Putting the second byte's bits in front gives 10101000100000, i.e. 84 × 128 + 32 = 10784 bytes. The file I was dealing with was 10786 bytes, so with the two-byte prefix indicating the length this is correct.
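Putting the two answers together, here is a sketch of reading a string written by C#'s BinaryWriter.Write(string) from Java. It assumes the writer used the default UTF-8 encoding, and it implements the 7-bit encoded length prefix described above (fed the bytes 0xA0 0x54, it would compute the length 10784 from the example):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class BinaryReaderString {
    // Reads a string written by C#'s BinaryWriter.Write(string): a 7-bit
    // encoded byte length (low 7 bits carry the value, high bit means
    // "more bytes follow"), followed by that many UTF-8 bytes.
    static String readCSharpString(DataInputStream in) throws IOException {
        int length = 0, shift = 0, b;
        do {
            b = in.readUnsignedByte();
            length |= (b & 0x7F) << shift; // accumulate least-significant group first
            shift += 7;
        } while ((b & 0x80) != 0);
        byte[] buf = new byte[length];
        in.readFully(buf);
        return new String(buf, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // "path1;path2" is 11 bytes, so the prefix is the single byte 0x0B.
        byte[] data = {0x0B, 'p', 'a', 't', 'h', '1', ';', 'p', 'a', 't', 'h', '2'};
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        System.out.println(readCSharpString(in));
    }
}
```

For the integers in the stream, note that C#'s BinaryWriter writes them little-endian while DataInputStream.readInt expects big-endian, so you will need to reverse the bytes (e.g. via Integer.reverseBytes).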

Sending files through connect direct from UNIX to MAINFRAME

I am sending a file from UNIX to a MAINFRAME server via Connect:Direct. I am able to upload the file successfully. At the destination host, the received file is not readable and not in the same format as the one I sent from the UNIX server.
Below is the transmission job
Direct> Enter a ';' at the end of a command to submit it. Type 'quit;' to exit CLI.
submit maxdelay=unlimited TINIRS process snode=b1ap005
TRANSMIT copy from (file=myFile.txt
pnode
sysopts=":datatype=text"
)
ckpt=1k
to (file=myFile.txt(+1)
snode
DCB=(DSORG=PS,RECFM=VB,LRECL=1500)
disp=(new)
)
pend ;
Please let me know which DCB values need to be updated. The file I am sending has 3 records of variable length, and the maximum record length is 1500.
Actually, that looks almost right. But if your maximum record length is 1500 characters (exclusive of the NL at the end of the line), your LRECL should be at least 1504. But don't skimp on the maximum - there's no cost or penalty to larger values (up to 32767). And NealB's correct - if this is a text file, you may need to specify a character-set translation - but I don't know how to do that in CONNECT:Direct.
C:D automatically converts ASCII to EBCDIC when DATATYPE=TEXT is used. To be positive, you may want to use ":datatype=text:xlate=yes:".
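Untested, but combining both answers, the adjusted process might look like this, with translation forced on and the LRECL headroom suggested above:

```
submit maxdelay=unlimited TINIRS process snode=b1ap005
TRANSMIT copy from (file=myFile.txt
                    pnode
                    sysopts=":datatype=text:xlate=yes:"
                   )
              ckpt=1k
           to (file=myFile.txt(+1)
               snode
               DCB=(DSORG=PS,RECFM=VB,LRECL=1504)
               disp=(new)
              )
pend ;
```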

Using BizTalk Flat File Disassembler to split incoming file by larger than 1 record?

I have an incoming flat file that I wish to receive and break into discrete chunks for more efficient processing. There is a nice sample post for BT2010 on getting the flat file disassembler to help with this here:
http://msdn.microsoft.com/en-us/library/aa560774(v=bts.70).aspx
However, near the bottom of the post you will see that they set the max occurs of the body record to 1, neatly splitting the file into one message per record. I would like to split my file into chunks of 1000 records instead. When I set the max occurs to 1000, the pipeline reads fine until the last chunk, which is not an even 1000 records, and then we get an unexpected end of stream error.
Is there a way to get the stock FF disassembler to play nice here, or do we need to write a custom disassembler? Or is there some other good way to get the chunking behavior we desire?
Thanks.
The max occurs is used to debatch individual messages from the incoming interchange, not to determine how many records should be in each output message. So you will have to create a custom flat file disassembler component which reads the incoming file in a batched fashion: read some data from the stream (e.g. based on the number of lines) and pass it on.
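The core of such a custom disassembler is just the batching logic: emit full batches of N records plus a short final batch, which is exactly the case the stock component trips over. A language-agnostic sketch of that grouping (shown in Java purely for illustration; in the real component this would live inside GetNext, reading from the pipeline's input stream):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class LineBatcher {
    // Groups lines from a reader into batches of at most batchSize lines.
    // The final batch may be smaller than batchSize.
    static List<List<String>> batch(BufferedReader reader, int batchSize) throws IOException {
        List<List<String>> batches = new ArrayList<>();
        List<String> current = new ArrayList<>();
        String line;
        while ((line = reader.readLine()) != null) {
            current.add(line);
            if (current.size() == batchSize) {
                batches.add(current);
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) batches.add(current); // short final batch
        return batches;
    }

    public static void main(String[] args) throws IOException {
        BufferedReader r = new BufferedReader(new StringReader("r1\nr2\nr3\nr4\nr5"));
        System.out.println(batch(r, 2)); // three batches: 2, 2 and 1 records
    }
}
```

For memory efficiency with large files, the real component should emit each batch as soon as it is full rather than accumulating all batches as this sketch does.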
There seems to be a problem with how the GetNext method reads the data in larger files, which can result in excessive memory usage (I had a scenario where this happened with a 10Mb file containing about 800 000 line items). So all one needs to do is re-implement the GetNext method to output a certain number of records per message and, at the same time, be more efficient when processing larger messages.
Here is part of the original GetNext (the important parts) methods decompiled code:
private IBaseMessage GetNext2(IPipelineContext pc)
{
...
baseMessage = this.CreateOutputMessage(pc);
...
baseMessage = this.CreateOutputMessage(pc);
...
return baseMessage;
}
The "CreateOutputMessage" method ends up calling the "CreateNonrecoverableOutputMessage" method which is where the problem seems to lie when processing larger messages:
internal IBaseMessage CreateNonrecoverableOutputMessage(IPipelineContext pc)
{
...
XmlReader reader1 = this.m_docspec.Parse(this.m_inputData);
...
return message;
}
The "m_inputData" variable was created calling the "FFDasmComp.DataReaderFunction" delegate passed into the constructor of the flat file disassembler component. You might be able to control the reading of data by passing your own data reader method into the constructor of your custom implementation of the flat file disassembler component.
There are a couple of articles out there, but the given implementations have some serious caveats when dealing with larger messages:
Debatching Large Messages and Extending Flatfile Pipeline Disassembler Component in Biztalk 2006
Processing 10 MB Flat File in BizTalk
