Pad Rows in BizTalk Flat File Schema

I'm trying to create a BizTalk schema that matches a file something like the below (ignore the spacing between the lines; it's only there for clarity):
"
x Rows will be inserted
Column1^Column2^Column3^Column4^Column5
Data1a^Data2a^Data3a^Data4a^Data5a
Data1b^Data2b^Data3a^Data4b^Data5b"
So, I have a blank row, followed by a row count, followed by a header row before I get to the actual data. When I set up the schema to account for the blank line (which consists only of {CR}{LF}) and validate an instance, validation fails (without an error message, to boot).
I'm running BizTalk 2009, and the file is a simple .txt file.

Okay... I found an answer: Feel free to chime in if you find/have a better one.
1. Set the blank row to a field of type xs:string.
2. Set the header row to a single record, defining all the fields as strings.
3. Define the repeating record.
I'm still open to further suggestions if anyone has a better way, as this seems a little clunky to me.
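For reference, the record hierarchy those steps produce can be sketched like this (a rough outline, not the literal XSD; the node names are illustrative):

```
Schema root
  BlankRow   field, xs:string    (the line holding only {CR}{LF})
  RowCount   record              ("x Rows will be inserted")
  Header     record              (Column1..Column5, ^-delimited strings)
  DataRow    repeating record    (Data1..Data5, ^-delimited)
```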


Is it possible to see the contents of the tables, used by appBuilder?

I'm working with appBuilder/procedure editor, release 11.6.
Recently, I had a question about a file which could not be opened by the appBuilder. The answer was:
Debug the appBuilder and check the contents of the _TRG table tuple the appBuilder is working with.
And indeed, the content of that particular _TRG tuple solved my problem.
My reaction now is:
If the content of a _TRG tuple can explain why a particular problem arises, I would like to see the content of all _TRG tuples in order to avoid that problem happening in the future.
In order to do this, I tried the "Data Administration" tool, "Dump data and definitions" (obviously after having chosen every possible database and after having checked the "Hidden tables" checkbox).
I also tried the following piece of code in the procedure editor, but that didn't work either, as the _TRG table seems to be unknown:
OUTPUT TO VALUE("C:\Temp_Folder\_Trg.log").
FOR EACH _TRG:
    PUT UNFORMATTED _tEvent "|" _tCode SKIP.
END.
OUTPUT CLOSE.
However, the _TRG table seems not to be known.
Does anybody know what I can do in order to access this table and how to obtain all its contents?
By the way, the _TRG table contains a _tEvent column with the name of the erroneous procedure and a _tCode column with the erroneous (too large) code, but there seems to be no column identifying the *.w file that contains that procedure. In which table will I find this information, and what's the link with the _TRG table?
What led you to believe that _trg is a database table? When you open a source file, do you select something from a database, or are you using a file explorer?
If you look at what you are viewing in the debugger, you can see that this is a temp-table: one of a whole set of temp-tables that is populated when a file is opened.
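That also explains the compile failure above: _TRG is an AppBuilder temp-table, not a schema table, so FOR EACH cannot resolve it. Genuine hidden metaschema tables can be dumped that way, though. For example (_File is a standard Progress metaschema table; the output path is illustrative):

```
OUTPUT TO VALUE("C:\Temp_Folder\_file.log").
FOR EACH _File NO-LOCK:
    PUT UNFORMATTED _File._File-Name SKIP.
END.
OUTPUT CLOSE.
```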

Tableau data source column name changed when using a duplicated view from database (Teradata)

I was using a view (VW_NEW_CUSTOMERS) in Teradata, and all the column names had underscores in them. The column names in Tableau did not contain underscores.
For example:
Customer_Number (From Table View)
Customer Number (From Tableau Column Name)
Then I created a duplicate of the view (VW_NEW_CUSTOMERS_2), and now all the columns keep their underscores in Tableau. So when I use Replace Data Sources, the column-name mapping is completely different from the above because of the underscores.
New Tableau fields from duplicated View:
Customer_Number (From Table View)
Customer_Number (From Tableau Column Name)
I would like to know why the underscores did not appear the first time but do appear now that I duplicated the view. How can I rename the fields so they come out like the first time? Should I rename them manually now?
Note: Database columns were using aliases
Check this thread; this isn't new: Tableau decided to start renaming fields some time ago. I'm not sure why it would have done so on one of your data sources but not the other.
Anyway, the executive summary: you may need to reset the field names of the version without the underscores, which should bring the underscores back into your data, making both data sources the same. To do this, copied from the thread:
"Version 9.3 and 10.1, you can select all the measures (and dimensions) in a worksheet, right click and "reset names" in two operations"
I think there's also a way to hack the XML to add the spaces to your copy, should that be preferable. The thread covers hacking the XML to remove spaces, so I assume adding spaces is the same process in reverse.
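For the XML route: a .twb workbook is plain XML, and each datasource column can carry a caption attribute that overrides the display name. A hypothetical entry for the example column might look like this (every attribute other than name and caption is illustrative):

```xml
<column caption='Customer Number' datatype='string'
        name='[Customer_Number]' role='dimension' type='nominal' />
```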

How to skip inserting value to Dynamics in Logic App with condition

I have many date values in a CSV which is sent to a Logic App. For example:
date1;date2;date3;date4;date5;date6;date7
2011-12-30;2011-12-30;2011-12-30;2011-12-30;2011-12-30;2011-12-30;2011-12-30
2011-12-30;;2011-12-30;2011-12-30;2011-12-30;2011-12-30;2011-12-30
It is possible that a date is empty in the CSV. I need to insert those dates into Dynamics 365. If I insert an empty date, it goes in as "", which returns an error: "Cannot convert the literal '' to the expected type 'Edm.DateTimeOffset'.". The same happens if I try to pass null when date2 is empty ("").
Is there way to skip inserting anything with logic app? Or is there some other solution to this?
Most likely, you will need to stop sending CRM the empty fields.
Wherever you're transforming the data from the CSV shape to the D365 shape, you need to check the data and not emit the field when the source value is missing or otherwise invalid.
It seems you're using a ForEach over the input and then creating the D365 message with a Compose action. The thing is, the Compose action doesn't give you much control over the output.
Instead, you should use a Liquid Transform where you can test the input before emitting any field.
Liquid - Control Flow
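As a sketch of that approach (the input property names and output field names here are assumptions about your payload, not your actual schema), the Liquid map can test each date and only emit the property when it is non-empty:

```liquid
{
  "date1field": "{{ content.date1 }}"
  {% if content.date2 and content.date2 != "" %}
  ,"date2field": "{{ content.date2 }}"
  {% endif %}
}
```

Fields guarded this way are simply absent from the JSON sent to Dynamics, so the Edm.DateTimeOffset conversion is never attempted for them.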

Is it possible to filter the list of fields when outputting a Full Dataset?

I have a DataTable that I'm passing to a FlexCel report. It contains a variable number of columns, so I'm using the Full Dataset feature (e.g. <#table_name.*>).
However, only a subset of the fields are dynamically generated (I have a variable number of attachments). The column name for each attachment field starts with a common word (e.g. "Attachment0", "Attachment1", etc).
What I would like to do is output the known finite set of fields and then the variable number of attachments. It would be nice if I could write something like <#table_name.Attachment*> (and <#table_name.Attachment**>). Is there any way in FlexCel Reports I can achieve the same result?
A side benefit to such a solution means that I could keep the formatting for the known/finite set of fields.
Update
I added placeholder columns to the document, each with a <#delete column> tag, so that the unwanted columns/data are removed.
Although this works, it's not ideal. For example, if I want to see how the columns fit in the page width (in print preview), I need to hide the placeholder columns, and then I have to remember to un-hide them again so other developers can see and understand my handiwork.
It would be much more straight forward if I could filter the fields before they're output to the document.
I realised there's an alternate way around this problem. I broke up the data into two sets of data - <#table_name.*> and <#table_name_attachments.*>.
The fixed set of fields are in the first table and the variable set of fields is in the second table (all the "Attachment*" fields). When the report is run, I place them next to each other (in the same order) in the same worksheet. This means I have two table ranges - "_table_name_" and "_table_name_attachments_" on the one sheet.
Now I'm able to run my print preview without hiding/re-showing the columns-to-be-deleted. I've also eliminated human error: it was all too easy to accidentally set the wrong number of padded/delete columns.

BizTalk output Flat File has empty records: how to avoid/remove them

I am converting an XML file to a flat file, and I am struggling with two things.
I want to populate the tag number without mapping the tag field from source to destination. Is there any way I can populate it only when there is a value somewhere in the row? It shouldn't output the tag number if the record is empty.
After I map the fields, if there is no value, a blank record is still visible, as below:
101 JOB3434343 34343KKKK
301 SSSSJooojs kkkkkkkk
In the above, between 101 and 301 there is a 201 record which doesn't have any output value, but the blank record is still visible in the output file. Please advise if anyone can.
What I am doing is as below: in the flattening value mapping I am passing the tag numbers 101, 201 and 301. However, I want to know the best approach.
Thank you.
For #2, Jobs_201 is probably created because there is a JOB_DETAILS record in the source that is essentially empty. You will have to link up some conditional functoids (do a Length > 0 check on SLEVEL and STTYPE, for instance) and link the result to Jobs_201 to suppress the empty records.
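One way to wire that up in the mapper (a sketch; SLEVEL and STTYPE come from the question, and the functoid names are the standard toolbox ones):

```
SLEVEL --> String Length --> Greater Than 0 --+
                                              +--> Logical OR --> Jobs_201
STTYPE --> String Length --> Greater Than 0 --+
```

With the Logical OR linked to the Jobs_201 record, the record is only emitted when at least one of the source fields has content.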
Can you elaborate on #1 a bit? If you want a counter, you can use a technique similar to this: http://blogdoc.biztalk247.com/article.aspx?page=ec141ab4-78a7-4012-9273-2a50669b41e2