Power Automate Flow: Convert JSON to readable Power Automate items - crm

In CRM I have a 'Doc_Config' value.
The 'Doc_Config' value gets passed to a Power Automate flow.
With that data I populate a Microsoft Word document. My problem is that, instead of the data, the raw text is written into the Word document.
Is there a way to convert the raw text so that Power Automate recognizes the data I actually want, as if it were presented to the flow that way?

Problem: You have probably copied the path for your objects and pasted that path into your 'Doc_Config' value. The issue is the #{...} pattern wrapped around the expressions.
Solution: Remove the #{...} pattern from any object that you refer to by its path, as in the example below:
incorrect:
#{items('Apply_to_each_2')?['productname']}
correct:
items('Apply_to_each_2')?['productname']
Background:
In Power Automate cloud flows, you reference objects that the dynamic content tooling offers. Sometimes, however, you want to reach objects that the dynamic content tooling cannot see or does not provide. In those cases, you can refer to them by specifying their path, as in the example below.
items('Apply_to_each_2')?['productname']
You can see the path for an object by hovering over any item that the dynamic content tooling offers you.
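For example (a hedged illustration; 'Doc_Config' is simply the field name from the question and may differ in your environment), a field on the trigger record that the dynamic content picker does not surface could still be referenced by its path:
triggerBody()?['Doc_Config']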

Another option would be simply to parse the data from your array, since it is already JSON. The idea is very simple:
Append your data to your array variable.
Add a Parse JSON action, click 'Generate from sample', and paste in the JSON you use.
The parsed properties can then be used as dynamic content in all the following steps.
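As a hedged illustration (the property names are hypothetical), if each item in your array looks like this:
{ "productname": "Chai", "price": 18 }
then, after the Parse JSON action, each property is offered as dynamic content, and inside an Apply to each over the parsed array you can also reference it by its path:
items('Apply_to_each')?['productname']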

Related

How to export a complex element in a doc showing just a property and keeping all other information?

I need to import and export some documents from my web app, written in .net-core, to docx and vice versa: users should be able to export, modify the file offline, and import it back. Currently I am using OpenXml-PowerTools to export.
The problem is that there is dynamic content that shows the current value of some fields in the database, so I should be able to export the document showing a face value (for instance an amount of money), and when importing it back I should be able to recall the original reference (which is an object containing an expression and operations, like "sum_db_1 + sum_db_2", plus information about number formatting and so on). If needed, everything can be treated as a String instead of a complex object.
In the original document the face value (a text or an amount) is shown, while the original formula is stored in XML like this:
<reuse-link reuse="reuse-link">
<reuse-param name="figname" value="exp_sum_n"></reuse-param>
<reuse-param name="format" value="MC"></reuse-param>
</reuse-link>
In short, I need to export a complex object to Word so that the face value is shown but the other fields of the original object are also kept somewhere, so they can be retrieved once the document is imported back. Editing the "complex" values is not foreseen.
How can I achieve this?
I tried to negotiate with the customers, explaining that they should only edit online, but they are not willing to change their internal workflow, which foresees an exchange of the document between various parties.
Thank you in advance for your help.
I suggest you use one or more Custom XML Parts to store whatever additional information you need. You will probably need to create a naming convention that will allow you to relate elements/attributes in those Parts to the "face values" (however they may be expressed).
A Custom XML Part can store any XML (the content does have to be valid XML). As long as you create them, and the necessary relationships, in the .docx or Flat OPC format .xml file, the Parts should survive normal use - i.e. the user would have to do something unusual to delete them.
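As a sketch of such a naming convention (the element and attribute names below are invented for illustration), the Part could mirror the reuse-link data and key it to the text shown in the document:
<doc-fields xmlns="urn:example:doc-fields">
  <field figname="exp_sum_n" format="MC" expression="sum_db_1 + sum_db_2" face-value="1.234,56"/>
</doc-fields>
On import, you read the Part back, look up each face value found in the document text by its figname, and restore the original complex object.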
You could also store the information in Word document variables, but Custom XML Parts look like a better fit to your scenario.
(Sorry, I am not yet allowed to post comments or respond to them here).

Stripping Carriage Returns in ADF Data Wrangle not working

I'm currently trying to build an ADF pipeline using the new Data Wrangling Data Flow, which is effectively the Power Query element of Power BI as far as I can see (I'm more of a PBI developer!).
In the data flow, I pick up a CSV file from an SFTP location and use the wrangle to transform the data and load it into a SQL Server database.
I am successfully picking up the file and loading it into a table; however, the CSV contains carriage returns within the cells, which cause additional rows to be inserted into my table.
Using the wrangling data flow, I have added a step that removes the carriage returns, and I can see in the preview that the change has been applied (pre-change and post-change screenshots).
However, when I run the data wrangling step in my pipeline, it loads the data as if the step to remove the #(cr)#(lf) were ignored - i.e. the carriage returns still insert new rows into my table (screenshot of the data inserted into the table).
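For reference, a carriage-return removal step of this kind in Power Query M looks roughly like the following (the column name 'Comments' is only a placeholder for the column containing the embedded line breaks):
= Table.ReplaceValue(Source, "#(cr)#(lf)", " ", Replacer.ReplaceText, {"Comments"})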
So I guess my question is: does anyone have experience of using a Data Wrangling data flow to strip out carriage returns, and if so, can they give me some guidance on how they made it work? As far as I can see, the carriage returns take effect before the data goes through the wrangle - which rather defeats the object of using it!
Thanks
Nick
It looks as if this is currently a limitation of the tool. Given it is only in preview, this will likely be resolved going forward; at this time, however, the functionality to strip carriage returns does not appear to be working.

Where can I find a public URL that returns a dataset of approx. 3000 rows in JSON format for testing

I need to put together a jsBin example that demonstrates a problem I'm having with some UI controls, which doesn't manifest itself with only a few records. I need a dataset of about 3000-5000 rows in JSON format that can be obtained via a URL by an AJAX XHR call. Can someone suggest a website, possibly with government or open-source data, that can be used for such testing?
P.S. It can't just be a download of a zipped file that can be expanded into a JSON text file. I need a JSON XHR response.
P.P.S. Ideally it would have 50-75 distinct values in one of the columns so I could demonstrate a grouping/aggregation issue. Data by US state, or by ZIP code within a state, would be excellent.
P.P.P.S. I've been searching the internet and found this site, now trying to figure out how to get JSON instead of XML:
http://www.sba.gov/about-sba-services/7617#city-county-state
All you have to do is this:
http://www.sba.gov/about-sba-services/7617#city-county-state/NY.json
You can find a lot of open data here
free open data
Have you looked at Freebase? There should be a query to get you that many rows, and they offer JSON responses.
EDIT: There's a similar site, DBpedia. I built this query, which returns JSON and has about 3k rows:
http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=select+distinct+%3FConcept+where+%7B%5B%5D+a+%3FConcept%7D+LIMIT+3000&format=json%2Fhtml&timeout=30000&debug=on
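Decoded, the SPARQL query embedded in that URL is simply:
select distinct ?Concept where { [] a ?Concept } LIMIT 3000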
You can go here and customize the query if you need more data.
-Ken
Why not create a page with a loop that generates as many records as you need? It shouldn't be hard.
Maybe a Java servlet, for example the sketch below.
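A minimal sketch of that idea as a Java servlet (the class name, field names, and generated data are all invented for illustration); it returns ~3000 rows as a JSON array, with a "state" column providing a few dozen distinct values for grouping tests:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet that generates fake rows on the fly and returns them as JSON.
public class FakeRowsServlet extends HttpServlet {

    private static final String[] STATES = {
        "NY", "CA", "TX", "FL", "WA", "IL", "PA", "OH", "GA", "NC"
    };

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        int rows = 3000;
        resp.setContentType("application/json");
        PrintWriter out = resp.getWriter();
        out.print("[");
        for (int i = 0; i < rows; i++) {
            if (i > 0) {
                out.print(",");
            }
            // Each row gets an id, a repeating "state" value, and a numeric amount.
            out.printf("{\"id\":%d,\"state\":\"%s\",\"amount\":%d}",
                    i, STATES[i % STATES.length], (i * 37) % 1000);
        }
        out.print("]");
    }
}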

TALEND DI Job to create points-polygons and store them to postGis table

What is the best way to create points and polygons starting from raw data and then store them in a PostGIS table containing a geometry column?
Thanks
Let me try to elaborate a more precise answer.
The first step is reading the raw data. There are many components to read data from: JSON, XML, delimited text files, Excel, etc.
I would read that data into a tMap component in order to get full control of the data being read.
Then I would create a user routine that receives the raw data as input and returns the data in the format you want (see the sketch below).
Then I would use something like the sPostgisOutput component to insert this data into the PostGIS DB.
I don't know how PostGIS works in detail, so perhaps you will need to adapt this solution a little to get it working.
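A sketch of such a routine (the class and method names are invented; Talend user routines are plain Java classes in the routines package). It builds WKT point strings, which PostGIS can turn into geometries, e.g. with ST_GeomFromText:

package routines;

public class GeometryBuilder {

    // Turn a longitude/latitude pair from the raw data into a WKT POINT string.
    public static String toWktPoint(Double lon, Double lat) {
        if (lon == null || lat == null) {
            return null;
        }
        return "POINT(" + lon + " " + lat + ")";
    }
}

In the tMap you would call GeometryBuilder.toWktPoint(row1.lon, row1.lat) (column names invented) and map the result to the column that feeds the geometry field.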
Helpful link : Strategy to load a set of files in Talend
Helpful blog : http://bekwam.blogspot.com/
Helpful wiki : https://github.com/talend-spatial/talend-spatial/wiki

Is it possible (and wise) to add more data to the riak search index document, after the original riak object has been saved (with a precommit hook)?

I am using riak (and riak search) to store and index text files. For every file I create a riak object (the text content of the file is the object value) and save it to a riak bucket. That bucket is configured to use the default search analyzer.
I would like to store (and be able to search by) some metadata for these files. Like date of submission, size etc.
So I have asked on IRC, and also given it quite some thought.
Here are some solutions, though they are not as good as I would like:
I could have a second "metadata" object that stores the data in question (maybe in another bucket) and have it indexed, etc. But that is not a very good solution, especially if I want to be able to do combined searches like value:someword AND date:somedate.
I could put the contents of the file inside a JSON object like {"date": somedate, "value": "some big blob of text"}. This could work, but it is going to put too much load on the search indexer, as it will first have to deserialize a big JSON object (and those files are sometimes quite big).
I could write a custom analyzer/indexer that reads my file object and generates/indexes the metadata in question. The only real problem here is that I have a hard time finding documentation on how to do that. It is also probably going to be a bit of an operational PITA, as I will need to push some Erlang code to every Riak node (and remember to do that when I update the cluster, add new nodes, etc.). I might be wrong on this; if so, please correct me.
So the best solution for me would be if I could alter the Riak Search index document and add some arbitrary search fields to it after it gets generated. Is this possible, is it wise, and is there support for it in client libraries, etc.? I can certainly modify the document in question "manually", since a bucket with index documents gets created automatically, but as I said, I just don't know what the right thing to do is.
