Export unique JSON values from two files - jq

I am trying to extract unique values between two JSON files. I see many jq posts on how to filter unique values within the same file, but none on how to compare two files.
Both of my files are in the same format:
{
  "time": "2021-10-01T04:00:38.161Z",
  "Number": 2,
  "signature": "e03756fa67a30d52837d3743d4d87e9a810c5e2ddf11061a976c386a742fa"
}
{
  "time": "2021-10-01T04:01:38.164Z",
  "Number": 2,
  "signature": "3b4d746ac2da2543047d8cc981db2464d4993065993449b321fc15d7f0aa6"
}
I would like to create a 3rd file which contains only unique values. If I must choose a single field to treat as unique, I would select 'signature'.

Choose the field to compare on (e.g. .signature) and filter with unique_by over the combined array obtained with the --slurp (-s) option:
jq -s 'unique_by(.signature)[]' file*.txt
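For example, assuming the two objects shown above live in file1.txt and file2.txt (the file names here are placeholders), you could write the de-duplicated objects to a third file with something like:
# placeholder file names; adjust to your actual files
jq -s 'unique_by(.signature)[]' file1.txt file2.txt > unique.txt
The -s option reads every object from both files into a single array, unique_by(.signature) keeps one object per distinct signature, and the trailing [] unpacks the array back into a stream of objects.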

I'm not sure I fully understand what you are trying to do here, but if you want to extract or export values from your files, you need to specify which files are involved and where you want to write the result.
You can extract data from any file. For example, if you were using SQLite, you might call something like:
db.fetch(`data_specified_here`)
Note: this would fetch the data from the database (or, in your case, the db file), and then you would either log or print out that data.
Since you have fields like "time" and "Number", you need to specify which field (for example "time":"2021-10-01...", and so on) holds the value you want to pull out of your file.
If this didn't help, please re-ask your question with a little more detail and I can help further. This is just a general rundown on how to fetch something from a DB, or in your case from JSON.

Related

Using multi choice and collections in Blue Prism

I have about 3 collections and I want to write them into an Excel file and send them by mail separately. I tried to use a Multi Choice stage to finalize one after the other, but it doesn't work.
Any other idea how to do this?
I will put screenshots below for more illustration.
It's likely not functioning the way you intended because the three Data Items you're attempting to compare to a blank string ("") aren't Text-typed - rather, they're Collections. Collections themselves can't be compared to strings (see: apples and oranges), but the data contained within them can.
What you're likely attempting to do is compare the value within the current row of each of those collections. It's not clear from your screenshot what the field name is (it's cut off on the right edge), but your comparisons should look something like the following:
[No Amount.Politic]<>""
[Pending difference.Field]<>""
[Ready to print.Field]<>""

Is it possible to filter the list of fields when outputting a Full Dataset?

I have a DataTable that I'm passing to a FlexCel report. It contains a variable number of columns, so I'm using the Full Dataset feature (e.g. <#table_name.*>).
However, only a subset of the fields are dynamically generated (I have a variable number of attachments). The column name for each attachment field starts with a common word (e.g. "Attachment0", "Attachment1", etc).
What I would like to do is output the known finite set of fields and then the variable number of attachments. It would be nice if I could write something like <#table_name.Attachment*> (and <#table_name.Attachment**>). Is there any way in FlexCel Reports I can achieve the same result?
A side benefit to such a solution means that I could keep the formatting for the known/finite set of fields.
Update
I added placeholder columns to the document, each with a <#delete column> tag, so that the unwanted columns/data are removed.
Although this works, it's not ideal. For example, if I want to see how the columns fit in the page width (in print preview), I need to hide the columns, and then remember to un-hide them again so other developers can see/understand my handiwork.
It would be much more straight forward if I could filter the fields before they're output to the document.
I realised there's an alternative way around this problem. I broke the data up into two tables - <#table_name.*> and <#table_name_attachments.*>.
The fixed set of fields are in the first table and the variable set of fields is in the second table (all the "Attachment*" fields). When the report is run, I place them next to each other (in the same order) in the same worksheet. This means I have two table ranges - "_table_name_" and "_table_name_attachments_" on the one sheet.
Now I'm able to run my print preview without hiding/re-showing the columns-to-be-deleted. I've also eliminated human error - it was all too easy to accidentally set the wrong number of padded/delete columns.
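In code, the split might look something like the following C# sketch. It assumes the attachment columns can be identified by the "Attachment" prefix, and that the FlexCelReport.AddTable/Run overloads shown here match your FlexCel version; the class name and parameters are placeholders:

using System.Data;
using System.Linq;
using FlexCel.Report;

public static class ReportBuilder
{
    public static void Build(DataTable source, string templatePath, string outputPath)
    {
        // Columns whose names start with "Attachment" go into the second table.
        string[] attachmentColumns = source.Columns.Cast<DataColumn>()
            .Select(c => c.ColumnName)
            .Where(n => n.StartsWith("Attachment"))
            .ToArray();
        string[] fixedColumns = source.Columns.Cast<DataColumn>()
            .Select(c => c.ColumnName)
            .Except(attachmentColumns)
            .ToArray();

        DataTable fixedPart = source.DefaultView.ToTable("table_name", false, fixedColumns);
        DataTable attachmentPart = source.DefaultView.ToTable("table_name_attachments", false, attachmentColumns);

        // Register both tables so <#table_name.*> and <#table_name_attachments.*>
        // can be laid out next to each other on the same worksheet.
        // AddTable/Run overloads assumed; check them against your FlexCel version.
        FlexCelReport report = new FlexCelReport();
        report.AddTable("table_name", fixedPart);
        report.AddTable("table_name_attachments", attachmentPart);
        report.Run(templatePath, outputPath);
    }
}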

cts:element-attribute-range-query fetches results but cts:element-attribute-value-query does not fetch any results

I want to fetch the documents that have a particular element attribute value.
I tried cts:element-attribute-value-query but didn't get any results. However, I am able to match the same element attribute value using cts:element-attribute-range-query.
Here is the sample snippet used:
let $s-query := cts:element-attribute-range-query(xs:QName("tit:title"),xs:QName("name"),"=",
"SampleTitle",
("collation=http://marklogic.com/collation/codepoint"))
let $s-query := cts:element-attribute-value-query(xs:QName("tit:title"),xs:QName("name"),
"SampleTitle",
())
return cts:search(fn:doc(),($s-query))
The problem with the range query is that it needs a range index. I have hundreds of databases across multiple hosts, so I would need to create range indexes on each one.
What could be the problem with the attribute value query?
I found the issue after a bit of research.
The matching document is actually a French-language document. Here is a sample of its structure:
<doc xml:lang="fr:CA" xmlns:tit="title">
<tit:title name="SampleTitle"/>
</doc>
cts:element-attribute-value-query is a language-dependent query. To get results from the French-language document, the language needs to be specified in the options, as follows:
cts:element-attribute-value-query(xs:QName("tit:title"),xs:QName("name"), "SampleTitle",("lang=fr"))
cts:element-attribute-range-query, on the other hand, doesn't require the language option.
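Putting it together, a minimal sketch of the full search call with the language option (the tit prefix is bound to the namespace URI taken from the sample document above):

declare namespace tit = "title";  (: prefix bound to the URI from the sample document :)

let $q := cts:element-attribute-value-query(
    xs:QName("tit:title"), xs:QName("name"),
    "SampleTitle",
    ("lang=fr"))
return cts:search(fn:doc(), $q)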
Thanks for the effort.

Read a CSV file that has an indefinite number of columns each time and create a table based on the column names in the CSV file

I have a requirement to load a CSV file into the database using Oracle APEX or PL/SQL code. The problem is that the files to be loaded will not always come with the same number of columns or the same column names.
I need to create the table and upload the data dynamically, based on the file name and the data I'm uploading.
For every file I need to create a new table dynamically and insert the data present in that CSV file.
For example:
File 1:
col1 col2 col3 col4
(NOTE: if I upload File 1, a table should be created dynamically, named after the file, with the same column names as the CSV headers and the corresponding data.)
File 2:
col1 col2 col3 col4 col5
File 3:
col4 col2 col1 col3
Depending on the columns and the file name, I need to create a table for every file upload.
Is it possible to load files like this or not?
If yes, please help me with this.
Regards,
Sachin.
((Where's the PL/SQL code in this solution!!??! Bear with me... the answer is buried in here somewhere... I introduce some considerations and assumptions you will need to think about before going into the task. In the end, you'll find that Oracle APEX actually has a built-in solution that satisfies exactly what you've specified... with some caveats.))
If you are working within the Oracle APEX platform, you have some advantages. APEX version 4.2 and higher has a page element called "Data Loading". The disadvantage, however, is that the definition of the upload target is fixed, not dynamic: you need to know how your table is structured before loading the data.
One approach to overcome this is to build a generic, two-column table as your target, which will serve for all uploads. Column 1 will be the file name and column 2 will be a single CLOB holding the entire data file's contents, including the header row. The "Data Loading" element will let the user verify and select this mapping in a couple of clicks.
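As a rough sketch, the staging table could look like this (the table and column names are placeholders, not anything APEX requires):

-- placeholder table/column names
CREATE TABLE csv_upload_staging (
  file_name     VARCHAR2(255),
  file_contents CLOB            -- the whole uploaded file, including the header row
);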
At this point, it's mostly PL/SQL back-end work doing the heavy lifting to parse and transform the uploaded data. As for the dynamic table creation, the Oracle package DBMS_SQL allows the execution of DDL statements, which could be the route to creating custom tables.
Alex Poole's comment is important as well: you will need to make some blanket assumption about the data types, or have a provision for giving more clues about what kind of data each column contains. Relying on a sample of existing data values is not a good idea... what if all the values in your upload are null? I recommend perhaps a second line in the data input with a clue about the type of data for each column... just like the intended header names, maybe: AAAAA for a five-character column, # for a numeric, MM/DD/YYYY for a date with a specific mask.
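For the dynamic DDL piece, a minimal PL/SQL sketch using DBMS_SQL (the table name and columns are hard-coded purely for illustration; in practice you would build the column list from the parsed header row):

DECLARE
  l_ddl    VARCHAR2(4000);
  l_cursor INTEGER;
BEGIN
  -- Column list hard-coded for illustration; build it from the parsed header row in practice.
  l_ddl := 'CREATE TABLE file1_upload ('
        || '  col1 VARCHAR2(4000),'
        || '  col2 VARCHAR2(4000),'
        || '  col3 VARCHAR2(4000),'
        || '  col4 VARCHAR2(4000) )';

  l_cursor := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(l_cursor, l_ddl, DBMS_SQL.NATIVE);  -- DDL statements execute at parse time
  DBMS_SQL.CLOSE_CURSOR(l_cursor);
END;
/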
The easier route:
You will need to allow your end-user access to a developer-role account on a workspace of your APEX server. It is not as scary as you think. With careful instruction and some simple precautions, I have been able to make this work with even the most non-technical of users. The reason for this is that there is a more powerful upload tool found under the following menu item:
SQL Workshop --> Utilities --> Data Workshop
There is a choice under "Data Load" --> "Spreadsheet Data"
The data load tool will automatically do the following:
Accept a CSV formatted file through a browse function on your client machine
Upload the file and parse the first record for the column layout (names)
Allow the user to create a new table from the uploaded file, or to map to an existing one.
For new tables, each column data type can be declared and also a specific numeric/date mask if additional conversion from the uploaded data is necessary.
Delimiter type, optional enclosures (like double quotes), decimal conventions and currency types can also be declared prior to parsing the uploaded file.
Once the user has identified all these mappings and settings, the table is created with the uploaded data. Any errors in record upload are reported immediately afterwards with detailed feedback on the failed records.
A security consideration to note:
You probably do not want to give end users access to your APEX server's back end... but you CAN create a new workspace just for your end users, with a new database schema for receiving their uploads, maybe with some careful resource controls. Developer is the minimum role needed... but even if the end users see the other features, they won't have access to anything important from an isolated workspace.
I implemented the isolated-workspace approach on a 4.0/4.1 release APEX platform a few years back, and it worked nicely. Our end user had control over the staging and quality checking of her data inputs (Excel spreadsheet/CSV exports collected from a combination of sources). I suppose it may have been even better to cut her out of the picture entirely and focus on automating the export-review-upload process between our database and her other sources. In this case, though, the volume of data involved was not great (hundreds to thousands of records) and the need to manually review and edit the exported data before pushing it into the database was very important... so the human element still mattered here - it is something you'll want to think about now.

Merging two feeds into one stream. How to unite them?

I work with Yahoo Pipes, and have two 'XPath Fetch Page' sources.
Individually, they work perfectly.
The first page creates the pubDate field; the second page creates the other fields.
Now I want to insert the pubDate field from the first feed into the second, so I am using the Union module.
But the pubDate field is not present in the final result.
If I change the input order of the Union module, I get only pubDate. Why?
How do I insert pubDate into the output stream?
Unfortunately, you cannot easily merge or join entries of two different feeds.
The Union operator works like in SQL: the union of a feed with entries { entryA, entryB, entryC } and another feed with entries { entryX, entryY } is the set { entryA, entryB, entryC, entryX, entryY }. That is, the entries from both feeds are included in the resulting set unmodified, with no interaction between the two feeds.
The only way to merge data from two different sources is by nesting your pipes:
Create a first pipe that takes a parameter X.
Create a second pipe that contains a loop; for each entry it calls the first pipe, passing some value as parameter X.
It's not efficient and not great, but it's possible and it works.
