I'm looking to extract values from the api_http array. I want output that looks like the following: each element should have the name, plus the URL value attached to a key called api.
{ "name": "lookproduct1", "api": "http://testapi.api.com"}
{ "name": "lookproduct2", "api": "http://testapi2.api.com"}
{ "name": "lookproduct3", "api": "http://testapi3.api.com"}
{ "name": "lookproduct4", "api": "http://testapi4.api.com"}
The JSON data:
{
"meta": {
"details": {
"value": "Details"
},
"network": {
"label": "Network:",
"value": "test"
},
"title": {
"value": "Test Report"
},
"update": {
"label": "Validation last update:",
"value": "2020-07-15 17:40 UTC"
}
},
"report": {
"api_http": [
[
{
"html_name": "Product 1",
"name": "lookproduct1",
"rank": 3
},
"http://testapi.api.com",
"GB",
"TEST"
],
[
{
"html_name": "Product 2",
"name": "lookproduct2",
"rank": 3
},
"http://testapi2.api.com",
"GB",
"TEST"
],
[
{
"html_name": "Product 3",
"name": "lookproduct3",
"rank": 3
},
"http://testapi3.api.com",
"GB",
"TEST"
],
[
{
"html_name": "Product 4",
"name": "lookproduct4",
"rank": 3
},
"http://testapi.api.com",
"GB",
"TEST"
]
]
}
}
I got as far as the following, but I'm unsure how to extract those two values and create the new output.
.report[] | .[]
Try:
.report.api_http[]|{name:values[0]["name"],api:values[1]}
My output is:
{
"name": "lookproduct1",
"api": "http://testapi.api.com"
}
{
"name": "lookproduct2",
"api": "http://testapi2.api.com"
}
{
"name": "lookproduct3",
"api": "http://testapi3.api.com"
}
{
"name": "lookproduct4",
"api": "http://testapi.api.com"
}
You could use the -c command-line option in conjunction with the following jq filter:
.report.api_http[]
| {name: .[0].name, api: .[1]}
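For example, assuming the data is saved in a file called input.json (the filename is just an assumption for illustration), the full invocation would be:
jq -c '.report.api_http[] | {name: .[0].name, api: .[1]}' input.json
The -c (compact output) flag prints each resulting object on a single line, which matches the requested format.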
I am trying to use jq to create a JSON document from a template file, passing values with --arg and the template with -f. When I execute the command below, jq just hangs forever.
I am a rookie with jq and would really appreciate it if someone could point out what I am doing wrong.
template.jq
{
"channel": "channel",
"attachments": [
{
"color": "#a7dbb5",
"blocks": [
{
"type": "header",
"text": {
"type": "plain_text",
"text": $SUMMARY,
"emoji": true
}
},
{
"type": "divider"
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*Build ID: * <\($BUILD_URL)|\($BUILD_ID)>\n*Duration:* \($DURATION)\n*User: *<\($USER_EMAIL)|\($USER_NAME)>\n*Test Cases:* \($TEST_CASES)"
},
"accessory": {
"type": "image",
"image_url": "https://raw.githubusercontent.com/sudas-px/dev-repo/main/check.png",
"alt_text": "status thumbnail"
}
},
{
"type": "divider"
},
{
"type": "section",
"fields": [
{
"type": "mrkdwn",
"text": "*Portworx*\nv\($PX_VERSION)"
},
{
"type": "mrkdwn",
"text": "*PX Backup*\nv\($PX_BACKUP_VERSION)"
}
]
},
{
"type": "section",
"fields": [
{
"type": "mrkdwn",
"text": "*Stork Image:*\n\($STORK_IMAGE)"
},
{
"type": "mrkdwn",
"text": "*Kubernetes:*\nv\($K8S_VERSION)"
}
]
},
{
"type": "divider"
},
{
"type": "section",
"fields": [
{
"type": "mrkdwn",
"text": "*Github Repository*"
},
{
"type": "mrkdwn",
"text": $GITHUB_REPO
}
]
},
{
"type": "section",
"fields": [
{
"type": "mrkdwn",
"text": "*Branch*"
},
{
"type": "mrkdwn",
"text": $GITHUB_BRANCH
}
]
},
{
"type": "divider"
},
{
"type": "actions",
"block_id": "actionblock789",
"elements": [
{
"type": "button",
"text": {
"type": "plain_text",
"text": "View Pipeline"
},
"style": "primary",
"url": $BUILD_URL
},
{
"type": "button",
"text": {
"type": "plain_text",
"text": "View Logs"
},
"url": $KIBANA_URL
}
]
}
]
}
]
}
This is the command I ran
jq --arg SUMMARY "Summary" --arg BUILD_ID "BUILD_ID" --arg BUILD_URL "BUILD_URL" --arg DURATION "DURATION" --arg USER_EMAIL "EMAIL" --arg USER_NAME "USER" --arg TEST_CASES 3 --arg PX_VERSION "VERSION" --arg PX_BACKUP_VERSION "PX_VERSION" --arg STORK_IMAGE "IMAGE_STORK" --arg K8S_VERSION "1.23.0" --arg GITHUB_BRANCH "branch" --arg GITHUB_REPO "repo" --arg KIBANA_URL "url" -f template.jq
Your "template" is just a filter that requires no input, but you forgot to tell jq that the filter won't need any input. As a result, jq is waiting to read from standard input. Use the -n option to tell jq it doesn't need to read from standard input.
jq -n <lots of --args> -f template.jq
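As a minimal sketch of what -n changes (the one-variable template below is a stand-in, not your real template):
printf '{"text": $SUMMARY}' > minimal.jq
# Without -n, jq would block here waiting for stdin; with -n it runs immediately using null as input.
jq -n --arg SUMMARY "Build passed" -f minimal.jq
This prints { "text": "Build passed" } right away. Your original command, unchanged apart from the added -n, should likewise produce the filled-in template immediately.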
Here's what I'm looking to do.
file1.json
{
"info": {
"id": "",
"name": "Text Fields",
"schema": "url"
},
"item": [
{
"name": "CompanyName Field",
"item": [
{
"name": "CompanyName is CompanyName1"
}
]
}
]
}
file2.json
[
{
"name": "Phone Field",
"item": [
{
"name": "Phone is 1234"
}
]
},
{
"name": "Job Field",
"item": [
{
"name": "Job is Job1"
}
]
}
]
Expected output after running jq
file1.json
{
"info": {
"id": "",
"name": "Text Fields",
"schema": "url"
},
"item": [
{
"name": "CompanyName Field",
"item": [
{
"name": "CompanyName is CompanyName1"
}
]
},
{
"name": "Phone Field",
"item": [
{
"name": "Phone is 1234"
}
]
},
{
"name": "Job Field",
"item": [
{
"name": "Job is Job1"
}
]
}
]
}
As a first step, I tried to at least concatenate the arrays from the two files and print the result, before trying to merge them into the first file itself, but even that isn't working.
Here's what I tried
jq '.item .' file1.json file2.json
but I get the following error:
jq: error: syntax error, unexpected $end, expecting FORMAT or QQSTRING_START (Unix shell quoting issues?) at <top-level>, line 1:
.item .
jq: 1 compile error
I have searched a lot, trust me. There are plenty of questions with similar titles, but when you look into each one they all turn out to be about very specific problems. Please help.
Use --argfile to read the second file into a variable, then use += to append it to the existing array in .item:
jq --argfile f file2.json '.item += $f' file1.json
{
"info": {
"id": "",
"name": "Text Fields",
"schema": "url"
},
"item": [
{
"name": "CompanyName Field",
"item": [
{
"name": "CompanyName is CompanyName1"
}
]
},
{
"name": "Phone Field",
"item": [
{
"name": "Phone is 1234"
}
]
},
{
"name": "Job Field",
"item": [
{
"name": "Job is Job1"
}
]
}
]
}
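A side note: newer jq releases deprecate --argfile. If your version warns about it, --slurpfile is a substitute; it wraps the file's JSON contents in an array, hence the extra [0]:
jq --slurpfile f file2.json '.item += $f[0]' file1.json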
I'm creating a UGC post with shareMediaCategory IMAGE and multiple attached images that were uploaded with the recommended Assets API.
I've noticed that LinkedIn does not respect the original order in which we send them.
Has anyone experienced this as well or has any idea what I'm missing?
{
"author": "urn:li:organization:5590506",
"lifecycleState": "PUBLISHED",
"specificContent": {
"com.linkedin.ugc.ShareContent": {
"media": [
{
"media": "urn:li:digitalmediaAsset:ID1",
"status": "READY",
"title": {
"attributes": [],
"text": "Asset 1"
}
},
{
"media": "urn:li:digitalmediaAsset:ID2",
"status": "READY",
"title": {
"attributes": [],
"text": "Asset 2"
}
},
{
"media": "urn:li:digitalmediaAsset:ID3",
"status": "READY",
"title": {
"attributes": [],
"text": "Asset 3"
}
},
{
"media": "urn:li:digitalmediaAsset:ID4",
"status": "READY",
"title": {
"attributes": [],
"text": "Asset 4"
}
},
{
"media": "urn:li:digitalmediaAsset:ID5",
"status": "READY",
"title": {
"attributes": [],
"text": "Asset 5"
}
}
],
"shareCommentary": {
"attributes": [],
"text": "Some share text"
},
"shareMediaCategory": "IMAGE"
}
},
"visibility": {
"com.linkedin.ugc.MemberNetworkVisibility": "PUBLIC"
}
}
POST https://api.linkedin.com/v2/ugcPosts
I think you want to maintain the order of media within a single post. But I see the same media ID, "media": "urn:li:digitalmediaAsset:C5500AQG7r2u00ByWjw", five times; maybe that is just dummy data. I think you should try a GET on the post:
GET https://api.linkedin.com/v2/ugcPosts/{encoded ugcPostUrn|shareUrn}?viewContext=AUTHOR
There you can see the media order. I think the media order stays the same; it is just how LinkedIn displays it.
{
"author": "urn:li:organization:5590506",
"lifecycleState": "PUBLISHED",
"specificContent": {
"com.linkedin.ugc.ShareContent": {
"media": [
{
"media": "urn:li:digitalmediaAsset:C5500AQG7r2u00ByWjw",
"status": "READY",
"title": {
"attributes": [],
"text": "Sample Video Create"
}
}
],
"shareCommentary": {
"attributes": [],
"text": "Some share text"
},
"shareMediaCategory": "VIDEO"
}
},
"targetAudience": {
"targetedEntities": [
{
"locations": [
"urn:li:country:us",
"urn:li:country:gb"
],
"seniorities": [
"urn:li:seniority:3"
]
}
]
},
"visibility": {
"com.linkedin.ugc.MemberNetworkVisibility": "PUBLIC"
}
}
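For reference, the suggested GET might look roughly like this with curl; the bearer token, the Restli protocol header, and the URL-encoded URN are placeholders/assumptions rather than values from the question:
curl -s \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "X-Restli-Protocol-Version: 2.0.0" \
  "https://api.linkedin.com/v2/ugcPosts/urn%3Ali%3AugcPost%3A1234567890?viewContext=AUTHOR"
Comparing the media array in that response with what the LinkedIn UI shows should tell you whether the order was changed on ingestion or only in the rendering.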
I have a Swagger api-docs.json definition generated by SpringFox.
Below is a minimal reproducible example:
{
"swagger": "2.0",
"info": {
"description": "Example REST API.",
"version": "15.11.02",
"title": "Example REST API",
"contact": {
"name": "ExampleTeam",
"url": "https://example.com/",
"email": "support#example.com"
},
"license": {
"name": "Apache License 2.0",
"url": "https://www.apache.org/licenses/LICENSE-2.0.txt"
}
},
"host": "d01088db.ngrok.io",
"basePath": "/cloud",
"tags": [
{
"name": "All Endpoints",
"description": " "
}
],
"paths": {
"/api/v2/users/{userId}/jobs/{jobId}": {
"get": {
"tags": [
"Builds",
"All Endpoints"
],
"summary": "Get job.",
"operationId": "getJobUsingGET",
"produces": [
"*/*"
],
"parameters": [
{
"name": "jobId",
"in": "path",
"description": "jobId",
"required": true,
"type": "integer",
"format": "int64"
},
{
"name": "userId",
"in": "path",
"description": "userId",
"required": true,
"type": "integer",
"format": "int64"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/APIPipelineJob"
}
},
"401": {
"description": "Unauthorized"
},
"403": {
"description": "Forbidden"
},
"404": {
"description": "Not Found"
}
},
"deprecated": false
}
}
},
"definitions": {
"APIPipelineJob": {
"type": "object",
"properties": {
"archiveTime": {
"type": "string",
"format": "date-time",
"example": "example"
},
"content": {
"type": "string",
"example": "example"
},
"createTime": {
"type": "string",
"format": "date-time",
"example": "example"
},
"id": {
"type": "integer",
"format": "int64",
"example": "example"
},
"name": {
"type": "string",
"example": "example"
},
"selfURI": {
"type": "string",
"example": "example"
},
"type": {
"type": "string",
"example": "example",
"enum": [
"BUILD",
"DEPLOY"
]
},
"userId": {
"type": "integer",
"format": "int64",
"example": "example"
}
},
"title": "APIPipelineJob",
"xml": {
"name": "APIPipelineJob",
"attribute": false,
"wrapped": false
}
}
}
}
When I import it into SwaggerHub, I get a standardization error:
'definitions.*' not allowed -> API must not have local definitions (i.e. only $refs are allowed)
I have found the recommended solution in the SwaggerHub documentation.
But here is my question: how do I achieve, with Springfox, either
splitting the definitions into domains (and then using references), or
inlining the schemas?
Or maybe there is another way to get rid of the above standardization error?
If you go to your home page, then hover over your organization on the left-hand side and go to Settings > Standardization, you should see some options. Unselect "API must not have local definitions (i.e. only $refs are allowed)" at the bottom.
And don't forget to save at the top right!
We use PingFederate 8.2.2. All the REST API calls to create PF objects are automated, but /sp/adapters (https://pfhost:9999/pf-admin-api/v1/sp/adapters) is not working with the JSON below. If I create the same configuration manually, it works. The JSON below was retrieved from an /sp/adapters instance that had already been created manually, but when I send the same JSON in an API call it returns the error below. Please help me solve this problem.
ERROR:
{
"resultId": "validation_error",
"message": "Validation error(s) occurred. Please review the error(s) and address accordingly.",
"validationErrors": [
{
"message": "'' is not a valid selection for 'Send Extended Attributes'",
"fieldPath": "configuration.fields[21].value",
"errorId": "plugin_validation_error"
}
]
}
JSON:
{
"id": "opentokenadapt1",
"name": "opentokenadapt1",
"pluginDescriptorRef": {
"id": "com.pingidentity.adapters.opentoken.SpAuthnAdapter"
},
"configuration": {
"tables": [],
"fields": [
{
"name": "Password",
"value": "Password123"
},
{
"name": "Confirm Password",
"value": "Password123"
},
{
"name": "Transport Mode",
"value": "2"
},
{
"name": "Token Name",
"value": "opentoken"
},
{
"name": "Cipher Suite",
"value": "2"
},
{
"name": "Authentication Service",
"value": ""
},
{
"name": "Account Link Service",
"value": ""
},
{
"name": "Logout Service",
"value": ""
},
{
"name": "Cookie Domain",
"value": ""
},
{
"name": "Cookie Path",
"value": "/"
},
{
"name": "Token Lifetime",
"value": "300"
},
{
"name": "Session Lifetime",
"value": "43200"
},
{
"name": "Not Before Tolerance",
"value": "0"
},
{
"name": "Force SunJCE Provider",
"value": "false"
},
{
"name": "Use Verbose Error Messages",
"value": "false"
},
{
"name": "Obfuscate Password",
"value": "true"
},
{
"name": "Session Cookie",
"value": "false"
},
{
"name": "Secure Cookie",
"value": "false"
},
{
"name": "HTTP Only Flag",
"value": "true"
},
{
"name": "Send Subject as Query Parameter",
"value": ""
},
{
"name": "Subject Query Parameter ",
"value": ""
},
{
"name": "Send Extended Attributes",
"value": ""
},
{
"name": "Skip Trimming of Trailing Backslashes",
"value": "false"
}
]
},
"attributeContract": {
"coreAttributes": [
{
"name": "subject"
}
],
"extendedAttributes": [
{
"name": "nsroles"
}
]
}
}
"Send Extended Attributes" needs a valid value (not the empty string you've given it). The possible values are "0" (None), "1" (Cookies) or "2" (Query Parameters).
One tip for narrowing these issues down: build the SP adapter instance in the PingFederate administrative console (UI), then compare it with the JSON model you GET from the API.
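For example, once "Send Extended Attributes" is set to a valid value (say "0"), the create call could be retried along these lines; the admin credentials, the payload file name, and the X-XSRF-Header requirement are assumptions that may differ in your environment or PingFederate version:
# Re-submit the corrected adapter definition to the admin API
curl -k -u Administrator:$PF_ADMIN_PASS \
  -H "Content-Type: application/json" \
  -H "X-XSRF-Header: PingFederate" \
  -X POST -d @opentokenadapt1.json \
  https://pfhost:9999/pf-admin-api/v1/sp/adapters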