How to allow only GET and PATCH requests using Apigee?

I am using Apigee to allow only GET and PATCH requests. I want to raise a fault with error code 403 for all other requests. For that I have used the Raise Fault policy.
<PreFlow name="PreFlow">
<Request>
<Step>
<Name>RF-only-GET-PATCH</Name>
<Condition>(request.verb !="GET") || (request.verb !="PATCH")</Condition>
</Step>
</Request>
<Response/>
</PreFlow>
I also tried the following combinations:
<Condition>(request.verb !="GET") || (request.verb !="PATCH")</Condition>
<Condition>(request.verb !="GET") or (request.verb !="PATCH")</Condition>
<Condition>((request.verb !="GET") || (request.verb !="PATCH"))</Condition>
None of these worked.
What should the condition be to raise the fault RF-only-GET-PATCH?
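The likely culprit is the operator: (request.verb != "GET") || (request.verb != "PATCH") is true for every verb, because no verb can equal both GET and PATCH at once, so the fault fires on every request, including GET and PATCH. By De Morgan's laws, "neither GET nor PATCH" needs and, not or. A minimal sketch of the corrected step:
<Step>
<Name>RF-only-GET-PATCH</Name>
<!-- raise the 403 fault only when the verb is neither GET nor PATCH -->
<Condition>(request.verb != "GET") and (request.verb != "PATCH")</Condition>
</Step>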

Related

Can we create an MUnit test for On Error Propagate in Mule 4?

Should an MUnit test for On Error Propagate fail or pass?
Scenario:
When I get a DB connectivity error, it triggers an email with an error notification, via a flow reference to the error-email flow. The flow reference is placed inside On Error Propagate.
I have mocked the database with an error (DB CONNECTIVITY in the Error tab) and wrapped the flow reference in a Try block. My tests are failing.
Should the test suite fail with an error message, or pass green in the MUnit tests?
<munit:test name="implementation-suiteTest" doc:id="804123309-0b27-4e5a-beb3-5372b91eafc3" expectedErrorType="DB:QUERY_EXECUTION">
<munit:behavior >
<munit:set-event doc:name="Set Event" doc:id="5d0290f4-9855-42ba-b434-a4322457819c">
<munit:payload value="#[output application/java --- readUrl('classpath://insertdatabaseFlowtest\mock_payload.dwl')]" />
</munit:set-event>
<munit-tools:mock-when doc:name="Mock " doc:id="d8493233-303b-4765-89b4-9ae19bdffa1f" processor="db:bulk-insert">
<munit-tools:with-attributes>
<munit-tools:with-attribute whereValue="b645fb60-7b08-4939-894d-a476ee58b325" attributeName="doc:id" />
</munit-tools:with-attributes>
<munit-tools:then-return >
<munit-tools:payload value="#[output application/java --- readUrl('classpath://insertdatatodatabaseFlowtest\mock_payload1.dwl')]" />
<munit-tools:error typeId="DB CONNECTIVITY" />
</munit-tools:then-return>
</munit-tools:mock-when>
<munit-tools:mock-when doc:name="Send Email" doc:id="c9cb74e7-9b56-4634-a272-d954d4ab8fb2" processor="email:send">
<munit-tools:with-attributes >
<munit-tools:with-attribute whereValue="3f8bda1c-2e22-43d8-8e0a-3da9a6653dd7" attributeName="doc:id" />
</munit-tools:with-attributes>
<munit-tools:then-return >
<munit-tools:payload value="#[output application/java --- readUrl('classpath://insertdatatodatabaseFlowtest\mock_payload2.dwl')]" />
</munit-tools:then-return>
</munit-tools:mock-when>
</munit:behavior>
<munit:execution>
<try doc:name="Try" doc:id="e73eae0a-25fb-46bb-b0e9-e845bb7310e0" >
<flow-ref doc:name="Flow-ref " doc:id="8582675c-3203-4f70-8988-370e82cd5249" name="insert-dataFlow" />
<error-handler >
<on-error-continue enableNotifications="true" logException="true" doc:name="On Error Continue" doc:id="7999aa4b-4959-4c64-98c4-284f883c5778" >
<logger level="INFO" doc:name="Logger" doc:id="6018dc63-a176-41aa-9ca3-be39264c9e0f" />
</on-error-continue>
</error-handler>
</try>
</munit:execution>
<munit:validation>
<munit-tools:assert-that doc:name="Assert that" doc:id="6b255b30-e43a-4001-a62d-e6374a9d16f0" expression="#[payload]" is="#[MunitTools::containsString('Error Email Sent')]"/>
</munit:validation>
</munit:test>
Thanks
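A likely reason the tests fail: expectedErrorType="DB:QUERY_EXECUTION" only makes the test pass if that exact error propagates out of the execution block, but the Try scope's on-error-continue swallows the error before it can reach MUnit, and the mocked type ("DB CONNECTIVITY") would not match "DB:QUERY_EXECUTION" anyway. Either expect the error and remove the Try wrapper, or keep the Try, drop expectedErrorType, and assert on the payload. A minimal sketch of the first option, assuming the mock raises DB:CONNECTIVITY:
<munit:test name="implementation-suiteTest-error-propagates" expectedErrorType="DB:CONNECTIVITY">
<munit:behavior>
<!-- same db:bulk-insert mock as above, but with <munit-tools:error typeId="DB:CONNECTIVITY"/> -->
</munit:behavior>
<munit:execution>
<!-- no Try/on-error-continue here: the error must escape this scope for expectedErrorType to match -->
<flow-ref doc:name="Flow-ref" name="insert-dataFlow"/>
</munit:execution>
</munit:test>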

Adding one user with htpasswd on 2 different servers using an SSH connection

I'm working with Apache Camel and I want to add one user on two different servers. I also want to test whether ssh.redundancy=true. This is my code:
<simple>${headers.op} == 1</simple>
<doTry id="try-cmd-httpd">
<setBody id="httpd.cmd.htpasswd">
<simple>htpasswd -b /etc/httpd/passwords ${header.login} ${header.passwd} {{httpd.io_redir}}</simple>
</setBody>
<to id="to_exec_htpaswd" uri="ssh://{{ssh.user}}:{{ssh.passwd}}@{{ssh.host}}:{{ssh.port}}"/>
<log id="htpasswdResp_log" message="response: ${body}"/>
<to id="to_exec_htpaswd2" uri="ssh://{{ssh.user}}:{{ssh.passwd}}@{{ssh.host2}}:{{ssh.port}}"/>
<log id="htpasswdResp_log2" message="response: ${body}"/>
I found the solution. Just add a choice on the parameter ssh.redundancy and invoke the second endpoint from inside it:
<choice>
<when id="redundancytrue">
<simple>{{ssh.redundancy}} == "true"</simple>
<setBody id="httpd.cmd.htpasswd">
<simple>htpasswd -b /etc/httpd/passwords ${header.login} ${header.passwd} {{httpd.io_redir}}</simple>
</setBody>
<to id="to_exec_htpaswd2" uri="ssh://{{ssh.user}}:{{ssh.passwd}}@{{ssh.host2}}:{{ssh.port}}"/>
</when>
</choice>
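Putting both pieces together, a sketch of the full fragment, assuming the property names and ids from the snippets above. Note that the body has to be re-set before the second call, because the first ssh endpoint replaces it with the command's output:
<doTry id="try-cmd-httpd">
<setBody id="httpd.cmd.htpasswd">
<simple>htpasswd -b /etc/httpd/passwords ${header.login} ${header.passwd} {{httpd.io_redir}}</simple>
</setBody>
<!-- first server: always executed -->
<to id="to_exec_htpaswd" uri="ssh://{{ssh.user}}:{{ssh.passwd}}@{{ssh.host}}:{{ssh.port}}"/>
<log id="htpasswdResp_log" message="response: ${body}"/>
<choice>
<when id="redundancytrue">
<simple>{{ssh.redundancy}} == "true"</simple>
<!-- re-set the body: the first ssh call replaced it with the command output -->
<setBody>
<simple>htpasswd -b /etc/httpd/passwords ${header.login} ${header.passwd} {{httpd.io_redir}}</simple>
</setBody>
<to id="to_exec_htpaswd2" uri="ssh://{{ssh.user}}:{{ssh.passwd}}@{{ssh.host2}}:{{ssh.port}}"/>
<log id="htpasswdResp_log2" message="response: ${body}"/>
</when>
</choice>
<!-- doCatch/doFinally from the original route go here -->
</doTry>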

Run Keyword And Return Status gets stuck before the keyword ends

Has anybody observed a fault where Run Keyword And Return Status gets stuck and doesn't return from the keyword?
It's just not closing the keyword...
I was waiting 3 days... (OK, it was the weekend.)
<kw name="Run Keyword And Return Status" library="BuiltIn">
<doc>Runs the given keyword with given arguments and returns the status as a Boolean value.</doc>
<arguments>
<arg>taut_ssh.SftpClient.File Should Exist</arg>
<arg>${filepath}${filePattern}</arg>
</arguments>
<assign>
<var>${ret}</var>
</assign>
<kw name="File Should Exist" library="taut_ssh.SftpClient">
<doc>Fails if the given `path` does NOT point to an existing file.</doc>
<arguments>
<arg>${filepath}${filePattern}</arg>
</arguments>
<msg timestamp="20200229 00:45:28.987" level="FAIL">file "/saps/tmn/GDI.PMAAV167.00.001.tgz" does not exist</msg>
<status status="FAIL" endtime="20200229 00:45:28.988" starttime="20200229 00:45:28.984"></status>
</kw>
<msg timestamp="20200229 00:45:28.988" level="INFO">${ret} = False</msg>
<status status="PASS" endtime="20200229 00:45:28.988" starttime="20200229

Gracenote eyeQ GNIDs changed over time

On August 6th, I made a TVGRID_LOOKUP request with the Gracenote eyeQ API.
The response returned an episode of "The Big Bang Theory - The Zazzy Substitution" (airing time 21:45).
The TVPROGRAM GNID was 442470733-5294AFF66A2B66D6CF9368BCE777839F.
Today I made the same request and got a different GNID (445129959-C521A678BE53213977744678C90B202C).
What happened? I thought GNIDs were unique.
Just in case, here's my request:
<?xml version="1.0"?>
<QUERIES>
<AUTH>
<CLIENT>__CLIENT_ID__</CLIENT>
<USER>__USER_ID__</USER>
</AUTH>
<QUERY CMD="TVGRID_LOOKUP">
<TVCHANNEL>
<GN_ID>251533333-26F45A038CFBD8323F70D3944EB16008</GN_ID>
</TVCHANNEL>
<DATE TYPE="START">2014-08-11T20:00</DATE>
<DATE TYPE="END">2014-08-11T20:10</DATE>
</QUERY>
</QUERIES>
TVPROGRAMs are unique within the TVGRID, but are not guaranteed to be consistent from day to day. However, if you do a follow-up query, you can get the unique GN_ID of the AV_WORK that represents the show/episode. For example:
<QUERIES>
<AUTH>
<CLIENT>_your_client_id_</CLIENT>
<USER>_your_user_id_</USER>
</AUTH>
<LANG>eng</LANG>
<COUNTRY>usa</COUNTRY>
<QUERY CMD="TVPROGRAM_FETCH">
<GN_ID>445129959-C521A678BE53213977744678C90B202C</GN_ID>
</QUERY>
</QUERIES>
Returns:
...
<AV_WORK>
<GN_ID>240234711-A3BEDE6BF00D48B35FAE5F0E66305B30</GN_ID>
</AV_WORK>
...
This AV_WORK GN_ID will be the same between the different TVPROGRAM GN_IDs you received.

"XMLCommand.initialize failed: java.lang.NullPointerException" when using dataset-proxy in a workflow databroker

I'm creating a workflow databroker, and in the pre-workflow I am using a dataset-proxy to iterate over the populate-dataset. However, I get the following error when I compile:
XMLCommand.initialize failed: java.lang.NullPointerException
at nz.co.aviarc.xml.command.dataset.DatasetProxy.initialize(DatasetProxy.java:35)
at com.aviarc.framework.xml.command.XMLCommandElementImpl.finalize(XMLCommandElementImpl.java:90)
at com.aviarc.framework.xml.compilation.XMLSAXHandler.endElement(XMLSAXHandler.java:336)
at net.sf.saxon.event.ContentHandlerProxy.endElement(ContentHandlerProxy.java:391)
at net.sf.saxon.event.NamespaceReducer.endElement(NamespaceReducer.java:213)
at net.sf.saxon.event.ReceivingContentHandler.endElement(ReceivingContentHandler.java:443)
at org.apache.xerces.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:598)
at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanEndElement(XMLNSDocumentScannerImpl.java:673)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(XMLDocumentFragmentScannerImpl.java:1645)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:324)
at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:875)
at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:798)
at org.apache.xerces.parsers.XMLParser.parse(XMLParser.java:108)
at org.apache.xerces.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1198)
at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:564)
at net.sf.saxon.event.Sender.sendSAXSource(Sender.java:404)
at net.sf.saxon.event.Sender.send(Sender.java:193)
at net.sf.saxon.IdentityTransformer.transform(IdentityTransformer.java:30)
at com.aviarc.framework.xml.compilation.AviarcXMLResourceCompiler.compile(AviarcXMLResourceCompiler.java:137)
...
I get exactly the same error even when I use the code example straight out of the documentation (com.aviarc.dataset:1.1.0):
<workflow xmlns:ds="urn:aviarc:xmlcommand:com.aviarc.dataset">
<ds:dataset-proxy dataset="ds" proxyname="dsproxy">
<set-current-row dataset="dsproxy" position="2" />
<set-field field="dsproxy.email" value="test@test.com" />
</ds:dataset-proxy>
</workflow>
It turns out that the documentation is wrong: proxyname is not a valid attribute on dataset-proxy. I didn't see it at first (because of the huge stack trace), but I was also getting this warning:
Unknown attribute 'proxyname'
The correct attribute is name, not proxyname. Changing this resolved the error.
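For reference, here is the documentation example with the corrected attribute:
<workflow xmlns:ds="urn:aviarc:xmlcommand:com.aviarc.dataset">
<ds:dataset-proxy dataset="ds" name="dsproxy">
<set-current-row dataset="dsproxy" position="2" />
<set-field field="dsproxy.email" value="test@test.com" />
</ds:dataset-proxy>
</workflow>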
