In SCC does it make any difference w.r.t. programming languages that the contracts are written in Groovy? - spring-cloud-contract

The Pact documentation lists this as a flaw in SCC. To quote Pact:
Pact generates language-neutral acceptance contracts, in the form of
JSON pact files. These pact files can be created, or tested, by
anything that implements the Pact specification, whether the code is
Ruby, Javascript, the JVM, or any other language. Even though it is
possible to use SCC with non-JVM languages, it has no native support
for them and requires that contracts are written manually in YAML and
the use of Docker to run the tests.
However, after going through a lot of the SCC docs, I find that it doesn't matter that you're using Groovy to write the contracts. The reason is that you're not really tied to Java, because (based on my understanding):
The SCC plugin does the job of generating the stubs for you and running the contracts against the service.
On the consumer side, the stubs can be used directly, so there is no need to parse the contract file at all.
So my question is: does it matter which language you use with SCC when the contracts are written in Groovy? In other words, does writing SCC contracts in Groovy tie you to a particular language anywhere in the workflow?

No, it doesn't. Most likely that part of the Pact docs was written a couple of years ago. You can define the contracts in YAML, Java, Groovy, Kotlin, or Pact JSON, and the functionality will be the same.
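For illustration, here is a minimal sketch of the same contract in two of those formats (the endpoint and payload are made up, not taken from the question):

    import org.springframework.cloud.contract.spec.Contract

    // Groovy DSL: the SCC plugin turns this into a WireMock stub and a
    // producer-side verification test, so no consumer ever parses it.
    Contract.make {
        description "should return user 1"
        request {
            method 'GET'
            url '/users/1'
        }
        response {
            status 200
            headers {
                contentType applicationJson()
            }
            body(id: 1, name: "alice")
        }
    }

The equivalent YAML, which requires no JVM language at all to author:

    request:
      method: GET
      url: /users/1
    response:
      status: 200
      headers:
        Content-Type: application/json
      body:
        id: 1
        name: alice

Either way, consumers only ever see the generated stub artifact (for example via Stub Runner), never the contract source, which is why the authoring language doesn't leak into the rest of the workflow.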

Related

ARM ADF Deployment Features

I am not sure that this is the appropriate forum, but we are having some issues with the recommended CI/CD flow for Azure Data Factory (ADF), which is forcing us to write our own script that deploys ADF resources through the ADF REST APIs instead of using the auto-generated ARM templates.
Before undertaking this work, we wanted to verify a few assumptions about ARM template deployment for ADF resources. Are all of the following assumptions true?
ARM template deployment for ADF resources simply calls the ADF REST APIs to deploy resources, and therefore has the same limitations as calling the REST APIs ourselves?
ARM template deployment for ADF does not perform any optimizations before calling the REST APIs, such as reading the current definition of a resource before writing and only writing if the definition has changed?
Are there any other ARM limitations or optimizations that we should be aware of in order to make sure that our performance is as good as ARM's?
As we know, an ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax: you describe your intended deployment without writing the sequence of programming commands needed to create it.
ARM has its own benefits over the REST APIs, such as incremental and complete deployment modes, which are not available when calling the REST APIs directly.
David Gaspard: Coming to your ask, an ARM template is a collection of JSON definitions in which you can combine multiple REST API calls into a single module and reuse it across deployments. It also lets you use variables. It depends on what best suits your requirements.
You can use incremental deployment mode in ARM, which takes care of the optimization concern.
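As a minimal sketch (the factory name, resource group, and file name below are made up), an ARM template that deploys an empty Data Factory looks like this:

    {
      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "resources": [
        {
          "type": "Microsoft.DataFactory/factories",
          "apiVersion": "2018-06-01",
          "name": "my-data-factory",
          "location": "[resourceGroup().location]"
        }
      ]
    }

Deploying it with az deployment group create --resource-group my-rg --template-file adf-template.json --mode Incremental leaves resources that are not in the template untouched, whereas --mode Complete deletes them.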
A few references that might help:
https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-rest
Related discussion: Azure ARM Templates and REST API
ARM limitations: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/best-practices

What is the use of GatewayScripts in DataPower?

Could you please point me to any useful online resources to learn it and to implement some scenarios to explore it further? Thanks.
DataPower appliances have historically been, in order:
XML transformation acceleration devices (that used to be a thing; software XSLT processing was too slow)
SSL offloading devices (again, that used to be a thing, for the same reason)
Web site and application gateways. Both web site and web service security, concentrated around HTTP and SOAP/XML application-layer mechanisms and standards (SSL/TLS, WS-Security, SAML, etc.), but also token management, security conversion... think "super SSO" plus an application security gateway
More specialized integration tools: transformation of XML (with XSLT), transformation to/from non-XML formats (like CSV), database connections, integration patterns (routing, composing, and a LOT more). Some called the DataPower a lightweight ESB.
More specialized uses: B2B (EDI), JSON processing, REST/JSON support, API management (when used as the deployment point for API Connect)
Notice that each later set of features builds on the earlier ones (the ESB features build on web service security, etc.).
As you may know, most DataPower development is done with transformations. The default, established language for them is XSLT (XQuery is also a historic, less popular option).
XSLT is both one of the most powerful and one of the most horrible languages to work with. Kind of like the Perl+regex of the XML world...
... but there is another problem with XSLT: it was not designed to work with JSON, which had the DataPower of 10 years ago heading for a fast retirement.
At first, IBM designed pseudo-XML ways of dealing with JSON. You could convert inbound JSON to XML and work with the JSON as XML in XSLT. The inverse operation was to use XSLT to generate JSON... it worked perfectly, but it kind of looked like old-school HTML/PHP merging code.
So IBM came up with a good idea: GatewayScript.
(Mostly based on many other good ideas)
GatewayScript is basically ECMAScript 2015 (ES6) + CommonJS 1.0 + many super-popular JS crypto libraries.
ECMAScript is, of course, better known as JavaScript.
Pertaining to your question, the main advantage of GatewayScript is that it enables easier development of all the features in the list above for modern REST/JSON APIs, instead of the older (but still good) SOAP/XML web services.
GatewayScript has been around for years now; it is no longer a "beta" option.
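To make that concrete, here is a minimal sketch of a GatewayScript transformation (the payload field added below is made up; session, header-metadata, readAsJSON, and write are the standard DataPower APIs):

    // Read the incoming payload as JSON, add a field, and write it back out.
    var hm = require('header-metadata');

    session.input.readAsJSON(function (error, json) {
        if (error) {
            // Reject the transaction if the payload is not valid JSON.
            session.reject('Invalid JSON payload: ' + error.message);
            return;
        }
        json.processedBy = 'gatewayscript'; // hypothetical field added to the payload
        hm.current.set('Content-Type', 'application/json');
        session.output.write(json);
    });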
Here are some other neat GatewayScript features:
Access to a DOM model representing the incoming and outgoing versions of the document, in simple JS notation
Better errors in the logs when something does not work (you get the .js line number, unlike with XSLT errors)
Better debugging options (you can enable a line-by-line debugger)
Some examples from the web written for Node.js and other JS frameworks work as-is... which is amazing
A very useful IBM site (DataPower Playground) where you can learn and test GatewayScript examples without your own DataPower, à la w3schools
And more.
I hope this helps.
GhislainCote's answer is very complete, but basically GatewayScript is Node.js with an added framework for handling the session object, which contains your data/payload.
There are also some special objects, e.g. service-metadata and header-metadata, that give you access to DataPower variables and headers.
Sample scripts are available in the store:///gatewayscript/ directory, and see store:///healthcheck.js for an example.
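For instance, a short sketch of those special objects in action (the chosen variable and header are just examples):

    // service-metadata exposes DataPower service variables,
    // header-metadata exposes the protocol headers.
    var sm = require('service-metadata');
    var hm = require('header-metadata');

    var ruleName = sm.getVar('var://service/transaction-rule-name');
    var contentType = hm.current.get('Content-Type');

    session.output.write({ rule: ruleName, contentType: contentType });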
Also review the Knowledge Center; it contains a lot of help and information about GatewayScript:
https://www.ibm.com/support/knowledgecenter/SS9H2Y_7.7.0/com.ibm.dp.doc/gatewayscript_model.html
GatewayScript is very powerful. I've coded support for AS2 enveloping/de-enveloping (for customers without the B2B module option) and RosettaNet handling in GatewayScript, so there is pretty much no limit to what you can achieve!

Step definitions library for Meteor-cucumber/chimp

Hi, I am looking for predefined (common) step definitions for Meteor-cucumber/Chimp.
I have used PHP's Behat (a BDD Cucumber framework). It has this extension and this class, which give you common step definitions out of the box; you don't need to write those step definitions yourself.
Down below is the list of step definitions you get from Behat.
Short Answer
This sort of step-definition library doesn't exist, and we (the authors of Chimp) won't be adding one, because we have seen that they are very harmful in the long run.
It looks like you want to write test scripts, in which case you would be better off using Chimp with Mocha plus custom WebdriverIO commands, not Cucumber.
Long Answer
Feature files with plain-language scenarios and steps are intended to discover and express the domain of your application. The natural freeform text encourages you to use language that you can share with the entire team, otherwise known as the ubiquitous domain language.
You are about to make one of the most common mistakes with Cucumber: using it as a UI testing tool. Using UI-based steps breaks the ubiquitous-language principle.
Step reuse should be centered on the business domain so that you create a ubiquitous domain language. If you use UI steps instead of specs, you end up creating technical debt without knowing it. Gherkin syntax is not easy to refactor, and if you change your step implementations, you need to update them in multiple places. For domain concerns this is usually not a big issue, but UI tests tend to reuse steps heavily.
It sounds like you are interested in good code reuse. If you think about it, WebdriverIO already has a great API, and most of the steps you want would just be wrappers around it.
Rather than creating this extraneous translation layer, you should just use Mocha to write the tests and access WebdriverIO's API directly. This way you have the full JavaScript language to apply solid software engineering practices, instead of the simplistic Gherkin parser.
WebdriverIO also has a great custom-commands feature that allows you to create all of the methods you mentioned above; a sketch follows. An extension file that adds a ton of these commands would be VERY useful.
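For example, a hedged sketch of such a custom command (the selectors and the loginAs name are invented for illustration):

    // Register a reusable command on the WebdriverIO browser object.
    browser.addCommand('loginAs', function (username, password) {
        browser.url('/login');
        browser.setValue('#username', username);
        browser.setValue('#password', password);
        browser.click('#submit');
    });

    // Any test can now call it directly:
    browser.loginAs('jane@example.com', 's3cret');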
We have written a repository with best practices and some do's and don'ts lessons. In particular, you should see:
Lesson #1: Test Scripts !== Executable Specifications
Lesson #2: Say No To Natural Language Test Scripts
You might also want to read:
Aslak's view of BDD
BDD Tool Cucumber is Not a Testing Tool
To test my UI I will use Mocha; I don't need Cucumber specs.
As a task runner I will use Chimp (Chimp uses WebdriverIO).
Here is a quick Mocha + Chimp how-to.
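Something like the following is the shape of it, assuming a local Meteor app on port 3000 and chai for assertions (the page title is made up); Chimp's Mocha mode exposes a synchronous global browser object:

    var expect = require('chai').expect; // chai ships with Chimp projects

    describe('home page', function () {
        it('shows the app title', function () {
            browser.url('http://localhost:3000');
            expect(browser.getTitle()).to.equal('My App');
        });
    });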

Does Apache Thrift provide reflection API?

Does Thrift provide a way to inspect struct fields at runtime?
My use case is with C# but the question is regarding the standard Thrift API.
There is no standard Thrift API across languages, so what you can do beyond serialization is highly language-dependent. If you can't accomplish what you want using reflection alone, examine the code that the Thrift compiler generates for the Thrift object you are interested in. I've not seen C# Thrift-generated code, but it may contain additional data that could be useful to you.
I am very familiar with the Java implementation, and I can tell you that with Java there is no need to use reflection at all. Every Thrift-generated class contains the information that allows the deserializer to reconstruct the class from field id numbers. The Java Thrift compiler creates static members and methods which contain pretty much everything you would ever want. For Java it is actually better than reflection, because it includes the element types of lists/maps/sets.
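For example, every generated Java struct exposes a static metaDataMap you can walk (the User struct here is hypothetical; FieldMetaData and FieldValueMetaData are the real org.apache.thrift.meta_data classes):

    import java.util.Map;
    import org.apache.thrift.meta_data.FieldMetaData;
    import org.apache.thrift.meta_data.FieldValueMetaData;

    public class ThriftIntrospection {
        public static void main(String[] args) {
            // Each generated class has a static metaDataMap keyed by its _Fields enum.
            for (Map.Entry<User._Fields, FieldMetaData> e : User.metaDataMap.entrySet()) {
                FieldMetaData meta = e.getValue();
                FieldValueMetaData value = meta.valueMetaData;
                System.out.printf("id=%d name=%s type=%d requirement=%d%n",
                        e.getKey().getThriftFieldId(),
                        meta.fieldName,
                        value.type,
                        meta.requirementType);
            }
        }
    }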
Now, there is no guarantee that the format of this data won't change in future versions of Thrift, but given that all of the various protocols depend on it, this 'hidden' API should be fairly stable.
If you have access to the IDL at runtime, you could use an IDL parser and infer the generated fields that way.
I'm not an expert in C#, but you could maybe link to the native libparse library used in the Thrift compiler (I'm not sure whether the parse library is generic enough to be used like that; I'm just assuming).
Alternatively, you could use the parser from Facebook's Swift (https://github.com/facebook/swift/tree/master/swift-idl-parser, or download the JAR from http://central.maven.org/maven2/com/facebook/swift/swift-idl-parser/0.13.2/swift-idl-parser-0.13.2.jar). This is probably the easier and better option in your case; even though it is a Java library, I think it should convert just fine to the CLR using IKVM.NET.
A third, stupidly simple and hackish way to do this would be to use the Thrift HTML generator to produce HTML documentation, then parse it with regexes or run it through HTML Tidy and parse it as XML.

Working with Java callouts in Apigee?

Can anyone explain Java callouts? A little help will do. Actually, I have several doubts about where to add the expressions and message-flow JARs and where to add my custom JAR.
Can I access the resources/java folder directly, and can I use it to store my data?
First, check the Apigee docs:
Customize an API using Java
http://apigee.com/docs/api-services/content/customize-api-using-java
Keep in mind that Java callouts are only supported in the paid Apigee Edge product, not the free developer platform.
As you decide how to use Java, you should consider this basic hierarchy of policy management:
Policy configuration first: Apigee policy configurations are in broad use, therefore tested daily by clients, and the most performant option.
JavaScript callout: for things you can't do in a standard policy, there is JavaScript (a minimal sketch follows this list). Keep in mind this is "compiled JavaScript", which means that when you deploy your project, the JS gets interpreted by the Java Rhino engine and then runs like native code. Very fast, very scalable, and very easy to manage, since your code is all in plain text files.
Java: you need a pretty compelling reason to use Java. The most common cases are complex connections that must be negotiated with custom encryption schemes, or manipulating binary content. While performant, it's the most difficult code to manage (you upload compiled JARs, so if someone takes over your work, the source code lives in a separate place from your deployment bundle), and it's the most difficult to debug in the event of a failure.
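As a hedged sketch of the JavaScript option (the flow.greeting variable is made up; context.getVariable and context.setVariable are the standard objects exposed to JS policies):

    // Apigee JavaScript policy: read one flow variable, derive another.
    var verb = context.getVariable('request.verb');
    context.setVariable('flow.greeting', 'saw a ' + verb + ' request');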
To your specific question: all Apigee variables are available in Java, and Java gives you pretty much god-like powers on the local server where the code is executed. Keep in mind that Apigee's physical architecture is distributed: your JAR may run on different servers for different API calls, so any persistent data (that you might want to store locally) should really be put into a Key Value Map and read as needed. Keep your API development as stateless as possible.
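And a minimal Java callout sketch for comparison (the class name and the client_id/callout.greeting variables are invented; Execution, ExecutionResult, MessageContext, and ExecutionContext come from the expressions and message-flow JARs the question mentions):

    import com.apigee.flow.execution.ExecutionContext;
    import com.apigee.flow.execution.ExecutionResult;
    import com.apigee.flow.execution.spi.Execution;
    import com.apigee.flow.message.MessageContext;

    // Apigee invokes execute() at the point in the flow where the
    // Java callout policy is attached.
    public class GreetingCallout implements Execution {
        @Override
        public ExecutionResult execute(MessageContext messageContext, ExecutionContext executionContext) {
            try {
                // Flow variables set by earlier policies are readable here.
                String clientId = (String) messageContext.getVariable("client_id");
                messageContext.setVariable("callout.greeting", "hello " + clientId);
                return ExecutionResult.SUCCESS;
            } catch (Exception e) {
                return ExecutionResult.ABORT;
            }
        }
    }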
Hope that helps.
