The book 'ActionScript Developer’s Guide to Robotlegs' says that services should use parsers for transforming data. In which package should I put a parser for a service located in com.stackoverflow.services.fooService?
There is no strict rule for this; use whatever works well for you.
A possible setup could be:
besides:
com.stackoverflow.control
com.stackoverflow.event
com.stackoverflow.model
you could have:
com.stackoverflow.remote
with a further split according to the different APIs that are used (divided by endpoint + type, e.g. JSON/AMF/SOAP/...):
com.stackoverflow.remote.api -> own back-end
com.stackoverflow.remote.twitterapi
from there you can go further:
com.stackoverflow.remote.api.service OR com.stackoverflow.remote.api.resource
com.stackoverflow.remote.api.host -> endpoint setup, security, ... to inject into the service class
com.stackoverflow.remote.api.parser -> data parsers
Additionally, you could add a control package containing the commands that configure the API (setup events, hosts, injection for parsers, ...) and wire it into the main context.
Making a basic https-get request from a pipeline transform results in "connectionError".
How should one consume an API using the "requests" library to extend some data within a pipeline?
from transforms.api import Input, Output, transform_pandas
import requests

@transform_pandas(
    Output("..."),
    df=Input("..."),
)
def compute(df):
    # Random example
    response = requests.get('https://api.github.com')
    print(response.content)
    return df
results in a "connectionError".
Is this a configuration issue?
Following #nicornk's comment and the docs on external transforms, using external APIs from within Palantir is restricted by default and requires several administrative steps. The steps to call external APIs from within pipeline transforms are:
Check the settings of your code repository
Add the library "transforms-external-systems" to your code repository's libraries.
Adding the library adds a new icon to the panel on the left.
Click on the new icon and import an egress policy, if one is available. Otherwise, try to create a "network egress policy", which usually involves an approval process.
Once you have managed to "import"/install an egress policy into your code repository, import the methods from the package "transforms.external.systems" and decorate your transformation, following the docs (see the sketch below).
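A minimal sketch of what the decorated transform could look like, following that pattern; the decorator and class names (use_external_systems, EgressPolicy), the policy RID placeholder and the output path are assumptions to verify against the external-transforms docs for your Foundry version:

from transforms.api import transform, Output
from transforms.external.systems import use_external_systems, EgressPolicy  # assumed names, check the docs
import requests

@use_external_systems(
    egress=EgressPolicy("<rid-of-the-imported-egress-policy>"),  # placeholder RID
)
@transform(output=Output("..."))
def compute(egress, output):
    # With the egress policy imported above, the outbound call should no longer
    # fail with a connectionError.
    response = requests.get("https://api.github.com")
    print(response.content)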
I'm trying to write contract tests for an object that contains a dictionary of objects. I want to verify the entries respect my contract. The keys are changing between the consumer and provider. Right now, the matching rules of my contract are trying to find specific keys in the body of my message such as "$.properties.desired.deploymentsRemovals['4JgEA5GCeqwVsu6Qada9XS'].appId"
Is it possible to write contract tests in my situation?
I'm using the PactNet nuget version 4.0.0-beta.3.
Using a matcher on the key such as
deployments = new Dictionary<object, object> {
    { Match.Type("6XKISmGMWynbwM52mxov6S"),
      new {...
produces a contract searching for "pactNet.Matchers.TypeMatcher" as the key
"deployments": {
"pactNet.Matchers.TypeMatcher": {
I'm Yousaf, a developer advocate at Pact (https://pact.io/) and Pactflow (https://pactflow.io/).
We have an open forum about contract testing in our Pact Foundation Slack; you can join over at https://slack.pact.io
You may find the pact-net channel of particular interest.
.NET isn't my forte, and I haven't spent much time on StackOverflow in the past, but I hope to now!
You should be able to use matchers in your pact-net library; they were designed from the V2 Pact specification onwards to solve exactly that problem.
Which particular version and library are you using? There are various implementations, both official and community supported.
There should be examples of their implementation in your respective library's readme, but let me know if there aren't, and we can look to resolve that.
We plan to display these matcher implementations across the various languages very soon.
We have a microservice that needs to be integration tested (real calls, but no network communication with anything outside of the test namespace in kubernetes) in our pipeline. It also relies on an external gRPC server which we have no control over.
Above is a picture of what we'd like to have happen. The white box on the left is code that provides the Microservice Boundary with 'external' data. It then keeps calling the Code via REST until it gets back the proper number of records or it times out. The Code pulls records from an internal database, as well as data associated to those records from a gRPC call. Since we do not own the gRPC service, but are doing integration tests, we need a few pre-defined responses to the two gRPC services we call (blue box).
Since our integration tests are self-contained right now, and we don't want to write an entirely new gRPC server implementation just to mimic these calls, is there a way to stand up a real gRPC server and configure it to return canned responses? What we need is pretty much a mock setup, except with an actual server.
We need to be able to:
give the server multiple proto files to interpret and have it expose those as endpoints. Proto files must be able to have different package names
using files we can store in source control, configure the responses to each call
able to run in a Linux Docker container (no Windows)
I did find gripmock, which seemed almost exactly what we need, but it only serves one proto file per container. It can supposedly serve more than one, but I can't get that to work, and their example that serves two files implies each proto file must have the same package name, which will likely never happen in our scenarios. In the meantime we are using it, but if we have 10 gRPC call dependencies, we now have to run 10 gripmock servers.
Wikipedia contains a list of API mocking tools. Looking at that list today there is a commercial tool that supports gRPC called Traffic Parrot which allows you to create gRPC mocks based on your Proto files. You can give it multiple proto files, store the mocks in Git and run the tool in Docker.
There are also open-source tools like GripMock, but it does not generate stubs based on Proto files; you have to create them manually. Also, as of today the project has not been keeping up with Proto and gRPC developments, e.g. the package-name issue you discovered yourself above (it works only if the package names in the different proto files are the same). There are a few other open-source tools like grpc-wiremock, grpc-mock or bloomrpc-mock, but they still lack widespread adoption and hence might be risky to adopt for an important enterprise project.
Keep in mind that the generated mock will only be a test double; it will not replicate the full behaviour of the system the Proto file corresponds to. If you also want to partially replicate the semantics of the messages, consider recording the gRPC messages to create the mocks; that way you can see sample data as well.
Take a look at this JS library which hopefully does what you need:
https://github.com/alenon/grpc-mock-server
Usage example:
private static readonly PROTO_PATH: string = __dirname + "/example.proto";
private static readonly PKG_NAME: string = "com.alenon.example";
private static readonly SERVICE_NAME: string = "ExampleService";
...
const implementations = {
    ex1: (call: any, callback: any) => {
        const response: any =
            new this.proto.ExampleResponse.constructor({msg: "the response message"});
        callback(null, response);
    },
};
this.server.addService(PROTO_PATH, PKG_NAME, SERVICE_NAME, implementations);
this.server.start();
In Cloudera, is there a way to update a list of configurations at once using the CM API or curl?
Currently I am updating them one by one using the CM API call below.
services_api_instance.update_service_config()
How can we update all configurations stored in a JSON/config file at once?
The CM API endpoint you're looking for is PUT /cm/deployment. From the CM API documentation:
Apply the supplied deployment description to the system. This will create the clusters, services, hosts and other objects specified in the argument. This call does not allow for any merge conflicts. If an entity already exists in the system, this call will fail. You can request, however, that all entities in the system are deleted before instantiating the new ones.
This basically allows you to configure all your services with one call rather than doing them one at a time.
If you are using services that require a database (Hive, Hue, Oozie, ...) then make sure you set those up before you call the API. It expects all the parameters you pass in to work, so external dependencies must be resolved first.
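As a rough illustration, here is what the call could look like with Python's requests; the host, port, API version, credentials, the deleteCurrentDeployment flag and the file name are assumptions to adapt to your environment:

import json
import requests

# Assumed values: adjust host, port, API version and credentials for your cluster.
CM_BASE = "http://cm-host:7180/api/v19"

# Full deployment description kept in a JSON file.
with open("deployment.json") as f:
    deployment = json.load(f)

resp = requests.put(
    CM_BASE + "/cm/deployment",
    # Optionally ask CM to delete all existing entities before applying the new ones.
    params={"deleteCurrentDeployment": "true"},
    auth=("admin", "admin"),
    headers={"Content-Type": "application/json"},
    json=deployment,
)
resp.raise_for_status()
print(resp.json())

A starting deployment.json can typically be obtained by calling GET /cm/deployment against an existing cluster and editing the result.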
I am currently working on a project where we need to test the database packages and functions.
We need to provide the input parameters to the database package and test that the package returns the expected value; we also want to test the response time of the request.
Please advise whether there is any tool available to perform this, or whether we can write our test cases in JUnit or some other framework.
Which one would be the best approach?
I've used a more native approach when I had to do DWH testing. I arranged the test framework around the Dev data integration framework that was already in place, so I had a lot of reusable jobs, configurations and code. But using OOP like you suggest
write our test cases in JUnit
is a way to go too. But keep in mind that very often the DWH design is very complex (with a lot of aspects to consider), and interacting with the persistence layer is not always the best candidate for a testing strategy. A more DB-oriented solution (like tSQLt) offers significantly better performance.
Those resources helped me a lot:
dwh_testing
data-warehouse-testing
what-is-a-data-warehouse-and-how-do-i-test-it
My framework Acolyte provides a JDBC driver & tools, designed for such purposes (mock up, testing, ...): http://tour.acolyte.eu.org
It's already used in some open source projects (Anorm, YouTube Vitess, ...), either in vanilla Java or using its Scala DSL.
handler = handleStatement.withQueryDetection(...).
    withQueryHandler(/* which result for which query */).
    withUpdateHandler(/* which result for which update */);

// Register prepared handler with expected ID 'my-unique-id'
acolyte.Driver.register("my-unique-id", handler);

// then ...
Connection con = DriverManager.getConnection(jdbcUrl);
// ... Connection |con| is managed through |handler|