I've created a very small Symfony2 bundle here: https://github.com/BranchBit/AirGramBundle
It's just a simple service that calls a remote URL.
Now, unit-testing-wise, should I even be testing this? Really, all that can go wrong is the remote host not being available, but that's not my code's issue.
If this were your bundle to maintain, what would you suggest? I would like 100% code coverage, but 50% of the code is just calling the remote URL.
Suggestions?
In my opinion this test is unnecessary. If you do want to test it, you can use WebTestCase. Your service in the bundle should accept a Symfony\Component\BrowserKit\Client as its client implementation: you can choose Goutte\Client for production (firing real requests with cURL) or Symfony\Component\HttpKernel\Client for testing. Then you have to dynamically add a route or a request listener.
However you do it, the test ends up more complex than the implementation, so how can you be sure the test itself is okay?
End-to-end logging between multiple components using Application Insights (AI) is based on a hierarchical Request-Id header, so each component is responsible for honoring a possible incoming Request-Id. To get the full end-to-end hierarchical flow correct in Application Insights, the Request-Id header needs to be used as the AI Operation.Id and Operation.ParentId (as described here).
But when making a request with a Request-Id header to an Azure Function using, for example, an HttpTrigger binding (Microsoft.NET.Sdk.Functions 1.0.24) with integrated Application Insights configured (as described here), a completely new Operation.Id is created and used, causing the whole flow in AI to be lost. Any ideas on how to get around this?
Setting up a separate custom TelemetryClient might be an option, but it seems to require a lot of configuration to get the full ExceptionTrackingTelemetryModule and DependencyTrackingTelemetryModule behavior right, especially when using Functions v2 and .NET Core (ref to AI config). Has anyone got that working successfully?
This is not yet supported by Functions but should start working sometime early next year.
If you want to hack around it, you can add a reference to the Application Insights SDK for ASP.NET Core (v2.4.1) and configure the RequestTrackingTelemetryModule yourself:
private static RequestTrackingTelemetryModule requestModule;

static Function1()
{
    // Initialize request tracking once, against the active Application Insights configuration,
    // so that incoming Request-Id headers are honored.
    requestModule = new RequestTrackingTelemetryModule();
    requestModule.Initialize(TelemetryConfiguration.Active);
}
This is pretty sketchy, not fully tested, and has drawbacks. For example, the requests collected are no longer augmented with Functions details (invocation id, etc.). To overcome that, you would need to get the real TelemetryConfiguration from the Functions dependency injection container and use it to initialize the module.
That should be possible, but it is currently blocked by an open issue.
Even with the code above, though, you should get requests that respect incoming headers, and other telemetry correlated to those requests.
Also, when out-of-the-box support for HTTP request correlation is rolled out, this may break. So this is a hacky, temporary solution; use it only if you absolutely have to.
We use Nomad to deploy our applications, which provide gRPC endpoints, as tasks. The tasks are then registered in Consul using Nomad's service stanza.
Routing for our applications is handled by Envoy proxy. We are running central Envoy instances, load-balanced at IP 10.1.2.2.
The decision of which endpoint/task to route to is currently based on the Host header, and every task is registered as a service under <$JOB>.our.cloud. This leads to two problems.
When accessing a service, the DNS name must be registered for the load-balancer IP, which leads to /etc/hosts entries like
10.1.2.2 serviceA.our.cloud serviceB.our.cloud serviceC.our.cloud
This problem is partially mitigated by using dnsmasq, but it is still a bit annoying when we add new services.
It is not possible to have multiple services running at the same time that provide the same gRPC service. If, for example, we decide to test a new implementation of a service, we need to run it in the same job under the same name, and all services defined in a gRPC service file need to be implemented.
A possible solution we have been discussing is to use the tags of the service stanza to declare which gRPC services a task provides, e.g.:
service {
tags = ["grpc-my.company.firstpackage/ServiceA", "grpc-my.company.secondpackage/ServiceB"]
}
But this is discouraged by Consul:
Dots are not supported because Consul internally uses them to delimit service tags.
Now we are thinking about doing it with tags like grpc-my-company-firstpackage__ServiceA, ... This looks really disgusting, though :-(
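If you do go the underscore route, a small helper can keep the mapping between full gRPC service names and Consul-safe tags mechanical and reversible. The sketch below is purely illustrative Java (class and method names are made up) and assumes package segments never contain hyphens or double underscores:

public final class GrpcServiceTags {

    private static final String PREFIX = "grpc-";

    // "my.company.firstpackage/ServiceA" -> "grpc-my-company-firstpackage__ServiceA"
    public static String toTag(String fullServiceName) {
        String[] parts = fullServiceName.split("/", 2);
        return PREFIX + parts[0].replace('.', '-') + "__" + parts[1];
    }

    // "grpc-my-company-firstpackage__ServiceA" -> "my.company.firstpackage/ServiceA"
    public static String fromTag(String tag) {
        String body = tag.substring(PREFIX.length());
        int sep = body.lastIndexOf("__");
        return body.substring(0, sep).replace('-', '.') + "/" + body.substring(sep + 2);
    }

    public static void main(String[] args) {
        String tag = toTag("my.company.firstpackage/ServiceA");
        System.out.println(tag);          // grpc-my-company-firstpackage__ServiceA
        System.out.println(fromTag(tag)); // my.company.firstpackage/ServiceA
    }
}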
So my questions are:
Has anyone ever done something like that?
If so, what are recommendations on how to route to gRPC services which are autodiscovered with Consul?
Does anyone have some other ideas or insights into this?
How is this accomplished in, e.g., Istio?
I think this is a fully supported use case for Istio. Istio will help you with service discovery with Consul, and you can use route rules to specify which deployment will provide the service. You can start exploring from https://istio.io/docs/tasks/traffic-management/
We do something similar to this, using our own product, Turbine Labs.
We're on a slightly different stack, but the idea is:
Pull service discovery information into our control plane. (We use Kubernetes but support Consul).
Organize this service discovery information by service and by version. We use the tbn_cluster, stage, and version tags (like here).
Since version for us is the SHA of the release, we don't have formatting issues with it. Versions also don't have to be unique, because the tbn_cluster tag defines the first level of the hierarchy.
Once we have those, we use the UI / API to define all the routes (e.g. app.turbinelabs.io/stats -> stats_service). These rules include the tags, so when we deploy a new version (deploy != release), no traffic is routed to it. Releases are done by updating the rules.
(There are even some nice UI affordances for updating those rules for the common case of "release 10% of traffic to the new version," like a slider!)
Hopefully that helps! You might check out LearnEnvoy.io -- lots of tutorials and best practices on what works with Envoy. The articles on Service Discovery Integration and Incremental Blue/Green Releases may be helpful.
When I use Cloudify (2.7) to deploy an application (e.g. an application that includes two services, A and B), I try to use Admin.addEventListener() to add some event listeners, but it doesn't work!
I tried to add a ProcessingUnitStatusChangedEventListener; when I debug the code, the value of (ProcessingUnitStatusChangedEvent)event.getNewStatus() changes from SCHEDULED to INTACT, then SCHEDULED, then INTACT again.
I also tried to add a ProcessingUnitInstanceLifecycleEventListener; when I debug the code, the status is INTACT, but the service is not available!
Is there any other listener or method to know when the application (not the services) is available, or am I using the listeners in the wrong way?
First, the Admin API is internal - use it at your own risk. And you should not be using it the way you are - Cloudify adds a lot of logic on top of the internal Admin API.
Second, it is not exactly clear where you are executing your code from.
You can always use the rest client to get an accurate state of the application. Look at https://github.com/CloudifySource/cloudify/blob/master/rest-client/src/main/java/org/cloudifysource/restclient/RestClient.java#L388
In addition, if you are running this code in a service lifecycle event handler, the easiest way to implement this is to have your 'top' level service, the one that should be available last, write an application entry to the shared attributes store in its 'postStart' event. Everyone else can just periodically poll on this entry. The polling itself is very fast, all in-memory operations.
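For the polling side, a minimal loop could look like the sketch below; isApplicationReady() is a hypothetical stand-in for whatever check you actually use (reading the attribute written in 'postStart', or a call through the REST client linked above):

public class WaitForApplication {

    public static void waitUntilAvailable(long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (isApplicationReady()) {
                return;
            }
            Thread.sleep(2000); // the check itself is cheap, so a short interval is fine
        }
        throw new IllegalStateException("Application did not become available in time");
    }

    private static boolean isApplicationReady() {
        // Hypothetical placeholder: replace with the shared-attribute read or a REST call.
        return false;
    }
}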
If you do not have a top-level service, or your logic is more complicated than that, you would need to use the Service Context API to scan each service and its instances to see if they are up. An explanation of getting service instance state is available here:
cloudify service dependsOn other service
I have the following requirement:
I want to test my UI (using Graphene and Drone), but I need to log in to the application before all tests. I have a database authentication realm in JBoss which, for testing, uses an H2 in-memory database. So what I need is to add a user (username, password) to the users table before all tests so I can log in to my application successfully.
I first tried to inject my users EJB into the test class so I could create a user in the database before all tests. This is impossible because the UI tests run on the client side (testable=false). It was obvious that at the time I did not know exactly how Arquillian works...
I then tried to use the Arquillian Persistence Extension and the @UsingDataSet annotation, but this also fails for the same reason (although I am not sure why, since I don't know exactly how this annotation works).
Finally, I tried to create a Singleton EJB with the @Startup annotation and create the user I need in its @PostConstruct method. When debugging, I can see in the H2 console that the user is created, but when I run my tests the login still fails.
Can someone explain why this last case fails? I do not understand it. But most importantly, if someone knows how to make this work, I would greatly appreciate it!
So, when you run a client-side test, nothing gets added to your archive. If it's possible for you, mark your archive as testable=true and then these extensions will be added to the test. You can do things like have a setup method that runs on the server; however, the values for the user will not be sent to you, so you'll need to insert them yourself. As long as you're okay with that, this should work for you.
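Roughly, the mixed-mode setup could look like the sketch below (JUnit 4 flavor). UserService, its createUser method, the application package and the login path are hypothetical placeholders for whatever your application actually provides; the point is that the first test method runs inside the container (so it hits the same in-memory H2 database), while the UI test runs on the client via @RunAsClient:

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.container.test.api.RunAsClient;
import org.jboss.arquillian.drone.api.annotation.Drone;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.arquillian.junit.InSequence;
import org.jboss.arquillian.test.api.ArquillianResource;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.openqa.selenium.WebDriver;

import javax.ejb.EJB;
import java.net.URL;

@RunWith(Arquillian.class)
public class LoginIT {

    @Deployment(testable = true) // in-container mode, so EJB injection works in the test
    public static WebArchive deployment() {
        return ShrinkWrap.create(WebArchive.class, "app.war")
                .addPackages(true, "com.example.app"); // hypothetical application package
    }

    @EJB
    private UserService userService; // hypothetical EJB that writes to the H2 users table

    @Drone
    private WebDriver browser;

    @ArquillianResource
    private URL deploymentUrl;

    @Test
    @InSequence(1)
    public void createTestUser() {
        // Runs inside the container, so the user ends up in the same in-memory H2 database
        userService.createUser("testuser", "secret");
    }

    @Test
    @InSequence(2)
    @RunAsClient // runs on the client side, driving the real UI through Drone/Graphene
    public void canLogIn() {
        browser.get(deploymentUrl + "login");
        // ... fill in the form with testuser/secret and assert the landing page
    }
}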
Our web services are distributed across different servers for various reasons (such as decreasing latency to the client), and they're not always all up-to-date. Rather than throwing an exception when a method doesn't exist because the particular web service is too old, it would be nicer if we could have the client check if the service responds to a given method before calling it, and otherwise disable the feature (or work around it).
Is there a way to do that?
Get the WSDL (append ?wsdl to the URL) - you can parse that any way you like.
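For example, checking whether an operation shows up in the WSDL can be done with nothing more than a DOM parse. The sketch below is Java (the language is incidental), assumes a WSDL 1.1 document, and uses placeholder endpoint/operation names:

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class WsdlOperationCheck {

    // Returns true if the WSDL declares an operation with the given name.
    public static boolean hasOperation(String wsdlUrl, String operationName) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(wsdlUrl);

        // Operations are declared under portType (and repeated under binding);
        // either occurrence is enough to know the method is exposed.
        NodeList ops = doc.getElementsByTagNameNS("http://schemas.xmlsoap.org/wsdl/", "operation");
        for (int i = 0; i < ops.getLength(); i++) {
            if (operationName.equals(((Element) ops.item(i)).getAttribute("name"))) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder URL and operation name
        System.out.println(hasOperation("http://services.domain.com/MyService/Service.asmx?wsdl", "GetStatus"));
    }
}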
Unit test the web service to ensure its signatures don't break. When you write code that breaks the method signature, you'll know and can adjust the other applications accordingly.
Or just don't break the web services, and publish them in a way that enables you to version them, as in http://services.domain.com/MyService/V1.1/Service.asmx (for .NET). That way your applications that use v1.1 won't break when you publish v1.2 and make breaking changes.
I would also check out using an internal UDDI server if it's really that big of a hassle to manage your web services. Using the Green Pages of UDDI will tell you what you want to know about the service.
When you make a SOAP request, you are just sending an HTTP request to a server. If the server understands it, it will respond with HTTP 200 and some XML; if it doesn't, it will send you back an HTTP error code (404, 500, ...).
There is no general way to ask whether a "method" exposed by a web service exists. Use the WSDL if it is exposed automatically, or just try to call the "method" and check for an error in the response (you don't have to surface an exception to the user...).
Also, I don't know if I understood you correctly, but are you thinking of querying the server twice: once to check whether the method exists, and a second time to make the actual call if it does? I would just make the single call, handle the error if the method doesn't exist, and proceed normally if it does.
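If you take the single-call approach, the client-side fallback is just exception handling around the call. A JAX-WS-flavored sketch in Java; the wrapped call (e.g. () -> port.newReportingMethod(request)) is a hypothetical placeholder for your generated proxy:

import javax.xml.ws.WebServiceException;
import javax.xml.ws.soap.SOAPFaultException;

public class OptionalFeatureCall {

    @FunctionalInterface
    public interface SoapCall {
        void invoke() throws Exception;
    }

    // Runs a SOAP call and reports whether the remote side actually handled it.
    public static boolean tryCall(SoapCall call) {
        try {
            call.invoke();
            return true;
        } catch (SOAPFaultException e) {
            // The server answered but could not dispatch the operation (e.g. an older deployment).
            return false;
        } catch (WebServiceException e) {
            // Transport-level failure (404, 500, timeout, ...): treat the feature as unavailable too.
            return false;
        } catch (Exception e) {
            // Service-specific declared fault: also treat as "not supported here".
            return false;
        }
    }
}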