Hi, I would like to monitor a Java application using the browser, while at the same time utilising the existing JMX infrastructure.
I know that JMX provides an HTTP interface, but I think it provides a standard web GUI, and it's not possible to mash up its functionality with an existing system.
Are you aware of any REST interface for JMX?
My research on Google currently shows that there is one project which does something similar. Is this the only option?
Jolokia is a new (at this time) JMX agent you can install in your JVM; it exposes the MBeanServer over HTTP in JSON format.
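For illustration, here is a minimal Java sketch of reading an MBean attribute through Jolokia; it assumes the default JVM agent port 8778 and the /jolokia context, so adjust host and port to your setup:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JolokiaReadExample {
    public static void main(String[] args) throws Exception {
        // Ask the agent for the HeapMemoryUsage attribute of java.lang:type=Memory;
        // the answer comes back as a JSON document.
        URL url = new URL("http://localhost:8778/jolokia/read/java.lang:type=Memory/HeapMemoryUsage");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            reader.lines().forEach(System.out::println);
        }
    }
}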
Tomcat provides a JMX Proxy Servlet in its Manager Application. I don't think it's exactly REST, but it's stateless and is built from simple HTTP requests, so it should be close enough.
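For example (a sketch; host, port, and Manager credentials depend on your install), querying MBeans and reading a single attribute look like:
http://localhost:8080/manager/jmxproxy/?qry=java.lang:type=Memory
http://localhost:8080/manager/jmxproxy/?get=java.lang:type=Memory&att=HeapMemoryUsage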
For posterity, I've recently added a little web server to my SimpleJMX package. It exposes beans from the platform MBeanServer over HTTP via Jetty, if Jetty is in the classpath. There are also text versions of all pages that make them easy to scrape.
import com.j256.simplejmx.server.JmxServer;
import com.j256.simplejmx.web.JmxWebServer;

// create a new JMX server listening on a specific port
JmxServer jmxServer = new JmxServer(8000);
jmxServer.start();
// register any beans to JMX as necessary
jmxServer.register(someObj);
// create a web server publisher listening on a specific port
JmxWebServer jmxWebServer = new JmxWebServer(8080);
jmxWebServer.start();
There's a little test program which shows it in operation. Here's an image of java.lang:type=Memory accessed from a browser. As you can see the output is very basic HTML.
You might want to have a look at jmx4perl. It comes with an agent servlet which proxies REST requests to local JMX calls and returns a JSON structure with the answers. It supports read, write, exec, list (list of MBeans) and search operations and knows how to dive into complex data structures via an XPath-like expression. Look at the protocol description for more details.
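For illustration, a read request against the agent servlet and its JSON answer might look like this (the /j4p context and the numbers are assumptions; the protocol description is authoritative):

GET /j4p/read/java.lang:type=Memory/HeapMemoryUsage

{
  "value": { "init": 0, "used": 17222120, "committed": 85000192, "max": 129957888 },
  "status": 200
}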
The forthcoming release can deal with bulk requests (multiple requests at once) as well, and adds the possibility to POST a JSON request as an alternative to a pure REST GET request.
One of the next releases will support a proxy mode so that no agent servlet needs to be deployed on the target platform, only on an intermediate proxy server.
MX4J is another alternative; quoting from its home page:
MX4J is a project to build an Open Source implementation of the Java(TM) Management Extensions (JMX) and of the JMX Remote API (JSR 160) specifications, and to build tools relating to JMX.
In an application, .Net XCC is being used to communicate with a MarkLogic modules database to execute modules, functions, ad hoc queries, etc.
I want to replace these XCC calls with REST calls so that we can run the application on MarkLogic 9, as .Net XCC has been deprecated in MarkLogic 9.
I have tried the built-in REST API in MarkLogic. It only allows executing modules that exist in the modules database.
Is there any online resource or anything else that could help us?
Any help would be appreciated.
Thanks,
ArvindKr
There is /v1/invoke to invoke modules in the modules database attached to the REST app-server you are addressing, but also /v1/eval that allows running ad hoc queries.
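For illustration, here is a minimal Java sketch of running an ad hoc query via /v1/eval; it assumes a REST app server on localhost:8000 configured for basic authentication (the MarkLogic default is digest, which java.net.http does not negotiate out of the box) and made-up credentials:

import java.net.Authenticator;
import java.net.PasswordAuthentication;
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class EvalExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .authenticator(new Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        // hypothetical credentials
                        return new PasswordAuthentication("admin", "admin".toCharArray());
                    }
                })
                .build();
        // /v1/eval takes the query as a form-encoded 'xquery' (or 'javascript') field
        String body = "xquery=" + URLEncoder.encode("1 + 1", StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8000/v1/eval"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        // The response body is multipart/mixed; printed raw here for brevity
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}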
HTH!
If you're going to replace XCC.NET with RESTful calls, try out XQRS, it allows you to build services in XQuery in a manner similar to JAX-RS for Java.
I only consider the following for cases such as yours, where compatibility with legacy code is useful or required and where other options are exhausted. This is not an elegant approach, but it may be useful in special cases.
The XDBC protocol (which is what XCC uses) is supported natively on exactly the same app servers and ports on which the REST API is exposed. You can see this on port 8000 in a default install. The server literally cannot tell a 'REST application' and an 'XCC application' apart except by the URI requested in the request (and in some cases additional headers like cookies). REST and XDBC are both HTTP based, and at the HTTP layer they are so similar that they can share the same ports and configurations.
XDBC is 'passed through' the REST processing via the XML rewriter. XDBC uses /eval and /invoke while REST uses /v1/eval and /v1/invoke. If you look at the default rewriter.xml for port 8000 you can see how the routing is made. While the XDBC protocol is not formally published, it's not difficult to 'reverse engineer' by looking at the XCC code (public Java source) and the rewriter. For example, it's not difficult to construct the URL and payload data for a basic eval or invoke call. You should be able to replicate the existing XCC.NET client behaviour exactly by using the /eval and /invoke endpoints (look for the xdbc attribute set in the rewriter.xml; this causes the request handling to use pure XDBC protocol and behaviour).
Another alternative, if you cannot solve the external-variables problem, is to write new 'REST-friendly' APIs that then xdmp:invoke() the legacy APIs, passing in the appropriate namespaces. An option is to put the legacy code in an entirely separate modules DB and then replicate the module URIs exactly with the new code. If you don't need to maintain co-existing versions, you can modify the old code to remove the namespaces from the parameters or assign local variable aliases.
I'm new to Spring Cloud Contract. I have written the Groovy contract, but the WireMock tests are failing. All I see in the console is
org.junit.ComparisonFailure: expected:<[2]00> but was:<[4]00>
Can anyone please guide me on how to enable more debugging, and also, is there a way to print the request and response sent by WireMock?
I have set logging.level.com.github.tomakehurst.wiremock=DEBUG in my Spring Boot app, but no luck.
If you use one of the latest versions of Spring Cloud Contract, WireMock should print exactly what wasn't matched. You can also read the documentation over here: https://cloud.spring.io/spring-cloud-static/Finchley.RELEASE/single/spring-cloud.html#_how_can_i_debug_the_request_response_being_sent_by_the_generated_tests_client where we answer your questions in more depth. Let me copy that part for you:
86.8 How can I debug the request/response being sent by the generated tests client?
The generated tests all boil down to RestAssured in some form or fashion, which relies on Apache HttpClient. HttpClient has a facility called wire logging which logs the entire request and response to HttpClient. Spring Boot has a logging common application property for doing this sort of thing; just add this to your application properties:
logging.level.org.apache.http.wire=DEBUG
86.8.1 How can I debug the mapping/request/response being sent by WireMock?
Starting from version 1.2.0 we turn WireMock logging up to info and set the WireMock notifier to verbose. Now you will know exactly what request was received by the WireMock server and which matching response definition was picked.
To turn off this feature, just bump WireMock logging to ERROR:
logging.level.com.github.tomakehurst.wiremock=ERROR
86.8.2 How can I see what got registered in the HTTP server stub?
You can use the mappingsOutputFolder property on @AutoConfigureStubRunner or StubRunnerRule to dump all mappings per artifact id. The port at which the given stub server was started will also be attached.
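As a rough sketch (the stub coordinates below are made up; mappingsOutputFolder is the property named above), the annotation usage looks like:

import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;

// Dumps the WireMock mappings registered for each stub artifact (plus the
// port the stub server started on) under target/outputmappings/
@SpringBootTest
@AutoConfigureStubRunner(
        ids = "com.example:producer-service:+:stubs",
        mappingsOutputFolder = "target/outputmappings")
public class StubMappingsDumpTest {
}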
86.8.3 Can I reference text from file?
Yes! With version 1.2.0 we've added such a possibility. It's enough to call the file(…) method in the DSL and provide a path relative to where the contract lies. If you're using YAML, just use the bodyFromFile property.
We use Nomad to deploy our applications, which provide gRPC endpoints, as tasks. The tasks are then registered in Consul using Nomad's service stanza.
The routing for our applications is achieved with Envoy proxy. We are running central Envoy instances load-balanced at IP 10.1.2.2.
The decision about which endpoint/task to route to is currently based on the Host header, and every task is registered as a service under <$JOB>.our.cloud. This leads to two problems.
When accessing a service, the DNS name must be registered for the load balancer IP, which leads to /etc/hosts entries like
10.1.2.2 serviceA.our.cloud serviceB.our.cloud serviceC.our.cloud
This problem is partially mitigated by using dnsmasq, but it is still a bit annoying when we add new services.
It is not possible to have multiple services running at the same time which provide the same gRPC service. If we, for example, decide to test a new implementation of a service, we need to run it in the same job under the same name, and all services which are defined in a gRPC service file need to be implemented.
A possible solution we have been discussing is to use the tags of the service stanza to add tags which define the provided gRPC services, e.g.:
service {
  tags = ["grpc-my.company.firstpackage/ServiceA", "grpc-my.company.secondpackage/ServiceB"]
}
But this is discouraged by Consul:
Dots are not supported because Consul internally uses them to delimit service tags.
Now we are thinking about doing it with tags like grpc-my-company-firstpackage__ServiceA instead. This looks really disgusting, though :-(
So my questions are:
Has anyone ever done something like that?
If so, what are recommendations on how to route to gRPC services which are autodiscovered with Consul?
Does anyone have some other ideas or insights into this?
How is this accomplished in, e.g., Istio?
I think this is a fully supported use case for Istio. Istio will help you with service discovery with Consul, and you can use route rules to specify which deployment will provide the service. You can start exploring from https://istio.io/docs/tasks/traffic-management/
We do something similar to this, using our own product, Turbine Labs.
We're on a slightly different stack, but the idea is:
Pull service discovery information into our control plane. (We use Kubernetes but also support Consul.)
Organize this service discovery information by service and by version. We use the tbn_cluster, stage, and version tags (like here).
Since version for us is the SHA of the release, we don't have formatting issues with it. Also, the versions don't have to be unique, because the tbn_cluster tag defines the first level of the hierarchy.
Once we have those, we use the UI / API to define all the routes (e.g. app.turbinelabs.io/stats -> stats_service). These rules include the tags, so when we deploy a new version (deploy != release), no traffic is routed to it. Releases are done by updating the rules.
(There are even some nice UI affordances for updating those rules for the common case of "release 10% of traffic to the new version," like a slider!)
Hopefully that helps! You might check out LearnEnvoy.io -- lots of tutorials and best practices on what works with Envoy. The articles on Service Discovery Integration and Incremental Blue/Green Releases may be helpful.
What is ISAPI, or an ISAPI extension, or ISAPI filters? The more I read, the more confused I am.
See e.g. here: http://searchwindowsserver.techtarget.com/definition/ISAPI
ISAPI (Internet Server Application Program Interface) is a set of Windows program calls that let you write a Web server application that will run faster than a common gateway interface (CGI) application. A disadvantage of a CGI application (or "executable file," as it is sometimes called) is that each time it is run, it runs as a separate process with its own address space, resulting in extra instructions that have to be performed, especially if many instances of it are running on behalf of users. Using ISAPI, you create a dynamic link library (DLL) application file that can run as part of the Hypertext Transport Protocol (HTTP) application's process and address space. The DLL files are loaded into the computer when HTTP is started and remain there as long as they are needed; they don't have to be located and read into storage as frequently as a CGI application.
A special kind of ISAPI DLL is called an ISAPI filter, which can be designated to receive control for every HTTP request. You can create an ISAPI filter for encryption or decryption, for logging, for request screening, or for other purposes.
Or see another definition with a graphical explanation here:
ISAPI Definition from PC Magazine
ISAPI! Yup, this thread is old, but maybe my 2 cents are worth something here.
ISAPI stands for Internet Server Application Programming Interface.
As the name suggests, this is an interface provided in IIS for developers, where you can tap into IIS core functionality and provide custom functionality, either via an ISAPI extension (e.g. a .NET DLL) or an ISAPI filter (e.g. a custom file uploader).
There is a set of built-in ISAPI APIs for this.
Additionally, building ISAPI extensions is a tough task: you need advanced C++ and STL exposure. Mainly you allocate buffers for HTTP POST data, and you need to be extremely careful about buffer overflow errors and about parsing the posted data, as such an error in ISAPI will bring down the whole of IIS. Having said that, once developed correctly, these extensions work pretty well. You can also implement a worker thread pool, custom IIS load balancing, etc.
But be prepared to sleep under your desk; I'm speaking from my own experience.
If you know that incoming HTTP messages are handled by a pipeline (IIS and ASP.NET are both part of the pipeline), you can treat ISAPI extensions/filters as components which extend this pipeline.
As many ISAPI modules filter out some messages, they are also naturally called filters.
http://learn.iis.net/page.aspx/101/introduction-to-iis-7-architecture/
http://learn.iis.net/page.aspx/243/aspnet-integration-with-iis-7/
ISAPI filters are libraries loaded by the IIS web server. Every incoming request and outgoing response passes through the filters, and they're free to perform any handling or translation they wish. They can be used for authentication, content transformation, logging, compression, and myriads of other uses.
ISAPI is a framework/API that is provided by Microsoft's web server, Internet Information Services (IIS), that allows you to programmatically inspect and alter web requests.
Since Flash Builder does not support WCF over HTTPS, I am considering using WebORB remoting as an alternative, but I'm not really sure how Flash is going to know the WebORB location if they are sitting on different servers. I looked at the destination and source fields, but couldn't find a field called url in RemoteObject in Flex. Has anyone done similar things?
I know this is an old question, but I thought I'd answer it anyway. You can expose your WCF services to remoting clients (Flash, Flex) via WebORB. WebORB supports both self-hosted and IIS-hosted WCF services. Here are links to instructions for both models.
Self-hosted: http://www.themidnightcoders.com/fileadmin/docs/dotnet/v4/guide/index.html?standalone_wcf_services.htm
IIS-hosted: http://www.themidnightcoders.com/fileadmin/docs/dotnet/v4/guide/index.html?iis_hosted_wcf_services.htm
Both documents address your questions. Here is an example of one approach:
Invoking Self-Hosted Service From Flex/AIR
Flex and AIR clients can use the RemoteObject API to invoke methods on self-hosted WCF services which use the AMF endpoint. There are two approaches for invoking a self-hosted WCF service. The first approach requires less code, but creates a dependency on configuration files declaring destinations and channels (the files located in WEB-INF/flex). The second approach does not have any dependencies on the configuration files, but results in a few additional lines of code. Consider the examples of the API below:
Approach 1 (with dependency on configuration files):
import mx.rpc.remoting.RemoteObject;
import mx.rpc.events.ResultEvent;
import mx.rpc.events.FaultEvent;

var remoteObject:RemoteObject = new RemoteObject("GenericDestination");
remoteObject.endpoint = "http://localhost:8000/WCFAMFExample/amf";
remoteObject.GetQuote.addEventListener( ResultEvent.RESULT, gotResult );
remoteObject.GetQuote.addEventListener( FaultEvent.FAULT, gotError );
remoteObject.GetQuote( "name" );
The endpoint URL uniquely identifies the WCF service. Notice the /amf at the end of the URL; it is required for the AMF endpoint. With the approach demonstrated above, the destination name in the RemoteObject constructor is required, although it is not used. As a result, for the code to work, the Flex/AIR application must be compiled with an additional compiler argument:
-services "C:\Program Files\WebORB for .NET\4.0\web-inf\flex\services-config.xml"
I hope this helps.
K