Tracking outgoing RestTemplate/Feign requests in JProfiler

When using JProfiler I'd like to be able to see outgoing RestTemplate requests in the call tree/hot spots of the CPU views, in a similar way to how JDBC/JPA/Mongo queries are shown and aggregated.
Is there some kind of configuration/scripting/extension to achieve this goal?
Right now, when monitoring microservices that make a lot of REST calls, it is hard to understand where the time is spent - locally, or waiting for an answer, and to which endpoint.
Thanks
LZ

This feature is currently under development and will be available in JProfiler 11.0.
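Until then, a possible workaround is to instrument the RestTemplate itself with a ClientHttpRequestInterceptor and aggregate the timings per endpoint yourself. This will not show up in JProfiler's probe views, but it does separate local time from time spent waiting on each remote endpoint. A minimal sketch, where the println stands in for whatever logging or metrics facility you already use:

```java
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.web.client.RestTemplate;

import java.io.IOException;

// Times every outgoing RestTemplate call and reports it per endpoint.
public class TimingInterceptor implements ClientHttpRequestInterceptor {

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                                        ClientHttpRequestExecution execution) throws IOException {
        long start = System.nanoTime();
        try {
            return execution.execute(request, body);
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // Replace with your metrics/logging of choice; the URI is the target endpoint.
            System.out.printf("REST %s %s took %d ms%n",
                    request.getMethod(), request.getURI(), elapsedMs);
        }
    }

    // Convenience factory: a RestTemplate with the timing interceptor registered.
    public static RestTemplate instrumentedTemplate() {
        RestTemplate template = new RestTemplate();
        template.getInterceptors().add(new TimingInterceptor());
        return template;
    }
}
```

Because the interceptor wraps the blocking execute call, the time it reports is exactly the "waiting for the remote endpoint" portion that is currently invisible in the call tree.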

Related

Speed of Firebase Node library vs REST

I'm currently building a Koa app with the Firebase Node library. Is there a speed difference between using that and the REST API?
This is something that would best be determined by some profiling or jsperf-style testing.
The simplest answer is that there would naturally be a difference, particularly at higher frequencies of transactions. The Node.js SDK works over a socket connection, whereas the REST client has the overhead of establishing and tearing down a connection for each payload.
One technique that can help narrow the performance gap is to use HTTP 1.1's keep-alive feature. However, it's certainly not going to be comparable to web sockets.
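A rough Java sketch of that idea (the endpoint URL is made up): reusing one long-lived HTTP client keeps connections alive across calls, instead of paying the TCP/TLS setup cost on every request.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KeepAliveDemo {
    // One shared client: its connections are pooled and kept alive, so repeated
    // requests to the same host skip the connection setup cost.
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example-project.firebaseio.com/items.json")) // hypothetical endpoint
                .GET()
                .build();

        for (int i = 0; i < 10; i++) {
            long start = System.nanoTime();
            HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.printf("call %d: status=%d, %.1f ms%n",
                    i, response.statusCode(), (System.nanoTime() - start) / 1e6);
        }
    }
}
```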
Firebase is a web-socket, real-time database provider, so it's much faster than HTTP REST calls, which have the big overhead of creating a new connection for each call. You can use the link below to get an idea:
http://blog.arungupta.me/2014/02/rest-vs-websocket-comparison-benchmarks/

BlazeDS default messaging for 1500 clients

I am using a Flex + Spring BlazeDS Integration + Java combination for my project. The project is deployed on a WebLogic server. As we know, whenever a client connects to BlazeDS it blocks one thread on the server, and this limits the maximum number of concurrent clients for one BlazeDS instance.
In my case I am supposed to have around 300,000 updates every hour, and at any moment in time around 500 concurrent clients can be connected. In the extreme case, all 1,500 clients could be connected to the application. What is the best possible solution for that?
If I try to convince my clients to use LCDS, they would like to know the exact number of clients that our current setup can support. For that I tried to use NeoLoad, but could not make much progress in that direction.
So if anybody has used such a setup and can advise me on what I should do, it would be really great!
After some research (we may have a similar situation), it seems that BlazeDS is not able to use NIO. Here is a link about it. They offer a solution that appears to be broken with newer versions of Tomcat. So I guess BlazeDS is not the one to use in your use case.
If you cannot go with LCDS, a good free solution is GraniteDS, which supports asynchronous servlets.
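To illustrate what asynchronous servlets buy you, here is a plain Servlet 3.0 sketch (not GraniteDS's actual API; the scheduled executor just simulates an update source). The container thread is released while the client stays connected, so each connected client no longer pins a thread:

```java
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

@WebServlet(urlPatterns = "/updates", asyncSupported = true)
public class UpdateServlet extends HttpServlet {

    // Placeholder for whatever actually produces updates (JMS listener, polling job, ...).
    private final ScheduledExecutorService pusher = Executors.newSingleThreadScheduledExecutor();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Detach the request from the container thread; the thread goes back to the pool.
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000);

        // When an update arrives (simulated here after 5 seconds), write it and complete.
        pusher.schedule(() -> {
            try {
                ctx.getResponse().getWriter().write("{\"update\":\"...\"}");
            } catch (IOException ignored) {
            } finally {
                ctx.complete();
            }
        }, 5, TimeUnit.SECONDS);
    }
}
```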

BlazeDS, GraniteDS, LiveCycleDS, WebOrb for Java - which gives the best performance?

Does anyone have experience with a lot of these?
I'm not so interested in the PDF creation part of LCDS.
Just for Flex messaging, which would give me the best performance? As far as I know, LCDS and WebOrb both do real-time streaming; is that correct?
Basically the question is: which gives the quickest response, and which will allow the most clients connected to a single servlet container?
Thanks
Edit 1
This may make it clearer what I want. I'm looking to serve at least 5,000 clients with sub-second response times for push messages, and I'm trying to figure out which is the most scalable option; I've been quoted several million push messages a day. Obviously we can throw more servers at the problem, but I'm not convinced that's the most maintainable option.
It's not media streaming I'm looking for, but rather event updates. It must work without sticky sessions.
LiveCycleDS & WebOrb are the only ones providing messaging using sockets through RTMP protocol. Note that in this case the clients are not connected to a servlet container, but to a dedicated server included in the product distribution (bypassing the servlet mechanism).
There are more messaging servers on the market; Lightstreamer is one of them, as is Flash Media Server.
There are many more things to take into consideration when choosing a solution, however: price, integration with various architectures (like a DMZ) and frameworks, paid support, documentation, your relationship with the sales representative, etc.

ASP.Net Web Farm Monitoring

I am looking for suggestions on doing some simple monitoring of an ASP.Net web farm as close to real-time as possible. The objectives of this question are to:
Identify the best way to monitor several Windows Server production boxes during short (minutes-long) periods of extreme load
Receive near-real-time feedback on a few key metrics about each box. These are simple metrics available via WMI such as CPU, memory and disk paging. I am defining my time constraint as "as soon as possible", with a 120-second delay being the absolute upper limit.
Monitor whether any given box is up (with "up" being defined as responding to web requests in a reasonable amount of time)
Here are more details, things I've tried, etc.
I am not interested in logging. We have logging solutions in place.
I have looked at solutions such as ELMAH which don't provide much in the way of hardware monitoring and are not visible across an entire web farm.
ASP.Net Health Monitoring is too broad, focuses too much on logging and is not acceptable for deep analysis.
We are on Amazon Web Services and we have looked into CloudWatch. It looks great, but messages in the forum indicate that the metrics are often a few minutes behind, with one thread citing 2 minutes as the absolute soonest you could expect to receive the feedback. This would be good to have for later analysis, but does not help us in real time.
Stuff like JetBrains profiler is good for testing but again, not helpful during real-time monitoring.
The closest out-of-box solution I've seen is Nagios which is free and appears to measure key indicators on any kind of box, including Windows. However, it appears to require a Linux box to run itself on and a good deal of manual configuration. I'd prefer to not spend my time mining config files and then be up a creek when it fails in production since Linux is not my main (or even secondary) environment.
Are there any out-of-box solutions that I am missing? Obviously a Windows-based solution that is easy to set up is ideal. I don't require many bells and whistles.
In the absence of an out-of-box solution, it seems easy for me to write something simple to handle what I need. I've been thinking of a simple client-server setup where the server requests a few WMI metrics from each client over HTTP and sticks them in a database. We could then monitor the metrics via a query or a dashboard or something. If a client doesn't respond, it's effectively down.
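A minimal sketch of what that poller could look like (Java just for illustration; the host list and the per-box /metrics endpoint that would expose the WMI counters are assumptions):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;

public class FarmPoller {

    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    // Hypothetical list of web-farm boxes, each exposing its WMI counters at /metrics.
    private static final List<String> HOSTS = List.of("web01", "web02", "web03");

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (String host : HOSTS) {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("http://" + host + "/metrics"))
                        .timeout(Duration.ofSeconds(5))
                        .build();
                try {
                    HttpResponse<String> response =
                            CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
                    // Store the CPU/memory/paging values in the database here.
                    System.out.println(host + " -> " + response.body());
                } catch (Exception e) {
                    // No answer within the timeout: treat the box as down.
                    System.out.println(host + " is DOWN: " + e.getMessage());
                }
            }
            Thread.sleep(30_000); // poll every 30 seconds, well inside the 120-second limit
        }
    }
}
```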
Any problems with this, best practices, or other ideas?
Thanks for any help/feedback.
UPDATE: We looked into CloudWatch a bit more and we may focus on trying it out. This forum post is the most official thing I can find. In it, an Amazon representative says that the official delay window for data is 4 minutes. However, the user says that 2-minute-old data is always reliable and 1-minute-old data is sometimes reliable. We're going to try it out and hope it is enough for our needs.
We used Quest software and it seemed to be a good monitoring solution. Here is a link:
http://www.quest.com/application-performance-monitoring-solutions/
The performance monitoring built into Windows may also help.

Polling webservice performance - Will this work?

Our app needs instant notifications, so obviously I should use some WCF duplex or socket communication. The problem is that the app is a partial-trust XBAP, and thus I'm not allowed to use anything but BasicHttpBinding. Therefore I need to poll for changes.
Now comes the question: my PM says the update interval should be around 2 seconds, running on an intranet with 500 users against a single web server.
Do any of you have experience with how such polling would influence the web server?
The service is fairly simple: it takes a GUID as an argument and returns a list of GUIDs. All data access is cached, so I guess the load on the server is minimal for one single call, but for 500...
Apart from the polling, the web server has little work to do.
So, based on this little information (assuming standard server hardware, whatever that is), is it possible to make a qualified guess?
Is it possible to implement this, and will it work?
Yes, I know estimating this is difficult, but I'd be really glad if some of you could share some thoughts on this.
Regards
Larsi
Don't estimate, benchmark.
Polling.. bad, but if you have no other choice, then it's ideal :)
Bear in mind the fact that you will no doubt have keep-alives on, so you will have 500 permanently-connected users. The memory usage of that will probably be more significant than the processor usage. I can't imagine network access (even in a relatively bloaty web service) would use much network capacity, but your network latency might become an issue - especially as we've all seen web applications 'pause' for a little while.
In the end, though, you'll probably be OK, but you'll have to check it yourself. There are plenty of web service stress testers; you can use Microsoft's WAS tool for one, and here are a few links to others.
Try using SoapUI, a web service testing tool, to check the performance of your web service. There is a paid version and an open-source version that is free.
cheers
I don't think it will be a particular problem. I'd imagine the response time for each request would be pretty low, unless you're pulling back a hell of a lot of data, so 500 connections spread over 2 seconds shouldn't hit it too hard.
You can use a stress testing tool to verify your webserver can handle the load though, before you commit to this design.
250 qps is probably doable with quite modest hardware and network bandwidth, provided you minimize the data sent back and forth. I assume you're caching these GUID lists on the client, so you can just send a small "no updates" response in the normal case.
It should be pretty easy to measure with a simple prototype, though, to be more confident.
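To make the small "no updates" response concrete, here is a rough sketch of the client side of such a scheme (Java rather than WCF, and the endpoint, version header and 204 convention are all assumptions):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;

public class ChangePoller {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        String lastVersion = "0";             // marker for the client-side cache
        List<String> cachedGuids = List.of(); // the GUID list we already have

        while (true) {
            // Hypothetical endpoint: returns 204 when nothing changed since lastVersion,
            // otherwise the new GUID list plus a version header.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://intranet-server/changes?since=" + lastVersion))
                    .timeout(Duration.ofSeconds(2))
                    .GET()
                    .build();

            HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 204) {
                // Normal case: a tiny "no updates" response, almost no bandwidth or server work.
            } else {
                cachedGuids = List.of(response.body().split(","));
                lastVersion = response.headers().firstValue("X-Version").orElse(lastVersion);
            }

            Thread.sleep(2_000); // the 2-second interval the PM asked for
        }
    }
}
```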
