Spring Boot jar on Azure Websites performance issues - spring-mvc

I have an application built as a Spring Boot fat jar.
I host it on Azure Websites according to the "official" documentation, with a web.config similar to:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="httpPlatformHandler" path="*" verb="*" modules="httpPlatformHandler" resourceType="Unspecified" />
    </handlers>
    <httpPlatform processPath="%JAVA_HOME%\bin\java.exe" arguments="-Djava.net.preferIPv4Stack=true -Dserver.port=%HTTP_PLATFORM_PORT% -jar &quot;%HOME%\site\wwwroot\my-web-project.jar&quot;">
    </httpPlatform>
  </system.webServer>
</configuration>
The application is monolithic in structure, not too large, but it does some mapping and has a few layers to initialize, so startup time is about 12 seconds locally.
It runs an H2 in-memory database, just for testing purposes.
Actual deployment and getting it running on Azure Websites were never really a problem, but there are some real performance issues, at least with the settings I have.
Some settings of interest:
Standard S1 instance (at the time of writing costing about $40 USD/month).
Webapp configured with:
Tomcat 8 (shouldn't really matter, as the fat jar runs embedded Tomcat)
JDK 8
Always on enabled
To have something to compare the numbers against, I ran the application on an Azure VM with similar (but not identical) specs and in the same price range.
Here are some of the results:
Startup time of the application:
Azure websites: ~2 minutes
VM: ~30 s
Cold call:
Deployed/started the application, left it overnight, and made a call the day after.
Azure websites: 31119 ms
VM: 219 ms
Subsequent call:
A call made directly after the cold call, but to another endpoint.
Azure websites: 2685 ms
VM: 223 ms
My question here is:
Does anyone know whether it is viable to run Spring Boot fat jars hosted on Azure Websites?
As there is official documentation from Microsoft, one would think so, and of course technically it is possible, but is it viable in production?
I'm not after a debate about AWS vs. Azure vs. Google App Engine, or about whether to package WARs or jars, or how to host it.
I have reasons for wanting it this way. If it's not viable I have other options, but I would like to explore the idea first and see whether anyone else has had better experiences.
Edit: Just to add to the information: the database was empty for all calls, so that shouldn't add any overhead to speak of. No data was actually fetched; only empty lists were returned.

Although this is an old question and my answer is of limited value since I have no deep knowledge of Azure, I would like to share the results of my research. I have a couple of Spring Boot microservices and one relatively big service (a 170 MB fat jar that starts in about 70 seconds on my local machine). Azure starts the small microservices in tens (if not hundreds) of seconds, and I mean really small microservices (a config server from Spring Cloud, etc.; there is nothing fancy going on there). As for the big one: Azure starts it, and after 2 minutes starts it again... and again... and again. As a result, it never gets started.
All of this was on a B3 App Service plan, so quite a big plan.
Wrapping up, in my (strongly subjective) opinion, Azure is not a viable option for Java apps.

Regarding your question about running Spring Boot fat jars hosted on Azure websites: yes, it is viable. Please refer to my answer in the SO thread Deploying Springboot to Azure App Service, and focus on the official documentation for the Spring Boot part.
Hope it helps. If you have any concerns, please feel free to let me know.

Related

Azure (Free F1 WebApp) Session Management

Is there a quick, cheap, reliable session-state mechanism available for a Free F1 WebApp (ASP.NET) for demonstration configurations?
I know Windows Azure Session management is discussed a lot on SO, but this particular configuration issue seems to slip through the cracks.
The available configuration options seem to be:
InProc mode: Not recommended in the cloud, but for a single-instance Azure F1 WebApp it should, in theory, be reliable enough for demonstration websites. However, my experience is that the sessions time out unpredictably, with no bearing on the settings in the web.config file.
Azure Redis Cache: Recommended but not supported for Free F1 WebApps; an upgrade to a paid plan is required. (Reference: https://azure.microsoft.com/en-gb/pricing/details/cache/)
Azure SQL: Not recommended or supported.
- This would seem to be a viable option if you already have an Azure SQL server running. (Reference: https://azure.microsoft.com/en-gb/blog/using-sql-azure-for-session-state/) Unfortunately I have found the amended InstallSqlState.sql impossible to find, and all links to downloads fail. (Example: https://support.microsoft.com/en-us/kb/2006191)
- Universal Web Providers may offer a solution (Reference: http://www.hanselman.com/blog/IntroducingSystemWebProvidersASPNETUniversalProvidersForSessionMembershipRolesAndUserProfileOnSQLCompactAndSQLAzure.aspx), but it is unclear, at least to me, whether they offer session management support as well as SQL Server connection support.
Azure Table Storage: A potential option, but it seems to be an out-of-date solution, as all links are broken. (Reference: https://www.simple-talk.com/cloud/platform-as-a-service/managing-session-state-in-windows-azure-what-are-the-options/) I've never used Azure Tables, so this seems esoteric.
StateServer: Not possible with a Free F1 webapp. Presumably a virtual machine would be required.
Custom: Yes, this would work.
- Redis Cache is a custom session manager, so I presume another one could be used.
- The alternative, AppFabric (Reference: ASP.NET session state provider in Azure), no longer seems to be supported and was a paid solution.
- Perhaps there is an alternative custom solution available that I haven't researched yet.
Azure Cache Service and Role-Based Cache: Retired November 30, 2016. (Reference: https://azure.microsoft.com/en-gb/documentation/articles/cache-faq/#which-azure-cache-offering-is-right-for-me)
The bottom line, it seems to me, is that there isn't. I would really appreciate it if someone could prove otherwise.
EDIT:
I've been thinking about and tinkering with this throughout the day. My findings so far are:
1) Upgrading to the D1 WebApp service plan improves the reliability of the session-state management considerably, but it is still prone to unpredictability. As Hrvoje Hudo points out below, you can't expect reliability with Free or Shared plans.
2) Installing ASP.NET Universal Providers would seem to be a solution. I created a new MVC WebApp project using MSVC 2013, selected Azure hosting, typed:
Install-Package Microsoft.AspNet.Providers
in the package management console, added:
Session["Time"] = DateTime.Now.ToString();
to the Home Index view, and referenced it in the About view using:
ViewBag.Message = Session["Time"];
And updated the web.config to:
<sessionState mode="Custom" customProvider="DefaultSessionProvider" timeout="1">
  <providers>
    <add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=2.0.0.0, Culture=neutral, PublicKeyToken=Whatever" connectionStringName="DbaseName"/>
  </providers>
</sessionState>
When I publish the website, the default database is created on my Azure SQL Server (courtesy of the publish scripts.json containing all my config settings and swapping them in), and a dbo.Sessions table is created in the database.
Browsing to the website, new sessions are created in the database table - I can see them using MS Management Studio 2013 - and deleted after they time out (60 s+).
I downgraded the website to Free (to be sure it wasn't configured to a higher plan by default, which MSVC used to do) and everything still seems to work. This seems to be what I was looking for.
3) Azure Tables. Can't get my head around these yet! If anyone knows of a good "how to" tutorial I'd love to look at it.
EDIT 2:
After a day's soak testing, I'm now 90% convinced that ASP.NET Universal Providers are the way to go for quick and reliable sessions on a Free Azure Web App. I can't say it's free, as the SQL Server must be paid for (Basic = ~£4/month; see: https://azure.microsoft.com/en-gb/pricing/details/sql-database/), but it's certainly not expensive, and if you're using an SQL Server anyway - like me - it makes sense.
There's no "silver bullet" session state, that's why you have all those options, chose what fits best with your requirement and business case. For Free web app you can't expect reliable service, Azure will restart your app pool regularly and there are lot of limits, so default InProc can be an issue - but again you're using Free tier. So since you want to save few bucks - invest time into writing your own, which uses Azure Storage (table or blob, I would use Table) or some other storage mechanism.

Referencing an unstable DLL

We are referencing a 3rd party proprietary CLI DLL in our .net project. This DLL is only an interface to their proprietary C++ library. Our project is an asp.net (MVC4/Web API) web application.
The C++ unmanaged library is rather unstable. Sometimes it crashes with e.g. dangling pointers. We have no way of solving it, and using this library is a first-class customer requirement.
When the application crashes, the application pool in IIS stops responding. We have to restart it, and doing so takes a couple of minutes (yes, that long!).
We would like to keep this unstable DLL from crashing our application. What's the best way of doing it? Can we keep the CLI DLL in a separate AppDomain? How?
Thanks in advance.
I think every answer to this question will be some kind of workaround.
My workaround would be to not interact directly with the DLL from your web application.
Instead, write the requests from the web application to either a message queue or a SQL table. You can then have another application, such as a Windows Service, which reads the requests, interacts with the DLL, and then writes the results back for your web application to read.
I'm not saying that SQL or message queues are the right way; I'm more thinking of the general process flow. A sketch of the hand-off follows below.
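For illustration only, here is a minimal sketch of that hand-off using MSMQ (System.Messaging); the queue path, label, and payload format are placeholders I invented, not anything from the question:

using System;
using System.Messaging; // add a reference to System.Messaging.dll

// Web application side: enqueue a request instead of calling the DLL in-process.
public static class DllRequestQueue
{
    private const string QueuePath = @".\private$\dllRequests"; // hypothetical queue name

    public static void Enqueue(string payload)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
            queue.Send(payload, "dll-request");
    }
}

// Windows Service side: drain the queue and make the risky call. If the
// native DLL takes this process down, the service restarts and the
// unprocessed messages are still sitting in the queue.
public class DllWorker
{
    public void ProcessNext()
    {
        using (var queue = new MessageQueue(@".\private$\dllRequests"))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            Message message = queue.Receive(); // blocks until a message arrives
            string payload = (string)message.Body;
            // var result = UnstableDll.Call(payload); // the crash-prone call lives here, not in IIS
            // ... write the result somewhere the web app can read it ...
        }
    }
}

The same flow works with a SQL table as the buffer; the point is only that the unstable call happens in a process IIS does not depend on.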
I had this exact problem with a third-party library that accessed protected memory for the purpose of interacting with a hardware copy-protection dongle. It worked fine in a console or WinForms app, but crashed like crazy when called from an IIS application.
We tried several different things, some of which are mentioned in other answers on this page. But ultimately, the best solution for us was to use a very old technology - .NET Remoting. I know it's somewhat frowned upon these days, but it fit this particular need quite well.
The unstable code was placed in a Windows Service application. The web application made remoting calls to this service, which relayed the commands to the third-party library.
Now, I'm sure you could do the same thing with WCF, sockets, etc. But remoting was quick and easy to set up, and since we only talk to the same server, it works without opening any ports; it just talks over a named pipe.
It does mean a second service to install besides the web application, but that was acceptable in my particular use case.
If you did something similar, and the third-party code actually crashed the service, you could probably write some code in your main application to bring it back up.
So perhaps a process boundary is more useful than an App Domain when you have unstable code to wrangle.
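To make the shape of this concrete, here is a minimal, hypothetical sketch of the named-pipe remoting setup described above (the interface, class, and pipe names are invented for illustration):

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// Shared contract, referenced by both the Windows Service and the web app.
public interface IDongleService
{
    string Query(string command);
}

// Lives in the Windows Service process, next to the unstable library.
public class DongleService : MarshalByRefObject, IDongleService
{
    public string Query(string command)
    {
        // return ThirdPartyLibrary.Execute(command); // the crash-prone call
        return "ok";
    }
}

public static class ServiceHost
{
    public static void Start()
    {
        // Named-pipe channel; no network port is opened.
        ChannelServices.RegisterChannel(new IpcChannel("dongleHost"), ensureSecurity: false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(DongleService), "dongle", WellKnownObjectMode.Singleton);
    }
}

// Web application side: a transparent proxy to the out-of-process object.
public static class DongleClient
{
    public static IDongleService Connect()
    {
        return (IDongleService)Activator.GetObject(
            typeof(IDongleService), "ipc://dongleHost/dongle");
    }
}

Because the proxy is an ordinary interface reference, the web code barely changes when the call moves out of process.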
I would first increase the IIS process recycling rate; maybe the DLL code fails after a certain number of calls, or after the process reaches a certain amount of memory usage.
You can find information on the configuration of IIS 7.0 recycling options here: http://technet.microsoft.com/en-us/library/cc753179(v=ws.10).aspx
In your case I would recycle the process at a specific time, when you know there is less load on the application, and also after a certain number of requests (lower than the default), to have a "fresh" process most of the time.
The recycling process is graceful, in the sense that the old process is not terminated until the one that will replace it is ready, so there should be no noticeable downtime.
More information about the recycling mechanism here: http://technet.microsoft.com/en-us/library/cc745955.aspx
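As a rough illustration (the pool name and values are placeholders to adapt, not recommendations), the recycling settings described above live under the application pool definition in applicationHost.config, or can be set through the IIS Manager UI:

<applicationPools>
  <add name="MyUnstableDllPool">
    <recycling>
      <!-- recycle after a fixed number of requests, lower than the default -->
      <periodicRestart requests="5000">
        <!-- and at a fixed time of day, chosen for low traffic -->
        <schedule>
          <add value="03:00:00" />
        </schedule>
      </periodicRestart>
    </recycling>
  </add>
</applicationPools>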
If the above does not solve the problem, I would wrap the calls in my own code that manages the unstable DLL's execution.
This code should recover from the failures, for example by repeating the failing calls until a result is obtained, and by failing with a graceful error if no result is possible after a number of attempts.
Internally, the calls to the unstable DLL could be made on a spawned thread, or the code could even live in a new external executable that you launch with Process.Start.
This last option has more overhead but it might be your only option. See this SO question for more information on this: How do you handle a thread that has a hung call?
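As a sketch of the wrapping idea (the names are hypothetical; note that a true native crash, such as an access violation from a dangling pointer, may not surface as a catchable managed exception, which is why the separate-process option above exists):

using System;
using System.Threading;

// Hypothetical guard around the unstable call: retry a few times,
// then fail with a controlled exception instead of taking IIS down.
public static class UnstableCallGuard
{
    public static T Invoke<T>(Func<T> unstableCall, int maxAttempts = 3)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                return unstableCall();
            }
            catch (Exception ex)
            {
                if (attempt == maxAttempts)
                    throw new InvalidOperationException(
                        "Call failed after " + maxAttempts + " attempts.", ex);
                Thread.Sleep(500 * attempt); // simple backoff before retrying
            }
        }
        throw new InvalidOperationException("unreachable");
    }
}

// Usage: var result = UnstableCallGuard.Invoke(() => ThirdPartyDll.DoWork(input));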
I suggest the following solution:
Wrap this DLL in another web application. It can be one of the following; since you already use Web API, that is the most suitable option for you.
Simple ASMX Web Service
WCF Service
ASP.NET MVC / Web API Service
Check your P/Invoke code so that it is free of bugs; see the following articles:
The Black Art of P/Invoke and Marshaling in .NET
P/Invoke Revisited
Publish this application to IIS under a different application pool.
Use the standard techniques suggested before. I suggest configuring IIS recycling for both memory limits and scheduled times:
IIS process recycling rate
How to limit the memory used by an application in IIS?

How to improve performance on a .net wcf service

I'm porting a CPU-heavy .NET 4.0 Windows application to a .NET 4.0 WCF service. Basically, I just imported the .NET classes into the WCF service.
All is working well except for performance of the WCF service: a task that takes 6267947 ticks (2539 ms) in the Windows application uses 815349861 ticks (13045 ms) in the ASP.NET-hosted WCF service running locally on the same development machine.
I have already uploaded the service plus a test client to AppHarbor, where the performance is as bad as on my local machine. The link to my test app is: http://www.wsolver.com/. Any ideas on how I can improve performance?
Check any dependencies of your service that may be constructed at request time. These include constructor dependencies and field/property dependencies. Maybe one of them is causing the delay? If that is the case, consider using a singleton to instantiate the long-running class.
Have you confirmed that subsequent requests still incur the delay?
Also, create a brand new service that does something simple like DateTime.Now.ToString() and see if it has the same problem.
Please take a look at the articles and whitepapers below. I think they should give you enough concrete performance considerations to explore, and likely some very practical settings to tweak, optimize, or change.
Performance Tuning WCF Services
Optimizing WCF Web Service Performance
Using ServiceThrottlingBehavior to Control WCF Service Performance
Transport Quotas
Optimizing IIS Performance
ASP.NET Performance Overview
A Performance Comparison of Windows Communication Foundation (WCF) with Existing Distributed Communication Technologies
If you need to do time-consuming initialization of a complex data structure, you should do that once in Application_Start() and assign the generated data structure to a static variable on the MvcApplication object. Doing it just once at application start is going to be much faster than doing it on each request. A minimal sketch follows below.
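Here is that pattern, assuming a hypothetical DictionaryTree type standing in for whatever structure is expensive to build (the type, method, and file path are illustrative only):

using System.Web;
using System.Web.Hosting;

// Hypothetical stand-in for the expensive data structure discussed above.
public class DictionaryTree
{
    public static DictionaryTree LoadFromDisk(string path)
    {
        // ... read the word list and build the lookup structure ...
        return new DictionaryTree();
    }
}

public class MvcApplication : HttpApplication
{
    // Shared by every request; built exactly once per application start.
    public static DictionaryTree SearchTree { get; private set; }

    protected void Application_Start()
    {
        SearchTree = DictionaryTree.LoadFromDisk(
            HostingEnvironment.MapPath("~/App_Data/words.txt"));
    }
}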
I would take a full memory dump during the 13 seconds (or several, using procdump) and then actually look at what is occurring in the process (WinDbg and sos.dll). Then you can narrow down which code is the culprit.
I take it that the dictionary tree is only loaded once, into a cache? You're not loading it on every call, are you?

High traffic ASP.NET MVC coding considerations

I have been asked the following question a couple of times and feel I could have had a better answer for it, so I'm relaying it here in the hope of finding more resources, blogs, books, or pointers on writing scalable MVC3 C# code. If you have any pointers on writing better-performing code that is hosted across multiple servers, I would greatly appreciate them. For the sake of argument, let's say the code should expect upwards of 10-20K hits a day.
Question:
What coding considerations do you take into account for writing scalable code that is distributed over several servers?
My gut tells me the answer lies in handling session state. My background over the last few years has been in writing services and forms applications, not so much web applications, so I'm looking for information that can help me with web-application-specific development, particularly for C# MVC3. Any blogs or books you suggest I'll definitely look into!
One of the rules for implementing scalable web applications is for them to be stateless. Session state is the first thing that should be thrown out of the equation, as it is exactly what makes an application stateful. If you have a completely stateless application, you can throw hardware at it when traffic increases and the application will be able to handle the load. So start by putting the following in your web.config:
<system.web>
  <sessionState mode="Off" />
  ...
</system.web>
The problem will now lie in the data tier, as that is where the state goes. So the way to improve performance and limit the number of requests to this node is to use caching. Cache as much data as you can, preferably on machines separate from the web servers: dedicated caching machines. A sketch of the read-through pattern follows below.
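As a sketch of that read-through pattern (an in-process MemoryCache is used here for brevity; on a farm of several servers you would back this with a distributed cache, but the calling pattern is the same; the key and helper names are illustrative):

using System;
using System.Runtime.Caching; // reference System.Runtime.Caching.dll

// Read-through cache: hit the cache first, fall back to the data tier
// only on a miss, then remember the result for the next requests.
public static class DataCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static T GetOrLoad<T>(string key, Func<T> load, TimeSpan ttl)
    {
        var cached = Cache.Get(key);
        if (cached != null)
            return (T)cached;

        T value = load(); // hits the data tier only on a cache miss
        Cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
        return value;
    }
}

// Usage: var products = DataCache.GetOrLoad("products", LoadProductsFromDb, TimeSpan.FromMinutes(5));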

Web Service dying after an hour

Recently we upgraded our main product to be an ASP.NET 4.0 project (upgraded from 2.0). This project is the main source of traffic to the web service; the other forms have not changed in this release. The web service has been running without issue for a couple of years now.
Following the release of the 4.0 environment for our main product, our web service started to die after about an hour and would do nothing but time out until we restarted the worker process.
We changed the web service so that it also targets the 4.0 Framework; however, this did nothing. Other attempts have failed as well, such as making it a service reference instead of a web reference, and updating our certificate validation to use the proper/current framework techniques.
The worst part is that no log entry is created in the Event Viewer, so I have no leads as to what the problem is. We have applied a band-aid solution of recycling the app pool every 20 minutes (the recompile takes a second, as the web service is very small) and it seems to be holding for now, but we would rather fix the problem than rely on this.
So, does anyone have any additional ideas or suggestions as to where our problem may be coming from? Has anyone experienced anything similar?
Both projects exist in the same web farm and all machines are using IIS 6 32 bit.
Thanks!
Edit: some more info. The web service has a few basic functions:
1 - accepts XML documents, loads them into a DataSet, then updates internal DBs with the information sent, and simply returns true
2 - receives a request for processed data through XML, queries the DB for it, builds the XML response, and sends it out
3 - receives a confirmation that the data requested in step 2 was received, and deletes it from the DB
4 - provides a function that updates a DB so we can monitor some applications on our clients' systems
First of all, can you reproduce the problem on a dev box (using some load testing platform to simulate production load)?
If so, then it sounds like an issue with your code. You could then take it a step further and profile the code on the dev server to pinpoint the issue.
If that isn't an option, I would fire up Process Explorer and watch the server resources over time. I'm not sure what your service does, but it sounds like it could be spinning up threads and then not cleaning up after itself.
It might help if you posted some sample code so we could see what could've been affected by the change from .NET 2.0 to 4.0.
