Comparison of JBoss AS 7.x and Glassfish 3.x?

I was wondering if there is a (mostly) objective comparison between the JBoss AS 7 and Glassfish 3.x?
I don't care about differences in standards or their implementation; I was thinking more about startup time, failover, scalability, performance, memory footprint, known problems, administration, security, clustering, etc.
Real world examples & experiences are very welcome!

My opinion is that after Red Hat's acquisition of JBoss, the general quality of the documentation has dropped significantly. Even if JBoss is the better product, in my next project I will shift to GlassFish because of the better documentation. Who cares about startup time if you lose 2/3 of your time dealing with a lack of proper documentation?
JBoss 4 and 5 were examples of properly documented products. Things have changed for the worse.

Antonio Goncalves recently did a comparison of all the latest application servers including some of the metrics you requested - http://agoncal.wordpress.com/2011/10/20/o-java-ee-6-application-servers-where-art-thou/ .

I found this introduction to JBoss AS7 with short memory/startup comparison to Glassfish 3.1.1: http://hwellmann.blogspot.com/2011/10/jboss-as-7-catching-up-with-java-ee-6.html

Real-world experiences for GlassFish are here: http://blogs.oracle.com/stories

Related

What about OpenEJB? Is it worth it? Any opinions?

I would like to know some opinions about OpenEJB: we are considering using it on a new project, but I really haven't found many opinions about it.
So, here is my question: how about it? Does it perform well? Is it stable enough for a production environment?
We switched to OpenEJB (deployed embedded in our app on Tomcat). Performance tests showed results at least as good as JBoss when processing our transactions (which involve data access, JMS, and servlets). We use ActiveMQ within OpenEJB for JMS. There have been no stability problems as of yet, though we are still in a staging (pre-production) environment. The documentation is definitely lacking, but not as poor as for other embedded choices. Overall, we consider it a good choice if you run on Tomcat. Deploying it on other application servers (JBoss, WebLogic, WebSphere) turned out to be much more difficult, but there is usually little reason to do so (we had a few reasons, but dropped the idea after several attempts basically failed).
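For anyone wondering what "embedded" looks like in practice, here is a minimal sketch using the standard EJB 3.1 embeddable container API, which OpenEJB implements. The bean, module name, and JNDI path below are purely illustrative, not taken from the setup described above.

    // Minimal sketch of booting an embedded EJB container (EJB 3.1 embeddable API,
    // implemented by OpenEJB). GreetingBean and the "demo" module name are invented
    // for illustration; the JNDI name depends on how your classes are packaged.
    import javax.ejb.Stateless;
    import javax.ejb.embeddable.EJBContainer;
    import javax.naming.Context;

    @Stateless
    public class GreetingBean {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    class EmbeddedOpenEjbDemo {
        public static void main(String[] args) throws Exception {
            // Starts whichever embeddable container is on the classpath (OpenEJB here).
            EJBContainer container = EJBContainer.createEJBContainer();
            try {
                Context ctx = container.getContext();
                GreetingBean bean = (GreetingBean) ctx.lookup("java:global/demo/GreetingBean");
                System.out.println(bean.greet("OpenEJB"));
            } finally {
                container.close();
            }
        }
    }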
And as in all open source products: expect lack of support (documentation, troubleshooting, bugs, etc.) to be compensated by free access to sources.
We've had experience with Oracle OAS and JBoss before. We decided to give OpenEJB a try. We found that it is not only very fast, but also much easier to set up and configure, and it has much better defaults.
Currently we implement our own failure-handling measures in the client, so we don't know how they compare for clustering or other advanced features that we don't use.
When we have to go back and deal with JBoss on the development side, we see a drop in productivity, because it takes too long to bootstrap.

Java and tomcat vs ASP.NET and IIS

Until recently I'd considered myself to be a pretty good web programmer (coming up for 10yrs commercial experience on a variety of e-commerce, static and enterprise applications). I'm self taught and have always used the Microsoft product stack (ASP, ASP.NET)...
My applications are always functional and relatively bug free, but have never been lightning quick. As a frequent web user I always found this to be the norm... how fast are the websites from the big tech players (eBay, Facebook, Microsoft, IBM, Dell, Telerik, etc.)? In truth, none are particularly fast. I always attributed this to "the way things are with web apps"...
...then I came across a product called Jira from Atlassian, and it has stopped me in my tracks...
This application is fast, and I mean blindingly fast.. too fast to time the switches between pages, fully live content, lots of images and data and cross references etc etc...
I run this on an intranet, with a large application DB, and this is running on a very normal server (single processor, SATA HDD, 8GB RAM).
Am I missing something?? Are my programming techniques that bad?? I am wondering if this speed gain is down to it being written in Java and running on Tomcat.
Does anyone have any benchmarks to compare JSP / ASP or Tomcat / IIS???
Thanks,
Mark
NOTE: this isn't a blatant plug for Jira. I don't work for them or have any affiliation to them... but I would like to be able to write applications like them :)
YMMV. But one of the longest-lived Things That Aren't True Anymore is the assertion that "Java Is Slow". Excepting floating-point (where most Java implementations aren't at liberty to use the floating-point hardware), Java is generally as fast or faster than compiled code. Some of the best and brightest have spent years of effort ensuring this, including such things as dynamic recompilation/re-optimization of code based on run-time metrics - something that statically-compiled languages like C or assembler cannot boast.
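To make the "run-time re-optimization" point a bit more concrete, here is a tiny, unscientific sketch (my addition, not from the original answer): timing the same method a few times in a row usually shows it getting faster once HotSpot has profiled and recompiled it. A proper benchmark would use a harness such as JMH.

    // Tiny, unscientific demo of JIT warm-up: later rounds are typically faster
    // than the first one because HotSpot recompiles the hot method at runtime.
    public class WarmupDemo {
        static long sum(int[] data) {
            long total = 0;
            for (int value : data) {
                total += value;
            }
            return total;
        }

        public static void main(String[] args) {
            int[] data = new int[1_000_000];
            for (int i = 0; i < data.length; i++) {
                data[i] = i;
            }
            for (int round = 0; round < 5; round++) {
                long start = System.nanoTime();
                long result = sum(data);
                long elapsed = System.nanoTime() - start;
                System.out.printf("round %d: %d ns (result %d)%n", round, elapsed, result);
            }
        }
    }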
ASP is sort of the opposite extreme, since the original ASP had to recompile each page request each and every time it was made. ASPX addressed this by allowing retention of the compiled page code. That got rid of a lot of useless overhead.
A more compelling reason to prefer Java over ASPanything/IIS is freedom. A Java/Tomcat webapp will run under almost any OS on almost any hardware. IIS runs on Windows. Period. And for the most part, that also means Intel. Not Sparc, Not zSeries. Maybe you don't care. But then again, maybe next week IBM will offer your employer a can't-refuse deal on a mainframe.
I don't have benchmarks, and there are a lot of things that can make one platform preferable. But I permanently gave up on the "Java is slow" idea when I encountered the Poseidon UML tool with its cool real-time graphics UI and the FreeMind mindmapper tool. A small hit to startup the JVM, but after that, you'd never know what language you were working under.
The great debate. Java vs. .Net.
When .Net first came out, there was an application written called "The Pet Shop", which was a .Net port of Sun's J2EE reference application, "The Pet Store". It was announced that Microsoft's implementation was "faster."
As with anything, especially anything to do with marketing, you have to dig deeper to find the truth.
Any technology can be fast with enough hardware and the correct design.
In my experience there are two factors to speed: What type of hardware is used and how you architect your application (this includes database tuning).
Caching at various levels (response, db, etc.) makes a huge difference in the responsiveness of a web application. There are also a lot of things done to reduce time-consuming operations, like db connection pooling, sql statement caching, etc. As much as I'd like to say Java is better :-), I think in this case the performance is due to the way Jira was written and the fact that it's being run internally (probably with few users compared to eBay, Facebook, or Microsoft). This site, Stack Overflow, uses ASP.NET MVC and IIS and is very responsive, and my guess (since the code is not open sourced, yet) is that it uses many of the same techniques you would find in Jira or any other web application built to scale.
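As an aside on the connection-pooling and statement-reuse point above, here is a rough sketch of what that looks like in Java. HikariCP is used purely as an example pool, and the JDBC URL, credentials, and query are made up for illustration.

    // Rough sketch of connection pooling plus prepared-statement reuse.
    // The database URL, credentials, and query below are hypothetical.
    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PoolingExample {
        public static void main(String[] args) throws Exception {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:postgresql://localhost/demo"); // hypothetical DB
            config.setUsername("demo");
            config.setPassword("demo");
            config.setMaximumPoolSize(10); // cap concurrent connections

            try (HikariDataSource pool = new HikariDataSource(config);
                 Connection conn = pool.getConnection(); // borrowed from the pool, not newly opened
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT name FROM users WHERE id = ?")) {
                ps.setLong(1, 42L);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }
    }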
I think that it is not typically the frameworks and languages used that make an application slow. In my experience, some frameworks like JSF or .NET server-side controls give developers a lot of freedom to make too many database calls and look things up too often, but that's definitely not the fault of the framework used.
Keep your application as light as possible and focus on keeping the data sent to the client as small as possible, and you will have a fast application. It's usually faster to develop fast applications too.
The Jira folks have written a best in class application (and charge for it) - nice work crocodile dundees.
I also suggest considering two aspects:
the maintenance activities, logging and deployment: in my opinion it is much easier to log, deploy, and maintain new releases on a Unix-like server than to do the same on a Windows server.
if the project requires some open source application (e.g. the Alfresco repository), Java is the better solution.
People's opinion is mostly biased. Most people have never really tried the other while claiming the other is slower. I wouldn't trust any answer: it's mere opinion. It's boring to always read the same 4 cents again and again.

Performance of ASP.NET in Mono(Linux) vs IIS(Window)

Is there any performance different between hosting your asp.net in mono on linux and iis on window server?
Of course there is a difference, just like there is a performance difference between Java and .Net. However, it is going to vary widely based on what the application is doing.
There are things where .Net is much faster than Mono. There are things where Mono is much faster than .Net. There are things where they perform roughly equal. The same holds true when comparing applications running on Windows or Linux. The same holds true when comparing applications running on IIS and Apache.
Likely, either can run your application fast enough, and you will find that your performance is going to be driven by your programming techniques. The difference of a few requests per second probably isn't a huge issue unless you have a large server farm, in which case you most likely have the resources to test on both and see which is faster for your particular application.
In regards to the suggestion by lextm that publishing the results of perf comparisons is "not possible", the End-User License Agreement (aka EULA) for Windows Vista Ultimate allows it, with conditions.
MICROSOFT .NET BENCHMARK TESTING. The software includes one or more components of the .NET Framework 3.0 (“.NET Components”). You may conduct internal benchmark testing of those components. You may disclose the results of any benchmark test of those components, provided that you comply with the conditions set forth at http://go.microsoft.com/fwlink/?LinkID=66406. Notwithstanding any other agreement you may have with Microsoft, if you disclose such benchmark test results, Microsoft shall have the right to disclose the results of benchmark tests it conducts of your products that compete with the applicable .NET Component, provided it complies with the same conditions set forth at http://go.microsoft.com/fwlink/?LinkID=66406.
The conditions, as I read them, are reasonable disclosure requirements: the source code you used to do the testing, the versions of software you tested, the date you conducted the tests, the configuration and optimizations you made, etc.
The EULA for Windows Server 2003 includes the same provisions. I couldn't find the EULA for Windows Server 2008 (the latest incarnation) but I assume the benchmarking provisions remain.
Addendum: If you look in the EULA for Windows 7, you will probably find a no-benchmarking clause, or more accurately a no-publish clause; this is because Windows 7 is still in pre-release. When it is officially released, expect the standard benchmark publishing conditions to be present.
In the past Microsoft had a more restrictive policy on this topic. Basically: you need permission from us (Microsoft) to disclose performance comparisons. This policy has been relaxed, even retroactively to .NET v1.0 and v1.1, as per the link in the above EULA.
Mono sucks!
http://art-blog.no-ip.info/cppcms/blog/post/27
http://www.phpvs.net/2008/02/08/benchmarking-mono-aspnet-vs-php-a-slight-problem/
Or, more politically correct: Mono is not yet ready for prime time, at least for ASP.NET web applications:
No support for caching
Performance is terribly unstable and drops after application start.
EDIT: Added quotes from my post in answer to the latest comment.
However, in order to make a fair comparison ... I should enable caching ... adding the following line at the header of the aspx file should help me.
<%@ OutputCache Duration="20" VaryByParam="None" %>
I'd done it — no result! The performance is the same.
Note: after a deeper check, the implementation of caching in Mono is very limited and poor; recent checks show this still holds in newer versions of Mono.
Ok, anyway I did some benchmarks (...) a simple clock gives me about 750 pages per second for the cached variant and 650 for the non-cached one.
The tests were done under IIS 5.0 on a Dual Core Pentium D 3 GHz.
The same code ... with mod_mono (under a Single Core AMD Athlon 3000) had given me: 350 pages per second. The next run gave 300, the next 200, and the next 150.
So, benchmarking is impossible.
Is referring to that post still not argument enough?
No mono is definitely not ready for prime-time.
Here is a nice benchmark where someone tested the difference between Windows/IIS and Linux/Apache/Mono (mod_mono). Crazily enough, mod_mono (Apache's Mono plugin) was significantly more performant. Granted, I am sure that in certain circumstances it would be different, but given how low-profile Linux and Apache are, plus the great job the Mono guys have done, it stands to reason that Linux/Apache/Mono is a better way to go. That being said, hopefully with the new open source ASP.NET we will see some super-performant Linux .NET servers coming soon (primed and ready for the cloud).
[graph of the performance comparison]
I've run Mono apps under mod_mono. From a usability standpoint it functions fine, though I didn't do any benchmarks. Still, IIS really is an incredibly convenient environment to work in. Given the choice I'd still host my web server in IIS and use Linux Mono clients to connect to it.
First, it was said that publishing performance statistics comparing CLR implementations (.NET vs Mono) is not possible. I am not sure what the source is, but the Mono team has only published comparisons among Mono versions (1.x, 2.0, 2.2, and 2.4), so I assume the claim is real. Therefore, you can only test the performance in your own environment.
Second, Mono has been evolving much faster lately, which gives you a chance to gain a performance boost simply by upgrading the Mono runtime.
Third, please use a different attitude to judge an open source product. For closed source products, you can do nothing but beg the vendor to improve performance or provide support on how to tune your applications. For open source projects, you have access to the code base, and you can tailor it to suit your own needs and fix issues for your own applications.
As jpobst mentioned, even if you cannot fix issues yourself, you can contact the Mono guys.

Any experiences with Websphere Integration Developer (WID)?

My company (a large organization) is developing a "road-map" for evolving their rather old, tangled confederation of systems to an SOA model. A few people are pushing hard for using Websphere Integration Developer and Websphere Process Server as the defacto platform for developing future applications...because they feel IBM is a stable vendor, the tools are made for the enterprise, they drank the "business agility" BPEL kool-aid, etc.
Does anyone have positive or negative thoughts on this platform? Do the GUI tools help eliminate monotonous/redundant coding...or just obscure things and make things harder to maintain? Basically, do the benefits justify the complexity?
My experience with the IBM Java tool set is pure pain. It took days to install lots of different versions of different components, all incompatible with each other; you discover a bug in component A and get told to update to see if that fixes it, updating component A breaks components B and C, you get told to update those as well, and so on.
I find Eclipse without the IBM extensions far more stable and quicker, and it provides more features (as its stable versions are a couple of releases ahead of WID/RAD).
I would advise against going the IBM way for development tools. As for Process Server, I have less experience, but the people in my team using it seemed to enjoy it about as much as I enjoyed WID. Not a lot.
So far I haven't been impressed by any tools with the "SOA" and/or "BPM" labels on them. My "roadmap" would be very, very iterative: show some results with the architecture as fast as possible while trying to grab some of the low-hanging fruit. That way you get a feel for what works for you and your people.
I would never let any vendor push me anywhere in the "sculpting" of the architecture.
I agree with other users complaining about WID. The only reason we are using WID is that a decision was made a while back to use IBM products across the board by our sales department.
That's right, our sales department made the decision to use IBM products.
Development has been painful and frustrating. We have lots of stability problems with Process Server; sometimes it doesn't want to start or shut down properly. Yeah, you can easily draw processes in the IDE, but almost any toolset provides that functionality these days. It is nothing special or unique to WID or IBM. IBM is a few iterations behind the mainstream.
There are plenty of open source implementations out there that offer great support. Check out JBoss or Red Hat; they are pretty good. If that doesn't float your boat, you can always use Apache tools.
Walter
Developers don't choose WID, WMB, or WPS. Managers do, because IBM is a "stable vendor".
Look at JBoss, or K.I.S.S.
WID/WPS is actually pretty simple. The original intention was for analysts and business people to "compose" services (DO NOT LET THEM DO THIS!), so the UI is simple and easy.
Most of the work will be in defining and implementing the back-end services, which, depending on the platform, will mostly involve wrapping existing code in SOA services.
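As a rough illustration of what "wrapping existing code" as a service can look like, here is a plain JAX-WS sketch; the class, method, and endpoint URL are hypothetical, and this is standard Java API usage rather than anything WID-specific.

    // Hypothetical sketch: exposing a legacy lookup as a SOAP "business service"
    // with plain JAX-WS. Names and the endpoint URL are invented for illustration.
    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    @WebService
    public class ContactDataService {

        // In a real wrapper this would delegate to the existing back-end code.
        @WebMethod
        public String getContactData(String customerId) {
            return "name, addresses, tel, fax for customer " + customerId;
        }

        public static void main(String[] args) {
            // Publishes the service and its generated WSDL (append ?wsdl to the URL)
            // for quick testing, e.g. with SoapUI.
            Endpoint.publish("http://localhost:8080/contactData", new ContactDataService());
        }
    }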
The most important thing to bear in mind is that SOAP is a technology, while SOA is an architecture and a state of mind.
There is a zen to a successful SOA implementation. It's all about "business services": if you have a service that you cannot describe to a business user in fewer than six words, you have done it wrong! Ideally the service name alone should be enough to describe the functionality of the service.
If you end up with a service called "MyApp.GetContactData", described as "get name, addresses, tel, fax, etc.", then you are there. If you have a service called "MyAppGetFaxNoFromOldSys", described as "Retrieve current-fax-nmbr from telephony table in legacy system", you are doomed!
Incidentally, most of the WebSphere tooling for WS-* is pretty nice. But I would recommend the very wonderful SoapUI tool from http://www.eviware.com, which is very good for composing/reading WSDL-based messages and also functions as a useful test client or server.
Do the GUI tools help eliminate monotonous/redundant coding...or just obscure things and make things harder to maintain? Basically, do the benefits justify the complexity?
As a developer, I find the tools at varying levels of being bug free. 6.0.1 was a pain; 6.2 is so much better. But once you develop with the tool, there is minimal effort to maintain it. I develop in hours what Java developers take days to do. It is also easy to maintain, as changes can be made very quickly. I cannot answer your question from the perspective of an architect or a manager, but I would agree with the comments of some others here.

FOSS ASP.Net Session Replication Solution?

I've been searching (with little success) for a free/open source session clustering and replication solution for ASP.NET. I've run across the usual suspects (Indexus SharedCache, memcached); however, each has some limitations.
Indexus - Very immature, stubbed session interface implementation. It's otherwise a great caching solution, though.
Memcached - Little replication/failover support without going to a db backend.
Several SF.Net projects - All aborted in the early stages... nothing that appears to have any traction, and one which seems to have gone all commercial.
Microsoft Velocity - Not OSS, but seems nice. Unfortunately, I didn't see where CTP1 supported failover, and there is no clear roadmap for this one. I fear that this one could fall off into the ether like many other MS dev projects.
I am fairly used to the Java world where it is kind of taken for granted that many solutions to problems such as this will be available from the FOSS world.
Are there any suitable alternatives available on the .Net world?
As far as Velocity is concerned, I have heard some great things about that project lately. It's still in the development stages and probably not ready for prime time yet. But I think the project has a solid footing and will become a strong, mature product from Microsoft, not fall off into the ether like you predict.
Recently I've heard podcasts from Scott Hanselman and Polymorphic Podcast regarding Velocity.
BTW, Windows Server AppFabric is out of beta. That's what I mentioned in my previous post.
Here is the link on general availability: http://blogs.technet.com/b/appfabric/archive/2010/06/07/windows-server-appfabric-now-generally-available.aspx
Which specific features do you think one can get in NCache and not in AppFabric?
Just a quick update on this thread for the sake of completion.
Velocity (now known as Windows Server AppFabric) is already out in production and offers a great distributed caching platform. More details are available on the MSDN site:
http://msdn.microsoft.com/en-us/windowsserver/ee695849.aspx
Although Velocity has made progress from CTP1 to CTP2, it still leaves much to be desired. It will be some time before they provide all the important features in a distributed cache and even longer before it is tested in the market. I wish them good luck.
In the meantime, NCache already provides all the CTP2 & V1 features, and many more. NCache is the first, most mature, and most feature-rich distributed cache in the .NET space. NCache is an enterprise-level in-memory distributed cache for .NET and also provides a distributed ASP.NET Session State. Check it out at Distributed Cache.
NCache Express is a totally free version of NCache. Check it out at Free Distributed Cache.

Resources