A former colleague of mine once told me that remotely distributing EJBs should always be treated as a last resort. According to him, the drawbacks of that approach often outweigh the benefits.
So when would remotely distributing EJBs actually be recommended? In what type of situation?
I mean, if I have a web-centric app suffering performance degradation because its server can't handle the load, I can load-balance that server rather than separate out the business components using EJB.
Can anyone enlighten me on this?
It's not all that simple. EJB technology is still one of the best tools for integrating applications based on Java EE. Some examples:
A remote EJB is the simplest way to access server-side business functions from a remote application (a remote client); see the sketch after this list. Now that application clients have become thinner and thinner, however, remote EJBs have lost much of that role.
If your business services are spread across several Java applications, remote EJBs are a simple way (though not the only one) to integrate them.
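For concreteness, here is a minimal sketch of a remote EJB (EJB 3.x). The names OrderService and OrderServiceBean, and the module name shop, are made up for illustration; any container-hosted business function would follow the same shape:

    // OrderService.java -- the remote business interface the client compiles against.
    package com.example;

    import javax.ejb.Remote;

    @Remote
    public interface OrderService {
        String placeOrder(String item, int quantity);
    }

    // OrderServiceBean.java -- the implementation, deployed in the EJB container.
    package com.example;

    import javax.ejb.Stateless;

    @Stateless
    public class OrderServiceBean implements OrderService {
        @Override
        public String placeOrder(String item, int quantity) {
            // Runs inside the container's transaction and security context.
            return "order accepted: " + quantity + " x " + item;
        }
    }

A standalone client then obtains a proxy over the network via JNDI, e.g. (the portable name assumes the bean is deployed in a module called shop):

    OrderService orders = (OrderService) new InitialContext()
            .lookup("java:global/shop/OrderServiceBean!com.example.OrderService");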
Some more background about the actual problem might help. I was not able to follow the question completely, since it began with EJBs and why we should or shouldn't use them remotely, and ended up talking about performance degradation. How are the two related in the context of the problem you are facing?
Suppose I have two or more different server applications developed in Clojure using ZeroMQ and BSON as protocols. How can I deploy them using a single JVM instance while also sharing common dependencies?
It seems a waste of memory to use a JVM instance for each standalone application. I plan to develop several Clojure applications in the future and VPS memory is not cheap.
Although it's not stated explicitly, applications running in an application server (Jetty, GlassFish) seem to share the same JVM while keeping their state isolated. However, they require a container, and neither Servlets nor Enterprise JavaBeans have an implementation that I could easily adapt to my custom protocol.
I've been thinking about using Servlets and implementing a dummy service() method, though I don't like the idea of carrying a pointless HTTP server's overhead. As for the EJB container, I cannot even figure out its implementation.
It would be nice to have a container requiring only init() and destroy() methods but I can't find an application server providing it.
Maybe there is a way around or I don't even need an application server. Could somebody point me in the right direction?
It sounds like you would be okay using an EJB container, but only if it were easier or simpler to work with. Have you looked at Immutant? It's basically a wrapper around JBossAS for Clojure, written by guys at Red Hat (who also own JBossAS).
In addition to being an application server, those guys have wrapped JMS and other Java EE features for use from Clojure, so sending messages between apps looks pretty simple.
Also, they have Daemons and Jobs, which may provide something similar to what you were describing as simple services with init() and destroy().
That being said, I haven't used it, so I can't vouch for its awesomeness (or awfulness).
So you have two applications that both share the same dependencies and both want to respond to and/or generate events on a message bus.
If I understand what you're saying, this should be as simple as starting the JVM with access to all code in the classpath and initializing your message bus and your code from a main method.
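A minimal sketch of that launcher, assuming Clojure 1.6+ (for clojure.java.api.Clojure) and two made-up entry namespaces, app-one.core and app-two.core, whose code is already on the classpath:

    import clojure.java.api.Clojure;
    import clojure.lang.IFn;

    public class Launcher {
        public static void main(String[] args) {
            IFn require = Clojure.var("clojure.core", "require");

            // Load each app's namespace, then run its -main on its own thread, so
            // both apps share one JVM (and its loaded libraries) but run independently.
            for (String ns : new String[] {"app-one.core", "app-two.core"}) {
                require.invoke(Clojure.read(ns));
                IFn main = Clojure.var(ns, "-main");
                new Thread(main::invoke, ns).start();
            }
        }
    }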
If you wanted to use a container, you could create some dummy message-driven beans that sit between your Clojure code and the message bus, assuming there is a JMS adapter for your message bus. Using NetBeans/GlassFish, this may not be that bad. You might gain something in terms of monitoring, but I'm not sure what else you would gain.
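A sketch of what such a dummy bean could look like, assuming a JMS resource adapter exists for the bus; the bean name, queue name, and activation properties are made up and container-specific:

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "jms/appEvents")
    })
    public class BusBridgeBean implements MessageListener {
        @Override
        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    // Hand the payload off to your Clojure code from here.
                    String body = ((TextMessage) message).getText();
                    System.out.println("received: " + body);
                }
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }
    }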
I kept searching and found out that some application servers implementing the OSGi service platform have simpler lightweight containers than those offered by Java EE.
Apache Karaf for instance can load POJO applications directly from JAR files.
I am not sure what DDs are, but any JAR is a valid bundle. Since Clojure is not type-safe you will need to bridge the Clojure world and the OSGi/Java world, but the OSGi API is a dream for building such bridges.
Note that bundles in OSGi do not automatically expose their content; a bundle is private by default. However, the API allows you to punch holes wherever you want.
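To make the lifecycle point concrete: an OSGi bundle activator gives you almost exactly the init()/destroy() pair asked about. A minimal sketch (the class name is arbitrary; it is wired up via the Bundle-Activator manifest header):

    package com.example;

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class Activator implements BundleActivator {
        @Override
        public void start(BundleContext context) {
            // init(): open sockets, start your ZeroMQ listeners, etc.
            System.out.println("bundle starting");
        }

        @Override
        public void stop(BundleContext context) {
            // destroy(): release resources so the bundle can be redeployed cleanly.
            System.out.println("bundle stopping");
        }
    }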
I am using a Flex + Spring BlazeDS Integration + Java combination for my project, deployed on a WebLogic server. As we know, whenever a client connects to BlazeDS it blocks one thread on the server, and that limits the maximum number of concurrent clients for one BlazeDS instance.
In my case I am expecting around 300,000 updates every hour, with around 500 concurrent clients at any moment; in the extreme case all 1,500 clients could be connected to the application at once. What is the best possible solution for that?
If I try to convince my clients to use LCDS, they will want to know the exact number of clients our current setup can support. I tried to measure that with NeoLoad but could not make much progress in that direction.
So if anybody has used such a setup and can advise me on what to do, that would be really great!
After some research (we may have a similar situation), it seems that BlazeDS is not able to use NIO. Here is a link about it. They offer a solution, but it seems broken with newer versions of Tomcat. So I guess BlazeDS is not the one to use in your use case.
If you cannot go with LCDS, a good free solution is GraniteDS, which supports asynchronous servlets.
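To illustrate why asynchronous servlets matter here: with Servlet 3.0, the container thread goes back to the pool while the client stays connected, which is exactly what BlazeDS's thread-per-client model cannot do. A bare sketch (the servlet name, URL, and payload are illustrative only):

    import java.io.IOException;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/updates", asyncSupported = true)
    public class UpdatesServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Detach from the request thread; it returns to the pool immediately.
            AsyncContext ctx = req.startAsync();
            ctx.setTimeout(30_000);

            // Later, when an update is ready, push it and complete the exchange.
            ctx.start(() -> {
                try {
                    ctx.getResponse().getWriter().write("update payload");
                } catch (IOException ignored) {
                } finally {
                    ctx.complete();
                }
            });
        }
    }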
My plan is to develop or use a Java-based integration framework (ESB, SOA, whatever) that deals with services, with the following constraints:
a Service can be deployed on multiple machines but doesn't have to be present on every one of them
a Service can be deployed and re-deployed (with a newer version) separately
a Service is connected to other services either by:
in-memory connections
(async / sync) remoting to other machines
the routing logic of the Service connections should be configurable on the fly, without re-deploying or restarting anything
I know that OpenESB comes close to these requirements, but it requires redeploying a service to change its routing (assuming the connections are HTTP BC based). I'm unfamiliar in this regard with Mule ESB, WSO2, JBoss ESB, and the other open-source ESBs. Is there any good solution for this (i.e. configurable in-memory and/or remote routing)? I don't really care about clustering, as I plan to use the servers separately; the designated JMS solution (if one is required) would be HornetQ, if that matters.
You mention several different concepts, but a combination of an ESB pattern, the Apache load balancer, and Maven should get you close. Do not get too hung up on the product; settle on a paradigm/pattern and the choice of product will be easy: either it does things the way you like or it does not.
Here is the pattern I use.
SOA Design Patterns
This may also interest you: SOA for executives
Cheers
After a long discussion about the pros and cons, we are going with a HornetQ-based (JMS) solution, in which we create message routing rules and, where needed, processing code that handles the different kinds of routing. HornetQ can satisfy the in-JVM requirement too, but that part is handled under the hood.
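As a rough illustration of the routing part, here is a hedged JMS sketch: one consumer reads an inbound queue and forwards each message based on a header. The queue names, the route property, and the JNDI bindings are all made up; the real ones come from your HornetQ/container configuration:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class Router {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) jndi.lookup("ConnectionFactory");
            Queue inbound  = (Queue) jndi.lookup("queue/inbound");
            Queue billing  = (Queue) jndi.lookup("queue/billing");
            Queue shipping = (Queue) jndi.lookup("queue/shipping");

            Connection conn = cf.createConnection();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer toBilling  = session.createProducer(billing);
            MessageProducer toShipping = session.createProducer(shipping);

            // The routing rule: a message header picks the destination. Changing
            // the rule is a data/config change, not a redeployment.
            session.createConsumer(inbound).setMessageListener(msg -> {
                try {
                    String route = msg.getStringProperty("route");
                    ("billing".equals(route) ? toBilling : toShipping).send(msg);
                } catch (JMSException e) {
                    throw new RuntimeException(e);
                }
            });
            conn.start(); // begin delivery; the listener runs on provider threads
            Thread.currentThread().join(); // keep the router process alive
        }
    }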
I am building an ASP.NET website that will collect data from a user and submit it to a third-party web service. The web service is somewhat unreliable, and for this reason there is a backup service.
If a call to the primary service fails (timeout or some other error), then I need to flip a bit in a static class which will trip the system over to using the secondary service.
At this point I need to start polling the primary service (with dummy data) to see if it is back up (at which point I will receive an OK code in return), and then flip the bit back so that the website starts using the primary service again.
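In case it helps, here is that flag-plus-poller pattern as a language-neutral sketch (written in Java purely for illustration; the same shape maps directly onto a C# static class). The endpoints, the 30-second interval, and the probe are all placeholders:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicBoolean;

    public class ServiceSwitch {
        private static final AtomicBoolean primaryDown = new AtomicBoolean(false);
        private static final ScheduledExecutorService poller =
                Executors.newSingleThreadScheduledExecutor();

        static {
            // Poll the primary (with dummy data) while it is marked down;
            // flip the bit back as soon as it answers OK.
            poller.scheduleAtFixedRate(() -> {
                if (primaryDown.get() && pingPrimaryWithDummyData()) {
                    primaryDown.set(false);
                }
            }, 30, 30, TimeUnit.SECONDS);
        }

        // Callers ask the switch which endpoint to use for each request.
        public static String endpoint() {
            return primaryDown.get()
                    ? "https://backup.example.com"   // placeholder URL
                    : "https://primary.example.com"; // placeholder URL
        }

        // Called when a call to the primary times out or errors.
        public static void tripToBackup() {
            primaryDown.set(true);
        }

        private static boolean pingPrimaryWithDummyData() {
            return false; // placeholder: a real probe returns true on an OK response
        }
    }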
I've had a read of this: Should I use a Windows Service or an ASP.NET Background Thread? and I think that separating the code out into a Windows Service would be the cleanest way of performing the polling, but then how would I communicate with the web application?
One thought I've had is to expose a web service that the Windows Service could use to communicate with the web app, but this seems both messy and overkill.
I'd appreciate your thoughts and experiences performing similar tasks.
Thanks
I think the Windows service is the way to go, definitely.
As for the communication between the service and your web site, the best answer depends on the size and scale of your solution. If you are building something that needs to be reliable, I'd suggest you implement some sort of queue between your ASP.NET site and your Windows service. You have a lot of options here too, depending on budget and ability: BizTalk, MSMQ, and SQL Server Service Broker queues. Alternatively, if you are looking for something smaller scale, I'd recommend you just stick the messages in a database table somewhere.
I would avoid using files on the file system, because you will run into file-locking and multithreading issues. I would also avoid communicating with the service directly, because you risk losing the in-memory queue if the service fails for any reason.
Edited to add:
If reliability isn't a concern here, you could use a WCF service hosted over named pipes for communication between your website and your Windows service. This avoids much of the overhead normally involved in classic web services and is surprisingly quick. The only downside is that self-hosting a WCF service is tricky, and it can be difficult to keep the service up.
I'm working on a .NET portal that will have lots of concurrent users,
so scalability and performance need to be addressed in the design and architecture.
We plan to use load balancing in the application.
Keeping this in mind, what would be the best way of communicating between the IIS web server (hosting the aspx and aspx.cs files) and the application server (hosting .NET assemblies such as the business logic and data access layers)?
Should it be .NET Remoting or a SOAP web service? Or is there another approach?
Thanks.
Is there another approach? Yes - don't distribute your objects.
The most scalable approach is NOT to distribute your objects away from each other. Ask yourself: why do you want to deploy one flavor of code to an "app server" while another flavor of code goes to a "web server"? The communication that goes on between those two layers, if they are distributed, will be much, much, much more expensive than a local call.
With today's 64-bit servers, with all of that memory, and the hot CPUs, and with ASP.NET's superior memory management, why not put your business logic and DAL on the same physical machine as the ASPX files? Why not?
If you need to scale, add more servers. Simple.
There are good reasons, of course, to distribute. The most common good ones have to do with domains of ownership, along several axes: security management, or even budget and control. To take the latter case: if one team is responsible for running the business logic and a separate team is responsible for building and running the web layer, then it may make sense to distribute those two things to allow independent management. Most of the good reasons for distributing code have their origins in the structures of the human organizations using or developing it.
There is no good technical reason why a web page should not run on the same CPU, sharing the same CLR VM and memory heap, as the database access layer.
Regardless of what you do about distribution, it would be unwise to architect your system with less-than-formal interfaces defining the connections between the layers. If you keep formal interfaces, then it should be no problem to measure the performance and efficiency of a distributed approach versus a co-located one.
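To make the "formal interfaces" point concrete, a tiny sketch (all names are illustrative): callers depend only on the interface, so a co-located and a distributed implementation are interchangeable, and you can benchmark one against the other.

    public interface CustomerService {
        String lookupCustomer(int id);
    }

    // Co-located: a plain in-process call straight into the DAL.
    class LocalCustomerService implements CustomerService {
        public String lookupCustomer(int id) {
            return "customer-" + id;
        }
    }

    // Distributed: same contract, but every call crosses the network, and the
    // per-call serialization and latency cost lives here.
    class RemoteCustomerService implements CustomerService {
        public String lookupCustomer(int id) {
            throw new UnsupportedOperationException("remote transport not shown");
        }
    }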
Do you really need an app server? Just how big are we talking, exactly? For example, stackoverflow.com has ~50k uniques a day and doesn't have an app server, so I assume you are talking much bigger than that? Most performance bottlenecks come down to database issues, so I would concentrate on the database.
I suggest you take a look at the Patterns and Practices group's guidelines for performance, more specifically Chapter 6 - Improving ASP.NET Performance. I agree with Cheeso that you should seriously consider NOT physically splitting your application layer and UI layer if you can avoid it. The guideline has the following notes:
Avoid Unnecessary Process Hops
Although process hops are not as expensive as machine hops, you should avoid process hops where possible. Process hops cause added overhead because they require interprocess communication (IPC) and marshaling. For example, if your solution uses Enterprise Services, use library applications where possible, unless you need to put your Enterprise Services application on a remote middle tier.
Understand the Performance Implications of a Remote Middle Tier
If possible, avoid the overhead of interprocess and intercomputer communication. Unless your business requirements dictate the use of a remote middle tier, keep your presentation, business, and data access logic on the Web server. Deploy your business and data access assemblies to the Bin directory of your application. However, you might require a remote middle tier for any of the following reasons:
You want to share your business logic between your Internet-facing Web applications and other internal enterprise applications.
Your scale-out and fault tolerance requirements dictate the use of a middle tier cluster or of load-balanced servers.
Your corporate security policy mandates that you cannot put business logic on your Web servers.
If you absolutely have to split the application logic up anyway, you could use WCF as the transport mechanism. I'm not sure how it stacks up against Remoting when it comes to performance, but I seem to remember that WCF is what Microsoft is pushing.
Clemens Vasters (Technical Lead for the Microsoft .NET Service Bus) talks about WCF vs. Remoting in this answer on MSDN forums.
Learn to write asynchronously.
Explore the CCR runtime for example.
Each thread that is blocked waiting for IO responses is one less available to your system.
Turn off 'idealised logging', but leave the ability to switch it back on via an admin console; logging is often a hidden bottleneck.
CACHE CACHE CACHE!
If it was expensive to get the data the first time, don't pay for it the second! (See the sketch after this list.)
Avoid ASP.NET's session state. It can seriously bloat your pages and lead to a big slowdown in responsiveness.
Modify the HTTP headers to specify short browser caching (5-20 seconds, depending on the nature of the content).
Utilise GZIP while you are at it!
AND USE LOTS OF RAM
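The caching tip in miniature, using Java as pseudocode for the pattern (in ASP.NET the equivalent would live behind the Cache/ObjectCache APIs): pay for the expensive fetch once, then serve every later request from memory. The ReportCache name and loader are made up:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class ReportCache {
        private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

        public String reportFor(String key) {
            // computeIfAbsent runs the expensive load only on the first miss.
            return cache.computeIfAbsent(key, this::expensiveLoad);
        }

        private String expensiveLoad(String key) {
            return "report for " + key; // imagine a slow database query here
        }
    }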
Here are my tips:
1) Serve all your static files (images, CSS, JS) from a front-end server such as nginx. This will greatly reduce the load on the IIS server and leave it enough free resources to serve the main requests.
2) Think about caching and avoiding database access altogether.
3) Try to implement REST principles as far as possible.
4) Keep session state to a bare minimum; if possible, avoid it altogether.
There are some good performance and scalability points in these articles from Omar Al Zabir.
10 ASP.NET Performance and Scalability Secrets
and
99.99% available ASP.NET and SQL Server SaaS Production Architecture
(also check out his book Building a Web 2.0 Portal with ASP.NET 3.5)