What are the differences between NGINX Plus and the NGINX community edition? Some doubts about using NGINX for WSO2 EI cluster creation - nginx

I am absolutely new to NGINX and I have the following doubts about this product.
I have to create a WSO2 EI cluster, and the official documentation says to use NGINX as the load balancer:
https://docs.wso2.com/display/EI650/Clustering+the+ESB+Profile#ClusteringtheESBProfile-Configuringtheloadbalancer
On the official documentation it is specified that:
Follow the steps below to configure NGINX Plus version 1.7.11 or NGINX
community version 1.9.2 as the load balancer.
So the first doubt: what is the difference between NGINX Plus and NGINX Community? Is the first the paid version and the second a free version?
If my assumption is correct, what are the limitations of using the community edition?
Another doubt: going to the NGINX website:
https://www.nginx.com/solutions/adc/
it seems to me that it offers different products (from a load balancer to a web server and other things). Is it a single product that does several jobs, or is it composed of different modules that have to be installed separately?
Another doubt: do the hardware requirements of the VM where I have to install it change based on the amount of traffic the load balancer has to handle?
Thank you

Is the first the paid version and the second a free version?
Basically, yes - Plus comes with additional features.
NGINX Plus also supports, out of the box, the sticky sessions needed for an HA setup of the Carbon console, active service health checks, and more. I needed the two features mentioned.
In theory you could build (compile) additional add-on modules (e.g. for sticky sessions and health checks) into the community edition too, but it doesn't always work as smoothly as I expected. (You may also want to consider Apache httpd.)
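To make the difference concrete, here is a minimal sketch of an upstream block for each approach (node addresses and ports are placeholder assumptions, not values from the WSO2 guide):

# With NGINX Plus, sticky sessions are a built-in upstream directive:
upstream wso2ei_plus {
    server 192.168.1.11:9443;
    server 192.168.1.12:9443;
    sticky cookie srv_id expires=1h;   # Plus-only directive
}

# With the community edition, the closest built-in option is ip_hash:
upstream wso2ei_community {
    ip_hash;                           # same client IP -> same backend node
    server 192.168.1.11:9443;
    server 192.168.1.12:9443;
}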
It may be worth having support at hand, mainly for critical deployments. I prefer that to clients calling me on weekends to check my custom builds.
Is it a single product that does several jobs, or is it composed of different modules that have to be installed separately?
NGINX offers more products (API management, WAF, ...); as far as I know, they are all NGINX Plus with additional modules. But for load balancing you may be fine with the basic web server (as a load balancer) and keepalived.
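If you go that route, keepalived only needs a small config to float a virtual IP between two NGINX nodes - a rough sketch, with the interface name, router ID and VIP as placeholder assumptions:

vrrp_instance VI_1 {
    state MASTER            # set to BACKUP on the second NGINX node
    interface eth0
    virtual_router_id 51
    priority 100            # use a lower priority on the BACKUP node
    advert_int 1
    virtual_ipaddress {
        192.168.1.100       # the VIP that clients connect to
    }
}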
Another doubt is... do the hardware requirements of the VM where I have to install it change?
NGINX can handle a lot of traffic even on modest infrastructure, much more than WSO2 EI itself. IMHO, nginx won't be your bottleneck unless you do something special (WAF) or unwise (logging payloads).

Related

Where does the web server come into play in an OpenStack - CloudFoundry stack

I work for a small web startup. They have decided to use OpenStack as the IaaS and then, on top of it, Cloud Foundry as the PaaS. I am trying to learn about this technology stack, but I am really confused even after going through the documentation and related materials on the web.
What do I want?
I have a web site that currently runs on a RHEL system (an AWS instance), with nginx as the web server. I want to shift this to the OpenStack-Cloud Foundry stack because the company's management has decided to do so. They also want me to evaluate whether I can put Docker to use anywhere.
From my understanding, OpenStack (IaaS) will provide me with everything related to hardware and software needs, and Cloud Foundry will help me on the development front.
Now, where does nginx (or any web server) come into the picture? Is it part of OpenStack or part of Cloud Foundry?
On my AWS RHEL system, do I just install OpenStack and Cloud Foundry, then push my app and not bother at all about what happens beneath? I am really confused... please help out.
And is there anywhere I can use Docker in this setup?
You would generally not deploy OpenStack on top of AWS. OpenStack is similar to AWS in that it provides a service for you to create and destroy virtual machine instances, manage networking between and around your VMs, attach and detach block devices to instances, etc. In other words, both are services for managing "infrastructures", where "infrastructure" here means a virtualized datacenter, which at its core means a bunch of hardware running hypervisors that allow you to regard the datacenter as a bunch of virtual machines that can be spun up and down on demand, rather than a bunch of "static" physical machines.
AWS is an Infrastructure-as-a-Service provided by Amazon, so you don't have to install AWS yourself, you can just start using it to provision VM instances within Amazon's datacenters. OpenStack is software you install yourself (or pay a vendor to manage for you) on hardware you own or pay for yourself, and once installed OpenStack provides a similar service/interface to AWS.
With a Platform-as-a-Service, you concern yourself more with your application code, and "just pushing it", and don't have to concern yourself as much with what's happening on the underlying machine. You don't have to worry as much about the underlying OS, making sure you have the right runtime and code dependencies of your application, generally don't have to care about the webserver that's serving your code, etc. And you get many more higher level features, e.g. easy ability to scale vertically or horizontally, dynamic routing, automatic log aggregation, automatic health management, etc.
As far as how nginx fits in, it depends on how you're using nginx and what kind of application you have. Cloud Foundry has a couple of ways of dealing with applications.
One is the buildpack model, where you simply push your source code to the platform, and it will automatically detect the appropriate runtime and dependencies for your application. For instance, if your application is a Ruby application, it will automatically detect this and by default run the application using the WEBrick server. However, you can choose other Ruby web servers such as Phusion Passenger, etc. [1]
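As a hedged illustration of that (based on the buildpack documentation linked below, not something shown in the original answer), switching the Ruby app to a different server is typically just a Gemfile entry plus a Procfile line; the server choice here is an example:

# Gemfile: add the web server you want the buildpack to install
gem 'puma'

# Procfile: tell the platform how to start the web process
web: bundle exec puma -p $PORT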
If your application is primarily serving static content, it will use nginx as the web server. [2]
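A hedged example of that path (the app name is a placeholder): pushing a folder of static files with the staticfile buildpack looks roughly like this, where the empty Staticfile marker tells the buildpack it is a static site and nginx is provided by the buildpack itself.

touch Staticfile
cf push my-static-site -b staticfile_buildpack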
Another is using Docker. You can deploy applications based on Docker images on Cloud Foundry, in which case you could have a container running nginx and your application inside the container - or not, depending on whether you still need nginx. Pushing a Docker application is as simple as:
cf push trainingwebapp --docker-image training/webapp -c 'python app.py'
Here, this uses the sample Hello World web app from the Docker documentation. [3]
[1] https://docs.cloudfoundry.org/buildpacks/ruby/ruby-prod-server.html
[2] https://docs.cloudfoundry.org/buildpacks/staticfile/index.html
[3] https://docs.docker.com/engine/userguide/containers/usingdocker/

Run multiple instances of IBM BPM

I have the IBM Business Process Manager Advanced 7.5 installed.
Question:
Is it possible to install and run a newer version - IBM BPM 8.5 - on the same machine?
I worry about port conflicts (for example, port 9043 for the IBM Console).
Maybe I should ask how to change the default port configuration?
Please help.
Technically it is possible; however, I suggest you do not do this, as IBM BPM requires a lot of system resources to run, and installing two versions of IBM BPM can make the system slower than before.
However, I have seen multiple instances of the same IBM BPM version running in a single cluster on a server VM. This is stable in practice and has been in use for a considerable time.
PS - I have administered a large IBM BPM infrastructure containing 80+ IBM BPM servers.
As Gas already commented, in theory this is possible. But you have to be aware that IBM BPM does not only use the specified ports for web access; it also uses ports for internal communication. In my opinion, this is not an easy task to get right.
On the other hand, the system requirements for IBM BPM are quite demanding on the server. If you want to run both instances in parallel, you should make sure your server is capable. WebSphere is kind of greedy and not really designed to share its resources ;)
Yes, you can run multiple versions of BPM on the same system. The primary concerns are going to be port conflict and OS system resources. Use the BPMConfig to create a new profile and installation that is on different ports. On my lab machines with VMs, I install all the BPM installs with the default ports and only have one (1) running at a time. If I need 2, I just spin up a new VM from the base template and go from there.
By default, port conflicts are addressed by the WebSphere Application Server code. If needed, you can specify "initialPortAssignment" for the Dmgr, nodes, and cluster members while creating the environment using the BPMConfig command. You can even specify exact port numbers using the sample configuration properties file:
https://www.ibm.com/support/knowledgecenter/en/SSFPJS_8.6.0/com.ibm.wbpm.ref.doc/topics/samplecfgprops.html
You can also provide WebSphere options like "-startingPort starting_port | -portsFile ports_file_path | -defaultPorts" for the Dmgr (bpm.dmgr.profileOptions=) and the nodes (bpm.de.node.#.profileOptions=) in the BPMConfig properties file. For cluster members there is only an option to indicate the starting port.
Ref: https://www.ibm.com/support/knowledgecenter/cs/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/rxml_manageprofiles.html
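A hedged sketch of what those options could look like in the BPMConfig properties file (the keys follow the property names mentioned above; the node index and port values are assumptions to verify against the sample configuration properties):

# illustrative excerpt only - check keys and values against the sample properties
bpm.dmgr.profileOptions=-startingPort 10200
bpm.de.node.1.profileOptions=-startingPort 10400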
I would not advise changing the port numbers once you start using the BPM environment.
As indicated by others, make sure you have enough resources if you are planning to run both environments at the same time.
Yes, I am using two versions for evaluation. Port conflicts can be handled using the server console (WebSphere Integrated Solutions Console) or the BPMConfig utilities.

Alfresco Community 5 Share Clustering

I'm seeing a lot of conflicting information on the internet about Alfresco Share clustering. From what I can find, it looks like clustering was removed completely from Alfresco Community in versions 4.2 and above.
I did find some documentation showing that Alfresco One 5 has Share clustering, and I noticed that I can enable Hazelcast in Alfresco Community 5, but the clustering doesn't work at all.
Is there a way to have more than one instance of Alfresco Community 5 behind a load balancer and have proper synchronization/replication/clustering occur between the Share instances?
Short answer
There is no cluster and no load balancer support for the Alfresco Community version (that I know of). Alfresco removed that feature from the community version starting with 4.2, when they refactored the whole clustering mechanism.
Long answer
What are you trying to achieve?
If scalability is your goal, you should focus on the bottlenecks in the Alfresco architecture, which will not be solved by clustering/load balancing. I haven't seen a system where the Share tier was the bottleneck.
Quite the contrary: if the load from Share against the repository tier is too high, you will run into timeouts and thread escalation, since Alfresco follows the "retrying transaction" principle: if errors occur, Share will retry - which means that if the repository answers too slowly, Share will create new requests/threads until the OS reaches kernel or process limits, without any result.
So instead you should focus on optimizing the repository tier to become as fast as possible in order to avoid thread escalation in Share (this also can't be achieved by clustering):
transformation --> understand, replace, or disable synchronous transformation running on the repository tier
search --> understand and optimize tracking, and run SOLR on separate host(s); but tracking will rely on the transformation performance of the repository tier
caching --> use smart reverse proxies to cache Share resources on the client and proxy side to minimize traffic (see the sketch after this list)
very fast/smart storage concepts on the DB and index tier
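On the caching point, a rough sketch of such a reverse proxy with nginx (the host name, cache path, sizes and cache times are assumptions, and it assumes Share serves its static resources under /share/res/; the directives go inside the http block):

# minimal sketch - adjust host, cache path and sizes to your setup
proxy_cache_path /var/cache/nginx keys_zone=share_cache:10m max_size=1g;

server {
    listen 80;

    # static Share resources: cache on the proxy and in the browser
    location /share/res/ {
        proxy_pass http://share-host:8080;
        proxy_cache share_cache;
        proxy_cache_valid 200 10m;
        expires 1h;
    }

    # everything else goes straight to the Share tier
    location / {
        proxy_pass http://share-host:8080;
    }
}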
If availability is your concern, you may get better results by using HA features of virtualization platforms like VMware ESX, and your support effort will be a fraction of what a clustered Alfresco requires.

Replacement for Hamachi for SVN access

My company has been using Hamachi to access our SVN repository for a number of years. We are a small yet widely distributed development team with each programmer in a different country working from home. The server is hosted by a non-techie in our central office. Hamachi is useful here since it has a GUI and supports remote management.
This system worked well for a while, but recently I have moved to a country with poor internet speeds. Hamachi will no longer connect 99% of the time - instead I get a "Probing..." message that doesn't resolve. It's certain to be a latency issue, as the same laptop will connect without problems when I cross the border and connect using a different ISP with better speeds.
So I really need to replace Hamachi with some other VPN/protocol that handles latency better. The techie managing the repository is not comfortable installing and configuring Apache or IIS, so it looks like HTTP is out. I tried to convince my boss to go for a web hosting company, but he doesn't trust a 3rd party with our source.
Are there any other recommended options / experiences out there for accessing our SVN repos that would be as simple as Hamachi to set up, but more tolerant of network latency issues?
Perhaps it's a bit much to ask of your team, but since you have a distributed team you could switch to a distributed version control system (e.g. Mercurial or Git). These don't need to use the network as much, so you won't suffer from latency problems. It is an entirely new paradigm though, and your team's development processes will have to change, so you might not consider it appropriate in your case.
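As a quick illustration (the repository URL and branch name are placeholders), with a DVCS the network is only touched when you synchronize:

git clone https://example.com/our-project.git   # one-time, over the network
cd our-project
# ... edit files locally ...
git commit -am "commits are local, no network round-trip"
git push origin master                           # only this step is latency-sensitive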
First I should ask why you need a VPN in the first place. Subversion can operate over HTTPS, so as long as you open the proper port on the server there shouldn't be any security or connectivity issues.
Assuming that you do need a VPN, I find it difficult to believe that an administrator uncomfortable with Apache would be more comfortable installing a whole new VPN system (much more complicated and tricky, in my estimation).
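For reference, the Apache piece that the HTTPS option needs is a fairly small block - a hedged sketch using the standard mod_dav_svn directives, with the repository and auth-file paths as placeholders:

LoadModule dav_svn_module modules/mod_dav_svn.so

<Location /svn>
    DAV svn
    SVNParentPath /var/svn/repos       # parent directory holding the repositories
    AuthType Basic
    AuthName "SVN repository"
    AuthUserFile /etc/svn-auth-users   # created with htpasswd
    Require valid-user
</Location>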

Are there any all-in-one packages that help install WAMP on a production server?

I need to install AMP on a Windows 2003 production server. I'd like, if possible, an integrated install/management tool so I don't have to install and integrate the components of AMP separately. Those that I've found are 'development' servers. Are there any packages out there that install AMP in a production-ready (locked-down) state?
I'm aware of LAMP... Windows is a requirement, since we have IIS apps already and we've paid for this box. I'll take care of all the other hang-ups. I just want a simple way to install, integrate, and manage AMP.
I'm not sure running WAMP as a production server is a good idea. I use WAMP to stage projects and then I move them to a Linux server.
You can try any of these solutions:
http://www.uniformserver.com/
Some people state that WampServer works fine for them, but again, I wouldn't recommend it.
XAMPP is quite popular, I just don't know how "production level" it is:
http://www.apachefriends.org/en/xampp.html
Without wanting to sound elitist: for "real" production environments, it's possibly not a bad idea to set up and configure the components individually, but this requires some deeper knowledge than "hit setup and run".
There don't appear to be any all-in-one packages that are up to date and 'designed' for production. You just can't trust the default installs of what's out there to be secure.
I ended up just doing this manually. It wasn't painful, though. Each component's install procedure was documented reasonably well. It took me about 3.5 hours. A nice side effect of the involved setup was that it gave me a much better understanding of each component's dependencies and the ways in which they touch. In hindsight, I should have done it manually from the start.
Note: make sure you read the comments below each component's documentation pages. Some contain valuable corrections to the install process.
Since the time this question was asked Zend has released Zend Server.
Zend Server is a complete, enterprise-ready Web Application Server for running and managing PHP applications that require a high level of reliability, performance and security.
There don't appear to be any all-in-one packages that are up to date and 'designed' for production. You just can't trust the default installs of what's out there to be secure.
WampDeveloper Pro is a commercial WAMP package that is specifically designed for production use (which I use).
I don't think that when this question was asked there was a viable solution for the above.
