Effects of installing an add-on on a stateful, horizontally scaled node

I am interested in the effects of installing an add-on on a stateful, horizontally scaled node. When a node is horizontally scaled so that there are 3 nodes (1 master and 2 regular), does the installed add-on only take effect on the master node, or is it installed on the regular nodes too? If not, can we explicitly install an add-on on a regular node?
I could not find any information about this situation in the documentation.

This is largely determined by the add-on itself, depending on what it has defined in its onBeforeScaleOut and onAfterScaleOut event handlers.
Ideally, every add-on should be written with those handlers in place (or with appropriate behaviour that doesn't need them), but in practice that's not guaranteed... it depends entirely on the author.
If we assume that the add-on in question did not define what should happen, stateful scaling means that the entire filesystem of the master node will be copied - so any configuration files or system packages that were deployed on the master node by the add-on will also appear on the newly horizontally scaled node.
The documentation mentions a warning for the inverse scenario (applicable for stateless scaling):
add-ons - any add-ons installed on the layer won’t be available
Edit to address the question clarification from the comment:
I was not so much interested in scaling up or down. I am interested in the effects of installing the add-on (with a manifest) on a stateful, horizontally scaled node.
Stateful vs. stateless only matters during scaling - it describes how they are created (for existing nodes, it doesn't matter how they were created; they are treated the same way).
As far as I know it is installed exclusively on the master node, but I want to be sure, and also to know whether it is possible to force it to install on the regular nodes.
This depends entirely on the add-on and how it targets its operations. It can be designed to only execute commands / install things on the master node for a given layer, or it can be designed to work on all nodes within the layer (or environment).
The JPS scripting behind each add-on is usually open source. You can find some examples at https://github.com/jelastic-jps, but I recommend addressing your question to the add-on author (or your Jelastic provider, if it's one that they provide by default).
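For illustration, here is a rough sketch of the kind of manifest logic involved. The add-on name, node group and commands are invented, and the exact event/action syntax should be checked against the Jelastic JPS documentation for your platform version:

# hypothetical add-on manifest - the name, node group and commands are placeholders
type: update
name: example-addon

onInstall:
  # runs when the add-on is installed; [cp] targets the compute layer
  cmd [cp]: echo "configured by example-addon" >> /var/log/example-addon.log

onAfterScaleOut[cp]:
  # runs after new nodes are added to the cp layer;
  # repeat the configuration on each newly created node
  forEach(event.response.nodes):
    cmd [${@i.id}]: echo "configured by example-addon" >> /var/log/example-addon.log

An add-on written along these lines reaches both the master and any nodes added later; one without such a handler relies on the stateful filesystem copy described above, or simply never touches the new nodes.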

Related

CMS - How to work with multiple environments? Do I really need them?

I've never worked with any CMS before and simply wanted to play around with one. Since I originally come from a .NET background, I was thinking about choosing Orchard Core CMS.
Let's imagine a very simple scenario: together with my colleague, I'd like to create a blog. As I'm used to working on web-based business systems and applications, it's normal for me to work with a code repository, multiple environments (dev/test/stage/prod), CI/CD, and database changes via migrations or scripts.
Now the question is: do I need all of this when working on our blog with a CMS?
To be more specific, I can ask a few questions:
Should I create the blog with the CMS locally (on my PC), create a few articles, and then deploy it to the web, or should I create the blog directly on the internet and add articles in the prod environment?
How do I synchronize databases between environments (dev/prod)?
I can add that, as I do not expect many visitors to the website, I was thinking of using Orchard Core CMS together with SQLite. I also expect to be able to customize code, add new modules, extend existing ones, etc. - not only add content (articles). You can take that into consideration when answering the question.
So basically my question is: what should the workflow be for someone who wants to create, administer, and maintain a CMS (let it be a blog), either as a single person or as a team?
Should I work and create content locally, then publish it and somehow synchronize both the application and the database (the database is my main question mark - also in the context of how to do that properly with SQLite)?
Or should all the changes - code + content - simply be managed directly on the server, let's call it the production environment?
Excuse me if the question is silly or hard to understand, but I'm looking for any advice, as I really didn't find any good examples or information about this - or maybe I'm searching in a totally wrong direction.
Thanks in advance.
Great question, not at all silly ;)
When dealing with a CMS, you need to think about the data/content in very different terms from the code/modules, despite the fact that the boundary between them is not always completely obvious.
For Orchard, the recommendation is not to install modules in production, but to have a dev - staging - production type of environment: install new modules on a dev environment, test them in staging, and then deploy to production when it's safe to do so. Depending on the scale of the project, the staging may be skipped for a more agile dev to prod setting but the idea remains the same, and is not very different from any modular application.
Then you have the activation and configuration of the settings of the modules you deploy. Because in a CMS like Orchard, those settings are considered data and stored in the database, they should be handled like content. This includes metadata such as the very shape of the content of your site: content types are data.
Data is typically not deployed like code is, with staging and prod environments (although it can, to a degree, more on that in a moment). One reason for this is that a CMS will often feature user-provided data, such as reviews, ratings, comments or usage stats. Synchronizing all that two-ways is very impractical. Another even more important reason is that the very reason to use a CMS is to let non-technical owners of the site manage content themselves in a fast and direct manner.
The difference between code and data is also visible in the way you secure their changes: for code, usual source control is still the rule, whereas for the content, you'll setup database backups.
Also important to mention is the structure of the database. You typically don't have to worry about this until you write your own modules: Orchard comes with a rich data migration feature that makes sure the database structure gets updated with the code that uses it. So don't worry about that, the database will just update itself as you deploy code to production.
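To make that concrete, here is a rough sketch of what such a migration looks like in Orchard Core. The content type name is invented for the example, and the exact API surface (synchronous vs. async methods) varies between Orchard Core versions, so treat it as an illustration rather than copy-paste code:

using OrchardCore.ContentManagement.Metadata;
using OrchardCore.ContentManagement.Metadata.Settings;
using OrchardCore.Data.Migration;

public class Migrations : DataMigration
{
    private readonly IContentDefinitionManager _contentDefinitionManager;

    public Migrations(IContentDefinitionManager contentDefinitionManager)
        => _contentDefinitionManager = contentDefinitionManager;

    // Runs the first time the module is enabled on an environment.
    public int Create()
    {
        _contentDefinitionManager.AlterTypeDefinition("BlogPost", type => type
            .Creatable()
            .Draftable()     // content of this type can be drafted and previewed
            .Versionable());

        return 1; // schema version after this step
    }

    // Runs automatically on environments that are still at version 1
    // when newer code is deployed there.
    public int UpdateFrom1()
    {
        _contentDefinitionManager.AlterTypeDefinition("BlogPost", type => type
            .Listable());

        return 2;
    }
}

Because the migration ships with the code, deploying the module to staging or production is enough for the schema and content definitions to catch up on their own.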
Finally, I must mention that some CMS sites do need to be able to stage content and test it before exposing it to end users. There are variations of that: in some cases, being able to draft and preview content items is enough. Orchard supports that out of the box: any content type can be marked draftable. When that is not enough, there is an optional feature called Deployments that enables rich content deployment workflows that can be repeated, scheduled, and validated. An important point concerning that module is that the deployment only applies to the subset of the site's content you decide it should apply to (and excludes, obviously, stuff like user-provided content).
So in summary, treat code and modules as something you deploy in a one-way fashion from the dev box all the way to production, with ordinary source control and deployment methods, and treat data depending on the scenario, from simple direct in production database instances with a good backup policy, to drafts stored in production, and then all the way to complex content deployment rules.

Translucent overlay configuration

Is anyone familiar with how to set up the translucent overlay for OpenLDAP 2.4.40?
I have searched the internet without any luck.
What I want to implement is two OpenLDAP servers, so that one server gets the search information from the other one, overrides some of it based on its own database, and then returns the final attributes.
His question would be "How do you start?" I've also read the "documentation"; it's terrible on this subject.
The slapo-translucent man page has no useful information other than "this is the translucent overlay, you can enable it." There's nothing about how you configure it to point to the remote LDAP server, and very little information on how you determine what cn/dn/du/o/fu you want to add/modify on the remote search results. (I just want to add to a user's group membership, and there isn't an example of something even that simple.)
Everything regarding OpenLDAP 2.4 says you should be using ldapadd/ldapmodify to change the slapd dynamic configuration in /etc/ldap/slapd.d, and yet ALL the examples/tutorials for translucent overlays reference outdated slapd.conf usage.
Basically, none of the documentation is in any way educational unless you are already a full wizard at administering OpenLDAP.
On top of that, the community documentation comes from a wide variety of Unix distributions, none of which agree with each other, which just maximizes the confusion.
My interaction with OpenLDAP leaves me with the impression that it has, easily, the most confusing configuration and usage architecture of any service that I have ever seen.
A directory service is something that an admin should be able to install and stand up in a day with no prior experience. It's clearly going to take weeks of trying to untie the configuration knot that this requires.
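For what it's worth, the closest thing to a minimal configuration I've been able to piece together from slapo-translucent(5) and slapd-ldap(5) looks roughly like the following - slapd.conf syntax, since that's all the examples use; the suffix, URI and credentials are placeholders, and every directive should be double-checked against your version before trusting it:

# local (translucent) server - the same directives map onto olc* attributes in cn=config
database        mdb
suffix          "dc=example,dc=com"
rootdn          "cn=admin,dc=example,dc=com"
directory       /var/lib/ldap/translucent

# stack the translucent overlay on the local database; the remote DSA is reached
# through the ldap backend, so slapd-ldap(5) directives are what point it at the
# other server
overlay         translucent
uri             ldap://remote.example.com/
acl-bind        bindmethod=simple binddn="cn=proxy,dc=example,dc=com" credentials="secret"
lastmod         off

# entries created locally under the same DN as a remote entry have their attributes
# merged over the remote result - which is how extra attributes (e.g. group
# membership) would be layered onto a remote user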

Cascading microservices using Meteor

I've been looking into scaling Meteor and had an idea using the Meteor Cluster package:
Create a super-service*, which the user connects to, containing the general core packages to be used by every micro-service (the api, app, salesSite, etc. would make use of its packages),
The super-service then routes to the appropriate micro-service (e.g., the app), providing it with the functionality of its own packages.
(* - as in super- and sub-, not that it's awesome... I mean it is but...)
The idea being that I can cascade each service as a superset of the super-service. This would also allow me to cleverly inherit functionality for other services in a cascading service style. E.g.,
unauthedApp > guestApp > userApp > modApp > adminApp,
for the application, where the functionality of each service is inherited by the next one in the chain (i.e., the further right along that chain, the more extra functionality is added and inherited).
Is this possible?
EDIT: If possible, is there a provided example of how to implement such a pattern using micro-services?
[[[[[ BIG EDIT #2: ]]]]]
I think I'm trying to make a solution fit the problem, so let me re-explain, so that this question can be answered based on the issue rather than on the solution I'm trying to implement.
Basically, I want to "inherit" (for lack of a better word) the packages each service depends on based on the functionality it needs, so that no code is unnecessarily sent over the wire.
So, starting with the core packages, which hold libraries I want all of my services to have, I then want to further "add" functionality as needed. Then I want to add page packages if I'm serving a page-based service (unlike, say, the API service, which doesn't render pages), then the appropriate role-based page packages, etc., until the most specific packages have been added.
My thought was that I could chain the services in such a way that I could traverse from the most generic to the most specific service, ending up with a composition of packages from multiple services. So for, e.g., the guestApp, that might be the core packages + generic page packages + generic app packages + unauthApp packages + guestApp packages, so that no unnecessary packages are added.
Also, with this imaginary pattern I'm describing, I don't need to add all my core packages to each microservice - I can deal with them all in the core package right at the top of the traversal I've described above, and not have to worry about forgetting to add them to the "inheriting" services.
Hope my reasoning here makes sense, and I hope you guys know of a best practice for doing this. Thank you!
Short answer:
Yes! That's a good use of a microservice architecture.
Long answer:
Microservices don't necessarily provide you with an inheritance mechanism as in OOP. You should consider microservices as independent "functions" which take in an input and respond with an output/action. Any microservice can depend on another to complete its own task.
And then, you "compose" necessary microservices in order to achieve the final output/action.
You can have one or few web facing "frontend" services that use a mix of few other backend microservices whose ports are not open to the public network.
The drawback of a microservice is its "minimum footprint". The idea behind microservices revolves around a few main benefits:
Separate core services so that they can be "maintained" independently
Separate core services so that they can be "replaced" independently
Separate core services so that they can be "scaled" independently
But then each microservice, being a Node/Meteor app, will have its minimum CPU/RAM footprint even when it is just sitting idle, waiting for a connection.
Furthermore, managing a single monolithic app, or just a few "largish" services is much easier, from a devops standpoint, than managing tens of individual deployments.
So, as with all engineering decisions, the right answer implies some kind of "balance".
Edit: regarding inheritance
As per the OP's comment, the microservices can indeed be referenced from parent code as either functions or classes and be composed (functions) or inherited from (classes), because after all the underlying functionality is just a set of DDP endpoints.
If you are using the cluster package from meteorhacks:
// create a connection to your microservice
var someService = Cluster.discoverConnection("someService");
// call a normal meteor method from that service
var resultFromSomeService = someService.call("someMethodFromSomeService");
So, as with any piece of JavaScript code, you can wrap the above in a function, or in a class with its own constructor and so on, inherit from it, and expose its interfaces as you desire.
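For example, a rough sketch of that wrapping - the service and method names are made up, and the cluster calls are the same ones shown above:

// generic wrapper around a microservice connection
class ServiceClient {
  constructor(serviceName) {
    // same discovery call as above, just encapsulated
    this.connection = Cluster.discoverConnection(serviceName);
  }
  call(method, ...args) {
    return this.connection.call(method, ...args);
  }
}

// a more specific service "inherits" the generic behaviour and adds its own
class GuestAppClient extends ServiceClient {
  constructor() {
    super("guestApp"); // hypothetical service name
  }
  listPublicPosts() {
    return this.call("posts.listPublic"); // hypothetical method on that service
  }
}

var guestApp = new GuestAppClient();
var posts = guestApp.listPublicPosts();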

Migrating to Endeca Experience Manager

What kind of challenges are faced when migrating/moving from versions of ATG Commerce (<10) that were not using Endeca Experience Manager to versions that use it? For example, would all the JSPs undergo a change in the way they are rendered, given that the pages will now have to be template-driven?
What would be some best practices here to minimize the impact of the move on the UI and maximize the reuse of the JSPs?
I have read the migration docs, but they do not seem to cover this aspect.
As you know, ATG and Endeca only really started integrating in ATG 10.2.x, so in older versions of ATG the integration requires a lot more work from the developer. I've worked on an ATG 9.2 and Endeca 3.1.2 implementation that did exactly that. Your question should really be: how far off are you from migrating to a later version of ATG that does integrate nicely with Endeca, and how much of your current system would you want to retain after such a migration? This is important, as it determines whether you need to build a solution that mimics the ATG Assembler pipeline functionality (giving you the most control over your templates and cartridges when integrating with Experience Manager) or can take a less intrusive approach based on the InvokeAssembler droplet.
The other thing to consider is how much do you want to render through Experience Manager. Typically you would do the homepage and category pages. The product detail page would call some components from Experience Manager (for example breadcrumbs) but the data in the index isn't usually as accurate as the data in the database (for example inventory levels) so for the PDP you go directly to the repository. You are also unlikely to build your checkout flow in Experience Manager. This should give you an indication that you are likely to retain a large number of your existing pages.
Your quickest approach would be to build a droplet that will retrieve your contentItems from Experience Manager and then start to render them. Keep in mind that the content items are just glorified JSON responses so you can easily parse them when you get hold of them.
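As a rough sketch of that approach - the include path is made up, and the droplet path and parameter names should be verified against your version of the ATG/Endeca integration docs:

<%-- fetch a content item from Experience Manager inside an existing JSP --%>
<dsp:droplet name="/atg/endeca/assembler/droplet/InvokeAssembler">
  <dsp:param name="includePath" value="/pages/home"/>
  <dsp:oparam name="output">
    <dsp:getvalueof var="contentItem"
                    vartype="com.endeca.infront.assembler.ContentItem"
                    param="contentItem"/>
    <%-- contentItem is essentially a nested map (the "glorified JSON response"),
         so you can walk it directly, dispatch on ${contentItem['@type']}, or hand
         it to whatever cartridge renderers you build --%>
  </dsp:oparam>
</dsp:droplet>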

Best Practices for Self Updating Desktop Application in a network environment

I have searched through google and SO for possible answers to this question, but can only find small bits of information scattered around the place, most of which appear to be personal opinion.
I'm aware that this question could be considered subjective, but I'm not looking for personal opinion, rather facts with reasons (e.g. past experience) or even a single link to a blog/wiki which describes best practices for this (this is what I'd prefer, to be honest). What I'm not looking for is how to make this work; I know how to create a self-updating desktop application.
I want to know about the best practices for creating a self-updating desktop application. The sort of best practices I'm especially curious about are:
Do you force an update if the client's software is out of date, but not going to break when trying to communicate with other versions of the software or the database itself? If so, how do you signify this breaking change?
How often should you check for updates? Weekly/daily/hourly and exactly why?
Should the update be visible to the user or run behind the scenes from a UI point of view?
Should you even notify the user that there is an update available if it is not a major update? (for instance fixing a single button in a remote part of the application which only one user actually requires)
Should you try to patch the application or do you re-download the entire application from scratch Macintosh style?
Should you allow users to update from a central location or only allow updating through the specified application? (for closed business applications).
Surely there are some written rules/suggestions about this stuff? One of the most annoying things about a lot of applications is the updating, as it's hard to find a good balance between "out of date" and "in the user's face".
If it helps, consider this to be written in .NET C# for a single client, running on machines with constant connectivity to the update server; all of these machines talk to each other through the application, and all also talk to a central database server.
One best practice that a lot of software overlooks: ask to update when the user is closing your application, NOT when they have just launched it.
It's incredible how many apps don't do that (Firefox, for example). You've just started the app, you want to use it now, and instead it asks whether you want to update, which of course is going to take 5 minutes and require restarting the app.
This is nonsense. Just do the update at the end.
It's hard to give a general answer; it depends on the context: the criticality of the update, what kind of app it is, user preferences, the number of users, network bandwidth, etc. Here are some of the options/trade-offs.
Do you force an update if the clients software is out of date, but not going to break when trying to communicate with other version of the software or the database itself? If so how do you signify this breaking change?
As a developer, it is in your best interest to have all the apps out there be as up to date as possible; this reduces your maintenance effort. Thus, if the user does not mind, you should update.
How often should you check for updates? Weekly/daily/hourly and exactly why?
If the updates are transparent to the user and do not require an immediate restart of the app, then I'd suggest you do it as often as your communication bandwidth allows (considering both the update check - frequent but small - and the download - infrequent but large).
Should the update be visible to the user or run behind the scenes from a UI point of view?
It depends on the user's preferences, but also on the type of update: bug fixes vs. functionality/UI changes (the user will be puzzled to see that the look and feel has changed with no prior warning).
Should you even notify the user that there is an update available if it is not a major update? (for instance fixing a single button in a remote part of the application which only one user actually requires)
Same arguments as for the previous question.
Should you try to patch the application or do you re-download the entire application from scratch Macintosh style?
If the app is small, re-download it from scratch. This will prevent all sorts of weird bugs created by mismatches between the different patches ("DLL hell"). However, it may require long download times or impose a heavy toll on your network.
Should you allow users to update from a central location or only allow updating through the specified application? (for closed business applications).
I think both
From practical experience: don't forget to add functionality for updating the update engine itself, which means that performing an update is usually a two-step approach (sketched after the list below):
Check if there are updates to the update engine
Check if there are updates to the actual application
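A minimal sketch of that flow - the feed abstraction and all names here are invented, so adapt it to whatever update mechanism you actually use:

using System;

// hypothetical update feed abstraction; a real one would talk to your update server
public interface IUpdateFeed
{
    Version LatestEngineVersion { get; }
    Version LatestAppVersion { get; }
    void InstallEngineUpdate();
    void InstallAppUpdate();
}

public static class UpdateOrchestrator
{
    public static void Run(IUpdateFeed feed, Version engineVersion, Version appVersion)
    {
        // step 1: bring the update engine itself up to date first, so it understands
        // whatever package format the application update uses
        if (feed.LatestEngineVersion > engineVersion)
        {
            feed.InstallEngineUpdate();
            return; // the refreshed engine performs the application check on its next run
        }

        // step 2: only then check for updates to the actual application
        if (feed.LatestAppVersion > appVersion)
        {
            feed.InstallAppUpdate();
        }
    }
}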
Do you force an update if the client's software is out of date, but not going to break when trying to communicate with other versions of the software or the database itself? If so, how do you signify this breaking change?
A common practice is to have a "ProtocolVersion" method which indicates the lowest/oldest version allowed.
The "ProtocolVersion" can either supplied by the client or the server depending on the trust level you have between the client and the server. In a low trust level it is probably better to have the client provide the "ProtocolVersion" and then deny access server side until the client is updated. In a "high trust level" scenario it will be easier to have the server supply the "ProtocolVersion" it accepts, and then all the logic for adapting to this - including updating the client application - implemented in the client only. Giving the benefit that the version check/handling code only needs to be in one place.
Do not ever try to force an update unless your lawyers demand it. Show the user an update notification she can either accept or ignore. Try not to spam the same version too much if she rejected it. To help her make the decision, include a link to the release notes or a short summary of changes.
Weekly would be a good default update-check interval, but let the user choose it, including completely disabling update checks from the web. Do not check too often, because she might be on an expensive mobile data plan, or she may simply not like the idea of an application phoning home.
The update check part should be completely silent. If an update was found, display a notification for the user. During download and installation, show a progress bar.
To keep this simple, notify the user about any newer version. If you do not want to annoy them with frequent updates containing just a few minor bug fixes, do not publish every minor version at the download location watched by the update checker.
Maintaining patches for all previously released versions is too much work. If the download size becomes a problem, figure out some way other than patches to make it smaller (a 7-Zip-compressed self-extracting exe, splitting the application into multiple MSI packages with independent versions, etc.).
Two more things:
Do not implement the update engine as a process that is constantly running in the background even when I'm not using your application. My PC already has ~10 such processes hogging resources, which is very annoying.
When updating the update engine itself, on one hand you need the engine running to show the installation progress UI, but on the other hand the updater process must be closed to avoid the reboot that would result from its exe file being locked. There are a number of workarounds, like running a helper program from %TEMP%, using the Windows Installer restart manager, renaming the updater exe before starting the installation package, etc. Keep this in mind when architecting the update engine.
Do you force an update if the clients software is out of date, but not going to break when trying to communicate with other version of the software or the database itself? If so how do you signify this breaking change?
Ask the user.
How often should you check for updates? Weekly/daily/hourly and exactly why?
Ask the user.
Should the update be visible to the user or run behind the scenes from a UI point of view?
Ask the user.
Should you even notify the user that there is an update available if it is not a major update? (for instance fixing a single button in a remote part of the application which only one user actually requires)
Ask the user (notice a trend here?).
Should you try to patch the application or do you re-download the entire application from scratch Macintosh style?
Typically, patch, if the application is of any significant size.
As far as the "ask the user" responses go, it doesn't mean always prompt them every single time. Instead, give them the option to set what they should be prompted for and what should just be done invisibly (and the first time a given thing occurs, ask them what should be done in the future, and remember that). This shouldn't be very difficult and you gain a lot of goodwill from a larger portion of your user base, since it's very hard to have fixed settings suit the desires of everyone who uses your app. When in doubt, more options are better than less - especially when they're the kind of option that's fairly trivial to code.
