I'm looking for AWS-like security groups in the Jelastic platform.
In AWS everything is pretty straightforward: you create a VPC, define subnets, define inbound/outbound rules, and that's it.
There are options to set public/private IPs for the boxes, get runtime information via the API or CloudFormation, and many other useful things.
Is there something like this in the Jelastic platform? I've looked through the UI but didn't find anything except endpoints, which let me open some node to the world.
From my perspective, a few options are possible now:
1) The new version of Jelastic can use an isolated network per environment, but that version is not in production yet. You could wait until it becomes available, but I don't think that option is good for you, as the biggest waste in our lives is a waste of time.
2) Write a simple JPS add-on that automatically applies a custom firewall rule set to each container in your environment. Such an add-on can be written once and then applied to all your environments in the future. CloudScripting enables automation at any level (including infrastructure behaviour, event subscriptions, deployments, etc.), so any topology modification can be automatically reflected in the firewall rules and applied.
3) Manual firewall configuration, using this article:
Manual Firewall configuration in Jelastic
This is probably the fastest solution, but it depends. If you have, say, 5 containers, that is fine as a temporary solution until the more advanced feature becomes available. If you have 100 containers, it's easier to write an add-on. There are many JPS examples available on GitHub. A sketch of the kind of rule set you would apply per container follows below.
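To make options 2 and 3 concrete, here is a minimal sketch of the kind of default-deny rule set such an add-on or manual setup would apply inside each container. The port list and trusted subnet are assumptions for illustration only; adapt them to your actual topology (and note iptables needs root):

```python
# Minimal sketch: apply a default-deny inbound rule set inside a container.
# The allowed ports and trusted subnet are hypothetical -- adjust to your topology.
import subprocess

TRUSTED_SUBNET = "10.0.0.0/8"   # e.g. other environment nodes (assumption)
ALLOWED_PORTS = [80, 443]       # public-facing services (assumption)

def run(*args):
    subprocess.run(["iptables", *args], check=True)

def apply_rules():
    run("-F", "INPUT")                                   # start from a clean chain
    run("-A", "INPUT", "-i", "lo", "-j", "ACCEPT")       # always allow loopback
    run("-A", "INPUT", "-m", "state", "--state",
        "ESTABLISHED,RELATED", "-j", "ACCEPT")           # keep existing sessions alive
    run("-A", "INPUT", "-s", TRUSTED_SUBNET, "-j", "ACCEPT")
    for port in ALLOWED_PORTS:
        run("-A", "INPUT", "-p", "tcp", "--dport", str(port), "-j", "ACCEPT")
    run("-P", "INPUT", "DROP")                           # default-deny everything else

if __name__ == "__main__":
    apply_rules()
```

Wrapped in a JPS add-on, the same commands would simply be executed on each node whenever the topology changes.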
I've heard there are many things that need to be considered before deploying a Meteor app, but it's still quite vague to me. Could anyone please give me some opinions on this issue? Thanks.
This is probably the wrong forum for this type of question, since it's highly opinionated and not really in the Q&A format, but I'll give you some of my personal opinions.
Where am I hosting my app?
This has a lot to do with what your app is (Web-based app, Android-only), how many users you plan to have, if this is a public app or something private, how much you have to spend, and many other factors. Options include:
Host it yourself - Could be on a VPS (Virtual Private Server, like Digital Ocean and others), some cloud offering (AWS or similar), or a bare-metal server you have hosted somewhere (like in your closet).
Pay for dedicated hosting - There are several providers out there offering different features, like Galaxy or Modulus, etc.
If you host it yourself, then you have to maintain the hosting solution, i.e. you need to support it all on your own. This may mean provisioning/installing the OS, installing and configuring the server apps (MongoDB, Node.js, web servers, etc.), and maintaining everything over time. The benefits, however, are potentially cheaper hosting costs (although this could be debated) and custom setups/architectures. If you are creating an app that should remain private (i.e. not a public app), you may want to consider this option so you can host it internal to the company and not make it public-facing. There are some tools out there that can ease the process of setting up the server, for instance MUP/MUPX.
For dedicated Meteor hosting, the benefit is that someone else does the installation, configuration, and maintenance of the core apps, and all you have to do is push the button to move your code in. These options can be pricier, as they cover the IT costs of supporting the environment, but they usually come with the benefits that a) you don't have to install all that stuff yourself, b) you don't have to be an expert at all the "plumbing", c) you don't have to hire a staff of people to support your infrastructure, and d) these hosting services usually know how to optimize things properly to get better performance for your app.
Do I deploy manually or do I use some tool?
This depends highly on the answer to #1, as your hosting decision may come with its own set of deployment tools (e.g. Galaxy), or you may need to shop around for the best tool for you. For manual deploys, I'd suggest looking at MUP/MUPX, which can automate the deployment of your app, even configuring the web server and DB server and setting everything up as Docker images. Or, if you want more control, take a look at something like Grunt or Gulp, which are more build-scripting solutions (similar to what Ant/Maven/Gradle are for Java).
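For a sense of how light the MUP workflow is, here is a sketch of a build-server wrapper around it. The `mup setup`/`mup deploy` commands are real MUP CLI commands, but the project path is hypothetical, and this assumes you have already run `mup init` and filled in mup.js:

```python
# Sketch of a deploy wrapper around the MUP CLI.
# Assumes mup.js is already configured in the app directory (path is hypothetical).
import subprocess

APP_DIR = "/path/to/my-meteor-app"  # assumption: your app checkout

def deploy():
    # 'mup setup' provisions the server (Docker, MongoDB, etc.) per mup.js;
    # 'mup deploy' builds the Meteor app and ships it to the server.
    subprocess.run(["mup", "setup"], cwd=APP_DIR, check=True)
    subprocess.run(["mup", "deploy"], cwd=APP_DIR, check=True)

if __name__ == "__main__":
    deploy()
```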
Do I expect a fast growth, or a slow trickle?
Again, this has a lot to do with where you plan to host. Lots of cloud services make it easy to grow/shrink your cluster of servers based on load, but this takes a LOT of configuration past just installing an OS. VPS and bare-metal solutions will be the hardest to expand. Dedicated hosting will depend on the provider.
You seriously need to think about how you might handle a fast-growth situation, even if you don't think your app will take off. The internet is riddled with apps that failed because they didn't think they would be as successful as they were. It only takes one mention on something like Reddit, Y Combinator, or Product Hunt for your app to get a sudden and unexpected rush of traffic that takes down the server(s). If you know your growth will be controlled in some way, like if you have a private app with a pre-set number of users, then you might not need to worry about it.
Do I need to monitor my app?
The answer to this is always "yes", but to what extent? Do you need to provide 24x7 uptime? Would an outage cost you lots of money, or lose you your biggest client? Can the users live without the service for a little while if it goes down, or would you lose face and customers? Depending on how serious you need to be, you should consider some sort of monitoring for your infrastructure and app. Again, there are LOTS of options here, and your decision might be swayed quite a bit by the answer to the first question.
I am sure there are other questions, but these are the biggest I could come up with.
I am wondering if it is feasible to deploy WordPress as a series of Lambda functions behind AWS API Gateway. Any pointers on the feasibility/gotchas would be greatly appreciated!
Thanks in advance,
PKK
You'll have a lot of things to consider with persistence, and even before that, Lambda doesn't support PHP natively. I'd probably look at Microsoft Azure Functions instead, which does support PHP and does have persistent storage.
While other languages (such as Go, Rust, Swift, etc.) can be "wrapped" to run in AWS Lambda with relative ease, compiling PHP to target the same platform and running it is a bit different (and certainly more painstaking). Think about all the various PHP modules you'd need, for starters. Moreover, I can't imagine performance would be as good as something like a Go binary.
If you can do something clever with the Phalcon framework and come up with an easy build and deploy process, then maayyyybee.
Though, you'd probably need to really overhaul something like WordPress which was not designed for this at all. It still uses some pretty old conventions due to the age of the project and while that is all well and good for your typical PHP server, it's a different ball game in the sense of this "portable" PHP installation.
Keep in mind that PHP sessions are relied upon as well and so you're going to need to move those elsewhere due to the lack of persistence with AWS Lambda. You can probably find some sort of plugin for WordPress that works with Redis?? I have to imagine something like that has been built by now... But there will be many complications.
I would seriously consider using Azure Functions to begin with, OR using Docker and forgoing the pricing model that cloud functions offer. You can still find some pretty cheap and scalable hosting out there.
What I've done previously was use AWS ECS (Docker) with EFS (network storage) for persistence and RDS for the database. While this doesn't carry the same pricing model as Lambda, it is still cost efficient. You can set up your ECS Service to autoscale up and down. So that way you're running the bare minimum until you need more.
I've written a more in-depth article about it here: https://serifandsemaphore.io/how-to-host-wordpress-like-a-boss-b5993fcfbd8e#.n6fbnf8ii ... but it's basically just the idea of running WordPress in Docker and using EFS to offload the persistent storage issues. You can swap many of the pieces of the puzzle out if you like: use a database hosted in some other Docker service, or Compose, or wherever; that part need not be RDS, for example. Even your storage could be handled in a different way, though EFS worked pretty well! The only major thing to note about EFS is the write speed; most WordPress sites are read-heavy, though. Your mileage will vary depending on your needs.
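As a rough illustration of the "scale up and down" part of that setup, here is a hedged boto3 sketch that makes an existing ECS service track CPU load. The cluster and service names are placeholders, and the task definition, EFS mounts, and RDS wiring are assumed to already exist:

```python
# Sketch: make an existing ECS service autoscale between 1 and 4 tasks.
# Cluster/service names are hypothetical; IAM permissions are assumed.
import boto3

autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/wordpress-cluster/wordpress-service",  # assumption
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,   # bare minimum while idle
    MaxCapacity=4,   # ceiling under load
)

autoscaling.put_scaling_policy(
    PolicyName="wordpress-cpu-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/wordpress-cluster/wordpress-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # aim for ~60% average CPU across tasks
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```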
Is it possible? Yes, anything is possible with enough time and effort. Is it worth it? That is a question best to ask yourself.
PHP can be run on Lambda, as per the documentation located here: https://aws.amazon.com/blogs/compute/scripting-languages-for-aws-lambda-running-php-ruby-and-go/.
The bigger initial problem, as stated in other comments, is a persistent file system. S3 for media storage is doable via a WordPress plugin (again, from the comments), but persistent storage for request/script execution is the biggest initial hurdle. Tackle one problem at a time till you get to the end!
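The blog post linked above wraps a statically compiled PHP binary in a Node.js shim; the same trick sketched as a Python handler (the `./php` binary and `handler.php` script names are assumptions, and the binary must be bundled in your deployment package) looks roughly like:

```python
# Sketch: a Lambda handler that shells out to a bundled, statically
# compiled PHP binary. './php' and 'handler.php' are hypothetical names;
# the linked blog post uses a Node.js wrapper for the same idea.
import json
import subprocess

def lambda_handler(event, context):
    proc = subprocess.run(
        ["./php", "handler.php", json.dumps(event)],  # pass the event as argv
        capture_output=True,
        text=True,
        check=True,
    )
    # Whatever the PHP script prints becomes the function's response body.
    return {"statusCode": 200, "body": proc.stdout}
```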
We are migrating from WebSphere BPM 8.0.1.3 to 8.5.6, and our plan is to move application by application rather than in a big bang. The idea is that when we move an application to the new server, we create an IHS rule which redirects the related URLs to the new server. That would mean we keep some applications running on the old server while some are already migrated to the new one.
Is this possible to achieve? Or is there any alternative to rewriting IHS rules, like making use of the web server plug-ins?
Unfortunately, I don't think that your current approach is going to work well for you. I've outlined the various options for IBM BPM upgrades here. I see several major problems with your approach, all of which come down to the fact that many of the URLs used by IBM BPM contain no details about the context for the request.
The first issue I see is that IBM uses a portal for a given user's work. That is, all their tasks across the various BPM solutions appear in the same web UI. This URL does not differ across the Process Applications in the install, which means all your users get their task list by going to a URL like https://mybpmserver/portal. There isn't a way to tell which process app a given user may be working with in this context, so you don't know whom to redirect to the new server.
The second issue is that users are able to work with multiple process apps, so even if the context were known in the above URL, you would run into complexities when users work in two different process apps unless both have been migrated.
The third issue is that BPM is essentially a state engine. IBM does not supply a way to "migrate" that state from an old install to a new install on a per Process App (PA) basis; you have to migrate all or none. Assuming "none", because it feels like you want to follow the drain approach in my article, you then have the problem that the URLs for executing a task do not carry the PA context, and therefore you won't know which server to direct which task to. That is, for a given PA you will have tasks on both the old server (those that existed before the upgrade) and the new server (those created after the upgrade), but the URLs for these tasks will look essentially the same.
There are additional issues, but the main one comes down to properly understanding how the runtime BPM engines work. Some of the above issues may be mitigated if you have a separate UI layer for presenting tasks to the users (my company makes a portal replacement that can do this), which would permit it to understand the context of the tasks; but if you have this, then you can get the correct behavior in that code and not worry about WAS configuration settings.
You could use the plugin-cfg.xml merge tool on the two generated plugin-cfg.xml files. That way the WAS plug-in would always know which server has which applications.
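If it helps, that merge step can be scripted into your deploy. A rough sketch follows; the tool path and file locations are assumptions (the merge tool ships with the WAS Plugins install, but verify its exact name and location on your system):

```python
# Sketch: merge the plugin-cfg.xml generated by each cell so the IHS
# plug-in routes every application to whichever cell hosts it.
# Paths are assumptions -- verify the tool name/location in your install.
import subprocess

MERGE_TOOL = "/opt/IBM/WebSphere/Plugins/bin/pluginCfgMerge.sh"  # assumption

subprocess.run(
    [MERGE_TOOL,
     "/tmp/plugin-cfg-old-cell.xml",   # generated on the 8.0.1.3 cell
     "/tmp/plugin-cfg-new-cell.xml",   # generated on the 8.5.6 cell
     "/opt/IBM/HTTPServer/Plugins/config/webserver1/plugin-cfg.xml"],
    check=True,
)
```

You would regenerate and re-merge after each application moves to the new cell.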
In Azure Websites we have a staging feature: we can deploy to a staging site, test it, fill all the caches, and then swap production with staging.
Now, how could I do this on a normal Windows server with IIS?
Possible Solution
One strategy I was thinking about is having a script which copies content from one folder to another.
But there can be file locks. Also, as this is not transactional, the websites are in a kind of invalid state for some time.
First problem: I have an external load balancer, but it is externally hosted and unfortunately currently not able to handle this scenario.
Second problem: As I want my scripts to always deploy to staging, I want a fixed name in IIS for the staging site, which I use in the build server scripts. So I would also have to rename the sites.
Third problem: The sites are synced between multiple servers for load balancing. If I rebuilt the bindings on a site (to have a consistent staging server), I could get timing issues, because not all servers would be set to the same folder at the same moment.
Are there any extensions / best practices on how to do that?
You have multiple servers, so you are running a distributed system. It is impossible in principle to have an atomic release of the latest code version: even if you made the load balancer atomically direct traffic to the new sites, some old requests would still be in flight. You need to be able to run both code versions at the same time for a short period. This capability is a requirement for your application; it is also handy for rolling back bad versions.
Given that requirement you can implement it like this:
Create a staging site in IIS.
Warm it up.
Swap bindings and site names on all servers. This does not need to be atomic because, as explained above, true atomicity is impossible anyway. A sketch of the swap follows below.
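Here is a minimal sketch of step 3, run on each server in turn. The `appcmd set site /site.name:... /bindings:...` syntax is documented, but the site names and bindings below are placeholders; treat this as an outline, not a tested script:

```python
# Sketch: swap the bindings of a 'Production' and 'Staging' site via appcmd.
# Site names and bindings are placeholders; run on each server in turn.
import subprocess

APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

def set_bindings(site, bindings):
    subprocess.run(
        [APPCMD, "set", "site", f"/site.name:{site}", f"/bindings:{bindings}"],
        check=True,
    )

# Point production traffic at the freshly warmed-up staging site,
# and park the old production site on the staging binding.
set_bindings("Staging", "http/*:80:www.example.com")          # goes live
set_bindings("Production", "http/*:8080:staging.example.com") # parked
```

Renaming the sites afterwards (so your build scripts always target a fixed "Staging" name) can be done the same way, via appcmd or the IIS PowerShell provider, depending on your IIS version.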
As explained via Skype, you might like to have a look at "reverse proxy IIS". The following article looks very promising:
http://weblogs.asp.net/owscott/creating-a-reverse-proxy-with-url-rewrite-for-iis
This way you can set up a public-facing "frontend" website which can easily be switched between two (or more) private/protected sites, even if they reside on the same machine. Furthermore, this would also allow you to have two public-facing URLs that are simply swapped depending on your requirements and deployment.
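To illustrate the switch itself, here is a rough sketch that retargets the URL Rewrite reverse-proxy rule in the frontend's web.config. It assumes a rule like the one created in the linked article; the rule name, paths, and backend URL are assumptions:

```python
# Sketch: flip which backend the frontend's reverse-proxy rule points to.
# Assumes a web.config containing a URL Rewrite rule as in the linked
# article; the rule name, path, and backend URL below are hypothetical.
import xml.etree.ElementTree as ET

WEB_CONFIG = r"C:\inetpub\frontend\web.config"   # assumption
NEW_BACKEND = "http://localhost:8081/{R:1}"      # the freshly staged site

tree = ET.parse(WEB_CONFIG)
for rule in tree.iter("rule"):
    if rule.get("name") == "ReverseProxyInboundRule1":   # assumed rule name
        rule.find("action").set("url", NEW_BACKEND)      # retarget the proxy
tree.write(WEB_CONFIG)
```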
Just an idea... I haven't tested it in this scenario, but I'm running a reverse proxy on Apache publicly and serving a private IIS website through a VPN as content.
I am about to write a tender. The solution might be a PHP-based CMS. Later I might want to integrate an ASP.NET framework and make it look like one site.
What features would make this relatively easy?
Would OpenID and similar make a difference?
In the PHP world, Joomla is supposed to be more integrative than Drupal. What are the important differences here?
Are there specific frameworks in ASP.NET, Python, or Ruby that are more open to integration than others?
The most important thing is going to be putting as much of the look-and-feel in a format that can be shared by any platforms. That means you should develop a standard set of CSS files and (X)HTML files which can be imported (or directly presented) in any of those platform options. Think about it as writing a dynamic library that can be loaded by different programs.
Using OpenID for authentication, if all of your platform options support it, would be nice, but remember that each platform is going to require additional user metadata be stored for each user (preferences, last login, permissions/roles, etc) which you'll still have to wrangle between them. OpenID only solves the authentication problem, not the authorization or preferences problems.
Lastly, since there are so many options, I would stick to cross-platform solutions. That will leave you the most options going forward. There's no compelling advantage IMHO to using ASP.NET if there's a chance you may one day integrate with other systems or move to another system.
I think the most important thing is to choose the right server. The server needs to have adequate modules. Apache would be a good choice, as it supports all that you want, including mod_aspnet (which I didn't test, but many people say it works).
If you think ASP.NET integration is certainly going to come, I would choose Windows as the OS, as it will certainly be easier.
You could also install a reverse proxy that decides which server renders content based on the request: if the user requests an .aspx page, the proxy connects to IIS; if they ask for PHP, it connects to the other server. The problem with this approach is shared memory and state, which could be solved with careful design, like a shared database holding all state information and model data. A toy sketch of such extension-based routing follows below.
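The routing idea, as a toy illustration only (backend hosts and ports are made up, and a real deployment would express this as Apache mod_proxy or nginx rules rather than hand-rolled code):

```python
# Toy sketch: route requests to IIS or the PHP server by file extension.
# Backend addresses are hypothetical; real setups would use Apache/nginx
# proxy rules instead of hand-rolled code like this.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

IIS_BACKEND = "http://127.0.0.1:8080"   # serves .aspx pages (assumption)
PHP_BACKEND = "http://127.0.0.1:8081"   # serves everything else (assumption)

class Router(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pick the backend by extension, then relay its response verbatim.
        backend = IIS_BACKEND if self.path.endswith(".aspx") else PHP_BACKEND
        with urlopen(backend + self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), Router).serve_forever()
```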
OpenID doesn't make a difference - there are libs for any framework you choose.