Manager IP & BlockStorage configuration on OpenStack - Cloudify

I have a question about Cloudify 2.7:
How do I (if possible, of course :) ) force Cloudify to assign a specific IP to the manager when bootstrapping a cloud?
Thanks.

The cloud drivers provided with Cloudify out of the box do not include an option to set a specific IP during bootstrapping. That said, the cloud drivers are pluggable and designed for extensibility, so it should be easy to add this functionality to the existing cloud drivers.
It is also worth mentioning that you can extend the built-in cloud drivers (or create new ones) using Groovy. This is often easier for small projects since it does not require a compile cycle: just place the Groovy file with the cloud driver implementation in the lib directory of the cloud configuration directory. See "Tweaking an existing cloud driver" here: http://getcloudify.org/guide/2.7/clouddrivers/developing_custom_clouddriver.html

Related

Is there a way to install Google Cloud Shell Theia extensions?

I want to edit files in some programming languages with the help of Theia, but the default extension list doesn't contain their language-server extensions.
It looks impossible to me, but I'm not certain.
The official documentation about Google Cloud Shell doesn't explain how its Theia-based editor service is implemented.
No, it is not currently possible to install new extensions into the Cloud Shell's Theia editor as the configuration of the Cloud Shell VM is curated by Google. However, Cloud Shell VMs are updated weekly, so please submit in-product feedback for any specific requests for the team to consider.

Google Stackdriver database agent for OracleDB?

We know that Google's Stackdriver supports monitoring for third-party applications like PostgreSQL, MySQL, CouchDB and others mentioned here. They have also defined the service configuration files for the monitoring agent here.
As I understand it, they somehow use collectd's third-party plugins for this. Since there is a collectd plugin for Oracle, Stackdriver should support that too, but I can't see Oracle in the list of supported third-party applications. So, does Stackdriver support it or not?
The Stackdriver monitoring agent package does not bundle the Oracle plugin, so it's not supported. You may be able to write a shell script (invoked via the exec plugin) or a Python script (invoked via the python plugin) to query your database, and use the custom metrics mechanism to ingest the results.
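As a concrete illustration, here is a minimal sketch of a read callback for collectd's python plugin that queries an Oracle database and dispatches the result as a gauge. The DSN, credentials, query, and plugin/type names are placeholders, and the exact conventions Stackdriver expects for custom metrics may differ, so treat it only as a starting point:

    # Sketch of a collectd python-plugin read callback (placeholders noted inline).
    import collectd
    import cx_Oracle  # assumes the Oracle client library is installed

    DSN = "dbhost:1521/ORCLPDB1"          # placeholder connection string
    USER, PASSWORD = "monitor", "secret"  # placeholder credentials

    def read_callback():
        # Query a single numeric value, e.g. the current session count.
        conn = cx_Oracle.connect(USER, PASSWORD, DSN)
        try:
            cur = conn.cursor()
            cur.execute("SELECT COUNT(*) FROM v$session")
            (sessions,) = cur.fetchone()
        finally:
            conn.close()

        # Dispatch it as a gauge; the plugin and type_instance names are illustrative.
        val = collectd.Values(plugin="oracle_custom", type="gauge",
                              type_instance="sessions")
        val.dispatch(values=[sessions])

    collectd.register_read(read_callback)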
You could also try BindPlane from our partner, Blue Medora.
Disclaimer: I'm an engineer on the Stackdriver team.

AWS-like security groups and networking in Jelastic

I'm looking for AWS-like security groups in Jelastic platform.
In AWS everything is pretty straightforward: you create a VPC, define subnets, define in/out rules, and that's it.
There are options to set public/private IPs for the boxes, get runtime information using the API or CloudFormation, and many other useful things.
Is there something like this in the Jelastic platform? I've looked through the UI but didn't find anything except endpoints, which allow me to open some node to the world.
From my perspective, a few options are possible right now:
1) A new version of Jelastic has the ability to use an isolated network per environment, but this version is not in production yet. You could wait until it becomes available in production, but I don't think that option is good for you, as the biggest waste in our life is a waste of time.
2) Write a simple JPS addon that automatically applies a custom firewall rule set to each container in your environment. Such an addon can be written once and then applied to all your environments in the future. CloudScripting allows automation at any level (including infrastructure behaviour, event subscriptions, deployments, etc.), so any topology modification can be automatically reflected in the firewall rules and applied.
3) Manual firewall configuration using this article:
Manual Firewall configuration in Jelastic
This is probably the fastest solution, but it depends. If you have, say, 5 containers, that is fine as a temporary solution until a more advanced feature becomes available. If you have 100 containers, it's easier to write an addon. There are many JPS examples available on GitHub.

Google Cloud and WordPress

I have just started playing with Google Cloud. I used to work on normal servers, so I need advice.
I created my first instance and deployed WordPress. I installed the WooCommerce plugin. The shop is quite fast and I am happy (with the lowest settings), but now:
I wanted to edit functions.php but I can't. The attributes are read-only, so how can I change it?
How do I get access to all my files? I can't see them in Cloud Storage. How do I set up FTP?
What about the database for my shop? I understand I can create a new database, but where do I access the current database of my WordPress installation?
What else should I deploy to work comfortably with my WordPress site?
About SSL:
SNI SSL certificate slots are offered for no additional charge for accounts that have billing activated. Free accounts are limited to 5 certificates.
I have no experience with SSL, but I plan to run a shop, so what does this mean? Free certificates for 5 instances or 5 deployments? How many certificates do I need to run one shop?
I know there are many questions, but I want to go further, and all the advice on the internet is outdated because it is for older versions of Google Cloud. Please help me understand all of this.
I assume you're attempting to use WordPress on Google App Engine.
GAE has no real filesystem, so you cannot write to it (unless you juggle with the API GAE offers). Editing happens locally using the GAE SDK development server and you deploy your changes to the App Engine ecosystem using the SDK interface (GUI or CLI). All application writes should go to Google Cloud Storage (which is similar to Amazon S3 and the like).
I'm not certain whether Google Cloud Storage can be accessed via traditional FTP; there might be some middleware required. You can see and browse the contents of your buckets in the developer project console (https://console.developers.google.com/).
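If a small script is acceptable instead of FTP, the Cloud Storage client libraries let you list and download your objects programmatically. A minimal Python sketch, assuming the google-cloud-storage library and application credentials are set up (the bucket and object names are placeholders):

    # List and download objects from a Cloud Storage bucket (names are placeholders).
    from google.cloud import storage

    client = storage.Client()  # uses application default credentials
    bucket_name = "my-wordpress-media"  # placeholder: replace with your bucket

    for blob in client.list_blobs(bucket_name):
        print(blob.name, blob.size)

    # Download a single object to a local file.
    client.bucket(bucket_name).blob("uploads/logo.png").download_to_filename("logo.png")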
The databases are on a separate "server" when using GAE. MySQL instances are spawned into the Google Cloud SQL ecosystem, which is available to App Engine and Compute Engine instances (and elsewhere too). You can define the Cloud SQL address and port in wp-config.php as usual. You also need to create a local MySQL database for your local installation. More: https://cloud.google.com/appengine/docs/php/cloud-sql/
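To make the connection details concrete, here is a minimal Python sketch of connecting to a Cloud SQL (MySQL) instance from App Engine over the /cloudsql unix socket; the instance connection name, database name, and credentials are placeholders, and wp-config.php would use the same socket path and credentials in its PHP constants:

    # Connect to a Cloud SQL (MySQL) instance via the App Engine unix socket.
    # The instance connection name and credentials are placeholders.
    import pymysql

    conn = pymysql.connect(
        unix_socket="/cloudsql/my-project:us-central1:wordpress-db",  # placeholder
        user="wordpress",
        password="change-me",
        db="wordpress",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT option_value FROM wp_options WHERE option_name = 'siteurl'")
        print(cur.fetchone())
    conn.close()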
When working with Google App Engine you should deploy the whole WordPress installation (wp-config.php, wp-includes/, wp-admin/, wp-content/, etc.) in order for it to work in the GAE system. For a "better" deployment system you should do some searching or ask a new question dedicated for that issue.
The certificates themselves on GAE are not free, but the "slots" you put the certificates into are. Free projects (no billing enabled) offer 5 free slots where you can put your purchased certificates. SSL SNI means that you can use multiple different domain/host certificates under a single listening IP address (which some years back was not that simple to do). What this all means is that GCP offers a way to use certificates with their services, but you still need to get the certificates themselves elsewhere.
Have you seen the GAE starter project offered by Google: https://googlecloudplatform.github.io/appengine-php-wordpress-starter-project/ ? It makes your life a bit easier when developing WP sites for Google App Engine.
If you're working with Google Compute Engine instances, then they should operate just like regular VPS machines, with some Google restrictions applied. I have not used them so I do not know the specifics.

Best way to export and import all Apigee Edge objects related to an org?

Are there scripts for exporting and importing all Apigee Edge objects, such as developers, users, apps, caches, key value maps, etc?
To clarify, it would be nice to have non-runtime objects as a priority vs. the runtime data contained within. E.g., the current contents of caches are not as critical as just having the cache objects available.
I have released a tool that can be used to retrieve Apigee organization settings. This tool has been in use internally at Apigee for some time, but this is the first time it has been released to the public. It uses the Apigee management API to pull configuration data, and the data to be pulled is configurable. The data is stored in a hierarchical directory structure, which can be archived, explored, or used to compare organizations. It can be used with both the Apigee Edge cloud and on-prem offerings.
A few caveats:
This tool does not retrieve all data from an organization. For example, it does not retrieve API proxies. Use the Apigee management UI or management API to retrieve API proxies.
The tool is composed of a few bash scripts. It has been successfully run on Linux and Mac OS X.
The tool does not write data back into the organization, although the files it retrieves can often be POSTed back to the organization using the management API.
This tool is released as-is. It is not officially supported by Apigee.
Find the tool at the api-platform samples site (https://github.com/apigee/api-platform-samples) in the tools/org-snapshot directory.
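For a rough sense of what such a snapshot involves, here is a minimal Python sketch (not the org-snapshot tool itself) that pulls a few configuration collections through the Edge management API and writes them to a directory tree; the organization name and credentials are placeholders:

    # Pull a few Apigee Edge configuration collections via the management API
    # and store them as JSON files. Organization name and credentials are placeholders.
    import json
    import os
    import requests

    BASE = "https://api.enterprise.apigee.com/v1/organizations/myorg"  # placeholder org
    AUTH = ("admin@example.com", "password")                           # placeholder creds

    os.makedirs("snapshot", exist_ok=True)
    for collection in ("developers", "apiproducts", "keyvaluemaps"):
        resp = requests.get(f"{BASE}/{collection}", auth=AUTH)
        resp.raise_for_status()
        with open(f"snapshot/{collection}.json", "w") as fh:
            json.dump(resp.json(), fh, indent=2)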
There is work planned to provide a tool that will export/import provisioning data (such as apps, developers, products). Other aspects of an org's configuration require access to the production Cassandra database, which cannot be given out publicly. We have a provisional tool for in-house use that we are currently hardening. If the consumer tool (when it is available) doesn't provide all of the backup support you need, you will need to log a support ticket for them to run the in-house tool.
There are scripts for importing a set of objects (developers, apps, API products) that work with the sample proxies that you can find on GitHub:
https://github.com/apigee/api-platform-samples/tree/master/setup
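As a simple illustration of that import path, the following Python sketch registers a developer through the Edge management API; the organization name, credentials, and developer details are placeholders:

    # Create a developer in an Apigee Edge organization via the management API.
    # Organization name, credentials, and developer details are placeholders.
    import requests

    BASE = "https://api.enterprise.apigee.com/v1/organizations/myorg"  # placeholder
    AUTH = ("admin@example.com", "password")                           # placeholder

    developer = {
        "email": "jane@example.com",
        "firstName": "Jane",
        "lastName": "Doe",
        "userName": "jane",
    }
    resp = requests.post(f"{BASE}/developers", json=developer, auth=AUTH)
    resp.raise_for_status()
    print(resp.json().get("developerId"))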
For Perl programmers: see also Apigee::Edge on CPAN
