Cloudify Agent installation - cloudify

I was trying to get hands-on with Cloudify deployments and recently learnt about Cloudify agents, which are required to perform VM configuration.
I was reviewing the following plugin:
https://github.com/cloudify-cosmo/cloudify-cloudstack-plugin/blob/master/plugin.yaml
and am particularly trying to understand the agent installation method used there.
From what I understand so far, any plugin used in the blueprint or in imported .yaml files should be imported or defined.
The above plugin.yaml file includes the node below:
cloudify.cloudstack.nodes.WindowsServer:
  derived_from: cloudify.cloudstack.nodes.VirtualMachine
  interfaces:
    cloudify.interfaces.worker_installer:
      install:
        implementation: agent.windows_agent_installer.tasks.install
        inputs: {}
      start:
        implementation: agent.windows_agent_installer.tasks.start
      stop:
        implementation: agent.windows_agent_installer.tasks.stop
        inputs: {}
      uninstall:
        implementation: agent.windows_agent_installer.tasks.uninstall
        inputs: {}
      restart:
        implementation: agent.windows_agent_installer.tasks.restart
        inputs: {}
    cloudify.interfaces.plugin_installer:
      install:
        implementation: agent.windows_plugin_installer.tasks.install
        inputs: {}
I want to understand how the agent plugin is being used here, as in
implementation: agent.windows_agent_installer.tasks.start
when there is no trace of that plugin being imported anywhere in the yaml file.
Any thoughts are welcome.
Thanks

I think you are confusing the terms.
A plugin is an extension of the Cloudify orchestrator.
An agent is a service running on a VM created by Cloudify, used to run tasks on that VM.
If you want to use the CloudStack plugin, you should import it at the beginning of your blueprint, as in:
imports:
- https://github.com/cloudify-cosmo/cloudify-cloudstack-plugin/blob/master/plugin.yaml
You didn't mention the Cloudify version you are using, but if you are using the latest version (4.6) or any version > 4.2, you should upload the plugin to the manager before using it, and then import it via:
imports:
- plugin:cloudify-cloudstack-plugin
Agent installation can be done in several ways; you can follow the documentation here and choose the method that suits you best.
The default method is remote, which is done via SSH or WinRM.
You can look at this example for agent installation on Windows.
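For reference, in recent Cloudify versions the agent connection details are usually supplied through the node's agent_config property. Below is a minimal sketch of remote (WinRM) agent installation; the node name, user, and secret name are illustrative assumptions, not taken from the question:

```yaml
node_templates:
  windows_host:
    type: cloudify.cloudstack.nodes.WindowsServer
    properties:
      agent_config:
        install_method: remote    # manager connects to the VM to install the agent
        user: Administrator       # illustrative admin account
        password: { get_secret: windows_password }  # assumes this secret exists on the manager
```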

Related

How to modify a Jelastic installation when wrapping a jps manifest in my own manifest?

The Jelastic Marketplace is full of interesting software. However, sometimes it does not comply with my security needs. In those cases, I would like to write my own manifest that installs the manifest from the marketplace and adds the components that I need for my use case. Let's take an example: I would like to wrap the Kubernetes installation with the addition of a load balancer. I would like to do something like this:
type: install
name: My Example Manifest
onInstall:
  - install:
      jps: https://github.com/jelastic-jps/kubernetes/blob/1.23.6/manifest.jps
      envName: env-${fn.random}
      settings:
        deploy: cmd
        cmd: echo "do nothing"
        topo: 0-dev
        dashboard: general
        ingress-controller: Nginx
        storage: true
        api: true
        monitoring: true
        version: 1.23.6
        jaeger: false
  - addNodes:
      - nodeType: nginx-dockerized
        nodeGroup: bl
        count: 1
        fixedCloudlets: 1
        flexibleCloudlets: 4
The issue I am having here is that the manifest cannot add the nodes, because of the following error:
user [xyz] doesn't have any access rights to app [dashboard]
What am I doing wrong? How can I make this manifest work? I tried to set user: root in the addNodes function but it doesn't help.
Of course, I am interested in suggestions involving one single install manifest. I know I could make it happen by first installing the kubernetes manifest and then running an update manifest that would add my load-balancer nodes. I would like, however, to package the whole thing within one single step, as described by my manifest above.

How can I separate my Flutter Firebase/Firestore implementation and providers into an independent package or library?

Currently, I am developing a Flutter web app using Provider and Firebase. I would like to develop other apps (for example, an admin console) with the same database and data providers.
My app currently has the following folders:
my_app
- models
- providers
- services (firebase/firestore crud)
- pages
- utils
- widgets
I would like to have models, providers, and services as a local package or library. Something like:
my_lib
- models
- providers
- services
my_app
- pages
- utils
- widgets
my_other_app
- pages
- utils
- widgets
First, I don't know how to create a local lib and make it a dependency of my apps.
Second, since I will be using Firebase and Firestore in my_lib, I don't know how the instances initialized in main are used by the package. Is it enough to initialize the global variables in my_lib?
In your project folder you can create a folder packages and then create your package using flutter create --template=package my_package.
Your app can then depend on it in pubspec.yaml:
dependencies:
  my_package:
    path: ../packages/my_package
In this case you will have:
- my_project
  - lib
  - packages
    - my_package
This works well when it's just your app depending on the packages.
If you need to access your packages from different projects, you can use GitHub/GitLab to host your packages.
Say you have two repos that need to share a private package, my_app and my_admin. You can create a new repo my_packages and keep your packages in it, i.e.:
- my_packages
- package1
- package2
In your apps you can now depend on the packages using git, e.g.:
package1:
  git:
    url: git@github.com:username/my_packages.git
    ref: main        # branch
    path: ./package1 # package directory within the repo
package2:
  git:
    url: git@github.com:username/my_packages.git
    ref: main        # branch
    path: ./package2 # package directory within the repo
For GitHub you will need an SSH key set up, since the URL uses the SSH form.

Symfony 5 custom reusable bundle installation using flex - how to test and run a private recipe server

I have created a test reusable Symfony 5 bundle, which is private, and have written a Flex recipe to automatically install and configure it within any project.
My problem is, I have no idea how to run and test this. I cannot find any clear complete instructions anywhere. The official documentation does not specify how this would be done and only specifies how to create the manifest.json file.
https://github.com/symfony/recipes
I found the following info which specifies uploading the recipe to a private repository on GitHub and then activating Symfony Recipe Server for the repository which I have done.
https://blog.mayflower.de/6851-symfony-4-flex-private-recipes.html
but then what?
If I understood this correctly, you want to add a custom domain from which the recipe will be downloaded and installed. Check this project:
Github https://github.com/moay/server-for-symfony-flex
Docs https://server-for-symfony-flex.readthedocs.io/en/latest/
Eventually you get to the point where you add a custom endpoint from which to download the recipe, like this:
Using the server in your Symfony project is as easy
as adding the proper endpoint to your composer.json:
{
    ...
    "symfony": {
        "endpoint": "https://your.domain.com"
    }
}
I apologize if this is not in the desired format of an answer.
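For context, the "symfony" block shown above normally sits under the top-level "extra" key of composer.json; a fuller sketch (the domain is a placeholder, and allow-contrib is an optional flag you may already have):

```json
{
    "extra": {
        "symfony": {
            "allow-contrib": true,
            "endpoint": "https://your.domain.com"
        }
    }
}
```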

Azure Pipeline Error: vstest.console process failed to connect to testhost process

I am converting my Azure pipeline to a YAML pipeline. When I trigger the build, it fails on the unit test step with the error below:
[error]vstest.console process failed to connect to testhost process after 90 seconds. This may occur due to machine slowness, please set environment variable VSTEST_CONNECTION_TIMEOUT to increase timeout.
I could not find a way to set the VSTEST_CONNECTION_TIMEOUT value anywhere. Could you please help me with this?
Here is the sample .yml I am using:
- task: VSTest@2
  displayName: 'Test'
  inputs:
    testAssemblyVer2: '**\bin\**\Tests.dll'
    testFiltercriteria: 'TestCategory=Unit'
    runSettingsFile: XYZ.Tests/codecoverage.runsettings
    codeCoverageEnabled: true
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'
    diagnosticsEnabled: true
I would recommend using the DotNetCoreCLI task instead. It is shorter, clearer and more straightforward (it has the "same" effect as executing dotnet test in your console):
- task: DotNetCoreCLI@2
  displayName: 'Run tests'
  inputs:
    command: 'test'
Even the Microsoft documentation pages use the DotNetCoreCLI task.
If the VSTest task runs successfully in your classic pipeline, it should work in the YAML pipeline too. Check the agent pool selection and the task's settings to make sure they are the same in both the YAML and classic pipelines.
1. Your unit tests seem to run on VS2017 in the YAML pipeline. You can try running the pipeline on a windows-latest agent to run the tests on VS2019.
If your pipeline has to run on a specific agent, you can use the VisualStudioTestPlatformInstaller task to download the latest version, then set vsTestVersion: toolsInstaller for the VSTest task. See below:
- task: VisualStudioTestPlatformInstaller@1
- task: VSTest@2
  displayName: 'Test'
  inputs:
    testAssemblyVer2: '**\bin\**\Tests.dll'
    ...
    vsTestVersion: toolsInstaller
2. You can also check out the solution in this thread: as mentioned there, delete the entire solution folder and re-clone the project. If you are running your pipeline on a self-hosted agent, you can use checkout in the YAML pipeline to clean the source folder before cloning your repo. See below:
steps:
- checkout: self
  clean: true
You can also try adding the lines below to your codecoverage.runsettings file under the <CodeCoverage> element to exclude the Microsoft assemblies, as mentioned in the thread:
<ModulePath>.*microsoft\.codeanalysis\.csharp\.dll$</ModulePath>
<ModulePath>.*microsoft\.codeanalysis\.csharp\.workspaces\.dll$</ModulePath>
<ModulePath>.*microsoft\.codeanalysis\.dll$</ModulePath>
<ModulePath>.*microsoft\.codeanalysis\.workspaces\.dll$</ModulePath>
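For orientation, those <ModulePath> exclusions normally live inside <ModulePaths><Exclude> under the code-coverage data collector; a minimal structural sketch of codecoverage.runsettings, assuming the standard Code Coverage collector layout (only one exclusion shown):

```xml
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage">
        <Configuration>
          <CodeCoverage>
            <ModulePaths>
              <Exclude>
                <ModulePath>.*microsoft\.codeanalysis\.dll$</ModulePath>
              </Exclude>
            </ModulePaths>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
```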
3. You can also try updating Microsoft.NET.Test.Sdk to the latest version in your test projects.
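As for the VSTEST_CONNECTION_TIMEOUT variable mentioned in the original error message: Azure Pipelines tasks accept an env mapping, so one way to raise the timeout is the sketch below (the 300-second value is an arbitrary example):

```yaml
- task: VSTest@2
  displayName: 'Test'
  inputs:
    testAssemblyVer2: '**\bin\**\Tests.dll'
  env:
    VSTEST_CONNECTION_TIMEOUT: '300'  # seconds; the default is 90
```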

Kong refuses to recognise custom plugin as enabled

I was developing a custom plugin for Kong.
To start off, I followed the guidelines listed in this tutorial:
http://streamdata.io/blog/developing-an-helloworld-kong-plugin/
One change I made along the way was switching the dependency on "lrexlib-pcre" in the rockspec file from version 2.8.0-1 to 2.7.2-1, due to compilation problems I faced with the 2.8.0-1 version.
Please note that I am working on the next branch. The master branch has version 2.7.2-1 listed.
The tutorial assumes Kong version 0.4.2-1, while I am working with Kong version 0.5.2-1.
I have listed my plugin in kong.yml; helloworld is listed last:
plugins_available:
- ssl
- jwt
- acl
- cors
- oauth2
- tcp-log
- udp-log
- file-log
- http-log
- key-auth
- hmac-auth
- basic-auth
- ip-restriction
- mashape-analytics
- request-transformer
- response-transformer
- request-size-limiting
- rate-limiting
- response-ratelimiting
- helloworld
I have listed the helloworld files at the end of the rockspec file:
["kong.plugins.helloworld.handler"] = "kong/plugins/helloworld/handler.lua",
["kong.plugins.helloworld.access"] = "kong/plugins/helloworld/access.lua",
["kong.plugins.helloworld.schema"] = "kong/plugins/helloworld/schema.lua"
Compilation is successful, but Kong refuses to list the helloworld plugin as available on the node. All other built-in plugins are shown as available on the server.
I tried enabling the plugin anyway with a mock API. It doesn't work as expected, and trying to restart Kong logs this error:
nginx: [error] [lua] init_by_lua:5: Startup error:
/usr/local/share/lua/5.1/kong.lua:82: You are using a plugin that has not been enabled in the configuration: helloworld
[INFO] dnsmasq stopped
[ERR] Could not start Kong
I know there were some breaking changes introduced in Kong version 0.5. I followed the changelog, but found nothing that would help.
Am I missing a setting or configuration somewhere?
Any help would be appreciated.
Try the following in your kong.yml:
custom_plugins:
- helloworld
I fixed this issue by adding entries to custom_plugins and lua_package_path.
Here are the steps to enable and use a custom plugin in a Kong environment:
1. Add the custom plugin name to the configuration: custom_plugins = hello-world
2. Install the hello-world plugin as follows:
If you have the source code of your plugin, move into it and execute "luarocks make"; this installs your plugin.
Then execute "make install-dev" (make sure your plugin has a Makefile).
Once "make install-dev" completes, it creates the Lua files at a location something like:
/your-plugin-path/lua_modules/share/lua/5.1/kong/plugins/your-plugin-name/?.lua
Copy this path and add it to lua_package_path in the Kong configuration file, something like:
lua_package_path=/your-plugin-path/lua_modules/share/lua/5.1/kong/plugins/your-plugin-name/?.lua
That's it. Start Kong with kong start --vv and you will see the plugin loaded into the Kong plugin environment.
Enjoy!
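Putting the two settings together, the relevant lines of a kong.conf would look something like the sketch below. The path and plugin name are placeholders; note that lua_package_path is usually rooted above the kong/plugins directory, so that require("kong.plugins.hello-world.handler") resolves:

```ini
custom_plugins = hello-world
lua_package_path = /your-plugin-path/lua_modules/share/lua/5.1/?.lua;;
```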