It's not possible to customize timeout values - OpenStack

I'm using Cloudify 2.7 with OpenStack Icehouse.
I have tried to insert an "installer" section into the cloud driver (inside the computeTemplate, as suggested by the official guide):
installer {
    connectionTestRouteResolutionTimeoutMillis 120000
    connectionTestIntervalMillis 5000
    connectionTestConnectTimeoutMillis 10000
    fileTransferConnectionTimeoutMillis 10000
    fileTransferRetries 3
    fileTransferPort -1
    fileTransferConnectionRetryIntervalMillis 5000
    remoteExecutionPort -1
    remoteExecutionConnectionTimeoutMillis 10000
}
However, when trying to bootstrap the management VM, I receive the following error:
cloudify#default> bootstrap-cloud --verbose <openstack-icehouse-xxx>
Setting security profile to "nonsecure".
No such property: fileTransferPort for class: dslEntity :
groovy.lang.MissingPropertyException: No such property: fileTransferPort for class: dslEntity
Can someone explain to me where the problem is?
Thanks

Related

Airflow Scheduler HA

Can anyone guide me if I am doing anything wrong?
Objective: I want to set up scheduler HA.
Versions: Backend DB - Postgres 12.6, Airflow 2.1.1
Challenges: When the scheduler is started on the first machine, it works as expected and I was able to trigger the example_bash_operator DAG. But when the scheduler is started on another host with the same backend connection, my first scheduler fails, and I get the below error when I try to click on the example_bash_operator DAG in the WebUI:
ValueError: unsupported pickle protocol: 5
ValueError: unsupported pickle protocol: 5 generally occurs when the two machines are running different Python versions.
Verify that you have the same version of Python on both machines.
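A quick way to compare the two hosts (a sketch; adjust the interpreter name to whatever Airflow actually runs under):
# run on each scheduler host; the version and highest pickle protocol should match
python3 --version
python3 -c "import pickle; print(pickle.HIGHEST_PROTOCOL)"
# protocol 5 only exists from Python 3.8 onward, so an older interpreter on one
# host cannot read rows pickled by a newer one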

Tables not created after running worker and dashboard in WSO2 API Manager 3.2.0?

The following tables are not created after running the worker and dashboard in WSO2 API Manager 3.2.0 with an Oracle config:
a. WSO2_DASHBOARD_DB
b. BUSINESS_RULES_DB
c. WSO2_PERMISSIONS_DB
d. WSO2_METRICS_DB
What is the problem? For example, the WSO2_PERMISSIONS_DB datasource is configured as follows:
name: WSO2_PERMISSIONS_DB
description: The datasource used for permission feature
jndiConfig:
  name: jdbc/PERMISSION_DB
  useJndiReference: true
definition:
  type: RDBMS
  configuration:
    jdbcUrl: 'jdbc:oracle:thin:@apigwdb-scan.shoperation.net:1521/APIGWDB'
    username: 'WSO2_PERMISSIONS_DB'
    password: 'apigw14'
    driverClassName: oracle.jdbc.driver.OracleDriver
    maxPoolSize: 10
    idleTimeout: 60000
    connectionTestQuery: SELECT 1 FROM DUAL
    validationTimeout: 30000
    isAutoCommit: false
    connectionInitSql: alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'
By default, all of these databases have H2 configurations in deployment.yaml. You have to create the relevant databases on the Oracle server and change each configuration for the tables to be created in those databases.
Also, please check whether the user you have used has sufficient permissions.
For more information, please check https://apim.docs.wso2.com/en/3.2.0/learn/analytics/configuring-apim-analytics/#step-42-configure-the-analytics-dashboard
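As a rough sketch (not part of the original answer), creating one Oracle schema per datasource could look like this, reusing the host and credentials from the question; the DBA connection, grants, and quota are illustrative assumptions:
# run once per *_DB datasource listed above; YourSysPassword is a placeholder
sqlplus 'sys/YourSysPassword@//apigwdb-scan.shoperation.net:1521/APIGWDB as sysdba' <<'SQL'
CREATE USER WSO2_PERMISSIONS_DB IDENTIFIED BY apigw14;
GRANT CONNECT, RESOURCE TO WSO2_PERMISSIONS_DB;
ALTER USER WSO2_PERMISSIONS_DB QUOTA UNLIMITED ON USERS;
SQL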

Imaging Backend not working in eucalyptus

I have installed Eucalyptus 4.4.4 on CentOS 7 and I have already done all installation steps, but it is still showing "imagingbackend" as not ready.
The imaging service is not always required when using Eucalyptus. Particularly for smaller deployments, it can be a good choice to skip configuration of the imaging service and save the resources (virtual machine instances) that would have been used by the imaging service for user workloads.
To enable the imaging service you need to install and register the service image (v5 output shown):
# esi-describe-images
SERVICE VERSION ACTIVE IMAGE INSTANCES
imaging 5.0.100 * emi-b54e3b35170d2c56e 1
loadbalancing 5.0.100 * emi-b54e3b35170d2c56e 0
and create the stack:
# esi-manage-stack -a check
Stack 'euca-internal-imaging-service' currently is in CREATE_COMPLETE state.
The steps to get to this state are covered in the documentation:
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/configure_imaging_service.html
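For reference, installing and registering the service image usually amounts to something like the sketch below; the exact package name and flags vary between versions, so verify them against the linked guide:
# sketch: install the service image package, then register the default image
# (the command options here are an assumption; check the documentation above)
yum install eucalyptus-service-image
esi-install-image --install-default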
You can also use esi-manage-stack to delete / create the imaging stack:
# esi-manage-stack -a delete
services.imaging.worker.configured = false
# esi-manage-stack -a create
Stack 'euca-internal-imaging-service' currently is in DELETE_IN_PROGRESS state. Please wait till the end of previous stack change operation.
# esi-manage-stack -a create
services.imaging.worker.configured = true
#
If you have gone through these steps but the service is not enabled, you should do basic checks to verify that you can run any instances in your cloud:
# euca-describe-instance-types --show-capacity
# euserv-describe-events
and also check the log /var/log/eucalyptus/cloud-debug.log for errors.
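A quick way to scan that log (a generic sketch, not from the original answer):
# show the most recent error/exception lines from the cloud controller log
grep -iE 'error|exception' /var/log/eucalyptus/cloud-debug.log | tail -n 50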

EUCA 4.4.5 VPCMIDO Instances Terminate at Launch

I have built a small test cloud on 3 pieces of hardware. It works fine in EDGE mode, but when I try to configure it for VPCMIDO, new instances begin to launch but then time out and move to a terminated state. I can also see the instances' initial volume and config data appear in the NC and CC data directories. Below is my system layout and network.json.
HOST 1 : CLC/UFS/WALRUS/MIDO CLUSTER/MIDO GATEWAY/MIDOLMAN AGENT:
em1 (All Services including Mido Cluster): 10.0.0.21
em3 (Target VPCMIDO Adapter): 10.0.0.22
HOST 2 : CC/SC
em1 : 10.0.0.23
HOST 3 : NC/MIDOLMAN AGENT
em1 : 10.0.0.24
{
    "Mido": {
        "Gateways": [
            {
                "Ip": "10.0.0.22",
                "ExternalDevice": "em3",
                "ExternalCidr": "192.168.0.0/16",
                "ExternalIp": "192.168.0.2",
                "ExternalRouterIp": "192.168.0.1"
            }
        ]
    },
    "Mode": "VPCMIDO",
    "PublicIps": [
        "10.0.100.1-10.0.100.254"
    ]
}
I may be misunderstanding the intent of reserving an interface just for the MidoNet gateway. All of my eucalyptus/zookeeper/cassandra/midonet configs use the 10.0.0.21 interface and seem to communicate fine. The MidoNet tunnel zone reports my CLC host and NC host successfully in the tunnel zone. The only part of my config that references the interface I intend to use for the MidoNet gateway is the network.json. No errors were returned at any time during my configuration, so I think I may be missing something conceptual.
You may need to start eucanetd as described here:
https://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/starting_euca_clc.html
The eucanetd component in VPCMIDO mode runs on the Cloud Controller and is responsible for controlling MidoNet.
When eucanetd is not running, instances will fail to start because the required network resources will not be created.
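Assuming a systemd-based host (e.g. CentOS 7), starting it looks roughly like this (a sketch; see the linked guide for the authoritative steps):
# on the CLC: enable and start the network daemon, then verify it is running
systemctl enable eucanetd.service
systemctl start eucanetd.service
systemctl status eucanetd.service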
I configured a bridge on the NC and instances were able to launch, and I no longer got an error in my nc.log. The docs and the eucalyptus.conf file comments tell me I shouldn't need to do this in VPCMIDO networking mode: https://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/configuring_bridge.html
Despite all that, adding the bridge fixed this issue.

Oracle 11g XE installation on docker RHEL 7 image

While installing Oracle 11g XE on Docker I am getting an error.
The following is the output:
/etc/init.d/oracle-xe configure
Oracle Database 11g Express Edition Configuration
This will configure on-boot properties of Oracle Database 11g Express
Edition. The following questions will determine whether the database should
be starting upon system boot, the ports it will use, and the passwords that
will be used for database accounts. Press <Enter> to accept the defaults.
Ctrl-C will abort.
Specify the HTTP port that will be used for Oracle Application Express [8080]:8080
Specify a port that will be used for the database listener [1521]:1521
Specify a password to be used for database accounts. Note that the same
password will be used for SYS and SYSTEM. Oracle recommends the use of
different passwords for each database account. This can be done after
initial configuration:
Confirm the password:
Do you want Oracle Database 11g Express Edition to be started on boot (y/n) [y]:y
Starting Oracle Net Listener...Done
Configuring database...
Database Configuration failed. Look into /u01/app/oracle/product/11.2.0/xe/config/log for details
[root@b7c63c4e1da8 Disk1]# cd /u01/app/oracle/product/11.2.0/xe/config/log
[root@b7c63c4e1da8 log]# ls
CloneRmanRestore.log cloneDBCreation.log postDBCreation.log postScripts.log
[root@b7c63c4e1da8 log]# cat CloneRmanRestore.log
ORA-00845: MEMORY_TARGET not supported on this system
select TO_CHAR(systimestamp,'YYYYMMDD HH:MI:SS') from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
declare
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
select TO_CHAR(systimestamp,'YYYYMMDD HH:MI:SS') from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
One possible solution I found was to remount the temporary filesystem to give it extra space (it only has about 6 GB in the Docker container), but I am unable to change the mount inside Docker.
I got the solution for this:
We have to modify the files init.ora and initXETemp.ora at the path /u01/app/oracle/product/11.2.0/xe/config/scripts
with the following values:
###########################################
# Miscellaneous
###########################################
compatible=11.2.0.0.0
diagnostic_dest=/u01/app/oracle
#memory_target=1073741824
pga_aggregate_target=200540160
sga_target=601620480
You may encounter
ORA-00845: MEMORY_TARGET not supported on this system
when starting Oracle DB in an unprivileged container. Try running the container with the --privileged flag, e.g.
docker run --name oracle12 --hostname oracledb --privileged local/oracle12:12.1.0.2
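ORA-00845 generally means the container's /dev/shm is too small for the configured MEMORY_TARGET, so another option (not from the original answer) is to enlarge /dev/shm at run time instead of running privileged; the image name below is just the one from the example above:
# allocate a larger /dev/shm for the container
docker run --name oracle12 --hostname oracledb --shm-size=2g local/oracle12:12.1.0.2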
