Cannot start container in JBoss Fuse 6.0.0 - apache-karaf

I am trying to run an old JBoss Fuse (6.0.0.redhat-024) server that has some containers installed inside:
JBossFuse:karaf#myapps> fabric:container-list
[id] [version] [alive] [profiles] [provision status]
myapps* 1.1 true fmc, fabric, fabric-ensemble-0000-1, jboss-fuse-full success
managed1 1.1 false default, myprofile success
managed2 1.1 false default, myprofile success
I tried to start managed1 and managed2, but they do not seem to start, and the log shows nothing useful:
JBossFuse:karaf#myapps> fabric:container-start managed1
JBossFuse:karaf#myapps> fabric:container-start managed2
JBossFuse:karaf#myapps> fabric:container-list
[id] [version] [alive] [profiles] [provision status]
myapps* 1.1 true fmc, fabric, fabric-ensemble-0000-1, jboss-fuse-full success
managed1 1.1 false default, myprofile success
managed2 1.1 false default, myprofile success
JBossFuse:karaf#myapps> log:tail
.........
2016-10-20 22:23:51,148 | INFO | l Console Thread | FabricServiceImpl | ? ? | 57 - org.fusesource.fabric.fabric-core - 7.2.0.redhat-024 | Starting container managed1
2016-10-20 22:23:56,221 | INFO | l Console Thread | FabricServiceImpl | ? ? | 57 - org.fusesource.fabric.fabric-core - 7.2.0.redhat-024 | Starting container managed2
After reading the logs, I found an error that occurred while Fuse was starting:
20:17:09,542 | ERROR | guration Watcher | NIOServerCnxnFactory | 53 - org.fusesource.fabric.fabric-linkedin-zookeeper - 7.2.0.redhat-024 | Thread Thread[ActiveMQ Configuration Watcher,5,org.jboss.amq.mq-fabric-6.0.0.redhat-024] died
java.lang.NullPointerException
at org.fusesource.mq.fabric.ActiveMQServiceFactory$ConfigThread$$anonfun$run$3.apply(ActiveMQServiceFactory.scala:400)[128:org.jboss.amq.mq-fabric:6.0.0.redhat-024]
at org.fusesource.mq.fabric.ActiveMQServiceFactory$ConfigThread$$anonfun$run$3.apply(ActiveMQServiceFactory.scala:399)[128:org.jboss.amq.mq-fabric:6.0.0.redhat-024]
at scala.collection.mutable.HashMap$$anon$2$$anonfun$foreach$3.apply(HashMap.scala:102)[56:scala-library:2.9.1]
at scala.collection.mutable.HashMap$$anon$2$$anonfun$foreach$3.apply(HashMap.scala:102)[56:scala-library:2.9.1]
at scala.collection.Iterator$class.foreach(Iterator.scala:660)[56:scala-library:2.9.1]
at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)[56:scala-library:2.9.1]
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)[56:scala-library:2.9.1]
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:43)[56:scala-library:2.9.1]
at scala.collection.mutable.HashMap$$anon$2.foreach(HashMap.scala:102)[56:scala-library:2.9.1]
at org.fusesource.mq.fabric.ActiveMQServiceFactory$ConfigThread.run(ActiveMQServiceFactory.scala:399)[128:org.jboss.amq.mq-fabric:6.0.0.redhat-024]
What is happening, and how can I start the containers?

Related

Can't validate keystone endpoint when trying to define an OpenStack cloud for juju

I am trying to define an OpenStack cloud for juju. To do this, I first deployed DevStack using the following configuration in the local.conf file:
$ cat local.conf | grep -v "#" | grep -v "^$"
[[local|localrc]]
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=172.29.21.181
FLOATING_RANGE=172.29.20.1/22
Q_FLOATING_ALLOCATION_POOL=start=172.29.21.182,end=172.29.21.184
PUBLIC_NETWORK_GATEWAY=172.29.21.181
ENABLED_SERVICES+=,tls-proxy
ENABLED_SERVICES+=,g-api,g-reg
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data
After a successful deployment, these are the endpoints:
$ openstack endpoint list
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
| 0b489b8a683d4be489448230437e39ca | RegionOne | cinder | block-storage | True | public | https://172.29.21.181/volume/v3/$(project_id)s |
| 0b9e96cfe0b440b781171ac0b082de3a | RegionOne | keystone | identity | True | admin | https://172.29.21.181/identity |
| 29ce5b2061dd474492f3aebda164acd0 | RegionOne | cinderv2 | volumev2 | True | public | https://172.29.21.181/volume/v2/$(project_id)s |
| 45e10e75eb6848f5a934674373962e11 | RegionOne | glance | image | True | public | https://172.29.21.181/image |
| 8c35460b8c0d4c21ac9b7dd27bc92c48 | RegionOne | keystone | identity | True | public | https://172.29.21.181/identity |
| af451150c3094497936fd6877380d877 | RegionOne | placement | placement | True | public | https://172.29.21.181/placement |
| b3907f627f684ada8526b89c2c9683f9 | RegionOne | neutron | network | True | public | https://172.29.21.181:9696/ |
| c642b07700b54be39e1dd537e8c0f8be | RegionOne | nova | compute | True | public | https://172.29.21.181/compute/v2.1 |
| dbb94215bc89457383a390a0490a89f6 | RegionOne | nova_legacy | compute_legacy | True | public | https://172.29.21.181/compute/v2/$(project_id)s |
| e1037ed336d541b080e365caa0020e78 | RegionOne | cinderv3 | volumev3 | True | public | https://172.29.21.181/volume/v3/$(project_id)s |
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
But when I try to add the cloud to juju using the "juju add-cloud" command (I am following the instructions at this link: https://juju.is/docs/olm/openstack), I get the following error:
$ juju add-cloud openstack
This operation can be applied to both a copy on this client and to the one on a controller.
No current controller was detected and there are no registered controllers on this client: either bootstrap one or register one.
Cloud Types
lxd
maas
manual
openstack
vsphere
Select cloud type: openstack
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: https://172.29.21.181/identity
Can't validate endpoint: No Openstack server running at https://172.29.21.181/identity
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: https://172.29.21.181/identity/v3
Can't validate endpoint: No Openstack server running at https://172.29.21.181/identity/v3
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: http://172.29.21.181/identity
Can't validate endpoint: No Openstack server running at http://172.29.21.181/identity
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: https://172.29.21.181:5000/v3
Can't validate endpoint: No Openstack server running at https://172.29.21.181:5000/v3
I can curl the URL:
$ curl https://172.29.21.181/identity
{"versions": {"values": [{"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": "https://172.29.21.181/identity/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}]}}
And I can connect to the port where Keystone is listening:
$ nc -vz 172.29.21.181 5000
Connection to 172.29.21.181 5000 port [tcp/*] succeeded!
I set no_proxy=127.0.0.1,localhost,172.29.21.181 and NO_PROXY=127.0.0.1,localhost,172.29.21.181 as environment variables, because while searching for solutions on the Internet I read that this might solve the problem, but it didn't work.
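For completeness, this is how I set them before invoking juju (assuming a bash-like shell):
$ export no_proxy=127.0.0.1,localhost,172.29.21.181
$ export NO_PROXY=127.0.0.1,localhost,172.29.21.181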
Apart from this cloud I have another one deployed through OpenStack-Ansible. With that cloud I have not encountered this error; the only difference I see is that its URL is https://{HOST_IP}:5000/v3.
If anyone has any ideas it would be very helpful, thank you.
I have found a way to bypass this error, though I don't know exactly why it works. I modified the OS_AUTH_URL environment variable to end in "/v3":
$ unset OS_AUTH_URL
$ export OS_AUTH_URL=https://172.29.21.181/identity/v3
Now, after using it as the suggested value when running "juju add-cloud", I no longer get the error when running "juju bootstrap". I guess that when you enter the URL manually, juju validates it and that check fails for some reason in the code. Having skipped the check, "juju bootstrap" directly uses the URL ending in "/v3", which is correct and works.
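In other words, the sequence that ended up working for me (a sketch; the interactive add-cloud prompts are omitted here):
$ export OS_AUTH_URL=https://172.29.21.181/identity/v3
$ juju add-cloud openstack        # accept the suggested endpoint ending in /v3
$ juju bootstrap openstack --verbose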
Now I get the following error:
$ juju bootstrap openstack --verbose
Adding contents of "/opt/stack/.local/share/juju/ssh/juju_id_rsa.pub" to authorized-keys
Creating Juju controller "openstack-regionone" on openstack/RegionOne
Loading image metadata
ERROR failed to bootstrap model: no image metadata found
But I guess I just have to add Swift to my deployment and follow the instructions in this link: https://juju.is/docs/olm/cloud-image-metadata
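In case it helps anyone, here is a rough sketch of the simplestreams alternative described at that link (the image ID and series below are placeholders, not values from my deployment):
$ mkdir -p ~/simplestreams
$ juju metadata generate-image -d ~/simplestreams -i <IMAGE_ID> -s focal \
    -r RegionOne -u https://172.29.21.181/identity/v3
$ juju bootstrap openstack --metadata-source ~/simplestreams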

OpenStack Mistral workflow error while executing using GUI

I am getting an error while executing a simple OpenStack Mistral workflow on an OpenStack (Wallaby) devstack environment. I can execute the workflow from the CLI and it succeeds, but it fails if I try the same thing from the GUI.
root@openstack:~# openstack workflow definition show test_get
---
version: '2.0'
test_get:
  description: Test Get.
  tasks:
    my_task:
      action: std.http
      input:
        url: http://www.google.com
root@openstack:~# openstack workflow execution create test_get
+--------------------+--------------------------------------+
| Field | Value |
+--------------------+--------------------------------------+
| ID | 482e3803-45ef-411e-a0f4-1427abfc8649 |
| Workflow ID | 9dc0d4a4-8c5b-4288-8126-e1147da3bd02 |
| Workflow name | test_get |
| Workflow namespace | |
| Description | |
| Task Execution ID | <none> |
| Root Execution ID | <none> |
| State | RUNNING |
| State info | None |
| Created at | 2021-06-21 16:58:54 |
| Updated at | 2021-06-21 16:58:54 |
| Duration | ... |
+--------------------+--------------------------------------+
But when executing it from the GUI I get:
Execution is missing field "workflow_identifier"
I faced the same issue in the Yoga release. I spent a few hours investigating it and found something interesting:
/usr/local/lib/python3.8/dist-packages/mistralclient/api/v2/executions.py
class ExecutionManager(base.ResourceManager):
    resource_class = Execution

    def create(self, wf_identifier='', namespace='',
               workflow_input=None, description='', source_execution_id=None,
               **params):
        self._ensure_not_empty(
            workflow_identifier=wf_identifier or source_execution_id
        )
But in the web form we pass workflow_identifier instead of wf_identifier:
/usr/local/lib/python3.8/dist-packages/mistraldashboard/workflows/forms.py
def handle(self, request, data):
    try:
        data['workflow_identifier'] = data.pop('workflow_name')
        data['workflow_input'] = {}
        for param in self.workflow_parameters:
            value = data.pop(param)
            if value == "":
                value = None
            data['workflow_input'][param] = value
        ex = api.execution_create(request, **data)
The fix is to rename workflow_identifier to wf_identifier in the form, like this:
data['wf_identifier'] = data.pop('workflow_name')
After that, mistral-dashboard creates executions without any errors.
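A quick (hedged) way to apply that rename in place; the path is the one from my environment above, and you will need to restart whatever serves Horizon afterwards:
sudo sed -i "s/data\['workflow_identifier'\]/data['wf_identifier']/" \
    /usr/local/lib/python3.8/dist-packages/mistraldashboard/workflows/forms.py
sudo systemctl restart apache2   # or whichever service serves Horizon in your setup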

Error!: SQLSTATE[42000] [1226] ‘max_user_connections’ resource (current value: 30), but max_user_connections is configured to 1000

Visitors get a MySQL error when a MySQL user has exceeded max_user_connections; the error reports the current value as 30, even though max_connections and max_user_connections are both set to 1000. When the problem occurs, CPU usage reaches almost 98%.
In the MySQL error logs we received a lot of access denied errors from another user, around 5000 denied connections. My question is not why the PHP script takes all these connections, but why the configured variables max_user_connections and max_connections are not applied. They are set to 1000, yet the error message reports 30. How is that possible?
I activated log_warnings=2 to get more information, but we don't get anything extra. Any idea why this behavior occurs, or how to audit MySQL to find the source of the problem?
The error message received is:
Error!: SQLSTATE[42000] [1226] User ‘some_user’ has exceeded the ‘max_user_connections’ resource (current value: 30)
select @@session.max_user_connections, @@global.max_connections;
+--------------------------------+--------------------------+
| @@session.max_user_connections | @@global.max_connections |
+--------------------------------+--------------------------+
| 1000                           | 1000                     |
+--------------------------------+--------------------------+
show global variables like '%connections%';
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| extra_max_connections | 1 |
| max_connections | 1000 |
| max_user_connections | 1000 |
+-----------------------+-------+
show status like '%connected%';
+-------------------+-------+
| Variable_name | Value |
+-------------------+-------+
| Threads_connected | 4 |
+-------------------+-------+
select user,max_user_connections from mysql.user where host='localhost'\G
user: some_user
max_user_connections: 0
user: another_user
max_user_connections: 0
The error seems to be:
Error: 1226 SQLSTATE: 42000 (ER_USER_LIMIT_REACHED)
Message: User '%s' has exceeded the '%s' resource (current value: %ld)
and not :
Error: 1203 SQLSTATE: 42000 (ER_TOO_MANY_USER_CONNECTIONS)
Message: User %s already has more than 'max_user_connections' active connections
We are using MariaDB, version:
select version();
+------------------------+
| version() |
+------------------------+
| 5.5.44-MariaDB-cll-lve |
+------------------------+
Solution:
You can reproduce the error with the following command:
mysqlslap -a --concurrency=500 --number-of-queries 5000 --iterations=500 --engine=innodb --debug-info -utest -p
The problem was caused by Governor: we have CloudLinux installed on the server, and while this option is off by default, on this server it was set to "abusers". If CPU usage goes above the configured limit of 400, Governor sets max_user_connections for the offending user to 30.
You can check the logs at /var/log/dbgovernor-restrict.log.
The solution is to set this value correctly, or to turn the restriction off:
dbctl --lve-mode off
The relevant configuration is in /etc/container/mysql-governor.xml:
<lve use="abuser"></lve>
<restrict level1="60s" level2="15m" level3="1h" level4="1d"
timeout="1h" log="/var/log/dbgovernor-restrict.log"
user_max_connections="30"></restrict>
<statistic mode="on"></statistic>
<default>
<limit name="cpu" current="400" short="380" mid="350" long="300">
</limit>
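As a hedged alternative to turning LVE mode off entirely, you can raise the restricted connection cap in that file and restart the governor (the value 200 is only an example, and the restart command may differ between CloudLinux versions):
sudo sed -i 's/user_max_connections="30"/user_max_connections="200"/' /etc/container/mysql-governor.xml
sudo service db_governor restart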

Routing checking user role Symfony2

I have two bundles, and I want the routes from one of the bundles to be accessible only if the user has a defined role.
The logic of the route matcher should be:
If the user has the role:
| name | path | success |
|------------------|-------|---------|
| bundle_1_route_1 | / | false |
| bundle_1_route_2 | /test | true |
If the user doesn't have the role:
| name | path | success |
|------------------|-------|---------|
| bundle_1_route_1 | / | false |
| bundle_1_route_2 | /test | false |
| bundle_2_route_1 | /aaa | false |
| bundle_2_route_2 | /test | true |
The problem is that I can't do this with the security configuration, because the paths are the same.
I tried the @Security annotation (http://symfony.com/doc/current/bundles/SensioFrameworkExtraBundle/annotations/security.html), but I get an access denied on bundle_1_route_2 when the role is missing, and the other URLs are never checked.
I want routing to keep checking the other available URLs when the role does not match.
I found another solution, but it is not very clean, and it will raise an error if the session does not exist:
bundle_1:
    resource: "@Bundle1/Controller/"
    type: annotation
    prefix: /
    condition: "'ROLE_FILTER' in request.getSession().get('bundle1.user').getRoles()"
Is there a way to create completely custom conditions for routing?

flyway outOfOrder is not working as expected

I am trying to apply an outOfOrder migration using Maven on a "production support" branch (V3.1). The 3.1 branch has 12 migrations, 3.1.0.1 through 3.1.0.12. The first 11 have already been applied, and in my development environment two migrations from the next release, 3.3, have also been applied. The info output looks like this:
+----------------+----------------------------+---------------------+---------+
| Version | Description | Installed on | State |
+----------------+----------------------------+---------------------+---------+
| 1 | > | 2013-08-16 16:35:22 | Success |
| 3.1.0.1 | CCI DDL | 2013-08-16 16:41:04 | Success |
| 3.1.0.2 | Update 1 | 2013-08-19 12:17:43 | Success |
| 3.1.0.3 | Add SVT ITEM HISTORY | 2013-08-21 16:24:28 | Success |
| 3.1.0.4 | Drop Col Event Key From ED | 2013-08-27 14:15:36 | Success |
| 3.1.0.5 | Add Job Begin Time COL | 2013-10-10 14:59:14 | Success |
| 3.1.0.6 | Update SVT Column Lengths | 2013-10-23 10:25:33 | Success |
| 3.1.0.7 | Add Seq Number to EDC ECRF | 2013-12-03 14:59:31 | Success |
| 3.1.0.8 | Set EDC ECRF ITEM Seq Numb | 2013-12-03 15:27:08 | Success |
| 3.1.0.9 | Add Table EDC USV FORM | 2013-12-03 15:37:47 | Success |
| 3.1.0.10 | Add Table SVT USV FORM MAP | 2013-12-03 15:52:24 | Success |
| 3.1.0.11 | Add Tables SUBJECT VISIT Q | 2014-04-29 17:09:13 | OutOrde |
| 3.1.0.12 | Add Table BOGUS ERIC TEST | | Ignored |
| 3.3.0.1 | Insert iMedidata CRS Info | 2014-04-24 10:50:38 | Future |
| 3.3.0.2 | Insert Study OBJECT TYPE | 2014-04-24 11:14:37 | Future |
+----------------+----------------------------+---------------------+---------+
I run the following command in my Maven build output folder on the V3.1 branch:
mvn flyway:migrate -Dflyway.outOfOrder=true -P
and I get the following output:
[ERROR] Failed to execute goal org.flywaydb:flyway-maven-plugin:3.0:migrate (default-cli) on project mdmws: org.flywaydb.core.api.FlywayException: Validate failed. Found differences between applied migrations and available migrations: Detected applied migration missing on the classpath: 3.3.0.1 -> [Help 1]
It seems to want to find the 3.3 migrations that have already been applied to the database on the same classpath (the target/db/migrations folder), but of course those files only exist in a later release branch. Either I am missing some configuration setting or I do not understand how outOfOrder works. I do not want to pull those files back from the V3.3 branch to the V3.1 branch.
Can somebody please help explain?
My pom inherits the following from a parent pom and most of the configuration values are passed in from the profile:
<plugin>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-maven-plugin</artifactId>
    <version>3.0</version>
    <configuration>
        <driver>${flyway.driver}</driver>
        <url>${flyway.url}</url>
        <user>${flyway.user}</user>
        <password>${flyway.password}</password>
        <outOfOrder>${flyway.outOfOrder}</outOfOrder>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>com.oracle.ojdbc</groupId>
            <artifactId>ojdbc6</artifactId>
            <version>11.2.0.3</version>
        </dependency>
    </dependencies>
</plugin>
Set validateOnMigrate to false and you should be OK. By default Flyway checks that the resolved migrations and the applied migrations match; in your situation that check cannot pass, so you have to disable it.
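For example (a sketch): pass the property on the command line for a one-off run, or add <validateOnMigrate>false</validateOnMigrate> to the plugin configuration shown in the question. The -P<your-profile> placeholder below stands in for whatever profile you already pass:
mvn flyway:migrate -Dflyway.outOfOrder=true -Dflyway.validateOnMigrate=false -P<your-profile>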
That's my solution: I configured Flyway in Java code and set validateOnMigrate to false.
#Bean(name = "flyway")
#Lazy(false)
public Flyway buildConfiguredFlyway() {
Flyway flyway = configure();
if (!validate(flyway)) {
migrate(flyway);
}
return flyway;
}
private Flyway configure() {
Flyway flyway = new Flyway();
flyway.setDataSource(datasource);
flyway.setBaselineOnMigrate(true);//Create meta-data table if it did not exist.
flyway.setValidateOnMigrate(false);
return flyway;
}
private boolean validate(Flyway flyway) {
try {
flyway.validate();
return true;
} catch (FlywayException o) {
return false;
}
}
private void migrate(Flyway flyway) {
try {
int result = flyway.migrate();
LOGGER.info("Number of DB mirgations successfully applied: " + result);
} catch (FlywayException e) {
LOGGER.error(e.getMessage(), e);
((ConfigurableApplicationContext) applicationContext).stop();
}
}
