I have successfully installed an OpenStack instance with Neutron using DevStack (all-in-one). I now have a set of IPv4 addresses which I need to assign to my instances as floating IPs and make them pingable / SSHable from outside the host.
I am able to assign the intended IPs as floating IPs to my instances, but they are not pingable from inside or outside the host. I have modified the security group rules to allow SSH and ICMP (ping). Here are my network details:
stack@tanmoy:/etc/init.d$ neutron net-list
+--------------------------------------+-----------+------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+-----------+------------------------------------------------------+
| 1566fc4f-60a9-4170-b860-333a264f22d8 | my-public | 101675c6-7c92-4ea0-b361-7cade98fa5a2 10.158.XXX.0/24 |
| be6f76d4-954f-475e-853e-adb860508e9c | public | 0604470a-761e-4913-998c-cc5413dcd5a6 172.24.4.0/24 |
| e816c35f-45a0-446b-b3ff-ca3196c98eb2 | private | f4d617a7-e250-45fa-bb0a-95290cfafb20 10.0.0.0/24 |
+--------------------------------------+-----------+------------------------------------------------------+
stack@tanmoy:/etc/init.d$ neutron subnet-list
+--------------------------------------+----------------+-----------------+----------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+----------------+-----------------+----------------------------------------------------+
| 0604470a-761e-4913-998c-cc5413dcd5a6 | public-subnet | 172.24.4.0/24 | {"start": "172.24.4.2", "end": "172.24.4.254"} |
| 101675c6-7c92-4ea0-b361-7cade98fa5a2 | ipcloud-dev | 10.158.XXX.0/24 | {"start": "10.158.XXX.56", "end": "10.158.XXX.62"} |
| f4d617a7-e250-45fa-bb0a-95290cfafb20 | private-subnet | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} |
+--------------------------------------+----------------+-----------------+----------------------------------------------------+
stack@tanmoy:/etc/init.d$ neutron router-list
+--------------------------------------+--------------+-----------------------------------------------------------------------------+
| id | name | external_gateway_info |
+--------------------------------------+--------------+-----------------------------------------------------------------------------+
| 811a483a-6faf-4dad-9d28-d51aa9530691 | ExternalLink | {"network_id": "1566fc4f-60a9-4170-b860-333a264f22d8", "enable_snat": true} |
| f71a6574-75c8-424e-ab57-ff0f9a20ef54 | router1 | {"network_id": "be6f76d4-954f-475e-853e-adb860508e9c", "enable_snat": true} |
+--------------------------------------+--------------+-----------------------------------------------------------------------------+
My security group rules are as follows:
stack@tanmoy:$ nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 443 | 443 | 0.0.0.0/0 | |
| | | | | default |
| | | | | default |
| icmp | -1 | -1 | 0.0.0.0/0 | |
| tcp | 22 | 22 | 0.0.0.0/0 | |
| tcp | 80 | 80 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
I have tried pinging using ip netns, but that also did not work.
stack@tanmoy:/var/log$ sudo ip netns exec qrouter-f71a6574-75c8-424e-ab57-ff0f9a20ef54 ping 10.158.XXX.60
PING 10.158.XXX.60 (10.158.XXX.60) 56(84) bytes of data.
From 10.158.XXX.71 icmp_seq=1 Destination Host Unreachable
Please let me know if I am missing something.
Check whether br-ex has an IP address. If not, assign it the 172.24.4.1 address and try pinging.
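A minimal sketch of what that suggestion amounts to (172.24.4.1/24 matches the default DevStack public subnet shown above; adjust if your external range is different):
# check whether br-ex already has an address
ip addr show br-ex
# if it does not, assign one from the external subnet and retry the ping
sudo ip addr add 172.24.4.1/24 dev br-ex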
I do not think that br-ex should have an IP address assigned to it. I have an all-in-one setup too, but built manually. I noticed that you have two routers defined. When you try to ping via ip netns you are using the namespace of router1; however, if I interpret your neutron router-list output correctly, this router is not attached to the outside network 10.158.XXX.0. Try doing the ip netns ping from the other router namespace.
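Sticking to the IDs from the question, that check could look roughly like this (the router attached to my-public is ExternalLink, id 811a483a-6faf-4dad-9d28-d51aa9530691):
# list the router namespaces present on the host
ip netns list | grep qrouter
# ping the floating IP from the namespace of the router whose gateway is my-public
sudo ip netns exec qrouter-811a483a-6faf-4dad-9d28-d51aa9530691 ping -c 3 10.158.XXX.60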
Here is my setup that seems to work:
root@columbo:~# ifconfig br-ex
br-ex Link encap:Ethernet HWaddr 08:00:27:f9:7b:07
inet6 addr: fe80::a83d:11ff:fe5e:b595/64 Scope:Link
inet6 addr: fd17:625c:f037:1064:19a0:c74a:caf0:b3bd/64 Scope:Global
inet6 addr: fd17:625c:f037:1064:a00:27ff:fef9:7b07/64 Scope:Global
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:29 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2454 (2.4 KB) TX bytes:924 (924.0 B)
root@columbo:~# neutron net-list
+--------------------------------------+---------------+----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------------+----------------------------------------------------+
| 120a6fde-7e2d-4856-90ee-5609a5f3035f | SecondVlan | 5432f1c9-0bb6-4619-b897-65d301071f72 5.5.5.0/25 |
| f2597437-a005-44ad-9ce2-168fbc331e56 | outside_world | 3fe35e71-53d7-4432-8c82-a06856b79316 |
| b7ab2080-a71a-44f6-9f66-fde526bb73d3 | SERVER_VLAN_1 | 87d769f1-5cf3-48cf-8741-44a01479ff3e 10.255.1.0/24 |
+--------------------------------------+---------------+----------------------------------------------------+
My router is attached to the external network (f2597437-a005-44ad-9ce2-168fbc331e56):
root@columbo:~# neutron router-list
+--------------------------------------+-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | name | external_gateway_info |
+--------------------------------------+-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| e53979a8-8bab-4da5-9b57-58dba6d5db7b | CORE1 | {"network_id": "f2597437-a005-44ad-9ce2-168fbc331e56", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "3fe35e71-53d7-4432-8c82-a06856b79316", "ip_address": "172.16.100.50"}]} |
+--------------------------------------+-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
My instance has the floating ip 172.16.100.51 and from the router namespace I can ping it:
root@columbo:~# nova list
+--------------------------------------+-----------+---------+--------------+-------------+------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+---------+--------------+-------------+------------------------------------------+
| 624c747f-520c-4215-acac-aaa41eef2815 | CIRROSone | SHUTOFF | - | Shutdown | SERVER_VLAN_1=10.255.1.12 |
| 6529c62c-0754-4cc6-a012-e77e71795eb1 | CIRROSone | ACTIVE | - | Running | SERVER_VLAN_1=10.255.1.15, 172.16.100.51 |
| 7784c6ed-eea8-49c9-a312-8c40a77c1758 | CIRROStwo | ACTIVE | powering-off | Running | SERVER_VLAN_1=10.255.1.14 |
| 7b6bfc23-f0df-4c40-b558-f8e4bb71028f | UBUNTUone | SHUTOFF | - | Shutdown | SERVER_VLAN_1=10.255.1.13 |
| 5c06344c-d5c1-4c0c-b074-c9a30e34759d | UBUNTUtwo | SHUTOFF | - | Shutdown | SecondVlan=5.5.5.2 |
+--------------------------------------+-----------+---------+--------------+-------------+------------------------------------------+
root@columbo:~# ip netns exec qrouter-e53979a8-8bab-4da5-9b57-58dba6d5db7b ping 172.16.100.51
PING 172.16.100.51 (172.16.100.51) 56(84) bytes of data.
64 bytes from 172.16.100.51: icmp_seq=1 ttl=64 time=5.68 ms
64 bytes from 172.16.100.51: icmp_seq=2 ttl=64 time=1.86 ms
^C
--- 172.16.100.51 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.866/3.776/5.687/1.911 ms
If I compare my neutron router-list output to yours, two things are different:
Your router is not linked to the external network (I am talking about router1, from whose namespace you ran the ping). When you set a router's gateway to a specific network, that network is listed there. So again, try the ping from the other namespace; a couple of commands to confirm which router to use are sketched below.
I do not see an IP address mentioned in your output. Maybe you just did not copy it... For me, the router gets the first IP in the external network (that is the default behavior).
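To confirm which router actually owns the floating IP before picking a namespace, something like this should be enough (names taken from the outputs above):
# list floating IPs and the ports they are mapped to
neutron floatingip-list
# show the external gateway of each router
neutron router-show ExternalLink
neutron router-show router1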
I hope it helps.
I am trying to define an OpenStack cloud for Juju. To do this, I first deployed DevStack using the following configuration in the local.conf file:
$ cat local.conf | grep -v "#" | grep -v "^$"
[[local|localrc]]
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=172.29.21.181
FLOATING_RANGE=172.29.20.1/22
Q_FLOATING_ALLOCATION_POOL=start=172.29.21.182,end=172.29.21.184
PUBLIC_NETWORK_GATEWAY=172.29.21.181
ENABLED_SERVICES+=,tls-proxy
ENABLED_SERVICES+=,g-api,g-reg
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data
After a successful deployment, these are the endpoints:
$ openstack endpoint list
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
| 0b489b8a683d4be489448230437e39ca | RegionOne | cinder | block-storage | True | public | https://172.29.21.181/volume/v3/$(project_id)s |
| 0b9e96cfe0b440b781171ac0b082de3a | RegionOne | keystone | identity | True | admin | https://172.29.21.181/identity |
| 29ce5b2061dd474492f3aebda164acd0 | RegionOne | cinderv2 | volumev2 | True | public | https://172.29.21.181/volume/v2/$(project_id)s |
| 45e10e75eb6848f5a934674373962e11 | RegionOne | glance | image | True | public | https://172.29.21.181/image |
| 8c35460b8c0d4c21ac9b7dd27bc92c48 | RegionOne | keystone | identity | True | public | https://172.29.21.181/identity |
| af451150c3094497936fd6877380d877 | RegionOne | placement | placement | True | public | https://172.29.21.181/placement |
| b3907f627f684ada8526b89c2c9683f9 | RegionOne | neutron | network | True | public | https://172.29.21.181:9696/ |
| c642b07700b54be39e1dd537e8c0f8be | RegionOne | nova | compute | True | public | https://172.29.21.181/compute/v2.1 |
| dbb94215bc89457383a390a0490a89f6 | RegionOne | nova_legacy | compute_legacy | True | public | https://172.29.21.181/compute/v2/$(project_id)s |
| e1037ed336d541b080e365caa0020e78 | RegionOne | cinderv3 | volumev3 | True | public | https://172.29.21.181/volume/v3/$(project_id)s |
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
But when I try to add the cloud to Juju using the "juju add-cloud" command (I am following the instructions at this link: https://juju.is/docs/olm/openstack), I get the following error:
$ juju add-cloud openstack
This operation can be applied to both a copy on this client and to the one on a controller.
No current controller was detected and there are no registered controllers on this client: either bootstrap one or register one.
Cloud Types
lxd
maas
manual
openstack
vsphere
Select cloud type: openstack
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: https://172.29.21.181/identity
Can't validate endpoint: No Openstack server running at https://172.29.21.181/identity
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: https://172.29.21.181/identity/v3
Can't validate endpoint: No Openstack server running at https://172.29.21.181/identity/v3
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: http://172.29.21.181/identity
Can't validate endpoint: No Openstack server running at http://172.29.21.181/identity
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: https://172.29.21.181:5000/v3
Can't validate endpoint: No Openstack server running at https://172.29.21.181:5000/v3
I can curl the url:
$ curl https://172.29.21.181/identity
{"versions": {"values": [{"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": "https://172.29.21.181/identity/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}]}}
And I can connect to the port where Keystone is listening:
$ nc -vz 172.29.21.181 5000
Connection to 172.29.21.181 5000 port [tcp/*] succeeded!
I set no_proxy=127.0.0.1,localhost,172.29.21.181 and NO_PROXY=127.0.0.1,localhost,172.29.21.181 as environment variables, because while searching for solutions on the Internet I got the impression that this might solve my problem, but it didn't work.
Apart from this cloud, I have another one deployed with OpenStack-Ansible. In that cloud I have not encountered this error; the only difference I see is that its URL is https://{HOST_IP}:5000/v3.
If anyone has any ideas it would be very helpful, thank you.
I have found a way to bypass this error, though I don't know exactly why it works. I modified the OS_AUTH_URL environment variable to end in "/v3":
$ unset OS_AUTH_URL
$ export OS_AUTH_URL=https://172.29.21.181/identity/v3
Now, after accepting it as the suggested value when running "juju add-cloud", I no longer get the error when running "juju bootstrap". I guess that when you enter the URL manually, Juju validates it and that validation fails for some reason in the code; with the check skipped, "juju bootstrap" uses the URL ending in "/v3" directly, which is correct and works.
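For reference, a non-interactive way to register the cloud is to describe it in a YAML file and pass it to juju add-cloud. This is only a sketch using the endpoint from this thread; depending on the Juju version, the file may need to be passed with -f/--file instead of as a positional argument, and openstack-cloud.yaml is just a name made up for the example:
cat > openstack-cloud.yaml <<'EOF'
clouds:
  openstack:
    type: openstack
    auth-types: [userpass]
    regions:
      RegionOne:
        endpoint: https://172.29.21.181/identity/v3
EOF
juju add-cloud openstack openstack-cloud.yaml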
Now I get the following error:
$ juju bootstrap openstack --verbose
Adding contents of "/opt/stack/.local/share/juju/ssh/juju_id_rsa.pub" to authorized-keys
Creating Juju controller "openstack-regionone" on openstack/RegionOne
Loading image metadata
ERROR failed to bootstrap model: no image metadata found
But I guess I just have to add Swift to my deployment and follow the instructions in this link: https://juju.is/docs/olm/cloud-image-metadata
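If the missing piece is just simplestreams image metadata, a rough sketch with the juju metadata plugin would be the following (the image id, series, and directory are placeholders, so double-check against the linked docs):
# generate image metadata for an image already uploaded to Glance
juju metadata generate-image -d ~/simplestreams -i <glance-image-id> -s bionic \
    -r RegionOne -u https://172.29.21.181/identity/v3
# point bootstrap at the generated metadata
juju bootstrap openstack --metadata-source ~/simplestreams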
CentOS 7.8, OpenStack Mitaka: I installed the Glance image service on the controller node, but the uploaded image has a problem.
Following the official Mitaka documentation, step 3 says to upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so that all projects can access it.
I executed the following command:
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
The size of the image in the output is zero. How should I troubleshoot this problem?
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | d41d8cd98f00b204e9800998ecf8427e |
| container_format | bare |
| created_at | 2020-05-24T14:45:54Z |
| disk_format | qcow2 |
| file | /v2/images/c89f6866-0c48-4ee5-84f1-bf7fa0998edf/file |
| id | c89f6866-0c48-4ee5-84f1-bf7fa0998edf |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | a9629b19eb9348adbf02a5432dd79411 |
| protected | False |
| schema | /v2/schemas/image |
| size | 0 |
| status | active |
| tags | |
| updated_at | 2020-05-24T14:45:54Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
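Worth noting: d41d8cd98f00b204e9800998ecf8427e is the MD5 checksum of zero bytes, which suggests Glance received an empty upload. A first check is whether the --file argument actually points at a non-empty, readable image in the current directory, roughly:
# verify the image file exists, is non-empty, and looks like a qcow2 image
ls -l cirros-0.3.4-x86_64-disk.img
qemu-img info cirros-0.3.4-x86_64-disk.img
# re-upload under a new name (to avoid clashing with the empty one) and inspect the result
openstack image create "cirros2" --file cirros-0.3.4-x86_64-disk.img \
    --disk-format qcow2 --container-format bare --public
openstack image show cirros2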
I need to edit an existing PowerShell runbook which uses a template to create a Cosmos DB in Azure.
I need to enable TTL without a default TTL value. In the examples I have found so far there is always a value, which is then used to delete expired documents.
How do I enable only the TTL without setting a default?
My reference: https://learn.microsoft.com/en-us/azure/cosmos-db/manage-with-powershell#create-container-unique-key-ttl
After digging in the Microsoft documentation I found this key table with examples:
+-------------+--------------------------------------------------------------------+
| TTL on item | Result |
+-------------+--------------------------------------------------------------------+
| TTL on container is set to null (DefaultTimeToLive = null) |
| ttl = null | TTL is disabled. The item will never expire (default). |
| ttl = -1 | TTL is disabled. The item will never expire. |
| ttl = 2000 | TTL is disabled. The item will never expire. |
+-------------+--------------------------------------------------------------------+
| TTL on container is set to -1 (DefaultTimeToLive = -1) |
| ttl = null | TTL is enabled. The item will never expire (default). |
| ttl = -1 | TTL is enabled. The item will never expire. |
| ttl = 2000 | TTL is enabled. The item will expire after 2000 seconds. |
+-------------+--------------------------------------------------------------------+
| TTL on container is set to 1000 (DefaultTimeToLive = 1000) |
| ttl = null | TTL is enabled. The item will expire after 1000 seconds (default). |
| ttl = -1 | TTL is enabled. The item will never expire. |
| ttl = 2000 | TTL is enabled. The item will expire after 2000 seconds. |
+-------------+--------------------------------------------------------------------+
This is not exactly about the runbook and template, but if I set -1 I can achieve my intent: as shown in the table above, setting the container's TTL to -1 enables TTL, and the TTL value set on each document will be used.
Using Get-Help New-CosmosDbCollection -Full I found the parameter -DefaultTimeToLive; this is what I am going to use, because it looks like there is no option to do it in the ARM template.
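For completeness, the same container-level setting can also be made from a shell with the Azure CLI, if that tool is available to the runbook. This is a sketch from memory, so verify the flag against az cosmosdb sql container create --help; the resource names are placeholders:
# create a container with TTL enabled but no default expiry (per the table above, -1)
az cosmosdb sql container create \
    --resource-group <resource-group> \
    --account-name <cosmos-account> \
    --database-name <database> \
    --name <container> \
    --partition-key-path /id \
    --ttl -1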
I'm setting up OpenStack on some machines. I was following this guide, http://docs.openstack.org/liberty/install-guide-ubuntu/, until I ran into a problem:
When verifying the Image service (Glance), I get the following error:
$ cat admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=passw0rd
export OS_AUTH_URL=http://Renaissance:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
$ source admin-openrc.sh
$ glance --debug image-create --name "cirros" \
> --file cirros-0.3.4-x86_64-disk.img \
> --disk-format qcow2 --container-format bare \
> --visibility public --progress
curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: {SHA1}7ce8d893ef6cdaca2ed5a876c8211a841455ba65' -H 'Content-Type: application/octet-stream' http://Renaissance:9292/v2/schemas/image
Request returned failure status 401.
Invalid OpenStack Identity credentials.
I get the same error with any other glance command (e.g. glance image-list).
I think my configuration is correct, since I followed the guide.
Here are my OpenStack services, projects, users, roles and endpoints:
+----------------------------------+----------+----------+
| ID | Name | Type |
+----------------------------------+----------+----------+
| bf585630a5cb475b9e883493de3813fa | glance | image |
| fc29e468dae849e6afb97ecc3bf487f6 | keystone | identity |
+----------------------------------+----------+----------+
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 0bc473b2e77a4a9bb7871ed2afacb995 | admin |
| dcaf480621164c409b6704c3f42e0869 | service |
| e9f709d860fe46e2819b6bf1c78ccd0f | nonadmin |
+----------------------------------+----------+
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 485374adcbe54ce5b9ef465b84aa2c9f | admin |
| 7447f4cd56f64ccfb111cba74f9a4b92 | nonadmin |
| d9ffc32240d24328b10af8b2550ec414 | glance |
+----------------------------------+----------+
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 466fea231ef54d3ca4564fb42f51bb5c | admin |
| a36c726d27f04ebf92d336c3acfcd945 | user |
+----------------------------------+-------+
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------+
| 01f62a7b9f7f4fa782e8bc695e74afc1 | RegionOne | glance | image | True | internal | http://Renaissance:9292 |
| abb7e5052d8646428e82ef58ca21b376 | RegionOne | keystone | identity | True | public | http://Renaissance:5000/v2.0 |
| d5b3180255b44a0eafe0810a20e104bc | RegionOne | glance | image | True | public | http://Renaissance:9292 |
| e0392842c6f64ac389a5688bc2581192 | RegionOne | keystone | identity | True | internal | http://Renaissance:5000/v2.0 |
| e0eb3dd0ed774669bce9a74dd3831c05 | RegionOne | keystone | identity | True | admin | http://Renaissance:35357/v2.0 |
| ec855dca8f87454e997fd55c47f17703 | RegionOne | glance | image | True | admin | http://Renaissance:9292 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------+
My Glance auth configuration (in glance-api.conf and glance-registry.conf) is listed below:
...
[keystone_authtoken]
# Complete public Identity API endpoint. (string value)
auth_uri = http://Renaissance:5000
auth_uri = http://Renaissance:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = passw0rd
...
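For comparison, the Liberty install guide's [keystone_authtoken] block uses two different keys, auth_uri (port 5000) and auth_url (port 35357), rather than auth_uri twice; a sketch with the hostname and password from this thread:
[keystone_authtoken]
auth_uri = http://Renaissance:5000
auth_url = http://Renaissance:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = passw0rd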
And I can get a token with the openstack client:
$ openstack token issue
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2016-10-01T01:16:48.482839Z |
| id | 2a4e052a2c4140a28f550158d95ecd3b |
| project_id | 0bc473b2e77a4a9bb7871ed2afacb995 |
| user_id | 485374adcbe54ce5b9ef465b84aa2c9f |
+------------+----------------------------------+
I'm guessing it's an API version problem; I've been changing the version number in the URI, but it didn't work. Any help is appreciated. Thanks!
In your Glance configuration the project name is service, but your environment variables set the project name to admin.
Solutions:
ensure that passw0rd is the real password for the glance account in the service project (a quick way to test this is sketched below)
or change the Glance configuration to use the admin project instead
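To test the first option, authenticate as the service account directly and request a token (a sketch; override the admin environment after sourcing your rc file):
# switch to the glance service credentials and try to get a token
source admin-openrc.sh
export OS_PROJECT_NAME=service
export OS_TENANT_NAME=service
export OS_USERNAME=glance
export OS_PASSWORD=passw0rd
openstack token issue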
Following this guide:
https://github.com/telefonicaid/fiware-connectors/blob/master/flume/doc/quick_start_guide.md
I tried to run:
/usr/cygnus/bin/cygnus-flume-ng agent --conf /usr/cygnus/conf/ -f /usr/cygnus/conf/agent_1.conf -n cygnusagent -Dflume.root.logger=INFO,console
But I got this error
time=2015-03-11T17:35:01.965CET | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed SocketConnector@0.0.0.0:8081: java.net.BindException: Address already in use
time=2015-03-11T17:35:01.965CET | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed Server@57c59fac: java.net.BindException: Address already in use
time=2015-03-11T17:35:01.965CET | lvl=FATAL | trans= | function=run | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.http.JettyServer[63] : Fatal error running the Management Interface. Details=Address already in use
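Before anything else, it may help to see what is already bound to the management port (8081 here); a quick sketch:
# show whatever is currently listening on 8081
sudo netstat -tlnp | grep 8081
# or, if lsof is installed
sudo lsof -i :8081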
Besides this error, when I run service cygnus status it reports that Cygnus started correctly. I also see this error:
time=2015-03-11T17:46:52.337CET | lvl=ERROR | trans= | function=run | comp=Cygnus | msg=org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable[253] : Unable to start EventDrivenSourceRunner: { source:org.apache.flume.source.http.HTTPSource{name:http-source,state:IDLE} } - Exception follows.
java.lang.IllegalStateException: Running HTTP Server found in source: http-source before I started one.Will not attempt to start.
at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
at org.apache.flume.source.http.HTTPSource.start(HTTPSource.java:137)
at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
I changed the port to 8085, 8084, 8083, ... and I can see that it reads the configuration but ignores this setting:
[root@alex alex]# /usr/cygnus/bin/cygnus-flume-ng agent --conf /usr/cygnus/conf -f /usr/cygnus/conf/cygnus_instance_1.conf -n cygnusagent -Dflume.root.logger=INFO,console [-p 8085]
+ exec /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.34.x86_64//bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/usr/cygnus/conf:/usr/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/libext/*' -Djava.library.path= es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication -f /usr/cygnus/conf/cygnus_instance_1.conf -n cygnusagent '[-p' '8085]'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/cygnus/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/cygnus/plugins.d/cygnus/lib/cygnus-0.7.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
time=2015-03-11T19:47:50.882CET | lvl=INFO | trans= | function=start | comp=Cygnus | msg=org.apache.flume.node.PollingPropertiesFileConfigurationProvider[61] : Configuration provider starting
time=2015-03-11T19:47:50.895CET | lvl=INFO | trans= | function=run | comp=Cygnus | msg=org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable[133] : Reloading configuration file:/usr/cygnus/conf/cygnus_instance_1.conf
time=2015-03-11T19:47:50.906CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: CONFIG_FILE = /usr/cygnus/conf/agent_1.conf
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: CONFIG_FOLDER = /usr/cygnus/conf
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: AGENT_NAME = cygnusagent
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: CYGNUS_USER = root
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: LOGFILE_NAME = cygnus.log
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: ADMIN_PORT = 8085
time=2015-03-11T19:47:50.907CET | lvl=INFO | trans= | function=validateConfiguration | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[140] : Post-validation flume configuration contains configuration for agents: []
time=2015-03-11T19:47:50.908CET | lvl=WARN | trans= | function=getConfiguration | comp=Cygnus | msg=org.apache.flume.node.AbstractConfigurationProvider[138] : No configuration found for this host:cygnusagent
time=2015-03-11T19:47:50.913CET | lvl=INFO | trans= | function=startAllComponents | comp=Cygnus | msg=org.apache.flume.node.Application[138] : Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }
time=2015-03-11T19:47:50.925CET | lvl=INFO | trans= | function=startManagementInterface | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication[85] : Starting a Jetty server listening on port 8081 (Management Interface)
time=2015-03-11T19:47:50.942CET | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
time=2015-03-11T19:47:50.942CET | lvl=INFO | trans= | function=stopAllComponents | comp=Cygnus | msg=org.apache.flume.node.Application[101] : Shutting down configuration: { sourceRunners:{} sinkRunners:{} channels:{} }
time=2015-03-11T19:47:50.942CET | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : jetty-6.1.26
time=2015-03-11T19:47:50.942CET | lvl=INFO | trans= | function=startAllComponents | comp=Cygnus | msg=org.apache.flume.node.Application[138] : Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }
time=2015-03-11T19:47:50.949CET | lvl=INFO | trans= | function=startManagementInterface | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication[85] : Starting a Jetty server listening on port 8081 (Management Interface)
time=2015-03-11T19:47:50.958CET | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : jetty-6.1.26
time=2015-03-11T19:47:50.978CET | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed SocketConnector@0.0.0.0:8081: java.net.SocketException: Address already in use
time=2015-03-11T19:47:50.980CET | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : Started SocketConnector@0.0.0.0:8081
time=2015-03-11T19:47:50.982CET | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed Server@6e811049: java.net.SocketException: Address already in use
time=2015-03-11T19:47:50.982CET | lvl=FATAL | trans= | function=run | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.http.JettyServer[63] : Fatal error running the Management Interface. Details=Address already in use
Alejandro, this is a well-known bug in Cygnus 0.7.0. A new 0.7.1 version was uploaded to the FIWARE repo at the beginning of this week. Anyway, that supposedly FATAL error (it is an error, but not FATAL :)) does not affect the behaviour of Cygnus, since it only affects the Management Interface (which currently has a single method that returns the version you are running). Thus, Cygnus should be working properly on the port you have configured for the HTTPSource in your /usr/cygnus/conf/agent_1.conf file:
cygnusagent.sources.http-source.port = 5050
Before installing the new version, I recommend removing the previous one. I mean, do not simply run yum install cygnus to update the existing installation, but explicitly yum remove cygnus and then yum install cygnus. The reason is another bug regarding the RPM deployment that was also fixed in version 0.7.1.
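A minimal sketch of that reinstall, followed by a check that the HTTPSource port from agent_1.conf (5050 here) is actually listening:
# remove the old package and install the new one, as recommended above
sudo yum remove cygnus
sudo yum install cygnus
# restart and confirm the source port is bound
sudo service cygnus start
sudo netstat -tlnp | grep 5050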