I am trying to use OpenStack (Liberty) Swift with Ceph (Jewel) via radosgw. The aim is for objects to be stored on Ceph OSDs. I have a working OpenStack and Ceph cluster.
To use Ceph as the object storage backend, I installed and configured radosgw in the Ceph cluster. On the OpenStack node I installed "python-swiftclient", created an object-store service, and added an endpoint for that service pointing at the radosgw URL.
I followed the instructions given in the link below.
http://docs.ceph.com/docs/jewel/radosgw/keystone/
ceph.conf
[client.rgw.rgw]
rgw_frontends = "civetweb port=7480"
rgw enable ops log = true
rgw ops log rados = true
rgw thread pool size = 2000
rgw override bucket index max shards = 23
ms dispatch throttle bytes = 209715200
[client.radosgw.gateway]
rgw keystone url = http://controller:35357
rgw keystone admin token = ADMIN
rgw keystone accepted roles = _member_,admin
rgw keystone token cache size = 200
rgw keystone revocation interval = 60
rgw s3 auth use keystone = true
nss db path = /var/ceph/nss
OpenStack endpoints
# openstack endpoint list |grep -i object
| 8efd00b48db249e69244a5f3e35356b1 | RegionOne | swift | object-store | True | internal | http://rgw:7480/swift/v1 |
| b7d1c7ccc84640138116d8e6676b28a3 | RegionOne | swift | object-store | True | admin | http://rgw:7480/swift/v1 |
| c7844842b53647a4b623905c54cc6c75 | RegionOne | swift | object-store | True | public | http://rgw:7480/swift/v1 |
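For reference, endpoints like these are typically created with the OpenStack CLI, roughly as follows (a sketch assuming the Identity v3 client; the service name, region, and URL are taken from the listing above):
openstack service create --name swift --description "OpenStack Object Storage" object-store
openstack endpoint create --region RegionOne object-store public http://rgw:7480/swift/v1
openstack endpoint create --region RegionOne object-store internal http://rgw:7480/swift/v1
openstack endpoint create --region RegionOne object-store admin http://rgw:7480/swift/v1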
Output of swift list from the command line
# swift list -v
test_CONTAINER
Output of swift stat from the command line
# swift stat -v
StorageURL: http://rgw:7480/swift/v1
Auth Token: AUTH_rgwtk0e00000074657374757365723a737769667431dd200c6d2136112ee6d657300feb16d05ffa8f80a2e53ce6c257b32ec5505ff396e5e8
Account: v1
Containers: 7
Objects: 12
Bytes: 168
Meta Temp-Url-Key: healthseq
X-Account-Bytes-Used-Actual: 40960
X-Timestamp: 1473615022.41820
X-Trans-Id: tx0000000000000000006b3-0057d594ae-1f5cb-default
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
When I try to access the object store containers in the OpenStack dashboard, I get the following error:
http://pastebin.com/ALEvYCX8
Remove the [client.radosgw.gateway] section header and merge its settings into the [client.rgw.rgw] section. Your gateway runs as client.rgw.rgw, so Ceph never applies the Keystone settings that sit under the second header.
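A sketch of the merged result, combining every setting from both blocks under the instance name the gateway actually runs as (restart the radosgw service afterwards):
[client.rgw.rgw]
rgw_frontends = "civetweb port=7480"
rgw enable ops log = true
rgw ops log rados = true
rgw thread pool size = 2000
rgw override bucket index max shards = 23
ms dispatch throttle bytes = 209715200
rgw keystone url = http://controller:35357
rgw keystone admin token = ADMIN
rgw keystone accepted roles = _member_,admin
rgw keystone token cache size = 200
rgw keystone revocation interval = 60
rgw s3 auth use keystone = true
nss db path = /var/ceph/nss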
Returning to an app from a few months ago, I ran:
amplify push
which returned
Current Environment: dev
| Category | Resource name | Operation | Provider plugin |
| -------- | --------------------- | --------- | ----------------- |
| Api | e9app201907021400api | Update | awscloudformation |
| Auth | eauth201907021400 | No Change | awscloudformation |
? Are you sure you want to continue? Yes
GraphQL schema compiled successfully.
Edit your schema at /Projects/2019/june/e9-app/amp<snip>0api/schema
✖ An error occurred when pushing the resources to the cloud
The AWS Access Key Id you provided does not exist in our records.
So I generated a new set of credentials in the console and installed them with aws configure.
I ran aws configure list
and got
Name Value Type Location
---- ----- ---- --------
profile default manual --profile
access_key ****************CAGH shared-credentials-file
secret_key ****************uU0C shared-credentials-file
region eu-west-1 config-file ~/.aws/config
Then I checked:
cat ~/.aws/credentials
which returned:
[default]
aws_access_key_id = ****************CAGH
aws_secret_access_key = ****************uU0C
amplify push continues to return the same message.
When I go back to the console and look at the user, it says "access key age: Today", as opposed to 45 days ago (before I requested new credentials).
Any clues as to what else I can check please?
Check your configured 'profileName' in /amplify/.config/local-aws-info.json.
In my case, I was trying to run the push command using a different profile and that didn't work. Switching to the correct profile solved the issue.
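For example, from the project root (a sketch; the exact JSON layout can vary between Amplify CLI versions):
cat amplify/.config/local-aws-info.json
# expected to look something like:
# {
#   "dev": {
#     "configLevel": "project",
#     "useProfile": true,
#     "profileName": "default"
#   }
# }
If profileName points at a different profile than the one you refreshed with aws configure, update it (or re-run amplify configure project).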
It would appear the Inactive key associated with the user account was invalidating the Active key. To test the theory I reactivated the Inactive key, and I've since deleted it.
So it would seem that Amplify doesn't see the non-primary key.
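One way to verify this from the CLI, using standard AWS IAM commands (the user name and key id below are placeholders):
# list every access key for the user, with its Active/Inactive status
aws iam list-access-keys --user-name <your-iam-user>
# deactivate, or delete outright, a stale key
aws iam update-access-key --user-name <your-iam-user> --access-key-id <old-key-id> --status Inactive
aws iam delete-access-key --user-name <your-iam-user> --access-key-id <old-key-id>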
I have prepared Gnocchi following the documentation at https://gnocchi.xyz/stable_4.2/rest.html. I am using Keystone, but I am not able to make a request.
Sample:
GET http://<serverIP>:8041/v1/metric/0b5aa633-3ebf-49d5-99ad-e78302c41376 HTTP/1.1
Response:
date: Fri, 30 Mar 2018 20:24:26 GMT
server: Apache/2.4.18 (Ubuntu)
www-authenticate: Keystone uri='http://127.0.0.1/identity'
content-length: 114
connection: close
content-type: application/json
{
"error": {
"message": "The request you have made requires authentication.",
"code": 401,
"title": "Unauthorized"
}
}
I have OpenStack Queens installed via the DevStack script on Ubuntu 16.04.
I have only admin credentials. How can I get the required X-Auth-Token?
Get a token using the OpenStack CLI:
openstack token issue
Output:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2018-04-11T16:48:58+0000 |
| id | gAAAAABazi5qBuJ27ZJ_F_EbtE3kKTheImVW1nrazoB6_LKumLeRnacwavmmLdTThVLENQ0Idj4vm-L1OG1xnjvuRwqOQq1lFpSDP8N_Cazb-QGEIIgMaqflp9Z_NaScKkekrHmddnzRNM1-LHRHoAi5WMwMO2Yyf8CjR8331ME1G6KY1SHzGWo |
| project_id | 97335134c06949fea2caebb0c5baa11a |
| user_id | 35b0022e29ee4e5588fca36d30e95afb |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Then pass the token id in the X-Auth-Token header of the request:
X-Auth-Token: gAAAAABazi5qBuJ27ZJ_F_EbtE3kKTheImVW1nrazoB6_LKumLeRnacwavmmLdTThVLENQ0Idj4vm-L1OG1xnjvuRwqOQq1lFpSDP8N_Cazb-QGEIIgMaqflp9Z_NaScKkekrHmddnzRNM1-LHRHoAi5WMwMO2Yyf8CjR8331ME1G6KY1SHzGWo
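Putting it together in one shot (a sketch; <serverIP> and the metric id come from the question above):
TOKEN=$(openstack token issue -f value -c id)
curl -H "X-Auth-Token: $TOKEN" http://<serverIP>:8041/v1/metric/0b5aa633-3ebf-49d5-99ad-e78302c41376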
I already found the solution and was about to post it.
Basically, after Gnocchi is installed on OpenStack (whether a real deployment or DevStack), it needs to be enabled and authenticated correctly.
The problem was that I had only been authenticating as the OpenStack admin; Gnocchi needs to authenticate as its own service user, separate from the real OpenStack admin.
So here is what I did. I created a gnocchi-openrc file with this:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=service
export OS_USERNAME=gnocchi
export OS_PASSWORD=**********
export OS_AUTH_URL=http://20.3.39.13/identity/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_AUTH_TYPE=password
Then grant the gnocchi user the admin role on the service project to make it permanent:
sudo openstack role add --project service --user gnocchi admin
Then I upgraded Ceilometer:
ceilometer-upgrade
If the Ceilometer upgrade completes without errors, source the OpenStack admin rc file again:
source admin-openrc.sh
Gnocchi will then be enabled, the metrics can be exposed to OSM, and Prometheus and Grafana work automatically.
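To verify the setup, you can query Gnocchi with the new credentials (a sketch assuming python-gnocchiclient is installed):
source gnocchi-openrc
gnocchi resource list
gnocchi metric list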
I followed these threads:
https://ask.openstack.org/en/question/110863/gnocchi-keystone-unable-to-validate-token/
https://bugzilla.redhat.com/show_bug.cgi?id=1434345 (in case of errors during the Ceilometer upgrade)
I've configured MaxScale with two servers (srv50/srv51); one of them is the master and the other is a slave.
Here is my configuration file /etc/maxscale.cnf:
[Read-Only Service]
type=service
router=readconnroute
servers=server50, server51
user=YYYYYYYYYYYYY
passwd=XXXXXXXXXXXXXX
router_options=slave
[Write-Only Service]
type=service
router=readconnroute
servers=server50, server51
user=YYYYYYYYYYYYY
passwd=XXXXXXXXXXXXXX
router_options=master
[Read-Only Listener]
type=listener
service=Read-Only Service
protocol=MySQLClient
port=4008
[Write-Only Listener]
type=listener
service=Write-Only Service
protocol=MySQLClient
port=4009
As I understand it, router_options tells the service which server to target, so router_options=master should send the write queries to the master.
MaxScale (via maxadmin) seems to discover the two servers and understand which one is the master:
MaxScale> list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server | Address | Port | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
server51 | 192.168.0.51 | 3306 | 0 | Slave, Running
server50 | 192.168.0.50 | 3306 | 0 | Master, Running
-------------------+-----------------+-------+-------------+--------------------
But even though I can connect locally with the MySQL client to my MaxScale Write-Only Listener port (4009), the listeners are in Stopped state. Is that normal?
MaxScale> list listeners
Listeners.
---------------------+--------------------+-----------------+-------+--------
Service Name | Protocol Module | Address | Port | State
---------------------+--------------------+-----------------+-------+--------
Read-Only Service | MySQLClient | * | 4008 | Stopped
Write-Only Service | MySQLClient | * | 4009 | Stopped
MaxAdmin Service | maxscaled | * | 6603 | Running
---------------------+--------------------+-----------------+-------+--------
I tried to create a database through the write listener, and it was created only on srv51 (the slave), not on srv50.
Is something wrong in my configuration? It's strange because this isn't my first cluster, and on the other clusters all writes go to the master (but there the listeners are Running). Am I misunderstanding the meaning of "router_options=master"? How do I start the listeners? I'd prefer to keep srv51 in the write service's server list to detect topology changes.
===== UPDATE =====
After checking the log file /var/log/maxscale/maxscale1.log, I found that my monitor user didn't have the correct password:
[MySQL Monitor]
type=monitor
module=mysqlmon
servers=server50, server51
user=MONITOR
passwd=MONITOR_PASS
monitor_interval=10000
I corrected the user's password and restarted MaxScale. Now everything is running:
MaxScale> list listeners
Listeners.
---------------------+--------------------+-----------------+-------+--------
Service Name | Protocol Module | Address | Port | State
---------------------+--------------------+-----------------+-------+--------
Read-Only Service | MySQLClient | * | 4008 | Running
Write-Only Service | MySQLClient | * | 4009 | Running
MaxAdmin Service | maxscaled | * | 6603 | Running
---------------------+--------------------+-----------------+-------+--------
But write queries were still being executed on the slave and not on the master.
Thanks to MariaDB support, I found the cause. I was connecting like this:
mysql -h localhost --port=4009 -u USER -p
But MaxScale and MySQL were installed on the same server. Even though MySQL binds port 3306, when you specify 'localhost' the MySQL client connects through the local Unix socket directly to MySQL; the port option is ignored!
The solution is to connect like this:
mysql -h 127.0.0.1 --port=4009 -u USER -p
or like this:
mysql -h localhost --protocol=tcp --port=4009 -u USER -p
I've tried both solutions and they work.
The solution for the listeners not Running is in the update to the question.
If writes end up on the slave, the simplest explanation is that you're executing them against the wrong port or your configuration is wrong. To diagnose these problems, enable the info log level by adding log_info=true under the [maxscale] section.
If enabling the info log and inspecting the log files does not provide any clues, I'd suggest opening a bug report on the MaxScale Jira.
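For example (a minimal sketch of the relevant part of /etc/maxscale.cnf):
[maxscale]
log_info=true
With the info level enabled, the log should record the routing decision for each session, which makes it easy to see which server actually received the writes.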
After a power failure of the host machine, the OpenStack Cinder volumes have entered a state in which they can be neither attached nor detached.
~$ nova volume-attach ### ###
ERROR: Invalid volume: already attached (HTTP 400) (Request-ID: req-###)
~$ nova volume-detach ### ###
ERROR: Invalid volume: already detached (HTTP 400) (Request-ID: req-###)
The status of the volume itself is listed as attached:
cinder list
+-----+-----------+---------------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+-----+-----------+---------------------+------+-------------+----------+-------------+
| ### | available | volume-data | 690 | Storage | false | ### |
+-----+-----------+---------------------+------+-------------+----------+-------------+
The volume is not found on the instance even though it's listed as attached in the CLI and in Horizon (the device isn't found in /dev/, nor the mount point in /mnt/).
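It may help to compare what each service believes about the attachment (standard commands; the IDs are placeholders):
cinder show <volume-id>      # inspect the status and attachments fields
nova show <instance-id>      # inspect os-extended-volumes:volumes_attached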
I am consistently getting
Error: Failed to launch instance-id": Please try again later [Error: Timeout while waiting on RPC response -topic: "network", RPC method: "get_instance_nw_info" info: ""]
every time I launch an instance in OpenStack. I've tried both the OpenStack dashboard and the terminal (nova). Using the terminal, here's the command I ran:
nova boot --flavor "2" --image "b26c9acf-06c0-4ff8-b1c7-aca3052485c8" --num-instances "2" --security-groups "default" --meta description="test" test
When I check the list of instances, here's the output:
+--------------------------------------+-------------------------------------------+--------+------------+-------------+----------+
| ID                                   | Name                                      | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------------------------+--------+------------+-------------+----------+
| a0477666-b810-4c73-94e6-2a66576bccac | test-a0477666-b810-4c73-94e6-2a66576bccac | ERROR  | None       | NOSTATE     |          |
| c5822a6f-4270-4718-95c4-9f28fea8de82 | test-c5822a6f-4270-4718-95c4-9f28fea8de82 | ERROR  | None       | NOSTATE     |          |
+--------------------------------------+-------------------------------------------+--------+------------+-------------+----------+
Am I missing a configuration entry (when using the dashboard) or a sub-command (when using nova from the terminal) during launch?
Any feedback is greatly appreciated. Thanks in advance!
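One place to look for more detail is the fault recorded on an errored instance, e.g. using an ID from the listing above:
nova show a0477666-b810-4c73-94e6-2a66576bccac
# the "fault" field normally contains the full error message and traceback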