I installed DevStack and it seems to be working fine.
But looking at the dashboard, specifically the Project tab, I have neither the "Manage Network" section nor the "Object Store" section.
I noticed this after comparing with the TryStack sandbox.
Is that normal?
The list of enabled services in DevStack is configured via the localrc file. The Neutron networking service and the Swift object storage service are off by default.
The following modifications should enable both:
# Replace nova-network with Neutron
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
# Enable Swift
enable_service swift
The DevStack docs say to put the following in local.conf:
enable_service s-proxy s-object s-container s-account
The list of enabled services in DevStack is configured in the local.conf file.
To enable object storage (Swift) with DevStack, add the following lines to your local.conf file:
# Enable swift services
enable_service s-proxy
enable_service s-object
enable_service s-container
enable_service s-account
# Enable tempurls and set credentials
SWIFT_HASH=your_hash          # any random string, e.g. "abc123"
SWIFT_TEMPURL_KEY=your_key    # any random string, e.g. "abc123"
SWIFT_ENABLE_TEMPURLS=True
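Once stack.sh completes, a quick way to verify that Swift is actually up is to exercise it with the client (a sketch, assuming DevStack's usual demo credentials from openrc):
source ~/devstack/openrc demo demo
openstack container create test   # succeeds only if the Swift services are running
openstack container list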
Currently I have set up an instance with one interface and a VIP managed by keepalived. Communication to the primary interface works, but not to the VIP. I have tried adding an additional port with that IP address, but with no luck. Below is what I have tried, and the resulting error (192.168.1.50 is the VIP):
openstack port create --network l_network --fixed-ip subnet=10990c09-5893-4r68-ecre-307ed7740ey6,ip-address=192.168.1.50 --mac-address=fb:17:3d:a6:08:37 port1
Unable to complete operation for network
f6601b8f-dhb2-4567-t399-124fb5hd8895. The mac address
fb:17:3d:a6:08:37 is in use.
I managed to get it working by creating an additional port and then linking it to the OpenStack instance.
Create the port for the VIP:
neutron port-create --fixed-ip subnet_id=<subnet_id>,ip_address=192.168.1.50 --no-security-groups --name "vip" <id_of_net>
To find the subnet ID and network ID:
neutron net-list
Link the port to the instance:
neutron port-update <port_id_of_current_instance> --allowed-address-pairs type=dict list=true ip_address=192.168.1.50
To find the port IDs:
neutron port-list
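For reference, the equivalent with the newer openstack client (the standalone neutron CLI is deprecated) would look roughly like this; the network, subnet, and port IDs are placeholders:
openstack port create --network <network> --fixed-ip subnet=<subnet_id>,ip-address=192.168.1.50 --no-security-group vip
openstack port set <port_id_of_current_instance> --allowed-address ip-address=192.168.1.50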
I have a .NET 5.0 application that uses the Serilog console sink for logging. Part of the application starts a Prometheus metrics endpoint; that part is not integrated with Serilog and also prints to the console.
When running as a Docker container, both Serilog and the endpoint print to the console.
When I install it as a systemd service on Ubuntu 20.04 LTS, I see only the Prometheus logs, no Serilog output.
app.service
[Unit]
Description=App1
Wants=network-online.target
After=network-online.target
[Service]
User=appuser
Group=appuser
WorkingDirectory=/usr/local/bin/app
EnvironmentFile=/usr/local/bin/app/app.env
ExecStart=dotnet /usr/local/bin/app/app.dll
Restart=always
[Install]
WantedBy=multi-user.target
Running journalctl -f -u app shows only the Prometheus logs. Is there a way to configure Serilog to work with systemd?
It turned out to be a configuration error: the application was still trying to log to a file. Once that was fixed, Serilog console logging showed up in the systemd journal just fine.
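For context, systemd already captures a service's stdout/stderr into the journal by default (StandardOutput=journal), so the plain Serilog console sink is enough; a quick way to confirm what the unit is doing:
systemctl show app -p StandardOutput -p StandardError
journalctl -u app -f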
Following the official documentation, I'm trying to deploy DevStack on Ubuntu 18.04 Server in a virtual machine. The DevStack node has only one network card (ens160), connected to a network with the CIDR 10.20.30.40/24. I need my instances to be publicly accessible on this network (from 10.20.30.240 to 10.20.30.250). So, again following the official floating-IP documentation, I came up with this local.conf file:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
PUBLIC_INTERFACE=ens160
HOST_IP=10.20.30.40
FLOATING_RANGE=10.20.30.40/24
PUBLIC_NETWORK_GATEWAY=10.20.30.1
Q_FLOATING_ALLOCATION_POOL=start=10.20.30.240,end=10.20.30.250
This results in a br-ex bridge with the global IP address 10.20.30.40 and the secondary IP address 10.20.30.1. (The gateway already exists on the network; isn't the PUBLIC_NETWORK_GATEWAY parameter supposed to refer to the real gateway on the network?)
Now, after a successful deployment, disabling ufw (according to this), creating a CirrOS instance with a proper security group for ping and SSH, and attaching a floating IP, I can access my instance only from the DevStack node, not from the rest of the network! Also, from within the CirrOS instance I cannot reach the outside world (even though I can reach it from the DevStack node).
Afterwards, having watched this video, I modified the local.conf file like this:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
FLAT_INTERFACE=ens160
HOST_IP=10.20.30.40
FLOATING_RANGE=10.20.30.240/28
After a successful deployment and instance setup, I can still access my instance only from the DevStack node and not from outside! But the good news is that I can now reach the outside world from within the CirrOS instance.
Any help would be appreciated!
Update
On the second configuration, watching packets in tcpdump while pinging the instance's floating IP, I observed that the who-has ARP broadcast for the floating IP reaches the DevStack node from the network router; however, no is-at reply is generated, so the ICMP packets are never routed to the DevStack node and the instance.
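For reference, this is the kind of capture that shows it (run on the DevStack node):
sudo tcpdump -n -i ens160 arp or icmp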
So, with some tricks, I crafted the reply myself and everything worked fine afterwards; but this is certainly not a solution. I imagine DevStack should work out of the box without any tweaking, so this is probably due to a misconfiguration of my DevStack.
After 5 days of tests, research, and reading, I found this: OpenStack VM is not accessible on LAN
Run the following commands on the DevStack node:
# Answer ARP requests for the floating IPs on the physical interface
echo 1 > /proc/sys/net/ipv4/conf/ens160/proxy_arp
# NAT instance traffic out through the physical interface
iptables -t nat -A POSTROUTING -o ens160 -j MASQUERADE
That'll do the trick!
Cheers!
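Note that both settings are lost on reboot. Something like the following should persist them (a sketch, assuming an Ubuntu-style setup; the sysctl file name is arbitrary):
# Persist the proxy-ARP sysctl
echo 'net.ipv4.conf.ens160.proxy_arp = 1' | sudo tee /etc/sysctl.d/99-proxy-arp.conf
sudo sysctl --system
# Persist the NAT rule, e.g. with iptables-persistent
sudo apt install iptables-persistent && sudo netfilter-persistent save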
I set up a Kubernetes Cluster using kubeadm.
[root@master fedora]# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
I installed nginx-ingress using helm.
helm install stable/nginx-ingress --name=nginx --namespace=ingress-nginx -f nginx-values.yaml
The configuration file looks like this.
I also installed JupyterHub using helm, with this configuration file:
helm version
Client: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
helm install jupyterhub/jupyterhub --version=v0.7-fd73c61 --name=jh07 --namespace=jh07 -f config.yaml --timeout=14400
Everything works fine except the forwarding to the GitHub authentication service. I think it might have to do with this issue.
What settings do I have to change in the helm configuration files to make nginx forward the literal requests?
It was not a matter of configuration; my configuration was not all that wrong. It was a port problem. The machines I am using belong to two different accounts on an OpenStack server, and that OpenStack server has an ingress/egress controller. I thought I had opened up all the necessary ports... but it did not work. What struck me was that it sometimes did work: I figured out that it worked whenever all the pods were created on nodes belonging to one account.
So I decided to use only one account (opening all the necessary ports for the Kubernetes cluster listed here), and it worked.
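For example, the required kubeadm ports could be opened on the OpenStack side roughly like this (the security-group name k8s-nodes is a placeholder; the exact port list is in the kubeadm documentation linked above):
openstack security group rule create --protocol tcp --dst-port 6443 k8s-nodes
openstack security group rule create --protocol tcp --dst-port 2379:2380 k8s-nodes
openstack security group rule create --protocol tcp --dst-port 10250:10252 k8s-nodes
openstack security group rule create --protocol tcp --dst-port 30000:32767 k8s-nodes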
I will update my answer if I find out which ingress and egress rules I have to apply to the other account.
I am trying to set up a machine with a single network card running DevStack with Neutron, but stack.sh is failing with:
2014-12-16 23:39:47.221 | [ERROR] /home/stack/devstack/functions-common:1091 Failure creating private IPv4 subnet for f49997e9027f47fbbe7ea97c9bfd6d62
This is the result of trying to execute:
neutron subnet-create --tenant-id f49997e9027f47fbbe7ea97c9bfd6d62 --ip_version 4 --gateway 10.0.0.1 --name private-subnet 3c5f8df0-bfd0-4c92-9c8c-fd66fd26fd30 10.11.12.0/24
I have tried changing the gateway to 10.11.12.1, and this works.
My local.conf is:
[[local|localrc]]
HOST_IP=192.168.2.54
FLOATING_RANGE=192.168.2.224/27
FIXED_RANGE=10.11.12.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=p2p1
SERVICE_TOKEN=...
ADMIN_PASSWORD=...
MYSQL_PASSWORD=...
RABBIT_PASSWORD=...
SERVICE_PASSWORD=$ADMIN_PASSWORD
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service tempest
I suspect there are some settings I am missing that control this better. Can anyone advise what they are?
The floating range is actually the external network range, and the public network gateway needs to be part of this range. A separate setting is needed to specify the floating IP addresses. I found that the following worked:
HOST_IP=192.168.2.54
FLOATING_RANGE=192.168.2.0/24        # external (public) network range
FIXED_RANGE=10.11.12.0/24            # private tenant network
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=p2p1
NETWORK_GATEWAY=10.11.12.1           # gateway on the private network
PUBLIC_NETWORK_GATEWAY=192.168.2.1   # the real gateway on the external network
Q_FLOATING_ALLOCATION_POOL=start=192.168.2.225,end=192.168.2.250   # floating IPs
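After stacking, the external network and allocation pool that Neutron created can be checked like this (a quick sanity check; public and public-subnet are DevStack's default names):
source ~/devstack/openrc admin admin
openstack network list
openstack subnet show public-subnet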
Your PUBLIC_NETWORK_GATEWAY is your network's gateway, which is in the same IP range as your HOST_IP; for example, 192.168.2.1.