We have created an OpenStack Swift cluster with one proxy server and three storage nodes. The configuration consists of two regions and three zones.
We are able to create containers, but when we try to upload files we get 503 Service Unavailable and see the log entries below in swift.log.
I am receiving data in my Elasticsearch cluster directly from the OpenShift log forwarder (Fluentd), without any pipeline or other integrations in between. I am using an open-source Elasticsearch deployment outside of the OpenShift platform, and the logs arrive under three different tags: Application, Infra, and Audit.
All of the application logs currently land in a single index (Application), but I would like to split them by application into separate indices so I can build per-application dashboards and visualizations.
How can I resolve this? Any samples I could try would be appreciated.
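One possible approach (a minimal, untested sketch) is to reroute the documents on the Elasticsearch side with an ingest pipeline, assuming the forwarded documents contain a field that identifies the application, such as kubernetes.namespace_name; the field name, endpoint, and index names below are placeholders to adapt to your setup.

    # Sketch: route application logs to one index per application via an
    # ingest pipeline. kubernetes.namespace_name and the index names are
    # assumptions -- inspect your documents to find the field that actually
    # identifies the application.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://my-elasticsearch:9200")  # hypothetical endpoint

    # 1. Pipeline that rewrites the target index per document.
    es.ingest.put_pipeline(
        id="route-app-logs",
        body={
            "description": "Send each application's logs to its own index",
            "processors": [
                {
                    "set": {
                        "field": "_index",
                        # Mustache templating reads the value from each document.
                        "value": "app-{{kubernetes.namespace_name}}",
                    }
                }
            ],
        },
    )

    # 2. Attach it as the default pipeline of the index the forwarder currently
    #    writes to, so documents are rerouted without touching the OpenShift side.
    es.indices.put_settings(
        index="app",  # hypothetical: the index currently receiving application logs
        body={"index.default_pipeline": "route-app-logs"},
    )

Once the rerouted indices exist, you can point per-application index patterns, dashboards, and visualizations at them.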
I'm planning to build a website to host static files. Users will upload their files, and I deploy a bunch of Deployments with nginx images for them to a Kubernetes node. My main goal is that at some point users will deploy their apps to a subdomain like my-blog-app.mysite.com, and after some time they can use custom domains.
I understand that when I deploy an nginx image in a pod, I have to create a Service to expose port 80 (or 443) to the internet via a load balancer.
I also read about Ingress; it looks like what I need, but I don't think I fully understand the concept.
My question is: if, for example, I have 500 nginx pods running (each serving a different website), do I need a Service for every pod on that node (in this case, 500 Services)?
You are looking for https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting.
With this type of Ingress, you route traffic to the different nginx instances based on the Host header, which matches your use case perfectly.
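For illustration, a minimal name-based Ingress could look like the sketch below (hostnames and Service names are placeholders). Each host rule points at the Service of one site, and those Services can be plain ClusterIP Services, because only the Ingress controller itself needs to be exposed through a load balancer.

    # Sketch: one Ingress fanning out to per-site Services by Host header.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: user-sites
    spec:
      rules:
      - host: my-blog-app.mysite.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-blog-app-svc   # placeholder Service name
                port:
                  number: 80
      - host: another-app.mysite.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: another-app-svc   # placeholder Service name
                port:
                  number: 80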
In any case, yes: with your current architecture you need a Service for each pod. Have you considered a different approach, such as a general listener (a shared set of nginx instances) that serves the correct content based on authorization or something similar?
I want to create a web application that uses a graph database hosted on Amazon Web Services (AWS). As far as I understand, to use a graph database with AWS DynamoDB as the storage backend, you need to run a Titan server, and such a server can be set up on an EC2 instance.
Now, to remain scalable, I will eventually want to deploy multiple such instances behind (a couple of) load balancers. The question that arises is:
Can multiple Titan DB instances work with the same, shared storage backend (such as DynamoDB)?
Yes. Titan Server is a Gremlin Server, which is based on Netty. You configure it with a graph properties file that points to your storage backend (DynamoDB) and, optionally, an indexing backend. As long as you use the same graph properties file for each Titan Server, it should work with the architecture you described.
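For reference, the shared graph properties file could look roughly like the sketch below. The exact keys and the storage manager class come from the dynamodb-titan-storage-backend project, so treat these as placeholders and verify them against the version you deploy.

    # Sketch of a graph properties file shared by every Titan/Gremlin Server
    # instance (values are placeholders; verify against your backend's docs).
    gremlin.graph=com.thinkaurelius.titan.core.TitanFactory
    storage.backend=com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
    storage.dynamodb.client.endpoint=https://dynamodb.us-east-1.amazonaws.com
    # Optional indexing backend, e.g. Elasticsearch:
    # index.search.backend=elasticsearch
    # index.search.hostname=search.example.com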
I have several Meteor apps that I currently run as individual Meteor instances. This means I have several Meteor processes running at the same time, each occupying its own port, and I need to reconfigure the server whenever I add a new project (using a reverse proxy to redirect incoming requests on port 80 to the corresponding Meteor ports).
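For reference, the per-app reverse proxy setup described above typically looks something like this sketch (hostnames and ports are placeholders), with one server block added per Meteor process:

    # Sketch: nginx routing by hostname to one Meteor process per port.
    server {
        listen 80;
        server_name app-one.example.com;
        location / {
            proxy_pass http://127.0.0.1:3001;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;   # keep websockets working
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }
    server {
        listen 80;
        server_name app-two.example.com;
        location / {
            proxy_pass http://127.0.0.1:3002;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }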
I could only find information about running multiple Meteor instances for scaling, but nothing about running multiple apps in one instance.
I have a Graphite relay and webapp installed on one server, which is supposed to communicate with 4 carbon-caches (and their respective webapps) on 4 other servers. I've validated that the relay is working by observing that different whisper files are being updated on the different carbon-cache servers.
However, the webapp only shows metrics that are stored on the first carbon-cache server in the list, and I'm not sure what else to look at.
The webapps on the carbon-cache servers are set up to listen on port 81, and I have the following in local_settings.py on the relay server (the one I'm pointing my browser at):
CLUSTER_SERVERS = ["graphite-storage1.mydomain.com:81", "graphite-storage2.mydomain.com:81", "graphite-storage3.mydomain.com:81", "graphite-storage4.mydomain.com:81", ]
However, at one point I did have all metrics on all servers: I migrated from a single instance to this federated cluster and have since removed the whisper files that weren't active on each carbon-cache server. I've restarted all the carbon-caches, the carbon-relay, and the webapp server several times. Is the metrics-to-carbon-cache mapping getting cached somewhere? Have I missed a setting?
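For reference, the metric-to-cache mapping on the relay side is driven by the [relay] section of carbon.conf; a typical consistent-hashing setup looks roughly like the sketch below (hostnames and ports are placeholders), and it is worth checking that the DESTINATIONS list lines up with CLUSTER_SERVERS on the webapp.

    # Sketch of the [relay] section of carbon.conf on the relay host; the
    # DESTINATIONS list determines which carbon-cache each metric hashes to.
    [relay]
    RELAY_METHOD = consistent-hashing
    REPLICATION_FACTOR = 1
    DESTINATIONS = graphite-storage1.mydomain.com:2004:a, graphite-storage2.mydomain.com:2004:a, graphite-storage3.mydomain.com:2004:a, graphite-storage4.mydomain.com:2004:a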