Is it possible to configure an external or privately hosted tilemap server in Kibana on AWS Elasticsearch? My goal is to have zoom levels beyond level 7 (the current max in AES Kibana). According to the config page at elastic.co (https://www.elastic.co/guide/en/kibana/current/settings.html) I should modify map.tilemap.url, but this config is not available in Kibana's "Advanced Settings".
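On a self-managed Kibana I believe this would look roughly like the snippet below in kibana.yml (the tile server URL, zoom level, and attribution are placeholders), but on AWS Elasticsearch I have no access to kibana.yml:

```yaml
# kibana.yml on a self-managed Kibana (sketch; URL and maxZoom are placeholders)
map.tilemap.url: "https://tiles.example.com/{z}/{x}/{y}.png"
map.tilemap.options.maxZoom: 18
map.tilemap.options.attribution: "My Tile Provider"
```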
Is there any way of configuring a tile server within AWS Elasticsearch?
I would like to know whether I can make the Airflow UI accessible, like a regular web page, to everyone who has a user account. For that I would have to host it on a server, right? Which server do you recommend for this? I was looking around and some people were using Amazon EC2.
If your goal is just to make the Airflow UI publicly visible, there are a lot of options; you could even do it from your local computer (of course that is not a good idea).
Before choosing the cloud provider and the service, you need to think about the requirements:
Does your team have the skills and the time to manage the server? If not, you need a managed service like GCP Cloud Composer or AWS MWAA.
Which executor do you want to use? KubernetesExecutor? CeleryExecutor on K8S? If so, you need a K8S service and not just a VM.
Do you have a heavy load? Do you need an HA mode? What about scalability?
After defining the requirements, you can choose between these options:
A small server with LocalExecutor or CeleryExecutor on a VM -> AWS EC2 with a static IP and Route 53 for the DNS name (see the sketch after this list)
A scalable setup in HA mode on a K8S cluster -> AWS EKS or Google GKE
A managed service so you can focus only on the development part -> Google Cloud Composer
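For the first option, here is a minimal sketch of one way to expose the UI on an EC2 VM, assuming Docker is installed; the image tag, port, and paths are only illustrative, and `airflow standalone` is meant for development rather than production:

```yaml
# docker-compose.yml on the EC2 instance (illustrative sketch, not production hardening)
version: "3"
services:
  airflow:
    image: apache/airflow:2.3.0        # illustrative tag
    command: standalone                # starts the scheduler + webserver and creates an admin user
    ports:
      - "8080:8080"                    # expose the UI; point your static IP / Route 53 record at this port
    volumes:
      - ./dags:/opt/airflow/dags       # DAGs live on the VM (or an attached volume)
```

On top of that you would normally add HTTPS (a reverse proxy or an ALB) and restrict access so only your users can reach it.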
I am trying to create MWAA as the root user, and I have all my AWS services (S3 and EMR) in North California. MWAA isn't available in North California, so I created it in Oregon.
I am creating it in a private network; it also required a new S3 bucket in that region for my dags folder.
It also needed a new VPC and private subnets, since we don't have anything in that region; I created them by clicking on "Create VPC".
Now when I open the Airflow UI, it says "This site can't be reached". Do I need to add my IP to the security group here to access the Airflow UI?
Can someone please guide me?
Thanks,
Xi
From the AWS MWAA documentation:
3. Enable network access. You'll need to create a mechanism in your Amazon VPC to connect to the VPC endpoint (AWS PrivateLink) for your Apache Airflow Web server. For example, by creating a VPN tunnel from your computer using an AWS Client VPN.
Apache Airflow access modes (AWS)
The AWS documentation suggests 3 different approaches for accomplishing this (tutorials are linked in the documentation).
Using an AWS Client VPN
Using a Linux Bastion Host
Using a Load Balancer (advanced)
Accessing the VPC endpoint for your Apache Airflow Web server (private network access)
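Do not just whitelist your public IP on the security group; with the private access mode the web server is only reachable through that VPC endpoint, so the traffic has to originate inside the VPC (Client VPN, bastion host, etc.). As a rough, CloudFormation-style illustration (the group ID and CIDR are placeholders, not values from the MWAA docs), the security group attached to the environment has to allow HTTPS from inside the VPC:

```yaml
# Sketch: allow HTTPS to the MWAA web server endpoint from inside the VPC
Resources:
  MwaaWebserverIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: sg-0123456789abcdef0   # placeholder: the security group attached to the MWAA environment
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
      CidrIp: 10.192.0.0/16           # placeholder: the VPC (or Client VPN) CIDR, not your public IP
```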
I have deployed API Manager 4.0.0 all-in-one on 2 VMs. I am using MySQL as the DB, which is on a separate VM, and I am sharing the databases as mentioned in the document. Now I am trying to cluster these 2 nodes as described in this document. There are a few things that are not clear to me from the document.
If I set the https_endpoint to the domain "gw.am.wso2.com" under the gateway environments configuration, do I need to add port 8243 to my reverse proxy server or not? It fails when trying out an API in the console. Why?
As Kibana is the web UI for Elasticsearch, it is better to make it highly available. After reading the docs and building a demo, I cannot find a way to set up two Kibana instances simultaneously for a single Elasticsearch cluster.
After digging deeper into Kibana, I finally found that Kibana stores its data and configuration about dashboards and searches in the backend ES. In this sense Kibana is just like a proxy, and ES serves as its database.
So the answer is yes: Kibana supports high availability through ES.
You could simply change the server.port value to another free port (e.g. 6602) in your kibana.yml, since 5601 is the default. That way you have two Kibana instances (one running on the default port and the other on port 6602), both pointing to the same ES instance.
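For example, the second instance's kibana.yml might look like the sketch below (host names are placeholders; depending on your Kibana version the setting is elasticsearch.hosts or elasticsearch.url):

```yaml
# kibana.yml for the second Kibana instance (the first keeps the default server.port: 5601)
server.port: 6602
server.host: "0.0.0.0"
# Both instances point at the same Elasticsearch cluster, which stores the
# dashboards and saved searches, so the two UIs stay in sync.
elasticsearch.hosts: ["http://es-node-1:9200"]
```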
I know how to set up Auto Scaling. Now I need to know how to configure a web server so that I can add it to an ELB and have it trigger Auto Scaling (up and down).
I have an EC2 server running a web server. I have mounted an EBS volume and am using it as the web root. Now I want to make an AMI based on this server and tell Auto Scaling to launch new servers based on this AMI.
Every day my WordPress site gets updated with new posts. If I make the AMI today and two days later a traffic spike causes the Auto Scaling group behind the ELB to scale up to meet demand, how will my EBS data be updated on the AMI?
I want to understand the role of the AMI in Auto Scaling. How will a newly launched server in the scaling group get the www data that is on the attached EBS volume? I know that an EBS volume can only be attached to one server.
Also, when the AMI is used to launch a new server, will it grab the latest data from the source server and update the AMI at the moment of launch, so that the new server has the most recent changes?
Can someone guide me through this?