How to make Terraform output info from an internal AWS EC2 - terraform-provider-aws

I use Terraform to build an AWS EC2 instance with MicroK8s, and that part works.
But now I have a requirement: I want Terraform to output the MicroK8s API server information.
I know how to find the API server info manually:
SSH into the AWS EC2 instance
run the command "sudo microk8s kubectl add-node"
But I have no idea how to make Terraform output the info above.
Could you please show me the way?
Thanks,
Ziv
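A minimal sketch of one way to approach this, assuming the common pattern of a null_resource with a local-exec provisioner that runs the command over SSH; every resource name, variable, and the ubuntu login user below is an assumption, not something taken from your setup:

resource "aws_instance" "microk8s" {
  ami           = var.ami_id
  instance_type = "t3.medium"
  key_name      = var.key_name
}

# Terraform cannot capture the output of a command run on the instance directly into state,
# but a local-exec provisioner can run it over SSH and write the result to a local file.
resource "null_resource" "microk8s_join_info" {
  depends_on = [aws_instance.microk8s]

  provisioner "local-exec" {
    command = "ssh -o StrictHostKeyChecking=no -i ${var.private_key_path} ubuntu@${aws_instance.microk8s.public_ip} 'sudo microk8s add-node' > microk8s-join.txt"
  }
}

# The API server address itself is just the instance address plus the MicroK8s API port
# (16443 by default), so that part can be exposed as a normal Terraform output.
output "microk8s_public_ip" {
  value = aws_instance.microk8s.public_ip
}

After terraform apply, the join/token information ends up in microk8s-join.txt next to your configuration, and the instance address is available via terraform output.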

Related

How Do I Configure the Gremlin Console to Connect to AWS Neptune?

I have downloaded the Apache TinkerPop Gremlin Console but I cannot figure out how to connect it to my AWS Neptune instance. Please provide step-by-step instructions to get it connected to Neptune.
The official procedure is provided by AWS here: https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-console.html
Please be aware that, by default, your Neptune instance does not expose a remotely accessible port. Remote access has to be arranged via an Application Load Balancer or an AWS VPN connection into your VPC. For this reason, I highly recommend that you launch a small Linux instance in your VPC and SSH to it to follow the instructions first. You will also need to install Java 8 or later on that machine. If using a VPN, also make sure that inbound traffic to port 8182 is allowed on the VPC subnet(s) serviced by the AWS VPN endpoint. These are not the only options, but the others are answered elsewhere.
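Before starting the console setup, you can sanity-check connectivity from that Linux instance with a plain HTTPS request to Neptune's status endpoint (the hostname below is a placeholder; use your own cluster endpoint):
curl https://test.cluster-abcdefzxyz.planet-earth-1.neptune.amazonaws.com:8182/status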
Download the AWS CA certificate from https://www.amazontrust.com/repository/AmazonRootCA1.pem. It will come up as text in your browser; just copy and paste it into a file named something like aws.pem. This is needed to allow a TLS connection from the Gremlin Console.
Using the openssl tool (install one if you don't have it), export this PEM to a p12 file. p12, or PKCS12, is the format the Java certificate store recognizes. It goes like this:
openssl pkcs12 -export -nokeys -out aws.p12 -in aws.pem
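If you want to double-check the resulting truststore, keytool (bundled with the Java install you already need) can list its contents; it will prompt for the export password you chose above:
keytool -list -keystore aws.p12 -storetype PKCS12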
From here on, I have cd'd to the root of the Gremlin Console distribution.
Copy the aws.p12 file created above into the conf directory.
Obtain the full DNS address of your Neptune instance from the AWS Console.
Open conf/remote.yaml and use the following example pattern to edit the hosts entry and add the connectionPool configuration.
hosts: [test.cluster-abcdefzxyz.planet-earth-1.neptune.amazonaws.com]
connectionPool: { enableSsl: true, trustStore: conf/aws.p12 }
Create a file conf/remote.txt with the following lines. This step is optional, but otherwise you will be typing these two :remote commands each time you start the console.
:remote connect tinkerpop.server conf/remote.yaml
:remote console
Finally, issue the following in your terminal (gremlin.bat is the Windows launcher; on Linux use ./gremlin.sh instead):
cd bin
gremlin.bat -i conf/remote.txt
The Gremlin Console should start, connect to Neptune, and be ready to accept your Gremlin queries. To quickly test this:
g.V().limit(1)

Airflow stored in the cloud?

I would like to know if I can make the Airflow UI accessible, like a web page, to everyone who has a user account. For this, I would have to host it on a server, no? Which server do you recommend? I was looking around and saw some people using Amazon EC2.
If your goal is just making the Airflow UI publicly visible, there are a lot of solutions; you could even do it from your local computer (of course that is not a good idea).
Before choosing the cloud provider and the service, you need to think about the requirements:
Does your team have the skills and the time to manage the server? If not, you need a managed service like GCP Cloud Composer or AWS MWAA.
Which executor do you want to use? KubernetesExecutor? CeleryExecutor on K8s? If so, you need a K8s service and not just a VM.
Do you have a heavy load? Do you need HA mode? What about scalability?
After defining the requirements, you can choose between the options:
A small server with LocalExecutor or CeleryExecutor on a VM -> AWS EC2 with a static IP and Route 53 for the DNS name (see the sketch after this list)
A scalable server in HA mode on a K8s cluster -> AWS EKS or Google GKE
A managed service, so you focus only on the development part -> Google Cloud Composer
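For the first option, a rough sketch of getting Airflow 2.x running with the LocalExecutor on a single EC2 instance (the package version and the admin credentials below are placeholders; the official constraints files are recommended for a real install):

pip install apache-airflow
airflow db init
airflow users create --username admin --password change-me --firstname Admin --lastname User --role Admin --email admin@example.com
# run these two in separate terminals, or as systemd services
airflow webserver --port 8080
airflow scheduler

You would then point a Route 53 record at the instance's static IP and put the webserver behind HTTPS before exposing it publicly.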

OpenStack and Open Source MANO: Instantiating a Kubernetes Cluster on the Openstack server for OSM

I am trying to deploy a 5G network using open-source software. I ran into an issue that seems to have nothing written about it. When I try to instantiate a Network Service, it says that it cannot find a K8s cluster that meets the following requirements: {}.
This is how it actually looks:
No k8scluster with requirements='{}' at vim_account=34510160-24e6-4c6a-93ec-787d00a2518a found for member_vnf_index=oai_cn5g_amf
The VIM account is simply the OpenStack server. Does anyone know how to link a Kubernetes cluster to OpenStack?
Thanks,
Taylor
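Typically that error means OSM has no K8s cluster registered against that VIM account at all. A hedged sketch of registering one with the osm client (the cluster name, kubeconfig path, VIM name, network mapping, and version below are all placeholders) looks roughly like this:

osm k8scluster-add my-k8s-cluster --creds kubeconfig.yaml --vim my-openstack-vim --k8s-nets '{k8s_net1: vim-mgmt-net}' --version "1.24" --description "K8s cluster for the 5G core"

The cluster itself (for example one built with OpenStack Magnum or installed manually on OpenStack VMs) has to exist and be reachable first; k8scluster-add only registers its kubeconfig with OSM and ties it to the VIM account.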

How do I connect to a dataproc cluster with Jupyter notebooks from cloud shell

I have seen the instructions here https://cloud.google.com/dataproc/docs/tutorials/jupyter-notebook for setting up Jupyter notebooks with dataproc but I can't figure out how to alter the process in order to use Cloud shell instead of creating an SSH tunnel locally. I have been able to connect to a datalab notebook by running
datalab connect vmname
from the cloud shell and then using the preview function. I would like to do something similar but with Jupyter notebooks and a dataproc cluster.
In theory, you can mostly follow the same instructions as found at https://cloud.google.com/shell/docs/features#web_preview and use local port forwarding to access your Jupyter notebooks on Dataproc via Cloud Shell's "web preview" feature. Something like the following in your Cloud Shell:
gcloud compute ssh my-cluster-m -- -L 8080:my-cluster-m:8123
However, there are two issues which prevent this from working:
You need to modify the Jupyter config to add the following to the bottom of /root/.jupyter/jupyter_notebook_config.py:
c.NotebookApp.allow_origin = '*'
Cloud Shell's web preview needs to add support for websockets.
If you don't do (1) then you'll get popup errors when trying to create a notebook, due to Jupyter refusing the cloud shell proxy domain. Unfortunately (2) requires deeper support from Cloud Shell itself; it'll manifest as errors like A connection to the notebook server could not be established.
Another possible option without waiting for (2) is to run your own nginx proxy as part of the jupyter initialization action on a Dataproc cluster, if you can get it to proxy websockets suitably. See this thread for a similar situation: https://github.com/jupyter/notebook/issues/1311
Generally this type of broken websocket support in proxy layers is a common problem since it's still relatively new; over time more and more things will start to support websockets out of the box.
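If you try the nginx route mentioned above, the key part is forwarding the HTTP/1.1 upgrade headers so websocket connections survive the proxy; a minimal sketch (the backend port 8123 matches the Jupyter port used earlier, everything else here is an assumption):

location / {
    proxy_pass http://localhost:8123;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}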
Alternatively:
Dataproc also supports using a Datalab initialization action; this is set up such that the websockets proxying is already taken care of. Thus, if you're not too dependent on just Jupyter specifically, then the following works in cloud shell:
gcloud dataproc clusters create my-datalab-cluster \
--initialization-actions gs://dataproc-initialization-actions/datalab/datalab.sh
gcloud compute ssh my-datalab-cluster-m -- -L 8080:my-datalab-cluster-m:8080
And then select the usual "Web Preview" on port 8080. Or you can select other Cloud Shell supported ports for the local binding like:
gcloud compute ssh my-datalab-cluster-m -- -L 8082:my-datalab-cluster-m:8080
In which case you'd select 8082 as the web preview port.
You can't connect to Dataproc through a Datalab installed on a VM (on GCE).
As per the documentation you mentioned, you must launch a Dataproc cluster with a Datalab initialization action.
Moreover, the datalab connect command is only available if you created the Datalab instance with the datalab create command.
You must create an SSH tunnel to your master node ("vmname-m" if your cluster name is "vmname") with:
gcloud compute ssh --zone YOUR-ZONE --ssh-flag="-D 1080" --ssh-flag="-N" --ssh-flag="-n" "vmname-m"

Accessing host machine from Minikube

I have a Google Cloud Datastore emulator running on my local machine at localhost:8742. I'd like to access this from a pod running in minikube. Is there a way to do this?
You should be able to access the Google Cloud Datastore emulator by using the host's IP address as seen from the VM. For the virtualbox driver (the default in minikube), this IP address is 10.0.2.2.
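One quick way to verify the emulator is reachable from inside the cluster (the busybox image and pod name are arbitrary choices):
kubectl run datastore-test -i -t --rm --restart=Never --image=busybox -- wget -qO- http://10.0.2.2:8742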
Telepresence can do this and may prove generally useful to you for your k8s development work.
After installing Telepresence and following the guide at www.telepresence.io/tutorials/kubernetes-rapid, create a proxy service:
localhost$ telepresence --new-deployment some-name-you-like --expose 8742
Then, you can access the service some-name-you-like from the cluster as you might have done via localhost, for example via some shell or a specialized data store client image:
kubectl run console -i -t --restart=Never --image=alpine -- /bin/sh
