I'm trying to make an IMAP connection in my Apache Airflow DAG. Can anybody show how to do a basic IMAP connection in Apache Airflow?
In Airflow you can create connections as environment variables by using the convention of AIRFLOW_CONN_{CONN_ID} (all uppercase).
So for example you can have:
export AIRFLOW_CONN_IMAP_DEFAULT='imap://username:password@myimap.com:993?use_ssl=true'
or you can set up the IMAP connection via the UI: Admin -> Connections.
You can follow the IMAP provider documentation, which explains how to set up the fields.
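For completeness, here is a minimal sketch of using that connection from a task, assuming the apache-airflow-providers-imap package is installed; the connection ID imap_default matches the environment variable above, and the attachment name is a hypothetical example.
from airflow.providers.imap.hooks.imap import ImapHook

# The hook pulls host, port and credentials from the imap_default connection.
with ImapHook(imap_conn_id='imap_default') as hook:
    # Check the INBOX for any mail carrying an attachment named report.csv.
    print(hook.has_mail_attachment(name='report.csv', mail_folder='INBOX'))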
Related
I am trying to create an MWAA environment as the root user, and all my AWS services (S3 and EMR) are in North California. MWAA isn't available in North California, so I created it in Oregon.
Since I am creating this in a private network, it also required a new S3 bucket in that region for my DAGs folder.
I see that it also needed a new VPC and private subnet, as we don't have anything in that region; I created those by clicking on "Create VPC".
Now when I click on the Airflow UI link, it says
"This site can't be reached". Do I need to add my IP to the security group here to access the Airflow UI?
Someone, please guide.
Thanks,
Xi
From AWS MWAA documentation:
3. Enable network access. You'll need to create a mechanism in your Amazon VPC to connect to the VPC endpoint (AWS PrivateLink) for your Apache Airflow Web server. For example, by creating a VPN tunnel from your computer using an AWS Client VPN.
Apache Airflow access modes (AWS)
The AWS documentation suggests three different approaches for accomplishing this (tutorials are linked in the documentation).
Using an AWS Client VPN
Using a Linux Bastion Host
Using a Load Balancer (advanced)
Accessing the VPC endpoint for your Apache Airflow Web server (private network access)
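Once one of those access paths is in place, the environment's webserver URL (and a sign-in token) can be fetched with boto3. A minimal sketch, assuming credentials for Oregon (us-west-2) and a hypothetical environment name my-mwaa-env:
import boto3

mwaa = boto3.client('mwaa', region_name='us-west-2')

# For a private-network environment this URL only resolves from inside the
# VPC (or through the VPN / bastion / load balancer set up above).
env = mwaa.get_environment(Name='my-mwaa-env')
print(env['Environment']['WebserverUrl'])

# Short-lived token for signing in to the Airflow UI once the endpoint is
# reachable.
token = mwaa.create_web_login_token(Name='my-mwaa-env')
print(token['WebToken'])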
I have created an R Shiny app that connects to a Cloud SQL instance. It runs fine on my local server, but when I deploy it to either shinyapps.io or Cloud Run via a Dockerfile, it is unable to connect.
Here is the code I am using to connect with the RPostgres package:
library(DBI)  # provides dbConnect(); the driver comes from RPostgres

conn <- dbConnect(
  drv = RPostgres::Postgres(),
  dbname = 'postgres',
  sslrootcert = 'path/to/server-ca.pem',
  sslcert = 'path/to/client-cert.pem',
  sslkey = 'path/to/client-key.pem',
  host = 'xxxxxxxxxxxxxxxxxxx',
  port = 5432,
  user = 'username',
  password = 'password_string',
  sslmode = 'verify-ca'
)
I've checked the logs in Cloud Run, the error message I am seeing is the following:
Warning: Error in : unable to find an inherited method for function 'dbGetQuery' for signature '"character", "character"'
The dbGetQuery() function is called after dbConnect(), and since everything runs fine on my local server, I am fairly confident that what I am seeing is a connection issue rather than a package namespace issue; that error signature suggests conn ended up holding a character string rather than a live connection object. But I could be wrong.
I have opened up to all IPs by adding 0.0.0.0/0 as an allowed network. The weird thing is that OCCASIONALLY I CAN CONNECT from shinyapps.io, but most of the time it fails. I have not yet got it to work once from Cloud Run. This is leading me to think that it could be a problem with a dynamic IP address or something similar?
Do I need to go through the Cloud Auth proxy to connect directly between Cloud Run and Cloud SQL? Or can I just connect via the dbConnect method above? I figured that 0.0.0.0/0 would also include Cloud Run IPs but I probably don't understand how it works well enough. Any explanations would be greatly appreciated.
Thanks very much!
I have opened up to all IPs by adding 0.0.0.0/0 as an allowed network.
From a security standpoint, this is a terrible, horrible, no good idea. It essentially means the entire world can attempt to connect to your database.
As @john-hanley stated in the comment, the Connecting Cloud Run to Cloud SQL documentation details how to connect. There are two options:
via Public IP (the internet) using the Unix domain socket on /cloudsql/CLOUD_SQL_CONNECTION_NAME
via Private IP, which connects through a VPC using the Serverless VPC Access
If a Unix domain socket is not supported by your library, you'll have to use a different library or choose option 2 and connect over TCP. Note that the Serverless VPC Access connector has additional costs associated with it.
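For illustration, here is a minimal sketch of option 1 in Python with psycopg2; the connection name is a hypothetical placeholder. Since RPostgres is built on libpq, passing the same /cloudsql/... socket directory as the host argument of dbConnect() should work there as well.
import psycopg2

# Cloud Run mounts the socket at /cloudsql/<CONNECTION_NAME>/.s.PGSQL.5432;
# libpq treats a host value that starts with "/" as a socket directory.
conn = psycopg2.connect(
    dbname='postgres',
    user='username',
    password='password_string',
    host='/cloudsql/my-project:us-central1:my-instance',
)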
Does anybody know if it is possible to associate an Elastic IP with a scheduled Data Pipeline? I have configured a Data Pipeline to run every day. During pipeline execution, I need access to a Google DB. To get that access I have to add an IP (CIDR) to the DB authorization settings, but without knowing the public IP of the EC2 instance created by the Data Pipeline I cannot configure it.
So I need a way to set up an Elastic IP once, to be used by the EC2 instance that the Data Pipeline creates automatically each time it is run by the scheduler.
I am not aware of a way to associate an EIP; however, you can create a VPC with a NAT gateway. When you create your EC2 instance, put it in the subnet you've created, and if everything is set up properly your public IP will always be the same.
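For illustration, a hedged boto3 sketch of that setup (the region and all resource IDs are hypothetical placeholders):
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Allocate an Elastic IP and attach it to a NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain='vpc')
nat = ec2.create_nat_gateway(
    SubnetId='subnet-0123456789abcdef0',
    AllocationId=eip['AllocationId'],
)

# Route the private subnet's outbound traffic through the NAT gateway, so
# every EC2 instance Data Pipeline launches there egresses from the same
# public IP (the EIP above), which you can whitelist on the database side.
ec2.create_route(
    RouteTableId='rtb-0123456789abcdef0',
    DestinationCidrBlock='0.0.0.0/0',
    NatGatewayId=nat['NatGateway']['NatGatewayId'],
)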
A second option would be to run your pipeline on a Task Runner.
I'm learning the concept of Telegraf. It's a system that collects data and writes it to InfluxDB. So I've set up InfluxDB, Telegraf, and an nginx server which will create the logs. Telegraf will collect the logs and write them to InfluxDB.
But I see a lot of configurations which also use Aerospike. I don't see what Aerospike is doing in this concept.
I see it configured as an input plugin for Telegraf:
[[inputs.aerospike]]
servers = ["aerospike:3000"]  # Aerospike node(s) Telegraf polls for metrics
Am I wrong about the nginx part? Is Aerospike providing logs, or how do I have to interpret this concept of using nginx, InfluxDB, Telegraf and Aerospike together?
I would assume Aerospike is simply another potential source of metrics to be ingested by Telegraf...
https://github.com/influxdata/telegraf/tree/master/plugins/inputs/aerospike
nginx is just another potential input plugin... here is the full list of input plugins:
https://github.com/influxdata/telegraf/tree/master/plugins/inputs
What I am trying to achieve is a central webmail client that I can use in an ISP environment, but which has the capability to connect to multiple mail servers.
I have been looking at Perdition, NGINX and Dovecot.
But most of the articles have not been updated in a very long time.
The one I am really looking at is the NGINX IMAP proxy, as it can do almost everything I require.
http://wiki.nginx.org/ImapAuthenticateWithEmbeddedPerlScript
But firstly, the issue I have is that you can no longer compile NGINX from source with those flags.
And secondly, the Git repo for this project, https://github.com/falcacibar/nginx_auth_imap_perl,
does not give detailed information about the updated project.
So all I am trying to achieve is to have one webmail server that can connect to any one of my mail servers, where each user's location resides in a database. But the location is a hostname, not an IP.
You can tell Nginx to do auth_http with any http URL you set up.
You don't need an embedded perl script specifically.
See http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html to get an idea of the header-based protocol Nginx uses.
You can implement the protocol described above in any language - a CGI script with Apache, if you like.
You do the auth and database query and return the appropriate backend server in this script.
(Personally, I use a python + WSGI server setup.)
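As a rough illustration, here is a minimal WSGI sketch of that protocol, with a hypothetical lookup_backend() standing in for your database query. Note that nginx expects Auth-Server to be an IP address, which is exactly why the hostname stored in your database has to be resolved before answering:
import socket

def lookup_backend(user):
    # Hypothetical stand-in for your database query: returns the mail
    # server hostname and port recorded for this user.
    return 'mail1.example.com', 143

def application(environ, start_response):
    user = environ.get('HTTP_AUTH_USER', '')
    password = environ.get('HTTP_AUTH_PASS', '')
    # ... verify the credentials here; on failure, return an error message
    # in Auth-Status instead of OK ...

    host, port = lookup_backend(user)
    start_response('200 OK', [
        # nginx reads the verdict from these response headers, not the body.
        ('Auth-Status', 'OK'),
        # Auth-Server must be an IP, so resolve the stored hostname.
        ('Auth-Server', socket.gethostbyname(host)),
        ('Auth-Port', str(port)),
    ])
    return [b'']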
Say you set up your script on Apache at http://localhost:9000/cgi-bin/nginx_auth.py
In your Nginx config, you would use:
auth_http http://localhost:9000/cgi-bin/nginx_auth.py;