Using Amazon AppFlow in AWS to move data - terraform-provider-aws

I work with AWS and I mainly use Terraform for a lot of things.
I want to implement Amazon AppFlow and be able to move data from
Salesforce to an S3 bucket.
In the console, AppFlow is a wizard that needs to be set up step by step,
so I assume you cannot use Terraform to implement this.
Is this thinking correct?

Yes, you can use Terraform to deploy AppFlow resources. There are two providers you can use: the AWS provider or the AWS Cloud Control (awscc) provider. I have had more luck with AWS Cloud Control so far, as it is designed to support new resources sooner. It supports connectors, profiles, and flows, as well as custom connectors. The AWS provider was slower here: for a long time it only supported connectors and profiles, and I have found it doesn't have good support for custom connectors yet, though it does now include aws_appflow_flow (which the example below uses).
Right now I'd recommend Cloud Control.
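If you want to experiment with both, you can configure the two providers side by side. A minimal sketch (the region is an example value; pin provider versions as you prefer):

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    awscc = {
      source = "hashicorp/awscc"
    }
  }
}

# Both providers pick up credentials the same way as other AWS tooling
provider "aws" {
  region = "us-east-1" # example region
}

provider "awscc" {
  region = "us-east-1"
}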
Here is a good introduction.
https://www.hashicorp.com/resources/using-the-terraform-aws-cloud-control-provider
And the AWS Cloud Control provider.
https://registry.terraform.io/providers/hashicorp/awscc/latest/docs/resources/appflow_connector
And here are the AWS Provider AppFlow resources.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/appflow_connector_profile
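For example, here is a minimal on-demand flow that copies a CSV from one S3 bucket to another using the AWS provider: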
resource "aws_s3_bucket" "example_source" {
bucket = "example_source"
}
resource "aws_s3_bucket_policy" "example_source" {
bucket = aws_s3_bucket.example_source.id
policy = <<EOF
{
"Statement": [
{
"Effect": "Allow",
"Sid": "AllowAppFlowSourceActions",
"Principal": {
"Service": "appflow.amazonaws.com"
},
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::example_source",
"arn:aws:s3:::example_source/*"
]
}
],
"Version": "2012-10-17"
}
EOF
}
resource "aws_s3_object" "example" {
bucket = aws_s3_bucket.example_source.id
key = "example_source.csv"
source = "example_source.csv"
}
resource "aws_s3_bucket" "example_destination" {
bucket = "example_destination"
}
resource "aws_s3_bucket_policy" "example_destination" {
bucket = aws_s3_bucket.example_destination.id
policy = <<EOF
{
"Statement": [
{
"Effect": "Allow",
"Sid": "AllowAppFlowDestinationActions",
"Principal": {
"Service": "appflow.amazonaws.com"
},
"Action": [
"s3:PutObject",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts",
"s3:ListBucketMultipartUploads",
"s3:GetBucketAcl",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::example_destination",
"arn:aws:s3:::example_destination/*"
]
}
],
"Version": "2012-10-17"
}
EOF
}
resource "aws_appflow_flow" "example" {
name = "example"
source_flow_config {
connector_type = "S3"
source_connector_properties {
s3 {
bucket_name = aws_s3_bucket_policy.example_source.bucket
bucket_prefix = "example"
}
}
}
destination_flow_config {
connector_type = "S3"
destination_connector_properties {
s3 {
bucket_name = aws_s3_bucket_policy.example_destination.bucket
s3_output_format_config {
prefix_config {
prefix_type = "PATH"
}
}
}
}
}
task {
source_fields = ["exampleField"]
destination_field = "exampleField"
task_type = "Map"
connector_operator {
s3 = "NO_OP"
}
}
trigger_config {
trigger_type = "OnDemand"
}
}
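For comparison, the same flow with the Cloud Control provider uses plain attribute values instead of nested blocks. This is a hedged sketch only: the attribute names below mirror the CloudFormation AWS::AppFlow::Flow schema that awscc is generated from, so verify them against the awscc_appflow_flow documentation before applying.

resource "awscc_appflow_flow" "example" {
  flow_name = "example"

  source_flow_config = {
    connector_type = "S3"
    source_connector_properties = {
      s3 = {
        bucket_name   = aws_s3_bucket.example_source.id
        bucket_prefix = "example"
      }
    }
  }

  # Cloud Control takes a list of destinations rather than repeated blocks
  destination_flow_config_list = [{
    connector_type = "S3"
    destination_connector_properties = {
      s3 = {
        bucket_name = aws_s3_bucket.example_destination.id
      }
    }
  }]

  tasks = [{
    source_fields     = ["exampleField"]
    destination_field = "exampleField"
    task_type         = "Map"
    connector_operator = {
      s3 = "NO_OP"
    }
  }]

  trigger_config = {
    trigger_type = "OnDemand"
  }
}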

Related

Getting "parent resource not found" during ARM template deployment

I have a private DNS zone zone.private which is already deployed in a resource group, and I'm trying to add an A record to it with the ARM template below, which fails with Status Message: Can not perform requested operation on nested resource. Parent resource 'zone.private' not found. (Code:ParentResourceNotFound)
I'm supposed to be able to refer to resources deployed in the same resource group to deploy nested resources, but it fails for whatever reason. I have another zone called zone.domain.com deployed to the same resource group, and deploying to that succeeds with no issues.
{
  "type": "Microsoft.Network/dnsZones/A",
  "apiVersion": "2018-05-01",
  "name": "[concat('zone.private', '/', 'webexport-lb')]",
  "properties": {
    "TTL": 3600,
    "ARecords": [
      {
        "ipv4Address": "1.1.1.1"
      }
    ]
  }
},
If you have a private DNS zone, you could use Microsoft.Network/privateDnsZones/A instead of Microsoft.Network/dnsZones/A.
So change it like this:
{
  "type": "Microsoft.Network/privateDnsZones/A",
  "apiVersion": "2018-09-01",
  "name": "[concat('zone.private', '/', 'webexport-lb')]",
  "properties": {
    "ttl": 3600,
    "aRecords": [
      {
        "ipv4Address": "1.1.1.1"
      }
    ]
  }
}
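If you are not sure which resource type a zone was deployed as, a quick check with the Azure CLI (assuming it is installed and you are logged in) shows whether the zone lives under privateDnsZones or dnsZones:

# Private zones and public zones are separate resource types
az network private-dns zone list --resource-group <your-resource-group> --output table
az network dns zone list --resource-group <your-resource-group> --output table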

Google Vision API Document Text multiple images in base64 String

I use the Google Vision API OCR (Document Text Detection) to get the text from a scanned document (base64 string). It works perfectly for one image, but how can I send more than one image, e.g. the second page of a document?
I've tried to merge the base64 strings, but it does not work:
var base64ImagesArrayConcarved = base64ImagesArray.join('')
Cloud Vision API has the method files.asyncBatchAnnotate, which enables sending a batch of files in the same request. Each element of the batch is an individual async file annotation request. An example of including two images in a batch request is the following:
{
  "requests": [
    {
      "inputConfig": {
        "gcsSource": {
          "uri": "gs://<your bucket name>/image1.jpg"
        },
        "mimeType": "image/jpg"
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ],
      "outputConfig": {
        "gcsDestination": {
          "uri": "gs://<your bucket name>/output/"
        }
      }
    },
    {
      "inputConfig": {
        "gcsSource": {
          "uri": "gs://<your bucket name>/image2.jpg"
        },
        "mimeType": "image/jpg"
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ],
      "outputConfig": {
        "gcsDestination": {
          "uri": "gs://<your bucket name>/output/"
        }
      }
    }
  ]
}
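To actually send the batch, POST that JSON to the files:asyncBatchAnnotate endpoint. A sketch, assuming the request body is saved as request.json and gcloud is installed for auth:

# Async call: the response returns an operation to poll; results land in gcsDestination
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://vision.googleapis.com/v1/files:asyncBatchAnnotate"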
If you are specifically working with PDF files, I found this post that explains how to send a request, also using asyncBatchAnnotate.

How to map a user to a domain other than Federated in OpenStack federation?

I am trying to understand direct mapping in OpenStack. I want to map a user to a domain other than the Federated domain, but the user always gets mapped to the Federated domain. Here is the link for the direct mapping spec that I am using:
https://specs.openstack.org/openstack/keystone-specs/specs/kilo/federated-direct-user-mapping.html
Here is the mapping rule that I am using:
[
  {
    "local": [
      {
        "user": {
          "name": "{0}",
          "domain": {"name": "Default"}
        }
      },
      {
        "group": {
          "id": "GROUP_ID"
        }
      }
    ],
    "remote": [
      {
        "type": "HTTP_OIDC_SUB"
      }
    ]
  }
]
I have configured an OpenID Connect IdP for federation.
Could someone explain how I can use direct mapping to map a federated user to a domain other than Federated?
The only way I've been able to get the user to not be in the 'Federated' domain is to force the user to be of type local, but then they need to exist in the backend (SQL/LDAP):
[
  {
    "local": [
      {
        "user": {
          "name": "{0}",
          "type": "local",
          "domain": {"name": "Default"}
        }
      },
      {
        "group": {
          "id": "GROUP_ID"
        }
      }
    ],
    "remote": [
      {
        "type": "HTTP_OIDC_SUB"
      }
    ]
  }
]
The following bit of code in keystone is the culprit for doing this:
if user_type is None:
    user_type = user['type'] = UserType.EPHEMERAL

if user_type == UserType.EPHEMERAL:
    user['domain'] = {
        'id': CONF.federation.federated_domain_name
    }
It basically overrides the domain with a pre-configured domain if your user doesn't have a type or is ephemeral.
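That pre-configured domain comes from keystone's configuration; by default it is a domain named Federated. A sketch of the relevant keystone.conf section:

[federation]
# Domain that ephemeral federated users are placed into (default: Federated)
federated_domain_name = Federated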

Azure Resource Manager set static IP using json template

Using an Azure Resource Manager JSON template, can we set an internal static IP without having to assign the IP ourselves? My template creates a couple of VMs. When I set privateIPAllocationMethod to Static, I get an error that I have to set the IP as well. Is it possible to assign the IP dynamically and then make it static?
Or are you looking for something you can do in ARM: get an IP from Azure using dynamic allocation, then switch it to static? That can be done with a nested deployment:
{
  "name": "SetStaticIP",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2015-01-01",
  "dependsOn": [
    "[concat(parameters('envPrefix'),parameters('vmName'),'nic')]",
    "[concat(parameters('envPrefix'),parameters('vmName'))]",
    "Microsoft.Insights.VMDiagnosticsSettings"
  ],
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(parameters('_artifactsLocation'), '/SetStaticIP.json', parameters('_artifactsLocationSasToken'))]",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "VirtualNetwork": {
        "value": "[parameters('VirtualNetwork')]"
      },
      "VirtualNetworkId": {
        "value": "[parameters('VirtualNetworkId')]"
      },
      "nicName": {
        "value": "[concat(parameters('envPrefix'),parameters('vmName'),'nic')]"
      },
      "ipAddress": {
        "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]"
      }
    }
  }
}
Yes, you can change a dynamically assigned IP to static. Try this:
$nic=Get-AzureRmNetworkInterface -Name "TestNIC" -ResourceGroupName "TestRG"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "x.x.x.x"
Set-AzureRmNetworkInterface -NetworkInterface $nic
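The AzureRm cmdlets are deprecated these days; with the newer Az module the same operation looks like this (identical flow, renamed cmdlets):

# Fetch the NIC, pin its currently leased address as static, and save
$nic = Get-AzNetworkInterface -Name "TestNIC" -ResourceGroupName "TestRG"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "x.x.x.x"
Set-AzNetworkInterface -NetworkInterface $nic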
You can refer to this article: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-static-private-ip-arm-ps/
Thanks.

Serve assets from s3 bucket in Rails 4.2.1

I am just wondering if the configuration setup has changed at all (since Rails 4) with regard to setting up asset_sync and serving my assets from an S3 bucket.
I cannot seem to serve my CSS or JS assets (though images, jpg/png, are OK). I can upload everything to my S3 bucket.
I am trying to get this to work in my development environment before I push to production.
My asset_sync.rb initializer is set up like so:
if defined?(AssetSync)
  AssetSync.configure do |config|
    config.fog_provider          = 'AWS'
    config.aws_access_key_id     = ENV['AWS_ACCESS_KEY_ID']
    config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
    config.fog_directory         = ENV['FOG_DIRECTORY']
    config.existing_remote_files = "delete"
    config.gzip_compression      = true
    config.manifest              = true
    config.custom_headers        = { '.*' => { cache_control: 'max-age=315576000', expires: 1.year.from_now.httpdate } }
  end
end
So I know that this part is OK, as assets are uploaded to the bucket, but when I try to render my page no CSS/JS or images are shown. I have set this in my development.rb:
config.action_controller.asset_host = "https://#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com"
All I get back at the moment on each asset is a 403 Forbidden error.
This is my IAM policy; maybe someone could spot something in here that isn't correct?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
I have also added a bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::mybucket/*"]
    }
  ]
}
What I am looking for is for someone to show how they set up their Rails app with asset_sync, to see if I have missed anything.
