We are encountering a very strange condition with Apache NiFi and SQS. We are using the AWSCredentialsProviderControllerService to manage our authentication. With an unencrypted queue it works fine; with an encrypted queue, however, nothing fails outright but nothing gets written either. It doesn't appear to generate anything in the NiFi or CloudTrail logs. Is there anything special that needs to be done to support this configuration? If it is failing, we are not able to figure out where. Any suggestions or ideas would be greatly appreciated.
I was able to reproduce the silent failure with PutSQS under the following conditions:
The SQS queue was configured with server-side encryption using a custom KMS customer master key rather than the default AWS-managed key.
The AWS credentials used by NiFi had permission to send a message, but not permission to use the custom KMS key.
The solution was to grant NiFi's AWS credentials permission to use both SQS and KMS. I found the example policy below documented in What AWS KMS Permissions Do I Need to Use SSE for Amazon SQS?:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:GenerateDataKey",
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:us-east-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    },
    {
      "Effect": "Allow",
      "Action": [
        "sqs:SendMessage",
        "sqs:SendMessageBatch"
      ],
      "Resource": "arn:aws:sqs:*:123456789012:MyQueue"
    }
  ]
}
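As a quick sanity check outside NiFi, a short boto3 sketch like the one below (the queue URL and region are placeholders) can confirm whether a given set of credentials can publish to the SSE-enabled queue. My expectation, based on the reproduction above, is that with the KMS permissions missing the call fails with an access error rather than succeeding:

import boto3

# Placeholders: substitute the region and URL of the encrypted queue.
sqs = boto3.client("sqs", region_name="us-east-2")

# Without kms:GenerateDataKey on the queue's CMK, this should raise a
# ClientError instead of returning a MessageId.
response = sqs.send_message(
    QueueUrl="https://sqs.us-east-2.amazonaws.com/123456789012/MyQueue",
    MessageBody="permissions check",
)
print(response["MessageId"])

If this succeeds with the same credentials NiFi uses, the problem lies elsewhere.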
I am currently facing an issue creating a notification rule for CodePipeline:
Resource handler returned message: "Invalid request provided: AWS::CodeStarNotifications::NotificationRule" (RequestToken: 4cf585ed-150e-78ee-6c23-d01870c1dbc4, HandlerErrorCode: InvalidRequest)
My problem is the same as in this StackOverflow post:
CDK Unable to Add CodeStarNotification to CodePipeline.
Suggested solutions focus on whether the access policy on the topic is set, and that is all taken care of in my case:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AWSCodeStarNotifications_publish",
      "Effect": "Allow",
      "Principal": {
        "Service": "codestar-notifications.amazonaws.com"
      },
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:us-east-1:123456789:PipelineNotifications"
    }
  ]
}
I am importing an already existing topic in that stack, and that topic already has a properly configured access policy, so I don't really understand what the problem is. I have tried several times, but creating the notification rule never succeeds.
cdk version 2.31.0
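For reference, this is roughly how the topic is imported and wired up in my stack (a simplified CDK v2 Python sketch; the construct IDs, event name, and pipeline variable stand in for my actual code):

from aws_cdk import Stack, aws_sns as sns, aws_codestarnotifications as notifications
from constructs import Construct

class PipelineNotificationsStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, *, pipeline, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Import the pre-existing topic; its access policy already allows
        # codestar-notifications.amazonaws.com to publish (see above).
        topic = sns.Topic.from_topic_arn(
            self, "PipelineNotifications",
            "arn:aws:sns:us-east-1:123456789:PipelineNotifications",
        )
        # Creating this rule is the step that fails.
        notifications.NotificationRule(
            self, "PipelineNotificationRule",
            source=pipeline,  # an aws_codepipeline.Pipeline construct
            events=["codepipeline-pipeline-pipeline-execution-failed"],
            targets=[topic],
        )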
I was following the instructions provided by Codemagic to add a webhook to CodeCommit, which includes creating a topic, adding a subscription to it, and then configuring Notify on the repository.
Anyhow, after merging or changing my master branch directly, still no build is triggered.
Here is my setup (screenshots omitted): the webhook in Codemagic, the topic with a subscription, and the notification rule targets.
What I did notice is that the notification target status is Unreachable, but I have no clue what that actually means.
Does my problem occur because of the Unreachable status?
What exactly does it mean?
Are you referring to this document? https://docs.codemagic.io/configuration/webhooks/#setting-up-webhooks-for-aws-codecommit
Have you done the following steps, and can you see any incoming requests from AWS in Codemagic?
6. In the Codemagic UI, navigate to your application and select the Webhooks tab.
7. Under Recent deliveries, choose the most recent webhook, and copy the subscription link under the Results tab to your browser.
Well, apparently the documentation here has been updated:
https://docs.codemagic.io/configuration/webhooks/#setting-up-webhooks-for-aws-codecommit
There is a configuration that you have to update for your topic's access policy:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "CodeNotification_publish",
      "Effect": "Allow",
      "Principal": {
        "Service": "codestar-notifications.amazonaws.com"
      },
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:REGION:ACCOUNT_ID:REPOSITORY"
    }
  ]
}
Make sure to update Resource!
"Resource": "arn:aws:sns:REGION:ACCOUNT_ID:REPOSITORY"
Copy the ARN from your topic.
Apart from that (as said above), this step is important:
Under Recent deliveries (in Codemagic -> App -> Webhooks), choose the most recent webhook, and copy the subscription link under the Results tab to your browser.
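If you prefer to apply that policy with a script instead of the console, a small boto3 sketch along these lines should do it (the topic ARN is a placeholder; fill in REGION, ACCOUNT_ID, and REPOSITORY as above):

import json
import boto3

topic_arn = "arn:aws:sns:REGION:ACCOUNT_ID:REPOSITORY"  # placeholder
policy = {
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "CodeNotification_publish",
            "Effect": "Allow",
            "Principal": {"Service": "codestar-notifications.amazonaws.com"},
            "Action": "SNS:Publish",
            "Resource": topic_arn,
        }
    ],
}

# Overwrite the topic's access policy with the one above.
sns = boto3.client("sns")
sns.set_topic_attributes(TopicArn=topic_arn,
                         AttributeName="Policy",
                         AttributeValue=json.dumps(policy))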
On a WordPress site that has been linking to PDFs successfully up till now, I am adding a new course and associated links to PDFs stored in S3.
Here's my bucket policy:
"Version": "2012-10-17",
"Id": "Policy1495663956019",
"Statement": [
{
"Sid": "Stmt1495663819956",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::courses-example-com/*",
"Condition": {
"StringLike": {
"aws:Referer": [
"http://example.com/*",
"http://www.example.com/*",
"https://example
"https://www.example.com/*"
]
}
}
}
]
}
The bucket has different folders for each course: course1/, course2/, etc.
I copy the Object URL from S3 as the link within WordPress.
Here is an example of a link used
Download the course text here.
Some of the links, both those that work and those that do not (access denied), may have rel="noopener noreferrer", yet some of these work while others do not. When I try to remove the noreferrer, WordPress just adds it back in. Changing the links to open in the same window is not desired, as that loses the student's place in the course.
I am not sure how to check the Referer header that is likely sent from the different pages; it might provide a clue. I am also not sure how to override WordPress's automatic addition of noreferrer.
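One way to check what the Referer header does to the response is to request a PDF directly with and without it; a minimal sketch (the object URL is a placeholder for one of the real course links):

import requests

# Placeholder object URL for one of the course PDFs.
url = "https://courses-example-com.s3.amazonaws.com/course1/text.pdf"

# No Referer header: the aws:Referer condition should deny this (403).
no_ref = requests.get(url)
print("without Referer:", no_ref.status_code)

# Referer matching the bucket policy: this should succeed (200).
with_ref = requests.get(url, headers={"Referer": "https://www.example.com/course1"})
print("with Referer:", with_ref.status_code)

If the first request is denied and the second succeeds, the rel="noreferrer" links are the likely culprit, since noreferrer makes the browser omit the Referer header entirely.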
I have checked S3 permissions, gone over the bucket policy carefully, cleared the cache, tried different browsers, rebooted, and cleared the cache on the site host side.
Any ideas?
cheers,
Fred
I am trying to invoke an AWS API Gateway endpoint from an EC2 instance with an IAM role. I have the boto3 library installed on the EC2 instance and am trying to execute a simple API Gateway call using the code below, but I still get a missing authentication error.
import boto3
import json
import requests
from aws_requests_auth.aws_auth import AWSRequestsAuth

session = boto3.Session()
credentials = session.get_credentials()
headers = {'params': 'ABC'}
response = requests.get('https://restapiid.execute-api.us-east-1.amazonaws.com/stage/resource_path',
                        auth=credentials, headers=headers)
This should be very simple from an EC2 instance with an IAM role. Any advice would be appreciated.
Due to the lack of details in your question (missing instance role details, the API Gateway policy, unknown headers, whether IAM auth is enabled), I can only provide and comment on the Python code given.
The Python code to use the role should be as follows (this is the example I used to verify the code):
import boto3
import requests
from aws_requests_auth.aws_auth import AWSRequestsAuth

# Pick up the instance role credentials from the EC2 metadata service.
session = boto3.Session()
credentials = session.get_credentials()

# Build a SigV4 signer for the execute-api service; note that the temporary
# session token must be passed along with the key pair.
auth = AWSRequestsAuth(aws_access_key=credentials.access_key,
                       aws_secret_access_key=credentials.secret_key,
                       aws_token=credentials.token,
                       aws_host='fzoskzctgd.execute-api.us-east-1.amazonaws.com',
                       aws_region='us-east-1',
                       aws_service='execute-api')

response = requests.get('https://fzoskzctgd.execute-api.us-east-1.amazonaws.com/test', auth=auth)
print(response.content)
I tested this with authorizationType set to AWS_IAM for the resource I tested.
API resource policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456:role/instance-role"
      },
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:170576413884:fzoskzctgd/test/*"
    }
  ]
}
instance-role
The instance role does not need any API invocation permissions, as they are granted through the API resource policy. The role must only exist and be attached to the instance.
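As an aside, if you would rather avoid the extra aws_requests_auth dependency, the same signed request can be built with botocore directly; a rough sketch under the same assumptions (same hypothetical host and stage):

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

session = boto3.Session()
credentials = session.get_credentials()

# Build and sign an empty GET request for the execute-api service.
request = AWSRequest(method='GET',
                     url='https://fzoskzctgd.execute-api.us-east-1.amazonaws.com/test')
SigV4Auth(credentials, 'execute-api', 'us-east-1').add_auth(request)

# Replay the signed request with requests, carrying the SigV4 headers over.
response = requests.get(request.url, headers=dict(request.headers))
print(response.status_code, response.content)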
I have an API set up on Google Cloud Functions (https://europe-west1-myproject-name.cloudfunctions.net/api/v1/ical.ics).
This works well, but I wish to set up a "friendly" domain name for the API. :)
According to Google's documentation this seems easy, but it does not seem to work for Cloud Functions outside the USA, e.g. europe-west1.
I have updated the firebase.json file with the code below, according to the documentation.
"hosting": {
"public": "public",
"ignore": [
"firebase.json",
"**/.*",
"**/node_modules/**"
],
"rewrites": [
{
"source": "/api/**",
"function": "api"
},
{
"source": "**",
"destination": "/index.html"
}
]
}
When accessing https://myproject-name.web.app/api/v1/ical.ics, I get redirected to https://us-central1-myproject-name.cloudfunctions.net/api/api/v1/cal.ics with a 403 error and the message below.
Error: Forbidden
Your client does not have permission to get URL /api/api/ical.ics from this server.
I must be overlooking something really basic here, since this seems like it should be a really easy operation. :)
Kind regards
/K
As stated in the documentation (see the blue text block):
If you are using HTTP functions to serve dynamic content for Firebase Hosting, you must use us-central1.
You will also find a similar warning in the doc you refer to in your question, "Serve dynamic content and host microservices with Cloud Functions" (see the blue text block as well):
Firebase Hosting supports Cloud Functions in us-central1 only.
This is not completely the answer you are looking for, since it is not possible with Firebase Hosting. But it is possible to put a custom domain in front of your Cloud Functions hosted in the EU by using Cloud Run.
I followed this guide:
https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-functions
And after that I added the custom domain under Manage custom domains in Cloud Run.
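For illustration, if the function logic can be moved rather than proxied, the Cloud Run service can be as small as a Flask app exposing the same path. This is a hypothetical sketch (route and calendar body are placeholders), not the Endpoints setup from the guide above:

import os
from flask import Flask, Response

app = Flask(__name__)

@app.route("/v1/ical.ics")
def ical():
    # Placeholder: generate or proxy the real calendar feed here.
    body = "BEGIN:VCALENDAR\r\nVERSION:2.0\r\nEND:VCALENDAR\r\n"
    return Response(body, mimetype="text/calendar")

if __name__ == "__main__":
    # Cloud Run supplies the listening port via the PORT environment variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))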