How to retrieve associated CloudFront Distribution URL in Amplify+Terraform? - next.js

Within Terraform, is it possible to retrieve the URL of an Amplify app's CloudFront Distribution?
If I were to create a new Next.js app within the web console, there is an important 200 rewrite that gets added in 'Redirects & Rewrites', which points all front-end paths to the associated CloudFront distribution. However, I've now migrated to Terraform's aws_amplify_app resource, which doesn't add this rewrite automatically.
So, does anyone know how to do this within the TF provider? I appreciate that it may be impossible, because the CloudFront distribution is only created as part of the Amplify app itself. Alternatively, if it is not exposed, is there a way for Amplify to handle this Next-specific logic when using Terraform (rather than only through the web console)?
Here is the relevant code from my main.tf, with a TODO comment showing what needs to be automatic.
resource "aws_amplify_app" "ui" {
name = "my-ui"
...
custom_rule {
source = "/<*>"
status = "200"
## TODO -- this should be automatically set as the URL of the newly created CloudFront distribution
target = "https://xxxxxxxxx.cloudfront.net/<*>"
}
}
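One workaround sketch, in the meantime: as far as I can tell, aws_amplify_app does not export the distribution's domain as an attribute, so the domain could be supplied out-of-band through a variable (hypothetical here, copied manually from 'Redirects & Rewrites' after the first deploy):

variable "cloudfront_domain" {
  description = "CloudFront domain Amplify created for this app (copied manually after the first deploy)"
  type        = string
}

resource "aws_amplify_app" "ui" {
  name = "my-ui"

  custom_rule {
    source = "/<*>"
    status = "200"
    # Interpolate the manually supplied domain instead of hardcoding it.
    target = "https://${var.cloudfront_domain}/<*>"
  }
}

Not automatic, but it at least keeps the literal distribution URL out of the configuration.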

Related

Changing users' email addresses with GitLab API

I need to write a Python script for GitLab that allows me to change the email addresses of users.
The problem is that I manage to change various user attributes, such as "bio", etc.,
but I can't change the "email".
The script reports a successful change of these attributes, but in fact they do not change.
I am working as the root user, using its token.
To change the user attributes, I use this construct:
import gitlab

gl = gitlab.Gitlab(arg.url, private_token=arg.token)
user = gl.users.list(username='name')[0]
user.bio = f"{user.username}@EXAMPLE.COM"
user.save()
I also tried working with classic requests, instead of the gitlab library, but the result was the same
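A sketch of what may fix this, assuming the cause is GitLab's re-confirmation flow: by default a user's new email is held until the user confirms it, and the admin-only skip_reconfirmation parameter of PUT /users/:id bypasses that (the new address below is a placeholder):

import gitlab

gl = gitlab.Gitlab(arg.url, private_token=arg.token)
user = gl.users.list(username='name')[0]

# Without skip_reconfirmation, GitLab reports success but keeps the old
# address until the user confirms the change by email.
user.email = "new.address@example.com"  # placeholder address
user.skip_reconfirmation = True
user.save()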

keycloak starts with a new realm and some client configurations

I am trying to use Keycloak as the authentication service in my design. In my case, when Keycloak starts, I need one more realm besides the default master realm. Assume the new realm is called "demo".
So when Keycloak starts, it should have two realms (master and demo).
In addition, in the demo realm I need to configure the default client "admin-cli" to enable "Full Scope Allowed", and I also need to add some built-in mappers to this client.
Given this, I wonder whether I can use something like an initialization file which Keycloak can load when starting?
Or do I need to use the Keycloak client APIs to do these operations (e.g., the Java Keycloak admin client)?
Thanks in advance.
You can try the following:
Create the Realm;
Set all the options that you want;
Go to Manage > Export;
Switch Export groups and roles to ON;
Switch Export clients to ON;
Export.
That will export a .json file with the configurations.
Then you can test it by deleting your demo realm and:
Go to Add Realm;
Choose the .json file that was exported;
Click Create.
Check whether the configurations you changed are still present in the demo realm. If they are, it means you can use this file to import the realm; otherwise, for the options that were not persisted, you will have to create them via the Admin REST API.
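If you want to automate the import instead of clicking through Add Realm, here is a sketch using the Admin REST API (the base URL and admin credentials are assumptions for a local setup; newer Quarkus-based builds drop the /auth path prefix):

import json
import requests

BASE = "http://localhost:8080/auth"  # assumption; no /auth on newer builds

# Obtain an admin token from the master realm.
token = requests.post(
    f"{BASE}/realms/master/protocol/openid-connect/token",
    data={
        "grant_type": "password",
        "client_id": "admin-cli",
        "username": "admin",   # assumption
        "password": "admin",   # assumption
    },
).json()["access_token"]

# POST the exported realm representation to create the demo realm.
with open("demo-realm.json") as f:
    realm = json.load(f)

resp = requests.post(
    f"{BASE}/admin/realms",
    headers={"Authorization": f"Bearer {token}"},
    json=realm,
)
resp.raise_for_status()

You can run this as part of provisioning, right after Keycloak comes up.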

Getting server error on firebase dynamic link CreateManagedShortLinkRequest with the Ruby client

I am trying to create a dynamic link using the Ruby SDK. I believe I have everything right, but I'm getting a
Google::Apis::ServerError: Server error
when creating the URL.
Could you help me figure out what I'm missing or doing wrong, or whether this is a Google issue?
Assuming I have generated OAuth credentials requesting the appropriate scopes, I am doing:
request = ::Google::Apis::FirebasedynamiclinksV1::CreateManagedShortLinkRequest.new(
  dynamic_link_info: ::Google::Apis::FirebasedynamiclinksV1::DynamicLinkInfo.new(
    domain_uri_prefix: Rails.application.secrets.firebase_dynamic_link_prefix,
    link: campaign.linkedin_url,
  ),
  suffix: ::Google::Apis::FirebasedynamiclinksV1::Suffix.new(
    option: 'SHORT',
  ),
  # name: "Linkedin acquisition URL of #{camp.utm_campaign_name} for #{camp.contractor.name} <#{camp.contractor.email}>",
  name: "Test of generation",
)
# => <Google::Apis::FirebasedynamiclinksV1::CreateManagedShortLinkRequest:0x000021618baa88
# #dynamic_link_info=#<Google::Apis::FirebasedynamiclinksV1::DynamicLinkInfo:0x000021618bad80
# #domain_uri_prefix="https://example.page.link",
# #link="https://www.example.com/?invitation_code=example&signup=example&utm_campaign=example&utm_medium=example&utm_source=example">,
# #name="Test of generation",
# #suffix=#<Google::Apis::FirebasedynamiclinksV1::Suffix:0x000021618babf0
# #option="SHORT">
# >
link_service.create_managed_short_link(request)

def link_service
  @link_service ||= begin
    svc = ::Google::Apis::FirebasedynamiclinksV1::FirebaseDynamicLinksService.new
    svc.authorization = oauth_service.credentials
    svc
  end
end
I know the OAuth scopes seem to be working, as previously I was getting
Google::Apis::ClientError: forbidden: Request had insufficient authentication scopes.
but I fixed that by broadening the OAuth scopes to cover Firebase. My request also seems correct: when I omit one of the parameters (like the name), I get appropriate validation errors such as
Google::Apis::ClientError: badRequest: Created Managed Dynamic Link must have a name
My only clue is that create_managed_short_link actually takes more parameters. In the example above I have substituted our real Firebase prefix with example, but I do own the real prefix I am using, and link generation directly from the Firebase web console works.
I've updated my Google SDK to the most recent version:
- google-api-client-0.30.3
Unfortunately, generating managed short links through the REST API is not currently supported,
as stated here by someone who works (or worked) on the Dynamic Links team itself.
For now we can only use CreateShortDynamicLinkRequest; however, this endpoint does not allow you to specify a custom suffix (i.e. https://example.com/my-custom-suffix).
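So the fallback is the unmanaged endpoint. A sketch, reusing the objects from the question; create_short_dynamic_link is the method name I'd expect the generated client to expose for shortLinks.create, so verify it against your gem version:

# Unmanaged fallback: shortLinks.create instead of managedShortLinks.create.
# Note there is no `name` field here, and the suffix can only be SHORT or
# UNGUESSABLE -- a custom suffix is not available on this endpoint.
request = ::Google::Apis::FirebasedynamiclinksV1::CreateShortDynamicLinkRequest.new(
  dynamic_link_info: ::Google::Apis::FirebasedynamiclinksV1::DynamicLinkInfo.new(
    domain_uri_prefix: Rails.application.secrets.firebase_dynamic_link_prefix,
    link: campaign.linkedin_url,
  ),
  suffix: ::Google::Apis::FirebasedynamiclinksV1::Suffix.new(option: 'SHORT'),
)

link_service.create_short_dynamic_link(request)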

Telegraf - how to monitor multiple Tomcat instances?

I managed to gather data from a single Tomcat instance with Telegraf as follows.
[[inputs.tomcat]]
## URL of the Tomcat server status
url = "http://127.0.0.1:19090/manager/status/all?XML=true"
## HTTP Basic Auth Credentials
username = "admin"
password = "fD*(*DSS"
## Request timeout
# timeout = "5s"
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
Now I want to monitor multiple Tomcat instances, but there does not seem to be an example of how to do that. Does anybody know?
The answer turned out to be very simple: just declare the inputs.tomcat block multiple times, as follows.
[[inputs.tomcat]]
## URL of the Tomcat server status
url = "http://127.0.0.1:19090/manager/status/all?XML=true"
## HTTP Basic Auth Credentials
username = "admin"
password = "fD*(*DSS"
[[inputs.tomcat]]
## URL of the Tomcat server status
url = "http://127.0.0.1:29090/manager/status/all?XML=true"
## HTTP Basic Auth Credentials
username = "admin"
password = "fD*(*DSS"
As far as I recall, there are a couple of other ways.
1) The easiest is to use separate configuration files: create tomcat1.conf and place it under the /etc/telegraf/telegraf.d/ folder, using the same plugin you mentioned above (inputs.tomcat); similarly create tomcat2.conf, and so on for all your Tomcat instances. This way you can monitor multiple Tomcat instances. The con of this approach is that you have to create N tomcatXX.conf files under the telegraf.d folder (which can be easily fixed if you create these files on the fly while provisioning a machine with Ansible or a similar tool, templating the file and iterating over the list of instances).
2) The other way, which may help as well, uses just one configuration file.
In that one file, use the following plugins together to capture what you are looking for. PS: if you use the inputs.exec plugin, then the output generated by your custom script (the one inputs.exec calls) must be in a format that Telegraf and InfluxDB can understand and store (InfluxDB line protocol), or you'll see some minor errors (I've written a few posts about those).
exec plugin: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec
http_* plugins (especially http_response): https://github.com/influxdata/telegraf/tree/master/plugins/inputs/http_response
filestat plugin: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/filestat
logparser plugin: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/logparser
procstat plugin: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/procstat
Look at the plugin links above for what they do and how to set them up in Telegraf; that will get you most of what you are looking for if you don't want multiple conf files per Tomcat instance.
https://github.com/influxdata/telegraf/tree/master/plugins/inputs contains all input plugins (see if there are others you may be interested in).
Also see if you can use the prefix property efficiently to distinguish the metrics/events coming from these plugins, as in the sketch below.
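For example, a sketch of the prefix idea using Telegraf's standard name_prefix option on procstat (the process pattern is an assumption; adjust it to match how your Tomcat instances run):

[[inputs.procstat]]
# Match the JVM process of the first Tomcat instance (assumed pattern).
pattern = "tomcat1"
# Standard Telegraf option: prepend a prefix to every measurement name
# emitted by this block, so instances stay distinguishable downstream.
name_prefix = "tomcat1_"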

AWS API Gateway as service proxy for S3 upload

I have been reading about creating an API which can be used to upload objects directly to S3. I have followed the guides from Amazon with little success.
I am currently getting the following error:
{"message":"Missing Authentication Token"}
My API call configuration:
The role ARN is not visible in the image, but it has been set up and assigned.
The "Missing Authentication Token" error can be interpreted as either
Enabling AWS_IAM authentication for your method and making a request to it without signing it with SigV4, or
Hitting a non-existent path in your API.
For 1, if you use the generated SDK the signing is done for you.
For 2, if you're making raw http requests make sure you're making requests to /<stage>/s3/{key}
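For completeness, a sketch of SigV4-signing a raw PUT without the generated SDK, using botocore's signer (the invoke URL, region, and object key are placeholders):

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Placeholders: substitute your API's invoke URL, stage, and object key.
url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/s3/my-file.txt"
body = b"hello"

creds = boto3.Session().get_credentials()

# Methods with AWS_IAM auth expect a SigV4 signature for the "execute-api"
# service; an unsigned request gets "Missing Authentication Token".
aws_request = AWSRequest(method="PUT", url=url, data=body)
SigV4Auth(creds, "execute-api", "us-east-1").add_auth(aws_request)

response = requests.put(url, data=body, headers=dict(aws_request.headers))
print(response.status_code, response.text)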
BTW, the path override for S3 PUTs needs to be {bucket}/{key}, not just {key}. You may need to create a two-level hierarchy with bucket as the parent, or just hardcode the bucket name in the path override if it will always be the same. See: http://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html
