I'm fairly new to Terraform and I think I am misunderstanding something about count and count.index usage.
I am creating some EC2 instances using the count parameter and it works fine:
resource "aws_instance" "server" {
ami = data.aws_ami.app_ami.id
instance_type = "t2.micro"
key_name = "DeirdreKey"
subnet_id = aws_subnet.my_subnet_a.id
count = 2
tags = {
Name = "server.${count.index}"
}
I want to associate a security group with both instances, so I created the below:
resource "aws_network_interface_sg_attachment" "sg_attachment" {
security_group_id = aws_security_group.allow_internet.id
network_interface_id = aws_instance.server.primary_network_interface_id
}
However, I am running into this error:
Error: Missing resource instance key
on lb.tf line 57, in resource "aws_network_interface_sg_attachment" "sg_attachment":
57: network_interface_id = aws_instance.server.primary_network_interface_id
Because aws_instance.server has "count" set, its attributes must be
accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
aws_instance.server[count.index]
I understand what the error is complaining about. It's because the resource I am referring to is not unique, since I created a count of 2 AWS instances called "server". I don't know how to fix it though. I tried the below:
resource "aws_network_interface_sg_attachment" "sg_attachment" {
security_group_id = aws_security_group.allow_internet.id
network_interface_id = aws_instance.server[count.index].primary_network_interface_id
But then I get the below error:
Error: Reference to "count" in non-counted context
on lb.tf line 53, in resource "aws_network_interface_sg_attachment" "sg_attachment":
53: network_interface_id = aws_instance.server[count.index].primary_network_interface_id
The "count" object can only be used in "module", "resource", and "data"
blocks, and only when the "count" argument is set.
Does this mean I have to introduce count.index into the local resource name? I tried it a few ways and it doesn't seem to work:
resource "aws_instance" "server${count.index}" {
You need a count argument on the resource to use count.index. Count arguments can get out of hand, so if you have multiple resources that logically need the same count, use a variable or local value:
locals {
  replications = 2
}
resource "aws_instance" "server" {
count = local.replications
ami = data.aws_ami.app_ami.id
instance_type = "t2.micro"
key_name = "DeirdreKey"
subnet_id = aws_subnet.my_subnet_a.id
tags = {
Name = "server.${count.index}"
}
}
resource "aws_network_interface_sg_attachment" "sg_attachment" {
count = local.replications
security_group_id = aws_security_group.allow_internet.id
network_interface_id = aws_instance.server[count.index].primary_network_interface_id
}
This creates one security group attachment per server. It also gives you a list of servers you can reference as aws_instance.server[0] and aws_instance.server[1], and a list of attachments that you can reference in the same way.
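If it helps, here is a minimal sketch (the output name is mine, not part of the original config) showing how the counted instances can be referenced elsewhere, for example with a splat expression in an output:

# Hypothetical output, just to illustrate referencing the counted instances;
# the [*] splat expands to every instance created by count.
output "server_private_ips" {
  value = aws_instance.server[*].private_ip
}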
Related
How to create an S3 bucket policy for multiple existing, manually created (not through Terraform) S3 buckets using Terraform
For example: I have buckets A, B, and C created manually, and now I want to add an S3 bucket policy for all 3 buckets. How can I achieve this through Terraform? Can we use some sort of loop here? Please advise.
Do you want the same policy applied to each bucket? That doesn't really work, because the policies need to include the bucket name in the resource section. It's a weird limitation of S3 bucket policies that I've never quite understood: you can't use a wildcard (*) for the bucket name.
Anyways, you could do something like this, where you dynamically set the bucket name in the policy.
I'm just typing this from memory so please excuse any syntax errors.
locals {
  // The names of the buckets you want to apply policies to
  buckets = [
    "mybucket",
    "anotherbucket",
  ]
}

// Create a unique policy for each bucket, using
// the bucket name in the policy.
data "aws_iam_policy_document" "bucket_policies" {
  for_each = toset(local.buckets)

  statement {
    actions = [
      "s3:PutObject"
    ]
    resources = [
      "arn:aws:s3:::${each.key}/*"
    ]
    principals {
      type = "AWS"
      identifiers = [
        var.my_user_arn
      ]
    }
  }
}

// Apply each policy to its respective bucket
resource "aws_s3_bucket_policy" "policies" {
  for_each = toset(local.buckets)

  bucket = each.key
  policy = data.aws_iam_policy_document.bucket_policies[each.key].json
}
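As a quick sanity check, a hypothetical output like the one below (the output name is mine, not part of the original answer) renders the generated policy for one bucket so you can confirm the bucket name was substituted correctly:

// Hypothetical output for inspecting the rendered policy of a single bucket
output "mybucket_policy_json" {
  value = data.aws_iam_policy_document.bucket_policies["mybucket"].json
}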
I tried the following pre-request scripts:
var name = '{{$randomFirstName}}';
pm.globals.set("Firstname", name);
var Lname = '{{$randomLastName}}';
pm.globals.set("Lastname", Lname);
pm.variables.set("name",pm.variables.replaceIn("{{$randomFirstName}}") )
pm.variables.set("name",pm.variables.replaceIn("{{$randomLastName}}") )
pm.collectionvariables.set("name",pm.variables.replaceIn("{{$randomFirstName}}") );
pm.collectionvariables.set("name",pm.variables.replaceIn("{{$randomLastName}}") );
I used name and Lname in the request body:
Service1
"firstName": "{{name}}",
"lastName": "{{Lname}}",
The request is throwing errors on all of the scripts, and different names are generated for each request. These services are related to one employee.
It's a dynamically changing value so each time you use it, it will set a new value in the variable.
Set it once in the pre-request to a local variable and then use that to set the global variable.
var name = pm.variables.replaceIn("{{$randomFirstName}}");
pm.globals.set("Firstname", name);
var Lname = pm.variables.replaceIn("{{$randomLastName}}");
pm.globals.set("Lastname", Lname);
Then use the {{Firstname}} and {{Lastname}} syntax in the other requests.
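For example, the Service1 request body from the question would then look something like this (a sketch using the global variable names set above):

{
  "firstName": "{{Firstname}}",
  "lastName": "{{Lastname}}"
}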
I'm trying to ingest data with client.IngestFromStreamAsync or client.IngestFromStream from my C# application, but I always get the following error:
MonitoredActivityContext=(ActivityType=KustoManagedStreamingIngestClient.IngestFromStream, Timestamp=2021-02-08T17:57:52.0547851Z, ParentActivityId=570e5455-1c3d-4cb4-82ff-a06761e66a30, TimeSinceStarted=1921,1077 [ms])IngestionSourceId=f182ab29-812a-4b44-9d06-1951c7aa972f
IngestionSource=Stream
Error=Not Found (404-NotFound): . This normally represents a permanent error, and retrying is unlikely to help.
Error details:
DataSource='https://table.southcentralus.kusto.windows.net/v1/rest/ingest/bla/table?streamFormat=json&mappingName=JsonMapping',
DatabaseName=,
ClientRequestId='KI.KustoManagedStreamingIngestClient.IngestFromStream.ad8c8892-7495-483d-90bc-8585483445fa;73864817-246c-479c-a2da-138aca01b9a2;f182ab29-812a-4b44-9d06-1951c7aa972f',
ActivityId='00000000-0000-0000-0000-000000000000,
Timestamp='2021-02-08T17:57:53.9596167Z'.
This is how I define the ingestion mapping:
var kustoIngestionProperties = new KustoIngestionProperties(databaseName: databaseName, tableName: rtable)
{
    Format = DataSourceFormat.json,
    IngestionMapping = new IngestionMapping()
    {
        IngestionMappingReference = "JsonMapping",
        IngestionMappingKind = Kusto.Data.Ingestion.IngestionMappingKind.Json
    }
};
Before referencing the mapping I create it like this:
.create table ingest_table ingestion json mapping 'JsonMapping' '[{"column":"Timestamp","Properties":{"path":"$.Timestamp"}},{"column":"AskRatioVar","Properties":{"path":"$.AskRatioVar"}},{"column":"score_BidRatioVar","Properties":{"path":"$.score_BidkRatioVar"}}]'
Any ideas what could cause the error?
All the streaming examples seem to be outdated here: https://github.com/Azure/azure-kusto-samples-dotnet/tree/master/client/StreamingIngestionSample
Thank you
You should verify that the names of the database, table and ingestion mapping you're passing actually exist in your cluster.
Specifically, in the activity you referenced, you reference an ingestion mapping named JsonAnomalyMapping1, whereas the table only has a mapping named JsonAnomalyMapping.
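As a rough sketch of how to verify this (the table name ingest_table is taken from the question; adjust it to your own), you can run these control commands against the target database:

// List the JSON ingestion mappings defined on the table and check the exact name
.show table ingest_table ingestion json mappings

// Confirm the table itself exists in the database you are targeting
.show tables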
I'm now implementing a program to migrate a large amount of data to ADX, based on the Ingest from Storage feature of ADX, and I need to check the status of each ingestion request when it finishes, but I'm facing an issue.
Based on the MS documentation here:
If I set persistDetails = true, for example with the command below, it should save the ingestion status, but currently this setting does not seem to work (with or without it):
.ingest async into table MigrateTable
(
    h'correct blob url link'
)
with (
    jsonMappingReference = 'table_mapping',
    format = 'json',
    persistDetails = true
)
The above command returns an OperationId, and when I use it to check the status after the ingest task finishes, I always get this error message:
Error An admin command cannot be executed due to an invalid state: State='Operation 'DataIngestPull' does not persist its operation results' clientRequestId: KustoWebV2;
Can someone clarify the root cause of this for me? To me it seems like a bug in ADX.
1. Ingesting data directly against the Data Engine, by running .ingest commands, is usually not recommended, compared to using Queued Ingestion (motivation included in the link). Using Kusto's ingestion client library allows you to track the ingestion status.
2. Some tools/services already do that for you, and you can consider using them directly, e.g. LightIngest or Azure Data Factory.
3. If you don't follow option 1, you can still look for the state/status of your command using the operation ID you get when using the async keyword, by using .show operations (a short sketch follows this list).
4. You can also use the client request ID to filter the result set of .show commands to view the state/status of your command.
5. If you're interested in looking specifically at failures, .show ingestion failures is also available for you.
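Here is a minimal sketch of points 3 and 4 (the operation ID and client request ID below are placeholders, not values from your run):

// Check the state/status of an async command by its operation ID (placeholder guid)
.show operations 00000000-0000-0000-0000-000000000000

// Or filter .show commands by the client request ID (placeholder value)
.show commands
| where ClientActivityId == 'your-client-request-id'
| project StartedOn, State, FailureReason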
The persistDetails option you specified in your .ingest command actually has no effect - as mentioned in the docs:
Not all control commands persist their results, and those that do usually do so by default on asynchronous executions only (using the async keyword). Please search the documentation for the specific command and check if it does (see, for example data export).
============ Updated sample code following the suggestion from Yoni ============
It turns out another member of my team had messed up the access rights in ADX; after fixing that, everything works fine.
I just have one concern related to PartiallySucceeded that needs clarification from #yoni or someone with better knowledge of it.
try
{
    var ingestProps = new KustoQueuedIngestionProperties(model.DatabaseName, model.IngestTableName)
    {
        ReportLevel = IngestionReportLevel.FailuresAndSuccesses,
        ReportMethod = IngestionReportMethod.Table,
        FlushImmediately = true,
        JSONMappingReference = model.IngestMappingName,
        AdditionalProperties = new Dictionary<string, string>
        {
            { "jsonMappingReference", $"{model.IngestMappingName}" },
            { "format", "json" }
        }
    };
    var sourceId = Guid.NewGuid();
    var clientResult = await IngestClient.IngestFromStorageAsync(model.FileBlobUrl, ingestProps, new StorageSourceOptions
    {
        DeleteSourceOnSuccess = true,
        SourceId = sourceId
    });

    // Poll the ingestion status until it leaves the Pending state
    var ingestionStatus = clientResult.GetIngestionStatusBySourceId(sourceId);
    while (ingestionStatus.Status == Status.Pending)
    {
        await Task.Delay(WaitingInterval);
        ingestionStatus = clientResult.GetIngestionStatusBySourceId(sourceId);
    }

    if (ingestionStatus.Status == Status.Succeeded)
    {
        return true;
    }

    LogUtils.TraceError(_logger, $"Error when ingest blob file events, error: {ingestionStatus.ErrorCode.FastGetDescription()}");
    return false;
}
catch (Exception e)
{
    return false;
}
I already have a resource created (a DynamoDB table) using this:
resource "aws_dynamodb_table" "my_dynamo_table" {
name = "my_table"
hash_key = "Id"
}
Now I would like to enable streams on this table. If I do this by updating the above resource like this:
resource "aws_dynamodb_table" "my_dynamo_table" {
name = "my_table"
hash_key = "Id"
stream_enable = true
stream_view_type = "NEW_IMAGE"
}
Terraform plans to delete the table and re-create it. Is there any other way I can just update the table to enable the stream?