Application Gateway: ResourceNotFound during AKS provisioning (bicep) - azure-resource-manager

When we deploy our environment (e.g. AKS and Application Gateway) through bicep, we sometimes get this error during AKS provisioning:
{
  "status": "Failed",
  "error": {
    "code": "ResourceNotFound",
    "message": "The Resource 'Microsoft.Network/applicationGateways/xxx-agw' under resource group 'xxx-rg' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"
  }
}
We create the Application Gateway in the same bicep file as the AKS cluster; the Application Gateway is referenced in the bicep code of the AKS like this:
addonProfiles: {
  ingressApplicationGateway: {
    enabled: true
    config: {
      applicationGatewayId: applicationGateway.id
      effectiveApplicationGatewayId: applicationGateway.id
    }
  }
}
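For completeness, a trimmed sketch of how the Application Gateway itself is declared in the same file (illustrative API version; most properties omitted), so the only ordering between the two resources should be the implicit dependency Bicep derives from the applicationGateway.id reference:

// Trimmed sketch: illustrative API version; required properties such as the SKU,
// gatewayIPConfigurations and listeners are omitted here.
resource applicationGateway 'Microsoft.Network/applicationGateways@2021-05-01' = {
  name: 'xxx-agw'
  location: location
  properties: {
    // sku, gatewayIPConfigurations, frontendIPConfigurations, listeners, requestRoutingRules, ... omitted
  }
}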
When we run the bicep file again, everything works. So is this a timing issue, or am I missing something?
Thanks,
Peter

Related

Ingest pipeline is not working over logs obtained from an event hub with filebeat

I am sending logs to an Azure Event Hub with Serilog (using WriteTo.AzureEventHub(eventHubClient)); after that I am running a filebeat process with the azure module enabled, so these logs are sent to Elasticsearch and I can explore them with Kibana.
The problem I have is that all the information goes into the "message" field; I need to separate the information in my logs into different fields to be able to run good queries.
The way I found was to create an ingest pipeline in Kibana and, through a grok processor, separate the fields inside the "message" and generate multiple fields. In filebeat.yml I set the pipeline name, but nothing happens; it seems the pipeline is not working.
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipeline: "filebeat-otc"
Does anybody know what I am missing? Thanks in advance.
EDIT: I will add an example of my pipeline and my data. In the simulation it works properly:
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "%{TIME:timestamp}\\s%{LOGLEVEL}\\s{[a-zA-Z]*:%{UUID:CorrelationID},[a-zA-Z]*:%{TEXT:OperationTittle},[a-zA-Z]*:%{TEXT:OriginSystemName},[a-zA-Z]*:%{TEXT:TargetSystemName},[a-zA-Z]*:%{TEXT:OperationProcess},[a-zA-Z]*:%{TEXT:LogMessage},[a-zA-Z]*:%{TEXT:ErrorMessage}}"
          ],
          "pattern_definitions": {
            "LOGLEVEL": "\\[[^\\]]*\\]",
            "TEXT": "[a-zA-Z0-9- ]*"
          }
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "15:13:59 [INF] {CorrelationId:83355884-a351-4c8b-af8d-b77c48462f36,OperationTittle:Operation1,OriginSystemName:Fexa,TargetSystemName:Usina,OperationProcess:Testing Log Data,LogMessage:Esto es una buena prueba,ErrorMessage:null}"
      }
    },
    {
      "_source": {
        "message": "20:13:48 [INF] {CorrelationId:8451ee54-efca-40be-91c8-8c8e18e33f58,OperationTittle:null,OriginSystemName:Fexa,TargetSystemName:Donna,OperationProcess:Testing Log Data,LogMessage:null,ErrorMessage:null}"
      }
    }
  ]
}
It seems that when you use a module, it creates and uses its own ingest pipeline in Elasticsearch, and the pipeline option in the output is ignored.
So my solution was to modify index.final_pipeline. To do this, in Kibana I went to Stack Management / Index Management, found my index, went to Edit Settings, and set "index.final_pipeline": "the-name-of-my-pipeline".
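The same setting can also be applied from the Dev Tools console instead of the UI; a minimal sketch (the index name is a placeholder for whatever filebeat index you have):

PUT /my-filebeat-index/_settings
{
  "index.final_pipeline": "filebeat-otc"
}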
I hope this helps somebody.
This was thanks to leandrojmp.

Serilog - Add "severity" property to top level of LogEvent for GKE?

I'm using Serilog with the Serilog.Formatting.Json.JsonFormatter formatter in a .NET Core app in GKE. I am logging to Console, which is read by a GKE Logging agent. The GKE logging agent expects a "severity" property at the top level of the Log Event: GCP Cloud Logging LogEntry docs
Because of this, all of my logs show up in GCP Logging with severity "Info", as the Serilog Level is found in the jsonPayload property of the LogEntry in GCP. Here is an example LogEntry as seen in Cloud Logging:
{
  insertId: "1cu507tg3by7sr1"
  jsonPayload: {
    Properties: {
      SpanId: "|a85df301-4585ee48ea1bc1d1."
      ParentId: ""
      ConnectionId: "0HM64G0TCF3RI"
      RequestPath: "/health/live"
      RequestId: "0HM64G0TCF3RI:00000001"
      TraceId: "a85df301-4585ee48ea1bc1d1"
      SourceContext: "CorrelationId.CorrelationIdMiddleware"
      EventId: {2}
    }
    Level: "Information"
    Timestamp: "2021-02-03T17:40:28.9343987+00:00"
    MessageTemplate: "No correlation ID was found in the request headers"
  }
  resource: {2}
  timestamp: "2021-02-03T17:40:28.934566174Z"
  severity: "INFO"
  labels: {3}
  logName: "projects/ah-cxp-common-gke-np-946/logs/stdout"
  receiveTimestamp: "2021-02-03T17:40:32.020942737Z"
}
My first thought was to add a "Severity" property using an Enricher:
class SeverityEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        logEvent.AddOrUpdateProperty(
            propertyFactory.CreateProperty("Severity", LogEventLevel.Error));
    }
}
The generated log looks like this in GCP, and is still tagged as Info:
{
  insertId: "wqxvyhg43lbwf2"
  jsonPayload: {
    MessageTemplate: "test error!"
    Level: "Error"
    Properties: {
      severity: "Error"
    }
    Timestamp: "2021-02-03T18:25:32.6238842+00:00"
  }
  resource: {2}
  timestamp: "2021-02-03T18:25:32.623981268Z"
  severity: "INFO"
  labels: {3}
  logName: "projects/ah-cxp-common-gke-np-946/logs/stdout"
  receiveTimestamp: "2021-02-03T18:25:41.029632785Z"
}
Is there any way in Serilog to add the "severity" property at the same level as "jsonPayload" instead of inside it? I suspect GCP would then pick it up and log the error type appropriately.
As a last resort I could probably use a GCP Logging sink, but my current setup is much more convenient and performant with the GKE Logging Agent already existing.
Here's a relevant Stack Overflow post with no information or advice past what I already have, which is not enough to solve this: https://stackoverflow.com/questions/57215700
I found the following information detailing how each Serilog level maps to a Stackdriver log level; the table below might also help you:
Serilog        Stackdriver
Verbose        Debug
Debug          Debug
Information    Info
Warning        Warning
Error          Error
Fatal          Critical
The complete information can be found at the following link
https://github.com/manigandham/serilog-sinks-googlecloudlogging#log-level-mapping
I think this code could help you to make Stackdriver recognize the severity of the logs given by Serilog.
private static LogSeverity TranslateSeverity(LogEventLevel level) => level switch
{
    LogEventLevel.Verbose => LogSeverity.Debug,
    LogEventLevel.Debug => LogSeverity.Debug,
    LogEventLevel.Information => LogSeverity.Info,
    LogEventLevel.Warning => LogSeverity.Warning,
    LogEventLevel.Error => LogSeverity.Error,
    LogEventLevel.Fatal => LogSeverity.Critical,
    _ => LogSeverity.Default
};
I will leave the link to the complete code here
https://github.com/manigandham/serilog-sinks-googlecloudlogging/blob/master/src/Serilog.Sinks.GoogleCloudLogging/GoogleCloudLoggingSink.cs#L251
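If you do end up going through that sink rather than the GKE logging agent, a minimal wiring sketch could look like this (GoogleCloudLoggingSinkOptions and the WriteTo.GoogleCloudLogging extension are taken from that repo's README, so treat the exact names as assumptions):

// Minimal sketch: send events through the Google Cloud Logging sink so that
// TranslateSeverity maps the Serilog level to a top-level LogEntry severity
// instead of leaving it inside jsonPayload.Properties.
using Serilog;
using Serilog.Sinks.GoogleCloudLogging;

var options = new GoogleCloudLoggingSinkOptions { ProjectId = "ah-cxp-common-gke-np-946" };

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.GoogleCloudLogging(options)
    .CreateLogger();

Log.Error("test error!"); // should show up in Cloud Logging with severity: "ERROR"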
Greetings!

Running into AWS Elastic Beanstalk Event Error: Manifest file has schema validation errors

I am setting up pipelines to AWS Elastic Beanstalk via Bitbucket and I am running into:
Manifest file has schema validation errors:
Error Kind: ArrayItemNotValid, Path: #/aspNetCoreWeb.[0], Property: [0]
Error Kind: PropertyRequired, Path: #/parameters.appBundle, Property: appBundle
Error Kind: NoAdditionalPropertiesAllowed, Path: #/parameters, Property: parameters
It seems that I have a problem with my manifest file. However, due to there being very little documentation on how to fix it, I am not able to resolve the issue. How do I solve this problem?
Here is my aws-windows-deployment-manifest file:
{
  "manifestVersion": 1,
  "deployments": {
    "aspNetCoreWeb": [
      {
        "name": "CareerDash",
        "parameters": {
          "archive": "site",
          "iisPath": "/"
        }
      }
    ]
  }
}
Looks like I figured it out. The issue is that the aws-windows-deployment-manifest.json file should be like the following:
{
  "manifestVersion": 1,
  "deployments": {
    "aspNetCoreWeb": [
      {
        "name": "CareerDash",
        "parameters": {
          "appBundle": "site.zip", /* This is your web app bundle: the published web app folder packaged as a .zip file. */
          "iisPath": "/" /* This is the path to where your web app files are located inside site.zip, specifically the path to the web.config file (which should be at the same level as your main web app files). */
        }
      }
    ]
  }
}
Overall, your app bundle should be a zip file that contains the site.zip file and the aws-windows-deployment-manifest.json file, in a hierarchy like so:
appBundleName.zip
  site.zip
  aws-windows-deployment-manifest.json
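For example, a rough packaging sketch in PowerShell (assuming a standard dotnet publish output; adjust the paths and names to your project):

# Publish the web app, zip the publish output as site.zip, then bundle the
# site archive together with the manifest into the zip that gets deployed.
dotnet publish -c Release -o .\site
Compress-Archive -Path .\site\* -DestinationPath .\site.zip
Compress-Archive -Path .\site.zip, .\aws-windows-deployment-manifest.json -DestinationPath .\appBundleName.zip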

OpenStack API Authentication

OpenStack noob here. I have set up an Ubuntu VM with DevStack, and am trying to authenticate with Keystone to obtain a token to be used for subsequent OpenStack API calls. The identity endpoint shown on the "API Access" page in Horizon is: http://<DEVSTACK_IP>/identity.
When I post the below JSON payload to this endpoint, I get the error get_version_v3() got an unexpected keyword argument 'auth'.
{
  "auth": {
    "identity": {
      "methods": [
        "password"
      ],
      "password": {
        "user": {
          "name": "admin",
          "domain": {
            "name": "Default"
          },
          "password": "AdminPassword"
        }
      }
    }
  }
}
Based on the OpenStack docs, I should be hitting http://<DEVSTACK_IP>/v3/auth/tokens to obtain a token, but when I hit that endpoint, I get 404 Not Found.
I'm currently using Postman for testing this, but will eventually be doing it programmatically.
Does anybody have any experience with authenticating against the OpenStack API who can help?
Not sure whether you want to do it the Python way, but if you do, here is a way to do it:
from keystoneauth1.identity import v3
from keystoneauth1 import session

v3_auth = v3.Password(auth_url=V3_AUTH_URL,
                      username=USERNAME,
                      password=PASSWORD,
                      project_name=PROJECT_NAME,
                      project_domain_name="default",
                      user_domain_name="default")

v3_ses = session.Session(auth=v3_auth)
auth_token = v3_ses.get_token()
And your V3_AUTH_URL should be http://<DEVSTACK_IP>:5000/v3, since Keystone uses port 5000 by default.
If you have a multi-domain DevStack you can change the domains; otherwise they should be default.
Just in case you don't have the client library installed: pip install python-keystoneclient
Here is a good doc for you to read about it:
https://docs.openstack.org/keystoneauth/latest/using-sessions.html
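If you want to keep testing from Postman or curl first, the same password payload works against the v3 tokens endpoint on port 5000, and the issued token comes back in the X-Subject-Token response header. A rough sketch (IP is a placeholder, payload abbreviated):

curl -i -X POST http://<DEVSTACK_IP>:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "admin", "domain": {"name": "Default"}, "password": "AdminPassword"}}}}}'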
HTH

INVALID_ARGUMENT (400 error) when calling Stackdriver Error Reporting API

When trying to invoke the Stackdriver Error Reporting API (via the API explorer or via the Client-Side JavaScript library), I receive the following error:
Request:
{ "message" : "test" }
Response:
{
  "error": {
    "code": 400,
    "message": "Request contains an invalid argument.",
    "status": "INVALID_ARGUMENT"
  }
}
The Stackdriver Error Reporting API is enabled and I have Owner rights to the App Engine project.
Is the API simply not functional? If I'm doing something wrong, can someone try to help?
The documentation for reporting events says that a ServiceContext is required.
If you're only sending a message (not a stacktrace / exception) you'll need to include a context with a reportLocation as well. This is noted in the documentation of the message field, but it's not obvious.
The following works from the API explorer:
{
  "context": {
    "reportLocation": {
      "functionName": "My Function"
    }
  },
  "message": "error message",
  "serviceContext": {
    "service": "My Microservice"
  }
}
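For reference, the same payload can also be sent outside the API explorer by posting it to the events:report method; a rough curl sketch (project ID is a placeholder, and I'm assuming gcloud is set up to mint an access token):

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @payload.json \
  "https://clouderrorreporting.googleapis.com/v1beta1/projects/YOUR_PROJECT_ID/events:report"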
You might be interested in the docs on How Errors are Grouped too.
FWIW, I work on this product and I think the error message is too generic. The problem, I believe, is that the serving stack scrubs messages unless they're annotated as being for public consumption. I'll chase that down.
