Openstack database create instance BadRequestException - openstack

I used the OpenStack SDK to create a new database instance, but it raises a BadRequestException:
BadRequestException: 400: Client Error for url:
[URL]:8779/v1.0/ab77ed8ae3f744f4baf4fb7bc97848cc/instances,
Resource None cannot be found.
My data:
data: dict = {
    'name': 'nhinn-db-01',
    'nics': [{'net-id': 'd57e7864-5961-4a64-a5ce-005e05d71ccf'}],
    'datastore': {'type': 'mysql', 'version': '5.7.31'},
    'flavor': flavor,  # a Flavor object
}
conn: Connection
conn.database.create_instance(**data)
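"Resource None cannot be found" usually means the server could not resolve one of the referenced resources. One common cause is passing the whole Flavor object where the API expects the flavor's ID. A minimal sketch of that guess (the `flavor_id` key is an assumption — check the attribute name in your SDK version; the other values are taken from the question):

```python
# Sketch: pass the flavor's ID string instead of the Flavor object.
# Assumes `conn` is an authenticated openstack.connection.Connection
# and `flavor` is the Flavor object from the question.

def build_instance_payload(flavor_id: str) -> dict:
    """Build the create_instance payload with a plain flavor ID."""
    return {
        'name': 'nhinn-db-01',
        'nics': [{'net-id': 'd57e7864-5961-4a64-a5ce-005e05d71ccf'}],
        'datastore': {'type': 'mysql', 'version': '5.7.31'},
        'flavor_id': flavor_id,  # ID string, not the Flavor object (assumed key)
    }

# conn.database.create_instance(**build_instance_payload(flavor.id))
```

If that still fails, verify the network ID the same way — the "Resource None" wording suggests one of the looked-up resources came back empty.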

Related

Flutter: GraphQLError: invalid input syntax for type uuid: ""

I am new to GraphQL and Hasura. I am acquiring a Firebase user JWT and passing it to a Hasura GraphQL endpoint, but I'm receiving these error messages:
GTMSessionFetcher invoking fetch callbacks, data {length = 3322, bytes = 0x7b0a2020 22616363 6573735f 746f6b65 ... 31303535 220a7d0a }, error (null)
flutter: OperationException(linkException: null, graphqlErrors: [GraphQLError(message: invalid input syntax for type uuid: "", locations: null, path: null, extensions: {path: $.selectionSet.insert_member_one.args.object, code: data-exception})])
You are getting this error because the URL passed when initializing the GraphQL client is wrong or contains special characters. Try using
Uri.parse('YOUR_URL_HERE')
in the GraphQLClient().
An empty string is (1) not null, if it's a nullable field, and (2) not a valid UUID (v4 at least; I ran into this same problem). Use 00000000-0000-0000-0000-000000000000 as the nil/empty UUID.
https://www.uuidgenerator.net/version-nil
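For reference, the nil UUID can be produced programmatically instead of hard-coding the string; a quick Python sketch:

```python
import uuid

# The nil UUID: all 128 bits zero. A valid value for a uuid column,
# unlike an empty string.
nil = uuid.UUID(int=0)
print(str(nil))  # 00000000-0000-0000-0000-000000000000
```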

The AWS Lambda returns "Error: connect ETIMEDOUT **.****.***.***:443"

I have two services, an Admin Panel (Laravel) and an Online Shop (WooCommerce). The connection between these two services is implemented with AWS Lambda.
When I try to send a product-update request from my admin panel to the online shop, from time to time the Lambda cannot connect to the WooCommerce API.
When the system fails to update the product, the Lambda returns the error "Error: connect ETIMEDOUT".
I originally thought that WordPress didn't have enough time for the update process, so I increased the Lambda's timeout (60000 ms). But it didn't help; I still found ETIMEDOUT errors in the logs.
By the way, the time between sending the update request to WooCommerce and the error appearing is about 2 minutes. If I understand correctly, the Lambda had enough time to get an answer from WooCommerce.
Another strange thing: according to the Lambda's logs, at the moment the Lambda got the error, the WooCommerce API was available. It seems as if something cuts off the internet connection while the Lambda is sending the request.
My question is: why can't the Lambda send the request to the WooCommerce API, and why does it happen only from time to time?
P.S. Below I added an example of the Lambda's logs.
The log at the start of sending the update request:
2021-08-14T18:23:48.692Z b228455b-45a8-5cbf-8160-1cc INFO Inside edit Online List {
  status: '1',
  *********
  is_delete: 0,
  name: 'Omega Speedmaster Moonwatch Chronograph 42mm ',
  price_on_request: 0,
  on_sale: 0
}
The log with the error:
2021-08-14T18:25:58.299Z b228455b-45a8-5cbf-8aae6 INFO WooCommerce editOnlineStock err::: { Error: connect ETIMEDOUT ***.****.***.***:443
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1107:14)
  errno: 'ETIMEDOUT',
  code: 'ETIMEDOUT',
  syscall: 'connect',
  address: '***.****.***.***',
  port: 443,
  config:
   { url: 'https://domain/wp-json/wc/v3/products/*****',
     method: 'put',
     params: {},
     data: '{"name":"Omega Speedmaster Moonwatch Chronograph 42mm ","type":"simple"***********',
     headers:
      { Accept: 'application/json',
        'Content-Type': 'application/json;charset=utf-8',
        'User-Agent': 'WooCommerce REST API - JS Client/1.0.1',
        'Content-Length': 681 },
     auth:
      { username: 'ck_************',
        password: 'cs_************' },
     transformRequest: [ [Function: transformRequest] ],
     transformResponse: [ [Function: transformResponse] ],
     timeout: 60000,
     adapter: [Function: httpAdapter],
     responseType: 'json',
     xsrfCookieName: 'XSRF-TOKEN',
     xsrfHeaderName: 'X-XSRF-TOKEN',
     ****************************
Is your Lambda associated with a VPC? If so, (i) check whether the VPC has a route out to the internet via a NAT gateway/instance, and (ii) examine the VPC flow logs for errors.
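To illustrate check (i): a sketch that inspects route-table data of the shape `ec2 describe-route-tables` returns (fetching the data with boto3 is left out so the check itself stays self-contained; the IDs below are made up):

```python
def has_internet_route(route_table: dict) -> bool:
    """Return True if this route table sends 0.0.0.0/0 to a NAT gateway,
    NAT instance, or internet gateway."""
    for route in route_table.get('Routes', []):
        if route.get('DestinationCidrBlock') != '0.0.0.0/0':
            continue  # local / intra-VPC routes are irrelevant here
        target = (route.get('NatGatewayId')
                  or route.get('InstanceId')   # NAT instance
                  or route.get('GatewayId'))   # igw-... (public subnet only)
        if target:
            return True
    return False

# Example shape, mimicking ec2.describe_route_tables():
rt = {'Routes': [
    {'DestinationCidrBlock': '10.0.0.0/16', 'GatewayId': 'local'},
    {'DestinationCidrBlock': '0.0.0.0/0', 'NatGatewayId': 'nat-0abc123'},
]}
print(has_internet_route(rt))  # True
```

Note that a Lambda in a VPC cannot use an internet gateway directly (it has no public IP), so for the subnets the Lambda runs in, the 0.0.0.0/0 route must point at a NAT gateway or NAT instance.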

serverless step functions: Getting error when passing more than one fields in the payload for lambda

Error: Invalid State Machine Definition: 'SCHEMA_VALIDATION_FAILED: The value for the field 'Date.$' must be a valid JSONPath at /States/Insert Data Dynamodb/Parameters' (Service: AWSStepFunctions; Status Code: 400; Error Code: InvalidDefinition;
Below is the corresponding serverless.yaml code.
I tried wrapping the two parameters into an encoded JSON string and passing it as a single payload field, and it resulted in the same error. But when there is only one plain field in the payload, this code deploys successfully.
Any suggestions on how to pass two parameters?
service: service-name
frameworkVersion: '2'
provider:
  name: aws
  runtime: go1.x
  lambdaHashingVersion: 20201221
  stage: ${opt:stage, self:custom.defaultStage}
  region: us-east-1
  tags: ${self:custom.tagsObject}
  logRetentionInDays: 1
  timeout: 10
  deploymentBucket: lambda-repository
  memorySize: 128
  tracing:
    lambda: true
plugins:
  - serverless-step-functions
configValidationMode: error
stepFunctions:
  stateMachines:
    sortData:
      name: datasorting-dev
      type: STANDARD
      role: ${self:custom.datasorting.${self:provider.stage}.iam}
      definition:
        Comment: "Data Sort"
        StartAt: Query Data
        States:
          Query Data:
            Type: Task
            Resource: arn:aws:states:::athena:startQueryExecution.sync
            Parameters:
              QueryString: >-
                select * from table.data
              WorkGroup: primary
              ResultConfiguration:
                OutputLocation: s3://output/location
            Next: Insert Data Dynamodb
          Insert Data Dynamodb:
            Type: Task
            Resource: arn:aws:states:::lambda:invoke
            Parameters:
              FunctionName: arn:aws:lambda:us-east-1:${account-id}:function:name
              Payload:
                OutputLocation.$: $.QueryExecution.ResultConfiguration.OutputLocation
                Date.$: ${self:custom.dates.year}${self:custom.dates.month}${self:custom.dates.day}
            End: true
Your Date.$ property has a value of ${self:custom.dates.year}${self:custom.dates.month}${self:custom.dates.day}. Let's assume that:
const dates = {
  "year": "2000",
  "month": "01",
  "day": "20"
}
The result will be Date.$: "20000120", which is not a valid JSONPath.
A JSONPath needs to start with a $ sign, and each level is separated by a ..
Do you want to achieve something like $.2000.01.20?
As you can see, the issue is not with passing two parameters but with the invalid JSONPath string created by string interpolation for Date.$.
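A minimal fix sketch, assuming the date parts are static values resolved at deploy time: drop the .$ suffix so Step Functions treats the value as a literal string rather than a JSONPath (keep .$ only on keys whose values select from the state input):

```yaml
Payload:
  OutputLocation.$: $.QueryExecution.ResultConfiguration.OutputLocation
  Date: ${self:custom.dates.year}${self:custom.dates.month}${self:custom.dates.day}
```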
Some useful links:
https://github.com/json-path/JsonPath
https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-paths.html

Serilog- Add "severity" property to top level of LogEvent for GKE?

I'm using Serilog with the Serilog.Formatting.Json.JsonFormatter formatter in a .NET Core app in GKE. I am logging to Console, which is read by a GKE Logging agent. The GKE logging agent expects a "severity" property at the top level of the Log Event: GCP Cloud Logging LogEntry docs
Because of this, all of my logs show up in GCP Logging with severity "Info", as the Serilog Level is found in the jsonPayload property of the LogEntry in GCP. Here is an example LogEntry as seen in Cloud Logging:
{
  insertId: "1cu507tg3by7sr1"
  jsonPayload: {
    Properties: {
      SpanId: "|a85df301-4585ee48ea1bc1d1."
      ParentId: ""
      ConnectionId: "0HM64G0TCF3RI"
      RequestPath: "/health/live"
      RequestId: "0HM64G0TCF3RI:00000001"
      TraceId: "a85df301-4585ee48ea1bc1d1"
      SourceContext: "CorrelationId.CorrelationIdMiddleware"
      EventId: {2}
    }
    Level: "Information"
    Timestamp: "2021-02-03T17:40:28.9343987+00:00"
    MessageTemplate: "No correlation ID was found in the request headers"
  }
  resource: {2}
  timestamp: "2021-02-03T17:40:28.934566174Z"
  severity: "INFO"
  labels: {3}
  logName: "projects/ah-cxp-common-gke-np-946/logs/stdout"
  receiveTimestamp: "2021-02-03T17:40:32.020942737Z"
}
My first thought was to add a "Severity" property using an Enricher:
class SeverityEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        logEvent.AddOrUpdateProperty(
            propertyFactory.CreateProperty("Severity", LogEventLevel.Error));
    }
}
The generated log looks like this in GCP, and is still tagged as Info:
{
  insertId: "wqxvyhg43lbwf2"
  jsonPayload: {
    MessageTemplate: "test error!"
    Level: "Error"
    Properties: {
      severity: "Error"
    }
    Timestamp: "2021-02-03T18:25:32.6238842+00:00"
  }
  resource: {2}
  timestamp: "2021-02-03T18:25:32.623981268Z"
  severity: "INFO"
  labels: {3}
  logName: "projects/ah-cxp-common-gke-np-946/logs/stdout"
  receiveTimestamp: "2021-02-03T18:25:41.029632785Z"
}
Is there any way in Serilog to add the "severity" property at the same level as "jsonPayload" instead of inside it? I suspect GCP would then pick it up and log the error type appropriately.
As a last resort I could probably use a GCP Logging sink, but my current setup is much more convenient and performant with the GKE Logging Agent already existing.
Here's a relevant Stack Overflow post with no information or advice past what I already have, which is not enough to solve this: https://stackoverflow.com/questions/57215700
I found the following information detailing the mapping from each Serilog level to the Stackdriver log level; this table might also help you:

Serilog      Stackdriver
Verbose      Debug
Debug        Debug
Information  Info
Warning      Warning
Error        Error
Fatal        Critical
The complete information can be found at the following link
https://github.com/manigandham/serilog-sinks-googlecloudlogging#log-level-mapping
I think this code could help you make Stackdriver recognize the severity of the logs given by Serilog:
private static LogSeverity TranslateSeverity(LogEventLevel level) => level switch
{
    LogEventLevel.Verbose => LogSeverity.Debug,
    LogEventLevel.Debug => LogSeverity.Debug,
    LogEventLevel.Information => LogSeverity.Info,
    LogEventLevel.Warning => LogSeverity.Warning,
    LogEventLevel.Error => LogSeverity.Error,
    LogEventLevel.Fatal => LogSeverity.Critical,
    _ => LogSeverity.Default
};
I will leave the link to the complete code here:
https://github.com/manigandham/serilog-sinks-googlecloudlogging/blob/master/src/Serilog.Sinks.GoogleCloudLogging/GoogleCloudLoggingSink.cs#L251
Greetings!

How to map the API publisher query parameter values to the Backend URL resource path in WSO2 API M

I need to map the API publisher query parameter values to the backend URL resource path. My API publisher URL looks like
"https://102.20.10.3:8245/srno/v1.0/studentRecordNo?user_id=test29"
and the backend URL like
"http://localhost:8280/services/getStudRecNo.HTTPEndpoint/getsrno/{uri.var.user_id}"
When I execute it in Swagger, I get an error like the one below:
{
  "Fault": {
    "Code": "",
    "Reason": "DS Code: VALIDATION_ERROR\nNested Exception:-\njavax.xml.stream.XMLStreamException: DS Code: VALIDATION_ERROR\nSource Data Service:-\nName: getStudRecNo\nLocation: \\getStudRecNo.dbs\nDescription: get student unique id\r \r \r \nDefault Namespace: http://ws.wso2.org/dataservice\nCurrent Request Name: _getgetsrno_user_id\nCurrent Params: {user_id={studentRecordNo,test29}}\nNested Exception:-\nScalar type expected\nField Name: user_id\nField Value: {studentRecordNo,test29}\n\n",
    "Detail": ""
  }
}
Please help me.
