Creating Service Bus with exported ARM - Create/Update of a $Default rule is not allowed - azure-resource-manager

I have created "Standard" tier Service Bus in Azure Portal. I Created Topic and subscription as well.
I exported ARM template.
I execute with PowerShell to re-create Service Bus with ARM. I have changed name.
I do get Service Bus succesfully created to Azure, but I'm wondering I do I get these errors:
Should I remove rules like ""type": "Microsoft.ServiceBus/namespaces/networkRuleSets","
which were automatically added to Export of ARM?
ERROR:
New-AzResourceGroupDeployment : 10.02.54 - Resource Microsoft.ServiceBus/namespaces/topics/subscriptions/rules 'digiservicebusdev/newprofiletopic/newprofilesubscription/$Default' failed with message '{
"error": {
"message": "Create/Update of a $Default rule is not allowed. CorrelationId: 6e135ad8-1bf2-4a33-b088-ef6003c025be",
"code": "BadRequest"
}
}'
At C:\Azure\ServiceBusARM\SBARM.ps1:12 char:1
+ New-AzResourceGroupDeployment `
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [New-AzResourceGroupDeployment], Exception
+ FullyQualifiedErrorId :
Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzureResourceGroupDeploymentCmdlet
New-AzResourceGroupDeployment : 10.02.55 - Resource Microsoft.ServiceBus/namespaces/networkRuleSets 'servicebusdev/default' failed with message '{
"error": {
"message": "Network Rules are available only on a Premium Messaging namespace. CorrelationId: 11135ad8-1bf2-4a33-b088-ef6003c025be",
"code": "BadRequest"
}
}'
At C:\Azure\ServiceBusARM\SBARM.ps1:12 char:1
+ New-AzResourceGroupDeployment `
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [New-AzResourceGroupDeployment], Exception
+ FullyQualifiedErrorId :
Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzureResourceGroupDeploymentCmdlet

Should I remove resources like "type": "Microsoft.ServiceBus/namespaces/networkRuleSets", which were automatically added to the exported ARM template?
Yes, removing the (automatically generated) networkRuleSets resource from the ARM template is required when you want to deploy the Service Bus namespace in the Standard tier.
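For example, the exported template typically contains an entry along the following lines, which can simply be deleted before redeploying (the parameter name, apiVersion and properties are illustrative and will differ in your export):

{
  "type": "Microsoft.ServiceBus/namespaces/networkRuleSets",
  "apiVersion": "2017-04-01",
  "name": "[concat(parameters('namespaces_servicebusdev_name'), '/default')]",
  "dependsOn": [
    "[resourceId('Microsoft.ServiceBus/namespaces', parameters('namespaces_servicebusdev_name'))]"
  ],
  "properties": {
    "defaultAction": "Deny"
  }
}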

I tested on my side and can reproduce your problem.
Each newly created topic subscription has an initial default subscription rule. If you don't explicitly specify a filter condition for the rule, the applied filter is the true filter, which enables all messages to be selected into the subscription. The default rule has no associated annotation action.
I suspect that not creating the default rule would require a server-side change, hence the push back. For more details, you could refer to this issue.
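In practice, a common workaround on the template side is the same as for networkRuleSets: delete the exported rules resource for $Default and let the service create the default rule automatically when the subscription is created. The entry to look for resembles the following (names taken from the error above; apiVersion and properties are illustrative):

{
  "type": "Microsoft.ServiceBus/namespaces/topics/subscriptions/rules",
  "apiVersion": "2017-04-01",
  "name": "[concat(parameters('namespaces_digiservicebusdev_name'), '/newprofiletopic/newprofilesubscription/$Default')]",
  "dependsOn": [
    "[resourceId('Microsoft.ServiceBus/namespaces/topics/subscriptions', parameters('namespaces_digiservicebusdev_name'), 'newprofiletopic', 'newprofilesubscription')]"
  ],
  "properties": {
    "filterType": "SqlFilter",
    "sqlFilter": {
      "sqlExpression": "1=1"
    }
  }
}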

Related

Cloud tasks permission error when adding TASK_ID

I have been creating a task following the convention from the documentation - projects/PROJECT_ID/locations/LOCATION_ID/queues/QUEUE_ID - which in my real example would look something like this: projects/staging/locations/us-central1/queues/members. That is all working fine, but I wanted to add the TASK_ID so I can enable the de-duplication feature, and I used this: projects/PROJECT_ID/locations/LOCATION_ID/queues/QUEUE_ID/tasks/TASK_ID, which translates to something like this: projects/staging/locations/us-central1/queues/members/tasks/testing-id. When I try to use the TASK_ID I get the following error code:
{
  "message": "The principal (user or service account) lacks IAM permission \"cloudtasks.tasks.create\" for the resource \"projects\/staging\/locations\/us-central1\/queues\/members\/tasks\/testing-id\" (or the resource may not exist).",
  "code": 7,
  "status": "PERMISSION_DENIED",
  "details": [
    {
      "#type": "grpc-server-stats-bin",
      "data": "<Unknown Binary Data>"
    }
  ]
}
Why is this error happening? Why should adding the TASK_ID change what permissions I need?
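For context, with the Cloud Tasks REST API the full task name from that convention goes inside the task resource itself, while the create call is still made against the queue path; a rough sketch using the IDs from the question (the handler URL is a placeholder):

POST https://cloudtasks.googleapis.com/v2/projects/staging/locations/us-central1/queues/members/tasks

{
  "task": {
    "name": "projects/staging/locations/us-central1/queues/members/tasks/testing-id",
    "httpRequest": {
      "httpMethod": "POST",
      "url": "https://example.com/handler"
    }
  }
}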

Serilog- Add "severity" property to top level of LogEvent for GKE?

I'm using Serilog with the Serilog.Formatting.Json.JsonFormatter formatter in a .NET Core app in GKE. I am logging to Console, which is read by a GKE Logging agent. The GKE logging agent expects a "severity" property at the top level of the Log Event: GCP Cloud Logging LogEntry docs
Because of this, all of my logs show up in GCP Logging with severity "Info", as the Serilog Level is found in the jsonPayload property of the LogEntry in GCP. Here is an example LogEntry as seen in Cloud Logging:
{
  insertId: "1cu507tg3by7sr1"
  jsonPayload: {
    Properties: {
      SpanId: "|a85df301-4585ee48ea1bc1d1."
      ParentId: ""
      ConnectionId: "0HM64G0TCF3RI"
      RequestPath: "/health/live"
      RequestId: "0HM64G0TCF3RI:00000001"
      TraceId: "a85df301-4585ee48ea1bc1d1"
      SourceContext: "CorrelationId.CorrelationIdMiddleware"
      EventId: {2}
    }
    Level: "Information"
    Timestamp: "2021-02-03T17:40:28.9343987+00:00"
    MessageTemplate: "No correlation ID was found in the request headers"
  }
  resource: {2}
  timestamp: "2021-02-03T17:40:28.934566174Z"
  severity: "INFO"
  labels: {3}
  logName: "projects/ah-cxp-common-gke-np-946/logs/stdout"
  receiveTimestamp: "2021-02-03T17:40:32.020942737Z"
}
My first thought was to add a "Severity" property using an Enricher:
class SeverityEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        logEvent.AddOrUpdateProperty(
            propertyFactory.CreateProperty("Severity", LogEventLevel.Error));
    }
}
The generated log looks like this in GCP, and is still tagged as Info:
{
  insertId: "wqxvyhg43lbwf2"
  jsonPayload: {
    MessageTemplate: "test error!"
    Level: "Error"
    Properties: {
      severity: "Error"
    }
    Timestamp: "2021-02-03T18:25:32.6238842+00:00"
  }
  resource: {2}
  timestamp: "2021-02-03T18:25:32.623981268Z"
  severity: "INFO"
  labels: {3}
  logName: "projects/ah-cxp-common-gke-np-946/logs/stdout"
  receiveTimestamp: "2021-02-03T18:25:41.029632785Z"
}
Is there any way in Serilog to add the "severity" property at the same level as "jsonPayload" instead of inside it? I suspect GCP would then pick it up and log the error type appropriately.
As a last resort I could probably use a GCP Logging sink, but my current setup is much more convenient and performant with the GKE Logging Agent already existing.
Here's a relevant Stack Overflow post with no information or advice past what I already have, which is not enough to solve this: https://stackoverflow.com/questions/57215700
I found the following information detailing how each Serilog level maps to a Stackdriver severity; the table below might also help you:
Serilog        Stackdriver
Verbose        Debug
Debug          Debug
Information    Info
Warning        Warning
Error          Error
Fatal          Critical
The complete information can be found at the following link
https://github.com/manigandham/serilog-sinks-googlecloudlogging#log-level-mapping
I think this code could help you make Stackdriver recognize the severity of the logs emitted by Serilog:
private static LogSeverity TranslateSeverity(LogEventLevel level) => level switch
{
    LogEventLevel.Verbose => LogSeverity.Debug,
    LogEventLevel.Debug => LogSeverity.Debug,
    LogEventLevel.Information => LogSeverity.Info,
    LogEventLevel.Warning => LogSeverity.Warning,
    LogEventLevel.Error => LogSeverity.Error,
    LogEventLevel.Fatal => LogSeverity.Critical,
    _ => LogSeverity.Default
};
I will leave the link to the complete code here
https://github.com/manigandham/serilog-sinks-googlecloudlogging/blob/master/src/Serilog.Sinks.GoogleCloudLogging/GoogleCloudLoggingSink.cs#L251
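If you do go the sink route, the setup is roughly as follows; a minimal sketch assuming the Serilog.Sinks.GoogleCloudLogging package, with a placeholder project ID:

using Serilog;
using Serilog.Sinks.GoogleCloudLogging;

// Minimal sketch: send Serilog events through the Cloud Logging sink so the
// level-to-severity mapping shown above is applied to each log entry.
var options = new GoogleCloudLoggingSinkOptions { ProjectId = "my-gcp-project-id" }; // placeholder

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.GoogleCloudLogging(options)
    .CreateLogger();

Log.Error("test error!"); // should appear in Cloud Logging with severity ERROR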
Greetings!

using network-bootstrapper to generate node-info expects certificates with devMode=false

I am trying to use the network-bootstrapper tool to generate node-infos (certificates etc.) by passing a node.conf file as input with devMode=false. The following is my node.conf file:
myLegalName="O=Bank,L=Paris,C=FR"
p2pAddress="localhost:10011"
devMode=false
rpcSettings {
    address="localhost:10012"
    adminAddress="localhost:10052"
}
security {
    authService {
        dataSource {
            type=INMEMORY
            users=[
                {
                    password=test
                    permissions=[
                        ALL
                    ]
                    user=user3
                }
            ]
        }
    }
}
I am passing the path of the node.conf file as an argument to bootstrapper.jar, but it is exiting with error code 1. The following is the log generated:
[INFO ] 2018-07-04T14:19:21,901Z [main] internal.Node.generateAndSaveNodeInfo - Generating nodeInfo ... {}
[ERROR] 2018-07-04T14:19:21,901Z [main] internal.Node.validateKeystore - IO exception while trying to validate keystore {}
java.nio.file.NoSuchFileException: C:\corda\work\keys-gen\Bank\certificates\sslkeystore.jks
......
......
And
[ERROR] 2018-07-04T14:19:21,917Z [main] internal.Node.run - Exception during node startup {}
java.lang.IllegalArgumentException: Identity certificate not found. Please either copy your existing identity key and certificate from another node, or if you don't have one yet, fill out the config file and run corda.jar --initial-registration. Read more at: https://docs.corda.net/permissioning.html
......
......
Can you please let me know how to generate the certificates and place them inside the folder {workspace}/{nodeName}/certificates, which does not exist yet and is created by the bootstrapper tool itself? Can you help with certificate generation and with using the network-bootstrapper.jar tool with devMode turned off?
The bootstrapper tool can't be used outside of devMode. Outside of devMode, proper certificates and a network map server must be used.
This issue is being tracked here: https://r3-cev.atlassian.net/browse/CORDA-1735.
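For a non-devMode network, the registration step referenced in the error message looks roughly like this (flags and file names are placeholders and vary by Corda version; it requires a doorman/identity service to register against):

java -jar corda.jar --initial-registration \
    --network-root-truststore ./certificates/network-root-truststore.jks \
    --network-root-truststore-password <password>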

INVALID_ARGUMENT (400 error) when calling Stackdriver Error Reporting API

When trying to invoke the Stackdriver Error Reporting API (via the API explorer or via the Client-Side JavaScript library), I receive the following error:
Request:
{ "message" : "test" }
Response:
{
  "error": {
    "code": 400,
    "message": "Request contains an invalid argument.",
    "status": "INVALID_ARGUMENT"
  }
}
The Stackdriver Error Reporting API is enabled and I have Owner rights to the App Engine project.
Is the API simply not functional? If I'm doing something wrong, can someone try to help?
The documentation for reporting events says that a ServiceContext is required.
If you're only sending a message (not a stacktrace / exception) you'll need to include a context with a reportLocation as well. This is noted in the documentation of the message field, but it's not obvious.
The following works from the API explorer:
{
  "context": {
    "reportLocation": {
      "functionName": "My Function"
    }
  },
  "message": "error message",
  "serviceContext": {
    "service": "My Microservice"
  }
}
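If you want to call the REST endpoint directly instead of the API explorer, the same payload can be posted to the report method; a rough curl sketch with a placeholder project ID and API key:

curl -X POST \
  "https://clouderrorreporting.googleapis.com/v1beta1/projects/MY_PROJECT_ID/events:report?key=MY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "serviceContext": { "service": "My Microservice" },
        "message": "error message",
        "context": { "reportLocation": { "functionName": "My Function" } }
      }'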
You might be interested in the docs on How Errors are Grouped too.
FWIW, I work on this product and I think the error message is too generic. The problem is (?) that the serving stack scrubs messages unless they're annotated as being for public consumption. I'll chase that down.

Cannot Upload file to Alfresco with Curl

I have a PHP script that uploads files to Alfresco using curl. It works fine with a local setup of Alfresco on my system, but fails when the same script is used to upload to the production copy.
Below is the curl response upon failure while uploading a file:
{
"status" :
{
"code" : 500,
"name" : "Internal Error",
"description" : "An error inside the HTTP server which prevented it from fulfilling the request."
},
"message" : "00031437 Unexpected error occurred during upload of new content.",
"exception" : "org.springframework.extensions.webscripts.WebScriptException - 00031437 Unexpected error occurred during upload of new content.",
"callstack" :
[
"" ,"java.lang.IllegalArgumentException: Class {}application\/pdf has not been defined in the data dictionary"
,"org.alfresco.repo.policy.ClassPolicyDelegate.get(ClassPolicyDelegate.java:98)"
,"org.alfresco.repo.policy.ClassPolicyDelegate.get(ClassPolicyDelegate.java:83)"
,"org.alfresco.repo.node.AbstractNodeServiceImpl.invokeBeforeCreateNode(AbstractNodeServiceImpl.java:283)"
,"org.alfresco.repo.node.db.DbNodeServiceImpl.createNode(DbNodeServiceImpl.java:363)"
,"sun.reflect.GeneratedMethodAccessor1147.invoke(Unknown Source)"
,"sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)"
,"java.lang.reflect.Method.invoke(Method.java:606)"
,"org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:309)"
,"org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)"
,"org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)"
,"org.alfresco.repo.tenant.MultiTNodeServiceInterceptor.invoke(MultiTNodeServiceInterceptor.java:104)"
,"org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)"
,"org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)"
,"com.sun.proxy.$Proxy9.createNode(Unknown Source)"
,"sun.reflect.GeneratedMethodAccessor1147.invoke(Unknown Source)"
,"sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)"
,"java.lang.reflect.Method.invoke(Method.java:606)"
.
.
.
}
Any idea?
The value of contenttype must be a valid type qname. Have a look at the documentation at http://WEBURL:8000/alfresco/service/index/uri/api/upload. If in doubt, try cm:content or its full representation.
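For example, a rough curl sketch against the standard upload web script, assuming a site-based destination (host, credentials, site name and file path are placeholders):

curl -u admin:admin \
  -F "filedata=@/path/to/file.pdf" \
  -F "siteid=my-site" \
  -F "containerid=documentLibrary" \
  -F "contenttype=cm:content" \
  "http://WEBURL:8000/alfresco/service/api/upload"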
