Getting an error while creating a new task definition on Netflix Conductor

I am creating a worker task on Conductor by POSTing to
http://localhost:8080/api/metadata/taskdefs
but I get this error:
{
"code": "INTERNAL_ERROR",
"message": "INTERNAL_ERROR - Can not deserialize instance of java.util.ArrayList out of START_OBJECT token\n at [Source: HttpInputOverHTTP#2179fb[c=323,s=STREAM]; line: 1, column: 1]",
"instance": "linkez-System-Product-Name"
}
My task definition JSON payload is:
{
"name": "encode_task",
"retryCount": 3,
"timeoutSeconds": 1200,
"inputKeys": [
"sourceRequestId",
"qcElementType"
],
"outputKeys": [
"state",
"skipped",
"result"
],
"timeoutPolicy": "TIME_OUT_WF",
"retryLogic": "FIXED",
"retryDelaySeconds": 600,
"responseTimeoutSeconds": 3600
}

Yes, I was able to resolve this. I made a simple mistake: to create a task definition, the request body must be a JSON array, i.e. [ ... ], not a single object { ... }.
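As a sketch (standard library only, using the endpoint from the question), the fix is just to wrap the definition in a list before serializing:

```python
import json

# Conductor's POST /api/metadata/taskdefs endpoint expects a JSON *array*
# of task definitions, so wrap even a single definition in a list.
task_def = {
    "name": "encode_task",
    "retryCount": 3,
    "timeoutSeconds": 1200,
    "inputKeys": ["sourceRequestId", "qcElementType"],
    "outputKeys": ["state", "skipped", "result"],
    "timeoutPolicy": "TIME_OUT_WF",
    "retryLogic": "FIXED",
    "retryDelaySeconds": 600,
    "responseTimeoutSeconds": 3600,
}

payload = json.dumps([task_def])  # note the enclosing list: [ ... ], not { ... }
print(payload[:20])
```

POSTing `payload` with `Content-Type: application/json` should then deserialize cleanly into the `java.util.ArrayList` the server expects.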

Related

MirageJS Error response not causing an error

Following the docs I've set up this handler inside routes():
this.put(
'/admin/features/error/environment/test',
// @ts-ignore
() => new Response(500, {}, { errors: ['The database went on vacation'] }),
);
Mirage does receive what I've set, sort of. Here is its response, from the browser console logs. Note that it's not an error although the 500 shows up in _bodyInit:
{
"type": "default",
"status": 200,
"ok": true,
"statusText": "",
"headers": {
"map": {
"content-type": "text/plain;charset=UTF-8"
}
},
"url": "",
"bodyUsed": false,
"_bodyInit": 500,
"_bodyText": "[object Number]"
}
Note that I need @ts-ignore, which is probably a clue. TS complains that new Response expects 0-2 arguments but got 3.
Try importing the Mirage Response class:
import { Response } from 'miragejs';
Otherwise, Response refers to the Fetch API Response object. This explains the type-checking error and the unexpected behavior when calling the route.
After adding the import you can remove the @ts-ignore comment, and requests to the route should fail with status code 500.

Create groupPolicyDefinitionValue for new policy in Intune configuration policies using the Microsoft Graph

I am following the example here - https://learn.microsoft.com/en-us/graph/api/intune-grouppolicy-grouppolicydefinitionvalue-create?view=graph-rest-beta to create groupPolicyDefinitionValue using the Microsoft Graph.
I have already successfully used https://learn.microsoft.com/en-us/graph/api/intune-grouppolicy-grouppolicyconfiguration-create?view=graph-rest-beta to create a groupPolicyConfiguration. However, when I now try to create a groupPolicyDefinitionValue I get a
400 Bad Request
{
"error": {
"code": "BadRequest",
"message": "{\r\n \"_version\": 3,\r\n \"Message\": \"An error has occurred - Operation ID (for customer
support):
So I tried it in the Graph Explorer using:
Method = POST
URL = https://graph.microsoft.com/beta/deviceManagement/groupPolicyConfigurations/<my policy GUID>/definitionvalues/
Body = (per document)
{
"@odata.type": "#microsoft.graph.groupPolicyDefinitionValue",
"enabled": true,
"configurationType": "preference"
}
and I still get:
{
"error": {
"code": "BadRequest",
"message": "{ \"_version\": 3, \"Message\": \"An error has occurred - Operation ID (for customer support): 00000000-0000-0000-0000-000000000000 - Activity ID: 4200746a-36b7-82f1-b267-5fd2d22a8652 - Url: https://fef.msud01.manage.microsoft.com/GroupPolicy/GroupPolicyAdminService/b15c97ad-ffff-9997-0356-041202451824/deviceManagement/groupPolicyConfigurations%28%2797381d7e-15f0-40ab-9d48-b41a09366468%27%29/definitionValues?api-version=5018-11-06\", \"CustomApiErrorPhrase\": \"\", \"RetryAfter\": null, \"ErrorSourceService\": \"\", \"HttpHeaders\": \"{}\"}",
"innerError": {
"date": "2021-04-12T07:48:12",
"request-id": "dc6daddd-f59a-4b65-b34a-b3129b59cba5",
"client-request-id": "4200746a-36b7-82f1-b267-5fd2d22a8652"
}
}
}
I must be missing something simple. Any ideas?
You need to add an odata.bind reference to the definition:
{
"@odata.type": "#microsoft.graph.groupPolicyDefinitionValue",
"enabled": true,
"configurationType": "policy",
"definition@odata.bind": "https://graph.microsoft.com/beta/deviceManagement/groupPolicyDefinitions('83dfbad5-0dd5-4d70-b7d0-483d1cdd40cb')"
}
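A minimal sketch of building that body in Python (the definition GUID here is the illustrative one from the answer; substitute the id of the groupPolicyDefinition you actually want to configure):

```python
import json

# Illustrative GUID only - use the id of your own groupPolicyDefinition.
definition_id = "83dfbad5-0dd5-4d70-b7d0-483d1cdd40cb"

# Request body for POST .../groupPolicyConfigurations/{id}/definitionValues:
# the definition@odata.bind property links the value to its definition.
body = {
    "@odata.type": "#microsoft.graph.groupPolicyDefinitionValue",
    "enabled": True,
    "configurationType": "policy",
    "definition@odata.bind": (
        "https://graph.microsoft.com/beta/deviceManagement/"
        f"groupPolicyDefinitions('{definition_id}')"
    ),
}
print(json.dumps(body, indent=2))
```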

Hangfire with Cosmos DB: Response status code does not indicate success: NotFound (404); Substatus: 0; ActivityId

I have Hangfire using Cosmos DB as its database. When I open the Retries tab I get this error:
An unhandled exception occurred while processing the request.
AggregateException: One or more errors occurred. (Response status code does not indicate success: NotFound (404); Substatus: 0; ActivityId: 68d632fc-8c5a-4624-80a5-7e9bc0d252f0; Reason: ({
"Errors": [
"Resource Not Found. Learn more: https://aka.ms/cosmosdb-tsg-not-found"
]
});)
System.Threading.Tasks.Task.Wait(int millisecondsTimeout, CancellationToken cancellationToken)
CosmosException: Response status code does not indicate success: NotFound (404); Substatus: 0; ActivityId: 68d632fc-8c5a-4624-80a5-7e9bc0d252f0; Reason: ({
"Errors": [
"Resource Not Found. Learn more: https://aka.ms/cosmosdb-tsg-not-found"
]
});
Microsoft.Azure.Cosmos.ResponseMessage.EnsureSuccessStatusCode()
When I try to run a SELECT statement on Cosmos DB to get items with key = 'retries':
SELECT * FROM c where c.key = 'retries' and c.value = 'b7ad3971-8647-4a29-a2a7-412e2da41527'
I get error in Cosmos DB:
Failed to query item for container hangfire:
Gateway Failed to Retrieve Query Plan: Message: {"errors":[{"severity":"Error","location":{"start":46,"end":51},"code":"SC1001","message":"Syntax error, incorrect syntax near 'value'."}]}
ActivityId: 4bc1d85a-4ea6-47c2-95d0-81bdde78f389, Microsoft.Azure.Documents.Common/2.11.0, Microsoft.Azure.Documents.Common/2.11.0
When I run a SELECT like this:
SELECT * FROM c where c.key = 'retries'
I see results without error. The error occurs only when I add AND c.value = ... to the WHERE clause.
Because of this, I can't open the Retries tab on the Hangfire dashboard.
The JSON document in Cosmos looks like:
{
"key": "retries",
"value": "b7ad3971-8647-4a29-a2a7-412e2da41527",
"score": 0,
"created_on": 1611580578,
"type": 7,
"id": "b04c5c23-d324-45e1-acce-cdd2379c4073",
"_rid": "QO5WANzKkljY-gEAAAAAAA==",
"_self": "dbs/QO5WAA==/colls/QO5WANzKklg=/docs/QO5WANzKkljY-gEAAAAAAA==/",
"_etag": "\"2d01ed73-0000-0100-0000-600ec4a30000\"",
"_attachments": "attachments/",
"_ts": 1611580579
}
The reason you're getting this error is that value is a reserved keyword in Cosmos DB SQL. Escape it with bracket notation and the following query should work:
SELECT * FROM c where c.key = 'retries' and c["value"] = 'b7ad3971-8647-4a29-a2a7-412e2da41527'
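In other words, reserved keywords such as `value` must be referenced with the bracket accessor instead of dot notation. A small hypothetical Python helper sketches the rule (the reserved-word set below is partial and for illustration only, not the full Cosmos DB grammar):

```python
# Partial, illustrative set of Cosmos DB SQL reserved keywords.
RESERVED = {"value", "top", "order", "group", "by"}

def prop(alias: str, name: str) -> str:
    """Property reference: c.score normally, c["value"] for reserved names."""
    if name.lower() in RESERVED:
        return f'{alias}["{name}"]'
    return f"{alias}.{name}"

query = (
    f"SELECT * FROM c WHERE {prop('c', 'key')} = 'retries' "
    f"AND {prop('c', 'value')} = 'b7ad3971-8647-4a29-a2a7-412e2da41527'"
)
print(query)
```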

No HTTP resource was found that matches the request URI (...) for all tables

I'm following this tutorial as a guide (OData/Entity Framework/ASP.NET).
I'm able to execute a simple GET command on the root.
{
"@odata.context": "http://localhost:49624/$metadata",
"value": [
{
"name": "Appointments",
"kind": "EntitySet",
"url": "Appointments"
},
......
{
"name": "Clients",
"kind": "EntitySet",
"url": "Clients"
}
]
}
But anything more complex than that gives me an error message. (I'm using a null routePrefix.)
http://localhost:49624/Services
Gives me:
{
"Message": "No HTTP resource was found that matches the request URI 'http://localhost:49624/Services'.",
"MessageDetail": "No type was found that matches the controller named 'Services'."
}
Here's my super simple GET
[EnableQuery]
public IQueryable<Service> Get()
{
return db.Services;
}
If it matters I'm using Postman to test these commands. Although I imagine that is a non-factor.
I have a database & a DbSet for every table. I have no idea why I can't access any of this.
WebApiConfig:
config.MapHttpAttributeRoutes();
ODataModelBuilder builder = new ODataConventionModelBuilder();
builder.EntitySet<Appointment>("Appointments");
builder.EntitySet<Service>("Services");
builder.EntitySet<Employee>("Employees");
builder.EntitySet<Client>("Clients");
config.MapODataServiceRoute(
routeName: "ODataRoute",
routePrefix: null,
model: builder.GetEdmModel());
I'm sorry if this is a basic question but I'm really new to all this and have been at this wall too long already haha.
Jan Hommes pointed out above that the controller class name needs to be pluralized (in my case, ServiceController -> ServicesController).

failed to parse date field, tried both date format [yyyy-MM-dd HH:mm:ss,SSS], and timestamp number with locale []

I am trying to implement the solution described in the following answer:
https://stackoverflow.com/a/27867252/740839
But Elasticsearch throws back the following exception saying that it is unable to parse the @timestamp field:
[2015-01-30 12:09:39,513][DEBUG][action.bulk ] [perfgen 1] [logaggr-2015.01.30][2] failed to execute bulk item (index) index {[logaggr-2015.01.30][logs][c2s5PliTSGKmZSXUWzlkNw], source[{"message":"2015-01-29 17:30:31,579 [ERROR] [pool-1-thread-9] [LogGenerator] invocation count=813,time=2015-01-29 17:30:31,578,metric=-9080142057551045424","@version":"1","@timestamp":"2015-01-30T19:10:53.891Z","host":"perfdev","path":"/home/user/work/elk/logaggr-test/LogAggr_Test.log","logts":"2015-01-29 17:30:31,579","level":"ERROR","thread":"pool-1-thread-9","classname":"LogGenerator","details":"invocation count=813,time=2015-01-29 17:30:31,578,metric=-9080142057551045424"}]}
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [@timestamp]
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:414)
at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:648)
at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:501)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:542)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:491)
at org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:376)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:451)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:157)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:535)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:434)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.elasticsearch.index.mapper.MapperParsingException: failed to parse date field [2015-01-30T19:10:53.891Z], tried both date format [yyyy-MM-dd HH:mm:ss,SSS], and timestamp number with locale []
at org.elasticsearch.index.mapper.core.DateFieldMapper.parseStringValue(DateFieldMapper.java:610)
at org.elasticsearch.index.mapper.core.DateFieldMapper.innerParseCreateField(DateFieldMapper.java:538)
at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:223)
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:404)
... 12 more
Caused by: java.lang.IllegalArgumentException: Invalid format: "2015-01-30T19:10:53.891Z" is malformed at "T19:10:53.891Z"
at org.elasticsearch.common.joda.time.format.DateTimeFormatter.parseMillis(DateTimeFormatter.java:754)
at org.elasticsearch.index.mapper.core.DateFieldMapper.parseStringValue(DateFieldMapper.java:604)
... 15 more
As seen in the "message", my log statement looks like this:
2015-01-29 17:30:31,579 [ERROR] [pool-1-thread-9] [LogGenerator] invocation count=813,time=2015-01-29 17:30:31,578,metric=-9080142057551045424
Not sure if the problem is with the logstash configuration. My logstash filter looks like this:
filter {
grok {
match => [ "message", "%{TIMESTAMP_ISO8601:logts}%{SPACE}\[%{LOGLEVEL:level}%{SPACE}]%{SPACE}\[%{DATA:thread}]%{SPACE}\[%{DATA:classname}]%{SPACE}%{GREEDYDATA:details}" ]
}
}
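As a rough illustration of what that grok filter extracts from each line, here is an approximate stdlib-`re` equivalent in Python (simplified: `TIMESTAMP_ISO8601` is narrowed to the comma-millisecond form this log uses; the real grok patterns are more permissive):

```python
import re

# Approximate re equivalent of the grok pattern above - an illustration,
# not the actual grok engine.
LOG_RE = re.compile(
    r"(?P<logts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\s+"  # TIMESTAMP_ISO8601
    r"\[(?P<level>[A-Z]+)\s*\]\s+"                              # LOGLEVEL
    r"\[(?P<thread>[^\]]*)\]\s+"                                # DATA
    r"\[(?P<classname>[^\]]*)\]\s+"                             # DATA
    r"(?P<details>.*)"                                          # GREEDYDATA
)

line = ("2015-01-29 17:30:31,579 [ERROR] [pool-1-thread-9] [LogGenerator] "
        "invocation count=813,time=2015-01-29 17:30:31,578,"
        "metric=-9080142057551045424")
fields = LOG_RE.match(line).groupdict()
print(fields["logts"], fields["level"], fields["classname"])
```

Note that `logts` comes out in the comma-millisecond format (`2015-01-29 17:30:31,579`), while logstash's own `@timestamp` is ISO8601 with a `T` and `Z` - the mismatch at the heart of the exception above.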
and my logstash output is:
output {
elasticsearch {
cluster => "perfgen"
host => "10.1.1.1"
port => 9201
index => "logaggr-%{+YYYY.MM.dd}"
protocol => "http"
template => "logaggr-test.json"
template_name => "logaggr"
}
}
and my template "logaggr-test.json" is:
{
"template": "logaggr-*",
"mappings": {
"logaggr": {
"date_detection": false,
"properties": {
"_timestamp": { "type": "date", "enabled": true, "store": true },
"logts": { "type": "date" },
"level": { "type": "string" },
"thread": { "type": "string" },
"classname": { "type": "string" },
"details": { "type": "string"}
}
}
}
}
I have tried adding a default mapping, etc., but I can't get past the parsing exception.
To reiterate the problem I am trying to solve: I am setting up logstash to parse my log file and index it into Elasticsearch. In the process, I want to capture the timestamp of my log message, @timestamp (added by logstash), and _timestamp (added by Elasticsearch).
Appreciate any help.
Turns out I had a template left over from some previous testing that specified a different format for @timestamp. I deleted the template and now I am able to ingest my logs.
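For reference, a stale template along these lines would cause exactly this failure: it pins @timestamp to the log file's comma-millisecond format, which cannot parse the ISO8601 value logstash writes. This is a hypothetical reconstruction based on the format named in the error, not the actual leftover template. Listing templates with `GET /_template` and deleting the stale one (`DELETE /_template/<template_name>`) clears the conflict.

```json
{
  "template": "logaggr-*",
  "mappings": {
    "logaggr": {
      "properties": {
        "@timestamp": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss,SSS" }
      }
    }
  }
}
```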
