I'm going crazy with warnings like this filling up my arc lint output:
Warning (TXT3) Line Too Long
This line is 94 characters long, but the convention is 80 characters.
You can extend the line length by adding something like "text.max-line-length": 120 to your .arclint file. Our .arclint file looks like this to allow 120 characters:
{
  "linters": {
    "text": {
      "type": "text",
      "include": "(.md|.php$)",
      "exclude": "(vendor)",
      "text.max-line-length": 120
    },
    "spelling": {
      "type": "spelling",
      "include": "(.*)",
      "exclude": "(vendor)"
    },
    "phpcs": {
      "type": "phpcs",
      "include": "(\\.php$)",
      "bin": ["vendor/bin/phpcs", "phpcs"],
      "flags": ["--standard=PSR1", "--tab-width=2"],
      "severity.rules": {
        "(^PHPCS\\.E\\.PSR1\\.Classes\\.ClassDeclaration\\.MissingNamespace)": "disabled"
      }
    },
    "generated": {
      "type": "generated",
      "include": "(.*)"
    }
  }
}
Also, if you feel confident in changing the code behind Phabricator, you can make similar changes in the code: https://secure.phabricator.com/book/phabricator/article/arcanist_extending_lint/
I am preparing an ARM template for "Schedule update deployment" in the Update Management service. I want to add the parameters "excludedKbNumbers" and "includedKbNumbers". I deploy my templates using PowerShell. When I pass KB numbers using both of the mentioned parameters, the deployment completes successfully. When I pass a KB number using one of the parameters and leave the other empty, it also completes successfully. The problem is when I don't want to pass included/excluded KB numbers at all: I omit the parameter names "excludedKbNumbers" and "includedKbNumbers" from my PowerShell deployment command, and then I receive the error below:
"message": "{\"Message\":\"The request is invalid.\",\"ModelState\":{\"softwareUpdateConfiguration.properties.updateConfiguration\":[\"Software update configuration has same KbNumbers in includedKbNumbers and excludedKbNumbers.\"]}}"
I am using json('null') for this in my template, and this is the problematic area.
Extract from my template:
"parameters": {
"excludedKbNumbers": {
"type": "string",
"defaultValue": "",
"metadata": {
"description": "Specify excluded KB numbers, required data structure: 123456"
}
},
"includedKbNumbers": {
"type": "string",
"defaultValue": "",
"metadata": {
"description": "Specify included KB numbers, required data structure: 123456"
}
},
"resources": [
{
"type": "Microsoft.Automation/automationAccounts/softwareUpdateConfigurations",
"apiVersion": "2017-05-15-preview",
"name": "[concat(parameters('automationAccountName'), '/', parameters('scheduleName'))]",
"properties": {
"updateConfiguration": {
"operatingSystem": "[parameters('operatingSystem')]",
"windows": {
"includedUpdateClassifications": "[parameters('Classification')]",
"excludedKbNumbers": [
"[if(empty(parameters('excludedKbNumbers')), json('null'), parameters('excludedKbNumbers'))]"
],
"includedKbNumbers": [
"[if(empty(parameters('includedKbNumbers')), json('null'), parameters('includedKbNumbers'))]"
],
"rebootSetting": "IfRequired"
},
"targets": {
"azureQueries": [
{
"scope": [
"[concat('/subscriptions', '/', parameters('subscriptionID'))]"
],
"tagSettings": {
"tags": {
"[parameters('tagKey')]": [
"[parameters('tagValue')]"
]
},
"filterOperator": "All"
},
"locations": []
}
]
},
"duration": "PT2H"
},
"tasks": {},
"scheduleInfo": {
"isEnabled": false,
"startTime": "2050-03-03T13:10:00+01:00",
"expiryTime": "2050-03-03T13:10:00+01:00",
"frequency": "OneTime",
"timeZone": "Europe/Warsaw"
}
}
}
],
Try doing this:
"excludedKbNumbers": "[if(empty(parameters('excludedKbNumbers')), json('null'), array(parameters('excludedKbNumbers')))]",
"includedKbNumbers": "[if(empty(parameters('includedKbNumbers')), json('null'), array(parameters('includedKbNumbers')))]"
I am using Pact for consumer-driven contract testing.
In my use case, my consumer "some-market-service-consumer" is using the provider "market-service". The contract "produced" at some-market-service-consumer looks like this:
{
  "provider": {
    "name": "market-service"
  },
  "consumer": {
    "name": "some-market-service-consumer"
  },
  "interactions": [
    {
      "description": "Get all markets",
      "request": {
        "method": "GET",
        "path": "/markets"
      },
      "response": {
        "status": 200,
        "headers": {
          "Content-Type": "application/json; charset=utf-8"
        },
        "body": {
          "markets": [
            {
              "phone": "HBiQOAeQegaWtbcHfAro"
            }
          ]
        },
        "matchingRules": {
          "$.headers.Content-Type": {
            "regex": "application/json; charset=utf-8"
          },
          "$.body.markets[*].phone": {
            "match": "type"
          },
          "$.body.markets": {
            "min": 1,
            "match": "type"
          }
        }
      }
    }
  ],
  "metadata": {
    "pact-specification": {
      "version": "2.0.0"
    },
    "pact-jvm": {
      "version": "3.3.6"
    }
  }
}
On the provider side, I am using pact-provider-verifier-docker¹. Here is my test result:
WARN: Ignoring unsupported matching rules {"min"=>1} for path $['body']['markets']
.....
@@ -1,7 +1,7 @@
{
  "markets": [
    ... ,
-   Pact::UnexpectedIndex,
+   Hash,
  ]
}
Key: - means "expected, but was not found".
+ means "actual, should not be found".
Values where the expected matches the actual are not shown.
It seems as if the testing works fine - "phone" is validated correctly.
But at this point I have no clue what the (expected) "Pact::UnexpectedIndex" means, or why it fails. Where does this expectation come from, and how do I fix it?
¹ Specifically, my own version of it, which uses the most recent external dependencies.
As you can see in this test case here, Pact::UnexpectedIndex is used to indicate that an array is longer than expected. I think what it's trying to say is that there is an extra hash at the end of the array. (I agree that it is not clear at all! I wrote this code, so I apologise for its confusing nature. It turns out that the hardest part of writing pact code was writing helpful diff output.)
What is confusing about that error is that it should be allowing you to have extra elements, as you've specified $.body.markets to have a minimum length of 1. The WARN line in your output ("Ignoring unsupported matching rules {"min"=>1}") suggests that exactly this rule is being dropped. Perhaps the docker verifier is using version 1 matching instead of version 2?
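To illustrate (this is my interpretation of that WARN line, not something I have verified against the verifier's source): once the min rule is ignored, the verifier falls back to comparing your example body literally, so it expects the markets array to contain exactly one element:
"markets": [
  { "phone": "HBiQOAeQegaWtbcHfAro" } <--- with v1 matching: exactly one element expected, so any extra element is reported as Pact::UnexpectedIndex
]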
I am using https://www.npmjs.com/package/swagger and I am trying to create a config. I have this fragment in my swagger.js:
"dummy\/{id}\/related_dummies": {
"get": {
"tags": [
"RelatedDummy"
],
"operationId": "get_by_parentRelatedDummyCollection",
"produces": [
"application\/ld+json",
"application\/hal+json",
"application\/xml",
"text\/xml",
"application\/json",
"text\/html"
],
"parameters": [
{
"name": "id",
"in": "path",
"required": true,
"type": "integer"
}
],
"summary": "Retrieves the collection of RelatedDummy resources.",
"responses": {
"200": {
"description": "RelatedDummy collection response",
"schema": {
"type": "array",
"items": {
"$ref": "#\/definitions\/RelatedDummy"
}
}
}
}
}
But when I run swagger validate swagger.js, I keep getting this error:
Project Errors
--------------
#/paths: Additional properties not allowed: dummy/{id}/related_dummies
Results: 1 errors, 0 warnings
What could be the reason for this error? Thanks
The problem is that the path dummy/{id}/related_dummies doesn't begin with a slash.
To be recognized as a valid path item object, it needs to be /dummy/{id}/related_dummies.
Using the escaped syntax from your example, the name should be "\/dummy\/{id}\/related_dummies"
The relevant part of the Swagger–OpenAPI 2.0 specification is in the Paths Object definition.
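Concretely, with the escaped style used in the question, the first line of the fragment becomes:
"\/dummy\/{id}\/related_dummies": {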
How is it possible to use the Application model with APNS settings and PostgreSQL?
The Application model has embedded models.
Am I right that in traditional relational databases the embedded models are simply saved as an object?
The pushSettings field is a string with varchar(1024).
The Push Example does this:
pushSettings: {
  apns: {
    certData: config.push.apnsCertData,
    keyData: config.push.apnsKeyData,
    feedbackOptions: {
      batchFeedback: true,
      interval: 300
    }
  },
  gcm: {
    serverApiKey: config.push.gcmServerApiKey
  }
}
The certData and keyData are too long for the 1024 characters.
So how do I use this correctly with Postgres?
Right now the only thing I see is to extend the Application model and give the pushSettings field a larger size, but I have not been able to get that to work either.
Please help me.
Regards
I managed to get it to work the way I intended in the first place.
The extended application model JSON:
{
  "name": "application",
  "base": "Application",
  "properties": {
    "pushSettings": {
      "postgresql": {
        "dataType": "text",
        "dataLength": null,
        "nullable": "YES"
      },
      "apns": {
        "production": {
          "type": "boolean",
          "description": [
            "Production or development mode. It denotes what default APNS",
            "servers to be used to send notifications.",
            "See API documentation for more details."
          ]
        },
        "certData": {
          "type": "string",
          "description": "The certificate data loaded from the cert.pem file"
        },
        "keyData": {
          "type": "string",
          "description": "The key data loaded from the key.pem file"
        },
        "pushOptions": {
          "type": {
            "gateway": "string",
            "port": "number"
          }
        },
        "feedbackOptions": {
          "type": {
            "gateway": "string",
            "port": "number",
            "batchFeedback": "boolean",
            "interval": "number"
          }
        }
      },
      "gcm": {
        "serverApiKey": "string"
      }
    }
  }
}
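For anyone else hitting this: as far as I can tell, the part that actually fixes the length problem is the connector-specific column override on pushSettings; a minimal sketch of just that piece:
{
  "name": "application",
  "base": "Application",
  "properties": {
    "pushSettings": {
      "postgresql": {
        "dataType": "text" <--- widens the column from varchar(1024) to unbounded text
      }
    }
  }
}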
I am currently using the FOSElasticaBundle in Symfony2 and I am having a hard time trying to build a search to match the longest prefix.
I am aware of the 100 examples that are on the Internet to perform autocomplete-like searches using this. However, my problem is a little different.
In an autocomplete type of search the database holds the longest alphanumeric string (in length of characters) and the user just provides the shortest portion, let's say the user types "jho" and Elasticsearch can easily provide "Jhon, Jhonny, Jhonas".
My problem is the reverse: I would like to provide the long alphanumeric string, and I want Elasticsearch to give me the longest match for it in the database.
For example: I could provide "123456789" and my database can have [12,123,14,156,16,7,1234,1,67,8,9,123456,0], in this case the longest prefix match in the database for the number that the user provided is "123456".
I am just starting with Elasticsearch so I don't really have a close to working settings or anything.
If there is any information not clear or missing let me know and I will provide more details.
Update 1 (Using Val's 2nd Update)
Index: Download 1800+ indexes
Settings:
curl -XPUT localhost:9200/tests -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "edge_ngram_analyzer": {
          "tokenizer": "edge_ngram_tokenizer",
          "filter": [ "lowercase" ]
        }
      },
      "tokenizer": {
        "edge_ngram_tokenizer": {
          "type": "edgeNGram",
          "min_gram": "2",
          "max_gram": "25"
        }
      }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "my_string": {
          "type": "string",
          "fields": {
            "prefix": {
              "type": "string",
              "analyzer": "edge_ngram_analyzer"
            }
          }
        }
      }
    }
  }
}'
Query:
curl -XPOST localhost:9200/tests/test/_search?pretty=true -d '{
  "size": 1,
  "sort": {
    "_script": {
      "script": "doc.my_string.value.length()",
      "type": "number",
      "order": "desc"
    },
    "_score": "desc"
  },
  "query": {
    "filtered": {
      "query": {
        "match": {
          "my_string.prefix": "8092232423"
        }
      },
      "filter": {
        "script": {
          "script": "doc.my_string.value.length() <= maxlength",
          "params": {
            "maxlength": 10
          }
        }
      }
    }
  }
}'
With this configuration the query returns the following results:
{
  "took" : 61,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1754,
    "max_score" : null,
    "hits" : [ {
      "_index" : "tests",
      "_type" : "test",
      "_id" : "AU8LqQo4FbTZPxBtq3-Q",
      "_score" : 0.13441172,
      "_source":{"my_string":"80928870"},
      "sort" : [ 8.0, 0.13441172 ]
    } ]
  }
}
Bonus question
I would like to provide an array of numbers for that search and get the longest matching prefix for each one in an efficient way, without having to run the query once per number.
Here is my take on it.
Basically, what we need to do is to slice and dice the field (called my_string below) at indexing time with an edgeNGram tokenizer (called edge_ngram_tokenizer below). That way a string like 123456789 will be tokenized to 12, 123, 1234, 12345, 123456, 1234567, 12345678, 123456789 and all tokens will be indexed and searchable.
So let's create a tests index with a custom analyzer called edge_ngram_analyzer and a test mapping containing a single string field called my_string. You'll note that the my_string field is a multi-field declaring a prefixes sub-field, which will contain all the tokenized prefixes.
curl -XPUT localhost:9200/tests -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "edge_ngram_analyzer": {
          "tokenizer": "edge_ngram_tokenizer",
          "filter": [ "lowercase" ]
        }
      },
      "tokenizer": {
        "edge_ngram_tokenizer": {
          "type": "edgeNGram",
          "min_gram": "2",
          "max_gram": "25"
        }
      }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "my_string": {
          "type": "string",
          "fields": {
            "prefixes": {
              "type": "string",
              "index_analyzer": "edge_ngram_analyzer"
            }
          }
        }
      }
    }
  }
}'
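If you want to sanity-check the analyzer before indexing anything, the _analyze API (ES 1.x syntax, same as the rest of this answer) should list all the prefixes from 12 up to 123456789:
# assumes the tests index above already exists
curl -XGET 'localhost:9200/tests/_analyze?analyzer=edge_ngram_analyzer&text=123456789&pretty'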
Then let's index a few test documents using the _bulk API:
curl -XPOST localhost:9200/tests/test/_bulk -d '
{"index":{}}
{"my_string":"12"}
{"index":{}}
{"my_string":"1234"}
{"index":{}}
{"my_string":"1234567890"}
{"index":{}}
{"my_string":"abcd"}
{"index":{}}
{"my_string":"abcdefgh"}
{"index":{}}
{"my_string":"123456789abcd"}
{"index":{}}
{"my_string":"abcd123456789"}
'
The thing I found particularly tricky was that the matching result could be either longer or shorter than the input string. To handle that, we have to combine two queries: one looking for shorter matches and another for longer matches. The match query will find documents with shorter "prefixes" matching the input, and the query_string query (with the edge_ngram_analyzer applied to the input string!) will search for "prefixes" longer than the input string. Both enclosed in a bool/should and sorted by decreasing string length (i.e. longest first), they will do the trick.
Let's do some queries and see what unfolds:
This query will return the one document with the longest match for "123456789", i.e. "123456789abcd". In this case, the result is longer than the input.
curl -XPOST localhost:9200/tests/test/_search -d '{
  "size": 1,
  "sort": {
    "_script": {
      "script": "doc.my_string.value.length()",
      "type": "number",
      "order": "desc"
    }
  },
  "query": {
    "bool": {
      "should": [
        {
          "match": {
            "my_string.prefixes": "123456789"
          }
        },
        {
          "query_string": {
            "query": "123456789",
            "default_field": "my_string.prefixes",
            "analyzer": "edge_ngram_analyzer"
          }
        }
      ]
    }
  }
}'
The second query will return the one document with the longest match for "123456789abcdef", i.e. "123456789abcd". In this case, the result is shorter than the input.
curl -XPOST localhost:9200/tests/test/_search -d '{
  "size": 1,
  "sort": {
    "_script": {
      "script": "doc.my_string.value.length()",
      "type": "number",
      "order": "desc"
    }
  },
  "query": {
    "bool": {
      "should": [
        {
          "match": {
            "my_string.prefixes": "123456789abcdef"
          }
        },
        {
          "query_string": {
            "query": "123456789abcdef",
            "default_field": "my_string.prefixes",
            "analyzer": "edge_ngram_analyzer"
          }
        }
      ]
    }
  }
}'
I hope that covers it. Let me know if not.
As for your bonus question, I'd simply suggest using the _msearch API and sending all queries at once.
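A sketch of what that could look like (same index as above; one empty header line plus one single-line query body per input number, newline-delimited):
# each {} header targets the default index/type from the URL
curl -XGET localhost:9200/tests/test/_msearch --data-binary '
{}
{"size":1,"sort":{"_script":{"script":"doc.my_string.value.length()","type":"number","order":"desc"}},"query":{"match":{"my_string.prefixes":"123456789"}}}
{}
{"size":1,"sort":{"_script":{"script":"doc.my_string.value.length()","type":"number","order":"desc"}},"query":{"match":{"my_string.prefixes":"abcd1234"}}}
'
For brevity, each body above only uses the match half of the query; in practice you would put the full bool/should query from above on each body line.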
UPDATE: Finally, make sure that scripting is enabled in your elasticsearch.yml file using the following:
# if you have ES <1.6
script.disable_dynamic: false
# if you have ES >=1.6
script.inline: on
UPDATE 2: I'm leaving the above as-is, since that use case might fit someone else's needs. Now, since you only need "shorter" prefixes (makes sense!), we need to change the mapping and the query a little bit.
The mapping would be like this:
{
  "settings": {
    "analysis": {
      "analyzer": {
        "edge_ngram_analyzer": {
          "tokenizer": "edge_ngram_tokenizer",
          "filter": [
            "lowercase"
          ]
        }
      },
      "tokenizer": {
        "edge_ngram_tokenizer": {
          "type": "edgeNGram",
          "min_gram": "2",
          "max_gram": "25"
        }
      }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "my_string": {
          "type": "string",
          "fields": {
            "prefixes": {
              "type": "string",
              "analyzer": "edge_ngram_analyzer" <--- only change
            }
          }
        }
      }
    }
  }
}
And the query would now be a bit different: it will always return only the longest prefix that is shorter than or equal in length to the input string. Please try it out. I advise re-indexing your data to make sure everything is set up properly.
{
  "size": 1,
  "sort": {
    "_script": {
      "script": "doc.my_string.value.length()",
      "type": "number",
      "order": "desc"
    },
    "_score": "desc" <----- also add this line
  },
  "query": {
    "filtered": {
      "query": {
        "match": {
          "my_string.prefixes": "123" <--- input string
        }
      },
      "filter": {
        "script": {
          "script": "doc.my_string.value.length() <= maxlength",
          "params": {
            "maxlength": 3 <---- this needs to be set to the length of the input string
          }
        }
      }
    }
  }
}
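For reference, since the annotated block above is not valid JSON as-is, here is a runnable form (same local setup as the earlier queries, input string "123"):
# annotations removed; maxlength must equal the length of the input string
curl -XPOST localhost:9200/tests/test/_search?pretty=true -d '{
  "size": 1,
  "sort": {
    "_script": {
      "script": "doc.my_string.value.length()",
      "type": "number",
      "order": "desc"
    },
    "_score": "desc"
  },
  "query": {
    "filtered": {
      "query": {
        "match": {
          "my_string.prefixes": "123"
        }
      },
      "filter": {
        "script": {
          "script": "doc.my_string.value.length() <= maxlength",
          "params": {
            "maxlength": 3
          }
        }
      }
    }
  }
}'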