How do I convert part of a Terraform schema provider from interface {}([]interface {}) to an array of strings - terraform-provider-aws

I am creating a new resource in terraform-provider-aws. In my schema, I have the following vpc_options attribute, which is a complex type.
...
"vpc_options": {
Type: schema.TypeList,
Optional: true,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"subnets": {
Type: schema.TypeList,
Optional: true,
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
"security_groups": {
Type: schema.TypeList,
Optional: true,
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
},
},
},
...
In the HashiCorp Configuration Language (HCL) I am using this resource like this:
vpc_options {
  subnets         = ["subnet-0be8cde92b74efcc8", "subnet-03a31237a28404445"]
  security_groups = ["sg-0d3c85573d69473fb"]
}
I want to be able to use this resource. To use these fields effectively, I think I need them in the form of strings that I can pass to the AWS API calls for create, read, update, and delete. Even though I need these values to be strings, when I inspect the vpc_config in the debugger, I see that they are of type
interface {}([]interface {}) [map[string]interface {} ["subnets": *(*interface {})(0xc0063a8088), "security_groups": *(*interface {})(0xc0063a8098), ]]
Here is a picture of me inspecting the vpc_config in my debugger.
How should I go about converting this to an array of subnet strings and an array of security_groups strings, and then updating the state with d.Set("subnets", ...) and d.Set("security_groups", ...)?
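A minimal sketch of the usual conversion pattern inside a Create or Update function, assuming the standard Terraform Plugin SDK types; the variable names below are illustrative, not helpers from the provider:

// d is the *schema.ResourceData passed into the resource's Create/Update function.
// vpc_options is a TypeList with MaxItems: 1, so take the first element.
if v, ok := d.GetOk("vpc_options"); ok && len(v.([]interface{})) > 0 {
  opts := v.([]interface{})[0].(map[string]interface{})

  // Each TypeList of strings arrives as []interface{}; convert element by element.
  var subnetIDs []string
  for _, s := range opts["subnets"].([]interface{}) {
    subnetIDs = append(subnetIDs, s.(string))
  }

  var securityGroupIDs []string
  for _, sg := range opts["security_groups"].([]interface{}) {
    securityGroupIDs = append(securityGroupIDs, sg.(string))
  }

  // subnetIDs and securityGroupIDs are now plain []string values that can be
  // passed to the AWS SDK calls.
}

On the read side the conversion runs in reverse: since subnets and security_groups are nested inside vpc_options, you would build a []interface{} containing a single map[string]interface{} and pass it to d.Set("vpc_options", ...) rather than setting the two lists as top-level attributes.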

Related

How can I persist nested redux store

I want to persist a nested object of my redux store. I tried the https://github.com/rt2zz/redux-persist package but it doesn't work in my case. I wonder if it's possible to define a whitelist entry like this: 'user.statuses.verification.isDone'
This is my store:
{
  user: {
    statuses: {
      verification: { isPending: true, isDone: false },
      activation: { isPending: true, isDone: false },
      set1: { isPending: true, isDone: false, refNumber: xxx },
      set2: { isPending: true, isDone: false, refNumber: xxx },
    },
  },
}
I want to persist only "isDone" and "refNumber" in each of the statuses.
Can anyone help me?
I already tried nested persists as described in the redux-persist documentation https://github.com/rt2zz/redux-persist#nested-persists but it looks like they are limited to two levels.
I tried this https://stackoverflow.com/a/71616665 and it works perfectly. 
In that example you can see a blacklist, but you just need to replace it with a whitelist.
const config = getPersistConfig({
  key: 'root',
  storage: AsyncStorage,
  whitelist: [
    'user.statuses.verification.isDone',
    'user.statuses.activation.isDone',
    'user.statuses.set1.isDone',
    'user.statuses.set1.refNumber',
    'user.statuses.set2.isDone',
    'user.statuses.set2.refNumber',
  ],
  rootReducer, // your root reducer must also be passed here
  ... // any other props from the original redux-persist config, omitting the stateReconciler
})
Alternatively, you can use this package: https://github.com/edy/redux-persist-transform-filter
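A rough sketch of that approach, assuming the createFilter export and support for nested dot-notation paths (both taken from the package's README and worth verifying), combined with the AsyncStorage and rootReducer from the question:

import { persistReducer } from 'redux-persist';
import { createFilter } from 'redux-persist-transform-filter';
// AsyncStorage and rootReducer come from your existing setup, as in the question.

// Keep only the listed paths of the `user` slice when persisting.
const saveUserSubset = createFilter('user', [
  'statuses.verification.isDone',
  'statuses.activation.isDone',
  'statuses.set1.isDone',
  'statuses.set1.refNumber',
  'statuses.set2.isDone',
  'statuses.set2.refNumber',
]);

const persistConfig = {
  key: 'root',
  storage: AsyncStorage,
  whitelist: ['user'],          // persist only the user slice...
  transforms: [saveUserSubset], // ...and filter it down to the paths above
};

const persistedReducer = persistReducer(persistConfig, rootReducer);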
The "issue" has already been addressed, it's more a precise implementation choice, not an issue according to the maintainers, and you have several different ways to address it:
redux-persist - how do you blacklist/whitelist nested state

Mule 4: RAML: how to define the schema of a POST body request in a RAML file?

A POST REST request has 3 body params, as follows:
{
  "name": "ABC",
  "age": 34,
  "uniqueID": "12345sdfgh"
}
My requirement is to define constraints (type, max length, min length, regex, etc.) for each of the fields name, age, and uniqueID.
How can I define that?
There are a few different ways to define it. The 'pure' RAML way is to define a data type fragment for the data object using RAML type definitions. Those should cover all your needs.
Example:
dataType.raml
#%RAML 1.0 DataType
type: object
displayName: Booking
properties:
  BookingDetail:
    type: object
    required: true
    displayName: "BookingDetail"
    description: "BookingDetail"
    properties:
      Name:
        type: string
        required: true
        displayName: "Name"
        description: "Name"
        example: "John"
      NumberOfDays:
        type: integer
        required: true
        minimum: 1
        maximum: 10
API:
#%RAML 1.0
title: so-type
/bookings:
  post:
    body:
      application/json:
        type: !include dataType.raml
You can also use JSON schemas if you prefer:
/orders:
  post:
    body:
      application/json:
        type: !include schemas/OrdersSchema.json
One more thing, I think. To require input to comply with a regex, you might do this:
properties:
  Name:
    type: string
    required: true
    displayName: "Name"
    description: "Name"
    pattern: ^[-A-Za-z ]+$
    example: "John"
That pattern is overly restrictive, but does match many Western traditional names. Your own regex is presumably more carefully constructed.
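Applied to the fields from the question, a sketch of a data type fragment using RAML's built-in facets (minLength, maxLength, minimum, maximum, pattern); the concrete limits below are illustrative assumptions, not requirements from the question:

#%RAML 1.0 DataType
type: object
properties:
  name:
    type: string
    required: true
    minLength: 1
    maxLength: 50
    pattern: ^[-A-Za-z ]+$
  age:
    type: integer
    required: true
    minimum: 0
    maximum: 150
  uniqueID:
    type: string
    required: true
    minLength: 5
    maxLength: 20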

Retrieving values from Firebase database?

Here is the structure of my Realtime Database in Firebase:
{
  "student1" : {
    "name" : "somename",
    "skillset" : [ "cpp", "c", "java" ],
    other properties
  },
  "student2" : {
    "name" : "somename",
    "skillset" : [ "javascript", "c", "python" ],
    other properties
  },
  "student3" : {
    "name" : "somename",
    "skillset" : [ "cpp", "java" ],
    other properties
  },
  "student4" : {
    "name" : "somename",
    "skillset" : [ "java", "kotlin" ],
    other properties
  }
}
I want to retrieve all the students having a specific set of skills.
E.g. for skills = ["cpp","java"]
the answer should be ["student1","student3"].
Your current structure allows you to easily determine the skills for a user. It does not, however, make it easy to determine the users for a skill. To allow that, you'll need to add a reverse index, looking something like this:
skills: {
  java: {
    student1: true,
    student3: true,
    student4: true
  },
  kotlin: {
    student4: true
  }
  ...
}
With the above you can look up the user IDs for a skill, and from there look up each user. For more on this, see my answer here: Firebase query if child of child contains a value
But this still won't allow you to query for users by multiple skills. To allow that, you'll have to add skill combinations to the new data structure. For example with the above skills, there is one user who knows both kotlin and Java:
skills: {
  java: {
    student1: true,
    student3: true,
    student4: true
  },
  java_kotlin: {
    student4: true
  },
  kotlin: {
    student4: true
  }
  ...
}
While this leads to extra data, it performs quite well in practice, since you can always directly access the data that you need (so there's no real database query needed).
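For illustration, a rough sketch of looking up students via such a combined key with the Firebase Web SDK (v8-style namespaced API); the index path and the convention of sorting skills alphabetically before joining are assumptions, not part of the answer above:

// Build the combined key the same way it was written to the index,
// e.g. by sorting the skills alphabetically and joining with "_".
const skills = ["cpp", "java"];
const comboKey = skills.slice().sort().join("_"); // "cpp_java"

firebase
  .database()
  .ref("skills/" + comboKey)
  .once("value")
  .then((snapshot) => {
    const studentIds = Object.keys(snapshot.val() || {}); // e.g. ["student1", "student3"]
    // Load each matching student's full record from the root of the database.
    return Promise.all(
      studentIds.map((id) =>
        firebase.database().ref(id).once("value").then((s) => s.val())
      )
    );
  })
  .then((students) => console.log(students));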
That is not possible under this structure, as Firebase only supports filtering by one child.
In your case, you would need to fetch all the data and filter it in code.

meteor autocomplete server-side

I'm writing a Meteor app and I'm trying to add an autocomplete feature to a search box. The data is very large and lives on the server, so I can't have it all on the client. It's basically a database of users. If I'm not mistaken, the mizzao:autocomplete package should make that possible, but I can't seem to get it to work.
Here's what I have on the server:
Meteor.publish('autocompleteViewers', function(selector, options) {
  Autocomplete.publishCursor(viewers.find(selector, options), this);
  this.ready();
});
And here are the settings I use for the search box on the client:
getSettings: function() {
  return {
    position: 'bottom',
    limit: 5,
    rules: [{
      subscription: 'autocompleteViewers',
      field: '_id',
      matchAll: false,
      options: '',
      template: Template.vLegend
    }],
  };
}
But I keep getting this error on the client:
Error: Collection name must be specified as string for server-side search at validateRule
I don't really understand the problem. When I look at the package code, it just seems to be testing whether the subscription field is a string rather than a variable, which it is. Any idea what the problem could be? Otherwise, is there a minimal working example somewhere I could start from? I couldn't find one in the docs.
Error: Collection name must be specified as string for server-side search at validateRule
You get this error because you don't specify a collection name (as a string) in your rule.
getSettings: function() {
  return {
    position: 'bottom',
    limit: 5,
    rules: [{
      subscription: 'autocompleteViewers',
      field: '_id',
      matchAll: false,
      collection: 'viewers', // <- specify your collection, in your case it is a "viewers" collection.
      options: '',
      template: Template.vLegend
    }],
  };
}
For more information please read here.
Hope this helps!

Proper way to apply custom analyzers to fields with Elasticsearch: apply multiple analyzers to one field, or multiple fields with a single analyzer each?

EDIT: Added my current query to the end
I have a large database of human names and am using Elasticsearch (via Symfony2's FOSElasticaBundle and Elastica) to do smarter searching of the names.
I have a full name field, and I want to index people's names with standard, ngram, and phonetic analyzers.
I've got the analyzers set up in Elasticsearch, and I can begin dumping data into the index. I'm wondering whether the way I'm doing it here is the best way, or if I can apply the analyzers to a single field. The reason I ask is that when I do a GET /website/person/:id, I see all three fields in plain text. I was expecting to see the analyzed data here, although I guess it must only exist in an inverted index rather than on the document. The examples I've seen use multiple fields, but is it possible to add multiple analyzers to a single field?
My config.yml:
fos_elastica:
    clients:
        default: { host: %elastica_host%, port: %elastica_port% }
    indexes:
        website:
            settings:
                index:
                    analysis:
                        analyzer:
                            phonetic_analyzer:
                                type: "custom"
                                tokenizer: "lowercase"
                                filter: ["name_metaphone", "lowercase", "standard"]
                            ngram_analyzer:
                                type: "custom"
                                tokenizer: "lowercase"
                                filter: ["name_ngram"]
                        filter:
                            name_metaphone:
                                encoder: "metaphone"
                                replace: false
                                type: "phonetic"
                            name_ngram:
                                type: "nGram"
                                min_gram: 2
                                max_gram: 4
            client: default
            finder: ~
            types:
                person:
                    mappings:
                        name: ~
                        nameNGram:
                            analyzer: ngram_analyzer
                        namePhonetic:
                            analyzer: phonetic_analyzer
When I check the mapping it looks good:
{
  "website" : {
    "mappings" : {
      "person" : {
        "_meta" : {
          "model" : "acme\\websiteBundle\\Entity\\Person"
        },
        "properties" : {
          "name" : {
            "type" : "string",
            "store" : true
          },
          "nameNGram" : {
            "type" : "string",
            "store" : true,
            "analyzer" : "ngram_analyzer"
          },
          "namePhonetic" : {
            "type" : "string",
            "store" : true,
            "analyzer" : "phonetic_analyzer"
          }
        }
      }
    }
  }
}
When I GET the document, I see that all three fields are stored in plain text... maybe I need to set "store" : false for these extra fields, or is it not being analyzed properly?
{
  "_index" : "website",
  "_type" : "person",
  "_id" : "1",
  "_version" : 1,
  "found" : true,
  "_source" : {
    "name" : "John Doe",
    "namePhonetic" : "John Doe",
    "nameNGram" : "John Doe"
  }
}
EDIT: The solution I'm currently using, which still requires some refinement but tests well for most names
//Create the query object
$boolQuery = new \Elastica\Query\Bool();
//Boost exact name matches
$exactMatchQuery = new \Elastica\Query\Match();
$exactMatchQuery->setFieldParam('name', 'query', $name);
$exactMatchQuery->setFieldParam('name', 'boost', 10);
$boolQuery->addShould($exactMatchQuery);
//Create a basic Levenshtein distance query
$levenshteinMatchQuery = new \Elastica\Query\Match();
$levenshteinMatchQuery->setFieldParam('name', 'query', $name);
$levenshteinMatchQuery->setFieldParam('name', 'fuzziness', 1);
$boolQuery->addShould($levenshteinMatchQuery);
//Create a phonetic query, seeing if the name SOUNDS LIKE the name that was searched
$phoneticMatchQuery = new \Elastica\Query\Match();
$phoneticMatchQuery->setFieldParam('namePhonetic', 'query', $name);
$boolQuery->addShould($phoneticMatchQuery);
//Create an NGRAM query
$nGramMatchQuery = new \Elastica\Query\Match();
$nGramMatchQuery->setFieldParam('nameNGram', 'query', $name);
$nGramMatchQuery->setFieldParam('nameNGram', 'boost', 2);
$boolQuery->addMust($nGramMatchQuery);
return $boolQuery;
No, you can't have multiple analyzers on a single field. What you are doing, keeping separate fields for the same data with different analyzers, is the correct way to apply multiple analyzers.
The reason you are also getting namePhonetic and nameNGram in the _source field is the use of
"store" : true
It tells Elasticsearch that you want those extra fields in the response as well. Use
"store" : false
and that will solve your problem.
If you want to see the analyzed data for a field, you can use the _analyze API of Elasticsearch:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-analyze.html
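For example, a request along these lines should show the tokens produced for a field by its mapped analyzer (query-parameter style as in the 1.x docs linked above; newer Elasticsearch versions take a JSON body with "field" and "text" instead):

GET /website/_analyze?field=namePhonetic&text=John+Doe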
Yes, these fields are stored in the inverted index after analysis.
I hope I have answered all your doubts. Please let me know if you need more help on this.
Thanks
