OpenStack flavor validation in Heat template

I am verifying the flavor passed by the user in the Heat template. At present the Heat template allows me to list flavor names in the allowed values. Small code snippet below:
parameters:
  flavor_type:
    type: string
    label: Flavor type
    description: Type of instance (flavor) to be used
    constraints:
      - allowed_values: [ m1.xlarge ]
        description: Value must be one of m1.xlarge.
This works when the user passes the flavor named m1.xlarge, but not with other names.
I would like to allow custom flavors with specific sizes (RAM: 8 GB, disk: 150 GB, VCPUs: 8). I would like to verify these individual values in the Heat template against the flavor the user passes.
I feel this is a valid use case for checking flavors. Is this possible in a Heat template?
Thanks,
Rama Krishna

The Heat spec has the concept of custom constraints (search for "custom constraint" at https://docs.openstack.org/heat/pike/template_guide/hot_spec.html). You can leverage that to validate that the input is one of the flavors available to the user of the template:
flavor_type:
  type: string
  label: Flavor type
  description: Type of instance (flavor) to be used
  constraints:
    - custom_constraint: nova.flavor
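Note that custom_constraint: nova.flavor only checks that the named flavor exists; HOT has no built-in constraint on a flavor's RAM/disk/VCPU values. If you need that, one option is to pre-validate the flavor outside Heat before passing its name in. A minimal sketch, assuming a flavor dict shaped like what the Nova API reports (ram in MB, disk in GB); the helper name and required sizes are illustrative, taken from the question:

```python
# Hypothetical pre-deployment check (not part of Heat itself): verify a
# flavor's specs before passing its name to the stack. Required sizes are
# taken from the question: 8 GB RAM, 150 GB disk, 8 VCPUs.
REQUIRED = {"ram_mb": 8 * 1024, "disk_gb": 150, "vcpus": 8}

def flavor_matches(flavor, required=REQUIRED):
    """True if a flavor dict (ram in MB, disk in GB, as Nova reports them)
    meets at least the required sizes."""
    return (flavor["ram"] >= required["ram_mb"]
            and flavor["disk"] >= required["disk_gb"]
            and flavor["vcpus"] >= required["vcpus"])
```

You would fetch the flavor details via openstacksdk or python-novaclient and only run the stack create if the check passes.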

Related

How to debug what variables are needed in a template

We want to change the way we do the frontends in Symfony, and we'd like some level of "reflection":
We want the system to be able to detect "Template displayProduct.html.twig needs xxxx other file, defines yyyy block and uses the zzzz variable".
I would like a command similar to:
php bin/console debug:template displayProduct.html.twig
that responds something like this:
Template: displayProduct.html.twig
Requires: # And tells us what other files are needed
- widgetPrice.html.twig
- widgetAvailability.html.twig
Defines: # And tells us what {% block xxxx %} are defined
- body
- title
- javascripts_own
- javascripts_general
Uses these variables: # <= This is the most important for us now
- productTitle
- price
- stock
- language
We are now visually scanning complex templates for the variables they need, and it's a killer. We need a way to automatically tell: "this template needs this and that to work".
PS: Functional tests are not the solution, as we want to apply all this to dynamically generated templates stored in databases, so that users can modify their pages; we can't write a test for every potential future unknown template that users will write.
Does this already exist somehow?
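As far as I know, no such command ships with Symfony (debug:twig lists filters, functions and globals, not what a given template uses). A rough first approximation is to scan the template source for {{ variable }} uses, {% block %} definitions and {% include %}/{% extends %} dependencies; a robust implementation should instead walk the AST produced by Twig's own tokenize()/parse(), since regexes miss filters, dotted paths, set-defined variables, and so on. A sketch of the regex idea in Python, just to illustrate:

```python
import re

def scan_twig(source):
    """Regex-based sketch of static Twig analysis (illustrative only;
    a real tool should use Twig's tokenizer/parser)."""
    # {% include 'x.twig' %}, {% extends 'x.twig' %}, {% embed 'x.twig' %}
    requires = re.findall(r"""{%-?\s*(?:include|extends|embed)\s+['"]([^'"]+)['"]""", source)
    # {% block name %}
    defines = re.findall(r"{%-?\s*block\s+(\w+)", source)
    # naive: first identifier inside {{ ... }}; misses filters, a.b paths, etc.
    uses = re.findall(r"{{-?\s*([A-Za-z_]\w*)", source)
    return {"requires": requires, "defines": defines, "uses": sorted(set(uses))}
```

For example, scanning a template that extends base.html.twig, defines a title block and prints productTitle and price would report exactly those names.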

Create document set in SharePoint with Graph API in a subfolder

I already implemented the creation of a document set at library root level. For this I used the following link: Is it possible to create a project documentset using graph API?
I followed these steps:
1- Retrieve the document library's Drive Id:
GET https://graph.microsoft.com/v1.0/sites/${siteId}/lists/${listId}?$expand=drive
2- Create the folder:
POST https://graph.microsoft.com/v1.0/drives/${library.drive.id}/root/children
The body of the request is the following:
{
  "name": "${folderName}",
  "folder": {}
}
3- Get the folder's SharePoint item id:
GET https://graph.microsoft.com/v1.0/sites/${siteId}/drives/${library.drive.id}/items/${folder.id}?expand=sharepointids
4- Update the item in the Document Library so that it updates to the desired Document Set:
PATCH https://graph.microsoft.com/v1.0/sites/${siteId}/lists/${listId}/items/${sharepointIds.listItemId}
I send the following body with the PATCH request:
{
  "contentType": {
    "id": "content-type-id-of-the-document-set"
  },
  "fields": {}
}
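The four steps above can be sketched as plain HTTP request builders. The URL shapes are copied from the steps; the function and variable names are illustrative, and a real call would add an Authorization: Bearer token header and send the requests with an HTTP client:

```python
# Illustrative sketch of the four steps as request builders (URL shapes
# copied from the steps above; site_id, list_id, etc. are placeholders).
GRAPH = "https://graph.microsoft.com/v1.0"

def get_drive_url(site_id, list_id):
    # Step 1: retrieve the document library's drive id
    return f"{GRAPH}/sites/{site_id}/lists/{list_id}?$expand=drive"

def create_folder_request(drive_id, folder_name):
    # Step 2: POST URL and JSON body that create the folder
    return f"{GRAPH}/drives/{drive_id}/root/children", {"name": folder_name, "folder": {}}

def get_sharepoint_ids_url(site_id, drive_id, item_id):
    # Step 3: resolve the folder's SharePoint list-item id
    return f"{GRAPH}/sites/{site_id}/drives/{drive_id}/items/{item_id}?expand=sharepointids"

def patch_content_type(site_id, list_id, list_item_id, content_type_id):
    # Step 4: PATCH URL and body that turn the folder into a document set
    url = f"{GRAPH}/sites/{site_id}/lists/{list_id}/items/{list_item_id}"
    return url, {"contentType": {"id": content_type_id}, "fields": {}}
```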
I'm now looking at how to create a document set in a specific folder in SharePoint.
For example, I want to create the following folder structure.
The documents folder is at the library root, and I want to create a document set named billing inside it.
documents
|_ billing
   |_ 2021
      |_ 11
         |_ 01
            |_ document1.pdf
            |_ document2.pdf
            |_ document3.pdf
         |_ 02
            ...
         |_ 03
            ...
         |_ 04
      |_ 10
         ...
Thanks!
I'm doing something similar but I'm a little behind you, haven't yet created the Document Set (almost there!), but may I respectfully challenge your approach?
I'm not sure it's a good idea to mix Folders and Document Sets, mainly because a Folder breaks the metadata flow; you could achieve the same results using Document Sets alone.
I am assuming that your 'day' in the data structure above is your Document Set (containing document1.pdf, etc.) You might want to consider creating a 'Billing' Document Set and either specifically add a Date field to the Document Set metadata or, perhaps better still, just use the standard Created On metadata and then create Views suitably filtered/grouped/sorted on that date.
This way you can also create filtered views for 'Client' or 'Invoice' or 'Financial Year' or whatever.
As soon as your documents sit inside a Folder, you can no longer filter, sort, or group the document library based on metadata.
FURTHER INFORMATION
I am personally structuring my Sales document library thus:
Name: Opportunity; Content Type: Document Set; Metadata: Client Name, Client Address, Client Contact
Name: Proposal; Content Type: Document; Metadata: Proposal ID, Version
Name: Quote; Content Type: Document; Metadata: Quote ID, Version
Etc...
This way the basic SharePoint view is a list of Opportunities (Document Sets), inside which are Proposals, Quotes etc., but I can also filter the view to just show Proposals (i.e. filter by Content Type), or search for a specific Proposal ID, or group by Client Name, then sort chronologically, or by Proposal ID etc.
I'm just saying that you get a lot more flexibility if you avoid using Folders entirely.
p.s. I've been researching for days now how to create Document Sets with graph, it never occurred to me that it might be a two-step process i.e. create the folder, then patch its content type. Many thanks for your post!!
Just re-read your post and my assumption that the 'day' would be your document set was incorrect. In this case, there would be no benefit having a Document Set containing Folders because the moment a Folder exists in the Document Set, metadata flow stops, and the only reason (well, the main reason*) to use Document Sets in preference to Folders is that metadata flow.
*Document Sets also allow you to automatically create a set of documents based on defined templates.

Trying to export DynamoDB table variables from a Serverless stack fails with the intrinsic function !Ref

I have a working stack built with the Serverless Framework which includes a DynamoDB table (the stack was already deployed successfully). I am trying to export the table's variables (name and ARN, basically) so they can be used in another stack I have deployed.
To achieve this I have the following:
in serverless.yml:
resources:
  Resources:
    AqDataTable: ${file(resources/AqDataTable.yml):AqDataTable}
  Outputs:
    AqDataTableName: ${file(resources/AqDataTable.yml):Outputs.AqDataTableName}
    AqDataTableArn: ${file(resources/AqDataTable.yml):Outputs.AqDataTableArn}
(...)
custom:
  AqDataTable:
    name: !Ref AqDataTable
    arn: !GetAtt AqDataTable.Arn
    stream_arn: !GetAtt AqDataTable.StreamArn
in resources/AqDataTable.yml:
Outputs:
  AqDataTableName:
    Value: ${self:custom.AqDataTable.name}
    Export:
      Name: ${self:custom.AqDataTable.name}-Name
  AqDataTableArn:
    Value: ${self:custom.AqDataTable.arn}
    Export:
      Name: ${self:custom.AqDataTable.name}-Arn
When trying to deploy I get the following error:
Serverless Error ---------------------------------------
Trying to populate non string value into a string for variable ${self:custom.AqDataTable.name}. Please make sure the value of the property is a string.
The way I worked around this was by replacing the AqDataTable.name value in the custom section of serverless.yml from !Ref AqDataTable to a harder-coded value, AqDataTable-${self:provider.stage}, but obviously this is bad practice which I would like to avoid.
I'd appreciate any inputs on why this stack format invalidates the !Ref intrinsic function, or better ways to achieve what I am after here.
Many thanks!
In case anyone ever faces this issue:
After going over the docs one more time: what I initially tried to do is apparently not possible. According to the CloudFormation docs:
For outputs, the value of the Name property of an Export can't use Ref or GetAtt functions that depend on a resource.
Similarly, the ImportValue function can't include Ref or GetAtt functions that depend on a resource.
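In other words, the Value of an output may still use !Ref or !GetAtt; it is only the Export Name that must resolve without depending on a resource. A sketch of resources/AqDataTable.yml along those lines (the stage-based export name is an assumption, mirroring the workaround described in the question):

```yaml
Outputs:
  AqDataTableName:
    Value: !Ref AqDataTable              # Ref is allowed in Value
    Export:
      Name: AqDataTable-${self:provider.stage}-Name   # must be static at deploy time
  AqDataTableArn:
    Value: !GetAtt AqDataTable.Arn
    Export:
      Name: AqDataTable-${self:provider.stage}-Arn
```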

How to enable SiteEdit for Embedded Fields when using DD4T?

I am trying to enable SiteEdit for the embedded fields on pages implemented using DD4T.
I can find the methods and tags which enable it for normal fields and component presentations, but not for embedded fields, nor at the component level (directly passing the IComponent model).
I am trying to enable it for SiteEdit 2012 (UI).
Please help.
The same as a 'normal' field. Imagine you have an embedded field called 'address' with two fields: street and number. This is how you would make it SiteEdit enabled:
@* Street *@
@Html.SiteEditField(Model.Component, Model.Component.Fields["address"].EmbeddedValues[0]["street"])
@* Number *@
@Html.SiteEditField(Model.Component, Model.Component.Fields["address"].EmbeddedValues[0]["number"])

Filtering issue with Urchin 6

I have the following filter for one of my profiles:
filter type: Include Pattern Only
filter field: user_defined_variable (AUTO)
filter pattern: \[53\]
case sensitive: no
In my content, I have the following javascript:
_userv=0;
urchinTracker();
__utmSetVar("various string in here");
Now, the issue is that in this profile, files are showing up in the report that shouldn't. For instance, in the Webmaster View > Content By Title, a page with the following variable (as seen in the source) shows up:
__utmSetVar("[3][345]")
I have no idea why this is happening. The filter pattern doesn't match, so the page shouldn't show up.
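A quick check confirms that the pattern itself really doesn't match that value (Urchin include filters are regular expressions; Python's re module is used here only to illustrate, and the helper name is made up):

```python
import re

# The profile's Include-only filter pattern, as a regular expression.
FILTER_PATTERN = r"\[53\]"

def passes_filter(value, pattern=FILTER_PATTERN):
    """True if a user-defined variable value would pass the include filter
    (case-insensitive, as configured in the profile)."""
    return re.search(pattern, value, re.IGNORECASE) is not None
```

Here passes_filter('__utmSetVar("[3][345]")') is False while passes_filter('__utmSetVar("[53]")') is True, so the pattern alone cannot explain the page appearing in the report.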
It turns out that it's supposed to include files that may have a different pattern. The reason is that it will report on all the files that were seen during a single visit, which includes other files with different custom variables.
To see the report on the custom vars:
Marketing Optimization
- Visitor segment performance
-- User defined
