Azure Resource Manager Template: Is there a way to get all resourceId/resourceName of a resource type under a Resource Group? - azure-resource-manager

I wonder if there is a way to get all resources of a particular type under a resource group?

Yes, the easiest way to do it is using PowerShell; try the command below:
Get-AzResource -ResourceGroupName <resource group name> | where {$_.ResourceType -eq <resource type>} | select Name, ResourceId
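For example, to list all storage accounts in a hypothetical resource group called myResourceGroup (both names below are placeholders, not taken from the question):
Get-AzResource -ResourceGroupName myResourceGroup | where {$_.ResourceType -eq 'Microsoft.Storage/storageAccounts'} | select Name, ResourceId
Get-AzResource also accepts a -ResourceType parameter, so Get-AzResource -ResourceGroupName myResourceGroup -ResourceType Microsoft.Storage/storageAccounts should return the same set without the extra where clause.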
Hope it helps.

You can use the resourceId() function for a single resource:
"[resourceId('otherResourceGroup', 'Microsoft.Storage/storageAccounts', 'examplestorage')]"
https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-functions-resource#resourceid
This is not possible for ALL the resources in the resource group unless you know their names beforehand and call resourceId() for each one individually,
or you could use external scripts, as the other answer suggests.
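For illustration, if you do know the names, an outputs section could collect the IDs one by one; the resource type and names below are made up for the example:
"outputs": {
  "knownResourceIds": {
    "type": "array",
    "value": [
      "[resourceId('Microsoft.Storage/storageAccounts', 'examplestorage1')]",
      "[resourceId('Microsoft.Storage/storageAccounts', 'examplestorage2')]"
    ]
  }
}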

Related

What is GRAPHDB_CONTEXT_TEST in GraphDB?

I am trying to set up a new repository in GraphDB. The npm documentation says that
GRAPHDB_CONTEXT_TEST = 'http://ont.enapso.com/repo'; where is this "http://ont.enapso.com/repo" coming from?
Also, why do we need { prefix: 'entest', iri: 'http://ont.enapso.com/test#' }?
In the test repository, it is:
But I don't understand whether what is inside the quotes is just a string or a link.
GRAPHDB_CONTEXT_TEST = 'http://ont.enapso.com/repo';
This is the global variable holding 'http://ont.enapso.com/repo', which defines which named graph will be used.
{ prefix: 'entest', iri: 'http://ont.enapso.com/test#' }
These are the prefixes that we can pass in along with their IRIs. They are used in SPARQL queries that require your prefixes.
The value inside the quotes is the IRI (internationalized resource identifier) that the prefix stands for; it is used to identify resources uniquely.
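As an illustration only (the class name below is made up, not part of the ENAPSO packages), declaring that prefix lets you shorten IRIs in a SPARQL query:
PREFIX entest: <http://ont.enapso.com/test#>
SELECT ?subject WHERE {
  ?subject a entest:ExampleClass .
}
Without the prefix declaration you would have to spell out the full IRI <http://ont.enapso.com/test#ExampleClass> each time.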
You can also check the updated documentation of the package, available at:
ENAPSO GraphDB Client
ENAPSO GraphDB Admin
Hope that answers your questions.

How can I create custom fields in Phabricator Maniphest programmatically?

How can I use maniphest.custom-field-definitions for Phabricator from the CLI? I know how to create the custom fields using the UI based on the docs, but I am trying to do this with a command-line option, passing in the JSON on stdin if possible.
I am aware that I can perform various operations from the command line, like config, auth, api, etc., but I could not find a pointer on how to work with custom fields from the command line.
You can use the following command to edit the custom field definitions from a shell:
phabricator/bin/config set maniphest.custom-field-definitions "{ your customfield json here}"
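For example, with a single made-up field (the key and attributes below follow the format documented for Maniphest custom fields and are purely illustrative):
phabricator/bin/config set maniphest.custom-field-definitions '{
  "mycompany:estimated-hours": {
    "name": "Estimated Hours",
    "type": "int",
    "caption": "Hours",
    "required": false
  }
}'
Wrapping the JSON in single quotes avoids having to escape the double quotes inside it.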
Note:
This is the same answer that I provided on phabricator-community.org. Posting it here as well for posterity.

Reading YAML from Twig

Preface: in eZ 4 I remember there was a template function to read INI settings; we used to use this to pass specific locations or IDs with which we could then render certain content.
In eZ Platform I am now doing the same thing by using the PreContentViewListener (in the listener I read a YAML file and pass it into the view as params), but this doesn't feel like the correct way, as the PreContentViewListener doesn't always get triggered, for example in custom controllers.
Question
Is there a native way to read YAML files from within Twig templates? After searching the docs and the available packages I cannot find anything :/
If your needs are simple (i.e. reading container parameters), you can also use the eZ Publish config resolver component, which is available in any Twig template as ezpublish.configResolver.
You can specify a siteaccess-aware parameter in the format <namespace>.<scope>.<param_name>, like this:
parameters:
    app.default.param.name: 'Default param value'
    app.eng.param.name: 'English param value'
    app.cro.param.name: 'Croatian param value'
where default, eng and cro are different eZ Publish scopes.
You can then use the config resolver to fetch the parameter in current scope with:
{{ ezpublish.configResolver.parameter('param.name', 'app') }}
If you have Legacy Bridge installed, this even falls back to legacy INI settings if no Symfony container parameter exists:
{{ ezpublish.configResolver.parameter('SiteSettings.SiteName', 'site') }}
Disclaimer: Some say that using the config resolver is bad practice, but for simpler use cases it is okay, IMO.
Have a look at our CjwPublishToolsBundle.
https://github.com/cjw-network/CjwPublishToolsBundle
https://github.com/cjw-network/CjwPublishToolsBundle/blob/master/Services/TwigConfigFunctionsService.php
Here we have two wrapper Twig functions:
{{ cjw_config_resolver_get_parameter('yamlvariablename', 'namespace default ezsettings') }}
=> eZ Publish siteaccess matching
{{ cjw_config_get_parameter('mailer_transport') }}
=> core Symfony YAML reader without siteaccess
You could do a lot of things in eZ 4 that were not always good for your application design. ezini was able to read configuration from the template, but now in eZ Platform, and by extension Symfony, you need to respect more common patterns. IMO the view should not be that smart.
That said, injecting variables into the view from a listener (PreContentViewListener or your own) is not a bad idea.
You can also use Twig globals, which allow you to do two global things:
inject variables (1)
inject a service (2)
Look here: https://symfony.com/doc/current/templating/global_variables.html (a minimal configuration sketch follows below).
(2): please don't inject the service container globally, that is bad practice.
(1): I don't remember whether Twig globals are siteaccess aware; if not, injecting your own service (2) to manage access to the config might be better.
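As a minimal sketch of option (1), assuming a container parameter like the app.default.param.name example above, a Twig global can be declared in the Twig configuration:
# app/config/config.yml (the parameter name is illustrative)
twig:
    globals:
        param_name: '%app.default.param.name%'
It is then available in every template as {{ param_name }}, but note that binding it to a fixed parameter like this bypasses the siteaccess-aware config resolver.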
And finally, I think the use case is not a common one:
we used to use this to pass specific locations or IDs with which we could then render certain content.
Most of the time it is a bad idea to pass IDs coming from the configuration to render something; it is much better to organize the content structure so that you can pull the location you want using the PHP API (no IDs in configuration, no hassle with dev, stage, preprod and prod architectures).

Drupal Context path and subpaths

I have a problem with the Context path condition. In the conditions section I would like to assign something like path/*, where * stands for all subpaths. For instance, if we have path/subpath1/subpath2, the context condition should match path as well as path/subpath1 and path/subpath1/subpath2.
Is it possible to do that with Context?
Thanks in advance!
Yes, use the Context PHP module to add PHP code for conditions, the same way you would use PHP for the visibility paths of a block. Example:
https://drupal.stackexchange.com/questions/79512/tokens-in-blocks-configuration-for-list-only-pages/80155#80155
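As a rough sketch only (Drupal 7 style, the path prefix is hypothetical), a PHP condition like the following would match path and every subpath beneath it:
<?php
// Match "path" itself and anything under "path/".
$alias = drupal_get_path_alias($_GET['q']);
return (bool) preg_match('#^path(/.*)?$#', $alias);
?>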

I'm having problems configuring a filter that replicates specific tables only

I am trying to use filters to select specific tables to replicate.
I tried running this with the installer
./tools/tungsten-installer --master-slave -a \
...
--svc-extractor-filters=replicate \
--property="replicator.filter.replicate.do=test,*.foo"
and got this exception in trepctl status after the master failed to install properly:
Plugin class name property is missing or null: key=replicator.filter.replicate
Which file is this properties file? How do I find it? Moreover, when specifying the settings for the filter, how do I know exactly what to put?
I discovered that I am supposed to modify the configuration template file prior to configuration, according to Issue 219, but what changes am I supposed to make in tungsten-replicator-2.0.5-diff that will later be patched into the extraction?
Issue 254 suggests that if you want to apply a filter out of the box, you can use these options with tungsten-installer:
-a --property=replicator.filter.Replicate.ignoreFilter=schema_x.tablex,schema_x,tabley,schema_y,tablez
--svc-thl-filter=Replicate
However, when I try using this for --property=replicator.filter.replicate.do, the problem is still the same:
pendingExceptionMessage: Plugin class name property is missing or null: key=replicator.filter.replicate
Your assistance will be greatly appreciated.
Rumbi
Update:
Hi
I had a look at this file: /root/tungsten/tungsten-replicator/samples/conf/filters/default/tableignore.tpl. According to this sample, a static-SERVICE_NAME.properties file is supposed to have something like this configured; please confirm if this is the correct syntax:
replicator.filter.tabledo=com.continuent.tungsten.replicator.filter.JavaScriptFilter
replicator.filter.tabledo.script=${replicator.home.dir}/samples/scripts/javascript-advanced/tabledo.js
replicator.filter.tabledo.tables=foo(database).bar(table)
replicator.stage.thl-to-dbms.filters=tabledo
However, I did not find tabledo.js (or anything similar) in the directory where tableignore.js exists. Could I please have the location of this file? If there is an alternative way of specifying --property=replicator.filter.replicate.do=test without the use of this .js file, your suggestions are most welcome.
Download the latest version of Tungsten Replicator. The missing .tpl file was added about a month ago. After installation, the filtered tables should be added to static-SERVICE_NAME.properties under the FILTERS section.
Locate your replicator configuration file in static-YOUR_SERVICE_NAME.properties, e.g.
/opt/continuent/tungsten/tungsten-replicator/conf/static-mysql2vertica.properties
Make sure the individual dbms properties are set, in particular the setting replicator.applier.dbms:
# Batch applier basic configuration information.
replicator.applier.dbms=com.continuent.tungsten.replicator.applier.batch.SimpleBatchApplier
replicator.applier.dbms.url=jdbc:mysql:thin://${replicator.global.db.host}:${replicator.global.db.port}/tungsten_${service.name}?createDB=true
replicator.applier.dbms.driver=org.drizzle.jdbc.DrizzleDriver
replicator.applier.dbms.user=${replicator.global.db.user}
replicator.applier.dbms.password=${replicator.global.db.password}
replicator.applier.dbms.startupScript=${replicator.home.dir}/samples/scripts/batch/mysql-connect.sql
# Timezone and character set.
replicator.applier.dbms.timezone=GMT+0:00
replicator.applier.dbms.charset=UTF-8
# Parameters for loading and merging via stage tables.
replicator.applier.dbms.stageTablePrefix=stage_xxx_
replicator.applier.dbms.stageDirectory=/tmp/staging
replicator.applier.dbms.stageLoadScript=${replicator.home.dir}/samples/scripts/batch/mysql-load.sql
replicator.applier.dbms.stageMergeScript=${replicator.home.dir}/samples/scripts/batch/mysql-merge.sql
replicator.applier.dbms.cleanUpFiles=false
Depending on the database you are replicating to you may have to omit/modify some of the lines.
For more information see:
https://code.google.com/p/tungsten-replicator/wiki/Replicator_Batch_Loading
I don't know if this problem is still open or not.
I am using version 2.0.6-xxx, and installing the service using these parameters works for me.
I would like to point out that, as the name "--svc-extractor-filters" says, this defines an extractor filter, meaning the parameters will guide the extraction of data on the master server.
If you intend to use it on the slave service, you should use "--svc-applier-filters" instead.
The parameters
--svc-extractor-filters=replicate \
--property="replicator.filter.replicate.do=test,*.foo"
are supposed to create the following filter setup in the properties file:
replicator.filter.replicate=com.continuent.tungsten.replicator.filter.ReplicateFilter
replicator.filter.replicate.ignore=
replicator.filter.replicate.do=test,*.foo
And you should also be able to find the
replicator.stage.binlog-to-q.filters=replicate
parameter set.
If you intend to use this filter in the slave, please find the line with:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave
and change it to:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave,replicate
Hope this brief description helps!
