Azure DevOps - Limiting the scope of XmlVariableSubstitution in the IISWebAppDeploymentOnMachineGroup@0 task - ASP.NET

I'm working on improving security in a legacy ASP.NET application. One issue identified was the use of hard-coded database connection strings in web.config.
To resolve this, I've moved the connection details to secret variables in Azure DevOps variable groups.
The variable substitution is done in the IISWebAppDeploymentOnMachineGroup@0 task, by enabling XmlVariableSubstitution.
This works fine. However, I'm a bit concerned about how broadly it applies: the task performs substitutions across every config file in the application, matching any element in appSettings, connectionStrings, or configSections, by key or name, against all pipeline variables.
If at some stage someone adds a variable to the variable groups that happens to match an appSettings key anywhere in the application, the value will be silently and unintentionally substituted.
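For illustration, given a (hypothetical) entry like this anywhere in the application:

<appSettings>
  <add key="CacheTimeout" value="30" />
</appSettings>

a pipeline variable named CacheTimeout, added later for some unrelated purpose, would silently overwrite the value 30 in every config file the task touches.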
I'd like to somehow limit the scope of the substitution task, to ensure it only applies where we need it to.
Is anyone aware of any way to do this?

When you enable the XML variable substitution option in the IISWebAppDeploymentOnMachineGroup task, it loops over all config files by default.
I'm afraid there is no way to limit the scope of the XML variable substitution action within the IISWebAppDeploymentOnMachineGroup task itself.
As a workaround, you can add a File Transform task to update the variables in the config file; it lets you define the target file explicitly.
For example:
- task: FileTransform@1
  displayName: 'File Transform: '
  inputs:
    fileType: xml
    targetFiles: web.config
Alternatively, you can use the RegEx Match & Replace task from the marketplace. It lets you define both the target variable and the target file. Refer to my previous ticket: RegExMatchReplace task
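If you'd rather avoid a marketplace extension altogether, an inline PowerShell step can do an equally narrow replacement. A minimal sketch; the token name and file path are illustrative, and $(DbConnectionString) is assumed to come from your variable group:

- powershell: (Get-Content 'MyApp/web.config' -Raw) -replace '__DbConnectionString__', '$(DbConnectionString)' | Set-Content 'MyApp/web.config'
  displayName: 'Replace connection string token'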

Related

How to read vars.<varName> in the configuration properties component's 'file' field in Mule 4

My property file name is dev_123.yaml.
dev comes from an environment variable called env.
123 is a value coming from a query param called rollId. I am storing this value in vars.rollId.
In the configuration properties component, the 'file' field works if I give ${env}_123.yaml.
However, I want to read the value of '123' dynamically via vars too. I tried the following, but it didn't work:
#[p('env') ++ "_" ++ vars.rollId ++ ".yaml"]
${env}_${vars.rollId}.yaml
That will not work. Configuration properties and configuration files are resolved at the startup of a Mule application, while variables are defined during flow execution, after application startup. There is no way to set a variable during startup. Using configuration properties in the properties file name works because they are resolved at the same time.
An alternative could be to create a custom Mule module with the Mule SDK that implements operations to read properties files dynamically during flow execution. You have to consider whether it is worth the effort.
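If you only need the file's values inside the flow (rather than true configuration properties that other components can reference), a lighter-weight alternative is DataWeave's readUrl function, which does run at flow execution time. A minimal sketch, assuming the file is on the application classpath:

%dw 2.0
output application/java
---
// Reads and parses the YAML at flow execution time, when vars.rollId is available.
readUrl("classpath://" ++ p("env") ++ "_" ++ vars.rollId ++ ".yaml", "application/yaml")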

Web.Config transforms for Multi-Tenant deployment of WebForms app in docker over AWS ECS

Environment
ASP.NET WebForms app over IIS
Docker container host
AWS ECS hosting platform
Each client hosting its own copy of the app with private database connection string
Background
In the non-docker environment, each copy is a virtual directory under IIS and thus has its own individual web.config pointing to a dedicated database. The underlying codebase is the same for each client, with no client-specific customization involved.
In the docker environment (one container per client), each copy runs as the root application, so the route becomes /.
Challenge
Since the root image is going to be the same, how can web.config be overridden for each client deployment?
We shouldn't create multiple images (one per client), as that would mean extra deployment jobs and losing centralization. The connection strings should ideally live in some kind of dictionary storage at the ECS level that can provide client-specific values when the corresponding containers load.
Presenting the approach we used to solve this issue; hope it helps others stuck in similar cases.
With the problem statement tied to a single root image and any customization applied at runtime, we knew there needed to be a transformation of web.config at the time the corresponding container loads.
The solution was a PowerShell script that reads web.config and replaces the specific values whose keys carry a custom prefix. The values are passed in as custom environment variables within ECS, and web.config was updated so the relevant keys include the prefix.
Since a docker container can have only a single entry point, a new base image was created that instantiates an IIS server and calls a PowerShell script at startup. That script runs the transformation and then sets the ServiceMonitor on the w3wp process.
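A minimal sketch of such a transformation script; the APP_ key prefix and the config path are assumptions for illustration, not values from the original setup:

$configPath = 'C:\inetpub\wwwroot\Web.config'
[xml]$config = Get-Content -Path $configPath
foreach ($setting in $config.configuration.appSettings.add) {
    # Only keys carrying the agreed prefix are eligible for override.
    if ($setting.key -like 'APP_*') {
        $override = [Environment]::GetEnvironmentVariable($setting.key)
        if ($override) { $setting.value = $override }
    }
}
$config.Save($configPath)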
Thanks a lot for this article https://anthonychu.ca/post/overriding-web-config-settings-environment-variables-containerized-aspnet-apps/
I would use environment variables, as the OP suggests, with a startup transform; however, I want to make the point that you do not want sensitive information such as DB passwords in ENV variables in your ECS task definition.
For that protected information, you should use ECS secrets coupled with Parameter Store in Systems Manager. These values can be stored encrypted in the Parameter Store (using a KMS key) and the ECS Agent will 'inject' them as ENV variables on task startup.
For me, to simplify matters, I simply use secrets for everything, although you can choose to encrypt only the sensitive information and leave the rest clear.
I dynamically add the secrets for a given application into my task definitions at deploy time by looking up the 'secrets' for the app by 'namespace' (something Parameter Store supports). Then, if I need to add a new parameter, I can just add a new secret to the store in the given namespace and redeploy the app. It will automatically pick up and inject any newly defined secrets into the task definition (or remove ones that have been retired).
Sample Ruby code for creating the task definition:

# Fetch every parameter stored under this app's namespace in Parameter Store.
params = ssm_client.get_parameters_by_path(path: '/production/my_app/').parameters
# Use the last path segment as the ENV variable name and reference each value by ARN.
secrets = params.map { |p| { name: p.name.split('/')[-1], value_from: p.arn } }
# Attach the secrets to the first container definition in the task definition.
task_def.container_definitions[0].secrets = secrets
This last transform injects the secrets such that the secret 'name' is the ENV variable name... which ends up looking like this:
"secrets": [
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_HOSTNAME",
"name": "DB_HOSTNAME"
},
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_PASSWORD",
"name": "DB_PASSWORD"
}
You can see there are no values now in the task definition. They are retrieved and injected when ECS starts up your task.
More information:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html

Debugging Full text search in iManage Worksite

I am creating different workspaces with different kinds of names, sometimes containing special characters. When I try to perform a full text search, nothing comes up in the workspace results. Now, the problem is: how do I start the debugging process?
What should be considered when troubleshooting this issue on both the IDOL indexer and iManage Worksite?
Any suggestions? Really Appreciated.
Thanks!
You can enable debugging logs for all interactions through the SDK by turning on middleware logging; basically, these log the actions performed by iManage.dll.
The log path registry key provides the location of the logs. Any application using iManage.dll will spawn a new file based on its process name (such as YOURAPP.txt, WINWORD.txt, OUTLOOK.txt, etc.).
Create or set the following values in HKEY_LOCAL_MACHINE\Software\Interwoven\WorkSite\8.0\Common:
Name: Middleware Log Path
Type: String
Value: C:\Temp
Name: Middleware Log Flags
Type: DWORD
Value: ffff
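For convenience, the same two values can be set from an elevated command prompt:

reg add "HKLM\Software\Interwoven\WorkSite\8.0\Common" /v "Middleware Log Path" /t REG_SZ /d "C:\Temp"
reg add "HKLM\Software\Interwoven\WorkSite\8.0\Common" /v "Middleware Log Flags" /t REG_DWORD /d 0xffff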
This will show you useful information, but the best way to debug is to pare your search parameters right back to basics, then slowly introduce more complex parameters. Workspace search requests are made up of both IManage.IManProfileSearchParameters and IManage.IManWorkspaceSearchParameters, and I've found that, depending on the data, sometimes you'll need to search with only the IManProfileSearchParameters and leave IManWorkspaceSearchParameters as default, empty parameters.

MarkLogic: I don't know how to get all the results

Hello, I am trying to read a module with this code:
(: Entry point - must be a read-only query. :)
xdmp:invoke(
'/path/mydocument.xqy',
(xs:QName('var1'), 'test',
xs:QName('var2'), "response"))
I am new to MarkLogic; I am using Groovy and the API to connect to it. I also saw I can invoke the module this way, and indeed I did, but it returns:
your query returned an empty sequence
I want to know if I can change 'test' in xs:QName('var1'), 'test' to a wildcard, or how I can get all the information from the file called /path/mydocument.xqy.
I tried to use this:
xdmp:document-get("/path/mydocument.xqy")
but it says the file is not found. However, if I use invoke I can query it; I just don't know what values I have to pass. I was wondering if there is something like SQL's % wildcard to give me all the data.
To answer the first question ("I am trying to read a module"):
If the module is in the database, then you must query the Modules database in which the module resides.
If the module is in the filesystem, then you cannot directly access its source as a document, but you can read it by executing xdmp:filesystem-file().
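For example, to pull the module's source from either location (reusing the /path/mydocument.xqy URI from the question):

(: if the module was loaded into the Modules database :)
xdmp:eval('fn:doc("/path/mydocument.xqy")', (),
  <options xmlns="xdmp:eval"><database>{xdmp:database("Modules")}</database></options>)

(: if the module sits on the server's filesystem :)
xdmp:filesystem-file("/path/mydocument.xqy")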
Simplification:
With the default configuration of the server and REST client, user-placed modules are in the "Modules" database and user-placed documents are in the "Documents" database. This means that if you do a GET (read a "document") with no additional parameters, it will return documents from the "Documents" database. Assuming you are using the default configuration for client and server, this explains the behavior you are seeing: your module code is in the Modules database, so doing a GET for it by name searches the Documents database and correctly does not find it.
You don't mention, and I don't know, which Groovy library you are using, but the REST API itself, and all general-purpose ML REST client libraries I am familiar with, have options for overriding the default database with another. If the Groovy library supports that, then specify the "Modules" database for your query and it should return the module document. Note: the content type will be application/text, not text/xml.
You can simplify things for testing by bypassing the libraries and simply using a browser with a URL like this: http://yourserver.com:8000/v1/documents?uri=/your/module.xqy&database=Modules
Ref: https://docs.marklogic.com/REST/GET/v1/documents
Make the appropriate changes to the path and server for your use.
If you are still confused, you should start with the basic MarkLogic tutorials and work through them one by one. You will most likely succeed faster by doing this than by jumping straight into coding you don't understand yet.
DETAIL:
Note: the default behaviour is to EXECUTE documents when doing a GET call, using the Modules database. Thus doing a GET of http://yourserver:8000/your/module.xqy will EXECUTE it, not return its source.
You will notice the REST API has a uri query parameter. This is EXECUTING the REST API code on /v1/documents which in turn will read the document specified by the uri and database parameters and return it.
I guess I can use:
xdmp:invoke("/pview/get-pview-browse-profiles.xqy",
  cts:and-query((
    cts:element-value-query(
      xs:QName("letter"), "*", "wildcarded"),
    cts:element-value-query(
      xs:QName("collection"), "*", "wildcarded"))))
although it doesn't return anything

Risk if a registrant picks a username that matches a unix command?

In my app I ask users to register using a unique name. The app creates a directory for them with that name that they then can work with, saving files, etc.
I hadn't really thought about screening for anything other than alphanumeric characters in the name. However, I ran across a thread somewhere that said to make sure not to create directory names that match a unix command name.
Is this a legitimate risk? If so, how might one programmatically screen for such an occurrence? I'm also curious how such a scenario might play out to illustrate the problem (exploit?). That last part is academic interest only, of course.
Generally, it doesn't matter (there is no obvious security risk). Most software, the shell for example, locates a unix command using environment variables (like PATH). So even if your created directory matches a unix command like cd, it can only be used as an argument to another unix command, as in cd cd.
However, if another application locates unix commands by some other approach, such as searching particular directories, it could lead to a security breach.
The only way I can think of for that to be a risk is if you're going to turn around and process those user names through command-line functions. You would want to be careful to escape the user names anywhere they could be interpreted as a command, though off the top of my head, with strictly alphanumeric user names, you'd have to go to a lot of trouble to run into such a risk.
If you decided anyway that you wanted to ensure the username didn't match an application on the path of the creating process, you could shell out from whatever your app environment is and evaluate the result of which $prospectiveUsername. If it returns anything other than an empty string, you know the username is an application on the process's path.
NOTE: In the above scenario, make sure you sanitize the username before calling out to the shell command. Otherwise, you do run security risks, if e.g. the user decides to enter her username as "janedoe; rm -rf /".
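A sketch of that check, assuming a Python app environment; shutil.which mirrors the shell's PATH lookup without ever spawning a shell, which also removes the injection risk mentioned in the NOTE above:

import shutil

def collides_with_command(username: str) -> bool:
    # Enforce the alphanumeric-only policy before anything else.
    if not username.isalnum():
        raise ValueError("username must be alphanumeric")
    # shutil.which returns the executable's path if the name resolves on PATH.
    return shutil.which(username) is not None

print(collides_with_command("ls"))       # True on most unix systems
print(collides_with_command("janedoe"))  # False: no such command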
