We are using JFrog Artifactory and looking for a way to automate the repo, group, and permission creation for a list of items as part of an Azure DevOps pipeline.
For example, I want to create a virtual repo called "myproject-mvn-repo" with all its subcomponents as below:
- Create a virtual repository: myproject-mvn-repo
- Link an existing remote repo for Maven, or create one if it doesn't exist: myproject-mvn-remote-repo
- Create 2 local repos if they don't exist:
  - myproject-mvn-release-local-repo
  - myproject-mvn-snapshot-local-repo
- Create a security group for the repos: myproject-sg
- Create 2 permission targets for the repos and related builds:
  - myproject-developers (read/write)
  - myproject-contributors (read/write/manage)
- Add users to the group subsequently
I tried to follow the JFrog documentation, but I couldn't loop through a number of items, and the process needs to be idempotent (it shouldn't create or modify any repo or component that is already present).
Let's split it into 2 parts - managing repositories, and managing permissions.
Repositories
In order to create / update / delete multiple repositories in a single request you can use the Artifactory YAML Configuration.
For example (simplified):
PATCH /artifactory/api/system/configuration
Content-Type: application/yaml

localRepositories:
  myproject-mvn-release-local-repo:
    type: maven
    ...
  myproject-mvn-snapshot-local-repo:
    type: maven
    ...
remoteRepositories:
  myproject-mvn-remote-repo:
    type: maven
    url: ...
    ...
virtualRepositories:
  myproject-mvn-repo:
    type: maven
    repositories:
      - myproject-mvn-release-local-repo
      - myproject-mvn-snapshot-local-repo
      - myproject-mvn-remote-repo
    ...
Note - this is a PATCH request, which means that if a repository already exists the request will not fail; instead, the repository's configuration will be updated based on the settings in this request.
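Because the whole configuration can be sent in a single PATCH, looping over a list of projects from the pipeline is straightforward. Here is a minimal Python sketch (the base URL, token, and project list are placeholders; it uses the requests and PyYAML libraries), relying on the PATCH semantics above for idempotency. Note that PATCH will still update an existing repository whose settings differ, so if you must never touch existing repos, check GET /artifactory/api/repositories first:

import requests
import yaml  # PyYAML

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"  # placeholder
HEADERS = {"Authorization": "Bearer <admin-token>",  # e.g. from a pipeline secret
           "Content-Type": "application/yaml"}

projects = ["myproject", "otherproject"]  # hypothetical list of items

for p in projects:
    config = {
        "localRepositories": {
            f"{p}-mvn-release-local-repo": {"type": "maven"},
            f"{p}-mvn-snapshot-local-repo": {"type": "maven"},
        },
        "remoteRepositories": {
            f"{p}-mvn-remote-repo": {"type": "maven",
                                     "url": "https://repo1.maven.org/maven2/"},
        },
        "virtualRepositories": {
            f"{p}-mvn-repo": {
                "type": "maven",
                "repositories": [f"{p}-mvn-release-local-repo",
                                 f"{p}-mvn-snapshot-local-repo",
                                 f"{p}-mvn-remote-repo"],
            },
        },
    }
    # PATCH merges into the existing configuration, so re-running the
    # pipeline does not fail on repositories that already exist.
    resp = requests.patch(f"{ARTIFACTORY_URL}/api/system/configuration",
                          headers=HEADERS, data=yaml.safe_dump(config))
    resp.raise_for_status()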
Permissions
For managing permissions there are also two options - using projects (preferred), or using groups and permission targets.
Using Projects
From the documentation:
JFrog Projects is a management entity for hosting your resources (repositories, builds, Release Bundles, and Pipelines), and for associating users/groups as members with specific entitlements. As such, using projects helps Platform Admins to offload part of their day-to-day management effort and to generate a better separation between the customer products to improve customer visibility on efficiency, scale, cost, and security. Projects simplifies the onboarding process for new users, creates better visibility for LOBs and project stakeholders.
You can create projects, assign roles to users and groups in projects, assign repositories to projects, and more. Projects can be managed using REST API, specifically (but not limited to):
Add a New Project - to create a new project
Update User in Project - add a user as a member of the project with given roles
Update Group in Project - add a group as a member of the project with given roles
Move Repository in a Project - to assign a repository to a project
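As a rough sketch in Python (the endpoint paths are taken from the Projects REST API documentation linked above - verify them against your JFrog version; the base URL, token, and project key are placeholders), creating a project and attaching a repository to it could look like this:

import requests

BASE = "https://artifactory.example.com"  # placeholder
HEADERS = {"Authorization": "Bearer <admin-token>"}

# Add a New Project; a 409 response means it already exists,
# which keeps the pipeline idempotent.
resp = requests.post(f"{BASE}/access/api/v1/projects", headers=HEADERS,
                     json={"project_key": "myproj", "display_name": "My Project"})
if resp.status_code != 409:
    resp.raise_for_status()

# Move Repository in a Project - assign an existing repository to the project.
resp = requests.put(
    f"{BASE}/access/api/v1/projects/_/attach/repositories/myproject-mvn-repo/myproj",
    headers=HEADERS)
resp.raise_for_status()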
Using Groups and Permission Targets
Manage groups using the REST API. First try to create the group; if it already exists the request will return 409 Conflict, in which case use update group instead, or just add/remove members of the group.
For example - create group myproject-developers with alice and bob as members (simplified):
POST /access/api/v2/groups
Content-Type: application/json

{
  "name": "myproject-developers",
  "description": "My project developers",
  "members": ["alice", "bob"],
  ...
}
Manage permissions - use REST API to create / replace permission targets, aggregating the repositories and granting each group its relevant permissions on those repositories.
For example (simplified):
PUT /artifactory/api/security/permissions/myproject-permissions
Content-Type: application/json

{
  "name": "myproject-permissions",
  "repositories": [
    "myproject-mvn-release-local-repo",
    "myproject-mvn-snapshot-local-repo",
    "myproject-mvn-remote-repo"
  ],
  "principals": {
    "groups" : {
      "myproject-developers" : ["r","w"],
      "myproject-contributors" : ["r","w","m"]
    }
  },
  ...
}
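Tying both together, here is a minimal Python sketch of an idempotent pipeline step using the two endpoints shown above (the base URL and token are placeholders, and the exact path/verb of the update-group fallback should be checked against the update group docs linked above):

import requests

BASE = "https://artifactory.example.com"  # placeholder
HEADERS = {"Authorization": "Bearer <admin-token>"}

def ensure_group(name, members):
    body = {"name": name, "description": f"{name} group", "members": members}
    resp = requests.post(f"{BASE}/access/api/v2/groups", headers=HEADERS, json=body)
    if resp.status_code == 409:  # group exists - update it instead (verify path/verb)
        resp = requests.patch(f"{BASE}/access/api/v2/groups/{name}",
                              headers=HEADERS, json=body)
    resp.raise_for_status()

def ensure_permission_target(name, repos, groups):
    # PUT creates or replaces the permission target, so re-runs are safe.
    body = {"name": name, "repositories": repos,
            "principals": {"groups": groups}}
    resp = requests.put(f"{BASE}/artifactory/api/security/permissions/{name}",
                        headers=HEADERS, json=body)
    resp.raise_for_status()

ensure_group("myproject-developers", ["alice", "bob"])
ensure_permission_target(
    "myproject-permissions",
    ["myproject-mvn-release-local-repo",
     "myproject-mvn-snapshot-local-repo",
     "myproject-mvn-remote-repo"],
    {"myproject-developers": ["r", "w"],
     "myproject-contributors": ["r", "w", "m"]})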
Environment
- ASP.NET WebForms app over IIS
- Docker container host
- AWS ECS hosting platform
- Each client hosts its own copy of the app with a private database connection string
Background
In the non-docker environment, each copy is a virtual directory under IIS, and thus has its own individual web.config pointing to a dedicated database. The underlying codebase is the same for each client, with no client-specific customization involved.
In the docker environment (one container per client), each copy goes over as a central root application, so the route becomes / there.
Challenge
Since the root image is going to be the same, how do we have web.config overridden for each client deployment?
We shouldn't create multiple images (one per client), as that would mean extra deployment jobs and losing out on centralization. The connection strings should ideally be stored in some kind of dictionary storage applicable at the ECS level, which can provide client-specific values when the corresponding containers load.
Presenting the approach we used to solve this issue. Hope it may help others stuck in similar cases.
With the problem statement tied to having a single root image and any customization being applied at runtime, we knew there needed to be a transformation of web.config at the time the corresponding container loads.
The solution was to use a PowerShell script that reads web.config and replaces the values of the keys that carry a custom prefix. The values are passed in from custom environment variables within ECS, and web.config was updated so that the relevant keys carry the prefix.
Since a docker container can have only a single entry point, a new base image was created which instantiated an IIS server and called a PowerShell script on startup. That script invoked the transformation script and then set ServiceMonitor on the w3wp process.
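For illustration only, here is the shape of that transformation in Python (the actual implementation was a PowerShell script; the prefix, file path, and keys below are made up):

import os
import xml.etree.ElementTree as ET

PREFIX = "ECS_"                              # hypothetical marker prefix
CONFIG = r"C:\inetpub\wwwroot\web.config"    # assumed app location

tree = ET.parse(CONFIG)
# Overwrite any appSettings / connectionStrings entry whose key carries
# the prefix with the matching container environment variable from ECS.
for node in tree.findall(".//appSettings/add") + tree.findall(".//connectionStrings/add"):
    key = node.get("key") or node.get("name")
    if key and key.startswith(PREFIX) and key in os.environ:
        if node.get("key") is not None:
            node.set("value", os.environ[key])             # appSettings entry
        else:
            node.set("connectionString", os.environ[key])  # connection string
tree.write(CONFIG)

The entry point runs the transform first and only then hands control to ServiceMonitor, so IIS always starts with the patched config.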
Thanks a lot for this article https://anthonychu.ca/post/overriding-web-config-settings-environment-variables-containerized-aspnet-apps/
I would use environment variables, as the OP suggests, with a startup transform. However, I want to make the point that you do not want sensitive information such as DB passwords in plain ENV variables in your ECS task definition.
For that protected information, you should use ECS secrets coupled with Parameter Store in Systems Manager. These values can be stored encrypted in the Parameter Store (using a KMS key) and the ECS Agent will 'inject' them as ENV variables on task startup.
For me, to simplify matters, I simply use secrets for everything, although you can choose to encrypt only the sensitive information and leave the rest clear.
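Putting a value into the Parameter Store is a one-off (or deploy-time) step. Here is a minimal sketch with Python/boto3 (parameter name and value are placeholders; the deploy-time code later in this answer is Ruby):

import boto3

ssm = boto3.client("ssm")
# SecureString values are encrypted at rest with a KMS key; the ECS agent
# decrypts and injects them as ENV variables at task startup.
ssm.put_parameter(
    Name="/production/my_app/DB_PASSWORD",  # /namespace/app/ENV_VAR_NAME
    Value="s3cret",
    Type="SecureString",
    Overwrite=True,
)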
I dynamically add the secrets for the given application into my task definitions at deploy time by looking up the 'secrets' for the given app by 'namespace' (something that Parameter Store supports). Then, if I need to add a new parameter, I can just add a new secret to the store in the given namespace and re-deploy the app. It will pick up and inject into the task definition any newly defined secrets automatically (or remove ones that have been retired).
Sample Ruby code for creating the task definition (assuming the aws-sdk-ssm gem):

require 'aws-sdk-ssm'
ssm_client = Aws::SSM::Client.new
# The last path segment of each parameter becomes the ENV variable name.
params = ssm_client.get_parameters_by_path(path: '/production/my_app/').parameters
secrets = params.map { |p| { name: p.name.split('/')[-1], value_from: p.arn } }
task_def.container_definitions[0].secrets = secrets
This last transform injects the secrets such that the secret 'name' is the ENV variable name... which ends up looking like this:
"secrets": [
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_HOSTNAME",
"name": "DB_HOSTNAME"
},
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_PASSWORD",
"name": "DB_PASSWORD"
}
You can see there are no values now in the task definition. They are retrieved and injected when ECS starts up your task.
More information:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
I'm provisioning an App Service, an App Service plan, and a storage account into an existing Resource Group using an ARM template, on a nightly basis. Everything has worked for several months, but I suddenly started to see errors like this:
{
  "Code": "BadRequest",
  "Message": "The requested app service plan cannot be created in the current resource group because it is hosting Linux apps. Please choose a different resource group or create a new one.",
  "Target": null,
  "Details": [
    {
      "Message": "The requested app service plan cannot be created in the current resource group because it is hosting Linux apps. Please choose a different resource group or create a new one."
    },
    {
      "Code": "BadRequest"
    },
    {
      "ErrorEntity": {
        "ExtendedCode": "59314",
        "MessageTemplate": "The requested app service plan cannot be created in the current resource group because it is hosting Linux apps. Please choose a different resource group or create a new one.",
        "Parameters": [],
        "Code": "BadRequest",
        "Message": "The requested app service plan cannot be created in the current resource group because it is hosting Linux apps. Please choose a different resource group or create a new one."
      }
    }
  ],
  "Innererror": null
}
Error code: 1201
There are no changes in the ARM template.
I don't have permission to create new Resource Groups in this subscription; I only have Resource Group owner rights on this existing one.
Historically, you can't mix Windows and Linux apps in the same resource group. However, all resource groups created on or after January 21, 2021 do support this scenario. For resource groups created before January 21, 2021, the ability to add mixed platform deployments will be rolled out across Azure regions (including National cloud regions) soon.
See: https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-intro#limitations
See also the feature request to support Linux and Windows App Service Plan within the same Resource Group:
https://feedback.azure.com/forums/169385-web-apps/suggestions/37287583-allow-a-linux-and-windows-app-service-plan-to-exis
The issue can be resolved by creating a new Linux App Service plan in the Resource Group and then deleting it. After that, Windows App Service plan provisioning works again.
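If you want to script that workaround in the nightly pipeline, here is a hedged sketch with the Azure Python SDK (azure-identity and azure-mgmt-web; the resource names, region, and SKU are placeholders, and method names can vary between SDK versions):

from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import AppServicePlan, SkuDescription

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create a throwaway Linux plan in the affected resource group...
client.app_service_plans.begin_create_or_update(
    "my-rg", "temp-linux-plan",
    AppServicePlan(location="westeurope", kind="linux", reserved=True,
                   sku=SkuDescription(name="B1", tier="Basic")),
).result()

# ...then delete it; Windows plan provisioning in the group works again.
client.app_service_plans.delete("my-rg", "temp-linux-plan")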
SOLUTION THAT WORKED FOR ME:
It seems App Service Plans (ASPs) with different OSes (Linux/Windows) cannot coexist in the same Resource Group in the same Region.
So what I did was:
1. Created a new Resource Group (optional, if you have one already)
2. Deleted all the ASPs in the group (if you are using an already created Resource Group)
3. Searched for "App Service plans" and pressed Enter
4. Clicked Add
5. Specified the Resource Group > OS (Linux) > Region (East US) > SKU > Review + Create
6. Again: searched for "App Service plans", pressed Enter, clicked Add
7. Specified the Resource Group > OS (Windows) > Region (Central US) > SKU > Review + Create
Doing the above steps resolved my issue. Hope it helps others.
In my case, I deleted all existing app services, solutions, and placeholders in that resource group, and then it worked.
We have set up nightly testing for an open source project (MERN stack). The Selenium tests require test data which we do not want to make public. Initially we tried to keep the test data as environment variables in the build server (CircleCI), but this approach is not scalable. We do not own any infrastructure, so any database or storage bucket based solution would mean additional cost, which is not feasible on the org's current budget. Is there a smart solution to keep the test data files secure at no additional cost?
As you know, the challenge is that you need somewhere to put that data. If you're trying to do this without paying any providers, the best I can suggest is Amazon's free tier for either S3 storage or a database. https://aws.amazon.com/free/
Those could be securely accessed from CircleCI by just storing the API keys as project variables.
CircleCI's AWS S3 orb encapsulates the install and setup of AWS CLI to simplify this.
version: 2.1
orbs:
  aws-s3: circleci/aws-s3@1.0.2
jobs:
  build:
    docker:
      - image: 'circleci/node:10'
    steps:
      - checkout
      - aws-s3/copy:
          from: 's3://your-s3-bucket-name/test_data/somefile.ext'
          to: test_data.ext
      - run: # your test code here
When deploying a Microsoft.Web resource with the new MSI feature, the principalId GUID for the created identity is visible after deployment. The screenshot below shows the structure in the ARM template.
What would be the best way to fetch this GUID later in the pipeline to be able to assign access rights in (for instance) Data Lake Store?
Is it possible to use any of the existing ARM template functions to do so?
I just struggled with this myself. The solution that worked for me was found deep in the comments here.
Essentially, you create a variable targeting the resource you are creating with MSI support. Then you can use the variable to fetch the specific tenantId and principalId values. Not ideal, but it works. In my examples, I'm configuring Key Vault permissions for a Function App.
To create the variable, use the syntax below.
"variables": {
"identity_resource_id": "[concat(resourceId('Microsoft.Web/sites', variables('appName')), '/providers/Microsoft.ManagedIdentity/Identities/default')]"
}
To get the actual values for the tenantId and principalId, reference them with the following syntax:
{
  "tenantId": "[reference(variables('identity_resource_id'), '2015-08-31-PREVIEW').tenantId]",
  "objectId": "[reference(variables('identity_resource_id'), '2015-08-31-PREVIEW').principalId]"
}
Hope this helps anyone who comes along with the same problem!
Here are a few sample templates: https://github.com/rashidqureshi/MSI-Samples that show a) how to grant RBAC access to ARM resources and b) how to create an access policy for Key Vault using the OID of the MSI.
There is a newer way to get identity information: you can get it directly from a resource that supports Managed Identity for Azure resources (formerly Managed Service Identity).
{
  "tenantId": "[reference(resourceId('Microsoft.Web/sites', variables('serviceAppName')),'2019-08-01', 'full').identity.tenantId]",
  "objectId": "[reference(resourceId('Microsoft.Web/sites', variables('serviceAppName')),'2019-08-01', 'full').identity.principalId]"
}
You can also get the principal ID for a resource in another resource group and/or subscription, since resourceId supports optional parameters:
"tenantId": "[reference(resourceId(variables('resourceGroup'), 'Microsoft.Web/sites', variables('serviceAppName')),'2019-08-01', 'full').identity.tenantId]",
or
"tenantId": "[reference(resourceId(variables('subscription'), variables('resourceGroup'), 'Microsoft.Web/sites', variables('serviceAppName')),'2019-08-01', 'full').identity.tenantId]",
This is quite a long question, but there's quite a lot to it.
It feels like it should be a reasonably common use case, so I'm hoping the Stack Overflow community can provide me with a 'best practice in Symfony2' answer.
The solution I describe below works, but there are several consequences I'd like to avoid:
- In my local dev environment, if I have used the wrong DB connection, the tests will pass in dev but fail in production
- The routes of the admin API are accessible on the public API URL, just denied
- If I have a mirror of live in my dev environment (3 separate checkouts with the corresponding parameters.yml files), then the feature tests for the other bundles fail
Is there a 'best practice in Symfony2' way to set up my project?
We're running a LAMP stack. We use git/(Atlassian) Stash for version control.
We're using Doctrine for the ORM and FOS-REST with OAuth, plus Symfony firewalls, to authenticate and authorise the users.
We're committed to using Symfony2, so I am trying to find a 'best practice' solution:
I have a project with 3 applications:
- A public-facing API (which gives read-only access to the data)
- A protected API (which provides admin functionality)
- A set of batch processes (to e.g. import data and monitor data quality)
Each application uses a set of shared models.
I have created 4 bundles, one for each application and a 4th for the shared models.
- Each application must use a different database user to access the database.
- There's only one database.
- There are several tables; one is called 'prices'.
- The admin API must be accessible from only one hostname (e.g. admin-api.server1).
- The public API must be accessible only from a different hostname (e.g. public-api.server2).
- Each application is hosted on a different server.
In parameters.yml in my dev environment I have this:

# parameters.yml
api_public_db_user: user1
api_public_db_pass: pass1
api_admin_db_user: user2
api_admin_db_pass: pass2
batch_db_user: user3
batch_db_pass: pass3
In config.yml I have this:
# config.yml
doctrine:
  dbal:
    connections:
      api_public:
        user: "%api_public_db_user%"
        password: "%api_public_db_pass%"
      api_admin:
        user: "%api_admin_db_user%"
        password: "%api_admin_db_pass%"
      batch:
        user: "%batch_db_user%"
        password: "%batch_db_pass%"
In my code I can do this (I believe this can be done from the service container too, but I haven't got that far yet):

$entityManager = $this->getContainer()->get('doctrine')->getManager('api_public');
$entityRepository = $this->getContainer()->get('doctrine')->getRepository('CommonBundle:Price', 'api_admin');
When I deploy my code to each of the live servers, I put junk values in the parameters.yml for the other applications
# parameters.yml on the public api server
api_public_db_user: user1
api_public_db_pass: pass1
api_admin_db_user: **JUNK**
api_admin_db_pass: **JUNK**
batch_db_user: **JUNK**
batch_db_pass: **JUNK**
I have locked down my application so that the database isn't accessible (and thus the other API features don't work).
I have also set up Symfony firewall security so that the different routes require different permissions.
There's also security in the Apache vhost to deny access to, say, the admin API path from the public API directory.
So, I have secured my application and met the requirement of the security audit, but the dev process isn't ideal and something feels wrong.
As background:
We have previously looked at splitting it up into different applications within the same project (like this: Symfony2 multiple applications and api centric application; we actually followed this method: http://jolicode.com/blog/multiple-applications-with-symfony2), but ran into difficulties, and in any case, Fabien says not to (https://groups.google.com/forum/#!topic/symfony-devs/yneojUuFiqw). That this existed in Symfony1 and was removed in Symfony2 is enough of an argument for me.
We have previously gone down the route of splitting up each bundle and importing it using Composer, but this caused too much development overhead (for example, having to modify many repositories to implement a feature, and not being able to see all of the changes for a feature in a single pull request).
We are receiving an ever growing number of requests to create APIs, and we're similarly worried about putting each application in its own repository.
So, putting each of the three applications in a separate Symfony project / git repository is something we want to avoid too.