I have created an Azure Synapse Analytics workspace in which I have pipelines with parameters. These parameters hold values such as the storage account name, folder names, etc., which will change as we deploy the Synapse Analytics workspace from Dev to Stage and Prod.
During deployment, I do not see the pipeline parameters in the ARM template parameters file; only trigger and linked service parameters are exposed.
In Azure Data Factory, we would use arm-template-parameters-definition.json to define which parameters should be available in the ARM template parameters file.
Is there a similar option available in Synapse Workspace?
You can follow this article: CICD Automation in Synapse Analytics: taking advantage of custom parameters in Workspace Templates.
You would need to create a custom template-parameters-definition.json file; that lets you add more customization, such as exposing pipeline parameters, as sketched below.
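For illustration only, a custom template-parameters-definition.json that exposes every pipeline parameter's default value could look roughly like this; the resource-type key and the file location (typically the root folder of the collaboration branch) should be verified against the article above:

```json
{
    "Microsoft.Synapse/workspaces/pipelines": {
        "properties": {
            "parameters": {
                "*": {
                    "defaultValue": "="
                }
            }
        }
    }
}
```

Here * matches every pipeline parameter and = keeps the current value as the default of the generated ARM template parameter.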
I am interested in knowing how I can integrate a repository with an Azure Machine Learning Workspace.
What have I tried?
I have some experience with Azure Data Factory, and I have usually set up workflows where:
I have a dev Azure Data Factory instance that is linked to an Azure repository.
Changes are made to the repository using the code editor.
These changes are published via the adf_publish branch to the live dev instance.
I use a CI/CD pipeline with the AzureRMTemplate task to deploy the templates from the publish branch and release the changes to the production environment.
Question:
How can I achieve the same or a similar workflow with an Azure Machine Learning Workspace?
How is CI/CD done with an Azure ML workspace?
The following workflow is the officially recommended practice for achieving this.
Starting with the architecture outlined below:
We need a dedicated datastore to hold the dataset.
Perform the regular code modifications using an IDE such as Jupyter Notebook or VS Code.
Train and test the model.
Register the model, deploy the model image as a web service, and operate it from there (a sketch of these steps follows below).
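For the register-and-deploy part, a minimal sketch using the azureml-core (v1) Python SDK is shown below; the file names, model name, service name, and curated environment are placeholders for illustration, not values from the original setup.

```python
# Minimal sketch with the azureml-core (v1) SDK; names and paths are placeholders.
from azureml.core import Workspace, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

# Connect to the workspace described by a downloaded config.json.
ws = Workspace.from_config()

# Register the artifact produced by the training step.
model = Model.register(
    workspace=ws,
    model_path="outputs/model.pkl",   # placeholder: path to the trained model file
    model_name="my-model",            # placeholder: name in the model registry
)

# Describe how the model is served, then deploy the image as an ACI web service.
inference_config = InferenceConfig(
    entry_script="score.py",                                  # placeholder scoring script
    environment=Environment.get(ws, name="AzureML-Minimal"),  # assumed curated environment
)
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "my-model-service", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```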
Configure the CI Pipeline:
Follow the steps below to complete the procedure.
Before implementation:
- We need an account with an enabled Azure subscription.
- Azure DevOps must be activated.
Open the DevOps portal with SSO enabled.
Navigate to Pipelines -> Builds -> choose the build that was created for the model -> click Edit.
The build pipeline will look like the screen below.
We use the Anaconda distribution for this example to get all the dependencies.
To install the environment dependencies, check the link.
Use the Python environment, under Install Requirements, in the user setup.
In the Create or Get Workspace task, select your account subscription as shown in the screen below.
Save the changes made in the other tasks; all of them must use the same subscription.
The entire CI/CD procedure and solution is documented in the link.
Document Credit: Praneet Singh Solanki
Amplify does not support a --profile CLI option; it always uses the profile that was specified when the application was generated. Different team members use different AWS profiles.
How can I change/configure/use a profile other than the one used during application generation?
The aim is to publish changes from a different computer. The final goal is to use a CI server to publish the application to different regions.
Amplify does not work like "other" development tools where the tool is detached from Git. Amplify goes hand in hand with Git and requires initialization after cloning. By running amplify init and choosing an existing environment (one pushed by another developer), it is possible to select a different AWS profile.
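A rough sketch of that flow after cloning is shown below; the prompt wording varies between CLI versions, and the headless flags in the second command (useful for the CI-server goal) use placeholder environment and profile names.

```sh
# After cloning the repo on another machine (prompt wording varies by CLI version):
amplify init
# ? Do you want to use an existing environment? Yes
# ? Choose the environment you would like to use: dev
# ? Select the authentication method you want to use: AWS profile
# ? Please choose the profile you want to use: my-other-profile

# Headless variant for a CI server (env and profile names are placeholders):
amplify init --yes \
  --amplify '{"envName":"dev","defaultEditor":"code"}' \
  --providers '{"awscloudformation":{"configLevel":"project","useProfile":true,"profileName":"ci-profile"}}'
```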
We are attempting to deploy Firebase Functions, Rules, and Indexes to multiple projects for tenant isolation of data. We are using Google Cloud Source Repositories, but Cloud Build in each project is not able to connect to the central project's source repository, even though we have added the required Source Repository IAM roles to our Cloud Build service account.
What is a good solution for deploying our Firebase Functions, Rules, and Indexes from a central repository?
You can't access events from a source repository in another project. Therefore, you can't set up a trigger on a source repository that doesn't belong to your project.
So, you can use the following workaround to achieve what you want.
Source Project
Create a Pub/Sub topic (push-event, for example).
Configure the trigger that you want, which runs a Cloud Build.
In this Cloud Build, format a JSON message with all the push data that you want (commit SHA, type of event, repo name, ...) and publish this message to the push-event topic.
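For illustration, a cloudbuild.json in the source repository could publish the trigger's built-in substitutions to that topic; the topic name and message fields below just mirror the ones suggested above.

```json
{
  "steps": [
    {
      "name": "gcr.io/cloud-builders/gcloud",
      "args": [
        "pubsub", "topics", "publish", "push-event",
        "--message={\"repoName\":\"$REPO_NAME\",\"commitSha\":\"$COMMIT_SHA\",\"branchName\":\"$BRANCH_NAME\"}"
      ]
    }
  ]
}
```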
Tenant Projects
Create a Cloud Function that triggers Cloud Build (more on that below).
Create a push subscription on the Pub/Sub push-event topic located in the source project (be sure that the account that runs the Terraform has the Pub/Sub Viewer and Pub/Sub Subscriber roles on the push-event topic, or on the source project).
Note: the first thing that you have to do in the Cloud Build execution is clone the source repository, because the data won't be downloaded automatically (get the correct source according to the branch, tag, or pull event).
Cloud Functions
I don't know your development language, but the principle is to make an API call to the Cloud Build API to launch the build. This API call requires the content of the cloudbuild.json. So, in the Cloud Function, either:
You can clone the source repo (grant the reader permission) into the /tmp directory and then read the cloudbuild.json file to run in your Cloud Build, but this could be difficult in a branch, tag, or pull context.
Or you can publish, in addition to the other data in the Pub/Sub message sent from the source project, the content of the cloudbuild.json file that the Cloud Function in the tenant project should run.
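As a rough Python sketch of the principle (an HTTP function behind the push subscription that calls the Cloud Build API): the project ID, repository URL, message fields, and build steps below are placeholders, and the remaining steps should come from your cloudbuild.json.

```python
# Sketch of an HTTP Cloud Function that sits behind the Pub/Sub push subscription
# and starts a Cloud Build in the tenant project. Requires the google-cloud-build package.
import base64
import json

from google.cloud.devtools import cloudbuild_v1

TENANT_PROJECT = "my-tenant-project"  # placeholder: the tenant project ID
CENTRAL_REPO_URL = "https://source.developers.google.com/p/central-project/r/central-repo"  # placeholder


def handle_push_event(request):
    # Decode the Pub/Sub push envelope published by the source project.
    envelope = request.get_json()
    data = json.loads(base64.b64decode(envelope["message"]["data"]).decode("utf-8"))

    # Build definition: clone the central repo at the pushed commit, then deploy.
    build = cloudbuild_v1.Build(
        steps=[
            {"name": "gcr.io/cloud-builders/git",
             "args": ["clone", CENTRAL_REPO_URL, "."]},
            {"name": "gcr.io/cloud-builders/git",
             "args": ["checkout", data["commitSha"]]},
            # ...remaining steps: whatever your cloudbuild.json does
            # (e.g. deploying the Firebase Functions, Rules, and Indexes).
        ]
    )

    client = cloudbuild_v1.CloudBuildClient()
    client.create_build(project_id=TENANT_PROJECT, build=build)
    return "Build submitted", 200
```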
Using the Elastic Beanstalk console, I'm trying to create a very basic Node.js environment using the preconfigured platform and the sample application.
When I try to configure more options and then modify Rolling updates and deployments, I only have two options for Deployment policy. Why don't I have the option to select Rolling or Rolling with additional batch? What else would I need to do either within Elastic Beanstalk or in another service (resource) to be able to do this?
Region is N. California and IAM user is full admin.
The configuration preset needs to be changed from Low cost (Free Tier eligible) to either High availability or Custom configuration; the Low cost preset creates a single-instance environment, and the rolling deployment policies require a load-balanced environment. It would still be good to know how to change this after the application has already been created using the Low cost preset.
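For an environment that was already created with the Low cost preset, a sketch of switching it to a load-balanced environment with the AWS CLI is shown below (the environment name is a placeholder); after that, the rolling policies become selectable.

```sh
# Placeholder environment name; switch the single-instance environment to load-balanced:
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings Namespace=aws:elasticbeanstalk:environment,OptionName=EnvironmentType,Value=LoadBalanced

# Wait for that update to finish. Rolling / Rolling with additional batch then appear
# in the console, or the policy can be set directly:
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings Namespace=aws:elasticbeanstalk:command,OptionName=DeploymentPolicy,Value=Rolling
```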
We use a configuration management tool (Chef) for the WSO2 API Manager installation (v2.1.0). For each installation, the WSO2 directory is deleted and overwritten with the new changes/patches.
This process removes already-created APIs from the WSO2 API Publisher. (Since these are still present in the database, they cannot be re-created with the same name.) We had assumed that the entire API configuration is stored in the database, which is obviously not the case.
This API-specific file caught our attention:
<wso2am>/repository/deployment/server/synapse-configs/default/api/admin--my-api-definition_vv1.xml
Are there any other such files that must not be deleted during a new installation or is there a way to create these files from the information stored in the database?
We have considered using the API import/export tool (https://docs.wso2.com/display/AM210/Migrating+the+APIs+to+a+Different+Environment). However, according to the documentation, this also creates the database entries for the API, which in our case already exist.
You have to keep the content of the server folder (/repository/deployment/server). For this, you can use SVN-based deployment synchronization (dep-sync). Once you enable dep-sync by providing an SVN server location, all the server-specific data will be written to the SVN server.
When you are installing the newer pack, what you need to do is point it to the SVN location and the database. (I hope you are using a production-ready database other than the built-in H2.)
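For reference, dep-sync is enabled in <wso2am>/repository/conf/carbon.xml with a block along these lines; this is a sketch only, so verify the element names against the WSO2 Carbon documentation for your version (an SVN client bundle such as svnkit also has to be added to the server). The SVN URL and credentials are placeholders.

```xml
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.example.com/wso2-depsync/</SvnUrl>
    <SvnUser>svn-user</SvnUser>
    <SvnPassword>svn-password</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```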