Azure Bicep RG Deployment - Object reference not set to an instance of an object [duplicate] - azure-resource-manager

I'm trying to create a simple App Service Plan with the below code.
param Location string = 'eastus'
resource appServicePlan1 'Microsoft.Web/serverfarms@2020-12-01' = {
  name: 'myasp'
  location: Location
  sku: {
    name: 'S1'
    capacity: 1
  }
}
Below is the Azure CLI command that I'm using to execute the above Bicep script
az deployment group create --name deploy1 --resource-group az-devops-eus-dev-rg1 --template-file main.bicep
The deployment fails with "Object reference not set to an instance of an object".
All this was working earlier. I'm using the latest version of Bicep (v0.9.1) which is available as of today.
Any pointers on why this is occurring now would be much appreciated.

Just had this issue in an MS workshop. We solved it by adding an empty properties element to the appServicePlan. Example:
param Location string = 'eastus'
resource appServicePlan1 'Microsoft.Web/serverfarms@2020-12-01' = {
  name: 'myasp'
  location: Location
  properties: {}
  sku: {
    name: 'S1'
    capacity: 1
  }
}

Related

Terraform with Localstack basic setup doesn't work

I want to try a basic setup for LocalStack with Terraform.
My docker-compose file, from the LocalStack docs:
version: "3.8"

services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
    image: localstack/localstack
    ports:
      - "127.0.0.1:4566:4566"            # LocalStack Gateway
      - "127.0.0.1:4510-4559:4510-4559"  # external services port range
      - "127.0.0.1:53:53"                # DNS config (only required for Pro)
      - "127.0.0.1:53:53/udp"            # DNS config (only required for Pro)
      - "127.0.0.1:443:443"              # LocalStack HTTPS Gateway (only required for Pro)
    environment:
      - DEBUG=1
      - LOCALSTACK_HOSTNAME=localhost
      - PERSISTENCE=${PERSISTENCE-}
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-}
      - LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY-}  # only required for Pro
      - DOCKER_HOST=unix:///var/run/docker.sock
      - SERVICES=s3
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
My Terraform file to create an S3 bucket:
# Public Cloud Configuration
provider "aws" {
  region     = "us-east-1"
  access_key = "test123"
  secret_key = "testabc"

  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_metadata_api_check     = true

  endpoints {
    s3 = "http://localhost:4566"
  }
}

# Create Bucket
resource "aws_s3_bucket" "my_bucket" {
  bucket = "bucket"
}
I get the following error when running terraform apply:
│ Error: creating Amazon S3 (Simple Storage) Bucket (bucket): RequestError: send request failed
│ caused by: Put "http://bucket.localhost:4566/": dial tcp: lookup bucket.localhost on 10.222.50.10:53: no such host
│
│ with aws_s3_bucket.my_bucket,
│ on main.tf line 15, in resource "aws_s3_bucket" "my_bucket":
│ 15: resource "aws_s3_bucket" "my_bucket" {
I'm able to create an S3 bucket manually in LocalStack with a command like:
aws --endpoint-url http://localhost:4566 s3 mb s3://user-uploads
docker ps output
c9497bcff0e3 localstack/localstack "docker-entrypoint.sh" 23 minutes ago Up 18 minutes (healthy) 127.0.0.1:53->53/tcp, 127.0.0.1:443->443/tcp, 127.0.0.1:4510-4559->4510-4559/tcp, 127.0.0.1:4566->4566/tcp, 127.0.0.1:53->53/udp, 5678/tcp localstack_main
So why is LocalStack trying to access a different address like 192.168.178.1:53? Do I need to specify a different address somewhere? I've checked a number of tutorials, and for everyone else this setup works fine.
The issue lies with your Terraform configuration. The following code inside main.tf works perfectly with tflocal (LocalStack's wrapper for the Terraform CLI):
resource "aws_s3_bucket" "my_bucket" {
  bucket = "bucket"
}
tflocal init
tflocal apply
If you don't wish to use tflocal, you would need to have a configuration like this:
provider "aws" {
access_key = "mock_access_key"
secret_key = "mock_secret_key"
region = "us-east-1"
s3_force_path_style = true
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
s3 = "http://s3.localhost.localstack.cloud:4566"
}
}
resource "aws_s3_bucket" "test-bucket" {
bucket = "my-bucket"
}
I hope this helps.
I solved it by adding a property to the aws provider:
s3_use_path_style = true
This forces the provider to use path-style requests (http://localhost:4566/bucket) instead of the virtual-hosted-style URL (http://bucket.localhost:4566/) that is failing the DNS lookup in the error above. Sample:
terraform {
  required_version = ">= 0.12"
  backend "local" {}
}

provider "aws" {
  region     = "localhost"
  access_key = "local"
  secret_key = "local"

  skip_region_validation      = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_use_path_style           = true // <<- this property here

  endpoints {
    dynamodb = "http://localhost:4566"
    s3       = "http://localhost:4566"
  }
}

// S3
resource "aws_s3_bucket" "my-bucket" {
  bucket = "teste"
}
I hope this helps.

Azure Bicep - can't access output from external module

I have a module called "privateendpoints.bicep" that creates a private endpoint as follows:
resource privateEndpoint_resource 'Microsoft.Network/privateEndpoints@2020-07-01' = {
  name: privateEndpointName
  location: resourceGroup().location
  properties: {
    subnet: {
      id: '${vnet_resource.id}/subnets/${subnetName}'
    }
    privateLinkServiceConnections: [
      {
        name: privateEndpointName
        properties: {
          privateLinkServiceId: resourceId
          groupIds: [
            pvtEndpointGroupName_var
          ]
        }
      }
    ]
  }
}

output privateEndpointIpAddress string = privateEndpoint_resource.properties.networkInterfaces[0].properties.ipConfigurations[0].properties.privateIPAddress
This is then referenced by a calling bicep file as follows:
module sqlPE '../../Azure.Modules/Microsoft.Network.PrivateEndpoints/1.0.0/privateendpoints.bicep' = {
  name: 'sqlPE'
  params: {
    privateEndpointName: 'pe-utrngen-sql-${env}-001'
    resourceId: sqlDeploy.outputs.sqlServerId
    serviceType: 'sql'
    subnetName: 'sub-${env}-utrngenerator01'
    vnetName: 'vnet-${env}-uksouth'
    vnetResourceGroup: 'rg-net-${env}-001'
  }
}

var sqlPrivateLinkIpAddress = sqlPE.outputs.privateEndpointIpAddress
My problem is, it won't build. In VS Code I get the error: The type "outputs" does not contain property "privateEndpointIpAddress".
This is the property I just added; prior to adding it, everything worked OK. I've made sure to save the updated external module, and I've even right-clicked it in VS Code and selected Build; it built OK and created a JSON file.
So it seems the calling Bicep file is not picking up changes in the external module.
Any suggestions please?
The problem seemed to be caused by the fact that I had the external module open in a separate VS Code instance. Once I closed that and opened the file in the same instance as the calling Bicep file, it worked OK.

Running into AWS Elastic BeanStalk Event Error: Manifest file has schema validation errors

I am setting up pipelines to AWS Elastic Beanstalk via Bitbucket and I am running into:
Manifest file has schema validation errors:
Error Kind: ArrayItemNotValid, Path: #/aspNetCoreWeb.[0], Property: [0]
Error Kind: PropertyRequired, Path: #/parameters.appBundle, Property: appBundle
Error Kind: NoAdditionalPropertiesAllowed, Path: #/parameters, Property: parameters
It seems that I have a problem with my manifest file. However, due to there being very little documentation on this error, I am not able to resolve the issue. How do I solve this problem?
Here is my aws-windows-deployment-manifest file:
{
  "manifestVersion": 1,
  "deployments": {
    "aspNetCoreWeb": [
      {
        "name": "CareerDash",
        "parameters": {
          "archive": "site",
          "iisPath": "/"
        }
      }
    ]
  }
}
Looks like I figured it out. The issue is that the aws-windows-deployment-manifest.json file should look like the following:
{
  "manifestVersion": 1,
  "deployments": {
    "aspNetCoreWeb": [
      {
        "name": "CareerDash",
        "parameters": {
          "appBundle": "site.zip", /* the location of your web app bundle; the web app folder should be zipped */
          "iisPath": "/" /* the path within site.zip where your web app files live, specifically the web.config file (at the same level as your main web app files) */
        }
      }
    ]
  }
}
Overall, your app bundle should be a zip file that contains the site.zip file and the aws-windows-deployment-manifest.json file, in a hierarchy like so:
appBundleName.zip
  site.zip
  aws-windows-deployment-manifest.json
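If you want to script that packaging step, here is a minimal TypeScript/Node sketch. It assumes the third-party archiver npm package and reuses the file names from the hierarchy above; treat it as an illustration rather than a required part of the fix.
// build-bundle.ts - packaging sketch (assumes `npm install archiver` and esModuleInterop)
import * as fs from "fs";
import archiver from "archiver";

// Write appBundleName.zip with site.zip and the manifest side by side at the root
const output = fs.createWriteStream("appBundleName.zip");
const archive = archiver("zip", { zlib: { level: 9 } });

archive.pipe(output);
archive.file("site.zip", { name: "site.zip" });
archive.file("aws-windows-deployment-manifest.json", {
  name: "aws-windows-deployment-manifest.json",
});
archive.finalize();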

How to configure Wordpress via AWS Fargate Container using AWS CDK

I would like to configure Wordpress via AWS Fargate in the container variant (i.e. without EC2 instances) using AWS CDK.
I have already implemented a working configuration for this purpose. However, it is currently not possible to install themes or upload files in this form, since Wordpress is located in one or more docker containers.
Here is my current cdk implementation:
AWS-CDK
export class WordpressWebsiteStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // GENERAL
    const vpc = new ec2.Vpc(this, 'Vpc', {
      // 2 is minimum requirement for cluster
      maxAzs: 2,
      // only create Public Subnets in order to prevent aws to create
      // a NAT-Gateway which causes additional costs.
      // This will create 1 public subnet in each AZ.
      subnetConfiguration: [
        {
          name: 'Public',
          subnetType: ec2.SubnetType.PUBLIC,
        },
      ]
    });

    // DATABASE CONFIGURATION
    // Security Group used for Database
    const wordpressSg = new ec2.SecurityGroup(this, 'WordpressSG', {
      vpc: vpc,
      description: 'Wordpress SG',
    });

    // Database Cluster for wordpress database
    const dbCluster = new rds.DatabaseCluster(this, 'DBluster', {
      clusterIdentifier: 'wordpress-db-cluster',
      instances: 1,
      defaultDatabaseName: DB_NAME,
      engine: rds.DatabaseClusterEngine.AURORA, // TODO: AURORA_MYSQL?
      port: DB_PORT,
      masterUser: {
        username: DB_USER,
        password: cdk.SecretValue.plainText(DB_PASSWORD)
      },
      instanceProps: {
        instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
        vpc,
        securityGroup: wordpressSg,
      }
    });

    // FARGATE CONFIGURATION
    // ECS Cluster which will be used to host the Fargate services
    const ecsCluster = new ecs.Cluster(this, 'ECSCluster', {
      vpc: vpc,
    });

    // FARGATE CONTAINER SERVICE
    const fargateService = new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'WordpressFargateService', {
      cluster: ecsCluster, // Required
      desiredCount: 1, // Default is 1
      cpu: 512, // Default is 256
      memoryLimitMiB: 1024, // Default is 512
      // because we are running tasks using the Fargate launch type in a public subnet, we must choose ENABLED
      // for Auto-assign public IP when we launch the tasks.
      // This allows the tasks to have outbound network access to pull an image.
      // @see https://aws.amazon.com/premiumsupport/knowledge-center/ecs-pull-container-api-error-ecr/
      assignPublicIp: true,
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry(wordpressRegistryName),
        environment: {
          WORDPRESS_DB_HOST: dbCluster.clusterEndpoint.socketAddress,
          WORDPRESS_DB_USER: DB_USER,
          WORDPRESS_DB_PASSWORD: DB_PASSWORD,
          WORDPRESS_DB_NAME: DB_NAME,
        },
      },
    });

    fargateService.service.connections.addSecurityGroup(wordpressSg);
    fargateService.service.connections.allowTo(wordpressSg, ec2.Port.tcp(DB_PORT));
  }
}
Perhaps someone knows how I can set up Fargate via CDK so that the individual Wordpress containers have a common volume on which the data is then located? Or maybe there is another elegant solution for this :)
Many thanks in advance :)
Found a solution 🤗
Thanks to the comments in the open GitHub issue and the provided Gist, I was finally able to configure a working solution.
I provided my current solution in this Gist, so feel free to have a look at it, leave some comments, and adapt it if it suits your problem.
I am part of the AWS container service team and I would like to give you a bit of background regarding where we stand. We have recently (5/11/2020) announced the integration of Amazon ECS / AWS Fargate with Amazon EFS (Elastic File System). This is the plumbing that will allow you to achieve what you want to achieve. You can read more about the theory here, and here for a practical example.
The example I linked above uses the AWS CLI simply because CloudFormation support for this feature has not been released yet (stay tuned). Once CFN support is released, CDK will pick it up and, at that point, you will be able to adjust your CDK code to achieve what you want.
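For orientation, once that support reaches CDK, wiring an EFS file system into the Fargate service could look roughly like the sketch below. It assumes a CDK release that ships the ECS/Fargate + EFS integration described above; the extra import goes at the top of the stack file, the rest after fargateService in the constructor from the question, and the construct names, volume name, and /var/www/html mount path are illustrative assumptions.
// Sketch only: assumes EFS volume support on Fargate task definitions is available
// in your CDK version, and that the service runs on Fargate platform version 1.4.
import * as efs from '@aws-cdk/aws-efs';

// Shared file system for wp-content (uploads, themes, plugins)
const fileSystem = new efs.FileSystem(this, 'WordpressEFS', {
  vpc: vpc,
});

// Register the file system as a task-level volume ...
fargateService.taskDefinition.addVolume({
  name: 'wordpress-data',
  efsVolumeConfiguration: {
    fileSystemId: fileSystem.fileSystemId,
  },
});

// ... and mount it into the Wordpress container
fargateService.taskDefinition.defaultContainer?.addMountPoints({
  containerPath: '/var/www/html',
  sourceVolume: 'wordpress-data',
  readOnly: false,
});

// Allow the Fargate tasks to reach EFS on its default NFS port (2049)
fileSystem.connections.allowDefaultPortFrom(fargateService.service);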

Assign profiles to whitelisted users

I have been exploring the profile list feature of the kubespawner, and am presented with a list of available notebooks when I log in. All good. Now I have the use case of User A logging in and seeing notebooks 1 and 2, with User B seeing notebooks 2 and 3.
Is it possible to assign certain profiles to specific users?
I do not think JupyterHub enables you to do that, based on https://zero-to-jupyterhub.readthedocs.io/en/latest/user-environment.html
I think a way to achieve this would be to have multiple JupyterHub instances configured with different lists of notebook images. Based on something like an AD group, you redirect your users to the required instance so they get specific image options.
You can dynamically configure the profile_list to provide users with different image profiles.
Here's a quick example:
import json

import requests
from tornado import gen


@gen.coroutine
def get_profile_list(spawner):
    """get_profile_list is a hook function that is called before the spawner is started.

    Args:
        spawner: jupyterhub.spawner.Spawner instance

    Yields:
        list: list of profiles
    """
    # gets the user's name from the auth_state
    auth_state = yield spawner.user.get_auth_state()
    if spawner.user.name:
        # gets the user's profile list from the API
        api_url = ...  # Make a request (API endpoint elided here)
        data_json = requests.get(url=api_url, verify=False).json()
        data_json_str = str(data_json)
        data_json_str = data_json_str.replace("'", '"')
        data_json_str = data_json_str.replace("True", "true")
        data_python = json.loads(data_json_str)
        return data_python
    return []  # return default profile

c.KubeSpawner.profile_list = get_profile_list
And you can have your API return some kind of configuration similar to this:
[
    {
        "display_name": "Google Cloud in us-central1",
        "description": "Compute paid for by funder A, closest to dataset X",
        "spawner_override": {
            "kubernetes_context": "<kubernetes-context-name-for-this-cluster>",
            "ingress_public_url": "http://<ingress-public-ip-for-this-cluster>"
        }
    },
    {
        "display_name": "AWS on us-east-1",
        "description": "Compute paid for by funder B, closest to dataset Y",
        "spawner_override": {
            "kubernetes_context": "<kubernetes-context-name-for-this-cluster>",
            "ingress_public_url": "http://<ingress-public-ip-for-this-cluster>",
            "patches": {
                "01-memory": """
                    kind: Pod
                    metadata:
                      name: {{key}}
                    spec:
                      containers:
                      - name: notebook
                        resources:
                          requests:
                            memory: 16Mi
                    """,
            }
        }
    },
]
Credit: https://multicluster-kubespawner.readthedocs.io/en/latest/profiles.html
