Error while deploying Google App Engine (GAE) Flexible via Terraform script - app-engine-flexible

I created a Terraform script to deploy a Java app to GAE Flexible, as shown below:
resource "google_app_engine_flexible_app_version" "test-terraform" {
version_id = "v1"
project = "project-id"
service = "service-terraform"
runtime = "java"
liveness_check {
path = "/"
}
readiness_check {
path = "/"
}
env_variables = {
port = "8080"
}
deployment {
zip {
source_url = "https://storage.googleapis.com/[BUCKET_NAME]/[ZIP_OBJECT_NAME]"
}
cloud_build_options {
app_yaml_path = "[PATH_TO_APP-YAML_FILE]"
}
}
# resoucres config
resources {
cpu = 1
memory_gb = 2
disk_gb = 10
}
# scale config
delete_service_on_destroy = true
}
I tried setting the value of PATH_TO_APP-YAML_FILE to:
the location of app.yaml in Cloud Storage
the location of app.yaml inside the ZIP source code, as "./src/main/appengine/app.yaml"
but the deployment does not succeed. The error details shown in Cloud Build are below:
Step #1: WARN - A yaml configuration file was expected, but none was found at the provided path: app.yaml. Proceeding with default configuration values.
Step #1: Exception in thread "main" com.google.cloud.runtimes.builder.exception.ArtifactNotFoundException: No deployable artifacts were found. Unable to proceed.
Step #1: at com.google.cloud.runtimes.builder.buildsteps.PrebuiltRuntimeImageBuildStep.getArtifact(PrebuiltRuntimeImageBuildStep.java:77)
Step #1: at com.google.cloud.runtimes.builder.buildsteps.RuntimeImageBuildStep.run(RuntimeImageBuildStep.java:50)
Step #1: at com.google.cloud.runtimes.builder.BuildPipelineConfigurator.generateDockerResources(BuildPipelineConfigurator.java:104)
Step #1: at com.google.cloud.runtimes.builder.Application.main(Application.java:147)
Finished Step #1
ERROR
Could you please help me pinpoint the exact value for PATH_TO_APP-YAML_FILE?
Thanks!

According to the Terraform documentation, this value stands for:
app_yaml_path - (Required) Path to the yaml file used in deployment, used to determine runtime configuration details.
However, it is not clear whether it is compatible with the source code being located in a Cloud Storage bucket. As suggested on the Terraform community page, I would advise opening an issue in the HashiCorp forum to get more specific insight on this parameter.
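In the meantime, one thing worth trying is a path relative to the contents of the uploaded zip. This is only a sketch under the assumption (not confirmed by the provider docs) that app_yaml_path is resolved inside the extracted archive and that app.yaml sits at its root:

deployment {
  zip {
    source_url = "https://storage.googleapis.com/[BUCKET_NAME]/[ZIP_OBJECT_NAME]"
  }
  cloud_build_options {
    # Assumption: app_yaml_path is resolved relative to the root of the
    # extracted zip, and app.yaml sits at that root.
    app_yaml_path = "app.yaml"
  }
}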

Related

How to create GCP VM instance from marketplace's wordpress image with Terraform

How can I create a GCP VM instance with Terraform using click-to-deploy images?
I am trying:
data "google_compute_image" "wp_image" {
project = "click-to-deploy-images"
name = "wordpress"
}
boot_disk {
initialize_params {
image = data.google_compute_image.wp_image.id
}
}
but I am getting an error like:
Error: error retrieving image information: googleapi: Error 404: The resource 'projects/click-to-deploy-images/global/images/wordpress' was not found, notFound
I have looked in many places but couldn't find an exact solution.
Note: I am using Terraform version = "3.48.0"
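One possibility worth checking (an assumption, not verified against the current image catalog) is that the click-to-deploy-images project publishes image families rather than an image literally named wordpress. You can list what is actually there with gcloud compute images list --project click-to-deploy-images; if a wordpress family exists, a sketch would be:

data "google_compute_image" "wp_image" {
  project = "click-to-deploy-images"
  # Hypothetical: resolve the latest image in the family instead of a
  # fixed name; confirm the family with `gcloud compute images list`.
  family = "wordpress"
}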

Azure Disk Encryption using terraform VM extension - forces replacement [Second run]

I created the following resource to encrypt all disks of a VM, and it has worked fine so far:
resource "azurerm_virtual_machine_extension" "vm_encry_win" {
count = "${var.vm_encry_os_type == "Windows" ? 1 : 0}"
name = "${var.vm_encry_name}"
location = "${var.vm_encry_location}"
resource_group_name = "${var.vm_encry_rg_name}"
virtual_machine_name = "${var.vm_encry_vm_name}"
publisher = "${var.vm_encry_publisher}"
type = "${var.vm_encry_type}"
type_handler_version = "${var.vm_encry_type_handler_version == "" ? "2.2" : var.vm_encry_type_handler_version}"
auto_upgrade_minor_version = "${var.vm_encry_auto_upgrade_minor_version}"
tags = "${var.vm_encry_tags}"
settings = <<SETTINGS
{
"EncryptionOperation": "${var.vm_encry_operation}",
"KeyVaultURL": "${var.vm_encry_kv_vault_uri}",
"KeyVaultResourceId": "${var.vm_encry_kv_vault_id}",
"KeyEncryptionKeyURL": "${var.vm_encry_kv_key_url}",
"KekVaultResourceId": "${var.vm_encry_kv_vault_id}",
"KeyEncryptionAlgorithm": "${var.vm_encry_key_algorithm}",
"VolumeType": "${var.vm_encry_volume_type}"
}
SETTINGS
}
When I ran it the first time, ADE encryption was applied to both the OS and data disks.
However, when I re-run terraform plan or terraform apply, it wants to replace all the data disks I have already created, as the following screenshot illustrates.
I do not know how to solve this; my already-created disks should not be replaced.
I looked into using ignore_changes:
lifecycle {
  ignore_changes = [encryption_settings]
}
I am not sure where to add this, or whether it actually solves the problem.
Which resource block should I add it to?
Or is there another way?
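One commonly suggested direction, sketched below under the assumption that the data disks are declared as azurerm_managed_disk resources (the resource name is hypothetical), is to put the lifecycle block on the disk resource itself, since it is the disk's encryption_settings that drift after ADE runs:

resource "azurerm_managed_disk" "data_disk" {
  # ... existing disk arguments unchanged ...

  lifecycle {
    # Ignore the drift that ADE introduces so Terraform does not plan a
    # destroy-and-recreate of the already-encrypted disk.
    ignore_changes = [encryption_settings]
  }
}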

How to share proto files between client and server

I have set up a gRPC service that contains two proto files (one with custom types and one for the service).
customtypes.proto
syntax = "proto3";
package CustomTypes;
option csharp_namespace = "CustomTypes";
// Example: 12345.6789 -> { units = 12345, nanos = 678900000 }
message Decimal {
// Whole units part of the amount
int64 units = 1;
// Nano units of the amount (10^-9)
// Must be same sign as units
sfixed32 nanos = 2;
}
greet.proto
syntax = "proto3";
option csharp_namespace = "Test";
import "Protos/customtypes.proto";
package greet;
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply);
}
// The request message containing the user's name.
message HelloRequest {
CustomTypes.Decimal value = 1;
}
// The response message containing the greetings.
message HelloReply {
CustomTypes.Decimal value = 1;
}
The server-side project compiles fine, but when I add both proto files to the client project with Add Connected Service (in Visual Studio), it generates the following in my .csproj file:
<ItemGroup>
  <Protobuf Include="..\Server\Protos\customtypes.proto" GrpcServices="Client">
    <Link>Protos\customtypes.proto</Link>
  </Protobuf>
  <Protobuf Include="..\Server\Protos\greet.proto" GrpcServices="Client">
    <Link>Protos\greet.proto</Link>
  </Protobuf>
</ItemGroup>
But when I try to compile, it fails to find the Decimal custom type.
1>------ Build started: Project: Client, Configuration: Debug Any CPU ------
1>Protos/customtypes.proto : error : File not found.
1>../Server/Protos/greet.proto(4,1): error : Import "Protos/customtypes.proto" was not found or had errors.
1>../Server/Protos/greet.proto(16,3): error : "CustomTypes.Decimal" is not defined.
1>../Server/Protos/greet.proto(21,3): error : "CustomTypes.Decimal" is not defined.
1>Done building project "Client.csproj" -- FAILED.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
I have setup a github project that contains everything that is needed https://github.com/AnderssonPeter/TestGRPC/tree/02779b47fda3128483698c628b4db05bc6b57c75
So how should I set up a project to share the proto files? The only solution I can think of is to create a common project and generate both client and server types, but it seems a bit wasteful to have the server types in the client project!
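One direction worth trying, based on how Grpc.Tools resolves imports (I have not verified it against this exact repo), is to set ProtoRoot to the server project folder so that import "Protos/customtypes.proto" resolves there instead of in the client project:

<ItemGroup>
  <!-- Assumption: with ProtoRoot pointing at the server folder, the
       import "Protos/customtypes.proto" in greet.proto resolves to
       ..\Server\Protos\customtypes.proto at compile time. -->
  <Protobuf Include="..\Server\Protos\customtypes.proto" ProtoRoot="..\Server" GrpcServices="Client" />
  <Protobuf Include="..\Server\Protos\greet.proto" ProtoRoot="..\Server" GrpcServices="Client" />
</ItemGroup>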

Spring Cloud Contract: Unable to read the stub jar from JFrog on the consumer side

I am new to Spring Cloud Contract testing. I am able to publish the stub jar to JFrog Artifactory from the producer side using the following Gradle code:
publishing {
  publications {
    maven(MavenPublication) {
      artifact("build/libs/provider-service-$version" + "-stubs.jar") {
        extension 'jar'
      }
    }
  }
  repositories {
    maven {
      name 'libs-snapshot'
      url "http://localhost:8081/artifactory/libs-snapshot/"
      credentials {
        username project.repoUser
        password project.repoPassword
      }
    }
  }
}
But on the consumer side I am not able to read the jar; I am getting the following error.
Code:
@AutoConfigureStubRunner(ids = "com.test:provider-service:+:stubs:8082",
    consumerName = "contracts",
    properties = {"stubrunner.username=admin", "stubrunner.password=Cirrus123$"},
    stubsPerConsumer = true,
    stubsMode = StubRunnerProperties.StubsMode.REMOTE,
    repositoryRoot = "http://localhost:8081/artifactory/libs-snapshot/")
Error:
java.lang.IllegalArgumentException: For groupId [com.test] artifactId [provider-service] and classifier [stubs] the version was not resolved! The following exceptions took place [org.eclipse.aether.transfer.MetadataNotFoundException: Could not find metadata com.test:provider-service/maven-metadata.xml in local (C:\Users\test\AppData\Local\Temp\aether-local7525112400154924089), org.eclipse.aether.transfer.MetadataTransferException: Could not transfer metadata com.test:provider-service/maven-metadata.xml from/to remote0 (http://localhost:8081/artifactory/libs-snapshot/): status code: 401, reason phrase: Unauthorized (401)]
But I am using the correct credentials to connect to JFrog.
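Since the 401 suggests the underlying Maven/Aether resolver is not picking up the stubrunner.username/stubrunner.password properties, one workaround that is sometimes suggested (an assumption, not a confirmed fix) is embedding the credentials directly in the repositoryRoot URL:

@AutoConfigureStubRunner(ids = "com.test:provider-service:+:stubs:8082",
    consumerName = "contracts",
    stubsPerConsumer = true,
    stubsMode = StubRunnerProperties.StubsMode.REMOTE,
    // Hypothetical workaround: user:password@host places the credentials
    // where the resolver can see them without extra properties.
    repositoryRoot = "http://admin:Cirrus123$@localhost:8081/artifactory/libs-snapshot/")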

Publishing web app to Azure Websites Staging deployment slot fails with webjob

I just created a new deployment slot for my app and imported the publishing profile into Visual Studio, but after deployment I get this error message:
Error 8: An error occurred while creating the WebJob schedule: No website could be found which matches the WebSiteName [myapp__staging] and WebSiteUrl [http://myapp-staging.azurewebsites.net] supplied.
I have two WebJobs: a continuous one and a scheduled one.
I have already signed in to the correct Azure account, as stated by this answer.
Will I need to set something else up in order to deploy my app to a staging deployment slot with WebJobs?
My app is using ASP.NET, if that makes a difference.
There are a few quirks when using the Azure Scheduler. The recommendation is to use the new CRON support instead. You can learn more about it here and here.
Jeff,
As David suggested, you can/should migrate to the new CRON support. Here's an example. The WebJob will be deployed as a continuous WebJob.
Keep in mind that in order to use this you need to install the WebJobs package and extensions, which are currently prerelease. You can get them on NuGet.
Install-Package Microsoft.Azure.WebJobs -Pre
Install-Package Microsoft.Azure.WebJobs.Extensions -Pre
Also, as David suggested, if you're not using the WebJobs SDK, you can run this using a settings.job file. He provided an example here.
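For reference, a settings.job for a scheduled WebJob just carries the CRON expression; the schedule below (every 5 minutes) is illustrative:

{
  "schedule": "0 */5 * * * *"
}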
Program.cs
static void Main()
{
    // Set up DI (in case you're using an IoC container)
    var module = new CustomModule();
    var kernel = new StandardKernel(module);

    // Configure the JobHost
    var storageConnectionString = "your_connection_string";
    var config = new JobHostConfiguration(storageConnectionString) { JobActivator = new JobActivator(kernel) };
    config.UseTimers(); // Required for the TimerTrigger / CRON support.

    // Pass the configuration to the JobHost
    var host = new JobHost(config);

    // The following call ensures that the WebJob runs continuously.
    host.RunAndBlock();
}
Functions.cs
public class Functions
{
    public void YourMethodName([TimerTrigger("00:05:00")] TimerInfo timerInfo, TextWriter log)
    {
        // This job runs every 5 minutes.
        // Do work here.
    }
}
You can change the schedule in the TimerTrigger attribute.
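If you prefer a CRON expression over a TimeSpan (the TimerTrigger in the WebJobs extensions accepts both), the equivalent five-minute schedule would look like this:

public void YourMethodName([TimerTrigger("0 */5 * * * *")] TimerInfo timerInfo, TextWriter log)
{
    // Six-field CRON: second, minute, hour, day, month, day-of-week.
    // "0 */5 * * * *" fires at second zero of every fifth minute.
}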
UPDATE: Added the webjob-publish-settings.json file.
Here's an example of webjob-publish-settings.json:
{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "YourWebJobName",
  "startTime": null,
  "endTime": null,
  "jobRecurrenceFrequency": null,
  "interval": null,
  "runMode": "Continuous"
}