I am facing an issue where containers that should only have private endpoints cannot be deployed:
"The requested resource is not available in the location 'germanywestcentral' at this moment. Please retry with a different resource request or in another location. Resource requested: '1' CPU '1.5' GB memory 'Linux' OS virtual network"
The same container works fine as soon as I select a public interface.
I didn't find anything about this in the documentation or on the internet, so maybe someone here has an idea?
Thanks in advance
Stefan
I tried with the QuickStart image and a Docker Hub registry as well, getting the same result.
This error indicates that due to heavy load in the region in which you are attempting to deploy, the resources specified for your container can't be allocated at that time. Use one or more of the following mitigation steps to help resolve your issue.
1. Deploy to a different Azure region
2. Deploy at a later time
3. You can also reach out to support for more information.
Reference: https://learn.microsoft.com/en-us/answers/questions/394616/trying-to-re-deploy-a-container-instance-but-error.html
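For the first mitigation, a minimal sketch of retrying the same VNet-integrated (private) deployment in another region with the Azure CLI might look like the following; the resource group, VNet, subnet, and image names are placeholders, and westeurope is just an example region:
# Retry the VNet-integrated container group in another region
az container create \
  --resource-group my-rg \
  --name my-private-container \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --os-type Linux --cpu 1 --memory 1.5 \
  --vnet my-vnet --subnet my-aci-subnet \
  --location westeurope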
I have been trying a blue/green deployment with CodeDeploy, but it throws an error: The following validation error occurred: Target group not attached to the Auto Scaling group (Service: AmazonAutoScaling; Status Code: 400; Error Code: ValidationError; Request ID: cd58091b-fe83-4dcf-b090-18c3b3d2dbbc; Proxy: null)
Even though the applied policy already includes these permissions for working with target groups:
codedeploy:GetDeployment
elasticloadbalancing:DescribeTargetGroups
autoscaling:AttachLoadBalancers
autoscaling:AttachLoadBalancerTargetGroups
Could anyone help me sort out this issue? What am I missing?
In our case, we managed to fix it the hard way by contacting the AWS support team. Briefly about our app: we run a Magento application behind an Application Load Balancer with auto scaling, and deployment is managed using AWS CodeDeploy with blue/green deployments.
We spent several days figuring out what was going on. Others suggested there might be an issue with IAM permissions, but we hadn't touched those for months and deployments had never had any issues.
An AWS rep replied and said that in our case there is a known issue/limitation in AWS CodeDeploy: it currently doesn't support blue/green deployments based on ASGs that use target tracking scaling policies, because it doesn't attach the green ASG to the original target group, which is a requirement when target tracking scaling policies are enabled on the Auto Scaling group.
We then realized we had made a minor change to our Auto Scaling group's dynamic scaling policies: we had switched from a "CPU utilization"-based metric to "Request count". Reverting back to the CPU utilization-based metric solved the issue, and we could run the deployment successfully.
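For reference, a rough sketch of what that reverted policy looks like with the AWS CLI; the ASG name, policy name, and 50% CPU target are placeholders, not values from our setup:
# Replace the request-count policy with a CPU-utilization target tracking policy
aws autoscaling put-scaling-policy --auto-scaling-group-name my-asg --policy-name cpu-target-tracking --policy-type TargetTrackingScaling --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'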
Hope this helps, as this error does not seem to be documented in the AWS docs.
I'm trying to allow my search indexer to connect to a Cosmos DB behind a VNet. When adding a shared private link, the provisioning state is set to Failed, without giving me an explanation. I already have a private endpoint set up on the Cosmos DB. How do I make this work?
I had the same issue occur on the same day as you reported it.
I had set up this connection via Azure Pipelines the day before, but suddenly the same pipeline stopped working.
I raised it as an issue with MS, and it was quickly reproduced by first-line support and passed on to escalation engineers, who confirmed there was a recent change in the Fluent SDK used: when an ARM deployment is initiated for Shared Private Link resources, both the template link and the template ID end up being specified (incorrectly/by accident). As a result, customers creating SPL resources fail to do so with a 500.
I am told a worldwide fix is being rolled out and will be completed on Monday, Pacific time.
I wanted to resize the RAM and CPU of my machine, so I stopped the VM instance, and when I tried to start it I got an error:
The zone 'projects/freesarkarijobalerts/zones/asia-south1-a' does not
have enough resources available to fulfill the request. Try a
different zone, or try again later.
Here you can see the screenshot.
I tried to start the VM instance again today, but the result was the same and I got the error message again:
The zone 'projects/freesarkarijobalerts/zones/asia-south1-a' does not
have enough resources available to fulfill the request. Try a
different zone, or try again later.
Then I tried to move my instance to a different zone, but I got an error message:
sarkarijobalerts123#cloudshell:~ (freesarkarijobalerts)$ gcloud compute instances move wordpress-2-vm --zone=asia-south1-a --destination-zone=asia-south1-b
Moving gce instance wordpress-2-vm...failed.
ERROR: (gcloud.compute.instances.move) Instance cannot be moved while in state: TERMINATED
My website has been DOWN for a couple of days, please help me.
The standard procedure is to create a snapshot of the stopped VM instance's disk [1] and then use it to create a new instance in another zone [2]; a gcloud sketch follows the links.
[1] https://cloud.google.com/compute/docs/disks/create-snapshots
[2] https://cloud.google.com/compute/docs/disks/restore-and-delete-snapshots#restore_a_snapshot_of_a_persistent_disk_to_a_new_disk
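In gcloud terms, the procedure is roughly the following (a sketch only: the snapshot, disk, and new instance names are made up, and you would still need to match the original machine type, network, and other settings):
# 1. Snapshot the stopped instance's boot disk (often named after the instance)
gcloud compute disks snapshot wordpress-2-vm --zone=asia-south1-a --snapshot-names=wordpress-2-vm-snap
# 2. Restore the snapshot to a new disk in another zone
gcloud compute disks create wordpress-2-vm-disk-b --source-snapshot=wordpress-2-vm-snap --zone=asia-south1-b
# 3. Create a new instance in that zone booting from the restored disk
gcloud compute instances create wordpress-2-vm-b --zone=asia-south1-b --disk=name=wordpress-2-vm-disk-b,boot=yes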
Let's have a look at the cause of this issue:
When you stop an instance, it releases resources such as vCPU and memory.
When you start an instance, it requests those resources back, and if there aren't enough resources available in the zone you'll get an error message:
Error: The zone 'projects/freesarkarijobalerts/zones/asia-south1-a' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
More information is available in the documentation:
If you receive a resource error (such as ZONE_RESOURCE_POOL_EXHAUSTED
or ZONE_RESOURCE_POOL_EXHAUSTED_WITH_DETAILS) when requesting new
resources, it means that the zone cannot currently accommodate your
request. This error is due to Compute Engine resource obtainability,
and is not due to your Compute Engine quota.
Resource availability depends on other users' requests and is therefore dynamic.
There are a few ways to solve your issue:
Move your instance to another zone by following the instructions above (snapshot the disk and recreate the instance).
Wait for a while and try to start your VM instance again.
Reserve resources for your VM by following the documentation, to avoid such issues in the future (a gcloud sketch follows the quoted documentation below):
Create reservations for Virtual Machine (VM) instances in a specific
zone, using custom or predefined machine types, with or without
additional GPUs or local SSDs, to ensure resources are available for
your workloads when you need them. After you create a reservation, you
begin paying for the reserved resources immediately, and they remain
available for your project to use indefinitely, until the reservation
is deleted.
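For example, a minimal gcloud sketch of such a reservation (the reservation name and machine type are placeholders; billing for the reserved capacity starts as soon as the reservation is created):
# Reserve capacity for one VM in the zone so future starts can't run out of resources
gcloud compute reservations create my-reservation --zone=asia-south1-a --vm-count=1 --machine-type=e2-standard-2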
I am trying to execute a simple WordCount streams application, but I get the error "Could not create internal topics - Stream-thread exception".
I have seen a similar thread but that seems to be more of a network issue.
There is no security enabled on the Kafka broker.
Only one broker is configured, and the issue still occurs.
Can someone let me know how to fix this?
Clean up your temporary Kafka topics.
Run the --list command on Kafka to see all the topics starting with your application name and ending with -changelog or -repartition, and manually delete them.
This one worked for me.
Also, check your delete.topic.enable setting so that deletion actually happens. It was not enabled by default until 1.0.0 - see https://issues.apache.org/jira/browse/KAFKA-5384
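A rough sketch of those commands (the topic name shown is just an example of the <application.id>-...-changelog pattern; on older Kafka versions kafka-topics.sh takes --zookeeper localhost:2181 instead of --bootstrap-server):
# List all topics and look for <application.id>-...-changelog / -repartition
bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
# Delete one of the internal topics (the broker must have delete.topic.enable=true)
bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic wordcount-counts-store-changelog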
I connected to Kafka using Kafka Tool and deleted them manually.
I'm not quite sure what's going on, but several issues are occurring on our website, which is backed by Tridion 2011 SP1, and I can only think they are being caused by the broker DB.
1. Loading the website results in an error from the Ambient Data module:
[NullReferenceException: Object reference not set to an instance of an object.]
Tridion.ContentDelivery.AmbientData.HttpModule.OnRequestStart(Object sender, EventArgs e) +292
System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +79
System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +269
2. Audience Manager profile synchronisation between the broker and subscription management databases is failing, with the following errors in the logs:
Error occurred while fetching a synchronization batch from the presentation system with url: [URL] Profilesync.aspx. Message class java.sql.SQLException No Data Access Object for AudienceManagerProfile java.sql.SQLException: No Data Access Object for AudienceManagerProfile
3. Publishing the website publication repeatedly fails at the transport stage, returning:
Transport failed: Could not transport tcm_0-10689-66560.Content.zip using HTTPS
I know these sound like several separate issues happening at once, but my site was running fine up until another web publication was added into the BluePrint. Since then we've been getting these errors, and I'd like to think it's all related to an issue with the broker DB.
Anyone come across something like this?
UPDATE: I should also add that dynamic linking has stopped working as well, which strengthens my belief that there is an issue either with connecting to the broker database or with the database itself.
Oh boy, that's a lot of questions at once; let's take them one by one.
Did you check that your license file is valid and didn't expire recently?
I would rule out the license file first. If that's not the issue, then see below.
First, adding a publication to the BluePrint does not have any impact on the CDA site, so I would totally rule this out.
Second, "the site was up and running until the publication was added" can't be the whole story. Some configuration must have been changed or added on your website. Sometimes the configuration is updated but the app pool is not recycled, so you don't see the impact; later, when the app pool is restarted/recycled, you notice the errors and get the impression that the site broke all of a sudden.
Did you double-check your configs and DLLs?
Are you able to connect to your database directly with the broker user ID/password?
Have any firewall changes happened recently? Try connecting to the DB from the CDA server.
For #3, I had this same problem... Is it possible that adding the new publication caused the package size to increase from less than 30 MB to more than 30 MB? If so, check IIS Request Filtering >> Edit Feature Settings and look at Maximum allowed content length. This defaults to 30 MB, and for us increasing it to something larger than our package size solved the Transport Failed error (since IIS would just reject our transports because they were larger than 30 MB).
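For reference, the equivalent change can be made in the site's web.config; the 100 MB value below is only an example and should simply be larger than your transport package size (the value is in bytes, and the default is 30000000, roughly 30 MB):
<system.webServer>
  <security>
    <requestFiltering>
      <!-- maximum allowed request body size in bytes -->
      <requestLimits maxAllowedContentLength="104857600" />
    </requestFiltering>
  </security>
</system.webServer>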
About issue #1: could it be that you have the Tridion.ContentDelivery.AmbientData.HttpModule configured in the Web.config but are lacking the Java installation? Do you have a cd_ambient_conf.xml in your config folder? And a cd_ambient.jar in the lib folder?
About issue #2: you seem to be missing the Audience Manager DAOs (Data Access Objects). Do you have the following in your cd_storage_conf.xml?
<StorageBindings>
<Bundle src="AudienceManagerDAOBundle.xml"/>
</StorageBindings>
About issue #3: do you have more information in cd_transport.log or the Windows Event Viewer?
I agree with Ram that the added publication couldn't have caused this. It looks like your CD installation was changed somehow. Did you do an upgrade or something?