Microsoft Translator custom model deployment failed - microsoft-cognitive

I've tried twice to deploy an already successfully trained model in the Microsoft Custom Translator dashboard (BLEU score: 38.31).
The deployment always ends with a "Deployment Failed" status and no error message. The model is the first and only one in the project.
I have an S1 subscription plan. Has anyone encountered a similar issue?
Thanks
Max

Related

I got a "Target group not attached to the Auto Scaling group" error while doing a blue/green deployment through CodeDeploy

I have been trying a blue/green deployment with CodeDeploy, but it throws this error: The following validation error occurred: Target group not attached to the Auto Scaling group (Service: AmazonAutoScaling; Status Code: 400; Error Code: ValidationError; Request ID: cd58091b-fe83-4dcf-b090-18c3b3d2dbbc; Proxy: null)
This is despite the policy with the following target-group permissions already being applied:
codedeploy:GetDeployment
elasticloadbalancing:DescribeTargetGroups
autoscaling:AttachLoadBalancers
autoscaling:AttachLoadBalancerTargetGroups
Could anyone help me sort out this issue? What am I missing?
In our case, we fixed it the hard way by contacting the AWS support team. Briefly, about our app: we run a Magento application behind an Application Load Balancer with autoscaling, and deployments are managed with AWS CodeDeploy using blue/green deployments.
We spent several days figuring out what was going on. Others suggested there might be an issue with IAM permissions, but we hadn't touched those for months and deployments had never had any issues.
An AWS rep replied and said that, in our case, there is a known issue/limitation in AWS CodeDeploy: it currently doesn't support blue/green deployments based on ASGs that use target tracking scaling policies, because it doesn't attach the green ASG to the original target group, which is a requirement when target tracking scaling policies are enabled on the Auto Scaling group.
We then realized we had made a minor change to our Auto Scaling groups' dynamic scaling policies: we had switched from a "CPU utilization"-based metric to "Request count". Reverting to the CPU utilization-based metric solved the issue, and the deployment now runs successfully.
Hope this helps, as this error does not seem to be documented in the AWS docs.
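For anyone who wants to check this condition programmatically, here is a minimal sketch (assuming boto3 is installed and a hypothetical Auto Scaling group name) that lists the ASG's scaling policies, flags target tracking ones along with their metric, and shows which target groups are attached:

import boto3

ASG_NAME = "my-blue-green-asg"  # hypothetical name; replace with your Auto Scaling group

autoscaling = boto3.client("autoscaling")

# List dynamic scaling policies and flag target tracking ones, which (per the
# AWS support answer above) CodeDeploy blue/green deployments don't handle.
policies = autoscaling.describe_policies(AutoScalingGroupName=ASG_NAME)["ScalingPolicies"]
for policy in policies:
    if policy["PolicyType"] == "TargetTrackingScaling":
        spec = policy.get("TargetTrackingConfiguration", {}).get(
            "PredefinedMetricSpecification", {})
        print(f"{policy['PolicyName']}: target tracking on "
              f"{spec.get('PredefinedMetricType', 'custom metric')}")

# Confirm which target groups are currently attached to the ASG.
attachments = autoscaling.describe_load_balancer_target_groups(
    AutoScalingGroupName=ASG_NAME)["LoadBalancerTargetGroups"]
for tg in attachments:
    print(tg["LoadBalancerTargetGroupARN"], tg["State"])

If the first loop prints a policy tracking ALBRequestCountPerTarget, that matches the failure mode described above.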

Airflow DAG cannot find connection ID

I am managing a Google Cloud Composer environment that runs Airflow for a data engineering team. I have recently been asked to troubleshoot one of the DAGs they run, which is failing with this error: [12:41:18,119] {credentials_utils.py:23} WARNING - [redacted-name] connection ID not available, falling back to Google default credentials
The job is basically a data pipeline that reads from various sources and stores data in GBQ. The odd part is that they have a virtually identical DAG running for a different project, and it works perfectly.
I have recreated the .json credentials for the service account behind the connection, as well as the connection itself in Airflow. I have also sanitized the code to check for hidden spaces and the like.
My knowledge of Airflow is limited and I have not been able to find any similar issue in my research. Has anyone encountered this before?
The DE team came back to me saying it was actually a deployment issue: an internal module involved in service account authentication was being used inside another DAG running in the staging environment, which made it impossible to fetch credentials from the connection ID.
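For anyone debugging a similar warning, a minimal sketch (assuming Airflow 2.x and a hypothetical connection ID) to confirm that the connection is actually visible from inside the Composer environment:

from airflow.hooks.base import BaseHook  # in Airflow 1.10 this lives in airflow.hooks.base_hook

CONN_ID = "my_gcp_connection"  # hypothetical connection ID; replace with the redacted name

try:
    conn = BaseHook.get_connection(CONN_ID)
    print(f"Found connection '{CONN_ID}' of type {conn.conn_type}")
except Exception as exc:
    print(f"Connection '{CONN_ID}' not available: {exc}")

Running this in a throwaway task (or checking with the Airflow 2 CLI command "airflow connections get my_gcp_connection") shows whether the fallback to default credentials comes from a missing connection or from something else in the deployment, as turned out to be the case here.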

Azure Wide and Deep Recommender real-time inference pipeline error - Invalid graph: You have required input port(s) unconnected

It seems like there is a bug in creating real-time endpoints for the "Wide & Deep Recommender" module, at least with the sample workflow. I keep getting "Invalid graph: You have required input port(s) unconnected". Does anyone know how to get around this issue?
Repro Steps:
Go to Azure ML -> Designer -> "Wide & Deep based Recommendation - Restaurant"
Train the model -> Create "Real-time inference pipeline" -> in Real-time inference pipeline, click "Submit" -> Error occurs
You can now register a trained model from Designer into the model registry and use the automatically generated score.py and conda YAML files to deploy it with the SDK or CLI. After successful model training, deployment consists of publishing to an inference cluster, authenticating to the AKS cluster, and consuming the endpoint.
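A minimal sketch of that SDK path (assuming the azureml-core v1 SDK, with hypothetical names for the model path, conda file, and AKS cluster):

from azureml.core import Workspace, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AksWebservice
from azureml.core.compute import AksCompute

ws = Workspace.from_config()

# Register the model trained in Designer (path and name are placeholders).
model = Model.register(workspace=ws,
                       model_path="trained_model_folder",
                       model_name="wide-deep-recommender")

# Reuse the score.py and conda YAML that Designer generates.
env = Environment.from_conda_specification(name="wide-deep-env",
                                           file_path="conda_env.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Deploy to an existing AKS inference cluster (placeholder cluster name).
aks_target = AksCompute(ws, "my-aks-cluster")
deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)
service = Model.deploy(ws, "wide-deep-endpoint", [model],
                       inference_config, deployment_config, aks_target)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)

This route sidesteps the Designer real-time inference pipeline, so the unconnected-port validation shouldn't come into play.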
I was facing the same issue. I just deleted the 'Evaluate Recommender' module at the end (the one with a yellow exclamation mark) and submitted again.
That fixed my problem.

How do I get past this error in the Firebase console when adding a new app? "There was an unknown error while processing the request. Try again."

I'm getting this error after I try to add a new app and click 'Register App':
"There was an unknown error while processing the request. Try again."
I've looked at these questions, which describe the same problem, but their solutions didn't work for me:
Can't create Firebase project - There was an unknown error while processing the request. Try again
GCM - Getting Error message "There was an unknown error while processing the request. Try again." when creating new project
Firebase: There was an unknown error while processing the request. Try again.
I'm not using multiple Google accounts. I've tried signing out and in again, tried changing the package name, and the Firebase status page doesn't show any outages.
I had the same problem. After a lot of research, I found that it was the OAuth 2.0 client ID limit. To resolve it, I deleted all the unused OAuth 2.0 client IDs from the Google Cloud console and tried again. It worked for me.
Steps to remove OAuth 2.0 client IDs:
Open 'https://console.cloud.google.com/'
Select your project from the top navbar
Select APIs & Services from Left Side panel
Select Credentials in the left panel
Go to OAuth 2.0 Client IDs
Select all unused entries (carefully!)
Click on delete on the top bar
Done. Go back to the Firebase console and try again.
It may solve your problem!
Have you tried changing your package name? I had the same issue, and it was resolved by changing the package name, since Firebase keeps deleted package names as well.
You might have reached the OAuth 2.0 client ID limit. There is a limit of around 30 client IDs that can be created within a single project.
Make sure you sign out of all your Google accounts and sign in with the account for your specific project. I'm normally signed in to 3 or 4 accounts and ran into this error. Doing as I described worked for me.
The operation has failed (Reason: [object ProgressEvent])
I was getting the above-mentioned error while creating a new project. It was resolved by logging out and logging in again.

Case Create Workflow Fails with SLA: PopulateBusinessClosureRequests Failed

I have a workflow that creates cases from emails. I also have a simple SLA that is associated with the cases.
Recently I have noticed a large number of the case creation jobs failing with an error like this:
Plugin Trace:
[Microsoft.Xrm.Sdk.Workflow: Microsoft.Xrm.Sdk.Workflow.Activities.CreateEntity]
[CreateStep6]
[Microsoft.Crm.Service.ObjectModel: Microsoft.Crm.Service.ObjectModel.CasePostCreatePlugin]
[ffde44b1-a685-4948-85b8-e4091bc2a98b: CasePostCreatePlugin]
[Microsoft.Crm.ObjectModel: Microsoft.Crm.Extensibility.InternalOperationPlugin]
[46f6cf4c-14ae-4f1e-98a1-eae99a37e95c: ExecuteWorkflowWithInputArguments]
Starting sync workflow 'Case Follow Up', Id: b5983018-1027-e611-80bb-0050568f6e8c
Entering ConditionStep1_step: Case Follow Up
Entering ConditionStep5_step: Check condition step - Applicable When- KPI1
Entering ConditionStep7_step:
Entering ConditionStep9_step:
Sync workflow 'Case Follow Up' terminated with error 'PopulateBusinessClosureRequests Failed'
Error Message:
Unhandled Exception: Microsoft.Xrm.Sdk.InvalidPluginExecutionException: PopulateBusinessClosureRequests Failed
We do have working hours that are linked to a holiday schedule.
I can't seem to find a cause for the "PopulateBusinessClosureRequests" failure.
This is CRM 2016 U0.1, on-premises.
We raised the issue with Microsoft, and it turns out there is an issue in the product.
At the time of this answer, they were working on a fix to be included in a service pack.
