Deploy Azure Form Recognizer Invoice in AKS as an on-premise container - azure-cognitive-services

I have the Invoice and Layout service Docker images, and there is a tutorial on how to deploy them using Docker Compose here.
I would like to deploy the same (Invoice & Layout together) in AKS as on-premise. I've tried multiple ways but couldn't get it working.
If anyone has tried deploying the above services in an on-premise/disconnected environment using Kubernetes, please help me out.

The layout container's hostname has to be exactly "azure-cognitive-service-layout" when deploying in AKS.
In the deployment file, make sure the environment variable below is exactly the same. If you change the hostname of the layout container (i.e., to the layout container's IP, localhost, or any other hostname), requests are not forwarded to the layout container and the Invoice container endpoints return a failed status, since Invoice depends on the Layout service (details here).
AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
In the deployment file, under spec, you can explicitly set the static hostname "azure-cognitive-service-layout".
After adding the static hostname to the deployment, I'm able to deploy both containers (Layout and Invoice) together and get it working as expected.
spec:
  hostname: azure-cognitive-service-layout
  containers:
    - name: az-form-recognizer-invoice-deployment
      image: "mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice:latest"


How to run ASP.NET Core with Angular template inside Docker container?

Steps
Create a new project using Visual Studio and ASP.NET Core with Angular project template. (Other settings: .NET 6.0, HTTPS enabled, no auth)
Add Container Orchestration Support (Other settings: Docker Compose, Linux)
Run the project using the Docker Compose config via VS
Results
The browser opens but just shows an HTTP 404 Not Found error.
Container logs do not really help.
{"EventId":60,"LogLevel":"Warning","Category":"Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository","Message":"Storing keys in a directory \u0027/root/.aspnet/DataProtection-Keys\u0027 that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.","State":{"Message":"Storing keys in a directory \u0027/root/.aspnet/DataProtection-Keys\u0027 that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.","path":"/root/.aspnet/DataProtection-Keys","{OriginalFormat}":"Storing keys in a directory \u0027{path}\u0027 that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed."}}
{"EventId":35,"LogLevel":"Warning","Category":"Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager","Message":"No XML encryptor configured. Key {834abb8d-55b6-4f29-b1a6-4aa8a6468f9e} may be persisted to storage in unencrypted form.","State":{"Message":"No XML encryptor configured. Key {834abb8d-55b6-4f29-b1a6-4aa8a6468f9e} may be persisted to storage in unencrypted form.","KeyId":"834abb8d-55b6-4f29-b1a6-4aa8a6468f9e","{OriginalFormat}":"No XML encryptor configured. Key {KeyId:B} may be persisted to storage in unencrypted form."}}
{"EventId":14,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Now listening on: https://[::]:443","State":{"Message":"Now listening on: https://[::]:443","address":"https://[::]:443","{OriginalFormat}":"Now listening on: {address}"}}
{"EventId":14,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Now listening on: http://[::]:80","State":{"Message":"Now listening on: http://[::]:80","address":"http://[::]:80","{OriginalFormat}":"Now listening on: {address}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Application started. Press Ctrl\u002BC to shut down.","State":{"Message":"Application started. Press Ctrl\u002BC to shut down.","{OriginalFormat}":"Application started. Press Ctrl\u002BC to shut down."}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Hosting environment: Development","State":{"Message":"Hosting environment: Development","envName":"Development","{OriginalFormat}":"Hosting environment: {envName}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Content root path: /app/","State":{"Message":"Content root path: /app/","contentRoot":"/app/","{OriginalFormat}":"Content root path: {contentRoot}"}}
I need this project to run in Docker. How can this be done?
I know that Node is not installed in the Docker container, but knowing that doesn't exactly help. How do I fix it?
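For context, a VS-generated Docker Compose setup for this kind of project is roughly shaped like the sketch below; the service name, file paths, port mappings and environment value are illustrative assumptions, not taken from the project above, though the container ports line up with the "Now listening on" log lines.
services:
  webapp:                           # illustrative service name
    image: webapp
    build:
      context: .
      dockerfile: WebApp/Dockerfile   # assumed project layout
    ports:
      - "8080:80"                   # HTTP, matches "Now listening on: http://[::]:80"
      - "8443:443"                  # HTTPS, matches "Now listening on: https://[::]:443"
    environment:
      - ASPNETCORE_ENVIRONMENT=Development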

DynamoDB table created by Terraform in LocalStack not visible in NoSQL Workbench

Summary: Code and configuration that are known to show up in NoSQL Workbench when using DynamoDB Local mysteriously don't work with LocalStack: the connection works, but the tables no longer show in NoSQL Workbench (though they continue to show up when using the aws-cli).
I created a table in DynamoDB Local running in Docker that worked in NoSQL Workbench. I wrote code to seed that database, and it all worked and showed up in NoSQL Workbench.
I switched to LocalStack (so I can interact with other AWS services locally). I was able to create a table with Terraform and can seed it with my code (using the configuration given here). Using the aws-cli, I can see the table, etc.
But inside NoSQL Workbench, I can't see the table I created and seeded when I connect to LocalStack. There are no connection errors; the table just isn't there. It doesn't seem related to the bugginess issue described here, as restarting the application did not help. I didn't change any AWS account settings like region, keys, etc.
If you don't want to change your region to localhost, there is another solution. From the LocalStack docs:
"DYNAMODB_SHARE_DB: When activated, DynamoDB will use a single database instead of separate databases for each credential and region."
e.g., add the variable to your docker-compose.yml:
...
  localstack:
    container_name: my_localstack
    image: "localstack/localstack:0.13.0.8"
    environment:
      - DYNAMODB_SHARE_DB=1
...
Summary: To use NoSQL Workbench with LocalStack, set the region to localhost in your code and Terraform config, and fix the resulting validation error (saying there isn't a localhost region) by setting skip_region_validation to true in the aws provider block in the Terraform config.
The problem comes down to a region mismatch:
NoSQL Workbench uses the localhost region.
When using DynamoDB Local, it appears the region is ignored, so this quirk is hidden (i.e. there is a mismatch between the region in the Terraform file and my code on the one hand and NoSQL Workbench on the other, but it doesn't matter with DynamoDB Local).
But with LocalStack the region is not ignored, so the problem popped up.
I wouldn't have written this up except for one more quirk that took a while to figure out. When I updated the Terraform configuration thus:
provider "aws" {
access_key = "mock_access_key"
// For compatibility with NoSQL workbench local connections
region = "localhost"
I started getting this error when running terraform apply:
╷
│ Error: Invalid AWS Region: localhost
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 1, in provider "aws":
│ 1: provider "aws" {
│
╵
I dug around a bit and found this issue in the AWS provider repo for Terraform, which explains that you should do this:
provider "aws" {
access_key = "mock_access_key"
// For compatibility with NoSQL workbench local connections
region = "localhost"
skip_region_validation = true

FOSElasticaBundle not populating searchableDocuments in AWS ElasticSearch

I am trying to get FOSElasticaBundle working with AWS Elasticsearch. At the moment I have my development environment all set up and working perfectly, using a Docker container for Elasticsearch built from
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
If I populate my ElasticSearch using:
docker-compose exec php php /var/www/symfony/bin/console fos:elastica:populate --env=prod
this all works perfectly and the index has searchable items in it.
However moving this to AWS is throwing up an issue.
I have set up an Elasticsearch service (v6.2) within AWS using their VPC option. I am able to connect to it (I know it does connect, as I had connection errors until I used this in the config):
fos_elastica:
    clients:
        default:
            transport: 'AwsAuthV4'
            aws_access_key_id: '%amazon.s3.key%'
            aws_secret_access_key: '%amazon.s3.secret%'
            aws_region: '%amazon.s3.region%'
When I run
php bin/console fos:elastica:populate --env=prod it looks like it is populating
3200/6865 [=============>--------------] 46% 4 secs/9 secs
Populating ppc/keywords
Refreshing ppc
But once complete, my Amazon console shows 0 searchableDocuments, and if I run a query I get nothing back.
Has anyone come across this, and any idea how to solve it? Even being able to get more feedback from populate would help me work out where it is going wrong.
Edit 17:29 31/5
So I created an Elasticsearch install in a Docker container on a standard EC2 instance and pointed at that, and it indexes perfectly, so it is something to do with the connection to the AWS service. One of the differences between them is that the Docker install doesn't have to use:
transport: 'AwsAuthV4'
aws_access_key_id: '%amazon.s3.key%'
aws_secret_access_key: '%amazon.s3.secret%'
aws_region: '%amazon.s3.region%'
I presume then it's something to do with this; I would have thought, though, that if it wasn't authorised I would get an error. Although it's working currently, I would prefer to use the Amazon service just so it takes an install out of my life to keep an eye on!
I had the same problem, but without using access_key.
The solution was adding the key transport with the value https to the client config:
fos_elastica:
    clients:
        default:
            host: vpc-xxxxxxxxxxxxxxxxxxxxxxxxx.es.amazonaws.com
            port: 443
            transport: https
My problem was empty aws_access_key_id and aws_secret_access_key values.
Please check that.
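Putting the two answers together with the original configuration, here is a hedged sketch of what the AwsAuthV4 setup might look like once the VPC endpoint host and HTTPS port are set explicitly. The hostname is a placeholder and the keys are the ones already shown above, not verified against a particular bundle version.
fos_elastica:
    clients:
        default:
            host: vpc-xxxxxxxxxxxxxxxxxxxxxxxxx.es.amazonaws.com   # placeholder VPC endpoint
            port: 443
            transport: 'AwsAuthV4'
            aws_access_key_id: '%amazon.s3.key%'          # must not resolve to an empty value
            aws_secret_access_key: '%amazon.s3.secret%'   # must not resolve to an empty value
            aws_region: '%amazon.s3.region%'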

How to automatically scrape all Docker instances from Kubernetes with Prometheus?

I've successfully installed Prometheus in Google Container Engine and I have these targets up:
kubernetes-apiservers
kubernetes-cadvisor
kubernetes-nodes
Now I would like to scrape Nginx stats from each of the Docker containers inside this Kubernetes cluster (which seems like a sensible thing to do).
But how can I make Prometheus automatically pull the metrics from all the Nginx instances running in all of the Docker containers?
From my research so far, the answer involves kubernetes_sd_config but I simply could not find enough documentation on how to put the pieces together.
Thank you!
Edit: This is not about exposing the Nginx stats. This is just about scraping any stats that are exposed by all Docker containers.
You are correct that you need to use the kubernetes_sd_config directive. Before continuing, let me just say that what you should be asking is "How do I automatically scrape all pods from Kubernetes?" This is because a pod is considered the lowest unit of scale in Kubernetes. Regardless, it is clear what you are trying to do.
So kubernetes_sd_config can be used to discover all pods with a given label, like so:
- job_name: 'some-app'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_label_app]
      regex: python-app
      action: keep
The source label [__meta_kubernetes_pod_label_app] uses the Kubernetes API to look at pods that have a label named 'app', and the keep action retains only those whose label value matches the regex given on the line below (in this case, 'python-app').
Hope that helps. You can follow the blog post here for more detail. Also, for more information about kubernetes_sd_config, check out the docs here.
Note: it is worth mentioning that kubernetes_sd_config is still in beta. Thus breaking changes to configuration may occur in future releases.
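If the goal is to scrape every pod automatically rather than one labelled app, a common pattern (not part of the answer above, and worth checking against the Prometheus example configuration for your version) is to key discovery off prometheus.io/* pod annotations, roughly like this:
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Keep only pods that opt in with the annotation prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # Let a pod override the metrics path with prometheus.io/path
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # Let a pod override the scrape port with prometheus.io/port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
Each Nginx pod would then need something that actually exposes metrics (for example an nginx exporter sidecar) plus the matching annotations for Prometheus to pick it up.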

Deploying a Meteor app with Distelli

I've gotten pretty far into a deployment of my Meteor application on Distelli. Like, almost there. I've done everything as far as setting up the EC2 box, creating a user group [which didn't even seem necessary as I was able to SSH into the box with full rights without specifying my machine's IP], creating an elastic IP, successful build, and deployment to that box. But, I can't seem to check if Meteor is actually running (note: when I ssh in, there are active instances of Mongo and Node, so SOMETHING is running).
The problem has something to do with associating the elastic IP with my ROOT_URL and domain. I'm just not sure what to do at this step and can't seem to find any directions that are Meteor specific. Been using these guides:
https://www.distelli.com/docs/tutorials/how-to-set-up-aws-ec2
https://www.distelli.com/docs/tutorials/deploying-meteor-applications
http://gregblogs.com/tlt-associate-a-namecheap-domain-with-an-amazon-ec2-instance/
Recap: Distelli deployment is a success, but I get the following error just before finishing:
Error: $ROOT_URL, if specified, must be an URL
I've set my ROOT_URL to my domain and associated it according to the previous guide. I can run traceroute on the IP, but nothing seems to answer on port 3000, so my inclination is that the Meteor build is silently failing.
My manifest: https://gist.github.com/newswim/c642bd9a1cf136da73c3
I've noticed that when I point the CNAME record to my ec2 public DNS, NameCheap (aptly named) adds a . to the end of the record. Beyond that, I'm pretty much stumped.
