I need help. I have this service definition:
service {
  name = "nginx"
  tags = [ "nginx", "web", "urlprefix-/nginx" ]
  port = "http"
  check {
    type = "tcp"
    interval = "10s"
    timeout = "2s"
  }
}
How can I add a health check for a specific URI that passes if it returns a 200 response, e.g. localhost:8080/test/index.html?
It is as simple as adding another check block configured as an http check, like so, where /health is served by your nginx on its published port:
service {
  name = "nginx"
  tags = [ "nginx", "web", "urlprefix-/nginx" ]
  port = "http"
  check {
    type = "tcp"
    interval = "10s"
    timeout = "2s"
  }
  check {
    type = "http"
    path = "/health"
    interval = "10s"
    timeout = "2s"
  }
}
You can see a full example here:
https://www.nomadproject.io/docs/job-specification/index.html
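An http check passes on any 2xx response, so to check the exact URI from your question you would simply point the check at that path instead of /health. A sketch, assuming /test/index.html is served on the port labelled "http" for this task:
check {
  type = "http"
  path = "/test/index.html"
  interval = "10s"
  timeout = "2s"
}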
I want to send a POST request with a SQL query to a Druid API. I used the requests package to make the request:
payload = {
    "query": "select count(1) from table1",
    "resultFormat": "array",
    "header": True,
    "typesHeader": True,
    "sqlTypesHeader": True,
    "context": {
        "somekeys": "somevalues"
    }
}
druidURL = "someurl:8888/durid/v2/sql"
x = requests.post(druidUrl,json=payload)
The only result I get is a 405 status code. When I inspect the network tab in Chrome while the POST request succeeds from the Druid console, the URL and payload are exactly the same, except there is an additional property called Remote Address.
You have a typo in the URL ("durid" should be "druid"):
import requests
payload = {
    "query": "select 1+1",
    "resultFormat": "array",
    "header": True,
    "typesHeader": True,
    "sqlTypesHeader": True,
    "context": {
        "somekeys": "somevalues"
    }
}
druidUrl = "http://localhost:8888/druid/v2/sql"
x = requests.post(druidUrl, json=payload)
print(x)
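If it still fails after the fix, printing the status code and response body usually narrows the cause down quickly. A small sketch (the host and port are assumptions; use whatever your Druid router actually listens on):
import requests

payload = {"query": "select 1+1", "resultFormat": "array", "header": True}
druidUrl = "http://localhost:8888/druid/v2/sql"  # adjust host/port for your setup

resp = requests.post(druidUrl, json=payload)
print(resp.status_code)  # 200 on success; 405 usually means the path or HTTP method is wrong
print(resp.text)         # the body often contains a more specific error message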
I would like to deploy a minimal k8s cluster on AWS with Terraform and install an Nginx Ingress Controller with Helm.
The Terraform code:
provider "aws" {
region = "us-east-1"
}
data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_id
}
variable "cluster_name" {
default = "my-cluster"
}
variable "instance_type" {
default = "t2.large"
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
version = "~> 1.11"
}
data "aws_availability_zones" "available" {
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.0.0"
name = "k8s-${var.cluster_name}-vpc"
cidr = "172.16.0.0/16"
azs = data.aws_availability_zones.available.names
private_subnets = ["172.16.1.0/24", "172.16.2.0/24", "172.16.3.0/24"]
public_subnets = ["172.16.4.0/24", "172.16.5.0/24", "172.16.6.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "12.2.0"
cluster_name = "eks-${var.cluster_name}"
cluster_version = "1.18"
subnets = module.vpc.private_subnets
vpc_id = module.vpc.vpc_id
worker_groups = [
{
name = "worker-group-1"
instance_type = "t3.small"
additional_userdata = "echo foo bar"
asg_desired_capacity = 2
},
{
name = "worker-group-2"
instance_type = "t3.small"
additional_userdata = "echo foo bar"
asg_desired_capacity = 1
},
]
write_kubeconfig = true
config_output_path = "./"
workers_additional_policies = [aws_iam_policy.worker_policy.arn]
}
resource "aws_iam_policy" "worker_policy" {
name = "worker-policy-${var.cluster_name}"
description = "Worker policy for the ALB Ingress"
policy = file("iam-policy.json")
}
The installation completes successfully:
helm install my-release nginx-stable/nginx-ingress
NAME: my-release
LAST DEPLOYED: Sat Jun 26 22:17:28 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NGINX Ingress Controller has been installed.
The kubectl describe service my-release-nginx-ingress returns:
Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
The VPC is created and the public subnets seem to be correctly tagged. What is missing to make the Ingress aware of the public subnets?
In the eks module you are prefixing the cluster name with eks-:
cluster_name = "eks-${var.cluster_name}"
However, you do not use that prefix in your subnet tags:
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
Drop the prefix from cluster_name and add it to the cluster name variable (assuming you want the prefix at all). Alternatively, you could add the prefix to your tags to fix the issue, but that approach makes it easier to introduce inconsistencies.
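For example, a sketch of the first option, where the prefix lives in the variable itself so the eks module and the subnet tags automatically agree (the variable value is an assumption):
variable "cluster_name" {
  default = "eks-my-cluster"
}

module "eks" {
  # ...
  cluster_name = var.cluster_name
}

# The existing subnet tags then already match the cluster name:
# "kubernetes.io/cluster/${var.cluster_name}" = "shared"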
Can someone please help me understand what this error means? I was configuring AWS Backup and got this error message. I have tried many things (IAM policies etc.), but no luck. Any assistance is much appreciated.
Error: Error getting Backup Vault: AccessDeniedException:
status code: 403, request id: 501c0713-0ce9-4879-93f6-1887322a38be
I ran into this issue using Terraform. I fixed it by adding the "backup-storage:MountCapsule" permission to the policy of the role I am using to create the resource. Here is a slightly edited policy and role configuration; hopefully it helps someone.
data "aws_iam_policy_document" "CloudFormationServicePolicy" {
statement {
sid = "AllResources"
effect = "Allow"
actions = [
"backup:*",
"backup-storage:MountCapsule",
...
]
resources = ["*"]
}
statement {
sid = "IAM"
effect = "Allow"
actions = ["iam:PassRole"]
resources = ["*"]
}
}
resource "aws_iam_policy" "CloudFormationServicePolicy" {
name = "${local.resource_name}-CloudFormationServicePolicy"
description = "policy for the IAM role "
path = "/${local.metadata["project"]}/${local.metadata["application"]}/"
policy = data.aws_iam_policy_document.CloudFormationServicePolicy.json
}
resource "aws_iam_role" "CloudFormationServiceRole" {
name = "${local.resource_name}-CloudFormationServiceRole"
description = "Allow cluster to manage node groups, fargate nodes and cloudwatch logs"
force_detach_policies = true
assume_role_policy = jsonencode({
"Version" : "2012-10-17",
"Statement" : [
{
"Action" : "sts:AssumeRole",
"Principal" : {
"Service" : ["cloudformation.amazonaws.com", "ecs-tasks.amazonaws.com"]
},
"Effect" : "Allow",
"Sid" : "TrustStatement"
},
{
"Effect" : "Allow",
"Principal" : {
"AWS" : "arn:aws:iam::xxxxxxx:role/OrganizationAdministratorRole"
},
"Action" : "sts:AssumeRole"
}
]
})
}
resource "aws_iam_role_policy_attachment" "CloudFormationService_task_role_policy_attachment" {
role = aws_iam_role.CloudFormationServiceRole.name
policy_arn = aws_iam_policy.CloudFormationServicePolicy.arn
}
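For illustration only, a minimal, hypothetical vault resource of the kind whose creation was returning the 403; the fix is the extra permission on the deploying role above, not anything in the vault itself:
# Hypothetical example resource; creation succeeds once the role running
# Terraform has backup-storage:MountCapsule in addition to backup:*.
resource "aws_backup_vault" "example" {
  name = "example-backup-vault"
}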
WordPress should be able to log in via IdentityServer4. We have entered the necessary data in WordPress (most recently with the plugin "OAuth Single Sign On - SSO (OAuth client)") and tried to contact the Identity Server; please see the attached picture with the settings of the WordPress plugin.
Unfortunately we always get the message:
Error: Sorry, there was an error : unauthorized_client.
The plugin sends the following request URL:
http://yyyyyy.com/connect/authorize?client_id=wp-internal&scope=openid%20email&redirect_uri=https://xxxxxx.com/wp-admin/admin-ajax.php?action=openid-connect-authorize&response_type=code&state=V29yZHByZXNzIENsaWVudCBJbnRlcm5hbA===
As far as we can see, the request is correct, and now we have no idea what else to try. Can you help, or do you have an idea?
The client configuration on IdentityServer4 is:
new Client
{
    ClientId = "wp-internal",
    ClientName = "Wordpress Client Internal",
    AllowedGrantTypes = GrantTypes.Code,
    AllowAccessTokensViaBrowser = false,
    RequireConsent = true,
    ClientSecrets =
    {
        new Secret("#dfmsgmwpgdidsa2019".Sha256())
    },
    RedirectUris = { "https://xxxxxx.com/wp-admin/admin-ajax.php?action=openid-connect-authorize" },
    PostLogoutRedirectUris = { "https://xxxxxx.com/" },
    FrontChannelLogoutUri = "https://xxxxxx.com/logout",
    LogoUri = "https://xxxxxx.com/logout",
    AllowedCorsOrigins = { "https://xxxxxx.com" },
    AllowedScopes =
    {
        IdentityServerConstants.StandardScopes.OpenId,
        IdentityServerConstants.StandardScopes.Profile,
        IdentityServerConstants.StandardScopes.Email
    },
    AllowOfflineAccess = true,
    AlwaysIncludeUserClaimsInIdToken = true
},
I want to add the security property to my node configuration using Gradle. I'm trying to do something like the below:
node {
    name "O=Bank_A,L=New York,C=US"
    p2pPort 10005
    rpcSettings {
        address("localhost:10006")
        adminAddress("localhost:10046")
    }
    h2Port 9005
    cordapps = [
        "$project.group:bank-abc:$project.version",
        "$project.group:shared-contracts-states:$project.version",
        "$corda_release_group:corda-finance:$corda_release_version"
    ]
    security = {
        authService = {
            dataSource = {
                type = "DB"
                passwordEncryption = "SHIRO_1_CRYPT"
                connection = {
                    jdbcUrl = "jdbc:h2:tcp://10.0.75.1:9014/node"
                    username = "some user"
                    password = "some pass"
                    driverClassName = "org.h2.Driver"
                }
            }
        }
    }
}
When I execute gradlew deployNodes, I get the below error:
What went wrong:
A problem occurred evaluating root project 'tbs-term-reciprocal-dapp'.
Could not set unknown property 'security' for object of type net.corda.plugins.Node.
In order to add security config, you need to use extraConfig within your node's Gradle script.
Taking your example, the extraConfig will look like this:
extraConfig = [
    security : [
        authService : [
            dataSource : [
                type: "DB",
                passwordEncryption: "SHIRO_1_CRYPT",
                connection : [
                    jdbcUrl: "jdbc:h2:tcp://10.0.75.1:9014/node",
                    username: "sa",
                    password: "",
                    driverClassName: "org.h2.Driver"
                ]
            ]
        ]
    ]
]
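For context, a sketch of how this sits inside the node block from your question (values copied from your original security block; the cordform plugin merges extraConfig into the node's generated node.conf):
node {
    name "O=Bank_A,L=New York,C=US"
    p2pPort 10005
    rpcSettings {
        address("localhost:10006")
        adminAddress("localhost:10046")
    }
    h2Port 9005
    cordapps = [
        "$project.group:bank-abc:$project.version",
        "$project.group:shared-contracts-states:$project.version",
        "$corda_release_group:corda-finance:$corda_release_version"
    ]
    // security is not a property of net.corda.plugins.Node, so it has to go
    // through extraConfig, which ends up in the node's node.conf
    extraConfig = [
        security : [
            authService : [
                dataSource : [
                    type: "DB",
                    passwordEncryption: "SHIRO_1_CRYPT",
                    connection : [
                        jdbcUrl: "jdbc:h2:tcp://10.0.75.1:9014/node",
                        username: "some user",
                        password: "some pass",
                        driverClassName: "org.h2.Driver"
                    ]
                ]
            ]
        ]
    ]
}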