Trigger argo workflow template via argo events based on webhook payload data key - argo-events

I'm looking for a way to trigger a workflow based on a key in the webhook JSON payload. For example, if the payload is {"action":"copy","id":"someid"}, I want the triggers to behave roughly like this:
triggers:
  template:
    - "group1"   # use this template if the action key is 'copy'
    - "group2"   # use this template if the action key is 'move'
I have created an example sensor file, but the Argo sensor is complaining that eventSourceName is used multiple times.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: testing-sensor
spec:
  template:
    serviceAccountName: argo
  dependencies:
    - name: data
      eventSourceName: testhook
      eventName: test
      filters:
        data:
          - path: body.message
            type: string
            value:
              - copy
    - name: data1
      eventSourceName: testhook
      eventName: test
      filters:
        data:
          - path: body.message
            type: string
            value:
              - move
  triggers:
    - template:
        conditions: "data"
        name: group-1
    - template:
        conditions: "data1"
        name: group-2
If someone could help, that would be nice.

You should use a data filter in your Sensor. (Example sensor.)
It would look something like this:
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: data-filter
spec:
  dependencies:
    - name: test-dep
      eventSourceName: webhook
      eventName: example
      filters:
        data:
          - path: body.action
            type: string
            value:
              - "copy"
  triggers:
    - template:
        name: data-workflow
        k8s:
          group: argoproj.io
          version: v1alpha1
          resource: workflows
          operation: create
          # ...
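
To route "copy" and "move" to two different workflow templates, as in the original question, each action value gets its own dependency with a data filter, and each trigger selects its dependency via conditions. The sketch below is only an outline built on assumptions, not a verified manifest: it presumes your Argo Events version accepts two dependencies sharing the same eventSourceName/eventName pair (older releases rejected exactly that, which matches the error you are seeing), and the k8s trigger bodies are placeholders you would replace with your own workflow resources:
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: testing-sensor
spec:
  template:
    serviceAccountName: argo
  dependencies:
    - name: copy-dep
      eventSourceName: testhook
      eventName: test
      filters:
        data:
          - path: body.action
            type: string
            value:
              - "copy"
    - name: move-dep
      eventSourceName: testhook
      eventName: test
      filters:
        data:
          - path: body.action
            type: string
            value:
              - "move"
  triggers:
    - template:
        name: group-1
        conditions: "copy-dep"      # fire only when the "copy" filter matched
        k8s:
          group: argoproj.io
          version: v1alpha1
          resource: workflows
          operation: create
          # source: { resource: ... }  # workflow for the "copy" case goes here
    - template:
        name: group-2
        conditions: "move-dep"      # fire only when the "move" filter matched
        k8s:
          group: argoproj.io
          version: v1alpha1
          resource: workflows
          operation: create
          # source: { resource: ... }  # workflow for the "move" case goes here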

Related

Ambassador Edge Stack JWT filter with Firebase token not working

I'm trying to verify a Firebase generated JWT token with an Ambassador Edge Stack (datawire/edge-stack version 3.3.0) Filter.
The Firebase token is generated using a login/password authentication mechanism on Firebase, something like this (in Python):
# "authentication", "auth" and "additional_claims" come from the poster's Firebase client setup (not shown)
email = input("Enter email: ")
password = input("Enter password: ")
user = authentication.sign_in_with_email_and_password(email, password)
custom_token = auth.create_custom_token(user["localId"], additional_claims)
print("JWT Token:")
print(custom_token)
After the token is generated, I use it with a curl command such as:
curl -H "Authorization: Bearer $TOKEN" https://ambassador-ip.nip.io/hello-world/
and the curl command returns the following error:
},
"message": "Token validation error: token is invalid: errorFlags=0x00000002=(ValidationErrorUnverifiable) wrappedError=(KeyID=\"50***redacted***1\": JWK not found)",
"status_code": 401
}
Here is the ambassador Filter I've declared:
apiVersion: getambassador.io/v2
kind: Filter
metadata:
  name: "firebase-filter"
  namespace: ${kubernetes_namespace.hello_world.metadata[0].name}
spec:
  JWT:
    jwksURI: "https://www.googleapis.com/service_accounts/v1/metadata/x509/securetoken@system.gserviceaccount.com"
    audience: "${local.project_id}"
    issuer: "https://securetoken.google.com/${local.project_id}"
And the policy filter applied to my backend:
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: "firebase-filter-policy"
  namespace: ${kubernetes_namespace.hello_world.metadata[0].name}
spec:
  rules:
    - host: "*"
      path: "/hello-world/"
      filters:
        - name: "firebase-filter"
          namespace: "${kubernetes_namespace.hello_world.metadata[0].name}"
For the record, the curl command with the same token works on a deployed hello-world Cloud Run with a GCP API gateway configured as follows:
swagger: '2.0'
info:
  title: Example Firebase auth Gateway
  description: API Gateway with firebase auth
  version: 1.0.0
schemes:
  - https
produces:
  - application/json
securityDefinitions:
  firebase:
    authorizationUrl: ''
    flow: implicit
    type: oauth2
    x-google-issuer: "https://securetoken.google.com/${project_id}"
    x-google-jwks_uri: "https://www.googleapis.com/service_accounts/v1/metadata/x509/securetoken@system.gserviceaccount.com"
    x-google-audiences: "${project_id}"
paths:
  /v1/hello:
    get:
      security:
        - firebase: []
      description: Hello
      operationId: hello
      responses:
        '200':
          description: Success
      x-google-backend:
        address: 'https://hello-redacted-ew.a.run.app'
Any idea why the Ambassador filter is misconfigured?
The Ambassador JWT Filter needs the jwksURI to point to the Firebase secure token service account's public keys (JWK), not the X509 certificates, so the Filter should be:
apiVersion: getambassador.io/v2
kind: Filter
metadata:
  name: "firebase-filter"
  namespace: ${kubernetes_namespace.hello_world.metadata[0].name}
spec:
  JWT:
    jwksURI: "https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com"
    audience: "${local.project_id}"
    issuer: "https://securetoken.google.com/${local.project_id}"
This works for Firebase tokens only. If you want to make it work with custom tokens, using some dedicated service account for example, you might need the jwksURI to point to that service account's public keys, something like:
apiVersion: getambassador.io/v2
kind: Filter
metadata:
  name: "firebase-custom-filter"
  namespace: ${kubernetes_namespace.hello_world.metadata[0].name}
spec:
  JWT:
    jwksURI: "https://www.googleapis.com/service_accounts/v1/jwk/${service_account}@${local.project_id}.iam.gserviceaccount.com"
    audience: "${local.project_id}"
    issuer: "https://securetoken.google.com/${local.project_id}"
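
If you switch to the custom-token Filter above, the FilterPolicy from the question also has to reference it by its new name. A minimal sketch, reusing the question's resources (the policy name firebase-custom-filter-policy is just a placeholder):
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: "firebase-custom-filter-policy"
  namespace: ${kubernetes_namespace.hello_world.metadata[0].name}
spec:
  rules:
    - host: "*"
      path: "/hello-world/"
      filters:
        - name: "firebase-custom-filter"
          namespace: "${kubernetes_namespace.hello_world.metadata[0].name}"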
The JWT Filter requires you to provide the URL for the .well-known/openid-configuration so that it can verify the signature of the token. I'm not familiar with Firebase, but looking at their docs it appears you can find this here:
https://firebase.google.com/docs/auth/web/openid-connect
For example, your Filter should be configured something like the following (I'm guessing on the jwksURI):
apiVersion: getambassador.io/v2
kind: Filter
metadata:
  name: "firebase-filter"
  namespace: ${kubernetes_namespace.hello_world.metadata[0].name}
spec:
  JWT:
    jwksURI: "https://securetoken.google.com/${local.project_id}/.well-known/openid-configuration"
    audience: "${local.project_id}"
    issuer: "https://securetoken.google.com/${local.project_id}"

How to configure kafka topics via Dapr?

As per the title, does anybody know how to configure Kafka topics using the Dapr pubsub component? Specifically, I would like to configure the retention.ms topic property.
Current pubsub.yaml:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    # Kafka broker connection setting
    - name: brokers
      value: "kafka:9092"
    - name: authRequired
      value: "false"
    - name: initialOffset
      value: "oldest"
    - name: maxMessageBytes
      value: 8192
Thanks in advance.

Datastore Index Not working for property that is JSON string

The following query in Datastore returns the expected number of results:
SELECT timestamp from leadgenie_campaign_model_dev where campaign = "3667f39d-a3ff-4acb-b1ca-6f730bbc7989"
This query is backed by an index, which allows for the projection.
But this one doesn't return any results, despite the fact that an index also exists (attrs is a JSON string):
SELECT timestamp, attrs from leadgenie_campaign_model_dev where campaign = "3667f39d-a3ff-4acb-b1ca-6f730bbc7989"
Here's the spec for the indexes:
indexes:
  - kind: leadgenie_campaign_model_dev
    properties:
      - name: campaign
      - name: attrs
      - name: timestamp
  - kind: leadgenie_campaign_model_dev
    properties:
      - name: campaign
      - name: timestamp
      - name: attrs
  - kind: leadgenie_campaign_model_dev
    properties:
      - name: campaign
      - name: timestamp
Is your attrs field more than 1,500 bytes by any chance? That is the limit for a string property to be indexed. See https://cloud.google.com/datastore/docs/concepts/limits.
The solution was to include the subproperties in the query, after creating an index for those properties specifically:
SELECT timestamp, attrs.score, attrs.sz from leadgenie_campaign_model_dev where campaign = "3667f39d-a3ff-4acb-b1ca-6f730bbc7989"
Running that query before the index existed failed with google.api_core.exceptions.FailedPrecondition: 400 no matching index found, and Datastore recommended the index to add:
- kind: leadgenie_campaign_model_dev
  properties:
    - name: campaign
    - name: attrs.score
    - name: attrs.sz
    - name: timestamp

istio internal GRPC services communication

I am having trouble getting two in-cluster gRPC services (written in .NET Core 3.0) to communicate. I get Grpc.Core.RpcException: Status(StatusCode=Unavailable, Detail="Connection reset by peer") with uri = <service>.default.svc.cluster.local, or Grpc.Core.RpcException: Status(StatusCode=Unimplemented, Detail="") with uri = user.default.svc.cluster.local:80. The weird part is that all the services work fine if they are communicating from different clusters. Am I using the right URLs? The configuration of one of the services is attached here.
apiVersion: v1
kind: Service
metadata:
  name: user
  labels:
    app: user
    service: user
spec:
  ports:
    - port: 80
      name: grpc-port
      protocol: TCP
  selector:
    app: user
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: user-v1
  labels:
    app: user
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: user
        version: v1
    spec:
      containers:
        - name: user
          image: ***
          imagePullPolicy: IfNotPresent
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: "***"
          ports:
            - containerPort: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: user
spec:
  hosts:
    - user.default.svc.cluster.local
  http:
    - route:
        - destination:
            port:
              number: 80
            host: user.default.svc.cluster.local
            subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: user
spec:
  host: user.default.svc.cluster.local
  subsets:
    - name: v1
      labels:
        version: v1
---
FIXED: I managed to get it to work by using gRPC's .NET Core 3 client factory integration as described here,
instead of creating a channel and client manually as one would usually do, i.e.:
// manual Grpc.Core channel and client ("client" is a field on the calling class)
var endpoint = "test"; // or http://test
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
Channel channel = new Channel(endpoint, ChannelCredentials.Insecure);
client = new TestService.TestServiceClient(channel);
I used the gRPC client factory integration in ConfigureServices (Startup.cs) like this, after adding the Grpc.Net.ClientFactory package, version 0.1.22-pre1:
services.AddGrpcClient<TestService.TestServiceClient>(o =>
{
    AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
    o.BaseAddress = new Uri("http://test");
});
Thereafter you can access the client by using Dependency Injection.
I'm not sure why the second approach works but the first one doesn't.

DynamoDB table not created in serverless.yml file

Getting the error cannot do operation on a non-existent table after running sls offline start and attempting to access the users endpoint. The serverless.yml file is as follows:
service:
  name: digital-secret
plugins:
  - serverless-dynamodb-local
  - serverless-offline # must be last in the list
custom:
  userTableName: 'users-table-${self:provider.stage}'
  dynamoDb:
    start:
      migrate: true
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-2
  iamRoleStatements:
    - Effect: Allow
      Action:
        - 'dynamodb:Query'
        - 'dynamodb:Scan'
        - 'dynamodb:GetItem'
        - 'dynamodb:PutItem'
        - 'dynamodb:UpdateItem'
        - 'dynamodb:DeleteItem'
      Resource:
        - { "Fn::GetAtt": ["usersTable", "Arn"] }
  environment:
    USERS_TABLE: ${self:custom.userTableName}
functions:
  app:
    handler: index.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
  user:
    handler: index.handler
    events:
      - http: 'GET /users/{proxy+}'
      - http: 'POST /users'
resources:
  Resources:
    usersTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        TableName: ${self:custom.userTableName}
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
Can anyone help point out what is wrong here? I've looked through the docs and at many different examples available online, but nothing I can see is different from the above.
The serverless-dynamodb-local docs say that the custom block should be structured like this:
custom:
  dynamodb:
    start:
      migrate: true
You have dynamoDb instead of dynamodb.
If anyone else is having issues with this: I spent hours trying to track down this issue, and it was because I inadvertently had the wrong case for the [r]esources section in serverless.yml:
resources:   # <-- needs to be a lower-case 'r'
  Resources:
    usersTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        ...
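
Putting the two answers together: for the serverless.yml in the question, the only key that actually needs to change is custom.dynamoDb (the top-level resources key is already lower-case there). A rough sketch of the corrected sections, with everything else left as posted:
custom:
  userTableName: 'users-table-${self:provider.stage}'
  dynamodb:        # lower-case, as the serverless-dynamodb-local docs expect
    start:
      migrate: true

resources:         # top-level key must be lower-case "resources"
  Resources:       # the nested CloudFormation key stays capitalised
    usersTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        TableName: ${self:custom.userTableName}
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1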
