CSRF Token with the “SameSite” attribute value “Lax” or “Strict” was omitted because of a cross-site redirect - fetch

I am using Kratos in my local environment just fine; it works great and does what I want it to do. The main issue I am facing started when I moved Ory Kratos to a droplet and my frontend to Vercel. On the droplet I have set up Kratos behind nginx as shown here. After setting it up I tested my login flow and got these errors when I requested /self-service/registration/flows?id=${flowId}.
Thinking I had set up my config incorrectly, I double-checked it. It is set up as seen here:
version: v0.10.1
dsn: postgres://kratos:MYPASSWORD/kratos?sslmode=disable&max_conns=20&max_idle_conns=4

serve:
  public:
    base_url: http://127.0.0.1:4433/
    host: 127.0.0.1
    cors:
      allow_credentials: true
      allowed_origins:
        - https://frontend.example.com
      allowed_methods:
        - POST
        - GET
        - PUT
        - PATCH
        - DELETE
      allowed_headers:
        - Authorization
        - Cookie
        - Content-Type
      exposed_headers:
        - Content-Type
        - Set-Cookie
      enabled: true
  admin:
    base_url: http://kratos:4434/
    host: 127.0.0.1

selfservice:
  default_browser_return_url: https://frontend.example.com
  allowed_return_urls:
    - https://frontend.example.com

  methods:
    password:
      enabled: true
    oidc:
      enabled: false

  flows:
    error:
      ui_url: https://frontend.example.com/error

    settings:
      ui_url: https://frontend.example.com/settings
      privileged_session_max_age: 15m

    recovery:
      enabled: true
      ui_url: https://frontend.example.com/recovery

    verification:
      enabled: true
      ui_url: https://frontend.example.com/verification
      after:
        default_browser_return_url: https://frontend.example.com/

    logout:
      after:
        default_browser_return_url: https://frontend.example.com/

    login:
      ui_url: https://frontend.example.com/login
      lifespan: 10m

    registration:
      lifespan: 10m
      ui_url: https://frontend.example.com/registration
      after:
        password:
          hooks:
            - hook: session
        oidc:
          hooks:
            - hook: session

log:
  leak_sensitive_values: false

# set in SECRETS_COOKIE and SECRETS_DEFAULT env variables
secrets:
  default:
    - 795135465767325463454 # fake
  cookie:
    - 223108c7839f6324242342 # fake

cookies:
  domain: frontend.example.com
  path: /
  same_site: Lax

session:
  lifespan: 72h
  cookie:
    domain: frontend.example.com
    path: /
    same_site: Lax

hashers:
  argon2:
    parallelism: 1
    memory: 128MB
    iterations: 2
    salt_length: 16
    key_length: 16

identity:
  default_schema_id: default
  schemas:
    - id: default
      url: file:///root/kratos/config/identity.schema.json

courier:
  smtp:
    connection_uri: smtps://test:test@mailslurper:1025/?skip_ssl_verify=true
The issue still persisted, so I checked my code:
const req: any = await fetch(`https://frontend.example.com/self-service/${flow}/flows?id=${flowId}`, {
  method: 'GET',
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json',
  },
  credentials: 'include',
}).catch((err) => {
  throw err;
}); // error
Everything seems fine: credentials are included and the Content-Type is correct. Doing some research I found https://github.com/ory/kratos/issues/662 and I was wondering whether this affects my situation (my frontend domain https://frontend.example.com and my Kratos droplet https://kratos.example.com are examples), and if so, how I could go about fixing it. Could I use the Ory Proxy to get cookies to my application safely? My idea was that I could simply set up Ory Hydra as the OAuth provider for my platform, but I am not sure.
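For reference, what I gather from that issue is that the workaround is to serve Kratos and the UI under the same registrable domain (which I already do: kratos.example.com and frontend.example.com) and to scope the cookies to the shared parent domain instead of the frontend host, so the browser no longer treats the flow requests as cross-site. A rough sketch of what I think those parts of my config would need to look like (the hostnames are just my examples, not taken from the docs):

serve:
  public:
    # public Kratos URL on the same registrable domain as the UI
    base_url: https://kratos.example.com/

cookies:
  # parent domain shared by frontend.example.com and kratos.example.com
  domain: example.com
  path: /
  same_site: Lax

session:
  cookie:
    domain: example.com
    path: /
    same_site: Lax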
Thanks in advance!

Related

disabling discovery for k8s api client

Right now I am using the approach from the first answer to:
Cannot read configmap with name: [xx] in namespace ['default'] Ignoring
But in the application logs I see:
2022-04-19 14:14:57.660 [controller-reflector-io.kubernetes.client.openapi.models.V1ConfigMap-1] [] INFO i.k.c.informer.cache.ReflectorRunnable - class io.kubernetes.client.openapi.models.V1ConfigMap#Start listing and watching...
2022-04-19 14:14:57.662 [controller-reflector-io.kubernetes.client.openapi.models.V1ConfigMap-1] [] ERROR i.k.c.informer.cache.ReflectorRunnable - class io.kubernetes.client.openapi.models.V1ConfigMap#Reflector loop failed unexpectedly
io.kubernetes.client.openapi.ApiException:
at io.kubernetes.client.openapi.ApiClient.handleResponse(ApiClient.java:974)
at io.kubernetes.client.openapi.ApiClient.execute(ApiClient.java:886)
at io.kubernetes.client.informer.SharedInformerFactory$1.list(SharedInformerFactory.java:207)
at io.kubernetes.client.informer.cache.ReflectorRunnable.run(ReflectorRunnable.java:88)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Despite that, it works properly and reads the configmaps mounted in the deployment.
How can I disable all the other features?
Using implementation("org.springframework.cloud:spring-cloud-starter-kubernetes-client-config:2.1.1")
In bootstrap.yml:
spring:
application:
name: toast
cloud:
vault:
enabled: false
kubernetes:
reload:
enabled: true
mode: event
strategy: refresh
config:
sources:
- name: ${spring.application.name}-common
- name: ${spring.application.name}
enabled: true
paths:
#- { { .Values.application } }-common-config/data.yml
#- { { .Values.application } }-config/application.yml
- /etc/${spring.application.name}-common/config/application.yml
- /etc/${spring.application.name}/config/data.yml
enabled: true
I need to be able to use it without an RBAC resource in k8s.
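For reference, what I am considering is switching off the per-feature toggles and keeping only the mounted-file config. A sketch of the bootstrap.yml I have in mind (whether every spring.cloud.kubernetes.* switch below exists in this exact starter version is something I still need to verify):

spring:
  cloud:
    kubernetes:
      discovery:
        enabled: false      # no service discovery through the API server
      loadbalancer:
        enabled: false      # no client-side load balancing through the API server
      secrets:
        enabled: false      # do not read Secrets
      reload:
        enabled: false      # the reload watcher lists/watches configmaps and is what needs RBAC
      config:
        enabled: true       # keep reading the config files mounted via paths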

What is wrong with my SuluSyliusConsumerBundle configuration?

I have a Sulu and a Sylius application which I am trying to integrate using the SuluSyliusBundle and RabbitMQ. I have added the following configs...
messenger.yaml
framework:
    router:
        resource: "%kernel.project_dir%/config/routing_%sulu.context%.yml"
    messenger:
        # Uncomment this (and the failed transport below) to send failed messages to this transport for later handling.
        # failure_transport: failed
        buses:
            message_bus:
                middleware:
                    - doctrine_transaction
                    - event_middleware
        # Uncomment this (and the failed transport below) to send failed messages to this transport for later handling.
        # failure_transport: failed
        transports:
            # https://symfony.com/doc/current/messenger.html#transport-configuration
            # async: '%env(MESSENGER_TRANSPORT_DSN)%'
            # failed: 'doctrine://default?queue_name=failed'
            # sync: 'sync://'
        routing:
            # Route your messages to the transports
            # 'App\Message\YourMessage': async
sulu_sylius_consumer.yaml
sylius_base_url: 'http://127.0.0.1:8001'
sylius_default_channel: 'default_channel'
sylius_oauth_config:
    username: api
    password: api
    client_id: demo_client
    client_secret: secret_demo_client
route_defaults_fallback:
    view: 'templates/products/default'
firewall_provider_key: 'main'
The issue is that once I install the bundle and add these two configs, I get the following error.
I believe this is being caused by the messenger config. I have added the routes to routes_admin.yaml as shown in the documentation here:
routes_admin.yaml
app_events:
    type: rest
    resource: App\Controller\Admin\EventController
    prefix: /admin/api
    name_prefix: app.
    options:
        expose: true

app_event_registrations:
    type: rest
    resource: App\Controller\Admin\EventRegistrationController
    prefix: /admin/api
    name_prefix: app.
    options:
        expose: true

app_locations:
    type: rest
    resource: App\Controller\Admin\LocationController
    prefix: /admin/api
    name_prefix: app.
    options:
        expose: true

app_products:
    type: rest
    resource: App\Controller\Admin\ProductsController
    prefix: /admin/api
    name_prefix: app.
    options:
        expose: true

sulu_route:
    mappings:
        Sulu\Bundle\SyliusConsumerBundle\Model\RoutableResource\RoutableResource:
            generator: schema
            resource_key: product_content
            options:
                route_schema: "/products/{object.getCode()}"
routes_website.yaml
app.event:
    path: /{_locale}/event/{id}
    controller: App\Controller\Website\EventWebsiteController::indexAction

app.product:
    path: /{_locale}/product/{id}
    controller: App\Controller\Website\ProductWebsiteController::indexAction
What can I do to fix this error?
[edit: added routes_admin and routes_website yaml]
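Since the error message itself is not reproduced above, only a guess: in the messenger block the buses and routing are defined, but no transport is actually uncommented. A sketch of what I think a messenger.yaml with an explicit transport and a routing entry would look like (the AMQP DSN and the message class are placeholders I made up, not taken from the bundle's docs):

framework:
    messenger:
        buses:
            message_bus:
                middleware:
                    - doctrine_transaction
                    - event_middleware
        transports:
            # e.g. MESSENGER_TRANSPORT_DSN=amqp://guest:guest@localhost:5672/%2f/messages
            async: '%env(MESSENGER_TRANSPORT_DSN)%'
        routing:
            # placeholder message class; route the bundle's messages to the transport you want
            'App\Message\YourMessage': async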

Enable CORS for Cloud Run with Cloud Endpoints v1

I have been following an article on Medium to deploy Cloud Endpoints v1 in front of a Cloud Run service hosting a REST API and everything works well.
I now have a requirement to enable CORS support. I've added the configuration below to my Endpoints YAML file, but I get an error saying "This service does not allow CORS traffic" when my browser makes a pre-flight request (I've tested this with Postman too, with the same error). I know there's a flag to enable CORS, --cors_preset=basic, which is passed via environment variables, but I'm not sure which key to set it with. Any ideas or help is appreciated.
Endpoints YAML snippet:
swagger: '2.0'
info:
  title: Cloud Endpoints with Cloud Run
  description: Testing Cloud Endpoints with Cloud Run
  version: 1.0.0
host: endpoint-<hash>-uc.a.run.app
x-google-endpoints:
  - name: endpoint-<hash>-uc.a.run.app
    allowCors: true
schemes:
  - https
produces:
  - application/json
Error:
{
  "code": 7,
  "message": "The service does not allow CORS traffic.",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "service_control"
    }
  ]
}
PS: Thanks Guillaum Blaquiere for the awesome article.
UPDATE:
I ended up testing with an incomplete URL, and hence received the above error because my backend service wasn't configured to respond to all pre-flight request URLs. Having fixed this, I now get the error below, but only on the URL configured for the CORS pre-flight.
{
  "code": 13,
  "message": "INTERNAL_SERVER_ERROR",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
and logs:
invalid URL prefix in "", client: <CLIENT_IP>, server: , request: "OPTIONS /api/v1/<REMAINING_URL> HTTP/1.1", host: "endpoint-<HASH>-uc.a.run.app"
I would say it's necessary to add the ESPv2 config. I've noticed that the note regarding the ESPv2 config was only added last April, while the Medium article was published in 2019, so I think this required step was simply not mentioned back then.
Later in the same section it's mentioned that the CORS flags are passed via the "--set-env-vars" flag of the deploy command.
You can find more about the ESPv2 Beta startup options here.
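As an illustration only, if you deploy the ESPv2 container from a service manifest instead of passing gcloud flags, the same environment variable could be expressed roughly like this (the service name and image tag are placeholders, and you should double-check the ESPv2_ARGS variable name and the --cors_preset flag against the ESPv2 startup options page linked above):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: endpoint                 # placeholder Cloud Run service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/endpoints-release/endpoints-runtime-serverless:2   # ESPv2 serverless image (tag is a placeholder)
          env:
            - name: ESPv2_ARGS   # extra startup flags picked up by ESPv2
              value: "--cors_preset=basic"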
I managed to resolve the issue by defining, for each path I had already defined, an OPTIONS operation with no security in my YAML file. See the example YAML file below for an endpoint path '/api/v1/hello' with GET and OPTIONS operations defined.
swagger: '2.0'
info:
  title: Cloud Endpoints with Cloud Run
  description: Testing Cloud Endpoints with Cloud Run
  version: 1.0.0
host: endpoint-randomhash-uc.a.run.app
x-google-endpoints:
  - name: endpoint-randomhash-uc.a.run.app
    allowCors: true
schemes:
  - https
produces:
  - application/json
x-google-backend:
  address: https://backend-randomhash-uc.a.run.app
  path_translation: APPEND_PATH_TO_ADDRESS
security:
  - auth0_jwk: []
paths:
  /api/v1/hello:
    get:
      summary: Say hello
      operationId: helloName
      parameters:
        - name: "name"
          in: "query"
          description: "Your name"
          type: "string"
      responses:
        '200':
          description: Successful operation
          schema:
            type: string
    options:
      summary: CORS pre-flight for say hello
      operationId: helloNameOptions
      parameters:
        - name: "name"
          in: "query"
          description: "Your name"
          type: "string"
      responses:
        '200':
          description: Successful operation
          schema:
            type: string
      security: []
securityDefinitions:
  auth0_jwk:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    x-google-issuer: "https://project.auth0.com/"
    x-google-jwks_uri: "https://project.auth0.com/.well-known/jwks.json"
    x-google-audiences: "firebase-application-host"
As Sergio pointed out in his comment to a SO question, the other option in my case is to use a Firebase Hosting proxy so that everything is served from the same domain and CORS is avoided.

Nonexistent parameter in Swiftmailer

In an attempt to configure both a memory & a spool mailer in a Symfony 4.3 application I followed the docs to create this configuration:
swiftmailer:
    default_mailer: memory
    mailers:
        memory:
            sender_address: 'admin@bogus.info'
            transport: smtp
            username: admin@bogus.info
            password: 123Abcd
            host: localhost
            spool: { type: 'memory' }
        spooler:
            sender_address: 'admin@bogus.info'
            transport: smtp
            username: admin@bogus.info
            password: 123Abcd
            host: localhost
            spool:
                type: file
                path: '%kernel.project_dir%/var/spool'
And in services.yaml:
App\Services\Emailer:
    $spoolMailer: '%swiftmailer.mailer.spooler%'
    $defaultMailer: '%swiftmailer.default_mailer%'
    $senderAddress: '%swiftmailer.mailer.memory_mailer.sender_address%'
    $projectDir: '%kernel.project_dir%'
But with those four parameters in the service definition, the following occurs when running php bin/console debug:container:
The service "App\Services\Emailer" has a dependency on a non-existent
parameter "swiftmailer.mailer.spooler"...
Why does this configuration not work?
The service "App\Services\Emailer" has a dependency on a non-existent parameter "swiftmailer.mailer.spooler"...
Surrounding parameters with the % symbol allows you to pass values to your services.
As you want to inject a service, you should prefix your parameter with the # symbol.
Also, to get the default mailer service, you have to inject #swiftmailer.mailer
EDIT: Proper way to retrieve the sender address: %swiftmailer.mailer.memory.sender_address%
Updated service definition:
App\Services\Emailer:
    $spoolMailer: '@swiftmailer.mailer.spooler'
    $defaultMailer: '@swiftmailer.mailer'
    $senderAddress: '%swiftmailer.mailer.memory.sender_address%'
    $projectDir: '%kernel.project_dir%'

Redis with Symfony2 causes problems between sites on my server

I'm using the snc-redis bundle with Symfony2 for caching.
On my server, Redis is installed and working correctly.
My problem is that when I try to clear or flush a Redis db, all sites on my server that use Redis crash, giving an internal server error because of the prod environment.
I've tried to change the Redis configuration ports in config.yml for every single site on my server, but I don't think it worked.
My sample snc-redis configuration:
snc_redis:
    clients:
        default:
            type: predis
            alias: default
            dsn: redis://localhost
            logging: %kernel.debug%
        cache:
            type: predis
            alias: cache
            dsn: redis://localhost/1
            logging: true
        cluster:
            type: predis
            alias: cluster
            dsn:
                - redis://127.0.0.1/5
                - redis://127.0.0.2/6
                - redis://pw@/var/run/redis/redis-1.sock/7
                - redis://127.0.0.1:6379/8
            options:
                profile: 2.4
                connection_timeout: 10
                connection_persistent: true
                read_write_timeout: 30
                iterable_multibulk: false
                throw_errors: true
                cluster: Snc\RedisBundle\Client\Predis\Connection\PredisCluster
        monolog:
            type: predis
            alias: monolog
            dsn: redis://localhost/1
            logging: false
            options:
                connection_persistent: true
    session:
        client: default
        prefix: foo
        use_as_default: true
    doctrine:
        metadata_cache:
            client: cache
            entity_manager: default
            document_manager: default
        result_cache:
            client: cache
            entity_manager: [default, read]
            document_manager: [default, slave1, slave2]
            namespace: "dcrc:"
        query_cache:
            client: cache
            entity_manager: default
    monolog:
        client: monolog
        key: monolog
    swiftmailer:
        client: default
        key: swiftmailer
monolog:
    handlers:
        main:
            type: service
            id: monolog.handler.redis
            level: debug
What am I doing wrong? How can I get it to work correctly without crashing?
My Redis bundle for Symfony2:
Snc\RedisBundle\SncRedisBundle()
https://github.com/snc/SncRedisBundle
You can define a prefix for each site like this:
snc_redis:
    clients:
        default:
            dsn: "redis://localhost:6379"
            options:
                prefix: "site_name"
            type: phpredis
            alias: default
            logging: %kernel.debug%
Note: remember to set this prefix on all of your clients ;)
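To illustrate, a second site on the same server would just use a different prefix (and, optionally, a different database index); the site name and the /2 database below are placeholders:

# config.yml of another site on the same server
snc_redis:
    clients:
        default:
            dsn: "redis://localhost:6379/2"   # optional: a separate database index per site
            options:
                prefix: "other_site"          # placeholder prefix unique to this site
            type: phpredis
            alias: default
            logging: %kernel.debug%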
Did you try to change the client alias for every site?
