I have a Sulu and a Sylius application which I am trying to integrate using the SuluSyliusBundle and RabbitMQ. I have added the following configs...
messenger.yaml
framework:
    router:
        resource: "%kernel.project_dir%/config/routing_%sulu.context%.yml"
    messenger:
        # Uncomment this (and the failed transport below) to send failed messages to this transport for later handling.
        # failure_transport: failed
        buses:
            message_bus:
                middleware:
                    - doctrine_transaction
                    - event_middleware
        transports:
            # https://symfony.com/doc/current/messenger.html#transport-configuration
            # async: '%env(MESSENGER_TRANSPORT_DSN)%'
            # failed: 'doctrine://default?queue_name=failed'
            # sync: 'sync://'
        routing:
            # Route your messages to the transports
            # 'App\Message\YourMessage': async
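For completeness, this is roughly how I expect the transport section to look once RabbitMQ is actually wired in (only a sketch; the DSN value and the routed message class are placeholders, not the bundle's real message names):

framework:
    messenger:
        transports:
            # assuming MESSENGER_TRANSPORT_DSN=amqp://guest:guest@localhost:5672/%2f/messages in .env
            async: '%env(MESSENGER_TRANSPORT_DSN)%'
            failed: 'doctrine://default?queue_name=failed'
        routing:
            # 'App\Message\YourMessage' is just the recipe placeholder
            'App\Message\YourMessage': async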
sulu_sylius_consumer.yaml
sylius_base_url: 'http://127.0.0.1:8001'
sylius_default_channel: 'default_channel'
sylius_oauth_config:
    username: api
    password: api
    client_id: demo_client
    client_secret: secret_demo_client
route_defaults_fallback:
    view: 'templates/products/default'
firewall_provider_key: 'main'
The issue is that once I install the bundle and add these two configs, I get the following error.
I believe this is being caused by the messenger config. I have added the routes to routes_admin.yaml as shown in the documentation here:
routes_admin.yaml
app_events:
    type: rest
    resource: App\Controller\Admin\EventController
    prefix: /admin/api
    name_prefix: app.
    options:
        expose: true

app_event_registrations:
    type: rest
    resource: App\Controller\Admin\EventRegistrationController
    prefix: /admin/api
    name_prefix: app.
    options:
        expose: true

app_locations:
    type: rest
    resource: App\Controller\Admin\LocationController
    prefix: /admin/api
    name_prefix: app.
    options:
        expose: true

app_products:
    type: rest
    resource: App\Controller\Admin\ProductsController
    prefix: /admin/api
    name_prefix: app.
    options:
        expose: true

sulu_route:
    mappings:
        Sulu\Bundle\SyliusConsumerBundle\Model\RoutableResource\RoutableResource:
            generator: schema
            resource_key: product_content
            options:
                route_schema: "/products/{object.getCode()}"
routes_website.yaml
app.event:
    path: /{_locale}/event/{id}
    controller: App\Controller\Website\EventWebsiteController::indexAction

app.product:
    path: /{_locale}/product/{id}
    controller: App\Controller\Website\ProductWebsiteController::indexAction
What can I do to fix this error?
[edit: added routes_admin and routes_website yaml]
I am using Kratos in my local environment just fine; it works great and does what I want it to do. The main issue I am facing started when I moved my Ory Kratos instance to a droplet and my frontend to Vercel. On the droplet I have set up Kratos with nginx as shown here. After setting it up I tested my login flow and got these errors when I requested /self-service/registration/flows?id=${flowId}
Thinking I had set up my config incorrectly, I set it up as seen here:
version: v0.10.1

dsn: postgres://kratos:MYPASSWORD/kratos?sslmode=disable&max_conns=20&max_idle_conns=4

serve:
  public:
    base_url: http://127.0.0.1:4433/
    host: 127.0.0.1
    cors:
      allow_credentials: true
      allowed_origins:
        - https://frontend.example.com
      allowed_methods:
        - POST
        - GET
        - PUT
        - PATCH
        - DELETE
      allowed_headers:
        - Authorization
        - Cookie
        - Content-Type
      exposed_headers:
        - Content-Type
        - Set-Cookie
      enabled: true
  admin:
    base_url: http://kratos:4434/
    host: 127.0.0.1

selfservice:
  default_browser_return_url: https://frontend.example.com
  allowed_return_urls:
    - https://frontend.example.com

  methods:
    password:
      enabled: true
    oidc:
      enabled: false

  flows:
    error:
      ui_url: https://frontend.example.com/error

    settings:
      ui_url: https://frontend.example.com/settings
      privileged_session_max_age: 15m

    recovery:
      enabled: true
      ui_url: https://frontend.example.com/recovery

    verification:
      enabled: true
      ui_url: https://frontend.example.com/verification
      after:
        default_browser_return_url: https://frontend.example.com/

    logout:
      after:
        default_browser_return_url: https://frontend.example.com/

    login:
      ui_url: https://frontend.example.com/login
      lifespan: 10m

    registration:
      lifespan: 10m
      ui_url: https://frontend.example.com/registration
      after:
        password:
          hooks:
            - hook: session
        oidc:
          hooks:
            - hook: session

log:
  leak_sensitive_values: false

# set in SECRETS_COOKIE and SECRETS_DEFAULT env variables
secrets:
  default:
    - 795135465767325463454 #fake
  cookie:
    - 223108c7839f6324242342 #fake

cookies:
  domain: frontend.example.com
  path: /
  same_site: Lax

session:
  lifespan: 72h
  cookie:
    domain: frontend.example.com
    path: /
    same_site: Lax

hashers:
  argon2:
    parallelism: 1
    memory: 128MB
    iterations: 2
    salt_length: 16
    key_length: 16

identity:
  default_schema_id: default
  schemas:
    - id: default
      url: file:///root/kratos/config/identity.schema.json

courier:
  smtp:
    connection_uri: smtps://test:test@mailslurper:1025/?skip_ssl_verify=true
The issue still persisted, so I checked my code:
const req: any = await fetch(`https://frontend.example.com/self-service/${flow}/flows?id=${flowId}`, {
  method: 'GET',
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json',
  },
  credentials: 'include',
}).catch((err) => {
  throw err;
}); // error
Everything seems fine: I have credentials: 'include' and my Content-Type is correct. Then, doing some research, I found this: https://github.com/ory/kratos/issues/662. I was wondering whether this would affect me in my situation (my frontend domain https://frontend.example.com and my Kratos droplet https://kratos.example.com are just examples), and if so, how I could go about fixing it (could I use the Ory Proxy to get cookies to my application safely?). My idea was that I could simply set up Ory Hydra as the OAuth provider for my platform, but I am not sure.
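To make the domain question concrete, this is roughly what I think the relevant pieces would have to look like if the subdomain/cookie mismatch is the problem (just a sketch under that assumption; the example domains are the same placeholders as above):

serve:
  public:
    # the URL the browser actually reaches, instead of 127.0.0.1
    base_url: https://kratos.example.com/

cookies:
  # parent domain shared by frontend.example.com and kratos.example.com
  domain: example.com
  path: /
  same_site: Lax

session:
  cookie:
    domain: example.com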
Thanks in advance!
Right now I am using the approach from the first answer to:
Cannot read configmap with name: [xx] in namespace ['default'] Ignoring
But in application logs:
2022-04-19 14:14:57.660 [controller-reflector-io.kubernetes.client.openapi.models.V1ConfigMap-1] [] INFO i.k.c.informer.cache.ReflectorRunnable - class io.kubernetes.client.openapi.models.V1ConfigMap#Start listing and watching...
2022-04-19 14:14:57.662 [controller-reflector-io.kubernetes.client.openapi.models.V1ConfigMap-1] [] ERROR i.k.c.informer.cache.ReflectorRunnable - class io.kubernetes.client.openapi.models.V1ConfigMap#Reflector loop failed unexpectedly
io.kubernetes.client.openapi.ApiException:
at io.kubernetes.client.openapi.ApiClient.handleResponse(ApiClient.java:974)
at io.kubernetes.client.openapi.ApiClient.execute(ApiClient.java:886)
at io.kubernetes.client.informer.SharedInformerFactory$1.list(SharedInformerFactory.java:207)
at io.kubernetes.client.informer.cache.ReflectorRunnable.run(ReflectorRunnable.java:88)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
And it works properly, reading the configmaps mounted in the deployment.
How can I disable all the other features?
Using implementation("org.springframework.cloud:spring-cloud-starter-kubernetes-client-config:2.1.1")
In bootstrap.yml:
spring:
  application:
    name: toast
  cloud:
    vault:
      enabled: false
    kubernetes:
      reload:
        enabled: true
        mode: event
        strategy: refresh
      config:
        sources:
          - name: ${spring.application.name}-common
          - name: ${spring.application.name}
        enabled: true
        paths:
          #- { { .Values.application } }-common-config/data.yml
          #- { { .Values.application } }-config/application.yml
          - /etc/${spring.application.name}-common/config/application.yml
          - /etc/${spring.application.name}/config/data.yml
      enabled: true
I need to be able to use it without RBAC resources in k8s.
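What I have in mind is roughly the following (only a sketch; I'm assuming enable-api is the right switch for turning off the API lookups while keeping the mounted files):

spring:
  cloud:
    kubernetes:
      reload:
        enabled: false       # stop watching ConfigMaps through the API
      config:
        enable-api: false    # don't list ConfigMaps via the API, only read the mounted files
        paths:
          - /etc/${spring.application.name}-common/config/application.yml
          - /etc/${spring.application.name}/config/data.yml
      secrets:
        enabled: false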
I have created different channels for Monolog. While working with my app (dev and test environments) everything is fine: it creates and writes to all the logs. But when I execute my tests (unit tests) I get the following error: "The service definition "monolog.logger.event" does not exist".
I've dumped the ContainerBuilder $container in vendor/symfony/monolog-bundle/DependencyInjection/Compiler/LoggerChannelPass.php and for some reason monolog.logger.event does not exist, while all the rest of my channels do exist: doctrine, request, security, etc.
I am pasting my monolog config corresponding to the channels:
monolog:
    handlers:
        main:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%.log"
            level: debug
            channels: ['!event', '!snc_redis', '!doctrine', '!request', '!security']
        event:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%_event.log"
            level: debug
            channels: ['event']
        snc_redis:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%_redis.log"
            level: debug
            channels: ['snc_redis']
        doctrine:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%_doctrine.log"
            level: debug
            channels: ['doctrine']
        request:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%_request.log"
            level: debug
            channels: ['request']
        security:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%_security.log"
            level: debug
            channels: ['security']
Any clue how I can fix this?
It happens because web/app_test.php and the console have different debug flags, for example:
#web/app_test.php
$kernel = new AppKernel('test', false);
#bin(app)/console
$kernel = new AppKernel('test', true);
In this situation the web application cannot find the right cached container file.
I'm having trouble setting up Monolog to use Swiftmailer and the HTML formatter. I have multiple Monolog handlers and Swiftmailer mailers, which might be the issue. Here's my config.yml.
Monolog section:
services:
    monolog.formatter.html:
        class: Monolog\Formatter\HtmlFormatter

monolog:
    channels: ["orders", "support"]
    handlers:
        # Called
        orders:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%_orders.log"
            level: info
            channels: orders
        # Called
        support:
            type: fingers_crossed
            action_level: critical
            handler: support_grouped
            channels: support
        # Not called, referenced in a handler [support]
        support_grouped:
            type: group
            members: [support_log, environment_log, support_buffered]
        # Not called, referenced in a group handler [grouped_support]
        support_buffered:
            type: buffer
            handler: support_buffered_grouped
        # Not called, referenced in a handler [support_buffered]
        support_buffered_grouped:
            type: group
            members: [hipchat, webmaster_email, support_email]
        # Not called, referenced in a group handler [support_buffered_grouped]
        support_email:
            type: swift_mailer
            mailer: swiftmailer.mailer.errors
            from_email: %webmaster_email_address%
            to_email: %support_email_address%
            subject: '[Costimator] Support: An Error Occurred!'
            level: debug
            content_type: text/html
            formatter: monolog.formatter.html
        # Called
        webmaster:
            type: fingers_crossed
            action_level: critical
            handler: webmaster_grouped
        # Not called, referenced in a handler [webmaster]
        webmaster_grouped:
            type: group
            members: [environment_log, webmaster_buffered]
        # Not called, referenced in a handler [webmaster_grouped]
        webmaster_buffered:
            type: buffer
            handler: webmaster_buffered_grouped
        # Not called, referenced in a handler [webmaster_buffered_grouped]
        webmaster_buffered_grouped:
            type: group
            members: [hipchat, webmaster_email]
        # Not called, referenced in a group handler [webmaster_buffered_grouped, support_buffered_grouped]
        webmaster_email:
            type: swift_mailer
            mailer: swiftmailer.mailer.errors
            content_type: text/html
            from_email: %webmaster_email_address%
            to_email: %webmaster_email_address%
            subject: '[Costimator] An Error Occurred!'
            level: debug
            formatter: monolog.formatter.html
        # Not called, referenced in a group handler [support_grouped, webmaster_grouped]
        environment_log:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%.log"
            level: debug
        # Not called, referenced in a group handler [support_grouped, webmaster_grouped]
        support_log:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%.support_log"
            level: debug
        # Not called, referenced in a group handler [webmaster_buffered_grouped, support_buffered_grouped]
        hipchat:
            type: hipchat
            notify: true
            token: xxx
            room: xxx
            level: critical
Swiftmailer section:
# Swiftmailer Configuration
swiftmailer:
    default_mailer: %mailer_default_config%
    mailers:
        default:
            delivery_address: "%mailer_delivery_address%"
            transport: "%mailer_transport%"
            host: "%mailer_host%"
            username: "%mailer_user%"
            password: "%mailer_password%"
            spool:
                type: "%mailer_spool_type%"
                path: "%kernel.root_dir%/spool"
        errors:
            delivery_address: "%mailer_errors_delivery_address%"
            transport: "%mailer_errors_transport%"
            host: "%mailer_errors_host%"
            username: "%mailer_errors_user%"
            password: "%mailer_errors_password%"
            spool:
                type: "%mailer_errors_spool_type%"
                path: "%kernel.root_dir%/spool"
Errors are being sent as text/html but without the table formatting. Any ideas?
// Ping #seldaek
Edit: Formatters seem to be set on the handlers:
The configuration above actually works correctly. Not sure what was going on.
Leaving this here for anyone else who wants to implement HTML formatting.
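For anyone skimming, these seem to be the pieces that actually matter: the formatter service plus the formatter and content_type keys on the swift_mailer handler (a condensed sketch of the config above; the mailer and parameter names are the same placeholders):

services:
    monolog.formatter.html:
        class: Monolog\Formatter\HtmlFormatter

monolog:
    handlers:
        webmaster_email:
            type: swift_mailer
            mailer: swiftmailer.mailer.errors
            content_type: text/html
            formatter: monolog.formatter.html
            from_email: %webmaster_email_address%
            to_email: %webmaster_email_address%
            subject: '[Costimator] An Error Occurred!'
            level: debug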
I'm using the Symfony2 snc-redis bundle for caching.
On my server, Redis is installed and working correctly.
My problem is: when I try to clear or flush a db with Redis, all the sites on my server that use Redis crash, giving an internal server error because of the prod env.
I've tried changing the Redis configuration ports in my config.yml for every single site on my server, but I don't think it worked.
My sample snc-redis configuration:
snc_redis:
    clients:
        default:
            type: predis
            alias: default
            dsn: redis://localhost
            logging: %kernel.debug%
        cache:
            type: predis
            alias: cache
            dsn: redis://localhost/1
            logging: true
        cluster:
            type: predis
            alias: cluster
            dsn:
                - redis://127.0.0.1/5
                - redis://127.0.0.2/6
                - redis://pw@/var/run/redis/redis-1.sock/7
                - redis://127.0.0.1:6379/8
            options:
                profile: 2.4
                connection_timeout: 10
                connection_persistent: true
                read_write_timeout: 30
                iterable_multibulk: false
                throw_errors: true
                cluster: Snc\RedisBundle\Client\Predis\Connection\PredisCluster
        monolog:
            type: predis
            alias: monolog
            dsn: redis://localhost/1
            logging: false
            options:
                connection_persistent: true
    session:
        client: default
        prefix: foo
        use_as_default: true
    doctrine:
        metadata_cache:
            client: cache
            entity_manager: default
            document_manager: default
        result_cache:
            client: cache
            entity_manager: [default, read]
            document_manager: [default, slave1, slave2]
            namespace: "dcrc:"
        query_cache:
            client: cache
            entity_manager: default
    monolog:
        client: monolog
        key: monolog
    swiftmailer:
        client: default
        key: swiftmailer

monolog:
    handlers:
        main:
            type: service
            id: monolog.handler.redis
            level: debug
What am I doing wrong? How can I get it to work correctly without it crashing?
My Redis bundle for Symfony2:
Snc\RedisBundle\SncRedisBundle()
https://github.com/snc/SncRedisBundle
You can define a prefix for each site like this:
snc_redis:
    clients:
        default:
            dsn: "redis://localhost:6379"
            options:
                prefix: "site_name"
            type: phpredis
            alias: default
            logging: %kernel.debug%
Note: you must remember to add this prefix to all clients ;)
Did you try changing the client alias for every site?
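A variation on the same idea, in case prefixes alone are not enough (FLUSHDB clears the whole selected database regardless of key prefixes): give each site its own database index in the DSN, for example (a sketch; the index numbers are arbitrary):

# config.yml of site A
snc_redis:
    clients:
        default:
            type: phpredis
            alias: default
            dsn: "redis://localhost:6379/2"

# config.yml of site B
snc_redis:
    clients:
        default:
            type: phpredis
            alias: default
            dsn: "redis://localhost:6379/3"

That way flushing one site's database does not touch the keys of the others.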