Is there a Spring Data Redis mapping for the Redisson framework? - spring-data-redis

As the title says: is there a Spring Data Redis mapping for the Redisson framework (http://redisson.org)?

Short answer
There is a Spring Data Redis integration.
Long answer
Consider the Spring Data Redis integration as another type of connector or binding (check here for the connector term). The library provides RedissonConnectionFactory (which implements RedisConnectionFactory), and that is the base for working with e.g. @RedisHash and the Spring cache abstraction (@EnableCaching). There is also a redisson-spring-boot-starter, but make sure not to have Lettuce or Jedis on the classpath, because org.springframework.boot.autoconfigure.data.redis.RedisAutoConfiguration (provided by spring-boot-autoconfigure) might create a RedisConnectionFactory before org.redisson.spring.starter.RedissonAutoConfiguration (provided by redisson-spring-boot-starter) does!
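If you cannot keep Lettuce or Jedis off the classpath, one way around this ordering problem is to declare the connection factory yourself instead of relying on the starter's auto-configuration. A minimal sketch, assuming a redisson.yaml on the classpath and the redisson-spring-data module (the class name and config file name are illustrative, not part of the starter):

import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import org.redisson.spring.data.connection.RedissonConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
public class RedissonSpringDataConfig {

    // Build the Redisson client from its own YAML config file.
    @Bean(destroyMethod = "shutdown")
    public RedissonClient redissonClient() throws Exception {
        Config config = Config.fromYAML(
                getClass().getClassLoader().getResource("redisson.yaml"));
        return Redisson.create(config);
    }

    // Expose Redisson as the RedisConnectionFactory that Spring Data Redis
    // (@RedisHash repositories, @EnableCaching, RedisTemplate) will pick up.
    @Bean
    public RedisConnectionFactory redisConnectionFactory(RedissonClient redisson) {
        return new RedissonConnectionFactory(redisson);
    }
}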

1. Add the redisson-spring-boot-starter dependency to your project:
compile 'org.redisson:redisson-spring-boot-starter:3.13.5'
2. Add settings to your application.yml file
# common spring boot props:
spring:
  redis:
    database:
    host:
    port:
    password:
    ssl:
    timeout:
    cluster:
      nodes:
    sentinel:
      master:
      nodes:
    redisson:
      file: classpath:redisson.yaml
      config: |
        clusterServersConfig:
          idleConnectionTimeout: 10000
          connectTimeout: 10000
          timeout: 3000
          retryAttempts: 3
          retryInterval: 1500
          failedSlaveReconnectionInterval: 3000
          failedSlaveCheckInterval: 60000
          password: null
          subscriptionsPerConnection: 5
          clientName: null
          loadBalancer: !<org.redisson.connection.balancer.RoundRobinLoadBalancer> {}
          subscriptionConnectionMinimumIdleSize: 1
          subscriptionConnectionPoolSize: 50
          slaveConnectionMinimumIdleSize: 24
          slaveConnectionPoolSize: 64
          masterConnectionMinimumIdleSize: 24
          masterConnectionPoolSize: 64
          readMode: "SLAVE"
          subscriptionMode: "SLAVE"
          nodeAddresses:
            - "redis://127.0.0.1:7004"
            - "redis://127.0.0.1:7001"
            - "redis://127.0.0.1:7000"
          scanInterval: 1000
          pingConnectionInterval: 0
          keepAlive: false
          tcpNoDelay: false
        threads: 16
        nettyThreads: 32
        codec: !<org.redisson.codec.FstCodec> {}
        transportMode: "NIO"
3. Use Redisson through a Spring bean with the RedissonClient interface, or via RedisTemplate/ReactiveRedisTemplate objects, as sketched below.
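For example, injecting both abstractions into a service could look like this (a sketch; the service class and keys are made up for illustration, and StringRedisTemplate is the String-focused RedisTemplate that Spring Boot auto-configures):

import org.redisson.api.RedissonClient;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class GreetingService {

    private final RedissonClient redisson;
    private final StringRedisTemplate redisTemplate;

    public GreetingService(RedissonClient redisson, StringRedisTemplate redisTemplate) {
        this.redisson = redisson;
        this.redisTemplate = redisTemplate;
    }

    public void store() {
        // Redisson's own API ...
        redisson.getBucket("greeting").set("hello");
        // ... and the Spring Data Redis API, backed by the same Redisson connection factory
        redisTemplate.opsForValue().set("greeting:spring", "hello");
    }
}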

Related

Send the trace data of a website using Jaeger and Opentelemetry to Opensearch

I'm working on the observability part of Opensearch, so I'm trying to collect the trace data of a WordPress website and send it to Opensearch.
I'm collecting the trace data using the WordPress plugin DecaLog, which sends the data to the Jaeger agent; from Jaeger I'm sending the data to Opentelemetry, then to Data Prepper, and lastly to Opensearch.
Jaeger agent service in docker-compose:
jaeger-agent:
  container_name: jaeger-agent
  image: jaegertracing/jaeger-agent:latest
  command: [ "--reporter.grpc.host-port=otel-collector:14250" ]
  ports:
    - "5775:5775/udp"
    - "6831:6831/udp"
    - "6832:6832/udp"
    - "5778:5778/tcp"
  networks:
    - our-network
The "command" ligne got me this error : Err: connection error: desc = "transport: Error while dialing dial tcp: lookup otel-collector on 127.0.0.11:53: server misbehaving"","system":"grpc","grpc_log":true
So I changed otel-collector to the IP of the otel-collector container.
Otel collector and data prepper are installed using docker-compose.
data-prepper:
  restart: unless-stopped
  container_name: data-prepper
  image: opensearchproject/data-prepper:latest
  volumes:
    - ./data-prepper/examples/trace_analytics_no_ssl.yml:/usr/share/data-prepper/pipelines.yaml
    - ./data-prepper/examples/data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml
    - ./data-prepper/examples/demo/root-ca.pem:/usr/share/data-prepper/root-ca.pem
  ports:
    - "21890:21890"
  networks:
    - our-network
  depends_on:
    - "opensearch"
otel-collector:
  container_name: otel-collector
  image: otel/opentelemetry-collector:0.54.0
  command: [ "--config=/etc/otel-collector-config.yml" ]
  working_dir: "/project"
  volumes:
    - ${PWD}/:/project
    - ./otel-collector-config.yml:/etc/otel-collector-config.yml
    - ./data-prepper/examples/demo/demo-data-prepper.crt:/etc/demo-data-prepper.crt
  ports:
    - "4317:4317"
  depends_on:
    - data-prepper
  networks:
    - our-network
The configuration of otel.yaml (to send data from Opentelemetry to Opensearch):
receivers:
  jaeger:
    protocols:
      grpc:
exporters:
  otlp/2:
    endpoint: data-prepper:21890
    tls:
      insecure: true
      insecure_skip_verify: true
  logging:
service:
  pipelines:
    traces:
      receivers: [jaeger]
      exporters: [logging, otlp/2]
The configuration for the Data Prepper pipeline:
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - otel_trace_raw_prepper:
  sink:
    - opensearch:
        hosts: [ "http://localhost:9200" ]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"
        trace_analytics_raw: true
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts: ["http://localhost:9200"]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"
        trace_analytics_service_map: true
As of now I'm getting the following errors:
Jaeger agent:
Err: connection error: desc = \"transport: Error while dialing dial tcp otel-collector-container-IP:14250: i/o timeout\"","system":"grpc","grpc_log":true}
Opentelemetry collector:
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "otlp/2"}
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "traces", "name": "otlp/2"}
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:86 Starting processors...
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:98 Starting receivers...
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:102 Exporter is starting... {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info static/strategy_store.go:203 No sampling strategies provided or URL is unavailable, using defaults {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info pipelines/pipelines.go:106 Exporter started. {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info service/collector.go:220 Starting otelcol... {"Version": "0.54.0", "NumCPU": 2}
2022-08-04T15:31:32.683Z info service/collector.go:128 Everything is ready. Begin running and processing data.
2022-08-04T15:31:32.684Z warn zapgrpc/zapgrpc.go:191 [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "data-prepper:21890",
"ServerName": "data-prepper:21890",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp data-prepper-container-ip:21890: connect: connection refused" {"grpc_log": true}
Data Prepper:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.amazon.dataprepper.DataPrepper]: Constructor threw exception; nested exception is java.lang.RuntimeException: No valid pipeline is available for execution, exiting
Followed by this at the end:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-08-04T15:23:22,803 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperAppConfiguration - Command line args: /usr/share/data-prepper/pipelines.yaml,/usr/share/data-prepper/data-prepper-config.yaml
2022-08-04T15:23:22,806 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperArgs - Using /usr/share/data-prepper/pipelines.yaml configuration file
Opensearch needs a separate tool to support ingestion of Opentelemetry data. It is called Data Prepper and is part of the Opensearch project. There is a nice getting started guide on how to set up trace analytics in Opensearch.
Data Prepper works similarly to Fluentd or the Opentelemetry Collector, but has proper support for Opensearch as a data sink: it pre-processes trace data adequately for the Opensearch Dashboards tracing plugin. Data Prepper also supports the Opentelemetry metrics format.
Are you still having issues running Data Prepper? The configuration used in this example has been updated since the latest release and should now be up to date and working (https://github.com/opensearch-project/data-prepper/blob/main/examples/trace_analytics_no_ssl.yml).

VNF does not forward packets sent from Client in Openstack using a VNF Forwarding Graph (VNFFG)

I'm trying to ping 8.8.8.8 from Client via VNF1, so I use a VNFFG to force the Client's ICMP traffic to go through VNF1 before going out to the internet.
After I apply the VNFFG rule in Openstack, VNF1 can see the MPLS packet that Openstack encapsulated from the Client's ICMP packet when I use tcpdump, but the forwarding table of VNF1 does not receive any packet to continue forwarding it.
This is the packet seen on VNF1:
09:15:12.161830 MPLS (label 13311, exp 0, [S], ttl 255) IP 12.0.0.58 > 8.8.8.8: ICMP echo request, id 10531, seq 15, length 64
I captured that packet and saw that the content can be read (without encryption) and that the src and dst MAC addresses belong to Client and VNF1 respectively.
This is my VNFFG template:
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Sample VNFFG template
topology_template:
  node_templates:
    Forwarding_path1:
      type: tosca.nodes.nfv.FP.TackerV2
      description: demo chain
      properties:
        id: 51
        policy:
          type: ACL
          criteria:
            - name: block_icmp
              classifier:
                network_src_port_id: 0304e8b5-6c37-4634-bde2-1351cdee5134 # CLIENT PORT ID
                ip_proto: 1
            - name: block_udp
              classifier:
                network_src_port_id: 0304e8b5-6c37-4634-bde2-1351cdee5134 # CLIENT PORT ID
                ip_proto: 17
        path:
          - forwarder: VNF1
            capability: CP1
  groups:
    VNFFG1:
      type: tosca.groups.nfv.VNFFG
      description: Traffic to server
      properties:
        vendor: tacker
        version: 1.0
        number_of_endpoints: 1
        dependent_virtual_link: [VL1]
        connection_point: [CP1]
        constituent_vnfs: [VNF1]
      members: [Forwarding_path1]
This is my VNF Descriptor:
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Demo example
metadata:
  template_name: sample-tosca-vnfd
topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      capabilities:
        nfv_compute:
          properties:
            num_cpus: 1
            mem_size: 2 GB
            disk_size: 20 GB
      properties:
        image: VNF1
        availability_zone: nova
        mgmt_driver: noop
        key_name: my-key-pair
        config: |
          param0: key1
          param1: key2
    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: my-private-network
        vendor: Tacker
    FIP1:
      type: tosca.nodes.network.FloatingIP
      properties:
        floating_network: public
      requirements:
        - link:
            node: CP1
I used this command to deploy the VNFFG rule:
tacker vnffg-create --vnffgd-template vnffg_test.yaml forward_traffic
I do not know if the problem can come from the key I defined for VNF1, because I do not know what param0: key1 and param1: key2 are used for, or where they are.
How can I make the VNF forward these packets?

How to register an existing ClearCase view from another server

I have two views of the same name on two different servers, but they are not synchronized.
They do not have the same config spec, as shown below.
The environment is AIX.
t123456#server1:/dwp_root/d/streams/rcl/bin:d> ct catcs -tag deliver_pml_ux
element * A46_1.4.2
t123456#server2:/u/t123456:t> ct catcs -tag deliver_pml_ux
element * DEM_7.7.52
Information from server 1
t123456#server1:/u/t123456:d> ct lsview -l -pro -full deliver_pml_ux
Tag: deliver_pml_ux
Global path: /clearcase/views/deliver_pml_ux.vws
Server host: server1
Region: dwh
Active: YES
View tag uuid:58a2fc3c.761011e8.8052.00:02:c7:d6:16:4c
View on host: server1
View server access path: /clearcase/views/deliver_pml_ux.vws
View uuid: 58a2fc3c.761011e8.8052.00:02:c7:d6:16:4c
View owner: ccadmin
t123456#server1:/u/t123456:d>ct lsstgloc -view -long
Name: viewstg
Type: View
Region: dwh
Storage Location uuid: 3a407c2c.ca8b11e1.805a.00:02:f6:0b:ad:4c
Global path: /clearcase/stg/views
Server host: d1dw753
Server host path: /clearcase/stg/views
Information from server 2
t123456#server2:/u/t123456:t> ct lsview -l -pro -full deliver_pml_ux
Tag: deliver_pml_ux
Global path: /clearcase/views/deliver_pml_ux.vws
Server host: server2
Region: dwh
Active: YES
View tag uuid:9c721b34.ba1811e1.8026.00:02:f6:0b:ad:4c
View on host: server2
View server access path: /clearcase/views/deliver_pml_ux.vws
View uuid: 9c721b34.ba1811e1.8026.00:02:f6:0b:ad:4c
View owner: loaddsfr
t123456#server2:/u/t123456:t> ct lsstgloc -view -long
t123456#server2:/u/t123456:t>
Check if you need to use cleartool register in your case:
ct register -view -replace -host server2 -hpath /clearcase/views/deliver_pml_ux.vws /clearcase/views/deliver_pml_ux.vws
It needs to run on a ClearCase client within the right target region.

Exception 504 when registering the consumer

I've been working with Symfony 2.7 and the RabbitMQBundle to handle some long-running processes asynchronously.
After facing the issue where the MySQL connection dies after a few minutes, I discovered rabbitmq-cli-consumer, a small app written in Go that takes care of consuming the queue and hands its content to a command.
In my case, I use it with this command: ./rabbitmq-cli-consumer -c configuration-stock.conf --include -V -e 'php app/console amqp:consume:stock --env=prod -vvv', with this configuration file:
[rabbitmq]
host = HOST
username = USERNAME
password = PASSWORD
vhost=/VHOST
port=PORT
queue=stock
compression=Off
[exchange]
name=exports
type=direct
durable=On
[queuesettings]
routingkey=stock
messagettl=10000
deadLetterExchange=exports.dl
deadLetterroutingkey=stock
priority=10
To handle errors, I intend to use RabbitMQ's x-dead-letter-exchange and x-dead-letter-routing-key configuration, to be able to retry the message later (in case something went temporarily wrong).
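(For reference, the queue declaration those arguments amount to looks roughly like this with the plain RabbitMQ Java client; a hedged sketch, not what the bundle literally runs. One relevant broker rule: if a queue already exists with different arguments, the declaration is refused with PRECONDITION_FAILED and the channel is closed, after which further operations on it fail with "channel/connection is not open".)

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class DeclareStockQueue {
    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("HOST"); // placeholder, as in the config file above
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // Same dead-letter arguments as in the bundle configuration:
            Map<String, Object> queueArgs = new HashMap<>();
            queueArgs.put("x-dead-letter-exchange", "exports.dl");
            queueArgs.put("x-dead-letter-routing-key", "stock");
            // durable = true, exclusive = false, autoDelete = false
            channel.queueDeclare("stock", true, false, false, queueArgs);
            channel.queueBind("stock", "exports", "stock");
        }
    }
}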
My issue is that, when I define my queues in the RabbitMQBundle configuration, rabbitmq-cli-consumer is unable to consume the queue and throws this error:
2018/04/23 11:35:54 Connecting RabbitMQ...
2018/04/23 11:35:54 Connected.
2018/04/23 11:35:54 Opening channel...
2018/04/23 11:35:54 Done.
2018/04/23 11:35:54 Setting QoS...
2018/04/23 11:35:54 Succeeded setting QoS.
2018/04/23 11:35:54 Declaring queue "stock"...
2018/04/23 11:35:54 Registering consumer...
2018/04/23 11:35:54 failed to register a consumer: Exception (504) Reason: "channel/connection is not open"
Here is the configuration I use for RabbitMQBundle:
old_sound_rabbit_mq:
  producers:
    exports:
      connection: default
      exchange_options:
        name: 'exports'
        type: direct
    exports_dl:
      connection: default
      exchange_options:
        name: 'exports.dl'
        type: direct
  consumers:
    stock_dead_letter:
      connection: default
      exchange_options:
        name: exports.dl
        type: direct
      queue_options:
        name: stock.dl
        routing_keys:
          - stock
        arguments:
          x-dead-letter-exchange: ['S', 'exports']
          x-dead-letter-routing-key: ['S', 'stock']
          x-message-ttl: ['I', 60000]
      callback: amqp.consumers.exports.stock
  multiple_consumers:
    exports:
      connection: default
      exchange_options:
        name: 'exports'
        type: direct
      queues:
        stock:
          name: stock
          callback: amqp.consumers.exports.stock
          routing_keys:
            - stock
          arguments:
            x-dead-letter-exchange: ['S', 'exports.dl']
            x-dead-letter-routing-key: ['S', 'stock']
Has anyone ever encountered something similar? And how did you solve it?

Flyway not picking up migration file in Spring MVC

I've followed a few tutorials and configured Flyway for DB initialisation.
I took a schema dump from MySQL (no data) and named the file V1__initialSchema.sql, so it is full of the specific create table statements, foreign keys etc. as dumped by MySQL.
Then I've configured the beans:
Flyway Initialiser
@Bean(initMethod = "migrate")
protected Flyway flyway() {
    Flyway flyway = new Flyway();
    flyway.setBaselineOnMigrate(true);
    //flyway.setLocations("classpath:db/migration");
    flyway.setDataSource(dataSource());
    return flyway;
}
JPA Initialiser
@Bean
@DependsOn(value = "flyway")
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
    LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
    HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
    factory.setDataSource(dataSource());
    factory.setJpaVendorAdapter(vendorAdapter);
    factory.setPackagesToScan("com.ideafactory.mvc", "com.ideafactory.plugins");
    Properties jpaProperties = new Properties();
    jpaProperties.put("hibernate.dialect", "org.hibernate.dialect.MySQLDialect");
    //jpaProperties.put("hibernate.hbm2ddl.auto","create-drop");
    jpaProperties.put("hibernate.show_sql", false);
    jpaProperties.put("hibernate.format_sql", false);
    jpaProperties.put("hibernate.use_sql_comments", false);
    jpaProperties.put("hibnerate.connection.CharSet", "utf8");
    jpaProperties.put("hibernate.connect.characterEncoding", "utf8");
    jpaProperties.put("hibernate.connection.useUnicode", true);
    jpaProperties.put("jadira.usertype.autoRegisterUserTypes", true);
    factory.setJpaProperties(jpaProperties);
    factory.afterPropertiesSet();
    factory.setLoadTimeWeaver(new InstrumentationLoadTimeWeaver());
    return factory;
}
I switched on logging and I can see it is "skipping" my initialisation file, though I'm not sure why. The schema hasn't been created.
l.util.logging.slf4j.Slf4jLog 40 debug - Scanning for classpath resources at 'db/migration' (Prefix: 'V', Suffix: '.sql')
16:35:08.272 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Scanning URL: file:/Users//Documents/Projects/MerchantX/target/java_ecommerce/WEB-INF/classes/db/migration/
16:35:08.273 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - JBoss VFS v2 available: false
16:35:08.273 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Scanning starting at classpath root in filesystem: /Users//Documents/Projects/MerchantX/target/java_ecommerce/WEB-INF/classes/
16:35:08.273 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Scanning for resources in path: /Users//Documents/Projects/MerchantX/target/java_ecommerce/WEB-INF/classes/db/migration (db/migration)
16:35:08.273 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Found resource: db/migration/V1__initialSchema.sql
16:35:08.279 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Scanning for classes at 'db/migration' (Implementing: 'org.flywaydb.core.api.migration.jdbc.JdbcMigration')
16:35:08.279 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Scanning URL: file:/Users//Documents/Projects/MerchantX/target/java_ecommerce/WEB-INF/classes/db/migration/
16:35:08.279 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - JBoss VFS v2 available: false
16:35:08.280 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Scanning starting at classpath root in filesystem: /Users//Documents/Projects/MerchantX/target/java_ecommerce/WEB-INF/classes/
16:35:08.280 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Scanning for resources in path: /Users//Documents/Projects/MerchantX/target/java_ecommerce/WEB-INF/classes/db/migration (db/migration)
16:35:08.280 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Filtering out resource: db/migration/V1__initialSchema.sql (filename: V1__initialSchema.sql)
16:35:08.281 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Scanning for classes at 'db/migration' (Implementing: 'org.flywaydb.core.api.migration.spring.SpringJdbcMigration')
16:35:08.281 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Scanning URL: file:/Users//Documents/Projects/MerchantX/target/java_ecommerce/WEB-INF/classes/db/migration/
16:35:08.282 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - JBoss VFS v2 available: false
16:35:08.282 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Scanning starting at classpath root in filesystem: /Users//Documents/Projects/MerchantX/target/java_ecommerce/WEB-INF/classes/
16:35:08.282 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Scanning for resources in path: /Users//Documents/Projects/MerchantX/target/java_ecommerce/WEB-INF/classes/db/migration (db/migration)
16:35:08.282 DEBUG org.flywaydb.core.internal.util.logging.slf4j.Slf4jLog 40 debug - Filtering out resource: db/migration/V1__initialSchema.sql (filename: V1__initialSchema.sql)
I haven't used Flyway before; can anyone explain why it filtered out my initialisation file?
AFAIK, baselineOnMigrate creates the first (V1) version from your actual schema in the DB, and only the following versions (V1.1, V2, ...) will get applied.
So either don't use baselineOnMigrate (but then you need to start with an empty DB schema), or start indexing your versions from e.g. V2, as sketched below.
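A minimal sketch of the second option, assuming the dump is renamed from V1__initialSchema.sql to V2__initialSchema.sql (the bean otherwise mirrors the one in the question):

@Bean(initMethod = "migrate")
protected Flyway flyway() {
    Flyway flyway = new Flyway();
    // The existing schema is baselined as version 1, so only migrations
    // above the baseline (V2, V2.1, ...) are actually executed.
    flyway.setBaselineOnMigrate(true);
    flyway.setLocations("classpath:db/migration"); // holds V2__initialSchema.sql
    flyway.setDataSource(dataSource());
    return flyway;
}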
