ChainedTransactionManager with DataSourceTransactionManager + KafkaTransactionManager: Kafka send operations are not getting rolled back - spring-kafka

I am trying to perform a DB insert plus a Kafka send inside a single @Transactional method.
When an exception occurs in the transactional method, the DB operation is rolled back successfully, but the Kafka transaction is not rolled back.
I am using spring-kafka 2.5.2.RELEASE.
According to this thread, https://github.com/spring-projects/spring-kafka/issues/433, JpaTransactionManager + KafkaTransactionManager is a working combination.
@EnableTransactionManagement
public class SampleBaseApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext ctx = SpringApplication.run(SampleBaseApplication.class, args);
        Map<String, PlatformTransactionManager> tms = ctx.getBeansOfType(PlatformTransactionManager.class);
        System.out.println(tms);
        ctx.close();
    }

    @Bean
    public ApplicationRunner runner1(Foo foo) {
        return args -> foo.sendToKafkaAndDB();
    }

    @Bean
    public DataSourceTransactionManager dstm(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Bean(name = "chainedTxMang")
    public ChainedTransactionManager chainedTxM(DataSourceTransactionManager dstm, KafkaTransactionManager<?, ?> kafka) {
        return new ChainedTransactionManager(dstm, kafka);
    }

    @Component
    public static class Foo {

        @Autowired
        @Qualifier("transactionalTemplate")
        private KafkaTemplate<String, String> template;

        @Autowired
        private DataAccess dataAccess;

        @Transactional(transactionManager = "chainedTxMang")
        public void sendToKafkaAndDB() throws Exception {
            dataAccess.insertInTable("113", "111", "111", "COMPLETED", "113");
            System.out.println(this.template.send("TEST_TOPIC", "bar").get());
            throw new RuntimeException("exp...");
        }
    }
}
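For reference on what ordering to expect: ChainedTransactionManager is documented to start the configured managers in the order given and to commit/roll back in reverse order, so with `new ChainedTransactionManager(dstm, kafka)` a failure should roll back the Kafka transaction first, then the DataSource one. A minimal plain-Java sketch of that contract (a hypothetical stand-in, not the actual Spring classes):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in demo (NOT the Spring API): illustrates the documented
// ChainedTransactionManager contract - transactions begin in the order
// the managers are listed and are rolled back in reverse order.
public class ChainedTxDemo {

    // Begin each "transaction" in listed order; when the body throws,
    // roll back in reverse order, mirroring the chained contract.
    static List<String> run(List<String> managers) {
        List<String> events = new ArrayList<>();
        managers.forEach(m -> events.add(m + ":begin"));
        try {
            throw new RuntimeException("exp..."); // simulated failure, as in sendToKafkaAndDB()
        } catch (RuntimeException e) {
            for (int i = managers.size() - 1; i >= 0; i--) {
                events.add(managers.get(i) + ":rollback");
            }
        }
        return events;
    }

    public static void main(String[] args) {
        // dstm listed first, kafka second, mirroring the chainedTxM bean above
        System.out.println(run(List.of("dataSourceTx", "kafkaTx")));
        // -> [dataSourceTx:begin, kafkaTx:begin, kafkaTx:rollback, dataSourceTx:rollback]
    }
}
```

This matches the log below: the KafkaTransactionManager's "Initiating transaction rollback" line appears before the DataSource ConnectionHolder is removed.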
Below are the logs:
USER\Downloads\sample\target\classes started by DC-USER in C:\Users\DC-USER\Downloads\sample)
2021-Jun-21 20:17:44 PM [main] INFO com.example.SampleBaseApplication - {} - No active profile set, falling back to default profiles: default
2021-Jun-21 20:17:49 PM [main] INFO org.springframework.boot.web.embedded.tomcat.TomcatWebServer - {} - Tomcat initialized with port(s): 8082 (http)
2021-Jun-21 20:17:49 PM [main] INFO org.apache.coyote.http11.Http11NioProtocol - {} - Initializing ProtocolHandler ["http-nio-8082"]
2021-Jun-21 20:17:49 PM [main] INFO org.apache.catalina.core.StandardService - {} - Starting service [Tomcat]
2021-Jun-21 20:17:49 PM [main] INFO org.apache.catalina.core.StandardEngine - {} - Starting Servlet engine: [Apache Tomcat/9.0.36]
2021-Jun-21 20:17:53 PM [main] INFO org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - {} - Initializing Spring embedded WebApplicationContext
2021-Jun-21 20:17:53 PM [main] INFO org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext - {} - Root WebApplicationContext: initialization completed in 8303 ms
2021-Jun-21 20:17:54 PM [main] INFO org.springframework.aop.framework.CglibAopProxy - {} - Unable to proxy interface-implementing method [public final void org.springframework.dao.support.DaoSupport.afterPropertiesSet() throws java.lang.IllegalArgumentException,org.springframework.beans.factory.BeanInitializationException] because it is marked as final: Consider using interface-based JDK proxies instead!
2021-Jun-21 20:17:54 PM [main] TRACE org.springframework.transaction.annotation.AnnotationTransactionAttributeSource - {} - Adding transactional method 'com.example.SampleBaseApplication$Foo.sendToKafkaAndDB' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; 'chainedTxMang'
2021-Jun-21 20:17:55 PM [main] INFO org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - {} - Initializing ExecutorService 'applicationTaskExecutor'
2021-Jun-21 20:17:58 PM [main] INFO org.springframework.boot.actuate.endpoint.web.EndpointLinksResolver - {} - Exposing 14 endpoint(s) beneath base path '/actuator'
2021-Jun-21 20:17:58 PM [main] INFO org.apache.coyote.http11.Http11NioProtocol - {} - Starting ProtocolHandler ["http-nio-8082"]
2021-Jun-21 20:17:58 PM [main] INFO org.springframework.boot.web.embedded.tomcat.TomcatWebServer - {} - Tomcat started on port(s): 8082 (http) with context path ''
2021-Jun-21 20:17:58 PM [main] INFO com.example.SampleBaseApplication - {} - Started SampleBaseApplication in 15.457 seconds (JVM running for 22.468)
2021-Jun-21 20:17:58 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Initializing transaction synchronization
2021-Jun-21 20:17:58 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Clearing transaction synchronization
2021-Jun-21 20:17:58 PM [main] INFO com.zaxxer.hikari.HikariDataSource - {} - Hikari Handler DB Pool - Starting...
2021-Jun-21 20:17:59 PM [main] INFO com.zaxxer.hikari.HikariDataSource - {} - Hikari Handler DB Pool - Start completed.
2021-Jun-21 20:17:59 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Bound value [org.springframework.jdbc.datasource.ConnectionHolder#20a3e10c] for key [HikariDataSource (Hikari Handler DB Pool)] to thread [main]
2021-Jun-21 20:17:59 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Initializing transaction synchronization
2021-Jun-21 20:17:59 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Clearing transaction synchronization
2021-Jun-21 20:17:59 PM [main] DEBUG org.springframework.kafka.transaction.KafkaTransactionManager - {} - Creating new transaction with name [com.example.SampleBaseApplication$Foo.sendToKafkaAndDB]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; 'chainedTxMang'
2021-Jun-21 20:17:59 PM [main] INFO org.apache.kafka.clients.producer.ProducerConfig - {} - ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [localhost:9093]
buffer.memory = 33554432
client.dns.lookup = default
client.id = producer-transx-0
compression.type = lz4
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 1
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SSL
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = C:\Users\DC-USER\Downloads\sample\src\main\resources\kafka.server.keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = C:\Users\DC-USER\Downloads\sample\src\main\resources\kafka.server.truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 300000
transactional.id = transx-0
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
2021-Jun-21 20:17:59 PM [main] INFO org.apache.kafka.clients.producer.KafkaProducer - {} - [Producer clientId=producer-transx-0, transactionalId=transx-0] Instantiated a transactional producer.
2021-Jun-21 20:18:00 PM [main] WARN org.apache.kafka.clients.producer.ProducerConfig - {} - The configuration 'The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. Note that <code>enable.idempotence</code> must be enabled if a TransactionalId is configured. The default is <code>null</code>, which means transactions cannot be used. Note that, by default, transactions require a cluster of at least three brokers which is the recommended setting for production; for development you can change this, by adjusting broker setting <code>transaction.state.log.replication.factor</code>.' was supplied but isn't a known config.
2021-Jun-21 20:18:00 PM [main] INFO org.apache.kafka.common.utils.AppInfoParser - {} - Kafka version: 2.5.0
2021-Jun-21 20:18:00 PM [main] INFO org.apache.kafka.common.utils.AppInfoParser - {} - Kafka commitId: 66563e712b0b9f84
2021-Jun-21 20:18:00 PM [main] INFO org.apache.kafka.common.utils.AppInfoParser - {} - Kafka startTimeMs: 1624286880090
2021-Jun-21 20:18:00 PM [main] INFO org.apache.kafka.clients.producer.internals.TransactionManager - {} - [Producer clientId=producer-transx-0, transactionalId=transx-0] Invoking InitProducerId for the first time in order to acquire a producer ID
2021-Jun-21 20:18:01 PM [kafka-producer-network-thread | producer-transx-0] INFO org.apache.kafka.clients.Metadata - {} - [Producer clientId=producer-transx-0, transactionalId=transx-0] Cluster ID: xfiOAAyKRtC9OMjttHLmyQ
2021-Jun-21 20:18:01 PM [kafka-producer-network-thread | producer-transx-0] INFO org.apache.kafka.clients.producer.internals.TransactionManager - {} - [Producer clientId=producer-transx-0, transactionalId=transx-0] Discovered transaction coordinator localhost:9093 (id: 1 rack: null)
2021-Jun-21 20:18:01 PM [kafka-producer-network-thread | producer-transx-0] INFO org.apache.kafka.clients.producer.internals.TransactionManager - {} - [Producer clientId=producer-transx-0, transactionalId=transx-0] ProducerId set to 0 with epoch 7
2021-Jun-21 20:18:01 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Bound value [org.springframework.kafka.core.KafkaResourceHolder#42aa1324] for key [org.springframework.kafka.core.DefaultKafkaProducerFactory#3976ebfa] to thread [main]
2021-Jun-21 20:18:01 PM [main] DEBUG org.springframework.kafka.transaction.KafkaTransactionManager - {} - Created Kafka transaction on producer [CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer#6164e137]]
2021-Jun-21 20:18:01 PM [main] TRACE org.springframework.transaction.interceptor.TransactionInterceptor - {} - Getting transaction for [com.example.SampleBaseApplication$Foo.sendToKafkaAndDB]
2021-Jun-21 20:18:01 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Retrieved value [org.springframework.jdbc.datasource.ConnectionHolder#20a3e10c] for key [HikariDataSource (Hikari Handler DB Pool)] bound to thread [main]
2021-Jun-21 20:18:01 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Retrieved value [org.springframework.jdbc.datasource.ConnectionHolder#20a3e10c] for key [HikariDataSource (Hikari Handler DB Pool)] bound to thread [main]
2021-Jun-21 20:18:01 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Retrieved value [org.springframework.jdbc.datasource.ConnectionHolder#20a3e10c] for key [HikariDataSource (Hikari Handler DB Pool)] bound to thread [main]
2021-Jun-21 20:18:01 PM [main] INFO com.example.DataAccess - {} - Time taken to perform insert operation: 278ms
2021-Jun-21 20:18:01 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Retrieved value [org.springframework.kafka.core.KafkaResourceHolder#42aa1324] for key [org.springframework.kafka.core.DefaultKafkaProducerFactory#3976ebfa] bound to thread [main]
2021-Jun-21 20:18:01 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Retrieved value [org.springframework.kafka.core.KafkaResourceHolder#42aa1324] for key [org.springframework.kafka.core.DefaultKafkaProducerFactory#3976ebfa] bound to thread [main]
SendResult [producerRecord=ProducerRecord(topic=TEST_TOPIC, partition=null, headers=RecordHeaders(headers = [], isReadOnly = true), key=null, value=bar, timestamp=null), recordMetadata=TEST_TOPIC-0#0]
2021-Jun-21 20:18:02 PM [main] TRACE org.springframework.transaction.interceptor.TransactionInterceptor - {} - Completing transaction for [com.example.SampleBaseApplication$Foo.sendToKafkaAndDB] after exception: java.lang.RuntimeException: exp...
2021-Jun-21 20:18:02 PM [main] TRACE org.springframework.transaction.interceptor.RuleBasedTransactionAttribute - {} - Applying rules to determine whether transaction should rollback on java.lang.RuntimeException: exp...
2021-Jun-21 20:18:02 PM [main] TRACE org.springframework.transaction.interceptor.RuleBasedTransactionAttribute - {} - Winning rollback rule is: null
2021-Jun-21 20:18:02 PM [main] TRACE org.springframework.transaction.interceptor.RuleBasedTransactionAttribute - {} - No relevant rollback rule found: applying default rules
2021-Jun-21 20:18:02 PM [main] DEBUG org.springframework.kafka.transaction.KafkaTransactionManager - {} - Initiating transaction rollback
2021-Jun-21 20:18:02 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Removed value [org.springframework.kafka.core.KafkaResourceHolder#42aa1324] for key [org.springframework.kafka.core.DefaultKafkaProducerFactory#3976ebfa] from thread [main]
2021-Jun-21 20:18:02 PM [main] DEBUG org.springframework.kafka.transaction.KafkaTransactionManager - {} - Resuming suspended transaction after completion of inner transaction
2021-Jun-21 20:18:02 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Initializing transaction synchronization
2021-Jun-21 20:18:02 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Clearing transaction synchronization
2021-Jun-21 20:18:02 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Removed value [org.springframework.jdbc.datasource.ConnectionHolder#20a3e10c] for key [HikariDataSource (Hikari Handler DB Pool)] from thread [main]
2021-Jun-21 20:18:02 PM [main] TRACE org.springframework.transaction.support.TransactionSynchronizationManager - {} - Initializing transaction synchronization
2021-Jun-21 20:18:02 PM [main] INFO org.springframework.boot.autoconfigure.logging.ConditionEvaluationReportLoggingListener - {} -
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2021-Jun-21 20:18:02 PM [main] ERROR org.springframework.boot.SpringApplication - {} - Application run failed
java.lang.IllegalStateException: Failed to execute ApplicationRunner
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:789) [spring-boot-2.3.1.RELEASE.jar:2.3.1.RELEASE]
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:776) [spring-boot-2.3.1.RELEASE.jar:2.3.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:322) [spring-boot-2.3.1.RELEASE.jar:2.3.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1237) [spring-boot-2.3.1.RELEASE.jar:2.3.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) [spring-boot-2.3.1.RELEASE.jar:2.3.1.RELEASE]
at com.example.SampleBaseApplication.main(SampleBaseApplication.java:60) [classes/:?]
Caused by: java.lang.RuntimeException: exp...
at com.example.SampleBaseApplication$Foo.sendToKafkaAndDB(SampleBaseApplication.java:102) ~[classes/:?]
at com.example.SampleBaseApplication$Foo$$FastClassBySpringCGLIB$$ca642166.invoke(<generated>) ~[classes/:?]
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) ~[spring-core-5.2.7.RELEASE.jar:5.2.7.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:771) ~[spring-aop-5.2.7.RELEASE.jar:5.2.7.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) ~[spring-aop-5.2.7.RELEASE.jar:5.2.7.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749) ~[spring-aop-5.2.7.RELEASE.jar:5.2.7.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor$$Lambda$805/1148088421.proceedWithInvocation(Unknown Source) ~[?:?]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:367) ~[spring-tx-5.2.7.RELEASE.jar:5.2.7.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:118) ~[spring-tx-5.2.7.RELEASE.jar:5.2.7.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.2.7.RELEASE.jar:5.2.7.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749) ~[spring-aop-5.2.7.RELEASE.jar:5.2.7.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:691) ~[spring-aop-5.2.7.RELEASE.jar:5.2.7.RELEASE]
at com.example.SampleBaseApplication$Foo$$EnhancerBySpringCGLIB$$9728c463.sendToKafkaAndDB(<generated>) ~[classes/:?]
at com.example.SampleBaseApplication.lambda$0(SampleBaseApplication.java:68) ~[classes/:?]
at com.example.SampleBaseApplication$$Lambda$526/1532644077.run(Unknown Source) ~[?:?]
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:786) ~[spring-boot-2.3.1.RELEASE.jar:2.3.1.RELEASE]
... 5 more
2021-Jun-21 20:18:02 PM [main] INFO org.springframework.boot.web.embedded.tomcat.GracefulShutdown - {} - Commencing graceful shutdown. Waiting for active requests to complete
2021-Jun-21 20:18:02 PM [tomcat-shutdown] INFO org.springframework.boot.web.embedded.tomcat.GracefulShutdown - {} - Graceful shutdown complete
2021-Jun-21 20:18:02 PM [main] INFO org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - {} - Shutting down ExecutorService 'applicationTaskExecutor'
2021-Jun-21 20:18:02 PM [main] INFO org.apache.kafka.clients.producer.KafkaProducer - {} - [Producer clientId=producer-transx-0, transactionalId=transx-0] Closing the Kafka producer with timeoutMillis = 30000 ms.
2021-Jun-21 20:18:02 PM [main] INFO com.zaxxer.hikari.HikariDataSource - {} - Hikari Handler DB Pool - Shutdown initiated...
2021-Jun-21 20:18:02 PM [main] INFO com.zaxxer.hikari.HikariDataSource - {} - Hikari Handler DB Pool - Shutdown completed.

Can't connect RabbitMQ to my app from docker [duplicate]

This question already has an answer here: Connecting to rabbitmq docker container from service in another container (1 answer). Closed last year.
I have been stuck on this problem for about a week and really can't find an appropriate solution. Whenever I try to connect to the dockerized RabbitMQ, it gives me the same error:
wordofthedayapp-wordofthedayapp-1 | warn: MassTransit[0]
wordofthedayapp-wordofthedayapp-1 | Connection Failed: rabbitmq://localhost/
wordofthedayapp-wordofthedayapp-1 | RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable
wordofthedayapp-wordofthedayapp-1 | ---> System.AggregateException: One or more errors occurred. (Connection failed)
wordofthedayapp-wordofthedayapp-1 | ---> RabbitMQ.Client.Exceptions.ConnectFailureException: Connection failed
wordofthedayapp-wordofthedayapp-1 | ---> System.TimeoutException: The operation has timed out.
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.Impl.TaskExtensions.TimeoutAfter(Task task, TimeSpan timeout)
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.Impl.SocketFrameHandler.ConnectOrFail(ITcpClient socket, AmqpTcpEndpoint endpoint, TimeSpan timeout)
wordofthedayapp-wordofthedayapp-1 | --- End of inner exception stack trace ---
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.Impl.SocketFrameHandler.ConnectOrFail(ITcpClient socket, AmqpTcpEndpoint endpoint, TimeSpan timeout)
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.Impl.SocketFrameHandler.ConnectUsingAddressFamily(AmqpTcpEndpoint endpoint, Func`2 socketFactory, TimeSpan timeout, AddressFamily family)
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.Impl.SocketFrameHandler.ConnectUsingIPv4(AmqpTcpEndpoint endpoint, Func`2 socketFactory, TimeSpan timeout)
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.Impl.SocketFrameHandler..ctor(AmqpTcpEndpoint endpoint, Func`2 socketFactory, TimeSpan connectionTimeout, TimeSpan readTimeout, TimeSpan writeTimeout)
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.Framing.Impl.IProtocolExtensions.CreateFrameHandler(IProtocol protocol, AmqpTcpEndpoint endpoint, Func`2 socketFactory, TimeSpan connectionTimeout, TimeSpan readTimeout, TimeSpan writeTimeout)
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.ConnectionFactory.CreateFrameHandler(AmqpTcpEndpoint endpoint)
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.EndpointResolverExtensions.SelectOne[T](IEndpointResolver resolver, Func`2 selector)
wordofthedayapp-wordofthedayapp-1 | --- End of inner exception stack trace ---
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.EndpointResolverExtensions.SelectOne[T](IEndpointResolver resolver, Func`2 selector)
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.ConnectionFactory.CreateConnection(IEndpointResolver endpointResolver, String clientProvidedName)
wordofthedayapp-wordofthedayapp-1 | --- End of inner exception stack trace ---
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.ConnectionFactory.CreateConnection(IEndpointResolver endpointResolver, String clientProvidedName)
wordofthedayapp-wordofthedayapp-1 | at RabbitMQ.Client.ConnectionFactory.CreateConnection(IList`1 hostnames, String clientProvidedName)
wordofthedayapp-wordofthedayapp-1 | at MassTransit.RabbitMqTransport.Integration.ConnectionContextFactory.CreateConnection(ISupervisor supervisor)
Here you can find my docker-compose.yml:
version: '3.9'
services:
  rabbitmq:
    image: rabbitmq:3.9-management
    hostname: rabbitmq
    volumes:
      - "~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/"
      - "~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq"
    ports:
      - 5672:5672
      - 15672:15672
    expose:
      - 5672
      - 15672
    environment:
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
    healthcheck:
      test: [ "CMD", "rabbitmqctl", "status", "-f", "http://localhost:15672" ]
      interval: 5s
      timeout: 20s
      retries: 5
    networks:
      - app
  ms-sql-server:
    container_name: ms-sql-server
    image: mcr.microsoft.com/mssql/server:2019-latest
    user: root
    volumes:
      - "appdb:/var/opt/mssql/data"
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Password123!"
      MSSQL_PID: Express
    ports:
      - 1433:1433
    healthcheck:
      test: [ "CMD", "ping", "-h", "localhost" ]
      timeout: 20s
      retries: 10
    networks:
      - app
  wordofthedayapp:
    build:
      dockerfile: WordOfTheDay.Api/Dockerfile
    image: wordofthedayapp
    environment:
      DbServer: "ms-sql-server"
      DbPort: "1433"
      DbUser: "sa"
      Password: "Password123!"
      Database: "appdb"
    ports:
      - 5001:80
    restart: on-failure
    depends_on:
      - rabbitmq
    networks:
      - app
volumes:
  appdb:
networks:
  app:
My appsettings string:
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*",
  "ConnectionStrings": {
    "WordContext": "Server=ms-sql-server;Database=master;User=sa;Password=Password123!;MultipleActiveResultSets=true;Integrated Security=false;TrustServerCertificate=true",
    "RabbitMQHost": "amqp://elias:123456@localhost:5672"
  }
}
This is how it works in the app using MassTransit:
public static void AddConfiguredMassTransit(this IServiceCollection services, string host)
{
    services.AddMassTransit(configuration =>
    {
        configuration.UsingRabbitMq((context, config) =>
        {
            config.Host(host);
        });
    });
    services.AddMassTransitHostedService();
}

services.AddConfiguredMassTransit(Configuration.GetConnectionString("RabbitMQHost"));
I hope someone knows what is wrong with this code, because I am really tired of trying to fix it and browsing the internet for a solution. Thank you in advance!
P.S. Important information: everything works perfectly when I test it locally without Docker, but this happens when I try to dockerize the app.
If you are testing locally, the host is likely localhost since Docker is exposing the port to the local machine. When running in a container, however, the virtual network should have a hostname of rabbitmq, which would need to be used instead of localhost when running inside a container on the same network.
Since the log shows:
Connection Failed: rabbitmq://localhost/
I'm guessing you aren't updating the host name when running inside a container.
You can determine if your application is running in a container easily:
bool IsRunningInContainer => bool.TryParse(Environment.GetEnvironmentVariable("DOTNET_RUNNING_IN_CONTAINER"), out var inDocker) && inDocker;
Then, in your configuration:
var host = IsRunningInContainer ? "rabbitmq" : "localhost";

Firebase: Cloud Function Triggers: six hidden "_xxxxx" documents in new collections triggering cloud functions

I recently upgraded my google cloud functions to Node.js 10. I've encountered strange occurrences of hidden documents in new collections triggering cloud functions. These are the documents:
_createTime
_fieldsProto
_readTime
_ref
_serializer
_updateTime
These document names do not exist in my code, but they do exist in the constructor of a DocumentSnapshot: googleapis.dev/nodejs/firestore/latest/document.js.html
I have a cloud function that triggers onCreate for documents at the example path ('followers/{userId}/userFollowers/{followerId}'). This has now occurred three times, each time with a new user and the first creation of this sub-collection.
As a temporary workaround, I told the function to ignore [followerId]s that match these hidden document names. However, I have over 30 functions that trigger on document create, and I don't want this hack at the top of each of them permanently.
Has anyone else experienced this? Any idea what's going on?
Here is my Flutter/Dart code to follow a user:
followUser(String followerId) async {
  await getUserRef(uid: followerId).get().then((doc) {
    if (!doc.exists) return;
    documentUpdate(
      docRef: getFollowersRef(fid: followerId, uid: currentUser.id),
      payload: {
        // PAYLOAD EXAMPLE:
        'notificationToken': currentUser.notificationToken, // String
        'username': currentUser.username, // String
        'ofUsername': doc.data['username'], // String
        'profileImgURL': currentUser.photoUrl, // String
        'timestamp': DateTime.now(), // Timestamp
        'displayName': currentUser.displayName, // String
      });
    documentUpdate(
      docRef: getFollowingRef(uid: currentUser.id, fid: followerId),
      payload: {
        // Follower data. Same as above but inverted for follower.
      });
  });
}

documentUpdate({DocumentReference docRef, Map<String, dynamic> payload}) {
  return docRef.setData(payload, merge: true);
}
Snippet from index.js cloud function:
exports.onNewFollower = functions.firestore
  .document('/followers/{userId}/userFollowers/{followerId}')
  .onCreate(async (snapshot, context) => {
    const userId = context.params.userId;
    const followerId = context.params.followerId;
    console.log(`uid: [${userId}], fid: [${followerId}]`);
    // Note: the guard must use '_fieldsProto' (the original had a typo, '_fieldsPronto')
    if (followerId == 'index' ||
        userId == '_ref' || // ADDED THIS AFTER FOR PREVENTION
        userId == '_fieldsProto' ||
        userId == '_createTime' ||
        userId == '_readTime' ||
        userId == '_updateTime' ||
        userId == '_serializer' ||
        followerId == '_ref' ||
        followerId == '_fieldsProto' ||
        followerId == '_createTime' ||
        followerId == '_readTime' ||
        followerId == '_updateTime' ||
        followerId == '_serializer') {
      return await Promise.resolve(true);
    }
    var promises = [];
    promises.push(
      db.collection('followers').doc(userId).collection('userFollowers').doc('index').set(
        { 'uids': { [followerId]: true } },
        { merge: true },
      )
    );
    promises.push(
      db.collection('following').doc(followerId).collection('userFollowing').doc('index').set(
        { 'uids': { [userId]: true } },
        { merge: true },
      )
    );
    promises.push(
      db.collection('activityFeed').doc(userId).collection('feedItems').doc(followerId).set(
        {
          timestamp: admin.firestore.FieldValue.serverTimestamp(),
          type: 'newFollower',
          userId: followerId,
          userProfileImg: snapshot.data().profileImgURL,
          username: snapshot.data().username,
          displayName: snapshot.data().displayName,
        }
      )
    );
    return await Promise.all(promises);
  });
Here is an example of the cloud function console output when these odd documents are processed. It is confusing to read, but it is clear that these odd documents trigger after the intended one does. The cloud function throws errors because these odd documents don't contain the data the function requires to complete.
7:16:25.473 PM
onNewFollower
Function execution started
7:16:26.535 PM
onNewFollower
uid: [Du1orkZrykWJ1BL0kKOuj4HO0ji2], fid: [LtiIcZ8rrphcEnyCWnieKvte6ln2]
7:16:27.442 PM
onNewFollower
Function execution took 1971 ms, finished with status: 'ok'
7:17:14.663 PM
onNewFollower
Function execution started
7:17:14.677 PM
onNewFollower
uid: [_ref], fid: [LtiIcZ8rrphcEnyCWnieKvte6ln2]
7:17:14.684 PM
onNewFollower
Function execution took 21 ms, finished with status: 'error'
7:17:15.342 PM
onNewFollower
Function execution started
7:17:15.352 PM
onNewFollower
Function execution took 11 ms, finished with status: 'ok'
7:17:15.659 PM
onNewFollower
Function execution started
7:17:15.664 PM
onNewFollower
uid: [_createTime], fid: [LtiIcZ8rrphcEnyCWnieKvte6ln2]
7:17:15.666 PM
onNewFollower
Error: Value for argument "data" is not a valid Firestore document. Cannot use "undefined" as a Firestore value (found in field "username"). at Object.validateUserInput (/workspace/node_modules/@google-cloud/firestore/build/src/serializer.js:251:15) at validateDocumentData (/workspace/node_modules/@google-cloud/firestore/build/src/write-batch.js:610:22) at WriteBatch.set (/workspace/node_modules/@google-cloud/firestore/build/src/write-batch.js:232:9) at DocumentReference.set (/workspace/node_modules/@google-cloud/firestore/build/src/reference.js:338:14) at exports.onNewFollower.functions.firestore.document.onCreate (/workspace/index.js:842:99) at cloudFunction (/workspace/node_modules/firebase-functions/lib/cloud-functions.js:131:23) at Promise.resolve.then (/layers/google.nodejs.functions-framework/functions-framework/node_modules/@google-cloud/functions-framework/build/src/invoker.js:330:28) at process._tickCallback (internal/process/next_tick.js:68:7)
7:17:15.667 PM
onNewFollower
Function execution took 31 ms, finished with status: 'error'
7:17:15.790 PM
onNewFollower
Function execution started
7:17:15.797 PM
onNewFollower
Function execution took 7 ms, finished with status: 'ok'
7:17:16.682 PM
onNewFollower
Error: Value for argument "data" is not a valid Firestore document. Cannot use "undefined" as a Firestore value (found in field "username"). at Object.validateUserInput (/workspace/node_modules/@google-cloud/firestore/build/src/serializer.js:251:15) at validateDocumentData (/workspace/node_modules/@google-cloud/firestore/build/src/write-batch.js:610:22) at WriteBatch.set (/workspace/node_modules/@google-cloud/firestore/build/src/write-batch.js:232:9) at DocumentReference.set (/workspace/node_modules/@google-cloud/firestore/build/src/reference.js:338:14) at exports.onNewFollower.functions.firestore.document.onCreate (/workspace/index.js:842:99) at cloudFunction (/workspace/node_modules/firebase-functions/lib/cloud-functions.js:131:23) at Promise.resolve.then (/layers/google.nodejs.functions-framework/functions-framework/node_modules/@google-cloud/functions-framework/build/src/invoker.js:330:28) at process._tickCallback (internal/process/next_tick.js:68:7)
7:17:17.467 PM
onNewFollower
Function execution started
7:17:17.616 PM
onNewFollower
Function execution started
7:17:17.948 PM
onNewFollower
Function execution started
7:17:18.288 PM
onNewFollower
Function execution started
7:17:18.664 PM
onNewFollower
uid: [_readTime], fid: [LtiIcZ8rrphcEnyCWnieKvte6ln2]
7:17:19.069 PM
onNewFollower
Function execution took 1603 ms, finished with status: 'error'
7:17:19.363 PM
onNewFollower
uid: [_updateTime], fid: [LtiIcZ8rrphcEnyCWnieKvte6ln2]
7:17:19.430 PM
onNewFollower
uid: [_serializer], fid: [LtiIcZ8rrphcEnyCWnieKvte6ln2]
7:17:19.863 PM
onNewFollower
Function execution took 2247 ms, finished with status: 'error'
7:17:19.878 PM
onNewFollower
Function execution took 1931 ms, finished with status: 'error'
7:17:19.960 PM
onNewFollower
uid: [_fieldsProto], fid: [LtiIcZ8rrphcEnyCWnieKvte6ln2]
7:17:20.089 PM
onNewFollower
Error: Value for argument "data" is not a valid Firestore document. Cannot use "undefined" as a Firestore value (found in field "username"). (stack trace identical to the first occurrence above)
7:17:20.417 PM
onNewFollower
Function execution took 2129 ms, finished with status: 'error'
7:17:20.923 PM
onNewFollower
Error: Value for argument "data" is not a valid Firestore document. Cannot use "undefined" as a Firestore value (found in field "username"). (stack trace identical to the first occurrence above)
7:17:20.928 PM
onNewFollower
Error: Value for argument "data" is not a valid Firestore document. Cannot use "undefined" as a Firestore value (found in field "username"). (stack trace identical to the first occurrence above)
7:17:21.428 PM
onNewFollower
Error: Value for argument "data" is not a valid Firestore document. Cannot use "undefined" as a Firestore value (found in field "username"). (stack trace identical to the first occurrence above)
7:17:36.561 PM
onNewFollower
Function execution started
7:17:36.572 PM
onNewFollower
Function execution took 11 ms, finished with status: 'ok'
7:17:37.567 PM
onNewFollower
Function execution started
7:17:37.581 PM
onNewFollower
Function execution took 15 ms, finished with status: 'ok'
7:17:41.263 PM
onNewFollower
Function execution started
7:17:41.370 PM
onNewFollower
Function execution took 108 ms, finished with status: 'ok'
7:17:42.070 PM
onNewFollower
Function execution started
7:17:42.853 PM
onNewFollower
Function execution took 785 ms, finished with status: 'ok'
7:24:25.667 PM
onNewFollower
Function execution started
7:24:26.775 PM
onNewFollower
uid: [ilxBwWHVDiVJ2iR4AsfIxrpIUgb2], fid: [LtiIcZ8rrphcEnyCWnieKvte6ln2]
7:24:27.567 PM
onNewFollower
Function execution took 1901 ms, finished with status: 'ok'
The document was written successfully, and the collection shows none of the odd documents.
I posted this issue to the FlutterFire team. They've informed me that this is in fact an issue with the plugin and that the fix will be rolled out in the coming updates to the package...

Corda Issue Connecting to TestNet Node Using Tools Explorer

Attempting to connect to a Corda TestNet node following the instructions from this link, but I continue to get the error below:
https://docs.corda.net/testnet-explorer-corda.html
net.corda.nodeapi.exceptions.InternalNodeException: Something went wrong within the Corda node.
Node log contents:
[ERROR] 2019-10-15T13:37:48,109Z [Node thread-1] proxies.ExceptionMaskingRpcOpsProxy.log - Error during RPC invocation [errorCode=24h7hj, moreInformationAt=https://errors.corda.net/OS/4.0/24h7hj] {actor_id=rpcuser, actor_owning_identity=OU=Cb014cf3e-d863-4d6f-827b-6f8813de6b9c, O=TESTNET_Clear Markets, L=London, C=GB, actor_store_id=NODE_CONFIG, fiber-id=10000003, flow-id=94fd17a4-3a59-4acc-bd50-97220790bcc8, invocation_id=bbd17234-6b67-47ab-83e5-50823956001e, invocation_timestamp=2019-10-15T13:37:48.093Z, origin=rpcuser, session_id=329a00ad-4747-4f9d-b0fd-5b38dcf4ef1e, session_timestamp=2019-10-15T13:37:44.868Z, thread-id=114}
java.lang.IllegalArgumentException: Corda service net.corda.finance.internal.ConfigHolder does not exist
at net.corda.node.internal.AbstractNode$ServiceHubInternalImpl.cordaService(AbstractNode.kt:985) ~[corda-node-4.0.jar:?]
at net.corda.finance.internal.CashConfigDataFlow.call(CashConfigDataFlow.kt:47) ~[corda-finance-workflows-4.1.jar:?]
at net.corda.finance.internal.CashConfigDataFlow.call(CashConfigDataFlow.kt:44) ~[corda-finance-workflows-4.1.jar:?]
at net.corda.node.services.statemachine.FlowStateMachineImpl.run(FlowStateMachineImpl.kt:228) ~[corda-node-4.0.jar:?]
at net.corda.node.services.statemachine.FlowStateMachineImpl.run(FlowStateMachineImpl.kt:45) ~[corda-node-4.0.jar:?]
at co.paralleluniverse.fibers.Fiber.run1(Fiber.java:1092) ~[quasar-core-0.7.10-jdk8.jar:0.7.10]
at co.paralleluniverse.fibers.Fiber.exec(Fiber.java:788) ~[quasar-core-0.7.10-jdk8.jar:0.7.10]
at co.paralleluniverse.fibers.RunnableFiberTask.doExec(RunnableFiberTask.java:100) ~[quasar-core-0.7.10-jdk8.jar:0.7.10]
at co.paralleluniverse.fibers.RunnableFiberTask.run(RunnableFiberTask.java:91) ~[quasar-core-0.7.10-jdk8.jar:0.7.10]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_222]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_222]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_222]
at net.corda.node.utilities.AffinityExecutor$ServiceAffinityExecutor$1$thread$1.run(AffinityExecutor.kt:63) ~[corda-node-4.0.jar:?]
[WARN ] 2019-10-15T13:47:39,383Z [Thread-14] core.client.fail - AMQ212037: Connection failure has been detected: syscall:read(..) failed: Connection reset by peer [code=GENERIC_EXCEPTION]
[WARN ] 2019-10-15T13:47:39,404Z [Thread-34 (ActiveMQ-client-global-threads)] rpc.RPCServer.bindingRemovalArtemisMessageHandler - Detected RPC client disconnect on address rpc.client.rpcuser.6152356516528416918, scheduling for reaping
Found the solution.
The following needs to be added to node.conf:
"cordappSignerKeyFingerprintBlacklist" : [
],
because the error occurs while the node attempts to load the "demo" apps:
[WARN ] 2019-10-15T18:30:02,203Z [main] cordapp.JarScanningCordappLoader.loadCordapps - Not loading CorDapp Corda Finance Demo (R3) as it is signed by development key(s) only: [Sun EC public key, 256 bits
public x coord: 1606301601488985262456987828510069198490398827685577289418991162593641911319
public y coord: 39305038020852387120148508817752828470229346772241518574141632445003972282481
parameters: secp256r1 [NIST P-256, X9.62 prime256v1] (1.2.840.10045.3.1.7)].
(The same "Not loading CorDapp Corda Finance Demo (R3)" warning is repeated three more times, at 18:30:02,217Z, 18:30:02,232Z, and 18:30:02,332Z.)
[INFO ] 2019-10-15T18:30:02,871Z [main] internal.Node.startNode - The Corda node is running in production mode. If this is a developer environment you can set 'devMode=true' in the node.conf file.
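For context, the key is a top-level entry in node.conf (HOCON). The fragment below is illustrative only, with every other required key omitted; setting it to an empty list clears the default blacklist of Corda development signing keys, which is what allows the dev-signed Corda Finance Demo CorDapps to load:

```hocon
// node.conf (HOCON) -- illustrative fragment, all other required keys omitted
cordappSignerKeyFingerprintBlacklist = []
```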

docker compose jest integration tests cannot make intra-container http requests RequestError: Error: getaddrinfo ENOTFOUND

I have a docker compose stack with a few containers. The two in question are an extended python:3-onbuild container (see base image here) running a Falcon web server, and a basic node:8.11-alpine container that is attempting to make POST requests to the Python web server. The Python web server makes database calls to a postgres:alpine container. This is a simplified version of my docker-compose.yml file:
version: '3.6'
services:
  app: # python:3-onbuild
    ports:
      - 5000
    build:
      context: ../../
      dockerfile: infra/docker/app.Dockerfile
  lambda: # node:8.11-alpine
    ports:
      - 10000
    build:
      context: ../../
      dockerfile: infra/docker/lambda.Dockerfile
    depends_on:
      - app
  db: # postgres:alpine
    ports:
      - 5432:5432
    build:
      context: ../../
      dockerfile: infra/docker/db.Dockerfile
I have a suite of integration tests I'm trying to pass. When I run the tests with an endpoint in the outside world (https python server running the same code as the app service) all of the tests pass. The problem arises when I try to make these requests against the app service from within my docker-compose stack, specifically from the lambda service to the app service.
I don't think it is a timing issue. In the beforeAll function of my jest suite, I poll the python server for 100 seconds, once every 10 seconds like this:
In my jest test
beforeAll(async () => {
const endpointUrl = getEndpointUrl();
const status = await checkStatus(endpointUrl, 10);
// NOTE: never reaches this line, starts tests before 5-minute timeout
console.log(`[integration.test]: status = ${status}`);
}, 5 * 60 * 1000); // 5-minute timeout for async function
Where checkStatus() is defined:
import * as rp from 'request-promise-native';

/**
 * Checks the status of the backend every 10 seconds, n times
 * @param {string} endpointUrl
 * @param {number} n
 */
export function checkStatus(endpointUrl: string, n: number): Promise<string> {
  console.log(`[lib][checkStatus]: before timeout date = ${Date()}`);
  return new Promise<string>((resolve, reject) => {
    checkStatusHelper(endpointUrl, n, resolve, reject);
  });
}

function checkStatusHelper(endpointUrl, n, resolve, reject): void {
  if (n === 0) {
    reject();
    return; // without this return, another attempt would still be scheduled
  }
  setTimeout(() => {
    console.log(`[lib][checkStatus]: after timeout date ${n} = ${Date()}`);
    rp.get(`${endpointUrl}/status`)
      .then(res => {
        console.log(`[lib][checkStatus]: after request date = ${Date()}`);
        console.log(`[lib][checkStatus]: res = ${res}`);
        resolve('success');
      })
      .catch(err => {
        console.log(`[lib][checkStatus]: err = ${err}`);
        checkStatusHelper(endpointUrl, n - 1, resolve, reject);
      });
  }, 10 * 1000);
}
I get output like the following (with request-promise-native):
console.log src/lib.ts:30
[lib][checkStatus]: before timeout date = Fri May 11 2018 01:54:10 GMT+0000 (UTC)
console.log src/lib.ts:39
[lib][checkStatus]: after timeout date 10 = Fri May 11 2018 01:54:20 GMT+0000 (UTC)
console.log src/lib.ts:47
[lib][checkStatus]: err = RequestError: Error: getaddrinfo ENOTFOUND app app:5000
console.log src/lib.ts:39
[lib][checkStatus]: after timeout date 9 = Fri May 11 2018 01:54:31 GMT+0000 (UTC)
console.log src/lib.ts:47
[lib][checkStatus]: err = RequestError: Error: getaddrinfo ENOTFOUND app app:5000
# etc ...
And similar output with axios instead of request-promise-native
console.log src/lib.ts:30
[lib][checkStatus]: before timeout date = Fri May 11 2018 01:56:57 GMT+0000 (UTC)
console.log src/lib.ts:39
[lib][checkStatus]: after timeout date 10 = Fri May 11 2018 01:57:07 GMT+0000 (UTC)
console.error node_modules/jsdom/lib/jsdom/virtual-console.js:29
Error: Error: getaddrinfo ENOTFOUND app app:5000
at Object.dispatchError (/Halloo/node_modules/jsdom/lib/jsdom/living/xhr-utils.js:65:19)
at Request.client.on.err (/Halloo/node_modules/jsdom/lib/jsdom/living/xmlhttprequest.js:676:20)
at emitOne (events.js:121:20)
at Request.emit (events.js:211:7)
at Request.onRequestError (/Halloo/node_modules/request/request.js:878:8)
at emitOne (events.js:116:13)
at ClientRequest.emit (events.js:211:7)
at Socket.socketErrorListener (_http_client.js:387:9)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7) undefined
console.log src/lib.ts:47
[lib][checkStatus]: err = Error: Network Error
console.log src/lib.ts:39
[lib][checkStatus]: after timeout date 9 = Fri May 11 2018 01:57:18 GMT+0000 (UTC)
console.error node_modules/jsdom/lib/jsdom/virtual-console.js:29
Error: Error: getaddrinfo ENOTFOUND app app:5000
at Object.dispatchError (/Halloo/node_modules/jsdom/lib/jsdom/living/xhr-utils.js:65:19)
at Request.client.on.err (/Halloo/node_modules/jsdom/lib/jsdom/living/xmlhttprequest.js:676:20)
at emitOne (events.js:121:20)
at Request.emit (events.js:211:7)
at Request.onRequestError (/Halloo/node_modules/request/request.js:878:8)
at emitOne (events.js:116:13)
at ClientRequest.emit (events.js:211:7)
at Socket.socketErrorListener (_http_client.js:387:9)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7) undefined
console.log src/lib.ts:47
[lib][checkStatus]: err = Error: Network Error
# etc ...
I also don't think it is a networking issue. If I comment out my test step and let the lambda service spin up normally, I can ssh into the lambda container and test the connection using axios or request-promise-native. The /status endpoint simply returns all the table names in my database, which also confirms the database container is running properly. This works almost instantly, well within the 100-second window:
$ node
> const axios = require('axios')
undefined
> axios.get('http://app:5000/status').then(res => console.log(res.data));
Promise {
<pending>,
domain:
Domain {
domain: null,
_events: { error: [Function: debugDomainError] },
_eventsCount: 1,
_maxListeners: undefined,
members: [] } }
> [ [ 'table0' ],
[ 'table1' ],
[ 'table2' ],
[ 'table3' ],
[ 'table4' ],
[ 'table5' ] ]
I thought maybe Jest couldn't be used for these purposes, but it evidently can, because everything works flawlessly when I change the URL to my production HTTPS endpoint running in AWS and comment out the beforeAll() step in my jest suite.
It ended up being a problem with my Jest configuration. From the Jest docs:
The default environment in Jest is a browser-like environment through jsdom. If you are building a node service, you can use the node option to use a node-like environment instead.
To fix this I had to add this in my jest.config.js file:
{
// ...
testEnvironment: 'node'
// ...
}
Source: https://github.com/axios/axios/issues/1418
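A related option, if only the integration suites make real network calls: Jest also supports overriding the environment per file with a docblock pragma at the top of the test file, so the rest of the project can keep the default jsdom environment. A sketch:

```javascript
/**
 * @jest-environment node
 */
// ...rest of integration.test.ts unchanged
```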

Exception message: Invalid URI: The hostname could not be parsed

We have a .NET web application on Windows Server 2016, and we have bound the URL in IIS v10.
If we browse the URL from the web it works fine, but whenever we test it from the Dynatrace client we see an error:
Exception message: Invalid URI: The hostname could not be parsed.
Here is what we see in the event log:
Event code: 3005
Event message: An unhandled exception has occurred.
Event time: 3/12/2018 6:00:06 AM
Event time (UTC): 3/12/2018 10:00:06 AM
Event ID: 968692c222434a6396144999bc967aef
Event sequence: 566
Event occurrence: 1
Event detail code: 0
Application information:
Application domain: /LM/W3SVC/4/ROOT-xxxxx
Trust level: Full
Application Virtual Path: /
Application Path: D:\path\path
Machine name: server
Process information:
Process ID: 6916
Process name: w3wp.exe
Account name: server\tappp
Exception information:
Exception type: UriFormatException
Exception message: Invalid URI: The hostname could not be parsed.
at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result)
at System.Web.HttpApplication.CallHandlerExecutionStep.InvokeEndHandler(IAsyncResult ar)
at System.Web.HttpApplication.CallHandlerExecutionStep.OnAsyncHandlerCompletion(IAsyncResult ar)
Request information:
Request URL: http://server.app.local:1110/ccccll.svc
Request path: /ccccll.svc
User host address: xx.xxx.xx.xx
User:
Is authenticated: False
Authentication Type:
Thread account name: server\app
Thread information:
Thread ID: 80
Thread account name: server\app
Is impersonating: False
Stack trace: at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result)
at System.Web.HttpApplication.CallHandlerExecutionStep.InvokeEndHandler(IAsyncResult ar)
at System.Web.HttpApplication.CallHandlerExecutionStep.OnAsyncHandlerCompletion(IAsyncResult ar)
