How to enable TLS in Corda 3.1?

What's the correct way to configure TLS in production on a Corda Node?
We're trying to enable TLS on the CorDapp sample, version 3.1, but the following error occurs in the Corda webserver:
[ERROR] 2018-05-03T13:58:16,984Z [main] Main.main - Exception during node startup {}
org.apache.activemq.artemis.api.core.ActiveMQConnectionTimedOutException: AMQ119013: Timed out waiting to receive cluster topology. Group:null
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:804)
Our node.conf file is:
myLegalName="O=PartyA,L=London,C=GB"
p2pAddress="localhost:10005"
rpcSettings = {
    address="localhost:10006"
    adminAddress="localhost:10046"
    useSsl=true
    ssl {
        certificatesDirectory="./certificates"
        keyStorePassword="cordacadevpass"
        trustStorePassword="trustpass"
    }
}
rpcUsers=[
    {
        password=test
        permissions=[
            ALL
        ]
        username=user1
    }
]
webAddress="localhost:10007"
devMode=true

According to Mike Hearn on the Corda Ledger Slack channel, RPC SSL is broken in Corda 3.1 and a rework is underway in a pull request.
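Until that rework ships, a common workaround is to turn SSL off for the RPC listener (P2P TLS is configured separately, through the node's certificates directory). A minimal sketch of the rpcSettings block under that assumption, keeping the ports above:

rpcSettings = {
    address="localhost:10006"
    adminAddress="localhost:10046"
    // Workaround for 3.1: RPC over SSL is broken, so disable it until the fix lands
    useSsl=false
}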

Related

gRPC integration with a Spring Camel web app

I have a web service built on Spring Camel. I am trying to integrate a gRPC server using grpc-spring-boot-starter. My implementation of the gRPC service is below.
@GrpcService
public class GreetingServiceImpl extends GreetingServiceGrpc.GreetingServiceImplBase {
    @Override
    public void processGrpcRequest(GreetingRequest request, StreamObserver<GreetingResponse> responseObserver) {
        String receivedMessage = request.getRequest();
        GreetingResponse response = GreetingResponse.newBuilder()
                .setResponse("Your message received " + receivedMessage).build();
        responseObserver.onNext(response);
        responseObserver.onCompleted();
    }
}
I package my web service as a WAR and see no errors while deploying the WAR file to my application server. However, when I try to communicate with my gRPC server, I get the following error message.
Exception in thread "main" 15:18:01.363 [grpc-nio-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x02a01ffb, L:/127.0.0.1:56000 - R:localhost/127.0.0.1:9089] INBOUND PING: ack=true bytes=1234
io.grpc.StatusRuntimeException: UNIMPLEMENTED: HTTP status code 404
invalid content-type: text/html
headers: Metadata(:status=404,content-type=text/html,date=Fri, 04 Mar 2022 09:48:01 GMT,content-length=74)
DATA-----------------------------
<html><head><title>Error</title></head><body>404 - Not Found</body></html>
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:262)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:243)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:156)
at com.test.grpc.GreetingServiceGrpc$GreetingServiceBlockingStub.processGrpcRequest(GreetingServiceGrpc.java:156)
at com.test.grpc.GrpcClient.main(GrpcClient.java:16)
I have recreated the same issue with a minimal setup; the code is available on GitHub. Can anyone help with this? Thanks.
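For reference, the client side of the failing call looks roughly like this (a minimal sketch reconstructed from the stack trace above; the target localhost:9089 comes from the log, and plaintext is assumed since no TLS appears in the trace):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

// Assumes the generated classes GreetingServiceGrpc, GreetingRequest and
// GreetingResponse from the proto are on the classpath.
public class GrpcClient {
    public static void main(String[] args) {
        // Port 9089 is taken from the log above; this must be the gRPC (HTTP/2)
        // port, not the servlet container's HTTP port.
        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 9089)
                .usePlaintext()
                .build();
        GreetingResponse response = GreetingServiceGrpc.newBlockingStub(channel)
                .processGrpcRequest(GreetingRequest.newBuilder().setRequest("hello").build());
        System.out.println(response.getResponse());
        channel.shutdown();
    }
}

The 404 with content-type text/html suggests the client is reaching an HTTP/1.1 servlet endpoint rather than a gRPC server: grpc-spring-boot-starter runs its own Netty server on a separate port, and that server may never start when the application is deployed as a WAR inside an external application server.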

Spring Cloud Stream with Kafka Binder: /bindings Actuator API does not stop producer

I have a Spring Cloud Stream project with Actuator and the Kafka binder. I am exploring the /bindings actuator endpoint and am trying to stop a producer as an exercise. I make the following POST request via curl:
curl -v 'localhost:8081/actuator/bindings/producer-out-0' -H 'content-type: application/json' -d '{"state": "STOPPED"}'
Actual Results:
The query returns 204. The state of the producer (seen from GET /actuator/bindings/producer-out-0) is now stopped. The producer is still producing messages, however, which can be seen from both logging and consumer activity on the topic.
Expected Results:
I expected the producer to stop producing messages. (I have also tried using the PAUSED state, which also returns 204, but error logs indicate that this producer cannot be paused.)
Do I misunderstand how this actuator works? When a producer is stopped, is it expected that Spring Cloud Stream will continue to poll that producer? The only documentation I am aware of is here, but it doesn't answer my questions as far as I can tell.
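For reference, the state I mention comes from a plain GET against the same endpoint:

curl 'localhost:8081/actuator/bindings/producer-out-0'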
Background:
I am using spring-boot-starter-parent 2.5.3 and have starter-web and starter-actuator listed as dependencies. I don't think I'm missing any.
This is the producer/consumer pair. As you can see, I am using a pollable supplier.
@Configuration
@Profile("numbers")
public class NumberHandlers {

    private static final Logger LOGGER = LoggerFactory.getLogger(NumberHandlers.class);

    @Bean
    public Supplier<Integer> producer() {
        // Needed an effectively-final mutable integer. Side-bar comments welcome. :P
        var counter = new AtomicInteger();
        return () -> {
            var n = counter.getAndIncrement();
            LOGGER.info("Producing number: " + n);
            return n;
        };
    }

    @Bean
    public Consumer<Integer> consumer() {
        return it -> LOGGER.info("Consuming number: " + it);
    }
}
These are active when I pass in the numbers profile. My configurations are below.
application.yml:
server:
  port: 8081
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: ${env.kafka.bootstrapservers:localhost}
management:
  endpoints:
    web:
      exposure:
        include: 'bindings'
... and application-numbers.yml:
spring:
  cloud:
    stream:
      poller:
        fixedDelay: 5000
      bindings:
        producer-out-0:
          destination: numbers-raw
          producer:
            partitionCount: 3
        consumer-in-0:
          destination: numbers-raw
      kafka:
        bindings:
          producer-out-0:
            producer:
              topic.properties:
                # These look weird because they're done as an exercise.
                retention.bytes: 10000
                retention.ms: 172800000
      function:
        definition: producer;consumer
I am testing in a localhost environment, with Kafka and ZooKeeper running via docker-compose on the host network.
Thanks!
Lifecycle control of producer bindings is not currently supported, only consumer bindings.
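A workaround, if you need to silence the producer at runtime, is to gate the supplier yourself: returning null from the Supplier emits nothing on that poll. A sketch under that assumption (the PausableProducer class and pause() hook are hypothetical, not a framework API):

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PausableProducer {

    private final AtomicBoolean paused = new AtomicBoolean(false);

    @Bean
    public Supplier<Integer> producer() {
        AtomicInteger counter = new AtomicInteger();
        // When paused, return null so the poller sends no message.
        return () -> paused.get() ? null : counter.getAndIncrement();
    }

    // Hypothetical hook: flip this from your own REST endpoint or scheduler.
    public void pause(boolean value) {
        paused.set(value);
    }
}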

"Got an error reading communication packets" when connecting to Google Cloud SQL from Firebase Functions

I connect to Google Cloud SQL from Firebase Functions using a whitelisted public IP. It worked until yesterday. I now get the following error:
{
  insertId: "s=cd0e3771777e45499a64e5d27f181595;i=41f0f9;b=c506ddb142564223b97b8eae98860618;m=154ecd3c3d6;t=57b6a4c27df31;x=1fb3c8162344104e-0#a1"
  logName: "projects/[my-project]/logs/cloudsql.googleapis.com%2Fmysql.err"
  receiveTimestamp: "2018-11-24T14:56:04.390263054Z"
  resource: {
    labels: {
      database_id: "[my-project]:[my-db]"
      project_id: "[my-project]"
      region: "asia"
    }
    type: "cloudsql_database"
  }
  severity: "ERROR"
  textPayload: "2018-11-24T14:55:57.984322Z 246208 [Note] Aborted connection 246208 to db: '[db-name]' user: '[db-user]' host: '107.178.237.15' (Got an error reading communication packets)"
  timestamp: "2018-11-24T14:55:57.984561Z"
}
The DB instance is fine. I can connect from my laptop as well as from a GCE instance, just not from Firebase Functions. But it used to work. Is there a way to get a more descriptive error message?
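One possible cause, offered as an assumption rather than a confirmed diagnosis: Cloud Functions egress traffic does not come from a single stable IP, so an IP whitelist can stop matching without any change on your side. Connecting over the Cloud SQL unix socket avoids the whitelist entirely; a minimal sketch with the mysql npm package (the instance connection name and credentials are placeholders):

import * as mysql from 'mysql';

// Cloud Functions expose Cloud SQL instances as unix sockets under /cloudsql.
// The connection name format is project:region:instance.
const connection = mysql.createConnection({
  socketPath: '/cloudsql/[my-project]:[region]:[my-instance]',
  user: '[db-user]',
  password: '[db-password]',
  database: '[db-name]',
});

connection.query('SELECT 1', (err, results) => {
  if (err) throw err;
  console.log(results);
});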

Terraform Provisioner "local-exec" not working as expected | VPC Peering Connection Accept issue

I'm unable to get the auto-accept peering done through the workaround mentioned in the link (Why am I getting a permissions error when attempting to auto_accept vpc peering in Terraform?) via the provisioner option.
See my Terraform code below. Can someone help me out?
provider "aws" {
region = "us-east-1"
profile = "default"
}
provider "aws" {
region = "us-east-1"
profile = "peer"
alias = "peer"
}
data "aws_caller_identity" "peer" {
provider = "aws.peer"
}
resource "aws_vpc_peering_connection" "service-peer" {
vpc_id = "vpc-123a56789bc"
peer_vpc_id = "vpc-YYYYYY"
peer_owner_id = "012345678901"
peer_region = "us-east-1"
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_remote_vpc_dns_resolution = true
}
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.service-peer.id} --region=us-east-1 --profile=peer"
}
}
Output I'm getting:
Error: Error applying plan:
1 error(s) occurred:
* aws_vpc_peering_connection.servicehub-peer: 1 error(s) occurred:
* aws_vpc_peering_connection.servicehub-peer: Unable to modify peering options. The VPC Peering Connection "pcx-08ebd316c82acacd9" is not active. Please set `auto_accept` attribute to `true`, or activate VPC Peering Connection manually.
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure
Whereas I'm able to run the AWS CLI command successfully from a Linux shell, outside the Terraform template. Let me know if I'm missing something in the Terraform script.
Try moving your "local-exec" out into a separate null_resource and adding a depends_on link to your VPC peering connection:
resource "null_resource" "peering-provision" {
depends_on = ["aws_vpc_peering_connection.service-peer"]
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.service-peer.id} --region=us-east-1 --profile=peer"
}
}
As Koe said, it may be better to use the auto_accept option.
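For completeness, the acceptance can also be expressed natively, without local-exec (a sketch, assuming the aliased peer provider has permission to accept), using an aws_vpc_peering_connection_accepter with auto_accept:

resource "aws_vpc_peering_connection_accepter" "service-peer-accept" {
  # Runs in the peer account via the aliased provider defined above.
  provider                  = "aws.peer"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.service-peer.id}"
  auto_accept               = true
}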

How to send emails with Symfony 4 Swiftmailer from a local machine running on Windows 10?

I'm trying to send emails with Symfony 4, Wamp and fake sendmail on Windows 10, but without success. I'm hosted on OVH. I should mention that I have another site hosted on OVH with the same parameters, running Symfony 2, where Swiftmailer works perfectly.
Here is my Symfony .env line for Swiftmailer:
MAILER_URL=smtp://smtp.dnimz.com:465?encryption=ssl&auth_mode=login&username=simslay@dnimz.com&password=***
Here is part of my Symfony controller for Swiftmailer:
$message = (new \Swift_Message('Hello Email'))
    ->setFrom(array($this->container->getParameter('mailer_user') => 'dnimz'))
    ->setTo($user->getEmail())
    ->setBody(
        $this->renderView(
            'emails/email_registration.html.twig',
            array('username' => $user->getUsername())
        ),
        'text/html'
    );

try {
    $this->get('mailer')->send($message);
    $this->addFlash('notice', 'mail envoyé !');
} catch (\Exception $e) {
    $this->addFlash(
        'notice',
        '<strong>Le message n\'a pu être envoyé !</strong>'
    );
}
Here is my sendmail.ini from fake sendmail:
[sendmail]
smtp_server=smtp.dnimz.com
smtp_port=465
smtp_ssl=auto
error_logfile=error.log
auth_username=simslay@dnimz.com
auth_password=***
pop3_server=
pop3_username=
pop3_password=
force_sender=
force_recipient=
hostname=
Here is the mail function section of my php.ini:
SMTP =smtp.dnimz.com
smtp_port =465
sendmail_from = "simslay@dnimz.com"
sendmail_path = "C:/wamp64/sendmail/sendmail.exe"
mail.add_x_header = On
When sending the email, I get these warnings and this error:
[debug] Warning: stream_socket_client(): SSL operation failed with code 1. OpenSSL Error messages:
error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
2017-12-29T18:44:09+00:00 [debug] Warning: stream_socket_client(): Failed to enable crypto
2017-12-29T18:44:09+00:00 [debug] Warning: stream_socket_client(): unable to connect to ssl://smtp.dnimz.com:465 (Unknown error)
2017-12-29T18:44:09+00:00 [error] Exception occurred while flushing email queue: Connection could not be established with host smtp.dnimz.com [ #0]
I don't know why these errors occur.
It looks like you either don't have OpenSSL installed or there is a certificate error. I remember a similar issue to the first warning regularly occurring with Composer on Windows. The solution was to install a missing certificate (just place it in the certs folder of your WAMP installation).
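Concretely, the usual fix is to download a CA bundle (e.g. cacert.pem from the curl project) and point PHP at it in php.ini; the paths below are an assumption, adjust them to your WAMP layout:

; php.ini -- point OpenSSL at a CA bundle so peer verification can succeed
openssl.cafile = "C:/wamp64/bin/php/cacert.pem"
curl.cainfo = "C:/wamp64/bin/php/cacert.pem"

Restart Apache after the change so PHP picks up the new settings.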
For reference see:
https://akrabat.com/ssl-certificate-verification-on-php-5-6/
Composer update fails while updating from packagist
https://github.com/composer/composer/issues/2798#issuecomment-59812991
