Corda - Failed to find a store at certificates\sslkeystore.jks

Corda open source on Linux. Node RPC SSL enabled. I am getting the error "Failed to find a store at certificates\sslkeystore.jks". Any ideas? I have entered an absolute path in keyStorePath.

You must follow the steps of this paragraph: https://docs.corda.net/clientrpc.html#wire-security which I have detailed for you below.
When you enable RPC SSL, you must run the following command once (you will be asked to supply two new passwords):
java -jar corda.jar generate-rpc-ssl-settings
It will create rpcsslkeystore.jks under the certificates folder, and rpcssltruststore.jks under the certificates/export folder.
Inside your node.conf, supply the path and password of rpcsslkeystore.jks:
rpcSettings {
    useSsl=true
    ssl {
        keyStorePath=${baseDirectory}/certificates/rpcsslkeystore.jks
        keyStorePassword=password
    }
    standAloneBroker = false
    address = "0.0.0.0:10003"
    adminAddress = "0.0.0.0:10004"
}
Now, if you have a web server, then inside NodeRPCConnection you must use the constructor that takes a ClientRpcSslOptions parameter:
// RPC SSL properties.
@Value("${config.rpc.ssl.truststorepath}")
private String trustStorePath;

@Value("${config.rpc.ssl.truststorepassword}")
private String trustStorePassword;

@PostConstruct
public void initialiseNodeRPCConnection() {
    NetworkHostAndPort rpcAddress = new NetworkHostAndPort(host, rpcPort);
    ClientRpcSslOptions clientRpcSslOptions = new ClientRpcSslOptions(Paths.get(trustStorePath),
            trustStorePassword, "JKS");
    CordaRPCClient rpcClient = new CordaRPCClient(rpcAddress, clientRpcSslOptions, null);
    rpcConnection = rpcClient.start(username, password);
    proxy = rpcConnection.getProxy();
}
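Once the proxy has been obtained, RPC calls go over the SSL connection as usual; for example (a minimal sketch, assuming the standard CordaRPCOps proxy returned above):

// List the nodes visible in the network map over the SSL-secured RPC connection.
List<NodeInfo> nodes = proxy.networkMapSnapshot();
nodes.forEach(node -> System.out.println(node.getLegalIdentities()));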
Above, we added two extra attributes that you must now supply when starting the web server; to do that, modify the build.gradle of your clients module:
task runNodeServer(type: JavaExec, dependsOn: jar) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'com.example.server.ServerKt'
    args '--server.port=50005', '--config.rpc.host=localhost',
            '--config.rpc.port=10005', '--config.rpc.username=user1', '--config.rpc.password=test',
            '--config.rpc.ssl.truststorepath=/path-to-project/build/nodes/your-node/certificates/export/rpcssltruststore.jks',
            '--config.rpc.ssl.truststorepassword=password'
}
If you're planning to connect to the node with a standalone shell, you must do something similar, but it didn't work for me; I reported the following bug: https://github.com/corda/corda/issues/5955

Related

FTP_INCORRECT_HOST_KEY in N/SFTP Module

While creating the connection from NetSuite to SFTP using the N/SFTP module, I'm facing an error that states:
"FTP_INCORRECT_HOST_KEY","message":"Provided host key does not match remote server's fingerprint."
I have tried checking with my server team but no luck. Can anyone suggest how to resolve this, or how I can get an authorized fingerprint/host key from the server?
I have tried with the SuiteScript 2.0 module (N/SFTP) with the help of the tool mentioned below.
https://ursuscode.com/netsuite-tips/suitescript-2-0-sftp-tool/
/**
 * @NApiVersion 2.x
 * @NScriptType ScheduledScript
 */
define(['N/sftp', 'N/file', 'N/runtime'], function (sftp, file, runtime) {
    function execute(context) {
        var myPwdGuid = "Encrypted password by GUID";
        var myHostKey = "Some long Host key around 380 characters";

        // establish connection to the remote SFTP server
        var connection = sftp.createConnection({
            username: 'fuel_integration',
            passwordGuid: myPwdGuid, // references var myPwdGuid
            url: '59.165.215.45', // Example IP
            directory: '/sftproot/TaleoSync',
            restrictToScriptIds: runtime.getCurrentScript().id,
            restrictToCurrentUser: false,
            hostKey: myHostKey // references var myHostKey
        });

        // download the file from the remote server
        var downloadedFile = connection.download({
            directory: '/sftproot/TaleoSync',
            filename: 'Fuel Funnel Report_without filter.csv'
        });
        downloadedFile.folder = 1234; // File Cabinet folder internal ID (placeholder; value omitted in the original)
        downloadedFile.save();
        context.response.write(' Downloaded "Fuel Funnel Report_without filter" to fileCabinet');
    }

    return {
        execute: execute
    };
});
I expect to create a connection between the SFTP server and NetSuite, download a file from SFTP, and place it in the NetSuite File Cabinet.
A couple of things:
restrictToScriptIds : runtime.getCurrentScript().id,
restrictToCurrentUser : false,
are not part of the createConnection signature. Those should have been used when you created a Suitelet to vault your credential.
However, the host key complaint may be dealt with by using ssh-keyscan from a Linux box:
ssh-keyscan 59.165.215.45
It should reply with the server name, then ssh-rsa, then a long base64 string. Copy that string so it ends up in myHostKey, and set hostKeyType to RSA.
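The ssh-keyscan output has the form host, key type, base64-encoded key, roughly like the line below (the key here is truncated and purely illustrative); only the base64 part goes into myHostKey, and hostKeyType: 'RSA' is passed in the createConnection options:
59.165.215.45 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ...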

SASL_SSL integration with EmbeddedKafka

I've been following this blog post to implement an embedded SASL_SSL broker:
https://sharebigdata.wordpress.com/2018/01/21/implementing-sasl-plain/
@SpringBootTest
@RunWith(SpringRunner.class)
@TestPropertySource(properties = {
        "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}",
        "spring.kafka.consumer.group-id=notify-integration-test-group-id",
        "spring.kafka.consumer.auto-offset-reset=earliest"
})
public class ListenerIntegrationTest2 {
    static final String INBOUND = "inbound-topic";
    static final String OUTBOUND = "outbound-topic";

    static {
        System.setProperty("java.security.auth.login.config", "src/test/java/configs/kafka/kafka_jaas.conf");
    }
    @ClassRule
    public static final EmbeddedKafkaRule KAFKA = new EmbeddedKafkaRule(1, true, 1,
            ListenerIntegrationTest2.INBOUND, ListenerIntegrationTest2.OUTBOUND)
            .brokerProperty("listeners", "SASL_SSL://localhost:9092, PLAINTEXT://localhost:9093")
            .brokerProperty("ssl.keystore.location", "src/test/java/configs/kafka/kafka.broker1.keystore.jks")
            .brokerProperty("ssl.keystore.password", "pass")
            .brokerProperty("ssl.key.password", "pass")
            .brokerProperty("ssl.client.auth", "required")
            .brokerProperty("ssl.truststore.location", "src/test/java/configs/kafka/kafka.broker1.truststore.jks")
            .brokerProperty("ssl.truststore.password", "pass")
            .brokerProperty("security.inter.broker.protocol", "SASL_SSL")
            .brokerProperty("sasl.enabled.mechanisms", "PLAIN,SASL_SSL")
            .brokerProperty("sasl.mechanism.inter.broker.protocol", "SASL_SSL");
When I use the PLAINTEXT://localhost:9093 config I get the following:
WARN org.apache.kafka.clients.NetworkClient - [Controller id=0, targetBrokerId=0] Connection to node 0 terminated during authentication. This may indicate that authentication failed due to invalid credentials.
However, when I remove it, I get org.apache.kafka.common.KafkaException: Tried to check server's port before server was started or checked for port of non-existing protocol
I've tried changing the SecurityProtocol type to autodiscover which style of broker communication it should be using (it's hardcoded to plaintext - this should probably get fixed):
if (this.kafkaPorts[i] == 0) {
    this.kafkaPorts[i] = TestUtils.boundPort(server,
            SecurityProtocol.forName(this.brokerProperties
                    .getOrDefault("security.protocol", SecurityProtocol.PLAINTEXT).toString())); // or whatever property can give me the security protocol I should be using to communicate
}
I still get the following error: WARN org.apache.kafka.clients.NetworkClient - [Controller id=0, targetBrokerId=0] Connection to node 0 terminated during authentication. This may indicate that authentication failed due to invalid credentials.
Is there a way to correctly configure embedded Kafka to be SASL_SSL enabled?

Update existing Terraform compute instance when new "components" are added

I am new to Terraform, but I have created an OpenStack compute instance like this:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
# Import SSH key pair into openstack project
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
# Create a new virtual machine
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
}
For maintainability and flexibility reasons I would like to add some "components" to the same instance; it could be anything, but here I have tried with a file provisioner and remote execution.
However, when I add these arguments to my compute instance, I notice that the instance is not updated. For example:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
# Add a provisionner file on the ressource
provisioner "file" {
source = "foo_scripts/bar-setup.sh"
destination = "/tmp/bar-setup.sh"
connection {
type = "ssh"
user = "user"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
# execute server setup file
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bar-setup.sh",
"sudo bash /tmp/bar-setup.sh",
]
connection {
type = "ssh"
user = "centos"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
Indeed, after adding the file provisioner to the resource, when I run terraform plan or terraform apply, nothing changes on my instance. I get info messages notifying me that:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
What is the right way to apply my changes to my compute instance?
Following Terraform documentation:
Provisioners are used to execute scripts on a local or remote machine as part of resource creation or destruction.
If you want the provisioners to run again, you should destroy (terraform destroy) and create (terraform apply) the resource again.
There's no way for Terraform to check the state of a local or a remote execution; it's not like there's an API call that can tell you what happened in your custom code - bar-setup.sh.
That would be like magic, or actual Magic.
Terraform is for managing the infrastructure and the config of the instance, not really the content on the instance. Immutable content and recreating is the true path here: making a completely new instance. However, if it's your Hammer, there are ways.
If you taint the resource that you want to update, then the next time terraform is run the resource will be recreated and its provisioners re-executed, as shown below. But heed what I said about Hammers.
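For example, a taint-and-apply cycle could look like this (a sketch using the resource address from the config above):
terraform taint openstack_compute_instance_v2.compute_instance
terraform apply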
Alternatively, you could leverage your CM tool of choice to manage the content of your instance (Chef/Ansible), or create the images used by OpenStack (i.e. immutable) via a tool like Packer and update those. I'd do the latter.

Elasticsearch not starting, but throwing ReceiveTimeoutTransportException

I am trying to use Elasticsearch with the Java API, but when I try to run the application, I am getting the following exception.
18:13:52.378 [elasticsearch[Fallen One][generic][T#1]] INFO org.elasticsearch.client.transport - [Fallen One] failed to get local cluster state for [#transport#-1][integra][inet[/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[/127.0.0.1:9300]][cluster/state] request_id [52] timed out after [5001ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356) [elasticsearch-1.0.1.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
18:13:52.381 [elasticsearch[Fallen One][generic][T#1]] DEBUG org.elasticsearch.transport.netty - [Fallen One] disconnected from [[#transport#-1][integra][inet[/127.0.0.1:9300]]]
18:13:52.391 [elasticsearch[Fallen One][generic][T#3]] DEBUG org.elasticsearch.transport.netty - [Fallen One] connected to node [[#transport#-1][integra][inet[/127.0.0.1:9300]]]
The code for connecting to Elasticsearch is:
private String[] esNodes = { "127.0.0.1:9300" };

protected TransportClient buildClient() throws Exception {
    Settings settings = ImmutableSettings.settingsBuilder()
            .put("client.transport.sniff", true)
            .put("client.transport.ignore_cluster_name", true).build();
    TransportClient client = new TransportClient(settings);
    for (int i = 0; i < esNodes.length; i++) {
        client.addTransportAddress(toAddress(esNodes[i]));
    }
    return client;
}

private InetSocketTransportAddress toAddress(String address) {
    if (address == null) return null;
    String[] splitted = address.split(":");
    int port = 9300;
    if (splitted.length > 1) {
        port = Integer.parseInt(splitted[1]);
    }
    return new InetSocketTransportAddress(splitted[0], port);
}
Can anyone kindly help me? I am new to Elasticsearch and have no idea how to resolve the issue.
I am using this code to connect to my Elasticsearch and it's working pretty well.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", clusterName).build();
this.client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress(ipAddress, 9300));
Here, ipAddress and clusterName are arguments of my function.
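As a quick sanity check (a minimal sketch, assuming the same Elasticsearch 1.x transport client built above), you can verify that the client actually connected to at least one node:

// The transport client lists the nodes it managed to connect to.
if (this.client.connectedNodes().isEmpty()) {
    throw new IllegalStateException("No Elasticsearch nodes reachable; check cluster.name and the transport address");
}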
You should have a look at the network on your laptop. Check if you can connect to localhost. Another thing you could try is to start two Elasticsearch instances with the same configuration to see if they connect. Finally, have a look at the network part of elasticsearch.yml. When having network problems on a local machine, I usually try the following two options:
network.host: localhost
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost"]
I think the Elasticsearch server is not started. Try to hit it in a browser or with curl; if that works, try the above code.
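For example, with the default HTTP port (assuming Elasticsearch runs on the same machine):
curl http://localhost:9200
A running node answers with a small JSON document containing the node name, cluster name, and version; "connection refused" means the server is not up.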
Note: if your Java API jar version differs from the server version, that will also cause problems, so use the same Java API version as your ES version.
Note: the Elasticsearch Java API helps you connect to Elasticsearch, but it can't start an Elasticsearch node by itself.
Your code is good; it's very standard too.

How to specify credentials from a Java Web Service in PTC Windchill PDMLink

I am currently investigating the possibility of using a Java Web Service (as described by the Info*Engine documentation of Windchill) in order to retrieve information regarding parts. I am using Windchill version 10.1.
I have successfully deployed a web service, which I consume in a .Net application. Calls which do not try to access Windchill information complete successfully. However, when trying to retrieve part information, I get a wt.method.AuthenticationException.
Here is the code that runs within the web service (the web service method simply calls this method):
public static String GetOnePart(String partNumber) throws WTException
{
    WTPart part = null;
    String partName = null;
    RemoteMethodServer server = RemoteMethodServer.getDefault();
    server.setUserName("theUsername");
    server.setPassword("thePassword");
    try {
        QuerySpec qspec = new QuerySpec(WTPart.class);
        qspec.appendWhere(new SearchCondition(WTPart.class, WTPart.NUMBER, SearchCondition.LIKE, partNumber), new int[]{0, 1});
        // This fails.
        QueryResult qr = PersistenceHelper.manager.find((StatementSpec) qspec);
        while (qr.hasMoreElements()) {
            part = (WTPart) qr.nextElement();
            partName = part.getName();
        }
    } catch (AuthenticationException e) {
        // Exception caught here.
        partName = e.toString();
    }
    return partName;
}
This code works in a command line application deployed on the server, but fails with a wt.method.AuthenticationException when performed from within the web service. I feel it fails because the use of RemoteMethodServer is not what I should be doing since the web service is within the MethodServer.
Anyhow, if anyone knows how to do this, it would be awesome.
A bonus question would be how to log from within the web service, and how to configure this logging.
Thank you.
You don't need to authenticate on the server side with this code:
RemoteMethodServer server = RemoteMethodServer.getDefault();
server.setUserName("theUsername");
server.setPassword("thePassword");
If you have followed the documentation (Windchill Help Center), your web service should be annotated with @WebService and @WebMethod(operationName="getOnePart") and inherit from com.ptc.jws.servlet.JaxWsService.
Also, you have to pay attention to the policy used during deployment.
The default ant script is configured with
security.policy=userNameAuthSymmetricKeys
so you need to handle it when you consume your web service from .Net.
For logging events, you just need to call the log4j logger instantiated by default with $log.debug("Hello")
You can't pre-authenticate on the server side.
You can write the auth into your client though. Not sure what the .Net equivalent is, but this works for Java clients:
private static final String USERNAME = "admin";
private static final String PASSWORD = "password";

static {
    java.net.Authenticator.setDefault(new java.net.Authenticator() {
        @Override
        protected java.net.PasswordAuthentication getPasswordAuthentication() {
            return new java.net.PasswordAuthentication(USERNAME, PASSWORD.toCharArray());
        }
    });
}
