How to connect STAN to NATS using NKeys?

We're setting up a NATS and STAN cluster. The STAN cluster needs to connect to our NATS cluster, obviously. But now I'm having trouble authenticating when connecting the STAN cluster to the NATS cluster.
We are using NKeys for authentication (https://docs.nats.io/developing-with-nats/security/nkey).
When I connect with the STAN credentials using the Python client (nats.py), I have no problem at all.
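For reference, this is roughly what works for me with the core NATS Python client; a minimal sketch, where the server URL and the seed-file path are placeholders:

import asyncio
import nats  # pip install nats-py[nkeys]

async def main():
    # nkeys_seed points at a file containing only the NKey seed line (starts with "SU")
    nc = await nats.connect(
        servers=["nats://127.0.0.1:4222"],  # placeholder server URL
        nkeys_seed="./user.nk",             # placeholder path to the seed file
    )
    await nc.publish("test", b"hello")
    await nc.drain()

asyncio.run(main())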
STAN asks for a credentials file for authentication. I tried giving it a file with only the seed, and also one with both the seed and the user public key... How should the file be structured?
Thanks for the help in advance!

Asked the question on GitHub: https://github.com/nats-io/nats-streaming-server/issues/1095
NKeys are not supported by NATS Streaming. JWT is the only option.

Related

Migrating from an unencrypted Redshift Cluster to Encrypted

I am trying to enable SSE with a Customer-Managed CMK in my production Redshift cluster to follow certain security protocols.
For POC purposes, I spun up a single-node dc2.large Redshift cluster and, following this doc, I was able to enable SSE.
However, my question is, does enabling SSE encrypt the existing data in the cluster? If not, what steps should be taken?
Overall what are the downsides, if any, of enabling encryption at rest in a production Redshift cluster and what are the best practices?
Yes, enabling SSE encrypts the existing data: Redshift migrates the data to a new encrypted cluster as part of the change. There is no need to change anything in your code or existing pipelines/processes. This is disk encryption; it has nothing to do with your database connections or code.
To learn more about the process, read these links:
https://aws.amazon.com/about-aws/whats-new/2018/10/encrypt-amazon-redshift-1-click/
https://docs.aws.amazon.com/redshift/latest/mgmt/changing-cluster-encryption.html
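If you prefer to script the change rather than use the console's one-click option, the same modification can be made through the API; a sketch with boto3, where the region, cluster identifier, and KMS key ARN are placeholders:

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")  # placeholder region

# Turning encryption on migrates the data to a new encrypted cluster,
# so the existing data ends up encrypted as well.
redshift.modify_cluster(
    ClusterIdentifier="my-cluster",  # placeholder cluster identifier
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/example",  # placeholder CMK
)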

Connecting EEG device to Azure Machine Learning studio

I did some research and watched videos to figure out how to connect an EEG device, the Emotiv Insight, in real time to Microsoft Azure Machine Learning Studio.
I considered several ways to do it; perhaps I need to connect to other services before connecting to Azure Studio.
My aim is to make an app that captures brainwaves and uses Azure Studio to analyze them. Finally, the data is saved to Firebase and the response is sent back to my app.
However, I am stuck on finding a way to get my EEG data into Azure. Any help is appreciated.
Emotiv uses a service that runs as the Cortex process. You need a WebSocket to talk to it, and all communication to and from the service uses JSON objects. From there you need to transport that data into Azure.
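As a minimal sketch of that first hop, assuming the default local Cortex endpoint (wss://localhost:6868) and its self-signed certificate, a simple JSON-RPC call over a WebSocket might look like this:

import asyncio
import json
import ssl
import websockets  # pip install websockets

ssl_ctx = ssl.create_default_context()
ssl_ctx.check_hostname = False
ssl_ctx.verify_mode = ssl.CERT_NONE  # Cortex ships a self-signed certificate

async def main():
    async with websockets.connect("wss://localhost:6868", ssl=ssl_ctx) as ws:
        # Cortex speaks JSON-RPC; getCortexInfo is an unauthenticated call
        # that confirms the service is reachable before you stream EEG data.
        await ws.send(json.dumps({"id": 1, "jsonrpc": "2.0", "method": "getCortexInfo"}))
        print(json.loads(await ws.recv()))

asyncio.run(main())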

"Ping" a tensorflow serving server

I implemented several gRPC services, which all share a basic interface that allows me to "ping" them to see if they are up. For that, I have a getServiceVersion() request, which returns the service version if the service is up, and fails if it is not. How would you do this in TF Serving? Or is there a better procedure in gRPC in general?
If no other answer comes up, I will just create a ping model that returns the service version from a constant. This would be a prediction model that predicts with 100% accuracy whether the predictor is up. :)
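For what it's worth, TF Serving's gRPC ModelService exposes a GetModelStatus call that can serve as such a ping; a minimal sketch, assuming the tensorflow-serving-api package, with the host/port and model name as placeholders:

import grpc
from tensorflow_serving.apis import get_model_status_pb2, model_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")  # placeholder host:port
stub = model_service_pb2_grpc.ModelServiceStub(channel)

request = get_model_status_pb2.GetModelStatusRequest()
request.model_spec.name = "my_model"  # placeholder model name

# If the server is up, this returns the state of each loaded version
# (e.g. AVAILABLE); if it is down, the RPC raises an error, much like
# the getServiceVersion() pattern described above.
print(stub.GetModelStatus(request))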

SAP receive adapter high availability

We have an active-active BizTalk cluster with Windows Server as a software load balancer. The solution includes a SAP receive adapter accepting inbound RFC calls. The goal is to make the SAP adapter highly available.
Reading the documentation (), it does say 'You must always cluster the SAP receive adapter to accommodate a two-phase commit scenario.' and 'hosts running the receive handlers for FTP, MSMQ, POP3, SQL, and SAP require a clustering mechanism to provide high availability.'
What we currently have is a host instance enabled on both of the active-active BizTalk nodes. Referring to the documentation above, does that mean we did it incorrectly? Should we use a clustered host instance instead of the active-active deployment?
Thanks for all the help in advance.
You need to cluster the host that handles the SAP receive. What this means is that you will always have only one instance of the adapter running at any given time, and if one of the servers goes down, the other will pick up.
Compare this with your scenario, where you simply have two (non-clustered) instances running concurrently: yes, this gives you high availability, but also deadlocks! The two will run independently of each other... With the clustered scenario above, they will run one at a time.
To cluster the SAP receive host: open the Admin Console, find the host, right-click it, and select Cluster.

Using snow (and snowfall) with AWS for parallel processing in R

In relation to my earlier, similar SO question, I tried using snow/snowfall on AWS for parallel computing.
What I did was:
In the sfInit() function, I provided the public DNS to the socketHosts parameter, like so:
sfInit(parallel=TRUE, socketHosts=list("ec2-00-00-00-000.compute-1.amazonaws.com"))
The error returned was: Permission denied (publickey).
I then followed the instructions (I presume correctly!) on http://www.imbi.uni-freiburg.de/parallel/ in the 'Passwordless Secure Shell (SSH) login' section
I simply cat'ed the contents of the .pem file that I created on AWS into ~/.ssh/authorized_keys on the AWS instance I want to connect to from my master AWS instance, and on the master AWS instance as well.
Is there anything I am missing?
I would be very grateful if users can share their experiences in the use of snow on AWS.
Thank you very much for your suggestions.
UPDATE:
I just wanted to update the solution I found to my specific problem:
I used StarCluster to set up my AWS cluster.
Installed the snowfall package on all the nodes of the cluster.
From the master node, I issued the following commands:
# one entry per worker node (placeholder public DNS names)
hostslist <- list("ec2-xxx-xx-xxx-xxx.compute-1.amazonaws.com", "ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com")
sfInit(parallel=TRUE, cpus=2, type="SOCK", socketHosts=hostslist)
# run ifconfig on each worker and collect the output
l <- sfLapply(1:2, function(x) system("ifconfig", intern=TRUE))
lapply(l, function(x) x[2])  # the second line of ifconfig output holds the IP
sfStop()
The IP information confirmed that the AWS nodes were being utilized.
This doesn't look that bad, but the .pem file handling is wrong. It is sometimes not that simple, and many people have to fight with these issues. You can find a lot of tips in this post:
https://forums.aws.amazon.com/message.jspa?messageID=241341
Or search Google for other posts.
In my experience, most people have problems with these steps:
Can you log onto the machines via SSH (ssh ec2-00-00-00-000.compute-1.amazonaws.com)? Try to use the public DNS, not the public IP, to connect.
Check your "Security Groups" in AWS to make sure port 22 is open for all machines!
If you plan to start more than 10 worker machines, you should work on an MPI installation on your machines (much better performance!).
Markus from cloudnumbers.com :-)
I believe @Anatoliy is correct: you're using an X.509 certificate. For the precise steps to take to add the SSH keys, look at the "Types of credentials" section of the EC2 Starters Guide.
To upload your own SSH keys, take a look at this page from Alestic.
It is a little confusing at first, but you'll want to keep clear which are your access keys, your certificates, and your key pairs, which may appear in text files labeled DSA or RSA.
