Apply security in gRPC client/server in Node.js - grpc

I am new to gRPC calls. I created one server and one client; now, how do I restrict other clients from connecting to my server?
I haven't tried any solution yet, because I don't know the best approach and have limited time to apply one, so I'm posting the question.

You can create a server with secure credentials as mentioned here.
There are different methods there; you can choose any of them.
I prefer the SSL method.
To create a connection with ssl_creds:
const fs = require('fs');
const grpc = require('grpc'); // or '@grpc/grpc-js'

const root_cert = fs.readFileSync('path/to/root-cert');
const ssl_creds = grpc.credentials.createSsl(root_cert);
const stub = new helloworld.Greeter('myservice.example.com', ssl_creds);
For generating the SSL certificates I used a git repo and followed the instructions in its README.
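To actually restrict which clients can connect, you can also require client certificates on the server side. Here is a minimal sketch, assuming the @grpc/grpc-js package and key/cert files generated by such a repo (file names and the service wiring are placeholders):

const fs = require('fs');
const grpc = require('@grpc/grpc-js');

// Root CA used to verify client certificates, the server's own key/cert pair,
// and `true` so that clients without a certificate signed by the CA are rejected.
const serverCreds = grpc.ServerCredentials.createSsl(
  fs.readFileSync('path/to/root-cert'),
  [{
    private_key: fs.readFileSync('path/to/server-key.pem'),
    cert_chain: fs.readFileSync('path/to/server-cert.pem'),
  }],
  true
);

const server = new grpc.Server();
// server.addService(helloworldProto.Greeter.service, { sayHello: ... });
server.bindAsync('0.0.0.0:50051', serverCreds, (err) => {
  if (err) throw err;
  server.start();
});

The client then has to present its own key and certificate as well, e.g. grpc.credentials.createSsl(root_cert, client_key, client_cert); otherwise the server will reject the connection.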

Related

keycloak starts with a new realm and some client configurations

I am trying to use Keycloak as the authentication service in my design. In my case, when Keycloak starts, I need one more realm besides the default master realm. Assume the new realm is called "demo".
So when Keycloak starts, it should have two realms (master and demo).
In addition, in the demo realm I need to configure the default client "admin-cli" to enable "Full Scope Allowed", and also add some built-in mappers to this client.
Given this, I wonder whether I can use something like an initialization file which Keycloak can load on startup?
Or do I need to use the Keycloak client APIs for these operations (e.g., the Java Keycloak admin client)?
Thanks in advance.
You can try the following:
Create the Realm;
Set all the options that you want;
Go to Manage > Export;
Switch Export groups and roles to ON;
Switch Export clients to ON;
Export.
That will export a .json file with the configurations.
Then you can test it by deleting your Demo realm and:
Go to Add Realm;
Choose the .json file that was exported;
Click Create.
Check if the configurations that you changed are still present on the Demo realm; if they are, it means you can use this file to import the realm from. Otherwise, for the options that were not persisted, you will have to create them via the Admin REST API.
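If you end up needing the Admin REST API, here is a rough sketch of importing the exported realm file with it from Node.js. Assumptions: Keycloak runs at http://localhost:8080 (no /auth prefix, as in recent versions), default admin/admin credentials, Node 18+ for the global fetch, and a hypothetical demo-realm.json file name:

const fs = require('fs');

async function importDemoRealm() {
  const base = 'http://localhost:8080'; // prepend /auth on pre-17 Keycloak versions

  // 1. Get an admin token from the master realm via the admin-cli client.
  const tokenRes = await fetch(`${base}/realms/master/protocol/openid-connect/token`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'password',
      client_id: 'admin-cli',
      username: 'admin', // assumption: default admin credentials
      password: 'admin',
    }),
  });
  const { access_token } = await tokenRes.json();

  // 2. Create the realm from the exported representation (hypothetical file name).
  const realm = JSON.parse(fs.readFileSync('demo-realm.json', 'utf8'));
  const res = await fetch(`${base}/admin/realms`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${access_token}`,
    },
    body: JSON.stringify(realm),
  });
  console.log('create realm status:', res.status); // 201 on success
}

importDemoRealm().catch(console.error);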

BizTalk 2016: How to use HTTP Send adapter with API token

I need to make calls to a REST API service via a BizTalk send adapter. The API simply uses a token in the header for authentication/authorization. I have tested this in a C# console app using HttpClient and it works fine:
string apiUrl = "https://api.site.com/endpoint/<method>?";
string dateFormat = "dateFormat = 2017-05-01T00:00:00";

using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Add("token", "<token>");
    client.DefaultRequestHeaders.Add("Accept", "application/json");

    string finalurl = apiUrl + dateFormat;
    HttpResponseMessage resp = await client.GetAsync(finalurl);
    if (resp.IsSuccessStatusCode)
    {
        string result = await resp.Content.ReadAsStringAsync();
        var rootresult = JsonConvert.DeserializeObject<jobList>(result);
        return rootresult;
    }
    else
    {
        return null;
    }
}
However, I want to use BizTalk to make the call and handle the response.
I have tried using the wcf-http adapter, selecting 'Transport' for security (it is an https site, so security is required(?)) with no credential type specified, and placed the header with the token in the 'Messages' tab of the adapter configuration. This fails, though, with the exception: System.IO.IOException: Authentication failed because the remote party has closed the transport stream.
I have tried googling for this specific scenario and cannot find a solution. I did find this article with suggestions for OAuth handling, but I'm surprised that even with BizTalk 2016 I still have to create a custom assembly for something so simple.
Does anyone know how this might be done in the wcf-http send adapter?
Yes, you have to write a custom endpoint behaviour and add it to the send port. In fact, with the WCF-WebHttp adapter even Basic Auth doesn't work, so I'm currently writing an endpoint behaviour to address this.
One of the issues with OAuth is that there isn't one standard that everyone follows; so far I've had to write two different OAuth behaviours because the services implemented things differently. One uses a secret and timestamp hashed together to get a token, and the other uses Basic Auth to get a token. Also, with one of them you could get multiple tokens using the same credentials, whereas the other would expire the old token straight away.
Another thing I've had to write a custom behaviour for is the version of TLS the endpoint expects, as by default BizTalk 2013 R2 tries TLS 1.0 and then fails if the web site does not allow it.
You can give feedback to Microsoft that you wish to have this feature by voting on Add support for OAuth 2.0 / OpenID Connect authentication.
Maybe someone will open-source their solution. See Announcement: BizTalk Server embrace open source!
Figured it out. I should have used 'Certificate' for the client credential type.
I just had to:
Add the token in the Outbound HTTP Headers box in the Messages tab, and select 'Transport' security with 'Certificate' as the transport client credential type.
Download the certificate from the API's website via the browser (manually) and install it in the local server's certificate store.
Then select that certificate and thumbprint in the corresponding fields in the adapter via the 'Browse' buttons (I had to scroll through the available certificates and select the API/website certificate I was trying to connect to).
I discovered this by accident when I had Fiddler running and set the adapter proxy setting to the local Fiddler address (http://localhost:8888). I realized that since Fiddler negotiates the TLS connection/certificate (I enabled TLS 1.2 in Fiddler) to the remote server, messages were able to get through, but not directly between the adapter and the remote API server (when Fiddler WASN'T running).

Using SockJS in Meteor to connect to external service or stream API

I am trying to use SockJS from my Meteor app to connect to another service, but I can't get a reference to SockJS within the Meteor client or server. Does anyone have a good example of using SockJS to connect to another service or streaming API from Meteor?
I have tried to accomplish this in two ways, but 'socket' is always undefined:
var socket = sockjs.createServer({ sockjs_url: 'http://api.hitbtc.com:8081' });
socket.onmessage = function(msg) {
  var data = JSON.parse(msg.data);
  console.log("CONNECTED!!" + data);
};

var socket = new SockJS('http://api.hitbtc.com:8081');
socket.onmessage = function(msg) {
  var data = JSON.parse(msg.data);
  console.log("CONNECTED!!" + data);
};
Even though SockJS is used by Meteor itself, it's hidden deep inside the ddp package and not really exposed to users. So basically, you have two options here:
You can either put another copy of SockJS into your app, ...
... or you can teach your custom server to understand DDP protocol, then you will be able to use DDP.connect to establish a new connection.
The second solution of course does not make sense if you are using a 3rd-party service. The first solution feels ugly because of the data redundancy, but I'm afraid it's the only way out if 2. is not acceptable; a minimal sketch of option 1 follows below.
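For option 1, a minimal sketch assuming you bundle your own copy of the client with meteor npm install sockjs-client (the remote URL is a placeholder for the external service's SockJS endpoint):

import SockJS from 'sockjs-client';

// Connect to the base SockJS endpoint of the external service.
const sock = new SockJS('https://other-service.example.com/echo');
sock.onopen = () => sock.send('hello');
sock.onmessage = (e) => console.log('received', e.data);
sock.onclose = () => console.log('connection closed');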
On the server:
Call Npm.require('./') and look at the path reported in the error; from it you can reach the packages buried in the depths of Meteor. In the case of SockJS, the path (in Meteor 1.10.2) is:
Npm.require('./meteor/ddp-server/node_modules/sockjs');
In the specific case of SockJS, its use is slightly different from what is presented on the GitHub page, as follows:
// WebApp comes from Meteor's webapp package (import { WebApp } from 'meteor/webapp';)
const sockjs = Npm.require('./meteor/ddp-server/node_modules/sockjs');
const echo = sockjs.listen(WebApp.httpServer, { prefix: '/echo' });

echo.on('connection', function(conn) {
  conn.on('data', function(message) {
    conn.write(message);
  });
  conn.on('close', function() {});
});
I didn't find the SockJS "client" package in these files, because the sockjs-client package is specific to the browser. So I downloaded it from the CDN that the "echo" output provided. I use "--exclude-archs web.browser.legacy" in my test environment, but from what I've read, the sockjs-client package is available if you don't use this parameter.
SockJS relies on "faye-websocket", which has both a client and a WebSocket server designed to run on Node.js; hence the suggestion.
PS: I didn't find an equivalent form on the client side (there is no Npm.require).
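For completeness, a minimal sketch of using faye-websocket as a plain WebSocket client from Meteor server code, assuming meteor npm install faye-websocket and that the external service speaks raw WebSocket (the URL and subscribe message are placeholders):

const WebSocket = require('faye-websocket');

const ws = new WebSocket.Client('wss://api.example.com/stream');

ws.on('open', function() {
  ws.send(JSON.stringify({ method: 'subscribe' })); // whatever the remote API expects
});

ws.on('message', function(event) {
  console.log('received', JSON.parse(event.data));
});

ws.on('close', function(event) {
  console.log('closed', event.code, event.reason);
});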

Is it possible to use client side smtp?

Real-world problem: when a Meteor app loses its connection to a self-deployed Meteor server, it would be a nice solution to send an email automatically from the client side with the data that is not being refreshed to other clients while the server is offline...
Is it possible? I mean, I haven't seen anything like
if (Meteor.isClient) { MAIL_URL = 'smtp://....
sendMessage(this.userId, toId, msg);
but the client could use Google SMTP, for example.
A secondary server might be wiser than email. There is a package for clustering Meteor servers.

Symfony2 Using Amazon Load Balancers and SSL: Error on isSecure() Check

Hi, I'm running into an issue where Symfony2 doesn't recognize the load balancer headers from Amazon AWS, which are needed to determine whether a request is SSL or not using the requires_channel: https security configuration.
By default, Symfony2's $request->isSecure() looks for "X_FORWARDED_PROTO", but there's apparently no standard for this, and Amazon AWS load balancers use "HTTP_X_FORWARDED_PROTO".
I see the cookbook article for setting trusted proxies in config, but that's geared around whitelisting specific IP addresses and won't work with AWS, which generates dynamic IPs. Another option, setting the framework config to include trust_proxy_headers: true, is deprecated. This breaks my app by forcing endless redirects on the pages that require SSL-only.
You can now change the headers using setTrustedHeaderName(). This method allows you to change the four headers used throughout the file.
const HEADER_CLIENT_IP = 'client_ip'; // defaults 'X_FORWARDED_FOR'
const HEADER_CLIENT_HOST = 'client_host'; // defaults 'X_FORWARDED_HOST'
const HEADER_CLIENT_PROTO = 'client_proto'; // defaults 'X_FORWARDED_PROTO'
const HEADER_CLIENT_PORT = 'client_port'; // defaults 'X_FORWARDED_PORT'
The above, taken from the Request class, indicates the keys available for use with the aforementioned method.
// $request is instance of HttpFoundation\Request;
$request->setTrustedHeaderName('client_proto', 'HTTP_X_FORWARDED_PROTO');
That said, at the time of writing, using "symfony/http-foundation": "2.5.*" the below code correctly determines whether or not the request is secure whilst behind an AWS Load Balancer.
// All IPs (*)
// $proxies = [$request->getClientIp()];
// Array of CIDR pools from load balancer
// EC2 -> Network & Security -> Load Balancers
// -> X -> Instances (tab) -> Availability Zones
// -> Subnet (column)
$proxies = ['172.x.x.0/20'];
$request->setTrustedProxies($proxies);
var_dump($request->isSecure()); // bool(true)
You're right, the X_FORWARDED_PROTO header is hardcoded into HttpFoundation\Request, while - as far as I know - overriding the request class in Symfony is currently not possible.
There has been a discussion/RFC about this topic here and there is an open pull-request that solves this issue using a RequestFactory.
