GSSAPI: importing a context created on a different host

I am learning about GSSAPI and am intrigued by the exporting/importing context feature of the API. I am designing an application that will involve multiple processes across multiple machines. My question, then, is: if I create a security context and export it from machine A, can I import it in a program running on machine B? The documentation refers to an "interprocess token," but that language is somewhat vague.
Thanks in advance!

Why do you need that? Have each process establish its own context.
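For reference, the export/import round trip itself is small in code. Below is a minimal sketch using Java's JGSS binding (org.ietf.jgss). Two caveats, both assumptions to verify against your library: the spec positions the interprocess token as a way to hand a context to another process, and whether a token exported on machine A can be imported on machine B (or whether export is supported at all; some providers just throw an "unavailable" error) is left to the implementation.

import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;

public class ContextTransfer {

    // Process A: serialize an established context into an interprocess token.
    // Note: export() invalidates the context in the exporting process.
    static byte[] exportContext(GSSContext established) throws GSSException {
        return established.export();
    }

    // Process B: reconstruct the context from the received token.
    static GSSContext importContext(byte[] interProcessToken) throws GSSException {
        GSSManager manager = GSSManager.getInstance();
        return manager.createContext(interProcessToken);
    }
}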

How exactly does PrincipalContext.ValidateCredentials work?

I've searched the stack but didn't find an answer that helps my situation. We have two web apps on different sets of servers. Both do Active Directory authentication using the exact same standard code. And the target LDAP server is the same in all cases.
Using ctx As New PrincipalContext(ContextType.Domain)
    If ctx.ValidateCredentials(un_in, pw_in) Then ...
However, in one case those two lines execute instantaneously, and in the other there's a consistent 21-second delay (there's logging directly before and after these lines). For the slow one, it's slow regardless of environment, i.e. on our dev/test/stage/prod servers.
We're at a loss as to what to check. Basic network connectivity checks show no delays, and this happens on 4 different servers. Could it be connectivity to the domain controller, which, as I understand it, is how IIS would know which LDAP server to check? Thoughts?
In case anyone comes across this: the solution was to add a parameter, ContextOptions.Negotiate, to the code:
Using ctx As New PrincipalContext(ContextType.Domain)
    If ctx.ValidateCredentials(un_in, pw_in, ContextOptions.Negotiate) Then
Something in the network environment changed that no one could identify. But adding this param removed the delay.

WSO2 ESB Clustered databases and Data Services

I was able to set up a cluster of WSO2 ESB, following the dedicated doc, with one manager and two workers.
I am not sure about two points:
Does each worker node need its own REGISTRY_LOCAL database?
With both workers using the same db it works, but I'm not sure that's how it's meant to be done, and the doc isn't clear about it.
Adding Data Services as a feature?
There are almost no docs about that, but not being able to fetch more than one row is a big limitation for me. So is it possible to add this feature in a clustered environment, or is it better to separate the Data Services servers from the ESB ones?
If someone has experience in that kind of stuff I will really appreciate a feedback.
Thanks
You can use the same registry database for all the members in the cluster so that they can communicate with each other through it. You may refer to my blog post [1] for further information. Personally, I have never used local registry DBs for each member in a cluster setup.
You cannot fetch multiple records when you install the DSS feature into the ESB, and installing the feature incurs some performance overhead in the ESB. Therefore I strongly recommend using a separate DSS instance to get your work done. It also separates the concerns clearly, which is a good thing.
[1] http://ravindraranwala.blogspot.com/2015/09/wso2-esb-worker-manager-cluster-without.html

some generic questions about neo4j

I'm new to non-PHP web applications and to NoSQL databases. I was looking for a smart solution matching my application requirements, and I was very surprised when I learned that graph-based databases exist. I found Neo4j very nice and very suitable for my application, but as I've already written, I'm new to this and I have some limitations in understanding how it works. I hope you guys can help me learn.
If I embed Neo4j in a servlet program, then the database instance I create is shared among the different threads of that servlet, right? So I need to put database creation in the init() method and the shutdown in destroy(), right? And it will be thread-safe, right? But what if I want to create a database shared across the whole application?
I've heard that graph databases in general rely on a relational layer underneath. Is that true for Neo4j? If it is, then what I see is a high-level interface to the real persistence layer, so what is a Connection in this case? Are there techniques like connection pooling, or are these low-level things all managed by Neo4j?
In my application I need to join some objects to users, plus lots of other classification stuff. Each of these objects has a unique id (a String). If someone asks to view some stuff about the object with id=QW, then I need to load the vertex associated with object QW. Is this an easy operation for graph databases?
If I need to manage authentication, I receive the pair (usr, pwd) and need to check whether this pair exists in my graph. Is this the same problem as before, or is there some better approach for managing authentication?
thanks
If you're coming from the PHP world, in most cases you're better off running Neo4j in server mode and accessing it either via REST directly or through a client driver like https://github.com/jadell/neo4jphp. If you still want to embed Neo4j in a servlet environment, the GraphDatabaseService is a shared component, perhaps stored in the ServletContext. On a per-request (and therefore per-thread) basis you start and commit transactions, as sketched below.
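A minimal sketch of that setup, assuming the Neo4j 3.x embedded Java API; the database path, the "graphDb" attribute name, and the Item label with its id property are illustrative, not anything Neo4j prescribes:

import java.io.File;
import java.io.IOException;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

// Created once per webapp: the embedded database is shared by all servlets/threads.
public class Neo4jLifecycleListener implements ServletContextListener {
    public void contextInitialized(ServletContextEvent sce) {
        GraphDatabaseService db =
                new GraphDatabaseFactory().newEmbeddedDatabase(new File("/var/data/neo4j"));
        sce.getServletContext().setAttribute("graphDb", db);
    }

    public void contextDestroyed(ServletContextEvent sce) {
        GraphDatabaseService db =
                (GraphDatabaseService) sce.getServletContext().getAttribute("graphDb");
        if (db != null) {
            db.shutdown();
        }
    }
}

// Each request (and therefore each thread) opens and closes its own transaction.
class ItemServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        GraphDatabaseService db =
                (GraphDatabaseService) getServletContext().getAttribute("graphDb");
        try (Transaction tx = db.beginTx()) {
            // Look up a vertex by your own unique id property, e.g. id=QW.
            Node item = db.findNode(Label.label("Item"), "id", "QW");
            resp.getWriter().println(item == null ? "not found" : item.getProperty("id"));
            tx.success();
        }
    }
}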
Neo4j is a native graph database. The bare-metal persistence layer is optimized for navigating from one node to its neighbors as fast as possible, and it was written by the Neo4j dev team themselves. There are other graph databases out there that reuse other persistence technologies for their underlying storage.
The best thing is to take the Neo4j online course at http://www.neo4j.org/learn/online_course.
For the authentication question, see SecurityRules.
Since Neo4j is a NoSQL graph database, you have to handle generation of unique IDs yourself, for example using a GUID (with 3.x, auto-incremented properties are also supported for a particular label).
Neo4j's default generated id is unique, but it can be reallocated to another object once the first assigned object is deleted.
I am a .NET developer, and in my project I used the Neo4j REST API; it works well and I suggest you go with that. It is implemented using the async-await programming pattern, so you can hand long-running operations to the DB and use your web server resources more efficiently.
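To make the GUID approach concrete, here is a small sketch against the embedded Java API (rather than REST, which the answer above uses); the Item label and id property are again just illustrative:

import java.util.UUID;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;

public class ItemRepository {
    private final GraphDatabaseService db;

    public ItemRepository(GraphDatabaseService db) {
        this.db = db;
    }

    // Store an application-generated GUID as a property instead of relying on
    // Neo4j's internal node id, which can be reused after a node is deleted.
    public String createItem() {
        try (Transaction tx = db.beginTx()) {
            Node node = db.createNode(Label.label("Item"));
            String guid = UUID.randomUUID().toString();
            node.setProperty("id", guid);
            tx.success();
            return guid;
        }
    }
}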

Migrating from IBM MQ to javax.jms.* (Weblogic)

I've been looking for days at how one could migrate from using IBM WebSphere MQ to using only the QueueManager within WebLogic 10.3.x server. This would save the cost of licenses for IBM MQ. The closest I came was finding an external link which stated that IBM examples existed that did something similar (moving away from MQ to standard JMS libraries), but when I attempted to follow the link, http://www.developer.ibm.com/isv/tech/sampmq.html,
it led to a dead page :\
More specifically, I am interested in:
What classes to use in my attempts to replace the following com.ibm.mq.* classes:
MQEnvironment
MQQueueManager
MQGetMessageOptions
MQPutMessageOptions
and other classes which don't have an obvious javax.jms.* alternative.
Some of the nuances & work-arounds I may encounter in this migration process.
The database we are forwarding the queue messages to is Oracle 11 Standard (with Advanced Queuing), if that changes anything; basically we are looking to "cut out the middle-man", so to speak. Your learned responses will be highly appreciated!
You seem to be using the MQI API for MQ, for which there is no drop-in replacement. There is no way around actually rewriting your MQ application logic to use the JMS API.
A good approach might be to first migrate to JMS while still using the same WebSphere MQ server, since that allows you to verify your results in a reliable way.
You ask what classes to replace, say, MQGetMessageOptions with. There is no single 1-to-1 replacement (there are even some aspects of MQI that JMS cannot easily replicate). Most of the put/get options and other settings are available by setting parameters on sessions and messages in JMS. You really need to understand the JMS API before attempting this switch.
Then, when you have JMS working with WebSphere MQ, you can do as Beryllium suggests: swap the libraries over to WebLogic, switch any references to com.ibm.mq.jms.MQConnectionFactory, configure the new parameters, pray to any available god, and press run :)
I have completed an application which supported both JBossMQ and MQSeries/WebSphere MQ.
The MQSeries specific classes I required were
import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.mq.jms.MQTopicConnectionFactory;
These were sufficient to create the javax.jms.QueueConnection/TopicConnection.
As for WebSphere MQ, I connected directly.
As for JBossMQ I looked up the factories using JNDI.
So on top of this there is only JMS.
So the first step is to rewrite your application so that only the initializing part uses WebSphere MQ specific classes (the ones listed above).
Replace the remaining MQ-specific part with a JNDI/directory lookup of a queue connection factory provided by your application server.
Remove the MQSeries-specific parts from your source.
Here is a simple example which shows how to send a message.
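(The example itself appears to be missing from the post. The following is a minimal pure-javax.jms sketch of sending a message; the JNDI names jms/MyConnectionFactory and jms/MyQueue are placeholders for whatever your application server defines.)

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class JmsSendExample {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        // Look up the provider-specific objects via JNDI; no com.ibm.mq.* imports needed.
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/MyConnectionFactory");
        Queue queue = (Queue) jndi.lookup("jms/MyQueue");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("hello from JMS");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}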

designing a distributed (over many servers) error logging feature, WCF or?

I am designing an error-logging feature so our servers (each doing different things) can have a central data store for logging errors.
Would it be a good idea to have the various applications write to the error log using a WCF service, or is that a bad idea?
They could do it just via ADO.NET to the database, which I think is the simpler route.
How about having a look at syslog? It was made for exactly that purpose.
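(To make that concrete, emitting a classic BSD-syslog datagram takes only a few lines; this Java sketch assumes a collector listening on the traditional UDP port 514 at a placeholder hostname.)

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class SyslogExample {
    public static void main(String[] args) throws Exception {
        // PRI 131 = facility local0 (16) * 8 + severity err (3), per RFC 3164.
        byte[] data = "<131>myapp: something failed".getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName("logs.example.com"), 514));
        }
    }
}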
I'd say just log to your local data store. The advantages are:
Speed - it's pretty rapid to just dump your chosen error report to an existing data connection.
Traceability - what happens if you have an error in your logging service? You lose all ability to chase down errors on all servers.
Simplicity - if you change the endpoint for your error service, you have to update every other application that uses it.
Reporting - do you really want to trawl through error reports from tens/hundreds of applications in one place when you could easily find them in the data store local to the app?
Of course, any of these points could be viewed from the other side, these are just my opinions.
We're looking at a similar approach, except for audit logging as well as error handling.
We're looking at using WCF over netTcp, and also at using the event log, but that seems to require high-trust settings and might have performance issues.
Not convinced by ZombieSheep's objections:
It's pretty rapid to dump your chosen error report over an existing WCF connection. Seriously. Plus, you can do it async/queued. Not a key factor for me.
You log to both the central service and the local store. When the error service comes back online, you poll your machines for events since the last timestamp. Problem solved.
Use a DNS alias, and don't change the path - that's how you should do internal addressing anyway, IMO.
What if you have multiple apps on a single machine? What if you want to see the timing of errors across multiple apps?
