I am looking to create a plugin on yagna where providers will only accept tasks from whitelisted requestors utilizing private/public keys for authentication.
Is there any documentation, or how can I go about doing this? The workflow is as follows:
Requestor sends a task in a specific subnet (do requestors also need a plugin, or are yagna IDs unique?)
Providers on that subnet check whether it's a whitelisted requestor by checking the yagna ID or other task information which cannot be faked
I am not entirely sure how I would structure this because I don't know how the system works behind the scenes, so any advice there would be welcome if my workflow is off.
It's a nice idea & it's doable.
You can differentiate requestors by their IDs. To get a node's ID (requestor or provider), just run yagna id list (the yagna daemon has to be running: yagna service run).
Probably the most challenging part would be to modify the ReactToProposal handler of the CompositeNegotiator. The current implementation casts the Proposal (a Demand in this context) into a ProposalView. Unfortunately, at this point the requestor_id is lost. For testing purposes you can just filter requestors directly in fn handle() and return Ok(ProposalResponse::RejectProposal {....}) when you want to block a requestor. Your field of interest is msg.demand.issuer_id.
If you want to introduce some kind of public/private key pair functionality, it could be achieved by adding a custom *demand/offer constraint* that is understood by both your requestor and provider. Unfortunately, at this time there is no public documentation on this topic.
Please feel free to reach out if you have further questions.
In Canvas, in order to have an LTI app authenticate, the site admin has to enter the JWK for the remote site. The format of a JWK is well defined:
{
  "kty": "RSA",
  "kid": "...",
  "use": "sig",
  "alg": "RS256",
  "n": "u6gqiV...",
  "e": "AQAB"
}
First, can we use a tool like openssl to create a key and generate a JWK from that? Currently we are writing code to do this using jose4j, but it's not even clear if that is necessary.
Second, Canvas is demanding optional fields like kid, alg, and use. We guessed that use should be "sig", made up kid: "1", and guessed alg: "RS256".
Is there a place that is accessible (i.e. not behind IMS Global's paywall) that defines what these should be? Is it standard or specific to Canvas?
We meet again. I've been poring over the LTI specs for months now, and am in the mood to see if I can spare others some headaches.
You may be familiar with validation schemes in which you use an SSL tool to generate a public and private key at the same time, entangled with each other. The private key is used to sign a payload, and since the payload itself is a factor in creating the signature, it cannot be intercepted and maliciously altered without invalidating it. The recipient is given the public key, used to verify that the payload is clean.
A JWK serves the same purpose as a public key. The only difference is, a developer doesn't need to email it to the recipient app's IT team in advance. The recipient of the JWT payload can retrieve it on demand; all it needs to know is what URI to ask. That means the keys can actually be replaced by the sender without breaking any functionality.
As I mentioned elsewhere, in a bit of a rant more appropriate for this question:
This security step is akin to getting an email from your bank, and rather than click a potentially-spam link therein, you call your bank directly to make sure the email is on the level.
Now the sender's JWKS endpoint doesn't really know ahead of time who's going to reach out to it, and may want to service multiple other entities, so it may actually supply an array of public keys to cover all bases. The recipient of course only cares about the one associated with the payload it just received, so within the JWT's header is a "kid" that can be matched up to the "kid" in one of those array elements, affiliated with the relevant key.
How to create a JWK? Go here. Dependencies are listed at the top, and they probably use openssl under the hood.
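If you'd rather generate the JWK in code, here is a minimal sketch using jose4j (the library mentioned in the question); the kid "1" is arbitrary, just as in the question:

import org.jose4j.jwk.JsonWebKey;
import org.jose4j.jwk.RsaJsonWebKey;
import org.jose4j.jwk.RsaJwkGenerator;

public class JwkDemo {
    public static void main(String[] args) throws Exception {
        // Generate a fresh 2048-bit RSA key pair, wrapped as a JWK
        RsaJsonWebKey jwk = RsaJwkGenerator.generateJwk(2048);
        // The optional fields Canvas asks for; "1" is an arbitrary key id
        jwk.setKeyId("1");
        jwk.setUse("sig");
        jwk.setAlgorithm("RS256");
        // Serialize only the public part - this is what you hand to Canvas
        System.out.println(jwk.toJson(JsonWebKey.OutputControlLevel.PUBLIC_ONLY));
    }
}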
A JWKS (JSON Web Key Set) is a method of exchanging the public keys between the tool and the platform, and of allowing each side to control the rotation of their keys. The format for a JWKS is a managed IETF standard (RFC 7517).
LTI 1.3 is based on the OIDC third-party initiated login flow, which in turn is built on top of OAuth2. However, a full working knowledge of these specifications is not required to integrate your application with LTI 1.3. IMS curates a collection of code examples on GitHub that might help you get started.
I want to test a REST service that returns the detail of a given entity identified by a UUID, i.e. my consumer pact has an interaction requesting a GET like this:
/cities/123e4567-e89b-12d3-a456-426655440000
So I need this specific record to exist in the database for the pact verifier to find it. In other projects I've achieved this by executing an SQL INSERT in the state setup, but in this case I'd prefer to use the microservice's JPA utilities for accessing the DB, because the data model is quite complex and using these utilities would save me much effort and make the test much more maintainable.
The problem is that these utilities do not allow specifying the identifier when you create a new record (they assign an automatic ID). So after creating the entity (in the state setup) I'd like to tell the pact verifier to use the generated ID rather than the one specified by the consumer pact.
As far as I know, Pact matching techniques are not useful here because I need the microservice to receive this specific ID. Is there any way for the verifier to be aware of the correct ID to use in the call to the service?
You have two options here:
Option 1 - Find a way to use the UUID from the pact file
This option (in my opinion) would be the better one, because you are using well-known values for your verification. With JPA, I think you may be able to disable the auto-generation of the ID. And if you are using Hibernate as the JPA provider, it may not generate an ID if you have provided it one (i.e. setting the ID on the entity to the one from the pact file before saving it). This is what I have done recently, as sketched below.
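As an illustration, a minimal sketch of that approach for a Pact-JVM JUnit provider test; City, CityRepository and the state name are made-up examples, and the package of @State varies between pact-jvm versions:

import java.util.UUID;
import au.com.dius.pact.provider.junit.State; // moved to junitsupport in pact-jvm 4.1+

public class CityProviderStates {

    private final CityRepository cityRepository; // hypothetical JPA repository

    public CityProviderStates(CityRepository cityRepository) {
        this.cityRepository = cityRepository;
    }

    @State("city 123e4567-e89b-12d3-a456-426655440000 exists")
    public void cityExists() {
        // Hypothetical entity whose @Id is NOT annotated with @GeneratedValue,
        // so Hibernate persists the value we assign instead of generating one.
        City city = new City();
        city.setId(UUID.fromString("123e4567-e89b-12d3-a456-426655440000"));
        city.setName("Springfield"); // illustrative data
        cityRepository.save(city);
    }
}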
Using a generator (as mentioned by Beth) would be a good mechanism for this problem, but there is no current way to provide a generator to use a specific value. They generate random ones on the fly.
Option 2 - Replace the ID in the URL
Depending on how you run the verification, you could use a request filter to change the UUID in the URL to the one which was created during the provider state callback. However, I feel this is a potentially bad thing to do, because you could change the request in a way that weakens the contract. You will not be verifying that your provider adheres to what the consumer specified.
If you choose this option, be careful to only change the UUID portion of the URL and nothing else.
For information on request filters, have a look at Gradle - Modifying the requests before they are sent and JUnit - Modifying the requests before they are sent in the Pact-JVM readmes.
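To give a flavour of the JUnit variant, a hedged sketch: the exact package of @TargetRequestFilter depends on your pact-jvm version, createdCityId is a hypothetical value captured during the state callback, and the cast assumes pact-jvm replays requests as Apache HttpRequestBase instances:

import java.net.URI;
import java.util.UUID;
import org.apache.http.HttpRequest;
import org.apache.http.client.methods.HttpRequestBase;
import au.com.dius.pact.provider.junit.TargetRequestFilter; // package varies by version

public class VerificationRequestFilter {

    private UUID createdCityId; // set by the provider state callback (hypothetical)

    @TargetRequestFilter
    public void replaceCityId(HttpRequest request) {
        HttpRequestBase base = (HttpRequestBase) request;
        // Swap only the trailing UUID segment of /cities/{uuid}, nothing else
        String path = base.getURI().getPath()
                .replaceFirst("[0-9a-fA-F-]{36}$", createdCityId.toString());
        base.setURI(URI.create(path));
    }
}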
Unfortunately not. The provider-side verifier takes this information from the pact file itself and so can't know how to send anything else.
The best option is to use provider states to manage the injection of the specific record prior to this test case (or to just have the correct record in there in the first place).
You can use the JPA libraries during the provider state setup to modify the UUID in the record to what you're expecting.
If you are using pact-jvm on both the consumer and provider sides, I believe you may be able to use 'generators', but you'll need to look up the documentation on that as I haven't used them.
We are using a microservices architecture where top services are used for exposing REST APIs to end users and backend services do the work of querying the database.
When we get 1 user request we make ~30k requests to the backend service. We are using RxJava for the top service so all 30k requests get executed in parallel.
We are using HAProxy to distribute the load between backend services.
However when we get 3-5 user requests we are getting network connection exceptions: No Route to Host exceptions, socket connection exceptions.
What are the best practices for this kind of use case?
Well, you ended up with the classical microservice mayhem. It's completely irrelevant what technologies you employ - the problem lies in the way you applied the concept of microservices!
It is natural in this architecture that services call each other (preferably that should happen asynchronously!!). Since I know only a little about your service APIs, I'll have to make some assumptions about what went wrong in your backend:
I assume that a user makes a request to one service. This service will now (obviously synchronously) query another service and receive these 30k records you described. Since you probably have to know more about these records, you now have to make another request per record to a third service/endpoint to aggregate all the information your frontend requires!
This shows me that you probably got the whole thing with bounded contexts wrong! So much for the analytical part. Now to the solution:
Your API should return all the information along with the query that enumerates them! Sometimes that could seem like a contradiction of the kind of isolation and authority over data/state that the microservices pattern specifies - but it is not feasible to isolate data/state in one service only, because that leads to the problem you currently have: all other services HAVE to query that data every time to be able to return correct data to the frontend! However, it is possible to duplicate it as long as the authority over the data/state is clear!
Let me illustrate that with an example: let's assume you have a classical shop system. Articles are grouped. Now you would probably write two microservices - one that handles articles and one that handles groups! And you would be right to do so! You might have already decided that the group-service will hold the relation to the articles assigned to a group! Now if the frontend wants to show all items in a group - what happens? The group-service receives the request and returns 30,000 article numbers in a beautiful JSON array that the frontend receives. This is where it all goes south: the frontend now has to query the article-service for every article it received from the group-service!!! Aaand you're screwed!
Now there are multiple ways to solve this problem: one is (as previously mentioned) to duplicate article information to the group-service: every time an article is assigned to a group using the group-service, it has to read all the information for that article from the article-service and store it, to be able to return it with the get-me-all-the-articles-in-group-x query. This is fairly simple, but keep in mind that you will need to update this information when it changes in the article-service, or you'll be serving stale data from the group-service. Event sourcing can be a very powerful tool in this use case and I suggest you read up on it! You can also use simple messages sent from one service (in this case the article-service) to a message bus of your preference and make the group-service listen and react to these messages, as sketched below.
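To give a flavour of that message-bus variant, a hedged sketch using Spring Kafka; the topic name, event type and local store are all made-up illustrations:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
class ArticleUpdatedListener {

    private final GroupArticleStore store; // hypothetical local copy of article data

    ArticleUpdatedListener(GroupArticleStore store) {
        this.store = store;
    }

    // React to events published by the article-service so that the
    // group-service's duplicated article data never goes stale.
    @KafkaListener(topics = "article-updated")
    void onArticleUpdated(ArticleUpdatedEvent event) {
        store.upsert(event.getArticleId(), event.getName(), event.getPrice());
    }
}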
Another very simple quick-and-dirty solution to your problem could be just to provide a new REST endpoint on the article-service that takes an array of article IDs and returns the information for all of them, which would be much quicker. This could probably solve your problem very quickly.
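A sketch of such a batch endpoint with Spring MVC; Article and ArticleRepository are hypothetical stand-ins for your model:

import java.util.List;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ArticleBatchController {

    private final ArticleRepository repository; // hypothetical Spring Data repository

    ArticleBatchController(ArticleRepository repository) {
        this.repository = repository;
    }

    // One round trip replaces one request per article id.
    @PostMapping("/articles/batch")
    List<Article> byIds(@RequestBody List<Long> ids) {
        return repository.findAllById(ids); // a single SELECT ... WHERE id IN (...)
    }
}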
A good rule of thumb in a backend with microservices is to aspire to a constant number of these cross-service calls, which means the number of calls that go across service boundaries should never be directly related to the amount of data that was requested! We closely monitor which service calls are made because of a given request that comes through our API, to keep track of which services call which other services and where our performance bottlenecks will arise or have been caused. Whenever we detect that a service makes many (there is no fixed threshold, but every time I see >4 I start asking questions!) calls to other services, we investigate why and how this could be fixed! There are some great metrics tools out there that can help you with tracing requests across service boundaries!
Let me know if this was helpful or not, and whatever solution you implemented!
This is a theoretical question.
Imagine an ASP.NET website. By clicking a button, the site sends mail. Now:
1. I can send mail asynchronously with code
2. I can send mail using QueueBackgroundWorkItem
3. I can call a ONEWAY web service located in the same website
4. I can call a ONEWAY web service located in ANOTHER website (or another subdomain)
None of the above solutions wait for the mail operation to be completed, so they are fine.
My question is: why should I use the service solution instead of the other solutions? Is there an advantage?
The 4th solution adds additional TCP/IP traffic to use the service, so it's not efficient, right?
If so, using a service under the same website (3rd solution) also generates additional traffic. Is that correct?
I need to understand why people use services under the same website. Is there any reason besides making something available to AJAX calls?
Any information would be great. I really need to get opinions.
The most appropriate architecture will depend on several factors:
the volume of emails that needs to be sent
the need to reuse the email sending capability beyond the use case described
the simplicity of implementation, deployment, and maintenance of the code
Separating out the sending of emails into a service, either in the same or another web application, will make it available to other applications and to client-side code. It also adds some complexity to the code calling the service, as it will need to deal with the case when the service is not available and handle errors that may occur when placing the call.
Using a separate web application for the service is useful if the volume of emails sent is really large, as it allows you to offload the work to one or more servers if needed. Given the use case described (user clicks a button), this seems rather unlikely, unless the website will have really large traffic. Creating a separate web application adds significant development, deployment, and maintenance work, initially and over time.
Unless the volume of emails to be sent is really large (millions per day) or there is a need to reuse the email capability in other systems, creating the email sending function within the same web application (the first two options listed in the question) is almost certainly the best way to go. It will result in the least amount of initial work, is easy to deploy, and (perhaps most importantly) will be the easiest to maintain.
An important concern to pay significant attention to when implementing an email sending function is robustness. Robustness can be achieved with any of the possible architectures and is somewhat of a different concern from the one emphasized by the question. However, it is important to consider the proper course of action if (1) the receiving SMTP server refuses to take the message (e.g., mailbox full; non-existent account; rejection as spam) or (2) an NDR is generated after the message is sent (e.g., rejection as spam). Depending on the kind of email sent, it may be OK to ignore these errors, or some corrective action may be needed (e.g., retry sending, alert the user at the origination of the emails, ...)
I'm working on an application that allows users to send internal messages to one another.
I'll tell you what the current setup is and please help me figure out how to make it work or perhaps suggest another angle to take. We're using BlazeDS with Spring.
User A listens for messages on message topic Chat.A
User B listens for messages on message topic Chat.B
Both users listen for global messages (system-wide messages) on topic Chat.System
So we have a multi-topic consumer for the personal message topic and one for the global message topic.
So a couple of questions I have:
Is it better to do it as two distinct consumers (that share the same handler function) or as one multi-topic consumer?
How do I check that client A is actually the one listening to Chat.A and not just someone else who knows how to write BlazeDS clients? We have Spring Security in place, but how can I listen for subscription requests and block them if the user name (pulled from the security context) doesn't match the sub-topic they requested?
I've also read about selectors. That looked promising, but again, how do I check, when a consumer uses selector="for == 'A' || for == 'System'", that the consumer belongs to a client that has authenticated as that "for" user?
How do selectors compare/contrast to sub-topics? What's the best situation for each of them?
A selector is basically an expression you can use to filter which messages will be dispatched through your consumer. According to the docs, it uses SQL 92 conditional expression syntax:
http://livedocs.adobe.com/blazeds/1/blazeds_devguide/help.html?content=messaging_6.html
A subtopic is sort of a special case of a selector, filtering out messages whose "DSSubtopic" header doesn't match the provided value.
The important thing to understand with both of these is that the client determines which messages are sent to it, and as such it cannot be relied upon entirely for security.
To implement secure server-based filtering of messages based on an authenticated user's identity, see my answer to a related question here:
Flex Messaging Security
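To sketch what such a server-side check can look like, here is a hedged example of a custom BlazeDS messaging adapter; the convention that a user's personal subtopic equals their user name (plus a shared "System" subtopic) is an assumption based on the question's setup:

import java.security.Principal;
import flex.messaging.FlexContext;
import flex.messaging.MessageException;
import flex.messaging.messages.AsyncMessage;
import flex.messaging.messages.CommandMessage;
import flex.messaging.services.messaging.adapters.ActionScriptAdapter;

public class SecuredChatAdapter extends ActionScriptAdapter {

    @Override
    public boolean handlesSubscriptions() {
        return true; // subscribe/unsubscribe commands are routed through manage()
    }

    @Override
    public Object manage(CommandMessage command) {
        if (command.getOperation() == CommandMessage.SUBSCRIBE_OPERATION) {
            // The "DSSubtopic" header carries the requested subtopic
            String subtopic = (String) command.getHeader(AsyncMessage.SUBTOPIC_HEADER_NAME);
            Principal user = FlexContext.getUserPrincipal();
            boolean system = "System".equals(subtopic);
            boolean ownTopic = user != null && user.getName().equals(subtopic);
            if (!system && !ownTopic) {
                throw new MessageException("Not allowed to subscribe to " + subtopic);
            }
        }
        return super.manage(command);
    }
}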
As far as multiple Consumers vs. a MultiTopicConsumer, I'm not sure there. They're both going to use the same underlying ChannelSet, so it ought not to make a big performance difference. I think it's mostly a question of whether it's convenient to have one event handler that responds to all messages from the MultiTopicConsumer or whether it's easier to have separate event handlers for each Consumer.
I usually use subtopics for this. But if you do it that way, make sure that you disable subscriptions to wildcard subtopics.