Intended usage of /network-map/ack-parameters: parameters hash and/or NodeKey? (Corda)

A Corda node can accept a network parameters update with the /network-map/ack-parameters POST request. The parameters hash is sent to the network operator with this request.
Therefore, there are two questions:
Is it intended that the network operator can know from which node this acceptance request came? In other words, how can the network operator know which node accepted the new network parameters update?
If I check the Cordite Network Map Service implementation (https://gitlab.com/cordite/network-map-service/blob/master/src/main/kotlin/io/cordite/networkmap/service/NetworkMapService.kt#L294), the submitted parameter is interpreted as a key for NodeInfo in the NodeInfo storage instead of as the parameters hash. That looks inconsistent with how Corda defines the /ack-parameters request parameters. Is the Cordite implementation of the Network Map Service adequate in this respect?

I'll do my best to use my own intuition to answer these, as they're quite specific.
Yes, it's intended that the network operator knows where the acceptance request comes from, as that is part of properly managing the parties on the network. You can check the Corda source code to see how the "doorman" handles new node participants: https://github.com/corda/corda
I don't think Cordite needs to be semantically identical to the way Corda does it. That being said, it's certainly adequate.


Replies are not always delivered to the desired reply listener container

We are applying the request/reply semantics using spring-kafka with the ReplyingKafkaTemplate.
However, we noticed that sometimes the reply does not end up where it should.
Here is a rough description of our setup:
Service A
2 instances
consuming messages from topic-a, which has 2 partitions (each instance gets 1 partition assigned).
Service A is the initiator.
Service B:
2 instances, consumes messages from topic-b, which also has 2 partitions.
Reacts to incoming messages from A and returns a reply message using the @SendTo annotation (see the sketch below).
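For context, a minimal sketch of such a handler (the topic name is from our setup; everything else is illustrative):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.messaging.handler.annotation.SendTo;

public class ServiceBReplier {

    @KafkaListener(topics = "topic-b")
    @SendTo // with no value, the reply is sent to the topic/partition named in the request's reply headers
    public String handle(String request) {
        return "reply to " + request;
    }
}
```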
Observed behaviour:
When an instance of service A, e.g. A1, sends a message to service B, the send fails with a reply timeout. The request is consumed by B successfully and a reply is returned, but it is consumed by the other instance, e.g. A2. From the logging I can see that A1 gets topic-a-0 assigned, whereas A2 gets topic-a-1 assigned.
Suggestions from the docs:
Our scenario is described in this section of the docs: https://docs.spring.io/spring-kafka/reference/html/#replying-template
It gives a couple suggestions:
Give each instance a dedicated reply topic
Use reply partition header and use dedicated partitions for each instance
Our setup is based on a single topic for the whole service, so all incoming events and reply events are sent to and consumed from this topic. Option 1 is therefore not desirable in our situation.
The downside of option 2 is that you cannot use the group management feature, which is a pity because our services run on Kubernetes and we'd like to use group management for maximum flexibility.
A third option?
So I was wondering if there was a third option:
Why not use group management, determine the assigned topic partitions of the reply container on the fly when sending a message, and set the reply partition header accordingly?
It looks like the ReplyingKafkaTemplate#getAssignedReplyTopicPartitions method provides exactly this information.
This way, the partitions are not fixed and we can still use the group management feature.
The only downside I can foresee is that if the partitions are rebalanced after the request is sent but before the reply is received, the request could fail.
I have already tested this and it looks like it works. The main reason for posting this question is to check whether my idea makes sense and whether there are any caveats to take into account. I'm also wondering why this is not supported by spring-kafka out of the box.
If my solution makes sense, I am willing to raise an enhancement issue and provide a PR on the spring-kafka project.
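For reference, a minimal sketch of this approach (class, bean and topic names here are illustrative, not our actual code; the essential part is reading getAssignedReplyTopicPartitions() and setting the reply headers per send):

```java
import java.nio.ByteBuffer;
import java.util.Collection;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.kafka.requestreply.RequestReplyFuture;
import org.springframework.kafka.support.KafkaHeaders;

public class RuntimeReplyPartitionSender {

    private final ReplyingKafkaTemplate<String, String, String> template;

    public RuntimeReplyPartitionSender(ReplyingKafkaTemplate<String, String, String> template) {
        this.template = template;
    }

    public RequestReplyFuture<String, String, String> send(String requestTopic, String payload) {
        ProducerRecord<String, String> record = new ProducerRecord<>(requestTopic, payload);

        // Pick one of the partitions currently assigned to this instance's reply container
        // (the reply container must already be running and have an assignment).
        Collection<TopicPartition> assigned = template.getAssignedReplyTopicPartitions();
        TopicPartition replyPartition = assigned.iterator().next();

        record.headers().add(KafkaHeaders.REPLY_TOPIC, replyPartition.topic().getBytes());
        // The reply-partition header is a four-byte big-endian int.
        record.headers().add(KafkaHeaders.REPLY_PARTITION,
                ByteBuffer.allocate(4).putInt(replyPartition.partition()).array());

        return template.sendAndReceive(record);
    }
}
```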
The issue, as you describe, is that there is no guarantee we'll get the same partition(s) after a rebalance.
The "third option" is to use a different group.id for each instance and set sharedReplyTopic=true. In this case, all instances will get the reply and it will be discarded by the instance(s) that did not send the request.
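For completeness, a minimal wiring sketch of that variant (bean and topic names are illustrative):

```java
import java.util.UUID;

import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;

public class SharedReplyTopicConfig {

    // The factories would normally be injected Spring beans; they are parameters here
    // only to keep the sketch self-contained.
    public ReplyingKafkaTemplate<String, String, String> replyingTemplate(
            ProducerFactory<String, String> producerFactory,
            ConcurrentKafkaListenerContainerFactory<String, String> containerFactory) {

        ConcurrentMessageListenerContainer<String, String> replies =
                containerFactory.createContainer("replies-topic");
        // A per-instance group.id, so every instance receives every reply.
        replies.getContainerProperties().setGroupId("service-a-replies-" + UUID.randomUUID());

        ReplyingKafkaTemplate<String, String, String> template =
                new ReplyingKafkaTemplate<>(producerFactory, replies);
        // Replies carrying a correlation id this instance did not send are silently discarded.
        template.setSharedReplyTopic(true);
        return template;
    }
}
```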
The best solution, however, is to use a unique reply topic for each instance.

DICOM [Storage SCP] : What should be the Associate Response if no proposed presentation context is accepted?

My application is a Storage SCP. A third-party Storage SCU connects to my application and proposes two presentation contexts, neither of which the SCP supports. What Associate response should I send in this case?
1. Set the result of each presentation context to "Rejected - Abstract syntax not supported" and send an Associate Accept. This way, none of the presentation contexts in the Associate Response will be accepted; an Associate Accept does not seem to make sense here.
2. Send an Associate Reject altogether.
I am doing option 2 now, but I am not sure this is the correct implementation. I searched the specifications but could not find anything conclusive. Please point me to the location in the specifications that clearly discusses this situation.
AFAIK, there is no explicit rule, but I think it is clearly implied by the reasons that the SCP can give for association and/or presentation context rejection.
Referring to PS 3.8, 7.1.1.9, there is a positive list of valid reasons for association rejection. There is no reason defined which is suitable to indicate that the association is rejected because none of the proposed presentation contexts can be accepted.
For presentation context rejection, PS3.8, Table 9-18 defines the possible reasons. I suppose that either
3 - abstract-syntax-not-supported (provider rejection)
or
4 - transfer-syntaxes-not-supported (provider rejection)
is appropriate to express the reason for rejection. In other words, I do not think that your implementation is correct. You should accept the association, reject all presentation contexts, and expect the SCU to release/abort the association.
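To make the Table 9-18 values concrete, here is a small plain-Java sketch, not tied to any particular DICOM toolkit, of how an SCP could choose the result value for each proposed presentation context while still accepting the association:

```java
// Result/Reason values for a presentation context item in the A-ASSOCIATE-AC (PS3.8 Table 9-18).
public final class PresentationContextResult {

    public static final int ACCEPTANCE = 0;
    public static final int USER_REJECTION = 1;
    public static final int NO_REASON_PROVIDER_REJECTION = 2;
    public static final int ABSTRACT_SYNTAX_NOT_SUPPORTED = 3;   // provider rejection
    public static final int TRANSFER_SYNTAXES_NOT_SUPPORTED = 4; // provider rejection

    // Decide the result for one proposed presentation context. The association itself is
    // still accepted (A-ASSOCIATE-AC); only the individual contexts are rejected.
    public static int resultFor(boolean abstractSyntaxSupported, boolean anyTransferSyntaxSupported) {
        if (!abstractSyntaxSupported) {
            return ABSTRACT_SYNTAX_NOT_SUPPORTED;
        }
        if (!anyTransferSyntaxSupported) {
            return TRANSFER_SYNTAXES_NOT_SUPPORTED;
        }
        return ACCEPTANCE;
    }
}
```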

DICOM: C-Move without C-Find (Query Retrieve SCU)

In DICOM, following are the classes defined for C-Find and C-Move at Study Root.
Study Root Query/Retrieve Information Model - FIND: 1.2.840.10008.5.1.4.1.2.2.1
Study Root Query/Retrieve Information Model - MOVE: 1.2.840.10008.5.1.4.1.2.2.2
I have implemented both Query Retrieve SCP and SCU in multiple applications. In all those cases, I always implemented both classes: I do a C-Find first to get the list of matching data, and then, based on the result, I do a C-Move (automatically or manually) to get the instances. All those implementations are working fine.
Recently, I have been working on an application that combines DICOM with another, private protocol to fulfill some specific requirements. It struck me: is it possible to do a C-Move directly, without doing a C-Find, as an SCU?
I already know the identifier (StudyInstanceUID) to retrieve, and I also know that it is present on the SCP.
I looked into the specifications but could not find anything conclusive. I am aware that C-Find and C-Move can be issued by the SCU to the SCP on different connections/associations. So at first glance, what I am thinking looks possible and legal.
I have worked with many third-party DICOM applications; none of them implements the SCU the way I am thinking. All SCUs implement both C-Find and C-Move.
Question:
Is it legal per DICOM, and practical, to implement a Query Retrieve SCU with the C-Move command but without the C-Find command? Please point me to the reference in the specifications if possible.
Short answer: yes, this is perfectly legal per the DICOM specification.
Long answer: Let's consider the DCMTK reference DICOM Q/R implementation. It provides a set of basic SCU command line tools, namely findscu and movescu. The idea is to pipe the output of findscu to movescu to construct a valid C-MOVE (SCU) request.
In your case you are simply replacing the findscu step with a private implementation that does not rely on the publicly defined C-FIND (SCU) protocol but on another mechanism (an extension to DICOM).
So yes, your C-MOVE (SCU) implementation is perfectly valid, since there is no requirement to perform a C-FIND (SCU) as part of this retrieve.
I understand you are not trying to back up an entire database using C-MOVE (SCU); that was just a possible scenario in which someone would use C-MOVE (SCU) without first querying for a valid C-FIND (SCU) result.
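For illustration, the C-MOVE request identifier in this situation only needs the retrieve level and the UID you already know. A minimal sketch, assuming the dcm4che3 Attributes API (any other toolkit works the same way):

```java
import org.dcm4che3.data.Attributes;
import org.dcm4che3.data.Tag;
import org.dcm4che3.data.VR;

public class MoveWithoutFind {

    // SOP Class: Study Root Query/Retrieve Information Model - MOVE
    static final String STUDY_ROOT_MOVE = "1.2.840.10008.5.1.4.1.2.2.2";

    // Identifier for a Study-level C-MOVE-RQ when the StudyInstanceUID is already known
    // from another source (no prior C-FIND needed).
    static Attributes moveIdentifier(String studyInstanceUid) {
        Attributes keys = new Attributes();
        keys.setString(Tag.QueryRetrieveLevel, VR.CS, "STUDY");
        keys.setString(Tag.StudyInstanceUID, VR.UI, studyInstanceUid);
        return keys;
    }
}
```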

Storage Commitment Service (push model): how do I get the result back to my SCU?

I plan to implement the Storage Commitment Service to verify whether files previously sent to the storage were safely stored.
My architecture is very simple and straightforward: my SCU sends some secondary capture images to the storage, and I want to be sure they are safely stored before deleting them.
I am going to adopt the push model, and I wonder what steps/features I need to implement to accomplish this.
What I understood is:
1. I need to issue an N-ACTION request with SOP Class UID 1.2.840.10008.1.20.1 and add to the request a transaction identifier together with a list of Referenced SOP Class UID / Referenced SOP Instance UID pairs, where the Referenced SOP Instance UIDs are the UIDs of the secondary capture images I previously sent to the storage and the Referenced SOP Class UID in my case is the SOP Class identifier representing Secondary Capture Image.
2. Wait for the N-ACTION response to see whether the N-ACTION request succeeded or not.
3. Get the response from the storage in the form of an N-EVENT-REPORT.
But when? How does the storage give me back the N-EVENT-REPORT along with the results? Does my AE (the SCU) have to implement some SCP features? Or do I need to issue an N-EVENT request to get an N-EVENT-REPORT?
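For reference, a sketch of the N-ACTION dataset from step 1, assuming dcm4che3's Attributes API (the UIDs below are the standard Storage Commitment Push Model and Secondary Capture Image Storage UIDs):

```java
import org.dcm4che3.data.Attributes;
import org.dcm4che3.data.Sequence;
import org.dcm4che3.data.Tag;
import org.dcm4che3.data.VR;

public class StorageCommitmentRequest {

    static final String STORAGE_COMMITMENT_PUSH_MODEL_SOP_CLASS = "1.2.840.10008.1.20.1";
    // The N-ACTION is addressed to this well-known Storage Commitment SOP Instance.
    static final String STORAGE_COMMITMENT_WELL_KNOWN_INSTANCE = "1.2.840.10008.1.20.1.1";
    static final String SECONDARY_CAPTURE_IMAGE_STORAGE = "1.2.840.10008.5.1.4.1.1.7";

    // Action Information for N-ACTION-RQ, Action Type ID 1 (Request Storage Commitment).
    static Attributes buildActionInformation(String transactionUid, String... scInstanceUids) {
        Attributes action = new Attributes();
        action.setString(Tag.TransactionUID, VR.UI, transactionUid);
        Sequence refSops = action.newSequence(Tag.ReferencedSOPSequence, scInstanceUids.length);
        for (String instanceUid : scInstanceUids) {
            Attributes item = new Attributes();
            item.setString(Tag.ReferencedSOPClassUID, VR.UI, SECONDARY_CAPTURE_IMAGE_STORAGE);
            item.setString(Tag.ReferencedSOPInstanceUID, VR.UI, instanceUid);
            refSops.add(item);
        }
        // The commitment result comes back later as an N-EVENT-REPORT referencing the same Transaction UID.
        return action;
    }
}
```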
Have a look at the sequence diagram in Roni's DICOM blog (the article referenced in the answer below).
Now, about your question: the following explanation assumes the same association is used for the entire communication. For communication over multiple associations, refer to that article from Roni.
But when?
Immediately, on the same connection/association. After receiving the N-ACTION response, you should wait for the timeout configured in your application. Before the timeout expires, you should get the N-EVENT-REPORT.
How does the storage give me back the N-EVENT-REPORT along with the results?
When you receive the N-ACTION response from the SCP, the SCP is effectively saying "OK, I understood what you want. Now wait while I fetch your data...". So you wait. When the SCP is ready with all the necessary data (the check list), it sends it back on the same association through an N-EVENT-REPORT. You parse the report, do your stuff, send a response to the SCP saying "Fine, I am done with you.", and close the association.
Does my AE (the SCU) have to implement some SCP features?
No (in most cases); you do not need to implement any SCP features in either case (single association or multiple associations). You should get the N-EVENT-REPORT on the same association, as mentioned above. DICOM works over TCP/IP, and the client/server concept in TCP is limited to who establishes the connection and who listens for connections. Once the connection is established, either side can read and write data on the socket.
In rare cases, the SCP sends the N-EVENT-REPORT by initiating a new association on its own. In that case, the SCU does need to implement SCP features. As far as I am aware this model is not really in use: it is difficult to implement for both SCP and SCU, and it needs extra configuration which everyone tends to avoid, so it can mostly be neglected. I call it rare because I have never (at least so far) come across such an implementation. But yes, it is a valid case for valid reasons.
Or do I need to issue an N-EVENT request to get an N-EVENT-REPORT?
No, as said above. Refer to this:
J.3.3 Notifications
The DICOM AEs that claim conformance to this SOP Class as an SCP shall invoke the N-EVENT-REPORT request. The DICOM AEs that claim conformance to this SOP Class as an SCU shall be capable of receiving the N-EVENT-REPORT request.
That is, the SCU should be able to process the N-EVENT-REPORT; it will NOT issue it.
There are three different sequences of events possible. I could describe them here, but this article is really excellent: Roni's DICOM blog
I have nothing to add to what is written there.

RADIUS with MS-CHAPv2 Explanation

I can't find any flowcharts of how the communication works between peers. I know how it works in RADIUS with PAP enabled, but it appears that with MS-CHAPv2 there's a whole lot more work to be done.
I'm trying to develop a RADIUS server to receive and authenticate user requests. Please help me in the form of information, not code.
MSCHAPv2 is pretty complicated and is typically performed within another EAP method such as EAP-TLS, EAP-TTLS or PEAP. These outer methods encrypt the MSCHAPv2 exchange using TLS. The figure below, for example, shows a PEAP flowchart where a client or supplicant establishes a TLS tunnel with the RADIUS server (the Authentication Server) and performs the MSCHAPv2 exchange.
The MSCHAPv2 exchange itself can be summarized as follows:
The AS starts by generating a 16-byte random server challenge and sends it to the Supplicant.
The Supplicant also generates a random 16-byte peer challenge. Then the challenge response is calculated based on the user's password. This challenge response is transmitted back to the AS, along with the peer challenge.
The AS checks the challenge response.
The AS calculates a peer challenge response based on the password and peer challenge.
The Supplicant checks the peer challenge response, completing the MSCHAPv2 authentication.
If you'd like to learn about the details and precise calculations involved, feel free to check out my thesis here. Sections 4.5.4 and 4.5.3 should contain all the information you need in order to implement a RADIUS server capable of performing an MSCHAP exchange.
As you can see in the figure, many different keys are derived and used. This document provides a very intuitive insight into their functionality. However, the CSK is not explained in this document. This key is optionally used for "cryptobinding", i.e. to prove to the AS that both the TLS tunnel and the MSCHAPv2 exchange were performed by the same peer. It is possible to derive the MSK from only the TLS master secret, but then you will be vulnerable to a relay attack (the thesis also contains a research paper which gives an example of such an attack).
Finally, the asleap readme gives another good and general step-by-step description of the MSCHAPv2 protocol, which might help you further.
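To make the attribute layout concrete: the MS-CHAP2-Response value is 50 bytes (Ident, Flags, Peer-Challenge, Reserved, NT-Response; see RFC 2548), and the server never decrypts the password; it recomputes the expected NT-Response from the stored password hash and compares. Below is a plain-Java sketch (names are mine; only the parsing and the RFC 2759 challenge hash are shown):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class MsChapV2Sketch {

    // MS-CHAP2-Response value is 50 bytes:
    //   Ident (1) | Flags (1) | Peer-Challenge (16) | Reserved (8) | NT-Response (24)
    static byte[] peerChallenge(byte[] msChap2Response) {
        return Arrays.copyOfRange(msChap2Response, 2, 18);
    }

    static byte[] ntResponse(byte[] msChap2Response) {
        return Arrays.copyOfRange(msChap2Response, 26, 50);
    }

    // ChallengeHash from RFC 2759: the first 8 bytes of
    // SHA1(PeerChallenge || AuthenticatorChallenge || UserName)
    static byte[] challengeHash(byte[] peerChallenge, byte[] authenticatorChallenge, String userName)
            throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(peerChallenge);
        sha1.update(authenticatorChallenge);                        // the 16-byte MS-CHAP-Challenge attribute
        sha1.update(userName.getBytes(StandardCharsets.US_ASCII));  // user name without any domain prefix
        return Arrays.copyOf(sha1.digest(), 8);
    }

    // Verification outline (not implemented here): the AS recomputes
    // NT-Response = ChallengeResponse(ChallengeHash, MD4(UTF-16LE(password)))
    // using three DES operations (RFC 2759, GenerateNTResponse) and compares it with the
    // 24-byte NT-Response received from the NAS.
}
```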
Unfortunately I can't add any more comments; the requirement is for me to have 50 reputation.
To your request:
My lab environment is an SSL-VPN used with a RADIUS AS.
It is constructed from the following 3 items:
End-User -> there is no 'client' installed; the connection starts through a web portal, so the client is the web browser.
NAS -> the machine that provides the web portal (where the end user enters the username & password) AND acts as a RADIUS client, transferring requests to the AS.
AS (RADIUS) -> this is me. I receive the Access-Requests and validate the username & password.
So, in accordance with that, what I receive in the Access-Request is:
MS-CHAP2-Response:
7d00995134e04768014856243ebad1136e3f00000000000000005a7d2e6888dd31963e220fa0b700b71e07644437bd9c9e09
MS-CHAP-Challenge: 838577fcbd20e293d7b06029f8b1cd0b
According to RFC 2548:
MS-CHAP-Challenge: This Attribute contains the challenge sent by a NAS to a Microsoft Challenge-Handshake Authentication Protocol (MS-CHAP) user. It MAY be used in both Access-Request and Access-Challenge packets.
MS-CHAP2-Response: This Attribute contains the response value provided by an MS-CHAP-V2 peer in response to the challenge. It is only used in Access-Request packets.
If I understand correctly (and please bear with me, this is all very new to me), based on your flowchart the AS is also the authenticator that initiates the LCP.
In my case, the LCP is initiated by the NAS, so my life is made simpler and I only get the Access-Request without needing to create the tunnel.
My question now is: how do I decrypt the password? I understand there's a random 16-byte challenge, but that is held by the NAS.
From my recollection, I only need to know the shared secret and decrypt the whole thing using the algorithm described in your thesis.
But the algorithm is huge; I've tried different sites to see which part of it the AS is supposed to use, and failed in each attempt to decrypt.
Since I can't ask for help any more in this thread, I can only say this little textbox cannot hold the amount of gratitude I have for your help; I am truly lucky to have you see my thread.
Do email me; my contact info is in my profile.
Also, for some reason I can't mark your answer as a solution.
"is typically performed within another EAP method such as EAP-TLS, EAP-TTLS or PEAP."
Well...
I have a RADIUS Win2008 server here, configured with NO EAP, only MS-CHAPv2 encryption, to replace PAP.
This is why a lot of what you said and what I said wasn't adding up...
I'm not a MITM, I'm the AS, and my NAS (the one who knocks) is the RADIUS client/authenticator.
When the user enters the username & password, a random encryption (which I'm now looking for) is created with MS-CHAPv2, and all of the above is irrelevant.
With the items received from the Authenticator, which again are:
- Username, MS-CHAP-Challenge, MS-CHAP2-Response
The AS performs a magical ceremony to come up with the following:
- Access-Accept
- MPPE-Send-Key
- MPPE-Recv-Key
- MS-CHAP2-Success
- MS-CHAP-DOMAIN
This is from a working scenario, where I have a RADIUS server, a RADIUS client and a user.
The NOT-working scenario is the one where I am the RADIUS server (AS), because that's my goal: building a RADIUS server, not a MITM.
So all I have left is finding out what decryption algorithm is needed for those, and how.
