What exactly is "bulk data" in WADO-RS standard? - dicom

[Referring to http://dicom.nema.org/medical/dicom/2016e/output/chtml/part18/sect_6.5.html]
When we are talking about WADO-RS, NEMA mentions that:
Every request (we'll leave out /metadata and /rendered requests for now) can have an Accept header of one of three types:
1. multipart/related; type="application/dicom" [dcm-parameters] (DICOM File Format, as defined in PS3.10)
2. multipart/related; type="application/octet-stream" [dcm-parameters] (bulk data)
3. multipart/related; type="{media-type}" [dcm-parameters] (bulk data)
For all these Accept types, the response is multipart, with each part corresponding to a particular instance. Now I understand the first case (application/dicom), in which we'll have to fill each response part with each SOP Instance's .dcm counterpart (e.g., if the WADO-RS request is for a Study, the multipart response will have one part for each SOP Instance's DICOM file stream).
But when it comes to bulk data, I have a few questions:
What exactly is bulk data in the WADO-RS standard? Is it only the (7FE0,0010) Pixel Data element, or is it all the binary elements of an SOP Instance combined into one single blob?
If it is just (7FE0,0010), then there will be one HTTP response part for every SOP Instance. How will the WADO-RS client then know which bulk data belongs to which SOP Instance?
Information about this is limited on the internet. Hence asking here.
If any one has any article about this, that's welcome too.
Ps: I am new to DICOM/DICOMWeb

What I've generally used is to look at what's included in (or rather, excluded by) the Composite Instance Retrieve Without Bulk Data service class:
(7FE0,0010) Pixel Data
(7FE0,0008) Float Pixel Data
(7FE0,0009) Double Float Pixel Data
(0028,7FE0) Pixel Data Provider URL
(5600,0020) Spectroscopy Data
(60xx,3000) Overlay Data
(50xx,3000) Curve Data
(50xx,200C) Audio Sample Data
(0042,0011) Encapsulated Document
(5400,1010) Waveform Data (within a (5400,0100) Waveform Sequence)
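Regarding the second question (how a client ties a bulk data part back to an instance): as I understand PS3.18, each part of the multipart response carries a Content-Location header whose URI matches the BulkDataURI referenced in the study's /metadata response, and that URI is the correlation key. A minimal sketch, assuming a hypothetical DICOMweb server URL and study UID (neither is from the original post):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WadoRsBulkDataFetch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://pacs.example.com/dicom-web/studies/1.2.840.99999.1"))
                .header("Accept", "multipart/related; type=\"application/octet-stream\"")
                .build();

        HttpResponse<byte[]> response =
                client.send(request, HttpResponse.BodyHandlers.ofByteArray());

        // Each part of the multipart body carries a Content-Location header with
        // the bulk data URI. The /metadata response for the study references the
        // same URI as a BulkDataURI, which is how a client matches a bulk data
        // part to the SOP Instance and attribute it belongs to.
        System.out.println(response.headers()
                .firstValue("Content-Type").orElse("(no Content-Type)"));
    }
}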

Related

Decode JSON RPC request to a contract

I am currently using some website to read some useful data. Using the browser's Inspect > Network tab, I can see this data comes from JSON RPC requests to (https://bsc-dataseed1.defibit.io/), the publicly available BSC explorer API endpoint.
These requests have the following format:
Request params:
{"jsonrpc":"2.0","id":43,"method":"eth_call","params":[{"data":"...LONGBYTESTRING!!!","to":"0x1ee38d535d541c55c9dae27b12edf090c608e6fb"},"latest"]}
Response:
{"jsonrpc":"2.0","id":43,"result":"...OTHERVERYLONGBYTESTRING!!!"}
I know that the to field corresponds to the address of a smart contract 0x1ee38d535d541c55c9dae27b12edf090c608e6fb.
It looks like this request "queries" the contract for some data (but it costs 0 gas?).
From (the very little) I understand, the encoded data can be decoded with the schema, which I think I could get from the smart contract address. (perhaps this is it? https://api.bscscan.com/api?module=contract&action=getabi&address=0x1ee38d535d541c55c9dae27b12edf090c608e6fb)
My goal is to understand the data being sent in the request and the data given in the response so I can reproduce the data from the website without having to scrape this data from the website.
Thanks.
The zero cost is because of the eth_call method. It's a read-only method which doesn't record any state changes to the blockchain (and is mostly used for getter functions, marked as view or pure in Solidity).
The data field consists of:
the 0x prefix,
4 bytes (8 hex characters) of the function selector, derived from the function signature,
and the rest is the arguments passed to the function, ABI-encoded and padded to 32-byte words.
You can find an example that converts the function name to the signature in this other answer.
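As a rough illustration (not from the original answer), here is a minimal sketch that splits a calldata payload into the selector and its 32-byte argument words; the example payload uses the well-known balanceOf(address) selector with a made-up address:

public class CalldataSplit {
    public static void main(String[] args) {
        String data = "0x70a08231" // selector, e.g. balanceOf(address)
                + "000000000000000000000000"
                + "1ee38d535d541c55c9dae27b12edf090c608e6fb"; // address padded to 32 bytes

        String hex = data.startsWith("0x") ? data.substring(2) : data;
        String selector = hex.substring(0, 8);          // first 4 bytes
        System.out.println("selector: 0x" + selector);

        // Static arguments are ABI-encoded as consecutive 32-byte (64 hex char) words.
        for (int i = 8; i + 64 <= hex.length(); i += 64) {
            System.out.println("word: " + hex.substring(i, i + 64));
        }
    }
}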

passing objects into the url - Url Params /Query strings

I have two websites. One website is going to capture form data and put it into a url...
let url = `https://xxxxxxxxxx.herokuapp.com/band/${band._id}?toggle=true&eventDate={"eventDate": ${mainDate[0]}, "eventCharge": ${mainDate[1]}}&quoteAdjuster=${sliderValue}`
Some of the information that I collect in the form is stored in objects and arrays.
Is there a way to send objects/arrays in this url to my website? Currently that whole object above, mainDate, still comes through as a string.
Thanks!
You could change your objects and arrays into strings on purpose by using JSON.stringify(myObject).
Then the server would just need to use JSON.parse(sentData) in order to reconstruct the arrays and objects (some types of data don't survive this operation, so be careful; for example, Date objects become strings and you have to reconstruct them manually).
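A minimal sketch of that round trip, with a hypothetical eventDate payload (in your JavaScript you would use JSON.stringify plus encodeURIComponent; this shows the same idea):

import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class QueryParamJson {
    public static void main(String[] args) {
        String json = "{\"eventDate\":\"2024-06-01\",\"eventCharge\":500}";

        // Percent-encode the JSON so it can travel safely in a query string.
        String encoded = URLEncoder.encode(json, StandardCharsets.UTF_8);
        String url = "https://example.herokuapp.com/band/42?eventDate=" + encoded;
        System.out.println(url);

        // Server side: decode the parameter value, then hand it to a JSON parser.
        String decoded = URLDecoder.decode(encoded, StandardCharsets.UTF_8);
        System.out.println(decoded.equals(json)); // true
    }
}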
Also, remember that URLs have a fairly small practical length limit (commonly in the 2 KB to 8 KB range, depending on the browser and server). You will want to switch to POST if those parameters aren't important for the URL that the user is browsing.

Why does Tomcat return different headers for HEAD and GET requests to my RESTful API?

My initial purpose was to verify the HTTP chunked transfer. But accidentally found this inconsistency.
The API is designed to return a file to client. I use HEAD and GET methods against it. Different headers are returned.
For GET, I get these headers: (This is what I expected.)
For HEAD, I get these headers:
According to this thread, HEAD and GET SHOULD return identical headers, but it's not strictly required.
My question is:
If Transfer-Encoding: chunked is used because the file is dynamically fed to the client and Tomcat server cannot know its size beforehand, how could Tomcat know the Content-Length when HEAD method is used? Does Tomcat just dry-run the handler and count all the file bytes? Why doesn't it simply return the same Transfer-Encoding: chunked header?
Below is my RESTful API implemented with Spring Web MVC:
@RestController
public class ChunkedTransferAPI {

    @Autowired
    ServletContext servletContext;

    @RequestMapping(value = "bootfile.efi", method = { RequestMethod.GET, RequestMethod.HEAD })
    public void doHttpBoot(HttpServletResponse response) {
        String filename = "/bootfile.efi";
        try {
            ServletOutputStream output = response.getOutputStream();
            InputStream input = servletContext.getResourceAsStream(filename);
            BufferedInputStream bufferedInput = new BufferedInputStream(input);
            int datum = bufferedInput.read();
            while (datum != -1) {
                output.write(datum);
                datum = bufferedInput.read();
            }
            bufferedInput.close();
            output.flush();
            output.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
ADD 1
In my code, I didn't explicitly add any headers, so it must be Tomcat that adds the Content-Length and Transfer-Encoding headers as it sees fit.
So, what are the rules for Tomcat to decide which headers to send?
ADD 2
Maybe it's related to how Tomcat works. I hope someone can shed some light here. Otherwise, I will debug into the source of Tomcat 8 and share the result. But that may take a while.
Related:
HTTP HEAD and GET different result
Content-Length header with HEAD requests?
Does Tomcat just dry-run the handler and count all the file bytes?
Yes, the default implementation of javax.servlet.http.HttpServlet.doHead() does that.
You can look at helper classes NoBodyResponse, NoBodyOutputStream in HttpServlet.java
The DefaultServlet class (the Tomcat servlet that is used to serve static files) is smarter. It is capable of sending the correct Content-Length value, as well as serving GET requests for a subset of the file (the Range header). You can forward your request to that servlet with
ServletContext.getNamedDispatcher("default").forward(request, response);
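For example, a sketch of wiring that into the controller from the question (reusing its injected servletContext; this is my assumption of how you might hook it up, not a tested implementation):

@RequestMapping(value = "bootfile.efi", method = { RequestMethod.GET, RequestMethod.HEAD })
public void doHttpBoot(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // Let Tomcat's DefaultServlet resolve /bootfile.efi from the request path;
    // it sets Content-Length and honors Range requests for both GET and HEAD.
    servletContext.getNamedDispatcher("default").forward(request, response);
}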
Although it seems strange, it might make sense to send the size only in response to a HEAD request and chunked in response to a GET request, depending on the type of data that has to be returned by the server.
While your API seems to serve a static file, you also talk about dynamically created files or data, so I will speak in general terms here (they apply to web servers in general, too).
First let's have a look at the different usages for GET and HEAD:
With GET the client is requesting the whole file or data (or a range of the data), and wants it as fast as possible. So there is no specific reason for the server to send the size of the data first, especially when it could start sending faster/sooner in chunked mode. So the fastest possible way is preferred here (the client will have the size after the download anyway).
With HEAD, on the other hand, the client usually wants some specific information. This could be just a check on existence or last-modified time, but it could also be used if the client wants a certain part of the data (with a range request, including a check to see whether range requests are supported for that resource), or just needs to know the size of the data up front for some reason.
Let's look at some possible scenarios:
Static file:
HEAD: there's no reason to not include the size in the response-header because that information is available.
GET: most of the time the size will be included in the header and the data sent in one go, unless there are specific performance reasons to send it in chunks. On the other hand, it seems you are expecting chunked transfer for your file, so this could make sense here.
Live logfile:
Ok, somewhat strange, but possible: downloading a file where the size could change while downloading.
HEAD: again, the client probably wants the size, and the server can easily provide the size of the file at that specific time in the header.
GET: since log lines could be added while downloading, the size is unknown up front. The only option is to send the data chunked.
Table with fixed-sized records:
Let's imagine a server needs to send back a table with fixed-length records coming from multiple sources/databases:
HEAD: size is probably wanted by the client. The server could quickly do a query for count in each database, and send the calculated size back to the client.
GET: instead of first querying each database for a count, the server had better start streaming the resulting records from each database in chunks; a minimal sketch of this scenario follows.
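A hypothetical sketch of that split, with countRecords() and writeRecords() standing in for the real database queries (these helpers and the record size are made up for illustration):

import java.io.IOException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FixedRecordExportServlet extends HttpServlet {
    private static final int RECORD_SIZE = 128; // fixed record length in bytes

    @Override
    protected void doHead(HttpServletRequest req, HttpServletResponse resp) {
        // Cheap: one SELECT COUNT(*) per source, then a multiplication.
        long count = countRecords();
        resp.setContentLengthLong(count * RECORD_SIZE);
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // No Content-Length set, so Tomcat falls back to chunked transfer
        // and the first records go out before the last source is queried.
        writeRecords(resp.getOutputStream());
    }

    private long countRecords() { return 1000L; }          // placeholder
    private void writeRecords(ServletOutputStream out) throws IOException {
        // placeholder: stream each fixed-size record as it arrives
    }
}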
Dynamically generated zip-files:
Maybe not common, but an interesting example.
Imagine you want to provide dynamically generated zip-files to the user based on some parameters.
Let's first have a look at the structure of a zip-file:
There are two parts: first there's a block for each file: a small header followed by the compressed data for that file. Then there's a list of all the files inside the zip-file, including sizes and positions (the central directory).
So the prepared blocks for each file could be pre-generated on disk (and the names/sizes stored in some data structure).
HEAD: the client probably wants to know the size here. The server can easily calculate the size of all the needed blocks + the size of the second part with the list of the files inside.
If the client wants to extract a single file, it could directly ask for the last part of the file (with a range request) to get the list, and then ask for that single file with a second request. Although the size is not strictly needed to get the last n bytes, it could be handy if, for example, you wanted to store the different parts in a sparse file with the same size as the full zip-file.
GET: no need to do the calculations first (including generating the second part to know its size). It would be better and faster to just start sending each block in chunks.
Fully dynamically generated file:
In this case it wouldn't be very efficient to return the size to a HEAD request of course, since the whole file would need to be generated just to know its size.

Is it true that DICOM "Media Storage SOP Instance UID" = "SOP Instance UID"? Why?

I have two questions while reading the DICOM standard:
In a DICOM file, are (0002,0003) "Media Storage SOP Instance UID" and (0008,0018) "SOP Instance UID" the same? What about (0002,0002) and (0008,0016)? And why?
Chris is correct, they are the same. From the DICOM standard, Section C.12.1.1.1:
The SOP Class UID and SOP Instance UID Attributes are defined for all DICOM IODs. However, they are only encoded in Composite IODs with the Type equal to 1. See Section C.1.2.3. When encoded they shall be equal to their respective Attributes in the DIMSE Services and the File Meta Information header (see PS3.10 Media Storage).
As to the reason why these items are duplicated, I can only speculate, but the File Meta Information header only exists in DICOM files (it is not transmitted by an SCP/SCU). When an SCP writes a file from the DICOM data it receives, it has to get the SOP Class and Instance UIDs from the dataset, so that is the mechanical reason they are the same. As to why these tags and not some others, I am sure there are many reasons, but note that the File Meta Information header is always readable by any DICOM entity, as it is always Explicit VR Little Endian even if the following dataset uses some unusual transfer syntax. So these two fields are always guaranteed to be readable and usable in any valid DICOM file (even if the group 0008 versions are in an unreadable transfer syntax).
I also tried to look up the condition:
However, they are only encoded in Composite IODs
Almost every IOD is a Composite IOD when I look at the standard:
Normalized IODs
Composite IODs
Yes, they are the same. Tags with group 0002 are part of the DICOM Part 10 header; I assume they are duplicated so they can be quickly read without having to parse the entire file.
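As a small illustration of that "quickly read" property (my own sketch, not from either answer): the Part 10 preamble and magic bytes can be checked, and the group 0002 elements read, without knowing the dataset's transfer syntax at all.

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class DicomHeaderPeek {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            byte[] preamble = new byte[128];
            in.readFully(preamble);                 // fixed 128-byte preamble
            byte[] magic = new byte[4];
            in.readFully(magic);                    // must be "DICM"
            if (!"DICM".equals(new String(magic, StandardCharsets.US_ASCII))) {
                throw new IOException("not a DICOM Part 10 file");
            }
            // From here to the end of group 0002, the File Meta Information is
            // always Explicit VR Little Endian, so (0002,0002) and (0002,0003)
            // can be read directly, whatever transfer syntax the dataset uses.
        }
    }
}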

JMeter: How to capture overall response data using Regular Expression Extractor

I want to re-use the Response Data received in a Listener, as shown in the image below.
I would like to know how I can capture the overall response so that I can re-use it for uploading.
Scenario is:
Download 1KB of string data using TCP Sampler (Port: XYZW)
Upload the text response received (Port: ASDF)
As per How to Extract Data From Files With JMeter the relevant Regular Expression should be:
(?s)(^.*)
With an HTTP sampler, I add a BeanShell PostProcessor as a child of the sampler and use the script below to retrieve all the response data. I think it's the same with a TCP sampler, so let's try:
// get all response data
String dashboardData = prev.getResponseDataAsString();
// do something with the data
// and then put the retrieved data into parameter to use later
vars.put("dataTobeUsed", dashboardData);
and then we can use ${dataTobeUsed} in other samplers.
If you want to get the response data via regular expression extractor, you can use the pattern ([^"]+)
Hope it's helpful!
Hope I understood your question right.
You can use the regular expression [a-z0-9]* with any reference name, let's say "TCP_Data", in your first TCP request.
Then you can use the same reference name in TCP request 2 via ${TCP_Data}.
