HTTPbis - what does "bis" mean?

I've often seen "bis" appended to versions of protocols (e.g., V.34bis or HTTPbis).
What does "bis" mean or stand for?
A telecom engineer I know thinks it might be French in origin.

As others have already said, "bis" means "twice" or "repeat". It's used to indicate a second variant of something (though usually one with only minor variations that don't warrant a new name).
In the context of HTTP, HTTPbis is the name of the working group in charge of refining HTTP. According to its charter:
HTTP is one of the most successful and widely-used protocols on the
Internet today. However, its specification has several editorial
issues. Additionally, after years of implementation and extension,
several ambiguities have become evident, impairing interoperability
and the ability to easily implement and use HTTP.
The working group will refine RFC2616 to:
Incorporate errata and updates (e.g., references, IANA registries, ABNF)
Fix editorial problems which have led to misunderstandings of the specification
Clarify conformance requirements
Remove known ambiguities where they affect interoperability
Clarify existing methods of extensibility
Remove or deprecate those features that are not widely implemented and also unduly affect interoperability
Where necessary, add implementation advice
Document the security properties of HTTP and its associated mechanisms (e.g., Basic and Digest authentication, cookies, TLS) for common applications
It will also incorporate the generic authentication framework from RFC
2617, without obsoleting or updating that specification's definition
of the Basic and Digest schemes.
Finally, it will incorporate relevant portions of RFC 2817 (in
particular, the CONNECT method and advice on the use of Upgrade), so
that that specification can be moved to Historic status.
In doing so, it should consider:
Implementer experience
Demonstrated use of HTTP
Impact on existing implementations and deployments
The Working Group must not introduce a new version of HTTP and should
not add new functionality to HTTP. The WG is not tasked with producing
new methods, headers, or extension mechanisms, but may introduce new
protocol elements if necessary as part of revising existing
functionality which has proven to be problematic.
The last paragraph explains why "bis" fits in this context: the working group is explicitly forbidden from introducing a new version of HTTP.

bis
The word (also used as a prefix or suffix) "bis", applied to some modem protocol standards, is Old Latin for "repeat" (akin to Old High German "twis", meaning "twice"). When a protocol ends with "bis", it means that it's the second version of that protocol.
Similarly, "ter" is Old Latin for "three times". The suffix "terbo" in the V.xx modem protocols is an invented word based on the Old Latin "ter" and the word "turbo" (Latin for "whirling top" or "whirlwind"), connoting speed. V.32terbo is the third version of the V.32 modem protocol.
(from http://whatis.techtarget.com/definition/0,,sid9_gci211669,00.html)

Related

What's the purpose of WADO-RS vs DICOM Service Class Users/Providers?

I am trying to figure out the difference between transferring DICOM files with an SCU/SCP (e.g., pynetdicom3) versus using the WADO API.
Both methods can be used for transferring DICOM files, but I can't figure out the standard use case for each.
First of all, you can implement all common use cases with both approaches. The difference lies more in the technology you are using and the systems you want to interface with than in the features supported by one approach or the other.
The "traditional" TCP/IP based DICOM services have been developed since 1998. They are widely spread and widely supported by virtually all current systems in the field. From the nowadays perspective they may appear a bit clumsy and they have some built-in glitches (e.g. limitation to 127 presentation contexts). Still they are much more common than the web-based stuff.
Especially when it comes to communication use cases across different sites, it is hard to implement them with the TCP/IP based protocols.
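To give a feel for the traditional approach, here is a minimal C-STORE sketch using pynetdicom (the successor to the pynetdicom3 package the question mentions). The host, port, AE titles and file name are placeholders, not real systems:

    from pydicom import dcmread
    from pynetdicom import AE
    from pynetdicom.sop_class import SecondaryCaptureImageStorage

    # Placeholder AE titles and address; point these at your PACS.
    ae = AE(ae_title="MY_SCU")
    ae.add_requested_context(SecondaryCaptureImageStorage)

    assoc = ae.associate("pacs.example.org", 104, ae_title="PACS_SCP")
    if assoc.is_established:
        ds = dcmread("image.dcm")        # must already be a valid DICOM object
        status = assoc.send_c_store(ds)  # returns a DICOM status dataset
        print(f"C-STORE status: 0x{status.Status:04X}")
        assoc.release()

Note how much DICOM-specific machinery (presentation contexts, associations, AE titles) is involved even in this trivial example.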
The WADO services were developed by the DICOM committee to adopt new technology and to facilitate DICOM implementation for applications based on web technology. They are quite new (in terms of the DICOM Standard ;-) ).
That said, the major use case is web-based applications; I have not seen any traditional modalities supporting these services yet, and I do not expect them to appear in the near future. This is because you can rely on a PACS supporting TCP/IP-based DICOM, but you would have to hope for WADO.
There is a tendency for PACS systems to support WADO in addition to TCP/IP to facilitate integration of web viewers and mobile devices where an increasing number of applications only supports WADO.
So my very subjective advice would be:
For an application designed for usage within a hospital: stick with TCP/IP-based DICOM, since you can be quite sure that it will be supported by the systems you are going to interface with.
If connectivity via the internet is a major use case, or your application uses a lot of web technology: consider using WADO, but investigate the support for WADO among the relevant systems you need to interface with. This probably depends on the domain your application is targeting.
To add to the already very good answer by @kritzel_sw: WADO is only part of the picture. WADO is for retrieving images over the web. There are also STOW, or STore Over the Web, for storing new objects to the PACS, and QIDO, or Query based on ID for DICOM Objects, for querying the PACS.
I think we will see it more and more in the future and not only for web based DICOM viewers, but also normal DICOM communications between the systems. It's especially useful for the cases where one of the systems is not DICOM aware and the developers are also not experienced in DICOM.
Consider a use case from my own experience. We want doctors to be able to upload photographs of skin conditions of their patients and send these photos to our PACS. It's much easier, and probably cheaper, to commission a developer to do it with STOW, where the specification is basically "take the JPG photo uploaded by the user, add the necessary metadata in JSON format according to the spec, and send it all to this address with an HTTP POST request" (sketched below), rather than "convert the uploaded JPG files to valid DICOM objects with the necessary metadata, transfer syntax etc., and implement a C-STORE SCU to send them to our PACS". For the first job you can get any decent developer experienced in web development; for the second you need to find someone who already knows DICOM with all its quirks, or pay someone a lot to learn it.
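To make the "JSON metadata plus HTTP POST" idea concrete, here is a rough Python sketch of a STOW-RS style request. It is deliberately simplified: the endpoint URL and boundary are made up, the metadata is abbreviated, and the linkage between the metadata and the JPEG bulk data (via BulkDataURI) is elided; see PS3.18 of the DICOM Standard for the real rules:

    import json
    import requests  # third-party HTTP client, used here for brevity

    STOW_URL = "https://pacs.example.org/dicom-web/studies"  # placeholder
    BOUNDARY = "stow-example-boundary"

    def stow_jpeg(metadata: dict, jpeg_bytes: bytes) -> requests.Response:
        # Part 1: the DICOM attributes encoded as JSON ("the metadata").
        meta_part = (
            f"--{BOUNDARY}\r\n"
            "Content-Type: application/dicom+json\r\n\r\n"
            f"{json.dumps([metadata])}\r\n"
        ).encode()
        # Part 2: the photo itself as a JPEG bulk-data part.
        jpeg_part = (
            f"--{BOUNDARY}\r\n"
            "Content-Type: image/jpeg\r\n"
            "Content-Location: jpeg/1\r\n\r\n"
        ).encode() + jpeg_bytes + b"\r\n"
        body = meta_part + jpeg_part + f"--{BOUNDARY}--\r\n".encode()
        return requests.post(
            STOW_URL,
            data=body,
            headers={
                "Content-Type": (
                    'multipart/related; type="application/dicom+json"; '
                    f"boundary={BOUNDARY}"
                ),
                "Accept": "application/dicom+json",
            },
        )

Compare this with the C-STORE sketch above: the web developer never has to build a full DICOM object or manage an association.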
That's why I love all these new web-based DICOM options and see great future for those.

Is it safe to use custom HTTP headers which don't start with 'X-'?

According to RFC 6648, which deprecated the 'X-' convention, there is no longer any reason to prefix custom headers (an API key header, say) with 'X-' when designing a web API.
Despite the RFC, I wonder whether it is really safe to follow the new convention and drop the 'X-'. Some say that browsers, or even ISPs, which follow the old convention may block headers following the new convention on the grounds that they are non-standard, but I couldn't confirm that.
A simple workaround would be to support both conventions, or only the old one (see the sketch below). Still, regardless of the workaround, I'd like to know whether following the new convention is safe, or whether that is a question with no answer.
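For what it's worth, the "support both" workaround is cheap on the client side; a minimal sketch (the header names, key and URL are made-up examples):

    import requests  # any HTTP client works the same way

    API_KEY = "example-key"  # placeholder

    # Send the key under both the un-prefixed (RFC 6648) and the legacy
    # "X-" name, so intermediaries honouring either convention see it.
    resp = requests.get(
        "https://api.example.org/resource",
        headers={"Api-Key": API_KEY, "X-Api-Key": API_KEY},
    )
    print(resp.status_code)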

Is there a public HTTP 1.1 reference implementation?

While learning more about HTTP 1.1 and reading the spec, it strikes me that it could be helpful to have a public reference implementation which can demonstrate the protocol. I imagine it would provide ideal, basic examples, as well as working examples of those parts of the protocol which are often disabled on public servers (e.g., TRACE).
I'm talking about a running, publicly accessible server (or servers). The idea would be to show how HTTP works, or should work, via an actual running web server and its source. A user could build arbitrary requests using Fiddler or the like to see how the server responds. I'm assuming it would be open source. It would likely be based on an existing web server implementation (e.g., Apache), perhaps with extensions to support the parts of the protocol the existing implementation doesn't (Transfer-Encoding compression, etc.). I know this last part is a pipe dream; I'm just putting it here by way of explanation.
I understand that HTTP is a very broad protocol, so that a reference implementation would not be comprehensive. I can imagine many, many reasons why something like this would not exist, and I know I can start up my own local server and play around with it (I've done that sort of thing for years). I know I can poke around against well-known existing public servers (Google, etc.). But, I'm wondering if anything like a public reference implementation exists.
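As an aside on the poking-around approach: for rarely enabled methods such as TRACE, Python's standard library is already enough to build arbitrary requests. A quick sketch (the host is a placeholder):

    import http.client

    conn = http.client.HTTPConnection("www.example.org", 80)
    conn.request("TRACE", "/", headers={"Max-Forwards": "0"})
    resp = conn.getresponse()
    print(resp.status, resp.reason)              # often 405 where TRACE is disabled
    print(resp.read().decode(errors="replace"))  # on success: an echo of the request
    conn.close()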
As an IETF spec, HTTP/1.1 does not have a reference implementation. Instead,
"at least two independent interoperating implementations
with widespread deployment and successful operational experience"
were required.
From the Implementation report for HTTP/1.1 to Draft Standard, you can see there were substantially more than that:
We have implementation & testing reports from 26 implementations
You say:
I can imagine many, many reasons why something like this would not exist
Here's one: for a reasonably complex specification, you don't want people designing to a specific implementation. Any "reference" implementation would have bugs, which would then be picked up by subsequent code built against that reference.
The specification is authoritative; in the case that an implementation diverges, you should consult the specification (and its errata) for the correct behaviour.
I know I can poke around against well-known existing public servers
Exactly. Per The Tao of IETF:
"We believe in rough consensus and running code"

What is the current state of the Cookie2 specification?

Do you have any information about browsers that implement, or plan to implement, this part of the HTTP 1.1 specification? Additionally, which frameworks have already implemented this feature? I've done my Google research, but I'd like to know if there's something else.
Also, do/would you use it? Do you find it better than the Cookie/Set-Cookie implementation?
Update: the Cookie2 specification never caught on, and RFC 6265 now declares it obsolete, making this question moot - though it's possibly still interesting to see a discussion of why it failed.
The answer below was written in 2009.
I'll mainly answer the second part.
I did some research into it recently and am now firmly of the opinion that no, it is not ready for use, and I would not use it.
Finding concrete data on how much of the existing specifications will work with current browsers and proxies is difficult, because cookies started out as a proprietary browser extension and have continued to gain proprietary features, like the more recent "HttpOnly" flag. I think by and large the industry has continued to use the quasi-"Netscape-style" implementation mixed with RFC 2109, except with looser rules about third-party cookies and the occasional strange behaviour with non-quoted strings.
As for whether I find it better: a read-through of the spec certainly shows its benefits. For example, the client now passes back the path, domain and port parameters as 'dollar' parameters, so a web app knows which parameters to use to delete or overwrite that cookie (see the example exchange below). The ability to store comments with cookies could one day be a win for the user, who would get a plain-text explanation of what the cookie is for; but unless browsers start warning people about cookies, who is going to see them?
The need to send both a Set-Cookie and a Set-Cookie2 header also upset the purist in me, as did the need for the client to send a Cookie2 header in addition to the Cookie header, which seemed unnecessary when I looked at it. YMMV.
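To illustrate the point about dollar parameters and the extra headers, an RFC 2965 exchange looks roughly like this (the cookie name and values are made up):

    Server -> client:
        Set-Cookie2: session="abc123"; Version="1"; Path="/app"; Max-Age="3600"

    Client -> server, on subsequent requests:
        Cookie: $Version="1"; session="abc123"; $Path="/app"
        Cookie2: $Version="1"

The Cookie2 request header merely advertises which cookie version the client understands, which is the redundancy complained about above.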
Read RFC 6265, which obsoletes RFC 2965; it advises against using or implementing Cookie2.
The current state is that most browsers only fully support the initial cookie specification from Netscape.
Set-Cookie/Cookie per RFC 2109 is only supported by some browsers (I don't know which), and Set-Cookie2/Cookie2 per RFC 2965 only by Opera.

Where can I find a List of Standard HTTP Header Values?

I'm looking for all the current standard header values a web server would generally receive. For example: what will the headers look like coming from a Mac running OS X Leopard with Camino installed, or from Fedora 9 running Firefox 3.0.1 versus SuSE running Konqueror?
PConroy gave an example from jQuery tending towards what I'm looking for. What I want, though, are the actual example headers.
Did you try the RFC? It has all that information.
Actually, when searching for information on any protocol or standard, try to search for the RFC first.
Cheers.
With regard to User-Agent, that is entirely up to the creator of the application. See this semi-tongue-in-cheek history of the user-agent string. In summary, there really isn't a canonical set of values; Microsoft-based user agents may change depending on the software installed on the local machine (version of the .NET Framework, etc.).
There is no set-in-stone list of user-agent values. You can find lengthy lists (such as this one used by the jQuery browser plugin).
Regarding other HTTP headers, this Wikipedia article is a good place to start.
IANA keeps track of HTTP headers
IANA is responsible for maintaining many of the codes and numbers contained in a variety of Internet protocols, enumerated below. We provide this service in coordination with the Internet Engineering Task Force (IETF).
Which includes:
Message Headers
Permanent Message Header Field Names
Provisional Message Header Field Names
Here's the exhaustive list, which was originally based on RFC 4229.
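If you want the permanent registry in machine-readable form, IANA also publishes its registries as CSV; a sketch using only the standard library (the exact CSV URL and column name here are assumptions based on IANA's usual registry layout, so verify them on the registry page first):

    import csv
    import io
    import urllib.request

    # Assumed path; check the Message Headers registry page for the real one.
    CSV_URL = "https://www.iana.org/assignments/message-headers/perm-headers.csv"

    with urllib.request.urlopen(CSV_URL) as resp:
        for row in csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")):
            print(row["Header Field Name"])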
For the user agent, a quick Google search pulled up this site.
The list of HTTP headers is easily available on the W3 website:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
PConroy also linked to the Wikipedia page, which is more concise and a little better formatted:
http://en.wikipedia.org/wiki/List_of_HTTP_headers
However, the "User-Agent" header is a bad example, since there's no set response; the user-agent string is decided by the client so it can literally be anything. There's a very comprehensive List of User Agents available, but it's not necessarily going to cover any possible option, since even some toolbars and applications can modify the user-agent for Internet Explorer or other browsers.
The chipmunk book from O'Reilly is good, as is Chris Shiflett's HTTP reference.
Oh, whoops, it's not a chipmunk. It's a thirteen-lined ground squirrel.
