How do I write a two-word type in JSON:API?

I want to post an alien artifact to my server.
Do I write type="alienArtifact"
or type="alien-artifact"
or something completely different?
I looked here https://jsonapi.org/format/ but it only deals with simple types ("objects").

The JSON:API specification is agnostic about whether you use kebab-case, snake_case, or camelCase for field names, but it comes with a recommendation:
Member names SHOULD be camel-cased (i.e., wordWordWord)
This recommendation was changed in October 2018; it was kebab-case before. Therefore many articles about the JSON:API specification, and the documentation of many libraries, still use kebab-case. These arguments were given as the main reasons for the change:
camelCased names can be used directly as identifiers in almost all programming languages, making json:api easier to get started with and to work with over time. Dasherized names are usable as identifiers pretty much only in Lisps.
camelCased names are the most common convention in the JS community, and JS is the biggest user of JSON:API
camelCased names are slightly shorter than dasherized (or snake-case) names, which could help with url character limits in the case of complex filters or other types of complex queries (e.g., some GraphQL like "deep querying" feature) that we might serialize into urls in the future.
You can find more details about this change in the corresponding pull request.
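Applied to the question: the spec constrains type values the same way as member names, so the same camelCase recommendation applies, i.e. type="alienArtifact". A minimal sketch of a POST body under the current recommendation (the attribute names are illustrative; whether you use singular or plural type names is also left open by the spec):

{
  "data": {
    "type": "alienArtifact",
    "attributes": {
      "discoveryDate": "1947-07-08",
      "originSystem": "unknown"
    }
  }
}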

Related

What is the Best Practice and Efficient way to organize Query, Mutation and Subscription classes in Hotchocolate GraphQL

Which one is the best and most efficient way to organize query, mutation and subscription classes?
Partial class
[ExtendObjectType(OperationTypeNames.Query)]
others?
And what is the difference, what is happening behind the scenes?
The official docs say:
If we just want to organize the fields of one of our types in different files, we can use partial classes in the Annotation-based approach.
But in the workshop, ExtendObjectType is used.
I have also asked the same question here
With partial classes nothing special is happening behind the scenes as it's just a regular C# feature.
Its downsides are that partial classes cannot be used across assemblies, and you can't replace or remove fields that were defined in another partial class.
Type extensions on the other hand are being merged with the original type definition by Hot Chocolate. They work across assemblies and allow you to hide or replace members of the original type definition.
As you've quoted from the documentation, partial classes are easier to use if you just want to split your type definitions into different files within the same project. For a more Hot Chocolate-integrated way of combining these multiple files (classes), type extensions should be used.
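A minimal sketch of both approaches (the type and field names here are illustrative, not taken from your project):

// Partial classes: a plain C# language feature; the compiler merges
// the declarations, so they must live in the same assembly.
public partial class Query
{
    public string Hello() => "world";
}

public partial class Query
{
    public int Answer() => 42;
}

// Type extension: merged into the Query type by Hot Chocolate at
// schema build time; it may live in a different assembly and can
// replace or hide members of the original type.
[ExtendObjectType(OperationTypeNames.Query)]
public class BookQueries
{
    public string RandomBook() => "The Hitchhiker's Guide to the Galaxy";
}

// Registration, e.g. in Program.cs:
builder.Services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .AddTypeExtension<BookQueries>();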

Why does 'x-www-form-urlencoded' begin with 'x-www', when other standard content types do not?

I understand that in the past it was standard for custom header names to use the prefix "X-" (I'm aware it is no longer considered standard to do this), but I've been unable to find whether there is any relationship between this naming convention and the value "application/x-www-form-urlencoded". Did it start out as a custom content-type value that was later adopted, or something?
I found this link here, which certainly was interesting, but have been unable to find the answer to my question.
Does anybody know the reason this prefix was chosen, and what it signifies?
it was standard for custom header names to use the prefix "X-"
Actually … no, not at all. To be precise: It has never been a standard, just a best practice. It allowed implementors to introduce new content types and codings without the need to write an entire RFC for it. Nowadays the IANA Media Type Registry is good for that. RFC 6648 put an end to this practice.
The reason application/x-www-form-urlencoded is prefixed in this way (it is listed as a proper MIME type in said registry, btw) stems from the fact that it is a "custom" method of structuring the query string in a URL. That part has never seen proper regulation. The people behind HTML just went and did it, which fully justified the prefix.
As far as the history: it has the x- prefix because it originated in a proposal from Mosaic, and since it was just a proposal, they used that x- extension prefix to initially define it. But then other browsers implemented it that way too, and nobody ever got around to taking the time to properly standardize an unprefixed alternative, so it just stuck that way, and here we are now.
It can be traced back to a 1993 thread on the www-talk mailing list titled “Submitting input-form data to server”, and in that thread, a September 1993 message from Marc Andreessen:
This is what we're doing in Mosaic 2.0… See
http://www.ncsa.uiuc.edu/SDG/Software/Mosaic/Docs/fill-out-forms/overview.html
...for details on what we're up to
That link is broken now but the document, titled “Mosaic for X version 2.0 Fill-Out Form Support” is archived at archive.org. Here’s the relevant excerpt:
ENCTYPE specifies the encoding for the fill-out form contents. This attribute only applies if METHOD is set to POST -- and even then, there is only one possible value (the default, application/x-www-form-urlencoded) so far.
Anyway, application/x-www-form-urlencoded is now formally defined in the URL spec, with algorithms for parsing and serializing it—though the section it’s all defined in has this note:
The application/x-www-form-urlencoded format is in many ways an aberrant monstrosity, the result of many years of implementation accidents and compromises leading to a set of requirements necessary for interoperability, but in no way representing good design practices. In particular, readers are cautioned to pay close attention to the twisted details involving repeated (and in some cases nested) conversions between character encodings and byte sequences. Unfortunately the format is in widespread use due to the prevalence of HTML forms.
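For a concrete feel of those "twisted details", here is a small sketch of serializing form fields with .NET's FormUrlEncodedContent (any HTTP library would do; the field names are made up):

using System;
using System.Collections.Generic;
using System.Net.Http;

// Serialize two form fields the way a browser would for an HTML
// form POST with the default enctype.
var fields = new Dictionary<string, string>
{
    ["name"] = "Marc Andreessen",
    ["note"] = "spaces & ampersands get escaped"
};
using var content = new FormUrlEncodedContent(fields);
Console.WriteLine(await content.ReadAsStringAsync());
// prints: name=Marc+Andreessen&note=spaces+%26+ampersands+get+escaped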

Can someone explain FHIR extensions?

I've been trying to wrap my head around authoring profiles in FHIR. The trouble I'm having is around the use of extensions.
The documentation talks about extensions as if they are simply there to extend existing elements of the resource to which a profile belongs; this is kind of confirmed to me when using Forge, because I can add new elements which don't have extensions.
It feels very foreign to me. In our proprietary storage system we have the equivalent of profiles, and they have properties (which I think are similar to elements in FHIR); a property is only designed to store one type of thing, e.g. you might have a patient profile that has the properties DOB, ethnicity, identifier, etc. I don't really understand what profiles are for in the context of FHIR. Are they similar to my properties? Can I use them to limit the datatype that a profile instance can have for a particular element?
Is there any better documentation than the spec? I'm finding it really hard to get to grips with.
FHIR extensions are used to capture extra data elements when there's no field for them in the standard definition. Mother's maiden name is an example of that for the Patient resource.
The use of an extension is a standard FHIR mechanism and will always look like this:
<extension>
  <url value="http://hl7.org/fhir/StructureDefinition/patient-mothersMaidenName"/>
  <valueString value="Williams"/>
</extension>
The url is the canonical url for the definition of the extension, which is a StructureDefinition resource defining the extension and the datatype(s) of the value.
You can have extensions on every level of a resource/datatype.
Since profiling is a very overloaded term, it is hard for me to understand what you're saying about profiles and properties in your proprietary system, or how that relates to your question. But in general, FHIR profiling is needed and used to
be able to add data when there's no data field for it in the specification (i.e. an extension of the specs)
constrain the specification in places where you need to be more strict, for example to make an optional field mandatory (i.e. a constraint on the specs, also called a profile)
I recommend browsing through some of the profiles and their descriptions on the Simplifier repository to get an idea of why people are creating profiles on FHIR.
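For reference, here is a sketch of the same extension in FHIR's JSON serialization, inside a Patient resource:

{
  "resourceType": "Patient",
  "extension": [
    {
      "url": "http://hl7.org/fhir/StructureDefinition/patient-mothersMaidenName",
      "valueString": "Williams"
    }
  ]
}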

Determining type of CollectionBase via Reflection (or Microsoft.Cci)

Question:
Is there a static way to reliably determine the type contained by a type derived from CollectionBase, using Reflection or Microsoft.Cci?
Background:
I am working on a code generator that copies types, makes customized versions of those types, and generates converters between them. It walks the types in the source assembly via Microsoft.Cci. It prints out source code using textual templates. It does a lot of conversion and customization, and tosses out code that I don't care about.
In my resulting code, I intend to replace List<T> everywhere that a CollectionBase, IEnumerable<T>, or T[] was previously used. I want to use List<T> because I am pretty sure I can serialize it without extra work, which is important for my application. T is concrete in every case. I am trying not to copy CollectionBase classes because I'd have to copy over the custom implementation, and I'd like to avoid having to do that in my code generator.
The only part I'm having a problem with is determining T for List<T> when replacing a custom CollectionBase.
What I've done so far:
I have briefly looked at the MSDN docs and samples for CollectionBase, and they mention creating a custom Add method on your derived type. I don't think this is in any way enforced, so I'm not sure I can rely on that. An implementor could name it something else, or worse, have a collection that supports multiple types, with Object as their only common ancestor.
Alternatives I have considered:
Maybe the default serialization does some tricks that I can take advantage of. Is there a default serialization for CollectionBase collections, or do you generally have to implement it yourself? If you have to do it yourself, is there some reliable metadata I could look at in order to determine the types? If it supports default serialization, does it rely on the runtime types of the items in the collection?
I could make a mapping in my code generator of known CollectionBase types, mapped to their corresponding T for List<T>. If a given CollectionBase type that I encounter isn't in the list, throw an exception. This is probably what I'll go with if there isn't a reliable alternative.
I'm still not sure enough about what you want to do to give advice. Still, do your CollectionBase-derived classes all implement Add(T)? If so, you could look for an Add method with single parameter of type other than object, and use that type for T.
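A sketch of that heuristic using plain Reflection (names are illustrative; as noted, it only works when implementors followed the typed-Add convention, and it deliberately gives up when the guess would be ambiguous):

using System;
using System.Collections;
using System.Linq;
using System.Reflection;

static class ElementTypeGuesser
{
    // Returns the parameter type of a single typed Add(T) method,
    // or null when the guess would be ambiguous or impossible.
    public static Type GuessElementType(Type collectionType)
    {
        if (!typeof(CollectionBase).IsAssignableFrom(collectionType))
            return null;

        var candidates = collectionType
            .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
            .Where(m => m.Name == "Add")
            .Select(m => m.GetParameters())
            .Where(p => p.Length == 1 && p[0].ParameterType != typeof(object))
            .Select(p => p[0].ParameterType)
            .Distinct()
            .ToList();

        // Trust the result only when exactly one candidate was found;
        // multi-type collections fall through to null.
        return candidates.Count == 1 ? candidates[0] : null;
    }
}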

Standard for resolving a urn:uuid (and others)?

My application uses urn:uuid as URIs for entities. Of course, when I get, e.g. RDF information about a resource, the referred entities (subject or objects) will contain URIs in the urn:uuid schema. To fetch the representation of the new entity, possibly in a REST way, I need a "resolver", similar in some way to dx.doi.org for DOIs. Another case could be the resolution of a isbn: URI, so to obtain a sensible representation of this URI.
My question concerns what's out there, in terms of proposed standards, for URI-to-representation-URL resolution.
The concluded URN Working Group of the IETF has also done some work on resolving URNs and published quite a few RFCs on this topic. A list of references is contained in the group charter. Maybe some of them help you.
A UUID is a universally unique identifier, so I don't see how you would be able to resolve a UUID I just generated (e.g. 3136aa1a-fec8-11de-a55f-00003925d394) to something useful.
Only if you manage a database of UUIDs somewhere can you retrieve more from it. Otherwise you would have to ask everyone/everything: "Do you know this UUID?"
The urn:uuid definition defines a clear space of unique identifiers you can use for defining something truly unique. But as nobody else can guess its value, you can't derive information from it.
There is no standard (proposed or otherwise) for resolving a URN. It's just a name (Uniform Resource NAME) and may have arbitrary meaning.
XML/RDF creates some confusion by using URNs which do resolve because they happen to also be URLs (Uniform Resource Locators) which point to objects describing their meaning, but this is merely a convention. They merely have to be unique and always mean the same thing.
If you are developing an application, you might want to consider use URNs which are also resolvable URLs for items with fixed meaning, and randomly generated URN's in the urn:uuid namespace to identify instances of objects.
That sounded about as confusing as the RDF spec :-)
Quick example:
Tiger: http://www.example.com/animals/tiger
Instance of a Tiger: urn:uuid:9a652678-4616-475d-af12-aca21cfbe06d
There might be a HTML page at http://www.example.com/animals/tiger, but there doesn't have to be. It's merely a convention.
[Additional Clarification Added]
The distinction here is between URNs (Names) and URLs (Locations).
A URN just names something. It's not a location of anything.
URLs are valid URNs, so you can use a URL for a URN if you want to.
In the above example, I could use e.g. http://www.example.com/tigers/9a652678-4616-475d-af12-aca21cfbe06d as the name of my tiger. I could put something at that address. But what would I put there? You can't download an instance of a tiger using http!
The convention in RDF is that if a URN is also a URL, it will point at some documentation defining what the name means.
What RDF is trying to give you is a convention for naming things which ensures that when two people use the same name, they mean the same thing. The UUID specification allows you to generate a unique name for something which is not likely to be used by anything else. But it's just a name, and there's no way of turning it into a thing.
Hope this helps.
One reason URNs exist is to give people the opportunity to create identifiers without the (implicit) responsibility of maintaining a service that describes the underlying resources. For RDF you could say this is an advantage, though not a necessity; but you'd also be less inclined to use a particular vocabulary if, for example, you discovered that its HTTP URLs are no longer dereferenceable.
That being said, some URNs can be traced back to their representation. Here are some examples:
The ietf namespace defines several identifier schemes, so URIs like urn:ietf:rfc:2648 can be resolved if you implement the specific patterns (a small sketch follows this list).
Some namespaces are defined in other IANA registries, for example urn:ietf:params:xml: with the corresponding files for the resources.
Other namespaces point to already-established identifier spaces, like urn:isbn: (some metadata can be retrieved, but I don't think there is anything that will allow you to download a book from its ISBN) and urn:oid:. There is also urn:publicid:, some of whose identifiers may be found somewhere deep inside ISO.
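Here is that sketch of a pattern-specific resolver for the urn:ietf:rfc: case (the RFC Editor URL layout is used purely as an illustration; every namespace needs its own rule, which is rather the point):

using System;

static class IetfUrnResolver
{
    // Hypothetical resolver for exactly one URN pattern.
    public static string Resolve(string urn)
    {
        const string prefix = "urn:ietf:rfc:";
        if (!urn.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
            return null; // not a URN this resolver knows about
        return "https://www.rfc-editor.org/rfc/rfc" + urn.Substring(prefix.Length);
    }
}

// IetfUrnResolver.Resolve("urn:ietf:rfc:2648")
//   -> "https://www.rfc-editor.org/rfc/rfc2648"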
There is no general mechanism for URN resolution, and indeed there cannot be (that is also true for other URI schemes, like tag:).
Talking specifically about UUIDs, in my opinion, the best way out of this is not to use a URN at all. If you want to use a web server for the resolution, a "standard" way is to use the genid well-known service, thus your primary URI would be something like this: http://example.org/.well-known/genid/b47df9f0-a9c5-4e8a-9762-844a33ba7a3e. If you host RDF at that location, there is nothing wrong with adding owl:sameAs <urn:uuid:b47df9f0-a9c5-4e8a-9762-844a33ba7a3e> there if you have to.
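In Turtle, the RDF served at that location might then contain something like this sketch (example.org and the UUID are taken from the paragraph above):

@prefix owl: <http://www.w3.org/2002/07/owl#> .

<http://example.org/.well-known/genid/b47df9f0-a9c5-4e8a-9762-844a33ba7a3e>
    owl:sameAs <urn:uuid:b47df9f0-a9c5-4e8a-9762-844a33ba7a3e> .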
To my knowledge, there is only one method that is in use today to create a link that conveys the question "Do you know this URN?", well, kind of: a magnet: link. There is nothing in principle that would require you to use a hash there like you usually find, so something like magnet:?xt=urn:uuid:b47df9f0-a9c5-4e8a-9762-844a33ba7a3e could work, provided you have your own client that can handle that.
