I want to be able to "suppress" the default HotChocolate (server) behavior and ignore schema errors on query.
Let me give an example.
Assume we have the following schema:
type Query {
myTypes: [MyType]
}
type MyType {
id: ID!
name: String
}
If I try to make a query like:
{
myTypes {
id,
name
}
}
Everything is okay.
However, if I make something like:
{
myTypes {
id,
name,
notKnownProperty
}
}
The query fails.
Which in general is desired behavior (the whole benefit of strict schema), but not in my particular case.
The use case is when we have 2 or more services, that expose one and the same schema (with different data) and some of those services evolve with different speed. Unfortunately we cannot control versions of those services, as some of them are on-premise. And we end up in situation where some of the services have newer versions (with more fields on the same types) than others. This restricts us to use only the oldest clients available.
We would like to do the opposite - use the newest clients and handle missing fields in them (with default values for example).
One possible solution would be to implement some sort of versioning, either in the path (aka /graphql/v1) or in the object types themselves (aka MyType1, MyType2, etc.).
But I personally don't like those approaches.
Any ideas and input are appreciated!
Thanks in advance!
Once you want to use new clients, they should be backwards compatible, i.e. able to use the older versions of the API correctly. For that, it's obviously not enough to treat non-existing values as having default (null) values. There's a big difference between your end-user not having a phone number and having one that the old API version simply doesn't provide.
So, you need a more powerful strategy to implement backwards compatibility. If you don't want to use versioning as such, there are at least two other strategies available with regard to GraphQL (not only HotChocolate):
Your client apps could initially analyze the schema to check which properties are missing and decide how to build requests to the API with that knowledge.
Your client apps could change their strategy on the fly, i.e. not querying the schema but handling query errors instead. Since errors in GraphQL are pretty structured, the client can easily understand which properties are missing and adjust future requests accordingly.
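A rough sketch of that second strategy in Python. The error-message format here is illustrative (HotChocolate's actual wording may differ, so the parsing is an assumption), but it shows the adjust-and-retry idea:

```python
import re

def strip_unknown_fields(query_fields, errors):
    """Drop any requested field that an error message flags as unknown."""
    unknown = set()
    for err in errors:
        # assumes the server's message quotes the offending field name
        m = re.search(r"field [`'\"]?(\w+)", err["message"], re.IGNORECASE)
        if m:
            unknown.add(m.group(1))
    return [f for f in query_fields if f not in unknown]

fields = ["id", "name", "notKnownProperty"]
errors = [{"message": "The field `notKnownProperty` does not exist."}]
print(strip_unknown_fields(fields, errors))  # → ['id', 'name']
```

The client would rebuild the query from the surviving field list and re-send it, filling the dropped fields with defaults locally.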
Related
The title is probably poorly worded, but I'm trying my hand at creating a REST API with Symfony. I've studied a few public APIs to get a feel for it, and a common principle seems to be dealing with a single resource path at a time. However, the data I'm working with has a lot of levels (7-8), and each level is only guaranteed to be unique under its parent (the whole path makes a composite key).
In this structure, I'd like to get all children resources from all or several parents. I know about filtering data using the queryParam at the end of a URI, but it seems like specifying the parent id(s) as an array is better.
As an example, let's say I have companies in my database, which own routers, which delegate traffic for some number of devices. The REST URI to get all devices for a router might look like this:
/devices/company/:c_id/routers/:r_id/getdevices
but then the user has to crawl through all :r_id's to get all the devices for a company. Some suggestions I've seen all involve moving the :r_id out of the path and using it in the query string:
/devices/company/:c_id/getdevices?router_id[]=1&router_id[]=2
I get it, but I wouldn't want to use it at that point.
Instead, what seems functionally better, yet philosophically questionable, is doing this:
/devices/company/:c_id/routers/:[r_ids]/getdevices
Where [r_ids] is a stringified array of ids that can be decoded into an array of integers/strings server-side. This also frees up the query-parameter string to focus on filtering devices by attributes (age, price, total traffic, status).
However, I am new to all of this and having trouble finding what is "standard". Is this a reasonable solution?
I'll add that I've tested the array string out in Symfony and it works great. But I can't tell if it can become a vehicle for malicious queries, since I intend on using Doctrine's DBAL - I'll take tips on that too (although it seems like a problem regardless for string ids).
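For what it's worth, the injection risk is contained as long as every decoded id is bound as a parameter rather than concatenated into the SQL. A minimal sketch of the idea in Python/sqlite3 (table and column names are made up; Doctrine's DBAL has array parameter binding for the same purpose):

```python
import sqlite3

def get_devices(conn, company_id, router_ids):
    # one "?" per id keeps every value parameterized, so the decoded
    # array can never smuggle SQL fragments into the statement
    placeholders = ",".join("?" * len(router_ids))
    sql = (f"SELECT id FROM devices "
           f"WHERE company_id = ? AND router_id IN ({placeholders})")
    return conn.execute(sql, [company_id, *router_ids]).fetchall()
```

Only the placeholder list is built from the array's length; the values themselves always travel through the driver's binding mechanism.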
However, I am new to all of this and having trouble finding what is "standard". Is this a reasonable solution?
TL;DR: yes, it's fine.
You would probably see an identifier like that described using a level 4 URI Template, with your list of identifiers encoded via a path segment expansion.
Your example template might look something like:
/devices/company{/c_id}/routers{/r_ids}/devices
And you would need to communicate to the template consumer that c_id is a company id, and r_ids is a list of router identifiers, or whatever.
You've seen simplified versions of this on the web: URI templates are generalizations of web forms that read information from input controls and encode the inputs into the query string.
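A minimal sketch of that path-segment expansion in Python. This hand-rolls only the {/var} operator (lists joined with "/"), not the full URI Template spec:

```python
import re

def expand(template, **vars):
    """Expand {/var} path-segment operators; lists become /a/b/c."""
    def repl(m):
        val = vars.get(m.group(1))
        if val is None:
            return ""
        if isinstance(val, (list, tuple)):
            return "/" + "/".join(str(v) for v in val)
        return "/" + str(val)
    return re.sub(r"\{/([^}]+)\}", repl, template)

print(expand("/devices/company{/c_id}/routers{/r_ids}/devices",
             c_id=42, r_ids=[1, 2]))
# → /devices/company/42/routers/1/2/devices
```

A full implementation would also percent-encode each segment, which is omitted here for brevity.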
I know I can set up multiple namespaces for DoctrineCacheBundle in the config.yml file. But can I use one driver with multiple namespaces?
The case is that in my app I want to cache all queries for all of my entities. The problem is with flushing the cache on create/update actions: I want to flush only part of my cached queries. My app is used by multiple clients, so when a client updates something in his data, for instance in the Article entity, I want to clear the cache only for this client and only for Article. I could add proper IDs to each query and remove them manually, but the queries are used dynamically. In my API the mobile app sends a version number for which the DB should return data, so I don't know what kind of IDs will be used in the end.
Unfortunately I don't think what you want to do can be solved with some configuration magic. What you want is some sort of indexed cache, and for that you have to find a more powerful tool.
You can take a look at Doctrine's second level cache. I don't know how good it is now (I tried it once when it was in beta and it did not make the cut for me).
Or you can build your own cache manager. If you do, I recommend using Redis. Its data structures will help you keep your indexes (this can be simulated with memcached, but it requires more work). What I mean by indexes:
You will have a key like client_1_articles where 1 is the client id. Under that key you will store all the ids of the articles of client 1. For every article id you will have a key like article_x where x is the id of the article. In this example client_1_articles is a rudimentary index that will help you, if you want at some point, to invalidate all the cached articles coming from client 1.
The abstract implementation for the above example ends up being a graph-like structure over your cache, with possibly:
- composed indexes, e.g. 'client_1:category_1' => {article_1, article_2}
- multiple indexes for one item, e.g. 'category_1' => {article_1, article_2, article_3}, 'client_1' => {article_1, article_3}
- etc.
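As an in-memory sketch of that index idea (Python dictionaries and sets standing in for Redis keys and sets; a real setup would use SADD/SMEMBERS/DEL, and the key names mirror the example above):

```python
cache = {
    "article_1": {"title": "A"},
    "article_2": {"title": "B"},
    "article_3": {"title": "C"},
}
index = {
    "client_1_articles": {"article_1", "article_3"},
    "client_2_articles": {"article_2"},
}

def invalidate_client_articles(client_id):
    """Drop every cached article listed under the client's index key."""
    key = f"client_{client_id}_articles"
    for article_key in index.pop(key, set()):
        cache.pop(article_key, None)

invalidate_client_articles(1)
print(sorted(cache))  # → ['article_2']
```

The index key costs one extra write per cached article, but it turns "flush everything for client 1" into a single cheap lookup.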
Hope this helps you in some way. At least that was my solution for a similar problem.
Good luck with your project,
Alexandru Cosoi
When I have a resource, let's say customers/3, which returns the customer object, I sometimes want to return this object with different fields or some other changes (for example, let's say I need to include in the customer object also his latest purchase; for the sake of speed I don't want to do 2 different queries).
As I see it, my options are:
customers/3/with-latest-purchase
customers/3?display=with-latest-purchase
In the first option there is a distinct URI for the new representation, but is this REALLY needed? Also, how do I tell the client that this URI exists?
In the second option a GET parameter tells the server what kind of representation to return. The URI parameters can be explained through the OPTIONS method, and it is easier to tell the client where to look for the data, as all the representations are in one place.
So my question is: which of these is better (more RESTful), and/or is there some better way to do this that I do not know about?
I think what is best is to define atomic, indivisible service objects, e.g. customer and customer-latest-purchase, nice, clean, simple. Then if the client wants a customer with his latest purchases, they invoke both service calls, instead of jamming it all in one with funky parameters.
Different representations of an object is OK in Java through interfaces but I think it is a bad idea for REST because it compromises its simplicity.
There is a misconception that making query parameters look like file paths is more RESTful. The query portion of the address is included when determining a distinct URI so the second option is fine.
Is there much of a performance hit in including the latest purchase data in all customer GET requests? If not, the simplest thing would be to do that so there would neither be weird URL params or double requests. If getting the latest order is a significant hardship (which it probably shouldn't be) there is nothing wrong with adding a flag in the query string to include it.
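A sketch of reading that flag server-side with nothing but the standard library (the parameter name is the one from the question; the routing around it is up to your framework):

```python
from urllib.parse import urlparse, parse_qs

def wants_latest_purchase(uri):
    """True when the query string asks for the expanded representation."""
    qs = parse_qs(urlparse(uri).query)
    return "with-latest-purchase" in qs.get("display", [])

print(wants_latest_purchase("customers/3?display=with-latest-purchase"))  # → True
print(wants_latest_purchase("customers/3"))  # → False
```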
I have a resource that has a counter. For the sake of example, let's call the resource profile, and the counter is the number of views for that profile.
Per the REST wiki, PUT requests should be used for resource creation or modification, and should be idempotent. That combination is fine if I'm updating, say, the profile's name, because I can issue a PUT request which sets the name to something 1000 times and the result does not change.
For these standard PUT requests, I have browsers do something like:
PUT /profiles/123?property=value&property2=value2
For incrementing a counter, one calls the url like so:
PUT /profiles/123/?counter=views
Each call will result in the counter being incremented. Technically it's an update operation but it violates idempotency.
I'm looking for guidance/best practice. Are you just doing this as a POST?
I think the right answer is to use PATCH. I didn't see anyone else recommending it should be used to atomically increment a counter, but I believe RFC 2068 says it all very well:
The PATCH method is similar to PUT except that the entity contains a list of differences between the original version of the resource identified by the Request-URI and the desired content of the resource after the PATCH action has been applied. The list of differences is in a format defined by the media type of the entity (e.g., "application/diff") and MUST include sufficient information to allow the server to recreate the changes necessary to convert the original version of the resource to the desired version.
So, to update profile 123's view count, I would:
PATCH /profiles/123 HTTP/1.1
Host: www.example.com
Content-Type: application/x-counters
views + 1
Where the x-counters media type (which I just made up) is made of multiple lines of field operator scalar tuples. views = 500 or views - 1 or views + 3 are all valid syntactically (but may be forbidden semantically).
I can understand some frowning upon making up yet another media type, but I humbly suggest it's more correct than the POST/PUT alternatives. Making up a resource for a field, complete with its own URI and especially its own details (which I don't really keep; all I have is an integer), sounds wrong and cumbersome to me. What if I have 23 different counters to maintain?
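A sketch of how a server might apply that x-counters body (the media type is made up, as noted above, so this parser is equally hypothetical; it just follows the "field operator scalar" lines described):

```python
def apply_counters(resource, body):
    """Apply an x-counters style body ("views + 1") to a resource dict."""
    for line in body.strip().splitlines():
        field, op, raw = line.split()
        value = int(raw)
        if op == "=":
            resource[field] = value
        elif op == "+":
            resource[field] = resource.get(field, 0) + value
        elif op == "-":
            resource[field] = resource.get(field, 0) - value
        else:
            raise ValueError(f"unknown operator {op!r}")
    return resource

print(apply_counters({"views": 500}, "views + 1"))  # → {'views': 501}
```

Since the body says "add one" rather than "set to 501", the increment stays atomic from the client's point of view; the server serializes the read-modify-write.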
An alternative might be to add another resource to the system to track the viewings of a profile. You might call it "Viewing".
To see all Viewings of a profile:
GET /profiles/123/viewings
To add a viewing to a profile:
POST /profiles/123/viewings #here, you'd submit the details using a custom media type in the request body.
To update an existing Viewing:
PUT /viewings/815 # submit revised attributes of the Viewing in the request body using the custom media type you created.
To drill down into the details of a viewing:
GET /viewings/815
To delete a Viewing:
DELETE /viewings/815
Also, because you're asking for best-practice, be sure your RESTful system is hypertext-driven.
For the most part, there's nothing wrong with using query parameters in URIs - just don't give your clients the idea that they can manipulate them.
Instead, create a media type that embodies the concepts the parameters are trying to model. Give this media type a concise, unambiguous, and descriptive name, then document it. The real problem with exposing query parameters in REST is that the practice often leads to out-of-band communication, and therefore increased coupling between client and server.
Then give your system a uniform interface. For example, adding a new resource is always a POST. Updating a resource is always a PUT. Deleting is DELETE, and getting is GET.
The hardest part about REST is understanding how media types figure into system design (it's also the part that Fielding left out of his dissertation because he ran out of time). If you want a specific example of a hypertext-driven system that uses and documents media types, see the Sun Cloud API.
After evaluating the previous answers I decided PATCH was inappropriate and, for my purposes, fiddling around with Content-Type for a trivial task was a violation of the KISS principle. I only needed to increment by one, so I just did this:
PUT /profiles/123$views
++
Where ++ is the message body and is interpreted by the controller as an instruction to increment the resource by one.
I chose $ to delimit the field/property of the resource, as it is a legal sub-delimiter and, for my purposes, seemed more intuitive than /, which, in my opinion, has the vibe of traversability.
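A sketch of how the controller side might interpret that instruction (Python; the store layout is illustrative, and only "++" is recognized, per the scheme above):

```python
def handle_increment(path, body, store):
    """Split "/profiles/123$views" on "$" and bump the named counter."""
    resource_path, _, field = path.rpartition("$")
    if body.strip() != "++":
        raise ValueError("unsupported instruction")
    counters = store.setdefault(resource_path, {})
    counters[field] = counters.get(field, 0) + 1
    return counters[field]

store = {}
print(handle_increment("/profiles/123$views", "++", store))  # → 1
print(handle_increment("/profiles/123$views", "++", store))  # → 2
```

Note that this keeps the non-idempotent nature of the operation; it just trades the media-type machinery for a one-token body.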
I think both of Yanic's and Rich's approaches are interesting. A PATCH does not need to be safe or idempotent, but it can be made idempotent in order to be more robust against concurrency issues. Rich's solution is certainly easier to use in a "standard" REST API.
See RFC5789:
PATCH is neither safe nor idempotent as defined by [RFC2616], Section 9.1.
A PATCH request can be issued in such a way as to be idempotent, which also helps prevent bad outcomes from collisions between two PATCH requests on the same resource in a similar time frame. Collisions from multiple PATCH requests may be more dangerous than PUT collisions because some patch formats need to operate from a known base-point or else they will corrupt the resource.
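A sketch of that "known base-point" idea (Python; names are illustrative). The patch carries the base value it was computed from, so a replayed or colliding request is rejected instead of being silently double-applied:

```python
def patch_counter(resource, field, base, new):
    """Apply the change only if the stored value still matches the base."""
    if resource[field] != base:
        raise RuntimeError("conflict: resource changed since base")  # ~ HTTP 409/412
    resource[field] = new
    return resource

profile = {"views": 500}
patch_counter(profile, "views", base=500, new=501)
print(profile)  # → {'views': 501}
```

In HTTP terms the same effect is usually achieved with ETags and If-Match preconditions rather than putting the base value in the patch body.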
Previously, settings for deployments of an ASP.NET application were stored in multiple configuration files under the Web.config config sections using a KEY/VALUE format. We are moving these 'site module options' to the database for a variety of reasons.
Here are the two options we are mulling over at the moment:
1. A single table with the applicationId, moduleId, and key as a Primary Key with a Value field.
Pros:
- This mimics the file access.
- It is easy to select entire sections to cache in hashtables/value objects.
Cons:
- More difficult to update since each key needs to be updated individually.
- Must cast each value if it's not a string.
2. Individual tables for each section which separate stored procedures, classes, etc.
Pros:
- Data is guaranteed to be consistent since the column and object types are typed.
- Updating is done in one trip to the database through an explicit interface.
Cons:
- Must change the application interface to access the settings.
- Must update the objects, database tables, and stored procedures each time something changes.
Do either of these sound like good ideas or is there another way I may have overlooked?
If I understand what you are proposing correctly, I would do the first approach. It leverages what you have already built. I would use the hash tables for caching inside of wrapper classes that can provide strongly typed interfaces for the properties.
For example:
/// <summary>
/// The time passwords expire, in days, if ExpirePasswords is on
/// </summary>
public int PasswordExpirationDays {
get { return ParseUtils.ParseInt(this["PasswordExpirationDays"], PW_MAX_AGE);}
set { this["PasswordExpirationDays"] = value.ToString(); }
}
Another option is to group like settings together into their own classes, and then use XML serialization/deserialization to store and retrieve instances of these settings classes to and from the database.
This doesn't specifically provide advantages above and beyond a key/value pair, other than that you don't have to perform any type conversions yourself (this is done behind the scenes as part of the serialization/deserialization process, so it still does happen). I find this sort of approach ideally suited to configuration problems like the one you are facing. It's clean, quick to implement, very easy to expand, and very easy to test. You don't have to spend time creating a feature-rich API to get at your settings, especially if you've already got your configuration subclassed out.
Also in a pinch you can direct your settings to come from database tables or the file system without altering your serialization/deserialization code (this is very nice during development).
Finally if you are using SQL Server (and likely Oracle, though I have no experience with Oracle and XML) and you think about the design of your settings class up front, you can define an XML schema for your serialized configuration object instances so you can use XQuery to quickly get a configuration setting's value without having to fully deserialize.
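A sketch of that round-trip using Python's standard library in place of .NET's XML serialization (the class and field names are made up for illustration):

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass, fields

@dataclass
class SecuritySettings:
    expire_passwords: bool = True
    password_expiration_days: int = 90

def to_xml(settings):
    """Serialize a settings instance: one child element per field."""
    root = ET.Element(type(settings).__name__)
    for f in fields(settings):
        ET.SubElement(root, f.name).text = str(getattr(settings, f.name))
    return ET.tostring(root, encoding="unicode")

print(to_xml(SecuritySettings()))
```

Because each setting is a named element, an XML-aware database column can be queried for a single value without deserializing the whole instance, which is the XQuery trick mentioned above.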
This is how we did it:
We were more concerned with the fact that different environments (Dev, Test, QA and Prod) had different values for the same key. Now we have only 2 keys in a WebEnvironment.Config file that never gets promoted. The first key is which environment you are in and the second one is the connection string.
The table gets loaded up once to a dictionary and then we can use it in our code like this:
cApp.AppSettings["MySetting"];
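The load-once pattern boils down to something like this (a Python sketch; the row contents are illustrative stand-ins for the settings table):

```python
# rows as they would come back from the settings table
rows = [("MySetting", "42"), ("Environment", "Dev")]

# loaded once at startup, then read everywhere as a dictionary
app_settings = {key: value for key, value in rows}

print(app_settings["MySetting"])  # → 42
```

Every subsequent lookup is an in-memory dictionary access, so the database is only hit once per application start (or per cache refresh).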