I am building a REST API and am wondering both about HTTP best practices and how they would apply to DRF. When sending a PUT request, should the body contain all of the fields of the object being manipulated, even the ones that are not changing? Or should it contain only the fields being updated? For example, if we had a House object with a number of rooms and a number of floors, and I was changing the number of rooms, should I send only that field, or both?
If requests should contain only the fields being updated, how would that translate to Django REST Framework? Any help would be greatly appreciated!
My Views are:
class HouseDetail(generics.RetrieveUpdateDestroyAPIView):
    queryset = House.objects.all()
    serializer_class = HouseSerializer
and my serializer is:
class HouseSerializer(serializers.ModelSerializer):
    quotes = serializers.PrimaryKeyRelatedField(many=True, read_only=True)

    class Meta:
        model = House
        fields = (
            'pk',
            'address',
            'quotes',
        )
PUT is for full resource updates, PATCH is for partial resource updates, because Fielding says so.
Therefore, in your example, if you wanted to update only the number of rooms for your HouseDetail model, you would send the following payload in a PATCH request:
{"number_of_rooms": 42}
It is still possible to partially update things with a PUT request in DRF (explained below), but you should simply use PATCH for that, since that is what it was created for. You get PUT and PATCH functionality for your model because you have subclassed generics.RetrieveUpdateDestroyAPIView and defined a serializer.
If a required field is omitted from your PUT request (see the documentation for required), you will receive a 400 status code with a response like {"detail": "x field is required"}. When you request via PATCH, however, a partial=True argument is passed to the serializer, which allows the partial update. We can see this in the UpdateModelMixin:
def partial_update(self, request, *args, **kwargs):
    kwargs['partial'] = True
    return self.update(request, *args, **kwargs)
which calls update, which is:
def update(self, request, *args, **kwargs):
    partial = kwargs.pop('partial', False)
    instance = self.get_object()
    serializer = self.get_serializer(instance, data=request.data, partial=partial)
    serializer.is_valid(raise_exception=True)
    self.perform_update(serializer)
    return Response(serializer.data)
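If you really do want PUT to accept partial payloads as well (PATCH remains the idiomatic choice), a minimal sketch, assuming the same HouseDetail view as above, is to force partial=True before delegating to update:

class HouseDetail(generics.RetrieveUpdateDestroyAPIView):
    queryset = House.objects.all()
    serializer_class = HouseSerializer

    def put(self, request, *args, **kwargs):
        # Force partial validation so omitted fields are not treated as missing.
        kwargs['partial'] = True
        return self.update(request, *args, **kwargs)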
I am in the process of scraping public meteorology data for a data science project, and in order to do that effectively I need to change the proxy used by my Scrapy requests whenever I get a 403 response code.
For this, I have defined a download middleware to handle such situations, which is as follows:
class ProxyMiddleware(object):
    def process_response(self, request, response, spider):
        if response.status == 403:
            f = open("Proxies.txt")
            proxy = random_line(f)  # Just returns a random line from the file with a valid structure ("http://IP:port")
            new_request = Request(url=request.url)
            new_request.meta['proxy'] = proxy
            spider.logger.info("[Response 403] Changed proxy to %s" % proxy)
            return new_request
        return response
After properly adding the class to settings.py, I expected this middleware to deal with 403 responses by generating a new request with the new proxy, eventually ending in a 200 response. The observed behaviour is that the middleware does get executed (I can see the logger info about the changed proxy), but the new request does not seem to be made. Instead, I'm getting this:
2018-12-26 23:33:19 [bot_2] INFO: [Response] Changed proxy to https://154.65.93.126:53281
2018-12-26 23:33:26 [bot_2] INFO: [Response] Changed proxy to https://176.196.84.138:51336
... indefinitely, with random proxies, which makes me think that I'm still getting 403 errors and the proxy is not actually changing.
Reading the documentation, regarding process_response, it states:
(...) If it returns a Request object, the middleware chain is halted and the returned request is rescheduled to be downloaded in the future. This is the same behavior as if a request is returned from process_request().
Is it possible that "in the future" is not "right after it is returned"? What should I do to change the proxy for all requests from that moment on?
Scrapy drops duplicate requests to the same URL by default, so that's probably what's happening in your spider. To check whether this is your case, you can set these settings:
DUPEFILTER_DEBUG=True
LOG_LEVEL='DEBUG'
To solve this you should add dont_filter=True:
new_request = Request(url=request.url, dont_filter=True)
Try this:

class ProxyMiddleware(object):
    def process_response(self, request, response, spider):
        if response.status == 403:
            with open("Proxies.txt") as f:
                proxy = random_line(f)
            # dont_filter=True keeps the dupefilter from dropping the retry
            # as a duplicate of the original request
            new_request = Request(url=request.url, dont_filter=True)
            new_request.meta['proxy'] = proxy
            spider.logger.info("[Response 403] Changed proxy to %s" % proxy)
            return new_request
        else:
            return response
A better approach would be to use the scrapy-rotating-proxies module instead:

DOWNLOADER_MIDDLEWARES = {
    'rotating_proxies.middlewares.RotatingProxyMiddleware': 610,
    'rotating_proxies.middlewares.BanDetectionMiddleware': 620,
}
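You will also need to point the middleware at your proxies; if I remember the scrapy-rotating-proxies README correctly, that is done with one of these settings (the file path here is just your existing file):

ROTATING_PROXY_LIST_PATH = 'Proxies.txt'  # one proxy per line
# or an inline list:
# ROTATING_PROXY_LIST = ['http://proxy1:8000', 'http://proxy2:8031']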
TL;DR: I commented on this issue and got asked to open a new ticket, but then realized this is more of a question, as Spring RestDocs provides a way to achieve what I want (ignoring unimportant headers in contracts) with an operation preprocessor. So here we are, on our friendly SO.
The problem is that I am trying to generate contracts starting from a RestDocs test (using RestAssured and JUnit 5, if it matters). The test setup (in Kotlin) looks like:
private val defaultDocument = document("{method_name}", SpringCloudContractRestDocs.dslContract())

lateinit var spec: RequestSpecification

@BeforeEach
internal fun setUp(restDocumentationContextProvider: RestDocumentationContextProvider) {
    RestAssured.port = port
    spec = RequestSpecBuilder()
        .setConfig(
            RestAssuredConfig.config()
                .objectMapperConfig(
                    ObjectMapperConfig.objectMapperConfig()
                        .jackson2ObjectMapperFactory { _, _ -> mapper }
                )
        )
        .addFilter(defaultDocument)
        .addFilter(ResponseLoggingFilter())
        .log(LogDetail.ALL)
        .build()
}
where mapper and port are injected as Spring beans.
The server generates a Date header, the time at which the response is generated. This is done automatically by Spring WebMvc (I think) and I don't care at all about that header. However, the Date header causes stub generation to fail: I decided to use Spring Cloud Contract's "contracts in a polyglot world" approach to generate and upload the stub to a Maven repository, and by the time the stub is exercised the server generates a different date.
As I point out here, ContractDslSnippet does not seem to provide a way to ignore unimportant headers and/or to add matchers (which would still be an open question).
The (short) list of questions:
How can I filter out unimportant headers from generated contracts?
Can I add custom matchers for headers, like I can do for body?
How to remove unimportant headers, using Spring RestDocs preprocessors:
private val defaultDocument = document("{method_name}", SpringCloudContractRestDocs.dslContract())

lateinit var spec: RequestSpecification

@BeforeEach
internal fun setUp(restDocumentationContextProvider: RestDocumentationContextProvider) {
    RestAssured.port = port
    spec = RequestSpecBuilder()
        .setConfig(
            RestAssuredConfig.config()
                .objectMapperConfig(
                    ObjectMapperConfig.objectMapperConfig()
                        .jackson2ObjectMapperFactory { _, _ -> mapper }
                )
        )
        .addFilter(
            documentationConfiguration(restDocumentationContextProvider)
                .operationPreprocessors()
                .withResponseDefaults(Preprocessors.removeMatchingHeaders("Date"))
        )
        .addFilter(defaultDocument)
        .addFilter(ResponseLoggingFilter())
        .log(LogDetail.ALL)
        .build()
}
The important part is adding a new filter (the first one), which configures Spring RestDocs to remove the Date header from all its snippets, including the contract ones.
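Since removeMatchingHeaders takes a vararg of header name patterns, several noisy headers can be stripped in one go; a small sketch (the extra header names are only examples):

.withResponseDefaults(
    Preprocessors.removeMatchingHeaders("Date", "Content-Length", "X-Application-Context")
)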
How to add custom matchers, using the default SpringCloudContractRestDocs.dslContract(): I don't think it is actually possible right now, but I might be wrong here (glad if somebody can chime in and correct me).
I want to write a client that can consume streaming APIs; essentially, a getter that returns an HTTPResponseStream instead of an HTTPResponse. I couldn't find one in HTTPotion, so I figured I'd give it a try myself. But I have no idea how to go about it and would really appreciate some help!
You can do async requests with HTTPotion like so:
%HTTPotion.AsyncResponse{ id: id } = HTTPotion.get "http://example.com", [], [stream_to: self]
This will send messages of three different types to the current process (which is defined above via self):
# First, the response headers
%HTTPotion.AsyncHeaders{ id: ^id, status_code: 200, headers: _headers }
# Then, one or more chunks
%HTTPotion.AsyncChunk{ id: ^id, chunk: _chunk }
# And finally, an end message
%HTTPotion.AsyncEnd{ id: ^id }
The id can be used to handle the responses from multiple ongoing requests.
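For completeness, a minimal sketch of a receive loop that reassembles the streamed body (the module and function names here are mine, not part of HTTPotion):

defmodule StreamClient do
  def receive_stream(id, acc \\ "") do
    receive do
      %HTTPotion.AsyncHeaders{id: ^id, status_code: status} ->
        IO.puts("status: #{status}")
        receive_stream(id, acc)
      %HTTPotion.AsyncChunk{id: ^id, chunk: chunk} ->
        receive_stream(id, acc <> chunk)
      %HTTPotion.AsyncEnd{id: ^id} ->
        acc
    end
  end
end

After issuing the request with stream_to: self, calling StreamClient.receive_stream(id) from the same process returns the full body once the AsyncEnd message arrives.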
An image gets corrupted while being retrieved (over HTTP) and then sent (over HTTP) to a database. The image's raw data is handled in String form.
The service sends a GET for an image file and receives a response with the raw image data (the response body) and its Content-Type. Then a PUT request is sent with that body and Content-Type header (the PUT request is constructed by providing the body as a String). The PUT request goes to a RESTful database (CouchDB), creating an attachment (for those unfamiliar with CouchDB, an attachment acts like a static file).
Now I have the original image, which my service GETs and PUTs to the database, and a 'copy' of it that I can GET from the database. If I then run curl --head -v "[copy's url]", the copy has the Content-Type of the original image, but the Content-Length has changed, from about 200 kB to about 400 kB. If I GET the 'copy' with a browser, it is not rendered, whereas the original renders fine. It is corrupted.
What might be the cause? My guess is that while handling the raw data as a String, my framework guesses the encoding wrong and corrupts it, but I have not been able to confirm or deny this. How could I handle this raw data/request body in a safe manner, or how could I properly handle the encoding (if that proves to be the problem)?
Details: Play 2 Framework's HTTP client, Scala. Below is a test that reproduces the problem:
"able to copy an image" in {
def waitFor[T](future:Future[T]):T = { // to bypass futures
Await.result(future, Duration(10000, "millis"))
}
val originalImageUrl = "http://laughingsquid.com/wp-content/uploads/grumpy-cat.jpg"
val couchdbUrl = "http://admin:admin#localhost:5984/testdb"
val getOriginal:ws.Response = waitFor(WS.url(originalImageUrl).get)
getOriginal.status mustEqual 200
val rawImage:String = getOriginal.body
val originalContentType = getOriginal.header("Content-Type").get
// need an empty doc to have something to attach the attachment to
val emptyDocUrl = couchdbUrl + "/empty_doc"
val putEmptyDoc:ws.Response = waitFor(WS.url(emptyDocUrl).put("{}"))
putEmptyDoc.status mustEqual 201
//uploading an attachment will require the doc's revision
val emptyDocRev = (putEmptyDoc.json \ "rev").as[String]
// create actual attachment/static file
val attachmentUrl = emptyDocUrl + "/0"
val putAttachment:ws.Response = waitFor(WS.url(attachmentUrl)
.withHeaders(("If-Match", emptyDocRev), ("Content-Type", originalContentType))
.put(rawImage))
putAttachment.status mustEqual 201
// retrieve attachment
val getAttachment:ws.Response = waitFor(WS.url(attachmentUrl).get)
getAttachment.status mustEqual 200
val attachmentContentType = getAttachment.header("Content-Type").get
originalContentType mustEqual attachmentContentType
val originalAndCopyMatch = getOriginal.body == getAttachment.body
originalAndCopyMatch aka "original matches copy" must beTrue // << false
}
Fails at the last 'must':
[error] x able to copy an image
[error] original matches copy is false (ApplicationSpec.scala:112)
The conversion to String is definitely going to cause problems; you need to work with the bytes, as Daniel mentioned. That would also explain the Content-Length roughly doubling: when binary data is decoded to text and re-encoded as UTF-8, every byte above 0x7F becomes a multi-byte sequence.
Looking at the source, it looks like ws.Response is just a wrapper. If you get to the underlying class, there are some methods that may help you. On the Java side, someone made a commit on GitHub to expose more ways of getting the response data other than as a String.
I'm not familiar with Scala, but something like this may work:
getOriginal.getAHCResponse.getResponseBodyAsBytes
// instead of getOriginal.body
WS.scala
https://github.com/playframework/playframework/blob/master/framework/src/play/src/main/scala/play/api/libs/ws/WS.scala
WS.java
Here you can see that Response has some new methods, getBodyAsStream() and asByteArray.
https://github.com/playframework/playframework/blob/master/framework/src/play-java/src/main/java/play/libs/WS.java
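Putting it together, a sketch of the byte-safe version of the failing part of the test (assuming the underlying AsyncHttpClient response is reachable via getAHCResponse, and that Play's put accepts an Array[Byte] body):

// GET the image as raw bytes instead of decoding it to a String
val rawImage: Array[Byte] = getOriginal.getAHCResponse.getResponseBodyAsBytes

// PUT the same bytes, unmodified, as the attachment body
val putAttachment: ws.Response = waitFor(WS.url(attachmentUrl)
  .withHeaders(("If-Match", emptyDocRev), ("Content-Type", originalContentType))
  .put(rawImage))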
I'm in the process of building a small tool for my employer and attempting to implement it using Akka 2 + Scala.
One of the systems I'm interfacing with has a SOAP API with an interesting implementation:
----Request-------
Client ----Request----> Server
Client <---Req Ack----- Server
----Response------
Client <---Response---- Server
Client ----Resp Ack---->Server
Keeping track of the requests is done via a task ID carried in the req-ack.
My question to the SO community, and to anyone who has experience with Akka, is the following:
What would be an ideal way to handle HTTP messages with Akka for this particular scenario? I have seen examples with Akka-Mist, which looks like it has been discontinued in favor of play-mini; then there's akka-camel, whose library I can't seem to find in the Akka 2.0.3 distribution. So I'm a little confused as to which approach I should take. Is there a recommended way to encapsulate a servlet's behavior inside an Akka actor?
Thank you all in advance.
Your question is VERY open-ended, so I will make some assumptions about your requirements.
Request assumption: there are a lot of requests being generated upon some event.
Create a router whose actors wrap the async-http-client (https://github.com/sonatype/async-http-client). Once the async-http-client completes a request, it sends a message containing the ID from the Req Ack to a coordination actor, which aggregates the IDs.
The following is largely pseudocode but it's close enough to the real API that you should be able to figure out the missing pieces.
case class Request(url: String)
case class IDReceived(id: String)

class RequestingActor extends Actor {
  override def receive = {
    case Request(url) => {
      // set up the async client and run the request
      // since it's async the client will not block, and this actor can continue generating more requests
    }
  }
}

class AsyncHttpClientResponseHandler(aggregationActor: ActorRef) extends SomeAsyncClientHandler {
  // Override the necessary Handler methods.
  override def onComplete = {
    aggregationActor ! IDReceived(/* get the id from the response */)
  }
}

class SomeEventHandlerClass {
  val requestRouter = actorSystem.actorOf(Props[RequestingActor].withRouter(FromConfig(/* maybe you've configured a round-robin router */)), requestRouterName)

  def onEvent(url: String) {
    requestRouter ! Request(url)
  }
}

class AggregationActor extends Actor {
  val idStorage = ??? // some form of storage, maybe a Map if other information about the task ID needs to be stored as well: Map(id -> other information)

  override def receive = {
    case IDReceived(id) => // add the ID to idStorage
  }
}
Response assumption: You need to do something with the data contained in the response and then mark the ID as complete. The HTTP frontend only needs to deal with this one set of messages.
Instead of trying to find a framework with Akka integration, just use a simple HTTP frontend that sends messages into the Akka network you've created. I can't imagine any advantage to be gained from play-mini, and I think it would obscure some of the lines delineating work separation and control. That's not always the case, but given the lack of requirements in your question I have nothing else to base my opinion on.
class ResponseHandlingActor extends Actor {
  val aggregationActor = actorSystem.actorFor(/* the name of the aggregation actor within the Actor System */)

  override def receive = {
    case Response(data) => {
      // do something with the data. If the data manipulation is blocking or long running, it may warrant its own network of actors.
      aggregationActor ! ResponseReceived(/* get the id from the response */)
    }
  }
}

class ScalatraFrontEnd() extends ScalatraServlet {
  val responseRouter = actorSystem.actorOf(Props[ResponseHandlingActor].withRouter(FromConfig(/* maybe you've configured a round-robin router */)), responseRouterName)

  post("/response") {
    responseRouter ! Response(/* data from the response */)
  }
}
Creating a system similar to this gives you a nice separation of concerns, makes it easy to reason about where processing takes place, and leaves plenty of room for scaling or extending the system.