I am using the SoapUI free version for some REST mocking.
I need to persist my HTTP POST request (the request is received already gzip-compressed) to a gzip file.
I have tried different ways to do this; however, after executing the code, when I try to decompress the file manually I get the following error: "The archive is either in unknown format or damaged".
The HTTP POST request has the following headers:
Host : 127.0.0.1:8091
Content-Length : 636
User-Agent : Java/1.7.0_07
Connection : keep-alive
Content-Type : application/octet-stream
Accept : text/plain, application/json, application/*+json, */*
Pragma : no-cache
Cache-Control : no-cache
Below are the solutions I have tried:
Solution #1:
byte[] buffer = new byte[1024];
byte[] data = mockRequest.getRequestContent().getBytes();
def path="myfile.gz";
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(path));
bos.write(data);
bos.flush();
bos.close();
Solution #2:
byte[] buffer = new byte[1024];
byte[] data = mockRequest.getRawRequestData();
def path="myfile.gz";
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(path));
bos.write(data);
bos.flush();
bos.close();
Can someone please help me understand why I cannot decompress the gzip file, and how I can fix this?
Thanks,
This is Groovy, so you don't need all this Java clutter.
Here's some code that might work:
new File(path) << mockRequest.rawRequestData
EDIT
Ok, based on your comments, for zip files to be copied correctly, you probably need something a little different:
import java.nio.file.*
Files.copy(new ByteArrayInputStream(mockRequest.requestContent.bytes),
           Paths.get('destination.zip'))
Tested this with an actual zip file's byte[] as source and it worked. If it does not work for you, then the byte array you're getting from requestContent.bytes just isn't a zip file.
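One way to narrow this down is to check whether the bytes you are writing are actually a gzip stream at all: gzip data always starts with the magic bytes 0x1f 0x8b, and if SoapUI has already decoded the body upstream, writing it to a .gz file will always give you a "damaged" archive. A minimal diagnostic sketch, assuming the usual SoapUI Groovy script variables mockRequest and log are available:
byte[] raw = mockRequest.rawRequestData
if (raw != null && raw.length >= 2 &&
        (raw[0] & 0xff) == 0x1f && (raw[1] & 0xff) == 0x8b) {
    log.info "Raw request data looks like gzip; writing it out unchanged"
    new File('myfile.gz').bytes = raw   // Groovy writes the byte[] verbatim
} else {
    log.info "Raw request data is not a gzip stream; persist it uncompressed or re-compress it first"
}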
Related
I am using Qt to interact with a GraphQL host. I am using QNetworkRequest; the login process works fine and gives me a token in JSON format. From then on, GraphQL expects the token in the HTTP header:
{
"Authorization": "Bearer <token>"
}
Testing the server, I wrote a small Python code, which works well:
headers = {'Authorization': "Bearer {0}".format(token)}
url = "http://example.com:8000/graphql"
params = {'query': '{fieldQueries{fields(userId:"xxx"){fieldName fieldID}}}'}
result = requests.post(url, params=params, headers=headers)
print(result.json())
The code below is supposed to do the same operation in Qt:
QUrl url = QUrl("http://example.com:8000/graphql");
QNetworkAccessManager * mgr = new QNetworkAccessManager(this);
QNetworkRequest request(url);
QString query = QString("{fieldQueries{fields(userId:\"%1\"){fieldName fieldID}}}").arg(userId);
QUrlQuery params;
params.addQueryItem("query", query);
connect(mgr, SIGNAL(finished(QNetworkReply*)), this, SLOT(onQueryFinish(QNetworkReply*)));
connect(mgr, SIGNAL(finished(QNetworkReply*)), mgr, SLOT(deleteLater()));
auto header = QString("Bearer %1").arg(token);
request.setRawHeader(QByteArray("Authorization"), header.toUtf8());
mgr->post(request, params.query().toUtf8());
However, the server gives back an internal server error (500).
As soon as I comment out the request.setRawHeader call, the server gives back "You are not authorized to run this query.\nNot authenticated" with no error.
How do I make Qt send this header correctly?
I don't know if it helps, but I checked the packets using Wireshark. The Python-generated request is a single packet (around 750 bytes), whereas the Qt request is broken into two packets, the first of which is 600 bytes long.
The working packet:
...POST /graphql?query=%7BfieldQueries%7Bfields%28userId%3A%22xxx%22%29%7BfieldName+fieldID%7D%7D%7D
HTTP/1.1..Host: xxx:8000..
User-Agent: python-requests/2.21.0..
Accept-Encoding: gzip, deflate..
Accept: */*..
Connection: keep-alive..
Content-Type: application/json..
Authorization: Bearer <688 bytes token>..Content-Length: 0....
The Qt-generated packets:
....POST /graphql HTTP/1.1..Host: xxx:8000..
Authorization: Bearer <364 bytes token>..
Content-Type: application/json..
Content-Length: 113..
Connection: Keep-Alive..
Accept-Encoding: gzip, deflate..
Accept-Language: en-US,*..
User-Agent: Mozilla/5.0....
and
.e..query=%7BfieldQueries%7Bfields(useId:%22xxx%22)%7BfieldName fieldID%7D%7D%7D
I have checked other solutions given for the header, such as
Correct format for HTTP POST using QNetworkRequest and
Sending HTTP Header Info with Qt QNetworkAccessManager, and read the
QNetworkRequest class documentation.
GraphQL accepts both POST and GET requests. Therefore, instead of POST, I used GET, which sends the query as part of the URL rather than in the request body, which is where the captured packets showed the POST version was sending it.
The solution is as follows:
QNetworkAccessManager * mgr = new QNetworkAccessManager(this);
QString query = QString("{fieldQueries{fields(userId:\"%1\"){fieldName fieldID}}}").arg(userId);
QUrl url = QUrl(QString("http://example.com:8000/graphql?query=%1").arg(query));
QNetworkRequest request(url);
connect(mgr, SIGNAL(finished(QNetworkReply*)), this, SLOT(onQueryFinish(QNetworkReply*)));
connect(mgr, SIGNAL(finished(QNetworkReply*)), mgr, SLOT(deleteLater()));
auto header = QString("Bearer %1").arg(token);
request.setRawHeader(QByteArray("Authorization"), header.toUtf8());
mgr->get(request);
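A slightly more defensive variant of the same idea, sketched below and not tested against this particular server, lets QUrlQuery and QUrl take care of percent-encoding the query string instead of concatenating it into the URL by hand:
QNetworkAccessManager *mgr = new QNetworkAccessManager(this);

// Build the GraphQL query and attach it to the URL; QUrl encodes it when the request is sent.
QString query = QString("{fieldQueries{fields(userId:\"%1\"){fieldName fieldID}}}").arg(userId);
QUrlQuery params;
params.addQueryItem("query", query);
QUrl url("http://example.com:8000/graphql");
url.setQuery(params);

QNetworkRequest request(url);
request.setRawHeader("Authorization", QString("Bearer %1").arg(token).toUtf8());

connect(mgr, SIGNAL(finished(QNetworkReply*)), this, SLOT(onQueryFinish(QNetworkReply*)));
connect(mgr, SIGNAL(finished(QNetworkReply*)), mgr, SLOT(deleteLater()));
mgr->get(request);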
I can't understand what I am doing wrong, but when I send the following request with curl, I get an error:
echo {"id":1,"question":"aaa"},{"id":2,"question":"bbb?"} | curl -X POST --data-binary #- --dump - http://localhost:8529/_db/otest/_api/document/?collection=sitetestanswers
HTTP/1.1 100 (Continue)
HTTP/1.1 400 Bad Request
Server: ArangoDB
Connection: Keep-Alive
Content-Type: application/json; charset=utf-8
Content-Length: 100
{"error":true,"errorMessage":"failed to parse json object: expecting EOF","code":400,"errorNum":600}
Any ideas? I tried wrapping it in [...], but nothing helps.
With [...] a validator marks it as valid.
The same happens with D. Here is my code:
void sendQuestionsToArangoDB(Json questions)
{
string collectionUrl = "http://localhost:8529/_db/otest/_api/document/?collection=sitetestanswers";
auto rq = Request();
rq.verbosity = 2;
string s = `{"id":"1","question":"foo?"},{"id":2}`;
auto rs = rq.post(collectionUrl, s, "application/json");
writeln("SENDED");
}
--
POST /_db/otest/_api/document/?collection=sitetestanswers HTTP/1.1
Content-Length: 37
Connection: Close
Host: localhost:8529
Content-Type: application/json
HTTP/1.1 400 Bad Request
Server: ArangoDB
Connection: Close
Content-Type: application/json; charset=utf-8
Content-Length: 100
100 bytes of body received
For D I use this lib: https://github.com/ikod/dlang-requests
The same issue occurs with vibe.d.
ArangoDB does not understand JSON that comes as a bare array like [...]. It should be passed as key-value pairs, so if you need to pass an array it should sit under a key, e.g. mykey : [].
Here is working code:
import std.stdio;
import requests.http;
void main(string[] args)
{
string collectionUrl = "http://localhost:8529/_db/otest/_api/document?collection=sitetestanswers";
auto rq = Request();
rq.verbosity = 2;
string s = `{"some_data":[{"id":1, "question":"aaa"},{"id":2, "question":"bbb"}]}`;
auto rs = rq.post(collectionUrl, s, "application/json");
writeln("SENDED");
}
otest - DB name
sitetestanswers - collection name (should be created in DB)
echo '[{"id":1,"question":"aaa"},{"id":2,"question":"bbb?"}]'
should do the trick. You need to put single quotes (ticks) around the JSON. The array brackets are necessary, otherwise this is not valid JSON.
You are trying to send multiple documents. The data in the original question separates the documents by comma ({"id":1,"question":"aaa"},{"id":2,"question":"bbb?"}) which is invalid JSON. Thus the failed to parse json object answer from ArangoDB.
Putting the documents into square brackets ([ ... ]), as some of the commenters suggested, will make the request payload valid JSON again.
However, you're sending the data to a server endpoint that handles a single document. The API for POST /_api/document/?collection=... currently accepts a single document at a time. It does not work with multiple documents in a single request. It expects a JSON object, and whenever it is sent something different it will respond with an error code.
If you're looking for batch inserts, please try the API POST /_api/import, described in the manual here: https://docs.arangodb.com/HttpBulkImports/ImportingSelfContained.html
This will work with multiple documents in a single request. ArangoDB 3.0 will also allow sending multiple documents to the POST /_api/document?collection=... API, but this version is not yet released. A technical preview will be available soon however.
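For illustration only, a bulk import along those lines could look roughly like this, reusing the database and collection names from the question; with type=documents the body is expected to contain one self-contained JSON document per line (check the linked manual page for the exact options your ArangoDB version supports):
curl -X POST --data-binary @- \
  "http://localhost:8529/_db/otest/_api/import?type=documents&collection=sitetestanswers" <<'EOF'
{"id":1,"question":"aaa"}
{"id":2,"question":"bbb?"}
EOF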
I'm writing a controller method in ASP.NET WebAPI to download a file. Here's the part where I set the headers and content:
result.Content = new StreamContent(new MemoryStream(csvResult));
result.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("text/csv");
result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
result.Content.Headers.ContentDisposition.FileName = fileName;
result.Content.Headers.ContentLength = csvResult.Length;
The file is being downloaded, but the browser always shows it as "untitled file". Fiddler shows the request coming back with the correct headers:
Content-Disposition: attachment; filename=Report2015-9_createtest3.csv
Content-Length: 1477
Content-Type: text/csv
I've tried it with the content type set to "application/octet-stream" and "text/plain", and those don't work any better. Any idea what's going on?
We have a web page that grabs a series of strings from a URL, finds some PDFs associated with those strings, zips them up using DotNetZip, and returns them to the user. The page that does this is very simple; here's the Page_Load:
protected void Page_Load(object sender, EventArgs e)
{
string[] fileNames = Request.QueryString["requests"].Split(',');
Response.Clear();
Response.ClearHeaders();
Response.ContentType = "application/zip";
string archiveName = String.Format("MsdsRequest-{0}.zip", DateTime.Now.ToString("yyyy-mm-dd-HHmmss"));
Response.AddHeader("Content-Disposition", "attachment; filename=\"" + archiveName + "\"");
using (ZipFile zip = new ZipFile())
{
foreach (string fileName in fileNames)
{
zip.AddFile(String.Format(SiteSettings.PdfPath + "{0}.pdf", fileName), "");
}
zip.Save(Response.OutputStream);
}
Response.Flush();
}
(Before you ask, it would be fine if someone put other values in this url...these are not secure files.)
This works fine on my development box. However, when testing on our QA system, it downloads the zipped file, but it is corrupt. No error is thrown, and nothing is logged in the event log.
It may be possible for me to find a way to interactively debug on the QA environment, but since nothing is actually failing by throwing an error (such as if the dll wasn't found, etc.), and it's successfully generating a non-empty (but corrupt) zip file, I'm thinking I'm not going to discover much by stepping through it.
Is it possible that this is some kind of issue where the web server is "helping" me by "fixing" the file in some way?
I looked at the http response headers where it was working on my local box and not working on the qa box, but while they were slightly different I didn't see any smoking gun.
As another idea I rejected, the content length occurred to me as a possibility, since if the content-length value were too small I guess that would make it corrupt. But I'm not clear why that would happen, and I don't think that's it, since zipping and downloading one file gives me a small zip while downloading several files gives me a much larger zip. That, combined with the fact that no errors are being logged, makes me think the zip utility is correctly finding and compressing the files and the problem is elsewhere.
Here are the headers, to be complete.
The response header on my development machine (working)
HTTP/1.1 200 OK
Date: Wed, 02 Jan 2013 21:59:31 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Content-Disposition: attachment; filename="MsdsRequest-2013-59-02-165931.zip"
Transfer-Encoding: chunked
Cache-Control: private
Content-Type: application/zip
The response header on the qa machine (not working)
HTTP/1.1 200 OK
Date: Wed, 02 Jan 2013 21:54:37 GMT
Server: Microsoft-IIS/6.0
P3P: CP="NON DSP LAW CUR TAI HIS OUR LEG"
SVR: 06
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Content-Disposition: attachment; filename="MsdsRequest-2013-54-02-165437.zip"
Cache-Control: private
Content-Type: application/zip
Set-Cookie: (cookie junk removed);expires=Wed, 02-Jan-2013 21:56:37 GMT;path=/;httponly
Content-Length: 16969
Not sure how to approach this since nothing is claiming a failure. I feel like this could be a web server configuration issue (since I don't have any better ideas), but I'm not sure where to look. Is there a tack I can take?
As it is, you are missing a call to End() on the page right after the Flush(), like this:
...
zip.Save(Response.OutputStream);
}
Response.Flush();
Response.End();
}
But using a page to send a zip file is not the correct way; IIS probably also gzips the page, and that may cause issues as well. The correct way is to use a handler, and also to avoid extra gzip compression for that handler, either by configuring IIS accordingly or, if you apply gzip compression yourself, by skipping it for this handler.
A handler named, for example, download.ashx would look like this for your case:
public void ProcessRequest(HttpContext context)
{
string[] fileNames = context.Request.QueryString["requests"].Split(',');
context.Response.ContentType = "application/zip";
string archiveName = String.Format("MsdsRequest-{0}.zip", DateTime.Now.ToString("yyyy-mm-dd-HHmmss"));
context.Response.AddHeader("Content-Disposition", "attachment; filename=\"" + archiveName + "\"");
// render direct
context.Response.BufferOutput = false;
using (ZipFile zip = new ZipFile())
{
foreach (string fileName in fileNames)
{
zip.AddFile(String.Format(SiteSettings.PdfPath + "{0}.pdf", fileName), "");
}
zip.Save(context.Response.OutputStream);
}
}
Currently I use Response to return an XML file, but the performance seems poor when the file is large.
So I would like to know how to return a byte[] (gzipped XML).
Also, IE/Firefox should still be able to display the XML file from the gzipped byte array.
Before, when I used a servlet, it could automatically show the XML file.
@GET
@Path("/Test/{CustomerId}")
@Produces("application/xml")
public Response getTest() throws IOException {
return Response.ok().entity(new FileInputStream("CC100_PC.xml")).build();
}
By the way, how does Jersey support init and destroy functions? I want to set up a database connection in an init function and release it in a destroy function.
Just add the GZIPContentEncodingFilter to your Jersey app - see http://jersey.java.net/nonav/apidocs/latest/jersey/com/sun/jersey/api/container/filter/GZIPContentEncodingFilter.html
That will automatically compress it using GZIP if the client supports it (which it figures out from the Accept-Encoding HTTP header).
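For reference, with Jersey 1.x the filter is usually registered through init-params on the Jersey servlet in web.xml, roughly as sketched below (the servlet-name is just an example; adjust it to however your application declares the Jersey ServletContainer):
<servlet>
  <servlet-name>JerseyServlet</servlet-name>
  <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
  <!-- Decompress gzip-encoded request bodies -->
  <init-param>
    <param-name>com.sun.jersey.spi.container.ContainerRequestFilters</param-name>
    <param-value>com.sun.jersey.api.container.filter.GZIPContentEncodingFilter</param-value>
  </init-param>
  <!-- Gzip responses when the client advertises Accept-Encoding: gzip -->
  <init-param>
    <param-name>com.sun.jersey.spi.container.ContainerResponseFilters</param-name>
    <param-value>com.sun.jersey.api.container.filter.GZIPContentEncodingFilter</param-value>
  </init-param>
</servlet>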