Using request with validation and then download an xls file - python-requests

I'm trying to automate some analysis for which I need to repeatedly download an .xls file from "https://aplicaciones-sic.coordinador.cl/redcdec/".
I'm creating a session object to get a persistent HTTP connection (and cookie handling), then downloading the file.
I do get a file, but the validation seems to fail: the file is locked and carries a "validation process failed" message. If I download the file manually with the same credentials, I can verify that there's no problem with the .xls file.
import requests

site_url = "https://aplicaciones-sic.coordinador.cl/redcdec"
# form fields as named by the login form (this is form data, not files)
payload = {'username2': 'good', 'password': 'also_good', 'doLogin': 'Entrar'}

# create a session so cookies persist across requests
session = requests.Session()
# log in to the site
response = session.post(site_url, data=payload)
print(response.text)
print(response.status_code)
I have tried multiple variations of the code shown above.
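Posting the credentials to the site root is usually not enough: login forms normally submit to a dedicated action URL, and the cookie set by that response must be reused for the download. A minimal sketch of the pattern, assuming a hypothetical "/login.php" action and "/export.xls" path (check the form's action attribute and input names in the page HTML for the real values):

```python
def login_and_download(session, base_url, credentials, export_path):
    """Log in via the form's real action URL, then fetch the export with
    the same session so the authentication cookie is sent along."""
    # NOTE: "/login.php" is a placeholder; read the real action from the form
    login = session.post(base_url + "/login.php", data=credentials)
    login.raise_for_status()
    # the session now carries the cookies set by the login response
    export = session.get(base_url + export_path)
    export.raise_for_status()
    return export.content
```

With requests this would be called as login_and_download(requests.Session(), site_url, your_form_dict, "/export.xls"), and the returned bytes written to disk with open(path, "wb"). If the login page embeds a hidden CSRF token, it has to be scraped from a prior GET of the form first.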

Related

Automate exporting CSV report in Kibana

I am trying to automate the CSV export in Kibana. I know we can always send a POST request to generate the report, but the file then sits in the Reporting tab and is not downloaded automatically.
Is there any way an application can download the file and save it locally automatically, i.e. without any manual intervention?
I am trying to build an application that will automatically download the report file weekly for a particular object.
Send the POST request to generate the CSV report.
It returns a response like this:
{
  "path": "/api/reporting/jobs/download/kiivr09200121bb65cdzn8p3",
  "job": {
    "id": "kiivr09200121bb65cdzn8p3",
    .............
  }
}
We can then easily download the file using the URL in the "path" field.
For example, if Kibana is running on localhost:5601,
we can download it from the following URL:
http://localhost:5601/api/reporting/jobs/download/kiivr09200121bb65cdzn8p3
We need to set the "kbn-xsrf" header to "true". We also need to provide a username and password via Basic authorization in case Kibana requires authentication.
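The headers described above can be sketched as a small helper (the username/password values are placeholders; the Authorization header is only needed when Kibana security is enabled):

```python
import base64

def kibana_download_headers(username=None, password=None):
    """Build the headers for Kibana's reporting download endpoint:
    kbn-xsrf is mandatory; Basic auth only if security is enabled."""
    headers = {"kbn-xsrf": "true"}
    if username is not None:
        # Basic auth is base64("username:password")
        token = base64.b64encode(f"{username}:{password}".encode()).decode()
        headers["Authorization"] = "Basic " + token
    return headers
```

A GET to http://localhost:5601/api/reporting/jobs/download/&lt;job-id&gt; with these headers (e.g. requests.get(url, headers=kibana_download_headers("user", "pass"))) then returns the CSV once the report job has finished.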

How do you download from a URL that takes data from a session in R

The title is a bit confusing but I'll be more specific. I have a store on RedBubble.com, and each sale made on the website is recorded by them. When a user is signed in, they can navigate to www.redbubble.com/account/sales/by_sale to view a tabulated version of these sales. Also, on the same page there is a link to download all of this data in CSV format.
I would like to download this file remotely from an R script, but the URL for this file is simply "www.redbubble.com/account/sales/by_sale.csv" and carries no information about which profile to download from (understandably, for privacy/security reasons).
url <- "https://www.redbubble.com/account/sales/by_sale.csv"
dest_file <- "data/redbubble.csv"
download.file(url, destfile = dest_file)
In R (above), downloading just this URL results in a 406 error. Is there a way to relay session/login credentials in a GET request so this file can be accessed from R?
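Whatever the language, the general shape of such a request is the same: send the session cookie that identifies the logged-in account, and send an Accept header the server will honor (a 406 means the server rejected what the request said it would accept). A sketch of the headers involved, shown in Python since the mechanism is language-independent (in R, httr's GET with set_cookies and add_headers is the analogue); the header values here are illustrative assumptions, not RedBubble's actual requirements:

```python
def csv_request_headers(session_cookie):
    """Headers for fetching a session-bound CSV export: the login cookie
    identifies the account, and Accept declares CSV as acceptable."""
    return {
        "Cookie": session_cookie,        # e.g. copied from a logged-in browser
        "Accept": "text/csv,*/*;q=0.8",  # a bare request may get 406 without this
        "User-Agent": "Mozilla/5.0",     # some sites reject unknown clients
    }
```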

IIS 7.0+ HTTP PUT Completes, but No File Saved

I'm struggling to figure out what exactly is happening. I am using GdPicture to save a scanned document through JavaScript, using their COM+ code and sample project as my starting point. Long story short, their function issues an HTTP PUT request specifying the file name to be saved.
When I execute the command I see that the request reaches my server, and it even has the appropriate content size to include the PDF document. I even get a 200 response back in my browser, no errors or anything... yet the PDF doesn't get saved. Is that because PUT isn't the right way to do this? I don't have the option to POST the file because the transfer is wrapped in GdPicture's API.
I have done the following:
Ensured that IIS_IUSRS group has write permissions to the "Upload" virtual directory
Added a handler that specifically allows the PUT verb for "*.pdf"
Removed the StaticFileHandler for the "Upload" virtual directory
I apologize for the links, but I don't have 10 rep points yet.
PUT Request from FIDDLER
Response
** Edit **
More information about GdPicture, I have already contacted them and their function is not the problem. The implementation is as simple as
var status = oGdViewer.SaveDocumentToPDF_2("http://domain.com/Annotation/Upload/" + FileName, "user", "pass");
Thanks!

iis7 website accessed externally downloads files to server instead of local machine

I have a site set up in IIS. It allows users to download files from a remote cloud to their own local desktop. HOWEVER, the context seems to be mixed up: when I access the website externally via the IP and execute the download, it saves the file to the server hosting the site, not locally. What's going on?
The relevant lines of code:
using (var sw2 = new FileStream(filePath, FileMode.Create))
{
    var request = new RestRequest("drives/{chunk}");
    var resp2 = client.Execute(request);
    sw2.Write(resp2.RawBytes, 0, resp2.RawBytes.Length);
}
Your code is writing a file to the local filesystem of the server. If you want to send the file to the client, you need to do something like
Response.ContentType = "application/octet-stream";
Response.BinaryWrite(resp2.RawBytes);
The Response object is what you use to send data back to the client who made the request to your page.
I imagine the code snippet you posted is running in some sort of code-behind somewhere. That runs on the server, not on the client. You will need to write those bytes to the Response object, specify the content type, etc., and let the user save the file himself.

Extracting zip from stream with DotNetZip

public void ZipExtract(Stream inputStream, string outputDirectory)
{
using (ZipFile zip = ZipFile.Read(inputStream))
{
Directory.CreateDirectory(outputDirectory);
zip.ExtractSelectedEntries("name=*.jpg,*.jpeg,*.png,*.gif,*.bmp", " ", outputDirectory,
ExtractExistingFileAction.OverwriteSilently);
}
}
[HttpPost]
public ContentResult Uploadify(HttpPostedFileBase filedata)
{
var path = Server.MapPath(@"~/Files");
var filePath = Path.Combine(path, filedata.FileName);
if (filedata.FileName.EndsWith(".zip"))
{
ZipExtract(Request.InputStream,path);
}
filedata.SaveAs(filePath);
_db.Photos.Add(new Photo
{
Filename = filedata.FileName
});
_db.SaveChanges();
return new ContentResult{Content = "1"};
}
I'm trying to read a zip archive from a stream and extract the files. I get the following exception on the line "using (ZipFile zip = ZipFile.Read(inputStream))": ZipEntry::ReadDirEntry(): Bad signature (0xC618F879) at position 0x0000EE19
Any ideas how to handle this exception?
The error is occurring because the stream you are trying to read is not a valid zip bytestream. In most cases, Request.InputStream will not represent a zip file. It will represent an HTTP message, which will look like this:
POST /path/for/your/app.aspx HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.2; ...)
Content-Type: application/x-www-form-urlencoded
Content-Length: 11132
...more stuff here...
I think what you are doing is trying to read that message as if it were a zip file. That's not gonna work. The file content is actually embedded in the "... more stuff here..." part.
To work toward solving this, I suggest you work in smaller steps.
First, get the file upload to work, saving the content of the uploaded file to a filesystem file on the server. Then, on the server, try to open the file as a zipfile. If it works, then you should be able to replace the file saving portion, with ZipFile.Read(). If you cannot open the file that you saved, then it means that the file that you saved is not a zip file. Either it is incomplete, or, more likely, it includes extraneous data, like the HTTP headers.
If you have trouble successfully uploading a binary file like a zip file, first work on uploading a text file. You can more easily verify the upload of a text file on the server, by simply opening the uploaded content in a text editor, and checking that it contains exactly the content of the file that was uploaded from the client. Once you have this working, move to a binary file. Then you can move to a full streaming approach, using DotNetZip to read the stream. Once you get to this point, there should be no need to save the file to the filesystem, before reading it as a zip file, but you may want to save it anyway, for other reasons.
To help, you may want to use Fiddler2, the debugging HTTP proxy. Install it on the browser machine, turn it on, and it will help you see the messages that get sent from the browser to the ASPNET application on the server. You'll see that a file upload contains more that just the bare file data.
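The point about the embedded file content can be demonstrated offline: wrap zip bytes in a simplified, hypothetical multipart body like the one a browser sends, and the stream no longer begins with the "PK" zip signature that ZipFile.Read expects:

```python
import io
import zipfile

def make_zip_bytes():
    """Build a tiny zip archive in memory."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("photo.jpg", b"not really image data")
    return buf.getvalue()

def wrap_in_multipart(zip_bytes, boundary=b"----boundary42"):
    """Simulate a browser file-upload body: the zip bytes sit after the
    part headers, so a raw read of the stream sees headers, not 'PK'."""
    return (b"--" + boundary + b"\r\n"
            b'Content-Disposition: form-data; name="filedata"; filename="a.zip"\r\n'
            b"Content-Type: application/zip\r\n"
            b"\r\n"
            + zip_bytes +
            b"\r\n--" + boundary + b"--\r\n")
```

make_zip_bytes() starts with b"PK", but the wrapped body starts with the boundary, which is why reading the whole request stream as a zip fails with a bad-signature error. The server-side framework (here, HttpPostedFileBase) is what strips this framing and exposes just the file part.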
A more stable solution could be to use ICSharpCode's SharpZipLib: http://www.sharpdevelop.net/OpenSource/SharpZipLib/Default.aspx
