I'm a web developer.
DownloadController.php
$local_file = 'file.zip';                 // name the user should see
$download_file = 'd:\temp\download.zip';  // actual file on disk
if (file_exists($download_file)) {
    header('Content-Description: File Transfer');
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . basename($local_file) . '"');
    header('Expires: 0');
    header('Cache-Control: must-revalidate');
    header('Pragma: public');
    header('Content-Length: ' . filesize($download_file));
    readfile($download_file);
}
I want the browser to download the file under the name 'file.zip', while the actual file on the server is at 'd:\temp\download.zip'.
Can anyone help me?
Thank you.
You can use the following code to download a file:
return response()->download($pathToFile);
OR
return response()->download($pathToFile, $name, $headers);
The download method may be used to generate a response that forces the user's browser to download the file at the given path. The download method accepts a file name as the second argument to the method, which will determine the file name that is seen by the user downloading the file. Finally, you may pass an array of HTTP headers as the third argument to the method:
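For your case, a minimal sketch of a Laravel controller action (assuming the controller is wired to a route; the header value here is just an example):

public function download()
{
    // Actual file on disk; the second argument is the name the user will see.
    $pathToFile = 'd:\temp\download.zip';
    $name = 'file.zip';
    $headers = ['Content-Type' => 'application/zip'];

    return response()->download($pathToFile, $name, $headers);
}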
Docs
I'd like to recursively download all files from nested folders from this URL to my computer in the same nested structure:
https://hazardsdata.geoplatform.gov/?prefix=Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings%20HYDA/
I've tried several different approaches, using curl and RCurl, including this and some others. There are multiple file types within this folder. But I keep running into cryptic error messages such as Error in function (type, msg, asError = TRUE) : error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
I'm not even sure how to begin.
In their JavaScript you'll find the URL https://hazards-geoplatform.s3.amazonaws.com/, and there you'll find an XML file containing the paths to (seemingly?) all their files. From there it shouldn't be hard, so:
1: download the XML list of files from https://hazards-geoplatform.s3.amazonaws.com
2: each of the XML's <Contents> tags describes a file or a folder. Filter out all the tags that are not relevant to you; that means if the Contents->Key tag does not contain the text Brookings HYDA, filter it out.
3: the remaining Contents tags contain your download path and save path. For every Key tag that ends with /: this is a "folder"; you can't download a folder, just create the path. For example, if the key is
<Contents>
<Key>Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA/Hydraulics_DataCapture/Correspondence/</Key>
this means you should create the folders Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA/Hydraulics_DataCapture/Correspondence and move on. However, if the key's value does not end with /, it means you should download it; for example, if you find
<Contents>
<Key>Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA/Hydraulics_DataCapture/Correspondence/200724-CityBrookings-AirportInfo_Email.pdf</Key>
<LastModified>2022-03-04T17:54:48.000Z</LastModified>
<ETag>"9fe9af393f043faaa8e368f324c8404a"</ETag>
<Size>303737</Size>
<StorageClass>STANDARD</StorageClass>
</Contents>
it means the save filepath is Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA/Hydraulics_DataCapture/Correspondence/200724-CityBrookings-AirportInfo_Email.pdf
and the url to download the file is https://hazards-geoplatform.s3.amazonaws.com/ + urlencode(key), in this case:
https://hazards-geoplatform.s3.amazonaws.com/Region8%2FR8_MIT%2FRisk_MAP%2FData%2FBLE%2FSouth_Dakota%2F60601300_BrookingsCO%2FBrookings%20HYDA%2FHydraulics_DataCapture%2FCorrespondence%2F200724-CityBrookings-AirportInfo_Email.pdf
I don't know how to do it with curl/R, but here's how to do it in PHP; happy porting:
<?php
declare(strict_types=1);

// Fetch a URL and return the response body, reusing a single curl handle.
function curl_get(string $url): string
{
    echo "fetching {$url}\n";
    static $ch = null;
    if ($ch === null) {
        $ch = curl_init();
        curl_setopt_array($ch, array(
            CURLOPT_RETURNTRANSFER => 1,
            CURLOPT_ENCODING => '',
            CURLOPT_FOLLOWLOCATION => 1,
            CURLOPT_VERBOSE => 0
        ));
    }
    curl_setopt($ch, CURLOPT_URL, $url);
    $ret = curl_exec($ch);
    if (curl_errno($ch)) {
        throw new Exception("curl error " . curl_errno($ch) . ": " . curl_error($ch));
    }
    return $ret;
}

$base_url = 'https://hazards-geoplatform.s3.amazonaws.com/';
$xml = curl_get($base_url);
$domd = new DOMDocument();
// loadHTML() is forgiving enough to parse the S3 XML listing; @ silences its warnings.
@$domd->loadHTML($xml);
$xp = new DOMXPath($domd);
foreach ($xp->query("//key[contains(text(),'Brookings HYDA')]") as $node) {
    $relative = $node->nodeValue;
    if ($relative[-1] === '/') {
        // it's a folder, ignore
        continue;
    }
    // create the local folder structure, then download the file into it
    $dir = dirname($relative);
    if (!is_dir($dir)) {
        mkdir($dir, 0777, true);
    }
    $url = $base_url . urlencode($node->nodeValue);
    file_put_contents($relative, curl_get($url));
}
After running that for a few seconds I have:
$ find
.
./fuk.php
./Region8
./Region8/R8_MIT
./Region8/R8_MIT/Risk_MAP
./Region8/R8_MIT/Risk_MAP/Data
./Region8/R8_MIT/Risk_MAP/Data/BLE
./Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota
./Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO
./Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA
./Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA/Hydraulics_DataCapture
./Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA/Hydraulics_DataCapture/Correspondence
./Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA/Hydraulics_DataCapture/Correspondence/200724-CityBrookings-AirportInfo_Email.pdf
./Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA/Hydraulics_DataCapture/Correspondence/2D_Exceptions_2021Update.pdf
./Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA/Hydraulics_DataCapture/DCS_Checklist_Hydraulics_BrookingsCoSD.xlsx
./Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA/Hydraulics_DataCapture/Simulations
./Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA/Hydraulics_DataCapture/Simulations/RAS
./Region8/R8_MIT/Risk_MAP/Data/BLE/South_Dakota/60601300_BrookingsCO/Brookings HYDA/Hydraulics_DataCapture/Simulations/RAS/0.2PAC
so it seems to be working.
the last output from the command is
fetching https://hazards-geoplatform.s3.amazonaws.com/Region8%2FR8_MIT%2FRisk_MAP%2FData%2FBLE%2FSouth_Dakota%2F60601300_BrookingsCO%2FBrookings+HYDA%2FHydraulics_DataCapture%2FSimulations%2FRAS%2F0.2PAC%2FPostProcessing.hdf
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 65019904 bytes) in /home/hans/test/fuk.php on line 17
meaning the script hit PHP's 128 MB memory limit while buffering one of their larger files - it's easy to optimize the curl code to write directly to disk instead of storing the entire file in RAM before writing it to disk, but since you want to do this in R anyway, I won't bother optimizing the sample PHP script above.
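For reference, that optimization is little more than the following sketch (the function and variable names are mine, not from the script above); it hands curl an open file handle via CURLOPT_FILE so the response body is streamed straight to disk, and it could replace the file_put_contents() call in the loop:

// Sketch: download a URL straight to a file instead of buffering it in RAM.
function curl_download(string $url, string $savePath): void
{
    $fp = fopen($savePath, 'wb');
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_FILE => $fp,          // write the response body directly to $fp
        CURLOPT_FOLLOWLOCATION => 1,
    ));
    if (curl_exec($ch) === false) {
        throw new Exception("curl error " . curl_errno($ch) . ": " . curl_error($ch));
    }
    curl_close($ch);
    fclose($fp);
}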
I am writing a custom endpoint for a REST API in WordPress, following the guide here: https://developer.wordpress.org/rest-api/extending-the-rest-api/adding-custom-endpoints/
I am able to write an endpoint that returns JSON data. But how can I write an endpoint that returns binary data (PDF, PNG, and similar)?
My REST endpoint function returns a WP_REST_Response (or WP_Error in case of error).
But I do not see what I should return if I want to respond with binary data.
Late to the party, but I feel the accepted answer does not really answer the question, and Google found this question when I searched for the same solution. So here is how I eventually solved the same problem (i.e. by avoiding WP_REST_Response and killing the PHP script before WP tries to send anything else other than my binary data).
function download(WP_REST_Request $request) {
    $dir = $request->get_param("dir");
    // The following is for security, but my implementation is out
    // of scope for this answer. You should either skip this line if
    // you trust your client, or implement it the way you need it.
    $dir = sanitize_path($dir);
    $file = $request->get_param("file");
    // See above...
    $file = sanitize_path($file);
    $sandbox = "/some/path/with/shared/files";
    // full path to the file
    $path = $sandbox.$dir.$file;
    $name = basename($path);
    // get the file mime type
    $finfo = finfo_open(FILEINFO_MIME_TYPE);
    $mime_type = finfo_file($finfo, $path);
    // tell the browser what it's about to receive
    header("Content-Disposition: attachment; filename=$name;");
    header("Content-Type: $mime_type");
    header("Content-Description: File Transfer");
    header("Content-Transfer-Encoding: binary");
    header('Content-Length: ' . filesize($path));
    header("Cache-Control: no-cache private");
    // stream the file without loading it into RAM completely
    $fp = fopen($path, 'rb');
    fpassthru($fp);
    // kill WP
    exit;
}
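For completeness, the callback above still has to be registered as a REST route; a minimal sketch of that (the namespace, route, and permission handling below are placeholders to adapt, not part of the answer above):

add_action('rest_api_init', function () {
    // 'myplugin/v1' and '/download' are illustrative names only.
    register_rest_route('myplugin/v1', '/download', array(
        'methods'             => 'GET',
        'callback'            => 'download',
        'permission_callback' => '__return_true', // replace with a real capability check
    ));
});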
I would look at something called DOMPDF. In short, it renders any HTML DOM to a PDF and streams it straight to the browser.
We use it to generate live copies of invoices straight from the woo admin, generate brochures based on $wp_query results, etc. Anything that can be rendered by a browser can be turned into a PDF and streamed via DOMPDF.
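A minimal sketch of that idea (assumes dompdf installed via Composer; the HTML and file name are placeholders):

require 'vendor/autoload.php';

use Dompdf\Dompdf;

$dompdf = new Dompdf();
// Any HTML a browser could render, e.g. an invoice template filled from $wp_query results.
$dompdf->loadHtml('<h1>Invoice</h1><p>Example content</p>');
$dompdf->setPaper('A4');
$dompdf->render();
// Streams the generated PDF to the browser; 'Attachment' => false displays it inline.
$dompdf->stream('invoice.pdf', array('Attachment' => false));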
I have a problem setting the correct path to the Symfony2 uploads directory.
I am trying to provide users with files that they previously uploaded.
First I tried the following code:
$response = new Response();
$d = $response->headers->makeDisposition(
    ResponseHeaderBag::DISPOSITION_ATTACHMENT,
    $document->getWebPath()
);
$response->headers->set('Content-Disposition', $d);
as advised in the cookbook and How to get web directory path from inside Entity?.
This however resulted in the following error:
The filename and the fallback cannot contain the "/" and "\" characters.
Therefore I decided to switch to:
$filename = $this->get('kernel')->getRootDir() . '/../web' . $document->getWebPath();
return new Response(file_get_contents($filename), 200, $headers);
this however results in:
Warning: file_get_contents(/***.pl/app/../web/uploads/documents/2.pdf) [<a href='function.file-get-contents'>function.file-get-contents</a>]: failed to open stream: No such file or directory
My file that I want to serve is located in
/web/uploads/documents/2.pdf
What code should I use to provide this file to end users?
In order to serve binary files, it's better to use BinaryFileResponse, which accepts the absolute file path as its argument. setContentDisposition() doesn't accept file paths but file names, and that argument is optional (you should only use it if you want to change the name of the file being served to end users):
use Symfony\Component\HttpFoundation\BinaryFileResponse;
use Symfony\Component\HttpFoundation\ResponseHeaderBag;

$response = new BinaryFileResponse($filePath);
$response->setContentDisposition(
    ResponseHeaderBag::DISPOSITION_ATTACHMENT, $fileName
); // This line is optional; it modifies the file name shown to the user
Regarding the file path, you can keep using the code you showed, but slightly changed:
$filePath = $this->container->getParameter('kernel.root_dir')
    .'/../web/'
    .$document->getWebPath();
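Put together, a controller action might look roughly like this sketch (the Document type hint and action name are mine; how $document is obtained is up to your application):

use Symfony\Component\HttpFoundation\BinaryFileResponse;
use Symfony\Component\HttpFoundation\ResponseHeaderBag;

public function downloadAction(Document $document)
{
    $filePath = $this->container->getParameter('kernel.root_dir')
        .'/../web/'
        .$document->getWebPath();

    $response = new BinaryFileResponse($filePath);
    $response->setContentDisposition(
        ResponseHeaderBag::DISPOSITION_ATTACHMENT,
        basename($filePath) // optional: the file name shown to the user
    );

    return $response;
}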
I want to display a PDF, but with the code below I only get to download the PDF. This is my code:
$response = new BinaryFileResponse($path);
$response->trustXSendfileTypeHeader();
$response->setContentDisposition(ResponseHeaderBag::DISPOSITION_ATTACHMENT, $file . '.pdf');
return $response;
Any idea?
Have you tried setting the Content-Type header?
$response->headers->set('Content-Type', 'application/pdf');
Also, ditch that setContentDisposition call, since the DISPOSITION_ATTACHMENT value forces your browser to download the file.
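Putting both suggestions together, a sketch that should display the PDF inline (assuming $path and $file as in your code; ResponseHeaderBag::DISPOSITION_INLINE is the inline counterpart of DISPOSITION_ATTACHMENT):

$response = new BinaryFileResponse($path);
$response->trustXSendfileTypeHeader();
$response->headers->set('Content-Type', 'application/pdf');
// Either drop the Content-Disposition header entirely, or set it explicitly to inline:
$response->setContentDisposition(ResponseHeaderBag::DISPOSITION_INLINE, $file . '.pdf');

return $response;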
I'm using this approach to display an icon near each link to a file from my web-application.
In order to avoid IE history-cache problems, I append a timestamp to the link's href, so the link still reads FileName.xls but its href no longer ends in .xls. In this case the CSS rule doesn't do its job.
Do you know how I can solve this problem?
The selector a[href$='.xls'] you probably use applies only if it matches the end of the href's value. Use a[href*='.xls'] instead.
Excerpt from Selectors Level 3:
[att*=val]
Represents an element with the att attribute whose value contains at least one instance of the substring "val". If "val" is the empty string then the selector does not represent anything.
Edit
If you have control over the .htaccess file, you may set cache headers there to avoid the cache problems, so you can use your original stylesheet rules. See Cache-Control Headers using Apache and .htaccess for further details.
The problem is that a[href$='.xls'] matches the end of the href attribute of your anchor, but you're appending the timestamp, so the ending of that href is actually the timestamp.
To avoid caching problems you could handle the downloads using a proxy; that is, link to a script that triggers the download of the files (so the visible link text can stay spreadsheet.xls while the href points at the script). In PHP it's easily accomplished with the readfile() function and sending no-cache headers, as sketched below.
But since I don't know which programming language you're using, I couldn't say much more.
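If PHP is an option, a minimal sketch of such a proxy script (the download.php idea, the whitelist, and the query parameter name are mine, purely illustrative):

<?php
// download.php?file=spreadsheet.xls - serves the file with no-cache headers.
$allowed = array('spreadsheet.xls'); // whitelist to avoid serving arbitrary files
$file = isset($_GET['file']) ? $_GET['file'] : '';
if (!in_array($file, $allowed, true)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}
header('Content-Type: application/vnd.ms-excel');
header('Content-Disposition: attachment; filename="' . $file . '"');
header('Cache-Control: no-cache, must-revalidate');
header('Expires: 0');
readfile($file);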
Duncan, I know this has already been answered, but just for your comment, here is a function that should work for you. I think it is almost straight off the PHP manual or some other example(s) somewhere. I have this in a class that handles other things like checking file permissions, uploads, etc. Modify to your needs.
public function downloadFile($filename)
{
    // $this->dir is obviously the place where you've got your files
    $filepath = $this->dir . '/' . $filename;
    // make sure file exists
    if (!is_file($filepath))
    {
        header('HTTP/1.0 404 Not Found');
        return 0;
    }
    $fsize = filesize($filepath);
    // set mime-type
    $mimetype = '';
    // mime type is not set, get from server settings
    if (function_exists('finfo_file'))
    {
        $finfo = finfo_open(FILEINFO_MIME); // return mime type
        $mimetype = finfo_file($finfo, $filepath);
        finfo_close($finfo);
    }
    if ($mimetype == '')
    {
        $mimetype = "application/force-download";
    }
    // replace some characters so the downloaded filename is cool
    $fname = preg_replace('/\//', '', $filename);
    // set headers
    header("Pragma: public");
    header("Expires: 0");
    header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
    header("Cache-Control: public");
    header("Content-Description: File Transfer");
    header("Content-Type: $mimetype");
    header("Content-Disposition: attachment; filename=\"$fname\"");
    header("Content-Transfer-Encoding: binary");
    header("Content-Length: " . $fsize);
    // download
    $file = @fopen($filepath, "rb");
    if ($file)
    {
        while (!feof($file))
        {
            print(fread($file, 1024 * 8));
            flush();
            if (connection_status() != 0)
            {
                @fclose($file);
                die(); // not so sure if this is best... :P
            }
        }
        @fclose($file);
    }
}