nodejs read image headers - http

I'm writing a script to download files from URLs in a list. The problem I'm having is that the URLs don't just point to static files like file.jpg; they tend to point to servlets that return a file.
What I want to do is download the file for each url and save it with a generic name, then read its headers and rename it with the appropriate extension. (Unless there's a better way)
How could I do that?
I've tried using mime-magic, but it tells me that the extension-less files are directories.

It should work with mime-magic. Are you sure the path is correct and not pointing to a directory?
Otherwise you could use the command line tool file --mime /path/to/file
Here is how to detect an extension of a file using mime-magic:
var mime = require('mime-magic');

mime('/path/to/foo.pdf', function (err, type) {
    if (err) {
        console.error(err.message);
        // ERROR: cannot open `/path/to/foo.pdf' (No such file or directory)
    } else {
        console.log('Detected mime type: %s', type);
        // application/pdf
    }
});
Note: Added sled's comment as an answer under community-wiki.
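For the full workflow in the question (download under a generic name, detect the type, then rename), a rough sketch could look like the one below. It uses Node's built-in http and fs modules plus the mime-magic callback shown above; the servlet URL, the temporary filename, and the small MIME-to-extension map are made up for illustration:
var http = require('http');
var fs = require('fs');
var mime = require('mime-magic');

// Hypothetical map; extend it with whatever types you expect to receive.
var extensions = {
    'image/jpeg': '.jpg',
    'image/png': '.png',
    'image/gif': '.gif',
    'application/pdf': '.pdf'
};

// Download the response body to a temporary file, then hand it to mime-magic.
http.get('http://example.com/servlet?id=42', function (res) {
    var tmp = fs.createWriteStream('download.tmp');
    res.pipe(tmp);
    tmp.on('finish', function () {
        mime('download.tmp', function (err, type) {
            if (err) return console.error(err.message);
            var ext = extensions[type] || '';
            fs.rename('download.tmp', 'download' + ext, function (err) {
                if (err) return console.error(err.message);
                console.log('Saved as download%s (%s)', ext, type);
            });
        });
    });
}).on('error', function (err) {
    console.error(err.message);
});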

Related

Golang net/http fileserver giving 404 on any pattern other than "/"

Hello awesome stackoverflow community,
Apologies for the lame question.
I've been playing around with the net/http package in Go, and was trying to set an http.Handle to serve the contents of a directory. My code to the Handle is
func main() {
    http.Handle("/pwd", http.FileServer(http.Dir(".")))
    http.HandleFunc("/dog", dogpic)
    err := http.ListenAndServe(":8080", nil)
    if err != nil {
        panic(err)
    }
}
My dogpic handler uses os.Open and http.ServeContent, and it works fine.
However, when I try to browse localhost:8080/pwd I am getting a 404 page not found, but when I change the pattern to route to /, as
http.Handle("/", http.FileServer(http.Dir(".")))
it shows a listing of the current directory. Can someone please help me figure out why the file server is not working with other patterns, but only with /?
Thank you.
The http.FileServer behind your /pwd handler uses the request's URI path to build the filename: a request for /pwd makes it look for a file named pwd in the local directory, which doesn't exist, hence the 404.
I suspect you only want pwd as a prefix on the URI, not in the filenames themselves.
There's an example for how to do this in the http.FileServer doc:
// To serve a directory on disk (/tmp) under an alternate URL
// path (/tmpfiles/), use StripPrefix to modify the request
// URL's path before the FileServer sees it:
http.Handle("/tmpfiles/", http.StripPrefix("/tmpfiles/", http.FileServer(http.Dir("/tmp"))))
You'll want to do something similar (note the trailing slashes, so that everything under /pwd/ is routed to the handler):
http.Handle("/pwd/", http.StripPrefix("/pwd/", http.FileServer(http.Dir("."))))
You should write http.Handle("/pwd", http.FileServer(http.Dir("./"))).
http.Dir references a directory on the file system.
If you want the contents of a local ./pwd directory served, use http.Handle("/pwd", http.StripPrefix("/pwd", http.FileServer(http.Dir("./pwd")))).
It will serve everything you have in the ./pwd directory.

Update/remove page ESP8266 Webserver

I'm using the ESP8266WebServer.h library for the ESP8266. Files can be served to a specific filename by using something like:
...
void example() {
    sendFile(200, "text/html", data_example, sizeof(data_example));
}
...
webServer.on ("/example.html", example);
Once a file is served, it cannot be updated by executing webServer.on("/example.html", example2);.
How can a hosted file be removed (or updated to a blank file) so that it will return a 404 error?
Why not include conditional logic in your example() function in order to serve a 404 page when necessary?
Hope it helps.
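A minimal sketch of that idea, assuming a global flag that some other part of the sketch toggles, and using the library's webServer.send() method (sendFile is the helper from the question):
bool exampleEnabled = true;  // flip this to false to "remove" the page

void example() {
    if (!exampleEnabled) {
        // Behave as if the page no longer exists.
        webServer.send(404, "text/plain", "404: Not Found");
        return;
    }
    sendFile(200, "text/html", data_example, sizeof(data_example));
}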

How to rename a jar file inside another jar file?

I have jar foo.jar which contains jar foo/config/baar-temp.jar.
What is the best method to rename baar-temp.jar to baar.jar?
Actually, the jar format is based on zip and can be operated on as a file system, using for example the ZipFileSystemProvider available since Java 7. That allows a rather simple manipulation of the contents of one:
// needs java.net.URI, java.nio.file.*, java.util.Collections
private void renameStuffInsideJar(String jarFilePath) {
    URI uri = URI.create("jar:file:" + jarFilePath);
    // Open the jar as a zip file system; try-with-resources closes it and writes the change back.
    try (FileSystem jarFile = FileSystems.newFileSystem(uri, Collections.<String, String>emptyMap())) {
        Path pathInJarfile = jarFile.getPath("foo/config/baar-temp.jar");
        Files.move(pathInJarfile, pathInJarfile.resolveSibling("baar.jar"));
    } catch (IOException e) {
        // TODO: handle the error
    }
}
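Since the URI is built by prepending jar:file: to the argument, call it with an absolute path to the outer jar, for example (hypothetical location):
renameStuffInsideJar("/home/user/lib/foo.jar");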
Alternatively, if it's not code you want, you could just open your jar file in your preferred archive manager like 7zip or WinRar and rename it using that.

How can I compress / gzip my minified .js and .css files before publishing to AWS S3?

I ran Google pagespeed and it suggests compressing my .js and .css
Eliminate render-blocking JavaScript and CSS in above-the-fold content
Enable compression
Compressing resources with gzip or deflate can reduce the number of bytes sent over the network.
Enable compression for the following resources to reduce their transfer size by 210.9KiB (68% reduction).
Compressing http://xx.com/content/bundles/js.min.js could save 157.3KiB (65% reduction).
Compressing http://xx.com/content/bundles/css.min.css could save 35.5KiB (79% reduction).
Compressing http://xx.com/ could save 18.1KiB (79% reduction).
During my publish I have a step that uses Windows PowerShell to move a minified .js and .css bundle to S3, and this goes to CloudFront.
Is there some step I could add in the PowerShell script that would compress the .js and .css files?
Also, once the files are compressed, do I have to do anything more than change the name to tell my browser that it will need to accept a gzip file?
You can add to your upload script the needed code to gzip compress the files.
Some example code could be this:
function Gzip-FileSimple
{
    param
    (
        [String]$inFile = $(throw "Gzip-File: No filename specified"),
        [String]$outFile = $($inFile + ".gz"),
        [switch]$delete # Delete the original file
    )

    trap
    {
        Write-Host "Received an exception: $_. Exiting."
        break
    }

    if (! (Test-Path $inFile))
    {
        "Input file $inFile does not exist."
        exit 1
    }

    Write-Host "Compressing $inFile to $outFile."

    $input = New-Object System.IO.FileStream $inFile, ([IO.FileMode]::Open), ([IO.FileAccess]::Read), ([IO.FileShare]::Read)
    $buffer = New-Object byte[]($input.Length)
    $byteCount = $input.Read($buffer, 0, $input.Length)

    if ($byteCount -ne $input.Length)
    {
        $input.Close()
        Write-Host "Failure reading $inFile."
        exit 2
    }
    $input.Close()

    $output = New-Object System.IO.FileStream $outFile, ([IO.FileMode]::Create), ([IO.FileAccess]::Write), ([IO.FileShare]::None)
    $gzipStream = New-Object System.IO.Compression.GzipStream $output, ([IO.Compression.CompressionMode]::Compress)

    $gzipStream.Write($buffer, 0, $buffer.Length)
    $gzipStream.Close()
    $output.Close()

    if ($delete)
    {
        Remove-Item $inFile
    }
}
From this site: Gzip creation in Powershell
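For example, to compress the script bundle from the question (hypothetical relative path; the original file is kept unless -delete is passed):
Gzip-FileSimple -inFile "content\bundles\js.min.js"   # creates content\bundles\js.min.js.gz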
PowerShell Community Extensions has a cmdlet for gzipping files, and it's very easy to use:
Write-Gzip foo.js #will create foo.js.gz
mv foo.js.gz foo.js -Force
You don't have to rename your files, just add a Content-Encoding header and set it to gzip.
Since Amazon S3 is intended only to serve static files, it doesn't compress files (assets); that's why you need to compress them yourself:
Compress your .js and .css with gzip. I don't know how to do it with PowerShell, but I do with Python; I suggest writing a Python deployment script (even better, a fabfile) and integrating the compression and push code into it.
As for whether you have to do anything more than change the name so the browser will accept a gzip file: good question! It is not necessary to change the name of the compressed file; I suggest you don't rename it. However, you:
MUST also set the header Content-Encoding: gzip, otherwise browsers will not know how to handle the file.
Must set the header Content-Type to the right type ('application/javascript' or 'text/css').
Must set the header x-amz-acl: public-read to make the file publicly accessible.
I also suggest setting the header Cache-Control: max-age=TIME (example: TIME=31104000, for 360 days) to help browsers cache it (better performance).
This will work whether the files are served from the origin or through CloudFront. But remember, if you serve them with CloudFront, you will need to invalidate the files after each push, otherwise the old version can live for up to 24 hours after the push. Hope this helps; I can provide a Python solution if needed.
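As a rough sketch of what such a Python deployment step could look like, here is one using the boto3 library (an assumption; the bucket name and file paths below are made up), compressing each bundle and uploading it with the headers listed above:
import gzip
import shutil
import boto3  # assumption: boto3 is used for the S3 upload

def gzip_and_upload(local_path, bucket, key, content_type):
    # Compress the file next to the original; the S3 key keeps the original name.
    gz_path = local_path + '.gz'
    with open(local_path, 'rb') as src, gzip.open(gz_path, 'wb') as dst:
        shutil.copyfileobj(src, dst)

    s3 = boto3.client('s3')
    s3.upload_file(
        gz_path, bucket, key,
        ExtraArgs={
            'ContentEncoding': 'gzip',           # so browsers decompress it
            'ContentType': content_type,         # e.g. 'application/javascript'
            'ACL': 'public-read',                # publicly accessible
            'CacheControl': 'max-age=31104000',  # ~360 days of browser caching
        },
    )

# Hypothetical bucket and keys matching the bundles in the question.
gzip_and_upload('content/bundles/js.min.js', 'my-bucket',
                'content/bundles/js.min.js', 'application/javascript')
gzip_and_upload('content/bundles/css.min.css', 'my-bucket',
                'content/bundles/css.min.css', 'text/css')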

How to get the uploaded file path?

I am using an input tag with type="file" to browse for the file in ASP.NET.
I browsed to the file "Linq2sql.zip" at the location "c\Desktop\Karthik\Linq2sql.zip".
I can get the file name and path using
HttpPostedFileBase file;
var filePath = Path.GetFullPath(file.FileName);
But the file path is like C:\\Program Files (x86)\\Common Files\\Microsoft Shared\\DevServer\\10.0\\Linq2sql.zip.
I have to get the original file path c\\Desktop\\Karthik\\Linq2sql.zip. How can I get it?
You cannot get the original path of the file on the client system; that information is not sent by the client.
The reason you get what you do with GetFullPath is that it resolves the bare file name against the current directory of the ASP.NET process. That info is utterly meaningless - and in fact incorrect - in this case.
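To illustrate what is available server-side, here is a small sketch of an MVC upload action (the action name and uploads folder are made up): only the bare file name and the file content arrive from the client, so the best you can do is strip the name and save the content somewhere the server controls.
[HttpPost]
public ActionResult Upload(HttpPostedFileBase file)
{
    // "Linq2sql.zip" -- any client-side folder (c\Desktop\Karthik\...) is never sent.
    var fileName = Path.GetFileName(file.FileName);

    // Save the uploaded content under a server-controlled folder instead.
    var serverPath = Path.Combine(Server.MapPath("~/App_Data/uploads"), fileName);
    file.SaveAs(serverPath);

    return View();
}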
