I have searched for something similar and I keep running across the FTP download answers. This is helpful information, but it is ultimately proving difficult to translate to my situation. I have found a PowerShell script that works, but I am wondering if it can be tweaked for my needs. I don't have much experience with PowerShell scripting, but I'm trying to learn.
The need is this: I need to download and install a series of files to a remote machine, unattended. The files are distributed via email as tinyurls. I currently throw those into a .txt file, then have a PowerShell script read the list and download each file.
The requirement of the project, and why I have turned to PowerShell (and not other utilities), is that these are very specialized machines. The only tools available are the ones baked into Windows 7 Embedded.
The difficulties I run into are:
The files download one at a time. I would like to grab as many simultaneous downloads as the web server will allow (usually 6).
The current script creates file names based on the tinyurl. I need the actual file name from the web server.
Thanks in advance for any suggestions.
Below is the script I’m currently using.
# Copyright (C) 2011 by David Wright (davidwright#digitalwindfire.com)
# All Rights Reserved.
# Redistribution and use in source and binary forms, with or without
# modification or permission, are permitted.
# Additional information available at http://www.digitalwindfire.com.
$folder = "d:\downloads\"
$userAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:7.0.1) Gecko/20100101 Firefox/7.0.1"
$web = New-Object System.Net.WebClient
$web.Headers.Add("user-agent", $userAgent)
Get-Content "d:\downloads\files.txt" |
Foreach-Object {
"Downloading " + $_
try {
$target = join-path $folder ([io.path]::getfilename($_))
$web.DownloadFile($_, $target)
} catch {
$_.Exception.Message
}
}
If you do the web request before you decide on the file name, you should be able to get the expanded path (otherwise you would have to make two web requests, one to get the expanded path and one to download the file).
When I tried this, I found that the BaseResponse property of the Microsoft.PowerShell.Commands.HtmlWebResponseObject returned by the Invoke-WebRequest cmdlet has a ResponseUri property, which is the expanded path we are looking for.
If you get the correct response, just save the file using the name from the expanded path, something like the following (this sample code does not look at HTTP response codes or similar, but expects everything to go well):
function Save-TinyUrlFile
{
    PARAM (
        $TinyUrl,
        $DestinationFolder
    )

    $response = Invoke-WebRequest -Uri $TinyUrl
    $filename = [System.IO.Path]::GetFileName($response.BaseResponse.ResponseUri.OriginalString)
    $filepath = [System.IO.Path]::Combine($DestinationFolder, $filename)
    try
    {
        $filestream = [System.IO.File]::Create($filepath)
        $response.RawContentStream.WriteTo($filestream)
        $filestream.Close()
    }
    finally
    {
        if ($filestream)
        {
            $filestream.Dispose()
        }
    }
}
This method could be called using something like the following, given that the $HOME\Documents\Temp folder exists:
Save-TinyUrlFile -TinyUrl http://tinyurl.com/ojt3lgz -DestinationFolder $HOME\Documents\Temp
On my computer, that saves a file called robots.txt, taken from a GitHub repository.
If you want to download many files at the same time, you can let PowerShell make that happen for you: either use the parallel functionality of PowerShell workflows, or simply start a Job for each URL. Here's a sample of how you could do it using PowerShell Jobs:
Get-Content files.txt | Foreach {
    Start-Job {
        function Save-TinyUrlFile
        {
            PARAM (
                $TinyUrl,
                $DestinationFolder
            )

            $response = Invoke-WebRequest -Uri $TinyUrl
            $filename = [System.IO.Path]::GetFileName($response.BaseResponse.ResponseUri.OriginalString)
            $filepath = [System.IO.Path]::Combine($DestinationFolder, $filename)
            try
            {
                $filestream = [System.IO.File]::Create($filepath)
                $response.RawContentStream.WriteTo($filestream)
                $filestream.Close()
            }
            finally
            {
                if ($filestream)
                {
                    $filestream.Dispose()
                }
            }
        }

        Save-TinyUrlFile -TinyUrl $args[0] -DestinationFolder $args[1]
    } -ArgumentList $_, "$HOME\documents\temp"
}
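If you also want to honor the roughly-six-connection limit mentioned in the question, here is a minimal sketch of the same jobs approach with a simple throttle. The polling interval, job body, and destination folder are my own assumptions; the job body just reuses the Save-TinyUrlFile logic from above:

$maxJobs = 6   # roughly what most web servers will allow per client
$job = {
    param($url, $folder)
    $response = Invoke-WebRequest -Uri $url
    # Name the file after the expanded URL, as shown above.
    $name = [System.IO.Path]::GetFileName($response.BaseResponse.ResponseUri.OriginalString)
    $filestream = [System.IO.File]::Create((Join-Path $folder $name))
    try     { $response.RawContentStream.WriteTo($filestream) }
    finally { $filestream.Dispose() }
}

Get-Content files.txt | Foreach-Object {
    # Wait for a free slot before starting the next download.
    while ((Get-Job -State Running).Count -ge $maxJobs) {
        Start-Sleep -Milliseconds 250
    }
    Start-Job -ScriptBlock $job -ArgumentList $_, "$HOME\documents\temp"
}

# Wait for the remaining downloads and collect their output.
Get-Job | Wait-Job | Receive-Job
Get-Job | Remove-Job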
Related
I wrote a small piece of software using .NET 6 which should run on Windows and Linux (Ubuntu). In this software I need to access a file in a folder.
Linux: /folder1/folder2/file.txt
Windows: d:\folder1\folder2\file.txt
The folder structure and the filename are the same on both systems.
This code works so far:
string[] pfad = new[] { "folder1", "folder2", "file.txt" };
Console.WriteLine(System.IO.Path.Combine(pfad));
and delivers the correct folder structure under Linux and Windows.
How can I define the root directory?
/ in Linux and d:\ in Windows
Can I detect the OS type somehow or what is the best approach?
Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData); is fixed under Windows to C:..., and I want to use another drive.
Borrowing from stefan's answer, but using the OperatingSystem class instead of RuntimeInformation (since OperatingSystem is part of System, I believe it's preferable):
string rootPath;
if (OperatingSystem.IsWindows())
    rootPath = @"d:\";
else if (OperatingSystem.IsLinux())
    rootPath = "/";
else
{
    // maybe throw an exception
    throw new PlatformNotSupportedException();
}
You can use System.Runtime.InteropServices.RuntimeInformation like this:
string rootPath;
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
{
    rootPath = @"d:\";
}
else if (RuntimeInformation.IsOSPlatform(OSPlatform.Linux))
{
    rootPath = "/";
}
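Whichever detection method you use, you can then hand rootPath to Path.Combine together with the relative segments from the question. A minimal sketch, assuming .NET 6 as stated in the question:

using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Pick the platform-specific root, as in the answers above.
        string rootPath = OperatingSystem.IsWindows() ? @"d:\"
                        : OperatingSystem.IsLinux()   ? "/"
                        : throw new PlatformNotSupportedException();

        // Path.Combine inserts the correct separator for the current OS.
        string fullPath = Path.Combine(rootPath, "folder1", "folder2", "file.txt");
        Console.WriteLine(fullPath);
        // Windows: d:\folder1\folder2\file.txt
        // Linux:   /folder1/folder2/file.txt
    }
}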
I am writing a custom endpoint for a REST API in WordPress, following the guide here: https://developer.wordpress.org/rest-api/extending-the-rest-api/adding-custom-endpoints/
I am able to write an endpoint that returns JSON data. But how can I write an endpoint that returns binary data (pdf, png, and similar)?
My REST endpoint function returns a WP_REST_Response (or WP_Error in case of error).
But I do not see what I should return if I want to respond with binary data.
Late to the party, but I feel the accepted answer does not really answer the question, and Google found this question when I searched for the same solution. So here is how I eventually solved the same problem: avoid WP_REST_Response and kill the PHP script before WP tries to send anything other than my binary data.
function download(WP_REST_Request $request) {
    $dir = $request->get_param("dir");
    // The following is for security, but my implementation is out
    // of scope for this answer. You should either skip this line if
    // you trust your client, or implement it the way you need it.
    $dir = sanitize_path($dir);

    $file = $request->get_param("file");
    // See above...
    $file = sanitize_path($file);

    $sandbox = "/some/path/with/shared/files";

    // full path to the file
    $path = $sandbox.$dir.$file;
    $name = basename($path);

    // get the file mime type
    $finfo = finfo_open(FILEINFO_MIME_TYPE);
    $mime_type = finfo_file($finfo, $path);

    // tell the browser what it's about to receive
    header("Content-Disposition: attachment; filename=$name;");
    header("Content-Type: $mime_type");
    header("Content-Description: File Transfer");
    header("Content-Transfer-Encoding: binary");
    header('Content-Length: ' . filesize($path));
    header("Cache-Control: no-cache, private");

    // stream the file without loading it into RAM completely
    $fp = fopen($path, 'rb');
    fpassthru($fp);

    // kill WP
    exit;
}
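For completeness, the function above still needs to be hooked up as a REST route. A minimal sketch following the guide linked in the question; the namespace, route, and permission callback here are placeholder assumptions you should adapt:

add_action('rest_api_init', function () {
    register_rest_route('myplugin/v1', '/download', array(
        'methods'             => 'GET',
        'callback'            => 'download',  // the function shown above
        // Wide open for illustration only; restrict access in production.
        'permission_callback' => '__return_true',
    ));
});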
I would look at something called DOMPDF. In short, it renders any HTML DOM straight to PDF and streams it to the browser.
We use it to generate live copies of invoices straight from the Woo admin, generate brochures based on $wp_query results, etc. Anything that can be rendered by a browser can be streamed via DOMPDF.
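A minimal sketch of the DOMPDF flow, assuming dompdf is installed via Composer; the HTML content and file name are placeholders:

<?php
require 'vendor/autoload.php';

use Dompdf\Dompdf;

$dompdf = new Dompdf();
// Anything a browser can render can go here, e.g. an invoice template.
$dompdf->loadHtml('<h1>Invoice #1234</h1><p>Total: 42.00</p>');
$dompdf->setPaper('A4', 'portrait');
$dompdf->render();
// Stream the generated PDF straight to the browser.
$dompdf->stream('invoice.pdf', array('Attachment' => false));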
I have a website on which users can write blog posts. I'm using the Stack Overflow Pagedown editor to allow users to add content and also images by inserting their links.
But the problem is that if a user inserts a link starting with http://, such as http://example.com/image.jpg, the browser shows a warning saying:
Your Connection to this site is not Fully Secure.
Attackers might be able to see the images you are looking at
& trick you by modifying them
I was wondering how we can force the browser to use only the https:// version of the site from which the image is being inserted, especially when the user inserts a link starting with http://?
Or is there any other solution to this issue?
Unfortunately, browsers expect all loaded resources to be served over SSL. In your case you have no choice but to either store all the images yourself, or proxy the requests from http to https. But I am not sure it is really safe to do it this way.
For example, you can do something like this (I assume the code is PHP, served over https):
<?php
define('CHUNK_SIZE', 1024*1024); // size (in bytes) of each chunk

// Read a file and display its content chunk by chunk
function readfile_chunked($filename, $retbytes = TRUE) {
    $buffer = '';
    $cnt = 0;
    $handle = fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }
    while (!feof($handle)) {
        $buffer = fread($handle, CHUNK_SIZE);
        echo $buffer;
        ob_flush();
        flush();
        if ($retbytes) {
            $cnt += strlen($buffer);
        }
    }
    $status = fclose($handle);
    if ($retbytes && $status) {
        return $cnt; // return num. bytes delivered like readfile() does.
    }
    return $status;
}

$filename = 'http://domain.ltd/path/to/image.jpeg';
$mimetype = 'image/jpeg';
header('Content-Type: '.$mimetype);
readfile_chunked($filename);
Credit for code sample
UPDATE 1
Alternate solution to proxy a streamed file download in Python
UPDATE 2
With the following code, you can stream data from the remote server to your front-end client; if your Django application is served over https, the content will be delivered correctly.
The goal is to read the original image in chunks of 1024 bytes and stream each chunk to the browser. This approach avoids timeout issues when you try to load a heavy image.
I recommend adding another layer with a local cache, instead of doing download -> proxy on each request.
import requests
from django.http import StreamingHttpResponse

# keep this function in the file where you keep your util functions
def url2yield(url, chunksize=1024):
    s = requests.Session()
    # Note: streaming is enabled here
    response = s.get(url, stream=True)

    chunk = True
    while chunk:
        chunk = response.raw.read(chunksize)
        if not chunk:
            break
        yield chunk

# Then create your view using StreamingHttpResponse
def get_image(request, img_id):
    img_url = "http://domain.ltd/lorem.jpg"
    return StreamingHttpResponse(url2yield(img_url), content_type="image/jpeg")
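As for the local-cache layer mentioned above, here is a minimal sketch: file-based, with no expiry or locking, and the cache directory and image URL are my own assumptions:

import os
import requests
from django.http import FileResponse

CACHE_DIR = "/tmp/image_cache"  # assumed location

def get_cached_image(request, img_id):
    os.makedirs(CACHE_DIR, exist_ok=True)
    local_path = os.path.join(CACHE_DIR, "%s.jpg" % img_id)

    if not os.path.exists(local_path):
        # First request: fetch the remote image once and keep a local copy.
        response = requests.get("http://domain.ltd/lorem.jpg", stream=True)
        with open(local_path, "wb") as f:
            for chunk in response.iter_content(chunk_size=1024):
                f.write(chunk)

    # Later requests are served from disk, over your site's https.
    return FileResponse(open(local_path, "rb"), content_type="image/jpeg")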
I'm having trouble using the ftpUpload() function of RCurl to upload a file to a non-existent folder on an SFTP server. I want the folder to be created if it's not there, using the ftp.create.missing.dirs option. Here's my code currently:
.opts <- list(ftp.create.missing.dirs=TRUE)
ftpUpload(what = "test.txt",
          to = "sftp://ftp.testserver.com:22/newFolder/existingfile.txt",
          userpwd = paste(user, pwd, sep = ":"), .opts = .opts)
It doesn't seem to be working, as I get the following error:
* Initialized password authentication
* Authentication complete
* Failed to close libssh2 file
I can upload a file to an existing folder successfully; it's just when the folder isn't there that I get the error.
The problem seems to be due to the fact that you are trying to create a new folder, as seen in this question: Create an remote directory using SFTP / RCurl
The source of the error can be found in the curl code on the Microsoft R Open GitHub page:
case SSH_SFTP_CLOSE:
    if(sshc->sftp_handle) {
        rc = libssh2_sftp_close(sshc->sftp_handle);
        if(rc == LIBSSH2_ERROR_EAGAIN) {
            break;
        }
        else if(rc < 0) {
            infof(data, "Failed to close libssh2 file\n");
        }
        sshc->sftp_handle = NULL;
    }
    if(sftp_scp)
        Curl_safefree(sftp_scp->path);
In the code, rc holds the return value of the libssh2_sftp_close function (more info here: https://www.libssh2.org/libssh2_sftp_close_handle.html), which tries to close the handle for the nonexistent directory, resulting in the error.
Try using curlPerform to create the directory first, as in:
curlPerform(url="ftp.xxx.xxx.xxx.xxx/", postquote="MkDir /newFolder/", userpwd="user:pass")
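Putting the two steps together against the server from the question, a minimal sketch (credentials and paths are placeholders; for SFTP, libcurl documents the quote command as mkdir):

library(RCurl)

# Step 1: create the missing directory on the SFTP server.
curlPerform(url = "sftp://ftp.testserver.com:22/",
            postquote = "mkdir /newFolder/",
            userpwd = paste(user, pwd, sep = ":"))

# Step 2: the upload now targets an existing folder.
ftpUpload(what = "test.txt",
          to = "sftp://ftp.testserver.com:22/newFolder/test.txt",
          userpwd = paste(user, pwd, sep = ":"))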
I have a jar file (custom) which I need to publish to a Sonatype Nexus repository from a Groovy script.
The jar is located at some path on the machine where the Groovy script runs (for instance: c:\temp\module.jar).
My Nexus repo URL is http://:/nexus/content/repositories/
On this repo I have a folder structure like: folder1 -> folder2 -> folder3
During publishing of my jar I need to:
Create a new directory in folder3 with the module's revision (my Groovy script knows this revision)
Upload the jar to this directory
Create pom, md5 and sha1 files for the uploaded jar
After several days of investigation I still have no idea how to create such a script, but this approach looks much cleaner to me than direct uploading.
I found http://groovy.codehaus.org/Using+Ant+Libraries+with+AntBuilder and some other stuff (a Stack Overflow non-script solution).
I understand how to create ivy.xml in my Groovy script, but I don't understand how to create build.xml and ivysettings.xml on the fly and set the whole system up to work.
Could you please help me understand the Groovy way?
UPDATE:
I found that the following command works fine for me:
curl -v -F r=thirdparty -F hasPom=false -F e=jar -F g=<my_groupId> -F a=<my_artifactId> -F v=<my_artifactVersion> -F p=jar -F file=@module.jar -u admin:admin123 http://<my_nexusServer>:8081/nexus/service/local/repositories
As I understand it, curl performs a POST request to a Nexus service. Am I correct?
And now I'm trying to build the HTTP POST request using Groovy's HTTPBuilder.
How should I translate the curl command parameters into a Groovy HTTPBuilder request?
Found a way to do this with the Groovy HTTPBuilder, based on info from Sonatype and a few other sources.
This works with http-builder version 0.7.2 (not with earlier versions) and also needs an extra dependency: 'org.apache.httpcomponents:httpmime:4.2.1'.
The example also uses basic auth against Nexus.
import groovyx.net.http.HTTPBuilder
import groovyx.net.http.Method
import groovyx.net.http.ContentType
import groovyx.net.http.HttpResponseException
import org.apache.http.HttpRequest
import org.apache.http.HttpRequestInterceptor
import org.apache.http.entity.mime.MultipartEntity
import org.apache.http.entity.mime.content.FileBody
import org.apache.http.entity.mime.content.StringBody
import org.apache.http.protocol.HttpContext

class NexusUpload {
    def uploadArtifact(Map artifact, File fileToUpload, String user, String password) {
        def path = "/service/local/artifact/maven/content"
        HTTPBuilder http = new HTTPBuilder("http://my-nexus.org/")
        String basicAuthString = "Basic " + "$user:$password".bytes.encodeBase64().toString()

        http.client.addRequestInterceptor(new HttpRequestInterceptor() {
            void process(HttpRequest httpRequest, HttpContext httpContext) {
                httpRequest.addHeader('Authorization', basicAuthString)
            }
        })

        try {
            http.request(Method.POST, ContentType.ANY) { req ->
                uri.path = path

                MultipartEntity entity = new MultipartEntity()
                entity.addPart("hasPom", new StringBody("false"))
                entity.addPart("file", new FileBody(fileToUpload))
                entity.addPart("a", new StringBody("my-artifact-id"))
                entity.addPart("g", new StringBody("my-group-id"))
                entity.addPart("r", new StringBody("my-repository"))
                entity.addPart("v", new StringBody("my-version"))
                req.entity = entity

                response.success = { resp, reader ->
                    if(resp.status == 201) {
                        println "success!"
                    }
                }
            }
        } catch (HttpResponseException e) {
            e.printStackTrace()
        }
    }
}
Ivy is an open source library, so one approach would be to call its classes directly. The problem with that approach is that there are few examples of how to invoke Ivy programmatically.
Since Groovy has excellent support for generating XML, I favour the slightly dumber approach of creating the files I understand as an Ivy user.
The following example is designed to publish files into Nexus, generating both the ivy.xml and ivysettings.xml files:
import groovy.xml.NamespaceBuilder
import groovy.xml.MarkupBuilder

// Methods
// =======

def generateIvyFile(String fileName) {
    def file = new File(fileName)
    file.withWriter { writer ->
        def xml = new MarkupBuilder(writer)
        xml."ivy-module"(version:"2.0") {
            info(organisation:"org.dummy", module:"dummy")
            publications() {
                artifact(name:"dummy", type:"pom")
                artifact(name:"dummy", type:"jar")
            }
        }
    }
    return file
}

def generateSettingsFile(String fileName) {
    def file = new File(fileName)
    file.withWriter { writer ->
        def xml = new MarkupBuilder(writer)
        xml.ivysettings() {
            settings(defaultResolver:"central")
            credentials(host:"myrepo.com", realm:"Sonatype Nexus Repository Manager", username:"deployment", passwd:"deployment123")
            resolvers() {
                ibiblio(name:"central", m2compatible:true)
                ibiblio(name:"myrepo", root:"http://myrepo.com/nexus", m2compatible:true)
            }
        }
    }
    return file
}

// Main program
// ============

def ant = new AntBuilder()
def ivy = NamespaceBuilder.newInstance(ant, 'antlib:org.apache.ivy.ant')

generateSettingsFile("ivysettings.xml").deleteOnExit()
generateIvyFile("ivy.xml").deleteOnExit()

ivy.resolve()
ivy.publish(resolver:"myrepo", pubrevision:"1.0", publishivy:false) {
    artifacts(pattern:"build/poms/[artifact].[ext]")
    artifacts(pattern:"build/jars/[artifact].[ext]")
}
Notes:
More complex? Perhaps... however, if you're not generating the ivy file (because you're already using it to manage your dependencies), you can easily call the makepom task to generate the Maven POM files prior to the upload into Nexus (see the sketch after these notes).
The REST APIs for Nexus work fine. I find them a little cryptic, and of course a solution that uses them cannot support more than one repository manager (Nexus is not the only repository manager technology available).
The "deleteOnExit" File method call ensures the working files are cleaned up properly.