All,
I would like to get a list of files off of a server with the full URL intact. For example, I would like to get all the TIFFs from here:
http://hyperquad.telascience.org/naipsource/Texas/20100801/*
I can download all the .tif files with wget, but what I am looking for is just the full URL to each file, like this:
http://hyperquad.telascience.org/naipsource/Texas/20100801/naip10_1m_2597_04_2_20100430.tif
http://hyperquad.telascience.org/naipsource/Texas/20100801/naip10_1m_2597_04_3_20100424.tif
http://hyperquad.telascience.org/naipsource/Texas/20100801/naip10_1m_2597_04_4_20100430.tif
http://hyperquad.telascience.org/naipsource/Texas/20100801/naip10_1m_2597_05_1_20100430.tif
http://hyperquad.telascience.org/naipsource/Texas/20100801/naip10_1m_2597_05_2_20100430.tif
Any thoughts on how to get all these files into a list using something like curl or wget?
Adam
You'd need the server to be willing to give you a page with a listing on it. This would normally be an index.html, or you can simply request the directory itself:
http://hyperquad.telascience.org/naipsource/Texas/20100801/
It looks like you're in luck in this case, so, at the risk of upsetting the webmaster, the solution would be to use wget's recursive option. Specify a maximum recursion depth of 1 to keep it constrained to that single directory.
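A sketch of that approach, adding wget's spider mode so nothing is actually saved; the trailing pipeline just pulls the URLs out of the log output (the flag choices here are illustrative, not the only way to do it):
wget --spider --recursive --level=1 --no-parent --accept '*.tif' \
    http://hyperquad.telascience.org/naipsource/Texas/20100801/ 2>&1 \
  | grep '^--' | awk '{print $3}' | grep '\.tif$' | sort -u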
I would use the lynx text-mode web browser to get the list of links, plus the grep and awk shell tools to filter the results, like this:
lynx -dump -listonly <URL> | grep http | grep <regexp> | awk '{print $2}'
...where:
URL - is the start URL, in your case: http://hyperquad.telascience.org/naipsource/Texas/20100801/
regexp - is the regular expression that selects only files that interest you, in your case: \.tif$
Complete example command line to get links to TIF files on this SO page:
lynx -dump -listonly http://stackoverflow.com/questions/6989681/getting-a-list-of-files-on-a-web-server | grep http | grep '\.tif$' | awk '{print $2}'
...now returns:
http://hyperquad.telascience.org/naipsource/Texas/20100801/naip10_1m_2597_04_2_20100430.tif
http://hyperquad.telascience.org/naipsource/Texas/20100801/naip10_1m_2597_04_4_20100430.tif
http://hyperquad.telascience.org/naipsource/Texas/20100801/naip10_1m_2597_05_2_20100430.tif
If you wget http://hyperquad.telascience.org/naipsource/Texas/20100801/, the HTML that is returned contains the list of files. If you don't need this to be general, you could use regexes to extract the links. If you need something more robust, you can use an HTML parser (e.g. BeautifulSoup), and programmatically extract the links on the page (from the actual HTML structure).
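If the quick-regex route is enough, a rough shell sketch (it assumes the index page links to the files with plain relative href="...tif" attributes, which is an assumption about this particular server):
base='http://hyperquad.telascience.org/naipsource/Texas/20100801/'
curl -s "$base" | grep -oE 'href="[^"]*\.tif"' | sed -e 's/^href="//' -e 's/"$//' | sort -u | sed "s|^|$base|"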
WinSCP has a Find window that makes it possible to search for all files in the directories and subdirectories below a given directory on the site. Afterwards you can select all the results and copy them, which gives you the links to all the files as text. You need the username and password to connect over FTP:
https://winscp.net/eng/download.php
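If you would rather script WinSCP than click through the GUI, it also has a command-line scripting mode (winscp.com); a rough sketch with placeholder credentials and path (check the WinSCP scripting documentation for exact quoting):
winscp.com /command "open ftp://user:password@example.com/" "ls /naipsource/Texas/20100801/" "exit"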
I have a client-server system that retrieves the file names from an assigned folder on the app server, then displays thumbnails in the client.
CLIENT: (slThumbnailNames is a string list)

slThumbnailNames := funGetThumbnailNames(sThumbNailPath);
function TfMFFBClient.funGetThumbnailNames(sThumbnailPath: string): TStringList;
var
  slThisStringList: TStringList;
begin
  // Send the command, then capture the server's multi-line
  // response into a string list (the caller owns and frees it).
  slThisStringList := TStringList.Create;
  dmMFFBClient.tcpMFFBClient.SendCmd('GetThumbnailNames,' + sThumbnailPath, 700);
  dmMFFBClient.tcpMFFBClient.IOHandler.Capture(slThisStringList);
  Result := slThisStringList;
end;
=== on the server side ===
A TIdCmdTCPServer has a CommandHandler, GetThumbnailNames (a command handler is a procedure).
Hints: sMFFBServerPictures is generated in the OnCreate method of the app server; sThumbnailDir is passed to the app server from the client.

procedure TfMFFBServer.MFFBCmdTCPServercmdGetThumbnailNames(ASender: TIdCommand);
var
  sRec: TSearchRec;
  sThumbnailDir: string;
  iFindResult: Integer;
begin
  try
    ASender.Response.Clear;
    sThumbnailDir := ASender.Params[0];
    // FindFirst/FindNext return 0 for as long as there is a matching entry
    iFindResult := FindFirst(sMFFBServerPictures + sThumbnailDir + '*_t.jpg',
                             faAnyFile, sRec);
    if iFindResult = 0 then
      try
        while iFindResult = 0 do
        begin
          // only plain files go into the response, not directories
          if (sRec.Attr and faDirectory) <> faDirectory then
            ASender.Response.Add(sRec.Name);
          iFindResult := FindNext(sRec);
        end;
      finally
        FindClose(sRec);
      end
    else
      ASender.Response.Add('NO THUMBNAILS');
  except
    on e: Exception do
      MessageDlg('Error in procedure TfMFFBServer.MFFBCmdTCPServercmdGetThumbnailNames' + #13 +
                 'Error msg: ' + e.Message, mtError, [mbOK], 0);
  end;
end;
So I have this script in PowerShell which generates a CSV that is used by a Power BI template. When it is done with the CSV it is supposed to open the template and show a report with the data updated, but I still need to refresh it by hand.
Edit: Forget it. The template opens updated, but I still don't know if I can generate the report with commands.
I have thought about using a "refresh update" and uploading the report to the Power BI server, but I don't want it to be available only online.
This is my script (very basic version, since it's long):
#lots of commands to make the csv
Invoke-Expression "file"
The commands generate a clean CSV which can be used with a common import, but in R the fields appear empty - probably because I don't know how to use R properly.
Is there any way I can do what I want using this script without having to upload the report first?
Thanks for your answers!
Since nobody is going to answer me, I'll give my final solution. It's pretty bad, but I want to close this ASAP, so if anyone's having the same problem they can find a quick temporary solution.
My script to make tests:
##This creates a table which is used on a Power BI template
$num=0
$numpro=Read-Host -Prompt "Enter number of products"
echo("Name,Amount,Price") > products.txt
DO {
$num=$num + 1
$product=Read-Host -Prompt "Name of the product: "
$amount=Read-Host -Prompt "Amount we have of the product: "
$price=Read-Host -Prompt "Price for each product unit: "
echo("$product,$amount,$price") >> products.txt
}while($numpro -gt $num)
import-csv products.txt -delimiter "," | Export-csv products.csv
##clean lines
cat products.csv | where { $_ -match "#"} > delete.txt
$erase=Get-Content delete.txt
cat products.csv | %{$_ -replace "$erase",""} > def.txt
GC def.txt | where{$_ -ne ""} > products.csv
rm products.txt
rm delete.txt
rm def.txt
Invoke-Expression "full path to our .pbit"
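One note on the cleanup: Export-Csv has a -NoTypeInformation switch that stops it writing that #TYPE header line in the first place, so the delete.txt dance above can probably be dropped entirely (in recent PowerShell versions this is even the default behaviour).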
Just run the script a first time without the Invoke-Expression line to create a first table, and use that table to create the template.
After creating the template, use the full script.
Btw: you have to use absolute paths in Power BI, so don't move the file that contains the table; if you do, be sure to change the source and save the template again.
I'm new to Unix. I have a file which has network connection details. I am trying to extract only the hostname and port number from the file using a shell script. The data is like this: "(example.easyway.com=(description=(address_list=(protocol=tcp)(host=184.43.35.345)(port=1234))(connect=port))"
I have 100 lines of connection information like this. I have to extract only the host name and port and paste them into a new file. Can anyone guide me on how to do this?
There are different ways in Unix to do this, something like:
sed 's/^..\([^=]*\)=.*port=\([^)]*\).*/\1 \2/' file
I think you may not understand this yet and want something easier for now. You can try it in several steps, checking after each step:
cut -d= -f1,7 file | cut -d")" -f1 | cut -c2-
The easiest way, when you are unfamiliar with these tools, is to open the file in some editor and globally replace the string =(description=(address_list=(protocol=tcp)(host= with a space (or use regular expressions in your editor), do the same for ))(connect=port)), and then sit for 10 minutes to edit the remaining part of the 100 lines.
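Another stepwise option in the same spirit, assuming host always appears before port on each line: pull out just the two fields, strip the names, and glue each pair back together with a colon:
grep -oE '(host|port)=[^)]+' file | sed 's/^[^=]*=//' | paste -d: - -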
That looks like Oracle TNS configuration to me. Presuming that host always comes before port, this call out to Perl would do the trick:
perl -ne 'print "$1:$2\n" if(/host=([\w\.-]+).*port=(\d+)/)' < my-tns-config.txt
If the order of port and host is unpredictable, then matching the two fields independently would work:
perl -ne '($h) = /host=([\w.-]+)/; ($p) = /port=(\d+)/; print "$h:$p\n" if $h and $p' < my-tns-config.txt
Check https://regex101.com/ or https://regexper.com for an explanation of those regular expressions.
M.
I want to remove lots of temporary PS datasets with dataset names like MYTEST.**, but I still can't find an easy way to handle the task.
I meant to use a shell command like the one below to remove them:
cat "//'dataset.list'"| xargs -I '{}' tsocmd "delete '{}'"
However, first I have to save the dataset list into a PS dataset or a Unix file. In Unix we can redirect the output of the ls command into a text file ("ls MYTEST.* > dslist"), but on TSO or an ISPF panel there seems to be no simple command to do that.
Does anyone have any clue on this? Your comments would be appreciated.
The Rexx ISPF option is probably the easiest and can be reused in the future, but the options include:
Use the Save command in ISPF 3.4 to save the list to a file, then use a Rexx program on the file created by the Save command
The listcat command, in particular:
listcat lvl(MYTEST) ofile(ddname)
then write a Rexx program to do the actual delete
Alternatively you can use the ISPF services LMDINIT, LMDLIST & LMDFREE in a Rexx program running under ISPF, i.e.:
/* Rexx ISPF program to process datasets */
Address ispexec
"LMDINIT LISTID(lidv) LEVEL(MYTEST)"
"LMDLIST LISTID("lidv") OPTION(LIST) DATASET(dsvar) STATS(YES)"
do while rc = 0
  /* Delete or whatever, using the dataset name in dsvar */
  "LMDLIST LISTID("lidv") OPTION(LIST) DATASET(dsvar) STATS(YES)"
end
"LMDFREE LISTID("lidv")"
For all these methods you need to fully specify the first high-level qualifier.
Learning Rexx / ISPF will serve you into the future. In the ISPF editor, you can use the Model command to get templates / information for all the ISPF services:
Command ====> Model LMDINIT
will add a template for the LMDINIT service. There are templates for Rexx, COBOL, PL/1, ISPF panels, ISPF skeletons, messages, etc.
Thanks Bruce for the comprehensive answer. Following Bruce's tips, I worked out a one-line shell command, as below:
tsocmd "listcat lvl(MYTEST) " | grep -E "MYTEST(\..+)+" | cut -d' ' -f3 | xargs -I '{}' tsocmd "delete '{}'"
The above command works perfectly.
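Since a mass delete is unforgiving, a dry run first may be worth it; the same pipeline with the delete swapped for an echo just prints what would be removed:
tsocmd "listcat lvl(MYTEST) " | grep -E "MYTEST(\..+)+" | cut -d' ' -f3 | xargs -I '{}' echo "delete '{}'"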
Update - The IDCAMS DELETE command has had the MASK operand for a while. You use it like this:
DELETE 'MYTEST.**' MASK
Documentation for z/OS 2.1 is here.
Is there a standard way in a unixesque (sh/bash/zsh) system to execute a group of scripts as if the group of scripts were one script? (Think index.html.) The point is to avoid the additional helper scripts you usually find, and to keep small programs self-sufficient and easier to maintain.
Say I have two Ruby scripts, main and helper.rb:

/bin
/bin/foo_master
/bin/foo_master/main
/bin/foo_master/helper.rb
So now when I execute foo_master:

seo#macbook ~ $ foo_master
[/bin/foo_master/main]: Make new friends, but keep the old.
[/bin/foo_master/helper.rb]: One is silver and the other gold.
If you're trying to do this without creating a helper script, the typical way to do this would just be to execute both (note: I'll use : $; to represent the shell prompt):
: $; ./main; ./helper.rb
Now, if you're trying to capture the output of both into a file, say, then you can group these into a subshell with parentheses, and capture the output of the subshell as if it were a single command, like so:
: $; (./main; ./helper.rb) > index.html
Is this what you're after? I'm a little unclear on what your final goal is. If you want to make this a heavily repeatable thing, then one probably would want to create a wrapper command... but if you just want to run two commands as one, you can do one of the above two options, and it should work for most cases. (Feel free to expand the question, though, if I'm missing what you're after.)
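And if helper.rb should only run when main exits successfully, the same grouping works with && in place of the semicolon:
: $; (./main && ./helper.rb) > index.html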
I figured out how to do this in a semi-standards-compliant fashion.
I used the eval syntax in shell scripting to lazily evaluate $PATH at runtime. So in my /etc/.zshrc:
REALPATH=$PATH
PATH=$REALPATH:$(find_paths)
where find_paths is a function that recursively searches the $PATH directories for subfolders (pseudocode below):
(foreach path in $PATH => ls -d -- */)
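For reference, one concrete way find_paths could be implemented (an illustrative sketch; it assumes no $PATH entries contain colons or newlines):
find_paths() {
    # Print the immediate subdirectories of every directory on PATH,
    # joined with ':' just like PATH itself.
    echo "$PATH" | tr ':' '\n' | while read -r p; do
        for d in "$p"/*/; do
            [ -d "$d" ] && printf ':%s' "${d%/}"
        done
    done | cut -c2-
}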
So we go from this:
seo#macbook $ echo $PATH
/bin/:/usr/bin/
To this, automagically:
seo#macbook $ echo $PATH
/bin/:/usr/bin/:/bin/foo_master/
Now I just rename main to "foo_master" and voilà! Self-contained executable, dare I say "app".
Yep that's an easy one!
#!/bin/bash
/bin/foo_master/main
/bin/foo_master/helper.rb
Save the file as foo_master.sh and type this in the shell:
seo#macbook ~ $ sudo chmod +x foo_master.sh
Then to run type:
seo#macbook ~ $ ./foo_master.sh
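If you'd rather not list each script by hand, a slightly more general sketch (same assumed layout) runs everything executable in that directory, in name order:
#!/bin/bash
# Run every executable file in /bin/foo_master, in sorted name order.
for f in /bin/foo_master/*; do
    [ -f "$f" ] && [ -x "$f" ] && "$f"
done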
EDIT:
The reason an index.html file is served for a given directory is that the HTTP server explicitly looks for one. (In server config files you can specify the names of files to look for and serve, such as index.html, index.php, index.htm, foo.html, etc.) So it is not magical: at some point, a "helper script" is explicitly looking for files. I don't think writing a script like the one above is a step you can skip.
I'm trying to automate a check for missing routes in a Play! web application.
The routing table is in a file in the following format:
GET /home Home.index
GET /shop Shop.index
I've already managed to use my command line-fu to crawl through my code and make a list of all the actions that should be present in the file. This list is in the following format:
Home.index
Shop.index
Contact.index
About.index
Now I'd like to pipe this list into another command that checks whether each line is present in the routes file, but I'm not sure how to proceed.
The result should be something like this:
Contact.index
About.index
Does someone have a helpful suggestion on how I can accomplish this?
try this line:
awk 'NR==FNR{a[$NF];next}!($0 in a)' routes.txt list.txt
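The same awk program, spread out and commented (identical logic, just easier to read):
awk '
    NR == FNR {    # first file (routes.txt): remember the last
        a[$NF]     # field of each line, i.e. the action name
        next
    }
    !($0 in a)     # second file (list.txt): print lines whose action
                   # never appeared in routes.txt
' routes.txt list.txt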
EDIT
if you want the above line to accept the list from stdin:
cat list.txt | awk 'NR==FNR{a[$NF];next}!($0 in a)' routes.txt -
then replace cat list.txt with your magic command