Tcl - Recursive walking and FTP upload

How can I do a recursive walk through a local folder in order to upload everything inside it to the desired FTP folder? Here's what I have so far:
package require ftp
set host **
set user **
set pass **
set ftpdirectory **
set localdirectory **
proc upload {host user pass dir fileList} {
    set handle [::ftp::Open $host $user $pass]
    ftpGoToDir $handle $dir
    # some counters for our feedback string
    set j 1
    set k [llength $fileList]
    foreach i $fileList {
        upload:status "uploading ($j/$k) $i"
        ::ftp::Put $handle $i
        incr j
    }
    ::ftp::Close $handle
}
#---------------
# feedback
#---------------
proc upload:status {msg} {
    puts $msg
}
#---------------
# go to directory in server
#---------------
proc ftpGoToDir {handle path} {
    ::ftp::Cd $handle /
    foreach dir [file split $path] {
        if {![::ftp::Cd $handle $dir]} {
            ::ftp::MkDir $handle $dir
            ::ftp::Cd $handle $dir
        }
    }
}
proc watchDirChange {dir intv {script {}} {lastMTime {}}} {
    set nowMTime [file mtime $dir]
    if {$lastMTime eq ""} {
        set lastMTime $nowMTime
    } elseif {$nowMTime != $lastMTime} {
        # synchronous execution, so no other after event may fire in between
        catch {uplevel #0 $script}
        set lastMTime $nowMTime
    }
    after $intv [list watchDirChange $dir $intv $script $lastMTime]
}
watchDirChange $localdirectory 5000 {
    puts stdout "Directory $localdirectory changed!"
    upload $host $user $pass $ftpdirectory [glob -directory $localdirectory -nocomplain *]
}
vwait forever
Thanks in advance :)

You're already using the ftp package, so that means you've got tcllib installed. Good. That means in turn that you've got the fileutil package as well, and can do this:
package require fileutil
# How to do the testing; I'm assuming you only want to upload real files
proc isFile f {
    return [file isfile $f]
}
set filesToUpload [fileutil::find $dirToSearchFrom isFile]
The fileutil::find command is very much like a recursive glob, except that you specify the filter as a command instead of via options.
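To tie this together with the upload proc from the question, a minimal sketch (variable names as in your script; whether ::ftp::Put preserves the relative paths on the server is something you'd still need to handle):
package require fileutil
# Only real files, no directories
proc isFile f {
    return [file isfile $f]
}
# Recursively collect everything under the local directory and upload it
set filesToUpload [fileutil::find $localdirectory isFile]
upload $host $user $pass $ftpdirectory $filesToUpload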
You might prefer to use rsync instead though; it's not a Tcl command, but it is very good and it will minimize the amount of data actually transferred.
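For instance, something along these lines (illustrative only; rsync typically runs over SSH rather than FTP, and the remote user, host, and path here are placeholders):
# Mirror the local directory to the remote host, transferring only changes
rsync -avz "$localdirectory"/ user@host:/path/to/remote/dir/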

Related

Returning output from scriptblock from Get-Job?

I'm trying to test out asynchronous functionality in PowerShell 3.
So I figured I would query the uptimes of some servers remotely, and wrote the following script:
function getServerUptimes() {
    # Obtain credentials for logging into
    # the remote machines...
    $cred = Get-Credential
    $compsHash = @{
        "server1.domain.com" = $null;
        "server2.domain.com" = $null;
    }
    # Define an async block to run...no blocking in this script...
    $cmd = {
        param($computer, $dasCred)
        Write-Host "which: $($computer)"
        $session = New-CimSession $computer -Credential $dasCred
        return (Get-CimInstance win32_operatingsystem -CimSession $session | Select PSComputerName, LastBootUpTime);
    }
    ForEach($comp in $compsHash.Keys) {
        Write-Host "${comp}"
        # Kick off an async process to create a cimSession
        # on each of these machines...
        Start-Job -ScriptBlock $cmd -ArgumentList $comp, $cred -Name "Get$($comp)"
    }
    $results = Get-Job | Wait-Job | Receive-Job
    # Retrieve the job, so we can look at the output
    ForEach($comp in $compsHash.Keys) {
        $dasJob = Get-Job -Name "Get$($comp)"
        Write-Host $dasJob.Output
    }
}
However, I don't really seem to get back any output in the resulting $dasJob object. I returned the value in my scriptblock; where is it going?
You already have the output (not including the Write-Host output, of course) in the variable $results. In PowerShell you retrieve job output via Receive-Job; the job's Output property seems to always be empty (not sure why).
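If you do want to read output per job, a hedged sketch (Receive-Job drains a job's buffered output unless you pass -Keep, so receive each named job instead of poking at .Output):
ForEach($comp in $compsHash.Keys) {
    $dasJob = Get-Job -Name "Get$($comp)"
    # -Keep leaves the output buffered so it can be read again later
    Receive-Job -Job $dasJob -Keep | Write-Host
}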

Powershell Scheduled Task Open Web Page to Execute Script

In an effort to set up a "cron job"-like scheduled task on Windows, I've set up a PowerShell script using code recommended in a previous Stack Overflow question.
I have some backups that I need to clean up daily by deleting old backups, so I created an ASP.NET script, BackupCleanup.aspx, to perform this task. I have confirmed that the ASP.NET script works when executed on its own by visiting its URL; I however cannot get it to execute using the PowerShell script below.
The PowerShell script code I'm using is:
$request = [System.Net.WebRequest]::Create("http://127.0.0.1/BackupCleanup.aspx")
$response = $request.GetResponse()
$response.Close()
I have created this file with a .ps1 extension, and it shows properly in my OS (Windows 2008). I have tried both manually executing it by right-clicking and choosing "Run with PowerShell", and scheduling it as a task; both to no avail.
I cannot figure out why the script does not work; any help would be GREATLY appreciated.
Here is the PowerShell script I use to call up web pages using IE. Hopefully this will work for you as well.
Function NavigateTo([string] $url, [int] $delayTime = 100)
{
    Write-Verbose "Navigating to $url"
    $global:ie.Navigate($url)
    WaitForPage $delayTime
}

Function WaitForPage([int] $delayTime = 100)
{
    $loaded = $false
    while ($loaded -eq $false) {
        [System.Threading.Thread]::Sleep($delayTime)
        # If the browser is not busy, the page is loaded
        if (-not $global:ie.Busy)
        {
            $loaded = $true
        }
    }
    $global:doc = $global:ie.Document
}

Function SetElementValueByName($name, $value, [int] $position = 0) {
    if ($global:doc -eq $null) {
        Write-Error "Document is null"
        break
    }
    $elements = @($global:doc.getElementsByName($name))
    if ($elements.Count -ne 0) {
        $elements[$position].Value = $value
    }
    else {
        Write-Warning "Couldn't find any element with name ""$name"""
    }
}

Function ClickElementById($id)
{
    $element = $global:doc.getElementById($id)
    if ($element -ne $null) {
        $element.Click()
        WaitForPage
    }
    else {
        Write-Error "Couldn't find element with id ""$id"""
        break
    }
}

Function ClickElementByName($name, [int] $position = 0)
{
    if ($global:doc -eq $null) {
        Write-Error "Document is null"
        break
    }
    $elements = @($global:doc.getElementsByName($name))
    if ($elements.Count -ne 0) {
        $elements[$position].Click()
        WaitForPage
    }
    else {
        Write-Error "Couldn't find element with name ""$name"" at position ""$position"""
        break
    }
}

Function ClickElementByTagName($name, [int] $position = 0)
{
    if ($global:doc -eq $null) {
        Write-Error "Document is null"
        break
    }
    $elements = @($global:doc.getElementsByTagName($name))
    if ($elements.Count -ne 0) {
        $elements[$position].Click()
        WaitForPage
    }
    else {
        Write-Error "Couldn't find element with tag name ""$name"" at position ""$position"""
        break
    }
}

# Entry point
# Setup references to IE
$global:ie = New-Object -com "InternetExplorer.Application"
$global:ie.Navigate("about:blank")
$global:ie.visible = $true
# Call the page
NavigateTo "http://127.0.0.1/BackupCleanup.aspx"
# Release resources
$global:ie.Quit()
$global:ie = $null
I had the same issue. I manually opened PowerShell and executed my script, and I received "WebPage.ps1 cannot be loaded because running scripts is disabled on this system.".
You have to allow scripts to run. Execute the below in PowerShell:
Set-ExecutionPolicy RemoteSigned -Scope LocalMachine
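Alternatively (an aside, not part of the original answer), you can leave the machine-wide policy alone and bypass it just for the scheduled task; the script path below is illustrative:
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\BackupCleanup.ps1"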

tcl deep recursive file search, search for files with *.c extension

Using an old answer to search for a file in tcl:
https://stackoverflow.com/a/435094/984975
First, let's discuss what I am doing right now, using this function (credit to Jacson):
# findFiles
# basedir - the directory to start looking in
# pattern - A pattern, as defined by the glob command, that the files must match
proc findFiles { basedir pattern } {
    # Fix the directory name: this ensures the directory name is in the
    # native format for the platform and contains a final directory separator
    set basedir [string trimright [file join [file normalize $basedir] { }]]
    set fileList {}
    # Look in the current directory for matching files; -type {f r}
    # means only readable normal files are looked at, and -nocomplain stops
    # an error being thrown if the returned list is empty
    foreach fileName [glob -nocomplain -type {f r} -path $basedir $pattern] {
        lappend fileList $fileName
    }
    # Now look for any subdirectories in the current directory
    foreach dirName [glob -nocomplain -type {d r} -path $basedir *] {
        # Recursively call the routine on the subdirectory and append any
        # new files to the results
        set subDirList [findFiles $dirName $pattern]
        if { [llength $subDirList] > 0 } {
            foreach subDirFile $subDirList {
                lappend fileList $subDirFile
            }
        }
    }
    return $fileList
}
And calling the following command:
findFiles some_dir_name *.c
The current result:
bad option "normalize": must be atime, attributes, channels, copy, delete, dirname, executable, exists, extension, isdirectory, isfile, join, lstat, mtime, mkdir, nativename, owned, pathtype, readable, readlink, rename, rootname, size, split, stat, tail, type, volumes, or writable
Now, if we run:
glob *.c
We get a lot of files but all of them in the current directory.
The goal is to get ALL the files in ALL sub-folders on the machine with their paths.
Anyone who could help?
What I really want to do is find the directory with the highest count of *.c files.
However, if I could list all the files and their paths, I could count, how many files are in each directory and get the one with the highest count.
You are using an old version of Tcl. [file normalize] was introduced in Tcl 8.4 around 2002 or so. Upgrade already.
If you can't - then you use glob but call it once just for files and then walk the directories. See the glob -types option.
Here's a demo:
proc on_visit {path} {
    puts $path
}

proc visit {base glob func} {
    foreach f [glob -nocomplain -types f -directory $base $glob] {
        if {[catch {eval $func [list [file join $base $f]]} err]} {
            puts stderr "error: $err"
        }
    }
    foreach d [glob -nocomplain -types d -directory $base *] {
        visit [file join $base $d] $glob $func
    }
}

proc main {base} {
    visit $base *.c [list on_visit]
}

main [lindex $argv 0]
I would use the ::fileutil::traverse function to do it.
Something like:
package require fileutil::traverse

proc check_path {path} {
    string equal [file extension $path] ".c"
}

# %AUTO% generates an object name; the base directory to walk is required
set obj [::fileutil::traverse %AUTO% $basedir -filter check_path]
array set paths {}
$obj foreach file {
    if {[info exists paths([file dirname $file])]} {
        incr paths([file dirname $file])
    } else {
        set paths([file dirname $file]) 1
    }
}
# print the per-directory counts
foreach {name value} [array get paths] {
    puts "$name : $value"
}
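And to get at what the question actually asks for, the directory with the highest count, a small follow-on sketch over the same array:
set bestDir ""
set bestCount -1
foreach {name value} [array get paths] {
    if {$value > $bestCount} {
        set bestDir $name
        set bestCount $value
    }
}
puts "most .c files: $bestDir ($bestCount)"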
For quick (1 level) file pattern matching use:
glob **/*.c
If you want to recursively search, then use:
proc ::findFiles { baseDir pattern } {
    set dirs [glob -nocomplain -type d [file join $baseDir *]]
    set files {}
    foreach dir $dirs {
        lappend files {*}[findFiles $dir $pattern]
    }
    lappend files {*}[glob -nocomplain -type f [file join $baseDir $pattern]]
    return $files
}
puts [join [findFiles $basepath "*.tcl"] \n]

Install WordPress using bash shell without visiting /wp-admin/install.php?

I wrote this little Bash script that creates a folder, unzips WordPress, and creates a database for a site.
The final step is actually installing Wordpress, which usually involves pointing your browser to install.php and filling out a form in the GUI.
I want to do this from the BASH shell, but can't figure out how to invoke wp_install() and pass it the parameters it needs:
-admin_email
-admin_password
-weblog_title
-user_name
(line 85 in install.php)
Here's a similar question, but in python
#!/bin/bash
#ask for the site name
echo "Site Name:"
read name
# make site directory under splogs
mkdir /var/www/splogs/$name
dirname="/var/www/splogs/$name"
#import wordpress from dropbox
cp -r ~/Dropbox/Web/Resources/Wordpress/Core $dirname
cd $dirname
#unwrap the double wrap
mv Core/* ./
rm -r Core
mv wp-config-sample.php wp-config.php
sed -i 's/database_name_here/'$name'/g' ./wp-config.php
sed -i 's/username_here/root/g' ./wp-config.php
sed -i 's/password_here/mypassword/g' ./wp-config.php
cp -r ~/Dropbox/Web/Resources/Wordpress/Themes/responsive $dirname/wp-content/t$
cd $dirname
CMD="create database $name"
mysql -uroot -pmypass -e "$CMD"
How do I alter the script to automatically run the installer without the need to open a browser?
Check out wp-cli, based on Drush for Drupal.
wp core install --url=url --title=site-title [--admin_name=username] --admin_email=email --admin_password=password
All commands:
wp core [download|config|install|install_network|version|update|update_db]
wp db [create|drop|optimize|repair|connect|cli|query|export|import]
wp eval-file
wp eval
wp export [validate_arguments]
wp generate [posts|users]
wp home
wp option [add|update|delete|get]
wp plugin [activate|deactivate|toggle|path|update|uninstall|delete|status|install]
wp post-meta [get|delete|add|update]
wp post [create|update|delete]
wp theme [activate|path|delete|status|install|update]
wp transient [get|set|delete|type]
wp user-meta [get|delete|add|update]
wp user [list|delete|create|update]
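Putting those together, a hedged sketch of a fully scripted install (the URL, credentials, and database values are placeholders; flag names follow the wp-cli syntax listed above and may differ between versions):
cd /var/www/splogs/$name
wp core download
wp core config --dbname=$name --dbuser=root --dbpass=mypassword
wp db create
wp core install --url=http://example.com/$name --title="$name" --admin_name=admin --admin_email=admin@example.com --admin_password=secret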
I was having the same problem as you. I tried Victor's method and it didn't quite work; I made a few edits and it works now!
You have to add php tags inside of the script to make the code work, otherwise it just echoes to the terminal.
My script directly calls the wp_install function of upgrade.php, bypassing install.php completely (no edits to other files required).
I made my script named script.sh, made it executable, dropped it in the wp-admin directory, and ran it from the terminal.
#!/usr/bin/php
<?php
function get_args()
{
    $args = array();
    for ($i = 1; $i < count($_SERVER['argv']); $i++)
    {
        $arg = $_SERVER['argv'][$i];
        // single-dash options: each letter is a key, optionally followed by a value
        if ($arg[0] == '-' && $arg[1] != '-')
        {
            for ($j = 1; $j < strlen($arg); $j++)
            {
                $key = $arg[$j];
                $value = $_SERVER['argv'][$i + 1][0] != '-' ? preg_replace(array('/^["\']/', '/["\']$/'), '', $_SERVER['argv'][++$i]) : true;
                $args[$key] = $value;
            }
        }
        else
            $args[] = $arg;
    }
    return $args;
}
// read commandline arguments
$opt = get_args();
define( 'WP_INSTALLING', true );
/** Load WordPress Bootstrap */
require_once( dirname( dirname( __FILE__ ) ) . '/wp-load.php' );
/** Load WordPress Administration Upgrade API */
require_once( dirname( __FILE__ ) . '/includes/upgrade.php' );
/** Load wpdb */
require_once(dirname(dirname(__FILE__)) . '/wp-includes/wp-db.php');
$result = wp_install($opt[0], $opt[1], $opt[2], false, '', $opt[3]);
?>
I called the file like this: # ./script.sh SiteName UserName email@address.com Password
Maybe you need to modify the original WordPress installer a bit.
First, create a wrapper php CLI script, let's say its name is wrapper.sh:
#!/usr/bin/php -qC
function get_args()
{
    $args = array();
    for ($i = 1; $i < count($_SERVER['argv']); $i++)
    {
        $arg = $_SERVER['argv'][$i];
        // single-dash options: each letter is a key, optionally followed by a value
        if ($arg[0] == '-' && $arg[1] != '-')
        {
            for ($j = 1; $j < strlen($arg); $j++)
            {
                $key = $arg[$j];
                $value = $_SERVER['argv'][$i + 1][0] != '-' ? preg_replace(array('/^["\']/', '/["\']$/'), '', $_SERVER['argv'][++$i]) : true;
                $args[$key] = $value;
            }
        }
        else
            $args[] = $arg;
    }
    return $args;
}
// read commandline arguments
$opt = get_args();
require "install.php";
This will allow you to invoke the script from the command line, and pass arguments to it directly into the $opt numeric array.
You can then pass the needed vars in a strict order you define, for instance:
./wrapper.sh <admin_email> <admin_password> <weblog_title> <user_name>
In install.php you need to change the definition of the aforementioned vars, as follows:
global $opt;
$admin_email = $opt[0];
$admin_password = $opt[1];
$weblog_title = $opt[2];
$user_name = $opt[3];
Then let the install script do its job.
This is an untested method, and also very open to any modifications you need. It's mainly a guideline for using a wrapper PHP/CLI script to define the needed variables without having to send them via an HTTP request / query string. Maybe it's rather a weird way to get things done, so please feel free to give any constructive/destructive feedback :-)
It's incredible how little discussion there is on this topic. I think it's awesome that WP-CLI was released and now acquired by Automattic, which should help to keep the project going long-term.
But relying on another dependency is not ideal, especially when dealing with automated deployment...
This is what we came up with for SlickStack...
First, we save a MySQL "test" query and grep for e.g. wp_options as variables:
QUERY_PRODUCTION_WP_OPTIONS_EXIST=$(mysql --execute "SHOW TABLES FROM ${DB_NAME} WHERE Tables_in_${DB_NAME} LIKE '${DB_PREFIX}options';")
GREP_WP_OPTIONS_STRING_PRODUCTION=$(echo "${QUERY_PRODUCTION_WP_OPTIONS_EXIST}" | grep --no-messages "${DB_PREFIX}"options)
...doing it this way helps to avoid false positives, such as when queries or grep spit out warnings.
And the if statement that will populate the WordPress database conditionally:
## populate database if wp_options not exists ##
if [[ -z "${GREP_WP_OPTIONS_STRING_PRODUCTION}" ]]; then
    /usr/bin/php -qCr "include '/var/www/html/wp-admin/install.php'; wp_install('SlickStack', '\"${SFTP_USER}\"', '\"${SFTP_USER}\"@\"${SITE_DOMAIN_EXCLUDING_WWW}\"', 1, '', \"${SFTP_PASSWORD}\");"
fi
The -q keeps it quiet to avoid parsing conflicts, and the -r tells it to execute the following code. I'm pretty sure we don't really need the -C flag here, but I added it anyway just in case.
Note: I had to play around with the if statement a few times, because the wp_install argument list is sensitive; I found that wrapping the password variable in single quotes resulted in a broken MD5 hash, so if you hit any issues try adding/removing quotation marks...

Exporting ODBC System DSNs from a windows 2003 machine?

Is there a way to export all the ODBC System DSNs from a Windows 2003 machine?
System DSN information is stored under the HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI registry key. You could export that key to a .reg file and import on another machine.
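For example (a sketch using the stock reg.exe; the backup path is illustrative):
reg export "HKLM\SOFTWARE\ODBC\ODBC.INI" C:\backup\odbc.reg
rem ...copy the file to the other machine, then import it there:
reg import C:\backup\odbc.reg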
UPDATE:
You can also do it programmatically. Here are a few examples:
http://www.codeproject.com/KB/database/DSNAdmin.aspx
http://support.microsoft.com/kb/110507
http://blogs.technet.com/b/heyscriptingguy/archive/2004/11/10/can-i-create-and-delete-a-dsn-using-a-script.aspx
I have just done this myself with a very simple bat script. For the 32-bit ODBC sources:
regedit /e c:\backup\odbc32.reg "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBC.INI"
and for the 64-bit sources, or if you are on a 32-bit operating system:
regedit /e c:\backup\odbc.reg "HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI"
This backs up all of the DSNs; you could then pare the file down to just the DSNs you want.
System DSNs are stored in the Windows registry under the HKLM\Software\ODBC\ODBC.INI node.
So if you export this node to a *.reg file and run that file on a target machine, it should work.
The only thing is that the reg file will contain some file paths which may be computer-specific, e.g.
c:\WINNT\System32\bla-bla-bla.dll includes the WINNT folder, which on the target machine may be called WINDOWS. So you will need to spend a bit of time making sure all paths in the *.reg file are correct for the target machine where you finally import it.
I wrote some Powershell functions for copying ODBC connections from one computer to another, they are posted (and kept updated) at:
http://powershell.com/cs/media/p/32510.aspx
# Usage:
# $srcConfig = Get-OdbcConfig srcComputerName
# Import-OdbcConfig trgComputerName $srcConfig
# Only returns data when setting values
function Get-OdbcConfig {
    param( $srcName )
    if ( Test-Connection $srcName -Count 1 -Quiet ) {
        # cycle through the odbc and odbc32 keys
        $keys = "SOFTWARE\ODBC\ODBC.INI", "SOFTWARE\Wow6432Node\ODBC\ODBC.INI"
        foreach ( $key in $keys ) {
            # open remote registry
            $type = [Microsoft.Win32.RegistryHive]::LocalMachine
            $srcReg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey( $type, $srcName )
            $OdbcKey = $srcReg.OpenSubKey( $key )
            # read through each key
            foreach ( $oDrvr in $OdbcKey.GetSubKeyNames() ) {
                # form the key path
                $sKey = $key + "\" + $oDrvr
                $oDrvrKey = $srcReg.OpenSubKey( $sKey )
                # cycle through each value; capture the key path, name, value and type
                foreach ( $oDrvrVal in $oDrvrKey.GetValueNames() ) {
                    $regObj = New-Object psobject -Property @{
                        Path  = $sKey
                        Name  = $oDrvrVal
                        Value = $oDrvrKey.GetValue( $oDrvrVal )
                        Type  = $oDrvrKey.GetValueKind( $oDrvrVal )
                    }
                    # dump each to the console
                    $regObj
                }
            }
        }
    }
    # can't ping
    else { Write-Host "$srcName offline" }
}
function Import-OdbcConfig {
    param( $trgName, $srcConfig )
    if ( Test-Connection $trgName -Count 1 -Quiet ) {
        # open remote registry
        $type = [Microsoft.Win32.RegistryHive]::LocalMachine
        $trgReg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey( $type, $trgName )
        # sort out the key paths and cycle through each
        $paths = $srcConfig | select -Unique Path
        foreach ( $key in $paths ) {
            # check for the key and create it if it's not there
            if ( ! $trgReg.OpenSubKey( $key.Path ) ) { $writeKey = $trgReg.CreateSubKey( $key.Path ) }
            # open the path for writing ($true)
            $trgKey = $trgReg.OpenSubKey( $key.Path, $true )
            # cycle through each value, check to see if it exists, create it if it doesn't
            foreach ( $oDrvr in $srcConfig | where { $_.Path -eq $key.Path } ) {
                if ( ! $trgKey.GetValue( $oDrvr.Name ) ) {
                    $oType = $oDrvr.Type
                    $writeValue = $trgKey.SetValue( $oDrvr.Name, $oDrvr.Value, [Microsoft.Win32.RegistryValueKind]::$oType )
                    $objObj = New-Object psobject -Property @{
                        Path  = $oDrvr.Path
                        Name  = $oDrvr.Name
                        Value = $trgKey.GetValue( $oDrvr.Name )
                        Type  = $trgKey.GetValueKind( $oDrvr.Name )
                    }
                }
                $objObj
            }
        }
    }
    # can't ping
    else { Write-Host "$trgName offline" }
}
Using these functions together you can copy all of one computer's ODBC connections to another:
$srcConfig = Get-OdbcConfig srcComputerName
Import-OdbcConfig trgComputerName $srcConfig
It's possible to include only your favorite ODBC connection by filtering on the path:
Import-OdbcConfig trgComputerName ( $srcConfig | where { $_.Path -eq "SOFTWARE\ODBC\ODBC.INI\GoodDatabase" } )
Or filtering out ODBC connections you don't like:
Import-OdbcConfig trgComputerName ( $srcConfig | where { $_.Path -ne "SOFTWARE\ODBC\ODBC.INI\DatabaseIHate" } )
Import-OdbcConfig only returns data when setting values or when it can't ping the target; if there's nothing to create it won't say anything.
If you can't find the registrations there, then depending on whether they are User DSNs or System DSNs, they may very well be in:
HKEY_USERS\<User SID>\Software\ODBC\ODBC.INI
(where <User SID> is the user's SID, a long number; there is one such subtree per user).
