Using eclim with cygwin's vim - eclim

Fellow Eclim fans, I have been relegated to Windows, with cygwin as my only memory of a real operating system. Windows 7 is admittedly better than its predecessors, but I'm a pretty die-hard *nix fan. Anyway, I'm stuck. If anyone has any ideas, I'd be glad to hear them!
$ uname -a
CYGWIN_NT-6.1 AAXA22A492 1.7.32(0.274/5/3) 2014-08-13 23:06 x86_64 Cygwin
$ vim --version | head -3
VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Aug 25 2014 19:00:15)
Included patches: 1-417
Compiled by <cygwin@cygwin.com>
$ cat .vimrc
" required for eclime (and general sanity):
set nocompatible
filetype plugin indent on
syntax on
set tabstop=3 shiftwidth=3 expandtab
set ic
$ tree -L 2 .vim
.vim
|-- eclim
| |-- autoload
| |-- bin
| |-- compiler
| |-- dict
| |-- doc
| |-- ftplugin
| |-- indent
| |-- plugin
| `-- syntax
`-- plugin
`-- eclim.vim
11 directories, 1 file
$ vim
Error detected while processing function <SNR>8_Init..eclim#LoadVimSettings..ecl
im#UserHome..eclim#cygwin#WindowsHome..<SNR>10_Cygpath..eclim#util#System..eclim
#util#EchoTrace:
line 7:
E121: Undefined variable: g:EclimHighlightTrace
E116: Invalid arguments for function <SNR>11_EchoLevel
Error detected while processing function <SNR>8_Init..eclim#LoadVimSettings..ecl
im#UserHome..eclim#cygwin#WindowsHome..<SNR>10_Cygpath:
line 6:
E171: Missing :endif
Error detected while processing function <SNR>8_Init..eclim#LoadVimSettings..ecl
im#UserHome..eclim#cygwin#WindowsHome:
line 2:
E171: Missing :endif
Error detected while processing function <SNR>8_Init..eclim#LoadVimSettings..ecl
im#UserHome:
line 3:
E171: Missing :endif
Press ENTER or type command to continue

Eric solved this problem on this thread -- solution copied below.
Thanks Eric!
--- a/org.eclim.core/vim/eclim/autoload/eclim.vim
+++ b/org.eclim.core/vim/eclim/autoload/eclim.vim
@@ -352,6 +350,11 @@ function! eclim#LoadVimSettings() " {{{
endfunction " }}}
function! eclim#LoadVimSettings() " {{{
+ if !exists('g:EclimLogLevel')
+ let g:EclimLogLevel = 'info'
+ let g:EclimHighlightTrace = 'Normal'
+ endif
+
let settings_file = eclim#UserHome() . '/.eclim/.eclim_settings'
if filereadable(settings_file)
let lines = readfile(settings_file)
--- a/org.eclim.core/vim/eclim/autoload/eclim/client/nailgun.vim
+++ b/org.eclim.core/vim/eclim/autoload/eclim/client/nailgun.vim
@@ -153,7 +153,7 @@ function! eclim#client#nailgun#GetEclimCommand(home) " {{{
if has('win32unix')
" in cygwin, we must use 'cmd /c' to prevent issues with eclim script +
" some arg containing spaces causing a failure to invoke the script.
- return 'cmd /c "' . eclim#cygwin#WindowsPath(command) . '"'
+ return [0, 'cmd /c "' . eclim#cygwin#WindowsPath(command) . '"']
endif
return [0, '"' . command . '"']
endfunction " }}}
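To apply Eric's fix to an installed eclim, something like the following should work (a sketch: the patch file name is hypothetical, and -p4 assumes the layout shown in the tree above, with the vim files under ~/.vim/eclim):
# save the diff above as eclim-cygwin.patch (hypothetical name), then:
cd ~/.vim/eclim
patch -p4 --dry-run < ~/eclim-cygwin.patch   # preview; -p4 strips a/org.eclim.core/vim/eclim/
patch -p4 < ~/eclim-cygwin.patch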

Related

How to catch "$variable is not defined" in jq?

Let's pretend I'm running something like this:
jq -nr --arg target /tmp \
'(["echo","Hello, world"]|#sh)+">\($target)/sample.txt"' \
| sh
Everything is fine unless I forgot to pass variable $target:
$ jq -nr '(["echo","Hello, world"]|#sh)+">\($target)/sample.txt"'
jq: error: $target is not defined at <top-level>, line 1:
(["echo","Hello, world"]|#sh)+">\($target)/sample.txt"
jq: 1 compile error
How can I catch this and use a default value?
I've tried:
$target?
($target)?
try $target catch null
$target? // null
But it seems to be a parse-time error, which obviously can't be caught at runtime. Have I missed any dynamic syntax?
I've found that command-line arguments are available in $ARGS.named, but there are two drawbacks:
This was introduced in version 1.6, but I have 1.5 on CentOS 7.
It doesn't catch locally defined variables.
Assuming you need to do something more useful with jq than writing 'Hello, world' to a text file, I propose the following.
Maybe we can learn some programming tips from Jesus:
"Give to Caesar what belongs to Caesar, and give to God what belongs to God"
Suppose that Caesar is the bash shell and God is jq: bash is appropriate for working with and testing the existence of files, directories and environment variables, while jq is appropriate for processing information in JSON format.
#!/bin/bash
dest_folder=$1
# if param1 is not given, then the default is /tmp:
if [ -z "$dest_folder" ]; then dest_folder=/tmp ; fi
echo "destination folder: $dest_folder"
# check if destination folder exists
if [ ! -d "$dest_folder" ]
then
    echo "_err_ folder not found"
    exit 1
fi
jq -nr --arg target "$dest_folder" '(["echo","Hello, world"]|@sh)+">\($target)/sample.txt"' | sh
# if the file is successfully created, return 0, if not return 1
if [ -e "$dest_folder/sample.txt" ]
then
    echo "_suc_ file was created ok"
    exit 0
else
    echo "_err_ when creating file"
    exit 1
fi
Now you can include this script as a step in a more complex batch job, because it follows the Linux convention of returning 0 on success.
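For reference, on jq 1.6 or newer the $ARGS.named object mentioned in the question offers a pure-jq alternative (a sketch; as noted, it does not help with locally defined variables):
# a missing --arg leaves $ARGS.named["target"] as null, so // can supply a default
jq -nr \
  '($ARGS.named["target"] // "/tmp") as $target
   | (["echo","Hello, world"]|@sh)+">\($target)/sample.txt"' \
  | sh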

warning: here-document at line 4 delimited by end-of-file (wanted `limit')

I did try, but I was not able to rectify it.
opal#opal-Inspiron-15-3567:~/PRABHAT/unix$ bash valcode.sh
valcode.sh: line 5: unexpected EOF while looking for matching ``'
valcode.sh: line 19: syntax error: unexpected end of file
IFS="|"
while echo "Enter deparment code:" ; do
read dcode
set -- `grep "^$dcode" <<-limit
01|accounts|6123
02 | admin | 5423
03 | marketing |6521
04 | personnel |2365
05 | production | 9876
06 | sales | 1006
limit'
case $# in
3) echo "deparment name : $2\nEmp-id of head of dept :$3\n"
shift 3 ;;
*) echo "Invalid code" ; continue
esac
done
The output is not coming out as desired.
On line 4 you write `grep but that opening backtick is unmatched. Backticks always come in pairs, so the interpreter keeps looking for the match; eventually it reaches the end of the file without finding it and gives up.
Adding the matching backtick (most likely in place of the stray ' after the here-document terminator limit) should solve this problem.
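If the goal is simply to make the lookup work, here is a sketch of a corrected version that avoids backticks entirely by using $( ... ), assuming the intent is to print the department name and the emp-id of its head:
#!/bin/bash
IFS="|"
while true; do
    echo "Enter department code:"
    read dcode
    # $( ... ) nests cleanly, so there is no backtick to mismatch
    set -- $(grep "^$dcode" <<limit
01|accounts|6123
02|admin|5423
03|marketing|6521
04|personnel|2365
05|production|9876
06|sales|1006
limit
    )
    case $# in
    3) printf "department name : %s\nEmp-id of head of dept : %s\n" "$2" "$3" ;;
    *) echo "Invalid code" ;;
    esac
done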

How can I merge PDF files (or PS if not possible) such that every file will begin on an odd page?

I am working on a UNIX system and I'd like to merge thousands of PDF files into one file in order to print it. I don't know in advance how many pages they have.
I'd like to print it double sided, such that two files never share the same sheet of paper.
Therefore I'd like the merged file to be aligned such that every file begins on an odd page, with a blank page added whenever the next page to be written would be an even one.
Here's the solution I use (it's based on @Dingo's basic principle, but uses an easier approach for the PDF manipulation):
Create PDF file with a single blank page
First, create a PDF file with a single blank page somewhere (in my case, it is located at /path/to/blank.pdf). This command should work (from this thread):
touch blank.ps && ps2pdf blank.ps blank.pdf
Run Bash script
Then, from the directory that contains all my PDF files, I run a little script that appends the blank.pdf file to each PDF file with an odd number of pages:
#!/bin/bash
for f in *.pdf; do
    let npages=$(pdfinfo "$f"|grep 'Pages:'|awk '{print $2}')
    let modulo="($npages %2)"
    if [ $modulo -eq 1 ]; then
        pdftk "$f" "/path/to/blank.pdf" output "aligned_$f"
        # or
        # pdfunite "$f" "/path/to/blank.pdf" "aligned_$f"
    else
        cp "$f" "aligned_$f"
    fi
done
Combine the results
Now, all aligned_-prefixed files have even page numbers, and I can join them using
pdftk aligned_*.pdf output result.pdf
# or
pdfunite aligned_*.pdf result.pdf
Tool info:
ps2pdf is in the ghostscript package in most Linux distros
pdfinfo, pdfunite are from the Poppler PDF rendering library (usually the package name is poppler-utils or poppler_utils)
pdftk is usually its own package, the pdftk package
Your problem can be solved more easily if you look at it from another point of view.
To ensure that, when printing, page 1 of the second PDF file does not land on the same sheet of paper as the last page of the first PDF file, and, more generally, that the first page of each subsequent PDF file is not printed on the back of the sheet carrying the last page of the preceding file,
you need to selectively add one blank page, but only to the PDF files that have an odd number of pages.
I wrote a simple script named addblankifneeded that you can put in a file and then copy into /usr/bin or /usr/local/bin
and then invoke in the folder where you have your PDFs with this syntax:
for f in *.pdf; do addblankifneeded $f; done
This script adds a blank page at the end of each PDF file that has an odd number of pages, skips PDF files that already have an even number of pages, and then joins all PDFs together into one.
Requirements: pdftk, pdfinfo
NOTE: depending on your environment, you may need to replace the sh interpreter with bash in the first line of the script.
#!/bin/sh
#script to automatically add a blank page at the end of PDF documents whose page count is not a multiple of 2, and then join all PDFs into one
#
# made by Dingo
#
# dokupuppylinux.co.cc
#
#http://pastebin.com/u/dingodog (my pastebin toolbox for pdf scripts)
#
filename=$1
altxlarg="`pdfinfo -box $filename| grep MediaBox | cut -d : -f2 | awk '{print $3 FS $4}'`"
echo "%PDF-1.4
%µí®û
3 0 obj
<<
/Length 0
>>
stream
endstream
endobj
4 0 obj
<<
/ProcSet [/PDF ]
/ExtGState <<
/GS1 1 0 R
>>
>>
endobj
5 0 obj
<<
/Type /Halftone
/HalftoneType 1
/HalftoneName (Default)
/Frequency 60
/Angle 45
/SpotFunction /Round
>>
endobj
1 0 obj
<<
/Type /ExtGState
/SA false
/OP false
/HT /Default
>>
endobj
2 0 obj
<<
/Type /Page
/Parent 7 0 R
/Resources 4 0 R
/Contents 3 0 R
>>
endobj
7 0 obj
<<
/Type /Pages
/Kids [2 0 R ]
/Count 1
/MediaBox [0 0 595 841]
>>
endobj
6 0 obj
<<
/Type /Catalog
/Pages 7 0 R
>>
endobj
8 0 obj
<<
/CreationDate (D:20110915222508)
/Producer (libgnomeprint Ver: 2.12.1)
>>
endobj
xref
0 9
0000000000 65535 f
0000000278 00000 n
0000000357 00000 n
0000000017 00000 n
0000000072 00000 n
0000000146 00000 n
0000000535 00000 n
0000000445 00000 n
0000000590 00000 n
trailer
<<
/Size 9
/Root 6 0 R
/Info 8 0 R
>>
startxref
688
%%EOF" | sed -e "s/595 841/$altxlarg/g">blank.pdf
pdftk blank.pdf output fixed.pdf
mv fixed.pdf blank.pdf
pages="`pdftk $filename dump_data | grep NumberOfPages | cut -d : -f2`"
if [ $(( $pages % 2 )) -eq 0 ]
then echo "$filename has already a multiple of 2 pages ($pages ). Script will be skipped for this file" >>report.txt
else
pdftk A=$filename B=blank.pdf cat A B output blankadded.pdf
mv blankadded.pdf $filename
pdffiles=`ls *.pdf | grep -v -e blank.pdf -e joinedtogether.pdf| xargs -n 1`; pdftk $pdffiles cat output joinedtogether.pdf
fi
exit 0
You can use PDFsam:
gratis
runs on Microsoft Windows, Mac OS X and Linux
portable version available (at least on Windows)
can add a blank page after each merged document if the document has an odd number of pages
Disclaimer: I'm the author of the tools I'm mentioning here.
sejda-console
It's a free and open source command line interface for performing pdf manipulations such as merge or split. The merge command has an option stating:
[--addBlanks] : add a blank page after each merged document if the number of pages is odd (optional)
Since you just need to print the pdf, I'm assuming you don't care about the order in which your documents are merged. This is the command you can use:
sejda-console merge -d /path/to/pdfs_to_merge -o /outputpath/merged_file.pdf --addBlanks
It can be downloaded from the official website sejda.org.
sejda.com
This is a web application backed by Sejda and has the same functionalities mentioned above but through a web interface. You are required to upload your files so, depending on the size of your input set, it might not be the right solution for you.
If you select the merge command and upload your pdf documents you will have to flag the checkbox Add blank page if odd page number to get the desired behaviour.
Here is a PowerShell version of the most popular solution using pdftk. I did this on Windows, but you can use PowerShell Core on other platforms.
# install pdftk server if on windows
# https://www.pdflabs.com/tools/pdftk-server/
$blank_pdf_path = ".\blank.pdf"
$input_folder = ".\input\"
$aligned_folder = ".\aligned\"
$final_output_path = ".\result.pdf"
foreach($file in (Get-ChildItem $input_folder -Filter *.pdf))
{
    # easy but might break if pdfinfo output changes
    # takes the 7th line, e.g. "Pages: 2", and matches only the numbers
    (pdfinfo $file.FullName)[7] -match "(\d+)" | Out-Null
    $npages = $Matches[1]
    $modulo = $npages % 2
    if($modulo -eq 1)
    {
        $output_path = Join-Path $aligned_folder $file.Name
        pdftk $file.FullName $blank_pdf_path output $output_path
    }
    else
    {
        Copy-Item $file.FullName -Destination $aligned_folder
    }
}
$aligned_pdfs = Join-Path $aligned_folder "*.pdf"
pdftk $aligned_pdfs output $final_output_path
Preparation
Install Python and make sure you have the pyPDF package.
Create a PDF file with a single blank page in /path/to/blank.pdf (I've created blank pdf pages here).
Save this as pdfmerge.py in any directory of your $PATH. (I'm not a Windows user. This is straightforward under Linux. Please let me know if you get errors / if it works.)
Make pdfmerge.py executable
Every time you need it
Run pdfmerge.py in a directory that contains only the PDF files you want to merge.
pdfmerge.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from argparse import ArgumentParser
from glob import glob
from pyPdf import PdfFileReader, PdfFileWriter

def merge(path, blank_filename, output_filename):
    blank = PdfFileReader(file(blank_filename, "rb"))
    output = PdfFileWriter()
    for pdffile in glob('*.pdf'):
        if pdffile == output_filename:
            continue
        print("Parse '%s'" % pdffile)
        document = PdfFileReader(open(pdffile, 'rb'))
        for i in range(document.getNumPages()):
            output.addPage(document.getPage(i))
        if document.getNumPages() % 2 == 1:
            output.addPage(blank.getPage(0))
            print("Add blank page to '%s' (had %i pages)" % (pdffile, document.getNumPages()))
    print("Start writing '%s'" % output_filename)
    output_stream = file(output_filename, "wb")
    output.write(output_stream)
    output_stream.close()

if __name__ == "__main__":
    parser = ArgumentParser()
    # Add more options if you like
    parser.add_argument("-o", "--output", dest="output_filename", default="merged.pdf",
                        help="write merged PDF to FILE", metavar="FILE")
    parser.add_argument("-b", "--blank", dest="blank_filename", default="blank.pdf",
                        help="path to blank PDF file", metavar="FILE")
    parser.add_argument("-p", "--path", dest="path", default=".",
                        help="path of source PDF files")
    args = parser.parse_args()
    merge(args.path, args.blank_filename, args.output_filename)
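A possible invocation, once the script is executable and on your $PATH (a sketch; the paths are hypothetical):
chmod +x ~/bin/pdfmerge.py
cd /path/to/pdfs          # the directory containing only the PDFs to merge
pdfmerge.py -b /path/to/blank.pdf -o merged.pdf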
Testing
Please make a comment if this works on Windows and Mac.
Please always leave a comment if it doesn't work / it could be improved.
It works on Linux. Joining 3 PDFs into a single 200-page PDF took less than a second.
Martin had a good start. I updated it to PyPDF2 and made a few tweaks, like sorting the output by filename.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from argparse import ArgumentParser
from glob import glob
from PyPDF2 import PdfFileReader, PdfFileWriter
import os.path
def merge(pdfpath, blank_filename, output_filename):
    with open(blank_filename, "rb") as f:
        blank = PdfFileReader(f)
        output = PdfFileWriter()
        filelist = sorted(glob(os.path.join(pdfpath, '*.pdf')))
        for pdffile in filelist:
            if pdffile == output_filename:
                continue
            print("Parse '%s'" % pdffile)
            document = PdfFileReader(open(pdffile, 'rb'))
            for i in range(document.getNumPages()):
                output.addPage(document.getPage(i))
            if document.getNumPages() % 2 == 1:
                output.addPage(blank.getPage(0))
                print("Add blank page to '%s' (had %i pages)" % (pdffile, document.getNumPages()))
        print("Start writing '%s'" % output_filename)
        with open(output_filename, "wb") as output_stream:
            output.write(output_stream)

if __name__ == "__main__":
    parser = ArgumentParser()
    # Add more options if you like
    parser.add_argument("-o", "--output", dest="output_filename", default="merged.pdf",
                        help="write merged PDF to FILE", metavar="FILE")
    parser.add_argument("-b", "--blank", dest="blank_filename", default="blank.pdf",
                        help="path to blank PDF file", metavar="FILE")
    parser.add_argument("-p", "--path", dest="path", default=".",
                        help="path of source PDF files")
    args = parser.parse_args()
    merge(args.path, args.blank_filename, args.output_filename)
The code by @Chris Lercher in https://stackoverflow.com/a/12761103/1369181 did not quite work for me. I do not know whether that is because I am working on Cygwin/mintty. Also, I have to use qpdf instead of pdftk. Here is the code that has worked for me:
#!/bin/bash
for f in *.pdf; do
    npages=$(pdfinfo "$f"|grep 'Pages:'|sed 's/[^0-9]*//g')
    modulo=$(($npages %2))
    if [ $modulo -eq 1 ]; then
        qpdf --empty --pages "$f" "path/to/blank.pdf" -- "aligned_$f"
    else
        cp "$f" "aligned_$f"
    fi
done
Now, all "aligned_" files have even page numbers, and I can join them using qpdf (thanks to https://stackoverflow.com/a/51080927):
qpdf --verbose --empty --pages aligned_* -- all.pdf
And here the useful code from https://unix.stackexchange.com/a/272878 that I have used for creating the blank page:
echo "" | ps2pdf -sPAPERSIZE=a4 - blank.pdf
This one worked for me. I used pdfcpu on macOS.
It can be installed this way:
brew install pdfcpu
I have slightly adjusted the code from https://stackoverflow.com/a/12761103/1369181:
#!/bin/bash
mkdir aligned
for f in *.pdf; do
    let npages=$(pdfcpu info "$f"|grep 'Page count:'|awk '{print $3}')
    let modulo="($npages %2)"
    if [ $modulo -eq 1 ]; then
        pdfcpu page insert -pages l -mode after "$f" "aligned/$f"
    else
        cp "$f" "aligned/$f"
    fi
done
pdfcpu merge merged-aligned.pdf aligned/*.pdf
rm -rf aligned
NB! It creates and removes an "aligned" directory in the current directory, so feel free to improve it to make it safer to use.

convert a `find` like output to a `tree` like output

This question is a generalized version of the Output of ZipArchive() in tree format question.
Before I waste time writing this (*nix command line) utility, it would be a good idea to find out whether someone has already written it. I would like a utility that takes as its standard input a list such as the one produced by find(1) and outputs something similar to that of tree(1).
E.g.:
Input:
/fruit/apple/green
/fruit/apple/red
/fruit/apple/yellow
/fruit/banana/green
/fruit/banana/yellow
/fruit/orange/green
/fruit/orange/orange
/i_want_my_mommy
/person/men/bob
/person/men/david
/person/women/eve
Output
/
|-- fruit/
| |-- apple/
| | |-- green
| | |-- red
| | `-- yellow
| |-- banana/
| | |-- green
| | `-- yellow
| `-- orange/
| |-- green
| `-- orange
|-- i_want_my_mommy
`-- person/
|-- men/
| |-- bob
| `-- david
`-- women/
`-- eve
Usage should be something like:
list2tree --delimiter="/" < Input > Output
Edit0: It seems that I was not clear about the purpose of this exercise. I like the output of tree, but I want it for arbitrary input. It might not be part of any file system name-space.
Edit1: Fixed person branch on the output. Thanks, @Alnitak.
In my Debian 10 I have tree v1.8.0. It supports --fromfile.
--fromfile
Reads a directory listing from a file rather than the file-system. Paths provided on the command line are files to read from rather than directories to search. The dot (.) directory indicates that tree should read paths from standard input.
This way I can feed tree with output from find:
find /foo | tree -d --fromfile .
Problems:
If tree reads /foo/whatever or foo/whatever, then foo will be reported as a subdirectory of `.`. Similarly with ./whatever: `.` will be reported as an additional level named `.` under the top-level `.`. So the results may not entirely meet your formal expectations: there will always be a top-level `.` entry, and it will be there even if find finds nothing or throws an error.
Filenames with newlines will confuse tree. Using find -print0 is not an option because there is no corresponding switch for tree.
I whipped up a Perl script that splits the paths (on "/"), creates a hash tree, and then prints the tree with Data::TreeDumper. Kinda hacky, but it works:
#!/usr/bin/perl
use strict;
use warnings;
use Data::TreeDumper;
my %tree;

while (<>) {
    my $t = \%tree;
    foreach my $part (split m!/!, $_) {
        next if $part eq '';
        chomp $part;
        $t->{$part} ||= {};
        $t = $t->{$part};
    }
}

sub check_tree {
    my $t = shift;
    foreach my $hash (values %$t) {
        undef $hash unless keys %$hash;
        check_tree($hash);
    }
}

check_tree(\%tree);

my $output = DumpTree(\%tree);
$output =~ s/ = undef.*//g;
$output =~ s/ \[H\d+\].*//g;
print $output;
Here's the output:
$ perl test.pl test.data
|- fruit
| |- apple
| | |- green
| | |- red
| | `- yellow
| |- banana
| | |- green
| | `- yellow
| `- orange
| |- green
| `- orange
|- i_want_my_mommy
`- person
|- men
| |- bob
| `- david
`- women
`- eve
Another tool is treeify, written in Rust.
Assuming you have Rust installed, get it with:
$ cargo install treeify
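If it behaves like the other filters above and reads paths from standard input (an assumption worth checking against its README), usage would look like:
find . | treeify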
So, I finally wrote what I hope will become the python tree utils. Find it at http://pytree.org
I would simply use tree myself, but here's a simple thing that I wrote a few days ago that prints a tree of a directory. It doesn't expect input from find (which makes it different from your requirements) and doesn't do the |- display (which can be done with some small modifications). You have to call it like so: tree <base_path> <initial_indent>. initial_indent is the number of characters the first "column" is indented.
function tree() {
    local root=$1
    local indent=$2
    cd "$root"
    for i in *
    do
        for j in $(seq 0 $indent)
        do
            echo -n " "
        done
        if [ -d "$i" ]
        then
            echo "$i/"
            (tree "$i" $(expr $indent + 5))
        else
            echo "$i"
        fi
    done
}
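A usage example, after sourcing the function in your shell (the path is just an illustration):
tree /etc 0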

How to extract the name of the immediate directory along with the filename?

I have a file whose complete path is like
/a/b/c/d/filename.txt
If I do a basename on it, I get filename.txt. But this filename is not unique enough.
So, it would be better if I could extract the filename as d_filename.txt i.e.
{immediate directory}_{basename result}
How can I achieve this result?
file="/path/to/filename"
echo $(basename $(dirname "$file")_$(basename "$file"))
or
file="/path/to/filename"
filename="${file##*/}"
dirname="${file%/*}"
dirname="${dirname##*/}"
filename="${dirname}_${filename}"
This code will recursively search through your hierarchy starting with the directory that you run the script in. I've coded the loop in such a way that it will handle any filename you throw at it; file names with spaces, newlines etc.
Note: the loop is currently written to not include any files in the directory that this script resides in; it only looks at subdirectories below it. This was done because it was the easiest way to make sure the script does not include itself in its processing. If for some reason you must include the directory the script resides in, it can be changed to accommodate this.
Code
#!/bin/bash
while IFS= read -r -d $'\0' file; do
dirpath="${file%/*}"
filename="${file##*/}"
temp="${dirpath}_${filename}"
parent_file="${temp##*/}"
printf "dir: %10s orig: %10s new: %10s\n" "$dirpath" "$filename" "$parent_file"
done < <(find . -mindepth 2 -type f -print0)
Test tree
$ tree -a
.
|-- a
| |-- b
| | |-- bar
| | `-- c
| | |-- baz
| | `-- d
| | `-- blah
| `-- foo
`-- parent_file.sh
Output
$ ./parent_file.sh
dir: ./a/b/c/d orig: blah new: d_blah
dir: ./a/b/c orig: baz new: c_baz
dir: ./a/b orig: bar new: b_bar
dir: ./a orig: foo new: a_foo
$ FILE=/a/b/c/d/f.txt
$ echo $FILE
/a/b/c/d/f.txt
$ echo $(basename ${FILE%%$(basename $FILE)})_$(basename $FILE)
d_f.txt
No need to call an external command:
s="/a/b/c/d/filename.txt"
t=${s%/*}
t=${t##*/}
filename=${t}_${s##*/}
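With the question's example path this leaves the desired name in $filename:
echo "$filename"    # prints d_filename.txt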
Take the example:
/a/1/b/c/d/file.txt
/a/2/b/c/d/file.txt
The only reliable way to qualify file.txt and avoid conflicts is to build the entire path into the new filename, e.g.
/a/1/b/c/d/file.txt -> a_1_b_c_d_file.txt
/a/2/b/c/d/file.txt -> a_2_b_c_d_file.txt
You may be able to skip part of the beginning if you know for sure that it will be common to all files, e.g if you know that all files reside somewhere underneath the directory /a above:
/a/1/b/c/d/file.txt -> 1_b_c_d_file.txt
/a/2/b/c/d/file.txt -> 2_b_c_d_file.txt
To achieve this on a per-file basis:
# file="/path/to/filename.txt"
new_file="`echo \"$file\" | sed -e 's:^/::' -e 's:/:_:g'`"
# new_file -> path_to_filename.txt
Say you want to do this recursively in a directory and its subdirectories:
# dir = /a/b
( cd "$dir" && find . | sed -e 's:^\./::' | while read file ; do
new_file="`echo \"$file\" | sed -e 's:/:_:g'`"
echo "rename $dir/$file to $new_file"
done )
Output:
rename /a/b/file.txt to file.txt
rename /a/b/c/file.txt to c_file.txt
rename /a/b/c/e/file.txt to c_e_file.txt
rename /a/b/d/e/file.txt to d_e_file.txt
...
The above is highly portable and will run on essentially any Unix system under any variant of sh (including bash, ksh, etc.).

Resources