How should I deal with the "sph2pipe: command not found" error message?

I'm trying to use the sph2pipe tool to convert SPH files into wav or mp3 files. Although I have downloaded and installed the tool from here: https://www.ldc.upenn.edu/language-resources/tools/sphere-conversion-tools
I still don't see any program that I can use.
On Windows 10, after downloading sph2pipe and double-clicking the .exe file, a window popped up briefly and then disappeared, and now I can't find any program called sph2pipe on the system, nor any command named sph2pipe.
On Mac, I downloaded the program (I forget from where), but after clicking the executable file I got a terminal session saying:
Last login: Tue May 8 18:57:21 on ttys001
Pennys-MBP:~ me$ /Users/me/Downloads/SPH/sph2pipe_v2.5/sph2pipe ; exit;
Usage: sph2pipe [-h hdr] [-t|-s b:e] [-c 1|2] [-p|-u|-a] [-f typ] infile [outfile]
default conditions (for 'sph2pipe infile'):
    input file contains sphere header
    output full duration of input file
    output all channels from input file
    output same sample coding as input file
    output format is WAV on Wintel machines, SPH elsewhere
    output is written to stdout
optional controls (items bracketed separately above can be combined):
    -h hdr -- treat infile as headerless, get sphere info from file 'hdr'
    -t b:e -- output portion between b and e sec (floating point)
    -s b:e -- output portion between b and e samples (integer)
    -c 1   -- only output first channel
    -c 2   -- only output second channel
    -p     -- force conversion to 16-bit linear pcm
    -u     -- force conversion to 8-bit ulaw
    -a     -- force conversion to 8-bit alaw
    -f typ -- select alternate output header format 'typ'
              five types: sph, raw, au, rif(wav), aif(mac)
logout
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.
[Process completed]
But still, when I type sph2pipe in my terminal, I get the response:
-bash: sph2pipe: command not found
Can somebody help me? I need to do the conversion very soon.
Thank you!

I figured it out: the program is never installed system-wide, so the shell can't find it by name. Run it from the directory that contains the executable (or by its full path). On Windows, from that folder:
sph2pipe.exe file file.wav
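On the Mac the same idea applies: the shell only finds commands on your PATH, so either call the binary by its full path or add its directory to PATH. A minimal sketch, reusing the download location from the question (adjust the paths to wherever you unpacked the tool):

# Call by full path; -f rif forces RIFF/WAV output, per the usage text above
/Users/me/Downloads/SPH/sph2pipe_v2.5/sph2pipe -f rif input.sph output.wav

# Or make plain `sph2pipe` work for the current terminal session
export PATH="$PATH:/Users/me/Downloads/SPH/sph2pipe_v2.5"

# Then batch-convert every .sph file in the current directory
for f in *.sph; do
    sph2pipe -f rif "$f" "${f%.sph}.wav"
done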

Related

Unzip Password Protected File in R using WinZip

I am trying to use R to unzip password-protected files from a drive without using 7-Zip. My organisation doesn't have access to it; we use WinZip for everything.
I have searched far and wide here but cannot find a post that answers the question.
I have a file that is zipped and contains a single XML file. I need to automate the collation of this data; my thinking is to unzip and then read. I have found the following, but can't see how to get what I need from them:
Using unzip, which does not support passwords - unzip a .zip file
e.g. unzip("file.xml.zip") produces
Warning message: In unzip(zipfile = "file.xml.zip") : zip file is corrupt
And the file is not corrupt, as I can manually unzip it fine afterwards.
Using 7-Zip (I can't access this) - Unzip a password protected file with Powershell
Reading without unzipping (I get an "error reading from the connection") - Extract files from password protected zip folder in R
read_xml(unz("file.xml", "file.xml.zip"))
produces
Error in open.connection(x, "rb") : cannot open the connection In addition: Warning message: In open.connection(x, "rb") : cannot open zip file 'file.xml'
I have tried looking at Expand-Archive in PowerShell and calling that through R, but am not having much luck. Please, someone help me!
With PowerShell I use
Expand-Archive -Path 'file'
which produces:
Exception calling "ExtractToFile" with "3" argument(s): "The archive entry was compressed using an unsupported compression method."
I don't have WinZip, but since both it and the unzip.exe bundled with Rtools 4.2 support password-protected archives, we should be able to use similar methods. (Or perhaps you can just use the unzip included with Rtools.)
Setup:
$ echo 'hello world' > file1.txt
$ echo -e 'a,b\n11,22' > file2.csv
$ c:/rtools42/usr/bin/zip.exe -P secretpassword files.zip file1.txt file2.csv
adding: file1.txt (stored 0%)
adding: file2.csv (stored 0%)
$ unzip -v files.zip
Archive:  files.zip
  Length  Method     Size  Cmpr        Date   Time    CRC-32   Name
--------  ------  -------  ----  ---------- -----  --------   ----
      12  Stored       12    0%  2023-02-09 10:03  af083b2d   file1.txt
      10  Stored       10    0%  2023-02-09 10:03  1c1d572e   file2.csv
--------          -------   ---                               -------
      22               22    0%                               2 files
$ unzip -c files.zip file1.txt
Archive: files.zip
[files.zip] file1.txt password:
Okay, now we have a password-protected zip file.
In R,
readLines(pipe("unzip -q -P secretpassword -c files.zip file1.txt"))
# [1] "hello world"
read.csv(pipe("unzip -q -P secretpassword -c files.zip file2.csv"))
# a b
# 1 11 22
WinZip does support a command-line interface, so we should be able to use it within pipe (or system or similar). It supports passwords too; I believe it uses the -s argument instead of -P. I don't know whether it supports extracting a file to stdout, so you might need to explore its command-line options for that; if it doesn't, work out storing the document in a temporary directory.
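If you do go the WinZip route, a sketch of that temporary-directory fallback might look like the following. Note that the wzunzip program name (from WinZip's Command Line Support Add-On) and its -s password flag are assumptions here; check your WinZip documentation before relying on them:

# All names below are illustrative, not verified against WinZip's docs:
# extract into a temp folder, then read the XML from there in R.
wzunzip -ssecretpassword files.zip C:\temp\unzipped

In R you could then point read_xml() at the extracted file in that temp folder.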
Or, assuming you have Rtools installed, you can use its unzip as above without relying on WinZip.
Note:
Including the password as a command-line argument is relatively unsafe: users on the same host (if a multi-user system) can see the password in clear text by looking at the process list. I'm not certain if there's an easy way around this.

rtpdump file does not play correctly (tatering) when converted via SoX, but plays correctly in Wireshark

I am trying to convert a *.rtpdump file, created by Wireshark, into a wav file with SoX.
In Wireshark the original file plays without any tatering sound, but when I convert it to a wav file via SoX (on Windows), there is a continuous tatering sound throughout the clip and the actual voice stays in the background.
I tried u-law encoding, a-law and others; the best result is with u-law, but even that is not very audible. I tried lowpass, gain and treble effects, but those don't help either, and changing channels, bit rate and other options makes it worse.
I have tried many things, but the tatering won't go away:
sox.exe -t raw -r 8000 -e u-law -c 1 66.rtpdump -t wav d:\out.wav -V
sox.exe -t raw -r 8000 -e a-law -c 1 66.rtpdump -t wav d:\out.wav -V
The first few bytes within each packet (the packet headers) were causing the tatering sound: read as raw audio, they play back as noise. I removed these bytes and then combined all the packets without them, which produced tatering-free sound.
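A minimal sketch of that stripping step in shell, assuming the file is in the rtptools dump format (an ASCII first line, a 16-byte binary file header, then an 8-byte header per packet) and that every RTP header is exactly 12 bytes (no CSRC list or extensions); both are assumptions about your capture:

#!/bin/sh
# Strip the 8-byte rtpdump per-packet header and the 12-byte RTP header
# from each packet, concatenating the remaining u-law payload bytes.
IN=66.rtpdump
OUT=payload.raw

# Skip the ASCII header line plus the 16-byte binary file header.
SKIP=$(( $(head -n 1 "$IN" | wc -c) + 16 ))
: > "$OUT"

while :; do
    # Each packet record starts with a 2-byte big-endian record length.
    len_hex=$(dd if="$IN" bs=1 skip="$SKIP" count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
    [ -z "$len_hex" ] && break
    len=$(( 0x$len_hex ))
    [ "$len" -le 20 ] && break   # too short to hold both headers
    # Keep only the payload: record minus 8 (rtpdump) minus 12 (RTP) bytes.
    dd if="$IN" bs=1 skip=$(( SKIP + 20 )) count=$(( len - 20 )) 2>/dev/null >> "$OUT"
    SKIP=$(( SKIP + len ))
done

# The result is headerless audio, so the raw u-law SoX command now works:
sox -t raw -r 8000 -e u-law -c 1 "$OUT" out.wav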

Unix: egrep -a in Solaris

I need the equivalent of the egrep -a option in Solaris.
Currently this option works fine on Red Hat Linux, but we want to migrate some code to another server running Solaris.
So I need to know the equivalent of egrep -a on Solaris.
Thanks.
The requested feature is documented in the GNU grep manual:
-a
--text
Process a binary file as if it were text; this is equivalent to the --binary-files=text option.
--binary-files=type
If a file's allocation metadata, or if its data read before a line is selected for output, indicate that the file contains binary data, assume that the file is of type type. By default, type is ‘binary’, and grep normally outputs either a one-line message saying that a binary file matches, or no message if there is no match. When matching binary data, grep may treat non-text bytes as line terminators.
If type is ‘without-match’, grep assumes that a binary file does not match; this is equivalent to the -I option.
If type is ‘text’, grep processes a binary file as if it were text; this is equivalent to the -a option.
Warning: ‘--binary-files=text’ might output binary garbage, which can have nasty side effects if the output is a terminal and if the terminal driver interprets some of it as commands.
Solaris grep (see manual) has no such feature. Third-party packages are available, e.g., CSWggrep from OpenCSW.
The -a option is a GNU grep extension which has no POSIX equivalent.
If you have a full Solaris 10 installation, you probably already have GNU grep installed in /usr/sfw/bin/ggrep (the double g is not a typo).
You can then replace the egrep -a ... occurrence(s) with:
if [ -x /usr/sfw/bin/ggrep ] ; then
    EGREP_A="/usr/sfw/bin/ggrep -E -a"   # GNU grep: -E for egrep syntax, -a for binary-as-text
else
    EGREP_A="egrep -a"
fi
$EGREP_A ...
If it's not installed, you can easily install the SUNWggrp package from your installation media (CD/DVD).
If for some reason you can't do that, you'll need to provide more details about what kind of binary file it is and what pattern you are searching for in it.
There are certainly other ways to overcome this issue with standard Solaris tools; one possibility is sketched below.
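For example, if the only thing making the file "binary" is embedded NUL bytes, one possible workaround with standard tools is to strip or bypass them before matching (whether this helps depends on your file):

# Strip NUL bytes so egrep sees plain text (assumes NULs are the issue):
tr -d '\000' < binary.file | egrep 'pattern'

# Or search only the printable strings extracted from the file:
strings binary.file | egrep 'pattern'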

Why does du report a different file size from ls on this single file?

Hello, I am trying to find the size of a huge directory on my Unix system.
I'm using the command du -k, but it is giving me weird results.
I narrowed it down to checking the size of just one tif file.
When I did an ls -l:
-rw-rw-rw- 1 dsp.ts5 datafeed 83239394 Jun 10 2013 V001.tif
The file size here is approx 83 MB.
And when I executed du -k V001.tif:
108914 V001.tif
The file size here is about 108 MB!
I am having a hard time figuring out why the two commands return different results.
du -k returns the number of 1K blocks the file occupies on disk.
ls -l shows the number of bytes in the file.
It's important to understand the flags that you pass to your programs. I think you're using du -sk without understanding why you need the -s or the -k. Specifically, -s makes no sense if you are giving it the name of a single file, because -s is for summarizing the results over many files or directories.
man du and man ls will tell you all about the options you can use.
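To see the two numbers side by side on one file, a quick sketch (the stat format flags are the GNU coreutils ones, an assumption; Solaris stat differs):

ls -l V001.tif      # length in bytes (83239394 here)
du -k V001.tif      # allocated disk space in 1K blocks (108914 here)
stat -c '%s bytes, %b blocks of %B bytes' V001.tif   # GNU stat: both at once

du can legitimately report more than ls, because it counts every block the filesystem allocated for the file, including indirect (bookkeeping) blocks, not just the data bytes.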

How do I use the nohup command without getting nohup.out?

I have a problem with the nohup command.
When I run my job, it produces a lot of output. nohup.out becomes too large and my process slows down. How can I run this command without getting nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
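For example, combining the two (disown is a shell builtin in bash, ksh and zsh):

nohup command >/dev/null 2>&1 &   # immune to HUP, output discarded
disown                            # drop the job from the shell's job table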
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
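For example, this read returns immediately instead of waiting for terminal input:

wc -c </dev/null   # prints 0: the read hits end-of-file at once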
nohup some_command >/dev/null 2>&1 &
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup, but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes command with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The
special option -- is used to signal that the rest of the arguments are
the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for
contact information.
Note I have no affiliation with the author of the program. I'm only a satisfied user of the program.
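For instance, using the options documented above (all file names here are illustrative):

# Run a long job detached, capturing stdout/stderr and recording its pid:
detach -o job.log -e job.err -p job.pid -- ./long_job.sh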
The following command will let you run something in the background without creating nohup.out:
nohup command | tee &
(Because stdout goes into a pipe rather than to the terminal, nohup doesn't create nohup.out, while tee still echoes the output to your console.)
This way, you will be able to get console output while running a script on a remote server:
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo directly causes sudo to ask for the password again, so this somewhat awkward mechanism is needed for this variant.
If you have a Bash shell on your Mac/Linux machine in front of you, try out the steps below to understand redirection practically:
Create a two-line script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"   # succeeds: message goes to STDOUT
junk_errorcommand                        # fails: "command not found" goes to STDERR
The echo command's output goes to the STDOUT file stream (file descriptor 1).
The error command's output goes to the STDERR file stream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen:
./zz.sh
Now start with standard redirection:
./zz.sh > zfile.txt
In the above, "echo" (STDOUT) goes into zfile.txt, whereas "error" (STDERR) is displayed on the screen.
The above is the same as:
./zz.sh 1> zfile.txt
Now you can try the opposite, and redirect "error" (STDERR) into the file while the STDOUT from the "echo" command goes to the screen:
./zz.sh 2> zfile.txt
Combining the above two, you get:
./zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT 1 to zfile.txt
THEN, send STDERR 2 to STDOUT 1 itself (using the &1 pointer).
Therefore, both 1 and 2 go into the same file (zfile.txt).
Eventually, you can wrap the whole thing in nohup ... & to run it in the background:
nohup ./zz.sh 1> zfile.txt 2>&1 &
You can run the command below.
nohup <your command> > <outputfile> 2>&1 &
(Note: don't put an & between the command and the redirections; doing so backgrounds the command before the redirections are applied.)
For example, the nohup call sits inside my script, which I run as:
./Runjob.sh > sparkConcuurent.out 2>&1
