Masscan: scan multiple ranges from a txt file

I have a txt file with:
x.x.x.x/22
x.x.x.x/23
x.x.x.x/24
etc.
How can I get masscan to read these ranges and perform a scan on all of them?

sudo masscan -p80 -iL file_name.txt
You can change the port number to suit. Run the scan from the directory containing the text file, or give the full path to the file.
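For reference, a minimal end-to-end sketch; the 10.0.x.x ranges and the file names are placeholders, not values from the question:

```shell
# Create the target list, one CIDR range per line.
printf '%s\n' '10.0.0.0/22' '10.0.4.0/23' '10.0.6.0/24' > ranges.txt

# masscan reads the list with -iL; it needs root and a live network,
# so the call is shown commented out. -oL writes results in list format.
# sudo masscan -p80 -iL ranges.txt --rate 1000 -oL results.txt
```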


Redirect output of SQL to a file in Unix

I need to run a stored proc and redirect its output to a text file. Right now I am using the command below, but the text file contains the headers and the columns are separated by spaces. I need the output file to start with the first row of data and with no spaces between the columns. Can someone please advise how to do this?
Thanks!
Command:
isql -S <server> -U <user> -P <password> -w1024 << EOB1 >> <text file>
use <db_name>
go
exec <proc>
go
EOB1
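One option is to post-process the redirected file with sed: drop the header lines and re-delimit the columns. A sketch against made-up isql-style output (`raw.txt` stands in for the redirected text file; the two-line header and comma delimiter are assumptions, so adjust to match your actual output):

```shell
# Fake isql output: column headers, an underline row, then padded data rows.
cat > raw.txt <<'EOF'
 col1        col2
 ----        ----
 a           1
 b           2
EOF

# Delete the two header lines, trim leading/trailing blanks,
# and collapse each run of spaces into a single comma.
sed '1,2d' raw.txt | sed -e 's/^ *//' -e 's/ *$//' -e 's/  */,/g' > clean.txt
cat clean.txt
```

This prints `a,1` and `b,2` with no header and no padding.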

rsync exclude list from database?

I know I can exclude rsync files listed in a text file, but can I make rsync read a sqlite (or other) database as an exclude list?
Otherwise I guess I could dump the sqlite to a text file, but I would like to eliminate the extra step, since I have many files in many directories.
The man page says:
--exclude-from=FILE
This option is related to the --exclude option, but it specifies a FILE that contains exclude patterns (one per line). Blank lines in the file and lines starting with ";" or "#" are ignored. If FILE is -, the list will be read from standard input.
So just pipe the file names into rsync:
sqlite3 my.db "SELECT filename FROM t" | rsync --exclude-from=- ...

Open files listed in txt

I have a list of files with their full path in a single text file. I would like to open them all at once in Windows. The file extension will tell Windows what programme to use. Can I do this straight from the command line or would I need to make a batch file? Tips on how to write the batch file appreciated.
My text file looks like the following:
J:/630/630A/SZ299_2013-04-19_19_36_52_M01240.WAV
J:/630/630A/SZ299_2013-04-19_20_15_39_M02312.WAV
J:/630/630A/SZ299_2013-04-19_21_48_07_M04876.WAV
etc
The .WAV extension is associated with Adobe Audition, which is a sound editing programme. When each path is hyperlinked in an Excel column, they can be opened with one click. Clicking on the first link will open both Audition and the hyperlinked file in it. Clicking another hyperlink will open the next file in the same instance of the programme. But this is too slow for hundreds of paths. If I open many files straight from R, e.g.
shell("J:/630/630A/SZ299_2013-04-19_19_36_52_M01240.WAV", intern=TRUE)
shell("J:/630/630A/SZ299_2013-04-19_20_15_39_M02312.WAV", intern=TRUE)
etc
each file will be opened in a new instance of the programme, which is nasty. So batch seems preferable.
for /f "delims=" %%a in (yourtextflename) do "%%a"
should do this as a batch line.
You could run this directly from the prompt if you like, but you'd need to replace each %% with % to do so.
It's a lot easier to put the code into a batch:
@echo off
setlocal
for /f "delims=" %%a in (%1) do "%%a"
then you'd just need to enter
thisbatchfilename yourtextfilename
and yourtextfilename will be substituted for %1. MUCH easier to type - and that's what batch is all about: repetitive tasks.
Following on from this post, which uses the identify function in R to select a subset of rows (from a larger dataset called "testfile") by clicking on coordinates in a scatterplot: one of the columns contains the list of Windows paths to the original acoustic data files. The last line below opens all files from the listed paths in a single instance of the programme associated with the Windows file extension.
selected_rows <- with(testfile, identify(xvalue, yvalue))
SEL <- testfile[selected_rows, ]
for (f in 1:nrow(SEL)) { system2("open", toString(SEL[f, ]$path)) }

Viewing the full contents of a file in Unix

I want to be able to see all lines of text in a file. Originally I only needed the top of the file and had been using
head -n 50 'filename.txt'
I could just do head -n 1000, as most files contain fewer lines than that, but I would prefer a better alternative.
Have you considered using a text editor? These are often installed by default on *nix systems; vi is usually available.
vi filename
nano filename
or
pico filename
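If you just want the whole file dumped rather than edited, cat prints it in one go and less gives you a pager. A quick sketch with a throwaway sample file:

```shell
# Create a small sample file to view.
printf 'line1\nline2\nline3\n' > sample.txt

cat sample.txt      # print the entire file at once
# less sample.txt   # page through it interactively (press q to quit)
```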

UNIX Script for searching in logs and files and extracting lines

I'm trying to write a script that can search log files for specific text and write the matching lines to a txt file. Log files are archived once every 2 days, so I need to search the archived files as well.
Something like:
-bash-3.2$ ssh server.com
-bash-3.2$ cd test/log/
less server.log.2012-06-19.gz | grep "text" -> ~/test.txt
I'm kind of a newbie in UNIX
Thanks
like this?
zgrep text server.log* >~/test.txt
gzcat <your_gz_file> | grep string > output_file
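A self-contained sketch of the zgrep approach; the log file and its contents here are made up to mirror the archived server.log.* files:

```shell
# Build a small gzipped log like the 2-day archives.
printf 'ok start\nerror: text found\nok end\n' > server.log.2012-06-19
gzip -f server.log.2012-06-19

# zgrep searches inside .gz files directly; redirect to ~/test.txt as needed.
zgrep 'text' server.log.2012-06-19.gz > test.txt
cat test.txt
```

Only the line containing "text" ends up in test.txt; no manual gunzip step is needed.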
