Using unix commands, how would I be able to take website information and place it inside a variable?
I have been practicing with curl -sS which allows me to strip out the download progress output and just print the downloaded data (or any possible error) in the console. If there is another method, I would be glad to hear it.
But so far I have a website and I want to get certain information out of it, so I am using curl and cut like so:
curl -sS "https://en.wikipedia.org/wiki/List_of_Olympic_medalists_in_judo?action=raw" | cut -c 19-
How would I put this into a variable? My attempts have not been successful so far.
Wrap any command in $(...) to capture the output in the shell, which you could then assign to a variable (or do anything else you want with it):
var=$(curl -sS "https://en.wikipedia.org/wiki/List_of_Olympic_medalists_in_judo?action=raw" | cut -c 19-)
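To see the same mechanism without the network dependency, command substitution captures whatever the pipeline prints (the sample text here is made up):

```shell
# $(...) runs the pipeline and captures its stdout, minus trailing newlines
var=$(printf 'line one\nline two\n' | cut -c 6-)
echo "$var"    # prints "one" then "two"
```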
Related
I'm writing a bash script that needs to both be able to cd in the current shell and use less to display longform text. To be able to cd, I understand that I need to source the script when I call it, which I've done via an alias in my ZSH config. However, when I do this, less breaks: instead of echo -e "$result" | less displaying its usual scrolling buffer, the long text gets dumped into the shell.
For context, this is a bash script acting as a wrapper for a Node.js script so as to be able to have native access to bash commands (like cd, open, etc.). The alias in my zshrc is as follows (with the path truncated): alias bk='source ~/.../bookmark/bookmark.sh'.
Is there any way to satisfy both the need to cd and the need to use less?
Fixed! This turned out to be an issue in my script's logic. I was using condition=$(echo $result | cut -c 1-3), but I actually need the first three characters (not columns) of $result, which I got with echo "$result" | head -c 3. What's interesting is that when the script is run as ./bookmark.sh, fetching the first three columns of $result works as an equivalent to fetching the first three characters, but running it through the alias yields the issue described here.
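For anyone hitting the same thing: cut -c selects character positions on every line, while head -c takes bytes from the start of the whole stream. A quick sketch (sample data invented):

```shell
result=$(printf 'abcdef\nghijkl')
printf '%s' "$result" | cut -c 1-3    # prints "abc" and "ghi" (one range per line)
printf '%s' "$result" | head -c 3     # prints "abc" (first three bytes of the stream)
```

The two agree only when the input is a single line, which is why the difference can hide for a while.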
The first question may take care of this. When capturing in tshark using fields, like "-e tcp.flags", is there a way to have the output show the flag label, like "FIN", instead of "0x1"? I've done a few searches through the documentation; the answer is probably right under my nose.
If not, then I need a function in my data pipeline to convert the hex into the labels. I thought about having a dictionary like "{'0x1':'FIN'}" and map it, but I'm not sure of all the flag combos that might appear.
So I am taking the hex string, converting it to an integer, then to a binary string. I turn that into a list "[0,0,0,0,0,1]" and use that like a filter against a label list like "[u, a, p, r, s, f]" that returns any labels joined, like "f" or "a_s". Using Python.
Is this function necessary? Is there a more efficient/elegant way to convert the hex to labels?
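For what it's worth, the bit masking described above can also be done directly in the shell; a hedged sketch (the function name and output format are my own, the label order here is lowest bit first, and real captures may carry flag bits beyond these six):

```shell
decode_flags() {
    # Bit order, low bit first: FIN=0x01 SYN=0x02 RST=0x04 PSH=0x08 ACK=0x10 URG=0x20
    val=$(( $1 )) out='' i=0
    for label in f s r p a u; do
        if [ $(( val & (1 << i) )) -ne 0 ]; then out="${out}${label}_"; fi
        i=$(( i + 1 ))
    done
    printf '%s\n' "${out%_}"    # strip the trailing underscore
}

decode_flags 0x12    # SYN+ACK -> s_a
decode_flags 0x29    # FIN+PSH+URG -> f_p_u
```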
Normally I would suggest using -e tcp.flags.str, but this doesn't display properly for me at least running on Windows 10 w/TShark (Wireshark) 3.3.0 (v3.3.0rc0-1433-gcac1426dd6b2). For example, here's what I get for what should be a "SYN" indication only:
tshark.exe -r tcpfile.pcap -c 1 -T fields -e frame.number -e tcp.flags -e tcp.flags.str
1 0x00000002 Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·SÂ·
You can try it on your system and maybe it'll display as intended. (In Wireshark, it's displayed correctly as ··········S·, so it may be a tshark bug or a problem with my shells; I tried both cmd and powershell.) In any case, if it doesn't display properly on your system, you can try the tcp-flags-postdissector.lua dissector that Didier Stevens wrote, which was inspired by Snort and which I believe served as the inspiration for Wireshark's built-in tcp.flags.str field. I personally preferred a '.' instead of '*' for flag bits that aren't set, so I tweaked the Lua dissector to behave that way. Use it as is, or tweak it any way you choose. With the Lua dissector, I get the expected output:
tshark.exe -r tcpfile.pcap -c 1 -T fields -e frame.number -e tcp.flags -e tcpflags.flags
1 0x00000002 ........S.
Since the same incorrect string is displayed in both cmd and powershell, it looks like a tshark bug to me, so I filed Wireshark Bug 16649.
I use rsync to backup a few thousands of files and pipe the output to a file.
Given the number of files I'd like to see a list of only those transfers that had issues as well as a summary to show which completed.
So, using the -q flag gives a nice report by exception: only errors are displayed.
Using --stats shows a helpful summary at the end.
The problem is that I cannot combine them because it appears that -q suppresses the stats output.
Any ideas welcome.
This did the trick for me:
rsync -azh --stats <source> <destination>
-a/--archive: archive mode; equals -rlptgoD (no -H,-A,-X)
-z/--compress: compress file data during the transfer
-h/--human-readable: output numbers in a human-readable format
--stats: give some file-transfer stats
Perhaps this will help someone else. In the end the only thing that worked was to swap the order of the output redirections, as suggested here.
So in my case it was simply redirecting as follows:
2>> /output.log >> /output.log
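The separation can be seen without rsync; any command that writes to both streams behaves the same way (the function and file names here are placeholders):

```shell
# Stand-in for rsync: writes one line to stdout (like --stats) and one to stderr
demo() { echo 'stats: 42 files transferred'; echo 'rsync: permission denied' >&2; }

rm -f transfer.log errors.log
demo >> transfer.log 2>> errors.log

grep 'stats' transfer.log     # the summary landed here
grep 'denied' errors.log      # the error landed here
```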
I have a scenario in which I need to download files through the curl command, and I want my script to pause for some time before downloading the next one. I used the sleep command like
sleep 3m
but it is not working.
Any ideas?
Thanks in advance.
Make sure your text editor is putting only a \n at the end of every line, and not \r\n. This is typical if you are writing the script on Windows.
Use Notepad++ (Windows) and go to Edit | EOL Conversion | UNIX, then save it. If you are stuck with the file on the computer, I have read here [talk.maemo.org/showthread.php?t=67836] that you can use [tr -d "\r" < oldname.sh > newname.sh] to remove the problem. If you want to see whether a file has the problem, use [od -c yourscript.sh]; the \r will occur before any \n.
Other problems I have seen this cause: [cd /dir/dir] fails with [cd: 1: can't cd to /dir/dir], or copying scriptfile.sh to newfilename produces a file called newfilenameX, where X is an invisible character (ensure you can delete it before trying it; if the file is on a network share, a Windows machine can see the character). Ensure the command you test with is not on the last line.
Until I figured it out (I knew I had to ask Google about something that can manifest in various ways), I thought there was an issue with the Linux version I was using (sleep not working in a script???).
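The diagnosis and the fix can be seen end to end in a few lines (file names are made up):

```shell
# Simulate a script saved with Windows line endings
printf 'sleep 3m\r\n' > broken.sh
od -c broken.sh | head -n 1     # shows the \r sitting before the \n

# Strip the carriage returns, as suggested above
tr -d '\r' < broken.sh > fixed.sh
od -c fixed.sh | head -n 1      # the \r is gone
```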
Are you sure you are using sleep the right way? Based on your description, you should be invoking it as:
sleep 180
Is this the way you are doing it?
You might also want to consider the wget command, as it has an explicit --wait flag, so you might avoid having the loop in the first place.
while read -r urlname
do
    curl "${urlname}"    # plus whatever curl options you need
    sleep 180            # 180 seconds is 3 minutes
done < file_with_several_url_to_be_fetched
?
I have a query regarding the execution of a complex command in the makefile of the current system.
I am currently using the shell function in the makefile to execute the command. However, my command fails, as it is a combination of many commands and its execution collects a huge amount of data. The makefile content is something like this:
variable=$(shell ls -lart | grep name | cut -d/ -f2- )
However, make fails with an execvp error, since the file listing is huge and I need to parse all of it.
Please suggest ways to overcome this issue. Basically, I would like to execute a complex command, assign its output to a makefile variable, and use that variable later in the program.
(This may take a few iterations.)
This looks like a limitation of the architecture, not a Make limitation. There are several ways to address it, but you must show us how you use variable, otherwise even if you succeed in constructing it, you might not be able to use it as you intend. Please show us the exact operations you intend to perform on variable.
For now I suggest you do a couple of experiments and tell us the results. First, try the assignment with a short list of files (e.g. three) to verify that the assignment does what you intend. Second, in the directory with many files, try:
variable=$(shell ls -lart | grep name)
to see whether the problem is in grep or cut.
Rather than storing the list of files in a variable, you can easily use shell functionality to get the same result. It's a bit odd that you're flattening a recursive ls to get only the leaves, and then running mkdir -p, which is really only useful if the parent directory doesn't exist; but if you know which depths you want (for example, the current directory and all subdirectories one level down), you can do something like this:
directories:
	for path in ./*name* ./*/*name*; do \
		mkdir "/some/path/$$(basename "$$path")" || exit 1; \
	done
(Note the doubled $$: make would otherwise expand the single $ itself before the shell ever sees it.)
or even
find . -name '*name*' -exec sh -c 'mkdir "/some/path/$(basename "$1")"' _ {} \;
(The sh -c wrapper is needed so that basename runs once per found path; $(basename {}) in the calling shell would be expanded before find ever runs.)
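A quick plain-shell check of the basename logic, with invented directory names:

```shell
# Set up a tiny tree whose leaf directories contain "name"
mkdir -p src/foo-name src/sub/bar-name dest

# Same globbing pattern as the recipe above, at depths 1 and 2
for path in src/*name* src/*/*name*; do
    mkdir "dest/$(basename "$path")" || exit 1
done

ls dest    # shows bar-name and foo-name
```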