Including ASCII art in R

I'm writing a small program and wanted to know if there is a way to include ASCII art in R. I was looking for an equivalent of triple quotes (""" or ''') in Python.
I tried using cat or print with no success.

Unfortunately R (prior to R 4.0; see the addendum below) can only represent literal strings using single quotes or double quotes, and that makes representing ASCII art awkward; however, you can do the following to get a text representation of your art which can be output using R's cat function.
1) First put your art in a text file:
# ascii_art.txt is our text file with the ascii art
# For test purposes we use the output of say("Hello") from the cowsay package
# and put that in ascii_art.txt
library(cowsay)
writeLines(capture.output(say("Hello"), type = "message"), con = "ascii_art.txt")
2) Then read the file in and use dput:
art <- readLines("ascii_art.txt")
dput(art)
which gives this output:
c("", " -------------- ", "Hello ", " --------------", " \\",
" \\", " \\", " |\\___/|", " ==) ^Y^ (==",
" \\ ^ /", " )=*=(", " / \\",
" | |", " /| | | |\\", " \\| | |_|/\\",
" jgs //_// ___/", " \\_)", " ")
3) Finally in your code write:
art <- # copy the output of `dput` here
so your code would contain this:
art <-
c("", " -------------- ", "Hello ", " --------------", " \\",
" \\", " \\", " |\\___/|", " ==) ^Y^ (==",
" \\ ^ /", " )=*=(", " / \\",
" | |", " /| | | |\\", " \\| | |_|/\\",
" jgs //_// ___/", " \\_)", " ")
4) Now if we simply cat the art variable it shows up:
> cat(art, sep = "\n")
--------------
Hello
--------------
\
\
\
|\___/|
==) ^Y^ (==
\ ^ /
)=*=(
/ \
| |
/| | | |\
\| | |_|/\
jgs //_// ___/
\_)
Added
This is an addition several years later. R 4.0 introduced a raw string syntax that makes this even easier; see ?Quotes:
Raw character constants are also available using a syntax similar
to the one used in C++: ‘r"(...)"’ with ‘...’ any character
sequence, except that it must not contain the closing sequence
‘)"’. The delimiter pairs ‘[]’ and ‘{}’ can also be used, and ‘R’
can be used in place of ‘r’. For additional flexibility, a number
of dashes can be placed between the opening quote and the opening
delimiter, as long as the same number of dashes appear between the
closing delimiter and the closing quote.
For example:
hello <- r"{
--------------
Hello
--------------
\
\
\
|\___/|
==) ^Y^ (==
\ ^ /
)=*=(
/ \
| |
/| | | |\
\| | |_|/\
jgs //_// ___/
\_)
}"
cat(hello)
giving:
--------------
Hello
--------------
\
\
\
|\___/|
==) ^Y^ (==
\ ^ /
)=*=(
/ \
| |
/| | | |\
\| | |_|/\
jgs //_// ___/
\_)

Alternative approach: Use an API
URL: artii (http://artii.herokuapp.com)
Steps (a combined sketch follows below):
1) Fetch the data: ascii_request <- httr::GET("http://artii.herokuapp.com/make?text=this_is_your_text&font=ascii___")
2) Retrieve the response: ascii_response <- httr::content(ascii_request, as = "text", encoding = "UTF-8")
3) cat it out: cat(ascii_response)
If you are not connected to the web, you can set up your own server.
Thanks to #johnnyaboh for setting up this amazing service
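For reference, here is a minimal sketch that puts those steps together (assuming the httr package is installed and the artii service is still reachable; the get_ascii helper name and the "standard" font are illustrative assumptions, not from the original answer):
library(httr)
# Hypothetical helper: ask the artii service for ASCII art and return it as text
get_ascii <- function(text, font = "standard") {
  resp <- GET("http://artii.herokuapp.com/make", query = list(text = text, font = font))
  content(resp, as = "text", encoding = "UTF-8")
}
cat(get_ascii("Hello"))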

Try the cat function, something like this should work:
cat(" \"\"\" ")

Related

R function to check the multiple texts in a string

I am facing a problem finding a solution in R.
I have to find the strings containing any of these 4 pieces of text:
1. " { M/s ",
2. " { M/s. ",
3. " ( S/O - ",
4. " ( W/O - "
and use the output in an if statement in R.
dd<- data.frame(narr=c("Ratnakar:LIMITED::::CNAAJPIOP0::::Ratnakar:LIMITED",
"BAR-BOKALAWA:::Kl RAM I:: { M/s. REJOICE CONFECTIONARS ::BARBOKALAWA:::Kl RAM I",
"P2A:::REFUND::: { M/s AANCHAL SAREES :::1(NETPREM KUMAR SINGH)",
"P2A:: SUNDER ( S/O - JITENDER PAL ::REFUND:::::rajdhani:lawn",
"SAA::PRUD:::P2A::::SAA::PRUD",
"SAA-NOON:MOO: RAJNI ( W/O - RAM NIVAS::P2A::REFUND::SAA:NOON:MOO",
"CMS.CAR:::SAA:::CMS::CAR"))
This runs fine: str_detect(dd$narr, " M/s | M/s.| W/O | C/O | S/O ")
But this does not run: str_detect(dd$narr, " { M/s | { M/s.| ( W/O | ( C/O | ( S/O ")
The error is:
Error in stri_detect_regex(string, pattern, negate = negate, opts_regex = opts(pattern)) :
Error in {min,max} interval. (U_REGEX_BAD_INTERVAL)
Please help me out.
str_detect(dd$narr, " \\{ M/s | \\{ M/s\\.| \\( W/O | \\( C/O | \\( S/O ")
The unescaped { is being read as the start of a {min,max} repetition interval, which is what the U_REGEX_BAD_INTERVAL error refers to. ?regexp says: Any metacharacter with special meaning may be quoted by preceding it with a backslash.
stringr::str_detect(dd$narr, " \\{ M/s | \\{ M/s\\.| \\( W/O | \\( C/O | \\( S/O ")
#[1] FALSE TRUE TRUE TRUE FALSE TRUE FALSE
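If you would rather not escape metacharacters by hand, a hedged alternative (assuming the stringr package, as above) is to treat each piece of text as a literal with fixed() and OR the per-pattern results together:
library(stringr)
# the four literal pieces of text from the question
patterns <- c(" { M/s ", " { M/s. ", " ( S/O - ", " ( W/O - ")
# run str_detect once per pattern, then combine the logical vectors with |
Reduce(`|`, lapply(patterns, function(p) str_detect(dd$narr, fixed(p))))
#[1] FALSE TRUE TRUE TRUE FALSE TRUE FALSE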

Calculate length of string with only spaces in Unix for a fixed width file

I have a fixed-width file created via a BTEQ script in Unix. I am able to calculate the length correctly for all fields up to the last field that has a value.
In BTEQ script:
SELECT RPAD(COALESCE(SSO_ID,''),50,' ')||
RPAD(COALESCE(GENERIC4,''),100,' ')||
RPAD(COALESCE(GENERIC5,''),100,' ')||
RPAD(COALESCE(GENERIC6,''),100,' ')||
RPAD(COALESCE(GENERIC7,''),100,' ')||
RPAD(COALESCE(GENERIC8,''),100,' ')||
RPAD(COALESCE(GENERIC9,''),100,' ')||
RPAD(COALESCE(GENERIC10,''),100,' ')
FROM <view_name>
FILENM is an output file created via the above BTEQ script:
LENGTH_SSO_ID=`cat $FILENM | grep 57080249 |cut -c1361-1410| tr " " "~" | wc -c`
LENGTH_GENERIC4=`cat $FILENM | grep 57080249 |cut -c1711-1810| tr " " "~" | wc -c`
LENGTH_GENERIC5=`cat $FILENM | grep 57080249 |cut -c1811-1910| tr " " "~" | wc -c`
LENGTH_GENERIC6=`cat $FILENM | grep 57080249 |cut -c1911-2010| tr " " "~" | wc -c`
LENGTH_GENERIC7=`cat $FILENM | grep 57080249 |cut -c2011-2110| tr " " "~" | wc -c`
LENGTH_GENERIC8=`cat $FILENM | grep 57080249 |cut -c2111-2210| tr " " "~" | wc -c`
LENGTH_GENERIC9=`cat $FILENM | grep 57080249 |cut -c2211-2310| tr " " "~" | wc -c`
LENGTH_GENERIC10=`cat $FILENM | grep 57080249 |cut -c2311-2410| tr " " "~" | wc -c`
This is the output I'm getting; ideally I should get a length of 100 for each GENERIC column.
LENGTH of SSO_ID=50
LENGTH of GENERIC4=5
LENGTH of GENERIC5=0
LENGTH of GENERIC6=0
LENGTH of GENERIC7=0
LENGTH of GENERIC8=0
LENGTH of GENERIC9=0
LENGTH of GENERIC10=0
Please advise why it gives the incorrect length for columns that always contain spaces and occur at the end of the record.
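As a hedged diagnostic (not part of the original question), you can check whether the trailing spaces actually made it into the exported record by printing its raw length:
# print the length of each matching record, excluding the newline
awk '/57080249/ { print length($0) }' "$FILENM"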

I need list of files in long format that contain string

I want the list of files, in long format (ls -l) including date and time, that contain a specific string and, if possible, the number of occurrences.
The most I have achieved is a list of files (just the names) with the number of occurrences:
grep -c 'string' * | grep -v :0
That shows something like:
filename:number of occurrences
But I cannot improve it to also show the file date and time. It has to be something simple, but I am a bit of a newbie.
I have used -s to ignore the directory warnings. ':0$' is the regex for lines ending in :0. awk then calls ls -l on just the filenames found, and | tr '\n' ' ' replaces the newlines of the ls output with spaces. We output the number of occurrences at the end of each line so we don't lose that info while going forward. The last awk just prints the columns needed.
grep -c 'form-group' * -s | grep -v ':0$' | awk -F ':' '{ printf system( "ls -l \"" $2 "\" | tr \"\n\" \" \"" ); print " " $3 }' | awk -F ' ' '{ print $6 " " $7 " " $8 " " $9 " : " $11 }'
Here is some sample output:
Sep 1 13:47 xxx.blade.php : 12
Sep 1 13:47 xxx.blade.php : 5
Sep 1 13:47 xxx.blade.php : 6
Sep 11 17:25 xxx.blade.php : 4
Sep 4 15:03 xxx.blade.php : 6
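A hedged, simpler alternative (assuming filenames contain no colons or newlines) is to let the shell split grep's filename:count pairs and run ls -l once per file:
# list each matching file in long format, followed by its occurrence count
grep -sc 'form-group' * | grep -v ':0$' | while IFS=: read -r file count; do
    printf '%s : %s\n' "$(ls -l "$file")" "$count"
done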

How can I omit a specific comma from my output?

I want to format my output using awk/sed, but I can't figure out how to do it. I am using the following command:
uptime | awk '{print $1" " $2" " $3$4" " $6" " $10$11$12}'
15:36:17 up 177days, 7 0.39,0.43,0.36
My desired output is
15:36:17 up 177days 7 0.39,0.43,0.36
I want to omit only the first comma, i.e. the one after "177days".
Use either sub (substitute the comma with the empty string) or substr (make a substring with all but the last character):
uptime | awk '{sub(",","",$4); print $1" " $2" " $3$4" " $6" " $10$11$12}'
uptime | awk '{print $1" " $2" " $3 substr($4,1,length($4)-1) " " $6" " $10$11$12}'
Try piping the output of your command to a sed that matches a word character followed by a comma and replaces it with just that character; without a g flag, only the first match (the comma after "177days") is replaced:
uptime | awk '{print $1" " $2" " $3$4" " $6" " $10$11$12}' | sed -r 's/(\w),/\1/'

parameterize UNIX statements

I have a bunch of UNIX statements that I want to loop over, using parameterized values in their calculations.
more /var/xacct_data/xxxx/log_flattener/xxxx/logfile_current | grep " F " //E,I,D
**xxxx = mpay,mmg,tvr**
/var/xacct_data/faff/faff1/log_flattener/faffsnp1/ logfile_current | grep " F " //E , I, D also
/var/xacct_data/faff/faff1/log_flattener/faffdbt1 /logfile_current | grep " F " //E , I, D also
/var/xacct_data/faff/faff1/log_flattener/fafftxn1 /logfile_current | grep " F " //E , I, D also
/var/xacct_data/faff/faff2/log_flattener/faffdbt2/ logfile_current | grep " F " //E , I, D also
I want to store these paths in a file, read them from the file in a Unix shell script, and run the commands on those paths while substituting some values into the path.
For example, in the top-most path of the code block above, I want to replace the xxxx with the three values given: mpay, mmg and tvr. How do I go about it?
For every grep " F " I also want to use E, I and D as parameters for the current path. How do I do that?
The left part of the pipe seems truncated, but for the grep side I think you are looking for
... | grep " [FEID] "
This should get you started. I won't write the entire script for you.
In bash, zsh, etc...
for directory in mpay mmg tvr; do
    for char in F E I D; do
        echo "Looking for lines containing ${char} in ${directory} directory..."
        grep "${char}" /var/xacct_data/${directory}/log_flattener/${directory}/logfile_current
    done
done
No need for more here. grep takes a filename as input.
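To cover the other part of the question, reading the paths from a file, here is a hedged sketch; paths.txt is a hypothetical file with one logfile path per line:
# read each logfile path from paths.txt and grep it for the spaced single-letter codes
while IFS= read -r path; do
    for char in F E I D; do
        grep " ${char} " "$path"
    done
done < paths.txt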
