How Does Deeplinking Work in The ThunderCore Hub Wallet?

Can I create deeplinks that would directly open a particular web page?
Are those links sharable?

The ThunderCore Hub wallet supports deeplinking to arbitrary URLs through the URL prefix https://ttsite.link/ followed by the base64 encoding of the target URL.
Sample Session
$ URL=https://www.google.com
$ printf '%s' "$URL" | base64 -w 0 | printf 'https://ttsite.link/'$(cat)'\n'
https://ttsite.link/aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbQ==
base64 is from GNU coreutils. You can also use e.g. Python:
printf '%s' "$URL" | python3 -c 'import sys,base64;print(base64.b64encode(sys.stdin.buffer.read()).decode("ascii"))' |
printf 'https://ttsite.link/'$(cat)'\n'
Clicking on https://ttsite.link/aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbQ== on a mobile device with the ThunderCore Hub app installed would then open the URL in the app.
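For convenience, here is a minimal shell helper that wraps the steps above (it assumes GNU coreutils base64; the function name is purely illustrative):
make_ttsite_link() {
  # Base64-encode the target URL and append it to the ttsite.link prefix
  printf 'https://ttsite.link/%s\n' "$(printf '%s' "$1" | base64 -w 0)"
}
make_ttsite_link 'https://www.google.com'
# https://ttsite.link/aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbQ==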

Related

How to create a script for downloading random YouTube videos?

I need a script that:
creates a folder named "videos" in the current path
continuously checks, via pwgen, whether randomly generated YouTube URLs resolve to valid videos
when a valid URL is found, launches a parallel process to download the YouTube or Vimeo video
the YouTube videos are encoded as *.mov and stored in the videos folder
the file names are numbered from 1 upwards
when the video download finishes, the parallel process stops
when the script stops, it deletes the videos folder
The purpose of this script is to:
create an interactive installation with openFrameworks, or a similar tool
I want to use:
youtube-dl, ffmpeg and pwgen
I will be using:
macOS High Sierra
Everything will be open source and published on GitHub.
pwgen will have to take as arguments:
the number of characters necessary to form a URL and the number of hashes to generate
youtube-dl and ffmpeg will start from something like:
youtube-dl -t mov URL
That's all I know for now.
while true; do
  # Generate a random 11-character candidate video ID
  video_id=$(LC_CTYPE=C tr -dc 'A-Za-z0-9_-' < /dev/urandom | head -c 11)
  # Only download if the URL answers with HTTP 200
  if [[ $(curl -s --head -o /dev/null -w '%{http_code}' "https://www.youtube.com/watch?v=$video_id") = 200 ]]; then
    # Converting the result to .mov, if needed, would be a separate ffmpeg step
    youtube-dl "https://www.youtube.com/watch?v=$video_id"
  fi
done
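A fuller sketch along the lines of the requirements above (folder creation, parallel downloads, numbered file names, cleanup on exit). This is hypothetical and untested; the output-template option is the only youtube-dl flag assumed, and the HTTP-200 check will not catch every invalid ID:
#!/usr/bin/env bash
# Sketch only: creates a "videos" folder, probes random IDs, downloads in
# parallel background processes, and deletes the folder when the script stops.
mkdir -p videos
trap 'rm -rf videos' EXIT

n=1
while true; do
  # Random 11-character candidate video ID (pwgen could be substituted for tr)
  video_id=$(LC_CTYPE=C tr -dc 'A-Za-z0-9_-' < /dev/urandom | head -c 11)
  url="https://www.youtube.com/watch?v=$video_id"
  if [[ $(curl -s --head -o /dev/null -w '%{http_code}' "$url") = 200 ]]; then
    # Download in a parallel process; files are numbered 1, 2, 3, ...
    youtube-dl -o "videos/$n.%(ext)s" "$url" &
    n=$((n + 1))
  fi
done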

Need a way to copy a file to Livelink from a cmd prompt (a la davcopy)

Has anyone written something like davcopy for Livelink? (davcopy works with SharePoint)
I have downloaded davcopy and it hangs when trying to use it with Livelink.
I've asked Open Text and their response is: "There is no way to do this out of the box; it will require writing a web services application."
I'm not sure how to write a web services application for Livelink, so before I explore that I was wondering if anyone had done an implementation of davcopy for Livelink.
I know of a command-line application that uses MS PowerShell to do what you want (http://www.gatevillage.net/public/content-server-desktop-library-powershell-suite).
It wouldn't be too difficult to write something like this with Ruby or Perl. Both support WS/SOAP.
With which version of Livelink (or Content Server) do you work?
You can use the curl command line tool to upload, download or delete files in Livelink. It makes HTTP requests against the CS REST API, which is available in CS 10.0 or newer.
For example, uploading a file "file.ext" to folder 8372 at http://server/instance/cs as Admin:
curl \
-F "type=144" \
-F "parent_id=8372" \
-F "name=file.ext" \
-F "file=@/path/to/file.ext" \
-u "Admin:password" \
-H "Expect:" \
http://server/instance/cs/api/v1/nodes
The "Expect" header has to be forced empty, because CS REST API does not support persistent connections, but curl would always enable them for this request.
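Wrapped in a small shell function, the same documented call can be used as a one-shot davcopy-style upload from a command prompt (the function name is illustrative, and the server URL and credentials are placeholders as above):
llupload() {  # usage: llupload /path/to/file.ext parent_id
  curl \
    -F "type=144" \
    -F "parent_id=$2" \
    -F "name=$(basename "$1")" \
    -F "file=@$1" \
    -u "Admin:password" \
    -H "Expect:" \
    http://server/instance/cs/api/v1/nodes
}
llupload /path/to/file.ext 8372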

Downloading all files of a particular type from a website using wget stops at the starting URL

The following did not work.
wget -r -A .pdf home_page_url
It stops with the following message:
....
Removing site.com/index.html.tmp since it should be rejected.
FINISHED
I don't know why it stops at the starting URL and does not follow the links in it to search for the given file type.
Is there any other way to recursively download all PDF files from a website?
It may be because of robots.txt. Try adding -e robots=off.
Other possible problems are cookie-based authentication or user-agent rejection for wget.
See these examples.
EDIT: The dot in ".pdf" is wrong according to sunsite.univie.ac.at
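For example, the original command with those workarounds applied might look like this (the site URL is a placeholder, and -A takes the extension without the dot, per the edit above):
wget -r -A pdf -e robots=off --user-agent="Mozilla/5.0" http://site.com/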
The following command works for me; it will download pictures from a site:
wget -A pdf,jpg,png -m -p -E -k -K -np http://site/path/
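Restricted to PDFs only, the same approach would presumably be (the site URL is a placeholder):
wget -A pdf -m -p -E -k -K -np http://site/path/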
This is certainly because the links in the HTML don't end with /.
Wget will not follow a link like this, as it thinks it's a file (but it doesn't match your filter):
<a href="http://example.com/page">page</a>
But it will follow this one:
<a href="http://example.com/page/">page</a>
You can use the --debug option to see if it's the actual problem.
I don't know any good solution for this. In my opinion this is a bug.
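For instance, to capture the decision log and check why links are being rejected (the site URL is a placeholder):
wget --debug -r -A pdf -o wget-debug.log http://site.com/
grep -i 'rejected' wget-debug.log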
In my version of wget (GNU Wget 1.21.3), the -A/--accept and -r/--recursive flags don't play nicely with each other.
Here's my script for scraping a domain for PDFs (or any other filetype):
wget --no-verbose --mirror --spider https://example.com -o - | while read -r line
do
    [[ $line == *'200 OK' ]] || continue
    [[ $line == *'.pdf'* ]] || continue
    echo "$line" | cut -c25- | rev | cut -c7- | rev | xargs wget --no-verbose -P scraped-files
done
Explanation: Recursively crawl https://example.com and pipe log output (containing all scraped URLs) to a while read block. When a line from the log output contains a PDF URL, strip the leading timestamp (25 characters) and trailing request info (7 characters) and use wget to download the PDF.

How to decrypt AES-128 encrypted m3u8 video files?

I am trying to decrypt AES-128 encrypted m3u8 video files such as this one:
The m3u8 file:
#EXTM3U
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-ALLOW-CACHE:NO
#EXT-X-VERSION:2
#EXT-X-FAXS-CM:MII6lAYJKoZIhvcNAQcCoII6hTCCOoECAQExCzAJBgUrDgMCGgUAM... very long key...
#EXT-X-KEY:METHOD=AES-128,URI="faxs://faxs.adobe.com",IV=0X99b74007b6254e4bd1c6e03631cad15b
#EXT-X-TARGETDURATION:8
#EXTINF:8,
video.mp4Frag1Num0.ts
#EXTINF:8,
video.mp4Frag1Num1.ts
...
I've tried with openssl:
openssl aes-128-cbc -d -kfile key.txt -iv 99b74007b6254e4bd1c6e03631cad15b -nosalt -in video_enc.ts -out video_dec.ts
key.txt contains the very long key
-->
bad decrypt
1074529488:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:539:
What am I doing wrong?
This might be a bit of a hack, but given a URL to an .m3u8 file, it will download and decrypt the files that make up the stream:
#!/usr/bin/env bash
curl "$1" -s | awk 'BEGIN {c=0} $0 ~ "EXT-X-KEY" {urlpos=index($0,"URI=")+5; ivpos=index($0,"IV="); keyurl=substr($0, urlpos, ivpos-urlpos-2); iv=substr($0, ivpos+5); print "key=`curl -s '\''"keyurl"'\'' | hexdump -C | head -1 | sed \"s/00000000//;s/|.*//;s/ //g\"`"; print "iv="iv} $0 !~ "-KEY" && $0 ~ "http" {printf("curl -s '\''"$0"'\'' | openssl aes-128-cbc -K $key -iv $iv -d >seg%05i.ts\n", c++)}' | bash
This script generates a second script that extracts keys and initialization vectors and uses them to decrypt while downloading. It needs curl, awk, hexdump, sed, and openssl to run. It'll probably choke on an unencrypted stream, or on a stream that uses something other than AES-128 (is any other encryption supported?).
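For reference, the per-segment commands that the generated script runs boil down to something like the following (the key URL, segment URL, and IV here are placeholders, not values from a real stream):
key=$(curl -s 'https://example.com/key.bin' | hexdump -C | head -1 | sed 's/00000000//;s/|.*//;s/ //g')
iv=99b74007b6254e4bd1c6e03631cad15b
curl -s 'https://example.com/video.mp4Frag1Num0.ts' | openssl aes-128-cbc -K "$key" -iv "$iv" -d > seg00000.ts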
You'll get a bunch of files: seg00000.ts, seg00001.ts, etc. Use tsMuxeR (https://www.videohelp.com/software/tsMuxeR) to merge these into a single file (simple concatenation didn't work for me...it's what I tried first):
(echo "MUXOPT --no-pcr-on-video-pid --new-audio-pes --vbr --vbv-len=500"; (echo -n "V_MPEG4/ISO/AVC, "; for i in seg*.ts; do echo -n "\"$i\"+"; done; echo ", fps=30, insertSEI, contSPS, track=258") | sed "s/+,/,/"; (echo -n "A_AAC, "; for i in seg*.ts; do echo -n "\"$i\"+"; done; echo ", track=257") | sed "s/+,/,/") >video.meta
tsMuxeR video.meta video.ts
(Track IDs and framerate may need adjustment...get the values to use by passing one of the downloaded files to tsMuxeR.)
Then use ffmpeg to remux to something a bit more widely understood:
ffmpeg -i video.ts -vcodec copy -acodec copy video.m4v
In order to decrypt an encrypted video stream, you need the encryption key.
This key is not part of the stream; it has to be obtained separately.
The EXT-X-FAXS-CM header contains DRM metadata, not the key.
This is an excerpt from the Adobe Media Server developer guide:
The Adobe Access Server protected variant playlist also needs to include the #EXT-X-FAXS-CM tag. The value of #EXT-X-FAXS-CM tag in variant playlist is the relative URI referring to the DRM metadata of one of the individual streams. At the client, the #EXT-X-FAXS-CM tag in variant playlist will be used to create the DRM session. The same DRM session will be used for all encrypted M3U8 files inside the variant playlist.
Full guide can be found here:
http://help.adobe.com/en_US/adobemediaserver/devguide/WS5262178513756206-4b6aabd1378392bb59-7fe8.html
There is also a mention that the faxs://faxs.adobe.com URI is for local key serving,
so the key is obtained locally from the device.
While some of the bash scripts in the existing answers get you part (or even all) of the way, depending on which site you're trying to download from, you might hit other obstacles (a different auth method, a custom license server mount, etc.).
I've found streamlink to be the most robust solution for this. It also lets you stream directly (rather than download), if that's what you're after, and it has the site-specific work already done for you for a long list of sites (see the plugins section, but keep in mind it's under active development and the latest release was in June, so for some of the newer plugins you'll have to git clone and install from source).
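A hedged usage sketch (the stream URL is a placeholder, and available quality names depend on the site):
streamlink "https://example.com/some-live-stream" best -o video.mp4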
In many cases, VLC will happily convert an .m3u8 video to an unencrypted .ts or .mp4. In the VLC graphical interface, go to Media > Convert/Save.
Even though this file includes AES-encrypted data, openssl doesn't know the m3u8 format. However, FFmpeg might be able to handle it.
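For a standard (non-DRM) AES-128 playlist whose #EXT-X-KEY URI is directly fetchable, a hedged sketch of the FFmpeg route (the playlist URL is a placeholder):
ffmpeg -i "https://example.com/playlist.m3u8" -c copy video.mp4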

RSS+SSL (gmail) via command line?

My goal is to be able to read new messages from a gmail account via a linux server. I guess I could do this via IMAP or something, but I'd like to avoid that complexity if possible given that gmail has this nice feed set up:
https://mail.google.com/mail/feed/atom/
The only issue is that I'm not sure how to authenticate the call to pull this. Is this possible?
A good starting point should be:
curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | sed -n "s/<title>\(.*\)<\/title.*name>\(.*\)<\/name>.*/\2 - \1/p"
Checks the Gmail ATOM feed for your account, parses it and outputs a list of unread messages.
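If you only need the number of unread messages, a simpler variant could parse the <fullcount> element from the same feed (a hedged sketch; the same basic-auth caveats apply):
curl -s -u username:password "https://mail.google.com/mail/feed/atom" | sed -n 's/.*<fullcount>\([0-9]*\)<\/fullcount>.*/\1/p'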
Also, see this thread: http://www.commandlinefu.com/commands/view/3380/check-your-unread-gmail-from-the-command-line
OTOH, I would recommend using mutt and IMAP.

Resources