Get last page http request golang - http

I am doing an HTTP request like this one:
resp, err := http.Get("http://example.com/")
Then I am getting the Link header:
link := resp.Header.Get("link")
Which gives me a result like this:
<page=3>; rel="next",<page=1>; rel="prev";<page=5>; rel="last"
Question
How can I parse this into a more legible form? I am specifically trying to get the last page, but the first and next pages would be useful as well.
I tried with Split and regular expressions without success.

Are you sure that is the format of the output? It looks like one of the ; separators should be a ,.
A single Link HTTP header with multiple values should be of the format (notice the comma after "prev"):
<page=3>; rel="next",<page=1>; rel="prev",<page=5>; rel="last"
The header should be split on , to get each link. Split each link on ; to get its values or key-value pairs, and then, if the value matches <(.*=.*)>, discard the angle brackets and use the remaining key and value.
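For example, a rough sketch of that approach in Go; the map and overall shape here are illustrative, not an existing API:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	header := `<page=3>; rel="next",<page=1>; rel="prev",<page=5>; rel="last"`
	uriRe := regexp.MustCompile(`<(.*=.*)>`)

	pages := map[string]string{}
	for _, link := range strings.Split(header, ",") {
		parts := strings.Split(link, ";")
		if len(parts) < 2 {
			continue
		}
		// parts[0] is the <...> target; parts[1] is the rel="..." value.
		if m := uriRe.FindStringSubmatch(strings.TrimSpace(parts[0])); m != nil {
			rel := strings.Trim(strings.TrimPrefix(strings.TrimSpace(parts[1]), "rel="), `"`)
			pages[rel] = m[1]
		}
	}
	fmt.Println(pages["last"]) // page=5
}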

Here's a solution showing how to match your page numbers.
http://play.golang.org/p/kzurb38Fwx
text := `<page=3>; rel="next",<page=1>; rel="prev";<page=2>; rel="last"`
re := regexp.MustCompile(`<page=([0-9]+)>; rel="next",<page=([0-9]+)>; rel="prev";<page=([0-9]+)>; rel="last"`)
matches := re.FindStringSubmatch(text)
if matches != nil {
	next := matches[1]
	prev := matches[2]
	last := matches[3]
	fmt.Printf("next = %s, prev = %s, last = %s\n", next, prev, last)
}
Later edit: you can probably also use the xml package to achieve the same result by parsing that output as XML, but you would need to transform the output a bit.

Related

Get word at position in Atom

From a linter provider, I receive a Point-compatible array (line, column) where the error occurred. Now I would like to highlight the word surrounding that point, basically the result one would get if that exact point was double-clicked in the editor. Something like
const range = textEditor.getWordAtPosition(point)
is what I hoped for, but I couldn't find it in the documentation.
Thanks for your help!
After looking around for a while, there seems to be no API method for the given need. I ended up writing a small helper function based upon this answer:
function getWordAtPosition(line, pos) {
    // Perform type conversions.
    line = String(line);
    pos = Number(pos) >>> 0;
    // Search for the word's beginning and end.
    const left = Math.max.apply(null, [/\((?=[^(]*$)/, /\)(?=[^)]*$)/, /\,(?=[^,]*$)/, /\[(?=[^[]*$)/, /\](?=[^]]*$)/, /\;(?=[^;]*$)/, /\.(?=[^.]*$)/, /\s(?=[^\s]*$)/].map(x => line.slice(0, pos).search(x))) + 1
    let right = line.slice(pos).search(/\s|\(|\)|\,|\.|\[|\]|\;/)
    // The last word in the string is a special case.
    if (right < 0) {
        right = line.length - pos
    }
    // Return the word, using the located bounds to extract it from the string.
    return line.slice(left, right + pos)
}
Here, the beginning of the word is determined by the latest occurrence of one of the characters (),.[]; or a blank.
The end of the word is determined by the same characters, but here the first occurrence is taken as the delimiter.
Given the original context, the function can then be called using the API method ::lineTextForBufferRow and the desired position (column) as follows:
const range = getWordAtPosition(textEditor.lineTextForBufferRow(bufferRow), 10)

Can R read html-encoded emoji characters?

Question
My question, explained below, is:
How can R be used to read a string that includes HTML emoji codes like &#55358;&#56599;?
I'd like to:
(1) represent the emoji symbol (e.g., as a Unicode symbol: 🤗) in the parsed string, OR (2) convert it into its text equivalent (":hugging face:")
Background
I have an XML dataset of text messages (from the Android/iOS app Signal) that I am reading into R for a text mining project. The data look like this, with each text message represented in an sms node:
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<!-- File Created By Signal -->
<smses count="1">
<sms protocol="0" address="+15555555555" contact_name="Jane Doe" date="1483256850399" readable_date="Sat, 31 Dec 2016 23:47:30 PST" type="1" subject="null" body="Hug emoji: &#55358;&#56599;" toa="null" sc_toa="null" service_center="null" read="1" status="-1" locked="0" />
</smses>
Problem
I am currently reading the data using the xml2 package for R. When I use the xml2::read_xml function, however, I get the following error message:
Error in doc_parse_raw(x, encoding = encoding, base_url = base_url, as_html = as_html, :
xmlParseCharRef: invalid xmlChar value 55358
which, as I understand it, indicates that the emoji character is not recognized as valid XML.
Using the xml2::read_html function does work, but drops the emoji character. A small example of this is here:
example_text <- "Hugging emoji: &#55358;&#56599;"
xml2::xml_text(xml2::read_html(paste0("<x>", example_text, "</x>")))
(Output: [1] "Hugging emoji: ")
This character is valid HTML -- Googling &#55358;&#56599; actually converts it in the search bar to the "hugging face" emoji, and brings up results relating to that emoji.
Other information I've found that seems relevant to this question
I've been searching Stack Overflow, and have not found any questions relating to this particular issue. I've also not been able to find a table that straightforwardly gives HTML codes next to the emoji they represent, and so am not able to do an (albeit inefficient) conversion of these HTML codes to their textual equivalents in a big loop before parsing the dataset; for example, neither this list nor its underlying dataset seem to include the string 55358.
tl;dr: the emoji aren't valid HTML entities; UTF-16 numbers have been used to build them instead of Unicode code points. I describe an algorithm at the bottom of the answer to convert them so that they are valid XML.
Identifying the Problem
R definitely handles emoji.
In fact, a few packages exist for handling emoji in R. For example, the emojifont and emo packages both let you retrieve emoji based on Slack-style keywords. It's just a question of getting your source characters through from the HTML-escaped format so that you can convert them.
xml2::read_xml seems to do fine with other HTML entities, like an ampersand or double quotes. I looked at this SO answer to see whether there were any XML-specific constraints on HTML entities, and it seemed like they were storing emoji fine. So I tried changing the emoji codes in your reprex to the ones in that answer:
body="Hug emoji: πŸ˜€πŸ˜ƒ"
And, sure enough, they were preserved (though they're obviously not the hug emoji anymore):
> test8 = read_html('Desktop/test.xml')
> test8 %>% xml_child() %>% xml_child() %>% xml_child() %>% xml_attr('body')
[1] "Hug emoji: \U0001f600\U0001f603"
I looked up the hug emoji on this page, and the decimal HTML entity given there is not &#55358;&#56599;. It looks like the UTF-16 decimal codes for the emoji have been wrapped in &# and ;.
In conclusion, I think the answer is that your emoji are, in fact, not valid HTML entities. If you can't control the source, you might need to do some pre-processing to account for these errors.
So, why does the browser convert them properly? I'm wondering if the browser is a little more flexible with these things and is making some guesses about what those codes could be. I'm just speculating, though.
Converting UTF-16 to Unicode code points
After some more investigation, it looks like valid emoji HTML entities use the Unicode code point (in decimal, if it's &#...;, or hex, if it's &#x...;). The Unicode code point is different from the UTF-8 or UTF-16 code. (That link explains a lot about how emoji and other characters are variously encoded, BTW! Good read.)
So we need to convert the UTF-16 codes used in your source data to Unicode code points. Referring to this Wikipedia article on UTF-16, I've verified how it's done. Each Unicode code point (our target) is a 20-bit number, or five hex digits. When going from Unicode to UTF-16, you split it up into two 10-bit numbers (the middle hex digit gets cut in half, with two of its bits going to each block), do some maths on them, and get your result.
Going backwards, as you want to, it's done like this:
Your decimal UTF-16 number (which is in two separate blocks for now) is 55358 56599
Converting those blocks to hex (separately) gives 0x0d83e 0x0dd17
You subtract 0xd800 from the first block and 0xdc00 from the second to give 0x3e 0x117
Converting them to binary, padding them out to 10 bits and concatenating them, it's 0b0000 1111 1001 0001 0111
Then we convert that back to hex, which is 0x0f917
Finally, we add 0x10000, giving 0x1f917
Therefore, our (hex) HTML entity is &#x1f917;. Or, in decimal, &#129303;.
So, to preprocess this dataset, you'll need to extract the existing numbers, use the algorithm above, then put the result back in (with one &#...;, not two).
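For reference, a minimal Go sketch of the same conversion using the standard unicode/utf16 package, which implements exactly this arithmetic (the pair 55358 56599 is the hug emoji from the question):

package main

import (
	"fmt"
	"unicode/utf16"
)

func main() {
	// DecodeRune combines a UTF-16 surrogate pair into one code point,
	// performing the subtract/concatenate/add-0x10000 steps described above.
	r := utf16.DecodeRune(55358, 56599)
	fmt.Printf("%U\n", r) // U+1F917, the hugging face emoji
}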
Displaying emoji in R
As far as I'm aware, there's no solution to printing emoji in the R console: they always come out as "\U0001f600" (or what have you). However, the packages I described above can help you plot emoji in some circumstances (I'm hoping to expand ggflags to display arbitrary full-colour emoji at some point). They can also help you search for emoji to get their codes, but they can't get names given the codes AFAIK. But maybe you could try importing the emoji list from emojilib into R and doing a join with your data frame, if you've extracted the emoji codes into a column, to get the English names.
JavaScript Solution
I had this exact same problem, but needed the solution in JavaScript, not R. Using rensa's comment above (hugely helpful!), I created the following code to solve this issue, and I just wanted to share it in case anyone else happens across this thread as I did, but needed it in JavaScript.
str.replace(/(&#\d+;){2}/g, function(match) {
    match = match.replace(/&#/g, '').split(';');
    var binFirst = (parseInt('0x' + parseInt(match[0]).toString(16)) - 0xd800).toString(2);
    var binSecond = (parseInt('0x' + parseInt(match[1]).toString(16)) - 0xdc00).toString(2);
    binFirst = '0000000000'.substr(binFirst.length) + binFirst;
    binSecond = '0000000000'.substr(binSecond.length) + binSecond;
    return '&#x' + (('0x' + (parseInt(binFirst + binSecond, 2).toString(16))) - (-0x10000)).toString(16) + ';';
});
And, here's a full snippet of it working if you'd like to run it:
var str = '&#55357;&#56842;&#55357;&#56856;&#55357;&#56832;&#55357;&#56838;&#55357;&#56834;&#55357;&#56833;';
str = str.replace(/(&#\d+;){2}/g, function(match) {
    match = match.replace(/&#/g, '').split(';');
    var binFirst = (parseInt('0x' + parseInt(match[0]).toString(16)) - 0xd800).toString(2);
    var binSecond = (parseInt('0x' + parseInt(match[1]).toString(16)) - 0xdc00).toString(2);
    binFirst = '0000000000'.substr(binFirst.length) + binFirst;
    binSecond = '0000000000'.substr(binSecond.length) + binSecond;
    return '&#x' + (('0x' + (parseInt(binFirst + binSecond, 2).toString(16))) - (-0x10000)).toString(16) + ';';
});
document.getElementById('result').innerHTML = str;
// &#55357;&#56842;&#55357;&#56856;&#55357;&#56832;&#55357;&#56838;&#55357;&#56834;&#55357;&#56833;
// is turned into
// &#x1f60a;&#x1f618;&#x1f600;&#x1f606;&#x1f602;&#x1f601;
// which is rendered by the browser as the emojis
Original:<br>&#55357;&#56842;&#55357;&#56856;&#55357;&#56832;&#55357;&#56838;&#55357;&#56834;&#55357;&#56833;<br><br>
Result:<br>
<div id='result'></div>
My SMS XML Parser application is working great now, but it stalls out on large XML files, so I'm thinking about rewriting it in PHP. If/when I do, I'll post that code as well.
I've implemented the algorithm described by rensa above in R, and am sharing it here. I am happy to release the code snippet below under a CC0 dedication (i.e., putting this implementation into the public domain for free reuse).
This is a quick and unpolished implementation of rensa's algorithm, but it works!
# Requires the stringr, stringi, BMS, and magrittr (for %>%) packages.
library(magrittr)

utf16_double_dec_code_to_utf8 <- function(utf16_decimal_code){
  string_elements <- stringr::str_match_all(utf16_decimal_code, "&#(.*?);")[[1]][,2]
  string3a <- string_elements[1]
  string3b <- string_elements[2]
  string4a <- sprintf("0x0%x", as.numeric(string3a))
  string4b <- sprintf("0x0%x", as.numeric(string3b))
  string5a <- paste0(
    # "0x",
    as.hexmode(string4a) - 0xd800
  )
  string5b <- paste0(
    # "0x",
    as.hexmode(string4b) - 0xdc00
  )
  string6 <- paste0(
    stringi::stri_pad(
      paste0(BMS::hex2bin(string5a), collapse = ""),
      10,
      pad = "0"
    ) %>%
      stringr::str_trunc(10, side = "left", ellipsis = ""),
    stringi::stri_pad(
      paste0(BMS::hex2bin(string5b), collapse = ""),
      10,
      pad = "0"
    ) %>%
      stringr::str_trunc(10, side = "left", ellipsis = "")
  )
  string7 <- BMS::bin2hex(as.numeric(strsplit(string6, split = "")[[1]]))
  string8 <- as.hexmode(string7) + 0x10000
  unicode_pattern <- string8
  unicode_pattern
}
make_unicode_entity <- function(x) {
  paste0("\\U000", utf16_double_dec_code_to_utf8(x))
}
make_html_entity <- function(x) {
  paste0("&#x", utf16_double_dec_code_to_utf8(x), ";")
}
# An example string, using the "hug" emoji:
example_string <- "test &#55358;&#56599; test"
output_string <- stringr::str_replace_all(
  example_string,
  "(&#[0-9]*?;){2}", # Find all two-entity "&#...;&#...;" codes.
  make_unicode_entity
  # make_html_entity
)
cat(output_string)
# To print the Unicode string (it doesn't display in the R console, but can be
# copied and pasted elsewhere; this assumes you've used 'make_unicode_entity'
# in the str_replace_all call above):
stringi::stri_unescape_unicode(output_string)
I translated Chad's JavaScript answer to Go, since I too had the same issue but needed a solution in Go.
https://play.golang.org/p/h9JBFzqcd90
package main

import (
	"fmt"
	"html"
	"regexp"
	"strconv"
	"strings"
)

func main() {
	emoji := "&#55357;&#56842;&#55357;&#56856;&#55357;&#56832;&#55357;&#56838;&#55357;&#56834;&#55357;&#56833;"
	re := regexp.MustCompile(`(&#\d+;){2}`)
	matches := re.FindAllString(emoji, -1)
	var builder strings.Builder
	for _, match := range matches {
		// Extract the two decimal numbers of the surrogate pair.
		s := strings.Replace(match, "&#", "", -1)
		parts := strings.Split(s, ";")
		a := parts[0]
		b := parts[1]
		c, err := strconv.Atoi(a)
		if err != nil {
			panic(err)
		}
		d, err := strconv.Atoi(b)
		if err != nil {
			panic(err)
		}
		// Remove the surrogate offsets.
		c = c - 0xd800
		d = d - 0xdc00
		// Convert both halves to binary and pad each to 10 bits.
		e := strconv.FormatInt(int64(c), 2)
		f := strconv.FormatInt(int64(d), 2)
		g := "0000000000"[len(e):] + e
		h := "0000000000"[len(f):] + f
		// Concatenate the halves and add 0x10000 to get the code point.
		j, err := strconv.ParseInt(g+h, 2, 64)
		if err != nil {
			panic(err)
		}
		k := j + 0x10000
		builder.WriteString("&#x" + strconv.FormatInt(k, 16) + ";")
	}
	fmt.Println(html.UnescapeString(emoji))
	emoji = html.UnescapeString(builder.String())
	fmt.Println(emoji)
}

goquery insert newline after text found

I am using "github.com/PuerkitoBio/goquery" to parse numbers from inside 'value' tag in the html document like below
<tab>
<value>1,2,3</value>
<value>2,4,6</value>
<value>5,6,7</value>
</tab>
and what I get with the code snippet below is 1,2,32,4,65,6,7, i.e. without newlines, which is not what I want. I need the three 'value' strings separately (to append each of them to a slice later), not one concatenated string.
func parseGoQuery(b io.Reader) {
	doc, err := goquery.NewDocumentFromReader(b)
	if err != nil {
		panic(err)
	}
	fmt.Println(doc.Find("tab").Find("value").Text())
}
Try this:
doc.Find("tab").Find("value").Each(func(_ int, value *goquery.Selection) {
fmt.Println(value.Text())
})
The above code iterates over all value elements and prints the text of each element on its own line, which is what you want.
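If the end goal is a slice, a small sketch along the same lines (parseValues is just an illustrative name, not part of goquery):

import (
	"io"

	"github.com/PuerkitoBio/goquery"
)

func parseValues(b io.Reader) ([]string, error) {
	doc, err := goquery.NewDocumentFromReader(b)
	if err != nil {
		return nil, err
	}
	var values []string
	// Collect each value element's text into the slice.
	doc.Find("tab").Find("value").Each(func(_ int, s *goquery.Selection) {
		values = append(values, s.Text())
	})
	return values, nil // e.g. ["1,2,3" "2,4,6" "5,6,7"]
}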

One HTTP Delimiter to Rule Them All

I have a configuration file in the format of blah = foo. I would like to have entries like:
http = https://stackoverflow.com/questions,header keys and values,string to search for.
I'm okay with requiring that the URL be URL-encoded. Is there any ASCII character I can use that won't be a valid value anywhere in the above example (after splitting once on =)? My example uses a comma, but I think that is valid in a header value?
After poring over some RFCs, I figure someone who is more familiar with this can save me some pain.
Also, my project is in Go, if there is an existing standard library package that might help with this...
You can use a non-ASCII character and URL-encode the values, for example using the middle dot (compose + ^ + . on Linux):
const sep = `·`
const t = `http = https://stackoverflow.com/questions·string to search for·header=value·header=value`

func parseLine(line string) (name, url, search string, headers []string) {
	idx := strings.Index(line, " = ")
	if idx == -1 {
		return
	}
	name = line[:idx]
	parts := strings.Split(line[idx+3:], sep)
	if len(parts) < 3 {
		// handle invalid line
		return
	}
	url, search = parts[0], parts[1]
	headers = parts[2:]
	return
}
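A quick usage sketch, assuming the parseLine and t definitions above plus the "fmt" and "strings" imports:

func main() {
	// name="http", url="https://stackoverflow.com/questions"
	name, url, search, headers := parseLine(t)
	fmt.Printf("name=%q url=%q search=%q headers=%v\n", name, url, search, headers)
}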
That said, using JSON is probably the best and most maintainable long-term option.
For completeness' sake, a JSON version would look like:
type Site struct {
	Url     string
	Query   string
	Headers map[string]string
}

const t = `[
	{
		"url": "https://stackoverflow.com/questions",
		"query": "string to search for",
		"headers": {"header": "value", "header2": "value"}
	},
	{
		"url": "https://google.com",
		"query": "string to search for",
		"headers": {"header": "value", "header2": "value"}
	}
]`

func main() {
	var sites []Site
	err := json.Unmarshal([]byte(t), &sites)
	fmt.Printf("%+v (%v)\n", sites, err)
}
Essentially you have to look at RFC 3986, RFC 7230 and friends to see what can occur.
URIs are simple if you insist on their being valid: just use the space character or "<" and ">" as delimiters.
Field values can be almost anything; HTTP forbids control characters though, so you might be able to use horizontal TABs (if you're ok with getting into trouble with invalid field values).
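For instance, a minimal sketch of splitting on a TAB, assuming the value portion of the config line uses TABs as separators (the field layout here is illustrative):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical config value: url<TAB>search string<TAB>header
	value := "https://stackoverflow.com/questions\tstring to search for\tHeader: value"
	parts := strings.SplitN(value, "\t", 3)
	fmt.Printf("url=%q search=%q headers=%q\n", parts[0], parts[1], parts[2])
}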

xQuery substring problem

I now have a full path for a file as a string like:
"/db/Liebherr/Content_Repository/Techpubs/Topics/HyraulicPowerDistribution/Released/TRN_282C_HYD_MOD_1_Drive_Shaft_Rev000.xml"
However, now I need only the folder path, i.e. the above string without the content after the last slash:
"/db/Liebherr/Content_Repository/Techpubs/Topics/HyraulicPowerDistribution/Released/"
But it seems that the substring() function in XQuery only has the forms substring(string, start, len) and substring(string, start); I am trying to figure out a way to find the last occurrence of the slash, but no luck.
Could experts help? Thanks!
Try the tokenize() function to split the string into its component parts, and then reassemble it using everything but the last part.
let $full-path := "/db/Liebherr/Content_Repository/Techpubs/Topics/HyraulicPowerDistribution/Released/TRN_282C_HYD_MOD_1_Drive_Shaft_Rev000.xml",
    $segments := tokenize($full-path, "/")[position() ne last()]
return
    concat(string-join($segments, '/'), '/')
For more details on these functions, check out their reference pages:
fn:tokenize()
fn:string-join()
fn:replace can do the job with a regular expression:
replace("/db/Liebherr/Content_Repository/Techpubs/Topics/HyraulicPowerDistribution/Released/TRN_282C_HYD_MOD_1_Drive_Shaft_Rev000.xml",
"[^/]+$",
"")
This can be done even with a single XPath 2.0 (subset of XQuery) expression:
substring($fullPath,
          1,
          string-length($fullPath) - string-length(tokenize($fullPath, '/')[last()])
)
where $fullPath should be substituted with the actual string, such as:
"/db/Liebherr/Content_Repository/Techpubs/Topics/HyraulicPowerDistribution/Released/TRN_282C_HYD_MOD_1_Drive_Shaft_Rev000.xml"
The following code tokenizes, drops the last token, appends an empty string in its place, and joins the parts back with slashes.
string-join(
    (
        tokenize(
            "/db/Liebherr/Content_Repository/Techpubs/Topics/HyraulicPowerDistribution/Released/TRN_282C_HYD_MOD_1_Drive_Shaft_Rev000.xml",
            "/"
        )[position() ne last()],
        ""
    ),
    "/"
)
It seems to return the desired result on try.zorba-xquery.com. Does this help?
