How to replace comma with a dot in GTM for JSON structured data? - google-tag-manager

I am a noob with structured data implementation and don't have any code knowledge.
I have been looking for a week for how to solve a warning about price in the Google structured data testing tool.
My prices use a comma, which is not accepted by Google.
Checking http://schema.org/price, it says: "Use '.' (Unicode 'FULL STOP' (U+002E)) rather than ',' to indicate a decimal point. Avoid using these symbols as a readability separator."
I have a CSS selector element (#PdtPrixRef) stored in a variable "Product-price" that returns the price with a comma ("12,5"), but I can't find how to replace it in my structured data with the dot value ("12.5")... Can someone help me?
Here is my current script (screenshot: my actual GTM script).
Should I add something to my script, or should I create a variable (Custom JavaScript)?
I think it's something like
value.replace(",", ".")
but I don't know how to write the full proper function from beginning to end...

Yes, you can just create a Custom JavaScript variable.
Here is the code:
function() {
  // Grab the GTM variable that holds the price with a comma, e.g. "12,5"
  var price = {{Product-price}};
  // Swap the comma for a dot so the structured data gets "12.5"
  return price.replace(",", ".");
}
Then use this variable in your JSON-LD script.
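For example, if that new Custom JavaScript variable is called "Product-price-dot" (the name is just an example), a Custom HTML tag with the JSON-LD could reference it like this (the currency and the other fields are placeholders for whatever your product markup already uses):
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "offers": {
    "@type": "Offer",
    "price": "{{Product-price-dot}}",
    "priceCurrency": "EUR"
  }
}
</script>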

Related

Why can't the fn:substring-after XQuery function be used inside ML TDE?

In my MarkLogic database, we have documents with distributor codes like 'DIST:5012' (DIST:XXXX, where XXXX is a four-digit number).
Currently, in my TDE, the code below works well.
However, instead of concatenating all the raw distributor codes, I want to concatenate only the number part. I used the fn:substring-after XQuery function, but it doesn't work: the distributorCode column no longer shows up in the SQL view. (The code below does not work.)
What is wrong, and how can I fix it?
Both fn:substring-after and fn:string-join are listed on the TDE Dialect page:
https://docs.marklogic.com/9.0/guide/app-dev/TDE#id_99178
substring-after() expects a single string as input, not a sequence of strings.
To demonstrate, this will not work:
let $dist := ("DIST:5012", "DIST:5013")
return substring-after($dist, "DIST:")
This will:
for $dist in ("DIST:5012", "DIST:5013")
return substring-after($dist, "DIST:")
I need to double-check which XPath expressions will work in a TDE, but you might be able to change it to apply the substring-after() function in the last step:
fn:string-join( distributors/distributor/urn/substring-after(., 'DIST:'), ';')
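If that XPath-only form turns out not to be allowed by the TDE dialect, the underlying logic is easy to verify in Query Console first; here is a standalone sketch (plain XQuery, not TDE syntax) that joins the numeric parts with ';':
fn:string-join(
  (for $code in ("DIST:5012", "DIST:5013")
   return fn:substring-after($code, "DIST:")),
  ";"
)
(: returns "5012;5013" :)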

Read embedded data that starts with numbers?

I have embedded data that I have imported into Qualtrics using a web service block. The data comes from a .json file and reads something like 0.male, 1.male, 2.male, etc.
I have been trying to read this into my survey using the Qualtrics.SurveyEngine.getEmbeddedData method but without luck.
I'm trying to do something that takes the form:
let n = 2
Qualtrics.SurveyEngine.getEmbeddedData(n + ".male")
but this has been returning a NULL result. Is it possible to read embedded data that starts with a number?
Also see:
https://community.qualtrics.com/XMcommunity/discussion/15991/read-in-embedded-variables-using-a-loop#latest
The issue isn't the number, it is the dot. getEmbeddedData() doesn't work when the name contains a dot. See https://stackoverflow.com/a/51802695/4434072 for possible alternatives.
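One possible workaround, assuming you can control how the fields are named when they are stored (that part is an assumption on my side): avoid the dot altogether, e.g. save the values as 2_male instead of 2.male, and then the lookup works as expected:
Qualtrics.SurveyEngine.addOnload(function() {
  var n = 2;
  // Hypothetical field name "2_male" (underscore instead of dot),
  // since getEmbeddedData() fails when the name contains a dot.
  var value = Qualtrics.SurveyEngine.getEmbeddedData(n + "_male");
  console.log(value);
});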

U-SQL get filename of input and use for output

I have a filename of test.csv and I want the output to be test.txt.
I can extract the filename of the input but don't know how to use it for the output?
OUTPUT @result TO "/output/{filename}.txt"
USING Outputters.Text(outputHeader:false, quoting:false);
The filename is in @result.
This feature isn't supported as of yet.
Does anyone have a workaround?
U-SQL How can I get the current filename being processed to add to my extract output?
Ideally I would like dd-mm-yy-test.txt.
How do I append the day, month, and year?
I am using U-SQL for this.
Thanks
Let me address both issues you're laying out in this question:
To use the same output name as the input, there would have to be a way to read rowset values into U-SQL variables, which I'm pretty sure cannot be done, given that the language is built around processing many files at once.
To append a date into the output you would only need to declare the current datetime at some point and then use it to write the output file name like this:
DECLARE @now DateTime = DateTime.Now;
OUTPUT @output TO "/tests/output/" + @now.ToString("dd-MM-yyyy") + "-output.csv"
USING Outputters.Csv();
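On the first point: although the input file name can't be fed back into the OUTPUT path, it can at least be captured per row during EXTRACT through a file-set virtual column, in case you need it inside the data itself (the column names here are only illustrative):
@data =
    EXTRACT col1 string,
            col2 string,
            filename string // virtual column populated from the {filename} pattern
    FROM "/input/{filename}.csv"
    USING Extractors.Csv();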

Using Marklogic Xquery data population

I have the data as below manner.
<Status>Active Leave Terminated</Status>
<date>05/06/2014 09/10/2014 01/10/2015</date>
I want to get the data as in the below manner.
<status>Active</status>
<date>05/06/2014</date>
<status>Leave</status>
<date>09/10/2014</date>
<status>Terminated</status>
<date>01/10/2015</date>
Please help me with the query to retrieve the data as specified above.
Well, you have a string and want to split it at the whitespace. That's what tokenize() is for, and \s matches a whitespace character. To get the corresponding date you can get the current position in the for loop using at. Together it looks something like this (note that I assume the input data is the current context item):
let $dates := tokenize(date, "\s+")
for $status at $pos in tokenize(Status, "\s+")
return (
<status>{$status}</status>,
<date>{$dates[$pos]}</date>
)
You did not indicate whether your data is on the file system or already loaded into MarkLogic. It's also not clear if this is something you need to do once on a small set of data or on an on-going basis with a lot of data.
If it's on the file system, you can transform it as it is being loaded. For instance, MarkLogic Content Pump can apply a transformation during load.
If you have already loaded the content and you want to transform it in place, you can use Corb2.
If you have a small amount of data, then you can just loop across it using Query Console.
Regardless of how you apply the transformation code, dirkk's answer shows how you need to change it. If you are updating content already in your database, you'll use xdmp:node-delete() on the original Status and date elements and xdmp:node-insert-child() for the new ones.
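As a rough sketch of that in-place update, assuming the documents sit in a "records" collection and have a <record> root element holding the Status and date elements (both the collection name and the root element name are assumptions):
xquery version "1.0-ml";
for $record in fn:collection("records")/record
let $dates := fn:tokenize($record/date, "\s+")
return (
  (: insert one status/date pair per token :)
  for $status at $pos in fn:tokenize($record/Status, "\s+")
  return (
    xdmp:node-insert-child($record, <status>{$status}</status>),
    xdmp:node-insert-child($record, <date>{$dates[$pos]}</date>)
  ),
  (: then remove the original space-separated elements :)
  xdmp:node-delete($record/Status),
  xdmp:node-delete($record/date)
)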

Retrieve whole lyrics from URL

I am trying to retrieve the whole lyrics of a band from the web.
I have noticed that they build URLs using ".../firstletter/bandname/songname.html"
Here is an example.
http://www.azlyrics.com/lyrics/acdc/itsalongwaytothetopifyouwannarocknroll.html
I was thinking about creating a function that would read the URLs with read.csv.
That part was kind of easy, because I can get the titles with a simple copy-paste and save them as .csv. Then I can pass that vector through the function to construct each URL.
But I tried to read the first one just to see what it looks like, and I found that there would be too much "cleaning the data" if my goal is to build a csv file with each lyric.
x <-read.csv(url("http://www.azlyrics.com/lyrics/acdc/itsalongwaytothetopifyouwannarocknroll.html"))
I think my approach is not the best (or maybe I need a better data cleaning strategy).
The HTML page has a tell on where the lyrics begin:
Usage of azlyrics.com content by any third-party lyrics provider is prohibited by our licensing agreement. Sorry about that.
Taking advantage of that, you can detect this string, and then read everything up to the end of the div:
m <- readLines("http://www.azlyrics.com/lyrics/acdc/itsalongwaytothetopifyouwannarocknroll.html")
giveaway <- "Sorry about that."
#You can add the full line in case you think one of the lyrics might have this sentence in it.
start <- grep(giveaway, m) + 1 # Where the lyric starts
end <- grep("</div>", m[start:length(m)])[1] + start
# Take the first </div> after the start of the lyric, and then fix the position by adding the start
lyrics <- paste(gsub("<br>|</div>", "", m[start:end]), collapse = "\n")
#This is just an example of how to clear the remaining tags and join the text.
And then:
> cat(lyrics) #using cat() prints the line breaks
Ridin' down the highway
Goin' to a show
Stop in all the byways
Playin' rock 'n' roll
.
.
.
Well it's a long way
It's a long way, you should've told me
It's a long way, such a long way
Assuming that "cleaning the data" means parsing through HTML tags, I recommend using a DOM scraping library that extracts only the lyric text from the page and saves it to CSV, a database, or wherever. That way you wouldn't have to do any data cleaning. I don't know what programming language you're using, but a simple Google search will show you plenty of DOM querying and parsing libraries for any language.
Here is an example with PHP
http://simplehtmldom.sourceforge.net/manual.htm
$html = file_get_html('http://www.azlyrics.com/lyrics/acdc/itsalongwaytothetopifyouwannarocknroll.html');
// The lyrics sit in the div that follows the "ringtone" div
$lyrics = $html->find('div.ringtone', 1)->next_sibling();
print($lyrics->innertext);
Now you have the lyrics; save them. (Code not tested.)
If you're using R, use the rvest library. You will be able to query the DOM and extract the lyrics easily.
https://github.com/hadley/rvest
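Here is a minimal sketch with rvest; the CSS selector is an assumption, so inspect the page to confirm which div actually holds the lyrics:
library(rvest)

page <- read_html("http://www.azlyrics.com/lyrics/acdc/itsalongwaytothetopifyouwannarocknroll.html")
# On azlyrics pages the lyrics appear to sit in a div with no class attribute
# inside the main column; adjust the selector if the markup differs.
lyrics <- page %>%
  html_node("div.col-xs-12.col-lg-8 > div:not([class])") %>%
  html_text()
cat(lyrics)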
