Recently our ARM template deployments are failing because the output gets percent-escaped.
We need the Logic App URL as an output of its ARM template, so we output it as:
"outputs": {
"url": {
"type": "object",
"value": {
"key": "[concat(variables('LogicAppShortName'),'-url')]",
"value": "[listCallbackUrl(concat(resourceId('Microsoft.Logic/workflows/', variables('LogicAppName')), '/triggers/manual'), '2016-06-01').value]"
}
}
}
The ARM template output still worked correctly a few days/weeks ago and resulted in
...?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=...
but now it results in
...?api-version=2016-06-01&sp=%252Ftriggers%252Fmanual%252Frun&sv=1.0&sig=...
thus the %2F gets escaped again and becomes %252F, making the URL invalid.
The ARM deployment also logs more lines now. Previously it was just:
Updated output variable 'ArmOutputs', which contains the outputs section of the current deployment object in string format.
but now there are four lines, suggesting something has changed:
Updated output variable 'ArmOutputs.url.type', which contains the outputs section of the current deployment object in string format.
Updated output variable 'ArmOutputs.url.value.key', which contains the outputs section of the current deployment object in string format.
Updated output variable 'ArmOutputs.url.value.value', which contains the outputs section of the current deployment object in string format.
Updated output variable 'ArmOutputs', which contains the outputs section of the current deployment object in string format.
Does anyone have any idea how to fix this bug/"feature"?
I have run into the same issue and fixed it by setting a new pipeline variable DECODE_PERCENTS to true.
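A minimal sketch of doing that in a YAML pipeline (assuming an azure-pipelines.yml; the variable can also be added in the classic pipeline UI):

# azure-pipelines.yml (sketch) -- restores decoding of percent-encoded
# characters in logging-command output such as ArmOutputs
variables:
  DECODE_PERCENTS: true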
I have a JSON file that is loaded by two different Autoloaders.
One uses schema evolution and, apart from replacing spaces in the JSON property names, writes the JSON directly to a Delta table; I can see all the values are there properly.
In the second one I am mapping to a defined schema and only use a subset of the properties, so I use a lot of withColumn calls and then a select to narrow down to my defined column list.
Autoloader definition:
df = (spark
    .readStream
    .format('cloudFiles')
    .option('cloudFiles.format', 'json')
    .option('multiLine', 'true')
    .option('cloudFiles.schemaEvolutionMode', 'rescue')
    .option('cloudFiles.includeExistingFiles', 'true')
    .option('cloudFiles.schemaLocation', bronze_schema)
    .option('cloudFiles.inferColumnTypes', 'true')
    .option('pathGlobFilter', '*.json')
    .load(upload_path)
    .transform(lambda df: remove_spaces_from_columns(df))
    .withColumn(...
Writer:
df.writeStream.format('delta') \
    .queryName(al_stream_name) \
    .outputMode('append') \
    .option('checkpointLocation', checkpoint_path) \
    .option('mergeSchema', 'true') \
    .trigger(once=True) \
    .table(bronze_table)
The issue is that some of the source columns load OK and I get their values, while others are constantly null in the output table.
For example:
.withColumn('vl_rating', col('risk_severity.value')) # works
.withColumn('status', col('status.name')) # always null
...
.select(
'rating',
'status',
...
The JSON is quite simple: these are all string values and they are always populated. The same code works against a similar JSON file in another Autoloader without issue.
I have run out of ideas for fault-finding on this. My imports are minimal, and outside of Autoloader the JSON loads fine, e.g.:
%python
import pyspark.sql.functions as psf
jsontest = spark.read.option('inferSchema','true').json('dbfs:....json')
df = jsontest.withColumn('status', psf.col('status.name')).select('status')
display(df)
This results in the values of the status.name property of the JSON file.
Any ideas would be greatly appreciated.
I have found, generally, what is causing this. Interesting cause!
I am scanning a whole directory of JSON files, and the schema evolves over time (as expected). But when I clear out the Autoloader schema and checkpoint directories and only scan the latest JSON file, it all works correctly.
So what I surmise is that something in schema evolution with the older JSON files causes Autoloader to get into a state where it will not put certain properties into the stream passed to the writer.
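One thing that can help narrow it down (a sketch only, assuming _rescued_data ends up in the bronze table): with schemaEvolutionMode set to 'rescue', values that no longer match the tracked schema are diverted into the _rescued_data column instead of their own columns, so checking where it is populated shows which properties Autoloader stopped materialising.

import pyspark.sql.functions as psf

# Sketch: rows with a non-null _rescued_data point at properties that were
# rescued rather than written to their own columns
rescued = (spark.table(bronze_table)
                .select('_rescued_data')
                .where(psf.col('_rescued_data').isNotNull()))
display(rescued)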
If anyone has any recommendation on how to implement some data quality analysis in an Autoloader I would be most appreciative if you would share.
I am on Terraform 0.14 and I am trying to add labels to my template file, which should generate YAML:
Template File:
labels:
${labels}
Code:
locals {
  labels = merge(
    var.labels,
    map(
      "module", basename(abspath(path.module)),
      "module_version", var.module_version
    )
  )

  prometheus_config = templatefile("${path.module}/prometheus.tmpl", {
    labels = indent(8, yamlencode(local.labels))
  })
}
When I try to add the labels with an indent of 8, this is what the template renders, causing YAML errors:
Error Output:
labels:
"module": "my_module"
        "module_version": "1.0"
As you can see, module_version is indented by 8, which is correct, but the module line is not indented at all.
I tried many things, like moving ${labels} around in the template and using different indentation levels, but nothing seems to work.
It is for this reason that the templatefile documentation recommends using yamlencode for the entire data structure, rather than trying to concatenate bits of YAML together using just string templates. That way the yamlencode function can guarantee you a correctly-formatted result and you only have to produce a suitable data structure.
In your case, that would involve replacing the template contents with the following:
${yamlencode({
  labels = labels
})}
...and then replacing the prometheus_config definition with the following:
locals {
  prometheus_config = templatefile("${path.module}/prometheus.tmpl", {
    labels = local.labels
  })
}
Notice that the yamlencode call is now inside the template, and it covers the entire YAML document rather than just a fragment of it.
With a simple test configuration I put together with some hard-coded values for the variables you didn't show, I got the following value for local.prometheus_config:
"labels":
"module": "example"
"module_version": "1.0"
If this was a full example of the YAML you are aiming to generate then I might also consider simplifying by just inlining the yamlencode call directly inside the local value definition, and not having a separate template file at all:
locals {
  prometheus_config = yamlencode({
    labels = local.labels
  })
}
If the real YAML is much larger or is likely to grow larger later then I'd probably still keep that templatefile call, but I just wanted to note this for completeness, since there's more than one way to get this done.
Using Terraform 0.14, I was not able to transform lists or maps into YAML with yamlencode. Every option I tried from the suggested answer produced results with the wrong indentation, maybe due to the many indentation levels in the file; I am not sure. So I dropped yamlencode from the solution.
Solution:
I decided to use an inline solution: lists I transform to a string with join, and maps I encode with jsonencode:
# var.list is a list in terraform
# local.labels is a map in terraform
thanos_config = templatefile("${path.module}/thanos.tmpl", {
  urls   = join(",", var.list),
  labels = jsonencode(local.labels)
})
The resulting plan output is a bit ugly but it works.
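For reference, the template then just interpolates those strings as-is; something along these lines (a hypothetical sketch, since the real thanos.tmpl is not shown):

# thanos.tmpl (hypothetical sketch)
labels: ${labels}   # jsonencode output is valid inline YAML
urls: [${urls}]     # comma-joined string rendered as a YAML flow sequence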
I am currently creating a REST API with RestRserve.
While trying to add a logger to my application I ran into this problem.
By setting the log level as: application$logger$set_log_level("error"),
my console output on error has a JSON structure like:
{
  "timestamp": "2020-09-22 18:25:37.425642",
  "level": "ERROR",
  "name": "Application",
  "pid": 38662,
  "msg": "",
  "context": {
    "request_id": "alphanumeric string",
    "message": { here is the description of the error }
  }
}
I set up a printer function following the instructions on the site, but among the available fields (timestamp, level, logger_name, pid, message), "context" is not present, so basically, by just printing the message, my logs end up as empty files.
This is my printer function:
application$logger$set_printer(FUN = function(timestamp, level, logger_name, pid, message, ...) {
  write(message, file = paste("/Log/Monitor_", pid, timestamp, ".txt", sep = ""))
  # attempting to print "context" raises an error!
})
Is there a way to print the "context" field present in the console output to a file?
Thanks in advance; any suggestions would be very helpful.
It is always a good idea to take a look at the source. There is an extra ... argument through which the context is passed.
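For example, a printer along these lines should capture it (a sketch only; the exact contents of ... are an assumption based on the console output shown above):

library(jsonlite)

application$logger$set_printer(FUN = function(timestamp, level, logger_name, pid, message, ...) {
  extra <- list(...)  # expected to contain the 'context' element
  entry <- c(
    list(timestamp = timestamp, level = level, name = logger_name,
         pid = pid, msg = message),
    extra
  )
  write(toJSON(entry, auto_unbox = TRUE, force = TRUE),
        file = paste("/Log/Monitor_", pid, "_", timestamp, ".txt", sep = ""),
        append = TRUE)
})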
Generally it is advisable to use the lgr package for user-level logging.
I am trying to use variables from a Python variable file.
I think I am doing everything as described in the user manual, yet the variables remain unavailable to me.
This is the variable file:
TEST_VAR = 'Michiel'

def get_variables(environment='UAT'):
    if environment.upper() == 'INT':
        ENV = {"NAME": "Bol.com INT",
               "BROWSER": "safari",
               "URL": "www.bol.com"}
    else:
        ENV = {"NAME": "Bol.com UAT",
               "BROWSER": "firefox",
               "URL": "www.bol.com"}
    return ENV
I import this like this:
*** Settings ***
Variables    Variables.py    ${ENVIRONMENT}
Library      Selenium2Library
Resource     ../PO/MainPage.robot
Resource     ../PO/LoginPage.robot

*** Variables ***
${ENVIRONMENT}

*** Keywords ***
Initialize suite
    log    ${TEST_VAR}
    log    start testing on ${ENV.NAME}

Start application
    open browser    ${ENV.URL}    ${ENV.BROWSER}

Stop application
    close browser
Both files are at the same level in the folder structure.
Yet the variables from the file are not available to me. Not even the normal variable.
Can someone tell me what I am doing wrong here?
Must be something small that I'm forgetting.
Many thanks.
When you use a variable file like this, you don't end up with a variable named ${ENV}. Instead, every key in the dictionary that you return becomes a variable.
From the robot framework user guide (emphasis added):
An alternative approach for getting variables is having a special get_variables function (also camelCase syntax getVariables is possible) in a variable file. If such a function exists, Robot Framework calls it and expects to receive variables as a Python dictionary or a Java Map with variable names as keys and variable values as values.
In this specific case that means that you will end up with the variables ${NAME}, ${BROWSER} and ${URL}.
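For example, a keyword would then use those flattened names directly (a minimal sketch):

Start application
    open browser    ${URL}    ${BROWSER}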
If you want to set a variable named ${ENV}, you will have to make that part of the returned dictionary:
# Variables.py
from robot.utils import DotDict

def get_variables(environment="UAT"):
    ...
    return {
        "ENV": DotDict(ENV)
    }
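A fuller sketch of the variable file along those lines, combining the original data with the DotDict approach (note that, as far as I can tell, only the dictionary returned by get_variables is used, so TEST_VAR also has to be part of it if you want to keep it):

# Variables.py -- sketch; returns both the dotted ${ENV} and ${TEST_VAR}
from robot.utils import DotDict

def get_variables(environment='UAT'):
    if environment.upper() == 'INT':
        ENV = {"NAME": "Bol.com INT", "BROWSER": "safari", "URL": "www.bol.com"}
    else:
        ENV = {"NAME": "Bol.com UAT", "BROWSER": "firefox", "URL": "www.bol.com"}
    # only what is returned here becomes a Robot Framework variable
    return {"TEST_VAR": 'Michiel', "ENV": DotDict(ENV)}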
I've written some PROGRESS code that outputs some data to a user-defined file. The data itself isn't important; the output process works fine. It's basically:
DEFINE VARIABLE filePath AS CHARACTER NO-UNDO.
UPDATE filePath.  /* User types in something like C:\UserAccount\New.txt */
OUTPUT TO VALUE(filePath).
This works fine; a .txt file is created in the given directory. My question is:
Does Progress have any functionality that would allow me to check whether an input file path is valid? (Specifically, whether the user has entered a valid directory, and whether they have permission to create a file in the directory they've chosen.)
Any input or feedback would be appreciated.
FILE-INFO
Using the system handle FILE-INFO gives you a lot of information. It also works on directories.
FILE-INFO:FILE-NAME = "c:\temp\test.p".

DISPLAY
    FILE-INFO:FILE-NAME
    FILE-INFO:FILE-CREATE-DATE
    FILE-INFO:FILE-MOD-DATE
    FILE-INFO:FILE-CREATE-TIME
    FILE-INFO:FILE-MOD-TIME
    FILE-INFO:FILE-SIZE
    FILE-INFO:FILE-TYPE
    FILE-INFO:FULL-PATHNAME
    WITH FRAME f1 1 COLUMN SIDE-LABELS.
A simple check for existing directory with write rights could be something like:
FUNCTION dirOK RETURNS LOGICAL (INPUT pcDir AS CHARACTER):
FILE-INFO:FILE-NAME = pcDir.
IF INDEX(FILE-INFO:FILE-TYPE, "D") > 0
AND INDEX(FILE-INFO:FILE-TYPE, "W") > 0 THEN
RETURN TRUE.
ELSE
RETURN FALSE.
END FUNCTION.
FILE-INFO:FILE-TYPE will start with a D for directories and an F for plain files. It also includes information about read and write rights. Check the help for more info. If the file doesn't exist, basically all attributes except FILE-NAME will be empty or unknown (?).
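Used with the path from the question, the check could look something like this (a sketch only):

/* Sketch: validate the target directory before redirecting output */
IF dirOK("C:\UserAccount") THEN
    OUTPUT TO VALUE("C:\UserAccount\New.txt").
ELSE
    MESSAGE "Directory missing or not writable" VIEW-AS ALERT-BOX ERROR.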
Edit: it seems that FILE-TYPE returns W in some cases even if there are no actual write rights in that directory, so you might need to handle this through error processing instead.
ERROR PROCESSING
OUTPUT TO VALUE("f:\personal\test.txt").
PUT UNFORMATTED "Test" SKIP.
OUTPUT CLOSE.

CATCH eAnyError AS Progress.Lang.Error:
    /* Here you could check specifically for error no 98, indicating a problem opening the file */
    MESSAGE
        "Error message and number retrieved from error object..."
        eAnyError:GetMessage(1)
        eAnyError:GetMessageNum(1) VIEW-AS ALERT-BOX BUTTONS OK.
END CATCH.

FINALLY:
END FINALLY.
SEARCH
When checking for a single file, the SEARCH function will work. If the file exists it returns the complete path. It does, however, not work on directories, only files. If you SEARCH without a complete path, e.g. SEARCH("test.p"), it will search through the directories set in the PROPATH environment variable and return the first matching entry with its complete path. If there's no match it will return the unknown value (?).
Syntax:
IF SEARCH("c:\temp\test.p") = ? THEN
    MESSAGE "No such file" VIEW-AS ALERT-BOX ERROR.
ELSE
    MESSAGE "OK" VIEW-AS ALERT-BOX INFORMATION.
SYSTEM-DIALOG GET-FILE character-field has a MUST-EXIST option if you want to use a dialogue to get a filename/directory from the user. Example from the manual:
DEFINE VARIABLE procname AS CHARACTER NO-UNDO.
DEFINE VARIABLE OKpressed AS LOGICAL INITIAL TRUE.

Main:
REPEAT:
    SYSTEM-DIALOG GET-FILE procname
        TITLE "Choose Procedure to Run ..."
        FILTERS "Source Files (*.p)" "*.p",
                "R-code Files (*.r)" "*.r"
        MUST-EXIST
        USE-FILENAME
        UPDATE OKpressed.
    IF OKpressed = TRUE THEN
        RUN VALUE(procname).
    ELSE
        LEAVE Main.
END.