I am learning how to do instrumentation using the Kamon library.
This is my build.sbt:
libraryDependencies ++= Seq(
"io.kamon" %% "kamon-core" % "0.6.7"
)
This is my plugins.sbt (in the project folder):
addSbtPlugin("io.kamon" % "sbt-aspectj-runner" % "1.0.1")
This is my code
import kamon.Kamon

object KamonTest extends App {
  Kamon.start()
  val counter = Kamon.metrics.counter("foo")
  1 to 100000 foreach { x =>
    Thread.sleep(10)
    counter.increment()
  }
  readLine()
  print("press any key to exit")
  readLine()
  Kamon.shutdown()
}
Now when I run this app, launch JMC and go into the MBean browser, I see this:
I cannot find the counter "foo" that I defined in my code.
I was able to solve the issue with the help of the Kamon Gitter channel.
In order to publish metrics to the JMX console, we need the following two additional dependencies in build.sbt:
"io.kamon" %% "kamon-scala" % "0.6.7",
"io.kamon" %% "kamon-jmx" % "0.6.7"
We also need the following entries in application.conf
kamon.jmx {
subscriptions {
histogram = [ "**" ]
min-max-counter = [ "**" ]
gauge = [ "**" ]
counter = [ "**" ]
trace = [ "**" ]
trace-segment = [ "**" ]
system-metric = [ "**" ]
http-server = [ "**" ]
kamon-mxbeans = [ "**" ]
}
}
kamon.modules {
kamon-mxbeans {
auto-start = yes
requires-aspectj = no
extension-class = "kamon.jmx.extension.JMXMetricImporter"
}
}
kamon.kamon-mxbeans {
mbeans = [
{ "name": "example-mbean", "jmxQuery": "example:type=myBean,name=*",
"attributes": [
{ "name": "foo", "type": "counter" }
]
}
],
identify-delay-interval-ms = 1000,
identify-interval-ms = 1000,
value-check-interval-ms = 1000
}
We are trying to do a DynamoDB migration from a prod account to a stage account.
In the source account, we are using the "Export" feature of DDB to put the compressed .json.gz files into the destination S3 bucket.
We have written a Glue script that reads the exported .json.gz files and writes them to the DDB table.
We are making the code generic, so that we can migrate any DDB table from the prod account to the stage account.
As part of that process, while testing we are facing issues when trying to write NUMBER SET (NS) data to the target DDB table.
The following sample snippet raises a ValidationException when trying to insert into DDB:
from decimal import Decimal

def number_set(datavalue):
    # datavalue will be ['0', '1']
    set_of_values = set()
    for value in datavalue:
        set_of_values.add(Decimal(value))
    return set_of_values
When running the code, we get the following ValidationException:
An error occurred while calling o82.pyWriteDynamicFrame. Supplied AttributeValue is empty, must contain exactly one of the supported datatypes (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: UKEU70T0BLIKN0K2OL4RU56TGVVV4KQNSO5AEMVJF66Q9ASUAAJG; Proxy: null)
However, if we use int(value) instead of Decimal(value), no ValidationException is thrown and the job succeeds.
I suspect that write_dynamic_frame_from_options tries to infer the schema from the values each element contains: if the element holds int values, the datatype is inferred as "NS", but if the element holds only Decimal values, it is not able to infer the datatype.
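As a possible workaround (just a sketch based on that observation, not a confirmed fix), we could emit plain ints whenever a number has no fractional part and fall back to Decimal only otherwise:
from decimal import Decimal

def number_set(datavalue):
    # datavalue will be e.g. ['0', '1']
    # Sketch: use int for whole numbers so the writer can infer the NS type,
    # and keep Decimal only for true fractional values.
    set_of_values = set()
    for value in datavalue:
        dec = Decimal(value)
        set_of_values.add(int(dec) if dec == dec.to_integral_value() else dec)
    return set_of_values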
The glue job we have written is
dyf = glue_context.create_dynamic_frame_from_options(
    connection_type="s3",
    connection_options={
        "paths": [file_path]
    },
    format="json",
    transformation_ctx="dyf",
    recurse=True,
)

def number_set(datavalue):
    list_of_values = []
    for value in datavalue:
        list_of_values.append(Decimal(value))
    print("list of values ")
    print(list_of_values)
    return set(list_of_values)

def parse_list(datavalue):
    list_of_values = []
    for object in datavalue:
        list_of_values.append(generic_conversion(object))
    return list_of_values

def generic_conversion(value_dict):
    for datatype, datavalue in value_dict.items():
        if datatype == 'N':
            value = Decimal(datavalue)
        elif datatype == 'S':
            value = datavalue
        elif datatype == 'NS':
            value = number_set(datavalue)
        elif datatype == 'BOOL':
            value = datavalue
        elif datatype == 'M':
            value = construct_map(datavalue)
        elif datatype == 'B':
            value = datavalue.encode('ascii')
        elif datatype == 'L':
            value = parse_list(datavalue)
    return value

def construct_map(row_dict):
    ddb_row = {}
    for key, value_dict in row_dict.items():
        # value is a dict with key as N or S
        # if N then use Decimal type
        ddb_row[key] = generic_conversion(value_dict)
    return ddb_row

def map_function(rec):
    row_dict = rec["Item"]
    return construct_map(row_dict)

mapped_dyF = Map.apply(frame=dyf, f=map_function, transformation_ctx="mapped_dyF")

datasink2 = glue_context.write_dynamic_frame_from_options(
    frame=mapped_dyF,
    connection_type="dynamodb",
    connection_options={
        "dynamodb.region": "us-east-1",
        "dynamodb.output.tableName": destination_table,
        "dynamodb.throughput.write.percent": "0.5"
    },
    transformation_ctx="datasink2"
)
Can anyone help us figure out how we can get unblocked here?
This is the record that we are trying to insert:
{
"region": {
"S": "to_delete"
},
"date": {
"N": "20210916"
},
"number_set": {
"NS": [
"0",
"1"
]
},
"test": {
"BOOL": false
},
"map": {
"M": {
"test": {
"S": "value"
},
"test2": {
"S": "value"
},
"nestedmap": {
"M": {
"key": {
"S": "value"
},
"nestedmap1": {
"M": {
"key1": {
"N": "0"
}
}
}
}
}
}
},
"binary": {
"B": "QUFBY2Q="
},
"list": {
"L": [
{
"S": "abc"
},
{
"S": "def"
},
{
"N": "123"
},
{
"M": {
"key2": {
"S": "value2"
},
"nestedmaplist": {
"M": {
"key3": {
"S": "value3"
}
}
}
}
}
]
}
}
I am trying to convert a data.frame into JSON text. How can I include an array in this data.frame? That is, the variable characteristics needs to contain multiple elements. I have not succeeded in this.
Here is my code:
library(dplyr)
library(tidyr)
library(purrr)
library(jsonlite)

data <- data.frame(sequenceNbr = c(1:3),
                   requestTypeCode = "MR001",
                   managementStartingDate = format(Sys.Date(), "%Y-%m-%d"),
                   TopicCode = "TH02",
                   Nbr = c("1760448", "6580364", "1391363"),
                   TypeCode = "R003008",
                   char1 = "CK001001", char2 = "CK001002",
                   stringsAsFactors = FALSE) %>%
  group_by(sequenceNbr) %>%
  nest(Assignment = c(TopicCode),
       entity = c(Nbr),
       ToControl = c(TypeCode),
       characteristics = c(char1, char2))

data <- data %>%
  mutate(Assignment = purrr::map(Assignment, as.list),
         entity = purrr::map(entity, as.list))

json <- jsonlite::toJSON(list(`DefList` = data),
                         auto_unbox = TRUE, pretty = TRUE)
This gives the following output:
{
"DefList": [
{
"sequenceNbr": 1,
"requestTypeCode": "MR001",
"managementStartingDate": "2020-06-16",
"Assignment": {
"TopicCode": "TH02"
},
"entity": {
"Nbr": "1760448"
},
"ToControl": [
{
"TypeCode": "R003008"
}
],
"characteristics": [
{
"char1": "CK001001",
"char2": "CK001002"
}
]
},
{
"sequenceNbr": 2,
"requestTypeCode": "MR001",
"managementStartingDate": "2020-06-16",
"Assignment": {
"TopicCode": "TH02"
},
"entity": {
"Nbr": "6580364"
},
"ToControl": [
{
"TypeCode": "R003008"
}
],
"characteristics": [
{
"char1": "CK001001",
"char2": "CK001002"
}
]
}
]
}
What I want as output is this (note the difference in the characteristics element):
{
"DefList": [
{
"sequenceNbr": 1,
"requestTypeCode": "MR001",
"managementStartingDate": "2020-06-16",
"Assignment": {
"TopicCode": "TH02"
},
"entity": {
"Nbr": "1760448"
},
"ToControl": [
{
"TypeCode": "R003008"
}
],
"characteristics": [
"CK001001",
"CK001002"
]
},
{
"sequenceNbr": 2,
"requestTypeCode": "MR001",
"managementStartingDate": "2020-06-16",
"Assignment": {
"TopicCode": "TH02"
},
"entity": {
"Nbr": "6580364"
},
"ToControl": [
{
"TypeCode": "R003008"
}
],
"characteristics": [
"CK001001",
"CK001002"
]
}
]
}
It should also work when characteristics contains only one value. Any ideas? I think the data.frame structure is the limiting factor, since it cannot contain an array. I see in this post: https://blog.exploratory.io/working-with-json-data-in-very-simple-way-ad7ebcc0bb89 that this is valid JSON syntax, but I do not manage to obtain it. Thanks in advance for your help.
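A possible fix (just a sketch, not tested against your exact pipeline): turn the nested characteristics tibble into an unnamed list before serializing, so that jsonlite renders it as a plain array of values; wrapping it in as.list keeps the array form even when there is only one value.
data <- data %>%
  mutate(characteristics = purrr::map(characteristics,
                                      ~ as.list(unname(unlist(.x)))))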
I have the following json file:
{
"actions": [
{
"values": "test",
"features": [
{
"v1": 100,
"v2": {
"dates": [
"2020-04-08 06:58:26",
"2020-04-08 06:58:26"
]
}
}
]
}
]
}
I would like to append the object inside the "actions" array to the end of the array n times, creating n+1 objects in total.
Expected output if n=2:
{
"actions": [
{
"values": "test",
"features": [
{
"v1": 100,
"v2": {
"dates": [
"2020-04-08 06:58:26",
"2020-04-08 06:58:26"
]
}
}
]
},
{
"values": "test",
"features": [
{
"v1": 100,
"v2": {
"dates": [
"2020-04-08 06:58:26",
"2020-04-08 06:58:26"
]
}
}
]
},
{
"values": "test",
"features": [
{
"v1": 100,
"v2": {
"dates": [
"2020-04-08 06:58:26",
"2020-04-08 06:58:26"
]
}
}
]
}
]
}
I found this answer: How can I duplicate an existing object within a JSON array using jq?, but it only works for adding one element at the end.
You can use reduce together with range() to generate the index at which to add each copy of the object.
jq --arg n 2 'reduce range(0, ($n|tonumber)) as $d (.; .actions[$d+1] += .actions[0] )' json
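For what it's worth, a variant without reduce that may read more naturally (json is the input file as above; --argjson passes n as a number, so tonumber is not needed):
jq --argjson n 2 '.actions[0] as $tpl | .actions += [range($n) | $tpl]' json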
I am trying to create a .json file from a string of coordinates to display. I can get to the point of creating the file, but the JSON is not correct. The code follows:
string json = "10,10;10,5;5,5;5,10";
List<Coords> eList = new List<Coords>();
Coords d = new Coords();
d.type = "Polygon";
d.coordinates = Newtonsoft.Json.JsonConvert.DeserializeObject(json);
List<def> deflist = new List<def>();
def f = new def();
f.type = "GeometryCollection";
f.geometries = d;
The result is:
{
"type": "GeometryCollection",
"geometries": {
"type": "Polygon",
"coordinates": [
[
[
10,
10
],
[
10,
5
],
[
5,
5
],
[
5,
10
]
]
]
}
}
It should look like this:
{
"type": "GeometryCollection",
"geometries": {
"type": "Polygon",
"coordinates": [
[[10,10],[10,5],[5,5],[5,10]]
]
}
}
The coordinates are indented and formatted in a way I can't understand. Any suggestions would be greatly appreciated.
The file is being generated to be used with the Telerik RadMap control.
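For what it's worth, the two outputs above describe the same JSON structure; only the pretty-printing differs. Below is a minimal sketch (assuming Newtonsoft.Json and LINQ, and parsing the semicolon-separated string directly, since the Coords and def class definitions are not shown) that builds the nested coordinate arrays and makes the formatting explicit:
using System.Linq;
using Newtonsoft.Json;

// Hypothetical sketch: parse "10,10;10,5;5,5;5,10" into one polygon ring.
var ring = "10,10;10,5;5,5;5,10"
    .Split(';')
    .Select(p => p.Split(',').Select(double.Parse).ToArray())
    .ToArray();

var geometry = new { type = "Polygon", coordinates = new[] { ring } };
var collection = new { type = "GeometryCollection", geometries = geometry };

// Formatting.None vs Formatting.Indented only changes whitespace,
// not the structure of the coordinates array.
string output = JsonConvert.SerializeObject(collection, Formatting.None);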
I want to import data from a text file which contains around a few lakh (hundreds of thousands of) records.
I am using BULK INSERT to do it, like this:
BULK INSERT vw_bulk_insert_test
FROM '\\server\c$\csvtext.txt'  -- \\server\SQLEXPRESS\csvtest.txt
WITH (
    FIRSTROW = 2,
    CHECK_CONSTRAINTS,
    FIELDTERMINATOR = '~',
    ROWTERMINATOR = '\n'
)
GO
But before inserting, I want to validate the values of each column without using a cursor. For example, if the second row has values for all fields except the unit_number column, then it should create an error log entry specifying that the unit_number value is missing.
Personally, I would bulk-insert into a temp table, and then do the validations/conversions from the temp table into the table where the data will ultimately reside, using either plain T-SQL or stored procedures created for this purpose.
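A minimal sketch of that approach (table and column names such as stage_test, error_log and row_id are placeholders for your actual schema):
-- 1. Load the raw file into a staging table
BULK INSERT dbo.stage_test
FROM '\\server\c$\csvtext.txt'
WITH (FIRSTROW = 2, FIELDTERMINATOR = '~', ROWTERMINATOR = '\n');

-- 2. Log rows that fail validation, e.g. a missing unit_number
INSERT INTO dbo.error_log (error_message, row_id)
SELECT 'unit_number value is missing', s.row_id
FROM dbo.stage_test AS s
WHERE s.unit_number IS NULL OR s.unit_number = '';

-- 3. Move only the rows that passed validation into the target table
INSERT INTO dbo.vw_bulk_insert_test (unit_number /* , other columns */)
SELECT s.unit_number /* , other columns */
FROM dbo.stage_test AS s
WHERE s.unit_number IS NOT NULL AND s.unit_number <> '';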
For reference, BULK INSERT has this syntax:
BULK INSERT
[ database_name . [ schema_name ] . | schema_name . ] [ table_name | view_name ]
FROM 'data_file'
[ WITH
(
[ [ , ] BATCHSIZE = batch_size ]
[ [ , ] CHECK_CONSTRAINTS ]
[ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
[ [ , ] DATAFILETYPE =
{ 'char' | 'native'| 'widechar' | 'widenative' } ]
[ [ , ] FIELDTERMINATOR = 'field_terminator' ]
[ [ , ] FIRSTROW = first_row ]
[ [ , ] FIRE_TRIGGERS ]
[ [ , ] FORMATFILE = 'format_file_path' ]
[ [ , ] KEEPIDENTITY ]
[ [ , ] KEEPNULLS ]
[ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
[ [ , ] LASTROW = last_row ]
[ [ , ] MAXERRORS = max_errors ]
[ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
[ [ , ] ROWS_PER_BATCH = rows_per_batch ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
[ [ , ] TABLOCK ]
[ [ , ] ERRORFILE = 'file_name' ]
)]
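Note that the MAXERRORS and ERRORFILE options above can also capture rows that the engine itself rejects (format or conversion errors), which complements the validation done from the staging table. For example (the error-file path is a placeholder):
BULK INSERT vw_bulk_insert_test
FROM '\\server\c$\csvtext.txt'
WITH (
    FIRSTROW = 2,
    FIELDTERMINATOR = '~',
    ROWTERMINATOR = '\n',
    MAXERRORS = 50,
    ERRORFILE = '\\server\c$\csvtext_errors.log'
);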