Setting up bosun SNMP scollector to monitor CPU and memory keeps getting an error - bosun

I am trying to run scollector from bosun.
After I run scollector, it does not show me the memory information, but the CPU information is right.
This is the code:
Host = "http://localhost:8070"
DisableSelf = true
Freq = 60
Filter = ["snmp-generic", "snmp-ifaces"]
[[SNMP]]
Community = "test"
Host = "name"
MIBs = [ "devicename"]
[Tags]
product = "fw"
[MIBs]
[MIBs.fw]
BaseOid = ".1.3.6.1.4.1.2620"
[[MIBs.fw.Metrics]]
Metric = "os.cpu"
Oid = ".1.6.7.2.4.0"
Unit = "percent"
RateType = "gauge"
[[MIBs.fw.Metrics]]
Metric = "os.mem.used"
Oid = ".1.6.7.4.5.0"
Unit = "bytes"
RateType = "gauge"
This is the log:
2016/11/07 17:24:42 error: interval.go:64: snmp-generic-name-fw: asn1: structure error: tags don't match (2 vs {class:0 tag:4 length:11 isCompound:false}) {optional:false explicit:false application:false defaultValue:<nil> tag:<nil> stringType:0 timeType:0 set:false omitEmpty:false} #2
2016/11/07 17:24:43 info: queue.go:90: {"metric":"os.cpu","timestamp":1478539482,"value":2,"tags":{"host":"name","product":"fw"}}

It looks to me like this is an issue converting data types. The error comes from deep in the bowels of the asn1 library we are using, but I think it boils down to this: CPU is represented as an integer, while memory is a string.
Our SNMP collector attempts to parse all values into a big.Int, but apparently the string value cannot be coerced into that by our asn1 library.
Unfortunately I don't see a good way to make this work, except perhaps to look for an OID that returns an integer type. Without knowing what device you are using, that is as good as I can offer, I'm afraid.
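If it helps to confirm what the device is actually returning, here is a small diagnostic sketch outside of scollector (a hedged example, not part of bosun: it assumes the classic pysnmp hlapi is installed and reuses the community "test", the host "name", and the full memory OID built from the config above). The printed class name tells you whether that OID returns a numeric type or an OctetString.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity,
)

# Query the memory OID directly (BaseOid + the os.mem.used Oid from the config)
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("test"),
        UdpTransportTarget(("name", 161)),
        ContextData(),
        ObjectType(ObjectIdentity(".1.3.6.1.4.1.2620.1.6.7.4.5.0")),
    )
)

if error_indication:
    print(error_indication)
else:
    for oid, value in var_binds:
        # An Integer/Gauge32 value should be usable by scollector; an OctetString
        # (text) is the kind of value that produces the asn1 "tags don't match" error.
        print(oid, type(value).__name__, value)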

Related

HTTPRequest roblox

I'm currently making a Roblox whitelist system and it's almost finished, but I need one more thing. I scripted it but it doesn't work (code below), and I couldn't find anything to fix what I have (script and screenshot of the error below). Thanks.
local key = 1
local HttpService = game:GetService("HttpService")
local r = HttpService:RequestAsync({
    Url = "https://MyWebsiteUrl.com/check.php?key="..key,
    Method = "GET"
})
local i = HttpService:JSONDecode(r.Body)
for n, v in pairs(i) do
    print(tostring(n)..", "..tostring(v))
end
I assume the website that you are using to validate the key returns the response as raw text; if so, then:
local key = 1
local HttpService = game:GetService("HttpService")
-- GetAsync returns the response body as a string
local r = HttpService:GetAsync("https://MyWebsiteUrl.com/check.php?key="..key)
-- decode the JSON body into a Lua table
local response = HttpService:JSONDecode(r)
print(response)
I think this is because you tried to concatenate a string (the URL) with a number (the key variable); try making the key a string.

Google Earth Engine download problems, is this caused by immutable server side objects?

I have a function that will download an image collection as a TFRecord or a GeoTIFF.
Here's the function:
def download_image_collection_to_drive(collection, aois, bands, limit, export_format):
    if collection.size().lt(ee.Number(limit)):
        bands = [band for band in bands if band not in ['SCL', 'QA60']]
        for aoi in aois:
            cluster = aoi.get('cluster').getInfo()
            geom = aoi.bounds().getInfo()['geometry']['coordinates']
            aoi_collection = collection.filterMetadata('cluster', 'equals', cluster)
            for ts in range(1, 11):
                print(ts)
                ts_collection = aoi_collection.filterMetadata('interval', 'equals', ts)
                if ts_collection.size().eq(ee.Number(1)):
                    image = ts_collection.first()
                    p_id = image.get("PRODUCT_ID").getInfo()
                    description = f'{cluster}_{ts}_{p_id}'
                    task_config = {
                        'fileFormat': export_format,
                        'image': image.select(bands),
                        'region': geom,
                        'description': description,
                        'scale': 10,
                        'folder': 'output'
                    }
                    if export_format == 'TFRecord':
                        task_config['formatOptions'] = {'patchDimensions': [256, 256], 'kernelSize': [3, 3]}
                    task = ee.batch.Export.image.toDrive(**task_config)
                    task.start()
                else:
                    logger.warning(f'no image for interval {ts}')
    else:
        logger.warning(f'collection over {limit} aborting drive download')
It seems that whenever it gets to the second aoi it fails. I'm confused by this, as ts_collection.size().eq(ee.Number(1)) confirms there is an image there, so it should manage to get the product ID from it.
line 24, in download_image_collection_to_drive
p_id = image.get("PRODUCT_ID").getInfo()
File "/lib/python3.7/site-packages/ee/computedobject.py", line 95, in getInfo
return data.computeValue(self)
File "/lib/python3.7/site-packages/ee/data.py", line 717, in computeValue
prettyPrint=False))['result']
File "/lib/python3.7/site-packages/ee/data.py", line 340, in _execute_cloud_call
raise _translate_cloud_exception(e)
ee.ee_exception.EEException: Element.get: Parameter 'object' is required.
Am I falling foul of immutable server-side objects somewhere?
This is a server-side value problem, yes, but immutability has nothing to do with it: your if statement isn't working as you intend.
ts_collection.size().eq(ee.Number(1)) is a server-side value; you've described a comparison that hasn't happened yet. That means a local operation like a Python if statement cannot take the comparison's outcome into account and will just treat the object as a true value.
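To make that concrete, here is a minimal sketch (not from the original answer; it assumes an authenticated Earth Engine Python session):
import ee
ee.Initialize()

# ee.Number(0).eq(1) describes a comparison that runs on the server, not a Python bool.
check = ee.Number(0).eq(1)

print(bool(check))      # True  - a ComputedObject is always truthy locally
print(check.getInfo())  # 0     - the actual result, computed on the server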
Using getInfo would be a quick fix:
if ts_collection.size().eq(ee.Number(1)).getInfo():
but it would be more efficient to avoid using getInfo more than needed by fetching the entire collection's info just once, which includes the image info.
...
ts_collection_info = ts_collection.getInfo()
if ts_collection_info['features']:  # Are there any images in the collection?
    image = ts_collection.first()   # server-side image, still needed for the export
    image_info = ts_collection_info['features'][0]  # client-side image info already downloaded
    p_id = image_info['properties']['PRODUCT_ID']   # get ID from client-side info
    ...
This way, you only make two requests per ts: one to check for the match, and one to start the export.
Note that I haven't actually run this Python code, and there might be some small mistakes; if it gives you any trouble, print(ts_collection_info) and examine the structure you actually received to figure out how to interpret it.

How can I run inference with a multiple-input network on TensorRT?

I would like to test GQ-CNN, which is a network in Dex-Net, on TensorRT.
I successfully converted the tflite file to a uff file, but when I tried to run inference with that network, there was an error I couldn't figure out.
[TensorRT] ERROR: Parameter check failed at: ../builder/Network.cpp::addLRN::149, condition: lrnWindow & 0x1
python3: uff/orders.cpp:330: void UffParser::addTranspose(ParserLayer&, std::vector<int>): Assertion `outputs.size() == 1' failed.
The error appears when building the model.
I tried to find a clue on Google, but there is no code and no references.
There is only one difference compared with the example code that works well.
(I wrote comments marking which lines I added. If I remove those lines and replace the model file with a single-input network, it works well.)
I registered the input twice, as in the code below, because GQ-CNN has multiple inputs.
So I guess that registering multiple inputs using the UFF parser could be the main reason for the error.
class ModelData(object):
    MODEL_FILE = "./gqcnn.uff"
    INPUT_NAME_1 = "Placeholder"
    INPUT_SHAPE_1 = (1, 32, 32)
    INPUT_NAME_2 = "Placeholder_1"
    INPUT_SHAPE_2 = (2,)
    OUTPUT_NAME = "softmax/Softmax"

def build_engine(model_file):
    # For more information on TRT basics, refer to the introductory samples.
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
        builder.max_workspace_size = common.GiB(1)
        builder.fp16_mode = True
        #builder.int8_mode = True
        # Parse the Uff Network
        parser.register_input(ModelData.INPUT_NAME_1, ModelData.INPUT_SHAPE_1)
        parser.register_input(ModelData.INPUT_NAME_2, ModelData.INPUT_SHAPE_2)  # added code
        parser.register_output(ModelData.OUTPUT_NAME)
        parser.parse(model_file, network)
        # Build and return an engine.
        return builder.build_cuda_engine(network)

# do inference
with build_engine(ModelData.MODEL_FILE) as engine:
    # Build an engine, allocate buffers and create a stream.
    # For more information on buffer allocation, refer to the introductory samples.
    inputs, outputs, bindings, stream = common.allocate_buffers(engine)
    with engine.create_execution_context() as context:
        for idx in range(len(val_images)):
            start = time.time()
            val_image = val_images[idx]
            val_pose = val_poses[idx]  # added code
            np.copyto(inputs[0].host, val_image)
            np.copyto(inputs[1].host, val_pose)  # added code
            [prediction] = common.do_inference(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream)
Has anyone succeeded in running inference with a multiple-input model?

rkafka.read() doesn't return a message (Returns double quotes only)

I am trying to return a message through the rkafka library in R.
I followed the rkafka documentation: https://cran.r-project.org/web/packages/rkafka/vignettes/rkafka.pdf
The output returns "" without the actual message in it. Kafka Tool confirms that the message is sent by the producer.
CODE:
prod1=rkafka.createProducer("127.0.0.1:9092")
rkafka.send(prod1,"test","127.0.0.1:9092","Testing once")
rkafka.closeProducer(prod1)
consumer1=rkafka.createConsumer("127.0.0.1:2181","test")
print(rkafka.read(consumer1))
Output:
[1] ""
The desired output would be "Testing once".
In order to read messages that have already been written to a topic (before the consumer was started), you need to set the offset to the smallest possible value (equivalent to --from-beginning). According to the rkafka docs, the autoOffsetReset argument defaults to largest:
autoOffsetReset
  smallest: automatically reset the offset to the smallest offset
  largest: automatically reset the offset to the largest offset
  anything else: throw an exception to the consumer
  Required: Optional  Type: String  Default: largest
In order to be able to consume messages you need to set autoOffsetReset to "smallest".
consumer1=rkafka.createConsumer("127.0.0.1:2181","test", autoOffsetReset="smallest")
Update: This Code Works:
library(rkafka)
prod1=rkafka.createProducer("127.0.0.1:9092")
rkafka.send(prod1,"test","127.0.0.1:9092","Testing once")
rkafka.send(prod1,"test","127.0.0.1:9092","Testing twice")
rkafka.closeProducer(prod1)
consumer1=rkafka.createConsumer("127.0.0.1:2181", "test", groupId = "test-consumer-group",
                                zookeeperConnectionTimeoutMs = "100000", autoCommitEnable = "NULL",
                                autoCommitInterval = "NULL", autoOffsetReset = "NULL")
print(rkafka.read(consumer1))
print(rkafka.readPoll(consumer1))
rkafka.closeConsumer(consumer1)
The key is to restart Kafka after deleting the logs it generates.

XQuery (saxon) failing with a schema (XPath works)

I switched from XPath to XQuery in Saxon, and on the selects where I have a schema I'm getting the error message:
A typed input document can only be used with a schema-aware query
My setup is:
InputSource xmlSource = new InputSource(xmlData);
SAXSource saxSource = new SAXSource(reader, xmlSource);
Source schemaSource = new StreamSource(schemaFile);
Configuration config = createEnterpriseConfiguration();
config.addSchemaSource(schemaSource);
Processor processor = new Processor(config);
SchemaValidator validator = new SchemaValidatorImpl(processor);
DocumentBuilder doc_builder = processor.newDocumentBuilder();
if (!preserveWhiteSpace)
    doc_builder.setWhitespaceStrippingPolicy(WhitespaceStrippingPolicy.ALL);
doc_builder.setSchemaValidator(validator);
XdmNode root_node = doc_builder.build(saxSource);
XQueryCompiler compiler = processor.newXQueryCompiler();
Is there something additional I need to do on queries where there is a schema?
thanks - dave
Call XQueryCompiler.setSchemaAware(true);
This isn't the default because it's good for the optimizer to know whether the data is likely to be typed or untyped, and it's inefficient to generate schema-aware code if the data is untyped (conversely, when the data is typed, schema-aware code is typically faster -- though the savings can be eaten up by the extra cost of validating the input).
