Using Bosun, how can I make an alert not trigger for certain times of day?

I'm using Bosun to monitor a metric that should be non-zero for most of the working day, but may legitimately be zero or unavailable during the night.
alert myalert {
    $notes = `This alert triggers when cw- orders haven't been received recently.`
    template = noweborders
    unknownIsNormal = 0
    $metricLabel = Orders
    $metric = q("max:1d-max:rate{counter,,1}:metricname{filtercategory=cw-,host=*}", "2w", "")
    $graph = q("max:1m-max:rate{counter,,1}:metricname{filtercategory=literal_or(cw-),host=wildcard(*)}", "1d", "")
    $uptimeStoppedWarn = since($metric) > d("2h")
    $uptimeStoppedCrit = since($metric) > d("4h")
    $lastOrder = ungroup(since($metric)) / 60 / 60
    warn = $uptimeStoppedWarn
    crit = $uptimeStoppedCrit
    warnNotification = georgeemail
    critNotification = georgeemail
}
How can I best adapt this alert so that if the metric is zero or unknown between, say, 8pm and 8am, it doesn't trigger? I've looked through the documentation, but I'm not sure how to write queries that depend on the time of day.

If you're really not interested in whether your alert condition is met during certain times, you can introduce an additional condition to your alert using the epoch() function, which returns the current Unix timestamp at the time the alert is evaluated. Then AND that into the crit or warn condition. Something like this will work (note that the Unix epoch is UTC-based, so adjust the thresholds if your business hours are in another timezone):
alert myalert {
    $notes = `This alert triggers when cw- orders haven't been received recently.`
    (...)
    $eight_pm = 20*60*60
    $eight_am = 8*60*60
    $seconds_today = epoch() % 86400
    $is_before_8am = $seconds_today < $eight_am
    $is_after_8pm = $eight_pm <= $seconds_today
    $should_alert = !$is_before_8am && !$is_after_8pm
    warn = $uptimeStoppedWarn && $should_alert
    crit = $uptimeStoppedCrit && $should_alert
    warnNotification = georgeemail
    critNotification = georgeemail
}
This will prevent the alert from going into the critical or warning state outside of 8am to 8pm. It might be worth extracting that logic into a macro if you use it across multiple alerts; a sketch follows.
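A minimal sketch of that macro extraction (assuming Bosun's macro mechanism, which inlines a macro's contents into every alert that references it; the name business_hours is made up for illustration):

macro business_hours {
    $eight_pm = 20*60*60
    $eight_am = 8*60*60
    $seconds_today = epoch() % 86400
    $should_alert = $eight_am <= $seconds_today && $seconds_today < $eight_pm
}

alert myalert {
    macro = business_hours
    (...)
    warn = $uptimeStoppedWarn && $should_alert
    crit = $uptimeStoppedCrit && $should_alert
}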

Related

I'm aware I'm trying to get data for multiple accounts, but where can I specify that, rather than receiving this error in R?

library(rgoogleads)
library(gargle)
token <- token_fetch()
token
gads_auth(email = 'xx@gmail.com')
Authentication complete.
ad_group_report <- gads_get_report(
resource = "ad_group",
fields = c("ad_group.campaign",
"ad_group.id",
"ad_group.name",
"ad_group.status",
"metrics.clicks",
"metrics.cost_micros"),
date_from = "2021-01-08",
date_to = "2021-01-10",
where = "ad_group.status = 'ENABLED'",
order_by = c("metrics.clicks DESC", "metrics.cost_micros")
)
i Multi account request
! The request you sent did not return any results, check the entered parameters and repeat the opposition.
Why do I receive this error? I never received it with the RAdwords package. Where do I specify the argument for multiple accounts?
https://cran.r-project.org/web/packages/rgoogleads/rgoogleads.pdf
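(A hedged sketch, assuming the account-selection helpers gads_set_login_customer_id() and gads_set_customer_id() listed in the reference manual linked above; the ids are placeholders.) When a token has access to several accounts, the manager and client account usually have to be selected explicitly before querying:

# sketch: select the manager (MCC) account and the client account to query
gads_set_login_customer_id('111-111-1111')
gads_set_customer_id('222-222-2222')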

My discord bot keeps getting disconnected for some reason

I haven't changed my bot in weeks, but for the past 5 or so days I've been getting an error message like this every day: https://imgur.com/VCLx2kv
I don't think the error is caused by my code, apart from the event-loop part, which I don't know how to fix and which hasn't caused me any problems before. In case you're curious, the relevant part of the code is below.
I already tried regenerating my token.
import asyncio
from datetime import datetime

import discord
from discord import Game

client = discord.Client()  # assumed setup; not shown in the question

@client.event
async def dead_check():
    i = 1
    d = datetime.now()
    date = str(d.strftime("%Y-%m-%d"))  # note: computed once at startup, never refreshed
    server = client.get_server(id='105388450575859712')
    while i == 1:
        async for message in client.logs_from(discord.Object(id='561667365927124992'), limit=9999999):
            if date in message.content:
                usid = message.content.split('=')
                usid1 = usid[1].split(' ')
                count = message.content.split('#')
                cd = message.content.split('?')
                ev = cd[1]
                if ev == '00':
                    number = 0
                elif ev == '01':
                    number = 1
                elif ev == '10':
                    number = 2
                elif ev == '11':
                    number = 3
                name = count[0]
                await client.send_message(discord.Object(id='339182193911922689'), '#here\n' + name + ' has reached the deadline for the **FRICKLING** program.\nThe user has attended ' + str(number) + ' events.')
        async for message in client.logs_from(discord.Object(id='567328773922619392'), limit=9999):
            if date in message.content and message.reactions:
                usid = message.content.split(' ')
                user = await client.get_user_info(usid[0])
                await client.send_message(discord.Object(id='567771853796540465'), user.mention + ' needs to be paid, if you have already paid him - react with :HYPERS:')
                await client.delete_message(message)
        await asyncio.sleep(60*60*24)

@client.event
async def on_ready():
    await client.change_presence(game=Game(name='with nuclear waste'))
    print('Ready, bitch')

asyncio.get_event_loop().run_until_complete(dead_check())
Have you tried reducing the limit of those logs_from calls? 9999999 is a pretty big number, and fetching that much history may slow things down enough that the heartbeat isn't sent at the proper times. You should also sanitize that image of the error message; it contains your bot token.
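For instance, a sketch against the same 0.16-era discord.py API the question uses (the smaller limit is only an illustration):

# scan a bounded slice of history so the gateway heartbeat isn't starved
async for message in client.logs_from(discord.Object(id='561667365927124992'), limit=500):
    if date in message.content:
        ...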
Credit to Patrick Haugh, but I wanted to close this thread and he didn't post it as an answer.

MTurkR, R, automatically posting micro-batches

I am new to MTurk, but have some fluency with R. I am using the MTurkR package for the first time, and I am trying to create "micro-batches" that post on MTurk over time. The code I am using can be found below (the XXXX parts are obviously filled with the correct values). I don't get any error messages; the code runs and posts the HIT correctly both in the Sandbox and on the live site. However, the posted HITs do not show up in the Sandbox requester account or in the real requester account, which means, as far as I understand, that I can't evaluate the workers who submit a completion code before they are paid automatically.
Could anyone point out where the error is, and how I could review the HITs?
Thanks.
K
##### Notes:
# 1) Change sandbox from TRUE to FALSE to run live (make sure to test in sandbox first!!)

##### Step 1: Load library, set parameters

#### Load MTurkR library
library(MTurkR)

#### HIT Layout ID
# Layout ID for the choice task
# my_hitlayoutid = "XXXX"
# Layout ID for the choice task in the Sandbox
my_hitlayoutid = "XXXX"

#### Set MTurk credentials
Sys.setenv(
  AWS_ACCESS_KEY_ID = "XXXX",
  AWS_SECRET_ACCESS_KEY = "XXXX"
)

#### HIT parameters
## Run in sandbox?
sandbox_val <- "FALSE"
## Set the name of your project here (used to retrieve HITs later)
myannotation <- "myannotation"
## Enter other HIT aspects
newhittype <- RegisterHITType(
  title = "hope trial",
  description = "Description",
  reward = "2.5",
  duration = seconds(hours = 1),
  keywords = "survey, demographics, neighborhoods, employment",
  sandbox = sandbox_val
)

##### Step 2: Define functions

## Define a function that will create a HIT using information above
createhit <- function() {
  CreateHIT(
    hit.type = newhittype$HITTypeId,
    assignments = 2,
    expiration = seconds(days = 30),
    annotation = myannotation,
    verbose = TRUE,
    sandbox = sandbox_val,
    hitlayoutid = my_hitlayoutid
  )
}

## Define a function that will expire all running HITs
## This keeps HITs from "piling up" on a slow day
## It ensures that A) HIT appears at the top of the list, B) workers won't accidentally accept HIT twice
# expirehits <- function() {
#   ExpireHIT(
#     annotation = myannotation,
#     sandbox = sandbox_val
#   )
# }

##### Step 3: Execute a loop that runs createhit/expirehit functions every hour, and it will log the output to a file

## Define number of times to post the HIT (totalruns)
totalruns <- 2
counter <- 0

## Define log file (change the location as appropriate)
logfile <- file("/Users/kinga.makovi/Dropbox/Bias_Experiment/MTurk/logfile.txt", open = "a")
sink(logfile, append = TRUE, type = "message")

## Run loop (note: interval is hourly, but can be changed in Sys.sleep)
repeat {
  message(Sys.time())
  createhit()
  Sys.sleep(10)
  # expirehits()
  counter = counter + 1
  if (counter == totalruns) {
    break
  }
}

## To stop the loop before it finishes, click the "STOP" button
## To stop logging, run sink()
You can't see HITs that are created via the API (through MTurkR or otherwise) on the requester website. It's a "feature". You'll have to access the HITs through MTurkR (e.g., SearchHITs() and GetHIT()), as in the sketch below.
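For example, a sketch assuming the credentials, sandbox setting, and annotation from the question are already in place (GetAssignment()/ApproveAssignment() are MTurkR's calls for reviewing and paying work; argument names follow the package reference):

## list your HITs and pull submitted assignments for the project
hits <- SearchHITs(sandbox = sandbox_val)
a <- GetAssignment(annotation = myannotation, sandbox = sandbox_val)
## after checking the completion codes, approve the good ones
# ApproveAssignment(assignments = a$AssignmentId, sandbox = sandbox_val)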

Bosun: how to add series with different tags?

I'm trying to add 4 series using Bosun expressions. They are from 1, 2, 3 and 4 weeks ago. I shifted them using shift() so they line up with the current time, but I can't add them together because each carries its own shift=1w etc. tag. How can I add these series together?
Thank you
edit: here's the query for 2 weeks
$period = d("1w")
$duration = d("30m")
$week1end = tod(1 * $period)
$week1start = tod(1 * $period + $duration)
$week2end = tod(2 * $period)
$week2start = tod(2 * $period + $duration)
$q1 = q("avg:1m-avg:os.cpu{host=myhost}", $week1start, $week1end)
$q2 = q("avg:1m-avg:os.cpu{host=myhost}", $week2start, $week2end)
$shiftedq1 = shift($q1, "1w")
$shiftedq2 = shift($q2, "2w")
$shiftedq1 + $shiftedq2
The problem is similar to "How do I add the series present in the output of an over query?":
over("avg:1m-avg:os.cpu{host=myhost}", "30m", "1w", 2)
There is a new function called addtags that is pending documentation (see https://raw.githubusercontent.com/bosun-monitor/bosun/master/docs/expressions.md for a draft), which seems to work when combined with rename. Changing the last line to:
$shiftedq1 + addtags(rename($shiftedq2, "shift=shiftq2"), "shift=1w")
should generate a single result group like { host=hostname, shift=1w, shiftq2=2w }. If you add additional queries for q3 and q4, you will probably need to rename the shift tag for those to unique values like shiftq3 and shiftq4, as sketched below.
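Following that pattern, an untested sketch for all four weeks (it carries the same caveat as above; the intermediate variables are only for readability):

$w2 = addtags(rename($shiftedq2, "shift=shiftq2"), "shift=1w")
$w3 = addtags(rename($shiftedq3, "shift=shiftq3"), "shift=1w")
$w4 = addtags(rename($shiftedq4, "shift=shiftq4"), "shift=1w")
$shiftedq1 + $w2 + $w3 + $w4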
If you were using numbersets instead of seriessets, the transpose function t() would let you "drop" the unwanted tags. This is useful when generating alerts, since crit and warn need a single number value, not a seriesset:
$average_per_q = avg(merge($shiftedq1, $shiftedq2))
$sum_over_all = sum(t($average_per_q, "host"))
Result: { host=hostname } 7.008055555555557
Side note: you probably want to query os.cpu as a counter instead of a gauge, e.g. $q1 = q("avg:1m-avg:rate{counter,,1}:os.cpu{.... Without the rate section you are using the raw counter values instead of a gauge-like rate; the full first query is sketched below.
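Spelled out against the first query above (a sketch; only the rate section changes, the metric, host, and time variables are the same):

$q1 = q("avg:1m-avg:rate{counter,,1}:os.cpu{host=myhost}", $week1start, $week1end)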

How to display Web Dynpro ABAP in an ABAP report?

I've been coding ABAP for just a few days, and I have a task to call a report (created in transaction SE38) and show the report's result on the screen of a Web Dynpro ABAP application (SE80).
The report takes user input (e.g. Material Number, Material Type, Plant, Sales Org.) as conditions for the query, so the Web Dynpro application must allow the user to key in these parameters.
Some related articles talk about using SUBMIT rep EXPORTING LIST TO MEMORY and CALL FUNCTION 'LIST_FROM_MEMORY', but so far I really have no idea how to implement that.
Any answers will be appreciated. Thanks!
You can export the report's output to PDF: when the user clicks a link, you run the conversion and display the file in the browser window.
To do so, start by creating a background job with the code below:
constants c_name type tbtcjob-jobname value 'YOUR_JOB_NAME'.

data v_number           type tbtcjob-jobcount.
data v_print_parameters type pri_params.

call function 'JOB_OPEN'
  exporting
    jobname          = c_name
  importing
    jobcount         = v_number
  exceptions
    cant_create_job  = 1
    invalid_job_data = 2
    jobname_missing  = 3
    others           = 4.
if sy-subrc = 0.
  commit work and wait.
else.
  exit. "// todo: err handling here
endif.
Then, you need to get the printer parameters in order to submit the report:
call function 'GET_PRINT_PARAMETERS'
  exporting
    destination            = 'LP01'
    immediately            = space
    new_list_id            = 'X'
    no_dialog              = 'X'
    user                   = sy-uname
  importing
    out_parameters         = v_print_parameters
  exceptions
    archive_info_not_found = 1
    invalid_print_params   = 2
    invalid_archive_params = 3
    others                 = 4.

v_print_parameters-linct = 55.
v_print_parameters-linsz = 1.
v_print_parameters-paart = 'LETTER'.
Now you submit your report using the filters that apply. Do not forget to add the job parameters to it, as the code below shows:
submit your_report_name
  to sap-spool
  spool parameters v_print_parameters
  without spool dynpro
  with ...(insert all your filters here)
  via job c_name number v_number
  and return.
if sy-subrc = 0.
  commit work and wait.
else.
  exit. "// todo: err handling here
endif.
After that, you close the job:
call function 'JOB_CLOSE'
  exporting
    jobcount             = v_number
    jobname              = c_name
    strtimmed            = 'X'
  exceptions
    cant_start_immediate = 1
    invalid_startdate    = 2
    jobname_missing      = 3
    job_close_failed     = 4
    job_nosteps          = 5
    job_notex            = 6
    lock_failed          = 7
    others               = 8.
if sy-subrc = 0.
  commit work and wait.
else.
  exit. "// todo: err handling here
endif.
Now the job will proceed, and you'll need to wait for it to complete; do that with a loop. Once the job has completed, you can get its spool output and convert it to PDF:
data v_rqident      type tsp01-rqident.
data v_job_head     type tbtcjob.
data t_job_steplist type tbtcstep occurs 0 with header line.
data t_pdf          like tline occurs 0 with header line.

do 200 times.
  wait up to 1 seconds.

  call function 'BP_JOB_READ'
    exporting
      job_read_jobcount     = v_number
      job_read_jobname      = c_name
      job_read_opcode       = '20'
    importing
      job_read_jobhead      = v_job_head
    tables
      job_read_steplist     = t_job_steplist
    exceptions
      invalid_opcode        = 1
      job_doesnt_exist      = 2
      job_doesnt_have_steps = 3
      others                = 4.

  read table t_job_steplist index 1.
  if not t_job_steplist-listident is initial.
    v_rqident = t_job_steplist-listident.
    exit.
  else.
    clear v_job_head.
    clear t_job_steplist.
    clear t_job_steplist[].
  endif.
enddo.

check not v_rqident is initial.

call function 'CONVERT_ABAPSPOOLJOB_2_PDF'
  exporting
    src_spoolid              = v_rqident
    dst_device               = 'LP01'
  tables
    pdf                      = t_pdf
  exceptions
    err_no_abap_spooljob     = 1
    err_no_spooljob          = 2
    err_no_permission        = 3
    err_conv_not_possible    = 4
    err_bad_destdevice       = 5
    user_cancelled           = 6
    err_spoolerror           = 7
    err_temseerror           = 8
    err_btcjob_open_failed   = 9
    err_btcjob_submit_failed = 10
    err_btcjob_close_failed  = 11
    others                   = 12.
If you're going to send it via HTTP, you may need to convert it to BASE64 as well:
field-symbols <xchar> type x.

data v_offset(10)      type n.
data v_char            type c.
data v_xchar(2)        type x.
data v_xstringdata_aux type xstring.
data v_xstringdata     type xstring.
data v_base64data      type string.
data v_base64data_aux  type string.
* t_base64data is not declared in the original answer; a plausible
* declaration (255-character rows) is assumed here:
data: begin of t_base64data occurs 0,
        data(255) type c,
      end of t_base64data.

loop at t_pdf.
  do 134 times.
    v_offset = sy-index - 1.
    v_char = t_pdf+v_offset(1).
    assign v_char to <xchar> casting type x.
    concatenate v_xstringdata_aux <xchar> into v_xstringdata_aux in byte mode.
  enddo.
  concatenate v_xstringdata v_xstringdata_aux into v_xstringdata in byte mode.
  clear v_xstringdata_aux.
endloop.

call function 'SCMS_BASE64_ENCODE_STR'
  exporting
    input  = v_xstringdata
  importing
    output = v_base64data.

v_base64data_aux = v_base64data.
while strlen( v_base64data_aux ) gt 255.
  clear t_base64data.
  t_base64data-data = v_base64data_aux.
  v_base64data_aux = v_base64data_aux+255.
  append t_base64data.
endwhile.
if not v_base64data_aux is initial.
  t_base64data-data = v_base64data_aux.
  append t_base64data.
endif.
And you're done!
Hope it helps.
As previous posters said, you should get thorough training before implementing this kind of thing in a productive environment.
That said, calling Web Dynpro ABAP from a report can be done with the function module WDY_EXECUTE_IN_PLACE. You pass it the Web Dynpro application name and the necessary parameters:
CALL FUNCTION 'WDY_EXECUTE_IN_PLACE'
  EXPORTING
*   PROTOCOL                =
    INTERNALMODE            = ' '
*   SMARTCLIENT             =
    APPLICATION             = 'Z_MY_WEBDYNPRO'
*   CONTAINER_NAME          =
    PARAMETERS              = lt_parameters
*   SUPPRESS_OUTPUT         =
    TRY_TO_USE_SAPGUI_THEME = ' '
  IMPORTING
    OUT_URL                 = ex_url.
IF sy-subrc <> 0.
* Implement suitable error handling here
ENDIF.
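A minimal sketch of how lt_parameters might be filled (assuming the name/value pair table type TIHTTPNVP; the parameter name MATNR and its value are purely illustrative):

data: lt_parameters type tihttpnvp,
      ls_parameter  type ihttpnvp.

* hypothetical example: pass a material number to the application
ls_parameter-name  = 'MATNR'.
ls_parameter-value = '000000000012345'.
append ls_parameter to lt_parameters.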
