This is my App Insights query. Basically, I have a monitoring program that runs every 15 minutes and logs a matrix of data. I want to compare the latest matrix against the previous matrix, and the latest matrix against the pre-previous matrix, and create a warning only when both comparisons show a jump (e.g. 20%).
//get the requests I am interested in first
let allRequest = requests
| where operation_Name == "SearchServiceFieldMonitor" and timestamp > ago(4h)
| extend IndexerMismatch = tostring(customDimensions.IndexerMismatch)
| extend Mismatch = split(IndexerMismatch, " in ")
| extend difference = toint(Mismatch[0])
    , field = tostring(Mismatch[1])
    , indexer = tostring(Mismatch[2])
    , index = tostring(Mismatch[3])
    , service = tostring(Mismatch[4])
    , timestamp
| project field, indexer, index, service, timestamp, difference;
//get the latest requests
let latestRequest = allRequest
| summarize latesttime = arg_max(timestamp, *) by field, indexer, index, service
| project latestdifference = difference, latesttime, field, indexer, index, service;
//get the requests before latest (latest-1)
let previousRequest = allRequest
| join latestRequest on field, indexer, index, service
| extend timestampcopy = timestamp
| where timestamp < latesttime
| summarize previousdifftime = arg_max(timestamp, *) by field, indexer, index, service
| project latestdifference, previousdifference = difference, previousdifftime, latesttime, field, indexer, index, service;
//get the requests before latest-1, so latest-2
let beforepreviousRequest = allRequest
| join previousRequest on field, indexer, index, service
| where timestamp < previousdifftime
| summarize prevPrevtime = arg_max(timestamp, *) by field, indexer, index, service
| project latestdifference, previousdifference, prevprevdifference = difference, prevPrevtime, previousdifftime, latesttime, field, indexer, index, service;
// show requests where there is a jump in difference both between latest and (latest-1) and between latest and (latest-2)
beforepreviousRequest
| where (latestdifference - previousdifference) / previousdifference > 0.2 and (latestdifference - prevprevdifference) / prevprevdifference > 0.2
However, this query gives random results. If I click the run button a couple of times, it can show 0 records, 1 record, or several records. Is there something wrong with the join I am using, or something else?
The default join kind is "innerunique", which returns only one row (an arbitrary one) from the left side for each join key. Change the kind to "inner" or any other kind to get consistent results.
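To see why the default kind causes varying results, here is a rough Python sketch (not KQL) of the two join semantics over in-memory rows; the data and helper names are made up for illustration:

```python
# A rough simulation of Kusto join kinds over made-up in-memory rows.
left = [
    {"key": "a", "val": 1},
    {"key": "a", "val": 2},   # duplicate key on the left side
    {"key": "b", "val": 3},
]
right = [{"key": "a", "other": 10}, {"key": "b", "other": 20}]

def inner_join(l, r):
    # kind=inner: every matching pair survives
    return [{**lr, **rr} for lr in l for rr in r if lr["key"] == rr["key"]]

def innerunique_join(l, r):
    # kind=innerunique (the default): deduplicate the LEFT side by key first,
    # keeping an arbitrary row per key (here: the first seen), then inner-join
    seen, deduped = set(), []
    for lr in l:
        if lr["key"] not in seen:
            seen.add(lr["key"])
            deduped.append(lr)
    return inner_join(deduped, r)

print(len(inner_join(left, right)))        # 3
print(len(innerunique_join(left, right)))  # 2
```

With kind=inner both left-side rows for key "a" survive; innerunique first drops all but one arbitrary left row per key, which is why repeated runs of the original query can return different rows.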
Good day,
I am attempting to check IPAddress from SigninLogs against a datatable. I am able to use the scalar function ipv4_is_in_range() with a single value. (IPs are changed for privacy.)
ex:
ipv4_is_in_range(IPAddress, '127.0.0.255/24')
When I try to use a declared datatable it does not recognize the values and returns nothing.
ex:
let srcIPs = datatable (checkIP:string) ['127.0.0.1/24'];
SigninLogs
| union srcIPs
| where ipv4_is_in_range( IPAddress, checkIP)
or
let srcIPs = datatable (checkIP:string) [
'127.0.0.1/24',
'8.8.8.8',
'1.1.1.1/16'
];
SigninLogs
| union srcIPs
| where ipv4_is_in_range( IPAddress, checkIP)
If I replace the 'where' with 'extend', I get one IP address that shows correctly, but it also includes another IP address that is not within that range.
My question is how do I get the function to recognize the values from srcIPs correctly?
@Michael, I went ahead and revisited that document and reattempted. The workspace still shows an error when I hover over ipv4_lookup, stating it is not defined, yet it still ran, which is something I hadn't tried before. Now the code looks like:
let IP_Data = datatable(network:string)
[
"127.0.0.1",
"8.8.8.8/24",
"192.168.0.1",
"10.0.240.255/21"
];
SigninLogs
| evaluate ipv4_lookup(IP_Data, IPAddress, network)
| where UserType == "Member"
| project-reorder IPAddress, UserPrincipalName
So this code got me what I was looking for. Thank you all for your assistance.
Answering my own question with working code, for the record.
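For comparison, the same membership test can be sketched outside Kusto with Python's standard ipaddress module. The network list below reuses the illustrative values from the answer; strict=False normalizes host addresses that carry a prefix (e.g. 8.8.8.8/24 becomes the network 8.8.8.0/24):

```python
import ipaddress

# Illustrative networks, mirroring the IP_Data datatable above; entries
# without a prefix are treated as single hosts (/32).
networks = ["127.0.0.1", "8.8.8.8/24", "192.168.0.1", "10.0.240.255/21"]
nets = [ipaddress.ip_network(n, strict=False) for n in networks]

def ip_matches(ip: str) -> bool:
    """Return True when ip falls inside any configured network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in nets)

print(ip_matches("8.8.8.25"))    # True: inside 8.8.8.0/24
print(ip_matches("10.0.247.1"))  # True: inside 10.0.240.0/21
print(ip_matches("9.9.9.9"))     # False
```

This is the same "one address against a list of ranges" lookup that ipv4_lookup performs server-side in KQL.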
I'm trying to do an OUTER-JOIN in Progress, using this page as inspiration. My code is as follows:
OPEN QUERY qMovto
    FOR EACH movto-estoq
        WHERE movto-estoq.lote BEGINS pc-lote
          AND movto-estoq.it-codigo BEGINS pc-it-codigo
          AND movto-estoq.dt-trans >= pd-data1
          AND movto-estoq.dt-trans <= pd-data2
          AND movto-estoq.cod-emitente = pi-cod-emitente,
    EACH item OUTER-JOIN
        WHERE movto-estoq.it-codigo = item.it-codigo,
    EACH item-cli OUTER-JOIN
        WHERE item-cli.item-do-cli BEGINS pc-item-cli
          AND movto-estoq.cod-emitente = item-cli.cod-emitente
          AND movto-estoq.it-codigo = item-cli.it-codigo
          AND movto-estoq.un = item-cli.unid-med-cli,
    EACH nota-fiscal OUTER-JOIN
        WHERE movto-estoq.nro-docto = nota-fiscal.nr-nota-fis
    BY movto-estoq.dt-trans DESCENDING BY movto-estoq.hr-trans DESCENDING.
The problem is that when one element is null, all the other elements in the OUTER-JOIN appear as null as well, even though they are not null. Is there a better way to write this code? Should I put 'LEFT' before the OUTER-JOIN? Thanks for your time.
To make your example easier to reproduce, consider making it work in ABL Dojo. The following code:
define temp-table ttone
    field ii as int
    .
define temp-table tttwo
    field ii as int
    field cc as char
    .

create ttone. ttone.ii = 1.
create ttone. ttone.ii = 2.
create tttwo. tttwo.ii = 2. tttwo.cc = "inner".
create tttwo. tttwo.ii = 3. tttwo.cc = "orphan".

define query q for ttone, tttwo.
open query q
    for each ttone,
    each tttwo outer-join where tttwo.ii = ttone.ii.

get first q.
do while available ttone:
    message ttone.ii tttwo.cc.
    get next q.
end.
can be run from https://abldojo.services.progress.com/?shareId=600f40919585066c219797ed
As you can see, this results in:
1 ?
2 inner
The side of the join that is not available is shown as unknown (?); the values from the outer part of the join are still shown.
Since you do not show how you are getting an unknown value for everything, perhaps you are concatenating the unknown values? In ABL, concatenating anything with the unknown value yields the unknown value.
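A rough Python sketch (not ABL) of the same behaviour: a left outer join keeps every left row and leaves the missing right side as None (ABL's unknown value, ?), and only combining fields spreads that unknown to the whole result:

```python
# Sketch in Python, not ABL: mirrors the ttone/tttwo demo above.
ttone = [{"ii": 1}, {"ii": 2}]
tttwo = {2: "inner", 3: "orphan"}   # right side, keyed by ii

# Left outer join: unmatched left rows get None for the right side.
rows = [(one["ii"], tttwo.get(one["ii"])) for one in ttone]
print(rows)  # [(1, None), (2, 'inner')]

def concat(a, b):
    # Mirrors ABL string concatenation, where "a" + ? yields ?
    return None if a is None or b is None else a + b

print(concat("x", None))     # None: one unknown poisons the whole result
print(concat("x", "inner"))  # 'xinner'
```

This is why a single unknown field can make every concatenated output column look unknown even though the underlying joined values are present.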
I've been trying to update a randomly selected row in my SQLite database using Flask and Flask-SQLAlchemy. I have just a few rows in the database, with columns called "word", "yes", and "no", where word is a string and yes and no are integers. There are two buttons on the "vote" view, yes and no. When a button is pressed, the appropriate code executes and should increment the yes or no column, and the view is updated with a new random word from the Word table.
@app.route("/vote", methods=["GET", "POST"])
def vote():
    # Get random row from database
    query = db.session.query(Word)
    rowCount = int(query.count())
    row = query.offset(int(rowCount * random.random())).first()
    # POST
    # If "yes" button is pressed, increment yes column in database
    if request.method == "POST":
        if request.form.get("yes"):
            row.yes += 1
            db.session.commit()
            return render_template("vote.html", row=row)
        # otherwise increment no column
        elif request.form.get("no"):
            row.no += 1
            db.session.commit()
            return redirect(url_for("vote"))
    # GET
    # on get request, render vote.html
    return render_template("vote.html", row=row)
This code is working, but the yes and no columns are only updated when the view comes back around to the random word the next time. If I close the browser right after clicking a button, the database is not incremented. I think this has something to do with db.session.commit(), or something about the session. It seems like:
row.yes += 1
is saved in the session object, but only committed when that database row is queried the next time. This code DID work when I replaced the query at the top of the method with:
row = Word.query.get(4)
which returns the row with id of 4. With this query, the yes or no column are updated immediately.
Any thoughts?
Thanks
Thanks all. I figured out the problem. The database incrementing was actually working fine, but I wasn't incrementing the correct rows. The problem was that I generated a random row from the database on each call of the vote() method, which meant that I got one random row for the GET request and a different random row for the POST request, and ended up incrementing that different random row in the POST request.
I separated the logic out into two methods for the "/vote" route, getWord() and vote(), and created a randRow() method for the row generation. I needed to store the random row that gets generated when getWord() is called, so I used session variables so I could access the random row from the vote() method. It's a bit verbose, but seems to work.
Anyone have a better idea about how to achieve this?
@app.route('/vote', methods=["GET"])
def getWord():
    wordObj = randRow()
    session['word'] = wordObj.word
    session['yesVotes'] = wordObj.yes
    session['noVotes'] = wordObj.no
    return render_template("vote.html", word=session['word'],
                           yesVotes=session['yesVotes'], noVotes=session['noVotes'])

@app.route('/vote', methods=["POST"])
def vote():
    # store session 'word' in word variable
    # look up word in database and store object in wordObj
    word = session['word']
    wordObj = Word.query.filter_by(word=word).first()
    # check button press on vote view, increment yes or no column
    # depending on which button was pressed
    if request.form.get("yes"):
        wordObj.yes = wordObj.yes + 1
    elif request.form.get("no"):
        wordObj.no = wordObj.no + 1
    db.session.commit()
    return redirect(url_for("getWord"))

###### HELPERS ######

# returns a random row from the database
def randRow():
    rowId = Word.query.order_by(func.random()).first().id
    row = Word.query.get(rowId)
    return row
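The diagnosis above can be sketched in a few lines of plain Python (no Flask; the session dict below is a stand-in for flask.session): picking a random row independently on each request means the POST can touch a different row than the one rendered on GET, while carrying an identifier between requests pins the vote to the row the user actually saw.

```python
import random

# Made-up stand-ins for the Word table rows.
words = [{"word": w, "yes": 0} for w in ("alpha", "beta", "gamma")]

def rand_row(rng):
    # mirrors query.offset(int(rowCount * random.random())).first()
    return words[int(len(words) * rng.random())]

rng = random.Random(42)   # fixed seed so the sketch is reproducible
shown = rand_row(rng)     # row rendered on the GET request
voted = rand_row(rng)     # row picked again on the POST request -- differs!
voted["yes"] += 1         # increments a row the user may never have seen

# Carrying an identifier between requests (as with flask.session) pins
# the vote to the row that was actually shown:
session = {"word": shown["word"]}   # stand-in for flask.session
target = next(r for r in words if r["word"] == session["word"])
target["yes"] += 1
```

With this seed the two independent draws select different rows, reproducing the bug; the session-based lookup always finds the displayed row.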
I think you need to add the updated row to the session before the commit, using code like this:
[...]
row.yes += 1
db.session.add(row)
db.session.commit()
[...]
That's the pattern that I use for a basic update in Flask-SQLAlchemy.
I'm learning LINQ, and I'm trying to figure out how to get all members whose last order failed (each member can have many orders). For efficiency reasons I'd like to do it all in LINQ before materializing it into a list, if possible.
So far I believe this is the right way to get all members who joined recently (cutoffDate is the current date minus 10 days) and have a failed order.
var failedOrders =
    from m in context.Members
    from o in context.Orders
    where m.DateJoined > cutoffDate
    where o.Status == Failed
    select m;
I expect I need to use Last or LastOrDefault, or possibly I need to use
orderby o.OrderNumber descending
and then get the First or FirstOrDefault as suggested in this stackoverflow answer.
Note that I want to look at ONLY the last order for a given member and see if that has failed (NOT just find last failed order).
Normally you would write something like:
var failedOrders = from m in context.Members
                   where m.DateJoined > cutoffDate
                   select new
                   {
                       Member = m,
                       LastOrder = m.Orders.OrderByDescending(x => x.OrderNumber).FirstOrDefault()
                   } into mlo
                   // no need for null checks here, because the query is done db-side
                   where mlo.LastOrder.Status == Failed
                   select mlo; // or select mlo.Member to have only the member
This assumes there is a Members.Orders relationship.
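For readers unfamiliar with the query shape, here is a rough Python equivalent (not LINQ) over made-up in-memory data, showing the "last order per member, then filter on its status" pattern:

```python
# Illustrative data: each member carries its own orders, like Members.Orders.
members = [
    {"id": 1, "orders": [{"n": 1, "status": "OK"}, {"n": 2, "status": "Failed"}]},
    {"id": 2, "orders": [{"n": 1, "status": "Failed"}, {"n": 2, "status": "OK"}]},
    {"id": 3, "orders": []},   # no orders: FirstOrDefault would yield null
]

def last_order(m):
    # mirrors Orders.OrderByDescending(x => x.OrderNumber).FirstOrDefault()
    return max(m["orders"], key=lambda o: o["n"], default=None)

failed = [m for m in members
          if (lo := last_order(m)) is not None and lo["status"] == "Failed"]
print([m["id"] for m in failed])  # [1]: only member 1's LAST order failed
```

Note that member 2 has a failed order but is excluded, because only the latest order per member is inspected; this is the distinction the question draws between "last order failed" and "last failed order".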
Plone 3.3.5: We have a medium-sized Plone site and we'd like to update its workflows. Since it's a long-running process, we noticed something strange going on: our Archetypes accessors, not security related, were called when hitting "Update security settings" in portal_workflow.
It looks like the culprit is the update_metadata=1 default setting in ZCatalog:
-> self.plone_log("treatmentToImagingHours: %s"%str(treatmentToImagingHours))
(Pdb) bt
/Users/moo/sits/parts/zope2/lib/python/ZServer/PubCore/ZServerPublisher.py(25)__init__()
-> response=b)
/Users/moo/sits/parts/zope2/lib/python/ZPublisher/Publish.py(401)publish_module()
-> environ, debug, request, response)
/Users/moo/sits/parts/zope2/lib/python/ZPublisher/Publish.py(202)publish_module_standard()
-> response = publish(request, module_name, after_list, debug=debug)
/Users/moo/sits/parts/zope2/lib/python/ZPublisher/Publish.py(119)publish()
-> request, bind=1)
/Users/moo/sits/parts/zope2/lib/python/ZPublisher/mapply.py(88)mapply()
-> if debug is not None: return debug(object,args,context)
/Users/moo/sits/parts/zope2/lib/python/ZPublisher/Publish.py(42)call_object()
-> result=apply(object,args) # Type s<cr> to step into published object.
<string>(4)_facade()
/Users/moo/sits/parts/zope2/lib/python/AccessControl/requestmethod.py(64)_curried()
-> return callable(*args, **kw)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(457)updateRoleMappings()
-> count = self._recursiveUpdateRoleMappings(portal, wfs)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(609)_recursiveUpdateRoleMappings()
-> count = count + self._recursiveUpdateRoleMappings(v, wfs)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(609)_recursiveUpdateRoleMappings()
-> count = count + self._recursiveUpdateRoleMappings(v, wfs)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(609)_recursiveUpdateRoleMappings()
-> count = count + self._recursiveUpdateRoleMappings(v, wfs)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(609)_recursiveUpdateRoleMappings()
-> count = count + self._recursiveUpdateRoleMappings(v, wfs)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(609)_recursiveUpdateRoleMappings()
-> count = count + self._recursiveUpdateRoleMappings(v, wfs)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(600)_recursiveUpdateRoleMappings()
-> ob.reindexObject(idxs=['allowedRolesAndUsers'])
/Users/moo/sits/eggs/Products.Archetypes-1.5.11-py2.4.egg/Products/Archetypes/CatalogMultiplex.py(115)reindexObject()
-> c.catalog_object(self, url, idxs=lst)
/Users/moo/sits/eggs/Plone-3.3rc2-py2.4.egg/Products/CMFPlone/CatalogTool.py(417)catalog_object()
-> update_metadata, pghandler=pghandler)
/Users/moo/sits/parts/zope2/lib/python/Products/ZCatalog/ZCatalog.py(536)catalog_object()
-> update_metadata=update_metadata)
/Users/moo/sits/parts/zope2/lib/python/Products/ZCatalog/Catalog.py(348)catalogObject()
-> self.updateMetadata(object, uid)
/Users/moo/sits/parts/zope2/lib/python/Products/ZCatalog/Catalog.py(277)updateMetadata()
-> newDataRecord = self.recordify(object)
/Users/moo/sits/parts/zope2/lib/python/Products/ZCatalog/Catalog.py(417)recordify()
-> if(attr is not MV and safe_callable(attr)): attr=attr()
/Users/moo/sits/products/SitsPatient/content/SitsPatient.py(2452)outSichECASS()
portal_workflow calls ob.reindexObject(idxs=['allowedRolesAndUsers']). However, this triggers a refresh of all metadata.
1) Is this normal behavior?
2) Is this desired behavior?
3) Can I turn update_metadata off to speed up the process without breaking anything? Does portal security rely on metadata in any point?
Yes, this is normal behaviour. The catalog stores a subset of information that an object provides as a cache, so you can render pages with just catalog results without having to wake up the original objects. This includes the current workflow state for an object.
When reindexing, the catalog must update the metadata too, as it has no means of determining if that data has changed or not.
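A toy Python sketch (not ZCatalog's real API) of why reindexing even a single index recomputes every metadata column: the catalog stores only the computed values, so the only way to refresh them is to call every column accessor again.

```python
# Hypothetical miniature catalog; names are illustrative, not ZCatalog's.
class Doc:
    def __init__(self, state, title):
        self.review_state, self.Title = state, title

class TinyCatalog:
    columns = ("review_state", "Title")   # metadata columns

    def __init__(self):
        self.metadata = {}

    def catalog_object(self, uid, obj, update_metadata=True):
        if update_metadata:
            # like recordify(): calls every column accessor, even ones
            # unrelated to the index being refreshed -- this is what wakes
            # up the Archetypes accessors during the workflow update
            self.metadata[uid] = tuple(getattr(obj, c) for c in self.columns)

cat = TinyCatalog()
doc = Doc("private", "Report")
cat.catalog_object("doc1", doc)
doc.review_state = "published"
cat.catalog_object("doc1", doc)           # refreshes ALL columns
print(cat.metadata["doc1"])  # ('published', 'Report')
```

The catalog has no change-tracking for individual columns, so it must recompute the full record or risk serving stale cached values.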
In this particular process, you cannot turn update_metadata off without patching; you'd have to do one of the following:
patch Products.ZCatalog.Catalog.catalogObject to switch off update_metadata there,
patch Products.Archetypes.CatalogMultiplex.CatalogMultiplex.reindexObject to call catalogObject with the update_metadata flag set to False, or
patch the workflow tool to call reindexObjectSecurity instead of reindexObject.
You'd also have to audit your catalog schema (metadata) columns to verify that nothing will indeed change when you update workflow security.