I am using Heat to implement auto-scaling; below is a short part of my template:
heat_template_version: 2016-10-14
...
resources:
  corey_server_group:
    type: OS::Heat::AutoScalingGroup
    depends_on: corey-server
    properties:
      min_size: 1
      max_size: 5
      resource:
        type: CoreyLBSServer.yaml
        properties:
          ......
CoreyLBSServer.yaml
heat_template_version: 2016-10-14
...
resources:
  server:
    type: OS::Nova::Server
    properties:
      flavor:
      ......
I am looking for a way to scale down a specific instance. Here are some approaches I've tried, but none of them worked; it always scales down the oldest one.
1. Shut down the instance, then signal the scaledown policy. (X)
2. According to this, find the stack ID from the refs_map attribute, mark the resource server as unhealthy, then signal the scaledown policy. (X)
3. Find the stack ID from the refs_map attribute, set the stack status to FAILED, then signal the scaledown policy. (X)
I tried to find out what strategy AutoScalingGroup uses while scaling down. From the code in heat/common/grouputils.py, it sorts members by created_time and then by name, so the oldest member is deleted first when scaling down. There is one exception: if include_failed is set, failed members are put first in the list, sorted by created_time then by name.
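To illustrate, the ordering boils down to something like the following simplified Python sketch (my own illustration, not the actual Heat source; the member attributes are assumptions):

def sort_members(members, include_failed=False):
    # Simplified sketch of the ordering described above (not the real Heat code).
    # Members are assumed to expose .status, .created_time and .name.
    if not include_failed:
        members = [m for m in members if m.status != 'FAILED']
    return sorted(
        members,
        key=lambda m: (m.status != 'FAILED',  # failed members sort first
                       m.created_time,        # then oldest first
                       m.name),
    )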
Update
I finally managed to mark my target as "failed"; here are the commands:
# firstly, print the physical_resource_id of corey_server_group
openstack stack resource show -c physical_resource_id <parent_stack_id> corey_server_group
# secondly, list resources in the corey_server_group
openstack stack resource list <physical_resource_id>
# thirdly, mark the target as unhealthy
openstack stack resource mark unhealthy <physical_resource_id> <resource_name>
# after these commands, you will see the resource_status of the target becomes "Check Failed"
But there is another problem: Heat deletes both the "failed" and the "oldest" resources while scaling down! How can I scale down only the target marked as failed?
After a few days of tracing, I finally found a way to scale down a specific instance in the AutoScalingGroup.
Let's take a glance at the source code first: heat/common/grouputils.py#L114
Sort the list of instances first by created_time then by name. If
include_failed is set, failed members will be put first in the list
sorted by created_time then by name.
As you can see, include_failed is set to False by default, so unhealthy members won't be included in the list; that's why the procedure described in my question didn't work.
If you want to enable scaling down a particular instance, you must explicitly pass include_failed=True when calling those functions. Because I'm using AutoScalingGroup, I needed to modify two files (a sketch of the kind of change follows the list):
heat/engine/resources/aws/autoscaling/autoscaling_group.py
heat/engine/resources/openstack/heat/autoscaling_group.py
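I can't paste my whole patch here, but the change boils down to passing include_failed=True wherever those resources call the grouputils helpers. A simplified Python sketch (the wrapper function below is illustrative, not the verbatim Heat code):

from heat.common import grouputils

def group_members(group_resource):
    # Illustrative only: by default include_failed is False, which silently
    # drops the members you marked unhealthy, so the scale-down logic never
    # sees them. Passing include_failed=True keeps them in the list (sorted
    # first), so they are the ones removed on the next scale-down signal.
    return grouputils.get_members(group_resource, include_failed=True)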
Restart the Heat services; then you can mark the target as unhealthy and signal the policy to scale down the specific instance:
openstack stack resource mark unhealthy <physical_resource_id> <resource_name>
openstack stack resource signal <parent_stack_id> your_scaledown_policy
FYI, the table shows the different behavior between False and True (scaling_adjustment=1).
                   | include_failed=False (default) | include_failed=True
-------------------+--------------------------------+--------------------------
Scale Up           | Add one instance               | Add one instance
Scale Down         | Remove the oldest              | Remove the oldest
Stack Update       | Nothing changed                | Nothing changed
Unhealthy + Up     | Add one & remove unhealthy     | Add one & fix unhealthy
Unhealthy + Down   | Remove one & remove unhealthy  | Remove the unhealthy one
Unhealthy + Update | Fix unhealthy                  | Fix unhealthy
Related
I have multiple Azure Functions in a single Azure Function App resource, where each function's logs are stored with the function name in the operation_Name column of the Application Insights logs. For all function names, I am logging messages with Warnings (severityLevel=2) and Errors (severityLevel=3).
Expected: I am trying to show all functions' warnings and errors in a single pie chart and later pin it to a dashboard. The pie chart should show how many errors and warnings each function has in a single Azure Function App resource.
Actual: The pie chart combines all severity levels for each function name (operation_Name) for a single Azure Function App resource.
traces
| where severityLevel >1
| where cloud_RoleName == 'dev-test-functionapp' //Azure Function App Resource Name
| where operation_Name in ('Function1Name','Function2Name','Function3Name')
| summarize by operation_Name,severityLevel
| render piechart
If I understand correctly, this could work:
traces
| where severityLevel > 1
| extend severityLevel = case(severityLevel == 2, "Warning", severityLevel == 3, "Error", tostring(severityLevel))
| where cloud_RoleName == 'dev-test-functionapp'
| where operation_Name in ('Function1Name','Function2Name','Function3Name')
| summarize count() by s = strcat(severityLevel, "_", operation_Name)
| render piechart
(Not an answer)
Replying to the OP regarding my comment about the choice of visualization.
The pie chart is an overused visualization.
It is great for storytelling in some scenarios, when you want to emphasize the dominance of one or two elements or the lack of such.
It is quite bad for anything else.
It makes it very difficult to observe the details when there are more than just a few elements, and it is also very difficult to see the ratio between those elements.
Here is another option, an unstacked column chart:
I have a zone for a single TLD. I am trying to process the file data and convert it into JSON for other services that use this data. Here are the first five lines of the file I have:
com. 900 in soa a.gtld-servers.net. nstld.verisign-grs.com. 1612915221 1800 900 604800 86400
0-------------------------------------------------------------0.com. 172800 in ns ns1.domainit.com.
0-------------------------------------------------------------0.com. 172800 in ns ns2.domainit.com.
0-------------------------------------------------------------5.com. 172800 in ns fns.frogsmart.net.
0-------------------------------------------------------------5.com. 172800 in ns sns.frogsmart.net.
0-------------------------------------------------------------5.com. 172800 in ns tns.frogsmart.net.
Now I am not sure how to interpret this file's data. I have looked at reference and example zone files in multiple places, but they do not resemble this format. One of the references can be found here. I just need some pointers on how to interpret each line. My understanding is the following:
The first value is the domain name
The next value is a number which, if I use the first line as a header, seems to be 900 (not sure what it is)
The next value is in (not sure what this is)
The next value is soa, or ns in the other lines (I think this means the Start of Authority for the domain is with the name server)
Lastly, the name server which, if I use the first line as a header, seems to be a.gtld-servers.net (I think this is the primary SOA address)
The first line, I think, indicates 10 properties, but the other properties are not present in the rest of this file I am trying to process. That's all I could figure out so far, and some help would be greatly appreciated.
First, a warning: zone files can be big, especially the .com one, and if you convert it to JSON, especially if you intend to fully build the object in memory before using it, you might run into trouble.
So you should start by asking yourself whether you really need all the data (for example, as seen below, what will you do with the SOA content?) and whether JSON is the most adequate representation, especially if it is not processed in a streaming way.
DNS data is explained in RFC 1034+1035.
More specifically, §3.3.13 in RFC 1035:
3.3.13. SOA RDATA format
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
/ MNAME /
/ /
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
/ RNAME /
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| SERIAL |
| |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| REFRESH |
| |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| RETRY |
| |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| EXPIRE |
| |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| MINIMUM |
| |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
where:
MNAME    The <domain-name> of the name server that was the
         original or primary source of data for this zone.
RNAME    A <domain-name> which specifies the mailbox of the
         person responsible for this zone.
SERIAL   The unsigned 32 bit version number of the original copy
         of the zone. Zone transfers preserve this value. This
         value wraps and should be compared using sequence space
         arithmetic.
REFRESH  A 32 bit time interval before the zone should be
         refreshed.
RETRY    A 32 bit time interval that should elapse before a
         failed refresh should be retried.
EXPIRE   A 32 bit time value that specifies the upper limit on
         the time interval that can elapse before the zone is no
         longer authoritative.
MINIMUM  The unsigned 32 bit minimum TTL field that should be
         exported with any RR from this zone.
But do note that the semantics have changed in later RFCs; the MINIMUM field is now used as the negative caching TTL.
Also, IN (case not significant) means INternet, but all records will have that; consider it a leftover of past DNS experiments around classes that never took off.
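If you do decide to go ahead, here is a minimal Python sketch (my own illustration, assuming the simple one-record-per-line format shown in the question and ignoring zone-file subtleties such as $ directives, multi-line parentheses and comments) that streams records to JSON instead of building everything in memory:

import json
import sys

def zone_lines_to_json(lines):
    # Each line in the question's file looks like:
    #   owner TTL class type rdata...
    for line in lines:
        line = line.strip()
        if not line or line.startswith(';'):
            continue  # skip blank lines and comments
        owner, ttl, rrclass, rrtype, *rdata = line.split()
        yield json.dumps({
            'owner': owner,
            'ttl': int(ttl),           # e.g. 900 or 172800 (seconds)
            'class': rrclass.upper(),  # almost always IN
            'type': rrtype.upper(),    # SOA, NS, ...
            'rdata': rdata,            # e.g. the name server for NS records
        })

if __name__ == '__main__':
    for record in zone_lines_to_json(sys.stdin):
        print(record)

Saved as, say, zone_to_json.py and run as python zone_to_json.py < zonefile, this prints one JSON object per record, which keeps memory usage flat even for a zone the size of .com.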
The query is actually pretty simple:
traces
| extend SdIds = customDimensions.SdIds
| where isnull(customDimensions.AmountOfBlobStorageLoadedRows) == false
or isnull(customDimensions.AmountOfRowsAfterTransformation) == false
or isnull(customDimensions.AmountOfRowsIngestedToDW) == false
| summarize
BlobReadSum=sum(toint(customDimensions.AmountOfBlobStorageLoadedRows)),
TransformationSum=sum(toint(customDimensions.AmountOfRowsAfterTransformation)),
SavedToDWSum=sum(toint(customDimensions.AmountOfRowsIngestedToDW))
by tostring(SdIds)
| order by BlobReadSum desc, TransformationSum desc, SavedToDWSum desc
| limit 10
The following picture shows the Application Insights log tool. As expected, the biggest values appear first in the chart:
However, the picture below shows the output of the same query, using the same time range, published to a shared dashboard:
What happened to the order?
Is there any setting that may interfere on this?
You could add | sort by tostring(SdIds) after the | order clause at the end of your query:
| order by BlobReadSum desc, TransformationSum desc, SavedToDWSum desc
| sort by tostring(SdIds)
| limit 10
In Azure Log Analytics dashboard parts, there is an automatic sort of the x-axis when its type is string.
You might notice that the chart sort in the dashboard is just the opposite. In that case, click "Open chart in Analytics" in the top right corner of your part, and change the desc/asc sort configuration of the | sort by tostring(SdIds) command.
In my Angular application I am tracking filters that users apply on one of the pages. What I can later see in Logs is the following (query for the last 24 hours):
What I am interested in is the count of filters grouped by name, so I created the following query:
However, the problem, as you can see, is that my y-axis starts from 1 instead of 0. To users this looks like the last two filters don't have any values, when in reality they both have a count of 1.
I have tried to use ymin=0 together with the render function, but it did not work (the chart still starts from 1). Then I read that I need to use the make-series() function, so I tried:
customEvents
| where timestamp >= ago(24h)
| where customDimensions.pageName == 'product'
| make-series Count=count(name) default=0 on timestamp from datetime(2019-10-10) to datetime(2019-10-11) step 1d by name
| project name, Count
However the result is some weird matrix instead of a regular table:
I have just started with Application Insights, so any help in this matter would be much appreciated. Thank you.
In Workbooks in Application Insights you could do almost this query (see below for a simplification), then use the chart settings and set the axis min/max explicitly.
But why are you using make-series and then summarizing down to just one series?
In this specific case, summarize is simpler:
customEvents
| where timestamp between(datetime(2019-10-10) .. datetime(2019-10-11))
| where customDimensions.pageName == 'product'
| summarize Count=count(name) by name
| render barchart
In the Logs blade (where you are), you could do this query, and I believe you can use
render barchart title="blah" ymin=0
(At some point Workbooks will be able to "see" all the render options like ymin/ymax/xmin/xmax/title/etc., but right now they're all stripped out at the service layer.)
A bit late to the party, but the correct syntax to pass in ymin and ymax when using a query is this:
| ...
| render barchart with (ymin=0, ymax=100)
See https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/renderoperator?pivots=azuremonitor
I'm working on a solution for Cassandra that's proving impossible.
We have a table that will return a set of candidates given some search criteria. The row with the highest score is returned back to the user. We can do this quite easily with SQL, but there's a need to migrate to Cassandra. Here are the tables involved:
Value
ID | VALUE | COUNTRY | STATE | CITY | COUNTY
--------+---------+----------+----------+-----------+-----------
1 | 50 | US | | |
--------+---------+----------+----------+-----------+-----------
2 | 25 | | TX | |
--------+---------+----------+----------+-----------+-----------
3 | 15 | | | MEMPHIS |
--------+---------+----------+----------+-----------+-----------
4 | 5 | | | | BROWARD
--------+---------+----------+----------+-----------+-----------
5 | 30 | | NY | NYC |
--------+---------+----------+----------+-----------+-----------
6 | 20 | US | | NASHVILLE |
--------+---------+----------+----------+-----------+-----------
Scoring
ATTRIBUTE | SCORE
-------------+-------------
COUNTRY | 1
STATE | 2
CITY | 4
COUNTY | 8
A query is sent that can have any of those four attributes populated or not. We search through our values table, calculate the scores, and return the highest one. If a column in the values table is null, it means it's applicable for all.
ID 1 is applicable for all states, cities, and counties within the US.
ID 2 is applicable for all countries, cities, and counties where the state is TX.
Example:
Query: {Country: US, State: TX}
Matches Value IDs: [1, 2, 3, 4, 6]
Scores: [1, 2, 4, 8, 5(1+4)]
Result: {id: 4} (8 was the highest score, so Broward is returned)
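To make the intended semantics concrete, here is a small Python sketch of the matching and scoring rules implied by the example above (application-side illustration only, not a Cassandra data model; the helper names are hypothetical):

# A row is eliminated only if one of its non-null attributes conflicts with an
# attribute the query actually specifies; its score is the sum of the weights
# of its non-null attributes.
WEIGHTS = {'country': 1, 'state': 2, 'city': 4, 'county': 8}

def best_match(values, query):
    def matches(row):
        return all(query.get(attr) is None or row[attr] == query[attr]
                   for attr in WEIGHTS if row[attr] is not None)

    def score(row):
        return sum(w for attr, w in WEIGHTS.items() if row[attr] is not None)

    candidates = [row for row in values if matches(row)]
    return max(candidates, key=score) if candidates else None

# The rows from the Value table and the example query {Country: US, State: TX}:
values = [
    {'id': 1, 'country': 'US', 'state': None, 'city': None,        'county': None},
    {'id': 2, 'country': None, 'state': 'TX', 'city': None,        'county': None},
    {'id': 3, 'country': None, 'state': None, 'city': 'MEMPHIS',   'county': None},
    {'id': 4, 'country': None, 'state': None, 'city': None,        'county': 'BROWARD'},
    {'id': 5, 'country': None, 'state': 'NY', 'city': 'NYC',       'county': None},
    {'id': 6, 'country': 'US', 'state': None, 'city': 'NASHVILLE', 'county': None},
]
print(best_match(values, {'country': 'US', 'state': 'TX'}))  # -> row with id 4

Running this against the example query reproduces the result above (id 4, score 8).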
How would you model something like this in Cassandra 2.1?
I found that the best way to achieve this was using Solr with Cassandra.
Some things to note about using Solr, since all the resources I needed were scattered across the internet:
You must first start Cassandra with Solr. The dse tool has an option for starting Cassandra with Solr enabled:
$CASSANDRA_HOME/bin/dse cassandra -s
You must create your keyspace with NetworkTopologyStrategy and Solr enabled:
CREATE KEYSPACE ... WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'Solr': 1}
Once you create your table within your Solr-enabled keyspace, create a core using dsetool:
$CASSANDRA_HOME/bin/dsetool create_core keyspace.table_name generateResources=true reindex=true
This allows Solr to index your data and generates a number of secondary indexes against your Cassandra table.
Performing the queries needed for columns whose values may or may not exist requires a somewhat complex query:
SELECT * FROM keyspace.table_name WHERE solr_query = '{"q": "(-column:[* TO *] AND *:*) OR column:value"}';
Finally, you may notice that when searching for text, a Solr query like column:"Hello" may pick up other unwanted values like HelloWorld or HelloThere. This is due to the data type used for the field in Solr's schema.xml. Here's how to modify this behavior:
Head to your Solr Admin UI. (Normally http://hostname:8983/solr/)
Choose your core in the drop-down list in the left pane; it should be named keyspace.table_name.
Look for Config or Schema; both should take you to the schema.xml.
Copy and paste that file into a text editor. Optionally, you could try using wget or curl to download the file, but you need the real link, which is provided in the text field at the top right.
There's a <fieldType> tag with the name TextField. Replace org.apache.solr.schema.TextField with org.apache.solr.schema.StrField. You must also remove the analyzers; StrField does not support them.
That's it, hopefully I've saved people from all the headaches I encountered.