How to graph stacked RRD data by standard deviation to maximize readability

It's a pretty specific use case, but it can come in handy.
Imagine you have several data series to stack, some varying a lot and some being almost constant.
With the default order, if the variable data is stacked under the constant data, the constant data ends up riding on a very variable base.
Stacking the least variable data at the bottom helps.
Example: these two graphs show how readability improves by stacking deepest the data that moves the least, i.e. has the smallest standard deviation.
Default graphing can lead to poor readability.
Readability improves when sorting by standard deviation.

1. Use rrdtool graph with the PRINT command to get the standard deviation of each data source (stdev_array).
2. Sort stdev_array.
3. Graph by stacking in the order of stdev_array.
Here is the code in PHP, but any language will do.
I'm using RRDtool 1.4.5.
Don't forget to define $rrd_path (path to the RRD file), $img_path (path where the image is written), $data_sources (an array of DS names, depending on how you built your RRD) and $rrd_colors (an array of hex colors).
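For illustration, those variables might look like this (hypothetical example values, adjust to your setup):
// Example definitions only; paths, DS names and colors are placeholders
$rrd_path = '/var/lib/rrd/servers.rrd';
$img_path = '/var/www/img/stacked.png';
$data_sources = array('web1', 'web2', 'db1'); // DS names as defined when the RRD was created
$rrd_colors = array('#FF0000', '#00CC00', '#0000FF'); // the leading '#' matters: the graph command concatenates them directly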
$rrd_colors_count = count($rrd_colors);

// Step 1: a throwaway graph whose only job is to PRINT each DS's standard deviation
$stdev_command = "rrdtool graph /dev/null ";
foreach ($data_sources as $index => $ds_name)
{
    $stdev_command .= "DEF:serv$index=$rrd_path:$ds_name:AVERAGE ";
    $stdev_command .= "VDEF:stdev$index=serv$index,STDEV PRINT:stdev$index:%lf ";
}
exec($stdev_command, $stdev_order, $ret);
if ($ret === 0)
{
    array_shift($stdev_order); // remove first useless line "0x0" (may depend on your RRDtool version?)
    asort($stdev_order);       // Step 2: sort by standard deviation, keeping the indexes
}
else
{
    $stdev_order = $data_sources; // fallback in case $stdev_command failed
}

// Step 3: stack the data sources in ascending order of standard deviation
$graph_command = "rrdtool graph $img_path ";
$graph_command .= "AREA:0 "; // invisible base that the STACKs pile onto
foreach ($stdev_order as $index => $useless)
{
    $ds_name = $data_sources[$index];
    $graph_command .= "DEF:line$index=$rrd_path:$ds_name:AVERAGE ";
    $graph_command .= "STACK:line$index" . $rrd_colors[$index % $rrd_colors_count] . ' ';
}
exec($graph_command, $out, $ret);
// check $ret (and $out) to see if all went well


Fetch a post under a certain number of words or characters in WordPress?

Is it possible to fetch a post with content under 140 characters or 25 words? If so, how can I do it?
Here is my random post code:
// Random post link
function randomPostlink() {
    $RandPostQuery = new WP_Query(array('post_type' => array('tip'), 'posts_per_page' => 1, 'orderby' => 'rand'));
    while ($RandPostQuery->have_posts()) : $RandPostQuery->the_post();
        the_permalink(); // the_permalink() echoes by itself, no echo needed
    endwhile;
    wp_reset_postdata();
}
Character count is easy, you can just add the condition AND CHAR_LENGTH(post_content) < 140 to your where clause.
Word count is more difficult because there is no built-in MySQL function for counting words. You can find simple solutions that don't work in every use case, as well as complete solutions that use stored functions. I'll use a simple solution for the sake of example.
What you need to do is add a filter to the where clause and apply your additional conditions there:
add_filter( 'posts_where', 'venki_post_length_limit' );
function venki_post_length_limit( $where = '' ) {
    remove_filter( 'posts_where', 'venki_post_length_limit' );
    // Double quotes around the SQL so the single quotes inside REPLACE() don't break the string
    $where .= " AND (
        CHAR_LENGTH(post_content) < 140 OR
        (LENGTH(post_content) - LENGTH(REPLACE(post_content, ' ', '')) + 1) < 25
    ) ";
    return $where;
}
Notice that I remove the filter as soon as the function is called. This is so you don't apply this same condition to every query.
You should also be aware that both of those conditions are costly compared to a simple lookup on a column value (especially the word count). Neither can utilize indexes. If you have a large number of posts you may run into performance issues if you're running this query frequently. A better solution might be to calculate the word and character count when the post is created/updated and store that as meta data.
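For example, a minimal sketch of that metadata approach (the hook callback and meta key names here are my own, hypothetical choices):
// Recompute and store both counts whenever a post is saved
add_action( 'save_post', 'venki_store_length_meta' );
function venki_store_length_meta( $post_id ) {
    $text = wp_strip_all_tags( get_post_field( 'post_content', $post_id ) );
    update_post_meta( $post_id, '_char_count', strlen( $text ) );
    update_post_meta( $post_id, '_word_count', str_word_count( $text ) );
}
A WP_Query with a meta_query against _char_count or _word_count can then filter posts without computing lengths in SQL on every request.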

Lasso 9 Hangs on Inserting Pair with Map Value into Array?

EDIT: I accidentally misrepresented the problem when trying to pare down the example code. A key part of my code is that I am attempting to sort the array after adding elements to it. The hang appears on sort, not insert. The following abstracted code will consistently hang:
<?=
local('a' = array)
#a->insert('test1' = map('a'='1'))
#a->insert('test2' = map('b'='2')) // comment out to make it work
#a->sort
#a
?>
I have a result set for which I want to insert a pair of values into an array for each unique key, as follows:
resultset(2) => {
    records => {
        if(!$logTypeClasses->contains(field('logTypeClass'))) => {
            local(i) = pair(field('logTypeClass'), map('title' = field('logType'), 'class' = field('logTypeClass')))
            log_critical(#i)
            $logTypeClasses->insert(#i) // Lasso hangs on this line, returns if commented out
        }
    }
}
Strangely, I cannot insert the #i local variable into the thread variable without Lasso hanging. I never receive an error and the page never returns; it just hangs indefinitely.
I do see the pairs logged correctly, which leads me to believe that the pair-generating syntax is correct.
I can make the code work as long as the value side of the pair is not a map with values. In other words, it works when the value side of the pair is a string, or even an empty map. As soon as I add key=value parameters to the map, it fails.
I must be missing something obvious. Any pointers? Thanks in advance for your time and consideration.
I can verify the bug with the basic sorting code you sent. The question does arise of how exactly one sorts pairs. I'm betting you want them sorted by the first element in the pair, but I could also see the claim that they should be sorted by the last element (by values instead of by keys).
One thing that might work better is to keep it as a map of maps. If you need the sorted data for some reason, you could do map->keys->asArray->sort
Ex:
local(data) = map('test1' = map('a'=2,'b'=3))
#data->insert('test2' = map('c'=33, 'd'=42))
local(keys) = #data->keys->asArray
#keys->sort
#keys
Even better, if you're simply going to iterate through a sorted set, you can use a query expression:
local(data) = map('test1' = map('a'=2,'b'=3))
#data->insert('test2' = map('c'=33, 'd'=42))
with elm in #data->eachPair
let key = #elm->first
let value = #elm->second
order by #key
do { ... }
I doubt your problem is the pair-with-map construct per se.
This test code works as expected:
var(testcontainer = array)
inline(-database = 'mysql', -table = 'help_topic', -findall) => {
    resultset(1) => {
        records => {
            if(!$testcontainer->contains(field('name'))) => {
                local(i) = pair(field('name'), map('description' = field('description'), 'name' = field('name')))
                $testcontainer->insert(#i)
            }
        }
    }
}
$testcontainer
When Lasso hangs like that with no feedback and no immediate crash it is usually trapped in some kind of infinite loop. I'm speculating that it might have to do with Lasso using references whenever possible. Maybe some part of your code is using a reference that references itself. Or something.

PHPExcel removeColumn taking too much time

I will describe my requirements and how my code tries to meet them.
I have to generate and save an Excel file on the server, with specific styles and formulae, which the user will later download. The user selects which columns they want when generating the file.
Logic I wrote
I placed an Excel file with the required styling on the server, but with empty cells that I fill later. That way I avoid writing code to style all of those cells.
Then I fill all the columns with data from the database. Next I read the list of columns to delete from a posted array and delete them in reverse order, to make sure I delete the right columns. This works, but deleting each column takes far too long: at least 4 to 5 minutes per column, and the deletion time grows steeply as the number of columns increases.
Code
$objReader = PHPExcel_IOFactory::createReader('Excel5');
$objPHPExcel = $objReader->load($inputFileName);
$objPHPExcel->getProperties()->setCreator(user_data('name'))
    ->setLastModifiedBy(user_data('name'))
    ->setTitle("Grid file")
    ->setSubject("Grid file")
    ->setDescription("Grid file")
    ->setKeywords("Grid file")
    ->setCategory("Grids");
$col = 0;
$worksheet = $objPHPExcel->getActiveSheet();
for ($i = 19; $i < count($grid_items) + 19; $i++) {
    $col = 0;
    foreach ($grid_items[$i - 19] as $columnname => $value) {
        $coval = PHPExcel_Cell::stringFromColumnIndex($col) . $i;
        $worksheet->setCellValue($coval, $value);
        $col++;
    }
}
$worksheet->removeColumnByIndex(11);
$worksheet->removeColumnByIndex(12);
$worksheet->removeColumnByIndex(13);
$worksheet->removeColumnByIndex(14);
$objWriter = PHPExcel_IOFactory::createWriter($objPHPExcel, 'Excel5');
$finalFilename = 'Master_Grid_excel_' . $this->job_id . '-' . date('Y-m-d-H-i-s') . '.xls';
$objWriter->save(SITE_ROOT . 'uploads/rfp/' . $finalFilename);
return;
Well removeColumn() is computationally intensive anyway; but you are calling it 4 times when you only need to call it once. The removeColumn() and removeColumnByIndex() methods accept an optional second argument specifying the number of columns to remove, defaulting to 1; but if you want to remove a number of consecutive columns (such as 11, 12, 13 and 14) then you can do:
$worksheet->removeColumnByIndex(11, 4);
and that 1 call will be 4 times faster than 4 individual calls.
Note that the same additional argument applies to removing rows as well as columns; and to inserting columns and rows as well.
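For instance (the row and column values here are made-up examples):
// The optional count argument works for rows and for inserts too
$worksheet->removeRow(5, 3); // removes rows 5, 6 and 7 in one call
$worksheet->insertNewColumnBefore('D', 2); // inserts two columns before column D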
However: if you modified the logic of your
foreach ($grid_items[$i - 19] as $columnname => $value) {
loop so that it didn't write those columns in the first place, and removed any unnecessary columns (to eliminate the header line entries) before that loop, then you wouldn't be executing removeColumn() against a fully populated spreadsheet.
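A minimal sketch of that idea, assuming a $columns_to_skip array (a name invented here) holding the posted column names to omit:
foreach ($grid_items[$i - 19] as $columnname => $value) {
    if (in_array($columnname, $columns_to_skip)) {
        continue; // never written, so there is nothing to remove afterwards
    }
    $coval = PHPExcel_Cell::stringFromColumnIndex($col) . $i;
    $worksheet->setCellValue($coval, $value);
    $col++;
}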
When you run $worksheet->removeColumnByIndex(11), the column that was at index 12 shifts to index 11, so you have to delete from the highest index down to the lowest. You can sort the posted indexes and reverse them with PHP's array_reverse() (big => small):
sort($chk); // ascending order
$d = array_reverse($chk); // now descending: highest index first
foreach ($d as $v) {
    $worksheet->removeColumnByIndex($v);
}

Alternative to Recursive Function

I have been working on an MLM (multi-level marketing) application.
Below is a code snippet (not the entire code) of a recursive function I wrote in the initial phase; it worked properly, but now the MLM tree is too deep and the recursion stops with "maximum nesting level exceeded". I have increased the nesting limit a few times, but I don't want to increase it further, as I know that's not the right solution.
Can anyone suggest alternative (maybe iterative) code for this?
<?php
function findallpairs($username, $totalusers = 0)
{
    $sql = "select username, package_id from tbl_user where
            parent_id = '" . $username . "' order by username";
    $result = mysql_query($sql);
    if (mysql_num_rows($result) > 0)
    {
        while ($row = mysql_fetch_array($result))
        {
            $username = $row["username"];
            $totalusers++;
            $arrtmp = findallpairs($username, $totalusers);
            $totalusers = $arrtmp["totalusers"];
        }
    }
    $arrpoints["totalusers"] = $totalusers;
    return $arrpoints;
}
?>
Note: please remember my original code is much bigger, but I have pasted just the important part of the logic here.
It would be a great help if I could find an alternative solution to this.
Thanks in advance!
How deep are you going?
The data makes a multiway tree within your SQL database. Trees are recursive structures, and recursive code is what fits them naturally.
You may be able to use what I'm calling quasi-memoization.
This should be easy if you have the children listed in the DB structure. Take a result set of all users with no children and memoize their values into a hash or tree, with the key being the user ID and the value 1. Then iterate over each user (or just the parents of memoized entries) and, if all of a user's children have memoized values, add them together and memoize that sum. Repeat the iteration until you reach the root (a user with no parent).
If you don't have a record of children, it's likely terribly inefficient.
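For what it's worth, here is a minimal iterative sketch of the question's function, replacing call-stack recursion with an explicit queue (breadth-first). It assumes the same tbl_user schema and keeps the mysql_* API from the question, although mysqli or PDO with prepared statements would be preferable today:
function findallpairs_iterative($username)
{
    $totalusers = 0;
    $queue = array($username); // usernames whose children still need visiting
    while (!empty($queue)) {
        $parent = array_shift($queue);
        $sql = "select username from tbl_user where
                parent_id = '" . mysql_real_escape_string($parent) . "' order by username";
        $result = mysql_query($sql);
        while ($row = mysql_fetch_array($result)) {
            $totalusers++; // count this descendant
            $queue[] = $row["username"]; // and visit its children later
        }
    }
    return array("totalusers" => $totalusers);
}
However deep the tree gets, the only growing structure is the $queue array, so there is no nesting limit to hit.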

Is there a standard way to diff du outputs to detect where disk space usage has grown the most

I work with a small team of developers sharing a Unix file system to store somewhat large datasets. The file system has a somewhat prohibitive quota, so about once a month we have to figure out where our free space has gone and see what we can recover.
Obviously we use du a fair amount, but it's still a tedious process. I had the thought that we could keep last month's du output around and compare it to this month's to see where we've had the most growth. My guess is this plan isn't very original.
With this in mind, I'm asking whether there are any scripts out there that already do this.
Thanks.
I wrote a program to do this called diff-du. I can't believe nobody had already done this! Anyhow, I find it useful and I hope you will too.
I don't know if there is a standard way, but I needed this some time ago and wrote a small Perl script to handle it. Here is the relevant part of my code:
#!/usr/bin/perl
$FileName = "du-previous";
$Location = "."; # directory to scan; adjust as needed
my %Sizes;       # path => size

# Current +++++++++++++++++++++++++++++
$Current = `du "$Location"`;
open my $CurrentFile, '<', \$Current;
while (<$CurrentFile>) {
    chomp;
    if (/^([0-9]+)[ \t]+(.*)$/) {
        $Sizes{$2} = $1;
    }
}
close($CurrentFile);

# Previous ++++++++++++++++++++++++++++
open(FILE, $FileName);
while (<FILE>) {
    chomp;
    if (/^([0-9]+)[ \t]+(.*)$/) {
        my $Size = $Sizes{$2};
        $Sizes{$2} = $Size - $1; # difference: current minus previous
    }
}
close(FILE);

# Show result +++++++++++++++++++++++++
SHOW: while (($key, $value) = each(%Sizes)) {
    if ($value == 0) {
        next SHOW; # skip paths whose usage has not changed
    }
    printf("%-10d %s\n", $value, $key);
}

# Save current +++++++++++++++++++++++++
open my $CurrentFile2, '<', \$Current;
open(FILE, ">$FileName");
while (<$CurrentFile2>) {
    chomp;
    print FILE $_ . "\n";
}
close($CurrentFile2);
close(FILE);
Basically, the code gets the current disk usage information, compares each size with the last run (saved in 'du-previous'), prints the differences and saves the current usage information.
If you like it, take it.
Hope this helps.
What you really really want is the awesome kdirstat.
For completeness, I've also found du-diff, which I don't see mentioned in any other answer. Andrew's diff-du (mentioned in another answer) seems to be more advanced than this one.
