Enterprise Custom Field formula in MS Project Server 2016

I am trying to implement a custom formula in an Enterprise Custom Field in MS Project. The formula is as below:
```
([Baseline Cost]*[DurationCustom])/[Baseline Duration]
```
[Baseline Cost] and [Baseline Duration] are fields directly from MS Project; [DurationCustom] is an enterprise custom field with Entity: Task and Type: Number.
For instance, if
[Baseline Cost] = 1 ----- Type: Cost (₹1.00)
[DurationCustom] = 100 ----- Type: Number (100)
[Baseline Duration] = 100 ---- Type: Duration (100d)
Expected result: 1
Current result: 0.21
Can anyone suggest whether the error is due to the data types?
Also, is there a possibility to convert "100d" to simply the number "100"?
P.S.: I am using MS Project Server 2016.
Thanks

I suggest you do the following. Break your formula down into three parts (separately) to see what value appears for each of the variables. For example, what do you get with the following:
Number1 = [Baseline Cost]
Then do the same for [DurationCustom] and [Baseline Duration]. That will tell you how Project "sees" the values and will likely tell you why you get the result you do.
As far as converting "100d" to "100": yes, it can be done, but it isn't necessary. The "100d" is a display value based on your option settings. Internally, the duration is stored as a plain number of minutes, without any unit suffix.
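For example (a sketch, assuming the default 8-hour working day, i.e. 480 minutes per day), you could normalize the stored minutes back to days inside the formula:
```
([Baseline Cost]*[DurationCustom])/([Baseline Duration]/480)
```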
John


With or without the Keira3 editor, is there a way to tie a ".levelup" command or a level-up reward to a quest?

I'm a novice and am working on learning the basics. I have no experience in coding.
I'm trying to make a world for myself and my friends that features quest rewards being the sole way to gain levels, and in full increments - exactly like milestone leveling in Dungeons and Dragons.
Is there a way to level up a character, or have an automated ".levelup" command be used on a character, triggering when that player completes a (custom) quest? Additionally, is this something that can be done in Keira3? Or will I need to use other tools?
I've tried granting quest reward consumables that use the spells 47292 and 24312 (https://wotlkdb.com/?spell=47292 and https://wotlkdb.com/?spell=24312) but those appear to just be the visual level-up effects.
There are multiple ways to achieve this. The most convenient way I can think of is to compile the core with the Eluna module: https://github.com/azerothcore/mod-eluna
It allows for scripting with the easily accessible Lua language. For example, you can use the following code:
local questId = 12345
local questNpc = 23456
local maxLevel = 80
local CREATURE_EVENT_ON_QUEST_REWARD = 34 -- (event, player, creature, quest, opt) - Can return true

local function MyQuestCompleted(event, player, creature, quest, opt)
    if player then -- check if the player exists
        -- check that the player completed the right quest and isn't max level
        if player:GetLevel() < maxLevel and quest:GetId() == questId then
            player:SetLevel( player:GetLevel() + 1 )
        end
    end
end

RegisterCreatureEvent( questNpc, CREATURE_EVENT_ON_QUEST_REWARD, MyQuestCompleted )
See https://www.azerothcore.org/pages/eluna/index.html for the documentation.

How to change the interval of a plugin in telegraf?

Using: telegraf version 1.23.1
The workflow is Telegraf => InfluxDB => Grafana.
I am using Telegraf to check my metrics on a shared server. So far so good; I was already able to initialize the Telegraf uWSGI plugin and display the data of my running Django projects in Grafana.
Problem
Now I wanted to check some folder sizes too, with the [[inputs.filecount]] Telegraf plugin, and this also works well. However, I do not need metrics every 10s for this plugin, so I changed the interval in the [[inputs.filecount]] plugin as mentioned in the documentation.
telegraf.conf
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 10000
collection_jitter = "5s"
flush_interval = "10s"
flush_jitter = "0s"
#... PLUGIN
[[inputs.filecount]]
# set different interval for this input plugin every 10min
interval=“600s”
collection_jitter=“20s”
# Default from Doc =>
directories = ["/home/myserver/logs", "/home/someName/growingData"]
name = "*"
recursive = true
regular_only = false
follow_symlinks = false
size = "0B"
mtime = "0s"
After restarting Telegraf with Supervisor, it crashed because it could not parse the new lines.
supervisor.log
Error running agent: Error loading config file /home/user/etc/telegraf/telegraf.conf: Error parsing data: line 208: invalid TOML syntax
These are the lines I added, because I thought that is how the documentation describes it:
telegraf.conf
# set different interval for this input plugin every 10min
interval=“600s”
collection_jitter=“20s”
Question
So my question is: how can I change or set up the interval for a single input plugin in Telegraf?
Or do I have to apply a different TOML syntax, like [[inputs.filecount.agent]]?
I assume that I do not have to change any output interval either? Even though it is currently 10s, if this input plugin only pulls data every 600s it should not matter; some flush cycle will push the data to InfluxDB.
How can I change or set up the interval for a single input plugin in Telegraf?
As the link you pointed to shows, individual inputs can set the interval and collection_jitter options. There is no difference in the TOML syntax. For example, I can do the following for the memory input plugin:
[[inputs.mem]]
interval="600s"
collection_jitter="20s"
I assume that I do not have to change any output interval either?
Correct, these are independent of each other.
line 208: invalid TOML syntax
Knowing what exactly is on line 208 and around that line will hopefully resolve your issue and get you going again. Also make sure that the quotes you used are correct. Sometimes when people copy and paste quotes they get ” instead of ", which causes exactly this kind of parse error!
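For reference, here is the snippet from the question rewritten with plain ASCII quotes (paths and option values taken from the question):
[[inputs.filecount]]
  # collect only every 10 minutes for this input
  interval = "600s"
  collection_jitter = "20s"
  directories = ["/home/myserver/logs", "/home/someName/growingData"]
  name = "*"
  recursive = true
  regular_only = false
  follow_symlinks = false
  size = "0B"
  mtime = "0s"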

Why does a field value come out as 1'000,24 instead of 1,000.24 when the format is >,>>>,>>9.99 in Progress 4GL?

We have recently upgraded to the OpenEdge RDBMS version 11.3, from 9.1D. While generating reports, I found that the value of a field comes out as 2'239,00 instead of 2,239.00. I checked the format; it is >,>>>,>>9.99.
What could be the reason behind this?
The admin who installed the database didn't do their homework and selected the wrong default numeric and decimal separators.
However, no great harm done. Set these startup parameters:
-numsep 44 -numdec 46
(44 and 46 are the ASCII codes for the comma and the period.)
This is a simplified database startup example with the parameters added as above:
proserve /db/db -H dbserver -S dbservice -numsep 44 -numdec 46
When you install Progress you are prompted for the numeric format to use. That information is then written to a file called "startup.pf", which is located in the install directory (C:\Progress\OpenEdge by default on Windows).
If you picked the wrong numeric format, you can edit startup.pf with any text editor. It should look something like this:
#This is a placeholder startup.pf
#You may put any global startup parameters you desire
#in this file. They will be used by ALL Progress modules
#including the client, server, utilities, etc.
#
#The file dlc/prolang/locale.pf provides examples of
#settings for the different options that vary internationally.
#
#The directories under dlc/prolang contain examples of
#startup.pf settings appropriate to each region.
#For example, the file dlc/prolang/ger/german.pf shows
#settings that might be used in Germany.
#The file dlc/prolang/ger/geraus.pf gives example settings
#for German-speaking Austrians.
#
#Copy the file that seems appropriate for your region or language
#over this startup.pf. Edit the file to meet your needs.
#
# e.g. UNIX: cp /dlc/prolang/ger/geraus.pf /dlc/startup.pf
# e.g. DOS, WINDOWS: copy \dlc\prolang\ger\geraus.pf \dlc\startup.pf
#
# You may want to include these same settings in /dlc/ade.pf.
#
#If the directory for your region or language does not exist in
#dlc/prolang, please check that you have ordered AND installed the
#International component. The International component provides
#these directories and files.
#
-cpinternal ISO8859-1
-cpstream ISO8859-1
-cpcoll Basic
-cpcase Basic
-d mdy
-numsep 44
-numdec 46
Changes to the startup.pf file are GLOBAL -- they impact all sessions started on this machine. If you only want to change a single session, you can instead add the parameters to the command line (or the shortcut icon's properties), to a local .pf file, or to an .ini file used by that session.
You can also programmatically override the format in your code by using the SESSION system handle:
assign
    session:numeric-decimal-point = "."
    session:numeric-separator = ","
    .
display 123456.999.
(You might want to consider saving the current values and restoring them if this is a temporary change.)
(You can also use the shorthand session:numeric-format = "american". or "european" for the two most common cases.)
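A minimal sketch of the shorthand form (how the value displays still depends on your display format):
session:numeric-format = "american".
display 123456.999. /* now uses "." as the decimal point and "," as the thousands separator */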

Laravel: error when merging collections after upgrading from 4.0 to 4.1

After upgrading my Laravel version, I found that the Collection::merge method isn't working as expected.
I'm not sure if the problem is on my side; I can't find an error. Let's see some information:
print_r($ecb->count());
print_r($boc->count());
// merge both
$cubes = $ecb->merge($boc);
print_r($cubes->count());
dd();
output:
36 27 1
The merge should give 36 + 27 = 63 as output (there are no duplicate elements in the collections).
More debug information:
print_r($ecb->toArray());
print_r($boc->toArray());
// merge both
$cubes = $ecb->merge($boc);
print_r($cubes->toArray());
dd();
output (it is a bit long): http://laravel.io/bin/PdVj1#7
Any idea?
Thanks
Yes - it appears to have changed between 4.0 and 4.1.
See this GitHub issue: https://github.com/laravel/framework/issues/3445
In essence, Eloquent collections remove models with duplicate primary keys upon merging.
I'm running Laravel 4.1.29, and I get a different output from yours with count(), but in essence it just removes duplicate ids.
I see that in Laravel 4.1 merge removes elements with the same ids ( https://github.com/laravel/framework/issues/3445 ).
To keep the old behavior, I had to change the code like this:
$boc->each(function($cube) use ($ecb)
{
    $ecb->push($cube);
});
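Alternatively, a sketch that side-steps Eloquent's key-based de-duplication by merging plain support collections (assuming the items do not need to stay in an Eloquent collection):
use Illuminate\Support\Collection;

// the base collection merges with plain array semantics, so no models are dropped
$cubes = Collection::make($ecb->all())->merge($boc->all());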
The merge function uses Model#getKey() to differentiate models - do the models you are using have a primary key specified properly? I notice they don't have the standard id field.
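If the models use a non-standard key column, declaring it explicitly lets getKey() work (the class and column names here are hypothetical):
class Cube extends Eloquent {
    // tell Eloquent which column uniquely identifies a row
    protected $primaryKey = 'cube_id';
}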

Logfile analysis in R?

I know there are other tools around, like AWStats or Splunk, but I wonder whether there is some serious (web)server logfile analysis going on in R. R might not be the first thing one thinks of for this, but it has nice visualization capabilities and also nice spatial packages. Do you know of any? Or is there an R package / code that handles the most common log file formats that one could build on? Or is it simply a very bad idea?
In connection with a project to build an analytics toolbox for our Network Ops guys, I built one of these about two months ago. My employer has no problem if I open source it, so if anyone is interested I can put it up on my github repo. I assume it's most useful to this group if I build an R package. I won't be able to do that straight away, though, because I need to research the docs on package building with non-R code (it might be as simple as tossing the Python bytecode files in /exec along with a suitable Python runtime, but I have no idea).
I was actually surprised that I needed to undertake a project of this sort. There are at least several excellent open-source and free log file parsers/viewers (including the excellent Webalizer and AWStats), but neither parses server error logs (parsing server access logs is the primary use case for both).
If you are not familiar with error logs or with the difference between them and access logs: in sum, Apache servers (likewise nginx and IIS) record two distinct logs and store them to disk by default next to each other in the same directory. On Mac OS X, that directory is in /var, just below root:
$> pwd
/var/log/apache2
$> ls
access_log error_log
For network diagnostics, error logs are often far more useful than the access logs. They also happen to be significantly more difficult to process, because of the unstructured nature of the data in many of the fields and, more significantly, because the data file you are left with after parsing is an irregular time series: you might have multiple entries keyed to a single timestamp, then the next entry is three seconds later, and so forth.
I wanted an app that I could toss raw error logs into (of any size, but usually several hundred MB at a time) and have something useful come out the other end, which in this case had to be some pre-packaged analytics and also a data cube available inside R for command-line analytics. Given this, I coded the raw-log parser in Python, while the processor (e.g., gridding the parser output to create a regular time series) and all analytics and data visualization I coded in R.
I have been building analytics tools for a long time, but only in the past four years have I been using R. So my first impression, immediately upon parsing a raw log file and loading the data frame in R, was what a pleasure R is to work with and how well suited it is to tasks of this sort. A few welcome surprises:
Serialization. Persisting working data in R takes a single command (save). I knew this, but I didn't know how efficient this binary format is. The actual data: for every 50 MB of raw logfiles parsed, the .RData representation was about 500 KB, i.e. 100:1 compression. (Note: I pushed this down further, to about 300:1, by using the data.table library and manually setting the compression level argument to the save function; see the sketch after this list.)
IO. My data warehouse relies heavily on a lightweight data-structure server that resides entirely in RAM and writes to disk asynchronously, called Redis. The project itself is only about two years old, yet there's already a Redis client for R on CRAN (by B.W. Lewis, version 1.6.1 as of this post).
Primary Data Analysis. The purpose of this project was to build a library for our Network Ops guys to use. My goal was a "one command = one data view" type of interface. So, for instance, I used the excellent googleVis package to create professional-looking scrollable/paginated HTML tables with sortable columns, in which I loaded a data frame of aggregated data (>5,000 lines). Just those few interactive elements, e.g. sorting a column, delivered useful descriptive analytics. Another example: I wrote a lot of thin wrappers over some basic data-juggling and table-like functions; each of these functions I would, for instance, bind to a clickable button on a tabbed web page. Again, this was a pleasure to do in R, in part because quite often the function required no wrapper; the single command with the arguments supplied was enough to generate a useful view of the data.
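Regarding the serialization point above, a minimal sketch (the object and file names are made up):
# persist a parsed-log data frame with stronger compression
save(log_df, file = "parsed_logs.RData", compress = "xz", compression_level = 9)
# reload it in a later session
load("parsed_logs.RData")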
A couple of examples of the last bullet:
# what are the most common issues that cause an error to be logged?
err_order = function(df){
    t0 = xtabs(~Issue_Descr, df)
    m = cbind(names(t0), t0)
    rownames(m) = NULL
    colnames(m) = c("Cause", "Count")
    x = as.numeric(m[, 2])
    ndx = order(x, decreasing = TRUE)
    m = m[ndx, ]
    m1 = data.frame(Cause = m[, 1],
                    Count = as.numeric(m[, 2]),
                    CountAsProp = 100 * as.numeric(m[, 2]) / dim(df)[1])
    subset(m1, CountAsProp >= 1.0)
}
# calling this function, passing in a data frame, returns something like:
Cause Count CountAsProp
1 'connect to unix://var/ failed' 200 40.0
2 'object buffered to temp file' 185 37.0
3 'connection refused' 94 18.8
The primary data cube, displayed for interactive analysis using googleVis:
[Figure: a contingency table (from an xtabs function call) displayed using googleVis]
It is in fact an excellent idea. R also has very good date/time capabilities, can do cluster analysis or use any variety of machine learning algorithms, has three different regexp engines to parse, etc.
And it may not be a novel idea. A few years ago I was in brief email contact with someone using R for proactive (rather than reactive) logfile analysis: read the logs, (in their case) build time-series models, predict hot spots. That is so obviously a good idea. It was one of the Department of Energy labs, but I no longer have a URL. Even outside of temporal patterns, there is a lot one could do here.
I have used R to load and parse IIS log files with some success; here is my code.
Load IIS Log files
require(data.table)
setwd("Log File Directory")
# get a list of all the log files
log_files <- Sys.glob("*.log")
# This line
# 1) reads each log file
# 2) concatenates them
IIS <- do.call( "rbind", lapply( log_files, read.csv, sep = " ", header = FALSE, comment.char = "#", na.strings = "-" ) )
# Add field names - copy the "Fields" header line from one of the log files
colnames(IIS) <- c("date", "time", "s_ip", "cs_method", "cs_uri_stem", "cs_uri_query", "s_port", "cs_username", "c_ip", "cs_User_Agent", "sc_status", "sc_substatus", "sc_win32_status", "sc_bytes", "cs_bytes", "time-taken")
#Change it to a data.table
IIS <- data.table( IIS )
#Query at will
IIS[, .N, by = list(sc_status,cs_username, cs_uri_stem,sc_win32_status) ]
I did a logfile analysis recently using R. It was not a really complex thing, mostly descriptive tables. R's built-in functions were sufficient for this job.
The problem was the data storage, as my logfiles were about 10 GB. Revolution R does offer new methods to handle such big data, but in the end I decided to use a MySQL database as a backend (which in fact reduced the size to 2 GB through normalization).
That could also solve your problem with reading logfiles into R.
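A minimal sketch of that kind of setup (the database, table, and column names are made up; assumes the DBI and RMySQL packages):
library(DBI)
# connect to the MySQL backend that holds the normalized log data
con <- dbConnect(RMySQL::MySQL(), dbname = "weblogs", host = "localhost")
# push the aggregation down to the database; only a small result comes back
daily <- dbGetQuery(con, "SELECT DATE(ts) AS day, COUNT(*) AS hits
                          FROM access_log GROUP BY DATE(ts)")
dbDisconnect(con)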
#!/usr/bin/env python3
import argparse
import csv
import io

# Space-delimited dialect: with QUOTE_NONE, every space inside a field
# gets prefixed by the escapechar, which yields ", "-separated output
class OurDialect:
    escapechar = ','
    delimiter = ' '
    quoting = csv.QUOTE_NONE

# two sample Apache access-log lines, used when no file is supplied
sample = [
    '54.67.81.141 - - [01/Apr/2015:13:39:22 +0000] "GET / HTTP/1.1" 502 173 "-" "curl/7.41.0" "-"',
    '54.67.81.141 - - [01/Apr/2015:13:39:22 +0000] "GET / HTTP/1.1" 502 173 "-" "curl/7.41.0" "-"',
]

parser = argparse.ArgumentParser()
parser.add_argument('-f', '--source', type=str, dest='source', default=None)
arguments = parser.parse_args()

if arguments.source:
    with open(arguments.source) as fin:
        raw = [l.rstrip('\n') for l in fin]
else:
    raw = sample

header = ['IP', 'Ident', 'User', 'Timestamp', 'Offset', 'HTTP Verb',
          'HTTP Endpoint', 'HTTP Version', 'HTTP Return code',
          'Size in bytes', 'User-Agent']

# strip brackets and quotes so that only spaces separate the fields
lines = [l.replace('[', '').replace(']', '').replace('"', '') for l in raw]

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(header)
writer = csv.writer(out, dialect=OurDialect)
# each cleaned line is written as one field; the dialect escapes its spaces as ", "
writer.writerows([[l] for l in lines])
print(out.getvalue())
Demo output:
IP,Ident,User,Timestamp,Offset,HTTP Verb,HTTP Endpoint,HTTP Version,HTTP Return code,Size in bytes,User-Agent
54.67.81.141, -, -, 01/Apr/2015:13:39:22, +0000, GET, /, HTTP/1.1, 502, 173, -, curl/7.41.0, -
54.67.81.141, -, -, 01/Apr/2015:13:39:22, +0000, GET, /, HTTP/1.1, 502, 173, -, curl/7.41.0, -
This format can easily be read into R using read.csv, and it doesn't require any third-party libraries.
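For example (the file name is made up; note that read.csv expects the header row to supply one name per resulting column):
logs <- read.csv("parsed_log.csv", strip.white = TRUE, stringsAsFactors = FALSE)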
