I am looking to rebuild the XP calculations for my build, changing them to have linear scaling as you level up, with notable increases for killing things above your level. I know the formula I would like to use, but I cannot seem to find where the calculations are handled in AzerothCore.
If anyone could point me to the correct class, or even better, correct method within the class it would be much appreciated.
Found it. It's actually scattered over several classes:
Quest XP is handled in QuestDef.cpp, in uint32 Quest::XPValue(Player* player) const, on line 175.
Kill XP calculations are in Formulas.h; the formula I want to edit is BaseGain, on line 103.
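If it helps anyone attempting something similar, here is a rough sketch of what a linear replacement might look like. Everything in it (the function name, the constants, the grey-level cutoff) is made up for illustration, not AzerothCore's actual code; adapt it to BaseGain's real signature in Formulas.h.

```cpp
// Illustrative linear kill-XP formula. The name LinearBaseGain, the two
// constants, and the grey-level cutoff are all invented for this sketch;
// adapt them to BaseGain's actual signature in Formulas.h.
#include <cstdint>

constexpr uint32_t XP_PER_LEVEL = 45;        // linear base per player level
constexpr uint32_t HIGHER_LEVEL_BONUS = 20;  // extra XP per victim level above the player

uint32_t LinearBaseGain(uint8_t playerLevel, uint8_t mobLevel)
{
    uint32_t xp = XP_PER_LEVEL * playerLevel;                // linear scaling with level
    if (mobLevel > playerLevel)
        xp += HIGHER_LEVEL_BONUS * (mobLevel - playerLevel); // notable bonus above your level
    else if (playerLevel - mobLevel > 8)
        xp = 0;                                              // grey mobs grant nothing
    return xp;
}
```

Quest::XPValue would need a matching change if quest rewards should scale the same way.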
It is also worth noting that someone by the name of "pussywizard" removed the script-manager calls from the XP calculations as an optimization. They are just commented out, but they would need to be restored for any module that hooks them.
I am learning FP and got introduced to the concept of property-based testing (PBT), and for someone from the OOP world, PBT looks both useful and dangerous. It does check a lot of inputs, but what if there are one (or a few) inputs that fail, yet they didn't happen to fail during your first, let's say, Jenkins build? The next time you run the build, the test may or may not fail. Doesn't that kill the entire idea of repeatable builds?
I see that some people have explored options to make the tests deterministic, but then if such a test doesn't catch an error, it never will.
So what's the better approach here? Do we sacrifice build repeatability to eventually uncover a bug, or do we take the risk of never uncovering it but get our repeatability back?
(I hope that I properly understood the concept of PBT, but if I didn't, I would appreciate it if somebody could point out my misconceptions.)
Doing a lot of property-based testing, I don't see non-determinism as a big problem. I basically experience three types of it:
A property is genuinely non-deterministic because some external factor (e.g. a timeout, delay, or DB config) makes it so. Such flaky tests also show up in example-based testing and should be eliminated by making the external factor deterministic.
A property fails rarely because the triggering condition is only sometimes met by pseudo-random data generation. Most PBT libraries have ways to reproduce those failing runs, e.g. by re-using the random seed of the failing test run (see the sketch after this list) or even remembering the exact failing case in a database of some sort. These failures reveal real problems and are one of the reasons why we do random test-case generation in the first place.
Coverage assertions ("this condition will be hit in at least 5 percent of all cases") may fail from time to time even though they are generally true. This can be mitigated by raising the number of tries. Some libraries, e.g. QuickCheck, do their own calculation of how many tries are needed to prove or disprove coverage assumptions and thereby mostly eliminate those false positives.
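To make the seed-reproduction point concrete, here is a hand-rolled C++ sketch. It is deliberately not any particular library's API; jqwik, QuickCheck, and friends expose the same idea through their own options. The property fails only for UINT32_MAX, so a random run almost never catches it, but printing the seed makes any failing run replayable:

```cpp
// Minimal hand-rolled property check with an explicit, reportable seed.
#include <cstdint>
#include <iostream>
#include <random>

// Property under test: fails only when x == UINT32_MAX (unsigned wraparound),
// i.e. about once in 2^32 random tries -- a "fails rarely" property.
bool property(uint32_t x) { return x + 1 > x; }

bool checkProperty(uint64_t seed, int tries = 1000)
{
    std::mt19937_64 rng(seed);
    std::uniform_int_distribution<uint32_t> dist; // full uint32_t range
    for (int i = 0; i < tries; ++i)
    {
        uint32_t input = dist(rng);
        if (!property(input))
        {
            // Print the seed so the exact failing run can be replayed later.
            std::cerr << "property failed for input " << input
                      << " (reproduce with seed " << seed << ")\n";
            return false;
        }
    }
    return true;
}

int main()
{
    uint64_t seed = std::random_device{}(); // fresh randomness per run...
    // ...or replay a reported failure deterministically:
    // uint64_t seed = 123456789;
    return checkProperty(seed) ? 0 : 1;
}
```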
The important thing is to always follow up on flaky failures and find the bug, the non-deterministic external factor, or the wrong assumption in the property's invariant. When you do that, sporadic failures will occur less and less often. My personal experience is mostly with jqwik, but other people have been telling me similar stories.
You can have both non-determinism and reproducible builds by generating the randomness outside the build process. You could generate it during development or during external testing.
One example would be to seed your property-based tests and to automatically modify this seed on commit. You're still making a tradeoff: a developer could be alerted to a bug unrelated to what they're working on, and you lose some test capacity, since the inputs change less often.
You can tip the tradeoff further in the deterministic direction by making the seed change less often. You could for example have one seed for each program component or file, and only change it when a related file is committed.
A different approach would be to not change the seed during development at all. You would instead have automatic QA doing periodic or continuous testing with random seeds and use them to generate bug reports/issues that can be dealt with when convenient.
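As a sketch of how the pinning might look in code (the variable name PBT_SEED is an arbitrary choice for illustration, not a standard), you can let the environment decide: CI or a commit hook pins the seed, and unpinned runs stay random.

```cpp
// Take the seed from an environment variable if set (pinned, reproducible
// builds), otherwise draw a fresh one (exploratory runs). The variable
// name PBT_SEED is made up for this example.
#include <cstdint>
#include <cstdlib>
#include <random>
#include <string>

uint64_t chooseSeed()
{
    if (const char* pinned = std::getenv("PBT_SEED"))
        return std::stoull(pinned);   // deterministic: seed fixed by CI or a commit hook
    return std::random_device{}();    // non-deterministic: new seed every run
}
```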
johanneslink's analysis of non-determinism is spot on.
There's one thing I would like to add: non-determinism is not only a rare and small cost, it's also beneficial. If the first run of your test suite is successful, insisting on determinism means insisting that future runs (of the same suite against the same system) will find zero bugs.
Most test suites contain many independent tests of many independent system parts, and commits rarely change large parts of the system. So even across commits, most tests test exactly the same thing before and after, where once again determinism guarantees that you will find zero bugs.
Allowing for randomness means every run has at least a chance of discovering a bug.
That of course raises the question of regression tests. I think the standard argument is something like this: to maximize value per effort you should focus your testing on the most bug-prone parts of the code. Having observed a bug in the past provides evidence about which part of the code is buggy (and which kind of bug it's likely to have). You should use that evidence to guide your testing effort. (Often with a laser-like focus on one concrete bug.)
I think this is a very reasonable argument. I also think there's more than one way of making good use of the evidence provided by bugs.
For example, you might write a generator which produces data of the same kind and shape as the data which triggered the bug the first time, and/or which is tailor made to trigger the bug.
And/or, you might want to write tests verifying specifically those properties that were violated by the buggy behavior.
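As a toy illustration of the tailor-made generator idea from above (the trailing-whitespace bug and all names are invented for the example), such a generator keeps the input random while always forcing the shape that once triggered the failure:

```cpp
// Bug-shaped generator: random lowercase strings that always end with the
// trailing whitespace that (hypothetically) once broke parsing. A real PBT
// library would express this with its own combinators.
#include <random>
#include <string>

std::string bugShapedInput(std::mt19937_64& rng)
{
    std::uniform_int_distribution<int> len(0, 16);
    std::uniform_int_distribution<int> ch('a', 'z');
    std::string s;
    for (int i = 0, n = len(rng); i < n; ++i)
        s += static_cast<char>(ch(rng));
    return s + "   "; // force the shape that triggered the original bug
}
```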
If you want to judge how good these tests are, I recommend running them a couple of times (on normally sized input batches). If they trigger the bug every time, they are likely to do so in the future as well.
Here's a (hopefully thought-)provoking question: is it worse to release software with a bug it has had before, or to release software with new bugs? In other words: is catching past bugs more important than catching new ones, or do we do it primarily because it's easier?
If you think we do it in part because it's easier, then I don't think it matters that re-catching the bug is probabilistic: what you should really care about is something like the average bug-catching ability of property testing; its benefits elsewhere should outweigh the fairly small chance that an old bug squeaks through, even though it got caught in (say) 5 consecutive runs of the tests when you evaluated your regression tests.
Now, if you can't reliably generate random inputs that trigger the bug even though you understand the bug just fine, or the generator which does it is large and complicated and thus costly to maintain, hand-picking a regression example seems like a perfectly reasonable choice.
I've been working on getting TrinityCore up and running, battling with the horror that is Ubuntu in order to get things working. Finally got the workflow down, finished two related projects, and I was going to start tinkering with the code. But I found AzerothCore, and I'm very intrigued. Got a few questions about the differences between it and TrinityCore.
First off, AC is advertised as having a modular design, which is brilliant. TC has a single instance of modularity with its script system, which is also very good: edit a C++ source script, save it, and the server will reload it at runtime without having to recompile the whole server. Is that functionality also present in AC? And how robust is the module system?
My reason for asking is that I want to add more dynamic features rather than focusing on instances, phases, and quests that are repeatable by every single character. The first step for that would be to change the AI scripting system: rather than having one monolithic script attached to an NPC, an array of scripts arranged in a hierarchy, with conditions that are processed periodically, would be a great first foray into the actual code base. Would it be possible to contain that functionality in a replacement module? (A rough sketch of what I mean follows below.)
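To make the idea concrete, here is a minimal sketch of the layered-AI shape. All of these types are stand-ins made up for the example; none of them exist in AzerothCore or TrinityCore:

```cpp
#include <functional>
#include <vector>

struct Creature { /* stand-in for the core's creature type */ };

// One layer in the hierarchy: a guard condition plus a behavior.
struct AIScriptLayer
{
    std::function<bool(Creature&)> condition; // when this layer applies
    std::function<void(Creature&)> behavior;  // what it does while it applies
};

class LayeredAI
{
public:
    void AddLayer(AIScriptLayer layer) { layers_.push_back(std::move(layer)); }

    // Called periodically (e.g. from an UpdateAI-style tick): run the first
    // layer whose condition currently holds, falling through in priority order.
    void Update(Creature& me)
    {
        for (auto& layer : layers_)
        {
            if (layer.condition(me))
            {
                layer.behavior(me);
                return;
            }
        }
    }

private:
    std::vector<AIScriptLayer> layers_; // highest-priority first
};
```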
Another question I have is regarding the prevalence of bugs. TC's development does seem a touch slow, and its community not all that active. How is AC's development in regards to the robustness of the low-level systems? With TC, for instance, every so often there would be floating NPCs making their way around Goldshire, which is a rather immersion-breaking bug. Does AC have similarly obvious bugs?
Is that functionality also present in AC?
No, AC doesn't have this, largely because AC still runs on the old ACE platform.
Modules are just another way of implementing custom scripts, nothing more for now.
You always need to rebuild the sources when you make changes to modules or add a new one.
With TC, for instance, every so often there would be floating NPCs making their way around Goldshire, which is a rather immersion-breaking bug. Does AC have similarly obvious bugs?
Everything that a player (as opposed to a coder) can see in AC is in good shape; bugs are minimal, and almost all of them relate to the vanilla or TBC content. The WotLK part is about 99% done.
TC has a single instance of modularity with its script system, which is also very good: edit a C++ source script, save it, and the server will reload it at runtime without having to recompile the whole server. Is that functionality also present in AC?
There is no reload yet in AC, so currently you have to recompile and then restart your server manually.
And how robust is the module system?
The module system in AC is based on the same hooking system (called "scripts") from TC/MaNGOS.
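To give a feel for the shape of such a script: hook names and signatures differ between cores and revisions, so treat this as a self-contained illustration with stand-in declarations rather than the exact API. Also recall from the thread above that some XP-related ScriptMgr calls are commented out in AC and would need restoring before an XP hook fires.

```cpp
// Sketch of an AC/TC-style script hook; illustrative, not guaranteed to
// match any particular checkout.
#include <cstdint>
#include <string>

struct Player; // provided by the core
struct Unit;   // provided by the core

class PlayerScript // simplified stand-in for the core's base class
{
public:
    explicit PlayerScript(std::string name) : name_(std::move(name)) { }
    virtual ~PlayerScript() = default;
    virtual void OnGiveXP(Player* /*player*/, uint32_t& /*amount*/, Unit* /*victim*/) { }
private:
    std::string name_;
};

// A module script that rescales XP before the core applies it.
class MyXPRescaleScript : public PlayerScript
{
public:
    MyXPRescaleScript() : PlayerScript("MyXPRescaleScript") { }

    void OnGiveXP(Player* /*player*/, uint32_t& amount, Unit* /*victim*/) override
    {
        amount *= 2; // example tweak only
    }
};

// Each module exposes a loader that registers its scripts with the core.
void AddMyXPRescaleScripts()
{
    new MyXPRescaleScript();
}
```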
Another question I have is regarding the prevalence of bugs. TC's development does seem a touch slow, and its community not all that active. How is AC's development in regards to the robustness of the low-level systems?
AC is also based on TC, so it is possible for them to share some bugs.
However, in AC, all changes are first sent via PRs; they are then code-reviewed and manually tested. The Travis build also has to pass, which makes sure that the core compiles (same as TC) and that the change does not introduce DB startup errors.
On the other hand, in TC there is no manual testing, and new changes are often pushed directly into the master branch by the TC developers (while PRs from new contributors are still code-reviewed first).
None of the emus except TrinityCore support hot swapping, so technically, yes, all of them require you to shut down before making changes. However, that really isn't the issue. The major issue is that some of these emus don't support drop-in modules, essentially forcing the end user to recompile the entire server, which sucks big time. On the other hand, I'm not a fan of TrinityCore, even if the hot-swap feature might appeal to some. It also opens up a potential flaw: if your emu is hacked and someone gains admin rights, a feature like hot swapping could turn into a disaster. Then again, TrinityCore lets you choose between static and dynamic compiles, so there is that, but you could argue there are other emulators that do it better.
I ran into a problem when using clBuildProgram() on a GTX 750. The kernel failed to build with error code -5 (CL_OUT_OF_RESOURCES) and an empty build log.
One possible workaround is adding '-cl-nv-verbose' as a build option to clBuildProgram(). However, it doesn't work for all kernels.
Based on that, I tried another option, '-cl-opt-disable', which disables optimization. It, too, works only for some kernels.
Then I got confused:
I cannot find the real reason for the error.
Why do different build options help only for some kernels?
The error seems architecture-dependent, since the same OpenCL code builds successfully on the GTX 750 but fails on the Tesla P100.
Does anyone have any ideas?
Possible reasons I can think of:
Running out of registers. This happens if you have a lot of (private) variables in your kernel code, especially arrays. Each core only has a certain amount of registers available (architecture dependent), and it may not be possible for the compiler to "spill" them to global memory. If this is the problem, you can try to rearrange your code so your variables have more limited scope, or you can try to move some arrays to local memory (bearing in mind this is shared between work items in a group, and also limited in size). A good GPU profiler/code analysis tool should be able to tell you how much register pressure there is, so if you've got the kernel working on some hardware, you should be able to find out register pressure for that, and draw conclusions for other hardware too.
Code size itself. I didn't think this should be much of a problem anymore on modern GPUs, but it might be possible if you have truly gigantic kernels.
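As a starting point for debugging either cause, here is a minimal sketch of passing '-cl-nv-verbose' and then reading the build log even when the build fails. It assumes program and device were created earlier; error handling is trimmed for brevity.

```cpp
// Build with NVIDIA's verbose flag and always fetch the build log via
// clGetProgramBuildInfo, even on failure.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <iostream>
#include <vector>

void buildWithLog(cl_program program, cl_device_id device)
{
    // -cl-nv-verbose asks NVIDIA's compiler to report register/memory usage.
    cl_int err = clBuildProgram(program, 1, &device, "-cl-nv-verbose",
                                nullptr, nullptr);

    size_t logSize = 0;
    clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG,
                          0, nullptr, &logSize);
    std::vector<char> log(logSize + 1, '\0'); // +1 keeps the output NUL-terminated
    clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG,
                          logSize, log.data(), nullptr);

    std::cerr << "clBuildProgram returned " << err << "; build log:\n"
              << log.data() << '\n';
}
```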
I hear the term "BootStrap" thrown around a lot, but I'm not really sure what it refers to. I know there is a bootstrap CSS, but what exactly does the term mean?
Literally, a bootstrap is a tab on the sides or back of boots that helps you to pull them on. Putting on your shoes or boots is usually the last step of getting dressed; similarly, in programming it's been applied to the initialization or start-up step of a program.
See also the Wikipedia entry for bootstrapping:
Bootstrapping or booting refers to a group of metaphors which refer to a self-sustaining process that proceeds without external help.
[.. in Software Loading] booting is the process of starting a computer, specifically in regards to starting its software. The process involves a chain of stages, in which at each stage a smaller simpler program loads and then executes the larger more complicated program of the next stage. It is in this sense that the computer "pulls itself up by its bootstraps", i.e. it improves itself by its own efforts
[.. in Software Development] bootstrapping can also refer to the development of successively more complex, faster programming environments. The simplest environment will be, perhaps, a very basic text editor (e.g., ed) and an assembler program. Using these tools, one can write a more complex text editor, and a simple compiler for a higher-level language and so on, until one can have a graphical IDE and an extremely high-level programming language.
A shoehorn is another means to help you don footwear but it's idiomatically come to mean cramming something into a tight space.
In computer science, "bootstrap" (or more commonly "boot") generally refers to the setup/start/initialization step of a process. It can mean many things depending on the context: starting a physical machine, setting up variables and services for an application to use, or even laying the CSS groundwork for a website.
Bootstrap (the CSS framework) lets you create complex designs with minimal configuration, rather than developing them from scratch.
I started using Redcar a couple of days ago as my primary text editor for programming on my Ubuntu machine. It's definitely buggy software, and it's obvious that it's still in development, but overall I like it more than anything else I've come across. That said, I just discovered that I apparently can't do any more than 10 or so undos. Even worse, I wasn't able to figure out any way to change this limit. This is a dealbreaker for me, since I routinely write lots of code that I then choose to revert to something I had only a minute earlier.
Does anybody know if there is any way to raise this limit? Alternatively, does anybody know of any comparable text editors for Linux? One of the most important features any software I use needs to have is the ability to show me where the matching brace/bracket/parenthesis is when I move the cursor (or rather, whatever the keyboard equivalent of the cursor is called) onto it. I'm writing software that uses lots of callbacks, nested if statements, and nested loops, so I need to be able to easily tell where corresponding structures are in my code.
Best, and thanks in advance for any responses, Sami