Why would we need to change code at runtime? - functional-programming

I have found that many languages provide some way to change code at runtime, and many people ask how to do this in one language or another. By "change code" I mean rewriting the code itself at runtime, using reflection or something else.
I have around 6 years of experience in Java application development, and I have never come across a problem where I had to change code at runtime.
Can anyone explain why we would need to change code at runtime?

I have experienced three huge benefits of changing code at runtime:
Fixing bugs in a production environment without shutting down the application server. This allowed us to fix bugs in just one part of the application without interrupting the whole system.
The possibility of changing business rules without having to deploy a new version of the application, which means quicker delivery of features.
Writing unit tests is easier. For example, you can mock dependencies, add some desired behaviour to objects, and so on. The Spock Framework does this well (see the sketch after this list).
Of course, we only had these benefits because we have a very well-defined development process for how to proceed in these situations.
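Spock is Groovy-based; as a rough illustration of the same idea in plain Java, a mocking library such as Mockito generates a fake implementation at runtime. The PriceService interface and the values below are hypothetical, a minimal sketch rather than the framework's own example:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    class CheckoutTest {

        // Hypothetical dependency we do not want to hit for real in a unit test.
        interface PriceService {
            double priceOf(String sku);
        }

        @Test
        void totalUsesMockedPrice() {
            // The mock's behaviour is generated and stubbed at runtime,
            // with no hand-written implementation of PriceService.
            PriceService prices = mock(PriceService.class);
            when(prices.priceOf("book")).thenReturn(10.0);

            assertEquals(10.0, prices.priceOf("book"));
        }
    }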

At times you may need to call a method based on input that was received earlier in the program.
This can be used for dynamic calculation of a value based on a key index, where every key is calculated in a different way or the calculation requires fetching data from different sources. Instead of using a switch statement, you can invoke a method dynamically using methodName + indexOfTheKey.
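For instance, here is a minimal sketch of this pattern in Java using reflection; the calculateKey* methods are hypothetical placeholders for the per-key logic:

    import java.lang.reflect.Method;

    public class KeyCalculator {

        // Hypothetical per-key calculations; in real code each key's value
        // would come from different logic or a different data source.
        public double calculateKey0() { return 1.0; }
        public double calculateKey1() { return 2.0 * 3.0; }

        // Instead of a switch statement, build the method name from the
        // key index and invoke it reflectively.
        public double calculate(int keyIndex) throws Exception {
            Method m = getClass().getMethod("calculateKey" + keyIndex);
            return (Double) m.invoke(this);
        }

        public static void main(String[] args) throws Exception {
            System.out.println(new KeyCalculator().calculate(1)); // prints 6.0
        }
    }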

Related

Static data storage on server-side

Why is some server-side data still stored in DBC files rather than in an SQL database? In particular, spells (spells.dbc). What is the reason?
We have a lot of bugs in spells, and it's very hard to understand what's wrong with a spell, but it's even harder to find the spell in question...
Spells, talents, achievements, etc. are mostly found in DBC files because that is the way Blizzard did it back in the day. It's true that in 2019 this is a pretty outdated way to work. Databases are getting stronger and more versatile, and hard-coded data is proving hard to work with. DBCs aren't really that heavy anyway, and the only reason we haven't made this change yet is that it is a task that takes a bit of time and is monotonous to do.
We are aware that TrinityCore has already made this change, but they have far more contributors than we do, if that serves as an excuse!
Nonetheless, this is already on our to-do list; see the issue tracker at the main repository.
While it's true that we can't really edit the DBC files themselves, because we would lose all our changes when the files are re-extracted or lost, we can modify spells in a C++ file called SpellMgr.
There we have a function called SpellMgr::LoadDbcDataCorrections().
The main problem with this change is that we would have to modify the core to support it, and the function above already contains a lot of corrections. It would need intense testing to make sure nothing is broken in the process.
In there, by altering bits, you can add or remove certain properties of the desired spells instead of touching the hard-coded DBC files (a generic illustration of the idea follows below).
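The real corrections are C++ code inside SpellMgr::LoadDbcDataCorrections(); the following is purely a sketch of the bit-altering idea, shown in Java with made-up flag values rather than the real DBC attribute constants:

    public class SpellCorrectionSketch {

        // Made-up attribute flags; the real constants come from the DBC format.
        static final int ATTR_NO_CAST_TIME = 0x1;
        static final int ATTR_CANNOT_CRIT  = 0x2;

        public static void main(String[] args) {
            int attributes = ATTR_CANNOT_CRIT;

            attributes |= ATTR_NO_CAST_TIME;   // add a property by setting its bit
            attributes &= ~ATTR_CANNOT_CRIT;   // remove a property by clearing its bit

            // Check whether a property is present.
            System.out.println((attributes & ATTR_NO_CAST_TIME) != 0); // true
            System.out.println((attributes & ATTR_CANNOT_CRIT) != 0);  // false
        }
    }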
If you want an example, in this link, I have changed an Archimonde spell to have no cast time.
NOTE:
In this line, the comment about damage can be misleading, but that's because I made a mistake, and I haven't finished this pull request yet as of 18/04/2019.
The work has already been started, notably by Kaev. I think at least 3 DBCs are now useless server-side (but probably still needed client-side; they are called DataBaseClient for a reason), such as item.dbc.
Also, the original philosophy (for ALL cores, not just AC) was that we would not touch the DBCs because we don't do custom modifications, so there was no interest in having them server-side.
But we wanted to change this and have started making them available directly in the DB; if you wish to help with that, it would be nice!
Why?
Because when emulation started, DBC fields were 90% unknown. So developers created a parser for them that required only a few code changes to support new fields as soon as their functionality was discovered.
Now that we've discovered 90% of the required DBC fields, and we've also created some great DBC<->SQL conversion tools, it's just a matter of "effort".
The SQL conversion is useful for avoiding the use of client data on the server (you can overwrite it entirely if you don't want to go against the EULA), or for extending/customizing it.
Here is the issue about the DBC->SQL conversion: https://github.com/azerothcore/azerothcore-wotlk/issues/584

Frama-C's extensible printer and projects

I am trying to make changes to the behavior of a function and print the results to a file. The ViewCfg plug-in described in the Plug-in Development Guide does something similar, but I am trying to avoid having to use Ast.get, which ViewCfg uses. I am thinking of extending Printer.extensible_printer which, according to the Frama-C API Documentation, is something I can do if I want to obtain a custom pretty-printer.
However, if I extend the pretty-printer as described in the API docs, unless I'm doing something wrong, I notice that the changes I make take place regardless of which project is set as the current project. I'm using File.create_project_from_visitor to create a new project and Project.set_current to set the new project as the current project before I use the custom pretty-printer.
Is it true that any change made by a class that extends Printer.extensible_printer applies to all projects? I happen to be using frama-c-Aluminium-20160502, which I know is not the latest version.
EDIT: Sorry, I should have made this clearer in the beginning, but I'm not actually making changes to the behavior of a function. I'm reading in the behavior of a function, then based on that, I'm trying to generate as output valid C code that's meant to be read as input by another program.
Regarding Ast.get, the only reason I was avoiding it was that it sounds like it gets the entire AST, while I'm only interested in part of it, i.e. behaviors. But if I'm just making things harder for myself by avoiding it, then I'll go ahead and use it.

How to handle changes in objects' structure in automated testing?

I'm curious to know how feasible it is to get away from depending on the application's internal structure when you create an automated test case; otherwise, you may need to rewrite the test case whenever a developer modifies part of the code for a bug fix, etc.
We could write several automated test cases based on the application's internal object structure, but let's assume that the object hierarchy changes after 6 months or so. How do we approach these kinds of issues?
I can't speak for other testing tools but at least in QTP's case the testing tool introduces a level of abstraction over the application so that non-functional changes in the application often (but not always) have no effect on the way the testing tool identifies the object.
For example in QTP all web elements are considered to be direct children of the document so that changes in the DOM (such as additional tables) don't change the object's description.
In TestComplete, there are a couple of ways to make sure that a changed app structure does not break your tests.
You can set up the Aliases tree of the Name Mapping feature. In this case, if the app structure is changed, you modify the Aliases tree appropriately and your tests keep working without needing to be changed themselves.
You can use the Extended Find feature of the Name Mapping to ignore parts of the actual object tree and search for the needed objects at deeper levels.
This is what I was forced to do after losing all my work twice due to changes to the DOM structure:
Every single time I need to work with an object, I use the Find function with the ID of the object, searching for the object on the Page object. This way, whenever the DOM gets updated, my tests still run smoothly.
The only thing that will break my tests is if an object's ID gets changed, but that is not very likely to happen.
Here you can find some examples of the helper functions I use.
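The examples are linked in the original post; as a minimal sketch of the pattern, assuming Selenium's Java bindings (the helper name is illustrative, not the author's):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public final class ElementHelper {

        private ElementHelper() {}

        // Locate an element by its ID alone, ignoring where it sits in the
        // DOM, so that layout changes do not break the test.
        public static WebElement findById(WebDriver driver, String id) {
            return driver.findElement(By.id(id));
        }
    }

A test would then call something like ElementHelper.findById(driver, "submit-button").click(), which keeps working for as long as the ID itself is stable.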

Filtering Data in ASP.NET Web Services

I've been using this site for quite a while, usually being able to sort out my questions by browsing through the questions and following tags. However, I've recently come across a question that is rather hard to lookup amongst the great number of questions asked - a question I hope some of you might be able to share your opinion on.
As my problem is a bit hard to fit into a single line for the title, I'll try to give a bit more detail on the problem I've encountered. As the title says, I need to filter, or limit, some of the response data my standard ASP.NET SOAP-based web service returns when various web methods are invoked. The web service returns data used by other systems (a data repository, more or less), where today the client specifies a few parameters for how the data should be filtered and gets a full set of data back in return.
Easy enough, I thought: just add filtering options to the existing web methods that need more filtering applied, make the adjustments on the server side, and we are all set to go. Unfortunately, it turned out to be a bit trickier than that.
The problem I am facing is that I'm working on a web service running in a production environment, which needs to be extended so that additional filters can be applied to the existing web methods without affecting the calls already being made by the customer's other systems through their existing client stubs. This is where I am a bit troubled, since I can't seem to find the "right solution" for extending the current web service.
Today, the filter is sent as a custom data structure holding information on which data should be filtered, but I am not sure whether I can simply add more information to this data structure without breaking code on the clients. One of my co-workers suggested extending the web.config on the server side with a section detailing which data should be excluded (filtered out), but I don't find this to be a viable long-term solution, and I don't trust customers with such an option since it is likely to go wrong at some point. So I am looking for a way to apply a "second filter" to the requested data, so that instead of a full set of data the client gets only a fraction back, implemented in such a way that the filter can easily be modified and does not affect current client calls.
Any suggestions on how I should approach this problem?
Thanks!
Kind regards,
E.
A pretty common practice is to create another instance of the application, or to use part of the URL to signify the version of the endpoint being called, perhaps using the date as the virtual directory. That way old calls go to the old API and new calls come in on the new API (a sketch follows the example URLs below).
http://api.example.com/dostuff
vs
http://api.example.com/6-7-2011/dostuff
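The service above is ASP.NET SOAP; purely as an illustration of the same path-versioning idea in code, here is a sketch using Java's JAX-RS annotations, with hypothetical paths and class names:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;

    // Old clients keep calling the unversioned path and see unchanged
    // behaviour; new clients opt in to the dated path with the new filtering.
    @Path("/dostuff")
    class DoStuffV1 {
        @GET
        public String doStuff() {
            return "original filter semantics";
        }
    }

    @Path("/6-7-2011/dostuff")
    class DoStuffV2 {
        @GET
        public String doStuff() {
            return "extended filter semantics";
        }
    }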

ASP.NET based Workflow Engine

I am working on a design spec for a new application that will be heavily workflow driven.
Before I re-invent the wheel, is there a decent lightweight workflow engine that plugs into ASP.NET already around?
Basically, I'm looking for something that handles moving through a defined set of workflow pages while handling state management automatically.
If this isn't around already, I'll definitely try to abstract the engine from my app and put it on codeplex, as it would be really handy.
Any suggestions?
Note: .NET 2.0, so no WWF, though I think WWF is overkill for my needs.
EDIT: Seems like there is a legitimate need for this, and there isn't a product out there...So I might build this.
Here is what I'm picturing:
A custom Page class called WebFlowPage
All WebFlowPages are registered in a workflow mapper.
Each WebFlowPage has some form of state object.
An HttpHandler picks the appropriate WebFlowPage based upon the workflow, and populates it from the state object (a rough sketch of this design follows below).
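As a rough sketch of that design, with Java-style types standing in for the ASP.NET ones (WebFlowPage, the mapper, and the state object are from the description above; everything else is hypothetical):

    import java.util.HashMap;
    import java.util.Map;

    // Each page in the flow renders itself from a state object.
    interface WebFlowPage {
        void render(Map<String, Object> state);
    }

    // The "workflow mapper": step name -> registered page.
    class WorkflowMapper {
        private final Map<String, WebFlowPage> pages = new HashMap<>();

        void register(String step, WebFlowPage page) {
            pages.put(step, page);
        }

        // Stands in for the HttpHandler: pick the page for the current
        // step and populate it from the state object.
        void dispatch(String step, Map<String, Object> state) {
            pages.get(step).render(state);
        }
    }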
Is the workflow dynamic, or static?
If the workflows are simple, you could roll your own workflow engine.
In certain situations, it can be fairly simple, and just a couple of data tables to handle the rules, processing and state.
A lot of workflow engines are built for large-scale processing (credit card applications, for example). For small-scale use, you should at least consider rolling your own, which would eliminate the overhead of and dependency on an engine.
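For the simple case, a hand-rolled engine really can be small. A minimal sketch, assuming a linear flow, where the transition map stands in for the "couple of data tables" holding rules and state (step names are hypothetical):

    import java.util.Map;

    public class SimpleWorkflow {

        // Transition rules: in production these would live in a data table.
        private static final Map<String, String> NEXT_STEP = Map.of(
                "start",   "details",
                "details", "confirm",
                "confirm", "done");

        private String currentStep = "start";

        // Advance the workflow according to the rules table.
        public String advance() {
            currentStep = NEXT_STEP.getOrDefault(currentStep, "done");
            return currentStep;
        }

        public static void main(String[] args) {
            SimpleWorkflow wf = new SimpleWorkflow();
            System.out.println(wf.advance()); // details
            System.out.println(wf.advance()); // confirm
            System.out.println(wf.advance()); // done
        }
    }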
Not sure exactly what you wish to do here, but Ra-Ajax can easily keep state at least if you want your solution ajaxified...
For reference purposes you might want to check out the Ajax Calendar sample or even the (banalistically implemented) Ajax Wizard sample. It surely beats the hell out of doing it with JavaScript...
And every time you "do something" you're in "server-land" which means you can store temporaries all the time as you wish...
The project is LGPL
(PS!
Yes I do work with it)
Building a custom workflow engine is not trivial, although it may seem simple at first. We've tried that. It depends a lot on the complexity of the logic you need it to cover.
Given the current state of the Windows Workflow Foundation and the lack of another framework that abstracts the workflow concepts, I would choose WF if you need complex logic, asynchronous handling or branches in your workflows.
Tracking your state through the workflow can be accomplished by carrying some kind of XML payload or by storing the state in a database.
If your workflow is actually a sequential set of forms that need to be filled in by the user, tracking the steps and guiding the user to the next step can be accomplished with some simple custom solution.
You could take a look at the InRule engine too.
Also, there is nxBRE.
These too are mostly used for business rules.
InRule is proprietary, whereas nxBRE supports RuleML (the de facto standard).
You might need to make your own implementation for the pages, and use the rule engine as the "structure".
At this moment, I know that SharePoint 2007 supports page workflows (using WF), but this would imply using .NET Framework 3.0 and deploying SharePoint.
My suggestion would be to use whatever you find more light and easier to use.
I think the term "workflow" is very open to interpretation. I have been working lately with a type of workflow that is very different from what you seem to be describing. Mine is a state machine based workflow where the state of a particular record determines what actions a user can take to move the record to the next step in the business process. So "workflow" in this instance means how the record flows from one state to another until it is finally completed.
Your usage of workflow seems to have more to do with moving a user from one page to another in a linear multi-step process, which is a completely different use case (correct me if I'm wrong). So before coming up with a general purpose "workflow" engine that anyone could use, I would recommend defining a little bit better exactly what types of situations this system would handle.
I've been using this for a few months: http://objectflow.codeplex.com. It's not ASP-specific, but it may fit your needs.
While browsing the web for some workflow & BPM resources, I found the following project: NetBPM. Unfortunately, the project seems to be stopped.
I don't think there is a workflow engine that will automatically handle state for you, but if you are moving through a set of pages like a process such as checkout on an ecommerce site, perhaps the ASP.NET wizard control could help you?
There are a few workflow options. "Aspose" and "Skelta" are the ones I'm evaluating.
Fábio
You can use Workflow Engine; just read the documentation and run the demo.
All of the features you need for a dynamic workflow engine have been added there.
