Modularization of PL/SQL packages

Currently I am doing a restructuring project, mainly on the Oracle PL/SQL packages in our company. It involves working on many of our core packages. We have never had documentation for the back-end work done so far, and the intention of this project is to create a new set of APIs, based on the current logic, in a structured way, while dropping all the unwanted logic that currently exists in the system.
We are also building a new module for the main business of the organization that will work on top of these newly created back-end APIs.
As I started on this project, I found that most of the wrapper APIs had more than 8,000 lines of code. I managed to convert this code into many individual APIs and invoked them from the wrapper API.
This activity has been time-consuming in itself, but by calling an independent API for each piece of business functionality (sketched below) I was able to cut the wrapper API down to just 900 lines of code.
I would like to know from you experts whether this way of modularizing the code is good and worth the time invested, as I am not sure it has many performance benefits.
From a code readability perspective, though, it is definitely helping: after the restructuring I understand the original 8,000 lines much better, and I am sure the other developers in my organization will too.
Please let me know whether I am doing the right thing, and if it has advantages beyond readability, please mention them. Sorry for the long explanation.
Also, is it okay to have more than 1,000 lines of code in a wrapper API?
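To illustrate the shape of the restructuring (a minimal sketch with hypothetical package, procedure, and parameter names, not our actual code): the wrapper procedure shrinks to a short sequence of named calls, and each business step lives in its own small procedure.

    CREATE OR REPLACE PACKAGE order_api AS
      -- The wrapper API: the only entry point callers use.
      PROCEDURE process_order (p_order_id IN NUMBER);
    END order_api;
    /

    CREATE OR REPLACE PACKAGE BODY order_api AS

      -- Each of these held thousands of inline lines before the split.
      PROCEDURE validate_order (p_order_id IN NUMBER) IS
      BEGIN
        NULL;  -- validation logic goes here
      END validate_order;

      PROCEDURE price_order (p_order_id IN NUMBER) IS
      BEGIN
        NULL;  -- pricing logic goes here
      END price_order;

      PROCEDURE reserve_stock (p_order_id IN NUMBER) IS
      BEGIN
        NULL;  -- inventory logic goes here
      END reserve_stock;

      -- The wrapper is now a readable sequence of business steps.
      PROCEDURE process_order (p_order_id IN NUMBER) IS
      BEGIN
        validate_order(p_order_id);
        price_order(p_order_id);
        reserve_stock(p_order_id);
      END process_order;

    END order_api;
    /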

Yes, this kind of modularization is worth the effort. Beyond readability, it makes the code:

Easier to debug
Easier to update
Easier to modify and maintain
Less change-prone, thanks to low coupling
More reusable, if the modules are made generic
Easier to scan for unused code (see the dictionary query sketched after this list)
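On that last point, the Oracle data dictionary can make the check concrete. A sketch, assuming the code lives in a schema named APP_OWNER (a hypothetical name): once logic is split into separately compiled units, ALL_DEPENDENCIES shows which packages nothing else references.

    -- List packages in APP_OWNER that no other object references;
    -- these are candidates for removal.
    SELECT o.object_name
    FROM   all_objects o
    WHERE  o.owner       = 'APP_OWNER'
    AND    o.object_type = 'PACKAGE'
    AND    NOT EXISTS (
             SELECT 1
             FROM   all_dependencies d
             WHERE  d.referenced_owner = o.owner
             AND    d.referenced_name  = o.object_name
             AND    d.referenced_type  = 'PACKAGE'
           );

Keep in mind that calls made through dynamic SQL do not show up in ALL_DEPENDENCIES, so treat the result as a list of candidates to review rather than objects to drop blindly.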

Related

What is currently the best workflow for statistical analysis and report writing?

Workflow for statistical analysis and report writing
This question had a lot of good answers, but as pointed out, they are outdated.
I mostly work on scripts that will probably never be re-run after a paper has been published. Are packages worth the trouble in cases where I don't need to redistribute the code to the world for easy access? What about the organization of data? How can makefiles be used?
I think that if you use the basics laid out by Josh Reichs in the post you linked, making sure that you create a directory to save everything in, then you are good to go.
My added step for the modern world would be to produce a markdown report in one of the available formats:
R Markdown, which you can run right out of RStudio
R Notebooks, which you can run right out of RStudio
Jupyter Notebooks, which you can run out of Anaconda or Jupyter with some easy tweaking
The beauty of these three report systems is that you get to integrate the thought process, code, data, graphs and visualizations in a single spot.
So if, as you say, no one will ever re-run your code, they will at least be able to see it, which allays suspicion. And if they do choose to repeat your process, they can simply follow your logic and process in a duplicate document (this is especially easy with the notebooks).
As for using packages: that is a more complex question. If the packages are well maintained and save you a ton of time cleaning, sorting and structuring data, USE THEM! Time is money. If the things you are using them for are simple, straightforward, just as easy to program yourself, and recognizable by those who will judge your paper, it probably does not matter either way.
The one place where I feel it matters is for complex processes that are difficult (read: easy to do wrong yourself) and that have been implemented, tested and vetted by prior researchers.
Using those packages garners credibility and makes it easier for peers to accept your methods at face value. But if you are on the cutting edge... you should feel free to slice away. Maybe make a package of your own!

What are the limitations of Shiny apps compared to another web programming language like Ruby on Rails?

I am using Shiny to create interactive graphs on a website, but it doesn't seem to have support for things like comment threads, or database storage. Are you supposed to somehow use Shiny within another language?
This question was downvoted, and I hope I won't lose scarce rep points by answering it. I can't speak for the Shiny development team, and I'm only a novice Shinyapps developer, but ...
It seems to me that Shiny aims to make it easy for R programmers to build small to medium-sized, self-contained, web-based, graphic-centric, interactive data-analysis displays without adding an unreasonable amount of code to what they wrote to do their actual work, i.e. the analysis. This is a fairly common requirement for researchers and practitioners (as opposed to full-time professional developers) coming from the R heritage and culture (stats and data science). Shiny achieves this aim pretty well!
You can find out more about the kinds of problems that Shiny aims to solve by going to the source. Note that it says "Turn your analyses into interactive web applications", not "Build a full-service website with interactive chat and a backing store". It sounds as if you want something different in scale and kind, and you may be wasting time by trying to shoehorn your requirement into the Shiny problem/solution space. I've occasionally hammered nails into wood using a pair of pliers because my toolbox was at the bottom of the ladder, but that didn't make it the right thing to do!

Webservice performance - Separate ASMX for each function or both functions inside the same ASMX?

I am using ASMX web services.
I have two functions, so should I put both functions in a single ASMX, or should I create a separate ASMX for each function?
Does that choice impact performance? Which option will perform better?
As with all things performance-related, you need to profile before making changes to increase performance; otherwise you could end up optimizing the wrong thing.
Most of the time the Pareto principle applies: a small portion of the code, or a few modules in the entire application, is responsible for most of the execution time. Making optimizations there will have the greatest impact on performance.
Have you optimized everything else that could be optimized and concluded that the service endpoint is causing the performance issues?
You should write the code in whatever way is easiest to maintain. Do those two functions belong together, or are they completely unrelated? Does it make sense to expose them through one ASMX or two? That should be your criterion for defining your endpoints.
My guess is that both choices will perform similarly, but if you absolutely need to know, build them both ways, profile them, and see which one performs better.

Advice on structuring a social science experiment in Meteor

I am playing around with Meteor as a framework for building web-deployed economics experiments. I am seeking advice on how to structure the app.
Here is the logic:
1. Users create an account and are put in a queue.
2. Groups of size N are formed from users in the queue.
3. Each group proceeds together through a series of stages. Each stage consists of:
   a. the display of information to, and the collection of simple input from, group members;
   b. the computation of new information based on the inputs of all group members.
4. Groups move from stage to stage together, because the group data from one stage provides information needed in the next stage.
5. When a group completes the last stage, data relating to its session is saved for analysis.
6. If a user loses their connection, they can rejoin by logging in and resume the state they were in before.
It is much like a multiplayer game, and I know there are many examples of those, so perhaps the answer to my question is just a pointer to a similarly structured and well-developed Meteor game.
I'm writing a framework for deploying multi-user experiments on Meteor. It basically does everything you asked for and a lot more.
https://github.com/HarvardEconCS/turkserver-meteor
We should talk! I'm currently using this for a couple of experiments, but it is not well documented yet. However, it is a redesign of https://github.com/HarvardEconCS/TurkServer, so all of the important features are there. I'm also looking to broaden out from using Amazon Mechanical Turk for subject recruitment, and Meteor makes that easy.
Even if you're not interested in using this, you should check out the source to see how people are split into groups and kept segregated. The original idea came from here, but I've expanded on it significantly so that it is automagically managed by the smart package instead of you having to do it yourself.

Software Design Description Practice

How many people actually write an SDD document before writing a single line of code?
How do you handle large CSCIs?
What standard do you use for SDD content?
What tailoring have you done?
I certainly have, both historically and on recent projects.
Years ago I worked in organisations where templates were everything.
Then I worked in other places where the templates were looser, non-existent, or didn't fit the projects I was working on.
Now the content of the software design is pretty much governed by what I need to describe to get the idea across to the audience.
"Before writing a single line of code" there wouldn't be a lot of detail. The documents I produce before I start coding are meant to convey the idea of what we need to build to the affected teams and senior management, so they introduce high-level architecture, functionality, technologies, risks and scope. Those last two are really important. The rest is to show other teams where you need to interface with them and to leave managers with a lingering notion that cool stuff is happening.
Most big software companies have their own practices. For example, Motorola has detailed documentation for every aspect of the software development process. There are standard templates for each type of document. Having strict standards makes it possible to maintain a huge number of documents effectively and to integrate them with different tools. Each document gets a tracking number from a dedicated document-tracking system. They even had a system (the last time I saw it, it was at an early stage of development) for automatic requirements tracing: you can say which line of code relates to a given requirement or design guideline.
I would suppose that most people who write SDD documents and use terminology like CSCI are following a specific software development methodology and are most likely working for a serious government customer. Such customers usually take their preparation quite seriously, and the documents are ready and approved before any development starts.
In an Agile process, the development and the design document can proceed in parallel. That means there will be plenty of refactoring to do, but it usually delivers very good results in the end.
In more formal processes (like RUP), an SAD document is mostly created during the elaboration/prototyping phase, based on the team's research.
