How does the TraceProcessing library compare to the TraceEvent library used by PerfView? - .net-traceprocessing

There seem to be quite a few different ways to tackle parsing ETW events (TraceProcessing, TraceEvent, ETW2JSON, etc.). I'm interested specifically in the trade-offs between the TraceProcessing library and the TraceEvent library.
Are they intended for different use cases?
Did TraceProcessing come about as a follow-on to, or an evolution of, TraceEvent?
Why would one choose the TraceProcessing library instead of the TraceEvent library?

(I am a developer at Microsoft who works on the TraceProcessor project.)
I have a little more familiarity with TraceEvent than with ETW2JSON, but the same answer applies in both cases.
Each was developed by a different organization within Microsoft to best support our own performance and diagnostic investigations. In some cases, we took design inspiration or cues for TraceProcessor from other libraries, but largely we developed it for our specific use cases and according to our own design philosophies.
Those differences mean that each library has its own principles about performance, API design/consistency, and broad-based ETW support vs. well-developed first-class support for certain specific things.
TraceEvent does have .NET/CLR support, but focuses mostly on being a managed wrapper around the C++ ETW APIs. It does have support for other events, but that support is very limited.
All in all, TraceEvent is great for first-class .NET/CLR data and the "basics" elsewhere.
TraceProcessor gives strong first-class support to the types of system and diagnostic data that you could see in other tools like XPerf and WPA. We’ve pushed for a clean, consistent, performant API across the library and emphasized strong typing to present each type of data in the most natural shape.

Related

Navigating the automatic differentiation ecosystem in Julia

Julia has a somewhat sprawling AD ecosystem, with perhaps by now more than a dozen different packages spanning, as far as I can tell, forward-mode (ForwardDiff.jl, ForwardDiff2.jl), reverse-mode (ReverseDiff.jl, Nabla.jl, AutoGrad.jl), and source-to-source (Zygote.jl, Yota.jl, Enzyme.jl, and presumably also the forthcoming Diffractor.jl) at several different steps of the compilation pipeline, as well as more exotic things like NiLang.jl.
Between such packages, what is the support for different language constructs (control flow, mutation, etc.), and are there any rules of thumb for how one should go about choosing a given AD for a given task? I believe there was a compare-and-contrast table on the Julia Slack at some point, but I can't seem to find anything like that reproduced for posterity in the relevant Discourse threads or other likely places (1, 2).
I'd also love to hear an informed answer to this. Some more links that might be of interest.
Diffractor now has a GitHub repo, which lays out the implementation plan. After reading the text there, my take is that it will require long-term implementation work before Diffractor is production-ready. On the other hand, there is a feeling that Zygote may be in "maintenance mode" while awaiting Diffractor. At least from a distance, the situation seems a bit awkward. The good news is that the ChainRules.jl ecosystem seems to make it possible to easily swap between autodiff systems.
As of Sept 2021, Yota seems to be rapidly evolving. The 0.5 release brings support for ChainRules, which seems to unlock it for production use. There is a lot of interesting discussion at this release thread. My understanding from reading through those threads is that the scope of Yota is more limited compared to Zygote (e.g., autodiff through mutation is not supported). This limited scope has the advantage of opening up optimization opportunities, such as preallocation and kernel fusion, that may not be possible in a more general autodiff system. As such, Yota might be better suited to fill the niche of, e.g., PyTorch-style modeling.

Does SCORM really need the flash player to work?

I have a college project where I have to build an LMS, and one of the requirements is to allow importing SCORM files. However, while researching I saw something about SCORM using Flash Player, support for which ended this year. Can anyone tell me whether SCORM really needs Flash Player to work?
No. The only hard technical requirement is a JavaScript environment (or an environment that sufficiently mimics one), which is why SCORM is very often considered a browser-based specification. "Browsers", and therefore JS environments, are finding their way into all kinds of places.
To elaborate a bit more on Brian's answer, take a look at the following resources.
SCORM1.2 Golf Examples
https://scorm.com/scorm-explained/technical-scorm/golf-examples/
SCORM Run-Time Reference
https://scorm.com/scorm-explained/technical-scorm/run-time/run-time-reference/
Rapid E-Learning Dev Software
Articulate
Captivate
Lectora
I have experience with all three of the tools listed above; however, I have standardized all training development in my team on Lectora because of its flexibility and the ability to add JS, jQuery, CSS, etc. to my content. You are really limited only by your own skills at that point, and if you have the programming skills then you can exploit Lectora to its fullest potential.
Using Lectora, I can use Actions to modify any/all SCORM variables using the run-time reference listed above as my guide.
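To make the "JavaScript environment" point concrete, here is a minimal sketch of the SCORM 1.2 run-time calls a piece of content makes against the API object the LMS exposes. The LMS* functions and cmi.* data-model elements come from the run-time reference linked above; the findAPI helper and its frame-walking logic are only illustrative, not part of the specification. No Flash Player is involved anywhere.

```typescript
// Minimal sketch: SCORM 1.2 content talking to the LMS through its JavaScript API object.
interface Scorm12Api {
  LMSInitialize(param: string): string;            // returns "true" or "false"
  LMSGetValue(element: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(param: string): string;
  LMSFinish(param: string): string;
}

// Illustrative helper: walk up the frame hierarchy looking for the LMS-provided "API" object.
function findAPI(win: Window): Scorm12Api | null {
  let current: Window | null = win;
  while (current) {
    const api = (current as any).API as Scorm12Api | undefined;
    if (api) return api;
    if (current === current.parent) break;
    current = current.parent;
  }
  return null;
}

const api = findAPI(window);
if (api && api.LMSInitialize("") === "true") {
  const learner = api.LMSGetValue("cmi.core.student_name"); // read from the data model
  console.log(learner);
  api.LMSSetValue("cmi.core.lesson_status", "completed");   // write completion status
  api.LMSCommit("");
  api.LMSFinish("");
}
```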

SIGNAL vs Esterel vs Lustre

I'm very interested in dataflow- and concurrency-focused languages. I've read up on the subject, and repeatedly I see SIGNAL, Esterel, and Lustre mentioned, so I take it they're prominent players in those fields. However, many of the links in the resources I found are dead, and the languages don't seem very accessible. I managed to find a couple of compilers I can build from source (the Polychrony Toolset for SIGNAL and the Columbia Compiler for Esterel), but both have had issues when I try to build them with CMake. Even textbooks teaching these languages have been tough to come by.
With the background out of the way, my actual questions are: is anyone really familiar with this field of programming? Are these languages still big deals, or have they "died out" by now? Could it be that they're only available to big companies with a hefty price tag, so the average programmer wouldn't really be able to pick them up?
I ran into a couple of other dataflow/concurrent-paradigm languages, such as Oz or E, but they seemed to be mostly for education and not suitable for real-world projects. Not to say they aren't impressive languages, but their implementations are limited and it would be unusual to see them in production contexts. Does anyone know of other languages in this field that are actually accessible (good documentation, tutorials, and an installable compiler to actually code in)? Or can anyone clarify the status of a language such as Oz or E and hopefully show that they are indeed good enough for large real-world projects?
None of the languages you mentioned is widespread. That means their compilers and runtimes have bugs, the communities are small and can offer little help, and linking with general-purpose libraries can be problematic.
I recommend using an actively supported general-purpose language such as Java, Scala, Kotlin, or C++. They all have libraries to support asynchronous computation, and dataflow is little more than support for asynchronous procedure calls. You can even develop your own dataflow library. This is not that hard: I wrote a dataflow library for Java that is only 40 kilobytes of source code.
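To illustrate the point that dataflow can be layered on asynchronous procedure calls, here is a minimal sketch (in TypeScript with Promises, not the Java library mentioned above; the node helper is purely hypothetical): a node is just a function that fires once all of its input promises have resolved, so a graph of such nodes executes in data-driven order.

```typescript
// Sketch: a tiny dataflow layer built on asynchronous procedure calls.
// A "node" fires as soon as all of its input promises have resolved.
async function node<T, R>(fn: (...inputs: T[]) => R, ...inputs: Promise<T>[]): Promise<R> {
  const values = await Promise.all(inputs);
  return fn(...values);
}

// Example graph: c = a + b, d = 2 * c, where a and b arrive asynchronously.
const a = Promise.resolve(2);
const b = new Promise<number>((resolve) => setTimeout(() => resolve(3), 10));

const c = node((x, y) => x + y, a, b);
const d = node((x) => 2 * x, c);

d.then((result) => console.log(result)); // prints 10 once both inputs are available
```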
Have you tried Céu? It is a recent variant of Esterel, and compiles to C. It is simple to understand, and provides a reactive and concurrent structuring of control flow. Native C calls can be made by just prefixing them with an underscore ("_printf").
http://ceu-lang.org
Also, see the paper "Structured Synchronous Reactive Programming with Céu" for a nice overview.
http://www.ceu-lang.org/chico/ceu_mod15_pre.pdf
These academic languages have mostly disappeared as such and live on in industrial tools:
Esterel and Lustre are the basis of Ansys' SCADE.
Signal is used in 3DS' ControlBuild.
Esterel was used in Synopsys' ConcentricStudio.
Researchers also use Heptagon in synchronous-language studies: code generation, formal methods, and new concepts.

How to get a handle on all this middleware?

My organization has recently been wrestling with the question of whether we should incorporate different middleware products / concepts into our applications. Products we are looking at are things like Pegasystems, Oracle BPM / BPEL, BizTalk, Fair Isaac Blaze, etc., etc., etc.
But I'm having a hard time getting a handle on all this. Before I go forward with evaluating the usefulness (positive or negative) of these different products, I'm trying to get an understanding of all the different concepts in this space. I'm overwhelmed by an alphabet soup of BPM, ESB, SOA, CEP, WF, BRE, ERP, etc. Some products seem to cover one or more of those aspects; others focus on doing just one. The terms all seem very ambiguous and conflated with each other.
Is there a good resource out there to get a handle on all these different middleware concepts / patterns? A book? A website? An article that sums it up well? Bonus points if there is a resource that maps the various popular products into which pattern(s) they address.
Thanks,
~ Justin
I've spent the last 3-4 years blogging on the topics you mentioned (http://www.UdiDahan.com), as well as writing my own lightweight ESB (http://www.NServiceBus.com), and many more years working and consulting in this space. The main conclusion I've come to is that strong business analysis and technology-agnostic architecture are needed - no tool or technology can prevent a mess by itself.
There is the Enterprise Integration Patterns book, which provides a good catalog of the technical patterns involved but doesn't touch on the necessary business analysis. I've found that Value Networks (http://en.wikipedia.org/wiki/Value_network_analysis) can be a good starting point for identifying business boundaries to which IT boundaries can then be aligned, resulting in the benefits of SOA; the use of an ESB across those boundaries is then justified.
CEP, WF, and BRE should be used within a boundary and not across them.
ERP packages tend to cross boundaries and, as such, should be integrated piecemeal into the boundaries mentioned - DDD anti-corruption layers can be used to insulate custom logic from those apps.
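As a rough illustration of that last point (all names here are hypothetical, not from any particular ERP), an anti-corruption layer is typically just an adapter that translates the ERP's representation into the domain's own model, so the rest of the application never depends on the ERP's types directly:

```typescript
// Sketch of a DDD anti-corruption layer: the domain depends only on its own
// model and port; the adapter does all translation from the ERP's shape.

// Domain-side model and port
interface Customer {
  id: string;
  name: string;
  creditLimit: number; // in currency units
}

interface CustomerDirectory {
  findCustomer(id: string): Promise<Customer | undefined>;
}

// Hypothetical ERP-side representation and client
interface ErpCustomerRecord {
  CUST_NO: string;
  CUST_NAME: string;
  CR_LIMIT_CENTS: number;
}

interface ErpClient {
  fetchCustomer(custNo: string): Promise<ErpCustomerRecord | null>;
}

// The anti-corruption layer: every translation rule lives here and nowhere else.
class ErpCustomerDirectory implements CustomerDirectory {
  constructor(private readonly erp: ErpClient) {}

  async findCustomer(id: string): Promise<Customer | undefined> {
    const record = await this.erp.fetchCustomer(id);
    if (!record) return undefined;
    return {
      id: record.CUST_NO,
      name: record.CUST_NAME.trim(),
      creditLimit: record.CR_LIMIT_CENTS / 100, // ERP stores cents; domain uses units
    };
  }
}
```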
Hope that helps.
IBM and Oracle have SOA certifications. Since they're the leaders in the marketplace (Gartner Magic Quadrant), I would read about how they define SOA and ESBs (along with the methodology and the components needed to support SOA, like governance, registries, etc.). It'll give you the high-level overview you're looking for and the use cases "all this middleware" is trying to solve.

Evolutionary vs throwaway prototyping [closed]

Who is winning the "low- vs. high-fidelity prototyping" debate?
Should prototype zero (P0) be the first version of the final product, or should P0 always be a throwaway? What approach is the industry favoring?
Excellent article from Wikipedia: Software prototyping
A prototype should always be a throwaway - a prototype is used to quickly prove a concept and influence the design of the real product. As such, a lot of things which are important for a real product (a thought-out architecture and design, reliability, security, maintainability, etc.) fall by the wayside. If you do take these things into account when building your prototype, you're not really building a prototype anymore.
My experience with prototypes where the code directly evolved into an actual product shows that the end result suffers because of it - the lack of a real architecture resulted in a lot of cobbled-together code that had to be constantly hacked to add new features. I've even seen a case where the original technology chosen for rapid development of the prototype was not the best choice for the actual product, and a complete rewrite was necessary for V2.
I think we, the pedants, have lost this particular battle -- alleged "prototypes" (which by definition should be rewritten from scratch!!!-) are in fact being "evolved" into (often half-baked) "betas", etc.
Even today, I applauded the smart attempt by a colleague of mine to recapture the concept, even if the term is a lost battle: he's setting up a way for small proof-of-concept projects to be developed (and, if the concept does get proven, transferred to software engineers for real prototyping, then development).
The idea is that, in our department, we have many people who aren't (and aren't in fact supposed to be!-) software developers, but are very smart, computer savvy, and in daily contact with the reality "in the trenches" -- they are the ones who are most likely to smell an opportunity for some potential innovation which could have real impact once implemented as a "production-ready" software project. Salespeople, account managers, business analysts, technology managers -- at our company, they all often fit this description.
But they're NOT going to program in C++, hardly at all in Java, maybe in Python but miles away from "productionized" -- indeed they're far more likely to whip up a smart proof of concept in php, javascript, perl, bash, Excel+VBA, and sundry other "quick and dirty" technologies we don't even want to dream about productionizing and supporting forevermore!-)
So by calling their prototypes "proofs of concept", we hope to encourage them to embody their daring concepts in concrete form (vague natural-language blabberings and much waving of hands being least useful, and alien to the company's culture anyway;-) and yet sharply indicate that such projects, if promoted to exist among the software engineers' goals and priorities, DO have to be programmed from scratch -- the proof-of-concept serves, at best, as a good draft/sketch spec for what the engineers are aiming for, definitely NOT to be incrementally enriched, but redone from the root up!-).
It's early to say how well this idea works -- ask me in three months, when we evaluate the quarter's endeavors (right now, we're just providing a blueprint for them, hot on the heels of evaluating last quarter's department- and company-wise undertakings!-).
Write the prototype, then keep refactoring it until it becomes the product.
The key is to not hesitate to refactor when necessary.
It helps to have few people working on it initially. With too many people working on something, refactoring becomes more difficult.
Response from BUNDALLAH, HAMISI
A prototype typically simulates only a few aspects of the features of the eventual program, and may be completely different from the eventual implementation.
Contrary to what my other colleagues have suggested above, I would NOT advise my boss to opt for the throwaway prototype model. I am with Anita on this. Given the two prototype models and the circumstances provided, I would strongly advise the management (my boss) to opt for the evolutionary prototype model. With the company being large, and given all the other variables such as the complexity of the code and the newness of the programming language to be used, I would not use the throwaway prototype model.
A throwaway prototype becomes the starting point from which users can re-examine their expectations and clarify their requirements. When this has been achieved, the prototype is 'thrown away', and the system is formally developed based on the identified requirements (Crinnion, 1991). But in this situation, the users may not know all the requirements at once, due to the complexity of the factors given.
Evolutionary prototyping is the process of developing a computer system by gradual refinement. Each refinement of the system contains a system specification and a software development phase. In contrast to both the traditional waterfall approach and incremental prototyping, which require everyone to get everything right the first time, this approach allows participants to reflect on lessons learned from the previous cycle(s). It is usual to go through three such cycles of gradual refinement, though there is nothing stopping a process of continual evolution, which is often the case in many systems. According to Davis (1992), evolutionary prototyping acknowledges that we do not understand all the requirements (and we have been told above that the system is complex, the company is large, the code will be complex, and the language is fairly new to the programming team).
The main goal when using evolutionary prototyping is to build a very robust prototype in a structured manner and constantly refine it. The reason for this is that the evolutionary prototype, once built, forms the heart of the new system, and the improvements and further requirements are built onto it. This technique allows the development team to add features, or make changes, that couldn't be conceived during the requirements and design phase. For a system to be useful, it must evolve through use in its intended operational environment. A product is never "done"; it is always maturing as the usage environment changes. Developers often try to define a system using their most familiar frame of reference - where they are currently (or rather, the current system status). They make assumptions about the way business will be conducted and the technology base on which the business will be implemented. A plan is enacted to develop the capability, and, sooner or later, something resembling the envisioned system is delivered (SPC, 1997).
Evolutionary Prototypes have an advantage over Throwaway Prototypes in that they are functional systems. Although they may not have all the features the users have planned, they may be used on an interim basis until the final system is delivered.
In Evolutionary Prototyping, developers can focus themselves to develop parts of the system that they understand instead of working on developing a whole system. To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest. (Bersoff and Davis, 1991).
However, the main problems with evolutionary prototyping are due to poor management: Lack of defined milestones, lack of achievement - always putting off what would be in the present prototype until the next one, lack of proper evaluation, lack of clarity between a prototype and an implemented system, lack of continued commitment from users. This process requires a greater degree of sustained commitment from users for a longer time span than traditionally required. Users must be constantly informed as to what is going on and be completely aware of the expectations of the 'prototypes'.
References
Bersoff, E., Davis, A. (1991). Impacts of Life Cycle Models of Software Configuration Management. Comm. ACM.
Crinnion, J. (1991). Evolutionary Systems Development: a practical guide to the use of prototyping within a structured systems methodology. Plenum Press, New York.
Davis, A. (1992). Operational Prototyping: A new Development Approach. IEEE Software.
Software Productivity Consortium (SPC). (1997). Evolutionary Rapid Development. SPC document SPC-97057-CMC, version 01.00.04.
