Reference for understanding the NCPDP D.0 standard

I am working on a healthcare project and have come across the NCPDP D.0 standard. Although I have googled and found some basic information on Wikipedia and other sites, I was wondering whether there is a simple reference or an open-source example of software that handles pharmacy transactions in the NCPDP D.0 format.
Has anyone in the community worked on this, or is anyone working on it now? Any information you can share would be of great help.
Thanks,
HSR.

I think that you will be hard pressed to find any public documentation or open source software following this standard. I haven't been able to find any myself, and to gain access to the standard it's my understanding that you will need to be a member of NCPDP. That is a $675 expense, and you are not allowed to share the standard outside of your organization (see their Standards Purchase page).
In addition, you'll need the X12 5010 standard. On the X12 store this could set you back a few thousand dollars.
In any event, the documentation on the NCPDP.ORG site, though incomplete, is decent; you should check it out.

SOA: Use SDO (Service Data Object)? [duplicate]

I've been programming in Delphi with Midas/DataSnap for quite a long time and am quite happy with it. Moving to .NET, I'm more than happy with the ADO.NET DataSet. For CRUD applications, I'm highly uncomfortable with any kind of ORM. A generic data structure with automatic diff/delta handling gets the job done better for me, an average database application developer.
I tried to study Java years ago and could not find a similar idea implemented. The closest I could find is SDO (Service Data Objects). When I first saw it, I thought it would be widely adopted, but I was wrong. Even though the spec is rather old now, I still hardly find people discussing it or using it extensively. Judging from the information I can find on the internet, SDO usage is very limited.
I'm wondering if it's dying. Do you have any experience with SDO that you want to share? Is manual DTO coding always better?
Ok. I see. The answer is "no"
;)
It was the same for me when I tried SDO for the first time. An old spec, little feedback... definitely no.
I wouldn't recommend using SDO unless it's imposed on you by some other part of the project.
WebSphere Process Server uses SDO. It's not really a bad API once you learn it, but the spec and the documentation are vague. It doesn't spell out what happens if you ask for a field that doesn't exist, or whether it does type conversions while getting or setting fields, to name two gripes.
I don't think the API defines how to define new types, so that part will be implementation-specific. Type definitions are based on XSD, so you'll be working with those and all of the associated standards.
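To give a feel for the API these answers describe, here is a minimal sketch of defining types from an XSD and reading/writing fields by path. It assumes an SDO 2.1 implementation such as Apache Tuscany SDO on the classpath; the customer.xsd file, its namespace URI and the Customer type are made-up examples, not part of any real project.

    import java.io.FileInputStream;
    import java.io.InputStream;

    import commonj.sdo.DataObject;
    import commonj.sdo.helper.DataFactory;
    import commonj.sdo.helper.XSDHelper;

    public class SdoSketch {
        public static void main(String[] args) throws Exception {
            // Register the types declared in a (hypothetical) customer.xsd
            // with the default helper context.
            try (InputStream xsd = new FileInputStream("customer.xsd")) {
                XSDHelper.INSTANCE.define(xsd, null);
            }

            // Create a dynamic DataObject for the Customer type; the URI must
            // match the targetNamespace declared in the XSD.
            DataObject customer =
                    DataFactory.INSTANCE.create("http://example.com/customer", "Customer");

            // Fields are read and written by path; the typed accessors
            // (getString, getInt, ...) convert where the implementation allows it.
            customer.setString("name", "Jane Doe");
            customer.setInt("accountNumber", 42);
            customer.createDataObject("address").setString("city", "Springfield");

            System.out.println(customer.getString("name"));
            System.out.println(customer.getInt("accountNumber"));

            // This is exactly the gripe above: what happens for a path that does
            // not exist in the type is left vague by the spec and differs
            // between implementations.
        }
    }

The DataSet-like diff/delta handling the question asks about is what SDO exposes as a ChangeSummary on a data graph, which is the part that most resembles the ADO.NET DataSet.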
As others have implied, the API isn't widely used. This means it'll be hard to find people experienced with it, or help using it.

OpenCL research / academic papers

I'm about to start my honours project at uni on OpenCL and how it can be used to improve modern game development. I know there are a couple of books out now or coming soon about learning OpenCL, but I was wondering if anyone knows of any good papers on OpenCL.
I've been looking but can't seem to find any. Part of my project requires a literature review and contrast, so any help on this would be appreciated.
I won't point you directly to any papers; instead, I'll give you a few hints on where to look for them.
Google Scholar: one of the best places on the web to search for papers on any subject. Searching for "opencl game development" turned up a few interesting results right on the first page; there are surely other valuable results in the following pages.
IEEE Xplore: IEEE is one of the de facto establishments on all things computer and electronics; their journals and conferences have many publications on OpenCL in particular and parallel processing in general. IEEE Xplore is their search engine, and although all articles are usually also referenced in Google Scholar, they may be easier to find using IEEE Xplore.
ACM Digital Library: ACM is a large and important institution like IEEE, but with an even bigger focus on computing. You will find many papers on OpenCL there.
Google, Yahoo, Bing, etc.: sometimes when everything else fails, normal search engines can go a long way. You may find information about ongoing projects, important game developers' blog posts, etc. All of these can be valid references if there aren't better ones (be sure to search really well before concluding there aren't).
You should favor articles published in scientific journals over: a) papers or extended abstracts published in conference proceedings; b) corporate articles, not peer-reviewed, usually found in the respective corporation websites; c) articles published in general scientific knowledge magazines (e.g. Scientific American, etc.).
Sometimes you may not be given access to certain papers and will be asked to purchase them. Universities usually have subscriptions to many journals, so you may have better luck downloading the PDFs while on your institution's network. If you have no luck, sometimes the authors put preprint/unfinished copies of the articles on their websites (sometimes they even put up the dubiously legal published copy). As a last resort, you can always contact the authors directly; they'll most likely send you the article by email (it's in their own interest).
Finally, to learn OpenCL, I found that a mixture of the reference manual, the quick reference card and the examples from the Intel, AMD, Nvidia and IBM SDKs goes a long way. No doubt a book will help, though I can't recommend one, because I haven't read any.
This probably isn't the answer you wanted, but believe me, it's the answer you need to do good work.
Good luck!

What are RFCs?

I think there are a lot of people out there unaware of RFCs (Requests for Comments). I know what they are at a logical level, but can anybody give a good description for a new developer? Also, sharing some resources on how to use and read them would be nice.
The term comes from the days of ARPANET, the predecessor to the internet, where the researchers would basically just throw ideas out there to, well, make a request for comments from the other researchers on the project. They could be about pretty much anything and were not very formal at the time. If you go read them, it’s pretty comical how informal they were.
Now there are more standards about what goes into an RFC, and you can't get an RFC published until you have met strict guidelines and done extensive research. They are pretty much reserved for well-researched network standards that have been approved by the IETF.
From http://linux.about.com/cs/linux101/g/rfclparrequestf.htm:
"The name of the result and the process for creating a standard on the Internet. New standards are proposed and published on the Internet, as a Request For Comments. The proposal is reviewed by the Internet Engineering Task Force (http://www.ietf.org/), a consensus-building body that facilitates discussion, and eventually a new standard is established, but the reference number/name for the standard retains the acronym RFC, e.g. the official standard for e-mail message formats is RFC 822."
See also: RFC Wikipedia Article
This could also mean "Request for Change" in an Agile environment. Just throwing that out there, as everyone seems so certain it only means "Request for Comments".
Wikipedia gives a good description of what an RFC is about, but in a nutshell it is a set of recommendations from the Internet Engineering Task Force applicable to the working of the Internet and Internet-connected systems. They are used as standards.
So if you're looking for a definitive source of information about the implementation of FTP, LDAP, IMAP, POP, etc., you don't have to look further than the appropriate RFC documents.
It's a Request For Comments. That title is a little misleading though, as it's often used as a name for standards, mostly those by the IETF. See Wikipedia

ASP.NET and System.Diagnostics tracing - have I missed something, or is this a bad idea?

For various common reasons I wanted to use tracing for my ASP.NET application, especially since I found out about the Service Trace Viewer tool, which allows you to examine your traces in a powerful way.
Since I had never used tracing before, I started studying it. After a while with Google, SO and MSDN I finally have a good idea of how things work. But I also found one very disturbing thing.
When using tracing in ASP.NET applications, it makes a lot of sense to group the trace messages together by web request, especially since one of the reasons I want to use it is for studying performance problems. The above-mentioned tool also supports this by using <Correlation> tags in the generated XML files, which in turn come from System.Diagnostics.Trace.CorrelationManager. It also allows other nice features like activity starting/stopping, which provides an even better grouping of trace messages. Cool, right?
I thought so too, until I started inspecting where the CorrelationManager actually lives. After all, it is a static property. After some playing around with Reflector I found out something horrifying: it's stored in CallContext! Which is the kind of thing we shouldn't be using in ASP.NET, right?
So... am I missing something here? Is tracing really fundamentally flawed in ASP.NET?
Added: Emm, I'm kinda on the verge of rewriting this stuff myself. I still want to use the neat tool for exploring the traces. Any reason I shouldn't do this? Perhaps there is something better yet? It would be really nice if I got some answers soon. :)
Added 2: A colleague of mine confirmed that this is not just a theoretical issue. He has observed this in the system he's working on. So it's settled. I'm going to build a new little system that does things just the way I want it to. :)
Added 3: Wow, cool... the guys at Microsoft couldn't find anything wrong with using Correlation Manager in ASP.NET. So apparently we're not getting a fix for this bug after all...
You raise a very interesting question. After looking in Reflector, I also see that CorrelationManager uses the CallContext to store the activity id. I have not worked with tracing much, so I can't really speak to what types of activities it tracks, but if it tracks a single activity across the entire life cycle of a page request, then per the article you referenced above there is a possibility that the activity id could become disassociated from the actual activity. The activity would appear to die halfway through.
HttpContext would seem ideal for tracking an entire page request from beginning to end, since it is carried over even if execution moves to a different thread. However, the HttpContext will not be transferred to your business objects, whereas the CallContext would. On a side note, I saw that CallContext can also be transferred when using remoting between client and server apps, which is pretty nifty, but in the case of tracking the website this is not really all that useful.
If you haven't already, check out this guy's site. The issue described in his article isn't specifically the same issue that the Cup(Of T) article mentioned, but it's still pretty interesting. He also provides several very informative links on the page that describe components of the CorrelationManager.
Unfortunately, I don't really have an answer to your question, but I definitely find the topic interesting and will continue looking into it. So please update this post as you learn more. I'm curious to see what you or others (hopefully someone out there can shed some light on the topic) find while looking into this.
Anyway, good luck. I'll talk to some of the peeps at my work about this and post more later if I find anything.
Chris
OK, so this is how it ended.
My colleague called Microsoft and reported this bug to them. Being certified partners means we get access to some more prioritized fixing queue or something... don't know that stuff. Anyway, they're working on it. Hopefully we'll see a patch soon. :)
In the mean time I've created my own little tracing class. It doesn't support all the bells and whistles that the default trace framework does, but it's just what I need. :) More specifically:
It writes to the same XML format as the default XmlWriterTraceListener so I can use the tool to analyze the logs.
It has built-in log rotation - something my colleague had to implement himself on top of XmlWriterTraceListener.
The actual logging is deferred to another thread, so the cost of writing doesn't skew the performance being measured (see the sketch below).
Correlations are now stored in HttpContext.Items so ASP.NET threading peculiarities don't affect it.
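The deferred-logging point is a general technique rather than anything .NET-specific: trace calls only enqueue a message, and a single background thread does the slow writing, so I/O cost doesn't distort the timings you are trying to measure. Here is a minimal, language-agnostic sketch of that idea in Java (not the poster's actual class; all names are hypothetical):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    // Minimal sketch of deferred (asynchronous) logging: callers only enqueue,
    // and a single background thread does the slow writing.
    public class DeferredTraceWriter implements AutoCloseable {
        private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        private final Thread writer;
        private volatile boolean running = true;

        public DeferredTraceWriter() {
            writer = new Thread(() -> {
                try {
                    while (running || !queue.isEmpty()) {
                        String entry = queue.poll(100, TimeUnit.MILLISECONDS);
                        if (entry != null) {
                            // A real implementation would append to the rotating
                            // XML trace file here instead of stdout.
                            System.out.println(entry);
                        }
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "trace-writer");
            writer.setDaemon(true);
            writer.start();
        }

        // Called from request threads; cheap, never blocks on I/O.
        public void trace(String correlationId, String message) {
            queue.offer(correlationId + " " + message);
        }

        @Override
        public void close() throws InterruptedException {
            running = false;
            writer.join();
        }
    }

A real implementation would also flush and rotate the underlying trace file from the same writer thread, which keeps the rotation logic out of the request path.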
Happy end, I hope. :)

Requirements Gathering

How do you go about the requirements gathering phase? Does anyone have a good set of guidelines or tips to follow? What are some good questions to ask the stakeholders?
I am currently working on a new project and there are a lot of unknowns. I am in the process of coming up with a list of questions to ask the stakeholders. However, I can't help but feel that I am missing something or forgetting to ask a critical question.
You're almost certainly missing something. A lot of things, probably. Don't worry, it's ok. Even if you remembered everything and covered all the bases stakeholders aren't going to be able to give you very good, clear requirements without any point of reference. The best way to do this sort of thing is to get what you can from them now, then take that and give them something to react to. It can be a paper prototype, a mockup, version 0.1 of the software, whatever. Then they can start telling you what they really want.
See obligatory comic below...
In general, I try and get a feel for the business model my customer/client is trying to emulate with the application they want built. Are we building a glorified forms processor? Are we retrieving data from multiple sources in a single application to save time? Are we performing some kind of integration?
Once the general business model is established, I then move to the "musts" and "must nots" for the application, to dictate what data I can retrieve, who can perform which functions, etc.
Usually if you can get the customer to explain their model or workflow, you can move from there and find additional key questions.
The one question I always make sure to ask in some form or another is "What is the trickiest/most annoying thing you have to do when doing X?" Typically the answer to that reveals the craziest business/data rule you'll have to implement.
Hope this helps!
Steve Yegge's take is fun, but there is money to be made in working out what other people's requirements are, so I'd take his article with a pinch of salt.
Requirements gathering is incredibly tough because of the manner in which communication works. It's a four-step process that is lossy at each step.
I have an idea in my head
I transform this into words and pictures
You interpret the pictures and words
You paint an image in your own mind of what my original idea was like
And humans fail miserably at this with worrying frequency through their adorable imperfections.
Agile gets this right in promoting iterative development. Getting early versions out to the client is important in identifying which features are most important (what ships in 0.1-0.5, roughly), helps keep you both on the right track in terms of how the application will work, and quickly identifies the hidden features that you would otherwise miss.
The two main problem scenarios are the two ends of the scales:
Not having a freaking clue about what you are doing - get some domain experts
Having too many requirements - the feature pit. Question, cull (prioritise ;)) features, and use iterative development.
Yegge does well in pointing out that domain experts are essential to produce good requirements because they know the business and have worked in it. They can help identify the core desire of the client and will help explain how their staff will use the system and what is important to the staff.
Alternatives and additions include trying to do the job yourself to get into the mindset or having a client staff member occasionally on-site, although the latter is unlikely to happen.
The feature pit is the other side, mostly full of failed government IT projects. Too much, too soon, not enough thought or application of realism (but what do you expect? They have only about four years to make themselves feel important). The aim here is to work out what the customer really wants.
As long as you work on getting the core components correct, efficient and bug-free, clients usually remain tolerant of missing features that arrive in later shipments, as long as they eventually arrive. This is where iterative development really helps.
Remember to separate the client's ideas of what the program will be like and what they want the program to achieve.
Some clients create confusion by communicating their requirements in the form of application features, which may be poorly thought out or made redundant by much simpler functionality than they think they require. While I'm not advocating calling the client an idiot or not listening to them, I feel it is worth continually asking why they want a particular feature in order to get at its underlying purpose.
Remember that in either scenario it is imperative to root out the quickest path to fulfilling the customer's core need, putting you in a scenario where you are both profiting from the relationship.
Wow, where to start?
First, there is a set of knowledge someone should have to do analysis on some projects, but it really depends on what you are building for who. In other words, it makes a big difference if you are modifying an enterprise application for a Fortune 100 corporation, building an iPhone app, or adding functionality to a personal webpage.
Second, there are different kinds of requirements.
Objectives: What does the user want to accomplish?
Functional: What does the user need to do in order to reach their objective? (think steps to reach the objective/s)
Non-functional: What are the constraints your program needs to perform within? (think 10 vs 10k simultaneous users, growth, back-up, etc.)
Business rules: What dynamic constraints do you have to meet? (think calculations, definitions, legal concerns, etc.)
Third, the way to gather requirements most effectively, and then get feedback on them (which you will do, right?), is to use models. Use cases and user stories are a model of what the user needs to do. Process models are another view of what needs to happen. System diagrams are yet another model of how different parts of the program(s) interact. Good data modeling will define business concepts and show you the inputs, outputs, and changes that happen within your program. Models (and there are more than I listed) are really the key to the concern you raise. A few good models will capture the needs, and from models you can determine your requirements.
Fourth, get feedback. I know I mentioned this already, but you will not get everything right the first time, so get responses to what your customer wants.
As much as I appreciate requirements, and the models that drive them, users typically do not understand the ramifications of all their requests. Constant communication with chances for review and feedback will give users a better understanding of what you are delivering. Further, they will refine their understanding based on what they see. Unless you're working for the government, iterations and/or prototypes are helpful.
First of all, gather the requirements before you start coding. You can begin the design while you are gathering them, depending on your project life cycle, but you shouldn't ever start coding without them.
Requirements are a set of well-written documents that protect both the client and yourself. Never forget that. If a requirement is not present, then it was not paid for (and thus it requires a formal change request); if it is present, then it must be implemented and must work correctly.
Requirements must be testable. If a requirement cannot be tested, then it isn't a requirement. A statement like "the system shall respond quickly" cannot be verified unless you say how fast and under what load, so it is not a requirement.
Requirements must be concrete. That means stating "The system user interface shall be easy to use" is not a correct requirement.
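To make the testable/concrete distinction tangible, a requirement such as "search over 10,000 records shall return within 2 seconds" maps directly onto an automated acceptance check, while "shall be easy to use" does not. A small illustrative sketch using JUnit 5; the search method and its numbers are invented for the example:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Collections;
    import java.util.List;

    import org.junit.jupiter.api.Test;

    // "The system shall be easy to use" gives us nothing to assert;
    // "search over 10,000 records completes within 2 seconds" does.
    class SearchResponseTimeRequirementTest {

        // Hypothetical stand-in for the real system under test.
        private List<String> search(String term, int recordCount) {
            return Collections.nCopies(Math.min(recordCount, 50), term);
        }

        @Test
        void searchReturnsWithinTwoSecondsForTenThousandRecords() {
            Instant start = Instant.now();
            search("aspirin", 10_000);
            Duration elapsed = Duration.between(start, Instant.now());

            // The requirement, stated as a check that can pass or fail.
            assertTrue(elapsed.compareTo(Duration.ofSeconds(2)) <= 0,
                    "search over 10,000 records must complete within 2 seconds");
        }
    }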
In order to actually "gather" the requirements, you first need to make sure you understand the business model. The client will tell you what they want in their own words; it is your job to understand it and interpret it in the right context.
Hold meetings with the client while you're developing the requirements. Describe the requirements back to the client in your own words and make sure you and the client have the same understanding of them.
Requirements call for concise, testable wording, but keep track of everything else that comes up in the meetings - diagrams, doubts - and try to maintain a record of every meeting.
If you can use an incremental life cycle, that will give you the ability to improve requirements that were badly gathered at first.
You can never ask too many or "stupid" questions. The more questions you ask, the more answers you receive.
According to Steve Yegge, that's the wrong question to ask. If you're gathering requirements, it's already too late; your project is doomed.
High-level discussions about purpose, scope, limitations of operating environment, size, etc
Audition a single paragraph description of the system, hammer it out
Mock up UI
Formalize known requirements
Now iterate between 3 and 4 with more and more functional prototypes and more specs with more details. Write tests as you go. Do this until you have functional software and a complete, objective, testable requirements spec.
That's the dream. The reality is usually after a couple iterations everybody goes head-down and codes until there's a month left to test.
Gathering Business Requirements Are Bullshit - Steve Yegge
read the agile manifesto - working software is the only measurement for the success of a software project
get familiar with agile software practices - study Scrum, lean programming, XP, etc. - this will save you a tremendous amount of time, not only for requirements gathering but for the entire software development lifecycle
keep regular discussions with Customers and especially the future users and key-users
make sure you talk to the Persons understanding the problem domain - e.g. specialists in the field
Take small notes during the talks
After each conversation, write an official requirements list and present it for approval. Later on, it will be difficult to argue against documentation everyone agreed on
make sure your Customers know approximately what the expenses in time and money are for implementing "nice to have" requirements
make sure you label the requirements as "must have", "should have" and "nice to have" from the very beginning, and ensure Customers understand the differences between those types
integrate all documents into the latest and final requirements analysis (or the current one for the iteration or whatever agile programming cycle you are using ... )
remember that requirements change over the software life cycle, so gathering is one thing, but managing and implementing them is another
KISS - keep it as simple as possible
study also the environment where the future system will live - there are more and more technological constraints from legacy or surrounding systems, since companies prefer not to throw away money they have invested over decades, even if to our modern minds 20-year-old code is garbage...
Like most stages of the software development process, requirements gathering works best when done iteratively.
First find out who your users are -- the XYZ dept,
Then find out where they fit into the organisation -- part of Z division,
Then find out what they do in general terms -- manage cash
Then in specific terms -- collect cash from tills, and check for till fraud.
Then you can start talking to them.
Ask what problem they want you to solve -- you will get an answer like "write a bamboozling system using OCR with shark technologies".
Ignore that answer and ask some more questions to find out what the real problem is -- they can't read the till slips to reconcile the cash.
Agree a real solution with the users -- get a better ink ribbon supplier, or connect the electronic tills to the network and upload the logs to a central server.
Then agree in detail how they will measure the success of the project.
Then and only then propose and agree a detailed set of requirements.
I would suggest you read Roger Pressman's Software Engineering: A Practitioner's Approach.
Before you go talking to the stakeholders/users/anyone, be sure you will be able to record the gathered information in a useful and lasting way.
Use a sound-recorder if it is OK with the other person and the information is bulky.
If you hear something important and need a reasonable amount of time to write it down, you have two choices: ask the other person to wait a second, or say goodbye to that precious information. You won't remember it correctly; ask any neuroscientist.
If you detect that a point needs deeper review, or that you need some document you just heard of, make a commitment with the other person to be sent that document or to schedule another meeting with a more specific purpose. Never say "I'll remember to ask for that xls file", because in most cases you won't.
Not too long after the meeting, summarize all your notes, recordings and fresh thoughts. Just summarize it right. Create effective reminders for the commitments.
Again, just after the meeting is the perfect time to realise that the gathering you just did was not as complete as you thought at the end of the meeting. That's when you will be able to write down a lot of meaningful questions for another meeting.
I know the question was asked from a pre-meeting perspective, but please be aware that working on these matters will leave you with a much more useful, complete and higher-quality gathering.
I've been using mind mapping (like a work breakdown structure) to help gather requirements and define the unknowns (the #1 project killer). Start at a high level and work your way down. You need to work with the sponsors, users and development team to ensure you get all the angles and don't miss anything. You can't be expected to know the entire scope of what they want without their involvement...you - as a project manager/BA - need to get them involved (most important part of the job).
There are some great ideas here already. Here are some requirements gathering principles that I always like to keep in mind:
Know the difference between the user and the customer.
The business owners that approve the shiny project are usually the customers. However, a devastating mistake is the tendency to confuse them with the user. The customer is usually the person that recognizes the need for your product, but the user is the person that will actually be using the solution (and will most likely complain later about a requirement your product did not meet).
Go to more than one person
Because we’re all human, and we tend to not remember every excruciating detail. You increase your likelihood of finding missed requirements as you talk to more people and cross-check.
Avoid specials
When a user asks for something very specific, be wary. Always question the biases and see if this will really make your product better.
Prototype
Don’t wait till launch to show what you have to the user. Do frequent prototypes (you can even call them beta versions) and get constant feedback throughout the development process. You’ll probably find more requirements as you do this.
I recently started using the concepts, standards and templates defined by the International Institute of Business Analysis (IIBA).
They have a pretty good BOK (Body of Knowledge) that can be downloaded from their website. They also offer a certification.
Requirements engineering is a bit of an art; there are lots of different ways to go about it, and you really have to tailor it to your project and the stakeholders involved. A good place to start is with Software Requirements by Karl Wiegers:
http://www.amazon.com/Software-Requirements-Second-Pro-Best-Practices/dp/0735618798/ref=pd_bbs_sr_2?ie=UTF8&s=books&qid=1234910330&sr=8-2
and a requirements engineering process which may consist of a number of steps e.g.:
Elicitation - for the basis for discussion with the business
Analysis and Description - a technical description for the purpose of the developers
Elaboration, Clarification, Verification and Negotiation - further refinement of the requirements
Also, there are a number of ways of documenting the requirements (use cases, prototypes, specifications, modelling languages). Each has its advantages and disadvantages. For example, prototypes are very good for eliciting ideas from the business and discussing them.
I generally find that writing a set of use cases and including wireframe prototypes works well to identify an initial set of requirements. From that point it's a continual process of working with technical people and business people to further clarify and elaborate on the requirements. Keeping track of what was initially agreed and tracking additional requirements are essential to avoid scope creep. Negotiation also plays a big part here between the various parties, as per the Broken Iron Triangle (http://www.ambysoft.com/essays/brokenTriangle.html).
IMO the most important first step is to set up a dictionary of domain-specific words. When your client says "order", what do they mean? Something they receive from their customers or something they send to their suppliers? Or maybe both?
Find the keywords in the stakeholders' business, and let them explain those words until you comprehend their meaning in the process. Without that, you will have a hard time trying to understand the requirements.
I wrote a blog article about the approach I use:
http://pm4web.blogspot.com/2008/10/needs-analysis-for-business-websites.html
Basically: questions to ask your client before building their website.
I should add that this questionnaire sheet is only geared towards basic website builds - like a business web presence. It's a totally different story if you are talking about web-based software, although some of it is still relevant (e.g. questions relating to look and feel).
LM
I prefer to keep my requirements gathering process as simple, direct and thorough as possible. You can download a sample document that I use as a template for my projects at this blog posting: http://allthingscs.blogspot.com/2011/03/documenting-software-architectural.html
