Is the Ada programming language still relevant in the military?

I know a lot of programming languages now. Back when I was 18 I almost joined the US Air Force, and there was a test on Ada. That was over a decade ago. Is the Ada programming language still as relevant in the military as it once was?
I'm wondering whether new military software projects are still programmed in Ada as their go-to language.

There are still new projects being developed in Ada today. The mandate requiring Ada was scrapped years ago, but for some applications, Ada is the only reliable ("trusted") option.
Ada the Language: Alive and in Flight - October 10, 2016. Excerpt:
The Changing Context for DOD Software Development
For nearly two decades, the Ada programming language has been a cornerstone of efforts by the Department of Defense (DOD) to improve its software engineering practices. DOD created Ada in the 1970s to serve as a department-wide standard that would satisfy its special requirements for embedded and mission-critical software, and would also encourage good software engineering. Both the new language and the new software engineering ideas associated with it met with some criticism, and both have evolved as a result. Today, Ada is the most commonly used language for mission-critical defense software, which includes weapon systems and performance-critical command, control, communications, and intelligence (C3I) systems. DOD's inventory contains nearly 50 million lines of Ada code in these applications (Hook et al., 1995). Given the long operational life of such systems, DOD has made a significant investment in Ada technology. Ada is the second most commonly used language (after Cobol) for DOD automated information systems, which include payroll and logistics programs. The DOD inventory contains more than 8 million lines of Ada code in these applications (Hook et al., 1995).

In November 2016 the US National Institute of Standards and Technology (NIST) published the report NIST-IR-8151 "Dramatically Reducing Software Vulnerabilities". The report is available at https://doi.org/10.6028/NIST.IR.8151.
The following is an excerpt from that report:
Two presentations at the Software Measures and Metrics to Reduce Security Vulnerabilities (SwMM-RSV) workshop, Andrew Walenstein's "Measuring Software Analyzability" and James Kupsch's "Dealing with Code that is Opaque to Static Analysis," point the direction to new software measures. Both stressed that code should be amenable to automatic analysis. Both presented approaches to define what it means that code is readily analyzed, why analyzability contributes to reduced vulnerabilities and how analyzability could be measured and increased.
There are subsets of programming languages that are designed to be analyzable, such as SPARK, or to be less error-prone, such as Les Hatton's SaferC. Workshop participants generally favored using better languages, for example, functional languages, such as F# or ML. However, there was no particular suggestion of the language, or languages, of the future.
We note that with few exceptions, such as Ada 2012 [Barnes13], which has SPARK, new languages have poor tool support. Supporting the construction of tools is vital for the adoption and safe use of new languages.

Yes. Ada is used where a software bug in a mission-critical device can cause a major disaster (avionics, air traffic control and, of course, the military), so it is still used in those industries, and I doubt that will change.

Related

SIGNAL vs Esterel vs Lustre

I'm very interested in dataflow and concurrency focused languages. I've read up on the subject and repeatedly I see SIGNAL, Esterel, and Lustre mentioned; so I take it they're prominent players in those fields. However, many of their links in the resources I found are dead and they don't seem very accessible. I managed to find a couple compilers I can compile from source (Polychrony Toolset for SIGNAL and the Columbia Compiler for Esterel) but they've both had issues when trying to compile with cmake. Even textbooks teaching these languages have been tough to come by.
With the background out of the way, my actual questions are: is anyone really familiar with this field of programming? Are these languages still big deals, or have they "died out" by now? Could it be they're just available to big companies with a hefty price tag, so the average programmer wouldn't really be able to pick those languages up?
I ran into a couple other dataflow/concurrent paradigm languages, such as Oz or E, but they seemed to be mostly for education and not suitable for real world projects. Not to say they aren't impressive languages, but their implementation was limited and it would be unlikely to see them in production contexts. Does anyone know of other languages in this field they can recommend that are actually accessible (have good documentation, tutorials, and an installable compiler to actually code in)? Or can anyone clarify a language such as Oz or E and hopefully show that they indeed are good enough for large real world projects?
None of the languages you mentioned is widespread. This means their compilers and runtimes have bugs, the communities are small and can give little help, and linking with general-purpose libraries can be problematic.
I recommend using an actively supported general-purpose language such as Java, Scala, Kotlin or C++. They all have libraries to support asynchronous computation, and dataflow is no more than support for asynchronous procedure calls. You can even develop your own dataflow library. This is not that hard: I wrote a dataflow library for Java which is only 40 kilobytes of source code.
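The point that dataflow reduces to asynchronous procedure calls is easy to see in code. Here is a minimal sketch using Python's asyncio (the answer above describes a Java library; this is not that library, just an illustration of the pattern, with made-up node names):

```python
# A minimal "dataflow as asynchronous calls" sketch: each node is an async
# generator, and wiring nodes together is ordinary function composition.
import asyncio


async def source(values):
    # Produce input tokens for the dataflow graph.
    for v in values:
        yield v


async def double(stream):
    # A "node" that transforms each token as it arrives.
    async for v in stream:
        await asyncio.sleep(0)      # yield control, as real I/O would
        yield v * 2


async def sink(stream):
    # Terminal node: collect the results.
    return [v async for v in stream]


async def main():
    pipeline = double(source(range(5)))
    print(await sink(pipeline))     # [0, 2, 4, 6, 8]


asyncio.run(main())
```

Each node only runs when data arrives, which is exactly the behavior a dedicated dataflow library packages up more conveniently.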
Have you tried Céu? It is a recent variant of Esterel, and compiles to C. It is simple to understand, and provides a reactive and concurrent structuring of control flow. Native C calls can be made by just prefixing them with an underscore ("_printf").
http://ceu-lang.org
Also, see the paper "Structured Synchronous Reactive Programming with Céu" for a nice overview.
http://www.ceu-lang.org/chico/ceu_mod15_pre.pdf
These academic languages have mostly disappeared as such and live on in industrial tools:
Esterel and Lustre are the basis of Ansys' SCADE.
Signal is used in 3DS' ControlBuild.
Esterel was used in Synopsys' ConcentricStudio.
Researchers also use Heptagon for studies of synchronous languages: code generation, formal methods, new concepts.

Distribution in Ada

I'm searching for some good books/tutorials/guides on how to develop distributed applications using Ada.
I already have some books on Ada programming, but none of them covers distribution, or they mention it only very briefly.
The ideal thing would be a book/guide that focuses on the practical side of things (implementation), but any resource, either free or commercial, is appreciated.
The "Burns & Welling" book covers concurrency in depth, but doesn't have as much to say about distributed systems as I would expect. Nevertheless it is probably essential reading if you're going to be doing a lot of this stuff.
I'm still reading Professor McCormick's book "Building Parallel, Real-time and embedded Applications with Ada" and it does an excellent job of getting a reader started with a wide range of application-oriented aspects of Ada - sadly missing in other books which focus o the base language - and that includes both the DSA (pure Ada) and PolyOrb (for mixed languages) approaches to distributed systems, including very readable code examples.
Start with this latter book (IMO). (and its lead author has been seen around these parts, so this is a good place to ask questions! :-)
Section 8 in the "PolyORB User's Guide" is a small tutorial on how to develop a distributed application in Ada using the Distributed Systems Annex (DSA).
The "PolyORB User's Guide" also contains examples of developing distributed applications using other constructs than the DSA, which might be of interest, but using the DSA is likely to give you the most elegant application if all the components primarily are in Ada.

Machine Learning and Natural Language Processing [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 8 years ago.
Assume you know a student who wants to study Machine Learning and Natural Language Processing.
What specific computer science subjects should they focus on and which programming languages are specifically designed to solve these types of problems?
I am not looking for your favorite subjects and tools, but rather industry standards.
Example: I'm guessing that knowing Prolog and Matlab might help them. They also might want to study Discrete Structures*, Calculus, and Statistics.
*Graphs and trees. Functions: properties, recursive definitions, solving recurrences. Relations: properties, equivalence, partial order. Proof techniques, inductive proof. Counting techniques and discrete probability. Logic: propositional calculus, first-order predicate calculus. Formal reasoning: natural deduction, resolution. Applications to program correctness and automatic reasoning. Introduction to algebraic structures in computing.
This related stackoverflow question has some nice answers: What are good starting points for someone interested in natural language processing?
This is a very big field. The prerequisites mostly consist of probability/statistics, linear algebra, and basic computer science, although Natural Language Processing requires a more intensive computer science background to start with (frequently covering some basic AI). Regarding specific languages: Lisp was created "as an afterthought" for doing AI research, while Prolog (with its roots in formal logic) is especially aimed at Natural Language Processing, and many courses will use Prolog, Scheme, Matlab, R, or another functional language (e.g. OCaml is used for this course at Cornell), as they are very suited to this kind of analysis.
Here are some more specific pointers:
For Machine Learning, Stanford CS 229: Machine Learning is great: it includes everything, including full videos of the lectures (also up on iTunes), course notes, problem sets, etc., and it was very well taught by Andrew Ng.
Note the prerequisites:
Students are expected to have the following background: Knowledge of basic computer science principles and skills, at a level sufficient to write a reasonably non-trivial computer program. Familiarity with the basic probability theory. Familiarity with the basic linear algebra.
The course uses Matlab and/or Octave. It also recommends the following readings (although the course notes themselves are very complete):
Christopher Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
Richard Duda, Peter Hart and David Stork, Pattern Classification, 2nd ed. John Wiley & Sons, 2001.
Tom Mitchell, Machine Learning. McGraw-Hill, 1997.
Richard Sutton and Andrew Barto, Reinforcement Learning: An introduction. MIT Press, 1998
For Natural Language Processing, the NLP group at Stanford provides many good resources. The introductory course Stanford CS 224: Natural Language Processing includes all the lectures online and has the following prerequisites:
Adequate experience with programming and formal structures. Programming projects will be written in Java 1.5, so knowledge of Java (or a willingness to learn on your own) is required. Knowledge of standard concepts in artificial intelligence and/or computational linguistics. Basic familiarity with logic, vector spaces, and probability.
Some recommended texts are:
Daniel Jurafsky and James H. Martin. 2008. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition. Second Edition. Prentice Hall.
Christopher D. Manning and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. MIT Press.
James Allen. 1995. Natural Language Understanding. Benjamin/Cummings, 2ed.
Gerald Gazdar and Chris Mellish. 1989. Natural Language Processing in Prolog. Addison-Wesley. (this is available online for free)
Frederick Jelinek. 1998. Statistical Methods for Speech Recognition. MIT Press.
The prerequisite computational linguistics course requires basic computer programming and data structures knowledge, and uses the same textbooks. The required artificial intelligence course is also available online along with all the lecture notes and uses:
S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. Second Edition
This is the standard Artificial Intelligence text and is also worth reading.
I use R for machine learning myself and really recommend it. For this, I would suggest looking at The Elements of Statistical Learning, for which the full text is available online for free. You may want to refer to the Machine Learning and Natural Language Processing views on CRAN for specific functionality.
My recommendation would be any or all (depending on their amount and area of interest) of these:
The Oxford Handbook of Computational Linguistics
Foundations of Statistical Natural Language Processing
Introduction to Information Retrieval
String algorithms, including suffix trees. Calculus and linear algebra. Various branches of statistics. Artificial intelligence optimization algorithms. Data clustering techniques... and a million other things. This is a very active field right now, depending on what you intend to do.
It doesn't really matter what language you choose to operate in. Python, for instance, has NLTK, which is a pretty nice free package for tinkering with computational linguistics.
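To give a flavor of that kind of tinkering, here is a small, hypothetical NLTK session; it assumes NLTK is installed and can download its tokenizer and tagger models (resource names may differ slightly across NLTK versions):

```python
# Tokenize a sentence and tag parts of speech with NLTK.
import nltk

# One-time downloads of the tokenizer and tagger models.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

sentence = "Colorless green ideas sleep furiously."
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))
# Output looks roughly like [('Colorless', 'JJ'), ('green', 'JJ'),
# ('ideas', 'NNS'), ('sleep', 'VBP'), ('furiously', 'RB'), ('.', '.')];
# exact tags depend on the tagger model.
```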
I would say probability & statistics is the most important prerequisite. Gaussian Mixture Models (GMMs) and Hidden Markov Models (HMMs) in particular are very important in both machine learning and natural language processing (of course these subjects may be part of the course if it is introductory).
Then, I would say basic CS knowledge is also helpful, for example Algorithms, Formal Languages and basic Complexity theory.
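As a concrete illustration of the GMM prerequisite mentioned above, here is a hedged scikit-learn sketch on synthetic data (the library choice and the data are mine, not something the answer prescribes):

```python
# Fit a two-component Gaussian Mixture Model to synthetic 1-D data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two clusters drawn from different Gaussians.
data = np.concatenate([rng.normal(0.0, 1.0, 200),
                       rng.normal(5.0, 0.5, 200)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
print(gmm.means_.ravel())    # roughly [0.0, 5.0], in some order
print(gmm.predict([[4.8]]))  # which component a new point belongs to
```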
The Stanford CS 224: Natural Language Processing course that was already mentioned also includes videos online (in addition to the other course materials). The videos aren't linked from the course website, so many people may not notice them.
Jurafsky and Martin's Speech and Language Processing http://www.amazon.com/Speech-Language-Processing-Daniel-Jurafsky/dp/0131873210/ is very good. Unfortunately the draft second edition chapters are no longer free online now that it's been published :(
Also, if you're a decent programmer it's never too early to toy around with NLP programs. NLTK comes to mind (Python). It has a book (published by O'Reilly, I think) that you can read free online.
How about Markdown and an Introduction to Parsing Expression Grammars (PEG) posted by cletus on his site cforcoding?
ANTLR seems like a good place to start for natural language processing. I'm no expert though.
Broad question, but I certainly think that a knowledge of finite state automata and hidden Markov models would be useful. That requires knowledge of statistical learning, Bayesian parameter estimation, and entropy.
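To make the HMM mention concrete, here is a toy Viterbi decoder in plain Python; all the states and probabilities are invented for illustration, not taken from any real tagger:

```python
# Toy HMM decoding: find the most likely tag sequence for a short sentence.
states = ["NOUN", "VERB"]
start_p = {"NOUN": 0.6, "VERB": 0.4}
trans_p = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
           "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit_p = {"NOUN": {"fish": 0.5, "swim": 0.1, "fast": 0.4},
          "VERB": {"fish": 0.2, "swim": 0.6, "fast": 0.2}}


def viterbi(observations):
    # prob[t][s] = best probability of any path ending in state s at time t
    prob = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        prob.append({})
        back.append({})
        for s in states:
            best_prev = max(states, key=lambda p: prob[t - 1][p] * trans_p[p][s])
            prob[t][s] = (prob[t - 1][best_prev] * trans_p[best_prev][s]
                          * emit_p[s][observations[t]])
            back[t][s] = best_prev
    # Walk backwards from the best final state.
    last = max(states, key=lambda s: prob[-1][s])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path


print(viterbi(["fish", "swim", "fast"]))  # ['NOUN', 'VERB', 'NOUN']
```

Real taggers estimate the transition and emission tables from annotated corpora (that is where the Bayesian parameter estimation comes in), but the decoding step is exactly this.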
Latent semantic indexing is a commonly used and relatively recent tool in many machine learning problems. Some of the methods are rather easy to understand. There are a bunch of potential basic projects:
Find co-occurrences in text corpora for document/paragraph/sentence clustering.
Classify the mood of a text corpus.
Automatically annotate or summarize a document.
Find relationships among separate documents to automatically generate a "graph" among the documents.
EDIT: Nonnegative matrix factorization (NMF) is a tool that has grown considerably in popularity due to its simplicity and effectiveness. It's easy to understand. I currently research the use of NMF for music information retrieval; NMF has been shown to be useful for latent semantic indexing of text corpora as well. Here is one paper (PDF).
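For readers who want to try the latent-semantic projects listed above, here is a hedged sketch of NMF-based topic extraction with scikit-learn on a made-up four-document corpus (the library choice and corpus are mine, not the answer's):

```python
# NMF on a tiny TF-IDF term-document matrix: W gives document-topic weights,
# H gives topic-term weights; the top terms per topic suggest "themes".
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the guitar solo was loud and fast",
    "drums and guitar drive the song",
    "the court ruled on the appeal",
    "the judge delayed the appeal hearing",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)            # documents x terms

nmf = NMF(n_components=2, random_state=0)
W = nmf.fit_transform(X)                 # document-topic weights
H = nmf.components_                      # topic-term weights

terms = tfidf.get_feature_names_out()    # get_feature_names() on older versions
for k, row in enumerate(H):
    top = [terms[i] for i in row.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")           # e.g. music words vs. legal words
```

The same W matrix can be fed to a clustering algorithm for the document-clustering and document-graph projects above.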
Prolog will only help them academically; it is also limited to logic-constraint and semantics-based NLP work. Prolog is not yet an industry-friendly language, so it is not yet practical in the real world. And Matlab is also an academic tool; unless they are doing a lot of scientific or quantitative work they won't really have much need for it.
To start off, they might want to pick up the 'Norvig' book and enter the world of AI to get a grounding in all the areas: some basic probability, statistics, databases, operating systems, data structures, and most likely an understanding of and experience with a programming language. They need to be able to prove to themselves why AI techniques work and where they don't. Then they can look at specific areas like machine learning and NLP in further detail. In fact, the Norvig book lists references after every chapter, so they already have a lot of further reading available. There is plenty of reference material available for them on the internet: books and journal papers for guidance.
Don't just read the book; try to build tools in a programming language, then extrapolate 'meaningful' results. Did the learning algorithm actually learn as expected? If it didn't, why was that the case, and how could it be fixed?

Evolutionary vs throwaway prototyping [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 6 years ago.
Who is winning in the "low vs. high fidelity prototyping" debate?
Should prototype zero (P0) be the first version of the final product? Or should P0 always be a throwaway? What approach is the industry favoring?
Excellent article from Wikipedia: Software prototyping
A prototype should always be a throwaway - a prototype is used to quickly prove a concept and influence the design of the real product. As such, a lot of things which are important for a real product (a thought-out architecture and design, reliability, security, maintainability, etc.) fall by the wayside. If you do take these things into account when building your prototype, you're not really building a prototype anymore.
My experience with prototypes where the code directly evolved into an actual product shows that the end result suffers because of it - the lack of a real architecture resulted in a lot of cobbled-together code that had to be constantly hacked to add new features. I've even seen a case where the original technology chosen for rapid development of the prototype was not the best choice for the actual product, and a complete rewrite was necessary for V2.
I think we, the pedants, have lost this particular battle -- alleged "prototypes" (which by definition should be rewritten from scratch!!!-) are in fact being "evolved" into (often half-baked) "betas", etc.
Even today, I've applauded a smart attempt by a colleague of mine to recapture the concept, even if the term is a lost battle: he's setting up a way for small proof-of-concept projects to be developed (and, if the concept does get proven, transferred to software engineers for real prototyping, then development).
The idea is that, in our department, we have many people who aren't (and aren't in fact supposed to be!-) software developers, but are very smart, computer savvy, and in daily contact with the reality "in the trenches" -- they are the ones who are most likely to smell an opportunity for some potential innovation which could have real impact once implemented as a "production-ready" software project. Salespeople, account managers, business analysts, technology managers -- at our company, they all often fit this description.
But they're NOT going to program in C++, hardly at all in Java, maybe in Python but miles away from "productionized" -- indeed they're far more likely to whip up a smart proof of concept in php, javascript, perl, bash, Excel+VBA, and sundry other "quick and dirty" technologies we don't even want to dream about productionizing and supporting forevermore!-)
So by calling their prototypes "proofs of concept", we hope to encourage them to embody their daring concepts in concrete form (vague natural-language blabberings and much waving of hands being least useful, and alien to the company's culture anyway;-) and yet sharply indicate that such projects, if promoted to exist among the software engineers' goals and priorities, DO have to be programmed from scratch -- the proof-of-concept serves, at best, as a good draft/sketch spec for what the engineers are aiming for, definitely NOT to be incrementally enriched, but redone from the root up!-).
It's early to say how well this idea works -- ask me in three months, when we evaluate the quarter's endeavors (right now, we're just providing a blueprint for them, hot on the heels of evaluating last quarter's department- and company-wise undertakings!-).
Write the prototype, then keep refactoring it until it becomes the product.
The key is to not hesitate to refactor when necessary.
It helps to have few people working on it initially. With too many people working on something, refactoring becomes more difficult.
Response from BUNDALLAH, HAMISI
A prototype typically simulates only a few aspects of the features of the eventual program, and may be completely different from the eventual implementation.
Contrary to what my other colleagues have suggested above, I would NOT advise my boss to opt for the throwaway prototype model. I am with Anita on this. Given the two prototype models and the circumstances provided, I would strongly advise the management (my boss) to opt for the evolutionary prototype model. With the company being large, and given all the other variables such as the complexity of the code and the newness of the programming language to be used, I would not use the throwaway prototype model.
The throwaway prototype becomes the starting point from which users can re-examine their expectations and clarify their requirements. When this has been achieved, the prototype is 'thrown away', and the system is formally developed based on the identified requirements (Crinnion, 1991). But in this situation, the users may not know all the requirements at once due to the complexity of the factors given.
Evolutionary prototyping is the process of developing a computer system by a process of gradual refinement. Each refinement of the system contains a system specification and software development phase. In contrast to both the traditional waterfall approach and incremental prototyping, which require everyone to get everything right the first time, this approach allows participants to reflect on lessons learned from the previous cycle(s). It is usual to go through three such cycles of gradual refinement, but there is nothing stopping a process of continual evolution, which is often the case in many systems. According to Davis (1992), evolutionary prototyping acknowledges that we do not understand all the requirements (as we have been told above that the system is complex, the company is large, the code will be complex, and the language is fairly new to the programming team).
The main goal when using evolutionary prototyping is to build a very robust prototype in a structured manner and constantly refine it. The reason for this is that the evolutionary prototype, when built, forms the heart of the new system, and the improvements and further requirements will be built on it. This technique allows the development team to add features, or make changes, that couldn't be conceived during the requirements and design phase. For a system to be useful, it must evolve through use in its intended operational environment. A product is never "done"; it is always maturing as the usage environment changes. Developers often try to define a system using their most familiar frame of reference - where they are currently (or rather, the current system status). They make assumptions about the way business will be conducted and the technology base on which the business will be implemented. A plan is enacted to develop the capability, and, sooner or later, something resembling the envisioned system is delivered (SPC, 1997).
Evolutionary Prototypes have an advantage over Throwaway Prototypes in that they are functional systems. Although they may not have all the features the users have planned, they may be used on an interim basis until the final system is delivered.
In Evolutionary Prototyping, developers can focus themselves to develop parts of the system that they understand instead of working on developing a whole system. To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest. (Bersoff and Davis, 1991).
However, the main problems with evolutionary prototyping are due to poor management: Lack of defined milestones, lack of achievement - always putting off what would be in the present prototype until the next one, lack of proper evaluation, lack of clarity between a prototype and an implemented system, lack of continued commitment from users. This process requires a greater degree of sustained commitment from users for a longer time span than traditionally required. Users must be constantly informed as to what is going on and be completely aware of the expectations of the 'prototypes'.
References
Bersoff, E., Davis, A. (1991). Impacts of Life Cycle Models of Software Configuration Management. Comm. ACM.
Crinnion, J.(1991). Evolutionary Systems Development, a practical guide to the use of prototyping within a structured systems methodology. Plenum Press, New York.
Davis, A. (1992). Operational Prototyping: A new Development Approach. IEEE Software.
Software Productivity Consortium (SPC). (1997). Evolutionary Rapid Development. SPC document SPC-97057-CMC, version 01.00.04.

What are some good computer science resources for a blind programmer?

I'm a totally blind individual who would like to learn more of the theory side of computer science. I've had an intro data structures class and the general intro programming classes, but would like to learn more about things such as software design, advanced data structures, and compiler design. I want to do this as a self-study course, not as part of college classes.
Unfortunately there aren't many textbooks available on computer science from Recordings for the Blind and Dyslexic, where I normally get my textbooks. I would appreciate any electronic resources, preferably free, that could help me get more of a computer science education, rather than the newest language or platform that a lot of programming sites appear to focus on.
You might find the Experiences of a Blind Computer Scientist a good read.
MIT's Open Courseware would be a good resource for you with the amount of videos/audio they have.
Really though, for the core computer-science topics I find it pretty hard to beat some of the better textbooks out there. Some offer digital versions of their book with purchase and some don't. For those that don't, I would just purchase the book and then download a digital e-book equivalent via a torrent site. Since you already own the book, I don't think this would be a major problem.
UC Berkeley has a couple of computer science courses online for free as mp3 and video files (including an RSS feed for each course). And if reading PDF files isn't an issue, you could check out O'Reilly's Safari.
The textbook for Structure and Interpretation of Computer Programs appears to be accessible. Software Engineering Radio is a good podcast that I listen to, but recently it has focused a lot on model-driven development and UML, which doesn't interest me.
The UC Berkeley lectures are of varying quality; as with all other college classes, it depends on the professor. I've found I can follow along with the CS 162 lectures fine, but not so much with CS 61B. Part of this is because of the professor, and part is probably because 61B is more math-heavy, since it's a data structures class. Unfortunately the RSS feeds are useless, since the file names are meaningless. I used my podcatcher to download the entire lecture series, then used the converting capability of foobar2000 to rename the files with their track number so I could listen to them in order.
I've used Safari at work before, and it is accessible, although too expensive for me to get a yearly subscription. Open Courseware appears to have a lot of good stuff. Unfortunately I don't use iTunes, so instead of downloading each mp3 file individually I used the Firefox extension DownThemAll! with a custom filter to grab all the mp3 files at once from the specific course I wanted.
Another series of books that looks useful is the data structures series by Bruno R. Preiss, several of which are available online at
http://www.brpreiss.com/books/opus5/
Some of the equations are represented as graphics, but I can often tell what the general idea is from context.
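As a sketch of the renaming step described above, here is a small Python script that prefixes each lecture mp3 with its ID3 track number so the files sort (and play) in order. It assumes the files actually carry track-number tags and that the mutagen package is installed; the folder name is hypothetical:

```python
# Prefix each mp3 in a folder with its zero-padded ID3 track number.
# Assumes every file has an ID3 "tracknumber" tag (EasyID3 raises otherwise).
import os
from mutagen.easyid3 import EasyID3

folder = "cs162_lectures"   # hypothetical download directory
for name in os.listdir(folder):
    if not name.lower().endswith(".mp3"):
        continue
    path = os.path.join(folder, name)
    track = EasyID3(path).get("tracknumber", ["0"])[0].split("/")[0]
    new_name = f"{int(track):02d} - {name}"
    os.rename(path, os.path.join(folder, new_name))
```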
I wonder whether the Structure and Interpretation of Computer Programs video lectures by Hal Abelson and Gerald Jay Sussman would be of any use.
If the audio content is enough on its own without the video, they are an excellent digital resource.
The podcast "software engineering radio" is excellent. Though not CS courseware, it is the most academic and intellectually stimulating podcast I have found about software development and computer science.
http://www.se-radio.net/
Personally, I am just blown away by the questioner. I mean, the challenge of programming alone is too much for most people, but doing it without the primary sense used in the task is amazing to me. What is ironic, though, is I bet that given this challenge the questioner is still FAR more adept at most CS tasks than the people I work with day to day. Just saying.
I'm also a totally blind programmer, currently working for Microsoft. The most valuable resource for technical books is Safari (safari.oreilly.com). You can read thousands of computer science texts there. If you're in the USA, you can also get many of those titles for free from BookShare (www.bookshare.org). In both cases graphical images will be an issue, but there's no easy solution for that. Most good books have enough descriptive text that one can manage without the diagrams.
I too am a new blind programmer! I only lost my vision 5 years ago. Anyway, I have been programming in Visual Basic 2008 throughout the past year. It turned out to be more accessible than I had at first suspected.
I start a Java class next semester and the required text is a free online text! It is posted below.
Introduction to Programming Using Java, Fifth Edition
http://math.hws.edu/javanotes/
Can some of you seasoned blind programmers share with us any blogs or websites where other blind programmers can be found??
Check out this Stack Overflow question about podcasts.
A language called Quorum is a lot like Python but optimized across a few more syntactic details, and the corresponding development environment is designed with the blind in mind. https://quorumlanguage.com/ This might fit especially well with the use case where most students are using Python.
A 2016 blog post about CS education (actually a response to a blog post) points to:
the program-l discussion board for blind programmers at https://www.freelists.org/list/program-l
the EPIQ conference for blind and other programmers interested in Quorum: https://quorumlanguage.com/epiq.html
Also, see other ideas in a similar question on another SO site: https://cseducators.stackexchange.com/questions/3441/teaching-a-blind-high-school-student
