In a SysML Block Definition Diagram, can a single "block" represent both hardware and software?

In contrast to class diagrams, it seems one can model multiple facets of a system in block diagrams, such as software, hardware, entities, etc.
Is it possible for a single block to represent both software and hardware, or would they always be split into two separate blocks? E.g. when modeling some machinery, say the machinery has a physical button, and the machinery's software also has a "Button" software class. Would they be modeled as two separate blocks or as a single block?
The same question could be asked of a database entity that is mapped to a class in an object-oriented language, where said entity also represents a real-life physical actor (e.g. a "User" software class and an actual physical user).
In the case that a single block can represent multiple facets, is there some form of notation to indicate "this block represents both hardware and software", or would this just be implied by the block having multiple, differently annotated relationships with other blocks?

A systems engineer would generally want to separate physical components from software components. The "digital twin" in software does not always represent its physical object accurately. For example, bad sensing may cause a digital twin to be an inaccurate representation of the component in reality. Imagine an autonomous vehicle's position in traffic, or a "stalling" 737 MAX.
In an information system, the digital twin for a person is different from the actual person it represents. Imagine what would happen if your doctor only treated you according to inaccurate health records. You, the actual person, should be distinguishable from your health records. Moreover, the design of software should be distinguishable from the design of a database schema so the two can vary independently.
To answer your question, there's nothing to stop you from modeling everything as one block, but, if you conflate things in a model, you can't reason about them separately. Why would you want to model a physical thing and its digital twin, or a Java class and a database table as the same SysML block?

It all depends on what you are trying to communicate. Always model and create views with a specific use for the model and views in mind.
If you are just trying to communicate the concept of a system that relates people, or a machine that has buttons, be they hardware or software/virtual, then a single block works. If you want to call attention to features of or relationships between the two possible buttons, or other inner workings of the system, then use a different block for each and create a third "system" block that "has" (aggregates/composes) those two buttons of different types, or that has a hardware subsystem and a software subsystem containing the buttons. If there are pertinent relationships between the two button types, then show those. If the physical hardware button depends on the software implementation of a button class or function, then create that dependency relationship.
Elaborate and add detail as necessary and as soon as you have sufficiently communicated the concept, stop.
If you don't have a specific idea of what you are trying to communicate, but are trying to understand where one concept ends and the next begins and how to even think about the whole mess, try it several ways and you will probably gain a better understanding of the problem and straighten out your thinking. If a combined block doesn't show what you are trying to "say" then try different combinations of blocks and relationships. When you find something useful that solves the problem at hand, stop.
Don't get bogged down in the language.

What is software physical specification and logical specification?

I understand logical specifications, which can be derived from user requirements by identifying attributes, entities, and use cases, and then depicting the software graphically in UML. But what is the physical specification of software?
Logical vs physical terminology
The terminology of logical vs. physical specification relates to the idea of an implementation-independent specification (logical) that is then refined to take implementation details and related constraints into account (physical).
This distinction can be made for any system viewpoint, such as architecture, data flows and process design. But the terms are mainly used in the context of data modeling (ERD):
the logical specification describes how data meets the business requirements. Typically, you'd describe entities, their attributes and their relationships;
the physical specification describes how a logical data model is implemented in the database, taking into consideration also technical requirements and constraints. Typically, you'd find tables, columns, primary keys, foreign keys, indexes and everything that matters for the implementation.
Remark: the term "physical" probably dates back to the times when you had to carefully design the layout of the data in storage (e.g. in COBOL you had to define the fields of a record at the byte level, and that layout was really used to physically store the data on disk; it was also very difficult to change afterwards).
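As a rough illustration of the two levels (entity and table names are invented here), the same "customer places orders" concept can be expressed as a logical model of entities and relationships, and separately as a physical schema with keys, types and indexes:

```python
# Logical model: business entities, attributes and relationships,
# with no implementation details.
from dataclasses import dataclass

@dataclass
class Customer:            # entity
    name: str              # attribute
    email: str

@dataclass
class Order:               # entity related to Customer
    customer: Customer
    total: float

# Physical model: how the same entities are implemented in a concrete
# database, adding keys, column types, and indexes.
PHYSICAL_DDL = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL,
    email       VARCHAR(254) UNIQUE
);
CREATE TABLE customer_order (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    total       NUMERIC(10, 2)
);
CREATE INDEX ix_order_customer ON customer_order(customer_id);
"""
```

Note how the physical level introduces concerns (surrogate keys, column lengths, an index on the foreign key) that have no business meaning; the two models can evolve somewhat independently.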
Purpose oriented terminology
Nowadays, specifications or models tend to be named according to their purpose. But what they are called, and whether they are independent models or successive refinements of the same model, is very dependent on the methodology. Some popular terminology:
Requirement specification / Analysis model, to express the business needs (i.e. problem space)
Design specification / model, to describe the solution (i.e. solution space)
Implementation specification / model, with all the technical details (i.e. one-to-one with the code, and therefore difficult to keep in sync).
Domain model, to express the design of business objects and business logic in a given domain, but without any application-specific design (i.e. like design model but with only elements that are of interest for the business).
UML
UML is UML and the same kind of diagrams may be used for different purposes. For example:
A use-case diagram generally represents user goals and tends to be mapped to requirements ("logical"). But use cases can also show the relationship of an autonomous device / independent component to technical actors in its environment ("physical").
A class diagram can be used to document a domain model ("logical"). But a class diagram can also document the implementation details ("physical"). See for example this article with an example of logical vs. physical class diagram.

Amazon Alexa dynamic variables for intent

I am trying to build a skill with the Alexa Skills Kit, where a user can invoke an intent by saying something like
GetFriendLocation where is {Friend}
and for Alexa to recognize the variable Friend I have to define all the possible values in a LIST_OF_Friends file. But what if I do not know all the values for Friend in advance, and still would like a best match against the values present in some service my app has access to?
Supposedly, if you stick a small dictionary into a slot (you can put in up to 50,000 samples), it becomes a "generic" slot and becomes very open to matching anything, rather than only what was given to it. In practice, I haven't had much luck with this.
It is a maxim in the field of speech recognition that the more restrictive the vocabulary, the greater the accuracy; conversely, the larger the vocabulary, the lower the accuracy.
A system like VoiceXML (used mostly for telephone prompt software) has a very strict vocabulary, and generally performs well for the domains it has been tailored for.
A system like Watson Speech to Text is completely open, but makes up for its lack of accuracy by returning a confidence level for several different interpretations of the sounds. In short, it offloads much of the NLP work to you.
Amazon has, very deliberately, chosen a middle road for Alexa. Its intent model allows more flexibility than VoiceXML, but is not as liberal as a dictation system. The result gives you pretty good options and pretty good quality.
Because of these decisions, you have a voice model where you have to declare, in advance, everything it can recognize. If you do so, you get consistent, good-quality recognition. There are ways, as others have said, to "trick" it into supporting a "generic slot". However, by doing so you are going outside Amazon's design, and consistency and quality suffer.
As far as I know, I don't think you can dynamically add utterances for intents.
But for your specific question, there is a built-in slot type called AMAZON.US_FIRST_NAME, which may be helpful.
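As a sketch of how that fits together (shown here as a Python dict; the real interaction model is JSON configured in the Alexa developer console, and the exact schema shape has varied over ASK versions), the intent from the question can reference the built-in slot type instead of a custom LIST_OF_Friends list:

```python
# Illustrative interaction-model fragment. "GetFriendLocation" and
# "Friend" come from the question; AMAZON.US_FIRST_NAME is a built-in
# slot type, so no custom value list needs to be enumerated.
interaction_model = {
    "intents": [
        {
            "intent": "GetFriendLocation",
            "slots": [
                {"name": "Friend", "type": "AMAZON.US_FIRST_NAME"},
            ],
        },
    ],
}

# Sample utterances, each bound to the intent, with the slot in braces.
sample_utterances = [
    "GetFriendLocation where is {Friend}",
    "GetFriendLocation tell me where {Friend} is",
]
```

You would still match the recognized first name against your own service's data in your skill's backend, since the built-in slot only constrains recognition, not your business logic.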

When to separate systems into multiple Dynamics CRM instances

My organization is just starting to dive into Dynamics CRM and one of the questions that has come up is when should we combine various applications into one instance and when should they be separated into multiple instances?
I know the answer to that question depends on the situation, so I'm trying to come up with a list of questions that can be asked to help determine which direction makes the most sense.
I'm having a surprisingly difficult time finding any discussion of this online, so thought I'd ask here. So, what questions do you ask when deciding whether a system/set of functionality should be in a separate instance?
Edit:
I wasn't very clear about our type of organization. I work for a City with several Departments that provide different services and serve different customers with often very different functionality required.
I'm concerned about the urge to put all of these different systems, which have different functionality and track different "customers", into one system. I fear there will be issues managing all of the various entities that apply to different systems, and ensuring that requests for changes from one set of users don't cause problems for a different set of users.
I'm sure sometimes it will make sense to combine multiple systems into one instance, but I think there may be just as many times where we don't want to put them together, so I wanted to come up with a list of questions to ask.
Some basic ones would be:
1) Do systems share common data (e.g., same customers)?
2) Do systems share common functionality?
3) Do systems collect the same kind of data?
4) Are there requirements to report on combined data from these systems?
5) Will it be easier to manage security by separating instances or through user roles?
In my experience a single instance is the norm. The benefits of a single instance are very significant in my opinion.
A few points you may want to consider:
Do you want data in silos? If so, multi-instance provides a very easy way to achieve this. However a single instance with appropriate security modelling can also achieve this.
Do you want to combine data across applications into a single business process? If so, multi-instance means you have to build an integration between instances. Single instance does not have this problem.
Do you want to use custom built features in every instance? If so, a single instance provides this straight away. Multi-instance requires separate development and deployment to every instance which may increase costs.
Have you considered licensing? I'm not a licensing expert, but I believe if you are online multi-instance will attract a higher license cost.
As a rule of thumb I would say a single instance is the default position, as it allows you to easily combine data and processes. If you want to go multi-instance, just have a good reason why, and be sure it's not something that can be provided by a single instance.

Hidden Markov Models instead of FSM in a first person shooter game

I have been working on a course project in which we implemented an FPS using FSMs, showing a top-down 2D view of the game and representing the bots and players as circles. The behaviour of the bots was deterministic. For example, if a bot's health drops below a threshold and the player is visible, the bot flees; otherwise it looks for health packs.
However, I felt that in this case the bot isn't showing much intelligence, as most of the decisions it takes are based on rules already decided by us.
What other techniques could I use that would help me implement some real intelligence in the bot? I've been looking at HMMs, and I feel they might help introduce more uncertainty into the bot's behaviour, making it more autonomous in its decisions rather than dependent on predefined rules.
What do you guys think? Any advice would be appreciated.
I don't think using a hidden Markov model would really be more autonomous. It would just be following the more opaque rules of the model rather than the explicit rules of the state machine. It's still deterministic. The only uncertainty they bring is to the observer, who doesn't have a simple ruleset to base predictions on.
That's not to say they can't be used effectively - if I recall correctly, several bots for FPS games used this sort of system to learn from players and develop their own AI.
But this does depend on exactly what you want to model with the process. AI is not really about algorithms, but about representation. If all you do is pick the same states that your current FSM has and observe an existing player's transitions, you're not likely to get a better system than having an expert put in carefully tweaked rules for an FSM.
Given that you're not going to manage to implement "some real intelligence" as that is currently considered beyond modern science, what is it you want to be able to create? Is it a system that learns from its own experiments? A system that learns by observing human subjects? One that deliberately introduces unusual choices in order to make it harder for an opponent to predict?
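To make the contrast with an FSM concrete, here is a minimal HMM sketch (states, observations, and all probabilities are invented for illustration): the bot's internal mode is a hidden state with probabilistic transitions, and the Viterbi algorithm recovers the most likely mode sequence from noisy observations, rather than following an explicit if/else rule:

```python
# Minimal HMM: the bot's mode (hidden state) evolves stochastically,
# and we infer it from observations. Numbers are illustrative only.
states = ["attack", "flee"]

start_p = {"attack": 0.6, "flee": 0.4}
trans_p = {                                  # P(next state | state)
    "attack": {"attack": 0.7, "flee": 0.3},
    "flee":   {"attack": 0.4, "flee": 0.6},
}
emit_p = {                                   # P(observation | state)
    "attack": {"player_near": 0.8, "player_far": 0.2},
    "flee":   {"player_near": 0.3, "player_far": 0.7},
}

def viterbi(obs):
    """Most likely hidden-state sequence for a list of observations."""
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[-2][p] * trans_p[p][s] * emit_p[s][o], p) for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]
```

Note that the decoding itself is still deterministic given the model, which is the answer's point: the uncertainty lives in the probabilities, not in any genuine autonomy.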

Architecture for Satellite Parts of a Larger Application

I work for a firm that provides certain types of financial consulting services in most states in the US. We currently have a fairly straightforward CRUD application that manages clients and information about assets and services we perform for each. It only concerns itself with the fundamental data points and processes that are common to all locations--the least common denominator.
Now we want to implement support for tracking disparate data points and processes that vary from state to state while preserving the core nationally-oriented system. Like this:
(diagram originally hosted on flickr.com)
The stack I'm working with is ASP.Net and SQL Server 2008. The national application is a fairly straightforward web forms thing. Its data access layer is a repository wrapper around LINQ to SQL entities and datacontext. There is little business logic beyond CRUD operations currently, but there would be more as the complexities of each state were introduced.
So, how to implement the satellite pieces...
Just start glomming on the functionality and pursue a big ball of mud
Build a series of satellite apps that re-use the data-access layer but are otherwise stand-alone
Invest (money and/or time) in a rules engine (a la Windows Workflow) and isolate the unique bits for each state as separate rule-sets
Invest (time) in a plugin framework a la MEF and implement each state's functionality as a plugin
Something else
The ideal user experience would appear as a single application that seamlessly adapts its presentation and processes to whatever location the user is working with. This is particularly useful because some users work with assets in multiple states. So there is a strike against number two.
I have no experience with MEF or WF so my question in large part is whether or not mine is even the type of problem either is intended to address. They both kinda sound like it based on the hype, but could turn out to be a square peg for a round hole.
In all cases each state introduces new data points, not just new processes, so I would imagine the data access layer would grow to accommodate the addition of new tables and columns, but I'm all for alternatives to that as well.
Edit: I tried to think of some examples I could share. One might be that in one state we submit certain legal filings involving client assets. The filing has attributes and workflow that are different from other states that may require similar filings, and the assets involved may have quite different attributes. Other states may not have comparable filings at all, still others may have a series of escalating filings that require knowledge of additional related entities unique to that state.
Start with the Strategy design pattern, which basically allows you to outline a "placeholder" to be replaced by concrete classes at runtime.
You'll have to sketch out a clear interface between the core app and the "plugins", and have each strategy implement it. Then, at runtime, when you know which state the user is working on, you can instantiate the appropriate state strategy class (perhaps using a factory method) and call the generic methods on it, e.g. something like
IStateStrategy stateStrategy = StateSelector.GetStateStrategy("TX"); //State id from db, of course...
stateStrategy.Process(nationalData);
Of course, each of these strategies should use the existing data layer, etc.
The (apparent) downside of this solution is that you'll be hardcoding the rules for each state, and you cannot transparently add new rules (or new states) without changing the code. Don't be fooled, that's not a bad thing: your business logic should be implemented in code, even if it's dependent on runtime data.
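Sketched out more fully (in Python rather than C#, with invented state rules just to show the shape), the interface, concrete strategies and factory might look like:

```python
# Strategy-pattern sketch. Each state's unique rules live behind a
# common interface; the core app only knows the interface.
from abc import ABC, abstractmethod

class IStateStrategy(ABC):
    @abstractmethod
    def process(self, national_data: dict) -> dict:
        """Apply state-specific rules to the shared national data."""

class TexasStrategy(IStateStrategy):
    def process(self, national_data):
        # TX-specific filings, attributes, workflow would go here.
        return {**national_data, "filing": "TX-specific filing"}

class OhioStrategy(IStateStrategy):
    def process(self, national_data):
        return {**national_data, "filing": "OH-specific filing"}

_STRATEGIES = {"TX": TexasStrategy, "OH": OhioStrategy}

def get_state_strategy(state_id: str) -> IStateStrategy:
    """Factory method: pick the concrete strategy at runtime."""
    return _STRATEGIES[state_id]()  # state id from the db, of course

# Usage: the core app stays state-agnostic.
strategy = get_state_strategy("TX")
result = strategy.process({"client": "Acme"})
```

Each concrete strategy would reuse the existing data-access layer; adding a state means adding one class and one registry entry, with no change to the core workflow.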
Just a thought: whatever you do, completely code 3 states first (with 2 you're still tempted to repeat identical code; with more, it's too time-consuming if you decide to change the design).
I must admit I'm completely ignorant about rules engines or WF. But wouldn't it be possible to just have one big stupid ASP.Net include file with the instructions for the states separated from the main logic, without any additional language/program?
Edit: Or is it just that each state has quite a lot of completely different functionality, not just some bits?