I am trying to draw an FSM diagram for a vending machine. The machine accepts nickels, dimes, quarters, half dollars, and dollar bills. There are 4 selections to choose from: 3 cost $1.15 and 1 costs $1.50. Change is to be given if the person inserts more than the item costs.
FSM inputs
The cash receiver produces a 3-bit encoded value indicating no-coin/nickel/dime/quarter/half-dollar/dollar.
The comparator produces a 2-bit encoded signal indicating the result of comparing its input to the "cash box" value.
The item selector produces a 3-bit encoded value representing a selection to be purchased (dispensed).
My question is: will I need a state for each dollar value that can occur? Right now I am trying that, and I have close to 50 states, and I am not even at the item-selection part of the diagram. Is there a simpler way?
As FSMs do not provide a means to handle quantitative values, you have to model these with states. This results in an explosion of the number of states. That's the reason why most reactive systems are modeled using extended state machine concepts like Harel statecharts. These allow you to use variables within a state machine, which makes things much simpler. A statechart for your case may look like this:
Find a larger version here: vending machine statechart
I hope the meaning of the state machine is self-explanatory. This statechart defines a state machine that differs from an FSM in several important aspects.
First, you can define variables (left side) to hold quantitative values like the number of coins, the price, etc.
Choices (the small rhombus) let you structure transitions and thus reduce transition complexity.
The rectangles named 'payment' and 'article supply' are parallel regions. Each region has its own active state, so the overall state is the combination of both active states. An FSM would require defining the cross product of the states of all parallel regions.
Using these mechanisms is the only approach to maintaining a meaningful number of states. By the way, the example was built using the open source Yakindu Statechart Tools (on statecharts.org). It allows you to model and interactively simulate the models as well as generate state machine code.
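To make the idea concrete, here is a minimal sketch of the same "extended state machine" principle in plain Python (the names PRICES, insert_coin, and select are illustrative, not taken from the statechart above): a single credit variable replaces the dozens of FSM states that would otherwise track every reachable amount.

    # Illustrative sketch of an extended state machine for the vending machine.
    # One integer variable 'credit' replaces ~50 explicit FSM states.

    PRICES = {"A": 115, "B": 115, "C": 115, "D": 150}  # prices in cents
    COIN_VALUES = {"nickel": 5, "dime": 10, "quarter": 25,
                   "half_dollar": 50, "dollar": 100}

    class VendingMachine:
        def __init__(self):
            self.credit = 0  # the extended-state variable

        def insert_coin(self, coin):
            self.credit += COIN_VALUES[coin]

        def select(self, item):
            price = PRICES[item]
            if self.credit < price:
                return None  # not enough money yet; stay in the collecting state
            change = self.credit - price
            self.credit = 0
            return item, change  # dispense the item and the change

    machine = VendingMachine()
    machine.insert_coin("dollar")
    machine.insert_coin("quarter")
    print(machine.select("A"))  # ('A', 10)

The state explosion disappears because the amounts live in data, while the states only capture the qualitative situation (collecting money vs. dispensing).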
I'm trying to implement my own version of the Asynchronous Advantage Actor-Critic method, but it fails to learn the Pong game. My code was mostly inspired by Arthur Juliani's and OpenAI Gym's A3C versions. The method works well for a simple Doom environment (the one used in Arthur Juliani's code), but when I try the Pong game, the method diverges to a policy where it always executes the same action (always move down, or always move up, or always executes the no-op action). My code is located in my GitHub repository.
I have already adapted my network to resemble the architecture used by OpenAI Gym's A3C version, which is:
4 convolutional layers with the same specs: 32 filters, 3x3 kernels, 2x2 strides, with padding (padding='same'). The output of the last convolutional layer is flattened and fed to an LSTM layer with an output of size 256. The initial states C and H of the LSTM layer are given as an input. The output of the LSTM layer is then separated into two streams: a fully connected layer with an output size equal to the number of actions (the policy), and another fully connected layer with only one output (the value function) (more details in Network.py of my code);
The loss function is just as described in the original A3C paper. Basically, the policy loss is the log_softmax of the linear policy output times the advantage function; the value loss is the square of the difference between the value function and the discounted rewards; and the total loss accounts for the value loss, the policy loss, and the entropy. The gradients are clipped to 40 (more details in Network.py of my code; a sketch is given after this list);
There is only one global network and several worker networks (one network per worker). Only the global network is updated, and the update is done with respect to the local gradients of each worker network. Each worker simulates the environment for BATCH_SIZE iterations, saving the state, value function, chosen action, reward received, and the LSTM state. After BATCH_SIZE (I used BATCH_SIZE = 20) iterations, each worker passes those data into the network, calculates the discounted rewards, the advantage function, the total loss, and the local gradients, and then updates the global network with those gradients. Finally, the worker's local network is synchronized with the global network (local_net = global_net). All workers do this asynchronously (for more details on this step, check the work and train methods of the Worker class inside Worker.py);
The LSTM states C and H are reset between episodes. It is also important to note that the current states C and H are kept locally by each worker;
To apply the gradients to the global network, I used the AdamOptimizer with learning rate = 1e-4.
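For reference, here is a minimal sketch of the loss wiring described above, in TensorFlow 1.x style. The names are illustrative and the network outputs are stubbed with placeholders so the loss can be read on its own; this is not the actual Network.py.

    import tensorflow as tf  # TF 1.x API, matching the era of the post

    n_actions = 6  # Pong's action space in Gym

    # In the real code these come from the conv + LSTM network;
    # they are stand-in placeholders here.
    policy_logits = tf.placeholder(tf.float32, [None, n_actions])
    value = tf.placeholder(tf.float32, [None])
    actions = tf.placeholder(tf.int32, [None])
    discounted_rewards = tf.placeholder(tf.float32, [None])
    advantages = tf.placeholder(tf.float32, [None])

    log_policy = tf.nn.log_softmax(policy_logits)
    policy = tf.nn.softmax(policy_logits)

    # log pi(a|s) for the actions actually taken
    log_pi_a = tf.reduce_sum(log_policy * tf.one_hot(actions, n_actions), axis=1)

    policy_loss = -tf.reduce_sum(log_pi_a * advantages)
    value_loss = 0.5 * tf.reduce_sum(tf.square(discounted_rewards - value))
    entropy = -tf.reduce_sum(policy * log_policy)  # exploration bonus

    # the 0.5 / 0.01 weightings follow common A3C implementations
    total_loss = policy_loss + 0.5 * value_loss - 0.01 * entropy

    # The gradients of total_loss w.r.t. the worker's variables would then be
    # clipped with tf.clip_by_global_norm(grads, 40.0) and applied to the
    # global network's variables via AdamOptimizer(1e-4).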
I have already tried different configurations for the network (trying several different convolutional layer configurations, including different activation functions), other optimizers (RMSPropOptimizer and AdadeltaOptimizer) with different parameter configurations, and different values of BATCH_SIZE. But it almost always ends up diverging to a policy where it executes only one action. I say almost because there are certain configurations where the agent maintains a policy similar to a random policy for several episodes, with no apparent improvement (I waited until 62k episodes before giving up in those cases).
Therefore, I would like to know if anyone has had success training an agent to play Pong using A3C with an LSTM layer. If so, what parameters did you use? Any help would be appreciated!
[EDIT] As I said in the comments, I managed to partially solve the problem by feeding the correct LSTM state before calculating the gradients (instead of feeding a freshly initialized LSTM state). This made the method learn reasonably well in the PongDeterministic environment. But the problem persists when I try Breakout-v0: the agent reaches a mean score of 40 in about 65k episodes, but it seems to stop learning after that (it maintained this score for some time). I have checked the OpenAI starter agent several times and I can't find any significant differences between my implementation and theirs. Any help would be extremely appreciated!
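In case it helps anyone hitting the same bug, the fix amounts to the following shape (all names here are hypothetical; only the bookkeeping of the LSTM state matters):

    # Sketch of the fix: when computing the batch gradients, feed the LSTM
    # state the network actually had at the START of the batch, not a
    # zero/initial state.
    def run_batch(worker, obs, lstm_state):
        batch_start_state = lstm_state          # snapshot before the rollout
        rollout = []
        for _ in range(worker.batch_size):
            action, value, lstm_state = worker.step(obs, lstm_state)
            obs, reward, done = worker.env_step(action)
            rollout.append((obs, action, reward, value))
        # wrong: initial_lstm_state = zero state; right: batch_start_state
        worker.train(rollout, initial_lstm_state=batch_start_state)
        return obs, lstm_state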
How can I define a topology in Castalia-3.2 for WBAN?
How can I import a topology from OMNeT++ into Castalia?
Where is the topology defined in the default WBAN scenario in Castalia?
With regards, thanks.
Topology of a network is an abstraction that shows the structure of the communication links in the network. It's an abstraction because the notion of a link is an abstraction itself. There are no "real" links in a wireless network. The communication happens in a broadcast medium, and there are many parameters that dictate whether a packet is received or not, such as the power of transmission, the path loss between transmitter and receiver, noise and interference, and also just luck. Still, the notion of a link can be useful in some circumstances, and some simulators use it to define simulation scenarios. You might be used to simulators where you can draw nodes and then simply draw lines between them to define their links. This is not how Castalia models a network.
Castalia does not model links between the nodes, it models the channel and radios to get a more realistic communication behaviour.
Topology is often confused with deployment (I confuse them myself sometimes). Deployment is just the placement of nodes on the field. There are multiple ways to define deployment in Castalia, if you wish, but it is not needed in all scenarios (more on this later). People can confuse deployment with topology, because under very simplistic assumptions certain deployments lead to certain topologies. Castalia does not make these assumptions. Study the manual (especially chapter 4) to get a better understanding of Castalia's modeling.
After you have understood the modeling in Castalia, if you still want a specific/custom topology for some reason, then you can play with some parameters to achieve your topology, at least in a statistical sense. Assuming all nodes use the same radios and the same transmission power, the path loss between nodes becomes the defining factor of the "quality" of the link between the nodes. In Castalia, you can define the path losses for each and every pair of nodes, using a pathloss map file.
SN.wirelessChannel.pathLossMapFile = "../Parameters/WirelessChannel/BANmodels/pathLossMap.txt"
This tells Castalia to use the specific path losses found in the file instead of computing path losses based on a wireless channel model. The deployment does not matter in this case. At least it does not matter for communication purposes (it might matter for other aspects of the simulation, for example if we are sampling a physical process that depends on location).
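For illustration, a pathloss map file is just plain text listing the loss in dB for each transmitter-receiver pair. Its general shape is along these lines (the node ids and dB values here are made up; check chapter 4 of the Castalia manual for the exact syntax):

    0>1:55.0,2:63.2,3:70.4
    1>0:55.8,2:58.1,3:66.9
    2>0:62.5,1:58.0,3:61.3

Each line gives the path losses from one transmitting node to the listed receiving nodes, so asymmetric links can be expressed directly.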
In our own simulations with BAN, we have defined a pathloss map based on experimental data, because other available models are not very accurate for BAN. For example, the lognormal shadowing model, which is Castalia's default, is not a good fit for BAN simulations. We did not want to enforce a specific topology, we just wanted a realistic channel model, and defining a pathloss map based on experimental data was the best way.
I have the impression though that when you say topology, you are not only referring to which nodes could communicate with which nodes, but which nodes do communicate with which nodes. This is also a matter of the layers above the radio (MAC and routing). For example it's the MAC and Routing that allow for relay nodes or not.
Note that in Castalia's current implementations of 802.15.6MAC and 802.15.4MAC, relay nodes are not allowed. So you cannot create a mesh topology with these default implementations. Only a star topology is supported. If you want something more, you'll have to implement it yourself.
If I am not wrong, a qubit can have any value from 0 to 1 at any given time. But if you are moving some data from one register to another in a quantum computer, how will we know what state will be transferred to the register?
The no-cloning theorem says it's not possible to copy quantum states (pure or mixed). If you want a copy, you have to measure the qubits, collapsing them to classical information, and then copy that - but you lose almost all the information encoded in the system and you're left with ordinary 0s and 1s.
However, it is possible to transfer a state using quantum teleportation - it destroys the original quantum state and re-creates it in another qubit using a classical information channel and a shared Bell state. But it is not exactly clear how this can be useful in a single processor, as you could just use register renaming to the same effect (with a classical computer controlling the quantum processor, you can simply start calling a certain physical qubit by some other name and achieve the same result).
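For concreteness, here is a minimal sketch of the standard teleportation circuit in Qiskit (assuming Qiskit is available; the prepared state and the register names are arbitrary):

    from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

    q = QuantumRegister(3, 'q')        # q0: state to send, q1-q2: Bell pair
    crz = ClassicalRegister(1, 'crz')  # result of measuring q0
    crx = ClassicalRegister(1, 'crx')  # result of measuring q1
    qc = QuantumCircuit(q, crz, crx)

    qc.ry(0.8, q[0])                   # prepare an arbitrary state on q0

    qc.h(q[1])                         # create the shared Bell state
    qc.cx(q[1], q[2])

    qc.cx(q[0], q[1])                  # Bell-basis measurement of q0 and q1
    qc.h(q[0])
    qc.measure(q[0], crz[0])
    qc.measure(q[1], crx[0])

    qc.x(q[2]).c_if(crx, 1)            # classically controlled corrections;
    qc.z(q[2]).c_if(crz, 1)            # q2 now holds the original state,
                                       # which was destroyed on q0 by measurement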
Is it possible to estimate the heat generated by an individual process at runtime?
Temperature readings of the processor are easily accessible, but what I need is process-specific information.
Is it possible to map information such as CPU utilization, I/O, running time, memory usage, etc. to get some kind of an estimate?
I'm going to say no, because the overall temperature of your system components isn't governed by a simple mathematical equation either, with everything that's moving and switching.
Heat generated by and inside a computer depends on many external factors, like the hardware setup, the ambient temperature of the room, possibly the age of the components, whether there is dust on them or in the fans, whether the cooling paste was correctly applied on the CPU or elsewhere, where heat sinks are present, how heat is being dissipated, etc., etc. In short, again, no.
Additionally, your computer runs a LOT of processes at any given time apart from the ones that you control (and "control" is a relative term). Even if it is possible to access certain sensor data for individual components (as you can see to some extent in the BIOS), interpolating one single process's generated temperature relative to the total is, well, impossible.
At the lowest levels (gate networks, control signalling, etc.), an external observer no longer has any means to measure what's going on, but there as well things are in a changing state: a variable amount of electricity is being used, and thus a variable amount of heat is generated.
Pertaining to your second question: that's basically what your task manager does. There are countless examples and articles on the internet on how to get that done in a plethora of programming languages.
That is, unless some of the actually smart people in this merry little community of keytappers and screengazers say that it IS actually possible, at which point I will be thoroughly amazed...
EDIT: Monitoring the processes is a first step in what you're looking for. Take a look at How to detect a process start & end using c# in windows? and be sure to follow up on duplicates like the one mentioned by Hans.
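If you just want to gather the per-process metrics from code, here is a minimal sketch in Python using the third-party psutil package (one option among the many languages mentioned above); note this gives you utilization data, not heat:

    import psutil  # third-party: pip install psutil

    # Snapshot of per-process CPU and memory usage: a rough activity proxy,
    # not a heat measurement.
    for proc in psutil.process_iter(['pid', 'name', 'cpu_percent', 'memory_info']):
        info = proc.info
        name = (info['name'] or '?')[:25]
        cpu = info['cpu_percent'] or 0.0
        mem = info['memory_info']
        rss_mb = mem.rss / (1024 * 1024) if mem else 0.0
        print(f"{info['pid']:>6}  {name:<25}  cpu={cpu:5.1f}%  mem={rss_mb:8.1f} MB")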
You could take a look at PowerTOP or some other tool that monitors power usage. I am not sure how accurate it is across different systems, but a power estimate should provide at least some relative information about the heat generated, assuming the processes you are comparing are running in similar manners on the same hardware. In reality there are just too many factors to predict power, much less heat, effectively, but you may be able to get an idea of the usage.
What is the relation between a BTS and a cell? I think one BTS can cover a few cells, and also that some cells could be covered by more than one BTS. Is that right?
Is the identity of the concrete BTS part of the information that a mobile receives from the GSM network, or does the mobile phone know only the cell id?
Is the identity of the BSC part of the information that a mobile receives from the GSM network?
Ad 1: Typically one BTS can handle several cells. Common patterns are one BTS covering a circular area with one round-radiating antenna, or a three-sector BTS which covers three cells with sector-radiating antennas. One cell can only be handled by one BTS at a time. Two or more BTSes are not possible, since their radio communication would interfere with each other. Note that this is completely different in WCDMA/UMTS, since there is no concept of cells.
Ad 2: Since one cell is covered by exactly one BTS, the cell id uniquely identifies the concrete BTS.
Ad 3: Since the BTS does not contain any control logic, the mobile communicates directly with the BSC, e.g. about radio resources.
Edit after comment:
1/ The BTS is "dumb", to put it simply. It does only what the BSC instructs it to do. E.g. the BSC tells the BTS, as well as the mobile, which frequencies to use for the radio communication. A BTS does not route traffic, as it is hooked to exactly one BSC. It does not even route traffic to one of several mobiles attached to the BTS, as this is done by the BSC. Think of the BTS as a Um-to-Abis physical layer and protocol transcoder.
2/ Actually, my earlier statement that UMTS has no cell concept is not exactly true; it's just different.
GSM is FTDMA (frequency and time division multiple access). The radio channel is shared by using different frequencies (per cell) and timeslots (per mobile). Since radio frequency is used to distinguish participants, great care must be taken that no two GSM participants use the same frequency at the same time in the same location. The solution to this is cells: geographic areas are assigned different frequencies. Network planning must ensure that no two neighbouring cells use the same frequencies, as this may lead to interference, since you cannot control exactly the size of a cell (e.g. due to absorption and reflection). In GSM, a BTS has a fixed number of radio transmission channels; the number depends on the BTS hardware configuration. If all channels are in use, the cell is full; this is independent of the location of a mobile in the cell.
UMTS is CDMA (code division multiple access). The radio channel is shared by encoding the payload in a way that allows decoding it later, even if several senders use the same frequency range. That requires coding schemes which are collision-free (all codes differ enough from each other to avoid senders using too-similar codes) and a great deal of signal processing. As an analogy: at a party you can understand someone across the room even if ten people are talking. The more senders communicate within the cell, the smaller the cell gets, in order to allow the BTS/Node-B to distinguish between senders. Therefore, in UMTS a cell's size is not geographically fixed; the cell "breathes" depending on its load.
OK, this thread is quite old, but it deserves some further clarification for future generations.
When talking about GSM physical network architecture, the term BTS (Base Transceiver System) refers to the physical site itself - the 'small house with the tower' (although modern small BTSs are just boxes hung on walls or placed on rooftops).
Each such physical site can host one omni-directional cell, or several sector cells.
In GSM logical network architecture, there is some confusion.
The terms 'Cell' and 'Base Station' actually refer to the same physical entity (a set of transceiver units, each used to receive and transmit on one of the paired UL/DL carrier frequencies allocated in the BA frequency set). Let's call this entity a 'physical cell' just for clarification.
The term Base Station is used for radio resource management. A BSIC (BS Id Code, or BTS Id Code) is allocated to the 'physical cell' and is used in the radio-related conversations between the MS (Mobile Station) and the BSS (BTS and BSC), e.g. for measurement reports.
The BSIC is composed of 'local' parameters - Network Color Code (NCC) and BS Color Code (BCC), and is therefore unknown outside the network.
This is where the term Cell comes in:
The term Cell is used for Mobility Management. A Cell Identity (CI) is defined as a refinement of the Routing Area - one RA will include several cells in it.
The Global Cell Identifier (GCI) is composed of network, RA and CI, and is used for handovers inside and outside the network.
It is up to the BSC to convert the BSIC to the Cell Identity (the BSC may convert the BSIC directly to the GCI, or the BSC converts it to the CI and the MSC converts that to the GCI).
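To make the composition of these identifiers concrete, here is a small illustrative sketch in Python (the 3-bit NCC + 3-bit BCC split of the BSIC is standard GSM; the GCI formatting below is just for readability, not the wire format):

    def pack_bsic(ncc, bcc):
        # BSIC is a 6-bit code: 3-bit Network Colour Code + 3-bit BS Colour Code
        assert 0 <= ncc < 8 and 0 <= bcc < 8
        return (ncc << 3) | bcc

    def global_cell_id(mcc, mnc, area, ci):
        # GCI = network part (MCC + MNC) + area (the RA mentioned above) + CI
        return f"{mcc}-{mnc}-{area:04X}-{ci:04X}"

    print(pack_bsic(ncc=5, bcc=2))                 # 42
    print(global_cell_id(262, 1, 0x1F2A, 0x03B7))  # 262-1-1F2A-03B7

Note how the BSIC carries only the 'local' colour codes, so it cannot identify a cell globally; the GCI adds the network and area context.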
Hope that helps a bit.
BTS means different things in different places!
When the words MS, BTS, and BSC appear together, BTS means the equipment between your phone and the MSC.
Sometimes we call a site (a small house and a tower) a BTS.
In Nokia GSM equipment, a cell is called a segment. Every cell has at least one BTS, and different BTSs can have different functions, e.g. BTS1 provides voice service and BTS2 provides EDGE service.
The phone gets the BCCH (frequency), NCC, and BCC to identify different cells, and decodes the information from the BCCH to get the CI, LAC, etc.