I2C duty cycle significance - microcontroller

What is the significance of changing the duty cycle in the I2C protocol? The feature is available in most advanced microcontrollers.

The duty cycle is significant because different I²C modes specify slightly different duty-cycle (t_HIGH:t_LOW) requirements.
Check the I²C Specification v5 Table 10, pg. 48.
Mode           | t_HIGH (min) | t_LOW (min) | ratio
---------------+--------------+-------------+------
Standard-mode  | 4.0 µs       | 4.7 µs      | 0.85
Fast-mode      | 0.6 µs       | 1.3 µs      | 0.46
Fast-mode Plus | 0.26 µs      | 0.5 µs      | 0.52
Your controller would need to decide on one ratio in order to be within the I²C specification.
So for instance, if the controller is using the standard mode timing ratio, this would prevent you from achieving fast mode timings with maximum clock frequency.
These are the ratios as defined in the standard for minimal t_HIGH:t_LOW. However, notice that the 100 kHz period is 10 us, but t_HIGH + t_LOW from the table is less than 10 us. Thus, the ratio of the actual values can vary as long as the t_HIGH and t_LOW minimum timings are met.
The point of these ratios is to illustrate that I²C timing constraints are different between I²C modes. They aren't mandatory ratios that controllers need to keep.
For example, 4 us high, 6 us low would be a 0.67 ratio, yet Standard-mode timings would be met.
STM32F4 example:
The STM32F4xx series only supports 100 kHz and 400 kHz communication speeds (RM0090, rev 5, pg. 818, Section 27.2).
I don't know where your ratios come from, but the reference manual states (RM0090, rev 5, pg. 849, Section 27.6.8) a 1:1 ratio for standard mode, and 1:2 or 9:16 ratio for fast mode.
So for instance, to achieve the highest standard mode clock frequency of 100 kHz, t_HIGH and t_LOW need to be programmed for 5 us, because the ratio is 1:1.
For Fast-mode, for example with a 1:2 ratio, you would need to program t_HIGH to 3.33 µs and t_LOW to 6.67 µs for 100 kHz. Yet that would not meet the timing requirements for Standard-mode.
So you cannot program the STM32F4 for Fast-mode and still keep Standard-mode timings at the highest Standard-mode frequency.
And vice versa: you cannot select Standard-mode and program 400 kHz Fast-mode, because with the default 1:1 ratio a 2.5 µs period gives t_LOW = 1.25 µs, which is below the 1.3 µs Fast-mode minimum.
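As a rough illustration of that arithmetic (a sketch only; the function names and the table of minimums below are mine, taken from the values quoted above), you can split the SCL period according to the controller's high:low ratio and check the result against the spec minimums.
Python sketch:
def scl_timings(freq_hz, ratio_high_to_low):
    """Split one SCL period into (t_HIGH, t_LOW), in microseconds, for a given high:low ratio."""
    period_us = 1e6 / freq_hz
    t_high = period_us * ratio_high_to_low / (1.0 + ratio_high_to_low)
    return t_high, period_us - t_high

# Minimum t_HIGH / t_LOW in microseconds, from the table above.
SPEC_MIN = {"standard": (4.0, 4.7), "fast": (0.6, 1.3)}

def meets(mode, t_high, t_low):
    min_high, min_low = SPEC_MIN[mode]
    return t_high >= min_high and t_low >= min_low

# 100 kHz with the 1:1 ratio -> 5.0 us / 5.0 us, fine for Standard-mode.
print(scl_timings(100_000, 1.0), meets("standard", *scl_timings(100_000, 1.0)))
# 400 kHz with the 1:1 ratio -> 1.25 us / 1.25 us, t_LOW misses the 1.3 us Fast-mode minimum.
print(scl_timings(400_000, 1.0), meets("fast", *scl_timings(400_000, 1.0)))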

Related

How do you get rid of Hz when calculating MIPS?

I'm learning computer structure.
I have a question about MIPS, one of the ways to calculate CPU execution time.
The MIPS formula is: MIPS = clock rate / (CPI * 10^6).
And if the clock rate is 4 GHz and the CPI is 1,
I think MIPS is 4,000 Hz,
because it's 4 * 10^9 * Hz / (1 * 10^6).
I don't know if it's right to leave units of Hz.
Hz is 1/s. MIPS is actually "mega-instructions / s". To be clear, the "Per" is the slash for division: Mega Instructions / Second.
4 GHz is 4G s⁻¹. Divide that by 1 cycle per instruction... but a cycle is a period, which is the inverse of frequency.
It's not 4000 Hz MIPS, because the "PS" in MIPS already means "per second". Keeping the Hz would amount to writing 4000 million instructions * 1/s * 1/s.
You drop the Hz because the "per second" is already part of the name you are labelling the result with.
For any quantity, it's important to know what units it's in. As well as a scale factor (like a parsec is many times longer than an angstrom), units have dimensions, and this is fundamental (at least for physical quantities like time; it can get less obvious when you're counting abstract things).
Those example units are both units of length so they have the same dimensions; it's physically meaningful to add or subtract two lengths, or if we divide them then length cancels out and we have a pure ratio. (Obviously we have to take care of the scale factors, because 1 parsec / 1 angstrom isn't 1, it's 3.0856776e+26.) That is in fact why we can say a parsec is longer than an angstrom, but we can't say it's longer than a second. (It's longer than a light-second, but that's not the only possible speed that can relate time and distance.)
1 m/s is not the same thing as 1 kg, or as dimensionless 1.
Time (seconds) is a dimension, and we can treat instructions-executed as another dimension. (I'll call it I since there isn't a standard SI unit for it, AFAIK, and one could argue it's not a real physical dimension. That doesn't stop this kind of dimensional analysis from being useful, though.)
(An example of a standard count-based unit is the mole in chemistry, a count of particles. It's an SI base unit.)
Counts of clock cycles can be treated as another dimension, in which case clock frequency is cycles / sec rather than just s⁻¹. (Seconds, s, are the base SI unit of time.) If we want to make sure we're correctly cancelling it out on both sides, that's a useful approach, especially when we have quantities like cycles/instruction (CPI). Thus cycle time is s/c, seconds per cycle.
Hz has dimensions of s⁻¹, so if the "something per second" isn't dimensionless, we should not use Hz. (Clock frequencies are normally given in Hz, because "cycles" aren't a real unit in physics; "cycles" is something we're introducing to make sure everything cancels properly.)
MIPS has dimensions of instructions / time (I/s), so the factors that contribute to it must cancel out any cycle counts. And we're not calling it Hz because we're treating "instructions" as a real unit, hence 4000 MIPS, not 4000 MHz. (And MIPS is itself a unit, so it's definitely not "4000 Hz MIPS"; if it made sense to combine units that way, it would have dimensions of I/s², which would be an acceleration, not a speed.)
From your list of formulas, leaving out the factor of 10^6 (that's the M in MIPS, just a metric prefix in front of Instructions Per Second, I/s):
instructions / total time obviously works without needing any cancelling.
I / (c * s / c) = I / s after cancelling cycles in the denominator
(I * c/s) / (I * c/I) cancel the Instructions in the denominator:
(I * c/s) / c cancel the cycles:
(I * 1/s) / 1 = I/s
(c/s) / (c/I) cancel cycles:
(1/s) / (1/I) apply 1/(1/I) = I reciprocal of reciprocal
(1/s) * I = I / s
All of these have dimensions of Instructions / Second, i.e. I/s, or IPS. With a scale factor of 10^6, that's MIPS.
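Plugging in the question's numbers (4 GHz clock, CPI = 1) and carrying the units through the same way:
(4 * 10^9 c/s) / (1 c/I) = 4 * 10^9 I/s, and dividing by the 10^6 scale factor gives 4 * 10^3 = 4000 MIPS.
It is not 4000 Hz: the c/s and c/I cancelled to leave I/s, which is exactly what the "IPS" in the unit name already says.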
BTW, this is called "dimensional analysis", and in physics (and other sciences) it's a handy tool to see if a formula is sane, because both sides must have the same dimensions.
e.g. if you're trying to remember how position (or distance travelled) of an accelerating object works, d = 1/2 * a * t^2 works because acceleration is distance / time / time (e.g. m/s^2), and time-squared cancels out the s^-2, leaving just distance. If you mis-remembered something like 1/2 * a^2 * t, you can immediately see that's wrong, because you'd have dimensions of (m/s^2)^2 * s = m^2/s^3, which is not a unit of distance.
(The factor of 1/2 is not something you can check with dimensional analysis; you only get those constant factors like 1/2, pi, e, or whatever from doing the full math, e.g. taking the derivative or integral, or making geometric arguments about linear plots of velocity vs. time.)

Distribution of money following different beta-distributions

I am trying to find a methodology (or even better, the code) to do the following in NetLogo. Any help is appreciated (I could always try to rewrite the code from R or Matlab to NetLogo):
I have $5000, which I want to distribute following different beta distributions among 10 000 actors. The maximum amount an actor may receive is $1.
Basically, I am looking for a way to assign random numbers in the [0,1] interval to actors (10,000 actors), following different beta distributions, where the mean of the distributed values stays equal to 0.5. This way the purchasing power of the population (10,000 actors with a mean of 0.5 is $5000) remains equal for beta(1,1) (a uniform population) as well as, for example, beta(4,1) (a rich population).
An example with 5 actors distributing $2.50:
beta(1,1): 0.5 - 0.5 - 0.5 - 0.5 - 0.5 (mean 0.5)
beta(4,1): 0.1 - 0.2 - 0.5 - 0.7 - 1.0 (mean 0.5)
I've been thinking. If there is no apparent solution to this, maybe the following could work. We can write the shape of the frequency distribution of beta(4,1) as y=ax^2+b with some value for a and b (both increase exponentially).
In my case, the integral(0-1) of y=ax^2+b should be 5000. Playing around with values for a and b should give me the shape of beta(4,1).
The number of actors having 0.1 should then be the integral(0-0.1) of y=ax^2+b, where a and b are the parameters of the shape resembling beta(4,1).
Is my reasoning clear enough? Could someone extend this reasoning? Is there a link between the beta distribution and a function of a,b,x ?
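For illustration, here is one possible sketch of an approach (in Python rather than NetLogo, so it would need translating; the function name, the rescaling loop and the $1 clipping step are assumptions of mine, not a standard recipe): draw each actor's amount from the chosen beta distribution, then rescale so the amounts sum to $5000 (mean 0.5), capping each actor at $1 and repeating the rescale if the capping moved the total.
Python sketch:
import numpy as np

def allocate(alpha, beta, n_actors=10_000, total=5_000.0, seed=0):
    """Draw per-actor amounts in [0, 1] from Beta(alpha, beta), then rescale
    so that the amounts sum to `total` (i.e. mean 0.5), capping each at $1."""
    rng = np.random.default_rng(seed)
    x = rng.beta(alpha, beta, size=n_actors)
    for _ in range(100):                       # clipping can shift the sum, so iterate
        x = np.clip(x * total / x.sum(), 0.0, 1.0)
        if abs(x.sum() - total) < 1e-6:
            break
    return x

uniform_pop = allocate(1, 1)   # beta(1,1): uniform population
rich_pop = allocate(4, 1)      # beta(4,1): "rich" population, same total purchasing power
print(uniform_pop.sum(), rich_pop.sum())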

How do I code a cost optimization function in R?

I'm busy with a small project where a large number of samples have been taken from a manufacturing process (2700 samples of 11 items each). Specified Upper and Lower Specification Limits have been set, and items under the LSL are said to cost $3 to fix, while items above the USL are said to cost $5 to fix. The data is spread with a uniform distribution.
How would I go about deciding where to centre the process (given that the distribution would stay the same along the centre line) to minimize total cost? I know how to do it iteratively, but I'd like a more optimal way to solve this problem.
EDIT: Here is an example of the data I'm working with.
One sample would be, for instance
45.62565379
47.06496942
46.39000538
46.44387364
45.81911053
45.25935862
48.75357907
46.50918593
46.87072887
46.60195194
48.09000017
There are 2701 more samples like the one above (albeit with different values) making up my population. The population mean is 47.66 and population standard deviation is 1.425. The UCL is 48.98 and the LCL is 46.34. The USL has been set to 50 and the LSL to 45.
Currently the process is centred around the population mean, but the number of samples with means above 50 is proportionally larger than the number with means under 45, which makes the process more expensive, as it costs $5 to fix a batch above the USL and only $3 to fix one under the LSL. How do I decide where to centre the process, given that its distribution around the centre line will remain the same, in order to minimize cost?
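Since the question asks for R but no code is given, here is a hedged sketch of the general approach (written in Python; the normal model for the spread is my assumption, since the post mentions both a uniform spread and a standard deviation): express the expected fixing cost as a function of the process centre and hand it to a one-dimensional optimizer. The same structure carries over to R's optimize() function.
Python sketch:
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

LSL, USL = 45.0, 50.0           # specification limits from the question
COST_LOW, COST_HIGH = 3.0, 5.0  # $3 to fix below the LSL, $5 above the USL
SIGMA = 1.425                   # spread quoted for the population

def expected_cost(mu):
    """Expected per-item fixing cost if the process is centred at mu,
    assuming the spread around the centre line stays the same (modelled as normal)."""
    return (COST_LOW * norm.cdf(LSL, loc=mu, scale=SIGMA)
            + COST_HIGH * norm.sf(USL, loc=mu, scale=SIGMA))

res = minimize_scalar(expected_cost, bounds=(LSL, USL), method="bounded")
print(f"optimal centre ~ {res.x:.3f}, expected cost per item ~ {res.fun:.4f}")
With this symmetric model, the optimum is where the cost-weighted tail densities balance ($3 times the density at the LSL equals $5 times the density at the USL), which is why the centre lands below the midpoint of the two limits.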

Calculating new temperature of an object when air temperature changes

I'm trying to calculate the new temperature of an object when the air temperature around it changes, given a period of time.
Basically I get periodic readings from an air temperature sensor in a refrigerator. In some cases these readings are every 5 minutes, in others every 1 minute, so the time between readings is variable.
For each reading I get, I'd like to also calculate the approximate temperature of food at its core; something like a chicken for example (I know that part is vague, but if there is a variable I can tweak then that is fine).
The result should be a "damped" version of the actual air temperature, as obviously any objects will slowly change temperature to eventually meet the air temperature.
Initially there used to be "food simulant" put around the sensor, so the temperature would automatically be damped, but this is no longer the case.
I do not know much about thermodynamics. I'm not sure if I can just add a percentage of the temperature change to the previous damped value, or if I need a calculation based on the last few air temperature readings, or what.
I guess I'm looking for a result a bit like:
10:00 2 degrees (air), 2 degrees (product)
10:05 2.5 degrees (air), 2.1 degrees (product)
10:10 2.5 degrees (air), 2.2 degrees (product)
10:20 2.7 degrees (air), 2.5 degrees (product)
I could do something really cheap like averaging the readings over the last 30 minutes but I don't think that will cut it!
I'm not really sure this is the correct forum for this question! I'd appreciate any help - thanks very much.
EDIT: I have since found a solution by reading up on Fourier's Law. I will post the solution once I have some time. Thanks to anyone who commented.
A simple model would be to assume that the product changes temperature by a fraction of the difference between the product temperature and the air temperature.
airTemp = readAirTemp();
productTemp = productTemp + factor * (airTemp - productTemp);
If the time interval between readings changes, then you need to change the factor.
The factor also depends on which product you want to emulate.
Let's assume a factor of 0.5 at a 5 minute time interval.
Example (core temperature of a 25 degree product placed in a 5 degree refrigerator):
Time  ProductTemp  AirTemp  Calculation
 0    25           5        <start condition>
 5    15           5        ptemp = 25 + 0.5 * (5 - 25)
10    10           5        ptemp = 15 + 0.5 * (5 - 15)
15    7.5          5        ptemp = 10 + 0.5 * (5 - 10)
20    6.25         5        ptemp = 7.5 + 0.5 * (5 - 7.5)
25    5.625        5        ptemp = 6.25 + 0.5 * (5 - 6.25)
A real physics model would consider heat transfer through thermal radiation, heat conduction and convection. However, having a single variable to tweak (factor) is a good start if you want a simple yet fairly realistic model.
Edit:
This is not an exact model. If you put something really hot in the fridge (like 1000 degrees), then radiation would be the leading term in the cooling equation and temperature would decline faster. The above model should work well when the difference is small. The factor would depend on the item (the things you mentioned and also the amount of energy it takes to change the temperature of the food and the shape of the food - thin food cools faster) and its surrounding (can air flow freely around it or is the fridge full).
Calculating the factor is by no means simple. I recommend that you put a thermometer in a couple of types of food, put them in the fridge and measure in 5 minute intervals and then calculate the factor for each food type. The factor would still be inexact - a small apple cools faster than a big apple and so on.
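Since the reading interval varies between 1 and 5 minutes, one way to keep a single tweakable parameter but make it independent of the time step is to derive the factor from a time constant, in the spirit of Newton's law of cooling. A minimal Python sketch (the function name and the 10-minute time constant are placeholders to tweak, not measured values):
import math

def update_product_temp(product_temp, air_temp, dt_minutes, tau_minutes=10.0):
    """One smoothing step: the product moves toward the air temperature by a
    fraction derived from the elapsed time, so 1-minute and 5-minute readings
    behave consistently."""
    factor = 1.0 - math.exp(-dt_minutes / tau_minutes)
    return product_temp + factor * (air_temp - product_temp)

# Example: a 25 degree product placed in a 5 degree refrigerator, read every 5 minutes.
t = 25.0
for _ in range(5):
    t = update_product_temp(t, 5.0, dt_minutes=5.0)
    print(round(t, 2))
For reference, the 0.5-per-5-minutes factor used in the table above corresponds to a time constant of roughly 5 / ln 2 ≈ 7.2 minutes.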

Normalize values in the range of [0 -1]

This is a question about normalization of data that takes into account different parameters.
I have a set of articles on a website. Users rate the articles from 1 to 5 stars: 1 star marks the article 'bad', 2 stars give an 'average' rating, and 3, 4 and 5 stars rate it 'good', 'very good' and 'excellent'.
I want to normalize these ratings into the range [0 - 2]. The normalized value will represent a score, used as a factor for boosting the article up or down in the article listing. Articles with 2 or fewer stars should get a score in the range [0 - 1], so the boost factor has a negative effect. Articles with 2 or more stars should get a score in the range [1 - 2], so the boost factor has a positive effect.
So, for example, an article with 3.6 stars will get a boost factor of 1.4, which boosts the article up in the listing. An article with 1.9 stars will get a score of 0.8, which pushes it further down. An article with 2 stars will get a boost factor of 1 - no boost.
Furthermore, I want to take into account the number of votes each article has. An article with a single 3-star vote must rank worse than an article with 4 votes and a 2.8-star average (the boost factors could be 1.2 and 1.3 respectively).
If I understood you correctly, you should use a sigmoid function, which is a special case of the logistic function. Sigmoid and other logistic functions are often used in neural networks to shrink (compress or normalize) the input range of data (for example, to the [-1,1] or [0,1] range).
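For illustration, a minimal sketch of such a sigmoid mapping (Python; the function name and the steepness parameter k are mine, not part of any answer here). It pins 2 stars to a boost of exactly 1 and keeps the output strictly inside (0, 2):
import math

def sigmoid_boost(rating, k=1.0):
    """Map a 1-5 star rating to a boost factor in (0, 2), centred so that
    2 stars gives exactly 1.0 (no boost). k controls how steep the curve is."""
    return 2.0 / (1.0 + math.exp(-k * (rating - 2.0)))

for r in (1.0, 1.9, 2.0, 3.6, 5.0):
    print(r, round(sigmoid_boost(r), 2))
With k = 1 this gives roughly 0.54, 0.95, 1.0, 1.66 and 1.91 for those ratings; k would need tuning (and the number of votes folded in separately) to hit targets such as 1.4 for 3.6 stars.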
I'm not going to solve your rating system, but a general way of normalising values is this.
Java method:
public static float normalise(float inValue, float min, float max) {
    return (inValue - min) / (max - min);
}
C function:
float normalise(float inValue, float min, float max) {
    return (inValue - min) / (max - min);
}
This method lets you have negative values for both max and min. For example:
variable = normalise(-21.9, -33.33, 18.7);
Note that you can't let max and min be the same value, or let max be less than min. And inValue should be within the given range.
Write a comment if you need more details.
Based on the numbers, and a few I made up myself, I came up with these 5 points
Rating  Boost
1.0     0.5
1.9     0.8
2.0     1.0
3.6     1.4
5.0     2.0
Calculating an approximate linear regression for that, I got the formula y=0.3x+0.34.
So, you could create a conversion function
float ratingToBoost(float rating) {
    return 0.3f * rating + 0.34f;
}
Using this, you will get output that approximately fits your requirements. Sample data:
Rating  Boost
1.0     0.64
2.0     0.94
3.0     1.24
4.0     1.54
5.0     1.84
This obviously has linear growth, which might not be what you're looking for, but with only three values specified, it's hard to know exactly what kind of growth you expect. If you're not satisfied with linear growth, and you want e.g. bad articles to be punished more by a lower boosting, you could always try to come up with some more values and generate an exponential or logarithmic equation.
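If you do want to experiment with a non-linear shape, one quick way is to pick target (rating, boost) points and fit a curve to them; a small Python sketch (the chosen points and the logarithmic form are only an illustration):
import numpy as np

# Hypothetical target points: the (rating, boost) pairs mentioned above.
ratings = np.array([1.0, 1.9, 2.0, 3.6, 5.0])
boosts = np.array([0.5, 0.8, 1.0, 1.4, 2.0])

# Fit boost = a * ln(rating) + b, a simple logarithmic shape.
a, b = np.polyfit(np.log(ratings), boosts, 1)

def rating_to_boost(rating):
    return a * np.log(rating) + b

for r in ratings:
    print(r, round(float(rating_to_boost(r)), 2))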

Resources