I'm trying to calculate the new temperature of an object when the air temperature around it changes, given a period of time.
Basically I get periodic readings from an air temperature sensor in a refrigerator. In some cases these readings are every 5 minutes, in others every 1 minute, so the time between readings is variable.
For each reading I get, I'd like to also calculate the approximate temperature of food at its core; something like a chicken for example (I know that part is vague, but if there is a variable I can tweak then that is fine).
The result should be a "damped" version of the actual air temperature, as obviously any objects will slowly change temperature to eventually meet the air temperature.
Initially there used to be "food simulant" put around the sensor, so the temperature would automatically be damped, but this is no longer the case.
I do not know much about thermodynamics. I'm not sure if I can just add a percentage of the temperature change to the previous damped value, or if I need a calculation based on the last few air temperature readings, or what.
I guess I'm looking for a result a bit like:
10:00 2 degrees (air), 2 degrees (product)
10:05 2.5 degrees (air), 2.1 degrees (product)
10:10 2.5 degrees (air), 2.2 degrees (product)
10:20 2.7 degrees (air), 2.5 degrees (product)
I could do something really cheap like averaging the readings over the last 30 minutes but I don't think that will cut it!
I'm not really sure this is the correct forum for this question! I'd appreciate any help - thanks very much.
EDIT: I have since found a solution by reading up on Fourier's Law. I will post the solution once I have some time. Thanks to anyone who commented.
A simple model would be to assume that the product changes temperature by a fraction of the difference between the product temperature and the air temperature.
airTemp = readAirTemp();
productTemp = productTemp + factor * (airTemp - productTemp);
If the time interval between readings changes, then you need to change the factor.
The factor also depends on which product you want to emulate.
Let's assume a factor of 0.5 at a 5 minute time interval.
Example (core temperature of a 25 degree product placed in a 5 degree refrigerator):
Time  ProductTemp  AirTemp  Calculation
0     25           5        <start condition>
5     15           5        ptemp = 25 + 0.5 * (5-25)
10    10           5        ptemp = 15 + 0.5 * (5-15)
15    7.5          5        ptemp = 10 + 0.5 * (5-10)
20    6.25         5        ptemp = 7.5 + 0.5 * (5-7.5)
25    5.625        5        ptemp = 6.25 + 0.5 * (5-6.25)
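If the interval between readings varies, one way to derive the factor from a single tunable time constant is the discrete form of Newton's law of cooling. The Python sketch below is only that, a sketch: tau_minutes is an assumed, food-specific constant, and its default of 7.2 minutes is chosen only because it reproduces the factor of 0.5 per 5-minute step used above (5 / ln 2 ≈ 7.2).

import math

def update_product_temp(product_temp, air_temp, dt_minutes, tau_minutes=7.2):
    # One damping step. tau_minutes is an assumed, food-specific time constant;
    # with a constant interval this reduces to the fixed-factor model above,
    # because factor = 1 - exp(-dt / tau).
    factor = 1.0 - math.exp(-dt_minutes / tau_minutes)
    return product_temp + factor * (air_temp - product_temp)

# example: a 5-minute reading followed by a 1-minute reading
p = 25.0
p = update_product_temp(p, 5.0, dt_minutes=5.0)   # roughly 15.0, since factor is about 0.5
p = update_product_temp(p, 5.0, dt_minutes=1.0)
print(p)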
A real physics model would consider heat transfer through thermal radiation, heat conduction and convection. However, having a single variable to tweak (factor) is a good start if you want a simple yet fairly realistic model.
Edit:
This is not an exact model. If you put something really hot in the fridge (like 1000 degrees), radiation would become the leading term in the cooling equation and the temperature would decline faster. The model above should work well when the temperature difference is small. The factor depends on the item (the things you mentioned, plus its heat capacity and its shape - thin food cools faster) and on its surroundings (can air flow freely around it, or is the fridge full?).
Calculating the factor is by no means simple. I recommend that you put a thermometer in a couple of types of food, put them in the fridge, measure at 5-minute intervals, and then calculate the factor for each food type. The factor will still be inexact - a small apple cools faster than a big apple, and so on.
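To make that calibration concrete, here is a rough Python sketch that estimates the factor (via the time constant) from such a logged cooling run. It assumes the fridge air temperature stays roughly constant during the test, and the readings below are simply the numbers from the example table above, not real measurements.

import math

air_temp = 5.0
minutes   = [0, 5, 10, 15, 20, 25]
core_temp = [25.0, 15.0, 10.0, 7.5, 6.25, 5.625]

# Newton's law of cooling implies ln(T - T_air) falls linearly with time, slope = -1/tau,
# so a least-squares line through the log-differences gives the time constant.
logs = [math.log(t - air_temp) for t in core_temp]
n = len(minutes)
mean_t = sum(minutes) / n
mean_y = sum(logs) / n
slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(minutes, logs)) \
        / sum((t - mean_t) ** 2 for t in minutes)
tau = -1.0 / slope
factor_per_5_min = 1.0 - math.exp(-5.0 / tau)
print(tau, factor_per_5_min)   # roughly 7.2 minutes and 0.5 for these numbers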
I have a very large dataset (~55,000 data points) for chicken crops. Chickens are grown over a ~35-day period. The dataset covers 10 sheds of ~20,000 chickens each. In the sheds are weighing platforms, and as chickens step on them they send the recorded weight to a server. The platforms send data continuously from day 0 to the final day.
The variables I have are: House (as a number, House 1 up to House 10), Weight (measured in grams, to 5 decimal places) and Day (measured as a fractional number, e.g. 12 noon on day 0 is 0.5, whereas 23.3 means a third of the way through day 23, i.e. 8 AM; as the data is sent continuously these numbers can be very precise).
I want to construct either a Time Series Regression model or an ML model so that if I take a new crop, as data is sent by the sensors, the model can make a prediction for what the end weight will be. Then as that crop cycle finishes it can be added to the training data and repeat.
Currently I'm using this very simple Weight vs. Time model, but eventually I would include things like temperature, water and food consumption, humidity, etc.
I've run regression analyses on the data sets to determine the relationship between time and weight (it's likely quadratic, see the image attached) and tried using randomForest in R to create a model. The test model seemed to work well in that its MAPE was similar to the training value, but that was obtained by holding out one house and using it as the test set.
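For illustration, here is a minimal sketch of that quadratic weight-vs-day baseline in Python with numpy; the day and weight arrays are made-up placeholder values, not real crop data.

import numpy as np

# placeholder readings: fractional day-of-cycle and platform weight in grams
day      = np.array([0.5, 5.2, 10.1, 15.7, 20.3, 25.6, 30.2])
weight_g = np.array([45.0, 160.0, 420.0, 850.0, 1350.0, 1900.0, 2450.0])

# fit the quadratic weight-vs-day curve suggested by the regression analysis
coeffs = np.polyfit(day, weight_g, deg=2)
model = np.poly1d(coeffs)

# extrapolate to the end of the ~35-day cycle for a rough end-weight prediction
print(model(35.0))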
Potentially what I've tried so far is completely the wrong methodology, but this is a new area for me, so I'm really not sure of the best approach.
I am trying to find a methodology (or even better, the code) to do the following in Netlogo. Any help is appreciated (I could always try to rewrite the code from R or Matlab to Netlogo):
I have $5000, which I want to distribute following different beta distributions among 10 000 actors. The maximum amount an actor may receive is $1.
Basically, I am looking for a way to assign random numbers to actors (10,000 actors) in the [0,1] interval, following different beta distributions, where the mean of the distributed values remains equal to 0.5. This way the purchasing power of the population (10,000 actors with a mean of 0.5 is $5000) remains equal for beta(1,1) (a uniform population) as well as, for example, beta(4,1) (a rich population).
An example with 5 actors distributing 2.5 dollars:
beta(1,1): 0.5 - 0.5 - 0.5 - 0.5 - 0.5 (mean 0.5)
beta(4,1): 0.1 - 0.2 - 0.5 - 0.7 - 1.0 (mean 0.5)
I've been thinking: if there is no apparent solution to this, maybe the following could work. We can write the shape of the frequency distribution of beta(4,1) as y = ax^2 + b with some values for a and b (both increase exponentially).
In my case, the integral over [0, 1] of y = ax^2 + b should be 5000. Playing around with values for a and b should give me the shape of beta(4,1).
The number of actors having 0.1 should then be the integral over [0, 0.1] of y = ax^2 + b, where a and b are the parameters of the shape resembling beta(4,1).
Is my reasoning clear enough? Could someone extend this reasoning? Is there a link between the beta distribution and a function of a,b,x ?
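One possible approach, sketched below in Python rather than NetLogo, is to draw from the chosen beta distribution and then rescale the draws so their mean is exactly 0.5. Note the caveat in the comments: for a right-skewed case like beta(4,1) (mean 0.8) the rescaling factor is below 1, so every value stays within [0,1], but for a distribution whose mean is below 0.5 the rescaling could push some values above the $1 cap and you would have to clip and redistribute.

import numpy as np

def allocate(alpha, beta, n_actors=10_000, target_mean=0.5, seed=0):
    # Draw beta(alpha, beta) amounts and rescale so the mean is target_mean.
    # Total purchasing power is then n_actors * target_mean ($5000 for 10,000 actors).
    # This is only a sketch: if rescaling pushes any value above 1.0 you would
    # have to cap it and redistribute the excess.
    rng = np.random.default_rng(seed)
    draws = rng.beta(alpha, beta, size=n_actors)
    return draws * (target_mean / draws.mean())

uniform_pop = allocate(1, 1)   # beta(1,1): uniform population
rich_pop = allocate(4, 1)      # beta(4,1): "rich" population, rescaled to mean 0.5
print(uniform_pop.sum(), rich_pop.sum(), rich_pop.max())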
I have a 3-month time series of daily data (recorded every 5 minutes). The data is pretty noisy.
I have already tried some moving-average (MA) methods. They work fine and the resulting curve is fairly smooth, but the problem is that the peaks are almost smoothed out.
So my question is:
Is there any method to get rid of all this noise in the graph but preserve the peak values?
I have also read something about Kalman filtering, but I am not sure how it works or whether it is suitable for my problem.
I tried the following code (rollapply comes from the zoo package):
library(zoo)
smooth <- rollapply(PCM4[, 3], width = 10, FUN = mean, align = "center", fill = NA)
I also tried some different input values for window width, which made the resulting data smoother, but also reduced the peak values which is not what I want.
data set:
DateTime h v Q T
2014-12-18 11:45:00 0.112 0.515 17.141 15.4
2014-12-18 11:50:00 0.113 0.511 17.007 15.5
2014-12-18 11:55:00 0.114 0.518 17.480 15.5
unsmoothed plot:
smoothed plot (width=10):
As you can see, the second plot is quite distorted; the first peak, for example, is at about 250 L/s instead of 500 L/s.
The reason is that the curve is computed from a rolling mean, which flattens the peaks.
But the question is: is there any better solution to fit my needs?
Is there any method to get rid of all this noise in the graph but preserve the peak values?
The challenge here is that you have not really said what is noise and what is signal. Normally, a wildly different ("peak") value would be classified as noise. When people say filtering, they are usually thinking of low-pass filtering (removing high frequency noise and keeping general trends). A sudden peak is going to be noise by that definition.
A Kalman Filter would give you a tool to use if you had a mathematical understanding of your system and its noise. In the KF's "predict" step you would have a mathematical model which would produce an expected value against which you would test your measurement. If you could predict peaks (either their value, or even just when they exist) a KF could help you.
An approach that might help is the "1 Euro" filter (http://www.lifl.fr/~casiez/1euro/). The core idea is that gross movements (your sudden peaks) are likely to be essentially true, while periods of low movement are noisy and should be averaged down. That filter opens up its bandwidth suddenly whenever there's a big movement, and then gradually clamps it down. It was designed for tracking human movements without reflecting the noise from the measurements.
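To make that concrete, here is a minimal Python sketch of the 1 Euro filter as described on that page; the parameter values are illustrative assumptions and would need tuning against your 5-minute data.

import math

class OneEuroFilter:
    # Adaptive low-pass filter: heavy smoothing while the signal is quiet,
    # but the cutoff opens up when the signal moves fast, so peaks pass through.
    def __init__(self, min_cutoff=1.0, beta=0.05, d_cutoff=1.0):
        self.min_cutoff = min_cutoff  # Hz: baseline smoothing when the signal is quiet
        self.beta = beta              # how quickly the cutoff opens on fast changes
        self.d_cutoff = d_cutoff      # cutoff used for the derivative estimate
        self.x_prev = None
        self.dx_prev = 0.0
        self.t_prev = None

    @staticmethod
    def _alpha(cutoff, dt):
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / dt)

    def __call__(self, t, x):
        if self.x_prev is None:
            self.x_prev, self.t_prev = x, t
            return x
        dt = t - self.t_prev
        dx = (x - self.x_prev) / dt                      # raw derivative
        a_d = self._alpha(self.d_cutoff, dt)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev   # smoothed derivative
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff, dt)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev, self.dx_prev, self.t_prev = x_hat, dx_hat, t
        return x_hat

# usage sketch: feed (time_in_seconds, value) pairs, e.g. the Q column every 300 s
# f = OneEuroFilter(min_cutoff=0.001, beta=0.01)
# smoothed = [f(i * 300.0, q) for i, q in enumerate(q_values)]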
This is a question about normalization of data that takes into account different parameters.
I have a set of articles on a website. Users rate the articles from 1 to 5 stars: 1 star marks an article as 'bad', 2 stars as 'average', and 3, 4 and 5 stars as 'good', 'very good' and 'excellent'.
I want to normalize these ratings to the range [0, 2]. The normalized value will represent a score and will be used as a factor for boosting an article up or down in the article listing. Articles with 2 or fewer stars should get a score in the range [0, 1], so the boost factor will have a negative effect. Articles with 2 or more stars should get a score in the range [1, 2], so the boost factor will have a positive effect.
So, for example, an article with 3.6 stars will get a boost factor of 1.4, which will boost the article up in the article listing. An article with 1.9 stars will get a score of 0.8, which will push the article further down in the listing. An article with 2 stars will get a boost factor of 1 - no boost.
Furthermore, I want to take into account the number of votes each article has. An article with a single 3-star vote must rank worse than an article with 4 votes and a 2.8-star average (the boost factors could be 1.2 and 1.3 respectively).
If I understood you correctly, you could use a sigmoid function, for example the logistic function. Sigmoid and other logistic functions are often used in neural networks to compress (normalize) the input range of the data (for example, to the [-1, 1] or [0, 1] range).
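As a hedged sketch of that idea in Python (the pivot, steepness and prior_votes parameters are illustrative assumptions, not values taken from the question):

import math

def boost_factor(avg_stars, num_votes, pivot=2.0, steepness=1.0, prior_votes=5):
    # Map an average star rating plus vote count to a boost factor in (0, 2).
    # pivot is the rating that maps to a neutral boost of 1; prior_votes controls
    # how strongly a low vote count pulls the rating back toward the pivot
    # (a crude Bayesian-style shrinkage).
    shrunk = pivot + (avg_stars - pivot) * num_votes / (num_votes + prior_votes)
    # logistic curve centred on the pivot, scaled to the (0, 2) range
    return 2.0 / (1.0 + math.exp(-steepness * (shrunk - pivot)))

# 3 stars from a single vote vs 2.8 stars from 4 votes:
print(boost_factor(3.0, 1))   # lower boost, because one vote carries little weight
print(boost_factor(2.8, 4))   # higher boost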
I'm not going to solve your rating system, but a general way of normalising values is this.
Java method:
public static float normalise(float inValue, float min, float max) {
return (inValue - min)/(max - min);
}
C function:
float normalise(float inValue, float min, float max) {
return (inValue - min)/(max - min);
}
This method lets you have negative values for both max and min. For example:
variable = normalise(-21.9, -33.33, 18.7);
Note that you can't let max and min be the same value or let max be less than min, and inValue should be within the given range.
Write a comment if you need more details.
Based on the numbers, and a few I made up myself, I came up with these 5 points
Rating Boost
1.0 0.5
1.9 0.8
2.0 1.0
3.6 1.4
5.0 2.0
Calculating an approximate linear regression for that, I got the formula y=0.3x+0.34.
So, you could create a conversion function
float ratingToBoost(float rating) {
return 0.3 * rating + 0.34;
}
Using this, you will get output that approximately fits your requirements. Sample data:
Rating Boost
1.0 0.64
2.0 0.94
3.0 1.24
4.0 1.54
5.0 1.84
This obviously has linear growth, which might not be what you're looking for, but with only three values specified, it's hard to know exactly what kind of growth you expect. If you're not satisfied with linear growth, and you want, for example, bad articles to be punished harder with a lower boost, you could always come up with some more values and fit an exponential or logarithmic equation.
I am doing my master's thesis in electrical engineering on the impact of humidity and temperature on power consumption.
I have a problem that is related to statistics, numerical methods and mathematics.
I have real data for one year (the year 2000).
Every day has 24 hourly records of temperature, humidity and power consumption.
So the total number of points for one parameter, for example temperature, is 24 * 366 = 8784 points.
I classified the power pattern into three patterns:
daily, seasonal, and covering the whole year.
The aim is to find a mathematical model of the following form:
P = f(T, H, t, date)
where
P = power consumption,
T = temperature,
H = humidity,
t = time in hours from 1 to 24,
date = the day number in the year from 1 to 366 (or the day number in a month from 1 to 31).
I started by plotting, in Matlab, a sample day (1st August) showing the effect of time, humidity and temperature on power consumption:
http://www7.0zz0.com/2010/12/11/23/264638558.jpg
Then I widened the analysis to see what changes when this day is plotted together with the next day:
http://www7.0zz0.com/2010/12/11/23/549837601.jpg
After that I widened it further to include the 1st week of August:
http://www7.0zz0.com/2010/12/11/23/447153078.jpg
Then, the whole month of August:
http://www7.0zz0.com/2010/12/12/00/120820248.jpg
Then, starting from January, I plotted power and temperature for the first six months without humidity (omitted only for scaling):
http://www7.0zz0.com/2010/12/12/00/908911392.jpg
with humidity :
http://www7.0zz0.com/2010/12/12/00/102651717.jpg
Then, the whole-year plot without humidity:
(the P, T and H values are unchanged, but I separate H out only for scaling, since the H values are much higher than P and T, which shrinks the plot and makes the P and T curves very small)
http://www7.0zz0.com/2010/12/11/23/290259320.jpg
and finally with humidity:
http://www7.0zz0.com/2010/12/11/23/842530863.jpg
The reason I have plotted these figures is to follow the behaviour of all parameters: how P changes with respect to temperature, humidity, time in hours, and day number.
It is clear that these figures show cyclic behaviour, but this behaviour is not constant: it increases and then decreases over the course of the year. For example, the behaviour on 1st January is almost the same as on any other day of the year; the difference is only a shift up or down, left or right.
Also, Temperature and Humidity are almost sinusoidal. However, Power consumption behavior is not purely sinusoidal as seen in the following figure:
http://www7.0zz0.com/2010/12/12/00/153503144.jpg
I am not an expert in statistics or numerical methods, and this part of the problem no longer has much to do with electrical engineering concepts.
The result I am aiming for is this:
the user specifies the day number in the year (1 to 366), the hour of that day, and the temperature and humidity.
The mathematical model should then be able to find the power consumption for that specific hour of that day.
The power found from the model will then be compared with the measured real power from the data, and if the values are very close to each other, the model will be considered accurate and accepted.
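One common way to realise such a model, given here purely as a hedged illustration (the feature choice is an assumption, not something derived from this data), is a linear regression on temperature, humidity, and sine/cosine harmonics of the hour and of the day number:

import numpy as np

def design_matrix(T, H, hour, day):
    # Feature matrix for a simple linear model with daily and seasonal harmonics.
    hour = np.asarray(hour, dtype=float)
    day = np.asarray(day, dtype=float)
    return np.column_stack([
        np.ones_like(hour),
        np.asarray(T, dtype=float),
        np.asarray(H, dtype=float),
        np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24),   # daily cycle
        np.sin(2 * np.pi * day / 366), np.cos(2 * np.pi * day / 366),   # seasonal cycle
    ])

# Fitting and prediction would look roughly like this, with arrays built from
# the 8784 hourly records (T, H, hour, day, P):
# coeffs, *_ = np.linalg.lstsq(design_matrix(T, H, hour, day), P, rcond=None)
# P_predicted = design_matrix(T_new, H_new, hour_new, day_new) @ coeffs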
I am sorry for this long question. I have actually read many papers and a lot of other help, but I could not work out the correct approach for finding one unified model that follows the curves' behaviour from the start to the end of the year, and having more than one independent variable has confused me a lot.
I hope this problem is not difficult for statistics and mathematics experts.
Any help will be highly appreciated,
Thanks in advance
Regards
About this:
"Also, Temperature and Humidity are almost sinusoidal. However, Power consumption behavior is not purely sinusoidal"
It seems that on a local scale (on the order of several days or weeks) temperature and humidity can be expressed as a periodic train of Gaussians:
Under that assumption, we can model power consumption as a superposition of the temperature and humidity trains of Gaussians. Consider this opencalc spreadsheet chart,
in which f1 and f2 are trains of Gaussians (here only 4 peaks, but you may calculate as many as you need for data fitting) and f3 is the superposition of these two trains,
just (f1^2 + f2^2)^(1/2).
However, I don't know to what degree power consumption follows such a superposition of Gaussian trains. You may want to invest time exploring this possibility.
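To make the suggestion concrete, here is a small Python sketch of a Gaussian train and the proposed superposition; the peak spacing, widths and amplitudes are arbitrary illustration values, not fitted to any data.

import numpy as np

def gaussian_train(t, period, width, amplitude):
    # Sum of Gaussian peaks repeating every `period` hours across the span of t.
    centers = np.arange(t.min(), t.max() + period, period)
    return amplitude * sum(np.exp(-((t - c) ** 2) / (2 * width ** 2)) for c in centers)

t = np.linspace(0, 4 * 24, 4 * 24 * 12 + 1)                    # four days, 5-minute steps
f1 = gaussian_train(t, period=24, width=3.0, amplitude=1.0)    # "temperature-like" train
f2 = gaussian_train(t, period=24, width=5.0, amplitude=0.6)    # "humidity-like" train
f3 = np.sqrt(f1 ** 2 + f2 ** 2)                                # the suggested superposition
print(f3[:5])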
Good luck!