Cellwise variable k-omega SST turbulence model coefficients - OpenFOAM

I am wondering how to make the k-omega SST turbulence model coefficients vary cell by cell in OpenFOAM.
In other words, instead of keeping the turbulence model coefficients at their default constant values across the whole domain, I would like them to have a different value in each cell. These values are precomputed in some way, but how do I propagate them into the model? Has anyone done this before?

Related

Finding how a variable affects the output of a time-series random-forest regression model

I created a random-forest regression model for time-series data in R that has three predictors and one output variable.
Is there a way to find (perhaps in more absolute terms) how changes in a specific variable affect the prediction output?
I know about variable importance; I am not trying to find the variables that have the biggest effect. Instead, I am trying to see how the prediction output would change if I picked an input variable X_1 and increased (or decreased) its value.
Does it even make sense to do this? Is it even possible with a random-forest model? Rereading my question a few times made me dubious, but any insight/recommendation would be greatly appreciated.
I would guess that what this question is actually about is exploratory data analysis (EDA). For starters, I would calculate the correlations between the variables to get a feeling for the strength of the [linear] relationship between each pair. Further, I would look at scatter plots between the variables to get a feeling for the relationships. Depending on the variables, [linear] regression could tell you how an increase in variable x1 would affect variable x2.
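A minimal sketch of that workflow in R, using a simulated data frame as a hypothetical stand-in for the question's three predictors and response:

# Hypothetical data mirroring the question's setup:
# three predictors and one response.
set.seed(1)
d <- data.frame(X_1 = rnorm(100), X_2 = rnorm(100), X_3 = rnorm(100))
d$y <- 2 * d$X_1 - 0.5 * d$X_2 + rnorm(100)

# Pairwise correlations: strength of the linear relationships.
cor(d)

# Scatter plots of every pair of variables.
pairs(d)

# A linear regression estimates how a unit increase in X_1 changes
# the response, holding the other predictors fixed.
coef(lm(y ~ X_1 + X_2 + X_3, data = d))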

Regress a variable on variables from the date before

For an econometrics project, I'm using R to estimate some effects with panel data.
To check whether strict exogeneity is too restrictive, I'm running the following 2SLS estimation to predict Y_it (which are sales) from X_it (some variables) using a first-difference model.
I need to regress each component of Delta_X_it (= X_it - X_it-1) on a constant and all components of Delta_X_it-1.
Then I regress Delta_Y_it on the fitted values of Delta_X_it.
The second step will be easy to implement once the first is done, but the first step is the problem. I have already first-differenced all variables by group (here, by Store), but I don't know how to tell R that I want to regress one variable at time t on the variables at time t-1 while grouping by Store. Any idea how to do so?
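One way to sketch that first step with dplyr, shown on a toy panel (the column names Store, date, dX1, dX2, and dY are hypothetical stand-ins for the questioner's variables):

library(dplyr)

# Toy panel: one row per Store and date; dX1, dX2 stand in for the
# components of Delta_X_it, dY for Delta_Y_it.
set.seed(1)
df <- expand.grid(Store = c("A", "B"), date = 1:50)
df$dX1 <- rnorm(nrow(df))
df$dX2 <- rnorm(nrow(df))
df$dY  <- rnorm(nrow(df))

# Lag each differenced variable within Store, ordered by date.
df <- df %>%
  group_by(Store) %>%
  arrange(date, .by_group = TRUE) %>%
  mutate(lag_dX1 = lag(dX1), lag_dX2 = lag(dX2)) %>%
  ungroup()

# First stage: each component of Delta_X_it on a constant and all
# components of Delta_X_it-1 (lm includes the constant by default).
fs1 <- lm(dX1 ~ lag_dX1 + lag_dX2, data = df)
fs2 <- lm(dX2 ~ lag_dX1 + lag_dX2, data = df)

# Second stage: Delta_Y_it on the first-stage fitted values.
df$dX1_hat <- predict(fs1, newdata = df)
df$dX2_hat <- predict(fs2, newdata = df)
summary(lm(dY ~ dX1_hat + dX2_hat, data = df))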

bam() returns negative deviance explained values

I'm trying to run GAMs to analyze some temperature data. I have remote cameras and external temperature loggers, and I'm trying to model the difference between the temperatures they record (camera temperature - logger temperature). Most of the time the cameras record higher temperatures, but sometimes the logger returns the higher temperature, in which case the difference is negative. The direction of the difference is something I care about, so I need to keep the negative values in the response. My explanatory variables are percent canopy cover (quantitative), direct and diffuse radiation (quantitative), and camera direction (ordered factor) as fixed effects, as well as the camera/logger pair (factor) as a random effect.
I had mostly been using the gam() function in mgcv to run my models. I'm using a scat distribution since my data is heavy-tailed. My model code is as follows:
gam(f1, family = scat(link = "identity"), data = d)
I wanted to try using bam() since I have 60,000 data points (one temperature observation per hour of the day for several months). The gam() models run fine, though they take a while. But the exact same model formulas run through bam() return negative deviance explained values. I also get 50+ warning messages that all say:
In y - mu : longer object length is not a multiple of shorter object length
Running gam.check() on the fitted models returns identical residual plots. The parametric coefficients, smooth terms, and R-squared values are also almost identical. The only things that have noticeably changed are the deviance explained values, and they've changed to something completely nonsensical (the deviance explained values for the bam() models range from -61% to -101%).
I'll admit that I'm brand new to GAMs. I know just enough to know that the residual plots are more important than the deviance explained values, and the residual plots look good (way better than they did with a Gaussian distribution, at least). More than anything, I'm curious about what's going on within bam() specifically that causes it to throw that warning and return a negative deviance explained value. Is there some extra argument I can set in bam(), or some further manipulation I can do to my data, to prevent this? Or can I ignore it and move forward, since my residual plots look good and the outputs are mostly the same?
Thanks in advance for any help.
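For what it's worth, mgcv reports deviance explained as (null deviance - residual deviance) / null deviance, so a negative value means the fitted model's reported deviance exceeds that of the null model. A minimal sketch of checking this by hand, reusing the question's formula f1 and data d (not shown in the question):

library(mgcv)

# f1 and d are the question's formula and data frame.
m_gam <- gam(f1, family = scat(link = "identity"), data = d)
m_bam <- bam(f1, family = scat(link = "identity"), data = d)

# Deviance explained, as summary.gam reports it.
dev_expl <- function(m) (m$null.deviance - deviance(m)) / m$null.deviance
dev_expl(m_gam)  # should match summary(m_gam)$dev.expl
dev_expl(m_bam)  # negative whenever deviance(m_bam) > null deviance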

Principal component analysis and elastic net regression

I have identified genes of interest in disease cases and controls within a microarray gene expression set and have applied PCA. I want to use elastic net regression to build a model that can determine which principal components are predictive of the source (case versus control), but I'm unsure how to do this, i.e. what to input as the x and y variables. Any help at all would be much appreciated!
Some form of subset selection (i.e. the elastic net regression you refer to), where you fit a 'penalized' model and determine the most effective predictors, isn't applicable to PCA or PCR (principal component regression). PCR reduces the data set to 'n' components, and the different principal components refer to different 'directions' of variance within the data. The first principal component is the direction within the data that has the most variance, the second principal component is the direction with the second-most variance, and so on.
If you were to type:
summary(pcr.model)
it will return a table containing the amount of variance explained in the response (i.e. your y) by each principal component. You will notice there is a cumulative total of variance explained by the principal components.
The idea of PCR is that you can select a subset of these components (if your data is amenable, i.e. most of the variance is captured by the first few principal components), allowing you to greatly reduce the dimensionality of your data (and, say, plot a graph of PC1 vs PC2). Note that PCR is generally used in the categorisation of ordinal or categorical data types, so if your data isn't like this, you should probably use something else.
If, however, you want to know which predictors are useful and apply an elastic-net-type regression, I would recommend using the lasso. I would also recommend the ISLR book (An Introduction to Statistical Learning), which contains excellent R walkthroughs of all of the essential frequentist modelling techniques.
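To make the original question concrete: the x for the elastic net would be the matrix of principal component scores (one row per sample) and the y would be the case/control label. A minimal sketch with glmnet, on simulated stand-ins for the expression matrix and labels:

library(glmnet)

# Hypothetical stand-ins: expr = expression matrix (samples x genes),
# group = case/control labels.
set.seed(1)
expr  <- matrix(rnorm(60 * 200), nrow = 60, ncol = 200)
group <- factor(rep(c("control", "case"), each = 30))

# PCA; the scores are one row per sample, one column per component.
pca <- prcomp(expr, center = TRUE, scale. = TRUE)
x <- pca$x[, 1:50]  # keep the first 50 components as predictors
y <- group

# Elastic net logistic regression: alpha mixes ridge (0) and
# lasso (1); lambda is chosen by cross-validation.
fit <- cv.glmnet(x, y, family = "binomial", alpha = 0.5)

# Components with non-zero coefficients are the ones retained as
# predictive of case versus control.
coef(fit, s = "lambda.min")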

In R, how to add an external variable to an ARIMA model?

Does anyone here know how I can add external variables to an ARIMA model?
In my case I am trying to build a volatility model, and I would like to add the squared returns to model an ARCH effect.
The reason I am not using GARCH models is that I am only interested in the volatility forecasts, and GARCH models place their errors on the returns, which is not the subject of my study.
I would like to add an external variable and see the R^2 and p-values to check whether its coefficient is statistically significant.
I know that this is a very old question, but for people like me who were wondering: you pass the external variables to the xreg argument, using cbind when there is more than one.
For example:
Arima(X, order = c(3, 1, 3), xreg = cbind(ts1, ts2, ts3))
Each external time series should be the same length as the original series.
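To then check whether the external regressors' coefficients are statistically significant (note that Arima doesn't report an R^2 directly), one option is approximate z-tests built from the fit's coefficient vector and covariance matrix. A sketch on toy series standing in for the question's data:

library(forecast)

# Toy series standing in for X and the external regressors.
set.seed(1)
ts1 <- rnorm(200); ts2 <- rnorm(200); ts3 <- rnorm(200)
X   <- cumsum(rnorm(200)) + 0.8 * ts1

fit <- Arima(X, order = c(3, 1, 3), xreg = cbind(ts1, ts2, ts3))

# z-statistic for each coefficient: estimate / standard error, with
# standard errors from the estimated covariance matrix.
est <- coef(fit)
se  <- sqrt(diag(vcov(fit)))
z   <- est / se
p   <- 2 * pnorm(-abs(z))
round(cbind(estimate = est, std.error = se, z = z, p.value = p), 4)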
