Julia physics-informed neural network

Let's consider this equation (posted as an image that is not reproduced here), with 0 < b < 1.
I am new to Julia programming, and I am considering solving this ODE with NeuralPDE.jl.
Any idea how to write a physics-informed neural network for an ODE in the Julia framework?
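Since the original equation did not survive the post, below is a minimal sketch using NeuralPDE.jl's NNODE solver on a stand-in logistic-type ODE, du/dt = b*u*(1 - u) with 0 < b < 1. The right-hand side f, the initial condition, the time span, and the network size are all placeholder assumptions to swap for your actual problem; the API shown follows recent NeuralPDE + Lux releases and may differ slightly in older versions.

    using NeuralPDE, Lux, OptimizationOptimisers

    # Stand-in ODE (assumption): du/dt = b * u * (1 - u), with 0 < b < 1
    b = 0.5
    f(u, p, t) = b * u * (1 - u)
    prob = ODEProblem(f, 0.1, (0.0, 5.0))        # u(0) = 0.1 on t in [0, 5]

    # Small network mapping t -> u(t); NNODE trains it to satisfy the ODE residual.
    chain = Chain(Dense(1, 16, tanh), Dense(16, 1))
    alg = NNODE(chain, OptimizationOptimisers.Adam(0.01))

    sol = solve(prob, alg, verbose = true, maxiters = 2000, saveat = 0.05)

The resulting sol holds the network's prediction at the saveat points; comparing it against a classical solver from OrdinaryDiffEq.jl is a good sanity check.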

Related

Does bnlearn compute for non-linear correlations?

I've recently started using the bnlearn package in R for detecting Markov Blankets of a target node in the dataset.
Based on my understanding of Bayesian inference, two nodes are connected if there is a causal relationship between them, and this is measured using conditional independence tests that check for association while taking potential confounders into account.
I just wanted to clarify whether bnlearn checks for both linear and non-linear correlations in these tests. I tried looking for this in the package documentation but wasn't able to find anything.
It would be really helpful if someone could explain how bnlearn performs the CI tests.
Thanks a bunch <3
Correlation implies statistical dependence, but not vice versa. There are cases of statistical dependence with no correlation, e.g. in periodic signals (the correlation between sin(x) and x is very low over many periods). Statistical dependence is a more general concept than correlation, which is why the documentation is not phrased in terms of correlation.
As the sin(x) example shows, this is indeed a non-linear dependency, and it should be captured by the Bayesian network.
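To make the sin(x) example concrete, here is a quick illustrative check (in Julia): the Pearson correlation comes out close to zero even though y is a deterministic function of x.

    using Statistics

    x = collect(range(0, 20pi, length = 10_000))   # many full periods
    y = sin.(x)

    cor(x, y)   # close to 0: almost no linear correlation despite full dependence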

How to implement the evolutionary polynomial regression with a multi-objective genetic algorithm (EPR-MOGA) in R?

I am trying to implement evolutionary polynomial regression with a multi-objective genetic algorithm (EPR-MOGA) in R. Given that there is no R package available for this, I wonder what a good starting point would be. Thanks.
Below are some references that describe the method.
https://iwaponline.com/jh/article/11/3-4/225/40/Advances-in-data-driven-analyses-and-modelling
https://iwaponline.com/jh/article/8/3/207/31275/A-symbolic-data-driven-technique-based-on
https://ojs.library.queensu.ca/index.php/wdsa-ccw/article/view/12093
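One starting point is to note EPR's two-level structure: an outer (multi-objective) GA searches over integer exponent matrices E, and an inner least-squares fit finds the coefficients of y ≈ a0 + sum_j a_j * prod_i x_i^E[j,i]. The sketch below (in Julia for brevity; the same skeleton ports directly to R with lm() and a GA package) replaces the GA with random search purely to show the structure; the term count, exponent range, and toy data are placeholders.

    using LinearAlgebra, Random

    # Regression matrix for a candidate exponent matrix E (m terms x d features):
    # column 1 is the intercept, column j+1 is prod_i x_i^E[j,i].
    function design_matrix(X, E)
        n, m = size(X, 1), size(E, 1)
        Z = ones(n, m + 1)
        for j in 1:m, i in axes(X, 2)
            Z[:, j + 1] .*= X[:, i] .^ E[j, i]
        end
        return Z
    end

    # Inner level: least-squares coefficients and squared error for a structure.
    function fitness(X, y, E)
        Z = design_matrix(X, E)
        a = Z \ y
        return sum(abs2, y - Z * a), a
    end

    # Outer level: stand-in for the multi-objective GA — here plain random search.
    function search(X, y; terms = 2, maxexp = 2, iters = 5000)
        best_E, best_sse = zeros(Int, terms, size(X, 2)), Inf
        for _ in 1:iters
            E = rand(0:maxexp, terms, size(X, 2))
            sse, _ = fitness(X, y, E)
            if sse < best_sse
                best_E, best_sse = E, sse
            end
        end
        return best_E, best_sse
    end

    # Toy data from y = 2*x1^2 + 3*x1*x2; the search should recover
    # the exponent rows [2 0] and [1 1].
    Random.seed!(1)
    X = rand(200, 2)
    y = 2 .* X[:, 1] .^ 2 .+ 3 .* X[:, 1] .* X[:, 2]
    search(X, y)

A real EPR-MOGA would replace the random search with NSGA-II-style selection trading error against the number of non-zero terms, as in the papers above.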

Julia - Constraint Programming in JuMP

I know we can use CPLEX with JuMP in Julia, for linear programming for instance.
But can we use CPLEX in JuMP on Julia v1.1 for constraint programming?
CPLEX and OPL can do constraint programming, but what about Julia? Are there documentation pages in JuMP related to that, or is support planned for the near future?
According to the documentation at http://www.juliaopt.org/JuMP.jl/0.18/installation.html#getting-solvers, JuMP's CPLEX support includes:
Linear programming
Second-order conic programming (including problems with convex quadratic constraints and/or objective)
Mixed-integer linear programming
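So constraint programming is not among the supported problem classes there; the supported use looks like the following minimal mixed-integer example (a sketch assuming CPLEX.jl is installed and licensed; the syntax shown is for JuMP 0.19+, whereas the linked 0.18 docs used Model(solver = CplexSolver())):

    using JuMP, CPLEX

    model = Model(CPLEX.Optimizer)
    @variable(model, x >= 0, Int)          # integer variable makes this a MILP
    @variable(model, 0 <= y <= 10)
    @constraint(model, 2x + y <= 14)
    @objective(model, Max, 3x + 2y)
    optimize!(model)

    value(x), value(y)                     # optimum here: x = 2.0, y = 10.0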

Adaboost Implementation (obtaining weak classifier functional form in R)

I am trying to use AdaBoost in R (CRAN) for a classification problem. I cannot find any R packages that actually output the weak classifiers' functional form (e.g. h_i(x) * I(Y > z)) that I could then program as a scoring algorithm. Can anyone help point me to a package that provides these functions/coefficients? Thanks!
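I don't know of a package that prints the stumps directly, but AdaBoost's weak learners are typically decision stumps, so a from-scratch version makes their functional form explicit. Below is a minimal hypothetical sketch (in Julia rather than R, purely to show the structure): each weak classifier is a threshold rule on one feature, and every fitted (j, z, s, a) tuple reads directly as a scoring term a * s for x[j] > z and -a * s otherwise.

    struct Stump
        j::Int        # feature index the stump splits on
        z::Float64    # threshold
        s::Int        # orientation: +1 predicts +1 when x[j] > z
        a::Float64    # AdaBoost weight (alpha)
    end

    predict(st::Stump, x) = st.s * (x[st.j] > st.z ? 1 : -1)

    # y must use labels -1/+1.
    function adaboost(X::Matrix{Float64}, y::Vector{Int}, T::Int)
        n, d = size(X)
        w = fill(1.0 / n, n)                   # uniform example weights
        stumps = Stump[]
        for _ in 1:T
            best_err, best = Inf, (1, 0.0, 1)
            for j in 1:d, z in unique(X[:, j]), s in (1, -1)
                pred = [s * (X[i, j] > z ? 1 : -1) for i in 1:n]
                err = sum(w .* (pred .!= y))   # weighted misclassification
                if err < best_err
                    best_err, best = err, (j, z, s)
                end
            end
            best_err = clamp(best_err, 1e-10, 1 - 1e-10)
            a = 0.5 * log((1 - best_err) / best_err)
            st = Stump(best[1], best[2], best[3], a)
            push!(stumps, st)
            w .*= [exp(-a * y[i] * predict(st, X[i, :])) for i in 1:n]  # up-weight mistakes
            w ./= sum(w)
        end
        return stumps
    end

    # Final classifier: H(x) = sign(sum over t of a_t * h_t(x))
    score(stumps, x) = sum(st.a * predict(st, x) for st in stumps)

Each element of stumps is exactly the kind of coefficient/indicator pair the question asks for.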

Neural Network 0 vs -1

I have seen people use -1 instead of 0 for the input data when working with neural networks. How is this better, and does it affect any of the mathematics needed to implement it?
Edit: Using feedforward and backprop.
Edit 2: I gave it a go, but the network stopped learning, so I assume the maths has to change somewhere?
Edit 3: Finally found the answer. The mathematics for binary differs from bipolar. See my answer below.
I recently found that the sigmoid and its derivative have to change when using bipolar values instead of binary ones.
Bipolar sigmoid: f(x) = 2 / (1 + e^(-x)) - 1
Bipolar sigmoid derivative: f'(x) = 0.5 * (1 + f(x)) * (1 - f(x))
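A quick numerical check of the pair (illustrative Julia), against the binary sigmoid s(x) = 1 / (1 + e^(-x)): since f(x) = 2*s(x) - 1, the derivative must satisfy f'(x) = 2*s(x)*(1 - s(x)).

    bipolar(x) = 2 / (1 + exp(-x)) - 1                       # bipolar sigmoid
    bipolar_deriv(x) = 0.5 * (1 + bipolar(x)) * (1 - bipolar(x))

    s(x) = 1 / (1 + exp(-x))                                 # binary sigmoid
    @assert isapprox(bipolar(1.3), 2 * s(1.3) - 1)
    @assert isapprox(bipolar_deriv(1.3), 2 * s(1.3) * (1 - s(1.3)))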
It's been a long time, but as I recall, it has no effect on the mathematics needed to implement the network (assuming you're not working with a network type that for some reason limits any part of the process to non-negative values). One of the advantages is that it makes a larger distinction between inputs, and helps amplify the learning signal. Similarly for outputs.
Someone who's done this more recently probably has more to say (like about whether the 0-crossing makes a difference; I think it does). And in reality some of this depends on exactly what type of neural network you're using. I'm assuming you're talking about backprop or a variant thereof.
The network learns more quickly with -1/1 inputs than with 0/1. Also, with -1/1 inputs, 0 can mean "unknown entry/noise/does not matter". I would use -1/1 as the input to my neural network.
