Formula to calculate water level from Sentinel-3 in BRAT tool?

I'm a university student who is curious about altimetry satellites. I have just started using BRAT: I downloaded Sentinel-3 data and am trying to use the tool to calculate inland water levels. In BRAT I use the formula: ((((((alt_01 - range_ocean_01_ku) - mod_dry_tropo_cor_meas_altitude_01) - mod_wet_tropo_cor_meas_altitude_01) - iono_cor_alt_01_ku) - pole_tide_01) - solid_earth_tide_01) - geoid_01. However, when I check against data from Hydroweb, the results are quite different, so can anyone tell me the correct formula? Thanks!
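For readability, the nested subtractions can be written as one flat expression. Here is a minimal Python sketch of the formula as posted; the variable names mirror the Sentinel-3 fields in the question, but every numeric value is purely illustrative (made up), not real satellite data:

```python
# Illustrative (made-up) values, all in metres:
alt_01 = 800000.0            # satellite altitude above the reference ellipsoid
range_ocean_01_ku = 799950.0 # Ku-band range
mod_dry_tropo_cor = -2.3     # dry troposphere correction
mod_wet_tropo_cor = -0.15    # wet troposphere correction
iono_cor = -0.02             # ionospheric correction
pole_tide = 0.01             # pole tide
solid_earth_tide = 0.05      # solid Earth tide
geoid = 30.0                 # geoid height above the ellipsoid

# Water surface height above the geoid, per the formula in the question:
wse = (alt_01 - range_ocean_01_ku
       - mod_dry_tropo_cor - mod_wet_tropo_cor - iono_cor
       - pole_tide - solid_earth_tide
       - geoid)
```

With these sample numbers the altitude-minus-range term contributes 50 m, the corrections add roughly 2.4 m, and subtracting the 30 m geoid leaves a height above the geoid of about 22.4 m.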

Related

How are the design variables in the SimpleGA or DifferentialEvolution drivers initialized?

I am having trouble navigating the source code to see how the design variables in the initial population for the SimpleGA and DifferentialEvolution Drivers are set. Is there some sort of Latin Hypercube sampling of the design variable ranges? Do the initial values I set in my problem instance get used like they would for the other drivers (Scipy and pyOptSparse)?
Many thanks,
Garrett
For these two drivers, the initial value in the model is not used. It's not even clear to me what it would mean to use that value directly, since you need a stochastically generated population, but I'm admittedly not an expert on the latest GA population initialization methods. However, I can answer the question of how they do get initialized as of OpenMDAO V3.17:
Simple GA Driver:
This driver does seem to use an LHS sampling like this:
new_gen = np.round(lhs(self.lchrom, self.npop, criterion='center',
                       random_state=random_state))
new_gen[0] = self.encode(x0, vlb, vub, bits)
Differential Evolution Driver:
This driver uses a uniform random distribution like this:
population = rng.random([self.npop, self.lchrom]) * (vub - vlb) + vlb # scale to bounds
Admittedly, it isn't obvious why the initialization methods differ between the two drivers, and perhaps there should be an option to pick from a set of methods or to provide your own initial population somehow. A POEM and/or pull request to improve this would be most welcome.
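For reference, the uniform scale-to-bounds initialization used by the Differential Evolution driver can be sketched in plain NumPy. This is an illustration of the idea with made-up bounds and population size, not the driver's actual code:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

npop = 6                            # population size
vlb = np.array([0.0, -5.0, 10.0])   # lower bound for each design variable
vub = np.array([1.0, 5.0, 20.0])    # upper bound for each design variable

# Draw uniform samples in [0, 1) and scale them into [vlb, vub),
# mirroring the one-liner quoted from the DE driver:
population = rng.random((npop, vlb.size)) * (vub - vlb) + vlb
```

Every row is one candidate design point, and every column stays inside its variable's bounds by construction.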

Calculating appropriate rewards for Uniswap like Liquidity pool contract (Solidity)

I'm trying to calculate rewards for liquidity providers and I found this equation that Uniswap apparently uses:
Basic Formula (L = liquidity): (L_you / L_others) * (24h_swap_volume * pool_fee_rate)
And I'm trying to implement this in my smart contract, but I can't, because the liquidity held by others will always be larger than the liquidity you hold, so the ratio is a fractional value. My question is: how do I use this equation in a Solidity smart contract without falling into floating-point hell?
After doing a little more digging, I found that you can use ABDKMath to achieve this. You can find more information using the links below:
Library: https://github.com/abdk-consulting/abdk-libraries-solidity
Article: https://www.bitcoininsider.org/article/68630/10x-better-fixed-point-math-solidity
As a code example here is a quick snippet of how I implemented this:
uint256 fee = ABDKMath64x64.mulu(
    ABDKMath64x64.divu(tokensProvided, others),
    taxableValue
);
This is probably not the best solution but it worked for me. Also, take a look at https://github.com/paulrberg/prb-math for a higher precision fixed-point math library.
Regarding gas efficiency here is a quote from the PRB math lib:
The typeless PRBMath library is faster than ABDKMath for abs, exp,
exp2, gm, inv, ln, log2. Conversely, it is slower than ABDKMath for
avg, div, mul, powu and sqrt. There are two technical reasons why
PRBMath lags behind ABDKMath's mul and div functions
So for this use case, I think it is a better idea to use ABDKMath instead of PRBMath, unless you plan on transferring more than 2^128 wei.

How to interpret "nearest source location" in the cumulativeCost-function of Google Earth Engine?

I am wondering what the documentation of cumulativeCost() in GEE exactly means by "nearest source location".
"nearest" in terms of "the closest starting pixel, linearly computed" or
"nearest" in terms of "the closest in terms of cumulative cost" ?
For my analysis I would like to know whether the algorithm reduces the number of potential routes in advance (by choosing only one starting point up front), OR whether it first tries the routes from one pixel to all possible starting points and then takes the value of the route to the starting pixel with the lowest total cost. Does anyone have more detailed information on how the algorithm works in this case? Thanks.
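The two interpretations can give different answers. Here is a minimal sketch of the second one ("lowest cumulative cost over all sources"), implemented as a multi-source Dijkstra on a toy grid; this illustrates the concept only and is not Google Earth Engine's actual implementation:

```python
import heapq

def cumulative_cost(cost, sources):
    """Minimum accumulated cost from every cell to ANY source cell.

    cost:    2D list of per-cell traversal costs.
    sources: list of (row, col) source cells.

    Dijkstra is seeded from all sources at once, so each cell ends up
    with the cost of the cheapest route to some source, which is not
    necessarily the route to the linearly nearest source.
    """
    rows, cols = len(cost), len(cost[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    heap = []
    for r, c in sources:
        dist[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```

On the 1x6 grid `[1, 100, 1, 1, 1, 1]` with sources at columns 0 and 5, the cell at column 2 is linearly closer to the left source, but its cheapest cumulative route (cost 3) goes through the right source, which is exactly the distinction the question is asking about.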

How do I represent medication alternatives in FHIR?

I would like to know if there is a Resource I could use to represent medication alternatives, such as:
If you are taking Drug A , you should use Drug B or Drug C instead
This is not something that FHIR has implemented yet - it's clinical/medical knowledge. The nearest that FHIR currently has is the Clinical Quality Framework: http://hl7-fhir.github.io/cqif/cqif.html
Warning: It's very much a work in progress, and the link above is to the current build, not a stable release

Trying to use the Naive Bayes Learner in R but predict() giving different results than model would suggest

I'm trying to use the Naive Bayes Learner from e1071 to do spam analysis. This is the code I use to set up the model.
library(e1071)
emails <- read.csv("emails.csv")
emailstrain <- read.csv("emailstrain.csv")
model <- naiveBayes(type ~ ., data = emailstrain)
There are two sets of emails that both have a 'statement' and a 'type'. One is for training and one is for testing. When I run
model
and just read the raw output, it seems to give a higher-than-zero-percent chance of a statement being spam when it is indeed spam, and likewise when it is not. However, when I try to use the model to predict the testing data with
table(predict(model,emails),emails$type)
I get that
       ham spam
  ham 2086  321
 spam    2    0
which seems wrong. I also tried testing on the training set, in which case it should give quite good results, or at least as good as what was observed in the model. However, it gave
       ham spam
  ham 2735  420
 spam    0    6
which is only slightly better than with the testing set. I think something must be wrong with how the predict function is working.
Here is how the data files are set up, with some examples of what's inside:
type,statement
ham,How much did ur hdd casing cost.
ham,Mystery solved! Just opened my email and he's sent me another batch! Isn't he a sweetie
ham,I can't describe how lucky you are that I'm actually awake by noon
spam,This is the 2nd time we have tried to contact u. U have won the £1450 prize to claim just call 09053750005 b4 310303. T&Cs/stop SMS 08718725756. 140ppm
ham,"TODAY is Sorry day.! If ever i was angry with you, if ever i misbehaved or hurt you? plz plz JUST SLAP URSELF Bcoz, Its ur fault, I'm basically GOOD"
ham,Cheers for the card ... Is it that time of year already?
spam,"HOT LIVE FANTASIES call now 08707509020 Just 20p per min NTT Ltd, PO Box 1327 Croydon CR9 5WB 0870..k"
ham,"When people see my msgs, They think Iam addicted to msging... They are wrong, Bcoz They don\'t know that Iam addicted to my sweet Friends..!! BSLVYL"
ham,Ugh hopefully the asus ppl dont randomly do a reformat.
ham,"Haven't seen my facebook, huh? Lol!"
ham,"Mah b, I'll pick it up tomorrow"
ham,Still otside le..u come 2morrow maga..
ham,Do u still have plumbers tape and a wrench we could borrow?
spam,"Dear Voucher Holder, To claim this weeks offer, at you PC please go to http://www.e-tlp.co.uk/reward. Ts&Cs apply."
ham,It vl bcum more difficult..
spam,UR GOING 2 BAHAMAS! CallFREEFONE 08081560665 and speak to a live operator to claim either Bahamas cruise of£2000 CASH 18+only. To opt out txt X to 07786200117
I would really love suggestions. Thank you so much for your help
Actually, the predict function works just fine. Don't get me wrong, but the problem is in what you are doing. You are building the model using this formula: type ~ ., right? It is clear what we have on the left-hand side of the formula, so let's look at the right-hand side.
In your data you have only two variables, type and statement, and because type is the dependent variable, the only thing that counts as an independent variable is statement. So far everything is clear.
Let's take a look at the Bayesian classifier. The a priori probabilities are obvious, right? What about the conditional probabilities? From the classifier's point of view you have only one categorical variable (your sentences). To the classifier it is just a list of labels, and all of them are unique, so the a posteriori probabilities will be close to the a priori ones.
In other words, the only thing we can tell when we get a new observation is that the probability of it being spam is equal to the probability of a message being spam in your training set.
If you want to use any machine learning method on natural language, you have to pre-process your data first. Depending on your problem, that could mean, for example, stemming, lemmatization, computing n-gram statistics, or tf-idf. Training the classifier is the last step.
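To make the point concrete, here is a minimal sketch of the simplest such preprocessing, a bag-of-words count (shown in Python with the standard library only, since the idea is language-independent). Unlike whole sentences treated as unique labels, these features overlap between messages, so a classifier can estimate per-word conditional probabilities:

```python
from collections import Counter

def bag_of_words(sentence):
    """Lowercase the sentence, split on whitespace, and count each word."""
    return Counter(sentence.lower().split())

# Two toy training messages (made up for illustration):
train = [
    ("spam", "call now to claim your prize"),
    ("ham", "are you coming to dinner"),
]

features = [(label, bag_of_words(text)) for label, text in train]
# The word "to" now appears in both a spam and a ham message, so the
# classifier can estimate P(word | class) rather than memorizing sentences.
```

With features like these, naiveBayes would see many observations per word instead of one observation per unique sentence.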
