The data in LiveAutos is set using:
ref.child("hhh#hgh").child("latitude").setValue(location.latitude)
The data in LivelyAutos is set by a Python script that uploads a JSON file.
How can I write the data in LiveAutos like LivelyAutos, with serial numbers 0, 1, 2, 3?
The database will be updated with locations from multiple devices.
Alternatively, how can I read the data from LiveAutos?
As @FrankvanPuffelen mentioned in his comment, storing sequential numeric keys is not a recommended way of adding data to Firebase Realtime Database; it is rather an anti-pattern, since such a schema doesn't scale. What you can do instead is use the push() method:
ref.child("hhh#hgh").push().child("latitude").setValue(location.latitude)
Which will produce a schema that looks like this:
Firebase-root
|
--- hhh#hgh
|
--- $pushedId
|
--- latitude: 0.00
|
--- longitude: 0.00
In this way you can add as many locations as you want.
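To illustrate why push()-style keys matter here, below is a minimal local sketch in plain Python (not the Firebase SDK). The uuid key is only a stand-in for Firebase's chronological push IDs; the point is that each write gets a fresh, unique child key, so concurrent devices never collide the way they could with serial indices 0, 1, 2, 3:

```python
import uuid

# In-memory stand-in for the "hhh#hgh" node. Each push() creates a fresh,
# unique child key, so two devices can never overwrite each other's entry;
# with serial numbers 0, 1, 2, 3 both devices could pick the same next index.
node = {}

def push(location):
    key = uuid.uuid4().hex  # stand-in; Firebase generates chronological push IDs
    node[key] = location
    return key

key_a = push({"latitude": 40.71, "longitude": -74.00})  # device A
key_b = push({"latitude": 51.50, "longitude": -0.12})   # device B
```

Reading the data back is then just iterating over the children of the node, regardless of what the keys are.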
I am trying to implement a custom formula in Enterprise Custom field of MS Project.
Formula is as below:
```
([Baseline Cost]*[DurationCustom])/[Baseline Duration]
```
[Baseline Cost] and [Baseline Duration] are fields directly from MS Project; [DurationCustom] is an enterprise custom field with Entity: Task and Type: Number.
For instance if
[Baseline Cost] = 1 ----- Type: Cost (₹1.00)
[DurationCustom] = 100 ----- Type: Number (100)
[Baseline Duration] = 100 ----- Type: Duration (100d)
Expected result: 1
Current result: 0.21
Can anyone suggest whether the error is due to the data types?
Also, is there a way to convert "100d" to simply the number "100"?
P.S.: I am using MS Project Server 2016.
Thanks
I suggest you do the following. Break down your formula into three parts (separately) to see what value appears for each of the variables. For example, what do you get with the following:
Number1 = [Baseline Cost]
Then do the same for [DurationCustom] and [Baseline Duration]. That will tell you how Project "sees" the values and likely tell you why you get the result you do.
As far as converting "100d" to "100", yes it can be done but it isn't necessary. The "100d" is a display value based on your option setting. Internally, "100d" is stored as a number, in minutes, without any unit suffix.
John
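The internal-storage point above also accounts for the 0.21. This is a hedged arithmetic check, not documented behavior: durations in formulas are evaluated in minutes (assuming the default 8-hour working day), and the reported result only matches if costs are evaluated in hundredths of the currency unit:

```python
# Why ([Baseline Cost]*[DurationCustom])/[Baseline Duration] gives 0.21, not 1.
# Assumption: durations in minutes, costs in hundredths, 8-hour working day.
baseline_cost     = 1 * 100        # Rs 1.00  -> 100 hundredths
duration_custom   = 100            # plain Number field, used as entered
baseline_duration = 100 * 8 * 60   # 100 days -> 48000 minutes

raw = baseline_cost * duration_custom / baseline_duration
print(round(raw, 2))               # 0.21, the reported result

# Converting the duration back to days and the cost back to currency units
# inside the formula recovers the expected value:
fixed = (baseline_cost / 100) * duration_custom / (baseline_duration / (8 * 60))
print(fixed)                       # 1.0, the expected result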
Suppose I have a table and I run a query like the one below:
let data = orders | where zip == "11413" | project timestamp, name, amount ;
inject data into newOrdersInfoTable in another cluster // ==> How can I achieve this?
There are many ways to do it. If it is a manual task with not too much data, you can simply do something like this in the target cluster:
.set-or-append TargetTable <| cluster("put here the source cluster url").database("put here the source database").orders
| where zip == "11413" | project timestamp, name, amount
Note that if the dataset is larger you can use the "async" flavor of this command. If the data size is bigger still, you should consider exporting the data and importing it into the other cluster.
I have a documentation site using MkDocs. It contains a list of documents, all with the same basic structure (we only have a list of experimental notebooks). Each notebook (written in a separate Markdown file) should contain an author and date at the beginning, using the following:
<style>table, td, th {border:none!important; font-size:15px}</style>
| | |
| -------------- | :-------------------------- |
| **Author(s):** | First Author, Second Author |
| **Date:** | 2024-01-23 |
What I would like is some kind of templating that lets users avoid adding the lines above (e.g. avoid adding CSS) and instead just specify the list of authors and a date. This would enable me, as the documentation owner, to change the formatting later if needed without changing the content of each .md file.
I read about Jinja templates and wonder if it could be used to achieve this?
My notes after studying a problem that is similar to yours:
One example using global blocks
In your case you should look at MkDocs with Jinja macros.
The metadata can be directly accessed in your markdown.
So you can place this at the start of the document:
authors: First A, Second A
docdate: 2024-01-23
and then use {{ authors }} in the document text; it will be replaced.
You requested to use this globally without changing the markdown files. This can be done by including a block of markdown from an external file in your document. First set up the MkDocs include plugin, then use something like this:
---
authors: First, second
docdate: 2024-01-23
---
{% include-markdown "../authortable.md" %}
In the authortable you can directly put your layout for the author table and include the metadata. The include is done before variable expansion.
Your include file could be:
<style>table, td, th {border:none!important; font-size:15px}</style>
| | |
| -------------- | :-------------------------- |
| **Author(s):** | {{authors}} |
| **Date:** | {{docdate}} |
Other approaches
There are other ways of course.
You can write a Python macro and call it with the metadata variables:
{{ make_the_table(authors, docdate) }}
The advantage of the macro approach is that you have the full power of Python and can use databases, conditions, file-system properties, etc.
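Such a macro module could be sketched as follows. This assumes the mkdocs-macros plugin, which calls define_env() at build time; the module layout is standard for that plugin, but make_the_table is a hypothetical name chosen here:

```python
# main.py -- module loaded by the mkdocs-macros plugin.
# make_the_table is a hypothetical macro name; the plugin registers
# anything decorated with @env.macro for use as {{ ... }} in pages.

def define_env(env):
    @env.macro
    def make_the_table(authors, docdate):
        # Emit the same borderless author/date table as the raw snippet,
        # so the layout lives in one place instead of every .md file.
        return (
            '<style>table, td, th {border:none!important; font-size:15px}</style>\n'
            '| | |\n'
            '| -------------- | :-------------------------- |\n'
            f'| **Author(s):** | {authors} |\n'
            f'| **Date:** | {docdate} |'
        )
```

Each page then only carries its metadata plus the one-line macro call, and the table layout can be changed centrally later.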
You can also automatically modify the raw markdown without having any tags in the documents, using the on_post_page_macros(env) method described in the advanced macros documentation.
You may want to use metadata, also called front matter, as described in the YAML style meta-data section. Such information is placed at the beginning of your markdown file and can then be used to generate content.
The macros plugin is made for this and is documented at https://mkdocs-macros-plugin.readthedocs.io
I created a Fusion sheet whose data is synced to a dataset. Now I want to use that dataset to create a dictionary in the repository. I am using PySpark in the repository. Later I want to pass that dictionary so that it populates descriptions, as in "Is there a tool available within Foundry that can automatically populate column descriptions? If so, what is it called?".
It would be great if anyone could help me create the dictionary from the dataset using PySpark in the repository.
The following code converts your PySpark dataframe into a list of dictionaries:
fusion_rows = [row.asDict() for row in fusion_df.collect()]
However, in your particular case, you can use the following snippet:
col_descriptions = {row["column_name"]: row["description"] for row in fusion_df.collect()}
my_output.write_dataframe(
    my_input.dataframe(),
    column_descriptions=col_descriptions
)
Assuming your Fusion sheet would look like this:
+------------+------------------+
| column_name| description|
+------------+------------------+
| col_A| description for A|
| col_B| description for B|
+------------+------------------+
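To see the shape of the resulting dictionary without a Spark session, here is a plain-Python sketch; collect() returns Row objects whose asDict() yields exactly the dictionaries simulated below:

```python
# Plain-dict stand-ins for fusion_df.collect(); pyspark's Row.asDict()
# gives the same {"column_name": ..., "description": ...} shape per row.
rows = [
    {"column_name": "col_A", "description": "description for A"},
    {"column_name": "col_B", "description": "description for B"},
]

col_descriptions = {row["column_name"]: row["description"] for row in rows}
print(col_descriptions)
# {'col_A': 'description for A', 'col_B': 'description for B'}
```

This is the mapping that write_dataframe's column_descriptions parameter expects: column name to description string.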
Background
I am trying to set up my trade analysis environment. I am running some rule-based strategies on futures with different brokers and trying to aggregate trades from those brokers in one place. I am using the blotter package as my main tool for analysis.
The idea is to use blotter and PerformanceAnalytics to analyze the live performance of the various strategies I am running.
Problem at hand
My source of futures EOD data is CSIData. All the EOD OHLC prices for these futures are stored in CSV format in the following directory structure. For each future there is a separate directory, and each contract of the future has one CSV file with its OHLC price series.
|
+---AD
| AD_201203.TXT
| AD_201206.TXT
| AD_201209.TXT
| AD_201212.TXT
| AD_201303.TXT
| AD_201306.TXT
| AD_201309.TXT
| AD_201312.TXT
| AD_201403.TXT
| AD_201406.TXT
| AD_54.TXT
...
+---BO2
| BO2195012.TXT
| BO2201201.TXT
| BO2201203.TXT
| BO2201205.TXT
| BO2201207.TXT
| BO2201208.TXT
| BO2201209.TXT
| BO2201210.TXT
| BO2201212.TXT
| BO2201301.TXT
...
I have managed to define root contracts in FinancialInstrument for all the futures I will be using (e.g. AD, BO2 above), with CSIData symbols as primary identifiers.
I am now struggling with how to define all the actual individual futures contracts (e.g. AD_201203, AD_201206, etc.) and set up their lookup using setSymbolLookup.FI.
Any pointers on how to do that?
To set up individual future contracts, I looked into ?future_series and ?build_series_symbols; however, the suffixes they support seem to be only in the futures month-code format. So I have a feeling I am left with setting up each individual contract manually, e.g.
build_series_symbols(data.frame(primary_id=c('ES','NQ'), month_cycle=c('H,M,U,Z'), yearlist = c(10,11)))
[1] "ESH0" "ESM0" "ESU0" "ESZ0" "NQH0" "NQM0" "NQU0" "NQZ0" "ESH1" "ESM1" "ESU1" "ESZ1" "NQH1" "NQM1" "NQU1" "NQZ1"
I have no clue where to start digging for the second part of my question, i.e. setting up price lookup for these futures from CSI.
PS: If this is not the right forum for this kind of question, I am happy to have it moved to the right section or even to ask on a totally different forum altogether.
PPS: Can someone with higher reputation tag this question with FinancialInstrument and CSIdata? Thanks!
The first part just works.
R> currency("USD")
[1] "USD"
R> future("AD", "USD", 100000)
[1] "AD"
Warning message:
In future("AD", "USD", 1e+05) :
underlying_id should only be NULL for cash-settled futures
R> future_series("AD_201206", expires="2012-06-18")
[1] "AD_201206"
R> getInstrument("AD_201206")
primary_id :"AD_201206"
currency :"USD"
multiplier :1e+05
tick_size : NULL
identifiers: list()
type :"future_series" "future"
root_id :"AD"
suffix_id :"201206"
expires :"2012-06-18"
Regarding the second part, I've never used setSymbolLookup.FI. I'd either use setSymbolLookup directly, or set a src instrument attribute if I were going to go that route.
However, I'd probably make a getSymbols method, maybe getSymbols.mycsv, that knows how to find your data if you give it a dir argument. Then, I'd just setDefaults on your getSymbols method (assuming that's how most of your data are stored).
I save data with saveSymbols.days(), and use getSymbols.FI daily. I think it wouldn't be much effort to tweak getSymbols.FI to read csv files instead of RData files. So, I suggest looking at that code.
Then, you can just
setDefaults("getSymbols", src="mycsv")
setDefaults("getSymbols.mycsv", dir="path/to/dir")
Or, if you prefer
setSymbolLookup(AD_201206=list(src="mycsv", dir="/path/to/dir"))
or (essentially the same thing)
instrument_attr("AD_201206", "src", list(src="mycsv", dir="/path/to/dir"))