Multiple packages on Firebase Cloud Functions project

Is there a way to structure a Firebase/Google Cloud Functions project like the following while still deploying with the CLI command (firebase deploy --only functions)?
Expected:
.
└── functions/
    ├── function_using_axios/
    │   ├── node_modules/
    │   ├── package.json
    │   └── index.js
    └── function_using_moment/
        ├── node_modules/
        ├── package.json
        └── index.js
Currently, my architecture looks like this:
.
└── functions/
    ├── node_modules/
    ├── package.json
    ├── index.js
    ├── function_using_axios.js
    └── function_using_moment.js
The problem is that all functions share one package.json, so each function carries many dependencies it never uses, and that increases cold start time.
I know this is possible with the web UI: there, each function can be listed with its own package. In my current setup, the web UI shows one package shared by all functions.
Any ideas?
Thanks.

When deploying through Firebase there can only be a single index.js file, although gcloud may be different in this respect.
To ensure you only load the dependencies that each function needs, move the require for each dependency into the functions that need it:
exports.usageStats = functions.https.onRequest((request, response) => {
  // Lazy-load the dependency so it is only required when this function runs
  const module = require('your-dependency');
  // ...
});
Also see the Firebase documentation on organizing functions, which shows a way to spread the functions over multiple files (although you'll still need to import/export them all in index.js).
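As a sketch of that multi-file layout (the file name usage-stats.js and the axios dependency are illustrative, not from the question), each function can live in its own file with its dependency required lazily, and index.js just re-exports it:

// functions/usage-stats.js
const functions = require('firebase-functions');

exports.usageStats = functions.https.onRequest((request, response) => {
  // Required inside the handler, so other functions never pay for loading it
  const axios = require('axios');
  // ...
  response.send('ok');
});

// functions/index.js
exports.usageStats = require('./usage-stats').usageStats;

This keeps a single deployment unit (one package.json), but each function instance only loads the modules its own handler actually requires.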

Related

Poetry script: No file/folder found

I am currently building a backend using FastAPI and I am facing some issues running it with Poetry scripts. This is my project structure:
├── backend
│   ├── src
│   │   └── asgi.py
│   ├── Dockerfile
│   ├── poetry.lock
│   └── pyproject.toml
pyproject.toml
[tool.poetry]
name = "backend"
version = "0.1.0"
description = ""
authors = ["Pierre-Alexandre35 <46579114+pamousset75@users.noreply.github.com>"]
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.9"
uvicorn = "^0.17.6"
fastapi = "^0.78.0"
psycopg2 = "^2.9.3"
jwt = "^1.3.1"
python-multipart = "^0.0.5"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
[tool.poetry.scripts]
foo='asgi:__main__'
If I run poetry run python asgi.py, it works perfectly, but if I use the foo script, I get No file/folder found for package backend. These are all the combinations I tried, and I get the same error for every poetry run foo:
foo='asgi:main'
foo='backend.asgi:__main__'
foo='backend.asgi:main'
foo='backend.asgi:.'
Your project structure does not seem to be correct. Assuming backend is the package you are trying to create, use this structure:
.
├── pyproject.toml
├── poetry.lock
├── README.md
└── backend
    ├── src
    │   └── asgi.py
    ├── Dockerfile
    └── __init__.py
Also, in scripts, use the following (assuming you are trying to run main with foo):
[tool.poetry.scripts]
foo='backend.asgi:__main__'
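Note that a Poetry script entry must point to an importable callable, so backend.asgi:__main__ only works if __main__ is literally a function defined in that module. A minimal sketch of such an entry point, assuming asgi.py sits directly inside the backend package and that starting uvicorn from a main() wrapper is acceptable (both are assumptions, not from the question):

# backend/asgi.py (sketch)
import uvicorn
from fastapi import FastAPI

app = FastAPI()

def main():
    # Invoked by `poetry run foo` via the [tool.poetry.scripts] entry
    uvicorn.run(app, host="0.0.0.0", port=8000)

with the matching script entry in pyproject.toml:

[tool.poetry.scripts]
foo = 'backend.asgi:main'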

Is organizing config files within a config group in a directory structure a supported feature in Hydra?

Let's assume a config group foo and config files organized in the following directory structure:
conf
├── foo
│   ├── bar
│   │   ├── a.yaml
│   │   ├── b.yaml
│   │   └── c.yaml
│   └── baz
│       ├── d.yaml
│       ├── e.yaml
│       └── f.yaml
Each of the yaml files sets the package to foo using # @package foo. When running the corresponding application, I can simply override foo by specifying something like foo=bar/a or foo=baz/f. Thereby, the sub-directories bar and baz indicate a certain category within a larger set of possible configurations.
While this works fine for standard use in Hydra, some more advanced features appear to be incompatible with this structure. For instance, I would like to use glob in conjunction with the directory structure, like foo=glob(bar/*), to sweep over all configs of a certain category. However, this does not appear to work: glob does not find any configs in this example. Also, if I assign an invalid config to foo and Hydra lists the available options, the list is empty.
This makes me wonder: is structuring within a config group a generally supported feature in Hydra, with just some corner cases not covered yet, or am I using Hydra wrong, and directories should not be used for organizing configs within a group?
This is not recommended, but not explicitly prohibited.
There are scenarios where this can help, but as you have discovered it does not play well with some other features. A config group contains other config groups/configs.
Hydra 1.1 is adding support for recursive default lists which will make this kind of scenario more common.
See The Defaults List documentation page:
├── server
│   ├── db
│   │   ├── mysql.yaml
│   │   └── sqlite.yaml
│   └── apache.yaml
└── config.yaml
In the scenario from the example there, the entities under server/db are different from the entities under server, so such globbing would not make sense.
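For contrast, a minimal sketch of that documented layout using a recursive defaults list (the file contents below are assumptions following the docs pattern, not part of the answer):

# config.yaml
defaults:
  - server/apache

# server/apache.yaml
defaults:
  - db: mysql

name: apache

Here server/apache is a config whose own defaults list pulls in an option from the nested server/db group — the recursive behavior that Hydra 1.1 adds.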

Manage custom packages and projects

I'm new to Julia.
I'm seeking the best practice of structuring directories, packages, and projects.
Compared with Python, the most annoying parts in Julia are as follows:
The path for, e.g., include seems to depend on the path of the executed file. I'd like to keep a specific reference path so that I can load files easily.
I asked a similar question here, and someone told me that creating packages and loading them with using would make my projects easy to manage.
However, it's really confusing when loading files and modules. For example:
MyProject
├── MyPkg1
│   ├── src
│   │   └── MyPkg1.jl
│   └── test
│       └── runtests.jl
└── MyPkg2
    ├── src
    │   └── MyPkg2.jl
    └── test
        └── runtests.jl

6 directories, 4 files
a) MyPkg1/src/MyPkg1.jl
module MyPkg1

export func_export

function func_export()
    println("hi")
end

end
b) MyPkg1/test/runtests.jl
using MyPkg1
using Test

@testset "MyPkg1.jl" begin
    func_export() # raises error: func_export not defined
end
c) MyPkg2/test/runtests.jl
using MyPkg2
using Test
using MyPkg1 # other Pkg

@testset "MyPkg2.jl" begin
    func_export() # raises error
end
As noted in the code above, errors are raised in b) and c).
So my questions are:
If I did something wrong in the above examples, please explain in detail why the errors occur.
What's the best practice for structuring directories when developing Julia projects?

What is the best way to schedule a batch python job in Firebase?

I have the following directory structure in my Firebase project.
.
├── functions/
│   └── index.js
└── src/
    └── job.py
I want to run job.py every midnight in my Firebase project. I have created a pub/sub scheduled function to see if I can invoke job.py from the scheduler, so I have the following code in index.js.
exports.runJob = functions.pubsub.schedule("10 0 * * *")
    .timeZone('America/New_York')
    .onRun((context) => {
        // job.py should be invoked here
    });
Is there a way to run job.py from index.js so that I can leverage the pub/sub scheduler? Or is there a better way to run job.py, like a cron job?

Only enable eslint in specific files

I really like eslint for es6 projects. Previously I've used it for new projects. Now I want to add it to a legacy project.
Fixing all pre-existing lint issues in one go is too much effort. Can I configure eslint (in .eslintrc.js) to only check files where I've explicitly enabled it with /* eslint-enable */ or similar?
ESLint has no default-disabled state that can be toggled by a file comment. You might be able to use .eslintignore for this purpose, however. You can ignore everything and then gradually whitelist files as you migrate them by using ! to un-ignore individual files. For example:
.
├── .eslintignore
├── .eslintrc.js
├── package.json
├── node_modules
│   └── ...
├── src
│   ├── index.js
│   └── module
│       └── foo.js
└── yarn.lock
Then your .eslintignore could look something like this:
# Start by ignoring everything by default
src/**/*.js
# Enable linting just for some files
!src/module/foo.js
In this case, src/index.js would be ignored, but it would lint src/module/foo.js.
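With that ignore file in place, you can point ESLint at the whole tree and only the un-ignored files are checked; a usage sketch (the npx invocation is an assumption about your setup):

npx eslint .

As you migrate each legacy file, add another ! line for it to grow the linted set.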
