Set Hydra environment with uvicorn/fastapi - fastapi

I have a FastAPI application that is going to production soon, however I am facing some problems with Hydra integration.
First, I could not use the @hydra.main() decorator on FastAPI endpoints. I worked around this by using the Hydra Compose API. Here is my implementation:
from hydra import compose, initialize
from hydra.core.global_hydra import GlobalHydra

def load_config(config_path, config_name):
    """Loads the specified Hydra config.

    This function is used because FastAPI and Hydra's
    @hydra.main() decorator do not work together.
    """
    GlobalHydra.instance().clear()
    initialize(version_base=None, config_path=config_path)
    cfg = compose(config_name=config_name)
    return cfg
This was fine as long as I only used one config, but now I need to separate the test and production configurations.
My configuration directory structure looks like this:
├── config
│   ├── config.yaml
│   ├── env
│   │   ├── prod.yaml
│   │   └── test.yaml
│   ├── prod
│   │   └── prod1.yaml
│   └── test
│       └── test1.yaml
The directory structure and file contents are based on this post:
Python Hydra configuration for different environments. The default environment is set to test.
My application runs with this command:
uvicorn my.api:app and this successfully loads the test config.
However, when I try to run it with:
uvicorn env=prod my.api:app the test config still loads instead of the prod configuration.
How can I achieve this?
I have found this related GitHub issue, but the solution was not clear from it: https://github.com/facebookresearch/hydra/issues/204
To be clear, I am willing to make fundamental changes if the problem lies in not using @hydra.main().

The Compose API does not integrate with the command line. You need to pass the overrides yourself at the call site.
Check the example in the Compose API page again, in particular the overrides parameter.
from hydra import compose, initialize
from omegaconf import OmegaConf

if __name__ == "__main__":
    # context initialization
    with initialize(version_base=None, config_path="conf", job_name="test_app"):
        cfg = compose(config_name="config", overrides=["db=mysql", "db.user=me"])
        print(OmegaConf.to_yaml(cfg))
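
Since uvicorn owns the command line here, one way to select the environment is to read it from an environment variable and turn it into a compose override yourself. A minimal sketch, assuming an APP_ENV variable (the variable name and its default are my assumption, not part of Hydra):

import os

from hydra import compose, initialize
from hydra.core.global_hydra import GlobalHydra

def load_config(config_path, config_name):
    """Compose a Hydra config, selecting the env group from APP_ENV."""
    GlobalHydra.instance().clear()
    initialize(version_base=None, config_path=config_path)
    # APP_ENV is a made-up variable; run e.g. `APP_ENV=prod uvicorn my.api:app`.
    env = os.environ.get("APP_ENV", "test")
    return compose(config_name=config_name, overrides=[f"env={env}"])

This keeps the uvicorn command line untouched while still letting you switch between the test and prod config groups per deployment.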

Related

How to exclude a specific folder inside pages from production build? [duplicate]

There are SSR-related problems with several pages in a Next.js project that result in errors on npm run build and prevent the project from being built:
pages/
  foo/
    bar/
      [id].jsx
      index.jsx
    index.jsx
  ...
For example, bar:
export function getStaticProps() {
  return someApiCallThatCurrentlyFails()
  ...
}

export default function Bar() {...}
As a quick fix, it may be convenient to just not build the bar/*.* pages and make their routes unavailable.
Can pages be ignored on Next.js build without physically changing or removing page component files in the project?
You can configure pageExtensions in next.config.js:
// next.config.js
module.exports = {
  pageExtensions: ["page.js"],
}
After configuring this, only files ending in *.page.js will be treated as pages, as in the directory structure below.
pages/
├── user
│   └── setting
│       └── index.js
├── _app.page.js
├── _document.page.js
├── list.page.js
└── theme.ts
Custom file-ignore patterns are not supported yet. You can visit the PR created here, and the solution given here. This is the most satisfactory solution so far.
@Mathilda Here from the Next.js docs: it's necessary for all pages, including _app, _document, etc.
https://nextjs.org/docs/api-reference/next.config.js/custom-page-extensions
Changing these values affects all Next.js pages, including the following:
- middleware.js
- pages/_document.js
- pages/_app.js
- pages/api/
For example, if you reconfigure .ts page extensions to .page.ts, you would need to rename pages like _app.page.ts.
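
For instance, a TypeScript project adopting this convention could use an extension list like the following (a sketch; the exact extensions to include are up to your project):

// next.config.js
module.exports = {
  pageExtensions: ["page.tsx", "page.ts", "page.jsx", "page.js"],
}

With this in place, files such as _app.page.tsx and list.page.ts are built as pages, while helpers like theme.ts sitting next to them are ignored.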

Is organizing config files within a config group in a directory structure a supported feature in Hydra?

Let's assume a config group foo and config files organized in the following directory structure:
conf
└── foo
    ├── bar
    │   ├── a.yaml
    │   ├── b.yaml
    │   └── c.yaml
    └── baz
        ├── d.yaml
        ├── e.yaml
        └── f.yaml
Each of the yaml files sets the package to foo using # @package foo. When running the corresponding application, I can simply override foo by specifying something like foo=bar/a or foo=baz/f. Thereby, the sub-directories bar and baz indicate a certain category within a larger set of possible configurations.
While this works fine for standard use in Hydra, some more advanced features of Hydra appear to be incompatible with this structure. For instance, I would like to use glob in conjunction with the directory structure, like foo=glob(bar/*), to sweep over all configs of a certain category. However, this does not appear to work, as glob does not find any configs in this example. Also, if I assign an invalid config to foo and Hydra lists the available options, the list is empty.
This makes me wonder whether structuring within a config group is a generally supported feature in Hydra and just some corner cases are not covered yet, or whether I am using Hydra wrong and directories should not be used for organizing configs in a group.
This is not recommended, but not explicitly prohibited.
There are scenarios where this can help, but as you have discovered it does not play well with some other features. A config group can contain other config groups and configs.
Hydra 1.1 is adding support for recursive defaults lists, which will make this kind of scenario more common.
See The Defaults List documentation page:
├── server
│   ├── db
│   │   ├── mysql.yaml
│   │   └── sqlite.yaml
│   └── apache.yaml
└── config.yaml
In the scenario from the example there, the entities under server/db are different from the entities under server, so such globbing would not make sense.
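
For reference, in that documentation example the nesting is expressed through each config's defaults list rather than through flat subdirectory names. A minimal sketch of what the two files could contain (the concrete values are illustrative, not from the original answer):

# config.yaml
defaults:
  - server/apache

# server/apache.yaml
defaults:
  - db: mysql

name: apache

Here server/db is a nested config group in its own right, selected on the command line with server/db=sqlite rather than with a glob over server.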

Multiple packages in a Firebase Cloud Functions project

Is there a way to have a Firebase/Google Cloud Function with this kind of architecture using the CLI command (firebase deploy --only functions)?
Expected:
.
└── functions/
    ├── function_using_axios/
    │   ├── node_modules/
    │   ├── package.json
    │   └── index.js
    └── function_using_moment/
        ├── node_modules/
        ├── package.json
        └── index.js
Currently, my architecture looks like this:
.
└── functions/
    ├── node_modules/
    ├── package.json
    ├── index.js
    ├── function_using_axios.js
    └── function_using_moment.js
The fact is, I have a lot of unnecessary package dependencies for some functions, and they increase cold start time.
I know this is possible with the web UI.
[Screenshots omitted: the web UI shows one package per function, whereas my current setup shows one package shared by all functions.]
Any idea?
Thanks.
When deploying through Firebase there can only be a single index.js file, although gcloud may be different in this respect.
To ensure you only load the dependencies that each function needs, move the require for each dependency into the function that needs it:
exports.usageStats = functions.https.onRequest((request, response) => {
  // Require the dependency inside the handler so it is only loaded
  // when this particular function is invoked.
  const module = require('your-dependency');
  // ...
});
Also see:
the Firebase documentation on organizing functions, which shows a way to spread the functions over multiple files (although you'll still need to import/export them all in index.js).
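
A minimal sketch of that multi-file layout (the file and function names are placeholders, not from the question):

// functions/usage-stats.js
const functions = require('firebase-functions');

exports.usageStats = functions.https.onRequest((request, response) => {
  // Heavy dependency loaded lazily, only when this function runs.
  const axios = require('axios');
  // ...
  response.send('ok');
});

// functions/index.js -- still the single deployable entry point
exports.usageStats = require('./usage-stats').usageStats;

Everything still deploys from the one functions/ package, but the per-function requires keep unrelated dependencies out of each function's cold start.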

What is the correct way to setup multiple logically organized sub folders in a terraform repo?

Currently I am working on an infrastructure in Azure that comprises the following:
resource group
application gateway
app service
etc
Everything I have is in one single main.tf file, which I know was a mistake, but I wanted to start from there. I am currently trying to move each section into its own subfolder in my repo, which would look something like this:
terraform-repo/
├── applicationGateway/
│   ├── main.tf
│   └── vars.tf
├── appService/
│   ├── main.tf
│   └── vars.tf
├── main.tf
└── vars.tfvars
However, when I create this structure while moving over from the single-file setup, I get issues with my remote state: it wants to delete anything that isn't part of the subfolder currently being worked on.
For example, if I run terraform apply applicationGateway I get the following:
# azurerm_virtual_network.prd_vn will be destroyed
Plan: 0 to add, 2 to change, 9 to destroy.
What is the correct way to set up multiple logically organized subfolders in a Terraform repo? Or do I have to destroy my current environment to get it set up like this?
You are seeing this issue because Terraform ignores subfolders, so those resources are no longer included at all. You would need to configure the subfolders as Terraform modules, and then include those modules in your root main.tf.
Update 06/2022, complete example:
Let's say you have the following directories:
./your-folder
|__ main.tf
|__ variables.tf
|__ output.tf
|__ /modules
    |__ /module-a
        |__ main.tf
        |__ variables.tf
        |__ output.tf
Module definition in ./your-folder/modules/module-a/main.tf:
resource "[resource_type]" "my-module-name" {
  ...
}
Load module in your root main.tf file, so in ./your-folder/main.tf:
module "my-module-instance-name" {
  source = "./modules/module-a"
  other-input-variable = "..."
}
Then tell Terraform to load this new module running the following command in your root directory (so ./your-folder):
terraform get
Then test your setup with terraform plan.
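
If the plan then wants to destroy resources that already exist and recreate them inside the modules, you can move them in the remote state instead of destroying them. A sketch using the virtual network from the question (the addresses are examples; substitute your own resource and module names):

terraform state mv azurerm_virtual_network.prd_vn module.my-module-instance-name.azurerm_virtual_network.prd_vn

Repeat this for each resource that moved into a module; afterwards terraform plan should no longer show destroys for them.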
To use root-level resources in child modules, inject them into the child module as input variables.
To use child-level resources in the root module, export them from the child module with an output block.
Hope this helps. :)
One option for maintaining a DRY environment this way is using Terragrunt.
Terragrunt is a wrapper for Terraform that enables organization and reusable components in a slightly different way than Terraform handles environments.

One shared settings section for multiple tests and suites?

I would like to make a Robot Framework project with multiple (levels of) test suites and test cases.
Is it possible to define a list of settings, specifically imports of libraries, resources and global variables (.py files), only once in one place?
As far as I'm aware this is not possible. You have to import libraries, resources and variable files explicitly in each .robot test case file that uses them. The init file in a directory can only be used for other settings, not imports.
But I would like to keep things DRY and import resources that I use everywhere only once and in one place.
Is this not possible, or am I missing something?
Note: I'm still a RF newbie.
Thanks!
It's easily doable, and quite a common pattern - have a resource file that holds all the common keywords, variables, imports of other robot or py files, etc., and import it in every test suite.
Say your project's directory structure is like this:
root_folder/
├── resources/
│   ├── common_resource.robot
│   ├── helpers.robot
│   └── specific_page.robot
└── suites/
    ├── login_page.robot
    └── specific_page.robot
The file resources/common_resource.robot has all those common elements - say, imports helpers.robot as a resource.
Every suite file imports the common file; e.g. both login_page.robot and specific_page.robot start off with (path-relative) imports:
*** Settings ***
# other imports, documentation, etc
Resource    ../resources/common_resource.robot
On top of that, each suite imports any other specific keyword files - like resources/specific_page.robot.
It's a convention that, once established ("every suite must import common_resource.robot"), is easy to follow.
If there is a new keyword, variable or library that has to be used in all - or most - suites, just add it to the common file, and it will be instantly accessible.
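
For illustration, common_resource.robot could then look something like this (the library, file, and keyword names are placeholders, not from the original answer):

*** Settings ***
Library      SeleniumLibrary
Resource     helpers.robot
Variables    common_vars.py

*** Variables ***
${DEFAULT TIMEOUT}    10s

*** Keywords ***
Open App And Log In
    [Documentation]    Shared keyword every importing suite can call.
    Log    Common setup steps go here.

Any suite that imports this single file gets the library, the helper resources, the variables, and the shared keywords in one line.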
