Avoid creating the backend S3 bucket on terraspace plan - terraform-provider-aws

Sometimes I need to check what resources will be created within an environment without automatically creating the state S3 bucket. I use terraspace plan, but it automatically creates an S3 bucket named after the TS_ENV variable, as set in backend.tf.
Example:
TS_ENV=prod terraspace plan STACK creates a new S3 bucket for the prod environment.
Is there an option to disable that behaviour and only create the bucket on terraspace up?

You need to create an aws.rb file in the config/plugins/ folder.
Then you can put the following config in it to override the default behaviour:
TerraspacePluginAws.configure do |config|
  config.auto_create = false # set to false to completely disable auto creation
end
Source: https://terraspace.cloud/docs/plugins/aws/

Related

How to use googledrive::drive_upload() without changing the Google ID of the file?

When I upload a pptx file (or any file) to Drive, I'd like to maintain the Google ID for the file, but every time I execute this function a new Google ID is created, even with overwrite = TRUE. This breaks the hyperlink that stakeholders were using to find the file in Drive. Is there a way to maintain the Google ID when overwriting during upload?
googledrive::drive_upload(
  my_pres,
  name = "My Presentation",
  type = 'presentation', # converts pptx to Google Slides
  overwrite = TRUE
)
According to the documentation, googledrive::drive_upload() wraps the Files.create method of the Drive API, which is the wrong function for updating a file. With overwrite = TRUE, the argument is documented as:
"[...] Check for a pre-existing file at the filepath. If there is zero or one, move a pre-existing file to the trash[...]"
You should use googledrive::drive_update(), which wraps the Files.update method of the Drive API. The R docs describe it as:
"[...] Update an existing Drive file id with new content ("media" in Drive API-speak), new metadata, or both. To create a new file or update existing, depending on whether the Drive file already exists, see drive_put(). [...]"

Set a folder path dynamically according to the user

I have a script in which I set the path to the datasets I work on, but now the script will start to be run by other people on the team. How do I make the folder path dynamic according to the user running the script?
setwd("C:/Users/Jonas/Database")
I'm even creating a variable to hold the machine's user name, but I don't know how to add it to setwd:
u <- Sys.info()["user"]
I tried this, but it was unsuccessful:
setwd("C:/Users/u/Database")
Use paste0() to build the path from the user name:
u <- Sys.info()["user"]
setwd(paste0("C:/Users/", u, "/Database"))

SQLAlchemy SQLite URL relative to home or an environment variable

A relative SQLAlchemy path to an SQLite database can be written as:
sqlite:///folder/db_file.db
And an absolute one as:
sqlite:////home/user/folder/db_file.db
Is it possible to write a path relative to home? Like this:
sqlite:///~/folder/db_file.db
Or even better, can the path contain environment variables?
sqlite:////${MY_FOLDER}/db_file.db
This is in the context of an alembic.ini file. So if the previous objectives are not possible directly, could I cheat using variable substitution?
[alembic]
script_location = db_versions
sqlalchemy.url = sqlite:///%(MY_FOLDER)s.db
...
I got around this issue by modifying the values in the config object just after env.py imports it:
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# import my custom configuration
from my_app import MY_DB_URI
# overwrite the desired value
config.set_main_option("sqlalchemy.url", MY_DB_URI)
Now config.get_main_option("sqlalchemy.url") returns the MY_DB_URI you wanted.
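Building on that, here is a minimal env.py sketch for the original ask (a path relative to home, or one taken from an environment variable). The folder name, the file name, and the MY_FOLDER variable are illustrative assumptions, not part of Alembic's API:
# Hedged sketch for env.py: build the SQLite URL from the home directory
# or from an environment variable. MY_FOLDER and the paths are placeholders.
import os
from alembic import context

config = context.config

db_path = os.path.expanduser("~/folder/db_file.db")              # relative to home
# db_path = os.path.join(os.environ["MY_FOLDER"], "db_file.db")  # or from an env var
config.set_main_option("sqlalchemy.url", "sqlite:///" + os.path.abspath(db_path))
On Unix-like systems os.path.abspath() returns a path starting with a slash, so the resulting URL ends up with the four slashes an absolute SQLite path needs.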
As others have pointed out, one key is 3 slashes for a relative path, 4 for an absolute one.
But it took more than just that for me...
I had trouble with a plain relative string; I had to do this:
import os

db_dir = "../../database/db.sqlite"
print(f"os.path.abspath(db_dir): {os.path.abspath(db_dir)}")
SQLALCHEMY_DATABASE_URI = "sqlite:///" + os.path.abspath(db_dir)  # works
# SQLALCHEMY_DATABASE_URI = "sqlite:///" + db_dir  # fails
From the alembic documentation (emphasis mine):
sqlalchemy.url - A URL to connect to the database via SQLAlchemy. This configuration value is only used if the env.py file calls upon them; in the “generic” template, the call to config.get_main_option("sqlalchemy.url") in the run_migrations_offline() function and the call to engine_from_config(prefix="sqlalchemy.") in the run_migrations_online() function are where this key is referenced. If the SQLAlchemy URL should come from some other source, such as from environment variables or a global registry, or if the migration environment makes use of multiple database URLs, the developer is encouraged to alter the env.py file to use whatever methods are appropriate in order to acquire the database URL or URLs.
So for this case, the SQLAlchemy URL format can be circumvented and the URL generated by Python itself.

Create a folder in Plone and set its UID

I have a Plone project which I need to fork; sadly, the UID of the temp folder (for Archetypes objects) is used in the code (as a module-level variable, at least, not as strings scattered all over the source tree).
When starting with a fresh ZODB, can I create the temp folder and set its UID? Or should I simply change that constant in the new development branch?
You can set the UID of an AT object with:
obj._setUID(uid)
The _setUID method is defined in the Products.Archetypes.Referenceable module.
For more information you can also check the plone.app.transmogrifier uidupdater section.
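A minimal sketch of the fresh-ZODB option, assuming portal is the Plone site root and KNOWN_UID holds the UID your code expects (both names, and the folder id temp, are placeholders):
# Hedged sketch: recreate the folder on a fresh ZODB and pin its UID.
new_id = portal.invokeFactory('Folder', id='temp')  # create the folder
folder = portal[new_id]
folder._setUID(KNOWN_UID)   # _setUID comes from Products.Archetypes.Referenceable
folder.reindexObject()      # refresh catalog entries so the pinned UID is found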

Drupal 7 deleting node does not delete all associated files

One file gets uploaded when the node is created via standard Drupal.
Later, 2 files are added to the node via:
file_save($file);  // $file is a stdClass file object
file_usage_add($file, 'module', 'node', $node_id);
I end up with 3 entries in file_managed and file_usage.
Problem: when I delete the node via standard Drupal, the file that was added during the initial node creation gets removed, but not the 2 that were added later. These files remain in both tables, and physically on the disk.
Is there some flag that is being set to keep the files even if the node is deleted? If so, where is this flag, and how do I set it correctly (to be removed along with the node)?
The answer is in the file_delete() function; see this comment:
// If any module still has a usage entry in the file_usage table, the file
// will not be deleted
As your module has declared an interest in the file by using file_usage_add(), it will not be deleted unless your module explicitly says it's OK to do so.
You can either remove the call to file_usage_add() or implement hook_file_delete() and use file_usage_delete() to ensure the file can be deleted:
function mymodule_file_delete($file) {
  file_usage_delete($file, 'mymodule');
}
You can also force deletion of the file:
file_delete($old_file, TRUE);
But first make sure the file is not used by other nodes, using:
file_usage_list($file);
