How to tell what files turborepo is using to compute its hash

I am using turborepo with remote caching, and sometimes I get a cache miss that I am not expecting.
Can I somehow get turborepo to list the files it is using as inputs to its hash function, so I can get more insight into what it thinks has changed?
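For what it's worth, recent Turborepo releases can show this directly: `turbo run <task> --dry-run=json` prints each task's hash, and newer versions also list the resolved input files that went into it (`--summarize` writes a similar report to `.turbo/runs`). A minimal sketch of pulling that out with Node follows; the exact JSON shape varies by turbo release, hence the defensive optional chaining:

```js
// Minimal sketch (Node): ask turbo for a JSON dry run and print, per task,
// the hash and the files that went into it. Field names reflect recent
// turbo releases and may differ in yours — hence the optional chaining.
const { execSync } = require("child_process");

const out = execSync("npx turbo run build --dry-run=json", {
  encoding: "utf8",
  maxBuffer: 64 * 1024 * 1024, // the inputs listing can be large
});

for (const task of JSON.parse(out).tasks ?? []) {
  console.log(task.taskId ?? task.task, "->", task.hash);
  // Newer releases include the hashed input files per task:
  for (const file of Object.keys(task.inputs ?? {})) {
    console.log("  ", file);
  }
}
```

Diffing two of these dumps (one from a hit, one from an unexpected miss) is usually enough to spot which file, environment variable, or dependency hash moved.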

Related

Stop rsync from backing up if too many files are being changed

Does anyone know of a way to tell rsync not to perform a backup if it detects that more than X amount of data will be changed? For example, if a run detects that 25% of the data in the destination directory will be changed, can I have it automatically abort so I can evaluate the changes and decide whether to allow them?

I back up my machine every night, but what I'm worried about is my machine getting hit by ransomware or some other issue that destroys or loses a ton of my data; I really don't want that to propagate to my backup. I used to use a tool called Syncovery that had this feature, but I don't think the tool is supported very well, and I get a lot of permission and read errors with it that I don't see with any other tools. GoodSync also has this feature, but even though it runs on the Mac, it doesn't support special characters in file names and replaces them with an underscore when the file is copied. I think that will cause problems when I try to restore a file that's referenced with the special character but can't be found because the name now has a damn underscore.

I like using rsync, and I will eventually retrofit my script to use msrsync, but I can't trust it if I can't get this protection in place.
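rsync itself only has a narrow built-in guard for this (`--max-delete=NUM` refuses to delete more than NUM files, but it doesn't cover modified files). A common workaround is to wrap rsync in a script that does a dry run first and bails if too much would change. A rough sketch in Node, under the assumption that counting itemized dry-run lines is an acceptable proxy for "amount of data changed" (the paths and threshold are placeholders for your own values):

```js
// Sketch (Node): do an rsync dry run first, count the changes it reports,
// and only run the real sync when the count is under a threshold.
const { execFileSync } = require("child_process");

const SRC = "/Users/me/";          // hypothetical source
const DST = "/Volumes/backup/me/"; // hypothetical destination
const MAX_CHANGED = 500;

// -a archive, -n dry run, -i itemize one line per change
const dryRun = execFileSync(
  "rsync",
  ["-ani", "--delete", SRC, DST],
  { encoding: "utf8", maxBuffer: 64 * 1024 * 1024 }
);
const changed = dryRun.split("\n").filter(Boolean).length;

if (changed > MAX_CHANGED) {
  console.error(`Aborting: dry run reports ${changed} changes (limit ${MAX_CHANGED}).`);
  process.exit(1);
}
execFileSync("rsync", ["-a", "--delete", SRC, DST], { stdio: "inherit" });
```

Counting lines is crude (directories and attribute-only changes count too), so you may prefer to compare the count against a percentage of the total number of files rather than a fixed number.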

How to access webpack generated filenames in a plugin?

So, here is my situation. I have a JavaScript application where I'm appending the hashes to the filenames, as is the standard for Webpack output. This way the content can be safely cached by the browser, with the fresh load controlled by the changing file hash.
My problem is I have a situation where I need other applications to access mine, and they won't be able to be updated every time the hash changes. So I need a request like this:
https://my-domain.com/assets/js/app.js
to be redirected to
https://my-domain.com/assets/js/app.ab12cd34.js
My application currently uses nginx to serve the pages, but the nginx configuration is static; I don't know how to configure it to dynamically identify the hashed file name and return it.
The app is being deployed to a Pivotal CloudFoundry environment. PCF supports evaluating dynamic Ruby code in an nginx.conf file, so that seemed like an easy way around this. Unfortunately, my company requires that the nginx.conf go through a special parser to enforce security headers. This parser only knows nginx syntax, and mangles any Ruby code there.
So, that leaves me with Webpack. I started investigating ways for Webpack to modify files during the build process, and I discovered the transform() function in the copy-webpack-plugin. It has the ability to modify the files exactly how I need. What is still a challenge, though, is getting the hash filename.
So, I'm hoping there's some way to gain access to what the hash filename will be in this plugin, so that I can inject it into the nginx.conf.
Alternatively, if someone knows another way to get around my core problem, I'm all ears.
You can use the webpack-manifest-plugin to create a manifest file that maps each original filename to its generated (hashed) chunk/bundle name.
This manifest file can then be consumed by any piece of software that needs it.
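For concreteness, a minimal sketch of what that looks like, assuming the v3+ API of webpack-manifest-plugin (the output paths here just mirror the question's /assets/js layout):

```js
// webpack.config.js — minimal sketch using webpack-manifest-plugin (v3+ API)
const { WebpackManifestPlugin } = require("webpack-manifest-plugin");

module.exports = {
  entry: { app: "./src/index.js" },
  output: {
    filename: "assets/js/[name].[contenthash:8].js",
    publicPath: "/",
  },
  plugins: [
    // Emits manifest.json mapping "app.js" -> "/assets/js/app.ab12cd34.js"
    new WebpackManifestPlugin({ fileName: "manifest.json" }),
  ],
};
```

A deploy step could then read manifest.json and template the real filename into a plain nginx rewrite before the config goes through your security parser, so the parser only ever sees ordinary nginx syntax.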

Is it possible to fetch and use a file from cloud storage when deploying a cloud function

I have a firebase function that makes use of a SQLite database (read-only) which is currently uploaded along with the function.
The problem is that the db file is quite large and gets uploaded every time the function is changed. Is there a way to fetch this file from cloud storage during the installation process (during firebase deploy) - without hard-coding the URL in the source files?
What you're trying to do is problematic because your code running in Cloud Functions may actually be running on any number of server instances, determined by the load on your project. As such, downloading a file once at deployment time isn't going to naturally affect all the instances that may be created or destroyed at any given moment.
It's far better to keep doing what you're doing, and include your extra data during deployment. When a new instance is spun up to handle events for your function, the file will be immediately ready to help service requests.
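In other words, ship the database with the function source and open it relative to the function's directory. A minimal sketch, assuming better-sqlite3 is among the function's dependencies and using hypothetical file/table names:

```js
// Sketch: open a read-only SQLite file deployed alongside the function source.
const path = require("path");
const Database = require("better-sqlite3");
const functions = require("firebase-functions");

// Opened once per instance, at cold start; reused across invocations.
const db = new Database(path.join(__dirname, "data.db"), { readonly: true });

exports.lookup = functions.https.onRequest((req, res) => {
  // "kv" table and its columns are placeholders for your own schema.
  const row = db.prepare("SELECT value FROM kv WHERE key = ?").get(req.query.key);
  res.json(row ?? {});
});
```

Since the file is read-only, every instance gets its own identical copy at spin-up with no coordination needed.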

Best-practice place to put a URL that configures my app?

We have a Qt app that, when it starts, tries to connect to a servlet to get the config parameters it needs to keep running.
The URL may change frequently because we have to test the application in several environments. Right now (as a temporary solution) the URL is a constant in the source code, but that is a little bit ugly.
Where is the best place to maintain this URL, so that we do not need to change the source code every time we want to change the target environment?
In a database table maybe (my application uses a SQLite DB), in a settings file, or in some other way?
Thank you for your replies.
You have a number of options:
1. Hard-coded (as you have already)
2. Run-time user input
3. Command-line arguments
4. QSettings
5. Read from a bespoke file as text
I would think option 3 (command-line arguments) would be the simplest to implement without being intrusive, but it does depend on what kind of application you have.
I would keep the list of URLs in a document, e.g. an XML file, stored in a central, well-known place (e.g. a known web server), and hardcode only the URL of that well-known place in the app.
The list can then be edited externally without recompiling your app.
At startup the app would download and parse the list, pointing itself at the right servlet based on an environment specified as a command-line parameter; a possible shape for that list is sketched below.
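Something like this, say — every name and URL here is hypothetical:

```xml
<!-- environments.xml, hosted at the single well-known URL the app hardcodes -->
<environments>
  <environment name="dev"  configUrl="https://dev.example.com/myapp/config"/>
  <environment name="test" configUrl="https://test.example.com/myapp/config"/>
  <environment name="prod" configUrl="https://prod.example.com/myapp/config"/>
</environments>
```

Launching the app as, e.g., `myapp --env=test` would then select the matching entry at startup.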

Need help in choosing the right tool

I have a client who has set up a testing environment in some AI language. It basically runs some predefined test cases and stores the results as log files (comma-separated txt files). My job is to identify and suggest a reporting system, and I have these options in mind:
1. Import the logs into MSSQL and use its built-in reporting (SSRS), or
2. Import the logs into MySQL and use PHP to develop custom reporting.
I am thinking that option 2 is better. The reason is that the logs are inconsistent and contain unexpected wild characters that databases normally don't accept, so I could write some PHP scripts to clean them up before loading them into the database.
If this were your problem, what would you suggest?
It depends how fancy you need to be. If the data is in CSV files, you could go as simple as loading it into Excel (or their favorite spreadsheet tool) and using spreadsheet macros to analyze it.
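Either way, the cleanup pass the asker describes is small. Here is a sketch of the idea in Node (it ports directly to PHP); the file names are hypothetical and the character whitelist is an assumption you would tune to your logs:

```js
// Sketch: strip characters the DB import chokes on before loading a
// comma-separated log file into the database.
const fs = require("fs");

const raw = fs.readFileSync("testrun.log", "latin1"); // hypothetical file name
const clean = raw
  .split(/\r?\n/)
  .map((line) =>
    // keep printable ASCII and tabs; drop control and other "wild" bytes
    line.replace(/[^\x20-\x7E\t]/g, "").trim()
  )
  .filter((line) => line.length > 0)
  .join("\n");

fs.writeFileSync("testrun.clean.csv", clean, "utf8");
```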
