Error with date/time fields in EasyAdmin once deployed on Heroku - symfony

I'm working on a Symfony 6.0.9 website that uses EasyAdmin for the administration panel.
I've got an entity ProfessionalExperience whose properties include some dates. Its EasyAdmin CRUD controller looks like this:
class ProfessionalExperienceCrudController extends AbstractCrudController
{
    public static function getEntityFqcn(): string
    {
        return ProfessionalExperience::class;
    }

    public function configureCrud(Crud $crud): Crud
    {
        ...
    }

    public function configureFields(string $pageName): iterable
    {
        return [
            ...
            DateTimeField::new('start')
                ->setFormat('Y-MM-dd'),
            DateTimeField::new('stop')
                ->setFormat('Y-MM-dd'),
            ...
        ];
    }
}
It works just fine in my dev environment, and the build and deploy on Heroku work fine too.
But when I try to access this part of the administration panel on the deployed website, I get this error in Heroku's logs:
[critical] Uncaught PHP Exception LogicException: "When using date/time fields in EasyAdmin backends, you must install and enable the PHP Intl extension, which is used to format date/time values." at /app/vendor/easycorp/easyadmin-bundle/src/Field/Configurator/DateTimeConfigurator.php line 37
I don't understand, because in my Dockerfile I've got this:
RUN set -eux; \
    apk add --no-cache --virtual .build-deps \
        $PHPIZE_DEPS \
        icu-dev \
        libzip-dev \
        zlib-dev \
    ; \
    \
    docker-php-ext-configure zip; \
    docker-php-ext-install -j$(nproc) \
        intl \
        zip \
    ; \
Thanks for helping me! ;)

You could try declaring the extension in your composer.json.
As the Heroku documentation explains, you can request optional extensions by listing them in your composer.json:
{
    "require": {
        "ext-intl": "*"
    }
}
Don't forget to run composer update and commit your lock file.
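For example, a minimal sequence could look like this (a sketch, assuming the Heroku CLI is installed and intl is the only missing platform requirement):
composer require "ext-intl:*"
git add composer.json composer.lock && git commit -m "require ext-intl"
# after the next deploy, confirm the extension is actually loaded on the dyno:
heroku run "php -m | grep intl"
Note that Heroku's PHP buildpack installs extensions based on composer.json, so extensions enabled only in a Dockerfile may not be present at runtime unless you deploy with the container stack.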

Related

How can I add a user to a protected branch?

I would like to configure my GitLab project so that every maintainer can merge (after review) but nobody can push to master, except a bot (for releases).
I'm using Terraform to configure my GitLab, with something like this:
resource "gitlab_branch_protection" "BranchProtect" {
project = local.project_id
branch = "master"
push_access_level = "no one"
merge_access_level = "maintainer"
}
But we have a "premium" version, and the Terraform provider does not allow adding a specific user (see https://github.com/gitlabhq/terraform-provider-gitlab/issues/165).
So what I'd like to do is make some HTTP requests against the API to add the specific user.
So I'm doing it like this:
get the current protection
delete the current configuration
update the retrieved configuration with what I want
push the new configuration
BTW: I haven't found how to just update the configuration... https://docs.gitlab.com/ee/api/protected_branches.html
TMP_FILE=$(mktemp)

http GET \
  $GITLAB_URL/api/v4/projects/$pid/protected_branches \
  PRIVATE-TOKEN:$GITLAB_TOKEN \
  name=$BRANCH_NAME \
  | \
  jq \
    --arg uid $USER_ID \
    '.[0] | .push_access_levels |= . + [{user_id: ($uid | tonumber)}]' \
  > $TMP_FILE

http DELETE \
  "$GITLAB_URL/api/v4/projects/$pid/protected_branches/$BRANCH_NAME" \
  PRIVATE-TOKEN:$GITLAB_TOKEN

http --verbose POST \
  "$GITLAB_URL/api/v4/projects/$pid/protected_branches" \
  PRIVATE-TOKEN:$GITLAB_TOKEN \
  < $TMP_FILE
But my problem is that the resulting configuration is not what I expect; I get something like this:
"push_access_levels": [
{
"access_level": 40,
"access_level_description": "Maintainers",
"group_id": null,
"user_id": null
}
],
How can I just update the branch protection to add a single user?
OK, as they say: RTFM! The API accepts allowed_to_push[][user_id] directly when creating the protection, but you need to delete the existing rule before adding the new configuration.
http \
  DELETE \
  "$GITLAB_URL/api/v4/projects/$pid/protected_branches/$BRANCH_NAME" \
  PRIVATE-TOKEN:$GITLAB_TOKEN

http \
  POST \
  "$GITLAB_URL/api/v4/projects/$pid/protected_branches" \
  PRIVATE-TOKEN:$GITLAB_TOKEN \
  name==${BRANCH_NAME} \
  push_access_level==0 \
  merge_access_level==40 \
  unprotect_access_level==40 \
  "allowed_to_push[][user_id]==$USER_ID"
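To double-check the result, one option (a sketch, assuming httpie and jq are available as above) is to read the protection back and inspect the push access levels:
http GET \
  "$GITLAB_URL/api/v4/projects/$pid/protected_branches/$BRANCH_NAME" \
  PRIVATE-TOKEN:$GITLAB_TOKEN \
  | jq '.push_access_levels'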

Jenkins - Symfony with environment variables

I've been struggling to set up an automated build using Jenkins with Symfony 3.4.
How do I properly set environment variables in Jenkins so that Symfony can find them?
Here's my pipeline:
node {
    def app

    stage('composer install') {
        sh 'export $(cat env/env_vars | xargs)'
        sh 'composer install --optimize-autoloader'
    }

    stage('yarn install') {
        sh 'yarn install'
    }

    stage('build assets') {
        sh 'yarn encore production'
    }

    stage('Clone repository') {
        // clone
    }

    stage('Build image') {
        // build here
    }

    stage('Push image') {
        // push here
    }
}
Then after I run my build, I always get this message:
....
Creating the "app/config/parameters.yml" file
Some parameters are missing. Please provide them.
database_host ('%env(DATABASE_HOST)%'): Script Incenteev\ParameterHandler\ScriptHandler::buildParameters handling the symfony-scripts event terminated with an exception
[Symfony\Component\Console\Exception\RuntimeException]
Aborted
....
I've already tried some Jenkins plugins like EnvInjector and similar, but Symfony still can't find my environment variables.
You can probably solve this like this:
stage('composer install') {
    sh 'export $(cat env/env_vars | xargs) && composer install --optimize-autoloader'
}
Each sh step in a scripted pipeline runs in its own shell, so variables exported in one sh call are gone by the next one; chaining the export and composer install in a single sh call keeps the environment variables available in the same shell session.
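Alternatively, if the values are known when the pipeline runs, a withEnv block keeps them visible to every sh step inside it (a sketch; the variable names and values below are just placeholders, not taken from your env/env_vars file):
stage('composer install') {
    // hypothetical values - replace with the variables from env/env_vars
    withEnv(['DATABASE_HOST=127.0.0.1', 'DATABASE_PORT=3306']) {
        sh 'composer install --optimize-autoloader'
    }
}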

Google Closure Compiler is importing my extern functions

I have created an extern in a JavaScript file and specified it as part of the Google Closure Compiler (GCC) command line options. I am compiling with advanced mode. GCC is taking the function in my extern and placing it in the compiled code. I have no idea why it would do this; GCC is supposed to recognize that the extern function is in a separate file. When I export the object, GCC renames the object and leaves the object's function names alone, BUT it creates a copy of the entire extern function in the compiled code.
I have tried many variations (too numerous here to list) to see how to prevent GCC from doing this but nothing has worked.
My extern:
var MyCustomResizer = {
    "onResize": function (a, b) {
    },
    "detach": function () {
    }
}
I exported the object as follows:
window["MyCustomResizer"] = MyCustomResizer;
My app using the "detach" function:
MyCustomResizer.detach();
My compiler settings:
java -jar closure-compiler/compiler.jar \
--compilation_level ADVANCED_OPTIMIZATIONS \
--externs scripts/externs/resizer-extern.js \
--js_output_file scripts/release/myapp.js \
--warning_level VERBOSE \
--language_out ECMASCRIPT5 \
--language_in=ECMASCRIPT_2017 \
--js scripts/base.js
And the generated compiled output contains this:
ha.detach();
...
var ha = {
onResize: function () {
}, detach: function () {
}
};
It turns out that when you specify extern files, you MUST use the --externs option in front of every extern file. I only had it on the first one:
Incorrect:
java -jar closure-compiler/compiler.jar \
--compilation_level ADVANCED_OPTIMIZATIONS \
--externs scripts/externs/jQuery/jquery-1.9-externs.js \
scripts/externs/third-party.js \
--js_output_file scripts/release/servetus-min.js \
Correct:
java -jar closure-compiler/compiler.jar \
--compilation_level ADVANCED_OPTIMIZATIONS \
--externs scripts/externs/jQuery/jquery-1.9-externs.js \
--externs scripts/externs/third-party.js \
--js_output_file scripts/release/servetus-min.js \
I find it very strange that the compiler just silently ignores the missing --externs and copies that file's functions into the compiled code (presumably it treats the flag-less file as an ordinary source file). This should not be allowed, or at least a warning should be issued. This took an entire day to track down.
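As an extra safeguard, an externs file can also declare itself as externs with the @externs annotation in its @fileoverview block, which should make the compiler treat it as externs even if the flag is forgotten (a sketch based on the extern above):
/**
 * @fileoverview Externs for the custom resizer.
 * @externs
 */
var MyCustomResizer = {
    "onResize": function (a, b) {},
    "detach": function () {}
};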

Apigee Command Line import returns 500 with NullPointerException

I'm trying to customise the deploy scripts to allow me to deploy each of my four API proxies from the command line. It looks very similar to the one provided in the samples on Github:
#!/bin/bash

if [[ $# -eq 0 ]] ; then
    echo 'Must provide proxy name.'
    exit 0
fi

dirname=$1
proxyname="teamname-"$dirname

source ./setup/setenv.sh

echo "Enter your password for user $username in the Apigee Enterprise organization $org, followed by [ENTER]:"
read -s password

echo Deploying $proxyname to $env on $url using $username and $org

./tools/deploy.py -n $proxyname -u $username:$password -o $org -h $url -e $env -p / -d ./$dirname

echo "If 'State: deployed', then your API Proxy is ready to be invoked."
echo "Run '$ sh invoke.sh'"
echo "If you get errors, make sure you have set the proper account settings in /setup/setenv.sh"
However when I run it, I get the following response:
Deploying teamname-gameassets to int on https://api.enterprise.apigee.com using my-email-address and org-name
Writing ./gameassets/teamname-gameassets.xml to ./teamname-gameassets.xml
Writing ./gameassets/policies/Add-CORS.xml to policies/Add-CORS.xml
Writing ./gameassets/proxies/default.xml to proxies/default.xml
Writing ./gameassets/targets/development.xml to targets/development.xml
Writing ./gameassets/targets/production.xml to targets/production.xml
Import failed to /v1/organizations/org-name/apis?action=import&name=teamname-gameassets with status 500:
{
  "code" : "messaging.config.beans.ImportFailed",
  "message" : "Failed to import the bundle : java.lang.NullPointerException",
  "contexts" : [ ],
  "cause" : {
    "contexts" : [ ]
  }
}
How should I go about debugging when I receive errors during the deploy process? Is there some sort of console I can view once logged in to Apigee?
I'm not sure how your proxy ended up this way, but it looks like the top-level directory is named "gameassets." It should be named "apiproxy". If you rename this directory you should see a successful deployment.
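In other words, based on the files listed in the output above, the renamed bundle would look roughly like this (a sketch inferred from the deploy output, not from your actual project):
apiproxy/
    teamname-gameassets.xml
    policies/
        Add-CORS.xml
    proxies/
        default.xml
    targets/
        development.xml
        production.xml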
Also, before you customize too much, please try out "apigeetool," which is a more flexible command-line tool for deploying proxies:
https://github.com/apigee/api-platform-tools

Symfony 2.2.1 rsync deploy - not working on remote server

I'm very new to Symfony and I'm trying to automate the deploy process with rsync, while keeping both the local and remote installs of Symfony working.
What I've done so far:
installed Cygwin on my local machine (Windows 7+Apache2.2+PHP 5.3+MySQL 5.1)
done a basic Symfony install on my local machine from shell with the command
php composer.phar create-project symfony/framework-standard-edition [path]/ 2.2.1
set up a remote LAMP Ubuntu server with php-fpm (fastcgi)
set up two different configuration files for local and remote in the app/config/ dir, parameters.yml and parameters.yml.remote
created an app/config/rsync_exclude.txt file containing a list of files not to rsync to the remote server (as suggested in this page)
created a deploy shell script that I run from Cygwin (see below)
The deploy script issues the commands:
rsync -avz /cygdrive/c/[path]/ user@server:[remote-path]/ --exclude-from=/cygdrive/c/[path]/app/config/rsync_exclude.txt
ssh user@server 'cd [remote-path]/ && php app/console --env=prod cache:clear && php app/console cache:clear'
ssh user@server 'mv [remote-path]/app/config/parameters.yml.remote ~/[remote-path]/app/config/parameters.yml'
The rsync, ssh and mv commands work, but the deployed site always shows an HTTP 500 error (both app.php and app_dev.php).
Looking at server error log the error is:
Fatal error: Class 'Composer\\Autoload\\ClassLoader' not found in /[remote-path]/vendor/composer/autoload_real.php on line 23
Any clue would be more than welcome.
Edit - here is my vendor/composer/autoload_real.php file (sorry for making the question longer!):
<?php

// autoload_real.php generated by Composer

class ComposerAutoloaderInit9d50f07556e53717271b583e52c7de25
{
    private static $loader;

    public static function loadClassLoader($class)
    {
        if ('Composer\Autoload\ClassLoader' === $class) {
            require __DIR__ . '/ClassLoader.php';
        }
    }

    public static function getLoader()
    {
        if (null !== self::$loader) {
            return self::$loader;
        }

        spl_autoload_register(array('ComposerAutoloaderInit9d50f07556e53717271b583e52c7de25', 'loadClassLoader'), true, true);
        self::$loader = $loader = new \Composer\Autoload\ClassLoader();
        // ^^^^^^ this is line 23 and gives the error ^^^^^^^^^^^
        spl_autoload_unregister(array('ComposerAutoloaderInit9d50f07556e53717271b583e52c7de25', 'loadClassLoader'));

        $vendorDir = dirname(__DIR__);
        $baseDir = dirname($vendorDir);

        $map = require __DIR__ . '/autoload_namespaces.php';
        foreach ($map as $namespace => $path) {
            $loader->add($namespace, $path);
        }

        $classMap = require __DIR__ . '/autoload_classmap.php';
        if ($classMap) {
            $loader->addClassMap($classMap);
        }

        $loader->register(true);

        require $vendorDir . '/kriswallsmith/assetic/src/functions.php';
        require $vendorDir . '/swiftmailer/swiftmailer/lib/swift_required.php';

        return $loader;
    }
}
If there is an error with the autoloader generated by Composer, running ...
composer update
... will update your dependencies and create a new one.
You should invoke the command with the -o flag if you are deploying to a production system; this way Composer generates a classmap autoloader (which performs way better) instead of the classic autoloader.
composer update -o
I guess re-generating the autoloader will solve the issue :)
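One option is to regenerate the autoloader directly on the remote host after the rsync; a minimal sketch, assuming Composer is available there, added to the deploy script:
ssh user@server 'cd [remote-path]/ && composer dump-autoload -o'
# or, if parts of vendor/ (e.g. vendor/composer/ClassLoader.php) were excluded from the rsync:
ssh user@server 'cd [remote-path]/ && composer install --no-dev --optimize-autoloader'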
