OpenCL NDRange dimensions order bug on nVidia? - opencl

I know OpenCL development is fairly idle these days, especially in NVIDIA's implementation, which sits on top of CUDA. That said, I think I've found a significant bug in NVIDIA's driver and I'd like to see if anybody else notices the same. Using the Linux platform version "OpenCL 1.2 CUDA 10.1.0" with the C++ bindings, I've been having all kinds of issues with NDRange ordering, and I finally have a simple kernel that can definitively reproduce the issue:
void kernel test()
{
    printf("G0:%d G1:%d G2:%d L0:%d L1:%d L2:%d\n",
           get_global_id(0),
           get_global_id(1),
           get_global_id(2),
           get_local_id(0),
           get_local_id(1),
           get_local_id(2));
}
If I enqueue this kernel with 3 dimensions: global (4,3,2) and local (1,1,1):
queue.enqueueNDRangeKernel(kernel, cl::NullRange,
                           cl::NDRange(4, 3, 2),
                           cl::NDRange(1, 1, 1),
                           NULL, events);
it outputs the following on AMD/Intel (the output order is nondeterministic; it has been sorted here for clarity):
G0:0 G1:0 G2:0 L0:0 L1:0 L2:0
G0:0 G1:0 G2:1 L0:0 L1:0 L2:0
G0:0 G1:1 G2:0 L0:0 L1:0 L2:0
G0:0 G1:1 G2:1 L0:0 L1:0 L2:0
G0:0 G1:2 G2:0 L0:0 L1:0 L2:0
G0:0 G1:2 G2:1 L0:0 L1:0 L2:0
G0:1 G1:0 G2:0 L0:0 L1:0 L2:0
G0:1 G1:0 G2:1 L0:0 L1:0 L2:0
G0:1 G1:1 G2:0 L0:0 L1:0 L2:0
G0:1 G1:1 G2:1 L0:0 L1:0 L2:0
G0:1 G1:2 G2:0 L0:0 L1:0 L2:0
G0:1 G1:2 G2:1 L0:0 L1:0 L2:0
G0:2 G1:0 G2:0 L0:0 L1:0 L2:0
G0:2 G1:0 G2:1 L0:0 L1:0 L2:0
G0:2 G1:1 G2:0 L0:0 L1:0 L2:0
G0:2 G1:1 G2:1 L0:0 L1:0 L2:0
G0:2 G1:2 G2:0 L0:0 L1:0 L2:0
G0:2 G1:2 G2:1 L0:0 L1:0 L2:0
G0:3 G1:0 G2:0 L0:0 L1:0 L2:0
G0:3 G1:0 G2:1 L0:0 L1:0 L2:0
G0:3 G1:1 G2:0 L0:0 L1:0 L2:0
G0:3 G1:1 G2:1 L0:0 L1:0 L2:0
G0:3 G1:2 G2:0 L0:0 L1:0 L2:0
G0:3 G1:2 G2:1 L0:0 L1:0 L2:0
This follows the spec. But if I enqueue the exact same kernel with the same dimensions on NVidia, I get the following output:
G0:0 G1:0 G2:0 L0:0 L1:0 L2:0
G0:0 G1:0 G2:0 L0:0 L1:1 L2:0
G0:0 G1:0 G2:1 L0:0 L1:0 L2:0
G0:0 G1:0 G2:1 L0:0 L1:1 L2:0
G0:0 G1:0 G2:2 L0:0 L1:0 L2:0
G0:0 G1:0 G2:2 L0:0 L1:1 L2:0
G0:1 G1:0 G2:0 L0:0 L1:0 L2:0
G0:1 G1:0 G2:0 L0:0 L1:1 L2:0
G0:1 G1:0 G2:1 L0:0 L1:0 L2:0
G0:1 G1:0 G2:1 L0:0 L1:1 L2:0
G0:1 G1:0 G2:2 L0:0 L1:0 L2:0
G0:1 G1:0 G2:2 L0:0 L1:1 L2:0
G0:2 G1:0 G2:0 L0:0 L1:0 L2:0
G0:2 G1:0 G2:0 L0:0 L1:1 L2:0
G0:2 G1:0 G2:1 L0:0 L1:0 L2:0
G0:2 G1:0 G2:1 L0:0 L1:1 L2:0
G0:2 G1:0 G2:2 L0:0 L1:0 L2:0
G0:2 G1:0 G2:2 L0:0 L1:1 L2:0
G0:3 G1:0 G2:0 L0:0 L1:0 L2:0
G0:3 G1:0 G2:0 L0:0 L1:1 L2:0
G0:3 G1:0 G2:1 L0:0 L1:0 L2:0
G0:3 G1:0 G2:1 L0:0 L1:1 L2:0
G0:3 G1:0 G2:2 L0:0 L1:0 L2:0
G0:3 G1:0 G2:2 L0:0 L1:1 L2:0
It seems like NVidia's interpretation of the global/local dimensions is interleaved, which doesn't match the spec. This doesn't seem to involve the C++ bindings either. The local IDs should never be anything but zero, and yet get_global_id(1) is always zero.
I know NVidia doesn't care much for OpenCL, but this seems like a fairly major issue. Has anyone else encountered something like this? This isn't a synchronization issue with printf: I first noticed it in actual data use cases and built this kernel only to demonstrate it.

Although it's hard to verify this in detail, I'll post it as an answer, because from my observations, it seems to explain the issue:
tl;dr: The reason is almost certainly due to the lack of synchronization in printf.
First of all, I observed the same behavior as you: on AMD the output seems to be right; on NVIDIA, it seems to be irritatingly wrong. So I was curious, and extended the kernel to also print get_local_size:
void kernel test()
{
    printf("G0:%d G1:%d G2:%d L0:%d L1:%d L2:%d S0:%d S1:%d S2:%d\n",
           get_global_id(0),
           get_global_id(1),
           get_global_id(2),
           get_local_id(0),
           get_local_id(1),
           get_local_id(2),
           get_local_size(0),
           get_local_size(1),
           get_local_size(2));
}
Now, get_local_id must certainly be smaller than get_local_size, otherwise most kernels would simply crash. On AMD, the output was nice and clean:
platform AMD Accelerated Parallel Processing
device Spectre
G0:0 G1:0 G2:0 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:1 G1:0 G2:0 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:2 G1:0 G2:0 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:3 G1:0 G2:0 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:0 G1:1 G2:0 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:1 G1:1 G2:0 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:2 G1:1 G2:0 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:3 G1:1 G2:0 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:0 G1:2 G2:0 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:1 G1:2 G2:0 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:2 G1:2 G2:0 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:3 G1:2 G2:0 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:0 G1:0 G2:1 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:1 G1:0 G2:1 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:2 G1:0 G2:1 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:3 G1:0 G2:1 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:0 G1:1 G2:1 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:1 G1:1 G2:1 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:2 G1:1 G2:1 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:3 G1:1 G2:1 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:0 G1:2 G2:1 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:1 G1:2 G2:1 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:2 G1:2 G2:1 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
G0:3 G1:2 G2:1 L0:0 L1:0 L2:0 S0:1 S1:1 S2:1
On NVIDIA, the output was
platform NVIDIA CUDA
device GeForce GTX 970
G0:3 G1:0 G2:2 L0:0 L1:0 L2:0 S0:0 S1:0 S2:0
G0:3 G1:0 G2:1 L0:0 L1:0 L2:0 S0:0 S1:0 S2:0
G0:3 G1:0 G2:0 L0:0 L1:0 L2:0 S0:0 S1:0 S2:0
G0:0 G1:0 G2:2 L0:0 L1:1 L2:0 S0:0 S1:0 S2:0
G0:0 G1:0 G2:1 L0:0 L1:1 L2:0 S0:0 S1:0 S2:0
G0:2 G1:0 G2:0 L0:0 L1:0 L2:0 S0:0 S1:0 S2:0
G0:2 G1:0 G2:1 L0:0 L1:0 L2:0 S0:0 S1:0 S2:0
G0:2 G1:0 G2:2 L0:0 L1:0 L2:0 S0:0 S1:0 S2:0
G0:1 G1:0 G2:1 L0:0 L1:0 L2:0 S0:0 S1:0 S2:0
G0:3 G1:0 G2:0 L0:0 L1:1 L2:0 S0:0 S1:0 S2:0
G0:1 G1:0 G2:0 L0:0 L1:0 L2:0 S0:0 S1:0 S2:0
G0:3 G1:0 G2:1 L0:0 L1:1 L2:0 S0:0 S1:0 S2:0
G0:0 G1:0 G2:2 L0:0 L1:0 L2:0 S0:0 S1:0 S2:0
G0:1 G1:0 G2:2 L0:0 L1:0 L2:0 S0:0 S1:0 S2:0
G0:3 G1:0 G2:2 L0:0 L1:1 L2:0 S0:0 S1:0 S2:0
G0:0 G1:0 G2:1 L0:0 L1:0 L2:0 S0:0 S1:0 S2:0
G0:2 G1:0 G2:1 L0:0 L1:1 L2:0 S0:0 S1:0 S2:0
G0:0 G1:0 G2:0 L0:0 L1:1 L2:0 S0:0 S1:0 S2:0
G0:2 G1:0 G2:0 L0:0 L1:1 L2:0 S0:0 S1:0 S2:0
G0:0 G1:0 G2:0 L0:0 L1:0 L2:0 S0:0 S1:0 S2:0
G0:2 G1:0 G2:2 L0:0 L1:1 L2:0 S0:0 S1:0 S2:0
G0:1 G1:0 G2:2 L0:0 L1:1 L2:0 S0:0 S1:0 S2:0
G0:1 G1:0 G2:1 L0:0 L1:1 L2:0 S0:0 S1:0 S2:0
G0:1 G1:0 G2:0 L0:0 L1:1 L2:0 S0:0 S1:0 S2:0
Now, that cannot be right: The local work size is always 0!
After some further tests (e.g. with 2D kernels and different sizes), the output generally did not seem to make any sense at all. So I tried this kernel:
void kernel test()
{
    printf("G0:%d\n", get_global_id(0));
    printf("G1:%d\n", get_global_id(1));
    printf("G2:%d\n", get_global_id(2));
    printf("L0:%d\n", get_local_id(0));
    printf("L1:%d\n", get_local_id(1));
    printf("L2:%d\n", get_local_id(2));
    printf("S0:%d\n", get_local_size(0));
    printf("S1:%d\n", get_local_size(1));
    printf("S2:%d\n", get_local_size(2));
}
Then on NVIDIA, the output is
platform NVIDIA CUDA
device GeForce GTX 970
G0:1
G0:1
G0:1
G0:2
G0:2
G0:2
G0:2
G0:2
G0:3
G0:2
G0:3
G0:3
G0:0
G0:3
G0:3
G0:0
G0:0
G0:3
G0:0
G0:0
G0:0
G0:1
G0:1
G0:1
G1:2
G1:2
G1:0
G1:0
G1:1
G1:2
G1:2
G1:1
G1:1
G1:1
G1:0
G1:0
G1:2
G1:1
G1:0
G1:0
G1:2
G1:1
G1:1
G1:0
G1:2
G1:2
G1:0
G1:1
G2:0
G2:0
G2:1
G2:1
G2:0
G2:0
G2:1
G2:0
G2:0
G2:0
G2:0
G2:0
G2:1
G2:1
G2:0
G2:1
G2:1
G2:1
G2:1
G2:0
G2:1
G2:0
G2:1
G2:1
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L0:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L1:0
L2:0
L1:0
L1:0
L1:0
L1:0
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
S0:1
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
L2:0
S0:1
S0:1
S0:1
S0:1
S0:1
S0:1
S0:1
S0:1
S0:1
S0:1
S0:1
S0:1
S0:1
S1:1
S0:1
S0:1
S0:1
S0:1
S0:1
S1:1
S0:1
S0:1
S0:1
S0:1
S0:1
S1:1
S1:1
S1:1
S1:1
S1:1
S1:1
S1:1
S1:1
S1:1
S1:1
S2:1
S1:1
S1:1
S1:1
S2:1
S1:1
S1:1
S1:1
S1:1
S1:1
S2:1
S1:1
S1:1
S2:1
S2:1
S1:1
S1:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
S2:1
The key point is: Each individual output is correct! The problem seems to be that putting everything into a single printf call messes up some internal buffer.
This is a pity, of course. It basically makes printf unusable for the only purpose it could sensibly serve inside a kernel, namely debugging...
An aside: the specification remains a bit hard to interpret on this point, at least when it comes to deciding whether the observed behavior is "right" or "wrong". From the Khronos documentation of printf:
In the case that printf is executed from multiple work-items concurrently, there is no guarantee of ordering with respect to written data. For example, it is valid for the output of a work-item with a global id (0,0,1) to appear intermixed with the output of a work-item with a global id (0,0,4) and so on.
The NVIDIA documentation of the CUDA printf implementation also contains some disclaimers and mentions buffers that may be overwritten, but mapping this (at the technical level of a specification) onto the OpenCL behavior is difficult...
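One further observation on the reproducer itself (my own note, not verified against NVIDIA's implementation): get_global_id, get_local_id and get_local_size all return size_t, while the %d conversion expects an int. As in C99, a printf argument whose type does not match its conversion specifier is undefined behavior, and on a 64-bit implementation that mismatch alone could plausibly scramble the variadic arguments of a single large printf call. A more defensive formulation of the original kernel casts every value explicitly:

```c
// Sketch of a more defensive version of the kernel above.
// The builtins return size_t, but %d expects int, so each value is
// cast explicitly (safe here: the ids are tiny for these NDRanges)
// instead of relying on the implementation's varargs layout.
void kernel test()
{
    printf("G0:%d G1:%d G2:%d L0:%d L1:%d L2:%d\n",
           (int)get_global_id(0),
           (int)get_global_id(1),
           (int)get_global_id(2),
           (int)get_local_id(0),
           (int)get_local_id(1),
           (int)get_local_id(2));
}
```

Whether this changes the output on NVIDIA would need to be tested; it does not contradict the buffering explanation above, but it removes one spec violation from the reproducer.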

Related

doctrine compatibility with Sylius 1.12

PHP Symfony Sylius on Windows 11 with docker
I launch sylius:install and step 2 of 5 stops with this error:
Step 2 of 5. Setting up the database.
-------------------------------------
Creating Sylius database for environment dev.
It appears that your database already exists.
Warning! This action will erase your database.
Would you like to reset it? (y/N) y
1/4 [░░░░░░░░ ] 25%
In ImagesRemoveListener.php line 40:
Attempted to call an undefined method named "getObjectManager" of class "Doctrine\ORM\Event\OnFlushEventArgs".
doctrine:migrations:migrate [--write-sql [WRITE-SQL]] [--dry-run] [--query-time] [--allow-no-migration] [--all-or-nothing [ALL-OR-NOTHING]] [--configuration CONFIGURATION] [--em EM] [--conn CONN] [--] [<version>]
I confirmed that I have the Doctrine\ORM\Event\OnFlushEventArgs class, but there is no getObjectManager method in it.
This is the require section of my composer.json:
"require": {
"php": "^8.0",
"sylius/paypal-plugin": "^1.2.1",
"sylius/sylius": "^1.12#dev",
"symfony/dotenv": "^5.4",
"symfony/flex": "^2.1"
},
"require-dev": {
"behat/behat": "^3.7",
"behat/mink-selenium2-driver": "^1.4",
"dmore/behat-chrome-extension": "^1.3",
"dmore/chrome-mink-driver": "^2.7",
"friends-of-behat/mink": "^1.8",
"friends-of-behat/mink-browserkit-driver": "^1.4",
"friends-of-behat/mink-debug-extension": "^2.0",
"friends-of-behat/mink-extension": "^2.4",
"friends-of-behat/page-object-extension": "^0.3",
"friends-of-behat/suite-settings-extension": "^1.0",
"friends-of-behat/symfony-extension": "^2.1",
"friends-of-behat/variadic-extension": "^1.3",
"lchrusciel/api-test-case": "^5.0",
"phpspec/phpspec": "^7.0",
"phpstan/extension-installer": "^1.0",
"phpstan/phpstan": "1.5.4",
"phpstan/phpstan-doctrine": "1.3.2",
"phpstan/phpstan-webmozart-assert": "^1.1",
"phpunit/phpunit": "^8.5",
"stripe/stripe-php": "^6.43",
"sylius-labs/coding-standard": "^4.0",
"symfony/browser-kit": "^5.4",
"symfony/debug-bundle": "^5.4",
"symfony/intl": "^5.4",
"symfony/web-profiler-bundle": "^5.4",
"polishsymfonycommunity/symfony-mocker-container": "^1.0"
},
The doctrine-related part of the composer.json file in the sylius/sylius vendor bundle:
"doctrine/collections": "^1.6",
"doctrine/common": "^3.2",
"doctrine/dbal": "^2.7|^3.0",
"doctrine/doctrine-bundle": "^1.12 || ^2.0",
"doctrine/doctrine-migrations-bundle": "^3.0.1",
"doctrine/event-manager": "^1.1",
"doctrine/inflector": "^1.4 || ^2.0",
"doctrine/migrations": "^3.0",
"doctrine/orm": "^2.7",
"doctrine/persistence": "^2.3",
OnFlushEventArgs.php is part of the doctrine/orm package, version 2.7 here.
Should I downgrade or update doctrine/orm?
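A first diagnostic step could be to check which doctrine/orm version is actually installed and then update it. As a hedged note: getObjectManager() was only added to the ORM event-args classes in a later 2.x release (around 2.13, if I recall correctly), so updating rather than downgrading should bring the method in. A minimal sketch, assuming your constraints allow the newer 2.x releases:

```shell
# Show the installed doctrine/orm version (look for 2.13 or later).
composer show doctrine/orm

# Update doctrine/orm and whatever depends on it.
composer update doctrine/orm --with-dependencies
```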

ERROR: Failed to create collection 'gettingstarted' due to: Underlying core creation failed while creating collection: while creating solrcloud SOLR

From Log:
ERROR (OverseerThreadFactory-9-thread-1-processing-n:localhost:1000_solr) [ ] o.a.s.c.a.c.OverseerCollectionMessageHandler Error from shard: http://localhost:1001/solr => org.apache.solr.client.solrj.SolrServerException: IOException occurred when talking to server at: http://localhost:1001/solr
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:679)
org.apache.solr.client.solrj.SolrServerException: IOException occurred when talking to server at: http://localhost:1001/solr
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:679) ~[solr-solrj-8.4.0.jar:8.4.0 bc02ab906445fcf4e297f4ef00ab4a54fdd72ca2 - jpountz - 2019-12-19 20:19:50]
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:265) ~[solr-solrj-8.4.0.jar:8.4.0 bc02ab906445fcf4e297f4ef00ab4a54fdd72ca2 - jpountz - 2019-12-19 20:19:50]
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248) ~[solr-solrj-8.4.0.jar:8.4.0 bc02ab906445fcf4e297f4ef00ab4a54fdd72ca2 - jpountz - 2019-12-19 20:19:50]
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1290) ~[solr-solrj-8.4.0.jar:8.4.0 bc02ab906445fcf4e297f4ef00ab4a54fdd72ca2 - jpountz - 2019-12-19 20:19:50]
at org.apache.solr.handler.component.HttpShardHandlerFactory$1.request(HttpShardHandlerFactory.java:176) ~[solr-core-8.4.0.jar:8.4.0 bc02ab906445fcf4e297f4ef00ab4a54fdd72ca2 - jpountz - 2019-12-19 20:19:49]
at org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:195) ~[solr-core-8.4.0.jar:8.4.0 bc02ab906445fcf4e297f4ef00ab4a54fdd72ca2 - jpountz - 2019-12-19 20:19:49]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_242]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_242]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_242]
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181) ~[metrics-core-4.0.5.jar:4.0.5]
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210) ~[solr-solrj-8.4.0.jar:8.4.0 bc02ab906445fcf4e297f4ef00ab4a54fdd72ca2 - jpountz - 2019-12-19 20:19:50]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_242]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_242]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
Caused by: org.apache.http.client.ClientProtocolException
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:187) ~[httpclient-4.5.6.jar:4.5.6]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[httpclient-4.5.6.jar:4.5.6]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) ~[httpclient-4.5.6.jar:4.5.6]
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:564) ~[solr-solrj-8.4.0.jar:8.4.0 bc02ab906445fcf4e297f4ef00ab4a54fdd72ca2 - jpountz - 2019-12-19 20:19:50]
... 13 more

How to enable the annotation reader service for sensio framework controller listener?

After a composer update to fix some vulnerabilities in packages used by my application, I get this error:
The service "sensio_framework_extra.controller.listener" has a dependency on a non-existent service "annotation_reader"
As suggested in this answer, I tried to add the doctrine/annotations package, but it didn't solve my issue (the package seems to be already installed).
λ composer require doctrine/annotations
Using version ^1.8 for doctrine/annotations
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Restricting packages listed in "symfony/symfony" to "4.2.*"
Nothing to install or update
Here are the packages in my composer.json:
"require": {
"php": "^7.1.3",
"ext-ctype": "*",
"ext-fileinfo": "*",
"ext-iconv": "*",
"ext-json": "*",
"doctrine/doctrine-fixtures-bundle": "^3.1",
"ekyna/payum-monetico-bundle": "^1.5",
"gedmo/doctrine-extensions": "^2.4",
"knplabs/knp-paginator-bundle": "^3.0",
"payum/offline": "^1.5",
"payum/paypal-express-checkout-nvp": "^1.5",
"payum/payum-bundle": "^2.3",
"php-http/guzzle6-adapter": "^2.0",
"sensio/framework-extra-bundle": "^5.1",
"stof/doctrine-extensions-bundle": "^1.3",
"symfony/asset": "4.2.*",
"symfony/console": "4.2.*",
"symfony/dotenv": "4.2.*",
"symfony/expression-language": "4.2.*",
"symfony/flex": "^1.1",
"symfony/form": "4.2.*",
"symfony/framework-bundle": "4.2.*",
"symfony/monolog-bundle": "^3.1",
"symfony/orm-pack": "1.*",
"symfony/process": "4.2.*",
"symfony/security-bundle": "4.2.*",
"symfony/serializer-pack": "1.*",
"symfony/swiftmailer-bundle": "^3.1",
"symfony/templating": "4.2.*",
"symfony/translation": "4.2.*",
"symfony/twig-bundle": "4.2.*",
"symfony/validator": "4.2.*",
"symfony/web-link": "4.2.*",
"symfony/webpack-encore-bundle": "^1.4",
"symfony/yaml": "4.2.*",
"twig/extensions": "^1.5",
"vich/uploader-bundle": "^1.8"
},
"require-dev": {
"codeception/codeception": "^2.5",
"codeception/c3": "2.*",
"friendsofphp/php-cs-fixer": "^2.14",
"php-coveralls/php-coveralls": "^2.1",
"phpmd/phpmd": "2.*",
"squizlabs/php_codesniffer": "*",
"symfony/debug-pack": "*",
"symfony/maker-bundle": "^1.11",
"symfony/profiler-pack": "*",
"symfony/test-pack": "^1.0",
"symfony/web-server-bundle": "4.2.*"
}
Edit:
I'm using the full framework. I've manually cleared the cache to be sure that isn't the problem, and I rebooted my dev machine too. Finally, I worked around the problem by deploying my GitHub project in another directory, where the application works fine. I'm not closing this question because I want to find the cause in case it occurs in production.
Since doctrine (or doctrine/annotations) itself does not register services (why would it), I looked up the doctrine bundles, and doctrine/doctrine-bundle is what provides the annotation_reader service: https://github.com/doctrine/DoctrineBundle/blob/af8ac792c9b970ff2bc25b49ab9b31afd9e03dbf/Resources/config/orm.xml#L82
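Following that observation, one thing worth trying (a sketch, not verified against this exact project) is to make sure the bundle that actually registers the service is required directly, with a version constraint matching your Symfony version:

```shell
# doctrine/doctrine-bundle is the package that defines the
# annotation_reader service; requiring it explicitly ensures it is
# installed and wired into the container.
composer require doctrine/doctrine-bundle
```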
I ran into a very similar error (The service "doctrine.orm.default_annotation_metadata_driver" has a dependency on a non-existent service "annotation_reader".) while trying to create a new setup. I have some instructions documented and had tried to composer-install ormfixtures (--dev) before doctrine. Once I ran the commands in the right order, everything worked as expected.
My specific example is from Twilio's instructions, which have these commands in this order:
composer req --dev maker ormfixtures fakerphp/faker
composer req doctrine twig
It worked when I reversed them to be in this order:
composer req doctrine twig
composer req --dev maker ormfixtures fakerphp/faker
I had the same issue after a composer update from Symfony 4.4.
It was because I had replaced "Doctrine\Common" with "Doctrine" in all my use statements, instead of replacing "Doctrine\Common\Persistence" with "Doctrine\Persistence".
In other terms: no matter why you hit this error, it comes from a broken namespace in your code.
This is the official documentation of the SensioFrameworkExtraBundle: https://symfony.com/doc/current/bundles/SensioFrameworkExtraBundle/index.html

Cannot use/import npm packages in Meteor

I am not able to do:
import SimpleSchema from 'simpl-schema';
or to import any other npm package with this command. I have already installed the package via meteor npm install --save simpl-schema.
I have also tried to import other npm dependencies like:
import moment from 'moment';
Therefore, I think I have either missed something important that is needed to require npm packages in my client-side code, or I am using the wrong project structure.
I get the error:
Uncaught SyntaxError: Unexpected identifier
in the browser console.
Edit
I have called meteor npm install before running the app.
If I put the line import moment from 'moment'; in a server-side js file, I get the following error:
app\server\startup\profilemanagement.js:1
(function(NPM,Assets){(function(){import moment from 'moment';
SyntaxError: Unexpected token import at C:...meteor\local\build\programs\server\boot.js:392:18 at
Array.forEach () at Function..each..forEach
(C:...meteor\packages\meteor-tool\1.6.0:1\mt-os.windows.x86_64\dev_bundle\server-lib\node_modules\underscore\underscore.js:79:11) at C:...meteor\local\build\programs\server\boot.js:220:5 at
C:...meteor\local\build\programs\server\boot.js:463:5 at
Function.run
(C:...meteor\local\build\programs\server\profile.js:510:12) at
C:...meteor\local\build\programs\server\boot.js:462:11
I am not sure if I installed the babel packages correctly. I think something went wrong there. Here is my package.json:
{
"dependencies": {
"abbrev": "^1.1.0",
"addressparser": "^1.0.1",
"ajv": "^5.2.2",
"ansi-regex": "^3.0.0",
"aproba": "^1.2.0",
"are-we-there-yet": "^1.1.4",
"asn1": "^0.2.3",
"assert-plus": "^1.0.0",
"asynckit": "^0.4.0",
"aws-sign2": "^0.7.0",
"aws4": "^1.6.0",
"babel-runtime": "^6.26.0",
"balanced-match": "^1.0.0",
"bcrypt": "^1.0.3",
"bcrypt-pbkdf": "^1.0.1",
"block-stream": "0.0.9",
"boom": "^5.2.0",
"brace-expansion": "^1.1.8",
"buildmail": "^4.0.1",
"caseless": "^0.12.0",
"chart.js": "^2.7.0",
"chartjs-color": "^2.2.0",
"cloudinary": "^1.9.0",
"co": "^4.6.0",
"code-point-at": "^1.1.0",
"color-convert": "^1.9.0",
"color-name": "^1.1.3",
"combined-stream": "^1.0.5",
"concat-map": "0.0.1",
"console-control-strings": "^1.1.0",
"core-js": "^2.5.1",
"core-util-is": "^1.0.2",
"cryptiles": "^3.1.2",
"dashdash": "^1.14.1",
"debug": "^3.0.1",
"deep-extend": "^0.5.0",
"delayed-stream": "^1.0.0",
"delegates": "^1.0.0",
"ecc-jsbn": "^0.1.1",
"extend": "^3.0.1",
"extsprintf": "^1.3.0",
"fast-deep-equal": "^1.0.0",
"fontawesome": "^4.7.2",
"forever-agent": "^0.6.1",
"form-data": "^2.3.1",
"fs.realpath": "^1.0.0",
"fstream": "^1.0.11",
"fstream-ignore": "^1.0.5",
"gauge": "^2.7.4",
"getpass": "^0.1.7",
"glob": "^7.1.2",
"graceful-fs": "^4.1.11",
"har-schema": "^2.0.0",
"har-validator": "^5.0.3",
"has-unicode": "^2.0.1",
"hawk": "^6.0.2",
"hoek": "^4.2.0",
"http-signature": "^1.2.0",
"iconv-lite": "^0.4.19",
"inflight": "^1.0.6",
"inherits": "^2.0.3",
"ini": "^1.3.4",
"is-fullwidth-code-point": "^2.0.0",
"is-typedarray": "^1.0.0",
"isarray": "^2.0.2",
"isstream": "^0.1.2",
"jasny-bootstrap": "^3.1.3",
"jodid25519": "^1.0.2",
"jsbn": "^1.1.0",
"json-schema": "^0.2.3",
"json-schema-traverse": "^0.3.1",
"json-stable-stringify": "^1.0.1",
"json-stringify-safe": "^5.0.1",
"jsonify": "0.0.0",
"jsprim": "^1.4.1",
"libbase64": "^0.2.0",
"libmime": "^3.1.0",
"libqp": "^1.1.0",
"lodash": "^4.17.4",
"mailcomposer": "^4.0.2",
"meteor-node-stubs": "^0.2.11",
"mime-db": "^1.30.0",
"mime-types": "^2.1.17",
"minimatch": "^3.0.4",
"minimist": "^1.2.0",
"mkdirp": "^0.5.1",
"moment": "^2.18.1",
"ms": "^2.0.0",
"nan": "^2.7.0",
"node-pre-gyp": "^0.6.37",
"nodemailer": "^4.1.0",
"nodemailer-fetch": "^2.1.0",
"nodemailer-shared": "^2.0.0",
"nopt": "^4.0.1",
"npmlog": "^4.1.2",
"number-is-nan": "^1.0.1",
"oauth-sign": "^0.8.2",
"object-assign": "^4.1.1",
"object-inspect": "^1.3.0",
"object-keys": "^1.0.11",
"once": "^1.4.0",
"os-homedir": "^1.0.2",
"os-tmpdir": "^1.0.2",
"osenv": "^0.1.4",
"path-is-absolute": "^1.0.1",
"path-parse": "^1.0.5",
"performance-now": "^2.1.0",
"process-nextick-args": "^1.0.7",
"punycode": "^2.1.0",
"q": "^1.5.0",
"qs": "^6.5.1",
"rc": "^1.2.1",
"readable-stream": "^2.3.3",
"regenerator-runtime": "^0.11.0",
"request": "^2.82.0",
"resolve": "^1.4.0",
"resumer": "0.0.0",
"rimraf": "^2.6.2",
"safe-buffer": "^5.1.1",
"semver": "^5.4.1",
"set-blocking": "^2.0.0",
"signal-exit": "^3.0.2",
"sntp": "^2.0.2",
"sshpk": "^1.13.1",
"string-width": "^2.1.1",
"string.prototype.trim": "^1.1.2",
"string_decoder": "^1.0.3",
"stringstream": "0.0.5",
"strip-ansi": "^4.0.0",
"strip-json-comments": "^2.0.1",
"sweetalert": "^1.1.3",
"tape": "^4.8.0",
"tar": "^4.0.1",
"tar-pack": "^3.4.0",
"through": "^2.3.8",
"tough-cookie": "^2.3.2",
"tunnel-agent": "^0.6.0",
"tweetnacl": "^1.0.0",
"uid-number": "0.0.6",
"util-deprecate": "^1.0.2",
"uuid": "^3.1.0",
"verror": "^1.10.0",
"wide-align": "^1.1.2",
"wow.js": "^1.2.2",
"wrappy": "^1.0.2",
"yallist": "^3.0.2"
}
}
Second edit
It seems that (maybe since the last update?) I have a major issue with the meteor installation and its npm dependencies. I cannot create a new meteor project due to this error:
Error: Error: Could not install npm dependencies for test-packages:
Command failed: C:\WINDOWS\system32\cmd.exe /c
C:\Users...\AppData\Local.meteor\packages\meteor-tool\1.6.0._1\mt-os.windows.x86_64\dev_bundle\bin\npm.cmd
install npm ERR! code ENOGIT npm ERR! No git binary found in $PATH npm
ERR! npm ERR! Failed using git. npm ERR! Please check if you have git
installed and in your PATH.
You can check this error here:
https://github.com/meteor/meteor/issues/8585. It says that npm dependencies cannot be installed because the git binary cannot be found in the PATH.
However, the suggested solution of reinstalling Git did not work for me. So I think something went generally wrong with the last meteor update. Maybe the only solution is to reinstall meteor completely or use an older version? I had started this project on my computer with an older meteor version, and uninstalling and reinstalling Babel did not make any difference.
Ok, got it! Although I don't know exactly what produced the error, I could solve the issue by creating a new meteor project with the --bare flag (so now I know that all npm dependencies are installed correctly and that I can use 'import') and copying my code into the project. However, then I got another error message telling me that my bootstrap version is not compatible with my jquery version. Before solving this, I had to delete codemirror in my client/vendor folder, which produced another error:
Uncaught Error: Cannot find module ‘…/…/lib/codemirror’
Then I installed jquery#2.2.4 plus all required datatable npm dependencies I use in my code, to keep my bootstrap and jquery versions compatible. Now I can use import SimpleSchema from 'simpl-schema' and all other imports, and no more errors occur.
So in the end, some npm and/or meteor packages were broken, and the initial error was solved by creating a new meteor project. The other errors were related to bootstrap/jquery compatibility and codemirror.

Composer freezing when installing symfony vendors

I'm trying to install Symfony 2. I get the same issue whether I download the archive without vendors or install via curl.
Running an OSX/MAMP setup.
➜ composer install
Loading composer repositories with package information
Installing dependencies (including require-dev)
- Installing symfony/icu (v1.0.0)
Downloading: connection...^C
➜ composer install -vvv
Downloading composer.json
Loading composer repositories with package information
Downloading https://packagist.org/packages.json
Writing /Users/alexlongshaw/.composer/cache/repo/https---packagist.org/packages.json into cache
Downloading https://packagist.org/p/provider-active$fa1339d67d333d9449a21f7a2c80888f2c7a02dbb4d3e6b11a9dd5855df3f537.json
....
Downloading http://packagist.org/p/symfony/class-loader$962a39a1da8588e7f97e22517580a460d5349699d5ccb967167c2a1e9802ce50.json
Reading /Users/alexlongshaw/.composer/cache/repo/https---packagist.org/provider-symfony$class-loader.json from cache
zlib_decode(): data error
http://packagist.org could not be fully loaded, package information was loaded from the local cache and may be out of date
Downloading http://packagist.org/p/symfony/config$eec66e956c41b0728a7fc4f40b95a116bc469f8583c2602b14af3d00f36711fc.json
Writing /Users/alexlongshaw/.composer/cache/repo/https---packagist.org/provider-symfony$config.json into cache
Reading /Users/alexlongshaw/.composer/cache/repo/https---packagist.org/provider-phpoption$phpoption.json from cache
- Installing symfony/icu (v1.0.0)
Downloading https://api.github.com/repos/symfony/Icu/zipball/v1.0.0
Downloading: connection...
As you can see below, if I do composer update I get a similar problem.
➜ composer update
Loading composer repositories with package information
Updating dependencies (including require-dev)
zlib_decode(): data error
http://packagist.org could not be fully loaded, package information was loaded from the local cache and may be out of date
- Installing symfony/icu (v1.0.0)
Downloading: connection...
Any suggestions on how to get past this? It works fine on an Ubuntu VM, so I presume it is something to do with my setup.
Composer.json
{
"name": "symfony/framework-standard-edition",
"license": "MIT",
"type": "project",
"description": "The \"Symfony Standard Edition\" distribution",
"autoload": {
"psr-0": { "": "src/" }
},
"require": {
"php": ">=5.3.3",
"symfony/symfony": "2.3.*",
"doctrine/orm": ">=2.2.3,<2.4-dev",
"doctrine/doctrine-bundle": "1.2.*",
"twig/extensions": "1.0.*",
"symfony/assetic-bundle": "2.3.*",
"symfony/swiftmailer-bundle": "2.3.*",
"symfony/monolog-bundle": "2.3.*",
"sensio/distribution-bundle": "2.3.*",
"sensio/framework-extra-bundle": "2.3.*",
"sensio/generator-bundle": "2.3.*",
"incenteev/composer-parameter-handler": "~2.0"
},
"scripts": {
"post-install-cmd": [
"Incenteev\\ParameterHandler\\ScriptHandler::buildParameters",
"Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::buildBootstrap",
"Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::clearCache",
"Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installAssets",
"Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installRequirementsFile"
],
"post-update-cmd": [
"Incenteev\\ParameterHandler\\ScriptHandler::buildParameters",
"Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::buildBootstrap",
"Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::clearCache",
"Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installAssets",
"Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installRequirementsFile"
]
},
"config": {
"bin-dir": "bin"
},
"minimum-stability": "stable",
"extra": {
"symfony-app-dir": "app",
"symfony-web-dir": "web",
"incenteev-parameters": {
"file": "app/config/parameters.yml"
},
"branch-alias": {
"dev-master": "2.3-dev"
}
}
}
zlib extension enabled?
Check your phpinfo() output to see whether zlib (which provides zlib_decode) is enabled, or run
php -m
in case your PHP CLI uses a different php.ini.
cache problem?
[...] package information was loaded from the local cache and may be out of date
Delete composer's cache folder, which in your case is
/Users/alexlongshaw/.composer/cache/
to rule out updates being served from the local cache only, and see whether a general connection exists.
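Clearing the cache can be done by hand; a minimal sketch, assuming the default cache location shown in the -vvv output above:

```shell
# Remove composer's local cache so the next run has to hit the network
# instead of reusing possibly corrupted cached package metadata.
rm -rf ~/.composer/cache
```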
proxy?
Make sure you don't have a proxy set via these environment variables:
https_proxy
http_proxy
HTTPS_PROXY
HTTP_PROXY
common problems
Composer has a built-in capability of identifying some common problems:
composer diagnose
This turned out to be network related. It didn't work on either connection (broadband or 3G) at home, but at work there were no problems.