Using PyTorch Lightning in Google Colab shows unexpected errors, stopped at Sanity Checking DataLoader - runtime-error

I'm using BERT for Arabic text classification. I ran the same code using this dataset and it worked fine without any errors; then I ran it with another dataset and it gave me this error:
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/utilities.py:97: PossibleUserWarning: `max_epochs` was not set. Setting it to 1000 epochs. To train without an epoch limit, set `max_epochs=-1`.
category=PossibleUserWarning,
WARNING:pytorch_lightning.loggers.tensorboard:Missing logger folder: /content/lightning_logs
INFO:pytorch_lightning.accelerators.cuda:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
INFO:pytorch_lightning.callbacks.model_summary:
| Name | Type | Params
-----------------------------------
0 | l1 | BertModel | 135 M
1 | l2 | Linear | 6.1 K
2 | drop | Dropout | 0
-----------------------------------
6.1 K Trainable params
135 M Non-trainable params
135 M Total params
540.798 Total estimated model params size (MB)
Sanity Checking DataLoader 0: 0%
0/2 [00:00<?, ?it/s]
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/data_connector.py:491: PossibleUserWarning: Your `val_dataloader`'s sampler has shuffling enabled, it is strongly recommended that you turn shuffling off for val/test/predict dataloaders.
category=PossibleUserWarning,
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/call.py in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
37 else:
---> 38 return trainer_fn(*args, **kwargs)
39
30 frames
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in <lambda>(t)
736 Module: self
737 """
--> 738 return self._apply(lambda t: t.cpu())
739
740 def type(self: T, dst_type: Union[dtype, str]) -> T:
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
I don't know what causes this error!
The complete code is here.
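(As the message in the traceback itself suggests, one way to localize a device-side assert is to make CUDA launches synchronous before anything touches the GPU; a minimal sketch for the first cell of the notebook, not part of the original code:)
import os
# Must be set before the first CUDA call; kernel launches then run synchronously,
# so the Python stack trace points at the operation that actually failed.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"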

Related

AttributeError: 'dict' object has no attribute 'to'

I used an example of using BERT to classify reviews, described at the link. The code is written for the CPU and it works fine, but slowly: in Google Colab, with a multilingual model, one epoch takes about 4 hours for me. If I replace the CPU with CUDA everywhere in the code, then the error you met appears. I followed the guidelines given in the link, but then another error appears:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-0b35a5f74768> in <module>()
268 'labels': batch[2],
269 }
--> 270 inputs.to(device)
271 outputs = model(**inputs)
272
AttributeError: 'dict' object has no attribute 'to'
Firstly, you do not need to replace the CPU with CUDA everywhere in the code.
You should just add the following to the cell where you import the libraries:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
By printing the device object, you can see which GPU Google Colab has assigned to you.
Coming to your question: I think instead of passing the dictionary itself, you just need to pass the values corresponding to the expected keys, as in the sketch below.
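A minimal sketch of that idea (assuming every value in the batch dictionary is a tensor, as the traceback suggests):
# A plain dict has no .to() method, so move the tensors (the values) instead:
inputs = {key: value.to(device) for key, value in inputs.items()}
outputs = model(**inputs)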

reload does not work in jupyter notebook, but works in Ipython

I use Anaconda a lot, both Jupyter Notebook and Spyder. Here is my Python version:
Python 3.7.6
After I modify a module I wrote myself, reload always works in Spyder's IPython console, but does not work in Jupyter Notebook. Here is my reload code:
import imp
imp.reload(my_module)
Here is how Jupyter Notebook reacts (it looks like reload works, but the changes don't update).
I also tried the following magic commands:
%load_ext autoreload
%autoreload 2
Again, the same code works in Spyder's IPython console, but most of the time it does not work in Jupyter Notebook (it has occasionally worked, twice in my experience, but I can't figure out when). This is the error reported by Jupyter Notebook:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-27-209f44cf5dc0> in <module>
----> 1 get_ipython().run_line_magic('load_ext', 'autoreload ')
2 get_ipython().run_line_magic('autoreload', '2')
D:\Programs\anaconda3\lib\site-packages\IPython\core\interactiveshell.py in run_line_magic(self, magic_name, line, _stack_depth)
2315 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals
2316 with self.builtin_trap:
-> 2317 result = fn(*args, **kwargs)
2318 return result
2319
<D:\Programs\anaconda3\lib\site-packages\decorator.py:decorator-gen-65> in load_ext(self, module_str)
D:\Programs\anaconda3\lib\site-packages\IPython\core\magic.py in <lambda>(f, *a, **k)
185 # but it's overkill for just that one bit of state.
186 def magic_deco(arg):
--> 187 call = lambda f, *a, **k: f(*a, **k)
188
189 if callable(arg):
D:\Programs\anaconda3\lib\site-packages\IPython\core\magics\extension.py in load_ext(self, module_str)
31 if not module_str:
32 raise UsageError('Missing module name.')
---> 33 res = self.shell.extension_manager.load_extension(module_str)
34
35 if res == 'already loaded':
D:\Programs\anaconda3\lib\site-packages\IPython\core\extensions.py in load_extension(self, module_str)
78 if module_str not in sys.modules:
79 with prepended_to_syspath(self.ipython_extension_dir):
---> 80 mod = import_module(module_str)
81 if mod.__file__.startswith(self.ipython_extension_dir):
82 print(("Loading extensions from {dir} is deprecated. "
D:\Programs\anaconda3\lib\importlib\__init__.py in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
129
D:\Programs\anaconda3\lib\importlib\_bootstrap.py in _gcd_import(name, package, level)
D:\Programs\anaconda3\lib\importlib\_bootstrap.py in _find_and_load(name, import_)
D:\Programs\anaconda3\lib\importlib\_bootstrap.py in _find_and_load_unlocked(name, import_)
ModuleNotFoundError: No module named 'autoreload '
The cause is the extra space at the end of the module name. I saw a similar Stack Overflow thread and here it is for your reference.
I ran the magic with the space after the module name and ended up with the same error. Below are the last few lines of the stack trace.
~\anaconda3\lib\importlib\_bootstrap.py in _gcd_import(name, package, level)
~\anaconda3\lib\importlib\_bootstrap.py in _find_and_load(name, import_)
~\anaconda3\lib\importlib\_bootstrap.py in _find_and_load_unlocked(name, import_)
ModuleNotFoundError: No module named 'autoreload '
The solution is to remove the space after the %load_ext autoreload magic. It should be "%load_ext autoreload" and not "%load_ext autoreload ".
The Spyder IPython console seems to do some text processing (trimming spaces) before executing code. Google Colab does the same, so the error is not thrown there.
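In other words, the corrected cell is just the two magics with no trailing space (using the same my_module from the question):
%load_ext autoreload
%autoreload 2
import my_module   # edits to my_module are now picked up before each cell runs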

Eva plugin of Frama-C reports "invalid user input" after finishing analysis

I get the following log when I try to apply the Eva plugin to a C project.
[eva:summary] ====== ANALYSIS SUMMARY ======
----------------------------------------------------------------------------
53 functions analyzed (out of 9107): 0% coverage.
In these functions, 5300 statements reached (out of 14354): 36% coverage.
----------------------------------------------------------------------------
Some errors and warnings have been raised during the analysis:
by the Eva analyzer: 0 errors 15 warnings
by the Frama-C kernel: 0 errors 2 warnings
----------------------------------------------------------------------------
45 alarms generated by the analysis:
29 invalid memory accesses
4 accesses out of bounds index
6 invalid shifts
1 access to uninitialized left-values
5 others
----------------------------------------------------------------------------
Evaluation of the logical properties reached by the analysis:
Assertions 1113 valid 18 unknown 1 invalid 1132 total
Preconditions 0 valid 0 unknown 0 invalid 0 total
98% of the logical properties reached have been proven.
----------------------------------------------------------------------------
[kernel] Warning: warning CERT:MSC:38 treated as deferred error. See above messages for more information.
[kernel] Frama-C aborted: invalid user input.
Frama-C aborted the analysis after providing the analysis summary. However, it does not point out which file and which line of code has the problem.
Could you please let me know what the possible problems are in this case? And is the analysis finished?
As the header of the line indicates, the message is not emitted by Eva, but by Frama-C's kernel. This error indicates that your code violates the CERT C Coding Standard, more specifically its rule MSC-38, which basically states that it is a bad idea to declare identifiers that belong to the standard library where they are specified as being potentially implemented as a macro. This notably includes assert and errno.
As this rule indicates that the code is not strictly ISO C compliant, it has been decided to treat it as an error by default; but since the issue by itself is unlikely to make the analyzers crash, Frama-C does not abort as soon as it is triggered. This is why you can still launch Eva, which runs flawlessly, before being reminded by the kernel that there is an issue in your code (a first message, with Warning status, was likely output at the beginning of the log).
You can modify the severity status of CERT:MSC:38 using -kernel-warn-key CERT:MSC:38=<status>, where <status> can range from inactive (completely ignored) to abort (emit an error and abort immediately). The complete list of statuses can be found in section 6.2 of the user manual.
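For example, to turn the check off entirely so the run no longer aborts (the source file name here is only a placeholder, not taken from the question):
frama-c -eva -kernel-warn-key CERT:MSC:38=inactive file.c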

After the first run, Jupyter Notebook with Python 3.6.1, using the basic asyncio example, gives: RuntimeError: Event loop is closed

In Jupyter Notebook (Python 3.6.1) I ran the basic Hello World example from the Python docs (18.5.3.1.1. Example: Hello World coroutine) and noticed that it raised a RuntimeError. After trying for a long time to find the problem with the program (my understanding is that the docs may not be totally up to date), I finally noticed that it only does this on the second run, even when tested in a restarted kernel. I have since copied the same small program into two successive cells (In [1] and In [2]) and found that it gives the error on the second run, not the first, and on every run thereafter. This repeats after restarting the kernel.
import asyncio

def hello_world(loop):
    print('Hello World')
    loop.stop()

loop = asyncio.get_event_loop()
# Schedule a call to hello_world()
loop.call_soon(hello_world, loop)
# Blocking call interrupted by loop.stop()
loop.run_forever()
loop.close()
The traceback:
RuntimeError Traceback (most recent call last)
<ipython-input-2-0930271bd896> in <module>()
6 loop = asyncio.get_event_loop()
7 # Blocking call which returns when the hello_world() coroutine
----> 8 loop.run_until_complete(hello_world())
9 loop.close()
/home/pontiac/anaconda3/lib/python3.6/asyncio/base_events.py in run_until_complete(self, future)
441 Return the Future's result, or raise its exception.
442 """
--> 443 self._check_closed()
444
445 new_task = not futures.isfuture(future)
/home/pontiac/anaconda3/lib/python3.6/asyncio/base_events.py in _check_closed(self)
355 def _check_closed(self):
356 if self._closed:
--> 357 raise RuntimeError('Event loop is closed')
358
359 def _asyncgen_finalizer_hook(self, agen):
RuntimeError: Event loop is closed
I don't get this error when running the file in the interpreter with all the debug settings set. I am running this notebook in my recently reinstalled Anaconda setup, which only has Python 3.6.1 installed.
The issue is that loop.close() makes the loop unavailable for future use. That is, you can never use a loop again after calling close. The loop stays around as an object, but almost all methods on the loop will raise an exception once the loop is closed. However, asyncio.get_event_loop() returns the same loop if you call it more than once. You often want this, so that multiple parts of an application get the same event loop.
However, if you plan on closing a loop, you are better off calling asyncio.new_event_loop rather than asyncio.get_event_loop. That will give you a fresh event loop. If you call new_event_loop rather than get_event_loop, you're responsible for making sure that the right loop gets used in all parts of the application that run in this thread. If you want to be able to run the cell multiple times for testing, you could do something like:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
After that, you'll find that asyncio.get_event_loop returns the same thing as loop. So if you do that near the top of your program, you will have a fresh event loop on each run of the code, as in the sketch below.
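Applied to the snippet from the question, a re-runnable version might look like this (a sketch for the Python 3.6 notebook setup described above):
import asyncio

def hello_world(loop):
    print('Hello World')
    loop.stop()

# Create a fresh loop on every run instead of reusing the default (possibly closed) one
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

loop.call_soon(hello_world, loop)   # schedule a call to hello_world()
loop.run_forever()                  # blocking call, interrupted by loop.stop()
loop.close()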

Percentage Reported By PHPUnit: What Does it Mean?

I'm working on my first project in which I'm utilizing PHPUnit for unit testing. Things have been moving along nicely; however, I've started getting the following output with recent tests:
................................................................. 65 / 76 ( 85%)
...........
Time: 1 second, Memory: 30.50Mb
OK (76 tests, 404 assertions)
I cannot find any information about what the "65 / 76 ( 85%)" means.
Does anyone know how to interpret this?
Thanks!
It means the number of tests that have been run so far (test methods actually, or to be even more precise, test method calls, because each test method can be called several times).
65 / 76 ( 85%)
65 of the 76 tests have already run (which is 85% overall).
And as long as you see dots for each of them, all of them passed.
I didn't understand it until I kept writing more tests. Here's a sample of what you see as the number of tests grows:
............................................................... 63 / 152 ( 41%)
............................................................... 126 / 152 ( 82%)
..........................
...it's just a progress indicator of the number of tests that have been run. But as you can see, it still gets to the end.