Why does running memtest.py on gem5 give an error with --cpu-type arguments? - simulator

memtest.py works fine this way:
build/X86/gem5.opt configs/example/memtest.py
But when I pass it arguments, it gives a "no such option" error:
build/X86/gem5.opt configs/example/memtest.py --cpu-type=TimingSimpleCPU
gem5 Simulator System. http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.
gem5 compiled Jul 27 2018 14:19:35
gem5 started Sep 17 2018 15:31:03
gem5 executing on 2030044470, pid 5045
command line: build/X86/gem5.opt configs/example/memtest.py --cpu-type=TimingSimpleCPU
Usage: memtest.py [options]
memtest.py: error: no such option: --cpu-type
On the other hand, se.py and fs.py work fine with additional arguments:
build/X86/gem5.opt configs/example/se.py -c /home/prakhar/gem5/tests/test-progs/hello/bin/x86/linux/hello --cpu-type=TimingSimpleCPU
Is there any way to run memtest.py with --cpu-type and --mem-type arguments?

As the error says, there is no such option. The --cpu-type option is added for se.py and fs.py by the call to
Options.addCommonOptions(parser)
You can add both --cpu-type and --mem-type manually, by doing something along the lines of:
from m5.util import addToPath, fatal
addToPath('../')
from common import CpuConfig
from common import MemConfig
And adding the options to the parser
parser = optparse.OptionParser()
# Other parser options
parser.add_option("--cpu-type", type="choice", default="AtomicSimpleCPU",
                  choices=CpuConfig.cpu_names(),
                  help="type of cpu to run with")
parser.add_option("--mem-type", type="choice", default="DDR3_1600_8x8",
                  choices=MemConfig.mem_names(),
                  help="type of memory to use")
The options are then available as options.cpu_type and options.mem_type. You can check the other examples (in configs/example/) to see what else you may need to modify for your purposes.
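The parsed options can then be consumed roughly like this (a sketch, assuming the CpuConfig.get() and MemConfig.config_mem() helpers present in gem5 releases of this era; check configs/common/ in your tree):
(options, args) = parser.parse_args()

# Look up the CPU class named by --cpu-type, e.g. TimingSimpleCPU.
cpu_class = CpuConfig.get(options.cpu_type)
system.cpu = cpu_class(cpu_id=0)

# config_mem reads options.mem_type and wires up the memory controllers.
MemConfig.config_mem(options, system)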

I was struggling with this problem too, and then I found this line in gem5/configs/example/memtest.py:
system = System(physmem = SimpleMemory(),cache_line_size = block_size)
If you want to run with another memory, e.g. DRAMSim2, you can change this line to:
system = System(physmem = DRAMSim2(),cache_line_size = block_size)
This enables running memtest.py with DRAMSim2 as the memory type. You can now run it as:
build/X86/gem5.opt configs/example/memtest.py
Also, to change the CPU type, you can refer to these lines:
if options.atomic:
    root.system.mem_mode = 'atomic'
else:
    root.system.mem_mode = 'timing'
The default CPU type is timing, and you can change it to atomic by adding --atomic to the command.
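If you would rather switch memories from the command line than edit the script each time, you can combine the two approaches along these lines (a sketch; DRAMSim2 is only available if your gem5 was built with DRAMSim2 support):
parser.add_option("--use-dramsim2", action="store_true", default=False,
                  help="back physmem with DRAMSim2 instead of SimpleMemory")

(options, args) = parser.parse_args()

# Pick the memory class based on the new flag.
physmem = DRAMSim2() if options.use_dramsim2 else SimpleMemory()
system = System(physmem = physmem, cache_line_size = block_size)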

Related

Dataflow from Colab issue

I'm trying to run a Dataflow job from Colab and getting the following worker error:
sdk_worker_main.py: error: argument --flexrs_goal: invalid choice: '/root/.local/share/jupyter/runtime/kernel-1dbd101c-a79e-432e-89b3-5ba68df104d7.json' (choose from 'COST_OPTIMIZED', 'SPEED_OPTIMIZED')
I haven't provided the flexrs_goal argument, and if I do it doesn't fix this issue. Here are my pipeline options:
beam_options = PipelineOptions(
    runner='DataflowRunner',
    project=...,
    job_name=...,
    temp_location=...,
    subnetwork='regions/us-west1/subnetworks/default',
    region='us-west1'
)
My pipeline is very simple, it's just:
with beam.Pipeline(options=beam_options) as pipeline:
    (pipeline
     | beam.io.ReadFromBigQuery(
         query=f'SELECT column FROM {BQ_TABLE} LIMIT 100')
     | beam.Map(print))
It looks like the command line args for the sdk worker are getting polluted by jupyter somehow. I've rolled back to the past two apache-beam library versions and it hasn't helped. I could move over to Vertex Workbench but I've invested a lot in this Colab notebook (plus I like the easy sharing) and I'd rather not migrate.
Figured it out. The PipelineOptions constructor will pull in sys.argv if no parameter is given for the first argument (called flags). In my case it was pulling in the command line args that my jupyter notebook was started with and passing them as Beam options to the workers.
I fixed my issue by doing this:
beam_options = PipelineOptions(
    flags=[],
    ...
)
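For completeness, here is a minimal version of the fixed setup (the project, job, bucket, and table names are placeholders of mine, not values from the thread):
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# flags=[] stops PipelineOptions from swallowing sys.argv, which in a
# notebook holds jupyter's kernel arguments rather than Beam options.
beam_options = PipelineOptions(
    flags=[],
    runner='DataflowRunner',
    project='my-project',                # placeholder
    job_name='colab-test-job',           # placeholder
    temp_location='gs://my-bucket/tmp',  # placeholder
    subnetwork='regions/us-west1/subnetworks/default',
    region='us-west1',
)

with beam.Pipeline(options=beam_options) as pipeline:
    (pipeline
     | beam.io.ReadFromBigQuery(
         query='SELECT column FROM my_dataset.my_table LIMIT 100')  # placeholder table
     | beam.Map(print))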

How to avoid RuntimeError when calling __dict__ on a module?

It appears with some big modules, such as matplotlib. For example, this expression:
import importlib
obj = importlib.import_module('matplotlib')
obj_entries = obj.__dict__
Between runs, the number of entries in obj_entries can vary, from 108 up to the expected 157. In particular, pyplot can be missing, like some other submodules.
It works reliably in manual debug mode, with a len() computed right after the dict is extracted, but in automated runs it doesn't work well.
The following error occurs:
RuntimeError: dictionary changed size during iteration
python-BaseException
This is plain Python 3.10 on Windows; swapping versions changes nothing at all.
During my attempts I found some interesting behavior: calling repr() on the module before reading its __dict__ helps. But if the module object is passed between classes as a variable, lazy importing is more likely to be happening? For now there is evidence that not all names show up, while the command-line interpreter does the opposite and returns what is expected. So this chunk of code helps bypass the behavior...
Note: I use pkgutil.iter_modules(some_path) to enumerate modules in pkgutil's internal ModuleInfo form:
import importlib.util
import pkgutil

for module_info in pkgutil.iter_modules(some_path):  # some_path: list of directories to scan
    name = module_info.name
    finder = module_info.module_finder
    spec = finder.find_spec(name)
    module_obj = importlib.util.module_from_spec(spec)
    loader = module_obj.__loader__
    loader.exec_module(module_obj)
I'm still unfamiliar with the internals of the import machinery, so links to a more detailed (spot-on) explanation would be helpful.
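The iteration error itself can usually be avoided by taking a snapshot of the module dict before walking it. A minimal sketch (a standard workaround, not something from this thread):
import importlib

obj = importlib.import_module('matplotlib')

# dict() copies the items at one instant, so later lazy imports that
# mutate matplotlib.__dict__ cannot raise "dictionary changed size
# during iteration" while we walk the copy.
obj_entries = dict(obj.__dict__)
for name, value in obj_entries.items():
    print(name, type(value))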

How to silence linux kernel checkpatch.pl ERROR: Missing Signed-off-by: line(s)

I use the Linux kernel style in my C code, even though it's not related to the kernel. Running checkpatch.pl produces ERROR: Missing Signed-off-by: line(s). How can I ignore it?
To intentionally silence the ERROR: Missing Signed-off-by: line(s) one can pass the --no-signoff parameter, e.g.:
git show | checkpatch.pl --no-tree --no-signoff
This can also be added on a new line to the .checkpatch.conf file to avoid having to type it.
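For example, a .checkpatch.conf carrying both options from the command above, one per line:
--no-tree
--no-signoff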

Have the Arduino IDE set compiler warnings to error

Is there a way to set the compiler warnings to be interpreted as an error in the Arduino IDE?
Or any generic way to set GCC compiler options?
I have looked at the ~/.arduino/preferences.txt file, but I found nothing that indicates fine-tuned control. I also looked if I could set GCC options via environment variables, but I did not find anything.
I don't want verbose compiler output (which you can enable in the IDE); that is far too much distracting, non-essential information, and I don't want to waste time reading through it.
I want for a compilation to stop on a warning, so code can be cleaned up. My preference would be to be able to set -Werror= options, but a generic -Werror will do for the small code size of .ino projects.
Addendum:
Based on the suggestion in the selected answer, I implemented an avr-g++ script and put that in the path before the normal avr-g++. For that I changed the Arduino command as follows:
-export PATH="${APPDIR}/java/bin:${PATH}"
+export ORGPATH="${APPDIR}/java/bin:${PATH}"
+export PATH="${APPDIR}/extra:${ORGPATH}"
And in the new directory extra in the APPDIR (where the Arduino command is located), I have an avr-g++ which is a Python script:
#!/usr/bin/env python
import os
import sys
import subprocess

werr = '-Werror'
wall = '-Wall'
# Forward all arguments to the real avr-g++.
cmd = ['avr-g++'] + sys.argv[1:]
# Restore the original PATH so 'avr-g++' resolves to the real compiler,
# not to this wrapper.
os.environ['PATH'] = os.environ['ORGPATH']
fname = sys.argv[-2][:]
# Only adjust options when compiling a source file; the IDE builds in
# /tmp, so the second-to-last argument starts with /tmp.
if cmd[-2].startswith('/tmp'):
    #print fname, list(fname) # this looks strange
    for i, c in enumerate(cmd):
        if c == '-w':
            cmd[i] = wall  # replace "suppress warnings" with "all warnings"
            break
    cmd.insert(1, werr)  # treat warnings as errors
subprocess.call(cmd)
So you replace the first command with the original compiler name and reset the environment so that the extra directory is excluded.
The fname is actually strange: if you print it, it shows only abc.cpp, but its length is much larger and it actually starts with /tmp. So I check for that to decide whether to add/update the compile options.
It looks like you are on Linux. Arduino is a script, so you can set PATH in the script to start with a directory containing a program named avr-g++. The Java-based IDE should then pick up the compiler from there, should it not?
That program then calls the normal /usr/bin/avr-g++ with the extra options.
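A minimal version of such a wrapper might look like this (a sketch; it assumes the real compiler lives at /usr/bin/avr-g++):
#!/usr/bin/env python
# Minimal wrapper sketch: add -Werror and forward all arguments to the
# real compiler (assumed to be /usr/bin/avr-g++).
import subprocess
import sys

sys.exit(subprocess.call(['/usr/bin/avr-g++', '-Werror'] + sys.argv[1:]))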
One option you have is to compile your sketches from the command line. Take a look at this makefile.

Xcode 4 cannot watch the value of variables

It's a bit annoying that when I hit a breakpoint in Xcode 4, the values of Watch Expressions are always grayed out. To get around it, I have to create dummy variables pointing to the things I want to watch.
The log says the following errors when I run the app:
warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.3.3 (8J2)/Symbols/System/Library/Frameworks/IOKit.framework/IOKit (file not found).
warning: Tried to remove a non-existent library: /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.3.3 (8J2)/Symbols/System/Library/Frameworks/IOKit.framework/IOKit
Current language: auto; currently objective-c++
warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.3.3 (8J2)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib (file not found).
How can I fix this?
As for myself, I debug variables using two handy GDB console commands. You can enter them in the debug console, at the (gdb) prompt, while stopped at a breakpoint. I use the "p" command to print basic C-type variables:
p [[[self pointerToMyClass] anotherPointerToOtherClass] someIntValue]
And I use the "po" command to print the contents of arrays and to inspect objects:
po [[[self pointerToMyClass] anotherPointerToOtherClass] someNSArray]
po [[[self pointerToMyClass] anotherPointerToOtherClass] myUIImageView]
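For a plain local variable, the same commands work directly at the (gdb) prompt (myCounter and myArray are placeholder names):
p myCounter
po myArray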
