RISC-V emulator with Vector Extension support

Where can I find a RISC-V emulator that supports the "V" Vector Extension?
I know that the current specification version 0.8 is a draft:
This is a draft of a stable proposal for the vector specification to be used for implementation and evaluation. Once the draft label is removed, version 0.8 is intended to be stable enough to begin developing toolchains, functional simulators, and initial implementations, though it will continue to evolve with minor changes and updates.
But perhaps there is already some initial support in some emulator.

The RISC-V spec suggests riscvOVPsim
A Complete, Fully Functional, Configurable RISC-V Simulator
...
RISC-V Specifications currently supported:
...
RISC-V Instruction Set Manual, RISC-V "V" Vector Extension (with version configurable in the model using the 'vector_version' parameter. 'master' version conforms to specification changes up to 14 December 2019 and is regularly updated to track the evolving specification.)
There's also the RISCV-V V extension simulator, but it supports an older version of the vector extension:
RISC-V vector extension v0.7 (base) simulator implemented in C++.

From the official riscv github channel, there's the Spike RISC-V ISA simulator. Quoting the documentation:
V extension, v1.0 (requires a 64-bit host)
https://github.com/riscv/riscv-isa-sim
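Once Spike is built with vector support, a binary compiled for an rv64gcv target can be run under the proxy kernel roughly as follows. This is a sketch: the binary name is a placeholder, and the exact set of flags depends on your Spike build (see the riscv-isa-sim README).

```shell
# Run a vector-enabled binary under Spike via the proxy kernel (pk).
# "./vector_test" is a placeholder for your own rv64gcv-compiled program.
spike --isa=rv64gcv pk ./vector_test
```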

Related

Truth extensions causing rest of project to downgrade to Guava Android

If I add the com.google.truth.extensions:truth-proto-extension:1.1 jar to my bazel workspace, it seems to totally nuke the classes from com.google.guava:guava:28.2-jre, resulting in errors like
import static com.google.common.collect.ImmutableMap.toImmutableMap;
^
symbol: static toImmutableMap
location: class ImmutableMap
java/com/google/fhir/protogen/ProtoGenerator.java:316: error: cannot find symbol
.collect(toImmutableMap(def -> def.getId().getValue(), def -> def));
^
symbol: method toImmutableMap((def)->def[...]lue(),(def)->def)
location: class ProtoGenerator
Your documentation says
One warning: Truth depends on the “Android” version of Guava, a subset of the “JRE” version.
If your project uses the JRE version, be aware that your build system might select the Android version instead.
If so, you may see “missing symbol” errors.
The easiest fix is usually to add a direct dependency on the newest JRE version of Guava.
Does this mean anything other than the maven dep on com.google.guava:guava:28.2-jre? If not, what's the next easiest fix?
The key word here is "newest": You'll need to depend on (as of this writing) 30.1-jre. I have edited the docs to emphasize this.
(You can see the newest version in various locations, including: Maven Central, Maven Central Search, the Guava GitHub page.)
The problem is:
Some tools (including Gradle as well as the maven_install rule from Bazel's rules_jvm_external) pick the "newest" version of any given artifact among all versions found in your transitive dependencies.
Truth 1.1 depends on version 30.0-android.
30.0-android is considered to be "newer" than 28.2-jre (because 30 is greater than 28).
The -android releases lack the Java 8 APIs.
(So you can actually fix this by depending on any -jre version from 30.0-jre up: 30.0-jre is considered "newer" than 30.0-android because of alphabetical order. Fun!)
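The ordering above can be illustrated directly. For these particular version strings, plain lexicographic string comparison happens to agree with the resolver's ranking (real resolvers parse versions into numeric and qualifier components, so this is only an illustration):

```shell
# Illustrative only: lexicographic comparison agrees with the resolver's
# ranking for these specific strings ("30" > "28", and "jre" > "android").
[[ "30.0-android" > "28.2-jre" ]]     && echo "30.0-android ranks above 28.2-jre"
[[ "30.0-jre"     > "30.0-android" ]] && echo "30.0-jre ranks above 30.0-android"
```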
Unfortunately, the Maven ecosystem doesn't support a good way to offer 2 "flavors" of every release (JRE+Android). (People often suggest the Maven "classifier," but that does not actually solve the problem.)
For the future:
Gradle: Gradle is working with us to provide its own solution, but it's not quite ready yet.
Maven: Maven is unlikely to provide help. (It doesn't even try to pick the "newest" version, let alone support "flavors.")
Bazel: I don't know if rules_jvm_external (which uses Coursier) has any plans to support "flavors." (Editorializing a bit: In an ideal world, I would rather specify all my repo's transitive dependencies and their versions myself, rather than having the build system try to work it out for me. That can help avoid surprises like this one. But that brings its own challenges, and we've made only incremental effort toward addressing them in our own Bazel-based projects.)

How to get Tcl http package 2.8 and install it on Tcl 8.3. I am using http::geturl and need to change the method to PUT

I need the http::geturl -method option to do a PUT. I am using tcl8.3 and http package 2.4. I need http package 2.8 to use the -method option. Where do I get the package, where do I put it (tcl8.3 folder?), and is it compatible with tcl8.3?
You're using an unsupported version of Tcl. (Good grief! 8.3? That's a blast from the past!) Tcl 8.4 is also unsupported (support effectively stopped in 2013), and 8.5 is only really supported for existing code and should not be used for new work. You don't have to switch to 8.6… but it's strongly recommended that you do, for many reasons (such as it being a version that actually builds with current toolchains!).
The package you are interested in, http, is shipped as an integrated part of Tcl. It's not intended for separate use, and newer versions rely on core Tcl features, such as coroutines and decompression streams, that older versions of the language lack. However, the -method option is supported from 8.5 onwards, so you have a range of upgrade options and can use any currently supported version.
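On any Tcl 8.5 or later, the upgrade path looks roughly like this. The URL and payload below are placeholders, and this assumes a server that accepts PUT:

```shell
# Sketch: on Tcl 8.5+ the bundled http package supports -method,
# so a PUT is just -method plus a -query body.
tclsh <<'EOF'
package require http
set tok [::http::geturl "http://example.com/item" \
             -method PUT -query "payload-data"]
puts [::http::status $tok]
::http::cleanup $tok
EOF
```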
(FWIW, the feature that you're asking for was added about 12 years ago. Insisting on sticking with 8.3 — or 8.4 for that matter — is really staying in the time capsule beyond all common sense.)

CLR vs Core CLR

I understand that the CLR in its current state is bound to the Windows OS and provides various services by using Win32 APIs internally.
Since .NET Core is platform-independent, this basically implies the same IL code can run on different OSes. Is CoreCLR OS-specific? Or is CoreCLR code written to take different execution paths depending upon the current execution environment/OS?
From the discussion in coreclr repository:
As far as I am aware the CLR in this repo [coreclr] is identical to the CLR in full .NET and the only differences are in the available API set in corefx.
... but it seems that at least C++/CLI is missing...
To answer some of the other questions:
Since .NET Core is platform-independent, this basically implies the same IL code can run on different OSes
Yes. IL is a custom "language". You can write an interpreter/runtime for it that can run on any platform. This is true of other intermediate representations in other languages too, including Java bytecode, LLVM IR, Python bytecode, and so on.
Is CoreCLR OS-specific? Or is CoreCLR code written to take different execution paths depending upon the current execution environment/OS?
It's a mix. A particular build of CoreCLR will only work on one OS, because it has been compiled to use features from that OS (including the OS-specific compiler, linking against the right OS-specific libraries, and running code specific to that OS). There is also a Platform Abstraction Layer in CoreCLR, so that developers can code against one API - based on the Win32 API - and the PAL converts it to the right syscalls on Linux and Mac. As pointed out in a comment by @HansPassant, there's a large number of #ifdefs, on both the native and managed sides of CoreCLR.

Qt + VTK +Ubuntu on VirtualBox

I need to run a Qt project with VTK on Ubuntu and I'm using VirtualBox, but I have an error:
GL version 2.1 with the gpu_shader4 extension is not supported by your graphics driver but is required for the new OpenGL rendering backend. Please update your OpenGL driver. If you are using Mesa please make sure you have version 10.6.5 or later and make sure your driver in Mesa supports OpenGL 3.2.
Recent versions of VTK use a new rendering backend by default. In the CMake cache file used to configure your build, the corresponding CMake variable VTK_RENDERING_BACKEND has the value "OpenGL2", which assumes a minimum OpenGL API version of 2.1. The problem is that a vanilla VirtualBox installation does not grant access to 3D acceleration by default, presumably because it cannot infer this information from the host system.
So you have several options, depending on your needs and constraints. You could install the VirtualBox Guest Additions to enable hardware 3D acceleration, which grants access to a newer version of the OpenGL API and ultimately uses the host to perform the requested 3D operations. Alternatively, you could use a recent version of the Mesa 3D library to perform the needed 3D operations on the CPU (preferable if the host has no graphics hardware). For a presentation of its features, you can take a look here
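The two options might look roughly like this on the command line. The VM name and application binary are placeholders, and this assumes the Guest Additions are installed separately inside the guest:

```shell
# Option 1 (run on the host): enable 3D acceleration for the VM,
# then install the Guest Additions inside the guest.
VBoxManage modifyvm "MyUbuntuVM" --accelerate3d on

# Option 2 (run inside the guest): check what Mesa currently reports,
# then force CPU (llvmpipe) software rendering if no usable hardware
# driver is present. "./my_vtk_app" is a placeholder binary name.
glxinfo | grep "OpenGL version"
LIBGL_ALWAYS_SOFTWARE=1 ./my_vtk_app
```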

Implementation of frama-clang

So far I've found the STANCE project (Stance project website), a reader (found on the website), and a presentation (also found on the website). Also, apparently there will be a Frama-C day taking place on June 20th, where frama-clang is going to be introduced.
However, I am wondering whether there is an implementation to play around with frama-clang.
As of a few minutes ago, there is: http://frama-c.com/frama-clang.html (don't forget to read the Caveat part). It is released as a new plug-in, under LGPL 2.1. Frama-Clang is compatible with Frama-C Aluminium (i.e. the latest Frama-C version so far) and clang/llvm 3.8 (be sure to either use the dev packages of your distribution or compile clang by hand).
