I want to create one 16-bit vector from two 8-bit vectors, but I get errors like the ones below. How can I solve this?
LIBRARY ieee;
USE ieee.std_logic_1164.all;
USE ieee.std_logic_arith.all;
ENTITY Binary2Gray IS
-- Declarations
port(data : in STD_LOGIC_VECTOR (3 downto 0);
data_out : inout STD_LOGIC_VECTOR (3 downto 0);
data1 : inout std_logic_vector (1 downto 0);
data2 : inout std_logic_vector (1 downto 0);
CLK_I : in std_logic;
y1 : out std_logic_vector (7 downto 0);
y2 : out std_logic_vector (7 downto 0);
op : out std_logic_vector (15 downto 0)
);
END Binary2Gray ;
-----------------------------
ARCHITECTURE rtl OF Binary2Gray IS
signal op : std_logic_vector (15 downto 0);
begin
process(CLK_I)
BEGIN
data_out(3) <=data(3);
data_out(2) <=data(3) xor data (2);
data_out(1) <=data(2) xor data (1);
data_out(0) <=data(1) xor data (0);
label_1: for data_out in 0 to 3 loop
if(data_out = 0 ) then
data1(0) <=data(1) xor data (0);
elsif (data_out = 1 ) then
data1(1) <=data(2) xor data (1);
elsif (data_out = 2 ) then
data2(0) <=data(3) xor data (2);
else
data2(1) <=data(3);
end if;
end loop label_1;
end process;
with data1 select y1 <=
"00110011" when "00",
"00111101" when "01",
"11010011" when "10",
"11011101" when others;
with data2 select y2 <=
"00110011" when "00",
"00111101" when "01",
"11010011" when "10",
"11011101" when others;
op <= y1 & y2 ;
END rtl;
Errors:
# Error: ELAB1_0008: QAM.vhd : (56, 8): Cannot read output : "y1".
# Error: ELAB1_0008: QAM.vhd : (56, 8): Cannot read output : "y2".
In VHDL-2002 (and earlier) it is not allowed to read an output port like y1
and y2, hence the error.
Possible fixes are any of:
declare y1 and y2 as buffer ports
create intermediate signals y1_sig and y2_sig with the values and assign these to y1, y2, and op
use VHDL-2008 if possible in the tool chain.
Note that op should not also be declared as a signal when it is already an output port. Note
also that the process probably does not work as expected: it is not a clocked process, because
there is no if rising_edge(CLK_I) then statement, nor a combinatorial process, because data is
missing from the sensitivity list.
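For illustration, here is a minimal sketch of the intermediate-signal fix combined with a clocked version of the process. It assumes the entity declaration from the question; the duplicate signal op declaration is dropped, the loop/if chain is replaced by direct assignments, and registering on CLK_I is just one way to make the process well-formed (a combinatorial process with data in the sensitivity list would also do).
architecture rtl_fixed of Binary2Gray is
  -- intermediate signals: these can be read, unlike out ports in VHDL-2002
  signal y1_sig : std_logic_vector(7 downto 0);
  signal y2_sig : std_logic_vector(7 downto 0);
begin
  process(CLK_I)
  begin
    if rising_edge(CLK_I) then  -- makes this a proper clocked process
      data_out(3) <= data(3);
      data_out(2) <= data(3) xor data(2);
      data_out(1) <= data(2) xor data(1);
      data_out(0) <= data(1) xor data(0);
      data1(1)    <= data(2) xor data(1);
      data1(0)    <= data(1) xor data(0);
      data2(1)    <= data(3);
      data2(0)    <= data(3) xor data(2);
    end if;
  end process;

  with data1 select y1_sig <=
    "00110011" when "00",
    "00111101" when "01",
    "11010011" when "10",
    "11011101" when others;

  with data2 select y2_sig <=
    "00110011" when "00",
    "00111101" when "01",
    "11010011" when "10",
    "11011101" when others;

  y1 <= y1_sig;           -- drive the out ports from the internal signals
  y2 <= y2_sig;
  op <= y1_sig & y2_sig;  -- reading y1_sig/y2_sig is allowed
end architecture rtl_fixed;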
Related
I am trying to convert from UTF-16 to UTF-8; this is a test program:
with Ada.Text_IO;
with Ada.Strings.UTF_Encoding.Conversions;
use Ada.Text_IO;
use Ada.Strings.Utf_Encoding.Conversions;
use Ada.Strings.UTF_Encoding;
procedure Main is
Str_8: UTF_8_String := "𝄞";
Str_16: UTF_16_Wide_String := Convert(Str_8);
Str_8_New: UTF_8_String := Convert(Str_16);
begin
if Str_8 = Str_8_New then
Put_Line("OK");
else
Put_Line("Bug");
end if;
end Main;
With the latest GNAT Community it prints "Bug". Is this a bug in the implementation of the UTF conversion functions, or am I doing something wrong here?
Edit: For reference, this issue has been accepted as Bug 95953 / Bug 95959.
As shown here, @DeeDee has identified a bug in the implementation of Convert for UTF_16 to UTF_8. The problem arises in byte three of the four-byte value for code points in the range U+10000 to U+10FFFF, shown here. The source documents the relevant bit fields:
-- Codes in the range 16#10000# - 16#10FFFF#
-- UTF-16: 110110zzzzyyyyyy 110111yyxxxxxxxx
-- UTF-8: 11110zzz 10zzyyyy 10yyyyxx 10xxxxxx
-- Note: zzzzz in the output is input zzzz + 1
Byte three is constructed as follows:
Result (Len + 3) :=
Character'Val
(2#10_000000# or Shift_Left (yyyyyyyy and 2#1111#, 4)
or Shift_Right (xxxxxxxx, 6));
While the low four bits of yyyyyyyy are used to construct byte three, they need to be shifted only two places left, not four, to make room for the top two bits of xxxxxxxx. The correct formulation should be this:
Result (Len + 3) :=
Character'Val
(2#10_000000# or Shift_Left (yyyyyyyy and 2#1111#, 2)
or Shift_Right (xxxxxxxx, 6));
For reference, the complete example below recapitulates the original implementation, with enough additions to study the problem in isolation. The output shows the code point, the expected binary representation of the UTF-8 encoding, the conversion to UTF-16, the incorrect UTF-8 conversion, and the correct UTF-8 conversion.
Codepoint: 16#1D11E#
UTF-8: 4: 2#11110000# 2#10011101# 2#10000100# 2#10011110#
UTF-16: 2: 2#1101100000110100# 2#1101110100011110#
UTF-8: 4: 2#11110000# 2#10011101# 2#10010000# 2#10011110#
UTF-8: 4: 2#11110000# 2#10011101# 2#10000100# 2#10011110#
OK
Code:
-- https://stackoverflow.com/q/62564638/230513
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;
with Ada.Strings.UTF_Encoding; use Ada.Strings.UTF_Encoding;
with Ada.Strings.UTF_Encoding.Conversions;
use Ada.Strings.UTF_Encoding.Conversions;
with Ada.Strings.UTF_Encoding.Wide_Wide_Strings;
use Ada.Strings.UTF_Encoding.Wide_Wide_Strings;
with Interfaces; use Interfaces;
with Unchecked_Conversion;
procedure UTFTest is
-- http://www.fileformat.info/info/unicode/char/1d11e/index.htm
Clef : constant Wide_Wide_String :=
(1 => Wide_Wide_Character'Val (16#1D11E#));
Str_8 : constant UTF_8_String := Encode (Clef);
Str_16 : constant UTF_16_Wide_String := Convert (Str_8);
Str_8_New : constant UTF_8_String := Convert (Str_16);
My_Str_8 : UTF_8_String := Convert (Str_16);
function To_Unsigned_16 is new Unchecked_Conversion (Wide_Character,
Interfaces.Unsigned_16);
procedure Raise_Encoding_Error (Index : Natural) is
Val : constant String := Index'Img;
begin
raise Encoding_Error
with "bad input at Item (" & Val (Val'First + 1 .. Val'Last) & ')';
end Raise_Encoding_Error;
function My_Convert (Item : UTF_16_Wide_String;
Output_BOM : Boolean := False) return UTF_8_String
is
Result : UTF_8_String (1 .. 3 * Item'Length + 3);
-- Worst case is 3 output codes for each input code + BOM space
Len : Natural;
-- Number of result codes stored
Iptr : Natural;
-- Pointer to next input character
C1, C2 : Unsigned_16;
zzzzz : Unsigned_16;
yyyyyyyy : Unsigned_16;
xxxxxxxx : Unsigned_16;
-- Components of double length case
begin
Iptr := Item'First;
-- Skip BOM at start of input
if Item'Length > 0 and then Item (Iptr) = BOM_16 (1) then
Iptr := Iptr + 1;
end if;
-- Generate output BOM if required
if Output_BOM then
Result (1 .. 3) := BOM_8;
Len := 3;
else
Len := 0;
end if;
-- Loop through input
while Iptr <= Item'Last loop
C1 := To_Unsigned_16 (Item (Iptr));
Iptr := Iptr + 1;
-- Codes in the range 16#0000# - 16#007F#
-- UTF-16: 000000000xxxxxxx
-- UTF-8: 0xxxxxxx
if C1 <= 16#007F# then
Result (Len + 1) := Character'Val (C1);
Len := Len + 1;
-- Codes in the range 16#80# - 16#7FF#
-- UTF-16: 00000yyyxxxxxxxx
-- UTF-8: 110yyyxx 10xxxxxx
elsif C1 <= 16#07FF# then
Result (Len + 1) :=
Character'Val (2#110_00000# or Shift_Right (C1, 6));
Result (Len + 2) :=
Character'Val (2#10_000000# or (C1 and 2#00_111111#));
Len := Len + 2;
-- Codes in the range 16#800# - 16#D7FF# or 16#E000# - 16#FFFF#
-- UTF-16: yyyyyyyyxxxxxxxx
-- UTF-8: 1110yyyy 10yyyyxx 10xxxxxx
elsif C1 <= 16#D7FF# or else C1 >= 16#E000# then
Result (Len + 1) :=
Character'Val (2#1110_0000# or Shift_Right (C1, 12));
Result (Len + 2) :=
Character'Val
(2#10_000000# or (Shift_Right (C1, 6) and 2#00_111111#));
Result (Len + 3) :=
Character'Val (2#10_000000# or (C1 and 2#00_111111#));
Len := Len + 3;
-- Codes in the range 16#10000# - 16#10FFFF#
-- UTF-16: 110110zzzzyyyyyy 110111yyxxxxxxxx
-- UTF-8: 11110zzz 10zzyyyy 10yyyyxx 10xxxxxx
-- Note: zzzzz in the output is input zzzz + 1
elsif C1 <= 2#110110_11_11111111# then
if Iptr > Item'Last then
Raise_Encoding_Error (Iptr - 1);
else
C2 := To_Unsigned_16 (Item (Iptr));
Iptr := Iptr + 1;
end if;
if (C2 and 2#111111_00_00000000#) /= 2#110111_00_00000000# then
Raise_Encoding_Error (Iptr - 1);
end if;
zzzzz := (Shift_Right (C1, 6) and 2#1111#) + 1;
yyyyyyyy :=
((Shift_Left (C1, 2) and 2#111111_00#) or
(Shift_Right (C2, 8) and 2#000000_11#));
xxxxxxxx := C2 and 2#11111111#;
Result (Len + 1) :=
Character'Val (2#11110_000# or (Shift_Right (zzzzz, 2)));
Result (Len + 2) :=
Character'Val
(2#10_000000# or Shift_Left (zzzzz and 2#11#, 4) or
Shift_Right (yyyyyyyy, 4));
Result (Len + 3) :=
Character'Val
(2#10_000000# or Shift_Left (yyyyyyyy and 2#1111#, 2) or
Shift_Right (xxxxxxxx, 6));
Result (Len + 4) :=
Character'Val (2#10_000000# or (xxxxxxxx and 2#00_111111#));
Len := Len + 4;
-- Error if input in 16#DC00# - 16#DFFF# (2nd surrogate with no 1st)
else
Raise_Encoding_Error (Iptr - 2);
end if;
end loop;
return Result (1 .. Len);
end My_Convert;
procedure Show (S : String) is
begin
Put(" UTF-8: ");
Put (S'Length, 1);
Put (":");
for C of S loop
Put (Character'Pos (C), 12, 2);
end loop;
New_Line;
end Show;
procedure Show (S : Wide_String) is
begin
Put("UTF-16: ");
Put (S'Length, 1);
Put (":");
for C of S loop
Put (Wide_Character'Pos (C), 20, 2);
end loop;
New_Line;
end Show;
begin
Put ("Codepoint:");
Put (Wide_Wide_Character'Pos (Clef (1)), 10, 16);
New_Line;
Show (Str_8);
Show (Str_16);
Show (Str_8_New);
My_Str_8 := My_Convert (Str_16);
Show (My_Str_8);
if Str_8 = My_Str_8 then
Put_Line ("OK");
else
Put_Line ("Bug");
end if;
end UTFTest;
See also Bug 95953 / Bug 95959.
There's a mismatch between the 3rd byte of Str_8 and Str_8_New, which causes the round-trip to fail. This seems to be a bug.
main.adb
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;
with Ada.Strings.UTF_Encoding.Conversions;
use Ada.Strings.UTF_Encoding;
use Ada.Strings.UTF_Encoding.Conversions;
procedure Main is
-- UTF8 encoded Clef (U+1D11E)
-- (e.g.) https://unicode-table.com/en/1D11E/
Str_8 : constant UTF_8_String :=
Character'Val (16#F0#) &
Character'Val (16#9D#) &
Character'Val (16#84#) &
Character'Val (16#9E#);
Str_16 : constant UTF_16_Wide_String := Convert (Str_8);
Str_8_New : constant UTF_8_String := Convert (Str_16);
begin
for I in Str_8'Range loop
Put (Character'Pos (Str_8 (I)), 7, 16);
end loop;
New_Line (2);
for I in Str_16'Range loop
Put (Wide_Character'Pos (Str_16 (I)), 9, 16);
end loop;
New_Line (2);
for I in Str_8_New'Range loop
Put (Character'Pos (Str_8_New (I)), 7, 16);
end loop;
New_Line (2);
end Main;
output
$ ./main
16#F0# 16#9D# 16#84# 16#9E#
16#D834# 16#DD1E#
16#F0# 16#9D# 16#90# 16#9E#
I've got a 2x2 array of 4-bit std_logic_vector in my VHDL, and when I simulate it my tool only gives me a 16-bit std_logic_vector; which bits are which?
More generally: how does VHDL store multidimensional arrays?
From the discussion on this answer it seems that there is no fixed way to store these bits and it's up to the tools you use to provide a correct interface. Mine are apparently a bit wonky.
I did a quick experiment, and FOR MY TOOLS it appears that for an array A you get the elements A(i,j) in the order you specify them. So if you do:
type array_type is array (integer range 0 to 1,integer range 0 to 2) of std_logic_vector(3 downto 0);
you get: A(0,0), A(0,1), A(0,2), A(1,0), A(1,1), A(1,2).
But if you declare your array as:
type array_type is array (integer range 1 downto 0,integer range 2 downto 0) of std_logic_vector(3 downto 0);
(Note we're now using downto) you'll get A(1,2), A(1,1), A(1,0), A(0,2), A(0,1), A(0,0).
I'm making this deduction based on running the following code:
library ieee;
use ieee.std_logic_1164.all;
entity arrays is
port (
x : in std_logic_vector(3 downto 0);
y : out std_logic_vector(3 downto 0)
);
end entity arrays;
architecture rtl of arrays is
type array_type is array (integer range 0 to 1,integer range 0 to 2) of std_logic_vector(3 downto 0);
signal my_array : array_type;
begin
my_array(0,0) <= "0001";
my_array(0,1) <= "0010";
my_array(0,2) <= "0011";
my_array(1,0) <= "0100";
my_array(1,1) <= "0101";
my_array(1,2) <= "0110";
y <= x;
end architecture rtl;
and getting my_array to be 123456, where each character is a hex digit.
Then I switched the declaration round and got 654321.
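If the packing order actually matters downstream, one way around the tool dependence is to flatten the array yourself so the mapping is explicit. The following is only a sketch, not part of the answer above; the entity name flatten_2x2 and the row-major packing (A(0,0) in the top nibble) are arbitrary choices made for this example.
library ieee;
use ieee.std_logic_1164.all;

entity flatten_2x2 is
  port (
    flat : out std_logic_vector(15 downto 0)  -- explicit, tool-independent layout
  );
end entity flatten_2x2;

architecture rtl of flatten_2x2 is
  type array_type is array (integer range 0 to 1, integer range 0 to 1) of std_logic_vector(3 downto 0);
  signal my_array : array_type := (("0001", "0010"),
                                   ("0011", "0100"));
begin
  -- pack row-major: A(0,0) ends up in flat(15 downto 12), A(1,1) in flat(3 downto 0)
  gen_i : for i in 0 to 1 generate
    gen_j : for j in 0 to 1 generate
      flat(15 - 8*i - 4*j downto 12 - 8*i - 4*j) <= my_array(i, j);
    end generate gen_j;
  end generate gen_i;
end architecture rtl;
Whatever the simulator then chooses to show for my_array itself, flat always has A(0,0) in bits 15 downto 12, A(0,1) in bits 11 downto 8, and so on.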
I have been trying to implement a method by which I can concatenate an array of vectors into a single vector. Essentially I need something like:
data_received(((rx_length_int + 5) * 8) downto 0) <= rx_ident & rx_length & rx_data & rx_checksum;
data_received(BUILD2_RX_PKT_LEN downto ((rx_length_int + 5) * 8)) <= (others => '0');
where BUILD2_RX_PKT_LEN is a constant size, rx_data has a variable number of bytes, but is defined as:
type t_rx_data is array (0 to MAX_PLD) of STD_LOGIC_VECTOR((ADDRESS_WIDTH - 1) downto 0);
I have implemented a few methods, such as a for loop to iterate through rx_data up to rx_length_int, but this has issues with concatenation to data_received as it grows in size... I'm sure there is a very simple solution to this, but I have been unable to come up with one. Any help would be appreciated.
Instead of an unconstrained aggregate, (others => '0'), which relies on the result type to constrain the range, you can build an aggregate with a specified range, such as (7 downto 2 => '0').
So why not
data_received <= (BUILD2_RX_PKT_LEN downto ((rx_length_int + 5) * 8) => '0')
& rx_ident & rx_length & rx_data & rx_checksum;
However it's unreadably clumsy. A better approach would be a padding function:
function pad_packet (data : std_logic_vector) return std_logic_vector is
variable temp : std_logic_vector (BUILD2_RX_PKT_LEN downto 1) := (others => '0');
-- NB "downto 0" would be an off-by-1 error if LEN is actual length
-- Initialise the whole vector to zero
begin
temp (data'length downto 1) := data;
return temp;
end pad_packet;
...
data_received <= pad_packet ( rx_ident & rx_length & rx_data & rx_checksum );
Much clearer...
I'm trying to do a few mathematical operations on integers in a piece of VHDL code, but when I try to compile, the tool says "0 definitions of operator "+" match here". Here is what I'm trying to do:
for i in 0 to arr_size - 1 loop
for j in 0 to arr_size - 1 loop
for k in 0 to arr_size - 1 loop
for l in 0 to arr_size - 1 loop
for m in 0 to arr_size - 1 loop
mega_array(i)(j)(k)(l)(m) <= i*(arr_size**4) + j*(arr_size**3) + k*(arr_size**2) + l*(arr_size**1) + m*(arr_size**0);
end loop;
end loop;
end loop;
end loop;
end loop;
The problem was encountered in the line where mega_array is set. Note that this whole block is in a process.
Additionally:
arr_size : integer := 4;
sig_size : integer := 32;
type \1-line\ is array (arr_size - 1 downto 0) of unsigned (sig_size - 1 downto 0);
type square is array (arr_size - 1 downto 0) of \1-line\;
type cube is array (arr_size - 1 downto 0) of square;
type hypercube is array (arr_size - 1 downto 0) of cube;
type \5-cube\ is array (arr_size - 1 downto 0) of hypercube;
signal mega_array : \5-cube\;
Reading your older post, mega_array was an array with 4 levels of indexing and an unsigned at the lowest level. In the code in this question I see 5 levels of indexing, so the fifth index selects a single bit, and you cannot assign an integer to a std_logic.
Could it be that this code is what you want?
for i in 0 to arr_size - 1 loop -- 5-cube
for j in 0 to arr_size - 1 loop -- hypercube
for k in 0 to arr_size - 1 loop -- cube
for l in 0 to arr_size - 1 loop -- square
for m in 0 to arr_size - 1 loop -- 1-line
mega_array(i)(j)(k)(l) <= to_unsigned(i*(arr_size**4) + j*(arr_size**3) + k*(arr_size**2) + l*(arr_size**1), 32);
end loop;
end loop;
end loop;
end loop;
end loop;
The to_unsigned function converts the integer to an unsigned, which is the element type of 1-line. The second parameter is the size of the vector to convert the integer into; it must be the same as the size of the 1-line elements (32 in this case).
My CPU register contains a binary integer 0101, equal to the decimal number 5:
0101 ( 4 + 1 = 5 )
I want the register to contain instead the binary integer equal to decimal 10, as if the original binary number 0101 were ternary (base 3) and every digit happens to be either 0 or 1:
0101 ( 9 + 1 = 10 )
How can I do this on a contemporary CPU or GPU with 1. the fewest memory reads and 2. the fewest hardware instructions?
Use an accumulator. C-ish Pseudocode:
var accumulator = 0
foreach digit in string
accumulator = accumulator * 3 + (digit - '0')
return accumulator
To speed up the multiply by 3, you might use ((accumulator << 1) + accumulator), but a good compiler will be able to do that for you.
If a large percentage of your numbers are within a relatively small range, you can also pregenerate a lookup table to make the transformation from base 2 to base 3 instantaneous (using the base-2 value as the index). You can also use the lookup table to accelerate conversion of the first N digits, so you only pay for converting the remaining digits.
This C program will do it:
#include <stdio.h>
main()
{
int binary = 5000; //Example
int ternary = 0;
int po3 = 1;
do
{
ternary += (binary & 1) * po3;
po3 *= 3;
}
while (binary >>= 1 != 0);
printf("%d\n",ternary);
}
The loop compiles into this machine code on my 32-bit Intel machine:
do
{
ternary += (binary & 1) * po3;
0041BB33 mov eax,dword ptr [binary]
0041BB36 and eax,1
0041BB39 imul eax,dword ptr [po3]
0041BB3D add eax,dword ptr [ternary]
0041BB40 mov dword ptr [ternary],eax
po3 *= 3;
0041BB43 mov eax,dword ptr [po3]
0041BB46 imul eax,eax,3
0041BB49 mov dword ptr [po3],eax
}
while (binary >>= 1 != 0);
0041BB4C mov eax,dword ptr [binary]
0041BB4F sar eax,1
0041BB51 mov dword ptr [binary],eax
0041BB54 jne main+33h (41BB33h)
For the example value (decimal 5000 = binary 1001110001000), the ternary value it produces is 559899.