I have a program where I use a vector to simulate all the possible outcomes when counting cards in blackjack. There are only three possible values: -1, 0, and 1. There are 52 cards in a deck, so the vector has 52 elements, each assigned one of the values mentioned above. The program works when I scale down the size of the vector; it still compiles at this size, but I get no output along with the warning "warning C4267: '=': conversion from 'size_t' to 'int', possible loss of data".
#include <iostream>
#include "subtracter.h"
#include <time.h>
#include <vector>
#include <random>

using namespace std;
int acecard = 4;
int twocard = 4;
int threecard = 4;
int fourcard = 4;
int fivecard = 4;
int sixcard = 4;
int sevencard = 4;
int eightcard = 4;
int ninecard = 4;
int tencard = 16;
// declares how many of each card there is
vector<int> cardvalues = {-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
// a vector that describes how many cards there are with a certain value
vector<int> deck = { acecard, twocard, threecard, fourcard, fivecard, sixcard, sevencard, eightcard, ninecard, tencard };
// a vector keeping track of how many of each cards there's left in the deck
int start()
{
    int deckcount;
    deckcount = 0;
    int decksize;
    decksize = cardvalues.size();
    while (decksize >= 49)
    {
        deckcount += cardsubtracter(cardvalues);
    }
    return deckcount;
}
int cardcounting()
{
    int deckcount;
    deckcount = start();
    deckcount += cardsubtracter(cardvalues);
    return deckcount;
}
int main()
{
    int value;
    value = cardcounting();
    int size;
    size = cardvalues.size();
    cout << value << "\n";
    cout << size;
    return 0;
}
And this is the subtracter file:
#include <iostream>
#include <vector>
#include <random>

using namespace std;
int numbergenerator(int x, int y)
{
    int number;
    random_device generator;
    uniform_int_distribution<> distrib(x, y);
    number = distrib(generator); // picks a random integer in [x, y]
    return number;
}
int cardsubtracter(vector<int> mynum)
{
    int counter;
    int size;
    int number;
    size = mynum.size() - 1; // gives the range of indexes that can be picked from the vector
    number = numbergenerator(0, size); // gives a random index into the vector
    counter = mynum[number]; // uses the random index to pick a value from the vector
    mynum.erase(mynum.begin() + number); // removes that value from the vector (note: from the local copy)
    return counter;
}
I looked up the maximum size of vectors, and it said that a vector can hold up to 2^32 values with integers, which should work for this. I also tried creating a new file and copying the code over to that, in case there was something wrong with this file.
There could be different reasons why a vector may not be able to hold all 52 elements. Some possible reasons are:
Insufficient memory: each element in a vector requires a certain amount of memory, and the total memory required for all 52 elements may exceed the available memory. This can happen if the elements are large, or if there are many other variables or data structures in the environment that consume memory.
Data type limitations: the data type of the vector may not be able to accommodate the values stored in it. For example, if the vector is of type int, each element can only hold integers up to a certain limit, beyond which it will overflow or produce incorrect results.
Code errors: there may be errors in the code that prevent all 52 elements from being processed. For example, if the vector is consumed in a loop, there may be a mistake in the loop condition or in the indexing that causes the loop to terminate early, skip elements, or never terminate at all (see the sketch below).
To determine the exact reason, it is necessary to examine the code, the data types involved, and the memory usage.
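In the code shown, the third reason is the likely culprit, in two parts: start() reads the size into decksize only once, so the while condition never changes, and cardsubtracter() takes the vector by value, so its erase() only shrinks a local copy. The loop therefore never ends, which is why main() never prints anything. Below is a minimal sketch of both fixes, with illustrative names rather than the original program; the explicit cast also silences warning C4267, which merely flags that a size_t value is being narrowed into an int:
#include <iostream>
#include <random>
#include <vector>

int takeRandomCard(std::vector<int>& cards) // by reference, not by value
{
    std::random_device rd;
    std::uniform_int_distribution<> pick(0, static_cast<int>(cards.size()) - 1);
    int index = pick(rd); // random index into the vector
    int value = cards[index];
    cards.erase(cards.begin() + index); // really shrinks the caller's vector
    return value;
}

int main()
{
    std::vector<int> deck(52, 1); // toy deck: every card counts as +1
    int count = 0;
    while (deck.size() >= 49) // size() is re-read on every pass
        count += takeRandomCard(deck);
    int size = static_cast<int>(deck.size()); // explicit cast: no C4267
    std::cout << count << "\n" << size << "\n"; // prints 4, then 48
    return 0;
}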
I have some trouble using a 3D LUT in a GLSL shader, so I'm trying to test a minimal 3D LUT of size 2x2x2, 8 colors in total. My pixel data is
static const unsigned char lutData222[] = {
    0,   0,   0,    255,   0,   0,   // black,   red
    0, 255,   0,    255, 255,   0,   // green,   yellow
    0,   0, 255,    255,   0, 255,   // blue,    magenta
    0, 255, 255,    255, 255, 255,   // cyan,    white
};
The 3D-texture settings are
m_lut.create();
m_lut.bind();
m_lut.setFormat(QOpenGLTexture::TextureFormat::RGB8_UNorm);
m_lut.setMinificationFilter(QOpenGLTexture::Filter::Nearest);
m_lut.setMagnificationFilter(QOpenGLTexture::Filter::Nearest);
m_lut.setWrapMode(QOpenGLTexture::WrapMode::ClampToEdge);
m_lut.setSize(LUT_EXTENT, LUT_EXTENT, LUT_EXTENT);
m_lut.allocateStorage(QOpenGLTexture::PixelFormat::RGB,
                      QOpenGLTexture::PixelType::UInt8);
m_lut.setData(QOpenGLTexture::PixelFormat::RGB,
              QOpenGLTexture::PixelType::UInt8,
              lutData222);
m_lut.release();
where m_lut is of type QOpenGLTexture (yes, this is a Qt project). Below is my fragment shader; it just uses the texture coordinates plus a third dimension to sample the LUT, so I can change the third dimension to 0 or 1 to see whether the color is as expected.
#version 330 core
in vec2 uv;
uniform sampler3D lut;
out vec4 FragColor;

void main() {
    vec4 col = texture(lut, vec3(uv, 0));
    col.a = 1;
    FragColor = col;
}
But it is not. When the third dimension is 0, the result picture is
And when the third dimension is 1, the result picture is
There is a function RFID() that returns z4, which should be put into TagID.
When I print TagID from loop(), '1' is printed instead of '1B31687DBC7FF'.
How can I get the whole value? I would like to print the full string '1B31687DBC7FF' to the serial port.
#include "Arduino.h"
#include <Wire.h>
#include <LiquidCrystal_I2C.h>
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
LiquidCrystal_I2C lcd(0x3F, 2, 1, 0, 4, 5, 6, 7, 3, POSITIVE);
int inWord = 0;
int outWord[] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
int index = 0;
unsigned char Data2Calc[]= { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
unsigned short CRC2Calc = 0;
unsigned char Bytes2Calc = 9;
char z5 []= { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
const char* TagID;
void setup()
{
    pinMode(13, OUTPUT);
    lcd.begin(16, 2);
    lcd.backlight();
    lcd.setCursor(0, 0);
    lcd.print("RFID Reader");
    lcd.setCursor(0, 1);
    lcd.print("Skanuj TAG"); // Polish for "Scan TAG"
    Serial.begin(9600);
    Serial1.begin(9600);
    lcd.setCursor(0, 0);
}
void loop()
{
    if (Serial1.available())
    {
        TagID = RFID();
    }
    else
    {
        if (index == 11)
        {
            index = 0;
            Serial.print(TagID);
            Serial.println("");
            lcd.setCursor(0, 0);
            lcd.print("ID:");
            lcd.print(TagID);
        }
    }
}
const char* RFID()
{
    char z1[] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
    char z2[] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
    char z3[] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
    unsigned short crc2[] = { 0 };
    inWord = Serial1.read();
    index++;
    if (index == 1)
    {
        if (inWord == 1)
        {
            outWord[index] = inWord;
        }
        else
        {
            index = index - 1;
        }
    }
    else if (index > 1)
    {
        if (index == 11)
        {
            outWord[index] = inWord;
            for (int i = 1; i < 12; i++)
            {
                Data2Calc[i - 1] = outWord[i];
            }
            CRC16(Data2Calc, &CRC2Calc, Bytes2Calc);
            itoa(outWord[10], z1, 16);
            itoa(outWord[11], z2, 16);
            strcat(z1, z2);
            *crc2 = CRC2Calc;
            sprintf(z2, "%x", *crc2);
            if (strcmp(z1, z2) == 0)
            {
                char z4[] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
                for (int i = 1; i < 10; i++)
                {
                    itoa(outWord[i], z3, 16);
                    if (strlen(z3) < 2) strcat(z4, "0"); // pad single hex digits with a leading zero
                    strcat(z4, z3);
                    //Serial.print(z4);
                    //Serial.println("");
                }
                //Serial.print("z4=");
                //Serial.print(z4);
                //Serial.println("");
                strncpy(z5, z4, 18);
            }
        }
        else
        {
            outWord[index] = inWord;
        }
    }
    return z5;
}
void CRC16(unsigned char* Data, unsigned short* CRC, unsigned char Bytes)
{
    int i, byte;
    unsigned short C;
    *CRC = 0;
    for (byte = 1; byte <= Bytes; byte++, Data++)
    {
        C = ((*CRC >> 8) ^ *Data) << 8;
        for (i = 0; i < 8; i++)
        {
            if (C & 0x8000)
                C = (C << 1) ^ 0x1021; // CRC-CCITT polynomial
            else
                C = C << 1;
        }
        *CRC = C ^ (*CRC << 8);
    }
}
The whole output of the functions is attached below:
Currently there are no serial ports registered - please use the + button to add a port to the monitor.
Connect to serial port COM4 at 9600
TagID: 1
UPDATE 1
I have attached the full code above. Sorry... it is a bit long.
UPDATE 2
OK. I got it somewhat working, but not quite as expected. I get the value printed as expected, but only the first time I call the function. If I call the function more times, I get some garbage added to the printed value, as below:
Currently there are no serial ports registered - please use the + button to add a port to the monitor.
Connect to serial port COM4 at 9600
TagID: 1b31596d9cff
TagID: 1b31596d9cff1b31596d9cff
TagID: 1b31596d9cff1Řc–
ś˙cŘ1b31596d9cff
TagID: 1b31596d9cff1Řc–
ś˙cŘ1b311031596d9cff
TagID: 1b31596d9cff1Řc–
ś˙cŘ1b311031596d9cff
Any idea on what the problem might be?
I have updated the latest full source code at the top of the post.
Thanks.
UPDATE 3
OK, I got it finally working. I have changed declaration from 'char z1 []=...' to 'const char z1 []=...'
I am not sure it is written in decent style... but it works :) I attach working source code at the top of the page.
UPDATE 4
No, after a few tests I have to admit that the solution from UPDATE 3 does NOT work. It reads correctly, but only the first time... then the program crashes and... it reads the RFID again as if for the first time... so it only looks like it reads OK; it does not.
Serial output for 5 readings is as follows:
Currently there are no serial ports registered - please use the + button to add a port to the monitor.
Connect to serial port COM4 at 9600
1b31596d9cff
1b31596d9cff1b31596d9cff
1b31596d9cff1Řc–
ś˙cŘ1b31596d9cff
1b31596d9cff1Řc– ś˙cŘ1b311031596d9cff
1b31596d9cff1Řc– ś˙cŘ1b311031596d9cff
Any hints on what is wrong with the code?
UPDATE 5
OK. Finally I got it working... at least from what I can see.
I changed the array sizes, reworked the way HEX values are displayed, and made a few minor changes.
The entire source code is updated at the top.
TagID is a char and your function returns a char. A char is a one-byte variable; it will hold at most one character. It shouldn't surprise you, then, that you only print one character. You haven't provided enough of your code to really figure out what you're actually after, but that explains why you only get one character printed: a char variable can hold one character, not that whole string of stuff.
I'm thinking that you wanted to get a char*, a pointer to a char array. But you're going to have trouble with that too, because z4 is a local array and goes out of scope before you get a chance to use it; see the sketch below.
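As a minimal sketch of that pitfall and one common fix (the names are illustrative, not your program): returning a pointer to a local array hands the caller a dangling pointer, while letting the caller own the buffer keeps the string alive after the call.
#include <stdio.h>
#include <string.h>

// BROKEN: tag[] lives on the stack, so its storage dies when the
// function returns and the caller is left with a dangling pointer.
const char* badTag(void)
{
    char tag[20] = "1B31687DBC7FF";
    return tag; // undefined behavior once badTag() returns
}

// FIX: the caller supplies the storage, so the data outlives the call.
void readTag(char* out, size_t outSize)
{
    strncpy(out, "1B31687DBC7FF", outSize - 1);
    out[outSize - 1] = '\0'; // strncpy does not always null-terminate
}

int main(void)
{
    char tag[20];
    readTag(tag, sizeof tag);
    printf("%s\n", tag); // prints the whole ID
    return 0;
}
Making the buffer global instead, as your z5 is, also keeps the data alive, at the cost of every call overwriting the previous tag.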
I am applying a slightly modified version of the classic depth peeling algorithm. Basically, I render all the opaque objects first and then use their depth as the minimum depth; since they are opaque, it doesn't make sense to keep fragments that lie deeper than them.
I first tested it on a small test case and it works flawlessly.
Now I am applying this algorithm to my main application, but for some unknown reason it doesn't work, and it is driving me crazy. The main problem is that I keep reading the value 0 from the opaque depth texture bound in the fragment shader of the next stage.
To sum up, this is the FBO for the opaque stuff:
opaqueDepthTexture = new int[1];
opaqueColorTexture = new int[1];
opaqueFbo = new int[1];
gl3.glGenTextures(1, opaqueDepthTexture, 0);
gl3.glGenTextures(1, opaqueColorTexture, 0);
gl3.glGenFramebuffers(1, opaqueFbo, 0);
gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueDepthTexture[0]);
gl3.glTexImage2D(GL3.GL_TEXTURE_RECTANGLE, 0, GL3.GL_DEPTH_COMPONENT32F, width, height, 0,
GL3.GL_DEPTH_COMPONENT, GL3.GL_FLOAT, null);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_BASE_LEVEL, 0);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_MAX_LEVEL, 0);
gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueColorTexture[0]);
gl3.glTexImage2D(GL3.GL_TEXTURE_RECTANGLE, 0, GL3.GL_RGBA, width, height, 0,
GL3.GL_RGBA, GL3.GL_FLOAT, null);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_BASE_LEVEL, 0);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_MAX_LEVEL, 0);
gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, opaqueFbo[0]);
gl3.glFramebufferTexture2D(GL3.GL_FRAMEBUFFER, GL3.GL_DEPTH_ATTACHMENT, GL3.GL_TEXTURE_RECTANGLE,
opaqueDepthTexture[0], 0);
gl3.glFramebufferTexture2D(GL3.GL_FRAMEBUFFER, GL3.GL_COLOR_ATTACHMENT0, GL3.GL_TEXTURE_RECTANGLE,
opaqueColorTexture[0], 0);
checkBindedFrameBuffer(gl3);
Here I just clear the depth (which defaults to 1); I even commented out the opaque rendering:
/**
* (1) Initialize Opaque FBO.
*/
gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, opaqueFbo[0]);
gl3.glDrawBuffer(GL3.GL_COLOR_ATTACHMENT0);
gl3.glClearColor(1, 1, 1, 1);
gl3.glClear(GL3.GL_COLOR_BUFFER_BIT | GL3.GL_DEPTH_BUFFER_BIT);
gl3.glEnable(GL3.GL_DEPTH_TEST);
dpOpaque.bind(gl3);
{
    // EC_Graph.instance.getRoot().renderDpOpaque(gl3, dpOpaque, new MatrixStack(), properties);
}
dpOpaque.unbind(gl3);
And I have double confirmation from this:
FloatBuffer fb = FloatBuffer.allocate(1 * GLBuffers.SIZEOF_FLOAT);
gl3.glReadPixels(width / 2, height / 2, 1, 1, GL3.GL_DEPTH_COMPONENT, GL3.GL_FLOAT, fb);
System.out.println("opaque fb.get(0) " + fb.get(0));
If I change the clear depth to 0.9, for example, I read back 0.9, so this part is OK.
Now I initialize the minimum depth buffer by rendering all the geometry having alpha < 1, and I bind the previous depth texture, the one used in the opaque rendering, to the
uniform sampler2D opaqueDepthTexture;
I temporarily switched the rendering of this pass to the default framebuffer.
/**
* (2) Initialize Min Depth Buffer.
*/
gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, 0);
gl3.glDrawBuffer(GL3.GL_BACK);
// gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, blendFbo[0]);
// gl3.glDrawBuffer(GL3.GL_COLOR_ATTACHMENT0);
gl3.glClearColor(0, 0, 0, 1);
gl3.glClear(GL3.GL_COLOR_BUFFER_BIT | GL3.GL_DEPTH_BUFFER_BIT);
gl3.glEnable(GL3.GL_DEPTH_TEST);
if (cullFace) {
    gl3.glEnable(GL3.GL_CULL_FACE);
}
dpInit.bind(gl3);
{
    gl3.glActiveTexture(GL3.GL_TEXTURE1);
    gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueDepthTexture[0]);
    gl3.glUniform1i(dpInit.getOpaqueDepthTextureUL(), 1);
    gl3.glBindSampler(1, sampler[0]);
    {
        EC_Graph.instance.getRoot().renderDpTransparent(gl3, dpInit, new MatrixStack(), properties);
    }
    gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, 0);
    gl3.glBindSampler(1, 0);
}
dpInit.unbind(gl3);
This is the dpInit Fragment Shader:
#version 330
out vec4 outputColor;
uniform sampler2D texture0;
in vec2 oUV;
uniform sampler2D opaqueDepthTexture;
/*
* Layout {lighting, normal orientation, active, selected}
*/
uniform ivec4 settings;
const vec3 selectionColor = vec3(1, .5, 0);
const vec4 inactiveColor = vec4(.5, .5, .5, .2);
vec4 CalculateLight();
void main()
{
    float opaqueDepth = texture(opaqueDepthTexture, gl_FragCoord.xy).r;
    if (gl_FragCoord.z > opaqueDepth) {
        //discard;
    }
    vec4 color = (1 - settings.x) * texture(texture0, oUV) + settings.x * CalculateLight();
    if (settings.w == 1) {
        if (settings.z == 1) {
            color = vec4(selectionColor, color.q);
        } else {
            color = vec4(selectionColor, inactiveColor.w);
        }
    } else {
        if (settings.z == 0) {
            color = inactiveColor;
        }
    }
    outputColor = vec4(color.rgb * color.a, 1.0 - color.a);
    outputColor = vec4(.5, 1, 1, 1.0 - color.a);
    if (opaqueDepth == 0)
        outputColor = vec4(1, 0, 0, 1);
    else
        outputColor = vec4(0, 1, 0, 1);
}
Ignore the middle; the important part is at the beginning, where I read the red component of the previous depth texture, and at the end, where I compare it. The geometry I obtain is red, which means the value I read from the opaqueDepthTexture is 0...
The question is: why?
After the dpInit rendering, if I bind the opaqueFbo again and read the depth, it is always the clear depth, so 1 by default or 0.9 if I cleared it with 0.9, so that part works.
The problem is really that I read the wrong value in the dpInit FS from a bound depth texture... why?
For clarification, this is the sampler:
private void initSampler(GL3 gl3) {
    sampler = new int[1];
    gl3.glGenSamplers(1, sampler, 0);
    gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_WRAP_S, GL3.GL_CLAMP_TO_EDGE);
    gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_WRAP_T, GL3.GL_CLAMP_TO_EDGE);
    gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_MIN_FILTER, GL3.GL_NEAREST);
    gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_MAG_FILTER, GL3.GL_NEAREST);
}
PS: checking all the components, I see that the opaqueDepthTexture always contains the values (0, 0, 0, 1).
Oh god, I found it: in the init FS,
uniform sampler2D opaqueDepthTexture;
should be
uniform sampler2DRect opaqueDepthTexture;
because the texture is a GL_TEXTURE_RECTANGLE: rectangle textures are addressed with unnormalized texel coordinates (which is why gl_FragCoord.xy works directly as the lookup coordinate) and must be sampled through a sampler2DRect rather than a sampler2D.
I am trying to use unit/normal-vector-based gradients in the HTML5 canvas element and to transform them afterwards for the desired results. However, I am having trouble, which might be because of my lack of math. I am trying to create a simple linear gradient going from 0,0 to 1,0 (i.e. a simple unit gradient going from left to right). Afterwards, I transform the canvas to scale, rotate and move the gradient. However, when giving a rotation value of 45DEG for example, the actual gradient gets painted wrong. The bottom-right corner has way too much black; that is, the gradient seems not to be "big" enough. Here's my code:
var rect = {x: 0, y: 0, w: 500, h: 500};
var rotation = 45 * Math.PI/180;
var sx = 1;
var sy = 1;
var tx = 0;
var ty = 0;
var radial = false;
// Create unit gradient: radial centered at (0,0), or linear from (0,0) to (1,0)
var grd = radial ? ctx.createRadialGradient(0, 0, 0, 0, 0, 0.5) : ctx.createLinearGradient(0, 0, 1, 0);
grd.addColorStop(0, 'black');
grd.addColorStop(0.1, 'lime');
grd.addColorStop(0.9, 'yellow');
grd.addColorStop(1, 'black');
// Add our rectangle path before transforming
ctx.beginPath();
ctx.moveTo(rect.x, rect.y);
ctx.lineTo(rect.x + rect.w, rect.y);
ctx.lineTo(rect.x + rect.w, rect.y + rect.h);
ctx.lineTo(rect.x, rect.y + rect.h);
ctx.closePath();
// Rotate and scale unit gradient
ctx.rotate(rotation);
ctx.scale(sx * rect.w, sy * rect.h);
ctx.fillStyle = grd;
// Fill gradient
ctx.fill();
And here's the fiddle to try it out:
http://jsfiddle.net/4GsCE/1/
Curiously enough, changing the unit linear gradient vector by a factor of about 1.41 (roughly sqrt(2)) makes the gradient look right:
ctx.createLinearGradient(0, 0, 1.41, 0)
Which can be seen in this fiddle:
http://jsfiddle.net/4GsCE/2/
But I couldn't figure out how to calculate that factor.
Since you want to use normalized gradients, you have to decide how to normalize. Here you choose to center the gradient and to have its (x, y) in the [-0.5, 0.5] range.
The first issue is that the linear gradient is not centered; it's in the [0, 1.0] range.
Normalize it the same way:
var linGrd = ctx.createLinearGradient(-0.5, 0, 0.5, 0);
The second issue is that you must translate to the center of your figure, then scale, then draw in a normalized way, meaning you must use the same coordinate system as your gradients.
Since you were both drawing a shape having (w, h) as its size AND using a scale of (w, h), you were drawing a (w*w, h*h)-sized rect.
The correct draw code is this one:
function drawRect(rect, fill) {
ctx.save();
// translate to the center of the rect (the new (0,0) )
ctx.translate(rect.x + rect.w / 2, rect.y + rect.h / 2);
// Rotate
ctx.rotate(rotation);
// scale to the size of the rect
ctx.scale(rect.w, rect.h);
// ...
ctx.fillStyle = fill;
// draw 'normalized' rect
ctx.fillRect(-0.5, -0.5, 1, 1);
ctx.restore();
}
Notice that by default the radial gradient will end at a distance of 0.5, meaning that if you are filling a rect, it will fill the corners with the last color of the gradient. Maybe you want the gradient to end right at the corners instead.
In that case, you want the gradient to reach its final value at a distance of:
sqrt(0.5 * 0.5 + 0.5 * 0.5) ≈ 0.707 (Pythagoras in the normalized square)
So you'll define your normalized gradient like:
var fullRadGrd = ctx.createRadialGradient(0, 0, 0, 0, 0, 0.7);
http://jsfiddle.net/gamealchemist/4GsCE/4/