I'm writing a function-graph plotting application with Qt, and I need an algorithm to determine the domain of the function.
Here is the part where I draw the function graph:
QPainterPath p(QPointF(-m_w/2, f(-m_w/2)));
m_painter->setPen(m_functionPen);
for(double x=-m_w/2, y; x<m_w/2; x++)
{
    y = f(x/100);
    p.lineTo(x, y*100);
}
m_painter->drawPath(p);
I think that if I find the domain, I can stop the program from drawing outside of it.
Plotting software usually doesn't bother determining the domain; it just evaluates the function at every visible position and skips drawing any lines if the result was "undefined"/"NaN"/etc. Here is your code modified to do that skipping (untested, and I didn't match your brace style because I can't stand it):
QPainterPath p; // a default-constructed path is empty
double previousY = std::numeric_limits<double>::quiet_NaN(); // needs <limits>; 1/0 is integer UB, not NaN
m_painter->setPen(m_functionPen);
for(double x=-m_w/2, y; x<m_w/2; x++) {
    y = f(x/100);
    if (y == y /* not-NaN test */) {
        if (previousY == previousY) {
            p.lineTo(x, y*100);
        } else {
            p.moveTo(x, y*100);
        }
    }
    previousY = y;
}
m_painter->drawPath(p);
(I'm assuming that a default-constructed QPainterPath is an empty path. I'm not familiar with the library you are using.) Note that this now treats the first point like the other points for simplicity of coding.
(Also, this strategy will not produce a correct graph if you are evaluating a function like f(x) = 1/(x + 0.00005), because the undefined point will just be skipped over and you'll get a vertical line. There is no simple general solution for this problem.)
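One common mitigation, offered here only as a heuristic sketch (the jumpLimit threshold is an assumption you would tune, not something from the code above): refuse to connect two consecutive samples whose values jump by more than the visible height. This catches most vertical asymptotes, at the cost of occasionally splitting a very steep but continuous curve.

#include <cmath>

// Heuristic sketch: should two consecutive samples be connected?
// jumpLimit is a hypothetical threshold, e.g. the plot's visible height.
bool shouldConnect(double previousY, double y, double jumpLimit) {
    if (std::isnan(previousY) || std::isnan(y))
        return false;                            // undefined sample: never connect
    return std::abs(y - previousY) <= jumpLimit; // huge jump: treat as a discontinuity
}

In the loop above, you would then moveTo instead of lineTo whenever shouldConnect returns false.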
On the other hand, if you're trying to find reasonable bounds for your graph (your m_w variable), then determining the domain is the problem. In this case, it will depend on what kinds of functions you have and how they are represented.
I have convex polygons, and I want to extend them by projecting along a vector like so:
(Original polygon and vector on left, desired result on right.)
My polygons are stored as a series of points with counter-clockwise winding. What I want to find are the "starting" and "stopping" points that I need to project from, as in the circled vertices below.
(The green arrows are to indicate the polygon's winding, giving the "direction" of each edge.)
My original plan was to determine which points to use by projecting a ray with the vector's direction from each point, and finding the first and last points whose ray doesn't intersect an edge. However, that seems expensive.
Is there a way I can use the edge directions vs the vector direction, or a similar trick, to determine which points to extend from?
Look at points where the direction of the vector falls between the directions of the edges.
In other words, take three vectors:
of the edge leading out of the vertex
of the translation vector
opposite to the edge leading to the vertex
If they are in this order when going CCW, i.e. if the second vector is between the first and the third, this is an "inside" point.
In order to determine whether a vector lies between two other vectors, use cross product as described e.g. here.
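For illustration, a minimal sketch of that cross-product betweenness test (names and types are mine, not from the linked description):

struct Vec2 { float x, y; };

float cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

// True if v lies between a and b when sweeping counterclockwise from a to b.
bool isBetweenCCW(Vec2 a, Vec2 v, Vec2 b) {
    if (cross(a, b) >= 0)                        // a-to-b sweep spans at most 180 degrees
        return cross(a, v) >= 0 && cross(v, b) >= 0;
    return cross(a, v) >= 0 || cross(v, b) >= 0; // sweep spans more than 180 degrees
}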
Yes you can. You want to project along (x, y), so the normal is (y, -x). Now rotate by that angle (use atan2, or apply it directly if you understand rotation matrices). The points to project from are now those at the minimum and maximum x. You can also speed up the projection by always doing it along an axis and then rotating back.
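An untested sketch of that idea which skips the explicit rotation (names are mine): the dot product of a vertex with the normal (y, -x) equals its x coordinate after the rotation (up to scale), so the extreme dot products identify the two vertices to project from.

#include <cstddef>
#include <utility>
#include <vector>

struct Vec2 { float x, y; };

// Indices of the vertices extreme along the normal of direction d,
// i.e. the "starting" and "stopping" vertices of the projection.
std::pair<std::size_t, std::size_t> silhouette(const std::vector<Vec2>& poly, Vec2 d) {
    const Vec2 n{d.y, -d.x}; // normal of the projection direction
    std::size_t lo = 0, hi = 0;
    for (std::size_t i = 1; i < poly.size(); ++i) {
        const float s   = poly[i].x  * n.x + poly[i].y  * n.y;
        const float slo = poly[lo].x * n.x + poly[lo].y * n.y;
        const float shi = poly[hi].x * n.x + poly[hi].y * n.y;
        if (s < slo) lo = i;
        if (s > shi) hi = i;
    }
    return {lo, hi};
}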
n.m. answered the question as I asked and pictured it, but upon programming I soon noticed that there was a common case where all vertices would be "outside" vertices (this can be easily seen on triangles, and can occur for other polygons too).
The text explanation.
The solution I used was to look at the normal vectors of the edges leading into and exiting each vertex. The vertices we want to extend are vertices that have at least one edge normal with a minimum angle of less than 90 degrees to the delta vector we are extending by.
The outward-facing edge normals on a counterclockwise-wound polygon can be found by:
normal = (currentVertex.y - nextVertex.y, nextVertex.x - currentVertex.x)
Note that since we don't care about the exact angle, we don't need to normalize (make a unit vector of) the normal, which saves a square root.
To compare it to the delta vector, we use the dot product:
dot = edgeNormal.dot(deltaVector)
If the result is greater than zero, the minimum angle is acute (less than 90). If the result is exactly zero, the vectors are perpendicular. If the result is less than zero, the minimum angle is obtuse. It is worth noting when the vectors are perpendicular, since it lets us avoid adding extra vertices to the extended polygon.
If you want to visualize how the angle works with the dot product, like I did, just look at a graph of arc cosine (normally you get the angle via acos(dot)).
Now we can find the vertices that have one acute and one not-acute minimum angle between their edge normals and the delta vector. Everything on the "acute side" of these vertices has the delta vector added to it, and everything on the "obtuse side" stays the same. The two border vertices themselves are duplicated, with one copy extended and one staying the same, unless the "obtuse side" is exactly perpendicular to the delta vector (in this case we only need to extend the vertex, since otherwise we would have two vertices on the same line).
Here is the C++ code for this solution.
It may look a little long, but it is actually quite straightforward and has many comments so it hopefully shouldn't be hard to follow.
It is part of my Polygon class, which has a std::vector of counterclockwise-wound vertices. units::Coordinate are floats, and units::Coordinate2D is a vector class that I feel should be self-explanatory.
// Compute the normal of an edge of a polygon with counterclockwise winding, without normalizing it to a unit vector.
inline units::Coordinate2D _get_non_normalized_normal(units::Coordinate2D first, units::Coordinate2D second) {
    return units::Coordinate2D(first.y - second.y, second.x - first.x);
}
enum AngleResult {
    ACUTE,
    PERPENDICULAR,
    OBTUSE
};
// Avoid accumulative floating point errors.
// Choosing a good epsilon is extra important, since we don't normalize our vectors (so it is scale dependent).
const units::Coordinate eps = 0.001;
// Check what kind of angle the minimum angle between two vectors is.
inline AngleResult _check_min_angle(units::Coordinate2D vec1, units::Coordinate2D vec2) {
    const units::Coordinate dot = vec1.dot(vec2);
    if (std::abs(dot) <= eps)
        return PERPENDICULAR;
    if ((dot + eps) > 0)
        return ACUTE;
    return OBTUSE;
}
Polygon Polygon::extend(units::Coordinate2D delta) const {
    if (delta.isZero()) { // Isn't being moved. Just return the current polygon.
        return Polygon(*this);
    }
    const std::size_t numVerts = vertices_.size();
    if (numVerts < 3) {
        std::cerr << "Error: Cannot extend polygon (polygon invalid; must have at least three vertices).\n";
        return Polygon();
    }
    // We are interested in extending from vertices that have at least one edge normal with a minimum angle acute to the delta.
    // With a convex polygon, there will form a single contiguous range of such vertices.
    // The first and last vertex in that range may need to be duplicated, and then the vertices within the range
    // are projected along the delta to form the new polygon.
    // The first and last vertices are defined by the vertices that have only one acute edge normal.

    // Whether the minimum angle of the normal of the edge made from the last and first vertices is acute with delta.
    const AngleResult firstEdge = _check_min_angle(_get_non_normalized_normal(vertices_[numVerts-1], vertices_[0]), delta);
    const bool isFirstEdgeAcute = firstEdge == ACUTE;

    AngleResult prevEdge = firstEdge;
    AngleResult currEdge;
    bool found = false;
    std::size_t vertexInRegion;
    for (std::size_t i = 0; i < numVerts - 1; ++i) {
        currEdge = _check_min_angle(_get_non_normalized_normal(vertices_[i], vertices_[i+1]), delta);
        if (isFirstEdgeAcute != (currEdge == ACUTE)) {
            // Either crossed from inside to outside the region, or vice versa.
            // (One side of the vertex has an edge normal that is acute, the other side obtuse.)
            found = true;
            vertexInRegion = i;
            break;
        }
        prevEdge = currEdge;
    }
    if (!found) {
        // A valid polygon has two points that define where the region starts and ends.
        // If we didn't find one in the loop, the polygon is invalid.
        std::cerr << "Error: Polygon can not be extended (invalid polygon).\n";
        return Polygon();
    }
    found = false;
    std::size_t first, last;
    // If an edge being extended is perpendicular to the delta, there is no need to duplicate that vertex.
    bool shouldDuplicateFirst, shouldDuplicateLast;
    // We found either the first or last vertex for the region.
    if (isFirstEdgeAcute) {
        // It is the last vertex in the region.
        last = vertexInRegion;
        shouldDuplicateLast = currEdge != PERPENDICULAR; // currEdge is either perpendicular or obtuse.
        // Loop backwards from the end to find the first vertex.
        for (std::size_t i = numVerts - 1; i > 0; --i) {
            currEdge = _check_min_angle(_get_non_normalized_normal(vertices_[i-1], vertices_[i]), delta);
            if (currEdge != ACUTE) {
                first = i;
                shouldDuplicateFirst = currEdge != PERPENDICULAR;
                found = true;
                break;
            }
        }
        if (!found) {
            std::cerr << "Error: Polygon can not be extended (invalid polygon).\n";
            return Polygon();
        }
    } else {
        // It is the first vertex in the region.
        first = vertexInRegion;
        shouldDuplicateFirst = prevEdge != PERPENDICULAR; // prevEdge is either perpendicular or obtuse.
        // Loop forwards from the first vertex to find where it ends.
        for (std::size_t i = vertexInRegion + 1; i < numVerts - 1; ++i) {
            currEdge = _check_min_angle(_get_non_normalized_normal(vertices_[i], vertices_[i+1]), delta);
            if (currEdge != ACUTE) {
                last = i;
                shouldDuplicateLast = currEdge != PERPENDICULAR;
                found = true;
                break;
            }
        }
        if (!found) {
            // The edge normal between the last and first vertex is the only non-acute edge normal.
            last = numVerts - 1;
            shouldDuplicateLast = firstEdge != PERPENDICULAR;
        }
    }
    // Create the new polygon.
    std::vector<units::Coordinate2D> newVertices;
    newVertices.reserve(numVerts + (shouldDuplicateFirst ? 1 : 0) + (shouldDuplicateLast ? 1 : 0));
    for (std::size_t i = 0; i < numVerts; ++i) {
        // Extend vertices in the region first-to-last inclusive. Duplicate first/last vertices if required.
        if (i == first && shouldDuplicateFirst) {
            newVertices.push_back(vertices_[i]);
            newVertices.push_back(vertices_[i] + delta);
        } else if (i == last && shouldDuplicateLast) {
            newVertices.push_back(vertices_[i] + delta);
            newVertices.push_back(vertices_[i]);
        } else {
            newVertices.push_back(isFirstEdgeAcute ? // Determine which range to use.
                ((i <= last || i >= first) ? vertices_[i] + delta : vertices_[i]) :  // Range overlaps start/end of the array.
                ((i <= last && i >= first) ? vertices_[i] + delta : vertices_[i])); // Range is somewhere in the middle of the array.
        }
    }
    return Polygon(newVertices);
}
So far I tested this code with triangles, rectangles, approximated circles, and arbitrary convex polygons made by extending the approximated circles sequentially by many different delta vectors.
Please note that this solution is still only valid for convex polygons.
I am doing a LeetCode problem.
A robot is located at the top-left corner of a m x n grid (marked 'Start' in the diagram below).
The robot can only move either down or right at any point in time. The robot is trying to reach the bottom-right corner of the grid (marked 'Finish' in the diagram below).
How many possible unique paths are there?
So I tried this implementation first and got an "exceeds runtime" error (I forgot the exact term, but it means the implementation is too slow). So I changed it to version 2, which uses an array to save intermediate results. I honestly don't know how the recursion works internally and why these two implementations have such different efficiency.
Version 1 (slow):
class Solution {
    // int res[101][101]={{0}};
public:
    int uniquePaths(int m, int n) {
        if (m==1 || n==1) return 1;
        else {
            return uniquePaths(m-1,n) + uniquePaths(m,n-1);
        }
    }
};
Version 2 (faster):
class Solution {
    int res[101][101]={{0}};
public:
    int uniquePaths(int m, int n) {
        if (m==1 || n==1) return 1;
        else {
            if (res[m-1][n]==0) res[m-1][n] = uniquePaths(m-1,n);
            if (res[m][n-1]==0) res[m][n-1] = uniquePaths(m,n-1);
            return res[m-1][n] + res[m][n-1];
        }
    }
};
Version 1 is slower because you are calculating the same data again and again. I'll try to explain this on a different problem, but I guess that you know Fibonacci numbers. You can calculate any Fibonacci number with the following recursive algorithm:
fib(n):
    if n == 0 then return 0
    if n == 1 then return 1
    return fib(n-1) + fib(n-2)
But what are you actually calculating? If you want to find fib(5), you need to calculate fib(4) and fib(3); then to calculate fib(4), you need to calculate fib(3) again! Take a look at the image to fully understand:
The same situation occurs in your code. You compute uniquePaths(m,n) even if you have calculated it before. To avoid that, your second version uses an array to store computed data, so you don't have to compute it again when res[m][n] != 0.
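To make the contrast concrete, here is the same memoization trick applied to Fibonacci, as a quick self-contained sketch:

#include <cstdio>

long long cache[91] = {0}; // fib(90) still fits in a long long

long long fib(int n) {
    if (n <= 1) return n;               // base cases: fib(0) = 0, fib(1) = 1
    if (cache[n] == 0)                  // compute each value only once
        cache[n] = fib(n - 1) + fib(n - 2);
    return cache[n];
}

int main() {
    std::printf("%lld\n", fib(50));     // instant; the naive version takes ages
}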
I am using a cosine curve to apply a force on an object over the range [0, pi]. By my calculations, that should give me a sine curve for the velocity which, at t = pi/2, should be 1.0f.
However, for the simplest of examples, I get a top speed of 0.753.
Now if this is a floating point issue, that is fine, but that is a very significant error, so I am having trouble accepting that it is (and if it is, why there is such a huge error in computing these values).
Some code:
// the function that gives the force to apply (totalTime = pi, maxForce = 1.0 in this example)
return ((Mathf.Cos(time * (Mathf.PI / totalTime)) * maxForce));
// the engine stores this value and in the next fixed update applies it to the rigidbody
// the mass is 1 so isn't affecting the result
engine.ApplyAccelerateForce(applyingForce * ship.rigidbody2D.mass);
Update
There is no gravity being applied to the object, no other objects in the world for it to interact with and no drag. I'm also using a RigidBody2D so the object is only moving on the plane.
Update 2
OK, I have tried a super simple example and I get the result I am expecting, so there must be something in my code. Will update once I have isolated what is different.
For the record, super simple code:
float forceThisFrame;
float startTime;

// Use this for initialization
void Start () {
    forceThisFrame = 0.0f;
    startTime = Time.fixedTime;
}

// Update is called once per frame
void Update () {
    float time = Time.fixedTime - startTime;
    if(time <= Mathf.PI)
    {
        forceThisFrame = Mathf.Cos (time);
        if(time >= (Mathf.PI /2.0f) - 0.01f && time <= (Mathf.PI /2.0f) + 0.01f)
        {
            print ("Speed: " + rigidbody2D.velocity);
        }
    }
    else
    {
        forceThisFrame = 0.0f;
    }
}

void FixedUpdate()
{
    rigidbody2D.AddForce(forceThisFrame * Vector2.up);
}
Update 3
I have changed my original code to match the above example as near as I can (remaining differences listed below) and I still get the discrepancy.
Here are my results of velocity against time. Neither of them makes sense to me: with a constant force of 1 N, the result should be the linear velocity function v(t) = t, but that isn't quite what either example produces.
Remaining differences:
The code that is "calculating" the force (now just returning 1) is being run via a non-Unity DLL, though the code itself resides within a Unity DLL (I can explain more, but I can't believe this is relevant!)
The behaviour that is applying the force to the rigid body is a separate behaviour.
One is moving a cube in an empty environment, the other is moving a Model3D with a plane nearby; I tried a cube with the same code in the broken project, same problem.
Other than that, I can't see any difference and I certainly can't see why any of those things would affect it. They both apply a force of 1 on an object every fixed update.
For the cosine case this isn't a floating point issue, per se, it's an integration issue.
[In your 'fixed' acceleration case there are clearly also minor floating point issues].
Obviously acceleration is proportional to force (F = ma), but you can't simply add the accelerations to get the velocity, especially if the time interval between frames is not constant.
Simplifying things by assuming that the inter-frame acceleration is constant, and therefore following v = u + at (equivalently Δv = a·Δt), you need to scale the effect of the acceleration in proportion to the time elapsed since the last frame. It follows that the smaller Δt is, the more accurate your integration.
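As a minimal illustration outside any engine (plain C++, illustrative only), explicit Euler integration of v' = cos(t) recovers v(pi/2) ≈ 1 precisely because each step is scaled by Δt; summing the accelerations without the Δt factor does not:

#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    const double dt = 0.02;      // mimics Unity's default fixed timestep
    double v = 0.0;
    for (double t = 0.0; t < pi / 2.0; t += dt)
        v += std::cos(t) * dt;   // v = u + a*dt each step
    std::printf("v at pi/2 = %f (exact: 1)\n", v);
    return 0;
}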
This was a multi-part problem that started with me not fully understanding Update vs. FixedUpdate in Unity, see this question on GameDev.SE for more info on that part.
My "fix" from that was advancing a timer in step with the fixed update so as not to apply the force wrongly. The problem, as demonstrated by Eric Postpischil, was that FixedUpdate, despite its name, is not called every 0.02 s but instead at most every 0.02 s. The fix for this was, in my Update, to apply some scaling to the force to accommodate missed fixed updates. My code ended up looking something like:
Called from Update
float oldTime = time;
time = Time.fixedTime - startTime;
float variableFixedDeltaTime = time - oldTime;
float fixedRatio = variableFixedDeltaTime / Time.fixedDeltaTime;
if(time <= totalTime)
{
    applyingForce = forceFunction.GetValue(time) * fixedRatio;

    Vector2 currentVelocity = ship.rigidbody2D.velocity;
    Vector2 direction = new Vector2(ship.transform.right.x, ship.transform.right.y);
    float velocityAlongDir = Vector2.Dot(currentVelocity, direction);
    float velocityPrediction = velocityAlongDir + (applyingForce * lDeltaTime);
    if(time > 0.0f && // we are not interested if we are just starting
        ((velocityPrediction < 0.0f && velocityAlongDir > 0.0f) ||
         (velocityPrediction > 0.0f && velocityAlongDir < 0.0f)))
    {
        float ratio = Mathf.Abs((velocityAlongDir / (applyingForce * lDeltaTime)));
        applyingForce = applyingForce * ratio;
        // We have reversed the direction so we must have arrived
        Deactivate();
    }
    engine.ApplyAccelerateForce(applyingForce);
}
Where ApplyAccelerateForce does:
public void ApplyAccelerateForce(float requestedForce)
{
    forceToApply += requestedForce;
}
Called from FixedUpdate
rigidbody2D.AddForce(forceToApply * new Vector2(transform.right.x, transform.right.y));
forceToApply = 0.0f;
I'm new to Processing and working on understanding this code:
import com.onformative.leap.LeapMotionP5;
import java.util.*;

LeapMotionP5 leap;
LinkedList<Integer> values;

public void setup() {
    size(800, 300);
    frameRate(120); // specifies the number of frames to be displayed every second
    leap = new LeapMotionP5(this);
    values = new LinkedList<Integer>();
    stroke(255);
}

int lastY = 0;

public void draw() {
    translate(0, 180); // (x, y, z)
    background(0);
    if (values.size() >= width) {
        values.removeFirst();
    }
    values.add((int) leap.getVelocity(leap.getHand(0)).y);
    System.out.println((int) leap.getVelocity(leap.getHand(0)).y);
    int counter = 0;
    for (Integer val : values) {
        val = (int) map(val, 0, 1500, 0, height);
        line(counter, val, counter - 1, lastY);
        point(counter, val);
        lastY = val;
        counter++;
    }
    line(0, map(1300, 0, 1500, 0, height), width, map(1300, 0, 1500, 0, height)); // (x1, y1, x2, y2)
}
It basically draws a graph of movement detected on the y axis using the Leap Motion sensor. Output looks like this:
I eventually need to do something similar to this that would detect amplitude instead of velocity, simultaneously on all 3 axes instead of just the y.
The use of map and translate is what's really confusing me. I've read the definitions of these functions on the Processing website, so I know what they are and the syntax, but what I don't understand is the why?! (which is arguably the most important part).
I am asking if someone can provide simple examples that explain the WHY behind using these two functions. For instance, given a program that needs to do A, B, and C, with data foo, y, and x, you would use map or translate because of A, B, and C.
I think programming guides often overlook this, but to me it is very important to truly understand a function.
Bonus points for explaining:
for (Integer val : values) and LinkedList<Integer> values; (I can't find any documentation on the Processing website for these)
Thanks!
First, we'll do the easiest one. LinkedList is a data structure similar to ArrayList, which you may be more familiar with. If not, then it's just a list of values (of the type between the angle brackets, in this case Integer) that you can insert into and remove from. It's a bit complicated on the inside, but if something doesn't appear in the Processing documentation, it's a safe bet that it's built into Java itself (java documentation).
This line:
for (Integer val : values)
is called a "for-each" or "foreach" loop, which has plenty of very good explanation on the internet, but I'll give a brief explanation here. If you have some list (perhaps a LinkedList, perhaps an ArrayList, whatever) and want to do something with all the elements, you might do something like this:
for(int i = 0; i < values.size(); i++){
    println(values.get(i)); //or whatever
    println(values.get(i) * 2);
    println(pow(values.get(i),3) - 2*pow(values.get(i),2) + values.get(i));
}
If you're doing a lot of manipulation with each element, it quickly gets tedious to write out values.get(i) each time. The solution would be to capture values.get(i) into some variable at the start of the loop and use that everywhere instead. However, this is not 100% elegant, so Java has a built-in way to do this: the for-each loop. The code
for (Integer val : values){
    //use val
}
is equivalent to
for(int i = 0; i < values.size(); i++){
    int val = values.get(i);
    //use val
}
Hopefully that makes sense.
map() takes a number in one linear system and maps it onto another linear system. Imagine if I were an evil professor and wanted to give students random grades from 0 to 100. I have a function that returns a random decimal between 0 and 1, so I can now do map(rand(),0,1,0,100); and it will convert the number for me! In this example, you could also just multiply by 100 and get the same result, but it is usually not so trivial. In this case, you have a sensor reading between 0 and 1500, but if you just plotted that value directly, sometimes it would go off the screen! So you have to scale it to an appropriate scale, which is what that does. 1500 is the max that the reading can be, and presumably we want the maximum graphing height to be at the edge of the screen.
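If it helps to see it as a formula, map() is just a linear re-scaling; an equivalent sketch (named mapValue here to avoid clashing with the built-in):

// Equivalent of Processing's map(): linearly re-scale value
// from the range [inMin, inMax] onto [outMin, outMax].
float mapValue(float value, float inMin, float inMax, float outMin, float outMax) {
    return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}
// e.g. mapValue(750, 0, 1500, 0, 300) == 150: halfway in, halfway out.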
I'm not familiar with your setup, but it looks like the readings can be negative, which means they might get graphed off the screen, too. The better solution would be to map the readings from (-1500, 1500) to (0, height), but it looks like they chose to do it a different way. Whenever you call a drawing function in Processing (e.g. point(x,y)), it draws the pixels at (x,y) offset from (0,0). Sometimes you don't want to draw relative to (0,0), so the translate() function lets you change what subsequent drawing is relative to. In this case, translating allows you to plot some point (x,0) in the middle of the screen, rather than on the edge.
Hope that helps!
I render isosurfaces with marching cubes (or rather marching squares, as this is 2D), and I want to do set operations like set difference, intersection, and union. I thought this would be easy to implement by simply choosing between the two vertex scalars from the two different implicit surfaces, but it is not.
For my initial testing, I tried with two circles and the set operation difference, i.e. A - B. One circle is moving and the other one is stationary. Here's the approach I tried when picking vertex scalars and when classifying corner vertices as inside or outside. The code is written in C++. OpenGL is used for rendering, but that's not important. Normal rendering without any CSG operations gives the expected result.
void march(const vec2& cmin, // min x and y for the grid cell
           const vec2& cmax, // max x and y for the grid cell
           std::vector<vec2>& tri,
           float iso,
           float (*cmp1)(const vec2&), // distance from the stationary circle
           float (*cmp2)(const vec2&)) // distance from the moving circle
{
    unsigned int squareindex = 0;
    float scalar[4];
    vec2 verts[8];
    /* initial setup of the grid cell */
    verts[0] = vec2(cmax.x, cmax.y);
    verts[2] = vec2(cmin.x, cmax.y);
    verts[4] = vec2(cmin.x, cmin.y);
    verts[6] = vec2(cmax.x, cmin.y);
    float s1, s2;
    /**********************************
     ****** For-loop of interest ******
     ****** Set difference between ****
     ****** two implicit surfaces *****
     **********************************/
    for(int i=0, j=0; i<4; ++i, j+=2){
        s1 = cmp1(verts[j]);
        s2 = cmp2(verts[j]);
        if(s1 < iso){ // if inside circle1
            if(s2 < iso){ // if inside circle2
                scalar[i] = s2; // then set the scalar to the moving circle
            } else {
                scalar[i] = s1; // only inside circle1
                squareindex |= (1<<i); // mark as inside
            }
        }
        else {
            scalar[i] = s1; // inside neither circle
        }
    }
    if(squareindex == 0)
        return;
    /* Usual interpolation between edge points to compute
       the new intersection points */
    verts[1] = mix(iso, verts[0], verts[2], scalar[0], scalar[1]);
    verts[3] = mix(iso, verts[2], verts[4], scalar[1], scalar[2]);
    verts[5] = mix(iso, verts[4], verts[6], scalar[2], scalar[3]);
    verts[7] = mix(iso, verts[6], verts[0], scalar[3], scalar[0]);
    for(int i=0; i<10; ++i){ // 10 = maximum 3 triangles, plus one end token
        int index = triTable[squareindex][i]; // look up our indices for triangulation
        if(index == -1)
            break;
        tri.push_back(verts[index]);
    }
}
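(mix() isn't shown here; it does the usual marching-squares linear interpolation to the iso crossing along an edge, something of this shape:)

// Presumed shape of mix(): linearly interpolate along the edge (a, b) to the
// point where the scalar field (sa at a, sb at b) crosses the iso value.
vec2 mix(float iso, const vec2& a, const vec2& b, float sa, float sb) {
    const float t = (iso - sa) / (sb - sa); // fraction of the way from a to b
    return vec2(a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t);
}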
This gives me weird jaggies:
(source: mechcore.net)
It looks like the CSG operation is done without interpolation. It just "discards" the whole triangle. Do I need to interpolate in some other way, or combine the vertex scalar values? I'd love some help with this.
A full testcase can be downloaded HERE
EDIT: Basically, my implementation of marching squares works fine. It is my scalar field which is broken, and I wonder what the correct way to build it would look like. Preferably, I'm looking for a general approach to implementing the three set operations discussed above for the usual primitives (circle, rectangle/square, plane).
EDIT 2: Here are some new images after implementing the answerer's whitepaper:
1. Difference
2. Intersection
3. Union
EDIT 3: I implemented this in 3D too, with proper shading/lighting:
1. Difference between a greater sphere and a smaller sphere
2. Difference between a greater sphere and a smaller sphere in the center, clipped by two planes on both sides, and then union with a sphere in the center
3. Union between two cylinders
This is not how you mix the scalar fields. Your scalars say one thing, but your inside/outside flags say another. First merge the fields, then render as if you were rendering a single compound object:
for(int i=0, j=0; i<4; ++i, j+=2){
    s1 = cmp1(verts[j]);
    s2 = cmp2(verts[j]);
    s = max(s1, iso - s2); // This is the secret sauce
    if(s < iso) { // inside circle1, but not inside circle2
        squareindex |= (1<<i);
    }
    scalar[i] = s;
}
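Following the same convention (a point is inside where the field value is below iso), the other set operations are the usual min/max combinations; a quick sketch:

#include <algorithm>

float csg_union(float s1, float s2)     { return std::min(s1, s2); } // inside either
float csg_intersect(float s1, float s2) { return std::max(s1, s2); } // inside both
float csg_subtract(float s1, float s2, float iso) {
    return std::max(s1, iso - s2);      // inside s1 but not s2, as in the loop above
}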
This article might be helpful: Combining CSG modeling with soft blending using Lipschitz-based implicit surfaces.