I need to pass returnValue to a method as an argument passed by reference and adjust the original variable's value when the function is done, so I am using the ReferenceArgumentHelper class.
What is wrong in the code below? returnValue is unintentionally deleted (when it is a node, i.e. a string) and valgrind detects it. callMethod("onFunctionExit", ...) calls a Qore script method, and I can see the correct returnValue value there. I suspect it is deleted when onFunctionExit exits and the ReferenceArgumentHelper is destroyed. rah.getArg() references the reference variable, so it should not be deleted in callMethod.
DLLLOCAL ThreadDebugEnum callMethod(const char* name, const ThreadDebugEnum defaultResult, QoreProgram *pgm, int paramCount, AbstractQoreNode** params, ExceptionSink* xsink) {
int rv;
QoreListNode* l = new QoreListNode();
qore_program_to_object_map_t::iterator i = qore_program_to_object_map.find(pgm);
if (i == qore_program_to_object_map.end()) {
return defaultResult;
}
i->second->ref();
l->push(i->second);
for (int i=0; i<paramCount; i++) {
if (params[i])
params[i]->ref();
l->push(params[i]);
}
rv = qo->intEvalMethod(name, l, xsink);
l->deref(xsink);
return (ThreadDebugEnum) rv;
}
DLLLOCAL virtual ThreadDebugEnum onFunctionExit(QoreProgram *pgm, const StatementBlock *blockStatement, QoreValue& returnValue, ExceptionSink* xsink) {
AbstractQoreNode* params[2];
params[0] = getLocation(blockStatement);
ReferenceArgumentHelper rah(returnValue.takeNode(), xsink); // grab node from returnValue and pass to helper
params[1] = rah.getArg(); // caller owns ref
ThreadDebugEnum rv = callMethod("onFunctionExit", DBG_SB_RUN, pgm, 2, params, xsink);
AbstractQoreNode* rc = rah.getOutputValue(); // caller owns ref
returnValue.assign(rc); // takes reference
// returnValue.ref();
return rv;
}
Looking more deeply, I do not understand why the compiler is happy with this code in /lib/ReferenceArgumentHelper.cpp:
struct lvih_intern {
LocalVar lv;
ExceptionSink* xsink;
ReferenceNode* ref;
DLLLOCAL lvih_intern(AbstractQoreNode* val, ExceptionSink* xs) : lv("ref_arg_helper", 0), xsink(xs) {
printd(5, "ReferenceArgumentHelper::ReferenceArgumentHelper() instantiating %p (val: %p type: '%s') \n", &lv, val, val ? val->getTypeName() : "n/a");
lv.instantiate(val); <--------------
VarRefNode* vr = new VarRefNode(strdup("ref_arg_helper"), VT_LOCAL);
vr->ref.id = &lv;
ref = new ReferenceNode(vr, 0, vr, 0);
}
class LocalVar {
....
DLLLOCAL void instantiate(QoreValue nval) const {
What is behind the conversion from AbstractQoreNode* to QoreValue in this method call? I did not find an overloaded operator or anything similar. I am trying to understand what exactly happens with the references.
** EDIT **
To make a long story short, ReferenceArgumentHelper was buggy; it hadn't been used in years and was not up to date. The class has been fixed, which I hope should fix your issue.
Thank you for pointing this out, and let me know if you have any further problems with this or the fix to the affected code.
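As for the AbstractQoreNode* to QoreValue conversion you asked about: there is no overloaded operator involved. Most likely QoreValue simply has a non-explicit constructor taking an AbstractQoreNode*, so the compiler applies an implicit conversion at the call site of instantiate(). A minimal sketch of that mechanism with hypothetical stand-in types (not the actual Qore sources):
#include <iostream>

struct Node {};                      // stands in for AbstractQoreNode

struct Value {                       // stands in for QoreValue
    Node* n;
    Value(Node* p) : n(p) {}         // non-explicit converting constructor
};

void instantiate(Value v) {          // stands in for LocalVar::instantiate(QoreValue)
    std::cout << "wrapped node at " << v.n << std::endl;
}

int main() {
    Node* raw = new Node;
    instantiate(raw);                // Node* is implicitly converted to Value here
    delete raw;
    return 0;
}
Because the constructor is not marked explicit, passing the raw pointer where a Value is expected compiles without any operator overload.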
I am trying the async examples from the GNOME project site. I get the following warning, which I don't understand how to fix.
async.vala:8.2-8.17: warning: delegates with scope="async" must be owned
Code
async double do_calc_in_bg(double val) throws ThreadError {
SourceFunc callback = do_calc_in_bg.callback;
double[] output = new double[1];
// Hold reference to closure to keep it from being freed whilst
// thread is active.
// WARNING HERE
ThreadFunc<bool> run = () => {
// Perform a dummy slow calculation.
// (Insert real-life time-consuming algorithm here.)
double result = 0;
for (int a = 0; a<100000000; a++)
result += val * a;
output[0] = result;
Idle.add((owned) callback);
return true;
};
new Thread<bool>("thread-example", run);
yield;
return output[0];
}
void main(string[] args) {
var loop = new MainLoop();
do_calc_in_bg.begin(0.001, (obj, res) => {
try {
double result = do_calc_in_bg.end(res);
stderr.printf(@"Result: $result\n");
} catch (ThreadError e) {
string msg = e.message;
stderr.printf(@"Thread error: $msg\n");
}
loop.quit();
});
loop.run();
}
The warning is pointing at the run variable inside the async function. Who or what needs to be owned? The reference to the closure?
The delegate needs to have a well-defined owner at all times. The error message is a bit misleading.
To fix it you have to explicitly transfer the ownership from the delegate to the thread constructor:
new Thread<bool>("thread-example", (owned) run);
Instead of
new Thread<bool>("thread-example", run);
See also: https://wiki.gnome.org/Projects/Vala/Tutorial#Ownership
PS: The generated C code is fine in both cases. (at least with valac 0.46.6)
For one of my projects I'm working with Xamarin.Forms.
Since one area is just unbearably slow with Xamarin.Forms, I've used a custom renderer to solve one particular area where a list is involved.
After getting back to the project and upgrading packages, I suddenly got the weirdest bug.
I am setting "1234" on an EditText, and the EditText.Text property is suddenly "49505152" - the string is converted to its ASCII equivalent.
Is this a known issue? Does anyone know how to fix it?
The cause of the issue was that my EditText had an InputFilter applied, and after updating a package a different code path of FilterFormatted was suddenly executed.
public ICharSequence FilterFormatted(ICharSequence source, int start, int end, ISpanned dest, int dstart, int dend)
{
var startSection = dest.SubSequenceFormatted(0, dstart);
var insert = source.SubSequenceFormatted(start, end);
var endSection = dest.SubSequenceFormatted(dstart, dest.Length());
var merged = $"{startSection}{insert}{endSection}";
if (ValidationRegex.IsMatch(merged) && InputRangeCheck(merged, CultureInfo.InvariantCulture))
{
StringBuilder sb = new StringBuilder(end - start);
for (int i = start; i < end; i++)
{
char c = source.CharAt(i);
sb.Append(c);
}
if (source is ISpanned) {
SpannableString sp = new SpannableString(sb);
TextUtils.CopySpansFrom((ISpanned)source, start, sb.Length(), null, sp, 0);
return sp;
} else {
// AFTER UPDATE THIS PATH WAS ENTERED UNLIKE BEFORE
return sb;
}
}
else
{
return new SpannableString(string.Empty);
}
}
I'm trying to wrap my head around Flow and I'm struggling to make it work with ES6's Map.
Consider this simple case (live demo):
// create a new map
const m = new Map();
m.set('value', 5);
console.log(m.get('value') * 5)
Flow throws:
console.log(m.get('value') * 5)
^ Cannot perform arithmetic operation because undefined [1] is not a number.
References:
[LIB] static/v0.72.0/flowlib/core.js:532: get(key: K): V | void;
^ [1]
I also tried:
const m:Map<string, number> = new Map();
m.set('value', 5);
console.log(m.get('value') * 5)
But I got the same error
I believe this is because Flow thinks the value can also be something other than a number, so I tried to wrap the map with a strict setter and getter (live demo):
type MyMapType = {
set: (key: string, value: number) => MyMapType,
get: (key: string) => number
};
function MyMap() : MyMapType {
const map = new Map();
return {
set (key: string, value: number) {
map.set(key, value);
return this;
},
get (key: string) {
return map.get(key);
}
}
}
const m = MyMap();
m.set('value', 5);
const n = m.get('value');
console.log(n * 2);
but then I got:
get (key: string) {
^ Cannot return object literal because undefined [1] is incompatible
with number [2] in the return value of property `get`.
References:
[LIB] static/v0.72.0/flowlib/core.js:532: get(key: K): V | void;
^ [1]
get: (key: string) => number ^ [2]
How can I tell flow that I only deal with a Map of numbers?
Edit:
The TypeScript approach makes more sense to me; it throws on set instead of on get.
// TypeScript
const m:Map<string, number> = new Map();
m.set('value', 'no-number'); // << throws on set, not on get
console.log(m.get('value') * 2);
Is there a way to make Flow behave the same way?
What Flow is trying to tell you is that when you call map.get(key), .get(...) may (V) or may not (void) return something out of that map. If the key is not found in the map, the call to .get(...) will return undefined. To get around this, you need to handle the case where undefined is returned. Here are a few ways to do it:
(Try)
const m = new Map();
m.set('value', 5);
// Throw if a value is not found
const getOrThrow = (map, key) => {
const val = map.get(key)
if (val == null) {
throw new Error("Uh-oh, key not found")
}
return val
}
// Return a default value if the key is not found
const getOrDefault = (map, key, defaultValue) => {
const val = map.get(key)
return val == null ? defaultValue : val
}
console.log(getOrThrow(m, 'value') * 5)
console.log(getOrDefault(m, 'value', 1) * 5)
The reason that map.get(key) is typed as V | void is that the map might not contain a value at that key. If it doesn't have a value at the key, you'll get a runtime error. The Flow developers decided they would rather force the developer (you and me) to think about the problem while writing the code than find out at runtime.
Random and pretty late, but I was searching and came up with this for my own use cases when I didn't see it mentioned:
const specialIdMap = new Map<SpecialId, Set<SpecialId>>();
const set : Set<SpecialId> = specialIdMap.get(uniqueSpecialId) || new Set();
and this saves quite a lot of the boilerplate of checking for null. Of course, this only works if you do not also rely on a falsy value. Alternatively, you could use the new ?? operator.
I am trying to convert some Objective-C code provided in one of Apple's code examples here: https://developer.apple.com/library/mac/samplecode/avsubtitleswriterOSX/Listings/avsubtitleswriter_SubtitlesTextReader_m.html
The result I have come up with thus far is as follows:
func copySampleBuffer() -> CMSampleBuffer? {
var textLength : Int = 0
var sampleSize : Int = 0
if (text != nil) {
textLength = text!.characters.count
sampleSize = text!.lengthOfBytesUsingEncoding(NSUTF16StringEncoding)
}
var sampleData = [UInt8]()
// Append text length
sampleData.append(UInt16(textLength).hiByte())
sampleData.append(UInt16(textLength).loByte())
// Append the text
for char in (text?.utf16)! {
sampleData.append(char.bigEndian.hiByte())
sampleData.append(char.bigEndian.loByte())
}
if (self.forced) {
// TODO
}
let samplePtr = UnsafeMutablePointer<[UInt8]>.alloc(1)
samplePtr.memory = sampleData
var sampleTiming = CMSampleTimingInfo()
sampleTiming.duration = self.timeRange.duration;
sampleTiming.presentationTimeStamp = self.timeRange.start;
sampleTiming.decodeTimeStamp = kCMTimeInvalid;
let formatDescription = copyFormatDescription()
let dataBufferUMP = UnsafeMutablePointer<Optional<CMBlockBuffer>>.alloc(1)
CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, samplePtr, sampleSize, kCFAllocatorMalloc, nil, 0, sampleSize, 0, dataBufferUMP);
let sampleBufferUMP = UnsafeMutablePointer<Optional<CMSampleBuffer>>.alloc(1)
CMSampleBufferCreate(kCFAllocatorDefault, dataBufferUMP.memory, true, nil, nil, formatDescription, 1, 1, &sampleTiming, 1, &sampleSize, sampleBufferUMP);
let sampleBuffer = sampleBufferUMP.memory
sampleBufferUMP.destroy()
sampleBufferUMP.dealloc(1)
dataBufferUMP.destroy()
dataBufferUMP.dealloc(1)
samplePtr.destroy()
//Crash if I call dealloc here
//Error is: error for object 0x10071e400: pointer being freed was not allocated
//samplePtr.dealloc(1)
return sampleBuffer;
}
I would like to avoid the "Unsafe*" types where possible, though I am not sure it is possible here. I also looked at using a struct and then packing it somehow, but the examples I have seen are based on sizeof, which uses the size of the definition rather than the current size of the structure. This is the structure I would have used:
struct SubtitleAtom {
var length : UInt16
var text : [UInt16]
var forced : Bool?
}
Any advice on most suitable Swift 2 code for this function would be appreciated.
So, first of all, your code uses this pattern:
class C { deinit { print("I got deinit'd!") } }
struct S { var objectRef:AnyObject? }
func foo() {
let ptr = UnsafeMutablePointer<S>.alloc(1)
let o = C()
let fancy = S(objectRef: o)
ptr.memory = fancy
ptr.destroy() //deinit runs here!
ptr.dealloc(1) //don't leak memory
}
// sooner or later this code should crash :-)
(1..<1000).forEach{ i in
foo()
print(i)
}
Try it in a playground and it will most likely crash :-). What's wrong with it? The trouble is the unbalanced retain / release calls. How do you write the same thing in a safe manner? You removed the dealloc part, but try doing that in my snippet and see the result: the code crashes again :-). The only safe way is to properly initialize and de-initialize (destroy) the underlying pointer's memory, as you can see in the next snippet:
class C { deinit { print("I got deinit'd!") } }
struct S { var objectRef:AnyObject? }
func foo() {
let ptr = UnsafeMutablePointer<S>.alloc(1)
let o = C()
let fancy = S(objectRef: o)
ptr.initialize(fancy)
ptr.destroy()
ptr.dealloc(1)
}
(1..<1000).forEach{ i in
foo()
print(i)
}
Now the code executes as expected and all retain / release calls are balanced.
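For comparison only, here is the same allocate / initialize / destroy / deallocate discipline sketched in C++ terms (an analogy to illustrate the lifecycle, unrelated to the Swift API): raw allocation gives you uninitialized memory, you must construct an object into it before use and run its destructor before freeing.
#include <iostream>
#include <memory>
#include <new>

struct C { ~C() { std::cout << "I got destroyed!" << std::endl; } };
struct S { std::shared_ptr<C> objectRef; };

void foo() {
    void* raw = ::operator new(sizeof(S));          // like alloc(1): memory only, no object yet
    S* ptr = new (raw) S{ std::make_shared<C>() };  // like initialize(...): construct in place
    ptr->~S();                                      // like destroy(): run the destructor, prints once
    ::operator delete(raw);                         // like dealloc(1): release the memory
}

int main() {
    foo();
    return 0;
}
Skipping the in-place construction and assigning into the raw memory instead would be the C++ counterpart of writing to ptr.memory without initialize: the reference counts end up unbalanced.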
I wrote a program to test my binary tree and when I run it, the program seems to crash (btree.exe has stopped working, Windows is checking for a solution ...).
When I ran it through my debugger and placed a breakpoint on the function I suspect is causing it, destroy_tree(), it seemed to run as expected and returned to the main function. Main, in turn, returned from the program, but then the cursor jumped back to destroy_tree() and looped recursively within itself.
The minimal code sample is below so it can be run instantly. My compiler is MinGW and my debugger is gdb (I'm using Code::Blocks).
#include <iostream>
using namespace std;
struct node
{
int key_value;
node *left;
node *right;
};
class Btree
{
public:
Btree();
~Btree();
void insert(int key);
void destroy_tree();
private:
node *root;
void destroy_tree(node *leaf);
void insert(int key, node *leaf);
};
Btree::Btree()
{
root = NULL;
}
Btree::~Btree()
{
destroy_tree();
}
void Btree::destroy_tree()
{
destroy_tree(root);
cout<<"tree destroyed\n"<<endl;
}
void Btree::destroy_tree(node *leaf)
{
if(leaf!=NULL)
{
destroy_tree(leaf->left);
destroy_tree(leaf->right);
delete leaf;
}
}
void Btree::insert(int key, node *leaf)
{
if(key < leaf->key_value)
{
if(leaf->left!=NULL)
insert(key, leaf->left);
else
{
leaf->left = new node;
leaf->left->key_value = key;
leaf->left->left = NULL;
leaf->left->right = NULL;
}
}
else if (key >= leaf->key_value)
{
if(leaf->right!=NULL)
insert(key, leaf->right);
else
{
leaf->right = new node;
leaf->right->key_value = key;
leaf->right->left = NULL;
leaf->right->right = NULL;
}
}
}
void Btree::insert(int key)
{
if(root!=NULL)
{
insert(key, root);
}
else
{
root = new node;
root->key_value = key;
root->left = NULL;
root->right = NULL;
}
}
int main()
{
Btree tree;
int i;
tree.insert(1);
tree.destroy_tree();
return 0;
}
As an aside, I'm planning to switch from Code::Blocks' built-in debugger to DDD for debugging these problems. I heard DDD can visually display pointers to objects instead of just displaying the pointer's address. Do you think making the switch will help with solving these types of problems (data structure and algorithm problems)?
Your destroy_tree() is called twice: you call it once yourself, and then it gets called again from the destructor after execution leaves main().
You may think it should work anyway, because you check whether leaf!=NULL, but delete does not set the pointer to NULL. So your root is not NULL when destroy_tree() is called the second time.
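You can see that delete leaves the pointer untouched with a tiny standalone check (a sketch, separate from your class, reusing the same node layout):
#include <cassert>
#include <cstddef>
#include <iostream>

struct node { int key_value; node *left; node *right; };

int main()
{
    node* p = new node;
    delete p;             // the node is destroyed...
    assert(p != NULL);    // ...but p still holds the old, now dangling, address
    std::cout << "p was not reset by delete" << std::endl;
    return 0;
}
That dangling pointer in root is exactly what the destructor's second call walks into.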
Not directly related (or maybe it is) to your problem, but it's good practice to give structs a constructor. For example:
struct node
{
int key_value;
node *left;
node *right;
node( int val ) : key_value( val ), left(NULL), right(NULL) {}
};
If you do this, your code becomes simpler, because you don't need to worry about setting the pointers when you create a node, and it is not possible to forget to initialise them.
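For instance, with that constructor the insert branches collapse to something like this (a sketch based on the constructor above):
void Btree::insert(int key, node *leaf)
{
    if(key < leaf->key_value)
    {
        if(leaf->left!=NULL)
            insert(key, leaf->left);
        else
            leaf->left = new node(key);   // the constructor sets the key and NULLs both children
    }
    else
    {
        if(leaf->right!=NULL)
            insert(key, leaf->right);
        else
            leaf->right = new node(key);
    }
}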
Regarding DDD, it's a fine debugger, but frankly the secret of debugging is to write correct code in the first place, so you don't have to do it. C++ gives you a lot of help in this direction (like the use of constructors), but you have to understand and use the facilities it provides.
Btree::destroy_tree doesn't set 'root' to 0 after successfully nuking the tree. As a result, the destructor calls destroy_tree() again and you're trying to destroy already destroyed objects.
That'll be undefined behaviour then :).
Once you destroy the tree, make sure root is NULL so the destructor does not try to destroy it again. Note that setting leaf = NULL inside destroy_tree(node *leaf) is not enough, because the pointer is passed by value; reset root in the public overload instead:
void Btree::destroy_tree()
{
    destroy_tree(root);
    root = NULL; // add this line
    cout<<"tree destroyed\n"<<endl;
}