Looking for a segmentation fault in a C program - pointers

Hi, I'm trying to learn C, specifically how to use pointers.
I wrote this program to practice the ideas I've learned, but it crashes with a segmentation fault.
A bit of research suggests that I am accessing something I shouldn't be, probably through an uninitialized pointer, but I can't find it.
#include <stdio.h>

struct IntItem {
    struct IntItem* next;
    int value;
};

struct IntList {
    struct IntItem* head;
    struct IntItem* tail;
};

void append_list(struct IntList* ls, int item){
    struct IntItem* last = ls->tail;
    struct IntItem addition = {NULL, item};
    last->next = &addition;
    ls->tail = &addition;
    if (!ls->head) {
        ls->head = &addition;
    }
}

int sum(int x, int y){
    return x + y;
}

int max(int x, int y){
    return x*(x>y) + y*(y>x);
}

int reduce(struct IntList xs, int (*opy)(int, int)){
    struct IntItem current = *xs.head;
    int running = 0;
    while (current.next) {
        running = opy(running, current.value);
        current = *current.next;
    }
    return running;
}

int main(void) {
    struct IntList ls = {NULL, NULL};
    printf("Start Script\n");
    append_list(&ls, 1);
    append_list(&ls, 2);
    append_list(&ls, 3);
    printf("List Complete\n");
    printf("Sum: %i", reduce(ls, sum));
    printf("Max: %i", reduce(ls, max));
    return 0;
}

Hints:
When you call append_list(&ls, 1), then inside append_list, what is the value of last?
What does last->next = &addition do?
And for your next bug:
What happens to addition after append_list returns? What does that mean for pointers to it?
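Putting the hints together, here is a minimal sketch of one way append_list could avoid both problems, assuming heap allocation with malloc is acceptable (error handling kept deliberately minimal):

#include <stdlib.h>

void append_list(struct IntList* ls, int item){
    /* allocate the node on the heap so it outlives this call */
    struct IntItem* addition = malloc(sizeof *addition);
    if (!addition) return;              /* allocation failed; give up quietly */
    addition->next = NULL;
    addition->value = item;
    if (ls->tail) {
        ls->tail->next = addition;      /* link after the current tail */
    } else {
        ls->head = addition;            /* first node: it is also the head */
    }
    ls->tail = addition;
}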

Related

confusing pointer error while implementing linked list

#define _CRT_SECURE_NO_WARNINGS
#include <stdio.h>
#include <stdlib.h>

#define MALLOC(p,s) {\
    if (!((p) = malloc(s))) { \
        fprintf(stderr, "insufficient memory");\
        exit(EXIT_FAILURE);\
    }\
}
#define IS_EMPTY(first) (!first)

typedef struct listNode* listPointer;
typedef struct listNode {
    int data;
    listPointer link;
} listNode;

void printList(listPointer first);

int main(void)
{
    int x;
    int tmpData;
    listPointer first = NULL;
    listPointer tmpLink = NULL;
    FILE* fp = NULL;
    if (!(fp = fopen("in.txt", "r"))) {
        fprintf(stderr, "cannot open the file");
        exit(EXIT_FAILURE);
    }
    while (!feof(fp)) {
        fscanf(fp, "%d", &tmpData);
        MALLOC(tmpLink, sizeof(listNode));
        if (IS_EMPTY(first)) {
            MALLOC(first, sizeof(listNode));
            *tmpLink = *first;
        }
        tmpLink->data = tmpData;
        tmpLink = tmpLink->link;
    }
    printList(first);
}

void printList(listPointer first)
{
    for (; first; first = first->link) {
        printf("%d ", first->data);
    }
    printf("\n");
}
We know that we could implement a proper insert function, but I'm really curious why this doesn't work.
I assumed that what "first" refers to and what "tmpLink" refers to would be the same,
so after building the linked list while updating "tmpLink",
I planned to use "first" to print it later.
I've spent almost a day thinking about this and I've tried debugging it, but I can't figure out why it fails.
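One detail worth isolating, since the question hinges on it: *tmpLink = *first copies the fields of the node first points to into the node tmpLink points to; it does not make the two pointers refer to the same node. A small illustration with hypothetical variables, using the types above:

listNode a = { 1, NULL };
listNode b = { 2, NULL };
listPointer p = &a;
listPointer q = &b;

*q = *p;   /* copies a's fields into b; p and q still point to different nodes */
q = p;     /* this is what makes p and q refer to the same node */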

Double LinkList in data structure

When I create a doubly linked list in C (C99): if I add the line struct DNonde *first = NULL; inside the main function, then when the Display function runs its parameter first is NULL. If I remove that line struct DNonde *first = NULL;, then first correctly holds the address of the list's first node and the Display function runs normally.
However, the question is: I call the Create function to build the doubly linked list after the line struct DNonde *first = NULL;, so I assumed it would not influence the later code. When I debug, it shows that the doubly linked list is created successfully, yet inside the Display function first = NULL. Why does that happen?
Below is the source code
#include "stdio.h"
#include "stdlib.h"
struct DNonde
{
struct DNonde *preview;
int data;
struct DNonde *next;
}*first=NULL;
void Create(int a[],int length)
{
struct DNonde*tem = NULL;
first =(struct DNonde*)malloc(sizeof(struct DNonde));
first->preview=NULL; first->data=a[0]; first->next=NULL;
struct DNonde *control = first;
for(int i=1;i<length;i++)
{
tem =(struct DNonde*)malloc(sizeof(struct DNonde));
control->next = tem;
tem->preview = control; tem->data = a[i]; tem->next=NULL;
control = tem;
}
}
void Display(struct DNonde*first)
{
do
{
printf("%d ",first->data);
first=first->next;
}while(first != NULL);
}
int main()
{
int a[]={1,3,4,5};
struct DNonde*first=NULL;
Create(a, 4);
Display(first);
}
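For what it's worth, the behaviour follows from scoping: the struct DNonde *first = NULL; inside main declares a new local variable that shadows the file-scope first which Create assigns, so Display receives the local NULL. A minimal sketch of main without the shadowing local:

int main()
{
    int a[] = {1, 3, 4, 5};
    Create(a, 4);      /* fills in the file-scope 'first' */
    Display(first);    /* passes the global, which now points at the list head */
    return 0;
}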

when to use double pointers and pointers

// A C program to demonstrate linked list based implementation of queue
#include <stdio.h>
#include <stdlib.h>

struct QNode {
    int key;
    struct QNode* next;
};

struct Queue {
    struct QNode *front, *rear;
};

struct QNode* newNode(int k)
{
    struct QNode* temp = (struct QNode*)malloc(sizeof(struct QNode));
    temp->key = k;
    temp->next = NULL;
    return temp;
}

struct Queue* createQueue()
{
    struct Queue* q = (struct Queue*)malloc(sizeof(struct Queue));
    q->front = q->rear = NULL;
    return q;
}

void enQueue(struct Queue* q, int k)
{
    struct QNode* temp = newNode(k);
    if (q->rear == NULL) {
        q->front = q->rear = temp;
        return;
    }
    q->rear->next = temp;
    q->rear = temp;
}

void deQueue(struct Queue* q)
{
    if (q->front == NULL)
        return;
    struct QNode* temp = q->front;
    q->front = q->front->next;
    if (q->front == NULL)
        q->rear = NULL;
    free(temp);
}

int main()
{
    struct Queue* q = createQueue();
    enQueue(q, 10);
    enQueue(q, 20);
    deQueue(q);
    deQueue(q);
    enQueue(q, 30);
    enQueue(q, 40);
    enQueue(q, 50);
    deQueue(q);
    printf("Queue Front : %d \n", q->front->key);
    printf("Queue Rear : %d", q->rear->key);
    return 0;
}
The above code is from the GeeksforGeeks website.
In the function calls they pass a pointer to the struct, and in the function definitions the parameter is a pointer to the struct.
How does that work? I thought we need to use double pointers, otherwise it is pass by value instead of pass by reference.
The above code works fine, but I have doubts about it.
In main there is a variable q which is a pointer to a struct. q is used as the function argument, which means the function receives a pointer to the struct and can dereference it to modify the struct. Technically q is passed by value, because its value (a pointer) is what the function receives, but you have to remember that q points to a struct which the function can modify.
Because this situation causes some confusion, some people have introduced terminology like "pass by sharing" or "object sharing" to distinguish it from passing a primitive value such as an int by value.
If you had passed a pointer to a pointer, then the function could have modified the variable q declared in main and made it point to a completely different struct. That would be (technically) pass by reference, because you are passing a reference to the variable itself.
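A small sketch to make the distinction concrete, assuming the struct Queue and createQueue() definitions above; these helper functions are hypothetical and not part of the original code:

/* q is passed by value: the function can modify the struct it points to,
   but reassigning q inside has no effect on the caller's variable. */
void reset_front(struct Queue* q)
{
    q->front = NULL;   /* visible to the caller */
    q = NULL;          /* only changes the local copy of the pointer */
}

/* With a pointer to a pointer, the callee can make the caller's q
   point at a completely different struct. */
void replace_queue(struct Queue** qp)
{
    *qp = createQueue();   /* caller's q now refers to a brand-new queue */
}

/* usage sketch:
   struct Queue* q = createQueue();
   reset_front(q);      // q unchanged, but q->front is now NULL
   replace_queue(&q);   // q itself now points at a new queue
*/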

timerfd mysteriously set int to 0 when read()

I am doing a timerfd hello world on Ubuntu 14.04, but got a strange situation: the int count is reset after reading the timerfd, but the uint64_t is not.
int main(int agrc, char **argv) {
    unsigned int heartbeat_interval = 1;
    struct itimerspec next_timer;
    struct timespec now;
    if (clock_gettime(CLOCK_REALTIME, &now) == -1)
        err_sys((WHERE + std::string("timer error")).c_str());
    next_timer.it_value.tv_sec = now.tv_sec;
    next_timer.it_value.tv_nsec = 0;
    next_timer.it_interval.tv_sec = heartbeat_interval;
    next_timer.it_interval.tv_nsec = 0;
    int timefd = timerfd_create(CLOCK_REALTIME, 0);
    if (timerfd_settime(timefd, TFD_TIMER_ABSTIME, &next_timer, NULL) == -1) {
        err_sys((WHERE).c_str());
    }
    uint64_t s;
    int exp;
    int count = 1;
    uint64_t count1 = 0;
    while (1) {
        s = read(timefd, &exp, sizeof(uint64_t));
        if (s != sizeof(uint64_t)) {
            err_sys((WHERE).c_str());
        }
    }
}
int exp;
^^^
s = read(timefd, &exp, sizeof(uint64_t));
^^^ ^^^^^^^^
Unless your int and uint64_t types are the same size, this is a very bad idea. What's most likely happening is that the 64 bits you're reading are overwriting exp and whatever else happens to be next to it on the stack.
Actually, even if they are the same size, it's a bad idea. What you should have is something like:
s = read(timefd, &exp, sizeof(exp));
That way, you're guaranteed to never overwrite the data and your next line would catch the problem for you:
if (s != sizeof(uint64_t)) {
It won't solve the problem that an unsigned integral type and a signed integral type are treated differently, but you can fix that just by using the right type for exp.
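For reference, a minimal sketch of the corrected declarations, assuming the usual timerfd convention that each successful read() delivers a uint64_t expiration count:

uint64_t exp;                                  /* timerfd reads deliver an 8-byte expiration count */
ssize_t s = read(timefd, &exp, sizeof(exp));   /* read() returns ssize_t */
if (s != sizeof(exp)) {
    /* handle the error as before */
}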

QFuture memory leak

I want to parallelize a function, but I have the problem that after a few hours my memory is exhausted.
The test program calculates something simple and works so far; only the memory usage keeps increasing.
QT Project file:
QT -= gui
QT += concurrent widgets
CONFIG += c++11 console
CONFIG -= app_bundle
DEFINES += QT_DEPRECATED_WARNINGS
SOURCES += main.cpp
QT program file:
#include <QCoreApplication>
#include <qdebug.h>
#include <qtconcurrentrun.h>
double parallel_function(int instance){
    return (double)(instance)*10.0;
}

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    int nr_of_threads = 8;
    double result_sum, temp_var;
    for(qint32 i = 0; i < 100000000; i++){
        QFuture<double> *future = new QFuture<double>[nr_of_threads];
        for(int thread = 0; thread < nr_of_threads; thread++){
            future[thread] = QtConcurrent::run(parallel_function, thread);
        }
        for(int thread = 0; thread < nr_of_threads; thread++){
            future[thread].waitForFinished();
            temp_var = future[thread].result();
            qDebug() << "result: " << temp_var;
            result_sum += temp_var;
        }
    }
    qDebug() << "total: " << result_sum;
    return a.exec();
}
As I have observed, QtConcurrent::run(parallel_function,thread) allocates memory, but does not release memory after future[thread].waitForFinished().
What's wrong here?
You have a memory leak because the future array is never deleted. Add delete[] future at the end of the outer for loop:
for(qint32 i = 0; i < 100000000; i++)
{
    QFuture<double> *future = new QFuture<double>[nr_of_threads];
    for(int thread = 0; thread < nr_of_threads; thread++){
        future[thread] = QtConcurrent::run(parallel_function, thread);
    }
    for(int thread = 0; thread < nr_of_threads; thread++){
        future[thread].waitForFinished();
        temp_var = future[thread].result();
        qDebug() << "result: " << temp_var;
        result_sum += temp_var;
    }
    delete[] future; // <--
}
Here's how this might look - note how much simpler everything can be! You're dead set on doing manual memory management: why? First of all, QFuture is a value. You can store it very efficiently in any vector container that will manage the memory for you. You can iterate such a container using range-for. Etc.
QT = concurrent # dependencies are automatic, you don't use widgets
CONFIG += c++14 console
CONFIG -= app_bundle
SOURCES = main.cpp
Even though the example is synthetic and the map_function is very simple, it's worth considering how to do things most efficiently and expressively. Your algorithm is a typical map-reduce operation, and blockingMappedReduced has half the overhead of manually doing all of the work.
First of all, let's recast the original problem in C++, instead of some C-with-pluses Frankenstein.
// https://github.com/KubaO/stackoverflown/tree/master/questions/future-ranges-49107082
/* QtConcurrent will include QtCore as well */
#include <QtConcurrent>
#include <algorithm>
#include <iterator>
using result_type = double;
static result_type map_function(int instance){
    return instance * result_type(10);
}

static void sum_modifier(result_type &result, result_type value) {
    result += value;
}

static result_type sum_function(result_type result, result_type value) {
    return result + value;
}

result_type sum_approach1(int const N) {
    QVector<QFuture<result_type>> futures(N);
    int id = 0;
    for (auto &future : futures)
        future = QtConcurrent::run(map_function, id++);
    return std::accumulate(futures.cbegin(), futures.cend(), result_type{}, sum_function);
}
There is no manual memory management, and no explicit splitting into "threads" - that was pointless, since the concurrent execution platform is aware of how many threads there are. So this is already better!
But this seems quite wasteful: each future internally allocates at least once (!).
Instead of using futures explicitly for each result, we can use the map-reduce framework. To generate the sequence, we can define an iterator that provides the integers we wish to work on. The iterator can be a forward or a bidirectional one, and its implementation is the bare minimum needed by QtConcurrent framework.
#include <iterator>
template <typename tag> class num_iterator : public std::iterator<tag, int, int, const int*, int> {
    int num = 0;
    using self = num_iterator;
    using base = std::iterator<tag, int, int, const int*, int>;
public:
    explicit num_iterator(int num = 0) : num(num) {}
    self &operator++() { num ++; return *this; }
    self &operator--() { num --; return *this; }
    self &operator+=(typename base::difference_type d) { num += d; return *this; }
    friend typename base::difference_type operator-(self lhs, self rhs) { return lhs.num - rhs.num; }
    bool operator==(self o) const { return num == o.num; }
    bool operator!=(self o) const { return !(*this == o); }
    typename base::reference operator*() const { return num; }
};
using num_f_iterator = num_iterator<std::forward_iterator_tag>;

result_type sum_approach2(int const N) {
    auto results = QtConcurrent::blockingMapped<QVector<result_type>>(num_f_iterator{0}, num_f_iterator{N}, map_function);
    return std::accumulate(results.cbegin(), results.cend(), result_type{}, sum_function);
}

using num_b_iterator = num_iterator<std::bidirectional_iterator_tag>;

result_type sum_approach3(int const N) {
    auto results = QtConcurrent::blockingMapped<QVector<result_type>>(num_b_iterator{0}, num_b_iterator{N}, map_function);
    return std::accumulate(results.cbegin(), results.cend(), result_type{}, sum_function);
}
Could we drop the std::accumulate and use blockingMappedReduced instead? Sure:
result_type sum_approach4(int const N) {
    return QtConcurrent::blockingMappedReduced(num_b_iterator{0}, num_b_iterator{N},
                                               map_function, sum_modifier);
}
We can also try a random access iterator:
using num_r_iterator = num_iterator<std::random_access_iterator_tag>;

result_type sum_approach5(int const N) {
    return QtConcurrent::blockingMappedReduced(num_r_iterator{0}, num_r_iterator{N},
                                               map_function, sum_modifier);
}
Finally, we can switch from using range-generating iterators, to a precomputed range:
#include <numeric>
result_type sum_approach6(int const N) {
    QVector<int> sequence(N);
    std::iota(sequence.begin(), sequence.end(), 0);
    return QtConcurrent::blockingMappedReduced(sequence, map_function, sum_modifier);
}
Of course, our point is to benchmark it all:
template <typename F> void benchmark(F fun, double const N) {
    QElapsedTimer timer;
    timer.start();
    auto result = fun(N);
    qDebug() << "sum:" << fixed << result << "took" << timer.elapsed()/N << "ms/item";
}

int main() {
    const int N = 1000000;
    benchmark(sum_approach1, N);
    benchmark(sum_approach2, N);
    benchmark(sum_approach3, N);
    benchmark(sum_approach4, N);
    benchmark(sum_approach5, N);
    benchmark(sum_approach6, N);
}
On my system, in release build, the output is:
sum: 4999995000000.000000 took 0.015778 ms/item
sum: 4999995000000.000000 took 0.003631 ms/item
sum: 4999995000000.000000 took 0.003610 ms/item
sum: 4999995000000.000000 took 0.005414 ms/item
sum: 4999995000000.000000 took 0.000011 ms/item
sum: 4999995000000.000000 took 0.000008 ms/item
Note how using map-reduce on a random-iterable sequence has over 3 orders of magnitude lower overhead than using QtConcurrent::run, and is 2 orders of magnitude faster than non-random-iterable solutions.
