donderdag 21 december 2017

Algorithms at Khan Academy part 4

Quick Sort

Quicksort has a couple of other differences from merge sort. Quicksort works in place. And its worst-case running time is as bad as selection sort's and insertion sort's: Θ(n²). But its average-case running time is as good as merge sort's: Θ(n lg n). So why think about quicksort when merge sort is at least as good? That's because the constant factor hidden in the big-Θ notation for quicksort is quite good. In practice, quicksort outperforms merge sort, and it significantly outperforms selection sort and insertion sort.


var quickSort = function(array, p, r) {
    if (p < r) {
        var q = partition(array, p, r);
        quickSort(array, p, q - 1);
        quickSort(array, q + 1, r);
    }
};

Challenge: implement partition

// Swaps two items in an array, changing the original array
var swap = function(array, firstIndex, secondIndex) {
    var temp = array[firstIndex];
    array[firstIndex] = array[secondIndex];
    array[secondIndex] = temp;
};

var partition = function(array, p, r) {
    // Compare array[j] with array[r], for j = p, p+1,...r-1
    // maintaining that:
    //  array[p..q-1] are values known to be <= to array[r]
    //  array[q..j-1] are values known to be > array[r]
    //  array[j..r-1] haven't been compared with array[r]
    // If array[j] > array[r], just increment j.
    // If array[j] <= array[r], swap array[j] with array[q],
    //   increment q, and increment j.
    // Once all elements in array[p..r-1]
    //  have been compared with array[r],
    //  swap array[r] with array[q], and return q.
    var q = p;

    for (var j = p; j < r; j++) {
        if (array[j] <= array[r]) {
            swap(array, j, q);
            q++;
        }
    }

    swap(array, r, q);
    return q;
};
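Putting swap, partition, and quickSort together, a self-contained run might look like the following (the sample array is my own, not from the course):

```javascript
// Swaps two items in an array, changing the original array
var swap = function(array, firstIndex, secondIndex) {
    var temp = array[firstIndex];
    array[firstIndex] = array[secondIndex];
    array[secondIndex] = temp;
};

// Lomuto-style partition around the last element array[r]
var partition = function(array, p, r) {
    var q = p;
    for (var j = p; j < r; j++) {
        if (array[j] <= array[r]) {
            swap(array, j, q);
            q++;
        }
    }
    swap(array, r, q);
    return q;
};

var quickSort = function(array, p, r) {
    if (p < r) {
        var q = partition(array, p, r);
        quickSort(array, p, q - 1);
        quickSort(array, q + 1, r);
    }
};

var array = [9, 7, 5, 11, 12, 2, 14, 3, 10, 6];
quickSort(array, 0, array.length - 1);
// array is now [2, 3, 5, 6, 7, 9, 10, 11, 12, 14]
```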

Graph Representation

var edgeList = [ [0, 2], [1, 3], [2, 3], [2, 4], [3, 5], [4, 5] ];

var adjMatrix = [
    [0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0]
];

var adjList = [
    [2],
    [3],
    [3, 4],
    [5],
    [5],
    []
];
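The three representations describe the same graph, so one can be computed from another. As a sketch (the helper name edgeListToAdjList is my own, and I'm assuming a directed graph, which matches the lists above), an edge list can be converted to an adjacency list like this:

```javascript
var edgeList = [[0, 2], [1, 3], [2, 3], [2, 4], [3, 5], [4, 5]];

// Builds an adjacency list from an edge list for a directed graph:
// each edge [u, v] adds v to u's list of neighbors.
var edgeListToAdjList = function(edges, numVertices) {
    var adj = [];
    for (var v = 0; v < numVertices; v++) {
        adj[v] = [];
    }
    for (var i = 0; i < edges.length; i++) {
        adj[edges[i][0]].push(edges[i][1]);
    }
    return adj;
};

var adjList = edgeListToAdjList(edgeList, 6);
// adjList is [[2], [3], [3, 4], [5], [5], []]
```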

Breadth-first search, also known as BFS, finds shortest paths from a given source vertex to all other vertices, in terms of the number of edges in the paths.

Breadth-first search assigns two values to each vertex v:
  • A distance, giving the minimum number of edges in any path from the source vertex to vertex v.
  • The predecessor vertex of v along some shortest path from the source vertex. The source vertex's predecessor is some special value, such as null, indicating that it has no predecessor.


BFS needs to keep track of which vertices have already been visited but have not yet been visited from. We use a queue, which is a data structure that allows us to insert and remove items, where the item removed is always the one that has been in the queue the longest. We call this behavior first in, first out. A queue has three operations:
  • enqueue(obj) inserts an object into the queue.
  • dequeue() removes from the queue the object that has been in it the longest, returning this object.
  • isEmpty() returns true if the queue currently contains no objects, and false if the queue contains at least one object.


BFS is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded, then their successors, and so on, until the best possible path has been found. Because this strategy for graph traversal has no additional information about states beyond that provided in the problem definition, breadth-first search is classed as an uninformed or blind search.

BFS uses a queue data structure which is a 'First in, First Out' or FIFO data structure. This queue stores all the nodes that we have to explore and each time a node is explored it is added to our set of visited nodes.
If we were to conduct a breadth first search on the binary tree above then it would do the following:
  1. Set Node 1 as the start Node
  2. Add this Node to the Queue
  3. Add this Node to the visited set
  4. If this node is our goal node then return true, else add Node 2 and Node 3 to our Queue
  5. Take Node 2 from the Queue; if it isn't our goal node, add both Node 4 and Node 5 to our Queue.
  6. Take the next node from our Queue which should be Node 3 and check that.
  7. If Node 3 isn’t our goal node add Node 6 and Node 7 to our Queue.
  8. Repeat until goal Node is found.

 Challenge: Implement breadth-first search

/* A Queue object for queue-like functionality over JavaScript arrays. */
var Queue = function() {
    this.items = [];
};
Queue.prototype.enqueue = function(obj) {
    this.items.push(obj);
};
Queue.prototype.dequeue = function() {
    return this.items.shift();
};
Queue.prototype.isEmpty = function() {
    return this.items.length === 0;
};

/*
 * Performs a breadth-first search on a graph
 * @param {array} graph - Graph, represented as adjacency lists.
 * @param {number} source - The index of the source vertex.
 * @returns {array} Array of objects describing each vertex, like
 *     [{distance: _, predecessor: _ }]
 */
var doBFS = function(graph, source) {
    var bfsInfo = [];

    for (var i = 0; i < graph.length; i++) {
        bfsInfo[i] = {
            distance: null,
            predecessor: null };
    }

    bfsInfo[source].distance = 0;

    var queue = new Queue();
    queue.enqueue(source);

    // Traverse the graph
   
    // As long as the queue is not empty:
    //  Repeatedly dequeue a vertex u from the queue.
    // 
    //  For each neighbor v of u that has not been visited:
    //     Set distance to 1 greater than u's distance
    //     Set predecessor to u
    //     Enqueue v
    //
    //  Hint:
    //  use graph to get the neighbors,
    //  use bfsInfo for distances and predecessors
    while (!queue.isEmpty()) {
        var vertex = queue.dequeue();

        for (var j = 0; j < graph[vertex].length; j++) {
            var neighbour = graph[vertex][j];

            if (bfsInfo[neighbour].distance === null) {
                bfsInfo[neighbour].distance = bfsInfo[vertex].distance + 1;
                bfsInfo[neighbour].predecessor = vertex;
                queue.enqueue(neighbour);
            }
        }
    }

    return bfsInfo;
};


var adjList = [
    [1],
    [0, 4, 5],
    [3, 4, 5],
    [2, 6],
    [1, 2],
    [1, 2, 6],
    [3, 5],
    []
    ];
var bfsInfo = doBFS(adjList, 3);
for (var i = 0; i < adjList.length; i++) {
    println("vertex " + i + ": distance = " + bfsInfo[i].distance + ", predecessor = " + bfsInfo[i].predecessor);
}
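The distances and predecessors returned by doBFS are enough to recover an actual shortest path: walk the predecessor links back from the target to the source, then reverse. The getPath helper below is my own addition, not part of the challenge; a compact BFS using a plain array as the queue is repeated here so the sketch is self-contained.

```javascript
// BFS over an adjacency list, using a plain array as the queue
// (shift() is O(n), but fine for a sketch).
var doBFS = function(graph, source) {
    var bfsInfo = [];
    for (var i = 0; i < graph.length; i++) {
        bfsInfo[i] = { distance: null, predecessor: null };
    }
    bfsInfo[source].distance = 0;
    var queue = [source];
    while (queue.length > 0) {
        var vertex = queue.shift();
        for (var j = 0; j < graph[vertex].length; j++) {
            var neighbour = graph[vertex][j];
            if (bfsInfo[neighbour].distance === null) {
                bfsInfo[neighbour].distance = bfsInfo[vertex].distance + 1;
                bfsInfo[neighbour].predecessor = vertex;
                queue.push(neighbour);
            }
        }
    }
    return bfsInfo;
};

// Recovers the shortest path to target by walking predecessor links.
var getPath = function(bfsInfo, target) {
    if (bfsInfo[target].distance === null) { return null; } // unreachable
    var path = [];
    for (var v = target; v !== null; v = bfsInfo[v].predecessor) {
        path.unshift(v);
    }
    return path;
};

var adjList = [[1], [0, 4, 5], [3, 4, 5], [2, 6], [1, 2], [1, 2, 6], [3, 5], []];
var info = doBFS(adjList, 3);
// getPath(info, 0) returns [3, 2, 4, 1, 0]  (4 edges, matching distance 4)
// getPath(info, 7) returns null             (vertex 7 is unreachable)
```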

A connected graph in which |E| = |V| − 1 is called a free tree.

woensdag 20 december 2017

Algorithms at Khan Academy part 3

We'll see two other sorting algorithms, merge sort and quicksort, whose running times are better. In particular, merge sort runs in Θ(n lg n) time in all cases, and quicksort runs in Θ(n lg n) time in the best case and on average, though its worst-case running time is Θ(n²).

During the Mergesort process the objects of the collection are divided into two collections. To split a collection, Mergesort will take the middle of the collection and split the collection into its left and its right part.

Divide-and-conquer algorithm:
  1. Divide the problem into a number of subproblems that are smaller instances of the same problem.
  2. Conquer the subproblems by solving them recursively. If they are small enough, solve the subproblems as base cases.
  3. Combine the solutions to the subproblems into the solution for the original problem.

Here's how merge sort uses divide-and-conquer:
  1. Divide by finding the number q of the position midway between p and r. Do this step the same way we found the midpoint in binary search: add p and r, divide by 2, and round down.
  2. Conquer by recursively sorting the subarrays in each of the two subproblems created by the divide step. That is, recursively sort the subarray array[p..q] and recursively sort the subarray array[q+1..r].
  3. Combine by merging the two sorted subarrays back into the single sorted subarray array[p..r].

Challenge: Implement merge sort

// Takes in an array and recursively merge sorts it
var mergeSort = function(array, p, r) {
    if (p < r) {
        var q = Math.floor((p + r) / 2);
        mergeSort(array, p, q);
        mergeSort(array, q + 1, r);
        merge(array, p, q, r);
    }
};

Challenge: implement merge

var merge = function(array, p, q, r) {
    var lowHalf = [];
    var highHalf = [];

    var k = p;
    var i;
    var j;
    for (i = 0; k <= q; i++, k++) {
        lowHalf[i] = array[k];
    }
    for (j = 0; k <= r; j++, k++) {
        highHalf[j] = array[k];
    }

    k = p;
    i = 0;
    j = 0;
   
    // Repeatedly compare the lowest untaken element in
    //  lowHalf with the lowest untaken element in highHalf
    //  and copy the lower of the two back into array
    while (i <= (q - p) && j <= (r - q - 1)) {
        if (lowHalf[i] < highHalf[j]) {
            array[k] = lowHalf[i];
            i++;
        } else {
            array[k] = highHalf[j];
            j++;
        }
        k++;
    }
   
    // Once one of lowHalf and highHalf has been fully copied
    //  back into array, copy the remaining elements from the
    //  other temporary array back into the array
    while (i <= (q - p)) {
        array[k] = lowHalf[i];
        i++;
        k++;
    }

    while (j <= (r - q - 1)) {
        array[k] = highHalf[j];
        j++;
        k++;
    }
};
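A self-contained run of mergeSort with the merge routine above might look like this (the sample array is mine; the loop bounds are written in terms of the temporary arrays' lengths, which is equivalent to the index arithmetic above):

```javascript
// Merges the sorted subarrays array[p..q] and array[q+1..r].
var merge = function(array, p, q, r) {
    var lowHalf = [];
    var highHalf = [];

    var k = p;
    var i;
    var j;
    for (i = 0; k <= q; i++, k++) {
        lowHalf[i] = array[k];
    }
    for (j = 0; k <= r; j++, k++) {
        highHalf[j] = array[k];
    }

    k = p;
    i = 0;
    j = 0;
    // Repeatedly copy the smaller front element back into array
    while (i < lowHalf.length && j < highHalf.length) {
        if (lowHalf[i] < highHalf[j]) {
            array[k] = lowHalf[i];
            i++;
        } else {
            array[k] = highHalf[j];
            j++;
        }
        k++;
    }
    // Copy whatever remains of either half
    while (i < lowHalf.length) {
        array[k] = lowHalf[i];
        i++;
        k++;
    }
    while (j < highHalf.length) {
        array[k] = highHalf[j];
        j++;
        k++;
    }
};

var mergeSort = function(array, p, r) {
    if (p < r) {
        var q = Math.floor((p + r) / 2);
        mergeSort(array, p, q);
        mergeSort(array, q + 1, r);
        merge(array, p, q, r);
    }
};

var data = [14, 7, 3, 12, 9, 11, 6, 2];
mergeSort(data, 0, data.length - 1);
// data is now [2, 3, 6, 7, 9, 11, 12, 14]
```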

VS quick sort: Quicksort can sort in place in an existing collection, i.e. it does not have to create a copy of the collection, while Mergesort requires a copy.

One other thing about merge sort is worth noting. During merging, it makes a copy of the entire array being sorted, with one half in lowHalf and the other half in highHalf. Because it copies more than a constant number of elements at some time, we say that merge sort does not work in place. By contrast, both selection sort and insertion sort do work in place, since they never make a copy of more than a constant number of array elements at any one time. Computer scientists like to consider whether an algorithm works in place, because there are some systems where space is at a premium, and thus in-place algorithms are preferred.

If the amount of memory available to us is limited, it might be in our best interest to switch to another sorting algorithm like selection sort or insertion sort, since they require less memory (both of them require only constant memory, because they don't have to copy anything). Thus, it often boils down to a trade-off between memory and speed. If you're lacking in memory, you might have to use a slower or less desirable algorithm like selection sort. But if you have plenty of memory, then maybe you can afford a faster algorithm like merge sort.

dinsdag 19 december 2017

Algorithms at Khan Academy part 2

Recursive algorithms
Recursion is a technique for solving a problem by solving a smaller instance of the same problem, unless the problem is so small that we can just solve it directly.

The factorial function is very useful when we're trying to count how many different orders there are for things, or how many different ways we can combine things.

Iterative factorial

var factorial = function(n) {
    var result = 1;
    for (var i = 1; i <= n; i++) {
        result *= i;
    }
    return result;
};

A recursive solution would involve the function calling itself
Induction starts from the base case(s) and works up, while recursion starts from the top and works downwards until it hits a base case.

With induction we know we started on a solid foundation of the base cases, but with recursion we have to be careful when we design the algorithm to make sure that we eventually hit a base case.

Recursive factorial
var factorial = function(n) {
    // base case:
    if (n === 0) {
        return 1;
    }
    // recursive case:
    return n * factorial(n - 1);
};

println("The value of 0! is " + factorial(0) + ".");
println("The value of 5! is " + factorial(5) + ".");

Program.assertEqual(factorial(0), 1);
Program.assertEqual(factorial(5), 120);
Program.assertEqual(factorial(4), 24);
Program.assertEqual(factorial(3), 6);

We can distill the idea of recursion into two simple rules:
  1. Each recursive call should be on a smaller instance of the same problem, that is, a smaller subproblem.
  2. The recursive calls must eventually reach a base case, which is solved without further recursion.
A palindrome is a word that is spelled the same forward and backward. For example, rotor and redder.

Challenge: is a string a palindrome?

// Returns the first character of the string str
var firstCharacter = function(str) {
    return str.slice(0, 1);
};

// Returns the last character of a string str
var lastCharacter = function(str) {
    return str.slice(-1);
};

// Returns the string that results from removing the first
//  and last characters from str
var middleCharacters = function(str) {
    return str.slice(1, -1);
};

var isPalindrome = function(str) {
    // base case #1
    if(str.length <= 1){ return true;}
    // base case #2
    if(firstCharacter(str) !== lastCharacter(str)){ return false;}
    // recursive case
    return isPalindrome(middleCharacters(str));
};

var checkPalindrome = function(str) {
    println("Is this word a palindrome? " + str);
    println(isPalindrome(str));
};
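The notes above define checkPalindrome but never call it. A self-contained run of the palindrome check (test words partly mine) might look like:

```javascript
// Returns the first character of the string str
var firstCharacter = function(str) {
    return str.slice(0, 1);
};

// Returns the last character of the string str
var lastCharacter = function(str) {
    return str.slice(-1);
};

// Returns str with its first and last characters removed
var middleCharacters = function(str) {
    return str.slice(1, -1);
};

var isPalindrome = function(str) {
    if (str.length <= 1) { return true; }
    if (firstCharacter(str) !== lastCharacter(str)) { return false; }
    return isPalindrome(middleCharacters(str));
};

// isPalindrome("rotor")  returns true
// isPalindrome("redder") returns true
// isPalindrome("tree")   returns false
```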
 

Recursive algorithm for computing xⁿ:
  • The base case is when n = 0, and x⁰ = 1.
  • If n is positive and even, recursively compute y = x^(n/2), and then xⁿ = y · y. Notice that you can get away with making just one recursive call in this case, computing x^(n/2) just once, and then you multiply the result of this recursive call by itself.
  • If n is positive and odd, recursively compute x^(n−1), so that the exponent either is 0 or is positive and even. Then xⁿ = x^(n−1) · x.
  • If n is negative, recursively compute x^(−n), so that the exponent becomes positive. Then xⁿ = 1 / x^(−n).

Challenge: Recursive powers

 var isEven = function(n) {
    return n % 2 === 0;
};

var isOdd = function(n) {
    return !isEven(n);
};

var power = function(x, n) {
    println("Computing " + x + " raised to power " + n + ".");
    // base case
    if (n === 0) {
        return 1;
    }
    // recursive case: n is negative
    if (n < 0) {
        return 1 / power(x, -n); // exponent made positive
    }
    // recursive case: n is positive and odd
    if (isOdd(n)) {
        return power(x, n - 1) * x;
    }
    // recursive case: n is positive and even
    var halfPower = power(x, n / 2);
    return halfPower * halfPower;
};

var displayPower = function(x, n) {
    println(x + " to the " + n + " is " + power(x, n));
};


displayPower(3, 0);
Program.assertEqual(power(3, 0), 1);
displayPower(3, 1);
Program.assertEqual(power(3, 1), 3);

Challenge: Solve Hanoi recursively

var solveHanoi = function(numDisks, fromPeg, toPeg) {
    // base case: no disks to move
    if (numDisks === 0) {
        return;
    }
    // recursive case:
    var sparePeg = hanoi.getSparePeg(fromPeg, toPeg);
    solveHanoi(numDisks - 1, fromPeg, sparePeg);
    hanoi.moveDisk(fromPeg, toPeg);
    solveHanoi(numDisks - 1, sparePeg, toPeg);
};

solveHanoi(5, "A", "B");
Program.assertEqual(hanoi.isSolved("B"),true);

maandag 4 september 2017

Algorithms at Khan Academy part 1

Binary search is an efficient algorithm for finding an item from an ordered list of items. It works by repeatedly dividing in half the portion of the list that could contain the item, until you've narrowed down the possible locations to just one.

Implement binary search in JavaScript:
/* Returns either the index of the location in the array,
  or -1 if the array did not contain the targetValue */
var doSearch = function(array, targetValue) {
    var min = 0;
    var max = array.length - 1;
    var guess;
    var i = 1;
    while (min <= max) {
        guess = Math.floor((max + min) / 2);

        if (array[guess] === targetValue) {
            println("Total guesses " + i);
            return guess;
        } else if (array[guess] < targetValue) {
            min = guess + 1;
        } else {
            max = guess - 1;
        }
        i = i + 1;
        println(guess);
    }

    return -1;
};

var primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37,
        41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97];

var result = doSearch(primes, 73);
println("Found prime at index " + result);

Program.assertEqual(doSearch(primes, 73), 20);
Program.assertEqual(doSearch(primes, 97), 24);
Program.assertEqual(doSearch(primes, 11), 4);


We use big-Θ notation to asymptotically bound the growth of a running time to within constant factors above and below.

We use big-O notation for asymptotic upper bounds, since it bounds the growth of the running time from above for large enough input sizes. It basically means "the running time grows at most this much, but it could grow more slowly."

We can say that the running time of binary search is always O(lg n). We can make a stronger statement about the worst-case running time: it's Θ(lg n).

We use big-Ω notation to say that an algorithm takes at least a certain amount of time, without providing an upper bound. So the running time is "big-Ω of f(n)." We use big-Ω notation for asymptotic lower bounds, since it bounds the growth of the running time from below for large enough input sizes.

IMPORTANT: We can say that binary search always runs in O(lg n) time but not that it always runs in Θ(lg n) time. Θ(f(n)) automatically implies O(f(n)) AND Ω(f(n)).
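One way to see the lg n growth that the Θ(lg n) bound describes: count how often a range of size n can be halved before only one element remains. The helper below is my own illustration, not from the course.

```javascript
// Counts how many times n can be halved (rounding down) before
// reaching 1 -- this is floor(lg n), the number of binary-search
// style halvings a range of size n allows.
var countHalvings = function(n) {
    var steps = 0;
    while (n > 1) {
        n = Math.floor(n / 2);
        steps++;
    }
    return steps;
};

// countHalvings(8) is 3, countHalvings(1024) is 10:
// doubling the input adds only one step.
```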



Here is a list of functions in asymptotic notation that we often encounter when analyzing algorithms, listed from slowest to fastest growing.
  1. Θ(1)
  2. Θ(lg n)
  3. Θ(n)
  4. Θ(n lg n)
  5. Θ(n²)
  6. Θ(n² lg n)
  7. Θ(n³)
  8. Θ(2ⁿ)
  • A function has "constant" growth if its output does not change based on the input, n. The easy way to identify constant functions is to find those that have no n in their expression anywhere, or only n⁰ --> 1 and 1000 are constant.
  • A function has "linear" growth if its output increases linearly with the size of its input. The way to identify linear functions is to find those where n is never raised to a power (although n¹ is OK) or used as a power. --> 3n and (3/2)n are linear.
  • A function has "polynomial" growth if its output increases according to a polynomial expression. --> 2n³ and 3n² are polynomial.
  • A function has "exponential" growth if its output increases according to an exponential expression. The way to identify exponential functions is to find those where a constant is raised to some expression involving n. --> 2ⁿ and (3/2)ⁿ are exponential.
Find minimum in subarray:
var indexOfMinimum = function(array, startIndex) {
    // Set initial values for minValue and minIndex,
    // based on the leftmost entry in the subarray:
    var minValue = array[startIndex];
    var minIndex = startIndex;

    // Loop over items starting with startIndex,
    // updating minValue and minIndex as needed:
    for (var i = minIndex + 1; i < array.length; i++) {
        if (array[i] < array[minIndex]) {
            minIndex = i;
            minValue = array[i];
        }
    }
    return minIndex;
};

var array = [18, 6, 66, 44, 9, 22, 14];  
var index = indexOfMinimum(array, 2);

//  For the test array [18, 6, 66, 44, 9, 22, 14],
//  the value 9 is the smallest of [..66, 44, 9, 22, 14]
//  Since 9 is at index 4 in the original array,
//  "index" has value 4
println("The index of the minimum value of the subarray starting at index 2 is " + index + "."  );
Program.assertEqual(index, 4);
Program.assertEqual(indexOfMinimum(array, 5), 6);
Program.assertEqual(indexOfMinimum(array, 3), 4); 


Implement selection sort:
var selectionSort = function(array) {
  var minIndex;
  for (var i = 0; i < array.length; i++) {
    minIndex = indexOfMinimum(array, i);
    swap(array, i, minIndex);
  }
};

var array = [22, 11, 99, 88, 9, 7, 42];
selectionSort(array);
println("Array after sorting:  " + array);

Program.assertEqual(array, [7, 9, 11, 22, 42, 88, 99]);
array = [22, 11, 99, 88, 9, 7, 42,1];
selectionSort(array);
Program.assertEqual(array, [1, 7, 9, 11, 22, 42, 88, 99]);
array = [22, 11, 0, 88, 9, 7, 42,1];
selectionSort(array);
Program.assertEqual(array, [0, 1, 7, 9, 11, 22, 42, 88]);
 

Computing summations from 1 to n:
A faster way to calculate 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 = 36 is:
(8+1) + (7+2) + (6+3) + (5+4) = 9 + 9 + 9 + 9
                              = 4 · 9 = 36.

Add the smallest and the largest number and then multiply by the number of pairs.
What if the sequence to sum up goes from 1 to n? We call this an arithmetic series. The sum of the smallest and largest numbers is n + 1. Because there are n numbers altogether, there are n/2 pairs (whether n is odd or even). Therefore, the sum of numbers from 1 to n is (n + 1)(n/2), which equals n²/2 + n/2.
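The pairing argument above can be sanity-checked in a few lines (the helper names are mine):

```javascript
// Sums 1 + 2 + ... + n directly.
var sumByLoop = function(n) {
    var total = 0;
    for (var i = 1; i <= n; i++) {
        total += i;
    }
    return total;
};

// Gauss's closed form: (n + 1) pairs-sum times n/2 pairs.
var sumByFormula = function(n) {
    return n * (n + 1) / 2;
};

// sumByLoop(8) and sumByFormula(8) are both 36
// sumByLoop(100) and sumByFormula(100) are both 5050
```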

  1. Call insert to insert the element that starts at index 1 into the sorted subarray in index 0.
  2. Call insert to insert the element that starts at index 2 into the sorted subarray in indices 0 through 1.
  3. Call insert to insert the element that starts at index 3 into the sorted subarray in indices 0 through 2.
  4. Finally, call insert to insert the element that starts at index n − 1 into the sorted subarray in indices 0 through n − 2.


Implement insertion sort (running time Θ(n²)):
var insert = function(array, rightIndex, value) {
   var i;
  for (i = rightIndex; i >= 0 && array[i] > value; i--) {
    array[i + 1] = array[i];
  }
  array[i + 1] = value;
};


var insertionSort = function(array) {
 for (var i = 1; i < array.length; i++) {
    insert(array, i - 1, array[i]);
  }
};

var array = [3, 5, 7, 11, 13, 2, 9, 6];

insert(array, 4, 2);
println("Array after inserting 2:  " + array);
Program.assertEqual(array, [2, 3, 5, 7, 11, 13, 9, 6]);
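The notes only test insert on its own; a full run of insertionSort, self-contained, might look like this (the sample array is mine):

```javascript
// Shifts elements of the sorted subarray array[0..rightIndex] right
// until the correct slot for value opens up, then drops value in.
var insert = function(array, rightIndex, value) {
    var i;
    for (i = rightIndex; i >= 0 && array[i] > value; i--) {
        array[i + 1] = array[i];
    }
    array[i + 1] = value;
};

// Inserts each element in turn into the sorted prefix before it.
var insertionSort = function(array) {
    for (var i = 1; i < array.length; i++) {
        insert(array, i - 1, array[i]);
    }
};

var numbers = [22, 11, 99, 88, 9, 7, 42];
insertionSort(numbers);
// numbers is now [7, 9, 11, 22, 42, 88, 99]
```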



Difference between Insertion and Selection sort:
  • Insertion sort: inserts the next element at its correct position;
  • Selection sort: selects the smallest remaining element and exchanges it with the current item;
The running times for insertion sort:
  • Worst case: Θ(n²).
  • Best case: Θ(n). <-- almost sorted array
  • Average case for a random array: Θ(n²).
  • "Almost sorted" case: Θ(n).
Recursion: solve a problem by solving a smaller instance of the same problem, unless the problem is so small that we can just solve it directly. See part 2.