# Captain's Log: CIS 775, Analysis Of Algorithms, Fall 2014

1: Aug 25 (Mon)
Introduction to course:
• go through syllabus.
• emphasize that the task of an algorithm is to implement a specification.
• specify the sorting problem.
2: Aug 27 (Wed)
Using the insertion sort method as example, we
• illustrate how to design algorithms by the top-down approach
and then improve them by applying the bottom-up technique
• show some simple reasoning about time and space complexity
• address how to prove the correctness of a recursive algorithm.
Cormen Reading: Sections 2.1 & 2.2.
Howell Reading: Sections 1.2-1.4; Sections 2.1-2.2.
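For reference, a minimal Python sketch of iterative insertion sort in the spirit of Cormen Section 2.1 (not the code shown in lecture):

```python
def insertion_sort(a):
    """Sort list a in place; invariant: a[:i] is sorted when iteration i starts."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # shift elements larger than key one slot to the right
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```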
3: Aug 29 (Fri)
We address, with insertion sort as one of the examples, how to prove the correctness of a
• recursive algorithm (Howell Section 2.2)
• iterative algorithm (Howell Section 2.3).
For iterative insertion sort, a full proof is given in Howell's Theorem 2.7 whereas Cormen has a partial proof on p.19.
Discuss the principle of binary search as to be used in Assignment 1.
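The principle of binary search, as a minimal sketch (the assignment may require a different interface):

```python
def binary_search(a, x):
    """Return an index i with a[i] == x in sorted list a, or -1 if x is absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # invariant: x can only be in a[lo..hi]
        if a[mid] == x:
            return mid
        if a[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```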
Sep 1 (Mon)
University Holiday (Labor Day)
4: Sep 3 (Wed)
• We discuss (in much detail) Assignment 1, due tomorrow at 11pm.
• We motivate the "Dutch National Flag" problem, useful in a number of contexts.
5: Sep 5 (Fri)
• We develop a linear-time constant-space algorithm for the Dutch National Flag problem.
• We introduce the "Big-O" notation, useful for asymptotic analysis, and reason about its properties.
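A sketch of the linear-time, constant-space Dutch National Flag partition (here with values 0, 1, 2 standing for the three colors):

```python
def dutch_flag(a):
    """Partition a (values 0, 1, 2) in place, one pass, O(1) extra space."""
    lo, mid, hi = 0, 0, len(a) - 1
    # invariant: a[:lo] are 0s, a[lo:mid] are 1s, a[hi+1:] are 2s
    while mid <= hi:
        if a[mid] == 0:
            a[lo], a[mid] = a[mid], a[lo]
            lo += 1
            mid += 1
        elif a[mid] == 1:
            mid += 1
        else:
            a[mid], a[hi] = a[hi], a[mid]
            hi -= 1
    return a
```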
6: Sep 8 (Mon)
We continue the theory of asymptotic notation, and
• reason about the properties of big-O
• introduce "Big-Omega" and "Big-Theta", while distinguishing between "best-case" and "worst-case" interpretation
(cf. Cormen p.49; Howell p.57)
• also introduce little-o, as mentioned in Assignment 2.
7: Sep 10 (Wed)
We finish the theory of asymptotic notation, and
• introduce "little-o" and "little-omega" (Howell Section 3.10)
• reason about how the various symbols interact
• discuss how to soundly generalize the Big-O notation to more than one variable (Howell Section 3.9)
Discuss Assignment 2, due tomorrow at 11pm.
8: Sep 12 (Fri)
We address the complexity of iterative algorithms, where we need to estimate sums (cf. Howell 3.4-6):
• Upper approximations are straightforward
• Lower approximations require that the functions are smooth
9: Sep 15 (Mon)
Introduce general recurrences:
• they show up when analyzing divide and conquer algorithms (Cormen Section 2.3)
• to check a proposed solution, the substitution method can be used. (Cormen Section 4.3).
Discuss model solutions for Assignment 1 (now graded).
10: Sep 17 (Wed)
• Wrap up model solutions for Assignment 1, motivating the need to explicitly state the range of variables
• Mention some pitfalls in the analysis of algorithms
• See applications of the theorem for analyzing iterative algorithms
• Discuss Assignment 3, due tomorrow at 11pm.
11: Sep 19 (Fri)
We present a general recipe (the "Master Theorem") to solve recurrences,
cf. Cormen Sections 4.4 & 4.5 (Section 4.6.1 is optional) or Howell Sections 3.7-8:
• we look at applications
• we give an example, T(n) = 2T(n/2) + n log(n), where
• Cormen's Theorem 4.1 is not applicable,
• the Master Theorem from the slides gives a solution sandwiched between n log(n) and n^q for any q > 1,
• but Howell's Theorem 3.32 gives the exact order n log^2(n).
Discuss model solutions for Assignment 2 (now graded).
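The example recurrence above can be checked numerically: assuming T(1) = 0 and n a power of 2, unrolling T(n) = 2T(n/2) + n log2(n) gives T(2^k) = 2^k · k(k+1)/2, which is indeed of order n log^2(n). A small sanity check (the base case T(1) = 0 is an assumption):

```python
from math import log2

def T(n):
    """The lecture's recurrence T(n) = 2 T(n/2) + n log2(n), T(1) = 0, n = 2^k."""
    if n == 1:
        return 0
    return 2 * T(n // 2) + n * log2(n)

# Unrolling gives the closed form T(2^k) = 2^k * k*(k+1)/2, i.e. Theta(n log^2 n).
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k * (k + 1) / 2
```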
12: Sep 22 (Mon)
Wrap up the Master Theorem:
• we outline a proof
• we give an example where, to apply it, one needs to define S(k) = T(2^k).
We embark on a detailed study of the divide and conquer paradigm, with applications such as
• merge sort (Howell Section 10.2 or Cormen Section 2.3)
• quicksort (Cormen Sections 7.1-2 or Howell Section 10.3)
We discuss the upcoming Assignment 4.
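The divide-and-conquer shape of merge sort, as a minimal Python sketch (recursive split, then linear merge):

```python
def merge_sort(a):
    """Return a sorted copy of a; divide in half, sort recursively, then merge."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    # merge: repeatedly take the smaller head of the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```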
13: Sep 24 (Wed)
Show how clever application of the divide and conquer paradigm enables
• efficient multiplication of large integers (or polynomials) (Howell Section 10.1)
• matrix multiplication (Cormen Section 4.2).
Further discuss Assignment 4, due tomorrow at 11pm.
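The cleverness in fast integer multiplication is reducing four half-size subproducts to three, giving T(n) = 3T(n/2) + O(n), hence Θ(n^{log2 3}) ≈ Θ(n^1.585). A sketch of this (Karatsuba's) scheme on Python integers, splitting at a bit boundary:

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with three recursive subproducts."""
    if x < 10 or y < 10:
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)    # x = xh * 2^m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b   # the middle term, from ONE product
    return (a << (2 * m)) + (c << m) + b
```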
14: Sep 26 (Fri)
Show one more application of divide & conquer:
• the selection problem which has a clever solution that always runs in linear time (Cormen Section 9.3 or Howell Section 10.4).
Discuss model solutions for Assignment 3.
15: Sep 29 (Mon)
Introduce graphs, which may
• be directed or undirected, as well as cyclic and/or connected, cf. Cormen's Appendix B.4
• be represented using an adjacency matrix or using adjacency lists, cf. Cormen 22.1 (or Howell 9.3-4)
Briefly motivate the upcoming Assignment 5.
16: Oct 1 (Wed)
Prepare for Exam 1, discussing relevant questions from
• Exam 1 from Fall 2013
• Final exam from Fall 2013
• Exam 1 from Fall 2012
Discuss model solutions for Assignment 4.
17: Oct 3 (Fri)
Exam 1
18: Oct 6 (Mon)
Introduce the heap data structure, cf. Cormen Chapter 6 or Howell Sections 5.1-2&6, used for
• implementing priority queues
• efficient and in-place sorting (as we shall see next time).
Discuss the upcoming Assignment 5.
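As an illustration of a heap-backed priority queue, Python's built-in heapq module maintains the min-heap property on a plain list (this is a library convenience, not the array-based implementation developed in lecture):

```python
import heapq

# A priority queue via Python's binary min-heap on a list.
pq = []
for priority, task in [(3, "write"), (1, "read"), (2, "sort")]:
    heapq.heappush(pq, (priority, task))   # O(log n) per insertion

# heappop always removes the minimum, so tasks come out in priority order.
order = [heapq.heappop(pq)[1] for _ in range(len(pq))]
```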
19: Oct 8 (Wed)
• Present heap sort.
• Motivate amortized analysis, cf. Cormen Section 17.1
• Further discuss Assignment 5, due tomorrow at 11pm.
20: Oct 10 (Fri)
Present amortized analysis, cf. Cormen Chapter 17 (or Howell Sections 4.3&4.5), illustrated by
• expandable arrays (cf. the upcoming Assignment 6)
21: Oct 13 (Mon)
Introduce union/find structures, useful to represent equivalence classes, cf. Cormen Chapter 21, Sections 1-3 (or Howell Chapter 8, Sections 1-3);
• operations can be made to run in logarithmic time, and
• it is possible to obtain an amortized cost per operation which is almost in O(1).
Briefly discuss the upcoming Assignment 6.
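A sketch of a union/find structure with union by rank and path compression (here, the path-halving variant of compression), the combination that yields the almost-O(1) amortized cost:

```python
class UnionFind:
    """Disjoint sets with union by rank and path compression."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # path halving: make every other node on the path point to its grandparent
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False                 # already in the same class
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx              # attach the shallower tree under the deeper
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```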
22: Oct 15 (Wed)
• Wrap up union/find structures, discussing the meaning of "almost O(1)".
• Discuss model solutions for Assignment 5.
• Further discuss Assignment 6, due tomorrow at 11pm.
23: Oct 17 (Fri)
Introduce dynamic programming (Cormen 15.1 & 15.3)
which often allows an efficient implementation of exhaustive search, as illustrated by
• giving optimal change (Howell 12.1)
Hand back, and discuss, graded Exam 1.
24: Oct 20 (Mon)
Further study applications of dynamic programming:
• the binary knapsack problem (Howell 12.4).
Motivate the upcoming Assignment 7.
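A compact dynamic-programming sketch for the binary (0/1) knapsack problem, using a one-dimensional table over capacities:

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack: best[c] = max value achievable with capacity c; O(n * capacity)."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```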
25: Oct 22 (Wed)
Give one further application of dynamic programming:
• Floyd's all-pair shortest paths algorithm (Howell 12.3 or Cormen 25.2).
Discuss model solutions for Assignment 6.
Assignment 7 due tomorrow at 11pm.
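Floyd's algorithm is dynamic programming over the set of allowed intermediate vertices; a minimal sketch on an adjacency matrix:

```python
INF = float("inf")

def floyd(dist):
    """All-pairs shortest paths; dist is an n x n matrix, updated in place."""
    n = len(dist)
    for k in range(n):              # after round k, paths may use vertices 0..k
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```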
26: Oct 24 (Fri)
One last application of dynamic programming:
• chained matrix multiplication (Howell 12.2 or Cormen 15.2).
We embark on greedy algorithms (the topic of Cormen 16 or Howell 11):
• motivate the concept
• show the correctness of a greedy strategy for a simple scheduling problem
• show a greedy strategy for the fractional knapsack problem, cf. Cormen 16.2.
27: Oct 27 (Mon)
Continue greedy algorithms:
• discuss various approaches to construct a minimum-cost spanning tree, cf. Cormen 23 or Howell 11.2:
• present Kruskal's algorithm
• present the proof that Kruskal's algorithm returns a correct solution
• discuss how the above proof technique is generally applicable to greedy algorithms
• present Prim's algorithm
Motivate the upcoming Assignment 8.
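A self-contained sketch of Kruskal's algorithm, with a small inlined union/find (path halving only, for brevity) to detect cycles:

```python
def kruskal(n, edges):
    """MST of a graph on vertices 0..n-1; edges are (weight, u, v) triples."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):       # greedily consider edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # different components: edge is safe
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst
```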
28: Oct 29 (Wed)
See other examples of the greedy paradigm:
• present Dijkstra's algorithm for single-source shortest paths, cf. Howell 11.3 or Cormen 24.3
• scheduling of events with fixed start and end times, cf. Cormen 16.1
• Huffman codes, cf. Howell 11.4 or Cormen 16.3
Further discuss Assignment 8, due tomorrow at 11pm.
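A sketch of Dijkstra's algorithm using a binary heap, with the common "lazy deletion" trick of skipping stale queue entries instead of decreasing keys:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; graph maps u to a list of (v, weight) pairs."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                    # stale entry: u was already settled
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```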
29: Oct 31 (Fri)
Discuss depth-first traversal of graphs, directed as well as undirected (Cormen 22.3 or Howell 13.1-3).
• As a result, we can view a graph as a rooted tree together with certain kinds of extra edges.
Discuss model solutions for Assignments 7 and 8.
30: Nov 3 (Mon)
See how depth-first traversal can be augmented into
• an algorithm for topological sort, cf. Cormen 22.4 or Howell 13.5
Prepare for Exam 2, discussing relevant questions from Fall 2013 and Fall 2012.
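The augmentation of depth-first traversal into topological sort, sketched: vertices listed in reverse order of DFS finish time.

```python
def topological_sort(graph):
    """Topological order of a DAG; graph maps u to a list of successors."""
    seen, order = set(), []

    def dfs(u):
        seen.add(u)
        for v in graph.get(u, []):
            if v not in seen:
                dfs(v)
        order.append(u)        # u finishes only after all its successors

    for u in graph:
        if u not in seen:
            dfs(u)
    return order[::-1]         # reverse finish order = topological order
```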
31: Nov 5 (Wed)
Exam 2
32: Nov 7 (Fri)
See how depth-first traversal can be augmented into
• an algorithm for finding "articulation points", cf. Howell 13.4.
Start introduction to key concepts of probability theory, cf. Cormen Chapter 5 (Section 5.4 cursory only), looking at:
• how many people must be present in order for a shared birthday to be more likely than not
33: Nov 10 (Mon)
Continue probability theory, introducing random variables which may be used to calculate
• the expected number of birthday coincidences
• the expected wait time until success.
Discuss the upcoming Assignment 9.
34: Nov 12 (Wed)
Finish basic probability theory, illustrated by
• strategies for optimal hiring (cf. Cormen 5.2 & 5.4.4).
Motivate randomized algorithms; one important such is
• randomized quicksort (Cormen 7.3-4 or Howell 10.3).
Assignment 9 due tomorrow at 11pm.
35: Nov 14 (Fri)
Continue randomized algorithms:
• how, and how not, to randomly permute an array
• randomized selection (Cormen: Section 9.2)
• Give an upper bound for the expected random path length in an arbitrary binary tree
• Mention "Monte-Carlo" algorithms, in particular for testing if a given number is a prime (cf. Cormen 31.8-9).
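On how, and how not, to randomly permute an array: a sketch of the correct (Fisher-Yates) shuffle. The tempting wrong variant, swapping every position with an index drawn from the whole array, produces n^n equally likely outcomes, which is not divisible by n!, so some permutations must be biased.

```python
import random

def fisher_yates(a, rng=random):
    """Uniform in-place random permutation (Fisher-Yates shuffle)."""
    for i in range(len(a) - 1, 0, -1):
        j = rng.randrange(i + 1)    # j uniform in 0..i -- NOT over the whole array
        a[i], a[j] = a[j], a[i]
    return a
```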
36: Nov 17 (Mon)
• Finish Monte-Carlo algorithms.
• Discuss various pitfalls in probability reasoning.
• Study decision trees, and establish a lower bound for comparison-based sorting algorithms, cf. Cormen 8.1.
37: Nov 19 (Wed)
• Hand back, and discuss, graded Exam 2.
• Compare various sorting algorithms along various metrics.
• We exhibit sorting algorithms, in particular counting sort, that run in linear time, cf. Cormen 8.2-4.
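A sketch of counting sort, the linear-time (O(n + k)) sort for integers in a known range; the backwards output pass is what makes it stable:

```python
def counting_sort(a, k):
    """Stable sort of a list of integers drawn from range(k), in O(n + k) time."""
    count = [0] * k
    for x in a:
        count[x] += 1
    for i in range(1, k):          # prefix sums: count[i] = # of elements <= i
        count[i] += count[i - 1]
    out = [0] * len(a)
    for x in reversed(a):          # backwards pass preserves relative order
        count[x] -= 1
        out[count[x]] = x
    return out
```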
38: Nov 21 (Fri)
More probabilistic reasoning:
• find the expected running time of bucket sort
• discuss model solutions for Assignment 9
(which also has a question on depth-first search).
Introduce "adversary arguments" to establish lower bounds that are more precise than what information theory would yield
• cf. Cormen 9.1 on finding minimum and maximum.
Nov 24-28 (Mon-Fri)
Thanksgiving holiday
39: Dec 1 (Mon)
We embark on discussing what constitutes a really hard problem, cf. Cormen 34.1-2 or Howell 16.2-3:
• introduce the class P
• motivate the class NP, and then formally define it
• define the concept of polynomial many-one reduction.
40: Dec 3 (Wed)
Continue the discussion of what is a really hard problem:
• Define the class of NP-hard (and NP-complete) problems
• Mention the existence of a "first" NP-hard problem, boolean SATisfiability, cf. Cormen 34.3
• Present a number of NP-hard problems, cf. Cormen 34.4-5 (except 34.5.3&5) or Howell 16.4-5.
Discuss Assignment 10, due tomorrow at 11pm.
41: Dec 5 (Fri)
Present several polynomial-time reductions (so as to show that various problems are NP-hard):
• from Clique to VertexCover, cf. Cormen 34.5.2
• from 3-Sat to Clique, cf. Cormen 34.5.1
• from CSat to 3-Sat
• from Sat to CSat (sketch only)
• from k-Col to (k+1)-Col, cf. Assignment 10, Q4
• from 3-Sat to 3-Col (sketch only), cf. the upcoming Assignment 11, Q1.
42: Dec 8 (Mon)
Finish the slides on NP:
• mention that optimization problems can often be reduced to decision problems, and vice versa
(cf. Howell 17.1 and Cormen p.1051, and Assignment 10, Q3).
Embark on Approximation Algorithms:
• motivate the concept (cf. Cormen 35.0)
• for some problems (cf. Cormen 35.1) we can construct polynomial algorithms that are guaranteed to approximate within a fixed factor (often two)
• the traveling salesman problem (Cormen 35.2 and Howell 17.4) can be polynomially approximated within a factor two if the triangle inequality holds
but in the general case, there exists no polynomial approximation algorithm (unless P = NP).
Discuss the upcoming Assignment 11.
43: Dec 10 (Wed)
Discuss the binary knapsack problem (cf Howell 17.2):
• a simple improvement of a greedy algorithm gives approximation within a factor 2
• a generalization gives us arbitrarily high precision where each approximation is polynomial but with degree that depends on the precision
• fortunately, there exists a fully polynomial approximation scheme: we can approximate within an error of 1/k at a cost polynomial even in k.
Present two seemingly equivalent problems (Howell 17.5) where
• one (Max-Cut) can in polynomial time be approximated within a fixed relative error (but it is not clear how to further increase precision)
• one (Min-Cluster) does not allow any polynomial-time fixed-precision approximation algorithm (unless P = NP).
Further discuss Question 2 (where C is for extra credit) in Assignment 11, due tomorrow at 11pm.
44: Dec 12 (Fri)
General review session:
• Discuss the final exam from Fall 2013
• Discuss model solutions for Assignment 10 (Q1 and Q2)
• Discuss model solutions for Assignment 11
• Wrap up the last part of the course: sketch a hierarchy of complexity classes, where in most cases it is unknown whether the inclusion is strict.
Dec 16 (Tue)
Final exam, 11:50am - 1:40pm

Torben Amtoft