Algorithm Complexity & Big-O Notation: From the Basics


Transcript of Algorithm Complexity & Big-O Notation: From the Basics

Algorithm Complexity and Big-O Notation

Algorithm Complexity & Big-O Notation: From the Basics
CompSci Club
29 May 2014

History -- Number Theory
Edmund Landau (1877-1938)

German mathematician; supervisor: Frobenius. Known for work on Dirichlet series and number theory: over 250 papers, a simple proof of the Prime Number Theorem, and developments on algebraic number fields. Landau studied the *asymptotic behavior of functions*; the O is for "Order."

History -- Application to CS
Big-O notation is used to study the performance and complexity of algorithms in computer science:
Execution time T(n)
Memory usage (hard drive, network use, etc.)
Performance: what are these variables?
Complexity: how does execution time change with a greater amount of data?
Amortized analysis: determining complexity by averaging the cost of operations over a worst-case sequence of operations, using big-O notation

Definition & Notations, I
If there exist a number N and a constant c such that f(x) ≤ c·g(x) for all x > N, then we can write f(x) ∈ O(g(x)).
(Elsewhere, N denotes the problem size: input size, list size.)
We see various examples, as different functions:

O(N^3), O(a^N), O(log N)
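The definition above can be checked numerically. The following sketch (not from the slides; the names f, g and the witnesses c = 4, N = 3 are illustrative choices) verifies that f(n) = 3n^2 + 5 is in O(n^2) by testing f(n) ≤ c·g(n) for a range of n beyond the threshold:

```java
// Sketch: numerically spot-checking the Big-O definition.
// Claim: f(n) = 3n^2 + 5 is in O(n^2), with witnesses c = 4 and N = 3,
// i.e. f(n) <= 4 * n^2 for all n > 3.
public class BigODefinitionCheck {
    static long f(long n) { return 3 * n * n + 5; }
    static long g(long n) { return n * n; }

    public static void main(String[] args) {
        long c = 4, threshold = 3;
        for (long n = threshold + 1; n <= 1000; n++) {
            if (f(n) > c * g(n)) {
                throw new AssertionError("bound violated at n = " + n);
            }
        }
        System.out.println("f(n) <= 4*g(n) holds for 3 < n <= 1000");
    }
}
```

A finite check is not a proof, but it is a quick way to convince yourself that particular witnesses c and N work.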

Definitions & Notations, II
f(n) ∈ O(g(n)): f(n) ≤ c·g(n)
f(n) ∈ Θ(g(n)): f(n) = c·g(n)
f(n) ∈ Ω(g(n)): f(n) ≥ c·g(n)
*Note: in many texts these are written with equals signs, though many mathematicians (myself included) find that to be inadequate notation (the equals operator implies a true converse, which does not hold in all cases).

For Those of You in Calc Class
If we know that lim (x→∞) f(x)/g(x) = 0, then f(x) = o(g(x)).
However! This is actually little-oh notation, a stricter quality than Big-O notation: there exists a number N such that f(x) < c·g(x) for x > N and for all values of c.

Example Problems
You may note that, in the coming examples, constant values don't end up mattering very much. The Dept. of CS at Univ. of Wisconsin-Madison describes and proves the evaluation of complexity by summing up the times of each statement:

public void testComplexity() {
    statement1;
    statement2;
}
= O(1)

The time is a fixed sum of the individual statement times, solely a function of the number of statements and independent of the input size N.

Some More Complicated Examples
for-loop complexity: proportional to the upper index of the loop
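The statement-counting idea above can be made concrete. This sketch (the method names are illustrative, not from the slides) contrasts a fixed sequence of statements, which is O(1), with a simple loop to N, which is O(N):

```java
// Sketch of the statement-counting approach: fixed statements are O(1),
// a loop bounded by N does work proportional to N, i.e. O(N).
public class StatementCounting {
    // O(1): the number of executed statements does not depend on any input.
    static int constantWork() {
        int statement1 = 1;              // one statement
        int statement2 = 2;              // one statement
        return statement1 + statement2;  // total cost: a fixed constant
    }

    // O(N): the loop body executes exactly n times.
    static int linearWork(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            count++;  // stands in for "sequence of statements"
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(constantWork());   // always 3, regardless of input
        System.out.println(linearWork(10));   // 10 iterations
        System.out.println(linearWork(1000)); // 1000 iterations
    }
}
```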

Nested for-loops, each index starting at 0: proportional to O(N*M), or O(N^2) if N = M.

Some More Complicated Examples

for (int k = 0; k < N; k++) {
    for (int j = k; j < N; j++) {
        statements;
    }
}
= O(N^2)

since the inner body runs 1 + 2 + 3 + 4 + ... + N = N(N+1)/2 = (1/2)(N^2 + N) times.

Some Practice Problems
What is the worst-case complexity of each of the following code fragments?

Two loops in a row:
for (i = 0; i < N; i++) { sequence of statements }
for (j = 0; j < M; j++) { sequence of statements }
How would the complexity change if the second loop went to N instead of M?

Practice Problems
A nested loop followed by a non-nested loop:
for (i = 0; i < N; i++) {
    for (j = 0; j < N; j++) { sequence of statements }
}
for (k = 0; k < N; k++) { sequence of statements }

A nested loop in which the number of times the inner loop executes depends on the value of the outer loop index:
for (i = 0; i < N; i++) {
    for (j = N; j > i; j--) { sequence of statements }
}
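The triangular-number count for the dependent nested loop can be verified by running it. This sketch counts the inner-body executions of the loop shown above and compares against N(N+1)/2:

```java
// Counting iterations of the dependent nested loop from the slides:
// for k in [0, N), for j in [k, N) -- the inner body runs
// N + (N-1) + ... + 1 = N(N+1)/2 times, which is O(N^2).
public class NestedLoopCount {
    static long countIterations(long n) {
        long count = 0;
        for (long k = 0; k < n; k++) {
            for (long j = k; j < n; j++) {
                count++;  // stands in for "statements"
            }
        }
        return count;
    }

    public static void main(String[] args) {
        long n = 100;
        long expected = n * (n + 1) / 2;  // triangular number: 5050 for n = 100
        if (countIterations(n) != expected) {
            throw new AssertionError("count does not match N(N+1)/2");
        }
        System.out.println(countIterations(n));  // 5050
    }
}
```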

When Does a Constant Matter?
One can study time functions with greater specificity, for smaller differences in complexity. With T1(N) = k*N and T2(N) = a*N^b, b > 1, T1 becomes more efficient than T2 once N grows past a certain size, even if the constant k is large.

Some Well-known Algorithms & Their Complexities
From Wikipedia:
Constant time: finding the size of an array
Logarithmic: the binary search algorithm
Quadratic: bubble sort & insertion sort
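The logarithmic entry in the list above is worth seeing in code. This is a standard binary search sketch (not taken from the slides): each comparison halves the remaining range, so a sorted array of length N needs at most about log2(N) + 1 probes:

```java
// Binary search over a sorted array: O(log N), because each comparison
// discards half of the remaining search range.
public class BinarySearchDemo {
    // Returns the index of target in sorted array a, or -1 if absent.
    static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  // avoids overflow of (lo + hi)
            if (a[mid] == target) {
                return mid;
            } else if (a[mid] < target) {
                lo = mid + 1;  // target is in the upper half
            } else {
                hi = mid - 1;  // target is in the lower half
            }
        }
        return -1;  // not found
    }

    public static void main(String[] args) {
        int[] a = new int[1024];
        for (int i = 0; i < a.length; i++) a[i] = 2 * i;  // sorted even numbers
        System.out.println(binarySearch(a, 40));  // found at index 20
        System.out.println(binarySearch(a, 41));  // odd number: -1
    }
}
```

With 1024 elements, at most about 11 probes are needed, versus up to 1024 for a linear scan.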

And these can be verified by hand: array size lookup = O(1), binary search = O(log N), insertion sort = O(N^2).