Θ-notation
• Math:
Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }
• Engineering:
– Drop low-order terms;
– ignore leading constants.
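The two bullets above can be checked numerically. A minimal sketch, where the function f(n) = 3n² + 5n and the witnesses c1 = 3, c2 = 4, n0 = 5 are illustrative choices of mine, not from the slides:

```python
# Numeric sanity check of the Theta definition: f(n) = 3n^2 + 5n is
# Theta(n^2) with witnesses c1 = 3, c2 = 4, n0 = 5 (illustrative values).
def f(n):
    return 3 * n * n + 5 * n

def g(n):
    return n * n

c1, c2, n0 = 3, 4, 5

# 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0:
for n in range(n0, 1000):
    assert 0 <= c1 * g(n) <= f(n) <= c2 * g(n)

# Dropping the low-order term 5n and the leading constant 3 leaves g(n) = n^2.
```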
O-notation
• Another asymptotic notation, used to indicate upper bounds.
• Math:
f(n) = O(g(n)) means there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0.
• Ex: 2n² = O(n³) (c = 1, n0 = 2)
• “=” means “one-way” equality: we write 2n² = O(n³), never O(n³) = 2n².
Set definition of O-notation
• Math:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
• Looks better?
• Ex: 2n² ∈ O(n³)
• O-notation corresponds roughly to “less than or equal to”.
Usage of O-notation
• Macro substitution: when O-notation occurs only on the right-hand side of a formula, the set stands for some anonymous function belonging to it.
• Ex:
f(n) = n³ + O(n²)
means f(n) = n³ + h(n)
for some h(n) ∈ O(n²); here we can think of h(n) as an error term.
Usage of O-notation
• If O-notation occurs on the left-hand side of a formula, as in n² + O(n) = O(n²), it means:
for any f(n) ∈ O(n), there exists some h(n) ∈ O(n²)
such that n² + f(n) = h(n).
• O-notation is an upper-bound notation.
– It makes no sense to say f(n) is at least O(n²).
Θ-notation
• O-notation is like ≤.
• Ω-notation is like ≥.
• And Θ-notation is like =.
• Now we can give another definition of Θ-notation: Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).
• We also call Θ-notation a “tight bound”.
A Theorem on Asymptotic Notations
• Theorem 3.1
– For any two functions f(n) and g(n),
f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
• Ex: , how can we prove that?
Two More Strict Notations
• We have just said that:
– O-notation and Ω-notation are like ≤ and ≥.
– o-notation and ω-notation are like < and >.
– Difference with O-notation: the inequality 0 ≤ f(n) < cg(n) must hold for every constant c, instead of just for some c.
What does it mean? “for any constant c ”
• With o-notation, we mean that no matter what constant c we put in front of g(n), f(n) will still be less than cg(n) for sufficiently large n, no matter how small c is.
• Ex: 2n² = o(n³) (n0 = 2/c: for any c, 2n² < cn³ whenever n > 2/c)
• A counterexample:
• (1/2)n² = o(n²) does not hold, since (1/2)n² < cn² fails for every c ≤ 1/2; the inequality does not hold for each c.
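The “for any constant c” condition can be checked numerically. A small sketch (the specific c values tested are arbitrary):

```python
# 2n^2 = o(n^3): for EVERY positive constant c, 2n^2 < c*n^3 once
# n exceeds n0 = 2/c (the n0 from the slide).
for c in (1.0, 0.1, 0.01):
    n0 = 2 / c
    start = int(n0) + 1           # any n strictly beyond n0
    for k in range(start, start + 100):
        assert 2 * k * k < c * k ** 3

# Counterexample: (1/2)n^2 is not o(n^2); for c = 1/4 the inequality
# (1/2)n^2 < c*n^2 fails at every n >= 1.
assert all(not (n * n / 2 < 0.25 * n * n) for n in range(1, 100))
```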
Two More Strict Notations
• We have just said that:
– O-notation and Ω-notation are like ≤ and ≥.
– o-notation and ω-notation are like < and >.
– Difference with Ω-notation: the inequality 0 ≤ cg(n) < f(n) must hold for every constant c, instead of just for some c.
Solving recurrences
• The analysis of merge sort from Lecture 1 required us to solve a recurrence.
• Lecture 3: Applications of recurrences to divide-and-conquer algorithms.
• We often omit inessential details while solving recurrences:
– n is assumed to be an integer, because the input size typically is one.
– Boundary conditions are ignored for convenience.
Substitution method
• The most general method: the substitution method.
– Guess the form of the solution.
– Verify by induction.
– Solve for constants.
• The substitution method is used to determine upper or lower bounds of a recurrence.
Example of Substitution
• EXAMPLE: T(n) = 4T(n/2) + n (n ≥ 1)
– (Assume that T(1) = Θ(1).)
• Can you guess its time complexity?
• Guess O(n³). (Prove O and Ω separately.)
– We will prove T(n) ≤ cn³ by induction.
– First, we assume that T(k) ≤ ck³ for k < n.
– T(k) ≤ ck³ holds for k = n/2 by assumption.
Example of Substitution
T(n) = 4T(n/2) + n
     ≤ 4c(n/2)³ + n
     = (c/2)n³ + n
     = cn³ − ((c/2)n³ − n)      (desired − residual)
     ≤ cn³
All we need is that the residual (c/2)n³ − n ≥ 0; this holds as long as c ≥ 2, for n ≥ 1.
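The proved bound can be sanity-checked by computing the recurrence directly. A sketch assuming the concrete base value T(1) = 1 (the slides only say T(1) = Θ(1)) and restricting n to powers of two:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 4T(n/2) + n on powers of two, with T(1) = 1 assumed."""
    if n == 1:
        return 1
    return 4 * T(n // 2) + n

# The bound just proved: T(n) <= c*n^3 with c = 2, for n >= 1.
for k in range(11):
    n = 2 ** k
    assert T(n) <= 2 * n ** 3
```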
Base Case in Induction
• We must also handle the initial conditions, that is, ground the induction with base cases.
• Base: T(n) = Θ(1) for all n < n0, where n0 is a suitable constant.
• For 1 ≤ n< n0, we have “Θ(1)” ≤ cn3, if we pick c big enough.
Here we are! BUT this bound is not tight!
A tighter upper bound?
• We shall prove that T(n) = O(n²).
• Assume that T(k) ≤ ck² for k < n. Then
T(n) = 4T(n/2) + n
     ≤ 4c(n/2)² + n
     = cn² + n      (desired + residual)
• Anybody can tell me why this fails?
• Now we need cn² + n ≤ cn², i.e. −n ≥ 0.
• But it seems impossible for n ≥ 1.
A tighter upper bound!
• IDEA: Strengthen the inductive hypothesis. Subtract a low-order term.
• Inductive hypothesis: T(k) ≤ c1k² − c2k for k < n. Then
T(n) = 4T(n/2) + n
     ≤ 4(c1(n/2)² − c2(n/2)) + n
     = c1n² − 2c2n + n
     = (c1n² − c2n) − (c2 − 1)n      (desired − residual)
Now we need (c2 − 1)n ≥ 0, which holds if c2 ≥ 1.
For the Base Case
• We have now proved T(n) ≤ c1n² − c2n for any value of c1, provided c2 ≥ 1.
• Base: we need T(1) ≤ c1 − c2.
• T(1) is assumed to be some constant.
• We need to choose c1 sufficiently larger than c2, and c2 has to be at least 1.
• To handle the initial conditions, just pick c1 big enough with respect to c2.
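The finished bound can also be checked numerically. A sketch assuming T(1) = 1 and the witnesses c1 = 2, c2 = 1 (which satisfy T(1) ≤ c1 − c2); with this particular base value the bound happens to be met with equality on powers of two:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 4T(n/2) + n on powers of two, with T(1) = 1 assumed."""
    return 1 if n == 1 else 4 * T(n // 2) + n

# Strengthened bound T(n) <= c1*n^2 - c2*n with c1 = 2, c2 = 1:
for k in range(16):
    n = 2 ** k
    assert T(n) <= 2 * n * n - n
    # ... and with T(1) = 1 the recurrence solves to exactly 2n^2 - n:
    assert T(n) == 2 * n * n - n
```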
About Substitution method
• We have worked out upper bounds; the lower bounds are similar.
– Try it yourself.
• Shortcomings:
– We had to know the answer in order to find it, which is a bit of a pain.
– It would be nicer to just figure out the answer by some procedure; that is what the next two techniques do.
Recursion-tree method
• A recursion tree models the costs (time) of a recursive execution of an algorithm.
• The recursion-tree method can be unreliable, just like any method that uses ellipses (…).
• The recursion-tree method promotes intuition, however.
• The recursion tree method is good for generating guesses for the substitution method.
Example of recursion tree
• Solve T(n) = T(n/4) + T(n/2) + n²:
• How many leaves? We just need an upper bound: since subproblem sizes shrink at every level, the number of leaves is less than n.
Example of recursion tree
• Solve T(n) = T(n/4) + T(n/2) + n²:
• The level sums form a geometric series: n², (5/16)n², (5/16)²n², … (since (n/4)² + (n/2)² = (5/16)n²).
• Recall 1 + 1/2 + 1/4 + 1/8 + … = 2: a geometric series with ratio below 1 sums to a constant times its first term.
• Total = n²(1 + 5/16 + (5/16)² + …) = (16/11)n² < 2n²
• So T(n) = Θ(n²).
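The Θ(n²) guess can be sanity-checked by evaluating the recurrence directly. A sketch where floor division and T(n) = 0 for n < 1 are assumed conventions of mine (the slides leave rounding and base cases unspecified):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = T(n/4) + T(n/2) + n^2, with T(n) = 0 assumed for n < 1."""
    if n < 1:
        return 0
    return T(n // 4) + T(n // 2) + n * n

# Level sums form a geometric series with ratio 5/16, so the total is
# at most n^2 / (1 - 5/16) = (16/11)n^2 < 2n^2:
for n in range(1, 2000):
    assert T(n) < 2 * n * n
```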
The Master Method
• It looks like an application of the recursion-tree method, but more precise.
• The sad part about the master method is that it is pretty restrictive.
• It only applies to recurrences of the form:
T(n) = aT(n/b) + f(n), where a ≥ 1, b > 1, and f(n) is asymptotically positive.
• aT(n/b) means every subproblem you recurse on should be of the same size.
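For polynomial driving functions f(n) = n^k the three cases can be decided mechanically. A sketch (the helper `master` and its output format are my own, and for polynomials the regularity condition of case 3 holds automatically):

```python
import math

def master(a, b, k):
    """Classify T(n) = a*T(n/b) + n^k by the master method."""
    crit = math.log(a, b)            # critical exponent log_b(a)
    if k < crit:                     # case 1: leaves dominate
        return f"Theta(n^{crit:g})"
    if k == crit:                    # case 2: all levels comparable
        return f"Theta(n^{k:g} log n)"
    return f"Theta(n^{k:g})"         # case 3: root dominates

# Exact float comparison is fine for these small integer inputs.
assert master(4, 2, 1) == "Theta(n^2)"        # T(n) = 4T(n/2) + n
assert master(2, 2, 1) == "Theta(n^1 log n)"  # merge sort
assert master(2, 2, 2) == "Theta(n^2)"        # T(n) = 2T(n/2) + n^2
```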
Proof of Master Method
Height = log_b n
#(leaves) = a^(log_b n) = n^(log_b a)
• CASE1: The weight increases geometrically from the root to the leaves. The leaves hold a constant fraction of the total weight.
Proof of Master Method
• CASE2 (k = 0): The weight is approximately the same on each of the log_b n levels.
Proof of Master Method
• CASE3: The weight decreases geometrically from the root to the leaves. The root holds a constant fraction of the total weight.
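The three regimes can be seen directly in the per-level weights a^j · f(n/b^j) of the recursion tree. A sketch with n = 1024 and polynomial f(n) = n^k (the helper name is my own):

```python
def level_weights(a, b, k, n=1024):
    """Weight a^j * (n/b^j)^k of each level j of the recursion tree."""
    weights, j = [], 0
    while n >= 1:
        weights.append(a ** j * n ** k)
        n //= b
        j += 1
    return weights

inc = level_weights(4, 2, 1)   # CASE1: 4T(n/2) + n, grows toward leaves
flat = level_weights(2, 2, 1)  # CASE2: 2T(n/2) + n, same on every level
dec = level_weights(2, 2, 2)   # CASE3: 2T(n/2) + n^2, shrinks from root
assert all(x < y for x, y in zip(inc, inc[1:]))
assert all(x == y for x, y in zip(flat, flat[1:]))
assert all(x > y for x, y in zip(dec, dec[1:]))
```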