Discrete Mathematics (H) Lecture 6 Notes

Zhige Chen

Algorithms and Complexity

Asymptotic Notation

Big-O Notation

Definition: Big-O Notation

Let $f$ and $g$ be functions from the set of integers or the set of real numbers to the set of real numbers. We say that $f(x)$ is $O(g(x))$ if there exist some positive constants $C$ and $k$ s.t. $|f(x)| \le C|g(x)|$ whenever $x > k$.

(for more precise definition and applications see notes for lecture 2 of Data Structures and Algorithm Analysis (H))
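
For instance (a worked example of the definition above, added here for illustration): $f(x) = x^2 + 2x + 1$ is $O(x^2)$, because whenever $x > 1$,

$$
x^2 + 2x + 1 \le x^2 + 2x^2 + x^2 = 4x^2,
$$

so the definition is satisfied with the witnesses $C = 4$ and $k = 1$.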

Tip: Big-O Estimates for Polynomials

Let $f(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$, where $a_0, a_1, \ldots, a_n$ are real numbers. Then $f(x)$ is $O(x^n)$.

Proof. Assuming $x > 1$, we have
$$
|f(x)| \le |a_n|x^n + |a_{n-1}|x^{n-1} + \cdots + |a_1|x + |a_0| \le x^n \left( |a_n| + |a_{n-1}| + \cdots + |a_1| + |a_0| \right),
$$
so the definition is satisfied with $C = |a_n| + \cdots + |a_0|$ and $k = 1$. So we can conclude that for polynomials, the leading term dominates the growth.

Big-O estimates for some functions:

  • $1 + 2 + \cdots + n$ is $O(n^2)$ for a positive integer $n$
  • $1^k + 2^k + \cdots + n^k$ is $O(n^{k+1})$ for positive integers $n$ and $k$
  • $n!$ is $O(n^n)$ for a positive integer $n$

Tip: Big-O for Summation of Functions

If $f_1(x)$ is $O(g_1(x))$ and $f_2(x)$ is $O(g_2(x))$, then $(f_1 + f_2)(x)$ is $O(\max(|g_1(x)|, |g_2(x)|))$.

Proof. By definition, there exist constants $C_1, C_2, k_1, k_2$ s.t. $|f_1(x)| \le C_1|g_1(x)|$ when $x > k_1$ and $|f_2(x)| \le C_2|g_2(x)|$ when $x > k_2$. Then for $x > \max(k_1, k_2)$,
$$
|(f_1 + f_2)(x)| \le |f_1(x)| + |f_2(x)| \le C_1|g_1(x)| + C_2|g_2(x)| \le (C_1 + C_2)\,g(x) = C\,g(x),
$$
where $C = C_1 + C_2$ and $g(x) = \max(|g_1(x)|, |g_2(x)|)$.

Tip: Big-O for Product of Functions

If $f_1(x)$ is $O(g_1(x))$ and $f_2(x)$ is $O(g_2(x))$, then $(f_1 f_2)(x)$ is $O(g_1(x) g_2(x))$.

Proof. When $x > \max(k_1, k_2)$,
$$
|(f_1 f_2)(x)| = |f_1(x)|\,|f_2(x)| \le C_1|g_1(x)|\,C_2|g_2(x)| = C\,|g_1(x) g_2(x)|,
$$
where $C = C_1 C_2$.

Big-Ω Notation

Definition: Big-Ω Notation

Let $f$ and $g$ be functions from the set of integers or the set of real numbers to the set of real numbers. We say that $f(x)$ is $\Omega(g(x))$ if there exist some positive constants $C$ and $k$ s.t. $|f(x)| \ge C|g(x)|$ whenever $x > k$.

Tip

Big-O gives an upper bound on the growth of a function, while Big-Ω gives a lower bound.

Big-Θ Notation

Definition: Big-Θ Notation

Two functions $f(x)$ and $g(x)$ have the same order of growth if $f(x)$ is $O(g(x))$ and $f(x)$ is $\Omega(g(x))$. In this case, we say that $f(x)$ is $\Theta(g(x))$, which is the same as $g(x)$ being $\Theta(f(x))$.

Algorithms

Definition: Algorithm

An algorithm is a finite sequence of precise instructions for performing a computation or for solving a problem.

Definition: Computational Problem

A computational problem is a specification of the desired input-output relationship.

Definition: Problem Instance

An instance of a problem is all the inputs needed to compute a solution to the problem.

A correct algorithm halts with the correct output for every input instance. We then say that the algorithm solves the problem.

Time and Space Complexity

Definition: Time Complexity and Space Complexity

The number of machine operations needed in an algorithm is the time complexity of the algorithm, and the amount of memory needed is the space complexity of the algorithm.

Example: Horner’s Algorithm

Consider the evaluation of $p(x) = a_3 x^3 + a_2 x^2 + a_1 x + a_0$. Direct computation takes 3 additions and 6 multiplications. A more optimal way is to rearrange the expression as $p(x) = ((a_3 x + a_2)x + a_1)x + a_0$, which takes 3 additions and 3 multiplications. This is the core idea of Horner's algorithm. For a polynomial of degree $n$, the number of operations needed by this algorithm is clearly $n$ additions and $n$ multiplications, so the time complexity of this algorithm is $O(n)$.
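
A minimal Python sketch of Horner's rule (the coefficient list and the sample evaluation point are my own, not from the lecture):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x with Horner's rule.

    coeffs = [a_n, a_{n-1}, ..., a_1, a_0] (highest-degree coefficient first).
    A degree-n polynomial takes exactly n multiplications and n additions,
    so the running time is O(n).
    """
    result = 0
    for a in coeffs:
        result = result * x + a  # one multiplication and one addition per coefficient
    return result

# p(x) = 2x^3 + 3x^2 + 5x + 7 evaluated at x = 4
print(horner([2, 3, 5, 7], 4))  # 203
```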

Example: Insertion Sort

Best-case

To analyze the best case, we constrain the input so that it results in the fastest possible running time for the given input size. For a sorting algorithm, the best case is when the input array is already sorted.

In this case the number of comparisons needed is $n - 1$, which is $\Theta(n)$.

Worst-case

To analyze the worst case, we constrain the input so that it results in the slowest possible running time for the given input size. For a sorting algorithm, the worst case is when the input array is already sorted in reverse order. In this case the number of comparisons needed is $1 + 2 + \cdots + (n - 1) = n(n-1)/2$, which is $\Theta(n^2)$.

Average-case

We average the running time over every possible input of the given size; this usually involves the probabilities of different types of input.

The average-case complexity of insertion sort is $\Theta(n^2)$, assuming that each of the $n!$ possible input orderings is equally likely.
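
A Python sketch of insertion sort with a comparison counter, to make the best- and worst-case counts above concrete (the counting convention and the sample inputs are mine):

```python
def insertion_sort(a):
    """Sort the list a in place and return the number of comparisons made."""
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot to the right until key fits.
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

print(insertion_sort([1, 2, 3, 4, 5]))  # already sorted: n - 1 = 4 comparisons
print(insertion_sort([5, 4, 3, 2, 1]))  # reverse sorted: n(n-1)/2 = 10 comparisons
```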

Dealing with Hard Problems

Showing that a problem has an efficient algorithm is relatively easy, since all we need to do is demonstrate such an algorithm. But showing that no efficient algorithm exists for a particular problem is rather difficult, since in that case we need to prove the non-existence of such an algorithm among the countless possible algorithms. A handy way to approach this question is through the NP-Complete problems.

The NP-Complete problems form a very large class of thousands of practical problems for which it is not known whether they have efficient solutions, or, more precisely, polynomial-time solutions. It is known that if any one of the NP-Complete problems has an efficient solution, then all of the NP-Complete problems have efficient solutions. Since innumerable man-years have been spent trying to find such efficient solutions without success, NP-Complete problems are very likely to be hard.

Encoding the Input of Problems

The complexity of a problem is measured w.r.t. the size of its input. In order to discuss formally how hard a problem is, we need to be much more precise than before about the input size of a problem.

Definition: Input Size

The input size of a problem is the minimum number of bits needed to encode the input of the problem.

But the exact input size, determined by an optimal encoding method, is hard to compute in most cases. However, we do not need to determine it exactly in most cases.

For most problems, it is sufficient to choose some natural and (usually) simple encoding and use the size of this encoding.

Input Size Example: Composite

Example: Composite Number

Given a positive integer $N$, are there integers $a, b > 1$ s.t. $N = ab$? (i.e., is $N$ a composite number?)

Solution. Any integer $N$ can be represented in the binary number system as a string of length $\lfloor \log_2 N \rfloor + 1$. Thus, a natural measure of input size is $\lfloor \log_2 N \rfloor + 1$ (or just $\log_2 N$).

Input Size Example: Sorting

Example: Sorting

Sort $n$ integers $a_1, a_2, \ldots, a_n$.

Solution. Using a fixed-length encoding, we write each integer $a_i$ as a binary string of a fixed length $k$. This encoding gives an input size of $kn$.

Complexity in terms of Input Size

Example: Composite Number

The naive algorithm for determining whether $N$ is composite checks $N$ against the first $N$ numbers, i.e., it tests whether any of $2, 3, \ldots, N - 1$ divides $N$.

At first glance, this algorithm makes about $N$ comparisons, so it might seem linear and very efficient. But note that the input size of this problem is $n \approx \log_2 N$, so the number of comparisons performed is actually about $2^n$, which is exponential in the input size.
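
A Python sketch of this naive test (the sample value $N = 91$ is my own); the comment spells out why the count is exponential in the input size:

```python
def is_composite(N):
    """Naive compositeness test: try every candidate divisor 2, ..., N - 1.

    The loop performs about N comparisons, which looks "linear", but the
    input size is n = N.bit_length() bits, so in terms of n the running
    time is roughly 2^n, i.e. exponential in the input size.
    """
    for d in range(2, N):
        if N % d == 0:
            return True
    return False

N = 91                                   # 91 = 7 * 13
print(is_composite(N), N.bit_length())   # True 7  (the input size is 7 bits)
```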

Definition: Type of Functions

Two positive functions $f(n)$ and $g(n)$ are of the same type if $c_1\, g(n)^{a_1} \le f(n) \le c_2\, g(n)^{a_2}$ for all large $n$, where $c_1, c_2, a_1, a_2$ are some positive constants.

Input Size Example: Integer Multiplication

Example: Integer Multiplication

Given two integers $p$ and $q$, compute $pq$.

The minimum input size of this problem is roughly $\lfloor \log_2 p \rfloor + \lfloor \log_2 q \rfloor + 2$. A natural choice is to use $n = \log_2 p + \log_2 q$, since the two measures are of the same type.

Complexity

Decision Problems and Optimization Problems

Example: Decision Problem

A decision problem is a question that has two possible answers: yes and no.

If $Q$ is the problem and $x$ is the input, we will often write $Q(x) = 1$ to denote a yes answer and $Q(x) = 0$ to denote a no answer.

Example: Optimization Problem

An optimization problem requires an answer that is an optimal configuration.

An optimization problem usually has a corresponding decision problem.

Example: Knapsack vs. DKnapsack

We have a knapsack of capacity $W$ (a positive integer) and $n$ objects with weights $w_1, w_2, \ldots, w_n$ and values $v_1, v_2, \ldots, v_n$, where the $w_i$ and $v_i$ are positive integers.

Then

  • Optimization problem (Knapsack):
    • Find the largest total value of any subset of the objects that fits in the knapsack, i.e., maximize $\sum_{i \in S} v_i$ over all $S \subseteq \{1, \ldots, n\}$ with $\sum_{i \in S} w_i \le W$.
  • Decision problem (DKnapsack):
    • Given $k$, is there a subset of the objects that fits in the knapsack and has total value at least $k$?

Relationship Between the Two Problems

Given a subroutine for solving the optimization problem, solving the corresponding decision problem is usually trivial: first we solve the optimization problem, then check whether the optimal value meets the bound required by the decision problem. If it does, answer yes; otherwise, answer no.

Thus, if we prove that a given decision problem is hard to solve efficiently, then the corresponding optimization problem must be at least as hard.
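
A minimal Python sketch of this "decision from optimization" idea for the knapsack pair above; the brute-force optimizer and the sample numbers are my own illustration, not part of the lecture:

```python
from itertools import combinations

def knapsack_opt(weights, values, W):
    """Optimization subroutine (brute force): the largest total value of any
    subset of the objects whose total weight is at most W."""
    n, best = len(weights), 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= W:
                best = max(best, sum(values[i] for i in subset))
    return best

def dknapsack(weights, values, W, k):
    """Decision problem: is there a subset that fits and has total value >= k?
    Answered trivially by one call to the optimization subroutine."""
    return knapsack_opt(weights, values, W) >= k

print(dknapsack([3, 4, 5], [4, 5, 6], W=7, k=9))  # True: objects 0 and 1 fit with value 9
```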

Complexity Classes

Definition: Polynomial-Time Algorithms

An algorithm is polynomial-time if its running time is $O(n^k)$, where $k$ is a constant independent of $n$, and $n$ is the input size of the problem that the algorithm solves.

Tip

Whether we use the exact minimum input size or the size of a natural encoding of the same type, it will not affect the conclusion of whether an algorithm is polynomial-time.

Definition: Nonpolynomial-Time Algorithms

An algorithm is nonpolynomial-time if its running time is not $O(n^k)$ for any fixed constant $k$.

Nonpolynomial-time algorithms are very often impractical.

Example: Polynomial-Time Solvable Problems

A problem is solvable in polynomial time if there exists an algorithm which solves the problem in polynomial time; such a problem is also called tractable.

Definition: Complexity Class $\mathcal{P}$

The class $\mathcal{P}$ consists of all decision problems that are polynomial-time solvable. That is, there exists an algorithm that will decide in polynomial time whether any given input is a yes-input or a no-input.

Definition: Certificate

A certificate is a specific object corresponding to a yes-input, such that it can be used to show that the input is indeed a yes-input.

Given a presumed yes-input and its corresponding certificate, by making use of the given certificate we can verify that the input is actually a yes-input.

Definition: Complexity Class $\mathcal{NP}$

The class $\mathcal{NP}$ consists of all decision problems s.t., for each yes-input, there exists a certificate that allows us to verify in polynomial time that the input is indeed a yes-input.

We can intuitively observe that $\mathcal{P} \subseteq \mathcal{NP}$, but whether $\mathcal{P} = \mathcal{NP}$ still remains the biggest open problem in computer science.
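
As an illustration (my own example, not from the lecture): for the composite-number problem, a certificate for a yes-input $N$ can be a nontrivial divisor, which can be checked in polynomial time:

```python
def verify_composite(N, certificate):
    """Check a claimed certificate for the composite-number problem.

    The certificate is a nontrivial divisor d of N; checking 1 < d < N and
    that d divides N takes time polynomial in the number of bits of N, so
    the composite-number problem is in NP.
    """
    d = certificate
    return 1 < d < N and N % d == 0

print(verify_composite(91, 7))   # True: 7 is a valid certificate for 91 = 7 * 13
print(verify_composite(97, 7))   # False: 97 is prime, so no certificate exists
```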

Reduction

Definition: Reduction

We say that a problem $Q_1$ can be reduced to a problem $Q_2$ if every instance of $Q_1$ can be “rephrased” as an instance of $Q_2$.

Example

Let

  • $Q_1$: multiplying two positive numbers
  • $Q_2$: adding two numbers

Then $Q_1$ can be reduced to $Q_2$ via a logarithmic transformation, since $x \cdot y = \exp(\log x + \log y)$.
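
A toy Python sketch of this reduction (the sample values are my own; floating-point rounding aside, it recovers the product from a single addition):

```python
import math

def multiply_via_add(x, y):
    """Reduce multiplication of two positive numbers to a single addition:
    x * y = exp(log x + log y)."""
    return math.exp(math.log(x) + math.log(y))

print(multiply_via_add(6.0, 7.0))  # approximately 42.0 (up to floating-point error)
```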

We say that $Q_1$ is “no harder to solve” than $Q_2$ if $Q_1$ can be reduced to $Q_2$.

Definition: Polynomial-Time Reductions

Let $Q_1$ and $Q_2$ be two decision problems. A polynomial-time reduction from $Q_1$ to $Q_2$ is a transformation $f$ with the following two properties:

  1. $f$ transforms an input $x$ for $Q_1$ into an input $f(x)$ for $Q_2$ s.t. a yes-input of $Q_1$ maps to a yes-input of $Q_2$, and a no-input of $Q_1$ maps to a no-input of $Q_2$.
  2. $f(x)$ is computable in time polynomial in the size of $x$.

If such an $f$ exists, we say that $Q_1$ is polynomial-time reducible to $Q_2$, and write $Q_1 \le_p Q_2$.

Given two decision problems $Q_1 \le_p Q_2$ and an algorithm for the decision problem $Q_2$, we can develop an algorithm to solve $Q_1$ trivially:

Theorem

If $Q_1 \le_p Q_2$ and $Q_2 \in \mathcal{P}$, then $Q_1 \in \mathcal{P}$.

Proof. $Q_2 \in \mathcal{P}$ means we have a polynomial-time algorithm $A_2$ for $Q_2$. Since $Q_1 \le_p Q_2$, we have a polynomial-time transformation $f$ mapping an input $x$ for $Q_1$ to an input $f(x)$ for $Q_2$.

Combining these, we get the following polynomial-time algorithm for solving $Q_1$:

  1. take the input $x$ for $Q_1$ and compute $f(x)$;
  2. run algorithm $A_2$ on input $f(x)$, and return the answer.

Both steps take polynomial time, so the combined algorithm takes polynomial time. Hence $Q_1 \in \mathcal{P}$.
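
A generic Python sketch of the algorithm in this proof; the toy problems in the usage lines are my own, purely for illustration:

```python
def solve_Q1(x, f, algorithm_Q2):
    """Decide Q1 by composing a polynomial-time reduction f with a
    polynomial-time algorithm for Q2: answer algorithm_Q2(f(x)).
    Both pieces run in polynomial time, so the composition does too."""
    return algorithm_Q2(f(x))

# Toy usage: Q1 = "does this decimal string encode an even number?",
# Q2 = "is this integer even?", and f = int parses the string.
is_even = lambda n: n % 2 == 0
print(solve_Q1("1234", f=int, algorithm_Q2=is_even))  # True
```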

Lemma: Transitivity of Reduction

If $Q_1 \le_p Q_2$ and $Q_2 \le_p Q_3$, then $Q_1 \le_p Q_3$.

NP-Completeness and Its Properties

Definition: Complexity Class $\mathcal{NPC}$

The class $\mathcal{NPC}$ of NP-Complete problems consists of all decision problems $Q$ s.t.

  1. $Q \in \mathcal{NP}$;
  2. $Q' \le_p Q$ for every $Q' \in \mathcal{NP}$.

Intuitively, $\mathcal{NPC}$ consists of all the hardest problems in $\mathcal{NP}$.

Theorem

Let $Q$ be an arbitrary problem in $\mathcal{NPC}$.

  1. If there is a polynomial-time algorithm for $Q$, then there is a polynomial-time algorithm for every $Q' \in \mathcal{NP}$.
  2. If there is no polynomial-time algorithm for $Q$, then there is no polynomial-time algorithm for any $Q' \in \mathcal{NPC}$.

Either all NP-Complete problems are polynomial-time solvable, or none of them is.

Proof. For 1., if $Q \in \mathcal{P}$ and $Q \in \mathcal{NPC}$, then by condition 2. of the definition of NP-Completeness every $Q' \in \mathcal{NP}$ satisfies $Q' \le_p Q$, so by the theorem above all problems in $\mathcal{NP}$ will also belong to $\mathcal{P}$.

For 2., assume for contradiction that there is no polynomial-time algorithm for $Q$, yet some problem $Q' \in \mathcal{NPC}$ has a polynomial-time algorithm. Since $Q \in \mathcal{NP}$ and $Q' \in \mathcal{NPC}$, we have $Q \le_p Q'$, i.e., $Q$ is also solvable in polynomial time. But this contradicts our assumption.
