Worst, Average and Best Case Analysis of Algorithms

In the previous post, we discussed how asymptotic analysis overcomes the problems of the naive way of analyzing algorithms. Here, let's take an overview of asymptotic notation and learn what the worst, average, and best cases of an algorithm are:

Popular Notations in Complexity Analysis of Algorithms

1. Big-O Notation

We describe an algorithm's worst-case time complexity using Big-O notation, which denotes the set of functions that grow no faster than the given expression. It therefore bounds the maximum amount of time the algorithm can require over all inputs of a given size.

2. Omega Notation

Omega notation describes the best case of an algorithm's time complexity. It denotes the set of functions that grow at least as fast as the given expression, and therefore bounds the minimum amount of time the algorithm requires over all inputs of a given size.

3. Theta Notation

Theta notation is used when a function lies in both O(expression) and Omega(expression), so it bounds the running time from above and below. In this article it is used to describe the average case of an algorithm's time complexity.

Measurement of Complexity of an Algorithm

Based on the above three notations of time complexity, there are three cases in which to analyze an algorithm:

1. Worst Case Analysis (Mostly used)  

In the worst-case analysis, we calculate the upper bound on the running time of an algorithm: we must know the case that causes the maximum number of operations to be executed. For Linear Search, the worst case happens when the element to be searched (x) is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the worst-case time complexity of linear search is O(n).

2. Best Case Analysis (Very Rarely used) 

In the best-case analysis, we calculate the lower bound on the running time of an algorithm: we must know the case that causes the minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the time complexity in the best case is Θ(1).

3. Average Case Analysis (Rarely used) 

In average case analysis, we take all possible inputs and calculate the computing time for all of the inputs. Sum all the calculated values and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases. For the linear search problem, let us assume that all cases are uniformly distributed (including the case of x not being present in the array). So we sum all the cases and divide the sum by (n+1). Following is the value of average-case time complexity.   

\[ \text{Average Case Time} = \sum_{i=1}^{n+1}\frac{\Theta(i)}{n+1} = \frac{\Theta\left(\frac{(n+1)(n+2)}{2}\right)}{n+1} = \Theta(n) \]

Which Complexity analysis is generally used?

Below, the three kinds of analysis are ranked by how often they are used in practice:

1. Worst Case Analysis: 

Most of the time, we do worst-case analysis to analyze algorithms. In worst-case analysis, we guarantee an upper bound on the running time of an algorithm, which is useful information.

2. Average Case Analysis 

The average case analysis is not easy to do in most practical cases and it is rarely done. In the average case analysis, we must know (or predict) the mathematical distribution of all possible inputs. 

3. Best Case Analysis 

Best-case analysis is of little practical use. Guaranteeing a lower bound on an algorithm provides no useful information, since in the worst case the same algorithm may still take years to run.

Interesting information about asymptotic notations:

A) For some algorithms, all the cases (worst, best, average) are asymptotically the same, i.e., there are no separate worst and best cases. Example: Merge Sort does Θ(n log n) operations in all cases.

B) Most other sorting algorithms, however, do have distinct worst and best cases. Example 1: In the typical implementation of Quick Sort (where the pivot is chosen as a corner element), the worst case occurs when the input array is already sorted and the best case occurs when the pivot always divides the array into two halves. Example 2: For Insertion Sort, the worst case occurs when the array is reverse sorted and the best case occurs when the array is already sorted in the same order as the output.

Examples with their complexity analysis:

1. Linear Search Algorithm:
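
The article's original implementation is not reproduced here; a minimal Python sketch of linear search (the function name linear_search is ours) could look like this:

```python
def linear_search(arr, x):
    """Return the index of x in arr, or -1 if x is not present."""
    for i in range(len(arr)):
        if arr[i] == x:
            return i    # best case: x is at index 0, one comparison
    return -1           # worst case: x is absent, n comparisons
```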

Time Complexity Analysis: (In Big-O notation)

  • Best Case: O(1). This occurs when the element to be searched is at the first index of the given list, so only one comparison is needed.
  • Average Case: O(n). Averaged over all positions (for example, when the element is around the middle of the list), a number of comparisons proportional to n is needed.
  • Worst Case: O(n). This occurs when the element to be searched is at the last index, or is not present in the list at all.

2. In this example, we will take an array of length (n) and deal with the following cases:

  • If (n) is even then our output will be 0
  • If (n) is odd then our output will be the sum of the elements of the array.

Below is the implementation of the given problem:
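
The original implementation is not reproduced here; a minimal Python sketch of the described behaviour (the function name even_odd_sum is ours) might be:

```python
def even_odd_sum(arr):
    """Return 0 if the array length is even, otherwise the sum of its elements."""
    n = len(arr)
    if n % 2 == 0:
        return 0                   # even length: a constant amount of work
    total = 0
    for value in arr:              # odd length: one pass over the whole array
        total += value
    return total

print(even_odd_sum([1, 2, 3]))     # odd length  -> 6
print(even_odd_sum([1, 2, 3, 4]))  # even length -> 0
```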

Time Complexity Analysis:

  • Best Case: The order of growth will be constant, because in the best case we assume that (n) is even.
  • Average Case: If we assume that even and odd values of (n) are equally likely, the order of growth will be linear.
  • Worst Case: The order of growth will be linear, because in this case we assume that (n) is always odd.

For more details, please refer: Design and Analysis of Algorithms .

Worst, Average, and Best Case Analysis of Algorithms is a technique used to analyze the performance of algorithms under different conditions. Here are some advantages, disadvantages, important points, and reference books related to this analysis technique:

Advantages:

  • This technique allows developers to understand the performance of algorithms under different scenarios, which can help in making informed decisions about which algorithm to use for a specific task.
  • Worst case analysis provides a guarantee on the upper bound of the running time of an algorithm, which can help in designing reliable and efficient algorithms.
  • Average case analysis provides a more realistic estimate of the running time of an algorithm, which can be useful in real-world scenarios.

Disadvantages:

  • This technique can be time-consuming and requires a good understanding of the algorithm being analyzed.
  • Worst case analysis does not provide any information about the typical running time of an algorithm, which can be a disadvantage in real-world scenarios.
  • Average case analysis requires knowledge of the probability distribution of input data, which may not always be available.

Important points:

  • The worst case analysis of an algorithm provides an upper bound on the running time of the algorithm for any input size.
  • The average case analysis of an algorithm provides an estimate of the running time of the algorithm for a random input.
  • The best case analysis of an algorithm provides a lower bound on the running time of the algorithm for any input size.
  • The big O notation is commonly used to express the worst case running time of an algorithm.
  • Different algorithms may have different best, average, and worst case running times.

Reference books:

  • “Introduction to Algorithms” by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein is a comprehensive guide to algorithm analysis, including worst, average, and best case analysis.
  • “Algorithm Design” by Jon Kleinberg and Éva Tardos provides a modern approach to algorithm design, with a focus on worst case analysis.
  • “The Art of Computer Programming” by Donald Knuth is a classic text on algorithms and programming, which includes a detailed discussion of worst case analysis.
  • “Algorithms Unlocked” by Thomas H. Cormen provides a gentle introduction to algorithm analysis, including worst, average, and best case analysis.


Best, Average and Worst case Analysis of Algorithms


Before reading this article, I strongly recommend that you visit the following article first.

  • Running Time, Growth of Functions and Asymptotic Notations

Introduction

We all know that the running time of an algorithm increases (or remains constant, in the case of constant running time) as the input size ($n$) increases. Sometimes, even if the size of the input is the same, the running time varies among different instances of the input. In that case, we perform best, average and worst-case analysis. The best case gives the minimum time, the worst case gives the maximum time, and the average case gives the time required on average to execute the algorithm. I will explain all these concepts with the help of two examples: (i) Linear Search and (ii) Insertion Sort.

Best Case Analysis

Consider the example of Linear Search where we search for an item in an array. If the item is in the array, we return the corresponding index, otherwise, we return -1. The code for linear search is given below.
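
The original code is not reproduced here; a Python sketch matching the description below (array a, size n, search key item; the function name search is ours) might be:

```python
def search(a, n, item):
    """Return the index of item in array a of size n, or -1 if it is absent."""
    for i in range(n):
        if a[i] == item:
            return i   # return the index as soon as the item is found
    return -1          # item is not in the array
```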

Variable a is an array, n is the size of the array, and item is the item we are looking for in the array. When the item we are looking for is in the very first position of the array, the function returns the index immediately: the for loop runs only once. So the complexity in this case is $O(1)$. This is called the best case.

Consider another example of insertion sort. Insertion sort sorts the items in the input array in an ascending (or descending) order. It maintains the sorted and un-sorted parts in an array. It takes the items from the un-sorted part and inserts into the sorted part in its appropriate position. The figure below shows one snapshot of the insertion operation.

In the figure, the items [1, 4, 7, 11, 53] are already sorted and now we want to place 33 in its appropriate place. The item to be inserted is compared with the items from right to left, one by one, until we find an item that is smaller than the item we are trying to insert. We compare 33 with 53; since 53 is bigger, we move one position to the left and compare 33 with 11. Since 11 is smaller than 33, we place 33 just after 11 and move 53 one step to the right. Here we did 2 comparisons. If the item were 55 instead of 33, we would have performed only one comparison. That means that if the array is already sorted, only one comparison is necessary to place each item in its appropriate place, and one scan of the array sorts it. The code for the insertion operation is given below.
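
The original listing is not reproduced here; a Python sketch of insertion sort, with the insertion operation as its inner while loop, might be:

```python
def insertion_sort(a):
    """Sort the list a in ascending order, in place."""
    for j in range(1, len(a)):
        key = a[j]                      # next item from the un-sorted part
        i = j - 1
        # compare from right to left until an item smaller than key is found
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]             # shift the larger item one step right
            i -= 1
        a[i + 1] = key                  # place key in its appropriate position

items = [1, 4, 7, 11, 53, 33]
insertion_sort(items)
print(items)                            # [1, 4, 7, 11, 33, 53]
```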

When the items are already sorted, the while loop test executes only once for each item. There are n items in total, so the running time is $O(n)$. So the best-case running time of insertion sort is $O(n)$.

The best case gives us a lower bound on the running time for any input. If the best case of the algorithm is $O(n)$, then we know that on any input the program needs at least time proportional to $n$ to run. In reality, we rarely need the best case for our algorithm; we never design an algorithm based on the best-case scenario.

Worst Case Analysis

In real life, most of the time we do the worst case analysis of an algorithm. Worst case running time is the longest running time for any input of size $n$.

In linear search, the worst case happens when the item we are searching for is in the last position of the array, or when it is not in the array at all. In both cases, we need to go through all $n$ items in the array, so the worst-case runtime is $O(n)$. Worst-case performance is more important than best-case performance for linear search, for the following reasons.

  • The item we are searching for is rarely in the first position. If the array has 1000 items, numbered 1 to 1000, and we search for one of them at random, there is only a 1-in-1000 (0.1%) chance that the item will be in the first position.
  • Most of the time, the item we are searching for is not in the array (or database in general). In that case, the search takes the worst-case running time.

Similarly, in insertion sort, the worst case scenario occurs when the items are reverse sorted. The number of comparisons in the worst case will be in the order of $n^2$ and hence the running time is $O(n^2)$.

Knowing the worst-case performance of an algorithm provides a guarantee that the algorithm will never take longer than that.

Average Case Analysis

Sometimes we do an average-case analysis of an algorithm. Most of the time, the average case is roughly as bad as the worst case. In the case of insertion sort, when we try to insert a new item into its appropriate position, we compare the new item with about half of the sorted items on average. The complexity is still on the order of $n^2$, which is the worst-case running time.

It is usually harder to analyze the average behavior of an algorithm than to analyze its behavior in the worst case. This is because it may not be apparent what constitutes an "average" input for a particular problem. A useful analysis of the average behavior of an algorithm therefore requires prior knowledge of the distribution of the input instances, which is often an unrealistic requirement. Therefore, we often assume that all inputs of a given size are equally likely and perform a probabilistic analysis for the average case.

Average Case Analysis

Steven J. Zeil

1 Introduction

Earlier, we looked at the process of analyzing the worst-case running time of an algorithm, and the use of worst-case analysis and big-O notation as a way of describing it.

In this section, we will introduce average-case complexity. Just as the worst-case complexity describes an upper bound on the worst-case time we would see when running an algorithm, average case complexity will present an upper bound on the average time we would see when running the program many times on many different inputs.

Worst-case complexity gets used more often than average-case. There are a number of reasons for this.

The worst-case complexity is often easier to compute than the average case. Just figuring out what an "average" set of inputs will look like is often a challenge. To figure out the worst-case complexity, we only need to identify the one single input that results in the slowest running time.

In many cases, the worst-case and average-case complexities will turn out to be the same.

Finally, reporting the worst case to your boss or your customers is often “safer” than reporting the average.

If you give them the average, then sometimes they will run the program and see slower performance than they had expected. Human nature being what it is, they will probably get rather annoyed. On the other hand, if you go to those same customers with the worst-case figure, most of the time they will observe faster-than-expected behavior, and will be more pleased.

This appears to be particularly true of interactive programs.

When people are actually sitting there typing things in or clicking with the mouse and then waiting for a response, if they have to sit for a long time waiting for a response, they’re going to remember that. Even if 99.9% of the time they get instant response (so the average response is still quite good), they will characterize your program as “sluggish”.

In those circumstances, it makes sense to focus on worst-case behavior and to do what we can to improve that worst case.

On the other hand, suppose we’re talking about a batch program that will process thousands of inputs per run, or we’re talking about a critical piece of an interactive program that gets run hundreds or thousands of times in between each response from the user.

In that situation, adding up hundreds or thousands of worst-cases may be just too pessimistic. The cumulative time of thousands of different runs should show some averaging out of the worst-case behavior, and an average case analysis may give a more realistic picture of what the user will be seeing.

2 Definition

Definition: Average Case Complexity. We say that an algorithm requires average time proportional to $f(n)$ (or that it has average-case complexity $O(f(n))$) if there are constants $c$ and $n_0$ such that the average time the algorithm requires to process an input set of size $n$ is no more than $c*f(n)$ time units whenever $n \geq n_0$.

This definition is very similar to the one for worst-case complexity. The difference is that for worst-case complexity we want $T_{\max}(n) \leq c*f(n)$, where $T_{\max}(n)$ is the maximum time taken by any input of size $n$, but for average-case complexity we want $T_{\mathrm{avg}}(n) \leq c*f(n)$, where $T_{\mathrm{avg}}(n)$ is the average time required by inputs of size $n$.

The average case complexity describes how quickly the average time increases when n increases, just as the worst case complexity describes how quickly the worst case time increases when n increases.

In both forms of complexity, we are looking for upper bounds, so our big-O notation (and its peculiar algebra) will still apply.

Question: Suppose we have an algorithm with worst case complexity $O(n)$.

True or false: It is possible for that algorithm to have average-case complexity $O(n^2)$.

False (sort of): If it were true, the average case time would be increasing faster than the worst case time. Eventually, that means that the average case time for some large $n$ would be larger than the worst case among all input sets of size $n$. But an average can never be larger than the maximum value in the set being averaged.

So the only way this statement could be true is in the trivial sense that any algorithm in $O(n)$ is also in $O(n^2)$. (Remember, the big-O expressions describe sets of functions. The set of all functions that can be bounded by a linear function is a subset of the set of all functions that can be bounded by a quadratic: $O(n) \subset O(n^2)$. )

This is an important idea to keep in mind as we discuss the rules for average case analysis. The worst-case analysis rules all apply, because they do provide an upper bound. But that bound is sometimes not as tight as it could be. One of the things that makes average case analysis tricky is recognizing when we can get by with the worst-case rules and when we can gain by using a more elaborate average-case rule. There's no hard-and-fast way to tell; it requires the same kind of personal judgment that goes on in most mathematical proofs.

3 Probably Not a Problem

We’re going to need a few basic facts about probabilities for this section.

Every probability is between $0$ and $1$, inclusive.

  • For example, the probability of a flipped coin coming up heads is $0.5$.
  • The probability of a six-sided die rolling an even number is $0.5$.

The sum of the probabilities of all possible events must be $1.0$.

For example, the probability of an ordinary six-sided die rolling some number between 1 and 6 is $1.0$.

If $p$ is the probability that an event will happen, then $(1-p)$ is the probability that the event will not happen.

The probability of a 6-sided die rolling any number other than ‘3’ is $1 - 1/6 = 5/6$.

If two events with probabilities $p_1$ and $p_2$ are independent (the success or failure of the first event does not affect the success or failure of the other), then

  • the probability that both events will occur is $p_1 * p_2$.
  • the probability that neither will occur is $(1 - p_1)(1 - p_2)$.
  • the probability that at least one will occur is $1 - (1-p_1)(1-p_2)$.

For example, if I roll two 6-sided dice, the probability that both will roll a ‘3’ is $\frac{1}{6} * \frac{1}{6} = \frac{1}{36}$.

If I roll two six-sided dice, the probability that at least one will come up ‘3’ is

\[ 1 - \left(1-\frac{1}{6}\right)\left(1-\frac{1}{6}\right) = 1 - \left(\frac{5}{6}\right)\left(\frac{5}{6}\right) = 1 - \frac{25}{36} = \frac{11}{36} \]

More generally, if I have a series of independent events with probabilities $p_1, p_2, \ldots p_k$, then

  • the probability that all events will occur is $\prod_{i=1}^{k} p_i$.
  • the probability that none of them will occur is $\prod_{i=1}^{k}(1 - p_i)$.
  • the probability that at least one will occur is $1 - \prod_{i=1}^{k}(1-p_i)$.

If I flip a coin 3 times, the probability that I will see the sequence heads-tails-heads is $\frac{1}{2} * \frac{1}{2} * \frac{1}{2} = \frac{1}{8}$

  • Note that the probability of seeing the sequence heads-heads-heads is exactly the same.

If we make a number of independent attempts at something, and the chance of seeing an event on any single attempt is $p$, on average we will need $1/p$ attempts before seeing that event.

For example, if I start rolling a six-sided die until I see a ‘2’, on average I would wait $\frac{1}{1/6} = 6$ rolls.

This is a great illustration of the difference between average and worst-case times, because in the worst case you would keep rolling forever!

You can find a fuller tutorial here .

4 What’s an Average?

For some people, average case analysis is difficult because they don't have a very flexible idea of what an "average" is.

Example: Last semester, Professor Cord gave out the following grades in his CS361 class: A, A, A-, B+, B, B, B, B-, C+, C, C-, D, F, F, F, F. Translating these to their numerical equivalents (4, 4, 3.7, 3.3, 3, 3, 3, 2.7, 2.3, 2, 1.7, 1, 0, 0, 0, 0), what was the average grade in Cord's class?

According to some classic forms of average:

\[ \mbox{avg}_{\mbox{median}} = 2.5 \]

\[ \mbox{avg}_{\mbox{modal}} = 0 \]

\[ \begin{align} \mbox{avg}_{\mbox{mean}} &= (4 + 4 + 3.7 + 3.3 + 3 + 3 + 3 + 2.7 + 2.3 + 2 + 1.7 + 1 + 0 + 0 + 0 + 0) / 16 \\ &= 2.11 \end{align} \]

4.1 The Mean Average

The mean average is the most commonly used, and corresponds to most people's idea of a "normal" average, but even that comes in many varieties. The most general form is the weighted mean

\[ \mbox{avg}_{\mbox{weighted}} = \frac{\sum_i w_i x_i}{\sum_i w_i} \]

where the $w_i$ are the weights that adjust the relative importance of the scores $x_i$.

Example: Last semester Professor Cord gave the following grades:

Grade   # students
4.0     2
3.7     1
3.3     1
3.0     3
2.7     1
2.3     1
2.0     1
1.7     1
1.3     0
1.0     1
0.0     4

The weighted average is $\frac{2*4.0 + 1*3.7 + 1*3.3 + 3*3.0 + 1*2.7 + 1*2.3 + 1*2.0 + 1*1.7 + 0*1.3 + 1*1.0 + 4*0.0}{2 + 1 + 1 + 3 + 1 + 1 + 1 + 1 + 0 + 1 + 4} = 2.11$

Another example of weighted averages:

When one student asked about his overall grade for the semester, Professor Cord pointed out that assignments were worth 50% of the grade, the final exam was worth 30%, and the midterm exam worth 20%. The student has a B, A, and C-, respectively, on these:

Category      Score   Weight
Assignments   3.0     50
Final         4.0     30
Midterm       1.7     20

So the student's average grade was $$\frac{50*3.0 + 30*4.0 + 20*1.7}{50+30+20} = 3.04$$

4.2 Expected Value

The expected value is a special version of the weighted mean in which the weights are the probability of seeing each particular value.

If $x_1, x_2, \ldots ,$ are all the possible values of some quantity, and these values occur with probability $p_1, p_2, \ldots ,$, then the expected value of that quantity is

\[ E(x) = \sum_{i=1}^N p_i * x_i \]

Note that if we have listed all possible values, then

\[ \sum_{i=1}^N p_i = 1 \]

so you can regard the $E(x)$ formula above as a special case of the weighted average in which the denominator (the sum of the weights) becomes simply “1”.

Example: After long observation, we have determined that Professor Cord tends to give grades with the following distribution:

Grade   Probability
4.0     2/16
3.7     1/16
3.3     1/16
3.0     3/16
2.7     1/16
2.3     1/16
2.0     1/16
1.7     1/16
1.3     0/16
1.0     1/16
0.0     4/16

So the expected value of the grade for an average student in his class is $$\begin{align} &((2/16)*4.0 + (1/16)*3.7 + (1/16)*3.3 + (3/16)*3.0 \\ &+ (1/16)*2.7 + (1/16)*2.3 + (1/16)*2.0 + (1/16)*1.7 + (0/16)*1.3 \\ &+ (1/16)*1.0 + (4/16)*0.0) \\ &= 2.11 \end{align}$$

The expected value is the kind of average we will use throughout this course in discussing average-case complexity.

5 Determining the Average Case Complexity

In many ways, determining average case complexity is similar to determining the worst-case complexity.

5.1 Why Might Average-Case Be Smaller than Worst-Case?

It boils down to three possible reasons:

  • Your code calls another function whose average-case complexity is smaller than its worst case.
  • You have a loop (or recursion) that, on average, does not repeat as often as it would in the worst case.
  • You have an if statement with different complexities for its then and else parts, that if statement is inside a loop (or recursion), and, on average, it takes the cheaper option more often than it would in the worst case.

5.2 It Still All Boils Down to Addition

If you want to know how much time a complicated process takes, you figure that out by adding up the times of its various components.

That basic observation is the same for average times as it is for worst-case times.

When in doubt, just add things up. Just keep in mind that you want to add up their average times instead of their worst-case times.

5.3 All of Your Variables Must be Defined

In math, as in programming, all of your variables must be declared/defined before you can use them.

5.4 Complexity is Written in Terms of the Inputs

The complexity of a block of code must be a function of the inputs (only!) to that block.

5.5 The Complexity of Any Block of Code Must be Numeric

No reason for this to change.

5.6 Surprises Demand Explanation

The vast majority of algorithms we will look at will have the same average-case complexity as worst-case. So, if you come up with a different value for the average-case, make sure that you understand why.

By the same token, though, if you make it to the end of an average-case analysis and never once took the time to even consider how the “average input” is different from the “worst-case input”, you may need to start over.

6 Why Would Average-Case Complexity Be Different from Worst-Case?

There are basically only three reasons why a piece of code would run in an average case complexity faster than its worst case:

  • The code calls some function that is known to have a faster average case than worst case.
  • The code contains a conditional statement that chooses between two alternatives, one of which is faster than the other, and that, on average, the faster choice is taken more often.
  • The code contains a loop (or recursive call) that, on average, repeats far fewer times than it does in the worst case.

That’s pretty much it.

  • Of course, the first of these just kind of "passes the buck" to the function being called. How is it that this function has a faster average case than its worst case? Well, the body of that function must exhibit one of the three properties we have listed above. That means that, eventually, we would need to identify an instance of reasons 2 or 3.

For example, if calling some function add costs $O(N)$ time with probability $\frac{1}{N}$ and $O(1)$ time with probability $\frac{N-1}{N}$, then its expected time is

\[ E(t_{\mbox{add}}(N)) = \frac{1}{N} O(N) + \frac{N-1}{N} O(1) = O\left( \frac{N}{N} + \frac{N-1}{N} \right) = O(1). \]

The third possible reason, early exit from a loop or recursion, could stand a little deeper exploration.

6.1 Exiting Early from a Loop

The total time for a loop is found by adding up the times of all of its iterations. But, what do we do if the number of iterations can vary depending on subtle properties of the input?

If we can say that each iteration of the loop runs in time $O_{\mbox{iteration}}(f(N))$, i.e., the time of an iteration does not depend on which iteration we are in, then we can write

\[ T_{\mbox{loop}} = \sum_{i=1}^k O_{\mbox{iteration}}(f(N)) \]

where $k$ is the number of times that the loop repeats. Now, if we are doing worst case analysis, we figure out what the maximum value of $k$ would be. But if we are doing average case analysis, we might ask if the average (or expected) value of $k$ is significantly less than that maximum.

6.1.1 Example 1: Ordered Search

Consider this code for searching an ordered (sorted) array of integers. We will assume that both the numbers inserted into the array and the values we use when searching are drawn randomly from the integers in some range $0 \ldots M$.
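
The original code is not shown here; a Python sketch of the ordered search being described (stopping at the first value greater than or equal to the key) might be:

```python
def ordered_search(array, key):
    """Return True if key is in the sorted array, stopping early when possible."""
    i = 0
    while i < len(array) and array[i] < key:   # stop at the first value >= key
        i += 1                                 # each iteration does O(1) work
    return i < len(array) and array[i] == key
```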

This search takes advantage of the fact that the array is ordered by stopping the loop as soon as we get to an array value larger than or equal to the key. For example, if we were searching for 97, we would stop as soon as we reached a position i with array[i] > 97, because if we have not found 97 by then, there is no point in looking through the even larger numbers in the rest of the array.

So if we let $k$ denote the number of iterations of this loop,

In the worst case, we go through the entire array, so $k == $ array.length .

The loop does $O(1)$ work per iteration, so the worst case complexity of the loop, and of the entire function, is O(array.length) .

It's pretty clear that, if we are looking for numbers that are actually in the array, we are equally likely to find them at any position. So, on average, we would only examine half of the array elements before finding the one we wanted.

If we are looking for data that is not in the array, our assumption that both the array's elements and the values being searched for were drawn randomly from a common range of values means that the value we are looking for is equally likely to fall into any of the "gaps" between array values (or into the gaps at either end). So again, it seems that we will, on average, only need to look through half the array before hitting a value larger than the one we are looking for.

Either way, it appears that E(k) = array.length/2 .

The loop does $O(1)$ work per iteration, and we do array.length/2 iterations on average, so the average case complexity of the loop, and of the entire function, is O(array.length/2) .

However, that simplifies to O(array.length) .

So, in this case, even though the function is faster on average, the speedup only affects the constant multiplier. It isn’t faster in a big-O sense.

6.1.2 Example 2: Simulating a Rolling Die

Java has a useful class for generating pseudo-random numbers.

The distinction between a “pseudorandom” and true “random” integer is not particularly important to us.

The nextInt(bound) function returns a random integer in the range 0...bound-1 . It does this in O(1) time.

We can simulate the roll of a six-sided die by taking nextInt(6) + 1. This gives us a uniform random selection in the range 1..6.

Consider the following code:
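
The original snippet uses Java's Random class; a Python sketch of the same experiment (with random.randint(1, 6) playing the role of nextInt(6) + 1) is:

```python
import random

count = 1                        # the first two lines: one roll, start counting
roll = random.randint(1, 6)      # uniform value in 1..6

while roll != 2:                 # keep rolling until a '2' comes up
    roll = random.randint(1, 6)
    count += 1

print(count)                     # number of rolls needed to see a '2'
```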

All of the operations in the first two lines and in the loop body are $O(1)$.

The condition of the while loop is also $O(1)$.

Things get tricky when we ask the question “How many times does this loop repeat?”

How many times, in the worst case, can we roll a die until a ‘2’ comes up? In the worst case, there is no limit to the number of times we might roll.

And we conclude that the loop, and the entire block of code, is $O(\infty)$.

Now the fun part: How many times, on average , will this loop repeat? We can compute the expected value of the number of repetitions. How many times, on average, would we roll a die until a ‘2’ comes up?

  • On any given roll, the probability of seeing a ‘2’ is $1/6$.
  • And if we are looking at a string of independent events with probability $p$, on average we wait $1/p$ times to see that event.

So the average number of times we would repeat this loop is $\frac{1}{1/6}$ or 6 times.

Each iteration of the loop is $O(1)$ time for both the condition and the body, so we conclude that the loop average is $6*O(1)$ time, which simplifies to $O(1)$

And the entire block of code has an average-case complexity of $O(1)$.


7 Extended Example: Ordered Insertion, Different Input Distributions

We’ll illustrate the process of doing average-case analysis by looking at a simple but useful algorithm, exploring how changes in the input distribution (the probabilities of seeing various possible inputs) affect the average case behavior.

Here is our ordered insertion algorithm .
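
The iterator-based listing is not reproduced here; a Python sketch of the same idea, scanning from the right and shifting larger elements (with len(data) playing the role of distance(start, stop)), might be:

```python
def ordered_insert(data, value):
    """Insert value into the already-sorted list data, keeping it sorted."""
    data.append(value)                  # make room at the end
    i = len(data) - 1
    while i > 0 and data[i - 1] > value:
        data[i] = data[i - 1]           # shift the larger element one slot right
        i -= 1                          # each iteration does O(1) work
    data[i] = value                     # drop value into its final position
```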

We will, as always, assume that the basic iterator operations are $O(1)$. For the sake of this example, we will also assume that the operations on the Comparable type are $O(1)$. (We’ll discuss the practical implications of this at the end.)

We start, as usual, by marking the simple bits O(1).

Next we note that the loop body can be collapsed to O(1).

The loop condition is O(1):

Because the loop condition and body are $O(1)$, we can use the shortcut of simply analyzing this loop on the expected number of iterations.

That, however, depends on the values already in the container, and how the new value compares to them.

The loop might execute

  • 0 times (if value is larger than anything already in the container),
  • 1 time (if value is larger than all but one element already in the container),
  • and so on, up to a maximum of distance(start,stop) times (if value is smaller than everything already in the container).

What we don’t know are the probabilities to associate with these different numbers of iterations.

  • These depend upon the way the successive inputs to this function are distributed.

Consider using this algorithm as part of a spell checking program. We can envision two very different input patterns:

If we are reading words from the spell check dictionary into an array for later search purposes, those words are most likely already sorted. There may be a few exceptions, e.g., words that the user has added to their personal dictionary. For example, I often find it useful to add my last name to my spellcheck dictionaries.

If we are building a concordance, a list of words actually appearing in the document, then we would receive the words in whatever order they appear in the document text, essentially a random order.

Let’s analyze each of these cases in turn and see how they might differ in performance.

7.1 Input in Sorted Order

If the input arrives in sorted order, then each call to the function will execute the loop zero times, because the word being inserted will always be alphabetically greater than all the words already in the container.

So if $p_k$ denotes the probability of executing the loop $k$ times, then \( p_0=1, p_1 = 0, p_2 = 0, \ldots \) .

So the time is

\[ \begin{align} t_{\mbox{loop}} & = t_L(0) \\ & = O(1) \end{align} \]

For this input pattern, the entire algorithm has an average-case complexity of $O(1)$.

7.2 Input in Arbitrary Order:

In this case, we are equally likely to need 0 iterations, 1 iteration, 2 iterations , … , n iterations, where $n$ is distance(start,stop) . So the possible numbers of iterations from 0 to $n$ are all equally likely:

\[p_k = \left\{ \begin{array}{ll}\frac{1}{n+1} & \mbox{if } 0 \leq k \leq n \\ 0 & \mbox{otherwise}\end{array}\right. \]

The cost of the loop condition and of the body is constant for each iteration, however, so we can use the special case

\[ t_{\mbox{loop}} = t_L(E(k)) \]

where $E(k)$ is the expected number of iterations of the loop.

What is $E(k)$?

Intuitively, if we are equally likely to repeat the loop 0 times, 1 time, 2 times, … , $n$ times, the average number of iterations would seem to be $n/2$.

\[ \begin{eqnarray*} E(k) & = & \sum_{k=0}^{\infty} p_k k \\ & = & \sum_{k=0}^{\mbox{n}} p_k k \; \; (\mbox{because } p_k=0 \mbox{ when } k > \mbox{n})\\ & = & \sum_{k=0}^{\mbox{n}} \frac{1}{\mbox{n}+1} k \\ & = & \frac{1}{\mbox{n}+1} \sum_{k=0}^{\mbox{n}} k \\ & = & \frac{1}{\mbox{n}+1} \frac{\mbox{n}(\mbox{n}+1)}{2} \\ & = & \frac{\mbox{n}}{2} \\ \end{eqnarray*} \]

Chalk one up for intuition!

So the loop is $\frac{n}{2} O(1) = O(n)$

And we can then replace the entire loop by $O(\mbox{n})$.

And now, we add up the complexities in the remaining straight-line sequence, and conclude that the entire algorithm has an average case complexity of $O(n)$, where $n$ is the distance from start to stop , when presented with randomly arranged inputs.

This is the same result we had for the worst case analysis. Does this mean that it runs in the same time on average as it does in the worst case? No, on average, it runs in half the time of its worst case, but that’s only a constant multiplier, so it disappears when we simplify.

Under similar randomly arranged inputs, the average case complexity of ordered search is $O(n)$ and the average case complexity of binary search is $O(\log n)$. Again, these are the same as their worst-case complexities.

7.3 Inputs in Almost-Sorted Order

We've already considered the case where the inputs to this function were already arranged into ascending order. What would happen if the inputs were almost, but not exactly, already sorted into ascending order?

For example, suppose that, on average, one out of $n$ items is out of order. Then the probability of a given input repeating the loop zero times would be $p_0 = \frac{n-1}{n}$, some single $p_i$ would equal $1/n$, and all the other probabilities would be zero.

Assuming the worst (because we want to find an upper bound), let’s assume that the one out-of-order element is the very last one added, and that it actually gets inserted into position 0. Then we have $p_0 = (n-1)/n, p_1 = 0, p_2 = 0, … , p_{n-1} = 0, p_n = 1/n$

So the average number of iterations would be given by \begin{eqnarray*} E(k) & = & \sum_{k=0}^{n} k p_k \\ & = & 0 * (n-1)/n + n * 1/n \\ & = & n/n \\ & = & 1 \\ \end{eqnarray*} and the function is $O(E(k)) = O(1)$

7.4 Almost Sorted - version 2

Now, that’s only one possible scenario in which the inputs are almost sorted. Let’s look at another. Suppose that we knew that, for each successive input, the probability of it appearing in the input $m$ steps out of its correct position is proportional to $1/(m+1)$ (i.e., each additional step out of its correct position is progressively more unlikely). Then we have $p_{0}=c, p_{1}=c/2, p_{2}=c/3, … p_{n-1}=c/n, p_{n}=c/(n+1)$.

The constant $c$ is necessary because the sum of all the probabilities must be exactly 1. We can compute the value of $c$ by using that fact:

\begin{align} \sum_{i=0}^{n} p_i = & 1 \\ \sum_{i=0}^{n} \frac{c}{i+1} = & 1 \\ c \sum_{i=0}^{n} \frac{1}{i+1} = & 1 \end{align}

This sum, for reasonably large n, is approximately $\log n$.

So we conclude that $c$ is approximately $= 1/\log(n)$.

So the function, for this input distribution, is

\begin{align} t_{\mbox{loop}} = & O(E(k)) \\ = & O\left(\sum_{i=0}^n (i + 1)p_i\right) \\ = & O\left(\sum_{i=0}^n (i + 1) \frac{c}{i+1}\right) \\ = & O\left(\sum_{i=0}^n c\right) \\ = & O((n+1)c) \\ = & O\left(\frac{n}{\log n}\right) \end{align}

So the average case is slightly smaller than the worst case, though not by much (remember that $\log n$ is nearly constant over large ranges of $n$, so $n/\log(n)$ grows only slightly slower than $n$).

8 The Input Distribution is Key

You can see, then, that average case complexity can vary considerably depending upon just what constitutes an “average” set of inputs.

Utility functions that get used in many different programs may see different input distributions in each program, and so their average performances in different programs will vary accordingly.

19.2 Average-Case Running Time of Linear Search

In this section, we’ll learn how to perform an average-case running time analysis . Recall the linear search algorithm, which searches for an item in a list by checking each list element one at a time.
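
A sketch of that function, consistent with the description used in this section (the exact listing from the original is not reproduced here), is:

```python
def search(lst: list, x: object) -> bool:
    """Return True if x is in lst, and False otherwise (linear search)."""
    for item in lst:
        if item == x:
            return True   # early return as soon as x is found
    return False          # reached only when x is not in lst
```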

We’ve previously seen that the worst-case running time of this function is \(\Theta(n)\) , where \(n\) is the length of the list. What can we say about its average-case running time?

Well, for one thing, we need to precisely define what we mean by “all possible inputs of length \(n\) ” for this function. Because we don’t have any restrictions on the elements stored in the input list, it seems like there could be an infinite number of lists of length \(n\) to choose from, and we cannot take an average of an infinite set of numbers!

In an average-case running-time analysis, dealing with an infinite set of inputs is a common problem. To resolve it, we choose a particular set of allowable inputs, and then compute the average running time for that set. The rest of this section will be divided into two example analyses of search , using two different input sets.

A first example

Let \(n \in \N\) . We’ll choose our input set to be the set of inputs where:

  • lst is always the list [0, 1, 2, ... n - 1]
  • x is an integer between 0 and n - 1 , inclusive.

In other words, we’ll consider always searching in the same list ( [0, 1, ..., n - 1] ), and search for one of the elements in the list. For convenience, we’ll use \(\cI_n\) to denote this set, rather than \(\cI_{\texttt{search}, n}\) . Now let’s see how to analyse the average-case running time for this input list.

Average-case running time analysis. For this definition of \(\cI_n\) , we know that \(|\cI_n| = n\) , since there are \(n\) different choices for x (and just one choice for lst ). From our definition of average-case running time, we have

\[ \mathit{Avg}_{\texttt{search}}(n) = \frac{1}{n} \sum_{\texttt{(lst, x)} \in \cI_n} \text{running time of } \texttt{search(lst, x)} \]

To calculate the sum, we need to compute the running time of search(lst, x) for every possible input. Let \(x \in \{0, 1, 2, \dots, n - 1\}\) . We’ll calculate the running time in terms of \(x\) .

Since lst = [0, 1, 2, ... n - 1] , we know that there will be exactly \(x + 1\) loop iterations until \(x\) is found in the list, at which point the early return will occur and the loop will stop. Each loop iteration takes constant time (1 step), for a total of \(x + 1\) steps.

So the running time of search(lst, x) equals \(x + 1\) , and we can use this to calculate the average-case running time:

\[\begin{align*} Avg_{\texttt{search}}(n) &= \frac{1}{n} \times \sum_{\texttt{(lst, x)} \in \cI_n} \text{running time of } \texttt{search(lst, x)} \\ &= \frac{1}{n} \times \sum_{x=0}^{n - 1} (x + 1) \\ &= \frac{1}{n} \times \sum_{x'=1}^{n} x' \tag{$x' = x + 1$} \\ &= \frac{1}{n} \times \frac{n(n+1)}{2} \\ &= \frac{n + 1}{2} \end{align*}\]

And so the average-case running time of search on this set of inputs is \(\frac{n + 1}{2}\) , which is \(\Theta(n)\) . Notice that we do not need to compute an upper and lower bound separately, since in this case we have computed an exact average.

At the end of this analysis, it was tempting for us to say that the average-case running time of search is \(\Theta(n)\) , but keep in mind that our analysis depended on the choice of allowable inputs. For this specific choice of inputs, the average running time was \(\Theta(n)\) .

Like worst-case and best-case running times, the average-case running time is a function which relates input size to some measure of program efficiency. In this particular example, we found that for the given set of inputs \(\cI_n\) for each \(n\) , the average-case running time is asymptotically equal to that of the worst-case.

This might sound a little disappointing, but keep in mind the positive information this tells us: the worst-case input family here is not so different from the average case, i.e., it is fairly representative of the algorithm’s running time as a whole.

And as our next example will show, it isn’t always the case that the average-case and worst-case running times are equal!

A second example: searching in binary lists

For our next example, we’ll keep the same function ( search ) but choose a different input set to consider. Let \(n \in \N\) , and let \(\cI_n\) be the set of inputs (lst, x) where:

  • lst is a list of length \(n\) that contains only 0 's and 1 's. We call these lists binary lists, analogous to the binary representation of numbers we've discussed earlier.
  • x is always 0 .

This input set is a little more complicated than the first. Now x is fixed, and lst has multiple possibilities. But also, there are far more than \(n\) possible inputs: lst could be any list of length \(n\) that consists of 0 ’s and 1 ’s. But intuitively, because there are fewer possibilities for each element, it should be faster “on average” to find a 0 . But how much faster?

We’ll now do an analysis to confirm and quantify our intuition, and introduce a more formal technique for average-case running time analysis. This technique will build on what we did in our previous example, but provide a more concrete structure that generalizes to more complex algorithms as well.

Average-case running time analysis. This running-time analysis is divided into five steps.

  1. Compute the number of inputs, \(|\cI_n|\) .
  2. Divide the set of inputs into partitions based on the running time of each input.
  3. Compute the size of each partition.
  4. Using Steps 2 and 3, calculate the sum of the running times for all the inputs in \(\cI_n\) .
  5. Using Steps 1 and 4, calculate the average-case running time for \(\cI_n\) .

Step 1: compute the number of inputs . Since x is always 0 , the size of \(\cI_n\) is equal to the number of lists of 0 ’s and 1 ’s of length \(n\) . This is \(2^n\) , since there are \(n\) independent choices for each of the \(n\) elements of the list, and each choice has two possible values, 0 and 1 . Note that the “all 0 ” and “all 1 ” lists are included in \(\cI_n\) .

Step 2: partitioning the inputs by their running time. We now have far more inputs to worry about in this example than our previous one. To add up the running times for each input, we’ll first group them based on their running time.

Looking at the implementation of search , we know that if lst first has a 0 at index \(i\) , then the loop will take \(i + 1\) iterations, and therefore this many steps. Note that lst could have other 0 ’s after index \(i\) ; the loop stops the first time it sees a 0 . In other words, we can divide up the lists based on the index of the first 0 in the list, and treat the list [1, 1, ..., 1] as a special case, since it has no 0 ’s. Formally, for each \(i \in \{0, 1, \dots, n - 1\}\) , we define \(S_{n, i} \subset \cI_n\) to be the set of binary lists of length \(n\) where their first 0 occurs at index \(i\) . For example:

  • Every element in \(S_{n, 0}\) takes 1 step for search .
  • Every element in \(S_{n, 1}\) takes 2 steps for search .
  • Every element in \(S_{n, i}\) takes \(i + 1\) steps for search .

We can’t leave out the all- 1 ’s list! We’ll define a special set \(S_{n, n}\) that contains just the list [1, 1, ..., 1] . This input causes search to take \(n + 1\) steps: \(n\) steps for the loop, and then 1 step for the return False at the end of the function. In fact, this input is the only one in \(\cI_n\) where False gets returned!

Step 3: compute the size of each partition. Now that we have our partitions and their running times, to add them all up we need to compute the size of each partition. Let’s try to establish a pattern first.

  • First consider \(S_{n, 0}\) . A list lst in this set must have lst[0] == 0 , but its remaining elements could be anything (either 0 or 1 ). So there are \(2^{n - 1}\) possible lists, since there are 2 choices for each of the remaining \(n - 1\) spots. Therefore \(|S_{n, 0}| = 2^{n - 1}\) .
  • Next, consider \(S_{n, 1}\) . A list lst in this set has two “fixed” elements, lst[0] == 1 and lst[1] == 0 . So there are \(n - 2\) remaining spots that could contain either a 0 or 1 , and so \(2^{n - 2}\) such lists total. Therefore \(|S_{n, 1}| = 2^{n - 2}\) .
  • In general, for \(i \in \{0, 1, \dots, n-1\}\) , lists in \(S_{n, i}\) have \(i + 1\) fixed spots, and the remaining \(n - i - 1\) spots could be either 0 or 1. So \(|S_{n, i}| = 2^{n - i - 1}\) .
  • Finally, \(|S_{n, n}| = 1\) , since we defined that partition to only contain the all-1’s list, and there is only one such list.

Step 4: calculate the sum of the running times. To calculate the sum of the running times of all inputs in \(\cI_n\) , we can group the inputs into their partitions:

\[\begin{align*} &\quad \sum_{\texttt{(lst, x)} \in \cI_n} \text{running time of } \texttt{search(lst, x)} \\ &= \sum_{i=0}^n \sum_{\texttt{lst} \in S_{n, i}} \text{running time of } \texttt{search(lst, 0)} \tag{note $x = 0$} \end{align*}\]

We haven’t changed the values being summed, just rearranged the terms so that they are grouped together. Now the big payoff: within a single partition \(S_{n, i}\) , each list has the same running time, and so the body of the inner summation depends only on \(i\) , and is constant for the inner summation. We can use our work in Steps 2 and 3 to simplify this summation:

\[\begin{align*} &\quad \sum_{\texttt{(lst, x)} \in \cI_n} \text{running time of } \texttt{search(lst, x)} \\ &= \sum_{i=0}^n \sum_{\texttt{lst} \in S_{n, i}} \text{running time of } \texttt{search(lst, 0)} \tag{note $x = 0$} \\ &= \sum_{i=0}^n \sum_{\texttt{lst} \in S_{n, i}} (i + 1) \tag{from Step 2} \\ &= \sum_{i=0}^n |S_{n, i}| \cdot (i + 1) \\ &= \left(\sum_{i=0}^{n-1} |S_{n, i}| \cdot (i + 1)\right) + |S_{n, n}| \cdot (n + 1) \tag{splitting off last term} \\ &= \left(\sum_{i=0}^{n-1} 2^{n - i - 1} \cdot (i + 1)\right) + 1 \cdot (n + 1) \tag{from Step 3} \end{align*}\]

Using this grouping technique, we’ve successfully managed to obtain a purely algebraic expression for the total running time of this function. To finish up, we’ll use the arithmetico-geometric summation formula from Appendix C.1 (valid for all \(r \in \R\) if \(r \neq 1\) ): That formula uses a lower bound of \(i = 0\) instead of \(i = 1\) , but these are equivalent since the summation body equals 0 when \(i = 0\) .

\[ \sum_{i=0}^{m - 1} i \cdot r^i = \frac{m \cdot r^m}{r - 1} - \frac{r(r^m - 1)}{(r - 1)^2} \]

\[\begin{align*} &\quad \left(\sum_{i=0}^{n-1} 2^{n - i - 1} \cdot (i + 1)\right) + (n + 1) \\ &= \left(\sum_{i'=1}^n 2^{n - i'} \cdot i'\right) + (n + 1) \tag{$i' = i + 1$} \\ &= 2^n \left(\sum_{i'=1}^n \left(\frac{1}{2}\right)^{i'} \cdot i'\right) + (n + 1) \\ &= 2^n \left(\frac{\frac{n + 1}{2^{n + 1}}}{- \frac{1}{2}} - \frac{\frac{1}{2}\left(\frac{1}{2^{n + 1}} - 1\right)}{\frac{1}{4}} \right) + (n + 1) \tag{Using the formula} \\ &= 2^n \left(- \frac{n + 1}{2^n} - \frac{1}{2^n} + 2 \right) + (n + 1) \\ &= - (n + 1) - 1 + 2^{n + 1} + (n + 1) \\ &= 2^{n + 1} - 1 \end{align*}\]

That algebraic simplification was a bit tricky, but well worth it: our total running time for all inputs in \(\cI_n\) is just \(2^{n + 1} - 1\) .

Step 5: Putting it all together . The hard work is out of the way now, and we can wrap up by calculating the average-case running time of search for this set of inputs:

\[\begin{align*} \mathit{Avg}_{\texttt{search}}(n) &= \frac{1}{|\cI_n|} \times \sum_{x \in \cI_n} \text{running time of } \texttt{search}(x) \\ &= \frac{1}{2^n} \cdot (2^{n + 1} - 1) \\ &= 2 - \frac{1}{2^n} \end{align*}\]

Amazing! What we've found is that our average-case running time is \(2 - \frac{1}{2^n}\) steps, which is \(\Theta(1)\) . This not only confirms our intuition that the average running time of search is faster on this input set than in our initial example, but shows just how dramatically faster it is.
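
As a sanity check (not part of the original text), a short brute-force computation can average the step counts over all binary lists for small \(n\) and compare the result against \(2 - \frac{1}{2^n}\); the helper search_steps below is ours:

```python
from itertools import product

def search_steps(lst, x):
    """Step count of search(lst, x) under this section's model:
    one step per loop iteration, plus one step for the final return False."""
    for i, item in enumerate(lst):
        if item == x:
            return i + 1          # found at index i after i + 1 iterations
    return len(lst) + 1           # n loop steps + 1 step for `return False`

for n in range(1, 8):
    lists = list(product([0, 1], repeat=n))    # all 2**n binary lists
    avg = sum(search_steps(lst, 0) for lst in lists) / len(lists)
    print(n, avg, 2 - 1 / 2 ** n)              # the two values agree
```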

Average-case analysis and choosing input sets

We’ve now performed two average-case running time analyses for search , and gotten two different results, a \(\Theta(n)\) and a \(\Theta(1)\) . So you might be wondering, what’s the “right” average-case running time for search ? It turns out that this is a subtle question. On one hand, we can simply say that there is no right answer, and that the average-case running time is always dependent on the set of inputs we choose to analyse. This is technically true, and what we’ve illustrated in the past two examples.

On the other hand, this response isn’t necessarily very helpful, and leads to asking why we would care about average-case running time at all. In practice, we use our judgments of what input sets are realistic, and use those in our analysis. This often relies on empirical observation, for example by recording the inputs to an algorithm over a period of time and analysing their distribution. In a program \(P_1\) that calls search , we might observe that the input lists often don’t contain many duplicates, in which case an average-case analysis based on having \(n\) distinct elements is more applicable. In another program \(P_2\) , we might find that the input lists to search have elements that are within a fixed range (e.g., the digits 0 to 9 ), and so might try to generalize our binary list analysis. Neither of these analyses is more “correct” than the other, but instead differ in how accurately they reflect the algorithm’s actual inputs in practice.

In future courses, you’ll explore more complex average-case analyses, on algorithms like quicksort and data structures like hash tables , which are used to implement sets and dictionaries in Python. You’ll also learn how to use tools from probability theory to frame such analyses in terms of probabilities and likelihood, rather than simply counting inputs. This will allow you to simplify some calculations, and give particular inputs certain weights to indicate that they are more frequently occurring than others. Fun stuff!

Worst, Average, and Best Case Analysis of Algorithms


Introduction

When analyzing algorithms, it is important to understand their performance characteristics. This analysis helps us determine how efficient an algorithm is and how it will behave under different scenarios. In this blog post, we will explore the concepts of worst, average, and best case analysis of algorithms, popular notations used in complexity analysis, and the measurement of algorithmic complexity.

Worst Case Analysis

Worst case analysis involves determining the upper bound on the running time of an algorithm for the input that results in the maximum number of operations. It gives us an idea of the maximum time an algorithm will take to complete its execution. This analysis is particularly useful when we want to ensure that an algorithm performs well even in the worst possible scenario. For example, consider a sorting algorithm. In the worst case, the input array may be in reverse order, requiring the maximum number of comparisons and swaps. By analyzing the worst case, we can determine the upper bound on the time complexity of the sorting algorithm.

Average Case Analysis

Average case analysis involves determining the expected running time of an algorithm for a random input. It takes into account the probabilities of different inputs and their corresponding running times. This analysis gives us a more realistic estimate of an algorithm’s performance under typical conditions. Continuing with the sorting algorithm example, average case analysis considers the probability distribution of different input arrays and calculates the expected number of comparisons and swaps required. This analysis provides a more accurate assessment of the algorithm’s efficiency in real-world scenarios.

Best Case Analysis

Best case analysis involves determining the lower bound on the running time of an algorithm for the input that results in the minimum number of operations. It gives us an idea of the minimum time an algorithm will take to complete its execution. However, best case analysis is not very informative on its own, as it often represents unrealistic scenarios that rarely occur in practice. For the sorting algorithm example, the best case occurs when the input array is already sorted. In this case, the algorithm may have a lower time complexity compared to other scenarios. However, the best case analysis alone does not provide a comprehensive understanding of the algorithm’s performance.
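To make the three cases concrete, here is a small sketch that counts the comparisons an insertion sort performs on an already-sorted, a shuffled, and a reverse-sorted input of the same size (insertion sort is used purely as an example of a comparison-based sorting algorithm):

```python
import random

def insertion_sort_comparisons(arr):
    """Sort a copy of arr with insertion sort and return the number of comparisons made."""
    a = list(arr)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]       # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 1000
best_input = list(range(n))                  # already sorted
typical_input = random.sample(range(n), n)   # shuffled
worst_input = list(range(n, 0, -1))          # reverse sorted

print("best   :", insertion_sort_comparisons(best_input))     # about n
print("average:", insertion_sort_comparisons(typical_input))  # about n*n / 4
print("worst  :", insertion_sort_comparisons(worst_input))    # about n*(n-1) / 2
```

The sorted input needs roughly n comparisons, the shuffled input roughly n²/4, and the reverse-sorted input roughly n²/2, which is exactly the best/average/worst spread described above.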

Popular Notations in Complexity Analysis

Complexity analysis involves expressing the growth rate of an algorithm’s running time or space requirements as a function of the input size. Several popular notations are used to represent algorithmic complexity:

Big O Notation (O)

Big O notation represents the upper bound on the growth rate of an algorithm’s running time. It provides an upper limit on how the algorithm’s performance scales with the input size. For example, if an algorithm has a time complexity of O(n^2), it means the running time grows quadratically with the input size.

Omega Notation (Ω)

Omega notation represents the lower bound on the growth rate of an algorithm’s running time. It provides a lower limit on how the algorithm’s performance scales with the input size. For example, if an algorithm has a time complexity of Ω(n), it means the running time grows linearly with the input size.

Theta Notation (Θ)

Theta notation represents both the upper and lower bounds on the growth rate of an algorithm’s running time. It provides a tight estimate of how the algorithm’s performance scales with the input size. For example, if an algorithm has a time complexity of Θ(n), it means the running time grows linearly with the input size, and the upper and lower bounds are the same.
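For reference, these notations have precise definitions: f(n) is O(g(n)) if there exist constants c > 0 and n0 such that f(n) ≤ c·g(n) for all n ≥ n0; f(n) is Ω(g(n)) if there exist constants c > 0 and n0 such that f(n) ≥ c·g(n) for all n ≥ n0; and f(n) is Θ(g(n)) exactly when it is both O(g(n)) and Ω(g(n)).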

Read this: Difference between Big O vs Big Theta Θ vs Big Omega Ω Notations

Measurement of Complexity of an Algorithm

To measure the complexity of an algorithm, we consider the input size (n) and the number of basic operations performed by the algorithm. The basic operations can be comparisons, assignments, arithmetic operations, or any other operation that takes a constant amount of time. The most common way to measure complexity is by counting the number of operations as a function of the input size. For example, an algorithm that performs a constant number of operations for each element in an array of size n would have a linear time complexity of O(n).
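As a small sketch of this counting approach (the count_operations helper is illustrative), the function below does a constant amount of work per element and reports how many basic operations it performed; the count grows in direct proportion to n, i.e. O(n):

```python
def count_operations(arr):
    """Do a constant amount of work per element and count the basic operations."""
    ops = 0
    total = 0
    for x in arr:
        total += x   # one basic operation per element
        ops += 1
    return total, ops

for n in (10, 100, 1000):
    _, ops = count_operations(list(range(n)))
    print(n, ops)  # ops equals n for each size: linear growth
```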

Which Complexity Analysis is Generally Used?

The choice of complexity analysis depends on the specific requirements and characteristics of the algorithm and the problem it solves. In general, worst case analysis is commonly used as it provides an upper bound on the algorithm’s performance. This ensures that the algorithm will perform well even in the worst possible scenario. However, average case analysis is also important, especially when the algorithm is expected to handle random or typical inputs. It gives a more realistic estimate of the algorithm’s performance under normal conditions. Best case analysis, while less informative on its own, can be useful in certain cases where the best case scenario is of particular interest or when comparing algorithms with similar best case performance.

In conclusion, analyzing the worst, average, and best case scenarios of an algorithm provides valuable insights into its performance characteristics. Complexity analysis using popular notations such as Big O, Omega, and Theta helps us express and compare the growth rates of algorithms. By understanding the measurement of algorithmic complexity, we can make informed decisions when choosing and optimizing algorithms for different applications.

By Harshvardhan Mishra


Algorithm Analysis: Worst, Best and Average Case Analysis

In the realm of computer science and programming, understanding the efficiency of algorithms is paramount. The study of algorithm analysis delves into evaluating how different algorithms perform under varying conditions. Among the key considerations are the worst-case, best-case, and average-case scenarios. This article provides a comprehensive overview of algorithm analysis, specifically focusing on worst, best, and average-case analysis in data structures. By the end, you’ll be equipped with the knowledge to make informed decisions that contribute to optimized coding and efficient problem-solving.

Algorithm Analysis

Algorithm analysis involves the meticulous evaluation of how algorithms perform, helping programmers identify the most suitable solutions for specific problems. This analysis is crucial for assessing an algorithm’s efficiency and resource consumption. It aids in understanding how execution time and memory usage may vary based on input size.

Importance of Algorithm Efficiency

Efficiency in algorithms directly impacts application performance. An algorithm that runs efficiently can save valuable time, reduce resource consumption, and enhance user experience. Conversely, a poorly designed algorithm can lead to slow execution and increased resource utilization.

Key Metrics in Algorithm Analysis

  • Worst-Case Analysis – Consider the scenario where an algorithm takes the maximum possible time to complete its execution. This provides an upper bound on the time complexity.
  • Best-Case Analysis – This scenario examines the shortest time an algorithm takes to complete its execution. While it might seem ideal, it rarely represents real-world situations.
  • Average-Case Analysis – By considering all possible inputs and their likelihood of occurrence, the average-case analysis offers a more realistic perspective on an algorithm’s performance.

Understanding Big O Notation

Big O notation is a standardized way to express an algorithm’s time complexity in relation to the input size. It simplifies comparison and selection of algorithms based on efficiency. Some common time complexities include O(1), O(log n), O(n), O(n log n), and O(n^2).

Worst Case Analysis

In worst-case analysis, we evaluate the maximum time an algorithm takes for any input of size “n.” This analysis helps programmers anticipate the algorithm’s performance in unfavorable situations.

Consider a sorting algorithm like Bubble Sort. In its worst-case scenario, where the input array is reverse sorted, each element needs to be compared and swapped. This leads to a time complexity of O(n^2), making it inefficient for large datasets.
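The count can be made explicit: a straightforward Bubble Sort performs (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons, and on a reverse-sorted input every one of those comparisons also triggers a swap, so both counts grow as O(n^2).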

Best Case Analysis

While best-case analysis might seem optimistic, it rarely mirrors real-world scenarios. It evaluates the algorithm’s shortest execution time, which usually occurs for a specific input.

For instance, in Quick Sort, the best case occurs when the pivot chosen during partitioning is the median element. This results in balanced partitions and a time complexity of O(n log n).
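In terms of recurrences, the balanced case satisfies T(n) = 2T(n/2) + cn, which solves to O(n log n), while repeatedly picking the smallest or largest element as the pivot gives T(n) = T(n-1) + cn, which solves to O(n^2).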

Average Case Analysis

Average-case analysis provides insight into how an algorithm behaves on an average, considering all possible inputs. This analysis considers the probability distribution of inputs to calculate an expected execution time.

Take Hashing as an example. In hash table operations, the average-case time complexity for inserting, deleting, or retrieving an element is often O(1), making hash tables highly efficient.
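As a quick illustration, Python's built-in dict is implemented as a hash table, so each of the operations below takes O(1) time on average (the specific keys used here are arbitrary):

```python
table = {}

# Insert: average-case O(1) per operation.
for key in range(1_000_000):
    table[key] = key * key

# Retrieve, membership test, and delete: also average-case O(1) each.
print(table[123_456])       # 15241383936
del table[123_456]
print(123_456 in table)     # False
```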

Balancing Act

Selecting the right algorithm involves weighing the trade-offs between best, worst, and average-case scenarios. While an algorithm might shine in one scenario, it could falter in another.

For example, Merge Sort offers a consistent O(n log n) time complexity, making it suitable for various scenarios. However, it requires additional memory for merging, unlike in-place algorithms such as Quick Sort, which sort within the original array.

Q: Why is worst-case analysis important?

A: Worst-case analysis helps programmers understand the upper limit of an algorithm’s performance, aiding in preparing for unfavorable scenarios.

Q: Is the best-case scenario practical?

A: The best-case scenario is usually an optimistic estimate and may not reflect real-world conditions.

Q: How does average-case analysis differ from the rest?

A: Average-case analysis considers all possible inputs’ likelihood, offering a more practical view of an algorithm’s efficiency.

Q: Can an algorithm perform well in all scenarios?

A: No, an algorithm’s performance often varies based on the input distribution and characteristics.

Q: Is Big O notation the only metric for efficiency?

A: While Big O notation is commonly used, it’s not the only metric. Other factors like space complexity also influence efficiency.

Q: How does algorithm analysis impact software development?

A: Efficient algorithms lead to faster and optimized software, enhancing user experience and resource utilization.


Worst, Average and Best Cases


In the previous post, we discussed how asymptotic analysis overcomes the problems of the naive way of analyzing algorithms. In this post, we will take Linear Search as an example and analyze it using asymptotic analysis.

We can have three cases to analyze an algorithm: 1) Worst Case 2) Average Case 3) Best Case

Let us consider the following implementation of Linear Search.
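A minimal sketch of such an implementation in Python, assuming a search(arr, x) function that returns the index of x in arr, or -1 when x is not present:

```python
def search(arr, x):
    """Linearly search x in arr; return its index, or -1 if x is not present."""
    for i in range(len(arr)):
        if arr[i] == x:
            return i
    return -1

print(search([1, 10, 30, 15], 30))   # 2
print(search([1, 10, 30, 15], 6))    # -1 (x is absent: the worst case)
```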

Worst Case Analysis (Usually Done)

In worst-case analysis, we calculate an upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. For Linear Search, the worst case happens when the element to be searched (x in the above code) is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the worst-case time complexity of linear search would be Θ(n).

Average Case Analysis (Sometimes Done)

In average-case analysis, we take all possible inputs and calculate the computing time for each of them, sum all the calculated values, and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases. For the linear search problem, let us assume that all cases are uniformly distributed (including the case of x not being present in the array). So we sum all the cases and divide the sum by (n+1). Following is the value of the average-case time complexity.

\(\text{Average-case time} = \frac{\sum_{i=1}^{n+1} \Theta(i)}{n+1} = \Theta\!\left(\frac{(n+1)(n+2)/2}{n+1}\right) = \Theta(n)\)

Best Case Analysis (Bogus)

In best-case analysis, we calculate a lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the best-case time complexity would be Θ(1).

Most of the time, we do worst-case analysis to analyze algorithms. In worst-case analysis, we guarantee an upper bound on the running time of an algorithm, which is useful information. Average-case analysis is not easy to do in most practical cases and is rarely done, because we must know (or predict) the mathematical distribution of all possible inputs. Best-case analysis is largely bogus: guaranteeing a lower bound on an algorithm provides little information, since in the worst case the algorithm may still take years to run.

For some algorithms, all the cases are asymptotically the same, i.e., there are no distinct worst and best cases. For example, Merge Sort does Θ(n log n) operations in all cases. Most other sorting algorithms have distinct worst and best cases. For example, in the typical implementation of Quick Sort (where the pivot is chosen as a corner element), the worst case occurs when the input array is already sorted, and the best case occurs when the pivot always divides the array into two halves. For Insertion Sort, the worst case occurs when the array is reverse sorted, and the best case occurs when the array is already sorted in the same order as the output.


Worst Case, Average Case, and Best Case

Aman Kharwal


In this article, I will introduce you to the concept of worst case, average case and best case analysis of the algorithm.

Introduction to Worst Case, Average Case and Best Case

In computing, the worst, average, and best cases of an algorithm describe how its running time can vary across the different inputs of a given size. To understand these terms, let’s go through them one by one.


Worst Case Analysis:

In the worst-case analysis, we calculate the upper limit of the execution time of an algorithm. It is necessary to know the case which causes the execution of the maximum number of operations.

For linear search, the worst case occurs when the element to search for is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the worst-case time complexity of linear search would be Θ(n).

Average Case Analysis:

In the average-case analysis, we take all possible inputs and calculate the computation time for each of them, then add up all the calculated values and divide the sum by the total number of inputs.

We need to predict the distribution of cases. For the linear search problem, assume that all cases are uniformly distributed. So we add all the cases and divide the sum by (n + 1).

Best Case Analysis:

In the best case analysis, we calculate the lower bound of the execution time of an algorithm. It is necessary to know the case which causes the execution of the minimum number of operations. In the linear search problem, the best case occurs when x is present at the first location.

The number of operations in the best case is constant, so the best-case time complexity would be Θ(1). Most of the time, we perform worst-case analysis to analyze algorithms, because in worst-case analysis we guarantee an upper bound on the execution time of an algorithm, which is useful information.

The average-case analysis is not easy to do in most practical cases and is rarely done, because we need to predict the mathematical distribution of all possible inputs. Best-case analysis is of limited use: guaranteeing a lower bound on an algorithm does not provide much information, because in the worst case the algorithm can still take years to run.

Conclusion:

For some algorithms, all cases are asymptotically the same, that is, there is no distinct worst or best case. For example, Merge Sort performs Θ(n log n) operations in all cases. Most other sorting algorithms, however, do have distinct worst and best cases.

For example, in the typical quicksort implementation (where the pivot is chosen as a corner element), the worst case occurs when the input array is already sorted, and the best case occurs when the pivot always divides the array into two halves.

For insertion sort, the worst case occurs when the array is sorted in reverse order, and the best case occurs when the array is already sorted in the same order as the output.

I hope you liked this article on the concepts of worst-case, average-case, and best-case analysis of algorithms. Please feel free to ask your valuable questions in the comments section below.



Best, worst, and average case analysis.

Best, worst, and average case analysis: In computer science, best, worst, and average case analysis is a method of analyzing algorithms that takes into account the amount of time it takes for an algorithm to run in the best case, worst case, and average case scenarios.

Best, worst, and average case analysis is an important concept in computer science. It is a tool used to evaluate the performance of algorithms given certain input data. By analyzing the running time or memory usage of an algorithm under different conditions, one can gain insights into its effectiveness for solving a particular problem. This article will provide an overview of best, worst, and average case analysis by exploring how it works and when it should be applied.

The purpose of best, worst, and average case analysis is to determine the expected behavior of an algorithm in various scenarios. In doing so, it helps identify potential areas where improvement could be made before launching a project or deploying software solutions. Furthermore, this technique allows developers to compare different approaches to tackling a problem and make informed decisions on which option may be most suitable for their use case.

By studying best, worst, and average cases separately, researchers are able to assess whether an algorithm's performance degrades over time as more data points are added or if there is any variation depending on the type of data being processed. Through careful evaluation of these three concepts together with other parameters such as space complexity and scalability features, developers can ensure that they have chosen the right solution for their system requirements.

What Is Best, Worst, And Average Case Analysis?

Best, worst, and average case analysis is a methodology used to assess the time complexity of an algorithm. It involves analyzing the best-case running time, worst-case performance, and average case complexity of an algorithm. The goal of this type of analysis is to determine how well an algorithm performs in different scenarios based on its input size.

In terms of best case analysis, it focuses on computing the minimum amount of processing time required for any given set of inputs. This results in the best overall performance since there are no unnecessary steps taken. On the other hand, worst case analysis looks at what happens when things go wrong by considering all possible outcomes that can lead to long execution times or even errors. Finally, average case complexity considers the expected outcome from a large variety of inputs which will help identify potential bugs or slowdowns in algorithms with complex data sets. In sum, these three approaches provide valuable insight into assessing and improving an algorithm’s efficiency regardless of its input size while identifying opportunities for improvement where necessary.

Examples Of Best, Worst, And Average Case Analysis

Best, worst, and average case analysis is a method of evaluating the performance of an algorithm in terms of its time complexity. It involves determining the upper bound for the worst case execution time as well as the average case execution time based on different possible inputs to an algorithm. This type of analysis can be applied to any problem that requires searching or sorting through data structures such as arrays and linked lists.

For example, when considering the linear search problem, best, worst, and average case analysis looks at how long it takes to find a target element depending on where it is located within an input array. If we assume that the target is equally likely to be at any position in the array, this represents the average-case scenario.

In contrast, if the target element is always in the last position of the array, or is not present at all, this represents the worst-case scenario, with an upper bound determined by calculating the worst-case execution time. Similarly, if the target element is always found in the first position, this represents the best-case scenario, which has better performance than both cases previously mentioned.

By studying different types of data distributions and analyzing each possible input condition separately while having knowledge about the structure of our algorithms, one can effectively determine how much time it will take for their program to execute under various conditions. Best, Worst and Average Case Analysis helps us understand precisely how fast our code runs under different scenarios so that developers can make improvements accordingly and optimize their programs for maximum efficiency no matter what kind of input they receive from users.

Benefits Of Best, Worst, And Average Case Analysis

Best, worst, and average case analysis are important tools in complexity analysis as they allow for the analysis of algorithms and their running times. In particular, these methods provide insight into the performance of an algorithm under different scenarios.

By performing a best case analysis one can determine the lower bound on the time taken by an algorithm to complete its task. On the other hand, worst case analysis tells us how much time is needed when an algorithm is given inputs that trigger its poorest behaviour, such as an already-sorted array fed to a quicksort that always picks the first element as its pivot. Average case analysis then provides a more general overview of what happens under normal conditions.

The main benefits of assessing algorithms using best, worst, and average case analyses is that researchers can get a better understanding of which strategies work well in certain situations. This knowledge allows them to make informed decisions when selecting suitable approaches for their applications.

Additionally, this type of assessment offers a more comprehensive view than simply relying on asymptotic analysis alone without considering any real-world scenarios where performance may be impacted by factors outside of theoretical bounds. Ultimately, best, worst, and average case analyses offer valuable insights into the true complexities associated with various algorithms so that developers have greater confidence in implementing solutions that will ensure steady and reliable performance over time.

Challenges Of Best, Worst, And Average Case Analysis

Best, worst, and average case analysis is a useful tool in problem-solving. It allows for the evaluation of various scenarios in order to identify potential challenges or opportunities. This type of analysis can be utilized when working with sorting algorithms, linear search, merge sort, linked list data structures, insertion sort, bubble sort, quick sort and heap sort.

However, best, worst and average case analysis also brings certain challenges. For example, it requires accuracy when defining the parameters that will be analyzed. If incorrect parameters are used then this could lead to inaccurate results which may not provide an accurate picture of what needs to be done.

Additionally there is often ambiguity between the different cases being evaluated due to changing conditions and variables within each one. As such it can become difficult to determine which course of action should be taken based on these factors alone.

This highlights the importance of having reliable data sources as well as a thorough understanding of binary search techniques and other relevant concepts in order to make informed decisions when utilizing best, worst and average case analysis for problem solving purposes.

Applications Of Best, Worst, And Average Case Analysis

Best, worst, and average case analysis are essential tools for assessing the performance of computing algorithms, and each has its own applications in reasoning about runtime and program efficiency. For instance, merge sort is a widely used algorithm whose appeal rests largely on its worst-case guarantee: best- and worst-case analysis show that its comparison count is Θ(n log n) on every input, so its running time is predictable regardless of how the data is arranged.

Worst case analysis also has practical applications outside of computer science. In network revenue management problems such as finding the minimum cost per unit distance or allocating hub locations, it can be used to identify where resources must be allocated most efficiently given a certain set of constraints. Additionally, random matrices can benefit from an average case approximation ratio when estimating time complexity for particular operations within a matrix structure. By combining these three types of analysis, researchers gain significant insight into how systems will perform under varying conditions.

Key Takeaways From Best, Worst, And Average Case Analysis

Best, worst and average case analysis is a powerful tool for reasoning about the total number of operations an algorithm requires. It helps in understanding which algorithm will be more efficient in real-time applications such as sorting an array, and in reasoning about the space complexity of matrix computations. The same mindset carries over to hard optimisation problems, including NP-complete problems, and to techniques used in revenue management models, management science, genetic algorithms, and fuzzy control models.

The key takeaways from best, worst and average case analysis are that it gives us insight into how well each algorithm performs under different conditions and provides guidance on which one should be used for optimizing performance. We can also use this technique to compare two or more algorithms and determine which one is better suited for particular tasks. Furthermore, we can use it to understand the tradeoffs between time and memory usage when dealing with complex problems. Finally, analyzing the best-case scenario allows us to optimize our code by reducing redundant calculations so that our program runs faster while operating within desired parameters.

Best, worst, and average case analysis is a useful tool for decision-makers to consider the potential outcomes of their decisions. It helps them understand how different scenarios could unfold and provides insight into what measures need to be taken in order to achieve desired results.

By examining best, worst, and average cases separately, decision-makers can gain an understanding of the risks and rewards associated with a particular course of action. This allows them to make informed choices that are tailored according to the specific needs of each situation.

The benefits of this type of analysis include being able to identify opportunities and plan accordingly while also preventing costly mistakes due to lack of foresight or inadequate planning. Additionally, it can provide valuable insights on which strategies may yield better results than others based on previous experiences or data collected from similar contexts. Finally, by understanding both positive and negative implications ahead of time, one can adjust their strategy accordingly in order to maximize success rates.

In conclusion, best, worst, and average case analysis is an effective way for decision-makers to anticipate possible outcomes and help them map out optimal paths forward more confidently. With its ability to analyze multiple scenarios simultaneously and help pinpoint areas where improvements are needed most effectively, it offers invaluable guidance when making difficult decisions that have far-reaching consequences.


CS2 Software Design & Data Structures

Chapter 4. Algorithm Analysis


4.3. Best, Worst, and Average Cases


When analyzing an algorithm, should we study the best, worst, or average case? Normally we are not interested in the best case, because this might happen only rarely and generally is too optimistic for a fair characterization of the algorithm’s running time. In other words, analysis based on the best case is not likely to be representative of the behavior of the algorithm. However, there are rare instances where a best-case analysis is useful—in particular, when the best case has high probability of occurring. The Shellsort and Quicksort algorithms both can take advantage of the best-case running time of Insertion Sort to become more efficient.

How about the worst case? The advantage to analyzing the worst case is that you know for certain that the algorithm must perform at least that well. This is especially important for real-time applications, such as for the computers that monitor an air traffic control system. Here, it would not be acceptable to use an algorithm that can handle \(n\) airplanes quickly enough most of the time , but which fails to perform quickly enough when all \(n\) airplanes are coming from the same direction.

For other applications—particularly when we wish to aggregate the cost of running the program many times on many different inputs—worst-case analysis might not be a representative measure of the algorithm’s performance. Often we prefer to know the average-case running time. This means that we would like to know the typical behavior of the algorithm on inputs of size \(n\) . Unfortunately, average-case analysis is not always possible. Average-case analysis first requires that we understand how the actual inputs to the program (and their costs) are distributed with respect to the set of all possible inputs to the program. For example, it was stated previously that the sequential search algorithm on average examines half of the array values. This is only true if the element with value \(K\) is equally likely to appear in any position in the array. If this assumption is not correct, then the algorithm does not necessarily examine half of the array values in the average case.
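Under that equal-likelihood assumption, the claim follows directly: if \(K\) is equally likely to be at any of the \(n\) positions, the expected number of examined values is \(\sum_{i=1}^{n} \frac{i}{n} = \frac{n+1}{2}\), which is about half of the array.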

The characteristics of a data distribution have a significant effect on many search algorithms, such as those based on hashing and search trees such as the BST . Incorrect assumptions about data distribution can have disastrous consequences on a program’s space or time performance. Unusual data distributions can also be used to advantage, such as is done by self-organizing lists .

In summary, for real-time applications we are likely to prefer a worst-case analysis of an algorithm. Otherwise, we often desire an average-case analysis if we know enough about the distribution of our input to compute the average case. If not, then we must resort to worst-case analysis.


