3.1.1 What is an Algorithm?

Algorithms are fundamental to both mathematics and computer science. In simple terms, an algorithm is a step-by-step procedure or set of rules designed to solve a problem or accomplish a specific task. More formally, it can be defined as a finite list of well-defined instructions for completing a computation or achieving some result. In essence, an algorithm takes some input, performs a sequence of computational steps, and produces an output. Algorithms are employed everywhere – from basic arithmetic calculations to complex data processing and artificial intelligence – serving as the “recipes” that tell a computer (or a human problem-solver) exactly what steps to take to solve a given problem.

Formal Definitions and Key Properties

In the early 20th century, the notion of an algorithm was rigorously formalized to establish a theoretical foundation for computation. For example, mathematician Alan Turing introduced the Turing machine as an abstract model of computation to define algorithms precisely. Around the same time, Alonzo Church developed the lambda calculus formalism. These efforts were responses to David Hilbert’s 1928 Entscheidungsproblem (decision problem) and led to the Church–Turing thesis, which posits that any effectively calculable function (the intuitive notion of an algorithm) can be computed by a Turing machine. Through these works (by Gödel, Herbrand, Kleene, Church, Post, Turing, and others), the community converged on a theoretical definition of algorithms and what it means for a function to be “computable.”

From a practical standpoint, computer scientists characterize algorithms by several key properties that an effective algorithm should have:

  • Input: The algorithm receives zero or more inputs. These inputs are the data given to the algorithm at the start, drawn from specified sets of objects (some algorithms may not require any input).

  • Output: The algorithm produces at least one output that has a specified relation to the input. In other words, after the steps are executed, we expect some result or solution as the outcome.

  • Definiteness: Each step of the algorithm must be clearly and unambiguously defined. The instructions should be precise so that there is no uncertainty about what action to perform at each stage.

  • Finiteness: An algorithm must always terminate after a finite number of steps. It should not run forever (assuming valid input); instead, it must reach a conclusion after finitely many steps.

  • Effectiveness: All operations in the algorithm should be basic enough that they can, in principle, be carried out exactly and in a finite amount of time by the computing agent. In other words, the steps are effective or feasible – no step is "magical" or unimplementable.

If a procedure satisfies the above criteria, it qualifies as an algorithm. Additionally, desirable properties often include efficiency (using minimal time and memory resources) and generality (applicable to a wide range of inputs), although these are not required by the definition, strictly speaking.

Representation of Algorithms (Pseudocode and Flowcharts)

Algorithms can be expressed in various forms depending on context and audience. In plain language, one might describe an algorithm in English (much like a cooking recipe). More formally, computer scientists often use pseudocode – a language-agnostic, structured description of the steps – or flowcharts – a graphical diagram of the control flow – to illustrate how an algorithm works. These representations help in visualizing and reasoning about the algorithm without getting bogged down by programming syntax.

For example, consider the classic problem of finding the greatest common divisor (GCD) of two positive integers. Euclid’s algorithm (described by Euclid around 300 BC) is one of the oldest known algorithms and provides an effective method for this task. Below is simple pseudocode for Euclid’s GCD algorithm using the subtraction method (repeatedly subtracting the smaller number from the larger until the two values become equal):

Algorithm EuclidGCD(a, b):
    while a ≠ b do
        if a > b:
            a := a - b
        else:
            b := b - a
    end while
    return a   // (now a == b) this is the GCD

Figure: Euclid's algorithm for computing GCD, represented as a flowchart.
In the pseudocode above, a and b are the input integers, and the algorithm repeatedly reduces the pair (a, b) until the two values are equal, at which point that value is the GCD. The same logic is depicted in the flowchart (Figure above), which uses standard symbols: ovals for Start/End, diamonds for decision points (condition checks like A == B? and A > B?), and rectangles for computational steps (A = A - B or B = B - A). Following the flowchart from the top (Start) to bottom (End) traces the algorithm’s execution path for any given input. This combination of pseudocode and flow diagram helps illustrate the algorithm’s control flow clearly: one can see the loop (feedback arrows back to the condition check) and the two possible actions inside the loop.
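
For readers who want to run the algorithm, here is a minimal Python translation of the pseudocode above (the function names are our own; the second, remainder-based variant is the form usually used in practice):

def euclid_gcd_subtraction(a, b):
    # Mirrors the pseudocode: assumes a and b are positive integers.
    while a != b:
        if a > b:
            a = a - b
        else:
            b = b - a
    return a  # now a == b, and that value is the GCD

def euclid_gcd_mod(a, b):
    # Equivalent remainder-based form; converges in far fewer steps.
    while b != 0:
        a, b = b, a % b
    return a

print(euclid_gcd_subtraction(48, 36))  # 12
print(euclid_gcd_mod(48, 36))          # 12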

Euclid’s algorithm exemplifies the key properties mentioned earlier – it has clear steps, guaranteed termination, and produces a correct output (the GCD). Tracing it also provides an intuitive feel for what an algorithm is: a systematic method that, given any two integers, will deterministically produce their greatest common divisor and then stop.

Historical Evolution of the Algorithm Concept

The concept of algorithms has been around for millennia. The word “algorithm” itself is derived from the name of the 9th-century Persian mathematician Muhammad ibn Mūsā al-Khwārizmī, Latinized as Algoritmi. Al-Khwarizmi’s treatises introduced systematic procedures for arithmetic and algebra; in fact, the word algorithm derives from his name, while algebra derives from al-jabr in the title of one of his works. This etymology highlights that, historically, algorithms were first associated with mathematical procedures (methods for addition, multiplication, finding square roots, and so on) long before the advent of computers.

Ancient examples: Euclid’s algorithm for GCD (described above) appeared in Euclid’s Elements around 300 BC and is often cited as one of the earliest recorded algorithms. Another early algorithmic idea can be found in the works of Eratosthenes (the Sieve of Eratosthenes for finding prime numbers). These examples show that humans have been formulating step-by-step procedures to solve problems since ancient times, even if they didn’t use the word "algorithm."

Medieval to Renaissance: The use of algorithmic procedures continued in various forms (such as methods for solving equations or performing computations by hand). Al-Khwarizmi’s influence spread through translations of his books, and by the Middle Ages, the Latin term algorismus referred to the decimal arithmetic methods based on Hindu-Arabic numerals. However, algorithms were still thought of informally as “methods” or “rules.”

19th century: A significant milestone was the work of Ada Lovelace in 1843. Working with Charles Babbage on his theoretical Analytical Engine, she authored a step-by-step method for computing Bernoulli numbers – effectively a program (algorithm) for a mechanical computer. In her Notes on the Analytical Engine she spelled out the precise sequence of operations the machine would carry out, which is why she is often credited as the world’s first computer programmer. This episode marks the shift from algorithms as purely mathematical procedures to algorithms as instructions for a general-purpose computing machine.

Early 20th century formalization: The formalization of algorithms as a mathematical concept took shape in the 1930s. Besides Turing and Church’s contributions mentioned earlier, other formulations included Gödel, Herbrand, and Kleene’s recursive functions, and Emil Post’s production systems. All these independently defined what it means for a function to be computable by an effective procedure. By the 1940s, with the development of actual electronic computers, the term “algorithm” became common parlance in computer science, referring to any well-defined procedure that could be implemented on a machine.

Late 20th century and beyond: As computer science matured, researchers developed a wealth of algorithms for different problems (searching, sorting, graph traversal, optimization, etc.) and created subfields like algorithm analysis (to evaluate algorithm efficiency) and complexity theory (classifying problems by the resources required by the best algorithms). Donald Knuth’s multi-volume The Art of Computer Programming, whose first volume appeared in 1968, cemented the central role of algorithms in computing literature. Meanwhile, new types of algorithms emerged (discussed below as “modern” algorithms) that extended the classical deterministic paradigm. Importantly, the theoretical groundwork laid by the pioneers ensures that even these modern approaches have rigorous definitions. Today, an algorithm is understood as a general concept that spans any domain where stepwise problem-solving is applicable – whether on classical computers, quantum computers, or even abstract procedures in nature.

Classical Algorithm Types

When discussing algorithm categories, we can broadly distinguish classical algorithms – those that run on standard deterministic computers (or abstract Turing machines) and follow a predefined sequence of steps – from some of the more modern or advanced paradigms that have emerged (some inspired by nature or new computing models). Within classical algorithms, a primary classification is based on how the algorithm progresses and whether it incorporates randomness:

Deterministic Algorithms

A deterministic algorithm is one that behaves predictably on a given input – it will always produce the same output following the same sequence of states for that input. In other words, there is no randomness in the decision process; each step is determined entirely by the preceding state. Most traditional computer algorithms are deterministic. For example, the standard long division procedure, or a simple sorting algorithm like Insertion Sort, will do exactly the same comparisons and swaps every time you run it on the same data. Deterministic algorithms are valued for their reproducibility and certainty: given input X, you can be confident about the result and the path the algorithm will take.

Properties: Deterministic algorithms are usually easier to debug and test since their behavior is fixed. They either find the solution in a predictable number of steps or report failure (and if they fail for a given input, they will always fail for that input in the same manner). All the examples of algorithms we discussed earlier (Euclid’s GCD, etc.) are deterministic – they perform fixed operations until termination.

Examples: Classical deterministic algorithms abound in computer science. For instance, binary search (for finding an item in a sorted list) will check the middle element, then either the left half or right half, and so on – a fixed sequence given the same target value. Similarly, Dijkstra’s algorithm for shortest paths in a graph deterministically relaxes edges in a specific order. Each of these will produce the same result and intermediate steps each time they run with the same inputs.
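
As a concrete sketch of determinism (a minimal example, not drawn from any particular library), here is binary search in Python; running it twice on the same sorted list and target performs exactly the same sequence of comparisons:

def binary_search(items, target):
    # Deterministic: identical input always triggers identical comparisons.
    # Returns the index of target in the sorted list, or -1 if absent.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 2, every single run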

Randomized (Probabilistic) Algorithms

Not all algorithms are entirely predictable – some leverage randomness to achieve their goals. A randomized algorithm is one that uses random numbers or makes random decisions in at least one step of its logic. This means that even with the same input, a randomized algorithm might exhibit different behavior on different runs (possibly different performance, or even different outcomes with some probability). Randomized algorithms are also called probabilistic algorithms.

Why introduce randomness? In many cases, randomness can simplify algorithm design or improve efficiency on average. For example, picking a random pivot in Quick Sort avoids worst-case scenarios that a bad fixed pivot selection might lead to. Some algorithms use randomness to get out of local optima or to sample a large search space more uniformly. In certain problems (like primality testing or cryptographic protocols), randomized algorithms can be much faster than any known deterministic approach, at the cost of a tiny probability of error.

Randomized algorithms can be classified further into two types: Las Vegas algorithms, which always produce a correct result (they use randomness only to potentially improve speed), and Monte Carlo algorithms, which run within a bounded amount of time but have a small chance of returning an incorrect answer. An example of the former is Randomized Quick Sort (it always sorts correctly, but the runtime is a random variable depending on pivot choices). An example of the latter is the Miller-Rabin primality test, which can quickly determine whether a number is prime with high accuracy – there is a negligible probability it could mistake a composite for a prime, but in practice we can make that probability as small as we like by repeating the test.
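
Both flavors can be sketched in a few lines of Python (illustrative implementations only): randomized Quick Sort is a Las Vegas algorithm, while the Miller-Rabin test below is a Monte Carlo algorithm whose error probability shrinks with the number of rounds.

import random

def randomized_quicksort(items):
    # Las Vegas: the result is always correctly sorted; only the running
    # time varies with the random pivot choices.
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    return (randomized_quicksort([x for x in items if x < pivot])
            + [x for x in items if x == pivot]
            + randomized_quicksort([x for x in items if x > pivot]))

def is_probably_prime(n, rounds=20):
    # Monte Carlo (Miller-Rabin): "False" is always correct; "True" may be
    # wrong, with probability at most 4**(-rounds).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

print(randomized_quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
print(is_probably_prime(104729))                 # True (104729 is prime)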

Examples: Besides those mentioned, other probabilistic algorithms include randomized hashing in data structures (for instance, choosing the hash function at random to avoid adversarial collisions), randomized graph algorithms (like Karger’s algorithm for minimum cuts), and various Monte Carlo simulations used in scientific computing and AI. These algorithms exploit randomness either to handle uncertainty or to reduce expected runtime. It is important to note that despite their randomness, we can rigorously analyze their expected behavior and error probabilities.

Modern Algorithm Types

With the growth of computing and the desire to tackle ever more complex problems, several innovative classes of algorithms have been developed. These modern algorithms often extend beyond the straightforward deterministic model, either by drawing inspiration from natural processes or by leveraging non-classical computing paradigms. We will discuss a few prominent categories: heuristic algorithms, evolutionary algorithms (genetic algorithms), and quantum algorithms.

Heuristic Algorithms

In cases where no efficient exact algorithm is known (often for NP-hard problems), or where we just need a “good enough” solution quickly, we turn to heuristics. A heuristic algorithm is one that employs a practical approach or shortcut to produce solutions that may not be optimal but are sufficient for reaching an immediate goal. In other words, it sacrifices guarantees of optimality or completeness for the sake of speed or simplicity. Heuristics are typically tailored to a specific problem domain and use domain-specific knowledge (rules of thumb, educated guesses, intuitive strategies) to make decisions within the algorithm.

Heuristic algorithms do not guarantee a perfect result for every input, but they often find very good solutions with significantly less effort than an exact algorithm would require. A common saying is that a heuristic is “fast but imperfect.” They are especially useful when an exhaustive search is computationally infeasible. For example, imagine trying to schedule hundreds of university classes into timeslots without conflicts – a brute-force search would be hopelessly slow, so one might use a heuristic algorithm that incrementally schedules classes using sensible heuristics (like “assign large classes to large rooms first”) to come up with a valid schedule quickly, even if it’s not the theoretically optimal schedule.

Examples: There are many well-known heuristic algorithms across different fields:

  • In AI and search, algorithms like A* (A-star) use heuristics to guide search through a state-space (e.g., finding the shortest path in a graph faster by prioritizing routes that look more promising). The heuristic function in A* (such as an estimate of distance to the goal) is not always 100% accurate, but it speeds up the search tremendously by focusing exploration.

  • In optimization, techniques like hill climbing, simulated annealing, and tabu search are heuristic algorithms that navigate the solution space by looking for improvements, occasionally making random or non-optimal moves to escape local optima. They often find near-optimal solutions in practice for complex optimization tasks (like the Traveling Salesman Problem) within reasonable time.

  • In machine learning, one could even consider certain training algorithms as heuristic: for instance, gradient descent is a heuristic method for minimizing a function (it doesn’t guarantee finding the global minimum, but it usually finds a good local minimum for training neural networks).

Heuristic algorithms are evaluated by the quality of solutions they produce and their runtime, rather than a guarantee of solving the problem optimally. They are an important part of the algorithm designer’s toolkit when dealing with real-world problems where exact methods are impractical.
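
To make the idea concrete, here is a tiny hill-climbing sketch (a generic local-search heuristic; the objective function and parameters below are made up purely for illustration). It keeps any random move that improves the current solution and stops after a fixed budget:

import math
import random

def hill_climb(objective, start, step=0.1, iterations=10000):
    # Greedy local search: accept a random nearby candidate only if it
    # improves the objective. Fast and simple, but it can get stuck in a
    # local optimum -- there is no guarantee of global optimality.
    current = start
    for _ in range(iterations):
        candidate = current + random.uniform(-step, step)
        if objective(candidate) > objective(current):
            current = candidate
    return current

objective = lambda x: math.sin(x) - 0.1 * x * x   # toy function with several local maxima
best_x = hill_climb(objective, start=5.0)
print(best_x, objective(best_x))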

Genetic and Evolutionary Algorithms

One fascinating modern approach to algorithms is inspired by natural evolution. Genetic algorithms (GA) and related evolutionary algorithms (EA) fall under this category. A genetic algorithm is a metaheuristic inspired by Charles Darwin’s principle of natural selection. It belongs to the larger class of evolutionary algorithms, which simulate the process of natural evolution to “evolve” solutions to a problem.

In a genetic algorithm, a population of candidate solutions (often encoded as strings, analogous to chromosomes) is maintained. The algorithm then applies genetic operators such as selection, crossover (recombination), and mutation over many iterations (generations):

  • In each generation, candidates are evaluated with a fitness function (which measures how good the solution is for the problem at hand).

  • The fitter solutions are more likely to be selected to form new solutions. Pairs of solutions “reproduce” by exchanging parts of their structure (crossover), producing offspring solutions that combine aspects of both parents.

  • Occasionally, random mutations are applied to introduce new variation.

  • The new generation of solutions replaces some of the old ones, and the process repeats.

Over successive generations, the population “evolves” and tends to improve, ideally converging toward high-quality solutions. Evolutionary algorithms are probabilistic in nature (they use random variation and selection), but they are guided by the pressure of selection to continually improve candidates.
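
The loop above can be captured in a short, self-contained sketch. This is a toy genetic algorithm for the standard “OneMax” problem (evolving a bit string toward all ones); the population size, mutation rate, and tournament selection are arbitrary illustrative choices, not a prescription:

import random

def genetic_onemax(length=20, pop_size=30, generations=100, mutation_rate=0.02):
    # Fitness = number of 1 bits; the goal is a string of all ones.
    fitness = lambda bits: sum(bits)
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]

    def select():
        # Tournament selection: the fitter of two random candidates wins.
        a, b = random.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        next_generation = []
        while len(next_generation) < pop_size:
            parent1, parent2 = select(), select()
            point = random.randint(1, length - 1)        # one-point crossover
            child = parent1[:point] + parent2[point:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]                   # occasional random mutation
            next_generation.append(child)
        population = next_generation                     # new generation replaces the old

    return max(population, key=fitness)

best = genetic_onemax()
print(best, "fitness:", sum(best))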

Examples and applications: Genetic algorithms and other evolutionary methods (like evolution strategies, genetic programming, differential evolution, etc.) are particularly useful for optimization problems where the search space is huge or poorly understood. They have been applied in areas such as scheduling, engineering design optimization, evolving neural network weights, and even creative tasks like generating art or music. For instance, a GA can be used to evolve a good set of parameters for an AI agent, or to design an antenna shape with optimal signal properties (a famous NASA antenna was evolved by an evolutionary algorithm rather than hand-designed).

Evolutionary algorithms are considered modern because they deviate from the predictable, step-by-step nature of classical algorithms. They are stochastic and work on a population of solutions simultaneously, which is very different from the one-path approach of traditional algorithms. While they do not guarantee the optimal solution, they often find very high-quality solutions in complex spaces and are robust to the kinds of noisy, high-dimensional problems that are difficult for classical deterministic methods.

Quantum Algorithms

Another frontier of algorithms comes with the advent of quantum computing. Quantum algorithms are procedures that run on a quantum computer and exploit quantum-mechanical phenomena (superposition, entanglement, interference) to process information in ways that classical algorithms cannot. A quantum algorithm is typically described in terms of operations on quantum bits (qubits) and quantum logic gates, and it is executed in the quantum circuit model (or in other models, such as the quantum Turing machine). Importantly, some quantum algorithms offer dramatic speedups over the best known classical equivalents for certain problems.

The theoretical importance of quantum algorithms lies in how they redefine what is efficiently computable. For some problems, quantum algorithms are provably faster than any known classical algorithm:

  • Shor’s algorithm (discovered by Peter Shor in 1994) can factor large integers in polynomial time on a quantum computer, a task for which no polynomial-time classical algorithm is known (the best known classical methods take super-polynomial time). This has huge implications for cryptography, since the security of RSA encryption relies on the hardness of factoring. Shor’s algorithm is a quintessential example of a quantum algorithm that outperforms classical algorithms by leveraging quantum parallelism and interference.

  • Grover’s algorithm (Lov Grover, 1996) is another famous quantum algorithm that provides a quadratic speedup for unstructured search problems. If you have an unsorted database of N items and you want to find a particular one, any classical algorithm must examine on the order of N items in the worst case (linear search). Grover’s algorithm finds the item in O(√N) steps, which is a significant improvement for large N. While not an exponential speedup, this quadratic gain is widely applicable (it effectively accelerates any brute-force search).
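
Real quantum hardware is needed to obtain the speedup, but the amplitude bookkeeping behind Grover’s algorithm can be simulated classically on a tiny instance. The sketch below (using NumPy, with N = 4 items; purely illustrative) shows how the oracle and diffusion steps boost the marked item’s measurement probability to essentially 1 after a single iteration:

import numpy as np

def grover_probabilities(n_items, marked, iterations):
    # Classical simulation of Grover's amplitude dynamics -- no speedup here;
    # a real quantum computer manipulates these amplitudes physically.
    amps = np.full(n_items, 1 / np.sqrt(n_items))   # uniform superposition over all items
    for _ in range(iterations):
        amps[marked] *= -1                          # oracle: flip the sign of the marked amplitude
        amps = 2 * amps.mean() - amps               # diffusion: reflect every amplitude about the mean
    return amps ** 2                                # probabilities observed on measurement

# For N = 4, a single iteration (about (pi/4) * sqrt(N)) suffices.
print(grover_probabilities(n_items=4, marked=2, iterations=1))  # ~[0. 0. 1. 0.]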

Quantum algorithms often require a different mode of thinking – they involve operations that manipulate probability amplitudes. Designing quantum algorithms is challenging because one must ensure that measuring the quantum state at the end yields the desired answer with high probability, and interference among amplitudes is orchestrated to cancel out incorrect outcomes. Many quantum algorithms use techniques like amplitude amplification, quantum Fourier transform, or quantum phase estimation as subroutines.

It’s important to note that as of today, practical quantum computers are still in early development, but the theoretical foundation of quantum algorithms is well-established. Researchers continue to discover new quantum algorithms and complexity separations. In an AI education context, quantum algorithms represent the cutting edge of how we define “step-by-step procedures” in a computing paradigm that is fundamentally different from classical sequential execution.


Conclusion

In summary, an algorithm is the theoretical bedrock of computation – a finite, well-defined sequence of steps to solve a problem. We explored its formal definitions (grounded in the work of Turing, Church, and others), essential properties (finiteness, definiteness, etc.), and the historical evolution from ancient mathematics to modern computer science. We also classified algorithms into various types: from classical deterministic algorithms (which produce a predictable outcome on each run) and randomized algorithms (which inject randomness for efficiency or simplicity), to more modern varieties like heuristics (trading optimality for practicality), evolutionary algorithms (mimicking natural selection), and quantum algorithms (harnessing quantum physics for speedups). Each class of algorithm comes with its own theoretical foundations and use-cases, but they all adhere to the core idea of an algorithm – a methodical procedure for solving a problem.

Understanding what algorithms are and how they can be categorized is crucial for anyone in computer science or AI. It provides insight into why certain approaches are chosen to tackle problems and how far we can push the boundaries of computation. As we progress in this AI education series, the concept of algorithms will repeatedly surface – whether in designing machine learning procedures, in analyzing the efficiency of AI systems, or in reasoning about what computers (classical or quantum) can and cannot do. Having a strong grasp of “What is an Algorithm?” and its theoretical underpinnings will enable you to appreciate and design intelligent systems more effectively.

