List of algorithms
General combinatorial algorithms
- Brent's algorithm: finds cycles in iterations using only two iterators
- Floyd's cycle-finding algorithm: finds cycles in iterations (a minimal sketch follows this list)
- Gale–Shapley algorithm: solves the stable marriage problem
- Pseudorandom number generators (uniformly distributed):
- Blum Blum Shub
- Lagged Fibonacci generator
- Linear congruential generator
- Mersenne twister
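To make the cycle-finding entries concrete, here is a minimal Python sketch of Floyd's tortoise-and-hare method for iterated function values; the example function and starting value are illustrative assumptions, not part of the list.

```python
# A minimal sketch of Floyd's cycle detection for the sequence x0, f(x0), f(f(x0)), ...
def floyd_cycle_detection(f, x0):
    # Phase 1: advance the hare twice as fast as the tortoise until they meet.
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))

    # Phase 2: find mu, the index of the first element of the cycle.
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1

    # Phase 3: find lam, the length of the cycle.
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return lam, mu

# Assumed example: x -> (x*x + 1) mod 255 starting from 3 eventually enters a cycle.
print(floyd_cycle_detection(lambda x: (x * x + 1) % 255, 3))
```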
Graph algorithms
- Coloring algorithm: Graph coloring algorithm.
- Hopcroft–Karp algorithm: finds a maximum cardinality matching in a bipartite graph
- Hungarian algorithm: algorithm for finding a perfect matching
- Prüfer coding: conversion between a labeled tree and its Prüfer sequence
- Tarjan's off-line least common ancestors algorithm: compute lowest common ancestors for pairs of nodes in a tree
- Topological sort: finds linear order of nodes (e.g. jobs) based on their dependencies (a minimal sketch follows this list)
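As an illustration of topological sorting, here is a minimal sketch of Kahn's algorithm, one standard way to compute such an order; the job-dependency graph in the example is an assumption for illustration.

```python
from collections import deque

def topological_sort(graph):
    # graph maps each node to the nodes that depend on it (its successors).
    indegree = {node: 0 for node in graph}
    for node in graph:
        for succ in graph[node]:
            indegree[succ] = indegree.get(succ, 0) + 1
    # Start with every node that has no unmet dependencies.
    queue = deque(node for node, deg in indegree.items() if deg == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for succ in graph.get(node, ()):
            indegree[succ] -= 1
            if indegree[succ] == 0:
                queue.append(succ)
    if len(order) != len(indegree):
        raise ValueError("graph has a cycle; no topological order exists")
    return order

# Assumed example: "make" must precede "test" and "package", both precede "release".
print(topological_sort({"make": ["test", "package"], "test": ["release"],
                        "package": ["release"], "release": []}))
```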
Graph drawing
- Force-based algorithms (also known as force-directed algorithms or spring-based algorithms)
- Spectral layout: uses the eigenvectors of a matrix, such as the graph Laplacian, as coordinates of the graph's vertices
Network theory
- Network analysis
- Link analysis
- Girvan–Newman algorithm: detect communities in complex systems
- Web link analysis
- Hyperlink-Induced Topic Search (HITS) (also known as Hubs and authorities)
- PageRank: assigns a numerical weighting to each page in a hyperlinked set of documents to measure its relative importance (a power-iteration sketch follows this list)
- TrustRank
- PRW, PFW
- Flow networks
- Dinic's algorithm: a strongly polynomial algorithm for computing the maximum flow in a flow network
- Edmonds–Karp algorithm: implementation of Ford–Fulkerson
- Ford–Fulkerson algorithm: computes the maximum flow in a graph
- Karger's algorithm: a Monte Carlo method to compute the minimum cut of a connected graph
- Push-relabel algorithm: computes a maximum flow in a graph
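The power-iteration computation behind PageRank can be sketched in a few lines; the damping factor of 0.85, the dangling-page handling, and the three-page link graph below are illustrative assumptions, not values taken from the list.

```python
def pagerank(links, damping=0.85, iterations=50):
    # links maps each page to the pages it links to.
    nodes = list(links)
    rank = {node: 1.0 / len(nodes) for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / len(nodes) for node in nodes}
        for node, outgoing in links.items():
            if not outgoing:                 # dangling page: spread its rank evenly
                for other in nodes:
                    new_rank[other] += damping * rank[node] / len(nodes)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[node] / len(outgoing)
        rank = new_rank
    return rank

# Assumed example graph of three mutually linking pages.
print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```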
Routing
- Edmonds's algorithm (also known as Chu–Liu/Edmonds's algorithm): find maximum or minimum branchings
- Euclidean minimum spanning tree: algorithms for computing the minimum spanning tree of a set of points in the plane
- Longest path problem: find a simple path of maximum length in a given graph
- Minimum spanning tree
- Boruvka's algorithm
- Kruskal's algorithm
- Prim's algorithm
- Reverse-delete algorithm
- Nonblocking Minimal Spanning Switch, say, for a telephone exchange
- Shortest path problem
- Bellman–Ford algorithm: computes shortest paths in a weighted graph (where some of the edge weights may be negative)
- Dijkstra's algorithm: computes shortest paths in a graph with non-negative edge weights (a minimal sketch follows this list)
- Floyd–Warshall algorithm: solves the all pairs shortest path problem in a weighted, directed graph
- Johnson's algorithm: all-pairs shortest path algorithm for sparse weighted directed graphs
- Perturbation methods: an algorithm that computes locally shortest paths in a graph
- Traveling salesman problem
- Christofides algorithm
- Nearest neighbour algorithm
- Warnsdorff's algorithm: a heuristic method for solving the Knight's Tour problem
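For the shortest-path entries, here is a minimal sketch of Dijkstra's algorithm using a binary heap; the weighted adjacency list in the example is an assumption for illustration.

```python
import heapq

def dijkstra(graph, source):
    # graph maps each node to a list of (neighbor, edge_weight) pairs.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry, already improved
        for neighbor, weight in graph.get(node, ()):
            candidate = d + weight
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor] = candidate
                heapq.heappush(heap, (candidate, neighbor))
    return dist   # shortest distance from source to every reachable node

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(graph, "a"))   # {'a': 0, 'b': 2, 'c': 3}
```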
Search
- A*: special case of best-first search that uses heuristics to improve speed
- B*: a best-first graph search algorithm that finds the least-cost path from a given initial node to any goal node (out of one or more possible goals)
- Backtracking: abandon partial solutions when they are found not to satisfy a complete solution
- Beam search: a heuristic search algorithm that is an optimization of best-first search, reducing its memory requirement
- Beam stack search: integrates backtracking with beam search
- Best-first search: traverses a graph in the order of likely importance using a priority queue
- Bidirectional search: find the shortest path from an initial vertex to a goal vertex in a directed graph
- Bloom filter: a constant-time and constant-memory check of whether a given element exists in a set; may return a false positive, but never a false negative
- Breadth-first search: traverses a graph level by level (a minimal sketch follows this list)
- D*: an incremental heuristic search algorithm
- Depth-first search: traverses a graph branch by branch
- Dijkstra's algorithm: a special case of A* for which no heuristic function is used
- General Problem Solver: a seminal theorem-proving algorithm intended to work as a universal problem solver machine
- Iterative deepening depth-first search (IDDFS): a state space search strategy
- Lexicographic breadth-first search (also known as Lex-BFS): a linear time algorithm for ordering the vertices of a graph
- Uniform-cost search: a tree search that finds the lowest-cost route where costs vary
- SSS*: state space search traversing a game tree in a best-first fashion similar to that of the A* search algorithm
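As a small example of the traversal entries, the sketch below runs breadth-first search and records each node's level (distance in edges) from the start node; the example graph is assumed.

```python
from collections import deque

def breadth_first_levels(graph, start):
    levels = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in levels:          # visit each node at most once
                levels[neighbor] = levels[node] + 1
                queue.append(neighbor)
    return levels

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(breadth_first_levels(graph, "a"))   # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```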
Subgraphs
- Bron–Kerbosch algorithm: a technique for finding maximal cliques in an undirected graph
- Strongly connected components
- Cheriyan–Mehlhorn/Gabow algorithm
- Kosaraju's algorithm
- Tarjan's algorithm
Approximate matching
- Bitap algorithm: fuzzy algorithm that determines if strings are approximately equal
- Phonetic algorithms
- Daitch–Mokotoff Soundex: a Soundex refinement which allows matching of Slavic and Germanic surnames
- Double Metaphone: an improvement on Metaphone
- Match Rating Approach: a phonetic algorithm developed by Western Airlines
- Metaphone: an algorithm for indexing words by their sound, when pronounced in English
- NYSIIS: phonetic algorithm, improves on Soundex
- Soundex: a phonetic algorithm for indexing names by sound, as pronounced in English
- String metrics: compute a similarity or dissimilarity (distance) score between a pair of text strings
- Damerau–Levenshtein distance: compute a distance measure between two strings, improves on Levenshtein distance
- Dice's coefficient (also known as the Dice coefficient): a similarity measure related to the Jaccard index
- Hamming distance: the number of positions at which corresponding symbols of two equal-length strings differ
- Jaro–Winkler distance: a measure of similarity between two strings
- Levenshtein edit distance: compute a metric for the amount of difference between two sequences (a minimal sketch follows this list)
- Trigram search: search for text when the exact syntax or spelling of the target object is not precisely known
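The Levenshtein edit distance mentioned above has a compact dynamic-programming formulation; the sketch below keeps only one row of the table and is an illustrative implementation, not a reference one.

```python
def levenshtein(a, b):
    previous = list(range(len(b) + 1))          # distance from "" to each prefix of b
    for i, ca in enumerate(a, start=1):
        current = [i]                           # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,                # deletion
                current[j - 1] + 1,             # insertion
                previous[j - 1] + (ca != cb),   # substitution (free if characters match)
            ))
        previous = current
    return previous[-1]

print(levenshtein("kitten", "sitting"))   # 3
```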
Item search
- Linear search: finds an item in an unsorted list
- Selection algorithm: finds the kth largest item in a list
- Sorted lists
- Binary search algorithm: locates an item in a sorted list (a minimal sketch follows this list)
- Fibonacci search technique: search a sorted array using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers
- Jump search (also called block search)
- Predictive search: binary-like search which factors in the magnitude of the search term versus the high and low values in the search; sometimes called dictionary search or interpolated search
- Uniform binary search: an optimization of the classic binary search algorithm
- Ternary search: a technique for finding the minimum or maximum of a function that is either strictly increasing and then strictly decreasing or vice versa
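A minimal sketch of binary search over a sorted list, for the entry above; the return convention (index of the target, or -1 when absent) is an assumption of this example.

```python
def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1        # target can only be in the upper half
        else:
            high = mid - 1       # target can only be in the lower half
    return -1

print(binary_search([1, 3, 4, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 4, 7, 9, 11], 8))   # -1
```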
Merging
- Simple Merge algorithm
- k-way Merge algorithm
- Union (merge, with elements on the output not repeated)
Permutations
- Fisher–Yates shuffle (also known as the Knuth shuffle): randomly shuffle a finite set (a minimal sketch follows this list)
- Schensted algorithm: constructs a pair of Young tableaux from a permutation
- Steinhaus–Johnson–Trotter algorithm (also known as the Johnson–Trotter algorithm): generate permutations by transposing elements
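A minimal sketch of the Fisher–Yates (Knuth) shuffle referenced above; it relies on Python's random module and shuffles the list in place so that every permutation is equally likely.

```python
import random

def fisher_yates_shuffle(items):
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)                 # pick a position from the unshuffled prefix
        items[i], items[j] = items[j], items[i]  # swap it into place
    return items

print(fisher_yates_shuffle(list(range(10))))
```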
Sequence alignment
- Dynamic time warping: measure similarity between two sequences which may vary in time or speed
- Hirschberg's algorithm: finds the least-cost sequence alignment between two sequences, as measured by their Levenshtein distance
- Needleman–Wunsch algorithm: find global alignment between two sequences
- Smith–Waterman algorithm: find local sequence alignment
Sorting
- Exchange sorts
- Bubble sort: for each pair of indices, swap the items if out of order
- Cocktail sort
- Comb sort
- Gnome sort
- Odd-even sort
- Quicksort: divide list into two, with all items on the first list coming before all items on the second list; then sort the two lists. Often the method of choice (a minimal sketch follows this section)
- Humorous or ineffective
- Bogosort
- Stooge sort
- Hybrid
- Flashsort
- Introsort: begin with quicksort and switch to heapsort when the recursion depth exceeds a certain level
- Timsort: adaptive algorithm derived from merge sort and insertion sort. Used in Python >=2.3 and Java SE 7.
- Insertion sorts
- Insertion sort: determine where the current item belongs in the list of sorted ones, and insert it there
- Library sort
- Patience sorting
- Shell sort: an attempt to improve insertion sort
- Tree sort (binary tree sort): build binary tree, then traverse it to create sorted list
- Cycle sort: in-place with theoretically optimal number of writes
- Merge sorts
- Merge sort: sort the first and second half of the list separately, then merge the sorted lists
- Strand sort
- Non-comparison sorts
- Bead sort
- Bucket sort
- Burstsort: build a compact, cache-efficient burst trie and then traverse it to create sorted output
- Counting sort
- Pigeonhole sort
- Postman sort: variant of Bucket sort which takes advantage of hierarchical structure
- Radix sort: sorts strings letter by letter
- Selection sorts
- Heapsort: convert the list into a heap, keep removing the largest element from the heap and adding it to the end of the list
- Selection sort: pick the smallest of the remaining elements, add it to the end of the sorted list
- Smoothsort
- Other
- Bitonic sorter
- Pancake sorting
- Topological sort
- Unknown class
- Samplesort
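To illustrate the quicksort entry in the exchange-sorts group, here is a minimal (not in-place) sketch that partitions around a middle pivot and recursively sorts both sides; production implementations usually partition in place.

```python
def quicksort(items):
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    # Sort each side, then concatenate: everything before the pivot, the pivot
    # group itself, then everything after it.
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]
```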
Subsequences
- Kadane's algorithm: finds the contiguous sub-array with the largest sum in a sequence of numbers (a minimal sketch follows this list)
- Longest common subsequence problem: find the longest subsequence common to all sequences in a set of sequences
- Longest increasing subsequence problem: find the longest increasing subsequence of a given sequence
- Shortest common supersequence problem: find the shortest supersequence that contains two or more sequences as subsequences
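A minimal sketch of Kadane's algorithm from the list above: a single pass that, at each element, either extends the current run or starts a new one.

```python
def max_subarray_sum(numbers):
    best = current = numbers[0]
    for x in numbers[1:]:
        current = max(x, current + x)   # extend the current run or start fresh at x
        best = max(best, current)
    return best

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))   # 6, from the sub-array [4, -1, 2, 1]
```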
Substrings
- Longest common substring problem: find the longest string (or strings) that is a substring (or are substrings) of two or more strings
- Substring search
- Aho–Corasick string matching algorithm: trie-based algorithm for finding all substring matches to any of a finite set of strings
- Boyer–Moore string search algorithm: amortized linear (sublinear in most cases) algorithm for substring search
- Boyer–Moore–Horspool algorithm: simplification of Boyer–Moore
- Knuth–Morris–Pratt algorithm: substring search which bypasses reexamination of matched characters (a minimal sketch follows this list)
- Rabin–Karp string search algorithm: searches multiple patterns efficiently
- Zhu–Takaoka string matching algorithm: a variant of Boyer–Moore
- Ukkonen's algorithm: a linear-time, online algorithm for constructing suffix trees
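As a sketch of the Knuth–Morris–Pratt entry, the code below builds the failure (prefix) table and then scans the text without re-examining characters that have already matched; the example strings are assumed.

```python
def kmp_search(text, pattern):
    # failure[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it.
    failure = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = failure[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k

    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = failure[k - 1]          # fall back without rereading the text
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):           # full match ending at position i
            matches.append(i - len(pattern) + 1)
            k = failure[k - 1]
    return matches

print(kmp_search("abababca", "abab"))   # [0, 2]
```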
Computational mathematics
Abstract algebra
- Chien search: a recursive algorithm for determining roots of polynomials defined over a finite field
- Schreier–Sims algorithm: computing a base and strong generating set (BSGS) of a permutation group
- Todd–Coxeter algorithm: procedure for generating cosets
Computer algebra
- Buchberger's algorithm: finds a Gröbner basis
- Cantor–Zassenhaus algorithm: factor polynomials over finite fields
- Faugère F4 algorithm: finds a Gröbner basis (also mentions the F5 algorithm)
- Gosper's algorithm: find sums of hypergeometric terms that are themselves hypergeometric terms
- Knuth–Bendix completion algorithm: for rewriting rule systems
- Multivariate division algorithm: for polynomials in several indeterminates
- Pollard's kangaroo algorithm (also known as Pollard's lambda algorithm): an algorithm for solving the discrete logarithm problem
- Polynomial long division: an algorithm for dividing a polynomial by another polynomial of the same or lower degree
- Risch algorithm: an algorithm for the calculus operation of indefinite integration (i.e. finding antiderivatives)
Geometry
- Closest pair problem: find the pair of points (from a set of points) with the smallest distance between them
- Collision detection algorithms: check for the collision or intersection of two given solids
- Cone algorithm: identify surface points
- Convex hull algorithms: determining the convex hull of a set of points
- Chan's algorithm
- Gift wrapping algorithm or Jarvis march
- Graham scan
- Kirkpatrick–Seidel algorithm
- Euclidean Distance Transform: computes the distance between every point in a grid and a discrete collection of points
- Geometric hashing: a method for efficiently finding two-dimensional objects represented by discrete points that have undergone an affine transformation
- Gilbert–Johnson–Keerthi distance algorithm: determining the smallest distance between two convex shapes
- Jump-and-Walk algorithm: an algorithm for point location in triangulations
- Laplacian smoothing: an algorithm to smooth a polygonal mesh
- Line segment intersection: finding whether lines intersect, usually with a sweep line algorithm
- Bentley–Ottmann algorithm
- Shamos–Hoey algorithm
- Minimum bounding box algorithms: find the oriented minimum bounding box enclosing a set of points
- Nearest neighbor search: find the nearest point or points to a query point
- Point in polygon algorithms: tests whether a given point lies within a given polygon
- Rotating calipers: determine all antipodal pairs of points and vertices on a convex polygon or convex hull
- Shoelace algorithm: determine the area of a polygon whose vertices are described by ordered pairs in the plane (a minimal sketch follows at the end of this section)
- Triangulation
- Delaunay triangulation
- Ruppert's algorithm (also known as Delaunay refinement): create quality Delaunay triangulations
- Chew's second algorithm: create quality constrained Delaunay triangulations
- Marching triangles: reconstruct two-dimensional surface geometry from an unstructured point cloud
- Polygon triangulation algorithms: decompose a polygon into a set of triangles
- Voronoi diagrams, geometric dual of Delaunay triangulation
- Bowyer–Watson algorithm: create Voronoi diagram in any number of dimensions
- Fortune's algorithm: create Voronoi diagram
- Delaunay triangulation
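The shoelace algorithm mentioned above is compact enough to show in full. The following Python sketch (the function name and input format are illustrative choices, not from any particular library) sums the cross products of consecutive vertices and halves the absolute value:

```python
def shoelace_area(vertices):
    """Area of a simple polygon given as an ordered list of (x, y) vertices."""
    n = len(vertices)
    twice_area = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]   # wrap around to close the polygon
        twice_area += x0 * y1 - x1 * y0  # cross product of consecutive vertices
    return abs(twice_area) / 2.0

# Example: a unit square has area 1.
print(shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
```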
Number theoretic algorithms
- Binary GCD algorithm: efficient way of calculating the GCD
- Booth's multiplication algorithm
- Chakravala method: a cyclic algorithm to solve indeterminate quadratic equations, including Pell's equation
- Discrete logarithm:
  - Baby-step giant-step
  - Index calculus algorithm
  - Pollard's rho algorithm for logarithms
  - Pohlig–Hellman algorithm
- Euclidean algorithm: computes the greatest common divisor
- Extended Euclidean algorithm: also solves the equation ax + by = c (a sketch appears after this list)
- Integer factorization: breaking an integer into its prime factors
  - Congruence of squares
  - Dixon's algorithm
  - Fermat's factorization method
  - General number field sieve
  - Lenstra elliptic curve factorization
  - Pollard's p − 1 algorithm
  - Pollard's rho algorithm: prime factorization algorithm
  - Quadratic sieve
  - Shor's algorithm
  - Special number field sieve
  - Trial division
- Multiplication algorithms: fast multiplication of two numbers
  - Karatsuba algorithm
  - Schönhage–Strassen algorithm
  - Toom–Cook multiplication
- Odlyzko–Schönhage algorithm: calculates nontrivial zeroes of the Riemann zeta function
- Primality tests: determining whether a given number is prime
  - AKS primality test
  - Fermat primality test
  - Lucas primality test
  - Miller–Rabin primality test
  - Sieve of Atkin
  - Sieve of Eratosthenes
  - Sieve of Sundaram
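As a concrete illustration of the Euclidean and extended Euclidean entries above, here is a short Python sketch (the function name is an illustrative choice) that returns gcd(a, b) together with Bézout coefficients; a solution of ax + by = c then follows by scaling whenever gcd(a, b) divides c:

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # gcd(a, b) == gcd(b, a % b); back-substitute the Bézout coefficients.
    return g, y, x - (a // b) * y

# Example: gcd(240, 46) == 2, and 240*(-9) + 46*47 == 2.
print(extended_gcd(240, 46))  # (2, -9, 47)
```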
Differential equation solving
- Multigrid methods (MG methods): a group of algorithms for solving differential equations using a hierarchy of discretizations
- Partial differential equation:
  - Finite difference method
- Runge–Kutta methods
- Euler integration (a sketch appears after this list)
- Verlet integration (vɛʁˈlɛ): integrate Newton's equations of motion
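Euler integration amounts to taking first-order explicit steps along the slope field. A minimal Python sketch (names are illustrative, not from any particular library):

```python
def euler_step(f, t, y, h):
    """One forward Euler step for y' = f(t, y): first-order, explicit."""
    return y + h * f(t, y)

def euler_integrate(f, t0, y0, h, steps):
    """Approximate the solution of y' = f(t, y), y(t0) = y0, on a fixed grid."""
    t, y = t0, y0
    trajectory = [(t, y)]
    for _ in range(steps):
        y = euler_step(f, t, y, h)
        t += h
        trajectory.append((t, y))
    return trajectory

# Example: y' = y, y(0) = 1; the exact solution at t = 1 is e ≈ 2.718.
print(euler_integrate(lambda t, y: y, 0.0, 1.0, 0.1, 10)[-1])  # (~1.0, ~2.594)
```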
Elementary and special functions
- Computation of π:
  - Borwein's algorithm: an algorithm to calculate the value of 1/π
  - Gauss–Legendre algorithm: computes the digits of π
  - Bailey–Borwein–Plouffe formula (BBP formula): a spigot algorithm for the computation of the nth binary digit of π
- Hyperbolic and trigonometric functions:
  - BKM algorithm: compute elementary functions using a table of logarithms
  - CORDIC: compute hyperbolic and trigonometric functions using a table of arctangents
- Exponentiation:
  - Addition-chain exponentiation: exponentiation by positive integer powers that requires a minimal number of multiplications
  - Exponentiating by squaring: an algorithm used for the fast computation of large integer powers of a number (a sketch appears after this list)
- Montgomery reduction: an algorithm that allows modular arithmetic to be performed efficiently when the modulus is large
- Multiplication algorithms: fast multiplication of two numbers
  - Booth's multiplication algorithm: a multiplication algorithm that multiplies two signed binary numbers in two's complement notation
  - Fürer's algorithm: an integer multiplication algorithm for very large numbers with very low asymptotic complexity
  - Karatsuba algorithm: an efficient procedure for multiplying large numbers
  - Schönhage–Strassen algorithm: an asymptotically fast multiplication algorithm for large integers
  - Toom–Cook multiplication (Toom3): a multiplication algorithm for large integers
- Rounding functions: the classic ways to round numbers
- Spigot algorithm: a way to compute the value of a mathematical constant without knowing preceding digits
- Square and nth root of a number:
  - Alpha max plus beta min algorithm: an approximation of the square root of the sum of two squares
  - Methods of computing square roots
  - nth root algorithm
  - Shifting nth-root algorithm: digit-by-digit root extraction
- Summation:
  - Binary splitting: a divide and conquer technique which speeds up the numerical evaluation of many types of series with rational terms
  - Kahan summation algorithm: a more accurate method of summing floating-point numbers
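Exponentiating by squaring, noted above, reduces the multiplication count from linear to logarithmic in the exponent by processing the exponent's bits. A minimal Python sketch (the function name is an illustrative choice):

```python
def power_by_squaring(base, exponent):
    """Compute base**exponent for a non-negative integer exponent using O(log exponent) multiplications."""
    result = 1
    while exponent > 0:
        if exponent & 1:      # lowest bit set: fold the current base into the result
            result *= base
        base *= base          # square the base for the next bit
        exponent >>= 1
    return result

# Example: 3**13 = 1594323, computed with far fewer than 12 multiplications.
print(power_by_squaring(3, 13))  # 1594323
```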
Geometric
- Filtered back-projection: efficiently compute the inverse 2-dimensional Radon transform
- Level set method (LSM): a numerical technique for tracking interfaces and shapes
Interpolation and extrapolation
- Birkhoff interpolation: an extension of polynomial interpolation
- Cubic interpolation
- Hermite interpolation
- Linear interpolation: a method of curve fitting using linear polynomials
- Monotone cubic interpolation: a variant of cubic interpolation that preserves monotonicity of the data set being interpolated
- Multivariate interpolation
  - Bicubic interpolation: a generalization of cubic interpolation to two dimensions
  - Bilinear interpolation: an extension of linear interpolation for interpolating functions of two variables on a regular grid
  - Lanczos resampling ("Lanzosh"): a multivariate interpolation method used to compute new values for any digitally sampled data
  - Nearest-neighbor interpolation
  - Tricubic interpolation: a generalization of cubic interpolation to three dimensions
- Pareto interpolation: a method of estimating the median and other properties of a population that follows a Pareto distribution
- Polynomial interpolation
  - Neville's algorithm (a sketch appears after this list)
- Spline interpolation: reduces the error associated with Runge's phenomenon
  - De Boor algorithm: B-splines
  - De Casteljau's algorithm: Bézier splines
- Trigonometric interpolation
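Neville's algorithm, listed above under polynomial interpolation, evaluates the interpolating polynomial at a single point by repeatedly combining interpolants of increasing degree. A short Python sketch (names are illustrative):

```python
def neville(xs, ys, x):
    """Evaluate at x the unique polynomial interpolating the points (xs[i], ys[i])."""
    p = list(ys)              # p[i] holds the current interpolant value starting at node i
    n = len(xs)
    for k in range(1, n):     # grow the interpolation order one step at a time
        for i in range(n - k):
            p[i] = ((x - xs[i + k]) * p[i] + (xs[i] - x) * p[i + 1]) / (xs[i] - xs[i + k])
    return p[0]

# Example: three points on y = x**2; the interpolant reproduces it exactly.
print(neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```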
Linear algebra
- Eigenvalue algorithms
  - Arnoldi iteration
  - Inverse iteration
  - Jacobi eigenvalue algorithm
  - Lanczos iteration
  - Power iteration (a sketch appears after this list)
  - QR algorithm
  - Rayleigh quotient iteration
- Gram–Schmidt process: orthogonalizes a set of vectors
- Matrix multiplication
  - Cannon's algorithm: a distributed algorithm for matrix multiplication, especially suitable for computers laid out in an N × N mesh
  - Coppersmith–Winograd algorithm: square matrix multiplication
  - Freivalds' algorithm: a randomized algorithm used to verify matrix multiplication
  - Strassen algorithm: faster matrix multiplication
- Solving systems of linear equations
  - Biconjugate gradient method: solves systems of linear equations
  - Conjugate gradient: an algorithm for the numerical solution of particular systems of linear equations
  - Gaussian elimination
  - Gauss–Jordan elimination: solves systems of linear equations
  - Gauss–Seidel method: solves systems of linear equations iteratively
  - Levinson recursion: solves equations involving a Toeplitz matrix
  - Stone's method (strongly implicit procedure, SIP): an algorithm for solving sparse linear systems of equations
  - Successive over-relaxation (SOR): method used to speed up convergence of the Gauss–Seidel method
  - Tridiagonal matrix algorithm (Thomas algorithm): solves systems of tridiagonal equations
- Sparse matrix algorithms
  - Cuthill–McKee algorithm: reduce the bandwidth of sparse symmetric matrices
  - Minimum degree algorithm: permute the rows and columns of a symmetric sparse matrix before applying the Cholesky decomposition
  - Symbolic Cholesky decomposition: efficient way of storing a sparse matrix
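Power iteration, the simplest of the eigenvalue algorithms above, repeatedly multiplies a vector by the matrix and renormalizes until it aligns with the dominant eigenvector. A minimal pure-Python sketch (names and the dense list-of-rows representation are illustrative assumptions):

```python
import random

def power_iteration(matrix, num_steps=100):
    """Estimate the dominant eigenvalue and eigenvector of a square matrix (list of rows)."""
    n = len(matrix)
    v = [random.random() for _ in range(n)]        # random starting vector
    for _ in range(num_steps):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A v
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]                   # renormalize to avoid overflow
    # Rayleigh quotient as the eigenvalue estimate.
    av = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    eigenvalue = sum(av[i] * v[i] for i in range(n)) / sum(x * x for x in v)
    return eigenvalue, v

# Example: the dominant eigenvalue of [[2, 0], [0, 1]] is 2.
print(power_iteration([[2.0, 0.0], [0.0, 1.0]])[0])  # ~2.0
```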
Monte Carlo
- Gibbs sampling: generate a sequence of samples from the joint probability distribution of two or more random variables
- Metropolis–Hastings algorithm: used to generate a sequence of samples from the probability distribution of one or more variables (a sketch appears after this list)
- Wang and Landau algorithm: an extension of Metropolis–Hastings sampling
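A random-walk Metropolis sampler is a special case of Metropolis–Hastings with a symmetric proposal; the sketch below is such a special case, in Python, with illustrative names and a Gaussian proposal chosen for simplicity:

```python
import math
import random

def metropolis_hastings(log_density, x0, steps, proposal_scale=1.0):
    """Random-walk Metropolis sampler for an unnormalized one-dimensional log density."""
    x = x0
    samples = []
    for _ in range(steps):
        candidate = x + random.gauss(0.0, proposal_scale)      # symmetric proposal
        log_accept = log_density(candidate) - log_density(x)
        if random.random() < math.exp(min(0.0, log_accept)):   # accept with prob min(1, ratio)
            x = candidate
        samples.append(x)
    return samples

# Example: sample from a standard normal (log density known only up to a constant).
draws = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 10_000)
print(sum(draws) / len(draws))  # close to 0
```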
Numerical integration
- MISER algorithm: Monte Carlo simulation, numerical integration (a plain Monte Carlo sketch appears below)
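MISER improves on plain Monte Carlo integration by recursively stratifying the integration region. The sketch below is only the unstratified estimate that such methods build on, not MISER itself; names are illustrative:

```python
import random

def monte_carlo_integrate(f, a, b, samples=100_000):
    """Estimate the integral of f on [a, b] by averaging f at uniform random points."""
    total = sum(f(random.uniform(a, b)) for _ in range(samples))
    return (b - a) * total / samples

# Example: the integral of x**2 on [0, 1] is 1/3.
print(monte_carlo_integrate(lambda x: x * x, 0.0, 1.0))  # ~0.333
```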
Root finding
- False position method: approximates roots of a function
- Newton's method: finds zeros of functions with calculus (a sketch appears after this list)
- Secant method: approximates roots of a function
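Newton's method iterates x_{n+1} = x_n − f(x_n)/f'(x_n), using the derivative to improve the root estimate at each step. A minimal Python sketch (function names and the stopping rule are illustrative choices):

```python
def newtons_method(f, df, x0, tol=1e-12, max_iter=50):
    """Find a root of f given its derivative df, starting from the guess x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # residual small enough: accept the current estimate
            return x
        x -= fx / df(x)            # Newton update
    return x

# Example: the square root of 2 as the positive root of x**2 - 2.
print(newtons_method(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # ~1.41421356
```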
Optimization algorithms
- Alpha-beta pruning: search to reduce the number of nodes in the minimax algorithm
- Branch and bound
- Chain matrix multiplication
- Combinatorial optimization: optimization problems where the set of feasible solutions is discrete
  - Greedy randomized adaptive search procedure (GRASP): successive constructions of a greedy randomized solution and subsequent iterative improvements of it through a local search
  - Hungarian method: a combinatorial optimization algorithm which solves the assignment problem in polynomial time
- Constraint satisfaction
  - General algorithms for constraint satisfaction:
    - AC-3 algorithm
    - Difference map algorithm
    - Min conflicts algorithm
  - Chaff algorithm: an algorithm for solving instances of the boolean satisfiability problem
  - Davis–Putnam algorithm: check the validity of a first-order logic formula
  - Davis–Putnam–Logemann–Loveland algorithm (DPLL): an algorithm for deciding the satisfiability of propositional logic formulae in conjunctive normal form, i.e. for solving the CNF-SAT problem
  - Exact cover problem
    - Algorithm X: a nondeterministic algorithm
    - Dancing Links: an efficient implementation of Algorithm X
- Cross-entropy method: a general Monte Carlo approach to combinatorial and continuous multi-extremal optimization and importance sampling
- Differential evolution
- Dynamic programming: problems exhibiting the properties of overlapping subproblems and optimal substructure
- Ellipsoid method: an algorithm for solving convex optimization problems
- Evolutionary computation: optimization inspired by biological mechanisms of evolution
  - Evolution strategy
  - Genetic algorithms
    - Fitness proportionate selection: also known as roulette-wheel selection
    - Stochastic universal sampling
    - Truncation selection
    - Tournament selection
  - Memetic algorithm
  - Swarm intelligence
    - Ant colony optimization
    - Bees algorithm: a search algorithm which mimics the food foraging behavior of swarms of honey bees
    - Particle swarm optimization
- Gradient descent (a sketch appears after this list)
- Harmony search (HS): a metaheuristic algorithm mimicking the improvisation process of musicians
- Interior point method
- Linear programming
  - Dantzig–Wolfe decomposition: an algorithm for solving linear programming problems with special structure
  - Delayed column generation
  - Integer linear programming: solve linear programming problems where some or all the unknowns are restricted to integer values
    - Branch and cut
    - Cutting-plane method
  - Karmarkar's algorithm: the first reasonably efficient algorithm that solves the linear programming problem in polynomial time
  - Simplex algorithm: an algorithm for solving the linear programming problem
- Line search
- Local search: a metaheuristic for solving computationally hard optimization problems
  - Random-restart hill climbing
  - Tabu search
- Minimax: used in game programming
- Nearest neighbor search (NNS): find the closest points in a metric space
  - Best Bin First: find an approximate solution to the nearest neighbor search problem in very high-dimensional spaces
- Newton's method in optimization
- Nonlinear optimization
  - BFGS method: a nonlinear optimization algorithm
  - Gauss–Newton algorithm: an algorithm for solving nonlinear least squares problems
  - Levenberg–Marquardt algorithm: an algorithm for solving nonlinear least squares problems
  - Nelder–Mead method (downhill simplex method): a nonlinear optimization algorithm
- Odds algorithm (Bruss algorithm): finds the optimal strategy to predict a last specific event in a random sequence of events
- Simulated annealing
- Stochastic tunneling
- Subset sum algorithm
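Gradient descent, listed above, takes repeated steps against the gradient of the objective. A minimal fixed-step Python sketch (names, the constant learning rate, and the hand-supplied gradient are illustrative assumptions):

```python
def gradient_descent(grad, x0, learning_rate=0.1, steps=1000):
    """Minimize a differentiable function of a vector by stepping against its gradient."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - learning_rate * gi for xi, gi in zip(x, g)]  # move downhill
    return x

# Example: minimize f(x, y) = (x - 3)**2 + (y + 1)**2, whose gradient is (2(x - 3), 2(y + 1)).
print(gradient_descent(lambda p: [2 * (p[0] - 3), 2 * (p[1] + 1)], [0.0, 0.0]))  # ~[3.0, -1.0]
```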
Astronomy
- Doomsday algorithm: compute the day of the week for a given date
- Zeller's congruence: an algorithm to calculate the day of the week for any Julian or Gregorian calendar date (a sketch appears after this list)
- various Easter algorithms (computus) are used to calculate the day of Easter
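Zeller's congruence for the Gregorian calendar reduces to a single modular expression once January and February are treated as months 13 and 14 of the previous year. A Python sketch (the function name and 0 = Saturday convention follow the usual statement of the formula):

```python
def zeller_gregorian(year, month, day):
    """Day of week for a Gregorian date; returns 0=Saturday, 1=Sunday, ..., 6=Friday."""
    if month < 3:              # January and February count as months 13 and 14
        month += 12            # of the previous year
        year -= 1
    k = year % 100             # year within the century
    j = year // 100            # zero-based century
    return (day + (13 * (month + 1)) // 5 + k + k // 4 + j // 4 + 5 * j) % 7

# Example: 1 January 2000 was a Saturday.
print(zeller_gregorian(2000, 1, 1))  # 0
```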
Bioinformatics
- Basic Local Alignment Search Tool (BLAST): an algorithm for comparing primary biological sequence information
- Kabsch algorithm: calculate the optimal alignment of two sets of points in order to compute the root mean square deviation between two protein structures (see the sketch after this section)
- Velvet: a set of algorithms manipulating de Bruijn graphs for genomic sequence assembly
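A compact sketch of the Kabsch algorithm using NumPy's singular value decomposition follows; the helper name and the toy rotation-recovery check are illustrative assumptions, not part of any particular library.

import numpy as np

def kabsch_rotation(P, Q):
    """Kabsch algorithm: optimal rotation matrix aligning point set P onto Q.

    P, Q are (N, 3) arrays of paired points (e.g. atom coordinates).
    Returns the 3x3 rotation R minimizing the RMSD between the centred sets.
    """
    P = P - P.mean(axis=0)                    # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                               # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # correct for a possible reflection
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Toy check: recover a known rotation about the z-axis from exact data.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
P = np.random.default_rng(0).random((10, 3))
Q = P @ R_true.T
assert np.allclose(kabsch_rotation(P, Q), R_true)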
Geoscience
- Vincenty's formulae: an iterative method to calculate the distance between two latitude/longitude points on the surface of an ellipsoid
Linguistics
- Lesk algorithm: word sense disambiguation
- Stemming algorithm: reduce inflected words to their stem, base, or root form (see the sketch after this section)
- Sukhotin's algorithm: a statistical classification algorithm for classifying characters in a text as vowels or consonants
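The sketch below illustrates the general idea of suffix-stripping stemming with a toy rule list; it is not the Porter stemmer or any other published algorithm, and the suffix list and length threshold are illustrative assumptions.

def naive_stem(word):
    """Toy suffix-stripping stemmer illustrating the idea behind stemming.

    Maps related inflected forms onto a common base form by removing a few
    common English suffixes; real stemmers use much richer rule sets.
    """
    for suffix in ("ingly", "ings", "ing", "edly", "ed", "ly", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

# "jumps", "jumped" and "jumping" all reduce to the same stem "jump".
assert {naive_stem(w) for w in ("jumps", "jumped", "jumping")} == {"jump"}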
Medicine
- ESC algorithm for the diagnosis of heart failure
- Manning Criteria for irritable bowel syndrome
- Pulmonary embolism diagnostic algorithms
- Texas Medication Algorithm Project
Physics
- Constraint algorithm: a class of algorithms for satisfying constraints for bodies that obey Newton's equations of motion
- Demon algorithm: a Monte Carlo method for efficiently sampling members of a microcanonical ensemble with a given energy
- Featherstone's algorithm: compute the effects of forces applied to a structure of joints and links
- Ground state approximation
  - Variational method
    - Ritz method
- N-body problems
  - Barnes–Hut simulation: solves the n-body problem in an approximate way that has order O(n log n) instead of the O(n²) of a direct-sum simulation
  - Fast multipole method (FMM): speeds up the calculation of long-ranged forces
- Rainflow-counting algorithm: reduces a complex stress history to a count of elementary stress reversals for use in fatigue analysis
- Sweep and prune: a broad-phase algorithm used during collision detection to limit the number of pairs of solids that need to be checked for collision (see the sketch after this section)
- VEGAS algorithm: a method for reducing error in Monte Carlo simulations
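A one-axis sketch of sweep and prune follows; a full broad phase would sweep all three coordinate axes and maintain the sorted endpoint lists incrementally between frames. The names and toy data are illustrative.

def sweep_and_prune_1d(boxes):
    """Broad-phase collision pruning along a single axis.

    boxes: list of (name, min_x, max_x) axis-aligned intervals.
    Returns the set of pairs whose intervals overlap; only these pairs need
    an exact narrow-phase collision test.
    """
    boxes = sorted(boxes, key=lambda b: b[1])            # sort by interval start
    candidates = set()
    active = []                                          # boxes that may still overlap later ones
    for name, lo, hi in boxes:
        active = [b for b in active if b[2] >= lo]       # drop boxes that ended before lo
        for other_name, _, _ in active:
            candidates.add(frozenset((name, other_name)))
        active.append((name, lo, hi))
    return candidates

# A and B overlap; C is far away, so the A-C and B-C pairs are pruned.
pairs = sweep_and_prune_1d([("A", 0.0, 2.0), ("B", 1.5, 3.0), ("C", 10.0, 11.0)])
assert pairs == {frozenset(("A", "B"))}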
Statistics
- Algorithms for calculating variance: avoiding instability and numerical overflow (see the sketch after this section)
- Approximate counting algorithm: allows counting a large number of events in a small register
- Bayesian statistics
  - Nested sampling algorithm: a computational approach to the problem of comparing models in Bayesian statistics
- Clustering algorithms
  - Average-linkage clustering: a simple agglomerative clustering algorithm
  - Canopy clustering algorithm: an unsupervised pre-clustering algorithm related to the K-means algorithm
  - Complete-linkage clustering: a simple agglomerative clustering algorithm
  - DBSCAN: a density-based clustering algorithm
  - Expectation-maximization algorithm
  - Fuzzy clustering: a class of clustering algorithms where each point has a degree of belonging to clusters
    - Fuzzy c-means
    - FLAME clustering (Fuzzy clustering by Local Approximation of MEmberships): define clusters in the dense parts of a dataset and perform cluster assignment solely based on the neighborhood relationships among objects
  - k-means clustering: cluster objects based on attributes into partitions
  - k-means++: a variation of k-means that chooses the initial centers using a modified random seeding procedure
  - k-medoids: similar to k-means, but chooses datapoints or medoids as centers
  - Linde–Buzo–Gray algorithm: a vector quantization algorithm to derive a good codebook
  - Lloyd's algorithm (Voronoi iteration or relaxation): group data points into a given number of categories, a popular algorithm for k-means clustering
  - OPTICS: a density-based clustering algorithm with a visual evaluation method
  - Single-linkage clustering: a simple agglomerative clustering algorithm
  - SUBCLU: a subspace clustering algorithm
- Estimation theory
  - Expectation-maximization algorithm: a class of related algorithms for finding maximum likelihood estimates of parameters in probabilistic models
    - Ordered subset expectation maximization (OSEM): used in medical imaging for positron emission tomography, single photon emission computed tomography and X-ray computed tomography
  - Odds algorithm (Bruss algorithm): optimal online search for a distinguished value in sequential random input
  - Kalman filter: estimate the state of a dynamic system from a series of noisy measurements
- False nearest neighbor algorithm (FNN): estimates fractal dimension
- Hidden Markov model
  - Baum–Welch algorithm: compute maximum likelihood estimates and posterior mode estimates for the parameters of a hidden Markov model
  - Forward-backward algorithm: a dynamic programming algorithm for computing the probability of a particular observation sequence
  - Viterbi algorithm: find the most likely sequence of hidden states in a hidden Markov model
- Partial least squares regression: finds a linear model describing some predicted variables in terms of other observable variables
- Queuing theory
  - Buzen's algorithm: an algorithm for calculating the normalization constant G(K) in the Gordon–Newell theorem
- RANSAC (RANdom SAmple Consensus): an iterative method to estimate parameters of a mathematical model from a set of observed data which contains outliers
- Scoring algorithm: a form of Newton's method used to solve maximum likelihood equations numerically
- Yamartino method: calculate an approximation to the standard deviation σθ of wind direction θ during a single pass through the incoming data
- Ziggurat algorithm: generate random numbers from a non-uniform distribution
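As a concrete instance of the numerically stable variance algorithms listed at the start of this section, here is a minimal sketch of Welford's online algorithm; the function name and example data are illustrative.

def online_mean_variance(samples):
    """Welford's online algorithm: single-pass, numerically stable mean and variance.

    Avoids the catastrophic cancellation of the naive E[x^2] - E[x]^2 formula
    when the mean is large relative to the variance.
    Returns (mean, unbiased sample variance).
    """
    n = 0
    mean = 0.0
    m2 = 0.0                       # sum of squared deviations from the running mean
    for x in samples:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)   # uses the updated mean
    return mean, (m2 / (n - 1) if n > 1 else 0.0)

# With a large offset the naive formula loses precision; Welford's does not.
data = [1e9 + v for v in (4.0, 7.0, 13.0, 16.0)]
mean, var = online_mean_variance(data)
assert abs(mean - (1e9 + 10.0)) < 1e-6 and abs(var - 30.0) < 1e-6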
Computer architecture
- Tomasulo algorithm: allows sequential instructions that would normally be stalled due to certain dependencies to execute non-sequentially
Computer graphics
- Clipping
  - Line clipping
    - Cohen–Sutherland
    - Cyrus–Beck
    - Fast-clipping
    - Liang–Barsky
    - Nicholl–Lee–Nicholl
  - Polygon clipping
    - Sutherland–Hodgman
    - Vatti
    - Weiler–Atherton
- Contour lines and isosurfaces
  - Marching cubes: extract a polygonal mesh of an isosurface from a three-dimensional scalar field (sometimes called voxels)
  - Marching squares: generate contour lines for a two-dimensional scalar field
  - Marching tetrahedrons: an alternative to marching cubes
- Discrete Green's theorem: an algorithm for computing the double integral over a generalized rectangular domain in constant time; a natural extension of the summed area table algorithm
- Flood fill: fills a connected region of a multi-dimensional array with a specified symbol
- Global illumination algorithms: consider direct illumination and reflection from other objects
  - Ambient occlusion
  - Beam tracing
  - Cone tracing
  - Image-based lighting
  - Metropolis light transport
  - Path tracing
  - Photon mapping
  - Radiosity
  - Ray tracing
- Hidden surface removal or visual surface determination
  - Newell's algorithm: eliminate polygon cycles in the depth sorting required in hidden surface removal
  - Painter's algorithm: detects visible parts of a 3-dimensional scenery
  - Scanline rendering: constructs an image by moving an imaginary line over the image
  - Warnock algorithm
- Line drawing: graphical algorithms for approximating a line segment on discrete graphical media
  - Bresenham's line algorithm: plots points of a 2-dimensional array to form a straight line between 2 specified points (uses decision variables; see the sketch after this section)
  - DDA line algorithm: plots points of a 2-dimensional array to form a straight line between 2 specified points (uses floating-point math)
  - Xiaolin Wu's line algorithm: algorithm for line antialiasing
- Midpoint circle algorithm: an algorithm used to determine the points needed for drawing a circle
- Ramer–Douglas–Peucker algorithm: given a 'curve' composed of line segments, find a curve not too dissimilar but that has fewer points
- Shading
  - Gouraud shading: an algorithm to simulate the differing effects of light and colour across the surface of an object in 3D computer graphics
  - Phong shading: an algorithm to interpolate surface normal-vectors for surface shading in 3D computer graphics
- Slerp (spherical linear interpolation): quaternion interpolation for the purpose of animating 3D rotation
- Summed area table (also known as an integral image): an algorithm for computing the sum of values in a rectangular subset of a grid in constant time
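A minimal integer-only sketch of Bresenham's line algorithm (the all-octant, error-accumulation form) follows; the example endpoints are illustrative.

def bresenham_line(x0, y0, x1, y1):
    """Bresenham's line algorithm: rasterize a segment with integer arithmetic only.

    Returns the list of pixel coordinates approximating the segment from
    (x0, y0) to (x1, y1), driven by a single integer decision variable.
    """
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                 # decision variable
    points = []
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:              # step in x
            err += dy
            x0 += sx
        if e2 <= dx:              # step in y
            err += dx
            y0 += sy
    return points

# A shallow line from (0, 0) to (6, 3) touches exactly max(dx, dy) + 1 = 7 pixels.
pts = bresenham_line(0, 0, 6, 3)
assert pts[0] == (0, 0) and pts[-1] == (6, 3) and len(pts) == 7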
Cryptography
- Asymmetric (public key) encryption:
  - DSA
  - ElGamal
  - Elliptic curve cryptography
  - NTRUEncrypt
  - RSA
- Cryptographic hash functions:
  - HMAC: keyed-hash message authentication
  - MD5 – note that there is now a method of generating collisions for MD5
  - RIPEMD-160
  - SHA-1
  - SHA-2 (SHA-224, SHA-256, SHA-384, SHA-512)
  - Tiger (TTH), usually used in Tiger tree hashes
- Cryptographically secure pseudo-random number generators
  - Blum Blum Shub – based on the hardness of factorization
  - Fortuna, intended as an improvement on the Yarrow algorithm
  - Linear feedback shift register
  - Yarrow algorithm
- Key exchange
  - Diffie–Hellman key exchange (see the sketch after this section)
- Secret sharing, secret splitting, key splitting, M of N algorithms
  - Blakley's scheme
  - Shamir's scheme
- Symmetric (secret key) encryption:
  - Advanced Encryption Standard (AES), winner of NIST competition, also known as Rijndael
  - Blowfish
  - Data Encryption Standard (DES), sometimes DE Algorithm, winner of NBS selection competition, replaced by AES for most purposes
  - IDEA
  - RC4 (cipher)
  - Tiny Encryption Algorithm
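A toy Diffie–Hellman exchange is sketched below using the classic textbook parameters p = 23 and g = 5; these values are purely illustrative and provide no security (real systems use multi-thousand-bit prime groups or elliptic curves).

import secrets

p, g = 23, 5                          # tiny demo prime and generator (insecure)

a = secrets.randbelow(p - 2) + 1      # Alice's private exponent
b = secrets.randbelow(p - 2) + 1      # Bob's private exponent

A = pow(g, a, p)                      # Alice sends A to Bob over the open channel
B = pow(g, b, p)                      # Bob sends B to Alice

shared_alice = pow(B, a, p)           # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)             # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob     # both parties derive the same shared secret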
Digital logic
- Boolean minimization
  - Quine–McCluskey algorithm: also called the Q-M algorithm, a programmable method for simplifying Boolean equations (see the sketch after this section)
  - Petrick's method: another algorithm for Boolean simplification
  - Espresso heuristic logic minimizer: fast algorithm for Boolean function minimization
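The sketch below implements only the term-combining pass of the Quine–McCluskey algorithm, which finds the prime implicants; the prime implicant chart (and Petrick's method for choosing a minimum cover) is omitted. The function name and the example Boolean function are illustrative.

from itertools import combinations

def prime_implicants(minterms, n_bits):
    """Combining pass of Quine–McCluskey: find prime implicants of a Boolean function.

    minterms: iterable of minterm indices of a function of n_bits variables.
    Returns prime implicants as strings over {'0', '1', '-'}, where '-' marks
    a variable eliminated by merging two terms that differ in exactly one bit.
    """
    terms = {format(m, "0{}b".format(n_bits)) for m in minterms}
    primes = set()
    while terms:
        combined, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            diff = [i for i in range(n_bits) if a[i] != b[i]]
            # Merge only terms whose dashes agree and that differ in one real bit.
            if len(diff) == 1 and a[diff[0]] != "-" and b[diff[0]] != "-":
                combined.add(a[: diff[0]] + "-" + a[diff[0] + 1:])
                used.update((a, b))
        primes |= terms - used         # terms that merged with nothing are prime
        terms = combined
    return primes

# f(A, B, C) with minterms {0, 1, 2, 3, 7}: prime implicants are A' ("0--") and BC ("-11").
assert prime_implicants([0, 1, 2, 3, 7], 3) == {"0--", "-11"}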
Machine learning and statistical classification
- ALOPEX: a correlation-based machine-learning algorithm
- Association rule learning: discover interesting relations between variables, used in data mining
  - Apriori algorithm
  - Eclat algorithm
  - FP-growth algorithm
  - One-attribute rule
  - Zero-attribute rule
- Boosting: use many weak learners to boost effectiveness
  - AdaBoost: adaptive boosting
  - BrownBoost: a boosting algorithm that may be robust to noisy datasets
  - LogitBoost: logistic regression boosting
  - LPBoost: linear programming boosting
- Bootstrap aggregating (bagging): technique to improve stability and classification accuracy
- Decision trees
  - C4.5 algorithm: an extension to ID3
  - ID3 algorithm (Iterative Dichotomiser 3): use heuristic to generate small decision trees
- k-nearest neighbors (k-NN): a method for classifying objects based on closest training examples in the feature space
- Linde–Buzo–Gray algorithm: a vector quantization algorithm used to derive a good codebook
- Locality-sensitive hashing (LSH): a method of performing probabilistic dimension reduction of high-dimensional data
- Neural networks
  - Backpropagation: a supervised learning method which requires a teacher that knows, or can calculate, the desired output for any given input
  - Hopfield net: a recurrent neural network in which all connections are symmetric
  - Perceptron: the simplest kind of feedforward neural network, a linear classifier (see the sketch after this section)
  - Pulse-coupled neural networks (PCNN): neural models proposed by modeling a cat's visual cortex and developed for high-performance biomimetic image processing
  - Radial basis function network: an artificial neural network that uses radial basis functions as activation functions
  - Self-organizing map: an unsupervised network that produces a low-dimensional representation of the input space of the training samples
- Random forest: classify using many decision trees
- Random multinomial logit: classify using repeated multinomial logit analyses
- Reinforcement learning:
  - Q-learning: learn an action-value function that gives the expected utility of taking a given action in a given state and following a fixed policy thereafter
  - SARSA (State-Action-Reward-State-Action): learn a Markov decision process policy
  - Temporal difference learning
- Relevance vector machine (RVM): similar to SVM, but provides probabilistic classification
- Support vector machines (SVM): a set of methods which divide multidimensional data by finding a dividing hyperplane with the maximum margin between the two sets
  - Structured SVM: allows training of a classifier for general structured output labels
- Winnow algorithm: related to the perceptron, but uses a multiplicative weight-update scheme
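A minimal perceptron training loop is sketched below under the assumption of linearly separable data; the learning rate, epoch count and toy dataset are illustrative.

def train_perceptron(samples, labels, epochs=10, lr=1.0):
    """Classic mistake-driven perceptron training (labels are +1 / -1).

    On each misclassified example (x, y) the weights are updated with
    w += lr * y * x and b += lr * y; on linearly separable data this
    converges after a finite number of mistakes.
    """
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:                      # mistake: update weights
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy linearly separable data: the class is the sign of the first feature.
X = [(2.0, 1.0), (1.5, -0.5), (-1.0, 0.5), (-2.0, -1.0)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
assert all(predict(w, b, xi) == yi for xi, yi in zip(X, y))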
Programming language theory
- C3 linearization: an algorithm used primarily to obtain a consistent linearization of a multiple inheritance hierarchy in object-oriented programming (see the sketch after this list)
- Chaitin's algorithm: a bottom-up, graph coloring register allocation algorithm that uses cost/degree as its spill metric
- Hindley–Milner type inference algorithm
- Rete algorithm: an efficient pattern matching algorithm for implementing production rule systems
- Sethi–Ullman algorithm: generate optimal code for arithmetic expressions
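A minimal sketch of C3 linearization follows, assuming the class hierarchy is given as a plain dictionary (`bases_of`, a name chosen here for illustration) mapping each class name to its list of direct base names; real object systems derive this from class objects instead.

```python
def c3_linearize(cls, bases_of):
    """C3 linearization sketch: L(C) = C + merge(L(B1), ..., L(Bn), [B1..Bn])."""
    def merge(seqs):
        result = []
        seqs = [list(s) for s in seqs if s]
        while seqs:
            for seq in seqs:
                head = seq[0]
                # a head is acceptable if it appears in no other sequence's tail
                if not any(head in other[1:] for other in seqs):
                    break
            else:
                raise TypeError("inconsistent hierarchy: no C3 linearization exists")
            result.append(head)
            seqs = [[c for c in s if c != head] for s in seqs]
            seqs = [s for s in seqs if s]
        return result

    bases = bases_of.get(cls, [])
    return [cls] + merge([c3_linearize(b, bases_of) for b in bases] + [list(bases)])
```

For the classic diamond `{'O': [], 'A': ['O'], 'B': ['O'], 'C': ['A', 'B']}`, `c3_linearize('C', bases_of)` yields `['C', 'A', 'B', 'O']`, matching Python's method resolution order.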
Parsing
- CYK algorithm: an O(n³) algorithm for parsing context-free grammars in Chomsky normal form
- Earley parser: another O(n³) algorithm for parsing any context-free grammar
- GLR parser: an algorithm for parsing any context-free grammar, by Masaru Tomita. It is tuned for deterministic grammars, on which it runs in almost linear time, with O(n³) worst case.
- Inside-outside algorithm: an O(n³) algorithm for re-estimating production probabilities in probabilistic context-free grammars
- LL parser: a relatively simple linear-time parsing algorithm for a limited class of context-free grammars
- LR parser: a more complex linear-time parsing algorithm for a larger class of context-free grammars. Variants:
  - Canonical LR parser
  - LALR (Look-ahead LR) parser
  - Operator-precedence parser
  - SLR (Simple LR) parser
  - Simple precedence parser
- Packrat parser: a linear-time parsing algorithm supporting some context-free grammars and parsing expression grammars
- Recursive descent parser: a top-down parser suitable for LL(k) grammars
- Shunting yard algorithm: convert an infix-notation math expression to postfix
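As an illustration of the last entry, here is a minimal shunting-yard sketch in Python. It handles only the four left-associative binary operators and parentheses, and it assumes the input is already tokenized; a full implementation would also deal with unary operators, associativity choices, and functions.

```python
def shunting_yard(tokens):
    """Shunting-yard sketch: convert infix tokens to postfix (RPN)."""
    precedence = {"+": 1, "-": 1, "*": 2, "/": 2}
    output, ops = [], []
    for tok in tokens:
        if tok in precedence:
            # pop operators of higher or equal precedence (left-associative)
            while ops and ops[-1] != "(" and precedence.get(ops[-1], 0) >= precedence[tok]:
                output.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops and ops[-1] != "(":
                output.append(ops.pop())
            ops.pop()  # discard the "("
        else:
            output.append(tok)  # operand goes straight to the output
    while ops:
        output.append(ops.pop())
    return output

# shunting_yard(["3", "+", "4", "*", "2"]) -> ["3", "4", "2", "*", "+"]
```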
Quantum algorithms
- Deutsch–Jozsa algorithm: criterion of balance for a Boolean function
- Grover's algorithm: provides quadratic speedup for many search problems
- Shor's algorithm: provides exponential speedup (relative to currently known non-quantum algorithms) for factoring a number
- Simon's algorithm: provides a provably exponential speedup (relative to any non-quantum algorithm) for a black-box problem
Theory of computation and automata
- Powerset construction: algorithm to convert a nondeterministic automaton to a deterministic automaton (see the sketch after this list)
- Tarski–Kuratowski algorithm: a non-deterministic algorithm which provides an upper bound for the complexity of formulas in the arithmetical hierarchy and analytical hierarchy
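A minimal subset-construction sketch follows, assuming the NFA is given as a dictionary mapping (state, symbol) pairs to sets of successor states; epsilon transitions are omitted to keep the sketch short, so it is not a complete treatment.

```python
from collections import deque

def nfa_to_dfa(nfa_delta, start, accept):
    """Powerset (subset) construction sketch.

    `nfa_delta` maps (state, symbol) -> set of states, `start` is the NFA
    start state, and `accept` is the set of accepting NFA states (all
    illustrative representations). Returns DFA transitions keyed by
    frozensets of NFA states.
    """
    symbols = {sym for (_, sym) in nfa_delta}
    dfa_start = frozenset([start])
    dfa_delta, dfa_accept = {}, set()
    queue, seen = deque([dfa_start]), {dfa_start}
    while queue:
        subset = queue.popleft()
        if subset & accept:
            dfa_accept.add(subset)
        for sym in symbols:
            # the DFA state for `sym` is the union of all NFA successors
            target = frozenset().union(*(nfa_delta.get((q, sym), set()) for q in subset))
            dfa_delta[(subset, sym)] = target
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return dfa_start, dfa_delta, dfa_accept
```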
Error detection and correction
- BCH Codes
  - Berlekamp–Massey algorithm
  - Peterson–Gorenstein–Zierler algorithm
  - Reed–Solomon error correction
- BCJR algorithm: decoding of error correcting codes defined on trellises (principally convolutional codes)
- Forward error correction
- Gray code
- Hamming codes
  - Hamming(7,4): a Hamming code that encodes 4 bits of data into 7 bits by adding 3 parity bits
  - Hamming distance: number of positions at which the corresponding symbols differ
  - Hamming weight (population count): find the number of 1 bits in a binary word
- Redundancy checks
  - Adler-32
  - Cyclic redundancy check
  - Fletcher's checksum
  - Longitudinal redundancy check (LRC)
  - Luhn algorithm: a method of validating identification numbers (see the sketch after this list)
  - Luhn mod N algorithm: extension of Luhn to non-numeric characters
  - Parity: simple/fast error detection technique
  - Verhoeff algorithm
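To illustrate the Luhn algorithm referenced above, here is a minimal validation sketch in Python; it assumes the identification number arrives as a string of decimal digits.

```python
def luhn_valid(number):
    """Luhn check sketch: validate a string of decimal digits."""
    digits = [int(c) for c in number]
    total = 0
    # double every second digit from the right; subtract 9 if it exceeds 9
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# luhn_valid("79927398713") -> True (a commonly cited test number)
```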
Lossless compression algorithms
- Burrows–Wheeler transform: preprocessing useful for improving lossless compression
- Context tree weighting
- Delta encoding: aid to compression of data in which sequential data occurs frequently
- Dynamic Markov compression: compression using predictive arithmetic coding
- Dictionary coders
  - Byte pair encoding (BPE)
  - DEFLATE
  - Lempel–Ziv
    - LZ77 and LZ78
    - Lempel–Ziv Jeff Bonwick (LZJB)
    - Lempel–Ziv–Markov chain algorithm (LZMA)
    - Lempel–Ziv–Oberhumer (LZO): speed oriented
    - Lempel–Ziv–Storer–Szymanski (LZSS)
    - Lempel–Ziv–Welch (LZW)
    - LZWL: syllable-based variant
    - LZX
    - Lempel–Ziv Ross Williams (LZRW)
- Entropy encoding: coding scheme that assigns codes to symbols so as to match code lengths with the probabilities of the symbols
  - Arithmetic coding: advanced entropy coding
    - Range encoding: same as arithmetic coding, but looked at in a slightly different way
  - Huffman coding: simple lossless compression taking advantage of relative character frequencies
    - Adaptive Huffman coding: adaptive coding technique based on Huffman coding
    - Package-merge algorithm: optimizes Huffman coding subject to a length restriction on code strings
  - Shannon–Fano coding
  - Shannon–Fano–Elias coding: precursor to arithmetic encoding
- Entropy coding with known entropy characteristics
  - Golomb coding: form of entropy coding that is optimal for alphabets following geometric distributions
  - Rice coding: form of entropy coding that is optimal for alphabets following geometric distributions
  - Truncated binary encoding
  - Unary coding: code that represents a number n with n ones followed by a zero
  - Universal codes: encode positive integers into binary code words
    - Elias delta, gamma, and omega coding
    - Exponential-Golomb coding
    - Fibonacci coding
    - Levenshtein coding
- Fast Efficient & Lossless Image Compression System (FELICS): a lossless image compression algorithm
- Incremental encoding: delta encoding applied to sequences of strings
- Prediction by partial matching (PPM): an adaptive statistical data compression technique based on context modeling and prediction
- Run-length encoding: lossless data compression taking advantage of strings of repeated characters (see the sketch after this list)
- SEQUITUR algorithm: lossless compression by incremental grammar inference on a string
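Run-length encoding, referenced above, is simple enough to sketch completely; the pair-based representation below is just one common convention.

```python
def rle_encode(data):
    """Run-length encoding sketch: collapse runs of repeated symbols
    into (symbol, count) pairs."""
    runs = []
    for symbol in data:
        if runs and runs[-1][0] == symbol:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([symbol, 1])  # start a new run
    return [(s, n) for s, n in runs]

def rle_decode(pairs):
    """Inverse transform: expand (symbol, count) pairs back into the original string."""
    return "".join(s * n for s, n in pairs)

# rle_encode("aaabccdddd") -> [("a", 3), ("b", 1), ("c", 2), ("d", 4)]
```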
Lossy compression algorithms
- 3Dc: a lossy data compression algorithm for normal maps
- Audio and speech compression
  - A-law algorithm: standard companding algorithm
  - Code-excited linear prediction (CELP): low bit-rate speech compression
  - Linear predictive coding (LPC): lossy compression by representing the spectral envelope of a digital signal of speech in compressed form
  - Mu-law algorithm: standard analog signal compression or companding algorithm (see the sketch after this list)
  - Warped Linear Predictive Coding (WLPC)
- Image compression
  - Block Truncation Coding (BTC): a type of lossy image compression technique for greyscale images
  - Embedded Zerotree Wavelet (EZW)
  - Fast Cosine Transform algorithms (FCT algorithms): compute the Discrete Cosine Transform (DCT) efficiently
  - Fractal compression: method used to compress images using fractals
  - Set Partitioning in Hierarchical Trees (SPIHT)
  - Wavelet compression: form of data compression well suited for image compression (sometimes also video compression and audio compression)
- Transform coding: type of data compression for "natural" data like audio signals or photographic images
- Vector quantization: technique often used in lossy data compression
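The A-law and mu-law entries above are companding curves rather than elaborate algorithms; the following sketch shows the continuous mu-law mapping for samples normalized to [-1, 1] (telephony standards additionally quantize the result to 8 bits, which is omitted here).

```python
import math

MU = 255  # value used in North American / Japanese telephony

def mu_law_compress(x, mu=MU):
    """Mu-law companding sketch: logarithmically compress a sample x in [-1, 1]."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_law_expand(y, mu=MU):
    """Inverse mapping: recover an approximation of the original sample."""
    return math.copysign(((1 + mu) ** abs(y) - 1) / mu, y)

# round trip: mu_law_expand(mu_law_compress(0.1)) is approximately 0.1
```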
Digital signal processing
- Adaptive-additive algorithm (AA algorithm): find the spatial frequency phase of an observed wave source
- Discrete Fourier transform: determines the frequencies contained in a (segment of a) signal
  - Bluestein's FFT algorithm
  - Bruun's FFT algorithm
  - Cooley–Tukey FFT algorithm
  - Fast Fourier transform
  - Prime-factor FFT algorithm
  - Rader's FFT algorithm
- Fast folding algorithm: an efficient algorithm for the detection of approximately periodic events within time series data
- Gerchberg–Saxton algorithm: phase retrieval algorithm for optical planes
- Goertzel algorithm: identify a particular frequency component in a signal; can be used for DTMF digit decoding (see the sketch after this list)
- Karplus–Strong string synthesis: physical modelling synthesis to simulate the sound of a hammered or plucked string or some types of percussion
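The Goertzel algorithm referenced above reduces to a short recurrence; the sketch below estimates the power at a single frequency bin, which is the form typically used for DTMF detection.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Goertzel sketch: power of one frequency component in a block of samples."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    omega = 2 * math.pi * k / n
    coeff = 2 * math.cos(omega)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2       # second-order recurrence
        s_prev2, s_prev = s_prev, s
    # squared magnitude of the k-th DFT bin
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
```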
Image processing
- Connected-component labeling: find and label disjoint regions
- Dithering and half-toning
  - Error diffusion
  - Floyd–Steinberg dithering (see the sketch after this list)
  - Ordered dithering
  - Riemersma dithering
- Elser difference-map algorithm: a search algorithm for general constraint satisfaction problems; originally used for X-ray diffraction microscopy
- Feature detection
  - Canny edge detector: detect a wide range of edges in images
  - Generalised Hough transform
  - Hough transform
  - Marr–Hildreth algorithm: an early edge detection algorithm
  - SIFT (Scale-invariant feature transform): an algorithm to detect and describe local features in images
- Richardson–Lucy deconvolution: image de-blurring algorithm
- Seam carving: content-aware image resizing algorithm
- Segmentation: partition a digital image into two or more regions
  - GrowCut algorithm: an interactive segmentation algorithm
  - Random walker algorithm
  - Region growing
  - Watershed transformation: a class of algorithms based on the watershed analogy
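As a concrete example of error-diffusion dithering (the Floyd–Steinberg entry above), here is a minimal sketch operating on a greyscale image represented as a list of rows of floats in [0, 1]; real implementations work on integer pixel buffers and often serpentine-scan the rows.

```python
def floyd_steinberg(image, levels=2):
    """Floyd-Steinberg dithering sketch: quantize in place to `levels` grey
    values, diffusing the quantization error to neighbouring pixels."""
    height, width = len(image), len(image[0])
    for y in range(height):
        for x in range(width):
            old = min(1.0, max(0.0, image[y][x]))           # clamp drifted values
            new = round(old * (levels - 1)) / (levels - 1)  # nearest allowed level
            image[y][x] = new
            err = old - new
            # classic error-diffusion weights: 7/16, 3/16, 5/16, 1/16
            if x + 1 < width:
                image[y][x + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    image[y + 1][x - 1] += err * 3 / 16
                image[y + 1][x] += err * 5 / 16
                if x + 1 < width:
                    image[y + 1][x + 1] += err * 1 / 16
    return image
```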
Software engineering
- Cache algorithms
- CHS conversion: converting between disk addressing systems
- Double dabble: convert binary numbers to BCD
- Hash function: convert a large, possibly variable-sized amount of data into a small datum, usually a single integer that may serve as an index into an array
  - Fowler–Noll–Vo hash function: fast with a low collision rate (see the sketch after this list)
  - Pearson hashing: computes an 8-bit value only, optimized for 8-bit computers
  - Zobrist hashing: used in the implementation of transposition tables
- Unicode Collation Algorithm
- XOR swap algorithm: swaps the values of two variables without using a buffer
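The Fowler–Noll–Vo entry above can be illustrated in a few lines; the sketch below is the 32-bit FNV-1a variant, using the published offset basis and prime.

```python
def fnv1a_32(data: bytes) -> int:
    """FNV-1a sketch (32-bit variant): a fast, non-cryptographic hash."""
    h = 0x811C9DC5                     # 32-bit FNV offset basis
    for byte in data:
        h ^= byte                      # FNV-1a: xor first, then multiply
        h = (h * 0x01000193) % 2**32   # 32-bit FNV prime, kept to 32 bits
    return h

# e.g. fnv1a_32(b"hello") gives a 32-bit hash value suitable as an array index
```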
Database algorithms
- Algorithms for Recovery and Isolation Exploiting Semantics (ARIES): transaction recovery
- Join algorithms
  - Block nested loop
  - Hash join (see the sketch after this list)
  - Nested loop join
  - Sort-merge join
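Of the join algorithms above, the hash join is the easiest to sketch; the version below equi-joins two lists of dictionary rows and assumes the left input is small enough to hold in memory.

```python
from collections import defaultdict

def hash_join(left, right, left_key, right_key):
    """Hash join sketch: build a hash table on the left input, probe with the right."""
    table = defaultdict(list)
    for row in left:                      # build phase
        table[row[left_key]].append(row)
    joined = []
    for row in right:                     # probe phase
        for match in table.get(row[right_key], []):
            joined.append({**match, **row})
    return joined

# hash_join([{"id": 1, "name": "ann"}], [{"user_id": 1, "total": 9}], "id", "user_id")
#   -> [{"id": 1, "name": "ann", "user_id": 1, "total": 9}]
```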
Distributed systems algorithms
- Bully algorithm: a method for dynamically selecting a coordinator
- Byzantine fault tolerance: good fault tolerance
- Clock synchronization
  - Berkeley algorithm
  - Cristian's algorithm
  - Intersection algorithm
  - Marzullo's algorithm
- Detection of process termination
  - Dijkstra–Scholten algorithm
  - Huang's algorithm
- Lamport ordering: a partial ordering of events based on the happened-before relation
- Mutual exclusion
  - Lamport's Distributed Mutual Exclusion Algorithm
  - Naimi–Trehel's log(n) algorithm
  - Maekawa's algorithm
  - Raymond's algorithm
  - Ricart–Agrawala algorithm
- Paxos algorithm: a family of protocols for solving consensus in a network of unreliable processors
- Snapshot algorithm: record a consistent global state for an asynchronous system
- Vector clocks: generate a partial ordering of events in a distributed system and detect causality violations
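To make the vector clock entry concrete, here is a minimal sketch for a fixed, known set of process ids; the method names (`send`, `receive`, `local_event`) are illustrative rather than part of any standard API.

```python
class VectorClock:
    """Vector clock sketch for a fixed set of process ids."""

    def __init__(self, process_id, processes):
        self.pid = process_id
        self.clock = {p: 0 for p in processes}

    def local_event(self):
        self.clock[self.pid] += 1            # tick on every local event

    def send(self):
        self.local_event()
        return dict(self.clock)              # timestamp attached to the message

    def receive(self, message_clock):
        # merge: component-wise maximum, then tick the local component
        for p, t in message_clock.items():
            self.clock[p] = max(self.clock[p], t)
        self.local_event()

    def happened_before(self, other):
        """True if this clock causally precedes `other` (a dict of counters)."""
        return (all(self.clock[p] <= other[p] for p in self.clock)
                and any(self.clock[p] < other[p] for p in self.clock))
```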
Memory allocation and deallocation algorithms
- Buddy memory allocation: algorithm to allocate memory so as to reduce fragmentation
- Garbage collectors
  - Boehm garbage collector: conservative garbage collector
  - Cheney's algorithm: an improvement on the semi-space collector
  - Generational garbage collector: fast garbage collectors that segregate memory by age
  - Mark-compact algorithm: a combination of the mark-sweep algorithm and Cheney's copying algorithm
  - Mark and sweep (see the sketch after this list)
  - Semi-space collector: an early copying collector
- Reference counting
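A minimal mark-and-sweep sketch follows, modelling the heap as a plain dictionary from object ids to the ids they reference (an illustrative representation, not how a real collector stores objects).

```python
def mark_and_sweep(heap, roots):
    """Mark-and-sweep sketch over a toy heap.

    `heap` maps an object id to the list of object ids it references, and
    `roots` is the set of ids directly reachable from the program.
    Returns the ids that survive; everything else is reclaimed.
    """
    marked = set()
    stack = list(roots)
    while stack:                       # mark phase: flood fill from the roots
        obj = stack.pop()
        if obj in marked:
            continue
        marked.add(obj)
        stack.extend(heap.get(obj, []))
    for obj in list(heap):             # sweep phase: reclaim unmarked objects
        if obj not in marked:
            del heap[obj]
    return marked
```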
Operating systems algorithms
- Banker's algorithm: algorithm used for deadlock avoidance
- Page replacement algorithms: selecting the victim page under low-memory conditions (a baseline LRU sketch follows this list)
  - Adaptive replacement cache: better performance than LRU
  - Clock with Adaptive Replacement (CAR): a page replacement algorithm with performance comparable to adaptive replacement cache
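ARC and CAR are too involved to sketch briefly, but the LRU policy they are compared against above fits in a few lines; the sketch assumes a caller-supplied `load_page` callback for misses.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU page-replacement sketch: the baseline policy that ARC
    and CAR (listed above) are designed to improve upon."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()       # key -> page, ordered by recency

    def access(self, key, load_page):
        if key in self.pages:
            self.pages.move_to_end(key)  # hit: mark as most recently used
            return self.pages[key]
        page = load_page(key)            # miss: fetch the page
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)   # evict the least recently used page
        self.pages[key] = page
        return page
```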
Disk scheduling
- Elevator algorithm: disk scheduling algorithm that works like an elevator
- Shortest seek first: disk scheduling algorithm to reduce seek time
Networking
- Karn's algorithm: addresses the problem of getting accurate estimates of the round-trip time for messages when using TCP
- Luleå algorithm: a technique for storing and searching internet routing tables efficiently
- Network congestion
  - Exponential backoff
  - Nagle's algorithm: improve the efficiency of TCP/IP networks by coalescing packets
  - Truncated binary exponential backoff
- Traffic shaping and rate limiting (see the token bucket sketch after this list)
  - Leaky bucket
  - Token bucket
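Of the traffic-shaping entries above, the token bucket is the most compact to sketch; the version below refills continuously based on elapsed wall-clock time.

```python
import time

class TokenBucket:
    """Token bucket sketch: allow an average rate of `rate` events per second
    with bursts of up to `capacity` events."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1):
        now = time.monotonic()
        # refill tokens in proportion to the elapsed time, up to the bucket size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost   # conforming packet: spend tokens and send
            return True
        return False              # non-conforming: drop or delay the packet
```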
Process synchronization
- Dekker's algorithm
- Lamport's Bakery algorithm
- Peterson's algorithm
Scheduling
- Earliest deadline first scheduling
- Fair-share scheduling
- Least slack time scheduling
- List scheduling
- Multi level feedback queue
- Rate-monotonic scheduling
- Round-robin scheduling (see the sketch after this list)
- Shortest job next
- Shortest remaining time
- Top-nodes algorithm: resource calendar management
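As a concrete example from this list, round-robin scheduling can be sketched as a queue of remaining burst times; preemption details, arrival times, and I/O are ignored to keep it short.

```python
from collections import deque

def round_robin(jobs, quantum):
    """Round-robin sketch: `jobs` maps a job name to its remaining burst time;
    each job runs for at most `quantum` units per turn.
    Returns the order in which time slices were granted."""
    queue = deque(jobs.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        timeline.append((name, slice_))
        remaining -= slice_
        if remaining > 0:
            queue.append((name, remaining))   # not finished: back of the queue
    return timeline

# round_robin({"A": 5, "B": 3}, 2) -> [("A", 2), ("B", 2), ("A", 2), ("B", 1), ("A", 1)]
```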
See also
- List of data structures
- List of machine learning algorithms
- List of algorithm general topics
- List of terms relating to algorithms and data structures
- Heuristic