Regular expression
In computing, a regular expression provides a concise and flexible means for "matching" (specifying and recognizing) strings
of text, such as particular characters, words, or patterns of characters. Abbreviations for "regular expression" include "regex" and "regexp". The concept of regular expressions was first popularized by utilities provided by Unix distributions, in particular the editor ed and the filter grep. A regular expression is written in a formal language that can be interpreted by a regular expression processor, which is a program that either serves as a parser generator or examines text and identifies parts that match the provided specification. Historically, the concept of regular expressions is associated with Kleene's formalism of regular sets, introduced in the 1950s.
Here are examples of specifications that could be expressed in a regular expression:
- the sequence of characters "car" appearing consecutively in any context, such as in "car", "cartoon", or "bicarbonate"
- the sequence of characters "car" occurring in that order with other characters between them, such as in "Icelander" or "chandler"
- the word "car" when it appears as an isolated word
- the word "car" when preceded by the word "blue" or "red"
- the word "car" when not preceded by the word "motor"
- a dollar sign immediately followed by one or more digits, and then optionally a period and exactly two more digits (for example, "$100" or "$245.99").
These examples are simple. Specifications of great complexity can be conveyed by regular expressions.
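As an illustration, two of the specifications above might be written as follows with Python's re module (a sketch; the patterns and the sample sentence are illustrative choices, not part of the original list):

```python
import re

text = "A blue car and a motorcar cost $100 or $245.99 at the carport."

# The word "car" only when it appears as an isolated word (\b marks word boundaries).
print(re.findall(r"\bcar\b", text))          # ['car']

# A dollar sign, one or more digits, then optionally a period and exactly two more digits.
print(re.findall(r"\$\d+(?:\.\d{2})?", text))  # ['$100', '$245.99']
```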
Regular expressions are used by many text editors, utilities, and programming languages to search and manipulate text based on patterns. Some of these languages, including Perl, Ruby, AWK, and Tcl, integrate regular expressions into the syntax of the core language itself. Other programming languages, like the .NET languages, Java, and Python, instead provide regular expressions through standard libraries. For yet other languages, such as Object Pascal, C, and C++, non-core libraries are available (however, C++11 provides regular expressions in its standard library).
As an example of the syntax, the regular expression \bex can be used to search for all instances of the string "ex" that occur after word boundaries. Thus \bex will find the matching string "ex" in two possible locations: (1) at the beginning of a word, and (2) between two characters in a string, where the first is not a word character and the second is a word character. For instance, in the string "Texts for experts", \bex matches the "ex" in "experts" but not in "Texts" (because that "ex" occurs inside a word and not immediately after a word boundary).
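A short sketch of this behaviour using Python's re module, whose \b assertion implements the word boundary described above:

```python
import re

# \bex matches "ex" only immediately after a word boundary.
print(re.findall(r"\bex\w*", "Texts for experts"))                 # ['experts']
print([m.start() for m in re.finditer(r"\bex", "Texts for experts")])  # [10] -> only inside "experts"
```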
Many modern computing systems provide wildcard characters in matching filenames from a file system. This is a core capability of many command-line shells and is also known as globbing. Wildcards differ from regular expressions in generally expressing only limited forms of patterns.
Basic concepts
A regular expression, often called a pattern, is an expression that specifies a set of strings. It is more concise to specify a set's members by rules (such as a pattern) than by a list. For example, the set containing the three strings "Handel", "Händel", and "Haendel" can be specified by the pattern H(ä|ae?)ndel (or alternatively, it is said that the pattern matches each of the three strings). In most formalisms, if there exists at least one regex that matches a particular set then there exist an infinite number of such expressions. Most formalisms provide the following operations to construct regular expressions.
Boolean "or"
- A vertical bar separates alternatives. For example, gray|grey can match "gray" or "grey".
Grouping
- Parentheses are used to define the scope and precedence of the operators (among other uses). For example, gray|grey and gr(a|e)y are equivalent patterns which both describe the set of "gray" and "grey".
Quantification
- A quantifier after a token (such as a character) or group specifies how often that preceding element is allowed to occur. The most common quantifiers are the question mark ?, the asterisk * (derived from the Kleene star), and the plus sign + (Kleene cross).
- ? The question mark indicates there is zero or one of the preceding element. For example, colou?r matches both "color" and "colour".
- * The asterisk indicates there is zero or more of the preceding element. For example, ab*c matches "ac", "abc", "abbc", "abbbc", and so on.
- + The plus sign indicates there is one or more of the preceding element. For example, ab+c matches "abc", "abbc", "abbbc", and so on, but not "ac".
These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷. For example,
H(ae?|ä)ndel and H(a|ae|ä)ndel are both valid patterns which match the same strings as the earlier example, H(ä|ae?)ndel.
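These constructs can be checked directly with Python's re module, whose syntax for alternation, grouping, and the ? quantifier matches the notation used above (a small illustrative sketch):

```python
import re

pattern = re.compile(r"H(ä|ae?)ndel")  # alternation, grouping, and the ? quantifier
for name in ["Handel", "Händel", "Haendel", "Haandel"]:
    print(name, bool(pattern.fullmatch(name)))
# Handel True, Händel True, Haendel True, Haandel False

# The equivalent variants described above accept exactly the same strings.
for alt in [r"H(ae?|ä)ndel", r"H(a|ae|ä)ndel"]:
    assert all(bool(re.fullmatch(alt, n)) == bool(pattern.fullmatch(n))
               for n in ["Handel", "Händel", "Haendel", "Haandel"])
```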
The precise syntax for regular expressions varies among tools and with context; more detail is given in the Syntax section.
History
The origins of regular expressions lie in automata theory and formal language theory, both of which are part of theoretical computer science. These fields study models of computation (automata) and ways to describe and classify formal languages. In the 1950s, mathematician Stephen Cole Kleene described these models using his mathematical notation called regular sets. The SNOBOL language was an early implementation of pattern matching, but not identical to regular expressions. Ken Thompson built Kleene's notation into the editor QED as a means to match patterns in text files. He later added this capability to the Unix editor ed, which eventually led to the popular search tool grep's use of regular expressions ("grep" is a word derived from the command for regular expression searching in the ed editor: g/re/p, where re stands for regular expression). Since that time, many variations of Thompson's original adaptation of regular expressions have been widely used in Unix and Unix-like utilities including expr, AWK, Emacs, vi, and lex.
Perl and Tcl regular expressions were derived from a regex library written by Henry Spencer, though Perl later expanded on Spencer's library to add many new features. Philip Hazel developed PCRE (Perl Compatible Regular Expressions), which attempts to closely mimic Perl's regular expression functionality and is used by many modern tools including PHP and the Apache HTTP Server. Part of the effort in the design of Perl 6 is to improve Perl's regular expression integration, and to increase their scope and capabilities to allow the definition of parsing expression grammars. The result is a mini-language called Perl 6 rules, which are used to define Perl 6 grammar as well as provide a tool to programmers in the language. These rules maintain existing features of Perl 5.x regular expressions, but also allow BNF-style definition of a recursive descent parser via sub-rules.
The use of regular expressions in structured information standards for document and database modeling started in the 1960s and expanded in the 1980s when industry standards like ISO SGML (preceded by ANSI "GCA 101-1983") consolidated. The kernel of the structure specification language standards consists of regular expressions. Its use is evident in the DTD element group syntax.
Formal language theory
Regular expressions describe regular languages in formal language theory. They thus have the same expressive power as regular grammars.
Formal definition
Regular expressions consist of constants and operators that denote sets of strings and operations over these sets, respectively. The following definition is standard, and found as such in most textbooks on formal language theory. Given a finite alphabet Σ, the following constants are defined as regular expressions:
- (empty set) ∅ denoting the empty set { }.
- (empty string) ε denoting the set containing only the "empty" string, which has no characters at all.
- (literal character) a in Σ denoting the set containing only the character a.
Given regular expressions R and S, the following operations over them are defined to produce regular expressions (a short computational sketch follows the list):
- (concatenation) RS denoting the set { αβ | α in R and β in S }. For example {"ab", "c"}{"d", "ef"} = {"abd", "abef", "cd", "cef"}.
- (alternation) R | S denoting the set union of R and S. For example {"ab", "c"}|{"ab", "d", "ef"} = {"ab", "c", "d", "ef"}.
- (Kleene star) R* denoting the smallest superset of R that contains ε and is closed under string concatenation. This is the set of all strings that can be made by concatenating any finite number (including zero) of strings from R. For example, {"0","1"}* is the set of all finite binary strings (including the empty string), and {"ab", "c"}* = {ε, "ab", "c", "abab", "abc", "cab", "cc", "ababab", "abcab", ... }.
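The following sketch is illustrative code, not part of the formal definition: it computes concatenation and alternation of finite string sets directly, and approximates the Kleene star by bounding the string length, since the full star is generally infinite.

```python
from itertools import product

def concat(R, S):
    """RS = { alpha + beta | alpha in R, beta in S }."""
    return {a + b for a, b in product(R, S)}

def alternation(R, S):
    """R | S = set union of R and S."""
    return R | S

def star(R, max_len):
    """Length-bounded approximation of R*: all concatenations of strings from R up to max_len characters."""
    result = {""}        # epsilon is always included
    frontier = {""}
    while frontier:
        frontier = {w + r for w in frontier for r in R
                    if len(w + r) <= max_len} - result
        result |= frontier
    return result

print(sorted(concat({"ab", "c"}, {"d", "ef"})))            # ['abd', 'abef', 'cd', 'cef']
print(sorted(alternation({"ab", "c"}, {"ab", "d", "ef"})))  # ['ab', 'c', 'd', 'ef']
print(sorted(star({"ab", "c"}, 4)))  # includes '', 'ab', 'c', 'abab', 'abc', 'cab', 'cc', ...
```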
To avoid parentheses it is assumed that the Kleene star has the highest priority, then concatenation and then set union. If there is no ambiguity then parentheses may be omitted. For example, (ab)c can be written as abc, and a|(b(c*)) can be written as a|bc*. Many textbooks use the symbols ∪, +, or ∨ for alternation instead of the vertical bar.
Examples:
- a|b* denotes {ε, a, b, bb, bbb, ...}
- (a|b)* denotes the set of all strings with no symbols other than a and b, including the empty string: {ε, a, b, aa, ab, ba, bb, aaa, ...}
- ab*(c|ε) denotes the set of strings starting with a, then zero or more bs and finally optionally a c: {a, ac, ab, abc, abb, abbc, ...}
Expressive power and compactness
The formal definition of regular expressions is purposely parsimonious and avoids defining the redundant quantifiers ? and +, which can be expressed as follows: a+ = aa*, and a? = (a|ε). Sometimes the complement operator is added, to give a generalized regular expression; here Rc matches all strings over Σ* that do not match R. In principle, the complement operator is redundant, as it can always be circumscribed by using the other operators. However, the process for computing such a representation is complex, and the result may require expressions of a size that is doubly exponentially larger.
Regular expressions in this sense can express the regular languages, exactly the class of languages accepted by deterministic finite automata. There is, however, a significant difference in compactness. Some classes of regular languages can only be described by deterministic finite automata whose size grows exponentially in the size of the shortest equivalent regular expressions. The standard example here is the languages Lk consisting of all strings over the alphabet {a,b} whose k-th last letter equals a. On one hand, a regular expression describing L4 is given by (a|b)*a(a|b)(a|b)(a|b). Generalizing this pattern to Lk gives the expression (a|b)*a(a|b)(a|b)...(a|b), with k-1 copies of (a|b) following the a.
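A brief sketch of this example in Python (the helper below is an illustrative construction of the expression for Lk described above):

```python
import re

def l_k_pattern(k):
    """Regex for L_k: strings over {a, b} whose k-th last letter is 'a' (illustrative helper)."""
    return "(a|b)*a" + "(a|b)" * (k - 1)

p4 = re.compile(l_k_pattern(4))
for s in ["babba", "aaab", "abbbb", "bab"]:
    print(s, bool(p4.fullmatch(s)))
# babba True, aaab True, abbbb False, bab False
# The expression grows linearly in k, while (as noted below) a DFA for L_k needs at least 2**k states.
```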
On the other hand, it is known that every deterministic finite automaton accepting the language Lk must have at least 2^k states. Luckily, there is a simple mapping from regular expressions to the more general nondeterministic finite automata (NFAs) that does not lead to such a blowup in size; for this reason NFAs are often used as alternative representations of regular languages. NFAs are a simple variation of the type-3 grammars of the Chomsky hierarchy.
Finally, it is worth noting that many real-world "regular expression" engines implement features that cannot be described by the regular expressions in the sense of formal language theory; see below for more on this.
Deciding equivalence of regular expressions
As seen in many of the examples above, there is more than one way to construct a regular expression to achieve the same results. It is possible to write an algorithm that, given two regular expressions, decides whether the described languages are equal: the algorithm reduces each expression to a minimal deterministic finite state machine and determines whether the two minimal machines are isomorphic (equivalent).
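The full decision procedure (conversion to minimal automata followed by an isomorphism check) is too long to reproduce here. The sketch below is only a bounded approximation: it compares two patterns on every string over a small alphabet up to a fixed length, which can refute equivalence but never prove it.

```python
import re
from itertools import product

def agree_up_to(pattern_a, pattern_b, alphabet="ab", max_len=6):
    """Return True if both regexes accept exactly the same strings up to max_len (a bounded check only)."""
    ra, rb = re.compile(pattern_a), re.compile(pattern_b)
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            s = "".join(chars)
            if bool(ra.fullmatch(s)) != bool(rb.fullmatch(s)):
                return False
    return True

print(agree_up_to(r"(ab)*(a|)", r"a(ba)*|(ab)*"))  # True: both denote {ε, a, ab, aba, abab, ...}
print(agree_up_to(r"(a|b)*", r"(a*b*)*"))          # True: two ways to write all strings over {a, b}
print(agree_up_to(r"a*", r"a+"))                   # False: a+ does not accept the empty string
```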
The redundancy can be eliminated by using Kleene star and set union to find an interesting subset of regular expressions that is still fully expressive, but perhaps their use can be restricted. This is a surprisingly difficult problem: as simple as regular expressions are, there is no method to systematically rewrite them to some normal form. The lack of axioms in the past led to the star height problem. More recently, Dexter Kozen axiomatized regular expressions with Kleene algebra.
Syntax
A number of special characters or metacharacters are used to denote actions or delimit groups; but it is possible to force these special characters to be interpreted as normal characters by preceding them with a defined escape character, usually the backslash "\". For example, a dot is normally used as a "wild card" metacharacter to denote any character, but if preceded by a backslash it represents the dot character itself. The pattern c.t matches "cat", "cot", "cut", and non-words such as "czt" and "c.t"; but c\.t matches only "c.t". The backslash also escapes itself, i.e., two backslashes are interpreted as a literal backslash character.
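A short illustration of escaping with Python's re module, which follows the same backslash convention; re.escape escapes all special characters of a literal string automatically:

```python
import re

print(re.findall(r"c.t",  "cat cot c.t czt cost"))  # ['cat', 'cot', 'c.t', 'czt']  -> dot is a wild card
print(re.findall(r"c\.t", "cat cot c.t czt cost"))  # ['c.t']                       -> escaped dot is literal
print(re.escape("c.t"))                             # prints c\.t
```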
POSIX Basic Regular Expressions
Traditional Unix regular expression syntax followed common conventions but often differed from tool to tool. The IEEE POSIX Basic Regular Expressions (BRE) standard (released alongside an alternative flavor called Extended Regular Expressions or ERE) was designed mostly for backward compatibility with the traditional (Simple Regular Expression) syntax but provided a common standard which has since been adopted as the default syntax of many Unix regular expression tools, though there is often some variation or additional features. Many such tools also provide support for ERE syntax with command line arguments.
In the BRE syntax, most characters are treated as literals — they match only themselves (e.g., a matches "a"). The exceptions, listed below, are called metacharacters or metasequences.
Metacharacter | Description
---|---
. | Matches any single character (many applications exclude newlines, and exactly which characters are considered newlines is flavor-, character-encoding-, and platform-specific, but it is safe to assume that the line feed character is included). Within POSIX bracket expressions, the dot character matches a literal dot. For example, a.c matches "abc", etc., but [a.c] matches only "a", ".", or "c".
[ ] | A bracket expression. Matches a single character that is contained within the brackets. For example, [abc] matches "a", "b", or "c". [a-z] specifies a range which matches any lowercase letter from "a" to "z". These forms can be mixed: [abcx-z] matches "a", "b", "c", "x", "y", or "z", as does [a-cx-z]. The - character is treated as a literal character if it is the last or the first (after the ^) character within the brackets: [abc-], [-abc]. Note that backslash escapes are not allowed. The ] character can be included in a bracket expression if it is the first (after the ^) character: []abc].
[^ ] | Matches a single character that is not contained within the brackets. For example, [^abc] matches any character other than "a", "b", or "c". [^a-z] matches any single character that is not a lowercase letter from "a" to "z". As above, literal characters and ranges can be mixed.
^ | Matches the starting position within the string. In line-based tools, it matches the starting position of any line.
$ | Matches the ending position of the string or the position just before a string-ending newline. In line-based tools, it matches the ending position of any line.
\( \) | Defines a marked subexpression. The string matched within the parentheses can be recalled later (see the next entry, \n). A marked subexpression is also called a block or capturing group.
\n | Matches what the nth marked subexpression matched, where n is a digit from 1 to 9. This construct is theoretically irregular and was not adopted in the POSIX ERE syntax. Some tools allow referencing more than nine capturing groups.
* | Matches the preceding element zero or more times. For example, ab*c matches "ac", "abc", "abbbc", etc. [xyz]* matches "", "x", "y", "z", "zx", "zyx", "xyzzy", and so on. \(ab\)* matches "", "ab", "abab", "ababab", and so on.
\{m,n\} | Matches the preceding element at least m and not more than n times. For example, a\{3,5\} matches only "aaa", "aaaa", and "aaaaa". This is not found in a few older instances of regular expressions.
Examples:
- .at matches any three-character string ending with "at", including "hat", "cat", and "bat".
- [hc]at matches "hat" and "cat".
- [^b]at matches all strings matched by .at except "bat".
- ^[hc]at matches "hat" and "cat", but only at the beginning of the string or line.
- [hc]at$ matches "hat" and "cat", but only at the end of the string or line.
- \[.\] matches any single character surrounded by "[" and "]" since the brackets are escaped, for example: "[a]" and "[b]".
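Python's re module uses Perl-style rather than BRE syntax, but the bracket expressions and anchors in the examples above behave identically there, so they can be checked directly (an illustrative sketch):

```python
import re

words = ["hat", "cat", "bat", "chat", "[a]"]
print([w for w in words if re.search(r"[hc]at", w)])    # ['hat', 'cat', 'chat']
print([w for w in words if re.search(r"[^b]at", w)])    # ['hat', 'cat', 'chat']
print([w for w in words if re.search(r"^[hc]at$", w)])  # ['hat', 'cat']  (anchored to the whole string)
print([w for w in words if re.search(r"\[.\]", w)])     # ['[a]']
```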
POSIX Extended Regular Expressions
The meaning of metacharacters escaped with a backslash is reversed for some characters in the POSIX Extended Regular Expression (ERE) syntax. With this syntax, a backslash causes the metacharacter to be treated as a literal character. So, for example, \( \) is now ( ) and \{ \} is now { }. Additionally, support is removed for \n backreferences and the following metacharacters are added:
Metacharacter | Description
---|---
? | Matches the preceding element zero or one time. For example, ba? matches "b" or "ba".
+ | Matches the preceding element one or more times. For example, ba+ matches "ba", "baa", "baaa", and so on.
\| | The choice (alternation) operator matches either the expression before or the expression after the operator. For example, abc|def matches "abc" or "def".
Examples:
- [hc]+at matches "hat", "cat", "hhat", "chat", "hcat", "ccchat", and so on, but not "at".
- [hc]?at matches "hat", "cat", and "at".
- [hc]*at matches "hat", "cat", "hhat", "chat", "hcat", "ccchat", "at", and so on.
- cat|dog matches "cat" or "dog".
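These quantifiers behave the same way in Perl-style engines; a brief check of the examples above with Python's re module (using whole-string matching for clarity):

```python
import re

candidates = ["at", "hat", "chat", "ccchat", "dog"]
print([s for s in candidates if re.fullmatch(r"[hc]+at", s)])  # ['hat', 'chat', 'ccchat']
print([s for s in candidates if re.fullmatch(r"[hc]?at", s)])  # ['at', 'hat']
print([s for s in candidates if re.fullmatch(r"[hc]*at", s)])  # ['at', 'hat', 'chat', 'ccchat']
print([s for s in candidates if re.fullmatch(r"cat|dog", s)])  # ['dog']
```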
POSIX Extended Regular Expressions can often be used with modern Unix utilities by including the command line flag -E.
POSIX character classes
Since many ranges of characters depend on the chosen locale setting (i.e., in some settings letters are organized as abc...zABC...Z, while in some others as aAbBcC...zZ), the POSIX standard defines some classes or categories of characters as shown in the following table:
POSIX | Non-standard | Perl | ASCII | Description
---|---|---|---|---
[:alnum:] | | | [A-Za-z0-9] | Alphanumeric characters
 | [:word:] | \w | [A-Za-z0-9_] | Alphanumeric characters plus "_"
 | | \W | [^A-Za-z0-9_] | Non-word characters
[:alpha:] | | | [A-Za-z] | Alphabetic characters
[:blank:] | | | [ \t] | Space and tab
 | | \b | [(?<=\W)(?=\w)|(?<=\w)(?=\W)] | Word boundaries
[:cntrl:] | | | [\x00-\x1F\x7F] | Control characters
[:digit:] | | \d | [0-9] | Digits
 | | \D | [^0-9] | Non-digits
[:graph:] | | | [\x21-\x7E] | Visible characters
[:lower:] | | | [a-z] | Lowercase letters
[:print:] | | | [\x20-\x7E] | Visible characters and the space character
[:punct:] | | | [\]\[!"#$%&'*+,./:;<=>?@\^_`{|}~-] | Punctuation characters
[:space:] | | \s | [ \t\r\n\v\f] | Whitespace characters
 | | \S | [^ \t\r\n\v\f] | Non-whitespace characters
[:upper:] | | | [A-Z] | Uppercase letters
[:xdigit:] | | | [A-Fa-f0-9] | Hexadecimal digits
POSIX character classes can only be used within bracket expressions. For example, [[:upper:]ab] matches the uppercase letters and lowercase "a" and "b".
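Python's re module is one widely available engine that does not recognize the POSIX classes above but does provide the Perl-style shorthands from the table; a minimal sketch:

```python
import re

text = "Item_42: costs 19.99 EUR"
print(re.findall(r"\w+", text))  # ['Item_42', 'costs', '19', '99', 'EUR']  (ASCII equivalent: [A-Za-z0-9_])
print(re.findall(r"\d+", text))  # ['42', '19', '99']                       (ASCII equivalent: [0-9])
print(re.findall(r"\s",  text))  # [' ', ' ', ' ']                          (whitespace characters)
```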
An additional non-POSIX class understood by some tools is [:word:], which is usually defined as [:alnum:] plus underscore. This reflects the fact that in many programming languages these are the characters that may be used in identifiers. The editor Vim further distinguishes word and word-head classes (using the notation \w and \h) since in many programming languages the characters that can begin an identifier are not the same as those that can occur in other positions.
Note that what the POSIX regular expression standards call character classes are commonly referred to as POSIX character classes in other regular expression flavors which support them. With most other regular expression flavors, the term character class is used to describe what POSIX calls bracket expressions.
Perl-derived regular expressions
Perl has a more consistent and richer syntax than the POSIX basic (BRE) and extended (ERE) regular expression standards. An example of its consistency is that \ always escapes a non-alphanumeric character. Other examples of functionality possible with Perl but not with POSIX-compliant regular expressions are lazy quantification (see the next section), possessive quantifiers to control backtracking, named capture groups, and recursive patterns.
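Named capture groups are also available in Python's re module (via the (?P<name>...) syntax); a minimal sketch:

```python
import re

m = re.match(r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})", "2024-01-31")
print(m.group("year"), m.group("month"), m.group("day"))  # 2024 01 31
print(m.groupdict())  # {'year': '2024', 'month': '01', 'day': '31'}
```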
Due largely to its expressive power, many other utilities and programming languages have adopted regular expression syntax similar to Perl's, among them Java, JavaScript, PCRE, Python, Ruby, Microsoft's .NET Framework, and the W3C's XML Schema. Some languages and tools such as Boost and PHP support multiple regular expression flavors. Perl-derivative regular expression implementations are not identical, and all implement no more than a subset of Perl's features, usually those of Perl 5.0, released in 1994. With Perl 5.10, this process has come full circle, with Perl incorporating syntactic extensions originally developed in Python, PCRE, and the .NET Framework.
Simple Regular Expressions
Simple Regular Expressions is a syntax that may be used by historical versions of application programs, and may be supported within some applications for the purpose of providing backward compatibility. It is deprecated.
Lazy quantification
The standard quantifiers in regular expressions are greedy, meaning they match as much as they can. For example, to find the first instance of an item between the angle bracket symbols < and > in the sentence
Another whale sighting occurred on <January 26>, <2004>.
someone new to regexes would likely come up with the pattern <.*> or similar. However, instead of the "<January 26>" that might be expected, this pattern will actually return "<January 26>, <2004>" because the * quantifier is greedy: it will consume as many characters as possible from the input, and "January 26>, <2004" has more characters than "January 26".
Though this problem can be avoided in a number of ways (e.g., by specifying the text that is not to be matched: <[^>]*>), modern regular expression tools allow a quantifier to be specified as lazy (also known as non-greedy, reluctant, minimal, or ungreedy) by putting a question mark after the quantifier (e.g., <.*?>), or by using a modifier which reverses the greediness of quantifiers (though changing the meaning of the standard quantifiers can be confusing). By using a lazy quantifier, the expression tries the minimal match first. Though in the previous example lazy matching is used to select one of many matching results, in some cases it can also be used to improve performance when greedy matching would require more backtracking.
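The difference is easy to observe with Python's re module, using the whale-sighting sentence from above (an illustrative sketch):

```python
import re

text = "Another whale sighting occurred on <January 26>, <2004>."
print(re.search(r"<.*>",    text).group())  # '<January 26>, <2004>'  greedy: consumes as much as possible
print(re.search(r"<.*?>",   text).group())  # '<January 26>'          lazy: minimal match first
print(re.search(r"<[^>]*>", text).group())  # '<January 26>'          alternative: exclude '>' explicitly
```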
Patterns for non-regular languages
Many features found in modern regular expression libraries provide an expressive power that far exceeds regular languages. For example, many implementations allow grouping subexpressions with parentheses and recalling the value they match later in the same expression (backreferences). This means that a pattern can match strings of repeated words like "papa" or "WikiWiki", called squares in formal language theory. The pattern for these strings is (.*)\1.
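As a brief illustration (a sketch using Python's re module; most Perl-style engines accept the same pattern), the backreference \1 forces the second half of the string to repeat exactly what the first group matched:

import re

square = re.compile(r"^(.*)\1$")   # \1 must repeat whatever group 1 matched

for word in ("papa", "WikiWiki", "banana"):
    print(word, bool(square.match(word)))
# papa True, WikiWiki True, banana False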
The language of squares is not regular, nor is it context-free. Pattern matching with an unbounded number of backreferences, as supported by numerous modern tools, is NP-complete (see Theorem 6.2).
However, many tools, libraries, and engines that provide such constructions still use the term regular expression for their patterns. This has led to a nomenclature in which the term regular expression has different meanings in formal language theory and in pattern matching. For this reason, some people have taken to using the term regex or simply pattern for the latter. Larry Wall, author of the Perl programming language, addresses this distinction in an essay about the design of Perl 6.
Fuzzy Regular Expressions
Variants of regular expressions can be used for working with text in natural language, when it is necessary to take into account possible typos and spelling variants. For example, the text "Julius Caesar" might be a fuzzy match for:
- Gaius Julius Caesar
- Yulius Cesar
- G. Juliy Caezar
In such cases the mechanism implements some approximate (fuzzy) string matching algorithm, and possibly an algorithm for measuring the similarity between a text fragment and the pattern, such as an edit distance.
This task is closely related to both full text search and named entity recognition.
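As an illustration only, the third-party Python regex module (not one of the libraries listed below; its use here is an assumption for the sketch) supports fuzzy matching by attaching an error budget such as {e<=3} to a group, where e counts insertions, deletions, and substitutions:

import regex  # third-party module: pip install regex

# Allow up to three edit errors when matching "Julius Caesar".
pattern = regex.compile(r"(?:Julius Caesar){e<=3}", regex.IGNORECASE)

for text in ("Gaius Julius Caesar", "Yulius Cesar", "G. Juliy Caezar"):
    match = pattern.search(text)
    print(text, "->", match.group() if match else None)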
Some software libraries work with fuzzy regular expressions:
- TRE - a well-developed, portable, free project in C, which uses syntax similar to POSIX
- FREJ - an open-source project in Java with non-standard syntax (prefix, Lisp-like notation), designed to make it easy to substitute inner matched fragments into outer blocks, but lacking many features of standard regular expressions
- agrep - a command-line utility dating from 1989 (proprietary, but free for non-commercial use)
Implementations and running times
There are at least three different algorithms that decide if and how a given regular expression matches a string.
The oldest and fastest two rely on a result in formal language theory that allows every nondeterministic finite automaton (NFA) to be transformed into a deterministic finite automaton (DFA). The DFA can be constructed explicitly and then run on the input string one symbol at a time. Constructing the DFA for a regular expression of size m has a time and memory cost of O(2^m), but the resulting automaton can be run on a string of size n in time O(n). An alternative approach is to simulate the NFA directly, essentially building each DFA state on demand and then discarding it at the next step. This keeps the DFA implicit and avoids the exponential construction cost, but the running cost rises to O(m^2 n). The explicit approach is called the DFA algorithm and the implicit approach the NFA algorithm. Adding caching to the NFA algorithm is often called the "lazy DFA" algorithm, or just the DFA algorithm without making a distinction. These algorithms are fast, but using them for recalling grouped subexpressions, lazy quantification, and similar features is tricky.
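The following is a minimal sketch (an illustration, not taken from any particular engine) of the implicit "NFA algorithm" in Python: it simulates a hand-built Thompson-style NFA for the pattern (a|aa)*b by tracking the set of reachable states one input symbol at a time, so for this fixed pattern the running time is linear in the length of the input.

# Hand-built NFA for (a|aa)*b:
#   state 0: loop entry (epsilon-moves to 1, 2, and 4)
#   state 1: expects 'a', then returns to 0          (the "a" branch)
#   state 2: expects 'a', then goes to 3             (first 'a' of "aa")
#   state 3: expects 'a', then returns to 0          (second 'a' of "aa")
#   state 4: expects 'b', then goes to 5
#   state 5: accepting state
EPSILON = {0: {1, 2, 4}}
MOVES = {(1, "a"): 0, (2, "a"): 3, (3, "a"): 0, (4, "b"): 5}
ACCEPT = 5

def eps_closure(states):
    """Add every state reachable through epsilon transitions."""
    stack, closed = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in EPSILON.get(s, ()):
            if t not in closed:
                closed.add(t)
                stack.append(t)
    return closed

def matches(text):
    current = eps_closure({0})
    for ch in text:
        nxt = {MOVES[(s, ch)] for s in current if (s, ch) in MOVES}
        current = eps_closure(nxt)
        if not current:            # no live states left: fail early
            return False
    return ACCEPT in current

print(matches("aaab"))   # True
print(matches("aaaa"))   # False: no trailing 'b'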
The third algorithm matches the pattern against the input string by backtracking. This algorithm is commonly called NFA, but this terminology can be confusing. Its running time can be exponential, which simple implementations exhibit when matching against expressions like (a|aa)*b that contain both alternation and unbounded quantification, forcing the algorithm to consider an exponentially increasing number of sub-cases. This behavior can cause a security problem called Regular expression Denial of Service (ReDoS).
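A small demonstration of this exponential behavior (an assumed setup, shown with Python's re module, which is a backtracking implementation): matching (a|aa)*b against a string of n letters 'a' with no final 'b' forces the engine to try every way of splitting the a's into groups of one or two before failing, so the measured times grow roughly geometrically with n on a typical machine.

import re
import time

pattern = re.compile(r"(a|aa)*b")

for n in (20, 24, 28, 32):
    text = "a" * n                 # no trailing 'b', so every attempt fails
    start = time.perf_counter()
    pattern.match(text)            # returns None after exhaustive backtracking
    print(f"n={n:2d}  {time.perf_counter() - start:.4f}s")

# A DFA-based engine answers the same question in time linear in the input length.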
Although backtracking implementations give only an exponential-time guarantee in the worst case, they provide much greater flexibility and expressive power. For example, any implementation that allows backreferences, or implements the various extensions introduced by Perl, must include some kind of backtracking. Some implementations try to provide the best of both algorithms by first running a fast DFA algorithm and reverting to a potentially slower backtracking algorithm only when a backreference is encountered during the match.
Unicode
In theoretical terms, any token set can be matched by regular expressions as long as it is pre-defined. Historically, however, regular expressions were originally written to use ASCII characters as their token set, though regular expression libraries have since supported numerous other character sets. Many modern regular expression engines offer at least some support for Unicode. In most respects it makes no difference what the character set is, but some issues do arise when extending regular expressions to support Unicode.
- Supported encoding. Some regular expression libraries expect to work on a particular encoding instead of on abstract Unicode characters. Many of these require the UTF-8 encoding, while others might expect UTF-16 or UTF-32. In contrast, Perl and Java are agnostic about encodings, instead operating on decoded characters internally.
- Supported Unicode range. Many regular expression engines support only the Basic Multilingual Plane, that is, the characters which can be encoded with only 16 bits. Currently, only a few regular expression engines (e.g., Perl's and Java's) can handle the full 21-bit Unicode range.
- Extending ASCII-oriented constructs to Unicode. For example, in ASCII-based implementations, character ranges of the form
[x-y]
are valid whenever x and y are codepoints in the range [0x00,0x7F] and codepoint(x) ≤ codepoint(y). The natural extension of such character ranges to Unicode would simply change the requirement that the endpoints lie in [0x00,0x7F] to the requirement that they lie in [0,0x10FFFF]. However, in practice this is often not the case. Some implementations, such as that of gawk, do not allow character ranges to cross Unicode blocks. A range like [0x61,0x7F] is valid since both endpoints fall within the Basic Latin block, as is [0x0530,0x0560] since both endpoints fall within the Armenian block, but a range like [0x0061,0x0532] is invalid since it includes multiple Unicode blocks. Other engines, such as that of the Vim editor, allow block-crossing but limit the number of characters in a range to 128.
- Case insensitivity. Some case-insensitivity flags affect only the ASCII characters. Other flags affect all characters. Some engines have two different flags, one for ASCII, the other for Unicode. Exactly which characters belong to the POSIX classes also varies.
- Cousins of case insensitivity. As ASCII has case distinction, case insensitivity became a logical feature in text searching. Unicode introduced alphabetic scripts without case, like Devanagari. For these, case sensitivity is not applicable. For scripts like Chinese, another distinction seems logical: between traditional and simplified. In Arabic scripts, insensitivity to initial, medial, final, and isolated position may be desired. In Japanese, insensitivity between hiragana and katakana is sometimes useful.
- Normalization. Unicode has combining characters. As on old typewriters, plain letters can be followed by one or more non-spacing symbols (usually diacritics like accent marks) to form a single printing character; Unicode also provides precomposed characters, i.e. characters that already include one or more combining characters. A sequence of a base character plus combining characters should be matched against the identical single precomposed character. The process of standardizing sequences of base and combining characters is called normalization.
- New control codes. Unicode introduced, amongst others, byte order marks and text direction markers. These codes might have to be dealt with in a special way.
- Introduction of character classes for Unicode blocks, scripts, and numerous other character properties. Block properties are much less useful than script properties, because a block can have code points from several different scripts, and a script can have code points from several different blocks. In Perl and the java.util.regex library, properties of the form \p{InX} or \p{Block=X} match characters in block X, and \P{InX} or \P{Block=X} match code points not in that block. Similarly, \p{Armenian}, \p{IsArmenian}, or \p{Script=Armenian} matches any character in the Armenian script. In general, \p{X} matches any character with either the binary property X or the general category X. For example, \p{Lu}, \p{Uppercase_Letter}, or \p{GC=Lu} matches any upper-case letter. Binary properties that are not general categories include \p{White_Space}, \p{Alphabetic}, \p{Math}, and \p{Dash}. Examples of non-binary properties are \p{Bidi_Class=Right_to_Left}, \p{Word_Break=A_Letter}, and \p{Numeric_Value=10}. A brief sketch of such property classes follows this list.
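The following sketch is illustrative only: it uses the third-party Python regex module, which accepts the \p{...} property notation described above (an assumption for the example; Perl and java.util.regex use similar notation, though the set of supported property names varies by engine).

import regex  # third-party module: pip install regex

# General category: Lu matches upper-case letters.
print(regex.findall(r"\p{Lu}", "Hello World"))                  # ['H', 'W']

# Script property: match a run of characters in the Armenian script.
print(regex.findall(r"\p{Script=Armenian}+", "abc Երևան 123"))  # ['Երևան']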
Uses
Regular expressions are useful in the production of syntax highlighting systems, data validation, and many other tasks.
While regular expressions would be useful on search engines such as Google, processing them across the entire database could consume excessive computer resources, depending on the complexity and design of the regex. Although in many cases system administrators can run regex-based queries internally, most search engines do not offer regex support to the public. Notable exceptions include Google Code Search and Exalead.
See also
- Comparison of regular expression engines
- Extended Backus–Naur Form
- List of regular expression software - applications which support regular expressions
- Regular expression examples
- Regular tree grammar
- Regular language
External links
- Java Tutorials: Regular Expressions
- Perl Regular Expressions documentation
- VBScript and Regular Expressions
- .NET Framework Regular Expressions
- Pattern matching tools and libraries
- Structural Regular Expressions by Rob Pike
- JavaScript Regular Expressions Chapter and RegExp Object Reference at the Mozilla Developer Center