The time complexity is O(N) on average and O(NM) in the worst case (N being the length of the longer string you search in, M the length of the shorter string you search for). As of Python 3.10, heuristics are used to lower the worst-case scenario to O(N + M) by switching algorithms.
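As a rough illustration (the exact input and timings here are my own assumption, not the case reported upstream), a needle that is almost entirely one repeated character is the classic way to provoke the old O(NM) behaviour, while Python 3.10+ stays close to linear on the same input:

```python
import timeit

# Hypothetical worst-case-style input: the needle ends in 'a' and is almost
# all 'a's, so nearly every alignment matches the last character and then
# compares most of the needle before failing.
haystack = "a" * 1_000_000
needle = "a" * 998 + "b" + "a"   # never present in the haystack

# On Python 3.9 and earlier this tends towards O(N*M) behaviour; on 3.10+
# the algorithm switch keeps it close to O(N + M).
print(timeit.timeit(lambda: needle in haystack, number=1))
```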
The same algorithm is used for `str.index()`, `str.find()`, `str.__contains__()` (the `in` operator) and `str.replace()`; it is a simplification of Boyer–Moore, with ideas taken from the Boyer–Moore–Horspool and Sunday algorithms.
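These methods only differ in how they report the result of the same underlying search; for example:

```python
s = "violin maker"

print("lin" in s)               # True  (boolean result)
print(s.find("lin"))            # 3     (index, or -1 if absent)
print(s.index("lin"))           # 3     (index, or ValueError if absent)
print(s.__contains__("x"))      # False (what the `in` operator calls)
print(s.replace("lin", "la"))   # 'viola maker' (replaces each occurrence)
```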
See the original stringlib discussion post, as well as the `fastsearch.h` source code; until Python 3.10, the base algorithm had not changed since its introduction in Python 2.5 (apart from some low-level optimisations and corner-case fixes).
The post includes a Python-code outline of the algorithm:
```python
def find(s, p):
    # find first occurrence of p in s
    n = len(s)
    m = len(p)
    skip = delta1(p)[p[m-1]]
    i = 0
    while i <= n-m:
        if s[i+m-1] == p[m-1]: # (boyer-moore)
            # potential match
            if s[i:i+m-1] == p[:m-1]:
                return i
            if s[i+m] not in p:
                i = i + m + 1 # (sunday)
            else:
                i = i + skip # (horspool)
        else:
            # skip
            if s[i+m] not in p:
                i = i + m + 1 # (sunday)
            else:
                i = i + 1
    return -1 # not found
```
as well as speed comparisons.
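The outline leaves `delta1` undefined and can index one past the end of `s` on the last window; a minimal runnable completion, assuming `delta1` is a Horspool-style bad-character table built over all but the last character of the needle (my assumption for the sketch; the actual `fastsearch.h` uses a bloom filter and a single precomputed skip), could look like this:

```python
def delta1(p):
    # Assumed Horspool-style bad-character table: for each character, how far
    # the window may shift when that character lines up with the last position
    # of the needle.  Built over p[:-1] so the shift is never zero.
    m = len(p)
    table = {c: m for c in p}          # default: shift past the whole needle
    for j, c in enumerate(p[:-1]):
        table[c] = m - 1 - j           # distance from the last position
    return table

def find(s, p):
    # Same structure as the outline above, with a bounds check so the
    # Sunday-style look-ahead s[i+m] never runs off the end of s.
    n, m = len(s), len(p)
    if m == 0:
        return 0
    skip = delta1(p)[p[m-1]]
    i = 0
    while i <= n - m:
        if s[i+m-1] == p[m-1]:               # (boyer-moore) last char matches
            if s[i:i+m-1] == p[:m-1]:        # potential match confirmed
                return i
            if i + m < n and s[i+m] not in p:
                i = i + m + 1                # (sunday)
            else:
                i = i + skip                 # (horspool)
        else:
            if i + m < n and s[i+m] not in p:
                i = i + m + 1                # (sunday)
            else:
                i = i + 1
    return -1                                # not found

assert find("violin maker", "lin") == "violin maker".find("lin")
assert find("a" * 50, "ab") == -1
```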
In Python 3.10, the algorithm was updated to use an enhanced version of Crochemore and Perrin's Two-Way string searching algorithm for larger problems (with `p` and `s` longer than 100 and 2100 characters, respectively, and with `s` at least 6 times as long as `p`), in response to a pathological edge case someone reported. The commit adding this change included a write-up on how the algorithm works.
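Restating just the thresholds quoted above (the exact cut-offs are an internal CPython heuristic, are not exposed anywhere, and may change between versions), the "larger problems" regime can be sketched as a hypothetical predicate:

```python
def probably_uses_two_way(p: str, s: str) -> bool:
    # Hypothetical helper restating the thresholds quoted above; the real
    # heuristic lives inside fastsearch.h and may differ.
    return len(p) > 100 and len(s) > 2100 and len(s) >= 6 * len(p)

print(probably_uses_two_way("a" * 400, "a" * 2_200))   # False: s < 6 * p
print(probably_uses_two_way("a" * 101, "a" * 10_000))  # True
```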
The Two-Way algorithm has a worst-case time complexity of O(N + M), where O(M) is a cost paid up-front to build a shift table from the search needle `p`. Once you have that table, the algorithm has a best-case performance of O(N/M).
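To put those bounds in perspective: for a needle of M = 1,000 characters in a haystack of N = 1,000,000 characters, the O(N/M) best case works out to roughly 1,000,000 / 1,000 = 1,000 character positions inspected, whereas the old O(NM) worst case could approach 10^9 comparisons.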